xen-devel.lists.xenproject.org archive mirror
* [PATCH v6 for Xen 4.7 0/4] Enable per-VCPU parameter settings for RTDS scheduler
@ 2016-03-06 17:55 Chong Li
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 1/4] xen: enable " Chong Li
                   ` (3 more replies)
  0 siblings, 4 replies; 51+ messages in thread
From: Chong Li @ 2016-03-06 17:55 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, george.dunlap, dario.faggioli, ian.jackson,
	ian.campbell, mengxu, jbeulich, lichong659, dgolomb

[Goal]
The current xl sched-rtds tool can only set all VCPUs of a domain
to the same parameters, even though the scheduler supports VCPUs
with different parameters. This patchset enables the xl sched-rtds
tool to configure the VCPUs of a domain with different parameters.

These per-VCPU settings can be used in many scenarios. For example,
based on Dario's statement in our previous discussion
(http://lists.xen.org/archives/html/xen-devel/2014-09/msg00423.html),
if two real-time applications with different timing requirements
run in a multi-VCPU guest domain, it is beneficial to pin them to
two separate VCPUs with different scheduling parameters.

This patchset implements a wanted and planned feature for the RTDS
scheduler (http://wiki.xenproject.org/wiki/RTDS-Based-Scheduler) in
Xen 4.7. The interface design of the xl sched-rtds tool is based on
Meng's previous discussion with Dario, George and Wei
(http://lists.xen.org/archives/html/xen-devel/2015-02/msg02606.html).
Basically, there are three main changes:

1) in xl, we create an array that records all VCPUs whose parameters
are about to be modified or output.

2) in libxl, we receive the array and call different xc functions to 
handle it.

3) in xen and libxc, we use
XEN_DOMCTL_SCHEDOP_getvcpuinfo/putvcpuinfo (introduced by this
patchset) as the hypercalls for per-VCPU operations (get/set methods).


[Usage]
With this patchset in use, the xl sched-rtds tool can:

1) show the budget and period of each VCPU of each domain,
by using the "xl sched-rtds -v all" command. For example:

# xl sched-rtds -v all
Cpupool Pool-0: sched=RTDS
Name                                ID VCPU    Period    Budget
Domain-0                             0    0     10000      4000
vm1                                  1    0       300       150
vm1                                  1    1       400       200
vm1                                  1    2     10000      4000
vm1                                  1    3      1000       500
vm2                                  2    0     10000      4000
vm2                                  2    1     10000      4000

Using "xl sched-rtds" outputs the default scheduling parameters
for each domain. For example:

# xl sched-rtds
Cpupool Pool-0: sched=RTDS
Name                                ID    Period    Budget
Domain-0                             0     10000      4000
vm1                                  1     10000      4000
vm2                                  2     10000      4000


2) show the budget and period of each VCPU of a specific domain,
by using, e.g., the "xl sched-rtds -d vm1 -v all" command. For
example:

# xl sched-rtds -d vm1 -v all
Name                                ID VCPU    Period    Budget
vm1                                  1    0       300       150
vm1                                  1    1       400       200
vm1                                  1    2     10000      4000
vm1                                  1    3      1000       500

To show a subset of the parameters of the VCPUs of a specific domain,
use, e.g., the "xl sched-rtds -d vm1 -v 0 -v 3" command.
The output would be:

# xl sched-rtds -d vm1 -v 0 -v 3
Name                                ID VCPU    Period    Budget
vm1                                  1    0       300       150
vm1                                  1    3      1000       500

Using a command such as "xl sched-rtds -d vm1" outputs the default
scheduling parameters of vm1. For example:

# xl sched-rtds -d vm1
Name                                ID    Period    Budget
vm1                                  1     10000      4000


3) set the budget and period of multiple VCPUs of a
specific domain with a single command,
e.g., "xl sched-rtds -d vm1 -v 0 -p 100 -b 50 -v 3 -p 300 -b 150".

All VCPUs can also be set to the same parameters with one command,
e.g., "xl sched-rtds -d vm1 -v all -p 500 -b 250".


---
Previous conclusion:
On PATCH v4, the concerns were about the usage of hypercall_preempt_check
and the printing of the warning message (both in xen). These issues are
addressed in this version.


CC: <dario.faggioli@citrix.com>
CC: <george.dunlap@eu.citrix.com>
CC: <dgolomb@seas.upenn.edu>
CC: <mengxu@cis.upenn.edu>
CC: <jbeulich@suse.com>
CC: <wei.liu2@citrix.com>
CC: <lichong659@gmail.com>
CC: <ian.jackson@eu.citrix.com>
CC: <ian.campbell@eu.citrix.com>



Chong Li (4):
  xen: enable per-VCPU parameter settings for RTDS scheduler
  libxc: enable per-VCPU parameter settings for RTDS scheduler
  libxl: enable per-VCPU parameter settings for RTDS scheduler
  xl: enable per-VCPU parameter settings for RTDS scheduler

 docs/man/xl.pod.1             |   4 +
 tools/libxc/include/xenctrl.h |  16 ++-
 tools/libxc/xc_rt.c           |  68 +++++++++
 tools/libxl/libxl.c           | 326 +++++++++++++++++++++++++++++++++++++++---
 tools/libxl/libxl.h           |  37 +++++
 tools/libxl/libxl_types.idl   |  14 ++
 tools/libxl/xl_cmdimpl.c      | 292 ++++++++++++++++++++++++++++++++-----
 tools/libxl/xl_cmdtable.c     |  10 +-
 xen/common/sched_credit.c     |   4 +
 xen/common/sched_credit2.c    |   4 +
 xen/common/sched_rt.c         | 130 +++++++++++++++--
 xen/common/schedule.c         |  15 +-
 xen/include/public/domctl.h   |  59 ++++++--
 13 files changed, 885 insertions(+), 94 deletions(-)

-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 [PATCH v6 for Xen 4.7 0/4] Enable per-VCPU parameter settings for RTDS scheduler Chong Li
@ 2016-03-06 17:55 ` Chong Li
  2016-03-07 12:59   ` Jan Beulich
  2016-03-08 19:09   ` Wei Liu
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 2/4] libxc: " Chong Li
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 51+ messages in thread
From: Chong Li @ 2016-03-06 17:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Chong Li, Sisu Xi, george.dunlap, dario.faggioli, Meng Xu,
	jbeulich, lichong659, dgolomb

Add XEN_DOMCTL_SCHEDOP_getvcpuinfo and _putvcpuinfo hypercalls
to independently get and set the scheduling parameters of each
vCPU of a domain.

Signed-off-by: Chong Li <chong.li@wustl.edu>
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Signed-off-by: Sisu Xi <xisisu@gmail.com>

---
Changes on PATCH v5:
1) When processing XEN_DOMCTL_SCHEDOP_get/putvcpuinfo, we do the
preemption check in a way similar to XEN_SYSCTL_pcitopoinfo.

Changes on PATCH v4:
1) Add uint32_t vcpu_index to struct xen_domctl_scheduler_op.
When processing XEN_DOMCTL_SCHEDOP_get/putvcpuinfo, we call
hypercall_preempt_check in case the current hypercall lasts
too long. If we decide to preempt the current hypercall, we record
the index of the most recently finished vcpu in the vcpu_index of
struct xen_domctl_scheduler_op. So when we resume the hypercall after
preemption, we start processing from the position specified by
vcpu_index, and don't need to repeat the work that has already been
done in the hypercall before the preemption.
(This design is based on do_grant_table_op() in grant_table.c)

2) Coding style changes

Changes on PATCH v3:
1) Remove struct xen_domctl_schedparam_t.

2) Change struct xen_domctl_scheduler_op.

3) Check if period/budget is within a validated range

Changes on PATCH v2:
1) Change struct xen_domctl_scheduler_op, for transferring per-vcpu parameters
between libxc and hypervisor.

2) Handler of XEN_DOMCTL_SCHEDOP_getinfo now just returns the default budget and
period values of RTDS scheduler.

3) Handler of XEN_DOMCTL_SCHEDOP_getvcpuinfo can now return an
arbitrary subset of the parameters of the VCPUs of a specific domain

CC: <dario.faggioli@citrix.com>
CC: <george.dunlap@eu.citrix.com>
CC: <dgolomb@seas.upenn.edu>
CC: <mengxu@cis.upenn.edu>
CC: <jbeulich@suse.com>
CC: <lichong659@gmail.com>
---
 xen/common/sched_credit.c   |   4 ++
 xen/common/sched_credit2.c  |   4 ++
 xen/common/sched_rt.c       | 130 +++++++++++++++++++++++++++++++++++++++-----
 xen/common/schedule.c       |  15 ++++-
 xen/include/public/domctl.h |  59 ++++++++++++++++----
 5 files changed, 182 insertions(+), 30 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 0dce790..455c684 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1054,6 +1054,10 @@ csched_dom_cntl(
      * lock. Runq lock not needed anywhere in here. */
+    if ( op->cmd == XEN_DOMCTL_SCHEDOP_putvcpuinfo ||
+         op->cmd == XEN_DOMCTL_SCHEDOP_getvcpuinfo )
+        return -EINVAL;
+
     spin_lock_irqsave(&prv->lock, flags);
 
     if ( op->cmd == XEN_DOMCTL_SCHEDOP_getinfo )
     {
         op->u.credit.weight = sdom->weight;
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 3c49ffa..c3049a0 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -1421,6 +1421,10 @@ csched2_dom_cntl(
      * runq lock to update csvcs. */
+    if ( op->cmd == XEN_DOMCTL_SCHEDOP_putvcpuinfo ||
+         op->cmd == XEN_DOMCTL_SCHEDOP_getvcpuinfo )
+        return -EINVAL;
+
     spin_lock_irqsave(&prv->lock, flags);
 
     if ( op->cmd == XEN_DOMCTL_SCHEDOP_getinfo )
     {
         op->u.credit2.weight = sdom->weight;
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 3f1d047..4fcbf40 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -86,6 +86,22 @@
 #define RTDS_DEFAULT_PERIOD     (MICROSECS(10000))
 #define RTDS_DEFAULT_BUDGET     (MICROSECS(4000))
 
+/*
+ * Max period: max delta of time type, because period is added to the time
+ * a vcpu activates, so this must not overflow.
+ * Min period: 10 us, considering the scheduling overhead (when period is
+ * too low, scheduling is invoked too frequently, causing high overhead).
+ */
+#define RTDS_MAX_PERIOD     (STIME_DELTA_MAX)
+#define RTDS_MIN_PERIOD     (MICROSECS(10))
+
+/*
+ * Min budget: 10 us, considering the scheduling overhead (when budget is
+ * consumed too fast, scheduling is invoked too frequently, causing
+ * high overhead).
+ */
+#define RTDS_MIN_BUDGET     (MICROSECS(10))
+
 #define UPDATE_LIMIT_SHIFT      10
 #define MAX_SCHEDULE            (MILLISECS(1))
 /*
@@ -1130,23 +1146,17 @@ rt_dom_cntl(
     unsigned long flags;
     int rc = 0;
 
+    xen_domctl_schedparam_vcpu_t local_sched;
+    s_time_t period, budget;
+    uint32_t index = 0;
+
     switch ( op->cmd )
     {
-    case XEN_DOMCTL_SCHEDOP_getinfo:
-        if ( d->max_vcpus > 0 )
-        {
-            spin_lock_irqsave(&prv->lock, flags);
-            svc = rt_vcpu(d->vcpu[0]);
-            op->u.rtds.period = svc->period / MICROSECS(1);
-            op->u.rtds.budget = svc->budget / MICROSECS(1);
-            spin_unlock_irqrestore(&prv->lock, flags);
-        }
-        else
-        {
-            /* If we don't have vcpus yet, let's just return the defaults. */
-            op->u.rtds.period = RTDS_DEFAULT_PERIOD;
-            op->u.rtds.budget = RTDS_DEFAULT_BUDGET;
-        }
+    case XEN_DOMCTL_SCHEDOP_getinfo: /* return the default parameters */
+        spin_lock_irqsave(&prv->lock, flags);
+        op->u.rtds.period = RTDS_DEFAULT_PERIOD / MICROSECS(1);
+        op->u.rtds.budget = RTDS_DEFAULT_BUDGET / MICROSECS(1);
+        spin_unlock_irqrestore(&prv->lock, flags);
         break;
     case XEN_DOMCTL_SCHEDOP_putinfo:
         if ( op->u.rtds.period == 0 || op->u.rtds.budget == 0 )
@@ -1163,6 +1173,96 @@ rt_dom_cntl(
         }
         spin_unlock_irqrestore(&prv->lock, flags);
         break;
+    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
+        if ( guest_handle_is_null(op->u.v.vcpus) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+        while ( index < op->u.v.nr_vcpus )
+        {
+            if ( copy_from_guest_offset(&local_sched,
+                          op->u.v.vcpus, index, 1) )
+            {
+                rc = -EFAULT;
+                break;
+            }
+            if ( local_sched.vcpuid >= d->max_vcpus ||
+                          d->vcpu[local_sched.vcpuid] == NULL )
+            {
+                rc = -EINVAL;
+                break;
+            }
+
+            spin_lock_irqsave(&prv->lock, flags);
+            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
+            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
+            local_sched.s.rtds.period = svc->period / MICROSECS(1);
+            spin_unlock_irqrestore(&prv->lock, flags);
+
+            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
+                    &local_sched, 1) )
+            {
+                rc = -EFAULT;
+                break;
+            }
+            if ( (++index > 0x3f) && hypercall_preempt_check() )
+                break;
+        }
+
+        if ( !rc && (op->u.v.nr_vcpus != index) )
+            op->u.v.nr_vcpus = index;
+        break;
+    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
+        if ( guest_handle_is_null(op->u.v.vcpus) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+        while ( index < op->u.v.nr_vcpus )
+        {
+            if ( copy_from_guest_offset(&local_sched,
+                          op->u.v.vcpus, index, 1) )
+            {
+                rc = -EFAULT;
+                break;
+            }
+            if ( local_sched.vcpuid >= d->max_vcpus ||
+                          d->vcpu[local_sched.vcpuid] == NULL )
+            {
+                rc = -EINVAL;
+                break;
+            }
+
+            period = MICROSECS(local_sched.s.rtds.period);
+            budget = MICROSECS(local_sched.s.rtds.budget);
+            if ( period > RTDS_MAX_PERIOD || budget < RTDS_MIN_BUDGET ||
+                          budget > period || period < RTDS_MIN_PERIOD )
+            {
+                rc = -EINVAL;
+                break;
+            }
+
+            /*
+             * We accept period/budget less than 100 us, but will warn users about
+             * the large scheduling overhead due to it
+             */
+            if ( period < MICROSECS(100) || budget < MICROSECS(100) )
+                printk("Warning: period or budget set to less than 100us.\n"
+                       "This may result in high scheduling overhead.\n");
+
+            spin_lock_irqsave(&prv->lock, flags);
+            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
+            svc->period = period;
+            svc->budget = budget;
+            spin_unlock_irqrestore(&prv->lock, flags);
+
+            if ( (++index > 0x3f) && hypercall_preempt_check() )
+                break;
+        }
+        if ( !rc && (op->u.v.nr_vcpus != index) )
+            op->u.v.nr_vcpus = index;
+        break;
     }
 
     return rc;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c195129..f4a4032 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1148,10 +1148,19 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
     if ( ret )
         return ret;
 
-    if ( (op->sched_id != DOM2OP(d)->sched_id) ||
-         ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
-          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
+    if ( op->sched_id != DOM2OP(d)->sched_id )
         return -EINVAL;
+    else
+        switch ( op->cmd )
+        {
+        case XEN_DOMCTL_SCHEDOP_putinfo:
+        case XEN_DOMCTL_SCHEDOP_getinfo:
+        case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
+        case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
+            break;
+        default:
+            return -EINVAL;
+        }
 
     /* NB: the pluggable scheduler code needs to take care
      * of locking by itself. */
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 7a56b3f..b9975f6 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -338,24 +338,59 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
 #define XEN_SCHEDULER_ARINC653 7
 #define XEN_SCHEDULER_RTDS     8
 
-/* Set or get info? */
+typedef struct xen_domctl_sched_credit {
+    uint16_t weight;
+    uint16_t cap;
+} xen_domctl_sched_credit_t;
+
+typedef struct xen_domctl_sched_credit2 {
+    uint16_t weight;
+} xen_domctl_sched_credit2_t;
+
+typedef struct xen_domctl_sched_rtds {
+    uint32_t period;
+    uint32_t budget;
+} xen_domctl_sched_rtds_t;
+
+typedef struct xen_domctl_schedparam_vcpu {
+    union {
+        xen_domctl_sched_credit_t credit;
+        xen_domctl_sched_credit2_t credit2;
+        xen_domctl_sched_rtds_t rtds;
+    } s;
+    uint16_t vcpuid;
+    uint16_t padding[3];
+} xen_domctl_schedparam_vcpu_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_schedparam_vcpu_t);
+
+/*
+ * Set or get info?
+ * For schedulers supporting per-vcpu settings (e.g., RTDS):
+ *  XEN_DOMCTL_SCHEDOP_putinfo sets params for all vcpus;
+ *  XEN_DOMCTL_SCHEDOP_getinfo gets default params;
+ *  XEN_DOMCTL_SCHEDOP_put(get)vcpuinfo sets (gets) params of vcpus;
+ *
+ * For schedulers not supporting per-vcpu settings:
+ *  XEN_DOMCTL_SCHEDOP_putinfo sets params for all vcpus;
+ *  XEN_DOMCTL_SCHEDOP_getinfo gets domain-wide params;
+ *  XEN_DOMCTL_SCHEDOP_put(get)vcpuinfo returns error;
+ */
 #define XEN_DOMCTL_SCHEDOP_putinfo 0
 #define XEN_DOMCTL_SCHEDOP_getinfo 1
+#define XEN_DOMCTL_SCHEDOP_putvcpuinfo 2
+#define XEN_DOMCTL_SCHEDOP_getvcpuinfo 3
 struct xen_domctl_scheduler_op {
     uint32_t sched_id;  /* XEN_SCHEDULER_* */
     uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
     union {
-        struct xen_domctl_sched_credit {
-            uint16_t weight;
-            uint16_t cap;
-        } credit;
-        struct xen_domctl_sched_credit2 {
-            uint16_t weight;
-        } credit2;
-        struct xen_domctl_sched_rtds {
-            uint32_t period;
-            uint32_t budget;
-        } rtds;
+        xen_domctl_sched_credit_t credit;
+        xen_domctl_sched_credit2_t credit2;
+        xen_domctl_sched_rtds_t rtds;
+        struct {
+            XEN_GUEST_HANDLE_64(xen_domctl_schedparam_vcpu_t) vcpus;
+            uint32_t nr_vcpus;
+            uint32_t padding;
+        } v;
     } u;
 };
 typedef struct xen_domctl_scheduler_op xen_domctl_scheduler_op_t;
-- 
1.9.1




* [PATCH v6 for Xen 4.7 2/4] libxc: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 [PATCH v6 for Xen 4.7 0/4] Enable per-VCPU parameter settings for RTDS scheduler Chong Li
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 1/4] xen: enable " Chong Li
@ 2016-03-06 17:55 ` Chong Li
  2016-03-08 19:09   ` Wei Liu
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 3/4] libxl: " Chong Li
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 4/4] xl: " Chong Li
  3 siblings, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-06 17:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	Meng Xu, lichong659, dgolomb

Add xc_sched_rtds_vcpu_get/set functions to interact with
Xen to get/set a domain's per-VCPU parameters.

Signed-off-by: Chong Li <chong.li@wustl.edu>
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Signed-off-by: Sisu Xi <xisisu@gmail.com>

---
Changes on PATCH v5:
1) In xc_sched_rtds_vcpu_get/set, re-issue the hypercall
if it is preempted.

Changes on PATCH v4:
1) Minor modifications on the function parameters.

Changes on PATCH v2:
1) Minor modifications due to the change of struct xen_domctl_scheduler_op.

CC: <dario.faggioli@citrix.com>
CC: <george.dunlap@eu.citrix.com>
CC: <dgolomb@seas.upenn.edu>
CC: <mengxu@cis.upenn.edu>
CC: <wei.liu2@citrix.com>
CC: <lichong659@gmail.com>
---
 tools/libxc/include/xenctrl.h | 16 +++++++---
 tools/libxc/xc_rt.c           | 68 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+), 4 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 01a6dda..9462271 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -888,11 +888,19 @@ int xc_sched_credit2_domain_get(xc_interface *xch,
                                struct xen_domctl_sched_credit2 *sdom);
 
 int xc_sched_rtds_domain_set(xc_interface *xch,
-                            uint32_t domid,
-                            struct xen_domctl_sched_rtds *sdom);
+                               uint32_t domid,
+                               struct xen_domctl_sched_rtds *sdom);
 int xc_sched_rtds_domain_get(xc_interface *xch,
-                            uint32_t domid,
-                            struct xen_domctl_sched_rtds *sdom);
+                               uint32_t domid,
+                               struct xen_domctl_sched_rtds *sdom);
+int xc_sched_rtds_vcpu_set(xc_interface *xch,
+                               uint32_t domid,
+                               struct xen_domctl_schedparam_vcpu *vcpus,
+                               uint32_t num_vcpus);
+int xc_sched_rtds_vcpu_get(xc_interface *xch,
+                               uint32_t domid,
+                               struct xen_domctl_schedparam_vcpu *vcpus,
+                               uint32_t num_vcpus);
 
 int
 xc_sched_arinc653_schedule_set(
diff --git a/tools/libxc/xc_rt.c b/tools/libxc/xc_rt.c
index d59e5ce..4be9624 100644
--- a/tools/libxc/xc_rt.c
+++ b/tools/libxc/xc_rt.c
@@ -62,3 +62,71 @@ int xc_sched_rtds_domain_get(xc_interface *xch,
 
     return rc;
 }
+
+int xc_sched_rtds_vcpu_set(xc_interface *xch,
+                           uint32_t domid,
+                           struct xen_domctl_schedparam_vcpu *vcpus,
+                           uint32_t num_vcpus)
+{
+    int rc = 0;
+    unsigned processed = 0;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(vcpus, sizeof(*vcpus) * num_vcpus,
+            XC_HYPERCALL_BUFFER_BOUNCE_IN);
+
+    if ( xc_hypercall_bounce_pre(xch, vcpus) )
+        return -1;
+
+    domctl.cmd = XEN_DOMCTL_scheduler_op;
+    domctl.domain = (domid_t) domid;
+    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_RTDS;
+    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_putvcpuinfo;
+
+    while ( processed < num_vcpus )
+    {
+        domctl.u.scheduler_op.u.v.nr_vcpus = num_vcpus - processed;
+        set_xen_guest_handle_offset(domctl.u.scheduler_op.u.v.vcpus, vcpus,
+                                    processed);
+        if ( (rc = do_domctl(xch, &domctl)) != 0 )
+            break;
+        processed += domctl.u.scheduler_op.u.v.nr_vcpus;
+    }
+
+    xc_hypercall_bounce_post(xch, vcpus);
+
+    return rc;
+}
+
+int xc_sched_rtds_vcpu_get(xc_interface *xch,
+                           uint32_t domid,
+                           struct xen_domctl_schedparam_vcpu *vcpus,
+                           uint32_t num_vcpus)
+{
+    int rc = 0;
+    unsigned processed = 0;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(vcpus, sizeof(*vcpus) * num_vcpus,
+            XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+
+    if ( xc_hypercall_bounce_pre(xch, vcpus) )
+        return -1;
+
+    domctl.cmd = XEN_DOMCTL_scheduler_op;
+    domctl.domain = (domid_t) domid;
+    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_RTDS;
+    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_getvcpuinfo;
+
+    while ( processed < num_vcpus )
+    {
+        domctl.u.scheduler_op.u.v.nr_vcpus = num_vcpus - processed;
+        set_xen_guest_handle_offset(domctl.u.scheduler_op.u.v.vcpus, vcpus,
+                                    processed);
+        if ( (rc = do_domctl(xch, &domctl)) != 0 )
+            break;
+        processed += domctl.u.scheduler_op.u.v.nr_vcpus;
+    }
+
+    xc_hypercall_bounce_post(xch, vcpus);
+
+    return rc;
+}
-- 
1.9.1




* [PATCH v6 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 [PATCH v6 for Xen 4.7 0/4] Enable per-VCPU parameter settings for RTDS scheduler Chong Li
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 1/4] xen: enable " Chong Li
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 2/4] libxc: " Chong Li
@ 2016-03-06 17:55 ` Chong Li
  2016-03-08 19:12   ` Wei Liu
  2016-03-09 17:09   ` Dario Faggioli
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 4/4] xl: " Chong Li
  3 siblings, 2 replies; 51+ messages in thread
From: Chong Li @ 2016-03-06 17:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	ian.jackson, ian.campbell, Meng Xu, lichong659, dgolomb

Add libxl_vcpu_sched_params_get/set and sched_rtds_vcpu_get/set
functions to support per-VCPU settings.

Signed-off-by: Chong Li <chong.li@wustl.edu>
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Signed-off-by: Sisu Xi <xisisu@gmail.com>

---
Changes on PATCH v5:
1) Add a separate function, sched_rtds_vcpus_params_set_all(), to set
the parameters of all vcpus of a domain.

2) Add libxl_vcpu_sched_params_set_all() to invoke the above function.

3) Coding style changes. (I didn't find the indentation rules for function
calls with long parameters (still 4 spaces?), so I just imitated the
indentation style of some existing functions)

Changes on PATCH v4:
1) Coding style changes

Changes on PATCH v3:
1) Add sanity check on vcpuid

2) Add comments on per-domain and per-vcpu functions for libxl
users

Changes on PATCH v2:
1) New data structure (libxl_vcpu_sched_params and libxl_sched_params)
to help per-VCPU settings.

2) sched_rtds_vcpu_get can now return an arbitrary subset of the
parameters of the VCPUs of a specific domain.

CC: <dario.faggioli@citrix.com>
CC: <george.dunlap@eu.citrix.com>
CC: <dgolomb@seas.upenn.edu>
CC: <mengxu@cis.upenn.edu>
CC: <wei.liu2@citrix.com>
CC: <lichong659@gmail.com>
CC: <ian.jackson@eu.citrix.com>
CC: <ian.campbell@eu.citrix.com>
---
 tools/libxl/libxl.c         | 326 ++++++++++++++++++++++++++++++++++++++++----
 tools/libxl/libxl.h         |  37 +++++
 tools/libxl/libxl_types.idl |  14 ++
 3 files changed, 354 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index bd3aac8..4532e86 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5770,6 +5770,207 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
     return 0;
 }
 
+static int sched_rtds_validate_params(libxl__gc *gc, int period,
+                                    int budget, uint32_t *sdom_period,
+                                    uint32_t *sdom_budget)
+{
+    int rc = 0;
+    if (period != LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT) {
+        if (period < 1) {
+            LOG(ERROR, "VCPU period is out of range, "
+                       "valid values are larger than or equal to 1");
+            rc = ERROR_INVAL; /* error scheduling parameter */
+            goto out;
+        }
+        *sdom_period = period;
+    }
+
+    if (budget != LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT) {
+        if (budget < 1) {
+            LOG(ERROR, "VCPU budget is not set or out of range, "
+                       "valid values are larger than or equal to 1");
+            rc = ERROR_INVAL;
+            goto out;
+        }
+        *sdom_budget = budget;
+    }
+
+    if (*sdom_budget > *sdom_period) {
+        LOG(ERROR, "VCPU budget must be smaller than "
+                   "or equal to VCPU period");
+        rc = ERROR_INVAL;
+    }
+out:
+    return rc;
+}
+
+/* Get the RTDS scheduling parameters of vcpu(s) */
+static int sched_rtds_vcpu_get(libxl__gc *gc, uint32_t domid,
+                               libxl_vcpu_sched_params *scinfo)
+{
+    uint32_t num_vcpus;
+    int i, r, rc;
+    xc_dominfo_t info;
+    struct xen_domctl_schedparam_vcpu *vcpus;
+
+    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    if (r < 0) {
+        LOGE(ERROR, "getting domain info");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    num_vcpus = scinfo->num_vcpus ? scinfo->num_vcpus :
+                                info.max_vcpu_id + 1;
+
+    GCNEW_ARRAY(vcpus, num_vcpus);
+
+    if (scinfo->num_vcpus > 0) {
+        for (i = 0; i < num_vcpus; i++) {
+            if (scinfo->vcpus[i].vcpuid < 0 ||
+                scinfo->vcpus[i].vcpuid > info.max_vcpu_id) {
+                LOG(ERROR, "VCPU index is out of range, "
+                           "valid values are within range from 0 to %d",
+                           info.max_vcpu_id);
+                rc = ERROR_INVAL;
+                goto out;
+            }
+            vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
+        }
+    } else
+        for (i = 0; i < num_vcpus; i++)
+            vcpus[i].vcpuid = i;
+
+    r = xc_sched_rtds_vcpu_get(CTX->xch, domid, vcpus, num_vcpus);
+    if (r != 0) {
+        LOGE(ERROR, "getting vcpu sched rtds");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+    scinfo->sched = LIBXL_SCHEDULER_RTDS;
+    if (scinfo->num_vcpus == 0) {
+        scinfo->num_vcpus = num_vcpus;
+        scinfo->vcpus = libxl__calloc(NOGC, num_vcpus,
+                                sizeof(libxl_sched_params));
+    }
+    for (i = 0; i < num_vcpus; i++) {
+        scinfo->vcpus[i].period = vcpus[i].s.rtds.period;
+        scinfo->vcpus[i].budget = vcpus[i].s.rtds.budget;
+        scinfo->vcpus[i].vcpuid = vcpus[i].vcpuid;
+    }
+    return r;
+out:
+    return rc;
+}
+
+/* Set the RTDS scheduling parameters of vcpu(s) */
+static int sched_rtds_vcpus_params_set(libxl__gc *gc, uint32_t domid,
+                               const libxl_vcpu_sched_params *scinfo)
+{
+    int r, rc;
+    int i;
+    uint16_t max_vcpuid;
+    xc_dominfo_t info;
+    struct xen_domctl_schedparam_vcpu *vcpus;
+    uint32_t num_vcpus;
+
+    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    if (r < 0) {
+        LOGE(ERROR, "getting domain info");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+    max_vcpuid = info.max_vcpu_id;
+
+    if (scinfo->num_vcpus > 0) {
+        num_vcpus = scinfo->num_vcpus;
+        GCNEW_ARRAY(vcpus, num_vcpus);
+        for (i = 0; i < num_vcpus; i++) {
+            if (scinfo->vcpus[i].vcpuid < 0 ||
+                scinfo->vcpus[i].vcpuid > max_vcpuid) {
+                LOG(ERROR, "VCPU index is out of range, "
+                           "valid values are within range from 0 to %d",
+                           max_vcpuid);
+                rc = ERROR_INVAL;
+                goto out;
+            }
+            vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
+
+            rc = sched_rtds_validate_params(gc,
+                    scinfo->vcpus[i].period, scinfo->vcpus[i].budget,
+                    &vcpus[i].s.rtds.period, &vcpus[i].s.rtds.budget);
+            if (rc) {
+                rc = ERROR_INVAL;
+                goto out;
+            }
+        }
+    } else {
+        rc = ERROR_INVAL;
+        goto out;
+    }
+
+    r = xc_sched_rtds_vcpu_set(CTX->xch, domid,
+        vcpus, num_vcpus);
+    if (r != 0) {
+        LOGE(ERROR, "setting vcpu sched rtds");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+    return r;
+out:
+    return rc;
+}
+
+/* Set the RTDS scheduling parameters of all vcpus of a domain */
+static int sched_rtds_vcpus_params_set_all(libxl__gc *gc, uint32_t domid,
+                               const libxl_vcpu_sched_params *scinfo)
+{
+    int r, rc;
+    int i;
+    uint16_t max_vcpuid;
+    xc_dominfo_t info;
+    struct xen_domctl_schedparam_vcpu *vcpus;
+    uint32_t num_vcpus;
+
+    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    if (r < 0) {
+        LOGE(ERROR, "getting domain info");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+    max_vcpuid = info.max_vcpu_id;
+
+    if (scinfo->num_vcpus == 1) {
+        num_vcpus = max_vcpuid + 1;
+        GCNEW_ARRAY(vcpus, num_vcpus);
+        if (sched_rtds_validate_params(gc, scinfo->vcpus[0].period,
+            scinfo->vcpus[0].budget, &vcpus[0].s.rtds.period,
+            &vcpus[0].s.rtds.budget)) {
+            rc = ERROR_INVAL;
+            goto out;
+        }
+        for (i = 0; i < num_vcpus; i++) {
+            vcpus[i].vcpuid = i;
+            vcpus[i].s.rtds.period = scinfo->vcpus[0].period;
+            vcpus[i].s.rtds.budget = scinfo->vcpus[0].budget;
+        }
+    } else {
+        rc = ERROR_INVAL;
+        goto out;
+    }
+
+    r = xc_sched_rtds_vcpu_set(CTX->xch, domid,
+        vcpus, num_vcpus);
+    if (r != 0) {
+        LOGE(ERROR, "setting vcpu sched rtds");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+    return r;
+out:
+    return rc;
+}
+
 static int sched_rtds_domain_get(libxl__gc *gc, uint32_t domid,
                                libxl_domain_sched_params *scinfo)
 {
@@ -5802,30 +6003,9 @@ static int sched_rtds_domain_set(libxl__gc *gc, uint32_t domid,
         LOGE(ERROR, "getting domain sched rtds");
         return ERROR_FAIL;
     }
-
-    if (scinfo->period != LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT) {
-        if (scinfo->period < 1) {
-            LOG(ERROR, "VCPU period is not set or out of range, "
-                       "valid values are larger than 1");
-            return ERROR_INVAL;
-        }
-        sdom.period = scinfo->period;
-    }
-
-    if (scinfo->budget != LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT) {
-        if (scinfo->budget < 1) {
-            LOG(ERROR, "VCPU budget is not set or out of range, "
-                       "valid values are larger than 1");
-            return ERROR_INVAL;
-        }
-        sdom.budget = scinfo->budget;
-    }
-
-    if (sdom.budget > sdom.period) {
-        LOG(ERROR, "VCPU budget is larger than VCPU period, "
-                   "VCPU budget should be no larger than VCPU period");
+    if (sched_rtds_validate_params(gc, scinfo->period, scinfo->budget,
+                             &sdom.period, &sdom.budget))
         return ERROR_INVAL;
-    }
 
     rc = xc_sched_rtds_domain_set(CTX->xch, domid, &sdom);
     if (rc < 0) {
@@ -5873,6 +6053,74 @@ int libxl_domain_sched_params_set(libxl_ctx *ctx, uint32_t domid,
     return ret;
 }
 
+int libxl_vcpu_sched_params_set(libxl_ctx *ctx, uint32_t domid,
+                                  const libxl_vcpu_sched_params *scinfo)
+{
+    GC_INIT(ctx);
+    libxl_scheduler sched = scinfo->sched;
+    int rc;
+
+    if (sched == LIBXL_SCHEDULER_UNKNOWN)
+        sched = libxl__domain_scheduler(gc, domid);
+
+    switch (sched) {
+    case LIBXL_SCHEDULER_SEDF:
+        LOG(ERROR, "SEDF scheduler no longer available");
+        rc = ERROR_FEATURE_REMOVED;
+        break;
+    case LIBXL_SCHEDULER_CREDIT:
+    case LIBXL_SCHEDULER_CREDIT2:
+    case LIBXL_SCHEDULER_ARINC653:
+        LOG(ERROR, "per-VCPU parameter setting not supported for this scheduler");
+        rc = ERROR_INVAL;
+        break;
+    case LIBXL_SCHEDULER_RTDS:
+        rc = sched_rtds_vcpus_params_set(gc, domid, scinfo);
+        break;
+    default:
+        LOG(ERROR, "Unknown scheduler");
+        rc = ERROR_INVAL;
+        break;
+    }
+
+    GC_FREE;
+    return rc;
+}
+
+int libxl_vcpu_sched_params_set_all(libxl_ctx *ctx, uint32_t domid,
+                                  const libxl_vcpu_sched_params *scinfo)
+{
+    GC_INIT(ctx);
+    libxl_scheduler sched = scinfo->sched;
+    int rc;
+
+    if (sched == LIBXL_SCHEDULER_UNKNOWN)
+        sched = libxl__domain_scheduler(gc, domid);
+
+    switch (sched) {
+    case LIBXL_SCHEDULER_SEDF:
+        LOG(ERROR, "SEDF scheduler no longer available");
+        rc = ERROR_FEATURE_REMOVED;
+        break;
+    case LIBXL_SCHEDULER_CREDIT:
+    case LIBXL_SCHEDULER_CREDIT2:
+    case LIBXL_SCHEDULER_ARINC653:
+        LOG(ERROR, "per-VCPU parameter setting not supported for this scheduler");
+        rc = ERROR_INVAL;
+        break;
+    case LIBXL_SCHEDULER_RTDS:
+        rc = sched_rtds_vcpus_params_set_all(gc, domid, scinfo);
+        break;
+    default:
+        LOG(ERROR, "Unknown scheduler");
+        rc = ERROR_INVAL;
+        break;
+    }
+
+    GC_FREE;
+    return rc;
+}
+
 int libxl_domain_sched_params_get(libxl_ctx *ctx, uint32_t domid,
                                   libxl_domain_sched_params *scinfo)
 {
@@ -5907,6 +6155,38 @@ int libxl_domain_sched_params_get(libxl_ctx *ctx, uint32_t domid,
     return ret;
 }
 
+int libxl_vcpu_sched_params_get(libxl_ctx *ctx, uint32_t domid,
+                                  libxl_vcpu_sched_params *scinfo)
+{
+    GC_INIT(ctx);
+    int rc;
+
+    scinfo->sched = libxl__domain_scheduler(gc, domid);
+
+    switch (scinfo->sched) {
+    case LIBXL_SCHEDULER_SEDF:
+        LOG(ERROR, "SEDF scheduler is no longer available");
+        rc = ERROR_FEATURE_REMOVED;
+        break;
+    case LIBXL_SCHEDULER_CREDIT:
+    case LIBXL_SCHEDULER_CREDIT2:
+    case LIBXL_SCHEDULER_ARINC653:
+        LOG(ERROR, "per-VCPU parameter retrieval not supported for this scheduler");
+        rc = ERROR_INVAL;
+        break;
+    case LIBXL_SCHEDULER_RTDS:
+        rc = sched_rtds_vcpu_get(gc, domid, scinfo);
+        break;
+    default:
+        LOG(ERROR, "Unknown scheduler");
+        rc = ERROR_INVAL;
+        break;
+    }
+
+    GC_FREE;
+    return rc;
+}
+
 static int libxl__domain_s3_resume(libxl__gc *gc, int domid)
 {
     int rc = 0;
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 6b73848..f1d53d8 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -206,6 +206,17 @@
 #define LIBXL_HAVE_DEVICE_MODEL_USER 1
 
 /*
+ * libxl_vcpu_sched_params is used to store per-vcpu params.
+ */
+#define LIBXL_HAVE_VCPU_SCHED_PARAMS 1
+
+/*
+ * LIBXL_HAVE_SCHED_RTDS_VCPU_PARAMS indicates RTDS scheduler
+ * now supports per-vcpu settings.
+ */
+#define LIBXL_HAVE_SCHED_RTDS_VCPU_PARAMS 1
+
+/*
  * libxl_domain_build_info has the arm.gic_version field.
  */
 #define LIBXL_HAVE_BUILDINFO_ARM_GIC_VERSION 1
@@ -1647,11 +1658,37 @@ int libxl_sched_credit_params_set(libxl_ctx *ctx, uint32_t poolid,
 #define LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT -1
 #define LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT    -1
 
+/* Per-VCPU parameters */
+#define LIBXL_SCHED_PARAM_VCPU_INDEX_DEFAULT   -1
+
+/* Get the per-domain scheduling parameters.
+ * For schedulers that support per-vcpu settings (e.g., RTDS),
+ * calling *_domain_get functions will get default scheduling
+ * parameters.
+ */
 int libxl_domain_sched_params_get(libxl_ctx *ctx, uint32_t domid,
                                   libxl_domain_sched_params *params);
+
+/* Set the per-domain scheduling parameters.
+ * For schedulers that support per-vcpu settings (e.g., RTDS),
+ * calling *_domain_set functions will set all vcpus with the same
+ * scheduling parameters.
+ */
 int libxl_domain_sched_params_set(libxl_ctx *ctx, uint32_t domid,
                                   const libxl_domain_sched_params *params);
 
+/* Get the per-vcpu scheduling parameters */
+int libxl_vcpu_sched_params_get(libxl_ctx *ctx, uint32_t domid,
+                                  libxl_vcpu_sched_params *params);
+
+/* Set the per-vcpu scheduling parameters */
+int libxl_vcpu_sched_params_set(libxl_ctx *ctx, uint32_t domid,
+                                  const libxl_vcpu_sched_params *params);
+
+/* Set the per-vcpu scheduling parameters of all vcpus of a domain */
+int libxl_vcpu_sched_params_set_all(libxl_ctx *ctx, uint32_t domid,
+                                  const libxl_vcpu_sched_params *params);
+
 int libxl_send_trigger(libxl_ctx *ctx, uint32_t domid,
                        libxl_trigger trigger, uint32_t vcpuid);
 int libxl_send_sysrq(libxl_ctx *ctx, uint32_t domid, char sysrq);
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index cf3730f..7487fc9 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -378,6 +378,20 @@ libxl_domain_restore_params = Struct("domain_restore_params", [
     ("stream_version", uint32, {'init_val': '1'}),
     ])
 
+libxl_sched_params = Struct("sched_params",[
+    ("vcpuid",       integer, {'init_val': 'LIBXL_SCHED_PARAM_VCPU_INDEX_DEFAULT'}),
+    ("weight",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_WEIGHT_DEFAULT'}),
+    ("cap",          integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_CAP_DEFAULT'}),
+    ("period",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT'}),
+    ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
+    ("budget",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT'}),
+    ])
+
+libxl_vcpu_sched_params = Struct("vcpu_sched_params",[
+    ("sched",        libxl_scheduler),
+    ("vcpus",        Array(libxl_sched_params, "num_vcpus")),
+    ])
+
 libxl_domain_sched_params = Struct("domain_sched_params",[
     ("sched",        libxl_scheduler),
     ("weight",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_WEIGHT_DEFAULT'}),
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 for Xen 4.7 4/4] xl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 [PATCH v6 for Xen 4.7 0/4] Enable per-VCPU parameter settings for RTDS scheduler Chong Li
                   ` (2 preceding siblings ...)
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 3/4] libxl: " Chong Li
@ 2016-03-06 17:55 ` Chong Li
  2016-03-08 19:12   ` Wei Liu
  2016-03-09 14:09   ` Wei Liu
  3 siblings, 2 replies; 51+ messages in thread
From: Chong Li @ 2016-03-06 17:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	Meng Xu, lichong659, dgolomb

Change main_sched_rtds and related output functions to support
per-VCPU settings.

Signed-off-by: Chong Li <chong.li@wustl.edu>
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Signed-off-by: Sisu Xi <xisisu@gmail.com>

---
Changes on PATCH v5:
1) Add sched_vcpu_set_all() for the case where all vcpus of a
domain need to be changed together.

Changes on PATCH v4:
1) Coding style changes

Changes on PATCH v3:
1) Support commands, e.g., "xl sched-rtds -d vm1" to output the
default scheduling parameters

Changes on PATCH v2:
1) Remove per-domain output functions for RTDS scheduler.

2) Users now use '-v all' to specify all VCPUs.

3) Support outputting a subset of the parameters of the VCPUs
of a specific domain.

4) When setting all VCPUs with the same parameters (by only one
command), no per-domain function is invoked.
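
For reference, the command syntax these changes enable looks like the
following (the domain name and parameter values are illustrative; this
requires a Xen host running the RTDS scheduler, so it is a usage sketch,
not output from a real session):

```sh
# Get the parameters of vcpus 1 and 3 of domain vm1
xl sched-rtds -d vm1 -v 1 -v 3

# Set vcpu 2 of vm1 to a 20000us period and a 10000us budget
xl sched-rtds -d vm1 -v 2 -p 20000 -b 10000

# Set all vcpus of vm1 to the same parameters in one command
# (no per-domain function is invoked)
xl sched-rtds -d vm1 -v all -p 10000 -b 5000

# Output the default scheduling parameters of vm1
xl sched-rtds -d vm1
```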

CC: <dario.faggioli@citrix.com>
CC: <george.dunlap@eu.citrix.com>
CC: <dgolomb@seas.upenn.edu>
CC: <mengxu@cis.upenn.edu>
CC: <wei.liu2@citrix.com>
CC: <lichong659@gmail.com>
---
 docs/man/xl.pod.1         |   4 +
 tools/libxl/xl_cmdimpl.c  | 292 ++++++++++++++++++++++++++++++++++++++++------
 tools/libxl/xl_cmdtable.c |  10 +-
 3 files changed, 269 insertions(+), 37 deletions(-)

diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index 4279c7c..f9ff917 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -1051,6 +1051,10 @@ B<OPTIONS>
 Specify domain for which scheduler parameters are to be modified or retrieved.
 Mandatory for modifying scheduler parameters.
 
+=item B<-v VCPUID/all>, B<--vcpuid=VCPUID/all>
+
+Specify vcpu for which scheduler parameters are to be modified or retrieved.
+
 =item B<-p PERIOD>, B<--period=PERIOD>
 
 Period of time, in microseconds, over which to replenish the budget.
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 2b6371d..7d5620f 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -5823,6 +5823,52 @@ static int sched_domain_set(int domid, const libxl_domain_sched_params *scinfo)
     return 0;
 }
 
+static int sched_vcpu_get(libxl_scheduler sched, int domid,
+                                  libxl_vcpu_sched_params *scinfo)
+{
+    int rc;
+
+    rc = libxl_vcpu_sched_params_get(ctx, domid, scinfo);
+    if (rc) {
+        fprintf(stderr, "libxl_vcpu_sched_params_get failed.\n");
+        exit(-1);
+    }
+    if (scinfo->sched != sched) {
+        fprintf(stderr, "libxl_vcpu_sched_params_get returned %s not %s.\n",
+                libxl_scheduler_to_string(scinfo->sched),
+                libxl_scheduler_to_string(sched));
+        return 1;
+    }
+
+    return 0;
+}
+
+static int sched_vcpu_set(int domid, const libxl_vcpu_sched_params *scinfo)
+{
+    int rc;
+
+    rc = libxl_vcpu_sched_params_set(ctx, domid, scinfo);
+    if (rc) {
+        fprintf(stderr, "libxl_vcpu_sched_params_set failed.\n");
+        exit(-1);
+    }
+
+    return rc;
+}
+
+static int sched_vcpu_set_all(int domid, const libxl_vcpu_sched_params *scinfo)
+{
+    int rc;
+
+    rc = libxl_vcpu_sched_params_set_all(ctx, domid, scinfo);
+    if (rc) {
+        fprintf(stderr, "libxl_vcpu_sched_params_set_all failed.\n");
+        exit(-1);
+    }
+
+    return rc;
+}
+
 static int sched_credit_params_set(int poolid, libxl_sched_credit_params *scinfo)
 {
     if (libxl_sched_credit_params_set(ctx, poolid, scinfo)) {
@@ -5942,6 +5988,38 @@ static int sched_rtds_domain_output(
     return 0;
 }
 
+static int sched_rtds_vcpu_output(
+                                  int domid, libxl_vcpu_sched_params *scinfo)
+{
+    char *domname;
+    int rc = 0;
+    int i;
+
+    if (domid < 0) {
+        printf("%-33s %4s %4s %9s %9s\n", "Name", "ID",
+                        "VCPU", "Period", "Budget");
+        return 0;
+    }
+
+    rc = sched_vcpu_get(LIBXL_SCHEDULER_RTDS, domid, scinfo);
+    if (rc)
+        goto out;
+
+    domname = libxl_domid_to_name(ctx, domid);
+    for (i = 0; i < scinfo->num_vcpus; i++) {
+        printf("%-33s %4d %4d %9"PRIu32" %9"PRIu32"\n",
+            domname,
+            domid,
+            scinfo->vcpus[i].vcpuid,
+            scinfo->vcpus[i].period,
+            scinfo->vcpus[i].budget);
+    }
+    free(domname);
+
+out:
+    return rc;
+}
+
 static int sched_rtds_pool_output(uint32_t poolid)
 {
     char *poolname;
@@ -6015,6 +6093,65 @@ static int sched_domain_output(libxl_scheduler sched, int (*output)(int),
     return 0;
 }
 
+static int sched_vcpu_output(libxl_scheduler sched,
+                                  int (*output)(int, libxl_vcpu_sched_params *),
+                                  int (*pooloutput)(uint32_t), const char *cpupool)
+{
+    libxl_dominfo *info;
+    libxl_cpupoolinfo *poolinfo = NULL;
+    uint32_t poolid;
+    int nb_domain, n_pools = 0, i, p;
+    int rc = 0;
+
+    if (cpupool) {
+        if (libxl_cpupool_qualifier_to_cpupoolid(ctx, cpupool, &poolid, NULL)
+            || !libxl_cpupoolid_is_valid(ctx, poolid)) {
+            fprintf(stderr, "unknown cpupool \'%s\'\n", cpupool);
+            return -ERROR_FAIL;
+        }
+    }
+
+    info = libxl_list_domain(ctx, &nb_domain);
+    if (!info) {
+        fprintf(stderr, "libxl_list_domain failed.\n");
+        return 1;
+    }
+    poolinfo = libxl_list_cpupool(ctx, &n_pools);
+    if (!poolinfo) {
+        fprintf(stderr, "error getting cpupool info\n");
+        libxl_dominfo_list_free(info, nb_domain);
+        return -ERROR_NOMEM;
+    }
+
+    for (p = 0; !rc && (p < n_pools); p++) {
+        if ((poolinfo[p].sched != sched) ||
+            (cpupool && (poolid != poolinfo[p].poolid)))
+            continue;
+
+        pooloutput(poolinfo[p].poolid);
+
+        libxl_vcpu_sched_params scinfo_out;
+        libxl_vcpu_sched_params_init(&scinfo_out);
+        output(-1, &scinfo_out);
+        libxl_vcpu_sched_params_dispose(&scinfo_out);
+        for (i = 0; i < nb_domain; i++) {
+            if (info[i].cpupool != poolinfo[p].poolid)
+                continue;
+            libxl_vcpu_sched_params scinfo;
+            libxl_vcpu_sched_params_init(&scinfo);
+            scinfo.num_vcpus = 0;
+            rc = output(info[i].domid, &scinfo);
+            libxl_vcpu_sched_params_dispose(&scinfo);
+            if (rc)
+                break;
+        }
+    }
+
+    libxl_cpupoolinfo_list_free(poolinfo, n_pools);
+    libxl_dominfo_list_free(info, nb_domain);
+    return 0;
+}
+
 /* 
  * <nothing>             : List all domain params and sched params from all pools
  * -d [domid]            : List domain params for domain
@@ -6222,77 +6359,166 @@ int main_sched_rtds(int argc, char **argv)
 {
     const char *dom = NULL;
     const char *cpupool = NULL;
-    int period = 0; /* period is in microsecond */
-    int budget = 0; /* budget is in microsecond */
+    int *vcpus = (int *)xmalloc(sizeof(int)); /* IDs of VCPUs that change */
+    int *periods = (int *)xmalloc(sizeof(int)); /* period is in microsecond */
+    int *budgets = (int *)xmalloc(sizeof(int)); /* budget is in microsecond */
+    int v_size = 1; /* size of vcpus array */
+    int p_size = 1; /* size of periods array */
+    int b_size = 1; /* size of budgets array */
+    int v_index = 0; /* index in vcpus array */
+    int p_index = 0; /* index in periods array */
+    int b_index = 0; /* index in budgets array */
     bool opt_p = false;
     bool opt_b = false;
-    int opt, rc;
+    bool opt_v = false;
+    bool opt_all = false; /* get/set parameters of all vcpus */
+    int opt, i;
+    int rc = 0;
     static struct option opts[] = {
         {"domain", 1, 0, 'd'},
         {"period", 1, 0, 'p'},
         {"budget", 1, 0, 'b'},
+        {"vcpuid", 1, 0, 'v'},
         {"cpupool", 1, 0, 'c'},
         COMMON_LONG_OPTS
     };
 
-    SWITCH_FOREACH_OPT(opt, "d:p:b:c:", opts, "sched-rtds", 0) {
+    SWITCH_FOREACH_OPT(opt, "d:p:b:v:c:", opts, "sched-rtds", 0) {
     case 'd':
         dom = optarg;
         break;
     case 'p':
-        period = strtol(optarg, NULL, 10);
+        if (p_index >= p_size) { /* periods array is full */
+            p_size *= 2;
+            periods = xrealloc(periods, p_size * sizeof(int));
+        }
+        periods[p_index++] = strtol(optarg, NULL, 10);
         opt_p = 1;
         break;
     case 'b':
-        budget = strtol(optarg, NULL, 10);
+        if (b_index >= b_size) { /* budgets array is full */
+            b_size *= 2;
+            budgets = xrealloc(budgets, b_size * sizeof(int));
+        }
+        budgets[b_index++] = strtol(optarg, NULL, 10);
         opt_b = 1;
         break;
+    case 'v':
+        if (!strcmp(optarg, "all")) { /* get or set all vcpus of a domain */
+            opt_all = 1;
+            break;
+        }
+        if (v_index >= v_size) { /* vcpus array is full */
+            v_size *= 2;
+            vcpus = xrealloc(vcpus, v_size * sizeof(int));
+        }
+        vcpus[v_index++] = strtol(optarg, NULL, 10);
+        opt_v = 1;
+        break;
     case 'c':
         cpupool = optarg;
         break;
     }
 
-    if (cpupool && (dom || opt_p || opt_b)) {
+    if (cpupool && (dom || opt_p || opt_b || opt_v || opt_all)) {
         fprintf(stderr, "Specifying a cpupool is not allowed with "
                 "other options.\n");
-        return EXIT_FAILURE;
+        rc = 1;
+        goto out;
     }
-    if (!dom && (opt_p || opt_b)) {
-        fprintf(stderr, "Must specify a domain.\n");
-        return EXIT_FAILURE;
+    if (!dom && (opt_p || opt_b || opt_v)) {
+        fprintf(stderr, "Must specify a domain.\n");
+        rc = 1;
+        goto out;
     }
-    if (opt_p != opt_b) {
-        fprintf(stderr, "Must specify period and budget\n");
-        return EXIT_FAILURE;
+    if (dom && !opt_v && !opt_all && (opt_p || opt_b)) {
+        fprintf(stderr, "Must specify VCPU.\n");
+        rc = 1;
+        goto out;
+    }
+    if (opt_v && opt_all) {
+        fprintf(stderr, "Specifying both '-v all' and individual VCPU IDs is not allowed.\n");
+        rc = 1;
+        goto out;
+    }
+    if (((v_index > b_index) && opt_b) || ((v_index > p_index) && opt_p)
+            || p_index != b_index) {
+        fprintf(stderr, "Incorrect number of periods and budgets.\n");
+        rc = 1;
+        goto out;
     }
 
-    if (!dom) { /* list all domain's rt scheduler info */
-        if (sched_domain_output(LIBXL_SCHEDULER_RTDS,
-                                sched_rtds_domain_output,
-                                sched_rtds_pool_output,
-                                cpupool))
-            return EXIT_FAILURE;
+    if ((!dom) && opt_all) {
+        /* get all domains' per-vcpu rtds scheduling parameters */
+        rc = -sched_vcpu_output(LIBXL_SCHEDULER_RTDS,
+                                    sched_rtds_vcpu_output,
+                                    sched_rtds_pool_output,
+                                    cpupool);
+        goto out;
+    } else if (!dom && !opt_all) {
+        /* list all domains' default scheduling parameters */
+        rc = -sched_domain_output(LIBXL_SCHEDULER_RTDS,
+                                    sched_rtds_domain_output,
+                                    sched_rtds_pool_output,
+                                    cpupool);
+        goto out;
     } else {
         uint32_t domid = find_domain(dom);
-        if (!opt_p && !opt_b) { /* output rt scheduler info */
+        if (!opt_v && !opt_all) { /* output default scheduling parameters */
             sched_rtds_domain_output(-1);
-            if (sched_rtds_domain_output(domid))
-                return EXIT_FAILURE;
-        } else { /* set rt scheduler paramaters */
-            libxl_domain_sched_params scinfo;
-            libxl_domain_sched_params_init(&scinfo);
+            rc = -sched_rtds_domain_output(domid);
+            goto out;
+        } else if (!opt_p && !opt_b) {
+            /* get per-vcpu rtds scheduling parameters */
+            libxl_vcpu_sched_params scinfo;
+            libxl_vcpu_sched_params_init(&scinfo);
+            sched_rtds_vcpu_output(-1, &scinfo);
+            scinfo.num_vcpus = v_index;
+            if (v_index > 0)
+                scinfo.vcpus = (libxl_sched_params *)
+                        xmalloc(sizeof(libxl_sched_params) * (v_index));
+            for (i = 0; i < v_index; i++)
+                scinfo.vcpus[i].vcpuid = vcpus[i];
+            rc = -sched_rtds_vcpu_output(domid, &scinfo);
+            libxl_vcpu_sched_params_dispose(&scinfo);
+            goto out;
+        } else if (opt_v || opt_all) {
+            /* set per-vcpu rtds scheduling parameters */
+            libxl_vcpu_sched_params scinfo;
+            libxl_vcpu_sched_params_init(&scinfo);
             scinfo.sched = LIBXL_SCHEDULER_RTDS;
-            scinfo.period = period;
-            scinfo.budget = budget;
+            if (v_index > 0) {
+                scinfo.num_vcpus = v_index;
+                scinfo.vcpus = (libxl_sched_params *)
+                        xmalloc(sizeof(libxl_sched_params) * (v_index));
+                for (i = 0; i < v_index; i++) {
+                    scinfo.vcpus[i].vcpuid = vcpus[i];
+                    scinfo.vcpus[i].period = periods[i];
+                    scinfo.vcpus[i].budget = budgets[i];
+                }
+                rc = sched_vcpu_set(domid, &scinfo);
+            } else { /* set params for all vcpus */
+                scinfo.num_vcpus = 1;
+                scinfo.vcpus = (libxl_sched_params *)
+                        xmalloc(sizeof(libxl_sched_params));
+                scinfo.vcpus[0].period = periods[0];
+                scinfo.vcpus[0].budget = budgets[0];
+                rc = sched_vcpu_set_all(domid, &scinfo);
+            }
 
-            rc = sched_domain_set(domid, &scinfo);
-            libxl_domain_sched_params_dispose(&scinfo);
-            if (rc)
-                return EXIT_FAILURE;
+            libxl_vcpu_sched_params_dispose(&scinfo);
+            if (rc) {
+                rc = -rc;
+                goto out;
+            }
         }
     }
 
-    return EXIT_SUCCESS;
+out:
+    free(vcpus);
+    free(periods);
+    free(budgets);
+    return rc;
 }
 
 int main_domid(int argc, char **argv)
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index fdc1ac6..c68b656 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -268,10 +268,12 @@ struct cmd_spec cmd_table[] = {
     { "sched-rtds",
       &main_sched_rtds, 0, 1,
       "Get/set rtds scheduler parameters",
-      "[-d <Domain> [-p[=PERIOD]] [-b[=BUDGET]]]",
-      "-d DOMAIN, --domain=DOMAIN     Domain to modify\n"
-      "-p PERIOD, --period=PERIOD     Period (us)\n"
-      "-b BUDGET, --budget=BUDGET     Budget (us)\n"
+      "[-d <Domain> [-v[=VCPUID]] [-p[=PERIOD]] [-b[=BUDGET]]]",
+      "-d DOMAIN, --domain=DOMAIN     Domain to modify\n"
+      "-v VCPUID/all, --vcpuid=VCPUID/all    VCPU to modify or output;\n"
+      "                               Use '-v all' to modify/output all vcpus\n"
+      "-p PERIOD, --period=PERIOD     Period (us)\n"
+      "-b BUDGET, --budget=BUDGET     Budget (us)\n"
     },
     { "domid",
       &main_domid, 0, 0,
-- 
1.9.1



* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 1/4] xen: enable " Chong Li
@ 2016-03-07 12:59   ` Jan Beulich
  2016-03-07 16:28     ` Chong Li
  2016-03-08 19:09   ` Wei Liu
  1 sibling, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-03-07 12:59 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Sisu Xi, george.dunlap, dario.faggioli, xen-devel,
	Meng Xu, dgolomb

>>> On 06.03.16 at 18:55, <lichong659@gmail.com> wrote:
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -1054,6 +1054,10 @@ csched_dom_cntl(
>       * lock. Runq lock not needed anywhere in here. */
>      spin_lock_irqsave(&prv->lock, flags);
>  
> +    if ( op->cmd == XEN_DOMCTL_SCHEDOP_putvcpuinfo ||
> +         op->cmd == XEN_DOMCTL_SCHEDOP_getvcpuinfo )
> +        return -EINVAL;
> +
>      if ( op->cmd == XEN_DOMCTL_SCHEDOP_getinfo )
>      {
>          op->u.credit.weight = sdom->weight;

Considering the rest of the code that follows, I would - albeit
I'm not a maintainer of this code - strongly suggest using a
switch() in such cases, with the default case returning -EINVAL (or
maybe better -EOPNOTSUPP).
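
The suggested shape might look like this minimal, self-contained sketch
(the enum values and the function name are illustrative stand-ins for the
real XEN_DOMCTL_SCHEDOP_* constants and csched_dom_cntl(), not the actual
Xen code):

```c
#include <errno.h>

/* Illustrative stand-ins for the XEN_DOMCTL_SCHEDOP_* command codes. */
enum sched_cmd {
    SCHEDOP_putinfo,
    SCHEDOP_getinfo,
    SCHEDOP_putvcpuinfo,
    SCHEDOP_getvcpuinfo,
};

/*
 * Handle the per-domain commands the credit scheduler supports
 * explicitly, and reject everything else (including the per-vCPU
 * commands it does not support) from the default case, instead of
 * enumerating unsupported commands in an up-front if().
 */
static int csched_dom_cntl_dispatch(enum sched_cmd cmd)
{
    switch ( cmd )
    {
    case SCHEDOP_getinfo:
        /* ... fill op->u.credit ... */
        return 0;

    case SCHEDOP_putinfo:
        /* ... validate and apply op->u.credit ... */
        return 0;

    default:
        return -EOPNOTSUPP;
    }
}
```

With this shape, any command added to the domctl interface later is
rejected automatically until the scheduler explicitly opts in.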

> @@ -1130,23 +1146,17 @@ rt_dom_cntl(
>      unsigned long flags;
>      int rc = 0;
>  
> +    xen_domctl_schedparam_vcpu_t local_sched;
> +    s_time_t period, budget;
> +    uint32_t index = 0;
> +

There's a stray blank line left ahead of this addition.

>      switch ( op->cmd )
>      {
> -    case XEN_DOMCTL_SCHEDOP_getinfo:
> -        if ( d->max_vcpus > 0 )
> -        {
> -            spin_lock_irqsave(&prv->lock, flags);
> -            svc = rt_vcpu(d->vcpu[0]);
> -            op->u.rtds.period = svc->period / MICROSECS(1);
> -            op->u.rtds.budget = svc->budget / MICROSECS(1);
> -            spin_unlock_irqrestore(&prv->lock, flags);
> -        }
> -        else
> -        {
> -            /* If we don't have vcpus yet, let's just return the defaults. */
> -            op->u.rtds.period = RTDS_DEFAULT_PERIOD;
> -            op->u.rtds.budget = RTDS_DEFAULT_BUDGET;
> -        }
> +    case XEN_DOMCTL_SCHEDOP_getinfo: /* return the default parameters */
> +        spin_lock_irqsave(&prv->lock, flags);
> +        op->u.rtds.period = RTDS_DEFAULT_PERIOD / MICROSECS(1);
> +        op->u.rtds.budget = RTDS_DEFAULT_BUDGET / MICROSECS(1);
> +        spin_unlock_irqrestore(&prv->lock, flags);
>          break;

This alters the values returned when d->max_vcpus == 0 - while
this looks to be intentional, I think calling out such a bug fix in the
description is a must.

> @@ -1163,6 +1173,96 @@ rt_dom_cntl(
>          }
>          spin_unlock_irqrestore(&prv->lock, flags);
>          break;
> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> +        if ( guest_handle_is_null(op->u.v.vcpus) )
> +        {
> +            rc = -EINVAL;

Perhaps rather -EFAULT? But then again - what is this check good for
(considering that it doesn't cover other obviously bad handle values)?

> +            break;
> +        }
> +        while ( index < op->u.v.nr_vcpus )
> +        {
> +            if ( copy_from_guest_offset(&local_sched,
> +                          op->u.v.vcpus, index, 1) )

Indentation.

> +            {
> +                rc = -EFAULT;
> +                break;
> +            }
> +            if ( local_sched.vcpuid >= d->max_vcpus ||
> +                          d->vcpu[local_sched.vcpuid] == NULL )

Again. And more below.

> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +
> +            spin_lock_irqsave(&prv->lock, flags);
> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
> +            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
> +            local_sched.s.rtds.period = svc->period / MICROSECS(1);
> +            spin_unlock_irqrestore(&prv->lock, flags);
> +
> +            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
> +                    &local_sched, 1) )
> +            {
> +                rc = -EFAULT;
> +                break;
> +            }
> +            if ( (++index > 0x3f) && hypercall_preempt_check() )
> +                break;

So how is the caller going to be able to reliably read all vCPU-s'
information for a guest with more than 64 vCPU-s?
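
One way to make the 64-entry window resumable is for the hypercall to
report how many entries it actually processed, so the caller can re-issue
the call for the remaining tail. The following is a hedged, self-contained
sketch of that contract; all names are illustrative, not the real Xen
interface:

```c
#include <stdint.h>

#define BATCH_LIMIT 64 /* entries processed before a (simulated) preemption */

/*
 * Stand-in for one invocation of the get-vcpu-info loop: process
 * entries starting at 'start', stop after BATCH_LIMIT, and return the
 * index of the first unprocessed entry so the caller can resume.
 */
static uint32_t process_vcpus(uint32_t start, uint32_t nr_vcpus)
{
    uint32_t index = start;

    while ( index < nr_vcpus )
    {
        /* ... copy one vcpu's parameters to/from the guest buffer ... */
        if ( (++index - start) >= BATCH_LIMIT ) /* preemption point hit */
            break;
    }

    return index;
}

/* Caller-side loop: keep re-invoking until every entry has been handled. */
static uint32_t fetch_all(uint32_t nr_vcpus)
{
    uint32_t done = 0;

    while ( done < nr_vcpus )
        done = process_vcpus(done, nr_vcpus);

    return done;
}
```

A guest with, say, 200 vCPUs then needs four invocations, but the caller
can always make forward progress and knows exactly where to resume.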

> +        }
> +
> +        if ( !rc && (op->u.v.nr_vcpus != index) )
> +            op->u.v.nr_vcpus = index;

I don't think the right side of the && is really necessary / useful.

> +        break;
> +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:

When switch statements get large, please put blank lines between
individual case blocks.

> +        if ( guest_handle_is_null(op->u.v.vcpus) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +        while ( index < op->u.v.nr_vcpus )
> +        {
> +            if ( copy_from_guest_offset(&local_sched,
> +                          op->u.v.vcpus, index, 1) )
> +            {
> +                rc = -EFAULT;
> +                break;
> +            }
> +            if ( local_sched.vcpuid >= d->max_vcpus ||
> +                          d->vcpu[local_sched.vcpuid] == NULL )
> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +
> +            period = MICROSECS(local_sched.s.rtds.period);
> +            budget = MICROSECS(local_sched.s.rtds.budget);
> +            if ( period > RTDS_MAX_PERIOD || budget < RTDS_MIN_BUDGET ||
> +                          budget > period || period < RTDS_MIN_PERIOD )
> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +
> +            /*
> +             * We accept period/budget less than 100 us, but will warn users about
> +             * the large scheduling overhead due to it
> +             */
> +            if ( period < MICROSECS(100) || budget < MICROSECS(100) )
> +                printk("Warning: period or budget set to less than 100us.\n"
> +                       "This may result in high scheduling overhead.\n");

This should use a log level which is rate limited by default. Quite
likely that would be one of the guest log levels.
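
To illustrate why this matters: the warning sits on a guest-triggerable
path, so an unconditional printk lets a guest flood the console. In Xen
the guest log levels (e.g. XENLOG_G_WARNING) are rate limited by default;
the crude counter-based limiter below only demonstrates the effect, and
every name in it is illustrative:

```c
#include <stdio.h>

/* Arbitrary cap for this sketch; real rate limiting is time-based. */
static unsigned int warn_budget = 16;

/*
 * Emit the small-period warning at most 'warn_budget' times, then
 * suppress it. Returns 1 if the message was emitted, 0 if suppressed.
 */
static int warn_small_period(void)
{
    if ( warn_budget == 0 )
        return 0; /* suppressed: a guest cannot flood the log */

    warn_budget--;
    printf("Warning: period or budget set to less than 100us.\n");
    return 1;
}
```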

> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -1148,10 +1148,19 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
>      if ( ret )
>          return ret;
>  
> -    if ( (op->sched_id != DOM2OP(d)->sched_id) ||
> -         ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
> -          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
> +    if ( op->sched_id != DOM2OP(d)->sched_id )
>          return -EINVAL;
> +    else
> +        switch ( op->cmd )

Pointless else, pointlessly increasing the necessary indentation
for the entire switch().

> +typedef struct xen_domctl_schedparam_vcpu {
> +    union {
> +        xen_domctl_sched_credit_t credit;
> +        xen_domctl_sched_credit2_t credit2;
> +        xen_domctl_sched_rtds_t rtds;
> +    } s;

Please call such unions "u", as done everywhere else.

> +    uint16_t vcpuid;

Any particular reason to limit this to 16 bits, when elsewhere
we commonly use 32 bits for vCPU IDs?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-07 12:59   ` Jan Beulich
@ 2016-03-07 16:28     ` Chong Li
  2016-03-07 16:40       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-07 16:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Chong Li, Sisu Xi, George Dunlap, dario.faggioli, xen-devel,
	Meng Xu, Dagaen Golomb

On Mon, Mar 7, 2016 at 6:59 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 06.03.16 at 18:55, <lichong659@gmail.com> wrote:

>
>> @@ -1130,23 +1146,17 @@ rt_dom_cntl(
>>      unsigned long flags;
>>      int rc = 0;
>>
>> +    xen_domctl_schedparam_vcpu_t local_sched;
>> +    s_time_t period, budget;
>> +    uint32_t index = 0;
>> +
>
> There's a stray blank line left ahead of this addition.
>
>>      switch ( op->cmd )
>>      {
>> -    case XEN_DOMCTL_SCHEDOP_getinfo:
>> -        if ( d->max_vcpus > 0 )
>> -        {
>> -            spin_lock_irqsave(&prv->lock, flags);
>> -            svc = rt_vcpu(d->vcpu[0]);
>> -            op->u.rtds.period = svc->period / MICROSECS(1);
>> -            op->u.rtds.budget = svc->budget / MICROSECS(1);
>> -            spin_unlock_irqrestore(&prv->lock, flags);
>> -        }
>> -        else
>> -        {
>> -            /* If we don't have vcpus yet, let's just return the defaults. */
>> -            op->u.rtds.period = RTDS_DEFAULT_PERIOD;
>> -            op->u.rtds.budget = RTDS_DEFAULT_BUDGET;
>> -        }
>> +    case XEN_DOMCTL_SCHEDOP_getinfo: /* return the default parameters */
>> +        spin_lock_irqsave(&prv->lock, flags);
>> +        op->u.rtds.period = RTDS_DEFAULT_PERIOD / MICROSECS(1);
>> +        op->u.rtds.budget = RTDS_DEFAULT_BUDGET / MICROSECS(1);
>> +        spin_unlock_irqrestore(&prv->lock, flags);
>>          break;
>
> This alters the values returned when d->max_vcpus == 0 - while
> this looks to be intentional, I think calling out such a bug fix in the
> description is a must.

Based on previous discussion, XEN_DOMCTL_SCHEDOP_getinfo only returns
the default parameters, no matter whether any vcpus have been created
yet. But I can absolutely explain this in the description.
>
>> @@ -1163,6 +1173,96 @@ rt_dom_cntl(
>>          }
>>          spin_unlock_irqrestore(&prv->lock, flags);
>>          break;
>> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>> +        if ( guest_handle_is_null(op->u.v.vcpus) )
>> +        {
>> +            rc = -EINVAL;
>
> Perhaps rather -EFAULT? But then again - what is this check good for
> (considering that it doesn't cover other obviously bad handle values)?

Dario suggested this in the last post, because vcpus is a handle and
needs to be validated.

>> +            {
>> +                rc = -EINVAL;
>> +                break;
>> +            }
>> +
>> +            spin_lock_irqsave(&prv->lock, flags);
>> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>> +            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
>> +            local_sched.s.rtds.period = svc->period / MICROSECS(1);
>> +            spin_unlock_irqrestore(&prv->lock, flags);
>> +
>> +            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
>> +                    &local_sched, 1) )
>> +            {
>> +                rc = -EFAULT;
>> +                break;
>> +            }
>> +            if ( (++index > 0x3f) && hypercall_preempt_check() )
>> +                break;
>
> So how is the caller going to be able to reliably read all vCPU-s'
> information for a guest with more than 64 vCPU-s?

In libxc, we re-issue the hypercall if the current one is preempted.

>
>> +        }
>> +
>> +        if ( !rc && (op->u.v.nr_vcpus != index) )
>> +            op->u.v.nr_vcpus = index;
>
> I don't think the right side of the && is really necessary / useful.

The right side is to check whether the vcpus array is fully processed.
When it is true and no error occurs (rc == 0), we update
op->u.v.nr_vcpus, which is returned to libxc and helps the xc function
figure out how many unprocessed vcpus should be handled in the next
hypercall.
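The caller-side pattern described here (re-issue the domctl until all vcpus are covered, using the nr_vcpus count the hypervisor writes back) can be sketched as follows. All names are hypothetical stand-ins, not the real libxc/Xen functions; the fake hypercall simply "preempts" after 64 entries per call, matching the 0x3f threshold in the patch.

```c
#include <assert.h>
#include <stdint.h>

#define BATCH 64   /* hypervisor-side preemption threshold (0x3f + 1) */

struct fake_op {
    uint32_t nr_vcpus;   /* in: entries remaining; out: entries processed */
};

/* Stand-in for the preemptible XEN_DOMCTL_SCHEDOP_getvcpuinfo. */
static int fake_getvcpuinfo(struct fake_op *op)
{
    uint32_t done = op->nr_vcpus < BATCH ? op->nr_vcpus : BATCH;
    op->nr_vcpus = done;      /* progress reported back to the caller */
    return 0;
}

/* xc-level loop: advance past processed entries and re-issue. */
static int get_all_vcpus(uint32_t total, uint32_t *calls)
{
    uint32_t processed = 0;

    while (processed < total) {
        struct fake_op op = { .nr_vcpus = total - processed };
        int rc = fake_getvcpuinfo(&op);

        if (rc)
            return rc;
        processed += op.nr_vcpus;
        (*calls)++;
    }
    return 0;
}
```

A guest with 130 vcpus would thus need three calls (64 + 64 + 2), which is exactly the behavior Jan is asking to see documented.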


>
>> +typedef struct xen_domctl_schedparam_vcpu {
>> +    union {
>> +        xen_domctl_sched_credit_t credit;
>> +        xen_domctl_sched_credit2_t credit2;
>> +        xen_domctl_sched_rtds_t rtds;
>> +    } s;
>
> Please call such unions "u", as done everywhere else.
>
>> +    uint16_t vcpuid;
>
> Any particular reason to limit this to 16 bits, when elsewhere
> we commonly use 32 bits for vCPU IDs?

I'll change it.

Thanks for your comments.
Chong



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-07 16:28     ` Chong Li
@ 2016-03-07 16:40       ` Jan Beulich
  2016-03-07 17:53         ` Dario Faggioli
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-03-07 16:40 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Sisu Xi, George Dunlap, dario.faggioli, xen-devel,
	Meng Xu, Dagaen Golomb

>>> On 07.03.16 at 17:28, <lichong659@gmail.com> wrote:
> On Mon, Mar 7, 2016 at 6:59 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 06.03.16 at 18:55, <lichong659@gmail.com> wrote:
>>>      switch ( op->cmd )
>>>      {
>>> -    case XEN_DOMCTL_SCHEDOP_getinfo:
>>> -        if ( d->max_vcpus > 0 )
>>> -        {
>>> -            spin_lock_irqsave(&prv->lock, flags);
>>> -            svc = rt_vcpu(d->vcpu[0]);
>>> -            op->u.rtds.period = svc->period / MICROSECS(1);
>>> -            op->u.rtds.budget = svc->budget / MICROSECS(1);
>>> -            spin_unlock_irqrestore(&prv->lock, flags);
>>> -        }
>>> -        else
>>> -        {
>>> -            /* If we don't have vcpus yet, let's just return the defaults. */
>>> -            op->u.rtds.period = RTDS_DEFAULT_PERIOD;
>>> -            op->u.rtds.budget = RTDS_DEFAULT_BUDGET;
>>> -        }
>>> +    case XEN_DOMCTL_SCHEDOP_getinfo: /* return the default parameters */
>>> +        spin_lock_irqsave(&prv->lock, flags);
>>> +        op->u.rtds.period = RTDS_DEFAULT_PERIOD / MICROSECS(1);
>>> +        op->u.rtds.budget = RTDS_DEFAULT_BUDGET / MICROSECS(1);
>>> +        spin_unlock_irqrestore(&prv->lock, flags);
>>>          break;
>>
>> This alters the values returned when d->max_vcpus == 0 - while
>> this looks to be intentional, I think calling out such a bug fix in the
>> description is a must.
> 
> Based on previous discussion, XEN_DOMCTL_SCHEDOP_getinfo only returns
> the default parameters,
> no matter whether vcpu is created yet or not. But I can absolutely
> explain this in the description.

That wasn't the point of the comment. Instead the change (fix) to
divide by MICROSECS(1) is what otherwise would go in silently.

>>> @@ -1163,6 +1173,96 @@ rt_dom_cntl(
>>>          }
>>>          spin_unlock_irqrestore(&prv->lock, flags);
>>>          break;
>>> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>>> +        if ( guest_handle_is_null(op->u.v.vcpus) )
>>> +        {
>>> +            rc = -EINVAL;
>>
>> Perhaps rather -EFAULT? But then again - what is this check good for
>> (considering that it doesn't cover other obviously bad handle values)?
> 
> Dario suggested this in the last post, because vcpus is a handle and
> needs to be validated.

Well, as said - the handle being non-null doesn't make it a valid
handle. Any validation can be left to copy_{to,from}_guest*()
unless you mean to give a null handle some special meaning.

>>> +            {
>>> +                rc = -EINVAL;
>>> +                break;
>>> +            }
>>> +
>>> +            spin_lock_irqsave(&prv->lock, flags);
>>> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>>> +            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
>>> +            local_sched.s.rtds.period = svc->period / MICROSECS(1);
>>> +            spin_unlock_irqrestore(&prv->lock, flags);
>>> +
>>> +            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
>>> +                    &local_sched, 1) )
>>> +            {
>>> +                rc = -EFAULT;
>>> +                break;
>>> +            }
>>> +            if ( (++index > 0x3f) && hypercall_preempt_check() )
>>> +                break;
>>
>> So how is the caller going to be able to reliably read all vCPU-s'
>> information for a guest with more than 64 vCPU-s?
> 
> In libxc, we re-issue hypercall if the current one is preempted.

And with the current code - how does libxc know? (And anyway,
this should only be a last resort, if the hypervisor can't by itself
arrange for a continuation. If done this way, having a code
comment referring to the required caller behavior would seem to
be an absolute must.)

>>> +        }
>>> +
>>> +        if ( !rc && (op->u.v.nr_vcpus != index) )
>>> +            op->u.v.nr_vcpus = index;
>>
>> I don't think the right side of the && is really necessary / useful.
> 
> The right side is to check whether the vcpus array is fully processed.
> When it is true and no error occurs (rc == 0), we
> update op->u.v.nr_vcpus, which is returned to libxc, and helps xc
> function figuring out how many un-processed vcpus should
> be taken care of in the next hypercall.

Just consider what the contents of op->u.v.nr_vcpus is after
this piece of code was executed, once with the full conditional,
and another time with the right side of the && omitted.

Jan


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-07 16:40       ` Jan Beulich
@ 2016-03-07 17:53         ` Dario Faggioli
  2016-03-07 22:16           ` Chong Li
  2016-03-08  9:10           ` Jan Beulich
  0 siblings, 2 replies; 51+ messages in thread
From: Dario Faggioli @ 2016-03-07 17:53 UTC (permalink / raw)
  To: Jan Beulich, Chong Li
  Cc: Chong Li, Sisu Xi, George Dunlap, xen-devel, Meng Xu, Dagaen Golomb


On Mon, 2016-03-07 at 09:40 -0700, Jan Beulich wrote:
> > > > On 07.03.16 at 17:28, <lichong659@gmail.com> wrote:
> > On Mon, Mar 7, 2016 at 6:59 AM, Jan Beulich <JBeulich@suse.com>
> > wrote:
> > > 
> > > > @@ -1163,6 +1173,96 @@ rt_dom_cntl(
> > > > 
> > > > +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> > > > +        if ( guest_handle_is_null(op->u.v.vcpus) )
> > > > +        {
> > > > +            rc = -EINVAL;
> > > Perhaps rather -EFAULT? But then again - what is this check good
> > > for
> > > (considering that it doesn't cover other obviously bad handle
> > > values)?
> > Dario suggested this in the last post, because vcpus is a handle
> > and
> > needs to be validated.
>
> Well, as said - the handle being non-null doesn't make it a valid
> handle. Any validation can be left to copy_{to,from}_guest*()
> unless you mean to give a null handle some special meaning.
> 
IIRC, I was looking at how XEN_SYSCTL_pcitopoinfo is handled, for
reference, and that has some guest_handle_is_null()==>EINVAL sanity
checking (in xen/common/sysctl.c), which, when I thought about it, made
sense to me.

My reasoning was, sort of:
 1. if the handle is NULL, no point getting into the somewhat 
    complicated logic of the while,
 2. more accurate error reporting: being passed a NULL handle looked 
    like something we could identify and call invalid, rather than 
    waiting for the copy to fault.

In any event, I've no problem at all with this being dropped.

> > > > +            {
> > > > +                rc = -EINVAL;
> > > > +                break;
> > > > +            }
> > > > +
> > > > +            spin_lock_irqsave(&prv->lock, flags);
> > > > +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
> > > > +            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
> > > > +            local_sched.s.rtds.period = svc->period / MICROSECS(1);
> > > > +            spin_unlock_irqrestore(&prv->lock, flags);
> > > > +
> > > > +            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
> > > > +                    &local_sched, 1) )
> > > > +            {
> > > > +                rc = -EFAULT;
> > > > +                break;
> > > > +            }
> > > > +            if ( (++index > 0x3f) && hypercall_preempt_check() )
> > > > +                break;
> > > So how is the caller going to be able to reliably read all vCPU-
> > > s'
> > > information for a guest with more than 64 vCPU-s?
> > In libxc, we re-issue hypercall if the current one is preempted.
> And with the current code - how does libxc know? (And anyway,
> this should only be a last resort, if the hypervisor can't by itself
> arrange for a continuation. If done this way, having a code
> comment referring to the required caller behavior would seem to
> be an absolute must.)
> 
I definitely agree on commenting.

About the structure of the code, as said above, I do like
how XEN_SYSCTL_pcitopoinfo ended up being handled, I think it is a
great fit for this specific case and, comparing at both this and
previous version, I do think this one is (bugs apart) looking better.

I'm sure I said this --long ago-- when discussing v4 (and maybe even
previous versions), as well as more recently, when reviewing v5, and
that's why Chong (finally! :-D) did it.

So, with the comment in place (and with bugs fixed :-)), are you (Jan)
ok with this being done this way?

> > > > +        }
> > > > +
> > > > +        if ( !rc && (op->u.v.nr_vcpus != index) )
> > > > +            op->u.v.nr_vcpus = index;
> > > I don't think the right side of the && is really necessary /
> > > useful.
> > The right side is to check whether the vcpus array is fully
> > processed.
> > When it is true and no error occurs (rc == 0), we
> > update op->u.v.nr_vcpus, which is returned to libxc, and helps xc
> > function figuring out how many un-processed vcpus should
> > be taken care of in the next hypercall.
> Just consider what the contents of op->u.v.nr_vcpus is after
> this piece of code was executed, once with the full conditional,
> and another time with the right side of the && omitted.
> 
BTW, Chong, I'm not sure this has to do with what Jan is saying, but
looking again at XEN_SYSCTL_pcitopoinfo, it looks to me you're missing
copying nr_vcpus back up to the guest (which is actually what lets
libxc know whether all vcpus have been processed or not).

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-07 17:53         ` Dario Faggioli
@ 2016-03-07 22:16           ` Chong Li
  2016-03-08  9:10           ` Jan Beulich
  1 sibling, 0 replies; 51+ messages in thread
From: Chong Li @ 2016-03-07 22:16 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Sisu Xi, George Dunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb

On Mon, Mar 7, 2016 at 11:53 AM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> On Mon, 2016-03-07 at 09:40 -0700, Jan Beulich wrote:
>> > > > On 07.03.16 at 17:28, <lichong659@gmail.com> wrote:
>> > On Mon, Mar 7, 2016 at 6:59 AM, Jan Beulich <JBeulich@suse.com>
>> > wrote:
>> > >
>> > > > @@ -1163,6 +1173,96 @@ rt_dom_cntl(
>> > > >
>> > > > +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>> > > > +        if ( guest_handle_is_null(op->u.v.vcpus) )
>> > > > +        {
>> > > > +            rc = -EINVAL;

>
>> > > > +            {
>> > > > +                rc = -EINVAL;
>> > > > +                break;
>> > > > +            }
>> > > > +
>> > > > +            spin_lock_irqsave(&prv->lock, flags);
>> > > > +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>> > > > +            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
>> > > > +            local_sched.s.rtds.period = svc->period / MICROSECS(1);
>> > > > +            spin_unlock_irqrestore(&prv->lock, flags);
>> > > > +
>> > > > +            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
>> > > > +                    &local_sched, 1) )
>> > > > +            {
>> > > > +                rc = -EFAULT;
>> > > > +                break;
>> > > > +            }
>> > > > +            if ( (++index > 0x3f) && hypercall_preempt_check() )
>> > > > +                break;

>
>> > > > +        }
>> > > > +
>> > > > +        if ( !rc && (op->u.v.nr_vcpus != index) )
>> > > > +            op->u.v.nr_vcpus = index;
>> > > I don't think the right side of the && is really necessary /
>> > > useful.
>> > The right side is to check whether the vcpus array is fully
>> > processed.
>> > When it is true and no error occurs (rc == 0), we
>> > update op->u.v.nr_vcpus, which is returned to libxc, and helps xc
>> > function figuring out how many un-processed vcpus should
>> > be taken care of in the next hypercall.
>> Just consider what the contents of op->u.v.nr_vcpus is after
>> this piece of code was executed, once with the full conditional,
>> and another time with the right side of the && omitted.
>>
> BTW, Chong, I'm not sure this has to do with what Jan is saying, but
> looking again at XEN_SYSCTL_pcitopoinfo, it looks to me you're missing
> copying nr_vcpus back up to the guest (which is actually what lets
> libxc know whether all vcpus have been processed or not).

I think by "op->u.v.nr_vcpus = index", we already make the new nr_vcpus
visible to the guest (I've verified it).

In the XEN_DOMCTL_scheduler_op case of do_domctl(), we set
"copyback = 1" after calling sched_adjust(), which means all fields in
op (including the new nr_vcpus) will be copied to u_domctl at the end
of do_domctl(). This ensures the new nr_vcpus is copied back up to the
guest.

Please correct me if my understanding is wrong.

Chong

>
> Regards,
> Dario
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-07 17:53         ` Dario Faggioli
  2016-03-07 22:16           ` Chong Li
@ 2016-03-08  9:10           ` Jan Beulich
  2016-03-08 10:34             ` Dario Faggioli
  1 sibling, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-03-08  9:10 UTC (permalink / raw)
  To: Dario Faggioli, Chong Li
  Cc: Chong Li, Sisu Xi, George Dunlap, xen-devel, Meng Xu, Dagaen Golomb

>>> On 07.03.16 at 18:53, <dario.faggioli@citrix.com> wrote:
> On Mon, 2016-03-07 at 09:40 -0700, Jan Beulich wrote:
>> > > > On 07.03.16 at 17:28, <lichong659@gmail.com> wrote:
>> > On Mon, Mar 7, 2016 at 6:59 AM, Jan Beulich <JBeulich@suse.com>
>> > wrote:
>> > > 
>> > > > @@ -1163,6 +1173,96 @@ rt_dom_cntl(
>> > > > 
>> > > > +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>> > > > +        if ( guest_handle_is_null(op->u.v.vcpus) )
>> > > > +        {
>> > > > +            rc = -EINVAL;
>> > > Perhaps rather -EFAULT? But then again - what is this check good
>> > > for
>> > > (considering that it doesn't cover other obviously bad handle
>> > > values)?
>> > Dario suggested this in the last post, because vcpus is a handle
>> > and
>> > needs to be validated.
>>
>> Well, as said - the handle being non-null doesn't make it a valid
>> handle. Any validation can be left to copy_{to,from}_guest*()
>> unless you mean to give a null handle some special meaning.
>> 
> IIRC, I was looking at how XEN_SYSCTL_pcitopoinfo is handled, for
> reference, and that has some guest_handle_is_null()==>EINVAL sanity
> checking (in xen/common/sysctl.c), which, when I thought about it, made
> sense to me.
> 
> My reasoning was, sort of:
>  1. if the handle is NULL, no point getting into the somewhat 
>     complicated logic of the while,
>  2. more accurate error reporting: as being passed a NULL handle 
>     looked something we could identify and call invalid, rather than 
>     waiting for the copy to fault.

I think the XEN_SYSCTL_pcitopoinfo was misguided in this respect,
cloning non applicable logic here which returns the number of needed
(array) elements in such a case for a few other operations.

>> > > > +            {
>> > > > +                rc = -EINVAL;
>> > > > +                break;
>> > > > +            }
>> > > > +
>> > > > +            spin_lock_irqsave(&prv->lock, flags);
>> > > > +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>> > > > +            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
>> > > > +            local_sched.s.rtds.period = svc->period / MICROSECS(1);
>> > > > +            spin_unlock_irqrestore(&prv->lock, flags);
>> > > > +
>> > > > +            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
>> > > > +                    &local_sched, 1) )
>> > > > +            {
>> > > > +                rc = -EFAULT;
>> > > > +                break;
>> > > > +            }
>> > > > +            if ( (++index > 0x3f) && hypercall_preempt_check() )
>> > > > +                break;
>> > > So how is the caller going to be able to reliably read all vCPU-
>> > > s'
>> > > information for a guest with more than 64 vCPU-s?
>> > In libxc, we re-issue hypercall if the current one is preempted.
>> And with the current code - how does libxc know? (And anyway,
>> this should only be a last resort, if the hypervisor can't by itself
>> arrange for a continuation. If done this way, having a code
>> comment referring to the required caller behavior would seem to
>> be an absolute must.)
>> 
> I definitely agree on commenting.
> 
> About the structure of the code, as said above, I do like
> how XEN_SYSCTL_pcitopoinfo ended up being handled, I think it is a
> great fit for this specific case and, comparing at both this and
> previous version, I do think this one is (bugs apart) looking better.
> 
> I'm sure I said this --long ago-- when discussing v4 (and maybe even
> previous versions), as well as more recently, when reviewing v5, and
> that's why Chong (finally! :-D) did it.
> 
> So, with the comment in place (and with bugs fixed :-)), are you (Jan)
> ok with this being done this way?

Well, this _might_ be acceptable for "get" (since the caller
abandoning the sequence of calls prematurely is no problem),
but for "set" it looks less suitable, as similar abandoning would
leave the guest in some inconsistent / unintended state. The
issue with XEN_SYSCTL_pcitopoinfo was, iirc, the lack of a
good way of encoding the continuation information, and while
that would seem applicable here too I'm not sure now whether
doing it the way it was done was the best choice. Clearly
stating (in the public interface header) that certain normally
input-only fields are volatile would allow the continuation to
be handled without tool stack assistance afaict.
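A rough sketch of what Jan seems to suggest: continuation state kept in a field of the op itself, documented in the public header as volatile, so the whole operation completes without toolstack assistance. Everything below (the field names, the -ERESTART driver loop) is a simplified model, not actual Xen code.

```c
#include <assert.h>
#include <stdint.h>

#define ERESTART 85   /* placeholder errno value */
#define BATCH    64

struct sched_op {
    uint32_t nr_vcpus;  /* in: size of the vcpus array */
    uint32_t cursor;    /* normally input-only (0); volatile across
                           continuations: the hypervisor stores its
                           progress here */
};

/* One pass: handle up to BATCH entries, then ask to be restarted. */
static int process_vcpus(struct sched_op *op)
{
    uint32_t end = op->cursor + BATCH;

    if (end > op->nr_vcpus)
        end = op->nr_vcpus;
    /* ... validate/copy entries [cursor, end) here ... */
    op->cursor = end;
    return op->cursor < op->nr_vcpus ? -ERESTART : 0;
}

/* Models do_domctl(): on -ERESTART the same op is re-executed, the way
 * hypercall_create_continuation() would arrange it. */
static int do_op(struct sched_op *op, uint32_t *restarts)
{
    int rc;

    while ((rc = process_vcpus(op)) == -ERESTART)
        (*restarts)++;
    return rc;
}
```

The point is that the caller never sees the intermediate states: a "set" either completes for all vcpus or fails, avoiding the half-updated guest Jan is worried about.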

>> > > > +        }
>> > > > +
>> > > > +        if ( !rc && (op->u.v.nr_vcpus != index) )
>> > > > +            op->u.v.nr_vcpus = index;
>> > > I don't think the right side of the && is really necessary /
>> > > useful.
>> > The right side is to check whether the vcpus array is fully
>> > processed.
>> > When it is true and no error occurs (rc == 0), we
>> > update op->u.v.nr_vcpus, which is returned to libxc, and helps xc
>> > function figuring out how many un-processed vcpus should
>> > be taken care of in the next hypercall.
>> Just consider what the contents of op->u.v.nr_vcpus is after
>> this piece of code was executed, once with the full conditional,
>> and another time with the right side of the && omitted.
>> 
> BTW, Chong, I'm not sure this has to do with what Jan is saying, but
> looking again at XEN_SYSCTL_pcitopoinfo, it looks to me you're missing
> copying nr_vcpus back up to the guest (which is actually what lets
> libxc know whether all vcpus have been processed or not).

Indeed that is why the conditional makes sense there, but not here.
And the copying back is already being taken care of by the caller of
sched_adjust().

Jan


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08  9:10           ` Jan Beulich
@ 2016-03-08 10:34             ` Dario Faggioli
  2016-03-08 11:47               ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Dario Faggioli @ 2016-03-08 10:34 UTC (permalink / raw)
  To: Jan Beulich, Chong Li
  Cc: Chong Li, Sisu Xi, George Dunlap, xen-devel, Meng Xu, Dagaen Golomb


On Tue, 2016-03-08 at 02:10 -0700, Jan Beulich wrote:
> > > > On 07.03.16 at 18:53, <dario.faggioli@citrix.com> wrote:
> > On Mon, 2016-03-07 at 09:40 -0700, Jan Beulich wrote:
> > > 
> > IIRC, I was looking at how XEN_SYSCTL_pcitopoinfo is handled, for
> > reference, and that has some guest_handle_is_null()==>EINVAL
> > sanity
> > checking (in xen/common/sysctl.c), which, when I thought about it,
> > made
> > sense to me.
> > 
> > My reasoning was, sort of:
> >  1. if the handle is NULL, no point getting into the somewhat 
> >     complicated logic of the while,
> >  2. more accurate error reporting: as being passed a NULL handler 
> >     looked something we could identify and call invalid, rather
> > than 
> >     waiting for the copy to fault.
> I think the XEN_SYSCTL_pcitopoinfo was misguided in this respect,
> cloning non applicable logic here which returns the number of needed
> (array) elements in such a case for a few other operations.
> 
Sorry, I'm not sure I'm getting it: are you saying that, for _these_
domctls, we should consider the handle being NULL as a way of the
caller to ask for the size of the array?

*If* yes, well, that is "just" the number of vcpus of the guest, but,
nevertheless, that, FWIW, looks fine to me.
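The convention being floated here (a NULL handle meaning "report the needed array size") could look like the simulated handler below. The structure and field names are illustrative only, a plain pointer standing in for the guest handle.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define EINVAL 22

struct vcpu_param { uint32_t vcpuid, period_us, budget_us; };

struct get_op {
    struct vcpu_param *vcpus;  /* NULL == "how many entries do I need?" */
    uint32_t nr_vcpus;         /* in: array size; out: entries written */
};

static int getvcpuinfo(struct get_op *op, uint32_t max_vcpus)
{
    uint32_t i;

    if (op->vcpus == NULL) {          /* size query, no copying done */
        op->nr_vcpus = max_vcpus;
        return 0;
    }
    if (op->nr_vcpus < max_vcpus)     /* caller's array is too small */
        return -EINVAL;
    for (i = 0; i < max_vcpus; i++)   /* fill with per-vcpu parameters */
        op->vcpus[i] = (struct vcpu_param){ i, 10000, 4000 };
    op->nr_vcpus = max_vcpus;
    return 0;
}
```

The caller would issue the query once, size its array from the answer (which, as noted above, is just the guest's vcpu count), then re-issue with a real buffer.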

> > About the structure of the code, as said above, I do like
> > how XEN_SYSCTL_pcitopoinfo ended up being handled, I think it is a
> > great fit for this specific case and, comparing at both this and
> > previous version, I do think this one is (bugs apart) looking
> > better.
> > 
> > I'm sure I said this --long ago-- when discussing v4 (and maybe
> > even
> > previous versions), as well as more recently, when reviewing v5,
> > and
> > that's why Chong (finally! :-D) did it.
> > 
> > So, with the comment in place (and with bugs fixed :-)), are you
> > (Jan)
> > ok with this being done this way?
>
> Well, this _might_ be acceptable for "get" (since the caller
> abandoning the sequence of calls prematurely is no problem),
> but for "set" it looks less suitable, as similar abandoning would
> leave the guest in some inconsistent / unintended state.
>
Are you referring to the fact that, with this interface, the caller has
the chance to leave intentionally, or that it may happen that not all
vcpus are updated because of some bug (still in the caller)?

Well, if it's intentional, or even if the caller is buggy in the sense
that its code misses updating some vcpus (and if the interface and the
behavior are well documented, as you request), then one gets what one
"wants" (and, in the latter case, it wouldn't be too hard to debug and
figure out the issue, I think).

If it's for bugs (still in the caller) like copy_from_guest_offset()
faulting because the array is too small, that can happen if using
continuation too, can't it? And it would still leave the guest in
similar inconsistent or unintended state, IMO...

One last point. Of course, since we are talking about bugs, the final
status is not the one desired, but it's not inconsistent in the sense
that the guest can't continue running, or crashes, or anything like
that. It's something like:
 - you want all 32 vcpus of guest A to have these new parameters
 - due to a bug, you're (for instance) passing me an array with only 
   16 vcpus' parameters
 - result: only 16 vcpus will have the new parameters.

>  The
> issue with XEN_SYSCTL_pcitopoinfo was, iirc, the lack of a
> good way of encoding the continuation information, and while
> that would seem applicable here too I'm not sure now whether
> doing it the way it was done was the best choice. 
>
As far as I can remember and see, it was being done by means of an
additional dedicated parameter in the handle (called ti->first_dev in
that case). Then at some point, you said:

http://lists.xenproject.org/archives/html/xen-devel/2015-03/msg02623.html
"Considering this is a tools only interface, enforcing a not too high
 limit on num_devs would seem better than this not really clean
 continuation mechanism. The (tool stack) caller(s) can be made
 iterate."

With which I did agree (and I still do :-)), as well as I agree on the
fact that we basically are in the same situation here.

Chong tried doing things with continuations for a few rounds, including
v5, which is here:
http://lists.xenproject.org/archives/html/xen-devel/2016-02/msg00817.html

and he also used an additional field (vcpu_index).

So, all this being said, my preference stays for the way the code looks
like in this version (with all the due commenting added). Of course,
it's your preference that really matters here, me not being the
maintainer of this code. :-)

So, how do you prefer Chong to continue doing this?

> Clearly
> stating (in the public interface header) that certain normally
> input-only fields are volatile would allow the continuation to
> be handled without tool stack assistance afaict.
>
Which (sorry, I'm not getting it again) I guess is something
different from / more than what was done in v5 (the relevant hunks of which
I'm pasting at the bottom of this email, for convenience)?

> > BTW, Chong, I'm not sure this has to do with what Jan is saying,
> > but
> > looking again at XEN_SYSCTL_pcitopoinfo, it looks to me you're
> > missing
> > copying nr_vcpus back up to the guest (which is actually what makes
> > libxc knows whether all vcpus have been processed or now).
> Indeed that is why the conditional makes sense there, but not here.
> And the copying back is already being taken care of by the caller of
> sched_adjust().
> 
Indeed. So your original point was: we're always copying back, so it's
fine to always update the field, with the only exception of errors
having occurred. I do get it now.

Regards,
Dario

---
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 46b967e..b294221 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -847,9 +847,14 @@ long
do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) 
u_domctl)
     }
 
     case XEN_DOMCTL_scheduler_op:
+    {
         ret = sched_adjust(d, &op->u.scheduler_op);
+        if ( ret == -ERESTART )
+            ret = hypercall_create_continuation(
+                __HYPERVISOR_domctl, "h", u_domctl);
         copyback = 1;
         break;
+    }
 
     case XEN_DOMCTL_getdomaininfo:
     {

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 3f1d047..34ae48d 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1163,6 +1168,94 @@ rt_dom_cntl(
         }
         spin_unlock_irqrestore(&prv->lock, flags);
         break;
+    case XEN_DOMCTL_SCHEDOP_getvcpuinfo: 
+        for ( index = op->u.v.vcpu_index; index < op->u.v.nr_vcpus; index++ )
+        {
+            spin_lock_irqsave(&prv->lock, flags);
+            if ( copy_from_guest_offset(&local_sched,
+                          op->u.v.vcpus, index, 1) )
+            {
+                rc = -EFAULT;
+                spin_unlock_irqrestore(&prv->lock, flags);
+                break;
+            }
+            if ( local_sched.vcpuid >= d->max_vcpus ||
+                          d->vcpu[local_sched.vcpuid] == NULL )
+            {
+                rc = -EINVAL;
+                spin_unlock_irqrestore(&prv->lock, flags);
+                break;
+            }
+            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
+
+            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
+            local_sched.s.rtds.period = svc->period / MICROSECS(1);
+
+            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
+                    &local_sched, 1) )
+            {
+                rc = -EFAULT;
+                spin_unlock_irqrestore(&prv->lock, flags);
+                break;
+            }
+            spin_unlock_irqrestore(&prv->lock, flags);
+            if ( hypercall_preempt_check() )
+            {
+                op->u.v.vcpu_index = index + 1;
+                /* hypercall (after preemption) will continue at vcpu_index */
+                rc = -ERESTART;
+                break;
+            }
+        }
+        break;
+    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
+        for ( index = op->u.v.vcpu_index; index < op->u.v.nr_vcpus; index++ )
+        {
+            spin_lock_irqsave(&prv->lock, flags);
+            if ( copy_from_guest_offset(&local_sched,
+                          op->u.v.vcpus, index, 1) )
+            {
+                rc = -EFAULT;
+                spin_unlock_irqrestore(&prv->lock, flags);
+                break;
+            }
+            if ( local_sched.vcpuid >= d->max_vcpus ||
+                          d->vcpu[local_sched.vcpuid] == NULL )
+            {
+                rc = -EINVAL;
+                spin_unlock_irqrestore(&prv->lock, flags);
+                break;
+            }
+            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
+            period = MICROSECS(local_sched.s.rtds.period);
+            budget = MICROSECS(local_sched.s.rtds.budget);
+            if ( period > RTDS_MAX_PERIOD || budget < RTDS_MIN_BUDGET ||
+                          budget > period )
+            {
+                rc = -EINVAL;
+                spin_unlock_irqrestore(&prv->lock, flags);
+                break;
+            }
+
+            /* 
+             * We accept period/budget less than 100 us, but will warn users about
+             * the large scheduling overhead due to it
+             */
+            if ( period < MICROSECS(100) || budget < MICROSECS(100) )
+                printk("Warning: period/budget less than 100 micro-secs "
+                       "results in large scheduling overhead.\n");
+
+            svc->period = period;
+            svc->budget = budget;
+            spin_unlock_irqrestore(&prv->lock, flags);
+            if ( hypercall_preempt_check() )
+            {
+                op->u.v.vcpu_index = index + 1;
+                rc = -ERESTART;
+                break;
+            }
+        }
+        break;
     }
 
     return rc;
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08 10:34             ` Dario Faggioli
@ 2016-03-08 11:47               ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2016-03-08 11:47 UTC (permalink / raw)
  To: Dario Faggioli, Chong Li
  Cc: Chong Li, Sisu Xi, George Dunlap, xen-devel, Meng Xu, Dagaen Golomb

>>> On 08.03.16 at 11:34, <dario.faggioli@citrix.com> wrote:
> On Tue, 2016-03-08 at 02:10 -0700, Jan Beulich wrote:
>> > > > On 07.03.16 at 18:53, <dario.faggioli@citrix.com> wrote:
>> > On Mon, 2016-03-07 at 09:40 -0700, Jan Beulich wrote:
>> > > 
>> > IIRC, I was looking at how XEN_SYSCTL_pcitopoinfo is handled, for
>> > reference, and that has some guest_handle_is_null()==>EINVAL
>> > sanity
>> > checking (in xen/common/sysctl.c), which, when I thought about it,
>> > made
>> > sense to me.
>> > 
>> > My reasoning was, sort of:
>> >  1. if the handle is NULL, no point getting into the somewhat 
>> >     complicated logic of the while,
>> >  2. more accurate error reporting: as being passed a NULL handle
>> >     looked like something we could identify and call invalid, rather
>> > than 
>> >     waiting for the copy to fault.
>> I think the XEN_SYSCTL_pcitopoinfo was misguided in this respect,
>> cloning non applicable logic here which returns the number of needed
>> (array) elements in such a case for a few other operations.
>> 
> Sorry, I'm not sure I'm following: are you saying that, for _these_
> domctls, we should consider the handle being NULL as a way of the
> caller to ask for the size of the array?

No, I've tried to point out that _when_ such behavior is intended,
the special casing of a null handle is warranted. But not (normally)
in other cases.

>> > About the structure of the code, as said above, I do like
>> > how XEN_SYSCTL_pcitopoinfo ended up being handled, I think it is a
>> > great fit for this specific case and, comparing at both this and
>> > previous version, I do think this one is (bugs apart) looking
>> > better.
>> > 
>> > I'm sure I said this --long ago-- when discussing v4 (and maybe
>> > even
>> > previous versions), as well as more recently, when reviewing v5,
>> > and
>> > that's why Chong (finally! :-D) did it.
>> > 
>> > So, with the comment in place (and with bugs fixed :-)), are you
>> > (Jan)
>> > ok with this being done this way?
>>
>> Well, this _might_ be acceptable for "get" (since the caller
>> abandoning the sequence of calls prematurely is no problem),
>> but for "set" it looks less suitable, as similar abandoning would
>> leave the guest in some inconsistent / unintended state.
>>
> Are you referring to the fact that, with this interface, the caller has
> the chance to leave intentionally, or that it may happen that not all
> vcpus are updated because of some bug (still in the caller)?
> 
> Well, if it's intentional, or even if the caller is buggy in the sense
> that the code is written in a way that it misses updating some vcpus
> (and if the interface and the behavior is well documented, as you
> request), then one gets what he "wants" (and, in the latter case, it
> wouldn't be too hard to debug and figure out the issue, I think).

Intentional or buggy doesn't matter much - if we can avoid making
ourselves dependent upon user mode code behaving well, I think
we should.

> If it's for bugs (still in the caller) like copy_from_guest_offset()
> faulting because the array is too small, that can happen if using
> continuation too, can't it? And it would still leave the guest in
> similar inconsistent or unintended state, IMO...

True, albeit that's what error indications are for.

> One last point. Of course, since we are talking about bugs, the final
> status is not the one desired, but it's not inconsistent in the sense
> that the guest can't continue running, or crashes, or anything like
> that. It's something like:
>  - you want all 32 vcpus of guest A to have these new parameters
>  - due to a bug, you're (for instance) passing me an array with only 
>    16 vcpus' parameters
>  - result: only 16 vcpus will have the new parameters.

That was my understanding, yes.

>>  The
>> issue with XEN_SYSCTL_pcitopoinfo was, iirc, the lack of a
>> good way of encoding the continuation information, and while
>> that would seem applicable here too I'm not sure now whether
>> doing it the way it was done was the best choice. 
>>
> As far as I can remember and see, it was being done by means of an
> additional dedicated parameter in the handle (called ti->first_dev in
> that case). Then at some point, you said:
> 
> http://lists.xenproject.org/archives/html/xen-devel/2015-03/msg02623.html 
> "Considering this is a tools only interface, enforcing a not too high
>  limit on num_devs would seem better than this not really clean
>  continuation mechanism. The (tool stack) caller(s) can be made
>  iterate."
> 
> With which I did agree (and I still do :-)), as well as I agree on the
> fact that we basically are in the same situation here.
> 
> Chong tried doing things with continuations for a few rounds, including
> v5, which is here:
> http://lists.xenproject.org/archives/html/xen-devel/2016-02/msg00817.html 
> 
> and he also used an additional field (vcpu_index).
> 
> So, all this being said, my preference stays with the way the code looks
> in this version (with all the due commenting added). Of course,
> it's your preference that really matters here, me not being the
> maintainer of this code. :-)
> 
> So, how do you prefer Chong to continue doing this?

Well, modeling this after the other sysctl makes it something I
can't reasonably reject. Hence what I've been saying so far were
merely suggestions, as - other than you - I'm not convinced
anymore the model used there was a good one. The more that
here - afaict - no extra fields would be needed: The continuation
encoding would simply consist of op->u.v.nr_vcpus getting
suitably decreased and op->u.v.vcpus suitably bumped.

>> Clearly
>> stating (in the public interface header) that certain normally
>> input-only fields are volatile would allow the continuation to
>> be handled without tool stack assistance afaict.
>>
> Which (sorry, I'm not getting it again) I guess is something
> different/more than what was done in v5 (the relevant hunks of which
> I'm pasting at the bottom of this email, for convenience)?

The hunks you had included didn't cover the public interface
header changes. My point here is: Generally we demand that
fields in public interface structures not documented as OUT
would not get modified by a hypercall. Since I'm suggesting to
use both fields to encode continuation information, that rule
would get violated, which therefore needs spelling out in a
comment in the header.

Jan


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 1/4] xen: enable " Chong Li
  2016-03-07 12:59   ` Jan Beulich
@ 2016-03-08 19:09   ` Wei Liu
  2016-03-09 16:10     ` Dario Faggioli
  2016-03-10 22:35     ` Chong Li
  1 sibling, 2 replies; 51+ messages in thread
From: Wei Liu @ 2016-03-08 19:09 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, george.dunlap, dario.faggioli,
	xen-devel, Meng Xu, jbeulich, dgolomb

On Sun, Mar 06, 2016 at 11:55:55AM -0600, Chong Li wrote:
[...]
> @@ -1163,6 +1173,96 @@ rt_dom_cntl(
>          }
>          spin_unlock_irqrestore(&prv->lock, flags);
>          break;
> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> +        if ( guest_handle_is_null(op->u.v.vcpus) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +        while ( index < op->u.v.nr_vcpus )
> +        {
> +            if ( copy_from_guest_offset(&local_sched,
> +                          op->u.v.vcpus, index, 1) )

Indentation.

> +            {
> +                rc = -EFAULT;
> +                break;
> +            }
> +            if ( local_sched.vcpuid >= d->max_vcpus ||
> +                          d->vcpu[local_sched.vcpuid] == NULL )

Ditto.

> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +
> +            spin_lock_irqsave(&prv->lock, flags);
> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
> +            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
> +            local_sched.s.rtds.period = svc->period / MICROSECS(1);
> +            spin_unlock_irqrestore(&prv->lock, flags);
> +
> +            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
> +                    &local_sched, 1) )

Here as well.

You might want to check your editor settings. I won't point out other
places that need fixing.

> +            {
> +                rc = -EFAULT;
> +                break;
> +            }
> +            if ( (++index > 0x3f) && hypercall_preempt_check() )
> +                break;
> +        }
> +
> +        if ( !rc && (op->u.v.nr_vcpus != index) )
> +            op->u.v.nr_vcpus = index;
> +        break;
> +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
> +        if ( guest_handle_is_null(op->u.v.vcpus) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +        while ( index < op->u.v.nr_vcpus )
> +        {
> +            if ( copy_from_guest_offset(&local_sched,
> +                          op->u.v.vcpus, index, 1) )
> +            {
> +                rc = -EFAULT;
> +                break;
> +            }
> +            if ( local_sched.vcpuid >= d->max_vcpus ||
> +                          d->vcpu[local_sched.vcpuid] == NULL )
> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +
> +            period = MICROSECS(local_sched.s.rtds.period);
> +            budget = MICROSECS(local_sched.s.rtds.budget);
> +            if ( period > RTDS_MAX_PERIOD || budget < RTDS_MIN_BUDGET ||
> +                          budget > period || period < RTDS_MIN_PERIOD )
> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +
> +            /*
> +             * We accept period/budget less than 100 us, but will warn users about
> +             * the large scheduling overhead due to it
> +             */
> +            if ( period < MICROSECS(100) || budget < MICROSECS(100) )
> +                printk("Warning: period or budget set to less than 100us.\n"
> +                       "This may result in high scheduling overhead.\n");
> +

I'm not the maintainer, but I think having a printk here is a bad idea
because the toolstack can then DoS the hypervisor.


> +            spin_lock_irqsave(&prv->lock, flags);
> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
> +            svc->period = period;
> +            svc->budget = budget;
> +            spin_unlock_irqrestore(&prv->lock, flags);
> +

And this locking pattern seems sub-optimal. You might be able to move
the lock and unlock outside the while loop?

Note that I merely looked at this snippet. I don't know what other
constraints you have, so take what I said with a grain of salt.

Wei.


* Re: [PATCH v6 for Xen 4.7 2/4] libxc: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 2/4] libxc: " Chong Li
@ 2016-03-08 19:09   ` Wei Liu
  2016-03-08 19:32     ` Chong Li
  0 siblings, 1 reply; 51+ messages in thread
From: Wei Liu @ 2016-03-08 19:09 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	xen-devel, Meng Xu, dgolomb

On Sun, Mar 06, 2016 at 11:55:56AM -0600, Chong Li wrote:
> Add xc_sched_rtds_vcpu_get/set functions to interact with
> Xen to get/set a domain's per-VCPU parameters.
> 
> Signed-off-by: Chong Li <chong.li@wustl.edu>
> Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
> Signed-off-by: Sisu Xi <xisisu@gmail.com>
> 
> ---
> Changes on PATCH v5:
> 1) In xc_sched_rtds_vcpu_get/set, re-issueing the hypercall
> if it is preempted.
> 
> Changes on PATCH v4:
> 1) Minor modifications on the function parameters.
> 
> Changes on PATCH v2:
> 1) Minor modifications due to the change of struct xen_domctl_scheduler_op.
> 
> CC: <dario.faggioli@citrix.com>
> CC: <george.dunlap@eu.citrix.com>
> CC: <dgolomb@seas.upenn.edu>
> CC: <mengxu@cis.upenn.edu>
> CC: <wei.liu2@citrix.com>
> CC: <lichong659@gmail.com>
> ---
>  tools/libxc/include/xenctrl.h | 16 +++++++---
>  tools/libxc/xc_rt.c           | 68 +++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 80 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 01a6dda..9462271 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -888,11 +888,19 @@ int xc_sched_credit2_domain_get(xc_interface *xch,
>                                 struct xen_domctl_sched_credit2 *sdom);
>  
>  int xc_sched_rtds_domain_set(xc_interface *xch,
> -                            uint32_t domid,
> -                            struct xen_domctl_sched_rtds *sdom);
> +                               uint32_t domid,
> +                               struct xen_domctl_sched_rtds *sdom);
>  int xc_sched_rtds_domain_get(xc_interface *xch,
> -                            uint32_t domid,
> -                            struct xen_domctl_sched_rtds *sdom);
> +                               uint32_t domid,
> +                               struct xen_domctl_sched_rtds *sdom);

Please don't mix fixing indentation with new functionality. Besides, the
new indentation looks wrong.

> +int xc_sched_rtds_vcpu_set(xc_interface *xch,
> +                               uint32_t domid,
> +                               struct xen_domctl_schedparam_vcpu *vcpus,
> +                               uint32_t num_vcpus);
> +int xc_sched_rtds_vcpu_get(xc_interface *xch,
> +                               uint32_t domid,
> +                               struct xen_domctl_schedparam_vcpu *vcpus,
> +                               uint32_t num_vcpus);
>  

Indentation also looks wrong. Maybe you used tabs? Please use spaces
instead.

>  int
>  xc_sched_arinc653_schedule_set(
> diff --git a/tools/libxc/xc_rt.c b/tools/libxc/xc_rt.c
> index d59e5ce..4be9624 100644
> --- a/tools/libxc/xc_rt.c
> +++ b/tools/libxc/xc_rt.c
> @@ -62,3 +62,71 @@ int xc_sched_rtds_domain_get(xc_interface *xch,
>  
>      return rc;
>  }
> +

I only skimmed the rest, but it looks sensible.

Wei.


* Re: [PATCH v6 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 3/4] libxl: " Chong Li
@ 2016-03-08 19:12   ` Wei Liu
  2016-03-09  0:38     ` Chong Li
  2016-03-09 17:28     ` Dario Faggioli
  2016-03-09 17:09   ` Dario Faggioli
  1 sibling, 2 replies; 51+ messages in thread
From: Wei Liu @ 2016-03-08 19:12 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	ian.jackson, xen-devel, ian.campbell, Meng Xu, dgolomb

On Sun, Mar 06, 2016 at 11:55:57AM -0600, Chong Li wrote:
[...]
>  tools/libxl/libxl.c         | 326 ++++++++++++++++++++++++++++++++++++++++----
>  tools/libxl/libxl.h         |  37 +++++
>  tools/libxl/libxl_types.idl |  14 ++
>  3 files changed, 354 insertions(+), 23 deletions(-)
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index bd3aac8..4532e86 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -5770,6 +5770,207 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
>      return 0;
>  }
>  
> +static int sched_rtds_validate_params(libxl__gc *gc, int period,
> +                                    int budget, uint32_t *sdom_period,
> +                                    uint32_t *sdom_budget)

Indentation.

> +{
> +    int rc = 0;
> +    if (period != LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT) {
> +        if (period < 1) {
> +            LOG(ERROR, "VCPU period is out of range, "
> +                       "valid values are larger than or equal to 1");
> +            rc = ERROR_INVAL; /* error scheduling parameter */
> +            goto out;
> +        }
> +        *sdom_period = period;
> +    }
> +
> +    if (budget != LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT) {
> +        if (budget < 1) {
> +            LOG(ERROR, "VCPU budget is not set or out of range, "
> +                       "valid values are larger than or equal to 1");
> +            rc = ERROR_INVAL;
> +            goto out;
> +        }
> +        *sdom_budget = budget;
> +    }
> +
> +    if (*sdom_budget > *sdom_period) {
> +        LOG(ERROR, "VCPU budget must be smaller than "
> +                   "or equal to VCPU period");
> +        rc = ERROR_INVAL;
> +    }

This is a strange pattern. It does more than what its name suggests.

It seems this function just returns the vanilla values in period and
budget. It can be simplified by removing sdom_period and sdom_budget
altogether. Do you agree?

Then at callsites you set those values with two direct assignment:

   if (validate(period_value, budget_value) != 0) {
       error;
   }
   period = period_value;
   budget = budget_value;

> +out:
> +    return rc;
> +}
> +
> +/* Get the RTDS scheduling parameters of vcpu(s) */
> +static int sched_rtds_vcpu_get(libxl__gc *gc, uint32_t domid,
> +                               libxl_vcpu_sched_params *scinfo)
> +{
> +    uint32_t num_vcpus;
> +    int i, r, rc;
> +    xc_dominfo_t info;
> +    struct xen_domctl_schedparam_vcpu *vcpus;
> +
> +    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
> +    if (r < 0) {
> +        LOGE(ERROR, "getting domain info");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +
> +    num_vcpus = scinfo->num_vcpus ? scinfo->num_vcpus :
> +                                info.max_vcpu_id + 1;
> +
> +    GCNEW_ARRAY(vcpus, num_vcpus);
> +
> +    if (scinfo->num_vcpus > 0) {
> +        for (i = 0; i < num_vcpus; i++) {
> +            if (scinfo->vcpus[i].vcpuid < 0 ||
> +                scinfo->vcpus[i].vcpuid > info.max_vcpu_id) {
> +                LOG(ERROR, "VCPU index is out of range, "
> +                           "valid values are within range from 0 to %d",
> +                           info.max_vcpu_id);
> +                rc = ERROR_INVAL;
> +                goto out;
> +            }
> +            vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
> +        }
> +    } else
> +        for (i = 0; i < num_vcpus; i++)
> +            vcpus[i].vcpuid = i;
> +
> +    r = xc_sched_rtds_vcpu_get(CTX->xch, domid, vcpus, num_vcpus);
> +    if (r != 0) {
> +        LOGE(ERROR, "getting vcpu sched rtds");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +    scinfo->sched = LIBXL_SCHEDULER_RTDS;
> +    if (scinfo->num_vcpus == 0) {
> +        scinfo->num_vcpus = num_vcpus;
> +        scinfo->vcpus = libxl__calloc(NOGC, num_vcpus,
> +                                sizeof(libxl_sched_params));
> +    }
> +    for(i = 0; i < num_vcpus; i++) {
> +        scinfo->vcpus[i].period = vcpus[i].s.rtds.period;
> +        scinfo->vcpus[i].budget = vcpus[i].s.rtds.budget;
> +        scinfo->vcpus[i].vcpuid = vcpus[i].vcpuid;
> +    }
> +return r;

Just set rc = 0 here?

> +out:
> +    return rc;
> +}
> +
> +/* Set the RTDS scheduling parameters of vcpu(s) */
> +static int sched_rtds_vcpus_params_set(libxl__gc *gc, uint32_t domid,
> +                               const libxl_vcpu_sched_params *scinfo)

If this is indentation that you're not sure about, just leave it like
this.

> +{
> +    int r, rc;
> +    int i;
> +    uint16_t max_vcpuid;
> +    xc_dominfo_t info;
> +    struct xen_domctl_schedparam_vcpu *vcpus;
> +    uint32_t num_vcpus;
> +
> +    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
> +    if (r < 0) {
> +        LOGE(ERROR, "getting domain info");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +    max_vcpuid = info.max_vcpu_id;
> +
> +    if (scinfo->num_vcpus > 0) {
> +        num_vcpus = scinfo->num_vcpus;
> +        GCNEW_ARRAY(vcpus, num_vcpus);
> +        for (i = 0; i < num_vcpus; i++) {
> +            if (scinfo->vcpus[i].vcpuid < 0 ||
> +                scinfo->vcpus[i].vcpuid > max_vcpuid) {
> +                LOG(ERROR, "VCPU index is out of range, "
> +                           "valid values are within range from 0 to %d",
> +                           max_vcpuid);
> +                rc = ERROR_INVAL;
> +                goto out;
> +            }
> +            vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
> +

I would suggest you use a separate loop to validate the input; otherwise
you risk a partial update on invalid input.

> +            rc = sched_rtds_validate_params(gc,
> +                    scinfo->vcpus[i].period, scinfo->vcpus[i].budget,
> +                    &vcpus[i].s.rtds.period, &vcpus[i].s.rtds.budget);
> +            if (rc) {
> +                rc = ERROR_INVAL;
> +                goto out;
> +            }
> +        }
> +    } else {
> +        rc = ERROR_INVAL;
> +        goto out;
> +    }

The above hunk would be better if you write it like:

    if (scinfo->num_vcpus <= 0) {
        rc = ERROR_INVAL;
        goto out;
    }

    /* ... The other branch follows. No indentation needed. */
    num_vcpus = ...

> +
> +    r = xc_sched_rtds_vcpu_set(CTX->xch, domid,
> +        vcpus, num_vcpus);

Indentation.

> +    if (r != 0) {
> +        LOGE(ERROR, "setting vcpu sched rtds");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +return r;

Using rc = 0 is better.

> +out:
> +    return rc;
> +}
> +
> +/* Set the RTDS scheduling parameters of all vcpus of a domain */
> +static int sched_rtds_vcpus_params_set_all(libxl__gc *gc, uint32_t domid,
> +                               const libxl_vcpu_sched_params *scinfo)
> +{
> +    int r, rc;
> +    int i;
> +    uint16_t max_vcpuid;
> +    xc_dominfo_t info;
> +    struct xen_domctl_schedparam_vcpu *vcpus;
> +    uint32_t num_vcpus;
> +
> +    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
> +    if (r < 0) {
> +        LOGE(ERROR, "getting domain info");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +    max_vcpuid = info.max_vcpu_id;
> +
> +    if (scinfo->num_vcpus == 1) {
> +        num_vcpus = max_vcpuid + 1;
> +        GCNEW_ARRAY(vcpus, num_vcpus);
> +        if (sched_rtds_validate_params(gc, scinfo->vcpus[0].period,
> +            scinfo->vcpus[0].budget, &vcpus[0].s.rtds.period,
> +            &vcpus[0].s.rtds.budget)) {
> +            rc = ERROR_INVAL;
> +            goto out;
> +        }
> +        for (i = 0; i < num_vcpus; i++) {
> +            vcpus[i].vcpuid = i;
> +            vcpus[i].s.rtds.period = scinfo->vcpus[0].period;
> +            vcpus[i].s.rtds.budget = scinfo->vcpus[0].budget;
> +        }
> +    } else {
> +        rc = ERROR_INVAL;
> +        goto out;
> +    }
> +

Same here, just move the else branch up.

> +    r = xc_sched_rtds_vcpu_set(CTX->xch, domid,
> +        vcpus, num_vcpus);

Indentation.

> +    if (r != 0) {
> +        LOGE(ERROR, "setting vcpu sched rtds");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +return r;

rc = 0;

> +out:
> +    return rc;
> +}
> +
>  static int sched_rtds_domain_get(libxl__gc *gc, uint32_t domid,
>                                 libxl_domain_sched_params *scinfo)
>  {
> @@ -5802,30 +6003,9 @@ static int sched_rtds_domain_set(libxl__gc *gc, uint32_t domid,
>          LOGE(ERROR, "getting domain sched rtds");
>          return ERROR_FAIL;
>      }
> -
> -    if (scinfo->period != LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT) {
> -        if (scinfo->period < 1) {
> -            LOG(ERROR, "VCPU period is not set or out of range, "
> -                       "valid values are larger than 1");
> -            return ERROR_INVAL;
> -        }
> -        sdom.period = scinfo->period;
> -    }
> -
> -    if (scinfo->budget != LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT) {
> -        if (scinfo->budget < 1) {
> -            LOG(ERROR, "VCPU budget is not set or out of range, "
> -                       "valid values are larger than 1");
> -            return ERROR_INVAL;
> -        }
> -        sdom.budget = scinfo->budget;
> -    }
> -
> -    if (sdom.budget > sdom.period) {
> -        LOG(ERROR, "VCPU budget is larger than VCPU period, "
> -                   "VCPU budget should be no larger than VCPU period");
> +    if (sched_rtds_validate_params(gc, scinfo->period, scinfo->budget,
> +                             &sdom.period, &sdom.budget))
>          return ERROR_INVAL;
> -    }
>  
>      rc = xc_sched_rtds_domain_set(CTX->xch, domid, &sdom);
>      if (rc < 0) {
> @@ -5873,6 +6053,74 @@ int libxl_domain_sched_params_set(libxl_ctx *ctx, uint32_t domid,
>      return ret;
>  }
>  
> +int libxl_vcpu_sched_params_set(libxl_ctx *ctx, uint32_t domid,
> +                                  const libxl_vcpu_sched_params *scinfo)

Indentation.

> +{
> +    GC_INIT(ctx);
> +    libxl_scheduler sched = scinfo->sched;
> +    int rc;
> +
> +    if (sched == LIBXL_SCHEDULER_UNKNOWN)
> +        sched = libxl__domain_scheduler(gc, domid);
> +
> +    switch (sched) {
> +    case LIBXL_SCHEDULER_SEDF:
> +        LOG(ERROR, "SEDF scheduler no longer available");
> +        rc = ERROR_FEATURE_REMOVED;
> +        break;
> +    case LIBXL_SCHEDULER_CREDIT:
> +    case LIBXL_SCHEDULER_CREDIT2:
> +    case LIBXL_SCHEDULER_ARINC653:
> +        LOG(ERROR, "per-VCPU parameter setting not supported for this scheduler");
> +        rc = ERROR_INVAL;
> +        break;
> +    case LIBXL_SCHEDULER_RTDS:
> +        rc = sched_rtds_vcpus_params_set(gc, domid, scinfo);
> +        break;
> +    default:
> +        LOG(ERROR, "Unknown scheduler");
> +        rc = ERROR_INVAL;
> +        break;
> +    }
> +
> +    GC_FREE;
> +    return rc;
> +}
[...]
>  static int libxl__domain_s3_resume(libxl__gc *gc, int domid)
>  {
>      int rc = 0;
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 6b73848..f1d53d8 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -206,6 +206,17 @@
>  #define LIBXL_HAVE_DEVICE_MODEL_USER 1
>  
>  /*
> + * libxl_vcpu_sched_params is used to store per-vcpu params.
> + */
> +#define LIBXL_HAVE_VCPU_SCHED_PARAMS 1
> +
> +/*
> + * LIBXL_HAVE_SCHED_RTDS_VCPU_PARAMS indicates RTDS scheduler
> + * now supports per-vcpu settings.
> + */
> +#define LIBXL_HAVE_SCHED_RTDS_VCPU_PARAMS 1
> +
> +/*
>   * libxl_domain_build_info has the arm.gic_version field.
>   */
>  #define LIBXL_HAVE_BUILDINFO_ARM_GIC_VERSION 1
> @@ -1647,11 +1658,37 @@ int libxl_sched_credit_params_set(libxl_ctx *ctx, uint32_t poolid,
>  #define LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT -1
>  #define LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT    -1
>  
> +/* Per-VCPU parameters*/
> +#define LIBXL_SCHED_PARAM_VCPU_INDEX_DEFAULT   -1
> +
> +/* Get the per-domain scheduling parameters.
> + * For schedulers that support per-vcpu settings (e.g., RTDS),
> + * calling *_domain_get functions will get default scheduling
> + * parameters.
> + */
>  int libxl_domain_sched_params_get(libxl_ctx *ctx, uint32_t domid,
>                                    libxl_domain_sched_params *params);
> +
> +/* Set the per-domain scheduling parameters.
> + * For schedulers that support per-vcpu settings (e.g., RTDS),
> + * calling *_domain_set functions will set all vcpus with the same
> + * scheduling parameters.
> + */
>  int libxl_domain_sched_params_set(libxl_ctx *ctx, uint32_t domid,
>                                    const libxl_domain_sched_params *params);
>  
> +/* Get the per-vcpu scheduling parameters */
> +int libxl_vcpu_sched_params_get(libxl_ctx *ctx, uint32_t domid,
> +                                  libxl_vcpu_sched_params *params);
> +
> +/* Set the per-vcpu scheduling parameters */
> +int libxl_vcpu_sched_params_set(libxl_ctx *ctx, uint32_t domid,
> +                                  const libxl_vcpu_sched_params *params);
> +
> +/* Set the per-vcpu scheduling parameters of all vcpus of a domain*/
> +int libxl_vcpu_sched_params_set_all(libxl_ctx *ctx, uint32_t domid,
> +                                  const libxl_vcpu_sched_params *params);
> +
>  int libxl_send_trigger(libxl_ctx *ctx, uint32_t domid,
>                         libxl_trigger trigger, uint32_t vcpuid);
>  int libxl_send_sysrq(libxl_ctx *ctx, uint32_t domid, char sysrq);
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index cf3730f..7487fc9 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -378,6 +378,20 @@ libxl_domain_restore_params = Struct("domain_restore_params", [
>      ("stream_version", uint32, {'init_val': '1'}),
>      ])
>  
> +libxl_sched_params = Struct("sched_params",[
> +    ("vcpuid",       integer, {'init_val': 'LIBXL_SCHED_PARAM_VCPU_INDEX_DEFAULT'}),
> +    ("weight",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_WEIGHT_DEFAULT'}),
> +    ("cap",          integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_CAP_DEFAULT'}),
> +    ("period",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT'}),
> +    ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
> +    ("budget",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT'}),
> +    ])
> +
> +libxl_vcpu_sched_params = Struct("vcpu_sched_params",[
> +    ("sched",        libxl_scheduler),
> +    ("vcpus",        Array(libxl_sched_params, "num_vcpus")),
> +    ])
> +

IIRC Dario and you have come to an agreement on the data structure, so I
skimmed this part.

Overall I think the patch is moving in the right direction.  Just
that there are too many places where the indentation needs fixing.  Please
fix them in the next iteration. I really don't like holding back patches
just because of indentation issues.  Depending on the editor you use,
there might be some common runes that we can give you.

If anything in my comments is unclear, just ask for clarification.

Wei.

>  libxl_domain_sched_params = Struct("domain_sched_params",[
>      ("sched",        libxl_scheduler),
>      ("weight",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_WEIGHT_DEFAULT'}),
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 4/4] xl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 4/4] xl: " Chong Li
@ 2016-03-08 19:12   ` Wei Liu
  2016-03-08 21:24     ` Chong Li
  2016-03-09 14:09   ` Wei Liu
  1 sibling, 1 reply; 51+ messages in thread
From: Wei Liu @ 2016-03-08 19:12 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	xen-devel, Meng Xu, dgolomb

On Sun, Mar 06, 2016 at 11:55:58AM -0600, Chong Li wrote:
[...]
> @@ -6222,77 +6359,166 @@ int main_sched_rtds(int argc, char **argv)
>  {
>      const char *dom = NULL;
>      const char *cpupool = NULL;
> -    int period = 0; /* period is in microsecond */
> -    int budget = 0; /* budget is in microsecond */
> +    int *vcpus = (int *)xmalloc(sizeof(int)); /* IDs of VCPUs that change */
> +    int *periods = (int *)xmalloc(sizeof(int)); /* period is in microsecond */
> +    int *budgets = (int *)xmalloc(sizeof(int)); /* budget is in microsecond */
> +    int v_size = 1; /* size of vcpus array */
> +    int p_size = 1; /* size of periods array */
> +    int b_size = 1; /* size of budgets array */
> +    int v_index = 0; /* index in vcpus array */
> +    int p_index =0; /* index in periods array */
> +    int b_index =0; /* index for in budgets array */
>      bool opt_p = false;
>      bool opt_b = false;
> -    int opt, rc;
> +    bool opt_v = false;
> +    bool opt_all = false; /* output per-dom parameters */
> +    int opt, i;
> +    int rc = 0;
>      static struct option opts[] = {
>          {"domain", 1, 0, 'd'},
>          {"period", 1, 0, 'p'},
>          {"budget", 1, 0, 'b'},
> +        {"vcpuid",1, 0, 'v'},
>          {"cpupool", 1, 0, 'c'},
>          COMMON_LONG_OPTS
>      };
>  
> -    SWITCH_FOREACH_OPT(opt, "d:p:b:c:", opts, "sched-rtds", 0) {
> +    SWITCH_FOREACH_OPT(opt, "d:p:b:v:c", opts, "sched-rtds", 0) {
>      case 'd':
>          dom = optarg;
>          break;
>      case 'p':
> -        period = strtol(optarg, NULL, 10);
> +        if (p_index >= p_size) { /* periods array is full */
> +            p_size *= 2;
> +            periods = xrealloc(periods, p_size);
> +        }
> +        periods[p_index++] = strtol(optarg, NULL, 10);
>          opt_p = 1;
>          break;
>      case 'b':
> -        budget = strtol(optarg, NULL, 10);
> +        if (b_index >= b_size) { /* budgets array is full */
> +            b_size *= 2;
> +            budgets = xrealloc(budgets, b_size);
> +        }
> +        budgets[b_index++] = strtol(optarg, NULL, 10);
>          opt_b = 1;
>          break;
> +    case 'v':
> +        if (!strcmp(optarg, "all")) { /* get or set all vcpus of a domain */
> +            opt_all = 1;
> +            break;
> +        }
> +        if (v_index >= v_size) { /* vcpus array is full */
> +            v_size *= 2;
> +            vcpus = xrealloc(vcpus, v_size);
> +        }
> +        vcpus[v_index++] = strtol(optarg, NULL, 10);
> +        opt_v = 1;
> +        break;

I'm still not quite sure why this is written like this. What's the
expected syntax of this command? The hunk to patch xl manpage is very
terse...

Wei.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 2/4] libxc: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08 19:09   ` Wei Liu
@ 2016-03-08 19:32     ` Chong Li
  2016-03-08 19:36       ` Wei Liu
  0 siblings, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-08 19:32 UTC (permalink / raw)
  To: Wei Liu
  Cc: Chong Li, Sisu Xi, George Dunlap, dario.faggioli, xen-devel,
	Meng Xu, Dagaen Golomb

On Tue, Mar 8, 2016 at 1:09 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Sun, Mar 06, 2016 at 11:55:56AM -0600, Chong Li wrote:
>> Add xc_sched_rtds_vcpu_get/set functions to interact with
>> Xen to get/set a domain's per-VCPU parameters.
>>
>> Signed-off-by: Chong Li <chong.li@wustl.edu>
>> Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
>> Signed-off-by: Sisu Xi <xisisu@gmail.com>
>>
>> ---
>> Changes on PATCH v5:
>> 1) In xc_sched_rtds_vcpu_get/set, re-issueing the hypercall
>> if it is preempted.
>>
>> Changes on PATCH v4:
>> 1) Minor modifications on the function parameters.
>>
>> Changes on PATCH v2:
>> 1) Minor modifications due to the change of struct xen_domctl_scheduler_op.
>>
>> CC: <dario.faggioli@citrix.com>
>> CC: <george.dunlap@eu.citrix.com>
>> CC: <dgolomb@seas.upenn.edu>
>> CC: <mengxu@cis.upenn.edu>
>> CC: <wei.liu2@citrix.com>
>> CC: <lichong659@gmail.com>
>> ---
>>  tools/libxc/include/xenctrl.h | 16 +++++++---
>>  tools/libxc/xc_rt.c           | 68 +++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 80 insertions(+), 4 deletions(-)
>>
>> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
>> index 01a6dda..9462271 100644
>> --- a/tools/libxc/include/xenctrl.h
>> +++ b/tools/libxc/include/xenctrl.h
>> @@ -888,11 +888,19 @@ int xc_sched_credit2_domain_get(xc_interface *xch,
>>                                 struct xen_domctl_sched_credit2 *sdom);
>>
>>  int xc_sched_rtds_domain_set(xc_interface *xch,
>> -                            uint32_t domid,
>> -                            struct xen_domctl_sched_rtds *sdom);
>> +                               uint32_t domid,
>> +                               struct xen_domctl_sched_rtds *sdom);
>>  int xc_sched_rtds_domain_get(xc_interface *xch,
>> -                            uint32_t domid,
>> -                            struct xen_domctl_sched_rtds *sdom);
>> +                               uint32_t domid,
>> +                               struct xen_domctl_sched_rtds *sdom);
>
> Please don't mix fixing indentation with new functionality. Besides, the
> new indentation looks wrong.
>
>> +int xc_sched_rtds_vcpu_set(xc_interface *xch,
>> +                               uint32_t domid,
>> +                               struct xen_domctl_schedparam_vcpu *vcpus,
>> +                               uint32_t num_vcpus);
>> +int xc_sched_rtds_vcpu_get(xc_interface *xch,
>> +                               uint32_t domid,
>> +                               struct xen_domctl_schedparam_vcpu *vcpus,
>> +                               uint32_t num_vcpus);
>>
>
> Indentation also looks wrong. Maybe you used tab? Please use spaces
> instead.
>
I'm not using tabs. This is exactly what confuses me: the indentation
for these four *rtds* functions is the same as for the two *credit2*
functions above. I cannot find the indentation rules for function
declarations with many / long parameters.

Chong

-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 2/4] libxc: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08 19:32     ` Chong Li
@ 2016-03-08 19:36       ` Wei Liu
  0 siblings, 0 replies; 51+ messages in thread
From: Wei Liu @ 2016-03-08 19:36 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, dario.faggioli,
	xen-devel, Meng Xu, Dagaen Golomb

On Tue, Mar 08, 2016 at 01:32:58PM -0600, Chong Li wrote:
> On Tue, Mar 8, 2016 at 1:09 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> > On Sun, Mar 06, 2016 at 11:55:56AM -0600, Chong Li wrote:
> >> Add xc_sched_rtds_vcpu_get/set functions to interact with
> >> Xen to get/set a domain's per-VCPU parameters.
> >>
> >> Signed-off-by: Chong Li <chong.li@wustl.edu>
> >> Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
> >> Signed-off-by: Sisu Xi <xisisu@gmail.com>
> >>
> >> ---
> >> Changes on PATCH v5:
> >> 1) In xc_sched_rtds_vcpu_get/set, re-issueing the hypercall
> >> if it is preempted.
> >>
> >> Changes on PATCH v4:
> >> 1) Minor modifications on the function parameters.
> >>
> >> Changes on PATCH v2:
> >> 1) Minor modifications due to the change of struct xen_domctl_scheduler_op.
> >>
> >> CC: <dario.faggioli@citrix.com>
> >> CC: <george.dunlap@eu.citrix.com>
> >> CC: <dgolomb@seas.upenn.edu>
> >> CC: <mengxu@cis.upenn.edu>
> >> CC: <wei.liu2@citrix.com>
> >> CC: <lichong659@gmail.com>
> >> ---
> >>  tools/libxc/include/xenctrl.h | 16 +++++++---
> >>  tools/libxc/xc_rt.c           | 68 +++++++++++++++++++++++++++++++++++++++++++
> >>  2 files changed, 80 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> >> index 01a6dda..9462271 100644
> >> --- a/tools/libxc/include/xenctrl.h
> >> +++ b/tools/libxc/include/xenctrl.h
> >> @@ -888,11 +888,19 @@ int xc_sched_credit2_domain_get(xc_interface *xch,
> >>                                 struct xen_domctl_sched_credit2 *sdom);
> >>
> >>  int xc_sched_rtds_domain_set(xc_interface *xch,
> >> -                            uint32_t domid,
> >> -                            struct xen_domctl_sched_rtds *sdom);
> >> +                               uint32_t domid,
> >> +                               struct xen_domctl_sched_rtds *sdom);
> >>  int xc_sched_rtds_domain_get(xc_interface *xch,
> >> -                            uint32_t domid,
> >> -                            struct xen_domctl_sched_rtds *sdom);
> >> +                               uint32_t domid,
> >> +                               struct xen_domctl_sched_rtds *sdom);
> >
> > Please don't mix fixing indentation with new functionality. Besides, the
> > new indentation looks wrong.
> >
> >> +int xc_sched_rtds_vcpu_set(xc_interface *xch,
> >> +                               uint32_t domid,
> >> +                               struct xen_domctl_schedparam_vcpu *vcpus,
> >> +                               uint32_t num_vcpus);
> >> +int xc_sched_rtds_vcpu_get(xc_interface *xch,
> >> +                               uint32_t domid,
> >> +                               struct xen_domctl_schedparam_vcpu *vcpus,
> >> +                               uint32_t num_vcpus);
> >>
> >
> > Indentation also looks wrong. Maybe you used tab? Please use spaces
> > instead.
> >
> I'm not using tab. This is just what I'm confused with. The
> indentation for these
> four *rtds* functions is the same as the two *credit2* functions
> above. I can not
> find the indentation rules for function calls with many / long parameters.
> 


Use white space and stay aligned with previous line wherever you can.
So:

int xc_sched_rtds_vcpu_get(xc_interface *xch,
                           uint32_t domid,
                           struct xen_domctl_schedparam_vcpu *vcpus,
                           uint32_t num_vcpus);

I know there is inconsistency in the code base. I'm sorry it makes you
feel confused.

Wei.

> Chong
> 
> -- 
> Chong Li
> Department of Computer Science and Engineering
> Washington University in St.louis


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 4/4] xl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08 19:12   ` Wei Liu
@ 2016-03-08 21:24     ` Chong Li
  2016-03-09 14:01       ` Wei Liu
  0 siblings, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-08 21:24 UTC (permalink / raw)
  To: Wei Liu
  Cc: Chong Li, Sisu Xi, George Dunlap, dario.faggioli, xen-devel,
	Meng Xu, Dagaen Golomb

On Tue, Mar 8, 2016 at 1:12 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Sun, Mar 06, 2016 at 11:55:58AM -0600, Chong Li wrote:
> [...]
>> @@ -6222,77 +6359,166 @@ int main_sched_rtds(int argc, char **argv)
>>  {
>>      const char *dom = NULL;
>>      const char *cpupool = NULL;
>> -    int period = 0; /* period is in microsecond */
>> -    int budget = 0; /* budget is in microsecond */
>> +    int *vcpus = (int *)xmalloc(sizeof(int)); /* IDs of VCPUs that change */
>> +    int *periods = (int *)xmalloc(sizeof(int)); /* period is in microsecond */
>> +    int *budgets = (int *)xmalloc(sizeof(int)); /* budget is in microsecond */
>> +    int v_size = 1; /* size of vcpus array */
>> +    int p_size = 1; /* size of periods array */
>> +    int b_size = 1; /* size of budgets array */
>> +    int v_index = 0; /* index in vcpus array */
>> +    int p_index =0; /* index in periods array */
>> +    int b_index =0; /* index for in budgets array */
>>      bool opt_p = false;
>>      bool opt_b = false;
>> -    int opt, rc;
>> +    bool opt_v = false;
>> +    bool opt_all = false; /* output per-dom parameters */
>> +    int opt, i;
>> +    int rc = 0;
>>      static struct option opts[] = {
>>          {"domain", 1, 0, 'd'},
>>          {"period", 1, 0, 'p'},
>>          {"budget", 1, 0, 'b'},
>> +        {"vcpuid",1, 0, 'v'},
>>          {"cpupool", 1, 0, 'c'},
>>          COMMON_LONG_OPTS
>>      };
>>
>> -    SWITCH_FOREACH_OPT(opt, "d:p:b:c:", opts, "sched-rtds", 0) {
>> +    SWITCH_FOREACH_OPT(opt, "d:p:b:v:c", opts, "sched-rtds", 0) {
>>      case 'd':
>>          dom = optarg;
>>          break;
>>      case 'p':
>> -        period = strtol(optarg, NULL, 10);
>> +        if (p_index >= p_size) { /* periods array is full */
>> +            p_size *= 2;
>> +            periods = xrealloc(periods, p_size);
>> +        }
>> +        periods[p_index++] = strtol(optarg, NULL, 10);
>>          opt_p = 1;
>>          break;
>>      case 'b':
>> -        budget = strtol(optarg, NULL, 10);
>> +        if (b_index >= b_size) { /* budgets array is full */
>> +            b_size *= 2;
>> +            budgets = xrealloc(budgets, b_size);
>> +        }
>> +        budgets[b_index++] = strtol(optarg, NULL, 10);
>>          opt_b = 1;
>>          break;
>> +    case 'v':
>> +        if (!strcmp(optarg, "all")) { /* get or set all vcpus of a domain */
>> +            opt_all = 1;
>> +            break;
>> +        }
>> +        if (v_index >= v_size) { /* vcpus array is full */
>> +            v_size *= 2;
>> +            vcpus = xrealloc(vcpus, v_size);
>> +        }
>> +        vcpus[v_index++] = strtol(optarg, NULL, 10);
>> +        opt_v = 1;
>> +        break;
>
> I'm still not quite sure why this is written like this. What's the
> expected syntax of this command? The hunk to patch xl manpage is very
> terse...

We have three arrays here, vcpus[], periods[] and budgets[]. If the xl
command is like (more examples in the cover letter):

xl sched-rtds -d vm1 -v 3 -p 500 -b 200 -v 1 -p 600 -b 300 (set the
scheduling parameters of two vcpus of vm1)

then the three arrays would be:

vcpus:   [3, 1]
periods: [500, 600]
budgets: [200, 300]

What makes this code complicated is that the size of these three
arrays grows as the options are read. At the beginning, each array has
room for only one int. When an array becomes full, we double its size.

Chong



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08 19:12   ` Wei Liu
@ 2016-03-09  0:38     ` Chong Li
  2016-03-09 14:01       ` Wei Liu
  2016-03-09 17:28     ` Dario Faggioli
  1 sibling, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-09  0:38 UTC (permalink / raw)
  To: Wei Liu
  Cc: Chong Li, Sisu Xi, George Dunlap, dario.faggioli, Ian Jackson,
	xen-devel, Ian Campbell, Meng Xu, Dagaen Golomb

On Tue, Mar 8, 2016 at 1:12 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Sun, Mar 06, 2016 at 11:55:57AM -0600, Chong Li wrote:
> [...]
>>  tools/libxl/libxl.c         | 326 ++++++++++++++++++++++++++++++++++++++++----
>>  tools/libxl/libxl.h         |  37 +++++
>>  tools/libxl/libxl_types.idl |  14 ++
>>  3 files changed, 354 insertions(+), 23 deletions(-)
>>
>> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
>> index bd3aac8..4532e86 100644
>> --- a/tools/libxl/libxl.c
>> +++ b/tools/libxl/libxl.c
>> @@ -5770,6 +5770,207 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
>>      return 0;
>>  }
>>
>> +static int sched_rtds_validate_params(libxl__gc *gc, int period,
>> +                                    int budget, uint32_t *sdom_period,
>> +                                    uint32_t *sdom_budget)
>
>
> This is a strange pattern. It does more than what it's name suggests.
>
> It seems this function just returns the vanilla values in period and
> budget. It can be simplified by removing sdom_period and sdom_budget all
> together. Do you agree?

Yes.

>> +
>> +/* Get the RTDS scheduling parameters of vcpu(s) */
>> +static int sched_rtds_vcpu_get(libxl__gc *gc, uint32_t domid,
>> +                               libxl_vcpu_sched_params *scinfo)
>> +{
>> +    uint32_t num_vcpus;
>> +    int i, r, rc;
>> +    xc_dominfo_t info;
>> +    struct xen_domctl_schedparam_vcpu *vcpus;
>> +
>> +    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
>> +    if (r < 0) {
>> +        LOGE(ERROR, "getting domain info");
>> +        rc = ERROR_FAIL;
>> +        goto out;
>> +    }
>> +
>> +    num_vcpus = scinfo->num_vcpus ? scinfo->num_vcpus :
>> +                                info.max_vcpu_id + 1;
>> +
>> +    GCNEW_ARRAY(vcpus, num_vcpus);
>> +
>> +    if (scinfo->num_vcpus > 0) {
>> +        for (i = 0; i < num_vcpus; i++) {
>> +            if (scinfo->vcpus[i].vcpuid < 0 ||
>> +                scinfo->vcpus[i].vcpuid > info.max_vcpu_id) {
>> +                LOG(ERROR, "VCPU index is out of range, "
>> +                           "valid values are within range from 0 to %d",
>> +                           info.max_vcpu_id);
>> +                rc = ERROR_INVAL;
>> +                goto out;
>> +            }
>> +            vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
>> +        }
>> +    } else
>> +        for (i = 0; i < num_vcpus; i++)
>> +            vcpus[i].vcpuid = i;
>> +
>> +    r = xc_sched_rtds_vcpu_get(CTX->xch, domid, vcpus, num_vcpus);
>> +    if (r != 0) {
>> +        LOGE(ERROR, "getting vcpu sched rtds");
>> +        rc = ERROR_FAIL;
>> +        goto out;
>> +    }
>> +    scinfo->sched = LIBXL_SCHEDULER_RTDS;
>> +    if (scinfo->num_vcpus == 0) {
>> +        scinfo->num_vcpus = num_vcpus;
>> +        scinfo->vcpus = libxl__calloc(NOGC, num_vcpus,
>> +                                sizeof(libxl_sched_params));
>> +    }
>> +    for(i = 0; i < num_vcpus; i++) {
>> +        scinfo->vcpus[i].period = vcpus[i].s.rtds.period;
>> +        scinfo->vcpus[i].budget = vcpus[i].s.rtds.budget;
>> +        scinfo->vcpus[i].vcpuid = vcpus[i].vcpuid;
>> +    }
>> +return r;
>
> Just set rc = 0 here?

Ok.

>
>> +out:
>> +    return rc;
>> +}
>> +
>> +/* Set the RTDS scheduling parameters of vcpu(s) */
>> +static int sched_rtds_vcpus_params_set(libxl__gc *gc, uint32_t domid,
>> +                               const libxl_vcpu_sched_params *scinfo)
>> +{
>> +    int r, rc;
>> +    int i;
>> +    uint16_t max_vcpuid;
>> +    xc_dominfo_t info;
>> +    struct xen_domctl_schedparam_vcpu *vcpus;
>> +    uint32_t num_vcpus;
>> +
>> +    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
>> +    if (r < 0) {
>> +        LOGE(ERROR, "getting domain info");
>> +        rc = ERROR_FAIL;
>> +        goto out;
>> +    }
>> +    max_vcpuid = info.max_vcpu_id;
>> +
>> +    if (scinfo->num_vcpus > 0) {
>> +        num_vcpus = scinfo->num_vcpus;
>> +        GCNEW_ARRAY(vcpus, num_vcpus);
>> +        for (i = 0; i < num_vcpus; i++) {
>> +            if (scinfo->vcpus[i].vcpuid < 0 ||
>> +                scinfo->vcpus[i].vcpuid > max_vcpuid) {
>> +                LOG(ERROR, "VCPU index is out of range, "
>> +                           "valid values are within range from 0 to %d",
>> +                           max_vcpuid);
>> +                rc = ERROR_INVAL;
>> +                goto out;
>> +            }
>> +            vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
>> +
>
> I would suggest you use a separate loop to validate the input, otherwise
> you risk getting input partial success.

What do you mean by "partial success"?
Do you suggest validating the entire input first, and then creating &&
setting the vcpus array?

>
>> +            rc = sched_rtds_validate_params(gc,
>> +                    scinfo->vcpus[i].period, scinfo->vcpus[i].budget,
>> +                    &vcpus[i].s.rtds.period, &vcpus[i].s.rtds.budget);
>> +            if (rc) {
>> +                rc = ERROR_INVAL;
>> +                goto out;
>> +            }
>> +        }
>> +    } else {
>> +        rc = ERROR_INVAL;
>> +        goto out;
>> +    }
>

Chong



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-09  0:38     ` Chong Li
@ 2016-03-09 14:01       ` Wei Liu
  0 siblings, 0 replies; 51+ messages in thread
From: Wei Liu @ 2016-03-09 14:01 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, dario.faggioli,
	Ian Jackson, xen-devel, Ian Campbell, Meng Xu, Dagaen Golomb

On Tue, Mar 08, 2016 at 06:38:54PM -0600, Chong Li wrote:
> >> +    if (scinfo->num_vcpus > 0) {
> >> +        num_vcpus = scinfo->num_vcpus;
> >> +        GCNEW_ARRAY(vcpus, num_vcpus);
> >> +        for (i = 0; i < num_vcpus; i++) {
> >> +            if (scinfo->vcpus[i].vcpuid < 0 ||
> >> +                scinfo->vcpus[i].vcpuid > max_vcpuid) {
> >> +                LOG(ERROR, "VCPU index is out of range, "
> >> +                           "valid values are within range from 0 to %d",
> >> +                           max_vcpuid);
> >> +                rc = ERROR_INVAL;
> >> +                goto out;
> >> +            }
> >> +            vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
> >> +
> >
> > I would suggest you use a separate loop to validate the input, otherwise
> > you risk getting input partial success.
> 
> What do you mean by "partial success"?

You discover error half way through the array. All vcpus before the
error occurs would have been set to new parameters.

> Do you suggest to validate the entire input first, and then create &&
> set the vcpus array?
> 

Yes.

Wei.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 4/4] xl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08 21:24     ` Chong Li
@ 2016-03-09 14:01       ` Wei Liu
  0 siblings, 0 replies; 51+ messages in thread
From: Wei Liu @ 2016-03-09 14:01 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, dario.faggioli,
	xen-devel, Meng Xu, Dagaen Golomb

On Tue, Mar 08, 2016 at 03:24:35PM -0600, Chong Li wrote:
[...]
> >> +    case 'v':
> >> +        if (!strcmp(optarg, "all")) { /* get or set all vcpus of a domain */
> >> +            opt_all = 1;
> >> +            break;
> >> +        }
> >> +        if (v_index >= v_size) { /* vcpus array is full */
> >> +            v_size *= 2;
> >> +            vcpus = xrealloc(vcpus, v_size);
> >> +        }
> >> +        vcpus[v_index++] = strtol(optarg, NULL, 10);
> >> +        opt_v = 1;
> >> +        break;
> >
> > I'm still not quite sure why this is written like this. What's the
> > expected syntax of this command? The hunk to patch xl manpage is very
> > terse...
> 
> We have three arrays here, vcpus[], periods[] and budgets[]. If the xl
> command is like (more examples at cover letter):
> 
> xl sched-rtds -d vm1 -v 3 -p 500 -b 200 -v 1 -p 600 -b 300 (set the
> scheduling parameters of two vcpus of vm1)
> 
> then, the three arrays would be like:
> 
> vcpus: [3, 1]
> periods: [500, 600]
> budgets: [200, 300]
> 
> What makes this code complicated is the size of these three arrays
> grows along with the
> reading of OPTS. At the beginning, all three arrays have the size for
> only one int. When
> one array becomes full, we double its size.

OK. In that case, please:

1. Introduce a helper function to double the size of the array.
2. Improve manual a bit, maybe with one or two examples.

Wei.

> 
> Chong
> 
> 
> 
> -- 
> Chong Li
> Department of Computer Science and Engineering
> Washington University in St.louis


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 4/4] xl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 4/4] xl: " Chong Li
  2016-03-08 19:12   ` Wei Liu
@ 2016-03-09 14:09   ` Wei Liu
  1 sibling, 0 replies; 51+ messages in thread
From: Wei Liu @ 2016-03-09 14:09 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	xen-devel, Meng Xu, dgolomb

I saw quite a few coding style issues. Please fix them in the next
iteration.

On Sun, Mar 06, 2016 at 11:55:58AM -0600, Chong Li wrote:
[...]
>  
> -    if (cpupool && (dom || opt_p || opt_b)) {
> +    if (cpupool && (dom || opt_p || opt_b || opt_v || opt_all)) {
>          fprintf(stderr, "Specifying a cpupool is not allowed with "
>                  "other options.\n");
> -        return EXIT_FAILURE;
> +        rc = 1;

The exit value should be EXIT_FAILURE.

> +        goto out;
>      }
> -    if (!dom && (opt_p || opt_b)) {
> -        fprintf(stderr, "Must specify a domain.\n");
> -        return EXIT_FAILURE;
> +    if (!dom && (opt_p || opt_b || opt_v)) {
> +        fprintf(stderr, "Missing parameters.\n");
> +        rc = 1;
> +        goto out;
>      }
[...]
>  
> -            rc = sched_domain_set(domid, &scinfo);
> -            libxl_domain_sched_params_dispose(&scinfo);
> -            if (rc)
> -                return EXIT_FAILURE;
> +            libxl_vcpu_sched_params_dispose(&scinfo);
> +            if (rc) {
> +                rc = -rc;
> +                goto out;
> +            }
>          }
>      }
>  

Set rc to EXIT_SUCCESS here.

Wei.

> -    return EXIT_SUCCESS;
> +out:
> +    free(vcpus);
> +    free(periods);
> +    free(budgets);
> +    return rc;
>  }
>  
>  int main_domid(int argc, char **argv)
> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> index fdc1ac6..c68b656 100644
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -268,10 +268,12 @@ struct cmd_spec cmd_table[] = {
>      { "sched-rtds",
>        &main_sched_rtds, 0, 1,
>        "Get/set rtds scheduler parameters",
> -      "[-d <Domain> [-p[=PERIOD]] [-b[=BUDGET]]]",
> -      "-d DOMAIN, --domain=DOMAIN     Domain to modify\n"
> -      "-p PERIOD, --period=PERIOD     Period (us)\n"
> -      "-b BUDGET, --budget=BUDGET     Budget (us)\n"
> +      "[-d <Domain> [-v[=VCPUID]] [-p[=PERIOD]] [-b[=BUDGET]]]",
> +      "-d DOMAIN, --domain=DOMAIN 	Domain to modify\n"
> +      "-v VCPUID/all, --vcpuid=VCPUID/all    VCPU to modify or output;\n"
> +      "                               Using '-v all' to modify/output all vcpus\n"
> +      "-p PERIOD, --period=PERIOD 	Period (us)\n"
> +      "-b BUDGET, --budget=BUDGET 	Budget (us)\n"
>      },
>      { "domid",
>        &main_domid, 0, 0,
> -- 
> 1.9.1
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08 19:09   ` Wei Liu
@ 2016-03-09 16:10     ` Dario Faggioli
  2016-03-09 16:38       ` Jan Beulich
  2016-03-10 22:35     ` Chong Li
  1 sibling, 1 reply; 51+ messages in thread
From: Dario Faggioli @ 2016-03-09 16:10 UTC (permalink / raw)
  To: Wei Liu, Chong Li
  Cc: Chong Li, Sisu Xi, george.dunlap, xen-devel, Meng Xu, jbeulich, dgolomb


[-- Attachment #1.1: Type: text/plain, Size: 1128 bytes --]

On Tue, 2016-03-08 at 19:09 +0000, Wei Liu wrote:
> On Sun, Mar 06, 2016 at 11:55:55AM -0600, Chong Li wrote:
> 
> > +            spin_lock_irqsave(&prv->lock, flags);
> > +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
> > +            svc->period = period;
> > +            svc->budget = budget;
> > +            spin_unlock_irqrestore(&prv->lock, flags);
> > +
> And this locking pattern seems sub-optimal. You might be able to move
> the lock and unlock outside the while loop?
> 
Yes, unless I'm missing something, that looks possible to me, and would
save a lot of acquire/release ping pong on the lock.

And yet, I'm not sure doing (for large guests) batches of 64 iterations
with (as of now) interrupts disabled is a good idea.

I'll think about this...
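For illustration, the two locking shapes being discussed can be sketched in plain userspace C (this is NOT Xen code: fake_lock stands in for prv->lock, fake_vcpu for the rt_vcpu/svc being updated, and the acquisition counter makes the "ping pong" observable; all names are hypothetical):

```c
/* Userspace sketch of the two locking shapes under discussion. */

struct fake_lock { unsigned acquisitions; };
struct fake_vcpu { unsigned period, budget; };

static void fake_lock_acquire(struct fake_lock *l) { l->acquisitions++; }
static void fake_lock_release(struct fake_lock *l) { (void)l; }

/* Shape 1 (as in the v6 patch): take and drop the lock on every iteration. */
unsigned set_params_lock_per_iter(struct fake_lock *l, struct fake_vcpu *v,
                                  unsigned n, unsigned period, unsigned budget)
{
    for (unsigned i = 0; i < n; i++) {
        fake_lock_acquire(l);
        v[i].period = period;
        v[i].budget = budget;
        fake_lock_release(l);
    }
    return l->acquisitions;
}

/* Shape 2 (the suggestion): hoist the lock outside the loop.  In Xen this
 * is only safe if nothing in the loop body can fault or sleep -- guest
 * memory accesses inside the body are the complication. */
unsigned set_params_lock_once(struct fake_lock *l, struct fake_vcpu *v,
                              unsigned n, unsigned period, unsigned budget)
{
    fake_lock_acquire(l);
    for (unsigned i = 0; i < n; i++) {
        v[i].period = period;
        v[i].budget = budget;
    }
    fake_lock_release(l);
    return l->acquisitions;
}
```

For an n-VCPU update, shape 1 takes the lock n times where shape 2 takes it once, which is the acquire/release saving at stake.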

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-09 16:10     ` Dario Faggioli
@ 2016-03-09 16:38       ` Jan Beulich
  2016-03-13 17:05         ` Chong Li
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-03-09 16:38 UTC (permalink / raw)
  To: Dario Faggioli, Wei Liu, Chong Li
  Cc: Chong Li, Sisu Xi, george.dunlap, xen-devel, Meng Xu, dgolomb

>>> On 09.03.16 at 17:10, <dario.faggioli@citrix.com> wrote:
> On Tue, 2016-03-08 at 19:09 +0000, Wei Liu wrote:
>> On Sun, Mar 06, 2016 at 11:55:55AM -0600, Chong Li wrote:
>> 
>> > +            spin_lock_irqsave(&prv->lock, flags);
>> > +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>> > +            svc->period = period;
>> > +            svc->budget = budget;
>> > +            spin_unlock_irqrestore(&prv->lock, flags);
>> > +
>> And this locking pattern seems sub-optimal. You might be able to move
>> the lock and unlock outside the while loop?
>> 
> Yes, unless I'm missing something, that looks possible to me, and would
> save a lot of acquire/release ping pong on the lock.

Well, there are guest memory accesses (which may fault) in the
loop body. While this may work right now, I don't think doing so
is a good idea.

> And yet, I'm not sure doing (for large guests) batches of 64 iterations
> with (as of now) interrupts disabled is a good idea.

This I'd be much less worried about as long as what's inside the
loop body was a reasonably short code sequence.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 3/4] libxl: " Chong Li
  2016-03-08 19:12   ` Wei Liu
@ 2016-03-09 17:09   ` Dario Faggioli
  2016-03-09 17:28     ` Dario Faggioli
  1 sibling, 1 reply; 51+ messages in thread
From: Dario Faggioli @ 2016-03-09 17:09 UTC (permalink / raw)
  To: Chong Li, xen-devel
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, ian.jackson, Meng Xu,
	dgolomb


[-- Attachment #1.1: Type: text/plain, Size: 2474 bytes --]

On the whole series, you are Cc-ing ian.campbell@eu.citrix.com, who
is no longer a tools maintainer.

Please, double check by looking at MAINTAINERS and/or with
./scripts/get_maintainer.pl.

While fiddling with the Cc-list, can you please try to arrange things
in such a way that each person is Cc-ed only to the subset of the
series that he would reasonably care about.

For instance, Jan is likely not going to look at libxl and xl stuff, so
no need bothering him with that. Similarly, IanJ wouldn't probably look
at hypervisor stuff.

If you are in doubt, stick to MAINTAINERS, with the exception that, if
some random person commented on vX, he should be Cc-ed to vX+1, where
the comments are addressed, no matter what he/she maintains.

There are other exceptions, actually. For example, it's ok to Cc me to
the whole series. This is hard to "set in stone", so again, if in
doubt, follow the rule above... If people are interested in a patch
that they happen not to be Cc-ed to, they'll find it via the mailing
list.

On Sun, 2016-03-06 at 11:55 -0600, Chong Li wrote:
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index bd3aac8..4532e86 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -5770,6 +5770,207 @@ static int sched_credit2_domain_set(libxl__gc
> *gc, uint32_t domid,
>      return 0;
>  }
>  
> +static int sched_rtds_validate_params(libxl__gc *gc, int period,
> +                                    int budget, uint32_t
> *sdom_period,
> +                                    uint32_t *sdom_budget)
> +{
> +    int rc = 0;
>
Empty line here (between local variables and code).

Also, libxl coding style says:

  * If the function is to return a libxl error value, `rc' is
    used to contain the error code, but it is NOT initialised:
            int rc;

This, in addition to all that Wei pointed out, means that, although
things are certainly improving, a non-negligible amount of coding
style issues still needs fixing.
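As a concrete illustration of the convention quoted above, a minimal sketch (the function, its bounds, and SKETCH_ERROR_INVAL are simplified stand-ins for the patch's validator and libxl's ERROR_INVAL, not the real code):

```c
/* libxl-style error handling: `rc' is deliberately NOT initialised,
 * so every path must set it explicitly before reaching `out'. */

#define SKETCH_ERROR_INVAL (-6)   /* stand-in; real code uses libxl's ERROR_INVAL */

static int sketch_validate_params(int period, int budget)
{
    int rc;                       /* NOT "int rc = 0;" */

    if (period <= 0 || budget <= 0 || budget > period) {
        rc = SKETCH_ERROR_INVAL;  /* every path sets rc explicitly */
        goto out;
    }

    rc = 0;
out:
    return rc;
}
```

Note also the empty line between the local variable and the first statement, which is the other point raised above.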

(I'll elaborate more, and more in general, in another email.)

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-09 17:09   ` Dario Faggioli
@ 2016-03-09 17:28     ` Dario Faggioli
  0 siblings, 0 replies; 51+ messages in thread
From: Dario Faggioli @ 2016-03-09 17:28 UTC (permalink / raw)
  To: Chong Li, xen-devel
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, ian.jackson, Meng Xu,
	dgolomb


[-- Attachment #1.1: Type: text/plain, Size: 1429 bytes --]

On Wed, 2016-03-09 at 18:09 +0100, Dario Faggioli wrote:
> While fiddling with the Cc-list, can you please try to arrange things
> in such a way that each person is Cc-ed only to the subset of the
> series that he would reasonably care about.
> 
> For instance, Jan is likely not going to look at libxl and xl stuff,
> so
> no need bothering him with that. Similarly, IanJ wouldn't probably
> look
> at hypervisor stuff.
> 
> If you are in doubt, stick to MAINTAINERS, with the exception that,
> if
> some random person commented on vX, he should be Cc-ed to vX+1, where
> the comments are addressed, no matter what he/she maintains.
> 
> There are other exceptions, actually. For example, it's ok to Cc me
> to
> the whole series. This is hard to "set in stone", so again, if in
> doubt, follow the rule above... If people are interested in a patch
> that they happen not to be Cc-ed to, they'll find it via the mailing
> list.
> 
To be fair, you are actually doing this already, and it was me that had
the wrong impression that you weren't.

So, good work with that, and sorry (you and everyone else) for the
noise on this. :-)

Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08 19:12   ` Wei Liu
  2016-03-09  0:38     ` Chong Li
@ 2016-03-09 17:28     ` Dario Faggioli
  2016-03-09 21:57       ` Chong Li
  1 sibling, 1 reply; 51+ messages in thread
From: Dario Faggioli @ 2016-03-09 17:28 UTC (permalink / raw)
  To: Wei Liu, Chong Li
  Cc: Chong Li, Sisu Xi, george.dunlap, Ian Jackson, xen-devel,
	Meng Xu, dgolomb


[-- Attachment #1.1: Type: text/plain, Size: 1896 bytes --]

On Tue, 2016-03-08 at 19:12 +0000, Wei Liu wrote:

> Overall I think the patch is moving towards the right direction.  
>
I also think so. Thanks a lot Wei for the review, BTW.

> Just
> that there are too many places where indentation need fixing.  Please
> fix them in the next iteration. I really don't like holding back
> patches
> just because of indentation issues.
>
Exactly. Trying to elaborate a bit more, this series does not include
any complex algorithm or similar, so one may think that it is not that
hard to review. But it is indeed complex and hard to review, because
the patches are rather big and because the API and the command line
syntax we want to support are complex.

If there are too many style issues, any review will likely end up
focusing mostly, if not only, on them, for various reasons. E.g., when
one finds a style issue, avoiding commenting on it (e.g., because one
wants to focus on "more important" aspects) means risking forgetting
about it and, in the end, letting it hit the repository (if others also
miss it or do the same). Also, we're all used to looking at code that
(well, mostly :-D) conforms to coding style, so it's harder to focus on
code that does not. And more.

Add to this that the most difficult part of the tools side of this
series (like API and data structures) is actually ok.

So, Chong, for us to be able to quickly and effectively help you
forward, we need to ask you to do your best and get rid of (ideally)
all coding style problems.

Once that's done, we're not far from calling this all a done deal. :-)

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-09 17:28     ` Dario Faggioli
@ 2016-03-09 21:57       ` Chong Li
  0 siblings, 0 replies; 51+ messages in thread
From: Chong Li @ 2016-03-09 21:57 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, Ian Jackson,
	xen-devel, Meng Xu, Dagaen Golomb

On Wed, Mar 9, 2016 at 11:28 AM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> On Tue, 2016-03-08 at 19:12 +0000, Wei Liu wrote:
>
>> Overall I think the patch is moving towards the right direction.
>>
> I also think so. Thanks a lot Wei for the review, BTW.
>
>> Just
>> that there are too many places where indentation need fixing.  Please
>> fix them in the next iteration. I really don't like holding back
>> patches
>> just because of indentation issues.
>>
> Exactly. Trying to elaborate a bit more, this series does not include
> any complex algorithm or similar, so one may think that it is not that
> hard to review. But it is indeed complex and hard to review, because
> the patches are rather big and because the API and the command line
> syntax we want to support are complex.
>
> If there are too many style issues, any review will likely end up
> focusing mostly, if not only, on them, for various reasons. E.g., when
> one finds a style issue, avoiding commenting on it (e.g., because one
> wants to focus on "more important" aspects) means risking forgetting
> about it and, in the end, letting it hit the repository (if others also
> miss it or do the same). Also, we're all used to looking at code that
> (well, mostly :-D) conforms to coding style, so it's harder to focus on
> code that does not. And more.
>
> Add to this that the most difficult part of the tools side of this
> series (like API and data structures) is actually ok.
>
> So, Chong, for us to be able to quickly and effectively help you
> forward, we need to ask you to do your best and get rid of (ideally)
> all coding style problems.
>
> Once that's done, we're not far from calling this all a done deal. :-)
>
> Thanks and Regards,
> Dario

Wei and Dario, thanks for your help reviewing this post. I'm now
working on the coding style issues and will send out the next version
once they are all sorted out.

Chong

> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-08 19:09   ` Wei Liu
  2016-03-09 16:10     ` Dario Faggioli
@ 2016-03-10 22:35     ` Chong Li
  2016-03-10 22:50       ` Wei Liu
  1 sibling, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-10 22:35 UTC (permalink / raw)
  To: Wei Liu
  Cc: Chong Li, Sisu Xi, George Dunlap, dario.faggioli, xen-devel,
	Meng Xu, Jan Beulich, Dagaen Golomb

On Tue, Mar 8, 2016 at 1:09 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Sun, Mar 06, 2016 at 11:55:55AM -0600, Chong Li wrote:
> [...]
>> @@ -1163,6 +1173,96 @@ rt_dom_cntl(
>>          }
>>          spin_unlock_irqrestore(&prv->lock, flags);
>>          break;
>> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:

>> +
>> +            period = MICROSECS(local_sched.s.rtds.period);
>> +            budget = MICROSECS(local_sched.s.rtds.budget);
>> +            if ( period > RTDS_MAX_PERIOD || budget < RTDS_MIN_BUDGET ||
>> +                          budget > period || period < RTDS_MIN_PERIOD )
>> +            {
>> +                rc = -EINVAL;
>> +                break;
>> +            }
>> +
>> +            /*
>> +             * We accept period/budget less than 100 us, but will warn users about
>> +             * the large scheduling overhead due to it
>> +             */
>> +            if ( period < MICROSECS(100) || budget < MICROSECS(100) )
>> +                printk("Warning: period or budget set to less than 100us.\n"
>> +                       "This may result in high scheduling overhead.\n");
>> +
>
> I'm not the maintainer, but I think having printk here is a bad idea
> because the toolstack can then DoS the hypervisor.
>
>
> Wei.

So what function should I use here? I see many LOG() calls in libxl,
but I'm not sure whether that can be used here.

Chong


-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-10 22:35     ` Chong Li
@ 2016-03-10 22:50       ` Wei Liu
  2016-03-14  9:07         ` Dario Faggioli
  0 siblings, 1 reply; 51+ messages in thread
From: Wei Liu @ 2016-03-10 22:50 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, dario.faggioli,
	xen-devel, Meng Xu, Jan Beulich, Dagaen Golomb

On Thu, Mar 10, 2016 at 04:35:30PM -0600, Chong Li wrote:
> On Tue, Mar 8, 2016 at 1:09 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> > On Sun, Mar 06, 2016 at 11:55:55AM -0600, Chong Li wrote:
> > [...]
> >> @@ -1163,6 +1173,96 @@ rt_dom_cntl(
> >>          }
> >>          spin_unlock_irqrestore(&prv->lock, flags);
> >>          break;
> >> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> 
> >> +
> >> +            period = MICROSECS(local_sched.s.rtds.period);
> >> +            budget = MICROSECS(local_sched.s.rtds.budget);
> >> +            if ( period > RTDS_MAX_PERIOD || budget < RTDS_MIN_BUDGET ||
> >> +                          budget > period || period < RTDS_MIN_PERIOD )
> >> +            {
> >> +                rc = -EINVAL;
> >> +                break;
> >> +            }
> >> +
> >> +            /*
> >> +             * We accept period/budget less than 100 us, but will warn users about
> >> +             * the large scheduling overhead due to it
> >> +             */
> >> +            if ( period < MICROSECS(100) || budget < MICROSECS(100) )
> >> +                printk("Warning: period or budget set to less than 100us.\n"
> >> +                       "This may result in high scheduling overhead.\n");
> >> +
> >
> > I'm not the maintainer, but I think having printk here is a bad idea
> > because the toolstack can then DoS the hypervisor.
> >
> >
> > Wei.
> 
> So what function should I use here? I see many LOG() calls in libxl,
> but I'm not sure whether that can be used here.
> 

IMHO you just don't log anything here. System administrator probably
won't see it anyway.

If you think this warning is really necessary, move it to xl.

Wei.

> Chong
> 
> 
> -- 
> Chong Li
> Department of Computer Science and Engineering
> Washington University in St.louis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-09 16:38       ` Jan Beulich
@ 2016-03-13 17:05         ` Chong Li
  2016-03-14  8:37           ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-13 17:05 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, Dario Faggioli,
	xen-devel, Meng Xu, Dagaen Golomb

On Wed, Mar 9, 2016 at 10:38 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 09.03.16 at 17:10, <dario.faggioli@citrix.com> wrote:
>> On Tue, 2016-03-08 at 19:09 +0000, Wei Liu wrote:
>>> On Sun, Mar 06, 2016 at 11:55:55AM -0600, Chong Li wrote:
>>>
>>> > +            spin_lock_irqsave(&prv->lock, flags);
>>> > +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>>> > +            svc->period = period;
>>> > +            svc->budget = budget;
>>> > +            spin_unlock_irqrestore(&prv->lock, flags);
>>> > +
>>> And this locking pattern seems sub-optimal. You might be able to move
>>> the lock and unlock outside the while loop?
>>>
>> Yes, unless I'm missing something, that looks possible to me, and would
>> save a lot of acquire/release ping pong on the lock.
>
> Well, there are guest memory accesses (which may fault) in the
> loop body. While this may work right now, I don't think doing so
> is a good idea.

So I still keep my design here?

Dario, Jan and Wei,

I almost finished a new version, but since this part is critical for
the whole patch, let me summarize the feedback here. Please correct
me if my understanding is wrong.

1) We don't need "guest_handle_is_null()" check, because null handle
could be used in some special cases. And normally, handle is checked
by copy_from(to)_guest* functions.

2) In domctl.h, add an explanation for nr_vcpus, because it is used
both as an IN and an OUT parameter.

3) Use printk(XENLOG_G_WARNING ...) here, because of its rate limit feature.

4) Do I still keep the spin_lock inside the loop body?

Chong



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-13 17:05         ` Chong Li
@ 2016-03-14  8:37           ` Jan Beulich
  2016-03-14  9:10             ` Dario Faggioli
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-03-14  8:37 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, Dario Faggioli,
	xen-devel, Meng Xu, Dagaen Golomb

>>> On 13.03.16 at 18:05, <lichong659@gmail.com> wrote:
> So I still keep my design here?
> 
> Dario, Jan and Wei,

Afaic:

> I almost finished a new version, but since this part is critical for
> the whole patch, let me summarize the feedback here. Please correct
> me if my understanding is wrong.
> 
> 1) We don't need "guest_handle_is_null()" check, because null handle
> could be used in some special cases. And normally, handle is checked
> by copy_from(to)_guest* functions.

Yes.

> 2) In domctl.h, add an explanation for nr_vcpus, because it is used
> both as an IN and an OUT parameter.

Yes.

> 3) Use printk(XENLOG_G_WARNING ...) here, because of its rate limit feature.

I'd say XENLOG_G_DEBUG, and even that only if you really think
the message is useful.

> 4) Do I still keep the spin_lock inside the loop body?

I think overall that's the better alternative.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-10 22:50       ` Wei Liu
@ 2016-03-14  9:07         ` Dario Faggioli
  0 siblings, 0 replies; 51+ messages in thread
From: Dario Faggioli @ 2016-03-14  9:07 UTC (permalink / raw)
  To: Wei Liu, Chong Li
  Cc: Chong Li, Sisu Xi, George Dunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb


[-- Attachment #1.1: Type: text/plain, Size: 2189 bytes --]

On Thu, 2016-03-10 at 22:50 +0000, Wei Liu wrote:
> On Thu, Mar 10, 2016 at 04:35:30PM -0600, Chong Li wrote:
> > On Tue, Mar 8, 2016 at 1:09 PM, Wei Liu <wei.liu2@citrix.com>
> > wrote:
> > > > @@ -1163,6 +1173,96 @@ rt_dom_cntl(
> > > > 
> > > > +            /*
> > > > +             * We accept period/budget less than 100 us, but
> > > > will warn users about
> > > > +             * the large scheduling overhead due to it
> > > > +             */
> > > > +            if ( period < MICROSECS(100) || budget <
> > > > MICROSECS(100) )
> > > > +                printk("Warning: period or budget set to less
> > > > than 100us.\n"
> > > > +                       "This may result in high scheduling
> > > > overhead.\n");
> > > > +
> > > I'm not the maintainer, but I think having printk here is a bad
> > > idea
> > > because the toolstack can then DoS the hypervisor.
> > > 
> > > 
> > > Wei.
> > So what function should I use here? I see many LOG() calls in
> > libxl,
> > but I'm not sure whether that can be used here.
> > 
> IMHO you just don't log anything here. System administrator probably
> won't see it anyway.
> 
> If you think this warning is really necessary, move it to xl.
> 
I do think it adds some value to have this. Moving the printing outside
of Xen would need exporting a symbol for the minimum budget/period that
we consider sensible, but:
 - this is really an implementation detail, which can (potentially)
   vary between different architectures, and change with future Xen
   versions;
 - it would mean keeping the hypervisor and tools symbols in sync.

So, as I'm saying in another reply in this thread, I'd use a guest,
rate-limited, logging variant, and print it only once, as a
countermeasure to the log-spamming problem.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-14  8:37           ` Jan Beulich
@ 2016-03-14  9:10             ` Dario Faggioli
  2016-03-14  9:15               ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Dario Faggioli @ 2016-03-14  9:10 UTC (permalink / raw)
  To: Jan Beulich, Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, xen-devel, Meng Xu,
	Dagaen Golomb


[-- Attachment #1.1: Type: text/plain, Size: 1535 bytes --]

On Mon, 2016-03-14 at 02:37 -0600, Jan Beulich wrote:
> > 
> > > 
> > > > 
> > > > On 13.03.16 at 18:05, <lichong659@gmail.com> wrote:
> > So I still keep my design here?
> > 
> > Dario, Jan and Wei,
> Afaic:
> 
I agree almost on everything too.

About this:

> > 3) Use printk(XENLOG_G_WARNING ...) here, because of its rate limit
> > feature.
> I'd say XENLOG_G_DEBUG, and even that only if you really think
> the message is useful.
> 
I think it has some value to have this in the logs. In fact, someone
that ended up with small values --either by bug/chance, or in general
without a specific need for them-- and is seeing performance/scheduling
issues will still be able to find this.

And printing it somewhere else than in Xen is impractical (see my reply
to Wei).

However, we may well print it just once, as soon as the first vcpu with
potentially problematic parameters is hit, and then silence it.

Linux has things like WARN_ON_ONCE to do this:
http://lxr.free-electrons.com/source/include/asm-generic/bug.h#L109

I don't see anything like it in Xen, so you'd need to either implement
it, or arrange for something with the same effect locally.
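A plain-C sketch of such a "same effect locally" guard (a static flag replaces Linux's section machinery, and a counter replaces printk(XENLOG_G_WARNING ...) so the behaviour is observable; the function name, the 100us threshold check, and the counter are all hypothetical stand-ins):

```c
/* WARN_ONCE-style guard: the warning fires at most once, globally. */

static unsigned sketch_warnings_emitted;

static int warn_small_params_once(unsigned period_us, unsigned budget_us)
{
    static int warned;                 /* latches after the first hit */

    if ((period_us < 100 || budget_us < 100) && !warned) {
        warned = 1;
        sketch_warnings_emitted++;     /* stand-in for the printk */
        return 1;
    }
    return 0;
}
```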

If we just print it once, I'd keep it G_WARNING.

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-14  9:10             ` Dario Faggioli
@ 2016-03-14  9:15               ` Jan Beulich
  2016-03-14 10:05                 ` Dario Faggioli
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-03-14  9:15 UTC (permalink / raw)
  To: Dario Faggioli, Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Meng Xu,
	Dagaen Golomb

>>> On 14.03.16 at 10:10, <dario.faggioli@citrix.com> wrote:
> On Mon, 2016-03-14 at 02:37 -0600, Jan Beulich wrote:
>> > > > On 13.03.16 at 18:05, <lichong659@gmail.com> wrote:
> About this:
> 
>> > 3) Use printk(XENLOG_G_WARNING ...) here, because of its rate limit
>> > feature.
>> I'd say XENLOG_G_DEBUG, and even that only if you really think
>> the message is useful.
>> 
> I think it has some value to have this in the logs. In fact, someone
> that ended up with small values --either by bug/chance, or in general
> without a specific need for them-- and is seeing performance/scheduling
> issues will still be able to find this.
> 
> And printing it somewhere else than in Xen is impractical (see my reply
> to Wei).
> 
> However, we may well print it just once, as soon as the first vcpu with
> potentially problematic parameters is hit, and then silence it.
> 
> Linux has things like WARN_ON_ONCE to do this:
> http://lxr.free-electrons.com/source/include/asm-generic/bug.h#L109 
> 
> I don't see anything like it in Xen, so you'd need to either implement
> it, or arrange for something with the same effect locally.

One of the reasons we don't (yet) have this is that, commonly
(as is the case here), printing such a message once globally isn't
really what you want - you'd much rather see it once per domain.
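A hypothetical per-domain variant of the guard, matching that granularity: the flag lives in per-domain scheduler data (in the RTDS code that would presumably be a field in its per-domain structure; the struct and names below are made up for illustration):

```c
/* Per-domain "warn once": each domain carries its own latch. */

struct sketch_dom {
    int domid;
    int warned_small_params;   /* one warning per domain, not globally */
};

static int warn_small_params_per_dom(struct sketch_dom *d,
                                     unsigned period_us, unsigned budget_us)
{
    if ((period_us < 100 || budget_us < 100) && !d->warned_small_params) {
        d->warned_small_params = 1;
        return 1;              /* real code would printk here */
    }
    return 0;
}
```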

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-14  9:15               ` Jan Beulich
@ 2016-03-14 10:05                 ` Dario Faggioli
  2016-03-15 16:22                   ` Chong Li
  0 siblings, 1 reply; 51+ messages in thread
From: Dario Faggioli @ 2016-03-14 10:05 UTC (permalink / raw)
  To: Jan Beulich, Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Meng Xu,
	Dagaen Golomb


[-- Attachment #1.1: Type: text/plain, Size: 1675 bytes --]

On Mon, 2016-03-14 at 03:15 -0600, Jan Beulich wrote:
> > > > On 14.03.16 at 10:10, <dario.faggioli@citrix.com> wrote:
> > 
> > I think it has some value to have this in the logs. In fact,
> > someone
> > that ended up with small values --either by bug/chance, or in
> > general
> > without a specific need for them-- and is seeing
> > performance/scheduling
> > issues will still be able to find this.
> > 
> > And printing it somewhere else than in Xen is impractical (see my
> > reply
> > to Wei).
> > 
> > However, we may well print it just once, as soon as the first vcpu
> > with
> > potentially problematic parameters is hit, and then silence it.
> > 
> > Linux has things like WARN_ON_ONCE to do this:
> > http://lxr.free-electrons.com/source/include/asm-generic/bug.h#L109
> >  
> > 
> > I don't see anything like it in Xen, so you'd need to either
> > implement
> > it, or arrange for something with the same effect locally.
> One of the reasons we don't (yet) have this is that, commonly
> (as is the case here), printing such a message once globally isn't
> really what you want - you'd much rather see it once per domain.
>
Ah, I did not know that was part of the reason, but it makes sense.

However, for this specific case, I was also considering that, and I
agree: if possible/not too difficult to implement, once per domain would
indeed be better.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-14 10:05                 ` Dario Faggioli
@ 2016-03-15 16:22                   ` Chong Li
  2016-03-15 16:41                     ` Dario Faggioli
  0 siblings, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-15 16:22 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb

On Mon, Mar 14, 2016 at 5:05 AM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> On Mon, 2016-03-14 at 03:15 -0600, Jan Beulich wrote:
>> > > > On 14.03.16 at 10:10, <dario.faggioli@citrix.com> wrote:
>> >
>> > I think it has some value to have this in the logs. In fact,
>> > someone
>> > that ended up with small values --either by bug/chance, or in
>> > general
>> > without a specific need for them-- and is seeing
>> > performance/scheduling
>> > issues will still be able to find this.
>> >
>> > And printing it somewhere else than in Xen is impractical (see my
>> > reply
>> > to Wei).
>> >
>> > However, we may well print it just once, as soon as the first vcpu
>> > with
>> > potentially problematic parameters is hit, and then silence it.
>> >
>> > Linux has things like WARN_ON_ONCE to do this:
>> > http://lxr.free-electrons.com/source/include/asm-generic/bug.h#L109
>> >
>> >
>> > I don't see anything like it in Xen, so you'd need to either
>> > implement
>> > it, or arrange for something with the same effect locally.
>> One of the reasons we don't (yet) have this is because commonly
>> (like is the case here) printing such once globally isn't really what
>> you want - you'd much rather see it once per domain.
>>
> Ah, I did not know that was part of the reason, but it makes sense.
>
> However, for this specific case, I was also considering that, and I
> agree, if possible/not to difficult to implement, once per domain would
> indeed be better.
>
It is easy to implement a global "warn_on_once".

To implement a per-domain "warn_on_once", I may need a hash table
(call it "dom_warned"), where dom_warned[domid] returns true or false.
I don't know whether hash tables are supported in Xen. Or do we have
any list structure that is efficient to search?

Another way is to add a "bool" field in struct domain, but I don't
think that would be a good design.

Any ideas?

Thanks,
Chong


-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-15 16:22                   ` Chong Li
@ 2016-03-15 16:41                     ` Dario Faggioli
  2016-03-15 17:22                       ` Chong Li
  0 siblings, 1 reply; 51+ messages in thread
From: Dario Faggioli @ 2016-03-15 16:41 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb


[-- Attachment #1.1: Type: text/plain, Size: 2373 bytes --]

On Tue, 2016-03-15 at 11:22 -0500, Chong Li wrote:
> On Mon, Mar 14, 2016 at 5:05 AM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> > 
> > However, for this specific case, I was also considering that, and I
> > agree, if possible/not to difficult to implement, once per domain
> > would
> > indeed be better.
> > 
> It is easy to implement a global "warn_on_once".
> 
Sure.

> To implement a per-domain "warn_on_once", I may need a hash table
> (name it as "dom_warned"), and dom_warned[domid] returns true or
> false.
> I don't know if hash table is supported in Xen. Or do we have any
> list
> structure which is very good at searching?
> 
And all this just for having _this_ specific warning printed once per
domain.

No, I don't think it is worth it... Or maybe it is, but I don't think
it's fair to ask you to do it. Of course, if you're up for doing it,
you're free to, but I don't think it's fair to ask for it as a
prerequisite for this patch. :-)

> Another way is to add a "bool" field in struct domain, but I don't
> think that would be a good design.
> 
Yeah, I like it even less. :-/

We said 'once' and then 'once per domain', but something I'd be fine
with (coupled with keeping G_WARNING) would be 'once per operation'.
Basically, if a domain has 128 vcpus, and a hypercall tries to set all
of them to period=100, budget=50, we just print the warning once. Then,
if after a while the sysadmin tries the same again, we again just log
once, etc.

Doing this seems much easier, as the 'warned' flag could just be a
local variable of the hypercall implementation. I'm quite sure that
would work if there were no continuation/re-issuing mechanism in
the hypercall in question. BUT in our case there is, so things may be
more complicated... :-/
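In a continuation-free world, the 'once per operation' idea really is
just a stack variable. A toy sketch (function names, counters and the
100us threshold here are all hypothetical, not the actual patch code):

```c
#include <stdbool.h>

/* Hypothetical floor below which parameters are considered suspicious. */
#define RTDS_MIN_PERIOD_US 100

int warnings_logged;  /* stands in for counting printk(XENLOG_WARNING...) */

/* Set parameters for all vcpus of a domain in one go: the "warned"
 * flag lives on the stack, so each operation warns at most once. */
void set_all_vcpu_params(const unsigned int *period_us, int nr_vcpus)
{
    bool warned = false;
    int i;

    for ( i = 0; i < nr_vcpus; i++ )
    {
        if ( period_us[i] < RTDS_MIN_PERIOD_US && !warned )
        {
            warnings_logged++;
            warned = true;
        }
        /* ... actually apply period_us[i] to the vcpu ... */
    }
}
```

With 128 vcpus all set to a too-small period this logs once, and once
more if the sysadmin repeats the operation later. It breaks down exactly
where noted above: once the hypercall is split across continuations, the
stack variable does not survive between re-issues.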

Had you thought about a solution like this already? If no, can you see
whether there is a nice and easy way to make something like what I just
described above to work in our case?

If we find nothing, we'll think about alternatives, including killing
the warning.

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-15 16:41                     ` Dario Faggioli
@ 2016-03-15 17:22                       ` Chong Li
  2016-03-16  3:14                         ` Meng Xu
  0 siblings, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-15 17:22 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb

On Tue, Mar 15, 2016 at 11:41 AM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> On Tue, 2016-03-15 at 11:22 -0500, Chong Li wrote:
>> On Mon, Mar 14, 2016 at 5:05 AM, Dario Faggioli
>> <dario.faggioli@citrix.com> wrote:
>> >
>
> We said 'once' and then 'once per domain', but something I'd be fine
> with (coupled with keeping G_WARNING) would be 'once per operation'.
> Basically, if a domain has 128 vcpus, and an hypercall tries to set all
> of them to period=100, budget=50, we just print the warning once. Then,
> if after a while the sysadmin tries the same again, we again just log
> once, etc.
>
> Doing this seems much easier, as the 'warned' flag could just be a
> local variable of the hypercall implementation. I'm quite sure that
> would work if there is not any continuation/re-issueing mechanism in
> the hypercall in question. BUT in our case there is, so things may be
> more complicated... :-/
>
> Had you thought about a solution like this already? If no, can you see
> whether there is a nice and easy way to make something like what I just
> described above to work in our case?
>
How about:

We create a global variable in sched_rt.c:

    /* This variable holds its value through hypercall re-issuing.
     * When we find vcpu settings with too low a budget or period
     * (e.g., 100 us), we print a warning and set this variable to
     * "true". No more warnings are printed until it becomes "false"
     * again.
     */
    static bool warned;

We initialize it as "false" in rt_init(). In your example, we set
"warned = true" when we find the first vcpu with a budget of less than
100 us. Outside of the while loop, we do:

    if ( index == op->u.v.nr_vcpus ) /* no more hypercall re-issuing */
        warned = false;
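A toy model of this scheme, with the hypercall's continuation modeled
as repeated calls over batches of vcpus (all names hypothetical; in Xen
the flag would be file-scope in sched_rt.c):

```c
#include <stdbool.h>

static bool warned;   /* static storage: implicitly false */
int warnings_logged;  /* stands in for printk() calls */

/* One "re-issue" of the hypercall: process vcpus [start, start+batch). */
void process_batch(const unsigned int *period_us,
                   int start, int batch, int nr_vcpus)
{
    int index = start;

    while ( index < start + batch && index < nr_vcpus )
    {
        if ( period_us[index] < 100 && !warned )
        {
            warnings_logged++;
            warned = true;
        }
        index++;
    }

    if ( index == nr_vcpus )  /* no more re-issuing: re-arm the warning */
        warned = false;
}
```

The flag survives across the re-issues of one operation and is re-armed
at the end; as discussed later in the thread, though, it is racy if two
CPUs execute the hypercall concurrently.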

Chong



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-15 17:22                       ` Chong Li
@ 2016-03-16  3:14                         ` Meng Xu
  2016-03-16  3:32                           ` Chong Li
  0 siblings, 1 reply; 51+ messages in thread
From: Meng Xu @ 2016-03-16  3:14 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, Dario Faggioli,
	xen-devel, Jan Beulich, Dagaen Golomb

On Tue, Mar 15, 2016 at 1:22 PM, Chong Li <lichong659@gmail.com> wrote:
> On Tue, Mar 15, 2016 at 11:41 AM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
>> On Tue, 2016-03-15 at 11:22 -0500, Chong Li wrote:
>>> On Mon, Mar 14, 2016 at 5:05 AM, Dario Faggioli
>>> <dario.faggioli@citrix.com> wrote:
>>> >
>>
>> We said 'once' and then 'once per domain', but something I'd be fine
>> with (coupled with keeping G_WARNING) would be 'once per operation'.
>> Basically, if a domain has 128 vcpus, and an hypercall tries to set all
>> of them to period=100, budget=50, we just print the warning once. Then,
>> if after a while the sysadmin tries the same again, we again just log
>> once, etc.
>>
>> Doing this seems much easier, as the 'warned' flag could just be a
>> local variable of the hypercall implementation. I'm quite sure that
>> would work if there is not any continuation/re-issueing mechanism in
>> the hypercall in question. BUT in our case there is, so things may be
>> more complicated... :-/
>>
>> Had you thought about a solution like this already? If no, can you see
>> whether there is a nice and easy way to make something like what I just
>> described above to work in our case?
>>
> How about:
>
> We create a global variable in sched_rt.c:
>     /* This variable holds its value through hyerpcall re-issueing.
>      * When finding vcpu settings with too low budget or period (e.g,
> 100 us), we print a warning
>      * and set this variable "true". No more warnings are printed
> until this variable
>      * becomes false.
>      */
>     static bool warned;
> Initialize it as "false" in rt_init().
> In your example,
> we "warned = true" when we find the first vcpu has budget less than
> 100 us. Outside
> of the while loop, we do:
>     if ( index == op->u.v.nr_vcpus ) /* no more hypercall re-issueing */
>         warned = false;
>
Hi Chong,

I don't think creating a global variable just for the warning is a
good idea. Even if we do want such a variable, it should only appear
in the rt_dom_cntl() function, since that is the only place it is
used.
A global variable should be used "globally", shouldn't it? ;-)

Thanks,

Meng
-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-16  3:14                         ` Meng Xu
@ 2016-03-16  3:32                           ` Chong Li
  2016-03-16  3:43                             ` Meng Xu
  0 siblings, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-16  3:32 UTC (permalink / raw)
  To: Meng Xu
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, Dario Faggioli,
	xen-devel, Jan Beulich, Dagaen Golomb

On Tue, Mar 15, 2016 at 10:14 PM, Meng Xu <mengxu@cis.upenn.edu> wrote:
> On Tue, Mar 15, 2016 at 1:22 PM, Chong Li <lichong659@gmail.com> wrote:
>> On Tue, Mar 15, 2016 at 11:41 AM, Dario Faggioli
>> <dario.faggioli@citrix.com> wrote:
>>> On Tue, 2016-03-15 at 11:22 -0500, Chong Li wrote:
>>>> On Mon, Mar 14, 2016 at 5:05 AM, Dario Faggioli
>>>> <dario.faggioli@citrix.com> wrote:
>>>> >
>>>
>>> We said 'once' and then 'once per domain', but something I'd be fine
>>> with (coupled with keeping G_WARNING) would be 'once per operation'.
>>> Basically, if a domain has 128 vcpus, and an hypercall tries to set all
>>> of them to period=100, budget=50, we just print the warning once. Then,
>>> if after a while the sysadmin tries the same again, we again just log
>>> once, etc.
>>>
>>> Doing this seems much easier, as the 'warned' flag could just be a
>>> local variable of the hypercall implementation. I'm quite sure that
>>> would work if there is not any continuation/re-issueing mechanism in
>>> the hypercall in question. BUT in our case there is, so things may be
>>> more complicated... :-/
>>>
>>> Had you thought about a solution like this already? If no, can you see
>>> whether there is a nice and easy way to make something like what I just
>>> described above to work in our case?
>>>
>> How about:
>>
>> We create a global variable in sched_rt.c:
>>     /* This variable holds its value through hyerpcall re-issueing.
>>      * When finding vcpu settings with too low budget or period (e.g,
>> 100 us), we print a warning
>>      * and set this variable "true". No more warnings are printed
>> until this variable
>>      * becomes false.
>>      */
>>     static bool warned;
>> Initialize it as "false" in rt_init().
>> In your example,
>> we "warned = true" when we find the first vcpu has budget less than
>> 100 us. Outside
>> of the while loop, we do:
>>     if ( index == op->u.v.nr_vcpus ) /* no more hypercall re-issueing */
>>         warned = false;
>>
> Hi Chong,
>
> I don't think creating a global variable just for the warning thing is
> a better idea. Even if we do want such a variable, it should only
> occur in rt_dom_cntl() function, since it is only used in
> rt_dom_cntl().
> Global variable should be used "globally", isn't it. ;-)
>
You're right.

If we define

   static bool warned;

at the beginning of rt_dom_cntl(), do we need to initialize it? Without
an explicit initializer, I think its default value is "false", which is
just what we need.

Chong


-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-16  3:32                           ` Chong Li
@ 2016-03-16  3:43                             ` Meng Xu
  2016-03-16  8:23                               ` Dario Faggioli
  2016-03-16 10:48                               ` Jan Beulich
  0 siblings, 2 replies; 51+ messages in thread
From: Meng Xu @ 2016-03-16  3:43 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, Dario Faggioli,
	xen-devel, Jan Beulich, Dagaen Golomb

On Tue, Mar 15, 2016 at 11:32 PM, Chong Li <lichong659@gmail.com> wrote:
> On Tue, Mar 15, 2016 at 10:14 PM, Meng Xu <mengxu@cis.upenn.edu> wrote:
>> On Tue, Mar 15, 2016 at 1:22 PM, Chong Li <lichong659@gmail.com> wrote:
>>> On Tue, Mar 15, 2016 at 11:41 AM, Dario Faggioli
>>> <dario.faggioli@citrix.com> wrote:
>>>> On Tue, 2016-03-15 at 11:22 -0500, Chong Li wrote:
>>>>> On Mon, Mar 14, 2016 at 5:05 AM, Dario Faggioli
>>>>> <dario.faggioli@citrix.com> wrote:
>>>>> >
>>>>
>>>> We said 'once' and then 'once per domain', but something I'd be fine
>>>> with (coupled with keeping G_WARNING) would be 'once per operation'.
>>>> Basically, if a domain has 128 vcpus, and an hypercall tries to set all
>>>> of them to period=100, budget=50, we just print the warning once. Then,
>>>> if after a while the sysadmin tries the same again, we again just log
>>>> once, etc.
>>>>
>>>> Doing this seems much easier, as the 'warned' flag could just be a
>>>> local variable of the hypercall implementation. I'm quite sure that
>>>> would work if there is not any continuation/re-issueing mechanism in
>>>> the hypercall in question. BUT in our case there is, so things may be
>>>> more complicated... :-/
>>>>
>>>> Had you thought about a solution like this already? If no, can you see
>>>> whether there is a nice and easy way to make something like what I just
>>>> described above to work in our case?
>>>>
>>> How about:
>>>
>>> We create a global variable in sched_rt.c:
>>>     /* This variable holds its value through hyerpcall re-issueing.
>>>      * When finding vcpu settings with too low budget or period (e.g,
>>> 100 us), we print a warning
>>>      * and set this variable "true". No more warnings are printed
>>> until this variable
>>>      * becomes false.
>>>      */
>>>     static bool warned;
>>> Initialize it as "false" in rt_init().
>>> In your example,
>>> we "warned = true" when we find the first vcpu has budget less than
>>> 100 us. Outside
>>> of the while loop, we do:
>>>     if ( index == op->u.v.nr_vcpus ) /* no more hypercall re-issueing */
>>>         warned = false;
>>>
>> Hi Chong,
>>
>> I don't think creating a global variable just for the warning thing is
>> a better idea. Even if we do want such a variable, it should only
>> occur in rt_dom_cntl() function, since it is only used in
>> rt_dom_cntl().
>> Global variable should be used "globally", isn't it. ;-)
>>
> You're right.
>
> If we define
>
>    static bool warned;
>
> at the beginning of rt_dom_cntl(), do we need to initialize it? If without
> initialization, I think its default value is "false", which is just
> what we need.
>

We need to initialize any variable we are going to use, of course. We
should not rely on the compiler to give us an initialized value. :-)

Meng
-- 
-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-16  3:43                             ` Meng Xu
@ 2016-03-16  8:23                               ` Dario Faggioli
  2016-03-16 14:37                                 ` Meng Xu
  2016-03-16 14:46                                 ` Chong Li
  2016-03-16 10:48                               ` Jan Beulich
  1 sibling, 2 replies; 51+ messages in thread
From: Dario Faggioli @ 2016-03-16  8:23 UTC (permalink / raw)
  To: Meng Xu, Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Jan Beulich,
	Dagaen Golomb


[-- Attachment #1.1: Type: text/plain, Size: 2724 bytes --]

On Tue, 2016-03-15 at 23:43 -0400, Meng Xu wrote:
> On Tue, Mar 15, 2016 at 11:32 PM, Chong Li <lichong659@gmail.com>
> wrote:
> > > > How about:
> > > > 
> > > > We create a global variable in sched_rt.c:
> > > >     /* This variable holds its value through hyerpcall re-
> > > > issueing.
> > > >      * When finding vcpu settings with too low budget or period
> > > > (e.g,
> > > > 100 us), we print a warning
> > > >      * and set this variable "true". No more warnings are
> > > > printed
> > > > until this variable
> > > >      * becomes false.
> > > >      */
> > > >     static bool warned;
> > > > Initialize it as "false" in rt_init().
> > > > In your example,
> > > > we "warned = true" when we find the first vcpu has budget less
> > > > than
> > > > 100 us. Outside
> > > > of the while loop, we do:
> > > >     if ( index == op->u.v.nr_vcpus ) /* no more hypercall re-
> > > > issueing */
> > > >         warned = false;
> > > > 
> > > 
> > > 
> > If we define
> > 
> >    static bool warned;
> > 
> > at the beginning of rt_dom_cntl(), do we need to initialize it? If
> > without
> > initialization, I think its default value is "false", which is just
> > what we need.
> > 
> We need initializing any variable we are going to use, of course. We
> should not reply on the compiler to give an initialized value. :-)
>
We need to initialize any variable that would be used uninitialized, if
we don't initialize it. :-)

However, something along the lines of a static variable was also what
I was thinking of, but I don't think it works well enough to justify
introducing it. And making it work well is proving too hard to be
worth the bother.

The reasons I say it doesn't work well are: (a) there may be more than
one CPU executing this hypercall, and they'd race on the value of the
static flag; (b) if the hypercall finishes processing the first lot of
64 vCPUs with the flag set to false, are we sure it can't be anything
other than "still false" when the hypercall, for the next lot of
vCPUs of the same domain, is re-issued?

I continue to think that it could be useful to have this logged, but
I'm leaning toward just killing it for now (and maybe finding another
way to check and warn about the same thing, or one of the effects it
produces, later).
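For reference, race (a) by itself could be closed with an atomic
test-and-set instead of a plain flag. A sketch with C11 atomics (Xen
code would use its own test_and_set_bit() primitives, and this still
leaves the per-operation reset problem in (b) unsolved):

```c
#include <stdatomic.h>

static atomic_flag warned = ATOMIC_FLAG_INIT;
int warnings_logged;  /* stands in for printk() */

/* Only the first caller to reach this emits the warning, even if
 * several CPUs race here at the same time. */
void warn_once(void)
{
    if ( !atomic_flag_test_and_set(&warned) )
        warnings_logged++;
}
```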

Meng, what do you think?



> 
> Meng
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-16  3:43                             ` Meng Xu
  2016-03-16  8:23                               ` Dario Faggioli
@ 2016-03-16 10:48                               ` Jan Beulich
  1 sibling, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2016-03-16 10:48 UTC (permalink / raw)
  To: Chong Li, Meng Xu
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, Dario Faggioli,
	xen-devel, Dagaen Golomb

>>> On 16.03.16 at 04:43, <xumengpanda@gmail.com> wrote:
> On Tue, Mar 15, 2016 at 11:32 PM, Chong Li <lichong659@gmail.com> wrote:
>> If we define
>>
>>    static bool warned;
>>
>> at the beginning of rt_dom_cntl(), do we need to initialize it? If without
>> initialization, I think its default value is "false", which is just
>> what we need.
>>
> 
> We need initializing any variable we are going to use, of course. We
> should not reply on the compiler to give an initialized value. :-)

Relying on implicit initialization where the language standard
guarantees it is more than just an option - if I see a code change
with an explicit zero initializer on a static variable, I always
reply stating that the initializer is pointless.
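(The rule in question: the C standard guarantees that objects with
static storage duration are zero-initialized (C99 §6.7.8), so
"static bool warned;" already starts out false. A small illustration,
purely for the record:)

```c
#include <stdbool.h>

/* Returns true exactly once, relying on the guaranteed implicit
 * zero-initialization of the function-local static. */
bool first_call(void)
{
    static bool warned;  /* implicitly false; "= false" would be redundant */

    if ( !warned )
    {
        warned = true;
        return true;
    }
    return false;
}
```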

Jan



* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-16  8:23                               ` Dario Faggioli
@ 2016-03-16 14:37                                 ` Meng Xu
  2016-03-16 14:46                                   ` Chong Li
  2016-03-16 14:53                                   ` Dario Faggioli
  2016-03-16 14:46                                 ` Chong Li
  1 sibling, 2 replies; 51+ messages in thread
From: Meng Xu @ 2016-03-16 14:37 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Jan Beulich,
	Chong Li, Dagaen Golomb

On Wed, Mar 16, 2016 at 4:23 AM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> On Tue, 2016-03-15 at 23:43 -0400, Meng Xu wrote:
>> On Tue, Mar 15, 2016 at 11:32 PM, Chong Li <lichong659@gmail.com>
>> wrote:
>> > > > How about:
>> > > >
>> > > > We create a global variable in sched_rt.c:
>> > > >     /* This variable holds its value through hyerpcall re-
>> > > > issueing.
>> > > >      * When finding vcpu settings with too low budget or period
>> > > > (e.g,
>> > > > 100 us), we print a warning
>> > > >      * and set this variable "true". No more warnings are
>> > > > printed
>> > > > until this variable
>> > > >      * becomes false.
>> > > >      */
>> > > >     static bool warned;
>> > > > Initialize it as "false" in rt_init().
>> > > > In your example,
>> > > > we "warned = true" when we find the first vcpu has budget less
>> > > > than
>> > > > 100 us. Outside
>> > > > of the while loop, we do:
>> > > >     if ( index == op->u.v.nr_vcpus ) /* no more hypercall re-
>> > > > issueing */
>> > > >         warned = false;
>> > > >
>> > >
>> > >
>> > If we define
>> >
>> >    static bool warned;
>> >
>> > at the beginning of rt_dom_cntl(), do we need to initialize it? If
>> > without
>> > initialization, I think its default value is "false", which is just
>> > what we need.
>> >
>> We need initializing any variable we are going to use, of course. We
>> should not reply on the compiler to give an initialized value. :-)
>>
> We need to initialize any variable that would be used uninitialized, if
> we don't initialize it. :-)
>
> However, something along the line of a static variable was also what I
> was thinking to, but I don't think it works sufficiently well for
> justifying it being introduced. And making things work well is proving
> to be too hard to keep bothering.
>
> Reasons why I'm saying I don't think it works well are that: (a) there
> may be more than one CPU executing this hypercall, and they'd race on
> the value of the static flag; (b) what if the hypercall finishes
> processing the first lot of 64 vCPUs with the flag set to false, are we
> sure it can't be anything than "still false", when the new hypercal,
> for the next lot of vCPUs of the same domain, is re-issued?
>
> I continue to think that it could be useful to have this logged, but
> I'm leaning toward just killing it for now (and maybe finding another
> way to check and warn about the same thing or one of the effects it
> produces, later).
>
> Meng, what do you think?

I'm wondering whether it is worthwhile, *for now at least*, to provide
such information at the cost of so much effort and the danger of
introducing more serious issues.

Right, a race condition occurs on the global variable, and we
certainly don't want to encounter that.
So let's just not use the global variable.

We should definitely put a large warning in the wiki for the RTDS
scheduler about parameter settings. Incorrect settings should never
crash the system, but they may lead to poorer real-time performance
than users want.

Once this patch is done, I will modify the wiki then. (Chong, could
you remind me if I happen to forget?)

Thanks,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-16  8:23                               ` Dario Faggioli
  2016-03-16 14:37                                 ` Meng Xu
@ 2016-03-16 14:46                                 ` Chong Li
  2016-03-16 14:54                                   ` Dario Faggioli
  1 sibling, 1 reply; 51+ messages in thread
From: Chong Li @ 2016-03-16 14:46 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb

On Wed, Mar 16, 2016 at 3:23 AM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> On Tue, 2016-03-15 at 23:43 -0400, Meng Xu wrote:
>> On Tue, Mar 15, 2016 at 11:32 PM, Chong Li <lichong659@gmail.com>
>> wrote:
>> > > > How about:
>> > > >
>> > > > We create a global variable in sched_rt.c:
>> > > >     /* This variable holds its value through hyerpcall re-
>> > > > issueing.
>> > > >      * When finding vcpu settings with too low budget or period
>> > > > (e.g,
>> > > > 100 us), we print a warning
>> > > >      * and set this variable "true". No more warnings are
>> > > > printed
>> > > > until this variable
>> > > >      * becomes false.
>> > > >      */
>> > > >     static bool warned;
>> > > > Initialize it as "false" in rt_init().
>> > > > In your example,
>> > > > we "warned = true" when we find the first vcpu has budget less
>> > > > than
>> > > > 100 us. Outside
>> > > > of the while loop, we do:
>> > > >     if ( index == op->u.v.nr_vcpus ) /* no more hypercall re-
>> > > > issueing */
>> > > >         warned = false;
>> > > >
>> > >
>> > >
>> > If we define
>> >
>> >    static bool warned;
>> >
>> > at the beginning of rt_dom_cntl(), do we need to initialize it? If
>> > without
>> > initialization, I think its default value is "false", which is just
>> > what we need.
>> >
>> We need initializing any variable we are going to use, of course. We
>> should not reply on the compiler to give an initialized value. :-)
>>
> We need to initialize any variable that would be used uninitialized, if
> we don't initialize it. :-)
>
> However, something along the line of a static variable was also what I
> was thinking to, but I don't think it works sufficiently well for
> justifying it being introduced. And making things work well is proving
> to be too hard to keep bothering.
>
> Reasons why I'm saying I don't think it works well are that: (a) there
> may be more than one CPU executing this hypercall, and they'd race on
> the value of the static flag; (b) what if the hypercall finishes
> processing the first lot of 64 vCPUs with the flag set to false, are we
> sure it can't be anything than "still false", when the new hypercal,
> for the next lot of vCPUs of the same domain, is re-issued?
>
> I continue to think that it could be useful to have this logged, but
> I'm leaning toward just killing it for now (and maybe finding another
> way to check and warn about the same thing or one of the effects it
> produces, later).
>

By "killing it", do you mean we neither do the check nor print the
warning at all? Or that we just print the warning globally once?

Chong



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis


* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-16 14:37                                 ` Meng Xu
@ 2016-03-16 14:46                                   ` Chong Li
  2016-03-16 14:53                                   ` Dario Faggioli
  1 sibling, 0 replies; 51+ messages in thread
From: Chong Li @ 2016-03-16 14:46 UTC (permalink / raw)
  To: Meng Xu
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, Dario Faggioli,
	xen-devel, Jan Beulich, Dagaen Golomb

On Wed, Mar 16, 2016 at 9:37 AM, Meng Xu <mengxu@cis.upenn.edu> wrote:
> On Wed, Mar 16, 2016 at 4:23 AM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
>> On Tue, 2016-03-15 at 23:43 -0400, Meng Xu wrote:
>>> On Tue, Mar 15, 2016 at 11:32 PM, Chong Li <lichong659@gmail.com>
>>> wrote:
>>> > > > How about:
>>> > > >
>>> > > > We create a global variable in sched_rt.c:
>>> > > >     /* This variable holds its value through hypercall re-
>>> > > > issuing.
>>> > > >      * When we find vcpu settings with too low a budget or
>>> > > > period (e.g.,
>>> > > > 100 us), we print a warning
>>> > > >      * and set this variable "true". No more warnings are
>>> > > > printed
>>> > > > until this variable
>>> > > >      * becomes false.
>>> > > >      */
>>> > > >     static bool warned;
>>> > > > Initialize it as "false" in rt_init().
>>> > > > In your example,
>>> > > > we "warned = true" when we find the first vcpu has budget less
>>> > > > than
>>> > > > 100 us. Outside
>>> > > > of the while loop, we do:
>>> > > >     if ( index == op->u.v.nr_vcpus ) /* no more hypercall re-
>>> > > > issuing */
>>> > > >         warned = false;
>>> > > >
>>> > >
>>> > >
>>> > If we define
>>> >
>>> >    static bool warned;
>>> >
>>> > at the beginning of rt_dom_cntl(), do we need to initialize it? If we
>>> > leave it uninitialized, I think its default value is "false", which is
>>> > just what we need.
>>> >
>>> We need to initialize any variable we are going to use, of course. We
>>> should not rely on the compiler to give it an initialized value. :-)
>>>
>> We need to initialize any variable that would be used uninitialized, if
>> we don't initialize it. :-)
>>
>> However, something along the line of a static variable was also what I
>> was thinking of, but I don't think it works sufficiently well to
>> justify introducing it. And making things work well is proving to be
>> too hard to be worth the bother.
>>
>> The reasons why I don't think it works well are: (a) there
>> may be more than one CPU executing this hypercall, and they'd race on
>> the value of the static flag; (b) when the hypercall finishes
>> processing the first lot of 64 vCPUs with the flag set to false, are we
>> sure it can't be anything other than "still false" when the new
>> hypercall, for the next lot of vCPUs of the same domain, is re-issued?
>>
>> I continue to think that it could be useful to have this logged, but
>> I'm leaning toward just killing it for now (and maybe finding another
>> way to check and warn about the same thing or one of the effects it
>> produces, later).
>>
>> Meng, what do you think?
>
> I'm starting to think it may not be worthwhile, *for now at least*, to
> provide such information at the cost of so much effort and the danger
> of introducing more serious issues.
>
> Right, a race condition occurs on the global variable, and I believe
> we don't want to encounter it.
> So let's just not use the global variable.
>
> We should definitely put a large warning in the wiki for the RTDS
> scheduler about the parameter settings. Incorrect settings should never
> crash the system, but may lead to poorer real-time performance than
> users want.
>
> Once this patch is done, I will modify the wiki then. (Chong, could
> you remind me if I happen to forget?)
>
Sure, I will.

Chong



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-16 14:37                                 ` Meng Xu
  2016-03-16 14:46                                   ` Chong Li
@ 2016-03-16 14:53                                   ` Dario Faggioli
  1 sibling, 0 replies; 51+ messages in thread
From: Dario Faggioli @ 2016-03-16 14:53 UTC (permalink / raw)
  To: Meng Xu
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Jan Beulich,
	Chong Li, Dagaen Golomb



On Wed, 2016-03-16 at 10:37 -0400, Meng Xu wrote:
> On Wed, Mar 16, 2016 at 4:23 AM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> > 
> > I continue to think that it could be useful to have this logged,
> > but
> > I'm leaning toward just killing it for now (and maybe finding
> > another
> > way to check and warn about the same thing or one of the effects it
> > produces, later).
> > 
> > Meng, what do you think?
> I'm starting to think it may not be worthwhile, *for now at least*, to
> provide such information at the cost of so much effort and the danger
> of introducing more serious issues.
> 
Yeah, exactly what I was also saying.

> Right, a race condition occurs on the global variable, and I believe
> we don't want to encounter it.
> So let's just not use the global variable.
> 
Well, it would just affect the logging itself. But then, why clutter
the (clarity of the) code to introduce proper per-domain logging, only
to end up with something that may not actually be per-domain... that's
how I was thinking about it.

> We should definitely put a large warning in the wiki for the RTDS
> scheduler about the parameter settings. Incorrect settings should
> never crash the system, but may lead to poorer real-time performance
> than users want.
> 
For example, one way of dealing with this would be to let the
user/sysadmin set, by means of SYSCTLs, the minimum budget and period
they want to be able to use.

We would then have both a static and a dynamic limit. The former,
hardcoded in Xen, set to something that we know is unreasonable on any
hardware platform available at the time. The latter set to a rather
conservative, high value by default, while letting users who want to
lower it do that... and that is where we can print all the warnings
we want!

I'm just tossing this out there, though, and this is certainly
something that can come as future work.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
  2016-03-16 14:46                                 ` Chong Li
@ 2016-03-16 14:54                                   ` Dario Faggioli
  0 siblings, 0 replies; 51+ messages in thread
From: Dario Faggioli @ 2016-03-16 14:54 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, GeorgeDunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb



On Wed, 2016-03-16 at 09:46 -0500, Chong Li wrote:
> On Wed, Mar 16, 2016 at 3:23 AM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> > 
> > I continue to think that it could be useful to have this logged,
> > but
> > I'm leaning toward just killing it for now (and maybe finding
> > another
> > way to check and warn about the same thing or one of the effects it
> > produces, later).
> > 
> By "killing it", do you mean we drop both the check and the warning?
> Or just print the warning globally once?
> 
Remove the warning (and, of course, the check as well! :-)).

Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2016-03-16 14:54 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-06 17:55 [PATCH v6 for Xen 4.7 0/4] Enable per-VCPU parameter settings for RTDS scheduler Chong Li
2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 1/4] xen: enable " Chong Li
2016-03-07 12:59   ` Jan Beulich
2016-03-07 16:28     ` Chong Li
2016-03-07 16:40       ` Jan Beulich
2016-03-07 17:53         ` Dario Faggioli
2016-03-07 22:16           ` Chong Li
2016-03-08  9:10           ` Jan Beulich
2016-03-08 10:34             ` Dario Faggioli
2016-03-08 11:47               ` Jan Beulich
2016-03-08 19:09   ` Wei Liu
2016-03-09 16:10     ` Dario Faggioli
2016-03-09 16:38       ` Jan Beulich
2016-03-13 17:05         ` Chong Li
2016-03-14  8:37           ` Jan Beulich
2016-03-14  9:10             ` Dario Faggioli
2016-03-14  9:15               ` Jan Beulich
2016-03-14 10:05                 ` Dario Faggioli
2016-03-15 16:22                   ` Chong Li
2016-03-15 16:41                     ` Dario Faggioli
2016-03-15 17:22                       ` Chong Li
2016-03-16  3:14                         ` Meng Xu
2016-03-16  3:32                           ` Chong Li
2016-03-16  3:43                             ` Meng Xu
2016-03-16  8:23                               ` Dario Faggioli
2016-03-16 14:37                                 ` Meng Xu
2016-03-16 14:46                                   ` Chong Li
2016-03-16 14:53                                   ` Dario Faggioli
2016-03-16 14:46                                 ` Chong Li
2016-03-16 14:54                                   ` Dario Faggioli
2016-03-16 10:48                               ` Jan Beulich
2016-03-10 22:35     ` Chong Li
2016-03-10 22:50       ` Wei Liu
2016-03-14  9:07         ` Dario Faggioli
2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 2/4] libxc: " Chong Li
2016-03-08 19:09   ` Wei Liu
2016-03-08 19:32     ` Chong Li
2016-03-08 19:36       ` Wei Liu
2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 3/4] libxl: " Chong Li
2016-03-08 19:12   ` Wei Liu
2016-03-09  0:38     ` Chong Li
2016-03-09 14:01       ` Wei Liu
2016-03-09 17:28     ` Dario Faggioli
2016-03-09 21:57       ` Chong Li
2016-03-09 17:09   ` Dario Faggioli
2016-03-09 17:28     ` Dario Faggioli
2016-03-06 17:55 ` [PATCH v6 for Xen 4.7 4/4] xl: " Chong Li
2016-03-08 19:12   ` Wei Liu
2016-03-08 21:24     ` Chong Li
2016-03-09 14:01       ` Wei Liu
2016-03-09 14:09   ` Wei Liu
