All of lore.kernel.org
 help / color / mirror / Atom feed
* [RFC Patch 0/3] Putting the "Simple" back in sedf.
@ 2014-03-14 19:13 Nathan Studer
  2014-03-14 19:13 ` [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support Nathan Studer
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Nathan Studer @ 2014-03-14 19:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, Dario Faggioli, Nathan Studer

From: Nathan Studer <nate.studer@dornerworks.com>

With the increased interest in embedded Xen, there is a need for a suitable
real-time scheduler.  The arinc653 scheduler currently only supports a
single core and has limited niche appeal, while the sedf scheduler is
widely consider deprecated and is currently a mess.

Since both the CBS scheduler proposed by Dario and the schedulers of Xen-RT
use an edf scheduler as the lowest-level scheduling mechanism, it seems
worthwhile to start repurposing the sedf scheduler instead of creating a
completely new scheduler.

This patchset begins this repurposing by removing the extra scheduling code
that has built up over the years, and returns the sedf scheduler to its 
simple roots.

Nathan Studer (3):
  Remove sedf extra, weight, and latency parameter support.
  Remove extra queues, latency scaling, and weight support from sedf
  Fix formatting and misleading comments/variables in sedf

 docs/man/xl.cfg.pod.5             |   10 -
 tools/libxc/xc_sedf.c             |   16 +-
 tools/libxc/xenctrl.h             |    8 +-
 tools/libxl/libxl.c               |   26 +-
 tools/libxl/libxl.h               |    2 -
 tools/libxl/libxl_create.c        |   61 ---
 tools/libxl/libxl_types.idl       |    2 -
 tools/libxl/xl_cmdimpl.c          |   54 +-
 tools/libxl/xl_cmdtable.c         |    6 -
 tools/python/xen/lowlevel/xc/xc.c |   35 +-
 xen/common/sched_sedf.c           | 1032 +++++++------------------------------
 xen/include/public/domctl.h       |    3 -
 12 files changed, 222 insertions(+), 1033 deletions(-)
 mode change 100644 => 100755 docs/man/xl.cfg.pod.5
 mode change 100644 => 100755 tools/libxc/xc_sedf.c
 mode change 100644 => 100755 tools/libxc/xenctrl.h
 mode change 100644 => 100755 tools/libxl/libxl.c
 mode change 100644 => 100755 tools/libxl/libxl.h
 mode change 100644 => 100755 tools/libxl/libxl_create.c
 mode change 100644 => 100755 tools/libxl/libxl_types.idl
 mode change 100644 => 100755 tools/libxl/xl_cmdimpl.c
 mode change 100644 => 100755 tools/libxl/xl_cmdtable.c
 mode change 100644 => 100755 xen/common/sched_sedf.c
 mode change 100644 => 100755 xen/include/public/domctl.h

-- 
1.7.9.5

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support.
  2014-03-14 19:13 [RFC Patch 0/3] Putting the "Simple" back in sedf Nathan Studer
@ 2014-03-14 19:13 ` Nathan Studer
  2014-03-17  8:13   ` Jan Beulich
                     ` (2 more replies)
  2014-03-14 19:13 ` [RFC Patch 2/3] Remove extra queues, latency scaling, and weight support from sedf Nathan Studer
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 19+ messages in thread
From: Nathan Studer @ 2014-03-14 19:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, Joshua Whitehead, Dario Faggioli,
	Nathan Studer

From: Nathan Studer <nate.studer@dornerworks.com>

Remove sedf extra, weight, and latency parameters from the scheduler's adjust
function.  Also remove the support for these parameters from the xl toolstack.

Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
Signed-off-by: Joshua Whitehead <josh.whitehead@dornerworks.com>
---
 docs/man/xl.cfg.pod.5             |   10 --
 tools/libxc/xc_sedf.c             |   16 +--
 tools/libxc/xenctrl.h             |    8 +-
 tools/libxl/libxl.c               |   26 +----
 tools/libxl/libxl.h               |    2 -
 tools/libxl/libxl_create.c        |   61 -----------
 tools/libxl/libxl_types.idl       |    2 -
 tools/libxl/xl_cmdimpl.c          |   54 ++--------
 tools/libxl/xl_cmdtable.c         |    6 --
 tools/python/xen/lowlevel/xc/xc.c |   35 +++----
 xen/common/sched_sedf.c           |  205 ++++++-------------------------------
 xen/include/public/domctl.h       |    3 -
 12 files changed, 61 insertions(+), 367 deletions(-)
 mode change 100644 => 100755 docs/man/xl.cfg.pod.5
 mode change 100644 => 100755 tools/libxc/xc_sedf.c
 mode change 100644 => 100755 tools/libxc/xenctrl.h
 mode change 100644 => 100755 tools/libxl/libxl.c
 mode change 100644 => 100755 tools/libxl/libxl.h
 mode change 100644 => 100755 tools/libxl/libxl_create.c
 mode change 100644 => 100755 tools/libxl/libxl_types.idl
 mode change 100644 => 100755 tools/libxl/xl_cmdimpl.c
 mode change 100644 => 100755 tools/libxl/xl_cmdtable.c
 mode change 100644 => 100755 xen/common/sched_sedf.c
 mode change 100644 => 100755 xen/include/public/domctl.h

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
old mode 100644
new mode 100755
index c02ad55..dc52ed2
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -204,16 +204,6 @@ The normal EDF scheduling usage in nanoseconds. it defines the time
 a domain get every period time.
 Honoured by the sedf scheduler.
 
-=item B<latency=N>
-
-Scaled period if domain is doing heavy I/O.
-Honoured by the sedf scheduler.
-
-=item B<extratime=BOOLEAN>
-
-Flag for allowing domain to run in extra time.
-Honoured by the sedf scheduler.
-
 =back
 
 =head3 Memory Allocation
diff --git a/tools/libxc/xc_sedf.c b/tools/libxc/xc_sedf.c
old mode 100644
new mode 100755
index db372ca..6a0c8e2
--- a/tools/libxc/xc_sedf.c
+++ b/tools/libxc/xc_sedf.c
@@ -28,10 +28,7 @@ int xc_sedf_domain_set(
     xc_interface *xch,
     uint32_t domid,
     uint64_t period,
-    uint64_t slice,
-    uint64_t latency,
-    uint16_t extratime,
-    uint16_t weight)
+    uint64_t slice)
 {
     DECLARE_DOMCTL;
     struct xen_domctl_sched_sedf *p = &domctl.u.scheduler_op.u.sedf;
@@ -43,9 +40,6 @@ int xc_sedf_domain_set(
 
     p->period    = period;
     p->slice     = slice;
-    p->latency   = latency;
-    p->extratime = extratime;
-    p->weight    = weight;
     return do_domctl(xch, &domctl);
 }
 
@@ -53,10 +47,7 @@ int xc_sedf_domain_get(
     xc_interface *xch,
     uint32_t domid,
     uint64_t *period,
-    uint64_t *slice,
-    uint64_t *latency,
-    uint16_t *extratime,
-    uint16_t *weight)
+    uint64_t *slice)
 {
     DECLARE_DOMCTL;
     int ret;
@@ -71,8 +62,5 @@ int xc_sedf_domain_get(
 
     *period    = p->period;
     *slice     = p->slice;
-    *latency   = p->latency;
-    *extratime = p->extratime;
-    *weight    = p->weight;
     return ret;
 }
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
old mode 100644
new mode 100755
index 13f816b..bec91b9
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -765,15 +765,11 @@ int xc_shadow_control(xc_interface *xch,
 
 int xc_sedf_domain_set(xc_interface *xch,
                        uint32_t domid,
-                       uint64_t period, uint64_t slice,
-                       uint64_t latency, uint16_t extratime,
-                       uint16_t weight);
+                       uint64_t period, uint64_t slice);
 
 int xc_sedf_domain_get(xc_interface *xch,
                        uint32_t domid,
-                       uint64_t* period, uint64_t *slice,
-                       uint64_t *latency, uint16_t *extratime,
-                       uint16_t *weight);
+                       uint64_t* period, uint64_t *slice);
 
 int xc_sched_credit_domain_set(xc_interface *xch,
                                uint32_t domid,
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
old mode 100644
new mode 100755
index 730f6e1..f790727
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -4915,13 +4915,9 @@ static int sched_sedf_domain_get(libxl__gc *gc, uint32_t domid,
 {
     uint64_t period;
     uint64_t slice;
-    uint64_t latency;
-    uint16_t extratime;
-    uint16_t weight;
     int rc;
 
-    rc = xc_sedf_domain_get(CTX->xch, domid, &period, &slice, &latency,
-                            &extratime, &weight);
+    rc = xc_sedf_domain_get(CTX->xch, domid, &period, &slice);
     if (rc != 0) {
         LOGE(ERROR, "getting domain sched sedf");
         return ERROR_FAIL;
@@ -4931,9 +4927,6 @@ static int sched_sedf_domain_get(libxl__gc *gc, uint32_t domid,
     scinfo->sched = LIBXL_SCHEDULER_SEDF;
     scinfo->period = period / 1000000;
     scinfo->slice = slice / 1000000;
-    scinfo->latency = latency / 1000000;
-    scinfo->extratime = extratime;
-    scinfo->weight = weight;
 
     return 0;
 }
@@ -4943,14 +4936,10 @@ static int sched_sedf_domain_set(libxl__gc *gc, uint32_t domid,
 {
     uint64_t period;
     uint64_t slice;
-    uint64_t latency;
-    uint16_t extratime;
-    uint16_t weight;
 
     int ret;
 
-    ret = xc_sedf_domain_get(CTX->xch, domid, &period, &slice, &latency,
-                            &extratime, &weight);
+    ret = xc_sedf_domain_get(CTX->xch, domid, &period, &slice);
     if (ret != 0) {
         LOGE(ERROR, "getting domain sched sedf");
         return ERROR_FAIL;
@@ -4960,15 +4949,8 @@ static int sched_sedf_domain_set(libxl__gc *gc, uint32_t domid,
         period = (uint64_t)scinfo->period * 1000000;
     if (scinfo->slice != LIBXL_DOMAIN_SCHED_PARAM_SLICE_DEFAULT)
         slice = (uint64_t)scinfo->slice * 1000000;
-    if (scinfo->latency != LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT)
-        latency = (uint64_t)scinfo->latency * 1000000;
-    if (scinfo->extratime != LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT)
-        extratime = scinfo->extratime;
-    if (scinfo->weight != LIBXL_DOMAIN_SCHED_PARAM_WEIGHT_DEFAULT)
-        weight = scinfo->weight;
-
-    ret = xc_sedf_domain_set(CTX->xch, domid, period, slice, latency,
-                            extratime, weight);
+
+    ret = xc_sedf_domain_set(CTX->xch, domid, period, slice);
     if ( ret < 0 ) {
         LOGE(ERROR, "setting domain sched sedf");
         return ERROR_FAIL;
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
old mode 100644
new mode 100755
index 4c9cd64..6be5575
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1093,8 +1093,6 @@ int libxl_sched_credit_params_set(libxl_ctx *ctx, uint32_t poolid,
 #define LIBXL_DOMAIN_SCHED_PARAM_CAP_DEFAULT       -1
 #define LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT    -1
 #define LIBXL_DOMAIN_SCHED_PARAM_SLICE_DEFAULT     -1
-#define LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT   -1
-#define LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT -1
 
 int libxl_domain_sched_params_get(libxl_ctx *ctx, uint32_t domid,
                                   libxl_domain_sched_params *params);
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
old mode 100644
new mode 100755
index 53e7cb6..3e7fb60
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -44,61 +44,6 @@ int libxl__domain_create_info_setdefault(libxl__gc *gc,
     return 0;
 }
 
-static int sched_params_valid(libxl__gc *gc,
-                              uint32_t domid, libxl_domain_sched_params *scp)
-{
-    int has_weight = scp->weight != LIBXL_DOMAIN_SCHED_PARAM_WEIGHT_DEFAULT;
-    int has_period = scp->period != LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT;
-    int has_slice = scp->slice != LIBXL_DOMAIN_SCHED_PARAM_SLICE_DEFAULT;
-    int has_extratime =
-                scp->extratime != LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT;
-
-    /* The sedf scheduler needs some more consistency checking */
-    if (libxl__domain_scheduler(gc, domid) == LIBXL_SCHEDULER_SEDF) {
-        if (has_weight && (has_period || has_slice))
-            return 0;
-        /* If you want a real-time domain, with its own period and
-         * slice, please, do provide both! */
-        if (has_period != has_slice)
-            return 0;
-
-        /*
-         * Idea is, if we specify a weight, then both period and
-         * slice has to be zero. OTOH, if we do specify a period and
-         * slice, it is weight that should be zeroed. See
-         * docs/misc/sedf_scheduler_mini-HOWTO.txt for more details
-         * on the meaningful combinations and their meanings.
-         */
-        if (has_weight) {
-            scp->slice = 0;
-            scp->period = 0;
-        }
-        else if (!has_period) {
-            /* No weight nor slice/period means best effort. Parameters needs
-             * some mangling in order to properly ask for that, though. */
-
-            /*
-             * Providing no weight does not make any sense if we do not allow
-             * the domain to run in extra time. On the other hand, if we have
-             * extra time, weight will be ignored (and zeroed) by Xen, but it
-             * can't be zero here, or the call for setting the scheduling
-             * parameters will fail. So, avoid the latter by setting a random
-             * weight (namely, 1), as it will be ignored anyway.
-             */
-
-            /* We can setup a proper best effort domain (extra time only)
-             * iff we either already have or are asking for some extra time. */
-            scp->weight = has_extratime ? scp->extratime : 1;
-            scp->period = 0;
-        } else {
-            /* Real-time domain: will get slice CPU time over every period */
-            scp->weight = 0;
-        }
-    }
-
-    return 1;
-}
-
 int libxl__domain_build_info_setdefault(libxl__gc *gc,
                                         libxl_domain_build_info *b_info)
 {
@@ -752,12 +697,6 @@ static void initiate_domain_create(libxl__egc *egc,
     ret = libxl__domain_build_info_setdefault(gc, &d_config->b_info);
     if (ret) goto error_out;
 
-    if (!sched_params_valid(gc, domid, &d_config->b_info.sched_params)) {
-        LOG(ERROR, "Invalid scheduling parameters\n");
-        ret = ERROR_INVAL;
-        goto error_out;
-    }
-
     for (i = 0; i < d_config->num_disks; i++) {
         ret = libxl__device_disk_setdefault(gc, &d_config->disks[i]);
         if (ret) goto error_out;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
old mode 100644
new mode 100755
index 7d3a62b..1265a73
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -291,8 +291,6 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
     ("cap",          integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_CAP_DEFAULT'}),
     ("period",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT'}),
     ("slice",        integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_SLICE_DEFAULT'}),
-    ("latency",      integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT'}),
-    ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
     ])
 
 libxl_domain_build_info = Struct("domain_build_info",[
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
old mode 100644
new mode 100755
index 5f59bbc..4457289
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -834,10 +834,6 @@ static void parse_config_data(const char *config_source,
         b_info->sched_params.period = l;
     if (!xlu_cfg_get_long (config, "slice", &l, 0))
         b_info->sched_params.slice = l;
-    if (!xlu_cfg_get_long (config, "latency", &l, 0))
-        b_info->sched_params.latency = l;
-    if (!xlu_cfg_get_long (config, "extratime", &l, 0))
-        b_info->sched_params.extratime = l;
 
     if (!xlu_cfg_get_long (config, "vcpus", &l, 0)) {
         b_info->max_vcpus = l;
@@ -5170,22 +5166,19 @@ static int sched_sedf_domain_output(
     int rc;
 
     if (domid < 0) {
-        printf("%-33s %4s %6s %-6s %7s %5s %6s\n", "Name", "ID", "Period",
-               "Slice", "Latency", "Extra", "Weight");
+        printf("%-33s %4s %6s %-6s\n", "Name", "ID", "Period",
+               "Slice");
         return 0;
     }
     rc = sched_domain_get(LIBXL_SCHEDULER_SEDF, domid, &scinfo);
     if (rc)
         return rc;
     domname = libxl_domid_to_name(ctx, domid);
-    printf("%-33s %4d %6d %6d %7d %5d %6d\n",
+    printf("%-33s %4d %6d %6d\n",
         domname,
         domid,
         scinfo.period,
-        scinfo.slice,
-        scinfo.latency,
-        scinfo.extratime,
-        scinfo.weight);
+        scinfo.slice);
     free(domname);
     libxl_domain_sched_params_dispose(&scinfo);
     return 0;
@@ -5455,22 +5448,16 @@ int main_sched_sedf(int argc, char **argv)
     const char *cpupool = NULL;
     int period = 0, opt_p = 0;
     int slice = 0, opt_s = 0;
-    int latency = 0, opt_l = 0;
-    int extra = 0, opt_e = 0;
-    int weight = 0, opt_w = 0;
     int opt, rc;
     static struct option opts[] = {
         {"period", 1, 0, 'p'},
         {"slice", 1, 0, 's'},
-        {"latency", 1, 0, 'l'},
-        {"extra", 1, 0, 'e'},
-        {"weight", 1, 0, 'w'},
         {"cpupool", 1, 0, 'c'},
         COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    SWITCH_FOREACH_OPT(opt, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0) {
+    SWITCH_FOREACH_OPT(opt, "d:p:s:c:h", opts, "sched-sedf", 0) {
     case 'd':
         dom = optarg;
         break;
@@ -5482,36 +5469,20 @@ int main_sched_sedf(int argc, char **argv)
         slice = strtol(optarg, NULL, 10);
         opt_s = 1;
         break;
-    case 'l':
-        latency = strtol(optarg, NULL, 10);
-        opt_l = 1;
-        break;
-    case 'e':
-        extra = strtol(optarg, NULL, 10);
-        opt_e = 1;
-        break;
-    case 'w':
-        weight = strtol(optarg, NULL, 10);
-        opt_w = 1;
-        break;
     case 'c':
         cpupool = optarg;
         break;
     }
 
-    if (cpupool && (dom || opt_p || opt_s || opt_l || opt_e || opt_w)) {
+    if (cpupool && (dom || opt_p || opt_s)) {
         fprintf(stderr, "Specifying a cpupool is not allowed with other "
                 "options.\n");
         return 1;
     }
-    if (!dom && (opt_p || opt_s || opt_l || opt_e || opt_w)) {
+    if (!dom && (opt_p || opt_s)) {
         fprintf(stderr, "Must specify a domain.\n");
         return 1;
     }
-    if (opt_w && (opt_p || opt_s)) {
-        fprintf(stderr, "Specifying a weight AND period or slice is not "
-                "allowed.\n");
-    }
 
     if (!dom) { /* list all domain's credit scheduler info */
         return -sched_domain_output(LIBXL_SCHEDULER_SEDF,
@@ -5521,7 +5492,7 @@ int main_sched_sedf(int argc, char **argv)
     } else {
         uint32_t domid = find_domain(dom);
 
-        if (!opt_p && !opt_s && !opt_l && !opt_e && !opt_w) {
+        if (!opt_p && !opt_s) {
             /* output sedf scheduler info */
             sched_sedf_domain_output(-1);
             return -sched_sedf_domain_output(domid);
@@ -5538,15 +5509,6 @@ int main_sched_sedf(int argc, char **argv)
                 scinfo.slice = slice;
                 scinfo.weight = 0;
             }
-            if (opt_l)
-                scinfo.latency = latency;
-            if (opt_e)
-                scinfo.extratime = extra;
-            if (opt_w) {
-                scinfo.weight = weight;
-                scinfo.period = 0;
-                scinfo.slice = 0;
-            }
             rc = sched_domain_set(domid, &scinfo);
             libxl_domain_sched_params_dispose(&scinfo);
             if (rc)
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
old mode 100644
new mode 100755
index e8ab93a..fd49fba
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -266,12 +266,6 @@ struct cmd_spec cmd_table[] = {
       "-p MS, --period=MS             Relative deadline(ms)\n"
       "-s MS, --slice=MS              Worst-case execution time(ms).\n"
       "                               (slice < period)\n"
-      "-l MS, --latency=MS            Scaled period (ms) when domain\n"
-      "                               performs heavy I/O\n"
-      "-e FLAG, --extra=FLAG          Flag (0 or 1) controls if domain\n"
-      "                               can run in extra time\n"
-      "-w FLOAT, --weight=FLOAT       CPU Period/slice (do not set with\n"
-      "                               --period/--slice)\n"
       "-c CPUPOOL, --cpupool=CPUPOOL  Restrict output to CPUPOOL"
     },
     { "domid",
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 737bdac..aab6e5c 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1437,17 +1437,14 @@ static PyObject *pyxc_sedf_domain_set(XcObject *self,
                                       PyObject *kwds)
 {
     uint32_t domid;
-    uint64_t period, slice, latency;
-    uint16_t extratime, weight;
-    static char *kwd_list[] = { "domid", "period", "slice",
-                                "latency", "extratime", "weight",NULL };
+    uint64_t period, slice;
+    static char *kwd_list[] = { "domid", "period", "slice",NULL };
     
-    if( !PyArg_ParseTupleAndKeywords(args, kwds, "iLLLhh", kwd_list, 
-                                     &domid, &period, &slice,
-                                     &latency, &extratime, &weight) )
+    if( !PyArg_ParseTupleAndKeywords(args, kwds, "iLL", kwd_list, 
+                                     &domid, &period, &slice) )
         return NULL;
    if ( xc_sedf_domain_set(self->xc_handle, domid, period,
-                           slice, latency, extratime,weight) != 0 )
+                           slice) != 0 )
         return pyxc_error_to_exception(self->xc_handle);
 
     Py_INCREF(zero);
@@ -1457,23 +1454,19 @@ static PyObject *pyxc_sedf_domain_set(XcObject *self,
 static PyObject *pyxc_sedf_domain_get(XcObject *self, PyObject *args)
 {
     uint32_t domid;
-    uint64_t period, slice,latency;
-    uint16_t weight, extratime;
+    uint64_t period, slice;
     
     if(!PyArg_ParseTuple(args, "i", &domid))
         return NULL;
     
     if (xc_sedf_domain_get(self->xc_handle, domid, &period,
-                           &slice,&latency,&extratime,&weight))
+                           &slice))
         return pyxc_error_to_exception(self->xc_handle);
 
-    return Py_BuildValue("{s:i,s:L,s:L,s:L,s:i,s:i}",
+    return Py_BuildValue("{s:i,s:L,s:L}",
                          "domid",    domid,
                          "period",    period,
-                         "slice",     slice,
-                         "latency",   latency,
-                         "extratime", extratime,
-                         "weight",    weight);
+                         "slice",     slice);
 }
 
 static PyObject *pyxc_shadow_control(PyObject *self,
@@ -2506,26 +2499,22 @@ static PyMethodDef pyxc_methods[] = {
     { "sedf_domain_set",
       (PyCFunction)pyxc_sedf_domain_set,
       METH_KEYWORDS, "\n"
-      "Set the scheduling parameters for a domain when running with Atropos.\n"
+      "Set the scheduling parameters for a domain when running with sedf.\n"
       " dom       [int]:  domain to set\n"
       " period    [long]: domain's scheduling period\n"
       " slice     [long]: domain's slice per period\n"
-      " latency   [long]: domain's wakeup latency hint\n"
-      " extratime [int]:  domain aware of extratime?\n"
       "Returns: [int] 0 on success; -1 on error.\n" },
 
     { "sedf_domain_get",
       (PyCFunction)pyxc_sedf_domain_get,
       METH_VARARGS, "\n"
       "Get the current scheduling parameters for a domain when running with\n"
-      "the Atropos scheduler."
+      "the sedf scheduler."
       " dom       [int]: domain to query\n"
       "Returns:   [dict]\n"
       " domain    [int]: domain ID\n"
       " period    [long]: scheduler period\n"
-      " slice     [long]: CPU reservation per period\n"
-      " latency   [long]: domain's wakeup latency hint\n"
-      " extratime [int]:  domain aware of extratime?\n"},
+      " slice     [long]: CPU reservation per period\n"},
     
     { "sched_credit_domain_set",
       (PyCFunction)pyxc_sched_credit_domain_set,
diff --git a/xen/common/sched_sedf.c b/xen/common/sched_sedf.c
old mode 100644
new mode 100755
index 7c24171..6ebf72b
--- a/xen/common/sched_sedf.c
+++ b/xen/common/sched_sedf.c
@@ -38,6 +38,9 @@
 #define WEIGHT_PERIOD (MILLISECS(100))
 #define WEIGHT_SAFETY (MILLISECS(5))
 
+#define DEFAULT_PERIOD (MILLISECS(20))
+#define DEFAULT_SLICE (MILLISECS(10))
+
 #define PERIOD_MAX MILLISECS(10000) /* 10s  */
 #define PERIOD_MIN (MICROSECS(10))  /* 10us */
 #define SLICE_MIN (MICROSECS(5))    /*  5us */
@@ -320,11 +323,20 @@ static void *sedf_alloc_vdata(const struct scheduler *ops, struct vcpu *v, void
     /* Every VCPU gets an equal share of extratime by default */
     inf->deadl_abs   = 0;
     inf->latency     = 0;
-    inf->status      = EXTRA_AWARE | SEDF_ASLEEP;
-    inf->extraweight = 1;
-    /* Upon creation all domain are best-effort */
-    inf->period      = WEIGHT_PERIOD;
-    inf->slice       = 0;
+    inf->status      = SEDF_ASLEEP;
+    inf->extraweight = 0;
+
+    if (v->domain->domain_id == 0)
+    {
+        /* Domain 0, needs a slice to boot the machine */
+        inf->period      = DEFAULT_PERIOD;
+        inf->slice       = DEFAULT_SLICE;
+    }
+    else
+    {
+        inf->period      = 0;
+        inf->slice       = 0;
+    }
 
     inf->period_orig = inf->period; inf->slice_orig = inf->slice;
     INIT_LIST_HEAD(&(inf->list));
@@ -1291,92 +1303,11 @@ static void sedf_dump_cpu_state(const struct scheduler *ops, int i)
 }
 
 
-/* Adjusts periods and slices of the domains accordingly to their weights */
-static int sedf_adjust_weights(struct cpupool *c, int nr_cpus, int *sumw, s_time_t *sumt)
-{
-    struct vcpu *p;
-    struct domain      *d;
-    unsigned int        cpu;
-
-    /*
-     * Sum across all weights. Notice that no runq locking is needed
-     * here: the caller holds sedf_priv_info.lock and we're not changing
-     * anything that is accessed during scheduling.
-     */
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool( d, c )
-    {
-        for_each_vcpu( d, p )
-        {
-            if ( (cpu = p->processor) >= nr_cpus )
-                continue;
-
-            if ( EDOM_INFO(p)->weight )
-            {
-                sumw[cpu] += EDOM_INFO(p)->weight;
-            }
-            else
-            {
-                /*
-                 * Don't modify domains who don't have a weight, but sum
-                 * up the time they need, projected to a WEIGHT_PERIOD,
-                 * so that this time is not given to the weight-driven
-                 *  domains
-                 */
-
-                /* Check for overflows */
-                ASSERT((WEIGHT_PERIOD < ULONG_MAX) 
-                       && (EDOM_INFO(p)->slice_orig < ULONG_MAX));
-                sumt[cpu] += 
-                    (WEIGHT_PERIOD * EDOM_INFO(p)->slice_orig) / 
-                    EDOM_INFO(p)->period_orig;
-            }
-        }
-    }
-    rcu_read_unlock(&domlist_read_lock);
-
-    /*
-     * Adjust all slices (and periods) to the new weight. Unlike above, we
-     * need to take thr runq lock for the various VCPUs: we're modyfing
-     * slice and period which are referenced during scheduling.
-     */
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool( d, c )
-    {
-        for_each_vcpu ( d, p )
-        {
-            if ( (cpu = p->processor) >= nr_cpus )
-                continue;
-            if ( EDOM_INFO(p)->weight )
-            {
-                /* Interrupts already off */
-                spinlock_t *lock = vcpu_schedule_lock(p);
-
-                EDOM_INFO(p)->period_orig = 
-                    EDOM_INFO(p)->period  = WEIGHT_PERIOD;
-                EDOM_INFO(p)->slice_orig  =
-                    EDOM_INFO(p)->slice   = 
-                    (EDOM_INFO(p)->weight *
-                     (WEIGHT_PERIOD - WEIGHT_SAFETY - sumt[cpu])) / sumw[cpu];
-
-                vcpu_schedule_unlock(lock, p);
-            }
-        }
-    }
-    rcu_read_unlock(&domlist_read_lock);
-
-    return 0;
-}
-
-
 /* Set or fetch domain scheduling parameters */
 static int sedf_adjust(const struct scheduler *ops, struct domain *p, struct xen_domctl_scheduler_op *op)
 {
     struct sedf_priv_info *prv = SEDF_PRIV(ops);
     unsigned long flags;
-    unsigned int nr_cpus = cpumask_last(&cpu_online_map) + 1;
-    int *sumw = xzalloc_array(int, nr_cpus);
-    s_time_t *sumt = xzalloc_array(s_time_t, nr_cpus);
     struct vcpu *v;
     int rc = 0;
 
@@ -1391,99 +1322,35 @@ static int sedf_adjust(const struct scheduler *ops, struct domain *p, struct xen
 
     if ( op->cmd == XEN_DOMCTL_SCHEDOP_putinfo )
     {
-        /*
-         * These are used in sedf_adjust_weights() but have to be allocated in
-         * this function, as we need to avoid nesting xmem_pool_alloc's lock
-         * within our prv->lock.
-         */
-        if ( !sumw || !sumt )
-        {
-            /* Check for errors here, the _getinfo branch doesn't care */
-            rc = -ENOMEM;
-            goto out;
-        }
-
         /* Check for sane parameters */
-        if ( !op->u.sedf.period && !op->u.sedf.weight )
+        if ( !op->u.sedf.period )
         {
             rc = -EINVAL;
             goto out;
         }
 
-        if ( op->u.sedf.weight )
-        {
-            if ( (op->u.sedf.extratime & EXTRA_AWARE) &&
-                 (!op->u.sedf.period) )
-            {
-                /* Weight-driven domains with extratime only */
-                for_each_vcpu ( p, v )
-                {
-                    /* (Here and everywhere in the following) IRQs are already off,
-                     * hence vcpu_spin_lock() is the one. */
-                    spinlock_t *lock = vcpu_schedule_lock(v);
-
-                    EDOM_INFO(v)->extraweight = op->u.sedf.weight;
-                    EDOM_INFO(v)->weight = 0;
-                    EDOM_INFO(v)->slice = 0;
-                    EDOM_INFO(v)->period = WEIGHT_PERIOD;
-                    vcpu_schedule_unlock(lock, v);
-                }
-            }
-            else
-            {
-                /* Weight-driven domains with real-time execution */
-                for_each_vcpu ( p, v )
-                {
-                    spinlock_t *lock = vcpu_schedule_lock(v);
-
-                    EDOM_INFO(v)->weight = op->u.sedf.weight;
-                    vcpu_schedule_unlock(lock, v);
-                }
-            }
-        }
-        else
+        /*
+         * Sanity checking: note that disabling extra weight requires
+         * that we set a non-zero slice.
+         */
+        if ( (op->u.sedf.period > PERIOD_MAX) ||
+             (op->u.sedf.period < PERIOD_MIN) ||
+             (op->u.sedf.slice  > op->u.sedf.period) ||
+             (op->u.sedf.slice  < SLICE_MIN) )
         {
-            /*
-             * Sanity checking: note that disabling extra weight requires
-             * that we set a non-zero slice.
-             */
-            if ( (op->u.sedf.period > PERIOD_MAX) ||
-                 (op->u.sedf.period < PERIOD_MIN) ||
-                 (op->u.sedf.slice  > op->u.sedf.period) ||
-                 (op->u.sedf.slice  < SLICE_MIN) )
-            {
-                rc = -EINVAL;
-                goto out;
-            }
-
-            /* Time-driven domains */
-            for_each_vcpu ( p, v )
-            {
-                spinlock_t *lock = vcpu_schedule_lock(v);
-
-                EDOM_INFO(v)->weight = 0;
-                EDOM_INFO(v)->extraweight = 0;
-                EDOM_INFO(v)->period_orig = 
-                    EDOM_INFO(v)->period  = op->u.sedf.period;
-                EDOM_INFO(v)->slice_orig  = 
-                    EDOM_INFO(v)->slice   = op->u.sedf.slice;
-                vcpu_schedule_unlock(lock, v);
-            }
-        }
-
-        rc = sedf_adjust_weights(p->cpupool, nr_cpus, sumw, sumt);
-        if ( rc )
+            rc = -EINVAL;
             goto out;
+        }
 
+        /* Time-driven domains */
         for_each_vcpu ( p, v )
         {
             spinlock_t *lock = vcpu_schedule_lock(v);
 
-            EDOM_INFO(v)->status  = 
-                (EDOM_INFO(v)->status &
-                 ~EXTRA_AWARE) | (op->u.sedf.extratime & EXTRA_AWARE);
-            EDOM_INFO(v)->latency = op->u.sedf.latency;
-            extraq_check(v);
+            EDOM_INFO(v)->period_orig = 
+                EDOM_INFO(v)->period  = op->u.sedf.period;
+            EDOM_INFO(v)->slice_orig  = 
+                EDOM_INFO(v)->slice   = op->u.sedf.slice;
             vcpu_schedule_unlock(lock, v);
         }
     }
@@ -1497,17 +1364,11 @@ static int sedf_adjust(const struct scheduler *ops, struct domain *p, struct xen
 
         op->u.sedf.period    = EDOM_INFO(p->vcpu[0])->period;
         op->u.sedf.slice     = EDOM_INFO(p->vcpu[0])->slice;
-        op->u.sedf.extratime = EDOM_INFO(p->vcpu[0])->status & EXTRA_AWARE;
-        op->u.sedf.latency   = EDOM_INFO(p->vcpu[0])->latency;
-        op->u.sedf.weight    = EDOM_INFO(p->vcpu[0])->weight;
     }
 
 out:
     spin_unlock_irqrestore(&prv->lock, flags);
 
-    xfree(sumt);
-    xfree(sumw);
-
     return rc;
 }
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
old mode 100644
new mode 100755
index f22fe2e..91bcbe9
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -331,9 +331,6 @@ struct xen_domctl_scheduler_op {
         struct xen_domctl_sched_sedf {
             uint64_aligned_t period;
             uint64_aligned_t slice;
-            uint64_aligned_t latency;
-            uint32_t extratime;
-            uint32_t weight;
         } sedf;
         struct xen_domctl_sched_credit {
             uint16_t weight;
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC Patch 2/3] Remove extra queues, latency scaling, and weight support from sedf
  2014-03-14 19:13 [RFC Patch 0/3] Putting the "Simple" back in sedf Nathan Studer
  2014-03-14 19:13 ` [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support Nathan Studer
@ 2014-03-14 19:13 ` Nathan Studer
  2014-03-14 19:13 ` [RFC Patch 3/3] Fix formatting and misleading comments/variables in sedf Nathan Studer
  2014-03-14 19:22 ` [RFC Patch 0/3] Putting the "Simple" back " George Dunlap
  3 siblings, 0 replies; 19+ messages in thread
From: Nathan Studer @ 2014-03-14 19:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, Joshua Whitehead, Dario Faggioli,
	Nathan Studer

From: Nathan Studer <nate.studer@dornerworks.com>

The extra queues and latency scaling are meant to make the sedf scheduler
work conserving.  While this was useful in the past, with the advent of
the credit scheduler and cpupools it is no longer that useful.

Also remove weight support, which adds extra complexity the scheduling code
for the purpose of making setting the scheduler easier.

Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
Signed-off-by: Joshua Whitehead <josh.whitehead@dornerworks.com>
---
 xen/common/sched_sedf.c |  573 +++--------------------------------------------
 1 file changed, 34 insertions(+), 539 deletions(-)

diff --git a/xen/common/sched_sedf.c b/xen/common/sched_sedf.c
index 6ebf72b..7a827c8 100755
--- a/xen/common/sched_sedf.c
+++ b/xen/common/sched_sedf.c
@@ -25,19 +25,8 @@
 #define CHECK(_p) ((void)0)
 #endif
 
-#define EXTRA_NONE (0)
-#define EXTRA_AWARE (1)
-#define EXTRA_RUN_PEN (2)
-#define EXTRA_RUN_UTIL (4)
-#define EXTRA_WANT_PEN_Q (8)
-#define EXTRA_PEN_Q (0)
-#define EXTRA_UTIL_Q (1)
 #define SEDF_ASLEEP (16)
 
-#define EXTRA_QUANTUM (MICROSECS(500)) 
-#define WEIGHT_PERIOD (MILLISECS(100))
-#define WEIGHT_SAFETY (MILLISECS(5))
-
 #define DEFAULT_PERIOD (MILLISECS(20))
 #define DEFAULT_SLICE (MILLISECS(10))
 
@@ -61,24 +50,13 @@ struct sedf_priv_info {
 struct sedf_vcpu_info {
     struct vcpu *vcpu;
     struct list_head list;
-    struct list_head extralist[2];
  
     /* Parameters for EDF */
     s_time_t  period;  /* = relative deadline */
     s_time_t  slice;   /* = worst case execution time */
  
-    /* Advaced Parameters */
-
-    /* Latency Scaling */
-    s_time_t  period_orig;
-    s_time_t  slice_orig;
-    s_time_t  latency;
- 
     /* Status of domain */
     int       status;
-    /* Weights for "Scheduling for beginners/ lazy/ etc." ;) */
-    short     weight;
-    short     extraweight;
     /* Bookkeeping */
     s_time_t  deadl_abs;
     s_time_t  sched_start_abs;
@@ -87,28 +65,18 @@ struct sedf_vcpu_info {
     s_time_t  block_abs;
     s_time_t  unblock_abs;
  
-    /* Scores for {util, block penalty}-weighted extratime distribution */
-    int   score[2];
-    s_time_t  short_block_lost_tot;
- 
-    /* Statistics */
-    s_time_t  extra_time_tot;
-
 #ifdef SEDF_STATS
     s_time_t  block_time_tot;
     s_time_t  penalty_time_tot;
     int   block_tot;
     int   short_block_tot;
     int   long_block_tot;
-    int   pen_extra_blocks;
-    int   pen_extra_slices;
 #endif
 };
 
 struct sedf_cpu_info {
     struct list_head runnableq;
     struct list_head waitq;
-    struct list_head extraq[2];
     s_time_t         current_slice_expires;
 };
 
@@ -118,115 +86,19 @@ struct sedf_cpu_info {
 #define CPU_INFO(cpu)  \
     ((struct sedf_cpu_info *)per_cpu(schedule_data, cpu).sched_priv)
 #define LIST(d)        (&EDOM_INFO(d)->list)
-#define EXTRALIST(d,i) (&(EDOM_INFO(d)->extralist[i]))
 #define RUNQ(cpu)      (&CPU_INFO(cpu)->runnableq)
 #define WAITQ(cpu)     (&CPU_INFO(cpu)->waitq)
-#define EXTRAQ(cpu,i)  (&(CPU_INFO(cpu)->extraq[i]))
 #define IDLETASK(cpu)  (idle_vcpu[cpu])
 
 #define PERIOD_BEGIN(inf) ((inf)->deadl_abs - (inf)->period)
 
 #define DIV_UP(x,y) (((x) + (y) - 1) / y)
 
-#define extra_runs(inf)      ((inf->status) & 6)
-#define extra_get_cur_q(inf) (((inf->status & 6) >> 1)-1)
 #define sedf_runnable(edom)  (!(EDOM_INFO(edom)->status & SEDF_ASLEEP))
 
 
 static void sedf_dump_cpu_state(const struct scheduler *ops, int i);
 
-static inline int extraq_on(struct vcpu *d, int i)
-{
-    return ((EXTRALIST(d,i)->next != NULL) &&
-            (EXTRALIST(d,i)->next != EXTRALIST(d,i)));
-}
-
-static inline void extraq_add_head(struct vcpu *d, int i)
-{
-    list_add(EXTRALIST(d,i), EXTRAQ(d->processor,i));
-    ASSERT(extraq_on(d, i));
-}
-
-static inline void extraq_add_tail(struct vcpu *d, int i)
-{
-    list_add_tail(EXTRALIST(d,i), EXTRAQ(d->processor,i));
-    ASSERT(extraq_on(d, i));
-}
-
-static inline void extraq_del(struct vcpu *d, int i)
-{
-    struct list_head *list = EXTRALIST(d,i);
-    ASSERT(extraq_on(d,i));
-    list_del(list);
-    list->next = NULL;
-    ASSERT(!extraq_on(d, i));
-}
-
-/*
- * Adds a domain to the queue of processes which are aware of extra time. List
- * is sorted by score, where a lower score means higher priority for an extra
- * slice. It also updates the score, by simply subtracting a fixed value from
- * each entry, in order to avoid overflow. The algorithm works by simply
- * charging each domain that recieved extratime with an inverse of its weight.
- */ 
-static inline void extraq_add_sort_update(struct vcpu *d, int i, int sub)
-{
-    struct list_head      *cur;
-    struct sedf_vcpu_info *curinf;
- 
-    ASSERT(!extraq_on(d,i));
-
-    /*
-     * Iterate through all elements to find our "hole" and on our way
-     * update all the other scores.
-     */
-    list_for_each ( cur, EXTRAQ(d->processor, i) )
-    {
-        curinf = list_entry(cur,struct sedf_vcpu_info,extralist[i]);
-        curinf->score[i] -= sub;
-        if ( EDOM_INFO(d)->score[i] < curinf->score[i] )
-            break;
-    }
-
-    /* cur now contains the element, before which we'll enqueue */
-    list_add(EXTRALIST(d,i),cur->prev);
- 
-    /* Continue updating the extraq */
-    if ( (cur != EXTRAQ(d->processor,i)) && sub )
-    {
-        for ( cur = cur->next; cur != EXTRAQ(d->processor,i); cur = cur->next )
-        {
-            curinf = list_entry(cur,struct sedf_vcpu_info, extralist[i]);
-            curinf->score[i] -= sub;
-        }
-    }
-
-    ASSERT(extraq_on(d,i));
-}
-static inline void extraq_check(struct vcpu *d)
-{
-    if ( extraq_on(d, EXTRA_UTIL_Q) )
-    {
-        if ( !(EDOM_INFO(d)->status & EXTRA_AWARE) &&
-             !extra_runs(EDOM_INFO(d)) )
-            extraq_del(d, EXTRA_UTIL_Q);
-    }
-    else
-    {
-        if ( (EDOM_INFO(d)->status & EXTRA_AWARE) && sedf_runnable(d) )
-            extraq_add_sort_update(d, EXTRA_UTIL_Q, 0);
-    }
-}
-
-static inline void extraq_check_add_unblocked(struct vcpu *d, int priority)
-{
-    struct sedf_vcpu_info *inf = EDOM_INFO(d);
-
-    if ( inf->status & EXTRA_AWARE )
-        /* Put on the weighted extraq without updating any scores */
-        extraq_add_sort_update(d, EXTRA_UTIL_Q, 0);
-}
-
 static inline int __task_on_queue(struct vcpu *d)
 {
     return (((LIST(d))->next != NULL) && (LIST(d)->next != LIST(d)));
@@ -299,11 +171,7 @@ static inline void __add_to_runqueue_sort(struct vcpu *v)
 
 static void sedf_insert_vcpu(const struct scheduler *ops, struct vcpu *v)
 {
-    if ( !is_idle_vcpu(v) )
-    {
-        extraq_check(v);
-    }
-    else
+    if ( is_idle_vcpu(v) )
     {
         EDOM_INFO(v)->deadl_abs = 0;
         EDOM_INFO(v)->status &= ~SEDF_ASLEEP;
@@ -320,11 +188,8 @@ static void *sedf_alloc_vdata(const struct scheduler *ops, struct vcpu *v, void
 
     inf->vcpu = v;
 
-    /* Every VCPU gets an equal share of extratime by default */
     inf->deadl_abs   = 0;
-    inf->latency     = 0;
     inf->status      = SEDF_ASLEEP;
-    inf->extraweight = 0;
 
     if (v->domain->domain_id == 0)
     {
@@ -338,10 +203,7 @@ static void *sedf_alloc_vdata(const struct scheduler *ops, struct vcpu *v, void
         inf->slice       = 0;
     }
 
-    inf->period_orig = inf->period; inf->slice_orig = inf->slice;
     INIT_LIST_HEAD(&(inf->list));
-    INIT_LIST_HEAD(&(inf->extralist[EXTRA_PEN_Q]));
-    INIT_LIST_HEAD(&(inf->extralist[EXTRA_UTIL_Q]));
 
     SCHED_STAT_CRANK(vcpu_init);
 
@@ -357,8 +219,6 @@ sedf_alloc_pdata(const struct scheduler *ops, int cpu)
     BUG_ON(spc == NULL);
     INIT_LIST_HEAD(&spc->waitq);
     INIT_LIST_HEAD(&spc->runnableq);
-    INIT_LIST_HEAD(&spc->extraq[EXTRA_PEN_Q]);
-    INIT_LIST_HEAD(&spc->extraq[EXTRA_UTIL_Q]);
 
     return (void *)spc;
 }
@@ -441,20 +301,6 @@ static void desched_edf_dom(s_time_t now, struct vcpu* d)
     if ( inf->cputime >= inf->slice )
     {
         inf->cputime -= inf->slice;
-  
-        if ( inf->period < inf->period_orig )
-        {
-            /* This domain runs in latency scaling or burst mode */
-            inf->period *= 2;
-            inf->slice  *= 2;
-            if ( (inf->period > inf->period_orig) ||
-                 (inf->slice > inf->slice_orig) )
-            {
-                /* Reset slice and period */
-                inf->period = inf->period_orig;
-                inf->slice = inf->slice_orig;
-            }
-        }
 
         /* Set next deadline */
         inf->deadl_abs += inf->period;
@@ -465,18 +311,8 @@ static void desched_edf_dom(s_time_t now, struct vcpu* d)
     {
         __add_to_waitqueue_sort(d);
     }
-    else
-    {
-        /* We have a blocked realtime task -> remove it from exqs too */
-        if ( extraq_on(d, EXTRA_PEN_Q) )
-            extraq_del(d, EXTRA_PEN_Q);
-        if ( extraq_on(d, EXTRA_UTIL_Q) )
-            extraq_del(d, EXTRA_UTIL_Q);
-    }
 
     ASSERT(EQ(sedf_runnable(d), __task_on_queue(d)));
-    ASSERT(IMPLY(extraq_on(d, EXTRA_UTIL_Q) || extraq_on(d, EXTRA_PEN_Q), 
-                 sedf_runnable(d)));
 }
 
 
@@ -564,175 +400,6 @@ static void update_queues(
 }
 
 
-/*
- * removes a domain from the head of the according extraQ and
- * requeues it at a specified position:
- *   round-robin extratime: end of extraQ
- *   weighted ext.: insert in sorted list by score
- * if the domain is blocked / has regained its short-block-loss
- * time it is not put on any queue.
- */
-static void desched_extra_dom(s_time_t now, struct vcpu *d)
-{
-    struct sedf_vcpu_info *inf = EDOM_INFO(d);
-    int i = extra_get_cur_q(inf);
-    unsigned long oldscore;
-
-    ASSERT(extraq_on(d, i));
-
-    /* Unset all running flags */
-    inf->status  &= ~(EXTRA_RUN_PEN | EXTRA_RUN_UTIL);
-    /* Fresh slice for the next run */
-    inf->cputime = 0;
-    /* Accumulate total extratime */
-    inf->extra_time_tot += now - inf->sched_start_abs;
-    /* Remove extradomain from head of the queue. */
-    extraq_del(d, i);
-
-    /* Update the score */
-    oldscore = inf->score[i];
-    if ( i == EXTRA_PEN_Q )
-    {
-        /* Domain was running in L0 extraq */
-        /* reduce block lost, probably more sophistication here!*/
-        /*inf->short_block_lost_tot -= EXTRA_QUANTUM;*/
-        inf->short_block_lost_tot -= now - inf->sched_start_abs;
-#if 0
-        /* KAF: If we don't exit short-blocking state at this point
-         * domain0 can steal all CPU for up to 10 seconds before
-         * scheduling settles down (when competing against another
-         * CPU-bound domain). Doing this seems to make things behave
-         * nicely. Noone gets starved by default.
-         */
-        if ( inf->short_block_lost_tot <= 0 )
-#endif
-        {
-            /* We have (over-)compensated our block penalty */
-            inf->short_block_lost_tot = 0;
-            /* We don't want a place on the penalty queue anymore! */
-            inf->status &= ~EXTRA_WANT_PEN_Q;
-            goto check_extra_queues;
-        }
-
-        /*
-         * We have to go again for another try in the block-extraq,
-         * the score is not used incremantally here, as this is
-         * already done by recalculating the block_lost
-         */
-        inf->score[EXTRA_PEN_Q] = (inf->period << 10) /
-            inf->short_block_lost_tot;
-        oldscore = 0;
-    }
-    else
-    {
-        /*
-         * Domain was running in L1 extraq => score is inverse of
-         * utilization and is used somewhat incremental!
-         */
-        if ( !inf->extraweight )
-        {
-            /* NB: use fixed point arithmetic with 10 bits */
-            inf->score[EXTRA_UTIL_Q] = (inf->period << 10) /
-                inf->slice;
-        }
-        else
-        {
-            /*
-             * Conversion between realtime utilisation and extrawieght:
-             * full (ie 100%) utilization is equivalent to 128 extraweight
-             */
-            inf->score[EXTRA_UTIL_Q] = (1<<17) / inf->extraweight;
-        }
-    }
-
- check_extra_queues:
-    /* Adding a runnable domain to the right queue and removing blocked ones */
-    if ( sedf_runnable(d) )
-    {
-        /* Add according to score: weighted round robin */
-        if (((inf->status & EXTRA_AWARE) && (i == EXTRA_UTIL_Q)) ||
-            ((inf->status & EXTRA_WANT_PEN_Q) && (i == EXTRA_PEN_Q)))
-            extraq_add_sort_update(d, i, oldscore);
-    }
-    else
-    {
-        /* Remove this blocked domain from the waitq! */
-        __del_from_queue(d);
-        /* Make sure that we remove a blocked domain from the other
-         * extraq too. */
-        if ( i == EXTRA_PEN_Q )
-        {
-            if ( extraq_on(d, EXTRA_UTIL_Q) )
-                extraq_del(d, EXTRA_UTIL_Q);
-        }
-        else
-        {
-            if ( extraq_on(d, EXTRA_PEN_Q) )
-                extraq_del(d, EXTRA_PEN_Q);
-        }
-    }
-
-    ASSERT(EQ(sedf_runnable(d), __task_on_queue(d)));
-    ASSERT(IMPLY(extraq_on(d, EXTRA_UTIL_Q) || extraq_on(d, EXTRA_PEN_Q), 
-                 sedf_runnable(d)));
-}
-
-
-static struct task_slice sedf_do_extra_schedule(
-    s_time_t now, s_time_t end_xt, struct list_head *extraq[], int cpu)
-{
-    struct task_slice   ret = { 0 };
-    struct sedf_vcpu_info *runinf;
-    ASSERT(end_xt > now);
-
-    /* Enough time left to use for extratime? */
-    if ( end_xt - now < EXTRA_QUANTUM )
-        goto return_idle;
-
-    if ( !list_empty(extraq[EXTRA_PEN_Q]) )
-    {
-        /*
-         * We still have elements on the level 0 extraq
-         * => let those run first!
-         */
-        runinf   = list_entry(extraq[EXTRA_PEN_Q]->next, 
-                              struct sedf_vcpu_info, extralist[EXTRA_PEN_Q]);
-        runinf->status |= EXTRA_RUN_PEN;
-        ret.task = runinf->vcpu;
-        ret.time = EXTRA_QUANTUM;
-#ifdef SEDF_STATS
-        runinf->pen_extra_slices++;
-#endif
-    }
-    else
-    {
-        if ( !list_empty(extraq[EXTRA_UTIL_Q]) )
-        {
-            /* Use elements from the normal extraqueue */
-            runinf   = list_entry(extraq[EXTRA_UTIL_Q]->next,
-                                  struct sedf_vcpu_info,
-                                  extralist[EXTRA_UTIL_Q]);
-            runinf->status |= EXTRA_RUN_UTIL;
-            ret.task = runinf->vcpu;
-            ret.time = EXTRA_QUANTUM;
-        }
-        else
-            goto return_idle;
-    }
-
-    ASSERT(ret.time > 0);
-    ASSERT(sedf_runnable(ret.task));
-    return ret;
- 
- return_idle:
-    ret.task = IDLETASK(cpu);
-    ret.time = end_xt - now;
-    ASSERT(ret.time > 0);
-    ASSERT(sedf_runnable(ret.task));
-    return ret;
-}
-
-
 static int sedf_init(struct scheduler *ops)
 {
     struct sedf_priv_info *prv;
@@ -772,8 +439,6 @@ static struct task_slice sedf_do_schedule(
     struct list_head     *runq     = RUNQ(cpu);
     struct list_head     *waitq    = WAITQ(cpu);
     struct sedf_vcpu_info *inf     = EDOM_INFO(current);
-    struct list_head      *extraq[] = {
-        EXTRAQ(cpu, EXTRA_PEN_Q), EXTRAQ(cpu, EXTRA_UTIL_Q)};
     struct sedf_vcpu_info *runinf, *waitinf;
     struct task_slice      ret;
 
@@ -794,15 +459,7 @@ static struct task_slice sedf_do_schedule(
     if ( inf->status & SEDF_ASLEEP )
         inf->block_abs = now;
 
-    if ( unlikely(extra_runs(inf)) )
-    {
-        /* Special treatment of domains running in extra time */
-        desched_extra_dom(now, current);
-    }
-    else 
-    {
-        desched_edf_dom(now, current);
-    }
+    desched_edf_dom(now, current);
  check_waitq:
     update_queues(now, runq, waitq);
 
@@ -844,12 +501,9 @@ static struct task_slice sedf_do_schedule(
     else
     {
         waitinf  = list_entry(waitq->next,struct sedf_vcpu_info, list);
-        /*
-         * We could not find any suitable domain 
-         * => look for domains that are aware of extratime
-         */
-        ret = sedf_do_extra_schedule(now, PERIOD_BEGIN(waitinf),
-                                     extraq, cpu);
+
+        ret.task = IDLETASK(cpu);
+        ret.time = PERIOD_BEGIN(waitinf) - now;
     }
 
     /*
@@ -857,11 +511,8 @@ static struct task_slice sedf_do_schedule(
      * still can happen!!!
      */
     if ( ret.time < 0)
-    {
         printk("Ouch! We are seriously BEHIND schedule! %"PRIi64"\n",
                ret.time);
-        ret.time = EXTRA_QUANTUM;
-    }
 
     ret.migrated = 0;
 
@@ -872,7 +523,6 @@ static struct task_slice sedf_do_schedule(
     return ret;
 }
 
-
 static void sedf_sleep(const struct scheduler *ops, struct vcpu *d)
 {
     if ( is_idle_vcpu(d) )
@@ -888,14 +538,9 @@ static void sedf_sleep(const struct scheduler *ops, struct vcpu *d)
     {
         if ( __task_on_queue(d) )
             __del_from_queue(d);
-        if ( extraq_on(d, EXTRA_UTIL_Q) ) 
-            extraq_del(d, EXTRA_UTIL_Q);
-        if ( extraq_on(d, EXTRA_PEN_Q) )
-            extraq_del(d, EXTRA_PEN_Q);
     }
 }
 
-
 /*
  * This function wakes up a domain, i.e. moves them into the waitqueue
  * things to mention are: admission control is taking place nowhere at
@@ -928,8 +573,6 @@ static void sedf_sleep(const struct scheduler *ops, struct vcpu *d)
  *
  *     -this also doesn't disturb scheduling, but might lead to the fact, that
  *      the domain can't finish it's workload in the period
- *     -in addition to that the domain can be treated prioritised when
- *      extratime is available
  *     -addition: experiments have shown that this may have a HUGE impact on
  *      performance of other domains, becaus it can lead to excessive context
  *      switches
@@ -955,10 +598,6 @@ static void sedf_sleep(const struct scheduler *ops, struct vcpu *d)
  *      DRB______D___URRRR___D...<prev [Thread] next>
  *                       (D) <- old deadline was here
  *     -problem: deadlines don't occur isochronous anymore
- *    Part 2c (Improved Atropos design)
- *     -when a domain unblocks it is given a very short period (=latency hint)
- *      and slice length scaled accordingly
- *     -both rise again to the original value (e.g. get doubled every period)
  *
  * 3. Unconservative (i.e. incorrect)
  *     -to boost the performance of I/O dependent domains it would be possible
@@ -968,59 +607,11 @@ static void sedf_sleep(const struct scheduler *ops, struct vcpu *d)
  *     -either behaviour can lead to missed deadlines in other domains as
  *      opposed to approaches 1,2a,2b
  */
-static void unblock_short_extra_support(
+static void unblock_short_very_cons(
     struct sedf_vcpu_info* inf, s_time_t now)
 {
-    /*
-     * This unblocking scheme tries to support the domain, by assigning it
-     * a priority in extratime distribution according to the loss of time
-     * in this slice due to blocking
-     */
-    s_time_t pen;
- 
-    /* No more realtime execution in this period! */
+    /* Run at the next period. */ 
     inf->deadl_abs += inf->period;
-    if ( likely(inf->block_abs) )
-    {
-        /* Treat blocked time as consumed by the domain */
-        /*inf->cputime += now - inf->block_abs;*/
-        /*
-         * Penalty is time the domain would have
-         * had if it continued to run.
-         */
-        pen = (inf->slice - inf->cputime);
-        if ( pen < 0 )
-            pen = 0;
-        /* Accumulate all penalties over the periods */
-        /*inf->short_block_lost_tot += pen;*/
-        /* Set penalty to the current value */
-        inf->short_block_lost_tot = pen;
-        /* Not sure which one is better.. but seems to work well... */
-  
-        if ( inf->short_block_lost_tot )
-        {
-            inf->score[0] = (inf->period << 10) /
-                inf->short_block_lost_tot;
-#ifdef SEDF_STATS
-            inf->pen_extra_blocks++;
-#endif
-            if ( extraq_on(inf->vcpu, EXTRA_PEN_Q) )
-                /* Remove domain for possible resorting! */
-                extraq_del(inf->vcpu, EXTRA_PEN_Q);
-            else
-                /*
-                 * Remember that we want to be on the penalty q
-                 * so that we can continue when we (un-)block
-                 * in penalty-extratime
-                 */
-                inf->status |= EXTRA_WANT_PEN_Q;
-   
-            /* (re-)add domain to the penalty extraq */
-            extraq_add_sort_update(inf->vcpu, EXTRA_PEN_Q, 0);
-        }
-    }
-
-    /* Give it a fresh slice in the next period! */
     inf->cputime = 0;
 }
 
@@ -1034,34 +625,12 @@ static void unblock_long_cons_b(struct sedf_vcpu_info* inf,s_time_t now)
     inf->cputime = 0;
 }
 
-
-#define DOMAIN_EDF   1
-#define DOMAIN_EXTRA_PEN  2
-#define DOMAIN_EXTRA_UTIL  3
-#define DOMAIN_IDLE   4
-static inline int get_run_type(struct vcpu* d)
-{
-    struct sedf_vcpu_info* inf = EDOM_INFO(d);
-    if (is_idle_vcpu(d))
-        return DOMAIN_IDLE;
-    if (inf->status & EXTRA_RUN_PEN)
-        return DOMAIN_EXTRA_PEN;
-    if (inf->status & EXTRA_RUN_UTIL)
-        return DOMAIN_EXTRA_UTIL;
-    return DOMAIN_EDF;
-}
-
-
 /*
  * Compares two domains in the relation of whether the one is allowed to
  * interrupt the others execution.
  * It returns true (!=0) if a switch to the other domain is good.
- * Current Priority scheme is as follows:
- *  EDF > L0 (penalty based) extra-time > 
- *  L1 (utilization) extra-time > idle-domain
- * In the same class priorities are assigned as following:
+ * Priority scheme is as follows:
  *  EDF: early deadline > late deadline
- *  L0 extra-time: lower score > higher score
  */
 static inline int should_switch(struct vcpu *cur,
                                 struct vcpu *other,
@@ -1070,32 +639,17 @@ static inline int should_switch(struct vcpu *cur,
     struct sedf_vcpu_info *cur_inf, *other_inf;
     cur_inf   = EDOM_INFO(cur);
     other_inf = EDOM_INFO(other);
- 
+
+    /* Always interrupt idle domain. */
+    if ( is_idle_vcpu(cur) )
+        return 1;
+
     /* Check whether we need to make an earlier scheduling decision */
     if ( PERIOD_BEGIN(other_inf) < 
          CPU_INFO(other->processor)->current_slice_expires )
         return 1;
 
-    /* No timing-based switches need to be taken into account here */
-    switch ( get_run_type(cur) )
-    {
-    case DOMAIN_EDF:
-        /* Do not interrupt a running EDF domain */
-        return 0;
-    case DOMAIN_EXTRA_PEN:
-        /* Check whether we also want the L0 ex-q with lower score */
-        return ((other_inf->status & EXTRA_WANT_PEN_Q) &&
-                (other_inf->score[EXTRA_PEN_Q] < 
-                 cur_inf->score[EXTRA_PEN_Q]));
-    case DOMAIN_EXTRA_UTIL:
-        /* Check whether we want the L0 extraq. Don't
-         * switch if both domains want L1 extraq. */
-        return !!(other_inf->status & EXTRA_WANT_PEN_Q);
-    case DOMAIN_IDLE:
-        return 1;
-    }
-
-    return 1;
+    return 0;
 }
 
 static void sedf_wake(const struct scheduler *ops, struct vcpu *d)
@@ -1111,8 +665,6 @@ static void sedf_wake(const struct scheduler *ops, struct vcpu *d)
 
     ASSERT(!sedf_runnable(d));
     inf->status &= ~SEDF_ASLEEP;
-    ASSERT(!extraq_on(d, EXTRA_UTIL_Q));
-    ASSERT(!extraq_on(d, EXTRA_PEN_Q));
  
     if ( unlikely(inf->deadl_abs == 0) )
     {
@@ -1124,43 +676,21 @@ static void sedf_wake(const struct scheduler *ops, struct vcpu *d)
     inf->block_tot++;
 #endif
 
-    if ( unlikely(now < PERIOD_BEGIN(inf)) )
+    if ( now < inf->deadl_abs )
     {
-        /* Unblocking in extra-time! */
-        if ( inf->status & EXTRA_WANT_PEN_Q )
-        {
-            /*
-             * We have a domain that wants compensation
-             * for block penalty and did just block in
-             * its compensation time. Give it another
-             * chance!
-             */
-            extraq_add_sort_update(d, EXTRA_PEN_Q, 0);
-        }
-        extraq_check_add_unblocked(d, 0);
-    }  
-    else
-    {  
-        if ( now < inf->deadl_abs )
-        {
-            /* Short blocking */
+        /* Short blocking */
 #ifdef SEDF_STATS
-            inf->short_block_tot++;
+        inf->short_block_tot++;
 #endif
-            unblock_short_extra_support(inf, now);
-
-            extraq_check_add_unblocked(d, 1);
-        }
-        else
-        {
+        unblock_short_very_cons(inf, now);
+    }
+    else
+    {
             /* Long unblocking */
 #ifdef SEDF_STATS
-            inf->long_block_tot++;
+        inf->long_block_tot++;
 #endif
-            unblock_long_cons_b(inf, now);
-
-            extraq_check_add_unblocked(d, 1);
-        }
+        unblock_long_cons_b(inf, now);
     }
 
     if ( PERIOD_BEGIN(inf) > now )
@@ -1178,8 +708,6 @@ static void sedf_wake(const struct scheduler *ops, struct vcpu *d)
     }
 #endif
 
-    /* Sanity check: make sure each extra-aware domain IS on the util-q! */
-    ASSERT(IMPLY(inf->status & EXTRA_AWARE, extraq_on(d, EXTRA_UTIL_Q)));
     ASSERT(__task_on_queue(d));
     /*
      * Check whether the awakened task needs to invoke the do_schedule
@@ -1200,25 +728,18 @@ static void sedf_dump_domain(struct vcpu *d)
 {
     printk("%i.%i has=%c ", d->domain->domain_id, d->vcpu_id,
            d->is_running ? 'T':'F');
-    printk("p=%"PRIu64" sl=%"PRIu64" ddl=%"PRIu64" w=%hu"
-           " sc=%i xtr(%s)=%"PRIu64" ew=%hu",
-           EDOM_INFO(d)->period, EDOM_INFO(d)->slice, EDOM_INFO(d)->deadl_abs,
-           EDOM_INFO(d)->weight,
-           EDOM_INFO(d)->score[EXTRA_UTIL_Q],
-           (EDOM_INFO(d)->status & EXTRA_AWARE) ? "yes" : "no",
-           EDOM_INFO(d)->extra_time_tot, EDOM_INFO(d)->extraweight);
+    printk("p=%"PRIu64" sl=%"PRIu64" ddl=%"PRIu64,
+           EDOM_INFO(d)->period, EDOM_INFO(d)->slice, EDOM_INFO(d)->deadl_abs);
     
 #ifdef SEDF_STATS
     if ( EDOM_INFO(d)->block_time_tot != 0 )
         printk(" pen=%"PRIu64"%%", (EDOM_INFO(d)->penalty_time_tot * 100) /
                EDOM_INFO(d)->block_time_tot);
     if ( EDOM_INFO(d)->block_tot != 0 )
-        printk("\n   blks=%u sh=%u (%u%%) (shex=%i "\
-               "shexsl=%i) l=%u (%u%%) avg: b=%"PRIu64" p=%"PRIu64"",
+        printk("\n   blks=%u sh=%u (%u%%) "\
+               "l=%u (%u%%) avg: b=%"PRIu64" p=%"PRIu64"",
                EDOM_INFO(d)->block_tot, EDOM_INFO(d)->short_block_tot,
                (EDOM_INFO(d)->short_block_tot * 100) / EDOM_INFO(d)->block_tot,
-               EDOM_INFO(d)->pen_extra_blocks,
-               EDOM_INFO(d)->pen_extra_slices,
                EDOM_INFO(d)->long_block_tot,
                (EDOM_INFO(d)->long_block_tot * 100) / EDOM_INFO(d)->block_tot,
                (EDOM_INFO(d)->block_time_tot) / EDOM_INFO(d)->block_tot,
@@ -1258,30 +779,6 @@ static void sedf_dump_cpu_state(const struct scheduler *ops, int i)
         sedf_dump_domain(d_inf->vcpu);
     }
  
-    queue = EXTRAQ(i,EXTRA_PEN_Q); loop = 0;
-    printk("\nEXTRAQ (penalty) rq %lx   n: %lx, p: %lx\n",
-           (unsigned long)queue, (unsigned long) queue->next,
-           (unsigned long) queue->prev);
-    list_for_each_safe ( list, tmp, queue )
-    {
-        d_inf = list_entry(list, struct sedf_vcpu_info,
-                           extralist[EXTRA_PEN_Q]);
-        printk("%3d: ",loop++);
-        sedf_dump_domain(d_inf->vcpu);
-    }
- 
-    queue = EXTRAQ(i,EXTRA_UTIL_Q); loop = 0;
-    printk("\nEXTRAQ (utilization) rq %lx   n: %lx, p: %lx\n",
-           (unsigned long)queue, (unsigned long) queue->next,
-           (unsigned long) queue->prev);
-    list_for_each_safe ( list, tmp, queue )
-    {
-        d_inf = list_entry(list, struct sedf_vcpu_info,
-                           extralist[EXTRA_UTIL_Q]);
-        printk("%3d: ",loop++);
-        sedf_dump_domain(d_inf->vcpu);
-    }
- 
     loop = 0;
     printk("\nnot on Q\n");
 
@@ -1314,9 +811,10 @@ static int sedf_adjust(const struct scheduler *ops, struct domain *p, struct xen
     /*
      * Serialize against the pluggable scheduler lock to protect from
      * concurrent updates. We need to take the runq lock for the VCPUs
-     * as well, since we are touching extraweight, weight, slice and
-     * period. As in sched_credit2.c, runq locks nest inside the
-     * pluggable scheduler lock.
+     * as well, since we are touching slice and period. 
+     *
+     * As in sched_credit2.c, runq locks nest inside the  pluggable scheduler
+     * lock.
      */
     spin_lock_irqsave(&prv->lock, flags);
 
@@ -1330,8 +828,7 @@ static int sedf_adjust(const struct scheduler *ops, struct domain *p, struct xen
         }
 
         /*
-         * Sanity checking: note that disabling extra weight requires
-         * that we set a non-zero slice.
+         * Sanity checking
          */
         if ( (op->u.sedf.period > PERIOD_MAX) ||
              (op->u.sedf.period < PERIOD_MIN) ||
@@ -1347,10 +844,8 @@ static int sedf_adjust(const struct scheduler *ops, struct domain *p, struct xen
         {
             spinlock_t *lock = vcpu_schedule_lock(v);
 
-            EDOM_INFO(v)->period_orig = 
-                EDOM_INFO(v)->period  = op->u.sedf.period;
-            EDOM_INFO(v)->slice_orig  = 
-                EDOM_INFO(v)->slice   = op->u.sedf.slice;
+            EDOM_INFO(v)->period  = op->u.sedf.period;
+            EDOM_INFO(v)->slice   = op->u.sedf.slice;
             vcpu_schedule_unlock(lock, v);
         }
     }
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC Patch 3/3] Fix formatting and misleading comments/variables in sedf
  2014-03-14 19:13 [RFC Patch 0/3] Putting the "Simple" back in sedf Nathan Studer
  2014-03-14 19:13 ` [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support Nathan Studer
  2014-03-14 19:13 ` [RFC Patch 2/3] Remove extra queues, latency scaling, and weight support from sedf Nathan Studer
@ 2014-03-14 19:13 ` Nathan Studer
  2014-03-17 16:49   ` Dario Faggioli
  2014-03-14 19:22 ` [RFC Patch 0/3] Putting the "Simple" back " George Dunlap
  3 siblings, 1 reply; 19+ messages in thread
From: Nathan Studer @ 2014-03-14 19:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, Joshua Whitehead, Dario Faggioli,
	Nathan Studer

From: Nathan Studer <nate.studer@dornerworks.com>

Update the sedf scheduler to correct some of the more aggregious formatting
issues.  Also update some of the misleading comments/variable names.
Specifically the sedf scheduler still implies that a domain and a vcpu
are the same thing, which while true in the past is no longer the case.

Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
Signed-off-by: Joshua Whitehead <josh.whitehead@dornerworks.com>
---
 xen/common/sched_sedf.c |  278 +++++++++++++++++++++++------------------------
 1 file changed, 139 insertions(+), 139 deletions(-)

diff --git a/xen/common/sched_sedf.c b/xen/common/sched_sedf.c
index 7a827c8..16fa9f9 100755
--- a/xen/common/sched_sedf.c
+++ b/xen/common/sched_sedf.c
@@ -34,8 +34,7 @@
 #define PERIOD_MIN (MICROSECS(10))  /* 10us */
 #define SLICE_MIN (MICROSECS(5))    /*  5us */
 
-#define IMPLY(a, b) (!(a) || (b))
-#define EQ(a, b) ((!!(a)) == (!!(b)))
+#define EQ(_A, _B) ((!!(_A)) == (!!(_B)))
 
 
 struct sedf_dom_info {
@@ -55,13 +54,13 @@ struct sedf_vcpu_info {
     s_time_t  period;  /* = relative deadline */
     s_time_t  slice;   /* = worst case execution time */
  
-    /* Status of domain */
+    /* Status of vcpu */
     int       status;
     /* Bookkeeping */
     s_time_t  deadl_abs;
     s_time_t  sched_start_abs;
     s_time_t  cputime;
-    /* Times the domain un-/blocked */
+    /* Times the vcpu un-/blocked */
     s_time_t  block_abs;
     s_time_t  unblock_abs;
  
@@ -82,35 +81,35 @@ struct sedf_cpu_info {
 
 #define SEDF_PRIV(_ops) \
     ((struct sedf_priv_info *)((_ops)->sched_data))
-#define EDOM_INFO(d)   ((struct sedf_vcpu_info *)((d)->sched_priv))
-#define CPU_INFO(cpu)  \
-    ((struct sedf_cpu_info *)per_cpu(schedule_data, cpu).sched_priv)
-#define LIST(d)        (&EDOM_INFO(d)->list)
-#define RUNQ(cpu)      (&CPU_INFO(cpu)->runnableq)
-#define WAITQ(cpu)     (&CPU_INFO(cpu)->waitq)
-#define IDLETASK(cpu)  (idle_vcpu[cpu])
+#define SEDF_VCPU(_vcpu)   ((struct sedf_vcpu_info *)((_vcpu)->sched_priv))
+#define SEDF_PCPU(_cpu)  \
+    ((struct sedf_cpu_info *)per_cpu(schedule_data, _cpu).sched_priv)
+#define LIST(_vcpu)        (&SEDF_VCPU(_vcpu)->list)
+#define RUNQ(_cpu)      (&SEDF_PCPU(_cpu)->runnableq)
+#define WAITQ(_cpu)     (&SEDF_PCPU(_cpu)->waitq)
+#define IDLETASK(_cpu)  (idle_vcpu[_cpu])
 
 #define PERIOD_BEGIN(inf) ((inf)->deadl_abs - (inf)->period)
 
-#define DIV_UP(x,y) (((x) + (y) - 1) / y)
+#define DIV_UP(_X, _Y) (((_X) + (_Y) - 1) / _Y)
 
-#define sedf_runnable(edom)  (!(EDOM_INFO(edom)->status & SEDF_ASLEEP))
+#define sedf_runnable(edom)  (!(SEDF_VCPU(edom)->status & SEDF_ASLEEP))
 
 
-static void sedf_dump_cpu_state(const struct scheduler *ops, int i);
+static void sedf_dump_cpu_state(const struct scheduler *ops, int cpu);
 
-static inline int __task_on_queue(struct vcpu *d)
+static inline int __task_on_queue(struct vcpu *v)
 {
-    return (((LIST(d))->next != NULL) && (LIST(d)->next != LIST(d)));
+    return (((LIST(v))->next != NULL) && (LIST(v)->next != LIST(v)));
 }
 
-static inline void __del_from_queue(struct vcpu *d)
+static inline void __del_from_queue(struct vcpu *v)
 {
-    struct list_head *list = LIST(d);
-    ASSERT(__task_on_queue(d));
+    struct list_head *list = LIST(v);
+    ASSERT(__task_on_queue(v));
     list_del(list);
     list->next = NULL;
-    ASSERT(!__task_on_queue(d));
+    ASSERT(!__task_on_queue(v));
 }
 
 typedef int(*list_comparer)(struct list_head* el1, struct list_head* el2);
@@ -129,12 +128,12 @@ static inline void list_insert_sort(
     list_add(element, cur->prev);
 }
 
-#define DOMAIN_COMPARER(name, field, comp1, comp2)                      \
+#define VCPU_COMPARER(name, field, comp1, comp2)                      \
 static int name##_comp(struct list_head* el1, struct list_head* el2)    \
 {                                                                       \
-    struct sedf_vcpu_info *d1, *d2;                                     \
-    d1 = list_entry(el1,struct sedf_vcpu_info, field);                  \
-    d2 = list_entry(el2,struct sedf_vcpu_info, field);                  \
+    struct sedf_vcpu_info *v1, *v2;                                     \
+    v1 = list_entry(el1, struct sedf_vcpu_info, field);                  \
+    v2 = list_entry(el2, struct sedf_vcpu_info, field);                  \
     if ( (comp1) == (comp2) )                                           \
         return 0;                                                       \
     if ( (comp1) < (comp2) )                                            \
@@ -144,11 +143,11 @@ static int name##_comp(struct list_head* el1, struct list_head* el2)    \
 }
 
 /*
- * Adds a domain to the queue of processes which wait for the beginning of the
+ * Adds a vcpu to the queue of processes which wait for the beginning of the
  * next period; this list is therefore sortet by this time, which is simply
  * absol. deadline - period.
  */ 
-DOMAIN_COMPARER(waitq, list, PERIOD_BEGIN(d1), PERIOD_BEGIN(d2));
+VCPU_COMPARER(waitq, list, PERIOD_BEGIN(v1), PERIOD_BEGIN(v2));
 static inline void __add_to_waitqueue_sort(struct vcpu *v)
 {
     ASSERT(!__task_on_queue(v));
@@ -157,12 +156,12 @@ static inline void __add_to_waitqueue_sort(struct vcpu *v)
 }
 
 /*
- * Adds a domain to the queue of processes which have started their current
+ * Adds a vcpu to the queue of processes which have started their current
  * period and are runnable (i.e. not blocked, dieing,...). The first element
  * on this list is running on the processor, if the list is empty the idle
  * task will run. As we are implementing EDF, this list is sorted by deadlines.
  */ 
-DOMAIN_COMPARER(runq, list, d1->deadl_abs, d2->deadl_abs);
+VCPU_COMPARER(runq, list, v1->deadl_abs, v2->deadl_abs);
 static inline void __add_to_runqueue_sort(struct vcpu *v)
 {
     list_insert_sort(RUNQ(v->processor), LIST(v), runq_comp);
@@ -173,8 +172,8 @@ static void sedf_insert_vcpu(const struct scheduler *ops, struct vcpu *v)
 {
     if ( is_idle_vcpu(v) )
     {
-        EDOM_INFO(v)->deadl_abs = 0;
-        EDOM_INFO(v)->status &= ~SEDF_ASLEEP;
+        SEDF_VCPU(v)->deadl_abs = 0;
+        SEDF_VCPU(v)->status &= ~SEDF_ASLEEP;
     }
 }
 
@@ -274,29 +273,29 @@ static int sedf_pick_cpu(const struct scheduler *ops, struct vcpu *v)
 }
 
 /*
- * Handles the rescheduling & bookkeeping of domains running in their
+ * Handles the rescheduling & bookkeeping of vcpus running in their
  * guaranteed timeslice.
  */
-static void desched_edf_dom(s_time_t now, struct vcpu* d)
+static void desched_edf_vcpu(s_time_t now, struct vcpu *v)
 {
-    struct sedf_vcpu_info* inf = EDOM_INFO(d);
+    struct sedf_vcpu_info* inf = SEDF_VCPU(v);
 
-    /* Current domain is running in real time mode */
-    ASSERT(__task_on_queue(d));
+    /* Current vcpu is running in real time mode */
+    ASSERT(__task_on_queue(v));
 
-    /* Update the domain's cputime */
+    /* Update the vcpu's cputime */
     inf->cputime += now - inf->sched_start_abs;
 
-    /* Scheduling decisions which don't remove the running domain from
+    /* Scheduling decisions which don't remove the running vcpu from
      * the runq */
-    if ( (inf->cputime < inf->slice) && sedf_runnable(d) )
+    if ( (inf->cputime < inf->slice) && sedf_runnable(v) )
         return;
   
-    __del_from_queue(d);
+    __del_from_queue(v);
 
     /*
      * Manage bookkeeping (i.e. calculate next deadline, memorise
-     * overrun-time of slice) of finished domains.
+     * overrun-time of slice) of finished vcpus.
      */
     if ( inf->cputime >= inf->slice )
     {
@@ -306,13 +305,13 @@ static void desched_edf_dom(s_time_t now, struct vcpu* d)
         inf->deadl_abs += inf->period;
     }
  
-    /* Add a runnable domain to the waitqueue */
-    if ( sedf_runnable(d) )
+    /* Add a runnable vcpu to the waitqueue */
+    if ( sedf_runnable(v) )
     {
-        __add_to_waitqueue_sort(d);
+        __add_to_waitqueue_sort(v);
     }
 
-    ASSERT(EQ(sedf_runnable(d), __task_on_queue(d)));
+    ASSERT(EQ(sedf_runnable(v), __task_on_queue(v)));
 }
 
 
@@ -336,14 +335,14 @@ static void update_queues(
         __add_to_runqueue_sort(curinf->vcpu);
     }
  
-    /* Process the runq, find domains that are on the runq that shouldn't */
+    /* Process the runq, find vcpus that are on the runq that shouldn't */
     list_for_each_safe ( cur, tmp, runq )
     {
-        curinf = list_entry(cur,struct sedf_vcpu_info,list);
+        curinf = list_entry(cur, struct sedf_vcpu_info, list);
 
         if ( unlikely(curinf->slice == 0) )
         {
-            /* Ignore domains with empty slice */
+            /* Ignore vcpus with empty slice */
             __del_from_queue(curinf->vcpu);
 
             /* Move them to their next period */
@@ -429,8 +428,8 @@ static void sedf_deinit(const struct scheduler *ops)
  * Main scheduling function
  * Reasons for calling this function are:
  * -timeslice for the current period used up
- * -domain on waitqueue has started it's period
- * -and various others ;) in general: determine which domain to run next
+ * -vcpu on waitqueue has started it's period
+ * -and various others ;) in general: determine which vcpu to run next
  */
 static struct task_slice sedf_do_schedule(
     const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
@@ -438,7 +437,7 @@ static struct task_slice sedf_do_schedule(
     int                   cpu      = smp_processor_id();
     struct list_head     *runq     = RUNQ(cpu);
     struct list_head     *waitq    = WAITQ(cpu);
-    struct sedf_vcpu_info *inf     = EDOM_INFO(current);
+    struct sedf_vcpu_info *inf     = SEDF_VCPU(current);
     struct sedf_vcpu_info *runinf, *waitinf;
     struct task_slice      ret;
 
@@ -449,7 +448,7 @@ static struct task_slice sedf_do_schedule(
         goto check_waitq;
 
     /*
-     * Create local state of the status of the domain, in order to avoid
+     * Create local state of the status of the vcpu, in order to avoid
      * inconsistent state during scheduling decisions, because data for
      * vcpu_runnable is not protected by the scheduling lock!
      */
@@ -459,12 +458,12 @@ static struct task_slice sedf_do_schedule(
     if ( inf->status & SEDF_ASLEEP )
         inf->block_abs = now;
 
-    desched_edf_dom(now, current);
+    desched_edf_vcpu(now, current);
  check_waitq:
     update_queues(now, runq, waitq);
 
     /*
-     * Now simply pick the first domain from the runqueue, which has the
+     * Now simply pick the first vcpu from the runqueue, which has the
      * earliest deadline, because the list is sorted
      *
      * Tasklet work (which runs in idle VCPU context) overrides all else.
@@ -479,15 +478,15 @@ static struct task_slice sedf_do_schedule(
     }
     else if ( !list_empty(runq) )
     {
-        runinf   = list_entry(runq->next,struct sedf_vcpu_info,list);
+        runinf   = list_entry(runq->next, struct sedf_vcpu_info, list);
         ret.task = runinf->vcpu;
         if ( !list_empty(waitq) )
         {
             waitinf  = list_entry(waitq->next,
-                                  struct sedf_vcpu_info,list);
+                                  struct sedf_vcpu_info, list);
             /*
-             * Rerun scheduler, when scheduled domain reaches it's
-             * end of slice or the first domain from the waitqueue
+             * Rerun scheduler, when scheduled vcpu reaches it's
+             * end of slice or the first vcpu from the waitqueue
              * gets ready.
              */
             ret.time = MIN(now + runinf->slice - runinf->cputime,
@@ -500,7 +499,7 @@ static struct task_slice sedf_do_schedule(
     }
     else
     {
-        waitinf  = list_entry(waitq->next,struct sedf_vcpu_info, list);
+        waitinf  = list_entry(waitq->next, struct sedf_vcpu_info, list);
 
         ret.task = IDLETASK(cpu);
         ret.time = PERIOD_BEGIN(waitinf) - now;
@@ -516,55 +515,55 @@ static struct task_slice sedf_do_schedule(
 
     ret.migrated = 0;
 
-    EDOM_INFO(ret.task)->sched_start_abs = now;
+    SEDF_VCPU(ret.task)->sched_start_abs = now;
     CHECK(ret.time > 0);
     ASSERT(sedf_runnable(ret.task));
-    CPU_INFO(cpu)->current_slice_expires = now + ret.time;
+    SEDF_PCPU(cpu)->current_slice_expires = now + ret.time;
     return ret;
 }
 
-static void sedf_sleep(const struct scheduler *ops, struct vcpu *d)
+static void sedf_sleep(const struct scheduler *ops, struct vcpu *v)
 {
-    if ( is_idle_vcpu(d) )
+    if ( is_idle_vcpu(v) )
         return;
 
-    EDOM_INFO(d)->status |= SEDF_ASLEEP;
+    SEDF_VCPU(v)->status |= SEDF_ASLEEP;
  
-    if ( per_cpu(schedule_data, d->processor).curr == d )
+    if ( per_cpu(schedule_data, v->processor).curr == v )
     {
-        cpu_raise_softirq(d->processor, SCHEDULE_SOFTIRQ);
+        cpu_raise_softirq(v->processor, SCHEDULE_SOFTIRQ);
     }
     else
     {
-        if ( __task_on_queue(d) )
-            __del_from_queue(d);
+        if ( __task_on_queue(v) )
+            __del_from_queue(v);
     }
 }
 
 /*
- * This function wakes up a domain, i.e. moves them into the waitqueue
+ * This function wakes up a vcpu, i.e. moves them into the waitqueue
  * things to mention are: admission control is taking place nowhere at
- * the moment, so we can't be sure, whether it is safe to wake the domain
+ * the moment, so we can't be sure, whether it is safe to wake the vcpu
  * up at all. Anyway, even if it is safe (total cpu usage <=100%) there are
- * some considerations on when to allow the domain to wake up and have it's
+ * some considerations on when to allow the vcpu to wake up and have it's
  * first deadline...
  * I detected 3 cases, which could describe the possible behaviour of the
  * scheduler,
  * and I'll try to make them more clear:
  *
  * 1. Very conservative
- *     -when a blocked domain unblocks, it is allowed to start execution at
+ *     -when a blocked vcpu unblocks, it is allowed to start execution at
  *      the beginning of the next complete period
  *      (D..deadline, R..running, B..blocking/sleeping, U..unblocking/waking up
  *
  *      DRRB_____D__U_____DRRRRR___D________ ... 
  *
- *     -this causes the domain to miss a period (and a deadlline)
+ *     -this causes the vcpu to miss a period (and a deadlline)
  *     -doesn't disturb the schedule at all
  *     -deadlines keep occuring isochronous
  *
  * 2. Conservative Part 1: Short Unblocking
- *     -when a domain unblocks in the same period as it was blocked it
+ *     -when a vcpu unblocks in the same period as it was blocked it
  *      unblocks and may consume the rest of it's original time-slice minus
  *      the time it was blocked
  *      (assume period=9, slice=5)
@@ -572,16 +571,16 @@ static void sedf_sleep(const struct scheduler *ops, struct vcpu *d)
  *      DRB_UR___DRRRRR___D...
  *
  *     -this also doesn't disturb scheduling, but might lead to the fact, that
- *      the domain can't finish it's workload in the period
+ *      the vcpu can't finish it's workload in the period
  *     -addition: experiments have shown that this may have a HUGE impact on
- *      performance of other domains, becaus it can lead to excessive context
+ *      performance of other vcpus, becaus it can lead to excessive context
  *      switches
  *
  *    Part2: Long Unblocking
  *    Part 2a
  *     -it is obvious that such accounting of block time, applied when
  *      unblocking is happening in later periods, works fine aswell
- *     -the domain is treated as if it would have been running since the start
+ *     -the vcpu is treated as if it would have been running since the start
  *      of its new period
  *
  *      DRB______D___UR___D... 
@@ -600,11 +599,11 @@ static void sedf_sleep(const struct scheduler *ops, struct vcpu *d)
  *     -problem: deadlines don't occur isochronous anymore
  *
  * 3. Unconservative (i.e. incorrect)
- *     -to boost the performance of I/O dependent domains it would be possible
- *      to put the domain into the runnable queue immediately, and let it run
+ *     -to boost the performance of I/O dependent vcpus it would be possible
+ *      to put the vcpu into the runnable queue immediately, and let it run
  *      for the remainder of the slice of the current period
- *      (or even worse: allocate a new full slice for the domain) 
- *     -either behaviour can lead to missed deadlines in other domains as
+ *      (or even worse: allocate a new full slice for the vcpu) 
+ *     -either behaviour can lead to missed deadlines in other vcpus as
  *      opposed to approaches 1,2a,2b
  */
 static void unblock_short_very_cons(
@@ -616,7 +615,7 @@ static void unblock_short_very_cons(
 }
 
 
-static void unblock_long_cons_b(struct sedf_vcpu_info* inf,s_time_t now)
+static void unblock_long_cons_b(struct sedf_vcpu_info* inf, s_time_t now)
 {
     /* Conservative 2b */
 
@@ -626,9 +625,9 @@ static void unblock_long_cons_b(struct sedf_vcpu_info* inf,s_time_t now)
 }
 
 /*
- * Compares two domains in the relation of whether the one is allowed to
+ * Compares two vcpus in the relation of whether the one is allowed to
  * interrupt the others execution.
- * It returns true (!=0) if a switch to the other domain is good.
+ * It returns true (!=0) if a switch to the other vcpu is good.
  * Priority scheme is as follows:
  *  EDF: early deadline > late deadline
  */
@@ -637,33 +636,33 @@ static inline int should_switch(struct vcpu *cur,
                                 s_time_t now)
 {
     struct sedf_vcpu_info *cur_inf, *other_inf;
-    cur_inf   = EDOM_INFO(cur);
-    other_inf = EDOM_INFO(other);
+    cur_inf   = SEDF_VCPU(cur);
+    other_inf = SEDF_VCPU(other);
 
-    /* Always interrupt idle domain. */
+    /* Always interrupt idle vcpu. */
     if ( is_idle_vcpu(cur) )
         return 1;
 
     /* Check whether we need to make an earlier scheduling decision */
     if ( PERIOD_BEGIN(other_inf) < 
-         CPU_INFO(other->processor)->current_slice_expires )
+         SEDF_PCPU(other->processor)->current_slice_expires )
         return 1;
 
     return 0;
 }
 
-static void sedf_wake(const struct scheduler *ops, struct vcpu *d)
+static void sedf_wake(const struct scheduler *ops, struct vcpu *v)
 {
     s_time_t              now = NOW();
-    struct sedf_vcpu_info* inf = EDOM_INFO(d);
+    struct sedf_vcpu_info* inf = SEDF_VCPU(v);
 
-    if ( unlikely(is_idle_vcpu(d)) )
+    if ( unlikely(is_idle_vcpu(v)) )
         return;
    
-    if ( unlikely(__task_on_queue(d)) )
+    if ( unlikely(__task_on_queue(v)) )
         return;
 
-    ASSERT(!sedf_runnable(d));
+    ASSERT(!sedf_runnable(v));
     inf->status &= ~SEDF_ASLEEP;
  
     if ( unlikely(inf->deadl_abs == 0) )
@@ -694,9 +693,9 @@ static void sedf_wake(const struct scheduler *ops, struct vcpu *d)
     }
 
     if ( PERIOD_BEGIN(inf) > now )
-        __add_to_waitqueue_sort(d);
+        __add_to_waitqueue_sort(v);
     else
-        __add_to_runqueue_sort(d);
+        __add_to_runqueue_sort(v);
  
 #ifdef SEDF_STATS
     /* Do some statistics here... */
@@ -708,75 +707,76 @@ static void sedf_wake(const struct scheduler *ops, struct vcpu *d)
     }
 #endif
 
-    ASSERT(__task_on_queue(d));
+    ASSERT(__task_on_queue(v));
     /*
      * Check whether the awakened task needs to invoke the do_schedule
      * routine. Try to avoid unnecessary runs but:
      * Save approximation: Always switch to scheduler!
      */
-    ASSERT(d->processor >= 0);
-    ASSERT(d->processor < nr_cpu_ids);
-    ASSERT(per_cpu(schedule_data, d->processor).curr);
+    ASSERT(v->processor >= 0);
+    ASSERT(v->processor < nr_cpu_ids);
+    ASSERT(per_cpu(schedule_data, v->processor).curr);
 
-    if ( should_switch(per_cpu(schedule_data, d->processor).curr, d, now) )
-        cpu_raise_softirq(d->processor, SCHEDULE_SOFTIRQ);
+    if ( should_switch(per_cpu(schedule_data, v->processor).curr, v, now) )
+        cpu_raise_softirq(v->processor, SCHEDULE_SOFTIRQ);
 }
 
 
-/* Print a lot of useful information about a domains in the system */
-static void sedf_dump_domain(struct vcpu *d)
+/* Print a lot of useful information about a vcpus in the system */
+static void sedf_dump_vcpu(struct vcpu *v)
 {
-    printk("%i.%i has=%c ", d->domain->domain_id, d->vcpu_id,
-           d->is_running ? 'T':'F');
+    printk("%i.%i has=%c ", v->domain->domain_id, v->vcpu_id,
+           v->is_running ? 'T':'F');
     printk("p=%"PRIu64" sl=%"PRIu64" ddl=%"PRIu64,
-           EDOM_INFO(d)->period, EDOM_INFO(d)->slice, EDOM_INFO(d)->deadl_abs);
+           SEDF_VCPU(v)->period, SEDF_VCPU(v)->slice, SEDF_VCPU(v)->deadl_abs);
     
 #ifdef SEDF_STATS
-    if ( EDOM_INFO(d)->block_time_tot != 0 )
-        printk(" pen=%"PRIu64"%%", (EDOM_INFO(d)->penalty_time_tot * 100) /
-               EDOM_INFO(d)->block_time_tot);
-    if ( EDOM_INFO(d)->block_tot != 0 )
+    if ( SEDF_VCPU(v)->block_time_tot != 0 )
+        printk(" pen=%"PRIu64"%%", (SEDF_VCPU(v)->penalty_time_tot * 100) /
+               SEDF_VCPU(v)->block_time_tot);
+    if ( SEDF_VCPU(v)->block_tot != 0 )
         printk("\n   blks=%u sh=%u (%u%%) "\
                "l=%u (%u%%) avg: b=%"PRIu64" p=%"PRIu64"",
-               EDOM_INFO(d)->block_tot, EDOM_INFO(d)->short_block_tot,
-               (EDOM_INFO(d)->short_block_tot * 100) / EDOM_INFO(d)->block_tot,
-               EDOM_INFO(d)->long_block_tot,
-               (EDOM_INFO(d)->long_block_tot * 100) / EDOM_INFO(d)->block_tot,
-               (EDOM_INFO(d)->block_time_tot) / EDOM_INFO(d)->block_tot,
-               (EDOM_INFO(d)->penalty_time_tot) / EDOM_INFO(d)->block_tot);
+               SEDF_VCPU(v)->block_tot, SEDF_VCPU(v)->short_block_tot,
+               (SEDF_VCPU(v)->short_block_tot * 100) / SEDF_VCPU(v)->block_tot,
+               SEDF_VCPU(v)->long_block_tot,
+               (SEDF_VCPU(v)->long_block_tot * 100) / SEDF_VCPU(v)->block_tot,
+               (SEDF_VCPU(v)->block_time_tot) / SEDF_VCPU(v)->block_tot,
+               (SEDF_VCPU(v)->penalty_time_tot) / SEDF_VCPU(v)->block_tot);
 #endif
     printk("\n");
 }
 
 
-/* Dumps all domains on the specified cpu */
-static void sedf_dump_cpu_state(const struct scheduler *ops, int i)
+/* Dumps all vcpus on the specified cpu */
+static void sedf_dump_cpu_state(const struct scheduler *ops, int cpu)
 {
     struct list_head      *list, *queue, *tmp;
-    struct sedf_vcpu_info *d_inf;
+    struct sedf_vcpu_info *v_inf;
     struct domain         *d;
-    struct vcpu    *ed;
+    struct vcpu    *v;
     int loop = 0;
  
-    printk("now=%"PRIu64"\n",NOW());
-    queue = RUNQ(i);
+    printk("now=%"PRIu64"\n", NOW());
+    queue = RUNQ(cpu);
     printk("RUNQ rq %lx   n: %lx, p: %lx\n",  (unsigned long)queue,
            (unsigned long) queue->next, (unsigned long) queue->prev);
     list_for_each_safe ( list, tmp, queue )
     {
-        printk("%3d: ",loop++);
-        d_inf = list_entry(list, struct sedf_vcpu_info, list);
-        sedf_dump_domain(d_inf->vcpu);
+        printk("%3d: ", loop++);
+        v_inf = list_entry(list, struct sedf_vcpu_info, list);
+        sedf_dump_vcpu(v_inf->vcpu);
     }
  
-    queue = WAITQ(i); loop = 0;
+    queue = WAITQ(cpu); 
+    loop = 0;
     printk("\nWAITQ rq %lx   n: %lx, p: %lx\n",  (unsigned long)queue,
            (unsigned long) queue->next, (unsigned long) queue->prev);
     list_for_each_safe ( list, tmp, queue )
     {
-        printk("%3d: ",loop++);
-        d_inf = list_entry(list, struct sedf_vcpu_info, list);
-        sedf_dump_domain(d_inf->vcpu);
+        printk("%3d: ", loop++);
+        v_inf = list_entry(list, struct sedf_vcpu_info, list);
+        sedf_dump_vcpu(v_inf->vcpu);
     }
  
     loop = 0;
@@ -787,12 +787,12 @@ static void sedf_dump_cpu_state(const struct scheduler *ops, int i)
     {
         if ( (d->cpupool ? d->cpupool->sched : &sched_sedf_def) != ops )
             continue;
-        for_each_vcpu(d, ed)
+        for_each_vcpu(d, v)
         {
-            if ( !__task_on_queue(ed) && (ed->processor == i) )
+            if ( !__task_on_queue(v) && (v->processor == cpu) )
             {
-                printk("%3d: ",loop++);
-                sedf_dump_domain(ed);
+                printk("%3d: ", loop++);
+                sedf_dump_vcpu(v);
             }
         }
     }
@@ -801,7 +801,7 @@ static void sedf_dump_cpu_state(const struct scheduler *ops, int i)
 
 
 /* Set or fetch domain scheduling parameters */
-static int sedf_adjust(const struct scheduler *ops, struct domain *p, struct xen_domctl_scheduler_op *op)
+static int sedf_adjust(const struct scheduler *ops, struct domain *d, struct xen_domctl_scheduler_op *op)
 {
     struct sedf_priv_info *prv = SEDF_PRIV(ops);
     unsigned long flags;
@@ -840,25 +840,25 @@ static int sedf_adjust(const struct scheduler *ops, struct domain *p, struct xen
         }
 
         /* Time-driven domains */
-        for_each_vcpu ( p, v )
+        for_each_vcpu ( d, v )
         {
             spinlock_t *lock = vcpu_schedule_lock(v);
 
-            EDOM_INFO(v)->period  = op->u.sedf.period;
-            EDOM_INFO(v)->slice   = op->u.sedf.slice;
+            SEDF_VCPU(v)->period  = op->u.sedf.period;
+            SEDF_VCPU(v)->slice   = op->u.sedf.slice;
             vcpu_schedule_unlock(lock, v);
         }
     }
     else if ( op->cmd == XEN_DOMCTL_SCHEDOP_getinfo )
     {
-        if ( p->vcpu[0] == NULL )
+        if ( d->vcpu[0] == NULL )
         {
             rc = -EINVAL;
             goto out;
         }
 
-        op->u.sedf.period    = EDOM_INFO(p->vcpu[0])->period;
-        op->u.sedf.slice     = EDOM_INFO(p->vcpu[0])->slice;
+        op->u.sedf.period    = SEDF_VCPU(d->vcpu[0])->period;
+        op->u.sedf.slice     = SEDF_VCPU(d->vcpu[0])->slice;
     }
 
 out:
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 0/3] Putting the "Simple" back in sedf.
  2014-03-14 19:13 [RFC Patch 0/3] Putting the "Simple" back in sedf Nathan Studer
                   ` (2 preceding siblings ...)
  2014-03-14 19:13 ` [RFC Patch 3/3] Fix formatting and misleading comments/variables in sedf Nathan Studer
@ 2014-03-14 19:22 ` George Dunlap
  2014-03-14 20:13   ` Nate Studer
  3 siblings, 1 reply; 19+ messages in thread
From: George Dunlap @ 2014-03-14 19:22 UTC (permalink / raw)
  To: Nathan Studer, xen-devel
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, Ian Jackson,
	Robert VanVossen, Dario Faggioli

On 03/14/2014 07:13 PM, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
>
> With the increased interest in embedded Xen, there is a need for a suitable
> real-time scheduler.  The arinc653 scheduler currently only supports a
> single core and has limited niche appeal, while the sedf scheduler is
> widely consider deprecated and is currently a mess.
>
> Since both the CBS scheduler proposed by Dario and the schedulers of Xen-RT
> use an edf scheduler as the lowest-level scheduling mechanism, it seems
> worthwhile to start repurposing the sedf scheduler instead of creating a
> completely new scheduler.
>
> This patchset begins this repurposing by removing the extra scheduling code
> that has built up over the years, and returns the sedf scheduler to its
> simple roots.

Hey Nate,

Thanks for these patches -- what you describe at a high level, making 
sedf a suitable rts for embedded applications, sounds like a great idea.

I think what might be helpful in evaluating whether these patches are a 
good idea at the high level is a bit of a description of where you see 
this going long-term.  Can you sketch out, at a high level, what you 
envision the sedf scheduler becoming?  What kinds of parameters and 
features *will* it have?

  -George

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 0/3] Putting the "Simple" back in sedf.
  2014-03-14 19:22 ` [RFC Patch 0/3] Putting the "Simple" back " George Dunlap
@ 2014-03-14 20:13   ` Nate Studer
  2014-03-14 20:31     ` Nate Studer
  2014-03-17 15:51     ` Dario Faggioli
  0 siblings, 2 replies; 19+ messages in thread
From: Nate Studer @ 2014-03-14 20:13 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, Ian Jackson,
	Robert VanVossen, josh.whitehead, Dario Faggioli

On 3/14/2014 3:22 PM, George Dunlap wrote:
> On 03/14/2014 07:13 PM, Nathan Studer wrote:
>> From: Nathan Studer <nate.studer@dornerworks.com>
>>
>> With the increased interest in embedded Xen, there is a need for a suitable
>> real-time scheduler.  The arinc653 scheduler currently only supports a
>> single core and has limited niche appeal, while the sedf scheduler is
>> widely consider deprecated and is currently a mess.
>>
>> Since both the CBS scheduler proposed by Dario and the schedulers of Xen-RT
>> use an edf scheduler as the lowest-level scheduling mechanism, it seems
>> worthwhile to start repurposing the sedf scheduler instead of creating a
>> completely new scheduler.
>>
>> This patchset begins this repurposing by removing the extra scheduling code
>> that has built up over the years, and returns the sedf scheduler to its
>> simple roots.
> 
> Hey Nate,
> 
> Thanks for these patches -- what you describe at a high level, making 
> sedf a suitable rts for embedded applications, sounds like a great idea.
> 
> I think what might be helpful in evaluating whether these patches are a 
> good idea at the high level is a bit of a description of where you see 
> this going long-term.  Can you sketch out, at a high level, what you 
> envision the sedf scheduler becoming?  What kinds of parameters and 
> features *will* it have?

In the long term, a more extensible version of Dario's favorite scheduler, CBS
(Constant Bandwidth Server):  a selectable budgeting algorithm that sets vcpu
deadlines with the sedf scheduler on the backend scheduling the vcpu with the
earliest deadline.  Preferably it would support other budgeting algorithms as
well such as Total Bandwidth Server, etc...

The parameters for the scheduler would be the budgeting algorithm, server
budget, and the server period.  The parameters for the domains/vcpus would be
domain/vcpu budget/timeslice and domain/vcpu period.

Those are our ideas though and I know that Dario and others have ideas as well,
so any feedback is appreciated.

In the short term, we are working on upstreaming a version of the CBS scheduler.
 Dario mentored Josh Whitehead, who works with me, in implementing a crude
version of it for his undergraduate project, so we are already half-way there.

http://wiki.xen.org/wiki/Archived/GSoC_2013#Temporal_Isolation_and_Multiprocessor_Support_in_the_SEDF_Scheduler

   Nate

> 
>   -George
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 0/3] Putting the "Simple" back in sedf.
  2014-03-14 20:13   ` Nate Studer
@ 2014-03-14 20:31     ` Nate Studer
  2014-03-17 10:29       ` Dario Faggioli
  2014-03-17 15:51     ` Dario Faggioli
  1 sibling, 1 reply; 19+ messages in thread
From: Nate Studer @ 2014-03-14 20:31 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, Dario Faggioli,
	Ian Jackson, Robert VanVossen, josh.whitehead

On 3/14/2014 4:13 PM, Nate Studer wrote:
> On 3/14/2014 3:22 PM, George Dunlap wrote:
>> On 03/14/2014 07:13 PM, Nathan Studer wrote:
>>> From: Nathan Studer <nate.studer@dornerworks.com>
>>>
>>> With the increased interest in embedded Xen, there is a need for a suitable
>>> real-time scheduler.  The arinc653 scheduler currently only supports a
>>> single core and has limited niche appeal, while the sedf scheduler is
>>> widely consider deprecated and is currently a mess.
>>>
>>> Since both the CBS scheduler proposed by Dario and the schedulers of Xen-RT
>>> use an edf scheduler as the lowest-level scheduling mechanism, it seems
>>> worthwhile to start repurposing the sedf scheduler instead of creating a
>>> completely new scheduler.
>>>
>>> This patchset begins this repurposing by removing the extra scheduling code
>>> that has built up over the years, and returns the sedf scheduler to its
>>> simple roots.
>>
>> Hey Nate,
>>
>> Thanks for these patches -- what you describe at a high level, making 
>> sedf a suitable rts for embedded applications, sounds like a great idea.
>>
>> I think what might be helpful in evaluating whether these patches are a 
>> good idea at the high level is a bit of a description of where you see 
>> this going long-term.  Can you sketch out, at a high level, what you 
>> envision the sedf scheduler becoming?  What kinds of parameters and 
>> features *will* it have?
> 
> In the long term, a more extensible version of Dario's favorite scheduler, CBS
> (Constant Bandwidth Server):  a selectable budgeting algorithm that sets vcpu
> deadlines with the sedf scheduler on the backend scheduling the vcpu with the
> earliest deadline.  Preferably it would support other budgeting algorithms as
> well such as Total Bandwidth Server, etc...

Speaking of Dario, I got his e-mail address wrong. My apologies.

CC'ing his correct address.

> 
> The parameters for the scheduler would be the budgeting algorithm, server
> budget, and the server period.  The parameters for the domains/vcpus would be
> domain/vcpu budget/timeslice and domain/vcpu period.
> 
> Those are our ideas though and I know that Dario and others have ideas as well,
> so any feedback is appreciated.
> 
> In the short term, we are working on upstreaming a version of the CBS scheduler.
>  Dario mentored Josh Whitehead, who works with me, in implementing a crude
> version of it for his undergraduate project, so we are already half-way there.
> 
> http://wiki.xen.org/wiki/Archived/GSoC_2013#Temporal_Isolation_and_Multiprocessor_Support_in_the_SEDF_Scheduler
> 
>    Nate
> 
>>
>>   -George
>>
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support.
  2014-03-14 19:13 ` [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support Nathan Studer
@ 2014-03-17  8:13   ` Jan Beulich
  2014-03-17 17:02   ` Dario Faggioli
  2014-03-21 11:16   ` Ian Campbell
  2 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2014-03-17  8:13 UTC (permalink / raw)
  To: Nathan Studer
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, xen-devel, Joshua Whitehead,
	Dario Faggioli

>>> On 14.03.14 at 20:13, Nathan Studer <nate.studer@dornerworks.com> wrote:
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -331,9 +331,6 @@ struct xen_domctl_scheduler_op {
>          struct xen_domctl_sched_sedf {
>              uint64_aligned_t period;
>              uint64_aligned_t slice;
> -            uint64_aligned_t latency;
> -            uint32_t extratime;
> -            uint32_t weight;
>          } sedf;
>          struct xen_domctl_sched_credit {
>              uint16_t weight;

A purely mechanical remark: The first interface changing patch to
this file in a release cycle has to bump XEN_DOMCTL_INTERFACE_VERSION.

Jan

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 0/3] Putting the "Simple" back in sedf.
  2014-03-14 20:31     ` Nate Studer
@ 2014-03-17 10:29       ` Dario Faggioli
  0 siblings, 0 replies; 19+ messages in thread
From: Dario Faggioli @ 2014-03-17 10:29 UTC (permalink / raw)
  To: Nate Studer
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, xen-devel, josh.whitehead


[-- Attachment #1.1: Type: text/plain, Size: 1079 bytes --]

On ven, 2014-03-14 at 16:31 -0400, Nate Studer wrote:
> > In the long term, a more extensible version of Dario's favorite scheduler, CBS
> > (Constant Bandwidth Server):  a selectable budgeting algorithm that sets vcpu
> > deadlines with the sedf scheduler on the backend scheduling the vcpu with the
> > earliest deadline.  Preferably it would support other budgeting algorithms as
> > well such as Total Bandwidth Server, etc...
> 
> Speaking of Dario, I got his e-mail address wrong. My apologies.
> 
> CC'ing his correct address.
> 
Hey, no problem... I saw it on the list, I saw my name there, but did
not notice the address, and was wondering why I did not get my own
copy! :-P

Sorry I haven't replied yet, I was otherwise engaged, but I'm on it
right now! :-P

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 0/3] Putting the "Simple" back in sedf.
  2014-03-14 20:13   ` Nate Studer
  2014-03-14 20:31     ` Nate Studer
@ 2014-03-17 15:51     ` Dario Faggioli
  2014-03-17 17:01       ` Sisu Xi
  1 sibling, 1 reply; 19+ messages in thread
From: Dario Faggioli @ 2014-03-17 15:51 UTC (permalink / raw)
  To: Nate Studer
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, xen-devel, josh.whitehead,
	Dario Faggioli


[-- Attachment #1.1: Type: text/plain, Size: 4402 bytes --]

On ven, 2014-03-14 at 16:13 -0400, Nate Studer wrote:
> On 3/14/2014 3:22 PM, George Dunlap wrote:
> > 
> > Hey Nate,
> > 
> > Thanks for these patches -- what you describe at a high level, making 
> > sedf a suitable rts for embedded applications, sounds like a great idea.
> > 
It does indeed... And thanks a ton for stepping up! As said many times,
this is something I always wanted to do/make happen, but could never
find enough time for actually doing.

Having someone like you and Josh and you on board is absolutely great,
thanks again! :-)

> > I think what might be helpful in evaluating whether these patches are a 
> > good idea at the high level is a bit of a description of where you see 
> > this going long-term.  Can you sketch out, at a high level, what you 
> > envision the sedf scheduler becoming?  What kinds of parameters and 
> > features *will* it have?
> 
> In the long term, a more extensible version of Dario's favorite scheduler, CBS
> (Constant Bandwidth Server):  a selectable budgeting algorithm that sets vcpu
> deadlines with the sedf scheduler on the backend scheduling the vcpu with the
> earliest deadline.  Preferably it would support other budgeting algorithms as
> well such as Total Bandwidth Server, etc...
> 
EhEh, nicely put... One day you'll have to explain me what is it that
you like in TBS (not not mention what is it that you like more in TBS
than in CBS!) :-P

Jokes apart, the point is not the algorithm, it's the approach. Resource
reservation is the only sane way to achieve good enough (soft and hard)
real-time scheduling in complex and dynamic virtualized environment
(where ARINC would fall short). A sane and a really simple (simple to
understand, simple to modify/augment, etc.) implementation of EDF is
indeed the best basic building block we could ever have for getting
there.

Once there, we will see about what specific budgeting algorithm to adopt
and if (and I don't see why not) and how to support more than just one.

The one thing I'd be really interested, would be RT-Xen people's opinion
(and I see Sisu is copied), as I'd love to see some collaboration
happening in here, especially in this phase, when we are basically
re-architecting the whole thing! :-)

Sisu?

> The parameters for the scheduler would be the budgeting algorithm, server
> budget, and the server period.  The parameters for the domains/vcpus would be
> domain/vcpu budget/timeslice and domain/vcpu period.
> 
That sounds a good plan. At some point, I think we want to have at least
a flag to flip on and off some kind of work conserving behavior...
something like the extratime we have right now, don't you Nate (and
Josh)?

That being said, I fully support Josh's and Nate's approach of
"simplifying first". Resource reservation scheduling algorithm,
especially in multiprocessor environment, are complex to get right.
Starting from something like we have right now, which wouldn't be that
good even in UP, and trying to fix it in a backward compatible way has,
if you ask me, no chance of being successful.

So, George, I guess the interface won't, in the long run, be that
different from what we have now. It's the implementation that will
change a great deal. And to come up with a sensible implementation, I
think Nate's proposed path is the best one to follow.

> Those are our ideas though and I know that Dario and others have ideas as well,
> so any feedback is appreciated.
> 
I'll comment on the patches, but the approach is the good one, and --let
me state this again-- it's wonderful to see someone stepping up to work
on it.

> In the short term, we are working on upstreaming a version of the CBS scheduler.
>  Dario mentored Josh Whitehead, who works with me, in implementing a crude
> version of it for his undergraduate project, so we are already half-way there.
> 
> http://wiki.xen.org/wiki/Archived/GSoC_2013#Temporal_Isolation_and_Multiprocessor_Support_in_the_SEDF_Scheduler
> 
Even more glad to see that work finally trying to find a way upstream!
Thanks again to both!

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 3/3] Fix formatting and misleading comments/variables in sedf
  2014-03-14 19:13 ` [RFC Patch 3/3] Fix formatting and misleading comments/variables in sedf Nathan Studer
@ 2014-03-17 16:49   ` Dario Faggioli
  2014-03-17 17:00     ` Nate Studer
  0 siblings, 1 reply; 19+ messages in thread
From: Dario Faggioli @ 2014-03-17 16:49 UTC (permalink / raw)
  To: Nathan Studer
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, xen-devel, Joshua Whitehead,
	Dario Faggioli


[-- Attachment #1.1: Type: text/plain, Size: 2683 bytes --]

On ven, 2014-03-14 at 15:13 -0400, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
> 
> Update the sedf scheduler to correct some of the more aggregious formatting
> issues.  Also update some of the misleading comments/variable names.
> Specifically the sedf scheduler still implies that a domain and a vcpu
> are the same thing, which while true in the past is no longer the case.
> 
> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
> Signed-off-by: Joshua Whitehead <josh.whitehead@dornerworks.com>
>
Both this and the previous patch looks fine, and, as said replying to
the cover letter, are something I think we want.

I'd provide a formal Reviewed-by tag, but I guess it's not that
important, as this is an RFC.... I'll do as soon as a non-RFC series
will pop up.

One question, on what is this based? I tried to apply the series on
today's tip, and it fails :-/

This is what I get trying to apply the first patch:
checking file xen/common/sched_sedf.c
Hunk #1 FAILED at 25.
Hunk #2 succeeded at 58 (offset -3 lines).
Hunk #3 succeeded at 73 (offset -3 lines).
Hunk #4 succeeded at 94 (offset -3 lines).
Hunk #5 succeeded at 179 (offset -3 lines).
Hunk #6 FAILED at 199.
Hunk #7 succeeded at 205 with fuzz 2 (offset -12 lines).
Hunk #8 succeeded at 221 (offset -12 lines).
Hunk #9 succeeded at 303 (offset -12 lines).
Hunk #10 succeeded at 313 (offset -12 lines).
Hunk #11 succeeded at 402 (offset -12 lines).
Hunk #12 succeeded at 441 (offset -12 lines).
Hunk #13 succeeded at 461 (offset -12 lines).
Hunk #14 succeeded at 503 (offset -12 lines).
Hunk #15 succeeded at 513 (offset -12 lines).
Hunk #16 succeeded at 525 (offset -12 lines).
Hunk #17 succeeded at 540 (offset -12 lines).
Hunk #18 succeeded at 575 (offset -12 lines).
Hunk #19 succeeded at 600 (offset -12 lines).
Hunk #20 succeeded at 609 (offset -12 lines).
Hunk #21 succeeded at 627 (offset -12 lines).
Hunk #22 succeeded at 641 (offset -12 lines).
Hunk #23 succeeded at 667 (offset -12 lines).
Hunk #24 succeeded at 678 (offset -12 lines).
Hunk #25 succeeded at 710 (offset -12 lines).
Hunk #26 succeeded at 730 (offset -12 lines).
Hunk #27 succeeded at 781 (offset -12 lines).
Hunk #28 succeeded at 894 (offset 69 lines).
Hunk #29 FAILED at 842.
Hunk #30 FAILED at 859.
4 out of 30 hunks FAILED

Can you provide an updated version?

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 3/3] Fix formatting and misleading comments/variables in sedf
  2014-03-17 16:49   ` Dario Faggioli
@ 2014-03-17 17:00     ` Nate Studer
  0 siblings, 0 replies; 19+ messages in thread
From: Nate Studer @ 2014-03-17 17:00 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, xen-devel, Joshua Whitehead,
	Dario Faggioli

On 3/17/2014 12:49 PM, Dario Faggioli wrote:
> On ven, 2014-03-14 at 15:13 -0400, Nathan Studer wrote:
>> From: Nathan Studer <nate.studer@dornerworks.com>
>>
>> Update the sedf scheduler to correct some of the more aggregious formatting
>> issues.  Also update some of the misleading comments/variable names.
>> Specifically the sedf scheduler still implies that a domain and a vcpu
>> are the same thing, which while true in the past is no longer the case.
>>
>> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
>> Signed-off-by: Joshua Whitehead <josh.whitehead@dornerworks.com>
>>
> Both this and the previous patch looks fine, and, as said replying to
> the cover letter, are something I think we want.
> 
> I'd provide a formal Reviewed-by tag, but I guess it's not that
> important, as this is an RFC.... I'll do as soon as a non-RFC series
> will pop up.
> 
> One question, on what is this based? I tried to apply the series on
> today's tip, and it fails :-/

Did you apply the other two patches first?  This patch is dependent on the
previous two, since we did not want to clean-up more than we had to.

     Nate

> 
> This is what I get trying to apply the first patch:
> checking file xen/common/sched_sedf.c
> Hunk #1 FAILED at 25.
> Hunk #2 succeeded at 58 (offset -3 lines).
> Hunk #3 succeeded at 73 (offset -3 lines).
> Hunk #4 succeeded at 94 (offset -3 lines).
> Hunk #5 succeeded at 179 (offset -3 lines).
> Hunk #6 FAILED at 199.
> Hunk #7 succeeded at 205 with fuzz 2 (offset -12 lines).
> Hunk #8 succeeded at 221 (offset -12 lines).
> Hunk #9 succeeded at 303 (offset -12 lines).
> Hunk #10 succeeded at 313 (offset -12 lines).
> Hunk #11 succeeded at 402 (offset -12 lines).
> Hunk #12 succeeded at 441 (offset -12 lines).
> Hunk #13 succeeded at 461 (offset -12 lines).
> Hunk #14 succeeded at 503 (offset -12 lines).
> Hunk #15 succeeded at 513 (offset -12 lines).
> Hunk #16 succeeded at 525 (offset -12 lines).
> Hunk #17 succeeded at 540 (offset -12 lines).
> Hunk #18 succeeded at 575 (offset -12 lines).
> Hunk #19 succeeded at 600 (offset -12 lines).
> Hunk #20 succeeded at 609 (offset -12 lines).
> Hunk #21 succeeded at 627 (offset -12 lines).
> Hunk #22 succeeded at 641 (offset -12 lines).
> Hunk #23 succeeded at 667 (offset -12 lines).
> Hunk #24 succeeded at 678 (offset -12 lines).
> Hunk #25 succeeded at 710 (offset -12 lines).
> Hunk #26 succeeded at 730 (offset -12 lines).
> Hunk #27 succeeded at 781 (offset -12 lines).
> Hunk #28 succeeded at 894 (offset 69 lines).
> Hunk #29 FAILED at 842.
> Hunk #30 FAILED at 859.
> 4 out of 30 hunks FAILED
> 
> Can you provide an updated version?
> 
> Regards,
> Dario
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 0/3] Putting the "Simple" back in sedf.
  2014-03-17 15:51     ` Dario Faggioli
@ 2014-03-17 17:01       ` Sisu Xi
  0 siblings, 0 replies; 19+ messages in thread
From: Sisu Xi @ 2014-03-17 17:01 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian Campbell, Stefano Stabellini, George Dunlap, Ian Jackson,
	Robert VanVossen, xen-devel, josh.whitehead, Dario Faggioli,
	Nate Studer


[-- Attachment #1.1: Type: text/plain, Size: 5480 bytes --]

On Mon, Mar 17, 2014 at 10:51 AM, Dario Faggioli
<dario.faggioli@citrix.com>wrote:

> On ven, 2014-03-14 at 16:13 -0400, Nate Studer wrote:
> > On 3/14/2014 3:22 PM, George Dunlap wrote:
> > >
> > > Hey Nate,
> > >
> > > Thanks for these patches -- what you describe at a high level, making
> > > sedf a suitable rts for embedded applications, sounds like a great
> idea.
> > >
> It does indeed... And thanks a ton for stepping up! As said many times,
> this is something I always wanted to do/make happen, but could never
> find enough time for actually doing.
>
> Having someone like you and Josh and you on board is absolutely great,
> thanks again! :-)
>
> > > I think what might be helpful in evaluating whether these patches are a
> > > good idea at the high level is a bit of a description of where you see
> > > this going long-term.  Can you sketch out, at a high level, what you
> > > envision the sedf scheduler becoming?  What kinds of parameters and
> > > features *will* it have?
> >
> > In the long term, a more extensible version of Dario's favorite
> scheduler, CBS
> > (Constant Bandwidth Server):  a selectable budgeting algorithm that sets
> vcpu
> > deadlines with the sedf scheduler on the backend scheduling the vcpu
> with the
> > earliest deadline.  Preferably it would support other budgeting
> algorithms as
> > well such as Total Bandwidth Server, etc...
> >
> EhEh, nicely put... One day you'll have to explain me what is it that
> you like in TBS (not not mention what is it that you like more in TBS
> than in CBS!) :-P
>
> Jokes apart, the point is not the algorithm, it's the approach. Resource
> reservation is the only sane way to achieve good enough (soft and hard)
> real-time scheduling in complex and dynamic virtualized environment
> (where ARINC would fall short). A sane and a really simple (simple to
> understand, simple to modify/augment, etc.) implementation of EDF is
> indeed the best basic building block we could ever have for getting
> there.
>
> Once there, we will see about what specific budgeting algorithm to adopt
> and if (and I don't see why not) and how to support more than just one.
>
> The one thing I'd be really interested, would be RT-Xen people's opinion
> (and I see Sisu is copied), as I'd love to see some collaboration
> happening in here, especially in this phase, when we are basically
> re-architecting the whole thing! :-)
>
> Sisu?
>

Hi, Dario:

I am more than happy to collaborate with Nate on this.

Hi, Nate:

Thanks for the patches. I'll take a look at them carefully.
On our side, we have been doing RT-Xen project for a while. The current
RT-Xen supports two schedulers: rt-global (using a global runq) and
rt-partition (using one runq per pcpu). Within each scheduler, we support
both EDF and Rate monotonic. We support the deferrable server in the
current version, but also explored periodic server, polling server, and
sporadic server in the previous versions.
You can take a look at RT-Xen website:
https://sites.google.com/site/realtimexen/
The source code is available at: https://github.com/xisisu/RT-Xen

Thanks.

Sisu



>
> > The parameters for the scheduler would be the budgeting algorithm, server
> > budget, and the server period.  The parameters for the domains/vcpus
> would be
> > domain/vcpu budget/timeslice and domain/vcpu period.
> >
> That sounds a good plan. At some point, I think we want to have at least
> a flag to flip on and off some kind of work conserving behavior...
> something like the extratime we have right now, don't you Nate (and
> Josh)?
>
> That being said, I fully support Josh's and Nate's approach of
> "simplifying first". Resource reservation scheduling algorithm,
> especially in multiprocessor environment, are complex to get right.
> Starting from something like we have right now, which wouldn't be that
> good even in UP, and trying to fix it in a backward compatible way has,
> if you ask me, no chance of being successful.
>
> So, George, I guess the interface won't, in the long run, be that
> different from what we have now. It's the implementation that will
> change a great deal. And to come up with a sensible implementation, I
> think Nate's proposed path is the best one to follow.
>
> > Those are our ideas though and I know that Dario and others have ideas
> as well,
> > so any feedback is appreciated.
> >
> I'll comment on the patches, but the approach is the good one, and --let
> me state this again-- it's wonderful to see someone stepping up to work
> on it.
>
> > In the short term, we are working on upstreaming a version of the CBS
> scheduler.
> >  Dario mentored Josh Whitehead, who works with me, in implementing a
> crude
> > version of it for his undergraduate project, so we are already half-way
> there.
> >
> >
> http://wiki.xen.org/wiki/Archived/GSoC_2013#Temporal_Isolation_and_Multiprocessor_Support_in_the_SEDF_Scheduler
> >
> Even more glad to see that work finally trying to find a way upstream!
> Thanks again to both!
>
> Regards,
> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>
>


-- 
Sisu Xi, PhD Candidate

http://www.cse.wustl.edu/~xis/
Department of Computer Science and Engineering
Campus Box 1045
Washington University in St. Louis
One Brookings Drive
St. Louis, MO 63130

[-- Attachment #1.2: Type: text/html, Size: 7370 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support.
  2014-03-14 19:13 ` [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support Nathan Studer
  2014-03-17  8:13   ` Jan Beulich
@ 2014-03-17 17:02   ` Dario Faggioli
  2014-03-21 11:16   ` Ian Campbell
  2 siblings, 0 replies; 19+ messages in thread
From: Dario Faggioli @ 2014-03-17 17:02 UTC (permalink / raw)
  To: Nathan Studer
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, xen-devel, Joshua Whitehead,
	Dario Faggioli


[-- Attachment #1.1: Type: text/plain, Size: 2654 bytes --]

On ven, 2014-03-14 at 15:13 -0400, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
> 
> Remove sedf extra, weight, and latency parameters from the scheduler's adjust
> function.  Also remove the support for these parameters from the xl toolstack.
> 
So, from the code point of view, this looks okay, as the rest of the
series.

My only concern is that, at least extratime and weight, we may need in
future, so we'll be removing them here, to reintroduce them in a bit.

The reason why I think we could need them back is that, although I
concur that work conserving-ness is less important now that there are
schedulers much more suitable for general purpose workloads (credit and
credit2), it, in my experience, could come handy in real-time scenarios
too.

However, what the best interface is will only become clear in a while,
after the re-design and the re-implementation. Therefore, my take on
this would be, let's remove everything, as you're doing in here, and
perhaps add what we will end up needing when we will need it/them...
which would mean an Ack, from my side, on this patch too.

However, since we're discussing interfaces, I'd love to hear what others
think. Of course, even if we take the 'kill everything [for now]' route,
there's the matter of libxl API stability.

So, when it'll come to this, and other libxl API related changes:

> index 7d3a62b..1265a73
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -291,8 +291,6 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>      ("cap",          integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_CAP_DEFAULT'}),
>      ("period",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT'}),
>      ("slice",        integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_SLICE_DEFAULT'}),
> -    ("latency",      integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT'}),
> -    ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
>      ])
>  
We'll need the macro tricks to make it safe.

For adding fields we usually specify a suitable LIBXL_HAV_FOO symbol...
I guess in this case LIBXL_API_VERSION is more suitable? If yes, I'm not
sure how it would be best use to affect the actual struct definition, as
it originates from the IDL parser...

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support.
  2014-03-14 19:13 ` [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support Nathan Studer
  2014-03-17  8:13   ` Jan Beulich
  2014-03-17 17:02   ` Dario Faggioli
@ 2014-03-21 11:16   ` Ian Campbell
  2014-03-21 12:25     ` Nate Studer
  2 siblings, 1 reply; 19+ messages in thread
From: Ian Campbell @ 2014-03-21 11:16 UTC (permalink / raw)
  To: Nathan Studer, Ian Jackson
  Cc: Xi Sisu, Stefano Stabellini, George Dunlap, Robert VanVossen,
	xen-devel, Joshua Whitehead, Dario Faggioli

On Fri, 2014-03-14 at 15:13 -0400, Nathan Studer wrote:
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> old mode 100644
> new mode 100755
> index 4c9cd64..6be5575
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -1093,8 +1093,6 @@ int libxl_sched_credit_params_set(libxl_ctx *ctx, uint32_t poolid,
>  #define LIBXL_DOMAIN_SCHED_PARAM_CAP_DEFAULT       -1
>  #define LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT    -1
>  #define LIBXL_DOMAIN_SCHED_PARAM_SLICE_DEFAULT     -1
> -#define LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT   -1
> -#define LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT -1
>  
>  int libxl_domain_sched_params_get(libxl_ctx *ctx, uint32_t domid,
>                                    libxl_domain_sched_params *params);
[...]
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> old mode 100644
> new mode 100755
> index 7d3a62b..1265a73
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -291,8 +291,6 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>      ("cap",          integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_CAP_DEFAULT'}),
>      ("period",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT'}),
>      ("slice",        integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_SLICE_DEFAULT'}),
> -    ("latency",      integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT'}),
> -    ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
>      ])
>  
>  libxl_domain_build_info = Struct("domain_build_info",[

We need to do something about ABI compatiblity here. Please see the
comments near the top of libxl.h.

If you were adding new features, or even if replacing, then the obvious
answer would be LIBXL_HAVE_NEW_SCHED_THING (which would imply removal of
the old).

If you intend in a future non-RFC version of this series to do something
like that then we can follow that path at that time.

Otherwise two options come to mind:
	#define LIBXL_HAVE_NO_SCHED_SEDF_{LATENCY,EXTRATIME}
or
	#define LIBXL_HAVE_SCHED_SEDF_V2

I think I vaguely prefer the first.

Ian.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support.
  2014-03-21 11:16   ` Ian Campbell
@ 2014-03-21 12:25     ` Nate Studer
  2014-03-21 16:16       ` Dario Faggioli
  0 siblings, 1 reply; 19+ messages in thread
From: Nate Studer @ 2014-03-21 12:25 UTC (permalink / raw)
  To: Ian Campbell, Ian Jackson
  Cc: Xi Sisu, Stefano Stabellini, George Dunlap, Dario Faggioli,
	Robert VanVossen, xen-devel, Joshua Whitehead

On 3/21/2014 7:16 AM, Ian Campbell wrote:
> On Fri, 2014-03-14 at 15:13 -0400, Nathan Studer wrote:
>> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
>> old mode 100644
>> new mode 100755
>> index 4c9cd64..6be5575
>> --- a/tools/libxl/libxl.h
>> +++ b/tools/libxl/libxl.h
>> @@ -1093,8 +1093,6 @@ int libxl_sched_credit_params_set(libxl_ctx *ctx, uint32_t poolid,
>>  #define LIBXL_DOMAIN_SCHED_PARAM_CAP_DEFAULT       -1
>>  #define LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT    -1
>>  #define LIBXL_DOMAIN_SCHED_PARAM_SLICE_DEFAULT     -1
>> -#define LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT   -1
>> -#define LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT -1
>>  
>>  int libxl_domain_sched_params_get(libxl_ctx *ctx, uint32_t domid,
>>                                    libxl_domain_sched_params *params);
> [...]
>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>> old mode 100644
>> new mode 100755
>> index 7d3a62b..1265a73
>> --- a/tools/libxl/libxl_types.idl
>> +++ b/tools/libxl/libxl_types.idl
>> @@ -291,8 +291,6 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>>      ("cap",          integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_CAP_DEFAULT'}),
>>      ("period",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT'}),
>>      ("slice",        integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_SLICE_DEFAULT'}),
>> -    ("latency",      integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT'}),
>> -    ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
>>      ])
>>  
>>  libxl_domain_build_info = Struct("domain_build_info",[
> 
> We need to do something about ABI compatiblity here. Please see the
> comments near the top of libxl.h.
> 
> If you were adding new features, or even if replacing, then the obvious
> answer would be LIBXL_HAVE_NEW_SCHED_THING (which would imply removal of
> the old).
> 
> If you intend in a future non-RFC version of this series to do something
> like that then we can follow that path at that time.

Thanks for the information Ian.

This is the intention, so we would prefer the LIBXL_HAVE_NEW_SCHED_THING path.
It seems cleaner.

We just wanted to make sure that there were no major objections to re-purposing
the sedf scheduler before we went too far down that path, and so far we have not
seen anybody step up to defend the sedf scheduler.

    Nate
> 
> Otherwise two options come to mind:
> 	#define LIBXL_HAVE_NO_SCHED_SEDF_{LATENCY,EXTRATIME}
> or
> 	#define LIBXL_HAVE_SCHED_SEDF_V2
> 
> I think I vaguely prefer the first.
> 
> Ian.
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support.
  2014-03-21 12:25     ` Nate Studer
@ 2014-03-21 16:16       ` Dario Faggioli
  2014-03-21 16:50         ` Sisu Xi
  0 siblings, 1 reply; 19+ messages in thread
From: Dario Faggioli @ 2014-03-21 16:16 UTC (permalink / raw)
  To: Nate Studer
  Cc: Ian Campbell, Xi Sisu, Stefano Stabellini, George Dunlap,
	Ian Jackson, Robert VanVossen, xen-devel, Joshua Whitehead


[-- Attachment #1.1: Type: text/plain, Size: 1901 bytes --]

On ven, 2014-03-21 at 08:25 -0400, Nate Studer wrote:
> On 3/21/2014 7:16 AM, Ian Campbell wrote:

> > If you intend in a future non-RFC version of this series to do something
> > like that then we can follow that path at that time.
> 
> Thanks for the information Ian.
> 
> This is the intention, so we would prefer the LIBXL_HAVE_NEW_SCHED_THING path.
> It seems cleaner.
> 
> We just wanted to make sure that there were no major objections to re-purposing
> the sedf scheduler before we went too far down that path, and so far we have not
> seen anybody step up to defend the sedf scheduler.
> 
Well, TBH, seeing someone standing up to defend it as it is now would
have been very surprising, from my point of view. :-)

As I said, we really want something that is working, easier to maintain,
extensible and more advanced, and I think you're putting efforts in the
right direction" simplifying the current implementation is absolutely
necessary, given its super-broken status.

So, unless someone starts screaming really soon, I'd say "go ahead".

The one thing I'd like to see, as I already said, is whether, once we'll
have simplified it, and once it will get to enhance it (back), we could
collaborate with the RT-Xen people.

They already have an EDF scheduler which supports multiple budgetting
algorithms, so I'm hoping that we can at least learn from their
experience, if not (and let's see why not) borrow/upstream some of their
code.

If you think it would be useful, I'm up for setting up a call between
me, you, Sisu, and everyone hat is interested. Just let me know.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support.
  2014-03-21 16:16       ` Dario Faggioli
@ 2014-03-21 16:50         ` Sisu Xi
  2014-03-24 15:44           ` Dario Faggioli
  0 siblings, 1 reply; 19+ messages in thread
From: Sisu Xi @ 2014-03-21 16:50 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian Campbell, Stefano Stabellini, George Dunlap, Ian Jackson,
	Robert VanVossen, xen-devel, Joshua Whitehead, Chong Lee,
	Meng Xu, Nate Studer


[-- Attachment #1.1: Type: text/plain, Size: 2563 bytes --]

Hi, Dario:

Thanks for the coordination! It would be great if we can schedule a meeting
to discuss this.

I am widely open today, this weekend, and next Monday.
I am ccing Meng and Chong, who's my collaborator on the RT-Xen project to
see if they are interested in the meeting.

Thanks again and I look forward to talking to you!
Sisu


On Fri, Mar 21, 2014 at 11:16 AM, Dario Faggioli
<dario.faggioli@citrix.com>wrote:

> On ven, 2014-03-21 at 08:25 -0400, Nate Studer wrote:
> > On 3/21/2014 7:16 AM, Ian Campbell wrote:
>
> > > If you intend in a future non-RFC version of this series to do
> something
> > > like that then we can follow that path at that time.
> >
> > Thanks for the information Ian.
> >
> > This is the intention, so we would prefer the LIBXL_HAVE_NEW_SCHED_THING
> path.
> > It seems cleaner.
> >
> > We just wanted to make sure that there were no major objections to
> re-purposing
> > the sedf scheduler before we went too far down that path, and so far we
> have not
> > seen anybody step up to defend the sedf scheduler.
> >
> Well, TBH, seeing someone standing up to defend it as it is now would
> have been very surprising, from my point of view. :-)
>
> As I said, we really want something that is working, easier to maintain,
> extensible and more advanced, and I think you're putting efforts in the
> right direction" simplifying the current implementation is absolutely
> necessary, given its super-broken status.
>
> So, unless someone starts screaming really soon, I'd say "go ahead".
>
> The one thing I'd like to see, as I already said, is whether, once we'll
> have simplified it, and once it will get to enhance it (back), we could
> collaborate with the RT-Xen people.
>
> They already have an EDF scheduler which supports multiple budgetting
> algorithms, so I'm hoping that we can at least learn from their
> experience, if not (and let's see why not) borrow/upstream some of their
> code.
>
> If you think it would be useful, I'm up for setting up a call between
> me, you, Sisu, and everyone hat is interested. Just let me know.
>
> Regards,
> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>
>


-- 
Sisu Xi, PhD Candidate

http://www.cse.wustl.edu/~xis/
Department of Computer Science and Engineering
Campus Box 1045
Washington University in St. Louis
One Brookings Drive
St. Louis, MO 63130

[-- Attachment #1.2: Type: text/html, Size: 3486 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support.
  2014-03-21 16:50         ` Sisu Xi
@ 2014-03-24 15:44           ` Dario Faggioli
  0 siblings, 0 replies; 19+ messages in thread
From: Dario Faggioli @ 2014-03-24 15:44 UTC (permalink / raw)
  To: Sisu Xi
  Cc: Ian Campbell, Stefano Stabellini, George Dunlap, Ian Jackson,
	Robert VanVossen, xen-devel, Meng Xu, Joshua Whitehead,
	Chong Lee, Nate Studer


[-- Attachment #1.1: Type: text/plain, Size: 1051 bytes --]

On ven, 2014-03-21 at 11:50 -0500, Sisu Xi wrote:
> Hi, Dario:
> 
> 
> Thanks for the coordination! It would be great if we can schedule a
> meeting to discuss this.
> 
So, FTR, we're having the meeting and we are trying to arrange for the
details off-list. Should anyone want to be involved in it, drop me a
line.

> I am widely open today, this weekend, and next Monday. 
> I am ccing Meng and Chong, who's my collaborator on the RT-Xen project
> to see if they are interested in the meeting.
> 
EhEh... the fact that it was a bit late on Friday (at least here in
Italy!) when I saw this, and that I usually don't work over weekends,
pretty much ruled the above out. :-P

Sisu, can you reply to the off-list conversation Nate started?

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2014-03-24 15:44 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-03-14 19:13 [RFC Patch 0/3] Putting the "Simple" back in sedf Nathan Studer
2014-03-14 19:13 ` [RFC Patch 1/3] Remove sedf extra, weight, and latency parameter support Nathan Studer
2014-03-17  8:13   ` Jan Beulich
2014-03-17 17:02   ` Dario Faggioli
2014-03-21 11:16   ` Ian Campbell
2014-03-21 12:25     ` Nate Studer
2014-03-21 16:16       ` Dario Faggioli
2014-03-21 16:50         ` Sisu Xi
2014-03-24 15:44           ` Dario Faggioli
2014-03-14 19:13 ` [RFC Patch 2/3] Remove extra queues, latency scaling, and weight support from sedf Nathan Studer
2014-03-14 19:13 ` [RFC Patch 3/3] Fix formatting and misleading comments/variables in sedf Nathan Studer
2014-03-17 16:49   ` Dario Faggioli
2014-03-17 17:00     ` Nate Studer
2014-03-14 19:22 ` [RFC Patch 0/3] Putting the "Simple" back " George Dunlap
2014-03-14 20:13   ` Nate Studer
2014-03-14 20:31     ` Nate Studer
2014-03-17 10:29       ` Dario Faggioli
2014-03-17 15:51     ` Dario Faggioli
2014-03-17 17:01       ` Sisu Xi

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.