* [PATCH RFC v1] xen:rtds: towards work conserving RTDS
@ 2017-08-01 18:13 Meng Xu
  2017-08-02 17:46 ` Dario Faggioli
  0 siblings, 1 reply; 8+ messages in thread
From: Meng Xu @ 2017-08-01 18:13 UTC (permalink / raw)
  To: xen-devel; +Cc: george.dunlap, dario.faggioli, xumengpanda, Meng Xu

Make RTDS scheduler work conserving to utilize the idle resource,
without breaking the real-time guarantees.

VCPU model:
Each real-time VCPU is extended to have a work conserving flag
and a priority_level field.
When a VCPU's budget is depleted in the current period,
if it has work conserving flag set,
its priority_level will increase by 1 and its budget will be refilled;
otherwise, the VCPU will be moved to the depletedq.

Scheduling policy: modified global EDF:
A VCPU v1 has higher priority than another VCPU v2 if
(i) v1 has a smaller priority_level; or
(ii) v1 has the same priority_level but a smaller deadline.

Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
---
 xen/common/sched_rt.c | 71 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 59 insertions(+), 12 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 39f6bee..740a712 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -49,13 +49,16 @@
  * A PCPU is feasible if the VCPU can run on this PCPU and (the PCPU is idle or
  * has a lower-priority VCPU running on it.)
  *
- * Each VCPU has a dedicated period and budget.
+ * Each VCPU has a dedicated period, budget, and an is_work_conserving flag.
  * The deadline of a VCPU is at the end of each period;
  * A VCPU has its budget replenished at the beginning of each period;
  * While scheduled, a VCPU burns its budget.
  * The VCPU needs to finish its budget before its deadline in each period;
  * The VCPU discards its unused budget at the end of each period.
- * If a VCPU runs out of budget in a period, it has to wait until next period.
+ * A work conserving VCPU has is_work_conserving flag set to true;
+ * When a VCPU runs out of budget in a period, if it is work conserving,
+ * it increases its priority_level by 1 and refills its budget; otherwise,
+ * it has to wait until the next period.
  *
 * Each VCPU is implemented as a deferrable server.
  * When a VCPU has a task running on it, its budget is continuously burned;
@@ -63,7 +66,8 @@
  *
  * Queue scheme:
  * A global runqueue and a global depletedqueue for each CPU pool.
- * The runqueue holds all runnable VCPUs with budget, sorted by deadline;
+ * The runqueue holds all runnable VCPUs with budget,
+ * sorted by priority_level and deadline;
  * The depletedqueue holds all VCPUs without budget, unsorted;
  *
  * Note: cpumask and cpupool is supported.
@@ -191,6 +195,7 @@ struct rt_vcpu {
     /* VCPU parameters, in nanoseconds */
     s_time_t period;
     s_time_t budget;
+    bool_t is_work_conserving;   /* is vcpu work conserving */
 
     /* VCPU current information, in nanoseconds */
     s_time_t cur_budget;         /* current budget */
@@ -201,6 +206,8 @@ struct rt_vcpu {
     struct rt_dom *sdom;
     struct vcpu *vcpu;
 
+    unsigned priority_level;
+
     unsigned flags;              /* mark __RTDS_scheduled, etc.. */
 };
 
@@ -245,6 +252,11 @@ static inline struct list_head *rt_replq(const struct scheduler *ops)
     return &rt_priv(ops)->replq;
 }
 
+static inline bool_t is_work_conserving(const struct rt_vcpu *svc)
+{
+    return svc->is_work_conserving;
+}
+
 /*
  * Helper functions for manipulating the runqueue, the depleted queue,
  * and the replenishment events queue.
@@ -273,6 +285,20 @@ vcpu_on_replq(const struct rt_vcpu *svc)
     return !list_empty(&svc->replq_elem);
 }
 
+/* If v1 priority >= v2 priority, return value > 0
+ * Otherwise, return value < 0
+ */
+static int
+compare_vcpu_priority(const struct rt_vcpu *v1, const struct rt_vcpu *v2)
+{
+    if ( v1->priority_level < v2->priority_level ||
+         ( v1->priority_level == v2->priority_level && 
+             v1->cur_deadline <= v2->cur_deadline ) )
+            return 1;
+    else
+        return -1;
+}
+
 /*
  * Debug related code, dump vcpu/cpu information
  */
@@ -303,6 +329,7 @@ rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
     cpulist_scnprintf(keyhandler_scratch, sizeof(keyhandler_scratch), mask);
     printk("[%5d.%-2u] cpu %u, (%"PRI_stime", %"PRI_stime"),"
            " cur_b=%"PRI_stime" cur_d=%"PRI_stime" last_start=%"PRI_stime"\n"
+           " \t\t priority_level=%d work_conserving=%d\n"
            " \t\t onQ=%d runnable=%d flags=%x effective hard_affinity=%s\n",
             svc->vcpu->domain->domain_id,
             svc->vcpu->vcpu_id,
@@ -312,6 +339,8 @@ rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
             svc->cur_budget,
             svc->cur_deadline,
             svc->last_start,
+            svc->priority_level,
+            is_work_conserving(svc),
             vcpu_on_q(svc),
             vcpu_runnable(svc->vcpu),
             svc->flags,
@@ -423,15 +452,18 @@ rt_update_deadline(s_time_t now, struct rt_vcpu *svc)
      */
     svc->last_start = now;
     svc->cur_budget = svc->budget;
+    svc->priority_level = 0;
 
     /* TRACE */
     {
         struct __packed {
             unsigned vcpu:16, dom:16;
+            unsigned priority_level;
             uint64_t cur_deadline, cur_budget;
         } d;
         d.dom = svc->vcpu->domain->domain_id;
         d.vcpu = svc->vcpu->vcpu_id;
+        d.priority_level = svc->priority_level;
         d.cur_deadline = (uint64_t) svc->cur_deadline;
         d.cur_budget = (uint64_t) svc->cur_budget;
         trace_var(TRC_RTDS_BUDGET_REPLENISH, 1,
@@ -477,7 +509,7 @@ deadline_queue_insert(struct rt_vcpu * (*qelem)(struct list_head *),
     list_for_each ( iter, queue )
     {
         struct rt_vcpu * iter_svc = (*qelem)(iter);
-        if ( svc->cur_deadline <= iter_svc->cur_deadline )
+        if ( compare_vcpu_priority(svc, iter_svc) > 0 )
             break;
         pos++;
     }
@@ -537,8 +569,9 @@ runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
     ASSERT( !vcpu_on_q(svc) );
     ASSERT( vcpu_on_replq(svc) );
 
-    /* add svc to runq if svc still has budget */
-    if ( svc->cur_budget > 0 )
+    /* add svc to runq if svc still has budget or svc is work_conserving */
+    if ( svc->cur_budget > 0 ||
+         is_work_conserving(svc) )
         deadline_runq_insert(svc, &svc->q_elem, runq);
     else
         list_add(&svc->q_elem, &prv->depletedq);
@@ -857,6 +890,8 @@ rt_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
     svc->vcpu = vc;
     svc->last_start = 0;
 
+    svc->is_work_conserving = 1;
+    svc->priority_level = 0;
     svc->period = RTDS_DEFAULT_PERIOD;
     if ( !is_idle_vcpu(vc) )
         svc->budget = RTDS_DEFAULT_BUDGET;
@@ -966,8 +1001,16 @@ burn_budget(const struct scheduler *ops, struct rt_vcpu *svc, s_time_t now)
 
     if ( svc->cur_budget <= 0 )
     {
-        svc->cur_budget = 0;
-        __set_bit(__RTDS_depleted, &svc->flags);
+        if ( is_work_conserving(svc) )
+        {
+            svc->priority_level++;
+            svc->cur_budget = svc->budget;
+        }
+        else
+        {
+            svc->cur_budget = 0;
+            __set_bit(__RTDS_depleted, &svc->flags);
+        }
     }
 
     /* TRACE */
@@ -976,11 +1019,15 @@ burn_budget(const struct scheduler *ops, struct rt_vcpu *svc, s_time_t now)
             unsigned vcpu:16, dom:16;
             uint64_t cur_budget;
             int delta;
+            unsigned priority_level;
+            bool_t is_work_conserving;
         } d;
         d.dom = svc->vcpu->domain->domain_id;
         d.vcpu = svc->vcpu->vcpu_id;
         d.cur_budget = (uint64_t) svc->cur_budget;
         d.delta = delta;
+        d.priority_level = svc->priority_level;
+        d.is_work_conserving = svc->is_work_conserving;
         trace_var(TRC_RTDS_BUDGET_BURN, 1,
                   sizeof(d),
                   (unsigned char *) &d);
@@ -1088,7 +1135,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
              vcpu_runnable(current) &&
              scurr->cur_budget > 0 &&
              ( is_idle_vcpu(snext->vcpu) ||
-               scurr->cur_deadline <= snext->cur_deadline ) )
+               compare_vcpu_priority(scurr, snext) > 0 ) )
             snext = scurr;
     }
 
@@ -1198,13 +1245,13 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
         }
         iter_svc = rt_vcpu(iter_vc);
         if ( latest_deadline_vcpu == NULL ||
-             iter_svc->cur_deadline > latest_deadline_vcpu->cur_deadline )
+             compare_vcpu_priority(iter_svc, latest_deadline_vcpu) < 0 )
             latest_deadline_vcpu = iter_svc;
     }
 
     /* 3) candidate has higher priority, kick out lowest priority vcpu */
     if ( latest_deadline_vcpu != NULL &&
-         new->cur_deadline < latest_deadline_vcpu->cur_deadline )
+         compare_vcpu_priority(latest_deadline_vcpu, new) < 0 )
     {
         SCHED_STAT_CRANK(tickled_busy_cpu);
         cpu_to_tickle = latest_deadline_vcpu->vcpu->processor;
@@ -1493,7 +1540,7 @@ static void repl_timer_handler(void *data){
         {
             struct rt_vcpu *next_on_runq = q_elem(runq->next);
 
-            if ( svc->cur_deadline > next_on_runq->cur_deadline )
+            if ( compare_vcpu_priority(svc, next_on_runq) < 0 )
                 runq_tickle(ops, next_on_runq);
         }
         else if ( __test_and_clear_bit(__RTDS_depleted, &svc->flags) &&
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: [PATCH RFC v1] xen:rtds: towards work conserving RTDS
  2017-08-01 18:13 [PATCH RFC v1] xen:rtds: towards work conserving RTDS Meng Xu
@ 2017-08-02 17:46 ` Dario Faggioli
  2017-08-03  2:31   ` Meng Xu
  2017-08-05 21:35   ` Meng Xu
  0 siblings, 2 replies; 8+ messages in thread
From: Dario Faggioli @ 2017-08-02 17:46 UTC (permalink / raw)
  To: Meng Xu, xen-devel; +Cc: george.dunlap, xumengpanda



Hey, Meng!

It's really cool to see progress on this... There was quite a bit of
interest in scheduling in general at the Summit in Budapest, and one
important thing for making sure RTDS will be really useful, is for it
to have a work conserving mode! :-)

On Tue, 2017-08-01 at 14:13 -0400, Meng Xu wrote:
> Make RTDS scheduler work conserving to utilize the idle resource,
> without breaking the real-time guarantees.

Just kill the "to utilize the idle resource". We can expect that people
who are interested in this commit also know what 'work conserving'
means. :-)

> VCPU model:
> Each real-time VCPU is extended to have a work conserving flag
> and a priority_level field.
> When a VCPU's budget is depleted in the current period,
> if it has work conserving flag set,
> its priority_level will increase by 1 and its budget will be
> refilled;
> otherwise, the VCPU will be moved to the depletedq.
> 
Mmm... Ok. But is the budget burned while the vCPU executes at
priority_level 1? If yes, doesn't this mean we risk having less budget
when we get back to priority_level 0?

Oh, wait, maybe it's the case that, when we get back to priority_level
0, we also get another replenishment, is that the case? If yes, I
actually think it's fine...

> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 39f6bee..740a712 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -191,6 +195,7 @@ struct rt_vcpu {
>      /* VCPU parameters, in nanoseconds */
>      s_time_t period;
>      s_time_t budget;
> +    bool_t is_work_conserving;   /* is vcpu work conserving */
>  
>      /* VCPU current information, in nanoseconds */
>      s_time_t cur_budget;         /* current budget */
> @@ -201,6 +206,8 @@ struct rt_vcpu {
>      struct rt_dom *sdom;
>      struct vcpu *vcpu;
>  
> +    unsigned priority_level;
> +
>      unsigned flags;              /* mark __RTDS_scheduled, etc.. */
>
So, since we've got a 'flags' field already, can the flag be one of its
bit, instead of adding a new bool in the struct:

/*
 * RTDS_work_conserving: Can the vcpu run in the time that is
 * not part of any real-time reservation, and would therefore
 * be otherwise left idle?
 */
__RTDS_work_conserving       4
#define RTDS_work_conserving (1<<__RTDS_work_conserving)

> @@ -245,6 +252,11 @@ static inline struct list_head *rt_replq(const
> struct scheduler *ops)
>      return &rt_priv(ops)->replq;
>  }
>  
> +static inline bool_t is_work_conserving(const struct rt_vcpu *svc)
> +{
>
Use bool.

> @@ -273,6 +285,20 @@ vcpu_on_replq(const struct rt_vcpu *svc)
>      return !list_empty(&svc->replq_elem);
>  }
>  
> +/* If v1 priority >= v2 priority, return value > 0
> + * Otherwise, return value < 0
> + */
>
Comment style.

Apart from that, do you want this to return >0 if v1 should have
priority over v2, and <0 if vice-versa, right? If yes...

> +static int
> +compare_vcpu_priority(const struct rt_vcpu *v1, const struct rt_vcpu
> *v2)
> +{
> +    if ( v1->priority_level < v2->priority_level ||
> +         ( v1->priority_level == v2->priority_level && 
> +             v1->cur_deadline <= v2->cur_deadline ) )
> +            return 1;
> +    else
> +        return -1;
>
  int prio = v2->priority_level - v1->priority_level;

  if ( prio == 0 )
    return v2->cur_deadline - v1->cur_deadline;

  return prio;

Return type has to become s_time_t, and there's a chance that it'll
return 0, if they are at the same level, and have the same absolute
deadline. But I think you can deal with this in the caller.

> @@ -966,8 +1001,16 @@ burn_budget(const struct scheduler *ops, struct
> rt_vcpu *svc, s_time_t now)
>  
>      if ( svc->cur_budget <= 0 )
>      {
> -        svc->cur_budget = 0;
> -        __set_bit(__RTDS_depleted, &svc->flags);
> +        if ( is_work_conserving(svc) )
> +        {
> +            svc->priority_level++;
>
               ASSERT(svc->priority_level <= 1);

> +            svc->cur_budget = svc->budget;
> +        }
> +        else
> +        {
> +            svc->cur_budget = 0;
> +            __set_bit(__RTDS_depleted, &svc->flags);
> +        }
>      }
>  
The rest looks good to me.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


* Re: [PATCH RFC v1] xen:rtds: towards work conserving RTDS
  2017-08-02 17:46 ` Dario Faggioli
@ 2017-08-03  2:31   ` Meng Xu
  2017-08-05 21:35   ` Meng Xu
  1 sibling, 0 replies; 8+ messages in thread
From: Meng Xu @ 2017-08-03  2:31 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: George Dunlap, xen-devel

On Wed, Aug 2, 2017 at 1:46 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> Hey, Meng!
>
> It's really cool to see progress on this... There was quite a bit of
> interest in scheduling in general at the Summit in Budapest, and one
> important thing for making sure RTDS will be really useful, is for it
> to have a work conserving mode! :-)

Glad to hear that. :-)

>
> On Tue, 2017-08-01 at 14:13 -0400, Meng Xu wrote:
>> Make RTDS scheduler work conserving to utilize the idle resource,
>> without breaking the real-time guarantees.
>
> Just kill the "to utilize the idle resource". We can expect that people
> who are interested in this commit also know what 'work conserving'
> means. :-)

Got it. Will do.

>
>> VCPU model:
>> Each real-time VCPU is extended to have a work conserving flag
>> and a priority_level field.
>> When a VCPU's budget is depleted in the current period,
>> if it has work conserving flag set,
>> its priority_level will increase by 1 and its budget will be
>> refilled;
>> otherwise, the VCPU will be moved to the depletedq.
>>
> Mmm... Ok. But is the budget burned while the vCPU executes at
> priority_level 1? If yes, doesn't this mean we risk having less budget
> when we get back to priority_level 0?
>
> Oh, wait, maybe it's the case that, when we get back to priority_level
> 0, we also get another replenishment, is that the case? If yes, I
> actually think it's fine...

It's the latter case: the vcpu will get another replenishment when it
gets back to priority_level 0.

>
>> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
>> index 39f6bee..740a712 100644
>> --- a/xen/common/sched_rt.c
>> +++ b/xen/common/sched_rt.c
>> @@ -191,6 +195,7 @@ struct rt_vcpu {
>>      /* VCPU parameters, in nanoseconds */
>>      s_time_t period;
>>      s_time_t budget;
>> +    bool_t is_work_conserving;   /* is vcpu work conserving */
>>
>>      /* VCPU current infomation in nanosecond */
>>      s_time_t cur_budget;         /* current budget */
>> @@ -201,6 +206,8 @@ struct rt_vcpu {
>>      struct rt_dom *sdom;
>>      struct vcpu *vcpu;
>>
>> +    unsigned priority_level;
>> +
>>      unsigned flags;              /* mark __RTDS_scheduled, etc.. */
>>
> So, since we've got a 'flags' field already, can the flag be one of its
> bit, instead of adding a new bool in the struct:
>
> /*
>  * RTDS_work_conserving: Can the vcpu run in the time that is
>  * not part of any real-time reservation, and would therefore
>  * be otherwise left idle?
>  */
> __RTDS_work_conserving       4
> #define RTDS_work_conserving (1<<__RTDS_work_conserving)

Thank you very much for the suggestion! I will modify based on your suggestion.

Actually, I was not very comfortable with the is_work_conserving field either.
It makes the structure verbose and messes up the struct's cache-line
alignment.

>
>> @@ -245,6 +252,11 @@ static inline struct list_head *rt_replq(const
>> struct scheduler *ops)
>>      return &rt_priv(ops)->replq;
>>  }
>>
>> +static inline bool_t is_work_conserving(const struct rt_vcpu *svc)
>> +{
>>
> Use bool.

OK.

>
>> @@ -273,6 +285,20 @@ vcpu_on_replq(const struct rt_vcpu *svc)
>>      return !list_empty(&svc->replq_elem);
>>  }
>>
>> +/* If v1 priority >= v2 priority, return value > 0
>> + * Otherwise, return value < 0
>> + */
>>
> Comment style.

Got it. Will make it as:
/*
 * If v1 priority >= v2 priority, return value > 0
 * Otherwise, return value < 0
 */

>
> Apart from that, do you want this to return >0 if v1 should have
> priority over v2, and <0 if vice-versa, right? If yes...

Yes.

>
>> +static int
>> +compare_vcpu_priority(const struct rt_vcpu *v1, const struct rt_vcpu
>> *v2)
>> +{
>> +    if ( v1->priority_level < v2->priority_level ||
>> +         ( v1->priority_level == v2->priority_level &&
>> +             v1->cur_deadline <= v2->cur_deadline ) )
>> +            return 1;
>> +    else
>> +        return -1;
>>
>   int prio = v2->priority_level - v1->priority_level;
>
>   if ( prio == 0 )
>     return v2->cur_deadline - v1->cur_deadline;
>
>   return prio;
>
> Return type has to become s_time_t, and there's a chance that it'll
> return 0, if they are at the same level, and have the same absolute
> deadline. But I think you can deal with this in the caller.

OK. Will do.

>
>> @@ -966,8 +1001,16 @@ burn_budget(const struct scheduler *ops, struct
>> rt_vcpu *svc, s_time_t now)
>>
>>      if ( svc->cur_budget <= 0 )
>>      {
>> -        svc->cur_budget = 0;
>> -        __set_bit(__RTDS_depleted, &svc->flags);
>> +        if ( is_work_conserving(svc) )
>> +        {
>> +            svc->priority_level++;
>>
>                ASSERT(svc->priority_level <= 1);
>
>> +            svc->cur_budget = svc->budget;
>> +        }
>> +        else
>> +        {
>> +            svc->cur_budget = 0;
>> +            __set_bit(__RTDS_depleted, &svc->flags);
>> +        }
>>      }
>>
> The rest looks good to me.

Thank you very much for the review!

I will revise it and combine this patch into the series of the RTDS
work-conserving patches.
Once I receive your comments on the rest of patches, I will send
another version of this patch set.

Thanks and best regards,

Meng

-----------
Meng Xu
PhD Candidate in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [PATCH RFC v1] xen:rtds: towards work conserving RTDS
  2017-08-02 17:46 ` Dario Faggioli
  2017-08-03  2:31   ` Meng Xu
@ 2017-08-05 21:35   ` Meng Xu
  2017-08-07 17:35     ` Dario Faggioli
  1 sibling, 1 reply; 8+ messages in thread
From: Meng Xu @ 2017-08-05 21:35 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: George Dunlap, xen-devel

>
>> @@ -966,8 +1001,16 @@ burn_budget(const struct scheduler *ops, struct
>> rt_vcpu *svc, s_time_t now)
>>
>>      if ( svc->cur_budget <= 0 )
>>      {
>> -        svc->cur_budget = 0;
>> -        __set_bit(__RTDS_depleted, &svc->flags);
>> +        if ( is_work_conserving(svc) )
>> +        {
>> +            svc->priority_level++;
>>
>                ASSERT(svc->priority_level <= 1);

I'm sorry I didn't see this suggestion in the previous email. I don't
think this assert makes sense.

A vcpu that has extratime can have priority_level > 1.
For example, a VCPU (period = 100ms, budget = 10ms) runs alone on a
core. The VCPU may get its budget replenished 9 times in a
period, so the vcpu's priority_level may be 9.

The priority_level here also indicates how many times the VCPU gets
the extra budget in the current period.

>
>> +            svc->cur_budget = svc->budget;
>> +        }
>> +        else
>> +        {
>> +            svc->cur_budget = 0;
>> +            __set_bit(__RTDS_depleted, &svc->flags);
>> +        }
>>      }

Thanks,

Meng

-----------
Meng Xu
PhD Candidate in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [PATCH RFC v1] xen:rtds: towards work conserving RTDS
  2017-08-05 21:35   ` Meng Xu
@ 2017-08-07 17:35     ` Dario Faggioli
  2017-08-07 18:27       ` Meng Xu
  0 siblings, 1 reply; 8+ messages in thread
From: Dario Faggioli @ 2017-08-07 17:35 UTC (permalink / raw)
  To: Meng Xu; +Cc: George Dunlap, xen-devel



On Sat, 2017-08-05 at 17:35 -0400, Meng Xu wrote:
> > 
> > > @@ -966,8 +1001,16 @@ burn_budget(const struct scheduler *ops,
> > > struct
> > > rt_vcpu *svc, s_time_t now)
> > > 
> > >      if ( svc->cur_budget <= 0 )
> > >      {
> > > -        svc->cur_budget = 0;
> > > -        __set_bit(__RTDS_depleted, &svc->flags);
> > > +        if ( is_work_conserving(svc) )
> > > +        {
> > > +            svc->priority_level++;
> > > 
> > 
> >                ASSERT(svc->priority_level <= 1);
> 
> I'm sorry I didn't see this suggestion in previous email. I don't
> think this assert makes sense.
> 
> A vcpu that has extratime can have priority_level > 1.
> For example, a VCPU (period = 100ms, budget = 10ms) runs alone on a
>> core. The VCPU may get its budget replenished 9 times in a
>> period, so the vcpu's priority_level may be 9.
> 
Ah, ok. Yes, I missed this, while I see this now.

But doesn't this mean that, at a certain time t, between two vCPUs that
are both in 'extratime mode' (i.e., they've run out of budget, but
they're running because they have extratime set), the one that has
received fewer replenishments gets priority?

Is this wanted or expected?

Basically, if I'm not wrong, this means that the actual priority,
during the extratime phase, is some combination of deadline and budget
(which would make me think of utilization)... is this the case?

I don't care much about the actual schedule during the extratime phase,
in the sense that it doesn't have to be anything too complicated or
super advanced... but I at least would like:
- to know how it works, and hence what to expect,
- for it to be roughly fair.

Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


* Re: [PATCH RFC v1] xen:rtds: towards work conserving RTDS
  2017-08-07 17:35     ` Dario Faggioli
@ 2017-08-07 18:27       ` Meng Xu
  2017-08-07 19:14         ` Dario Faggioli
  0 siblings, 1 reply; 8+ messages in thread
From: Meng Xu @ 2017-08-07 18:27 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: George Dunlap, xen-devel

On Mon, Aug 7, 2017 at 1:35 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> On Sat, 2017-08-05 at 17:35 -0400, Meng Xu wrote:
>> >
>> > > @@ -966,8 +1001,16 @@ burn_budget(const struct scheduler *ops,
>> > > struct
>> > > rt_vcpu *svc, s_time_t now)
>> > >
>> > >      if ( svc->cur_budget <= 0 )
>> > >      {
>> > > -        svc->cur_budget = 0;
>> > > -        __set_bit(__RTDS_depleted, &svc->flags);
>> > > +        if ( is_work_conserving(svc) )
>> > > +        {
>> > > +            svc->priority_level++;
>> > >
>> >
>> >                ASSERT(svc->priority_level <= 1);
>>
>> I'm sorry I didn't see this suggestion in previous email. I don't
>> think this assert makes sense.
>>
>> A vcpu that has extratime can have priority_level > 1.
>> For example, a VCPU (period = 100ms, budget = 10ms) runs alone on a
>> core. The VCPU may get its budget replenished 9 times in a
>> period, so the vcpu's priority_level may be 9.
>>
> Ah, ok. Yes, I missed this, while I see this now.
>
> But doesn't this mean that, at a certain time t, between two vCPUs that
> are both in 'extratime mode' (i.e., they've run out of budget, but
> they're running because they have extratime set), the one that has
> received fewer replenishments gets priority?

Yes.

>
> Is this wanted or expected?

It is wanted.

A VCPU i that has already got budget_i * priority_level_i time has
higher priority than another VCPU j that got budget_j *
priority_level_j time, where priority_level_j > priority_level_i.

For the unreserved resource, a VCPU will get unreserved CPU time
roughly proportional to its budget/period.


> Basically, if I'm not wrong, this means that the actual priority,
> during the extratime phase, is some combination of deadline and budget
> (which would make me think of utilization)... is this the case?

Yes.
The higher utilization a VCPU has, the more extra time it will get in
the extratime phase.

>
> I don't care much about the actual schedule during the extratime phase,
> in the sense that it doesn't have to be anything too complicated or
> super advanced... but I at least would like:
> - to know how it works, and hence what to expect,
> - for it to be roughly fair.

The unreserved resource is proportionally allocated to VCPUs roughly
based on VCPU's budget/period.

Best,

Meng


-----------
Meng Xu
PhD Candidate in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [PATCH RFC v1] xen:rtds: towards work conserving RTDS
  2017-08-07 18:27       ` Meng Xu
@ 2017-08-07 19:14         ` Dario Faggioli
  2017-08-07 19:40           ` Meng Xu
  0 siblings, 1 reply; 8+ messages in thread
From: Dario Faggioli @ 2017-08-07 19:14 UTC (permalink / raw)
  To: Meng Xu; +Cc: George Dunlap, xen-devel



On Mon, 2017-08-07 at 14:27 -0400, Meng Xu wrote:
> On Mon, Aug 7, 2017 at 1:35 PM, Dario Faggioli
> 
> > Is this wanted or expected?
> 
> It is wanted.
> 
> A VCPU i that has already got budget_i * priority_level_i time has
> higher priority than another VCPU j that got budget_j *
> priority_level_j time, where priority_level_j > priority_level_i.
> 
> For the unreserved resource, a VCPU will get unreserved CPU time
> roughly proportional to its budget/period.
> 
> 
> > Basically, if I'm not wrong, this means that the actual priority,
> > during the extratime phase, is some combination of deadline and
> > budget
> > (which would make me think to utilization)... is this the case?
> 
> Yes.
> The higher utilization a VCPU has, the more extra time it will get in
> the extratime phase.
> 
> > 
> > I don't care much about the actual schedule during the extratime
> > phase,
> > in the sense that it doesn't have to be anything too complicated or
> > super advanced... but I at least would like:
> > - to know how it works, and hence what to expect,
> > - for it to be roughly fair.
> 
> The unreserved resource is proportionally allocated to VCPUs roughly
> based on VCPU's budget/period.
> 
Right. Then this deserves both:
- a quick mention in the changelog
- a little bit more detailed explanation in a comment close to one of 
  the place where the policy is enacted (or at the top of the file, 
  or, well, somewhere :-) )

Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


* Re: [PATCH RFC v1] xen:rtds: towards work conserving RTDS
  2017-08-07 19:14         ` Dario Faggioli
@ 2017-08-07 19:40           ` Meng Xu
  0 siblings, 0 replies; 8+ messages in thread
From: Meng Xu @ 2017-08-07 19:40 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: George Dunlap, xen-devel

On Mon, Aug 7, 2017 at 3:14 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> On Mon, 2017-08-07 at 14:27 -0400, Meng Xu wrote:
>> On Mon, Aug 7, 2017 at 1:35 PM, Dario Faggioli
>>
>> > Is this wanted or expected?
>>
>> It is wanted.
>>
>> A VCPU i that has already got budget_i * priority_level_i time has
>> higher priority than another VCPU j that got budget_j *
>> priority_level_j time, where priority_level_j > priority_level_i.
>>
>> For the unreserved resource, a VCPU will get unreserved CPU time
>> roughly proportional to its budget/period.
>>
>>
>> > Basically, if I'm not wrong, this means that the actual priority,
>> > during the extratime phase, is some combination of deadline and
>> > budget
>> > (which would make me think of utilization)... is this the case?
>>
>> Yes.
>> The higher utilization a VCPU has, the more extra time it will get in
>> the extratime phase.
>>
>> >
>> > I don't care much about the actual schedule during the extratime
>> > phase,
>> > in the sense that it doesn't have to be anything too complicated or
>> > super advanced... but I at least would like:
>> > - to know how it works, and hence what to expect,
>> > - for it to be roughly fair.
>>
>> The unreserved resource is proportionally allocated to VCPUs roughly
>> based on VCPU's budget/period.
>>
> Right. Then this deserves both:
> - a quick mention in the changelog
> - a little bit more detailed explanation in a comment close to one of
>   the place where the policy is enacted (or at the top of the file,
>   or, well, somewhere :-) )
>

Sure. I can do that in the next version.
Hopefully we can reach agreement on the code based on this version,
so that the next version can be the final one for this patch
series. Hopefully. :)

Best,

Meng

-----------
Meng Xu
PhD Candidate in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

