xen-devel.lists.xenproject.org archive mirror
* [PATCH v2 0/2] xen: sched: rtds clean-up
@ 2016-06-26  0:48 Tianyang Chen
  2016-06-26  0:48 ` [PATCH v2 1/2] xen: sched: rtds code clean-up Tianyang Chen
  2016-06-26  0:48 ` [PATCH v2 2/2] xen: sched: rtds: use non-atomic bit-ops Tianyang Chen
  0 siblings, 2 replies; 5+ messages in thread
From: Tianyang Chen @ 2016-06-26  0:48 UTC (permalink / raw)
  To: xen-devel; +Cc: dario.faggioli, Tianyang Chen, george.dunlap, mengxu

The first part of this patch series fixes coding style issues in
control structures, and aligns the comments in the structs.

Related discussions: 
http://lists.xenproject.org/archives/html/xen-devel/2016-03/msg01528.html

Because the locks are grabbed in schedule.c before the hooks are
called, the leading underscores are removed from the names of the
functions involved.

The second part replaces atomic bit-ops with their non-atomic
counterparts, since the code paths that touch the flags already run
under the locks grabbed in schedule.c.

Discussions:
http://lists.xenproject.org/archives/html/xen-devel/2016-03/msg01528.html
http://www.gossamer-threads.com/lists/xen/devel/431251?do=post_view_threaded#431251
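
To make the reasoning concrete: set_bit() and friends are implemented
with locked read-modify-write instructions, while the double-underscore
variants compile down to plain loads and stores. The latter are only
safe when every writer of the flags word is serialised by the same
lock. A minimal sketch of the pattern relied upon here, using names
from the patches (illustrative only, not part of the series):

    /*
     * All RTDS hooks run with the scheduler lock held, taken in
     * schedule.c, so plain read-modify-write bit-ops on svc->flags
     * cannot race with one another.
     */
    spinlock_t *lock = vcpu_schedule_lock_irq(vc);

    __set_bit(__RTDS_scheduled, &svc->flags);  /* no LOCK prefix needed */

    vcpu_schedule_unlock_irq(lock, vc);

    /*
     * Without the lock, two CPUs could both read the old flags word,
     * OR in different bits, and write back, silently losing one of
     * the two updates.
     */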

Tianyang Chen (2):
  xen: sched: rtds code clean-up
  xen: sched: rtds: use non-atomic bit-ops

 xen/common/sched_rt.c |  106 ++++++++++++++++++++++++++-----------------------
 1 file changed, 56 insertions(+), 50 deletions(-)

-- 
1.7.9.5



* [PATCH v2 1/2] xen: sched: rtds code clean-up
  2016-06-26  0:48 [PATCH v2 0/2] xen: sched: rtds clean-up Tianyang Chen
@ 2016-06-26  0:48 ` Tianyang Chen
  2016-06-29 16:21   ` Dario Faggioli
  2016-06-26  0:48 ` [PATCH v2 2/2] xen: sched: rtds: use non-atomic bit-ops Tianyang Chen
  1 sibling, 1 reply; 5+ messages in thread
From: Tianyang Chen @ 2016-06-26  0:48 UTC (permalink / raw)
  To: xen-devel; +Cc: dario.faggioli, Tianyang Chen, george.dunlap, mengxu

No functional change:
 -aligned comments in rt_vcpu struct
 -removed double underscores from the names of some functions
 -fixed coding style for control structures involving lists
 -fixed typos in the comments
 -added comments for UPDATE_LIMIT_SHIFT

Signed-off-by: Tianyang Chen <tiche@cis.upenn.edu>
---
Changes since v1:
    -added a detailed list of changes
---
 xen/common/sched_rt.c |   90 ++++++++++++++++++++++++++-----------------------
 1 file changed, 48 insertions(+), 42 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 7f8f411..255d1f6 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -80,7 +80,7 @@
  * in schedule.c
  *
  * The functions involes RunQ and needs to grab locks are:
- *    vcpu_insert, vcpu_remove, context_saved, __runq_insert
+ *    vcpu_insert, vcpu_remove, context_saved, runq_insert
  */
 
 
@@ -107,6 +107,12 @@
  */
 #define RTDS_MIN_BUDGET     (MICROSECS(10))
 
+/*
+ * UPDATE_LIMIT_SHIFT: a constant used in rt_update_deadline(). When finding
+ * the next deadline, performing addition could be faster if the difference
+ * between cur_deadline and now is small. If the difference is bigger than
+ * 1024 * period, use multiplication.
+ */
 #define UPDATE_LIMIT_SHIFT      10
 
 /*
@@ -158,12 +164,12 @@
 static void repl_timer_handler(void *data);
 
 /*
- * Systme-wide private data, include global RunQueue/DepletedQ
+ * System-wide private data, include global RunQueue/DepletedQ
  * Global lock is referenced by schedule_data.schedule_lock from all
  * physical cpus. It can be grabbed via vcpu_schedule_lock_irq()
  */
 struct rt_private {
-    spinlock_t lock;            /* the global coarse grand lock */
+    spinlock_t lock;            /* the global coarse-grained lock */
     struct list_head sdom;      /* list of availalbe domains, used for dump */
     struct list_head runq;      /* ordered list of runnable vcpus */
     struct list_head depletedq; /* unordered list of depleted vcpus */
@@ -176,7 +182,7 @@ struct rt_private {
  * Virtual CPU
  */
 struct rt_vcpu {
-    struct list_head q_elem;    /* on the runq/depletedq list */
+    struct list_head q_elem;     /* on the runq/depletedq list */
     struct list_head replq_elem; /* on the replenishment events list */
 
     /* Up-pointers */
@@ -188,11 +194,11 @@ struct rt_vcpu {
     s_time_t budget;
 
     /* VCPU current infomation in nanosecond */
-    s_time_t cur_budget;        /* current budget */
-    s_time_t last_start;        /* last start time */
-    s_time_t cur_deadline;      /* current deadline for EDF */
+    s_time_t cur_budget;         /* current budget */
+    s_time_t last_start;         /* last start time */
+    s_time_t cur_deadline;       /* current deadline for EDF */
 
-    unsigned flags;             /* mark __RTDS_scheduled, etc.. */
+    unsigned flags;              /* mark __RTDS_scheduled, etc.. */
 };
 
 /*
@@ -241,13 +247,13 @@ static inline struct list_head *rt_replq(const struct scheduler *ops)
  * and the replenishment events queue.
  */
 static int
-__vcpu_on_q(const struct rt_vcpu *svc)
+vcpu_on_q(const struct rt_vcpu *svc)
 {
    return !list_empty(&svc->q_elem);
 }
 
 static struct rt_vcpu *
-__q_elem(struct list_head *elem)
+q_elem(struct list_head *elem)
 {
     return list_entry(elem, struct rt_vcpu, q_elem);
 }
@@ -303,7 +309,7 @@ rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
             svc->cur_budget,
             svc->cur_deadline,
             svc->last_start,
-            __vcpu_on_q(svc),
+            vcpu_on_q(svc),
             vcpu_runnable(svc->vcpu),
             svc->flags,
             keyhandler_scratch);
@@ -339,28 +345,28 @@ rt_dump(const struct scheduler *ops)
     replq = rt_replq(ops);
 
     printk("Global RunQueue info:\n");
-    list_for_each( iter, runq )
+    list_for_each ( iter, runq )
     {
-        svc = __q_elem(iter);
+        svc = q_elem(iter);
         rt_dump_vcpu(ops, svc);
     }
 
     printk("Global DepletedQueue info:\n");
-    list_for_each( iter, depletedq )
+    list_for_each ( iter, depletedq )
     {
-        svc = __q_elem(iter);
+        svc = q_elem(iter);
         rt_dump_vcpu(ops, svc);
     }
 
     printk("Global Replenishment Events info:\n");
-    list_for_each( iter, replq )
+    list_for_each ( iter, replq )
     {
         svc = replq_elem(iter);
         rt_dump_vcpu(ops, svc);
     }
 
     printk("Domain info:\n");
-    list_for_each( iter, &prv->sdom )
+    list_for_each ( iter, &prv->sdom )
     {
         struct vcpu *v;
 
@@ -380,7 +386,7 @@ rt_dump(const struct scheduler *ops)
 
 /*
  * update deadline and budget when now >= cur_deadline
- * it need to be updated to the deadline of the current period
+ * it needs to be updated to the deadline of the current period
  */
 static void
 rt_update_deadline(s_time_t now, struct rt_vcpu *svc)
@@ -463,14 +469,14 @@ deadline_queue_insert(struct rt_vcpu * (*qelem)(struct list_head *),
     return !pos;
 }
 #define deadline_runq_insert(...) \
-  deadline_queue_insert(&__q_elem, ##__VA_ARGS__)
+  deadline_queue_insert(&q_elem, ##__VA_ARGS__)
 #define deadline_replq_insert(...) \
   deadline_queue_insert(&replq_elem, ##__VA_ARGS__)
 
 static inline void
-__q_remove(struct rt_vcpu *svc)
+q_remove(struct rt_vcpu *svc)
 {
-    ASSERT( __vcpu_on_q(svc) );
+    ASSERT( vcpu_on_q(svc) );
     list_del_init(&svc->q_elem);
 }
 
@@ -506,13 +512,13 @@ replq_remove(const struct scheduler *ops, struct rt_vcpu *svc)
  * Insert svc without budget in DepletedQ unsorted;
  */
 static void
-__runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
+runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
 {
     struct rt_private *prv = rt_priv(ops);
     struct list_head *runq = rt_runq(ops);
 
     ASSERT( spin_is_locked(&prv->lock) );
-    ASSERT( !__vcpu_on_q(svc) );
+    ASSERT( !vcpu_on_q(svc) );
     ASSERT( vcpu_on_replq(svc) );
 
     /* add svc to runq if svc still has budget */
@@ -840,12 +846,12 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
     if ( now >= svc->cur_deadline )
         rt_update_deadline(now, svc);
 
-    if ( !__vcpu_on_q(svc) && vcpu_runnable(vc) )
+    if ( !vcpu_on_q(svc) && vcpu_runnable(vc) )
     {
         replq_insert(ops, svc);
 
         if ( !vc->is_running )
-            __runq_insert(ops, svc);
+            runq_insert(ops, svc);
     }
     vcpu_schedule_unlock_irq(lock, vc);
 
@@ -867,8 +873,8 @@ rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
     BUG_ON( sdom == NULL );
 
     lock = vcpu_schedule_lock_irq(vc);
-    if ( __vcpu_on_q(svc) )
-        __q_remove(svc);
+    if ( vcpu_on_q(svc) )
+        q_remove(svc);
 
     if ( vcpu_on_replq(svc) )
         replq_remove(ops,svc);
@@ -955,7 +961,7 @@ burn_budget(const struct scheduler *ops, struct rt_vcpu *svc, s_time_t now)
  * lock is grabbed before calling this function
  */
 static struct rt_vcpu *
-__runq_pick(const struct scheduler *ops, const cpumask_t *mask)
+runq_pick(const struct scheduler *ops, const cpumask_t *mask)
 {
     struct list_head *runq = rt_runq(ops);
     struct list_head *iter;
@@ -964,9 +970,9 @@ __runq_pick(const struct scheduler *ops, const cpumask_t *mask)
     cpumask_t cpu_common;
     cpumask_t *online;
 
-    list_for_each(iter, runq)
+    list_for_each ( iter, runq )
     {
-        iter_svc = __q_elem(iter);
+        iter_svc = q_elem(iter);
 
         /* mask cpu_hard_affinity & cpupool & mask */
         online = cpupool_domain_cpumask(iter_svc->vcpu->domain);
@@ -1028,7 +1034,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
     }
     else
     {
-        snext = __runq_pick(ops, cpumask_of(cpu));
+        snext = runq_pick(ops, cpumask_of(cpu));
         if ( snext == NULL )
             snext = rt_vcpu(idle_vcpu[cpu]);
 
@@ -1052,7 +1058,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
     {
         if ( snext != scurr )
         {
-            __q_remove(snext);
+            q_remove(snext);
             set_bit(__RTDS_scheduled, &snext->flags);
         }
         if ( snext->vcpu->processor != cpu )
@@ -1081,9 +1087,9 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 
     if ( curr_on_cpu(vc->processor) == vc )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
-    else if ( __vcpu_on_q(svc) )
+    else if ( vcpu_on_q(svc) )
     {
-        __q_remove(svc);
+        q_remove(svc);
         replq_remove(ops, svc);
     }
     else if ( svc->flags & RTDS_delayed_runq_add )
@@ -1201,7 +1207,7 @@ rt_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
     }
 
     /* on RunQ/DepletedQ, just update info is ok */
-    if ( unlikely(__vcpu_on_q(svc)) )
+    if ( unlikely(vcpu_on_q(svc)) )
     {
         SCHED_STAT_CRANK(vcpu_wake_onrunq);
         return;
@@ -1245,7 +1251,7 @@ rt_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
     /* Replenishment event got cancelled when we blocked. Add it back. */
     replq_insert(ops, svc);
     /* insert svc to runq/depletedq because svc is not in queue now */
-    __runq_insert(ops, svc);
+    runq_insert(ops, svc);
 
     runq_tickle(ops, svc);
 }
@@ -1268,7 +1274,7 @@ rt_context_saved(const struct scheduler *ops, struct vcpu *vc)
     if ( test_and_clear_bit(__RTDS_delayed_runq_add, &svc->flags) &&
          likely(vcpu_runnable(vc)) )
     {
-        __runq_insert(ops, svc);
+        runq_insert(ops, svc);
         runq_tickle(ops, svc);
     }
     else
@@ -1414,10 +1420,10 @@ static void repl_timer_handler(void *data){
         rt_update_deadline(now, svc);
         list_add(&svc->replq_elem, &tmp_replq);
 
-        if ( __vcpu_on_q(svc) )
+        if ( vcpu_on_q(svc) )
         {
-            __q_remove(svc);
-            __runq_insert(ops, svc);
+            q_remove(svc);
+            runq_insert(ops, svc);
         }
     }
 
@@ -1435,12 +1441,12 @@ static void repl_timer_handler(void *data){
         if ( curr_on_cpu(svc->vcpu->processor) == svc->vcpu &&
              !list_empty(runq) )
         {
-            struct rt_vcpu *next_on_runq = __q_elem(runq->next);
+            struct rt_vcpu *next_on_runq = q_elem(runq->next);
 
             if ( svc->cur_deadline > next_on_runq->cur_deadline )
                 runq_tickle(ops, next_on_runq);
         }
-        else if ( __vcpu_on_q(svc) &&
+        else if ( vcpu_on_q(svc) &&
                   test_and_clear_bit(__RTDS_depleted, &svc->flags) )
             runq_tickle(ops, svc);
 
-- 
1.7.9.5
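
The effect of the new UPDATE_LIMIT_SHIFT comment is easier to see in
code. A sketch of the logic it describes (reconstructed here for
illustration; the authoritative version is rt_update_deadline() in
xen/common/sched_rt.c):

    if ( svc->cur_deadline + (svc->period << UPDATE_LIMIT_SHIFT) > now )
    {
        /* Less than 2^10 periods behind: catch up by repeated
         * addition and avoid a 64-bit division. */
        do
            svc->cur_deadline += svc->period;
        while ( svc->cur_deadline <= now );
    }
    else
    {
        /* Far behind: one division plus one multiplication is
         * cheaper than up to a thousand additions. */
        long count = ((now - svc->cur_deadline) / svc->period) + 1;
        svc->cur_deadline += count * svc->period;
    }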



* [PATCH v2 2/2] xen: sched: rtds: use non-atomic bit-ops
  2016-06-26  0:48 [PATCH v2 0/2] xen: sched: rtds clean-up Tianyang Chen
  2016-06-26  0:48 ` [PATCH v2 1/2] xen: sched: rtds code clean-up Tianyang Chen
@ 2016-06-26  0:48 ` Tianyang Chen
  2016-06-29 16:44   ` Dario Faggioli
  1 sibling, 1 reply; 5+ messages in thread
From: Tianyang Chen @ 2016-06-26  0:48 UTC (permalink / raw)
  To: xen-devel; +Cc: dario.faggioli, Tianyang Chen, george.dunlap, mengxu

Vcpu flags are currently checked and cleared with atomic bit-ops.
Performance can be improved by switching to the corresponding
non-atomic versions, since schedule.c already holds the relevant spin
locks whenever these paths run.

Signed-off-by: Tianyang Chen <tiche@cis.upenn.edu>
---
Changes since v1:
    -none
---
 xen/common/sched_rt.c |   16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 255d1f6..8ecc908 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -936,7 +936,7 @@ burn_budget(const struct scheduler *ops, struct rt_vcpu *svc, s_time_t now)
     if ( svc->cur_budget <= 0 )
     {
         svc->cur_budget = 0;
-        set_bit(__RTDS_depleted, &svc->flags);
+        __set_bit(__RTDS_depleted, &svc->flags);
     }
 
     /* TRACE */
@@ -1050,7 +1050,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
     if ( snext != scurr &&
          !is_idle_vcpu(current) &&
          vcpu_runnable(current) )
-        set_bit(__RTDS_delayed_runq_add, &scurr->flags);
+        __set_bit(__RTDS_delayed_runq_add, &scurr->flags);
 
     snext->last_start = now;
     ret.time =  -1; /* if an idle vcpu is picked */
@@ -1059,7 +1059,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
         if ( snext != scurr )
         {
             q_remove(snext);
-            set_bit(__RTDS_scheduled, &snext->flags);
+            __set_bit(__RTDS_scheduled, &snext->flags);
         }
         if ( snext->vcpu->processor != cpu )
         {
@@ -1093,7 +1093,7 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
         replq_remove(ops, svc);
     }
     else if ( svc->flags & RTDS_delayed_runq_add )
-        clear_bit(__RTDS_delayed_runq_add, &svc->flags);
+        __clear_bit(__RTDS_delayed_runq_add, &svc->flags);
 }
 
 /*
@@ -1235,7 +1235,7 @@ rt_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
      */
     if ( unlikely(svc->flags & RTDS_scheduled) )
     {
-        set_bit(__RTDS_delayed_runq_add, &svc->flags);
+        __set_bit(__RTDS_delayed_runq_add, &svc->flags);
         /*
          * The vcpu is waking up already, and we didn't even had the time to
          * remove its next replenishment event from the replenishment queue
@@ -1266,12 +1266,12 @@ rt_context_saved(const struct scheduler *ops, struct vcpu *vc)
     struct rt_vcpu *svc = rt_vcpu(vc);
     spinlock_t *lock = vcpu_schedule_lock_irq(vc);
 
-    clear_bit(__RTDS_scheduled, &svc->flags);
+    __clear_bit(__RTDS_scheduled, &svc->flags);
     /* not insert idle vcpu to runq */
     if ( is_idle_vcpu(vc) )
         goto out;
 
-    if ( test_and_clear_bit(__RTDS_delayed_runq_add, &svc->flags) &&
+    if ( __test_and_clear_bit(__RTDS_delayed_runq_add, &svc->flags) &&
          likely(vcpu_runnable(vc)) )
     {
         runq_insert(ops, svc);
@@ -1447,7 +1447,7 @@ static void repl_timer_handler(void *data){
                 runq_tickle(ops, next_on_runq);
         }
         else if ( vcpu_on_q(svc) &&
-                  test_and_clear_bit(__RTDS_depleted, &svc->flags) )
+                  __test_and_clear_bit(__RTDS_depleted, &svc->flags) )
             runq_tickle(ops, svc);
 
         list_del(&svc->replq_elem);
-- 
1.7.9.5
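
As with __set_bit(), the __test_and_clear_bit() used above performs
the read-modify-write with ordinary memory accesses and returns the
old value of the bit; it is safe only because rt_context_saved() runs
under vcpu_schedule_lock_irq(). A condensed sketch of that call site
(illustrative, abridged from the patch):

    spinlock_t *lock = vcpu_schedule_lock_irq(vc);

    /* Plain RMW: fetch the flag, clear it, act on the old value. */
    if ( __test_and_clear_bit(__RTDS_delayed_runq_add, &svc->flags) &&
         likely(vcpu_runnable(vc)) )
    {
        runq_insert(ops, svc);   /* back onto the runq/depletedq */
        runq_tickle(ops, svc);   /* maybe preempt a lower-priority vcpu */
    }

    vcpu_schedule_unlock_irq(lock, vc);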



* Re: [PATCH v2 1/2] xen: sched: rtds code clean-up
  2016-06-26  0:48 ` [PATCH v2 1/2] xen: sched: rtds code clean-up Tianyang Chen
@ 2016-06-29 16:21   ` Dario Faggioli
  0 siblings, 0 replies; 5+ messages in thread
From: Dario Faggioli @ 2016-06-29 16:21 UTC (permalink / raw)
  To: Tianyang Chen, xen-devel; +Cc: george.dunlap, mengxu


On Sat, 2016-06-25 at 20:48 -0400, Tianyang Chen wrote:
> No functional change:
>  -aligned comments in rt_vcpu struct
>  -removed double underscores from the names of some functions
>  -fixed coding style for control structures involving lists
>  -fixed typos in the comments
>  -added comments for UPDATE_LIMIT_SHIFT
> 
> Signed-off-by: Tianyang Chen <tiche@cis.upenn.edu>
>
Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



* Re: [PATCH v2 2/2] xen: sched: rtds: use non-atomic bit-ops
  2016-06-26  0:48 ` [PATCH v2 2/2] xen: sched: rtds: use non-atomic bit-ops Tianyang Chen
@ 2016-06-29 16:44   ` Dario Faggioli
  0 siblings, 0 replies; 5+ messages in thread
From: Dario Faggioli @ 2016-06-29 16:44 UTC (permalink / raw)
  To: Tianyang Chen, xen-devel; +Cc: george.dunlap, mengxu


On Sat, 2016-06-25 at 20:48 -0400, Tianyang Chen wrote:
> Vcpu flags are currently checked and cleared with atomic bit-ops.
> Performance can be improved by switching to the corresponding
> non-atomic versions, since schedule.c already holds the relevant spin
> locks whenever these paths run.
> 
> Signed-off-by: Tianyang Chen <tiche@cis.upenn.edu>
>
Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


