* [PATCH v3 0/6] xen: sched: fix locking of {insert, remove}_vcpu()
@ 2015-10-29 23:04 Dario Faggioli
  2015-10-29 23:04 ` [PATCH v3 1/6] xen: sched: fix locking of remove_vcpu() in credit1 Dario Faggioli
                   ` (5 more replies)
  0 siblings, 6 replies; 21+ messages in thread
From: Dario Faggioli @ 2015-10-29 23:04 UTC (permalink / raw)
  To: xen-devel
  Cc: George Dunlap, Juergen Gross, Meng Xu, Jan Beulich, Andrew Cooper

Hi,

Take 3 of this series, improving how vCPUs are inserted into schedulers'
runqueues, including fixing a couple of bugs, and paving the way for more
improvements in Credit2 runqueue handling (which will be submitted as a
separate series).

v2 is here:
http://lists.xen.org/archives/html/xen-devel/2015-10/msg01605.html

v1 was here:
http://lists.xen.org/archives/html/xen-devel/2015-10/msg00974.html

In this iteration, wrt v2, only patches 2, 3 and 4 really changed, to cope with
review comments and because I slightly changed my own mind about where to do
things (nothing too big, anyway).

Patches 1 and 2 are actual bugfixes and, IMO, are candidates for being
backported (I kept that in mind when writing them and when deciding how to
structure the series). However, let's see after this gets committed (I'll
raise the topic myself with the stable maintainers).

There is a git branch with the series applied here:

 git://xenbits.xen.org/people/dariof/xen.git  rel/sched/fix-vcpu-ins-rem-v2

Thanks and Regards,
Dario
---
Dario Faggioli (6):
      xen: sched: fix locking of remove_vcpu() in credit1
      xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
      xen: sched: clarify use cases of schedule_cpu_switch()
      xen: sched: better handle (not) inserting idle vCPUs in runqueues
      xen: sched: get rid of the per domain vCPU list in RTDS
      xen: sched: get rid of the per domain vCPU list in Credit2

 xen/common/cpupool.c       |    7 -----
 xen/common/sched_credit.c  |   17 ++++++++++--
 xen/common/sched_credit2.c |   55 ++++++++++++++--------------------------
 xen/common/sched_rt.c      |   61 ++++++++++++++++++++++----------------------
 xen/common/schedule.c      |   57 +++++++++++++++++++++++++++++++----------
 5 files changed, 106 insertions(+), 91 deletions(-)
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v3 1/6] xen: sched: fix locking of remove_vcpu() in credit1
  2015-10-29 23:04 [PATCH v3 0/6] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
@ 2015-10-29 23:04 ` Dario Faggioli
  2015-11-02 18:04   ` George Dunlap
  2015-10-29 23:04 ` [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS Dario Faggioli
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: Dario Faggioli @ 2015-10-29 23:04 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

In fact, csched_vcpu_remove() (i.e., the credit1
implementation of remove_vcpu()) manipulates runqueues,
so holding the runqueue lock is necessary.

Also, while there, *_lock_irq() (for the private lock) is
enough; there is no need for *_lock_irqsave().
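The distinction the patch relies on can be sketched with a toy model (all
names below are illustrative stand-ins, not Xen's actual primitives): the
*_irqsave() flavour saves the current interrupt-enable state and restores
exactly that state on unlock, while the *_irq() flavour assumes interrupts
are enabled on entry and re-enables them unconditionally on unlock, which is
why it is sufficient when the calling context is known:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the two locking flavours; irqs_enabled stands in for
 * the CPU's interrupt-enable flag. */
static bool irqs_enabled = true;

static unsigned long sim_lock_irqsave(void)
{
    unsigned long flags = irqs_enabled; /* save whatever the state was */
    irqs_enabled = false;               /* then disable interrupts */
    return flags;
}

static void sim_unlock_irqrestore(unsigned long flags)
{
    irqs_enabled = flags;               /* restore the saved state */
}

static void sim_lock_irq(void)
{
    assert(irqs_enabled);               /* _irq() assumes IRQs are on... */
    irqs_enabled = false;
}

static void sim_unlock_irq(void)
{
    irqs_enabled = true;                /* ...re-enables unconditionally */
}
```

The patch's point is exactly this: the private lock here is always taken with
interrupts enabled, so the cheaper _irq() variant suffices.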

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
---
Changes from the other series:
 * split the patch (wrt the original patch, in the original
   series), and take care, in this one, only of remove_vcpu();
 * removed pointless parentheses.
---
 xen/common/sched_credit.c |   10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index b8f28fe..6f71e0d 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -926,7 +926,7 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_vcpu * const svc = CSCHED_VCPU(vc);
     struct csched_dom * const sdom = svc->sdom;
-    unsigned long flags;
+    spinlock_t *lock;
 
     SCHED_STAT_CRANK(vcpu_remove);
 
@@ -936,15 +936,19 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
         vcpu_unpause(svc->vcpu);
     }
 
+    lock = vcpu_schedule_lock_irq(vc);
+
     if ( __vcpu_on_runq(svc) )
         __runq_remove(svc);
 
-    spin_lock_irqsave(&(prv->lock), flags);
+    vcpu_schedule_unlock_irq(lock, vc);
+
+    spin_lock_irq(&prv->lock);
 
     if ( !list_empty(&svc->active_vcpu_elem) )
         __csched_vcpu_acct_stop_locked(prv, svc);
 
-    spin_unlock_irqrestore(&(prv->lock), flags);
+    spin_unlock_irq(&prv->lock);
 
     BUG_ON( sdom == NULL );
     BUG_ON( !list_empty(&svc->runq_elem) );


* [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
  2015-10-29 23:04 [PATCH v3 0/6] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
  2015-10-29 23:04 ` [PATCH v3 1/6] xen: sched: fix locking of remove_vcpu() in credit1 Dario Faggioli
@ 2015-10-29 23:04 ` Dario Faggioli
  2015-10-30 23:00   ` Meng Xu
  2015-10-29 23:04 ` [PATCH v3 3/6] xen: sched: clarify use cases of schedule_cpu_switch() Dario Faggioli
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: Dario Faggioli @ 2015-10-29 23:04 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Meng Xu, Jan Beulich

The insert_vcpu() hook is handled with inconsistent locking.
In fact, schedule_cpu_switch() calls the hook with runqueue
lock held, while sched_move_domain() relies on the hook
implementations to take the lock themselves (and, since that
is not done in Credit1 and RTDS, the operation is not safe
in those cases).

This is fixed as follows:
 - take the lock in the hook implementations, in specific
   schedulers' code;
 - avoid calling insert_vcpu(), for the idle vCPU, in
   schedule_cpu_switch(). In fact, idle vCPUs are set to run
   immediately, and the various schedulers won't insert them
   in their runqueues anyway, even when explicitly asked to.

While there, still in schedule_cpu_switch(), locking with
_irq() is enough (there's no need to do *_irqsave()).
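The shape of the fix can be sketched in a toy model (all names below are
illustrative, not Xen's): a hook that manipulates a runqueue is only safe if
some path guarantees the lock is held, and taking the lock inside the hook
itself makes every call site safe, instead of relying on each caller to do it:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy single lock; lock_held stands in for the runqueue lock state. */
static bool lock_held;
static int runq_len;

static void lock_take(void)    { assert(!lock_held); lock_held = true; }
static void lock_release(void) { assert(lock_held);  lock_held = false; }

/* Post-fix shape of the hook: acquire the lock internally, touch the
 * runqueue, release. Callers no longer need to hold it themselves. */
static void insert_vcpu_hook(void)
{
    lock_take();
    runq_len++;            /* the runqueue manipulation */
    lock_release();
}
```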

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Meng Xu <mengxu@cis.upenn.edu>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
---
Changes from v2:
 * locking in schedule_cpu_switch() is left in place (but
   turned into just _irq(), rather than *_irqsave());
 * the call to insert_vcpu() in schedule_cpu_switch() is
   removed in this patch (rather than later in the series).

Changes from v1 (of this series):
 * in Credit1, the lock wants to be an _irqsave() one. If
   not, the ASSERT() in _spin_lock_irq() will trigger when
   the hook is called, during boot, from sched_init_vcpu();
 * rephrased the changelog (to be less verbose);
 * added a few words, in the changelog, about why it is safe
   to get rid of the locking in schedule_cpu_switch(). Proper
   commentary and ASSERT()-s about that will come in another
   patch.

Changes from the other series:
 * split the patch (wrt the original patch, in the original
   series), and take care, in this one, only of insert_vcpu();
---
 xen/common/sched_credit.c |    6 ++++++
 xen/common/sched_rt.c     |    3 +++
 xen/common/schedule.c     |    6 ++----
 3 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 6f71e0d..e16bd3a 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -903,10 +903,16 @@ static void
 csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched_vcpu *svc = vc->sched_priv;
+    spinlock_t *lock;
+    unsigned long flags;
+
+    lock = vcpu_schedule_lock_irqsave(vc, &flags);
 
     if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
         __runq_insert(vc->processor, svc);
 
+    vcpu_schedule_unlock_irqrestore(lock, flags, vc);
+
     SCHED_STAT_CRANK(vcpu_insert);
 }
 
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 822f23c..3a66c9a 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -622,16 +622,19 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 {
     struct rt_vcpu *svc = rt_vcpu(vc);
     s_time_t now = NOW();
+    spinlock_t *lock;
 
     /* not addlocate idle vcpu to dom vcpu list */
     if ( is_idle_vcpu(vc) )
         return;
 
+    lock = vcpu_schedule_lock_irq(vc);
     if ( now >= svc->cur_deadline )
         rt_update_deadline(now, svc);
 
     if ( !__vcpu_on_q(svc) && vcpu_runnable(vc) && !vc->is_running )
         __runq_insert(ops, svc);
+    vcpu_schedule_unlock_irq(lock, vc);
 
     /* add rt_vcpu svc to scheduler-specific vcpu list of the dom */
     list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c5f640f..80d6fb7 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1488,7 +1488,6 @@ void __init scheduler_init(void)
 
 int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
 {
-    unsigned long flags;
     struct vcpu *idle;
     spinlock_t *lock;
     void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
@@ -1509,7 +1508,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
         return -ENOMEM;
     }
 
-    lock = pcpu_schedule_lock_irqsave(cpu, &flags);
+    lock = pcpu_schedule_lock_irq(cpu);
 
     SCHED_OP(old_ops, tick_suspend, cpu);
     vpriv_old = idle->sched_priv;
@@ -1518,9 +1517,8 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
     per_cpu(schedule_data, cpu).sched_priv = ppriv;
     SCHED_OP(new_ops, tick_resume, cpu);
-    SCHED_OP(new_ops, insert_vcpu, idle);
 
-    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
+    pcpu_schedule_unlock_irq(lock, cpu);
 
     SCHED_OP(old_ops, free_vdata, vpriv_old);
     SCHED_OP(old_ops, free_pdata, ppriv_old, cpu);


* [PATCH v3 3/6] xen: sched: clarify use cases of schedule_cpu_switch()
  2015-10-29 23:04 [PATCH v3 0/6] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
  2015-10-29 23:04 ` [PATCH v3 1/6] xen: sched: fix locking of remove_vcpu() in credit1 Dario Faggioli
  2015-10-29 23:04 ` [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS Dario Faggioli
@ 2015-10-29 23:04 ` Dario Faggioli
  2015-10-29 23:59   ` Dario Faggioli
  2015-11-02 18:23   ` George Dunlap
  2015-10-29 23:04 ` [PATCH v3 4/6] xen: sched: better handle (not) inserting idle vCPUs in runqueues Dario Faggioli
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 21+ messages in thread
From: Dario Faggioli @ 2015-10-29 23:04 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Jan Beulich

schedule_cpu_switch() is meant to be only used for moving
pCPUs from a cpupool to no cpupool, and from there back
to a cpupool, *not* to move them directly from one cpupool
to another.

This is reflected in the way the function is
implemented, and should be kept in mind when looking
at it. However, none of this is obvious just from
reading the code.

Make it more evident by:
 - adding commentary and ASSERT()s;
 - updating the cpupool per-CPU variable (mapping pCPUs to
   pools) directly in schedule_cpu_switch(), rather than
   in various places in cpupool.c.
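The invariant being asserted can be sketched as a toy predicate (assumption:
as in the real code, NULL stands for "no cpupool" and a non-NULL pointer for
a valid pool):

```c
#include <assert.h>
#include <stddef.h>

/* Toy check of the transitions schedule_cpu_switch() is meant for. */
static int transition_ok(const void *old_pool, const void *new_pool)
{
    /* A pCPU only moves pool -> free or free -> pool; never directly
     * pool -> pool, and never free -> free. */
    return old_pool != new_pool &&
           (old_pool == NULL || new_pool == NULL) &&
           !(old_pool == NULL && new_pool == NULL);
}
```

A direct pool-to-pool move has to go through the free state, which is what
the added ASSERT()s in the patch enforce.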

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
---
Changes from v2:
 * updating of per_cpu(cpupool, cpu) is now done in
   schedule_cpu_switch(), as suggested during review.

Changes from v1:
 * new patch, was not there in v1;
---
 xen/common/cpupool.c  |    7 -------
 xen/common/schedule.c |   31 ++++++++++++++++++++++++++++++-
 2 files changed, 30 insertions(+), 8 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index e79850b..8e7b723 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -261,19 +261,13 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct cpupool *old;
     struct domain *d;
 
     if ( (cpupool_moving_cpu == cpu) && (c != cpupool_cpu_moving) )
         return -EBUSY;
-    old = per_cpu(cpupool, cpu);
-    per_cpu(cpupool, cpu) = c;
     ret = schedule_cpu_switch(cpu, c);
     if ( ret )
-    {
-        per_cpu(cpupool, cpu) = old;
         return ret;
-    }
 
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
     if (cpupool_moving_cpu == cpu)
@@ -326,7 +320,6 @@ static long cpupool_unassign_cpu_helper(void *info)
             cpumask_clear_cpu(cpu, &cpupool_free_cpus);
             goto out;
         }
-        per_cpu(cpupool, cpu) = NULL;
         cpupool_moving_cpu = -1;
         cpupool_put(cpupool_cpu_moving);
         cpupool_cpu_moving = NULL;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 80d6fb7..9f5e12b 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1486,6 +1486,17 @@ void __init scheduler_init(void)
         BUG();
 }
 
+/*
+ * Move a pCPU outside of the influence of the scheduler of its current
+ * cpupool, or subject it to the scheduler of a new cpupool.
+ *
+ * For the pCPUs that are removed from their cpupool, their scheduler becomes
+ * &ops (the default scheduler, selected at boot, which also services the
+ * default cpupool). However, as these pCPUs are not really part of any pool,
+ * there won't be any scheduling event on them, not even from the default
+ * scheduler. Basically, they will just sit idle until they are explicitly
+ * added back to a cpupool.
+ */
 int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
 {
     struct vcpu *idle;
@@ -1493,9 +1504,24 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
     struct scheduler *old_ops = per_cpu(scheduler, cpu);
     struct scheduler *new_ops = (c == NULL) ? &ops : c->sched;
+    struct cpupool *old_pool = per_cpu(cpupool, cpu);
+
+    /*
+     * pCPUs only move from a valid cpupool to free (i.e., out of any pool),
+     * or from free to a valid cpupool. In the former case (which happens when
+     * c is NULL), we want the CPU to have been marked as free already, as
+     * well as to not be valid for the source pool any longer, when we get to
+     * here. In the latter case (which happens when c is a valid cpupool), we
+     * want the CPU to still be marked as free, as well as to not yet be valid
+     * for the destination pool.
+     */
+    ASSERT(c != old_pool && (c != NULL || old_pool != NULL));
+    ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
+    ASSERT((c == NULL && !cpumask_test_cpu(cpu, old_pool->cpu_valid)) ||
+           (c != NULL && !cpumask_test_cpu(cpu, c->cpu_valid)));
 
     if ( old_ops == new_ops )
-        return 0;
+        goto out;
 
     idle = idle_vcpu[cpu];
     ppriv = SCHED_OP(new_ops, alloc_pdata, cpu);
@@ -1523,6 +1549,9 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     SCHED_OP(old_ops, free_vdata, vpriv_old);
     SCHED_OP(old_ops, free_pdata, ppriv_old, cpu);
 
+ out:
+    per_cpu(cpupool, cpu) = c;
+
     return 0;
 }


* [PATCH v3 4/6] xen: sched: better handle (not) inserting idle vCPUs in runqueues
  2015-10-29 23:04 [PATCH v3 0/6] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
                   ` (2 preceding siblings ...)
  2015-10-29 23:04 ` [PATCH v3 3/6] xen: sched: clarify use cases of schedule_cpu_switch() Dario Faggioli
@ 2015-10-29 23:04 ` Dario Faggioli
  2015-10-29 23:04 ` [PATCH v3 5/6] xen: sched: get rid of the per domain vCPU list in RTDS Dario Faggioli
  2015-10-29 23:04 ` [PATCH v3 6/6] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
  5 siblings, 0 replies; 21+ messages in thread
From: Dario Faggioli @ 2015-10-29 23:04 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Juergen Gross

Idle vCPUs are set to run immediately, as a part of their
own initialization, so we shouldn't even try to put them
in a runqueue. In fact, actual schedulers don't do that,
even when asked to (that is rather explicit in Credit2
and RTDS, a bit less evident in Credit1).

Let's make things look as follows:
 - in generic code, explicitly avoid even trying to
   insert idle vcpus in runqueues;
 - in specific schedulers' code, enforce that.

Note that, as csched_vcpu_insert() is no longer being
called, during boot, from sched_init_vcpu(), we can
safely avoid saving the flags when taking the runqueue
lock.
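The decision made in the generic code can be sketched in a toy model (the
names below are illustrative, not Xen's actual structures): idle vCPUs are
marked running straight away and never enqueued, while every other vCPU goes
through the scheduler's insert hook:

```c
#include <assert.h>
#include <stdbool.h>

struct vcpu {
    bool is_idle;
    bool is_running;
};

static int runq_len;

static void runq_insert(struct vcpu *v) { (void)v; runq_len++; }

/* Mirror of the generic-code branch in sched_init_vcpu(). */
static void sched_init_vcpu_sim(struct vcpu *v)
{
    if ( v->is_idle )
        v->is_running = true;   /* scheduled immediately, no runqueue */
    else
        runq_insert(v);         /* normal vCPUs are enqueued */
}
```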

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
Changes from v1:
 * changelog: updated and made a little bit less verbose.
---
 xen/common/sched_credit.c  |    7 ++++---
 xen/common/sched_credit2.c |   25 ++++++++++---------------
 xen/common/sched_rt.c      |    4 +---
 xen/common/schedule.c      |   20 +++++++++++---------
 4 files changed, 26 insertions(+), 30 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index e16bd3a..fc447a7 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -904,14 +904,15 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched_vcpu *svc = vc->sched_priv;
     spinlock_t *lock;
-    unsigned long flags;
 
-    lock = vcpu_schedule_lock_irqsave(vc, &flags);
+    BUG_ON( is_idle_vcpu(vc) );
+
+    lock = vcpu_schedule_lock_irq(vc);
 
     if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
         __runq_insert(vc->processor, svc);
 
-    vcpu_schedule_unlock_irqrestore(lock, flags, vc);
+    vcpu_schedule_unlock_irq(lock, vc);
 
     SCHED_STAT_CRANK(vcpu_insert);
 }
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index fc51a75..556ca0f 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -868,30 +868,25 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched2_vcpu *svc = vc->sched_priv;
     struct csched2_dom * const sdom = svc->sdom;
+    spinlock_t *lock;
 
     printk("%s: Inserting %pv\n", __func__, vc);
 
-    /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
-     * been called for that cpu.
-     */
-    if ( ! is_idle_vcpu(vc) )
-    {
-        spinlock_t *lock;
+    BUG_ON(is_idle_vcpu(vc));
 
-        /* FIXME: Do we need the private lock here? */
-        list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
+    /* FIXME: Do we need the private lock here? */
+    list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
 
-        /* Add vcpu to runqueue of initial processor */
-        lock = vcpu_schedule_lock_irq(vc);
+    /* Add vcpu to runqueue of initial processor */
+    lock = vcpu_schedule_lock_irq(vc);
 
-        runq_assign(ops, vc);
+    runq_assign(ops, vc);
 
-        vcpu_schedule_unlock_irq(lock, vc);
+    vcpu_schedule_unlock_irq(lock, vc);
 
-        sdom->nr_vcpus++;
+    sdom->nr_vcpus++;
 
-        SCHED_STAT_CRANK(vcpu_insert);
-    }
+    SCHED_STAT_CRANK(vcpu_insert);
 
     CSCHED2_VCPU_CHECK(vc);
 }
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 3a66c9a..cbe7b17 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -624,9 +624,7 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
     s_time_t now = NOW();
     spinlock_t *lock;
 
-    /* not addlocate idle vcpu to dom vcpu list */
-    if ( is_idle_vcpu(vc) )
-        return;
+    BUG_ON( is_idle_vcpu(vc) );
 
     lock = vcpu_schedule_lock_irq(vc);
     if ( now >= svc->cur_deadline )
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 9f5e12b..6ad516f 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -240,20 +240,22 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     init_timer(&v->poll_timer, poll_timer_fn,
                v, v->processor);
 
-    /* Idle VCPUs are scheduled immediately. */
+    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
+    if ( v->sched_priv == NULL )
+        return 1;
+
+    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
+
+    /* Idle VCPUs are scheduled immediately, so don't put them in runqueue. */
     if ( is_idle_domain(d) )
     {
         per_cpu(schedule_data, v->processor).curr = v;
         v->is_running = 1;
     }
-
-    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
-
-    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
-    if ( v->sched_priv == NULL )
-        return 1;
-
-    SCHED_OP(DOM2OP(d), insert_vcpu, v);
+    else
+    {
+        SCHED_OP(DOM2OP(d), insert_vcpu, v);
+    }
 
     return 0;
 }


* [PATCH v3 5/6] xen: sched: get rid of the per domain vCPU list in RTDS
  2015-10-29 23:04 [PATCH v3 0/6] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
                   ` (3 preceding siblings ...)
  2015-10-29 23:04 ` [PATCH v3 4/6] xen: sched: better handle (not) inserting idle vCPUs in runqueues Dario Faggioli
@ 2015-10-29 23:04 ` Dario Faggioli
  2015-11-02 18:57   ` George Dunlap
  2015-10-29 23:04 ` [PATCH v3 6/6] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
  5 siblings, 1 reply; 21+ messages in thread
From: Dario Faggioli @ 2015-10-29 23:04 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Meng Xu

There is currently no reason to bother having it
and keeping it updated.

In fact, it is only used for dumping and changing
vCPUs' parameters, but that can easily be achieved
with for_each_vcpu.

While there, take care of the case when
XEN_DOMCTL_SCHEDOP_getinfo is called but no vCPUs have
been allocated yet (by returning the default scheduling
parameters).
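The idea can be sketched in a toy model (assumption: the names and the
default constant below are illustrative, not the RTDS code): the domain
already owns an array of its vCPUs, so a scheduler-private parallel list adds
nothing, and for_each_vcpu-style iteration just walks that array:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_VCPUS 4
#define DEFAULT_PERIOD 10000    /* stand-in for a default parameter */

struct vcpu { int period; };
struct domain {
    int max_vcpus;
    struct vcpu *vcpu[MAX_VCPUS];
};

/* Analogue of the putinfo path: iterate the domain's own array. */
static void set_period_all(struct domain *d, int period)
{
    for ( int i = 0; i < d->max_vcpus; i++ )
        d->vcpu[i]->period = period;
}

/* Analogue of the getinfo path: fall back to the default when no
 * vCPUs have been allocated yet. */
static int get_period(const struct domain *d)
{
    return d->max_vcpus > 0 ? d->vcpu[0]->period : DEFAULT_PERIOD;
}
```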

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes from v1:
 * address the case when d->vcpu[] has not been allocated
   yet, as requested during review;
 * style: space before brackets of for_each_vcpu, as
   requested during review;
 * used 'v' instead of 'vc' for 'vcpu' (in new code).
---
 xen/common/sched_rt.c |   54 ++++++++++++++++++++++++-------------------------
 1 file changed, 26 insertions(+), 28 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index cbe7b17..3f1d047 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -160,7 +160,6 @@ struct rt_private {
  */
 struct rt_vcpu {
     struct list_head q_elem;    /* on the runq/depletedq list */
-    struct list_head sdom_elem; /* on the domain VCPU list */
 
     /* Up-pointers */
     struct rt_dom *sdom;
@@ -182,7 +181,6 @@ struct rt_vcpu {
  * Domain
  */
 struct rt_dom {
-    struct list_head vcpu;      /* link its VCPUs */
     struct list_head sdom_elem; /* link list on rt_priv */
     struct domain *dom;         /* pointer to upper domain */
 };
@@ -290,7 +288,7 @@ rt_dump_pcpu(const struct scheduler *ops, int cpu)
 static void
 rt_dump(const struct scheduler *ops)
 {
-    struct list_head *iter_sdom, *iter_svc, *runq, *depletedq, *iter;
+    struct list_head *runq, *depletedq, *iter;
     struct rt_private *prv = rt_priv(ops);
     struct rt_vcpu *svc;
     struct rt_dom *sdom;
@@ -319,14 +317,16 @@ rt_dump(const struct scheduler *ops)
     }
 
     printk("Domain info:\n");
-    list_for_each( iter_sdom, &prv->sdom )
+    list_for_each( iter, &prv->sdom )
     {
-        sdom = list_entry(iter_sdom, struct rt_dom, sdom_elem);
+        struct vcpu *v;
+
+        sdom = list_entry(iter, struct rt_dom, sdom_elem);
         printk("\tdomain: %d\n", sdom->dom->domain_id);
 
-        list_for_each( iter_svc, &sdom->vcpu )
+        for_each_vcpu ( sdom->dom, v )
         {
-            svc = list_entry(iter_svc, struct rt_vcpu, sdom_elem);
+            svc = rt_vcpu(v);
             rt_dump_vcpu(ops, svc);
         }
     }
@@ -527,7 +527,6 @@ rt_alloc_domdata(const struct scheduler *ops, struct domain *dom)
     if ( sdom == NULL )
         return NULL;
 
-    INIT_LIST_HEAD(&sdom->vcpu);
     INIT_LIST_HEAD(&sdom->sdom_elem);
     sdom->dom = dom;
 
@@ -587,7 +586,6 @@ rt_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
         return NULL;
 
     INIT_LIST_HEAD(&svc->q_elem);
-    INIT_LIST_HEAD(&svc->sdom_elem);
     svc->flags = 0U;
     svc->sdom = dd;
     svc->vcpu = vc;
@@ -614,8 +612,7 @@ rt_free_vdata(const struct scheduler *ops, void *priv)
  * This function is called in sched_move_domain() in schedule.c
  * When move a domain to a new cpupool.
  * It inserts vcpus of moving domain to the scheduler's RunQ in
- * dest. cpupool; and insert rt_vcpu svc to scheduler-specific
- * vcpu list of the dom
+ * dest. cpupool.
  */
 static void
 rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
@@ -634,15 +631,11 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
         __runq_insert(ops, svc);
     vcpu_schedule_unlock_irq(lock, vc);
 
-    /* add rt_vcpu svc to scheduler-specific vcpu list of the dom */
-    list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
-
     SCHED_STAT_CRANK(vcpu_insert);
 }
 
 /*
- * Remove rt_vcpu svc from the old scheduler in source cpupool; and
- * Remove rt_vcpu svc from scheduler-specific vcpu list of the dom
+ * Remove rt_vcpu svc from the old scheduler in source cpupool.
  */
 static void
 rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
@@ -659,9 +652,6 @@ rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
     if ( __vcpu_on_q(svc) )
         __q_remove(svc);
     vcpu_schedule_unlock_irq(lock, vc);
-
-    if ( !is_idle_vcpu(vc) )
-        list_del_init(&svc->sdom_elem);
 }
 
 /*
@@ -1135,20 +1125,28 @@ rt_dom_cntl(
     struct xen_domctl_scheduler_op *op)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_dom * const sdom = rt_dom(d);
     struct rt_vcpu *svc;
-    struct list_head *iter;
+    struct vcpu *v;
     unsigned long flags;
     int rc = 0;
 
     switch ( op->cmd )
     {
     case XEN_DOMCTL_SCHEDOP_getinfo:
-        spin_lock_irqsave(&prv->lock, flags);
-        svc = list_entry(sdom->vcpu.next, struct rt_vcpu, sdom_elem);
-        op->u.rtds.period = svc->period / MICROSECS(1); /* transfer to us */
-        op->u.rtds.budget = svc->budget / MICROSECS(1);
-        spin_unlock_irqrestore(&prv->lock, flags);
+        if ( d->max_vcpus > 0 )
+        {
+            spin_lock_irqsave(&prv->lock, flags);
+            svc = rt_vcpu(d->vcpu[0]);
+            op->u.rtds.period = svc->period / MICROSECS(1);
+            op->u.rtds.budget = svc->budget / MICROSECS(1);
+            spin_unlock_irqrestore(&prv->lock, flags);
+        }
+        else
+        {
+            /* If we don't have vcpus yet, let's just return the defaults. */
+            op->u.rtds.period = RTDS_DEFAULT_PERIOD;
+            op->u.rtds.budget = RTDS_DEFAULT_BUDGET;
+        }
         break;
     case XEN_DOMCTL_SCHEDOP_putinfo:
         if ( op->u.rtds.period == 0 || op->u.rtds.budget == 0 )
@@ -1157,9 +1155,9 @@ rt_dom_cntl(
             break;
         }
         spin_lock_irqsave(&prv->lock, flags);
-        list_for_each( iter, &sdom->vcpu )
+        for_each_vcpu ( d, v )
         {
-            svc = list_entry(iter, struct rt_vcpu, sdom_elem);
+            svc = rt_vcpu(v);
             svc->period = MICROSECS(op->u.rtds.period); /* transfer to nanosec */
             svc->budget = MICROSECS(op->u.rtds.budget);
         }


* [PATCH v3 6/6] xen: sched: get rid of the per domain vCPU list in Credit2
  2015-10-29 23:04 [PATCH v3 0/6] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
                   ` (4 preceding siblings ...)
  2015-10-29 23:04 ` [PATCH v3 5/6] xen: sched: get rid of the per domain vCPU list in RTDS Dario Faggioli
@ 2015-10-29 23:04 ` Dario Faggioli
  2015-11-02 18:56   ` George Dunlap
  5 siblings, 1 reply; 21+ messages in thread
From: Dario Faggioli @ 2015-10-29 23:04 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper

There is currently no reason to bother having it
and keeping it updated.

In fact, it is only used for dumping and changing
vCPUs' parameters, but that can easily be achieved
with for_each_vcpu.

While there, improve the alignment of comments, and
add a const qualifier to a pointer, making things
more consistent with what happens everywhere else
in the source file.

This also allows us to kill one of the remaining
FIXMEs in the code, which is always good.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes from v1:
 * removed spurious hunk about sched_rt.c, as noted
   during review;
 * fixed the BUG_ON inside csched2_dom_destroy(), as
   noted during review;
 * used 'v' instead of 'vc' for 'vcpu' (in new code),
   as suggested during review.
---
 xen/common/sched_credit2.c |   34 +++++++++++-----------------------
 1 file changed, 11 insertions(+), 23 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 556ca0f..3c49ffa 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -234,9 +234,8 @@ struct csched2_private {
  * Virtual CPU
  */
 struct csched2_vcpu {
-    struct list_head rqd_elem;  /* On the runqueue data list */
-    struct list_head sdom_elem; /* On the domain vcpu list */
-    struct list_head runq_elem; /* On the runqueue         */
+    struct list_head rqd_elem;         /* On the runqueue data list  */
+    struct list_head runq_elem;        /* On the runqueue            */
     struct csched2_runqueue_data *rqd; /* Up-pointer to the runqueue */
 
     /* Up-pointers */
@@ -261,7 +260,6 @@ struct csched2_vcpu {
  * Domain
  */
 struct csched2_dom {
-    struct list_head vcpu;
     struct list_head sdom_elem;
     struct domain *dom;
     uint16_t weight;
@@ -770,7 +768,6 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
         return NULL;
 
     INIT_LIST_HEAD(&svc->rqd_elem);
-    INIT_LIST_HEAD(&svc->sdom_elem);
     INIT_LIST_HEAD(&svc->runq_elem);
 
     svc->sdom = dd;
@@ -874,9 +871,6 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 
     BUG_ON(is_idle_vcpu(vc));
 
-    /* FIXME: Do we need the private lock here? */
-    list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
-
     /* Add vcpu to runqueue of initial processor */
     lock = vcpu_schedule_lock_irq(vc);
 
@@ -921,10 +915,6 @@ csched2_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
 
         vcpu_schedule_unlock_irq(lock, vc);
 
-        /* Remove from sdom list.  Don't need a lock for this, as it's called
-         * syncronously when nothing else can happen. */
-        list_del_init(&svc->sdom_elem);
-
         svc->sdom->nr_vcpus--;
     }
 }
@@ -1441,7 +1431,7 @@ csched2_dom_cntl(
 
         if ( op->u.credit2.weight != 0 )
         {
-            struct list_head *iter;
+            struct vcpu *v;
             int old_weight;
 
             old_weight = sdom->weight;
@@ -1449,9 +1439,9 @@ csched2_dom_cntl(
             sdom->weight = op->u.credit2.weight;
 
             /* Update weights for vcpus, and max_weight for runqueues on which they reside */
-            list_for_each ( iter, &sdom->vcpu )
+            for_each_vcpu ( d, v )
             {
-                struct csched2_vcpu *svc = list_entry(iter, struct csched2_vcpu, sdom_elem);
+                struct csched2_vcpu *svc = CSCHED2_VCPU(v);
 
                 /* NB: Locking order is important here.  Because we grab this lock here, we
                  * must never lock csched2_priv.lock if we're holding a runqueue lock.
@@ -1485,7 +1475,6 @@ csched2_alloc_domdata(const struct scheduler *ops, struct domain *dom)
         return NULL;
 
     /* Initialize credit and weight */
-    INIT_LIST_HEAD(&sdom->vcpu);
     INIT_LIST_HEAD(&sdom->sdom_elem);
     sdom->dom = dom;
     sdom->weight = CSCHED2_DEFAULT_WEIGHT;
@@ -1537,9 +1526,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 static void
 csched2_dom_destroy(const struct scheduler *ops, struct domain *dom)
 {
-    struct csched2_dom *sdom = CSCHED2_DOM(dom);
-
-    BUG_ON(!list_empty(&sdom->vcpu));
+    BUG_ON(CSCHED2_DOM(dom)->nr_vcpus > 0);
 
     csched2_free_domdata(ops, CSCHED2_DOM(dom));
 }
@@ -1879,7 +1866,7 @@ csched2_dump_pcpu(const struct scheduler *ops, int cpu)
 static void
 csched2_dump(const struct scheduler *ops)
 {
-    struct list_head *iter_sdom, *iter_svc;
+    struct list_head *iter_sdom;
     struct csched2_private *prv = CSCHED2_PRIV(ops);
     unsigned long flags;
     int i, loop;
@@ -1924,6 +1911,8 @@ csched2_dump(const struct scheduler *ops)
     list_for_each( iter_sdom, &prv->sdom )
     {
         struct csched2_dom *sdom;
+        struct vcpu *v;
+
         sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
 
         printk("\tDomain: %d w %d v %d\n",
@@ -1931,12 +1920,11 @@ csched2_dump(const struct scheduler *ops)
                sdom->weight,
                sdom->nr_vcpus);
 
-        list_for_each( iter_svc, &sdom->vcpu )
+        for_each_vcpu( sdom->dom, v )
         {
-            struct csched2_vcpu *svc;
+            struct csched2_vcpu * const svc = CSCHED2_VCPU(v);
             spinlock_t *lock;
 
-            svc = list_entry(iter_svc, struct csched2_vcpu, sdom_elem);
             lock = vcpu_schedule_lock(svc->vcpu);
 
             printk("\t%3d: ", ++loop);

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 3/6] xen: sched: clarify use cases of schedule_cpu_switch()
  2015-10-29 23:04 ` [PATCH v3 3/6] xen: sched: clarify use cases of schedule_cpu_switch() Dario Faggioli
@ 2015-10-29 23:59   ` Dario Faggioli
  2015-10-30  4:33     ` Juergen Gross
  2015-11-02 18:23   ` George Dunlap
  1 sibling, 1 reply; 21+ messages in thread
From: Dario Faggioli @ 2015-10-29 23:59 UTC (permalink / raw)
  To: Juergen Gross; +Cc: George Dunlap, xen-devel, Jan Beulich


[-- Attachment #1.1: Type: text/plain, Size: 2848 bytes --]

On Fri, 2015-10-30 at 00:04 +0100, Dario Faggioli wrote:
> schedule_cpu_switch() is meant to be only used for moving
> pCPUs from a cpupool to no cpupool, and from there back
> to a cpupool, *not* to move them directly from one cpupool
> to another.
> 
> This is something that is reflected in the way it is
> implemented, and should be kept in mind when looking at
> it. However, that is not that clear, by just the look of
> it.
> 
> Make it more evident by:
>  - adding commentary and ASSERT()s;
>  - update the cpupool per-CPU variable (mapping pCPUs to
>    pools) directly in schedule_cpu_switch(), rather than
>    in various places in cpupool.c.
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Acked-by: Juergen Gross <jgross@suse.com>
>
BTW, Juergen, I tested the whole series with the script below for a few
hours, as per your suggestion, to help confirm that this patch is
actually correct, as it seems to be from just looking at the code.

Since everything was fine, I added your Ack, as you said I could.

Regards,
Dario


#!/bin/bash

set -ex
xl cpupool-cpu-remove Pool-0 0,2,4,6,8,10,12,14
xl cpupool-create name=\"Pool-even\" cpus=\"0,2,4,6,8,10,12,14\" sched=\"credit\"
xl cpupool-rename Pool-0 Pool-odd

# Switch CPUs between pools
function switchcpus() {
  while true; do
    for i in `seq 0 2 15`; do
      xl cpupool-cpu-remove Pool-even $i
      xl cpupool-cpu-add Pool-odd $i
      xl cpupool-list -c
      sleep $(($RANDOM%5))
      xl cpupool-cpu-remove Pool-odd $i
      xl cpupool-cpu-add Pool-even $i
    done
    for i in `seq 1 2 15`; do
      echo $i
      xl cpupool-cpu-remove Pool-odd $i
      xl cpupool-cpu-add Pool-even $i
      xl cpupool-list -c
      sleep $(($RANDOM%5))
      xl cpupool-cpu-remove Pool-even $i
      xl cpupool-cpu-add Pool-odd $i
    done
    xl cpupool-list -c
  done
}

# Moving a domain between pools
function movedomain() {
  xl create vms/vm1.cfg pool=\"Pool-odd\"
  sleep 10

  while true; do
    xl cpupool-migrate vm1 Pool-even
    sleep $(($RANDOM%10))
    xl cpupool-migrate vm1 Pool-odd
    sleep $(($RANDOM%10))
  done
}

# creating and destroying a domain
function createdestroydomain() {
  while true; do
    xl create vms/vm2.cfg pool=\"Pool-odd\"
    xl list -c
    sleep $((10+$RANDOM%10))
    xl destroy vm2
    sleep $(($RANDOM%5))
    xl create vms/vm2.cfg pool=\"Pool-even\"
    xl list -c
    sleep $((10+$RANDOM%10))
    xl destroy vm2
  done
}

switchcpus &
movedomain &
createdestroydomain &

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [PATCH v3 3/6] xen: sched: clarify use cases of schedule_cpu_switch()
  2015-10-29 23:59   ` Dario Faggioli
@ 2015-10-30  4:33     ` Juergen Gross
  0 siblings, 0 replies; 21+ messages in thread
From: Juergen Gross @ 2015-10-30  4:33 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: George Dunlap, xen-devel, Jan Beulich

On 10/30/2015 12:59 AM, Dario Faggioli wrote:
> On Fri, 2015-10-30 at 00:04 +0100, Dario Faggioli wrote:
>> schedule_cpu_switch() is meant to be only used for moving
>> pCPUs from a cpupool to no cpupool, and from there back
>> to a cpupool, *not* to move them directly from one cpupool
>> to another.
>>
>> This is something that is reflected in the way it is
>> implemented, and should be kept in mind when looking at
>> it. However, that is not that clear, by just the look of
>> it.
>>
>> Make it more evident by:
>>   - adding commentary and ASSERT()s;
>>   - update the cpupool per-CPU variable (mapping pCPUs to
>>     pools) directly in schedule_cpu_switch(), rather than
>>     in various places in cpupool.c.
>>
>> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>> Acked-by: Juergen Gross <jgross@suse.com>
>>
> BTW, Juergen, I had the whole series tested with the script below, for
> a few hourse, as per your suggestion to help confirming that this patch
> is actually correct, as it seems by just looking at the code.
>
> Since everything was fine, I added your Ack, as you said I could.

Great. Thanks for doing this!

BTW: I especially like that you removed cpu 0 from Pool-0!


Juergen


* Re: [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
  2015-10-29 23:04 ` [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS Dario Faggioli
@ 2015-10-30 23:00   ` Meng Xu
  2015-11-02  8:03     ` Dario Faggioli
  0 siblings, 1 reply; 21+ messages in thread
From: Meng Xu @ 2015-10-30 23:00 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: George Dunlap, xen-devel, Meng Xu, Jan Beulich, Andrew Cooper

Hi Dario,

2015-10-29 19:04 GMT-04:00 Dario Faggioli <dario.faggioli@citrix.com>:
>
> The insert_vcpu() hook is handled with inconsistent locking.
> In fact, schedule_cpu_switch() calls the hook with runqueue
> lock held, while sched_move_domain() relies on the hook
> implementations to take the lock themselves (and, since that
> is not done in Credit1 and RTDS, such operation is not safe
> in those cases).
>
> This is fixed as follows:
>  - take the lock in the hook implementations, in specific
>    schedulers' code;
>  - avoid calling insert_vcpu(), for the idle vCPU, in
>    schedule_cpu_switch(). In fact, idle vCPUs are set to run
>    immediately, and the various schedulers won't insert them
>    in their runqueues anyway, even when explicitly asked to.
>
> While there, still in schedule_cpu_switch(), locking with
> _irq() is enough (there's no need to do *_irqsave()).
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Meng Xu <mengxu@cis.upenn.edu>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Jan Beulich <JBeulich@suse.com>
> ---
> Changes from v2:
>  * locking in schedule_cpu_switch() is left in place (but
>    turned to just _irq(), instead of *_irqsave());
>  * call to insert_vcpu() in schedule_cpu_switch() is
>    removed in this patch (rather than later in the series).
>
> Changes from v1 (of this series):
>  * in Credit1, the lock wants to be an _irqsave() one. If
>    not, the ASSERT() in _spin_lock_irq() will trigger when
>    the hook is called, during boot, from sched_init_vcpu();
>  * rephrased the changelog (to be less verbose);
>  * add a few words, in the changelog, about why it is safe
>    to get rid of the locking in schedule_cpu_switch(). Proper
>    commentary and ASSERT()-s about that will come in another
>    patch.
>
> Changes from the other series:
>  * split the patch (wrt the original patch, in the original
>    series), and take care, in this one, only of insert_vcpu();
> ---
>  xen/common/sched_credit.c |    6 ++++++
>  xen/common/sched_rt.c     |    3 +++
>  xen/common/schedule.c     |    6 ++----
>  3 files changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index 6f71e0d..e16bd3a 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -903,10 +903,16 @@ static void
>  csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
>  {
>      struct csched_vcpu *svc = vc->sched_priv;
> +    spinlock_t *lock;
> +    unsigned long flags;
> +
> +    lock = vcpu_schedule_lock_irqsave(vc, &flags);


This is inconsistent with the commit log.
I also checked the branch rel/sched/fix-vcpu-ins-rem-v2 in your repo.; it
has the following code:

lock = vcpu_schedule_lock_irq(vc, &flags);

I guess maybe you forgot to change it in this commit but changed it in the
following commit?


Besides this mistake, this patch looks good to me.

Thanks,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
  2015-10-30 23:00   ` Meng Xu
@ 2015-11-02  8:03     ` Dario Faggioli
  2015-11-02 14:45       ` Meng Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Dario Faggioli @ 2015-11-02  8:03 UTC (permalink / raw)
  To: Meng Xu; +Cc: George Dunlap, xen-devel, Meng Xu, Jan Beulich, Andrew Cooper



On Fri, 2015-10-30 at 19:00 -0400, Meng Xu wrote:
> Hi Dario,
>
Hi,

> > This is fixed as follows:
> >  - take the lock in the hook implementations, in specific
> >    schedulers' code;
> >  - avoid calling insert_vcpu(), for the idle vCPU, in
> >    schedule_cpu_switch(). In fact, idle vCPUs are set to run
> >    immediately, and the various schedulers won't insert them
> >    in their runqueues anyway, even when explicitly asked to.
> > 
> > While there, still in schedule_cpu_switch(), locking with
> > _irq() is enough (there's no need to do *_irqsave()).
> > 
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> > ---

> > diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> > index 6f71e0d..e16bd3a 100644
> > --- a/xen/common/sched_credit.c
> > +++ b/xen/common/sched_credit.c
> > @@ -903,10 +903,16 @@ static void
> >  csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> >  {
> >      struct csched_vcpu *svc = vc->sched_priv;
> > +    spinlock_t *lock;
> > +    unsigned long flags;
> > +
> > +    lock = vcpu_schedule_lock_irqsave(vc, &flags);
> 
> 
> This is inconsistent with the commit log.
>
Mmm... no, the changelog speaks about schedule_cpu_switch(), not this
function.

For this function, the conversion from _irqsave() to _irq() is done
later in the series, when the call to insert_vcpu() is removed from the
boot path, and hence does not require IRQs to be disabled when called
(and the changelog of that later patch explains this).

> I also checked the
> 
> branch rel/sched/fix-vcpu-ins-rem-v2 in your repo., it is the
> following code:
> 
> lock = vcpu_schedule_lock_irq(vc, &flags);
> 
> I guess maybe you forgot to change it in this commit but change it
> the
> following commit?
> 
No, this is one of the few thing that changed between v2 and v3.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



* Re: [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
  2015-11-02  8:03     ` Dario Faggioli
@ 2015-11-02 14:45       ` Meng Xu
  2015-11-04 14:12         ` Dario Faggioli
  0 siblings, 1 reply; 21+ messages in thread
From: Meng Xu @ 2015-11-02 14:45 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: George Dunlap, xen-devel, Meng Xu, Jan Beulich, Andrew Cooper

Hi Dario,

>> > This is fixed as follows:
>> >  - take the lock in the hook implementations, in specific
>> >    schedulers' code;
>> >  - avoid calling insert_vcpu(), for the idle vCPU, in
>> >    schedule_cpu_switch(). In fact, idle vCPUs are set to run
>> >    immediately, and the various schedulers won't insert them
>> >    in their runqueues anyway, even when explicitly asked to.
>> >
>> > While there, still in schedule_cpu_switch(), locking with
>> > _irq() is enough (there's no need to do *_irqsave()).
>> >
>> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>> > ---
>
>> > diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
>> > index 6f71e0d..e16bd3a 100644
>> > --- a/xen/common/sched_credit.c
>> > +++ b/xen/common/sched_credit.c
>> > @@ -903,10 +903,16 @@ static void
>> >  csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
>> >  {
>> >      struct csched_vcpu *svc = vc->sched_priv;
>> > +    spinlock_t *lock;
>> > +    unsigned long flags;
>> > +
>> > +    lock = vcpu_schedule_lock_irqsave(vc, &flags);
>>
>>
>> This is inconsistent with the commit log.
>>
> Mmm... no, the changelog speaks about schedule_cpu_switch(), not this
> functions.
>
> For this function, the conversion from _irqsave() to _irq() is done
> later in the series, when the call to insert_vcpu() is removed from the
> boot path, and hence does not require IRQs to be disabled when called
> (and the changelog of that later patch explains this).
>
>> I also checked the
>>
>> branch rel/sched/fix-vcpu-ins-rem-v2 in your repo., it is the
>> following code:
>>
>> lock = vcpu_schedule_lock_irq(vc, &flags);
>>
>> I guess maybe you forgot to change it in this commit but change it
>> the
>> following commit?
>>
> No, this is one of the few thing that changed between v2 and v3.
>
> Regards,
> Dario

Thanks for the explanation! Then the patch looks good to me, at least
for the RTDS scheduler. :-)

Best,

Meng


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [PATCH v3 1/6] xen: sched: fix locking of remove_vcpu() in credit1
  2015-10-29 23:04 ` [PATCH v3 1/6] xen: sched: fix locking of remove_vcpu() in credit1 Dario Faggioli
@ 2015-11-02 18:04   ` George Dunlap
  2015-11-03 17:15     ` Dario Faggioli
  0 siblings, 1 reply; 21+ messages in thread
From: George Dunlap @ 2015-11-02 18:04 UTC (permalink / raw)
  To: Dario Faggioli, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

On 29/10/15 23:04, Dario Faggioli wrote:
> In fact, csched_vcpu_remove() (i.e., the credit1
> implementation of remove_vcpu()) manipulates runqueues,
> so holding the runqueue lock is necessary.
> 
> Also, while there, *_lock_irq() (for the private lock) is
> enough, there is no need to *_lock_irqsave().
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <JBeulich@suse.com>
> ---
> Changes from the other series:
>  * split the patch (wrt the original patch, in the original
>    series), and take care, in this one, only of remove_vcpu();
>  * removed pointless parentheses.
> ---
>  xen/common/sched_credit.c |   10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index b8f28fe..6f71e0d 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -926,7 +926,7 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
>      struct csched_private *prv = CSCHED_PRIV(ops);
>      struct csched_vcpu * const svc = CSCHED_VCPU(vc);
>      struct csched_dom * const sdom = svc->sdom;
> -    unsigned long flags;
> +    spinlock_t *lock;
>  
>      SCHED_STAT_CRANK(vcpu_remove);
>  
> @@ -936,15 +936,19 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
>          vcpu_unpause(svc->vcpu);
>      }
>  
> +    lock = vcpu_schedule_lock_irq(vc);
> +
>      if ( __vcpu_on_runq(svc) )
>          __runq_remove(svc);
>  
> -    spin_lock_irqsave(&(prv->lock), flags);
> +    vcpu_schedule_unlock_irq(lock, vc);

Actually, at this point the domain should be either paused or in the
middle of being destroyed, so it shouldn't be possible for the vcpu to
be on the runqueue, should it?  Should we instead change that if() to an
ASSERT(!__vcpu_on_runqueue(svc))?

 -George


* Re: [PATCH v3 3/6] xen: sched: clarify use cases of schedule_cpu_switch()
  2015-10-29 23:04 ` [PATCH v3 3/6] xen: sched: clarify use cases of schedule_cpu_switch() Dario Faggioli
  2015-10-29 23:59   ` Dario Faggioli
@ 2015-11-02 18:23   ` George Dunlap
  1 sibling, 0 replies; 21+ messages in thread
From: George Dunlap @ 2015-11-02 18:23 UTC (permalink / raw)
  To: Dario Faggioli, xen-devel; +Cc: Juergen Gross, George Dunlap, Jan Beulich

On 29/10/15 23:04, Dario Faggioli wrote:
> schedule_cpu_switch() is meant to be only used for moving
> pCPUs from a cpupool to no cpupool, and from there back
> to a cpupool, *not* to move them directly from one cpupool
> to another.
> 
> This is something that is reflected in the way it is
> implemented, and should be kept in mind when looking at
> it. However, that is not that clear, by just the look of
> it.
> 
> Make it more evident by:
>  - adding commentary and ASSERT()s;
>  - update the cpupool per-CPU variable (mapping pCPUs to
>    pools) directly in schedule_cpu_switch(), rather than
>    in various places in cpupool.c.
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Acked-by: Juergen Gross <jgross@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


* Re: [PATCH v3 6/6] xen: sched: get rid of the per domain vCPU list in Credit2
  2015-10-29 23:04 ` [PATCH v3 6/6] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
@ 2015-11-02 18:56   ` George Dunlap
  0 siblings, 0 replies; 21+ messages in thread
From: George Dunlap @ 2015-11-02 18:56 UTC (permalink / raw)
  To: Dario Faggioli, xen-devel; +Cc: George Dunlap, Andrew Cooper

On 29/10/15 23:04, Dario Faggioli wrote:
> As, currently, there is no reason to bother having
> it and keeping it updated.
> 
> In fact, it is only used for dumping and changing
> vCPUs parameters, but that can be achieved easily with
> for_each_vcpu.
> 
> While there, improve alignment of comments, and
> add a const qualifier to a pointer, making things
> more consistent with what happens everywhere else
> in the source file.
> 
> This also allows us to kill one of the remaining
> FIXMEs in the code, which is always good.
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: George Dunlap <george.dunlap@citrix.com>


* Re: [PATCH v3 5/6] xen: sched: get rid of the per domain vCPU list in RTDS
  2015-10-29 23:04 ` [PATCH v3 5/6] xen: sched: get rid of the per domain vCPU list in RTDS Dario Faggioli
@ 2015-11-02 18:57   ` George Dunlap
  0 siblings, 0 replies; 21+ messages in thread
From: George Dunlap @ 2015-11-02 18:57 UTC (permalink / raw)
  To: Dario Faggioli, xen-devel; +Cc: George Dunlap, Andrew Cooper, Meng Xu

On 29/10/15 23:04, Dario Faggioli wrote:
> As, currently, there is no reason to bother having
> it and keeping it updated.
> 
> In fact, it is only used for dumping and changing
> vCPUs parameters, but that can be achieved easily with
> for_each_vcpu.
> 
> While there, take care of the case when
> XEN_DOMCTL_SCHEDOP_getinfo is called but no vCPUs have
> been allocated yet (by returning the default scheduling
> parameters).
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>

(FYI I don't think this needs my Ack)


* Re: [PATCH v3 1/6] xen: sched: fix locking of remove_vcpu() in credit1
  2015-11-02 18:04   ` George Dunlap
@ 2015-11-03 17:15     ` Dario Faggioli
  0 siblings, 0 replies; 21+ messages in thread
From: Dario Faggioli @ 2015-11-03 17:15 UTC (permalink / raw)
  To: George Dunlap, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich



On Mon, 2015-11-02 at 18:04 +0000, George Dunlap wrote:
> On 29/10/15 23:04, Dario Faggioli wrote:

> > @@ -936,15 +936,19 @@ csched_vcpu_remove(const struct scheduler
> > *ops, struct vcpu *vc)
> >          vcpu_unpause(svc->vcpu);
> >      }
> >  
> > +    lock = vcpu_schedule_lock_irq(vc);
> > +
> >      if ( __vcpu_on_runq(svc) )
> >          __runq_remove(svc);
> >  
> > -    spin_lock_irqsave(&(prv->lock), flags);
> > +    vcpu_schedule_unlock_irq(lock, vc);
> 
> Actually, at this point the domain should be either paused or in the
> middle of being destroyed, so it shouldn't be possible for the vcpu
> to
> be on the runqueue, should it?
>
Makes sense.

>   Should we instead change that if() to an
> ASSERT(!__vcpu_on_runqueue(svc))?
> 
I like the idea, I'll do it like this.

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



* Re: [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
  2015-11-02 14:45       ` Meng Xu
@ 2015-11-04 14:12         ` Dario Faggioli
  2015-11-04 15:01           ` Meng Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Dario Faggioli @ 2015-11-04 14:12 UTC (permalink / raw)
  To: Meng Xu; +Cc: George Dunlap, xen-devel, Meng Xu



On Mon, 2015-11-02 at 09:45 -0500, Meng Xu wrote:
> > > I guess maybe you forgot to change it in this commit but change
> > > it
> > > the
> > > following commit?
> > > 
> > No, this is one of the few thing that changed between v2 and v3.
> > 
> > Regards,
> > Dario
> 
> Thanks for the explanation! Then the patch looks good to me, at least
> for RTDS scheduler. :-)
> 
Thanks for looking at the patch.

Just FTR (and for next time :-D), is the above something that can be
interpreted as a 'Reviewed-by: Meng Xu <xxx>'?  If not (e.g., because
you haven't looked thoroughly enough to feel confident expressing it),
then fine, I was just asking.

If yes, I encourage you to say it explicitly, to avoid errors and
misjudgements. If you 'only' looked at the patch with the RTDS
scheduler in mind, that is fine too. You can say something like "As far
as the RTDS scheduler is concerned: Reviewed-by: Meng Xu <xxx>". Other
reviewers and committers will take this into account and properly
weigh it.

Every ack/review is important, and, if you took the time to look at a
patch, why not say so in the proper way? :-)

I'm about to send v4 of this series. Feel free (only if you want, of
course!) to chime in on that thread.

Thanks again and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



* Re: [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
  2015-11-04 14:12         ` Dario Faggioli
@ 2015-11-04 15:01           ` Meng Xu
  2015-11-04 15:52             ` Dario Faggioli
  0 siblings, 1 reply; 21+ messages in thread
From: Meng Xu @ 2015-11-04 15:01 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: George Dunlap, xen-devel, Meng Xu



Hi Dario,

2015-11-04 9:12 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:

> On Mon, 2015-11-02 at 09:45 -0500, Meng Xu wrote:
> > > > I guess maybe you forgot to change it in this commit but change
> > > > it
> > > > the
> > > > following commit?
> > > >
> > > No, this is one of the few thing that changed between v2 and v3.
> > >
> > > Regards,
> > > Dario
> >
> > Thanks for the explanation! Then the patch looks good to me, at least
> > for RTDS scheduler. :-)
> >
> Thanks for looking at the patch.
>
> Just FTR (and for next time :-D), is the above something that can be
> interpreted as a 'Reviewed-by: Meng Xu <xxx>' ?  If no (e.g., because
> you haven't looking thoroughly enough to feel confident to express it),
> then fine, I was just asking.
>

Thank you very much for explaining this for me. :-)

I feel confident about the changes for the RTDS scheduler. I'm not so
confident about the change in schedule.c. To be specific, this patch
removes insert_vcpu in schedule_cpu_switch() in schedule.c; I'm not so
sure if it is OK to insert_vcpu when a domain is moved. (Next time, I
will stand up and ask, although it may be a stupid question. ;-) )

So as to this patch, I will say:
As far as the RTDS scheduler is concerned: Reviewed-by: Meng Xu
<mengxu@cis.upenn.edu>
>
> If yes, I encourage you to say it explicitly, to avoid errors and
> misjudgements. If you 'only' looked at the patch with the RTDS
> scheduler in mind, that is fine too. You can say something like "As far
> as the RTDS scheduler is concerned: Reviewed-by: Meng Xu <xxx>". Other
> reviewers and committers will take this into account and properly
> weight it.
>
> Every akc/review is important, and, if you took the time to look at a
> patch, why don't say it in the proper way? :-)
>

Sure! I will keep this in mind.


> I'm about to send v4 of this series. Feel free (only if you want, of
> course!), to chime in in that thread.
>

Sure. I will try to have a look. :-)


Thanks,

Meng


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
  2015-11-04 15:01           ` Meng Xu
@ 2015-11-04 15:52             ` Dario Faggioli
  2015-11-05  3:55               ` Meng Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Dario Faggioli @ 2015-11-04 15:52 UTC (permalink / raw)
  To: Meng Xu; +Cc: George Dunlap, xen-devel, Meng Xu



On Wed, 2015-11-04 at 10:01 -0500, Meng Xu wrote:
> 2015-11-04 9:12 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
> > Just FTR (and for next time :-D), is the above something that can
> > be
> > interpreted as a 'Reviewed-by: Meng Xu <xxx>' ?  If no (e.g.,
> > because
> > you haven't looking thoroughly enough to feel confident to express
> > it),
> > then fine, I was just asking.
> Thank you very much for explaining this for me. :-) 
> 
> I feel confident about the changes for RTDS scheduler. 
>
Ok.

> I'm not so confident about the change in the schedule.c. To be
> specific, this patch removes insert_vcpu in schedule_cpu_switch() in
> schedule.c; 
>
It removes the attempt to insert the idle vCPU into the runqueue of
the scheduler of the target cpupool for the pCPU.

More specifically, this line:

  SCHED_OP(new_ops, insert_vcpu, idle);

If we look at the various ways in which insert_vcpu is implemented, we
have:

csched_vcpu_insert:

    if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
        __runq_insert(vc->processor, svc);

but the pCPU being switched is free, i.e., it is not in any cpupool,
and it is idling. So, the idle vCPU is running, and the condition above
is false, which means __runq_insert() is not really called.

csched2_vcpu_insert:

    if ( ! is_idle_vcpu(vc) )
    {
     ...
    }

so trying to insert the idle vCPU is actually a nop.

rt_vcpu_insert:

    if ( is_idle_vcpu(vc) )
        return;

a nop again.

My point being that this patch actually removes nothing but a bunch of
if()-s, with no effect at all.

> I'm not so sure if it is ok to insert_vcpu when a domain is moved.

>
Hopefully, I addressed your doubts.

BTW, we're not talking about a domain being moved between pools. That
is done with another function. schedule_cpu_switch() is about taking a
pCPU off from a cpupool, or assigning a pCPU to a cpupool.

> (Next time, I will stand out and ask although it may be a stupid
> question. ;-) )
> 
Sure! And this does not look like a stupid question to me. :-)

> So as to this patch, I will say:
> As far as the RTDS scheduler is concerned: Reviewed-by: Meng Xu <
> mengxu@cis.upenn.edu>
>  
Ok, I haven't sent v4 yet, so I'll apply it there. Thanks.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
  2015-11-04 15:52             ` Dario Faggioli
@ 2015-11-05  3:55               ` Meng Xu
  0 siblings, 0 replies; 21+ messages in thread
From: Meng Xu @ 2015-11-05  3:55 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: George Dunlap, xen-devel, Meng Xu


[-- Attachment #1.1: Type: text/plain, Size: 2649 bytes --]

Hi Dario,

Thank you very much for the explanation! I got it. To be specific,

2015-11-04 10:52 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:

> On Wed, 2015-11-04 at 10:01 -0500, Meng Xu wrote:
> > 2015-11-04 9:12 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
> > > Just FTR (and for next time :-D), is the above something that can
> > > be
> > > interpreted as a 'Reviewed-by: Meng Xu <xxx>' ?  If no (e.g.,
> > > because
> > > you haven't looked thoroughly enough to feel confident to express
> > > it),
> > > then fine, I was just asking.
> > Thank you very much for explaining this for me. :-)
> >
> > I feel confident about the changes for RTDS scheduler.
> >
> Ok.
>
> > I'm not so confident about the change in the schedule.c. To be
> > specific, this patch removes insert_vcpu in schedule_cpu_switch() in
> > schedule.c;
> >
> It removes the attempt to insert the idle vCPU in the runqueue of
> the scheduler of the target cpupool for the pCPU.
>
> More specifically, this line:
>
>   SCHED_OP(new_ops, insert_vcpu, idle);
>

I neglected the parameter "idle" here. :-)

> If we look at the various ways in which insert_vcpu is implemented, we
> have:
>
> csched_vcpu_insert:
>
>     if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
>         __runq_insert(vc->processor, svc);
>
> but the pCPU being switched is free, i.e., it is not in any cpupool,
> and it is idling. So, the idle vCPU is running, and the condition above
> is false, which means __runq_insert() is not really called.
>
> csched2_vcpu_insert:
>
>     if ( ! is_idle_vcpu(vc) )
>     {
>      ...
>     }
>
> so trying to insert the idle vCPU is actually a nop.
>
> rt_vcpu_insert:
>
>     if ( is_idle_vcpu(vc) )
>         return;
>
> a nop again.
>
>
Yes. :-) After seeing this, I recalled... :-D


> My point being that this patch actually removes nothing but a bunch of
> if()-s, with no effect at all.
>
> > I'm not so sure if it is ok to insert_vcpu when a domain is moved.
> >
> Hopefully, I addressed your doubts.
>

Yes. It clears my doubts. :-D

> Ok, I haven't sent v4 yet, so I'll apply it there. Thanks.
>
>
Thank you very much for your explanation!

Now I'm confident about

Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>

I saw the v4 patch series that comes with the Reviewed-by above. So I think
you don't need to do anything.

Best regards,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

[-- Attachment #1.2: Type: text/html, Size: 5897 bytes --]


^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2015-11-05  3:55 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-29 23:04 [PATCH v3 0/6] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
2015-10-29 23:04 ` [PATCH v3 1/6] xen: sched: fix locking of remove_vcpu() in credit1 Dario Faggioli
2015-11-02 18:04   ` George Dunlap
2015-11-03 17:15     ` Dario Faggioli
2015-10-29 23:04 ` [PATCH v3 2/6] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS Dario Faggioli
2015-10-30 23:00   ` Meng Xu
2015-11-02  8:03     ` Dario Faggioli
2015-11-02 14:45       ` Meng Xu
2015-11-04 14:12         ` Dario Faggioli
2015-11-04 15:01           ` Meng Xu
2015-11-04 15:52             ` Dario Faggioli
2015-11-05  3:55               ` Meng Xu
2015-10-29 23:04 ` [PATCH v3 3/6] xen: sched: clarify use cases of schedule_cpu_switch() Dario Faggioli
2015-10-29 23:59   ` Dario Faggioli
2015-10-30  4:33     ` Juergen Gross
2015-11-02 18:23   ` George Dunlap
2015-10-29 23:04 ` [PATCH v3 4/6] xen: sched: better handle (not) inserting idle vCPUs in runqueues Dario Faggioli
2015-10-29 23:04 ` [PATCH v3 5/6] xen: sched: get rid of the per domain vCPU list in RTDS Dario Faggioli
2015-11-02 18:57   ` George Dunlap
2015-10-29 23:04 ` [PATCH v3 6/6] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
2015-11-02 18:56   ` George Dunlap
