* [PATCH v2 0/6] xen: RCU: x86/ARM: Add support of rcu_idle_{enter, exit}
@ 2017-08-16 16:45 Dario Faggioli
  2017-08-16 16:45 ` [PATCH v2 1/6] xen: in do_softirq() sample smp_processor_id() once and for all Dario Faggioli
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Dario Faggioli @ 2017-08-16 16:45 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich

Hello,

This is take 2 of this series, v1 of which can be found here:

 https://lists.xen.org/archives/html/xen-devel/2017-07/msg02770.html

This new version is mostly about taking care of the various review comments
received. Some of the differences are worth a mention here, though:

- patch 3 is significantly different, because Tim highlighted, during v1
  review, another latent race that we should deal with. Luckily, it was
  basically enough to move the invocations of rcu_idle_{enter,exit}() a bit,
  and add some barriers (details in the patch changelog);

- patch 6 has been added, in an attempt to address concerns (coming mainly
  from Stefano) that the timer we introduce to deal with idle CPUs with
  queued callbacks may fire too often (wasting power).

A git branch with this series is available here:

 git://xenbits.xen.org/people/dariof/xen.git  rel/rcu/introduce-idle-enter-exit-v2
 https://travis-ci.org/fdario/xen/builds/265225626

This patch series addresses the XEN-27 issue, which I think Julien wants to
consider a blocker for 4.10:

 https://xenproject.atlassian.net/browse/XEN-27

Thanks and Regards,
Dario
---

Dario Faggioli (6):
      xen: in do_softirq() sample smp_processor_id() once and for all.
      xen: ARM: suspend the tick (if in use) when going idle.
      xen: RCU/x86/ARM: discount CPUs that were idle when grace period started.
      xen: RCU: don't let a CPU with a callback go idle.
      xen: RCU: avoid busy waiting until the end of grace period.
      xen: try to prevent idle timer from firing too often.

 xen/arch/arm/domain.c         |   29 ++++++---
 xen/arch/x86/cpu/mwait-idle.c |    3 -
 xen/common/rcupdate.c         |  130 ++++++++++++++++++++++++++++++++++++++++-
 xen/common/schedule.c         |    4 +
 xen/common/softirq.c          |    8 +--
 xen/include/xen/perfc_defn.h  |    2 +
 xen/include/xen/rcupdate.h    |    6 ++
 xen/include/xen/sched.h       |    6 +-
 8 files changed, 166 insertions(+), 22 deletions(-)
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v2 1/6] xen: in do_softirq() sample smp_processor_id() once and for all.
  2017-08-16 16:45 [PATCH v2 0/6] xen: RCU: x86/ARM: Add support of rcu_idle_{enter, exit} Dario Faggioli
@ 2017-08-16 16:45 ` Dario Faggioli
  2017-08-29 14:02   ` George Dunlap
  2017-08-16 16:45 ` [PATCH v2 2/6] xen: ARM: suspend the tick (if in use) when going idle Dario Faggioli
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Dario Faggioli @ 2017-08-16 16:45 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, George Dunlap, Andrew Cooper, Tim Deegan,
	Julien Grall, Jan Beulich

In fact, right now, we read it at every iteration of the loop.
The reason it's done like this lies in how context switching was
handled on IA64 (see commit ae9bfcdc, "[XEN] Various softirq cleanups" [1]).

However:
1) we don't have IA64 any longer, and all the architectures that
   we do support are fine with sampling once and for all;
2) sampling at every iteration (slightly) affects performance;
3) sampling at every iteration is misleading, as it makes people
   believe that it is currently possible for SCHEDULE_SOFTIRQ to
   move the execution flow to another CPU (and the comment,
   by reinforcing this belief, makes things even worse!).

Therefore, let's:
- do the sampling only once, and remove the comment;
- leave an ASSERT() around, so that, if context switching
  logic changes (in current or new arches), we will notice.

[1] Some more (historical) information here:
    http://old-list-archives.xenproject.org/archives/html/xen-devel/2006-06/msg01262.html

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Tim Deegan <tim@xen.org>
---
This has been submitted already, as a part of another series. Discussion is here:
 https://lists.xen.org/archives/html/xen-devel/2017-06/msg00102.html

For the super lazy, Jan's last words in that thread were these:
 "I've voiced my opinion, but I don't mean to block the patch. After
  all there's no active issue the change introduces."
 (https://lists.xen.org/archives/html/xen-devel/2017-06/msg00797.html)

Since then:
- replaced "once and for all" with "only once", as requested by George (and
  applied his Reviewed-by, as he said I could).
---
 xen/common/softirq.c |    8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index ac12cf8..67c84ba 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -27,16 +27,12 @@ static DEFINE_PER_CPU(unsigned int, batching);
 
 static void __do_softirq(unsigned long ignore_mask)
 {
-    unsigned int i, cpu;
+    unsigned int i, cpu = smp_processor_id();
     unsigned long pending;
 
     for ( ; ; )
     {
-        /*
-         * Initialise @cpu on every iteration: SCHEDULE_SOFTIRQ may move
-         * us to another processor.
-         */
-        cpu = smp_processor_id();
+        ASSERT(cpu == smp_processor_id());
 
         if ( rcu_pending(cpu) )
             rcu_check_callbacks(cpu);



* [PATCH v2 2/6] xen: ARM: suspend the tick (if in use) when going idle.
  2017-08-16 16:45 [PATCH v2 0/6] xen: RCU: x86/ARM: Add support of rcu_idle_{enter, exit} Dario Faggioli
  2017-08-16 16:45 ` [PATCH v2 1/6] xen: in do_softirq() sample smp_processor_id() once and for all Dario Faggioli
@ 2017-08-16 16:45 ` Dario Faggioli
  2017-08-16 16:45 ` [PATCH v2 3/6] xen: RCU/x86/ARM: discount CPUs that were idle when grace period started Dario Faggioli
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Dario Faggioli @ 2017-08-16 16:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini

Since commit 964fae8ac ("cpuidle: suspend/resume scheduler
tick timer during cpu idle state entry/exit"), if a scheduler
has a periodic tick timer, we stop it when going idle.

This, however, is only true for x86. Make it true for ARM as
well.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain.c |   29 ++++++++++++++++++++---------
 1 file changed, 20 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index eeebbdb..2160d2b 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -39,6 +39,25 @@
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
+static void do_idle(void)
+{
+    unsigned int cpu = smp_processor_id();
+
+    sched_tick_suspend();
+    /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+    process_pending_softirqs();
+
+    local_irq_disable();
+    if ( cpu_is_haltable(cpu) )
+    {
+        dsb(sy);
+        wfi();
+    }
+    local_irq_enable();
+
+    sched_tick_resume();
+}
+
 void idle_loop(void)
 {
     unsigned int cpu = smp_processor_id();
@@ -52,15 +71,7 @@ void idle_loop(void)
         if ( unlikely(tasklet_work_to_do(cpu)) )
             do_tasklet();
         else
-        {
-            local_irq_disable();
-            if ( cpu_is_haltable(cpu) )
-            {
-                dsb(sy);
-                wfi();
-            }
-            local_irq_enable();
-        }
+            do_idle();
 
         do_softirq();
         /*



* [PATCH v2 3/6] xen: RCU/x86/ARM: discount CPUs that were idle when grace period started.
  2017-08-16 16:45 [PATCH v2 0/6] xen: RCU: x86/ARM: Add support of rcu_idle_{enter, exit} Dario Faggioli
  2017-08-16 16:45 ` [PATCH v2 1/6] xen: in do_softirq() sample smp_processor_id() once and for all Dario Faggioli
  2017-08-16 16:45 ` [PATCH v2 2/6] xen: ARM: suspend the tick (if in use) when going idle Dario Faggioli
@ 2017-08-16 16:45 ` Dario Faggioli
  2017-08-17 13:03   ` Tim Deegan
  2017-08-16 16:45 ` [PATCH v2 4/6] xen: RCU: don't let a CPU with a callback go idle Dario Faggioli
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Dario Faggioli @ 2017-08-16 16:45 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich

Xen is a tickless (micro-)kernel, i.e., when a CPU becomes
idle there is no timer tick that will periodically wake the
CPU up.
OTOH, when we imported RCU from Linux, Linux was (on x86) a
ticking kernel, i.e., there was a periodic timer tick always
running, even on idle CPUs. This was bad for power consumption,
but, for instance, made it easy to monitor the quiescent states
of all the CPUs, and hence tell when RCU grace periods ended.

In Xen, that is impossible, and that's particularly problematic
when the system is very lightly loaded, as some CPUs may never
have the chance to tell the RCU core logic about their quiescence,
and grace periods could extend indefinitely!

This has led, on x86, to long (and unpredictable) delays between
RCU callbacks being queued and their actual invocation. On ARM, we've
even seen infinite grace periods (e.g., complete_domain_destroy()
never being actually invoked!). See here:

 https://lists.xenproject.org/archives/html/xen-devel/2017-01/msg02454.html

The first step for fixing this situation is for RCU to record,
at the beginning of a grace period, which CPUs are already idle.
In fact, being idle, they can't be in the middle of any read-side
critical section, and we don't have to wait for their quiescence.

This is tracked in a cpumask, in a similar way to how it was also
done in Linux (on s390, which was tickless already). It is also
basically the same approach used for making Linux x86 tickless,
from 2.6.21 onward (see commit 79bf2bb3, "tick-management: dyntick /
highres functionality").

For correctness, we also add barriers. One is also present in
Linux (see commit c3f59023, "Fix RCU race in access of nohz_cpu_mask"),
although we change the code comment to something that makes better
sense for us. The other (its pair) is put in the newly
introduced function rcu_idle_enter(), right after updating the
cpumask. Together they prevent races between CPUs going idle
while a grace period is beginning.
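As an illustration (not part of the patch), the two interleavings the paired barriers deal with can be modeled sequentially. The struct and function names below are simplified, hypothetical stand-ins for the rcp->cur, rcp->cpumask and rcp->idle_cpumask fields touched in xen/common/rcupdate.c:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical single-threaded model of the race: one CPU goes idle
 * while another CPU starts a grace period. Booleans stand in for this
 * CPU's bits in the real cpumasks.
 */
struct model {
    int cur;             /* grace period number (rcp->cur) */
    bool in_cpumask;     /* this CPU's bit in rcp->cpumask */
    bool in_idle_mask;   /* this CPU's bit in rcp->idle_cpumask */
    int quiescbatch;     /* last grace period this CPU acknowledged */
};

/* rcu_start_batch(): cur is incremented (and, via smp_mb() in the real
 * code, made visible) before the idle mask is sampled. */
static void start_batch(struct model *m)
{
    m->cur++;
    m->in_cpumask = !m->in_idle_mask;
}

/* rcu_idle_enter(): the idle bit is set before (the paired smp_mb())
 * any later look at cur. */
static void idle_enter(struct model *m)
{
    m->in_idle_mask = true;
}

/* Does this CPU still owe a quiescent state for the current batch? */
static bool owes_quiescence(const struct model *m)
{
    return m->in_cpumask && m->quiescbatch != m->cur;
}
```

In one order the idle CPU is simply excluded from rcp->cpumask; in the other it is captured, but it is then guaranteed to see the new cur (and go through cpu_quiet()) before actually sleeping, so the grace period cannot hang on it either way.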

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Julien Grall <julien.grall@arm.com>
---
Changes from v1:
* call rcu_idle_{enter,exit}() from the tick suspension/resumption logic.  This
  widens the window during which a CPU has its bit set in the idle cpumask.
  During review, it was suggested to do the opposite (narrow it), and that's
  what I did first. But then I changed my mind, as doing things as they look
  now (wide window) cures another pre-existing (and independent) race which
  Tim discovered, still during v1 review;
* add a barrier in rcu_idle_enter() too, to properly deal with the race Tim
  pointed out during review;
* mark CPU where RCU initialization happens, at boot, as non-idle.
---
 xen/common/rcupdate.c      |   48 ++++++++++++++++++++++++++++++++++++++++++--
 xen/common/schedule.c      |    2 ++
 xen/include/xen/rcupdate.h |    3 +++
 3 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 8cc5a82..9f7d41d 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -52,7 +52,8 @@ static struct rcu_ctrlblk {
     int  next_pending;  /* Is the next batch already waiting?         */
 
     spinlock_t  lock __cacheline_aligned;
-    cpumask_t   cpumask; /* CPUs that need to switch in order    */
+    cpumask_t   cpumask; /* CPUs that need to switch in order ... */
+    cpumask_t   idle_cpumask; /* ... unless they are already idle */
     /* for current batch to proceed.        */
 } __cacheline_aligned rcu_ctrlblk = {
     .cur = -300,
@@ -248,7 +249,16 @@ static void rcu_start_batch(struct rcu_ctrlblk *rcp)
         smp_wmb();
         rcp->cur++;
 
-        cpumask_copy(&rcp->cpumask, &cpu_online_map);
+        /*
+         * Make sure the increment of rcp->cur is visible so that, even
+         * if a CPU about to go idle is captured inside rcp->cpumask,
+         * rcu_pending() will return false, which then means cpu_quiet()
+         * will be invoked before the CPU actually enters idle.
+         *
+         * This barrier is paired with the one in rcu_idle_enter().
+         */
+        smp_mb();
+        cpumask_andnot(&rcp->cpumask, &cpu_online_map, &rcp->idle_cpumask);
     }
 }
 
@@ -474,7 +484,41 @@ static struct notifier_block cpu_nfb = {
 void __init rcu_init(void)
 {
     void *cpu = (void *)(long)smp_processor_id();
+
+    cpumask_setall(&rcu_ctrlblk.idle_cpumask);
+    /* The CPU we're running on is certainly not idle */
+    cpumask_clear_cpu(smp_processor_id(), &rcu_ctrlblk.idle_cpumask);
     cpu_callback(&cpu_nfb, CPU_UP_PREPARE, cpu);
     register_cpu_notifier(&cpu_nfb);
     open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
 }
+
+/*
+ * The CPU is becoming idle, so no more read side critical
+ * sections, and one more step toward grace period.
+ */
+void rcu_idle_enter(unsigned int cpu)
+{
+    /*
+     * During non-boot CPU bringup and resume, until this function is
+     * called for the first time, it's fine to find our bit already set.
+     */
+    ASSERT(!cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask) ||
+           (system_state < SYS_STATE_active || system_state >= SYS_STATE_resume));
+    cpumask_set_cpu(cpu, &rcu_ctrlblk.idle_cpumask);
+    /*
+     * If some other CPU is starting a new grace period, we'll notice that
+     * by seeing a new value in rcp->cur (different than our quiescbatch).
+     * That will force us all the way until cpu_quiet(), clearing our bit
+     * in rcp->cpumask, even in case we managed to get in there.
+     *
+     * See the comment before cpumask_andnot() in rcu_start_batch().
+     */
+    smp_mb();
+}
+
+void rcu_idle_exit(unsigned int cpu)
+{
+    ASSERT(cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask));
+    cpumask_clear_cpu(cpu, &rcu_ctrlblk.idle_cpumask);
+}
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index e83f4c7..c6f4817 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1903,6 +1903,7 @@ void sched_tick_suspend(void)
 
     sched = per_cpu(scheduler, cpu);
     SCHED_OP(sched, tick_suspend, cpu);
+    rcu_idle_enter(cpu);
 }
 
 void sched_tick_resume(void)
@@ -1910,6 +1911,7 @@ void sched_tick_resume(void)
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();
 
+    rcu_idle_exit(cpu);
     sched = per_cpu(scheduler, cpu);
     SCHED_OP(sched, tick_resume, cpu);
 }
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 557a7b1..561ac43 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -146,4 +146,7 @@ void call_rcu(struct rcu_head *head,
 
 int rcu_barrier(void);
 
+void rcu_idle_enter(unsigned int cpu);
+void rcu_idle_exit(unsigned int cpu);
+
 #endif /* __XEN_RCUPDATE_H */



* [PATCH v2 4/6] xen: RCU: don't let a CPU with a callback go idle.
  2017-08-16 16:45 [PATCH v2 0/6] xen: RCU: x86/ARM: Add support of rcu_idle_{enter, exit} Dario Faggioli
                   ` (2 preceding siblings ...)
  2017-08-16 16:45 ` [PATCH v2 3/6] xen: RCU/x86/ARM: discount CPUs that were idle when grace period started Dario Faggioli
@ 2017-08-16 16:45 ` Dario Faggioli
  2017-08-16 16:46 ` [PATCH v2 5/6] xen: RCU: avoid busy waiting until the end of grace period Dario Faggioli
  2017-08-16 16:46 ` [PATCH v2 6/6] xen: try to prevent idle timer from firing too often Dario Faggioli
  5 siblings, 0 replies; 11+ messages in thread
From: Dario Faggioli @ 2017-08-16 16:45 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich

If a CPU has a callback queued, it must be ready to invoke
it as soon as all the other CPUs involved in the grace period
have gone through a quiescent state.

But if we let such a CPU go idle, we can't really tell when (if
ever!) it will realize that it is actually time to invoke the
callback. To solve this problem, a CPU that has a callback queued
(and has already gone through a quiescent state itself) will stay
online, until the grace period ends, and the callback can be invoked.

This is similar to what Linux does, and is the second and last
step for fixing the overly long (or infinite!) grace periods.
The problem, though, is that Linux has the tick, so all that
is necessary there is to not stop the tick for the CPU (even
if it has gone idle). In Xen, there's no tick, so we must
prevent the CPU from going idle entirely, and let it spin on
rcu_pending(), consuming power and causing overhead.

In this commit, we implement the above, using rcu_needs_cpu(),
in a way similar to how it is used in Linux. This is correct,
useful and not wasteful for CPUs that participate in the grace
period, but do not have a callback queued. For the ones that
have callbacks, an optimization that avoids having to spin is
introduced in a subsequent change.
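A minimal sketch of the resulting predicate, with the per-CPU state flattened into plain booleans (the names here are illustrative stand-ins, not the actual per-CPU variables read by the real cpu_is_haltable() macro):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative per-CPU state for the idle-loop decision. */
struct cpu_state {
    bool has_queued_callback;   /* rdp->curlist non-empty */
    bool rcu_pending;           /* quiescent-state work outstanding */
    bool softirq_pending;
    bool online;
    bool tasklet_work;
};

/* rcu_needs_cpu(): the CPU still has RCU work, either a queued
 * callback waiting for the grace period, or pending RCU processing. */
static bool rcu_needs_cpu(const struct cpu_state *c)
{
    return c->has_queued_callback || c->rcu_pending;
}

/* cpu_is_haltable() after this patch: RCU work now blocks halting,
 * in addition to the pre-existing softirq/online/tasklet checks. */
static bool cpu_is_haltable(const struct cpu_state *c)
{
    return !rcu_needs_cpu(c) &&
           !c->softirq_pending &&
           c->online &&
           !c->tasklet_work;
}
```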

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 xen/include/xen/sched.h |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5828a01..c116604 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -847,7 +847,8 @@ uint64_t get_cpu_idle_time(unsigned int cpu);
 
 /*
  * Used by idle loop to decide whether there is work to do:
- *  (1) Run softirqs; or (2) Play dead; or (3) Run tasklets.
+ *  (1) Deal with RCU; (2) or run softirqs; or (3) Play dead;
+ *  or (4) Run tasklets.
  *
  * About (3), if a tasklet is enqueued, it will be scheduled
  * really really soon, and hence it's pointless to try to
@@ -855,7 +856,8 @@ uint64_t get_cpu_idle_time(unsigned int cpu);
  * the tasklet_work_to_do() helper).
  */
 #define cpu_is_haltable(cpu)                    \
-    (!softirq_pending(cpu) &&                   \
+    (!rcu_needs_cpu(cpu) &&                     \
+     !softirq_pending(cpu) &&                   \
      cpu_online(cpu) &&                         \
      !per_cpu(tasklet_work_to_do, cpu))
 



* [PATCH v2 5/6] xen: RCU: avoid busy waiting until the end of grace period.
  2017-08-16 16:45 [PATCH v2 0/6] xen: RCU: x86/ARM: Add support of rcu_idle_{enter, exit} Dario Faggioli
                   ` (3 preceding siblings ...)
  2017-08-16 16:45 ` [PATCH v2 4/6] xen: RCU: don't let a CPU with a callback go idle Dario Faggioli
@ 2017-08-16 16:46 ` Dario Faggioli
  2017-08-16 16:46 ` [PATCH v2 6/6] xen: try to prevent idle timer from firing too often Dario Faggioli
  5 siblings, 0 replies; 11+ messages in thread
From: Dario Faggioli @ 2017-08-16 16:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich

On the CPU where a callback is queued, cpu_is_haltable()
returns false (due to rcu_needs_cpu() being itself false).
That means the CPU would spin inside idle_loop(), continuously
calling do_softirq(), and, in there, continuously checking
rcu_pending(), in a tight loop.

Let's instead allow the CPU to really go idle, but make sure,
by arming a timer, that we periodically check whether the
grace period has come to an end. As the period of the
timer, we pick a value that makes things look like what
happens in Linux, with the periodic tick (as this code
comes from there).

Note that the timer will *only* be armed on CPUs that are
going idle while having queued RCU callbacks. On CPUs that
don't, there won't be any timer, and their sleep won't be
interrupted (and even for CPUs with callbacks, we only
expect a handful of wakeups at most, but that depends on
the system load, as much as on other things).
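The arm/stop policy can be sketched as follows. Booleans stand in for the real rdp->curlist pointer and the Xen set_timer()/stop_timer() calls; this is an illustration of the policy, not the patch itself:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative per-CPU RCU state for the idle timer decision. */
struct rcu_cpu {
    bool curlist_nonempty;    /* callback(s) queued on this CPU */
    bool idle_timer_active;
};

/* Entering idle: only a CPU that still has a callback queued programs
 * the timer; every other CPU sleeps undisturbed. Returns whether the
 * timer was armed. */
static bool idle_timer_start(struct rcu_cpu *rdp)
{
    if (!rdp->curlist_nonempty)
        return false;               /* no callback: no wakeups */

    rdp->idle_timer_active = true;  /* set_timer() in the real code */
    return true;
}

/* Leaving idle: the timer is stopped immediately, so a CPU that
 * resumes 'normal' execution stops being poked. */
static void idle_timer_stop(struct rcu_cpu *rdp)
{
    rdp->idle_timer_active = false; /* stop_timer() in the real code */
}
```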

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Julien Grall <julien.grall@arm.com>
---
Changes from v1:
* clarified changelog;
* fix style/indentation issues;
* deal with RCU idle timer in tick suspension logic;
* as a consequence of the point above, the timer now fires, so kill
  the ASSERT_UNREACHABLE, and put a perfcounter there (to count the
  times it triggers);
* add a comment about the value chosen for programming the idle timer;
* avoid pointless/bogus '!!' and void* casts;
* rearrange the rcu_needs_cpu() return condition;
* add a comment to clarify why we don't want to check rcu_pending()
  in rcu_idle_timer_start().
---
 xen/arch/x86/cpu/mwait-idle.c |    3 +-
 xen/common/rcupdate.c         |   72 ++++++++++++++++++++++++++++++++++++++++-
 xen/common/schedule.c         |    2 +
 xen/include/xen/perfc_defn.h  |    2 +
 xen/include/xen/rcupdate.h    |    3 ++
 5 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 762dff1..b6770ea 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -741,9 +741,8 @@ static void mwait_idle(void)
 	}
 
 	cpufreq_dbs_timer_suspend();
-
 	sched_tick_suspend();
-	/* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+	/* Timer related operations can raise TIMER_SOFTIRQ. Process it now. */
 	process_pending_softirqs();
 
 	/* Interrupts must be disabled for C2 and higher transitions. */
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 9f7d41d..e27bfed 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -84,8 +84,37 @@ struct rcu_data {
     int cpu;
     struct rcu_head barrier;
     long            last_rs_qlen;     /* qlen during the last resched */
+
+    /* 3) idle CPUs handling */
+    struct timer idle_timer;
+    bool idle_timer_active;
 };
 
+/*
+ * If a CPU with RCU callbacks queued goes idle, when the grace period is
+ * not finished yet, how can we make sure that the callbacks will eventually
+ * be executed? In Linux (2.6.21, the first "tickless idle" Linux kernel),
+ * the periodic timer tick would not be stopped for such CPU. Here in Xen,
+ * we may not even have a periodic timer tick, so we need to use a
+ * special purpose timer.
+ *
+ * Such timer:
+ * 1) is armed only when a CPU with an RCU callback(s) queued goes idle
+ *    before the end of the current grace period (_not_ for any CPUs that
+ *    go idle!);
+ * 2) when it fires, it is only re-armed if the grace period is still
+ *    running;
+ * 3) it is stopped immediately, if the CPU wakes up from idle and
+ *    resumes 'normal' execution.
+ *
+ * About how far in the future the timer should be programmed each time,
+ * it's hard to tell (guess!!). Since this mimics Linux's periodic timer
+ * tick, take values used there as an indication. In Linux 2.6.21, tick
+ * period can be 10ms, 4ms, 3.33ms or 1ms. Let's use 10ms, to enable
+ * at least some power saving on the CPU that is going idle.
+ */
+#define RCU_IDLE_TIMER_PERIOD MILLISECS(10)
+
 static DEFINE_PER_CPU(struct rcu_data, rcu_data);
 
 static int blimit = 10;
@@ -404,7 +433,45 @@ int rcu_needs_cpu(int cpu)
 {
     struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 
-    return (!!rdp->curlist || rcu_pending(cpu));
+    return (rdp->curlist && !rdp->idle_timer_active) || rcu_pending(cpu);
+}
+
+/*
+ * Timer for making sure the CPU where a callback is queued does
+ * periodically poke rcu_pending(), so that it will invoke the callback
+ * not too late after the end of the grace period.
+ */
+void rcu_idle_timer_start()
+{
+    struct rcu_data *rdp = &this_cpu(rcu_data);
+
+    /*
+     * Note that we don't check rcu_pending() here. In fact, we don't want
+     * the timer armed on CPUs that are in the process of quiescing while
+     * going idle, unless they really are the ones with a queued callback.
+     */
+    if (likely(!rdp->curlist))
+        return;
+
+    set_timer(&rdp->idle_timer, NOW() + RCU_IDLE_TIMER_PERIOD);
+    rdp->idle_timer_active = true;
+}
+
+void rcu_idle_timer_stop()
+{
+    struct rcu_data *rdp = &this_cpu(rcu_data);
+
+    if (likely(!rdp->idle_timer_active))
+        return;
+
+    rdp->idle_timer_active = false;
+    stop_timer(&rdp->idle_timer);
+}
+
+static void rcu_idle_timer_handler(void* data)
+{
+    /* Nothing, really... Just count the number of times we fire */
+    perfc_incr(rcu_idle_timer);
 }
 
 void rcu_check_callbacks(int cpu)
@@ -425,6 +492,8 @@ static void rcu_move_batch(struct rcu_data *this_rdp, struct rcu_head *list,
 static void rcu_offline_cpu(struct rcu_data *this_rdp,
                             struct rcu_ctrlblk *rcp, struct rcu_data *rdp)
 {
+    kill_timer(&rdp->idle_timer);
+
     /* If the cpu going offline owns the grace period we can block
      * indefinitely waiting for it, so flush it here.
      */
@@ -453,6 +522,7 @@ static void rcu_init_percpu_data(int cpu, struct rcu_ctrlblk *rcp,
     rdp->qs_pending = 0;
     rdp->cpu = cpu;
     rdp->blimit = blimit;
+    init_timer(&rdp->idle_timer, rcu_idle_timer_handler, rdp, cpu);
 }
 
 static int cpu_callback(
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c6f4817..8827921 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1904,6 +1904,7 @@ void sched_tick_suspend(void)
     sched = per_cpu(scheduler, cpu);
     SCHED_OP(sched, tick_suspend, cpu);
     rcu_idle_enter(cpu);
+    rcu_idle_timer_start();
 }
 
 void sched_tick_resume(void)
@@ -1911,6 +1912,7 @@ void sched_tick_resume(void)
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();
 
+    rcu_idle_timer_stop();
     rcu_idle_exit(cpu);
     sched = per_cpu(scheduler, cpu);
     SCHED_OP(sched, tick_resume, cpu);
diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
index 53849af..ca446e5 100644
--- a/xen/include/xen/perfc_defn.h
+++ b/xen/include/xen/perfc_defn.h
@@ -12,6 +12,8 @@ PERFCOUNTER(calls_from_multicall,       "calls from multicall")
 PERFCOUNTER(irqs,                   "#interrupts")
 PERFCOUNTER(ipis,                   "#IPIs")
 
+PERFCOUNTER(rcu_idle_timer,         "RCU: idle_timer")
+
 /* Generic scheduler counters (applicable to all schedulers) */
 PERFCOUNTER(sched_irq,              "sched: timer")
 PERFCOUNTER(sched_run,              "sched: runs through scheduler")
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 561ac43..3402eb5 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -149,4 +149,7 @@ int rcu_barrier(void);
 void rcu_idle_enter(unsigned int cpu);
 void rcu_idle_exit(unsigned int cpu);
 
+void rcu_idle_timer_start(void);
+void rcu_idle_timer_stop(void);
+
 #endif /* __XEN_RCUPDATE_H */



* [PATCH v2 6/6] xen: try to prevent idle timer from firing too often.
  2017-08-16 16:45 [PATCH v2 0/6] xen: RCU: x86/ARM: Add support of rcu_idle_{enter, exit} Dario Faggioli
                   ` (4 preceding siblings ...)
  2017-08-16 16:46 ` [PATCH v2 5/6] xen: RCU: avoid busy waiting until the end of grace period Dario Faggioli
@ 2017-08-16 16:46 ` Dario Faggioli
  5 siblings, 0 replies; 11+ messages in thread
From: Dario Faggioli @ 2017-08-16 16:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich

Idea is: the more CPUs are still active in a grace period,
the more we can wait to check whether it's time to invoke
the callbacks (on those CPUs that have already quiesced,
are idle, and have callbacks queued).

What we're trying to avoid is one of those idle CPUs waking
up, only to discover that the grace period is still
running, and that it hence could have slept longer
(saving more power).

This patch implements a heuristic aimed at achieving
that, at the price of having to call cpumask_weight() on
the 'entering idle' path, on CPUs with queued callbacks.

Of course, at the same time, we don't want to delay
recognising that we can invoke the callbacks by too
much, so we also set a maximum.
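The heuristic boils down to the following (a sketch in plain milliseconds rather than Xen's s_time_t nanoseconds; the helper name is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Per-active-CPU delay and overall cap, mirroring the constants
 * introduced by this patch (values in ms here, not MILLISECS()). */
#define RCU_IDLE_TIMER_CPU_DELAY_MS   10
#define RCU_IDLE_TIMER_PERIOD_MAX_MS 100

/* Wait 10ms for each CPU still active in the grace period (the weight
 * of rcp->cpumask), but never more than 100ms, so callback invocation
 * is not delayed by too much. */
static uint64_t idle_timer_period_ms(unsigned int active_cpus)
{
    uint64_t period = (uint64_t)active_cpus * RCU_IDLE_TIMER_CPU_DELAY_MS;

    return period < RCU_IDLE_TIMER_PERIOD_MAX_MS
               ? period : RCU_IDLE_TIMER_PERIOD_MAX_MS;
}
```

So a CPU idling while only one other CPU still has to quiesce is woken after 10ms, while on a busy 32-CPU grace period the sleep is capped at 100ms.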

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/common/rcupdate.c |   18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index e27bfed..b9ae6cc 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -110,10 +110,17 @@ struct rcu_data {
  * About how far in the future the timer should be programmed each time,
  * it's hard to tell (guess!!). Since this mimics Linux's periodic timer
  * tick, take values used there as an indication. In Linux 2.6.21, tick
- * period can be 10ms, 4ms, 3.33ms or 1ms. Let's use 10ms, to enable
- * at least some power saving on the CPU that is going idle.
+ * period can be 10ms, 4ms, 3.33ms or 1ms.
+ *
+ * That being said, we can assume that, the more CPUs are still active in
+ * the current grace period, the longer it will take for it to come to its
+ * end. We wait 10ms for each active CPU, as minimizing the wakeups enables
+ * more effective power saving, on the CPU that has gone idle. But we also
+ * never wait more than 100ms, to avoid delaying recognising the end of a
+ * grace period (and the invocation of the callbacks) by too much.
  */
-#define RCU_IDLE_TIMER_PERIOD MILLISECS(10)
+#define RCU_IDLE_TIMER_CPU_DELAY  MILLISECS(10)
+#define RCU_IDLE_TIMER_PERIOD_MAX MILLISECS(100)
 
 static DEFINE_PER_CPU(struct rcu_data, rcu_data);
 
@@ -444,6 +451,7 @@ int rcu_needs_cpu(int cpu)
 void rcu_idle_timer_start()
 {
     struct rcu_data *rdp = &this_cpu(rcu_data);
+    s_time_t next;
 
     /*
      * Note that we don't check rcu_pending() here. In fact, we don't want
@@ -453,7 +461,9 @@ void rcu_idle_timer_start()
     if (likely(!rdp->curlist))
         return;
 
-    set_timer(&rdp->idle_timer, NOW() + RCU_IDLE_TIMER_PERIOD);
+    next = min_t(s_time_t, RCU_IDLE_TIMER_PERIOD_MAX,
+                 cpumask_weight(&rcu_ctrlblk.cpumask) * RCU_IDLE_TIMER_CPU_DELAY);
+    set_timer(&rdp->idle_timer, NOW() + next);
     rdp->idle_timer_active = true;
 }
 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 3/6] xen: RCU/x86/ARM: discount CPUs that were idle when grace period started.
  2017-08-16 16:45 ` [PATCH v2 3/6] xen: RCU/x86/ARM: discount CPUs that were idle when grace period started Dario Faggioli
@ 2017-08-17 13:03   ` Tim Deegan
  2017-08-18 17:06     ` Dario Faggioli
  0 siblings, 1 reply; 11+ messages in thread
From: Tim Deegan @ 2017-08-17 13:03 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Julien Grall, Jan Beulich, xen-devel

Hi,

This looks good to me.  I have one question:

At 18:45 +0200 on 16 Aug (1502909149), Dario Faggioli wrote:
> @@ -474,7 +484,41 @@ static struct notifier_block cpu_nfb = {
>  void __init rcu_init(void)
>  {
>      void *cpu = (void *)(long)smp_processor_id();
> +
> +    cpumask_setall(&rcu_ctrlblk.idle_cpumask);
> +    /* The CPU we're running on is certainly not idle */
> +    cpumask_clear_cpu(smp_processor_id(), &rcu_ctrlblk.idle_cpumask);
>      cpu_callback(&cpu_nfb, CPU_UP_PREPARE, cpu);
>      register_cpu_notifier(&cpu_nfb);
>      open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
>  }
> +
> +/*
> + * The CPU is becoming idle, so no more read side critical
> + * sections, and one more step toward grace period.
> + */
> +void rcu_idle_enter(unsigned int cpu)
> +{
> +    /*
> +     * During non-boot CPU bringup and resume, until this function is
> +     * called for the first time, it's fine to find our bit already set.
> +     */
> +    ASSERT(!cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask) ||
> +           (system_state < SYS_STATE_active || system_state >= SYS_STATE_resume));

Does every newly started CPU immediately idle?  If not, then it might
run in an RCU read section but be excluded from the grace period
mechanism.

It seems like it would be better to start with the idle_cpumask empty,
and rely on online_cpumask to exclude CPUs that aren't running.
Or if that doesn't work, to call rcu_idle_exit/enter on the CPU
bringup/shutdown paths and simplify this assertion.

Cheers,

Tim.


* Re: [PATCH v2 3/6] xen: RCU/x86/ARM: discount CPUs that were idle when grace period started.
  2017-08-17 13:03   ` Tim Deegan
@ 2017-08-18 17:06     ` Dario Faggioli
  0 siblings, 0 replies; 11+ messages in thread
From: Dario Faggioli @ 2017-08-18 17:06 UTC (permalink / raw)
  To: Tim Deegan
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Julien Grall, Jan Beulich, xen-devel



On Thu, 2017-08-17 at 14:03 +0100, Tim Deegan wrote:
> At 18:45 +0200 on 16 Aug (1502909149), Dario Faggioli wrote:
> > +
> > +/*
> > + * The CPU is becoming idle, so no more read side critical
> > + * sections, and one more step toward grace period.
> > + */
> > +void rcu_idle_enter(unsigned int cpu)
> > +{
> > +    /*
> > +     * During non-boot CPU bringup and resume, until this function
> > is
> > +     * called for the first time, it's fine to find our bit
> > already set.
> > +     */
> > +    ASSERT(!cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask) ||
> > +           (system_state < SYS_STATE_active || system_state >=
> > SYS_STATE_resume));
> 
> Does every newly started CPU immediately idle?  If not, then it might
> run in an RCU read section but excluded from the grace period
> mechanism.
> 
They do call startup_cpu_idle_loop() pretty soon, yes (right at the end
of start_secondary(), on both x86 and ARM). But technically, yes, there
is a window for that.

> It seems like it would be better to start with the idle_cpumask
> empty,
> and rely on online_cpumask to exclude CPUs that aren't running.
>
I thought about that too, but then ended up doing it the other way
(i.e., having the mask fully set).

Now, I just tried to initialize it to "all clear"... It works, and I
have to admit that I like it better. :-)

As I'm going on vacation for a couple of weeks, I'll send v3 right
now, with just this changed, so it could even be checked in, if
others too are happy with this and with the rest of the patches (if
not, we'll talk about it when I'm back :-P).

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


* Re: [PATCH v2 1/6] xen: in do_softirq() sample smp_processor_id() once and for all.
  2017-08-16 16:45 ` [PATCH v2 1/6] xen: in do_softirq() sample smp_processor_id() once and for all Dario Faggioli
@ 2017-08-29 14:02   ` George Dunlap
  0 siblings, 0 replies; 11+ messages in thread
From: George Dunlap @ 2017-08-29 14:02 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Stefano Stabellini, Andrew Cooper, Tim Deegan, Julien Grall,
	Jan Beulich, xen-devel

On Wed, Aug 16, 2017 at 5:45 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> In fact, right now, we read it at every iteration of the loop.
> The reason it's done like this is how context switch was handled
> on IA64 (see commit ae9bfcdc, "[XEN] Various softirq cleanups" [1]).
>
> However:
> 1) we don't have IA64 any longer, and all the architectures that
>    we do support are ok with sampling once and for all;
> 2) sampling at every iteration (slightly) affects performance;
> 3) sampling at every iteration is misleading, as it makes people
>    believe that it is currently possible that SCHEDULE_SOFTIRQ
>    moves the execution flow on another CPU (and the comment,
>    by reinforcing this belief, makes things even worse!).
>
> Therefore, let's:
> - do the sampling only once, and remove the comment;
> - leave an ASSERT() around, so that, if context switching
>   logic changes (in current or new arches), we will notice.
>
> [1] Some more (historical) information here:
>     http://old-list-archives.xenproject.org/archives/html/xen-devel/2006-06/msg01262.html
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Tim Deegan <tim@xen.org>
> ---
> This has been submitted already, as a part of another series. Discussion is here:
>  https://lists.xen.org/archives/html/xen-devel/2017-06/msg00102.html
>
> For the super lazy, Jan's latest words in that thread were these:
>  "I've voiced my opinion, but I don't mean to block the patch. After
>   all there's no active issue the change introduces."
>  (https://lists.xen.org/archives/html/xen-devel/2017-06/msg00797.html)
>
> Since then:
> - changed "once and for all" with "only once", as requested by George (and
>   applied his Reviewed-by, as he said I could).


You changed the commit message, but forgot to change the title. :-)
That can be addressed on check-in if need be.

 -George


* Re: [PATCH v2 1/6] xen: in do_softirq() sample smp_processor_id() once and for all.
@ 2017-08-29 14:07 Dario Faggioli
  0 siblings, 0 replies; 11+ messages in thread
From: Dario Faggioli @ 2017-08-29 14:07 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Andrew Cooper, Tim (Xen.org),
	Julien Grall, Jan Beulich, xen-devel



Il 29 Ago 2017 4:03 PM, George Dunlap <George.Dunlap@eu.citrix.com> ha scritto:
On Wed, Aug 16, 2017 at 5:45 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> In fact, right now, we read it at every iteration of the loop.
> The reason it's done like this is how context switch was handled
> on IA64 (see commit ae9bfcdc, "[XEN] Various softirq cleanups" [1]).
>
> However:
> 1) we don't have IA64 any longer, and all the architectures that
>    we do support are ok with sampling once and for all;
> 2) sampling at every iteration (slightly) affects performance;
> 3) sampling at every iteration is misleading, as it makes people
>    believe that it is currently possible that SCHEDULE_SOFTIRQ
>    moves the execution flow on another CPU (and the comment,
>    by reinforcing this belief, makes things even worse!).
>
> Therefore, let's:
> - do the sampling only once, and remove the comment;
> - leave an ASSERT() around, so that, if context switching
>   logic changes (in current or new arches), we will notice.
>
> [1] Some more (historical) information here:
>     http://old-list-archives.xenproject.org/archives/html/xen-devel/2006-06/msg01262.html
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Tim Deegan <tim@xen.org>
> ---
> This has been submitted already, as a part of another series. Discussion is here:
>  https://lists.xen.org/archives/html/xen-devel/2017-06/msg00102.html
>
> For the super lazy, Jan's latest words in that thread were these:
>  "I've voiced my opinion, but I don't mean to block the patch. After
>   all there's no active issue the change introduces."
>  (https://lists.xen.org/archives/html/xen-devel/2017-06/msg00797.html)
>
> Since then:
> - changed "once and for all" with "only once", as requested by George (and
>   applied his Reviewed-by, as he said I could).


You changed the commit message, but forgot to change the title. :-)

Indeed. I focused on the body of the changelog, and didn't even recall/notice that it was present in the subject line as well! :-(

Sorry,
Dario




end of thread, other threads:[~2017-08-29 14:09 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-16 16:45 [PATCH v2 0/6] xen: RCU: x86/ARM: Add support of rcu_idle_{enter, exit} Dario Faggioli
2017-08-16 16:45 ` [PATCH v2 1/6] xen: in do_softirq() sample smp_processor_id() once and for all Dario Faggioli
2017-08-29 14:02   ` George Dunlap
2017-08-16 16:45 ` [PATCH v2 2/6] xen: ARM: suspend the tick (if in use) when going idle Dario Faggioli
2017-08-16 16:45 ` [PATCH v2 3/6] xen: RCU/x86/ARM: discount CPUs that were idle when grace period started Dario Faggioli
2017-08-17 13:03   ` Tim Deegan
2017-08-18 17:06     ` Dario Faggioli
2017-08-16 16:45 ` [PATCH v2 4/6] xen: RCU: don't let a CPU with a callback go idle Dario Faggioli
2017-08-16 16:46 ` [PATCH v2 5/6] xen: RCU: avoid busy waiting until the end of grace period Dario Faggioli
2017-08-16 16:46 ` [PATCH v2 6/6] xen: try to prevent idle timer from firing too often Dario Faggioli
2017-08-29 14:07 [PATCH v2 1/6] xen: in do_softirq() sample smp_processor_id() once and for all Dario Faggioli
