xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH v6 0/4] xen/rcu: let rcu work better with core scheduling
@ 2020-03-13 13:06 Juergen Gross
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier() Juergen Gross
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Juergen Gross @ 2020-03-13 13:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich

Today the RCU handling in Xen affects scheduling in several ways: it
raises scheduler softirqs without any real need, and it requires
tasklets for rcu_barrier(), which interacts badly with core scheduling.

This small series repairs those issues.

Additionally some ASSERT()s are added to verify sane RCU handling. In
order to avoid them triggering right away, the obvious violations are
fixed. This includes making the RCU locking functions type safe.

Changes in V6:
- added memory barrier in patch 1
- drop cpu_map_lock only at the end of rcu_barrier()
- re-add preempt_disable() in patch 3

Changes in V5:
- dropped already committed patches 1 and 4
- fixed race
- rework blocking of rcu processing with held rcu locks

Changes in V4:
- patch 5: use barrier()

Changes in V3:
- type safe locking functions (functions instead of macros)
- per-lock debug additions
- new patches 4 and 6
- fixed races

Changes in V2:
- use get_cpu_maps() in rcu_barrier() handling
- avoid recursion in rcu_barrier() handling
- new patches 3 and 4

Juergen Gross (4):
  xen/rcu: don't use stop_machine_run() for rcu_barrier()
  xen: don't process rcu callbacks when holding a rcu_read_lock()
  xen/rcu: add assertions to debug build
  xen/rcu: add per-lock counter in debug builds

 xen/common/rcupdate.c      | 97 +++++++++++++++++++++++++++++++++-------------
 xen/common/softirq.c       | 14 ++++++-
 xen/include/xen/rcupdate.h | 76 +++++++++++++++++++++++++++++-------
 3 files changed, 145 insertions(+), 42 deletions(-)

-- 
2.16.4



* [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier()
  2020-03-13 13:06 [Xen-devel] [PATCH v6 0/4] xen/rcu: let rcu work better with core scheduling Juergen Gross
@ 2020-03-13 13:06 ` Juergen Gross
  2020-03-16 15:24   ` Igor Druzhinin
  2020-03-17 13:56   ` Jan Beulich
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 2/4] xen: don't process rcu callbacks when holding a rcu_read_lock() Juergen Gross
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 17+ messages in thread
From: Juergen Gross @ 2020-03-13 13:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich

Today rcu_barrier() calls stop_machine_run() to synchronize all
physical cpus in order to ensure all pending rcu calls have finished
when it returns.

As stop_machine_run() uses tasklets, this requires scheduling of idle
vcpus on all cpus, imposing the need, in case core scheduling is
active, to call rcu_barrier() on idle cpus only, as otherwise a
scheduling deadlock would occur.

There is no need at all to do the syncing of the cpus in tasklets, as
rcu activity is started in __do_softirq(), which is called whenever
softirq activity is allowed. So rcu_barrier() can easily be modified
to use softirqs for synchronizing the cpus, no longer requiring any
scheduling activity.

As there already is an rcu softirq, reuse it for the synchronization.

Remove the barrier element from struct rcu_data as it is no longer
used.

Finally, switch rcu_barrier() to return void, as it can now never
fail.

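To illustrate what this change permits: rcu_barrier() no longer
depends on tasklets and idle vcpus, so a caller can use it from any
non-IRQ context with interrupts enabled. A hypothetical teardown path
(the foo_* names are made up for illustration) could look like:

    static void foo_teardown(void)
    {
        foo_unregister();    /* no new readers can reach the data */
        rcu_barrier();       /* all previously queued call_rcu()
                              * callbacks have run when this returns */
        xfree(foo_data);     /* now safe to release */
    }
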
Partially-based-on-patch-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- add recursion detection

V3:
- fix races (Igor Druzhinin)

V5:
- rename done_count to pending_count (Jan Beulich)
- fix race (Jan Beulich)

V6:
- add barrier (Julien Grall)
- add ASSERT() (Julien Grall)
- hold cpu_map lock until end of rcu_barrier() (Julien Grall)
---
 xen/common/rcupdate.c      | 95 +++++++++++++++++++++++++++++++++-------------
 xen/include/xen/rcupdate.h |  2 +-
 2 files changed, 69 insertions(+), 28 deletions(-)

diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 03d84764d2..ed9083d2b2 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -83,7 +83,6 @@ struct rcu_data {
     struct rcu_head **donetail;
     long            blimit;           /* Upper limit on a processed batch */
     int cpu;
-    struct rcu_head barrier;
     long            last_rs_qlen;     /* qlen during the last resched */
 
     /* 3) idle CPUs handling */
@@ -91,6 +90,7 @@ struct rcu_data {
     bool idle_timer_active;
 
     bool            process_callbacks;
+    bool            barrier_active;
 };
 
 /*
@@ -143,51 +143,85 @@ static int qhimark = 10000;
 static int qlowmark = 100;
 static int rsinterval = 1000;
 
-struct rcu_barrier_data {
-    struct rcu_head head;
-    atomic_t *cpu_count;
-};
+/*
+ * rcu_barrier() handling:
+ * cpu_count holds the number of cpus required to finish barrier handling.
+ * pending_count is initialized to nr_cpus + 1.
+ * Cpus are synchronized via softirq mechanism. rcu_barrier() is regarded to
+ * be active if pending_count is not zero. In case rcu_barrier() is called on
+ * multiple cpus it is enough to check for pending_count being not zero on entry
+ * and to call process_pending_softirqs() in a loop until pending_count drops to
+ * zero, before starting the new rcu_barrier() processing.
+ * In order to avoid hangs when rcu_barrier() is called multiple times on the
+ * same cpu in fast sequence and a slave cpu couldn't drop out of the
+ * barrier handling fast enough a second counter pending_count is needed.
+ * The rcu_barrier() invoking cpu will wait until pending_count reaches 1
+ * (meaning that all cpus have finished processing the barrier) and then will
+ * reset pending_count to 0 to enable entering rcu_barrier() again.
+ */
+static atomic_t cpu_count = ATOMIC_INIT(0);
+static atomic_t pending_count = ATOMIC_INIT(0);
 
 static void rcu_barrier_callback(struct rcu_head *head)
 {
-    struct rcu_barrier_data *data = container_of(
-        head, struct rcu_barrier_data, head);
-    atomic_inc(data->cpu_count);
+    smp_wmb();     /* Make all previous writes visible to other cpus. */
+    atomic_dec(&cpu_count);
 }
 
-static int rcu_barrier_action(void *_cpu_count)
+static void rcu_barrier_action(void)
 {
-    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
-
-    ASSERT(!local_irq_is_enabled());
-    local_irq_enable();
+    struct rcu_head head;
 
     /*
      * When callback is executed, all previously-queued RCU work on this CPU
-     * is completed. When all CPUs have executed their callback, data.cpu_count
-     * will have been incremented to include every online CPU.
+     * is completed. When all CPUs have executed their callback, cpu_count
+     * will have been decremented to 0.
      */
-    call_rcu(&data.head, rcu_barrier_callback);
+    call_rcu(&head, rcu_barrier_callback);
 
-    while ( atomic_read(data.cpu_count) != num_online_cpus() )
+    while ( atomic_read(&cpu_count) )
     {
         process_pending_softirqs();
         cpu_relax();
     }
 
-    local_irq_disable();
-
-    return 0;
+    atomic_dec(&pending_count);
 }
 
-/*
- * As rcu_barrier() is using stop_machine_run() it is allowed to be used in
- * idle context only (see comment for stop_machine_run()).
- */
-int rcu_barrier(void)
+void rcu_barrier(void)
 {
-    atomic_t cpu_count = ATOMIC_INIT(0);
-    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
+    unsigned int n_cpus;
+
+    ASSERT(!in_irq() && local_irq_is_enabled());
+
+    for ( ;; )
+    {
+        if ( !atomic_read(&pending_count) && get_cpu_maps() )
+        {
+            n_cpus = num_online_cpus();
+
+            if ( atomic_cmpxchg(&pending_count, 0, n_cpus + 1) == 0 )
+                break;
+
+            put_cpu_maps();
+        }
+
+        process_pending_softirqs();
+        cpu_relax();
+    }
+
+    atomic_set(&cpu_count, n_cpus);
+    cpumask_raise_softirq(&cpu_online_map, RCU_SOFTIRQ);
+
+    while ( atomic_read(&pending_count) != 1 )
+    {
+        process_pending_softirqs();
+        cpu_relax();
+    }
+
+    atomic_set(&pending_count, 0);
+
+    put_cpu_maps();
 }
 
 /* Is batch a before batch b ? */
@@ -426,6 +460,13 @@ static void rcu_process_callbacks(void)
         rdp->process_callbacks = false;
         __rcu_process_callbacks(&rcu_ctrlblk, rdp);
     }
+
+    if ( atomic_read(&cpu_count) && !rdp->barrier_active )
+    {
+        rdp->barrier_active = true;
+        rcu_barrier_action();
+        rdp->barrier_active = false;
+    }
 }
 
 static int __rcu_pending(struct rcu_ctrlblk *rcp, struct rcu_data *rdp)
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index eb9b60df07..31c8b86d13 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -144,7 +144,7 @@ void rcu_check_callbacks(int cpu);
 void call_rcu(struct rcu_head *head, 
               void (*func)(struct rcu_head *head));
 
-int rcu_barrier(void);
+void rcu_barrier(void);
 
 void rcu_idle_enter(unsigned int cpu);
 void rcu_idle_exit(unsigned int cpu);
-- 
2.16.4



* [Xen-devel] [PATCH v6 2/4] xen: don't process rcu callbacks when holding a rcu_read_lock()
  2020-03-13 13:06 [Xen-devel] [PATCH v6 0/4] xen/rcu: let rcu work better with core scheduling Juergen Gross
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier() Juergen Gross
@ 2020-03-13 13:06 ` Juergen Gross
  2020-03-17 14:22   ` Jan Beulich
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 3/4] xen/rcu: add assertions to debug build Juergen Gross
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 4/4] xen/rcu: add per-lock counter in debug builds Juergen Gross
  3 siblings, 1 reply; 17+ messages in thread
From: Juergen Gross @ 2020-03-13 13:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich

Some keyhandlers are calling process_pending_softirqs() while holding
an rcu_read_lock(). This is wrong, as process_pending_softirqs() might
activate rcu callback processing, which should not happen inside an
rcu_read_lock() protected section.

Fix this by modifying process_pending_softirqs() to not allow rcu
callback processing while an rcu_read_lock() is being held.

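To illustrate the pattern being addressed, here is a sketch modeled on
a domain-dumping keyhandler (simplified, not a verbatim copy of any
in-tree handler):

    static void dump_foo(unsigned char key)
    {
        struct domain *d;

        rcu_read_lock(&domlist_read_lock);

        for_each_domain ( d )
        {
            /*
             * Keeps the console responsive on large hosts, but until
             * now it could also process RCU callbacks and thus end a
             * grace period inside the read-side section above.
             */
            process_pending_softirqs();
            printk("%pd\n", d);
        }

        rcu_read_unlock(&domlist_read_lock);
    }
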
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- add RCU_SOFTIRQ to ignore in process_pending_softirqs_norcu()
  (Roger Pau Monné)

V5:
- block rcu processing depending on rcu_read_lock() being held or not
  (Jan Beulich)
---
 xen/common/softirq.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index b83ad96d6c..00d676b62c 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -29,6 +29,7 @@ static void __do_softirq(unsigned long ignore_mask)
 {
     unsigned int i, cpu;
     unsigned long pending;
+    bool rcu_allowed = !(ignore_mask & (1ul << RCU_SOFTIRQ));
 
     for ( ; ; )
     {
@@ -38,7 +39,7 @@ static void __do_softirq(unsigned long ignore_mask)
          */
         cpu = smp_processor_id();
 
-        if ( rcu_pending(cpu) )
+        if ( rcu_allowed && rcu_pending(cpu) )
             rcu_check_callbacks(cpu);
 
         if ( ((pending = (softirq_pending(cpu) & ~ignore_mask)) == 0)
@@ -53,9 +54,16 @@ static void __do_softirq(unsigned long ignore_mask)
 
 void process_pending_softirqs(void)
 {
+    unsigned long ignore_mask = (1ul << SCHEDULE_SOFTIRQ) |
+                                (1ul << SCHED_SLAVE_SOFTIRQ);
+
+    /* Block RCU processing in case of rcu_read_lock() held. */
+    if ( preempt_count() )
+        ignore_mask |= 1ul << RCU_SOFTIRQ;
+
     ASSERT(!in_irq() && local_irq_is_enabled());
     /* Do not enter scheduler as it can preempt the calling context. */
-    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ));
+    __do_softirq(ignore_mask);
 }
 
 void do_softirq(void)
-- 
2.16.4



* [Xen-devel] [PATCH v6 3/4] xen/rcu: add assertions to debug build
  2020-03-13 13:06 [Xen-devel] [PATCH v6 0/4] xen/rcu: let rcu work better with core scheduling Juergen Gross
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier() Juergen Gross
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 2/4] xen: don't process rcu callbacks when holding a rcu_read_lock() Juergen Gross
@ 2020-03-13 13:06 ` Juergen Gross
  2020-03-17 14:36   ` Jan Beulich
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 4/4] xen/rcu: add per-lock counter in debug builds Juergen Gross
  3 siblings, 1 reply; 17+ messages in thread
From: Juergen Gross @ 2020-03-13 13:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich

Xen's RCU implementation relies on no softirq handling taking place
while in an RCU critical section. Add ASSERT()s in debug builds in
order to catch any violations.

For that purpose modify rcu_read_[un]lock() to use a dedicated percpu
counter in addition to preempt_[dis|en]able(), as this enables testing
that condition in __do_softirq() (ASSERT_NOT_IN_ATOMIC() is not usable
there due to __cpu_up() calling process_pending_softirqs() while
holding the cpu hotplug lock).

While at it, switch the rcu_read_[un]lock() implementation to static
inline functions instead of macros.

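A minimal sketch of a violation the new assertions would catch
(foo_lock is a made-up rcu_read_lock_t):

    rcu_read_lock(&foo_lock);    /* this_cpu(rcu_lock_cnt): 0 -> 1 */

    do_softirq();                /* triggers the new assertion in
                                  * __do_softirq(): rcu quiescing is
                                  * not allowed with rcu_lock_cnt != 0 */

    rcu_read_unlock(&foo_lock);  /* this_cpu(rcu_lock_cnt): 1 -> 0 */

Note that process_pending_softirqs() in the same spot would merely
mask RCU_SOFTIRQ (as per the previous patch) instead of asserting.
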
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- add barriers to rcu_[en|dis]able() (Roger Pau Monné)
- add rcu_quiesce_allowed() to ASSERT_NOT_IN_ATOMIC (Roger Pau Monné)
- convert macros to static inline functions
- add sanity check in rcu_read_unlock()

V4:
- use barrier() in rcu_[en|dis]able() (Julien Grall)

V5:
- use rcu counter even if not using a debug build

V6:
- keep preempt_[dis|en]able() calls
---
 xen/common/rcupdate.c      |  2 ++
 xen/common/softirq.c       |  4 +++-
 xen/include/xen/rcupdate.h | 36 +++++++++++++++++++++++++++++++++---
 3 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index ed9083d2b2..5e7bd7196f 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -46,6 +46,8 @@
 #include <xen/cpu.h>
 #include <xen/stop_machine.h>
 
+DEFINE_PER_CPU(unsigned int, rcu_lock_cnt);
+
 /* Global control variables for rcupdate callback mechanism. */
 static struct rcu_ctrlblk {
     long cur;           /* Current batch number.                      */
diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index 00d676b62c..eba65c5fc0 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -31,6 +31,8 @@ static void __do_softirq(unsigned long ignore_mask)
     unsigned long pending;
     bool rcu_allowed = !(ignore_mask & (1ul << RCU_SOFTIRQ));
 
+    ASSERT(!rcu_allowed || rcu_quiesce_allowed());
+
     for ( ; ; )
     {
         /*
@@ -58,7 +60,7 @@ void process_pending_softirqs(void)
                                 (1ul << SCHED_SLAVE_SOFTIRQ);
 
     /* Block RCU processing in case of rcu_read_lock() held. */
-    if ( preempt_count() )
+    if ( !rcu_quiesce_allowed() )
         ignore_mask |= 1ul << RCU_SOFTIRQ;
 
     ASSERT(!in_irq() && local_irq_is_enabled());
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 31c8b86d13..d3c2b7b093 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -32,12 +32,35 @@
 #define __XEN_RCUPDATE_H
 
 #include <xen/cache.h>
+#include <xen/compiler.h>
 #include <xen/spinlock.h>
 #include <xen/cpumask.h>
+#include <xen/percpu.h>
 #include <xen/preempt.h>
 
 #define __rcu
 
+DECLARE_PER_CPU(unsigned int, rcu_lock_cnt);
+
+static inline void rcu_quiesce_disable(void)
+{
+    preempt_disable();
+    this_cpu(rcu_lock_cnt)++;
+    barrier();
+}
+
+static inline void rcu_quiesce_enable(void)
+{
+    barrier();
+    this_cpu(rcu_lock_cnt)--;
+    preempt_enable();
+}
+
+static inline bool rcu_quiesce_allowed(void)
+{
+    return !this_cpu(rcu_lock_cnt);
+}
+
 /**
  * struct rcu_head - callback structure for use with RCU
  * @next: next update requests in a list
@@ -91,16 +114,23 @@ typedef struct _rcu_read_lock rcu_read_lock_t;
  * will be deferred until the outermost RCU read-side critical section
  * completes.
  *
- * It is illegal to block while in an RCU read-side critical section.
+ * It is illegal to process softirqs while in an RCU read-side critical section.
  */
-#define rcu_read_lock(x)       ({ ((void)(x)); preempt_disable(); })
+static inline void rcu_read_lock(rcu_read_lock_t *lock)
+{
+    rcu_quiesce_disable();
+}
 
 /**
  * rcu_read_unlock - marks the end of an RCU read-side critical section.
  *
  * See rcu_read_lock() for more information.
  */
-#define rcu_read_unlock(x)     ({ ((void)(x)); preempt_enable(); })
+static inline void rcu_read_unlock(rcu_read_lock_t *lock)
+{
+    ASSERT(!rcu_quiesce_allowed());
+    rcu_quiesce_enable();
+}
 
 /*
  * So where is rcu_write_lock()?  It does not exist, as there is no
-- 
2.16.4



* [Xen-devel] [PATCH v6 4/4] xen/rcu: add per-lock counter in debug builds
  2020-03-13 13:06 [Xen-devel] [PATCH v6 0/4] xen/rcu: let rcu work better with core scheduling Juergen Gross
                   ` (2 preceding siblings ...)
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 3/4] xen/rcu: add assertions to debug build Juergen Gross
@ 2020-03-13 13:06 ` Juergen Gross
  2020-03-17 15:39   ` Jan Beulich
  3 siblings, 1 reply; 17+ messages in thread
From: Juergen Gross @ 2020-03-13 13:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich

Add a lock-specific counter to rcu read locks in debug builds. This
allows testing for matching lock/unlock calls.

This will help to avoid cases like the one fixed by commit
98ed1f43cc2c89 where different rcu read locks were referenced in the
lock and unlock calls.

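A hypothetical example of the bug class this catches (both locks made
up for illustration):

    DEFINE_RCU_READ_LOCK(foo_lock);
    DEFINE_RCU_READ_LOCK(bar_lock);

    rcu_read_lock(&foo_lock);    /* foo_lock.cnt: 0 -> 1 */
    /* ... */
    rcu_read_unlock(&bar_lock);  /* debug build: the ASSERT() on
                                  * bar_lock's counter fires, as
                                  * bar_lock was never taken */
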
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V5:
- updated commit message (Jan Beulich)
---
 xen/include/xen/rcupdate.h | 46 +++++++++++++++++++++++++++++++++-------------
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index d3c2b7b093..e0c3b16e7d 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -37,21 +37,50 @@
 #include <xen/cpumask.h>
 #include <xen/percpu.h>
 #include <xen/preempt.h>
+#include <asm/atomic.h>
 
 #define __rcu
 
+#ifndef NDEBUG
+/* Lock type for passing to rcu_read_{lock,unlock}. */
+struct _rcu_read_lock {
+    atomic_t cnt;
+};
+typedef struct _rcu_read_lock rcu_read_lock_t;
+#define DEFINE_RCU_READ_LOCK(x) rcu_read_lock_t x = { .cnt = ATOMIC_INIT(0) }
+#define RCU_READ_LOCK_INIT(x)   atomic_set(&(x)->cnt, 0)
+
+#else
+/*
+ * Dummy lock type for passing to rcu_read_{lock,unlock}. Currently exists
+ * only to document the reason for rcu_read_lock() critical sections.
+ */
+struct _rcu_read_lock {};
+typedef struct _rcu_read_lock rcu_read_lock_t;
+#define DEFINE_RCU_READ_LOCK(x) rcu_read_lock_t x
+#define RCU_READ_LOCK_INIT(x)
+
+#endif
+
 DECLARE_PER_CPU(unsigned int, rcu_lock_cnt);
 
-static inline void rcu_quiesce_disable(void)
+static inline void rcu_quiesce_disable(rcu_read_lock_t *lock)
 {
     preempt_disable();
     this_cpu(rcu_lock_cnt)++;
+#ifndef NDEBUG
+    atomic_inc(&lock->cnt);
+#endif
     barrier();
 }
 
-static inline void rcu_quiesce_enable(void)
+static inline void rcu_quiesce_enable(rcu_read_lock_t *lock)
 {
     barrier();
+#ifndef NDEBUG
+    ASSERT(atomic_read(&lock->cnt));
+    atomic_dec(&lock->cnt);
+#endif
     this_cpu(rcu_lock_cnt)--;
     preempt_enable();
 }
@@ -81,15 +110,6 @@ struct rcu_head {
 int rcu_pending(int cpu);
 int rcu_needs_cpu(int cpu);
 
-/*
- * Dummy lock type for passing to rcu_read_{lock,unlock}. Currently exists
- * only to document the reason for rcu_read_lock() critical sections.
- */
-struct _rcu_read_lock {};
-typedef struct _rcu_read_lock rcu_read_lock_t;
-#define DEFINE_RCU_READ_LOCK(x) rcu_read_lock_t x
-#define RCU_READ_LOCK_INIT(x)
-
 /**
  * rcu_read_lock - mark the beginning of an RCU read-side critical section.
  *
@@ -118,7 +138,7 @@ typedef struct _rcu_read_lock rcu_read_lock_t;
  */
 static inline void rcu_read_lock(rcu_read_lock_t *lock)
 {
-    rcu_quiesce_disable();
+    rcu_quiesce_disable(lock);
 }
 
 /**
@@ -129,7 +149,7 @@ static inline void rcu_read_lock(rcu_read_lock_t *lock)
 static inline void rcu_read_unlock(rcu_read_lock_t *lock)
 {
     ASSERT(!rcu_quiesce_allowed());
-    rcu_quiesce_enable();
+    rcu_quiesce_enable(lock);
 }
 
 /*
-- 
2.16.4



* Re: [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier()
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier() Juergen Gross
@ 2020-03-16 15:24   ` Igor Druzhinin
  2020-03-16 16:01     ` Jürgen Groß
  2020-03-17 13:56   ` Jan Beulich
  1 sibling, 1 reply; 17+ messages in thread
From: Igor Druzhinin @ 2020-03-16 15:24 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	George Dunlap, Jan Beulich, Ian Jackson

On 13/03/2020 13:06, Juergen Gross wrote:
> Today rcu_barrier() is calling stop_machine_run() to synchronize all
> physical cpus in order to ensure all pending rcu calls have finished
> when returning.
> 
> As stop_machine_run() is using tasklets this requires scheduling of
> idle vcpus on all cpus imposing the need to call rcu_barrier() on idle
> cpus only in case of core scheduling being active, as otherwise a
> scheduling deadlock would occur.
> 
> There is no need at all to do the syncing of the cpus in tasklets, as
> rcu activity is started in __do_softirq() called whenever softirq
> activity is allowed. So rcu_barrier() can easily be modified to use
> softirq for synchronization of the cpus no longer requiring any
> scheduling activity.
> 
> As there already is a rcu softirq reuse that for the synchronization.
> 
> Remove the barrier element from struct rcu_data as it isn't used.
> 
> Finally switch rcu_barrier() to return void as it now can never fail.
> 
> Partially-based-on-patch-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - add recursion detection
> 
> V3:
> - fix races (Igor Druzhinin)
> 
> V5:
> - rename done_count to pending_count (Jan Beulich)
> - fix race (Jan Beulich)
> 
> V6:
> - add barrier (Julien Grall)
> - add ASSERT() (Julien Grall)
> - hold cpu_map lock until end of rcu_barrier() (Julien Grall)
> ---
>   xen/common/rcupdate.c      | 95 +++++++++++++++++++++++++++++++++-------------
>   xen/include/xen/rcupdate.h |  2 +-
>   2 files changed, 69 insertions(+), 28 deletions(-)
> 
> diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
> index 03d84764d2..ed9083d2b2 100644
> --- a/xen/common/rcupdate.c
> +++ b/xen/common/rcupdate.c
> @@ -83,7 +83,6 @@ struct rcu_data {
>       struct rcu_head **donetail;
>       long            blimit;           /* Upper limit on a processed batch */
>       int cpu;
> -    struct rcu_head barrier;
>       long            last_rs_qlen;     /* qlen during the last resched */
>   
>       /* 3) idle CPUs handling */
> @@ -91,6 +90,7 @@ struct rcu_data {
>       bool idle_timer_active;
>   
>       bool            process_callbacks;
> +    bool            barrier_active;
>   };
>   
>   /*
> @@ -143,51 +143,85 @@ static int qhimark = 10000;
>   static int qlowmark = 100;
>   static int rsinterval = 1000;
>   
> -struct rcu_barrier_data {
> -    struct rcu_head head;
> -    atomic_t *cpu_count;
> -};
> +/*
> + * rcu_barrier() handling:
> + * cpu_count holds the number of cpus required to finish barrier handling.
> + * pending_count is initialized to nr_cpus + 1.
> + * Cpus are synchronized via softirq mechanism. rcu_barrier() is regarded to
> + * be active if pending_count is not zero. In case rcu_barrier() is called on
> + * multiple cpus it is enough to check for pending_count being not zero on entry
> + * and to call process_pending_softirqs() in a loop until pending_count drops to
> + * zero, before starting the new rcu_barrier() processing.
> + * In order to avoid hangs when rcu_barrier() is called multiple times on the
> + * same cpu in fast sequence and a slave cpu couldn't drop out of the
> + * barrier handling fast enough a second counter pending_count is needed.
> + * The rcu_barrier() invoking cpu will wait until pending_count reaches 1
> + * (meaning that all cpus have finished processing the barrier) and then will
> + * reset pending_count to 0 to enable entering rcu_barrier() again.
> + */
> +static atomic_t cpu_count = ATOMIC_INIT(0);
> +static atomic_t pending_count = ATOMIC_INIT(0);
>   
>   static void rcu_barrier_callback(struct rcu_head *head)
>   {
> -    struct rcu_barrier_data *data = container_of(
> -        head, struct rcu_barrier_data, head);
> -    atomic_inc(data->cpu_count);
> +    smp_wmb();     /* Make all previous writes visible to other cpus. */
> +    atomic_dec(&cpu_count);
>   }
>   
> -static int rcu_barrier_action(void *_cpu_count)
> +static void rcu_barrier_action(void)
>   {
> -    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
> -
> -    ASSERT(!local_irq_is_enabled());
> -    local_irq_enable();
> +    struct rcu_head head;
>   
>       /*
>        * When callback is executed, all previously-queued RCU work on this CPU
> -     * is completed. When all CPUs have executed their callback, data.cpu_count
> -     * will have been incremented to include every online CPU.
> +     * is completed. When all CPUs have executed their callback, cpu_count
> +     * will have been decremented to 0.
>        */
> -    call_rcu(&data.head, rcu_barrier_callback);
> +    call_rcu(&head, rcu_barrier_callback);
>   
> -    while ( atomic_read(data.cpu_count) != num_online_cpus() )
> +    while ( atomic_read(&cpu_count) )
>       {
>           process_pending_softirqs();
>           cpu_relax();
>       }
>   
> -    local_irq_disable();
> -
> -    return 0;
> +    atomic_dec(&pending_count);
>   }
>   
> -/*
> - * As rcu_barrier() is using stop_machine_run() it is allowed to be used in
> - * idle context only (see comment for stop_machine_run()).
> - */
> -int rcu_barrier(void)
> +void rcu_barrier(void)
>   {
> -    atomic_t cpu_count = ATOMIC_INIT(0);
> -    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
> +    unsigned int n_cpus;
> +
> +    ASSERT(!in_irq() && local_irq_is_enabled());
> +
> +    for ( ;; )
> +    {
> +        if ( !atomic_read(&pending_count) && get_cpu_maps() )
> +        {

If the whole action is happening while cpu_maps are taken, why do you
need to check pending_count first? I think the logic of this loop
could be simplified if this is taken into account.

Igor


* Re: [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier()
  2020-03-16 15:24   ` Igor Druzhinin
@ 2020-03-16 16:01     ` Jürgen Groß
  2020-03-16 16:21       ` Igor Druzhinin
  0 siblings, 1 reply; 17+ messages in thread
From: Jürgen Groß @ 2020-03-16 16:01 UTC (permalink / raw)
  To: Igor Druzhinin, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	George Dunlap, Jan Beulich, Ian Jackson

On 16.03.20 16:24, Igor Druzhinin wrote:
> On 13/03/2020 13:06, Juergen Gross wrote:
>> Today rcu_barrier() is calling stop_machine_run() to synchronize all
>> physical cpus in order to ensure all pending rcu calls have finished
>> when returning.
>>
>> As stop_machine_run() is using tasklets this requires scheduling of
>> idle vcpus on all cpus imposing the need to call rcu_barrier() on idle
>> cpus only in case of core scheduling being active, as otherwise a
>> scheduling deadlock would occur.
>>
>> There is no need at all to do the syncing of the cpus in tasklets, as
>> rcu activity is started in __do_softirq() called whenever softirq
>> activity is allowed. So rcu_barrier() can easily be modified to use
>> softirq for synchronization of the cpus no longer requiring any
>> scheduling activity.
>>
>> As there already is a rcu softirq reuse that for the synchronization.
>>
>> Remove the barrier element from struct rcu_data as it isn't used.
>>
>> Finally switch rcu_barrier() to return void as it now can never fail.
>>
>> Partially-based-on-patch-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - add recursion detection
>>
>> V3:
>> - fix races (Igor Druzhinin)
>>
>> V5:
>> - rename done_count to pending_count (Jan Beulich)
>> - fix race (Jan Beulich)
>>
>> V6:
>> - add barrier (Julien Grall)
>> - add ASSERT() (Julien Grall)
>> - hold cpu_map lock until end of rcu_barrier() (Julien Grall)
>> ---
>>    xen/common/rcupdate.c      | 95 +++++++++++++++++++++++++++++++++-------------
>>    xen/include/xen/rcupdate.h |  2 +-
>>    2 files changed, 69 insertions(+), 28 deletions(-)
>>
>> diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
>> index 03d84764d2..ed9083d2b2 100644
>> --- a/xen/common/rcupdate.c
>> +++ b/xen/common/rcupdate.c
>> @@ -83,7 +83,6 @@ struct rcu_data {
>>        struct rcu_head **donetail;
>>        long            blimit;           /* Upper limit on a processed batch */
>>        int cpu;
>> -    struct rcu_head barrier;
>>        long            last_rs_qlen;     /* qlen during the last resched */
>>    
>>        /* 3) idle CPUs handling */
>> @@ -91,6 +90,7 @@ struct rcu_data {
>>        bool idle_timer_active;
>>    
>>        bool            process_callbacks;
>> +    bool            barrier_active;
>>    };
>>    
>>    /*
>> @@ -143,51 +143,85 @@ static int qhimark = 10000;
>>    static int qlowmark = 100;
>>    static int rsinterval = 1000;
>>    
>> -struct rcu_barrier_data {
>> -    struct rcu_head head;
>> -    atomic_t *cpu_count;
>> -};
>> +/*
>> + * rcu_barrier() handling:
>> + * cpu_count holds the number of cpus required to finish barrier handling.
>> + * pending_count is initialized to nr_cpus + 1.
>> + * Cpus are synchronized via softirq mechanism. rcu_barrier() is regarded to
>> + * be active if pending_count is not zero. In case rcu_barrier() is called on
>> + * multiple cpus it is enough to check for pending_count being not zero on entry
>> + * and to call process_pending_softirqs() in a loop until pending_count drops to
>> + * zero, before starting the new rcu_barrier() processing.
>> + * In order to avoid hangs when rcu_barrier() is called multiple times on the
>> + * same cpu in fast sequence and a slave cpu couldn't drop out of the
>> + * barrier handling fast enough a second counter pending_count is needed.
>> + * The rcu_barrier() invoking cpu will wait until pending_count reaches 1
>> + * (meaning that all cpus have finished processing the barrier) and then will
>> + * reset pending_count to 0 to enable entering rcu_barrier() again.
>> + */
>> +static atomic_t cpu_count = ATOMIC_INIT(0);
>> +static atomic_t pending_count = ATOMIC_INIT(0);
>>    
>>    static void rcu_barrier_callback(struct rcu_head *head)
>>    {
>> -    struct rcu_barrier_data *data = container_of(
>> -        head, struct rcu_barrier_data, head);
>> -    atomic_inc(data->cpu_count);
>> +    smp_wmb();     /* Make all previous writes visible to other cpus. */
>> +    atomic_dec(&cpu_count);
>>    }
>>    
>> -static int rcu_barrier_action(void *_cpu_count)
>> +static void rcu_barrier_action(void)
>>    {
>> -    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
>> -
>> -    ASSERT(!local_irq_is_enabled());
>> -    local_irq_enable();
>> +    struct rcu_head head;
>>    
>>        /*
>>         * When callback is executed, all previously-queued RCU work on this CPU
>> -     * is completed. When all CPUs have executed their callback, data.cpu_count
>> -     * will have been incremented to include every online CPU.
>> +     * is completed. When all CPUs have executed their callback, cpu_count
>> +     * will have been decremented to 0.
>>         */
>> -    call_rcu(&data.head, rcu_barrier_callback);
>> +    call_rcu(&head, rcu_barrier_callback);
>>    
>> -    while ( atomic_read(data.cpu_count) != num_online_cpus() )
>> +    while ( atomic_read(&cpu_count) )
>>        {
>>            process_pending_softirqs();
>>            cpu_relax();
>>        }
>>    
>> -    local_irq_disable();
>> -
>> -    return 0;
>> +    atomic_dec(&pending_count);
>>    }
>>    
>> -/*
>> - * As rcu_barrier() is using stop_machine_run() it is allowed to be used in
>> - * idle context only (see comment for stop_machine_run()).
>> - */
>> -int rcu_barrier(void)
>> +void rcu_barrier(void)
>>    {
>> -    atomic_t cpu_count = ATOMIC_INIT(0);
>> -    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
>> +    unsigned int n_cpus;
>> +
>> +    ASSERT(!in_irq() && local_irq_is_enabled());
>> +
>> +    for ( ;; )
>> +    {
>> +        if ( !atomic_read(&pending_count) && get_cpu_maps() )
>> +        {
> 
> If the whole action is happening while cpu_maps are taken, why do you
> need to check pending_count first? I think the logic of this loop
> could be simplified if this is taken into account.

get_cpu_maps() can be successful on multiple cpus (it's a read_lock()).
Testing pending_count avoids hammering on the cache lines.


Juergen


* Re: [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier()
  2020-03-16 16:01     ` Jürgen Groß
@ 2020-03-16 16:21       ` Igor Druzhinin
  0 siblings, 0 replies; 17+ messages in thread
From: Igor Druzhinin @ 2020-03-16 16:21 UTC (permalink / raw)
  To: Jürgen Groß, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	George Dunlap, Jan Beulich, Ian Jackson

On 16/03/2020 16:01, Jürgen Groß wrote:
> On 16.03.20 16:24, Igor Druzhinin wrote:
>> On 13/03/2020 13:06, Juergen Gross wrote:
>>> Today rcu_barrier() is calling stop_machine_run() to synchronize all
>>> physical cpus in order to ensure all pending rcu calls have finished
>>> when returning.
>>>
>>> As stop_machine_run() is using tasklets this requires scheduling of
>>> idle vcpus on all cpus imposing the need to call rcu_barrier() on idle
>>> cpus only in case of core scheduling being active, as otherwise a
>>> scheduling deadlock would occur.
>>>
>>> There is no need at all to do the syncing of the cpus in tasklets, as
>>> rcu activity is started in __do_softirq() called whenever softirq
>>> activity is allowed. So rcu_barrier() can easily be modified to use
>>> softirq for synchronization of the cpus no longer requiring any
>>> scheduling activity.
>>>
>>> As there already is a rcu softirq reuse that for the synchronization.
>>>
>>> Remove the barrier element from struct rcu_data as it isn't used.
>>>
>>> Finally switch rcu_barrier() to return void as it now can never fail.
>>>
>>> Partially-based-on-patch-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V2:
>>> - add recursion detection
>>>
>>> V3:
>>> - fix races (Igor Druzhinin)
>>>
>>> V5:
>>> - rename done_count to pending_count (Jan Beulich)
>>> - fix race (Jan Beulich)
>>>
>>> V6:
>>> - add barrier (Julien Grall)
>>> - add ASSERT() (Julien Grall)
>>> - hold cpu_map lock until end of rcu_barrier() (Julien Grall)
>>> ---
>>>    xen/common/rcupdate.c      | 95 
>>> +++++++++++++++++++++++++++++++++-------------
>>>    xen/include/xen/rcupdate.h |  2 +-
>>>    2 files changed, 69 insertions(+), 28 deletions(-)
>>>
>>> diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
>>> index 03d84764d2..ed9083d2b2 100644
>>> --- a/xen/common/rcupdate.c
>>> +++ b/xen/common/rcupdate.c
>>> @@ -83,7 +83,6 @@ struct rcu_data {
>>>        struct rcu_head **donetail;
>>>        long            blimit;           /* Upper limit on a 
>>> processed batch */
>>>        int cpu;
>>> -    struct rcu_head barrier;
>>>        long            last_rs_qlen;     /* qlen during the last 
>>> resched */
>>>        /* 3) idle CPUs handling */
>>> @@ -91,6 +90,7 @@ struct rcu_data {
>>>        bool idle_timer_active;
>>>        bool            process_callbacks;
>>> +    bool            barrier_active;
>>>    };
>>>    /*
>>> @@ -143,51 +143,85 @@ static int qhimark = 10000;
>>>    static int qlowmark = 100;
>>>    static int rsinterval = 1000;
>>> -struct rcu_barrier_data {
>>> -    struct rcu_head head;
>>> -    atomic_t *cpu_count;
>>> -};
>>> +/*
>>> + * rcu_barrier() handling:
>>> + * cpu_count holds the number of cpus required to finish barrier 
>>> handling.
>>> + * pending_count is initialized to nr_cpus + 1.
>>> + * Cpus are synchronized via softirq mechanism. rcu_barrier() is 
>>> regarded to
>>> + * be active if pending_count is not zero. In case rcu_barrier() is 
>>> called on
>>> + * multiple cpus it is enough to check for pending_count being not 
>>> zero on entry
>>> + * and to call process_pending_softirqs() in a loop until 
>>> pending_count drops to
>>> + * zero, before starting the new rcu_barrier() processing.
>>> + * In order to avoid hangs when rcu_barrier() is called multiple 
>>> times on the
>>> + * same cpu in fast sequence and a slave cpu couldn't drop out of the
>>> + * barrier handling fast enough a second counter pending_count is 
>>> needed.
>>> + * The rcu_barrier() invoking cpu will wait until pending_count 
>>> reaches 1
>>> + * (meaning that all cpus have finished processing the barrier) and 
>>> then will
>>> + * reset pending_count to 0 to enable entering rcu_barrier() again.
>>> + */
>>> +static atomic_t cpu_count = ATOMIC_INIT(0);
>>> +static atomic_t pending_count = ATOMIC_INIT(0);
>>>    static void rcu_barrier_callback(struct rcu_head *head)
>>>    {
>>> -    struct rcu_barrier_data *data = container_of(
>>> -        head, struct rcu_barrier_data, head);
>>> -    atomic_inc(data->cpu_count);
>>> +    smp_wmb();     /* Make all previous writes visible to other 
>>> cpus. */
>>> +    atomic_dec(&cpu_count);
>>>    }
>>> -static int rcu_barrier_action(void *_cpu_count)
>>> +static void rcu_barrier_action(void)
>>>    {
>>> -    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
>>> -
>>> -    ASSERT(!local_irq_is_enabled());
>>> -    local_irq_enable();
>>> +    struct rcu_head head;
>>>        /*
>>>         * When callback is executed, all previously-queued RCU work 
>>> on this CPU
>>> -     * is completed. When all CPUs have executed their callback, 
>>> data.cpu_count
>>> -     * will have been incremented to include every online CPU.
>>> +     * is completed. When all CPUs have executed their callback, 
>>> cpu_count
>>> +     * will have been decremented to 0.
>>>         */
>>> -    call_rcu(&data.head, rcu_barrier_callback);
>>> +    call_rcu(&head, rcu_barrier_callback);
>>> -    while ( atomic_read(data.cpu_count) != num_online_cpus() )
>>> +    while ( atomic_read(&cpu_count) )
>>>        {
>>>            process_pending_softirqs();
>>>            cpu_relax();
>>>        }
>>> -    local_irq_disable();
>>> -
>>> -    return 0;
>>> +    atomic_dec(&pending_count);
>>>    }
>>> -/*
>>> - * As rcu_barrier() is using stop_machine_run() it is allowed to be 
>>> used in
>>> - * idle context only (see comment for stop_machine_run()).
>>> - */
>>> -int rcu_barrier(void)
>>> +void rcu_barrier(void)
>>>    {
>>> -    atomic_t cpu_count = ATOMIC_INIT(0);
>>> -    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
>>> +    unsigned int n_cpus;
>>> +
>>> +    ASSERT(!in_irq() && local_irq_is_enabled());
>>> +
>>> +    for ( ;; )
>>> +    {
>>> +        if ( !atomic_read(&pending_count) && get_cpu_maps() )
>>> +        {
>>
>> If the whole action is happening while cpu_maps are taken, why do you
>> need to check pending_count first? I think the logic of this loop
>> could be simplified if this is taken into account.
> 
> get_cpu_maps() can be successful on multiple cpus (it's a read_lock()).
> Testing pending_count avoids hammering on the cache lines.

I see - the logic was changed recently. I'm currently testing this 
version of the patch.

Igor


* Re: [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier()
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier() Juergen Gross
  2020-03-16 15:24   ` Igor Druzhinin
@ 2020-03-17 13:56   ` Jan Beulich
  2020-03-19 12:06     ` Jürgen Groß
  1 sibling, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2020-03-17 13:56 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel

On 13.03.2020 14:06, Juergen Gross wrote:
> @@ -143,51 +143,85 @@ static int qhimark = 10000;
>  static int qlowmark = 100;
>  static int rsinterval = 1000;
>  
> -struct rcu_barrier_data {
> -    struct rcu_head head;
> -    atomic_t *cpu_count;
> -};
> +/*
> + * rcu_barrier() handling:
> + * cpu_count holds the number of cpus required to finish barrier handling.
> + * pending_count is initialized to nr_cpus + 1.
> + * Cpus are synchronized via softirq mechanism. rcu_barrier() is regarded to
> + * be active if pending_count is not zero. In case rcu_barrier() is called on
> + * multiple cpus it is enough to check for pending_count being not zero on entry
> + * and to call process_pending_softirqs() in a loop until pending_count drops to
> + * zero, before starting the new rcu_barrier() processing.

Everything up to here reads fine, but ...

> + * In order to avoid hangs when rcu_barrier() is called multiple times on the
> + * same cpu in fast sequence and a slave cpu couldn't drop out of the
> + * barrier handling fast enough a second counter pending_count is needed.
> + * The rcu_barrier() invoking cpu will wait until pending_count reaches 1
> + * (meaning that all cpus have finished processing the barrier) and then will
> + * reset pending_count to 0 to enable entering rcu_barrier() again.

... this starts as if pending_count wasn't mentioned before at all,
which might end up being confusing (e.g. suspecting the text having
gone out of sync with the code, as has happened to me).

> + */
> +static atomic_t cpu_count = ATOMIC_INIT(0);
> +static atomic_t pending_count = ATOMIC_INIT(0);
>  
>  static void rcu_barrier_callback(struct rcu_head *head)
>  {
> -    struct rcu_barrier_data *data = container_of(
> -        head, struct rcu_barrier_data, head);
> -    atomic_inc(data->cpu_count);
> +    smp_wmb();     /* Make all previous writes visible to other cpus. */
> +    atomic_dec(&cpu_count);

In Linux terms, wouldn't this be smp_mb__before_atomic()? If so,
perhaps better if we also introduce this and its "after" sibling.

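For reference, such helpers might look as follows; this is a sketch of
a possible follow-up, not the actual patch. On x86 atomic
read-modify-write operations are fully ordered, so a compiler barrier
suffices, while other architectures may need a real memory barrier:

    #define smp_mb__before_atomic()    barrier()    /* x86 flavour */
    #define smp_mb__after_atomic()     barrier()    /* x86 flavour */

    static void rcu_barrier_callback(struct rcu_head *head)
    {
        smp_mb__before_atomic();    /* order prior writes vs. the dec */
        atomic_dec(&cpu_count);
    }
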
>  }
>  
> -static int rcu_barrier_action(void *_cpu_count)
> +static void rcu_barrier_action(void)
>  {
> -    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
> -
> -    ASSERT(!local_irq_is_enabled());
> -    local_irq_enable();
> +    struct rcu_head head;
>  
>      /*
>       * When callback is executed, all previously-queued RCU work on this CPU
> -     * is completed. When all CPUs have executed their callback, data.cpu_count
> -     * will have been incremented to include every online CPU.
> +     * is completed. When all CPUs have executed their callback, cpu_count
> +     * will have been decremented to 0.
>       */
> -    call_rcu(&data.head, rcu_barrier_callback);
> +    call_rcu(&head, rcu_barrier_callback);
>  
> -    while ( atomic_read(data.cpu_count) != num_online_cpus() )
> +    while ( atomic_read(&cpu_count) )
>      {
>          process_pending_softirqs();
>          cpu_relax();
>      }
>  
> -    local_irq_disable();
> -
> -    return 0;
> +    atomic_dec(&pending_count);

Isn't there a barrier needed between the atomic_read() and this
atomic_dec()?

> +void rcu_barrier(void)
>  {
> -    atomic_t cpu_count = ATOMIC_INIT(0);
> -    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
> +    unsigned int n_cpus;
> +
> +    ASSERT(!in_irq() && local_irq_is_enabled());
> +
> +    for ( ;; )

Nit: Canonically there ought to also be a blank between the two
semicolons.

> +    {
> +        if ( !atomic_read(&pending_count) && get_cpu_maps() )
> +        {
> +            n_cpus = num_online_cpus();
> +
> +            if ( atomic_cmpxchg(&pending_count, 0, n_cpus + 1) == 0 )
> +                break;
> +
> +            put_cpu_maps();
> +        }
> +
> +        process_pending_softirqs();
> +        cpu_relax();

Is this really needed after having invoked
process_pending_softirqs()?

> +    }
> +
> +    atomic_set(&cpu_count, n_cpus);

Isn't there a barrier needed ahead of this, to order it wrt the
cmpxchg?

> +    cpumask_raise_softirq(&cpu_online_map, RCU_SOFTIRQ);

Isn't there another barrier needed ahead of this, to order it wrt
the set?

Jan


* Re: [Xen-devel] [PATCH v6 2/4] xen: don't process rcu callbacks when holding a rcu_read_lock()
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 2/4] xen: don't process rcu callbacks when holding a rcu_read_lock() Juergen Gross
@ 2020-03-17 14:22   ` Jan Beulich
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2020-03-17 14:22 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel

On 13.03.2020 14:06, Juergen Gross wrote:
> Some keyhandlers are calling process_pending_softirqs() while holding
> a rcu_read_lock(). This is wrong, as process_pending_softirqs() might
> activate rcu calls which should not happen inside a rcu_read_lock().
> 
> For that purpose modify process_pending_softirqs() to not allow rcu
> callback processing when a rcu_read_lock() is being held.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


* Re: [Xen-devel] [PATCH v6 3/4] xen/rcu: add assertions to debug build
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 3/4] xen/rcu: add assertions to debug build Juergen Gross
@ 2020-03-17 14:36   ` Jan Beulich
  2020-03-18  6:26     ` Jürgen Groß
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2020-03-17 14:36 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel

On 13.03.2020 14:06, Juergen Gross wrote:
> Xen's RCU implementation relies on no softirq handling taking place
> while being in a RCU critical section. Add ASSERT()s in debug builds
> in order to catch any violations.
> 
> For that purpose modify rcu_read_[un]lock() to use a dedicated percpu
> counter additional to preempt_[en|dis]able() as this enables to test
> that condition in __do_softirq() (ASSERT_NOT_IN_ATOMIC() is not
> usable there due to __cpu_up() calling process_pending_softirqs()
> while holding the cpu hotplug lock).
> 
> While at it switch the rcu_read_[un]lock() implementation to static
> inline functions instead of macros.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one remark:

> @@ -91,16 +114,23 @@ typedef struct _rcu_read_lock rcu_read_lock_t;
>   * will be deferred until the outermost RCU read-side critical section
>   * completes.
>   *
> - * It is illegal to block while in an RCU read-side critical section.
> + * It is illegal to process softirqs while in an RCU read-side critical section.

The latest with the re-added preempt_disable(), wouldn't this better
say "... to process softirqs or block ..."?

Jan


* Re: [Xen-devel] [PATCH v6 4/4] xen/rcu: add per-lock counter in debug builds
  2020-03-13 13:06 ` [Xen-devel] [PATCH v6 4/4] xen/rcu: add per-lock counter in debug builds Juergen Gross
@ 2020-03-17 15:39   ` Jan Beulich
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2020-03-17 15:39 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel

On 13.03.2020 14:06, Juergen Gross wrote:
> Add a lock specific counter to rcu read locks in debug builds. This
> allows to test for matching lock/unlock calls.
> 
> This will help to avoid cases like the one fixed by commit
> 98ed1f43cc2c89 where different rcu read locks were referenced in the
> lock and unlock calls.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit to be honest I'm not fully convinced we need to go this far.

Jan


* Re: [Xen-devel] [PATCH v6 3/4] xen/rcu: add assertions to debug build
  2020-03-17 14:36   ` Jan Beulich
@ 2020-03-18  6:26     ` Jürgen Groß
  2020-03-18  7:37       ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Jürgen Groß @ 2020-03-18  6:26 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel

On 17.03.20 15:36, Jan Beulich wrote:
> On 13.03.2020 14:06, Juergen Gross wrote:
>> Xen's RCU implementation relies on no softirq handling taking place
>> while being in a RCU critical section. Add ASSERT()s in debug builds
>> in order to catch any violations.
>>
>> For that purpose modify rcu_read_[un]lock() to use a dedicated percpu
>> counter additional to preempt_[en|dis]able() as this enables to test
>> that condition in __do_softirq() (ASSERT_NOT_IN_ATOMIC() is not
>> usable there due to __cpu_up() calling process_pending_softirqs()
>> while holding the cpu hotplug lock).
>>
>> While at it switch the rcu_read_[un]lock() implementation to static
>> inline functions instead of macros.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one remark:
> 
>> @@ -91,16 +114,23 @@ typedef struct _rcu_read_lock rcu_read_lock_t;
>>    * will be deferred until the outermost RCU read-side critical section
>>    * completes.
>>    *
>> - * It is illegal to block while in an RCU read-side critical section.
>> + * It is illegal to process softirqs while in an RCU read-side critical section.
> 
> The latest with the re-added preempt_disable(), wouldn't this better
> say "... to process softirqs or block ..."?

I can add this, but OTOH blocking without processing softirqs is not
possible, as there is no other (legal) way to enter the scheduler.


Juergen


* Re: [Xen-devel] [PATCH v6 3/4] xen/rcu: add assertions to debug build
  2020-03-18  6:26     ` Jürgen Groß
@ 2020-03-18  7:37       ` Jan Beulich
  2020-03-19 12:07         ` Jürgen Groß
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2020-03-18  7:37 UTC (permalink / raw)
  To: Jürgen Groß
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel

On 18.03.2020 07:26, Jürgen Groß wrote:
> On 17.03.20 15:36, Jan Beulich wrote:
>> On 13.03.2020 14:06, Juergen Gross wrote:
>>> Xen's RCU implementation relies on no softirq handling taking place
>>> while being in a RCU critical section. Add ASSERT()s in debug builds
>>> in order to catch any violations.
>>>
>>> For that purpose modify rcu_read_[un]lock() to use a dedicated percpu
>>> counter additional to preempt_[en|dis]able() as this enables to test
>>> that condition in __do_softirq() (ASSERT_NOT_IN_ATOMIC() is not
>>> usable there due to __cpu_up() calling process_pending_softirqs()
>>> while holding the cpu hotplug lock).
>>>
>>> While at it switch the rcu_read_[un]lock() implementation to static
>>> inline functions instead of macros.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> with one remark:
>>
>>> @@ -91,16 +114,23 @@ typedef struct _rcu_read_lock rcu_read_lock_t;
>>>    * will be deferred until the outermost RCU read-side critical section
>>>    * completes.
>>>    *
>>> - * It is illegal to block while in an RCU read-side critical section.
>>> + * It is illegal to process softirqs while in an RCU read-side critical section.
>>
>> The latest with the re-added preempt_disable(), wouldn't this better
>> say "... to process softirqs or block ..."?
> 
> I can add this, but OTOH blocking without processing softirqs is not
> possible, as there is no other (legal) way to enter the scheduler.

Sure, but that's still implicit; it could do with being said explicitly.

Jan


* Re: [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier()
  2020-03-17 13:56   ` Jan Beulich
@ 2020-03-19 12:06     ` Jürgen Groß
  2020-03-19 13:59       ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Jürgen Groß @ 2020-03-19 12:06 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel

On 17.03.20 14:56, Jan Beulich wrote:
> On 13.03.2020 14:06, Juergen Gross wrote:
>> @@ -143,51 +143,85 @@ static int qhimark = 10000;
>>   static int qlowmark = 100;
>>   static int rsinterval = 1000;
>>   
>> -struct rcu_barrier_data {
>> -    struct rcu_head head;
>> -    atomic_t *cpu_count;
>> -};
>> +/*
>> + * rcu_barrier() handling:
>> + * cpu_count holds the number of cpus required to finish barrier handling.
>> + * pending_count is initialized to nr_cpus + 1.
>> + * Cpus are synchronized via softirq mechanism. rcu_barrier() is regarded to
>> + * be active if pending_count is not zero. In case rcu_barrier() is called on
>> + * multiple cpus it is enough to check for pending_count being not zero on entry
>> + * and to call process_pending_softirqs() in a loop until pending_count drops to
>> + * zero, before starting the new rcu_barrier() processing.
> 
> Everything up to here reads fine, but ...
> 
>> + * In order to avoid hangs when rcu_barrier() is called multiple times on the
>> + * same cpu in fast sequence and a slave cpu couldn't drop out of the
>> + * barrier handling fast enough a second counter pending_count is needed.
>> + * The rcu_barrier() invoking cpu will wait until pending_count reaches 1
>> + * (meaning that all cpus have finished processing the barrier) and then will
>> + * reset pending_count to 0 to enable entering rcu_barrier() again.
> 
> ... this starts as if pending_count hadn't been mentioned before at
> all, which might end up being confusing (e.g. by suggesting the text
> has gone out of sync with the code, as happened to me).

I'll reword the comment.
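Perhaps along these lines (a sketch only, the final wording may differ):

    /*
     * rcu_barrier() handling:
     * Two counters are used to synchronize rcu_barrier() work:
     * - cpu_count: the number of cpus still having to finish the actual
     *   barrier callback processing; it drops to zero once all pending
     *   rcu callbacks have been processed everywhere.
     * - pending_count: initialized to nr_cpus + 1 when a new barrier is
     *   started; a barrier is regarded as active as long as it is not zero.
     *   This second counter avoids hangs when rcu_barrier() is called
     *   multiple times in fast sequence on the same cpu while a slave cpu
     *   couldn't drop out of the previous barrier handling fast enough:
     *   the invoking cpu waits until it reaches 1 (all cpus finished) and
     *   only then resets it to 0, re-enabling entry into rcu_barrier().
     */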

> 
>> + */
>> +static atomic_t cpu_count = ATOMIC_INIT(0);
>> +static atomic_t pending_count = ATOMIC_INIT(0);
>>   
>>   static void rcu_barrier_callback(struct rcu_head *head)
>>   {
>> -    struct rcu_barrier_data *data = container_of(
>> -        head, struct rcu_barrier_data, head);
>> -    atomic_inc(data->cpu_count);
>> +    smp_wmb();     /* Make all previous writes visible to other cpus. */
>> +    atomic_dec(&cpu_count);
> 
> In Linux terms, wouldn't this be smp_mb__before_atomic()? If so,
> perhaps better if we also introduce this and its "after" sibling.

Okay, will add a patch.
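Something along these lines, as a sketch only (modelled on Linux; the
actual patch may differ): on x86 the locked RMW instructions are already
fully ordered, so a compiler barrier suffices there, while e.g. Arm
would need a real memory barrier.

    /* Hypothetical sketch, not the committed patch. */
    #if defined(CONFIG_X86)
    #define smp_mb__before_atomic()    barrier()
    #define smp_mb__after_atomic()     barrier()
    #else
    #define smp_mb__before_atomic()    smp_mb()
    #define smp_mb__after_atomic()     smp_mb()
    #endif

    static void rcu_barrier_callback(struct rcu_head *head)
    {
        /* Make all previous writes visible before cpu_count drops. */
        smp_mb__before_atomic();
        atomic_dec(&cpu_count);
    }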

> 
>>   }
>>   
>> -static int rcu_barrier_action(void *_cpu_count)
>> +static void rcu_barrier_action(void)
>>   {
>> -    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
>> -
>> -    ASSERT(!local_irq_is_enabled());
>> -    local_irq_enable();
>> +    struct rcu_head head;
>>   
>>       /*
>>        * When callback is executed, all previously-queued RCU work on this CPU
>> -     * is completed. When all CPUs have executed their callback, data.cpu_count
>> -     * will have been incremented to include every online CPU.
>> +     * is completed. When all CPUs have executed their callback, cpu_count
>> +     * will have been decremented to 0.
>>        */
>> -    call_rcu(&data.head, rcu_barrier_callback);
>> +    call_rcu(&head, rcu_barrier_callback);
>>   
>> -    while ( atomic_read(data.cpu_count) != num_online_cpus() )
>> +    while ( atomic_read(&cpu_count) )
>>       {
>>           process_pending_softirqs();
>>           cpu_relax();
>>       }
>>   
>> -    local_irq_disable();
>> -
>> -    return 0;
>> +    atomic_dec(&pending_count);
> 
> Isn't there a barrier needed between the atomic_read() and this
> atomic_dec()?

Yes, probably.
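Something like the following then, sketched under the assumption that a
plain barrier is used (the follow-up may use a dedicated helper such as
smp_mb__before_atomic() instead):

    while ( atomic_read(&cpu_count) )
    {
        process_pending_softirqs();
        cpu_relax();
    }

    /*
     * Order the reads of cpu_count above before the decrement below, so
     * the invoking cpu can't see pending_count reach 1 while callbacks
     * might still be regarded as outstanding.
     */
    smp_mb();
    atomic_dec(&pending_count);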

> 
>> +void rcu_barrier(void)
>>   {
>> -    atomic_t cpu_count = ATOMIC_INIT(0);
>> -    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
>> +    unsigned int n_cpus;
>> +
>> +    ASSERT(!in_irq() && local_irq_is_enabled());
>> +
>> +    for ( ;; )
> 
> Nit: Canonically there ought to also be a blank between the two
> semicolons.

Okay.

> 
>> +    {
>> +        if ( !atomic_read(&pending_count) && get_cpu_maps() )
>> +        {
>> +            n_cpus = num_online_cpus();
>> +
>> +            if ( atomic_cmpxchg(&pending_count, 0, n_cpus + 1) == 0 )
>> +                break;
>> +
>> +            put_cpu_maps();
>> +        }
>> +
>> +        process_pending_softirqs();
>> +        cpu_relax();
> 
> Is this really needed after having invoked
> process_pending_softirqs()?

With no softirq pending, this loop might spin rather tightly. Better to
give a potential hyperthread sibling a chance to make progress.

> 
>> +    }
>> +
>> +    atomic_set(&cpu_count, n_cpus);
> 
> Isn't there a barrier needed ahead of this, to order it wrt the
> cmpxchg?

I'll add one.
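E.g. (sketch only; the actual change may look different):

    /*
     * Order the cmpxchg claiming pending_count above before cpu_count
     * becomes visible to the cpus entering the softirq handler.
     */
    smp_mb();
    atomic_set(&cpu_count, n_cpus);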

> 
>> +    cpumask_raise_softirq(&cpu_online_map, RCU_SOFTIRQ);
> 
> Isn't there another barrier needed ahead of this, to order it wrt
> the set?

No, I don't think so. cpumask_raise_softirq() needs to have appropriate
ordering semantics, as otherwise the softirq pending bit wouldn't be
guaranteed to be visible to softirq processing.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH v6 3/4] xen/rcu: add assertions to debug build
  2020-03-18  7:37       ` Jan Beulich
@ 2020-03-19 12:07         ` Jürgen Groß
  0 siblings, 0 replies; 17+ messages in thread
From: Jürgen Groß @ 2020-03-19 12:07 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel

On 18.03.20 08:37, Jan Beulich wrote:
> On 18.03.2020 07:26, Jürgen Groß wrote:
>> On 17.03.20 15:36, Jan Beulich wrote:
>>> On 13.03.2020 14:06, Juergen Gross wrote:
>>>> Xen's RCU implementation relies on no softirq handling taking place
>>>> while in an RCU critical section. Add ASSERT()s in debug builds in
>>>> order to catch any violations.
>>>>
>>>> For that purpose modify rcu_read_[un]lock() to use a dedicated percpu
>>>> counter in addition to preempt_[en|dis]able(), as this enables testing
>>>> that condition in __do_softirq() (ASSERT_NOT_IN_ATOMIC() is not
>>>> usable there due to __cpu_up() calling process_pending_softirqs()
>>>> while holding the cpu hotplug lock).
>>>>
>>>> While at it, switch the rcu_read_[un]lock() implementation to static
>>>> inline functions instead of macros.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>> with one remark:
>>>
>>>> @@ -91,16 +114,23 @@ typedef struct _rcu_read_lock rcu_read_lock_t;
>>>>    * will be deferred until the outermost RCU read-side critical section
>>>>    * completes.
>>>>    *
>>>> - * It is illegal to block while in an RCU read-side critical section.
>>>> + * It is illegal to process softirqs while in an RCU read-side critical section.
>>>
>>> At the latest with the re-added preempt_disable(), wouldn't this
>>> better say "... to process softirqs or block ..."?
>>
>> I can add this, but OTOH blocking without processing softirqs is not
>> possible, as there is no other (legal) way to enter the scheduler.
> 
> Sure, but that's still implicit; it could do with being said explicitly.

Okay.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier()
  2020-03-19 12:06     ` Jürgen Groß
@ 2020-03-19 13:59       ` Jan Beulich
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2020-03-19 13:59 UTC (permalink / raw)
  To: Jürgen Groß
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel

On 19.03.2020 13:06, Jürgen Groß wrote:
> On 17.03.20 14:56, Jan Beulich wrote:
>> On 13.03.2020 14:06, Juergen Gross wrote:
>>> +    cpumask_raise_softirq(&cpu_online_map, RCU_SOFTIRQ);
>>
>> Isn't there another barrier needed ahead of this, to order it wrt
>> the set?
> 
> No, I don't think so. cpumask_raise_softirq() needs to have appropriate
> ordering semantics as otherwise the softirq pending bit wouldn't be
> guaranteed to be seen by softirq processing.

You may have a point here, but I had given my comment after
looking at cpumask_raise_softirq() and not finding any such
barrier there. Oh, actually - set_bit() and test_and_set_bit()
differ in their barrier characteristics; I wasn't aware of
this.
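
For the record, a simplified sketch of the distinction, assuming
Linux-like semantics (not the literal Xen implementation):

    /* Plain set_bit(): no memory barrier implied around the store. */
    set_bit(nr, &softirq_pending(cpu));

    /*
     * test_and_set_bit() is a value-returning read-modify-write and thus
     * acts as a full barrier, so all stores issued before it (e.g. to
     * cpu_count) are visible to a cpu observing the pending bit.
     */
    if ( !test_and_set_bit(nr, &softirq_pending(cpu)) )
        smp_send_event_check_cpu(cpu);  /* assumed IPI helper */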

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2020-03-19 13:59 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-13 13:06 [Xen-devel] [PATCH v6 0/4] xen/rcu: let rcu work better with core scheduling Juergen Gross
2020-03-13 13:06 ` [Xen-devel] [PATCH v6 1/4] xen/rcu: don't use stop_machine_run() for rcu_barrier() Juergen Gross
2020-03-16 15:24   ` Igor Druzhinin
2020-03-16 16:01     ` Jürgen Groß
2020-03-16 16:21       ` Igor Druzhinin
2020-03-17 13:56   ` Jan Beulich
2020-03-19 12:06     ` Jürgen Groß
2020-03-19 13:59       ` Jan Beulich
2020-03-13 13:06 ` [Xen-devel] [PATCH v6 2/4] xen: don't process rcu callbacks when holding a rcu_read_lock() Juergen Gross
2020-03-17 14:22   ` Jan Beulich
2020-03-13 13:06 ` [Xen-devel] [PATCH v6 3/4] xen/rcu: add assertions to debug build Juergen Gross
2020-03-17 14:36   ` Jan Beulich
2020-03-18  6:26     ` Jürgen Groß
2020-03-18  7:37       ` Jan Beulich
2020-03-19 12:07         ` Jürgen Groß
2020-03-13 13:06 ` [Xen-devel] [PATCH v6 4/4] xen/rcu: add per-lock counter in debug builds Juergen Gross
2020-03-17 15:39   ` Jan Beulich
