* [PATCH 0/8] Introduce local_lock()
@ 2020-05-19 20:19 Sebastian Andrzej Siewior
  2020-05-19 20:19 ` [PATCH 1/8] locking: " Sebastian Andrzej Siewior
                   ` (7 more replies)
  0 siblings, 8 replies; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-19 20:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds

preempt_disable() and local_irq_disable/save() are in principle per CPU big
kernel locks. This has several downsides:

  - The protection scope is unknown

  - Violation of protection rules is hard to detect by instrumentation

  - For PREEMPT_RT such sections, unless in low level critical code, can
    violate the preemptability constraints.

To address this, PREEMPT_RT introduced the concept of local_locks which are
strictly per CPU.

The lock operations map to preempt_disable(), local_irq_disable/save() and
the enabling counterparts on non RT enabled kernels.

If lockdep is enabled local locks gain a lock map which tracks the usage
context. This will catch cases where an area is protected by
preempt_disable() but the access also happens from interrupt context. local
locks have identified quite a few such issues over the years; the most
recent example is:

  b7d5dc21072cd ("random: add a spinlock_t to struct batched_entropy")

Aside from the lockdep coverage this also improves code readability as it
precisely annotates the protection scope.

PREEMPT_RT substitutes these local locks with 'sleeping' spinlocks to
protect such sections while maintaining preemptability and CPU locality.

The following series introduces the infrastructure including
documentation and provides a couple of examples of how they are used to
adjust code to be RT ready.
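
For illustration, here is a minimal usage sketch of the interface
introduced in patch 1/8 (struct foo, pcpu_foo, foo_lock and foo_account
are made-up names for this example):

  struct foo {
          int count;
  };

  static DEFINE_PER_CPU(struct foo, pcpu_foo);
  static DEFINE_LOCAL_LOCK(foo_lock);

  static void foo_account(int val)
  {
          struct foo *f;

          /* preempt_disable() on !RT, per-CPU spinlock_t on RT */
          local_lock(foo_lock);
          f = this_cpu_ptr(&pcpu_foo);
          f->count += val;
          local_unlock(foo_lock);
  }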

Sebastian




* [PATCH 1/8] locking: Introduce local_lock()
  2020-05-19 20:19 [PATCH 0/8] Introduce local_lock() Sebastian Andrzej Siewior
@ 2020-05-19 20:19 ` Sebastian Andrzej Siewior
  2020-05-20 12:04   ` Peter Zijlstra
  2020-05-19 20:19 ` [PATCH 2/8] radix-tree: Use local_lock for protection Sebastian Andrzej Siewior
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-19 20:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Sebastian Andrzej Siewior

From: Thomas Gleixner <tglx@linutronix.de>

preempt_disable() and local_irq_disable/save() are in principle per CPU big
kernel locks. This has several downsides:

  - The protection scope is unknown

  - Violation of protection rules is hard to detect by instrumentation

  - For PREEMPT_RT such sections, unless in low level critical code, can
    violate the preemptability constraints.

To address this, PREEMPT_RT introduced the concept of local_locks which are
strictly per CPU.

The lock operations map to preempt_disable(), local_irq_disable/save() and
the enabling counterparts on non RT enabled kernels.

If lockdep is enabled local locks gain a lock map which tracks the usage
context. This will catch cases where an area is protected by
preempt_disable() but the access also happens from interrupt context. local
locks have identified quite a few such issues over the years; the most
recent example is:

  b7d5dc21072cd ("random: add a spinlock_t to struct batched_entropy")

Aside from the lockdep coverage this also improves code readability as it
precisely annotates the protection scope.

PREEMPT_RT substitutes these local locks with 'sleeping' spinlocks to
protect such sections while maintaining preemptability and CPU locality.

local locks can replace:

  - preempt_disable()/enable() pairs
  - local_irq_disable/enable() pairs
  - local_irq_save/restore() pairs

They are also used to replace code which implicitly disables preemption
like:

  - get_cpu()/put_cpu()
  - get_cpu_var()/put_cpu_var()

with PREEMPT_RT friendly constructs.
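
As an illustration (pcpu_foo and foo_lock are hypothetical names; the real
conversions are in the subsequent patches of this series), a get_cpu_var()
based section converts like this:

  /* Before: implicit preemption disable, anonymous scope */
  struct foo *f = &get_cpu_var(pcpu_foo);

  f->count++;
  put_cpu_var(pcpu_foo);

  /* After: named, lockdep covered scope */
  struct foo *f;

  local_lock(foo_lock);
  f = this_cpu_ptr(&pcpu_foo);
  f->count++;
  local_unlock(foo_lock);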

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 Documentation/locking/locktypes.rst | 215 ++++++++++++++++++++++++++--
 include/linux/locallock.h           |  68 +++++++++
 include/linux/locallock_internal.h  |  97 +++++++++++++
 3 files changed, 369 insertions(+), 11 deletions(-)
 create mode 100644 include/linux/locallock.h
 create mode 100644 include/linux/locallock_internal.h

diff --git a/Documentation/locking/locktypes.rst b/Documentation/locking/locktypes.rst
index 09f45ce38d262..1b577a8bf9829 100644
--- a/Documentation/locking/locktypes.rst
+++ b/Documentation/locking/locktypes.rst
@@ -13,6 +13,7 @@ The kernel provides a variety of locking primitives which can be divided
 into two categories:
 
  - Sleeping locks
+ - CPU local locks
  - Spinning locks
 
 This document conceptually describes these lock types and provides rules
@@ -44,9 +45,23 @@ other contexts unless there is no other option.
 
 On PREEMPT_RT kernels, these lock types are converted to sleeping locks:
 
+ - local_lock
  - spinlock_t
  - rwlock_t
 
+
+CPU local locks
+---------------
+
+ - local_lock
+
+On non-PREEMPT_RT kernels, local_lock functions are wrappers around
+preemption and interrupt disabling primitives. Contrary to other locking
+mechanisms, disabling preemption or interrupts is a pure CPU-local
+concurrency control mechanism and is not suited for inter-CPU concurrency
+control.
+
+
 Spinning locks
 --------------
 
@@ -67,6 +82,7 @@ Spinning locks implicitly disable preemption and the lock / unlock functions
  _irqsave/restore()   Save and disable / restore interrupt disabled state
  ===================  ====================================================
 
+
 Owner semantics
 ===============
 
@@ -139,6 +155,56 @@ PREEMPT_RT kernels map rw_semaphore to a separate rt_mutex-based
  writer from starving readers.
 
 
+local_lock
+==========
+
+local_lock provides a named scope to critical sections which are protected
+by disabling preemption or interrupts.
+
+On non-PREEMPT_RT kernels local_lock operations map to the preemption and
+interrupt disabling and enabling primitives:
+
+ ===============================  ======================
+ local_lock(&llock)               preempt_disable()
+ local_unlock(&llock)             preempt_enable()
+ local_lock_irq(&llock)           local_irq_disable()
+ local_unlock_irq(&llock)         local_irq_enable()
+ local_lock_irqsave(&llock)       local_irq_save()
+ local_unlock_irqrestore(&llock)  local_irq_restore()
+ ===============================  ======================
+
+The named scope of local_lock has two advantages over the regular
+primitives:
+
+  - The lock name allows static analysis and is also a clear documentation
+    of the protection scope while the regular primitives are scopeless and
+    opaque.
+
+  - If lockdep is enabled the local_lock gains a lockmap which allows
+    validating the correctness of the protection. This can detect cases
+    where e.g. a function using preempt_disable() as protection mechanism
+    is invoked from interrupt or soft-interrupt context. Aside from that,
+    lockdep_assert_held(&llock) works as with any other locking primitive.
+
+local_lock and PREEMPT_RT
+-------------------------
+
+PREEMPT_RT kernels map local_lock to a per-CPU spinlock_t, thus changing
+semantics:
+
+  - All spinlock_t changes also apply to local_lock.
+
+local_lock usage
+----------------
+
+local_lock should be used in situations where disabling preemption or
+interrupts is the appropriate form of concurrency control to protect
+per-CPU data structures on a non PREEMPT_RT kernel.
+
+local_lock is not suitable to protect against preemption or interrupts on a
+PREEMPT_RT kernel due to the PREEMPT_RT specific spinlock_t semantics.
+
+
 raw_spinlock_t and spinlock_t
 =============================
 
@@ -258,10 +324,82 @@ PREEMPT_RT kernels map rwlock_t to a separate rt_mutex-based
 PREEMPT_RT caveats
 ==================
 
+local_lock on RT
+----------------
+
+The mapping of local_lock to spinlock_t on PREEMPT_RT kernels has a few
+implications. For example, on a non-PREEMPT_RT kernel the following code
+sequence works as expected::
+
+  local_lock_irq(&local_lock);
+  raw_spin_lock(&lock);
+
+and is fully equivalent to::
+
+   raw_spin_lock_irq(&lock);
+
+On a PREEMPT_RT kernel this code sequence breaks because local_lock_irq()
+is mapped to a per-CPU spinlock_t which neither disables interrupts nor
+preemption. The following code sequence works correctly on both
+PREEMPT_RT and non-PREEMPT_RT kernels::
+
+  local_lock_irq(&local_lock);
+  spin_lock(&lock);
+
+Another caveat with local locks is that each local_lock has a specific
+protection scope. So the following substitution is wrong::
+
+  func1()
+  {
+    local_irq_save(flags);    -> local_lock_irqsave(&local_lock_1, flags);
+    func3();
+    local_irq_restore(flags); -> local_lock_irqrestore(&local_lock_1, flags);
+  }
+
+  func2()
+  {
+    local_irq_save(flags);    -> local_lock_irqsave(&local_lock_2, flags);
+    func3();
+    local_irq_restore(flags); -> local_lock_irqrestore(&local_lock_2, flags);
+  }
+
+  func3()
+  {
+    lockdep_assert_irqs_disabled();
+    access_protected_data();
+  }
+
+On a non-PREEMPT_RT kernel this works correctly, but on a PREEMPT_RT kernel
+local_lock_1 and local_lock_2 are distinct and cannot serialize the callers
+of func3(). Also the lockdep assert will trigger on a PREEMPT_RT kernel
+because local_lock_irqsave() does not disable interrupts due to the
+PREEMPT_RT-specific semantics of spinlock_t. The correct substitution is::
+
+  func1()
+  {
+    local_irq_save(flags);    -> local_lock_irqsave(&local_lock, flags);
+    func3();
+    local_irq_restore(flags); -> local_lock_irqrestore(&local_lock, flags);
+  }
+
+  func2()
+  {
+    local_irq_save(flags);    -> local_lock_irqsave(&local_lock, flags);
+    func3();
+    local_irq_restore(flags); -> local_lock_irqrestore(&local_lock, flags);
+  }
+
+  func3()
+  {
+    lockdep_assert_held(&local_lock);
+    access_protected_data();
+  }
+
+
 spinlock_t and rwlock_t
 -----------------------
 
-These changes in spinlock_t and rwlock_t semantics on PREEMPT_RT kernels
+The changes in spinlock_t and rwlock_t semantics on PREEMPT_RT kernels
 have a few implications.  For example, on a non-PREEMPT_RT kernel the
 following code sequence works as expected::
 
@@ -282,9 +420,61 @@ local_lock mechanism.  Acquiring the local_lock pins the task to a CPU,
 allowing things like per-CPU interrupt disabled locks to be acquired.
 However, this approach should be used only where absolutely necessary.
 
+A typical scenario is protection of per-CPU variables in thread context::
 
-raw_spinlock_t
---------------
+  struct foo *p = get_cpu_ptr(&var1);
+
+  spin_lock(&p->lock);
+  p->count += this_cpu_read(var2);
+
+This is correct code on a non-PREEMPT_RT kernel, but on a PREEMPT_RT kernel
+this breaks. The PREEMPT_RT-specific change of spinlock_t semantics does
+not allow acquiring p->lock because get_cpu_ptr() implicitly disables
+preemption. The following substitution works on both kernels::
+
+  struct foo *p;
+
+  migrate_disable();
+  p = this_cpu_ptr(&var1);
+  spin_lock(&p->lock);
+  p->count += this_cpu_read(var2);
+
+On a non-PREEMPT_RT kernel migrate_disable() maps to preempt_disable()
+which makes the above code fully equivalent. On a PREEMPT_RT kernel
+migrate_disable() ensures that the task is pinned on the current CPU which
+in turn guarantees that the per-CPU accesses to var1 and var2 stay on
+the same CPU.
+
+The migrate_disable() substitution is not valid for the following
+scenario::
+
+  func()
+  {
+    struct foo *p;
+
+    migrate_disable();
+    p = this_cpu_ptr(&var1);
+    p->val = func2();
+
+While correct on a non-PREEMPT_RT kernel, this breaks on PREEMPT_RT because
+here migrate_disable() does not protect against reentrancy from a
+preempting task. A correct substitution for this case is::
+
+  func()
+  {
+    struct foo *p;
+
+    local_lock(&foo_lock);
+    p = this_cpu_ptr(&var1);
+    p->val = func2();
+
+On a non-PREEMPT_RT kernel this protects against reentrancy by disabling
+preemption. On a PREEMPT_RT kernel this is achieved by acquiring the
+underlying per-CPU spinlock.
+
+
+raw_spinlock_t on RT
+--------------------
 
 Acquiring a raw_spinlock_t disables preemption and possibly also
 interrupts, so the critical section must avoid acquiring a regular
@@ -325,22 +515,25 @@ Lock type nesting rules
 
 The most basic rules are:
 
-  - Lock types of the same lock category (sleeping, spinning) can nest
-    arbitrarily as long as they respect the general lock ordering rules to
-    prevent deadlocks.
+  - Lock types of the same lock category (sleeping, CPU local, spinning)
+    can nest arbitrarily as long as they respect the general lock ordering
+    rules to prevent deadlocks.
 
-  - Sleeping lock types cannot nest inside spinning lock types.
+  - Sleeping lock types cannot nest inside CPU local and spinning lock types.
 
-  - Spinning lock types can nest inside sleeping lock types.
+  - CPU local and spinning lock types can nest inside sleeping lock types.
+
+  - Spinning lock types can nest inside all lock types.
 
 These constraints apply both in PREEMPT_RT and otherwise.
 
 The fact that PREEMPT_RT changes the lock category of spinlock_t and
-rwlock_t from spinning to sleeping means that they cannot be acquired while
-holding a raw spinlock.  This results in the following nesting ordering:
+rwlock_t from spinning to sleeping and substitutes local_lock with a
+per-CPU spinlock_t means that they cannot be acquired while holding a raw
+spinlock.  This results in the following nesting ordering:
 
   1) Sleeping locks
-  2) spinlock_t and rwlock_t
+  2) spinlock_t, rwlock_t, local_lock
   3) raw_spinlock_t and bit spinlocks
 
 Lockdep will complain if these constraints are violated, both in
diff --git a/include/linux/locallock.h b/include/linux/locallock.h
new file mode 100644
index 0000000000000..15addc97d0109
--- /dev/null
+++ b/include/linux/locallock.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_LOCALLOCK_H
+#define _LINUX_LOCALLOCK_H
+
+#include <linux/locallock_internal.h>
+
+/**
+ * DEFINE_LOCAL_LOCK - Define and initialize a per CPU local lock
+ * @lvar:	Name of the lock instance
+ */
+#define DEFINE_LOCAL_LOCK(lvar)					\
+	DEFINE_PER_CPU(struct local_lock, lvar) = { INIT_LOCAL_LOCK(lvar) }
+
+/**
+ * DECLARE_LOCAL_LOCK - Declare a defined per CPU local lock
+ * @lvar:	Name of the lock instance
+ */
+#define DECLARE_LOCAL_LOCK(lvar)				\
+	DECLARE_PER_CPU(struct local_lock, lvar)
+
+/**
+ * local_lock_init - Runtime initialize a lock instance
+ */
+#define local_lock_init(lock)		__local_lock_init(lock)
+
+/**
+ * local_lock - Acquire a per CPU local lock
+ * @lock:	The lock variable
+ */
+#define local_lock(lock)		__local_lock(lock)
+
+/**
+ * local_lock_irq - Acquire a per CPU local lock and disable interrupts
+ * @lock:	The lock variable
+ */
+#define local_lock_irq(lock)		__local_lock_irq(lock)
+
+/**
+ * local_lock_irqsave - Acquire a per CPU local lock, save and disable
+ *			 interrupts
+ * @lock:	The lock variable
+ * @flags:	Storage for interrupt flags
+ */
+#define local_lock_irqsave(lock, flags)				\
+	__local_lock_irqsave(lock, flags)
+
+/**
+ * local_unlock - Release a per CPU local lock
+ * @lock:	The lock variable
+ */
+#define local_unlock(lock)		__local_unlock(lock)
+
+/**
+ * local_unlock_irq - Release a per CPU local lock and enable interrupts
+ * @lock:	The lock variable
+ */
+#define local_unlock_irq(lock)		__local_unlock_irq(lock)
+
+/**
+ * local_unlock_irqrestore - Release a per CPU local lock and restore
+ *			      interrupt flags
+ * @lock:	The lock variable
+ * @flags:      Interrupt flags to restore
+ */
+#define local_unlock_irqrestore(lock, flags)			\
+	__local_unlock_irqrestore(lock, flags)
+
+#endif
diff --git a/include/linux/locallock_internal.h b/include/linux/locallock_internal.h
new file mode 100644
index 0000000000000..e79641c2ae9a7
--- /dev/null
+++ b/include/linux/locallock_internal.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_LOCALLOCK_H
+# error "Do not include directly, include linux/locallock.h"
+#endif
+
+#include <linux/percpu-defs.h>
+#include <linux/lockdep.h>
+
+struct local_lock {
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map	dep_map;
+	struct task_struct	*owner;
+#endif
+};
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define LL_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
+#else
+# define LL_DEP_MAP_INIT(lockname)
+#endif
+
+#define INIT_LOCAL_LOCK(lockname)	LL_DEP_MAP_INIT(lockname)
+
+#define local_lock_lockdep_init(lock)				\
+do {								\
+	static struct lock_class_key __key;			\
+								\
+	debug_check_no_locks_freed((void *)lock, sizeof(*lock));\
+	lockdep_init_map(&lock->dep_map, #lock, &__key, 0);	\
+} while (0)
+
+#define __local_lock_init(lockvar)				\
+do {								\
+	struct local_lock *__lock;				\
+	int __cpu;						\
+								\
+	for_each_possible_cpu(__cpu) {				\
+		__lock = &per_cpu(lockvar, __cpu);		\
+		local_lock_lockdep_init(__lock);		\
+	}							\
+} while (0)
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+static inline void local_lock_acquire(struct local_lock *l)
+{
+	lock_map_acquire(&l->dep_map);
+	DEBUG_LOCKS_WARN_ON(l->owner);
+	l->owner = current;
+}
+
+static inline void local_lock_release(struct local_lock *l)
+{
+	DEBUG_LOCKS_WARN_ON(l->owner != current);
+	l->owner = NULL;
+	lock_map_release(&l->dep_map);
+}
+
+#else /* CONFIG_DEBUG_LOCK_ALLOC */
+static inline void local_lock_acquire(struct local_lock *l) { }
+static inline void local_lock_release(struct local_lock *l) { }
+#endif /* !CONFIG_DEBUG_LOCK_ALLOC */
+
+#define __local_lock(lock)					\
+	do {							\
+		preempt_disable();				\
+		local_lock_acquire(this_cpu_ptr(&(lock)));	\
+	} while (0)
+
+#define __local_lock_irq(lock)					\
+	do {							\
+		local_irq_disable();				\
+		local_lock_acquire(this_cpu_ptr(&(lock)));	\
+	} while (0)
+
+#define __local_lock_irqsave(lock, flags)			\
+	do {							\
+		local_irq_save(flags);				\
+		local_lock_acquire(this_cpu_ptr(&(lock)));	\
+	} while (0)
+
+#define __local_unlock(lock)					\
+	do {							\
+		local_lock_release(this_cpu_ptr(&lock));	\
+		preempt_enable();				\
+	} while (0)
+
+#define __local_unlock_irq(lock)				\
+	do {							\
+		local_lock_release(this_cpu_ptr(&lock));	\
+		local_irq_enable();				\
+	} while (0)
+
+#define __local_unlock_irqrestore(lock, flags)			\
+	do {							\
+		local_lock_release(this_cpu_ptr(&lock));	\
+		local_irq_restore(flags);			\
+	} while (0)
-- 
2.26.2



* [PATCH 2/8] radix-tree: Use local_lock for protection
  2020-05-19 20:19 [PATCH 0/8] Introduce local_lock() Sebastian Andrzej Siewior
  2020-05-19 20:19 ` [PATCH 1/8] locking: " Sebastian Andrzej Siewior
@ 2020-05-19 20:19 ` Sebastian Andrzej Siewior
  2020-05-19 20:45   ` Matthew Wilcox
  2020-05-20 10:21   ` Peter Zijlstra
  2020-05-19 20:19 ` [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access Sebastian Andrzej Siewior
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-19 20:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Sebastian Andrzej Siewior, Matthew Wilcox, linux-fsdevel

The radix-tree and idr preload mechanisms use preempt_disable() to protect
the complete operation between xxx_preload() and xxx_preload_end().

As the code inside the preempt disabled section acquires regular spinlocks,
which are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel and
eventually calls into a memory allocator, this conflicts with the RT
semantics.

Convert it to a local_lock which allows RT kernels to substitute it with
a real per CPU lock. On non RT kernels this maps to preempt_disable() as
before, but also provides lockdep coverage of the critical region.
No functional change.
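
For reference, a typical caller-side pattern looks like the sketch below
(tree, index, item and mapping_lock are illustrative names); the spinlock
taken inside the preload section is what conflicts with a preempt disabled
region on PREEMPT_RT:

  if (radix_tree_preload(GFP_KERNEL))
          return -ENOMEM;

  spin_lock(&mapping_lock);       /* spinlock_t, 'sleeping' on PREEMPT_RT */
  err = radix_tree_insert(&tree, index, item);
  spin_unlock(&mapping_lock);

  radix_tree_preload_end();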

Cc: Matthew Wilcox <willy@infradead.org>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/idr.h        |  5 +----
 include/linux/radix-tree.h |  6 +-----
 lib/radix-tree.c           | 25 +++++++++++++++++++------
 3 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/include/linux/idr.h b/include/linux/idr.h
index ac6e946b6767b..839da8f2f6f13 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -169,10 +169,7 @@ static inline bool idr_is_empty(const struct idr *idr)
  * Each idr_preload() should be matched with an invocation of this
  * function.  See idr_preload() for details.
  */
-static inline void idr_preload_end(void)
-{
-	preempt_enable();
-}
+void idr_preload_end(void);
 
 /**
  * idr_for_each_entry() - Iterate over an IDR's elements of a given type.
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 63e62372443a5..040b1fd0ab940 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -226,6 +226,7 @@ unsigned int radix_tree_gang_lookup(const struct radix_tree_root *,
 			unsigned int max_items);
 int radix_tree_preload(gfp_t gfp_mask);
 int radix_tree_maybe_preload(gfp_t gfp_mask);
+void radix_tree_preload_end(void);
 void radix_tree_init(void);
 void *radix_tree_tag_set(struct radix_tree_root *,
 			unsigned long index, unsigned int tag);
@@ -243,11 +244,6 @@ unsigned int radix_tree_gang_lookup_tag_slot(const struct radix_tree_root *,
 		unsigned int max_items, unsigned int tag);
 int radix_tree_tagged(const struct radix_tree_root *, unsigned int tag);
 
-static inline void radix_tree_preload_end(void)
-{
-	preempt_enable();
-}
-
 void __rcu **idr_get_free(struct radix_tree_root *root,
 			      struct radix_tree_iter *iter, gfp_t gfp,
 			      unsigned long max);
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 2ee6ae3b0ade0..8a44f7b85dfdc 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -20,6 +20,7 @@
 #include <linux/kernel.h>
 #include <linux/kmemleak.h>
 #include <linux/percpu.h>
+#include <linux/locallock.h>
 #include <linux/preempt.h>		/* in_interrupt() */
 #include <linux/radix-tree.h>
 #include <linux/rcupdate.h>
@@ -27,7 +28,6 @@
 #include <linux/string.h>
 #include <linux/xarray.h>
 
-
 /*
  * Radix tree node cache.
  */
@@ -64,6 +64,7 @@ struct radix_tree_preload {
 	struct radix_tree_node *nodes;
 };
 static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
+static DEFINE_LOCAL_LOCK(radix_tree_preloads_lock);
 
 static inline struct radix_tree_node *entry_to_node(void *ptr)
 {
@@ -332,14 +333,14 @@ static __must_check int __radix_tree_preload(gfp_t gfp_mask, unsigned nr)
 	 */
 	gfp_mask &= ~__GFP_ACCOUNT;
 
-	preempt_disable();
+	local_lock(radix_tree_preloads_lock);
 	rtp = this_cpu_ptr(&radix_tree_preloads);
 	while (rtp->nr < nr) {
-		preempt_enable();
+		local_unlock(radix_tree_preloads_lock);
 		node = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
 		if (node == NULL)
 			goto out;
-		preempt_disable();
+		local_lock(radix_tree_preloads_lock);
 		rtp = this_cpu_ptr(&radix_tree_preloads);
 		if (rtp->nr < nr) {
 			node->parent = rtp->nodes;
@@ -381,11 +382,17 @@ int radix_tree_maybe_preload(gfp_t gfp_mask)
 	if (gfpflags_allow_blocking(gfp_mask))
 		return __radix_tree_preload(gfp_mask, RADIX_TREE_PRELOAD_SIZE);
 	/* Preloading doesn't help anything with this gfp mask, skip it */
-	preempt_disable();
+	local_lock(radix_tree_preloads_lock);
 	return 0;
 }
 EXPORT_SYMBOL(radix_tree_maybe_preload);
 
+void radix_tree_preload_end(void)
+{
+	local_unlock(radix_tree_preloads_lock);
+}
+EXPORT_SYMBOL(radix_tree_preload_end);
+
 static unsigned radix_tree_load_root(const struct radix_tree_root *root,
 		struct radix_tree_node **nodep, unsigned long *maxindex)
 {
@@ -1470,10 +1477,16 @@ EXPORT_SYMBOL(radix_tree_tagged);
 void idr_preload(gfp_t gfp_mask)
 {
 	if (__radix_tree_preload(gfp_mask, IDR_PRELOAD_SIZE))
-		preempt_disable();
+		local_lock(radix_tree_preloads_lock);
 }
 EXPORT_SYMBOL(idr_preload);
 
+void idr_preload_end(void)
+{
+	local_unlock(radix_tree_preloads_lock);
+}
+EXPORT_SYMBOL(idr_preload_end);
+
 void __rcu **idr_get_free(struct radix_tree_root *root,
 			      struct radix_tree_iter *iter, gfp_t gfp,
 			      unsigned long max)
-- 
2.26.2



* [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-19 20:19 [PATCH 0/8] Introduce local_lock() Sebastian Andrzej Siewior
  2020-05-19 20:19 ` [PATCH 1/8] locking: " Sebastian Andrzej Siewior
  2020-05-19 20:19 ` [PATCH 2/8] radix-tree: Use local_lock for protection Sebastian Andrzej Siewior
@ 2020-05-19 20:19 ` Sebastian Andrzej Siewior
  2020-05-20 10:24   ` Peter Zijlstra
  2020-05-19 20:19 ` [PATCH 4/8] mm/swap: Use local_lock for protection Sebastian Andrzej Siewior
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-19 20:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Sebastian Andrzej Siewior, Lai Jiangshan, Josh Triplett,
	Mathieu Desnoyers, rcu

SRCU disables interrupts to get a stable per-CPU pointer and then
acquires the spinlock which is in the per-CPU data structure. The
release uses spin_unlock_irqrestore(). While this is correct on a non-RT
kernel, this conflicts with the RT semantics because the spinlock is
converted to a 'sleeping' spinlock. Sleeping locks can obviously not be
acquired with interrupts disabled.

Add a local lock and use the corresponding local lock operations.  Split
the restore into spin_unlock_rcu_node() and local_unlock_irqrestore(). The
local lock operations map to local_irq_save/restore() on a non-RT kernel.
On an RT kernel the local lock is substituted with a real per CPU lock
which serializes the access and guarantees CPU locality, but keeps the code
section preemptible. No functional change.

Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: rcu@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/rcu/srcutree.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 0c71505f0e19c..8d2b5f75145d7 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -25,6 +25,7 @@
 #include <linux/delay.h>
 #include <linux/module.h>
 #include <linux/srcu.h>
+#include <linux/locallock.h>
 
 #include "rcu.h"
 #include "rcu_segcblist.h"
@@ -735,6 +736,7 @@ static void srcu_flip(struct srcu_struct *ssp)
 	smp_mb(); /* D */  /* Pairs with C. */
 }
 
+static DEFINE_LOCAL_LOCK(sda_lock);
 /*
  * If SRCU is likely idle, return true, otherwise return false.
  *
@@ -765,13 +767,13 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
 	unsigned long tlast;
 
 	/* If the local srcu_data structure has callbacks, not idle.  */
-	local_irq_save(flags);
+	local_lock_irqsave(sda_lock, flags);
 	sdp = this_cpu_ptr(ssp->sda);
 	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(sda_lock, flags);
 		return false; /* Callbacks already present, so not idle. */
 	}
-	local_irq_restore(flags);
+	local_unlock_irqrestore(sda_lock, flags);
 
 	/*
 	 * No local callbacks, so probabalistically probe global state.
@@ -851,7 +853,7 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 	}
 	rhp->func = func;
 	idx = srcu_read_lock(ssp);
-	local_irq_save(flags);
+	local_lock_irqsave(sda_lock, flags);
 	sdp = this_cpu_ptr(ssp->sda);
 	spin_lock_rcu_node(sdp);
 	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
@@ -867,7 +869,8 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 		sdp->srcu_gp_seq_needed_exp = s;
 		needexp = true;
 	}
-	spin_unlock_irqrestore_rcu_node(sdp, flags);
+	spin_unlock_rcu_node(sdp);
+	local_unlock_irqrestore(sda_lock, flags);
 	if (needgp)
 		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
 	else if (needexp)
-- 
2.26.2



* [PATCH 4/8] mm/swap: Use local_lock for protection
  2020-05-19 20:19 [PATCH 0/8] Introduce local_lock() Sebastian Andrzej Siewior
                   ` (2 preceding siblings ...)
  2020-05-19 20:19 ` [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access Sebastian Andrzej Siewior
@ 2020-05-19 20:19 ` Sebastian Andrzej Siewior
  2020-05-19 23:58   ` Andrew Morton
  2020-05-20 10:53   ` Peter Zijlstra
  2020-05-19 20:19 ` [PATCH 5/8] squashfs: make use of local lock in multi_cpu decompressor Sebastian Andrzej Siewior
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-19 20:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Andrew Morton, linux-mm, Sebastian Andrzej Siewior

From: Ingo Molnar <mingo@kernel.org>

The various struct pagevec per CPU variables are protected by disabling
either preemption or interrupts across the critical sections. Inside
these sections spinlocks have to be acquired.

These spinlocks are regular spinlock_t types which are converted to
"sleeping" spinlocks on PREEMPT_RT enabled kernels. Obviously sleeping
locks cannot be acquired in preemption or interrupt disabled sections.

local locks provide a trivial way to substitute preempt and interrupt
disable instances. On a non PREEMPT_RT enabled kernel local_lock() maps
to preempt_disable() and local_lock_irq() to local_irq_disable().

Add swapvec_lock to protect the per-CPU lru_add_pvec and
lru_lazyfree_pvecs variables and rotate_lock to protect the per-CPU
lru_rotate_pvecs variable.

Change the relevant call sites to acquire these locks instead of using
preempt_disable() / get_cpu() / get_cpu_var() and local_irq_disable() /
local_irq_save().

There is neither a functional change nor a change in the generated
binary code for non PREEMPT_RT enabled non-debug kernels.

When lockdep is enabled local locks have lockdep maps embedded. These
allow lockdep to validate the protections, i.e. inappropriate usage of a
preemption-only protected section would result in a lockdep warning
while the same problem would not be noticed with a plain
preempt_disable() based protection.

local locks also improve readability as they provide a named scope for
the protections while preempt/interrupt disable are opaque and scopeless.

Finally local locks allow PREEMPT_RT to substitute them with real
locking primitives to ensure the correctness of operation in a fully
preemptible kernel.
No functional change.

[ bigeasy: Adapted to use local_lock ]

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/swap.h |  2 ++
 mm/compaction.c      |  7 +++---
 mm/swap.c            | 59 ++++++++++++++++++++++++++++++--------------
 3 files changed, 45 insertions(+), 23 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index e1bbf7a16b276..540b52c71bc95 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -12,6 +12,7 @@
 #include <linux/fs.h>
 #include <linux/atomic.h>
 #include <linux/page-flags.h>
+#include <linux/locallock.h>
 #include <asm/page.h>
 
 struct notifier_block;
@@ -328,6 +329,7 @@ extern unsigned long nr_free_pagecache_pages(void);
 
 
 /* linux/mm/swap.c */
+DECLARE_LOCAL_LOCK(swapvec_lock);
 extern void lru_cache_add(struct page *);
 extern void lru_cache_add_anon(struct page *page);
 extern void lru_cache_add_file(struct page *page);
diff --git a/mm/compaction.c b/mm/compaction.c
index 46f0fcc93081e..77972c8d4dead 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2243,15 +2243,14 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 		 * would succeed.
 		 */
 		if (cc->order > 0 && last_migrated_pfn) {
-			int cpu;
 			unsigned long current_block_start =
 				block_start_pfn(cc->migrate_pfn, cc->order);
 
 			if (last_migrated_pfn < current_block_start) {
-				cpu = get_cpu();
-				lru_add_drain_cpu(cpu);
+				local_lock(swapvec_lock);
+				lru_add_drain_cpu(smp_processor_id());
 				drain_local_pages(cc->zone);
-				put_cpu();
+				local_unlock(swapvec_lock);
 				/* No more flushing until we migrate again */
 				last_migrated_pfn = 0;
 			}
diff --git a/mm/swap.c b/mm/swap.c
index bf9a79fed62d7..03c97d15fcd69 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -44,8 +44,14 @@
 /* How many pages do we try to swap or page in/out together? */
 int page_cluster;
 
-static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);
+
+/* Protecting lru_rotate_pvecs */
+static DEFINE_LOCAL_LOCK(rotate_lock);
 static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
+
+/* Protecting the following struct pagevec */
+DEFINE_LOCAL_LOCK(swapvec_lock);
+static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);
 static DEFINE_PER_CPU(struct pagevec, lru_deactivate_file_pvecs);
 static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
 static DEFINE_PER_CPU(struct pagevec, lru_lazyfree_pvecs);
@@ -254,11 +260,11 @@ void rotate_reclaimable_page(struct page *page)
 		unsigned long flags;
 
 		get_page(page);
-		local_irq_save(flags);
+		local_lock_irqsave(rotate_lock, flags);
 		pvec = this_cpu_ptr(&lru_rotate_pvecs);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(rotate_lock, flags);
 	}
 }
 
@@ -308,12 +314,14 @@ void activate_page(struct page *page)
 {
 	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(activate_page_pvecs);
+		struct pagevec *pvec;
 
+		local_lock(swapvec_lock);
+		pvec = this_cpu_ptr(&activate_page_pvecs);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, __activate_page, NULL);
-		put_cpu_var(activate_page_pvecs);
+		local_unlock(swapvec_lock);
 	}
 }
 
@@ -335,9 +343,12 @@ void activate_page(struct page *page)
 
 static void __lru_cache_activate_page(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct pagevec *pvec;
 	int i;
 
+	local_lock(swapvec_lock);
+	pvec = this_cpu_ptr(&lru_add_pvec);
+
 	/*
 	 * Search backwards on the optimistic assumption that the page being
 	 * activated has just been added to this pagevec. Note that only
@@ -357,7 +368,7 @@ static void __lru_cache_activate_page(struct page *page)
 		}
 	}
 
-	put_cpu_var(lru_add_pvec);
+	local_unlock(swapvec_lock);
 }
 
 /*
@@ -404,12 +415,14 @@ EXPORT_SYMBOL(mark_page_accessed);
 
 static void __lru_cache_add(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct pagevec *pvec;
 
+	local_lock(swapvec_lock);
+	pvec = this_cpu_ptr(&lru_add_pvec);
 	get_page(page);
 	if (!pagevec_add(pvec, page) || PageCompound(page))
 		__pagevec_lru_add(pvec);
-	put_cpu_var(lru_add_pvec);
+	local_unlock(swapvec_lock);
 }
 
 /**
@@ -603,9 +616,9 @@ void lru_add_drain_cpu(int cpu)
 		unsigned long flags;
 
 		/* No harm done if a racing interrupt already did this */
-		local_irq_save(flags);
+		local_lock_irqsave(rotate_lock, flags);
 		pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(rotate_lock, flags);
 	}
 
 	pvec = &per_cpu(lru_deactivate_file_pvecs, cpu);
@@ -641,11 +654,14 @@ void deactivate_file_page(struct page *page)
 		return;
 
 	if (likely(get_page_unless_zero(page))) {
-		struct pagevec *pvec = &get_cpu_var(lru_deactivate_file_pvecs);
+		struct pagevec *pvec;
+
+		local_lock(swapvec_lock);
+		pvec = this_cpu_ptr(&lru_deactivate_file_pvecs);
 
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
-		put_cpu_var(lru_deactivate_file_pvecs);
+		local_unlock(swapvec_lock);
 	}
 }
 
@@ -660,12 +676,14 @@ void deactivate_file_page(struct page *page)
 void deactivate_page(struct page *page)
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(lru_deactivate_pvecs);
+		struct pagevec *pvec;
 
+		local_lock(swapvec_lock);
+		pvec = this_cpu_ptr(&lru_deactivate_pvecs);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
-		put_cpu_var(lru_deactivate_pvecs);
+		local_unlock(swapvec_lock);
 	}
 }
 
@@ -680,19 +698,22 @@ void mark_page_lazyfree(struct page *page)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
+		struct pagevec *pvec;
 
+		local_lock(swapvec_lock);
+		pvec = this_cpu_ptr(&lru_lazyfree_pvecs);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
-		put_cpu_var(lru_lazyfree_pvecs);
+		local_unlock(swapvec_lock);
 	}
 }
 
 void lru_add_drain(void)
 {
-	lru_add_drain_cpu(get_cpu());
-	put_cpu();
+	local_lock(swapvec_lock);
+	lru_add_drain_cpu(smp_processor_id());
+	local_unlock(swapvec_lock);
 }
 
 #ifdef CONFIG_SMP
-- 
2.26.2



* [PATCH 5/8] squashfs: make use of local lock in multi_cpu decompressor
  2020-05-19 20:19 [PATCH 0/8] Introduce local_lock() Sebastian Andrzej Siewior
                   ` (3 preceding siblings ...)
  2020-05-19 20:19 ` [PATCH 4/8] mm/swap: Use local_lock for protection Sebastian Andrzej Siewior
@ 2020-05-19 20:19 ` Sebastian Andrzej Siewior
  2020-05-20 10:56   ` Peter Zijlstra
  2020-05-19 20:19 ` [PATCH 6/8] connector/cn_proc: Protect send_msg() with a local lock Sebastian Andrzej Siewior
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-19 20:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Julia Cartwright, Phillip Lougher, Alexander Stein,
	Sebastian Andrzej Siewior

From: Julia Cartwright <julia@ni.com>

The squashfs multi CPU decompressor makes use of get_cpu_ptr() to
acquire a pointer to per-CPU data. get_cpu_ptr() implicitly disables
preemption which serializes the access to the per-CPU data.

But decompression can take quite some time depending on the size. The
observed preempt disabled times in real world scenarios went up to 8ms,
causing massive wakeup latencies. This happens on all CPUs as the
decompression is fully parallelized.

Replace the implicit preemption control with an explicit local lock.
This allows RT kernels to substitute it with a real per CPU lock, which
serializes the access but keeps the code section preemptible. On non RT
kernels this maps to preempt_disable() as before, i.e. no functional
change.

[ bigeasy: Use local_lock(), patch description]

Cc: Phillip Lougher <phillip@squashfs.org.uk>
Reported-by: Alexander Stein <alexander.stein@systec-electronic.com>
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Alexander Stein <alexander.stein@systec-electronic.com>
---
 fs/squashfs/decompressor_multi_percpu.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/fs/squashfs/decompressor_multi_percpu.c b/fs/squashfs/decompressor_multi_percpu.c
index 2a2a2d106440e..8a77a2741c176 100644
--- a/fs/squashfs/decompressor_multi_percpu.c
+++ b/fs/squashfs/decompressor_multi_percpu.c
@@ -8,6 +8,7 @@
 #include <linux/slab.h>
 #include <linux/percpu.h>
 #include <linux/buffer_head.h>
+#include <linux/locallock.h>
 
 #include "squashfs_fs.h"
 #include "squashfs_fs_sb.h"
@@ -23,6 +24,8 @@ struct squashfs_stream {
 	void		*stream;
 };
 
+static DEFINE_LOCAL_LOCK(stream_lock);
+
 void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
 						void *comp_opts)
 {
@@ -75,12 +78,16 @@ void squashfs_decompressor_destroy(struct squashfs_sb_info *msblk)
 int squashfs_decompress(struct squashfs_sb_info *msblk, struct buffer_head **bh,
 	int b, int offset, int length, struct squashfs_page_actor *output)
 {
-	struct squashfs_stream __percpu *percpu =
-			(struct squashfs_stream __percpu *) msblk->stream;
-	struct squashfs_stream *stream = get_cpu_ptr(percpu);
-	int res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
-		offset, length, output);
-	put_cpu_ptr(stream);
+	struct squashfs_stream *stream;
+	int res;
+
+	local_lock(stream_lock);
+	stream = this_cpu_ptr(msblk->stream);
+
+	res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
+			offset, length, output);
+
+	local_unlock(stream_lock);
 
 	if (res < 0)
 		ERROR("%s decompression failed, data probably corrupt\n",
-- 
2.26.2



* [PATCH 6/8] connector/cn_proc: Protect send_msg() with a local lock
  2020-05-19 20:19 [PATCH 0/8] Introduce local_lock() Sebastian Andrzej Siewior
                   ` (4 preceding siblings ...)
  2020-05-19 20:19 ` [PATCH 5/8] squashfs: make use of local lock in multi_cpu decompressor Sebastian Andrzej Siewior
@ 2020-05-19 20:19 ` Sebastian Andrzej Siewior
  2020-05-20 11:03   ` Peter Zijlstra
  2020-05-19 20:19 ` [PATCH 7/8] zram: Use local lock to protect per-CPU data Sebastian Andrzej Siewior
  2020-05-19 20:19 ` [PATCH 8/8] mm/zswap: " Sebastian Andrzej Siewior
  7 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-19 20:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Mike Galbraith, Evgeniy Polyakov, netdev,
	Sebastian Andrzej Siewior

From: Mike Galbraith <umgwanakikbuti@gmail.com>

send_msg() disables preemption to avoid out-of-order messages. As the
code inside the preempt disabled section acquires regular spinlocks,
which are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel and
eventually calls into a memory allocator, this conflicts with the RT
semantics.

Convert it to a local_lock which allows RT kernels to substitute them with
a real per CPU lock. On non RT kernels this maps to preempt_disable() as
before. No functional change.

[bigeasy: Patch description]

Cc: Evgeniy Polyakov <zbr@ioremap.net>
Cc: netdev@vger.kernel.org
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/connector/cn_proc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index d58ce664da843..055b0c86a0693 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -18,6 +18,7 @@
 #include <linux/pid_namespace.h>
 
 #include <linux/cn_proc.h>
+#include <linux/locallock.h>
 
 /*
  * Size of a cn_msg followed by a proc_event structure.  Since the
@@ -40,10 +41,11 @@ static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
 
 /* proc_event_counts is used as the sequence number of the netlink message */
 static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
+static DEFINE_LOCAL_LOCK(send_msg_lock);
 
 static inline void send_msg(struct cn_msg *msg)
 {
-	preempt_disable();
+	local_lock(send_msg_lock);
 
 	msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
 	((struct proc_event *)msg->data)->cpu = smp_processor_id();
@@ -56,7 +58,7 @@ static inline void send_msg(struct cn_msg *msg)
 	 */
 	cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT);
 
-	preempt_enable();
+	local_unlock(send_msg_lock);
 }
 
 void proc_fork_connector(struct task_struct *task)
-- 
2.26.2



* [PATCH 7/8] zram: Use local lock to protect per-CPU data
  2020-05-19 20:19 [PATCH 0/8] Introduce local_lock() Sebastian Andrzej Siewior
                   ` (5 preceding siblings ...)
  2020-05-19 20:19 ` [PATCH 6/8] connector/cn_proc: Protect send_msg() with a local lock Sebastian Andrzej Siewior
@ 2020-05-19 20:19 ` Sebastian Andrzej Siewior
  2020-05-20 11:07   ` Peter Zijlstra
  2020-05-19 20:19 ` [PATCH 8/8] mm/zswap: " Sebastian Andrzej Siewior
  7 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-19 20:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Mike Galbraith, Minchan Kim, Nitin Gupta, Sergey Senozhatsky,
	Sebastian Andrzej Siewior

From: Mike Galbraith <umgwanakikbuti@gmail.com>

The zcomp driver uses per-CPU compression. The per-CPU data pointer is
acquired with get_cpu_ptr() which implicitly disables preemption.
It allocates memory inside the preempt disabled region which conflicts
with the PREEMPT_RT semantics.

Replace the implicit preemption control with an explicit local lock.
This allows RT kernels to substitute it with a real per CPU lock, which
serializes the access but keeps the code section preemptible. On non RT
kernels this maps to preempt_disable() as before, i.e. no functional
change.

[bigeasy: Use local_lock(), description, drop reordering]

Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/block/zram/zcomp.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index 1a8564a79d8dc..32854d460b299 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -11,6 +11,7 @@
 #include <linux/sched.h>
 #include <linux/cpu.h>
 #include <linux/crypto.h>
+#include <linux/locallock.h>
 
 #include "zcomp.h"
 
@@ -111,14 +112,17 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
 	return sz;
 }
 
+static DEFINE_LOCAL_LOCK(zcomp_lock);
+
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-	return *get_cpu_ptr(comp->stream);
+	local_lock(zcomp_lock);
+	return *this_cpu_ptr(comp->stream);
 }
 
 void zcomp_stream_put(struct zcomp *comp)
 {
-	put_cpu_ptr(comp->stream);
+	local_unlock(zcomp_lock);
 }
 
 int zcomp_compress(struct zcomp_strm *zstrm,
-- 
2.26.2



* [PATCH 8/8] mm/zswap: Use local lock to protect per-CPU data
  2020-05-19 20:19 [PATCH 0/8] Introduce local_lock() Sebastian Andrzej Siewior
                   ` (6 preceding siblings ...)
  2020-05-19 20:19 ` [PATCH 7/8] zram: Use local lock to protect per-CPU data Sebastian Andrzej Siewior
@ 2020-05-19 20:19 ` Sebastian Andrzej Siewior
  2020-05-19 21:46   ` Song Bao Hua
  7 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-19 20:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Luis Claudio R. Goncalves, Seth Jennings, Dan Streetman,
	Vitaly Wool, Andrew Morton, linux-mm, Sebastian Andrzej Siewior

From: "Luis Claudio R. Goncalves" <lgoncalv@redhat.com>

zswap uses per-CPU compression. The per-CPU data pointer is acquired with
get_cpu_ptr() which implicitly disables preemption. It allocates
memory inside the preempt disabled region which conflicts with the
PREEMPT_RT semantics.

Replace the implicit preemption control with an explicit local lock.
This allows RT kernels to substitute it with a real per CPU lock, which
serializes the access but keeps the code section preemptible. On non RT
kernels this maps to preempt_disable() as before, i.e. no functional
change.

[bigeasy: Use local_lock(), additional hunks, patch description]

Cc: Seth Jennings <sjenning@redhat.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Vitaly Wool <vitaly.wool@konsulko.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 mm/zswap.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index fbb782924ccc5..1db2ad941e501 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -18,6 +18,7 @@
 #include <linux/highmem.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/locallock.h>
 #include <linux/types.h>
 #include <linux/atomic.h>
 #include <linux/frontswap.h>
@@ -388,6 +389,8 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
 * per-cpu code
 **********************************/
 static DEFINE_PER_CPU(u8 *, zswap_dstmem);
+/* Used for zswap_dstmem and tfm */
+static DEFINE_LOCAL_LOCK(zswap_cpu_lock);
 
 static int zswap_dstmem_prepare(unsigned int cpu)
 {
@@ -919,10 +922,11 @@ static int zswap_writeback_entry(struct zpool *pool, unsigned long handle)
 		dlen = PAGE_SIZE;
 		src = (u8 *)zhdr + sizeof(struct zswap_header);
 		dst = kmap_atomic(page);
-		tfm = *get_cpu_ptr(entry->pool->tfm);
+		local_lock(zswap_cpu_lock);
+		tfm = *this_cpu_ptr(entry->pool->tfm);
 		ret = crypto_comp_decompress(tfm, src, entry->length,
 					     dst, &dlen);
-		put_cpu_ptr(entry->pool->tfm);
+		local_unlock(zswap_cpu_lock);
 		kunmap_atomic(dst);
 		BUG_ON(ret);
 		BUG_ON(dlen != PAGE_SIZE);
@@ -1074,12 +1078,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 	}
 
 	/* compress */
-	dst = get_cpu_var(zswap_dstmem);
-	tfm = *get_cpu_ptr(entry->pool->tfm);
+	local_lock(zswap_cpu_lock);
+	dst = *this_cpu_ptr(&zswap_dstmem);
+	tfm = *this_cpu_ptr(entry->pool->tfm);
 	src = kmap_atomic(page);
 	ret = crypto_comp_compress(tfm, src, PAGE_SIZE, dst, &dlen);
 	kunmap_atomic(src);
-	put_cpu_ptr(entry->pool->tfm);
 	if (ret) {
 		ret = -EINVAL;
 		goto put_dstmem;
@@ -1103,7 +1107,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 	memcpy(buf, &zhdr, hlen);
 	memcpy(buf + hlen, dst, dlen);
 	zpool_unmap_handle(entry->pool->zpool, handle);
-	put_cpu_var(zswap_dstmem);
+	local_unlock(zswap_cpu_lock);
 
 	/* populate entry */
 	entry->offset = offset;
@@ -1131,7 +1135,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 	return 0;
 
 put_dstmem:
-	put_cpu_var(zswap_dstmem);
+	local_unlock(zswap_cpu_lock);
 	zswap_pool_put(entry->pool);
 freepage:
 	zswap_entry_cache_free(entry);
@@ -1176,9 +1180,10 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
 	if (zpool_evictable(entry->pool->zpool))
 		src += sizeof(struct zswap_header);
 	dst = kmap_atomic(page);
-	tfm = *get_cpu_ptr(entry->pool->tfm);
+	local_lock(zswap_cpu_lock);
+	tfm = *this_cpu_ptr(entry->pool->tfm);
 	ret = crypto_comp_decompress(tfm, src, entry->length, dst, &dlen);
-	put_cpu_ptr(entry->pool->tfm);
+	local_unlock(zswap_cpu_lock);
 	kunmap_atomic(dst);
 	zpool_unmap_handle(entry->pool->zpool, entry->handle);
 	BUG_ON(ret);
-- 
2.26.2



* Re: [PATCH 2/8] radix-tree: Use local_lock for protection
  2020-05-19 20:19 ` [PATCH 2/8] radix-tree: Use local_lock for protection Sebastian Andrzej Siewior
@ 2020-05-19 20:45   ` Matthew Wilcox
  2020-05-19 20:54     ` Steven Rostedt
  2020-05-20 10:21   ` Peter Zijlstra
  1 sibling, 1 reply; 48+ messages in thread
From: Matthew Wilcox @ 2020-05-19 20:45 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	linux-fsdevel

On Tue, May 19, 2020 at 10:19:06PM +0200, Sebastian Andrzej Siewior wrote:
> The radix-tree and idr preload mechanisms use preempt_disable() to protect
> the complete operation between xxx_preload() and xxx_preload_end().
> 
> As the code inside the preempt disabled section acquires regular spinlocks,
> which are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel and
> eventually calls into a memory allocator, this conflicts with the RT
> semantics.
> 
> Convert it to a local_lock which allows RT kernels to substitute it with
> a real per CPU lock. On non RT kernels this maps to preempt_disable() as
> before, but also provides lockdep coverage of the critical region.
> No functional change.

I don't seem to have a locallock.h in my tree.  Where can I find more
information about it?

> +++ b/lib/radix-tree.c
> @@ -20,6 +20,7 @@
>  #include <linux/kernel.h>
>  #include <linux/kmemleak.h>
>  #include <linux/percpu.h>
> +#include <linux/locallock.h>
>  #include <linux/preempt.h>		/* in_interrupt() */
>  #include <linux/radix-tree.h>
>  #include <linux/rcupdate.h>


* Re: [PATCH 2/8] radix-tree: Use local_lock for protection
  2020-05-19 20:45   ` Matthew Wilcox
@ 2020-05-19 20:54     ` Steven Rostedt
  2020-05-20  2:05       ` Matthew Wilcox
  0 siblings, 1 reply; 48+ messages in thread
From: Steven Rostedt @ 2020-05-19 20:54 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Sebastian Andrzej Siewior, linux-kernel, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E . McKenney,
	Linus Torvalds, linux-fsdevel

On Tue, 19 May 2020 13:45:45 -0700
Matthew Wilcox <willy@infradead.org> wrote:

> On Tue, May 19, 2020 at 10:19:06PM +0200, Sebastian Andrzej Siewior wrote:
> > The radix-tree and idr preload mechanisms use preempt_disable() to protect
> > the complete operation between xxx_preload() and xxx_preload_end().
> > 
> > As the code inside the preempt disabled section acquires regular spinlocks,
> > which are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel and
> > eventually calls into a memory allocator, this conflicts with the RT
> > semantics.
> > 
> > Convert it to a local_lock which allows RT kernels to substitute it with
> > a real per CPU lock. On non RT kernels this maps to preempt_disable() as
> > before, but also provides lockdep coverage of the critical region.
> > No functional change.  
> 
> I don't seem to have a locallock.h in my tree.  Where can I find more
> information about it?

PATCH 1 ;-)

 https://lore.kernel.org/r/20200519201912.1564477-1-bigeasy@linutronix.de

With lore and b4, it should now be easy to get full patch series.

-- Steve

> 
> > +++ b/lib/radix-tree.c
> > @@ -20,6 +20,7 @@
> >  #include <linux/kernel.h>
> >  #include <linux/kmemleak.h>
> >  #include <linux/percpu.h>
> > +#include <linux/locallock.h>
> >  #include <linux/preempt.h>		/* in_interrupt() */
> >  #include <linux/radix-tree.h>
> >  #include <linux/rcupdate.h>  



* RE: [PATCH 8/8] mm/zswap: Use local lock to protect per-CPU data
  2020-05-19 20:19 ` [PATCH 8/8] mm/zswap: " Sebastian Andrzej Siewior
@ 2020-05-19 21:46   ` Song Bao Hua
  2020-05-20 10:26     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 48+ messages in thread
From: Song Bao Hua @ 2020-05-19 21:46 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Luis Claudio R. Goncalves, Seth Jennings, Dan Streetman,
	Vitaly Wool, Andrew Morton, linux-mm, Linuxarm


> From: "Luis Claudio R. Goncalves" <lgoncalv@redhat.com>

> zswap uses per-CPU compression. The per-CPU data pointer is acquired with
> get_cpu_ptr() which implicitly disables preemption. It allocates memory inside the preempt disabled region which conflicts with the PREEMPT_RT semantics.

> Replace the implicit preemption control with an explicit local lock.
> This allows RT kernels to substitute it with a real per CPU lock, which serializes the access but keeps the code section preemptible. On non RT kernels this maps to preempt_disable() as before, i.e. no functional change.

Hi Luis,
In the patches below, in order to use the acomp APIs to leverage the power of hardware compressors, I have moved to a mutex:
https://marc.info/?l=linux-crypto-vger&m=158941285830302&w=2
https://marc.info/?l=linux-crypto-vger&m=158941287930311&w=2

so once we get some progress on that one, I guess we don't need a special patch for RT any more.

> [bigeasy: Use local_lock(), additional hunks, patch description]

> Cc: Seth Jennings <sjenning@redhat.com>
> Cc: Dan Streetman <ddstreet@ieee.org>
> Cc: Vitaly Wool <vitaly.wool@konsulko.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Signed-off-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  mm/zswap.c | 23 ++++++++++++++---------
>  1 file changed, 14 insertions(+), 9 deletions(-)

Thanks
Barry



^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 4/8] mm/swap: Use local_lock for protection
  2020-05-19 20:19 ` [PATCH 4/8] mm/swap: Use local_lock for protection Sebastian Andrzej Siewior
@ 2020-05-19 23:58   ` Andrew Morton
  2020-05-20  2:17     ` Matthew Wilcox
  2020-05-20 10:53   ` Peter Zijlstra
  1 sibling, 1 reply; 48+ messages in thread
From: Andrew Morton @ 2020-05-19 23:58 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	linux-mm

On Tue, 19 May 2020 22:19:08 +0200 Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> From: Ingo Molnar <mingo@kernel.org>
> 
> The various struct pagevec per CPU variables are protected by disabling
> either preemption or interrupts across the critical sections. Inside
> these sections spinlocks have to be acquired.
> 
> These spinlocks are regular spinlock_t types which are converted to
> "sleeping" spinlocks on PREEMPT_RT enabled kernels. Obviously sleeping
> locks cannot be acquired in preemption or interrupt disabled sections.
> 
> local locks provide a trivial way to substitute preempt and interrupt
> disable instances. On a non PREEMPT_RT enabled kernel local_lock() maps
> to preempt_disable() and local_lock_irq() to local_irq_disable().
> 
> Add swapvec_lock to protect the per-CPU lru_add_pvec and
> lru_lazyfree_pvecs variables and rotate_lock to protect the per-CPU
> lru_rotate_pvecs variable.
> 
> Change the relevant call sites to acquire these locks instead of using
> preempt_disable() / get_cpu() / get_cpu_var() and local_irq_disable() /
> local_irq_save().
> 
> There is neither a functional change nor a change in the generated
> binary code for non PREEMPT_RT enabled non-debug kernels.
> 
> When lockdep is enabled local locks have lockdep maps embedded. These
> allow lockdep to validate the protections, i.e. inappropriate usage of a
> preemption only protected sections would result in a lockdep warning
> while the same problem would not be noticed with a plain
> preempt_disable() based protection.
> 
> local locks also improve readability as they provide a named scope for
> the protections while preempt/interrupt disable are opaque and scopeless.
> 
> Finally local locks allow PREEMPT_RT to substitute them with real
> locking primitives to ensure the correctness of operation in a fully
> preemptible kernel.
> No functional change.
>
> ...
>
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -12,6 +12,7 @@
>  #include <linux/fs.h>
>  #include <linux/atomic.h>
>  #include <linux/page-flags.h>
> +#include <linux/locallock.h>

Could we please make these local_lock.h and local_lock_internal.h?  Making
the filenames different from everything else is just irritating!

> +				local_lock(swapvec_lock);

It's quite peculiar that these operations appear to be pass-by-value. 
All other locking operations are pass-by-reference - spin_lock(&lock),
not spin_lock(lock).  This is what the eye expects to see and it's
simply more logical - calling code shouldn't have to "know" that the
locking operations are implemented as cpp macros.  And we'd be in a
mess if someone tried to convert these to real C functions.
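
For illustration only, the pass-by-reference style being asked for would
look something like this sketch (the struct and variable names here are
invented, not taken from the series):

	struct swap_pvecs {
		struct local_lock lock;		/* protects the pagevec below */
		struct pagevec add;
	};
	static DEFINE_PER_CPU(struct swap_pvecs, swap_pvecs) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	local_lock(&swap_pvecs.lock);
	/* ... operate on this_cpu_ptr(&swap_pvecs.add) ... */
	local_unlock(&swap_pvecs.lock);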

Which prompts the question: why were all these operations implemented
in the preprocessor anyway?  afaict they could have been written in C.



^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 2/8] radix-tree: Use local_lock for protection
  2020-05-19 20:54     ` Steven Rostedt
@ 2020-05-20  2:05       ` Matthew Wilcox
  2020-05-20 10:13         ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 48+ messages in thread
From: Matthew Wilcox @ 2020-05-20  2:05 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Sebastian Andrzej Siewior, linux-kernel, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E . McKenney,
	Linus Torvalds, linux-fsdevel

On Tue, May 19, 2020 at 04:54:53PM -0400, Steven Rostedt wrote:
> On Tue, 19 May 2020 13:45:45 -0700
> Matthew Wilcox <willy@infradead.org> wrote:
> 
> > On Tue, May 19, 2020 at 10:19:06PM +0200, Sebastian Andrzej Siewior wrote:
> > > The radix-tree and idr preload mechanisms use preempt_disable() to protect
> > > the complete operation between xxx_preload() and xxx_preload_end().
> > > 
> > > As the code inside the preempt disabled section acquires regular spinlocks,
> > > which are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel and
> > > eventually calls into a memory allocator, this conflicts with the RT
> > > semantics.
> > > 
> > > Convert it to a local_lock which allows RT kernels to substitute them with
> > > a real per CPU lock. On non RT kernels this maps to preempt_disable() as
> > > before, but provides also lockdep coverage of the critical region.
> > > No functional change.  
> > 
> > I don't seem to have a locallock.h in my tree.  Where can I find more
> > information about it?
> 
> PATCH 1 ;-)

... this is why we have the convention to cc everybody on all the patches.

>  https://lore.kernel.org/r/20200519201912.1564477-1-bigeasy@linutronix.de
> 
> With lore and b4, it should now be easy to get full patch series.

That's asking too much of the random people cc'd on random patches.
What is b4 anyway?

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 4/8] mm/swap: Use local_lock for protection
  2020-05-19 23:58   ` Andrew Morton
@ 2020-05-20  2:17     ` Matthew Wilcox
  0 siblings, 0 replies; 48+ messages in thread
From: Matthew Wilcox @ 2020-05-20  2:17 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Sebastian Andrzej Siewior, linux-kernel, Peter Zijlstra,
	Ingo Molnar, Steven Rostedt, Will Deacon, Thomas Gleixner,
	Paul E . McKenney, Linus Torvalds, linux-mm

On Tue, May 19, 2020 at 04:58:37PM -0700, Andrew Morton wrote:
> On Tue, 19 May 2020 22:19:08 +0200 Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> > +				local_lock(swapvec_lock);
> 
> It's quite peculiar that these operations appear to be pass-by-value. 
> All other locking operations are pass-by-reference - spin_lock(&lock),
> not spin_lock(lock).  This is what the eye expects to see and it's
> simply more logical - calling code shouldn't have to "know" that the
> locking operations are implemented as cpp macros.  And we'd be in a
> mess if someone tried to convert these to real C functions.

The funny thing is that the documentation gets this right:

+The mapping of local_lock to spinlock_t on PREEMPT_RT kernels has a few
+implications. For example, on a non-PREEMPT_RT kernel the following code
+sequence works as expected::
+
+  local_lock_irq(&local_lock);
+  raw_spin_lock(&lock);

but apparently the implementation changed without the documentation matching.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 2/8] radix-tree: Use local_lock for protection
  2020-05-20  2:05       ` Matthew Wilcox
@ 2020-05-20 10:13         ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-20 10:13 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Steven Rostedt, linux-kernel, Peter Zijlstra, Ingo Molnar,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	linux-fsdevel

On 2020-05-19 19:05:16 [-0700], Matthew Wilcox wrote:
> >  https://lore.kernel.org/r/20200519201912.1564477-1-bigeasy@linutronix.de
> > 
> > With lore and b4, it should now be easy to get full patch series.
> 
> That's asking too much of the random people cc'd on random patches.

Well, others complain that they don't care about the other 20 patches
just because one patch affects them. And they can look it up if needed,
so you can't make everyone happy.

> What is b4 anyway?

  git://git.kernel.org/pub/scm/utils/b4/b4.git

It is a tool written by Konstantin which can grab a whole series given
the message-id of one patch in the series, save the series as an mbox or
patch series, collect all the tags (like replies with acked/tested/…-by)
and fold those tags into the right patches.
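
For example (using the message-id above; the exact invocation may differ
between b4 versions), something like

  $ b4 am 20200519201912.1564477-1-bigeasy@linutronix.de

should produce an mbox of the whole series with the collected tags folded
in, ready for git am.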

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 2/8] radix-tree: Use local_lock for protection
  2020-05-19 20:19 ` [PATCH 2/8] radix-tree: Use local_lock for protection Sebastian Andrzej Siewior
  2020-05-19 20:45   ` Matthew Wilcox
@ 2020-05-20 10:21   ` Peter Zijlstra
  2020-05-20 12:28     ` Thomas Gleixner
  1 sibling, 1 reply; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-20 10:21 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, linux-fsdevel

On Tue, May 19, 2020 at 10:19:06PM +0200, Sebastian Andrzej Siewior wrote:
> @@ -64,6 +64,7 @@ struct radix_tree_preload {
>  	struct radix_tree_node *nodes;
>  };
>  static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
> +static DEFINE_LOCAL_LOCK(radix_tree_preloads_lock);
>  
>  static inline struct radix_tree_node *entry_to_node(void *ptr)
>  {

So I'm all with Andrew on the naming and pass-by-pointer thing, but
also, the above is pretty crap. You want the local_lock to be in the
structure you're actually protecting, and there really isn't anything
stopping you from doing that.

The below builds just fine and is ever so much more sensible.

--- a/include/linux/locallock_internal.h
+++ b/include/linux/locallock_internal.h
@@ -19,7 +19,7 @@ struct local_lock {
 # define LL_DEP_MAP_INIT(lockname)
 #endif
 
-#define INIT_LOCAL_LOCK(lockname)	LL_DEP_MAP_INIT(lockname)
+#define INIT_LOCAL_LOCK(lockname)	{ LL_DEP_MAP_INIT(lockname) }
 
 #define local_lock_lockdep_init(lock)				\
 do {								\
@@ -63,35 +63,35 @@ static inline void local_lock_release(st
 #define __local_lock(lock)					\
 	do {							\
 		preempt_disable();				\
-		local_lock_acquire(this_cpu_ptr(&(lock)));	\
+		local_lock_acquire(this_cpu_ptr(lock));		\
 	} while (0)
 
 #define __local_lock_irq(lock)					\
 	do {							\
 		local_irq_disable();				\
-		local_lock_acquire(this_cpu_ptr(&(lock)));	\
+		local_lock_acquire(this_cpu_ptr(lock));		\
 	} while (0)
 
 #define __local_lock_irqsave(lock, flags)			\
 	do {							\
 		local_irq_save(flags);				\
-		local_lock_acquire(this_cpu_ptr(&(lock)));	\
+		local_lock_acquire(this_cpu_ptr(lock));		\
 	} while (0)
 
 #define __local_unlock(lock)					\
 	do {							\
-		local_lock_release(this_cpu_ptr(&lock));	\
+		local_lock_release(this_cpu_ptr(lock));		\
 		preempt_enable();				\
 	} while (0)
 
 #define __local_unlock_irq(lock)				\
 	do {							\
-		local_lock_release(this_cpu_ptr(&lock));	\
+		local_lock_release(this_cpu_ptr(lock));		\
 		local_irq_enable();				\
 	} while (0)
 
 #define __local_unlock_irqrestore(lock, flags)			\
 	do {							\
-		local_lock_release(this_cpu_ptr(&lock));	\
+		local_lock_release(this_cpu_ptr(lock));		\
 		local_irq_restore(flags);			\
 	} while (0)
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -59,12 +59,14 @@ struct kmem_cache *radix_tree_node_cache
  * Per-cpu pool of preloaded nodes
  */
 struct radix_tree_preload {
+	struct local_lock lock;
 	unsigned nr;
 	/* nodes->parent points to next preallocated node */
 	struct radix_tree_node *nodes;
 };
-static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
-static DEFINE_LOCAL_LOCK(radix_tree_preloads_lock);
+static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) =
+	{ .lock = INIT_LOCAL_LOCK(lock),
+	  .nr = 0, };
 
 static inline struct radix_tree_node *entry_to_node(void *ptr)
 {
@@ -333,14 +335,14 @@ static __must_check int __radix_tree_pre
 	 */
 	gfp_mask &= ~__GFP_ACCOUNT;
 
-	local_lock(radix_tree_preloads_lock);
+	local_lock(&radix_tree_preloads.lock);
 	rtp = this_cpu_ptr(&radix_tree_preloads);
 	while (rtp->nr < nr) {
-		local_unlock(radix_tree_preloads_lock);
+		local_unlock(&radix_tree_preloads.lock);
 		node = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
 		if (node == NULL)
 			goto out;
-		local_lock(radix_tree_preloads_lock);
+		local_lock(&radix_tree_preloads.lock);
 		rtp = this_cpu_ptr(&radix_tree_preloads);
 		if (rtp->nr < nr) {
 			node->parent = rtp->nodes;
@@ -382,14 +384,14 @@ int radix_tree_maybe_preload(gfp_t gfp_m
 	if (gfpflags_allow_blocking(gfp_mask))
 		return __radix_tree_preload(gfp_mask, RADIX_TREE_PRELOAD_SIZE);
 	/* Preloading doesn't help anything with this gfp mask, skip it */
-	local_lock(radix_tree_preloads_lock);
+	local_lock(&radix_tree_preloads.lock);
 	return 0;
 }
 EXPORT_SYMBOL(radix_tree_maybe_preload);
 
 void radix_tree_preload_end(void)
 {
-	local_unlock(radix_tree_preloads_lock);
+	local_unlock(&radix_tree_preloads.lock);
 }
 EXPORT_SYMBOL(radix_tree_preload_end);
 
@@ -1477,13 +1479,13 @@ EXPORT_SYMBOL(radix_tree_tagged);
 void idr_preload(gfp_t gfp_mask)
 {
 	if (__radix_tree_preload(gfp_mask, IDR_PRELOAD_SIZE))
-		local_lock(radix_tree_preloads_lock);
+		local_lock(&radix_tree_preloads.lock);
 }
 EXPORT_SYMBOL(idr_preload);
 
 void idr_preload_end(void)
 {
-	local_unlock(radix_tree_preloads_lock);
+	local_unlock(&radix_tree_preloads.lock);
 }
 EXPORT_SYMBOL(idr_preload_end);
 

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-19 20:19 ` [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access Sebastian Andrzej Siewior
@ 2020-05-20 10:24   ` Peter Zijlstra
  2020-05-20 12:06     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-20 10:24 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Lai Jiangshan, Josh Triplett, Mathieu Desnoyers, rcu

On Tue, May 19, 2020 at 10:19:07PM +0200, Sebastian Andrzej Siewior wrote:

> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index 0c71505f0e19c..8d2b5f75145d7 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -25,6 +25,7 @@
>  #include <linux/delay.h>
>  #include <linux/module.h>
>  #include <linux/srcu.h>
> +#include <linux/locallock.h>
>  
>  #include "rcu.h"
>  #include "rcu_segcblist.h"
> @@ -735,6 +736,7 @@ static void srcu_flip(struct srcu_struct *ssp)
>  	smp_mb(); /* D */  /* Pairs with C. */
>  }
>  
> +static DEFINE_LOCAL_LOCK(sda_lock);
>  /*
>   * If SRCU is likely idle, return true, otherwise return false.
>   *
> @@ -765,13 +767,13 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
>  	unsigned long tlast;
>  
>  	/* If the local srcu_data structure has callbacks, not idle.  */
> -	local_irq_save(flags);
> +	local_lock_irqsave(sda_lock, flags);
>  	sdp = this_cpu_ptr(ssp->sda);
>  	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
> -		local_irq_restore(flags);
> +		local_unlock_irqrestore(sda_lock, flags);
>  		return false; /* Callbacks already present, so not idle. */
>  	}
> -	local_irq_restore(flags);
> +	local_unlock_irqrestore(sda_lock, flags);

Would it perhaps make sense to stick the local_lock in struct srcu_data ?

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 8/8] mm/zswap: Use local lock to protect per-CPU data
  2020-05-19 21:46   ` Song Bao Hua
@ 2020-05-20 10:26     ` Sebastian Andrzej Siewior
  2020-05-20 11:13       ` Song Bao Hua
  0 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-20 10:26 UTC (permalink / raw)
  To: Song Bao Hua
  Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Luis Claudio R. Goncalves, Seth Jennings, Dan Streetman,
	Vitaly Wool, Andrew Morton, linux-mm, Linuxarm

On 2020-05-19 21:46:06 [+0000], Song Bao Hua wrote:
> Hi Luis,
> In the below patch, in order to use the acomp APIs to leverage the power of hardware compressors. I have moved to mutex:
> https://marc.info/?l=linux-crypto-vger&m=158941285830302&w=2
> https://marc.info/?l=linux-crypto-vger&m=158941287930311&w=2
> 
> so once we get some progress on that one, I guess we don't need a special patch for RT any more.

If you convert away from the current concept in this way then we could
drop it from the series.
The second patch shows the following hunk:

|@@ -1075,11 +1124,20 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
| 
| 	/* compress */
| 	dst = get_cpu_var(zswap_dstmem);
|	acomp_ctx = *this_cpu_ptr(entry->pool->acomp_ctx);
|	put_cpu_var(zswap_dstmem);

So here you get the per-CPU version of `dst' and `acomp_ctx' and then allow
preemption again.

|	mutex_lock(&acomp_ctx->mutex);
|
|	src = kmap(page);
|	sg_init_one(&input, src, PAGE_SIZE);
|	/* zswap_dstmem is of size (PAGE_SIZE * 2). Reflect same in sg_list */
|	sg_init_one(&output, dst, PAGE_SIZE * 2);

and here you use `dst' and `acomp_ctx' after the preempt_disable() has
been dropped so I don't understand why you used get_cpu_var(). It is
either protected by the mutex and doesn't require get_cpu_var() or it
isn't (and should have additional protection).

|	acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen);
|	ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
|	dlen = acomp_ctx->req->dlen;
|	kunmap(page);
|
| 	if (ret) {
| 		ret = -EINVAL;
| 		goto put_dstmem;
|

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 4/8] mm/swap: Use local_lock for protection
  2020-05-19 20:19 ` [PATCH 4/8] mm/swap: Use local_lock for protection Sebastian Andrzej Siewior
  2020-05-19 23:58   ` Andrew Morton
@ 2020-05-20 10:53   ` Peter Zijlstra
  1 sibling, 0 replies; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-20 10:53 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Andrew Morton, linux-mm

On Tue, May 19, 2020 at 10:19:08PM +0200, Sebastian Andrzej Siewior wrote:
> diff --git a/mm/swap.c b/mm/swap.c
> index bf9a79fed62d7..03c97d15fcd69 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -44,8 +44,14 @@
>  /* How many pages do we try to swap or page in/out together? */
>  int page_cluster;
>  
> -static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);
> +
> +/* Protecting lru_rotate_pvecs */
> +static DEFINE_LOCAL_LOCK(rotate_lock);
>  static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
> +
> +/* Protecting the following struct pagevec */
> +DEFINE_LOCAL_LOCK(swapvec_lock);
> +static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);
>  static DEFINE_PER_CPU(struct pagevec, lru_deactivate_file_pvecs);
>  static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
>  static DEFINE_PER_CPU(struct pagevec, lru_lazyfree_pvecs);

So personally I'd prefer to have this look like:

struct lru_vecs {
	struct local_lock lock;
	struct pagevec add;
	struct pagevec rotate;
	struct pagevec deact_file;
	struct pagevec deact;
	struct pagevec lazyfree;
#ifdef CONFIG_SMP
	struct pagevec active;
#endif
};

DEFINE_PER_CPU(struct lru_vecs, lru_pvec);

or something, but I realize that is a lot of churn (although highly
automated), so I'll leave that to the mm folks.
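
With a layout like that, the call sites would then take the embedded lock
directly; roughly (a sketch only, based on the struct above):

	struct pagevec *pvec;

	local_lock(&lru_pvec.lock);
	pvec = this_cpu_ptr(&lru_pvec.add);
	/* ... pagevec operations on this CPU ... */
	local_unlock(&lru_pvec.lock);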

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 5/8] squashfs: make use of local lock in multi_cpu decompressor
  2020-05-19 20:19 ` [PATCH 5/8] squashfs: make use of local lock in multi_cpu decompressor Sebastian Andrzej Siewior
@ 2020-05-20 10:56   ` Peter Zijlstra
  0 siblings, 0 replies; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-20 10:56 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Julia Cartwright, Phillip Lougher, Alexander Stein

On Tue, May 19, 2020 at 10:19:09PM +0200, Sebastian Andrzej Siewior wrote:
> diff --git a/fs/squashfs/decompressor_multi_percpu.c b/fs/squashfs/decompressor_multi_percpu.c
> index 2a2a2d106440e..8a77a2741c176 100644
> --- a/fs/squashfs/decompressor_multi_percpu.c
> +++ b/fs/squashfs/decompressor_multi_percpu.c
> @@ -8,6 +8,7 @@
>  #include <linux/slab.h>
>  #include <linux/percpu.h>
>  #include <linux/buffer_head.h>
> +#include <linux/locallock.h>
>  
>  #include "squashfs_fs.h"
>  #include "squashfs_fs_sb.h"
> @@ -23,6 +24,8 @@ struct squashfs_stream {
>  	void		*stream;
>  };
>  
> +static DEFINE_LOCAL_LOCK(stream_lock);

As per the others, you can stick it in struct squashfs_stream.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 6/8] connector/cn_proc: Protect send_msg() with a local lock
  2020-05-19 20:19 ` [PATCH 6/8] connector/cn_proc: Protect send_msg() with a local lock Sebastian Andrzej Siewior
@ 2020-05-20 11:03   ` Peter Zijlstra
  0 siblings, 0 replies; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-20 11:03 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Mike Galbraith, Evgeniy Polyakov, netdev

On Tue, May 19, 2020 at 10:19:10PM +0200, Sebastian Andrzej Siewior wrote:
> @@ -40,10 +41,11 @@ static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
>  
>  /* proc_event_counts is used as the sequence number of the netlink message */
>  static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
> +static DEFINE_LOCAL_LOCK(send_msg_lock);

Put it in a struct ?

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 7/8] zram: Use local lock to protect per-CPU data
  2020-05-19 20:19 ` [PATCH 7/8] zram: Use local lock to protect per-CPU data Sebastian Andrzej Siewior
@ 2020-05-20 11:07   ` Peter Zijlstra
  0 siblings, 0 replies; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-20 11:07 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Mike Galbraith, Minchan Kim, Nitin Gupta, Sergey Senozhatsky

On Tue, May 19, 2020 at 10:19:11PM +0200, Sebastian Andrzej Siewior wrote:
> +static DEFINE_LOCAL_LOCK(zcomp_lock);
> +
>  struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
>  {
> -	return *get_cpu_ptr(comp->stream);
> +	local_lock(zcomp_lock);
> +	return *this_cpu_ptr(comp->stream);
>  }

put it in struct czomp_strm ?

^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [PATCH 8/8] mm/zswap: Use local lock to protect per-CPU data
  2020-05-20 10:26     ` Sebastian Andrzej Siewior
@ 2020-05-20 11:13       ` Song Bao Hua
  2020-05-20 11:57         ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 48+ messages in thread
From: Song Bao Hua @ 2020-05-20 11:13 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Luis Claudio R. Goncalves, Seth Jennings, Dan Streetman,
	Vitaly Wool, Andrew Morton, linux-mm, Linuxarm

> On 2020-05-19 21:46:06 [+0000], Song Bao Hua wrote:
> > Hi Luis,
> > In the below patch, in order to use the acomp APIs to leverage the power of
> hardware compressors. I have moved to mutex:
> > https://marc.info/?l=linux-crypto-vger&m=158941285830302&w=2
> > https://marc.info/?l=linux-crypto-vger&m=158941287930311&w=2
> >
> > so once we get some progress on that one, I guess we don't need a special
> patch for RT any more.
> 
> If you convert this way from the current concept then we could drop it from
> the series.
> The second patch shows the following hunk:
> 
> |@@ -1075,11 +1124,20 @@ static int zswap_frontswap_store(unsigned
> type,
> |pgoff_t offset,
> |
> | 	/* compress */
> | 	dst = get_cpu_var(zswap_dstmem);
> |	acomp_ctx = *this_cpu_ptr(entry->pool->acomp_ctx);
> |	put_cpu_var(zswap_dstmem);
> 
> So here you get per-CPU version of `dst' and `acomp_ctx' and then allow
> preemption again.
> 
> |	mutex_lock(&acomp_ctx->mutex);
> |
> |	src = kmap(page);
> |	sg_init_one(&input, src, PAGE_SIZE);
> |	/* zswap_dstmem is of size (PAGE_SIZE * 2). Reflect same in sg_list */
> |	sg_init_one(&output, dst, PAGE_SIZE * 2);
> 
> and here you use `dst' and `acomp_ctx' after the preempt_disable() has been
> dropped so I don't understand why you used get_cpu_var(). It is either
> protected by the mutex and doesn't require get_cpu_var() or it isn't (and
> should have additional protection).

The old code was like:
For each cpu, there is one per-CPU comp and one per-CPU page buffer for the compression destination - zswap_dstmem.
For example, on cpu1, once you begin to compress, you hold the per-CPU comp and the per-CPU destination buffer. Meanwhile, preemption is disabled, so decompression won't be able to run on the same core in parallel, and two compressions won't be able to run on the same core in parallel either. At the same time, the thread won't be able to migrate to another core. Other cores can still do compression/decompression in parallel.

The new code is like:
For each cpu, there is still one per-CPU acomp-ctx and one per-CPU page buffer for the compression destination. Here acomp replaces comp, and acomp may sleep during compression and decompression.
For example, on cpu1, once you begin to compress, you hold the per-CPU acomp-ctx and the per-CPU destination buffer of CPU1; the code below makes sure you get the acomp and dstmem from the same core by disabling preemption with get_cpu_var and put_cpu_var:
dst = get_cpu_var(zswap_dstmem);
acomp_ctx = *this_cpu_ptr(entry->pool->acomp_ctx);
put_cpu_var(zswap_dstmem);

then there are two cases:

1. after getting dst and acomp_ctx of cpu1, you might keep working on cpu1; the mutex in the per-CPU acomp-ctx will guarantee that two compressions won't run on the same core in parallel, and it also makes certain that compression and decompression won't run on the same core in parallel. Everything is like before.

2. after getting dst and acomp_ctx of cpu1, you might migrate to cpu2, but even if you move to cpu2, you are still holding the mutex of cpu1's acomp-ctx.
If at that time cpu1 has another request to do compression, it will be blocked by the mutex held by cpu2.
If at that time cpu1 wants to do decompression, it will be blocked by the mutex held by cpu2.

Everything is like before. No matter which core you have migrated to, once you hold the mutex of core N, another compression/decompression that wants to take the mutex of core N will be blocked. So mostly, if you have M cores, you can do M compressions/decompressions in parallel, like before.

My basic idea is keeping the work model unchanged like before.

> 
> |	acomp_request_set_params(acomp_ctx->req, &input, &output,
> PAGE_SIZE, dlen);
> |	ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req),
> &acomp_ctx->wait);
> |	dlen = acomp_ctx->req->dlen;
> |	kunmap(page);
> |
> | 	if (ret) {
> | 		ret = -EINVAL;
> | 		goto put_dstmem;
> |
> 
> Sebastian

Barry


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 8/8] mm/zswap: Use local lock to protect per-CPU data
  2020-05-20 11:13       ` Song Bao Hua
@ 2020-05-20 11:57         ` Sebastian Andrzej Siewior
  2020-05-20 12:01           ` Song Bao Hua
  0 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-20 11:57 UTC (permalink / raw)
  To: Song Bao Hua
  Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Luis Claudio R. Goncalves, Seth Jennings, Dan Streetman,
	Vitaly Wool, Andrew Morton, linux-mm, Linuxarm

On 2020-05-20 11:13:31 [+0000], Song Bao Hua wrote:
> For example, on cpu1, once you begin to compress, you hold the percpu acomp-ctx and percpu destination buffer of CPU1, the below code makes sure you get the acomp and dstmem from the same core by disabling preemption with get_cpu_var and put_cpu_var:
> dst = get_cpu_var(zswap_dstmem);
> acomp_ctx = *this_cpu_ptr(entry->pool->acomp_ctx);
> put_cpu_var(zswap_dstmem);
> 
> then there are two cases:
> 
> 1. after getting dst and acomp_ctx of cpu1, you might always work in cpu1, the mutex in per-cpu acomp-ctx will guarantee two compressions won't do at the same core in parallel, and it also makes certain compression and decompression won't do at the same core in parallel. Everything is like before.

For readability I suggest not to mix per-CPU and per-CTX variables like
that. If zswap_dstmem is protected by the mutex, please make it part of
acomp_ctx.
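
Something like the following sketch is what I mean (the field names here
are only illustrative, not taken from your patches):

	struct acomp_ctx {
		struct mutex mutex;	/* serializes use of req and dstmem */
		struct acomp_req *req;
		struct crypto_wait wait;
		u8 *dstmem;		/* moved here, protected by the mutex */
	};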

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [PATCH 8/8] mm/zswap: Use local lock to protect per-CPU data
  2020-05-20 11:57         ` Sebastian Andrzej Siewior
@ 2020-05-20 12:01           ` Song Bao Hua
  0 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua @ 2020-05-20 12:01 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Luis Claudio R. Goncalves, Seth Jennings, Dan Streetman,
	Vitaly Wool, Andrew Morton, linux-mm, Linuxarm

> Subject: Re: [PATCH 8/8] mm/zswap: Use local lock to protect per-CPU data
> 
> On 2020-05-20 11:13:31 [+0000], Song Bao Hua wrote:
> > For example, on cpu1, once you begin to compress, you hold the percpu
> acomp-ctx and percpu destination buffer of CPU1, the below code makes sure
> you get the acomp and dstmem from the same core by disabling preemption
> with get_cpu_var and put_cpu_var:
> > dst = get_cpu_var(zswap_dstmem);
> > acomp_ctx = *this_cpu_ptr(entry->pool->acomp_ctx);
> > put_cpu_var(zswap_dstmem);
> >
> > then there are two cases:
> >
> > 1. after getting dst and acomp_ctx of cpu1, you might always work in cpu1,
> the mutex in per-cpu acomp-ctx will guarantee two compressions won't do at
> the same core in parallel, and it also makes certain compression and
> decompression won't do at the same core in parallel. Everything is like before.
> 
> For readability I suggest not to mix per-CPU and per-CTX variables like that. If
> zswap_dstmem is protected by the mutex, please make it part of acomp_ctx.
> 

Yep, it seems this will spare further explanations to the people who will read the code :-)

> Sebastian

Barry


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 1/8] locking: Introduce local_lock()
  2020-05-19 20:19 ` [PATCH 1/8] locking: " Sebastian Andrzej Siewior
@ 2020-05-20 12:04   ` Peter Zijlstra
  2020-05-22 11:05     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-20 12:04 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds

On Tue, May 19, 2020 at 10:19:05PM +0200, Sebastian Andrzej Siewior wrote:
> +/**
> + * DEFINE_LOCAL_LOCK - Define and initialize a per CPU local lock
> + * @lock:	Name of the lock instance
> + */
> +#define DEFINE_LOCAL_LOCK(lvar)					\
> +	DEFINE_PER_CPU(struct local_lock, lvar) = { INIT_LOCAL_LOCK(lvar) }
> +
> +/**
> + * DECLARE_LOCAL_LOCK - Declare a defined per CPU local lock
> + * @lock:	Name of the lock instance
> + */
> +#define DECLARE_LOCAL_LOCK(lvar)				\
> +	DECLARE_PER_CPU(struct local_lock, lvar)

So I think I'm going to argue having these is a mistake. The local_lock
should be put in a percpu structure along with the data it actually
protects.


> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +# define LL_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }

That wants to be:

	.dep_map = {
		.name = #lockname,
		.wait_type_inner = LD_WAIT_SPIN,
	}

> +#else
> +# define LL_DEP_MAP_INIT(lockname)
> +#endif

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 10:24   ` Peter Zijlstra
@ 2020-05-20 12:06     ` Sebastian Andrzej Siewior
  2020-05-20 13:27       ` Peter Zijlstra
                         ` (2 more replies)
  0 siblings, 3 replies; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-20 12:06 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Lai Jiangshan, Josh Triplett, Mathieu Desnoyers, rcu

On 2020-05-20 12:24:07 [+0200], Peter Zijlstra wrote:
> On Tue, May 19, 2020 at 10:19:07PM +0200, Sebastian Andrzej Siewior wrote:
> 
> > diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> > index 0c71505f0e19c..8d2b5f75145d7 100644
> > --- a/kernel/rcu/srcutree.c
> > +++ b/kernel/rcu/srcutree.c
> > @@ -25,6 +25,7 @@
> >  #include <linux/delay.h>
> >  #include <linux/module.h>
> >  #include <linux/srcu.h>
> > +#include <linux/locallock.h>
> >  
> >  #include "rcu.h"
> >  #include "rcu_segcblist.h"
> > @@ -735,6 +736,7 @@ static void srcu_flip(struct srcu_struct *ssp)
> >  	smp_mb(); /* D */  /* Pairs with C. */
> >  }
> >  
> > +static DEFINE_LOCAL_LOCK(sda_lock);
> >  /*
> >   * If SRCU is likely idle, return true, otherwise return false.
> >   *
> > @@ -765,13 +767,13 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
> >  	unsigned long tlast;
> >  
> >  	/* If the local srcu_data structure has callbacks, not idle.  */
> > -	local_irq_save(flags);
> > +	local_lock_irqsave(sda_lock, flags);
> >  	sdp = this_cpu_ptr(ssp->sda);
> >  	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
> > -		local_irq_restore(flags);
> > +		local_unlock_irqrestore(sda_lock, flags);
> >  		return false; /* Callbacks already present, so not idle. */
> >  	}
> > -	local_irq_restore(flags);
> > +	local_unlock_irqrestore(sda_lock, flags);
> 
> Would it perhaps make sense to stick the local_lock in struct srcu_data ?

In that case we would need something for pointer stability before the
lock is acquired.
I remember Paul looked at that patch a few years ago and he said that
disabling interrupts here is important and matches the other instance
where the interrupts are disabled. Looking at it now, it seems that
there is just pointer stability, but I can't tell if
rcu_segcblist_pend_cbs() needs more than just this.

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 2/8] radix-tree: Use local_lock for protection
  2020-05-20 10:21   ` Peter Zijlstra
@ 2020-05-20 12:28     ` Thomas Gleixner
  0 siblings, 0 replies; 48+ messages in thread
From: Thomas Gleixner @ 2020-05-20 12:28 UTC (permalink / raw)
  To: Peter Zijlstra, Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Paul E . McKenney, Linus Torvalds, Matthew Wilcox, linux-fsdevel

Peter Zijlstra <peterz@infradead.org> writes:
> On Tue, May 19, 2020 at 10:19:06PM +0200, Sebastian Andrzej Siewior wrote:
>> @@ -64,6 +64,7 @@ struct radix_tree_preload {
>>  	struct radix_tree_node *nodes;
>>  };
>>  static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
>> +static DEFINE_LOCAL_LOCK(radix_tree_preloads_lock);
>>  
>>  static inline struct radix_tree_node *entry_to_node(void *ptr)
>>  {
>
> So I'm all with Andrew on the naming and pass-by-pointer thing, but
> also, the above is pretty crap. You want the local_lock to be in the
> structure you're actually protecting, and there really isn't anything
> stopping you from doing that.
>
> The below builds just fine and is ever so much more sensible.

Right you are. It's pretty obvious now that you hit me over the head
with it.

Note to self: Remove the brown paperbag _before_ touching code.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 12:06     ` Sebastian Andrzej Siewior
@ 2020-05-20 13:27       ` Peter Zijlstra
  2020-05-20 17:42       ` Joel Fernandes
  2020-05-20 18:43       ` Paul E. McKenney
  2 siblings, 0 replies; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-20 13:27 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Lai Jiangshan, Josh Triplett, Mathieu Desnoyers, rcu

On Wed, May 20, 2020 at 02:06:08PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-05-20 12:24:07 [+0200], Peter Zijlstra wrote:

> > Would it perhaps make sense to stick the local_lock in struct srcu_data ?
> 
> In that case we would need something for pointer stability before the
> lock is acquired.

Maybe I need sleep; but I think this will work.

 &x->foo = x + foo-offset

 this_cpu_ptr(x) = x + cpu-offset

 &this_cpu_ptr(x)->foo = (x + cpu-offset) + foo-offset
                       = (x + foo-offset) + cpu-offset
                       = this_cpu_ptr(&x->foo)

---
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -13,6 +13,7 @@
 
 #include <linux/rcu_node_tree.h>
 #include <linux/completion.h>
+#include <linux/locallock.h>
 
 struct srcu_node;
 struct srcu_struct;
@@ -22,6 +23,8 @@ struct srcu_struct;
  * to rcu_node.
  */
 struct srcu_data {
+	struct local_lock llock;
+
 	/* Read-side state. */
 	unsigned long srcu_lock_count[2];	/* Locks per CPU. */
 	unsigned long srcu_unlock_count[2];	/* Unlocks per CPU. */
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -778,13 +778,13 @@ static bool srcu_might_be_idle(struct sr
 	unsigned long tlast;
 
 	/* If the local srcu_data structure has callbacks, not idle.  */
-	local_irq_save(flags);
+	local_lock_irqsave(&ssp->sda->llock, flags);
 	sdp = this_cpu_ptr(ssp->sda);
 	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&ssp->sda->llock, flags);
 		return false; /* Callbacks already present, so not idle. */
 	}
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&ssp->sda->llock, flags);
 
 	/*
 	 * No local callbacks, so probabalistically probe global state.
@@ -864,7 +864,7 @@ static void __call_srcu(struct srcu_stru
 	}
 	rhp->func = func;
 	idx = srcu_read_lock(ssp);
-	local_irq_save(flags);
+	local_lock_irqsave(&ssp->sda->llock, flags);
 	sdp = this_cpu_ptr(ssp->sda);
 	spin_lock_rcu_node(sdp);
 	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
@@ -880,7 +880,8 @@ static void __call_srcu(struct srcu_stru
 		sdp->srcu_gp_seq_needed_exp = s;
 		needexp = true;
 	}
-	spin_unlock_irqrestore_rcu_node(sdp, flags);
+	spin_unlock_rcu_node(sdp);
+	local_unlock_irqrestore(&ssp->sda->llock, flags);
 	if (needgp)
 		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
 	else if (needexp)

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 12:06     ` Sebastian Andrzej Siewior
  2020-05-20 13:27       ` Peter Zijlstra
@ 2020-05-20 17:42       ` Joel Fernandes
  2020-05-20 18:28         ` Sebastian Andrzej Siewior
  2020-05-20 18:43       ` Paul E. McKenney
  2 siblings, 1 reply; 48+ messages in thread
From: Joel Fernandes @ 2020-05-20 17:42 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Lai Jiangshan, Josh Triplett, Mathieu Desnoyers, rcu

Hi Sebastian,

On Wed, May 20, 2020 at 02:06:08PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-05-20 12:24:07 [+0200], Peter Zijlstra wrote:
> > On Tue, May 19, 2020 at 10:19:07PM +0200, Sebastian Andrzej Siewior wrote:
> > 
> > > diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> > > index 0c71505f0e19c..8d2b5f75145d7 100644
> > > --- a/kernel/rcu/srcutree.c
> > > +++ b/kernel/rcu/srcutree.c
> > > @@ -25,6 +25,7 @@
> > >  #include <linux/delay.h>
> > >  #include <linux/module.h>
> > >  #include <linux/srcu.h>
> > > +#include <linux/locallock.h>
> > >  
> > >  #include "rcu.h"
> > >  #include "rcu_segcblist.h"
> > > @@ -735,6 +736,7 @@ static void srcu_flip(struct srcu_struct *ssp)
> > >  	smp_mb(); /* D */  /* Pairs with C. */
> > >  }
> > >  
> > > +static DEFINE_LOCAL_LOCK(sda_lock);
> > >  /*
> > >   * If SRCU is likely idle, return true, otherwise return false.
> > >   *
> > > @@ -765,13 +767,13 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
> > >  	unsigned long tlast;
> > >  
> > >  	/* If the local srcu_data structure has callbacks, not idle.  */
> > > -	local_irq_save(flags);
> > > +	local_lock_irqsave(sda_lock, flags);
> > >  	sdp = this_cpu_ptr(ssp->sda);
> > >  	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
> > > -		local_irq_restore(flags);
> > > +		local_unlock_irqrestore(sda_lock, flags);
> > >  		return false; /* Callbacks already present, so not idle. */
> > >  	}
> > > -	local_irq_restore(flags);
> > > +	local_unlock_irqrestore(sda_lock, flags);
> > 
> > Would it perhaps make sense to stick the local_lock in struct srcu_data ?
> 
> In that case we would need something for pointer stability before the
> lock is acquired.

For pointer stability, can we just use get_local_ptr() and put_local_ptr()
instead of adding an extra lock? This keeps the pointer stable while keeping
the section preemptible on -rt. And we already have a lock in rcu_data, I
prefer not to add another lock if possible.

I wrote a diff below with get_local_ptr() (just build tested). Does this
solve your issue?

> I remember Paul looked at that patch a few years ago and he said that
> that disabling interrupts here is important and matches the other part
> instance where the interrupts are disabled. Looking at it now, it seems
> that there is just pointer stability but I can't tell if
> rcu_segcblist_pend_cbs() needs more than just this.

Which 'other part' are you referring to? Your patch removed local_irq_save()
from other places as well right?

thanks,

 - Joel

---8<-----------------------

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 8ff71e5d0fe8b..5f49919205317 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -778,13 +778,17 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
 	unsigned long tlast;
 
 	/* If the local srcu_data structure has callbacks, not idle.  */
-	local_irq_save(flags);
-	sdp = this_cpu_ptr(ssp->sda);
+	sdp = get_local_ptr(ssp->sda);
+	spin_lock_irqsave_rcu_node(sdp, flags);
+
 	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
-		local_irq_restore(flags);
+		spin_unlock_irqrestore_rcu_node(sdp, flags);
+		put_local_ptr(sdp);
 		return false; /* Callbacks already present, so not idle. */
 	}
-	local_irq_restore(flags);
+
+	spin_unlock_irqrestore_rcu_node(sdp, flags);
+	put_local_ptr(sdp);
 
 	/*
 	 * No local callbacks, so probabalistically probe global state.
@@ -864,9 +868,8 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 	}
 	rhp->func = func;
 	idx = srcu_read_lock(ssp);
-	local_irq_save(flags);
-	sdp = this_cpu_ptr(ssp->sda);
-	spin_lock_rcu_node(sdp);
+	sdp = get_local_ptr(ssp->sda);
+	spin_lock_irqsave_rcu_node(sdp, flags);
 	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_gp_seq));
@@ -886,6 +889,8 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 	else if (needexp)
 		srcu_funnel_exp_start(ssp, sdp->mynode, s);
 	srcu_read_unlock(ssp, idx);
+
+	put_local_ptr(sdp);
 }
 
 /**

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 17:42       ` Joel Fernandes
@ 2020-05-20 18:28         ` Sebastian Andrzej Siewior
  2020-05-20 18:35           ` Peter Zijlstra
  2020-05-20 18:59           ` Joel Fernandes
  0 siblings, 2 replies; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-20 18:28 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Lai Jiangshan, Josh Triplett, Mathieu Desnoyers, rcu

On 2020-05-20 13:42:59 [-0400], Joel Fernandes wrote:
> Hi Sebastian,
Hi Joel,

> For pointer stability, can we just use get_local_ptr() and put_local_ptr()
> instead of adding an extra lock? This keeps the pointer stable while keeping
> the section preemptible on -rt. And we already have a lock in rcu_data, I
> prefer not to add another lock if possible.

What is this get_local_ptr() doing? I can't find it anywhere…

> I wrote a diff below with get_local_ptr() (just build tested). Does this
> solve your issue?

see below.

> > I remember Paul looked at that patch a few years ago and he said that
> > that disabling interrupts here is important and matches the other part
> > instance where the interrupts are disabled. Looking at it now, it seems
> > that there is just pointer stability but I can't tell if
> > rcu_segcblist_pend_cbs() needs more than just this.
> 
> Which 'other part' are you referring to? Your patch removed local_irq_save()
> from other places as well right?

The patch converted hunks.

> thanks,
> 
>  - Joel
> 
> ---8<-----------------------
> 
> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index 8ff71e5d0fe8b..5f49919205317 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -778,13 +778,17 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
>  	unsigned long tlast;
>  
>  	/* If the local srcu_data structure has callbacks, not idle.  */
> -	local_irq_save(flags);
> -	sdp = this_cpu_ptr(ssp->sda);
> +	sdp = get_local_ptr(ssp->sda);
> +	spin_lock_irqsave_rcu_node(sdp, flags);

You acquire the node lock which was not acquired before. Is that okay?
How is get_local_ptr() different to raw_cpu_ptr()?

>  	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
> -		local_irq_restore(flags);
> +		spin_unlock_irqrestore_rcu_node(sdp, flags);
> +		put_local_ptr(sdp);
>  		return false; /* Callbacks already present, so not idle. */
>  	}
> -	local_irq_restore(flags);
> +
> +	spin_unlock_irqrestore_rcu_node(sdp, flags);
> +	put_local_ptr(sdp);
>  
>  	/*
>  	 * No local callbacks, so probabalistically probe global state.

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 18:28         ` Sebastian Andrzej Siewior
@ 2020-05-20 18:35           ` Peter Zijlstra
  2020-05-20 18:44             ` Joel Fernandes
  2020-05-20 18:59           ` Joel Fernandes
  1 sibling, 1 reply; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-20 18:35 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Joel Fernandes, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Lai Jiangshan, Josh Triplett, Mathieu Desnoyers, rcu

On Wed, May 20, 2020 at 08:28:00PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-05-20 13:42:59 [-0400], Joel Fernandes wrote:
> > Hi Sebastian,
> Hi Joel,
> 
> > For pointer stability, can we just use get_local_ptr() and put_local_ptr()
> > instead of adding an extra lock? This keeps the pointer stable while keeping
> > the section preemptible on -rt. And we already have a lock in rcu_data, I
> > prefer not to add another lock if possible.
> 
> What is this get_local_ptr() doing? I can't find it anywhere…

I suspect it is ({ preempt_disable(); this_cpu_ptr(ptr); }), or
something along those lines.

But yeah, I can't find it either.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 12:06     ` Sebastian Andrzej Siewior
  2020-05-20 13:27       ` Peter Zijlstra
  2020-05-20 17:42       ` Joel Fernandes
@ 2020-05-20 18:43       ` Paul E. McKenney
  2020-05-22 15:12         ` Sebastian Andrzej Siewior
  2 siblings, 1 reply; 48+ messages in thread
From: Paul E. McKenney @ 2020-05-20 18:43 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

On Wed, May 20, 2020 at 02:06:08PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-05-20 12:24:07 [+0200], Peter Zijlstra wrote:
> > On Tue, May 19, 2020 at 10:19:07PM +0200, Sebastian Andrzej Siewior wrote:
> > 
> > > diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> > > index 0c71505f0e19c..8d2b5f75145d7 100644
> > > --- a/kernel/rcu/srcutree.c
> > > +++ b/kernel/rcu/srcutree.c
> > > @@ -25,6 +25,7 @@
> > >  #include <linux/delay.h>
> > >  #include <linux/module.h>
> > >  #include <linux/srcu.h>
> > > +#include <linux/locallock.h>
> > >  
> > >  #include "rcu.h"
> > >  #include "rcu_segcblist.h"
> > > @@ -735,6 +736,7 @@ static void srcu_flip(struct srcu_struct *ssp)
> > >  	smp_mb(); /* D */  /* Pairs with C. */
> > >  }
> > >  
> > > +static DEFINE_LOCAL_LOCK(sda_lock);
> > >  /*
> > >   * If SRCU is likely idle, return true, otherwise return false.
> > >   *
> > > @@ -765,13 +767,13 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
> > >  	unsigned long tlast;
> > >  
> > >  	/* If the local srcu_data structure has callbacks, not idle.  */
> > > -	local_irq_save(flags);
> > > +	local_lock_irqsave(sda_lock, flags);
> > >  	sdp = this_cpu_ptr(ssp->sda);
> > >  	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
> > > -		local_irq_restore(flags);
> > > +		local_unlock_irqrestore(sda_lock, flags);
> > >  		return false; /* Callbacks already present, so not idle. */
> > >  	}
> > > -	local_irq_restore(flags);
> > > +	local_unlock_irqrestore(sda_lock, flags);
> > 
> > Would it perhaps make sense to stick the local_lock in struct srcu_data ?
> 
> In that case we would need something for pointer stability before the
> lock is acquired.
> I remember Paul looked at that patch a few years ago and he said that
> that disabling interrupts here is important and matches the other part
> instance where the interrupts are disabled. Looking at it now, it seems
> that there is just pointer stability but I can't tell if
> rcu_segcblist_pend_cbs() needs more than just this.

Yes, that CPU's rcu_segcblist structure does need mutual exclusion in
this case.  This is because rcu_segcblist_pend_cbs() looks not just
at the ->tails[] pointer, but also at the pointer referenced by the
->tails[] pointer.  This last pointer is in an rcu_head structure, and
not just any rcu_head structure, but one that is ready to be invoked.
So this callback could vanish into the freelist (or worse) at any time.
But callback invocation runs on the CPU that enqueued the callbacks
(as long as that CPU remains online, anyway), so disabling interrupts
suffices in mainline.

Now, we could have srcu_might_be_idle() instead acquire the sdp->lock
to protect the structure.

What would be really nice is a primitive that acquires such a per-CPU
lock and remains executing on that CPU, whether by the graces of
preempt_disable(), local_irq_save(), migrate_disable(), or what have you.
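
Purely as an illustration (such a primitive does not exist and the names
are invented), a call site could then look like:

	/* Hypothetical: take this CPU's sdp->lock and stay on this CPU. */
	sdp = srcu_data_lock_irqsave(ssp->sda, flags);
	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
		srcu_data_unlock_irqrestore(sdp, flags);
		return false; /* Callbacks already present, so not idle. */
	}
	srcu_data_unlock_irqrestore(sdp, flags);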

							Thanx, Paul

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 18:35           ` Peter Zijlstra
@ 2020-05-20 18:44             ` Joel Fernandes
  2020-05-20 18:50               ` Uladzislau Rezki
  0 siblings, 1 reply; 48+ messages in thread
From: Joel Fernandes @ 2020-05-20 18:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Sebastian Andrzej Siewior, linux-kernel, Ingo Molnar,
	Steven Rostedt, Will Deacon, Thomas Gleixner, Paul E . McKenney,
	Linus Torvalds, Lai Jiangshan, Josh Triplett, Mathieu Desnoyers,
	rcu

On Wed, May 20, 2020 at 08:35:29PM +0200, Peter Zijlstra wrote:
> On Wed, May 20, 2020 at 08:28:00PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2020-05-20 13:42:59 [-0400], Joel Fernandes wrote:
> > > Hi Sebastian,
> > Hi Joel,
> > 
> > > For pointer stability, can we just use get_local_ptr() and put_local_ptr()
> > > instead of adding an extra lock? This keeps the pointer stable while keeping
> > > the section preemptible on -rt. And we already have a lock in rcu_data, I
> > > prefer not to add another lock if possible.
> > 
> > What is this get_local_ptr() doing? I can't find it anywhere…
> 
> I suspect it is ({ preempt_disable(); this_cpu_ptr(ptr); }), or
> something along those lines.
> 
> But yeah, I can't find it either.

I actually found it in the RT 4.4 kernel; I thought this was on newer RT
kernels as well (is that not true anymore?). But yes, it was exactly what
Peter said.

In 4.4 RT there's code similar to:

#ifdef CONFIG_PREEMPT_RT_FULL

# define get_local_ptr(var) ({          \
                migrate_disable();      \
                this_cpu_ptr(var);      })

# define put_local_ptr(var) do {        \
        (void)(var);                    \
        migrate_enable();               \
} while (0)

#else

#define get_local_ptr(var)      get_cpu_ptr(var)
#define put_local_ptr(var)      put_cpu_ptr(var)

#endif


thanks,

 - Joel


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 18:44             ` Joel Fernandes
@ 2020-05-20 18:50               ` Uladzislau Rezki
  0 siblings, 0 replies; 48+ messages in thread
From: Uladzislau Rezki @ 2020-05-20 18:50 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Peter Zijlstra, Sebastian Andrzej Siewior, linux-kernel,
	Ingo Molnar, Steven Rostedt, Will Deacon, Thomas Gleixner,
	Paul E . McKenney, Linus Torvalds, Lai Jiangshan, Josh Triplett,
	Mathieu Desnoyers, rcu

>
> I actually found it in RT 4.4 kernel, I thought this was also on newer RT
> kernels as well (is that not true anymore?). But yes it was exactly what
> Peter said.
> 
I see it also in 5.6.4 linux-rt-devel:

<snip>
#ifdef CONFIG_PREEMPT_RT
...
# define get_local_ptr(var) ({ \
 migrate_disable(); \
 this_cpu_ptr(var); })
...
<snip>

--
Vlad Rezki

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 18:28         ` Sebastian Andrzej Siewior
  2020-05-20 18:35           ` Peter Zijlstra
@ 2020-05-20 18:59           ` Joel Fernandes
  1 sibling, 0 replies; 48+ messages in thread
From: Joel Fernandes @ 2020-05-20 18:59 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Lai Jiangshan, Josh Triplett, Mathieu Desnoyers, rcu

On Wed, May 20, 2020 at 08:28:00PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-05-20 13:42:59 [-0400], Joel Fernandes wrote:
> > Hi Sebastian,
> Hi Joel,
> 
> > For pointer stability, can we just use get_local_ptr() and put_local_ptr()
> > instead of adding an extra lock? This keeps the pointer stable while keeping
> > the section preemptible on -rt. And we already have a lock in rcu_data, I
> > prefer not to add another lock if possible.
> 
> What is this get_local_ptr() doing? I can't find it anywhere…

I replied about it in the other thread.

 
> > > I remember Paul looked at that patch a few years ago and he said that
> > > that disabling interrupts here is important and matches the other part
> > > instance where the interrupts are disabled. Looking at it now, it seems
> > > that there is just pointer stability but I can't tell if
> > > rcu_segcblist_pend_cbs() needs more than just this.
> > 
> > Which 'other part' are you referring to? Your patch removed local_irq_save()
> > from other places as well right?
> 
> The patch converted hunks.
> 

So then there are no other local_irq_save() to match with. Or maybe I did
not understand your concern; could you share any threads from past
discussions about disabling interrupts in this code? You mentioned a
discussion from a few years ago.

> > 
> >  - Joel
> > 
> > ---8<-----------------------
> > 
> > diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> > index 8ff71e5d0fe8b..5f49919205317 100644
> > --- a/kernel/rcu/srcutree.c
> > +++ b/kernel/rcu/srcutree.c
> > @@ -778,13 +778,17 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
> >  	unsigned long tlast;
> >  
> >  	/* If the local srcu_data structure has callbacks, not idle.  */
> > -	local_irq_save(flags);
> > -	sdp = this_cpu_ptr(ssp->sda);
> > +	sdp = get_local_ptr(ssp->sda);
> > +	spin_lock_irqsave_rcu_node(sdp, flags);
> 
> You acquire the node lock which was not acquired before. Is that okay?
> How is get_local_ptr() different to raw_cpu_ptr()?

get_cpu_ptr() disables preemption which you might not want, right?

Most (all?) other paths access the cblist under the lock, so I added it
here to be safe. This is called from a slowpath anyway. Do you see a problem?
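
Just to spell out why I'd rather avoid get_cpu_ptr() here, a sketch of the
problematic pattern (illustrative only, not proposed code; the helper name
is made up):

/*
 * get_cpu_ptr() disables preemption, and on PREEMPT_RT spinlock_t is a
 * sleeping lock, so taking the node lock inside the get/put pair would be
 * "sleeping while atomic".
 */
static bool srcu_pend_cbs_bad_on_rt(struct srcu_struct *ssp)
{
        struct srcu_data *sdp;
        unsigned long flags;
        bool ret;

        sdp = get_cpu_ptr(ssp->sda);            /* preempt_disable() */
        spin_lock_irqsave_rcu_node(sdp, flags); /* sleeps on RT -> splat */
        ret = rcu_segcblist_pend_cbs(&sdp->srcu_cblist);
        spin_unlock_irqrestore_rcu_node(sdp, flags);
        put_cpu_ptr(ssp->sda);                  /* preempt_enable() */

        return ret;
}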

thanks,

 - Joel


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 1/8] locking: Introduce local_lock()
  2020-05-20 12:04   ` Peter Zijlstra
@ 2020-05-22 11:05     ` Sebastian Andrzej Siewior
  2020-05-22 13:01       ` Peter Zijlstra
  0 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-22 11:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds

On 2020-05-20 14:04:50 [+0200], Peter Zijlstra wrote:
> On Tue, May 19, 2020 at 10:19:05PM +0200, Sebastian Andrzej Siewior wrote:
> > +/**
> > + * DEFINE_LOCAL_LOCK - Define and initialize a per CPU local lock
> > + * @lock:	Name of the lock instance
> > + */
> > +#define DEFINE_LOCAL_LOCK(lvar)					\
> > +	DEFINE_PER_CPU(struct local_lock, lvar) = { INIT_LOCAL_LOCK(lvar) }
> > +
> > +/**
> > + * DECLARE_LOCAL_LOCK - Declare a defined per CPU local lock
> > + * @lock:	Name of the lock instance
> > + */
> > +#define DECLARE_LOCAL_LOCK(lvar)				\
> > +	DECLARE_PER_CPU(struct local_lock, lvar)
> 
> So I think I'm going to argue having these is a mistake. The local_lock
> should be put in a percpu structure along with the data it actually
> protects.

So I got rid of these and made the local_lock part of the per-CPU
struct.
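
For reference, roughly the shape that takes, using the spelling the series
eventually settled on in mainline (local_lock_t, INIT_LOCAL_LOCK()); the
struct and function names here are illustrative, not a quote of the next
posting:

struct foo_pcpu {
        local_lock_t            lock;   /* protects only the fields below */
        struct list_head        items;  /* assumed initialised at boot */
};

static DEFINE_PER_CPU(struct foo_pcpu, foo_pcpu) = {
        .lock = INIT_LOCAL_LOCK(lock),
};

static void foo_add(struct list_head *item)
{
        local_lock(&foo_pcpu.lock);
        list_add(item, this_cpu_ptr(&foo_pcpu.items));
        local_unlock(&foo_pcpu.lock);
}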

> > +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> > +# define LL_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
> 
> That wants to be:
> 
> 	.dep_map = {
> 		.name = #lockname,
> 		.wait_type_inner = LD_WAIT_SPIN,

Why LD_WAIT_SPIN and not LD_WAIT_SLEEP? On RT the lock becomes sleeping
and none of the SPIN restrictions apply. 

> 	}
> 
> > +#else
> > +# define LL_DEP_MAP_INIT(lockname)
> > +#endif

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 1/8] locking: Introduce local_lock()
  2020-05-22 11:05     ` Sebastian Andrzej Siewior
@ 2020-05-22 13:01       ` Peter Zijlstra
  0 siblings, 0 replies; 48+ messages in thread
From: Peter Zijlstra @ 2020-05-22 13:01 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds

On Fri, May 22, 2020 at 01:05:56PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-05-20 14:04:50 [+0200], Peter Zijlstra wrote:
> > On Tue, May 19, 2020 at 10:19:05PM +0200, Sebastian Andrzej Siewior wrote:
> > > +/**
> > > + * DEFINE_LOCAL_LOCK - Define and initialize a per CPU local lock
> > > + * @lock:	Name of the lock instance
> > > + */
> > > +#define DEFINE_LOCAL_LOCK(lvar)					\
> > > +	DEFINE_PER_CPU(struct local_lock, lvar) = { INIT_LOCAL_LOCK(lvar) }
> > > +
> > > +/**
> > > + * DECLARE_LOCAL_LOCK - Declare a defined per CPU local lock
> > > + * @lock:	Name of the lock instance
> > > + */
> > > +#define DECLARE_LOCAL_LOCK(lvar)				\
> > > +	DECLARE_PER_CPU(struct local_lock, lvar)
> > 
> > So I think I'm going to argue having these is a mistake. The local_lock
> > should be put in a percpu structure along with the data it actually
> > protects.
> 
> So I got rid of these and made the local_lock part of the per-CPU
> struct.

Great!

> > > +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> > > +# define LL_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
> > 
> > That wants to be:
> > 
> > 	.dep_map = {
> > 		.name = #lockname,
> > 		.wait_type_inner = LD_WAIT_SPIN,
> 
> Why LD_WAIT_SPIN and not LD_WAIT_SLEEP? On RT the lock becomes sleeping
> and none of the SPIN restrictions apply. 

Ah, then it wants to be LD_WAIT_CONFIG. I completely forgot what RT did
here.
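
IOW, something along these lines (sketch; the exact dep_map fields the
final patch sets may differ):

#ifdef CONFIG_DEBUG_LOCK_ALLOC
# define LL_DEP_MAP_INIT(lockname)                      \
        .dep_map = {                                    \
                .name = #lockname,                      \
                .wait_type_inner = LD_WAIT_CONFIG,      \
        }
#else
# define LL_DEP_MAP_INIT(lockname)
#endif

LD_WAIT_CONFIG then gives the "spins on !RT, sleeps on RT" semantics the
lock actually has.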

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-20 18:43       ` Paul E. McKenney
@ 2020-05-22 15:12         ` Sebastian Andrzej Siewior
  2020-05-22 17:39           ` Paul E. McKenney
  0 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-22 15:12 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

On 2020-05-20 11:43:45 [-0700], Paul E. McKenney wrote:
> 
> Yes, that CPU's rcu_segcblist structure does need mutual exclusion in
> this case.  This is because rcu_segcblist_pend_cbs() looks not just
> at the ->tails[] pointer, but also at the pointer referenced by the
> ->tails[] pointer.  This last pointer is in an rcu_head structure, and
> not just any rcu_head structure, but one that is ready to be invoked.
> So this callback could vanish into the freelist (or worse) at any time.
> But callback invocation runs on the CPU that enqueued the callbacks
> (as long as that CPU remains online, anyway), so disabling interrupts
> suffices in mainline.
> 
> Now, we could have srcu_might_be_idle() instead acquire the sdp->lock
> to protect the structure.

Joel suggested that.

> What would be really nice is a primitive that acquires such a per-CPU
> lock and remains executing on that CPU, whether by the graces of
> preempt_disable(), local_irq_save(), migrate_disable(), or what have you.

It depends on what is required. migrate_disable() would limit you to
executing on one CPU but would allow preemption. You would need a lock to
ensure exclusive access to the data structure. preempt_disable() /
local_irq_save() guarantee more than that.

Looking at the two call-sites, there is no damage if there is a CPU
migration after obtaining the per-CPU pointer. There could be a CPU
migration before and after the pointer has been obtained, so the code
before and after this function cannot make any assumptions.

Would something like this work?

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -764,14 +764,15 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
 	unsigned long t;
 	unsigned long tlast;
 
+	check_init_srcu_struct(ssp);
 	/* If the local srcu_data structure has callbacks, not idle.  */
-	local_irq_save(flags);
-	sdp = this_cpu_ptr(ssp->sda);
+	sdp = raw_cpu_ptr(ssp->sda);
+	spin_lock_irqsave_rcu_node(sdp, flags);
 	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
-		local_irq_restore(flags);
+		spin_unlock_irqrestore_rcu_node(sdp, flags);
 		return false; /* Callbacks already present, so not idle. */
 	}
-	local_irq_restore(flags);
+	spin_unlock_irqrestore_rcu_node(sdp, flags);
 
 	/*
 	 * No local callbacks, so probabalistically probe global state.
@@ -851,9 +852,8 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 	}
 	rhp->func = func;
 	idx = srcu_read_lock(ssp);
-	local_irq_save(flags);
-	sdp = this_cpu_ptr(ssp->sda);
-	spin_lock_rcu_node(sdp);
+	sdp = raw_cpu_ptr(ssp->sda);
+	spin_lock_irqsave_rcu_node(sdp, flags);
 	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_gp_seq));


That check_init_srcu_struct() is needed, because otherwise:

| BUG: spinlock bad magic on CPU#2, swapper/0/1
|  lock: 0xffff88803ed28ac0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
| CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.7.0-rc6+ #81
| Call Trace:
|  dump_stack+0x71/0xa0
|  do_raw_spin_lock+0x6c/0xb0
|  _raw_spin_lock_irqsave+0x33/0x40
|  synchronize_srcu+0x24/0xc9
|  wakeup_source_remove+0x4d/0x70
|  wakeup_source_unregister.part.0+0x9/0x40
|  device_wakeup_enable+0x99/0xc0

I'm not sure if there should be an explicit init of `wakeup_srcu' or if
an srcu function (like call_srcu()) is supposed to do it.

> 							Thanx, Paul

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-22 15:12         ` Sebastian Andrzej Siewior
@ 2020-05-22 17:39           ` Paul E. McKenney
  2020-05-23 15:08             ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 48+ messages in thread
From: Paul E. McKenney @ 2020-05-22 17:39 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

On Fri, May 22, 2020 at 05:12:55PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-05-20 11:43:45 [-0700], Paul E. McKenney wrote:
> > 
> > Yes, that CPU's rcu_segcblist structure does need mutual exclusion in
> > this case.  This is because rcu_segcblist_pend_cbs() looks not just
> > at the ->tails[] pointer, but also at the pointer referenced by the
> > ->tails[] pointer.  This last pointer is in an rcu_head structure, and
> > not just any rcu_head structure, but one that is ready to be invoked.
> > So this callback could vanish into the freelist (or worse) at any time.
> > But callback invocation runs on the CPU that enqueued the callbacks
> > (as long as that CPU remains online, anyway), so disabling interrupts
> > suffices in mainline.
> > 
> > Now, we could have srcu_might_be_idle() instead acquire the sdp->lock
> > to protect the structure.
> 
> Joel suggested that.

Good!

> > What would be really nice is a primitive that acquires such a per-CPU
> > lock and remains executing on that CPU, whether by the graces of
> > preempt_disable(), local_irq_save(), migrate_disable(), or what have you.
> 
> It depends on what is required. migrate_disable() would limit you to
> executing on one CPU but would allow preemption. You would need a lock to
> ensure exclusive access to the data structure. preempt_disable() /
> local_irq_save() guarantee more than that.
> 
> Looking at the two call-sites, there is no damage if there is a CPU
> migration after obtaining the per-CPU pointer. There could be a CPU
> migration before and after the pointer has been obtained, so the code
> before and after this function cannot make any assumptions.
> 
> Would something like this work?

It looks good to me, but I have not yet tested it.  (Happy to let you
take the first crack at rcutorture in any case, scenarios SRCU-P and
SRCU-N.)

> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -764,14 +764,15 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
>  	unsigned long t;
>  	unsigned long tlast;
>  
> +	check_init_srcu_struct(ssp);
>  	/* If the local srcu_data structure has callbacks, not idle.  */
> -	local_irq_save(flags);
> -	sdp = this_cpu_ptr(ssp->sda);
> +	sdp = raw_cpu_ptr(ssp->sda);
> +	spin_lock_irqsave_rcu_node(sdp, flags);
>  	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
> -		local_irq_restore(flags);
> +		spin_unlock_irqrestore_rcu_node(sdp, flags);
>  		return false; /* Callbacks already present, so not idle. */
>  	}
> -	local_irq_restore(flags);
> +	spin_unlock_irqrestore_rcu_node(sdp, flags);
>  
>  	/*
>  	 * No local callbacks, so probabalistically probe global state.
> @@ -851,9 +852,8 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
>  	}
>  	rhp->func = func;
>  	idx = srcu_read_lock(ssp);
> -	local_irq_save(flags);
> -	sdp = this_cpu_ptr(ssp->sda);
> -	spin_lock_rcu_node(sdp);
> +	sdp = raw_cpu_ptr(ssp->sda);
> +	spin_lock_irqsave_rcu_node(sdp, flags);
>  	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
>  	rcu_segcblist_advance(&sdp->srcu_cblist,
>  			      rcu_seq_current(&ssp->srcu_gp_seq));
> 
> 
> That check_init_srcu_struct() is needed, because otherwise:
> 
> | BUG: spinlock bad magic on CPU#2, swapper/0/1
> |  lock: 0xffff88803ed28ac0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
> | CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.7.0-rc6+ #81
> | Call Trace:
> |  dump_stack+0x71/0xa0
> |  do_raw_spin_lock+0x6c/0xb0
> |  _raw_spin_lock_irqsave+0x33/0x40
> |  synchronize_srcu+0x24/0xc9
> |  wakeup_source_remove+0x4d/0x70
> |  wakeup_source_unregister.part.0+0x9/0x40
> |  device_wakeup_enable+0x99/0xc0
> 
> I'm not sure if there should be an explicit init of `wakeup_srcu' or if
> an srcu function (like call_srcu()) is supposed to do it.

It is fine.  Beforehand, that check_init_srcu_struct() would have been
invoked very shortly thereafter from __call_srcu(), and there is no
harm in invoking it a few microseconds earlier instead.  ;-)

 							Thanx, Paul

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-22 17:39           ` Paul E. McKenney
@ 2020-05-23 15:08             ` Sebastian Andrzej Siewior
  2020-05-23 16:59               ` Paul E. McKenney
  2020-05-24 19:03               ` Sebastian Andrzej Siewior
  0 siblings, 2 replies; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-23 15:08 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

On 2020-05-22 10:39:53 [-0700], Paul E. McKenney wrote:
> It looks good to me, but I have not yet tested it.  (Happy to let you
> take the first crack at rcutorture in any case, scenarios SRCU-P and
> SRCU-N.)

on it.

> > That check_init_srcu_struct() is needed, because otherwise:
> > 
> > | BUG: spinlock bad magic on CPU#2, swapper/0/1
> > |  lock: 0xffff88803ed28ac0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
> > | CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.7.0-rc6+ #81
> > | Call Trace:
> > |  dump_stack+0x71/0xa0
> > |  do_raw_spin_lock+0x6c/0xb0
> > |  _raw_spin_lock_irqsave+0x33/0x40
> > |  synchronize_srcu+0x24/0xc9
> > |  wakeup_source_remove+0x4d/0x70
> > |  wakeup_source_unregister.part.0+0x9/0x40
> > |  device_wakeup_enable+0x99/0xc0
> > 
> > I'm not sure if there should be an explicit init of `wakeup_srcu' or if
> > an srcu function (like call_srcu()) is supposed to do it.
> 
> It is fine.  Beforehand, that check_init_srcu_struct() would have been
> invoked very shortly thereafter from __call_srcu(), and there is no
> instead harm invoking it a few microseconds earlier.  ;-)

Okay. I wasn't sure if an explicit initialization on wakeup_srcu's side was
missing, or if this is new because we now use the lock earlier.

>  							Thanx, Paul

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-23 15:08             ` Sebastian Andrzej Siewior
@ 2020-05-23 16:59               ` Paul E. McKenney
  2020-05-24 19:03               ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 48+ messages in thread
From: Paul E. McKenney @ 2020-05-23 16:59 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

On Sat, May 23, 2020 at 05:08:31PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-05-22 10:39:53 [-0700], Paul E. McKenney wrote:
> > It looks good to me, but I have not yet tested it.  (Happy to let you
> > take the first crack at rcutorture in any case, scenarios SRCU-P and
> > SRCU-N.)
> 
> on it.
> 
> > > That check_init_srcu_struct() is needed, because otherwise:
> > > 
> > > | BUG: spinlock bad magic on CPU#2, swapper/0/1
> > > |  lock: 0xffff88803ed28ac0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
> > > | CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.7.0-rc6+ #81
> > > | Call Trace:
> > > |  dump_stack+0x71/0xa0
> > > |  do_raw_spin_lock+0x6c/0xb0
> > > |  _raw_spin_lock_irqsave+0x33/0x40
> > > |  synchronize_srcu+0x24/0xc9
> > > |  wakeup_source_remove+0x4d/0x70
> > > |  wakeup_source_unregister.part.0+0x9/0x40
> > > |  device_wakeup_enable+0x99/0xc0
> > > 
> > > I'm not sure if there should be an explicit init of `wakeup_srcu' or if
> > > an srcu function (like call_srcu()) is supposed to do it.
> > 
> > It is fine.  Beforehand, that check_init_srcu_struct() would have been
> > invoked very shortly thereafter from __call_srcu(), and there is no
> > harm in invoking it a few microseconds earlier instead.  ;-)
> 
> Okay. I wasn't sure if an explicit initialization on wakeup_srcu's side was
> missing, or if this is new because we now use the lock earlier.

If you want to worry, the cause for concern that comes to mind is lock
contention.  Except that this is a per-CPU lock, and it isn't acquired
all that often.  And when it is acquired often (call_srcu() floods, for
example), it is acquired on the CPU in question.  So seems unlikely.

But should lock contention nevertheless become a problem:

1.	Tighten up any acquisitions to avoid unnecessary off-CPU
	acquisition of this lock.

2.	Convert this particular lock acquisition into a trylock,
	and make trylock failure return such that a non-expedited
	grace period results.

But again, I don't see the need for these at the moment.
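
Should contention ever show up, option 2 could look roughly like the sketch
below -- assuming a trylock flavor of the _rcu_node() helpers, which would
need to be added if it does not already exist -- with trylock failure
reported as "not idle" so that synchronize_srcu() falls back to the normal,
non-expedited grace period:

	/* Hypothetical sketch of the srcu_might_be_idle() check, not a
	 * patch.  spin_trylock_irqsave_rcu_node() is assumed here. */
	sdp = raw_cpu_ptr(ssp->sda);
	if (!spin_trylock_irqsave_rcu_node(sdp, flags))
		return false;	/* Contended: pretend not idle, so the
				 * caller takes the non-expedited path. */
	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
		spin_unlock_irqrestore_rcu_node(sdp, flags);
		return false;	/* Callbacks present, so not idle. */
	}
	spin_unlock_irqrestore_rcu_node(sdp, flags);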

							Thanx, Paul

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-23 15:08             ` Sebastian Andrzej Siewior
  2020-05-23 16:59               ` Paul E. McKenney
@ 2020-05-24 19:03               ` Sebastian Andrzej Siewior
  2020-05-25  3:27                 ` Paul E. McKenney
  1 sibling, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-24 19:03 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

On 2020-05-23 17:08:32 [+0200], To Paul E. McKenney wrote:
> On 2020-05-22 10:39:53 [-0700], Paul E. McKenney wrote:
> > It looks good to me, but I have not yet tested it.  (Happy to let you
> > take the first crack at rcutorture in any case, scenarios SRCU-P and
> > SRCU-N.)
> 
> on it.

|tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 32 --duration 240 --configs 8*SRCU-N
|SRCU-N ------- 2127126 GPs (147.717/s) [srcu: g26566592 f0x0 ]
|SRCU-N.2 ------- 2123520 GPs (147.467/s) [srcu: g26563696 f0x0 ]
|SRCU-N.3 ------- 2122181 GPs (147.374/s) [srcu: g26549140 f0x0 ]
|SRCU-N.4 ------- 2126099 GPs (147.646/s) [srcu: g26564232 f0x0 ]
|SRCU-N.5 ------- 2127107 GPs (147.716/s) [srcu: g26590168 f0x0 ]
|SRCU-N.6 ------- 2125489 GPs (147.603/s) [srcu: g26545616 f0x0 ]
|SRCU-N.7 ------- 2128308 GPs (147.799/s) [srcu: g26591524 f0x0 ]
|SRCU-N.8 ------- 2126432 GPs (147.669/s) [srcu: g26586816 f0x0 ]
|
|tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 32 --duration 240 --configs 8*SRCU-P
|SRCU-P ------- 565751 GPs (39.2883/s) [srcud: g5034676 f0x0 ]
|SRCU-P.2 ------- 569508 GPs (39.5492/s) [srcud: g5062872 f0x0 ]
|SRCU-P.3 ------- 574240 GPs (39.8778/s) [srcud: g5098488 f0x0 ]
|SRCU-P.4 ------- 564376 GPs (39.1928/s) [srcud: g5031244 f0x0 ]
|SRCU-P.5 ------- 563705 GPs (39.1462/s) [srcud: g5024720 f0x0 ]
|SRCU-P.6 ------- 565043 GPs (39.2391/s) [srcud: g5030272 f0x0 ]
|SRCU-P.7 ------- 567450 GPs (39.4062/s) [srcud: g5046392 f0x0 ]
|SRCU-P.8 ------- 566496 GPs (39.34/s) [srcud: g5039396 f0x0 ]

Results at
   https://breakpoint.cc/res.tar.xz

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access
  2020-05-24 19:03               ` Sebastian Andrzej Siewior
@ 2020-05-25  3:27                 ` Paul E. McKenney
  2020-05-26 13:41                   ` [PATCH] srcu: Avoid local_irq_save() before acquiring spinlock_t Sebastian Andrzej Siewior
  0 siblings, 1 reply; 48+ messages in thread
From: Paul E. McKenney @ 2020-05-25  3:27 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

On Sun, May 24, 2020 at 09:03:56PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-05-23 17:08:32 [+0200], To Paul E. McKenney wrote:
> > On 2020-05-22 10:39:53 [-0700], Paul E. McKenney wrote:
> > > It looks good to me, but I have not yet tested it.  (Happy to let you
> > > take the first crack at rcutorture in any case, scenarios SRCU-P and
> > > SRCU-N.)
> > 
> > on it.
> 
> |tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 32 --duration 240 --configs 8*SRCU-N
> |SRCU-N ------- 2127126 GPs (147.717/s) [srcu: g26566592 f0x0 ]
> |SRCU-N.2 ------- 2123520 GPs (147.467/s) [srcu: g26563696 f0x0 ]
> |SRCU-N.3 ------- 2122181 GPs (147.374/s) [srcu: g26549140 f0x0 ]
> |SRCU-N.4 ------- 2126099 GPs (147.646/s) [srcu: g26564232 f0x0 ]
> |SRCU-N.5 ------- 2127107 GPs (147.716/s) [srcu: g26590168 f0x0 ]
> |SRCU-N.6 ------- 2125489 GPs (147.603/s) [srcu: g26545616 f0x0 ]
> |SRCU-N.7 ------- 2128308 GPs (147.799/s) [srcu: g26591524 f0x0 ]
> |SRCU-N.8 ------- 2126432 GPs (147.669/s) [srcu: g26586816 f0x0 ]
> |
> |tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 32 --duration 240 --configs 8*SRCU-P
> |SRCU-P ------- 565751 GPs (39.2883/s) [srcud: g5034676 f0x0 ]
> |SRCU-P.2 ------- 569508 GPs (39.5492/s) [srcud: g5062872 f0x0 ]
> |SRCU-P.3 ------- 574240 GPs (39.8778/s) [srcud: g5098488 f0x0 ]
> |SRCU-P.4 ------- 564376 GPs (39.1928/s) [srcud: g5031244 f0x0 ]
> |SRCU-P.5 ------- 563705 GPs (39.1462/s) [srcud: g5024720 f0x0 ]
> |SRCU-P.6 ------- 565043 GPs (39.2391/s) [srcud: g5030272 f0x0 ]
> |SRCU-P.7 ------- 567450 GPs (39.4062/s) [srcud: g5046392 f0x0 ]
> |SRCU-P.8 ------- 566496 GPs (39.34/s) [srcud: g5039396 f0x0 ]
> 
> Results at
>    https://breakpoint.cc/res.tar.xz

Looks good, thank you!

							Thanx, Paul

^ permalink raw reply	[flat|nested] 48+ messages in thread

* [PATCH] srcu: Avoid local_irq_save() before acquiring spinlock_t
  2020-05-25  3:27                 ` Paul E. McKenney
@ 2020-05-26 13:41                   ` Sebastian Andrzej Siewior
  2020-05-26 16:16                     ` Paul E. McKenney
  0 siblings, 1 reply; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-26 13:41 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

SRCU disables interrupts to get a stable per-CPU pointer and then
acquires the spinlock which is in the per-CPU data structure. The
release uses spin_unlock_irqrestore(). While this is correct on a non-RT
kernel, this conflicts with the RT semantics because the spinlock is
converted to a 'sleeping' spinlock. Sleeping locks can obviously not be
acquired with interrupts disabled.

Acquire the per-CPU pointer `ssp->sda' without disabling preemption and
then acquire the spinlock_t of the per-CPU data structure. The lock
will ensure that the data is consistent.
The added check_init_srcu_struct() is now needed because a statically 
defined srcu_struct may remain uninitialized until this point and the
newly introduced locking operation requires an initialized spinlock_t.

This change was tested for four hours with 8*SRCU-N and 8*SRCU-P without
causing any warnings.

Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: rcu@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/rcu/srcutree.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 0c71505f0e19c..9459bca58c380 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -764,14 +764,15 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
 	unsigned long t;
 	unsigned long tlast;
 
+	check_init_srcu_struct(ssp);
 	/* If the local srcu_data structure has callbacks, not idle.  */
-	local_irq_save(flags);
-	sdp = this_cpu_ptr(ssp->sda);
+	sdp = raw_cpu_ptr(ssp->sda);
+	spin_lock_irqsave_rcu_node(sdp, flags);
 	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
-		local_irq_restore(flags);
+		spin_unlock_irqrestore_rcu_node(sdp, flags);
 		return false; /* Callbacks already present, so not idle. */
 	}
-	local_irq_restore(flags);
+	spin_unlock_irqrestore_rcu_node(sdp, flags);
 
 	/*
 	 * No local callbacks, so probabalistically probe global state.
@@ -851,9 +852,8 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 	}
 	rhp->func = func;
 	idx = srcu_read_lock(ssp);
-	local_irq_save(flags);
-	sdp = this_cpu_ptr(ssp->sda);
-	spin_lock_rcu_node(sdp);
+	sdp = raw_cpu_ptr(ssp->sda);
+	spin_lock_irqsave_rcu_node(sdp, flags);
 	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_gp_seq));
-- 
2.27.0.rc0


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* Re: [PATCH] srcu: Avoid local_irq_save() before acquiring spinlock_t
  2020-05-26 13:41                   ` [PATCH] srcu: Avoid local_irq_save() before acquiring spinlock_t Sebastian Andrzej Siewior
@ 2020-05-26 16:16                     ` Paul E. McKenney
  2020-05-26 16:31                       ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 48+ messages in thread
From: Paul E. McKenney @ 2020-05-26 16:16 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

On Tue, May 26, 2020 at 03:41:34PM +0200, Sebastian Andrzej Siewior wrote:
> SRCU disables interrupts to get a stable per-CPU pointer and then
> acquires the spinlock which is in the per-CPU data structure. The
> release uses spin_unlock_irqrestore(). While this is correct on a non-RT
> kernel, this conflicts with the RT semantics because the spinlock is
> converted to a 'sleeping' spinlock. Sleeping locks can obviously not be
> acquired with interrupts disabled.
> 
> Acquire the per-CPU pointer `ssp->sda' without disabling preemption and
> then acquire the spinlock_t of the per-CPU data structure. The lock
> will ensure that the data is consistent.
> The added check_init_srcu_struct() is now needed because a statically 
> defined srcu_struct may remain uninitialized until this point and the
> newly introduced locking operation requires an initialized spinlock_t.
> 
> This change was tested for four hours with 8*SRCU-N and 8*SRCU-P without
> causing any warnings.

Queued, thank you!!!

							Thanx, Paul

> Cc: Lai Jiangshan <jiangshanlai@gmail.com>
> Cc: "Paul E. McKenney" <paulmck@kernel.org>
> Cc: Josh Triplett <josh@joshtriplett.org>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Cc: rcu@vger.kernel.org
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  kernel/rcu/srcutree.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index 0c71505f0e19c..9459bca58c380 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -764,14 +764,15 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
>  	unsigned long t;
>  	unsigned long tlast;
>  
> +	check_init_srcu_struct(ssp);
>  	/* If the local srcu_data structure has callbacks, not idle.  */
> -	local_irq_save(flags);
> -	sdp = this_cpu_ptr(ssp->sda);
> +	sdp = raw_cpu_ptr(ssp->sda);
> +	spin_lock_irqsave_rcu_node(sdp, flags);
>  	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
> -		local_irq_restore(flags);
> +		spin_unlock_irqrestore_rcu_node(sdp, flags);
>  		return false; /* Callbacks already present, so not idle. */
>  	}
> -	local_irq_restore(flags);
> +	spin_unlock_irqrestore_rcu_node(sdp, flags);
>  
>  	/*
>  	 * No local callbacks, so probabalistically probe global state.
> @@ -851,9 +852,8 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
>  	}
>  	rhp->func = func;
>  	idx = srcu_read_lock(ssp);
> -	local_irq_save(flags);
> -	sdp = this_cpu_ptr(ssp->sda);
> -	spin_lock_rcu_node(sdp);
> +	sdp = raw_cpu_ptr(ssp->sda);
> +	spin_lock_irqsave_rcu_node(sdp, flags);
>  	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
>  	rcu_segcblist_advance(&sdp->srcu_cblist,
>  			      rcu_seq_current(&ssp->srcu_gp_seq));
> -- 
> 2.27.0.rc0
> 

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH] srcu: Avoid local_irq_save() before acquiring spinlock_t
  2020-05-26 16:16                     ` Paul E. McKenney
@ 2020-05-26 16:31                       ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 48+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-26 16:31 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Steven Rostedt,
	Will Deacon, Thomas Gleixner, Linus Torvalds, Lai Jiangshan,
	Josh Triplett, Mathieu Desnoyers, rcu

On 2020-05-26 09:16:09 [-0700], Paul E. McKenney wrote:
> Queued, thank you!!!
thank you.

> 							Thanx, Paul

Sebastian

^ permalink raw reply	[flat|nested] 48+ messages in thread

end of thread, other threads:[~2020-05-26 16:32 UTC | newest]

Thread overview: 48+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-19 20:19 [PATCH 0/8] Introduce local_lock() Sebastian Andrzej Siewior
2020-05-19 20:19 ` [PATCH 1/8] locking: " Sebastian Andrzej Siewior
2020-05-20 12:04   ` Peter Zijlstra
2020-05-22 11:05     ` Sebastian Andrzej Siewior
2020-05-22 13:01       ` Peter Zijlstra
2020-05-19 20:19 ` [PATCH 2/8] radix-tree: Use local_lock for protection Sebastian Andrzej Siewior
2020-05-19 20:45   ` Matthew Wilcox
2020-05-19 20:54     ` Steven Rostedt
2020-05-20  2:05       ` Matthew Wilcox
2020-05-20 10:13         ` Sebastian Andrzej Siewior
2020-05-20 10:21   ` Peter Zijlstra
2020-05-20 12:28     ` Thomas Gleixner
2020-05-19 20:19 ` [PATCH 3/8] srcu: Use local_lock() for per-CPU struct srcu_data access Sebastian Andrzej Siewior
2020-05-20 10:24   ` Peter Zijlstra
2020-05-20 12:06     ` Sebastian Andrzej Siewior
2020-05-20 13:27       ` Peter Zijlstra
2020-05-20 17:42       ` Joel Fernandes
2020-05-20 18:28         ` Sebastian Andrzej Siewior
2020-05-20 18:35           ` Peter Zijlstra
2020-05-20 18:44             ` Joel Fernandes
2020-05-20 18:50               ` Uladzislau Rezki
2020-05-20 18:59           ` Joel Fernandes
2020-05-20 18:43       ` Paul E. McKenney
2020-05-22 15:12         ` Sebastian Andrzej Siewior
2020-05-22 17:39           ` Paul E. McKenney
2020-05-23 15:08             ` Sebastian Andrzej Siewior
2020-05-23 16:59               ` Paul E. McKenney
2020-05-24 19:03               ` Sebastian Andrzej Siewior
2020-05-25  3:27                 ` Paul E. McKenney
2020-05-26 13:41                   ` [PATCH] srcu: Avoid local_irq_save() before acquiring spinlock_t Sebastian Andrzej Siewior
2020-05-26 16:16                     ` Paul E. McKenney
2020-05-26 16:31                       ` Sebastian Andrzej Siewior
2020-05-19 20:19 ` [PATCH 4/8] mm/swap: Use local_lock for protection Sebastian Andrzej Siewior
2020-05-19 23:58   ` Andrew Morton
2020-05-20  2:17     ` Matthew Wilcox
2020-05-20 10:53   ` Peter Zijlstra
2020-05-19 20:19 ` [PATCH 5/8] squashfs: make use of local lock in multi_cpu decompressor Sebastian Andrzej Siewior
2020-05-20 10:56   ` Peter Zijlstra
2020-05-19 20:19 ` [PATCH 6/8] connector/cn_proc: Protect send_msg() with a local lock Sebastian Andrzej Siewior
2020-05-20 11:03   ` Peter Zijlstra
2020-05-19 20:19 ` [PATCH 7/8] zram: Use local lock to protect per-CPU data Sebastian Andrzej Siewior
2020-05-20 11:07   ` Peter Zijlstra
2020-05-19 20:19 ` [PATCH 8/8] mm/zswap: " Sebastian Andrzej Siewior
2020-05-19 21:46   ` Song Bao Hua
2020-05-20 10:26     ` Sebastian Andrzej Siewior
2020-05-20 11:13       ` Song Bao Hua
2020-05-20 11:57         ` Sebastian Andrzej Siewior
2020-05-20 12:01           ` Song Bao Hua
