* [PATCH 0/7 v2] Introduce local_lock()
@ 2020-05-24 21:57 Sebastian Andrzej Siewior
  2020-05-24 21:57 ` [PATCH v2 1/7] locking: " Sebastian Andrzej Siewior
                   ` (6 more replies)
  0 siblings, 7 replies; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-24 21:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox

This is v2 of the local_lock() series. The v1 can be found at 

   https://lore.kernel.org/lkml/20200519201912.1564477-1-bigeasy@linutronix.de/

v1…v2:
  - Remove the static initializer so a local_lock is not used as a
    standalone per-CPU variable but as a member of an existing structure
    that is used per CPU.

  - Use LD_WAIT_CONFIG as wait-type in the dep_map.

  - Expect a pointer-like value as argument (same as this_cpu_ptr()).

  - Drop the SRCU patch. A different solution is being worked on.

  - Drop the zswap patch. That code part will be reworked.


preempt_disable() and local_irq_disable/save() are in principle per CPU big
kernel locks. This has several downsides:

  - The protection scope is unknown

  - Violation of protection rules is hard to detect by instrumentation

  - For PREEMPT_RT such sections, unless in low level critical code, can
    violate the preemptibility constraints.

To address this, PREEMPT_RT introduced the concept of local_locks, which are
strictly per CPU.

The lock operations map to preempt_disable(), local_irq_disable/save() and
the enabling counterparts on non RT enabled kernels.

If lockdep is enabled local locks gain a lock map which tracks the usage
context. This will catch cases where an area is protected by
preempt_disable() but the access also happens from interrupt context. Local
locks have identified quite a few such issues over the years; the most
recent example is:

  b7d5dc21072cd ("random: add a spinlock_t to struct batched_entropy")

Aside from the lockdep coverage, this also improves code readability as it
precisely annotates the protection scope.

PREEMPT_RT substitutes these local locks with 'sleeping' spinlocks to
protect such sections while maintaining preemptibility and CPU locality.

The following series introduces the infrastructure, including
documentation, and provides a couple of examples of how local locks are
used to adjust code to be RT ready.
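
As a quick illustration of the v2 usage model (the lock being a member of
an already existing per-CPU structure), here is a minimal sketch; all names
in it are made up and not part of the series:

  #include <linux/locallock.h>
  #include <linux/percpu.h>

  /* Example only: an existing per-CPU structure gains a local_lock member */
  struct foo_pcpu {
          struct local_lock lock;
          unsigned int count;
  };
  static DEFINE_PER_CPU(struct foo_pcpu, foo_pcpu) = {
          .lock = INIT_LOCAL_LOCK(lock),
  };

  static void foo_count_event(void)
  {
          /* !RT: preempt_disable(); RT: acquire the per-CPU spinlock */
          local_lock(&foo_pcpu.lock);
          this_cpu_inc(foo_pcpu.count);
          local_unlock(&foo_pcpu.lock);
  }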

Sebastian



* [PATCH v2 1/7] locking: Introduce local_lock()
  2020-05-24 21:57 [PATCH 0/7 v2] Introduce local_lock() Sebastian Andrzej Siewior
@ 2020-05-24 21:57 ` Sebastian Andrzej Siewior
  2020-05-25  7:01   ` Ingo Molnar
  2020-05-24 21:57 ` [PATCH v2 2/7] radix-tree: Use local_lock for protection Sebastian Andrzej Siewior
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-24 21:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Sebastian Andrzej Siewior

From: Thomas Gleixner <tglx@linutronix.de>

preempt_disable() and local_irq_disable/save() are in principle per CPU big
kernel locks. This has several downsides:

  - The protection scope is unknown

  - Violation of protection rules is hard to detect by instrumentation

  - For PREEMPT_RT such sections, unless in low level critical code, can
    violate the preemptibility constraints.

To address this, PREEMPT_RT introduced the concept of local_locks, which are
strictly per CPU.

The lock operations map to preempt_disable(), local_irq_disable/save() and
the enabling counterparts on non RT enabled kernels.

If lockdep is enabled local locks gain a lock map which tracks the usage
context. This will catch cases where an area is protected by
preempt_disable() but the access also happens from interrupt context. Local
locks have identified quite a few such issues over the years; the most
recent example is:

  b7d5dc21072cd ("random: add a spinlock_t to struct batched_entropy")

Aside from the lockdep coverage, this also improves code readability as it
precisely annotates the protection scope.

PREEMPT_RT substitutes these local locks with 'sleeping' spinlocks to
protect such sections while maintaining preemptibility and CPU locality.

local locks can replace:

  - preempt_enable()/disable() pairs
  - local_irq_disable/enable() pairs
  - local_irq_save/restore() pairs

They are also used to replace code which implicitly disables preemption
like:

  - get_cpu()/put_cpu()
  - get_cpu_var()/put_cpu_var()

with PREEMPT_RT friendly constructs.
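
A hedged before/after sketch of such a conversion (the structure and field
names are made up for illustration; the per-CPU structure is assumed to
gain a 'struct local_lock lock' member):

  /* Before: implicit, scopeless protection */
  struct event_stat *s = &get_cpu_var(event_stat);
  s->nr_events++;
  put_cpu_var(event_stat);

  /* After: named protection scope with lockdep coverage, substitutable
   * by a per-CPU spinlock_t on PREEMPT_RT
   */
  local_lock(&event_stat.lock);
  this_cpu_ptr(&event_stat)->nr_events++;
  local_unlock(&event_stat.lock);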

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 Documentation/locking/locktypes.rst | 215 ++++++++++++++++++++++++++--
 include/linux/locallock.h           |  54 +++++++
 include/linux/locallock_internal.h  |  90 ++++++++++++
 3 files changed, 348 insertions(+), 11 deletions(-)
 create mode 100644 include/linux/locallock.h
 create mode 100644 include/linux/locallock_internal.h

diff --git a/Documentation/locking/locktypes.rst b/Documentation/locking/locktypes.rst
index 09f45ce38d262..1b577a8bf9829 100644
--- a/Documentation/locking/locktypes.rst
+++ b/Documentation/locking/locktypes.rst
@@ -13,6 +13,7 @@ The kernel provides a variety of locking primitives which can be divided
 into two categories:
 
  - Sleeping locks
+ - CPU local locks
  - Spinning locks
 
 This document conceptually describes these lock types and provides rules
@@ -44,9 +45,23 @@ other contexts unless there is no other option.
 
 On PREEMPT_RT kernels, these lock types are converted to sleeping locks:
 
+ - local_lock
  - spinlock_t
  - rwlock_t
 
+
+CPU local locks
+---------------
+
+ - local_lock
+
+On non-PREEMPT_RT kernels, local_lock functions are wrappers around
+preemption and interrupt disabling primitives. Contrary to other locking
+mechanisms, disabling preemption or interrupts is a pure CPU local
+concurrency control mechanism and is not suited for inter-CPU concurrency
+control.
+
+
 Spinning locks
 --------------
 
@@ -67,6 +82,7 @@ Spinning locks implicitly disable preemption and the lock / unlock functions
  _irqsave/restore()   Save and disable / restore interrupt disabled state
  ===================  ====================================================
 
+
 Owner semantics
 ===============
 
@@ -139,6 +155,56 @@ PREEMPT_RT kernels map rw_semaphore to a separate rt_mutex-based
  writer from starving readers.
 
 
+local_lock
+==========
+
+local_lock provides a named scope to critical sections which are protected
+by disabling preemption or interrupts.
+
+On non-PREEMPT_RT kernels local_lock operations map to the preemption and
+interrupt disabling and enabling primitives:
+
+ ======================================  ========================
+ local_lock(&llock)                      preempt_disable()
+ local_unlock(&llock)                    preempt_enable()
+ local_lock_irq(&llock)                  local_irq_disable()
+ local_unlock_irq(&llock)                local_irq_enable()
+ local_lock_irqsave(&llock, flags)       local_irq_save(flags)
+ local_unlock_irqrestore(&llock, flags)  local_irq_restore(flags)
+ ======================================  ========================
+
+The named scope of local_lock has two advantages over the regular
+primitives:
+
+  - The lock name allows static analysis and is also a clear documentation
+    of the protection scope while the regular primitives are scopeless and
+    opaque.
+
+  - If lockdep is enabled the local_lock gains a lockmap which allows
+    validating the correctness of the protection. This can detect cases
+    where e.g. a function using preempt_disable() as protection mechanism
+    is invoked from interrupt or soft-interrupt context. Aside from that,
+    lockdep_assert_held(&llock) works as with any other locking primitive.
+
+local_lock and PREEMPT_RT
+-------------------------
+
+PREEMPT_RT kernels map local_lock to a per-CPU spinlock_t, thus changing
+semantics:
+
+  - All spinlock_t changes also apply to local_lock.
+
+local_lock usage
+----------------
+
+local_lock should be used in situations where disabling preemption or
+interrupts is the appropriate form of concurrency control to protect
+per-CPU data structures on a non PREEMPT_RT kernel.
+
+local_lock is not suitable to protect against preemption or interrupts on a
+PREEMPT_RT kernel due to the PREEMPT_RT specific spinlock_t semantics.
+
+
 raw_spinlock_t and spinlock_t
 =============================
 
@@ -258,10 +324,82 @@ PREEMPT_RT kernels map rwlock_t to a separate rt_mutex-based
 PREEMPT_RT caveats
 ==================
 
+local_lock on RT
+----------------
+
+The mapping of local_lock to spinlock_t on PREEMPT_RT kernels has a few
+implications. For example, on a non-PREEMPT_RT kernel the following code
+sequence works as expected::
+
+  local_lock_irq(&local_lock);
+  raw_spin_lock(&lock);
+
+and is fully equivalent to::
+
+   raw_spin_lock_irq(&lock);
+
+On a PREEMPT_RT kernel this code sequence breaks because local_lock_irq()
+is mapped to a per-CPU spinlock_t which neither disables interrupts nor
+preemption. The following code sequence works correctly on both
+PREEMPT_RT and non-PREEMPT_RT kernels::
+
+  local_lock_irq(&local_lock);
+  spin_lock(&lock);
+
+Another caveat with local locks is that each local_lock has a specific
+protection scope. So the following substitution is wrong::
+
+  func1()
+  {
+    local_irq_save(flags);    -> local_lock_irqsave(&local_lock_1, flags);
+    func3();
+    local_irq_restore(flags); -> local_lock_irqrestore(&local_lock_1, flags);
+  }
+
+  func2()
+  {
+    local_irq_save(flags);    -> local_lock_irqsave(&local_lock_2, flags);
+    func3();
+    local_irq_restore(flags); -> local_lock_irqrestore(&local_lock_2, flags);
+  }
+
+  func3()
+  {
+    lockdep_assert_irqs_disabled();
+    access_protected_data();
+  }
+
+On a non-PREEMPT_RT kernel this works correctly, but on a PREEMPT_RT kernel
+local_lock_1 and local_lock_2 are distinct and cannot serialize the callers
+of func3(). Also the lockdep assert will trigger on a PREEMPT_RT kernel
+because local_lock_irqsave() does not disable interrupts due to the
+PREEMPT_RT-specific semantics of spinlock_t. The correct substitution is::
+
+  func1()
+  {
+    local_irq_save(flags);    -> local_lock_irqsave(&local_lock, flags);
+    func3();
+    local_irq_restore(flags); -> local_lock_irqrestore(&local_lock, flags);
+  }
+
+  func2()
+  {
+    local_irq_save(flags);    -> local_lock_irqsave(&local_lock, flags);
+    func3();
+    local_irq_restore(flags); -> local_lock_irqrestore(&local_lock, flags);
+  }
+
+  func3()
+  {
+    lockdep_assert_held(&local_lock);
+    access_protected_data();
+  }
+
+
 spinlock_t and rwlock_t
 -----------------------
 
-These changes in spinlock_t and rwlock_t semantics on PREEMPT_RT kernels
+The changes in spinlock_t and rwlock_t semantics on PREEMPT_RT kernels
 have a few implications.  For example, on a non-PREEMPT_RT kernel the
 following code sequence works as expected::
 
@@ -282,9 +420,61 @@ local_lock mechanism.  Acquiring the local_lock pins the task to a CPU,
 allowing things like per-CPU interrupt disabled locks to be acquired.
 However, this approach should be used only where absolutely necessary.
 
+A typical scenario is protection of per-CPU variables in thread context::
 
-raw_spinlock_t
---------------
+  struct foo *p = get_cpu_ptr(&var1);
+
+  spin_lock(&p->lock);
+  p->count += this_cpu_read(var2);
+
+This is correct code on a non-PREEMPT_RT kernel, but on a PREEMPT_RT kernel
+this breaks. The PREEMPT_RT-specific change of spinlock_t semantics does
+not allow acquiring p->lock because get_cpu_ptr() implicitly disables
+preemption. The following substitution works on both kernels::
+
+  struct foo *p;
+
+  migrate_disable();
+  p = this_cpu_ptr(&var1);
+  spin_lock(&p->lock);
+  p->count += this_cpu_read(var2);
+
+On a non-PREEMPT_RT kernel migrate_disable() maps to preempt_disable()
+which makes the above code fully equivalent. On a PREEMPT_RT kernel
+migrate_disable() ensures that the task is pinned on the current CPU which
+in turn guarantees that the per-CPU accesses to var1 and var2 stay on
+the same CPU.
+
+The migrate_disable() substitution is not valid for the following
+scenario::
+
+  func()
+  {
+    struct foo *p;
+
+    migrate_disable();
+    p = this_cpu_ptr(&var1);
+    p->val = func2();
+
+While correct on a non-PREEMPT_RT kernel, this breaks on PREEMPT_RT because
+here migrate_disable() does not protect against reentrancy from a
+preempting task. A correct substitution for this case is::
+
+  func()
+  {
+    struct foo *p;
+
+    local_lock(&foo_lock);
+    p = this_cpu_ptr(&var1);
+    p->val = func2();
+
+On a non-PREEMPT_RT kernel this protects against reentrancy by disabling
+preemption. On a PREEMPT_RT kernel this is achieved by acquiring the
+underlying per-CPU spinlock.
+
+
+raw_spinlock_t on RT
+--------------------
 
 Acquiring a raw_spinlock_t disables preemption and possibly also
 interrupts, so the critical section must avoid acquiring a regular
@@ -325,22 +515,25 @@ Lock type nesting rules
 
 The most basic rules are:
 
-  - Lock types of the same lock category (sleeping, spinning) can nest
-    arbitrarily as long as they respect the general lock ordering rules to
-    prevent deadlocks.
+  - Lock types of the same lock category (sleeping, CPU local, spinning)
+    can nest arbitrarily as long as they respect the general lock ordering
+    rules to prevent deadlocks.
 
-  - Sleeping lock types cannot nest inside spinning lock types.
+  - Sleeping lock types cannot nest inside CPU local and spinning lock types.
 
-  - Spinning lock types can nest inside sleeping lock types.
+  - CPU local and spinning lock types can nest inside sleeping lock types.
+
+  - Spinning lock types can nest inside all lock types.
 
 These constraints apply both in PREEMPT_RT and otherwise.
 
 The fact that PREEMPT_RT changes the lock category of spinlock_t and
-rwlock_t from spinning to sleeping means that they cannot be acquired while
-holding a raw spinlock.  This results in the following nesting ordering:
+rwlock_t from spinning to sleeping and substitutes local_lock with a
+per-CPU spinlock_t means that they cannot be acquired while holding a raw
+spinlock.  This results in the following nesting ordering:
 
   1) Sleeping locks
-  2) spinlock_t and rwlock_t
+  2) spinlock_t, rwlock_t, local_lock
   3) raw_spinlock_t and bit spinlocks
 
 Lockdep will complain if these constraints are violated, both in
diff --git a/include/linux/locallock.h b/include/linux/locallock.h
new file mode 100644
index 0000000000000..eec88a8225883
--- /dev/null
+++ b/include/linux/locallock.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_LOCALLOCK_H
+#define _LINUX_LOCALLOCK_H
+
+#include <linux/locallock_internal.h>
+
+/**
+ * local_lock_init - Runtime initialize a lock instance
+ */
+#define local_lock_init(lock)		__local_lock_init(lock)
+
+/**
+ * local_lock - Acquire a per CPU local lock
+ * @lock:	The lock variable
+ */
+#define local_lock(lock)		__local_lock(lock)
+
+/**
+ * local_lock_irq - Acquire a per CPU local lock and disable interrupts
+ * @lock:	The lock variable
+ */
+#define local_lock_irq(lock)		__local_lock_irq(lock)
+
+/**
+ * local_lock_irqsave - Acquire a per CPU local lock, save and disable
+ *			 interrupts
+ * @lock:	The lock variable
+ * @flags:	Storage for interrupt flags
+ */
+#define local_lock_irqsave(lock, flags)				\
+	__local_lock_irqsave(lock, flags)
+
+/**
+ * local_unlock - Release a per CPU local lock
+ * @lock:	The lock variable
+ */
+#define local_unlock(lock)		__local_unlock(lock)
+
+/**
+ * local_unlock_irq - Release a per CPU local lock and enable interrupts
+ * @lock:	The lock variable
+ */
+#define local_unlock_irq(lock)		__local_unlock_irq(lock)
+
+/**
+ * local_unlock_irqrestore - Release a per CPU local lock and restore
+ *			      interrupt flags
+ * @lock:	The lock variable
+ * @flags:      Interrupt flags to restore
+ */
+#define local_unlock_irqrestore(lock, flags)			\
+	__local_unlock_irqrestore(lock, flags)
+
+#endif
diff --git a/include/linux/locallock_internal.h b/include/linux/locallock_internal.h
new file mode 100644
index 0000000000000..5332680b92d66
--- /dev/null
+++ b/include/linux/locallock_internal.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_LOCALLOCK_H
+# error "Do not include directly, include linux/locallock.h"
+#endif
+
+#include <linux/percpu-defs.h>
+#include <linux/lockdep.h>
+
+struct local_lock {
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map	dep_map;
+	struct task_struct	*owner;
+#endif
+};
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define LL_DEP_MAP_INIT(lockname)			\
+	.dep_map = {					\
+		.name = #lockname,			\
+		.wait_type_inner = LD_WAIT_CONFIG,	\
+	}
+#else
+# define LL_DEP_MAP_INIT(lockname)
+#endif
+
+#define INIT_LOCAL_LOCK(lockname)	{ LL_DEP_MAP_INIT(lockname) }
+
+#define __local_lock_init(lock)					\
+do {								\
+	static struct lock_class_key __key;			\
+								\
+	debug_check_no_locks_freed((void *)lock, sizeof(*lock));\
+	lockdep_init_map_wait(&(lock)->dep_map, #lock, &__key, 0, LD_WAIT_CONFIG);\
+} while (0)
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+static inline void local_lock_acquire(struct local_lock *l)
+{
+	lock_map_acquire(&l->dep_map);
+	DEBUG_LOCKS_WARN_ON(l->owner);
+	l->owner = current;
+}
+
+static inline void local_lock_release(struct local_lock *l)
+{
+	DEBUG_LOCKS_WARN_ON(l->owner != current);
+	l->owner = NULL;
+	lock_map_release(&l->dep_map);
+}
+
+#else /* CONFIG_DEBUG_LOCK_ALLOC */
+static inline void local_lock_acquire(struct local_lock *l) { }
+static inline void local_lock_release(struct local_lock *l) { }
+#endif /* !CONFIG_DEBUG_LOCK_ALLOC */
+
+#define __local_lock(lock)					\
+	do {							\
+		preempt_disable();				\
+		local_lock_acquire(this_cpu_ptr(lock));		\
+	} while (0)
+
+#define __local_lock_irq(lock)					\
+	do {							\
+		local_irq_disable();				\
+		local_lock_acquire(this_cpu_ptr(lock));		\
+	} while (0)
+
+#define __local_lock_irqsave(lock, flags)			\
+	do {							\
+		local_irq_save(flags);				\
+		local_lock_acquire(this_cpu_ptr(lock));		\
+	} while (0)
+
+#define __local_unlock(lock)					\
+	do {							\
+		local_lock_release(this_cpu_ptr(lock));		\
+		preempt_enable();				\
+	} while (0)
+
+#define __local_unlock_irq(lock)				\
+	do {							\
+		local_lock_release(this_cpu_ptr(lock));		\
+		local_irq_enable();				\
+	} while (0)
+
+#define __local_unlock_irqrestore(lock, flags)			\
+	do {							\
+		local_lock_release(this_cpu_ptr(lock));		\
+		local_irq_restore(flags);			\
+	} while (0)
-- 
2.27.0.rc0



* [PATCH v2 2/7] radix-tree: Use local_lock for protection
  2020-05-24 21:57 [PATCH 0/7 v2] Introduce local_lock() Sebastian Andrzej Siewior
  2020-05-24 21:57 ` [PATCH v2 1/7] locking: " Sebastian Andrzej Siewior
@ 2020-05-24 21:57 ` Sebastian Andrzej Siewior
  2020-05-25  6:29   ` Ingo Molnar
  2020-05-24 21:57 ` [PATCH v2 3/7] mm/swap: " Sebastian Andrzej Siewior
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-24 21:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Sebastian Andrzej Siewior, linux-fsdevel

The radix-tree and idr preload mechanisms use preempt_disable() to protect
the complete operation between xxx_preload() and xxx_preload_end().

As the code inside the preempt disabled section acquires regular spinlocks,
which are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel and
eventually calls into a memory allocator, this conflicts with the RT
semantics.

Convert it to a local_lock which allows RT kernels to substitute it with
a real per CPU lock. On non RT kernels this maps to preempt_disable() as
before, but also provides lockdep coverage of the critical region.
No functional change.
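
The caller-side pattern does not change; a typical (made-up) user of the
preload API looks like the sketch below, where the section between
idr_preload() and idr_preload_end() is now covered by the named local_lock
instead of a bare preempt_disable():

  /* Example only: install an object into an idr protected by 'lock' */
  static int example_install(struct idr *idr, spinlock_t *lock, void *obj)
  {
          int id;

          idr_preload(GFP_KERNEL);
          spin_lock(lock);        /* regular spinlock inside the section */
          id = idr_alloc(idr, obj, 0, 0, GFP_NOWAIT);
          spin_unlock(lock);
          idr_preload_end();

          return id;
  }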

Cc: Matthew Wilcox <willy@infradead.org>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/idr.h        |  5 +----
 include/linux/radix-tree.h |  6 +-----
 lib/radix-tree.c           | 29 ++++++++++++++++++++++-------
 3 files changed, 24 insertions(+), 16 deletions(-)

diff --git a/include/linux/idr.h b/include/linux/idr.h
index ac6e946b6767b..839da8f2f6f13 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -169,10 +169,7 @@ static inline bool idr_is_empty(const struct idr *idr)
  * Each idr_preload() should be matched with an invocation of this
  * function.  See idr_preload() for details.
  */
-static inline void idr_preload_end(void)
-{
-	preempt_enable();
-}
+void idr_preload_end(void);
 
 /**
  * idr_for_each_entry() - Iterate over an IDR's elements of a given type.
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 63e62372443a5..040b1fd0ab940 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -226,6 +226,7 @@ unsigned int radix_tree_gang_lookup(const struct radix_tree_root *,
 			unsigned int max_items);
 int radix_tree_preload(gfp_t gfp_mask);
 int radix_tree_maybe_preload(gfp_t gfp_mask);
+void radix_tree_preload_end(void);
 void radix_tree_init(void);
 void *radix_tree_tag_set(struct radix_tree_root *,
 			unsigned long index, unsigned int tag);
@@ -243,11 +244,6 @@ unsigned int radix_tree_gang_lookup_tag_slot(const struct radix_tree_root *,
 		unsigned int max_items, unsigned int tag);
 int radix_tree_tagged(const struct radix_tree_root *, unsigned int tag);
 
-static inline void radix_tree_preload_end(void)
-{
-	preempt_enable();
-}
-
 void __rcu **idr_get_free(struct radix_tree_root *root,
 			      struct radix_tree_iter *iter, gfp_t gfp,
 			      unsigned long max);
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 2ee6ae3b0ade0..609aeb900b550 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -20,6 +20,7 @@
 #include <linux/kernel.h>
 #include <linux/kmemleak.h>
 #include <linux/percpu.h>
+#include <linux/locallock.h>
 #include <linux/preempt.h>		/* in_interrupt() */
 #include <linux/radix-tree.h>
 #include <linux/rcupdate.h>
@@ -27,7 +28,6 @@
 #include <linux/string.h>
 #include <linux/xarray.h>
 
-
 /*
  * Radix tree node cache.
  */
@@ -59,11 +59,14 @@ struct kmem_cache *radix_tree_node_cachep;
  * Per-cpu pool of preloaded nodes
  */
 struct radix_tree_preload {
+	struct local_lock lock;
 	unsigned nr;
 	/* nodes->parent points to next preallocated node */
 	struct radix_tree_node *nodes;
 };
-static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
+static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) =
+	{ .lock = INIT_LOCAL_LOCK(lock),
+	  .nr = 0, };
 
 static inline struct radix_tree_node *entry_to_node(void *ptr)
 {
@@ -332,14 +335,14 @@ static __must_check int __radix_tree_preload(gfp_t gfp_mask, unsigned nr)
 	 */
 	gfp_mask &= ~__GFP_ACCOUNT;
 
-	preempt_disable();
+	local_lock(&radix_tree_preloads.lock);
 	rtp = this_cpu_ptr(&radix_tree_preloads);
 	while (rtp->nr < nr) {
-		preempt_enable();
+		local_unlock(&radix_tree_preloads.lock);
 		node = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
 		if (node == NULL)
 			goto out;
-		preempt_disable();
+		local_lock(&radix_tree_preloads.lock);
 		rtp = this_cpu_ptr(&radix_tree_preloads);
 		if (rtp->nr < nr) {
 			node->parent = rtp->nodes;
@@ -381,11 +384,17 @@ int radix_tree_maybe_preload(gfp_t gfp_mask)
 	if (gfpflags_allow_blocking(gfp_mask))
 		return __radix_tree_preload(gfp_mask, RADIX_TREE_PRELOAD_SIZE);
 	/* Preloading doesn't help anything with this gfp mask, skip it */
-	preempt_disable();
+	local_lock(&radix_tree_preloads.lock);
 	return 0;
 }
 EXPORT_SYMBOL(radix_tree_maybe_preload);
 
+void radix_tree_preload_end(void)
+{
+	local_unlock(&radix_tree_preloads.lock);
+}
+EXPORT_SYMBOL(radix_tree_preload_end);
+
 static unsigned radix_tree_load_root(const struct radix_tree_root *root,
 		struct radix_tree_node **nodep, unsigned long *maxindex)
 {
@@ -1470,10 +1479,16 @@ EXPORT_SYMBOL(radix_tree_tagged);
 void idr_preload(gfp_t gfp_mask)
 {
 	if (__radix_tree_preload(gfp_mask, IDR_PRELOAD_SIZE))
-		preempt_disable();
+		local_lock(&radix_tree_preloads.lock);
 }
 EXPORT_SYMBOL(idr_preload);
 
+void idr_preload_end(void)
+{
+	local_unlock(&radix_tree_preloads.lock);
+}
+EXPORT_SYMBOL(idr_preload_end);
+
 void __rcu **idr_get_free(struct radix_tree_root *root,
 			      struct radix_tree_iter *iter, gfp_t gfp,
 			      unsigned long max)
-- 
2.27.0.rc0



* [PATCH v2 3/7] mm/swap: Use local_lock for protection
  2020-05-24 21:57 [PATCH 0/7 v2] Introduce local_lock() Sebastian Andrzej Siewior
  2020-05-24 21:57 ` [PATCH v2 1/7] locking: " Sebastian Andrzej Siewior
  2020-05-24 21:57 ` [PATCH v2 2/7] radix-tree: Use local_lock for protection Sebastian Andrzej Siewior
@ 2020-05-24 21:57 ` Sebastian Andrzej Siewior
  2020-05-25  6:44   ` Ingo Molnar
  2020-05-24 21:57 ` [PATCH v2 4/7] squashfs: make use of local lock in multi_cpu decompressor Sebastian Andrzej Siewior
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-24 21:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Andrew Morton, linux-mm,
	Sebastian Andrzej Siewior

From: Ingo Molnar <mingo@kernel.org>

The various struct pagevec per CPU variables are protected by disabling
either preemption or interrupts across the critical sections. Inside
these sections spinlocks have to be acquired.

These spinlocks are regular spinlock_t types which are converted to
"sleeping" spinlocks on PREEMPT_RT enabled kernels. Obviously sleeping
locks cannot be acquired in preemption or interrupt disabled sections.

local locks provide a trivial way to substitute preempt and interrupt
disable instances. On a non PREEMPT_RT enabled kernel local_lock() maps
to preempt_disable() and local_lock_irq() to local_irq_disable().

Create lru_rotate_pvecs containing the pagevec and the local_lock.
Create lru_pvecs containing the remaining pagevecs and the local_lock.
Add lru_add_drain_cpu_zone() which is used from compact_zone() to avoid
exporting the pvec structure.

Change the relevant call sites to acquire these locks instead of using
preempt_disable() / get_cpu() / get_cpu_var() and local_irq_disable() /
local_irq_save().

There is neither a functional change nor a change in the generated
binary code for non PREEMPT_RT enabled non-debug kernels.

When lockdep is enabled local locks have lockdep maps embedded. These
allow lockdep to validate the protections, i.e. inappropriate usage of a
preemption-only protected section would result in a lockdep warning,
while the same problem would not be noticed with a plain
preempt_disable() based protection.

Local locks also improve readability as they provide a named scope for
the protections, while preempt/interrupt disable are opaque and scopeless.

Finally local locks allow PREEMPT_RT to substitute them with real
locking primitives to ensure the correctness of operation in a fully
preemptible kernel.
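
The "no change in the generated code" statement above follows from the
!PREEMPT_RT, !DEBUG_LOCK_ALLOC expansion of the helpers introduced in
patch 1; roughly (field names as introduced by this patch):

  /* With CONFIG_DEBUG_LOCK_ALLOC=n, struct local_lock is empty and
   * local_lock_acquire()/release() are empty inlines, so effectively:
   */
  local_lock(&lru_pvecs.lock);      /* -> preempt_disable()        */
  local_unlock(&lru_pvecs.lock);    /* -> preempt_enable()         */
  local_lock_irqsave(&lru_rotate_pvecs.lock, flags);
                                    /* -> local_irq_save(flags)    */
  local_unlock_irqrestore(&lru_rotate_pvecs.lock, flags);
                                    /* -> local_irq_restore(flags) */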

[ bigeasy: Adopted to use local_lock ]

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/swap.h |   1 +
 mm/compaction.c      |   6 +--
 mm/swap.c            | 114 +++++++++++++++++++++++++++++--------------
 3 files changed, 79 insertions(+), 42 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index e1bbf7a16b276..25181d2dd0b9f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -337,6 +337,7 @@ extern void activate_page(struct page *);
 extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
+extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
 extern void rotate_reclaimable_page(struct page *page);
 extern void deactivate_file_page(struct page *page);
diff --git a/mm/compaction.c b/mm/compaction.c
index 46f0fcc93081e..c9d659e6a02c5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2243,15 +2243,11 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 		 * would succeed.
 		 */
 		if (cc->order > 0 && last_migrated_pfn) {
-			int cpu;
 			unsigned long current_block_start =
 				block_start_pfn(cc->migrate_pfn, cc->order);
 
 			if (last_migrated_pfn < current_block_start) {
-				cpu = get_cpu();
-				lru_add_drain_cpu(cpu);
-				drain_local_pages(cc->zone);
-				put_cpu();
+				lru_add_drain_cpu_zone(cc->zone);
 				/* No more flushing until we migrate again */
 				last_migrated_pfn = 0;
 			}
diff --git a/mm/swap.c b/mm/swap.c
index bf9a79fed62d7..4f965292044ca 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -35,6 +35,7 @@
 #include <linux/uio.h>
 #include <linux/hugetlb.h>
 #include <linux/page_idle.h>
+#include <linux/locallock.h>
 
 #include "internal.h"
 
@@ -44,14 +45,29 @@
 /* How many pages do we try to swap or page in/out together? */
 int page_cluster;
 
-static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);
-static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
-static DEFINE_PER_CPU(struct pagevec, lru_deactivate_file_pvecs);
-static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
-static DEFINE_PER_CPU(struct pagevec, lru_lazyfree_pvecs);
+/* Protecting lru_rotate_pvecs */
+struct lru_rotate_pvecs {
+	struct local_lock lock;
+	struct pagevec pvec;
+};
+static DEFINE_PER_CPU(struct lru_rotate_pvecs, lru_rotate_pvecs) = {
+	.lock = INIT_LOCAL_LOCK(lock),
+};
+
+/* Protecting the following struct pagevec */
+struct lru_pvecs {
+	struct local_lock lock;
+	struct pagevec lru_add_pvec;
+	struct pagevec lru_deactivate_file_pvecs;
+	struct pagevec lru_deactivate_pvecs;
+	struct pagevec lru_lazyfree_pvecs;
 #ifdef CONFIG_SMP
-static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);
+	struct pagevec activate_page_pvecs;
 #endif
+};
+static DEFINE_PER_CPU(struct lru_pvecs, lru_pvecs) = {
+	.lock = INIT_LOCAL_LOCK(lock),
+};
 
 /*
  * This path almost never happens for VM activity - pages are normally
@@ -254,11 +270,11 @@ void rotate_reclaimable_page(struct page *page)
 		unsigned long flags;
 
 		get_page(page);
-		local_irq_save(flags);
-		pvec = this_cpu_ptr(&lru_rotate_pvecs);
+		local_lock_irqsave(&lru_rotate_pvecs.lock, flags);
+		pvec = this_cpu_ptr(&lru_rotate_pvecs.pvec);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&lru_rotate_pvecs.lock, flags);
 	}
 }
 
@@ -293,7 +309,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 #ifdef CONFIG_SMP
 static void activate_page_drain(int cpu)
 {
-	struct pagevec *pvec = &per_cpu(activate_page_pvecs, cpu);
+	struct pagevec *pvec = &per_cpu(lru_pvecs.activate_page_pvecs, cpu);
 
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, __activate_page, NULL);
@@ -301,19 +317,21 @@ static void activate_page_drain(int cpu)
 
 static bool need_activate_page_drain(int cpu)
 {
-	return pagevec_count(&per_cpu(activate_page_pvecs, cpu)) != 0;
+	return pagevec_count(&per_cpu(lru_pvecs.activate_page_pvecs, cpu)) != 0;
 }
 
 void activate_page(struct page *page)
 {
 	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(activate_page_pvecs);
+		struct pagevec *pvec;
 
+		local_lock(&lru_pvecs.lock);
+		pvec = this_cpu_ptr(&lru_pvecs.activate_page_pvecs);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, __activate_page, NULL);
-		put_cpu_var(activate_page_pvecs);
+		local_unlock(&lru_pvecs.lock);
 	}
 }
 
@@ -335,9 +353,12 @@ void activate_page(struct page *page)
 
 static void __lru_cache_activate_page(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct pagevec *pvec;
 	int i;
 
+	local_lock(&lru_pvecs.lock);
+	pvec = this_cpu_ptr(&lru_pvecs.lru_add_pvec);
+
 	/*
 	 * Search backwards on the optimistic assumption that the page being
 	 * activated has just been added to this pagevec. Note that only
@@ -357,7 +378,7 @@ static void __lru_cache_activate_page(struct page *page)
 		}
 	}
 
-	put_cpu_var(lru_add_pvec);
+	local_unlock(&lru_pvecs.lock);
 }
 
 /*
@@ -404,12 +425,14 @@ EXPORT_SYMBOL(mark_page_accessed);
 
 static void __lru_cache_add(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct pagevec *pvec;
 
+	local_lock(&lru_pvecs.lock);
+	pvec = this_cpu_ptr(&lru_pvecs.lru_add_pvec);
 	get_page(page);
 	if (!pagevec_add(pvec, page) || PageCompound(page))
 		__pagevec_lru_add(pvec);
-	put_cpu_var(lru_add_pvec);
+	local_unlock(&lru_pvecs.lock);
 }
 
 /**
@@ -593,30 +616,30 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
  */
 void lru_add_drain_cpu(int cpu)
 {
-	struct pagevec *pvec = &per_cpu(lru_add_pvec, cpu);
+	struct pagevec *pvec = &per_cpu(lru_pvecs.lru_add_pvec, cpu);
 
 	if (pagevec_count(pvec))
 		__pagevec_lru_add(pvec);
 
-	pvec = &per_cpu(lru_rotate_pvecs, cpu);
+	pvec = &per_cpu(lru_rotate_pvecs.pvec, cpu);
 	if (pagevec_count(pvec)) {
 		unsigned long flags;
 
 		/* No harm done if a racing interrupt already did this */
-		local_irq_save(flags);
+		local_lock_irqsave(&lru_rotate_pvecs.lock, flags);
 		pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&lru_rotate_pvecs.lock, flags);
 	}
 
-	pvec = &per_cpu(lru_deactivate_file_pvecs, cpu);
+	pvec = &per_cpu(lru_pvecs.lru_deactivate_file_pvecs, cpu);
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
 
-	pvec = &per_cpu(lru_deactivate_pvecs, cpu);
+	pvec = &per_cpu(lru_pvecs.lru_deactivate_pvecs, cpu);
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
 
-	pvec = &per_cpu(lru_lazyfree_pvecs, cpu);
+	pvec = &per_cpu(lru_pvecs.lru_lazyfree_pvecs, cpu);
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
 
@@ -641,11 +664,14 @@ void deactivate_file_page(struct page *page)
 		return;
 
 	if (likely(get_page_unless_zero(page))) {
-		struct pagevec *pvec = &get_cpu_var(lru_deactivate_file_pvecs);
+		struct pagevec *pvec;
+
+		local_lock(&lru_pvecs.lock);
+		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate_file_pvecs);
 
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
-		put_cpu_var(lru_deactivate_file_pvecs);
+		local_unlock(&lru_pvecs.lock);
 	}
 }
 
@@ -660,12 +686,14 @@ void deactivate_file_page(struct page *page)
 void deactivate_page(struct page *page)
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(lru_deactivate_pvecs);
+		struct pagevec *pvec;
 
+		local_lock(&lru_pvecs.lock);
+		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate_pvecs);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
-		put_cpu_var(lru_deactivate_pvecs);
+		local_unlock(&lru_pvecs.lock);
 	}
 }
 
@@ -680,21 +708,33 @@ void mark_page_lazyfree(struct page *page)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
+		struct pagevec *pvec;
 
+		local_lock(&lru_pvecs.lock);
+		pvec = this_cpu_ptr(&lru_pvecs.lru_lazyfree_pvecs);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
-		put_cpu_var(lru_lazyfree_pvecs);
+		local_unlock(&lru_pvecs.lock);
 	}
 }
 
 void lru_add_drain(void)
 {
-	lru_add_drain_cpu(get_cpu());
-	put_cpu();
+	local_lock(&lru_pvecs.lock);
+	lru_add_drain_cpu(smp_processor_id());
+	local_unlock(&lru_pvecs.lock);
 }
 
+void lru_add_drain_cpu_zone(struct zone *zone)
+{
+	local_lock(&lru_pvecs.lock);
+	lru_add_drain_cpu(smp_processor_id());
+	drain_local_pages(zone);
+	local_unlock(&lru_pvecs.lock);
+}
+
+
 #ifdef CONFIG_SMP
 
 static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
@@ -743,11 +783,11 @@ void lru_add_drain_all(void)
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
-		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
-		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
-		    pagevec_count(&per_cpu(lru_deactivate_file_pvecs, cpu)) ||
-		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
-		    pagevec_count(&per_cpu(lru_lazyfree_pvecs, cpu)) ||
+		if (pagevec_count(&per_cpu(lru_pvecs.lru_add_pvec, cpu)) ||
+		    pagevec_count(&per_cpu(lru_rotate_pvecs.pvec, cpu)) ||
+		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file_pvecs, cpu)) ||
+		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_pvecs, cpu)) ||
+		    pagevec_count(&per_cpu(lru_pvecs.lru_lazyfree_pvecs, cpu)) ||
 		    need_activate_page_drain(cpu)) {
 			INIT_WORK(work, lru_add_drain_per_cpu);
 			queue_work_on(cpu, mm_percpu_wq, work);
-- 
2.27.0.rc0



* [PATCH v2 4/7] squashfs: make use of local lock in multi_cpu decompressor
  2020-05-24 21:57 [PATCH 0/7 v2] Introduce local_lock() Sebastian Andrzej Siewior
                   ` (2 preceding siblings ...)
  2020-05-24 21:57 ` [PATCH v2 3/7] mm/swap: " Sebastian Andrzej Siewior
@ 2020-05-24 21:57 ` Sebastian Andrzej Siewior
  2020-05-24 21:57 ` [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock Sebastian Andrzej Siewior
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-24 21:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Julia Cartwright, Phillip Lougher,
	Alexander Stein, Sebastian Andrzej Siewior

From: Julia Cartwright <julia@ni.com>

The squashfs multi CPU decompressor makes use of get_cpu_ptr() to
acquire a pointer to per-CPU data. get_cpu_ptr() implicitly disables
preemption which serializes the access to the per-CPU data.

But decompression can take quite some time depending on the size. The
observed preempt disabled times in real world scenarios went up to 8ms,
causing massive wakeup latencies. This happens on all CPUs as the
decompression is fully parallelized.

Replace the implicit preemption control with an explicit local lock.
This allows RT kernels to substitute it with a real per CPU lock, which
serializes the access but keeps the code section preemptible. On non RT
kernels this maps to preempt_disable() as before, i.e. no functional
change.

[ bigeasy: Use local_lock(), patch description]

Cc: Phillip Lougher <phillip@squashfs.org.uk>
Reported-by: Alexander Stein <alexander.stein@systec-electronic.com>
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Alexander Stein <alexander.stein@systec-electronic.com>
---
 fs/squashfs/decompressor_multi_percpu.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/fs/squashfs/decompressor_multi_percpu.c b/fs/squashfs/decompressor_multi_percpu.c
index 2a2a2d106440e..bac511cdebdb0 100644
--- a/fs/squashfs/decompressor_multi_percpu.c
+++ b/fs/squashfs/decompressor_multi_percpu.c
@@ -8,6 +8,7 @@
 #include <linux/slab.h>
 #include <linux/percpu.h>
 #include <linux/buffer_head.h>
+#include <linux/locallock.h>
 
 #include "squashfs_fs.h"
 #include "squashfs_fs_sb.h"
@@ -20,7 +21,8 @@
  */
 
 struct squashfs_stream {
-	void		*stream;
+	void			*stream;
+	struct local_lock	lock;
 };
 
 void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
@@ -41,6 +43,7 @@ void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
 			err = PTR_ERR(stream->stream);
 			goto out;
 		}
+		local_lock_init(&stream->lock);
 	}
 
 	kfree(comp_opts);
@@ -75,12 +78,16 @@ void squashfs_decompressor_destroy(struct squashfs_sb_info *msblk)
 int squashfs_decompress(struct squashfs_sb_info *msblk, struct buffer_head **bh,
 	int b, int offset, int length, struct squashfs_page_actor *output)
 {
-	struct squashfs_stream __percpu *percpu =
-			(struct squashfs_stream __percpu *) msblk->stream;
-	struct squashfs_stream *stream = get_cpu_ptr(percpu);
-	int res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
-		offset, length, output);
-	put_cpu_ptr(stream);
+	struct squashfs_stream *stream;
+	int res;
+
+	local_lock(&msblk->stream->lock);
+	stream = this_cpu_ptr(msblk->stream);
+
+	res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
+			offset, length, output);
+
+	local_unlock(&msblk->stream->lock);
 
 	if (res < 0)
 		ERROR("%s decompression failed, data probably corrupt\n",
-- 
2.27.0.rc0



* [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock
  2020-05-24 21:57 [PATCH 0/7 v2] Introduce local_lock() Sebastian Andrzej Siewior
                   ` (3 preceding siblings ...)
  2020-05-24 21:57 ` [PATCH v2 4/7] squashfs: make use of local lock in multi_cpu decompressor Sebastian Andrzej Siewior
@ 2020-05-24 21:57 ` Sebastian Andrzej Siewior
  2020-05-25  7:18   ` Ingo Molnar
  2020-05-24 21:57 ` [PATCH v2 6/7] zram: Allocate struct zcomp_strm as per-CPU memory Sebastian Andrzej Siewior
  2020-05-24 21:57 ` [PATCH v2 7/7] zram: Use local lock to protect per-CPU data Sebastian Andrzej Siewior
  6 siblings, 1 reply; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-24 21:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Mike Galbraith, Evgeniy Polyakov, netdev,
	Sebastian Andrzej Siewior

From: Mike Galbraith <umgwanakikbuti@gmail.com>

send_msg() disables preemption to avoid out-of-order messages. As the
code inside the preempt disabled section acquires regular spinlocks,
which are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel and
eventually calls into a memory allocator, this conflicts with the RT
semantics.

Convert it to a local_lock which allows RT kernels to substitute it with
a real per CPU lock. On non RT kernels this maps to preempt_disable() as
before. No functional change.

[bigeasy: Patch description]

Cc: Evgeniy Polyakov <zbr@ioremap.net>
Cc: netdev@vger.kernel.org
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/connector/cn_proc.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index d58ce664da843..d424d1f469136 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -18,6 +18,7 @@
 #include <linux/pid_namespace.h>
 
 #include <linux/cn_proc.h>
+#include <linux/locallock.h>
 
 /*
  * Size of a cn_msg followed by a proc_event structure.  Since the
@@ -38,25 +39,32 @@ static inline struct cn_msg *buffer_to_cn_msg(__u8 *buffer)
 static atomic_t proc_event_num_listeners = ATOMIC_INIT(0);
 static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
 
-/* proc_event_counts is used as the sequence number of the netlink message */
-static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
+/* local_evt.counts is used as the sequence number of the netlink message */
+struct local_evt {
+	__u32 counts;
+	struct local_lock lock;
+};
+static DEFINE_PER_CPU(struct local_evt, local_evt) = {
+	.counts = 0,
+	.lock = INIT_LOCAL_LOCK(lock),
+};
 
 static inline void send_msg(struct cn_msg *msg)
 {
-	preempt_disable();
+	local_lock(&local_evt.lock);
 
-	msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
+	msg->seq = __this_cpu_inc_return(local_evt.counts) - 1;
 	((struct proc_event *)msg->data)->cpu = smp_processor_id();
 
 	/*
-	 * Preemption remains disabled during send to ensure the messages are
-	 * ordered according to their sequence numbers.
+	 * local_lock() disables preemption during send to ensure the messages
+	 * are ordered according to their sequence numbers.
 	 *
 	 * If cn_netlink_send() fails, the data is not sent.
 	 */
 	cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT);
 
-	preempt_enable();
+	local_unlock(&local_evt.lock);
 }
 
 void proc_fork_connector(struct task_struct *task)
-- 
2.27.0.rc0



* [PATCH v2 6/7] zram: Allocate struct zcomp_strm as per-CPU memory
  2020-05-24 21:57 [PATCH 0/7 v2] Introduce local_lock() Sebastian Andrzej Siewior
                   ` (4 preceding siblings ...)
  2020-05-24 21:57 ` [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock Sebastian Andrzej Siewior
@ 2020-05-24 21:57 ` Sebastian Andrzej Siewior
  2020-05-25  7:24   ` Ingo Molnar
  2020-05-24 21:57 ` [PATCH v2 7/7] zram: Use local lock to protect per-CPU data Sebastian Andrzej Siewior
  6 siblings, 1 reply; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-24 21:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Sebastian Andrzej Siewior, Minchan Kim,
	Nitin Gupta, Sergey Senozhatsky

zcomp::stream is a per-CPU pointer, pointing to a struct zcomp_strm which
contains two pointers. Having struct zcomp_strm allocated directly as
per-CPU memory avoids one additional memory allocation and a pointer
dereference.
This also simplifies adding a local_lock to struct zcomp_strm.

Allocate zcomp::stream directly as per-CPU memory.
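
For illustration, the shape of the change, condensed from the diff below:

  /* Before: a per-CPU array of pointers, each zcomp_strm kmalloc()ed
   * separately and found via an extra dereference:
   */
  comp->stream = alloc_percpu(struct zcomp_strm *);
  *per_cpu_ptr(comp->stream, cpu) = zcomp_strm_alloc(comp);

  /* After: the zcomp_strm structures themselves are per-CPU memory */
  comp->stream = alloc_percpu(struct zcomp_strm);
  zstrm = per_cpu_ptr(comp->stream, cpu);
  ret = zcomp_strm_alloc(zstrm, comp);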

Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/block/zram/zcomp.c | 34 ++++++++++++++--------------------
 drivers/block/zram/zcomp.h |  2 +-
 2 files changed, 15 insertions(+), 21 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index 1a8564a79d8dc..ae6dc137a1ed8 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -37,19 +37,17 @@ static void zcomp_strm_free(struct zcomp_strm *zstrm)
 	if (!IS_ERR_OR_NULL(zstrm->tfm))
 		crypto_free_comp(zstrm->tfm);
 	free_pages((unsigned long)zstrm->buffer, 1);
-	kfree(zstrm);
+	zstrm->tfm = NULL;
+	zstrm->buffer = NULL;
 }
 
 /*
  * allocate new zcomp_strm structure with ->tfm initialized by
  * backend, return NULL on error
  */
-static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
+static int zcomp_strm_alloc(struct zcomp_strm *zstrm,
+			    struct zcomp *comp)
 {
-	struct zcomp_strm *zstrm = kmalloc(sizeof(*zstrm), GFP_KERNEL);
-	if (!zstrm)
-		return NULL;
-
 	zstrm->tfm = crypto_alloc_comp(comp->name, 0, 0);
 	/*
 	 * allocate 2 pages. 1 for compressed data, plus 1 extra for the
@@ -58,9 +56,9 @@ static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
 	zstrm->buffer = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
 	if (IS_ERR_OR_NULL(zstrm->tfm) || !zstrm->buffer) {
 		zcomp_strm_free(zstrm);
-		zstrm = NULL;
+		return -ENOMEM;
 	}
-	return zstrm;
+	return 0;
 }
 
 bool zcomp_available_algorithm(const char *comp)
@@ -113,7 +111,7 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-	return *get_cpu_ptr(comp->stream);
+	return get_cpu_ptr(comp->stream);
 }
 
 void zcomp_stream_put(struct zcomp *comp)
@@ -159,16 +157,14 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 {
 	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
 	struct zcomp_strm *zstrm;
+	int ret;
 
-	if (WARN_ON(*per_cpu_ptr(comp->stream, cpu)))
-		return 0;
-
-	zstrm = zcomp_strm_alloc(comp);
-	if (IS_ERR_OR_NULL(zstrm)) {
+	zstrm = per_cpu_ptr(comp->stream, cpu);
+	ret = zcomp_strm_alloc(zstrm, comp);
+	if (ret) {
 		pr_err("Can't allocate a compression stream\n");
 		return -ENOMEM;
 	}
-	*per_cpu_ptr(comp->stream, cpu) = zstrm;
 	return 0;
 }
 
@@ -177,10 +173,8 @@ int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
 	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
 	struct zcomp_strm *zstrm;
 
-	zstrm = *per_cpu_ptr(comp->stream, cpu);
-	if (!IS_ERR_OR_NULL(zstrm))
-		zcomp_strm_free(zstrm);
-	*per_cpu_ptr(comp->stream, cpu) = NULL;
+	zstrm = per_cpu_ptr(comp->stream, cpu);
+	zcomp_strm_free(zstrm);
 	return 0;
 }
 
@@ -188,7 +182,7 @@ static int zcomp_init(struct zcomp *comp)
 {
 	int ret;
 
-	comp->stream = alloc_percpu(struct zcomp_strm *);
+	comp->stream = alloc_percpu(struct zcomp_strm);
 	if (!comp->stream)
 		return -ENOMEM;
 
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index 1806475b919df..72c2ee4d843ed 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -14,7 +14,7 @@ struct zcomp_strm {
 
 /* dynamic per-device compression frontend */
 struct zcomp {
-	struct zcomp_strm * __percpu *stream;
+	struct zcomp_strm __percpu *stream;
 	const char *name;
 	struct hlist_node node;
 };
-- 
2.27.0.rc0



* [PATCH v2 7/7] zram: Use local lock to protect per-CPU data
  2020-05-24 21:57 [PATCH 0/7 v2] Introduce local_lock() Sebastian Andrzej Siewior
                   ` (5 preceding siblings ...)
  2020-05-24 21:57 ` [PATCH v2 6/7] zram: Allocate struct zcomp_strm as per-CPU memory Sebastian Andrzej Siewior
@ 2020-05-24 21:57 ` Sebastian Andrzej Siewior
  2020-05-25  7:26   ` Ingo Molnar
  6 siblings, 1 reply; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-24 21:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Mike Galbraith, Minchan Kim, Nitin Gupta,
	Sergey Senozhatsky, Sebastian Andrzej Siewior

From: Mike Galbraith <umgwanakikbuti@gmail.com>

The zcomp driver uses per-CPU compression. The per-CPU data pointer is
acquired with get_cpu_ptr() which implicitly disables preemption.
It allocates memory inside the preempt disabled region which conflicts
with the PREEMPT_RT semantics.

Replace the implicit preemption control with an explicit local lock.
This allows RT kernels to substitute it with a real per CPU lock, which
serializes the access but keeps the code section preemptible. On non RT
kernels this maps to preempt_disable() as before, i.e. no functional
change.

[bigeasy: Use local_lock(), description, drop reordering]

Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/block/zram/zcomp.c | 17 ++++++++++-------
 drivers/block/zram/zcomp.h |  2 ++
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index ae6dc137a1ed8..fa3485309735e 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -42,11 +42,11 @@ static void zcomp_strm_free(struct zcomp_strm *zstrm)
 }
 
 /*
- * allocate new zcomp_strm structure with ->tfm initialized by
- * backend, return NULL on error
+ * Initialize zcomp_strm structure with ->tfm initialized by
+ * backend, and ->buffer. Return a negative value on error.
  */
-static int zcomp_strm_alloc(struct zcomp_strm *zstrm,
-			    struct zcomp *comp)
+static int zcomp_strm_init(struct zcomp_strm *zstrm,
+			   struct zcomp *comp)
 {
 	zstrm->tfm = crypto_alloc_comp(comp->name, 0, 0);
 	/*
@@ -111,12 +111,13 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-	return get_cpu_ptr(comp->stream);
+	local_lock(&comp->stream->lock);
+	return this_cpu_ptr(comp->stream);
 }
 
 void zcomp_stream_put(struct zcomp *comp)
 {
-	put_cpu_ptr(comp->stream);
+	local_unlock(&comp->stream->lock);
 }
 
 int zcomp_compress(struct zcomp_strm *zstrm,
@@ -160,7 +161,9 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 	int ret;
 
 	zstrm = per_cpu_ptr(comp->stream, cpu);
-	ret = zcomp_strm_alloc(zstrm, comp);
+	local_lock_init(&zstrm->lock);
+
+	ret = zcomp_strm_init(zstrm, comp);
 	if (ret) {
 		pr_err("Can't allocate a compression stream\n");
 		return -ENOMEM;
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index 72c2ee4d843ed..45c4c1858e5a9 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -5,11 +5,13 @@
 
 #ifndef _ZCOMP_H_
 #define _ZCOMP_H_
+#include <linux/locallock.h>
 
 struct zcomp_strm {
 	/* compression/decompression buffer */
 	void *buffer;
 	struct crypto_comp *tfm;
+	struct local_lock lock;
 };
 
 /* dynamic per-device compression frontend */
-- 
2.27.0.rc0



* Re: [PATCH v2 2/7] radix-tree: Use local_lock for protection
  2020-05-24 21:57 ` [PATCH v2 2/7] radix-tree: Use local_lock for protection Sebastian Andrzej Siewior
@ 2020-05-25  6:29   ` Ingo Molnar
  2020-05-25 11:11     ` Matthew Wilcox
  2020-05-25 11:17     ` Sebastian Andrzej Siewior
  0 siblings, 2 replies; 24+ messages in thread
From: Ingo Molnar @ 2020-05-25  6:29 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, linux-fsdevel


* Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> The radix-tree and idr preload mechanisms use preempt_disable() to protect
> the complete operation between xxx_preload() and xxx_preload_end().
> 
> As the code inside the preempt disabled section acquires regular spinlocks,
> which are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel and
> eventually calls into a memory allocator, this conflicts with the RT
> semantics.
> 
> Convert it to a local_lock which allows RT kernels to substitute them with
> a real per CPU lock. On non RT kernels this maps to preempt_disable() as
> before, but provides also lockdep coverage of the critical region.
> No functional change.
> 
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: linux-fsdevel@vger.kernel.org
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  include/linux/idr.h        |  5 +----
>  include/linux/radix-tree.h |  6 +-----
>  lib/radix-tree.c           | 29 ++++++++++++++++++++++-------
>  3 files changed, 24 insertions(+), 16 deletions(-)

> -static inline void idr_preload_end(void)
> -{
> -	preempt_enable();
> -}
> +void idr_preload_end(void);

> +void idr_preload_end(void)
> +{
> +	local_unlock(&radix_tree_preloads.lock);
> +}
> +EXPORT_SYMBOL(idr_preload_end);

> +void radix_tree_preload_end(void);

> -static inline void radix_tree_preload_end(void)
> -{
> -	preempt_enable();
> -}

> +void radix_tree_preload_end(void)
> +{
> +	local_unlock(&radix_tree_preloads.lock);
> +}
> +EXPORT_SYMBOL(radix_tree_preload_end);

Since upstream we are still mapping the local_lock primitives to
preempt_disable()/preempt_enable(), I believe these uninlining changes should not be done
in this patch, i.e. idr_preload_end() and radix_tree_preload_end() should stay inline.

Thanks,

	Ingo


* Re: [PATCH v2 3/7] mm/swap: Use local_lock for protection
  2020-05-24 21:57 ` [PATCH v2 3/7] mm/swap: " Sebastian Andrzej Siewior
@ 2020-05-25  6:44   ` Ingo Molnar
  2020-05-25 17:07     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 24+ messages in thread
From: Ingo Molnar @ 2020-05-25  6:44 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Andrew Morton, linux-mm



* Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> From: Ingo Molnar <mingo@kernel.org>
> 
> The various struct pagevec per CPU variables are protected by disabling
> either preemption or interrupts across the critical sections. Inside
> these sections spinlocks have to be acquired.

> diff --git a/mm/swap.c b/mm/swap.c
> index bf9a79fed62d7..4f965292044ca 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -35,6 +35,7 @@
>  #include <linux/uio.h>
>  #include <linux/hugetlb.h>
>  #include <linux/page_idle.h>
> +#include <linux/locallock.h>
>  
>  #include "internal.h"
>  
> @@ -44,14 +45,29 @@
>  /* How many pages do we try to swap or page in/out together? */
>  int page_cluster;
>  
> -static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);
> -static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
> -static DEFINE_PER_CPU(struct pagevec, lru_deactivate_file_pvecs);
> -static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
> -static DEFINE_PER_CPU(struct pagevec, lru_lazyfree_pvecs);
> +/* Protecting lru_rotate_pvecs */
> +struct lru_rotate_pvecs {
> +	struct local_lock lock;
> +	struct pagevec pvec;
> +};
> +static DEFINE_PER_CPU(struct lru_rotate_pvecs, lru_rotate_pvecs) = {
> +	.lock = INIT_LOCAL_LOCK(lock),
> +};
> +
> +/* Protecting the following struct pagevec */
> +struct lru_pvecs {
> +	struct local_lock lock;
> +	struct pagevec lru_add_pvec;
> +	struct pagevec lru_deactivate_file_pvecs;
> +	struct pagevec lru_deactivate_pvecs;
> +	struct pagevec lru_lazyfree_pvecs;

Ack on coalescing these into the 'struct lru_pvecs' helper structure, 
but a minor namespace organization nit: I'd drop the _pvec/_pvecs 
postfix from the field names, i.e. make it something like this:

/* Protecting the following struct pagevec */
struct lru_pvecs {
	struct local_lock lock;
	struct pagevec lru_add;
	struct pagevec lru_deactivate_file;
	struct pagevec lru_deactivate;
	struct pagevec lru_lazyfree;

With that change, usage is a straightforward:

   pvec->lru_deactivate

instead of the double-pvec name:

   pvec->lru_deactivate_pvec

> +		local_lock_irqsave(&lru_rotate_pvecs.lock, flags);


Also:

> +		pvec = this_cpu_ptr(&lru_rotate_pvecs.pvec);

s/lru_rotate_pvecs
 /lru_rotate_pvec

it's a single pagevec, using plural is confusing when reading the 
code.

I'd also suggest adding a comment explaining why the lru_rotate_pvec 
local lock is split away from the lru_add/deactivate_file/deactivate/lazyfree 
pagevec local lock.
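
Just as an illustration, a rough sketch of such a comment (the wording is
only a suggestion; the reasoning follows the locking in the quoted patch,
where the rotate pagevec is the only one taken with local_lock_irqsave()):

/*
 * lru_rotate_pvec requires disabling interrupts (it is protected with
 * local_lock_irqsave()), while the other pagevecs only ever need
 * preemption disabled, hence the separate local lock for it.
 */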

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/7] locking: Introduce local_lock()
  2020-05-24 21:57 ` [PATCH v2 1/7] locking: " Sebastian Andrzej Siewior
@ 2020-05-25  7:01   ` Ingo Molnar
  2020-05-25  7:12     ` Ingo Molnar
  2020-05-25 11:26     ` Sebastian Andrzej Siewior
  0 siblings, 2 replies; 24+ messages in thread
From: Ingo Molnar @ 2020-05-25  7:01 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox


* Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> From: Thomas Gleixner <tglx@linutronix.de>
> 
> To address this PREEMPT_RT introduced the concept of local_locks which are
> strictly per CPU.

> +++ b/include/linux/locallock_internal.h
> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _LINUX_LOCALLOCK_H
> +# error "Do not include directly, include linux/locallock.h"
> +#endif
> +
> +#include <linux/percpu-defs.h>
> +#include <linux/lockdep.h>
> +
> +struct local_lock {
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +	struct lockdep_map	dep_map;
> +	struct task_struct	*owner;
> +#endif
> +};

This looks very nice to me; there's a minor data structure 
nomenclature related comment I have:

So local locks were supposed to be a look-alike to all the other 
locking constructs we have, spinlock_t in particular. Why isn't there 
a local_lock_t, instead of requiring 'struct local_lock'?

This abbreviation signals that these are 'small' data structures on 
mainline kernels (zero size in fact), but the other advantage is that 
the shorter name would prevent bloating of previously compact 
structure definitions, such as:

>  struct squashfs_stream {
> -	void		*stream;
> +	void			*stream;
> +	struct local_lock	lock;
>  };

This would become:

>  struct squashfs_stream {
>	void		*stream;
> +	locallock_t	lock;
>  };

( The other departure from spinlocks is that the 'spinlock_t' name, 
  without underscores, while making the API names such as spin_lock() 
  with an underscore, was a conscious didactic choice. Applying that 
  principle to local locks gives us the spinlock_t-equivalent name of 
  'locallock_t' - but the double 'l' reads a bit weirdly in this 
  context. So I think using 'local_lock_t' as the data structure is 
  probably the better approach. )
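
Just for illustration, a minimal sketch of what that could look like,
mirroring the 'struct local_lock' definition quoted above (this is only a
sketch of the naming suggestion, not a final form):

typedef struct {
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;
	struct task_struct	*owner;
#endif
} local_lock_t;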

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/7] locking: Introduce local_lock()
  2020-05-25  7:01   ` Ingo Molnar
@ 2020-05-25  7:12     ` Ingo Molnar
  2020-05-25 11:27       ` Sebastian Andrzej Siewior
  2020-05-25 11:26     ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 24+ messages in thread
From: Ingo Molnar @ 2020-05-25  7:12 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox


* Ingo Molnar <mingo@kernel.org> wrote:

> ( The other departure from spinlocks is that the 'spinlock_t' name, 
>   without underscores, while making the API names such as spin_lock() 
>   with an underscore, was a conscious didactic choice. Applying that 
>   principle to local locks gives us the spinlock_t-equivalent name of 
>   'locallock_t' - but the double 'l' reads a bit weirdly in this 
>   context. So I think using 'local_lock_t' as the data structure is 
>   probably the better approach. )

BTW., along this argument, I believe we should rename the local-lock 
header file from <linux/locallock.h> to <linux/local_lock.h>.

The reason for the <linux/spinlock.h> naming is that the main data 
structure is spinlock_t.

Having <linux/locallock.h> for 'struct local_lock' or 'local_lock_t' 
would introduce an idiosyncratic namespace quirk for no good reason.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock
  2020-05-24 21:57 ` [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock Sebastian Andrzej Siewior
@ 2020-05-25  7:18   ` Ingo Molnar
  2020-05-25 14:51     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 24+ messages in thread
From: Ingo Molnar @ 2020-05-25  7:18 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Mike Galbraith, Evgeniy Polyakov, netdev


* Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> From: Mike Galbraith <umgwanakikbuti@gmail.com>
> 
> send_msg() disables preemption to avoid out-of-order messages. As the
> code inside the preempt disabled section acquires regular spinlocks,
> which are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel and
> eventually calls into a memory allocator, this conflicts with the RT
> semantics.
> 
> Convert it to a local_lock which allows RT kernels to substitute them with
> a real per CPU lock. On non RT kernels this maps to preempt_disable() as
> before. No functional change.
> 
> [bigeasy: Patch description]
> 
> Cc: Evgeniy Polyakov <zbr@ioremap.net>
> Cc: netdev@vger.kernel.org
> Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  drivers/connector/cn_proc.c | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
> index d58ce664da843..d424d1f469136 100644
> --- a/drivers/connector/cn_proc.c
> +++ b/drivers/connector/cn_proc.c
> @@ -18,6 +18,7 @@
>  #include <linux/pid_namespace.h>
>  
>  #include <linux/cn_proc.h>
> +#include <linux/locallock.h>
>  
>  /*
>   * Size of a cn_msg followed by a proc_event structure.  Since the
> @@ -38,25 +39,32 @@ static inline struct cn_msg *buffer_to_cn_msg(__u8 *buffer)
>  static atomic_t proc_event_num_listeners = ATOMIC_INIT(0);
>  static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
>  
> -/* proc_event_counts is used as the sequence number of the netlink message */
> -static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
> +/* local_evt.counts is used as the sequence number of the netlink message */
> +struct local_evt {
> +	__u32 counts;
> +	struct local_lock lock;
> +};
> +static DEFINE_PER_CPU(struct local_evt, local_evt) = {
> +	.counts = 0,

I don't think zero initializations need to be written out explicitly.

> +	.lock = INIT_LOCAL_LOCK(lock),
> +};
>  
>  static inline void send_msg(struct cn_msg *msg)
>  {
> -	preempt_disable();
> +	local_lock(&local_evt.lock);
>  
> -	msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
> +	msg->seq = __this_cpu_inc_return(local_evt.counts) - 1;

Naming nit: renaming this from 'proc_event_counts' to 
'local_evt.counts' is a step back IMO - what's an 'evt',
did we run out of e's? ;-)

Should be something like local_event.count? (Singular.)
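
Putting both nits together, a rough sketch of the per-CPU data could look
like this (names are only suggestions, keeping the 'struct local_lock'
type from the quoted diff):

struct local_event {
	__u32 count;
	struct local_lock lock;
};
static DEFINE_PER_CPU(struct local_event, local_event) = {
	.lock = INIT_LOCAL_LOCK(lock),
};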

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 6/7] zram: Allocate struct zcomp_strm as per-CPU memory
  2020-05-24 21:57 ` [PATCH v2 6/7] zram: Allocate struct zcomp_strm as per-CPU memory Sebastian Andrzej Siewior
@ 2020-05-25  7:24   ` Ingo Molnar
  2020-05-25 16:50     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 24+ messages in thread
From: Ingo Molnar @ 2020-05-25  7:24 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Minchan Kim, Nitin Gupta, Sergey Senozhatsky


* Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> zcomp::stream is per-CPU pointer, pointing to struct zcomp_strm which
> contains two pointer. Having struct zcomp_strm allocated directly as
> per-CPU memory would avoid one additional memory allocation and a
> pointer dereference.
> This also also simplifies adding a local_lock to struct zcomp_strm.
> 
> Allocate zcomp::stream directly as per-CPU memory.

Various typo/spelling fixes:

> zcomp::stream is a per-CPU pointer, pointing to struct zcomp_strm 
> which contains two pointers. Having struct zcomp_strm allocated 
> directly as per-CPU memory would avoid one additional memory 
> allocation and a pointer dereference. This also simplifies the 
> addition of a local_lock to struct zcomp_strm.


> diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
> index 1a8564a79d8dc..ae6dc137a1ed8 100644
> --- a/drivers/block/zram/zcomp.c
> +++ b/drivers/block/zram/zcomp.c
> @@ -37,19 +37,17 @@ static void zcomp_strm_free(struct zcomp_strm *zstrm)
>  	if (!IS_ERR_OR_NULL(zstrm->tfm))
>  		crypto_free_comp(zstrm->tfm);
>  	free_pages((unsigned long)zstrm->buffer, 1);
> -	kfree(zstrm);
> +	zstrm->tfm = NULL;
> +	zstrm->buffer = NULL;
>  }
>  
>  /*
>   * allocate new zcomp_strm structure with ->tfm initialized by
>   * backend, return NULL on error
>   */
> -static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
> +static int zcomp_strm_alloc(struct zcomp_strm *zstrm,
> +			    struct zcomp *comp)

There's no need to put these on two lines; as a single line it's 
only 73 columns long. Leftover from some earlier bloat?

>  void zcomp_stream_put(struct zcomp *comp)
> @@ -159,16 +157,14 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
>  {
>  	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
>  	struct zcomp_strm *zstrm;
> +	int ret;
>  
> -	if (WARN_ON(*per_cpu_ptr(comp->stream, cpu)))
> -		return 0;
> -
> -	zstrm = zcomp_strm_alloc(comp);
> -	if (IS_ERR_OR_NULL(zstrm)) {
> +	zstrm = per_cpu_ptr(comp->stream, cpu);
> +	ret = zcomp_strm_alloc(zstrm, comp);
> +	if (ret) {
>  		pr_err("Can't allocate a compression stream\n");
>  		return -ENOMEM;

BTW., with the allocation being in a single place and us having a 
proper 'ret', the return -ENOMEM could turn into 'return ret'?
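
A rough sketch of those two hunks with the suggestions applied
(single-line prototype, error code propagated) might read:

static int zcomp_strm_alloc(struct zcomp_strm *zstrm, struct zcomp *comp)

	...
	zstrm = per_cpu_ptr(comp->stream, cpu);
	ret = zcomp_strm_alloc(zstrm, comp);
	if (ret) {
		pr_err("Can't allocate a compression stream\n");
		return ret;
	}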

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 7/7] zram: Use local lock to protect per-CPU data
  2020-05-24 21:57 ` [PATCH v2 7/7] zram: Use local lock to protect per-CPU data Sebastian Andrzej Siewior
@ 2020-05-25  7:26   ` Ingo Molnar
  2020-05-25 16:51     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 24+ messages in thread
From: Ingo Molnar @ 2020-05-25  7:26 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Mike Galbraith, Minchan Kim, Nitin Gupta,
	Sergey Senozhatsky


* Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> --- a/drivers/block/zram/zcomp.h
> +++ b/drivers/block/zram/zcomp.h
> @@ -5,11 +5,13 @@
>  
>  #ifndef _ZCOMP_H_
>  #define _ZCOMP_H_
> +#include <linux/locallock.h>
>  
>  struct zcomp_strm {
>  	/* compression/decompression buffer */
>  	void *buffer;
>  	struct crypto_comp *tfm;
> +	struct local_lock lock;
>  };

I believe the general pattern is to put the lock in front of the 
fields it protects.

I'd also add a comment documenting that both fields ->buffer and ->tfm 
are protected by the lock.
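
For instance, a rough sketch (field order and comment wording are only
suggestions, keeping the 'struct local_lock' type from the quoted diff):

struct zcomp_strm {
	/* The members ->buffer and ->tfm are protected by ->lock */
	struct local_lock lock;
	/* compression/decompression buffer */
	void *buffer;
	struct crypto_comp *tfm;
};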

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 2/7] radix-tree: Use local_lock for protection
  2020-05-25  6:29   ` Ingo Molnar
@ 2020-05-25 11:11     ` Matthew Wilcox
  2020-05-25 13:26       ` Ingo Molnar
  2020-05-25 11:17     ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 24+ messages in thread
From: Matthew Wilcox @ 2020-05-25 11:11 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Sebastian Andrzej Siewior, linux-kernel, Peter Zijlstra,
	Steven Rostedt, Will Deacon, Thomas Gleixner, Paul E . McKenney,
	Linus Torvalds, linux-fsdevel

On Mon, May 25, 2020 at 08:29:54AM +0200, Ingo Molnar wrote:
> > +void radix_tree_preload_end(void)
> > +{
> > +	local_unlock(&radix_tree_preloads.lock);
> > +}
> > +EXPORT_SYMBOL(radix_tree_preload_end);
> 
> Since upstream we are still mapping the local_lock primitives to
> preempt_disable()/preempt_enable(), I believe these uninlining changes should not be done
> in this patch, i.e. idr_preload_end() and radix_tree_preload_end() should stay inline.

But radix_tree_preloads is static, and I wouldn't be terribly happy to
see that exported to modules.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 2/7] radix-tree: Use local_lock for protection
  2020-05-25  6:29   ` Ingo Molnar
  2020-05-25 11:11     ` Matthew Wilcox
@ 2020-05-25 11:17     ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-25 11:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, linux-fsdevel

On 2020-05-25 08:29:54 [+0200], Ingo Molnar wrote:
> Since upstream we are still mapping the local_lock primitives to
> preempt_disable()/preempt_enable(), I believe these uninlining changes should not be done
> in this patch, i.e. idr_preload_end() and radix_tree_preload_end() should stay inline.

That means we need to export the per-CPU struct radix_tree_preload in
order to access the ::lock from an inline function.

Something like this then:

diff --git a/include/linux/idr.h b/include/linux/idr.h
index ac6e946b6767b..3ade03e5c7af3 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -171,7 +171,7 @@ static inline bool idr_is_empty(const struct idr *idr)
  */
 static inline void idr_preload_end(void)
 {
-	preempt_enable();
+	local_unlock(&radix_tree_preloads.lock);
 }
 
 /**
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 63e62372443a5..1dcc43ac75aed 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -16,11 +16,20 @@
 #include <linux/spinlock.h>
 #include <linux/types.h>
 #include <linux/xarray.h>
+#include <linux/locallock.h>
 
 /* Keep unconverted code working */
 #define radix_tree_root		xarray
 #define radix_tree_node		xa_node
 
+struct radix_tree_preload {
+	struct local_lock lock;
+	unsigned nr;
+	/* nodes->parent points to next preallocated node */
+	struct radix_tree_node *nodes;
+};
+DECLARE_PER_CPU(struct radix_tree_preload, radix_tree_preloads);
+
 /*
  * The bottom two bits of the slot determine how the remaining bits in the
  * slot are interpreted:
@@ -245,7 +254,7 @@ int radix_tree_tagged(const struct radix_tree_root *, unsigned int tag);
 
 static inline void radix_tree_preload_end(void)
 {
-	preempt_enable();
+	local_unlock(&radix_tree_preloads.lock);
 }
 
 void __rcu **idr_get_free(struct radix_tree_root *root,
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 2ee6ae3b0ade0..1c46840b4f1d3 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -20,6 +20,7 @@
 #include <linux/kernel.h>
 #include <linux/kmemleak.h>
 #include <linux/percpu.h>
+#include <linux/locallock.h>
 #include <linux/preempt.h>		/* in_interrupt() */
 #include <linux/radix-tree.h>
 #include <linux/rcupdate.h>
@@ -27,7 +28,6 @@
 #include <linux/string.h>
 #include <linux/xarray.h>
 
-
 /*
  * Radix tree node cache.
  */
@@ -58,12 +58,10 @@ struct kmem_cache *radix_tree_node_cachep;
 /*
  * Per-cpu pool of preloaded nodes
  */
-struct radix_tree_preload {
-	unsigned nr;
-	/* nodes->parent points to next preallocated node */
-	struct radix_tree_node *nodes;
+DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = {
+	.lock = INIT_LOCAL_LOCK(lock),
 };
-static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
+EXPORT_PER_CPU_SYMBOL_GPL(radix_tree_preloads);
 
 static inline struct radix_tree_node *entry_to_node(void *ptr)
 {
@@ -332,14 +330,14 @@ static __must_check int __radix_tree_preload(gfp_t gfp_mask, unsigned nr)
 	 */
 	gfp_mask &= ~__GFP_ACCOUNT;
 
-	preempt_disable();
+	local_lock(&radix_tree_preloads.lock);
 	rtp = this_cpu_ptr(&radix_tree_preloads);
 	while (rtp->nr < nr) {
-		preempt_enable();
+		local_unlock(&radix_tree_preloads.lock);
 		node = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
 		if (node == NULL)
 			goto out;
-		preempt_disable();
+		local_lock(&radix_tree_preloads.lock);
 		rtp = this_cpu_ptr(&radix_tree_preloads);
 		if (rtp->nr < nr) {
 			node->parent = rtp->nodes;
@@ -381,7 +379,7 @@ int radix_tree_maybe_preload(gfp_t gfp_mask)
 	if (gfpflags_allow_blocking(gfp_mask))
 		return __radix_tree_preload(gfp_mask, RADIX_TREE_PRELOAD_SIZE);
 	/* Preloading doesn't help anything with this gfp mask, skip it */
-	preempt_disable();
+	local_lock(&radix_tree_preloads.lock);
 	return 0;
 }
 EXPORT_SYMBOL(radix_tree_maybe_preload);
@@ -1470,7 +1468,7 @@ EXPORT_SYMBOL(radix_tree_tagged);
 void idr_preload(gfp_t gfp_mask)
 {
 	if (__radix_tree_preload(gfp_mask, IDR_PRELOAD_SIZE))
-		preempt_disable();
+		local_lock(&radix_tree_preloads.lock);
 }
 EXPORT_SYMBOL(idr_preload);
 
-- 
2.27.0.rc0


> Thanks,
> 
> 	Ingo

Sebastian

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/7] locking: Introduce local_lock()
  2020-05-25  7:01   ` Ingo Molnar
  2020-05-25  7:12     ` Ingo Molnar
@ 2020-05-25 11:26     ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-25 11:26 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox

On 2020-05-25 09:01:39 [+0200], Ingo Molnar wrote:
> 
> * Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> 
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > To address this PREEMPT_RT introduced the concept of local_locks which are
> > strictly per CPU.
> 
> > +++ b/include/linux/locallock_internal.h
> > @@ -0,0 +1,90 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _LINUX_LOCALLOCK_H
> > +# error "Do not include directly, include linux/locallock.h"
> > +#endif
> > +
> > +#include <linux/percpu-defs.h>
> > +#include <linux/lockdep.h>
> > +
> > +struct local_lock {
> > +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> > +	struct lockdep_map	dep_map;
> > +	struct task_struct	*owner;
> > +#endif
> > +};
> 
> This this looks very nice to me, there's a minor data structure 
> nomenclature related comment I have:
> 
> So local locks were supposed to be a look-alike to all the other 
> locking constructs we have, spinlock_t in particular. Why isn't there 
> a local_lock_t, instead of requiring 'struct local_lock'?

|git grep "struct \<spinlock\>"

and I did convert them to spinlock_t and even got asked why
  https://lore.kernel.org/driverdev-devel/20190706100253.GA20497@kroah.com/

but yes. I can stick to local_lock_t instead.

> This abbreviation signals that these are 'small' data structures on 
> mainline kernels (zero size in fact), but the other advantage is that 
> the shorter name would prevent bloating of previously compact 
> structure definitions, such as:
> 
> >  struct squashfs_stream {
> > -	void		*stream;
> > +	void			*stream;
> > +	struct local_lock	lock;
> >  };
> 
> This would become:
> 
> >  struct squashfs_stream {
> >	void		*stream;
> > +	locallock_t	lock;
> >  };

Wasn't aware that this is considered bloating. 

> ( The other departure from spinlocks is that the 'spinlock_t' name, 
>   without underscores, while making the API names such as spin_lock() 
>   with an underscore, was a conscious didactic choice. Applying that 
>   principle to local locks gives us the spinlock_t-equivalent name of 
>   'locallock_t' - but the double 'l' reads a bit weirdly in this 
>   context. So I think using 'local_lock_t' as the data structure is 
>   probably the better approach. )

Okay, okay, I'm all yours.

> Thanks,
> 
> 	Ingo

Sebastian

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/7] locking: Introduce local_lock()
  2020-05-25  7:12     ` Ingo Molnar
@ 2020-05-25 11:27       ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-25 11:27 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox

On 2020-05-25 09:12:14 [+0200], Ingo Molnar wrote:
> 
> * Ingo Molnar <mingo@kernel.org> wrote:
> 
> > ( The other departure from spinlocks is that the 'spinlock_t' name, 
> >   without underscores, while making the API names such as spin_lock() 
> >   with an underscore, was a conscious didactic choice. Applying that 
> >   principle to local locks gives us the spinlock_t-equivalent name of 
> >   'locallock_t' - but the double 'l' reads a bit weirdly in this 
> >   context. So I think using 'local_lock_t' as the data structure is 
> >   probably the better approach. )
> 
> BTW., along this argument, I believe we should rename the local-lock 
> header file from <linux/locallock.h> to <linux/local_lock.h>.
> 
> The reason for the <linux/spinlock.h> naming is that the main data 
> structure is spinlock_t.
> 
> Having <linux/locallock.h> for 'struct local_lock' or 'local_lock_t' 
> would introduce an idiosyncratic namespace quirk for no good reason.

agreed.

> Thanks,
> 
> 	Ingo

Sebastian

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 2/7] radix-tree: Use local_lock for protection
  2020-05-25 11:11     ` Matthew Wilcox
@ 2020-05-25 13:26       ` Ingo Molnar
  0 siblings, 0 replies; 24+ messages in thread
From: Ingo Molnar @ 2020-05-25 13:26 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Sebastian Andrzej Siewior, linux-kernel, Peter Zijlstra,
	Steven Rostedt, Will Deacon, Thomas Gleixner, Paul E . McKenney,
	Linus Torvalds, linux-fsdevel


* Matthew Wilcox <willy@infradead.org> wrote:

> On Mon, May 25, 2020 at 08:29:54AM +0200, Ingo Molnar wrote:
> > > +void radix_tree_preload_end(void)
> > > +{
> > > +	local_unlock(&radix_tree_preloads.lock);
> > > +}
> > > +EXPORT_SYMBOL(radix_tree_preload_end);
> > 
> > Since upstream we are still mapping the local_lock primitives to
> > preempt_disable()/preempt_enable(), I believe these uninlining changes should not be done
> > in this patch, i.e. idr_preload_end() and radix_tree_preload_end() should stay inline.
> 
> But radix_tree_preloads is static, and I wouldn't be terribly happy to
> see that exported to modules.

Well, it seems a bit silly to make radix_tree_preload_end() a 
standalone function: on most distro kernels, which don't have 
CONFIG_PREEMPT=y, preempt_enable() is a NOP:

 0000000000002bf0 <radix_tree_preload_end>:
     2bf0:       c3                      retq   

I.e. we'd be introducing a separate function call for no good reason.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock
  2020-05-25  7:18   ` Ingo Molnar
@ 2020-05-25 14:51     ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-25 14:51 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Mike Galbraith, Evgeniy Polyakov, netdev

On 2020-05-25 09:18:19 [+0200], Ingo Molnar wrote:
> > +static DEFINE_PER_CPU(struct local_evt, local_evt) = {
> > +	.counts = 0,
> 
> I don't think zero initializations need to be written out explicitly.
yes.

> > +	.lock = INIT_LOCAL_LOCK(lock),
> > +};
> >  
> >  static inline void send_msg(struct cn_msg *msg)
> >  {
> > -	preempt_disable();
> > +	local_lock(&local_evt.lock);
> >  
> > -	msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
> > +	msg->seq = __this_cpu_inc_return(local_evt.counts) - 1;
> 
> Naming nit: renaming this from 'proc_event_counts' to 
> 'local_evt.counts' is a step back IMO - what's an 'evt',
> did we run out of e's? ;-)
> 
> Should be something like local_event.count? (Singular.)

okay.

> Thanks,
> 
> 	Ingo

Sebastian

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 6/7] zram: Allocate struct zcomp_strm as per-CPU memory
  2020-05-25  7:24   ` Ingo Molnar
@ 2020-05-25 16:50     ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-25 16:50 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Minchan Kim, Nitin Gupta, Sergey Senozhatsky

On 2020-05-25 09:24:07 [+0200], Ingo Molnar wrote:
> Various typo/spelling fixes:
> 
> > zcomp::stream is a per-CPU pointer, pointing to struct zcomp_strm 
> > which contains two pointers. Having struct zcomp_strm allocated 
> > directly as per-CPU memory would avoid one additional memory 
> > allocation and a pointer dereference. This also simplifies the 
> > addition of a local_lock to struct zcomp_strm.

thx, updated.

> > diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
> > index 1a8564a79d8dc..ae6dc137a1ed8 100644
> > --- a/drivers/block/zram/zcomp.c
> > +++ b/drivers/block/zram/zcomp.c
> > @@ -37,19 +37,17 @@ static void zcomp_strm_free(struct zcomp_strm *zstrm)
> >  	if (!IS_ERR_OR_NULL(zstrm->tfm))
> >  		crypto_free_comp(zstrm->tfm);
> >  	free_pages((unsigned long)zstrm->buffer, 1);
> > -	kfree(zstrm);
> > +	zstrm->tfm = NULL;
> > +	zstrm->buffer = NULL;
> >  }
> >  
> >  /*
> >   * allocate new zcomp_strm structure with ->tfm initialized by
> >   * backend, return NULL on error
> >   */
> > -static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
> > +static int zcomp_strm_alloc(struct zcomp_strm *zstrm,
> > +			    struct zcomp *comp)
> 
> There's no need to put these into two lines, in a single line it's 
> only 73 columns long. Leftover from some earlier bloat?

yup, updated.

> >  void zcomp_stream_put(struct zcomp *comp)
> > @@ -159,16 +157,14 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
> >  {
> >  	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
> >  	struct zcomp_strm *zstrm;
> > +	int ret;
> >  
> > -	if (WARN_ON(*per_cpu_ptr(comp->stream, cpu)))
> > -		return 0;
> > -
> > -	zstrm = zcomp_strm_alloc(comp);
> > -	if (IS_ERR_OR_NULL(zstrm)) {
> > +	zstrm = per_cpu_ptr(comp->stream, cpu);
> > +	ret = zcomp_strm_alloc(zstrm, comp);
> > +	if (ret) {
> >  		pr_err("Can't allocate a compression stream\n");
> >  		return -ENOMEM;
> 
> BTW., with the allocation being in a single place and us having a 
> proper 'ret', the return -ENOMEM could turn into 'return ret'?

yes.

> Thanks,
> 
> 	Ingo

Sebastian

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 7/7] zram: Use local lock to protect per-CPU data
  2020-05-25  7:26   ` Ingo Molnar
@ 2020-05-25 16:51     ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-25 16:51 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Mike Galbraith, Minchan Kim, Nitin Gupta,
	Sergey Senozhatsky

On 2020-05-25 09:26:48 [+0200], Ingo Molnar wrote:
> 
> * Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> 
> > --- a/drivers/block/zram/zcomp.h
> > +++ b/drivers/block/zram/zcomp.h
> > @@ -5,11 +5,13 @@
> >  
> >  #ifndef _ZCOMP_H_
> >  #define _ZCOMP_H_
> > +#include <linux/locallock.h>
> >  
> >  struct zcomp_strm {
> >  	/* compression/decompression buffer */
> >  	void *buffer;
> >  	struct crypto_comp *tfm;
> > +	struct local_lock lock;
> >  };
> 
> I believe the general pattern is to put the lock in front of the 
> fields it protects.
> 
> I'd also add a comment documenting that both fields ->buffer and ->tfm 
> are protected by the lock.

I moved the member, and added a comment.

> Thanks,
> 
> 	Ingo

Sebastian

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/7] mm/swap: Use local_lock for protection
  2020-05-25  6:44   ` Ingo Molnar
@ 2020-05-25 17:07     ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 24+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-05-25 17:07 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Andrew Morton, linux-mm

On 2020-05-25 08:44:36 [+0200], Ingo Molnar wrote:
> s/lru_rotate_pvecs
>  /lru_rotate_pvec
> 
> it's a single pagevec, using plural is confusing when reading the 
> code.

right. It had the _pvecs suffix from the beginning, so I assumed it was
because it is per-CPU.

With all your suggestions I'm at:

diff --git a/include/linux/swap.h b/include/linux/swap.h
index e1bbf7a16b276..25181d2dd0b9f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -337,6 +337,7 @@ extern void activate_page(struct page *);
 extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
+extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
 extern void rotate_reclaimable_page(struct page *page);
 extern void deactivate_file_page(struct page *page);
diff --git a/mm/compaction.c b/mm/compaction.c
index 46f0fcc93081e..c9d659e6a02c5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2243,15 +2243,11 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 		 * would succeed.
 		 */
 		if (cc->order > 0 && last_migrated_pfn) {
-			int cpu;
 			unsigned long current_block_start =
 				block_start_pfn(cc->migrate_pfn, cc->order);
 
 			if (last_migrated_pfn < current_block_start) {
-				cpu = get_cpu();
-				lru_add_drain_cpu(cpu);
-				drain_local_pages(cc->zone);
-				put_cpu();
+				lru_add_drain_cpu_zone(cc->zone);
 				/* No more flushing until we migrate again */
 				last_migrated_pfn = 0;
 			}
diff --git a/mm/swap.c b/mm/swap.c
index bf9a79fed62d7..0ac463d44cff4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -35,6 +35,7 @@
 #include <linux/uio.h>
 #include <linux/hugetlb.h>
 #include <linux/page_idle.h>
+#include <linux/local_lock.h>
 
 #include "internal.h"
 
@@ -44,14 +45,32 @@
 /* How many pages do we try to swap or page in/out together? */
 int page_cluster;
 
-static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);
-static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
-static DEFINE_PER_CPU(struct pagevec, lru_deactivate_file_pvecs);
-static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
-static DEFINE_PER_CPU(struct pagevec, lru_lazyfree_pvecs);
+/* Protecting only lru_rotate.pvec which requires disabling interrupts */
+struct lru_rotate {
+	local_lock_t lock;
+	struct pagevec pvec;
+};
+static DEFINE_PER_CPU(struct lru_rotate, lru_rotate) = {
+	.lock = INIT_LOCAL_LOCK(lock),
+};
+
+/*
+ * The following struct pagevec are grouped together because they are protected
+ * by disabling preemption (and interrupts remain enabled).
+ */
+struct lru_pvecs {
+	local_lock_t lock;
+	struct pagevec lru_add;
+	struct pagevec lru_deactivate_file;
+	struct pagevec lru_deactivate;
+	struct pagevec lru_lazyfree;
 #ifdef CONFIG_SMP
-static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);
+	struct pagevec activate_page;
 #endif
+};
+static DEFINE_PER_CPU(struct lru_pvecs, lru_pvecs) = {
+	.lock = INIT_LOCAL_LOCK(lock),
+};
 
 /*
  * This path almost never happens for VM activity - pages are normally
@@ -254,11 +273,11 @@ void rotate_reclaimable_page(struct page *page)
 		unsigned long flags;
 
 		get_page(page);
-		local_irq_save(flags);
-		pvec = this_cpu_ptr(&lru_rotate_pvecs);
+		local_lock_irqsave(&lru_rotate.lock, flags);
+		pvec = this_cpu_ptr(&lru_rotate.pvec);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&lru_rotate.lock, flags);
 	}
 }
 
@@ -293,7 +312,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 #ifdef CONFIG_SMP
 static void activate_page_drain(int cpu)
 {
-	struct pagevec *pvec = &per_cpu(activate_page_pvecs, cpu);
+	struct pagevec *pvec = &per_cpu(lru_pvecs.activate_page, cpu);
 
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, __activate_page, NULL);
@@ -301,19 +320,21 @@ static void activate_page_drain(int cpu)
 
 static bool need_activate_page_drain(int cpu)
 {
-	return pagevec_count(&per_cpu(activate_page_pvecs, cpu)) != 0;
+	return pagevec_count(&per_cpu(lru_pvecs.activate_page, cpu)) != 0;
 }
 
 void activate_page(struct page *page)
 {
 	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(activate_page_pvecs);
+		struct pagevec *pvec;
 
+		local_lock(&lru_pvecs.lock);
+		pvec = this_cpu_ptr(&lru_pvecs.activate_page);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, __activate_page, NULL);
-		put_cpu_var(activate_page_pvecs);
+		local_unlock(&lru_pvecs.lock);
 	}
 }
 
@@ -335,9 +356,12 @@ void activate_page(struct page *page)
 
 static void __lru_cache_activate_page(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct pagevec *pvec;
 	int i;
 
+	local_lock(&lru_pvecs.lock);
+	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
+
 	/*
 	 * Search backwards on the optimistic assumption that the page being
 	 * activated has just been added to this pagevec. Note that only
@@ -357,7 +381,7 @@ static void __lru_cache_activate_page(struct page *page)
 		}
 	}
 
-	put_cpu_var(lru_add_pvec);
+	local_unlock(&lru_pvecs.lock);
 }
 
 /*
@@ -385,7 +409,7 @@ void mark_page_accessed(struct page *page)
 	} else if (!PageActive(page)) {
 		/*
 		 * If the page is on the LRU, queue it for activation via
-		 * activate_page_pvecs. Otherwise, assume the page is on a
+		 * lru_pvecs.activate_page. Otherwise, assume the page is on a
 		 * pagevec, mark it active and it'll be moved to the active
 		 * LRU on the next drain.
 		 */
@@ -404,12 +428,14 @@ EXPORT_SYMBOL(mark_page_accessed);
 
 static void __lru_cache_add(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct pagevec *pvec;
 
+	local_lock(&lru_pvecs.lock);
+	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
 	get_page(page);
 	if (!pagevec_add(pvec, page) || PageCompound(page))
 		__pagevec_lru_add(pvec);
-	put_cpu_var(lru_add_pvec);
+	local_unlock(&lru_pvecs.lock);
 }
 
 /**
@@ -593,30 +619,30 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
  */
 void lru_add_drain_cpu(int cpu)
 {
-	struct pagevec *pvec = &per_cpu(lru_add_pvec, cpu);
+	struct pagevec *pvec = &per_cpu(lru_pvecs.lru_add, cpu);
 
 	if (pagevec_count(pvec))
 		__pagevec_lru_add(pvec);
 
-	pvec = &per_cpu(lru_rotate_pvecs, cpu);
+	pvec = &per_cpu(lru_rotate.pvec, cpu);
 	if (pagevec_count(pvec)) {
 		unsigned long flags;
 
 		/* No harm done if a racing interrupt already did this */
-		local_irq_save(flags);
+		local_lock_irqsave(&lru_rotate.lock, flags);
 		pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&lru_rotate.lock, flags);
 	}
 
-	pvec = &per_cpu(lru_deactivate_file_pvecs, cpu);
+	pvec = &per_cpu(lru_pvecs.lru_deactivate_file, cpu);
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
 
-	pvec = &per_cpu(lru_deactivate_pvecs, cpu);
+	pvec = &per_cpu(lru_pvecs.lru_deactivate, cpu);
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
 
-	pvec = &per_cpu(lru_lazyfree_pvecs, cpu);
+	pvec = &per_cpu(lru_pvecs.lru_lazyfree, cpu);
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
 
@@ -641,11 +667,14 @@ void deactivate_file_page(struct page *page)
 		return;
 
 	if (likely(get_page_unless_zero(page))) {
-		struct pagevec *pvec = &get_cpu_var(lru_deactivate_file_pvecs);
+		struct pagevec *pvec;
+
+		local_lock(&lru_pvecs.lock);
+		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate_file);
 
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
-		put_cpu_var(lru_deactivate_file_pvecs);
+		local_unlock(&lru_pvecs.lock);
 	}
 }
 
@@ -660,12 +689,14 @@ void deactivate_file_page(struct page *page)
 void deactivate_page(struct page *page)
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(lru_deactivate_pvecs);
+		struct pagevec *pvec;
 
+		local_lock(&lru_pvecs.lock);
+		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
-		put_cpu_var(lru_deactivate_pvecs);
+		local_unlock(&lru_pvecs.lock);
 	}
 }
 
@@ -680,19 +711,30 @@ void mark_page_lazyfree(struct page *page)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
+		struct pagevec *pvec;
 
+		local_lock(&lru_pvecs.lock);
+		pvec = this_cpu_ptr(&lru_pvecs.lru_lazyfree);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
-		put_cpu_var(lru_lazyfree_pvecs);
+		local_unlock(&lru_pvecs.lock);
 	}
 }
 
 void lru_add_drain(void)
 {
-	lru_add_drain_cpu(get_cpu());
-	put_cpu();
+	local_lock(&lru_pvecs.lock);
+	lru_add_drain_cpu(smp_processor_id());
+	local_unlock(&lru_pvecs.lock);
+}
+
+void lru_add_drain_cpu_zone(struct zone *zone)
+{
+	local_lock(&lru_pvecs.lock);
+	lru_add_drain_cpu(smp_processor_id());
+	drain_local_pages(zone);
+	local_unlock(&lru_pvecs.lock);
 }
 
 #ifdef CONFIG_SMP
@@ -743,11 +785,11 @@ void lru_add_drain_all(void)
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
-		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
-		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
-		    pagevec_count(&per_cpu(lru_deactivate_file_pvecs, cpu)) ||
-		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
-		    pagevec_count(&per_cpu(lru_lazyfree_pvecs, cpu)) ||
+		if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
+		    pagevec_count(&per_cpu(lru_rotate.pvec, cpu)) ||
+		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
+		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
+		    pagevec_count(&per_cpu(lru_pvecs.lru_lazyfree, cpu)) ||
 		    need_activate_page_drain(cpu)) {
 			INIT_WORK(work, lru_add_drain_per_cpu);
 			queue_work_on(cpu, mm_percpu_wq, work);
-- 
2.27.0.rc0


> Thanks,
> 
> 	Ingo

Sebastian

^ permalink raw reply related	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2020-05-25 17:08 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-24 21:57 [PATCH 0/7 v2] Introduce local_lock() Sebastian Andrzej Siewior
2020-05-24 21:57 ` [PATCH v2 1/7] locking: " Sebastian Andrzej Siewior
2020-05-25  7:01   ` Ingo Molnar
2020-05-25  7:12     ` Ingo Molnar
2020-05-25 11:27       ` Sebastian Andrzej Siewior
2020-05-25 11:26     ` Sebastian Andrzej Siewior
2020-05-24 21:57 ` [PATCH v2 2/7] radix-tree: Use local_lock for protection Sebastian Andrzej Siewior
2020-05-25  6:29   ` Ingo Molnar
2020-05-25 11:11     ` Matthew Wilcox
2020-05-25 13:26       ` Ingo Molnar
2020-05-25 11:17     ` Sebastian Andrzej Siewior
2020-05-24 21:57 ` [PATCH v2 3/7] mm/swap: " Sebastian Andrzej Siewior
2020-05-25  6:44   ` Ingo Molnar
2020-05-25 17:07     ` Sebastian Andrzej Siewior
2020-05-24 21:57 ` [PATCH v2 4/7] squashfs: make use of local lock in multi_cpu decompressor Sebastian Andrzej Siewior
2020-05-24 21:57 ` [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock Sebastian Andrzej Siewior
2020-05-25  7:18   ` Ingo Molnar
2020-05-25 14:51     ` Sebastian Andrzej Siewior
2020-05-24 21:57 ` [PATCH v2 6/7] zram: Allocate struct zcomp_strm as per-CPU memory Sebastian Andrzej Siewior
2020-05-25  7:24   ` Ingo Molnar
2020-05-25 16:50     ` Sebastian Andrzej Siewior
2020-05-24 21:57 ` [PATCH v2 7/7] zram: Use local lock to protect per-CPU data Sebastian Andrzej Siewior
2020-05-25  7:26   ` Ingo Molnar
2020-05-25 16:51     ` Sebastian Andrzej Siewior
