linux-kernel.vger.kernel.org archive mirror
* [PATCH RT 0/7] Linux 3.12.70-rt95-rc1
@ 2017-03-08 20:22 Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 1/7] radix-tree: use local locks Steven Rostedt
                   ` (6 more replies)
  0 siblings, 7 replies; 9+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:22 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright


Dear RT Folks,

This is the RT stable review cycle of patch 3.12.70-rt95-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository, only the
final release will be.

If all goes well, this patch will be converted to the next main release
on 3/10/2017.

Enjoy,

-- Steve


To build 3.12.70-rt95-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.12.tar.xz

  http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.12.70.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/patch-3.12.70-rt95-rc1.patch.xz

You can also build from 3.12.70-rt94 by applying the incremental patch:

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/incr/patch-3.12.70-rt94-rt95-rc1.patch.xz


Changes from 3.12.70-rt94:

---


Dan Murphy (1):
      lockdep: Fix compilation error for !CONFIG_MODULES and !CONFIG_SMP

John Ogness (1):
      x86/mm/cpa: avoid wbinvd() for PREEMPT

Sebastian Andrzej Siewior (3):
      radix-tree: use local locks
      rt: Drop mutex_disable() on !DEBUG configs and the GPL suffix from export symbol
      rt: Drop the removal of _GPL from rt_mutex_destroy()'s EXPORT_SYMBOL

Steven Rostedt (VMware) (1):
      Linux 3.12.70-rt95-rc1

Thomas Gleixner (1):
      lockdep: Handle statically initialized PER_CPU locks proper

----
 arch/x86/mm/pageattr.c     |  8 ++++++++
 include/linux/module.h     |  6 ++++++
 include/linux/mutex_rt.h   |  5 +++++
 include/linux/percpu.h     |  1 +
 include/linux/radix-tree.h | 12 ++----------
 kernel/lockdep.c           | 32 +++++++++++++++++++++++---------
 kernel/module.c            | 36 ++++++++++++++++++++++++------------
 lib/radix-tree.c           | 23 ++++++++++++++---------
 localversion-rt            |  2 +-
 mm/percpu.c                | 37 +++++++++++++++++++++++--------------
 10 files changed, 107 insertions(+), 55 deletions(-)


* [PATCH RT 1/7] radix-tree: use local locks
  2017-03-08 20:22 [PATCH RT 0/7] Linux 3.12.70-rt95-rc1 Steven Rostedt
@ 2017-03-08 20:22 ` Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 2/7] x86/mm/cpa: avoid wbinvd() for PREEMPT Steven Rostedt
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:22 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, stable-rt

[-- Attachment #1: 0001-radix-tree-use-local-locks.patch --]
[-- Type: text/plain, Size: 4759 bytes --]

3.12.70-rt95-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

The preload functionality uses per-CPU variables and disables preemption to
ensure that it does not switch CPUs during its use. This patch uses
local_lock() instead of preempt_disable() for the same purpose, so the code
stays on one CPU but remains preemptible on -RT.
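
For context, a sketch of the caller-side pattern this protects. It is
illustrative only, loosely modeled on the page-cache insertion path, and the
helper name is made up; it is not part of this patch. The window between
radix_tree_preload() and radix_tree_preload_end() must stay on one CPU so
that the nodes stashed in that CPU's radix_tree_preloads pool are the ones
consumed by the insertion. Previously that window was a preempt-disabled
region; with this change it is covered by radix_tree_preloads_lock, which on
-RT is a per-CPU sleeping lock, so the caller stays on one CPU yet remains
preemptible:

int add_item(struct radix_tree_root *root, unsigned long index,
	     void *item, spinlock_t *tree_lock)
{
	int err;

	/* Stash enough nodes on this CPU; may sleep with GFP_KERNEL. */
	err = radix_tree_preload(GFP_KERNEL);
	if (err)
		return err;

	spin_lock(tree_lock);
	err = radix_tree_insert(root, index, item);
	spin_unlock(tree_lock);

	/* Used to be preempt_enable_nort(); now local_unlock() on -RT. */
	radix_tree_preload_end();
	return err;
}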

Cc: stable-rt@vger.kernel.org
Reported-and-debugged-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/radix-tree.h | 12 ++----------
 lib/radix-tree.c           | 23 ++++++++++++++---------
 2 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 148ae0fb87ad..c0df4bc0d297 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -227,13 +227,10 @@ radix_tree_gang_lookup(struct radix_tree_root *root, void **results,
 unsigned int radix_tree_gang_lookup_slot(struct radix_tree_root *root,
 			void ***results, unsigned long *indices,
 			unsigned long first_index, unsigned int max_items);
-#ifndef CONFIG_PREEMPT_RT_FULL
 int radix_tree_preload(gfp_t gfp_mask);
 int radix_tree_maybe_preload(gfp_t gfp_mask);
-#else
-static inline int radix_tree_preload(gfp_t gm) { return 0; }
-static inline int radix_tree_maybe_preload(gfp_t gfp_mask) { return 0; }
-#endif
+void radix_tree_preload_end(void);
+
 void radix_tree_init(void);
 void *radix_tree_tag_set(struct radix_tree_root *root,
 			unsigned long index, unsigned int tag);
@@ -256,11 +253,6 @@ unsigned long radix_tree_range_tag_if_tagged(struct radix_tree_root *root,
 int radix_tree_tagged(struct radix_tree_root *root, unsigned int tag);
 unsigned long radix_tree_locate_item(struct radix_tree_root *root, void *item);
 
-static inline void radix_tree_preload_end(void)
-{
-	preempt_enable_nort();
-}
-
 /**
  * struct radix_tree_iter - radix tree iterator state
  *
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 980e869e19a8..1236857dfa1f 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -33,7 +33,7 @@
 #include <linux/bitops.h>
 #include <linux/rcupdate.h>
 #include <linux/hardirq.h>		/* in_interrupt() */
-
+#include <linux/locallock.h>
 
 #ifdef __KERNEL__
 #define RADIX_TREE_MAP_SHIFT	(CONFIG_BASE_SMALL ? 4 : 6)
@@ -94,6 +94,7 @@ struct radix_tree_preload {
 	struct radix_tree_node *nodes[RADIX_TREE_PRELOAD_SIZE];
 };
 static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
+static DEFINE_LOCAL_IRQ_LOCK(radix_tree_preloads_lock);
 
 static inline void *ptr_to_indirect(void *ptr)
 {
@@ -221,13 +222,13 @@ radix_tree_node_alloc(struct radix_tree_root *root)
 		 * succeed in getting a node here (and never reach
 		 * kmem_cache_alloc)
 		 */
-		rtp = &get_cpu_var(radix_tree_preloads);
+		rtp = &get_locked_var(radix_tree_preloads_lock, radix_tree_preloads);
 		if (rtp->nr) {
 			ret = rtp->nodes[rtp->nr - 1];
 			rtp->nodes[rtp->nr - 1] = NULL;
 			rtp->nr--;
 		}
-		put_cpu_var(radix_tree_preloads);
+		put_locked_var(radix_tree_preloads_lock, radix_tree_preloads);
 	}
 	if (ret == NULL)
 		ret = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
@@ -262,7 +263,6 @@ radix_tree_node_free(struct radix_tree_node *node)
 	call_rcu(&node->rcu_head, radix_tree_node_rcu_free);
 }
 
-#ifndef CONFIG_PREEMPT_RT_FULL
 /*
  * Load up this CPU's radix_tree_node buffer with sufficient objects to
  * ensure that the addition of a single element in the tree cannot fail.  On
@@ -278,14 +278,14 @@ static int __radix_tree_preload(gfp_t gfp_mask)
 	struct radix_tree_node *node;
 	int ret = -ENOMEM;
 
-	preempt_disable();
+	local_lock(radix_tree_preloads_lock);
 	rtp = &__get_cpu_var(radix_tree_preloads);
 	while (rtp->nr < ARRAY_SIZE(rtp->nodes)) {
-		preempt_enable();
+		local_unlock(radix_tree_preloads_lock);
 		node = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
 		if (node == NULL)
 			goto out;
-		preempt_disable();
+		local_lock(radix_tree_preloads_lock);
 		rtp = &__get_cpu_var(radix_tree_preloads);
 		if (rtp->nr < ARRAY_SIZE(rtp->nodes))
 			rtp->nodes[rtp->nr++] = node;
@@ -324,11 +324,16 @@ int radix_tree_maybe_preload(gfp_t gfp_mask)
 	if (gfp_mask & __GFP_WAIT)
 		return __radix_tree_preload(gfp_mask);
 	/* Preloading doesn't help anything with this gfp mask, skip it */
-	preempt_disable();
+	local_lock(radix_tree_preloads_lock);
 	return 0;
 }
 EXPORT_SYMBOL(radix_tree_maybe_preload);
-#endif
+
+void radix_tree_preload_end(void)
+{
+	local_unlock(radix_tree_preloads_lock);
+}
+EXPORT_SYMBOL(radix_tree_preload_end);
 
 /*
  *	Return the maximum key which can be store into a
-- 
2.10.2


* [PATCH RT 2/7] x86/mm/cpa: avoid wbinvd() for PREEMPT
  2017-03-08 20:22 [PATCH RT 0/7] Linux 3.12.70-rt95-rc1 Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 1/7] radix-tree: use local locks Steven Rostedt
@ 2017-03-08 20:22 ` Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 3/7] rt: Drop mutex_disable() on !DEBUG configs and the GPL suffix from export symbol Steven Rostedt
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:22 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, stable-rt,
	Peter Zijlstra (Intel),
	John Ogness

[-- Attachment #1: 0002-x86-mm-cpa-avoid-wbinvd-for-PREEMPT.patch --]
[-- Type: text/plain, Size: 1522 bytes --]

3.12.70-rt95-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: John Ogness <john.ogness@linutronix.de>

Although wbinvd() is faster than flushing many individual pages, it
blocks the memory bus for "long" periods of time (>100us), thus
directly causing unusually large latencies on all CPUs, regardless
of any CPU isolation features that may be active.

For 1024 pages, flushing those pages individually can take up to
2200us, but the task remains fully preemptible during that time.
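
As a rough illustration of the alternative path (not a verbatim copy of
cpa_flush_array(), which also has to handle highmem mappings and CPUs
without CLFLUSH), flushing page by page keeps preemption enabled for the
whole loop; the helper below is hypothetical and assumes lowmem pages:

static void flush_pages_one_by_one(struct page **pages, int numpages)
{
	int i;

	for (i = 0; i < numpages; i++) {
		void *vaddr = page_address(pages[i]);

		/* Roughly 2us per page, but fully preemptible in between. */
		clflush_cache_range(vaddr, PAGE_SIZE);
	}
}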

Cc: stable-rt@vger.kernel.org
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 arch/x86/mm/pageattr.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 0fcd960b382a..0fd8d4e4c601 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -210,7 +210,15 @@ static void cpa_flush_array(unsigned long *start, int numpages, int cache,
 			    int in_flags, struct page **pages)
 {
 	unsigned int i, level;
+#ifdef CONFIG_PREEMPT
+	/*
+	 * Avoid wbinvd() because it causes latencies on all CPUs,
+	 * regardless of any CPU isolation that may be in effect.
+	 */
+	unsigned long do_wbinvd = 0;
+#else
 	unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */
+#endif
 
 	BUG_ON(irqs_disabled());
 
-- 
2.10.2


* [PATCH RT 3/7] rt: Drop mutex_disable() on !DEBUG configs and the GPL suffix from export symbol
  2017-03-08 20:22 [PATCH RT 0/7] Linux 3.12.70-rt95-rc1 Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 1/7] radix-tree: use local locks Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 2/7] x86/mm/cpa: avoid wbinvd() for PREEMPT Steven Rostedt
@ 2017-03-08 20:22 ` Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 4/7] lockdep: Handle statically initialized PER_CPU locks proper Steven Rostedt
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:22 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Alex Goins

[-- Attachment #1: 0003-rt-Drop-mutex_disable-on-DEBUG-configs-and-the-GPL-s.patch --]
[-- Type: text/plain, Size: 2001 bytes --]

3.12.70-rt95-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Alex Goins reported that mutex_destroy() on RT forces a GPL-only symbol,
so a non-GPL kernel module that uses it fails to link.
This does not happen on !RT and is a regression on RT which we would like
to avoid.
The easy fix tried here is to not use rt_mutex_destroy() if
CONFIG_DEBUG_MUTEXES is not enabled. That would still break for the DEBUG
configs, so instead of adding a wrapper around rt_mutex_destroy() (which we
already have for rt_mutex_lock(), for instance) I am simply dropping the GPL
part from the export.
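
To make the failure mode concrete, here is a minimal sketch of the kind of
out-of-tree module that hits it (hypothetical, for illustration only):

#include <linux/module.h>
#include <linux/mutex.h>

static struct mutex demo_lock;

static int __init demo_init(void)
{
	mutex_init(&demo_lock);
	return 0;
}

static void __exit demo_exit(void)
{
	/*
	 * On RT this expands to rt_mutex_destroy(). While that symbol is
	 * exported with EXPORT_SYMBOL_GPL, a module with a non-GPL license
	 * cannot resolve it and fails at modpost / module load time.
	 */
	mutex_destroy(&demo_lock);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("Proprietary");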

Reported-by: Alex Goins <agoins@nvidia.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/mutex_rt.h | 5 +++++
 kernel/rtmutex.c         | 3 +--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/mutex_rt.h b/include/linux/mutex_rt.h
index c38a44b14da5..e0284edec655 100644
--- a/include/linux/mutex_rt.h
+++ b/include/linux/mutex_rt.h
@@ -43,7 +43,12 @@ extern void __lockfunc _mutex_unlock(struct mutex *lock);
 #define mutex_lock_killable(l)		_mutex_lock_killable(l)
 #define mutex_trylock(l)		_mutex_trylock(l)
 #define mutex_unlock(l)			_mutex_unlock(l)
+
+#ifdef CONFIG_DEBUG_MUTEXES
 #define mutex_destroy(l)		rt_mutex_destroy(&(l)->lock)
+#else
+static inline void mutex_destroy(struct mutex *lock) {}
+#endif
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 # define mutex_lock_nested(l, s)	_mutex_lock_nested(l, s)
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 43d98d373809..63ac099b3b8d 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -2004,8 +2004,7 @@ void rt_mutex_destroy(struct rt_mutex *lock)
 	lock->magic = NULL;
 #endif
 }
-
-EXPORT_SYMBOL_GPL(rt_mutex_destroy);
+EXPORT_SYMBOL(rt_mutex_destroy);
 
 /**
  * __rt_mutex_init - initialize the rt lock
-- 
2.10.2


* [PATCH RT 4/7] lockdep: Handle statically initialized PER_CPU locks proper
  2017-03-08 20:22 [PATCH RT 0/7] Linux 3.12.70-rt95-rc1 Steven Rostedt
                   ` (2 preceding siblings ...)
  2017-03-08 20:22 ` [PATCH RT 3/7] rt: Drop mutex_disable() on !DEBUG configs and the GPL suffix from export symbol Steven Rostedt
@ 2017-03-08 20:22 ` Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 5/7] rt: Drop the removal of _GPL from rt_mutex_destroy()s EXPORT_SYMBOL Steven Rostedt
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:22 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Mike Galbraith,
	stable-rt

[-- Attachment #1: 0004-lockdep-Handle-statically-initialized-PER_CPU-locks-.patch --]
[-- Type: text/plain, Size: 9046 bytes --]

3.12.70-rt95-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

If a PER_CPU struct which contains a spin_lock is statically initialized
via:

DEFINE_PER_CPU(struct foo, bla) = {
	.lock = __SPIN_LOCK_UNLOCKED(bla.lock)
};

then lockdep assigns a separate key to each lock because the logic for
assigning a key to statically initialized locks is to use the address as
the key. With per-CPU locks the address is obviously different on each CPU.

That's wrong, because all locks should have the same key.

To solve this the following modifications are required:

 1) Extend the is_kernel/module_percpu_addr() functions to hand back the
    canonical address of the per CPU address, i.e. the per CPU address
    minus the per CPU offset.

 2) Check the lock address with these functions and if the per CPU check
    matches use the returned canonical address as the lock key, so all per
    CPU locks have the same key.

 3) Move the static_obj(key) check into look_up_lock_class() so this check
    can be avoided for statically initialized per CPU locks.  That's
    required because the canonical address fails the static_obj(key) check
    for obvious reasons.
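
For illustration, a sketch of what this means for the example above (the
struct is hypothetical, as in the example at the top of this message):

#include <linux/percpu.h>
#include <linux/spinlock.h>

struct foo {
	spinlock_t lock;
};

DEFINE_PER_CPU(struct foo, bla) = {
	.lock = __SPIN_LOCK_UNLOCKED(bla.lock)
};

/*
 * &per_cpu(bla, cpu).lock is a different address on every CPU, so using
 * the raw address as the lockdep key creates one class per CPU. The
 * canonical address (the per-CPU address minus the per-CPU offset, as
 * handed back by __is_kernel_percpu_address()) is identical on all CPUs
 * and therefore yields a single lock class.
 */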

Reported-by: Mike Galbraith <efault@gmx.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/module.h |  1 +
 include/linux/percpu.h |  1 +
 kernel/lockdep.c       | 32 +++++++++++++++++++++++---------
 kernel/module.c        | 31 +++++++++++++++++++------------
 mm/percpu.c            | 37 +++++++++++++++++++++++--------------
 5 files changed, 67 insertions(+), 35 deletions(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index 842ef3877d5b..7ae21ed9453d 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -397,6 +397,7 @@ static inline int module_is_live(struct module *mod)
 struct module *__module_text_address(unsigned long addr);
 struct module *__module_address(unsigned long addr);
 bool is_module_address(unsigned long addr);
+bool __is_module_percpu_address(unsigned long addr, unsigned long *can_addr);
 bool is_module_percpu_address(unsigned long addr);
 bool is_module_text_address(unsigned long addr);
 
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index f05adf59041c..3e22237bf5db 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -176,6 +176,7 @@ extern int __init pcpu_page_first_chunk(size_t reserved_size,
 #endif
 
 extern void __percpu *__alloc_reserved_percpu(size_t size, size_t align);
+extern bool __is_kernel_percpu_address(unsigned long addr, unsigned long *can_addr);
 extern bool is_kernel_percpu_address(unsigned long addr);
 
 #if !defined(CONFIG_SMP) || !defined(CONFIG_HAVE_SETUP_PER_CPU_AREA)
diff --git a/kernel/lockdep.c b/kernel/lockdep.c
index b74f7a5dc812..dadf83fa87df 100644
--- a/kernel/lockdep.c
+++ b/kernel/lockdep.c
@@ -650,6 +650,7 @@ look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
 	struct lockdep_subclass_key *key;
 	struct list_head *hash_head;
 	struct lock_class *class;
+	bool is_static = false;
 
 #ifdef CONFIG_DEBUG_LOCKDEP
 	/*
@@ -677,10 +678,23 @@ look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
 
 	/*
 	 * Static locks do not have their class-keys yet - for them the key
-	 * is the lock object itself:
+	 * is the lock object itself. If the lock is in the per cpu area,
+	 * the canonical address of the lock (per cpu offset removed) is
+	 * used.
 	 */
-	if (unlikely(!lock->key))
-		lock->key = (void *)lock;
+	if (unlikely(!lock->key)) {
+		unsigned long can_addr, addr = (unsigned long)lock;
+
+		if (__is_kernel_percpu_address(addr, &can_addr))
+			lock->key = (void *)can_addr;
+		else if (__is_module_percpu_address(addr, &can_addr))
+			lock->key = (void *)can_addr;
+		else if (static_obj(lock))
+			lock->key = (void *)lock;
+		else
+			return ERR_PTR(-EINVAL);
+		is_static = true;
+	}
 
 	/*
 	 * NOTE: the class-key must be unique. For dynamic locks, a static
@@ -710,7 +724,7 @@ look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
 		}
 	}
 
-	return NULL;
+	return is_static || static_obj(lock->key) ? NULL : ERR_PTR(-EINVAL);
 }
 
 /*
@@ -727,13 +741,13 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	unsigned long flags;
 
 	class = look_up_lock_class(lock, subclass);
-	if (likely(class))
+	if (likely(!IS_ERR_OR_NULL(class)))
 		goto out_set_class_cache;
 
 	/*
 	 * Debug-check: all keys must be persistent!
- 	 */
-	if (!static_obj(lock->key)) {
+	 */
+	if (IS_ERR(class)) {
 		debug_locks_off();
 		printk("INFO: trying to register non-static key.\n");
 		printk("the code is fine but needs lockdep annotation.\n");
@@ -3275,7 +3289,7 @@ static int match_held_lock(struct held_lock *hlock, struct lockdep_map *lock)
 		 * Clearly if the lock hasn't been acquired _ever_, we're not
 		 * holding it either, so report failure.
 		 */
-		if (!class)
+		if (IS_ERR_OR_NULL(class))
 			return 0;
 
 		/*
@@ -3937,7 +3951,7 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 		 * If the class exists we look it up and zap it:
 		 */
 		class = look_up_lock_class(lock, j);
-		if (class)
+		if (!IS_ERR_OR_NULL(class))
 			zap_class(class);
 	}
 	/*
diff --git a/kernel/module.c b/kernel/module.c
index a8c4d4163a41..4347aa243941 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -530,16 +530,7 @@ static void percpu_modcopy(struct module *mod,
 		memcpy(per_cpu_ptr(mod->percpu, cpu), from, size);
 }
 
-/**
- * is_module_percpu_address - test whether address is from module static percpu
- * @addr: address to test
- *
- * Test whether @addr belongs to module static percpu area.
- *
- * RETURNS:
- * %true if @addr is from module static percpu area
- */
-bool is_module_percpu_address(unsigned long addr)
+bool __is_module_percpu_address(unsigned long addr, unsigned long *can_addr)
 {
 	struct module *mod;
 	unsigned int cpu;
@@ -553,9 +544,11 @@ bool is_module_percpu_address(unsigned long addr)
 			continue;
 		for_each_possible_cpu(cpu) {
 			void *start = per_cpu_ptr(mod->percpu, cpu);
+			void *va = (void *)addr;
 
-			if ((void *)addr >= start &&
-			    (void *)addr < start + mod->percpu_size) {
+			if (va >= start && va < start + mod->percpu_size) {
+				if (can_addr)
+					*can_addr = (unsigned long) (va - start);
 				preempt_enable();
 				return true;
 			}
@@ -566,6 +559,20 @@ bool is_module_percpu_address(unsigned long addr)
 	return false;
 }
 
+/**
+ * is_module_percpu_address - test whether address is from module static percpu
+ * @addr: address to test
+ *
+ * Test whether @addr belongs to module static percpu area.
+ *
+ * RETURNS:
+ * %true if @addr is from module static percpu area
+ */
+bool is_module_percpu_address(unsigned long addr)
+{
+	return __is_module_percpu_address(addr, NULL);
+}
+
 #else /* ... !CONFIG_SMP */
 
 static inline void __percpu *mod_percpu(struct module *mod)
diff --git a/mm/percpu.c b/mm/percpu.c
index 25e2ea52db82..b96d41d20b1e 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -946,18 +946,7 @@ void free_percpu(void __percpu *ptr)
 }
 EXPORT_SYMBOL_GPL(free_percpu);
 
-/**
- * is_kernel_percpu_address - test whether address is from static percpu area
- * @addr: address to test
- *
- * Test whether @addr belongs to in-kernel static percpu area.  Module
- * static percpu areas are not considered.  For those, use
- * is_module_percpu_address().
- *
- * RETURNS:
- * %true if @addr is from in-kernel static percpu area, %false otherwise.
- */
-bool is_kernel_percpu_address(unsigned long addr)
+bool __is_kernel_percpu_address(unsigned long addr, unsigned long *can_addr)
 {
 #ifdef CONFIG_SMP
 	const size_t static_size = __per_cpu_end - __per_cpu_start;
@@ -966,16 +955,36 @@ bool is_kernel_percpu_address(unsigned long addr)
 
 	for_each_possible_cpu(cpu) {
 		void *start = per_cpu_ptr(base, cpu);
+		void *va = (void *)addr;
 
-		if ((void *)addr >= start && (void *)addr < start + static_size)
+		if (va >= start && va < start + static_size) {
+			if (can_addr)
+				*can_addr = (unsigned long) (va - start);
 			return true;
-        }
+		}
+	}
 #endif
 	/* on UP, can't distinguish from other static vars, always false */
 	return false;
 }
 
 /**
+ * is_kernel_percpu_address - test whether address is from static percpu area
+ * @addr: address to test
+ *
+ * Test whether @addr belongs to in-kernel static percpu area.  Module
+ * static percpu areas are not considered.  For those, use
+ * is_module_percpu_address().
+ *
+ * RETURNS:
+ * %true if @addr is from in-kernel static percpu area, %false otherwise.
+ */
+bool is_kernel_percpu_address(unsigned long addr)
+{
+	return __is_kernel_percpu_address(addr, NULL);
+}
+
+/**
  * per_cpu_ptr_to_phys - convert translated percpu address to physical address
  * @addr: the address to be converted to physical address
  *
-- 
2.10.2


* [PATCH RT 5/7] rt: Drop the removal of _GPL from rt_mutex_destroy()s EXPORT_SYMBOL
  2017-03-08 20:22 [PATCH RT 0/7] Linux 3.12.70-rt95-rc1 Steven Rostedt
                   ` (3 preceding siblings ...)
  2017-03-08 20:22 ` [PATCH RT 4/7] lockdep: Handle statically initialized PER_CPU locks proper Steven Rostedt
@ 2017-03-08 20:22 ` Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 6/7] lockdep: Fix compilation error for !CONFIG_MODULES and !CONFIG_SMP Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 7/7] Linux 3.12.70-rt95-rc1 Steven Rostedt
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:22 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright

[-- Attachment #1: 0005-rt-Drop-the-removal-of-_GPL-from-rt_mutex_destroy-s-.patch --]
[-- Type: text/plain, Size: 847 bytes --]

3.12.70-rt95-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

What we have now should be enough; the non-GPL EXPORT_SYMBOL statement for
rt_mutex_destroy() is not required anymore.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/rtmutex.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 63ac099b3b8d..43d98d373809 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -2004,7 +2004,8 @@ void rt_mutex_destroy(struct rt_mutex *lock)
 	lock->magic = NULL;
 #endif
 }
-EXPORT_SYMBOL(rt_mutex_destroy);
+
+EXPORT_SYMBOL_GPL(rt_mutex_destroy);
 
 /**
  * __rt_mutex_init - initialize the rt lock
-- 
2.10.2


* [PATCH RT 6/7] lockdep: Fix compilation error for !CONFIG_MODULES and !CONFIG_SMP
  2017-03-08 20:22 [PATCH RT 0/7] Linux 3.12.70-rt95-rc1 Steven Rostedt
                   ` (4 preceding siblings ...)
  2017-03-08 20:22 ` [PATCH RT 5/7] rt: Drop the removal of _GPL from rt_mutex_destroy()s EXPORT_SYMBOL Steven Rostedt
@ 2017-03-08 20:22 ` Steven Rostedt
  2017-03-08 20:22 ` [PATCH RT 7/7] Linux 3.12.70-rt95-rc1 Steven Rostedt
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:22 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Dan Murphy

[-- Attachment #1: 0006-lockdep-Fix-compilation-error-for-CONFIG_MODULES-and.patch --]
[-- Type: text/plain, Size: 1939 bytes --]

3.12.70-rt95-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Dan Murphy <dmurphy@ti.com>

When CONFIG_MODULES is not set, lockdep fails to compile:

|kernel/locking/lockdep.c: In function 'look_up_lock_class':
|kernel/locking/lockdep.c:684:12: error: implicit declaration of function
| '__is_module_percpu_address' [-Werror=implicit-function-declaration]

If CONFIG_MODULES is set but CONFIG_SMP is not, then it compiles but
fails to link at the end:

|kernel/locking/lockdep.c:684: undefined reference to `__is_module_percpu_address'
|kernel/built-in.o:(.debug_addr+0x1e674): undefined reference to `__is_module_percpu_address'

This patch adds stub definitions of the function for both cases.

Signed-off-by: Dan Murphy <dmurphy@ti.com>
[bigeasy: merge the two patches from Dan into one, adapt changelog]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/module.h | 5 +++++
 kernel/module.c        | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/include/linux/module.h b/include/linux/module.h
index 7ae21ed9453d..874504dcc825 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -543,6 +543,11 @@ static inline bool is_module_percpu_address(unsigned long addr)
 	return false;
 }
 
+static inline bool __is_module_percpu_address(unsigned long addr, unsigned long *can_addr)
+{
+	return false;
+}
+
 static inline bool is_module_text_address(unsigned long addr)
 {
 	return false;
diff --git a/kernel/module.c b/kernel/module.c
index 4347aa243941..64135e935223 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -604,6 +604,11 @@ bool is_module_percpu_address(unsigned long addr)
 	return false;
 }
 
+bool __is_module_percpu_address(unsigned long addr, unsigned long *can_addr)
+{
+	return false;
+}
+
 #endif /* CONFIG_SMP */
 
 #define MODINFO_ATTR(field)	\
-- 
2.10.2


* [PATCH RT 7/7] Linux 3.12.70-rt95-rc1
  2017-03-08 20:22 [PATCH RT 0/7] Linux 3.12.70-rt95-rc1 Steven Rostedt
                   ` (5 preceding siblings ...)
  2017-03-08 20:22 ` [PATCH RT 6/7] lockdep: Fix compilation error for !CONFIG_MODULES and !CONFIG_SMP Steven Rostedt
@ 2017-03-08 20:22 ` Steven Rostedt
  6 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:22 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright

[-- Attachment #1: 0007-Linux-3.12.70-rt95-rc1.patch --]
[-- Type: text/plain, Size: 412 bytes --]

3.12.70-rt95-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index 8d02a9bac500..fe529ae51f64 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt94
+-rt95-rc1
-- 
2.10.2


* [PATCH RT 5/7] rt: Drop the removal of _GPL from rt_mutex_destroy()s EXPORT_SYMBOL
  2017-03-08 20:30 [PATCH RT 0/7] Linux 3.10.105-rt120-rc1 Steven Rostedt
@ 2017-03-08 20:30 ` Steven Rostedt
  0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2017-03-08 20:30 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright

[-- Attachment #1: 0005-rt-Drop-the-removal-of-_GPL-from-rt_mutex_destroy-s-.patch --]
[-- Type: text/plain, Size: 849 bytes --]

3.10.105-rt120-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

What we have now should be enough; the non-GPL EXPORT_SYMBOL statement for
rt_mutex_destroy() is not required anymore.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/rtmutex.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index a28e9fd41965..a809539f443c 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -1789,7 +1789,8 @@ void rt_mutex_destroy(struct rt_mutex *lock)
 	lock->magic = NULL;
 #endif
 }
-EXPORT_SYMBOL(rt_mutex_destroy);
+
+EXPORT_SYMBOL_GPL(rt_mutex_destroy);
 
 /**
  * __rt_mutex_init - initialize the rt lock
-- 
2.10.2

