* [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2
@ 2009-10-14  6:01 Tejun Heo
  2009-10-14  6:01 ` [PATCH 01/16] vmalloc: fix use of non-existent percpu variable in put_cpu_var() Tejun Heo
                   ` (16 more replies)
  0 siblings, 17 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:01 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert

Hello, all.

This is an expanded version of Rusty's drop per_cpu__ prefix patchset
which first appeared way back and has been bit-rotting while waiting
for all archs to move to the dynamic percpu allocator.  Now that the
conversion is complete, I tried to refresh Rusty's patchset and ended
up with this rather large series.

This change has the following benefits.

* Allow unifying different percpu accessors.  With recent changes,
  static and dynamic percpu variables are handled equivalently, but we
  still have various now-equivalent but slightly different accessors.
  Dropping the special handling of static percpu variable symbols will
  allow unifying them.

* Provide proper protection against incorrect accesses.  With dynamic
  percpu variables becoming first class citizens and the addition of
  the this_cpu_*() ops, the protection offered by per_cpu__ prefixing
  has been diluted.  The usage of dynamic percpu variables and pointer
  variables will keep expanding, and the prefix simply can't cover
  anything not involving static symbols, while accessors which don't
  require the prefix give the same accessibility to both static and
  dynamic variables, circumventing any existing protection.
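
For a rough sketch of where this is headed (illustrative only; the
__percpu annotation is only introduced in 0015):

	DEFINE_PER_CPU(int, foo);		/* static, currently per_cpu__foo */
	int __percpu *bar = alloc_percpu(int);	/* dynamic */

	this_cpu_write(foo, 1);			/* with 0013-0014 applied, */
	this_cpu_write(*bar, 1);		/* one accessor serves both */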

This is the second take.  The cc list was too long on the first post
and lkml rejected it.  Changes from the first post are...

* Slight updates to patch descriptions.

* Added Acked-by's and Reviewed-by's.

* Updated against the current percpu#for-next.

  0001-vmalloc-fix-use-of-non-existent-percpu-variable-in-p.patch
  0002-percpu-make-alloc_percpu-handle-array-types.patch
  0003-percpu-remove-some-sparse-warnings.patch
  0004-percpu-make-percpu-symbols-under-kernel-and-mm-uniqu.patch
  0005-percpu-make-percpu-symbols-in-tracer-unique.patch
  0006-percpu-make-percpu-symbols-in-oprofile-unique.patch
  0007-percpu-make-percpu-symbols-in-cpufreq-unique.patch
  0008-percpu-make-percpu-symbols-in-xen-unique.patch
  0009-percpu-make-percpu-symbols-in-x86-unique.patch
  0010-percpu-make-percpu-symbols-in-powerpc-unique.patch
  0011-percpu-make-percpu-symbols-in-ia64-unique.patch
  0012-percpu-make-misc-percpu-symbols-unique.patch
  0013-percpu-remove-per_cpu__-prefix.patch
  0014-percpu-make-access-macros-universal.patch
  0015-percpu-add-__percpu-for-sparse.patch
  0016-percpu-make-accessors-check-for-percpu-pointer-in-sp.patch

0001-0002 fix misc stuff (not strictly necessary for upstream at this
point).

0003 removes existing sparse warnings triggered by percpu code.

0004-0012 make percpu symbols unique.  These patches are of larger
scope than Rusty's patch, mainly because of the global visibility of
static percpu variables (due to the __weak usage for alpha and s390).
So, I audited all percpu variables and made sure that they are unique
in the global namespace even with the per_cpu__ prefix removed and
that they don't clash with local variable usage.  If you're the
maintainer of an affected subsystem, please ack the patch.  If you
think a patch would better be routed through a different tree, please
let me know.  As long as it's a stable git tree, the patch can go
there and the percpu tree can pull from it.

0013-0014 remove the per_cpu__ prefix and make all percpu accessors
safe to use with both static and dynamic percpu variables.

0015-0016 add sparse annotations to catch incorrect accesses.  BTW,
there are way too many pre-existing sparse warnings, which makes it
very difficult to verify anything.  Is this expected?
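
Roughly, the annotations are meant to let sparse catch accesses like
the following (an illustrative sketch, not taken from the patches):

	int __percpu *p = alloc_percpu(int);

	*p = 1;				/* bad: direct dereference of a
					 * percpu pointer */
	*per_cpu_ptr(p, cpu) = 1;	/* ok: the accessor performs the
					 * address space conversion */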

This unification of static and dynamic variable handling kind of
brings out the ugliness of the current percpu interface.  There are
different macros doing the same thing, names are inconsistent, and
too many take an lvalue when they should take a pointer.  So, it
seems we can definitely use some cleanup here.
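
For example, the following two statements do essentially the same
thing, yet one takes the variable itself and the other a pointer to
it (illustrative):

	per_cpu(foo, cpu) = 1;		/* lvalue */
	*per_cpu_ptr(bar, cpu) = 1;	/* pointer */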

This patchset is available in the following git tree.

  git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu.git unified-symbols

and contains the following changes.

 arch/blackfin/mach-common/entry.S          |    4 
 arch/cris/arch-v10/kernel/entry.S          |    2 
 arch/cris/arch-v32/mm/mmu.S                |    2 
 arch/ia64/include/asm/percpu.h             |    4 
 arch/ia64/include/asm/processor.h          |    6 -
 arch/ia64/kernel/head.S                    |    4 
 arch/ia64/kernel/ia64_ksyms.c              |    4 
 arch/ia64/kernel/mca_asm.S                 |    2 
 arch/ia64/kernel/relocate_kernel.S         |    2 
 arch/ia64/kernel/setup.c                   |    4 
 arch/ia64/mm/discontig.c                   |    5 -
 arch/ia64/sn/kernel/sn2/sn2_smp.c          |    8 -
 arch/ia64/xen/irq_xen.c                    |  131 ++++++++++++++---------------
 arch/ia64/xen/time.c                       |   22 ++--
 arch/microblaze/include/asm/entry.h        |    2 
 arch/mn10300/kernel/kprobes.c              |   61 ++++++-------
 arch/parisc/lib/fixup.S                    |    8 -
 arch/powerpc/include/asm/smp.h             |    2 
 arch/powerpc/kernel/perf_callchain.c       |    4 
 arch/powerpc/kernel/setup-common.c         |    4 
 arch/powerpc/kernel/smp.c                  |    2 
 arch/powerpc/platforms/cell/interrupt.c    |   14 +--
 arch/powerpc/platforms/pseries/dtl.c       |    4 
 arch/powerpc/platforms/pseries/hvCall.S    |    2 
 arch/sparc/kernel/nmi.c                    |    6 -
 arch/sparc/kernel/rtrap_64.S               |    8 -
 arch/x86/include/asm/percpu.h              |   63 ++++++-------
 arch/x86/include/asm/system.h              |    8 -
 arch/x86/kernel/apic/nmi.c                 |    6 -
 arch/x86/kernel/cpu/common.c               |    8 -
 arch/x86/kernel/cpu/cpu_debug.c            |   30 +++---
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c |   28 +++---
 arch/x86/kernel/cpu/intel_cacheinfo.c      |   54 +++++------
 arch/x86/kernel/ds.c                       |    4 
 arch/x86/kernel/head_32.S                  |    6 -
 arch/x86/kernel/vmlinux.lds.S              |    4 
 arch/x86/kvm/svm.c                         |   63 ++++++-------
 arch/x86/xen/smp.c                         |   41 ++++-----
 arch/x86/xen/time.c                        |   24 ++---
 arch/x86/xen/xen-asm_32.S                  |    4 
 drivers/cpufreq/cpufreq.c                  |   16 +--
 drivers/cpufreq/freq_table.c               |   12 +-
 drivers/crypto/padlock-aes.c               |   12 +-
 drivers/lguest/x86/core.c                  |    6 -
 drivers/oprofile/cpu_buffer.c              |   19 +---
 drivers/oprofile/cpu_buffer.h              |    4 
 drivers/oprofile/oprofile_stats.c          |    4 
 drivers/s390/net/netiucv.c                 |    8 -
 include/asm-generic/percpu.h               |   18 ++-
 include/linux/compiler.h                   |    4 
 include/linux/percpu-defs.h                |   41 +++++----
 include/linux/percpu.h                     |   94 +++++++++++---------
 include/linux/vmstat.h                     |    8 -
 kernel/lockdep.c                           |   11 +-
 kernel/rcutorture.c                        |    8 -
 kernel/sched.c                             |    8 -
 kernel/softirq.c                           |    4 
 kernel/softlockup.c                        |   54 +++++------
 kernel/time/timer_stats.c                  |   11 +-
 kernel/trace/trace.c                       |   10 +-
 kernel/trace/trace_functions_graph.c       |    4 
 kernel/trace/trace_hw_branches.c           |   51 +++++------
 mm/percpu.c                                |    1 
 mm/slab.c                                  |   18 +--
 mm/vmalloc.c                               |    4 
 mm/vmstat.c                                |    7 -
 66 files changed, 562 insertions(+), 535 deletions(-)

Thanks.

--
tejun


* [PATCH 01/16] vmalloc: fix use of non-existent percpu variable in put_cpu_var()
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
@ 2009-10-14  6:01 ` Tejun Heo
  2009-10-14 15:06   ` Christoph Lameter
  2009-10-14  6:01 ` [PATCH 02/16] percpu: make alloc_percpu() handle array types Tejun Heo
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:01 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo, Nick Piggin

vmalloc used the non-existent percpu variable vmap_cpu_blocks instead
of the intended vmap_block_queue.  This went unnoticed because
put_cpu_var() doesn't evaluate its parameter.  Fix it.
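
For reference, put_cpu_var() currently expands to roughly the
following, which is why the bogus name was never even looked at:

	#define put_cpu_var(var)	preempt_enable()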

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Nick Piggin <npiggin@suse.de>
---
 mm/vmalloc.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 69511e6..b65cfe4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -760,7 +760,7 @@ static struct vmap_block *new_vmap_block(gfp_t gfp_mask)
 	spin_lock(&vbq->lock);
 	list_add(&vb->free_list, &vbq->free);
 	spin_unlock(&vbq->lock);
-	put_cpu_var(vmap_cpu_blocks);
+	put_cpu_var(vmap_block_queue);
 
 	return vb;
 }
@@ -825,7 +825,7 @@ again:
 		}
 		spin_unlock(&vb->lock);
 	}
-	put_cpu_var(vmap_cpu_blocks);
+	put_cpu_var(vmap_block_queue);
 	rcu_read_unlock();
 
 	if (!addr) {
-- 
1.6.4.2



* [PATCH 02/16] percpu: make alloc_percpu() handle array types
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
  2009-10-14  6:01 ` [PATCH 01/16] vmalloc: fix use of non-existent percpu variable in put_cpu_var() Tejun Heo
@ 2009-10-14  6:01 ` Tejun Heo
  2009-10-14  6:01 ` [PATCH 03/16] percpu: remove some sparse warnings Tejun Heo
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:01 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo

alloc_percpu() couldn't handle array types like "int [100]" due to
the way the return type was cast.  Fix it by using typeof() instead.
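
For example, with type == "int [100]" the old macro expanded to the
invalid cast "(int [100] *)", while typeof() yields the valid
pointer-to-array type:

	(int [100] *)__alloc_percpu(...)		/* old: not valid C */
	(typeof(int [100]) *)__alloc_percpu(...)	/* new: int (*)[100] */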

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
---
 include/linux/percpu.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 3d9ba92..519d687 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -164,8 +164,8 @@ static inline void *pcpu_lpage_remapped(void *kaddr)
 
 #endif /* CONFIG_SMP */
 
-#define alloc_percpu(type)	(type *)__alloc_percpu(sizeof(type), \
-						       __alignof__(type))
+#define alloc_percpu(type)	\
+	(typeof(type) *)__alloc_percpu(sizeof(type), __alignof__(type))
 
 /*
  * Optional methods for optimized non-lvalue per-cpu variable access.
-- 
1.6.4.2



* [PATCH 03/16] percpu: remove some sparse warnings
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
  2009-10-14  6:01 ` [PATCH 01/16] vmalloc: fix use of non-existent percpu variable in put_cpu_var() Tejun Heo
  2009-10-14  6:01 ` [PATCH 02/16] percpu: make alloc_percpu() handle array types Tejun Heo
@ 2009-10-14  6:01 ` Tejun Heo
  2009-10-14 14:20   ` Christoph Lameter
  2009-10-14  6:01 ` [PATCH 04/16] percpu: make percpu symbols under kernel/ and mm/ unique Tejun Heo
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:01 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo

Make the following changes to remove some sparse warnings.

* Make DEFINE_PER_CPU_SECTION() declare __pcpu_unique_* before
  defining it.

* Annotate pcpu_extend_area_map() that it is entered with pcpu_lock
  held, releases it and then reacquires it.

* Make percpu related macros use unique nested variable names.

* While at it, add a pcpu prefix to the __size_call[_return]() macros,
  as the upcoming sparse annotations will add percpu-specific handling
  to these macros.
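
Regarding the first item: sparse warns "symbol 'x' was not declared.
Should it be static?" about any global definition it hasn't seen a
declaration for, and the __pcpu_unique_* symbols must stay global for
the uniqueness check to work.  A reduced sketch:

	char __pcpu_unique_foo;			/* triggers the warning */

	extern char __pcpu_unique_foo;		/* declaring it first... */
	char __pcpu_unique_foo;			/* ...keeps sparse quiet */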

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <cl@linux-foundation.org>
---
 arch/x86/include/asm/percpu.h |   26 +++++++++++-----------
 include/linux/percpu-defs.h   |    1 +
 include/linux/percpu.h        |   48 ++++++++++++++++++++--------------------
 mm/percpu.c                   |    1 +
 4 files changed, 39 insertions(+), 37 deletions(-)

diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 8b5ec19..0c44196 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -74,31 +74,31 @@ extern void __bad_percpu_size(void);
 
 #define percpu_to_op(op, var, val)			\
 do {							\
-	typedef typeof(var) T__;			\
+	typedef typeof(var) pto_T__;			\
 	if (0) {					\
-		T__ tmp__;				\
-		tmp__ = (val);				\
+		pto_T__ pto_tmp__;			\
+		pto_tmp__ = (val);			\
 	}						\
 	switch (sizeof(var)) {				\
 	case 1:						\
 		asm(op "b %1,"__percpu_arg(0)		\
 		    : "+m" (var)			\
-		    : "qi" ((T__)(val)));		\
+		    : "qi" ((pto_T__)(val)));		\
 		break;					\
 	case 2:						\
 		asm(op "w %1,"__percpu_arg(0)		\
 		    : "+m" (var)			\
-		    : "ri" ((T__)(val)));		\
+		    : "ri" ((pto_T__)(val)));		\
 		break;					\
 	case 4:						\
 		asm(op "l %1,"__percpu_arg(0)		\
 		    : "+m" (var)			\
-		    : "ri" ((T__)(val)));		\
+		    : "ri" ((pto_T__)(val)));		\
 		break;					\
 	case 8:						\
 		asm(op "q %1,"__percpu_arg(0)		\
 		    : "+m" (var)			\
-		    : "re" ((T__)(val)));		\
+		    : "re" ((pto_T__)(val)));		\
 		break;					\
 	default: __bad_percpu_size();			\
 	}						\
@@ -106,31 +106,31 @@ do {							\
 
 #define percpu_from_op(op, var, constraint)		\
 ({							\
-	typeof(var) ret__;				\
+	typeof(var) pfo_ret__;				\
 	switch (sizeof(var)) {				\
 	case 1:						\
 		asm(op "b "__percpu_arg(1)",%0"		\
-		    : "=q" (ret__)			\
+		    : "=q" (pfo_ret__)			\
 		    : constraint);			\
 		break;					\
 	case 2:						\
 		asm(op "w "__percpu_arg(1)",%0"		\
-		    : "=r" (ret__)			\
+		    : "=r" (pfo_ret__)			\
 		    : constraint);			\
 		break;					\
 	case 4:						\
 		asm(op "l "__percpu_arg(1)",%0"		\
-		    : "=r" (ret__)			\
+		    : "=r" (pfo_ret__)			\
 		    : constraint);			\
 		break;					\
 	case 8:						\
 		asm(op "q "__percpu_arg(1)",%0"		\
-		    : "=r" (ret__)			\
+		    : "=r" (pfo_ret__)			\
 		    : constraint);			\
 		break;					\
 	default: __bad_percpu_size();			\
 	}						\
-	ret__;						\
+	pfo_ret__;					\
 })
 
 /*
diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
index 9bd0319..5a5d6ce 100644
--- a/include/linux/percpu-defs.h
+++ b/include/linux/percpu-defs.h
@@ -60,6 +60,7 @@
 
 #define DEFINE_PER_CPU_SECTION(type, name, sec)				\
 	__PCPU_DUMMY_ATTRS char __pcpu_scope_##name;			\
+	extern __PCPU_DUMMY_ATTRS char __pcpu_unique_##name;		\
 	__PCPU_DUMMY_ATTRS char __pcpu_unique_##name;			\
 	__PCPU_ATTRS(sec) PER_CPU_DEF_ATTRIBUTES __weak			\
 	__typeof__(type) per_cpu__##name
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 519d687..522f421 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -226,20 +226,20 @@ do {									\
 
 extern void __bad_size_call_parameter(void);
 
-#define __size_call_return(stem, variable)				\
-({	typeof(variable) ret__;						\
+#define __pcpu_size_call_return(stem, variable)				\
+({	typeof(variable) pscr_ret__;					\
 	switch(sizeof(variable)) {					\
-	case 1: ret__ = stem##1(variable);break;			\
-	case 2: ret__ = stem##2(variable);break;			\
-	case 4: ret__ = stem##4(variable);break;			\
-	case 8: ret__ = stem##8(variable);break;			\
+	case 1: pscr_ret__ = stem##1(variable);break;			\
+	case 2: pscr_ret__ = stem##2(variable);break;			\
+	case 4: pscr_ret__ = stem##4(variable);break;			\
+	case 8: pscr_ret__ = stem##8(variable);break;			\
 	default:							\
 		__bad_size_call_parameter();break;			\
 	}								\
-	ret__;								\
+	pscr_ret__;							\
 })
 
-#define __size_call(stem, variable, ...)				\
+#define __pcpu_size_call(stem, variable, ...)				\
 do {									\
 	switch(sizeof(variable)) {					\
 		case 1: stem##1(variable, __VA_ARGS__);break;		\
@@ -299,7 +299,7 @@ do {									\
 # ifndef this_cpu_read_8
 #  define this_cpu_read_8(pcp)	_this_cpu_generic_read(pcp)
 # endif
-# define this_cpu_read(pcp)	__size_call_return(this_cpu_read_, (pcp))
+# define this_cpu_read(pcp)	__pcpu_size_call_return(this_cpu_read_, (pcp))
 #endif
 
 #define _this_cpu_generic_to_op(pcp, val, op)				\
@@ -322,7 +322,7 @@ do {									\
 # ifndef this_cpu_write_8
 #  define this_cpu_write_8(pcp, val)	_this_cpu_generic_to_op((pcp), (val), =)
 # endif
-# define this_cpu_write(pcp, val)	__size_call(this_cpu_write_, (pcp), (val))
+# define this_cpu_write(pcp, val)	__pcpu_size_call(this_cpu_write_, (pcp), (val))
 #endif
 
 #ifndef this_cpu_add
@@ -338,7 +338,7 @@ do {									\
 # ifndef this_cpu_add_8
 #  define this_cpu_add_8(pcp, val)	_this_cpu_generic_to_op((pcp), (val), +=)
 # endif
-# define this_cpu_add(pcp, val)		__size_call(this_cpu_add_, (pcp), (val))
+# define this_cpu_add(pcp, val)		__pcpu_size_call(this_cpu_add_, (pcp), (val))
 #endif
 
 #ifndef this_cpu_sub
@@ -366,7 +366,7 @@ do {									\
 # ifndef this_cpu_and_8
 #  define this_cpu_and_8(pcp, val)	_this_cpu_generic_to_op((pcp), (val), &=)
 # endif
-# define this_cpu_and(pcp, val)		__size_call(this_cpu_and_, (pcp), (val))
+# define this_cpu_and(pcp, val)		__pcpu_size_call(this_cpu_and_, (pcp), (val))
 #endif
 
 #ifndef this_cpu_or
@@ -382,7 +382,7 @@ do {									\
 # ifndef this_cpu_or_8
 #  define this_cpu_or_8(pcp, val)	_this_cpu_generic_to_op((pcp), (val), |=)
 # endif
-# define this_cpu_or(pcp, val)		__size_call(this_cpu_or_, (pcp), (val))
+# define this_cpu_or(pcp, val)		__pcpu_size_call(this_cpu_or_, (pcp), (val))
 #endif
 
 #ifndef this_cpu_xor
@@ -398,7 +398,7 @@ do {									\
 # ifndef this_cpu_xor_8
 #  define this_cpu_xor_8(pcp, val)	_this_cpu_generic_to_op((pcp), (val), ^=)
 # endif
-# define this_cpu_xor(pcp, val)		__size_call(this_cpu_or_, (pcp), (val))
+# define this_cpu_xor(pcp, val)		__pcpu_size_call(this_cpu_or_, (pcp), (val))
 #endif
 
 /*
@@ -428,7 +428,7 @@ do {									\
 # ifndef __this_cpu_read_8
 #  define __this_cpu_read_8(pcp)	(*__this_cpu_ptr(&(pcp)))
 # endif
-# define __this_cpu_read(pcp)	__size_call_return(__this_cpu_read_, (pcp))
+# define __this_cpu_read(pcp)	__pcpu_size_call_return(__this_cpu_read_, (pcp))
 #endif
 
 #define __this_cpu_generic_to_op(pcp, val, op)				\
@@ -449,7 +449,7 @@ do {									\
 # ifndef __this_cpu_write_8
 #  define __this_cpu_write_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), =)
 # endif
-# define __this_cpu_write(pcp, val)	__size_call(__this_cpu_write_, (pcp), (val))
+# define __this_cpu_write(pcp, val)	__pcpu_size_call(__this_cpu_write_, (pcp), (val))
 #endif
 
 #ifndef __this_cpu_add
@@ -465,7 +465,7 @@ do {									\
 # ifndef __this_cpu_add_8
 #  define __this_cpu_add_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), +=)
 # endif
-# define __this_cpu_add(pcp, val)	__size_call(__this_cpu_add_, (pcp), (val))
+# define __this_cpu_add(pcp, val)	__pcpu_size_call(__this_cpu_add_, (pcp), (val))
 #endif
 
 #ifndef __this_cpu_sub
@@ -493,7 +493,7 @@ do {									\
 # ifndef __this_cpu_and_8
 #  define __this_cpu_and_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), &=)
 # endif
-# define __this_cpu_and(pcp, val)	__size_call(__this_cpu_and_, (pcp), (val))
+# define __this_cpu_and(pcp, val)	__pcpu_size_call(__this_cpu_and_, (pcp), (val))
 #endif
 
 #ifndef __this_cpu_or
@@ -509,7 +509,7 @@ do {									\
 # ifndef __this_cpu_or_8
 #  define __this_cpu_or_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), |=)
 # endif
-# define __this_cpu_or(pcp, val)	__size_call(__this_cpu_or_, (pcp), (val))
+# define __this_cpu_or(pcp, val)	__pcpu_size_call(__this_cpu_or_, (pcp), (val))
 #endif
 
 #ifndef __this_cpu_xor
@@ -525,7 +525,7 @@ do {									\
 # ifndef __this_cpu_xor_8
 #  define __this_cpu_xor_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), ^=)
 # endif
-# define __this_cpu_xor(pcp, val)	__size_call(__this_cpu_xor_, (pcp), (val))
+# define __this_cpu_xor(pcp, val)	__pcpu_size_call(__this_cpu_xor_, (pcp), (val))
 #endif
 
 /*
@@ -556,7 +556,7 @@ do {									\
 # ifndef irqsafe_cpu_add_8
 #  define irqsafe_cpu_add_8(pcp, val) irqsafe_cpu_generic_to_op((pcp), (val), +=)
 # endif
-# define irqsafe_cpu_add(pcp, val) __size_call(irqsafe_cpu_add_, (pcp), (val))
+# define irqsafe_cpu_add(pcp, val) __pcpu_size_call(irqsafe_cpu_add_, (pcp), (val))
 #endif
 
 #ifndef irqsafe_cpu_sub
@@ -584,7 +584,7 @@ do {									\
 # ifndef irqsafe_cpu_and_8
 #  define irqsafe_cpu_and_8(pcp, val) irqsafe_cpu_generic_to_op((pcp), (val), &=)
 # endif
-# define irqsafe_cpu_and(pcp, val) __size_call(irqsafe_cpu_and_, (val))
+# define irqsafe_cpu_and(pcp, val) __pcpu_size_call(irqsafe_cpu_and_, (val))
 #endif
 
 #ifndef irqsafe_cpu_or
@@ -600,7 +600,7 @@ do {									\
 # ifndef irqsafe_cpu_or_8
 #  define irqsafe_cpu_or_8(pcp, val) irqsafe_cpu_generic_to_op((pcp), (val), |=)
 # endif
-# define irqsafe_cpu_or(pcp, val) __size_call(irqsafe_cpu_or_, (val))
+# define irqsafe_cpu_or(pcp, val) __pcpu_size_call(irqsafe_cpu_or_, (val))
 #endif
 
 #ifndef irqsafe_cpu_xor
@@ -616,7 +616,7 @@ do {									\
 # ifndef irqsafe_cpu_xor_8
 #  define irqsafe_cpu_xor_8(pcp, val) irqsafe_cpu_generic_to_op((pcp), (val), ^=)
 # endif
-# define irqsafe_cpu_xor(pcp, val) __size_call(irqsafe_cpu_xor_, (val))
+# define irqsafe_cpu_xor(pcp, val) __pcpu_size_call(irqsafe_cpu_xor_, (val))
 #endif
 
 #endif /* __LINUX_PERCPU_H */
diff --git a/mm/percpu.c b/mm/percpu.c
index ec158bb..e2e80fc 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -365,6 +365,7 @@ static struct pcpu_chunk *pcpu_chunk_addr_search(void *addr)
  * 0 if noop, 1 if successfully extended, -errno on failure.
  */
 static int pcpu_extend_area_map(struct pcpu_chunk *chunk)
+	__releases(lock) __acquires(lock)
 {
 	int new_alloc;
 	int *new;
-- 
1.6.4.2



* [PATCH 04/16] percpu: make percpu symbols under kernel/ and mm/ unique
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (2 preceding siblings ...)
  2009-10-14  6:01 ` [PATCH 03/16] percpu: remove some sparse warnings Tejun Heo
@ 2009-10-14  6:01 ` Tejun Heo
  2009-10-14  6:01 ` [PATCH 05/16] percpu: make percpu symbols in tracer unique Tejun Heo
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:01 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo, Nick Piggin

This patch updates percpu related symbols under kernel/ and mm/ such
that percpu symbols are unique and don't clash with local symbols.
This serves two purposes: decreasing the possibility of global percpu
symbol collisions and allowing the per_cpu__ prefix to be dropped from
percpu symbols.
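
For example, kernel/lockdep.c has both a percpu variable and a
function named lock_stats (visible in the hunks below):

	static DEFINE_PER_CPU(struct lock_class_stats[MAX_LOCKDEP_KEYS],
			      lock_stats);
	struct lock_class_stats lock_stats(struct lock_class *class);

This is fine while the variable's real symbol is per_cpu__lock_stats,
but it becomes an outright clash once the prefix is dropped.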

* kernel/lockdep.c: s/lock_stats/cpu_lock_stats/

* kernel/sched.c: s/init_rt_rq/init_rt_rq_var/	(any better idea?)
  		  s/sched_group_cpus/sched_groups/

* kernel/softirq.c: s/ksoftirqd/run_ksoftirqd/

* kernel/softlockup.c: s/(*)_timestamp/softlockup_\1_ts/
  		       s/watchdog_task/softlockup_watchdog/
		       s/timestamp/ts/ for local variables

* kernel/time/timer_stats: s/lookup_lock/tstats_lookup_lock/

* mm/slab.c: s/reap_work/slab_reap_work/
  	     s/reap_node/slab_reap_node/

* mm/vmstat.c: local variable renamed to avoid collision with the
  vmstat_work percpu variable

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: (slab/vmstat) Christoph Lameter <cl@linux-foundation.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>
---
 kernel/lockdep.c          |   11 +++++----
 kernel/sched.c            |    8 +++---
 kernel/softirq.c          |    4 +-
 kernel/softlockup.c       |   54 ++++++++++++++++++++++----------------------
 kernel/time/timer_stats.c |   11 +++++----
 mm/slab.c                 |   18 +++++++-------
 mm/vmstat.c               |    7 ++---
 7 files changed, 57 insertions(+), 56 deletions(-)

diff --git a/kernel/lockdep.c b/kernel/lockdep.c
index 3815ac1..8631320 100644
--- a/kernel/lockdep.c
+++ b/kernel/lockdep.c
@@ -140,7 +140,8 @@ static inline struct lock_class *hlock_class(struct held_lock *hlock)
 }
 
 #ifdef CONFIG_LOCK_STAT
-static DEFINE_PER_CPU(struct lock_class_stats[MAX_LOCKDEP_KEYS], lock_stats);
+static DEFINE_PER_CPU(struct lock_class_stats[MAX_LOCKDEP_KEYS],
+		      cpu_lock_stats);
 
 static int lock_point(unsigned long points[], unsigned long ip)
 {
@@ -186,7 +187,7 @@ struct lock_class_stats lock_stats(struct lock_class *class)
 	memset(&stats, 0, sizeof(struct lock_class_stats));
 	for_each_possible_cpu(cpu) {
 		struct lock_class_stats *pcs =
-			&per_cpu(lock_stats, cpu)[class - lock_classes];
+			&per_cpu(cpu_lock_stats, cpu)[class - lock_classes];
 
 		for (i = 0; i < ARRAY_SIZE(stats.contention_point); i++)
 			stats.contention_point[i] += pcs->contention_point[i];
@@ -213,7 +214,7 @@ void clear_lock_stats(struct lock_class *class)
 
 	for_each_possible_cpu(cpu) {
 		struct lock_class_stats *cpu_stats =
-			&per_cpu(lock_stats, cpu)[class - lock_classes];
+			&per_cpu(cpu_lock_stats, cpu)[class - lock_classes];
 
 		memset(cpu_stats, 0, sizeof(struct lock_class_stats));
 	}
@@ -223,12 +224,12 @@ void clear_lock_stats(struct lock_class *class)
 
 static struct lock_class_stats *get_lock_stats(struct lock_class *class)
 {
-	return &get_cpu_var(lock_stats)[class - lock_classes];
+	return &get_cpu_var(cpu_lock_stats)[class - lock_classes];
 }
 
 static void put_lock_stats(struct lock_class_stats *stats)
 {
-	put_cpu_var(lock_stats);
+	put_cpu_var(cpu_lock_stats);
 }
 
 static void lock_release_holdtime(struct held_lock *hlock)
diff --git a/kernel/sched.c b/kernel/sched.c
index 1535f38..854ab41 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -298,7 +298,7 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(struct cfs_rq, init_tg_cfs_rq);
 
 #ifdef CONFIG_RT_GROUP_SCHED
 static DEFINE_PER_CPU(struct sched_rt_entity, init_sched_rt_entity);
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct rt_rq, init_rt_rq);
+static DEFINE_PER_CPU_SHARED_ALIGNED(struct rt_rq, init_rt_rq_var);
 #endif /* CONFIG_RT_GROUP_SCHED */
 #else /* !CONFIG_USER_SCHED */
 #define root_task_group init_task_group
@@ -8199,14 +8199,14 @@ enum s_alloc {
  */
 #ifdef CONFIG_SCHED_SMT
 static DEFINE_PER_CPU(struct static_sched_domain, cpu_domains);
-static DEFINE_PER_CPU(struct static_sched_group, sched_group_cpus);
+static DEFINE_PER_CPU(struct static_sched_group, sched_groups);
 
 static int
 cpu_to_cpu_group(int cpu, const struct cpumask *cpu_map,
 		 struct sched_group **sg, struct cpumask *unused)
 {
 	if (sg)
-		*sg = &per_cpu(sched_group_cpus, cpu).sg;
+		*sg = &per_cpu(sched_groups, cpu).sg;
 	return cpu;
 }
 #endif /* CONFIG_SCHED_SMT */
@@ -9470,7 +9470,7 @@ void __init sched_init(void)
 #elif defined CONFIG_USER_SCHED
 		init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, 0, NULL);
 		init_tg_rt_entry(&init_task_group,
-				&per_cpu(init_rt_rq, i),
+				&per_cpu(init_rt_rq_var, i),
 				&per_cpu(init_sched_rt_entity, i), i, 1,
 				root_task_group.rt_se[i]);
 #endif
diff --git a/kernel/softirq.c b/kernel/softirq.c
index f8749e5..0740dfd 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -697,7 +697,7 @@ void __init softirq_init(void)
 	open_softirq(HI_SOFTIRQ, tasklet_hi_action);
 }
 
-static int ksoftirqd(void * __bind_cpu)
+static int run_ksoftirqd(void * __bind_cpu)
 {
 	set_current_state(TASK_INTERRUPTIBLE);
 
@@ -810,7 +810,7 @@ static int __cpuinit cpu_callback(struct notifier_block *nfb,
 	switch (action) {
 	case CPU_UP_PREPARE:
 	case CPU_UP_PREPARE_FROZEN:
-		p = kthread_create(ksoftirqd, hcpu, "ksoftirqd/%d", hotcpu);
+		p = kthread_create(run_ksoftirqd, hcpu, "ksoftirqd/%d", hotcpu);
 		if (IS_ERR(p)) {
 			printk("ksoftirqd for %i failed\n", hotcpu);
 			return NOTIFY_BAD;
diff --git a/kernel/softlockup.c b/kernel/softlockup.c
index 81324d1..d225790 100644
--- a/kernel/softlockup.c
+++ b/kernel/softlockup.c
@@ -22,9 +22,9 @@
 
 static DEFINE_SPINLOCK(print_lock);
 
-static DEFINE_PER_CPU(unsigned long, touch_timestamp);
-static DEFINE_PER_CPU(unsigned long, print_timestamp);
-static DEFINE_PER_CPU(struct task_struct *, watchdog_task);
+static DEFINE_PER_CPU(unsigned long, softlockup_touch_ts); /* touch timestamp */
+static DEFINE_PER_CPU(unsigned long, softlockup_print_ts); /* print timestamp */
+static DEFINE_PER_CPU(struct task_struct *, softlockup_watchdog);
 
 static int __read_mostly did_panic;
 int __read_mostly softlockup_thresh = 60;
@@ -70,12 +70,12 @@ static void __touch_softlockup_watchdog(void)
 {
 	int this_cpu = raw_smp_processor_id();
 
-	__raw_get_cpu_var(touch_timestamp) = get_timestamp(this_cpu);
+	__raw_get_cpu_var(softlockup_touch_ts) = get_timestamp(this_cpu);
 }
 
 void touch_softlockup_watchdog(void)
 {
-	__raw_get_cpu_var(touch_timestamp) = 0;
+	__raw_get_cpu_var(softlockup_touch_ts) = 0;
 }
 EXPORT_SYMBOL(touch_softlockup_watchdog);
 
@@ -85,7 +85,7 @@ void touch_all_softlockup_watchdogs(void)
 
 	/* Cause each CPU to re-update its timestamp rather than complain */
 	for_each_online_cpu(cpu)
-		per_cpu(touch_timestamp, cpu) = 0;
+		per_cpu(softlockup_touch_ts, cpu) = 0;
 }
 EXPORT_SYMBOL(touch_all_softlockup_watchdogs);
 
@@ -104,28 +104,28 @@ int proc_dosoftlockup_thresh(struct ctl_table *table, int write,
 void softlockup_tick(void)
 {
 	int this_cpu = smp_processor_id();
-	unsigned long touch_timestamp = per_cpu(touch_timestamp, this_cpu);
-	unsigned long print_timestamp;
+	unsigned long touch_ts = per_cpu(softlockup_touch_ts, this_cpu);
+	unsigned long print_ts;
 	struct pt_regs *regs = get_irq_regs();
 	unsigned long now;
 
 	/* Is detection switched off? */
-	if (!per_cpu(watchdog_task, this_cpu) || softlockup_thresh <= 0) {
+	if (!per_cpu(softlockup_watchdog, this_cpu) || softlockup_thresh <= 0) {
 		/* Be sure we don't false trigger if switched back on */
-		if (touch_timestamp)
-			per_cpu(touch_timestamp, this_cpu) = 0;
+		if (touch_ts)
+			per_cpu(softlockup_touch_ts, this_cpu) = 0;
 		return;
 	}
 
-	if (touch_timestamp == 0) {
+	if (touch_ts == 0) {
 		__touch_softlockup_watchdog();
 		return;
 	}
 
-	print_timestamp = per_cpu(print_timestamp, this_cpu);
+	print_ts = per_cpu(softlockup_print_ts, this_cpu);
 
 	/* report at most once a second */
-	if (print_timestamp == touch_timestamp || did_panic)
+	if (print_ts == touch_ts || did_panic)
 		return;
 
 	/* do not print during early bootup: */
@@ -140,18 +140,18 @@ void softlockup_tick(void)
 	 * Wake up the high-prio watchdog task twice per
 	 * threshold timespan.
 	 */
-	if (now > touch_timestamp + softlockup_thresh/2)
-		wake_up_process(per_cpu(watchdog_task, this_cpu));
+	if (now > touch_ts + softlockup_thresh/2)
+		wake_up_process(per_cpu(softlockup_watchdog, this_cpu));
 
 	/* Warn about unreasonable delays: */
-	if (now <= (touch_timestamp + softlockup_thresh))
+	if (now <= (touch_ts + softlockup_thresh))
 		return;
 
-	per_cpu(print_timestamp, this_cpu) = touch_timestamp;
+	per_cpu(softlockup_print_ts, this_cpu) = touch_ts;
 
 	spin_lock(&print_lock);
 	printk(KERN_ERR "BUG: soft lockup - CPU#%d stuck for %lus! [%s:%d]\n",
-			this_cpu, now - touch_timestamp,
+			this_cpu, now - touch_ts,
 			current->comm, task_pid_nr(current));
 	print_modules();
 	print_irqtrace_events(current);
@@ -209,32 +209,32 @@ cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
 	switch (action) {
 	case CPU_UP_PREPARE:
 	case CPU_UP_PREPARE_FROZEN:
-		BUG_ON(per_cpu(watchdog_task, hotcpu));
+		BUG_ON(per_cpu(softlockup_watchdog, hotcpu));
 		p = kthread_create(watchdog, hcpu, "watchdog/%d", hotcpu);
 		if (IS_ERR(p)) {
 			printk(KERN_ERR "watchdog for %i failed\n", hotcpu);
 			return NOTIFY_BAD;
 		}
-		per_cpu(touch_timestamp, hotcpu) = 0;
-		per_cpu(watchdog_task, hotcpu) = p;
+		per_cpu(softlockup_touch_ts, hotcpu) = 0;
+		per_cpu(softlockup_watchdog, hotcpu) = p;
 		kthread_bind(p, hotcpu);
 		break;
 	case CPU_ONLINE:
 	case CPU_ONLINE_FROZEN:
-		wake_up_process(per_cpu(watchdog_task, hotcpu));
+		wake_up_process(per_cpu(softlockup_watchdog, hotcpu));
 		break;
 #ifdef CONFIG_HOTPLUG_CPU
 	case CPU_UP_CANCELED:
 	case CPU_UP_CANCELED_FROZEN:
-		if (!per_cpu(watchdog_task, hotcpu))
+		if (!per_cpu(softlockup_watchdog, hotcpu))
 			break;
 		/* Unbind so it can run.  Fall thru. */
-		kthread_bind(per_cpu(watchdog_task, hotcpu),
+		kthread_bind(per_cpu(softlockup_watchdog, hotcpu),
 			     cpumask_any(cpu_online_mask));
 	case CPU_DEAD:
 	case CPU_DEAD_FROZEN:
-		p = per_cpu(watchdog_task, hotcpu);
-		per_cpu(watchdog_task, hotcpu) = NULL;
+		p = per_cpu(softlockup_watchdog, hotcpu);
+		per_cpu(softlockup_watchdog, hotcpu) = NULL;
 		kthread_stop(p);
 		break;
 #endif /* CONFIG_HOTPLUG_CPU */
diff --git a/kernel/time/timer_stats.c b/kernel/time/timer_stats.c
index ee5681f..63b117e 100644
--- a/kernel/time/timer_stats.c
+++ b/kernel/time/timer_stats.c
@@ -86,7 +86,7 @@ static DEFINE_SPINLOCK(table_lock);
 /*
  * Per-CPU lookup locks for fast hash lookup:
  */
-static DEFINE_PER_CPU(spinlock_t, lookup_lock);
+static DEFINE_PER_CPU(spinlock_t, tstats_lookup_lock);
 
 /*
  * Mutex to serialize state changes with show-stats activities:
@@ -245,7 +245,7 @@ void timer_stats_update_stats(void *timer, pid_t pid, void *startf,
 	if (likely(!timer_stats_active))
 		return;
 
-	lock = &per_cpu(lookup_lock, raw_smp_processor_id());
+	lock = &per_cpu(tstats_lookup_lock, raw_smp_processor_id());
 
 	input.timer = timer;
 	input.start_func = startf;
@@ -348,9 +348,10 @@ static void sync_access(void)
 	int cpu;
 
 	for_each_online_cpu(cpu) {
-		spin_lock_irqsave(&per_cpu(lookup_lock, cpu), flags);
+		spinlock_t *lock = &per_cpu(tstats_lookup_lock, cpu);
+		spin_lock_irqsave(lock, flags);
 		/* nothing */
-		spin_unlock_irqrestore(&per_cpu(lookup_lock, cpu), flags);
+		spin_unlock_irqrestore(lock, flags);
 	}
 }
 
@@ -408,7 +409,7 @@ void __init init_timer_stats(void)
 	int cpu;
 
 	for_each_possible_cpu(cpu)
-		spin_lock_init(&per_cpu(lookup_lock, cpu));
+		spin_lock_init(&per_cpu(tstats_lookup_lock, cpu));
 }
 
 static int __init init_tstats_procfs(void)
diff --git a/mm/slab.c b/mm/slab.c
index 7dfa481..211b174 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -685,7 +685,7 @@ int slab_is_available(void)
 	return g_cpucache_up >= EARLY;
 }
 
-static DEFINE_PER_CPU(struct delayed_work, reap_work);
+static DEFINE_PER_CPU(struct delayed_work, slab_reap_work);
 
 static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
 {
@@ -826,7 +826,7 @@ __setup("noaliencache", noaliencache_setup);
  * objects freed on different nodes from which they were allocated) and the
  * flushing of remote pcps by calling drain_node_pages.
  */
-static DEFINE_PER_CPU(unsigned long, reap_node);
+static DEFINE_PER_CPU(unsigned long, slab_reap_node);
 
 static void init_reap_node(int cpu)
 {
@@ -836,17 +836,17 @@ static void init_reap_node(int cpu)
 	if (node == MAX_NUMNODES)
 		node = first_node(node_online_map);
 
-	per_cpu(reap_node, cpu) = node;
+	per_cpu(slab_reap_node, cpu) = node;
 }
 
 static void next_reap_node(void)
 {
-	int node = __get_cpu_var(reap_node);
+	int node = __get_cpu_var(slab_reap_node);
 
 	node = next_node(node, node_online_map);
 	if (unlikely(node >= MAX_NUMNODES))
 		node = first_node(node_online_map);
-	__get_cpu_var(reap_node) = node;
+	__get_cpu_var(slab_reap_node) = node;
 }
 
 #else
@@ -863,7 +863,7 @@ static void next_reap_node(void)
  */
 static void __cpuinit start_cpu_timer(int cpu)
 {
-	struct delayed_work *reap_work = &per_cpu(reap_work, cpu);
+	struct delayed_work *reap_work = &per_cpu(slab_reap_work, cpu);
 
 	/*
 	 * When this gets called from do_initcalls via cpucache_init(),
@@ -1027,7 +1027,7 @@ static void __drain_alien_cache(struct kmem_cache *cachep,
  */
 static void reap_alien(struct kmem_cache *cachep, struct kmem_list3 *l3)
 {
-	int node = __get_cpu_var(reap_node);
+	int node = __get_cpu_var(slab_reap_node);
 
 	if (l3->alien) {
 		struct array_cache *ac = l3->alien[node];
@@ -1286,9 +1286,9 @@ static int __cpuinit cpuup_callback(struct notifier_block *nfb,
 		 * anything expensive but will only modify reap_work
 		 * and reschedule the timer.
 		*/
-		cancel_rearming_delayed_work(&per_cpu(reap_work, cpu));
+		cancel_rearming_delayed_work(&per_cpu(slab_reap_work, cpu));
 		/* Now the cache_reaper is guaranteed to be not running. */
-		per_cpu(reap_work, cpu).work.func = NULL;
+		per_cpu(slab_reap_work, cpu).work.func = NULL;
   		break;
   	case CPU_DOWN_FAILED:
   	case CPU_DOWN_FAILED_FROZEN:
diff --git a/mm/vmstat.c b/mm/vmstat.c
index c81321f..dad2327 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -883,11 +883,10 @@ static void vmstat_update(struct work_struct *w)
 
 static void __cpuinit start_cpu_timer(int cpu)
 {
-	struct delayed_work *vmstat_work = &per_cpu(vmstat_work, cpu);
+	struct delayed_work *work = &per_cpu(vmstat_work, cpu);
 
-	INIT_DELAYED_WORK_DEFERRABLE(vmstat_work, vmstat_update);
-	schedule_delayed_work_on(cpu, vmstat_work,
-				 __round_jiffies_relative(HZ, cpu));
+	INIT_DELAYED_WORK_DEFERRABLE(work, vmstat_update);
+	schedule_delayed_work_on(cpu, work, __round_jiffies_relative(HZ, cpu));
 }
 
 /*
-- 
1.6.4.2



* [PATCH 05/16] percpu: make percpu symbols in tracer unique
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (3 preceding siblings ...)
  2009-10-14  6:01 ` [PATCH 04/16] percpu: make percpu symbols under kernel/ and mm/ unique Tejun Heo
@ 2009-10-14  6:01 ` Tejun Heo
  2009-10-14  6:01 ` [PATCH 06/16] percpu: make percpu symbols in oprofile unique Tejun Heo
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:01 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo, Frederic Weisbecker

This patch updates percpu related symbols in the kernel tracer such
that percpu symbols are unique and don't clash with local symbols.
This serves two purposes: decreasing the possibility of global percpu
symbol collisions and allowing the per_cpu__ prefix to be dropped
from percpu symbols.

* kernel/trace/trace.c: s/max_data/max_tr_data/
* kernel/trace/trace_hw_branches.c: s/tracer/hwb_tracer/, s/buffer/hwb_buffer/

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
---
 kernel/trace/trace.c             |    4 +-
 kernel/trace/trace_hw_branches.c |   51 +++++++++++++++++++------------------
 2 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8439cdc..85a5ed7 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -203,7 +203,7 @@ cycle_t ftrace_now(int cpu)
  */
 static struct trace_array	max_tr;
 
-static DEFINE_PER_CPU(struct trace_array_cpu, max_data);
+static DEFINE_PER_CPU(struct trace_array_cpu, max_tr_data);
 
 /* tracer_enabled is used to toggle activation of a tracer */
 static int			tracer_enabled = 1;
@@ -4426,7 +4426,7 @@ __init static int tracer_alloc_buffers(void)
 	/* Allocate the first page for all buffers */
 	for_each_tracing_cpu(i) {
 		global_trace.data[i] = &per_cpu(global_trace_cpu, i);
-		max_tr.data[i] = &per_cpu(max_data, i);
+		max_tr.data[i] = &per_cpu(max_tr_data, i);
 	}
 
 	trace_init_cmdlines();
diff --git a/kernel/trace/trace_hw_branches.c b/kernel/trace/trace_hw_branches.c
index 23b6385..adaf7a3 100644
--- a/kernel/trace/trace_hw_branches.c
+++ b/kernel/trace/trace_hw_branches.c
@@ -20,10 +20,10 @@
 
 #define BTS_BUFFER_SIZE (1 << 13)
 
-static DEFINE_PER_CPU(struct bts_tracer *, tracer);
-static DEFINE_PER_CPU(unsigned char[BTS_BUFFER_SIZE], buffer);
+static DEFINE_PER_CPU(struct bts_tracer *, hwb_tracer);
+static DEFINE_PER_CPU(unsigned char[BTS_BUFFER_SIZE], hwb_buffer);
 
-#define this_tracer per_cpu(tracer, smp_processor_id())
+#define this_tracer per_cpu(hwb_tracer, smp_processor_id())
 
 static int trace_hw_branches_enabled __read_mostly;
 static int trace_hw_branches_suspended __read_mostly;
@@ -32,12 +32,13 @@ static struct trace_array *hw_branch_trace __read_mostly;
 
 static void bts_trace_init_cpu(int cpu)
 {
-	per_cpu(tracer, cpu) =
-		ds_request_bts_cpu(cpu, per_cpu(buffer, cpu), BTS_BUFFER_SIZE,
-				   NULL, (size_t)-1, BTS_KERNEL);
+	per_cpu(hwb_tracer, cpu) =
+		ds_request_bts_cpu(cpu, per_cpu(hwb_buffer, cpu),
+				   BTS_BUFFER_SIZE, NULL, (size_t)-1,
+				   BTS_KERNEL);
 
-	if (IS_ERR(per_cpu(tracer, cpu)))
-		per_cpu(tracer, cpu) = NULL;
+	if (IS_ERR(per_cpu(hwb_tracer, cpu)))
+		per_cpu(hwb_tracer, cpu) = NULL;
 }
 
 static int bts_trace_init(struct trace_array *tr)
@@ -51,7 +52,7 @@ static int bts_trace_init(struct trace_array *tr)
 	for_each_online_cpu(cpu) {
 		bts_trace_init_cpu(cpu);
 
-		if (likely(per_cpu(tracer, cpu)))
+		if (likely(per_cpu(hwb_tracer, cpu)))
 			trace_hw_branches_enabled = 1;
 	}
 	trace_hw_branches_suspended = 0;
@@ -67,9 +68,9 @@ static void bts_trace_reset(struct trace_array *tr)
 
 	get_online_cpus();
 	for_each_online_cpu(cpu) {
-		if (likely(per_cpu(tracer, cpu))) {
-			ds_release_bts(per_cpu(tracer, cpu));
-			per_cpu(tracer, cpu) = NULL;
+		if (likely(per_cpu(hwb_tracer, cpu))) {
+			ds_release_bts(per_cpu(hwb_tracer, cpu));
+			per_cpu(hwb_tracer, cpu) = NULL;
 		}
 	}
 	trace_hw_branches_enabled = 0;
@@ -83,8 +84,8 @@ static void bts_trace_start(struct trace_array *tr)
 
 	get_online_cpus();
 	for_each_online_cpu(cpu)
-		if (likely(per_cpu(tracer, cpu)))
-			ds_resume_bts(per_cpu(tracer, cpu));
+		if (likely(per_cpu(hwb_tracer, cpu)))
+			ds_resume_bts(per_cpu(hwb_tracer, cpu));
 	trace_hw_branches_suspended = 0;
 	put_online_cpus();
 }
@@ -95,8 +96,8 @@ static void bts_trace_stop(struct trace_array *tr)
 
 	get_online_cpus();
 	for_each_online_cpu(cpu)
-		if (likely(per_cpu(tracer, cpu)))
-			ds_suspend_bts(per_cpu(tracer, cpu));
+		if (likely(per_cpu(hwb_tracer, cpu)))
+			ds_suspend_bts(per_cpu(hwb_tracer, cpu));
 	trace_hw_branches_suspended = 1;
 	put_online_cpus();
 }
@@ -114,16 +115,16 @@ static int __cpuinit bts_hotcpu_handler(struct notifier_block *nfb,
 			bts_trace_init_cpu(cpu);
 
 			if (trace_hw_branches_suspended &&
-			    likely(per_cpu(tracer, cpu)))
-				ds_suspend_bts(per_cpu(tracer, cpu));
+			    likely(per_cpu(hwb_tracer, cpu)))
+				ds_suspend_bts(per_cpu(hwb_tracer, cpu));
 		}
 		break;
 
 	case CPU_DOWN_PREPARE:
 		/* The notification is sent with interrupts enabled. */
-		if (likely(per_cpu(tracer, cpu))) {
-			ds_release_bts(per_cpu(tracer, cpu));
-			per_cpu(tracer, cpu) = NULL;
+		if (likely(per_cpu(hwb_tracer, cpu))) {
+			ds_release_bts(per_cpu(hwb_tracer, cpu));
+			per_cpu(hwb_tracer, cpu) = NULL;
 		}
 	}
 
@@ -256,8 +257,8 @@ static void trace_bts_prepare(struct trace_iterator *iter)
 
 	get_online_cpus();
 	for_each_online_cpu(cpu)
-		if (likely(per_cpu(tracer, cpu)))
-			ds_suspend_bts(per_cpu(tracer, cpu));
+		if (likely(per_cpu(hwb_tracer, cpu)))
+			ds_suspend_bts(per_cpu(hwb_tracer, cpu));
 	/*
 	 * We need to collect the trace on the respective cpu since ftrace
 	 * implicitly adds the record for the current cpu.
@@ -266,8 +267,8 @@ static void trace_bts_prepare(struct trace_iterator *iter)
 	on_each_cpu(trace_bts_cpu, iter->tr, 1);
 
 	for_each_online_cpu(cpu)
-		if (likely(per_cpu(tracer, cpu)))
-			ds_resume_bts(per_cpu(tracer, cpu));
+		if (likely(per_cpu(hwb_tracer, cpu)))
+			ds_resume_bts(per_cpu(hwb_tracer, cpu));
 	put_online_cpus();
 }
 
-- 
1.6.4.2



* [PATCH 06/16] percpu: make percpu symbols in oprofile unique
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (4 preceding siblings ...)
  2009-10-14  6:01 ` [PATCH 05/16] percpu: make percpu symbols in tracer unique Tejun Heo
@ 2009-10-14  6:01 ` Tejun Heo
  2009-10-14  6:01 ` [PATCH 07/16] percpu: make percpu symbols in cpufreq unique Tejun Heo
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:01 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo

This patch updates percpu related symbols in oprofile such that percpu
symbols are unique and don't clash with local symbols.  This serves
two purposes: decreasing the possibility of global percpu symbol
collisions and allowing the per_cpu__ prefix to be dropped from percpu
symbols.

* drivers/oprofile/cpu_buffer.c: s/cpu_buffer/op_cpu_buffer/

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Robert Richter <robert.richter@amd.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
---
 drivers/oprofile/cpu_buffer.c     |   19 +++++++++----------
 drivers/oprofile/cpu_buffer.h     |    4 ++--
 drivers/oprofile/oprofile_stats.c |    4 ++--
 3 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/oprofile/cpu_buffer.c b/drivers/oprofile/cpu_buffer.c
index a7aae24..166b67e 100644
--- a/drivers/oprofile/cpu_buffer.c
+++ b/drivers/oprofile/cpu_buffer.c
@@ -47,7 +47,7 @@
  */
 static struct ring_buffer *op_ring_buffer_read;
 static struct ring_buffer *op_ring_buffer_write;
-DEFINE_PER_CPU(struct oprofile_cpu_buffer, cpu_buffer);
+DEFINE_PER_CPU(struct oprofile_cpu_buffer, op_cpu_buffer);
 
 static void wq_sync_buffer(struct work_struct *work);
 
@@ -61,8 +61,7 @@ unsigned long oprofile_get_cpu_buffer_size(void)
 
 void oprofile_cpu_buffer_inc_smpl_lost(void)
 {
-	struct oprofile_cpu_buffer *cpu_buf
-		= &__get_cpu_var(cpu_buffer);
+	struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(op_cpu_buffer);
 
 	cpu_buf->sample_lost_overflow++;
 }
@@ -95,7 +94,7 @@ int alloc_cpu_buffers(void)
 		goto fail;
 
 	for_each_possible_cpu(i) {
-		struct oprofile_cpu_buffer *b = &per_cpu(cpu_buffer, i);
+		struct oprofile_cpu_buffer *b = &per_cpu(op_cpu_buffer, i);
 
 		b->last_task = NULL;
 		b->last_is_kernel = -1;
@@ -122,7 +121,7 @@ void start_cpu_work(void)
 	work_enabled = 1;
 
 	for_each_online_cpu(i) {
-		struct oprofile_cpu_buffer *b = &per_cpu(cpu_buffer, i);
+		struct oprofile_cpu_buffer *b = &per_cpu(op_cpu_buffer, i);
 
 		/*
 		 * Spread the work by 1 jiffy per cpu so they dont all
@@ -139,7 +138,7 @@ void end_cpu_work(void)
 	work_enabled = 0;
 
 	for_each_online_cpu(i) {
-		struct oprofile_cpu_buffer *b = &per_cpu(cpu_buffer, i);
+		struct oprofile_cpu_buffer *b = &per_cpu(op_cpu_buffer, i);
 
 		cancel_delayed_work(&b->work);
 	}
@@ -330,7 +329,7 @@ static inline void
 __oprofile_add_ext_sample(unsigned long pc, struct pt_regs * const regs,
 			  unsigned long event, int is_kernel)
 {
-	struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(cpu_buffer);
+	struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(op_cpu_buffer);
 	unsigned long backtrace = oprofile_backtrace_depth;
 
 	/*
@@ -375,7 +374,7 @@ oprofile_write_reserve(struct op_entry *entry, struct pt_regs * const regs,
 {
 	struct op_sample *sample;
 	int is_kernel = !user_mode(regs);
-	struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(cpu_buffer);
+	struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(op_cpu_buffer);
 
 	cpu_buf->sample_received++;
 
@@ -430,13 +429,13 @@ int oprofile_write_commit(struct op_entry *entry)
 
 void oprofile_add_pc(unsigned long pc, int is_kernel, unsigned long event)
 {
-	struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(cpu_buffer);
+	struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(op_cpu_buffer);
 	log_sample(cpu_buf, pc, 0, is_kernel, event);
 }
 
 void oprofile_add_trace(unsigned long pc)
 {
-	struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(cpu_buffer);
+	struct oprofile_cpu_buffer *cpu_buf = &__get_cpu_var(op_cpu_buffer);
 
 	if (!cpu_buf->tracing)
 		return;
diff --git a/drivers/oprofile/cpu_buffer.h b/drivers/oprofile/cpu_buffer.h
index 272995d..68ea16a 100644
--- a/drivers/oprofile/cpu_buffer.h
+++ b/drivers/oprofile/cpu_buffer.h
@@ -50,7 +50,7 @@ struct oprofile_cpu_buffer {
 	struct delayed_work work;
 };
 
-DECLARE_PER_CPU(struct oprofile_cpu_buffer, cpu_buffer);
+DECLARE_PER_CPU(struct oprofile_cpu_buffer, op_cpu_buffer);
 
 /*
  * Resets the cpu buffer to a sane state.
@@ -60,7 +60,7 @@ DECLARE_PER_CPU(struct oprofile_cpu_buffer, cpu_buffer);
  */
 static inline void op_cpu_buffer_reset(int cpu)
 {
-	struct oprofile_cpu_buffer *cpu_buf = &per_cpu(cpu_buffer, cpu);
+	struct oprofile_cpu_buffer *cpu_buf = &per_cpu(op_cpu_buffer, cpu);
 
 	cpu_buf->last_is_kernel = -1;
 	cpu_buf->last_task = NULL;
diff --git a/drivers/oprofile/oprofile_stats.c b/drivers/oprofile/oprofile_stats.c
index 61689e8..917d28e 100644
--- a/drivers/oprofile/oprofile_stats.c
+++ b/drivers/oprofile/oprofile_stats.c
@@ -23,7 +23,7 @@ void oprofile_reset_stats(void)
 	int i;
 
 	for_each_possible_cpu(i) {
-		cpu_buf = &per_cpu(cpu_buffer, i);
+		cpu_buf = &per_cpu(op_cpu_buffer, i);
 		cpu_buf->sample_received = 0;
 		cpu_buf->sample_lost_overflow = 0;
 		cpu_buf->backtrace_aborted = 0;
@@ -51,7 +51,7 @@ void oprofile_create_stats_files(struct super_block *sb, struct dentry *root)
 		return;
 
 	for_each_possible_cpu(i) {
-		cpu_buf = &per_cpu(cpu_buffer, i);
+		cpu_buf = &per_cpu(op_cpu_buffer, i);
 		snprintf(buf, 10, "cpu%d", i);
 		cpudir = oprofilefs_mkdir(sb, dir, buf);
 
-- 
1.6.4.2



* [PATCH 07/16] percpu: make percpu symbols in cpufreq unique
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (5 preceding siblings ...)
  2009-10-14  6:01 ` [PATCH 06/16] percpu: make percpu symbols in oprofile unique Tejun Heo
@ 2009-10-14  6:01 ` Tejun Heo
  2009-10-14  6:01 ` [PATCH 08/16] percpu: make percpu symbols in xen unique Tejun Heo
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:01 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo

This patch updates percpu related symbols in cpufreq such that percpu
symbols are unique and don't clash with local symbols.  This serves
two purposes: decreasing the possibility of global percpu symbol
collisions and allowing the per_cpu__ prefix to be dropped from percpu
symbols.

* drivers/cpufreq/cpufreq.c: s/policy_cpu/cpufreq_policy_cpu/
* drivers/cpufreq/freq_table.c: s/show_table/cpufreq_show_table/
* arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c: s/drv_data/acfreq_data/
  					      s/old_perf/acfreq_old_perf/

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
---
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c |   28 ++++++++++++++--------------
 drivers/cpufreq/cpufreq.c                  |   16 ++++++++--------
 drivers/cpufreq/freq_table.c               |   12 ++++++------
 3 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
index 7d5c3b0..43eb346 100644
--- a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -68,9 +68,9 @@ struct acpi_cpufreq_data {
 	unsigned int cpu_feature;
 };
 
-static DEFINE_PER_CPU(struct acpi_cpufreq_data *, drv_data);
+static DEFINE_PER_CPU(struct acpi_cpufreq_data *, acfreq_data);
 
-static DEFINE_PER_CPU(struct aperfmperf, old_perf);
+static DEFINE_PER_CPU(struct aperfmperf, acfreq_old_perf);
 
 /* acpi_perf_data is a pointer to percpu data. */
 static struct acpi_processor_performance *acpi_perf_data;
@@ -214,14 +214,14 @@ static u32 get_cur_val(const struct cpumask *mask)
 	if (unlikely(cpumask_empty(mask)))
 		return 0;
 
-	switch (per_cpu(drv_data, cpumask_first(mask))->cpu_feature) {
+	switch (per_cpu(acfreq_data, cpumask_first(mask))->cpu_feature) {
 	case SYSTEM_INTEL_MSR_CAPABLE:
 		cmd.type = SYSTEM_INTEL_MSR_CAPABLE;
 		cmd.addr.msr.reg = MSR_IA32_PERF_STATUS;
 		break;
 	case SYSTEM_IO_CAPABLE:
 		cmd.type = SYSTEM_IO_CAPABLE;
-		perf = per_cpu(drv_data, cpumask_first(mask))->acpi_data;
+		perf = per_cpu(acfreq_data, cpumask_first(mask))->acpi_data;
 		cmd.addr.io.port = perf->control_register.address;
 		cmd.addr.io.bit_width = perf->control_register.bit_width;
 		break;
@@ -268,8 +268,8 @@ static unsigned int get_measured_perf(struct cpufreq_policy *policy,
 	if (smp_call_function_single(cpu, read_measured_perf_ctrs, &perf, 1))
 		return 0;
 
-	ratio = calc_aperfmperf_ratio(&per_cpu(old_perf, cpu), &perf);
-	per_cpu(old_perf, cpu) = perf;
+	ratio = calc_aperfmperf_ratio(&per_cpu(acfreq_old_perf, cpu), &perf);
+	per_cpu(acfreq_old_perf, cpu) = perf;
 
 	retval = (policy->cpuinfo.max_freq * ratio) >> APERFMPERF_SHIFT;
 
@@ -278,7 +278,7 @@ static unsigned int get_measured_perf(struct cpufreq_policy *policy,
 
 static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
 {
-	struct acpi_cpufreq_data *data = per_cpu(drv_data, cpu);
+	struct acpi_cpufreq_data *data = per_cpu(acfreq_data, cpu);
 	unsigned int freq;
 	unsigned int cached_freq;
 
@@ -322,7 +322,7 @@ static unsigned int check_freqs(const struct cpumask *mask, unsigned int freq,
 static int acpi_cpufreq_target(struct cpufreq_policy *policy,
 			       unsigned int target_freq, unsigned int relation)
 {
-	struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
+	struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);
 	struct acpi_processor_performance *perf;
 	struct cpufreq_freqs freqs;
 	struct drv_cmd cmd;
@@ -416,7 +416,7 @@ out:
 
 static int acpi_cpufreq_verify(struct cpufreq_policy *policy)
 {
-	struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
+	struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);
 
 	dprintk("acpi_cpufreq_verify\n");
 
@@ -563,7 +563,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
 		return -ENOMEM;
 
 	data->acpi_data = per_cpu_ptr(acpi_perf_data, cpu);
-	per_cpu(drv_data, cpu) = data;
+	per_cpu(acfreq_data, cpu) = data;
 
 	if (cpu_has(c, X86_FEATURE_CONSTANT_TSC))
 		acpi_cpufreq_driver.flags |= CPUFREQ_CONST_LOOPS;
@@ -714,20 +714,20 @@ err_unreg:
 	acpi_processor_unregister_performance(perf, cpu);
 err_free:
 	kfree(data);
-	per_cpu(drv_data, cpu) = NULL;
+	per_cpu(acfreq_data, cpu) = NULL;
 
 	return result;
 }
 
 static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
-	struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
+	struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);
 
 	dprintk("acpi_cpufreq_cpu_exit\n");
 
 	if (data) {
 		cpufreq_frequency_table_put_attr(policy->cpu);
-		per_cpu(drv_data, policy->cpu) = NULL;
+		per_cpu(acfreq_data, policy->cpu) = NULL;
 		acpi_processor_unregister_performance(data->acpi_data,
 						      policy->cpu);
 		kfree(data);
@@ -738,7 +738,7 @@ static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 
 static int acpi_cpufreq_resume(struct cpufreq_policy *policy)
 {
-	struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
+	struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);
 
 	dprintk("acpi_cpufreq_resume\n");
 
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 3938c78..af93a81 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -64,14 +64,14 @@ static DEFINE_SPINLOCK(cpufreq_driver_lock);
  * - Lock should not be held across
  *     __cpufreq_governor(data, CPUFREQ_GOV_STOP);
  */
-static DEFINE_PER_CPU(int, policy_cpu);
+static DEFINE_PER_CPU(int, cpufreq_policy_cpu);
 static DEFINE_PER_CPU(struct rw_semaphore, cpu_policy_rwsem);
 
 #define lock_policy_rwsem(mode, cpu)					\
 int lock_policy_rwsem_##mode						\
 (int cpu)								\
 {									\
-	int policy_cpu = per_cpu(policy_cpu, cpu);			\
+	int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu);		\
 	BUG_ON(policy_cpu == -1);					\
 	down_##mode(&per_cpu(cpu_policy_rwsem, policy_cpu));		\
 	if (unlikely(!cpu_online(cpu))) {				\
@@ -90,7 +90,7 @@ EXPORT_SYMBOL_GPL(lock_policy_rwsem_write);
 
 void unlock_policy_rwsem_read(int cpu)
 {
-	int policy_cpu = per_cpu(policy_cpu, cpu);
+	int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu);
 	BUG_ON(policy_cpu == -1);
 	up_read(&per_cpu(cpu_policy_rwsem, policy_cpu));
 }
@@ -98,7 +98,7 @@ EXPORT_SYMBOL_GPL(unlock_policy_rwsem_read);
 
 void unlock_policy_rwsem_write(int cpu)
 {
-	int policy_cpu = per_cpu(policy_cpu, cpu);
+	int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu);
 	BUG_ON(policy_cpu == -1);
 	up_write(&per_cpu(cpu_policy_rwsem, policy_cpu));
 }
@@ -799,7 +799,7 @@ int cpufreq_add_dev_policy(unsigned int cpu, struct cpufreq_policy *policy,
 
 			/* Set proper policy_cpu */
 			unlock_policy_rwsem_write(cpu);
-			per_cpu(policy_cpu, cpu) = managed_policy->cpu;
+			per_cpu(cpufreq_policy_cpu, cpu) = managed_policy->cpu;
 
 			if (lock_policy_rwsem_write(cpu) < 0) {
 				/* Should not go through policy unlock path */
@@ -906,7 +906,7 @@ int cpufreq_add_dev_interface(unsigned int cpu, struct cpufreq_policy *policy,
 		if (!cpu_online(j))
 			continue;
 		per_cpu(cpufreq_cpu_data, j) = policy;
-		per_cpu(policy_cpu, j) = policy->cpu;
+		per_cpu(cpufreq_policy_cpu, j) = policy->cpu;
 	}
 	spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
@@ -991,7 +991,7 @@ static int cpufreq_add_dev(struct sys_device *sys_dev)
 	cpumask_copy(policy->cpus, cpumask_of(cpu));
 
 	/* Initially set CPU itself as the policy_cpu */
-	per_cpu(policy_cpu, cpu) = cpu;
+	per_cpu(cpufreq_policy_cpu, cpu) = cpu;
 	ret = (lock_policy_rwsem_write(cpu) < 0);
 	WARN_ON(ret);
 
@@ -1946,7 +1946,7 @@ static int __init cpufreq_core_init(void)
 	int cpu;
 
 	for_each_possible_cpu(cpu) {
-		per_cpu(policy_cpu, cpu) = -1;
+		per_cpu(cpufreq_policy_cpu, cpu) = -1;
 		init_rwsem(&per_cpu(cpu_policy_rwsem, cpu));
 	}
 
diff --git a/drivers/cpufreq/freq_table.c b/drivers/cpufreq/freq_table.c
index a9bd3a0..0543221 100644
--- a/drivers/cpufreq/freq_table.c
+++ b/drivers/cpufreq/freq_table.c
@@ -174,7 +174,7 @@ int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
 }
 EXPORT_SYMBOL_GPL(cpufreq_frequency_table_target);
 
-static DEFINE_PER_CPU(struct cpufreq_frequency_table *, show_table);
+static DEFINE_PER_CPU(struct cpufreq_frequency_table *, cpufreq_show_table);
 /**
  * show_available_freqs - show available frequencies for the specified CPU
  */
@@ -185,10 +185,10 @@ static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf)
 	ssize_t count = 0;
 	struct cpufreq_frequency_table *table;
 
-	if (!per_cpu(show_table, cpu))
+	if (!per_cpu(cpufreq_show_table, cpu))
 		return -ENODEV;
 
-	table = per_cpu(show_table, cpu);
+	table = per_cpu(cpufreq_show_table, cpu);
 
 	for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) {
 		if (table[i].frequency == CPUFREQ_ENTRY_INVALID)
@@ -217,20 +217,20 @@ void cpufreq_frequency_table_get_attr(struct cpufreq_frequency_table *table,
 				      unsigned int cpu)
 {
 	dprintk("setting show_table for cpu %u to %p\n", cpu, table);
-	per_cpu(show_table, cpu) = table;
+	per_cpu(cpufreq_show_table, cpu) = table;
 }
 EXPORT_SYMBOL_GPL(cpufreq_frequency_table_get_attr);
 
 void cpufreq_frequency_table_put_attr(unsigned int cpu)
 {
 	dprintk("clearing show_table for cpu %u\n", cpu);
-	per_cpu(show_table, cpu) = NULL;
+	per_cpu(cpufreq_show_table, cpu) = NULL;
 }
 EXPORT_SYMBOL_GPL(cpufreq_frequency_table_put_attr);
 
 struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu)
 {
-	return per_cpu(show_table, cpu);
+	return per_cpu(cpufreq_show_table, cpu);
 }
 EXPORT_SYMBOL_GPL(cpufreq_frequency_get_table);
 
-- 
1.6.4.2


* [PATCH 08/16] percpu: make percpu symbols in xen unique
From: Tejun Heo @ 2009-10-14  6:01 UTC
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo, Jeremy Fitzhardinge, Chris Wright

This patch updates percpu-related symbols in xen so that percpu
symbols are unique and don't clash with local symbols.  This serves
two purposes: decreasing the possibility of global percpu symbol
collisions and allowing the per_cpu__ prefix to be dropped from
percpu symbols.

* arch/x86/xen/smp.c, arch/x86/xen/time.c, arch/ia64/xen/irq_xen.c:
  add xen_ prefix to percpu variables

* arch/ia64/xen/time.c: add xen_ prefix to percpu variables, drop
  processed_ prefix and make them static

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.
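
To illustrate the clash these renames guard against, consider this
hypothetical sketch (not taken from the patch).  Once the per_cpu__
prefix is gone, per_cpu() refers to the variable by its plain name,
so a local variable of the same name shadows the percpu one:

static DEFINE_PER_CPU(int, timer_irq);

static void example(unsigned int cpu)
{
	int timer_irq;		/* local shadows the percpu symbol */

	/*
	 * With the prefix, this expands to the unique symbol
	 * per_cpu__timer_irq.  Without it, the macro argument
	 * resolves to the local above and the accessor takes the
	 * address of the stack slot instead of the percpu area.
	 */
	per_cpu(timer_irq, cpu) = -1;
}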

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 arch/ia64/xen/irq_xen.c |  131 ++++++++++++++++++++++++-----------------------
 arch/ia64/xen/time.c    |   22 ++++----
 arch/x86/xen/smp.c      |   41 ++++++++-------
 arch/x86/xen/time.c     |   24 ++++----
 4 files changed, 111 insertions(+), 107 deletions(-)

diff --git a/arch/ia64/xen/irq_xen.c b/arch/ia64/xen/irq_xen.c
index f042e19..a3fb7cf 100644
--- a/arch/ia64/xen/irq_xen.c
+++ b/arch/ia64/xen/irq_xen.c
@@ -63,19 +63,19 @@ xen_free_irq_vector(int vector)
 }
 
 
-static DEFINE_PER_CPU(int, timer_irq) = -1;
-static DEFINE_PER_CPU(int, ipi_irq) = -1;
-static DEFINE_PER_CPU(int, resched_irq) = -1;
-static DEFINE_PER_CPU(int, cmc_irq) = -1;
-static DEFINE_PER_CPU(int, cmcp_irq) = -1;
-static DEFINE_PER_CPU(int, cpep_irq) = -1;
+static DEFINE_PER_CPU(int, xen_timer_irq) = -1;
+static DEFINE_PER_CPU(int, xen_ipi_irq) = -1;
+static DEFINE_PER_CPU(int, xen_resched_irq) = -1;
+static DEFINE_PER_CPU(int, xen_cmc_irq) = -1;
+static DEFINE_PER_CPU(int, xen_cmcp_irq) = -1;
+static DEFINE_PER_CPU(int, xen_cpep_irq) = -1;
 #define NAME_SIZE	15
-static DEFINE_PER_CPU(char[NAME_SIZE], timer_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], ipi_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], resched_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], cmc_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], cmcp_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], cpep_name);
+static DEFINE_PER_CPU(char[NAME_SIZE], xen_timer_name);
+static DEFINE_PER_CPU(char[NAME_SIZE], xen_ipi_name);
+static DEFINE_PER_CPU(char[NAME_SIZE], xen_resched_name);
+static DEFINE_PER_CPU(char[NAME_SIZE], xen_cmc_name);
+static DEFINE_PER_CPU(char[NAME_SIZE], xen_cmcp_name);
+static DEFINE_PER_CPU(char[NAME_SIZE], xen_cpep_name);
 #undef NAME_SIZE
 
 struct saved_irq {
@@ -144,64 +144,64 @@ __xen_register_percpu_irq(unsigned int cpu, unsigned int vec,
 	if (xen_slab_ready) {
 		switch (vec) {
 		case IA64_TIMER_VECTOR:
-			snprintf(per_cpu(timer_name, cpu),
-				 sizeof(per_cpu(timer_name, cpu)),
+			snprintf(per_cpu(xen_timer_name, cpu),
+				 sizeof(per_cpu(xen_timer_name, cpu)),
 				 "%s%d", action->name, cpu);
 			irq = bind_virq_to_irqhandler(VIRQ_ITC, cpu,
 				action->handler, action->flags,
-				per_cpu(timer_name, cpu), action->dev_id);
-			per_cpu(timer_irq, cpu) = irq;
+				per_cpu(xen_timer_name, cpu), action->dev_id);
+			per_cpu(xen_timer_irq, cpu) = irq;
 			break;
 		case IA64_IPI_RESCHEDULE:
-			snprintf(per_cpu(resched_name, cpu),
-				 sizeof(per_cpu(resched_name, cpu)),
+			snprintf(per_cpu(xen_resched_name, cpu),
+				 sizeof(per_cpu(xen_resched_name, cpu)),
 				 "%s%d", action->name, cpu);
 			irq = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR, cpu,
 				action->handler, action->flags,
-				per_cpu(resched_name, cpu), action->dev_id);
-			per_cpu(resched_irq, cpu) = irq;
+				per_cpu(xen_resched_name, cpu), action->dev_id);
+			per_cpu(xen_resched_irq, cpu) = irq;
 			break;
 		case IA64_IPI_VECTOR:
-			snprintf(per_cpu(ipi_name, cpu),
-				 sizeof(per_cpu(ipi_name, cpu)),
+			snprintf(per_cpu(xen_ipi_name, cpu),
+				 sizeof(per_cpu(xen_ipi_name, cpu)),
 				 "%s%d", action->name, cpu);
 			irq = bind_ipi_to_irqhandler(XEN_IPI_VECTOR, cpu,
 				action->handler, action->flags,
-				per_cpu(ipi_name, cpu), action->dev_id);
-			per_cpu(ipi_irq, cpu) = irq;
+				per_cpu(xen_ipi_name, cpu), action->dev_id);
+			per_cpu(xen_ipi_irq, cpu) = irq;
 			break;
 		case IA64_CMC_VECTOR:
-			snprintf(per_cpu(cmc_name, cpu),
-				 sizeof(per_cpu(cmc_name, cpu)),
+			snprintf(per_cpu(xen_cmc_name, cpu),
+				 sizeof(per_cpu(xen_cmc_name, cpu)),
 				 "%s%d", action->name, cpu);
 			irq = bind_virq_to_irqhandler(VIRQ_MCA_CMC, cpu,
-						      action->handler,
-						      action->flags,
-						      per_cpu(cmc_name, cpu),
-						      action->dev_id);
-			per_cpu(cmc_irq, cpu) = irq;
+						action->handler,
+						action->flags,
+						per_cpu(xen_cmc_name, cpu),
+						action->dev_id);
+			per_cpu(xen_cmc_irq, cpu) = irq;
 			break;
 		case IA64_CMCP_VECTOR:
-			snprintf(per_cpu(cmcp_name, cpu),
-				 sizeof(per_cpu(cmcp_name, cpu)),
+			snprintf(per_cpu(xen_cmcp_name, cpu),
+				 sizeof(per_cpu(xen_cmcp_name, cpu)),
 				 "%s%d", action->name, cpu);
 			irq = bind_ipi_to_irqhandler(XEN_CMCP_VECTOR, cpu,
-						     action->handler,
-						     action->flags,
-						     per_cpu(cmcp_name, cpu),
-						     action->dev_id);
-			per_cpu(cmcp_irq, cpu) = irq;
+						action->handler,
+						action->flags,
+						per_cpu(xen_cmcp_name, cpu),
+						action->dev_id);
+			per_cpu(xen_cmcp_irq, cpu) = irq;
 			break;
 		case IA64_CPEP_VECTOR:
-			snprintf(per_cpu(cpep_name, cpu),
-				 sizeof(per_cpu(cpep_name, cpu)),
+			snprintf(per_cpu(xen_cpep_name, cpu),
+				 sizeof(per_cpu(xen_cpep_name, cpu)),
 				 "%s%d", action->name, cpu);
 			irq = bind_ipi_to_irqhandler(XEN_CPEP_VECTOR, cpu,
-						     action->handler,
-						     action->flags,
-						     per_cpu(cpep_name, cpu),
-						     action->dev_id);
-			per_cpu(cpep_irq, cpu) = irq;
+						action->handler,
+						action->flags,
+						per_cpu(xen_cpep_name, cpu),
+						action->dev_id);
+			per_cpu(xen_cpep_irq, cpu) = irq;
 			break;
 		case IA64_CPE_VECTOR:
 		case IA64_MCA_RENDEZ_VECTOR:
@@ -275,30 +275,33 @@ unbind_evtchn_callback(struct notifier_block *nfb,
 
 	if (action == CPU_DEAD) {
 		/* Unregister evtchn.  */
-		if (per_cpu(cpep_irq, cpu) >= 0) {
-			unbind_from_irqhandler(per_cpu(cpep_irq, cpu), NULL);
-			per_cpu(cpep_irq, cpu) = -1;
+		if (per_cpu(xen_cpep_irq, cpu) >= 0) {
+			unbind_from_irqhandler(per_cpu(xen_cpep_irq, cpu),
+					       NULL);
+			per_cpu(xen_cpep_irq, cpu) = -1;
 		}
-		if (per_cpu(cmcp_irq, cpu) >= 0) {
-			unbind_from_irqhandler(per_cpu(cmcp_irq, cpu), NULL);
-			per_cpu(cmcp_irq, cpu) = -1;
+		if (per_cpu(xen_cmcp_irq, cpu) >= 0) {
+			unbind_from_irqhandler(per_cpu(xen_cmcp_irq, cpu),
+					       NULL);
+			per_cpu(xen_cmcp_irq, cpu) = -1;
 		}
-		if (per_cpu(cmc_irq, cpu) >= 0) {
-			unbind_from_irqhandler(per_cpu(cmc_irq, cpu), NULL);
-			per_cpu(cmc_irq, cpu) = -1;
+		if (per_cpu(xen_cmc_irq, cpu) >= 0) {
+			unbind_from_irqhandler(per_cpu(xen_cmc_irq, cpu), NULL);
+			per_cpu(xen_cmc_irq, cpu) = -1;
 		}
-		if (per_cpu(ipi_irq, cpu) >= 0) {
-			unbind_from_irqhandler(per_cpu(ipi_irq, cpu), NULL);
-			per_cpu(ipi_irq, cpu) = -1;
+		if (per_cpu(xen_ipi_irq, cpu) >= 0) {
+			unbind_from_irqhandler(per_cpu(xen_ipi_irq, cpu), NULL);
+			per_cpu(xen_ipi_irq, cpu) = -1;
 		}
-		if (per_cpu(resched_irq, cpu) >= 0) {
-			unbind_from_irqhandler(per_cpu(resched_irq, cpu),
-						NULL);
-			per_cpu(resched_irq, cpu) = -1;
+		if (per_cpu(xen_resched_irq, cpu) >= 0) {
+			unbind_from_irqhandler(per_cpu(xen_resched_irq, cpu),
+					       NULL);
+			per_cpu(xen_resched_irq, cpu) = -1;
 		}
-		if (per_cpu(timer_irq, cpu) >= 0) {
-			unbind_from_irqhandler(per_cpu(timer_irq, cpu), NULL);
-			per_cpu(timer_irq, cpu) = -1;
+		if (per_cpu(xen_timer_irq, cpu) >= 0) {
+			unbind_from_irqhandler(per_cpu(xen_timer_irq, cpu),
+					       NULL);
+			per_cpu(xen_timer_irq, cpu) = -1;
 		}
 	}
 	return NOTIFY_OK;
diff --git a/arch/ia64/xen/time.c b/arch/ia64/xen/time.c
index dbeadb9..c1c5445 100644
--- a/arch/ia64/xen/time.c
+++ b/arch/ia64/xen/time.c
@@ -34,15 +34,15 @@
 
 #include "../kernel/fsyscall_gtod_data.h"
 
-DEFINE_PER_CPU(struct vcpu_runstate_info, runstate);
-DEFINE_PER_CPU(unsigned long, processed_stolen_time);
-DEFINE_PER_CPU(unsigned long, processed_blocked_time);
+static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
+static DEFINE_PER_CPU(unsigned long, xen_stolen_time);
+static DEFINE_PER_CPU(unsigned long, xen_blocked_time);
 
 /* taken from i386/kernel/time-xen.c */
 static void xen_init_missing_ticks_accounting(int cpu)
 {
 	struct vcpu_register_runstate_memory_area area;
-	struct vcpu_runstate_info *runstate = &per_cpu(runstate, cpu);
+	struct vcpu_runstate_info *runstate = &per_cpu(xen_runstate, cpu);
 	int rc;
 
 	memset(runstate, 0, sizeof(*runstate));
@@ -52,8 +52,8 @@ static void xen_init_missing_ticks_accounting(int cpu)
 				&area);
 	WARN_ON(rc && rc != -ENOSYS);
 
-	per_cpu(processed_blocked_time, cpu) = runstate->time[RUNSTATE_blocked];
-	per_cpu(processed_stolen_time, cpu) = runstate->time[RUNSTATE_runnable]
+	per_cpu(xen_blocked_time, cpu) = runstate->time[RUNSTATE_blocked];
+	per_cpu(xen_stolen_time, cpu) = runstate->time[RUNSTATE_runnable]
 					    + runstate->time[RUNSTATE_offline];
 }
 
@@ -68,7 +68,7 @@ static void get_runstate_snapshot(struct vcpu_runstate_info *res)
 
 	BUG_ON(preemptible());
 
-	state = &__get_cpu_var(runstate);
+	state = &__get_cpu_var(xen_runstate);
 
 	/*
 	 * The runstate info is always updated by the hypervisor on
@@ -103,12 +103,12 @@ consider_steal_time(unsigned long new_itm)
 	 * This function just checks and reject this effect.
 	 */
 	if (!time_after_eq(runstate.time[RUNSTATE_blocked],
-			   per_cpu(processed_blocked_time, cpu)))
+			   per_cpu(xen_blocked_time, cpu)))
 		blocked = 0;
 
 	if (!time_after_eq(runstate.time[RUNSTATE_runnable] +
 			   runstate.time[RUNSTATE_offline],
-			   per_cpu(processed_stolen_time, cpu)))
+			   per_cpu(xen_stolen_time, cpu)))
 		stolen = 0;
 
 	if (!time_after(delta_itm + new_itm, ia64_get_itc()))
@@ -147,8 +147,8 @@ consider_steal_time(unsigned long new_itm)
 		} else {
 			local_cpu_data->itm_next = delta_itm + new_itm;
 		}
-		per_cpu(processed_stolen_time, cpu) += NS_PER_TICK * stolen;
-		per_cpu(processed_blocked_time, cpu) += NS_PER_TICK * blocked;
+		per_cpu(xen_stolen_time, cpu) += NS_PER_TICK * stolen;
+		per_cpu(xen_blocked_time, cpu) += NS_PER_TICK * blocked;
 	}
 	return delta_itm;
 }
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index fe03eee..1167d98 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -35,10 +35,10 @@
 
 cpumask_var_t xen_cpu_initialized_map;
 
-static DEFINE_PER_CPU(int, resched_irq);
-static DEFINE_PER_CPU(int, callfunc_irq);
-static DEFINE_PER_CPU(int, callfuncsingle_irq);
-static DEFINE_PER_CPU(int, debug_irq) = -1;
+static DEFINE_PER_CPU(int, xen_resched_irq);
+static DEFINE_PER_CPU(int, xen_callfunc_irq);
+static DEFINE_PER_CPU(int, xen_callfuncsingle_irq);
+static DEFINE_PER_CPU(int, xen_debug_irq) = -1;
 
 static irqreturn_t xen_call_function_interrupt(int irq, void *dev_id);
 static irqreturn_t xen_call_function_single_interrupt(int irq, void *dev_id);
@@ -103,7 +103,7 @@ static int xen_smp_intr_init(unsigned int cpu)
 				    NULL);
 	if (rc < 0)
 		goto fail;
-	per_cpu(resched_irq, cpu) = rc;
+	per_cpu(xen_resched_irq, cpu) = rc;
 
 	callfunc_name = kasprintf(GFP_KERNEL, "callfunc%d", cpu);
 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_VECTOR,
@@ -114,7 +114,7 @@ static int xen_smp_intr_init(unsigned int cpu)
 				    NULL);
 	if (rc < 0)
 		goto fail;
-	per_cpu(callfunc_irq, cpu) = rc;
+	per_cpu(xen_callfunc_irq, cpu) = rc;
 
 	debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
 	rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu, xen_debug_interrupt,
@@ -122,7 +122,7 @@ static int xen_smp_intr_init(unsigned int cpu)
 				     debug_name, NULL);
 	if (rc < 0)
 		goto fail;
-	per_cpu(debug_irq, cpu) = rc;
+	per_cpu(xen_debug_irq, cpu) = rc;
 
 	callfunc_name = kasprintf(GFP_KERNEL, "callfuncsingle%d", cpu);
 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
@@ -133,19 +133,20 @@ static int xen_smp_intr_init(unsigned int cpu)
 				    NULL);
 	if (rc < 0)
 		goto fail;
-	per_cpu(callfuncsingle_irq, cpu) = rc;
+	per_cpu(xen_callfuncsingle_irq, cpu) = rc;
 
 	return 0;
 
  fail:
-	if (per_cpu(resched_irq, cpu) >= 0)
-		unbind_from_irqhandler(per_cpu(resched_irq, cpu), NULL);
-	if (per_cpu(callfunc_irq, cpu) >= 0)
-		unbind_from_irqhandler(per_cpu(callfunc_irq, cpu), NULL);
-	if (per_cpu(debug_irq, cpu) >= 0)
-		unbind_from_irqhandler(per_cpu(debug_irq, cpu), NULL);
-	if (per_cpu(callfuncsingle_irq, cpu) >= 0)
-		unbind_from_irqhandler(per_cpu(callfuncsingle_irq, cpu), NULL);
+	if (per_cpu(xen_resched_irq, cpu) >= 0)
+		unbind_from_irqhandler(per_cpu(xen_resched_irq, cpu), NULL);
+	if (per_cpu(xen_callfunc_irq, cpu) >= 0)
+		unbind_from_irqhandler(per_cpu(xen_callfunc_irq, cpu), NULL);
+	if (per_cpu(xen_debug_irq, cpu) >= 0)
+		unbind_from_irqhandler(per_cpu(xen_debug_irq, cpu), NULL);
+	if (per_cpu(xen_callfuncsingle_irq, cpu) >= 0)
+		unbind_from_irqhandler(per_cpu(xen_callfuncsingle_irq, cpu),
+				       NULL);
 
 	return rc;
 }
@@ -348,10 +349,10 @@ static void xen_cpu_die(unsigned int cpu)
 		current->state = TASK_UNINTERRUPTIBLE;
 		schedule_timeout(HZ/10);
 	}
-	unbind_from_irqhandler(per_cpu(resched_irq, cpu), NULL);
-	unbind_from_irqhandler(per_cpu(callfunc_irq, cpu), NULL);
-	unbind_from_irqhandler(per_cpu(debug_irq, cpu), NULL);
-	unbind_from_irqhandler(per_cpu(callfuncsingle_irq, cpu), NULL);
+	unbind_from_irqhandler(per_cpu(xen_resched_irq, cpu), NULL);
+	unbind_from_irqhandler(per_cpu(xen_callfunc_irq, cpu), NULL);
+	unbind_from_irqhandler(per_cpu(xen_debug_irq, cpu), NULL);
+	unbind_from_irqhandler(per_cpu(xen_callfuncsingle_irq, cpu), NULL);
 	xen_uninit_lock_cpu(cpu);
 	xen_teardown_timer(cpu);
 
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 0a5aa44..26e37b7 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -31,14 +31,14 @@
 #define NS_PER_TICK	(1000000000LL / HZ)
 
 /* runstate info updated by Xen */
-static DEFINE_PER_CPU(struct vcpu_runstate_info, runstate);
+static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
 
 /* snapshots of runstate info */
-static DEFINE_PER_CPU(struct vcpu_runstate_info, runstate_snapshot);
+static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate_snapshot);
 
 /* unused ns of stolen and blocked time */
-static DEFINE_PER_CPU(u64, residual_stolen);
-static DEFINE_PER_CPU(u64, residual_blocked);
+static DEFINE_PER_CPU(u64, xen_residual_stolen);
+static DEFINE_PER_CPU(u64, xen_residual_blocked);
 
 /* return a consistent snapshot of 64-bit time/counter value */
 static u64 get64(const u64 *p)
@@ -79,7 +79,7 @@ static void get_runstate_snapshot(struct vcpu_runstate_info *res)
 
 	BUG_ON(preemptible());
 
-	state = &__get_cpu_var(runstate);
+	state = &__get_cpu_var(xen_runstate);
 
 	/*
 	 * The runstate info is always updated by the hypervisor on
@@ -97,14 +97,14 @@ static void get_runstate_snapshot(struct vcpu_runstate_info *res)
 /* return true when a vcpu could run but has no real cpu to run on */
 bool xen_vcpu_stolen(int vcpu)
 {
-	return per_cpu(runstate, vcpu).state == RUNSTATE_runnable;
+	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
 }
 
 static void setup_runstate_info(int cpu)
 {
 	struct vcpu_register_runstate_memory_area area;
 
-	area.addr.v = &per_cpu(runstate, cpu);
+	area.addr.v = &per_cpu(xen_runstate, cpu);
 
 	if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
 			       cpu, &area))
@@ -122,7 +122,7 @@ static void do_stolen_accounting(void)
 
 	WARN_ON(state.state != RUNSTATE_running);
 
-	snap = &__get_cpu_var(runstate_snapshot);
+	snap = &__get_cpu_var(xen_runstate_snapshot);
 
 	/* work out how much time the VCPU has not been runn*ing*  */
 	blocked = state.time[RUNSTATE_blocked] - snap->time[RUNSTATE_blocked];
@@ -133,24 +133,24 @@ static void do_stolen_accounting(void)
 
 	/* Add the appropriate number of ticks of stolen time,
 	   including any left-overs from last time. */
-	stolen = runnable + offline + __get_cpu_var(residual_stolen);
+	stolen = runnable + offline + __get_cpu_var(xen_residual_stolen);
 
 	if (stolen < 0)
 		stolen = 0;
 
 	ticks = iter_div_u64_rem(stolen, NS_PER_TICK, &stolen);
-	__get_cpu_var(residual_stolen) = stolen;
+	__get_cpu_var(xen_residual_stolen) = stolen;
 	account_steal_ticks(ticks);
 
 	/* Add the appropriate number of ticks of blocked time,
 	   including any left-overs from last time. */
-	blocked += __get_cpu_var(residual_blocked);
+	blocked += __get_cpu_var(xen_residual_blocked);
 
 	if (blocked < 0)
 		blocked = 0;
 
 	ticks = iter_div_u64_rem(blocked, NS_PER_TICK, &blocked);
-	__get_cpu_var(residual_blocked) = blocked;
+	__get_cpu_var(xen_residual_blocked) = blocked;
 	account_idle_ticks(ticks);
 }
 
-- 
1.6.4.2


* [PATCH 09/16] percpu: make percpu symbols in x86 unique
From: Tejun Heo @ 2009-10-14  6:01 UTC
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo, Ingo Molnar, Marcelo Tosatti, x86

This patch updates percpu-related symbols in x86 so that percpu
symbols are unique and don't clash with local symbols.  This serves
two purposes: decreasing the possibility of global percpu symbol
collisions and allowing the per_cpu__ prefix to be dropped from
percpu symbols.

* arch/x86/kernel/cpu/common.c: rename local variable to avoid collision

* arch/x86/kvm/svm.c: s/svm_data/sd/ for local variables to avoid collision

* arch/x86/kernel/cpu/cpu_debug.c: s/cpu_arr/cpud_arr/
  				   s/priv_arr/cpud_priv_arr/
				   s/cpu_priv_count/cpud_priv_count/

* arch/x86/kernel/cpu/intel_cacheinfo.c: s/cpuid4_info/ici_cpuid4_info/
  					 s/cache_kobject/ici_cache_kobject/
					 s/index_kobject/ici_index_kobject/

* arch/x86/kernel/ds.c: s/cpu_context/cpu_ds_context/

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.
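
The svm_data rename shows the problem most directly: the old code
gave a local variable and the percpu variable the same name, which
only works while per_cpu() mangles its argument into
per_cpu__svm_data.  A condensed sketch of the old pattern, with
svm_hardware_enable_sketch() as a hypothetical stand-in:

static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);

static void svm_hardware_enable_sketch(int me)
{
	/*
	 * Once the prefix is dropped, "svm_data" inside per_cpu()
	 * would name the local pointer being declared rather than
	 * the percpu variable -- hence the s/svm_data/sd/ rename
	 * of the locals.
	 */
	struct svm_cpu_data *svm_data = per_cpu(svm_data, me);
}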

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: (kvm) Avi Kivity <avi@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: x86@kernel.org
---
 arch/x86/kernel/cpu/common.c          |    8 ++--
 arch/x86/kernel/cpu/cpu_debug.c       |   30 ++++++++--------
 arch/x86/kernel/cpu/intel_cacheinfo.c |   54 ++++++++++++++--------------
 arch/x86/kernel/ds.c                  |    4 +-
 arch/x86/kvm/svm.c                    |   63 ++++++++++++++++-----------------
 5 files changed, 79 insertions(+), 80 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index cc25c2b..3192f22 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1093,7 +1093,7 @@ static void clear_all_debug_regs(void)
 
 void __cpuinit cpu_init(void)
 {
-	struct orig_ist *orig_ist;
+	struct orig_ist *oist;
 	struct task_struct *me;
 	struct tss_struct *t;
 	unsigned long v;
@@ -1102,7 +1102,7 @@ void __cpuinit cpu_init(void)
 
 	cpu = stack_smp_processor_id();
 	t = &per_cpu(init_tss, cpu);
-	orig_ist = &per_cpu(orig_ist, cpu);
+	oist = &per_cpu(orig_ist, cpu);
 
 #ifdef CONFIG_NUMA
 	if (cpu != 0 && percpu_read(node_number) == 0 &&
@@ -1143,12 +1143,12 @@ void __cpuinit cpu_init(void)
 	/*
 	 * set up and load the per-CPU TSS
 	 */
-	if (!orig_ist->ist[0]) {
+	if (!oist->ist[0]) {
 		char *estacks = per_cpu(exception_stacks, cpu);
 
 		for (v = 0; v < N_EXCEPTION_STACKS; v++) {
 			estacks += exception_stack_sizes[v];
-			orig_ist->ist[v] = t->x86_tss.ist[v] =
+			oist->ist[v] = t->x86_tss.ist[v] =
 					(unsigned long)estacks;
 		}
 	}
diff --git a/arch/x86/kernel/cpu/cpu_debug.c b/arch/x86/kernel/cpu/cpu_debug.c
index dca325c..b368cd8 100644
--- a/arch/x86/kernel/cpu/cpu_debug.c
+++ b/arch/x86/kernel/cpu/cpu_debug.c
@@ -30,9 +30,9 @@
 #include <asm/apic.h>
 #include <asm/desc.h>
 
-static DEFINE_PER_CPU(struct cpu_cpuX_base [CPU_REG_ALL_BIT], cpu_arr);
-static DEFINE_PER_CPU(struct cpu_private * [MAX_CPU_FILES], priv_arr);
-static DEFINE_PER_CPU(int, cpu_priv_count);
+static DEFINE_PER_CPU(struct cpu_cpuX_base [CPU_REG_ALL_BIT], cpud_arr);
+static DEFINE_PER_CPU(struct cpu_private * [MAX_CPU_FILES], cpud_priv_arr);
+static DEFINE_PER_CPU(int, cpud_priv_count);
 
 static DEFINE_MUTEX(cpu_debug_lock);
 
@@ -531,7 +531,7 @@ static int cpu_create_file(unsigned cpu, unsigned type, unsigned reg,
 
 	/* Already initialized */
 	if (file == CPU_INDEX_BIT)
-		if (per_cpu(cpu_arr[type].init, cpu))
+		if (per_cpu(cpud_arr[type].init, cpu))
 			return 0;
 
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
@@ -543,8 +543,8 @@ static int cpu_create_file(unsigned cpu, unsigned type, unsigned reg,
 	priv->reg = reg;
 	priv->file = file;
 	mutex_lock(&cpu_debug_lock);
-	per_cpu(priv_arr[type], cpu) = priv;
-	per_cpu(cpu_priv_count, cpu)++;
+	per_cpu(cpud_priv_arr[type], cpu) = priv;
+	per_cpu(cpud_priv_count, cpu)++;
 	mutex_unlock(&cpu_debug_lock);
 
 	if (file)
@@ -552,10 +552,10 @@ static int cpu_create_file(unsigned cpu, unsigned type, unsigned reg,
 				    dentry, (void *)priv, &cpu_fops);
 	else {
 		debugfs_create_file(cpu_base[type].name, S_IRUGO,
-				    per_cpu(cpu_arr[type].dentry, cpu),
+				    per_cpu(cpud_arr[type].dentry, cpu),
 				    (void *)priv, &cpu_fops);
 		mutex_lock(&cpu_debug_lock);
-		per_cpu(cpu_arr[type].init, cpu) = 1;
+		per_cpu(cpud_arr[type].init, cpu) = 1;
 		mutex_unlock(&cpu_debug_lock);
 	}
 
@@ -615,7 +615,7 @@ static int cpu_init_allreg(unsigned cpu, struct dentry *dentry)
 		if (!is_typeflag_valid(cpu, cpu_base[type].flag))
 			continue;
 		cpu_dentry = debugfs_create_dir(cpu_base[type].name, dentry);
-		per_cpu(cpu_arr[type].dentry, cpu) = cpu_dentry;
+		per_cpu(cpud_arr[type].dentry, cpu) = cpu_dentry;
 
 		if (type < CPU_TSS_BIT)
 			err = cpu_init_msr(cpu, type, cpu_dentry);
@@ -647,11 +647,11 @@ static int cpu_init_cpu(void)
 		err = cpu_init_allreg(cpu, cpu_dentry);
 
 		pr_info("cpu%d(%d) debug files %d\n",
-			cpu, nr_cpu_ids, per_cpu(cpu_priv_count, cpu));
-		if (per_cpu(cpu_priv_count, cpu) > MAX_CPU_FILES) {
+			cpu, nr_cpu_ids, per_cpu(cpud_priv_count, cpu));
+		if (per_cpu(cpud_priv_count, cpu) > MAX_CPU_FILES) {
 			pr_err("Register files count %d exceeds limit %d\n",
-				per_cpu(cpu_priv_count, cpu), MAX_CPU_FILES);
-			per_cpu(cpu_priv_count, cpu) = MAX_CPU_FILES;
+				per_cpu(cpud_priv_count, cpu), MAX_CPU_FILES);
+			per_cpu(cpud_priv_count, cpu) = MAX_CPU_FILES;
 			err = -ENFILE;
 		}
 		if (err)
@@ -676,8 +676,8 @@ static void __exit cpu_debug_exit(void)
 		debugfs_remove_recursive(cpu_debugfs_dir);
 
 	for (cpu = 0; cpu <  nr_cpu_ids; cpu++)
-		for (i = 0; i < per_cpu(cpu_priv_count, cpu); i++)
-			kfree(per_cpu(priv_arr[i], cpu));
+		for (i = 0; i < per_cpu(cpud_priv_count, cpu); i++)
+			kfree(per_cpu(cpud_priv_arr[i], cpu));
 }
 
 module_init(cpu_debug_init);
diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c
index 804c40e..f5ccb4f 100644
--- a/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ b/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -512,8 +512,8 @@ unsigned int __cpuinit init_intel_cacheinfo(struct cpuinfo_x86 *c)
 #ifdef CONFIG_SYSFS
 
 /* pointer to _cpuid4_info array (for each cache leaf) */
-static DEFINE_PER_CPU(struct _cpuid4_info *, cpuid4_info);
-#define CPUID4_INFO_IDX(x, y)	(&((per_cpu(cpuid4_info, x))[y]))
+static DEFINE_PER_CPU(struct _cpuid4_info *, ici_cpuid4_info);
+#define CPUID4_INFO_IDX(x, y)	(&((per_cpu(ici_cpuid4_info, x))[y]))
 
 #ifdef CONFIG_SMP
 static void __cpuinit cache_shared_cpu_map_setup(unsigned int cpu, int index)
@@ -526,7 +526,7 @@ static void __cpuinit cache_shared_cpu_map_setup(unsigned int cpu, int index)
 	if ((index == 3) && (c->x86_vendor == X86_VENDOR_AMD)) {
 		struct cpuinfo_x86 *d;
 		for_each_online_cpu(i) {
-			if (!per_cpu(cpuid4_info, i))
+			if (!per_cpu(ici_cpuid4_info, i))
 				continue;
 			d = &cpu_data(i);
 			this_leaf = CPUID4_INFO_IDX(i, index);
@@ -548,7 +548,7 @@ static void __cpuinit cache_shared_cpu_map_setup(unsigned int cpu, int index)
 			    c->apicid >> index_msb) {
 				cpumask_set_cpu(i,
 					to_cpumask(this_leaf->shared_cpu_map));
-				if (i != cpu && per_cpu(cpuid4_info, i))  {
+				if (i != cpu && per_cpu(ici_cpuid4_info, i))  {
 					sibling_leaf =
 						CPUID4_INFO_IDX(i, index);
 					cpumask_set_cpu(cpu, to_cpumask(
@@ -587,8 +587,8 @@ static void __cpuinit free_cache_attributes(unsigned int cpu)
 	for (i = 0; i < num_cache_leaves; i++)
 		cache_remove_shared_cpu_map(cpu, i);
 
-	kfree(per_cpu(cpuid4_info, cpu));
-	per_cpu(cpuid4_info, cpu) = NULL;
+	kfree(per_cpu(ici_cpuid4_info, cpu));
+	per_cpu(ici_cpuid4_info, cpu) = NULL;
 }
 
 static int
@@ -627,15 +627,15 @@ static int __cpuinit detect_cache_attributes(unsigned int cpu)
 	if (num_cache_leaves == 0)
 		return -ENOENT;
 
-	per_cpu(cpuid4_info, cpu) = kzalloc(
+	per_cpu(ici_cpuid4_info, cpu) = kzalloc(
 	    sizeof(struct _cpuid4_info) * num_cache_leaves, GFP_KERNEL);
-	if (per_cpu(cpuid4_info, cpu) == NULL)
+	if (per_cpu(ici_cpuid4_info, cpu) == NULL)
 		return -ENOMEM;
 
 	smp_call_function_single(cpu, get_cpu_leaves, &retval, true);
 	if (retval) {
-		kfree(per_cpu(cpuid4_info, cpu));
-		per_cpu(cpuid4_info, cpu) = NULL;
+		kfree(per_cpu(ici_cpuid4_info, cpu));
+		per_cpu(ici_cpuid4_info, cpu) = NULL;
 	}
 
 	return retval;
@@ -647,7 +647,7 @@ static int __cpuinit detect_cache_attributes(unsigned int cpu)
 extern struct sysdev_class cpu_sysdev_class; /* from drivers/base/cpu.c */
 
 /* pointer to kobject for cpuX/cache */
-static DEFINE_PER_CPU(struct kobject *, cache_kobject);
+static DEFINE_PER_CPU(struct kobject *, ici_cache_kobject);
 
 struct _index_kobject {
 	struct kobject kobj;
@@ -656,8 +656,8 @@ struct _index_kobject {
 };
 
 /* pointer to array of kobjects for cpuX/cache/indexY */
-static DEFINE_PER_CPU(struct _index_kobject *, index_kobject);
-#define INDEX_KOBJECT_PTR(x, y)		(&((per_cpu(index_kobject, x))[y]))
+static DEFINE_PER_CPU(struct _index_kobject *, ici_index_kobject);
+#define INDEX_KOBJECT_PTR(x, y)		(&((per_cpu(ici_index_kobject, x))[y]))
 
 #define show_one_plus(file_name, object, val)				\
 static ssize_t show_##file_name						\
@@ -876,10 +876,10 @@ static struct kobj_type ktype_percpu_entry = {
 
 static void __cpuinit cpuid4_cache_sysfs_exit(unsigned int cpu)
 {
-	kfree(per_cpu(cache_kobject, cpu));
-	kfree(per_cpu(index_kobject, cpu));
-	per_cpu(cache_kobject, cpu) = NULL;
-	per_cpu(index_kobject, cpu) = NULL;
+	kfree(per_cpu(ici_cache_kobject, cpu));
+	kfree(per_cpu(ici_index_kobject, cpu));
+	per_cpu(ici_cache_kobject, cpu) = NULL;
+	per_cpu(ici_index_kobject, cpu) = NULL;
 	free_cache_attributes(cpu);
 }
 
@@ -895,14 +895,14 @@ static int __cpuinit cpuid4_cache_sysfs_init(unsigned int cpu)
 		return err;
 
 	/* Allocate all required memory */
-	per_cpu(cache_kobject, cpu) =
+	per_cpu(ici_cache_kobject, cpu) =
 		kzalloc(sizeof(struct kobject), GFP_KERNEL);
-	if (unlikely(per_cpu(cache_kobject, cpu) == NULL))
+	if (unlikely(per_cpu(ici_cache_kobject, cpu) == NULL))
 		goto err_out;
 
-	per_cpu(index_kobject, cpu) = kzalloc(
+	per_cpu(ici_index_kobject, cpu) = kzalloc(
 	    sizeof(struct _index_kobject) * num_cache_leaves, GFP_KERNEL);
-	if (unlikely(per_cpu(index_kobject, cpu) == NULL))
+	if (unlikely(per_cpu(ici_index_kobject, cpu) == NULL))
 		goto err_out;
 
 	return 0;
@@ -926,7 +926,7 @@ static int __cpuinit cache_add_dev(struct sys_device * sys_dev)
 	if (unlikely(retval < 0))
 		return retval;
 
-	retval = kobject_init_and_add(per_cpu(cache_kobject, cpu),
+	retval = kobject_init_and_add(per_cpu(ici_cache_kobject, cpu),
 				      &ktype_percpu_entry,
 				      &sys_dev->kobj, "%s", "cache");
 	if (retval < 0) {
@@ -940,12 +940,12 @@ static int __cpuinit cache_add_dev(struct sys_device * sys_dev)
 		this_object->index = i;
 		retval = kobject_init_and_add(&(this_object->kobj),
 					      &ktype_cache,
-					      per_cpu(cache_kobject, cpu),
+					      per_cpu(ici_cache_kobject, cpu),
 					      "index%1lu", i);
 		if (unlikely(retval)) {
 			for (j = 0; j < i; j++)
 				kobject_put(&(INDEX_KOBJECT_PTR(cpu, j)->kobj));
-			kobject_put(per_cpu(cache_kobject, cpu));
+			kobject_put(per_cpu(ici_cache_kobject, cpu));
 			cpuid4_cache_sysfs_exit(cpu);
 			return retval;
 		}
@@ -953,7 +953,7 @@ static int __cpuinit cache_add_dev(struct sys_device * sys_dev)
 	}
 	cpumask_set_cpu(cpu, to_cpumask(cache_dev_map));
 
-	kobject_uevent(per_cpu(cache_kobject, cpu), KOBJ_ADD);
+	kobject_uevent(per_cpu(ici_cache_kobject, cpu), KOBJ_ADD);
 	return 0;
 }
 
@@ -962,7 +962,7 @@ static void __cpuinit cache_remove_dev(struct sys_device * sys_dev)
 	unsigned int cpu = sys_dev->id;
 	unsigned long i;
 
-	if (per_cpu(cpuid4_info, cpu) == NULL)
+	if (per_cpu(ici_cpuid4_info, cpu) == NULL)
 		return;
 	if (!cpumask_test_cpu(cpu, to_cpumask(cache_dev_map)))
 		return;
@@ -970,7 +970,7 @@ static void __cpuinit cache_remove_dev(struct sys_device * sys_dev)
 
 	for (i = 0; i < num_cache_leaves; i++)
 		kobject_put(&(INDEX_KOBJECT_PTR(cpu, i)->kobj));
-	kobject_put(per_cpu(cache_kobject, cpu));
+	kobject_put(per_cpu(ici_cache_kobject, cpu));
 	cpuid4_cache_sysfs_exit(cpu);
 }
 
diff --git a/arch/x86/kernel/ds.c b/arch/x86/kernel/ds.c
index ef42a03..1c47390 100644
--- a/arch/x86/kernel/ds.c
+++ b/arch/x86/kernel/ds.c
@@ -265,13 +265,13 @@ struct ds_context {
 	int			cpu;
 };
 
-static DEFINE_PER_CPU(struct ds_context *, cpu_context);
+static DEFINE_PER_CPU(struct ds_context *, cpu_ds_context);
 
 
 static struct ds_context *ds_get_context(struct task_struct *task, int cpu)
 {
 	struct ds_context **p_context =
-		(task ? &task->thread.ds_ctx : &per_cpu(cpu_context, cpu));
+		(task ? &task->thread.ds_ctx : &per_cpu(cpu_ds_context, cpu));
 	struct ds_context *context = NULL;
 	struct ds_context *new_context = NULL;
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 944cc9c..6c79a14 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -319,7 +319,7 @@ static void svm_hardware_disable(void *garbage)
 static void svm_hardware_enable(void *garbage)
 {
 
-	struct svm_cpu_data *svm_data;
+	struct svm_cpu_data *sd;
 	uint64_t efer;
 	struct descriptor_table gdt_descr;
 	struct desc_struct *gdt;
@@ -329,62 +329,61 @@ static void svm_hardware_enable(void *garbage)
 		printk(KERN_ERR "svm_cpu_init: err EOPNOTSUPP on %d\n", me);
 		return;
 	}
-	svm_data = per_cpu(svm_data, me);
+	sd = per_cpu(svm_data, me);
 
-	if (!svm_data) {
+	if (!sd) {
 		printk(KERN_ERR "svm_cpu_init: svm_data is NULL on %d\n",
 		       me);
 		return;
 	}
 
-	svm_data->asid_generation = 1;
-	svm_data->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
-	svm_data->next_asid = svm_data->max_asid + 1;
+	sd->asid_generation = 1;
+	sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
+	sd->next_asid = sd->max_asid + 1;
 
 	kvm_get_gdt(&gdt_descr);
 	gdt = (struct desc_struct *)gdt_descr.base;
-	svm_data->tss_desc = (struct kvm_ldttss_desc *)(gdt + GDT_ENTRY_TSS);
+	sd->tss_desc = (struct kvm_ldttss_desc *)(gdt + GDT_ENTRY_TSS);
 
 	rdmsrl(MSR_EFER, efer);
 	wrmsrl(MSR_EFER, efer | EFER_SVME);
 
 	wrmsrl(MSR_VM_HSAVE_PA,
-	       page_to_pfn(svm_data->save_area) << PAGE_SHIFT);
+	       page_to_pfn(sd->save_area) << PAGE_SHIFT);
 }
 
 static void svm_cpu_uninit(int cpu)
 {
-	struct svm_cpu_data *svm_data
-		= per_cpu(svm_data, raw_smp_processor_id());
+	struct svm_cpu_data *sd = per_cpu(svm_data, raw_smp_processor_id());
 
-	if (!svm_data)
+	if (!sd)
 		return;
 
 	per_cpu(svm_data, raw_smp_processor_id()) = NULL;
-	__free_page(svm_data->save_area);
-	kfree(svm_data);
+	__free_page(sd->save_area);
+	kfree(sd);
 }
 
 static int svm_cpu_init(int cpu)
 {
-	struct svm_cpu_data *svm_data;
+	struct svm_cpu_data *sd;
 	int r;
 
-	svm_data = kzalloc(sizeof(struct svm_cpu_data), GFP_KERNEL);
-	if (!svm_data)
+	sd = kzalloc(sizeof(struct svm_cpu_data), GFP_KERNEL);
+	if (!sd)
 		return -ENOMEM;
-	svm_data->cpu = cpu;
-	svm_data->save_area = alloc_page(GFP_KERNEL);
+	sd->cpu = cpu;
+	sd->save_area = alloc_page(GFP_KERNEL);
 	r = -ENOMEM;
-	if (!svm_data->save_area)
+	if (!sd->save_area)
 		goto err_1;
 
-	per_cpu(svm_data, cpu) = svm_data;
+	per_cpu(svm_data, cpu) = sd;
 
 	return 0;
 
 err_1:
-	kfree(svm_data);
+	kfree(sd);
 	return r;
 
 }
@@ -1094,16 +1093,16 @@ static void save_host_msrs(struct kvm_vcpu *vcpu)
 #endif
 }
 
-static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *svm_data)
+static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
 {
-	if (svm_data->next_asid > svm_data->max_asid) {
-		++svm_data->asid_generation;
-		svm_data->next_asid = 1;
+	if (sd->next_asid > sd->max_asid) {
+		++sd->asid_generation;
+		sd->next_asid = 1;
 		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
 	}
 
-	svm->asid_generation = svm_data->asid_generation;
-	svm->vmcb->control.asid = svm_data->next_asid++;
+	svm->asid_generation = sd->asid_generation;
+	svm->vmcb->control.asid = sd->next_asid++;
 }
 
 static unsigned long svm_get_dr(struct kvm_vcpu *vcpu, int dr)
@@ -2377,8 +2376,8 @@ static void reload_tss(struct kvm_vcpu *vcpu)
 {
 	int cpu = raw_smp_processor_id();
 
-	struct svm_cpu_data *svm_data = per_cpu(svm_data, cpu);
-	svm_data->tss_desc->type = 9; /* available 32/64-bit TSS */
+	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
+	sd->tss_desc->type = 9; /* available 32/64-bit TSS */
 	load_TR_desc();
 }
 
@@ -2386,12 +2385,12 @@ static void pre_svm_run(struct vcpu_svm *svm)
 {
 	int cpu = raw_smp_processor_id();
 
-	struct svm_cpu_data *svm_data = per_cpu(svm_data, cpu);
+	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
 
 	svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
 	/* FIXME: handle wraparound of asid_generation */
-	if (svm->asid_generation != svm_data->asid_generation)
-		new_asid(svm, svm_data);
+	if (svm->asid_generation != sd->asid_generation)
+		new_asid(svm, sd);
 }
 
 static void svm_inject_nmi(struct kvm_vcpu *vcpu)
-- 
1.6.4.2


* [PATCH 10/16] percpu: make percpu symbols in powerpc unique
From: Tejun Heo @ 2009-10-14  6:01 UTC
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo, Benjamin Herrenschmidt, Paul Mackerras, linuxppc-dev

This patch updates percpu-related symbols in powerpc so that percpu
symbols are unique and don't clash with local symbols.  This serves
two purposes: decreasing the possibility of global percpu symbol
collisions and allowing the per_cpu__ prefix to be dropped from
percpu symbols.

* arch/powerpc/kernel/perf_callchain.c: s/callchain/cpu_perf_callchain/

* arch/powerpc/kernel/setup-common.c: s/pvr/cpu_pvr/

* arch/powerpc/platforms/pseries/dtl.c: s/dtl/cpu_dtl/

* arch/powerpc/platforms/cell/interrupt.c: s/iic/cpu_iic/

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.
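
Note that such a clash can be resolved from either side: in svm.c
(patch 09) the locals were renamed and the percpu variable kept its
name, while here show_cpuinfo() keeps its local pvr and the percpu
variable moves to cpu_pvr.  A condensed sketch of the fixed form,
with show_cpuinfo_sketch() as a hypothetical stand-in:

DEFINE_PER_CPU(unsigned int, cpu_pvr);

static void show_cpuinfo_sketch(unsigned int cpu_id)
{
	unsigned int pvr;

	/* the local "pvr" no longer shadows the percpu symbol */
	pvr = per_cpu(cpu_pvr, cpu_id);
}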

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@ozlabs.org
---
 arch/powerpc/include/asm/smp.h          |    2 +-
 arch/powerpc/kernel/perf_callchain.c    |    4 ++--
 arch/powerpc/kernel/setup-common.c      |    4 ++--
 arch/powerpc/kernel/smp.c               |    2 +-
 arch/powerpc/platforms/cell/interrupt.c |   14 +++++++-------
 arch/powerpc/platforms/pseries/dtl.c    |    4 ++--
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
index d9ea8d3..1d3b270 100644
--- a/arch/powerpc/include/asm/smp.h
+++ b/arch/powerpc/include/asm/smp.h
@@ -37,7 +37,7 @@ extern void cpu_die(void);
 extern void smp_send_debugger_break(int cpu);
 extern void smp_message_recv(int);
 
-DECLARE_PER_CPU(unsigned int, pvr);
+DECLARE_PER_CPU(unsigned int, cpu_pvr);
 
 #ifdef CONFIG_HOTPLUG_CPU
 extern void fixup_irqs(cpumask_t map);
diff --git a/arch/powerpc/kernel/perf_callchain.c b/arch/powerpc/kernel/perf_callchain.c
index 0a03cf7..fe59c44 100644
--- a/arch/powerpc/kernel/perf_callchain.c
+++ b/arch/powerpc/kernel/perf_callchain.c
@@ -497,11 +497,11 @@ static void perf_callchain_user_32(struct pt_regs *regs,
  * Since we can't get PMU interrupts inside a PMU interrupt handler,
  * we don't need separate irq and nmi entries here.
  */
-static DEFINE_PER_CPU(struct perf_callchain_entry, callchain);
+static DEFINE_PER_CPU(struct perf_callchain_entry, cpu_perf_callchain);
 
 struct perf_callchain_entry *perf_callchain(struct pt_regs *regs)
 {
-	struct perf_callchain_entry *entry = &__get_cpu_var(callchain);
+	struct perf_callchain_entry *entry = &__get_cpu_var(cpu_perf_callchain);
 
 	entry->nr = 0;
 
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 4271f7a..aa5aeb9 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -157,7 +157,7 @@ extern u32 cpu_temp_both(unsigned long cpu);
 #endif /* CONFIG_TAU */
 
 #ifdef CONFIG_SMP
-DEFINE_PER_CPU(unsigned int, pvr);
+DEFINE_PER_CPU(unsigned int, cpu_pvr);
 #endif
 
 static int show_cpuinfo(struct seq_file *m, void *v)
@@ -209,7 +209,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 	}
 
 #ifdef CONFIG_SMP
-	pvr = per_cpu(pvr, cpu_id);
+	pvr = per_cpu(cpu_pvr, cpu_id);
 #else
 	pvr = mfspr(SPRN_PVR);
 #endif
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 9b86a74..2ebb484 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -232,7 +232,7 @@ struct thread_info *current_set[NR_CPUS];
 
 static void __devinit smp_store_cpu_info(int id)
 {
-	per_cpu(pvr, id) = mfspr(SPRN_PVR);
+	per_cpu(cpu_pvr, id) = mfspr(SPRN_PVR);
 }
 
 static void __init smp_create_idle(unsigned int cpu)
diff --git a/arch/powerpc/platforms/cell/interrupt.c b/arch/powerpc/platforms/cell/interrupt.c
index 882e470..54bad90 100644
--- a/arch/powerpc/platforms/cell/interrupt.c
+++ b/arch/powerpc/platforms/cell/interrupt.c
@@ -54,7 +54,7 @@ struct iic {
 	struct device_node *node;
 };
 
-static DEFINE_PER_CPU(struct iic, iic);
+static DEFINE_PER_CPU(struct iic, cpu_iic);
 #define IIC_NODE_COUNT	2
 static struct irq_host *iic_host;
 
@@ -82,7 +82,7 @@ static void iic_unmask(unsigned int irq)
 
 static void iic_eoi(unsigned int irq)
 {
-	struct iic *iic = &__get_cpu_var(iic);
+	struct iic *iic = &__get_cpu_var(cpu_iic);
 	out_be64(&iic->regs->prio, iic->eoi_stack[--iic->eoi_ptr]);
 	BUG_ON(iic->eoi_ptr < 0);
 }
@@ -146,7 +146,7 @@ static unsigned int iic_get_irq(void)
 	struct iic *iic;
 	unsigned int virq;
 
-	iic = &__get_cpu_var(iic);
+	iic = &__get_cpu_var(cpu_iic);
 	*(unsigned long *) &pending =
 		in_be64((u64 __iomem *) &iic->regs->pending_destr);
 	if (!(pending.flags & CBE_IIC_IRQ_VALID))
@@ -161,12 +161,12 @@ static unsigned int iic_get_irq(void)
 
 void iic_setup_cpu(void)
 {
-	out_be64(&__get_cpu_var(iic).regs->prio, 0xff);
+	out_be64(&__get_cpu_var(cpu_iic).regs->prio, 0xff);
 }
 
 u8 iic_get_target_id(int cpu)
 {
-	return per_cpu(iic, cpu).target_id;
+	return per_cpu(cpu_iic, cpu).target_id;
 }
 
 EXPORT_SYMBOL_GPL(iic_get_target_id);
@@ -181,7 +181,7 @@ static inline int iic_ipi_to_irq(int ipi)
 
 void iic_cause_IPI(int cpu, int mesg)
 {
-	out_be64(&per_cpu(iic, cpu).regs->generate, (0xf - mesg) << 4);
+	out_be64(&per_cpu(cpu_iic, cpu).regs->generate, (0xf - mesg) << 4);
 }
 
 struct irq_host *iic_get_irq_host(int node)
@@ -348,7 +348,7 @@ static void __init init_one_iic(unsigned int hw_cpu, unsigned long addr,
 	/* XXX FIXME: should locate the linux CPU number from the HW cpu
 	 * number properly. We are lucky for now
 	 */
-	struct iic *iic = &per_cpu(iic, hw_cpu);
+	struct iic *iic = &per_cpu(cpu_iic, hw_cpu);
 
 	iic->regs = ioremap(addr, sizeof(struct cbe_iic_thread_regs));
 	BUG_ON(iic->regs == NULL);
diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c
index 937a544..c5f3116 100644
--- a/arch/powerpc/platforms/pseries/dtl.c
+++ b/arch/powerpc/platforms/pseries/dtl.c
@@ -54,7 +54,7 @@ struct dtl {
 	int			buf_entries;
 	u64			last_idx;
 };
-static DEFINE_PER_CPU(struct dtl, dtl);
+static DEFINE_PER_CPU(struct dtl, cpu_dtl);
 
 /*
  * Dispatch trace log event mask:
@@ -261,7 +261,7 @@ static int dtl_init(void)
 
 	/* set up the per-cpu log structures */
 	for_each_possible_cpu(i) {
-		struct dtl *dtl = &per_cpu(dtl, i);
+		struct dtl *dtl = &per_cpu(cpu_dtl, i);
 		dtl->cpu = i;
 
 		rc = dtl_setup_file(dtl);
-- 
1.6.4.2


* [PATCH 11/16] percpu: make percpu symbols in ia64 unique
From: Tejun Heo @ 2009-10-14  6:02 UTC
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo, Tony Luck, Fenghua Yu, linux-ia64

This patch updates percpu-related symbols in ia64 so that percpu
symbols are unique and don't clash with local symbols.  This serves
two purposes: decreasing the possibility of global percpu symbol
collisions and allowing the per_cpu__ prefix to be dropped from
percpu symbols.

* arch/ia64/kernel/setup.c: s/cpu_info/ia64_cpu_info/

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.
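
Because ia64 hides this variable behind accessor macros, the rename
is absorbed at the macro definitions and the many local_cpu_data /
cpu_data() users need no change.  A condensed sketch, with
cyc_per_usec_of() as a hypothetical caller:

DEFINE_PER_CPU(struct cpuinfo_ia64, ia64_cpu_info);

#define local_cpu_data	(&__ia64_per_cpu_var(ia64_cpu_info))
#define cpu_data(cpu)	(&per_cpu(ia64_cpu_info, cpu))

/* callers stay as they were, e.g.: */
static unsigned long cyc_per_usec_of(int cpu)
{
	return cpu_data(cpu)->cyc_per_usec;
}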

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/include/asm/processor.h  |    6 +++---
 arch/ia64/kernel/head.S            |    4 ++--
 arch/ia64/kernel/ia64_ksyms.c      |    2 +-
 arch/ia64/kernel/mca_asm.S         |    2 +-
 arch/ia64/kernel/relocate_kernel.S |    2 +-
 arch/ia64/kernel/setup.c           |    4 ++--
 arch/ia64/mm/discontig.c           |    5 +++--
 arch/ia64/sn/kernel/sn2/sn2_smp.c  |    8 ++++----
 8 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
index 3eaeedf..7fa90f7 100644
--- a/arch/ia64/include/asm/processor.h
+++ b/arch/ia64/include/asm/processor.h
@@ -229,7 +229,7 @@ struct cpuinfo_ia64 {
 #endif
 };
 
-DECLARE_PER_CPU(struct cpuinfo_ia64, cpu_info);
+DECLARE_PER_CPU(struct cpuinfo_ia64, ia64_cpu_info);
 
 /*
  * The "local" data variable.  It refers to the per-CPU data of the currently executing
@@ -237,8 +237,8 @@ DECLARE_PER_CPU(struct cpuinfo_ia64, cpu_info);
  * Do not use the address of local_cpu_data, since it will be different from
  * cpu_data(smp_processor_id())!
  */
-#define local_cpu_data		(&__ia64_per_cpu_var(cpu_info))
-#define cpu_data(cpu)		(&per_cpu(cpu_info, cpu))
+#define local_cpu_data		(&__ia64_per_cpu_var(ia64_cpu_info))
+#define cpu_data(cpu)		(&per_cpu(ia64_cpu_info, cpu))
 
 extern void print_cpu_info (struct cpuinfo_ia64 *);
 
diff --git a/arch/ia64/kernel/head.S b/arch/ia64/kernel/head.S
index 696eff2..17a9fba 100644
--- a/arch/ia64/kernel/head.S
+++ b/arch/ia64/kernel/head.S
@@ -1051,7 +1051,7 @@ END(ia64_delay_loop)
  * intermediate precision so that we can produce a full 64-bit result.
  */
 GLOBAL_ENTRY(ia64_native_sched_clock)
-	addl r8=THIS_CPU(cpu_info) + IA64_CPUINFO_NSEC_PER_CYC_OFFSET,r0
+	addl r8=THIS_CPU(ia64_cpu_info) + IA64_CPUINFO_NSEC_PER_CYC_OFFSET,r0
 	mov.m r9=ar.itc		// fetch cycle-counter				(35 cyc)
 	;;
 	ldf8 f8=[r8]
@@ -1077,7 +1077,7 @@ sched_clock = ia64_native_sched_clock
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING
 GLOBAL_ENTRY(cycle_to_cputime)
 	alloc r16=ar.pfs,1,0,0,0
-	addl r8=THIS_CPU(cpu_info) + IA64_CPUINFO_NSEC_PER_CYC_OFFSET,r0
+	addl r8=THIS_CPU(ia64_cpu_info) + IA64_CPUINFO_NSEC_PER_CYC_OFFSET,r0
 	;;
 	ldf8 f8=[r8]
 	;;
diff --git a/arch/ia64/kernel/ia64_ksyms.c b/arch/ia64/kernel/ia64_ksyms.c
index 14d39e3..461b999 100644
--- a/arch/ia64/kernel/ia64_ksyms.c
+++ b/arch/ia64/kernel/ia64_ksyms.c
@@ -30,7 +30,7 @@ EXPORT_SYMBOL(max_low_pfn);	/* defined by bootmem.c, but not exported by generic
 #endif
 
 #include <asm/processor.h>
-EXPORT_SYMBOL(per_cpu__cpu_info);
+EXPORT_SYMBOL(per_cpu__ia64_cpu_info);
 #ifdef CONFIG_SMP
 EXPORT_SYMBOL(per_cpu__local_per_cpu_offset);
 #endif
diff --git a/arch/ia64/kernel/mca_asm.S b/arch/ia64/kernel/mca_asm.S
index 7461d25..d5bdf9d 100644
--- a/arch/ia64/kernel/mca_asm.S
+++ b/arch/ia64/kernel/mca_asm.S
@@ -59,7 +59,7 @@
 ia64_do_tlb_purge:
 #define O(member)	IA64_CPUINFO_##member##_OFFSET
 
-	GET_THIS_PADDR(r2, cpu_info)	// load phys addr of cpu_info into r2
+	GET_THIS_PADDR(r2, ia64_cpu_info) // load phys addr of cpu_info into r2
 	;;
 	addl r17=O(PTCE_STRIDE),r2
 	addl r2=O(PTCE_BASE),r2
diff --git a/arch/ia64/kernel/relocate_kernel.S b/arch/ia64/kernel/relocate_kernel.S
index 32f6fc1..c370e02 100644
--- a/arch/ia64/kernel/relocate_kernel.S
+++ b/arch/ia64/kernel/relocate_kernel.S
@@ -61,7 +61,7 @@ GLOBAL_ENTRY(relocate_new_kernel)
 
 	// purge all TC entries
 #define O(member)       IA64_CPUINFO_##member##_OFFSET
-        GET_THIS_PADDR(r2, cpu_info)    // load phys addr of cpu_info into r2
+        GET_THIS_PADDR(r2, ia64_cpu_info) // load phys addr of cpu_info into r2
         ;;
         addl r17=O(PTCE_STRIDE),r2
         addl r2=O(PTCE_BASE),r2
diff --git a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c
index bc1ef4a..a1ea879 100644
--- a/arch/ia64/kernel/setup.c
+++ b/arch/ia64/kernel/setup.c
@@ -74,7 +74,7 @@ unsigned long __per_cpu_offset[NR_CPUS];
 EXPORT_SYMBOL(__per_cpu_offset);
 #endif
 
-DEFINE_PER_CPU(struct cpuinfo_ia64, cpu_info);
+DEFINE_PER_CPU(struct cpuinfo_ia64, ia64_cpu_info);
 DEFINE_PER_CPU(unsigned long, local_per_cpu_offset);
 unsigned long ia64_cycles_per_usec;
 struct ia64_boot_param *ia64_boot_param;
@@ -967,7 +967,7 @@ cpu_init (void)
 	 * depends on the data returned by identify_cpu().  We break the dependency by
 	 * accessing cpu_data() through the canonical per-CPU address.
 	 */
-	cpu_info = cpu_data + ((char *) &__ia64_per_cpu_var(cpu_info) - __per_cpu_start);
+	cpu_info = cpu_data + ((char *) &__ia64_per_cpu_var(ia64_cpu_info) - __per_cpu_start);
 	identify_cpu(cpu_info);
 
 #ifdef CONFIG_MCKINLEY
diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
index 40e4c1f..19c4b21 100644
--- a/arch/ia64/mm/discontig.c
+++ b/arch/ia64/mm/discontig.c
@@ -450,7 +450,8 @@ static void __init initialize_pernode_data(void)
 	/* Set the node_data pointer for each per-cpu struct */
 	for_each_possible_early_cpu(cpu) {
 		node = node_cpuid[cpu].nid;
-		per_cpu(cpu_info, cpu).node_data = mem_data[node].node_data;
+		per_cpu(ia64_cpu_info, cpu).node_data =
+			mem_data[node].node_data;
 	}
 #else
 	{
@@ -458,7 +459,7 @@ static void __init initialize_pernode_data(void)
 		cpu = 0;
 		node = node_cpuid[cpu].nid;
 		cpu0_cpu_info = (struct cpuinfo_ia64 *)(__phys_per_cpu_start +
-			((char *)&per_cpu__cpu_info - __per_cpu_start));
+			((char *)&per_cpu__ia64_cpu_info - __per_cpu_start));
 		cpu0_cpu_info->node_data = mem_data[node].node_data;
 	}
 #endif /* CONFIG_SMP */
diff --git a/arch/ia64/sn/kernel/sn2/sn2_smp.c b/arch/ia64/sn/kernel/sn2/sn2_smp.c
index 1176506..e884ba4 100644
--- a/arch/ia64/sn/kernel/sn2/sn2_smp.c
+++ b/arch/ia64/sn/kernel/sn2/sn2_smp.c
@@ -496,13 +496,13 @@ static int sn2_ptc_seq_show(struct seq_file *file, void *data)
 		seq_printf(file, "cpu %d %ld %ld %ld %ld %ld %ld %ld %ld %ld %ld %ld %ld\n", cpu, stat->ptc_l,
 				stat->change_rid, stat->shub_ptc_flushes, stat->nodes_flushed,
 				stat->deadlocks,
-				1000 * stat->lock_itc_clocks / per_cpu(cpu_info, cpu).cyc_per_usec,
-				1000 * stat->shub_itc_clocks / per_cpu(cpu_info, cpu).cyc_per_usec,
-				1000 * stat->shub_itc_clocks_max / per_cpu(cpu_info, cpu).cyc_per_usec,
+				1000 * stat->lock_itc_clocks / per_cpu(ia64_cpu_info, cpu).cyc_per_usec,
+				1000 * stat->shub_itc_clocks / per_cpu(ia64_cpu_info, cpu).cyc_per_usec,
+				1000 * stat->shub_itc_clocks_max / per_cpu(ia64_cpu_info, cpu).cyc_per_usec,
 				stat->shub_ptc_flushes_not_my_mm,
 				stat->deadlocks2,
 				stat->shub_ipi_flushes,
-				1000 * stat->shub_ipi_flushes_itc_clocks / per_cpu(cpu_info, cpu).cyc_per_usec);
+				1000 * stat->shub_ipi_flushes_itc_clocks / per_cpu(ia64_cpu_info, cpu).cyc_per_usec);
 	}
 	return 0;
 }
-- 
1.6.4.2


* [PATCH 12/16] percpu: make misc percpu symbols unique
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (10 preceding siblings ...)
  2009-10-14  6:02   ` Tejun Heo
@ 2009-10-14  6:02 ` Tejun Heo
  2009-10-14  6:02 ` [PATCH 13/16] percpu: remove per_cpu__ prefix Tejun Heo
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:02 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo, Herbert Xu, David Howells, Koichi Yasutake,
	Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Masami Hiramatsu, Martin Schwidefsky,
	Heiko Carstens, linux390

This patch updates misc percpu-related symbols so that percpu symbols
are unique and don't clash with local symbols.  This serves two
purposes: decreasing the possibility of global percpu symbol
collisions and allowing the per_cpu__ prefix to be dropped from percpu
symbols.

* drivers/crypto/padlock-aes.c: s/last_cword/paes_last_cword/

* drivers/lguest/x86/core.c: s/last_cpu/lg_last_cpu/

* drivers/s390/net/netiucv.c: rename the variable used in a macro to
  avoid clashing with the percpu symbol

* arch/mn10300/kernel/kprobes.c: replace the current_ prefix with cur_
  for static variables.  Note that the percpu symbol current_kprobe
  can't be renamed as it's used by generic code.

Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.
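
The netiucv change is the subtle one.  A sketch of the trap it avoids
(simplified, with an assumed buffer type, not the driver's exact
code):

  /* sketch: with the per_cpu__ prefix gone, a local named like the
   * percpu variable captures the accessor's argument */
  DECLARE_PER_CPU(char[256], iucv_dbf_txt_buf);

  #define IUCV_DBF_TEXT(text...) do {			\
  	char *iucv_dbf_txt_buf =			\
  		get_cpu_var(iucv_dbf_txt_buf);		\
  	/* get_cpu_var() now resolves to the local	\
  	 * being declared (its own uninitialized	\
  	 * self), not the percpu buffer; hence the	\
  	 * rename to __buf in the patch */		\
  	sprintf(iucv_dbf_txt_buf, text);		\
  	put_cpu_var(iucv_dbf_txt_buf);			\
  } while (0)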

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Chuck Ebbert <cebbert@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux390@de.ibm.com
---
 arch/mn10300/kernel/kprobes.c |   61 ++++++++++++++++++++---------------------
 drivers/crypto/padlock-aes.c  |   12 ++++----
 drivers/lguest/x86/core.c     |    6 ++--
 drivers/s390/net/netiucv.c    |    8 ++---
 4 files changed, 42 insertions(+), 45 deletions(-)

diff --git a/arch/mn10300/kernel/kprobes.c b/arch/mn10300/kernel/kprobes.c
index dacafab..67e6389 100644
--- a/arch/mn10300/kernel/kprobes.c
+++ b/arch/mn10300/kernel/kprobes.c
@@ -31,13 +31,13 @@ const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
 #define KPROBE_HIT_ACTIVE	0x00000001
 #define KPROBE_HIT_SS		0x00000002
 
-static struct kprobe *current_kprobe;
-static unsigned long current_kprobe_orig_pc;
-static unsigned long current_kprobe_next_pc;
-static int current_kprobe_ss_flags;
+static struct kprobe *cur_kprobe;
+static unsigned long cur_kprobe_orig_pc;
+static unsigned long cur_kprobe_next_pc;
+static int cur_kprobe_ss_flags;
 static unsigned long kprobe_status;
-static kprobe_opcode_t current_kprobe_ss_buf[MAX_INSN_SIZE + 2];
-static unsigned long current_kprobe_bp_addr;
+static kprobe_opcode_t cur_kprobe_ss_buf[MAX_INSN_SIZE + 2];
+static unsigned long cur_kprobe_bp_addr;
 
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 
@@ -399,26 +399,25 @@ void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
 {
 	unsigned long nextpc;
 
-	current_kprobe_orig_pc = regs->pc;
-	memcpy(current_kprobe_ss_buf, &p->ainsn.insn[0], MAX_INSN_SIZE);
-	regs->pc = (unsigned long) current_kprobe_ss_buf;
+	cur_kprobe_orig_pc = regs->pc;
+	memcpy(cur_kprobe_ss_buf, &p->ainsn.insn[0], MAX_INSN_SIZE);
+	regs->pc = (unsigned long) cur_kprobe_ss_buf;
 
-	nextpc = find_nextpc(regs, &current_kprobe_ss_flags);
-	if (current_kprobe_ss_flags & SINGLESTEP_PCREL)
-		current_kprobe_next_pc =
-			current_kprobe_orig_pc + (nextpc - regs->pc);
+	nextpc = find_nextpc(regs, &cur_kprobe_ss_flags);
+	if (cur_kprobe_ss_flags & SINGLESTEP_PCREL)
+		cur_kprobe_next_pc = cur_kprobe_orig_pc + (nextpc - regs->pc);
 	else
-		current_kprobe_next_pc = nextpc;
+		cur_kprobe_next_pc = nextpc;
 
 	/* branching instructions need special handling */
-	if (current_kprobe_ss_flags & SINGLESTEP_BRANCH)
+	if (cur_kprobe_ss_flags & SINGLESTEP_BRANCH)
 		nextpc = singlestep_branch_setup(regs);
 
-	current_kprobe_bp_addr = nextpc;
+	cur_kprobe_bp_addr = nextpc;
 
 	*(u8 *) nextpc = BREAKPOINT_INSTRUCTION;
-	mn10300_dcache_flush_range2((unsigned) current_kprobe_ss_buf,
-				    sizeof(current_kprobe_ss_buf));
+	mn10300_dcache_flush_range2((unsigned) cur_kprobe_ss_buf,
+				    sizeof(cur_kprobe_ss_buf));
 	mn10300_icache_inv();
 }
 
@@ -440,7 +439,7 @@ static inline int __kprobes kprobe_handler(struct pt_regs *regs)
 			disarm_kprobe(p, regs);
 			ret = 1;
 		} else {
-			p = current_kprobe;
+			p = cur_kprobe;
 			if (p->break_handler && p->break_handler(p, regs))
 				goto ss_probe;
 		}
@@ -464,7 +463,7 @@ static inline int __kprobes kprobe_handler(struct pt_regs *regs)
 	}
 
 	kprobe_status = KPROBE_HIT_ACTIVE;
-	current_kprobe = p;
+	cur_kprobe = p;
 	if (p->pre_handler(p, regs)) {
 		/* handler has already set things up, so skip ss setup */
 		return 1;
@@ -491,8 +490,8 @@ no_kprobe:
 static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs)
 {
 	/* we may need to fixup regs/stack after singlestepping a call insn */
-	if (current_kprobe_ss_flags & SINGLESTEP_BRANCH) {
-		regs->pc = current_kprobe_orig_pc;
+	if (cur_kprobe_ss_flags & SINGLESTEP_BRANCH) {
+		regs->pc = cur_kprobe_orig_pc;
 		switch (p->ainsn.insn[0]) {
 		case 0xcd:	/* CALL (d16,PC) */
 			*(unsigned *) regs->sp = regs->mdr = regs->pc + 5;
@@ -523,8 +522,8 @@ static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs)
 		}
 	}
 
-	regs->pc = current_kprobe_next_pc;
-	current_kprobe_bp_addr = 0;
+	regs->pc = cur_kprobe_next_pc;
+	cur_kprobe_bp_addr = 0;
 }
 
 static inline int __kprobes post_kprobe_handler(struct pt_regs *regs)
@@ -532,10 +531,10 @@ static inline int __kprobes post_kprobe_handler(struct pt_regs *regs)
 	if (!kprobe_running())
 		return 0;
 
-	if (current_kprobe->post_handler)
-		current_kprobe->post_handler(current_kprobe, regs, 0);
+	if (cur_kprobe->post_handler)
+		cur_kprobe->post_handler(cur_kprobe, regs, 0);
 
-	resume_execution(current_kprobe, regs);
+	resume_execution(cur_kprobe, regs);
 	reset_current_kprobe();
 	preempt_enable_no_resched();
 	return 1;
@@ -545,12 +544,12 @@ static inline int __kprobes post_kprobe_handler(struct pt_regs *regs)
 static inline
 int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 {
-	if (current_kprobe->fault_handler &&
-	    current_kprobe->fault_handler(current_kprobe, regs, trapnr))
+	if (cur_kprobe->fault_handler &&
+	    cur_kprobe->fault_handler(cur_kprobe, regs, trapnr))
 		return 1;
 
 	if (kprobe_status & KPROBE_HIT_SS) {
-		resume_execution(current_kprobe, regs);
+		resume_execution(cur_kprobe, regs);
 		reset_current_kprobe();
 		preempt_enable_no_resched();
 	}
@@ -567,7 +566,7 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
 
 	switch (val) {
 	case DIE_BREAKPOINT:
-		if (current_kprobe_bp_addr != args->regs->pc) {
+		if (cur_kprobe_bp_addr != args->regs->pc) {
 			if (kprobe_handler(args->regs))
 				return NOTIFY_STOP;
 		} else {
diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c
index a9952b1..721d004 100644
--- a/drivers/crypto/padlock-aes.c
+++ b/drivers/crypto/padlock-aes.c
@@ -64,7 +64,7 @@ struct aes_ctx {
 	u32 *D;
 };
 
-static DEFINE_PER_CPU(struct cword *, last_cword);
+static DEFINE_PER_CPU(struct cword *, paes_last_cword);
 
 /* Tells whether the ACE is capable to generate
    the extended key for a given key_len. */
@@ -152,9 +152,9 @@ static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 
 ok:
 	for_each_online_cpu(cpu)
-		if (&ctx->cword.encrypt == per_cpu(last_cword, cpu) ||
-		    &ctx->cword.decrypt == per_cpu(last_cword, cpu))
-			per_cpu(last_cword, cpu) = NULL;
+		if (&ctx->cword.encrypt == per_cpu(paes_last_cword, cpu) ||
+		    &ctx->cword.decrypt == per_cpu(paes_last_cword, cpu))
+			per_cpu(paes_last_cword, cpu) = NULL;
 
 	return 0;
 }
@@ -166,7 +166,7 @@ static inline void padlock_reset_key(struct cword *cword)
 {
 	int cpu = raw_smp_processor_id();
 
-	if (cword != per_cpu(last_cword, cpu))
+	if (cword != per_cpu(paes_last_cword, cpu))
 #ifndef CONFIG_X86_64
 		asm volatile ("pushfl; popfl");
 #else
@@ -176,7 +176,7 @@ static inline void padlock_reset_key(struct cword *cword)
 
 static inline void padlock_store_cword(struct cword *cword)
 {
-	per_cpu(last_cword, raw_smp_processor_id()) = cword;
+	per_cpu(paes_last_cword, raw_smp_processor_id()) = cword;
 }
 
 /*
diff --git a/drivers/lguest/x86/core.c b/drivers/lguest/x86/core.c
index 6ae3888..fb2b7ef 100644
--- a/drivers/lguest/x86/core.c
+++ b/drivers/lguest/x86/core.c
@@ -69,7 +69,7 @@ static struct lguest_pages *lguest_pages(unsigned int cpu)
 		  (SWITCHER_ADDR + SHARED_SWITCHER_PAGES*PAGE_SIZE))[cpu]);
 }
 
-static DEFINE_PER_CPU(struct lg_cpu *, last_cpu);
+static DEFINE_PER_CPU(struct lg_cpu *, lg_last_cpu);
 
 /*S:010
  * We approach the Switcher.
@@ -90,8 +90,8 @@ static void copy_in_guest_info(struct lg_cpu *cpu, struct lguest_pages *pages)
 	 * meanwhile).  If that's not the case, we pretend everything in the
 	 * Guest has changed.
 	 */
-	if (__get_cpu_var(last_cpu) != cpu || cpu->last_pages != pages) {
-		__get_cpu_var(last_cpu) = cpu;
+	if (__get_cpu_var(lg_last_cpu) != cpu || cpu->last_pages != pages) {
+		__get_cpu_var(lg_last_cpu) = cpu;
 		cpu->last_pages = pages;
 		cpu->changed = CHANGED_ALL;
 	}
diff --git a/drivers/s390/net/netiucv.c b/drivers/s390/net/netiucv.c
index c84eadd..14e6144 100644
--- a/drivers/s390/net/netiucv.c
+++ b/drivers/s390/net/netiucv.c
@@ -113,11 +113,9 @@ static inline int iucv_dbf_passes(debug_info_t *dbf_grp, int level)
 #define IUCV_DBF_TEXT_(name, level, text...) \
 	do { \
 		if (iucv_dbf_passes(iucv_dbf_##name, level)) { \
-			char* iucv_dbf_txt_buf = \
-					get_cpu_var(iucv_dbf_txt_buf); \
-			sprintf(iucv_dbf_txt_buf, text); \
-			debug_text_event(iucv_dbf_##name, level, \
-						iucv_dbf_txt_buf); \
+			char* __buf = get_cpu_var(iucv_dbf_txt_buf); \
+			sprintf(__buf, text); \
+			debug_text_event(iucv_dbf_##name, level, __buf); \
 			put_cpu_var(iucv_dbf_txt_buf); \
 		} \
 	} while (0)
-- 
1.6.4.2


* [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (11 preceding siblings ...)
  2009-10-14  6:02 ` [PATCH 12/16] percpu: make misc percpu symbols unique Tejun Heo
@ 2009-10-14  6:02 ` Tejun Heo
  2009-10-14 14:36   ` Christoph Lameter
  2009-10-14  6:02 ` [PATCH 14/16] percpu: make access macros universal Tejun Heo
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:02 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo

From: Rusty Russell <rusty@rustcorp.com.au>

Now that the return from alloc_percpu is compatible with the address
of per-cpu vars, it makes sense to hand around the address of per-cpu
variables.  To make this sane, we remove the per_cpu__ prefix we
created to stop people accidentally using these vars directly.

Now that we have sparse, we can use that instead (next patch).

tj: * Updated to convert stuff which was missed by or added after the
      original patch.

    * Kill per_cpu_var() macro.
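
A hedged sketch (invented helper, not from the patch) of the
unification this enables:

  /* sketch: one helper now serves both static percpu variables and
   * alloc_percpu() memory */
  static DEFINE_PER_CPU(int, foo);	/* symbol is "foo", no prefix */

  static void bump(int *pcpu)
  {
  	(*this_cpu_ptr(pcpu))++;	/* shift by this CPU's offset */
  }

  static void example(void)
  {
  	int *dyn = alloc_percpu(int);

  	bump(&foo);	/* address of a static percpu variable */
  	bump(dyn);	/* dynamically allocated percpu memory */
  	free_percpu(dyn);
  }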

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Tejun Heo <tj@kernel.org>
---
 arch/blackfin/mach-common/entry.S       |    4 +-
 arch/cris/arch-v10/kernel/entry.S       |    2 +-
 arch/cris/arch-v32/mm/mmu.S             |    2 +-
 arch/ia64/include/asm/percpu.h          |    4 +-
 arch/ia64/kernel/ia64_ksyms.c           |    4 +-
 arch/ia64/mm/discontig.c                |    2 +-
 arch/microblaze/include/asm/entry.h     |    2 +-
 arch/parisc/lib/fixup.S                 |    8 +++---
 arch/powerpc/platforms/pseries/hvCall.S |    2 +-
 arch/sparc/kernel/nmi.c                 |    6 ++--
 arch/sparc/kernel/rtrap_64.S            |    8 +++---
 arch/x86/include/asm/percpu.h           |   37 ++++++++++++++----------------
 arch/x86/include/asm/system.h           |    8 +++---
 arch/x86/kernel/apic/nmi.c              |    6 ++--
 arch/x86/kernel/head_32.S               |    6 ++--
 arch/x86/kernel/vmlinux.lds.S           |    4 +-
 arch/x86/xen/xen-asm_32.S               |    4 +-
 include/asm-generic/percpu.h            |   12 +++++-----
 include/linux/percpu-defs.h             |   18 +++++----------
 include/linux/percpu.h                  |    5 +--
 include/linux/vmstat.h                  |    8 +++---
 kernel/rcutorture.c                     |    8 +++---
 kernel/trace/trace.c                    |    6 ++--
 kernel/trace/trace_functions_graph.c    |    4 +-
 24 files changed, 80 insertions(+), 90 deletions(-)

diff --git a/arch/blackfin/mach-common/entry.S b/arch/blackfin/mach-common/entry.S
index 1e7cac2..a3ea7e9 100644
--- a/arch/blackfin/mach-common/entry.S
+++ b/arch/blackfin/mach-common/entry.S
@@ -835,8 +835,8 @@ ENDPROC(_resume)
 
 ENTRY(_ret_from_exception)
 #ifdef CONFIG_IPIPE
-	p2.l = _per_cpu__ipipe_percpu_domain;
-	p2.h = _per_cpu__ipipe_percpu_domain;
+	p2.l = _ipipe_percpu_domain;
+	p2.h = _ipipe_percpu_domain;
 	r0.l = _ipipe_root;
 	r0.h = _ipipe_root;
 	r2 = [p2];
diff --git a/arch/cris/arch-v10/kernel/entry.S b/arch/cris/arch-v10/kernel/entry.S
index 2c18d08..c52bef3 100644
--- a/arch/cris/arch-v10/kernel/entry.S
+++ b/arch/cris/arch-v10/kernel/entry.S
@@ -358,7 +358,7 @@ mmu_bus_fault:
 1:	btstq	12, $r1		   ; Refill?
 	bpl	2f
 	lsrq	24, $r1     ; Get PGD index (bit 24-31)
-	move.d  [per_cpu__current_pgd], $r0 ; PGD for the current process
+	move.d  [current_pgd], $r0 ; PGD for the current process
 	move.d	[$r0+$r1.d], $r0   ; Get PMD
 	beq	2f
 	nop
diff --git a/arch/cris/arch-v32/mm/mmu.S b/arch/cris/arch-v32/mm/mmu.S
index 2238d15..f125d91 100644
--- a/arch/cris/arch-v32/mm/mmu.S
+++ b/arch/cris/arch-v32/mm/mmu.S
@@ -115,7 +115,7 @@
 #ifdef CONFIG_SMP
 	move    $s7, $acr	; PGD
 #else
-	move.d  per_cpu__current_pgd, $acr ; PGD
+	move.d  current_pgd, $acr ; PGD
 #endif
 	; Look up PMD in PGD
 	lsrq	24, $r0	; Get PMD index into PGD (bit 24-31)
diff --git a/arch/ia64/include/asm/percpu.h b/arch/ia64/include/asm/percpu.h
index 30cf465..f7c00a5 100644
--- a/arch/ia64/include/asm/percpu.h
+++ b/arch/ia64/include/asm/percpu.h
@@ -9,7 +9,7 @@
 #define PERCPU_ENOUGH_ROOM PERCPU_PAGE_SIZE
 
 #ifdef __ASSEMBLY__
-# define THIS_CPU(var)	(per_cpu__##var)  /* use this to mark accesses to per-CPU variables... */
+# define THIS_CPU(var)	(var)  /* use this to mark accesses to per-CPU variables... */
 #else /* !__ASSEMBLY__ */
 
 
@@ -39,7 +39,7 @@ extern void *per_cpu_init(void);
  * On the positive side, using __ia64_per_cpu_var() instead of __get_cpu_var() is slightly
  * more efficient.
  */
-#define __ia64_per_cpu_var(var)	per_cpu__##var
+#define __ia64_per_cpu_var(var)	var
 
 #include <asm-generic/percpu.h>
 
diff --git a/arch/ia64/kernel/ia64_ksyms.c b/arch/ia64/kernel/ia64_ksyms.c
index 461b999..7f4a0ed 100644
--- a/arch/ia64/kernel/ia64_ksyms.c
+++ b/arch/ia64/kernel/ia64_ksyms.c
@@ -30,9 +30,9 @@ EXPORT_SYMBOL(max_low_pfn);	/* defined by bootmem.c, but not exported by generic
 #endif
 
 #include <asm/processor.h>
-EXPORT_SYMBOL(per_cpu__ia64_cpu_info);
+EXPORT_SYMBOL(ia64_cpu_info);
 #ifdef CONFIG_SMP
-EXPORT_SYMBOL(per_cpu__local_per_cpu_offset);
+EXPORT_SYMBOL(local_per_cpu_offset);
 #endif
 
 #include <asm/uaccess.h>
diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
index 19c4b21..8d586d1 100644
--- a/arch/ia64/mm/discontig.c
+++ b/arch/ia64/mm/discontig.c
@@ -459,7 +459,7 @@ static void __init initialize_pernode_data(void)
 		cpu = 0;
 		node = node_cpuid[cpu].nid;
 		cpu0_cpu_info = (struct cpuinfo_ia64 *)(__phys_per_cpu_start +
-			((char *)&per_cpu__ia64_cpu_info - __per_cpu_start));
+			((char *)&ia64_cpu_info - __per_cpu_start));
 		cpu0_cpu_info->node_data = mem_data[node].node_data;
 	}
 #endif /* CONFIG_SMP */
diff --git a/arch/microblaze/include/asm/entry.h b/arch/microblaze/include/asm/entry.h
index 61abbd2..ec89f2a 100644
--- a/arch/microblaze/include/asm/entry.h
+++ b/arch/microblaze/include/asm/entry.h
@@ -21,7 +21,7 @@
  * places
  */
 
-#define PER_CPU(var) per_cpu__##var
+#define PER_CPU(var) var
 
 # ifndef __ASSEMBLY__
 DECLARE_PER_CPU(unsigned int, KSP); /* Saved kernel stack pointer */
diff --git a/arch/parisc/lib/fixup.S b/arch/parisc/lib/fixup.S
index d172d42..f8c45cc 100644
--- a/arch/parisc/lib/fixup.S
+++ b/arch/parisc/lib/fixup.S
@@ -36,8 +36,8 @@
 #endif
 	/* t2 = &__per_cpu_offset[smp_processor_id()]; */
 	LDREGX \t2(\t1),\t2 
-	addil LT%per_cpu__exception_data,%r27
-	LDREG RT%per_cpu__exception_data(%r1),\t1
+	addil LT%exception_data,%r27
+	LDREG RT%exception_data(%r1),\t1
 	/* t1 = &__get_cpu_var(exception_data) */
 	add,l \t1,\t2,\t1
 	/* t1 = t1->fault_ip */
@@ -46,8 +46,8 @@
 #else
 	.macro  get_fault_ip t1 t2
 	/* t1 = &__get_cpu_var(exception_data) */
-	addil LT%per_cpu__exception_data,%r27
-	LDREG RT%per_cpu__exception_data(%r1),\t2
+	addil LT%exception_data,%r27
+	LDREG RT%exception_data(%r1),\t2
 	/* t1 = t2->fault_ip */
 	LDREG EXCDATA_IP(\t2), \t1
 	.endm
diff --git a/arch/powerpc/platforms/pseries/hvCall.S b/arch/powerpc/platforms/pseries/hvCall.S
index c1427b3..580f789 100644
--- a/arch/powerpc/platforms/pseries/hvCall.S
+++ b/arch/powerpc/platforms/pseries/hvCall.S
@@ -55,7 +55,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_PURR);				\
 	/* calculate address of stat structure r4 = opcode */	\
 	srdi	r4,r4,2;		/* index into array */	\
 	mulli	r4,r4,HCALL_STAT_SIZE;				\
-	LOAD_REG_ADDR(r7, per_cpu__hcall_stats);		\
+	LOAD_REG_ADDR(r7, hcall_stats);				\
 	add	r4,r4,r7;					\
 	ld	r7,PACA_DATA_OFFSET(r13); /* per cpu offset */	\
 	add	r4,r4,r7;					\
diff --git a/arch/sparc/kernel/nmi.c b/arch/sparc/kernel/nmi.c
index f30f4a1..2ad288f 100644
--- a/arch/sparc/kernel/nmi.c
+++ b/arch/sparc/kernel/nmi.c
@@ -112,13 +112,13 @@ notrace __kprobes void perfctr_irq(int irq, struct pt_regs *regs)
 		touched = 1;
 	}
 	if (!touched && __get_cpu_var(last_irq_sum) == sum) {
-		__this_cpu_inc(per_cpu_var(alert_counter));
-		if (__this_cpu_read(per_cpu_var(alert_counter)) == 30 * nmi_hz)
+		__this_cpu_inc(alert_counter);
+		if (__this_cpu_read(alert_counter) == 30 * nmi_hz)
 			die_nmi("BUG: NMI Watchdog detected LOCKUP",
 				regs, panic_on_timeout);
 	} else {
 		__get_cpu_var(last_irq_sum) = sum;
-		__this_cpu_write(per_cpu_var(alert_counter), 0);
+		__this_cpu_write(alert_counter, 0);
 	}
 	if (__get_cpu_var(wd_enabled)) {
 		write_pic(picl_value(nmi_hz));
diff --git a/arch/sparc/kernel/rtrap_64.S b/arch/sparc/kernel/rtrap_64.S
index fd3cee4..1ddec40 100644
--- a/arch/sparc/kernel/rtrap_64.S
+++ b/arch/sparc/kernel/rtrap_64.S
@@ -149,11 +149,11 @@ rtrap_nmi:	ldx			[%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
 rtrap_irq:
 rtrap:
 #ifndef CONFIG_SMP
-		sethi			%hi(per_cpu____cpu_data), %l0
-		lduw			[%l0 + %lo(per_cpu____cpu_data)], %l1
+		sethi			%hi(__cpu_data), %l0
+		lduw			[%l0 + %lo(__cpu_data)], %l1
 #else
-		sethi			%hi(per_cpu____cpu_data), %l0
-		or			%l0, %lo(per_cpu____cpu_data), %l0
+		sethi			%hi(__cpu_data), %l0
+		or			%l0, %lo(__cpu_data), %l0
 		lduw			[%l0 + %g5], %l1
 #endif
 		cmp			%l1, 0
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 0c44196..4c170cc 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -25,19 +25,18 @@
  */
 #ifdef CONFIG_SMP
 #define PER_CPU(var, reg)						\
-	__percpu_mov_op %__percpu_seg:per_cpu__this_cpu_off, reg;	\
-	lea per_cpu__##var(reg), reg
-#define PER_CPU_VAR(var)	%__percpu_seg:per_cpu__##var
+	__percpu_mov_op %__percpu_seg:this_cpu_off, reg;		\
+	lea var(reg), reg
+#define PER_CPU_VAR(var)	%__percpu_seg:var
 #else /* ! SMP */
-#define PER_CPU(var, reg)						\
-	__percpu_mov_op $per_cpu__##var, reg
-#define PER_CPU_VAR(var)	per_cpu__##var
+#define PER_CPU(var, reg)	__percpu_mov_op $var, reg
+#define PER_CPU_VAR(var)	var
 #endif	/* SMP */
 
 #ifdef CONFIG_X86_64_SMP
 #define INIT_PER_CPU_VAR(var)  init_per_cpu__##var
 #else
-#define INIT_PER_CPU_VAR(var)  per_cpu__##var
+#define INIT_PER_CPU_VAR(var)  var
 #endif
 
 #else /* ...!ASSEMBLY */
@@ -60,12 +59,12 @@
  * There also must be an entry in vmlinux_64.lds.S
  */
 #define DECLARE_INIT_PER_CPU(var) \
-       extern typeof(per_cpu_var(var)) init_per_cpu_var(var)
+       extern typeof(var) init_per_cpu_var(var)
 
 #ifdef CONFIG_X86_64_SMP
 #define init_per_cpu_var(var)  init_per_cpu__##var
 #else
-#define init_per_cpu_var(var)  per_cpu_var(var)
+#define init_per_cpu_var(var)  var
 #endif
 
 /* For arch-specific code, we can use direct single-insn ops (they
@@ -142,16 +141,14 @@ do {							\
  * per-thread variables implemented as per-cpu variables and thus
  * stable for the duration of the respective task.
  */
-#define percpu_read(var)	percpu_from_op("mov", per_cpu__##var,	\
-					       "m" (per_cpu__##var))
-#define percpu_read_stable(var)	percpu_from_op("mov", per_cpu__##var,	\
-					       "p" (&per_cpu__##var))
-#define percpu_write(var, val)	percpu_to_op("mov", per_cpu__##var, val)
-#define percpu_add(var, val)	percpu_to_op("add", per_cpu__##var, val)
-#define percpu_sub(var, val)	percpu_to_op("sub", per_cpu__##var, val)
-#define percpu_and(var, val)	percpu_to_op("and", per_cpu__##var, val)
-#define percpu_or(var, val)	percpu_to_op("or", per_cpu__##var, val)
-#define percpu_xor(var, val)	percpu_to_op("xor", per_cpu__##var, val)
+#define percpu_read(var)		percpu_from_op("mov", var, "m" (var))
+#define percpu_read_stable(var)		percpu_from_op("mov", var, "p" (&(var)))
+#define percpu_write(var, val)		percpu_to_op("mov", var, val)
+#define percpu_add(var, val)		percpu_to_op("add", var, val)
+#define percpu_sub(var, val)		percpu_to_op("sub", var, val)
+#define percpu_and(var, val)		percpu_to_op("and", var, val)
+#define percpu_or(var, val)		percpu_to_op("or", var, val)
+#define percpu_xor(var, val)		percpu_to_op("xor", var, val)
 
 #define __this_cpu_read_1(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
 #define __this_cpu_read_2(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
@@ -236,7 +233,7 @@ do {							\
 ({									\
 	int old__;							\
 	asm volatile("btr %2,"__percpu_arg(1)"\n\tsbbl %0,%0"		\
-		     : "=r" (old__), "+m" (per_cpu__##var)		\
+		     : "=r" (old__), "+m" (var)				\
 		     : "dIr" (bit));					\
 	old__;								\
 })
diff --git a/arch/x86/include/asm/system.h b/arch/x86/include/asm/system.h
index f08f973..de10c19 100644
--- a/arch/x86/include/asm/system.h
+++ b/arch/x86/include/asm/system.h
@@ -31,7 +31,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 	"movl %P[task_canary](%[next]), %%ebx\n\t"			\
 	"movl %%ebx, "__percpu_arg([stack_canary])"\n\t"
 #define __switch_canary_oparam						\
-	, [stack_canary] "=m" (per_cpu_var(stack_canary.canary))
+	, [stack_canary] "=m" (stack_canary.canary)
 #define __switch_canary_iparam						\
 	, [task_canary] "i" (offsetof(struct task_struct, stack_canary))
 #else	/* CC_STACKPROTECTOR */
@@ -113,7 +113,7 @@ do {									\
 	"movq %P[task_canary](%%rsi),%%r8\n\t"				  \
 	"movq %%r8,"__percpu_arg([gs_canary])"\n\t"
 #define __switch_canary_oparam						  \
-	, [gs_canary] "=m" (per_cpu_var(irq_stack_union.stack_canary))
+	, [gs_canary] "=m" (irq_stack_union.stack_canary)
 #define __switch_canary_iparam						  \
 	, [task_canary] "i" (offsetof(struct task_struct, stack_canary))
 #else	/* CC_STACKPROTECTOR */
@@ -134,7 +134,7 @@ do {									\
 	     __switch_canary						  \
 	     "movq %P[thread_info](%%rsi),%%r8\n\t"			  \
 	     "movq %%rax,%%rdi\n\t" 					  \
-	     "testl  %[_tif_fork],%P[ti_flags](%%r8)\n\t"	  \
+	     "testl  %[_tif_fork],%P[ti_flags](%%r8)\n\t"		  \
 	     "jnz   ret_from_fork\n\t"					  \
 	     RESTORE_CONTEXT						  \
 	     : "=a" (last)					  	  \
@@ -144,7 +144,7 @@ do {									\
 	       [ti_flags] "i" (offsetof(struct thread_info, flags)),	  \
 	       [_tif_fork] "i" (_TIF_FORK),			  	  \
 	       [thread_info] "i" (offsetof(struct task_struct, stack)),   \
-	       [current_task] "m" (per_cpu_var(current_task))		  \
+	       [current_task] "m" (current_task)			  \
 	       __switch_canary_iparam					  \
 	     : "memory", "cc" __EXTRA_CLOBBER)
 #endif
diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
index e631cc4..4540437 100644
--- a/arch/x86/kernel/apic/nmi.c
+++ b/arch/x86/kernel/apic/nmi.c
@@ -437,8 +437,8 @@ nmi_watchdog_tick(struct pt_regs *regs, unsigned reason)
 		 * Ayiee, looks like this CPU is stuck ...
 		 * wait a few IRQs (5 seconds) before doing the oops ...
 		 */
-		__this_cpu_inc(per_cpu_var(alert_counter));
-		if (__this_cpu_read(per_cpu_var(alert_counter)) == 5 * nmi_hz)
+		__this_cpu_inc(alert_counter);
+		if (__this_cpu_read(alert_counter) == 5 * nmi_hz)
 			/*
 			 * die_nmi will return ONLY if NOTIFY_STOP happens..
 			 */
@@ -446,7 +446,7 @@ nmi_watchdog_tick(struct pt_regs *regs, unsigned reason)
 				regs, panic_on_timeout);
 	} else {
 		__get_cpu_var(last_irq_sum) = sum;
-		__this_cpu_write(per_cpu_var(alert_counter), 0);
+		__this_cpu_write(alert_counter, 0);
 	}
 
 	/* see if the nmi watchdog went off */
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index 050c278..fd39eaf 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -438,8 +438,8 @@ is386:	movl $2,%ecx		# set MP
 	 */
 	cmpb $0,ready
 	jne 1f
-	movl $per_cpu__gdt_page,%eax
-	movl $per_cpu__stack_canary,%ecx
+	movl $gdt_page,%eax
+	movl $stack_canary,%ecx
 	movw %cx, 8 * GDT_ENTRY_STACK_CANARY + 2(%eax)
 	shrl $16, %ecx
 	movb %cl, 8 * GDT_ENTRY_STACK_CANARY + 4(%eax)
@@ -702,7 +702,7 @@ idt_descr:
 	.word 0				# 32 bit align gdt_desc.address
 ENTRY(early_gdt_descr)
 	.word GDT_ENTRIES*8-1
-	.long per_cpu__gdt_page		/* Overwritten for secondary CPUs */
+	.long gdt_page			/* Overwritten for secondary CPUs */
 
 /*
  * The boot_gdt must mirror the equivalent in setup.S and is
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 92929fb..ecb9271 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -312,7 +312,7 @@ SECTIONS
  * Per-cpu symbols which need to be offset from __per_cpu_load
  * for the boot processor.
  */
-#define INIT_PER_CPU(x) init_per_cpu__##x = per_cpu__##x + __per_cpu_load
+#define INIT_PER_CPU(x) init_per_cpu__##x = x + __per_cpu_load
 INIT_PER_CPU(gdt_page);
 INIT_PER_CPU(irq_stack_union);
 
@@ -323,7 +323,7 @@ INIT_PER_CPU(irq_stack_union);
 	   "kernel image bigger than KERNEL_IMAGE_SIZE");
 
 #ifdef CONFIG_SMP
-. = ASSERT((per_cpu__irq_stack_union == 0),
+. = ASSERT((irq_stack_union == 0),
            "irq_stack_union is not at start of per-cpu area");
 #endif
 
diff --git a/arch/x86/xen/xen-asm_32.S b/arch/x86/xen/xen-asm_32.S
index 88e15de..22a2093 100644
--- a/arch/x86/xen/xen-asm_32.S
+++ b/arch/x86/xen/xen-asm_32.S
@@ -90,9 +90,9 @@ ENTRY(xen_iret)
 	GET_THREAD_INFO(%eax)
 	movl TI_cpu(%eax), %eax
 	movl __per_cpu_offset(,%eax,4), %eax
-	mov per_cpu__xen_vcpu(%eax), %eax
+	mov xen_vcpu(%eax), %eax
 #else
-	movl per_cpu__xen_vcpu, %eax
+	movl xen_vcpu, %eax
 #endif
 
 	/* check IF state we're restoring */
diff --git a/include/asm-generic/percpu.h b/include/asm-generic/percpu.h
index 8087b90..ca6f049 100644
--- a/include/asm-generic/percpu.h
+++ b/include/asm-generic/percpu.h
@@ -50,11 +50,11 @@ extern unsigned long __per_cpu_offset[NR_CPUS];
  * offset.
  */
 #define per_cpu(var, cpu) \
-	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), per_cpu_offset(cpu)))
+	(*SHIFT_PERCPU_PTR(&(var), per_cpu_offset(cpu)))
 #define __get_cpu_var(var) \
-	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), my_cpu_offset))
+	(*SHIFT_PERCPU_PTR(&(var), my_cpu_offset))
 #define __raw_get_cpu_var(var) \
-	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), __my_cpu_offset))
+	(*SHIFT_PERCPU_PTR(&(var), __my_cpu_offset))
 
 #define this_cpu_ptr(ptr) SHIFT_PERCPU_PTR(ptr, my_cpu_offset)
 #define __this_cpu_ptr(ptr) SHIFT_PERCPU_PTR(ptr, __my_cpu_offset)
@@ -66,9 +66,9 @@ extern void setup_per_cpu_areas(void);
 
 #else /* ! SMP */
 
-#define per_cpu(var, cpu)			(*((void)(cpu), &per_cpu_var(var)))
-#define __get_cpu_var(var)			per_cpu_var(var)
-#define __raw_get_cpu_var(var)			per_cpu_var(var)
+#define per_cpu(var, cpu)			(*((void)(cpu), &(var)))
+#define __get_cpu_var(var)			(var)
+#define __raw_get_cpu_var(var)			(var)
 #define this_cpu_ptr(ptr) per_cpu_ptr(ptr, 0)
 #define __this_cpu_ptr(ptr) this_cpu_ptr(ptr)
 
diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
index 5a5d6ce..ee99f6c 100644
--- a/include/linux/percpu-defs.h
+++ b/include/linux/percpu-defs.h
@@ -2,12 +2,6 @@
 #define _LINUX_PERCPU_DEFS_H
 
 /*
- * Determine the real variable name from the name visible in the
- * kernel sources.
- */
-#define per_cpu_var(var) per_cpu__##var
-
-/*
  * Base implementations of per-CPU variable declarations and definitions, where
  * the section in which the variable is to be placed is provided by the
  * 'sec' argument.  This may be used to affect the parameters governing the
@@ -56,24 +50,24 @@
  */
 #define DECLARE_PER_CPU_SECTION(type, name, sec)			\
 	extern __PCPU_DUMMY_ATTRS char __pcpu_scope_##name;		\
-	extern __PCPU_ATTRS(sec) __typeof__(type) per_cpu__##name
+	extern __PCPU_ATTRS(sec) __typeof__(type) name
 
 #define DEFINE_PER_CPU_SECTION(type, name, sec)				\
 	__PCPU_DUMMY_ATTRS char __pcpu_scope_##name;			\
 	extern __PCPU_DUMMY_ATTRS char __pcpu_unique_##name;		\
 	__PCPU_DUMMY_ATTRS char __pcpu_unique_##name;			\
 	__PCPU_ATTRS(sec) PER_CPU_DEF_ATTRIBUTES __weak			\
-	__typeof__(type) per_cpu__##name
+	__typeof__(type) name
 #else
 /*
  * Normal declaration and definition macros.
  */
 #define DECLARE_PER_CPU_SECTION(type, name, sec)			\
-	extern __PCPU_ATTRS(sec) __typeof__(type) per_cpu__##name
+	extern __PCPU_ATTRS(sec) __typeof__(type) name
 
 #define DEFINE_PER_CPU_SECTION(type, name, sec)				\
 	__PCPU_ATTRS(sec) PER_CPU_DEF_ATTRIBUTES			\
-	__typeof__(type) per_cpu__##name
+	__typeof__(type) name
 #endif
 
 /*
@@ -137,8 +131,8 @@
 /*
  * Intermodule exports for per-CPU variables.
  */
-#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
-#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
+#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(var)
+#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(var)
 
 
 #endif /* _LINUX_PERCPU_DEFS_H */
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 522f421..e12410e 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -182,7 +182,7 @@ static inline void *pcpu_lpage_remapped(void *kaddr)
 #ifndef percpu_read
 # define percpu_read(var)						\
   ({									\
-	typeof(per_cpu_var(var)) __tmp_var__;				\
+	typeof(var) __tmp_var__;					\
 	__tmp_var__ = get_cpu_var(var);					\
 	put_cpu_var(var);						\
 	__tmp_var__;							\
@@ -253,8 +253,7 @@ do {									\
 
 /*
  * Optimized manipulation for memory allocated through the per cpu
- * allocator or for addresses of per cpu variables (can be determined
- * using per_cpu_var(xx).
+ * allocator or for addresses of per cpu variables.
  *
  * These operation guarantee exclusivity of access for other operations
  * on the *same* processor. The assumption is that per cpu data is only
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index d858897..3e489fd 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -76,22 +76,22 @@ DECLARE_PER_CPU(struct vm_event_state, vm_event_states);
 
 static inline void __count_vm_event(enum vm_event_item item)
 {
-	__this_cpu_inc(per_cpu_var(vm_event_states).event[item]);
+	__this_cpu_inc(vm_event_states.event[item]);
 }
 
 static inline void count_vm_event(enum vm_event_item item)
 {
-	this_cpu_inc(per_cpu_var(vm_event_states).event[item]);
+	this_cpu_inc(vm_event_states.event[item]);
 }
 
 static inline void __count_vm_events(enum vm_event_item item, long delta)
 {
-	__this_cpu_add(per_cpu_var(vm_event_states).event[item], delta);
+	__this_cpu_add(vm_event_states.event[item], delta);
 }
 
 static inline void count_vm_events(enum vm_event_item item, long delta)
 {
-	this_cpu_add(per_cpu_var(vm_event_states).event[item], delta);
+	this_cpu_add(vm_event_states.event[item], delta);
 }
 
 extern void all_vm_events(unsigned long *);
diff --git a/kernel/rcutorture.c b/kernel/rcutorture.c
index 178967b..e339ab3 100644
--- a/kernel/rcutorture.c
+++ b/kernel/rcutorture.c
@@ -731,13 +731,13 @@ static void rcu_torture_timer(unsigned long unused)
 		/* Should not happen, but... */
 		pipe_count = RCU_TORTURE_PIPE_LEN;
 	}
-	__this_cpu_inc(per_cpu_var(rcu_torture_count)[pipe_count]);
+	__this_cpu_inc(rcu_torture_count[pipe_count]);
 	completed = cur_ops->completed() - completed;
 	if (completed > RCU_TORTURE_PIPE_LEN) {
 		/* Should not happen, but... */
 		completed = RCU_TORTURE_PIPE_LEN;
 	}
-	__this_cpu_inc(per_cpu_var(rcu_torture_batch)[completed]);
+	__this_cpu_inc(rcu_torture_batch[completed]);
 	preempt_enable();
 	cur_ops->readunlock(idx);
 }
@@ -786,13 +786,13 @@ rcu_torture_reader(void *arg)
 			/* Should not happen, but... */
 			pipe_count = RCU_TORTURE_PIPE_LEN;
 		}
-		__this_cpu_inc(per_cpu_var(rcu_torture_count)[pipe_count]);
+		__this_cpu_inc(rcu_torture_count[pipe_count]);
 		completed = cur_ops->completed() - completed;
 		if (completed > RCU_TORTURE_PIPE_LEN) {
 			/* Should not happen, but... */
 			completed = RCU_TORTURE_PIPE_LEN;
 		}
-		__this_cpu_inc(per_cpu_var(rcu_torture_batch)[completed]);
+		__this_cpu_inc(rcu_torture_batch[completed]);
 		preempt_enable();
 		cur_ops->readunlock(idx);
 		schedule();
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 85a5ed7..b808177 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -91,12 +91,12 @@ DEFINE_PER_CPU(int, ftrace_cpu_disabled);
 static inline void ftrace_disable_cpu(void)
 {
 	preempt_disable();
-	__this_cpu_inc(per_cpu_var(ftrace_cpu_disabled));
+	__this_cpu_inc(ftrace_cpu_disabled);
 }
 
 static inline void ftrace_enable_cpu(void)
 {
-	__this_cpu_dec(per_cpu_var(ftrace_cpu_disabled));
+	__this_cpu_dec(ftrace_cpu_disabled);
 	preempt_enable();
 }
 
@@ -1085,7 +1085,7 @@ trace_function(struct trace_array *tr,
 	struct ftrace_entry *entry;
 
 	/* If we are reading the ring buffer, don't trace */
-	if (unlikely(__this_cpu_read(per_cpu_var(ftrace_cpu_disabled))))
+	if (unlikely(__this_cpu_read(ftrace_cpu_disabled)))
 		return;
 
 	event = trace_buffer_lock_reserve(buffer, TRACE_FN, sizeof(*entry),
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 90a6daa..8614e32 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -176,7 +176,7 @@ static int __trace_graph_entry(struct trace_array *tr,
 	struct ring_buffer *buffer = tr->buffer;
 	struct ftrace_graph_ent_entry *entry;
 
-	if (unlikely(__this_cpu_read(per_cpu_var(ftrace_cpu_disabled))))
+	if (unlikely(__this_cpu_read(ftrace_cpu_disabled)))
 		return 0;
 
 	event = trace_buffer_lock_reserve(buffer, TRACE_GRAPH_ENT,
@@ -240,7 +240,7 @@ static void __trace_graph_return(struct trace_array *tr,
 	struct ring_buffer *buffer = tr->buffer;
 	struct ftrace_graph_ret_entry *entry;
 
-	if (unlikely(__this_cpu_read(per_cpu_var(ftrace_cpu_disabled))))
+	if (unlikely(__this_cpu_read(ftrace_cpu_disabled)))
 		return;
 
 	event = trace_buffer_lock_reserve(buffer, TRACE_GRAPH_RET,
-- 
1.6.4.2


* [PATCH 14/16] percpu: make access macros universal
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (12 preceding siblings ...)
  2009-10-14  6:02 ` [PATCH 13/16] percpu: remove per_cpu__ prefix Tejun Heo
@ 2009-10-14  6:02 ` Tejun Heo
  2009-10-14 14:38   ` Christoph Lameter
  2009-10-14  6:02 ` [PATCH 15/16] percpu: add __percpu for sparse Tejun Heo
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:02 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo

Now that the per_cpu__ prefix is gone, there's no distinction between
static and dynamic percpu variables.  Make get_cpu_var() accept
dynamic percpu variables, and make sure all macros parenthesize the
parameter and evaluate it only once, so that any expression which
evaluates to a percpu address can be used safely.
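
A short usage sketch (hypothetical counter) of what this makes legal:

  /* sketch: get_cpu_var()/put_cpu_var() on dynamic percpu memory */
  static void inc_counter(int *cnt)	/* cnt from alloc_percpu(int) */
  {
  	get_cpu_var(*cnt)++;	/* preemption off, bump this CPU's copy */
  	put_cpu_var(*cnt);	/* preemption back on */
  }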

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 include/linux/percpu.h |   23 ++++++++++++++---------
 1 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index e12410e..f965f83 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -27,10 +27,13 @@
  * we force a syntax error here if it isn't.
  */
 #define get_cpu_var(var) (*({				\
-	extern int simple_identifier_##var(void);	\
 	preempt_disable();				\
 	&__get_cpu_var(var); }))
-#define put_cpu_var(var) preempt_enable()
+
+#define put_cpu_var(var) do {				\
+	(void)(var);					\
+	preempt_enable();				\
+} while (0)
 
 #ifdef CONFIG_SMP
 
@@ -182,17 +185,19 @@ static inline void *pcpu_lpage_remapped(void *kaddr)
 #ifndef percpu_read
 # define percpu_read(var)						\
   ({									\
-	typeof(var) __tmp_var__;					\
-	__tmp_var__ = get_cpu_var(var);					\
-	put_cpu_var(var);						\
-	__tmp_var__;							\
+	typeof(var) *pr_ptr__ = &(var);					\
+	typeof(var) pr_ret__;						\
+	pr_ret__ = get_cpu_var(*pr_ptr__);				\
+	put_cpu_var(*pr_ptr__);						\
+	pr_ret__;							\
   })
 #endif
 
 #define __percpu_generic_to_op(var, val, op)				\
 do {									\
-	get_cpu_var(var) op val;					\
-	put_cpu_var(var);						\
+	typeof(var) *pgto_ptr__ = &(var);				\
+	get_cpu_var(*pgto_ptr__) op val;				\
+	put_cpu_var(*pgto_ptr__);					\
 } while (0)
 
 #ifndef percpu_write
@@ -304,7 +309,7 @@ do {									\
 #define _this_cpu_generic_to_op(pcp, val, op)				\
 do {									\
 	preempt_disable();						\
-	*__this_cpu_ptr(&pcp) op val;					\
+	*__this_cpu_ptr(&(pcp)) op val;					\
 	preempt_enable();						\
 } while (0)
 
-- 
1.6.4.2


* [PATCH 15/16] percpu: add __percpu for sparse.
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (13 preceding siblings ...)
  2009-10-14  6:02 ` [PATCH 14/16] percpu: make access macros universal Tejun Heo
@ 2009-10-14  6:02 ` Tejun Heo
  2009-10-14  6:02 ` [PATCH 16/16] percpu: make accessors check for percpu pointer in sparse Tejun Heo
  2009-10-29 13:40 ` [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
  16 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:02 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Al Viro, Tejun Heo

From: Rusty Russell <rusty@rustcorp.com.au>

We have to make __kernel "__attribute__((address_space(0)))" so we can
cast to it.

tj: * put_cpu_var() update.

    * Annotations added to dynamic allocator interface.
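
A hedged sketch (illustrative variable) of what sparse flags once
percpu variables live in their own address space:

  /* sketch: __percpu is noderef, so direct use trips sparse */
  static DEFINE_PER_CPU(int, foo);

  static void example(void)
  {
  	int v;

  	v = foo;		/* sparse: dereference of noderef expression */
  	v = __get_cpu_var(foo);	/* ok: accessor casts via __kernel __force */
  }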

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Tejun Heo <tj@kernel.org>
---
 include/asm-generic/percpu.h |    4 +++-
 include/linux/compiler.h     |    4 +++-
 include/linux/percpu-defs.h  |    2 +-
 include/linux/percpu.h       |   18 +++++++++++-------
 4 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/include/asm-generic/percpu.h b/include/asm-generic/percpu.h
index ca6f049..fded453 100644
--- a/include/asm-generic/percpu.h
+++ b/include/asm-generic/percpu.h
@@ -41,7 +41,9 @@ extern unsigned long __per_cpu_offset[NR_CPUS];
  * Only S390 provides its own means of moving the pointer.
  */
 #ifndef SHIFT_PERCPU_PTR
-#define SHIFT_PERCPU_PTR(__p, __offset)	RELOC_HIDE((__p), (__offset))
+/* Weird cast keeps both GCC and sparse happy. */
+#define SHIFT_PERCPU_PTR(__p, __offset)				\
+	RELOC_HIDE((typeof(*(__p)) __kernel __force *)(__p), (__offset))
 #endif
 
 /*
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 04fb513..abba804 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -5,7 +5,7 @@
 
 #ifdef __CHECKER__
 # define __user		__attribute__((noderef, address_space(1)))
-# define __kernel	/* default address space */
+# define __kernel	__attribute__((address_space(0)))
 # define __safe		__attribute__((safe))
 # define __force	__attribute__((force))
 # define __nocast	__attribute__((nocast))
@@ -15,6 +15,7 @@
 # define __acquire(x)	__context__(x,1)
 # define __release(x)	__context__(x,-1)
 # define __cond_lock(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
+# define __percpu	__attribute__((noderef, address_space(3)))
 extern void __chk_user_ptr(const volatile void __user *);
 extern void __chk_io_ptr(const volatile void __iomem *);
 #else
@@ -32,6 +33,7 @@ extern void __chk_io_ptr(const volatile void __iomem *);
 # define __acquire(x) (void)0
 # define __release(x) (void)0
 # define __cond_lock(x,c) (c)
+# define __percpu
 #endif
 
 #ifdef __KERNEL__
diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
index ee99f6c..0fa0cb5 100644
--- a/include/linux/percpu-defs.h
+++ b/include/linux/percpu-defs.h
@@ -12,7 +12,7 @@
  * that section.
  */
 #define __PCPU_ATTRS(sec)						\
-	__attribute__((section(PER_CPU_BASE_SECTION sec)))		\
+	__percpu __attribute__((section(PER_CPU_BASE_SECTION sec)))	\
 	PER_CPU_ATTRIBUTES
 
 #define __PCPU_DUMMY_ATTRS						\
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index f965f83..2c0d31a 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -30,8 +30,12 @@
 	preempt_disable();				\
 	&__get_cpu_var(var); }))
 
+/*
+ * The weird & is necessary because sparse considers (void)(var) to be
+ * a direct dereference of percpu variable (var).
+ */
 #define put_cpu_var(var) do {				\
-	(void)(var);					\
+	(void)&(var);					\
 	preempt_enable();				\
 } while (0)
 
@@ -130,9 +134,9 @@ extern int __init pcpu_page_first_chunk(size_t reserved_size,
  */
 #define per_cpu_ptr(ptr, cpu)	SHIFT_PERCPU_PTR((ptr), per_cpu_offset((cpu)))
 
-extern void *__alloc_reserved_percpu(size_t size, size_t align);
-extern void *__alloc_percpu(size_t size, size_t align);
-extern void free_percpu(void *__pdata);
+extern void __percpu *__alloc_reserved_percpu(size_t size, size_t align);
+extern void __percpu *__alloc_percpu(size_t size, size_t align);
+extern void free_percpu(void __percpu *__pdata);
 
 #ifndef CONFIG_HAVE_SETUP_PER_CPU_AREA
 extern void __init setup_per_cpu_areas(void);
@@ -142,7 +146,7 @@ extern void __init setup_per_cpu_areas(void);
 
 #define per_cpu_ptr(ptr, cpu) ({ (void)(cpu); (ptr); })
 
-static inline void *__alloc_percpu(size_t size, size_t align)
+static inline void __percpu *__alloc_percpu(size_t size, size_t align)
 {
 	/*
 	 * Can't easily make larger alignment work with kmalloc.  WARN
@@ -153,7 +157,7 @@ static inline void *__alloc_percpu(size_t size, size_t align)
 	return kzalloc(size, GFP_KERNEL);
 }
 
-static inline void free_percpu(void *p)
+static inline void free_percpu(void __percpu *p)
 {
 	kfree(p);
 }
@@ -168,7 +172,7 @@ static inline void *pcpu_lpage_remapped(void *kaddr)
 #endif /* CONFIG_SMP */
 
 #define alloc_percpu(type)	\
-	(typeof(type) *)__alloc_percpu(sizeof(type), __alignof__(type))
+	(typeof(type) __percpu *)__alloc_percpu(sizeof(type), __alignof__(type))
 
 /*
  * Optional methods for optimized non-lvalue per-cpu variable access.
-- 
1.6.4.2


* [PATCH 16/16] percpu: make accessors check for percpu pointer in sparse
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (14 preceding siblings ...)
  2009-10-14  6:02 ` [PATCH 15/16] percpu: add __percpu for sparse Tejun Heo
@ 2009-10-14  6:02 ` Tejun Heo
  2009-10-14 14:41   ` Christoph Lameter
  2009-10-29 13:40 ` [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
  16 siblings, 1 reply; 47+ messages in thread
From: Tejun Heo @ 2009-10-14  6:02 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert
  Cc: Tejun Heo, Al Viro

The previous patch made sparse warn about percpu variables being used
directly without going through percpu accessors.  This patch
implements the other half - checking whether a non-percpu variable is
passed into percpu accessors.
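
To illustrate (hypothetical variable names, not from the patch), the
check lets sparse tell these two apart:

	/* sketch, assuming the __percpu annotation from patch 15/16 */
	DEFINE_PER_CPU(int, good_cnt);
	int plain_int;

	this_cpu_write(good_cnt, 0);	/* ok: &good_cnt is __percpu */
	this_cpu_write(plain_int, 0);	/* sparse: incorrect type in
					   initializer (different address
					   spaces) */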

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Al Viro <viro@zeniv.linux.org.uk>
---
 include/asm-generic/percpu.h |    6 ++++--
 include/linux/percpu-defs.h  |   20 ++++++++++++++++++--
 include/linux/percpu.h       |    2 ++
 3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/include/asm-generic/percpu.h b/include/asm-generic/percpu.h
index fded453..04f91c2 100644
--- a/include/asm-generic/percpu.h
+++ b/include/asm-generic/percpu.h
@@ -42,8 +42,10 @@ extern unsigned long __per_cpu_offset[NR_CPUS];
  */
 #ifndef SHIFT_PERCPU_PTR
 /* Weird cast keeps both GCC and sparse happy. */
-#define SHIFT_PERCPU_PTR(__p, __offset)				\
-	RELOC_HIDE((typeof(*(__p)) __kernel __force *)(__p), (__offset))
+#define SHIFT_PERCPU_PTR(__p, __offset)	({				\
+	__verify_pcpu_ptr((__p));					\
+	RELOC_HIDE((typeof(*(__p)) __kernel __force *)(__p), (__offset)); \
+})
 #endif
 
 /*
diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
index 0fa0cb5..1fa36eb 100644
--- a/include/linux/percpu-defs.h
+++ b/include/linux/percpu-defs.h
@@ -19,6 +19,16 @@
 	__attribute__((section(".discard"), unused))
 
 /*
+ * Macro which verifies @ptr is a percpu pointer without evaluating
+ * @ptr.  This is to be used in percpu accessors to verify that the
+ * input parameter is a percpu pointer.
+ */
+#define __verify_pcpu_ptr(ptr)	do {					\
+	void __percpu *__vpp_verify = (typeof(ptr))NULL;		\
+	(void)__vpp_verify;						\
+} while (0)
+
+/*
  * s390 and alpha modules require percpu variables to be defined as
  * weak to force the compiler to generate GOT based external
  * references for them.  This is necessary because percpu sections
@@ -129,10 +139,16 @@
 	__aligned(PAGE_SIZE)
 
 /*
- * Intermodule exports for per-CPU variables.
+ * Intermodule exports for per-CPU variables.  sparse forgets about
+ * address space across EXPORT_SYMBOL(), change EXPORT_SYMBOL() to
+ * noop if __CHECKER__.
  */
+#ifndef __CHECKER__
 #define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(var)
 #define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(var)
-
+#else
+#define EXPORT_PER_CPU_SYMBOL(var)
+#define EXPORT_PER_CPU_SYMBOL_GPL(var)
+#endif
 
 #endif /* _LINUX_PERCPU_DEFS_H */
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 2c0d31a..42878f0 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -237,6 +237,7 @@ extern void __bad_size_call_parameter(void);
 
 #define __pcpu_size_call_return(stem, variable)				\
 ({	typeof(variable) pscr_ret__;					\
+	__verify_pcpu_ptr(&(variable));					\
 	switch(sizeof(variable)) {					\
 	case 1: pscr_ret__ = stem##1(variable);break;			\
 	case 2: pscr_ret__ = stem##2(variable);break;			\
@@ -250,6 +251,7 @@ extern void __bad_size_call_parameter(void);
 
 #define __pcpu_size_call(stem, variable, ...)				\
 do {									\
+	__verify_pcpu_ptr(&(variable));					\
 	switch(sizeof(variable)) {					\
 		case 1: stem##1(variable, __VA_ARGS__);break;		\
 		case 2: stem##2(variable, __VA_ARGS__);break;		\
-- 
1.6.4.2


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH 03/16] percpu: remove some sparse warnings
  2009-10-14  6:01 ` [PATCH 03/16] percpu: remove some sparse warnings Tejun Heo
@ 2009-10-14 14:20   ` Christoph Lameter
  0 siblings, 0 replies; 47+ messages in thread
From: Christoph Lameter @ 2009-10-14 14:20 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, rusty, mingo, tglx, akpm, rostedt, hpa, cebbert



Reviewed-by: Christoph Lameter <cl@linux-foundation.org>



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14  6:02 ` [PATCH 13/16] percpu: remove per_cpu__ prefix Tejun Heo
@ 2009-10-14 14:36   ` Christoph Lameter
  2009-10-14 16:42     ` Luck, Tony
  2009-10-15  9:24     ` Tejun Heo
  0 siblings, 2 replies; 47+ messages in thread
From: Christoph Lameter @ 2009-10-14 14:36 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, rusty, mingo, Thomas Gleixner, akpm, rostedt, hpa,
	cebbert, tony.luck

On Wed, 14 Oct 2009, Tejun Heo wrote:

> @@ -39,7 +39,7 @@ extern void *per_cpu_init(void);
>   * On the positive side, using __ia64_per_cpu_var() instead of __get_cpu_var() is slightly
>   * more efficient.
>   */
> -#define __ia64_per_cpu_var(var)	per_cpu__##var
> +#define __ia64_per_cpu_var(var)	var

IA64 could completely drop the macro? Tony?

> diff --git a/arch/microblaze/include/asm/entry.h b/arch/microblaze/include/asm/entry.h
> index 61abbd2..ec89f2a 100644
> --- a/arch/microblaze/include/asm/entry.h
> +++ b/arch/microblaze/include/asm/entry.h
> @@ -21,7 +21,7 @@
>   * places
>   */
>
> -#define PER_CPU(var) per_cpu__##var
> +#define PER_CPU(var) var

Microblaze too.

> +#define PER_CPU(var, reg)	__percpu_mov_op $var, reg
> +#define PER_CPU_VAR(var)	var

Drop X86 PER_CPU_VAR

> -#define percpu_read(var)	percpu_from_op("mov", per_cpu__##var,	\
> -					       "m" (per_cpu__##var))
> -#define percpu_read_stable(var)	percpu_from_op("mov", per_cpu__##var,	\
> -					       "p" (&per_cpu__##var))
> -#define percpu_write(var, val)	percpu_to_op("mov", per_cpu__##var, val)
> -#define percpu_add(var, val)	percpu_to_op("add", per_cpu__##var, val)
> -#define percpu_sub(var, val)	percpu_to_op("sub", per_cpu__##var, val)
> -#define percpu_and(var, val)	percpu_to_op("and", per_cpu__##var, val)
> -#define percpu_or(var, val)	percpu_to_op("or", per_cpu__##var, val)
> -#define percpu_xor(var, val)	percpu_to_op("xor", per_cpu__##var, val)
> +#define percpu_read(var)		percpu_from_op("mov", var, "m" (var))
> +#define percpu_read_stable(var)		percpu_from_op("mov", var, "p" (&(var)))
> +#define percpu_write(var, val)		percpu_to_op("mov", var, val)
> +#define percpu_add(var, val)		percpu_to_op("add", var, val)
> +#define percpu_sub(var, val)		percpu_to_op("sub", var, val)
> +#define percpu_and(var, val)		percpu_to_op("and", var, val)
> +#define percpu_or(var, val)		percpu_to_op("or", var, val)
> +#define percpu_xor(var, val)		percpu_to_op("xor", var, val)

The percpu_xx definitions are now equal to __this_cpu_xx(). They could be
dropped for the core.

> diff --git a/include/asm-generic/percpu.h b/include/asm-generic/percpu.h
> index 8087b90..ca6f049 100644
> --- a/include/asm-generic/percpu.h
> +++ b/include/asm-generic/percpu.h
> @@ -50,11 +50,11 @@ extern unsigned long __per_cpu_offset[NR_CPUS];
>   * offset.
>   */
>  #define per_cpu(var, cpu) \
> -	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), per_cpu_offset(cpu)))
> +	(*SHIFT_PERCPU_PTR(&(var), per_cpu_offset(cpu)))

>  #define __get_cpu_var(var) \
> -	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), my_cpu_offset))
> +	(*SHIFT_PERCPU_PTR(&(var), my_cpu_offset))

== this_cpu_read(var) or this_cpu_write(var, value)


>  #define __raw_get_cpu_var(var) \
> -	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), __my_cpu_offset))
> +	(*SHIFT_PERCPU_PTR(&(var), __my_cpu_offset))

== __this_cpu_read() or this_cpu_write(var, value)

__raw? Combination of __ and raw? Can we clearly define what each means?

> --- a/include/linux/percpu.h
> +++ b/include/linux/percpu.h
> @@ -182,7 +182,7 @@ static inline void *pcpu_lpage_remapped(void *kaddr)
>  #ifndef percpu_read
>  # define percpu_read(var)						\
>    ({									\
> -	typeof(per_cpu_var(var)) __tmp_var__;				\
> +	typeof(var) __tmp_var__;					\
>  	__tmp_var__ = get_cpu_var(var);					\
>  	put_cpu_var(var);						\
>  	__tmp_var__;							\

== this_cpu_read(var)


Great work. There is lots more possible cleanup work that could be done
after this patch has merged.

Reviewed-by: Christoph Lameter <cl@linux-foundation.org>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 14/16] percpu: make access macros universal
  2009-10-14  6:02 ` [PATCH 14/16] percpu: make access macros universal Tejun Heo
@ 2009-10-14 14:38   ` Christoph Lameter
  2009-10-15  9:27     ` Tejun Heo
  0 siblings, 1 reply; 47+ messages in thread
From: Christoph Lameter @ 2009-10-14 14:38 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, rusty, mingo, tglx, akpm, rostedt, hpa, cebbert

On Wed, 14 Oct 2009, Tejun Heo wrote:

> @@ -182,17 +185,19 @@ static inline void *pcpu_lpage_remapped(void *kaddr)
>  #ifndef percpu_read
>  # define percpu_read(var)						\
>    ({									\
> -	typeof(var) __tmp_var__;					\
> -	__tmp_var__ = get_cpu_var(var);					\
> -	put_cpu_var(var);						\
> -	__tmp_var__;							\
> +	typeof(var) *pr_ptr__ = &(var);					\
> +	typeof(var) pr_ret__;						\
> +	pr_ret__ = get_cpu_var(*pr_ptr__);				\
> +	put_cpu_var(*pr_ptr__);						\
> +	pr_ret__;							\
>    })
>  #endif

== this_cpu_read(var) ?


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 16/16] percpu: make accessors check for percpu pointer in sparse
  2009-10-14  6:02 ` [PATCH 16/16] percpu: make accessors check for percpu pointer in sparse Tejun Heo
@ 2009-10-14 14:41   ` Christoph Lameter
  2009-10-15  9:08     ` Tejun Heo
  0 siblings, 1 reply; 47+ messages in thread
From: Christoph Lameter @ 2009-10-14 14:41 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, rusty, mingo, tglx, akpm, rostedt, hpa, cebbert, Al Viro

On Wed, 14 Oct 2009, Tejun Heo wrote:

>  #ifndef SHIFT_PERCPU_PTR
>  /* Weird cast keeps both GCC and sparse happy. */
> -#define SHIFT_PERCPU_PTR(__p, __offset)				\
> -	RELOC_HIDE((typeof(*(__p)) __kernel __force *)(__p), (__offset))
> +#define SHIFT_PERCPU_PTR(__p, __offset)	({				\
> +	__verify_pcpu_ptr((__p));					\
> +	RELOC_HIDE((typeof(*(__p)) __kernel __force *)(__p), (__offset)); \
> +})

If you have the verification in SHIFT_PERCPU_PTR then why do you need it
elsewhere?

>  #define __pcpu_size_call_return(stem, variable)				\
>  ({	typeof(variable) pscr_ret__;					\
> +	__verify_pcpu_ptr(&(variable));					\
>  	switch(sizeof(variable)) {					\
>  	case 1: pscr_ret__ = stem##1(variable);break;			\
>  	case 2: pscr_ret__ = stem##2(variable);break;			\
> @@ -250,6 +251,7 @@ extern void __bad_size_call_parameter(void);
>
>  #define __pcpu_size_call(stem, variable, ...)				\
>  do {									\
> +	__verify_pcpu_ptr(&(variable));					\
>  	switch(sizeof(variable)) {					\
>  		case 1: stem##1(variable, __VA_ARGS__);break;		\
>  		case 2: stem##2(variable, __VA_ARGS__);break;		\

Would it not be better to put the verification in the arch code?  The
percpu_to/from_op may have multiple callsites (at least they do now).  If
you put it in there then all the other stuff is covered.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/16] vmalloc: fix use of non-existent percpu variable in put_cpu_var()
  2009-10-14  6:01 ` [PATCH 01/16] vmalloc: fix use of non-existent percpu variable in put_cpu_var() Tejun Heo
@ 2009-10-14 15:06   ` Christoph Lameter
  2009-10-15  9:10     ` Tejun Heo
  0 siblings, 1 reply; 47+ messages in thread
From: Christoph Lameter @ 2009-10-14 15:06 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, rusty, mingo, tglx, akpm, rostedt, hpa, cebbert,
	Nick Piggin

This looks like a candidate for 2.6.32.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14 14:36   ` Christoph Lameter
@ 2009-10-14 16:42     ` Luck, Tony
  2009-10-14 17:38       ` H. Peter Anvin
  2009-10-14 18:22       ` Christoph Lameter
  2009-10-15  9:24     ` Tejun Heo
  1 sibling, 2 replies; 47+ messages in thread
From: Luck, Tony @ 2009-10-14 16:42 UTC (permalink / raw)
  To: Christoph Lameter, Tejun Heo
  Cc: linux-kernel, rusty, mingo, Thomas Gleixner, akpm, rostedt, hpa, cebbert

>> -#define __ia64_per_cpu_var(var)	per_cpu__##var
>> +#define __ia64_per_cpu_var(var)	var
>
> IA64 could completely drop the macro? Tony?

A #define that just returns its original argument untouched
does seem to be a no-op.  So I suppose we could just fix
the dozen or so places where it is used to just use the
variable directly.

But that would leave no visual indicator in the source
code that a per-cpu variable was being used.  E.g. in
delayed_tlb_flush() we'd end up with:

   if (unlikely(ia64_need_tlb_flush)) {
       spin_lock ...
       if (ia64_need_tlb_flush) {
           local_flush_tlb_all();
           ia64_need_tlb_flush = 0;
       }
       spin_unlock ...
   }

This might cause confusion to anyone who is looking at
this code and has let the

DECLARE_PER_CPU(u8, ia64_need_tlb_flush);

scroll off the top of their screen.  I'd be tempted to go
and change all the names to make this obvious:

DECLARE_PER_CPU(u8, PERCPU_ia64_need_tlb_flush);


-Tony

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14 16:42     ` Luck, Tony
@ 2009-10-14 17:38       ` H. Peter Anvin
  2009-10-14 18:26         ` Christoph Lameter
  2009-10-15  8:57         ` Tejun Heo
  2009-10-14 18:22       ` Christoph Lameter
  1 sibling, 2 replies; 47+ messages in thread
From: H. Peter Anvin @ 2009-10-14 17:38 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Christoph Lameter, Tejun Heo, linux-kernel, rusty, mingo,
	Thomas Gleixner, akpm, rostedt, cebbert

On 10/14/2009 09:42 AM, Luck, Tony wrote:
>>> -#define __ia64_per_cpu_var(var)	per_cpu__##var
>>> +#define __ia64_per_cpu_var(var)	var
>>
>> IA64 could completely drop the macro? Tony?
> 
> A #define that just returns its original argument untouched
> does seem to be a no-op.  So I suppose we could just fix
> the dozen or so places where it is used to just use the
> variable directly.
> 

Okay... I also don't seem to understand the more fundamental issue here,
which is:

Why are we dropping the prefix?

It may be "insufficient", but at least it stands out like a sore thumb
and makes mistakes harder.  It would be a different thing if we could
actually use the TLS ABI, but we really can't.

	-hpa

^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14 16:42     ` Luck, Tony
  2009-10-14 17:38       ` H. Peter Anvin
@ 2009-10-14 18:22       ` Christoph Lameter
  2009-10-14 18:36         ` Luck, Tony
  1 sibling, 1 reply; 47+ messages in thread
From: Christoph Lameter @ 2009-10-14 18:22 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Tejun Heo, linux-kernel, rusty, mingo, Thomas Gleixner, akpm,
	rostedt, hpa, cebbert

On Wed, 14 Oct 2009, Luck, Tony wrote:

> But that would leave no visual indicator in the source
> code that a per-cpu variable was being used.  E.g. in
> delayed_tlb_flush() we'd end up with:

we would still have to use per cpu operations to get to the contents of
these variables.
>
>    if (unlikely(ia64_need_tlb_flush)) {

     if (unlikely(this_cpu_read(ia64_need_tlb_flush)))
>        spin_lock ...
>        if (ia64_need_tlb_flush) {

	ditto.

>            local_flush_tlb_all();
>            ia64_need_tlb_flush = 0;

		this_cpu_write(ia64_need_tlb_flush, 0);
>        }
>        spin_unlock ...
>    }
>
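
So, put together, the whole function would look something like this
(a sketch; locking elided as in your snippet):

	if (unlikely(this_cpu_read(ia64_need_tlb_flush))) {
		spin_lock ...
		if (this_cpu_read(ia64_need_tlb_flush)) {
			local_flush_tlb_all();
			this_cpu_write(ia64_need_tlb_flush, 0);
		}
		spin_unlock ...
	}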

Hope that addresses your concerns.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14 17:38       ` H. Peter Anvin
@ 2009-10-14 18:26         ` Christoph Lameter
  2009-10-15  8:57         ` Tejun Heo
  1 sibling, 0 replies; 47+ messages in thread
From: Christoph Lameter @ 2009-10-14 18:26 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Luck, Tony, Tejun Heo, linux-kernel, rusty, mingo,
	Thomas Gleixner, akpm, rostedt, cebbert

On Wed, 14 Oct 2009, H. Peter Anvin wrote:

> Okay... I also don't seem to understand the more fundamental issue here,
> which is:
>
> Why are we dropping the prefix?

That has been answered by Tejun elsewhere.  It simplifies macros in many
places, makes the treatment of dynamic and static per cpu variables
uniform, solves an issue on S/390, etc.

> It may be "insufficient", but at least it stands out like a sore thumb
> and makes mistakes harder.  It would be a different thing if we could
> actually use the TLS ABI, but we really can't.

Sparse annotations will be used to detect these issues.  Also, per cpu
variables are generally used as parameters to per cpu operations; they
only work in the context of specifically designed macros.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14 18:22       ` Christoph Lameter
@ 2009-10-14 18:36         ` Luck, Tony
  2009-10-14 18:51           ` Christoph Lameter
  0 siblings, 1 reply; 47+ messages in thread
From: Luck, Tony @ 2009-10-14 18:36 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, linux-kernel, rusty, mingo, Thomas Gleixner, akpm,
	rostedt, hpa, cebbert

> we would still have to use per cpu operations to get to the contents of
> these variables.

That's good.

> Hope that addresses your concerns.

But then I don't understand the original patch that was going to do:

> -#define __ia64_per_cpu_var(var)	per_cpu__##var
> +#define __ia64_per_cpu_var(var)	var

Presumably all actual use of __ia64_per_cpu_var is being replaced
by some other "per cpu operations"?

-Tony


^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14 18:36         ` Luck, Tony
@ 2009-10-14 18:51           ` Christoph Lameter
  2009-10-15  8:51             ` Tejun Heo
  0 siblings, 1 reply; 47+ messages in thread
From: Christoph Lameter @ 2009-10-14 18:51 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Tejun Heo, linux-kernel, rusty, mingo, Thomas Gleixner, akpm,
	rostedt, hpa, cebbert

On Wed, 14 Oct 2009, Luck, Tony wrote:

> > we would still have to use per cpu operations to get to the contents of
> > these variables.
>
> That's good.
>
> > Hope that addresses your concerns.
>
> But then I don't understand the original patch that was going to do:
>
> > -#define __ia64_per_cpu_var(var)	per_cpu__##var
> > +#define __ia64_per_cpu_var(var)	var
>
> Presumably all actual use of __ia64_per_cpu_var is being replaced
> by some other "per cpu operations"?

Hmmm... Right.  IA64 is a special case because an access to a per cpu
variable at a fixed address is relocated by the per cpu TLBs.

Other platforms have to add a cpu specific offset to a variable's
address to get at the right per cpu variable.

As a result IA64 strictly does not need this_cpu_read() and
this_cpu_write(). However, not using the operations is going to cause
the sparse annotation by Tejun to trigger errors. this_cpu_read() is
likely a noop for IA64 that just changes the annotations so that sparse
warnings do not trigger. Tejun?


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14 18:51           ` Christoph Lameter
@ 2009-10-15  8:51             ` Tejun Heo
  2009-10-16 16:23               ` Christoph Lameter
  0 siblings, 1 reply; 47+ messages in thread
From: Tejun Heo @ 2009-10-15  8:51 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Luck, Tony, linux-kernel, rusty, mingo, Thomas Gleixner, akpm,
	rostedt, hpa, cebbert

Hello, Christoph, Tony.

Christoph Lameter wrote:
> On Wed, 14 Oct 2009, Luck, Tony wrote:
> 
>>> we would still have to use per cpu operations to get to the contents of
>>> these variables.
>> That's good.
>>
>>> Hope that addresses your concerns.
>> But then I don't understand the original patch that was going to do:
>>
>>> -#define __ia64_per_cpu_var(var)	per_cpu__##var
>>> +#define __ia64_per_cpu_var(var)	var
>> Presumably all actual use of __ia64_per_cpu_var is being replaced
>> by some other "per cpu operations"?

Yeah, I wasn't so sure what to do about these, so I just left them as
no-ops for the moment.

> Hmmm... Right.  IA64 is a special case because an access to a per cpu
> variable at a fixed address is relocated by the per cpu TLBs.
> 
> Other platforms have to add a cpu specific offset to a variable's
> address to get at the right per cpu variable.
> 
> As a result IA64 strictly does not need this_cpu_read() and
> this_cpu_write(). However, not using the operations is going to cause
> the sparse annotation by Tejun to trigger errors. this_cpu_read() is
> likely a noop for IA64 that just changes the annotations so that sparse
> warnings do not trigger. Tejun?

__ia64_per_cpu_var() is slightly different from this_cpu_read() in
that the former only works for percpu variables which fall on the
first percpu page - ie. static percpu vars in the kernel image.  So,
__ia64_per_cpu_var() is a bit more efficient than this_cpu_read() as
it doesn't have to compute the address.

So, what we can do is to leave the macro alone for now and then add
sparse annotation to it later in the patch as it's basically another
specialized accessor.  How does that sound?
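
For example, something like the following might work (an untested
sketch reusing __verify_pcpu_ptr() and the __kernel __force cast from
this series):

	#define __ia64_per_cpu_var(var)	(*({				\
		__verify_pcpu_ptr(&(var));				\
		((typeof(var) __kernel __force *)&(var));		\
	}))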

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14 17:38       ` H. Peter Anvin
  2009-10-14 18:26         ` Christoph Lameter
@ 2009-10-15  8:57         ` Tejun Heo
  1 sibling, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-15  8:57 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Luck, Tony, Christoph Lameter, linux-kernel, rusty, mingo,
	Thomas Gleixner, akpm, rostedt, cebbert

Hello,

H. Peter Anvin wrote:
> Okay... I also don't seem to understand the more fundamental issue here,
> which is:
> 
> Why are we dropping the prefix?
> 
> It may be "insufficient", but at least it stands out like a sore thumb
> and makes mistakes harder.  It would be a different thing if we could
> actually use the TLS ABI, but we really can't.

The reason for actively removing the prefix is that it forces us to
have two different accessors for static and dynamic percpu variables,
which is made worse by the fact that any accessor which accepts
dynamic ones will accept anything (static ones wrapped in
per_cpu_var(), dynamic ones, plain wrong random values).  Also,
widespread use of per_cpu_var() in generic code would basically
circumvent any meaningful protection the prefix provided.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 16/16] percpu: make accessors check for percpu pointer in sparse
  2009-10-14 14:41   ` Christoph Lameter
@ 2009-10-15  9:08     ` Tejun Heo
  0 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-15  9:08 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, rusty, mingo, tglx, akpm, rostedt, hpa, cebbert, Al Viro

Christoph Lameter wrote:
> On Wed, 14 Oct 2009, Tejun Heo wrote:
> 
>>  #ifndef SHIFT_PERCPU_PTR
>>  /* Weird cast keeps both GCC and sparse happy. */
>> -#define SHIFT_PERCPU_PTR(__p, __offset)				\
>> -	RELOC_HIDE((typeof(*(__p)) __kernel __force *)(__p), (__offset))
>> +#define SHIFT_PERCPU_PTR(__p, __offset)	({				\
>> +	__verify_pcpu_ptr((__p));					\
>> +	RELOC_HIDE((typeof(*(__p)) __kernel __force *)(__p), (__offset)); \
>> +})
> 
> If you have the verification in SHIFT_PERCPU_PTR then why do you need it
> elsewhere?

Because this_cpu_*() macros might not calculate addresses using
SHIFT_PERCPU_PTR().
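
On x86, for example, they expand to segment-prefixed asm through
percpu_to_op()/percpu_from_op() - the same mechanism percpu_read()
from patch 13 uses - so the address never goes through
SHIFT_PERCPU_PTR():

	#define percpu_read(var)	percpu_from_op("mov", var, "m" (var))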

>>  #define __pcpu_size_call_return(stem, variable)				\
>>  ({	typeof(variable) pscr_ret__;					\
>> +	__verify_pcpu_ptr(&(variable));					\
>>  	switch(sizeof(variable)) {					\
>>  	case 1: pscr_ret__ = stem##1(variable);break;			\
>>  	case 2: pscr_ret__ = stem##2(variable);break;			\
>> @@ -250,6 +251,7 @@ extern void __bad_size_call_parameter(void);
>>
>>  #define __pcpu_size_call(stem, variable, ...)				\
>>  do {									\
>> +	__verify_pcpu_ptr(&(variable));					\
>>  	switch(sizeof(variable)) {					\
>>  		case 1: stem##1(variable, __VA_ARGS__);break;		\
>>  		case 2: stem##2(variable, __VA_ARGS__);break;		\
> 
> Would it not be better to put the verification in the arch code?  The
> percpu_to/from_op may have multiple callsites (at least they do now).  If
> you put it in there then all the other stuff is covered.

I don't know.  The way these ops are defined, adding
__verify_pcpu_ptr() to the size_call macros reliably covers all percpu
cases, and I much prefer things like this being done in generic code
rather than requiring each arch to do it.  It's just more reliable
this way.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 01/16] vmalloc: fix use of non-existent percpu variable in put_cpu_var()
  2009-10-14 15:06   ` Christoph Lameter
@ 2009-10-15  9:10     ` Tejun Heo
  0 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-15  9:10 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, rusty, mingo, tglx, akpm, rostedt, hpa, cebbert,
	Nick Piggin

Christoph Lameter wrote:
> This looks like a candidate for 2.6.32.

Yeah, this could go into 2.6.32 too, but it's only exposed by the
percpu changes in percpu#for-next, so putting it together with the
related percpu changes wouldn't be too bad either.  Does anyone feel
strongly about this going in for 2.6.32?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-14 14:36   ` Christoph Lameter
  2009-10-14 16:42     ` Luck, Tony
@ 2009-10-15  9:24     ` Tejun Heo
  2009-10-16  6:04       ` Michal Simek
  2009-10-19 13:40       ` Michal Simek
  1 sibling, 2 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-15  9:24 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, rusty, mingo, Thomas Gleixner, akpm, rostedt, hpa,
	cebbert, tony.luck, Michal Simek

(microblaze maintainer cc'd, hello)

Christoph Lameter wrote:
> On Wed, 14 Oct 2009, Tejun Heo wrote:
> 
>> @@ -39,7 +39,7 @@ extern void *per_cpu_init(void);
>>   * On the positive side, using __ia64_per_cpu_var() instead of __get_cpu_var() is slightly
>>   * more efficient.
>>   */
>> -#define __ia64_per_cpu_var(var)	per_cpu__##var
>> +#define __ia64_per_cpu_var(var)	var
> 
> IA64 could completely drop the macro? Tony?

It's being discussed, but I think we should just add a sparse
annotation there instead.

>> diff --git a/arch/microblaze/include/asm/entry.h b/arch/microblaze/include/asm/entry.h
>> index 61abbd2..ec89f2a 100644
>> --- a/arch/microblaze/include/asm/entry.h
>> +++ b/arch/microblaze/include/asm/entry.h
>> @@ -21,7 +21,7 @@
>>   * places
>>   */
>>
>> -#define PER_CPU(var) per_cpu__##var
>> +#define PER_CPU(var) var
> 
> Microblaze too.

This macro is used only in assembly code, which wouldn't be covered by
sparse, so in this case the patch series actually removes protection;
that's why I wasn't too sure about ripping the macro out.  Any ideas
what we can do here?  Just kill it?

>> +#define PER_CPU(var, reg)	__percpu_mov_op $var, reg
>> +#define PER_CPU_VAR(var)	var
> 
> Drop X86 PER_CPU_VAR

No can do.  The SMP variant isn't a null op.

>> -#define percpu_read(var)	percpu_from_op("mov", per_cpu__##var,	\
>> -					       "m" (per_cpu__##var))
>> -#define percpu_read_stable(var)	percpu_from_op("mov", per_cpu__##var,	\
>> -					       "p" (&per_cpu__##var))
>> -#define percpu_write(var, val)	percpu_to_op("mov", per_cpu__##var, val)
>> -#define percpu_add(var, val)	percpu_to_op("add", per_cpu__##var, val)
>> -#define percpu_sub(var, val)	percpu_to_op("sub", per_cpu__##var, val)
>> -#define percpu_and(var, val)	percpu_to_op("and", per_cpu__##var, val)
>> -#define percpu_or(var, val)	percpu_to_op("or", per_cpu__##var, val)
>> -#define percpu_xor(var, val)	percpu_to_op("xor", per_cpu__##var, val)
>> +#define percpu_read(var)		percpu_from_op("mov", var, "m" (var))
>> +#define percpu_read_stable(var)		percpu_from_op("mov", var, "p" (&(var)))
>> +#define percpu_write(var, val)		percpu_to_op("mov", var, val)
>> +#define percpu_add(var, val)		percpu_to_op("add", var, val)
>> +#define percpu_sub(var, val)		percpu_to_op("sub", var, val)
>> +#define percpu_and(var, val)		percpu_to_op("and", var, val)
>> +#define percpu_or(var, val)		percpu_to_op("or", var, val)
>> +#define percpu_xor(var, val)		percpu_to_op("xor", var, val)
> 
> The percpu_xx definitions are now equal to __this_cpu_xx(). They could be
> dropped for the core.

Yeap, will do so with further patches.

>>  #define __get_cpu_var(var) \
>> -	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), my_cpu_offset))
>> +	(*SHIFT_PERCPU_PTR(&(var), my_cpu_offset))
> 
> == this_cpu_read(var) or this_cpu_write(var, value)
> 
>>  #define __raw_get_cpu_var(var) \
>> -	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), __my_cpu_offset))
>> +	(*SHIFT_PERCPU_PTR(&(var), __my_cpu_offset))
> 
> == __this_cpu_read() or this_cpu_write(var, value)
> 
> __raw? Combination of __ and raw? Can we clearly define what each means?
>
>> -	typeof(per_cpu_var(var)) __tmp_var__;				\
>> +	typeof(var) __tmp_var__;					\
>>  	__tmp_var__ = get_cpu_var(var);					\
>>  	put_cpu_var(var);						\
>>  	__tmp_var__;							\
> 
> == this_cpu_read(var)

For all of the above comments, yeap, we definitely need to clean all
these up, but let's do that once the sparse annotation is working.

> Great work. There is lots more possible cleanup work that could be done
> after this patch has merged.
> 
> Reviewed-by: Christoph Lameter <cl@linux-foundation.org>

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 14/16] percpu: make access macros universal
  2009-10-14 14:38   ` Christoph Lameter
@ 2009-10-15  9:27     ` Tejun Heo
  0 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-15  9:27 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, rusty, mingo, tglx, akpm, rostedt, hpa, cebbert

Christoph Lameter wrote:
> On Wed, 14 Oct 2009, Tejun Heo wrote:
> 
>> @@ -182,17 +185,19 @@ static inline void *pcpu_lpage_remapped(void *kaddr)
>>  #ifndef percpu_read
>>  # define percpu_read(var)						\
>>    ({									\
>> -	typeof(var) __tmp_var__;					\
>> -	__tmp_var__ = get_cpu_var(var);					\
>> -	put_cpu_var(var);						\
>> -	__tmp_var__;							\
>> +	typeof(var) *pr_ptr__ = &(var);					\
>> +	typeof(var) pr_ret__;						\
>> +	pr_ret__ = get_cpu_var(*pr_ptr__);				\
>> +	put_cpu_var(*pr_ptr__);						\
>> +	pr_ret__;							\
>>    })
>>  #endif
> 
> == this_cpu_read(var) ?

Yeah, all these extra and duplicate accessors need to go away once the
dust around this patchset settles down.  That's the next thing to do.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-15  9:24     ` Tejun Heo
@ 2009-10-16  6:04       ` Michal Simek
  2009-10-18  2:58         ` Tejun Heo
  2009-10-19 13:40       ` Michal Simek
  1 sibling, 1 reply; 47+ messages in thread
From: Michal Simek @ 2009-10-16  6:04 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Christoph Lameter, linux-kernel, rusty, mingo, Thomas Gleixner,
	akpm, rostedt, hpa, cebbert, tony.luck

Tejun Heo wrote:
> (microblaze maintainer cc'd, hello)

Hi,

Where is the git repo with these patches?

Thanks
Michal

> 
> Christoph Lameter wrote:
>> On Wed, 14 Oct 2009, Tejun Heo wrote:
>>
>>> @@ -39,7 +39,7 @@ extern void *per_cpu_init(void);
>>>   * On the positive side, using __ia64_per_cpu_var() instead of __get_cpu_var() is slightly
>>>   * more efficient.
>>>   */
>>> -#define __ia64_per_cpu_var(var)	per_cpu__##var
>>> +#define __ia64_per_cpu_var(var)	var
>> IA64 could completely drop the macro? Tony?
> 
> It's being discussed, but I think we should just add a sparse
> annotation there instead.
> 
>>> diff --git a/arch/microblaze/include/asm/entry.h b/arch/microblaze/include/asm/entry.h
>>> index 61abbd2..ec89f2a 100644
>>> --- a/arch/microblaze/include/asm/entry.h
>>> +++ b/arch/microblaze/include/asm/entry.h
>>> @@ -21,7 +21,7 @@
>>>   * places
>>>   */
>>>
>>> -#define PER_CPU(var) per_cpu__##var
>>> +#define PER_CPU(var) var
>> Microblaze too.
> 
> This macro is used only in assembly code, which wouldn't be covered by
> sparse, so in this case the patch series actually removes protection;
> that's why I wasn't too sure about ripping the macro out.  Any ideas
> what we can do here?  Just kill it?
> 
>>> +#define PER_CPU(var, reg)	__percpu_mov_op $var, reg
>>> +#define PER_CPU_VAR(var)	var
>> Drop X86 PER_CPU_VAR
> 
> No can do.  The SMP variant isn't a null op.
> 
>>> -#define percpu_read(var)	percpu_from_op("mov", per_cpu__##var,	\
>>> -					       "m" (per_cpu__##var))
>>> -#define percpu_read_stable(var)	percpu_from_op("mov", per_cpu__##var,	\
>>> -					       "p" (&per_cpu__##var))
>>> -#define percpu_write(var, val)	percpu_to_op("mov", per_cpu__##var, val)
>>> -#define percpu_add(var, val)	percpu_to_op("add", per_cpu__##var, val)
>>> -#define percpu_sub(var, val)	percpu_to_op("sub", per_cpu__##var, val)
>>> -#define percpu_and(var, val)	percpu_to_op("and", per_cpu__##var, val)
>>> -#define percpu_or(var, val)	percpu_to_op("or", per_cpu__##var, val)
>>> -#define percpu_xor(var, val)	percpu_to_op("xor", per_cpu__##var, val)
>>> +#define percpu_read(var)		percpu_from_op("mov", var, "m" (var))
>>> +#define percpu_read_stable(var)		percpu_from_op("mov", var, "p" (&(var)))
>>> +#define percpu_write(var, val)		percpu_to_op("mov", var, val)
>>> +#define percpu_add(var, val)		percpu_to_op("add", var, val)
>>> +#define percpu_sub(var, val)		percpu_to_op("sub", var, val)
>>> +#define percpu_and(var, val)		percpu_to_op("and", var, val)
>>> +#define percpu_or(var, val)		percpu_to_op("or", var, val)
>>> +#define percpu_xor(var, val)		percpu_to_op("xor", var, val)
>> The percpu_xx definitions are now equal to __this_cpu_xx(). They could be
>> dropped for the core.
> 
> Yeap, will do so with further patches.
> 
>>>  #define __get_cpu_var(var) \
>>> -	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), my_cpu_offset))
>>> +	(*SHIFT_PERCPU_PTR(&(var), my_cpu_offset))
>> == this_cpu_read(var) or this_cpu_write(var, value)
>>
>>>  #define __raw_get_cpu_var(var) \
>>> -	(*SHIFT_PERCPU_PTR(&per_cpu_var(var), __my_cpu_offset))
>>> +	(*SHIFT_PERCPU_PTR(&(var), __my_cpu_offset))
>> == __this_cpu_read() or this_cpu_write(var, value)
>>
>> __raw? Combination of __ and raw? Can we clearly define what each means?
>>
>>> -	typeof(per_cpu_var(var)) __tmp_var__;				\
>>> +	typeof(var) __tmp_var__;					\
>>>  	__tmp_var__ = get_cpu_var(var);					\
>>>  	put_cpu_var(var);						\
>>>  	__tmp_var__;							\
>> == this_cpu_read(var)
> 
> For all of the above comments, yeap, we definitely need to clean all
> these up, but let's do that once the sparse annotation is working.
> 
>> Great work. There is lots more possible cleanup work that could be done
>> after this patch has merged.
>>
>> Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
> 
> Thanks.
> 


-- 
Michal Simek, Ing. (M.Eng)
w: www.monstr.eu p: +42-0-721842854
Maintainer of Linux kernel 2.6 Microblaze Linux - http://www.monstr.eu/fdt/
Microblaze U-BOOT custodian

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-15  8:51             ` Tejun Heo
@ 2009-10-16 16:23               ` Christoph Lameter
  0 siblings, 0 replies; 47+ messages in thread
From: Christoph Lameter @ 2009-10-16 16:23 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Luck, Tony, linux-kernel, rusty, mingo, Thomas Gleixner, akpm,
	rostedt, hpa, cebbert

On Thu, 15 Oct 2009, Tejun Heo wrote:

> __ia64_per_cpu_var() is slightly different from this_cpu_read() in
> that the former only works for percpu variables which fall on the
> first percpu page - ie. static percpu vars in the kernel image.  So,
> __ia64_per_cpu_var() is a bit more efficient than this_cpu_read() as
> it doesn't have to compute the address.
>
> So, what we can do is to leave the macro alone for now and then add
> sparse annotation to it later in the patch as it's basically another
> specialized accessor.  How does that sound?

Ok. The macro would be arch specific anyways.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-16  6:04       ` Michal Simek
@ 2009-10-18  2:58         ` Tejun Heo
  2009-10-19 13:41           ` Michal Simek
  0 siblings, 1 reply; 47+ messages in thread
From: Tejun Heo @ 2009-10-18  2:58 UTC (permalink / raw)
  To: monstr
  Cc: Christoph Lameter, linux-kernel, rusty, mingo, Thomas Gleixner,
	akpm, rostedt, hpa, cebbert, tony.luck

Hello.  Sorry about the delay.

Michal Simek wrote:
> Where is the git repo with these patches?

The patches are available in the following git tree.

 git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu.git unified-symbols

and the original thread at

 http://thread.gmane.org/gmane.linux.kernel/902383

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-15  9:24     ` Tejun Heo
  2009-10-16  6:04       ` Michal Simek
@ 2009-10-19 13:40       ` Michal Simek
  2009-10-29 12:06         ` Tejun Heo
  1 sibling, 1 reply; 47+ messages in thread
From: Michal Simek @ 2009-10-19 13:40 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Christoph Lameter, linux-kernel, rusty, mingo, Thomas Gleixner,
	akpm, rostedt, hpa, cebbert, tony.luck

Tejun Heo wrote:
> (microblaze maintainer cc'd, hello)
> 
> Christoph Lameter wrote:
>> On Wed, 14 Oct 2009, Tejun Heo wrote:
>>
>>> @@ -39,7 +39,7 @@ extern void *per_cpu_init(void);
>>>   * On the positive side, using __ia64_per_cpu_var() instead of __get_cpu_var() is slightly
>>>   * more efficient.
>>>   */
>>> -#define __ia64_per_cpu_var(var)	per_cpu__##var
>>> +#define __ia64_per_cpu_var(var)	var
>> IA64 could completely drop the macro? Tony?
> 
> It's being discussed, but I think we should just add a sparse
> annotation there instead.
> 
>>> diff --git a/arch/microblaze/include/asm/entry.h b/arch/microblaze/include/asm/entry.h
>>> index 61abbd2..ec89f2a 100644
>>> --- a/arch/microblaze/include/asm/entry.h
>>> +++ b/arch/microblaze/include/asm/entry.h
>>> @@ -21,7 +21,7 @@
>>>   * places
>>>   */
>>>
>>> -#define PER_CPU(var) per_cpu__##var
>>> +#define PER_CPU(var) var
>> Microblaze too.
> 
> This macro is used only in assembly code, which wouldn't be covered by
> sparse, so in this case the patch series actually removes protection;
> that's why I wasn't too sure about ripping the macro out.  Any ideas
> what we can do here?  Just kill it?

If I understand correctly, the functionality will be the same.  But
anyway, I would not like to lose the information about per_cpu
variables, so please keep that macro for Microblaze.  If that is a
problem, you should convince me why to do it.

Thanks,
Michal






-- 
Michal Simek, Ing. (M.Eng)
w: www.monstr.eu p: +42-0-721842854
Maintainer of Linux kernel 2.6 Microblaze Linux - http://www.monstr.eu/fdt/
Microblaze U-BOOT custodian

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-18  2:58         ` Tejun Heo
@ 2009-10-19 13:41           ` Michal Simek
  2009-10-29 11:11             ` Tejun Heo
  0 siblings, 1 reply; 47+ messages in thread
From: Michal Simek @ 2009-10-19 13:41 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Christoph Lameter, linux-kernel, rusty, mingo, Thomas Gleixner,
	akpm, rostedt, hpa, cebbert, tony.luck

Tejun Heo wrote:
> Hello.  Sorry about the delay.

No worries about it.

> 
> Michal Simek wrote:
>> Where is the git repo with these patches?
> 
> The patches are available in the following git tree.
> 
>  git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu.git unified-symbols
> 
> and the original thread at
> 
>  http://thread.gmane.org/gmane.linux.kernel/902383

I compiled it; there is no problem.
I hope you will add these patches through linux-next, which I test every day.

Thanks,
Michal


> 
> Thanks.
> 


-- 
Michal Simek, Ing. (M.Eng)
w: www.monstr.eu p: +42-0-721842854
Maintainer of Linux kernel 2.6 Microblaze Linux - http://www.monstr.eu/fdt/
Microblaze U-BOOT custodian

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 10/16] percpu: make percpu symbols in powerpc unique
  2009-10-14  6:01   ` Tejun Heo
@ 2009-10-27  3:19     ` Benjamin Herrenschmidt
  -1 siblings, 0 replies; 47+ messages in thread
From: Benjamin Herrenschmidt @ 2009-10-27  3:19 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa,
	cebbert, Paul Mackerras, linuxppc-dev

On Wed, 2009-10-14 at 15:01 +0900, Tejun Heo wrote:
> This patch updates percpu related symbols in powerpc such that percpu
> symbols are unique and don't clash with local symbols.  This serves
> two purposes of decreasing the possibility of global percpu symbol
> collision and allowing dropping per_cpu__ prefix from percpu symbols.
> 
> * arch/powerpc/kernel/perf_callchain.c: s/callchain/cpu_perf_callchain/
> 
> * arch/powerpc/kernel/setup-common.c: s/pvr/cpu_pvr/
> 
> * arch/powerpc/platforms/pseries/dtl.c: s/dtl/cpu_dtl/
> 
> * arch/powerpc/platforms/cell/interrupt.c: s/iic/cpu_iic/
> 
> Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
> which cause name clashes" patch.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
> Cc: Rusty Russell <rusty@rustcorp.com.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Cheers,
Ben.



^ permalink raw reply	[flat|nested] 47+ messages in thread


* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-19 13:41           ` Michal Simek
@ 2009-10-29 11:11             ` Tejun Heo
  2009-11-02 16:35               ` Michal Simek
  0 siblings, 1 reply; 47+ messages in thread
From: Tejun Heo @ 2009-10-29 11:11 UTC (permalink / raw)
  To: monstr
  Cc: Christoph Lameter, linux-kernel, rusty, mingo, Thomas Gleixner,
	akpm, rostedt, hpa, cebbert, tony.luck

Hello, Michal.

Michal Simek wrote:
> I compiled it; there is no problem.
> I hope you will add these patches through linux-next, which I test every day.

Yeap, I'm pushing it out soonish.  Sorry about the delay.  Have been
traveling.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-19 13:40       ` Michal Simek
@ 2009-10-29 12:06         ` Tejun Heo
  0 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-29 12:06 UTC (permalink / raw)
  To: monstr
  Cc: Christoph Lameter, linux-kernel, rusty, mingo, Thomas Gleixner,
	akpm, rostedt, hpa, cebbert, tony.luck

Michal Simek wrote:
> If I understand correctly, the functionality will be the same.  But
> anyway, I would not like to lose the information about per_cpu
> variables, so please keep that macro for Microblaze.  If that is a
> problem, you should convince me why to do it.

It's not a problem at all.  The only thing is that now there's nothing
which enforces usage of that macro in assembly code.  If someone
forgets to use the macro, it will work all the same.  It would be
great if we could fix that somehow, but I don't have much of an idea
ATM.  Anyway, for now, I'll proceed with the current form.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2
  2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
                   ` (15 preceding siblings ...)
  2009-10-14  6:02 ` [PATCH 16/16] percpu: make accessors check for percpu pointer in sparse Tejun Heo
@ 2009-10-29 13:40 ` Tejun Heo
  16 siblings, 0 replies; 47+ messages in thread
From: Tejun Heo @ 2009-10-29 13:40 UTC (permalink / raw)
  To: linux-kernel, rusty, cl, mingo, tglx, akpm, rostedt, hpa, cebbert

Tejun Heo wrote:
> This is expanded version of Rusty's drop per_cpu__ prefix patchset
> which first appeared way back and has been bit-rotting waiting for all
> archs to move to the dynamic percpu allocator.  Now that it's
> complete.  I tried to refresh Rusty's patchset and I ended up with
> this rather large patchset.

Patchset landed on percpu#for-next with acked and reviewed-by's added.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH 13/16] percpu: remove per_cpu__ prefix.
  2009-10-29 11:11             ` Tejun Heo
@ 2009-11-02 16:35               ` Michal Simek
  0 siblings, 0 replies; 47+ messages in thread
From: Michal Simek @ 2009-11-02 16:35 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Christoph Lameter, linux-kernel, rusty, mingo, Thomas Gleixner,
	akpm, rostedt, hpa, cebbert, tony.luck

Tejun Heo wrote:
> Hello, Michal.
> 
> Michal Simek wrote:
>> I compiled it; there is no problem.
>> I hope you will add these patches through linux-next, which I test every day.
> 
> Yeap, I'm pushing it out soonish.  Sorry about the delay.  Have been
> traveling.

No worries about it.

Michal

> 
> Thanks.
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2009-11-02 16:37 UTC | newest]

Thread overview: 47+ messages
2009-10-14  6:01 [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
2009-10-14  6:01 ` [PATCH 01/16] vmalloc: fix use of non-existent percpu variable in put_cpu_var() Tejun Heo
2009-10-14 15:06   ` Christoph Lameter
2009-10-15  9:10     ` Tejun Heo
2009-10-14  6:01 ` [PATCH 02/16] percpu: make alloc_percpu() handle array types Tejun Heo
2009-10-14  6:01 ` [PATCH 03/16] percpu: remove some sparse warnings Tejun Heo
2009-10-14 14:20   ` Christoph Lameter
2009-10-14  6:01 ` [PATCH 04/16] percpu: make percpu symbols under kernel/ and mm/ unique Tejun Heo
2009-10-14  6:01 ` [PATCH 05/16] percpu: make percpu symbols in tracer unique Tejun Heo
2009-10-14  6:01 ` [PATCH 06/16] percpu: make percpu symbols in oprofile unique Tejun Heo
2009-10-14  6:01 ` [PATCH 07/16] percpu: make percpu symbols in cpufreq unique Tejun Heo
2009-10-14  6:01 ` [PATCH 08/16] percpu: make percpu symbols in xen unique Tejun Heo
2009-10-14  6:01 ` [PATCH 09/16] percpu: make percpu symbols in x86 unique Tejun Heo
2009-10-14  6:01 ` [PATCH 10/16] percpu: make percpu symbols in powerpc unique Tejun Heo
2009-10-14  6:01   ` Tejun Heo
2009-10-27  3:19   ` Benjamin Herrenschmidt
2009-10-14  6:02 ` [PATCH 11/16] percpu: make percpu symbols in ia64 unique Tejun Heo
2009-10-14  6:02   ` Tejun Heo
2009-10-14  6:02 ` [PATCH 12/16] percpu: make misc percpu symbols unique Tejun Heo
2009-10-14  6:02 ` [PATCH 13/16] percpu: remove per_cpu__ prefix Tejun Heo
2009-10-14 14:36   ` Christoph Lameter
2009-10-14 16:42     ` Luck, Tony
2009-10-14 17:38       ` H. Peter Anvin
2009-10-14 18:26         ` Christoph Lameter
2009-10-15  8:57         ` Tejun Heo
2009-10-14 18:22       ` Christoph Lameter
2009-10-14 18:36         ` Luck, Tony
2009-10-14 18:51           ` Christoph Lameter
2009-10-15  8:51             ` Tejun Heo
2009-10-16 16:23               ` Christoph Lameter
2009-10-15  9:24     ` Tejun Heo
2009-10-16  6:04       ` Michal Simek
2009-10-18  2:58         ` Tejun Heo
2009-10-19 13:41           ` Michal Simek
2009-10-29 11:11             ` Tejun Heo
2009-11-02 16:35               ` Michal Simek
2009-10-19 13:40       ` Michal Simek
2009-10-29 12:06         ` Tejun Heo
2009-10-14  6:02 ` [PATCH 14/16] percpu: make access macros universal Tejun Heo
2009-10-14 14:38   ` Christoph Lameter
2009-10-15  9:27     ` Tejun Heo
2009-10-14  6:02 ` [PATCH 15/16] percpu: add __percpu for sparse Tejun Heo
2009-10-14  6:02 ` [PATCH 16/16] percpu: make accessors check for percpu pointer in sparse Tejun Heo
2009-10-14 14:41   ` Christoph Lameter
2009-10-15  9:08     ` Tejun Heo
2009-10-29 13:40 ` [RFC percpu#for-next] percpu: drop per_cpu__ prefix and add sparse annotations, take#2 Tejun Heo
