* [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2
@ 2009-06-01  8:58 Tejun Heo
From: Tejun Heo @ 2009-06-01  8:58 UTC
  To: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty

Hello,

Upon ack, please pull from the following git tree.

  git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git tj-percpu

This is the second take of the percpu-convert-most-archs-to-dynamic-percpu
patchset.  Changes from the last take[L] are:

* Rebased on top of tj-percpu-fix-remap.

* As suggested by Rusty Russell, the complex percpu definition macros
  which used dummy variables to guarantee scope and uniqueness have
  been dropped.  Instead, all percpu variables are now required to be
  global; no statics are allowed.  All in-kernel users are converted
  by this patchset.

* Linker script .discard section handling patch no longer necessary
  and dropped.

This patchset contains the following seven patches.

  0001-percpu-use-dynamic-percpu-allocator-as-the-default.patch
  0002-percpu-cleanup-percpu-array-definitions.patch
  0003-percpu-clean-up-percpu-variable-definitions.patch
  0004-percpu-enforce-global-definition.patch
  0005-alpha-kill-unnecessary-__used-attribute-in-PER_CPU_.patch
  0006-alpha-switch-to-dynamic-percpu-allocator.patch
  0007-s390-switch-to-dynamic-percpu-allocator.patch

0001 converts archs which are easy to convert and makes the dynamic
percpu allocator the default.  0002-0003 prepare for the percpu
variable definition change.  0004 enforces global definitions.
0005-0007 convert alpha and s390 to the dynamic percpu allocator
using the weak attribute.

This patchset is on top of

core/percpu (e1b9aa3f47242e757c776a3771bb6613e675bf9c)
+ linus-2.6#master (3218911f839b6c85acbf872ad264ea69aa4d89ad)
+ x86-percpu-fix-pageattr patchset, take#3 [1]

and contains the following changes.

 arch/alpha/include/asm/percpu.h                  |  101 ++---------------------
 arch/alpha/include/asm/tlbflush.h                |    1 
 arch/arm/kernel/smp.c                            |    2 
 arch/arm/mach-realview/localtimer.c              |    2 
 arch/avr32/kernel/cpu.c                          |    2 
 arch/blackfin/mach-common/smp.c                  |    2 
 arch/blackfin/mm/sram-alloc.c                    |   22 ++---
 arch/cris/include/asm/mmu_context.h              |    2 
 arch/ia64/Kconfig                                |    3 
 arch/ia64/kernel/crash.c                         |    2 
 arch/ia64/kernel/smp.c                           |    4 
 arch/ia64/kernel/traps.c                         |    2 
 arch/ia64/kvm/kvm-ia64.c                         |    2 
 arch/ia64/sn/kernel/setup.c                      |    2 
 arch/ia64/xen/irq_xen.c                          |   24 ++---
 arch/mips/kernel/cevt-bcm1480.c                  |    6 -
 arch/mips/kernel/cevt-sb1250.c                   |    6 -
 arch/mips/kernel/topology.c                      |    2 
 arch/mips/sgi-ip27/ip27-timer.c                  |    4 
 arch/parisc/kernel/irq.c                         |    2 
 arch/parisc/kernel/topology.c                    |    2 
 arch/powerpc/Kconfig                             |    3 
 arch/powerpc/kernel/cacheinfo.c                  |    2 
 arch/powerpc/kernel/process.c                    |    2 
 arch/powerpc/kernel/sysfs.c                      |    4 
 arch/powerpc/kernel/time.c                       |    6 -
 arch/powerpc/mm/pgtable.c                        |    2 
 arch/powerpc/mm/stab.c                           |    4 
 arch/powerpc/oprofile/op_model_cell.c            |    2 
 arch/powerpc/platforms/cell/cpufreq_spudemand.c  |    2 
 arch/powerpc/platforms/cell/interrupt.c          |    2 
 arch/powerpc/platforms/ps3/interrupt.c           |    2 
 arch/powerpc/platforms/ps3/smp.c                 |    2 
 arch/powerpc/platforms/pseries/dtl.c             |    2 
 arch/powerpc/platforms/pseries/iommu.c           |    2 
 arch/s390/appldata/appldata_base.c               |    2 
 arch/s390/include/asm/percpu.h                   |   32 +------
 arch/s390/kernel/nmi.c                           |    2 
 arch/s390/kernel/smp.c                           |    2 
 arch/s390/kernel/time.c                          |    4 
 arch/s390/kernel/vtime.c                         |    2 
 arch/sh/kernel/timers/timer-broadcast.c          |    2 
 arch/sh/kernel/topology.c                        |    2 
 arch/sparc/Kconfig                               |    3 
 arch/sparc/kernel/nmi.c                          |    6 -
 arch/sparc/kernel/pci_sun4v.c                    |    2 
 arch/sparc/kernel/sysfs.c                        |    4 
 arch/sparc/kernel/time_64.c                      |    4 
 arch/x86/Kconfig                                 |    3 
 arch/x86/kernel/apic/apic.c                      |    2 
 arch/x86/kernel/apic/nmi.c                       |    8 -
 arch/x86/kernel/cpu/common.c                     |    2 
 arch/x86/kernel/cpu/cpu_debug.c                  |   10 +-
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c       |    4 
 arch/x86/kernel/cpu/cpufreq/powernow-k8.c        |    2 
 arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c |    4 
 arch/x86/kernel/cpu/intel_cacheinfo.c            |    6 -
 arch/x86/kernel/cpu/mcheck/mce_64.c              |    4 
 arch/x86/kernel/cpu/mcheck/mce_amd_64.c          |    4 
 arch/x86/kernel/cpu/mcheck/mce_intel_64.c        |    2 
 arch/x86/kernel/cpu/mcheck/therm_throt.c         |    4 
 arch/x86/kernel/cpu/perfctr-watchdog.c           |    2 
 arch/x86/kernel/ds.c                             |    4 
 arch/x86/kernel/hpet.c                           |    2 
 arch/x86/kernel/irq_32.c                         |    8 -
 arch/x86/kernel/kvm.c                            |    2 
 arch/x86/kernel/kvmclock.c                       |    2 
 arch/x86/kernel/paravirt.c                       |    2 
 arch/x86/kernel/process_64.c                     |    2 
 arch/x86/kernel/smpboot.c                        |    2 
 arch/x86/kernel/tlb_uv.c                         |    6 -
 arch/x86/kernel/topology.c                       |    2 
 arch/x86/kernel/uv_time.c                        |    2 
 arch/x86/kernel/vmiclock_32.c                    |    2 
 arch/x86/kvm/svm.c                               |    2 
 arch/x86/kvm/vmx.c                               |    6 -
 arch/x86/kvm/x86.c                               |    2 
 arch/x86/mm/kmmio.c                              |    2 
 arch/x86/mm/mmio-mod.c                           |    4 
 arch/x86/oprofile/nmi_int.c                      |    4 
 arch/x86/xen/enlighten.c                         |    2 
 arch/x86/xen/multicalls.c                        |    2 
 arch/x86/xen/smp.c                               |    8 -
 arch/x86/xen/spinlock.c                          |    4 
 arch/x86/xen/time.c                              |   10 +-
 block/as-iosched.c                               |   10 +-
 block/blk-softirq.c                              |    2 
 block/cfq-iosched.c                              |   10 +-
 crypto/sha512_generic.c                          |    2 
 drivers/acpi/processor_core.c                    |    2 
 drivers/acpi/processor_thermal.c                 |    2 
 drivers/base/cpu.c                               |    2 
 drivers/char/random.c                            |    2 
 drivers/connector/cn_proc.c                      |    2 
 drivers/cpufreq/cpufreq.c                        |    8 -
 drivers/cpufreq/cpufreq_conservative.c           |   12 +-
 drivers/cpufreq/cpufreq_ondemand.c               |   15 +--
 drivers/cpufreq/cpufreq_stats.c                  |    2 
 drivers/cpufreq/cpufreq_userspace.c              |   11 +-
 drivers/cpufreq/freq_table.c                     |    2 
 drivers/cpuidle/governors/ladder.c               |    2 
 drivers/cpuidle/governors/menu.c                 |    2 
 drivers/crypto/padlock-aes.c                     |    2 
 drivers/lguest/page_tables.c                     |    2 
 drivers/lguest/x86/core.c                        |    2 
 drivers/xen/events.c                             |   13 +-
 fs/buffer.c                                      |    4 
 fs/file.c                                        |    2 
 fs/namespace.c                                   |    2 
 include/linux/percpu-defs.h                      |   10 +-
 include/linux/percpu.h                           |   12 ++
 init/main.c                                      |   24 -----
 kernel/kprobes.c                                 |    2 
 kernel/lockdep.c                                 |    2 
 kernel/module.c                                  |    6 -
 kernel/printk.c                                  |    2 
 kernel/profile.c                                 |    4 
 kernel/rcuclassic.c                              |    4 
 kernel/rcupdate.c                                |    2 
 kernel/rcupreempt.c                              |   10 +-
 kernel/rcutorture.c                              |    4 
 kernel/sched.c                                   |   30 +++---
 kernel/sched_clock.c                             |    2 
 kernel/sched_rt.c                                |    2 
 kernel/smp.c                                     |    6 -
 kernel/softirq.c                                 |    6 -
 kernel/softlockup.c                              |    6 -
 kernel/taskstats.c                               |    4 
 kernel/time/tick-sched.c                         |    2 
 kernel/time/timer_stats.c                        |    2 
 kernel/timer.c                                   |    2 
 kernel/trace/ring_buffer.c                       |    2 
 kernel/trace/trace.c                             |    6 -
 kernel/trace/trace_hw_branches.c                 |    4 
 kernel/trace/trace_irqsoff.c                     |    2 
 kernel/trace/trace_stack.c                       |    2 
 kernel/trace/trace_sysprof.c                     |    2 
 kernel/trace/trace_workqueue.c                   |    2 
 lib/radix-tree.c                                 |    2 
 lib/random32.c                                   |    2 
 mm/Makefile                                      |    2 
 mm/allocpercpu.c                                 |   28 ++++++
 mm/page-writeback.c                              |    5 -
 mm/percpu.c                                      |   40 ++++++++-
 mm/quicklist.c                                   |    2 
 mm/slab.c                                        |    4 
 mm/slub.c                                        |    6 -
 mm/swap.c                                        |    4 
 mm/vmalloc.c                                     |    2 
 mm/vmstat.c                                      |    2 
 net/core/drop_monitor.c                          |    2 
 net/core/flow.c                                  |    6 -
 net/core/sock.c                                  |    2 
 net/ipv4/route.c                                 |    2 
 net/ipv4/syncookies.c                            |    4 
 net/ipv6/syncookies.c                            |    4 
 net/socket.c                                     |    2 
 157 files changed, 399 insertions(+), 439 deletions(-)

Thanks.

--
tejun

[L] http://thread.gmane.org/gmane.linux.kernel/839059
[1] http://thread.gmane.org/gmane.linux.kernel/844298


* [PATCH 1/7] percpu: use dynamic percpu allocator as the default percpu allocator
From: Tejun Heo @ 2009-06-01  8:58 UTC
  To: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty
  Cc: Tejun Heo, Russell King, Matthew Wilcox, Grant Grundler

This patch makes most !CONFIG_HAVE_SETUP_PER_CPU_AREA archs use the
dynamic percpu allocator.  The first chunk is allocated using the
embedding helper and 8k is reserved for modules.  This ensures that
the new allocator behaves almost identically to the original
allocator as far as static percpu variables are concerned, so it
shouldn't introduce much breakage.

s390 and alpha use a custom SHIFT_PERCPU_PTR() to work around the
addressing range limit their addressing models impose.
Unfortunately, this breaks if the address is specified using a
variable, so the two archs aren't converted for now.
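
To illustrate the constraint (a hypothetical sketch -- RELOC_HIDE()
is the real generic helper, but the override below is made up and is
not the actual s390/alpha code):

  /* generic version: works with any pointer value */
  #define SHIFT_PERCPU_PTR(__p, __offset)	RELOC_HIDE((__p), (__offset))

  /* hypothetical arch override: wants the percpu symbol itself so
   * the compiler can emit an in-range relocation; an address that
   * arrives in a variable can't be expressed this way */
  #define SHIFT_PERCPU_PTR(var, __offset) \
  	RELOC_HIDE(&per_cpu_var(var), (__offset))

The dynamic allocator hands out addresses in variables at runtime,
which is exactly what such an override can't handle.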

The following architectures are affected by this change.

* sh
* arm
* cris
* mips
* sparc(32)
* blackfin
* avr32
* parisc
* m32r
* powerpc(32)

As this change makes the dynamic allocator the default one,
CONFIG_HAVE_DYNAMIC_PER_CPU_AREA is replaced with its inverse,
CONFIG_HAVE_LEGACY_PER_CPU_AREA, which is added to the
yet-to-be-converted archs.  These archs implement their own
setup_per_cpu_areas() and their conversion is not trivial.

* powerpc(64)
* sparc(64)
* ia64
* alpha
* s390

Boot and batch alloc/free tests were run on x86_32 with debug code
(x86_32 doesn't use the default first chunk initialization).  Compile
tested on sparc(32), powerpc(32), arm and alpha.

[ Impact: use dynamic allocator for most archs w/o custom percpu setup ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Bryan Wu <cooloney@kernel.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Matthew Wilcox <matthew@wil.cx>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
---
 arch/alpha/Kconfig     |    3 +++
 arch/ia64/Kconfig      |    3 +++
 arch/powerpc/Kconfig   |    3 +++
 arch/s390/Kconfig      |    3 +++
 arch/sparc/Kconfig     |    3 +++
 arch/x86/Kconfig       |    3 ---
 include/linux/percpu.h |   12 +++++++++---
 init/main.c            |   24 ------------------------
 kernel/module.c        |    6 +++---
 mm/Makefile            |    2 +-
 mm/allocpercpu.c       |   28 ++++++++++++++++++++++++++++
 mm/percpu.c            |   40 +++++++++++++++++++++++++++++++++++++++-
 12 files changed, 95 insertions(+), 35 deletions(-)

diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index 9fb8aae..05d8640 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -70,6 +70,9 @@ config AUTO_IRQ_AFFINITY
 	depends on SMP
 	default y
 
+config HAVE_LEGACY_PER_CPU_AREA
+	def_bool y
+
 source "init/Kconfig"
 source "kernel/Kconfig.freezer"
 
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 294a3b1..8e88df7 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -88,6 +88,9 @@ config GENERIC_TIME_VSYSCALL
 	bool
 	default y
 
+config HAVE_LEGACY_PER_CPU_AREA
+	def_bool y
+
 config HAVE_SETUP_PER_CPU_AREA
 	def_bool y
 
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index cdc9a6f..664a20e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -42,6 +42,9 @@ config GENERIC_HARDIRQS
 	bool
 	default y
 
+config HAVE_LEGACY_PER_CPU_AREA
+	def_bool PPC64
+
 config HAVE_SETUP_PER_CPU_AREA
 	def_bool PPC64
 
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 2eca5fe..686909a 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -75,6 +75,9 @@ config VIRT_CPU_ACCOUNTING
 config ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	def_bool y
 
+config HAVE_LEGACY_PER_CPU_AREA
+	def_bool y
+
 mainmenu "Linux Kernel Configuration"
 
 config S390
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index cc12cd4..2e7f019 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -90,6 +90,9 @@ config AUDIT_ARCH
 	bool
 	default y
 
+config HAVE_LEGACY_PER_CPU_AREA
+	def_bool y if SPARC64
+
 config HAVE_SETUP_PER_CPU_AREA
 	def_bool y if SPARC64
 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a6efe0a..52b17c2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -141,9 +141,6 @@ config ARCH_HAS_CACHE_LINE_SIZE
 config HAVE_SETUP_PER_CPU_AREA
 	def_bool y
 
-config HAVE_DYNAMIC_PER_CPU_AREA
-	def_bool y
-
 config HAVE_CPUMASK_OF_CPU_MAP
 	def_bool X86_64_SMP
 
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 1581ff2..bbe5b2c 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -34,7 +34,7 @@
 
 #ifdef CONFIG_SMP
 
-#ifdef CONFIG_HAVE_DYNAMIC_PER_CPU_AREA
+#ifndef CONFIG_HAVE_LEGACY_PER_CPU_AREA
 
 /* minimum unit size, also is the maximum supported allocation size */
 #define PCPU_MIN_UNIT_SIZE		PFN_ALIGN(64 << 10)
@@ -80,7 +80,7 @@ extern ssize_t __init pcpu_embed_first_chunk(
 
 extern void *__alloc_reserved_percpu(size_t size, size_t align);
 
-#else /* CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */
+#else /* CONFIG_HAVE_LEGACY_PER_CPU_AREA */
 
 struct percpu_data {
 	void *ptrs[1];
@@ -94,11 +94,15 @@ struct percpu_data {
         (__typeof__(ptr))__p->ptrs[(cpu)];				\
 })
 
-#endif /* CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */
+#endif /* CONFIG_HAVE_LEGACY_PER_CPU_AREA */
 
 extern void *__alloc_percpu(size_t size, size_t align);
 extern void free_percpu(void *__pdata);
 
+#ifndef CONFIG_HAVE_SETUP_PER_CPU_AREA
+extern void __init setup_per_cpu_areas(void);
+#endif
+
 #else /* CONFIG_SMP */
 
 #define per_cpu_ptr(ptr, cpu) ({ (void)(cpu); (ptr); })
@@ -119,6 +123,8 @@ static inline void free_percpu(void *p)
 	kfree(p);
 }
 
+static inline void __init setup_per_cpu_areas(void) { }
+
 #endif /* CONFIG_SMP */
 
 #define alloc_percpu(type)	(type *)__alloc_percpu(sizeof(type), \
diff --git a/init/main.c b/init/main.c
index d721dad..adb46ee 100644
--- a/init/main.c
+++ b/init/main.c
@@ -355,7 +355,6 @@ static void __init smp_init(void)
 #define smp_init()	do { } while (0)
 #endif
 
-static inline void setup_per_cpu_areas(void) { }
 static inline void setup_nr_cpu_ids(void) { }
 static inline void smp_prepare_cpus(unsigned int maxcpus) { }
 
@@ -376,29 +375,6 @@ static void __init setup_nr_cpu_ids(void)
 	nr_cpu_ids = find_last_bit(cpumask_bits(cpu_possible_mask),NR_CPUS) + 1;
 }
 
-#ifndef CONFIG_HAVE_SETUP_PER_CPU_AREA
-unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
-
-EXPORT_SYMBOL(__per_cpu_offset);
-
-static void __init setup_per_cpu_areas(void)
-{
-	unsigned long size, i;
-	char *ptr;
-	unsigned long nr_possible_cpus = num_possible_cpus();
-
-	/* Copy section for each CPU (we discard the original) */
-	size = ALIGN(PERCPU_ENOUGH_ROOM, PAGE_SIZE);
-	ptr = alloc_bootmem_pages(size * nr_possible_cpus);
-
-	for_each_possible_cpu(i) {
-		__per_cpu_offset[i] = ptr - __per_cpu_start;
-		memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
-		ptr += size;
-	}
-}
-#endif /* CONFIG_HAVE_SETUP_PER_CPU_AREA */
-
 /* Called by boot processor to activate the rest. */
 static void __init smp_init(void)
 {
diff --git a/kernel/module.c b/kernel/module.c
index e797812..1559bd0 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -359,7 +359,7 @@ EXPORT_SYMBOL_GPL(find_module);
 
 #ifdef CONFIG_SMP
 
-#ifdef CONFIG_HAVE_DYNAMIC_PER_CPU_AREA
+#ifndef CONFIG_HAVE_LEGACY_PER_CPU_AREA
 
 static void *percpu_modalloc(unsigned long size, unsigned long align,
 			     const char *name)
@@ -384,7 +384,7 @@ static void percpu_modfree(void *freeme)
 	free_percpu(freeme);
 }
 
-#else /* ... !CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */
+#else /* ... CONFIG_HAVE_LEGACY_PER_CPU_AREA */
 
 /* Number of blocks used and allocated. */
 static unsigned int pcpu_num_used, pcpu_num_allocated;
@@ -519,7 +519,7 @@ static int percpu_modinit(void)
 }
 __initcall(percpu_modinit);
 
-#endif /* CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */
+#endif /* CONFIG_HAVE_LEGACY_PER_CPU_AREA */
 
 static unsigned int find_pcpusec(Elf_Ehdr *hdr,
 				 Elf_Shdr *sechdrs,
diff --git a/mm/Makefile b/mm/Makefile
index ec73c68..67838bd 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -31,7 +31,7 @@ obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_FS_XIP) += filemap_xip.o
 obj-$(CONFIG_MIGRATION) += migrate.o
-ifdef CONFIG_HAVE_DYNAMIC_PER_CPU_AREA
+ifndef CONFIG_HAVE_LEGACY_PER_CPU_AREA
 obj-$(CONFIG_SMP) += percpu.o
 else
 obj-$(CONFIG_SMP) += allocpercpu.o
diff --git a/mm/allocpercpu.c b/mm/allocpercpu.c
index dfdee6a..df34cea 100644
--- a/mm/allocpercpu.c
+++ b/mm/allocpercpu.c
@@ -5,6 +5,8 @@
  */
 #include <linux/mm.h>
 #include <linux/module.h>
+#include <linux/bootmem.h>
+#include <asm/sections.h>
 
 #ifndef cache_line_size
 #define cache_line_size()	L1_CACHE_BYTES
@@ -147,3 +149,29 @@ void free_percpu(void *__pdata)
 	kfree(__percpu_disguise(__pdata));
 }
 EXPORT_SYMBOL_GPL(free_percpu);
+
+/*
+ * Generic percpu area setup.
+ */
+#ifndef CONFIG_HAVE_SETUP_PER_CPU_AREA
+unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
+
+EXPORT_SYMBOL(__per_cpu_offset);
+
+void __init setup_per_cpu_areas(void)
+{
+	unsigned long size, i;
+	char *ptr;
+	unsigned long nr_possible_cpus = num_possible_cpus();
+
+	/* Copy section for each CPU (we discard the original) */
+	size = ALIGN(PERCPU_ENOUGH_ROOM, PAGE_SIZE);
+	ptr = alloc_bootmem_pages(size * nr_possible_cpus);
+
+	for_each_possible_cpu(i) {
+		__per_cpu_offset[i] = ptr - __per_cpu_start;
+		memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
+		ptr += size;
+	}
+}
+#endif /* CONFIG_HAVE_SETUP_PER_CPU_AREA */
diff --git a/mm/percpu.c b/mm/percpu.c
index f780bee..d9b4059 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -43,7 +43,7 @@
  *
 * To use this allocator, arch code should do the following.
  *
- * - define CONFIG_HAVE_DYNAMIC_PER_CPU_AREA
+ * - drop CONFIG_HAVE_LEGACY_PER_CPU_AREA
  *
  * - define __addr_to_pcpu_ptr() and __pcpu_ptr_to_addr() to translate
  *   regular address to percpu pointer and back if they need to be
@@ -1276,3 +1276,41 @@ ssize_t __init pcpu_embed_first_chunk(size_t static_size, size_t reserved_size,
 				      reserved_size, dyn_size,
 				      pcpue_unit_size, pcpue_ptr, NULL);
 }
+
+/*
+ * Generic percpu area setup.
+ *
+ * The embedding helper is used because its behavior closely resembles
+ * the original non-dynamic generic percpu area setup.  This is
+ * important because many archs have addressing restrictions and might
+ * fail if the percpu area is located far away from the previous
+ * location.  As an added bonus, in non-NUMA cases, embedding is
+ * generally a good idea TLB-wise because percpu area can piggy back
+ * on the physical linear memory mapping which uses large page
+ * mappings on applicable archs.
+ */
+#ifndef CONFIG_HAVE_SETUP_PER_CPU_AREA
+unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
+EXPORT_SYMBOL(__per_cpu_offset);
+
+void __init setup_per_cpu_areas(void)
+{
+	size_t static_size = __per_cpu_end - __per_cpu_start;
+	ssize_t unit_size;
+	unsigned long delta;
+	unsigned int cpu;
+
+	/*
+	 * Always reserve area for module percpu variables.  That's
+	 * what the legacy allocator did.
+	 */
+	unit_size = pcpu_embed_first_chunk(static_size, PERCPU_MODULE_RESERVE,
+					   PERCPU_DYNAMIC_RESERVE, -1);
+	if (unit_size < 0)
+		panic("Failed to initialized percpu areas.");
+
+	delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
+	for_each_possible_cpu(cpu)
+		__per_cpu_offset[cpu] = delta + cpu * unit_size;
+}
+#endif /* CONFIG_HAVE_SETUP_PER_CPU_AREA */
-- 
1.6.0.2



* [PATCH 2/7] percpu: cleanup percpu array definitions
From: Tejun Heo @ 2009-06-01  8:58 UTC
  To: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty
  Cc: Tejun Heo, Jeremy Fitzhardinge, linux-mm, Christoph Lameter

Currently, the following three different ways to define percpu arrays
are in use.

1. DEFINE_PER_CPU(elem_type[array_len], array_name);
2. DEFINE_PER_CPU(elem_type, array_name[array_len]);
3. DEFINE_PER_CPU(elem_type, array_name)[array_len];

Unify to #1, which correctly separates the roles of the two
parameters and thus allows more flexibility in the way percpu
variables are defined.
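
Why #1 is the right form is easiest to see from a simplified
expansion (illustrative only; the real macro in
include/linux/percpu-defs.h also handles sections and attributes):

  #define DEFINE_PER_CPU(type, name) \
  	__attribute__((__section__(".data.percpu"))) \
  	__typeof__(type) per_cpu__##name

  /* #1: the array length travels with the type and 'name' stays a
   * plain identifier the macro is free to paste and decorate */
  DEFINE_PER_CPU(int [4], foo);	/* __typeof__(int [4]) per_cpu__foo; */

With #2 the macro receives 'foo[4]' as the name parameter, so
per_cpu__##name no longer yields a clean identifier; #3 only works
because the expansion happens to end with the variable name.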

[ Impact: cleanup ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: linux-mm@kvack.org
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: David S. Miller <davem@davemloft.net>
---
 arch/ia64/kernel/smp.c                  |    2 +-
 arch/ia64/sn/kernel/setup.c             |    2 +-
 arch/powerpc/mm/stab.c                  |    2 +-
 arch/powerpc/platforms/ps3/smp.c        |    2 +-
 arch/x86/kernel/cpu/cpu_debug.c         |    4 ++--
 arch/x86/kernel/cpu/mcheck/mce_amd_64.c |    2 +-
 drivers/xen/events.c                    |    4 ++--
 mm/quicklist.c                          |    2 +-
 mm/slub.c                               |    4 ++--
 net/ipv4/syncookies.c                   |    2 +-
 net/ipv6/syncookies.c                   |    2 +-
 11 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index 5230eaa..3e0840c 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -58,7 +58,7 @@ static struct local_tlb_flush_counts {
 	unsigned int count;
 } __attribute__((__aligned__(32))) local_tlb_flush_counts[NR_CPUS];
 
-static DEFINE_PER_CPU(unsigned short, shadow_flush_counts[NR_CPUS]) ____cacheline_aligned;
+static DEFINE_PER_CPU(unsigned short [NR_CPUS], shadow_flush_counts) ____cacheline_aligned;
 
 #define IPI_CALL_FUNC		0
 #define IPI_CPU_STOP		1
diff --git a/arch/ia64/sn/kernel/setup.c b/arch/ia64/sn/kernel/setup.c
index e456f06..ece1bf9 100644
--- a/arch/ia64/sn/kernel/setup.c
+++ b/arch/ia64/sn/kernel/setup.c
@@ -71,7 +71,7 @@ EXPORT_SYMBOL(sn_rtc_cycles_per_second);
 DEFINE_PER_CPU(struct sn_hub_info_s, __sn_hub_info);
 EXPORT_PER_CPU_SYMBOL(__sn_hub_info);
 
-DEFINE_PER_CPU(short, __sn_cnodeid_to_nasid[MAX_COMPACT_NODES]);
+DEFINE_PER_CPU(short [MAX_COMPACT_NODES], __sn_cnodeid_to_nasid);
 EXPORT_PER_CPU_SYMBOL(__sn_cnodeid_to_nasid);
 
 DEFINE_PER_CPU(struct nodepda_s *, __sn_nodepda);
diff --git a/arch/powerpc/mm/stab.c b/arch/powerpc/mm/stab.c
index 98cd1dc..6e9b69c 100644
--- a/arch/powerpc/mm/stab.c
+++ b/arch/powerpc/mm/stab.c
@@ -31,7 +31,7 @@ struct stab_entry {
 
 #define NR_STAB_CACHE_ENTRIES 8
 static DEFINE_PER_CPU(long, stab_cache_ptr);
-static DEFINE_PER_CPU(long, stab_cache[NR_STAB_CACHE_ENTRIES]);
+static DEFINE_PER_CPU(long [NR_STAB_CACHE_ENTRIES], stab_cache);
 
 /*
  * Create a segment table entry for the given esid/vsid pair.
diff --git a/arch/powerpc/platforms/ps3/smp.c b/arch/powerpc/platforms/ps3/smp.c
index a0927a3..6fcc499 100644
--- a/arch/powerpc/platforms/ps3/smp.c
+++ b/arch/powerpc/platforms/ps3/smp.c
@@ -43,7 +43,7 @@ static irqreturn_t ipi_function_handler(int irq, void *msg)
   */
 
 #define MSG_COUNT 4
-static DEFINE_PER_CPU(unsigned int, ps3_ipi_virqs[MSG_COUNT]);
+static DEFINE_PER_CPU(unsigned int [MSG_COUNT], ps3_ipi_virqs);
 
 static const char *names[MSG_COUNT] = {
 	"ipi call",
diff --git a/arch/x86/kernel/cpu/cpu_debug.c b/arch/x86/kernel/cpu/cpu_debug.c
index 46e29ab..66f7471 100644
--- a/arch/x86/kernel/cpu/cpu_debug.c
+++ b/arch/x86/kernel/cpu/cpu_debug.c
@@ -30,8 +30,8 @@
 #include <asm/apic.h>
 #include <asm/desc.h>
 
-static DEFINE_PER_CPU(struct cpu_cpuX_base, cpu_arr[CPU_REG_ALL_BIT]);
-static DEFINE_PER_CPU(struct cpu_private *, priv_arr[MAX_CPU_FILES]);
+static DEFINE_PER_CPU(struct cpu_cpuX_base [CPU_REG_ALL_BIT], cpu_arr);
+static DEFINE_PER_CPU(struct cpu_private * [MAX_CPU_FILES], priv_arr);
 static DEFINE_PER_CPU(unsigned, cpu_modelflag);
 static DEFINE_PER_CPU(int, cpu_priv_count);
 static DEFINE_PER_CPU(unsigned, cpu_model);
diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd_64.c b/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
index 56dde9c..9fd9bf6 100644
--- a/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+++ b/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
@@ -69,7 +69,7 @@ struct threshold_bank {
 	struct threshold_block *blocks;
 	cpumask_var_t cpus;
 };
-static DEFINE_PER_CPU(struct threshold_bank *, threshold_banks[NR_BANKS]);
+static DEFINE_PER_CPU(struct threshold_bank * [NR_BANKS], threshold_banks);
 
 #ifdef CONFIG_SMP
 static unsigned char shared_bank[NR_BANKS] = {
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 30963af..228a8bb 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -47,10 +47,10 @@
 static DEFINE_SPINLOCK(irq_mapping_update_lock);
 
 /* IRQ <-> VIRQ mapping. */
-static DEFINE_PER_CPU(int, virq_to_irq[NR_VIRQS]) = {[0 ... NR_VIRQS-1] = -1};
+static DEFINE_PER_CPU(int [NR_VIRQS], virq_to_irq) = {[0 ... NR_VIRQS-1] = -1};
 
 /* IRQ <-> IPI mapping */
-static DEFINE_PER_CPU(int, ipi_to_irq[XEN_NR_IPIS]) = {[0 ... XEN_NR_IPIS-1] = -1};
+static DEFINE_PER_CPU(int [XEN_NR_IPIS], ipi_to_irq) = {[0 ... XEN_NR_IPIS-1] = -1};
 
 /* Interrupt types. */
 enum xen_irq_type {
diff --git a/mm/quicklist.c b/mm/quicklist.c
index e66d07d..6eedf7e 100644
--- a/mm/quicklist.c
+++ b/mm/quicklist.c
@@ -19,7 +19,7 @@
 #include <linux/module.h>
 #include <linux/quicklist.h>
 
-DEFINE_PER_CPU(struct quicklist, quicklist)[CONFIG_NR_QUICK];
+DEFINE_PER_CPU(struct quicklist [CONFIG_NR_QUICK], quicklist);
 
 #define FRACTION_OF_NODE_MEM	16
 
diff --git a/mm/slub.c b/mm/slub.c
index 65ffda5..fbcf929 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1987,8 +1987,8 @@ init_kmem_cache_node(struct kmem_cache_node *n, struct kmem_cache *s)
  */
 #define NR_KMEM_CACHE_CPU 100
 
-static DEFINE_PER_CPU(struct kmem_cache_cpu,
-				kmem_cache_cpu)[NR_KMEM_CACHE_CPU];
+static DEFINE_PER_CPU(struct kmem_cache_cpu [NR_KMEM_CACHE_CPU],
+		      kmem_cache_cpu);
 
 static DEFINE_PER_CPU(struct kmem_cache_cpu *, kmem_cache_cpu_free);
 static DECLARE_BITMAP(kmem_cach_cpu_free_init_once, CONFIG_NR_CPUS);
diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index b35a950..ce629ed 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -37,7 +37,7 @@ __initcall(init_syncookies);
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
 
-static DEFINE_PER_CPU(__u32, cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
+static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], cookie_scratch);
 
 static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
 		       u32 count, int c)
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index 711175e..4d995fe 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -74,7 +74,7 @@ static inline struct sock *get_cookie_sock(struct sock *sk, struct sk_buff *skb,
 	return child;
 }
 
-static DEFINE_PER_CPU(__u32, cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
+static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], cookie_scratch);
 
 static u32 cookie_hash(struct in6_addr *saddr, struct in6_addr *daddr,
 		       __be16 sport, __be16 dport, u32 count, int c)
-- 
1.6.0.2



* [PATCH 3/7] percpu: clean up percpu variable definitions
From: Tejun Heo @ 2009-06-01  8:58 UTC
  To: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty
  Cc: Tejun Heo, Jens Axboe, Dave Jones, Jeremy Fitzhardinge, linux-mm

Percpu variable definitions are about to be updated such that no
static declarations are allowed.  Update percpu variable definitions
accordingly.

* as,cfq: rename ioc_count uniquely

* cpufreq: rename cpu_dbs_info uniquely

* xen: move nesting_count out of xen_evtchn_do_upcall() and rename it

* mm: move ratelimits out of balance_dirty_pages_ratelimited_nr() and
  rename it

* ipv4,6: rename cookie_scratch uniquely

While at it, make cris use DECLARE_PER_CPU() instead of extern
volatile DEFINE_PER_CPU() for the declaration.
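
The renames matter because the upcoming definition change makes every
percpu symbol globally visible, so two files can no longer each
define a like-named static percpu variable.  A sketch of the
collision being avoided, using names from the as/cfq hunks below:

  /* block/as-iosched.c */
  static DEFINE_PER_CPU(unsigned long, ioc_count);

  /* block/cfq-iosched.c */
  static DEFINE_PER_CPU(unsigned long, ioc_count);

  /* fine while the symbols had file scope; once the underlying
   * percpu symbols become global, the two definitions clash at link
   * time -- hence the unique as_ioc_count and cfq_ioc_count names */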

[ Impact: percpu usage cleanups, no duplicate static percpu var names ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: linux-mm <linux-mm@kvack.org>
Cc: David S. Miller <davem@davemloft.net>
---
 arch/cris/include/asm/mmu_context.h    |    2 +-
 block/as-iosched.c                     |   10 +++++-----
 block/cfq-iosched.c                    |   10 +++++-----
 drivers/cpufreq/cpufreq_conservative.c |   12 ++++++------
 drivers/cpufreq/cpufreq_ondemand.c     |   15 ++++++++-------
 drivers/xen/events.c                   |    9 +++++----
 mm/page-writeback.c                    |    5 +++--
 net/ipv4/syncookies.c                  |    5 +++--
 net/ipv6/syncookies.c                  |    5 +++--
 9 files changed, 39 insertions(+), 34 deletions(-)

diff --git a/arch/cris/include/asm/mmu_context.h b/arch/cris/include/asm/mmu_context.h
index 72ba08d..00de1a0 100644
--- a/arch/cris/include/asm/mmu_context.h
+++ b/arch/cris/include/asm/mmu_context.h
@@ -17,7 +17,7 @@ extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  * registers like cr3 on the i386
  */
 
-extern volatile DEFINE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
+DECLARE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
 
 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
diff --git a/block/as-iosched.c b/block/as-iosched.c
index c48fa67..96ff4d1 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -146,7 +146,7 @@ enum arq_state {
 #define RQ_STATE(rq)	((enum arq_state)(rq)->elevator_private2)
 #define RQ_SET_STATE(rq, state)	((rq)->elevator_private2 = (void *) state)
 
-static DEFINE_PER_CPU(unsigned long, ioc_count);
+static DEFINE_PER_CPU(unsigned long, as_ioc_count);
 static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
@@ -161,7 +161,7 @@ static void as_antic_stop(struct as_data *ad);
 static void free_as_io_context(struct as_io_context *aic)
 {
 	kfree(aic);
-	elv_ioc_count_dec(ioc_count);
+	elv_ioc_count_dec(as_ioc_count);
 	if (ioc_gone) {
 		/*
 		 * AS scheduler is exiting, grab exit lock and check
@@ -169,7 +169,7 @@ static void free_as_io_context(struct as_io_context *aic)
 		 * complete ioc_gone and set it back to NULL.
 		 */
 		spin_lock(&ioc_gone_lock);
-		if (ioc_gone && !elv_ioc_count_read(ioc_count)) {
+		if (ioc_gone && !elv_ioc_count_read(as_ioc_count)) {
 			complete(ioc_gone);
 			ioc_gone = NULL;
 		}
@@ -211,7 +211,7 @@ static struct as_io_context *alloc_as_io_context(void)
 		ret->seek_total = 0;
 		ret->seek_samples = 0;
 		ret->seek_mean = 0;
-		elv_ioc_count_inc(ioc_count);
+		elv_ioc_count_inc(as_ioc_count);
 	}
 
 	return ret;
@@ -1509,7 +1509,7 @@ static void __exit as_exit(void)
 	ioc_gone = &all_gone;
 	/* ioc_gone's update must be visible before reading ioc_count */
 	smp_wmb();
-	if (elv_ioc_count_read(ioc_count))
+	if (elv_ioc_count_read(as_ioc_count))
 		wait_for_completion(&all_gone);
 	synchronize_rcu();
 }
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..deea748 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -48,7 +48,7 @@ static int cfq_slice_idle = HZ / 125;
 static struct kmem_cache *cfq_pool;
 static struct kmem_cache *cfq_ioc_pool;
 
-static DEFINE_PER_CPU(unsigned long, ioc_count);
+static DEFINE_PER_CPU(unsigned long, cfq_ioc_count);
 static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
@@ -1423,7 +1423,7 @@ static void cfq_cic_free_rcu(struct rcu_head *head)
 	cic = container_of(head, struct cfq_io_context, rcu_head);
 
 	kmem_cache_free(cfq_ioc_pool, cic);
-	elv_ioc_count_dec(ioc_count);
+	elv_ioc_count_dec(cfq_ioc_count);
 
 	if (ioc_gone) {
 		/*
@@ -1432,7 +1432,7 @@ static void cfq_cic_free_rcu(struct rcu_head *head)
 		 * complete ioc_gone and set it back to NULL
 		 */
 		spin_lock(&ioc_gone_lock);
-		if (ioc_gone && !elv_ioc_count_read(ioc_count)) {
+		if (ioc_gone && !elv_ioc_count_read(cfq_ioc_count)) {
 			complete(ioc_gone);
 			ioc_gone = NULL;
 		}
@@ -1558,7 +1558,7 @@ cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 		INIT_HLIST_NODE(&cic->cic_list);
 		cic->dtor = cfq_free_io_context;
 		cic->exit = cfq_exit_io_context;
-		elv_ioc_count_inc(ioc_count);
+		elv_ioc_count_inc(cfq_ioc_count);
 	}
 
 	return cic;
@@ -2663,7 +2663,7 @@ static void __exit cfq_exit(void)
 	 * this also protects us from entering cfq_slab_kill() with
 	 * pending RCU callbacks
 	 */
-	if (elv_ioc_count_read(ioc_count))
+	if (elv_ioc_count_read(cfq_ioc_count))
 		wait_for_completion(&all_gone);
 	cfq_slab_kill();
 }
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 7a74d17..8191d04 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -80,7 +80,7 @@ struct cpu_dbs_info_s {
 	int cpu;
 	unsigned int enable:1;
 };
-static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
+static DEFINE_PER_CPU(struct cpu_dbs_info_s, cs_cpu_dbs_info);
 
 static unsigned int dbs_enable;	/* number of CPUs using this policy */
 
@@ -153,7 +153,7 @@ dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
 		     void *data)
 {
 	struct cpufreq_freqs *freq = data;
-	struct cpu_dbs_info_s *this_dbs_info = &per_cpu(cpu_dbs_info,
+	struct cpu_dbs_info_s *this_dbs_info = &per_cpu(cs_cpu_dbs_info,
 							freq->cpu);
 
 	struct cpufreq_policy *policy;
@@ -326,7 +326,7 @@ static ssize_t store_ignore_nice_load(struct cpufreq_policy *policy,
 	/* we need to re-evaluate prev_cpu_idle */
 	for_each_online_cpu(j) {
 		struct cpu_dbs_info_s *dbs_info;
-		dbs_info = &per_cpu(cpu_dbs_info, j);
+		dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 		dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
 						&dbs_info->prev_cpu_wall);
 		if (dbs_tuners_ins.ignore_nice)
@@ -416,7 +416,7 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info)
 		cputime64_t cur_wall_time, cur_idle_time;
 		unsigned int idle_time, wall_time;
 
-		j_dbs_info = &per_cpu(cpu_dbs_info, j);
+		j_dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 
 		cur_idle_time = get_cpu_idle_time(j, &cur_wall_time);
 
@@ -556,7 +556,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 	unsigned int j;
 	int rc;
 
-	this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
+	this_dbs_info = &per_cpu(cs_cpu_dbs_info, cpu);
 
 	switch (event) {
 	case CPUFREQ_GOV_START:
@@ -576,7 +576,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 
 		for_each_cpu(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
-			j_dbs_info = &per_cpu(cpu_dbs_info, j);
+			j_dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
 
 			j_dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index e741c33..04de476 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -87,7 +87,7 @@ struct cpu_dbs_info_s {
 	unsigned int enable:1,
 		sample_type:1;
 };
-static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
+static DEFINE_PER_CPU(struct cpu_dbs_info_s, od_cpu_dbs_info);
 
 static unsigned int dbs_enable;	/* number of CPUs using this policy */
 
@@ -165,7 +165,8 @@ static unsigned int powersave_bias_target(struct cpufreq_policy *policy,
 	unsigned int freq_hi, freq_lo;
 	unsigned int index = 0;
 	unsigned int jiffies_total, jiffies_hi, jiffies_lo;
-	struct cpu_dbs_info_s *dbs_info = &per_cpu(cpu_dbs_info, policy->cpu);
+	struct cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info,
+						   policy->cpu);
 
 	if (!dbs_info->freq_table) {
 		dbs_info->freq_lo = 0;
@@ -210,7 +211,7 @@ static void ondemand_powersave_bias_init(void)
 {
 	int i;
 	for_each_online_cpu(i) {
-		struct cpu_dbs_info_s *dbs_info = &per_cpu(cpu_dbs_info, i);
+		struct cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, i);
 		dbs_info->freq_table = cpufreq_frequency_get_table(i);
 		dbs_info->freq_lo = 0;
 	}
@@ -325,7 +326,7 @@ static ssize_t store_ignore_nice_load(struct cpufreq_policy *policy,
 	/* we need to re-evaluate prev_cpu_idle */
 	for_each_online_cpu(j) {
 		struct cpu_dbs_info_s *dbs_info;
-		dbs_info = &per_cpu(cpu_dbs_info, j);
+		dbs_info = &per_cpu(od_cpu_dbs_info, j);
 		dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
 						&dbs_info->prev_cpu_wall);
 		if (dbs_tuners_ins.ignore_nice)
@@ -419,7 +420,7 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info)
 		unsigned int load, load_freq;
 		int freq_avg;
 
-		j_dbs_info = &per_cpu(cpu_dbs_info, j);
+		j_dbs_info = &per_cpu(od_cpu_dbs_info, j);
 
 		cur_idle_time = get_cpu_idle_time(j, &cur_wall_time);
 
@@ -576,7 +577,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 	unsigned int j;
 	int rc;
 
-	this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
+	this_dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
 
 	switch (event) {
 	case CPUFREQ_GOV_START:
@@ -598,7 +599,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 
 		for_each_cpu(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
-			j_dbs_info = &per_cpu(cpu_dbs_info, j);
+			j_dbs_info = &per_cpu(od_cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
 
 			j_dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 228a8bb..dbfed85 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -596,6 +596,8 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static DEFINE_PER_CPU(unsigned, xed_nesting_count);
+
 /*
  * Search the CPUs pending events bitmasks.  For each one found, map
  * the event number to an irq, and feed it into do_IRQ() for
@@ -611,7 +613,6 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 	struct pt_regs *old_regs = set_irq_regs(regs);
 	struct shared_info *s = HYPERVISOR_shared_info;
 	struct vcpu_info *vcpu_info = __get_cpu_var(xen_vcpu);
-	static DEFINE_PER_CPU(unsigned, nesting_count);
  	unsigned count;
 
 	exit_idle();
@@ -622,7 +623,7 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 
 		vcpu_info->evtchn_upcall_pending = 0;
 
-		if (__get_cpu_var(nesting_count)++)
+		if (__get_cpu_var(xed_nesting_count)++)
 			goto out;
 
 #ifndef CONFIG_X86 /* No need for a barrier -- XCHG is a barrier on x86. */
@@ -647,8 +648,8 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 
 		BUG_ON(!irqs_disabled());
 
-		count = __get_cpu_var(nesting_count);
-		__get_cpu_var(nesting_count) = 0;
+		count = __get_cpu_var(xed_nesting_count);
+		__get_cpu_var(xed_nesting_count) = 0;
 	} while(count != 1);
 
 out:
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..0e0c9de 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -606,6 +606,8 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite)
 	}
 }
 
+static DEFINE_PER_CPU(unsigned long, bdp_ratelimits) = 0;
+
 /**
  * balance_dirty_pages_ratelimited_nr - balance dirty memory state
  * @mapping: address_space which was dirtied
@@ -623,7 +625,6 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite)
 void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
 					unsigned long nr_pages_dirtied)
 {
-	static DEFINE_PER_CPU(unsigned long, ratelimits) = 0;
 	unsigned long ratelimit;
 	unsigned long *p;
 
@@ -636,7 +637,7 @@ void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
 	 * tasks in balance_dirty_pages(). Period.
 	 */
 	preempt_disable();
-	p =  &__get_cpu_var(ratelimits);
+	p =  &__get_cpu_var(bdp_ratelimits);
 	*p += nr_pages_dirtied;
 	if (unlikely(*p >= ratelimit)) {
 		*p = 0;
diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index ce629ed..a3c045c 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -37,12 +37,13 @@ __initcall(init_syncookies);
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], cookie_scratch);
+static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS],
+		      ipv4_cookie_scratch);
 
 static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
 		       u32 count, int c)
 {
-	__u32 *tmp = __get_cpu_var(cookie_scratch);
+	__u32 *tmp = __get_cpu_var(ipv4_cookie_scratch);
 
 	memcpy(tmp + 4, syncookie_secret[c], sizeof(syncookie_secret[c]));
 	tmp[0] = (__force u32)saddr;
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index 4d995fe..e2bcff0 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -74,12 +74,13 @@ static inline struct sock *get_cookie_sock(struct sock *sk, struct sk_buff *skb,
 	return child;
 }
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], cookie_scratch);
+static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS],
+		      ipv6_cookie_scratch);
 
 static u32 cookie_hash(struct in6_addr *saddr, struct in6_addr *daddr,
 		       __be16 sport, __be16 dport, u32 count, int c)
 {
-	__u32 *tmp = __get_cpu_var(cookie_scratch);
+	__u32 *tmp = __get_cpu_var(ipv6_cookie_scratch);
 
 	/*
 	 * we have 320 bits of information to hash, copy in the remaining
-- 
1.6.0.2


 		INIT_HLIST_NODE(&cic->cic_list);
 		cic->dtor = cfq_free_io_context;
 		cic->exit = cfq_exit_io_context;
-		elv_ioc_count_inc(ioc_count);
+		elv_ioc_count_inc(cfq_ioc_count);
 	}
 
 	return cic;
@@ -2663,7 +2663,7 @@ static void __exit cfq_exit(void)
 	 * this also protects us from entering cfq_slab_kill() with
 	 * pending RCU callbacks
 	 */
-	if (elv_ioc_count_read(ioc_count))
+	if (elv_ioc_count_read(cfq_ioc_count))
 		wait_for_completion(&all_gone);
 	cfq_slab_kill();
 }
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 7a74d17..8191d04 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -80,7 +80,7 @@ struct cpu_dbs_info_s {
 	int cpu;
 	unsigned int enable:1;
 };
-static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
+static DEFINE_PER_CPU(struct cpu_dbs_info_s, cs_cpu_dbs_info);
 
 static unsigned int dbs_enable;	/* number of CPUs using this policy */
 
@@ -153,7 +153,7 @@ dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
 		     void *data)
 {
 	struct cpufreq_freqs *freq = data;
-	struct cpu_dbs_info_s *this_dbs_info = &per_cpu(cpu_dbs_info,
+	struct cpu_dbs_info_s *this_dbs_info = &per_cpu(cs_cpu_dbs_info,
 							freq->cpu);
 
 	struct cpufreq_policy *policy;
@@ -326,7 +326,7 @@ static ssize_t store_ignore_nice_load(struct cpufreq_policy *policy,
 	/* we need to re-evaluate prev_cpu_idle */
 	for_each_online_cpu(j) {
 		struct cpu_dbs_info_s *dbs_info;
-		dbs_info = &per_cpu(cpu_dbs_info, j);
+		dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 		dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
 						&dbs_info->prev_cpu_wall);
 		if (dbs_tuners_ins.ignore_nice)
@@ -416,7 +416,7 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info)
 		cputime64_t cur_wall_time, cur_idle_time;
 		unsigned int idle_time, wall_time;
 
-		j_dbs_info = &per_cpu(cpu_dbs_info, j);
+		j_dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 
 		cur_idle_time = get_cpu_idle_time(j, &cur_wall_time);
 
@@ -556,7 +556,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 	unsigned int j;
 	int rc;
 
-	this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
+	this_dbs_info = &per_cpu(cs_cpu_dbs_info, cpu);
 
 	switch (event) {
 	case CPUFREQ_GOV_START:
@@ -576,7 +576,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 
 		for_each_cpu(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
-			j_dbs_info = &per_cpu(cpu_dbs_info, j);
+			j_dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
 
 			j_dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index e741c33..04de476 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -87,7 +87,7 @@ struct cpu_dbs_info_s {
 	unsigned int enable:1,
 		sample_type:1;
 };
-static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
+static DEFINE_PER_CPU(struct cpu_dbs_info_s, od_cpu_dbs_info);
 
 static unsigned int dbs_enable;	/* number of CPUs using this policy */
 
@@ -165,7 +165,8 @@ static unsigned int powersave_bias_target(struct cpufreq_policy *policy,
 	unsigned int freq_hi, freq_lo;
 	unsigned int index = 0;
 	unsigned int jiffies_total, jiffies_hi, jiffies_lo;
-	struct cpu_dbs_info_s *dbs_info = &per_cpu(cpu_dbs_info, policy->cpu);
+	struct cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info,
+						   policy->cpu);
 
 	if (!dbs_info->freq_table) {
 		dbs_info->freq_lo = 0;
@@ -210,7 +211,7 @@ static void ondemand_powersave_bias_init(void)
 {
 	int i;
 	for_each_online_cpu(i) {
-		struct cpu_dbs_info_s *dbs_info = &per_cpu(cpu_dbs_info, i);
+		struct cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, i);
 		dbs_info->freq_table = cpufreq_frequency_get_table(i);
 		dbs_info->freq_lo = 0;
 	}
@@ -325,7 +326,7 @@ static ssize_t store_ignore_nice_load(struct cpufreq_policy *policy,
 	/* we need to re-evaluate prev_cpu_idle */
 	for_each_online_cpu(j) {
 		struct cpu_dbs_info_s *dbs_info;
-		dbs_info = &per_cpu(cpu_dbs_info, j);
+		dbs_info = &per_cpu(od_cpu_dbs_info, j);
 		dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
 						&dbs_info->prev_cpu_wall);
 		if (dbs_tuners_ins.ignore_nice)
@@ -419,7 +420,7 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info)
 		unsigned int load, load_freq;
 		int freq_avg;
 
-		j_dbs_info = &per_cpu(cpu_dbs_info, j);
+		j_dbs_info = &per_cpu(od_cpu_dbs_info, j);
 
 		cur_idle_time = get_cpu_idle_time(j, &cur_wall_time);
 
@@ -576,7 +577,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 	unsigned int j;
 	int rc;
 
-	this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
+	this_dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
 
 	switch (event) {
 	case CPUFREQ_GOV_START:
@@ -598,7 +599,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 
 		for_each_cpu(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
-			j_dbs_info = &per_cpu(cpu_dbs_info, j);
+			j_dbs_info = &per_cpu(od_cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
 
 			j_dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 228a8bb..dbfed85 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -596,6 +596,8 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static DEFINE_PER_CPU(unsigned, xed_nesting_count);
+
 /*
  * Search the CPUs pending events bitmasks.  For each one found, map
  * the event number to an irq, and feed it into do_IRQ() for
@@ -611,7 +613,6 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 	struct pt_regs *old_regs = set_irq_regs(regs);
 	struct shared_info *s = HYPERVISOR_shared_info;
 	struct vcpu_info *vcpu_info = __get_cpu_var(xen_vcpu);
-	static DEFINE_PER_CPU(unsigned, nesting_count);
  	unsigned count;
 
 	exit_idle();
@@ -622,7 +623,7 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 
 		vcpu_info->evtchn_upcall_pending = 0;
 
-		if (__get_cpu_var(nesting_count)++)
+		if (__get_cpu_var(xed_nesting_count)++)
 			goto out;
 
 #ifndef CONFIG_X86 /* No need for a barrier -- XCHG is a barrier on x86. */
@@ -647,8 +648,8 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 
 		BUG_ON(!irqs_disabled());
 
-		count = __get_cpu_var(nesting_count);
-		__get_cpu_var(nesting_count) = 0;
+		count = __get_cpu_var(xed_nesting_count);
+		__get_cpu_var(xed_nesting_count) = 0;
 	} while(count != 1);
 
 out:
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..0e0c9de 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -606,6 +606,8 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite)
 	}
 }
 
+static DEFINE_PER_CPU(unsigned long, bdp_ratelimits) = 0;
+
 /**
  * balance_dirty_pages_ratelimited_nr - balance dirty memory state
  * @mapping: address_space which was dirtied
@@ -623,7 +625,6 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite)
 void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
 					unsigned long nr_pages_dirtied)
 {
-	static DEFINE_PER_CPU(unsigned long, ratelimits) = 0;
 	unsigned long ratelimit;
 	unsigned long *p;
 
@@ -636,7 +637,7 @@ void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
 	 * tasks in balance_dirty_pages(). Period.
 	 */
 	preempt_disable();
-	p =  &__get_cpu_var(ratelimits);
+	p =  &__get_cpu_var(bdp_ratelimits);
 	*p += nr_pages_dirtied;
 	if (unlikely(*p >= ratelimit)) {
 		*p = 0;
diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index ce629ed..a3c045c 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -37,12 +37,13 @@ __initcall(init_syncookies);
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], cookie_scratch);
+static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS],
+		      ipv4_cookie_scratch);
 
 static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
 		       u32 count, int c)
 {
-	__u32 *tmp = __get_cpu_var(cookie_scratch);
+	__u32 *tmp = __get_cpu_var(ipv4_cookie_scratch);
 
 	memcpy(tmp + 4, syncookie_secret[c], sizeof(syncookie_secret[c]));
 	tmp[0] = (__force u32)saddr;
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index 4d995fe..e2bcff0 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -74,12 +74,13 @@ static inline struct sock *get_cookie_sock(struct sock *sk, struct sk_buff *skb,
 	return child;
 }
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], cookie_scratch);
+static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS],
+		      ipv6_cookie_scratch);
 
 static u32 cookie_hash(struct in6_addr *saddr, struct in6_addr *daddr,
 		       __be16 sport, __be16 dport, u32 count, int c)
 {
-	__u32 *tmp = __get_cpu_var(cookie_scratch);
+	__u32 *tmp = __get_cpu_var(ipv6_cookie_scratch);
 
 	/*
 	 * we have 320 bits of information to hash, copy in the remaining
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 4/7] percpu: enforce global definition
  2009-06-01  8:58 [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Tejun Heo
                   ` (2 preceding siblings ...)
  2009-06-01  8:58   ` Tejun Heo
@ 2009-06-01  8:58 ` Tejun Heo
  2009-06-01  8:58 ` [PATCH 5/7] alpha: kill unnecessary __used attribute in PER_CPU_ATTRIBUTES Tejun Heo
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-06-01  8:58 UTC (permalink / raw)
  To: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty
  Cc: Tejun Heo, blackfin: Mike Frysinger, block: Jens Axboe,
	crypto: Herbert Xu, acpi: Len Brown, xen: Jeremy Fitzhardinge,
	cpu: Mike Travis, cpufreq: Dave Jones, cpuidle: Venki Pallipadi,
	fs: Alexander Viro, kprobes: Ananth N Mavinakayanahalli,
	lockdep: Peter Zijlstra, rcu: Dipankar Sarma,
	rcutorture: Josh Triplett, trace: Frederic Weisbecker,
	mm,radix-tree: Nick Piggin, slub: Christoph Lameter,
	random32: Stephen Hemminger, kernel/*: Andrew Morton

Some archs (alpha and s390) need to add the 'weak' attribute to percpu
variable definitions so that the compiler generates external
references for them.  To allow this, enforce global definition of all
percpu variables.
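
On those archs the 'weak' marker would be supplied through the arch's
percpu.h, along these lines (a rough sketch of the approach, not the
exact alpha/s390 hunks from later in the series):

  /* asm/percpu.h: mark percpu definitions weak so the compiler cannot
   * assume the symbol is local and reach it through a short,
   * displacement-limited addressing mode */
  #define PER_CPU_ATTRIBUTES	__weak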

This patch makes DEFINE_PER_CPU_SECTION() do DECLARE_PER_CPU_SECTION()
implicitly and drops static from all percpu definitions.
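
Concretely, the include/linux/percpu-defs.h change is shaped roughly
like this (a simplified sketch, assuming the 2009-era per_cpu__ symbol
prefix; see the percpu-defs.h hunk in this patch for the real thing):

  /* every definition now emits the matching declaration first and can
   * no longer be static */
  #define DEFINE_PER_CPU_SECTION(type, name, section)			\
  	DECLARE_PER_CPU_SECTION(type, name, section);			\
  	__attribute__((__section__(PER_CPU_BASE_SECTION section)))	\
  	PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name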

[ Impact: all percpu variables are forced to be global ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Howells <dhowells@redhat.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: arm: Russell King <linux@arm.linux.org.uk>
Cc: avr32: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: blackfin: Mike Frysinger <vapier@gentoo.org>
Cc: ia64: Tony Luck <tony.luck@intel.com>
Cc: mips: Ralf Baechle <ralf@linux-mips.org>
Cc: parisc: Kyle McMartin <kyle@mcmartin.ca>
Cc: powerpc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: s390: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: superh: Paul Mundt <lethal@linux-sh.org>
Cc: sparc: David S. Miller <davem@davemloft.net>
Cc: x86,timer: Thomas Gleixner <tglx@linutronix.de>
Cc: block: Jens Axboe <axboe@kernel.dk>
Cc: crypto: Herbert Xu <herbert@gondor.apana.org.au>
Cc: acpi: Len Brown <lenb@kernel.org>
Cc: xen: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: cpu: Mike Travis <travis@sgi.com>
Cc: cpufreq: Dave Jones <davej@redhat.com>
Cc: cpuidle: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Cc: lguest: Rusty Russell <rusty@rustcorp.com.au>
Cc: fs: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: kprobes: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: lockdep: Peter Zijlstra <peterz@infradead.org>
Cc: rcu: Dipankar Sarma <dipankar@in.ibm.com>
Cc: rcutorture: Josh Triplett <josh@freedesktop.org>
Cc: trace: Frederic Weisbecker <fweisbec@gmail.com>
Cc: mm,radix-tree: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: slub: Christoph Lameter <cl@linux-foundation.org>
Cc: random32: Stephen Hemminger <shemminger@osdl.org>
Cc: net: David S. Miller <davem@davemloft.net>
Cc: kernel/*: Andrew Morton <akpm@linux-foundation.org>
---
 arch/arm/kernel/smp.c                            |    2 +-
 arch/arm/mach-realview/localtimer.c              |    2 +-
 arch/avr32/kernel/cpu.c                          |    2 +-
 arch/blackfin/mach-common/smp.c                  |    2 +-
 arch/blackfin/mm/sram-alloc.c                    |   22 ++++++++--------
 arch/ia64/kernel/crash.c                         |    2 +-
 arch/ia64/kernel/smp.c                           |    4 +-
 arch/ia64/kernel/traps.c                         |    2 +-
 arch/ia64/kvm/kvm-ia64.c                         |    2 +-
 arch/ia64/xen/irq_xen.c                          |   24 ++++++++--------
 arch/mips/kernel/cevt-bcm1480.c                  |    6 ++--
 arch/mips/kernel/cevt-sb1250.c                   |    6 ++--
 arch/mips/kernel/topology.c                      |    2 +-
 arch/mips/sgi-ip27/ip27-timer.c                  |    4 +-
 arch/parisc/kernel/irq.c                         |    2 +-
 arch/parisc/kernel/topology.c                    |    2 +-
 arch/powerpc/kernel/cacheinfo.c                  |    2 +-
 arch/powerpc/kernel/process.c                    |    2 +-
 arch/powerpc/kernel/sysfs.c                      |    4 +-
 arch/powerpc/kernel/time.c                       |    6 ++--
 arch/powerpc/mm/pgtable.c                        |    2 +-
 arch/powerpc/mm/stab.c                           |    4 +-
 arch/powerpc/oprofile/op_model_cell.c            |    2 +-
 arch/powerpc/platforms/cell/cpufreq_spudemand.c  |    2 +-
 arch/powerpc/platforms/cell/interrupt.c          |    2 +-
 arch/powerpc/platforms/ps3/interrupt.c           |    2 +-
 arch/powerpc/platforms/ps3/smp.c                 |    2 +-
 arch/powerpc/platforms/pseries/dtl.c             |    2 +-
 arch/powerpc/platforms/pseries/iommu.c           |    2 +-
 arch/s390/appldata/appldata_base.c               |    2 +-
 arch/s390/kernel/nmi.c                           |    2 +-
 arch/s390/kernel/smp.c                           |    2 +-
 arch/s390/kernel/time.c                          |    4 +-
 arch/s390/kernel/vtime.c                         |    2 +-
 arch/sh/kernel/timers/timer-broadcast.c          |    2 +-
 arch/sh/kernel/topology.c                        |    2 +-
 arch/sparc/kernel/nmi.c                          |    6 ++--
 arch/sparc/kernel/pci_sun4v.c                    |    2 +-
 arch/sparc/kernel/sysfs.c                        |    4 +-
 arch/sparc/kernel/time_64.c                      |    4 +-
 arch/x86/kernel/apic/apic.c                      |    2 +-
 arch/x86/kernel/apic/nmi.c                       |    8 +++---
 arch/x86/kernel/cpu/common.c                     |    2 +-
 arch/x86/kernel/cpu/cpu_debug.c                  |   10 +++---
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c       |    4 +-
 arch/x86/kernel/cpu/cpufreq/powernow-k8.c        |    2 +-
 arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c |    4 +-
 arch/x86/kernel/cpu/intel_cacheinfo.c            |    6 ++--
 arch/x86/kernel/cpu/mcheck/mce_64.c              |    4 +-
 arch/x86/kernel/cpu/mcheck/mce_amd_64.c          |    4 +-
 arch/x86/kernel/cpu/mcheck/mce_intel_64.c        |    2 +-
 arch/x86/kernel/cpu/mcheck/therm_throt.c         |    4 +-
 arch/x86/kernel/cpu/perfctr-watchdog.c           |    2 +-
 arch/x86/kernel/ds.c                             |    4 +-
 arch/x86/kernel/hpet.c                           |    2 +-
 arch/x86/kernel/irq_32.c                         |    8 +++---
 arch/x86/kernel/kvm.c                            |    2 +-
 arch/x86/kernel/kvmclock.c                       |    2 +-
 arch/x86/kernel/paravirt.c                       |    2 +-
 arch/x86/kernel/process_64.c                     |    2 +-
 arch/x86/kernel/smpboot.c                        |    2 +-
 arch/x86/kernel/tlb_uv.c                         |    6 ++--
 arch/x86/kernel/topology.c                       |    2 +-
 arch/x86/kernel/uv_time.c                        |    2 +-
 arch/x86/kernel/vmiclock_32.c                    |    2 +-
 arch/x86/kvm/svm.c                               |    2 +-
 arch/x86/kvm/vmx.c                               |    6 ++--
 arch/x86/kvm/x86.c                               |    2 +-
 arch/x86/mm/kmmio.c                              |    2 +-
 arch/x86/mm/mmio-mod.c                           |    4 +-
 arch/x86/oprofile/nmi_int.c                      |    4 +-
 arch/x86/xen/enlighten.c                         |    2 +-
 arch/x86/xen/multicalls.c                        |    2 +-
 arch/x86/xen/smp.c                               |    8 +++---
 arch/x86/xen/spinlock.c                          |    4 +-
 arch/x86/xen/time.c                              |   10 +++---
 block/as-iosched.c                               |    2 +-
 block/blk-softirq.c                              |    2 +-
 block/cfq-iosched.c                              |    2 +-
 crypto/sha512_generic.c                          |    2 +-
 drivers/acpi/processor_core.c                    |    2 +-
 drivers/acpi/processor_thermal.c                 |    2 +-
 drivers/base/cpu.c                               |    2 +-
 drivers/char/random.c                            |    2 +-
 drivers/connector/cn_proc.c                      |    2 +-
 drivers/cpufreq/cpufreq.c                        |    8 +++---
 drivers/cpufreq/cpufreq_conservative.c           |    2 +-
 drivers/cpufreq/cpufreq_ondemand.c               |    2 +-
 drivers/cpufreq/cpufreq_stats.c                  |    2 +-
 drivers/cpufreq/cpufreq_userspace.c              |   11 +++----
 drivers/cpufreq/freq_table.c                     |    2 +-
 drivers/cpuidle/governors/ladder.c               |    2 +-
 drivers/cpuidle/governors/menu.c                 |    2 +-
 drivers/crypto/padlock-aes.c                     |    2 +-
 drivers/lguest/page_tables.c                     |    2 +-
 drivers/lguest/x86/core.c                        |    2 +-
 drivers/xen/events.c                             |    6 ++--
 fs/buffer.c                                      |    4 +-
 fs/file.c                                        |    2 +-
 fs/namespace.c                                   |    2 +-
 include/linux/percpu-defs.h                      |   10 +++++--
 kernel/kprobes.c                                 |    2 +-
 kernel/lockdep.c                                 |    2 +-
 kernel/printk.c                                  |    2 +-
 kernel/profile.c                                 |    4 +-
 kernel/rcuclassic.c                              |    4 +-
 kernel/rcupdate.c                                |    2 +-
 kernel/rcupreempt.c                              |   10 +++---
 kernel/rcutorture.c                              |    4 +-
 kernel/sched.c                                   |   30 +++++++++++-----------
 kernel/sched_clock.c                             |    2 +-
 kernel/sched_rt.c                                |    2 +-
 kernel/smp.c                                     |    6 ++--
 kernel/softirq.c                                 |    6 ++--
 kernel/softlockup.c                              |    6 ++--
 kernel/taskstats.c                               |    4 +-
 kernel/time/tick-sched.c                         |    2 +-
 kernel/time/timer_stats.c                        |    2 +-
 kernel/timer.c                                   |    2 +-
 kernel/trace/ring_buffer.c                       |    2 +-
 kernel/trace/trace.c                             |    6 ++--
 kernel/trace/trace_hw_branches.c                 |    4 +-
 kernel/trace/trace_irqsoff.c                     |    2 +-
 kernel/trace/trace_stack.c                       |    2 +-
 kernel/trace/trace_sysprof.c                     |    2 +-
 kernel/trace/trace_workqueue.c                   |    2 +-
 lib/radix-tree.c                                 |    2 +-
 lib/random32.c                                   |    2 +-
 mm/page-writeback.c                              |    2 +-
 mm/slab.c                                        |    4 +-
 mm/slub.c                                        |    6 +---
 mm/swap.c                                        |    4 +-
 mm/vmalloc.c                                     |    2 +-
 mm/vmstat.c                                      |    2 +-
 net/core/drop_monitor.c                          |    2 +-
 net/core/flow.c                                  |    6 ++--
 net/core/sock.c                                  |    2 +-
 net/ipv4/route.c                                 |    2 +-
 net/ipv4/syncookies.c                            |    3 +-
 net/ipv6/syncookies.c                            |    3 +-
 net/socket.c                                     |    2 +-
 141 files changed, 261 insertions(+), 262 deletions(-)

diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 6014dfd..eb1026e 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -50,7 +50,7 @@ struct ipi_data {
 	unsigned long bits;
 };
 
-static DEFINE_PER_CPU(struct ipi_data, ipi_data) = {
+DEFINE_PER_CPU(struct ipi_data, ipi_data) = {
 	.lock	= SPIN_LOCK_UNLOCKED,
 };
 
diff --git a/arch/arm/mach-realview/localtimer.c b/arch/arm/mach-realview/localtimer.c
index 1c01d13..4afd165 100644
--- a/arch/arm/mach-realview/localtimer.c
+++ b/arch/arm/mach-realview/localtimer.c
@@ -24,7 +24,7 @@
 #include <mach/hardware.h>
 #include <asm/irq.h>
 
-static DEFINE_PER_CPU(struct clock_event_device, local_clockevent);
+DEFINE_PER_CPU(struct clock_event_device, local_clockevent);
 
 /*
  * Used on SMP for either the local timer or IPI_TIMER
diff --git a/arch/avr32/kernel/cpu.c b/arch/avr32/kernel/cpu.c
index e84faff..fbc8c92 100644
--- a/arch/avr32/kernel/cpu.c
+++ b/arch/avr32/kernel/cpu.c
@@ -18,7 +18,7 @@
 #include <asm/setup.h>
 #include <asm/sysreg.h>
 
-static DEFINE_PER_CPU(struct cpu, cpu_devices);
+DEFINE_PER_CPU(struct cpu, cpu_devices);
 
 #ifdef CONFIG_PERFORMANCE_COUNTERS
 
diff --git a/arch/blackfin/mach-common/smp.c b/arch/blackfin/mach-common/smp.c
index 93eab61..0527cba 100644
--- a/arch/blackfin/mach-common/smp.c
+++ b/arch/blackfin/mach-common/smp.c
@@ -93,7 +93,7 @@ struct ipi_message_queue {
 	unsigned long count;
 };
 
-static DEFINE_PER_CPU(struct ipi_message_queue, ipi_msg_queue);
+DEFINE_PER_CPU(struct ipi_message_queue, ipi_msg_queue);
 
 static void ipi_cpu_stop(unsigned int cpu)
 {
diff --git a/arch/blackfin/mm/sram-alloc.c b/arch/blackfin/mm/sram-alloc.c
index 530d139..e954244 100644
--- a/arch/blackfin/mm/sram-alloc.c
+++ b/arch/blackfin/mm/sram-alloc.c
@@ -42,9 +42,9 @@
 #include <asm/mem_map.h>
 #include "blackfin_sram.h"
 
-static DEFINE_PER_CPU(spinlock_t, l1sram_lock) ____cacheline_aligned_in_smp;
-static DEFINE_PER_CPU(spinlock_t, l1_data_sram_lock) ____cacheline_aligned_in_smp;
-static DEFINE_PER_CPU(spinlock_t, l1_inst_sram_lock) ____cacheline_aligned_in_smp;
+DEFINE_PER_CPU(spinlock_t, l1sram_lock) ____cacheline_aligned_in_smp;
+DEFINE_PER_CPU(spinlock_t, l1_data_sram_lock) ____cacheline_aligned_in_smp;
+DEFINE_PER_CPU(spinlock_t, l1_inst_sram_lock) ____cacheline_aligned_in_smp;
 static spinlock_t l2_sram_lock ____cacheline_aligned_in_smp;
 
 /* the data structure for L1 scratchpad and DATA SRAM */
@@ -55,22 +55,22 @@ struct sram_piece {
 	struct sram_piece *next;
 };
 
-static DEFINE_PER_CPU(struct sram_piece, free_l1_ssram_head);
-static DEFINE_PER_CPU(struct sram_piece, used_l1_ssram_head);
+DEFINE_PER_CPU(struct sram_piece, free_l1_ssram_head);
+DEFINE_PER_CPU(struct sram_piece, used_l1_ssram_head);
 
 #if L1_DATA_A_LENGTH != 0
-static DEFINE_PER_CPU(struct sram_piece, free_l1_data_A_sram_head);
-static DEFINE_PER_CPU(struct sram_piece, used_l1_data_A_sram_head);
+DEFINE_PER_CPU(struct sram_piece, free_l1_data_A_sram_head);
+DEFINE_PER_CPU(struct sram_piece, used_l1_data_A_sram_head);
 #endif
 
 #if L1_DATA_B_LENGTH != 0
-static DEFINE_PER_CPU(struct sram_piece, free_l1_data_B_sram_head);
-static DEFINE_PER_CPU(struct sram_piece, used_l1_data_B_sram_head);
+DEFINE_PER_CPU(struct sram_piece, free_l1_data_B_sram_head);
+DEFINE_PER_CPU(struct sram_piece, used_l1_data_B_sram_head);
 #endif
 
 #if L1_CODE_LENGTH != 0
-static DEFINE_PER_CPU(struct sram_piece, free_l1_inst_sram_head);
-static DEFINE_PER_CPU(struct sram_piece, used_l1_inst_sram_head);
+DEFINE_PER_CPU(struct sram_piece, free_l1_inst_sram_head);
+DEFINE_PER_CPU(struct sram_piece, used_l1_inst_sram_head);
 #endif
 
 #if L2_LENGTH != 0
diff --git a/arch/ia64/kernel/crash.c b/arch/ia64/kernel/crash.c
index f065093..9ba4aa4 100644
--- a/arch/ia64/kernel/crash.c
+++ b/arch/ia64/kernel/crash.c
@@ -50,7 +50,7 @@ final_note(void *buf)
 
 extern void ia64_dump_cpu_regs(void *);
 
-static DEFINE_PER_CPU(struct elf_prstatus, elf_prstatus);
+DEFINE_PER_CPU(struct elf_prstatus, elf_prstatus);
 
 void
 crash_save_this_cpu(void)
diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index 3e0840c..0681840 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -58,7 +58,7 @@ static struct local_tlb_flush_counts {
 	unsigned int count;
 } __attribute__((__aligned__(32))) local_tlb_flush_counts[NR_CPUS];
 
-static DEFINE_PER_CPU(unsigned short [NR_CPUS], shadow_flush_counts) ____cacheline_aligned;
+DEFINE_PER_CPU(unsigned short [NR_CPUS], shadow_flush_counts) ____cacheline_aligned;
 
 #define IPI_CALL_FUNC		0
 #define IPI_CPU_STOP		1
@@ -66,7 +66,7 @@ static DEFINE_PER_CPU(unsigned short [NR_CPUS], shadow_flush_counts) ____cacheli
 #define IPI_KDUMP_CPU_STOP	3
 
 /* This needs to be cacheline aligned because it is written to by *other* CPUs.  */
-static DEFINE_PER_CPU_SHARED_ALIGNED(u64, ipi_operation);
+DEFINE_PER_CPU_SHARED_ALIGNED(u64, ipi_operation);
 
 extern void cpu_halt (void);
 
diff --git a/arch/ia64/kernel/traps.c b/arch/ia64/kernel/traps.c
index f0cda76..08274f1 100644
--- a/arch/ia64/kernel/traps.c
+++ b/arch/ia64/kernel/traps.c
@@ -276,7 +276,7 @@ struct fpu_swa_msg {
 	unsigned long count;
 	unsigned long time;
 };
-static DEFINE_PER_CPU(struct fpu_swa_msg, cpulast);
+DEFINE_PER_CPU(struct fpu_swa_msg, cpulast);
 DECLARE_PER_CPU(struct fpu_swa_msg, cpulast);
 static struct fpu_swa_msg last __cacheline_aligned;
 
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index d20a5db..64d414f 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -59,7 +59,7 @@ static long vp_env_info;
 
 static struct kvm_vmm_info *kvm_vmm_info;
 
-static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu);
+DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu);
 
 struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ NULL }
diff --git a/arch/ia64/xen/irq_xen.c b/arch/ia64/xen/irq_xen.c
index af93aad..92e2c2f 100644
--- a/arch/ia64/xen/irq_xen.c
+++ b/arch/ia64/xen/irq_xen.c
@@ -63,19 +63,19 @@ xen_free_irq_vector(int vector)
 }
 
 
-static DEFINE_PER_CPU(int, timer_irq) = -1;
-static DEFINE_PER_CPU(int, ipi_irq) = -1;
-static DEFINE_PER_CPU(int, resched_irq) = -1;
-static DEFINE_PER_CPU(int, cmc_irq) = -1;
-static DEFINE_PER_CPU(int, cmcp_irq) = -1;
-static DEFINE_PER_CPU(int, cpep_irq) = -1;
+DEFINE_PER_CPU(int, timer_irq) = -1;
+DEFINE_PER_CPU(int, ipi_irq) = -1;
+DEFINE_PER_CPU(int, resched_irq) = -1;
+DEFINE_PER_CPU(int, cmc_irq) = -1;
+DEFINE_PER_CPU(int, cmcp_irq) = -1;
+DEFINE_PER_CPU(int, cpep_irq) = -1;
 #define NAME_SIZE	15
-static DEFINE_PER_CPU(char[NAME_SIZE], timer_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], ipi_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], resched_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], cmc_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], cmcp_name);
-static DEFINE_PER_CPU(char[NAME_SIZE], cpep_name);
+DEFINE_PER_CPU(char[NAME_SIZE], timer_name);
+DEFINE_PER_CPU(char[NAME_SIZE], ipi_name);
+DEFINE_PER_CPU(char[NAME_SIZE], resched_name);
+DEFINE_PER_CPU(char[NAME_SIZE], cmc_name);
+DEFINE_PER_CPU(char[NAME_SIZE], cmcp_name);
+DEFINE_PER_CPU(char[NAME_SIZE], cpep_name);
 #undef NAME_SIZE
 
 struct saved_irq {
diff --git a/arch/mips/kernel/cevt-bcm1480.c b/arch/mips/kernel/cevt-bcm1480.c
index a5182a2..cf43bc7 100644
--- a/arch/mips/kernel/cevt-bcm1480.c
+++ b/arch/mips/kernel/cevt-bcm1480.c
@@ -103,9 +103,9 @@ static irqreturn_t sibyte_counter_handler(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-static DEFINE_PER_CPU(struct clock_event_device, sibyte_hpt_clockevent);
-static DEFINE_PER_CPU(struct irqaction, sibyte_hpt_irqaction);
-static DEFINE_PER_CPU(char [18], sibyte_hpt_name);
+DEFINE_PER_CPU(struct clock_event_device, sibyte_hpt_clockevent);
+DEFINE_PER_CPU(struct irqaction, sibyte_hpt_irqaction);
+DEFINE_PER_CPU(char [18], sibyte_hpt_name);
 
 void __cpuinit sb1480_clockevent_init(void)
 {
diff --git a/arch/mips/kernel/cevt-sb1250.c b/arch/mips/kernel/cevt-sb1250.c
index 340f53e..e3c7dce 100644
--- a/arch/mips/kernel/cevt-sb1250.c
+++ b/arch/mips/kernel/cevt-sb1250.c
@@ -101,9 +101,9 @@ static irqreturn_t sibyte_counter_handler(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-static DEFINE_PER_CPU(struct clock_event_device, sibyte_hpt_clockevent);
-static DEFINE_PER_CPU(struct irqaction, sibyte_hpt_irqaction);
-static DEFINE_PER_CPU(char [18], sibyte_hpt_name);
+DEFINE_PER_CPU(struct clock_event_device, sibyte_hpt_clockevent);
+DEFINE_PER_CPU(struct irqaction, sibyte_hpt_irqaction);
+DEFINE_PER_CPU(char [18], sibyte_hpt_name);
 
 void __cpuinit sb1250_clockevent_init(void)
 {
diff --git a/arch/mips/kernel/topology.c b/arch/mips/kernel/topology.c
index 660e44e..38f9a0b 100644
--- a/arch/mips/kernel/topology.c
+++ b/arch/mips/kernel/topology.c
@@ -5,7 +5,7 @@
 #include <linux/nodemask.h>
 #include <linux/percpu.h>
 
-static DEFINE_PER_CPU(struct cpu, cpu_devices);
+DEFINE_PER_CPU(struct cpu, cpu_devices);
 
 static int __init topology_init(void)
 {
diff --git a/arch/mips/sgi-ip27/ip27-timer.c b/arch/mips/sgi-ip27/ip27-timer.c
index f10a7cd..af7379d 100644
--- a/arch/mips/sgi-ip27/ip27-timer.c
+++ b/arch/mips/sgi-ip27/ip27-timer.c
@@ -84,8 +84,8 @@ static void rt_set_mode(enum clock_event_mode mode,
 
 int rt_timer_irq;
 
-static DEFINE_PER_CPU(struct clock_event_device, hub_rt_clockevent);
-static DEFINE_PER_CPU(char [11], hub_rt_name);
+DEFINE_PER_CPU(struct clock_event_device, hub_rt_clockevent);
+DEFINE_PER_CPU(char [11], hub_rt_name);
 
 static irqreturn_t hub_rt_counter_handler(int irq, void *dev_id)
 {
diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
index 4ea4229..f32c727 100644
--- a/arch/parisc/kernel/irq.c
+++ b/arch/parisc/kernel/irq.c
@@ -50,7 +50,7 @@ static volatile unsigned long cpu_eiem = 0;
 ** between ->ack() and ->end() of the interrupt to prevent
 ** re-interruption of a processing interrupt.
 */
-static DEFINE_PER_CPU(unsigned long, local_ack_eiem) = ~0UL;
+DEFINE_PER_CPU(unsigned long, local_ack_eiem) = ~0UL;
 
 static void cpu_disable_irq(unsigned int irq)
 {
diff --git a/arch/parisc/kernel/topology.c b/arch/parisc/kernel/topology.c
index f515938..4f61986 100644
--- a/arch/parisc/kernel/topology.c
+++ b/arch/parisc/kernel/topology.c
@@ -22,7 +22,7 @@
 #include <linux/cpu.h>
 #include <linux/cache.h>
 
-static DEFINE_PER_CPU(struct cpu, cpu_devices);
+DEFINE_PER_CPU(struct cpu, cpu_devices);
 
 static int __init topology_init(void)
 {
diff --git a/arch/powerpc/kernel/cacheinfo.c b/arch/powerpc/kernel/cacheinfo.c
index bb37b1d..ec89cdc 100644
--- a/arch/powerpc/kernel/cacheinfo.c
+++ b/arch/powerpc/kernel/cacheinfo.c
@@ -113,7 +113,7 @@ struct cache {
 	struct cache *next_local;      /* next cache of >= level */
 };
 
-static DEFINE_PER_CPU(struct cache_dir *, cache_dir_pcpu);
+DEFINE_PER_CPU(struct cache_dir *, cache_dir_pcpu);
 
 /* traversal/modification of this list occurs only at cpu hotplug time;
  * access is serialized by cpu hotplug locking
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 7b44a33..a1fc420 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -274,7 +274,7 @@ void do_dabr(struct pt_regs *regs, unsigned long address,
 	force_sig_info(SIGTRAP, &info, current);
 }
 
-static DEFINE_PER_CPU(unsigned long, current_dabr);
+DEFINE_PER_CPU(unsigned long, current_dabr);
 
 int set_dabr(unsigned long dabr)
 {
diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index f41aec8..d5d4925 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -25,7 +25,7 @@
 #include <asm/lppaca.h>
 #endif
 
-static DEFINE_PER_CPU(struct cpu, cpu_devices);
+DEFINE_PER_CPU(struct cpu, cpu_devices);
 
 /*
  * SMT snooze delay stuff, 64-bit only for now
@@ -119,7 +119,7 @@ __setup("smt-snooze-delay=", setup_smt_snooze_delay);
  * it the first time we write to the PMCs.
  */
 
-static DEFINE_PER_CPU(char, pmcs_enabled);
+DEFINE_PER_CPU(char, pmcs_enabled);
 
 void ppc_enable_pmcs(void)
 {
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 48571ac..d0c72a7 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -122,7 +122,7 @@ struct decrementer_clock {
 	u64 next_tb;
 };
 
-static DEFINE_PER_CPU(struct decrementer_clock, decrementers);
+DEFINE_PER_CPU(struct decrementer_clock, decrementers);
 
 #ifdef CONFIG_PPC_ISERIES
 static unsigned long __initdata iSeries_recal_titan;
@@ -172,7 +172,7 @@ EXPORT_SYMBOL(ppc_proc_freq);
 unsigned long ppc_tb_freq;
 
 static u64 tb_last_jiffy __cacheline_aligned_in_smp;
-static DEFINE_PER_CPU(u64, last_jiffy);
+DEFINE_PER_CPU(u64, last_jiffy);
 
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING
 /*
@@ -298,7 +298,7 @@ struct cpu_purr_data {
  * each others' cpu_purr_data, disabling local interrupts is
  * sufficient to serialize accesses.
  */
-static DEFINE_PER_CPU(struct cpu_purr_data, cpu_purr_data);
+DEFINE_PER_CPU(struct cpu_purr_data, cpu_purr_data);
 
 static void snapshot_tb_and_purr(void *data)
 {
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index ae1d67c..5454968 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -30,7 +30,7 @@
 #include <asm/tlbflush.h>
 #include <asm/tlb.h>
 
-static DEFINE_PER_CPU(struct pte_freelist_batch *, pte_freelist_cur);
+DEFINE_PER_CPU(struct pte_freelist_batch *, pte_freelist_cur);
 static unsigned long pte_freelist_forced_free;
 
 struct pte_freelist_batch
diff --git a/arch/powerpc/mm/stab.c b/arch/powerpc/mm/stab.c
index 6e9b69c..0124998 100644
--- a/arch/powerpc/mm/stab.c
+++ b/arch/powerpc/mm/stab.c
@@ -30,8 +30,8 @@ struct stab_entry {
 };
 
 #define NR_STAB_CACHE_ENTRIES 8
-static DEFINE_PER_CPU(long, stab_cache_ptr);
-static DEFINE_PER_CPU(long [NR_STAB_CACHE_ENTRIES], stab_cache);
+DEFINE_PER_CPU(long, stab_cache_ptr);
+DEFINE_PER_CPU(long [NR_STAB_CACHE_ENTRIES], stab_cache);
 
 /*
  * Create a segment table entry for the given esid/vsid pair.
diff --git a/arch/powerpc/oprofile/op_model_cell.c b/arch/powerpc/oprofile/op_model_cell.c
index ae06c62..424e263 100644
--- a/arch/powerpc/oprofile/op_model_cell.c
+++ b/arch/powerpc/oprofile/op_model_cell.c
@@ -139,7 +139,7 @@ static struct {
 #define GET_COUNT_CYCLES(x) (x & 0x00000001)
 #define GET_INPUT_CONTROL(x) ((x & 0x00000004) >> 2)
 
-static DEFINE_PER_CPU(unsigned long[NR_PHYS_CTRS], pmc_values);
+DEFINE_PER_CPU(unsigned long[NR_PHYS_CTRS], pmc_values);
 static unsigned long spu_pm_cnt[MAX_NUMNODES * NUM_SPUS_PER_NODE];
 static struct pmc_cntrl_data pmc_cntrl[NUM_THREADS][NR_PHYS_CTRS];
 
diff --git a/arch/powerpc/platforms/cell/cpufreq_spudemand.c b/arch/powerpc/platforms/cell/cpufreq_spudemand.c
index 968c1c0..beee12e 100644
--- a/arch/powerpc/platforms/cell/cpufreq_spudemand.c
+++ b/arch/powerpc/platforms/cell/cpufreq_spudemand.c
@@ -37,7 +37,7 @@ struct spu_gov_info_struct {
 	struct delayed_work work;
 	unsigned int poll_int;		/* µs */
 };
-static DEFINE_PER_CPU(struct spu_gov_info_struct, spu_gov_info);
+DEFINE_PER_CPU(struct spu_gov_info_struct, spu_gov_info);
 
 static struct workqueue_struct *kspugov_wq;
 
diff --git a/arch/powerpc/platforms/cell/interrupt.c b/arch/powerpc/platforms/cell/interrupt.c
index 882e470..2b1d1bd 100644
--- a/arch/powerpc/platforms/cell/interrupt.c
+++ b/arch/powerpc/platforms/cell/interrupt.c
@@ -54,7 +54,7 @@ struct iic {
 	struct device_node *node;
 };
 
-static DEFINE_PER_CPU(struct iic, iic);
+DEFINE_PER_CPU(struct iic, iic);
 #define IIC_NODE_COUNT	2
 static struct irq_host *iic_host;
 
diff --git a/arch/powerpc/platforms/ps3/interrupt.c b/arch/powerpc/platforms/ps3/interrupt.c
index 8ec5ccf..fe5499e 100644
--- a/arch/powerpc/platforms/ps3/interrupt.c
+++ b/arch/powerpc/platforms/ps3/interrupt.c
@@ -90,7 +90,7 @@ struct ps3_private {
 	u64 thread_id;
 };
 
-static DEFINE_PER_CPU(struct ps3_private, ps3_private);
+DEFINE_PER_CPU(struct ps3_private, ps3_private);
 
 /**
  * ps3_chip_mask - Set an interrupt mask bit in ps3_bmp.
diff --git a/arch/powerpc/platforms/ps3/smp.c b/arch/powerpc/platforms/ps3/smp.c
index 6fcc499..29539b1 100644
--- a/arch/powerpc/platforms/ps3/smp.c
+++ b/arch/powerpc/platforms/ps3/smp.c
@@ -43,7 +43,7 @@ static irqreturn_t ipi_function_handler(int irq, void *msg)
   */
 
 #define MSG_COUNT 4
-static DEFINE_PER_CPU(unsigned int [MSG_COUNT], ps3_ipi_virqs);
+DEFINE_PER_CPU(unsigned int [MSG_COUNT], ps3_ipi_virqs);
 
 static const char *names[MSG_COUNT] = {
 	"ipi call",
diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c
index ab69925..9da02d5 100644
--- a/arch/powerpc/platforms/pseries/dtl.c
+++ b/arch/powerpc/platforms/pseries/dtl.c
@@ -54,7 +54,7 @@ struct dtl {
 	int			buf_entries;
 	u64			last_idx;
 };
-static DEFINE_PER_CPU(struct dtl, dtl);
+DEFINE_PER_CPU(struct dtl, dtl);
 
 /*
  * Dispatch trace log event mask:
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 3ee01b4..c250cb4 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -140,7 +140,7 @@ static int tce_build_pSeriesLP(struct iommu_table *tbl, long tcenum,
 	return ret;
 }
 
-static DEFINE_PER_CPU(u64 *, tce_page) = NULL;
+DEFINE_PER_CPU(u64 *, tce_page) = NULL;
 
 static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
 				     long npages, unsigned long uaddr,
diff --git a/arch/s390/appldata/appldata_base.c b/arch/s390/appldata/appldata_base.c
index 1dfc710..23a0bac 100644
--- a/arch/s390/appldata/appldata_base.c
+++ b/arch/s390/appldata/appldata_base.c
@@ -80,7 +80,7 @@ static struct ctl_table appldata_dir_table[] = {
 /*
  * Timer
  */
-static DEFINE_PER_CPU(struct vtimer_list, appldata_timer);
+DEFINE_PER_CPU(struct vtimer_list, appldata_timer);
 static atomic_t appldata_expire_count = ATOMIC_INIT(0);
 
 static DEFINE_SPINLOCK(appldata_timer_lock);
diff --git a/arch/s390/kernel/nmi.c b/arch/s390/kernel/nmi.c
index 28cf196..9ae4930 100644
--- a/arch/s390/kernel/nmi.c
+++ b/arch/s390/kernel/nmi.c
@@ -27,7 +27,7 @@ struct mcck_struct {
 	unsigned long long mcck_code;
 };
 
-static DEFINE_PER_CPU(struct mcck_struct, cpu_mcck);
+DEFINE_PER_CPU(struct mcck_struct, cpu_mcck);
 
 static NORET_TYPE void s390_handle_damage(char *msg)
 {
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index a985a3b..ef32579 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -66,7 +66,7 @@ int smp_cpu_polarization[NR_CPUS];
 static int smp_cpu_state[NR_CPUS];
 static int cpu_management;
 
-static DEFINE_PER_CPU(struct cpu, cpu_devices);
+DEFINE_PER_CPU(struct cpu, cpu_devices);
 
 static void smp_ext_bitcall(int, ec_bit_sig);
 
diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
index ef596d0..6a91235 100644
--- a/arch/s390/kernel/time.c
+++ b/arch/s390/kernel/time.c
@@ -65,7 +65,7 @@ u64 sched_clock_base_cc = -1;	/* Force to data section. */
 static ext_int_info_t ext_int_info_cc;
 static ext_int_info_t ext_int_etr_cc;
 
-static DEFINE_PER_CPU(struct clock_event_device, comparators);
+DEFINE_PER_CPU(struct clock_event_device, comparators);
 
 /*
  * Scheduler clock - returns current time in nanosec units.
@@ -340,7 +340,7 @@ static unsigned long long adjust_time(unsigned long long old,
 	return delta;
 }
 
-static DEFINE_PER_CPU(atomic_t, clock_sync_word);
+DEFINE_PER_CPU(atomic_t, clock_sync_word);
 static DEFINE_MUTEX(clock_sync_mutex);
 static unsigned long clock_sync_flags;
 
diff --git a/arch/s390/kernel/vtime.c b/arch/s390/kernel/vtime.c
index c87f59b..1e73fb7 100644
--- a/arch/s390/kernel/vtime.c
+++ b/arch/s390/kernel/vtime.c
@@ -27,7 +27,7 @@
 
 static ext_int_info_t ext_int_info_timer;
 
-static DEFINE_PER_CPU(struct vtimer_queue, virt_cpu_timer);
+DEFINE_PER_CPU(struct vtimer_queue, virt_cpu_timer);
 
 DEFINE_PER_CPU(struct s390_idle_data, s390_idle) = {
 	.lock = __SPIN_LOCK_UNLOCKED(s390_idle.lock)
diff --git a/arch/sh/kernel/timers/timer-broadcast.c b/arch/sh/kernel/timers/timer-broadcast.c
index 96e8eae..1339dcb 100644
--- a/arch/sh/kernel/timers/timer-broadcast.c
+++ b/arch/sh/kernel/timers/timer-broadcast.c
@@ -24,7 +24,7 @@
 #include <linux/clockchips.h>
 #include <linux/irq.h>
 
-static DEFINE_PER_CPU(struct clock_event_device, local_clockevent);
+DEFINE_PER_CPU(struct clock_event_device, local_clockevent);
 
 /*
  * Used on SMP for either the local timer or SMP_MSG_TIMER
diff --git a/arch/sh/kernel/topology.c b/arch/sh/kernel/topology.c
index 0838942..743dee7 100644
--- a/arch/sh/kernel/topology.c
+++ b/arch/sh/kernel/topology.c
@@ -14,7 +14,7 @@
 #include <linux/node.h>
 #include <linux/nodemask.h>
 
-static DEFINE_PER_CPU(struct cpu, cpu_devices);
+DEFINE_PER_CPU(struct cpu, cpu_devices);
 
 static int __init topology_init(void)
 {
diff --git a/arch/sparc/kernel/nmi.c b/arch/sparc/kernel/nmi.c
index 2c0cc72..24d4243 100644
--- a/arch/sparc/kernel/nmi.c
+++ b/arch/sparc/kernel/nmi.c
@@ -39,9 +39,9 @@ EXPORT_SYMBOL_GPL(nmi_usable);
 
 static unsigned int nmi_hz = HZ;
 
-static DEFINE_PER_CPU(unsigned int, last_irq_sum);
-static DEFINE_PER_CPU(local_t, alert_counter);
-static DEFINE_PER_CPU(int, nmi_touch);
+DEFINE_PER_CPU(unsigned int, last_irq_sum);
+DEFINE_PER_CPU(local_t, alert_counter);
+DEFINE_PER_CPU(int, nmi_touch);
 
 void touch_nmi_watchdog(void)
 {
diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c
index 5db5ebe..4f86704 100644
--- a/arch/sparc/kernel/pci_sun4v.c
+++ b/arch/sparc/kernel/pci_sun4v.c
@@ -41,7 +41,7 @@ struct iommu_batch {
 	unsigned long	npages;		/* Number of pages in list.	*/
 };
 
-static DEFINE_PER_CPU(struct iommu_batch, iommu_batch);
+DEFINE_PER_CPU(struct iommu_batch, iommu_batch);
 static int iommu_batch_initialized;
 
 /* Interrupts must be disabled.  */
diff --git a/arch/sparc/kernel/sysfs.c b/arch/sparc/kernel/sysfs.c
index d28f496..fa55243 100644
--- a/arch/sparc/kernel/sysfs.c
+++ b/arch/sparc/kernel/sysfs.c
@@ -12,7 +12,7 @@
 #include <asm/hypervisor.h>
 #include <asm/spitfire.h>
 
-static DEFINE_PER_CPU(struct hv_mmu_statistics, mmu_stats) __attribute__((aligned(64)));
+DEFINE_PER_CPU(struct hv_mmu_statistics, mmu_stats) __attribute__((aligned(64)));
 
 #define SHOW_MMUSTAT_ULONG(NAME) \
 static ssize_t show_##NAME(struct sys_device *dev, \
@@ -217,7 +217,7 @@ static struct sysdev_attribute cpu_core_attrs[] = {
 	_SYSDEV_ATTR(l2_cache_line_size,  0444, show_l2_cache_line_size, NULL),
 };
 
-static DEFINE_PER_CPU(struct cpu, cpu_devices);
+DEFINE_PER_CPU(struct cpu, cpu_devices);
 
 static void register_cpu_online(unsigned int cpu)
 {
diff --git a/arch/sparc/kernel/time_64.c b/arch/sparc/kernel/time_64.c
index 5c12e79..3416f4a 100644
--- a/arch/sparc/kernel/time_64.c
+++ b/arch/sparc/kernel/time_64.c
@@ -630,7 +630,7 @@ struct freq_table {
 	unsigned long clock_tick_ref;
 	unsigned int ref_freq;
 };
-static DEFINE_PER_CPU(struct freq_table, sparc64_freq_table) = { 0, 0 };
+DEFINE_PER_CPU(struct freq_table, sparc64_freq_table) = { 0, 0 };
 
 unsigned long sparc64_get_clock_tick(unsigned int cpu)
 {
@@ -716,7 +716,7 @@ static struct clock_event_device sparc64_clockevent = {
 	.shift		= 30,
 	.irq		= -1,
 };
-static DEFINE_PER_CPU(struct clock_event_device, sparc64_events);
+DEFINE_PER_CPU(struct clock_event_device, sparc64_events);
 
 void timer_interrupt(int irq, struct pt_regs *regs)
 {
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index f287092..d4e1c16 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -173,7 +173,7 @@ static struct clock_event_device lapic_clockevent = {
 	.rating		= 100,
 	.irq		= -1,
 };
-static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
+DEFINE_PER_CPU(struct clock_event_device, lapic_events);
 
 static unsigned long apic_phys;
 
diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
index ce4fbfa..81fff0e 100644
--- a/arch/x86/kernel/apic/nmi.c
+++ b/arch/x86/kernel/apic/nmi.c
@@ -56,7 +56,7 @@ EXPORT_SYMBOL(nmi_watchdog);
 static int panic_on_timeout;
 
 static unsigned int nmi_hz = HZ;
-static DEFINE_PER_CPU(short, wd_enabled);
+DEFINE_PER_CPU(short, wd_enabled);
 static int endflag __initdata;
 
 static inline unsigned int get_nmi_count(int cpu)
@@ -360,9 +360,9 @@ void stop_apic_nmi_watchdog(void *unused)
  * [when there will be more tty-related locks, break them up here too!]
  */
 
-static DEFINE_PER_CPU(unsigned, last_irq_sum);
-static DEFINE_PER_CPU(local_t, alert_counter);
-static DEFINE_PER_CPU(int, nmi_touch);
+DEFINE_PER_CPU(unsigned, last_irq_sum);
+DEFINE_PER_CPU(local_t, alert_counter);
+DEFINE_PER_CPU(int, nmi_touch);
 
 void touch_nmi_watchdog(void)
 {
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 77848d9..5770059 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -997,7 +997,7 @@ static const unsigned int exception_stack_sizes[N_EXCEPTION_STACKS] = {
 	  [DEBUG_STACK - 1]			= DEBUG_STKSZ
 };
 
-static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
+DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
 	[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ])
 	__aligned(PAGE_SIZE);
 
diff --git a/arch/x86/kernel/cpu/cpu_debug.c b/arch/x86/kernel/cpu/cpu_debug.c
index 66f7471..1a97ae7 100644
--- a/arch/x86/kernel/cpu/cpu_debug.c
+++ b/arch/x86/kernel/cpu/cpu_debug.c
@@ -30,11 +30,11 @@
 #include <asm/apic.h>
 #include <asm/desc.h>
 
-static DEFINE_PER_CPU(struct cpu_cpuX_base [CPU_REG_ALL_BIT], cpu_arr);
-static DEFINE_PER_CPU(struct cpu_private * [MAX_CPU_FILES], priv_arr);
-static DEFINE_PER_CPU(unsigned, cpu_modelflag);
-static DEFINE_PER_CPU(int, cpu_priv_count);
-static DEFINE_PER_CPU(unsigned, cpu_model);
+DEFINE_PER_CPU(struct cpu_cpuX_base [CPU_REG_ALL_BIT], cpu_arr);
+DEFINE_PER_CPU(struct cpu_private * [MAX_CPU_FILES], priv_arr);
+DEFINE_PER_CPU(unsigned, cpu_modelflag);
+DEFINE_PER_CPU(int, cpu_priv_count);
+DEFINE_PER_CPU(unsigned, cpu_model);
 
 static DEFINE_MUTEX(cpu_debug_lock);
 
diff --git a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
index 208ecf6..5cf61ea 100644
--- a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -69,13 +69,13 @@ struct acpi_cpufreq_data {
 	unsigned int cpu_feature;
 };
 
-static DEFINE_PER_CPU(struct acpi_cpufreq_data *, drv_data);
+DEFINE_PER_CPU(struct acpi_cpufreq_data *, drv_data);
 
 struct acpi_msr_data {
 	u64 saved_aperf, saved_mperf;
 };
 
-static DEFINE_PER_CPU(struct acpi_msr_data, msr_data);
+DEFINE_PER_CPU(struct acpi_msr_data, msr_data);
 
 DEFINE_TRACE(power_mark);
 
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
index f6b32d1..ab2a342 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -49,7 +49,7 @@
 /* serialize freq changes  */
 static DEFINE_MUTEX(fidvid_mutex);
 
-static DEFINE_PER_CPU(struct powernow_k8_data *, powernow_data);
+DEFINE_PER_CPU(struct powernow_k8_data *, powernow_data);
 
 static int cpu_family = CPU_OPTERON;
 
diff --git a/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c b/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
index c9f1fdc..6e57f01 100644
--- a/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
+++ b/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
@@ -71,8 +71,8 @@ static int centrino_verify_cpu_id(const struct cpuinfo_x86 *c,
 				  const struct cpu_id *x);
 
 /* Operating points for current CPU */
-static DEFINE_PER_CPU(struct cpu_model *, centrino_model);
-static DEFINE_PER_CPU(const struct cpu_id *, centrino_cpu);
+DEFINE_PER_CPU(struct cpu_model *, centrino_model);
+DEFINE_PER_CPU(const struct cpu_id *, centrino_cpu);
 
 static struct cpufreq_driver centrino_driver;
 
diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c
index 483eda9..3a4261d 100644
--- a/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ b/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -502,7 +502,7 @@ unsigned int __cpuinit init_intel_cacheinfo(struct cpuinfo_x86 *c)
 #ifdef CONFIG_SYSFS
 
 /* pointer to _cpuid4_info array (for each cache leaf) */
-static DEFINE_PER_CPU(struct _cpuid4_info *, cpuid4_info);
+DEFINE_PER_CPU(struct _cpuid4_info *, cpuid4_info);
 #define CPUID4_INFO_IDX(x, y)	(&((per_cpu(cpuid4_info, x))[y]))
 
 #ifdef CONFIG_SMP
@@ -620,7 +620,7 @@ static int __cpuinit detect_cache_attributes(unsigned int cpu)
 extern struct sysdev_class cpu_sysdev_class; /* from drivers/base/cpu.c */
 
 /* pointer to kobject for cpuX/cache */
-static DEFINE_PER_CPU(struct kobject *, cache_kobject);
+DEFINE_PER_CPU(struct kobject *, cache_kobject);
 
 struct _index_kobject {
 	struct kobject kobj;
@@ -629,7 +629,7 @@ struct _index_kobject {
 };
 
 /* pointer to array of kobjects for cpuX/cache/indexY */
-static DEFINE_PER_CPU(struct _index_kobject *, index_kobject);
+DEFINE_PER_CPU(struct _index_kobject *, index_kobject);
 #define INDEX_KOBJECT_PTR(x, y)		(&((per_cpu(index_kobject, x))[y]))
 
 #define show_one_plus(file_name, object, val)				\
diff --git a/arch/x86/kernel/cpu/mcheck/mce_64.c b/arch/x86/kernel/cpu/mcheck/mce_64.c
index 6fb0b35..5785175 100644
--- a/arch/x86/kernel/cpu/mcheck/mce_64.c
+++ b/arch/x86/kernel/cpu/mcheck/mce_64.c
@@ -453,9 +453,9 @@ void mce_log_therm_throt_event(__u64 status)
  */
 
 static int check_interval = 5 * 60; /* 5 minutes */
-static DEFINE_PER_CPU(int, next_interval); /* in jiffies */
+DEFINE_PER_CPU(int, next_interval); /* in jiffies */
 static void mcheck_timer(unsigned long);
-static DEFINE_PER_CPU(struct timer_list, mce_timer);
+DEFINE_PER_CPU(struct timer_list, mce_timer);
 
 static void mcheck_timer(unsigned long data)
 {
diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd_64.c b/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
index 9fd9bf6..a4d7a81 100644
--- a/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+++ b/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
@@ -69,7 +69,7 @@ struct threshold_bank {
 	struct threshold_block *blocks;
 	cpumask_var_t cpus;
 };
-static DEFINE_PER_CPU(struct threshold_bank * [NR_BANKS], threshold_banks);
+DEFINE_PER_CPU(struct threshold_bank * [NR_BANKS], threshold_banks);
 
 #ifdef CONFIG_SMP
 static unsigned char shared_bank[NR_BANKS] = {
@@ -77,7 +77,7 @@ static unsigned char shared_bank[NR_BANKS] = {
 };
 #endif
 
-static DEFINE_PER_CPU(unsigned char, bank_map);	/* see which banks are on */
+DEFINE_PER_CPU(unsigned char, bank_map);	/* see which banks are on */
 
 static void amd_threshold_interrupt(void);
 
diff --git a/arch/x86/kernel/cpu/mcheck/mce_intel_64.c b/arch/x86/kernel/cpu/mcheck/mce_intel_64.c
index cef3ee3..007b542 100644
--- a/arch/x86/kernel/cpu/mcheck/mce_intel_64.c
+++ b/arch/x86/kernel/cpu/mcheck/mce_intel_64.c
@@ -95,7 +95,7 @@ static void intel_init_thermal(struct cpuinfo_x86 *c)
  * Also supports reliable discovery of shared banks.
  */
 
-static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
+DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
 
 /*
  * cmci_discover_lock protects against parallel discovery attempts
diff --git a/arch/x86/kernel/cpu/mcheck/therm_throt.c b/arch/x86/kernel/cpu/mcheck/therm_throt.c
index d5ae224..42a85e1 100644
--- a/arch/x86/kernel/cpu/mcheck/therm_throt.c
+++ b/arch/x86/kernel/cpu/mcheck/therm_throt.c
@@ -25,8 +25,8 @@
 /* How long to wait between reporting thermal events */
 #define CHECK_INTERVAL              (300 * HZ)
 
-static DEFINE_PER_CPU(__u64, next_check) = INITIAL_JIFFIES;
-static DEFINE_PER_CPU(unsigned long, thermal_throttle_count);
+DEFINE_PER_CPU(__u64, next_check) = INITIAL_JIFFIES;
+DEFINE_PER_CPU(unsigned long, thermal_throttle_count);
 atomic_t therm_throt_en = ATOMIC_INIT(0);
 
 #ifdef CONFIG_SYSFS
diff --git a/arch/x86/kernel/cpu/perfctr-watchdog.c b/arch/x86/kernel/cpu/perfctr-watchdog.c
index f6c70a1..ea636fa 100644
--- a/arch/x86/kernel/cpu/perfctr-watchdog.c
+++ b/arch/x86/kernel/cpu/perfctr-watchdog.c
@@ -60,7 +60,7 @@ static const struct wd_ops *wd_ops;
 static DECLARE_BITMAP(perfctr_nmi_owner, NMI_MAX_COUNTER_BITS);
 static DECLARE_BITMAP(evntsel_nmi_owner, NMI_MAX_COUNTER_BITS);
 
-static DEFINE_PER_CPU(struct nmi_watchdog_ctlblk, nmi_watchdog_ctlblk);
+DEFINE_PER_CPU(struct nmi_watchdog_ctlblk, nmi_watchdog_ctlblk);
 
 /* converts an msr to an appropriate reservation bit */
 static inline unsigned int nmi_perfctr_msr_to_bit(unsigned int msr)
diff --git a/arch/x86/kernel/ds.c b/arch/x86/kernel/ds.c
index 87b67e3..a1d487c 100644
--- a/arch/x86/kernel/ds.c
+++ b/arch/x86/kernel/ds.c
@@ -46,7 +46,7 @@ struct ds_configuration {
 	 * by enum ds_feature */
 	unsigned long ctl[dsf_ctl_max];
 };
-static DEFINE_PER_CPU(struct ds_configuration, ds_cfg_array);
+DEFINE_PER_CPU(struct ds_configuration, ds_cfg_array);
 
 #define ds_cfg per_cpu(ds_cfg_array, smp_processor_id())
 
@@ -228,7 +228,7 @@ struct ds_context {
 	struct task_struct *task;
 };
 
-static DEFINE_PER_CPU(struct ds_context *, system_context_array);
+DEFINE_PER_CPU(struct ds_context *, system_context_array);
 
 #define system_context per_cpu(system_context_array, smp_processor_id())
 
diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
index 81408b9..c76488d 100644
--- a/arch/x86/kernel/hpet.c
+++ b/arch/x86/kernel/hpet.c
@@ -409,7 +409,7 @@ static int hpet_legacy_next_event(unsigned long delta,
  */
 #ifdef CONFIG_PCI_MSI
 
-static DEFINE_PER_CPU(struct hpet_dev *, cpu_hpet_dev);
+DEFINE_PER_CPU(struct hpet_dev *, cpu_hpet_dev);
 static struct hpet_dev	*hpet_devs;
 
 void hpet_msi_unmask(unsigned int irq)
diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
index 3b09634..85827e3 100644
--- a/arch/x86/kernel/irq_32.c
+++ b/arch/x86/kernel/irq_32.c
@@ -58,11 +58,11 @@ union irq_ctx {
 	u32                     stack[THREAD_SIZE/sizeof(u32)];
 } __attribute__((aligned(PAGE_SIZE)));
 
-static DEFINE_PER_CPU(union irq_ctx *, hardirq_ctx);
-static DEFINE_PER_CPU(union irq_ctx *, softirq_ctx);
+DEFINE_PER_CPU(union irq_ctx *, hardirq_ctx);
+DEFINE_PER_CPU(union irq_ctx *, softirq_ctx);
 
-static DEFINE_PER_CPU_PAGE_ALIGNED(union irq_ctx, hardirq_stack);
-static DEFINE_PER_CPU_PAGE_ALIGNED(union irq_ctx, softirq_stack);
+DEFINE_PER_CPU_PAGE_ALIGNED(union irq_ctx, hardirq_stack);
+DEFINE_PER_CPU_PAGE_ALIGNED(union irq_ctx, softirq_stack);
 
 static void call_on_stack(void *func, void *stack)
 {
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 33019dd..acffaf2 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -36,7 +36,7 @@ struct kvm_para_state {
 	enum paravirt_lazy_mode mode;
 };
 
-static DEFINE_PER_CPU(struct kvm_para_state, para_state);
+DEFINE_PER_CPU(struct kvm_para_state, para_state);
 
 static struct kvm_para_state *kvm_para_state(void)
 {
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 223af43..abe4ab8 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -36,7 +36,7 @@ static int parse_no_kvmclock(char *arg)
 early_param("no-kvmclock", parse_no_kvmclock);
 
 /* The hypervisor will put information about time periodically here */
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct pvclock_vcpu_time_info, hv_clock);
+DEFINE_PER_CPU_SHARED_ALIGNED(struct pvclock_vcpu_time_info, hv_clock);
 static struct pvclock_wall_clock wall_clock;
 
 /*
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 9faf43b..f889b91 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -244,7 +244,7 @@ int paravirt_disable_iospace(void)
 	return request_resource(&ioport_resource, &reserve_ioports);
 }
 
-static DEFINE_PER_CPU(enum paravirt_lazy_mode, paravirt_lazy_mode) = PARAVIRT_LAZY_NONE;
+DEFINE_PER_CPU(enum paravirt_lazy_mode, paravirt_lazy_mode) = PARAVIRT_LAZY_NONE;
 
 static inline void enter_lazy(enum paravirt_lazy_mode mode)
 {
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index b751a41..4abbc34 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -62,7 +62,7 @@ DEFINE_PER_CPU(struct task_struct *, current_task) = &init_task;
 EXPORT_PER_CPU_SYMBOL(current_task);
 
 DEFINE_PER_CPU(unsigned long, old_rsp);
-static DEFINE_PER_CPU(unsigned char, is_idle);
+DEFINE_PER_CPU(unsigned char, is_idle);
 
 unsigned long kernel_thread_flags = CLONE_VM | CLONE_UNTRACED;
 
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 58d24ef..19f8b7f 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -84,7 +84,7 @@ DEFINE_PER_CPU(int, cpu_state) = { 0 };
  * Needed only for CONFIG_HOTPLUG_CPU because __cpuinitdata is
  * removed after init for !CONFIG_HOTPLUG_CPU.
  */
-static DEFINE_PER_CPU(struct task_struct *, idle_thread_array);
+DEFINE_PER_CPU(struct task_struct *, idle_thread_array);
 #define get_idle_for_cpu(x)      (per_cpu(idle_thread_array, x))
 #define set_idle_for_cpu(x, p)   (per_cpu(idle_thread_array, x) = (p))
 #else
diff --git a/arch/x86/kernel/tlb_uv.c b/arch/x86/kernel/tlb_uv.c
index ed0c337..0252522 100644
--- a/arch/x86/kernel/tlb_uv.c
+++ b/arch/x86/kernel/tlb_uv.c
@@ -30,8 +30,8 @@ static int			uv_partition_base_pnode __read_mostly;
 
 static unsigned long		uv_mmask __read_mostly;
 
-static DEFINE_PER_CPU(struct ptc_stats, ptcstats);
-static DEFINE_PER_CPU(struct bau_control, bau_control);
+DEFINE_PER_CPU(struct ptc_stats, ptcstats);
+DEFINE_PER_CPU(struct bau_control, bau_control);
 
 /*
  * Determine the first node on a blade.
@@ -305,7 +305,7 @@ const struct cpumask *uv_flush_send_and_wait(int cpu, int this_pnode,
 	return NULL;
 }
 
-static DEFINE_PER_CPU(cpumask_var_t, uv_flush_tlb_mask);
+DEFINE_PER_CPU(cpumask_var_t, uv_flush_tlb_mask);
 
 /**
  * uv_flush_tlb_others - globally purge translation cache of a virtual
diff --git a/arch/x86/kernel/topology.c b/arch/x86/kernel/topology.c
index 7e45159..a5d2d41 100644
--- a/arch/x86/kernel/topology.c
+++ b/arch/x86/kernel/topology.c
@@ -31,7 +31,7 @@
 #include <linux/smp.h>
 #include <asm/cpu.h>
 
-static DEFINE_PER_CPU(struct x86_cpu, cpu_devices);
+DEFINE_PER_CPU(struct x86_cpu, cpu_devices);
 
 #ifdef CONFIG_HOTPLUG_CPU
 int __ref arch_register_cpu(int num)
diff --git a/arch/x86/kernel/uv_time.c b/arch/x86/kernel/uv_time.c
index 583f11d..73c9287 100644
--- a/arch/x86/kernel/uv_time.c
+++ b/arch/x86/kernel/uv_time.c
@@ -54,7 +54,7 @@ static struct clock_event_device clock_event_device_uv = {
 	.event_handler	= NULL,
 };
 
-static DEFINE_PER_CPU(struct clock_event_device, cpu_ced);
+DEFINE_PER_CPU(struct clock_event_device, cpu_ced);
 
 /* There is one of these allocated per node */
 struct uv_rtc_timer_head {
diff --git a/arch/x86/kernel/vmiclock_32.c b/arch/x86/kernel/vmiclock_32.c
index 2b3eb82..2cbc815 100644
--- a/arch/x86/kernel/vmiclock_32.c
+++ b/arch/x86/kernel/vmiclock_32.c
@@ -37,7 +37,7 @@
 #define VMI_ONESHOT  (VMI_ALARM_IS_ONESHOT  | VMI_CYCLES_REAL | vmi_get_alarm_wiring())
 #define VMI_PERIODIC (VMI_ALARM_IS_PERIODIC | VMI_CYCLES_REAL | vmi_get_alarm_wiring())
 
-static DEFINE_PER_CPU(struct clock_event_device, local_events);
+DEFINE_PER_CPU(struct clock_event_device, local_events);
 
 static inline u32 vmi_counter(u32 flags)
 {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1f8510c..f0b596c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -111,7 +111,7 @@ struct svm_cpu_data {
 	struct page *save_area;
 };
 
-static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
+DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
 static uint32_t svm_features;
 
 struct svm_init_data {
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index bb48133..d9ebe9b 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -107,9 +107,9 @@ static inline struct vcpu_vmx *to_vmx(struct kvm_vcpu *vcpu)
 static int init_rmode(struct kvm *kvm);
 static u64 construct_eptp(unsigned long root_hpa);
 
-static DEFINE_PER_CPU(struct vmcs *, vmxarea);
-static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
-static DEFINE_PER_CPU(struct list_head, vcpus_on_cpu);
+DEFINE_PER_CPU(struct vmcs *, vmxarea);
+DEFINE_PER_CPU(struct vmcs *, current_vmcs);
+DEFINE_PER_CPU(struct list_head, vcpus_on_cpu);
 
 static struct page *vmx_io_bitmap_a;
 static struct page *vmx_io_bitmap_b;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3944e91..00e8a6b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -622,7 +622,7 @@ static void kvm_set_time_scale(uint32_t tsc_khz, struct pvclock_vcpu_time_info *
 		 hv_clock->tsc_to_system_mul);
 }
 
-static DEFINE_PER_CPU(unsigned long, cpu_tsc_khz);
+DEFINE_PER_CPU(unsigned long, cpu_tsc_khz);
 
 static void kvm_write_guest_time(struct kvm_vcpu *v)
 {
diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
index 50dc802..d7ce69c 100644
--- a/arch/x86/mm/kmmio.c
+++ b/arch/x86/mm/kmmio.c
@@ -72,7 +72,7 @@ static struct list_head *kmmio_page_list(unsigned long page)
 }
 
 /* Accessed per-cpu */
-static DEFINE_PER_CPU(struct kmmio_context, kmmio_ctx);
+DEFINE_PER_CPU(struct kmmio_context, kmmio_ctx);
 
 /*
  * this is basically a dynamic stabbing problem:
diff --git a/arch/x86/mm/mmio-mod.c b/arch/x86/mm/mmio-mod.c
index c9342ed..4b143e8 100644
--- a/arch/x86/mm/mmio-mod.c
+++ b/arch/x86/mm/mmio-mod.c
@@ -53,8 +53,8 @@ struct remap_trace {
 };
 
 /* Accessed per-cpu. */
-static DEFINE_PER_CPU(struct trap_reason, pf_reason);
-static DEFINE_PER_CPU(struct mmiotrace_rw, cpu_trace);
+DEFINE_PER_CPU(struct trap_reason, pf_reason);
+DEFINE_PER_CPU(struct mmiotrace_rw, cpu_trace);
 
 static DEFINE_MUTEX(mmiotrace_mutex);
 static DEFINE_SPINLOCK(trace_lock);
diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c
index 202864a..ea53f07 100644
--- a/arch/x86/oprofile/nmi_int.c
+++ b/arch/x86/oprofile/nmi_int.c
@@ -25,8 +25,8 @@
 #include "op_x86_model.h"
 
 static struct op_x86_model_spec const *model;
-static DEFINE_PER_CPU(struct op_msrs, cpu_msrs);
-static DEFINE_PER_CPU(unsigned long, saved_lvtpc);
+DEFINE_PER_CPU(struct op_msrs, cpu_msrs);
+DEFINE_PER_CPU(unsigned long, saved_lvtpc);
 
 /* 0 == registered but off, 1 == registered and on */
 static int nmi_enabled = 0;
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index f09e8c3..72af3ed 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -443,7 +443,7 @@ static int cvt_gate_to_trap(int vector, const gate_desc *val,
 }
 
 /* Locations of each CPU's IDT */
-static DEFINE_PER_CPU(struct desc_ptr, idt_desc);
+DEFINE_PER_CPU(struct desc_ptr, idt_desc);
 
 /* Set an IDT entry.  If the entry is part of the current IDT, then
    also update Xen. */
diff --git a/arch/x86/xen/multicalls.c b/arch/x86/xen/multicalls.c
index 8bff7e7..3fba46a 100644
--- a/arch/x86/xen/multicalls.c
+++ b/arch/x86/xen/multicalls.c
@@ -49,7 +49,7 @@ struct mc_buffer {
 	unsigned mcidx, argidx, cbidx;
 };
 
-static DEFINE_PER_CPU(struct mc_buffer, mc_buffer);
+DEFINE_PER_CPU(struct mc_buffer, mc_buffer);
 DEFINE_PER_CPU(unsigned long, xen_mc_irq_flags);
 
 /* flush reasons 0- slots, 1- args, 2- callbacks */
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 429834e..e6ed68b 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -35,10 +35,10 @@
 
 cpumask_var_t xen_cpu_initialized_map;
 
-static DEFINE_PER_CPU(int, resched_irq);
-static DEFINE_PER_CPU(int, callfunc_irq);
-static DEFINE_PER_CPU(int, callfuncsingle_irq);
-static DEFINE_PER_CPU(int, debug_irq) = -1;
+DEFINE_PER_CPU(int, resched_irq);
+DEFINE_PER_CPU(int, callfunc_irq);
+DEFINE_PER_CPU(int, callfuncsingle_irq);
+DEFINE_PER_CPU(int, debug_irq) = -1;
 
 static irqreturn_t xen_call_function_interrupt(int irq, void *dev_id);
 static irqreturn_t xen_call_function_single_interrupt(int irq, void *dev_id);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 5601506..75be10d 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -147,8 +147,8 @@ static int xen_spin_trylock(struct raw_spinlock *lock)
 	return old == 0;
 }
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
+DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
 
 /*
  * Mark a cpu as interested in a lock.  Returns the CPU's previous
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 0a5aa44..eca663a 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -31,14 +31,14 @@
 #define NS_PER_TICK	(1000000000LL / HZ)
 
 /* runstate info updated by Xen */
-static DEFINE_PER_CPU(struct vcpu_runstate_info, runstate);
+DEFINE_PER_CPU(struct vcpu_runstate_info, runstate);
 
 /* snapshots of runstate info */
-static DEFINE_PER_CPU(struct vcpu_runstate_info, runstate_snapshot);
+DEFINE_PER_CPU(struct vcpu_runstate_info, runstate_snapshot);
 
 /* unused ns of stolen and blocked time */
-static DEFINE_PER_CPU(u64, residual_stolen);
-static DEFINE_PER_CPU(u64, residual_blocked);
+DEFINE_PER_CPU(u64, residual_stolen);
+DEFINE_PER_CPU(u64, residual_blocked);
 
 /* return a consistent snapshot of 64-bit time/counter value */
 static u64 get64(const u64 *p)
@@ -403,7 +403,7 @@ static const struct clock_event_device xen_vcpuop_clockevent = {
 
 static const struct clock_event_device *xen_clockevent =
 	&xen_timerop_clockevent;
-static DEFINE_PER_CPU(struct clock_event_device, xen_clock_events);
+DEFINE_PER_CPU(struct clock_event_device, xen_clock_events);
 
 static irqreturn_t xen_timer_interrupt(int irq, void *dev_id)
 {
diff --git a/block/as-iosched.c b/block/as-iosched.c
index 96ff4d1..59c6935 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -146,7 +146,7 @@ enum arq_state {
 #define RQ_STATE(rq)	((enum arq_state)(rq)->elevator_private2)
 #define RQ_SET_STATE(rq, state)	((rq)->elevator_private2 = (void *) state)
 
-static DEFINE_PER_CPU(unsigned long, as_ioc_count);
+DEFINE_PER_CPU(unsigned long, as_ioc_count);
 static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index ee9c216..412e064 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -11,7 +11,7 @@
 
 #include "blk.h"
 
-static DEFINE_PER_CPU(struct list_head, blk_cpu_done);
+DEFINE_PER_CPU(struct list_head, blk_cpu_done);
 
 /*
  * Softirq action handler - move entries to local list and loop over them
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index deea748..0792ce6 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -48,7 +48,7 @@ static int cfq_slice_idle = HZ / 125;
 static struct kmem_cache *cfq_pool;
 static struct kmem_cache *cfq_ioc_pool;
 
-static DEFINE_PER_CPU(unsigned long, cfq_ioc_count);
+DEFINE_PER_CPU(unsigned long, cfq_ioc_count);
 static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
diff --git a/crypto/sha512_generic.c b/crypto/sha512_generic.c
index 3bea38d..c208a1e 100644
--- a/crypto/sha512_generic.c
+++ b/crypto/sha512_generic.c
@@ -27,7 +27,7 @@ struct sha512_ctx {
 	u8 buf[128];
 };
 
-static DEFINE_PER_CPU(u64[80], msg_schedule);
+DEFINE_PER_CPU(u64[80], msg_schedule);
 
 static inline u64 Ch(u64 x, u64 y, u64 z)
 {
diff --git a/drivers/acpi/processor_core.c b/drivers/acpi/processor_core.c
index 45ad328..99d7820 100644
--- a/drivers/acpi/processor_core.c
+++ b/drivers/acpi/processor_core.c
@@ -687,7 +687,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
 	return 0;
 }
 
-static DEFINE_PER_CPU(void *, processor_device_array);
+DEFINE_PER_CPU(void *, processor_device_array);
 
 static int __cpuinit acpi_processor_start(struct acpi_device *device)
 {
diff --git a/drivers/acpi/processor_thermal.c b/drivers/acpi/processor_thermal.c
index 39838c6..0687f2e 100644
--- a/drivers/acpi/processor_thermal.c
+++ b/drivers/acpi/processor_thermal.c
@@ -96,7 +96,7 @@ static int acpi_processor_apply_limit(struct acpi_processor *pr)
 #define CPUFREQ_THERMAL_MIN_STEP 0
 #define CPUFREQ_THERMAL_MAX_STEP 3
 
-static DEFINE_PER_CPU(unsigned int, cpufreq_thermal_reduction_pctg);
+DEFINE_PER_CPU(unsigned int, cpufreq_thermal_reduction_pctg);
 static unsigned int acpi_thermal_cpufreq_is_init = 0;
 
 static int cpu_has_cpufreq(unsigned int cpu)
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index e62a4cc..ef46a16 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -18,7 +18,7 @@ struct sysdev_class cpu_sysdev_class = {
 };
 EXPORT_SYMBOL(cpu_sysdev_class);
 
-static DEFINE_PER_CPU(struct sys_device *, cpu_sys_devices);
+DEFINE_PER_CPU(struct sys_device *, cpu_sys_devices);
 
 #ifdef CONFIG_HOTPLUG_CPU
 static ssize_t show_online(struct sys_device *dev, struct sysdev_attribute *attr,
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 8c74448..612b9fc 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -277,7 +277,7 @@ static int random_write_wakeup_thresh = 128;
 
 static int trickle_thresh __read_mostly = INPUT_POOL_WORDS * 28;
 
-static DEFINE_PER_CPU(int, trickle_count);
+DEFINE_PER_CPU(int, trickle_count);
 
 /*
  * A pool of size .poolwords is stirred with a primitive polynomial
diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index c5afc98..88f43c9 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -38,7 +38,7 @@ static atomic_t proc_event_num_listeners = ATOMIC_INIT(0);
 static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
 
 /* proc_event_counts is used as the sequence number of the netlink message */
-static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
+DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
 
 static inline void get_seq(__u32 *ts, int *cpu)
 {
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 47d2ad0..b5088c4 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -38,10 +38,10 @@
  * also protects the cpufreq_cpu_data array.
  */
 static struct cpufreq_driver *cpufreq_driver;
-static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
+DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
 #ifdef CONFIG_HOTPLUG_CPU
 /* This one keeps track of the previously set governor of a removed CPU */
-static DEFINE_PER_CPU(struct cpufreq_governor *, cpufreq_cpu_governor);
+DEFINE_PER_CPU(struct cpufreq_governor *, cpufreq_cpu_governor);
 #endif
 static DEFINE_SPINLOCK(cpufreq_driver_lock);
 
@@ -62,8 +62,8 @@ static DEFINE_SPINLOCK(cpufreq_driver_lock);
  * - Governor routines that can be called in cpufreq hotplug path should not
  *   take this sem as top level hotplug notifier handler takes this.
  */
-static DEFINE_PER_CPU(int, policy_cpu);
-static DEFINE_PER_CPU(struct rw_semaphore, cpu_policy_rwsem);
+DEFINE_PER_CPU(int, policy_cpu);
+DEFINE_PER_CPU(struct rw_semaphore, cpu_policy_rwsem);
 
 #define lock_policy_rwsem(mode, cpu)					\
 int lock_policy_rwsem_##mode						\
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 8191d04..8d3b1f7 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -80,7 +80,7 @@ struct cpu_dbs_info_s {
 	int cpu;
 	unsigned int enable:1;
 };
-static DEFINE_PER_CPU(struct cpu_dbs_info_s, cs_cpu_dbs_info);
+DEFINE_PER_CPU(struct cpu_dbs_info_s, cs_cpu_dbs_info);
 
 static unsigned int dbs_enable;	/* number of CPUs using this policy */
 
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 04de476..56525c0 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -87,7 +87,7 @@ struct cpu_dbs_info_s {
 	unsigned int enable:1,
 		sample_type:1;
 };
-static DEFINE_PER_CPU(struct cpu_dbs_info_s, od_cpu_dbs_info);
+DEFINE_PER_CPU(struct cpu_dbs_info_s, od_cpu_dbs_info);
 
 static unsigned int dbs_enable;	/* number of CPUs using this policy */
 
diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
index 5a62d67..4cda242 100644
--- a/drivers/cpufreq/cpufreq_stats.c
+++ b/drivers/cpufreq/cpufreq_stats.c
@@ -43,7 +43,7 @@ struct cpufreq_stats {
 #endif
 };
 
-static DEFINE_PER_CPU(struct cpufreq_stats *, cpufreq_stats_table);
+DEFINE_PER_CPU(struct cpufreq_stats *, cpufreq_stats_table);
 
 struct cpufreq_stats_attribute {
 	struct attribute attr;
diff --git a/drivers/cpufreq/cpufreq_userspace.c b/drivers/cpufreq/cpufreq_userspace.c
index 66d2d1d..9170939 100644
--- a/drivers/cpufreq/cpufreq_userspace.c
+++ b/drivers/cpufreq/cpufreq_userspace.c
@@ -27,12 +27,11 @@
 /**
  * A few values needed by the userspace governor
  */
-static DEFINE_PER_CPU(unsigned int, cpu_max_freq);
-static DEFINE_PER_CPU(unsigned int, cpu_min_freq);
-static DEFINE_PER_CPU(unsigned int, cpu_cur_freq); /* current CPU freq */
-static DEFINE_PER_CPU(unsigned int, cpu_set_freq); /* CPU freq desired by
-							userspace */
-static DEFINE_PER_CPU(unsigned int, cpu_is_managed);
+DEFINE_PER_CPU(unsigned int, cpu_max_freq);
+DEFINE_PER_CPU(unsigned int, cpu_min_freq);
+DEFINE_PER_CPU(unsigned int, cpu_cur_freq); /* current CPU freq */
+DEFINE_PER_CPU(unsigned int, cpu_set_freq); /* CPU freq desired by userspace */
+DEFINE_PER_CPU(unsigned int, cpu_is_managed);
 
 static DEFINE_MUTEX(userspace_mutex);
 static int cpus_using_userspace_governor;
diff --git a/drivers/cpufreq/freq_table.c b/drivers/cpufreq/freq_table.c
index a9bd3a0..b3873b1 100644
--- a/drivers/cpufreq/freq_table.c
+++ b/drivers/cpufreq/freq_table.c
@@ -174,7 +174,7 @@ int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
 }
 EXPORT_SYMBOL_GPL(cpufreq_frequency_table_target);
 
-static DEFINE_PER_CPU(struct cpufreq_frequency_table *, show_table);
+DEFINE_PER_CPU(struct cpufreq_frequency_table *, show_table);
 /**
  * show_available_freqs - show available frequencies for the specified CPU
  */
diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c
index a4bec3f..2ec4150 100644
--- a/drivers/cpuidle/governors/ladder.c
+++ b/drivers/cpuidle/governors/ladder.c
@@ -42,7 +42,7 @@ struct ladder_device {
 	int last_state_idx;
 };
 
-static DEFINE_PER_CPU(struct ladder_device, ladder_devices);
+DEFINE_PER_CPU(struct ladder_device, ladder_devices);
 
 /**
  * ladder_do_selection - prepares private data for a state change
diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index f1df59f..7a44366 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -27,7 +27,7 @@ struct menu_device {
 	unsigned int	elapsed_us;
 };
 
-static DEFINE_PER_CPU(struct menu_device, menu_devices);
+DEFINE_PER_CPU(struct menu_device, menu_devices);
 
 /**
  * menu_select - selects the next idle state to enter
diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c
index 856b3cc..692ed44 100644
--- a/drivers/crypto/padlock-aes.c
+++ b/drivers/crypto/padlock-aes.c
@@ -51,7 +51,7 @@ struct aes_ctx {
 	u32 *D;
 };
 
-static DEFINE_PER_CPU(struct cword *, last_cword);
+DEFINE_PER_CPU(struct cword *, last_cword);
 
 /* Tells whether the ACE is capable of generating
    the extended key for a given key_len. */
diff --git a/drivers/lguest/page_tables.c b/drivers/lguest/page_tables.c
index a059cf9..cccc8ca 100644
--- a/drivers/lguest/page_tables.c
+++ b/drivers/lguest/page_tables.c
@@ -56,7 +56,7 @@
 /* We actually need a separate PTE page for each CPU.  Remember that after the
  * Switcher code itself comes two pages for each CPU, and we don't want this
  * CPU's guest to see the pages of any other CPU. */
-static DEFINE_PER_CPU(pte_t *, switcher_pte_pages);
+DEFINE_PER_CPU(pte_t *, switcher_pte_pages);
 #define switcher_pte_page(cpu) per_cpu(switcher_pte_pages, cpu)
 
 /*H:320 The page table code is curly enough to need helper functions to keep it
diff --git a/drivers/lguest/x86/core.c b/drivers/lguest/x86/core.c
index eaf722f..27662e6 100644
--- a/drivers/lguest/x86/core.c
+++ b/drivers/lguest/x86/core.c
@@ -67,7 +67,7 @@ static struct lguest_pages *lguest_pages(unsigned int cpu)
 		  (SWITCHER_ADDR + SHARED_SWITCHER_PAGES*PAGE_SIZE))[cpu]);
 }
 
-static DEFINE_PER_CPU(struct lg_cpu *, last_cpu);
+DEFINE_PER_CPU(struct lg_cpu *, last_cpu);
 
 /*S:010
  * We approach the Switcher.
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index dbfed85..6ac2e0e 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -47,10 +47,10 @@
 static DEFINE_SPINLOCK(irq_mapping_update_lock);
 
 /* IRQ <-> VIRQ mapping. */
-static DEFINE_PER_CPU(int [NR_VIRQS], virq_to_irq) = {[0 ... NR_VIRQS-1] = -1};
+DEFINE_PER_CPU(int [NR_VIRQS], virq_to_irq) = {[0 ... NR_VIRQS-1] = -1};
 
 /* IRQ <-> IPI mapping */
-static DEFINE_PER_CPU(int [XEN_NR_IPIS], ipi_to_irq) = {[0 ... XEN_NR_IPIS-1] = -1};
+DEFINE_PER_CPU(int [XEN_NR_IPIS], ipi_to_irq) = {[0 ... XEN_NR_IPIS-1] = -1};
 
 /* Interrupt types. */
 enum xen_irq_type {
@@ -596,7 +596,7 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-static DEFINE_PER_CPU(unsigned, xed_nesting_count);
+DEFINE_PER_CPU(unsigned, xed_nesting_count);
 
 /*
  * Search the CPUs pending events bitmasks.  For each one found, map
diff --git a/fs/buffer.c b/fs/buffer.c
index aed2977..707a934 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1242,7 +1242,7 @@ struct bh_lru {
 	struct buffer_head *bhs[BH_LRU_SIZE];
 };
 
-static DEFINE_PER_CPU(struct bh_lru, bh_lrus) = {{ NULL }};
+DEFINE_PER_CPU(struct bh_lru, bh_lrus) = {{ NULL }};
 
 #ifdef CONFIG_SMP
 #define bh_lru_lock()	local_irq_disable()
@@ -3224,7 +3224,7 @@ struct bh_accounting {
 	int ratelimit;		/* Limit cacheline bouncing */
 };
 
-static DEFINE_PER_CPU(struct bh_accounting, bh_accounting) = {0, 0};
+DEFINE_PER_CPU(struct bh_accounting, bh_accounting) = {0, 0};
 
 static void recalc_bh_state(void)
 {
diff --git a/fs/file.c b/fs/file.c
index f313314..62e29d9 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -36,7 +36,7 @@ int sysctl_nr_open_max = 1024 * 1024; /* raised later */
  * the work_struct in fdtable itself which avoids a 64 byte (i386) increase in
  * this per-task structure.
  */
-static DEFINE_PER_CPU(struct fdtable_defer, fdtable_defer_list);
+DEFINE_PER_CPU(struct fdtable_defer, fdtable_defer_list);
 
 static inline void * alloc_fdmem(unsigned int size)
 {
diff --git a/fs/namespace.c b/fs/namespace.c
index 134d494..5feb512 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -181,7 +181,7 @@ struct mnt_writer {
 	unsigned long count;
 	struct vfsmount *mnt;
 } ____cacheline_aligned_in_smp;
-static DEFINE_PER_CPU(struct mnt_writer, mnt_writers);
+DEFINE_PER_CPU(struct mnt_writer, mnt_writers);
 
 static int __init init_mnt_writers(void)
 {
diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
index 8f921d7..c51e51b 100644
--- a/include/linux/percpu-defs.h
+++ b/include/linux/percpu-defs.h
@@ -13,9 +13,12 @@
  * 'section' argument.  This may be used to affect the parameters governing the
  * variable's storage.
  *
- * NOTE!  The sections for the DECLARE and for the DEFINE must match, lest
- * linkage errors occur due the compiler generating the wrong code to access
- * that section.
+ * Some architectures (alpha and s390) need the 'weak' attribute on percpu
+ * variables to force external references, as space for percpu variables is
+ * allocated differently from regular variables.  To allow this, static percpu
+ * variables are not allowed - all percpu variables must be global.  This is
+ * enforced by implicitly doing DECLARE_PER_CPU_SECTION() from
+ * DEFINE_PER_CPU_SECTION().
  */
 #define DECLARE_PER_CPU_SECTION(type, name, section)			\
 	extern								\
@@ -23,6 +26,7 @@
 	PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name
 
 #define DEFINE_PER_CPU_SECTION(type, name, section)			\
+	DECLARE_PER_CPU_SECTION(type, name, section);			\
 	__attribute__((__section__(PER_CPU_BASE_SECTION section)))	\
 	PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name
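
(Not part of the patch -- an illustrative sketch, with made-up macro
and variable names, of why folding the DECLARE into the DEFINE rejects
static definitions:)

    #define EX_DECLARE(type, name)  extern __typeof__(type) ex__##name
    #define EX_DEFINE(type, name)   EX_DECLARE(type, name); \
                                    __typeof__(type) ex__##name

    EX_DEFINE(int, good_counter);   /* fine: a global definition */

    /* static EX_DEFINE(int, bad_counter); */
    /* the line above would expand to "static extern int ..." followed
       by the definition; "static" combined with "extern" is a compile
       error, so non-global percpu definitions are caught at build
       time. */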
 
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index c0fa54b..92f67ff 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -71,7 +71,7 @@ static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
 static bool kprobes_all_disarmed;
 
 static DEFINE_MUTEX(kprobe_mutex);	/* Protects kprobe_table */
-static DEFINE_PER_CPU(struct kprobe *, kprobe_instance) = NULL;
+DEFINE_PER_CPU(struct kprobe *, kprobe_instance) = NULL;
 static struct {
 	spinlock_t lock ____cacheline_aligned_in_smp;
 } kretprobe_table_locks[KPROBE_TABLE_SIZE];
diff --git a/kernel/lockdep.c b/kernel/lockdep.c
index accb40c..4e6ae0e 100644
--- a/kernel/lockdep.c
+++ b/kernel/lockdep.c
@@ -137,7 +137,7 @@ static inline struct lock_class *hlock_class(struct held_lock *hlock)
 }
 
 #ifdef CONFIG_LOCK_STAT
-static DEFINE_PER_CPU(struct lock_class_stats[MAX_LOCKDEP_KEYS], lock_stats);
+DEFINE_PER_CPU(struct lock_class_stats[MAX_LOCKDEP_KEYS], lock_stats);
 
 static int lock_point(unsigned long points[], unsigned long ip)
 {
diff --git a/kernel/printk.c b/kernel/printk.c
index 5052b54..ac90df5 100644
--- a/kernel/printk.c
+++ b/kernel/printk.c
@@ -959,7 +959,7 @@ int is_console_locked(void)
 	return console_locked;
 }
 
-static DEFINE_PER_CPU(int, printk_pending);
+DEFINE_PER_CPU(int, printk_pending);
 
 void printk_tick(void)
 {
diff --git a/kernel/profile.c b/kernel/profile.c
index 7724e04..e68fd20 100644
--- a/kernel/profile.c
+++ b/kernel/profile.c
@@ -47,8 +47,8 @@ EXPORT_SYMBOL_GPL(prof_on);
 
 static cpumask_var_t prof_cpu_mask;
 #ifdef CONFIG_SMP
-static DEFINE_PER_CPU(struct profile_hit *[2], cpu_profile_hits);
-static DEFINE_PER_CPU(int, cpu_profile_flip);
+DEFINE_PER_CPU(struct profile_hit *[2], cpu_profile_hits);
+DEFINE_PER_CPU(int, cpu_profile_flip);
 static DEFINE_MUTEX(profile_flip_mutex);
 #endif /* CONFIG_SMP */
 
diff --git a/kernel/rcuclassic.c b/kernel/rcuclassic.c
index 0f2b0b3..f913cb9 100644
--- a/kernel/rcuclassic.c
+++ b/kernel/rcuclassic.c
@@ -74,8 +74,8 @@ static struct rcu_ctrlblk rcu_bh_ctrlblk = {
 	.cpumask = CPU_BITS_NONE,
 };
 
-static DEFINE_PER_CPU(struct rcu_data, rcu_data);
-static DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);
+DEFINE_PER_CPU(struct rcu_data, rcu_data);
+DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);
 
 /*
  * Increment the quiescent state counter.
diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c
index a967c9f..7b80734 100644
--- a/kernel/rcupdate.c
+++ b/kernel/rcupdate.c
@@ -52,7 +52,7 @@ enum rcu_barrier {
 	RCU_BARRIER_SCHED,
 };
 
-static DEFINE_PER_CPU(struct rcu_head, rcu_barrier_head) = {NULL};
+DEFINE_PER_CPU(struct rcu_head, rcu_barrier_head) = {NULL};
 static atomic_t rcu_barrier_cpu_count;
 static DEFINE_MUTEX(rcu_barrier_mutex);
 static struct completion rcu_barrier_completion;
diff --git a/kernel/rcupreempt.c b/kernel/rcupreempt.c
index ce97a4d..4bb39c0 100644
--- a/kernel/rcupreempt.c
+++ b/kernel/rcupreempt.c
@@ -155,7 +155,7 @@ struct rcu_dyntick_sched {
 	int sched_dynticks_snap;
 };
 
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_dyntick_sched, rcu_dyntick_sched) = {
+DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_dyntick_sched, rcu_dyntick_sched) = {
 	.dynticks = 1,
 };
 
@@ -190,7 +190,7 @@ void rcu_exit_nohz(void)
 #endif /* CONFIG_NO_HZ */
 
 
-static DEFINE_PER_CPU(struct rcu_data, rcu_data);
+DEFINE_PER_CPU(struct rcu_data, rcu_data);
 
 static struct rcu_ctrlblk rcu_ctrlblk = {
 	.fliplock = __SPIN_LOCK_UNLOCKED(rcu_ctrlblk.fliplock),
@@ -222,7 +222,7 @@ enum rcu_flip_flag_values {
 	rcu_flipped		/* Flip just completed, need confirmation. */
 				/* Only corresponding CPU can update. */
 };
-static DEFINE_PER_CPU_SHARED_ALIGNED(enum rcu_flip_flag_values, rcu_flip_flag)
+DEFINE_PER_CPU_SHARED_ALIGNED(enum rcu_flip_flag_values, rcu_flip_flag)
 								= rcu_flip_seen;
 
 /*
@@ -237,7 +237,7 @@ enum rcu_mb_flag_values {
 	rcu_mb_needed		/* Flip just completed, need an mb(). */
 				/* Only corresponding CPU can update. */
 };
-static DEFINE_PER_CPU_SHARED_ALIGNED(enum rcu_mb_flag_values, rcu_mb_flag)
+DEFINE_PER_CPU_SHARED_ALIGNED(enum rcu_mb_flag_values, rcu_mb_flag)
 								= rcu_mb_done;
 
 /*
@@ -472,7 +472,7 @@ static void __rcu_advance_callbacks(struct rcu_data *rdp)
 }
 
 #ifdef CONFIG_NO_HZ
-static DEFINE_PER_CPU(int, rcu_update_flag);
+DEFINE_PER_CPU(int, rcu_update_flag);
 
 /**
  * rcu_irq_enter - Called from Hard irq handlers and NMI/SMI.
diff --git a/kernel/rcutorture.c b/kernel/rcutorture.c
index 9b4a975..1fcf4fe 100644
--- a/kernel/rcutorture.c
+++ b/kernel/rcutorture.c
@@ -114,9 +114,9 @@ static struct rcu_torture *rcu_torture_current = NULL;
 static long rcu_torture_current_version = 0;
 static struct rcu_torture rcu_tortures[10 * RCU_TORTURE_PIPE_LEN];
 static DEFINE_SPINLOCK(rcu_torture_lock);
-static DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], rcu_torture_count) =
+DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], rcu_torture_count) =
 	{ 0 };
-static DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], rcu_torture_batch) =
+DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], rcu_torture_batch) =
 	{ 0 };
 static atomic_t rcu_torture_wcount[RCU_TORTURE_PIPE_LEN + 1];
 static atomic_t n_rcu_torture_alloc;
diff --git a/kernel/sched.c b/kernel/sched.c
index 26efa47..73a2246 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -320,14 +320,14 @@ struct task_group root_task_group;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 /* Default task group's sched entity on each cpu */
-static DEFINE_PER_CPU(struct sched_entity, init_sched_entity);
+DEFINE_PER_CPU(struct sched_entity, init_sched_entity);
 /* Default task group's cfs_rq on each cpu */
-static DEFINE_PER_CPU(struct cfs_rq, init_cfs_rq) ____cacheline_aligned_in_smp;
+DEFINE_PER_CPU(struct cfs_rq, init_cfs_rq) ____cacheline_aligned_in_smp;
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 #ifdef CONFIG_RT_GROUP_SCHED
-static DEFINE_PER_CPU(struct sched_rt_entity, init_sched_rt_entity);
-static DEFINE_PER_CPU(struct rt_rq, init_rt_rq) ____cacheline_aligned_in_smp;
+DEFINE_PER_CPU(struct sched_rt_entity, init_sched_rt_entity);
+DEFINE_PER_CPU(struct rt_rq, init_rt_rq) ____cacheline_aligned_in_smp;
 #endif /* CONFIG_RT_GROUP_SCHED */
 #else /* !CONFIG_USER_SCHED */
 #define root_task_group init_task_group
@@ -661,7 +661,7 @@ struct rq {
 #endif
 };
 
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
 
 static inline void check_preempt_curr(struct rq *rq, struct task_struct *p, int sync)
 {
@@ -3839,7 +3839,7 @@ find_busiest_queue(struct sched_group *group, enum cpu_idle_type idle,
 #define MAX_PINNED_INTERVAL	512
 
 /* Working cpumask for load_balance and load_balance_newidle. */
-static DEFINE_PER_CPU(cpumask_var_t, load_balance_tmpmask);
+DEFINE_PER_CPU(cpumask_var_t, load_balance_tmpmask);
 
 /*
  * Check this_cpu to ensure it is balanced within domain. Attempt to move
@@ -7770,8 +7770,8 @@ struct static_sched_domain {
  * SMT sched-domains:
  */
 #ifdef CONFIG_SCHED_SMT
-static DEFINE_PER_CPU(struct static_sched_domain, cpu_domains);
-static DEFINE_PER_CPU(struct static_sched_group, sched_group_cpus);
+DEFINE_PER_CPU(struct static_sched_domain, cpu_domains);
+DEFINE_PER_CPU(struct static_sched_group, sched_group_cpus);
 
 static int
 cpu_to_cpu_group(int cpu, const struct cpumask *cpu_map,
@@ -7787,8 +7787,8 @@ cpu_to_cpu_group(int cpu, const struct cpumask *cpu_map,
  * multi-core sched-domains:
  */
 #ifdef CONFIG_SCHED_MC
-static DEFINE_PER_CPU(struct static_sched_domain, core_domains);
-static DEFINE_PER_CPU(struct static_sched_group, sched_group_core);
+DEFINE_PER_CPU(struct static_sched_domain, core_domains);
+DEFINE_PER_CPU(struct static_sched_group, sched_group_core);
 #endif /* CONFIG_SCHED_MC */
 
 #if defined(CONFIG_SCHED_MC) && defined(CONFIG_SCHED_SMT)
@@ -7815,8 +7815,8 @@ cpu_to_core_group(int cpu, const struct cpumask *cpu_map,
 }
 #endif
 
-static DEFINE_PER_CPU(struct static_sched_domain, phys_domains);
-static DEFINE_PER_CPU(struct static_sched_group, sched_group_phys);
+DEFINE_PER_CPU(struct static_sched_domain, phys_domains);
+DEFINE_PER_CPU(struct static_sched_group, sched_group_phys);
 
 static int
 cpu_to_phys_group(int cpu, const struct cpumask *cpu_map,
@@ -7843,11 +7843,11 @@ cpu_to_phys_group(int cpu, const struct cpumask *cpu_map,
  * groups, so roll our own. Now each node has its own list of groups which
  * gets dynamically allocated.
  */
-static DEFINE_PER_CPU(struct static_sched_domain, node_domains);
+DEFINE_PER_CPU(struct static_sched_domain, node_domains);
 static struct sched_group ***sched_group_nodes_bycpu;
 
-static DEFINE_PER_CPU(struct static_sched_domain, allnodes_domains);
-static DEFINE_PER_CPU(struct static_sched_group, sched_group_allnodes);
+DEFINE_PER_CPU(struct static_sched_domain, allnodes_domains);
+DEFINE_PER_CPU(struct static_sched_group, sched_group_allnodes);
 
 static int cpu_to_allnodes_group(int cpu, const struct cpumask *cpu_map,
 				 struct sched_group **sg,
diff --git a/kernel/sched_clock.c b/kernel/sched_clock.c
index e1d16c9..759f269 100644
--- a/kernel/sched_clock.c
+++ b/kernel/sched_clock.c
@@ -60,7 +60,7 @@ struct sched_clock_data {
 	u64			clock;
 };
 
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct sched_clock_data, sched_clock_data);
+DEFINE_PER_CPU_SHARED_ALIGNED(struct sched_clock_data, sched_clock_data);
 
 static inline struct sched_clock_data *this_scd(void)
 {
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index f2c66f8..339ab0b 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -1113,7 +1113,7 @@ static struct task_struct *pick_next_highest_task_rt(struct rq *rq, int cpu)
 	return next;
 }
 
-static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask);
+DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask);
 
 static inline int pick_optimal_cpu(int this_cpu,
 				   const struct cpumask *mask)
diff --git a/kernel/smp.c b/kernel/smp.c
index 858baac..4ef26e5 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -12,7 +12,7 @@
 #include <linux/smp.h>
 #include <linux/cpu.h>
 
-static DEFINE_PER_CPU(struct call_single_queue, call_single_queue);
+DEFINE_PER_CPU(struct call_single_queue, call_single_queue);
 
 static struct {
 	struct list_head	queue;
@@ -39,7 +39,7 @@ struct call_single_queue {
 	spinlock_t		lock;
 };
 
-static DEFINE_PER_CPU(struct call_function_data, cfd_data) = {
+DEFINE_PER_CPU(struct call_function_data, cfd_data) = {
 	.lock			= __SPIN_LOCK_UNLOCKED(cfd_data.lock),
 };
 
@@ -257,7 +257,7 @@ void generic_smp_call_function_single_interrupt(void)
 	}
 }
 
-static DEFINE_PER_CPU(struct call_single_data, csd_data);
+DEFINE_PER_CPU(struct call_single_data, csd_data);
 
 /*
  * smp_call_function_single - Run a function on a specific CPU
diff --git a/kernel/softirq.c b/kernel/softirq.c
index b525dd3..55f9452 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -52,7 +52,7 @@ EXPORT_SYMBOL(irq_stat);
 
 static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;
 
-static DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
+DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
 
 char *softirq_to_name[NR_SOFTIRQS] = {
 	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK",
@@ -352,8 +352,8 @@ struct tasklet_head
 	struct tasklet_struct **tail;
 };
 
-static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec);
-static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec);
+DEFINE_PER_CPU(struct tasklet_head, tasklet_vec);
+DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec);
 
 void __tasklet_schedule(struct tasklet_struct *t)
 {
diff --git a/kernel/softlockup.c b/kernel/softlockup.c
index 88796c3..997cf10 100644
--- a/kernel/softlockup.c
+++ b/kernel/softlockup.c
@@ -22,9 +22,9 @@
 
 static DEFINE_SPINLOCK(print_lock);
 
-static DEFINE_PER_CPU(unsigned long, touch_timestamp);
-static DEFINE_PER_CPU(unsigned long, print_timestamp);
-static DEFINE_PER_CPU(struct task_struct *, watchdog_task);
+DEFINE_PER_CPU(unsigned long, touch_timestamp);
+DEFINE_PER_CPU(unsigned long, print_timestamp);
+DEFINE_PER_CPU(struct task_struct *, watchdog_task);
 
 static int __read_mostly did_panic;
 int __read_mostly softlockup_thresh = 60;
diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 888adbc..e7fc2cb 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -35,7 +35,7 @@
  */
 #define TASKSTATS_CPUMASK_MAXLEN	(100+6*NR_CPUS)
 
-static DEFINE_PER_CPU(__u32, taskstats_seqnum);
+DEFINE_PER_CPU(__u32, taskstats_seqnum);
 static int family_registered;
 struct kmem_cache *taskstats_cache;
 
@@ -68,7 +68,7 @@ struct listener_list {
 	struct rw_semaphore sem;
 	struct list_head list;
 };
-static DEFINE_PER_CPU(struct listener_list, listener_array);
+DEFINE_PER_CPU(struct listener_list, listener_array);
 
 enum actions {
 	REGISTER,
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index d3f1ef4..9d4b954 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -29,7 +29,7 @@
 /*
  * Per cpu nohz control structure
  */
-static DEFINE_PER_CPU(struct tick_sched, tick_cpu_sched);
+DEFINE_PER_CPU(struct tick_sched, tick_cpu_sched);
 
 /*
  * The time, when the last jiffy update happened. Protected by xtime_lock.
diff --git a/kernel/time/timer_stats.c b/kernel/time/timer_stats.c
index c994530..2a6fdb7 100644
--- a/kernel/time/timer_stats.c
+++ b/kernel/time/timer_stats.c
@@ -86,7 +86,7 @@ static DEFINE_SPINLOCK(table_lock);
 /*
  * Per-CPU lookup locks for fast hash lookup:
  */
-static DEFINE_PER_CPU(spinlock_t, lookup_lock);
+DEFINE_PER_CPU(spinlock_t, lookup_lock);
 
 /*
  * Mutex to serialize state changes with show-stats activities:
diff --git a/kernel/timer.c b/kernel/timer.c
index cffffad..3dd1d5d 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -79,7 +79,7 @@ struct tvec_base {
 
 struct tvec_base boot_tvec_bases;
 EXPORT_SYMBOL(boot_tvec_bases);
-static DEFINE_PER_CPU(struct tvec_base *, tvec_bases) = &boot_tvec_bases;
+DEFINE_PER_CPU(struct tvec_base *, tvec_bases) = &boot_tvec_bases;
 
 /*
  * Note that all tvec_bases are 2 byte aligned and lower bit of
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 960cbf4..91d73b3 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1458,7 +1458,7 @@ rb_reserve_next_event(struct ring_buffer_per_cpu *cpu_buffer,
 	return event;
 }
 
-static DEFINE_PER_CPU(int, rb_need_resched);
+DEFINE_PER_CPU(int, rb_need_resched);
 
 /**
  * ring_buffer_lock_reserve - reserve a part of the buffer
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index cda81ec..10635e0 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -88,7 +88,7 @@ static int dummy_set_flag(u32 old_flags, u32 bit, int set)
  */
 static int tracing_disabled = 1;
 
-static DEFINE_PER_CPU(local_t, ftrace_cpu_disabled);
+DEFINE_PER_CPU(local_t, ftrace_cpu_disabled);
 
 static inline void ftrace_disable_cpu(void)
 {
@@ -169,7 +169,7 @@ unsigned long long ns2usecs(cycle_t nsec)
  */
 static struct trace_array	global_trace;
 
-static DEFINE_PER_CPU(struct trace_array_cpu, global_trace_cpu);
+DEFINE_PER_CPU(struct trace_array_cpu, global_trace_cpu);
 
 cycle_t ftrace_now(int cpu)
 {
@@ -197,7 +197,7 @@ cycle_t ftrace_now(int cpu)
  */
 static struct trace_array	max_tr;
 
-static DEFINE_PER_CPU(struct trace_array_cpu, max_data);
+DEFINE_PER_CPU(struct trace_array_cpu, max_data);
 
 /* tracer_enabled is used to toggle activation of a tracer */
 static int			tracer_enabled = 1;
diff --git a/kernel/trace/trace_hw_branches.c b/kernel/trace/trace_hw_branches.c
index 7bfdf4c..34cea28 100644
--- a/kernel/trace/trace_hw_branches.c
+++ b/kernel/trace/trace_hw_branches.c
@@ -32,8 +32,8 @@
  * - read the trace from a single cpu
  */
 static DEFINE_SPINLOCK(bts_tracer_lock);
-static DEFINE_PER_CPU(struct bts_tracer *, tracer);
-static DEFINE_PER_CPU(unsigned char[SIZEOF_BTS], buffer);
+DEFINE_PER_CPU(struct bts_tracer *, tracer);
+DEFINE_PER_CPU(unsigned char[SIZEOF_BTS], buffer);
 
 #define this_tracer per_cpu(tracer, smp_processor_id())
 #define this_buffer per_cpu(buffer, smp_processor_id())
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index b923d13..8f3661d 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -21,7 +21,7 @@
 static struct trace_array		*irqsoff_trace __read_mostly;
 static int				tracer_enabled __read_mostly;
 
-static DEFINE_PER_CPU(int, tracing_cpu);
+DEFINE_PER_CPU(int, tracing_cpu);
 
 static DEFINE_SPINLOCK(max_trace_lock);
 
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index c750f65..bd78e95 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -31,7 +31,7 @@ static raw_spinlock_t max_stack_lock =
 	(raw_spinlock_t)__RAW_SPIN_LOCK_UNLOCKED;
 
 static int stack_trace_disabled __read_mostly;
-static DEFINE_PER_CPU(int, trace_active);
+DEFINE_PER_CPU(int, trace_active);
 static DEFINE_MUTEX(stack_sysctl_mutex);
 
 int stack_tracer_enabled;
diff --git a/kernel/trace/trace_sysprof.c b/kernel/trace/trace_sysprof.c
index 91fd19c..e27b8dd 100644
--- a/kernel/trace/trace_sysprof.c
+++ b/kernel/trace/trace_sysprof.c
@@ -31,7 +31,7 @@ static DEFINE_MUTEX(sample_timer_lock);
 /*
  * Per CPU hrtimers that do the profiling:
  */
-static DEFINE_PER_CPU(struct hrtimer, stack_trace_hrtimer);
+DEFINE_PER_CPU(struct hrtimer, stack_trace_hrtimer);
 
 struct stack_frame {
 	const void __user	*next_fp;
diff --git a/kernel/trace/trace_workqueue.c b/kernel/trace/trace_workqueue.c
index 797201e..a70e65c 100644
--- a/kernel/trace/trace_workqueue.c
+++ b/kernel/trace/trace_workqueue.c
@@ -38,7 +38,7 @@ struct workqueue_global_stats {
 /* Don't need a global lock because allocated before the workqueues, and
  * never freed.
  */
-static DEFINE_PER_CPU(struct workqueue_global_stats, all_workqueue_stat);
+DEFINE_PER_CPU(struct workqueue_global_stats, all_workqueue_stat);
 #define workqueue_cpu_stat(cpu) (&per_cpu(all_workqueue_stat, cpu))
 
 /* Insertion of a work */
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 4bb42a0..a7f5217 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -81,7 +81,7 @@ struct radix_tree_preload {
 	int nr;
 	struct radix_tree_node *nodes[RADIX_TREE_MAX_PATH];
 };
-static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
+DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
 
 static inline gfp_t root_gfp_mask(struct radix_tree_root *root)
 {
diff --git a/lib/random32.c b/lib/random32.c
index 217d5c4..ce97c36 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -43,7 +43,7 @@ struct rnd_state {
 	u32 s1, s2, s3;
 };
 
-static DEFINE_PER_CPU(struct rnd_state, net_rand_state);
+DEFINE_PER_CPU(struct rnd_state, net_rand_state);
 
 static u32 __random32(struct rnd_state *state)
 {
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 0e0c9de..23d6015 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -606,7 +606,7 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite)
 	}
 }
 
-static DEFINE_PER_CPU(unsigned long, bdp_ratelimits) = 0;
+DEFINE_PER_CPU(unsigned long, bdp_ratelimits) = 0;
 
 /**
  * balance_dirty_pages_ratelimited_nr - balance dirty memory state
diff --git a/mm/slab.c b/mm/slab.c
index 9a90b00..42ccf81 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -763,7 +763,7 @@ int slab_is_available(void)
 	return g_cpucache_up == FULL;
 }
 
-static DEFINE_PER_CPU(struct delayed_work, reap_work);
+DEFINE_PER_CPU(struct delayed_work, reap_work);
 
 static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
 {
@@ -905,7 +905,7 @@ __setup("noaliencache", noaliencache_setup);
  * objects freed on different nodes from which they were allocated) and the
  * flushing of remote pcps by calling drain_node_pages.
  */
-static DEFINE_PER_CPU(unsigned long, reap_node);
+DEFINE_PER_CPU(unsigned long, reap_node);
 
 static void init_reap_node(int cpu)
 {
diff --git a/mm/slub.c b/mm/slub.c
index fbcf929..2c130e5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1987,10 +1987,8 @@ init_kmem_cache_node(struct kmem_cache_node *n, struct kmem_cache *s)
  */
 #define NR_KMEM_CACHE_CPU 100
 
-static DEFINE_PER_CPU(struct kmem_cache_cpu [NR_KMEM_CACHE_CPU],
-		      kmem_cache_cpu);
-
-static DEFINE_PER_CPU(struct kmem_cache_cpu *, kmem_cache_cpu_free);
+DEFINE_PER_CPU(struct kmem_cache_cpu [NR_KMEM_CACHE_CPU], kmem_cache_cpu);
+DEFINE_PER_CPU(struct kmem_cache_cpu *, kmem_cache_cpu_free);
 static DECLARE_BITMAP(kmem_cach_cpu_free_init_once, CONFIG_NR_CPUS);
 
 static struct kmem_cache_cpu *alloc_kmem_cache_cpu(struct kmem_cache *s,
diff --git a/mm/swap.c b/mm/swap.c
index cb29ae5..c92109a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -36,8 +36,8 @@
 /* How many pages do we try to swap or page in/out together? */
 int page_cluster;
 
-static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
-static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
+DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
+DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
 
 /*
  * This path almost never happens for VM activity - pages are normally
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 083716e..9e0ad21 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -678,7 +678,7 @@ struct vmap_block {
 };
 
 /* Queue of free and dirty vmap blocks, for allocation and flushing purposes */
-static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
+DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
 
 /*
  * Radix tree of vmap blocks, indexed by address, to quickly find a vmap block
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 74d66db..809c5c8 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -873,7 +873,7 @@ static const struct file_operations proc_vmstat_file_operations = {
 #endif /* CONFIG_PROC_FS */
 
 #ifdef CONFIG_SMP
-static DEFINE_PER_CPU(struct delayed_work, vmstat_work);
+DEFINE_PER_CPU(struct delayed_work, vmstat_work);
 int sysctl_stat_interval __read_mostly = HZ;
 
 static void vmstat_update(struct work_struct *w)
diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
index 9fd0dc3..c2ae1ec 100644
--- a/net/core/drop_monitor.c
+++ b/net/core/drop_monitor.c
@@ -55,7 +55,7 @@ static struct genl_family net_drop_monitor_family = {
 	.maxattr        = NET_DM_CMD_MAX,
 };
 
-static DEFINE_PER_CPU(struct per_cpu_dm_data, dm_cpu_data);
+DEFINE_PER_CPU(struct per_cpu_dm_data, dm_cpu_data);
 
 static int dm_hit_limit = 64;
 static int dm_delay = 1;
diff --git a/net/core/flow.c b/net/core/flow.c
index 9601587..4254804 100644
--- a/net/core/flow.c
+++ b/net/core/flow.c
@@ -39,7 +39,7 @@ atomic_t flow_cache_genid = ATOMIC_INIT(0);
 
 static u32 flow_hash_shift;
 #define flow_hash_size	(1 << flow_hash_shift)
-static DEFINE_PER_CPU(struct flow_cache_entry **, flow_tables) = { NULL };
+DEFINE_PER_CPU(struct flow_cache_entry **, flow_tables) = { NULL };
 
 #define flow_table(cpu) (per_cpu(flow_tables, cpu))
 
@@ -52,7 +52,7 @@ struct flow_percpu_info {
 	u32 hash_rnd;
 	int count;
 };
-static DEFINE_PER_CPU(struct flow_percpu_info, flow_hash_info) = { 0 };
+DEFINE_PER_CPU(struct flow_percpu_info, flow_hash_info) = { 0 };
 
 #define flow_hash_rnd_recalc(cpu) \
 	(per_cpu(flow_hash_info, cpu).hash_rnd_recalc)
@@ -69,7 +69,7 @@ struct flow_flush_info {
 	atomic_t cpuleft;
 	struct completion completion;
 };
-static DEFINE_PER_CPU(struct tasklet_struct, flow_flush_tasklets) = { NULL };
+DEFINE_PER_CPU(struct tasklet_struct, flow_flush_tasklets) = { NULL };
 
 #define flow_flush_tasklet(cpu) (&per_cpu(flow_flush_tasklets, cpu))
 
diff --git a/net/core/sock.c b/net/core/sock.c
index 7dbf3ff..4ea4d1f 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2049,7 +2049,7 @@ static __init int net_inuse_init(void)
 
 core_initcall(net_inuse_init);
 #else
-static DEFINE_PER_CPU(struct prot_inuse, prot_inuse);
+DEFINE_PER_CPU(struct prot_inuse, prot_inuse);
 
 void sock_prot_inuse_add(struct net *net, struct proto *prot, int val)
 {
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 28205e5..9bd897e 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -253,7 +253,7 @@ static struct rt_hash_bucket 	*rt_hash_table __read_mostly;
 static unsigned			rt_hash_mask __read_mostly;
 static unsigned int		rt_hash_log  __read_mostly;
 
-static DEFINE_PER_CPU(struct rt_cache_stat, rt_cache_stat);
+DEFINE_PER_CPU(struct rt_cache_stat, rt_cache_stat);
 #define RT_CACHE_STAT_INC(field) \
 	(__raw_get_cpu_var(rt_cache_stat).field++)
 
diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index a3c045c..c72b127 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -37,8 +37,7 @@ __initcall(init_syncookies);
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS],
-		      ipv4_cookie_scratch);
+DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], ipv4_cookie_scratch);
 
 static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
 		       u32 count, int c)
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index e2bcff0..6043c63 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -74,8 +74,7 @@ static inline struct sock *get_cookie_sock(struct sock *sk, struct sk_buff *skb,
 	return child;
 }
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS],
-		      ipv6_cookie_scratch);
+DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], ipv6_cookie_scratch);
 
 static u32 cookie_hash(struct in6_addr *saddr, struct in6_addr *daddr,
 		       __be16 sport, __be16 dport, u32 count, int c)
diff --git a/net/socket.c b/net/socket.c
index 791d71a..4249ae8 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -153,7 +153,7 @@ static const struct net_proto_family *net_families[NPROTO] __read_mostly;
  *	Statistics counters of the socket lists
  */
 
-static DEFINE_PER_CPU(int, sockets_in_use) = 0;
+DEFINE_PER_CPU(int, sockets_in_use) = 0;
 
 /*
  * Support routines.
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 5/7] alpha: kill unnecessary __used attribute in PER_CPU_ATTRIBUTES
  2009-06-01  8:58 [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Tejun Heo
                   ` (3 preceding siblings ...)
  2009-06-01  8:58 ` [PATCH 4/7] percpu: enforce global definition Tejun Heo
@ 2009-06-01  8:58 ` Tejun Heo
  2009-06-01  8:58 ` [PATCH 6/7] alpha: switch to dynamic percpu allocator Tejun Heo
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-06-01  8:58 UTC (permalink / raw)
  To: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty
  Cc: Tejun Heo

With the previous percpu variable definition change, all percpu
variables are global and there's no need to specify __used, which only
takes effect on recent compilers anyway.  Kill it.
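
(For reference, a minimal sketch of what __used buys -- assumed GCC
attribute semantics, example names made up: it only matters for a
static that nothing references, and such percpu variables can no
longer exist after the definition change:)

    #define EX_USED __attribute__((__used__))

    static int dropped;         /* unreferenced static: may be discarded */
    static int kept EX_USED;    /* __used forces the compiler to emit it */
    int always_emitted;         /* a global is emitted regardless */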

[ Impact: remove unnecessary percpu attribute ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
---
 arch/alpha/include/asm/percpu.h |    5 -----
 1 files changed, 0 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/include/asm/percpu.h b/arch/alpha/include/asm/percpu.h
index 06c5c7a..7f0a9c4 100644
--- a/arch/alpha/include/asm/percpu.h
+++ b/arch/alpha/include/asm/percpu.h
@@ -30,7 +30,6 @@ extern unsigned long __per_cpu_offset[NR_CPUS];
 
 #ifndef MODULE
 #define SHIFT_PERCPU_PTR(var, offset) RELOC_HIDE(&per_cpu_var(var), (offset))
-#define PER_CPU_ATTRIBUTES
 #else
 /*
  * To calculate addresses of locally defined variables, GCC uses 32-bit
@@ -49,8 +48,6 @@ extern unsigned long __per_cpu_offset[NR_CPUS];
 		: "=&r"(__ptr), "=&r"(tmp_gp));		\
 	(typeof(&per_cpu_var(var)))(__ptr + (offset)); })
 
-#define PER_CPU_ATTRIBUTES	__used
-
 #endif /* MODULE */
 
 /*
@@ -71,8 +68,6 @@ extern unsigned long __per_cpu_offset[NR_CPUS];
 #define __get_cpu_var(var)		per_cpu_var(var)
 #define __raw_get_cpu_var(var)		per_cpu_var(var)
 
-#define PER_CPU_ATTRIBUTES
-
 #endif /* SMP */
 
 #ifdef CONFIG_SMP
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 6/7] alpha: switch to dynamic percpu allocator
  2009-06-01  8:58 [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Tejun Heo
                   ` (4 preceding siblings ...)
  2009-06-01  8:58 ` [PATCH 5/7] alpha: kill unnecessary __used attribute in PER_CPU_ATTRIBUTES Tejun Heo
@ 2009-06-01  8:58 ` Tejun Heo
  2009-06-01  8:58 ` [PATCH 7/7] s390: " Tejun Heo
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-06-01  8:58 UTC (permalink / raw)
  To: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty
  Cc: Tejun Heo

Alpha implements a custom SHIFT_PERCPU_PTR for modules because the
percpu area can be located far away from the 4G area where the module
text is located.  The custom SHIFT_PERCPU_PTR forces GOT usage via the
ldq instruction with a literal relocation; however, the relocation
can't be used with dynamically allocated percpu variables.
Fortunately, a similar result can be achieved by using the weak
attribute on percpu variable declarations and definitions, which the
previous changes allow.

This patch makes alpha use the weak attribute instead and switches it
to the dynamic percpu allocator.
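
As an illustration (hypothetical variable; a sketch of the net effect,
not the literal preprocessor output), a percpu definition in a module
now carries the weak attribute and with it external linkage:

	/* module code, PER_CPU_ATTRIBUTES == __attribute__((weak)) */
	DEFINE_PER_CPU(unsigned long, my_counter);	/* hypothetical */

	/* ...roughly equivalent to */
	__attribute__((weak)) unsigned long per_cpu__my_counter;

Since a weak symbol may be preempted at link time, gcc cannot assume a
32-bit GP displacement and addresses it through the GOT, which is what
the old asm hack forced by hand.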

asm/tlbflush.h was getting linux/sched.h via asm/percpu.h, which no
longer pulls it in.  Include linux/sched.h directly in asm/tlbflush.h.

Compile tested.  Generation of the literal relocation verified.

This patch is based on Ivan Kokshaysky's alpha percpu patch.

[ Impact: use dynamic percpu allocator ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
---
 arch/alpha/Kconfig                |    3 -
 arch/alpha/include/asm/percpu.h   |   96 ++++---------------------------------
 arch/alpha/include/asm/tlbflush.h |    1 +
 3 files changed, 10 insertions(+), 90 deletions(-)

diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index 05d8640..9fb8aae 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -70,9 +70,6 @@ config AUTO_IRQ_AFFINITY
 	depends on SMP
 	default y
 
-config HAVE_LEGACY_PER_CPU_AREA
-	def_bool y
-
 source "init/Kconfig"
 source "kernel/Kconfig.freezer"
 
diff --git a/arch/alpha/include/asm/percpu.h b/arch/alpha/include/asm/percpu.h
index 7f0a9c4..2b0c79c 100644
--- a/arch/alpha/include/asm/percpu.h
+++ b/arch/alpha/include/asm/percpu.h
@@ -1,97 +1,19 @@
 #ifndef __ALPHA_PERCPU_H
 #define __ALPHA_PERCPU_H
 
-#include <linux/compiler.h>
-#include <linux/threads.h>
-#include <linux/percpu-defs.h>
-
 /*
- * Determine the real variable name from the name visible in the
- * kernel sources.
- */
-#define per_cpu_var(var) per_cpu__##var
-
-#ifdef CONFIG_SMP
-
-/*
- * per_cpu_offset() is the offset that has to be added to a
- * percpu variable to get to the instance for a certain processor.
- */
-extern unsigned long __per_cpu_offset[NR_CPUS];
-
-#define per_cpu_offset(x) (__per_cpu_offset[x])
-
-#define __my_cpu_offset per_cpu_offset(raw_smp_processor_id())
-#ifdef CONFIG_DEBUG_PREEMPT
-#define my_cpu_offset per_cpu_offset(smp_processor_id())
-#else
-#define my_cpu_offset __my_cpu_offset
-#endif
-
-#ifndef MODULE
-#define SHIFT_PERCPU_PTR(var, offset) RELOC_HIDE(&per_cpu_var(var), (offset))
-#else
-/*
- * To calculate addresses of locally defined variables, GCC uses 32-bit
- * displacement from the GP. Which doesn't work for per cpu variables in
- * modules, as an offset to the kernel per cpu area is way above 4G.
+ * To calculate addresses of locally defined variables, GCC uses
+ * 32-bit displacement from the GP. Which doesn't work for per cpu
+ * variables in modules, as an offset to the kernel per cpu area is
+ * way above 4G.
  *
- * This forces allocation of a GOT entry for per cpu variable using
- * ldq instruction with a 'literal' relocation.
- */
-#define SHIFT_PERCPU_PTR(var, offset) ({		\
-	extern int simple_identifier_##var(void);	\
-	unsigned long __ptr, tmp_gp;			\
-	asm (  "br	%1, 1f		  	      \n\
-	1:	ldgp	%1, 0(%1)	    	      \n\
-		ldq %0, per_cpu__" #var"(%1)\t!literal"		\
-		: "=&r"(__ptr), "=&r"(tmp_gp));		\
-	(typeof(&per_cpu_var(var)))(__ptr + (offset)); })
-
-#endif /* MODULE */
-
-/*
- * A percpu variable may point to a discarded regions. The following are
- * established ways to produce a usable pointer from the percpu variable
- * offset.
+ * Use "weak" attribute to force the compiler to generate external
+ * reference.
  */
-#define per_cpu(var, cpu) \
-	(*SHIFT_PERCPU_PTR(var, per_cpu_offset(cpu)))
-#define __get_cpu_var(var) \
-	(*SHIFT_PERCPU_PTR(var, my_cpu_offset))
-#define __raw_get_cpu_var(var) \
-	(*SHIFT_PERCPU_PTR(var, __my_cpu_offset))
-
-#else /* ! SMP */
-
-#define per_cpu(var, cpu)		(*((void)(cpu), &per_cpu_var(var)))
-#define __get_cpu_var(var)		per_cpu_var(var)
-#define __raw_get_cpu_var(var)		per_cpu_var(var)
-
-#endif /* SMP */
-
-#ifdef CONFIG_SMP
-#define PER_CPU_BASE_SECTION ".data.percpu"
-#else
-#define PER_CPU_BASE_SECTION ".data"
-#endif
-
-#ifdef CONFIG_SMP
-
-#ifdef MODULE
-#define PER_CPU_SHARED_ALIGNED_SECTION ""
-#else
-#define PER_CPU_SHARED_ALIGNED_SECTION ".shared_aligned"
-#endif
-#define PER_CPU_FIRST_SECTION ".first"
-
-#else
-
-#define PER_CPU_SHARED_ALIGNED_SECTION ""
-#define PER_CPU_FIRST_SECTION ""
-
+#if defined(MODULE) && defined(CONFIG_SMP)
+#define PER_CPU_ATTRIBUTES	__attribute__((weak))
 #endif
 
-#define PER_CPU_ATTRIBUTES
+#include <asm-generic/percpu.h>
 
 #endif /* __ALPHA_PERCPU_H */
diff --git a/arch/alpha/include/asm/tlbflush.h b/arch/alpha/include/asm/tlbflush.h
index 9d87aaa..e89e0c2 100644
--- a/arch/alpha/include/asm/tlbflush.h
+++ b/arch/alpha/include/asm/tlbflush.h
@@ -2,6 +2,7 @@
 #define _ALPHA_TLBFLUSH_H
 
 #include <linux/mm.h>
+#include <linux/sched.h>
 #include <asm/compiler.h>
 #include <asm/pgalloc.h>
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 7/7] s390: switch to dynamic percpu allocator
  2009-06-01  8:58 [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Tejun Heo
                   ` (5 preceding siblings ...)
  2009-06-01  8:58 ` [PATCH 6/7] alpha: switch to dynamic percpu allocator Tejun Heo
@ 2009-06-01  8:58 ` Tejun Heo
  2009-06-01 16:10 ` [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Kyle McMartin
  2009-06-02  6:35 ` Benjamin Herrenschmidt
  8 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-06-01  8:58 UTC (permalink / raw)
  To: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty
  Cc: Tejun Heo

64bit s390 shares the same problem with percpu symbol addressing from
modules.  It needs assembly magic to force a GOTENT reference when
building modules, as the percpu address will be outside the usual 4G
range from the module text.  Similarly to alpha, this can be solved by
using the weak attribute.

This patch makes s390 use the weak attribute instead and switches it
to the dynamic percpu allocator.  Please note that the weak attribute
is not added if !SMP, as percpu variables behave exactly the same as
normal variables on UP.
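
Sketched on a hypothetical module variable:

	/* 64-bit SMP module, with the PER_CPU_ATTRIBUTES from below */
	DEFINE_PER_CPU(long, my_stat);		/* hypothetical */
	/* per_cpu__my_stat is weak, so gcc reaches it through a GOT
	   slot (larl ...@GOTENT) rather than assuming it lies within
	   range of the module text. */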

Compile tested.  Generation of GOTENT reference verified.

This patch is based on Ivan Kokshaysky's alpha percpu patch.

[ Impact: use dynamic percpu allocator ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
---
 arch/s390/Kconfig              |    3 ---
 arch/s390/include/asm/percpu.h |   32 ++++++++------------------------
 2 files changed, 8 insertions(+), 27 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 686909a..2eca5fe 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -75,9 +75,6 @@ config VIRT_CPU_ACCOUNTING
 config ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	def_bool y
 
-config HAVE_LEGACY_PER_CPU_AREA
-	def_bool y
-
 mainmenu "Linux Kernel Configuration"
 
 config S390
diff --git a/arch/s390/include/asm/percpu.h b/arch/s390/include/asm/percpu.h
index 408d60b..36672ff 100644
--- a/arch/s390/include/asm/percpu.h
+++ b/arch/s390/include/asm/percpu.h
@@ -1,37 +1,21 @@
 #ifndef __ARCH_S390_PERCPU__
 #define __ARCH_S390_PERCPU__
 
-#include <linux/compiler.h>
-#include <asm/lowcore.h>
-
 /*
  * s390 uses its own implementation for per cpu data, the offset of
  * the cpu local data area is cached in the cpu's lowcore memory.
- * For 64 bit module code s390 forces the use of a GOT slot for the
- * address of the per cpu variable. This is needed because the module
- * may be more than 4G above the per cpu area.
  */
-#if defined(__s390x__) && defined(MODULE)
-
-#define SHIFT_PERCPU_PTR(ptr,offset) (({			\
-	extern int simple_identifier_##var(void);	\
-	unsigned long *__ptr;				\
-	asm ( "larl %0, %1@GOTENT"		\
-	    : "=a" (__ptr) : "X" (ptr) );		\
-	(typeof(ptr))((*__ptr) + (offset));	}))
-
-#else
-
-#define SHIFT_PERCPU_PTR(ptr, offset) (({				\
-	extern int simple_identifier_##var(void);		\
-	unsigned long __ptr;					\
-	asm ( "" : "=a" (__ptr) : "0" (ptr) );			\
-	(typeof(ptr)) (__ptr + (offset)); }))
+#define __my_cpu_offset S390_lowcore.percpu_offset
 
+/*
+ * For 64 bit module code, the module may be more than 4G above the
+ * per cpu area, use "weak" attribute to force the compiler to
+ * generate an external reference.
+ */
+#if defined(CONFIG_SMP) && defined(__s390x__) && defined(MODULE)
+#define PER_CPU_ATTRIBUTES	__attribute__((weak))
 #endif
 
-#define __my_cpu_offset S390_lowcore.percpu_offset
-
 #include <asm-generic/percpu.h>
 
 #endif /* __ARCH_S390_PERCPU__ */
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-06-01  8:58   ` Tejun Heo
@ 2009-06-01  9:40     ` David Miller
  -1 siblings, 0 replies; 34+ messages in thread
From: David Miller @ 2009-06-01  9:40 UTC (permalink / raw)
  To: tj
  Cc: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, jdike, chris, rusty,
	jens.axboe, davej, jeremy, linux-mm

From: Tejun Heo <tj@kernel.org>
Date: Mon,  1 Jun 2009 17:58:24 +0900

> --- a/arch/cris/include/asm/mmu_context.h
> +++ b/arch/cris/include/asm/mmu_context.h
> @@ -17,7 +17,7 @@ extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>   * registers like cr3 on the i386
>   */
>  
> -extern volatile DEFINE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
> +DECLARE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
>  
>  static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>  {

Yes volatile sucks, but might this break something?

Whether the volatile is actually needed or not, it's bad to have this
kind of potential behavior changing nugget hidden in this seemingly
innocuous change.  Especially if you're the poor soul who ends up
having to debug it :-/

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-06-01  9:40     ` David Miller
@ 2009-06-01 11:36       ` Tejun Heo
  -1 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-06-01 11:36 UTC (permalink / raw)
  To: David Miller
  Cc: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, jdike, chris, rusty,
	jens.axboe, davej, jeremy, linux-mm

David Miller wrote:
> From: Tejun Heo <tj@kernel.org>
> Date: Mon,  1 Jun 2009 17:58:24 +0900
> 
>> --- a/arch/cris/include/asm/mmu_context.h
>> +++ b/arch/cris/include/asm/mmu_context.h
>> @@ -17,7 +17,7 @@ extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>>   * registers like cr3 on the i386
>>   */
>>  
>> -extern volatile DEFINE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
>> +DECLARE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
>>  
>>  static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>>  {
> 
> Yes volatile sucks, but might this break something?
> 
> Whether the volatile is actually needed or not, it's bad to have this
> kind of potential behavior changing nugget hidden in this seemingly
> innocuous change.  Especially if you're the poor soul who ends up
> having to debug it :-/

You're right.  Aieee... how do I feed volatile to the DEFINE macro.
I'll think of something.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2
  2009-06-01  8:58 [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Tejun Heo
                   ` (6 preceding siblings ...)
  2009-06-01  8:58 ` [PATCH 7/7] s390: " Tejun Heo
@ 2009-06-01 16:10 ` Kyle McMartin
  2009-06-01 19:51   ` Kyle McMartin
  2009-06-02  6:35 ` Benjamin Herrenschmidt
  8 siblings, 1 reply; 34+ messages in thread
From: Kyle McMartin @ 2009-06-01 16:10 UTC (permalink / raw)
  To: Tejun Heo
  Cc: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty

On Mon, Jun 01, 2009 at 05:58:21PM +0900, Tejun Heo wrote:
>  arch/parisc/kernel/irq.c                         |    2 
>  arch/parisc/kernel/topology.c                    |    2 

Ack the parisc bits, they build correctly, and appear to boot fine
as well (at least, limited testing.)

cheers, Kyle

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2
  2009-06-01 16:10 ` [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Kyle McMartin
@ 2009-06-01 19:51   ` Kyle McMartin
  2009-06-05  4:24     ` Tejun Heo
  0 siblings, 1 reply; 34+ messages in thread
From: Kyle McMartin @ 2009-06-01 19:51 UTC (permalink / raw)
  To: Kyle McMartin
  Cc: Tejun Heo, JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86,
	ink, rth, linux, hskinnemoen, cooloney, starvik, jesper.nilsson,
	dhowells, ysato, tony.luck, takata, geert, monstr, ralf, benh,
	paulus, schwidefsky, heiko.carstens, lethal, davem, jdike, chris,
	rusty

On Mon, Jun 01, 2009 at 12:10:28PM -0400, Kyle McMartin wrote:
> On Mon, Jun 01, 2009 at 05:58:21PM +0900, Tejun Heo wrote:
> >  arch/parisc/kernel/irq.c                         |    2 
> >  arch/parisc/kernel/topology.c                    |    2 
> 
> Ack the parisc bits, they build correctly, and appear to boot fine
> as well (at least, limited testing.)
> 

I appear to have spoken too soon (or booted the wrong machine.) I'll
look into it.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-06-01 11:36       ` Tejun Heo
@ 2009-06-02  5:08         ` Benjamin Herrenschmidt
  -1 siblings, 0 replies; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2009-06-02  5:08 UTC (permalink / raw)
  To: Tejun Heo
  Cc: David Miller, JBeulich, andi, mingo, hpa, tglx, linux-kernel,
	x86, ink, rth, linux, hskinnemoen, cooloney, starvik,
	jesper.nilsson, dhowells, ysato, tony.luck, takata, geert,
	monstr, ralf, kyle, paulus, schwidefsky, heiko.carstens, lethal,
	jdike, chris, rusty, jens.axboe, davej, jeremy, linux-mm

On Mon, 2009-06-01 at 20:36 +0900, Tejun Heo wrote:
> > Whether the volatile is actually needed or not, it's bad to have this
> > kind of potential behavior changing nugget hidden in this seemingly
> > innocuous change.  Especially if you're the poor soul who ends up
> > having to debug it :-/
> 
> You're right.  Aieee... how do I feed volatile to the DEFINE macro.
> I'll think of something.

Or better, work with the cris maintainer to figure out whether it's
needed (it probably isn't) and have a pre-requisite patch that removes
it before your series :-)

Cheers,
Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2
  2009-06-01  8:58 [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Tejun Heo
                   ` (7 preceding siblings ...)
  2009-06-01 16:10 ` [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Kyle McMartin
@ 2009-06-02  6:35 ` Benjamin Herrenschmidt
  8 siblings, 0 replies; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2009-06-02  6:35 UTC (permalink / raw)
  To: Tejun Heo
  Cc: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, kyle, paulus,
	schwidefsky, heiko.carstens, lethal, davem, jdike, chris, rusty

On Mon, 2009-06-01 at 17:58 +0900, Tejun Heo wrote:
> Hello,
> 
> Upon ack, please pull from the following git tree.
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git tj-percpu
> 
> This is the second take of percpu-convert-most-archs-to-dynamic-percpu
> patchset.  Changes from the last take[L] are

There's a minor conflict with my -next branch but it's easy enough
to resolve by hand (Kconfig bits). Apart from that, it appears to
build fine with my collection of test configs. I haven't had a chance
to test boot it yet though :-)

Cheers,
Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2
  2009-06-01 19:51   ` Kyle McMartin
@ 2009-06-05  4:24     ` Tejun Heo
  0 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-06-05  4:24 UTC (permalink / raw)
  To: Kyle McMartin
  Cc: JBeulich, andi, mingo, hpa, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, geert, monstr, ralf, benh, paulus,
	schwidefsky, heiko.carstens, lethal, davem, jdike, chris, rusty

Kyle McMartin wrote:
> On Mon, Jun 01, 2009 at 12:10:28PM -0400, Kyle McMartin wrote:
>> On Mon, Jun 01, 2009 at 05:58:21PM +0900, Tejun Heo wrote:
>>>  arch/parisc/kernel/irq.c                         |    2 
>>>  arch/parisc/kernel/topology.c                    |    2 
>> Ack the parisc bits, they build correctly, and appear to boot fine
>> as well (at least, limited testing.)
>>
> 
> I appear to have spoken too soon (or booted the wrong machine.) I'll
> look into it.

Any news?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-06-02  5:08         ` Benjamin Herrenschmidt
@ 2009-06-05  4:25           ` Tejun Heo
  -1 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-06-05  4:25 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: David Miller, JBeulich, andi, mingo, hpa, tglx, linux-kernel,
	x86, ink, rth, linux, hskinnemoen, cooloney, starvik,
	jesper.nilsson, dhowells, ysato, tony.luck, takata, geert,
	monstr, ralf, kyle, paulus, schwidefsky, heiko.carstens, lethal,
	jdike, chris, rusty, jens.axboe, davej, jeremy, linux-mm

Benjamin Herrenschmidt wrote:
> On Mon, 2009-06-01 at 20:36 +0900, Tejun Heo wrote:
>>> Whether the volatile is actually needed or not, it's bad to have this
>>> kind of potential behavior changing nugget hidden in this seemingly
>>> innocuous change.  Especially if you're the poor soul who ends up
>>> having to debug it :-/
>> You're right.  Aieee... how do I feed volatile to the DEFINE macro.
>> I'll think of something.
> 
> Or better, work with the cris maintainer to figure out whether it's
> needed (it probably isn't) and have a pre-requisite patch that removes
> it before your series :-)

Yeap, that's worth giving a shot.

Mikael Starvik, can you please enlighten us why volatile is necessary
there?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-06-01  9:40     ` David Miller
@ 2009-06-10 18:30       ` H. Peter Anvin
  -1 siblings, 0 replies; 34+ messages in thread
From: H. Peter Anvin @ 2009-06-10 18:30 UTC (permalink / raw)
  To: David Miller
  Cc: tj, JBeulich, andi, mingo, tglx, linux-kernel, x86, ink, rth,
	linux, hskinnemoen, cooloney, starvik, jesper.nilsson, dhowells,
	ysato, tony.luck, takata, monstr, ralf, kyle, benh, paulus,
	schwidefsky, heiko.carstens, lethal, jdike, chris, rusty,
	jens.axboe, davej, jeremy, linux-mm

David Miller wrote:
> From: Tejun Heo <tj@kernel.org>
> Date: Mon,  1 Jun 2009 17:58:24 +0900
> 
>> --- a/arch/cris/include/asm/mmu_context.h
>> +++ b/arch/cris/include/asm/mmu_context.h
>> @@ -17,7 +17,7 @@ extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>>   * registers like cr3 on the i386
>>   */
>>  
>> -extern volatile DEFINE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
>> +DECLARE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
>>  
>>  static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>>  {
> 
> Yes volatile sucks, but might this break something?
> 
> Whether the volatile is actually needed or not, it's bad to have this
> kind of potential behavior changing nugget hidden in this seemingly
> innocuous change.  Especially if you're the poor soul who ends up
> having to debug it :-/

Shouldn't the "volatile" go inside the DECLARE_PER_CPU() with the rest
of the type?  [Disclaimer: I haven't actually looked.]
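
A minimal sketch of that, assuming the macros pass the type through
unchanged (hypothetical, untested):

	/* volatile qualifying the pointer object, inside the type */
	DECLARE_PER_CPU(pgd_t * volatile, current_pgd);
	DEFINE_PER_CPU(pgd_t * volatile, current_pgd);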

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-06-05  4:25           ` Tejun Heo
  (?)
@ 2009-06-11 10:45           ` Jesper Nilsson
  2009-06-17  2:28             ` Tejun Heo
  -1 siblings, 1 reply; 34+ messages in thread
From: Jesper Nilsson @ 2009-06-11 10:45 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Benjamin Herrenschmidt, David Miller, JBeulich, andi, mingo, hpa,
	tglx, linux-kernel, x86, ink, rth, linux, hskinnemoen, cooloney,
	Mikael Starvik, dhowells, ysato, tony.luck, takata, geert,
	monstr, ralf, kyle, paulus, schwidefsky, heiko.carstens, lethal,
	jdike, chris, rusty, jens.axboe, davej, jeremy, linux-mm

On Fri, Jun 05, 2009 at 06:25:30AM +0200, Tejun Heo wrote:
> Benjamin Herrenschmidt wrote:
> > On Mon, 2009-06-01 at 20:36 +0900, Tejun Heo wrote:
> >>> Whether the volatile is actually needed or not, it's bad to have this
> >>> kind of potential behavior changing nugget hidden in this seemingly
> >>> innocuous change.  Especially if you're the poor soul who ends up
> >>> having to debug it :-/
> >> You're right.  Aieee... how do I feed volatile to the DEFINE macro.
> >> I'll think of something.
> > 
> > Or better, work with the cris maintainer to figure out whether it's
> > needed (it probably isn't) and have a pre-requisite patch that removes
> > it before your series :-)
> 
> Yeap, that's worth giving a shot.
> 
> Mikael Starvik, can you please enlighten us why volatile is necessary
> there?

I've talked with Mikael, and we both agreed that this was probably
a legacy from earlier versions, and the volatile is no longer needed.

Confirmed by booting and running some video-streaming on an ARTPEC-3
(CRISv32) board.

You can take the following patch as a pre-requisite, or go the way of
the original patch.

From: Jesper Nilsson <jesper.nilsson@axis.com>
Subject: [PATCH] CRIS: Change DEFINE_PER_CPU of current_pgd to be non volatile.

The DEFINE_PER_CPU of current_pgd on CRIS was defined using volatile,
which is not needed.  Remove the volatile.

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
---
 arch/cris/include/asm/mmu_context.h |    3 ++-
 arch/cris/mm/fault.c                |    2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/cris/include/asm/mmu_context.h b/arch/cris/include/asm/mmu_context.h
index 72ba08d..476cd9e 100644
--- a/arch/cris/include/asm/mmu_context.h
+++ b/arch/cris/include/asm/mmu_context.h
@@ -17,7 +17,8 @@ extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  * registers like cr3 on the i386
  */
 
-extern volatile DEFINE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
+/* defined in arch/cris/mm/fault.c */
+extern DEFINE_PER_CPU(pgd_t *, current_pgd);
 
 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
diff --git a/arch/cris/mm/fault.c b/arch/cris/mm/fault.c
index c4c76db..84d22ae 100644
--- a/arch/cris/mm/fault.c
+++ b/arch/cris/mm/fault.c
@@ -29,7 +29,7 @@ extern void die_if_kernel(const char *, struct pt_regs *, long);
 
 /* current active page directory */
 
-volatile DEFINE_PER_CPU(pgd_t *,current_pgd);
+DEFINE_PER_CPU(pgd_t *, current_pgd);
 unsigned long cris_signal_return_page;
 
 /*
-- 
1.6.1

> Thanks.
> 
> -- 
> tejun

/^JN - Jesper Nilsson
-- 
               Jesper Nilsson -- jesper.nilsson@axis.com

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-06-11 10:45           ` Jesper Nilsson
@ 2009-06-17  2:28             ` Tejun Heo
  0 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-06-17  2:28 UTC (permalink / raw)
  To: Jesper Nilsson
  Cc: Benjamin Herrenschmidt, David Miller, JBeulich, andi, mingo, hpa,
	tglx, linux-kernel, x86, ink, rth, linux, hskinnemoen, cooloney,
	Mikael Starvik, dhowells, ysato, tony.luck, takata, geert,
	monstr, ralf, kyle, paulus, schwidefsky, heiko.carstens, lethal,
	jdike, chris, rusty, jens.axboe, davej, jeremy, linux-mm

Hello,

Jesper Nilsson wrote:
> I've talked with Mikael, and we both agreed that this was probably
> a legacy from earlier versions, and the volatile is no longer needed.
> 
> Confirmed by booting and running some video-streaming on an ARTPEC-3
> (CRISv32) board.
> 
> You can take the following patch as a pre-requisite, or go the way of
> the original patch.
> 
> From: Jesper Nilsson <jesper.nilsson@axis.com>
> Subject: [PATCH] CRIS: Change DEFINE_PER_CPU of current_pgd to be non volatile.
> 
> The DEFINE_PER_CPU of current_pgd was on CRIS defined using volatile,
> which is not needed. Remove volatile.
> 
> Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>

Super.  Included in the series.

Thanks a lot.

-- 
tejun

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-05-25  6:07     ` Rusty Russell
@ 2009-05-25 16:07       ` Tejun Heo
  -1 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-05-25 16:07 UTC (permalink / raw)
  To: Rusty Russell
  Cc: mingo, linux-kernel, x86, ink, rth, linux, hskinnemoen, cooloney,
	starvik, jesper.nilsson, dhowells, ysato, tony.luck, takata,
	geert, monstr, ralf, kyle, benh, paulus, schwidefsky,
	heiko.carstens, lethal, davem, jdike, chris, Jens Axboe,
	Dave Jones, Jeremy Fitzhardinge, linux-mm

Rusty Russell wrote:
> On Wed, 20 May 2009 05:07:35 pm Tejun Heo wrote:
>> Percpu variable definition is about to be updated such that
>>
>> * percpu symbols must be unique even the static ones
>>
>> * in-function static definition is not allowed
> 
> That spluttering noise is me choking on the title of this patch :)
> 
> Making these pseudo statics is in no way a cleanup.  How about we just
> say "they can't be static" and do something like:
> 
> /* Sorry, can't be static: that breaks archs which need these weak. */
> #define DEFINE_PER_CPU(type, var) \
> 	extern typeof(type) var; DEFINE_PER_CPU_SECTION(type, var, "")

Heh... well, even though I authored the patch, I kind of agree with
you.  Maybe it would be better to simply disallow static declaration /
definition at all.  I wanted to give a go at the original idea as it
seemed to have some potential.  The result isn't too disappointing but
I can't really say there are distinctively compelling advantages to
justify the added complexity and subtlety.

What do others think?  Is everyone happy with going extern only?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-05-20  7:37   ` Tejun Heo
@ 2009-05-25  6:07     ` Rusty Russell
  -1 siblings, 0 replies; 34+ messages in thread
From: Rusty Russell @ 2009-05-25  6:07 UTC (permalink / raw)
  To: Tejun Heo
  Cc: mingo, linux-kernel, x86, ink, rth, linux, hskinnemoen, cooloney,
	starvik, jesper.nilsson, dhowells, ysato, tony.luck, takata,
	geert, monstr, ralf, kyle, benh, paulus, schwidefsky,
	heiko.carstens, lethal, davem, jdike, chris, Jens Axboe,
	Dave Jones, Jeremy Fitzhardinge, linux-mm

On Wed, 20 May 2009 05:07:35 pm Tejun Heo wrote:
> Percpu variable definition is about to be updated such that
>
> * percpu symbols must be unique even the static ones
>
> * in-function static definition is not allowed

That spluttering noise is me choking on the title of this patch :)

Making these pseudo statics is in no way a cleanup.  How about we just
say "they can't be static" and do something like:

/* Sorry, can't be static: that breaks archs which need these weak. */
#define DEFINE_PER_CPU(type, var) \
	extern typeof(type) var; DEFINE_PER_CPU_SECTION(type, var, "")
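
The leading extern is what rules statics out; a sketch of what a
hypothetical offender would hit:

	static DEFINE_PER_CPU(int, foo);
	/* expands to roughly
	 *	static extern typeof(int) foo; ...
	 * i.e. two storage classes in one declaration, a hard compile
	 * error, so every percpu definition is forced to be global. */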

Thanks,
Rusty.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-05-20  7:37   ` Tejun Heo
@ 2009-05-20  9:17     ` Jens Axboe
  -1 siblings, 0 replies; 34+ messages in thread
From: Jens Axboe @ 2009-05-20  9:17 UTC (permalink / raw)
  To: Tejun Heo
  Cc: mingo, linux-kernel, x86, ink, rth, linux, hskinnemoen, cooloney,
	starvik, jesper.nilsson, dhowells, ysato, tony.luck, takata,
	geert, monstr, ralf, kyle, benh, paulus, schwidefsky,
	heiko.carstens, lethal, davem, jdike, chris, rusty, Dave Jones,
	Jeremy Fitzhardinge, linux-mm

On Wed, May 20 2009, Tejun Heo wrote:
> Percpu variable definition is about to be updated such that
> 
> * percpu symbols must be unique even the static ones
> 
> * in-function static definition is not allowed
> 
> Update percpu variable definitions accordingly.
> 
> * as,cfq: rename ioc_count uniquely
> 
> * cpufreq: rename cpu_dbs_info uniquely
> 
> * xen: move nesting_count out of xen_evtchn_do_upcall() and rename it
> 
> * mm: move ratelimits out of balance_dirty_pages_ratelimited_nr() and
>   rename it
> 
> * ipv4,6: rename cookie_scratch uniquely
> 
> While at it, make cris:use DECLARE_PER_CPU() instead of extern
> volatile DEFINE_PER_CPU() for declaration.
> 
> [ Impact: percpu usage cleanups, no duplicate static percpu var names ]
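
To make the collision concrete, a sketch (the file names are the ones
renamed in the hunks below):

	/* block/as-iosched.c */
	static DEFINE_PER_CPU(unsigned long, ioc_count);
	/* block/cfq-iosched.c */
	static DEFINE_PER_CPU(unsigned long, ioc_count);
	/* once percpu symbols are forced global, both files emit a
	 * global per_cpu__ioc_count and the link fails, hence the
	 * as_ioc_count / cfq_ioc_count renames. */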

The block bits look fine.

Acked-by: Jens Axboe <jens.axboe@oracle.com>

> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
> Cc: Jens Axboe <jens.axboe@oracle.com>
> Cc: Dave Jones <davej@redhat.com>
> Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
> Cc: linux-mm <linux-mm@kvack.org>
> Cc: David S. Miller <davem@davemloft.net>
> ---
>  arch/cris/include/asm/mmu_context.h    |    2 +-
>  block/as-iosched.c                     |   10 +++++-----
>  block/cfq-iosched.c                    |   10 +++++-----
>  drivers/cpufreq/cpufreq_conservative.c |   12 ++++++------
>  drivers/cpufreq/cpufreq_ondemand.c     |   15 ++++++++-------
>  drivers/xen/events.c                   |    9 +++++----
>  mm/page-writeback.c                    |    5 +++--
>  net/ipv4/syncookies.c                  |    4 ++--
>  net/ipv6/syncookies.c                  |    4 ++--
>  9 files changed, 37 insertions(+), 34 deletions(-)
> 
> diff --git a/arch/cris/include/asm/mmu_context.h b/arch/cris/include/asm/mmu_context.h
> index 72ba08d..00de1a0 100644
> --- a/arch/cris/include/asm/mmu_context.h
> +++ b/arch/cris/include/asm/mmu_context.h
> @@ -17,7 +17,7 @@ extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>   * registers like cr3 on the i386
>   */
>  
> -extern volatile DEFINE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
> +DECLARE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
>  
>  static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>  {
> diff --git a/block/as-iosched.c b/block/as-iosched.c
> index c48fa67..96ff4d1 100644
> --- a/block/as-iosched.c
> +++ b/block/as-iosched.c
> @@ -146,7 +146,7 @@ enum arq_state {
>  #define RQ_STATE(rq)	((enum arq_state)(rq)->elevator_private2)
>  #define RQ_SET_STATE(rq, state)	((rq)->elevator_private2 = (void *) state)
>  
> -static DEFINE_PER_CPU(unsigned long, ioc_count);
> +static DEFINE_PER_CPU(unsigned long, as_ioc_count);
>  static struct completion *ioc_gone;
>  static DEFINE_SPINLOCK(ioc_gone_lock);
>  
> @@ -161,7 +161,7 @@ static void as_antic_stop(struct as_data *ad);
>  static void free_as_io_context(struct as_io_context *aic)
>  {
>  	kfree(aic);
> -	elv_ioc_count_dec(ioc_count);
> +	elv_ioc_count_dec(as_ioc_count);
>  	if (ioc_gone) {
>  		/*
>  		 * AS scheduler is exiting, grab exit lock and check
> @@ -169,7 +169,7 @@ static void free_as_io_context(struct as_io_context *aic)
>  		 * complete ioc_gone and set it back to NULL.
>  		 */
>  		spin_lock(&ioc_gone_lock);
> -		if (ioc_gone && !elv_ioc_count_read(ioc_count)) {
> +		if (ioc_gone && !elv_ioc_count_read(as_ioc_count)) {
>  			complete(ioc_gone);
>  			ioc_gone = NULL;
>  		}
> @@ -211,7 +211,7 @@ static struct as_io_context *alloc_as_io_context(void)
>  		ret->seek_total = 0;
>  		ret->seek_samples = 0;
>  		ret->seek_mean = 0;
> -		elv_ioc_count_inc(ioc_count);
> +		elv_ioc_count_inc(as_ioc_count);
>  	}
>  
>  	return ret;
> @@ -1509,7 +1509,7 @@ static void __exit as_exit(void)
>  	ioc_gone = &all_gone;
>  	/* ioc_gone's update must be visible before reading ioc_count */
>  	smp_wmb();
> -	if (elv_ioc_count_read(ioc_count))
> +	if (elv_ioc_count_read(as_ioc_count))
>  		wait_for_completion(&all_gone);
>  	synchronize_rcu();
>  }
> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> index a55a9bd..deea748 100644
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> @@ -48,7 +48,7 @@ static int cfq_slice_idle = HZ / 125;
>  static struct kmem_cache *cfq_pool;
>  static struct kmem_cache *cfq_ioc_pool;
>  
> -static DEFINE_PER_CPU(unsigned long, ioc_count);
> +static DEFINE_PER_CPU(unsigned long, cfq_ioc_count);
>  static struct completion *ioc_gone;
>  static DEFINE_SPINLOCK(ioc_gone_lock);
>  
> @@ -1423,7 +1423,7 @@ static void cfq_cic_free_rcu(struct rcu_head *head)
>  	cic = container_of(head, struct cfq_io_context, rcu_head);
>  
>  	kmem_cache_free(cfq_ioc_pool, cic);
> -	elv_ioc_count_dec(ioc_count);
> +	elv_ioc_count_dec(cfq_ioc_count);
>  
>  	if (ioc_gone) {
>  		/*
> @@ -1432,7 +1432,7 @@ static void cfq_cic_free_rcu(struct rcu_head *head)
>  		 * complete ioc_gone and set it back to NULL
>  		 */
>  		spin_lock(&ioc_gone_lock);
> -		if (ioc_gone && !elv_ioc_count_read(ioc_count)) {
> +		if (ioc_gone && !elv_ioc_count_read(cfq_ioc_count)) {
>  			complete(ioc_gone);
>  			ioc_gone = NULL;
>  		}
> @@ -1558,7 +1558,7 @@ cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
>  		INIT_HLIST_NODE(&cic->cic_list);
>  		cic->dtor = cfq_free_io_context;
>  		cic->exit = cfq_exit_io_context;
> -		elv_ioc_count_inc(ioc_count);
> +		elv_ioc_count_inc(cfq_ioc_count);
>  	}
>  
>  	return cic;
> @@ -2663,7 +2663,7 @@ static void __exit cfq_exit(void)
>  	 * this also protects us from entering cfq_slab_kill() with
>  	 * pending RCU callbacks
>  	 */
> -	if (elv_ioc_count_read(ioc_count))
> +	if (elv_ioc_count_read(cfq_ioc_count))
>  		wait_for_completion(&all_gone);
>  	cfq_slab_kill();
>  }
> diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
> index 2ecd95e..e0faa3e 100644
> --- a/drivers/cpufreq/cpufreq_conservative.c
> +++ b/drivers/cpufreq/cpufreq_conservative.c
> @@ -80,7 +80,7 @@ struct cpu_dbs_info_s {
>  	int cpu;
>  	unsigned int enable:1;
>  };
> -static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
> +static DEFINE_PER_CPU(struct cpu_dbs_info_s, cs_cpu_dbs_info);
>  
>  static unsigned int dbs_enable;	/* number of CPUs using this policy */
>  
> @@ -150,7 +150,7 @@ dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
>  		     void *data)
>  {
>  	struct cpufreq_freqs *freq = data;
> -	struct cpu_dbs_info_s *this_dbs_info = &per_cpu(cpu_dbs_info,
> +	struct cpu_dbs_info_s *this_dbs_info = &per_cpu(cs_cpu_dbs_info,
>  							freq->cpu);
>  
>  	struct cpufreq_policy *policy;
> @@ -323,7 +323,7 @@ static ssize_t store_ignore_nice_load(struct cpufreq_policy *policy,
>  	/* we need to re-evaluate prev_cpu_idle */
>  	for_each_online_cpu(j) {
>  		struct cpu_dbs_info_s *dbs_info;
> -		dbs_info = &per_cpu(cpu_dbs_info, j);
> +		dbs_info = &per_cpu(cs_cpu_dbs_info, j);
>  		dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
>  						&dbs_info->prev_cpu_wall);
>  		if (dbs_tuners_ins.ignore_nice)
> @@ -413,7 +413,7 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info)
>  		cputime64_t cur_wall_time, cur_idle_time;
>  		unsigned int idle_time, wall_time;
>  
> -		j_dbs_info = &per_cpu(cpu_dbs_info, j);
> +		j_dbs_info = &per_cpu(cs_cpu_dbs_info, j);
>  
>  		cur_idle_time = get_cpu_idle_time(j, &cur_wall_time);
>  
> @@ -553,7 +553,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
>  	unsigned int j;
>  	int rc;
>  
> -	this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
> +	this_dbs_info = &per_cpu(cs_cpu_dbs_info, cpu);
>  
>  	switch (event) {
>  	case CPUFREQ_GOV_START:
> @@ -573,7 +573,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
>  
>  		for_each_cpu(j, policy->cpus) {
>  			struct cpu_dbs_info_s *j_dbs_info;
> -			j_dbs_info = &per_cpu(cpu_dbs_info, j);
> +			j_dbs_info = &per_cpu(cs_cpu_dbs_info, j);
>  			j_dbs_info->cur_policy = policy;
>  
>  			j_dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
> diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
> index 338f428..2eaf88f 100644
> --- a/drivers/cpufreq/cpufreq_ondemand.c
> +++ b/drivers/cpufreq/cpufreq_ondemand.c
> @@ -87,7 +87,7 @@ struct cpu_dbs_info_s {
>  	unsigned int enable:1,
>  		sample_type:1;
>  };
> -static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
> +static DEFINE_PER_CPU(struct cpu_dbs_info_s, od_cpu_dbs_info);
>  
>  static unsigned int dbs_enable;	/* number of CPUs using this policy */
>  
> @@ -162,7 +162,8 @@ static unsigned int powersave_bias_target(struct cpufreq_policy *policy,
>  	unsigned int freq_hi, freq_lo;
>  	unsigned int index = 0;
>  	unsigned int jiffies_total, jiffies_hi, jiffies_lo;
> -	struct cpu_dbs_info_s *dbs_info = &per_cpu(cpu_dbs_info, policy->cpu);
> +	struct cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info,
> +						   policy->cpu);
>  
>  	if (!dbs_info->freq_table) {
>  		dbs_info->freq_lo = 0;
> @@ -207,7 +208,7 @@ static void ondemand_powersave_bias_init(void)
>  {
>  	int i;
>  	for_each_online_cpu(i) {
> -		struct cpu_dbs_info_s *dbs_info = &per_cpu(cpu_dbs_info, i);
> +		struct cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, i);
>  		dbs_info->freq_table = cpufreq_frequency_get_table(i);
>  		dbs_info->freq_lo = 0;
>  	}
> @@ -322,7 +323,7 @@ static ssize_t store_ignore_nice_load(struct cpufreq_policy *policy,
>  	/* we need to re-evaluate prev_cpu_idle */
>  	for_each_online_cpu(j) {
>  		struct cpu_dbs_info_s *dbs_info;
> -		dbs_info = &per_cpu(cpu_dbs_info, j);
> +		dbs_info = &per_cpu(od_cpu_dbs_info, j);
>  		dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
>  						&dbs_info->prev_cpu_wall);
>  		if (dbs_tuners_ins.ignore_nice)
> @@ -416,7 +417,7 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info)
>  		unsigned int load, load_freq;
>  		int freq_avg;
>  
> -		j_dbs_info = &per_cpu(cpu_dbs_info, j);
> +		j_dbs_info = &per_cpu(od_cpu_dbs_info, j);
>  
>  		cur_idle_time = get_cpu_idle_time(j, &cur_wall_time);
>  
> @@ -573,7 +574,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
>  	unsigned int j;
>  	int rc;
>  
> -	this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
> +	this_dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
>  
>  	switch (event) {
>  	case CPUFREQ_GOV_START:
> @@ -595,7 +596,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
>  
>  		for_each_cpu(j, policy->cpus) {
>  			struct cpu_dbs_info_s *j_dbs_info;
> -			j_dbs_info = &per_cpu(cpu_dbs_info, j);
> +			j_dbs_info = &per_cpu(od_cpu_dbs_info, j);
>  			j_dbs_info->cur_policy = policy;
>  
>  			j_dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 30963af..4dbe5c0 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -596,6 +596,8 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
>  	return IRQ_HANDLED;
>  }
>  
> +static DEFINE_PER_CPU(unsigned, xed_nesting_count);
> +
>  /*
>   * Search the CPUs pending events bitmasks.  For each one found, map
>   * the event number to an irq, and feed it into do_IRQ() for
> @@ -611,7 +613,6 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>  	struct pt_regs *old_regs = set_irq_regs(regs);
>  	struct shared_info *s = HYPERVISOR_shared_info;
>  	struct vcpu_info *vcpu_info = __get_cpu_var(xen_vcpu);
> -	static DEFINE_PER_CPU(unsigned, nesting_count);
>   	unsigned count;
>  
>  	exit_idle();
> @@ -622,7 +623,7 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>  
>  		vcpu_info->evtchn_upcall_pending = 0;
>  
> -		if (__get_cpu_var(nesting_count)++)
> +		if (__get_cpu_var(xed_nesting_count)++)
>  			goto out;
>  
>  #ifndef CONFIG_X86 /* No need for a barrier -- XCHG is a barrier on x86. */
> @@ -647,8 +648,8 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>  
>  		BUG_ON(!irqs_disabled());
>  
> -		count = __get_cpu_var(nesting_count);
> -		__get_cpu_var(nesting_count) = 0;
> +		count = __get_cpu_var(xed_nesting_count);
> +		__get_cpu_var(xed_nesting_count) = 0;
>  	} while(count != 1);
>  
>  out:
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index bb553c3..0e0c9de 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -606,6 +606,8 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite)
>  	}
>  }
>  
> +static DEFINE_PER_CPU(unsigned long, bdp_ratelimits) = 0;
> +
>  /**
>   * balance_dirty_pages_ratelimited_nr - balance dirty memory state
>   * @mapping: address_space which was dirtied
> @@ -623,7 +625,6 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite)
>  void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
>  					unsigned long nr_pages_dirtied)
>  {
> -	static DEFINE_PER_CPU(unsigned long, ratelimits) = 0;
>  	unsigned long ratelimit;
>  	unsigned long *p;
>  
> @@ -636,7 +637,7 @@ void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
>  	 * tasks in balance_dirty_pages(). Period.
>  	 */
>  	preempt_disable();
> -	p =  &__get_cpu_var(ratelimits);
> +	p =  &__get_cpu_var(bdp_ratelimits);
>  	*p += nr_pages_dirtied;
>  	if (unlikely(*p >= ratelimit)) {
>  		*p = 0;
> diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
> index b35a950..70ee18c 100644
> --- a/net/ipv4/syncookies.c
> +++ b/net/ipv4/syncookies.c
> @@ -37,12 +37,12 @@ __initcall(init_syncookies);
>  #define COOKIEBITS 24	/* Upper bits store count */
>  #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
>  
> -static DEFINE_PER_CPU(__u32, cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
> +static DEFINE_PER_CPU(__u32, ipv4_cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
>  
>  static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
>  		       u32 count, int c)
>  {
> -	__u32 *tmp = __get_cpu_var(cookie_scratch);
> +	__u32 *tmp = __get_cpu_var(ipv4_cookie_scratch);
>  
>  	memcpy(tmp + 4, syncookie_secret[c], sizeof(syncookie_secret[c]));
>  	tmp[0] = (__force u32)saddr;
> diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
> index 711175e..348e38c 100644
> --- a/net/ipv6/syncookies.c
> +++ b/net/ipv6/syncookies.c
> @@ -74,12 +74,12 @@ static inline struct sock *get_cookie_sock(struct sock *sk, struct sk_buff *skb,
>  	return child;
>  }
>  
> -static DEFINE_PER_CPU(__u32, cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
> +static DEFINE_PER_CPU(__u32, ipv6_cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
>  
>  static u32 cookie_hash(struct in6_addr *saddr, struct in6_addr *daddr,
>  		       __be16 sport, __be16 dport, u32 count, int c)
>  {
> -	__u32 *tmp = __get_cpu_var(cookie_scratch);
> +	__u32 *tmp = __get_cpu_var(ipv6_cookie_scratch);
>  
>  	/*
>  	 * we have 320 bits of information to hash, copy in the remaining
> -- 
> 1.6.0.2
> 

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 3/7] percpu: clean up percpu variable definitions
  2009-05-20  7:37 [PATCHSET core/percpu] percpu: convert most archs to dynamic percpu Tejun Heo
@ 2009-05-20  7:37   ` Tejun Heo
  0 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2009-05-20  7:37 UTC (permalink / raw)
  To: mingo, linux-kernel, x86, ink, rth, linux, hskinnemoen, cooloney,
	starvik, jesper.nilsson, dhowells, ysato, tony.luck, takata,
	geert, monstr, ralf, kyle, benh, paulus, schwidefsky,
	heiko.carstens, lethal, davem, jdike, chris, rusty
  Cc: Tejun Heo, Jens Axboe, Dave Jones, Jeremy Fitzhardinge, linux-mm

Percpu variable definition is about to be updated such that

* percpu symbols must be unique, even the static ones

* in-function static definition is not allowed

Update percpu variable definitions accordingly.

* as,cfq: rename ioc_count uniquely

* cpufreq: rename cpu_dbs_info uniquely

* xen: move nesting_count out of xen_evtchn_do_upcall() and rename it

* mm: move ratelimits out of balance_dirty_pages_ratelimited_nr() and
  rename it

* ipv4,6: rename cookie_scratch uniquely

While at it, make cris use DECLARE_PER_CPU() instead of extern
volatile DEFINE_PER_CPU() for the declaration.

[ Impact: percpu usage cleanups, no duplicate static percpu var names ]
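
To make the new rules concrete, here is a minimal sketch (illustrative
only, not taken from the patch; the foo_* names are invented):

	/* no longer allowed: in-function static percpu definition */
	void foo_account_event(void)
	{
		static DEFINE_PER_CPU(unsigned long, event_count);

		__get_cpu_var(event_count)++;	/* preemption assumed disabled */
	}

	/* required style: file scope, symbol name unique tree-wide */
	static DEFINE_PER_CPU(unsigned long, foo_event_count);

	void foo_account_event(void)
	{
		__get_cpu_var(foo_event_count)++;	/* preemption assumed disabled */
	}

For a percpu variable that other files need to see, the declaration
belongs in a header via DECLARE_PER_CPU(), as the cris change in this
patch does:

	/* in the header */
	DECLARE_PER_CPU(pgd_t *, current_pgd);
	/* instead of: extern volatile DEFINE_PER_CPU(pgd_t *, current_pgd); */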

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: linux-mm <linux-mm@kvack.org>
Cc: David S. Miller <davem@davemloft.net>
---
 arch/cris/include/asm/mmu_context.h    |    2 +-
 block/as-iosched.c                     |   10 +++++-----
 block/cfq-iosched.c                    |   10 +++++-----
 drivers/cpufreq/cpufreq_conservative.c |   12 ++++++------
 drivers/cpufreq/cpufreq_ondemand.c     |   15 ++++++++-------
 drivers/xen/events.c                   |    9 +++++----
 mm/page-writeback.c                    |    5 +++--
 net/ipv4/syncookies.c                  |    4 ++--
 net/ipv6/syncookies.c                  |    4 ++--
 9 files changed, 37 insertions(+), 34 deletions(-)

diff --git a/arch/cris/include/asm/mmu_context.h b/arch/cris/include/asm/mmu_context.h
index 72ba08d..00de1a0 100644
--- a/arch/cris/include/asm/mmu_context.h
+++ b/arch/cris/include/asm/mmu_context.h
@@ -17,7 +17,7 @@ extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  * registers like cr3 on the i386
  */
 
-extern volatile DEFINE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
+DECLARE_PER_CPU(pgd_t *,current_pgd); /* defined in arch/cris/mm/fault.c */
 
 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
diff --git a/block/as-iosched.c b/block/as-iosched.c
index c48fa67..96ff4d1 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -146,7 +146,7 @@ enum arq_state {
 #define RQ_STATE(rq)	((enum arq_state)(rq)->elevator_private2)
 #define RQ_SET_STATE(rq, state)	((rq)->elevator_private2 = (void *) state)
 
-static DEFINE_PER_CPU(unsigned long, ioc_count);
+static DEFINE_PER_CPU(unsigned long, as_ioc_count);
 static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
@@ -161,7 +161,7 @@ static void as_antic_stop(struct as_data *ad);
 static void free_as_io_context(struct as_io_context *aic)
 {
 	kfree(aic);
-	elv_ioc_count_dec(ioc_count);
+	elv_ioc_count_dec(as_ioc_count);
 	if (ioc_gone) {
 		/*
 		 * AS scheduler is exiting, grab exit lock and check
@@ -169,7 +169,7 @@ static void free_as_io_context(struct as_io_context *aic)
 		 * complete ioc_gone and set it back to NULL.
 		 */
 		spin_lock(&ioc_gone_lock);
-		if (ioc_gone && !elv_ioc_count_read(ioc_count)) {
+		if (ioc_gone && !elv_ioc_count_read(as_ioc_count)) {
 			complete(ioc_gone);
 			ioc_gone = NULL;
 		}
@@ -211,7 +211,7 @@ static struct as_io_context *alloc_as_io_context(void)
 		ret->seek_total = 0;
 		ret->seek_samples = 0;
 		ret->seek_mean = 0;
-		elv_ioc_count_inc(ioc_count);
+		elv_ioc_count_inc(as_ioc_count);
 	}
 
 	return ret;
@@ -1509,7 +1509,7 @@ static void __exit as_exit(void)
 	ioc_gone = &all_gone;
 	/* ioc_gone's update must be visible before reading ioc_count */
 	smp_wmb();
-	if (elv_ioc_count_read(ioc_count))
+	if (elv_ioc_count_read(as_ioc_count))
 		wait_for_completion(&all_gone);
 	synchronize_rcu();
 }
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..deea748 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -48,7 +48,7 @@ static int cfq_slice_idle = HZ / 125;
 static struct kmem_cache *cfq_pool;
 static struct kmem_cache *cfq_ioc_pool;
 
-static DEFINE_PER_CPU(unsigned long, ioc_count);
+static DEFINE_PER_CPU(unsigned long, cfq_ioc_count);
 static struct completion *ioc_gone;
 static DEFINE_SPINLOCK(ioc_gone_lock);
 
@@ -1423,7 +1423,7 @@ static void cfq_cic_free_rcu(struct rcu_head *head)
 	cic = container_of(head, struct cfq_io_context, rcu_head);
 
 	kmem_cache_free(cfq_ioc_pool, cic);
-	elv_ioc_count_dec(ioc_count);
+	elv_ioc_count_dec(cfq_ioc_count);
 
 	if (ioc_gone) {
 		/*
@@ -1432,7 +1432,7 @@ static void cfq_cic_free_rcu(struct rcu_head *head)
 		 * complete ioc_gone and set it back to NULL
 		 */
 		spin_lock(&ioc_gone_lock);
-		if (ioc_gone && !elv_ioc_count_read(ioc_count)) {
+		if (ioc_gone && !elv_ioc_count_read(cfq_ioc_count)) {
 			complete(ioc_gone);
 			ioc_gone = NULL;
 		}
@@ -1558,7 +1558,7 @@ cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
 		INIT_HLIST_NODE(&cic->cic_list);
 		cic->dtor = cfq_free_io_context;
 		cic->exit = cfq_exit_io_context;
-		elv_ioc_count_inc(ioc_count);
+		elv_ioc_count_inc(cfq_ioc_count);
 	}
 
 	return cic;
@@ -2663,7 +2663,7 @@ static void __exit cfq_exit(void)
 	 * this also protects us from entering cfq_slab_kill() with
 	 * pending RCU callbacks
 	 */
-	if (elv_ioc_count_read(ioc_count))
+	if (elv_ioc_count_read(cfq_ioc_count))
 		wait_for_completion(&all_gone);
 	cfq_slab_kill();
 }
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 2ecd95e..e0faa3e 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -80,7 +80,7 @@ struct cpu_dbs_info_s {
 	int cpu;
 	unsigned int enable:1;
 };
-static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
+static DEFINE_PER_CPU(struct cpu_dbs_info_s, cs_cpu_dbs_info);
 
 static unsigned int dbs_enable;	/* number of CPUs using this policy */
 
@@ -150,7 +150,7 @@ dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
 		     void *data)
 {
 	struct cpufreq_freqs *freq = data;
-	struct cpu_dbs_info_s *this_dbs_info = &per_cpu(cpu_dbs_info,
+	struct cpu_dbs_info_s *this_dbs_info = &per_cpu(cs_cpu_dbs_info,
 							freq->cpu);
 
 	struct cpufreq_policy *policy;
@@ -323,7 +323,7 @@ static ssize_t store_ignore_nice_load(struct cpufreq_policy *policy,
 	/* we need to re-evaluate prev_cpu_idle */
 	for_each_online_cpu(j) {
 		struct cpu_dbs_info_s *dbs_info;
-		dbs_info = &per_cpu(cpu_dbs_info, j);
+		dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 		dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
 						&dbs_info->prev_cpu_wall);
 		if (dbs_tuners_ins.ignore_nice)
@@ -413,7 +413,7 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info)
 		cputime64_t cur_wall_time, cur_idle_time;
 		unsigned int idle_time, wall_time;
 
-		j_dbs_info = &per_cpu(cpu_dbs_info, j);
+		j_dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 
 		cur_idle_time = get_cpu_idle_time(j, &cur_wall_time);
 
@@ -553,7 +553,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 	unsigned int j;
 	int rc;
 
-	this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
+	this_dbs_info = &per_cpu(cs_cpu_dbs_info, cpu);
 
 	switch (event) {
 	case CPUFREQ_GOV_START:
@@ -573,7 +573,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 
 		for_each_cpu(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
-			j_dbs_info = &per_cpu(cpu_dbs_info, j);
+			j_dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
 
 			j_dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 338f428..2eaf88f 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -87,7 +87,7 @@ struct cpu_dbs_info_s {
 	unsigned int enable:1,
 		sample_type:1;
 };
-static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
+static DEFINE_PER_CPU(struct cpu_dbs_info_s, od_cpu_dbs_info);
 
 static unsigned int dbs_enable;	/* number of CPUs using this policy */
 
@@ -162,7 +162,8 @@ static unsigned int powersave_bias_target(struct cpufreq_policy *policy,
 	unsigned int freq_hi, freq_lo;
 	unsigned int index = 0;
 	unsigned int jiffies_total, jiffies_hi, jiffies_lo;
-	struct cpu_dbs_info_s *dbs_info = &per_cpu(cpu_dbs_info, policy->cpu);
+	struct cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info,
+						   policy->cpu);
 
 	if (!dbs_info->freq_table) {
 		dbs_info->freq_lo = 0;
@@ -207,7 +208,7 @@ static void ondemand_powersave_bias_init(void)
 {
 	int i;
 	for_each_online_cpu(i) {
-		struct cpu_dbs_info_s *dbs_info = &per_cpu(cpu_dbs_info, i);
+		struct cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, i);
 		dbs_info->freq_table = cpufreq_frequency_get_table(i);
 		dbs_info->freq_lo = 0;
 	}
@@ -322,7 +323,7 @@ static ssize_t store_ignore_nice_load(struct cpufreq_policy *policy,
 	/* we need to re-evaluate prev_cpu_idle */
 	for_each_online_cpu(j) {
 		struct cpu_dbs_info_s *dbs_info;
-		dbs_info = &per_cpu(cpu_dbs_info, j);
+		dbs_info = &per_cpu(od_cpu_dbs_info, j);
 		dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
 						&dbs_info->prev_cpu_wall);
 		if (dbs_tuners_ins.ignore_nice)
@@ -416,7 +417,7 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info)
 		unsigned int load, load_freq;
 		int freq_avg;
 
-		j_dbs_info = &per_cpu(cpu_dbs_info, j);
+		j_dbs_info = &per_cpu(od_cpu_dbs_info, j);
 
 		cur_idle_time = get_cpu_idle_time(j, &cur_wall_time);
 
@@ -573,7 +574,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 	unsigned int j;
 	int rc;
 
-	this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
+	this_dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
 
 	switch (event) {
 	case CPUFREQ_GOV_START:
@@ -595,7 +596,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 
 		for_each_cpu(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
-			j_dbs_info = &per_cpu(cpu_dbs_info, j);
+			j_dbs_info = &per_cpu(od_cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
 
 			j_dbs_info->prev_cpu_idle = get_cpu_idle_time(j,
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 30963af..4dbe5c0 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -596,6 +596,8 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static DEFINE_PER_CPU(unsigned, xed_nesting_count);
+
 /*
  * Search the CPUs pending events bitmasks.  For each one found, map
  * the event number to an irq, and feed it into do_IRQ() for
@@ -611,7 +613,6 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 	struct pt_regs *old_regs = set_irq_regs(regs);
 	struct shared_info *s = HYPERVISOR_shared_info;
 	struct vcpu_info *vcpu_info = __get_cpu_var(xen_vcpu);
-	static DEFINE_PER_CPU(unsigned, nesting_count);
  	unsigned count;
 
 	exit_idle();
@@ -622,7 +623,7 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 
 		vcpu_info->evtchn_upcall_pending = 0;
 
-		if (__get_cpu_var(nesting_count)++)
+		if (__get_cpu_var(xed_nesting_count)++)
 			goto out;
 
 #ifndef CONFIG_X86 /* No need for a barrier -- XCHG is a barrier on x86. */
@@ -647,8 +648,8 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 
 		BUG_ON(!irqs_disabled());
 
-		count = __get_cpu_var(nesting_count);
-		__get_cpu_var(nesting_count) = 0;
+		count = __get_cpu_var(xed_nesting_count);
+		__get_cpu_var(xed_nesting_count) = 0;
 	} while(count != 1);
 
 out:
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..0e0c9de 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -606,6 +606,8 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite)
 	}
 }
 
+static DEFINE_PER_CPU(unsigned long, bdp_ratelimits) = 0;
+
 /**
  * balance_dirty_pages_ratelimited_nr - balance dirty memory state
  * @mapping: address_space which was dirtied
@@ -623,7 +625,6 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite)
 void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
 					unsigned long nr_pages_dirtied)
 {
-	static DEFINE_PER_CPU(unsigned long, ratelimits) = 0;
 	unsigned long ratelimit;
 	unsigned long *p;
 
@@ -636,7 +637,7 @@ void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
 	 * tasks in balance_dirty_pages(). Period.
 	 */
 	preempt_disable();
-	p =  &__get_cpu_var(ratelimits);
+	p =  &__get_cpu_var(bdp_ratelimits);
 	*p += nr_pages_dirtied;
 	if (unlikely(*p >= ratelimit)) {
 		*p = 0;
diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index b35a950..70ee18c 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -37,12 +37,12 @@ __initcall(init_syncookies);
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
 
-static DEFINE_PER_CPU(__u32, cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
+static DEFINE_PER_CPU(__u32, ipv4_cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
 
 static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
 		       u32 count, int c)
 {
-	__u32 *tmp = __get_cpu_var(cookie_scratch);
+	__u32 *tmp = __get_cpu_var(ipv4_cookie_scratch);
 
 	memcpy(tmp + 4, syncookie_secret[c], sizeof(syncookie_secret[c]));
 	tmp[0] = (__force u32)saddr;
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index 711175e..348e38c 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -74,12 +74,12 @@ static inline struct sock *get_cookie_sock(struct sock *sk, struct sk_buff *skb,
 	return child;
 }
 
-static DEFINE_PER_CPU(__u32, cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
+static DEFINE_PER_CPU(__u32, ipv6_cookie_scratch)[16 + 5 + SHA_WORKSPACE_WORDS];
 
 static u32 cookie_hash(struct in6_addr *saddr, struct in6_addr *daddr,
 		       __be16 sport, __be16 dport, u32 count, int c)
 {
-	__u32 *tmp = __get_cpu_var(cookie_scratch);
+	__u32 *tmp = __get_cpu_var(ipv6_cookie_scratch);
 
 	/*
 	 * we have 320 bits of information to hash, copy in the remaining
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2009-06-17  2:29 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-06-01  8:58 [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Tejun Heo
2009-06-01  8:58 ` [PATCH 1/7] percpu: use dynamic percpu allocator as the default percpu allocator Tejun Heo
2009-06-01  8:58 ` [PATCH 2/7] percpu: cleanup percpu array definitions Tejun Heo
2009-06-01  8:58   ` Tejun Heo
2009-06-01  8:58 ` [PATCH 3/7] percpu: clean up percpu variable definitions Tejun Heo
2009-06-01  8:58   ` Tejun Heo
2009-06-01  9:40   ` David Miller
2009-06-01  9:40     ` David Miller
2009-06-01 11:36     ` Tejun Heo
2009-06-01 11:36       ` Tejun Heo
2009-06-02  5:08       ` Benjamin Herrenschmidt
2009-06-02  5:08         ` Benjamin Herrenschmidt
2009-06-05  4:25         ` Tejun Heo
2009-06-05  4:25           ` Tejun Heo
2009-06-11 10:45           ` Jesper Nilsson
2009-06-17  2:28             ` Tejun Heo
2009-06-10 18:30     ` H. Peter Anvin
2009-06-10 18:30       ` H. Peter Anvin
2009-06-01  8:58 ` [PATCH 4/7] percpu: enforce global definition Tejun Heo
2009-06-01  8:58 ` [PATCH 5/7] alpha: kill unnecessary __used attribute in PER_CPU_ATTRIBUTES Tejun Heo
2009-06-01  8:58 ` [PATCH 6/7] alpha: switch to dynamic percpu allocator Tejun Heo
2009-06-01  8:58 ` [PATCH 7/7] s390: " Tejun Heo
2009-06-01 16:10 ` [GIT PATCH core/percpu] percpu: convert most archs to dynamic percpu, take#2 Kyle McMartin
2009-06-01 19:51   ` Kyle McMartin
2009-06-05  4:24     ` Tejun Heo
2009-06-02  6:35 ` Benjamin Herrenschmidt
  -- strict thread matches above, loose matches on Subject: below --
2009-05-20  7:37 [PATCHSET core/percpu] percpu: convert most archs to dynamic percpu Tejun Heo
2009-05-20  7:37 ` [PATCH 3/7] percpu: clean up percpu variable definitions Tejun Heo
2009-05-20  7:37   ` Tejun Heo
2009-05-20  9:17   ` Jens Axboe
2009-05-20  9:17     ` Jens Axboe
2009-05-25  6:07   ` Rusty Russell
2009-05-25  6:07     ` Rusty Russell
2009-05-25 16:07     ` Tejun Heo
2009-05-25 16:07       ` Tejun Heo
