Message-Id: <20071128211526.640694725@sgi.com>
References: <20071128210926.008783214@sgi.com>
User-Agent: quilt/0.46-1
Date: Wed, 28 Nov 2007 13:09:29 -0800
From: Christoph Lameter
To: akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, Rusty Russell, Andi Kleen
Subject: [patch 03/10] percpu: Make the asm-generic/percpu.h more "generic"
Content-Disposition: inline; filename=genericize-percpu.h
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

V1->V2:
- add support for PER_CPU_ATTRIBUTES

Add the ability to use asm-generic/percpu.h even if the arch needs to
override several aspects of its operation. This will enable the use of
the generic percpu.h for all arches.

An arch may define:

__per_cpu_offset
	Do not use the generic pointer array. The arch must define
	per_cpu_offset(cpu) instead (used by x86_64, s390).

__my_cpu_offset
	Can be defined to provide an optimized way to determine the
	offset for variables of the currently executing processor.
	Used by ia64, x86_64, x86_32, sparc64, s/390.

SHIFT_PTR(ptr, offset)
	If an arch defines it then special handling of pointer
	arithmetic may be implemented. Used by s/390.

(Some of these special percpu arch implementations may later be
consolidated so that there are fewer cases to deal with.)

Cc: Rusty Russell
Cc: Andi Kleen
Signed-off-by: Christoph Lameter

---
 include/asm-generic/percpu.h |   69 ++++++++++++++++++++++++++++++++++++-------
 1 file changed, 58 insertions(+), 11 deletions(-)

Index: linux-2.6.24-rc3-mm2/include/asm-generic/percpu.h
===================================================================
--- linux-2.6.24-rc3-mm2.orig/include/asm-generic/percpu.h	2007-11-28 12:51:42.448213150 -0800
+++ linux-2.6.24-rc3-mm2/include/asm-generic/percpu.h	2007-11-28 12:51:45.311964069 -0800
@@ -3,27 +3,74 @@
 #include <linux/compiler.h>
 #include <linux/threads.h>

+/*
+ * Determine the real variable name from the name visible in the
+ * kernel sources.
+ */
+#define per_cpu_var(var) per_cpu__##var
+
 #ifdef CONFIG_SMP

+/*
+ * per_cpu_offset() is the offset that has to be added to a
+ * percpu variable to get to the instance for a certain processor.
+ *
+ * Most arches use the __per_cpu_offset array for those offsets but
+ * some arches have their own ways of determining the offset (x86_64, s390).
+ */
+#ifndef __per_cpu_offset
 extern unsigned long __per_cpu_offset[NR_CPUS];
-
 #define per_cpu_offset(x) (__per_cpu_offset[x])
+#endif

-/* var is in discarded region: offset to particular copy we want */
-#define per_cpu(var, cpu) (*({				\
-	extern int simple_identifier_##var(void);	\
-	RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]); }))
-#define __get_cpu_var(var) per_cpu(var, smp_processor_id())
-#define __raw_get_cpu_var(var) per_cpu(var, raw_smp_processor_id())
+/*
+ * Determine the offset for the currently active processor.
+ * An arch may define __my_cpu_offset to provide a more effective
+ * means of obtaining the offset to the per cpu variables of the
+ * current processor.
+ */
+#ifndef __my_cpu_offset
+#define __my_cpu_offset per_cpu_offset(raw_smp_processor_id())
+#define my_cpu_offset per_cpu_offset(smp_processor_id())
+#else
+#define my_cpu_offset __my_cpu_offset
+#endif
+
+/*
+ * Add an offset to a pointer but keep the pointer as is.
+ *
+ * Only S390 provides its own means of moving the pointer.
+ */
+#ifndef SHIFT_PTR
+#define SHIFT_PTR(__p, __offset) RELOC_HIDE((__p), (__offset))
+#endif
+
+/*
+ * A percpu variable may point to a discarded region. The following are
+ * established ways to produce a usable pointer from the percpu variable
+ * offset.
+ */
+#define per_cpu(var, cpu) (*SHIFT_PTR(&per_cpu_var(var), per_cpu_offset(cpu)))
+#define __get_cpu_var(var) (*SHIFT_PTR(&per_cpu_var(var), my_cpu_offset))
+#define __raw_get_cpu_var(var) (*SHIFT_PTR(&per_cpu_var(var), __my_cpu_offset))
+
+#ifdef CONFIG_ARCH_SETS_UP_PER_CPU_AREA
+extern void setup_per_cpu_areas(void);
+#endif

 #else /* ! SMP */

-#define per_cpu(var, cpu) (*((void)(cpu), &per_cpu__##var))
-#define __get_cpu_var(var) per_cpu__##var
-#define __raw_get_cpu_var(var) per_cpu__##var
+#define per_cpu(var, cpu) (*((void)(cpu), &per_cpu_var(var)))
+#define __get_cpu_var(var) per_cpu_var(var)
+#define __raw_get_cpu_var(var) per_cpu_var(var)

 #endif /* SMP */

-#define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name
+#ifndef PER_CPU_ATTRIBUTES
+#define PER_CPU_ATTRIBUTES
+#endif
+
+#define DECLARE_PER_CPU(type, name) extern PER_CPU_ATTRIBUTES \
+					__typeof__(type) per_cpu_var(name)

 #endif /* _ASM_GENERIC_PERCPU_H_ */

-- 
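
As an illustration of how an arch could hook into the generic file once this
applies, here is a minimal sketch of a hypothetical include/asm-foo/percpu.h.
The arch name "foo" and the read_percpu_base_reg() helper are invented for the
example; only the override points (__my_cpu_offset, __per_cpu_offset,
SHIFT_PTR) come from the patch above, and real arches each use their own
mechanisms.

/*
 * Hypothetical include/asm-foo/percpu.h -- a sketch only, not part of
 * the patch.  Assumes the arch keeps the offset of the current
 * processor's percpu area in a special register that a (made-up)
 * read_percpu_base_reg() helper can read cheaply.
 */
#ifndef _ASM_FOO_PERCPU_H_
#define _ASM_FOO_PERCPU_H_

#ifdef CONFIG_SMP
/* Fictional arch helper returning this CPU's percpu offset. */
extern unsigned long read_percpu_base_reg(void);

/*
 * Overriding __my_cpu_offset makes __get_cpu_var()/__raw_get_cpu_var()
 * use the register instead of __per_cpu_offset[smp_processor_id()].
 * __per_cpu_offset and SHIFT_PTR are not overridden, so per_cpu(var, cpu)
 * keeps using the generic offset array and RELOC_HIDE().
 */
#define __my_cpu_offset read_percpu_base_reg()
#endif /* CONFIG_SMP */

/* Everything not overridden above falls through to the generic code. */
#include <asm-generic/percpu.h>

#endif /* _ASM_FOO_PERCPU_H_ */

An arch with its own offset table would instead define __per_cpu_offset plus
per_cpu_offset(cpu), and s390-style pointer handling would come from defining
SHIFT_PTR(ptr, offset); in every case the definitions have to precede the
include of the generic file so its #ifndef checks see them.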