Subject: [RFC PATCH v2 1/4] cpuidle: (powerpc) Add cpu_idle_wait() to allow switching of idle routines
To: linuxppc-dev@ozlabs.org, linux-pm@lists.linux-foundation.org, linux-pm@vger.kernel.org
From: Deepthi Dharwar
Date: Thu, 17 Nov 2011 16:58:34 +0530
Message-ID: <20111117112830.9191.1951.stgit@localhost6.localdomain6>
In-Reply-To: <20111117112815.9191.2322.stgit@localhost6.localdomain6>
References: <20111117112815.9191.2322.stgit@localhost6.localdomain6>
Cc: linux-kernel@vger.kernel.org
List-Id: Linux on PowerPC Developers Mail List

This patch provides the cpu_idle_wait() routine for the powerpc platform, which is required by the cpuidle subsystem. The routine is needed to change the idle handler on SMP systems. The equivalent routine for x86 lives in arch/x86/kernel/process.c, but the powerpc implementation is different.
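For illustration only (not part of this patch), the sketch below shows the intended calling pattern: a caller first installs the new idle handler and then invokes cpu_idle_wait() so that every online CPU is kicked out of the old idle routine. The names my_new_idle() and switch_idle_handler() are hypothetical; only ppc_md.power_save and cpu_idle_wait() come from the kernel and this patch.

	/*
	 * Hypothetical caller, for illustration only: switch the powerpc
	 * idle handler, then make sure all online CPUs leave the old one.
	 */
	#include <asm/machdep.h>	/* ppc_md */
	#include <asm/system.h>		/* cpu_idle_wait(), after this patch */

	static void my_new_idle(void)
	{
		/* hypothetical replacement power_save handler */
	}

	static void switch_idle_handler(void)
	{
		/* Publish the new handler first ... */
		ppc_md.power_save = my_new_idle;

		/*
		 * ... then ask every other online CPU to break out of the
		 * old idle loop so it picks up the new handler on its
		 * next pass through cpu_idle().
		 */
		cpu_idle_wait();
	}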
Signed-off-by: Deepthi Dharwar
Signed-off-by: Trinabh Gupta
Signed-off-by: Arun R Bharadwaj
---
 arch/powerpc/Kconfig                 |    4 ++++
 arch/powerpc/include/asm/processor.h |    2 ++
 arch/powerpc/include/asm/system.h    |    1 +
 arch/powerpc/kernel/idle.c           |   26 ++++++++++++++++++++++++++
 4 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b177caa..87f8604 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -87,6 +87,10 @@ config ARCH_HAS_ILOG2_U64
 	bool
 	default y if 64BIT
 
+config ARCH_HAS_CPU_IDLE_WAIT
+	bool
+	default y
+
 config GENERIC_HWEIGHT
 	bool
 	default y
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index eb11a44..811b7e7 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -382,6 +382,8 @@ static inline unsigned long get_clean_sp(struct pt_regs *regs, int is_32)
 }
 #endif
 
+enum idle_boot_override {IDLE_NO_OVERRIDE = 0, IDLE_POWERSAVE_OFF};
+
 #endif /* __KERNEL__ */
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_PROCESSOR_H */
diff --git a/arch/powerpc/include/asm/system.h b/arch/powerpc/include/asm/system.h
index e30a13d..ff66680 100644
--- a/arch/powerpc/include/asm/system.h
+++ b/arch/powerpc/include/asm/system.h
@@ -221,6 +221,7 @@ extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
 extern int powersave_nap;	/* set if nap mode can be used in idle loop */
+void cpu_idle_wait(void);
 
 /*
  * Atomic exchange
diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
index 39a2baa..b478c72 100644
--- a/arch/powerpc/kernel/idle.c
+++ b/arch/powerpc/kernel/idle.c
@@ -39,9 +39,13 @@
 #define cpu_should_die()	0
 #endif
 
+unsigned long boot_option_idle_override = IDLE_NO_OVERRIDE;
+EXPORT_SYMBOL(boot_option_idle_override);
+
 static int __init powersave_off(char *arg)
 {
 	ppc_md.power_save = NULL;
+	boot_option_idle_override = IDLE_POWERSAVE_OFF;
 	return 0;
 }
 __setup("powersave=off", powersave_off);
@@ -102,6 +106,28 @@ void cpu_idle(void)
 	}
 }
 
+
+/*
+ * cpu_idle_wait - Used to ensure that all the CPUs come out of the old
+ * idle loop and start using the new idle loop.
+ * Required while changing idle handler on SMP systems.
+ * Caller must have changed idle handler to the new value before the call.
+ */
+void cpu_idle_wait(void)
+{
+	int cpu;
+	smp_mb();
+
+	/* kick all the CPUs so that they exit out of old idle routine */
+	get_online_cpus();
+	for_each_online_cpu(cpu) {
+		if (cpu != smp_processor_id())
+			smp_send_reschedule(cpu);
+	}
+	put_online_cpus();
+}
+EXPORT_SYMBOL_GPL(cpu_idle_wait);
+
 int powersave_nap;
 
 #ifdef CONFIG_SYSCTL