From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932336AbcLGHKN (ORCPT );
	Wed, 7 Dec 2016 02:10:13 -0500
Received: from mail.linuxfoundation.org ([140.211.169.12]:58408 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752855AbcLGHKK (ORCPT );
	Wed, 7 Dec 2016 02:10:10 -0500
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman,
	stable@vger.kernel.org,
	Tony Thompson,
	Vladimir Murzin,
	James Morse,
	Suzuki K Poulose,
	Will Deacon
Subject: [PATCH 4.8 33/35] arm64: cpufeature: Schedule enable() calls instead of calling them via IPI
Date: Wed,  7 Dec 2016 08:08:49 +0100
Message-Id: <20161207070724.064814913@linuxfoundation.org>
X-Mailer: git-send-email 2.10.2
In-Reply-To: <20161207070722.410336250@linuxfoundation.org>
References: <20161207070722.410336250@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

4.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: James Morse

commit 2a6dcb2b5f3e21592ca8dfa198dcce7bec09b020 upstream.

The enable() call for a cpufeature/errata is called using on_each_cpu().
This issues a cross-call IPI to get the work done. Implicitly, this
stashes the running PSTATE in SPSR when the CPU receives the IPI, and
restores it when we return. This means an enable() call can never modify
PSTATE.

To allow PAN to do this, change the on_each_cpu() call to use
stop_machine(). This schedules the work on each CPU, which allows
us to modify PSTATE.

This involves changing the prototype of all the enable() functions.

enable_cpu_capabilities() is called during boot and enables the feature
on all online CPUs. This path now uses stop_machine(). CPU features for
hotplug'd CPUs are enabled by verify_local_cpu_features(), which only
acts on the local CPU, and can already modify the running PSTATE as it
is called from secondary_start_kernel().
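As a rough sketch of the calling convention described above (illustration
only, not part of the patch; my_feature_enable() and my_enable_all() are
made-up names, while stop_machine(), on_each_cpu() and cpu_online_mask are
the real kernel interfaces involved), an int-returning enable callback
handed to stop_machine() looks roughly like this:

#include <linux/cpumask.h>
#include <linux/stop_machine.h>

/*
 * stop_machine() takes an int-returning callback (cpu_stop_fn_t), which is
 * why every enable() prototype changes from void to int in this patch. The
 * callback runs from each CPU's stopper thread rather than from IPI
 * context, so a PSTATE change made here is not undone by the exception
 * return that ends an IPI handler.
 */
static int my_feature_enable(void *unused)
{
	/* e.g. flip a PSTATE or SCTLR_EL1 bit here */
	return 0;
}

static void my_enable_all(void)
{
	/*
	 * With on_each_cpu() the callback would run in IPI context: the
	 * interrupted PSTATE is stashed in SPSR_EL1 on entry and restored on
	 * return, silently discarding any PSTATE modification the callback
	 * made. Scheduling it via stop_machine() avoids that.
	 */
	stop_machine(my_feature_enable, NULL, cpu_online_mask);
}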
Reported-by: Tony Thompson
Reported-by: Vladimir Murzin
Signed-off-by: James Morse
Cc: Suzuki K Poulose
Signed-off-by: Will Deacon
[Removed enable() hunks for A53 workaround]
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

---
 arch/arm64/include/asm/cpufeature.h |    2 +-
 arch/arm64/include/asm/processor.h  |    6 +++---
 arch/arm64/kernel/cpufeature.c      |   10 +++++++++-
 arch/arm64/kernel/traps.c           |    3 ++-
 arch/arm64/mm/fault.c               |    6 ++++--
 5 files changed, 19 insertions(+), 8 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -90,7 +90,7 @@ struct arm64_cpu_capabilities {
 	u16 capability;
 	int def_scope;			/* default scope */
 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
-	void (*enable)(void *);		/* Called on all active CPUs */
+	int (*enable)(void *);		/* Called on all active CPUs */
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -190,8 +190,8 @@ static inline void spin_lock_prefetch(co
 
 #endif
 
-void cpu_enable_pan(void *__unused);
-void cpu_enable_uao(void *__unused);
-void cpu_enable_cache_maint_trap(void *__unused);
+int cpu_enable_pan(void *__unused);
+int cpu_enable_uao(void *__unused);
+int cpu_enable_cache_maint_trap(void *__unused);
 
 #endif /* __ASM_PROCESSOR_H */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -19,7 +19,9 @@
 #define pr_fmt(fmt) "CPU features: " fmt
 
 #include <linux/bsearch.h>
+#include <linux/cpumask.h>
 #include <linux/sort.h>
+#include <linux/stop_machine.h>
 #include <linux/types.h>
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
@@ -936,7 +938,13 @@ void __init enable_cpu_capabilities(cons
 {
 	for (; caps->matches; caps++)
 		if (caps->enable && cpus_have_cap(caps->capability))
-			on_each_cpu(caps->enable, NULL, true);
+			/*
+			 * Use stop_machine() as it schedules the work allowing
+			 * us to modify PSTATE, instead of on_each_cpu() which
+			 * uses an IPI, giving us a PSTATE that disappears when
+			 * we return.
+			 */
+			stop_machine(caps->enable, NULL, cpu_online_mask);
 }
 
 /*
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -428,9 +428,10 @@ asmlinkage void __exception do_undefinst
 	force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
 }
 
-void cpu_enable_cache_maint_trap(void *__unused)
+int cpu_enable_cache_maint_trap(void *__unused)
 {
 	config_sctlr_el1(SCTLR_EL1_UCI, 0);
+	return 0;
 }
 
 #define __user_cache_maint(insn, address, res)			\
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -671,9 +671,10 @@ asmlinkage int __exception do_debug_exce
 NOKPROBE_SYMBOL(do_debug_exception);
 
 #ifdef CONFIG_ARM64_PAN
-void cpu_enable_pan(void *__unused)
+int cpu_enable_pan(void *__unused)
 {
 	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
+	return 0;
 }
 #endif /* CONFIG_ARM64_PAN */
 
@@ -684,8 +685,9 @@ void cpu_enable_pan(void *__unused)
  * We need to enable the feature at runtime (instead of adding it to
  * PSR_MODE_EL1h) as the feature may not be implemented by the cpu.
  */
-void cpu_enable_uao(void *__unused)
+int cpu_enable_uao(void *__unused)
 {
 	asm(SET_PSTATE_UAO(1));
+	return 0;
 }
 #endif /* CONFIG_ARM64_UAO */