* [PATCH 0/3] PAN Fixes
@ 2016-10-18 10:27 James Morse
  2016-10-18 10:27 ` [PATCH 1/3] arm64: cpufeature: Schedule enable() calls instead of calling them via IPI James Morse
From: James Morse @ 2016-10-18 10:27 UTC (permalink / raw)
  To: linux-arm-kernel

Hi all,

This series fixes two issues for PAN discovered by Vladimir and Tony:
 * Patch 2 changes cpu_enable_pan() to not only enable the automatic
   PAN setting on return to the kernel from userspace, but also to set
   it immediately. This covers the case where a preempted task is
   migrated to a new CPU that hasn't yet done a return-to-user.

 * Patch 1 is a prerequisite which fixes the enable() calls not to use
   an IPI (details in the patch). This means we can modify PSTATE from
   an enable() call, something that is broken today, although nothing
   actually depends on it yet.

Patch 3 fixes a third issue where we lose the PSTATE value over cpu-idle.
This is a problem in the same 'preempted task migrated to a new CPU' case
above, and whenever we return from idle to a user task (which I believe
suspend-to-ram does). A rough sketch of the first case is below.

Patch 1 changes the prototype of all the enable calls, so can't be
backported. I will produce separate backports for v4.4.25 and v4.7.8.

Based on v4.9-rc1, with [0] applied locally to fix CPU hotplug. This
series can be retrieved from:


Thanks,

James


[0]  https://www.spinics.net/lists/kernel/msg2357812.html

James Morse (3):
  arm64: cpufeature: Schedule enable() calls instead of calling them via
    IPI
  arm64: mm: Set PSTATE.PAN from the cpu_enable_pan() call
  arm64: suspend: Reconfigure PSTATE after resume from idle

 arch/arm64/include/asm/cpufeature.h |  2 +-
 arch/arm64/include/asm/exec.h       |  3 +++
 arch/arm64/include/asm/processor.h  |  6 +++---
 arch/arm64/kernel/cpu_errata.c      |  3 ++-
 arch/arm64/kernel/cpufeature.c      | 10 +++++++++-
 arch/arm64/kernel/process.c         |  3 ++-
 arch/arm64/kernel/suspend.c         | 11 +++++++++++
 arch/arm64/kernel/traps.c           |  3 ++-
 arch/arm64/mm/fault.c               | 15 +++++++++++++--
 9 files changed, 46 insertions(+), 10 deletions(-)

-- 
2.8.0.rc3


* [PATCH 1/3] arm64: cpufeature: Schedule enable() calls instead of calling them via IPI
  2016-10-18 10:27 [PATCH 0/3] PAN Fixes James Morse
@ 2016-10-18 10:27 ` James Morse
  2016-10-18 10:27 ` [PATCH 2/3] arm64: mm: Set PSTATE.PAN from the cpu_enable_pan() call James Morse
  2016-10-18 10:27 ` [PATCH 3/3] arm64: suspend: Reconfigure PSTATE after resume from idle James Morse
From: James Morse @ 2016-10-18 10:27 UTC (permalink / raw)
  To: linux-arm-kernel

The enable() call for a cpufeature/errata is currently invoked using
on_each_cpu(). This issues a cross-call IPI to get the work done.
Implicitly, this stashes the running PSTATE in SPSR when the CPU
receives the IPI and restores it when we return. This means an enable()
call can never modify PSTATE.
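
As an illustration (my sketch, not part of this patch): a callback run
in IPI context can execute a PSTATE-modifying instruction, but the
change evaporates on the exception return:

	/* hypothetical enable() callback, invoked via on_each_cpu() */
	static void enable_feature_via_ipi(void *__unused)
	{
		asm(SET_PSTATE_PAN(1));	/* takes effect immediately... */
	}
	/*
	 * ...but the IPI handler exits via exception return, which
	 * restores the interrupted context's PSTATE from SPSR_EL1,
	 * discarding the update.
	 */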

To allow PAN to do this, change the on_each_cpu() call to use
stop_machine(). This schedules the work on each CPU, which allows
us to modify PSTATE.

This involves changing the prototype of all the enable() functions.
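
The prototype change falls out of the two APIs' callback types (quoting
the generic headers from memory, worth double-checking):

	typedef void (*smp_call_func_t)(void *info);	/* on_each_cpu() */
	typedef int (*cpu_stop_fn_t)(void *arg);	/* stop_machine() */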

enable_cpu_capabilities() is called during boot and enables the feature
on all online CPUs. This path now uses stop_machine(). CPU features for
hotplug'd CPUs are enabled by verify_local_cpu_features(), which only
acts on the local CPU and can already modify the running PSTATE, as it
is called from secondary_start_kernel().
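
For reference, the two enable paths look roughly like this (my summary,
using only the functions named above):

	Boot:    enable_cpu_capabilities()
	           -> stop_machine(caps->enable, NULL, cpu_online_mask)
	Hotplug: secondary_start_kernel()
	           -> ... -> verify_local_cpu_features()
	                       -> caps->enable(NULL)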

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>

---
The UAO enable call also suffers from this problem, but it is useless
and broken. After this patch it is merely useless, and will be removed
by a later patch (replaced with a comment describing what actually
happens).

This patch doesn't apply to linux-stable versions before v4.8 because
it conflicts with every feature introduced since v4.4. If you want this
in stable, give me a kick and I will produce versions for v4.4.25 and
v4.7.8 (the stable kernels since v4.3, when PAN was introduced).

 arch/arm64/include/asm/cpufeature.h |  2 +-
 arch/arm64/include/asm/processor.h  |  6 +++---
 arch/arm64/kernel/cpu_errata.c      |  3 ++-
 arch/arm64/kernel/cpufeature.c      | 10 +++++++++-
 arch/arm64/kernel/traps.c           |  3 ++-
 arch/arm64/mm/fault.c               |  6 ++++--
 6 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 758d74fedfad..a27c3245ba21 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -94,7 +94,7 @@ struct arm64_cpu_capabilities {
 	u16 capability;
 	int def_scope;			/* default scope */
 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
-	void (*enable)(void *);		/* Called on all active CPUs */
+	int (*enable)(void *);		/* Called on all active CPUs */
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index df2e53d3a969..60e34824e18c 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -188,8 +188,8 @@ static inline void spin_lock_prefetch(const void *ptr)
 
 #endif
 
-void cpu_enable_pan(void *__unused);
-void cpu_enable_uao(void *__unused);
-void cpu_enable_cache_maint_trap(void *__unused);
+int cpu_enable_pan(void *__unused);
+int cpu_enable_uao(void *__unused);
+int cpu_enable_cache_maint_trap(void *__unused);
 
 #endif /* __ASM_PROCESSOR_H */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 0150394f4cab..b75e917aac46 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -39,10 +39,11 @@ has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
 		(arm64_ftr_reg_ctrel0.sys_val & arm64_ftr_reg_ctrel0.strict_mask);
 }
 
-static void cpu_enable_trap_ctr_access(void *__unused)
+static int cpu_enable_trap_ctr_access(void *__unused)
 {
 	/* Clear SCTLR_EL1.UCT */
 	config_sctlr_el1(SCTLR_EL1_UCT, 0);
+	return 0;
 }
 
 #define MIDR_RANGE(model, min, max) \
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d577f263cc4a..c02504ea304b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -19,7 +19,9 @@
 #define pr_fmt(fmt) "CPU features: " fmt
 
 #include <linux/bsearch.h>
+#include <linux/cpumask.h>
 #include <linux/sort.h>
+#include <linux/stop_machine.h>
 #include <linux/types.h>
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
@@ -941,7 +943,13 @@ void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
 {
 	for (; caps->matches; caps++)
 		if (caps->enable && cpus_have_cap(caps->capability))
-			on_each_cpu(caps->enable, NULL, true);
+			/*
+			 * Use stop_machine() as it schedules the work allowing
+			 * us to modify PSTATE, instead of on_each_cpu() which
+			 * uses an IPI, giving us a PSTATE that disappears when
+			 * we return.
+			 */
+			stop_machine(caps->enable, NULL, cpu_online_mask);
 }
 
 /*
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 5ff020f8fb7f..e3a9f8da16e5 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -428,9 +428,10 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
 	force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
 }
 
-void cpu_enable_cache_maint_trap(void *__unused)
+int cpu_enable_cache_maint_trap(void *__unused)
 {
 	config_sctlr_el1(SCTLR_EL1_UCI, 0);
+	return 0;
 }
 
 #define __user_cache_maint(insn, address, res)			\
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 53d9159662fe..3e9ff9b0c78d 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -670,9 +670,10 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
 NOKPROBE_SYMBOL(do_debug_exception);
 
 #ifdef CONFIG_ARM64_PAN
-void cpu_enable_pan(void *__unused)
+int cpu_enable_pan(void *__unused)
 {
 	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
+	return 0;
 }
 #endif /* CONFIG_ARM64_PAN */
 
@@ -683,8 +684,9 @@ void cpu_enable_pan(void *__unused)
  * We need to enable the feature at runtime (instead of adding it to
  * PSR_MODE_EL1h) as the feature may not be implemented by the cpu.
  */
-void cpu_enable_uao(void *__unused)
+int cpu_enable_uao(void *__unused)
 {
 	asm(SET_PSTATE_UAO(1));
+	return 0;
 }
 #endif /* CONFIG_ARM64_UAO */
-- 
2.8.0.rc3


* [PATCH 2/3] arm64: mm: Set PSTATE.PAN from the cpu_enable_pan() call
  2016-10-18 10:27 [PATCH 0/3] PAN Fixes James Morse
  2016-10-18 10:27 ` [PATCH 1/3] arm64: cpufeature: Schedule enable() calls instead of calling them via IPI James Morse
@ 2016-10-18 10:27 ` James Morse
  2016-10-19 16:52   ` Will Deacon
  2016-10-18 10:27 ` [PATCH 3/3] arm64: suspend: Reconfigure PSTATE after resume from idle James Morse
From: James Morse @ 2016-10-18 10:27 UTC (permalink / raw)
  To: linux-arm-kernel

Commit 338d4f49d6f7 ("arm64: kernel: Add support for Privileged Access
Never") enabled PAN by clearing the 'SPAN' feature-bit in SCTLR_EL1, so
that hardware sets PSTATE.PAN whenever an exception is taken to the
kernel. This means the PSTATE.PAN bit won't be set until the next entry
to the kernel from userspace. On a preemptible kernel we may schedule
work that accesses userspace on a CPU before it has done this.

Now that cpufeature enable() calls are scheduled via stop_machine(), we
can set PSTATE.PAN from the cpu_enable_pan() call.

Add WARN_ON_ONCE(in_interrupt()) to check that the PSTATE value we
update is not immediately discarded.

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>

---
This patch depends on 'arm64: cpufeature: Schedule enable() calls instead
of calling them via IPI', which doesn't apply to linux-stable versions
before v4.8.

 arch/arm64/mm/fault.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 3e9ff9b0c78d..f942ab6cc206 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -29,7 +29,9 @@
 #include <linux/sched.h>
 #include <linux/highmem.h>
 #include <linux/perf_event.h>
+#include <linux/preempt.h>
 
+#include <asm/bug.h>
 #include <asm/cpufeature.h>
 #include <asm/exception.h>
 #include <asm/debug-monitors.h>
@@ -672,7 +674,14 @@ NOKPROBE_SYMBOL(do_debug_exception);
 #ifdef CONFIG_ARM64_PAN
 int cpu_enable_pan(void *__unused)
 {
+	/*
+	 * We modify PSTATE. This won't work from irq context as the PSTATE
+	 * is discared once we return from the exception.
+	 */
+	WARN_ON_ONCE(in_interrupt());
+
 	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
+	asm(SET_PSTATE_PAN(1));
 	return 0;
 }
 #endif /* CONFIG_ARM64_PAN */
-- 
2.8.0.rc3


* [PATCH 3/3] arm64: suspend: Reconfigure PSTATE after resume from idle
  2016-10-18 10:27 [PATCH 0/3] PAN Fixes James Morse
  2016-10-18 10:27 ` [PATCH 1/3] arm64: cpufeature: Schedule enable() calls instead of calling them via IPI James Morse
  2016-10-18 10:27 ` [PATCH 2/3] arm64: mm: Set PSTATE.PAN from the cpu_enable_pan() call James Morse
@ 2016-10-18 10:27 ` James Morse
  2016-10-20 11:27   ` Lorenzo Pieralisi
From: James Morse @ 2016-10-18 10:27 UTC (permalink / raw)
  To: linux-arm-kernel

The suspend/resume path in kernel/sleep.S, as used by cpu-idle, does not
save/restore PSTATE. As a result, cpufeatures that were detected and
have bits in PSTATE get lost when we resume from idle.

UAO gets set appropriately on the next context switch. PAN will be
re-enabled the next time we return from user-space, but on a preemptible
kernel we may run work that accesses user space before this point.

Add code to re-enable these two features in __cpu_suspend_exit().
We re-use uao_thread_switch(), passing current.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>

---
This patch applies to linux-stable v4.7.8 with some fuzz, but 'git am'
rejects it.

asm/exec.h is my best guess at the appropriate header file. Contradictions
welcome.

 arch/arm64/include/asm/exec.h |  3 +++
 arch/arm64/kernel/process.c   |  3 ++-
 arch/arm64/kernel/suspend.c   | 11 +++++++++++
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/exec.h b/arch/arm64/include/asm/exec.h
index db0563c23482..f7865dd9d868 100644
--- a/arch/arm64/include/asm/exec.h
+++ b/arch/arm64/include/asm/exec.h
@@ -18,6 +18,9 @@
 #ifndef __ASM_EXEC_H
 #define __ASM_EXEC_H
 
+#include <linux/sched.h>
+
 extern unsigned long arch_align_stack(unsigned long sp);
+void uao_thread_switch(struct task_struct *next);
 
 #endif	/* __ASM_EXEC_H */
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 27b2f1387df4..4f186c56c5eb 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -49,6 +49,7 @@
 #include <asm/alternative.h>
 #include <asm/compat.h>
 #include <asm/cacheflush.h>
+#include <asm/exec.h>
 #include <asm/fpsimd.h>
 #include <asm/mmu_context.h>
 #include <asm/processor.h>
@@ -301,7 +302,7 @@ static void tls_thread_switch(struct task_struct *next)
 }
 
 /* Restore the UAO state depending on next's addr_limit */
-static void uao_thread_switch(struct task_struct *next)
+void uao_thread_switch(struct task_struct *next)
 {
 	if (IS_ENABLED(CONFIG_ARM64_UAO)) {
 		if (task_thread_info(next)->addr_limit == KERNEL_DS)
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index ad734142070d..bb0cd787a9d3 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -1,8 +1,11 @@
 #include <linux/ftrace.h>
 #include <linux/percpu.h>
 #include <linux/slab.h>
+#include <asm/alternative.h>
 #include <asm/cacheflush.h>
+#include <asm/cpufeature.h>
 #include <asm/debug-monitors.h>
+#include <asm/exec.h>
 #include <asm/pgtable.h>
 #include <asm/memory.h>
 #include <asm/mmu_context.h>
@@ -50,6 +53,14 @@ void notrace __cpu_suspend_exit(void)
 	set_my_cpu_offset(per_cpu_offset(cpu));
 
 	/*
+	 * PSTATE was not saved over suspend/resume, re-enable any detected
+	 * features that might not have been set correctly.
+	 */
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
+			CONFIG_ARM64_PAN));
+	uao_thread_switch(current);
+
+	/*
 	 * Restore HW breakpoint registers to sane values
 	 * before debug exceptions are possibly reenabled
 	 * through local_dbg_restore.
-- 
2.8.0.rc3


* [PATCH 2/3] arm64: mm: Set PSTATE.PAN from the cpu_enable_pan() call
  2016-10-18 10:27 ` [PATCH 2/3] arm64: mm: Set PSTATE.PAN from the cpu_enable_pan() call James Morse
@ 2016-10-19 16:52   ` Will Deacon
From: Will Deacon @ 2016-10-19 16:52 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Oct 18, 2016 at 11:27:47AM +0100, James Morse wrote:
> Commit 338d4f49d6f7 ("arm64: kernel: Add support for Privileged Access
> Never") enabled PAN by clearing the 'SPAN' feature-bit in SCTLR_EL1, so
> that hardware sets PSTATE.PAN whenever an exception is taken to the
> kernel. This means the PSTATE.PAN bit won't be set until the next entry
> to the kernel from userspace. On a preemptible kernel we may schedule
> work that accesses userspace on a CPU before it has done this.
> 
> Now that cpufeature enable() calls are scheduled via stop_machine(), we
> can set PSTATE.PAN from the cpu_enable_pan() call.
> 
> Add WARN_ON_ONCE(in_interrupt()) to check that the PSTATE value we
> update is not immediately discarded.
> 
> Reported-by: Tony Thompson <anthony.thompson@arm.com>
> Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> 
> ---
> This patch depends on 'arm64: cpufeature: Schedule enable() calls instead
> of calling them via IPI', which doesn't apply to linux-stable versions
> before v4.8.
> 
>  arch/arm64/mm/fault.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 3e9ff9b0c78d..f942ab6cc206 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -29,7 +29,9 @@
>  #include <linux/sched.h>
>  #include <linux/highmem.h>
>  #include <linux/perf_event.h>
> +#include <linux/preempt.h>
>  
> +#include <asm/bug.h>
>  #include <asm/cpufeature.h>
>  #include <asm/exception.h>
>  #include <asm/debug-monitors.h>
> @@ -672,7 +674,14 @@ NOKPROBE_SYMBOL(do_debug_exception);
>  #ifdef CONFIG_ARM64_PAN
>  int cpu_enable_pan(void *__unused)
>  {
> +	/*
> +	 * We modify PSTATE. This won't work from irq context as the PSTATE
> +	 * is discared once we return from the exception.
> +	 */

I fixed the typo in the comment and queued these as fixes. Please take
care of stable once these are in mainline.

Will


* [PATCH 3/3] arm64: suspend: Reconfigure PSTATE after resume from idle
  2016-10-18 10:27 ` [PATCH 3/3] arm64: suspend: Reconfigure PSTATE after resume from idle James Morse
@ 2016-10-20 11:27   ` Lorenzo Pieralisi
From: Lorenzo Pieralisi @ 2016-10-20 11:27 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Oct 18, 2016 at 11:27:48AM +0100, James Morse wrote:
> The suspend/resume path in kernel/sleep.S, as used by cpu-idle, does not
> save/restore PSTATE. As a result, cpufeatures that were detected and
> have bits in PSTATE get lost when we resume from idle.
> 
> UAO gets set appropriately on the next context switch. PAN will be
> re-enabled the next time we return from user-space, but on a preemptible
> kernel we may run work that accesses user space before this point.
> 
> Add code to re-enable these two features in __cpu_suspend_exit().
> We re-use uao_thread_switch(), passing current.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
> 
> ---
> This patch applies to linux-stable v4.7.8 with some fuzz, but 'git am'
> rejects it.
> 
> asm/exec.h is my best guess at the appropriate header file. Contradictions
> welcome.

uaccess.h? It is a shame you have to export uao_thread_switch() (see
below for a possible solution), but I agree that it prevents useless
code duplication and that this needs fixing.

>  arch/arm64/include/asm/exec.h |  3 +++
>  arch/arm64/kernel/process.c   |  3 ++-
>  arch/arm64/kernel/suspend.c   | 11 +++++++++++
>  3 files changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/exec.h b/arch/arm64/include/asm/exec.h
> index db0563c23482..f7865dd9d868 100644
> --- a/arch/arm64/include/asm/exec.h
> +++ b/arch/arm64/include/asm/exec.h
> @@ -18,6 +18,9 @@
>  #ifndef __ASM_EXEC_H
>  #define __ASM_EXEC_H
>  
> +#include <linux/sched.h>
> +
>  extern unsigned long arch_align_stack(unsigned long sp);
> +void uao_thread_switch(struct task_struct *next);
>  
>  #endif	/* __ASM_EXEC_H */
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 27b2f1387df4..4f186c56c5eb 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -49,6 +49,7 @@
>  #include <asm/alternative.h>
>  #include <asm/compat.h>
>  #include <asm/cacheflush.h>
> +#include <asm/exec.h>
>  #include <asm/fpsimd.h>
>  #include <asm/mmu_context.h>
>  #include <asm/processor.h>
> @@ -301,7 +302,7 @@ static void tls_thread_switch(struct task_struct *next)
>  }
>  
>  /* Restore the UAO state depending on next's addr_limit */
> -static void uao_thread_switch(struct task_struct *next)
> +void uao_thread_switch(struct task_struct *next)
>  {
>  	if (IS_ENABLED(CONFIG_ARM64_UAO)) {
>  		if (task_thread_info(next)->addr_limit == KERNEL_DS)
> diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
> index ad734142070d..bb0cd787a9d3 100644
> --- a/arch/arm64/kernel/suspend.c
> +++ b/arch/arm64/kernel/suspend.c
> @@ -1,8 +1,11 @@
>  #include <linux/ftrace.h>
>  #include <linux/percpu.h>
>  #include <linux/slab.h>
> +#include <asm/alternative.h>
>  #include <asm/cacheflush.h>
> +#include <asm/cpufeature.h>
>  #include <asm/debug-monitors.h>
> +#include <asm/exec.h>
>  #include <asm/pgtable.h>
>  #include <asm/memory.h>
>  #include <asm/mmu_context.h>
> @@ -50,6 +53,14 @@ void notrace __cpu_suspend_exit(void)
>  	set_my_cpu_offset(per_cpu_offset(cpu));
>  
>  	/*
> +	 * PSTATE was not saved over suspend/resume, re-enable any detected
> +	 * features that might not have been set correctly.
> +	 */
> +	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
> +			CONFIG_ARM64_PAN));
> +	uao_thread_switch(current);

set_fs(get_fs());

would do (?), but that's horrendous to say the least. Maybe you can
refactor the code in asm/uaccess.h to achieve the same goal (ie, factor
out the code setting UAO from set_fs() into a separate inline that you
can also reuse in uao_thread_switch() and here).
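
Something like the below, perhaps (untested sketch; the helper name is
my invention):

	static inline void __uao_set(bool enable)
	{
		if (enable)
			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1),
					ARM64_HAS_UAO));
		else
			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0),
					ARM64_HAS_UAO));
	}

Then set_fs(), uao_thread_switch() and the resume path become one-line
callers.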

Other than that:

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>

> +
> +	/*
>  	 * Restore HW breakpoint registers to sane values
>  	 * before debug exceptions are possibly reenabled
>  	 * through local_dbg_restore.
> -- 
> 2.8.0.rc3
> 

