linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/5] arm64: kernel: Add support for Privileged Access Never
@ 2015-07-16 16:01 James Morse
  2015-07-16 16:01 ` [PATCH 1/5] arm64: kernel: preparatory: Move config_sctlr_el1 James Morse
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: James Morse @ 2015-07-16 16:01 UTC (permalink / raw)
  To: linux-arm-kernel

This series adds support for Privileged Access Never (PAN; part of the ARMv8.1
Extensions). When enabled, this feature causes a permission fault if the kernel
attempts to access memory that is also accessible by userspace. Instead, the
PAN bit must be cleared while accessing userspace memory, or the unprivileged
ldt*/stt* instructions must be used.

This series detects and enables this feature, and uses alternatives to change
{get,put}_user() et al to clear the PAN bit while they do their work.
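
The pattern applied around each access is roughly the following (a
simplified sketch of what the final patch adds to the C uaccess helpers;
the .S routines gain an equivalent assembly-side wrapper):

	/* clear PSTATE.PAN so the kernel may touch user memory */
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));

	/* ... privileged load/store through the __user pointer ... */

	/* set PSTATE.PAN again once the access is done */
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));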


James Morse (5):
  arm64: kernel: preparatory: Move config_sctlr_el1
  arm64: kernel: Add cpufeature 'enable' callback.
  arm64: kernel: Add min/max values in feature-detection register
    values.
  arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE().
  arm64: kernel: Add support for Privileged Access Never

 arch/arm64/Kconfig                   | 14 ++++++++++++++
 arch/arm64/include/asm/alternative.h | 28 +++++++++++++++++++++++++---
 arch/arm64/include/asm/cpufeature.h  |  7 +++++--
 arch/arm64/include/asm/futex.h       |  8 ++++++++
 arch/arm64/include/asm/processor.h   |  2 ++
 arch/arm64/include/asm/sysreg.h      | 18 ++++++++++++++++++
 arch/arm64/include/asm/uaccess.h     | 11 +++++++++++
 arch/arm64/kernel/armv8_deprecated.c | 11 +----------
 arch/arm64/kernel/cpufeature.c       | 34 ++++++++++++++++++++++++++++++++--
 arch/arm64/kernel/process.c          |  3 +++
 arch/arm64/lib/clear_user.S          |  8 ++++++++
 arch/arm64/lib/copy_from_user.S      |  8 ++++++++
 arch/arm64/lib/copy_in_user.S        |  8 ++++++++
 arch/arm64/lib/copy_to_user.S        |  8 ++++++++
 arch/arm64/mm/fault.c                | 23 +++++++++++++++++++++++
 15 files changed, 174 insertions(+), 17 deletions(-)

-- 
2.1.4

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/5] arm64: kernel: preparatory: Move config_sctlr_el1
  2015-07-16 16:01 [PATCH 0/5] arm64: kernel: Add support for Privileged Access Never James Morse
@ 2015-07-16 16:01 ` James Morse
  2015-07-17 11:06   ` Catalin Marinas
  2015-07-17 12:59   ` Catalin Marinas
  2015-07-16 16:01 ` [PATCH 2/5] arm64: kernel: Add cpufeature 'enable' callback James Morse
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 15+ messages in thread
From: James Morse @ 2015-07-16 16:01 UTC (permalink / raw)
  To: linux-arm-kernel

Later patches need config_sctlr_el1 to set/clear bits in the sctlr_el1
register.

This patch moves the function into a header file.
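
For example, the final patch in this series uses it to clear the
SCTLR_EL1.SPAN bit:

	config_sctlr_el1(SCTLR_EL1_SPAN, 0);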

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/sysreg.h      |  9 +++++++++
 arch/arm64/kernel/armv8_deprecated.c | 11 +----------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 5c89df0acbcb..7e419fabe75a 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -55,6 +55,15 @@ asm(
 "	.endm\n"
 );
 
+static inline void config_sctlr_el1(u32 clear, u32 set)
+{
+	u32 val;
+
+	asm volatile("mrs %0, sctlr_el1" : "=r" (val));
+	val &= ~clear;
+	val |= set;
+	asm volatile("msr sctlr_el1, %0" : : "r" (val));
+}
 #endif
 
 #endif	/* __ASM_SYSREG_H */
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 7922c2e710ca..78d56bff91fd 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -16,6 +16,7 @@
 
 #include <asm/insn.h>
 #include <asm/opcodes.h>
+#include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/traps.h>
 #include <asm/uaccess.h>
@@ -504,16 +505,6 @@ ret:
 	return 0;
 }
 
-static inline void config_sctlr_el1(u32 clear, u32 set)
-{
-	u32 val;
-
-	asm volatile("mrs %0, sctlr_el1" : "=r" (val));
-	val &= ~clear;
-	val |= set;
-	asm volatile("msr sctlr_el1, %0" : : "r" (val));
-}
-
 static int cp15_barrier_set_hw_mode(bool enable)
 {
 	if (enable)
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 2/5] arm64: kernel: Add cpufeature 'enable' callback.
  2015-07-16 16:01 [PATCH 0/5] arm64: kernel: Add support for Privileged Access Never James Morse
  2015-07-16 16:01 ` [PATCH 1/5] arm64: kernel: preparatory: Move config_sctlr_el1 James Morse
@ 2015-07-16 16:01 ` James Morse
  2015-07-17 11:06   ` Catalin Marinas
  2015-07-16 16:01 ` [PATCH 3/5] arm64: kernel: Add min/max values in feature-detection register values James Morse
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: James Morse @ 2015-07-16 16:01 UTC (permalink / raw)
  To: linux-arm-kernel

This patch adds an 'enable()' callback to cpu capability/feature
detection, allowing features that require some setup or configuration
to get this opportunity once the feature has been detected.
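
For illustration, the final patch wires the callback up roughly like
this (abridged from the PAN capability entry added there):

	{
		.desc = "Privileged Access Never",
		.capability = ARM64_HAS_PAN,
		.matches = has_id_aa64mmfr1_feature,
		.enable = cpu_enable_pan,	/* called once the capability is detected */
	},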

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 1 +
 arch/arm64/kernel/cpufeature.c      | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c1044218a63a..a8a201ab6cd1 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -34,6 +34,7 @@ struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
 	bool (*matches)(const struct arm64_cpu_capabilities *);
+	void (*enable)(void);
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5ad86ceac010..650ffc28bedc 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -55,6 +55,12 @@ void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			pr_info("%s %s\n", info, caps[i].desc);
 		cpus_set_cap(caps[i].capability);
 	}
+
+	/* second pass allows enable() to consider interacting capabilities */
+	for (i = 0; caps[i].desc; i++) {
+		if (cpus_have_cap(caps[i].capability) && caps[i].enable)
+			caps[i].enable();
+	}
 }
 
 void check_local_cpu_features(void)
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 3/5] arm64: kernel: Add min/max values in feature-detection register values.
  2015-07-16 16:01 [PATCH 0/5] arm64: kernel: Add support for Privileged Access Never James Morse
  2015-07-16 16:01 ` [PATCH 1/5] arm64: kernel: preparatory: Move config_sctlr_el1 James Morse
  2015-07-16 16:01 ` [PATCH 2/5] arm64: kernel: Add cpufeature 'enable' callback James Morse
@ 2015-07-16 16:01 ` James Morse
  2015-07-17 10:51   ` Will Deacon
  2015-07-16 16:01 ` [PATCH 4/5] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE() James Morse
  2015-07-16 16:01 ` [PATCH 5/5] arm64: kernel: Add support for Privileged Access Never James Morse
  4 siblings, 1 reply; 15+ messages in thread
From: James Morse @ 2015-07-16 16:01 UTC (permalink / raw)
  To: linux-arm-kernel

When a new cpu feature is available, the cpu feature bits will be 0001;
when features are updated, this value will be incremented. This patch
changes 'register_value' to be '{min,max}_register_value', and checks
the value falls in this range.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 3 ++-
 arch/arm64/kernel/cpufeature.c      | 6 ++++--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index a8a201ab6cd1..680a6a1f087e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -43,7 +43,8 @@ struct arm64_cpu_capabilities {
 
 		struct {	/* Feature register checking */
 			u64 register_mask;
-			u64 register_value;
+			u64 min_register_value;
+			u64 max_register_value;
 		};
 	};
 };
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 650ffc28bedc..f260affb825c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -28,7 +28,8 @@ has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
 	u64 val;
 
 	val = read_cpuid(id_aa64pfr0_el1);
-	return (val & entry->register_mask) == entry->register_value;
+	return ((val & entry->register_mask) >= entry->min_register_value &&
+		(val & entry->register_mask) <= entry->max_register_value);
 }
 
 static const struct arm64_cpu_capabilities arm64_features[] = {
@@ -37,7 +38,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
 		.matches = has_id_aa64pfr0_feature,
 		.register_mask = (0xf << 24),
-		.register_value = (1 << 24),
+		.min_register_value = (1 << 24),
+		.max_register_value = (1 << 24),
 	},
 	{},
 };
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 4/5] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE().
  2015-07-16 16:01 [PATCH 0/5] arm64: kernel: Add support for Privileged Access Never James Morse
                   ` (2 preceding siblings ...)
  2015-07-16 16:01 ` [PATCH 3/5] arm64: kernel: Add min/max values in feature-detection register values James Morse
@ 2015-07-16 16:01 ` James Morse
  2015-07-17 11:08   ` Catalin Marinas
  2015-07-16 16:01 ` [PATCH 5/5] arm64: kernel: Add support for Privileged Access Never James Morse
  4 siblings, 1 reply; 15+ messages in thread
From: James Morse @ 2015-07-16 16:01 UTC (permalink / raw)
  To: linux-arm-kernel

Some uses of ALTERNATIVE() may depend on a feature that is disabled at
compile time by a Kconfig option. In this case the unused alternative
instructions waste space, and if the original instruction is a nop, it
wastes time and space.

This patch adds an optional 'config' option to ALTERNATIVE() and
alternative_insn that allows the compiler to remove both the original
and alternative instructions if the config option is not defined.
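
As a usage sketch (the real call sites are added in the last patch of
this series), assembly code can then write:

	ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
		    CONFIG_ARM64_PAN)

and the whole block, including the original nop, is omitted when
CONFIG_ARM64_PAN is not selected.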

Signed-off-by: James Morse <james.morse@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/alternative.h | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index c385a0c4057f..5598182dea28 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -3,6 +3,7 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/kconfig.h>
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <linux/stringify.h>
@@ -40,7 +41,8 @@ void free_alternatives_memory(void);
  * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
  * containing commit 4e4d08cf7399b606 or c1baaddf8861).
  */
-#define ALTERNATIVE(oldinstr, newinstr, feature)			\
+#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)	\
+	".if "__stringify(cfg_enabled)" == 1\n"				\
 	"661:\n\t"							\
 	oldinstr "\n"							\
 	"662:\n"							\
@@ -53,7 +55,11 @@ void free_alternatives_memory(void);
 	"664:\n\t"							\
 	".popsection\n\t"						\
 	".org	. - (664b-663b) + (662b-661b)\n\t"			\
-	".org	. - (662b-661b) + (664b-663b)\n"
+	".org	. - (662b-661b) + (664b-663b)\n"			\
+	".endif\n"
+
+#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)	\
+	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
 
 #else
 
@@ -65,7 +71,8 @@ void free_alternatives_memory(void);
 	.byte \alt_len
 .endm
 
-.macro alternative_insn insn1 insn2 cap
+.macro alternative_insn insn1, insn2, cap, enable = 1
+	.if \enable
 661:	\insn1
 662:	.pushsection .altinstructions, "a"
 	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
@@ -75,8 +82,23 @@ void free_alternatives_memory(void);
 664:	.popsection
 	.org	. - (664b-663b) + (662b-661b)
 	.org	. - (662b-661b) + (664b-663b)
+	.endif
 .endm
 
+#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
+	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
+
+
 #endif  /*  __ASSEMBLY__  */
 
+/*
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature));
+ *
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature, CONFIG_FOO));
+ * N.B. If CONFIG_FOO is specified, but not selected, the whole block
+ *      will be omitted, including oldinstr.
+ */
+#define ALTERNATIVE(oldinstr, newinstr, ...)   \
+	_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
+
 #endif /* __ASM_ALTERNATIVE_H */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 5/5] arm64: kernel: Add support for Privileged Access Never
  2015-07-16 16:01 [PATCH 0/5] arm64: kernel: Add support for Privileged Access Never James Morse
                   ` (3 preceding siblings ...)
  2015-07-16 16:01 ` [PATCH 4/5] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE() James Morse
@ 2015-07-16 16:01 ` James Morse
  2015-07-17 12:57   ` Catalin Marinas
  4 siblings, 1 reply; 15+ messages in thread
From: James Morse @ 2015-07-16 16:01 UTC (permalink / raw)
  To: linux-arm-kernel

'Privileged Access Never' is a new ARMv8.1 feature which prevents
privileged code from accessing any virtual address where read or write
access is also permitted at EL0.

This patch enables the PAN feature on all CPUs, and modifies the
{get,put}_user helpers to temporarily permit access.

This will catch kernel bugs where user memory is accessed directly.
'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.

This has been tested for regressions on Juno and an ARMv8.1-enabled
model, using LTP and a custom module intended to trip a PAN fault.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig                  | 14 ++++++++++++++
 arch/arm64/include/asm/cpufeature.h |  3 ++-
 arch/arm64/include/asm/futex.h      |  8 ++++++++
 arch/arm64/include/asm/processor.h  |  2 ++
 arch/arm64/include/asm/sysreg.h     |  9 +++++++++
 arch/arm64/include/asm/uaccess.h    | 11 +++++++++++
 arch/arm64/kernel/cpufeature.c      | 22 ++++++++++++++++++++++
 arch/arm64/kernel/process.c         |  3 +++
 arch/arm64/lib/clear_user.S         |  8 ++++++++
 arch/arm64/lib/copy_from_user.S     |  8 ++++++++
 arch/arm64/lib/copy_in_user.S       |  8 ++++++++
 arch/arm64/lib/copy_to_user.S       |  8 ++++++++
 arch/arm64/mm/fault.c               | 23 +++++++++++++++++++++++
 13 files changed, 126 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 318175f62c24..c53a4b1d5968 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -597,6 +597,20 @@ config FORCE_MAX_ZONEORDER
 	default "14" if (ARM64_64K_PAGES && TRANSPARENT_HUGEPAGE)
 	default "11"
 
+config ARM64_PAN
+	bool "Enable support for Privileged Access Never (PAN)"
+	default y
+	help
+	 Privileged Access Never (PAN; part of the ARMv8.1 Extensions)
+	 prevents the kernel or hypervisor from accessing user-space (EL0)
+	 memory directly.
+
+	 Choosing this option will cause any unprotected (not using
+	 copy_to_user et al) memory access to fail with a permission fault.
+
+	 The feature is detected at runtime, and will remain as a 'nop'
+	 instruction if the cpu does not implement the feature.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 680a6a1f087e..27ff67d99af5 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -25,8 +25,9 @@
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
 #define ARM64_WORKAROUND_845719			2
 #define ARM64_HAS_SYSREG_GIC_CPUIF		3
+#define ARM64_HAS_PAN				4
 
-#define ARM64_NCAPS				4
+#define ARM64_NCAPS				5
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 74069b3bd919..775e85b9d1f2 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -20,10 +20,16 @@
 
 #include <linux/futex.h>
 #include <linux/uaccess.h>
+
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/errno.h>
+#include <asm/sysreg.h>
 
 #define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg)		\
 	asm volatile(							\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,		\
+		    CONFIG_ARM64_PAN)					\
 "1:	ldxr	%w1, %2\n"						\
 	insn "\n"							\
 "2:	stlxr	%w3, %w0, %2\n"						\
@@ -39,6 +45,8 @@
 "	.align	3\n"							\
 "	.quad	1b, 4b, 2b, 4b\n"					\
 "	.popsection\n"							\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,		\
+		    CONFIG_ARM64_PAN)					\
 	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp)	\
 	: "r" (oparg), "Ir" (-EFAULT)					\
 	: "memory")
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index e4c893e54f01..98f32355dc97 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -186,4 +186,6 @@ static inline void spin_lock_prefetch(const void *x)
 
 #endif
 
+void cpu_enable_pan(void);
+
 #endif /* __ASM_PROCESSOR_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7e419fabe75a..68aa1e399575 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -20,9 +20,18 @@
 #ifndef __ASM_SYSREG_H
 #define __ASM_SYSREG_H
 
+#include <asm/opcodes.h>
+
 #define sys_reg(op0, op1, crn, crm, op2) \
 	((((op0)-2)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))
 
+#define REG_PSTATE_PAN_IMM                     sys_reg(2, 0, 4, 0, 4)
+#define PSTATE_PAN                             (1 << 22)
+#define SCTLR_EL1_SPAN                         (1 << 23)
+
+#define SET_PSTATE_PAN(x) __inst_arm(0xd5000000 | REG_PSTATE_PAN_IMM |\
+				     (!!x)<<8 | 0x1f)
+
 #ifdef __ASSEMBLY__
 
 	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 07e1ba449bf1..b2ede967fe7d 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -24,7 +24,10 @@
 #include <linux/string.h>
 #include <linux/thread_info.h>
 
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/ptrace.h>
+#include <asm/sysreg.h>
 #include <asm/errno.h>
 #include <asm/memory.h>
 #include <asm/compiler.h>
@@ -131,6 +134,8 @@ static inline void set_fs(mm_segment_t fs)
 do {									\
 	unsigned long __gu_val;						\
 	__chk_user_ptr(ptr);						\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
 		__get_user_asm("ldrb", "%w", __gu_val, (ptr), (err));	\
@@ -148,6 +153,8 @@ do {									\
 		BUILD_BUG();						\
 	}								\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 } while (0)
 
 #define __get_user(x, ptr)						\
@@ -194,6 +201,8 @@ do {									\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
 	__chk_user_ptr(ptr);						\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
 		__put_user_asm("strb", "%w", __pu_val, (ptr), (err));	\
@@ -210,6 +219,8 @@ do {									\
 	default:							\
 		BUILD_BUG();						\
 	}								\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 } while (0)
 
 #define __put_user(x, ptr)						\
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f260affb825c..0464eaef6667 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -21,6 +21,7 @@
 #include <linux/types.h>
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
+#include <asm/processor.h>
 
 static bool
 has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
@@ -32,6 +33,16 @@ has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
 		(val & entry->register_mask) <= entry->max_register_value);
 }
 
+static bool __maybe_unused
+has_id_aa64mmfr1_feature(const struct arm64_cpu_capabilities *entry)
+{
+	u64 val;
+
+	val = read_cpuid(id_aa64mmfr1_el1);
+	return ((val & entry->register_mask) >= entry->min_register_value &&
+		(val & entry->register_mask) <= entry->max_register_value);
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -41,6 +52,17 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_register_value = (1 << 24),
 		.max_register_value = (1 << 24),
 	},
+#ifdef CONFIG_ARM64_PAN
+	{
+		.desc = "Privileged Access Never",
+		.capability = ARM64_HAS_PAN,
+		.matches = has_id_aa64mmfr1_feature,
+		.register_mask = (0xf << 20),
+		.min_register_value = (1 << 20),
+		.max_register_value = (2 << 20),
+		.enable = cpu_enable_pan,
+	},
+#endif /* CONFIG_ARM64_PAN */
 	{},
 };
 
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 223b093c9440..cea69aae4997 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -277,6 +277,9 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 	} else {
 		memset(childregs, 0, sizeof(struct pt_regs));
 		childregs->pstate = PSR_MODE_EL1h;
+		if (IS_ENABLED(CONFIG_ARM64_PAN) &&
+		    cpus_have_cap(ARM64_HAS_PAN))
+			childregs->pstate |= PSTATE_PAN;
 		p->thread.cpu_context.x19 = stack_start;
 		p->thread.cpu_context.x20 = stk_sz;
 	}
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index c17967fdf5f6..a9723c71c52b 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -16,7 +16,11 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 	.text
 
@@ -29,6 +33,8 @@
  * Alignment fixed up by hardware.
  */
 ENTRY(__clear_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	mov	x2, x1			// save the size for fixup return
 	subs	x1, x1, #8
 	b.mi	2f
@@ -48,6 +54,8 @@ USER(9f, strh	wzr, [x0], #2	)
 	b.mi	5f
 USER(9f, strb	wzr, [x0]	)
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__clear_user)
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 5e27add9d362..882c1544a73e 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -15,7 +15,11 @@
  */
 
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 /*
  * Copy from user space to a kernel buffer (alignment handled by the hardware)
@@ -28,6 +32,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_from_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x4, x1, x2			// upper user buffer boundary
 	subs	x2, x2, #8
 	b.mi	2f
@@ -51,6 +57,8 @@ USER(9f, ldrh	w3, [x1], #2	)
 USER(9f, ldrb	w3, [x1]	)
 	strb	w3, [x0]
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_from_user)
 
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 84b6c9bb9b93..97063c4cba75 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -17,7 +17,11 @@
  */
 
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 /*
  * Copy from user space to user space (alignment handled by the hardware)
@@ -30,6 +34,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_in_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x4, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #8
 	b.mi	2f
@@ -53,6 +59,8 @@ USER(9f, strh	w3, [x0], #2	)
 USER(9f, ldrb	w3, [x1]	)
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_in_user)
 
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index a0aeeb9b7a28..c782aaf5494d 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -15,7 +15,11 @@
  */
 
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 /*
  * Copy to user space from a kernel buffer (alignment handled by the hardware)
@@ -28,6 +32,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_to_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x4, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #8
 	b.mi	2f
@@ -51,6 +57,8 @@ USER(9f, strh	w3, [x0], #2	)
 	ldrb	w3, [x1]
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_to_user)
 
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 94d98cd1aad8..3c10dcf1537b 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -30,9 +30,11 @@
 #include <linux/highmem.h>
 #include <linux/perf_event.h>
 
+#include <asm/cpufeature.h>
 #include <asm/exception.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
+#include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
@@ -147,6 +149,13 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
 		__do_kernel_fault(mm, addr, esr, regs);
 }
 
+static bool pan_enabled(struct pt_regs *regs)
+{
+	if (IS_ENABLED(CONFIG_ARM64_PAN))
+		return ((regs->pstate & PSTATE_PAN) != 0);
+	return false;
+}
+
 #define VM_FAULT_BADMAP		0x010000
 #define VM_FAULT_BADACCESS	0x020000
 
@@ -224,6 +233,13 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	}
 
 	/*
+	 * PAN bit set implies the fault happened in kernel space, but not
+	 * in the arch's user access functions.
+	 */
+	if (pan_enabled(regs))
+		goto no_context;
+
+	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
 	 * we can bug out early if this is from code which shouldn't.
@@ -536,3 +552,10 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
 
 	return 0;
 }
+
+#ifdef CONFIG_ARM64_PAN
+void cpu_enable_pan(void)
+{
+	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
+}
+#endif /* CONFIG_ARM64_PAN */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 3/5] arm64: kernel: Add min/max values in feature-detection register values.
  2015-07-16 16:01 ` [PATCH 3/5] arm64: kernel: Add min/max values in feature-detection register values James Morse
@ 2015-07-17 10:51   ` Will Deacon
  2015-07-17 11:05     ` Catalin Marinas
  2015-07-17 11:05     ` James Morse
  0 siblings, 2 replies; 15+ messages in thread
From: Will Deacon @ 2015-07-17 10:51 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 16, 2015 at 05:01:57PM +0100, James Morse wrote:
> When a new cpu feature is available, the cpu feature bits will be 0001;
> when features are updated, this value will be incremented. This patch
> changes 'register_value' to be '{min,max}_register_value', and checks
> the value falls in this range.

I'm not sure this is completely true. For example, the new atomics are
advertised with feature bits of 0002, whilst 0001 is RESERVED and 0000
means they're not present.

Also, the problem with specifying an upper bound is that we have to keep
updating it. Elsewhere in the kernel, we've treated these as 4-bit signed
fields and if X > Y, we assume that the feature set of X is a superset of
Y. Unfortunately, the ARM ARM doesn't provide this insight but it is
something that we teased out of the architects.

Does PAN break our assumptions here?

Will

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 3/5] arm64: kernel: Add min/max values in feature-detection register values.
  2015-07-17 10:51   ` Will Deacon
@ 2015-07-17 11:05     ` Catalin Marinas
  2015-07-17 11:05     ` James Morse
  1 sibling, 0 replies; 15+ messages in thread
From: Catalin Marinas @ 2015-07-17 11:05 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jul 17, 2015 at 11:51:11AM +0100, Will Deacon wrote:
> On Thu, Jul 16, 2015 at 05:01:57PM +0100, James Morse wrote:
> > When a new cpu feature is available, the cpu feature bits will be 0001;
> > when features are updated, this value will be incremented. This patch
> > changes 'register_value' to be '{min,max}_register_value', and checks
> > the value falls in this range.
> 
> I'm not sure this is completely true. For example, the new atomics are
> advertised with feature bits of 0002, whilst 0001 is RESERVED and 0000
> means they're not present.
> 
> Also, the problem with specifying an upper bound is that we have to keep
> updating it. Elsewhere in the kernel, we've treated these as 4-bit signed
> fields and if X > Y, we assume that the feature set of X is a superset of
> Y. Unfortunately, the ARM ARM doesn't provide this insight but it is
> something that we teased out of the architects.

I got another verbal confirmation yesterday from the architects that
this is the principle the ID bits are based on (so higher number
preserves the existing functionality).

We currently have a mix of exact matches in cpufeature.c and greater-than
checks in some .S files. So we have two ways to address this:

1. Always use >= but with signed 4-bit field
2. Use min/max in cpufeature.c with max being 7

I think we should just go for 1 unless anyone sees a problem with the
current CPU implementations (and we need to keep an eye on the
architects for future additions to these registers).
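
For illustration only (this is not code from this series), option 1
could be implemented along these lines, sign-extending the 4-bit field
before the comparison:

	/* sketch: extract a 4-bit ID register field as a signed value */
	static inline int id_feature_field(u64 reg, unsigned int shift)
	{
		return (s64)(reg << (64 - shift - 4)) >> 60;
	}

	/* e.g. PAN lives in ID_AA64MMFR1_EL1[23:20]; present if field >= 1 */
	bool has_pan = id_feature_field(read_cpuid(id_aa64mmfr1_el1), 20) >= 1;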

-- 
Catalin

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 3/5] arm64: kernel: Add min/max values in feature-detection register values.
  2015-07-17 10:51   ` Will Deacon
  2015-07-17 11:05     ` Catalin Marinas
@ 2015-07-17 11:05     ` James Morse
  2015-07-17 12:45       ` Will Deacon
  1 sibling, 1 reply; 15+ messages in thread
From: James Morse @ 2015-07-17 11:05 UTC (permalink / raw)
  To: linux-arm-kernel

On 17/07/15 11:51, Will Deacon wrote:
> On Thu, Jul 16, 2015 at 05:01:57PM +0100, James Morse wrote:
>> When a new cpu feature is available, the cpu feature bits will be 0001;
>> when features are updated, this value will be incremented. This patch
>> changes 'register_value' to be '{min,max}_register_value', and checks
>> the value falls in this range.
> 
> I'm not sure this is completely true. For example, the new atomics are
> advertised with feature bits of 0002, whilst 0001 is RESERVED and 0000
> means they're not present.
> 
> Also, the problem with specifying an upper bound is that we have to keep
> updating it. Elsewhere in the kernel, we've treated these as 4-bit signed
> fields and if X > Y, we assume that the feature set of X is a superset of
> Y. Unfortunately, the ARM ARM doesn't provide this insight but it is
> something that we teased out of the architects.
> 
> Does PAN break our assumptions here?

No, it just needs to match 1 and 2.

Would you prefer a 'min_register_value' and some careful 4-bit
sign-extension logic to use the 'if X > Y' logic?


Thanks,

James

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/5] arm64: kernel: preparatory: Move config_sctlr_el1
  2015-07-16 16:01 ` [PATCH 1/5] arm64: kernel: preparatory: Move config_sctlr_el1 James Morse
@ 2015-07-17 11:06   ` Catalin Marinas
  2015-07-17 12:59   ` Catalin Marinas
  1 sibling, 0 replies; 15+ messages in thread
From: Catalin Marinas @ 2015-07-17 11:06 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 16, 2015 at 05:01:55PM +0100, James Morse wrote:
> Later patches need config_sctlr_el1 to set/clear bits in the sctlr_el1
> register.
> 
> This patch moves the function into a header file.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

-- 
Catalin

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 2/5] arm64: kernel: Add cpufeature 'enable' callback.
  2015-07-16 16:01 ` [PATCH 2/5] arm64: kernel: Add cpufeature 'enable' callback James Morse
@ 2015-07-17 11:06   ` Catalin Marinas
  0 siblings, 0 replies; 15+ messages in thread
From: Catalin Marinas @ 2015-07-17 11:06 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 16, 2015 at 05:01:56PM +0100, James Morse wrote:
> This patch adds an 'enable()' callback to cpu capability/feature
> detection, allowing features that require some setup or configuration
> to get this opportunity once the feature has been detected.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 4/5] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE().
  2015-07-16 16:01 ` [PATCH 4/5] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE() James Morse
@ 2015-07-17 11:08   ` Catalin Marinas
  0 siblings, 0 replies; 15+ messages in thread
From: Catalin Marinas @ 2015-07-17 11:08 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 16, 2015 at 05:01:58PM +0100, James Morse wrote:
> Some uses of ALTERNATIVE() may depend on a feature that is disabled at
> compile time by a Kconfig option. In this case the unused alternative
> instructions waste space, and if the original instruction is a nop, it
> wastes time and space.
> 
> This patch adds an optional 'config' option to ALTERNATIVE() and
> alternative_insn that allows the compiler to remove both the original
> and alternative instructions if the config option is not defined.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 3/5] arm64: kernel: Add min/max values in feature-detection register values.
  2015-07-17 11:05     ` James Morse
@ 2015-07-17 12:45       ` Will Deacon
  0 siblings, 0 replies; 15+ messages in thread
From: Will Deacon @ 2015-07-17 12:45 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jul 17, 2015 at 12:05:07PM +0100, James Morse wrote:
> On 17/07/15 11:51, Will Deacon wrote:
> > On Thu, Jul 16, 2015 at 05:01:57PM +0100, James Morse wrote:
> >> When a new cpu feature is available, the cpu feature bits will be 0001;
> >> when features are updated, this value will be incremented. This patch
> >> changes 'register_value' to be '{min,max}_register_value', and checks
> >> the value falls in this range.
> > 
> > I'm not sure this is completely true. For example, the new atomics are
> > advertised with feature bits of 0002, whilst 0001 is RESERVED and 0000
> > means they're not present.
> > 
> > Also, the problem with specifying an upper bound is that we have to keep
> > updating it. Elsewhere in the kernel, we've treated these as 4-bit signed
> > fields and if X > Y, we assume that the feature set of X is a superset of
> > Y. Unfortunately, the ARM ARM doesn't provide this insight but it is
> > something that we teased out of the architects.
> > 
> > Does PAN break our assumptions here?
> 
> No, it just needs to match 1 and 2.
> 
> Would you prefer a 'min_register_value' and some careful 4-bit
> sign-extension logic to use the 'if X > Y' logic?

Yeah, I think Catalin's suggestion of "Always use >= but with signed 4-bit
field" is the best bet.

Will

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 5/5] arm64: kernel: Add support for Privileged Access Never
  2015-07-16 16:01 ` [PATCH 5/5] arm64: kernel: Add support for Privileged Access Never James Morse
@ 2015-07-17 12:57   ` Catalin Marinas
  0 siblings, 0 replies; 15+ messages in thread
From: Catalin Marinas @ 2015-07-17 12:57 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 16, 2015 at 05:01:59PM +0100, James Morse wrote:
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 7e419fabe75a..68aa1e399575 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -20,9 +20,18 @@
>  #ifndef __ASM_SYSREG_H
>  #define __ASM_SYSREG_H
>  
> +#include <asm/opcodes.h>
> +
>  #define sys_reg(op0, op1, crn, crm, op2) \
>  	((((op0)-2)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))
>  
> +#define REG_PSTATE_PAN_IMM                     sys_reg(2, 0, 4, 0, 4)
> +#define PSTATE_PAN                             (1 << 22)

I missed this before: can we have a PSR_PAN_BIT in uapi/asm/ptrace.h for
consistency with the other PSTATE bits? It's not user-accessible, but we
did the same with the I and F bits already.

> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index f260affb825c..0464eaef6667 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -21,6 +21,7 @@
>  #include <linux/types.h>
>  #include <asm/cpu.h>
>  #include <asm/cpufeature.h>
> +#include <asm/processor.h>
>  
>  static bool
>  has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
> @@ -32,6 +33,16 @@ has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
>  		(val & entry->register_mask) <= entry->max_register_value);
>  }
>  
> +static bool __maybe_unused
> +has_id_aa64mmfr1_feature(const struct arm64_cpu_capabilities *entry)
> +{
> +	u64 val;
> +
> +	val = read_cpuid(id_aa64mmfr1_el1);
> +	return ((val & entry->register_mask) >= entry->min_register_value &&
> +		(val & entry->register_mask) <= entry->max_register_value);
> +}

That's fine until we clarify what we actually want here in the other
patch.

> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 223b093c9440..cea69aae4997 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -277,6 +277,9 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
>  	} else {
>  		memset(childregs, 0, sizeof(struct pt_regs));
>  		childregs->pstate = PSR_MODE_EL1h;
> +		if (IS_ENABLED(CONFIG_ARM64_PAN) &&
> +		    cpus_have_cap(ARM64_HAS_PAN))
> +			childregs->pstate |= PSTATE_PAN;
>  		p->thread.cpu_context.x19 = stack_start;
>  		p->thread.cpu_context.x20 = stk_sz;

I wonder if this is actually needed. When we run in kernel mode, PAN is
always set (automatically on exception entry) and only explicitly
cleared for get_user/put_user etc. Switching to a kernel thread is done
via switch_to which preserves PAN, so the regs->pstate here is not
relevant (IOW, we only ever ERET to a kernel thread after an
exception/interrupt but in that case pstate.pan is already set by the
handler).

-- 
Catalin

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/5] arm64: kernel: preparatory: Move config_sctlr_el1
  2015-07-16 16:01 ` [PATCH 1/5] arm64: kernel: preparatory: Move config_sctlr_el1 James Morse
  2015-07-17 11:06   ` Catalin Marinas
@ 2015-07-17 12:59   ` Catalin Marinas
  1 sibling, 0 replies; 15+ messages in thread
From: Catalin Marinas @ 2015-07-17 12:59 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 16, 2015 at 05:01:55PM +0100, James Morse wrote:
> --- a/arch/arm64/kernel/armv8_deprecated.c
> +++ b/arch/arm64/kernel/armv8_deprecated.c
> @@ -16,6 +16,7 @@
>  
>  #include <asm/insn.h>
>  #include <asm/opcodes.h>
> +#include <asm/sysreg.h>
>  #include <asm/system_misc.h>
>  #include <asm/traps.h>
>  #include <asm/uaccess.h>
> @@ -504,16 +505,6 @@ ret:
>  	return 0;
>  }
>  
> -static inline void config_sctlr_el1(u32 clear, u32 set)
> -{
> -	u32 val;
> -
> -	asm volatile("mrs %0, sctlr_el1" : "=r" (val));
> -	val &= ~clear;
> -	val |= set;
> -	asm volatile("msr sctlr_el1, %0" : : "r" (val));
> -}

My ack still stands on this patch but please also move the SCTLR_EL1_*
definitions from asm/cputype.h into asm/sysreg.h (together with this
function).

Thanks.

-- 
Catalin

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2015-07-17 12:59 UTC | newest]

Thread overview: 15+ messages
2015-07-16 16:01 [PATCH 0/5] arm64: kernel: Add support for Privileged Access Never James Morse
2015-07-16 16:01 ` [PATCH 1/5] arm64: kernel: preparatory: Move config_sctlr_el1 James Morse
2015-07-17 11:06   ` Catalin Marinas
2015-07-17 12:59   ` Catalin Marinas
2015-07-16 16:01 ` [PATCH 2/5] arm64: kernel: Add cpufeature 'enable' callback James Morse
2015-07-17 11:06   ` Catalin Marinas
2015-07-16 16:01 ` [PATCH 3/5] arm64: kernel: Add min/max values in feature-detection register values James Morse
2015-07-17 10:51   ` Will Deacon
2015-07-17 11:05     ` Catalin Marinas
2015-07-17 11:05     ` James Morse
2015-07-17 12:45       ` Will Deacon
2015-07-16 16:01 ` [PATCH 4/5] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE() James Morse
2015-07-17 11:08   ` Catalin Marinas
2015-07-16 16:01 ` [PATCH 5/5] arm64: kernel: Add support for Privileged Access Never James Morse
2015-07-17 12:57   ` Catalin Marinas
