* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
@ 2016-02-05 14:58 James Morse
  2016-02-05 14:58 ` [PATCH v2 1/5] arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro James Morse
                   ` (7 more replies)
  0 siblings, 8 replies; 36+ messages in thread
From: James Morse @ 2016-02-05 14:58 UTC (permalink / raw)
  To: linux-arm-kernel

This series adds support for User Access Override (UAO; part of the ARMv8.2
Extensions[0]). When enabled, this causes the get_user() accessors to use
the unprivileged load/store instructions. When addr_limit is set to
KERNEL_DS, we set the override bit allowing privileged access.

Because the unprivileged instructions don't trip PAN, the last patch changes
which 'alternative' values are swapped in, allowing PAN to be left enabled
during get_user() and friends.
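
For reference, the behaviour this relies on can be summarised with a small
standalone sketch (plain C, illustration only; ldtr_behaviour() is a made-up
name, not kernel code): the LDTR/STTR accesses used by the accessors are
checked against user (EL0) permissions and are not blocked by PAN, while
setting the override bit makes them behave like ordinary privileged accesses.

#include <stdbool.h>
#include <stdio.h>

/* Toy model of which permission checks an LDTR/STTR access sees. */
static const char *ldtr_behaviour(bool uao_set)
{
	if (uao_set)
		return "behaves as a privileged access (kernel permissions)";
	return "checked against user (EL0) permissions, not blocked by PAN";
}

int main(void)
{
	printf("addr_limit == USER_DS   -> UAO clear: %s\n", ldtr_behaviour(false));
	printf("addr_limit == KERNEL_DS -> UAO set:   %s\n", ldtr_behaviour(true));
	return 0;
}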

This series can be retrieved from:
git://linux-arm.org/linux-jm.git -b uao/v2

Comments welcome,

James

Since v1:
 * Moved id register definition into the boiler-plate patch.
 * Moved cpu_enable_uao() into mm/fault.c


[0] https://community.arm.com/groups/processors/blog/2016/01/05/armv8-a-architecture-evolution
[v1] https://lwn.net/Articles/674445/

James Morse (5):
  arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro
  arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  arm64: kernel: Add support for User Access Override
  arm64: cpufeature: Test 'matches' pointer to find the end of the list
  arm64: kernel: Don't toggle PAN on systems with UAO

 arch/arm64/Kconfig                   |  21 +++++++
 arch/arm64/include/asm/alternative.h |  72 +++++++++++++++++++++++
 arch/arm64/include/asm/cpu.h         |   1 +
 arch/arm64/include/asm/cpufeature.h  |   6 +-
 arch/arm64/include/asm/cputype.h     |  20 ++++---
 arch/arm64/include/asm/processor.h   |   1 +
 arch/arm64/include/asm/sysreg.h      |   7 +++
 arch/arm64/include/asm/thread_info.h |   6 ++
 arch/arm64/include/asm/uaccess.h     |  52 +++++++++++------
 arch/arm64/include/uapi/asm/ptrace.h |   1 +
 arch/arm64/kernel/cpufeature.c       | 107 +++++++++++++++++++++++------------
 arch/arm64/kernel/cpuinfo.c          |  55 +++++++++---------
 arch/arm64/kernel/process.c          |  20 +++++++
 arch/arm64/lib/clear_user.S          |  12 ++--
 arch/arm64/lib/copy_from_user.S      |  12 ++--
 arch/arm64/lib/copy_in_user.S        |  20 +++----
 arch/arm64/lib/copy_to_user.S        |  12 ++--
 arch/arm64/mm/context.c              |   2 +-
 arch/arm64/mm/fault.c                |  34 +++++++++--
 19 files changed, 337 insertions(+), 124 deletions(-)

-- 
2.6.2

* [PATCH v2 1/5] arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro
  2016-02-05 14:58 [PATCH v2 0/5] arm64: kernel: Add support for User Access Override James Morse
@ 2016-02-05 14:58 ` James Morse
  2016-02-05 14:58 ` [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate James Morse
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 36+ messages in thread
From: James Morse @ 2016-02-05 14:58 UTC (permalink / raw)
  To: linux-arm-kernel

Older assemblers may not have support for newer feature registers. To get
round this, sysreg.h provides a 'mrs_s' macro that takes a register
encoding and generates the raw instruction.

Change read_cpuid() to use mrs_s in all cases so that new registers
don't have to be a special case. Including sysreg.h means we need to move
the include and definition of read_cpuid() after the #ifndef __ASSEMBLY__
to avoid syntax errors in vmlinux.lds.
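
For reference, the raw word that mrs_s emits can be computed directly from
the (op0, op1, CRn, CRm, op2) encoding carried by the SYS_* defines. The
snippet below is a standalone illustration of that arithmetic (mrs_insn() is
a made-up helper, not the kernel's macro, and assumes the op0 = 2 or 3
system-register space):

#include <stdint.h>
#include <stdio.h>

/*
 * MRS <Xt>, <sysreg> instruction word for the op0 = 2 or 3 encoding space:
 * op0 sits in bits [20:19], op1 in [18:16], CRn in [15:12], CRm in [11:8],
 * op2 in [7:5] and Rt in [4:0].
 */
static uint32_t mrs_insn(unsigned op0, unsigned op1, unsigned crn,
			 unsigned crm, unsigned op2, unsigned rt)
{
	return 0xd5200000 | (op0 << 19) | (op1 << 16) |
	       (crn << 12) | (crm << 8) | (op2 << 5) | rt;
}

int main(void)
{
	/* sys_reg(3, 0, 0, 7, 2) is ID_AA64MMFR2_EL1, added by the next patch. */
	printf("mrs x0, s3_0_c0_c7_2 -> .inst 0x%08x\n",
	       (unsigned)mrs_insn(3, 0, 0, 7, 2, 0));
	return 0;
}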

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |  2 +-
 arch/arm64/include/asm/cputype.h    | 20 +++++++------
 arch/arm64/kernel/cpufeature.c      | 58 ++++++++++++++++++-------------------
 arch/arm64/kernel/cpuinfo.c         | 54 +++++++++++++++++-----------------
 arch/arm64/mm/context.c             |  2 +-
 5 files changed, 69 insertions(+), 67 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8f271b83f910..d28e8b27ee59 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -176,7 +176,7 @@ u64 read_system_reg(u32 id);
 
 static inline bool cpu_supports_mixed_endian_el0(void)
 {
-	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
+	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(SYS_ID_AA64MMFR0_EL1));
 }
 
 static inline bool system_supports_mixed_endian_el0(void)
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 1a5949364ed0..71eecd3c5b4e 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -32,12 +32,6 @@
 #define MPIDR_AFFINITY_LEVEL(mpidr, level) \
 	((mpidr >> MPIDR_LEVEL_SHIFT(level)) & MPIDR_LEVEL_MASK)
 
-#define read_cpuid(reg) ({						\
-	u64 __val;							\
-	asm("mrs	%0, " #reg : "=r" (__val));			\
-	__val;								\
-})
-
 #define MIDR_REVISION_MASK	0xf
 #define MIDR_REVISION(midr)	((midr) & MIDR_REVISION_MASK)
 #define MIDR_PARTNUM_SHIFT	4
@@ -77,6 +71,14 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/sysreg.h>
+
+#define read_cpuid(reg) ({						\
+	u64 __val;							\
+	asm("mrs_s	%0, " __stringify(reg) : "=r" (__val));		\
+	__val;								\
+})
+
 /*
  * The CPU ID never changes at run time, so we might as well tell the
  * compiler that it's constant.  Use this function to read the CPU ID
@@ -84,12 +86,12 @@
  */
 static inline u32 __attribute_const__ read_cpuid_id(void)
 {
-	return read_cpuid(MIDR_EL1);
+	return read_cpuid(SYS_MIDR_EL1);
 }
 
 static inline u64 __attribute_const__ read_cpuid_mpidr(void)
 {
-	return read_cpuid(MPIDR_EL1);
+	return read_cpuid(SYS_MPIDR_EL1);
 }
 
 static inline unsigned int __attribute_const__ read_cpuid_implementor(void)
@@ -104,7 +106,7 @@ static inline unsigned int __attribute_const__ read_cpuid_part_number(void)
 
 static inline u32 __attribute_const__ read_cpuid_cachetype(void)
 {
-	return read_cpuid(CTR_EL0);
+	return read_cpuid(SYS_CTR_EL0);
 }
 #endif /* __ASSEMBLY__ */
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5c90aa490a2b..a84febc40db2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -791,35 +791,35 @@ static inline void set_sys_caps_initialised(void)
 static u64 __raw_read_system_reg(u32 sys_id)
 {
 	switch (sys_id) {
-	case SYS_ID_PFR0_EL1:		return (u64)read_cpuid(ID_PFR0_EL1);
-	case SYS_ID_PFR1_EL1:		return (u64)read_cpuid(ID_PFR1_EL1);
-	case SYS_ID_DFR0_EL1:		return (u64)read_cpuid(ID_DFR0_EL1);
-	case SYS_ID_MMFR0_EL1:		return (u64)read_cpuid(ID_MMFR0_EL1);
-	case SYS_ID_MMFR1_EL1:		return (u64)read_cpuid(ID_MMFR1_EL1);
-	case SYS_ID_MMFR2_EL1:		return (u64)read_cpuid(ID_MMFR2_EL1);
-	case SYS_ID_MMFR3_EL1:		return (u64)read_cpuid(ID_MMFR3_EL1);
-	case SYS_ID_ISAR0_EL1:		return (u64)read_cpuid(ID_ISAR0_EL1);
-	case SYS_ID_ISAR1_EL1:		return (u64)read_cpuid(ID_ISAR1_EL1);
-	case SYS_ID_ISAR2_EL1:		return (u64)read_cpuid(ID_ISAR2_EL1);
-	case SYS_ID_ISAR3_EL1:		return (u64)read_cpuid(ID_ISAR3_EL1);
-	case SYS_ID_ISAR4_EL1:		return (u64)read_cpuid(ID_ISAR4_EL1);
-	case SYS_ID_ISAR5_EL1:		return (u64)read_cpuid(ID_ISAR4_EL1);
-	case SYS_MVFR0_EL1:		return (u64)read_cpuid(MVFR0_EL1);
-	case SYS_MVFR1_EL1:		return (u64)read_cpuid(MVFR1_EL1);
-	case SYS_MVFR2_EL1:		return (u64)read_cpuid(MVFR2_EL1);
-
-	case SYS_ID_AA64PFR0_EL1:	return (u64)read_cpuid(ID_AA64PFR0_EL1);
-	case SYS_ID_AA64PFR1_EL1:	return (u64)read_cpuid(ID_AA64PFR0_EL1);
-	case SYS_ID_AA64DFR0_EL1:	return (u64)read_cpuid(ID_AA64DFR0_EL1);
-	case SYS_ID_AA64DFR1_EL1:	return (u64)read_cpuid(ID_AA64DFR0_EL1);
-	case SYS_ID_AA64MMFR0_EL1:	return (u64)read_cpuid(ID_AA64MMFR0_EL1);
-	case SYS_ID_AA64MMFR1_EL1:	return (u64)read_cpuid(ID_AA64MMFR1_EL1);
-	case SYS_ID_AA64ISAR0_EL1:	return (u64)read_cpuid(ID_AA64ISAR0_EL1);
-	case SYS_ID_AA64ISAR1_EL1:	return (u64)read_cpuid(ID_AA64ISAR1_EL1);
-
-	case SYS_CNTFRQ_EL0:		return (u64)read_cpuid(CNTFRQ_EL0);
-	case SYS_CTR_EL0:		return (u64)read_cpuid(CTR_EL0);
-	case SYS_DCZID_EL0:		return (u64)read_cpuid(DCZID_EL0);
+	case SYS_ID_PFR0_EL1:		return read_cpuid(SYS_ID_PFR0_EL1);
+	case SYS_ID_PFR1_EL1:		return read_cpuid(SYS_ID_PFR1_EL1);
+	case SYS_ID_DFR0_EL1:		return read_cpuid(SYS_ID_DFR0_EL1);
+	case SYS_ID_MMFR0_EL1:		return read_cpuid(SYS_ID_MMFR0_EL1);
+	case SYS_ID_MMFR1_EL1:		return read_cpuid(SYS_ID_MMFR1_EL1);
+	case SYS_ID_MMFR2_EL1:		return read_cpuid(SYS_ID_MMFR2_EL1);
+	case SYS_ID_MMFR3_EL1:		return read_cpuid(SYS_ID_MMFR3_EL1);
+	case SYS_ID_ISAR0_EL1:		return read_cpuid(SYS_ID_ISAR0_EL1);
+	case SYS_ID_ISAR1_EL1:		return read_cpuid(SYS_ID_ISAR1_EL1);
+	case SYS_ID_ISAR2_EL1:		return read_cpuid(SYS_ID_ISAR2_EL1);
+	case SYS_ID_ISAR3_EL1:		return read_cpuid(SYS_ID_ISAR3_EL1);
+	case SYS_ID_ISAR4_EL1:		return read_cpuid(SYS_ID_ISAR4_EL1);
+	case SYS_ID_ISAR5_EL1:		return read_cpuid(SYS_ID_ISAR4_EL1);
+	case SYS_MVFR0_EL1:		return read_cpuid(SYS_MVFR0_EL1);
+	case SYS_MVFR1_EL1:		return read_cpuid(SYS_MVFR1_EL1);
+	case SYS_MVFR2_EL1:		return read_cpuid(SYS_MVFR2_EL1);
+
+	case SYS_ID_AA64PFR0_EL1:	return read_cpuid(SYS_ID_AA64PFR0_EL1);
+	case SYS_ID_AA64PFR1_EL1:	return read_cpuid(SYS_ID_AA64PFR0_EL1);
+	case SYS_ID_AA64DFR0_EL1:	return read_cpuid(SYS_ID_AA64DFR0_EL1);
+	case SYS_ID_AA64DFR1_EL1:	return read_cpuid(SYS_ID_AA64DFR0_EL1);
+	case SYS_ID_AA64MMFR0_EL1:	return read_cpuid(SYS_ID_AA64MMFR0_EL1);
+	case SYS_ID_AA64MMFR1_EL1:	return read_cpuid(SYS_ID_AA64MMFR1_EL1);
+	case SYS_ID_AA64ISAR0_EL1:	return read_cpuid(SYS_ID_AA64ISAR0_EL1);
+	case SYS_ID_AA64ISAR1_EL1:	return read_cpuid(SYS_ID_AA64ISAR1_EL1);
+
+	case SYS_CNTFRQ_EL0:		return read_cpuid(SYS_CNTFRQ_EL0);
+	case SYS_CTR_EL0:		return read_cpuid(SYS_CTR_EL0);
+	case SYS_DCZID_EL0:		return read_cpuid(SYS_DCZID_EL0);
 	default:
 		BUG();
 		return 0;
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 212ae6361d8b..76df22272804 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -201,35 +201,35 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 {
 	info->reg_cntfrq = arch_timer_get_cntfrq();
 	info->reg_ctr = read_cpuid_cachetype();
-	info->reg_dczid = read_cpuid(DCZID_EL0);
+	info->reg_dczid = read_cpuid(SYS_DCZID_EL0);
 	info->reg_midr = read_cpuid_id();
 
-	info->reg_id_aa64dfr0 = read_cpuid(ID_AA64DFR0_EL1);
-	info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1);
-	info->reg_id_aa64isar0 = read_cpuid(ID_AA64ISAR0_EL1);
-	info->reg_id_aa64isar1 = read_cpuid(ID_AA64ISAR1_EL1);
-	info->reg_id_aa64mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
-	info->reg_id_aa64mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
-	info->reg_id_aa64pfr0 = read_cpuid(ID_AA64PFR0_EL1);
-	info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1);
-
-	info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
-	info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
-	info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
-	info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
-	info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1);
-	info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1);
-	info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1);
-	info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1);
-	info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
-	info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
-	info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
-	info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
-	info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
-
-	info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
-	info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
-	info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
+	info->reg_id_aa64dfr0 = read_cpuid(SYS_ID_AA64DFR0_EL1);
+	info->reg_id_aa64dfr1 = read_cpuid(SYS_ID_AA64DFR1_EL1);
+	info->reg_id_aa64isar0 = read_cpuid(SYS_ID_AA64ISAR0_EL1);
+	info->reg_id_aa64isar1 = read_cpuid(SYS_ID_AA64ISAR1_EL1);
+	info->reg_id_aa64mmfr0 = read_cpuid(SYS_ID_AA64MMFR0_EL1);
+	info->reg_id_aa64mmfr1 = read_cpuid(SYS_ID_AA64MMFR1_EL1);
+	info->reg_id_aa64pfr0 = read_cpuid(SYS_ID_AA64PFR0_EL1);
+	info->reg_id_aa64pfr1 = read_cpuid(SYS_ID_AA64PFR1_EL1);
+
+	info->reg_id_dfr0 = read_cpuid(SYS_ID_DFR0_EL1);
+	info->reg_id_isar0 = read_cpuid(SYS_ID_ISAR0_EL1);
+	info->reg_id_isar1 = read_cpuid(SYS_ID_ISAR1_EL1);
+	info->reg_id_isar2 = read_cpuid(SYS_ID_ISAR2_EL1);
+	info->reg_id_isar3 = read_cpuid(SYS_ID_ISAR3_EL1);
+	info->reg_id_isar4 = read_cpuid(SYS_ID_ISAR4_EL1);
+	info->reg_id_isar5 = read_cpuid(SYS_ID_ISAR5_EL1);
+	info->reg_id_mmfr0 = read_cpuid(SYS_ID_MMFR0_EL1);
+	info->reg_id_mmfr1 = read_cpuid(SYS_ID_MMFR1_EL1);
+	info->reg_id_mmfr2 = read_cpuid(SYS_ID_MMFR2_EL1);
+	info->reg_id_mmfr3 = read_cpuid(SYS_ID_MMFR3_EL1);
+	info->reg_id_pfr0 = read_cpuid(SYS_ID_PFR0_EL1);
+	info->reg_id_pfr1 = read_cpuid(SYS_ID_PFR1_EL1);
+
+	info->reg_mvfr0 = read_cpuid(SYS_MVFR0_EL1);
+	info->reg_mvfr1 = read_cpuid(SYS_MVFR1_EL1);
+	info->reg_mvfr2 = read_cpuid(SYS_MVFR2_EL1);
 
 	cpuinfo_detect_icache_policy(info);
 
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index e87f53ff5f58..7275628ba59f 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -187,7 +187,7 @@ switch_mm_fastpath:
 
 static int asids_init(void)
 {
-	int fld = cpuid_feature_extract_field(read_cpuid(ID_AA64MMFR0_EL1), 4);
+	int fld = cpuid_feature_extract_field(read_cpuid(SYS_ID_AA64MMFR0_EL1), 4);
 
 	switch (fld) {
 	default:
-- 
2.6.2

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-02-05 14:58 [PATCH v2 0/5] arm64: kernel: Add support for User Access Override James Morse
  2016-02-05 14:58 ` [PATCH v2 1/5] arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro James Morse
@ 2016-02-05 14:58 ` James Morse
  2016-03-03 17:59   ` Christopher Covington
  2016-02-05 14:58 ` [PATCH v2 3/5] arm64: kernel: Add support for User Access Override James Morse
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: James Morse @ 2016-02-05 14:58 UTC (permalink / raw)
  To: linux-arm-kernel

ARMv8.2 adds a new feature register id_aa64mmfr2. This patch adds the
cpu feature boiler plate used by the actual features in later patches.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpu.h    |  1 +
 arch/arm64/include/asm/sysreg.h |  4 ++++
 arch/arm64/kernel/cpufeature.c  | 10 ++++++++++
 arch/arm64/kernel/cpuinfo.c     |  1 +
 4 files changed, 16 insertions(+)

diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
index b5e9cee4b5f8..13a6103130cd 100644
--- a/arch/arm64/include/asm/cpu.h
+++ b/arch/arm64/include/asm/cpu.h
@@ -36,6 +36,7 @@ struct cpuinfo_arm64 {
 	u64		reg_id_aa64isar1;
 	u64		reg_id_aa64mmfr0;
 	u64		reg_id_aa64mmfr1;
+	u64		reg_id_aa64mmfr2;
 	u64		reg_id_aa64pfr0;
 	u64		reg_id_aa64pfr1;
 
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 4aeebec3d882..4ef294ec49cc 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -72,6 +72,7 @@
 
 #define SYS_ID_AA64MMFR0_EL1		sys_reg(3, 0, 0, 7, 0)
 #define SYS_ID_AA64MMFR1_EL1		sys_reg(3, 0, 0, 7, 1)
+#define SYS_ID_AA64MMFR2_EL1		sys_reg(3, 0, 0, 7, 2)
 
 #define SYS_CNTFRQ_EL0			sys_reg(3, 3, 14, 0, 0)
 #define SYS_CTR_EL0			sys_reg(3, 3, 0, 0, 1)
@@ -137,6 +138,9 @@
 #define ID_AA64MMFR1_VMIDBITS_SHIFT	4
 #define ID_AA64MMFR1_HADBS_SHIFT	0
 
+/* id_aa64mmfr2 */
+#define ID_AA64MMFR2_UAO_SHIFT		4
+
 /* id_aa64dfr0 */
 #define ID_AA64DFR0_CTX_CMPS_SHIFT	28
 #define ID_AA64DFR0_WRPS_SHIFT		20
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a84febc40db2..5b8bc98105c1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -123,6 +123,11 @@ static struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
 	ARM64_FTR_END,
 };
 
+static struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
 static struct arm64_ftr_bits ftr_ctr[] = {
 	U_ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 31, 1, 1),	/* RAO */
 	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 28, 3, 0),
@@ -284,6 +289,7 @@ static struct arm64_ftr_reg arm64_ftr_regs[] = {
 	/* Op1 = 0, CRn = 0, CRm = 7 */
 	ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
 	ARM64_FTR_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1),
+	ARM64_FTR_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2),
 
 	/* Op1 = 3, CRn = 0, CRm = 0 */
 	ARM64_FTR_REG(SYS_CTR_EL0, ftr_ctr),
@@ -408,6 +414,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 	init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
 	init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
 	init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
+	init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2);
 	init_cpu_ftr_reg(SYS_ID_AA64PFR0_EL1, info->reg_id_aa64pfr0);
 	init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
 	init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
@@ -517,6 +524,8 @@ void update_cpu_features(int cpu,
 				      info->reg_id_aa64mmfr0, boot->reg_id_aa64mmfr0);
 	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR1_EL1, cpu,
 				      info->reg_id_aa64mmfr1, boot->reg_id_aa64mmfr1);
+	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR2_EL1, cpu,
+				      info->reg_id_aa64mmfr2, boot->reg_id_aa64mmfr2);
 
 	/*
 	 * EL3 is not our concern.
@@ -814,6 +823,7 @@ static u64 __raw_read_system_reg(u32 sys_id)
 	case SYS_ID_AA64DFR1_EL1:	return read_cpuid(SYS_ID_AA64DFR0_EL1);
 	case SYS_ID_AA64MMFR0_EL1:	return read_cpuid(SYS_ID_AA64MMFR0_EL1);
 	case SYS_ID_AA64MMFR1_EL1:	return read_cpuid(SYS_ID_AA64MMFR1_EL1);
+	case SYS_ID_AA64MMFR2_EL1:	return read_cpuid(SYS_ID_AA64MMFR2_EL1);
 	case SYS_ID_AA64ISAR0_EL1:	return read_cpuid(SYS_ID_AA64ISAR0_EL1);
 	case SYS_ID_AA64ISAR1_EL1:	return read_cpuid(SYS_ID_AA64ISAR1_EL1);
 
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 76df22272804..966fbd52550b 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -210,6 +210,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	info->reg_id_aa64isar1 = read_cpuid(SYS_ID_AA64ISAR1_EL1);
 	info->reg_id_aa64mmfr0 = read_cpuid(SYS_ID_AA64MMFR0_EL1);
 	info->reg_id_aa64mmfr1 = read_cpuid(SYS_ID_AA64MMFR1_EL1);
+	info->reg_id_aa64mmfr2 = read_cpuid(SYS_ID_AA64MMFR2_EL1);
 	info->reg_id_aa64pfr0 = read_cpuid(SYS_ID_AA64PFR0_EL1);
 	info->reg_id_aa64pfr1 = read_cpuid(SYS_ID_AA64PFR1_EL1);
 
-- 
2.6.2

* [PATCH v2 3/5] arm64: kernel: Add support for User Access Override
  2016-02-05 14:58 [PATCH v2 0/5] arm64: kernel: Add support for User Access Override James Morse
  2016-02-05 14:58 ` [PATCH v2 1/5] arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro James Morse
  2016-02-05 14:58 ` [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate James Morse
@ 2016-02-05 14:58 ` James Morse
  2016-02-18 12:26   ` Catalin Marinas
  2016-02-05 14:58 ` [PATCH v2 4/5] arm64: cpufeature: Test 'matches' pointer to find the end of the list James Morse
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: James Morse @ 2016-02-05 14:58 UTC (permalink / raw)
  To: linux-arm-kernel

'User Access Override' is a new ARMv8.2 feature which allows the
unprivileged load and store instructions to be overridden to behave in
the normal way.

This patch converts {get,put}_user() and friends to use the ldtr*/sttr*
instructions, so that they can only access EL0 memory, and then enables
UAO when fs==KERNEL_DS so that these functions can also access kernel
memory.

This allows user space's read/write permissions to be checked against the
page tables, instead of testing addr<USER_DS, then using the kernel's
read/write permissions.
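
For illustration, the 1-byte case of __get_user() ends up as roughly the
following after this patch (a simplified sketch; __get_user_byte_sketch is a
made-up name, and the real macro below also wires up the error value and the
exception-table fixup):

#define __get_user_byte_sketch(x, ptr)					\
	asm volatile(ALTERNATIVE(					\
		"ldrb  %w0, [%1]",	/* without UAO: privileged load */ \
		"ldtrb %w0, [%1]",	/* with UAO: user-checked load */  \
		ARM64_HAS_UAO)						\
		: "=r" (x) : "r" (ptr))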

Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/arm64/Kconfig                   | 21 +++++++++++
 arch/arm64/include/asm/alternative.h | 72 ++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/cpufeature.h  |  3 +-
 arch/arm64/include/asm/processor.h   |  1 +
 arch/arm64/include/asm/sysreg.h      |  3 ++
 arch/arm64/include/asm/thread_info.h |  6 +++
 arch/arm64/include/asm/uaccess.h     | 44 ++++++++++++++++------
 arch/arm64/include/uapi/asm/ptrace.h |  1 +
 arch/arm64/kernel/cpufeature.c       | 11 ++++++
 arch/arm64/kernel/process.c          | 20 ++++++++++
 arch/arm64/lib/clear_user.S          |  8 ++--
 arch/arm64/lib/copy_from_user.S      |  8 ++--
 arch/arm64/lib/copy_in_user.S        | 16 ++++----
 arch/arm64/lib/copy_to_user.S        |  8 ++--
 arch/arm64/mm/fault.c                | 31 +++++++++++++---
 15 files changed, 214 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 8cc62289a63e..63f2cca9e0ea 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -752,6 +752,27 @@ config ARM64_LSE_ATOMICS
 
 endmenu
 
+config ARM64_UAO
+	bool "Enable support for User Access Override (UAO)"
+	default y
+	help
+	 User Access Override (UAO; part of the ARMv8.2 Extensions)
+	 causes the 'unprivileged' variant of the load/store instructions to
+	 be overridden to be privileged.
+
+	 This option changes get_user() and friends to use the 'unprivileged'
+	 variant of the load/store instructions. This ensures that user-space
+	 really did have access to the supplied memory. When addr_limit is
+	 set to kernel memory the UAO bit will be set, allowing privileged
+	 access to kernel memory.
+
+	 Choosing this option will cause copy_to_user() et al to use user-space
+	 memory permissions.
+
+	 The feature is detected at runtime, the kernel will use the
+	 regular load/store instructions if the cpu does not implement the
+	 feature.
+
 endmenu
 
 menu "Boot options"
diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index e4962f04201e..a9fc24ec1aa9 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -1,6 +1,8 @@
 #ifndef __ASM_ALTERNATIVE_H
 #define __ASM_ALTERNATIVE_H
 
+#include <asm/cpufeature.h>
+
 #ifndef __ASSEMBLY__
 
 #include <linux/init.h>
@@ -63,6 +65,8 @@ void apply_alternatives(void *start, size_t length);
 
 #else
 
+#include <asm/assembler.h>
+
 .macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
 	.word \orig_offset - .
 	.word \alt_offset - .
@@ -136,6 +140,74 @@ void apply_alternatives(void *start, size_t length);
 	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
 
 
+/*
+ * Generate the assembly for UAO alternatives with exception table entries.
+ * This is complicated as there is no post-increment or pair versions of the
+ * unprivileged instructions, and USER() only works for single instructions.
+ */
+#ifdef CONFIG_ARM64_UAO
+	.macro uao_ldp l, reg1, reg2, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			ldp	\reg1, \reg2, [\addr], \post_inc;
+8889:			nop;
+			nop;
+		alternative_else
+			ldtr	\reg1, [\addr];
+			ldtr	\reg2, [\addr, #8];
+			add	\addr, \addr, \post_inc;
+		alternative_endif
+
+		.section __ex_table,"a";
+		.align	3;
+		.quad	8888b,\l;
+		.quad	8889b,\l;
+		.previous;
+	.endm
+
+	.macro uao_stp l, reg1, reg2, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			stp	\reg1, \reg2, [\addr], \post_inc;
+8889:			nop;
+			nop;
+		alternative_else
+			sttr	\reg1, [\addr];
+			sttr	\reg2, [\addr, #8];
+			add	\addr, \addr, \post_inc;
+		alternative_endif
+
+		.section __ex_table,"a";
+		.align	3;
+		.quad	8888b,\l;
+		.quad	8889b,\l;
+		.previous
+	.endm
+
+	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
+		alternative_if_not ARM64_HAS_UAO
+8888:			\inst	\reg, [\addr], \post_inc;
+			nop;
+		alternative_else
+			\alt_inst	\reg, [\addr];
+			add		\addr, \addr, \post_inc;
+		alternative_endif
+
+		.section __ex_table,"a";
+		.align	3;
+		.quad	8888b,\l;
+		.previous
+	.endm
+#else
+	.macro uao_ldp l, reg1, reg2, addr, post_inc
+		USER(\l, ldp \reg1, \reg2, [\addr], \post_inc)
+	.endm
+	.macro uao_stp l, reg1, reg2, addr, post_inc
+		USER(\l, stp \reg1, \reg2, [\addr], \post_inc)
+	.endm
+	.macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc
+		USER(\l, \inst \reg, [\addr], \post_inc)
+	.endm
+#endif
+
 #endif  /*  __ASSEMBLY__  */
 
 /*
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index d28e8b27ee59..349e8203d16e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -30,8 +30,9 @@
 #define ARM64_HAS_LSE_ATOMICS			5
 #define ARM64_WORKAROUND_CAVIUM_23154		6
 #define ARM64_WORKAROUND_834220			7
+#define ARM64_HAS_UAO				8
 
-#define ARM64_NCAPS				8
+#define ARM64_NCAPS				9
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 4acb7ca94fcd..611ee4e5c89d 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -187,5 +187,6 @@ static inline void spin_lock_prefetch(const void *x)
 #endif
 
 void cpu_enable_pan(void *__unused);
+void cpu_enable_uao(void *__unused);
 
 #endif /* __ASM_PROCESSOR_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 4ef294ec49cc..11378a92e50b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -79,9 +79,12 @@
 #define SYS_DCZID_EL0			sys_reg(3, 3, 0, 0, 7)
 
 #define REG_PSTATE_PAN_IMM		sys_reg(0, 0, 4, 0, 4)
+#define REG_PSTATE_UAO_IMM		sys_reg(0, 0, 4, 0, 3)
 
 #define SET_PSTATE_PAN(x) __inst_arm(0xd5000000 | REG_PSTATE_PAN_IMM |\
 				     (!!x)<<8 | 0x1f)
+#define SET_PSTATE_UAO(x) __inst_arm(0xd5000000 | REG_PSTATE_UAO_IMM |\
+				     (!!x)<<8 | 0x1f)
 
 /* SCTLR_EL1 */
 #define SCTLR_EL1_CP15BEN	(0x1 << 5)
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index abd64bd1f6d9..eba8db6838af 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -85,6 +85,12 @@ static inline struct thread_info *current_thread_info(void)
 	return (struct thread_info *)sp_el0;
 }
 
+/* Access struct thread_info of another thread */
+static inline struct thread_info *get_thread_info(unsigned long thread_stack)
+{
+	return (struct thread_info *)(thread_stack & ~(THREAD_SIZE - 1));
+}
+
 #define thread_saved_pc(tsk)	\
 	((unsigned long)(tsk->thread.cpu_context.pc))
 #define thread_saved_sp(tsk)	\
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index b2ede967fe7d..f973bdce8410 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -64,6 +64,16 @@ extern int fixup_exception(struct pt_regs *regs);
 static inline void set_fs(mm_segment_t fs)
 {
 	current_thread_info()->addr_limit = fs;
+
+	/*
+	 * Enable/disable UAO so that copy_to_user() etc can access
+	 * kernel memory with the unprivileged instructions.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_UAO) && fs == KERNEL_DS)
+		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
+	else
+		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO,
+				CONFIG_ARM64_UAO));
 }
 
 #define segment_eq(a, b)	((a) == (b))
@@ -113,9 +123,10 @@ static inline void set_fs(mm_segment_t fs)
  * The "__xxx_error" versions set the third argument to -EFAULT if an error
  * occurs, and leave it unchanged on success.
  */
-#define __get_user_asm(instr, reg, x, addr, err)			\
+#define __get_user_asm(instr, alt_instr, reg, x, addr, err, feature)	\
 	asm volatile(							\
-	"1:	" instr "	" reg "1, [%2]\n"			\
+	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
+			alt_instr " " reg "1, [%2]\n", feature)		\
 	"2:\n"								\
 	"	.section .fixup, \"ax\"\n"				\
 	"	.align	2\n"						\
@@ -138,16 +149,20 @@ do {									\
 			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__get_user_asm("ldrb", "%w", __gu_val, (ptr), (err));	\
+		__get_user_asm("ldrb", "ldtrb", "%w", __gu_val, (ptr),  \
+			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 2:								\
-		__get_user_asm("ldrh", "%w", __gu_val, (ptr), (err));	\
+		__get_user_asm("ldrh", "ldtrh", "%w", __gu_val, (ptr),  \
+			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 4:								\
-		__get_user_asm("ldr", "%w", __gu_val, (ptr), (err));	\
+		__get_user_asm("ldr", "ldtr", "%w", __gu_val, (ptr),	\
+			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 8:								\
-		__get_user_asm("ldr", "%",  __gu_val, (ptr), (err));	\
+		__get_user_asm("ldr", "ldtr", "%",  __gu_val, (ptr),	\
+			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
@@ -181,9 +196,10 @@ do {									\
 		((x) = 0, -EFAULT);					\
 })
 
-#define __put_user_asm(instr, reg, x, addr, err)			\
+#define __put_user_asm(instr, alt_instr, reg, x, addr, err, feature)	\
 	asm volatile(							\
-	"1:	" instr "	" reg "1, [%2]\n"			\
+	"1:"ALTERNATIVE(instr "     " reg "1, [%2]\n",			\
+			alt_instr " " reg "1, [%2]\n", feature)		\
 	"2:\n"								\
 	"	.section .fixup,\"ax\"\n"				\
 	"	.align	2\n"						\
@@ -205,16 +221,20 @@ do {									\
 			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__put_user_asm("strb", "%w", __pu_val, (ptr), (err));	\
+		__put_user_asm("strb", "sttrb", "%w", __pu_val, (ptr),	\
+			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 2:								\
-		__put_user_asm("strh", "%w", __pu_val, (ptr), (err));	\
+		__put_user_asm("strh", "sttrh", "%w", __pu_val, (ptr),	\
+			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 4:								\
-		__put_user_asm("str",  "%w", __pu_val, (ptr), (err));	\
+		__put_user_asm("str", "sttr", "%w", __pu_val, (ptr),	\
+			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 8:								\
-		__put_user_asm("str",  "%", __pu_val, (ptr), (err));	\
+		__put_user_asm("str", "sttr", "%", __pu_val, (ptr),	\
+			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 208db3df135a..b5c3933ed441 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -45,6 +45,7 @@
 #define PSR_A_BIT	0x00000100
 #define PSR_D_BIT	0x00000200
 #define PSR_PAN_BIT	0x00400000
+#define PSR_UAO_BIT	0x00800000
 #define PSR_Q_BIT	0x08000000
 #define PSR_V_BIT	0x10000000
 #define PSR_C_BIT	0x20000000
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5b8bc98105c1..06bdcd019bd6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -660,6 +660,17 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = 2,
 	},
 #endif /* CONFIG_AS_LSE && CONFIG_ARM64_LSE_ATOMICS */
+#ifdef CONFIG_ARM64_UAO
+	{
+		.desc = "User Access Override",
+		.capability = ARM64_HAS_UAO,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
+		.min_field_value = 1,
+		.enable = cpu_enable_uao,
+	},
+#endif /* CONFIG_ARM64_UAO */
 	{},
 };
 
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 88d742ba19d5..8fcf582c18d7 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -46,6 +46,7 @@
 #include <linux/notifier.h>
 #include <trace/events/power.h>
 
+#include <asm/alternative.h>
 #include <asm/compat.h>
 #include <asm/cacheflush.h>
 #include <asm/fpsimd.h>
@@ -280,6 +281,9 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 	} else {
 		memset(childregs, 0, sizeof(struct pt_regs));
 		childregs->pstate = PSR_MODE_EL1h;
+		if (IS_ENABLED(CONFIG_ARM64_UAO) &&
+		    cpus_have_cap(ARM64_HAS_UAO))
+			childregs->pstate |= PSR_UAO_BIT;
 		p->thread.cpu_context.x19 = stack_start;
 		p->thread.cpu_context.x20 = stk_sz;
 	}
@@ -308,6 +312,20 @@ static void tls_thread_switch(struct task_struct *next)
 	: : "r" (tpidr), "r" (tpidrro));
 }
 
+/* Restore the UAO state depending on next's addr_limit */
+static void uao_thread_switch(struct task_struct *next)
+{
+	unsigned long next_sp = next->thread.cpu_context.sp;
+
+	if (IS_ENABLED(CONFIG_ARM64_UAO) &&
+	    get_thread_info(next_sp)->addr_limit == KERNEL_DS)
+		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO,
+			        CONFIG_ARM64_UAO));
+	else
+		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO,
+				CONFIG_ARM64_UAO));
+}
+
 /*
  * Thread switching.
  */
@@ -327,6 +345,8 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	 */
 	dsb(ish);
 
+	uao_thread_switch(next);
+
 	/* the actual thread switch */
 	last = cpu_switch_to(prev, next);
 
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index a9723c71c52b..3f950b677c07 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -39,20 +39,20 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
 	subs	x1, x1, #8
 	b.mi	2f
 1:
-USER(9f, str	xzr, [x0], #8	)
+uao_user_alternative 9f, str, sttr, xzr, x0, 8
 	subs	x1, x1, #8
 	b.pl	1b
 2:	adds	x1, x1, #4
 	b.mi	3f
-USER(9f, str	wzr, [x0], #4	)
+uao_user_alternative 9f, str, sttr, wzr, x0, 4
 	sub	x1, x1, #4
 3:	adds	x1, x1, #2
 	b.mi	4f
-USER(9f, strh	wzr, [x0], #2	)
+uao_user_alternative 9f, strh, sttrh, wzr, x0, 2
 	sub	x1, x1, #2
 4:	adds	x1, x1, #1
 	b.mi	5f
-USER(9f, strb	wzr, [x0]	)
+uao_user_alternative 9f, strb, sttrb, wzr, x0, 0
 5:	mov	x0, #0
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
 	    CONFIG_ARM64_PAN)
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 4699cd74f87e..1d982d64f1a7 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -34,7 +34,7 @@
  */
 
 	.macro ldrb1 ptr, regB, val
-	USER(9998f, ldrb  \ptr, [\regB], \val)
+	uao_user_alternative 9998f, ldrb, ldtrb, \ptr, \regB, \val
 	.endm
 
 	.macro strb1 ptr, regB, val
@@ -42,7 +42,7 @@
 	.endm
 
 	.macro ldrh1 ptr, regB, val
-	USER(9998f, ldrh  \ptr, [\regB], \val)
+	uao_user_alternative 9998f, ldrh, ldtrh, \ptr, \regB, \val
 	.endm
 
 	.macro strh1 ptr, regB, val
@@ -50,7 +50,7 @@
 	.endm
 
 	.macro ldr1 ptr, regB, val
-	USER(9998f, ldr \ptr, [\regB], \val)
+	uao_user_alternative 9998f, ldr, ldtr, \ptr, \regB, \val
 	.endm
 
 	.macro str1 ptr, regB, val
@@ -58,7 +58,7 @@
 	.endm
 
 	.macro ldp1 ptr, regB, regC, val
-	USER(9998f, ldp \ptr, \regB, [\regC], \val)
+	uao_ldp 9998f, \ptr, \regB, \regC, \val
 	.endm
 
 	.macro stp1 ptr, regB, regC, val
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 81c8fc93c100..feaad1520dc1 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -35,35 +35,35 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 ptr, regB, val
-	USER(9998f, ldrb  \ptr, [\regB], \val)
+	uao_user_alternative 9998f, ldrb, ldtrb, \ptr, \regB, \val
 	.endm
 
 	.macro strb1 ptr, regB, val
-	USER(9998f, strb \ptr, [\regB], \val)
+	uao_user_alternative 9998f, strb, sttrb, \ptr, \regB, \val
 	.endm
 
 	.macro ldrh1 ptr, regB, val
-	USER(9998f, ldrh  \ptr, [\regB], \val)
+	uao_user_alternative 9998f, ldrh, ldtrh, \ptr, \regB, \val
 	.endm
 
 	.macro strh1 ptr, regB, val
-	USER(9998f, strh \ptr, [\regB], \val)
+	uao_user_alternative 9998f, strh, sttrh, \ptr, \regB, \val
 	.endm
 
 	.macro ldr1 ptr, regB, val
-	USER(9998f, ldr \ptr, [\regB], \val)
+	uao_user_alternative 9998f, ldr, ldtr, \ptr, \regB, \val
 	.endm
 
 	.macro str1 ptr, regB, val
-	USER(9998f, str \ptr, [\regB], \val)
+	uao_user_alternative 9998f, str, sttr, \ptr, \regB, \val
 	.endm
 
 	.macro ldp1 ptr, regB, regC, val
-	USER(9998f, ldp \ptr, \regB, [\regC], \val)
+	uao_ldp 9998f, \ptr, \regB, \regC, \val
 	.endm
 
 	.macro stp1 ptr, regB, regC, val
-	USER(9998f, stp \ptr, \regB, [\regC], \val)
+	uao_stp 9998f, \ptr, \regB, \regC, \val
 	.endm
 
 end	.req	x5
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 7512bbbc07ac..2dae2cd2c481 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -37,7 +37,7 @@
 	.endm
 
 	.macro strb1 ptr, regB, val
-	USER(9998f, strb \ptr, [\regB], \val)
+	uao_user_alternative 9998f, strb, sttrb, \ptr, \regB, \val
 	.endm
 
 	.macro ldrh1 ptr, regB, val
@@ -45,7 +45,7 @@
 	.endm
 
 	.macro strh1 ptr, regB, val
-	USER(9998f, strh \ptr, [\regB], \val)
+	uao_user_alternative 9998f, strh, sttrh, \ptr, \regB, \val
 	.endm
 
 	.macro ldr1 ptr, regB, val
@@ -53,7 +53,7 @@
 	.endm
 
 	.macro str1 ptr, regB, val
-	USER(9998f, str \ptr, [\regB], \val)
+	uao_user_alternative 9998f, str, sttr, \ptr, \regB, \val
 	.endm
 
 	.macro ldp1 ptr, regB, regC, val
@@ -61,7 +61,7 @@
 	.endm
 
 	.macro stp1 ptr, regB, regC, val
-	USER(9998f, stp \ptr, \regB, [\regC], \val)
+	uao_stp 9998f, \ptr, \regB, \regC, \val
 	.endm
 
 end	.req	x5
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 92ddac1e8ca2..820d47353cf0 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -192,6 +192,14 @@ out:
 	return fault;
 }
 
+static inline int permission_fault(unsigned int esr)
+{
+	unsigned int ec       = (esr & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT;
+	unsigned int fsc_type = esr & ESR_ELx_FSC_TYPE;
+
+	return (ec == ESR_ELx_EC_DABT_CUR && fsc_type == ESR_ELx_FSC_PERM);
+}
+
 static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 				   struct pt_regs *regs)
 {
@@ -225,12 +233,10 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 		mm_flags |= FAULT_FLAG_WRITE;
 	}
 
-	/*
-	 * PAN bit set implies the fault happened in kernel space, but not
-	 * in the arch's user access functions.
-	 */
-	if (IS_ENABLED(CONFIG_ARM64_PAN) && (regs->pstate & PSR_PAN_BIT))
-		goto no_context;
+	if (permission_fault(esr) && (addr < USER_DS)) {
+		if (!search_exception_tables(regs->pc))
+			panic("Accessing user space memory outside uaccess.h routines");
+	}
 
 	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
@@ -561,3 +567,16 @@ void cpu_enable_pan(void *__unused)
 	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
 }
 #endif /* CONFIG_ARM64_PAN */
+
+#ifdef CONFIG_ARM64_UAO
+/*
+ * Kernel threads have fs=KERNEL_DS by default, and don't need to call
+ * set_fs(), devtmpfs in particular relies on this behaviour.
+ * We need to enable the feature at runtime (instead of adding it to
+ * PSR_MODE_EL1h) as the feature may not be implemented by the cpu.
+ */
+void cpu_enable_uao(void *__unused)
+{
+	asm(SET_PSTATE_UAO(1));
+}
+#endif /* CONFIG_ARM64_UAO */
-- 
2.6.2

* [PATCH v2 4/5] arm64: cpufeature: Test 'matches' pointer to find the end of the list
  2016-02-05 14:58 [PATCH v2 0/5] arm64: kernel: Add support for User Access Override James Morse
                   ` (2 preceding siblings ...)
  2016-02-05 14:58 ` [PATCH v2 3/5] arm64: kernel: Add support for User Access Override James Morse
@ 2016-02-05 14:58 ` James Morse
  2016-02-05 14:58 ` [PATCH v2 5/5] arm64: kernel: Don't toggle PAN on systems with UAO James Morse
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 36+ messages in thread
From: James Morse @ 2016-02-05 14:58 UTC (permalink / raw)
  To: linux-arm-kernel

CPU feature code uses the desc field as a test to find the end of the list,
which means every entry must have a description. This generates noise for
entries in the list that aren't really features, but combinations of them.
e.g.
> CPU features: detected feature: Privileged Access Never
> CPU features: detected feature: PAN and not UAO

These combination features are needed for corner cases with alternatives,
where cpu features interact.

Change all walkers of the arm64_features[] and arm64_hwcaps[] lists to test
'matches' not 'desc', and only print 'desc' if it is non-NULL.
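
A standalone sketch of the resulting walker pattern (struct cap and the
sample entries are made up for illustration; the real code operates on
arm64_cpu_capabilities):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Cut-down stand-in for struct arm64_cpu_capabilities. */
struct cap {
	const char *desc;	/* may now be NULL for combination entries */
	bool (*matches)(const struct cap *self);
};

static bool always(const struct cap *self) { (void)self; return true; }

static const struct cap caps[] = {
	{ .desc = "Privileged Access Never", .matches = always },
	{ .desc = NULL, .matches = always },	/* e.g. "PAN and not UAO" */
	{ .matches = NULL },	/* terminator, like the {} entry in arm64_features[] */
};

int main(void)
{
	/* Walk until .matches is NULL, and only print entries with a desc. */
	for (int i = 0; caps[i].matches; i++)
		if (caps[i].matches(&caps[i]) && caps[i].desc)
			printf("detected feature: %s\n", caps[i].desc);
	return 0;
}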

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 06bdcd019bd6..c717c3d78f5e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -754,7 +754,7 @@ static void __init setup_cpu_hwcaps(void)
 	int i;
 	const struct arm64_cpu_capabilities *hwcaps = arm64_hwcaps;
 
-	for (i = 0; hwcaps[i].desc; i++)
+	for (i = 0; hwcaps[i].matches; i++)
 		if (hwcaps[i].matches(&hwcaps[i]))
 			cap_set_hwcap(&hwcaps[i]);
 }
@@ -764,11 +764,11 @@ void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 {
 	int i;
 
-	for (i = 0; caps[i].desc; i++) {
+	for (i = 0; caps[i].matches; i++) {
 		if (!caps[i].matches(&caps[i]))
 			continue;
 
-		if (!cpus_have_cap(caps[i].capability))
+		if (!cpus_have_cap(caps[i].capability) && caps[i].desc)
 			pr_info("%s %s\n", info, caps[i].desc);
 		cpus_set_cap(caps[i].capability);
 	}
@@ -783,7 +783,7 @@ enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
 {
 	int i;
 
-	for (i = 0; caps[i].desc; i++)
+	for (i = 0; caps[i].matches; i++)
 		if (caps[i].enable && cpus_have_cap(caps[i].capability))
 			on_each_cpu(caps[i].enable, NULL, true);
 }
@@ -890,7 +890,7 @@ void verify_local_cpu_capabilities(void)
 		return;
 
 	caps = arm64_features;
-	for (i = 0; caps[i].desc; i++) {
+	for (i = 0; caps[i].matches; i++) {
 		if (!cpus_have_cap(caps[i].capability) || !caps[i].sys_reg)
 			continue;
 		/*
@@ -903,7 +903,7 @@ void verify_local_cpu_capabilities(void)
 			caps[i].enable(NULL);
 	}
 
-	for (i = 0, caps = arm64_hwcaps; caps[i].desc; i++) {
+	for (i = 0, caps = arm64_hwcaps; caps[i].matches; i++) {
 		if (!cpus_have_hwcap(&caps[i]))
 			continue;
 		if (!feature_matches(__raw_read_system_reg(caps[i].sys_reg), &caps[i]))
-- 
2.6.2

* [PATCH v2 5/5] arm64: kernel: Don't toggle PAN on systems with UAO
  2016-02-05 14:58 [PATCH v2 0/5] arm64: kernel: Add support for User Access Override James Morse
                   ` (3 preceding siblings ...)
  2016-02-05 14:58 ` [PATCH v2 4/5] arm64: cpufeature: Test 'matches' pointer to find the end of the list James Morse
@ 2016-02-05 14:58 ` James Morse
  2016-02-18 14:36   ` Catalin Marinas
  2016-02-05 15:40 ` [PATCH v2 0/5] arm64: kernel: Add support for User Access Override Arnd Bergmann
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: James Morse @ 2016-02-05 14:58 UTC (permalink / raw)
  To: linux-arm-kernel

If a CPU supports both Privileged Access Never (PAN) and User Access
Override (UAO), we don't need to disable/re-enable PAN around all the
copy_to_user()-like calls.

UAO alternatives cause these calls to use the 'unprivileged' load/store
instructions, which are overridden to be the privileged kind when
fs==KERNEL_DS.

This patch changes the copy_to_user() calls to have their PAN toggling
depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.

If both features are detected, PAN will be enabled, but the copy_to_user()
alternatives will not be applied. This means PAN will be enabled all the
time for these functions. If only PAN is detected, the toggling will be
enabled as normal.

This will save the time taken to disable/re-enable PAN, and allow us to
catch copy_to_user() accesses that occur with fs==KERNEL_DS.

Futex and swp-emulation code continue to hang their PAN toggling code on
ARM64_HAS_PAN.
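
A minimal standalone sketch of the selection logic (alt_pan_not_uao() is a
made-up stand-in mirroring the intent of the cpufeature_pan_not_uao() helper
added below):

#include <stdbool.h>
#include <stdio.h>

/* The PAN-toggling alternatives only apply when PAN is present and UAO is not. */
static bool alt_pan_not_uao(bool has_pan, bool has_uao)
{
	return has_pan && !has_uao;
}

int main(void)
{
	for (int pan = 0; pan <= 1; pan++)
		for (int uao = 0; uao <= 1; uao++)
			printf("PAN=%d UAO=%d -> toggle PAN around uaccess: %s\n",
			       pan, uao, alt_pan_not_uao(pan, uao) ? "yes" : "no");
	return 0;
}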

Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |  3 ++-
 arch/arm64/include/asm/uaccess.h    |  8 ++++----
 arch/arm64/kernel/cpufeature.c      | 16 ++++++++++++++++
 arch/arm64/lib/clear_user.S         |  4 ++--
 arch/arm64/lib/copy_from_user.S     |  4 ++--
 arch/arm64/lib/copy_in_user.S       |  4 ++--
 arch/arm64/lib/copy_to_user.S       |  4 ++--
 arch/arm64/mm/fault.c               |  3 +++
 8 files changed, 33 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 349e8203d16e..7ae0d90a21ed 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -31,8 +31,9 @@
 #define ARM64_WORKAROUND_CAVIUM_23154		6
 #define ARM64_WORKAROUND_834220			7
 #define ARM64_HAS_UAO				8
+#define ARM64_ALT_PAN_NOT_UAO			9
 
-#define ARM64_NCAPS				9
+#define ARM64_NCAPS				10
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index f973bdce8410..16ba0d5c9740 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -145,7 +145,7 @@ static inline void set_fs(mm_segment_t fs)
 do {									\
 	unsigned long __gu_val;						\
 	__chk_user_ptr(ptr);						\
-	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_ALT_PAN_NOT_UAO,\
 			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
@@ -168,7 +168,7 @@ do {									\
 		BUILD_BUG();						\
 	}								\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
-	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_ALT_PAN_NOT_UAO,\
 			CONFIG_ARM64_PAN));				\
 } while (0)
 
@@ -217,7 +217,7 @@ do {									\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
 	__chk_user_ptr(ptr);						\
-	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_ALT_PAN_NOT_UAO,\
 			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
@@ -239,7 +239,7 @@ do {									\
 	default:							\
 		BUILD_BUG();						\
 	}								\
-	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_ALT_PAN_NOT_UAO,\
 			CONFIG_ARM64_PAN));				\
 } while (0)
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c717c3d78f5e..7ac60350eb08 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -67,6 +67,10 @@ DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 		.width = 0,				\
 	}
 
+/* meta feature for alternatives */
+static bool __maybe_unused
+cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry);
+
 static struct arm64_ftr_bits ftr_id_aa64isar0[] = {
 	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 32, 32, 0),
 	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
@@ -671,6 +675,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.enable = cpu_enable_uao,
 	},
 #endif /* CONFIG_ARM64_UAO */
+#ifdef CONFIG_ARM64_PAN
+	{
+		.capability = ARM64_ALT_PAN_NOT_UAO,
+		.matches = cpufeature_pan_not_uao,
+	},
+#endif /* CONFIG_ARM64_PAN */
 	{},
 };
 
@@ -949,3 +959,9 @@ void __init setup_cpu_features(void)
 		pr_warn("L1_CACHE_BYTES smaller than the Cache Writeback Granule (%d < %d)\n",
 			L1_CACHE_BYTES, cls);
 }
+
+static bool __maybe_unused
+cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry)
+{
+	return (cpus_have_cap(ARM64_HAS_PAN) && !cpus_have_cap(ARM64_HAS_UAO));
+}
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index 3f950b677c07..5d1cad3ce6d6 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -33,7 +33,7 @@
  * Alignment fixed up by hardware.
  */
 ENTRY(__clear_user)
-ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x2, x1			// save the size for fixup return
 	subs	x1, x1, #8
@@ -54,7 +54,7 @@ uao_user_alternative 9f, strh, sttrh, wzr, x0, 2
 	b.mi	5f
 uao_user_alternative 9f, strb, sttrb, wzr, x0, 0
 5:	mov	x0, #0
-ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__clear_user)
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 1d982d64f1a7..17e8306dca29 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -67,11 +67,11 @@
 
 end	.req	x5
 ENTRY(__copy_from_user)
-ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
 #include "copy_template.S"
-ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0				// Nothing to copy
 	ret
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index feaad1520dc1..f7292dd08c84 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -68,11 +68,11 @@
 
 end	.req	x5
 ENTRY(__copy_in_user)
-ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
 #include "copy_template.S"
-ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 2dae2cd2c481..21faae60f988 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -66,11 +66,11 @@
 
 end	.req	x5
 ENTRY(__copy_to_user)
-ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
 #include "copy_template.S"
-ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 820d47353cf0..d0762a729d01 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -234,6 +234,9 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	}
 
 	if (permission_fault(esr) && (addr < USER_DS)) {
+		if (get_thread_info(regs->sp)->addr_limit == KERNEL_DS)
+			panic("Accessing user space memory with fs=KERNEL_DS");
+
 		if (!search_exception_tables(regs->pc))
 			panic("Accessing user space memory outside uaccess.h routines");
 	}
-- 
2.6.2

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-02-05 14:58 [PATCH v2 0/5] arm64: kernel: Add support for User Access Override James Morse
                   ` (4 preceding siblings ...)
  2016-02-05 14:58 ` [PATCH v2 5/5] arm64: kernel: Don't toggle PAN on systems with UAO James Morse
@ 2016-02-05 15:40 ` Arnd Bergmann
  2016-02-09  9:47   ` Will Deacon
  2016-02-18 18:03 ` Catalin Marinas
  2016-03-07 16:43 ` James Morse
  7 siblings, 1 reply; 36+ messages in thread
From: Arnd Bergmann @ 2016-02-05 15:40 UTC (permalink / raw)
  To: linux-arm-kernel

On Friday 05 February 2016 14:58:45 James Morse wrote:
> This series adds support for User Access Override (UAO; part of the ARMv8.2
> Extensions[0]). When enabled, this causes the get_user() accessors to use
> the unprivileged load/store instructions. When addr_limit is set to
> KERNEL_DS, we set the override bit allowing privileged access.
> 
> Because the unprivileged instructions don't trip PAN, the last patch changes
> which 'alternative' values are swapped in, allowing PAN to be left enabled
> during get_user() and friends.
> 
> This series can be retrieved from:
> git://linux-arm.org/linux-jm.git -b uao/v2
> 

Looks very nice. I have no complaints about the implementations, but two questions:

* There was recently some work in reducing the number of set_fs() calls in
  the compat_ioctl, which further reduces the attack surface. Should we try to
  continue that effort in other syscalls?

* Do we expect to handle this using live patching indefinitely? I can imagine
  that at some point in the future, ARMv8.2+ systems will be the vast majority,
  so it might be nice to support enabling it unconditionally (same for any
  of the alternative bits really). Is there a long-term strategy?

	Arnd

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-02-05 15:40 ` [PATCH v2 0/5] arm64: kernel: Add support for User Access Override Arnd Bergmann
@ 2016-02-09  9:47   ` Will Deacon
  0 siblings, 0 replies; 36+ messages in thread
From: Will Deacon @ 2016-02-09  9:47 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 05, 2016 at 04:40:54PM +0100, Arnd Bergmann wrote:
> * Do we expect to handle this using live patching indefinitely? I can imagine
>   that at some point in the future, ARMv8.2+ systems will be the vast majority,
>   so it might be nice to support enabling it unconditionally (same for any
>   of the alternative bits really). Is there a long-term strategy?

Another good reason for an option turning these things on by default is
that it makes debugging *so* much easier when the vmlinux matches what's
actually running!

So yes, that's definitely something we should look at in the long-term,
but we still need the live patching for distro kernels that will assumedly
want a single Image covering ARMv8+ for a while.

Will

* [PATCH v2 3/5] arm64: kernel: Add support for User Access Override
  2016-02-05 14:58 ` [PATCH v2 3/5] arm64: kernel: Add support for User Access Override James Morse
@ 2016-02-18 12:26   ` Catalin Marinas
  0 siblings, 0 replies; 36+ messages in thread
From: Catalin Marinas @ 2016-02-18 12:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 05, 2016 at 02:58:48PM +0000, James Morse wrote:
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
[...]
> @@ -308,6 +312,20 @@ static void tls_thread_switch(struct task_struct *next)
>  	: : "r" (tpidr), "r" (tpidrro));
>  }
>  
> +/* Restore the UAO state depending on next's addr_limit */
> +static void uao_thread_switch(struct task_struct *next)
> +{
> +	unsigned long next_sp = next->thread.cpu_context.sp;
> +
> +	if (IS_ENABLED(CONFIG_ARM64_UAO) &&
> +	    get_thread_info(next_sp)->addr_limit == KERNEL_DS)

Can you not use task_thread_info(next) directly instead of a new
get_thread_info()?

> +		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO,
> +			        CONFIG_ARM64_UAO));
> +	else
> +		asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO,
> +				CONFIG_ARM64_UAO));
> +}

I think we still end up with some empty asm in this function even though
CONFIG_ARM64_UAO is disabled, I'm not sure the compiler will eliminate
it completely. If that's the case, maybe use two nested "if" blocks.
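
e.g. combining that with the task_thread_info() suggestion above, something
roughly like this (sketch only, untested):

static void uao_thread_switch(struct task_struct *next)
{
	if (IS_ENABLED(CONFIG_ARM64_UAO)) {
		if (task_thread_info(next)->addr_limit == KERNEL_DS)
			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1),
					ARM64_HAS_UAO));
		else
			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0),
					ARM64_HAS_UAO));
	}
}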

> +
>  /*
>   * Thread switching.
>   */
> @@ -327,6 +345,8 @@ struct task_struct *__switch_to(struct task_struct *prev,
>  	 */
>  	dsb(ish);
>  
> +	uao_thread_switch(next);

We should place this before the dsb(), together with the other switches.
The pstate change should be self synchronising.

-- 
Catalin

* [PATCH v2 5/5] arm64: kernel: Don't toggle PAN on systems with UAO
  2016-02-05 14:58 ` [PATCH v2 5/5] arm64: kernel: Don't toggle PAN on systems with UAO James Morse
@ 2016-02-18 14:36   ` Catalin Marinas
  2016-02-18 14:43     ` James Morse
  0 siblings, 1 reply; 36+ messages in thread
From: Catalin Marinas @ 2016-02-18 14:36 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 05, 2016 at 02:58:50PM +0000, James Morse wrote:
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -234,6 +234,9 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>  	}
>  
>  	if (permission_fault(esr) && (addr < USER_DS)) {
> +		if (get_thread_info(regs->sp)->addr_limit == KERNEL_DS)
> +			panic("Accessing user space memory with fs=KERNEL_DS");

We could simply use "get_fs() == KERNEL_DS" as we should call
do_page_fault() outside the current context.
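
i.e. something roughly like (sketch only, not the applied fixup):

	if (permission_fault(esr) && (addr < USER_DS)) {
		if (get_fs() == KERNEL_DS)
			panic("Accessing user space memory with fs=KERNEL_DS");

		if (!search_exception_tables(regs->pc))
			panic("Accessing user space memory outside uaccess.h routines");
	}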

-- 
Catalin

* [PATCH v2 5/5] arm64: kernel: Don't toggle PAN on systems with UAO
  2016-02-18 14:36   ` Catalin Marinas
@ 2016-02-18 14:43     ` James Morse
  0 siblings, 0 replies; 36+ messages in thread
From: James Morse @ 2016-02-18 14:43 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Catalin,

On 18/02/16 14:36, Catalin Marinas wrote:
> On Fri, Feb 05, 2016 at 02:58:50PM +0000, James Morse wrote:
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -234,6 +234,9 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>>  	}
>>  
>>  	if (permission_fault(esr) && (addr < USER_DS)) {
>> +		if (get_thread_info(regs->sp)->addr_limit == KERNEL_DS)
>> +			panic("Accessing user space memory with fs=KERNEL_DS");
> 
> We could simply use "get_fs() == KERNEL_DS" as we should call
> do_page_fault() outside the current context.

shouldn't?

You're right, it will always be on the same stack, so get_fs() will give us the
right result. This fits better with getting rid of the new get_thread_info() in
patch 3.


Thanks,

James

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-02-05 14:58 [PATCH v2 0/5] arm64: kernel: Add support for User Access Override James Morse
                   ` (5 preceding siblings ...)
  2016-02-05 15:40 ` [PATCH v2 0/5] arm64: kernel: Add support for User Access Override Arnd Bergmann
@ 2016-02-18 18:03 ` Catalin Marinas
  2016-02-19 15:38   ` Peter Maydell
  2016-03-07 16:43 ` James Morse
  7 siblings, 1 reply; 36+ messages in thread
From: Catalin Marinas @ 2016-02-18 18:03 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 05, 2016 at 02:58:45PM +0000, James Morse wrote:
> James Morse (5):
>   arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro
>   arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
>   arm64: kernel: Add support for User Access Override
>   arm64: cpufeature: Test 'matches' pointer to find the end of the list
>   arm64: kernel: Don't toggle PAN on systems with UAO

Patches applied with an additional fixup on top for removing
get_thread_info().

Thanks.

-- 
Catalin

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-02-18 18:03 ` Catalin Marinas
@ 2016-02-19 15:38   ` Peter Maydell
  2016-02-19 16:46     ` Catalin Marinas
  0 siblings, 1 reply; 36+ messages in thread
From: Peter Maydell @ 2016-02-19 15:38 UTC (permalink / raw)
  To: linux-arm-kernel

On 18 February 2016 at 18:03, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Fri, Feb 05, 2016 at 02:58:45PM +0000, James Morse wrote:
>> James Morse (5):
>>   arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro
>>   arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
>>   arm64: kernel: Add support for User Access Override
>>   arm64: cpufeature: Test 'matches' pointer to find the end of the list
>>   arm64: kernel: Don't toggle PAN on systems with UAO
>
> Patches applied with an additional fixup on top for removing
> get_thread_info().

Just to let you know, unfortunately this series breaks booting the
kernel on QEMU. We didn't implement the parts of the ID register
space that the ARM ARM documents as "reserved, RAZ", and so when
the kernel touches ID_AA64MMFR2 (new in v8.2) QEMU hands it an
UNDEF. This is obviously a bug in QEMU, and I just sent out a patch
for it: http://patchwork.ozlabs.org/patch/585237/ -- but of course
that doesn't do anything for all the buggy QEMUs already in the field.

This is more of a heads-up than a demand that you Do Something,
but perhaps somebody has a clever idea...

thanks
-- PMM

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-02-19 15:38   ` Peter Maydell
@ 2016-02-19 16:46     ` Catalin Marinas
  2016-02-19 16:54       ` Peter Maydell
  0 siblings, 1 reply; 36+ messages in thread
From: Catalin Marinas @ 2016-02-19 16:46 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 19, 2016 at 03:38:44PM +0000, Peter Maydell wrote:
> On 18 February 2016 at 18:03, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Fri, Feb 05, 2016 at 02:58:45PM +0000, James Morse wrote:
> >> James Morse (5):
> >>   arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro
> >>   arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
> >>   arm64: kernel: Add support for User Access Override
> >>   arm64: cpufeature: Test 'matches' pointer to find the end of the list
> >>   arm64: kernel: Don't toggle PAN on systems with UAO
> >
> > Patches applied with an additional fixup on top for removing
> > get_thread_info().
> 
> Just to let you know, unfortunately this series breaks booting the
> kernel on QEMU. We didn't implement the parts of the ID register
> space that the ARM ARM documents as "reserved, RAZ", and so when
> the kernel touches ID_AA64MMFR2 (new in v8.2) QEMU hands it an
> UNDEF. This is obviously a bug in QEMU, and I just sent out a patch
> for it: http://patchwork.ozlabs.org/patch/585237/ -- but of course
> that doesn't do anything for all the buggy QEMUs already in the field.
> 
> This is more of a heads-up than a demand that you Do Something,
> but perhaps somebody has a clever idea...

Only if Qemu had its own MIDR/REVIDR ;) (and patch the instruction like
other errata workarounds).

-- 
Catalin

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-02-19 16:46     ` Catalin Marinas
@ 2016-02-19 16:54       ` Peter Maydell
  2016-02-19 16:57         ` Ard Biesheuvel
  0 siblings, 1 reply; 36+ messages in thread
From: Peter Maydell @ 2016-02-19 16:54 UTC (permalink / raw)
  To: linux-arm-kernel

On 19 February 2016 at 16:46, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Fri, Feb 19, 2016 at 03:38:44PM +0000, Peter Maydell wrote:
>> This is more of a heads-up than a demand that you Do Something,
>> but perhaps somebody has a clever idea...
>
> Only if Qemu had its own MIDR/REVIDR ;) (and patch the instruction like
> other errata workarounds).

Smiley noted, but FWIW the relevant CPUs advertise themselves as
MIDR 0x411fd070 REVIDR 0 (A57) and MIDR 0x410fd034 REVIDR 0 (A53),
and it should be harmless to patch the ID_AA64MMFR2 read to always
return 0 on any A53 or A57 including h/w, because they'll read as
zero anyway...

So it comes down to how much we want to work around QEMU "errata".

thanks
-- PMM

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-02-19 16:54       ` Peter Maydell
@ 2016-02-19 16:57         ` Ard Biesheuvel
  2016-02-19 17:03           ` Peter Maydell
  0 siblings, 1 reply; 36+ messages in thread
From: Ard Biesheuvel @ 2016-02-19 16:57 UTC (permalink / raw)
  To: linux-arm-kernel

On 19 February 2016 at 17:54, Peter Maydell <peter.maydell@linaro.org> wrote:
> On 19 February 2016 at 16:46, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> On Fri, Feb 19, 2016 at 03:38:44PM +0000, Peter Maydell wrote:
>>> This is more of a heads-up than a demand that you Do Something,
>>> but perhaps somebody has a clever idea...
>>
>> Only if Qemu had its own MIDR/REVIDR ;) (and patch the instruction like
>> other errata workarounds).
>
> Smiley noted, but FWIW the relevant CPUs advertise themselves as
> MIDR 0x411fd070 REVIDR 0 (A57) and MIDR 0x410fd034 REVIDR 0 (A53),
> and it should be harmless to patch the ID_AA64MMFR2 read to always
> return 0 on any A53 or A57 including h/w, because they'll read as
> zero anyway...
>
> So it comes down to how much we want to work around QEMU "errata".
>

Doesn't this only affect TCG emulation, and not KVM?

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-02-19 16:57         ` Ard Biesheuvel
@ 2016-02-19 17:03           ` Peter Maydell
  0 siblings, 0 replies; 36+ messages in thread
From: Peter Maydell @ 2016-02-19 17:03 UTC (permalink / raw)
  To: linux-arm-kernel

On 19 February 2016 at 16:57, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> Doesn't this only affect TCG emulation, and not KVM?

Correct, but that's still a fair userbase.

thanks
-- PMM

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-02-05 14:58 ` [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate James Morse
@ 2016-03-03 17:59   ` Christopher Covington
  2016-03-03 18:27     ` Robin Murphy
  0 siblings, 1 reply; 36+ messages in thread
From: Christopher Covington @ 2016-03-03 17:59 UTC (permalink / raw)
  To: linux-arm-kernel

Hi James,

This change breaks booting linux-next for me.

On 02/05/2016 09:58 AM, James Morse wrote:
> ARMv8.2 adds a new feature register id_aa64mmfr2. This patch adds the
> cpu feature boiler plate used by the actual features in later patches.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm64/include/asm/cpu.h    |  1 +
>  arch/arm64/include/asm/sysreg.h |  4 ++++
>  arch/arm64/kernel/cpufeature.c  | 10 ++++++++++
>  arch/arm64/kernel/cpuinfo.c     |  1 +
>  4 files changed, 16 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
> index b5e9cee4b5f8..13a6103130cd 100644
> --- a/arch/arm64/include/asm/cpu.h
> +++ b/arch/arm64/include/asm/cpu.h
> @@ -36,6 +36,7 @@ struct cpuinfo_arm64 {
>  	u64		reg_id_aa64isar1;
>  	u64		reg_id_aa64mmfr0;
>  	u64		reg_id_aa64mmfr1;
> +	u64		reg_id_aa64mmfr2;
>  	u64		reg_id_aa64pfr0;
>  	u64		reg_id_aa64pfr1;
>  
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 4aeebec3d882..4ef294ec49cc 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -72,6 +72,7 @@
>  
>  #define SYS_ID_AA64MMFR0_EL1		sys_reg(3, 0, 0, 7, 0)
>  #define SYS_ID_AA64MMFR1_EL1		sys_reg(3, 0, 0, 7, 1)
> +#define SYS_ID_AA64MMFR2_EL1		sys_reg(3, 0, 0, 7, 2)
>  
>  #define SYS_CNTFRQ_EL0			sys_reg(3, 3, 14, 0, 0)
>  #define SYS_CTR_EL0			sys_reg(3, 3, 0, 0, 1)
> @@ -137,6 +138,9 @@
>  #define ID_AA64MMFR1_VMIDBITS_SHIFT	4
>  #define ID_AA64MMFR1_HADBS_SHIFT	0
>  
> +/* id_aa64mmfr2 */
> +#define ID_AA64MMFR2_UAO_SHIFT		4
> +
>  /* id_aa64dfr0 */
>  #define ID_AA64DFR0_CTX_CMPS_SHIFT	28
>  #define ID_AA64DFR0_WRPS_SHIFT		20
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index a84febc40db2..5b8bc98105c1 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -123,6 +123,11 @@ static struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
>  	ARM64_FTR_END,
>  };
>  
> +static struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
> +	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
>  static struct arm64_ftr_bits ftr_ctr[] = {
>  	U_ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 31, 1, 1),	/* RAO */
>  	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 28, 3, 0),
> @@ -284,6 +289,7 @@ static struct arm64_ftr_reg arm64_ftr_regs[] = {
>  	/* Op1 = 0, CRn = 0, CRm = 7 */
>  	ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
>  	ARM64_FTR_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1),
> +	ARM64_FTR_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2),
>  
>  	/* Op1 = 3, CRn = 0, CRm = 0 */
>  	ARM64_FTR_REG(SYS_CTR_EL0, ftr_ctr),
> @@ -408,6 +414,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
>  	init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
>  	init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
>  	init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
> +	init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2);
>  	init_cpu_ftr_reg(SYS_ID_AA64PFR0_EL1, info->reg_id_aa64pfr0);
>  	init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
>  	init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
> @@ -517,6 +524,8 @@ void update_cpu_features(int cpu,
>  				      info->reg_id_aa64mmfr0, boot->reg_id_aa64mmfr0);
>  	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR1_EL1, cpu,
>  				      info->reg_id_aa64mmfr1, boot->reg_id_aa64mmfr1);
> +	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR2_EL1, cpu,
> +				      info->reg_id_aa64mmfr2, boot->reg_id_aa64mmfr2);
>  
>  	/*
>  	 * EL3 is not our concern.
> @@ -814,6 +823,7 @@ static u64 __raw_read_system_reg(u32 sys_id)
>  	case SYS_ID_AA64DFR1_EL1:	return read_cpuid(SYS_ID_AA64DFR0_EL1);
>  	case SYS_ID_AA64MMFR0_EL1:	return read_cpuid(SYS_ID_AA64MMFR0_EL1);
>  	case SYS_ID_AA64MMFR1_EL1:	return read_cpuid(SYS_ID_AA64MMFR1_EL1);
> +	case SYS_ID_AA64MMFR2_EL1:	return read_cpuid(SYS_ID_AA64MMFR2_EL1);
>  	case SYS_ID_AA64ISAR0_EL1:	return read_cpuid(SYS_ID_AA64ISAR0_EL1);
>  	case SYS_ID_AA64ISAR1_EL1:	return read_cpuid(SYS_ID_AA64ISAR1_EL1);
>  
> diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
> index 76df22272804..966fbd52550b 100644
> --- a/arch/arm64/kernel/cpuinfo.c
> +++ b/arch/arm64/kernel/cpuinfo.c
> @@ -210,6 +210,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
>  	info->reg_id_aa64isar1 = read_cpuid(SYS_ID_AA64ISAR1_EL1);
>  	info->reg_id_aa64mmfr0 = read_cpuid(SYS_ID_AA64MMFR0_EL1);
>  	info->reg_id_aa64mmfr1 = read_cpuid(SYS_ID_AA64MMFR1_EL1);
> +	info->reg_id_aa64mmfr2 = read_cpuid(SYS_ID_AA64MMFR2_EL1);
>  	info->reg_id_aa64pfr0 = read_cpuid(SYS_ID_AA64PFR0_EL1);
>  	info->reg_id_aa64pfr1 = read_cpuid(SYS_ID_AA64PFR1_EL1);

swapper[0]: undefined instruction: pc=ffffff800808d730
Code: d5380702 d5380721 f9017c02 f9018001 (d5380742)
Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc6-next-20160303 #1
Hardware name: (null) (DT)
task: ffffff8008b2d980 ti: ffffff8008b20000 task.ti: ffffff8008b20000
PC is at __cpuinfo_store_cpu+0x68/0x198
LR is at cpuinfo_store_boot_cpu+0x28/0x50

ffffff800808d730:       d5380742        mrs     x2, s3_0_c0_c7_2

Thanks,
Christopher Covington

-- 
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-03-03 17:59   ` Christopher Covington
@ 2016-03-03 18:27     ` Robin Murphy
  2016-03-03 19:03       ` Christopher Covington
  0 siblings, 1 reply; 36+ messages in thread
From: Robin Murphy @ 2016-03-03 18:27 UTC (permalink / raw)
  To: linux-arm-kernel

On 03/03/16 17:59, Christopher Covington wrote:
> Hi James,
>
> This change breaks booting linux-next for me.
>
> On 02/05/2016 09:58 AM, James Morse wrote:
>> ARMv8.2 adds a new feature register id_aa64mmfr2. This patch adds the
>> cpu feature boiler plate used by the actual features in later patches.
>>
>> Signed-off-by: James Morse <james.morse@arm.com>
>> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> ---
>>   arch/arm64/include/asm/cpu.h    |  1 +
>>   arch/arm64/include/asm/sysreg.h |  4 ++++
>>   arch/arm64/kernel/cpufeature.c  | 10 ++++++++++
>>   arch/arm64/kernel/cpuinfo.c     |  1 +
>>   4 files changed, 16 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
>> index b5e9cee4b5f8..13a6103130cd 100644
>> --- a/arch/arm64/include/asm/cpu.h
>> +++ b/arch/arm64/include/asm/cpu.h
>> @@ -36,6 +36,7 @@ struct cpuinfo_arm64 {
>>   	u64		reg_id_aa64isar1;
>>   	u64		reg_id_aa64mmfr0;
>>   	u64		reg_id_aa64mmfr1;
>> +	u64		reg_id_aa64mmfr2;
>>   	u64		reg_id_aa64pfr0;
>>   	u64		reg_id_aa64pfr1;
>>
>> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
>> index 4aeebec3d882..4ef294ec49cc 100644
>> --- a/arch/arm64/include/asm/sysreg.h
>> +++ b/arch/arm64/include/asm/sysreg.h
>> @@ -72,6 +72,7 @@
>>
>>   #define SYS_ID_AA64MMFR0_EL1		sys_reg(3, 0, 0, 7, 0)
>>   #define SYS_ID_AA64MMFR1_EL1		sys_reg(3, 0, 0, 7, 1)
>> +#define SYS_ID_AA64MMFR2_EL1		sys_reg(3, 0, 0, 7, 2)
>>
>>   #define SYS_CNTFRQ_EL0			sys_reg(3, 3, 14, 0, 0)
>>   #define SYS_CTR_EL0			sys_reg(3, 3, 0, 0, 1)
>> @@ -137,6 +138,9 @@
>>   #define ID_AA64MMFR1_VMIDBITS_SHIFT	4
>>   #define ID_AA64MMFR1_HADBS_SHIFT	0
>>
>> +/* id_aa64mmfr2 */
>> +#define ID_AA64MMFR2_UAO_SHIFT		4
>> +
>>   /* id_aa64dfr0 */
>>   #define ID_AA64DFR0_CTX_CMPS_SHIFT	28
>>   #define ID_AA64DFR0_WRPS_SHIFT		20
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index a84febc40db2..5b8bc98105c1 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -123,6 +123,11 @@ static struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
>>   	ARM64_FTR_END,
>>   };
>>
>> +static struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>> +	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>>   static struct arm64_ftr_bits ftr_ctr[] = {
>>   	U_ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 31, 1, 1),	/* RAO */
>>   	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 28, 3, 0),
>> @@ -284,6 +289,7 @@ static struct arm64_ftr_reg arm64_ftr_regs[] = {
>>   	/* Op1 = 0, CRn = 0, CRm = 7 */
>>   	ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
>>   	ARM64_FTR_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1),
>> +	ARM64_FTR_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2),
>>
>>   	/* Op1 = 3, CRn = 0, CRm = 0 */
>>   	ARM64_FTR_REG(SYS_CTR_EL0, ftr_ctr),
>> @@ -408,6 +414,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
>>   	init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
>>   	init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
>>   	init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
>> +	init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2);
>>   	init_cpu_ftr_reg(SYS_ID_AA64PFR0_EL1, info->reg_id_aa64pfr0);
>>   	init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
>>   	init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
>> @@ -517,6 +524,8 @@ void update_cpu_features(int cpu,
>>   				      info->reg_id_aa64mmfr0, boot->reg_id_aa64mmfr0);
>>   	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR1_EL1, cpu,
>>   				      info->reg_id_aa64mmfr1, boot->reg_id_aa64mmfr1);
>> +	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR2_EL1, cpu,
>> +				      info->reg_id_aa64mmfr2, boot->reg_id_aa64mmfr2);
>>
>>   	/*
>>   	 * EL3 is not our concern.
>> @@ -814,6 +823,7 @@ static u64 __raw_read_system_reg(u32 sys_id)
>>   	case SYS_ID_AA64DFR1_EL1:	return read_cpuid(SYS_ID_AA64DFR0_EL1);
>>   	case SYS_ID_AA64MMFR0_EL1:	return read_cpuid(SYS_ID_AA64MMFR0_EL1);
>>   	case SYS_ID_AA64MMFR1_EL1:	return read_cpuid(SYS_ID_AA64MMFR1_EL1);
>> +	case SYS_ID_AA64MMFR2_EL1:	return read_cpuid(SYS_ID_AA64MMFR2_EL1);
>>   	case SYS_ID_AA64ISAR0_EL1:	return read_cpuid(SYS_ID_AA64ISAR0_EL1);
>>   	case SYS_ID_AA64ISAR1_EL1:	return read_cpuid(SYS_ID_AA64ISAR1_EL1);
>>
>> diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
>> index 76df22272804..966fbd52550b 100644
>> --- a/arch/arm64/kernel/cpuinfo.c
>> +++ b/arch/arm64/kernel/cpuinfo.c
>> @@ -210,6 +210,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
>>   	info->reg_id_aa64isar1 = read_cpuid(SYS_ID_AA64ISAR1_EL1);
>>   	info->reg_id_aa64mmfr0 = read_cpuid(SYS_ID_AA64MMFR0_EL1);
>>   	info->reg_id_aa64mmfr1 = read_cpuid(SYS_ID_AA64MMFR1_EL1);
>> +	info->reg_id_aa64mmfr2 = read_cpuid(SYS_ID_AA64MMFR2_EL1);
>>   	info->reg_id_aa64pfr0 = read_cpuid(SYS_ID_AA64PFR0_EL1);
>>   	info->reg_id_aa64pfr1 = read_cpuid(SYS_ID_AA64PFR1_EL1);
>
> swapper[0]: undefined instruction: pc=ffffff800808d730
> Code: d5380702 d5380721 f9017c02 f9018001 (d5380742)
> Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc6-next-20160303 #1
> Hardware name: (null) (DT)
> task: ffffff8008b2d980 ti: ffffff8008b20000 task.ti: ffffff8008b20000
> PC is at __cpuinfo_store_cpu+0x68/0x198
> LR is at cpuinfo_store_boot_cpu+0x28/0x50
>
> ffffff800808d730:       d5380742        mrs     x2, s3_0_c0_c7_2

Hmm, per table C5-6 in the ARMv8 ARM (issue i), that's specifically a 
reserved encoding, rather than an unallocated one, so it should read as 
zero, not undef. What are you running on?

Robin.

>
> Thanks,
> Christopher Covington
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-03-03 18:27     ` Robin Murphy
@ 2016-03-03 19:03       ` Christopher Covington
  2016-03-03 19:19         ` Will Deacon
  2016-03-04 14:59         ` Mark Rutland
  0 siblings, 2 replies; 36+ messages in thread
From: Christopher Covington @ 2016-03-03 19:03 UTC (permalink / raw)
  To: linux-arm-kernel

On 03/03/2016 01:27 PM, Robin Murphy wrote:
> On 03/03/16 17:59, Christopher Covington wrote:

>>> --- a/arch/arm64/kernel/cpuinfo.c
>>> +++ b/arch/arm64/kernel/cpuinfo.c
>>> @@ -210,6 +210,7 @@ static void __cpuinfo_store_cpu(struct
>>> cpuinfo_arm64 *info)
>>>       info->reg_id_aa64isar1 = read_cpuid(SYS_ID_AA64ISAR1_EL1);
>>>       info->reg_id_aa64mmfr0 = read_cpuid(SYS_ID_AA64MMFR0_EL1);
>>>       info->reg_id_aa64mmfr1 = read_cpuid(SYS_ID_AA64MMFR1_EL1);
>>> +    info->reg_id_aa64mmfr2 = read_cpuid(SYS_ID_AA64MMFR2_EL1);
>>>       info->reg_id_aa64pfr0 = read_cpuid(SYS_ID_AA64PFR0_EL1);
>>>       info->reg_id_aa64pfr1 = read_cpuid(SYS_ID_AA64PFR1_EL1);
>>
>> swapper[0]: undefined instruction: pc=ffffff800808d730
>> Code: d5380702 d5380721 f9017c02 f9018001 (d5380742)
>> Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
>> Modules linked in:
>> CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc6-next-20160303 #1
>> Hardware name: (null) (DT)
>> task: ffffff8008b2d980 ti: ffffff8008b20000 task.ti: ffffff8008b20000
>> PC is at __cpuinfo_store_cpu+0x68/0x198
>> LR is at cpuinfo_store_boot_cpu+0x28/0x50
>>
>> ffffff800808d730:       d5380742        mrs     x2, s3_0_c0_c7_2
> 
> Hmm, per table C5-6 in the ARMv8 ARM (issue i), that's specifically a
> reserved encoding, rather than an unallocated one, so it should read as
> zero, not undef. What are you running on?

QDF2432. I'll investigate.

Cov

-- 
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-03-03 19:03       ` Christopher Covington
@ 2016-03-03 19:19         ` Will Deacon
  2016-03-04 10:20           ` Suzuki K. Poulose
  2016-03-04 14:59         ` Mark Rutland
  1 sibling, 1 reply; 36+ messages in thread
From: Will Deacon @ 2016-03-03 19:19 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 03, 2016 at 02:03:08PM -0500, Christopher Covington wrote:
> On 03/03/2016 01:27 PM, Robin Murphy wrote:
> > On 03/03/16 17:59, Christopher Covington wrote:
> 
> >>> --- a/arch/arm64/kernel/cpuinfo.c
> >>> +++ b/arch/arm64/kernel/cpuinfo.c
> >>> @@ -210,6 +210,7 @@ static void __cpuinfo_store_cpu(struct
> >>> cpuinfo_arm64 *info)
> >>>       info->reg_id_aa64isar1 = read_cpuid(SYS_ID_AA64ISAR1_EL1);
> >>>       info->reg_id_aa64mmfr0 = read_cpuid(SYS_ID_AA64MMFR0_EL1);
> >>>       info->reg_id_aa64mmfr1 = read_cpuid(SYS_ID_AA64MMFR1_EL1);
> >>> +    info->reg_id_aa64mmfr2 = read_cpuid(SYS_ID_AA64MMFR2_EL1);
> >>>       info->reg_id_aa64pfr0 = read_cpuid(SYS_ID_AA64PFR0_EL1);
> >>>       info->reg_id_aa64pfr1 = read_cpuid(SYS_ID_AA64PFR1_EL1);
> >>
> >> swapper[0]: undefined instruction: pc=ffffff800808d730
> >> Code: d5380702 d5380721 f9017c02 f9018001 (d5380742)
> >> Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
> >> Modules linked in:
> >> CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc6-next-20160303 #1
> >> Hardware name: (null) (DT)
> >> task: ffffff8008b2d980 ti: ffffff8008b20000 task.ti: ffffff8008b20000
> >> PC is at __cpuinfo_store_cpu+0x68/0x198
> >> LR is at cpuinfo_store_boot_cpu+0x28/0x50
> >>
> >> ffffff800808d730:       d5380742        mrs     x2, s3_0_c0_c7_2
> > 
> > Hmm, per table C5-6 in the ARMv8 ARM (issue i), that's specifically a
> > reserved encoding, rather than an unallocated one, so it should read as
> > zero, not undef. What are you running on?
> 
> QDF2432. I'll investigate.

FWIW, I think the same issue was reported on qemu, so we may want to
handle this in the kernel anyway. We could either use alternatives on
the register read or handle the undef.

Will

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-03-03 19:19         ` Will Deacon
@ 2016-03-04 10:20           ` Suzuki K. Poulose
  2016-03-04 13:37             ` James Morse
  0 siblings, 1 reply; 36+ messages in thread
From: Suzuki K. Poulose @ 2016-03-04 10:20 UTC (permalink / raw)
  To: linux-arm-kernel

On 03/03/16 19:19, Will Deacon wrote:
> On Thu, Mar 03, 2016 at 02:03:08PM -0500, Christopher Covington wrote:
>> On 03/03/2016 01:27 PM, Robin Murphy wrote:
>>> On 03/03/16 17:59, Christopher Covington wrote:
>>

>>>> swapper[0]: undefined instruction: pc=ffffff800808d730
>>>> Code: d5380702 d5380721 f9017c02 f9018001 (d5380742)
>>>> Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
>>>> Modules linked in:
>>>> CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc6-next-20160303 #1
>>>> Hardware name: (null) (DT)
>>>> task: ffffff8008b2d980 ti: ffffff8008b20000 task.ti: ffffff8008b20000
>>>> PC is at __cpuinfo_store_cpu+0x68/0x198
>>>> LR is at cpuinfo_store_boot_cpu+0x28/0x50
>>>>
>>>> ffffff800808d730:       d5380742        mrs     x2, s3_0_c0_c7_2
>>>
>>> Hmm, per table C5-6 in the ARMv8 ARM (issue i), that's specifically a
>>> reserved encoding, rather than an unallocated one, so it should read as
>>> zero, not undef. What are you running on?
>>
>> QDF2432. I'll investigate.
>
> FWIW, I think the same issue was reported on qemu, so we may want to
> handle this in the kernel anyway. We could either use alternatives on
> the register read or handle the undef.

We could use parts of the mrs emulation patch [1] to emulate only the above
for kernel mode.

[1] https://lkml.org/lkml/2015/10/13/641

Cheers
Suzuki

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-03-04 10:20           ` Suzuki K. Poulose
@ 2016-03-04 13:37             ` James Morse
  2016-03-04 13:54               ` Robin Murphy
  0 siblings, 1 reply; 36+ messages in thread
From: James Morse @ 2016-03-04 13:37 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Suzuki,

On 04/03/16 10:20, Suzuki K. Poulose wrote:
> On 03/03/16 19:19, Will Deacon wrote:
>> On Thu, Mar 03, 2016 at 02:03:08PM -0500, Christopher Covington wrote:
>>> On 03/03/2016 01:27 PM, Robin Murphy wrote:
>>>> On 03/03/16 17:59, Christopher Covington wrote:
>>>>> swapper[0]: undefined instruction: pc=ffffff800808d730
>>>>> Code: d5380702 d5380721 f9017c02 f9018001 (d5380742)
>>>>> Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
>>>>> Modules linked in:
>>>>> CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc6-next-20160303 #1
>>>>> Hardware name: (null) (DT)
>>>>> task: ffffff8008b2d980 ti: ffffff8008b20000 task.ti: ffffff8008b20000
>>>>> PC is at __cpuinfo_store_cpu+0x68/0x198
>>>>> LR is at cpuinfo_store_boot_cpu+0x28/0x50
>>>>>
>>>>> ffffff800808d730:       d5380742        mrs     x2, s3_0_c0_c7_2
>>>>
>>>> Hmm, per table C5-6 in the ARMv8 ARM (issue i), that's specifically a
>>>> reserved encoding, rather than an unallocated one, so it should read as
>>>> zero, not undef. What are you running on?
>>>
>>> QDF2432. I'll investigate.
>>
>> FWIW, I think the same issue was reported on qemu, so we may want to
>> handle this in the kernel anyway. We could either use alternatives on
>> the register read or handle the undef.

Alternatives would let us handle the QDF2432 case, but detecting the version of
qemu doesn't sound right. I will put together a series to handle the undef.
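
Roughly along these lines (a very rough sketch, not the series itself; it
assumes the undef hook machinery can be persuaded to run for EL1 faults,
which I don't think it does today, and the names are made up):

	/* 'mrs xN, s3_0_c0_c7_2' encodes as 0xd5380740 | Xt, per the oops above */
	static int mmfr2_read_handler(struct pt_regs *regs, u32 instr)
	{
		int rt = instr & 0x1f;		/* Xt field of the MRS */

		if (rt != 31)			/* writes to XZR are discarded */
			regs->regs[rt] = 0;	/* reserved id registers read-as-zero */
		regs->pc += 4;			/* step over the faulting instruction */

		return 0;
	}

	static struct undef_hook mmfr2_hook = {
		.instr_mask	= 0xffffffe0,
		.instr_val	= 0xd5380740,
		.fn		= mmfr2_read_handler,
	};

	register_undef_hook(&mmfr2_hook);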


> We could use parts of the mrs emulation patch [1] to emulate only the above
> for kernel mode.

Thanks, I will see what I can poach!


James

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-03-04 13:37             ` James Morse
@ 2016-03-04 13:54               ` Robin Murphy
  0 siblings, 0 replies; 36+ messages in thread
From: Robin Murphy @ 2016-03-04 13:54 UTC (permalink / raw)
  To: linux-arm-kernel

On 04/03/16 13:37, James Morse wrote:
> Hi Suzuki,
>
> On 04/03/16 10:20, Suzuki K. Poulose wrote:
>> On 03/03/16 19:19, Will Deacon wrote:
>>> On Thu, Mar 03, 2016 at 02:03:08PM -0500, Christopher Covington wrote:
>>>> On 03/03/2016 01:27 PM, Robin Murphy wrote:
>>>>> On 03/03/16 17:59, Christopher Covington wrote:
>>>>>> swapper[0]: undefined instruction: pc=ffffff800808d730
>>>>>> Code: d5380702 d5380721 f9017c02 f9018001 (d5380742)
>>>>>> Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
>>>>>> Modules linked in:
>>>>>> CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc6-next-20160303 #1
>>>>>> Hardware name: (null) (DT)
>>>>>> task: ffffff8008b2d980 ti: ffffff8008b20000 task.ti: ffffff8008b20000
>>>>>> PC is at __cpuinfo_store_cpu+0x68/0x198
>>>>>> LR is at cpuinfo_store_boot_cpu+0x28/0x50
>>>>>>
>>>>>> ffffff800808d730:       d5380742        mrs     x2, s3_0_c0_c7_2
>>>>>
>>>>> Hmm, per table C5-6 in the ARMv8 ARM (issue i), that's specifically a
>>>>> reserved encoding, rather than an unallocated one, so it should read as
>>>>> zero, not undef. What are you running on?
>>>>
>>>> QDF2432. I'll investigate.
>>>
>>> FWIW, I think the same issue was reported on qemu, so we may want to
>>> handle this in the kernel anyway. We could either use alternatives on
>>> the register read or handle the undef.
>
> Alternatives would let us handle the QDF2432 case, but detecting the version of
> qemu doesn't sound right. I will put together a series to handle the undef.

More generally, it would leave us with the future possibility of having to 
work out whether to apply alternatives just to read an ID register, in order 
to work out whether we need to apply alternatives for some feature within it. 
I also can't even begin to fathom how such a thing would interact with the 
heterogeneous feature sanity-checking.

I think trapping the undef is the only viable approach.

Robin.

>> We could use parts of the mrs emulation patch [1] to emulate only the above
>> for kernel mode.
>
> Thanks, I will see what I can poach!
>
>
> James
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-03-03 19:03       ` Christopher Covington
  2016-03-03 19:19         ` Will Deacon
@ 2016-03-04 14:59         ` Mark Rutland
  2016-03-04 18:15           ` Christopher Covington
  1 sibling, 1 reply; 36+ messages in thread
From: Mark Rutland @ 2016-03-04 14:59 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 03, 2016 at 02:03:08PM -0500, Christopher Covington wrote:
> On 03/03/2016 01:27 PM, Robin Murphy wrote:
> > On 03/03/16 17:59, Christopher Covington wrote:
> 
> >>> --- a/arch/arm64/kernel/cpuinfo.c
> >>> +++ b/arch/arm64/kernel/cpuinfo.c
> >>> @@ -210,6 +210,7 @@ static void __cpuinfo_store_cpu(struct
> >>> cpuinfo_arm64 *info)
> >>>       info->reg_id_aa64isar1 = read_cpuid(SYS_ID_AA64ISAR1_EL1);
> >>>       info->reg_id_aa64mmfr0 = read_cpuid(SYS_ID_AA64MMFR0_EL1);
> >>>       info->reg_id_aa64mmfr1 = read_cpuid(SYS_ID_AA64MMFR1_EL1);
> >>> +    info->reg_id_aa64mmfr2 = read_cpuid(SYS_ID_AA64MMFR2_EL1);
> >>>       info->reg_id_aa64pfr0 = read_cpuid(SYS_ID_AA64PFR0_EL1);
> >>>       info->reg_id_aa64pfr1 = read_cpuid(SYS_ID_AA64PFR1_EL1);
> >>
> >> swapper[0]: undefined instruction: pc=ffffff800808d730
> >> Code: d5380702 d5380721 f9017c02 f9018001 (d5380742)
> >> Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
> >> Modules linked in:
> >> CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc6-next-20160303 #1
> >> Hardware name: (null) (DT)
> >> task: ffffff8008b2d980 ti: ffffff8008b20000 task.ti: ffffff8008b20000
> >> PC is at __cpuinfo_store_cpu+0x68/0x198
> >> LR is at cpuinfo_store_boot_cpu+0x28/0x50
> >>
> >> ffffff800808d730:       d5380742        mrs     x2, s3_0_c0_c7_2
> > 
> > Hmm, per table C5-6 in the ARMv8 ARM (issue i), that's specifically a
> > reserved encoding, rather than an unallocated one, so it should read as
> > zero, not undef. What are you running on?
> 
> QDF2432. I'll investigate.

Just to check, I take it you're running natively, and not under a
hypervisor?

Mark.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
  2016-03-04 14:59         ` Mark Rutland
@ 2016-03-04 18:15           ` Christopher Covington
  0 siblings, 0 replies; 36+ messages in thread
From: Christopher Covington @ 2016-03-04 18:15 UTC (permalink / raw)
  To: linux-arm-kernel



On March 4, 2016 9:59:28 AM EST, Mark Rutland <mark.rutland@arm.com> wrote:
>On Thu, Mar 03, 2016 at 02:03:08PM -0500, Christopher Covington wrote:
>> On 03/03/2016 01:27 PM, Robin Murphy wrote:
>> > On 03/03/16 17:59, Christopher Covington wrote:
>> 
>> >>> --- a/arch/arm64/kernel/cpuinfo.c
>> >>> +++ b/arch/arm64/kernel/cpuinfo.c
>> >>> @@ -210,6 +210,7 @@ static void __cpuinfo_store_cpu(struct
>> >>> cpuinfo_arm64 *info)
>> >>>       info->reg_id_aa64isar1 = read_cpuid(SYS_ID_AA64ISAR1_EL1);
>> >>>       info->reg_id_aa64mmfr0 = read_cpuid(SYS_ID_AA64MMFR0_EL1);
>> >>>       info->reg_id_aa64mmfr1 = read_cpuid(SYS_ID_AA64MMFR1_EL1);
>> >>> +    info->reg_id_aa64mmfr2 = read_cpuid(SYS_ID_AA64MMFR2_EL1);
>> >>>       info->reg_id_aa64pfr0 = read_cpuid(SYS_ID_AA64PFR0_EL1);
>> >>>       info->reg_id_aa64pfr1 = read_cpuid(SYS_ID_AA64PFR1_EL1);
>> >>
>> >> swapper[0]: undefined instruction: pc=ffffff800808d730
>> >> Code: d5380702 d5380721 f9017c02 f9018001 (d5380742)
>> >> Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
>> >> Modules linked in:
>> >> CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc6-next-20160303 #1
>> >> Hardware name: (null) (DT)
>> >> task: ffffff8008b2d980 ti: ffffff8008b20000 task.ti:
>ffffff8008b20000
>> >> PC is at __cpuinfo_store_cpu+0x68/0x198
>> >> LR is at cpuinfo_store_boot_cpu+0x28/0x50
>> >>
>> >> ffffff800808d730:       d5380742        mrs     x2, s3_0_c0_c7_2
>> > 
>> > Hmm, per table C5-6 in the ARMv8 ARM (issue i), that's specifically
>a
>> > reserved encoding, rather than an unallocated one, so it should
>read as
>> > zero, not undef. What are you running on?
>> 
>> QDF2432. I'll investigate.
>
>Just to check, I take it you're running natively, and not under a
>hypervisor?

Yes, this was running natively. I haven't yet been able to confirm things, but I'll let you know when I do.

Cov

-- 
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

Sent from my Snapdragon powered Android device with K-9 Mail. Please excuse my brevity.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-02-05 14:58 [PATCH v2 0/5] arm64: kernel: Add support for User Access Override James Morse
                   ` (6 preceding siblings ...)
  2016-02-18 18:03 ` Catalin Marinas
@ 2016-03-07 16:43 ` James Morse
  2016-03-07 17:23   ` Russell King - ARM Linux
  2016-03-07 17:38   ` Catalin Marinas
  7 siblings, 2 replies; 36+ messages in thread
From: James Morse @ 2016-03-07 16:43 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Catalin,

I've just spotted UAO causes the test_user_copy module (CONFIG_TEST_USER_COPY)
to fail. Who to blame is up for discussion. The test is passing a user pointer
as the 'to' field of copy_from_user(), which it expects to fail gracefully:

lib/test_user_copy.c:75
>	/* Invalid usage: none of these should succeed. */
[ ... ]
> 	ret |= test(!copy_from_user(bad_usermem, (char __user *)kmem,
>				    PAGE_SIZE),
>		    "illegal reversed copy_from_user passed");
>

access_ok() catches the "(char __user *)kmem", causing copy_from_user() to pass
bad_usermem to memset():

arch/arm64/include/asm/uaccess.h:279
>	if (access_ok(VERIFY_READ, from, n))
>		n = __copy_from_user(to, from, n);
>	else /* security hole - plug it */
>		memset(to, 0, n);

This (correctly) trips UAO's "Accessing user space memory outside uaccess.h
routines" message, which is a little confusing to debug, and stops the rest of
the module's tests from being run.

As far as I can see, this would only affect arm64. I can't find an equivalent
memset() for x86_64.

The below ugly hack [0] handles this more gracefully. I can send this as a fix
sooner or later if you think it's the right thing to do.



Thanks,

James

[0]
-----------------%<-----------------
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 0685d74572af..049a82e8dd9e 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -278,8 +278,8 @@ static inline unsigned long __must_check copy_from_user(void
*to, const void __u
 {
        if (access_ok(VERIFY_READ, from, n))
                n = __copy_from_user(to, from, n);
-       else /* security hole - plug it */
-               memset(to, 0, n);
+       else if ((unsigned long)to > USER_DS) /* swapped from/to args? */
+               memset(to, 0, n); /* security hole - plug it */
        return n;
 }

-----------------%<-----------------

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-03-07 16:43 ` James Morse
@ 2016-03-07 17:23   ` Russell King - ARM Linux
  2016-03-07 17:40     ` James Morse
  2016-03-07 17:38   ` Catalin Marinas
  1 sibling, 1 reply; 36+ messages in thread
From: Russell King - ARM Linux @ 2016-03-07 17:23 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Mar 07, 2016 at 04:43:19PM +0000, James Morse wrote:
> Hi Catalin,
> 
> I've just spotted UAO causes the test_user_copy module (CONFIG_TEST_USER_COPY)
> to fail. Who to blame is up for discussion. The test is passing a user pointer
> as the 'to' field of copy_from_user(), which it expects to fail gracefully:
> 
> lib/test_user_copy.c:75
> >	/* Invalid usage: none of these should succeed. */
> [ ... ]
> > 	ret |= test(!copy_from_user(bad_usermem, (char __user *)kmem,
> >				    PAGE_SIZE),
> >		    "illegal reversed copy_from_user passed");
> >
> 
> access_ok() catches the "(char __user *)kmem", causing copy_from_user() to pass
> bad_usermem to memset():
> 
> arch/arm64/include/asm/uaccess.h:279
> >	if (access_ok(VERIFY_READ, from, n))
> >		n = __copy_from_user(to, from, n);
> >	else /* security hole - plug it */
> >		memset(to, 0, n);
> 
> This (correctly) trips UAO's "Accessing user space memory outside uaccess.h
> routines" message, which is a little confusing to debug, and stops the rest of
> the module's tests from being run.
> 
> As far as I can see, this would only affect arm64. I can't find an equivalent
> memset() for x86_64.

I don't think you've looked hard enough. :)

arch/x86/lib/usercopy_32.c:

unsigned long _copy_from_user(void *to, const void __user *from, unsigned n)
{
        if (access_ok(VERIFY_READ, from, n))
                n = __copy_from_user(to, from, n);
        else
                memset(to, 0, n);
        return n;
}
EXPORT_SYMBOL(_copy_from_user);

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-03-07 16:43 ` James Morse
  2016-03-07 17:23   ` Russell King - ARM Linux
@ 2016-03-07 17:38   ` Catalin Marinas
  2016-03-07 20:54     ` Kees Cook
  1 sibling, 1 reply; 36+ messages in thread
From: Catalin Marinas @ 2016-03-07 17:38 UTC (permalink / raw)
  To: linux-arm-kernel

(cc'ing Kees)

On Mon, Mar 07, 2016 at 04:43:19PM +0000, James Morse wrote:
> I've just spotted UAO causes the test_user_copy module (CONFIG_TEST_USER_COPY)
> to fail. Who to blame is up for discussion. The test is passing a user pointer
> as the 'to' field of copy_from_user(), which it expects to fail gracefully:
> 
> lib/test_user_copy.c:75
> >	/* Invalid usage: none of these should succeed. */
> [ ... ]
> > 	ret |= test(!copy_from_user(bad_usermem, (char __user *)kmem,
> >				    PAGE_SIZE),
> >		    "illegal reversed copy_from_user passed");
> >
> 
> access_ok() catches the "(char __user *)kmem", causing copy_from_user() to pass
> bad_usermem to memset():
> 
> arch/arm64/include/asm/uaccess.h:279
> >	if (access_ok(VERIFY_READ, from, n))
> >		n = __copy_from_user(to, from, n);
> >	else /* security hole - plug it */
> >		memset(to, 0, n);
> 
> This (correctly) trips UAO's "Accessing user space memory outside uaccess.h
> routines" message, which is a little confusing to debug, and stops the rest of
> the module's tests from being run.

I suggest we don't do anything on arch/arm64 (or arch/arm), I just
consider the test to be broken. The semantics of copy_from_user() don't
say anything about the safety checks on the destination pointer, this is
supposed to be a valid kernel address. The test assumes that if the
source pointer is invalid, the copy_from_user() routine should not touch
the destination.

A better test would be to use a destination buffer filled with a poison
value and checked after the operation.
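
The idiom would be something like this (illustrative only; POISON and the
buffer name are placeholders, memchr_inv() is the helper from lib/string.c):

	memset(buf, POISON, PAGE_SIZE);
	/* ... the copy_*_user() call that is expected to fail ... */
	ret |= test(memchr_inv(buf, POISON, PAGE_SIZE) != NULL,
		    "destination was written by a copy that should have failed");

i.e. check what actually landed in the buffer rather than relying on the
return value, or on the bogus pointer faulting in a particular way.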

-- 
Catalin

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-03-07 17:23   ` Russell King - ARM Linux
@ 2016-03-07 17:40     ` James Morse
  2016-03-07 17:51       ` Russell King - ARM Linux
  0 siblings, 1 reply; 36+ messages in thread
From: James Morse @ 2016-03-07 17:40 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/03/16 17:23, Russell King - ARM Linux wrote:
> On Mon, Mar 07, 2016 at 04:43:19PM +0000, James Morse wrote:
>> As far as I can see, this would only affect arm64. I can't find an equivalent
>> memset() for x86_64.
> 
> I don't think you've looked hard enough. :)

Heh, Thanks! I ignored the 32bit code and instead got lost in the maze of
underscores and alternative-strings for the 64 bit path.

Having seen that path, I've now found:
>	/* If the destination is a kernel buffer, we always clear the end */
>	if (!__addr_ok(to))
>		memset(to, 0, len);

in arch/x86/lib/usercopy_64.c:copy_user_handle_tail(), which may be the x86_64
equivalent, or I may be lost in the maze again.


Thanks!

James

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-03-07 17:40     ` James Morse
@ 2016-03-07 17:51       ` Russell King - ARM Linux
  0 siblings, 0 replies; 36+ messages in thread
From: Russell King - ARM Linux @ 2016-03-07 17:51 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Mar 07, 2016 at 05:40:23PM +0000, James Morse wrote:
> On 07/03/16 17:23, Russell King - ARM Linux wrote:
> > On Mon, Mar 07, 2016 at 04:43:19PM +0000, James Morse wrote:
> >> As far as I can see, this would only affect arm64. I can't find an equivalent
> >> memset() for x86_64.
> > 
> > I don't think you've looked hard enough. :)
> 
> Heh, Thanks! I ignored the 32bit code and instead got lost in the maze of
> underscores and alternative-strings for the 64 bit path.
> 
> Having seen that path, I've now found:
> >	/* If the destination is a kernel buffer, we always clear the end */
> >	if (!__addr_ok(to))
> >		memset(to, 0, len);
> 
> in arch/x86/lib/usercopy_64.c:copy_user_handle_tail(), which may be the x86_64
> equivalent, or I may be lost in the maze again.

x86_64 has the function in assembler:

/* Standard copy_from_user with segment limit checking */
ENTRY(_copy_from_user)
        GET_THREAD_INFO(%rax)
        movq %rsi,%rcx
        addq %rdx,%rcx
        jc bad_from_user
        cmpq TI_addr_limit(%rax),%rcx
        ja bad_from_user
        ALTERNATIVE_2 "jmp copy_user_generic_unrolled",         \
                      "jmp copy_user_generic_string",           \
                      X86_FEATURE_REP_GOOD,                     \
                      "jmp copy_user_enhanced_fast_string",     \
                      X86_FEATURE_ERMS
ENDPROC(_copy_from_user)

        .section .fixup,"ax"
        /* must zero dest */
ENTRY(bad_from_user)
bad_from_user:
        movl %edx,%ecx
        xorl %eax,%eax
        rep
        stosb
bad_to_user:
        movl %edx,%eax
        ret
ENDPROC(bad_from_user)

The memset() is the 4 instructions after bad_from_user:.  This will
be triggered if 'from' is an invalid address.

However, you're right that we end up in copy_user_handle_tail() if
we fault, which will try to copy as much data and then memset() the
remainder.  Since this is used for both copy_from_user() and
copy_to_user() I think the __addr_ok() is just a side-effect of
avoiding memsetting for the copy_to_user() case.

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-03-07 17:38   ` Catalin Marinas
@ 2016-03-07 20:54     ` Kees Cook
  2016-03-08 17:19       ` Catalin Marinas
  0 siblings, 1 reply; 36+ messages in thread
From: Kees Cook @ 2016-03-07 20:54 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Mar 7, 2016 at 9:38 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
> (cc'ing Kees)
>
> On Mon, Mar 07, 2016 at 04:43:19PM +0000, James Morse wrote:
>> I've just spotted UAO causes the test_user_copy module (CONFIG_TEST_USER_COPY)
>> to fail. Who to blame is up for discussion. The test is passing a user pointer
>> as the 'to' field of copy_from_user(), which it expects to fail gracefully:
>>
>> lib/test_user_copy.c:75
>> >     /* Invalid usage: none of these should succeed. */
>> [ ... ]
>> >     ret |= test(!copy_from_user(bad_usermem, (char __user *)kmem,
>> >                                 PAGE_SIZE),
>> >                 "illegal reversed copy_from_user passed");
>> >
>>
>> access_ok() catches the "(char __user *)kmem", causing copy_from_user() to pass
>> bad_usermem to memset():
>>
>> arch/arm64/include/asm/uaccess.h:279
>> >     if (access_ok(VERIFY_READ, from, n))
>> >             n = __copy_from_user(to, from, n);
>> >     else /* security hole - plug it */
>> >             memset(to, 0, n);
>>
>> This (correctly) trips UAO's "Accessing user space memory outside uaccess.h
>> routines" message, which is a little confusing to debug, and stops the rest of
>> the module's tests from being run.
>
> I suggest we don't do anything on arch/arm64 (or arch/arm), I just
> consider the test to be broken. The semantics of copy_from_user() don't
> say anything about the safety checks on the destination pointer, this is
> supposed to be a valid kernel address. The test assumes that if the
> source pointer is invalid, the copy_from_user() routine should not touch
> the destination.

Hmmm, I'd prefer that copy_*_user was checking both src and
destination. That's what these tests are checking for.

> A better test would be to use a destination buffer filled with a poison
> value and checked after the operation.

Well, I could certainly add that check, but I'd prefer that a bogus
destination be rejected.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-03-07 20:54     ` Kees Cook
@ 2016-03-08 17:19       ` Catalin Marinas
  2016-03-08 17:39         ` Kees Cook
  0 siblings, 1 reply; 36+ messages in thread
From: Catalin Marinas @ 2016-03-08 17:19 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Mar 07, 2016 at 12:54:33PM -0800, Kees Cook wrote:
> On Mon, Mar 7, 2016 at 9:38 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > (cc'ing Kees)
> >
> > On Mon, Mar 07, 2016 at 04:43:19PM +0000, James Morse wrote:
> >> I've just spotted UAO causes the test_user_copy module (CONFIG_TEST_USER_COPY)
> >> to fail. Who to blame is up for discussion. The test is passing a user pointer
> >> as the 'to' field of copy_from_user(), which it expects to fail gracefully:
> >>
> >> lib/test_user_copy.c:75
> >> >     /* Invalid usage: none of these should succeed. */
> >> [ ... ]
> >> >     ret |= test(!copy_from_user(bad_usermem, (char __user *)kmem,
> >> >                                 PAGE_SIZE),
> >> >                 "illegal reversed copy_from_user passed");
> >> >
> >>
> >> access_ok() catches the "(char __user *)kmem", causing copy_from_user() to pass
> >> bad_usermem to memset():
> >>
> >> arch/arm64/include/asm/uaccess.h:279
> >> >     if (access_ok(VERIFY_READ, from, n))
> >> >             n = __copy_from_user(to, from, n);
> >> >     else /* security hole - plug it */
> >> >             memset(to, 0, n);
> >>
> >> This (correctly) trips UAO's "Accessing user space memory outside uaccess.h
> >> routines" message, which is a little confusing to debug, and stops the rest of
> >> the module's tests from being run.
> >
> > I suggest we don't do anything on arch/arm64 (or arch/arm), I just
> > consider the test to be broken. The semantics of copy_from_user() don't
> > say anything about the safety checks on the destination pointer, this is
> > supposed to be a valid kernel address. The test assumes that if the
> > source pointer is invalid, the copy_from_user() routine should not touch
> > the destination.
> 
> Hmmm, I'd prefer that copy_*_user was checking both src and
> destination. That's what these tests are checking for.

If the kernel pointer (source or destination, depending on the function)
is not valid, I would rather panic the kernel (maybe as a result of a
data abort) than silently returning a non-zero value from
copy_from_user() which doesn't even tell whether the source or
destination pointer is wrong.

The copy_*_user functions are indeed meant to check the validity of user
pointer arguments and safely handle aborts (extable annotations) since
these values are usually provided by user space. But the kernel pointer
arguments are under the control of the kernel, so IMO passing a corrupt
value is a serious kernel/driver bug.

-- 
Catalin

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-03-08 17:19       ` Catalin Marinas
@ 2016-03-08 17:39         ` Kees Cook
  2016-03-08 18:22           ` Catalin Marinas
  0 siblings, 1 reply; 36+ messages in thread
From: Kees Cook @ 2016-03-08 17:39 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 8, 2016 at 9:19 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Mon, Mar 07, 2016 at 12:54:33PM -0800, Kees Cook wrote:
>> On Mon, Mar 7, 2016 at 9:38 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> > (cc'ing Kees)
>> >
>> > On Mon, Mar 07, 2016 at 04:43:19PM +0000, James Morse wrote:
>> >> I've just spotted UAO causes the test_user_copy module (CONFIG_TEST_USER_COPY)
>> >> to fail. Who to blame is up for discussion. The test is passing a user pointer
>> >> as the 'to' field of copy_from_user(), which it expects to fail gracefully:
>> >>
>> >> lib/test_user_copy.c:75
>> >> >     /* Invalid usage: none of these should succeed. */
>> >> [ ... ]
>> >> >     ret |= test(!copy_from_user(bad_usermem, (char __user *)kmem,
>> >> >                                 PAGE_SIZE),
>> >> >                 "illegal reversed copy_from_user passed");
>> >> >
>> >>
>> >> access_ok() catches the "(char __user *)kmem", causing copy_from_user() to pass
>> >> bad_usermem to memset():
>> >>
>> >> arch/arm64/include/asm/uaccess.h:279
>> >> >     if (access_ok(VERIFY_READ, from, n))
>> >> >             n = __copy_from_user(to, from, n);
>> >> >     else /* security hole - plug it */
>> >> >             memset(to, 0, n);
>> >>
>> >> This (correctly) trips UAO's "Accessing user space memory outside uaccess.h
>> >> routines" message, which is a little confusing to debug, and stops the rest of
>> >> the module's tests from being run.
>> >
>> > I suggest we don't do anything on arch/arm64 (or arch/arm), I just
>> > consider the test to be broken. The semantics of copy_from_user() don't
>> > say anything about the safety checks on the destination pointer, this is
>> > supposed to be a valid kernel address. The test assumes that if the
>> > source pointer is invalid, the copy_from_user() routine should not touch
>> > the destination.
>>
>> Hmmm, I'd prefer that copy_*_user was checking both src and
>> destination. That's what these tests are checking for.
>
> If the kernel pointer (source or destination, depending on the function)
> is not valid, I would rather panic the kernel (maybe as a result of a
> data abort) than silently returning a non-zero value from
> copy_from_user() which doesn't even tell whether the source or
> destination pointer is wrong.
>
> The copy_*_user functions are indeed meant to check the validity of user
> pointer arguments and safely handle aborts (extable annotations) since
> these values are usually provided by user space. But the kernel pointer
> arguments are under the control of the kernel, so IMO passing a corrupt
> value is a serious kernel/driver bug.

I'm totally fine with that, though I suspect Linus would not like this
(perhaps just Oops instead of panic). Regardless, if that was changed,
we could move this entire test into lkdtm (where system panics are
considered a "success").

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-03-08 17:39         ` Kees Cook
@ 2016-03-08 18:22           ` Catalin Marinas
  2016-03-08 18:27             ` Kees Cook
  0 siblings, 1 reply; 36+ messages in thread
From: Catalin Marinas @ 2016-03-08 18:22 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 08, 2016 at 09:39:27AM -0800, Kees Cook wrote:
> On Tue, Mar 8, 2016 at 9:19 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Mon, Mar 07, 2016 at 12:54:33PM -0800, Kees Cook wrote:
> >> On Mon, Mar 7, 2016 at 9:38 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
> >> > (cc'ing Kees)
> >> >
> >> > On Mon, Mar 07, 2016 at 04:43:19PM +0000, James Morse wrote:
> >> >> I've just spotted UAO causes the test_user_copy module (CONFIG_TEST_USER_COPY)
> >> >> to fail. Who to blame is up for discussion. The test is passing a user pointer
> >> >> as the 'to' field of copy_from_user(), which it expects to fail gracefully:
> >> >>
> >> >> lib/test_user_copy.c:75
> >> >> >     /* Invalid usage: none of these should succeed. */
> >> >> [ ... ]
> >> >> >     ret |= test(!copy_from_user(bad_usermem, (char __user *)kmem,
> >> >> >                                 PAGE_SIZE),
> >> >> >                 "illegal reversed copy_from_user passed");
> >> >> >
> >> >>
> >> >> access_ok() catches the "(char __user *)kmem", causing copy_from_user() to pass
> >> >> bad_usermem to memset():
> >> >>
> >> >> arch/arm64/include/asm/uaccess.h:279
> >> >> >     if (access_ok(VERIFY_READ, from, n))
> >> >> >             n = __copy_from_user(to, from, n);
> >> >> >     else /* security hole - plug it */
> >> >> >             memset(to, 0, n);
> >> >>
> >> >> This (correctly) trips UAO's "Accessing user space memory outside uaccess.h
> >> >> routines" message, which is a little confusing to debug, and stops the rest of
> >> >> the module's tests from being run.
> >> >
> >> > I suggest we don't do anything on arch/arm64 (or arch/arm), I just
> >> > consider the test to be broken. The semantics of copy_from_user() don't
> >> > say anything about the safety checks on the destination pointer, this is
> >> > supposed to be a valid kernel address. The test assumes that if the
> >> > source pointer is invalid, the copy_from_user() routine should not touch
> >> > the destination.
> >>
> >> Hmmm, I'd prefer that copy_*_user was checking both src and
> >> destination. That's what these tests are checking for.
> >
> > If the kernel pointer (source or destination, depending on the function)
> > is not valid, I would rather panic the kernel (maybe as a result of a
> > data abort) than silently returning a non-zero value from
> > copy_from_user() which doesn't even tell whether the source or
> > destination pointer is wrong.
> >
> > The copy_*_user functions are indeed meant to check the validity of user
> > pointer arguments and safely handle aborts (extable annotations) since
> > these values are usually provided by user space. But the kernel pointer
> > arguments are under the control of the kernel, so IMO passing a corrupt
> > value is a serious kernel/driver bug.
> 
> I'm totally fine with that, though I suspect Linus would not like this
> (perhaps just Oops instead of panic).

That's what we get on arm64 with PAN enabled (oops).

> Regardless, if that was changed, we could move this entire test into
> lkdtm (where system panics are considered a "success").

The only downside is that not all architectures behave in the same way
w.r.t. the copy_*_user() routines, so the test may hit surprises on
architectures where the oops is not triggered.

Anyway, I think the test should also check the invalid
source/destination independently (possibly in combination with
set_fs(KERNEL_DS)).
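
Something along these lines, as a rough and untested sketch ('kmem' and
'ret' as in lib/test_user_copy.c; 'bad_kern_src' is a made-up name for
an address assumed to be unmapped):

        mm_segment_t old_fs = get_fs();

        /*
         * With the address limit raised, access_ok() no longer rejects
         * the bad source pointer, so copy_from_user()'s own fault
         * handling is exercised and should give a graceful non-zero
         * return instead of falling back to the memset().
         */
        set_fs(KERNEL_DS);
        ret |= test(!copy_from_user(kmem, (char __user *)bad_kern_src,
                                    PAGE_SIZE),
                    "copy_from_user from a faulting source passed");
        set_fs(old_fs);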

-- 
Catalin

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] arm64: kernel: Add support for User Access Override
  2016-03-08 18:22           ` Catalin Marinas
@ 2016-03-08 18:27             ` Kees Cook
  0 siblings, 0 replies; 36+ messages in thread
From: Kees Cook @ 2016-03-08 18:27 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 8, 2016 at 10:22 AM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> On Tue, Mar 08, 2016 at 09:39:27AM -0800, Kees Cook wrote:
>> On Tue, Mar 8, 2016 at 9:19 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> > On Mon, Mar 07, 2016 at 12:54:33PM -0800, Kees Cook wrote:
>> >> On Mon, Mar 7, 2016 at 9:38 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> >> > (cc'ing Kees)
>> >> >
>> >> > On Mon, Mar 07, 2016 at 04:43:19PM +0000, James Morse wrote:
>> >> >> I've just spotted UAO causes the test_user_copy module (CONFIG_TEST_USER_COPY)
>> >> >> to fail. Who to blame is up for discussion. The test is passing a user pointer
>> >> >> as the 'to' field of copy_from_user(), which it expects to fail gracefully:
>> >> >>
>> >> >> lib/test_user_copy.c:75
>> >> >> >     /* Invalid usage: none of these should succeed. */
>> >> >> [ ... ]
>> >> >> >     ret |= test(!copy_from_user(bad_usermem, (char __user *)kmem,
>> >> >> >                                 PAGE_SIZE),
>> >> >> >                 "illegal reversed copy_from_user passed");
>> >> >> >
>> >> >>
>> >> >> access_ok() catches the "(char __user *)kmem", causing copy_from_user() to pass
>> >> >> bad_usermem to memset():
>> >> >>
>> >> >> arch/arm64/include/asm/uaccess.h:279
>> >> >> >     if (access_ok(VERIFY_READ, from, n))
>> >> >> >             n = __copy_from_user(to, from, n);
>> >> >> >     else /* security hole - plug it */
>> >> >> >             memset(to, 0, n);
>> >> >>
>> >> >> This (correctly) trips UAO's "Accessing user space memory outside uaccess.h
>> >> >> routines" message, which is a little confusing to debug, and stops the rest of
>> >> >> the module's tests from being run.
>> >> >
>> >> > I suggest we don't do anything on arch/arm64 (or arch/arm), I just
>> >> > consider the test to be broken. The semantics of copy_from_user() don't
>> >> > say anything about the safety checks on the destination pointer, this is
>> >> > supposed to be a valid kernel address. The test assumes that if the
>> >> > source pointer is invalid, the copy_from_user() routine should not touch
>> >> > the destination.
>> >>
>> >> Hmmm, I'd prefer that copy_*_user was checking both src and
>> >> destination. That's what these tests are checking for.
>> >
>> > If the kernel pointer (source or destination, depending on the function)
>> > is not valid, I would rather panic the kernel (maybe as a result of a
>> > data abort) than silently returning a non-zero value from
>> > copy_from_user() which doesn't even tell whether the source or
>> > destination pointer is wrong.
>> >
>> > The copy_*_user functions are indeed meant to check the validity of user
>> > pointer arguments and safely handle aborts (extable annotations) since
>> > these values are usually provided by user space. But the kernel pointer
>> > arguments are under the control of the kernel, so IMO passing a corrupt
>> > value is a serious kernel/driver bug.
>>
>> I'm totally fine with that, though I suspect Linus would not like this
>> (perhaps just Oops instead of panic).
>
> That's what we get on arm64 with PAN enabled (oops).
>
>> Regardless, if that was changed, we could move this entire test into
>> lkdtm (where system panics are considered a "success").
>
> The only downside is that not all architectures behave in the same way
> w.r.t. the copy_*_user() routines, so the test may hit surprises on
> architectures where the oops is not triggered.

Right. I think if the tests were moved to lkdtm, we could do the
checks in a few stages: lkdtm itself would generate the Oops whenever
copy_*_user() correctly errored on (or ignored) the bad combination
without Oopsing on its own.
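
A hypothetical sketch of such a staged check (the hook name, structure
and setup are made up for illustration, not how lkdtm is actually laid
out):

        static void lkdtm_USERCOPY_REVERSED(void)
        {
                unsigned long user_addr;
                char *kmem;

                kmem = kmalloc(PAGE_SIZE, GFP_KERNEL);
                if (!kmem)
                        return;

                user_addr = vm_mmap(NULL, 0, PAGE_SIZE,
                                    PROT_READ | PROT_WRITE,
                                    MAP_ANONYMOUS | MAP_PRIVATE, 0);
                if (user_addr >= TASK_SIZE)
                        goto free_kmem;

                /*
                 * Reversed arguments: user address as the kernel
                 * destination, kernel pointer as the user source.  On
                 * arm64 with PAN/UAO the copy itself Oopses; elsewhere
                 * it should fail gracefully and we raise the Oops
                 * instead, so either way lkdtm counts it as a
                 * "success".  A zero return would mean the bogus copy
                 * went through: no Oops, reported as a failure.
                 */
                if (copy_from_user((char *)user_addr,
                                   (char __user *)kmem, PAGE_SIZE))
                        BUG();

                vm_munmap(user_addr, PAGE_SIZE);
        free_kmem:
                kfree(kmem);
        }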

> Anyway, I think the test should also check the invalid
> source/destination independently (possibly in combination with
> set_fs(KERNEL_DS)).

I'm open to whatever you like. :)

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2016-03-08 18:27 UTC | newest]

Thread overview: 36+ messages
2016-02-05 14:58 [PATCH v2 0/5] arm64: kernel: Add support for User Access Override James Morse
2016-02-05 14:58 ` [PATCH v2 1/5] arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro James Morse
2016-02-05 14:58 ` [PATCH v2 2/5] arm64: add ARMv8.2 id_aa64mmfr2 boiler plate James Morse
2016-03-03 17:59   ` Christopher Covington
2016-03-03 18:27     ` Robin Murphy
2016-03-03 19:03       ` Christopher Covington
2016-03-03 19:19         ` Will Deacon
2016-03-04 10:20           ` Suzuki K. Poulose
2016-03-04 13:37             ` James Morse
2016-03-04 13:54               ` Robin Murphy
2016-03-04 14:59         ` Mark Rutland
2016-03-04 18:15           ` Christopher Covington
2016-02-05 14:58 ` [PATCH v2 3/5] arm64: kernel: Add support for User Access Override James Morse
2016-02-18 12:26   ` Catalin Marinas
2016-02-05 14:58 ` [PATCH v2 4/5] arm64: cpufeature: Test 'matches' pointer to find the end of the list James Morse
2016-02-05 14:58 ` [PATCH v2 5/5] arm64: kernel: Don't toggle PAN on systems with UAO James Morse
2016-02-18 14:36   ` Catalin Marinas
2016-02-18 14:43     ` James Morse
2016-02-05 15:40 ` [PATCH v2 0/5] arm64: kernel: Add support for User Access Override Arnd Bergmann
2016-02-09  9:47   ` Will Deacon
2016-02-18 18:03 ` Catalin Marinas
2016-02-19 15:38   ` Peter Maydell
2016-02-19 16:46     ` Catalin Marinas
2016-02-19 16:54       ` Peter Maydell
2016-02-19 16:57         ` Ard Biesheuvel
2016-02-19 17:03           ` Peter Maydell
2016-03-07 16:43 ` James Morse
2016-03-07 17:23   ` Russell King - ARM Linux
2016-03-07 17:40     ` James Morse
2016-03-07 17:51       ` Russell King - ARM Linux
2016-03-07 17:38   ` Catalin Marinas
2016-03-07 20:54     ` Kees Cook
2016-03-08 17:19       ` Catalin Marinas
2016-03-08 17:39         ` Kees Cook
2016-03-08 18:22           ` Catalin Marinas
2016-03-08 18:27             ` Kees Cook
