* [PATCH v7 0/6] arm64: add Armv8.3 pointer authentication enhancements
@ 2020-09-09 14:07 Amit Daniel Kachhap
  2020-09-09 14:07 ` [PATCH v7 1/6] arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions Amit Daniel Kachhap
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-09-09 14:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

This patch series adds support for the Armv8.3 pointer authentication
enhancements, which are mandatory for Armv8.6 and optional for Armv8.3.
These features are:

 * Enhanced PAC generation algorithm (ARMv8.3-PAuth2).
 * Fault generation when an authenticate instruction fails (ARMv8.3-FPAC).

More details can be found here [1].

Changes since v6 [2]:
* Grouped the ptrauth combined branch instructions as suggested by Will.

Changes since v5 [3]:
* A few comments modified as suggested by Dave and Suzuki.
* Added Reviewed-by's.

Changes since v4 [4]:
* New patch "arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions".
  This fixes a uprobe crash with ARMv8.3-PAuth combined instructions.
* New patch "arm64: traps: Allow force_signal_inject to pass esr error code".
  This is a preparatory patch for ARMv8.3-FPAC fault exception handling.
* Removed caching of the boot CPU address authentication cpufeature levels in
  static variables, as suggested by Dave and Suzuki.
* Used the existing force_signal_inject function to deliver the ptrauth fault
  signal, as suggested by Dave.
* Commit log changes.

Changes since v3 [5]:
* Added a new patch "arm64: kprobe: clarify the comment of steppable hint instructions"
  as suggested in the last iteration.
* Removed the ptrauth fault handler from the el0 compat handler, as pointed
  out by Dave.
* Mentioned the new feature names explicitly as ARMv8.3-FPAC and ARMv8.3-PAuth2,
  as per the Armv8-A reference manual.
* Commit log cleanups.

Changes since v2 [6]:
* Dropped the patch "arm64: cpufeature: Fix the handler for address authentication".
* Added a new matching function for address authentication, as the generic
  matching function has_cpuid_feature is specific to LOWER_SAFE
  features. This was suggested by Suzuki [3].
* Disabled probing of authenticate ptrauth instructions, in line with Mark
  Brown's merged changes whitelisting hint instructions.

This series is based on kernel version v5.9-rc3.

Regards,
Amit

[1]: https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/arm-architecture-developments-armv8-6-a
[2]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-September/598163.html 
[3]: https://lore.kernel.org/linux-arm-kernel/1597734671-23407-1-git-send-email-amit.kachhap@arm.com/
[4]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-July/584393.html
[5]: https://lore.kernel.org/linux-arm-kernel/1592457029-18547-1-git-send-email-amit.kachhap@arm.com/
[6]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-April/723751.html

Amit Daniel Kachhap (6):
  arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
  arm64: traps: Allow force_signal_inject to pass esr error code
  arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements
  arm64: cpufeature: Modify address authentication cpufeature to exact
  arm64: kprobe: disable probe of fault prone ptrauth instruction
  arm64: kprobe: clarify the comment of steppable hint instructions

 arch/arm64/include/asm/esr.h           |  4 ++-
 arch/arm64/include/asm/exception.h     |  1 +
 arch/arm64/include/asm/insn.h          |  4 +++
 arch/arm64/include/asm/sysreg.h        | 24 +++++++++-----
 arch/arm64/include/asm/traps.h         |  2 +-
 arch/arm64/kernel/cpufeature.c         | 46 +++++++++++++++++++++-----
 arch/arm64/kernel/entry-common.c       | 21 ++++++++++++
 arch/arm64/kernel/fpsimd.c             |  4 +--
 arch/arm64/kernel/insn.c               | 11 +++---
 arch/arm64/kernel/probes/decode-insn.c |  9 +++--
 arch/arm64/kernel/traps.c              | 26 +++++++++++----
 11 files changed, 114 insertions(+), 38 deletions(-)

-- 
2.17.1



* [PATCH v7 1/6] arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
  2020-09-09 14:07 [PATCH v7 0/6] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
@ 2020-09-09 14:07 ` Amit Daniel Kachhap
  2020-09-09 14:07 ` [PATCH v7 2/6] arm64: traps: Allow force_signal_inject to pass esr error code Amit Daniel Kachhap
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-09-09 14:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

Currently the ARMv8.3-PAuth combined branch instructions (braa, retaa,
etc.) are not simulated for out-of-line execution with a handler. Hence
uprobing such instructions leads to repeated kernel warnings, as they are
not explicitly checked and fall into the INSN_GOOD category. Other combined
instructions like LDRAA and LDRAB can still be probed.

The issue with the combined branch instructions is fixed by adding
group definitions for all such instructions and rejecting their probes.
The instruction groups added are br_auth (braa, brab, braaz and brabz),
blr_auth (blraa, blrab, blraaz and blrabz), ret_auth (retaa and retab) and
eret_auth (eretaa and eretab).
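
As a rough illustration (not part of this patch), each
__AARCH64_INSN_FUNCS(name, mask, value) entry expands to a helper that tests
(insn & mask) == value, so a single check covers the whole group, e.g. for
br_auth:

	/* sketch only; equivalent in spirit to the generated
	 * aarch64_insn_is_br_auth() helper, with mask/value taken from the
	 * br_auth entry in the hunk below */
	static inline bool insn_is_br_auth_sketch(u32 insn)
	{
		return (insn & 0xFEFFF800) == 0xD61F0800;
	}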

Warning log:
 WARNING: CPU: 0 PID: 156 at arch/arm64/kernel/probes/uprobes.c:182 uprobe_single_step_handler+0x34/0x50
 Modules linked in:
 CPU: 0 PID: 156 Comm: func Not tainted 5.9.0-rc3 #188
 Hardware name: Foundation-v8A (DT)
 pstate: 804003c9 (Nzcv DAIF +PAN -UAO BTYPE=--)
 pc : uprobe_single_step_handler+0x34/0x50
 lr : single_step_handler+0x70/0xf8
 sp : ffff800012af3e30
 x29: ffff800012af3e30 x28: ffff000878723b00
 x27: 0000000000000000 x26: 0000000000000000
 x25: 0000000000000000 x24: 0000000000000000
 x23: 0000000060001000 x22: 00000000cb000022
 x21: ffff800012065ce8 x20: ffff800012af3ec0
 x19: ffff800012068d50 x18: 0000000000000000
 x17: 0000000000000000 x16: 0000000000000000
 x15: 0000000000000000 x14: 0000000000000000
 x13: 0000000000000000 x12: 0000000000000000
 x11: 0000000000000000 x10: 0000000000000000
 x9 : ffff800010085c90 x8 : 0000000000000000
 x7 : 0000000000000000 x6 : ffff80001205a9c8
 x5 : ffff80001205a000 x4 : ffff80001233db80
 x3 : ffff8000100a7a60 x2 : 0020000000000003
 x1 : 0000fffffffff008 x0 : ffff800012af3ec0
 Call trace:
  uprobe_single_step_handler+0x34/0x50
  single_step_handler+0x70/0xf8
  do_debug_exception+0xb8/0x130
  el0_sync_handler+0x138/0x1b8
  el0_sync+0x158/0x180

Fixes: 74afda4016a7 ("arm64: compile the kernel with ptrauth return address signing")
Fixes: 04ca3204fa09 ("arm64: enable pointer authentication")
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
Changes since v6:
* Grouped the ptrauth combined branch instructions as suggested by Will.

 arch/arm64/include/asm/insn.h          | 4 ++++
 arch/arm64/kernel/insn.c               | 5 ++++-
 arch/arm64/kernel/probes/decode-insn.c | 3 ++-
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 0bc46149e491..4b39293d0f72 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -359,9 +359,13 @@ __AARCH64_INSN_FUNCS(brk,	0xFFE0001F, 0xD4200000)
 __AARCH64_INSN_FUNCS(exception,	0xFF000000, 0xD4000000)
 __AARCH64_INSN_FUNCS(hint,	0xFFFFF01F, 0xD503201F)
 __AARCH64_INSN_FUNCS(br,	0xFFFFFC1F, 0xD61F0000)
+__AARCH64_INSN_FUNCS(br_auth,	0xFEFFF800, 0xD61F0800)
 __AARCH64_INSN_FUNCS(blr,	0xFFFFFC1F, 0xD63F0000)
+__AARCH64_INSN_FUNCS(blr_auth,	0xFEFFF800, 0xD63F0800)
 __AARCH64_INSN_FUNCS(ret,	0xFFFFFC1F, 0xD65F0000)
+__AARCH64_INSN_FUNCS(ret_auth,	0xFFFFFBFF, 0xD65F0BFF)
 __AARCH64_INSN_FUNCS(eret,	0xFFFFFFFF, 0xD69F03E0)
+__AARCH64_INSN_FUNCS(eret_auth,	0xFFFFFBFF, 0xD69F0BFF)
 __AARCH64_INSN_FUNCS(mrs,	0xFFF00000, 0xD5300000)
 __AARCH64_INSN_FUNCS(msr_imm,	0xFFF8F01F, 0xD500401F)
 __AARCH64_INSN_FUNCS(msr_reg,	0xFFF00000, 0xD5100000)
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index a107375005bc..ccc8c9e22b25 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -176,7 +176,7 @@ bool __kprobes aarch64_insn_uses_literal(u32 insn)
 
 bool __kprobes aarch64_insn_is_branch(u32 insn)
 {
-	/* b, bl, cb*, tb*, b.cond, br, blr */
+	/* b, bl, cb*, tb*, ret*, b.cond, br*, blr* */
 
 	return aarch64_insn_is_b(insn) ||
 		aarch64_insn_is_bl(insn) ||
@@ -185,8 +185,11 @@ bool __kprobes aarch64_insn_is_branch(u32 insn)
 		aarch64_insn_is_tbz(insn) ||
 		aarch64_insn_is_tbnz(insn) ||
 		aarch64_insn_is_ret(insn) ||
+		aarch64_insn_is_ret_auth(insn) ||
 		aarch64_insn_is_br(insn) ||
+		aarch64_insn_is_br_auth(insn) ||
 		aarch64_insn_is_blr(insn) ||
+		aarch64_insn_is_blr_auth(insn) ||
 		aarch64_insn_is_bcond(insn);
 }
 
diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index 263d5fba4c8a..c541fb48886e 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -29,7 +29,8 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 		    aarch64_insn_is_msr_imm(insn) ||
 		    aarch64_insn_is_msr_reg(insn) ||
 		    aarch64_insn_is_exception(insn) ||
-		    aarch64_insn_is_eret(insn))
+		    aarch64_insn_is_eret(insn) ||
+		    aarch64_insn_is_eret_auth(insn))
 			return false;
 
 		/*
-- 
2.17.1



* [PATCH v7 2/6] arm64: traps: Allow force_signal_inject to pass esr error code
  2020-09-09 14:07 [PATCH v7 0/6] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
  2020-09-09 14:07 ` [PATCH v7 1/6] arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions Amit Daniel Kachhap
@ 2020-09-09 14:07 ` Amit Daniel Kachhap
  2020-09-09 14:07 ` [PATCH v7 3/6] arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-09-09 14:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

Some error signals need to pass the proper ARM ESR error code to userspace
to better identify the cause of the signal. The function force_signal_inject
is therefore extended to take this as a parameter. The existing behaviour is
not affected by this change.
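
For illustration only (not part of the diff below): a caller that has an ESR
value can now forward it to userspace, while the existing call sites are
converted mechanically to pass 0:

	/* new-style caller, forwarding the fault's ESR */
	force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);

	/* converted existing caller, behaviour unchanged */
	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);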

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/traps.h |  2 +-
 arch/arm64/kernel/fpsimd.c     |  4 ++--
 arch/arm64/kernel/traps.c      | 14 +++++++-------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
index cee5928e1b7d..d96dc2c7c09d 100644
--- a/arch/arm64/include/asm/traps.h
+++ b/arch/arm64/include/asm/traps.h
@@ -24,7 +24,7 @@ struct undef_hook {
 
 void register_undef_hook(struct undef_hook *hook);
 void unregister_undef_hook(struct undef_hook *hook);
-void force_signal_inject(int signal, int code, unsigned long address);
+void force_signal_inject(int signal, int code, unsigned long address, unsigned int err);
 void arm64_notify_segfault(unsigned long addr);
 void arm64_force_sig_fault(int signo, int code, void __user *addr, const char *str);
 void arm64_force_sig_mceerr(int code, void __user *addr, short lsb, const char *str);
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 55c8f3ec6705..77484359d44a 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -312,7 +312,7 @@ static void fpsimd_save(void)
 				 * re-enter user with corrupt state.
 				 * There's no way to recover, so kill it:
 				 */
-				force_signal_inject(SIGKILL, SI_KERNEL, 0);
+				force_signal_inject(SIGKILL, SI_KERNEL, 0, 0);
 				return;
 			}
 
@@ -936,7 +936,7 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 {
 	/* Even if we chose not to use SVE, the hardware could still trap: */
 	if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) {
-		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 		return;
 	}
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 13ebd5ca2070..29fd00fe94f2 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -412,7 +412,7 @@ static int call_undef_hook(struct pt_regs *regs)
 	return fn ? fn(regs, instr) : 1;
 }
 
-void force_signal_inject(int signal, int code, unsigned long address)
+void force_signal_inject(int signal, int code, unsigned long address, unsigned int err)
 {
 	const char *desc;
 	struct pt_regs *regs = current_pt_regs();
@@ -438,7 +438,7 @@ void force_signal_inject(int signal, int code, unsigned long address)
 		signal = SIGKILL;
 	}
 
-	arm64_notify_die(desc, regs, signal, code, (void __user *)address, 0);
+	arm64_notify_die(desc, regs, signal, code, (void __user *)address, err);
 }
 
 /*
@@ -455,7 +455,7 @@ void arm64_notify_segfault(unsigned long addr)
 		code = SEGV_ACCERR;
 	mmap_read_unlock(current->mm);
 
-	force_signal_inject(SIGSEGV, code, addr);
+	force_signal_inject(SIGSEGV, code, addr, 0);
 }
 
 void do_undefinstr(struct pt_regs *regs)
@@ -468,14 +468,14 @@ void do_undefinstr(struct pt_regs *regs)
 		return;
 
 	BUG_ON(!user_mode(regs));
-	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 }
 NOKPROBE_SYMBOL(do_undefinstr);
 
 void do_bti(struct pt_regs *regs)
 {
 	BUG_ON(!user_mode(regs));
-	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 }
 NOKPROBE_SYMBOL(do_bti);
 
@@ -528,7 +528,7 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
 		__user_cache_maint("ic ivau", address, ret);
 		break;
 	default:
-		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 		return;
 	}
 
@@ -581,7 +581,7 @@ static void mrs_handler(unsigned int esr, struct pt_regs *regs)
 	sysreg = esr_sys64_to_sysreg(esr);
 
 	if (do_emulate_mrs(regs, sysreg, rt) != 0)
-		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 }
 
 static void wfi_handler(unsigned int esr, struct pt_regs *regs)
-- 
2.17.1



* [PATCH v7 3/6] arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements
  2020-09-09 14:07 [PATCH v7 0/6] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
  2020-09-09 14:07 ` [PATCH v7 1/6] arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions Amit Daniel Kachhap
  2020-09-09 14:07 ` [PATCH v7 2/6] arm64: traps: Allow force_signal_inject to pass esr error code Amit Daniel Kachhap
@ 2020-09-09 14:07 ` Amit Daniel Kachhap
  2020-09-09 14:07 ` [PATCH v7 4/6] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-09-09 14:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

Some Armv8.3 Pointer Authentication enhancements have been introduced
which are mandatory for Armv8.6 and optional for Armv8.3. These features
are:

* ARMv8.3-PAuth2 - Enhanced PAC generation logic is added which makes it
  harder to guess the correct PAC value for an authenticated pointer.

* ARMv8.3-FPAC - A fault is now generated when a ptrauth authentication
  instruction fails to authenticate the PAC present in the address.
  This differs from the earlier behaviour, where such a failure merely
  placed an error code in the upper pointer bits and relied on a subsequent
  load/store to abort. The ptrauth instructions which may cause this fault
  are autiasp, retaa, etc.

The above features are now represented by additional configurations
for the Address Authentication cpufeature and a new ESR exception class.
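
As a rough sketch (the actual dispatch is in the entry-common.c hunk below),
the new exception class is recognised by extracting the EC field from ESR_ELx
with the existing ESR_ELx_EC() helper:

	unsigned int ec = ESR_ELx_EC(esr);	/* ESR_ELx bits [31:26] */

	if (ec == ESR_ELx_EC_FPAC)		/* 0x1C, added below */
		do_ptrauth_fault(regs, esr);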

A userspace fault received in the kernel due to ARMv8.3-FPAC is treated as
an illegal instruction, and hence SIGILL is injected with ILL_ILLOPN as the
signal code. Note that this differs from earlier ARMv8.3 ptrauth, where
SIGSEGV is issued on pointer authentication failure. An in-kernel FPAC fault
causes the kernel to crash.
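
Illustrative userspace sketch (not part of this patch): a process can observe
an FPAC failure as SIGILL with si_code ILL_ILLOPN, for example:

	#include <signal.h>
	#include <unistd.h>

	static void fpac_handler(int sig, siginfo_t *info, void *ctx)
	{
		if (info->si_code == ILL_ILLOPN)
			write(STDERR_FILENO, "ptrauth failure\n", 16);
		_exit(1);
	}

	int main(void)
	{
		struct sigaction sa = { .sa_sigaction = fpac_handler,
					.sa_flags = SA_SIGINFO };

		sigaction(SIGILL, &sa, NULL);
		/* an authentication failure (e.g. a failing retaa) would now
		 * deliver SIGILL here instead of a deferred SIGSEGV */
		return 0;
	}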

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/esr.h       |  4 +++-
 arch/arm64/include/asm/exception.h |  1 +
 arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
 arch/arm64/kernel/entry-common.c   | 21 +++++++++++++++++++++
 arch/arm64/kernel/traps.c          | 12 ++++++++++++
 5 files changed, 53 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 035003acfa87..22c81f1edda2 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -35,7 +35,9 @@
 #define ESR_ELx_EC_SYS64	(0x18)
 #define ESR_ELx_EC_SVE		(0x19)
 #define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
-/* Unallocated EC: 0x1b - 0x1E */
+/* Unallocated EC: 0x1B */
+#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
+/* Unallocated EC: 0x1D - 0x1E */
 #define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
 #define ESR_ELx_EC_IABT_LOW	(0x20)
 #define ESR_ELx_EC_IABT_CUR	(0x21)
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 7577a754d443..99b9383cd036 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
+void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 554a7e8ecb07..b738bc793369 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -636,14 +636,22 @@
 #define ID_AA64ISAR1_APA_SHIFT		4
 #define ID_AA64ISAR1_DPB_SHIFT		0
 
-#define ID_AA64ISAR1_APA_NI		0x0
-#define ID_AA64ISAR1_APA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_API_NI		0x0
-#define ID_AA64ISAR1_API_IMP_DEF	0x1
-#define ID_AA64ISAR1_GPA_NI		0x0
-#define ID_AA64ISAR1_GPA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_GPI_NI		0x0
-#define ID_AA64ISAR1_GPI_IMP_DEF	0x1
+#define ID_AA64ISAR1_APA_NI			0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_APA_ARCH_EPAC		0x2
+#define ID_AA64ISAR1_APA_ARCH_EPAC2		0x3
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_API_NI			0x0
+#define ID_AA64ISAR1_API_IMP_DEF		0x1
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC		0x2
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2		0x3
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_GPA_NI			0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_GPI_NI			0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF		0x1
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index d3be9dbf5490..43d4c329775f 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
 }
 NOKPROBE_SYMBOL(el1_dbg);
 
+static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	local_daif_inherit(regs);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el1_fpac);
+
 asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el1_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el1_fpac(regs, esr);
+		break;
 	default:
 		el1_inv(regs, esr);
 	}
@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_svc);
 
+static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el0_fpac);
+
 asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el0_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el0_fpac(regs, esr);
+		break;
 	default:
 		el0_inv(regs, esr);
 	}
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 29fd00fe94f2..b24f81197a68 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -479,6 +479,17 @@ void do_bti(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(do_bti);
 
+void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr)
+{
+	/*
+	 * Unexpected FPAC exception or pointer authentication failure in
+	 * the kernel: kill the task before it does any more harm.
+	 */
+	BUG_ON(!user_mode(regs));
+	force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
+}
+NOKPROBE_SYMBOL(do_ptrauth_fault);
+
 #define __user_cache_maint(insn, address, res)			\
 	if (address >= user_addr_max()) {			\
 		res = -EFAULT;					\
@@ -775,6 +786,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
 	[ESR_ELx_EC_SVE]		= "SVE",
 	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
+	[ESR_ELx_EC_FPAC]		= "FPAC",
 	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
 	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
 	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
-- 
2.17.1



* [PATCH v7 4/6] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-09-09 14:07 [PATCH v7 0/6] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
                   ` (2 preceding siblings ...)
  2020-09-09 14:07 ` [PATCH v7 3/6] arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
@ 2020-09-09 14:07 ` Amit Daniel Kachhap
  2020-09-11 17:24   ` Catalin Marinas
  2020-09-09 14:07 ` [PATCH v7 5/6] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
  2020-09-09 14:07 ` [PATCH v7 6/6] arm64: kprobe: clarify the comment of steppable hint instructions Amit Daniel Kachhap
  5 siblings, 1 reply; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-09-09 14:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

The current address authentication cpufeature levels are set as LOWER_SAFE,
which is not compatible with the different configurations added for the
Armv8.3 ptrauth enhancements: the levels have different behaviour and there
is no tunable to enable only the lower safe versions. This is rectified by
setting those cpufeature types to EXACT.

The current cpufeature framework also does not prevent the booting of
non-exact secondary CPUs, but rather marks the system as tainted. As a
workaround, this is fixed by replacing the generic match handler with a new
handler specific to ptrauth.

After this change, if the ptrauth configuration of a secondary CPU differs
from that of the boot CPU, the mismatched CPU is parked in an infinite loop.

The following ptrauth crash log is observed on an Arm fast model with
mismatched CPUs without this fix:

 CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64ISAR1_EL1. Boot CPU: 0x11111110211402, CPU4: 0x11111110211102
 CPU features: Unsupported CPU feature variation detected.
 GICv3: CPU4: found redistributor 100 region 0:0x000000002f180000
 CPU4: Booted secondary processor 0x0000000100 [0x410fd0f0]
 Unable to handle kernel paging request at virtual address bfff800010dadf3c
 Mem abort info:
   ESR = 0x86000004
   EC = 0x21: IABT (current EL), IL = 32 bits
   SET = 0, FnV = 0
   EA = 0, S1PTW = 0
 [bfff800010dadf3c] address between user and kernel address ranges
 Internal error: Oops: 86000004 [#1] PREEMPT SMP
 Modules linked in:
 CPU: 4 PID: 29 Comm: migration/4 Tainted: G S                5.8.0-rc4-00005-ge658591d66d1-dirty #158
 Hardware name: Foundation-v8A (DT)
 pstate: 60000089 (nZCv daIf -PAN -UAO BTYPE=--)
 pc : 0xbfff800010dadf3c
 lr : __schedule+0x2b4/0x5a8
 sp : ffff800012043d70
 x29: ffff800012043d70 x28: 0080000000000000
 x27: ffff800011cbe000 x26: ffff00087ad37580
 x25: ffff00087ad37000 x24: ffff800010de7d50
 x23: ffff800011674018 x22: 0784800010dae2a8
 x21: ffff00087ad37000 x20: ffff00087acb8000
 x19: ffff00087f742100 x18: 0000000000000030
 x17: 0000000000000000 x16: 0000000000000000
 x15: ffff800011ac1000 x14: 00000000000001bd
 x13: 0000000000000000 x12: 0000000000000000
 x11: 0000000000000000 x10: 71519a147ddfeb82
 x9 : 825d5ec0fb246314 x8 : ffff00087ad37dd8
 x7 : 0000000000000000 x6 : 00000000fffedb0e
 x5 : 00000000ffffffff x4 : 0000000000000000
 x3 : 0000000000000028 x2 : ffff80086e11e000
 x1 : ffff00087ad37000 x0 : ffff00087acdc600
 Call trace:
  0xbfff800010dadf3c
  schedule+0x78/0x110
  schedule_preempt_disabled+0x24/0x40
  __kthread_parkme+0x68/0xd0
  kthread+0x138/0x160
  ret_from_fork+0x10/0x34
 Code: bad PC value
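
For reference, the mismatch in the SANITY CHECK line above is in the 4-bit
API field of ID_AA64ISAR1_EL1; an illustrative decode (not part of this
patch):

	u64 boot = 0x11111110211402ULL, sec = 0x11111110211102ULL;
	unsigned int boot_api = (boot >> ID_AA64ISAR1_API_SHIFT) & 0xf;	/* 0x4: EPAC2 + FPAC */
	unsigned int sec_api  = (sec  >> ID_AA64ISAR1_API_SHIFT) & 0xf;	/* 0x1: IMP DEF algorithm */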

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
[Suzuki: Introduce new matching function for address authentication]
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 46 +++++++++++++++++++++++++++-------
 1 file changed, 37 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6424584be01e..4bb3f2b2ffed 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -197,9 +197,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
@@ -1648,11 +1648,39 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
-			     int __unused)
+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
 {
-	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+	int boot_val, sec_val;
+
+	/* We don't expect to be called with SCOPE_SYSTEM */
+	WARN_ON(scope == SCOPE_SYSTEM);
+	/*
+	 * The ptr-auth feature levels are not intercompatible with lower
+	 * levels. Hence we must match ptr-auth feature level of the secondary
+	 * CPUs with that of the boot CPU. The level of boot cpu is fetched
+	 * from the sanitised register whereas direct register read is done for
+	 * the secondary CPUs.
+	 * The sanitised feature state is guaranteed to match that of the
+	 * boot CPU as a mismatched secondary CPU is parked before it gets
+	 * a chance to update the state, with the capability.
+	 */
+	boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
+					       entry->field_pos, entry->sign);
+	if (scope & SCOPE_BOOT_CPU) {
+		return boot_val >= entry->min_field_value;
+	} else if (scope & SCOPE_LOCAL_CPU) {
+		sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
+						      entry->field_pos, entry->sign);
+		return (sec_val >= entry->min_field_value) && (sec_val == boot_val);
+	}
+	return false;
+}
+
+static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
+				     int scope)
+{
+	return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
+	       has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
 }
 
 static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
@@ -2021,7 +2049,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
-		.matches = has_cpuid_feature,
+		.matches = has_address_auth_cpucap,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -2031,12 +2059,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
-		.matches = has_cpuid_feature,
+		.matches = has_address_auth_cpucap,
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
 		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
-		.matches = has_address_auth,
+		.matches = has_address_auth_metacap,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
-- 
2.17.1



* [PATCH v7 5/6] arm64: kprobe: disable probe of fault prone ptrauth instruction
  2020-09-09 14:07 [PATCH v7 0/6] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
                   ` (3 preceding siblings ...)
  2020-09-09 14:07 ` [PATCH v7 4/6] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
@ 2020-09-09 14:07 ` Amit Daniel Kachhap
  2020-09-09 14:07 ` [PATCH v7 6/6] arm64: kprobe: clarify the comment of steppable hint instructions Amit Daniel Kachhap
  5 siblings, 0 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-09-09 14:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

With the addition of the ARMv8.3-FPAC feature, probing an authenticate
ptrauth instruction (AUT*) may cause a ptrauth fault exception on
authentication failure, so these instructions cannot be safely single
stepped.

Hence probing of the authenticate instructions is disallowed, while the
corresponding PAC instructions (PAC*) are unaffected and can still be
probed. Also, AUT* instructions make little sense at function entry points,
so most realistic probes are unaffected by this change.
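
A hypothetical example (symbol name and offset are made up): after this
change, registering a kprobe whose target decodes to an AUT* instruction is
expected to be rejected, while probes on PAC* instructions still succeed:

	static struct kprobe kp = {
		.symbol_name	= "some_function",	/* hypothetical */
		.offset		= 0x20,			/* say, an autiasp before return */
	};

	/* register_kprobe() should now fail (-EINVAL) as the decoder marks
	 * the autiasp INSN_REJECTED */
	ret = register_kprobe(&kp);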

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
---
 arch/arm64/kernel/insn.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index ccc8c9e22b25..6c0de2f60ea9 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
 	case AARCH64_INSN_HINT_XPACLRI:
 	case AARCH64_INSN_HINT_PACIA_1716:
 	case AARCH64_INSN_HINT_PACIB_1716:
-	case AARCH64_INSN_HINT_AUTIA_1716:
-	case AARCH64_INSN_HINT_AUTIB_1716:
 	case AARCH64_INSN_HINT_PACIAZ:
 	case AARCH64_INSN_HINT_PACIASP:
 	case AARCH64_INSN_HINT_PACIBZ:
 	case AARCH64_INSN_HINT_PACIBSP:
-	case AARCH64_INSN_HINT_AUTIAZ:
-	case AARCH64_INSN_HINT_AUTIASP:
-	case AARCH64_INSN_HINT_AUTIBZ:
-	case AARCH64_INSN_HINT_AUTIBSP:
 	case AARCH64_INSN_HINT_BTI:
 	case AARCH64_INSN_HINT_BTIC:
 	case AARCH64_INSN_HINT_BTIJ:
-- 
2.17.1



* [PATCH v7 6/6] arm64: kprobe: clarify the comment of steppable hint instructions
  2020-09-09 14:07 [PATCH v7 0/6] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
                   ` (4 preceding siblings ...)
  2020-09-09 14:07 ` [PATCH v7 5/6] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
@ 2020-09-09 14:07 ` Amit Daniel Kachhap
  5 siblings, 0 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-09-09 14:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

The existing comment about steppable hint instructions is incomplete and
describes only NOP instructions as steppable. Since the function
aarch64_insn_is_steppable_hint allows all whitelisted hint instructions
to be probed, the comment is updated to reflect this.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
---
 arch/arm64/kernel/probes/decode-insn.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index c541fb48886e..104101f633b1 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -43,8 +43,10 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 			     != AARCH64_INSN_SPCLREG_DAIF;
 
 		/*
-		 * The HINT instruction is is problematic when single-stepping,
-		 * except for the NOP case.
+		 * The HINT instruction is steppable only if it is in the
+		 * whitelist; all other hint instructions are blocked from
+		 * single stepping as they may cause an exception or other
+		 * unintended behaviour.
 		 */
 		if (aarch64_insn_is_hint(insn))
 			return aarch64_insn_is_steppable_hint(insn);
-- 
2.17.1



* Re: [PATCH v7 4/6] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-09-09 14:07 ` [PATCH v7 4/6] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
@ 2020-09-11 17:24   ` Catalin Marinas
  2020-09-14  6:54     ` Amit Kachhap
  0 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2020-09-11 17:24 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Suzuki K Poulose, Mark Brown, James Morse,
	Vincenzo Frascino, Will Deacon, Dave Martin, linux-arm-kernel

On Wed, Sep 09, 2020 at 07:37:57PM +0530, Amit Daniel Kachhap wrote:
> The current address authentication cpufeature levels are set as LOWER_SAFE
> which is not compatible with the different configurations added for Armv8.3
> ptrauth enhancements as the different levels have different behaviour and
> there is no tunable to enable the lower safe versions. This is rectified
> by setting those cpufeature type as EXACT.
> 
> The current cpufeature framework also does not interfere in the booting of
> non-exact secondary cpus but rather marks them as tainted. As a workaround
> this is fixed by replacing the generic match handler with a new handler
> specific to ptrauth.
> 
> After this change, if there is any variation in ptrauth configurations in
> secondary cpus from boot cpu then those mismatched cpus are parked in an
> infinite loop.
> 
> Following ptrauth crash log is oberserved in Arm fastmodel with mismatched
> cpus without this fix,
> 
>  CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64ISAR1_EL1. Boot CPU: 0x11111110211402, CPU4: 0x11111110211102
>  CPU features: Unsupported CPU feature variation detected.
>  GICv3: CPU4: found redistributor 100 region 0:0x000000002f180000
>  CPU4: Booted secondary processor 0x0000000100 [0x410fd0f0]
>  Unable to handle kernel paging request at virtual address bfff800010dadf3c
>  Mem abort info:
>    ESR = 0x86000004
>    EC = 0x21: IABT (current EL), IL = 32 bits
>    SET = 0, FnV = 0
>    EA = 0, S1PTW = 0
>  [bfff800010dadf3c] address between user and kernel address ranges
>  Internal error: Oops: 86000004 [#1] PREEMPT SMP
>  Modules linked in:
>  CPU: 4 PID: 29 Comm: migration/4 Tainted: G S                5.8.0-rc4-00005-ge658591d66d1-dirty #158
>  Hardware name: Foundation-v8A (DT)
>  pstate: 60000089 (nZCv daIf -PAN -UAO BTYPE=--)
>  pc : 0xbfff800010dadf3c
>  lr : __schedule+0x2b4/0x5a8
>  sp : ffff800012043d70
>  x29: ffff800012043d70 x28: 0080000000000000
>  x27: ffff800011cbe000 x26: ffff00087ad37580
>  x25: ffff00087ad37000 x24: ffff800010de7d50
>  x23: ffff800011674018 x22: 0784800010dae2a8
>  x21: ffff00087ad37000 x20: ffff00087acb8000
>  x19: ffff00087f742100 x18: 0000000000000030
>  x17: 0000000000000000 x16: 0000000000000000
>  x15: ffff800011ac1000 x14: 00000000000001bd
>  x13: 0000000000000000 x12: 0000000000000000
>  x11: 0000000000000000 x10: 71519a147ddfeb82
>  x9 : 825d5ec0fb246314 x8 : ffff00087ad37dd8
>  x7 : 0000000000000000 x6 : 00000000fffedb0e
>  x5 : 00000000ffffffff x4 : 0000000000000000
>  x3 : 0000000000000028 x2 : ffff80086e11e000
>  x1 : ffff00087ad37000 x0 : ffff00087acdc600
>  Call trace:
>   0xbfff800010dadf3c
>   schedule+0x78/0x110
>   schedule_preempt_disabled+0x24/0x40
>   __kthread_parkme+0x68/0xd0
>   kthread+0x138/0x160
>   ret_from_fork+0x10/0x34
>  Code: bad PC value

That's what FTR_EXACT gives us. The variation above is in the field at
bit position 8 (API_SHIFT) with the boot CPU value of 4 and the
secondary CPU of 1, if I read it correctly.

Would it be better if the incompatible CPUs are just parked? I'm trying
to figure out from the verify_local_cpu_caps() code whether that's
possible. I don't fully understand why we don't trigger the "Detected
conflict for capability" message instead.

My reading of the code is that we set the system capability based on the
boot CPU since it's a BOOT_CPU_FEATURE. has_address_auth_cpucap() should
return false with this mismatch and verify_local_cpu_caps() would park
the CPU. Once parked, it shouldn't reach the sanity check since
cpuinfo_store_cpu() is called after check_local_cpu_capabilities() (and
the match function).

> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 6424584be01e..4bb3f2b2ffed 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -197,9 +197,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),

So we go for FTR_EXACT here with a safe value of 0. It makes sense.
IIUC, in case of a mismatch, we end up with the sanitised register field
as 0.

>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>  	ARM64_FTR_END,
>  };
> @@ -1648,11 +1648,39 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
>  #endif /* CONFIG_ARM64_RAS_EXTN */
>  
>  #ifdef CONFIG_ARM64_PTR_AUTH
> -static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
> -			     int __unused)
> +static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
>  {
> -	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> -	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> +	int boot_val, sec_val;
> +
> +	/* We don't expect to be called with SCOPE_SYSTEM */
> +	WARN_ON(scope == SCOPE_SYSTEM);
> +	/*
> +	 * The ptr-auth feature levels are not intercompatible with lower
> +	 * levels. Hence we must match ptr-auth feature level of the secondary
> +	 * CPUs with that of the boot CPU. The level of boot cpu is fetched
> +	 * from the sanitised register whereas direct register read is done for
> +	 * the secondary CPUs.
> +	 * The sanitised feature state is guaranteed to match that of the
> +	 * boot CPU as a mismatched secondary CPU is parked before it gets
> +	 * a chance to update the state, with the capability.
> +	 */
> +	boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
> +					       entry->field_pos, entry->sign);
> +	if (scope & SCOPE_BOOT_CPU) {
> +		return boot_val >= entry->min_field_value;
> +	} else if (scope & SCOPE_LOCAL_CPU) {

Nitpick: just drop the else as we had a return already above. We could
also drop the SCOPE_LOCAL_CPU check since that's the only one left.

> +		sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
> +						      entry->field_pos, entry->sign);
> +		return (sec_val >= entry->min_field_value) && (sec_val == boot_val);

Another nitpick: do you still need the sec_val >= ... if you check for
sec_val == boot_val? boot_val was already checked against
min_field_value. That's unless the sanitised reg is updated and boot_val
above is 0 on subsequent CPUs. This could only happen if we don't park a
CPU that has a mismatch and it ends up updating the sysreg. AFAICT, for
a secondary CPU, we first check the features then we update the
sanitised regs.
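
FWIW, with both nitpicks applied the function would reduce to something like
(sketch only):

	boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
					       entry->field_pos, entry->sign);
	if (scope & SCOPE_BOOT_CPU)
		return boot_val >= entry->min_field_value;

	sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
					      entry->field_pos, entry->sign);
	return sec_val == boot_val;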

-- 
Catalin


* Re: [PATCH v7 4/6] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-09-11 17:24   ` Catalin Marinas
@ 2020-09-14  6:54     ` Amit Kachhap
  2020-09-14 10:04       ` Catalin Marinas
  0 siblings, 1 reply; 10+ messages in thread
From: Amit Kachhap @ 2020-09-14  6:54 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Suzuki K Poulose, Mark Brown, James Morse,
	Vincenzo Frascino, Will Deacon, Dave Martin, linux-arm-kernel



On 9/11/20 10:54 PM, Catalin Marinas wrote:
> On Wed, Sep 09, 2020 at 07:37:57PM +0530, Amit Daniel Kachhap wrote:
>> The current address authentication cpufeature levels are set as LOWER_SAFE
>> which is not compatible with the different configurations added for Armv8.3
>> ptrauth enhancements as the different levels have different behaviour and
>> there is no tunable to enable the lower safe versions. This is rectified
>> by setting those cpufeature type as EXACT.
>>
>> The current cpufeature framework also does not interfere in the booting of
>> non-exact secondary cpus but rather marks them as tainted. As a workaround
>> this is fixed by replacing the generic match handler with a new handler
>> specific to ptrauth.
>>
>> After this change, if there is any variation in ptrauth configurations in
>> secondary cpus from boot cpu then those mismatched cpus are parked in an
>> infinite loop.
>>
>> Following ptrauth crash log is oberserved in Arm fastmodel with mismatched
>> cpus without this fix,
>>
>>   CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64ISAR1_EL1. Boot CPU: 0x11111110211402, CPU4: 0x11111110211102
>>   CPU features: Unsupported CPU feature variation detected.
>>   GICv3: CPU4: found redistributor 100 region 0:0x000000002f180000
>>   CPU4: Booted secondary processor 0x0000000100 [0x410fd0f0]
>>   Unable to handle kernel paging request at virtual address bfff800010dadf3c
>>   Mem abort info:
>>     ESR = 0x86000004
>>     EC = 0x21: IABT (current EL), IL = 32 bits
>>     SET = 0, FnV = 0
>>     EA = 0, S1PTW = 0
>>   [bfff800010dadf3c] address between user and kernel address ranges
>>   Internal error: Oops: 86000004 [#1] PREEMPT SMP
>>   Modules linked in:
>>   CPU: 4 PID: 29 Comm: migration/4 Tainted: G S                5.8.0-rc4-00005-ge658591d66d1-dirty #158
>>   Hardware name: Foundation-v8A (DT)
>>   pstate: 60000089 (nZCv daIf -PAN -UAO BTYPE=--)
>>   pc : 0xbfff800010dadf3c
>>   lr : __schedule+0x2b4/0x5a8
>>   sp : ffff800012043d70
>>   x29: ffff800012043d70 x28: 0080000000000000
>>   x27: ffff800011cbe000 x26: ffff00087ad37580
>>   x25: ffff00087ad37000 x24: ffff800010de7d50
>>   x23: ffff800011674018 x22: 0784800010dae2a8
>>   x21: ffff00087ad37000 x20: ffff00087acb8000
>>   x19: ffff00087f742100 x18: 0000000000000030
>>   x17: 0000000000000000 x16: 0000000000000000
>>   x15: ffff800011ac1000 x14: 00000000000001bd
>>   x13: 0000000000000000 x12: 0000000000000000
>>   x11: 0000000000000000 x10: 71519a147ddfeb82
>>   x9 : 825d5ec0fb246314 x8 : ffff00087ad37dd8
>>   x7 : 0000000000000000 x6 : 00000000fffedb0e
>>   x5 : 00000000ffffffff x4 : 0000000000000000
>>   x3 : 0000000000000028 x2 : ffff80086e11e000
>>   x1 : ffff00087ad37000 x0 : ffff00087acdc600
>>   Call trace:
>>    0xbfff800010dadf3c
>>    schedule+0x78/0x110
>>    schedule_preempt_disabled+0x24/0x40
>>    __kthread_parkme+0x68/0xd0
>>    kthread+0x138/0x160
>>    ret_from_fork+0x10/0x34
>>   Code: bad PC value
> 
> That's what FTR_EXACT gives us. The variation above is in the field at
> bit position 8 (API_SHIFT) with the boot CPU value of 4 and the
> secondary CPU of 1, if I read it correctly.

Yes
> 
> Would it be better if the incompatible CPUs are just parked? I'm trying
> to figure out from the verify_local_cpu_caps() code whether that's
> possible. I don't fully understand why we don't trigger the "Detected
> conflict for capability" message instead.

The above ptrauth crash appears when this fix patch is not present. With
the fix present, cpu4 is actually parked as:

[    0.098833] CPU features: CPU4: Detected conflict for capability 39 (Address authentication (IMP DEF algorithm)), System: 1, CPU: 0
[    0.098833] CPU4: will not boot

> 
> My reading of the code is that we set the system capability based on the
> boot CPU since it's a BOOT_CPU_FEATURE. has_address_auth_cpucap() should
> return false with this mismatch and verify_local_cpu_caps() would park
> the CPU. Once parked, it shouldn't reach the sanity check since
> cpuinfo_store_cpu() is called after check_local_cpu_capabilities() (and
> the match function).

Yes.
> 
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 6424584be01e..4bb3f2b2ffed 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -197,9 +197,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> 
> So we go for FTR_EXACT here with a safe value of 0. It makes sense.
> IIUC, in case of a mismatch, we end up with the sanitised register field
> as 0.
> 
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>   	ARM64_FTR_END,
>>   };
>> @@ -1648,11 +1648,39 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
>>   #endif /* CONFIG_ARM64_RAS_EXTN */
>>   
>>   #ifdef CONFIG_ARM64_PTR_AUTH
>> -static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
>> -			     int __unused)
>> +static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
>>   {
>> -	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
>> -	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
>> +	int boot_val, sec_val;
>> +
>> +	/* We don't expect to be called with SCOPE_SYSTEM */
>> +	WARN_ON(scope == SCOPE_SYSTEM);
>> +	/*
>> +	 * The ptr-auth feature levels are not intercompatible with lower
>> +	 * levels. Hence we must match ptr-auth feature level of the secondary
>> +	 * CPUs with that of the boot CPU. The level of boot cpu is fetched
>> +	 * from the sanitised register whereas direct register read is done for
>> +	 * the secondary CPUs.
>> +	 * The sanitised feature state is guaranteed to match that of the
>> +	 * boot CPU as a mismatched secondary CPU is parked before it gets
>> +	 * a chance to update the state, with the capability.
>> +	 */
>> +	boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
>> +					       entry->field_pos, entry->sign);
>> +	if (scope & SCOPE_BOOT_CPU) {
>> +		return boot_val >= entry->min_field_value;
>> +	} else if (scope & SCOPE_LOCAL_CPU) {
> 
> Nitpick: just drop the else as we had a return already above. We could
> also drop the SCOPE_LOCAL_CPU check since that's the only one left.

OK, I will update it in the v8 version.
> 
>> +		sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
>> +						      entry->field_pos, entry->sign);
>> +		return (sec_val >= entry->min_field_value) && (sec_val == boot_val);
> 
> Another nitpick: do you still need the sec_val >= ... if you check for
> sec_val == boot_val? boot_val was already checked against
> min_field_value. That's unless the sanitised reg is updated and boot_val
> above is 0 on subsequent CPUs. This could only happen if we don't park a
> CPU that has a mismatch and it ends up updating the sysreg. AFAICT, for
> a secondary CPU, we first check the features then we update the
> sanitised regs.

Yes, agreed. I will update it in the v8 version.
> 


* Re: [PATCH v7 4/6] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-09-14  6:54     ` Amit Kachhap
@ 2020-09-14 10:04       ` Catalin Marinas
  0 siblings, 0 replies; 10+ messages in thread
From: Catalin Marinas @ 2020-09-14 10:04 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: Mark Rutland, Suzuki K Poulose, Mark Brown, James Morse,
	Vincenzo Frascino, Will Deacon, Dave Martin, linux-arm-kernel

On Mon, Sep 14, 2020 at 12:24:23PM +0530, Amit Kachhap wrote:
> On 9/11/20 10:54 PM, Catalin Marinas wrote:
> > On Wed, Sep 09, 2020 at 07:37:57PM +0530, Amit Daniel Kachhap wrote:
> > > The current address authentication cpufeature levels are set as LOWER_SAFE
> > > which is not compatible with the different configurations added for Armv8.3
> > > ptrauth enhancements as the different levels have different behaviour and
> > > there is no tunable to enable the lower safe versions. This is rectified
> > > by setting those cpufeature type as EXACT.
> > > 
> > > The current cpufeature framework also does not interfere in the booting of
> > > non-exact secondary cpus but rather marks them as tainted. As a workaround
> > > this is fixed by replacing the generic match handler with a new handler
> > > specific to ptrauth.
> > > 
> > > After this change, if there is any variation in ptrauth configurations in
> > > secondary cpus from boot cpu then those mismatched cpus are parked in an
> > > infinite loop.
> > > 
> > > Following ptrauth crash log is oberserved in Arm fastmodel with mismatched
> > > cpus without this fix,
> > > 
> > >   CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64ISAR1_EL1. Boot CPU: 0x11111110211402, CPU4: 0x11111110211102
> > >   CPU features: Unsupported CPU feature variation detected.
> > >   GICv3: CPU4: found redistributor 100 region 0:0x000000002f180000
> > >   CPU4: Booted secondary processor 0x0000000100 [0x410fd0f0]
> > >   Unable to handle kernel paging request at virtual address bfff800010dadf3c
> > >   Mem abort info:
> > >     ESR = 0x86000004
> > >     EC = 0x21: IABT (current EL), IL = 32 bits
> > >     SET = 0, FnV = 0
> > >     EA = 0, S1PTW = 0
> > >   [bfff800010dadf3c] address between user and kernel address ranges
> > >   Internal error: Oops: 86000004 [#1] PREEMPT SMP
> > >   Modules linked in:
> > >   CPU: 4 PID: 29 Comm: migration/4 Tainted: G S                5.8.0-rc4-00005-ge658591d66d1-dirty #158
> > >   Hardware name: Foundation-v8A (DT)
> > >   pstate: 60000089 (nZCv daIf -PAN -UAO BTYPE=--)
> > >   pc : 0xbfff800010dadf3c
> > >   lr : __schedule+0x2b4/0x5a8
> > >   sp : ffff800012043d70
> > >   x29: ffff800012043d70 x28: 0080000000000000
> > >   x27: ffff800011cbe000 x26: ffff00087ad37580
> > >   x25: ffff00087ad37000 x24: ffff800010de7d50
> > >   x23: ffff800011674018 x22: 0784800010dae2a8
> > >   x21: ffff00087ad37000 x20: ffff00087acb8000
> > >   x19: ffff00087f742100 x18: 0000000000000030
> > >   x17: 0000000000000000 x16: 0000000000000000
> > >   x15: ffff800011ac1000 x14: 00000000000001bd
> > >   x13: 0000000000000000 x12: 0000000000000000
> > >   x11: 0000000000000000 x10: 71519a147ddfeb82
> > >   x9 : 825d5ec0fb246314 x8 : ffff00087ad37dd8
> > >   x7 : 0000000000000000 x6 : 00000000fffedb0e
> > >   x5 : 00000000ffffffff x4 : 0000000000000000
> > >   x3 : 0000000000000028 x2 : ffff80086e11e000
> > >   x1 : ffff00087ad37000 x0 : ffff00087acdc600
> > >   Call trace:
> > >    0xbfff800010dadf3c
> > >    schedule+0x78/0x110
> > >    schedule_preempt_disabled+0x24/0x40
> > >    __kthread_parkme+0x68/0xd0
> > >    kthread+0x138/0x160
> > >    ret_from_fork+0x10/0x34
> > >   Code: bad PC value
> > 
> > That's what FTR_EXACT gives us. The variation above is in the field at
> > bit position 8 (API_SHIFT) with the boot CPU value of 4 and the
> > secondary CPU of 1, if I read it correctly.
> 
> Yes
> 
> > Would it be better if the incompatible CPUs are just parked? I'm trying
> > to figure out from the verify_local_cpu_caps() code whether that's
> > possible. I don't fully understand why we don't trigger the "Detected
> > conflict for capability" message instead.
> 
> The above ptrauth crash appears when this fix patch is not present and
> with this fix present, cpu4 is actually parked as,
> 
> [    0.098833] CPU features: CPU4: Detected conflict for capability 39
> (Address authentication (IMP DEF algorithm)), System: 1, CPU: 0
> [    0.098833] CPU4: will not boot

Ah, should have read the commit log properly. Thanks for the
clarification.

-- 
Catalin

