linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v6 0/6] arm64: add Armv8.3 pointer authentication enhancements
From: Amit Daniel Kachhap @ 2020-09-04 10:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

This patch series adds support for the Armv8.3 pointer authentication
enhancements, which are mandatory for Armv8.6 and optional for Armv8.3.
These features are:

 * An enhanced PAC generation algorithm (ARMv8.3-PAuth2).
 * A fault generated when an authenticate instruction fails (ARMv8.3-FPAC).

More details can be found here [1].

Changes since v5 [2]:
* A few comments modified as suggested by Dave and Suzuki.
* Added Reviewed-by's.

Changes since v4 [3]:
* New patch "arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions".
  This fixes a uprobe crash with ARMv8.3-PAuth combined instructions.
* New patch "arm64: traps: Allow force_signal_inject to pass esr error code".
  This is a preparatory patch for ARMv8.3-FPAC fault exception handling.
* Removed caching of the boot CPU address authentication cpufeature levels
  in static variables. This was suggested by Dave and Suzuki.
* Used the existing force_signal_inject function to inject the ptrauth
  fault signal, as suggested by Dave.
* Commit log changes.

Changes since v3 [4]:
* Added a new patch "arm64: kprobe: clarify the comment of steppable hint instructions"
  as suggested in the last iteration.
* Removed the ptrauth fault handler from the el0 compat handler, as pointed
  out by Dave.
* Mentioned the new feature names clearly as ARMv8.3-FPAC and ARMv8.3-PAuth2,
  as per the ARMv8-A reference manual.
* Commit log cleanup.

Changes since v2 [5]:
* Dropped the patch "arm64: cpufeature: Fix the handler for address authentication".
* Added a new matching function for address authentication, as the generic
  matching function has_cpuid_feature is specific to LOWER_SAFE features.
  This was suggested by Suzuki [3].
* Disabled probing of the authenticate ptrauth instructions, in line with
  Mark Brown's merged changes whitelisting hint instructions.

This series is based on kernel version v5.9-rc3.

Regards,
Amit

[1]: https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/arm-architecture-developments-armv8-6-a
[2]: https://lore.kernel.org/linux-arm-kernel/1597734671-23407-1-git-send-email-amit.kachhap@arm.com/
[3]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-July/584393.html
[4]: https://lore.kernel.org/linux-arm-kernel/1592457029-18547-1-git-send-email-amit.kachhap@arm.com/
[5]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-April/723751.html

Amit Daniel Kachhap (6):
  arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
  arm64: traps: Allow force_signal_inject to pass esr error code
  arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements
  arm64: cpufeature: Modify address authentication cpufeature to exact
  arm64: kprobe: disable probe of fault prone ptrauth instruction
  arm64: kprobe: clarify the comment of steppable hint instructions

 arch/arm64/include/asm/esr.h           |  4 ++-
 arch/arm64/include/asm/exception.h     |  1 +
 arch/arm64/include/asm/insn.h          | 12 +++++++
 arch/arm64/include/asm/sysreg.h        | 24 +++++++++-----
 arch/arm64/include/asm/traps.h         |  2 +-
 arch/arm64/kernel/cpufeature.c         | 46 +++++++++++++++++++++-----
 arch/arm64/kernel/entry-common.c       | 21 ++++++++++++
 arch/arm64/kernel/fpsimd.c             |  4 +--
 arch/arm64/kernel/insn.c               | 20 ++++++-----
 arch/arm64/kernel/probes/decode-insn.c | 10 ++++--
 arch/arm64/kernel/traps.c              | 26 +++++++++++----
 11 files changed, 131 insertions(+), 39 deletions(-)

-- 
2.17.1



* [PATCH v6 1/6] arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
From: Amit Daniel Kachhap @ 2020-09-04 10:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

Currently the ARMv8.3-PAuth combined branch instructions (braa, retaa,
etc.) are not simulated for out-of-line execution with a handler. Hence
uprobing such instructions leads to kernel warnings in a loop, as they are
not explicitly checked and fall into the INSN_GOOD category. Other combined
instructions like LDRAA and LDRAB can be probed.

The issue with the combined branch instructions is fixed by adding
definitions for all such instructions and rejecting their probes.
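
For reference, each __AARCH64_INSN_FUNCS(name, mask, val) entry expands
to a predicate of the form (insn & mask) == val. A minimal standalone
sketch of that check, using the retaa encoding added below (illustrative
only, not the kernel macro itself):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* mirrors the shape of the kernel's aarch64_insn_is_*() helpers */
static bool insn_matches(uint32_t insn, uint32_t mask, uint32_t val)
{
	return (insn & mask) == val;
}

int main(void)
{
	/* retaa has a single fixed encoding, hence the all-ones mask */
	printf("retaa: %d\n",
	       insn_matches(0xD65F0BFF, 0xFFFFFFFF, 0xD65F0BFF)); /* 1 */
	/* a plain ret x30 (0xD65F03C0) must not match */
	printf("ret:   %d\n",
	       insn_matches(0xD65F03C0, 0xFFFFFFFF, 0xD65F0BFF)); /* 0 */
	return 0;
}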

Warning log:
 WARNING: CPU: 5 PID: 249 at arch/arm64/kernel/probes/uprobes.c:182 uprobe_single_step_handler+0x34/0x50
 Modules linked in:
 CPU: 5 PID: 249 Comm: func Tainted: G        W         5.8.0-rc4-00005-ge658591d66d1-dirty #160
 Hardware name: Foundation-v8A (DT)
 pstate: 204003c9 (nzCv DAIF +PAN -UAO BTYPE=--)
 pc : uprobe_single_step_handler+0x34/0x50
 lr : single_step_handler+0x70/0xf8
 sp : ffff800012afbe30
 x29: ffff800012afbe30 x28: ffff000879f00ec0
 x27: 0000000000000000 x26: 0000000000000000
 x25: 0000000000000000 x24: 0000000000000000
 x23: 0000000060001000 x22: 00000000cb000022
 x21: ffff800011fc5a68 x20: ffff800012afbec0
 x19: ffff800011fc86c0 x18: 0000000000000000
 x17: 0000000000000000 x16: 0000000000000000
 x15: 0000000000000000 x14: 0000000000000000
 x13: 0000000000000000 x12: 0000000000000000
 x11: 0000000000000000 x10: 0000000000000000
 x9 : ffff800010085d50 x8 : 0000000000000000
 x7 : 0000000000000000 x6 : ffff800011fba9c0
 x5 : ffff800011fba000 x4 : ffff800012283070
 x3 : ffff8000100a78e0 x2 : 00000000004005f0
 x1 : 0000fffffffff008 x0 : ffff800012afbec0
 Call trace:
  uprobe_single_step_handler+0x34/0x50
  single_step_handler+0x70/0xf8
  do_debug_exception+0xb8/0x130
  el0_sync_handler+0x7c/0x188
  el0_sync+0x158/0x180

Fixes: 74afda4016a7 ("arm64: compile the kernel with ptrauth return address signing")
Fixes: 04ca3204fa09 ("arm64: enable pointer authentication")
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
Changes since v5: 
* Slight change in commit log.
* Added Reviewed-by.

 arch/arm64/include/asm/insn.h          | 12 ++++++++++++
 arch/arm64/kernel/insn.c               | 14 ++++++++++++--
 arch/arm64/kernel/probes/decode-insn.c |  4 +++-
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 0bc46149e491..324234068fee 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -359,9 +359,21 @@ __AARCH64_INSN_FUNCS(brk,	0xFFE0001F, 0xD4200000)
 __AARCH64_INSN_FUNCS(exception,	0xFF000000, 0xD4000000)
 __AARCH64_INSN_FUNCS(hint,	0xFFFFF01F, 0xD503201F)
 __AARCH64_INSN_FUNCS(br,	0xFFFFFC1F, 0xD61F0000)
+__AARCH64_INSN_FUNCS(braaz,	0xFFFFFC1F, 0xD61F081F)
+__AARCH64_INSN_FUNCS(brabz,	0xFFFFFC1F, 0xD61F0C1F)
+__AARCH64_INSN_FUNCS(braa,	0xFFFFFC00, 0xD71F0800)
+__AARCH64_INSN_FUNCS(brab,	0xFFFFFC00, 0xD71F0C00)
 __AARCH64_INSN_FUNCS(blr,	0xFFFFFC1F, 0xD63F0000)
+__AARCH64_INSN_FUNCS(blraaz,	0xFFFFFC1F, 0xD63F081F)
+__AARCH64_INSN_FUNCS(blrabz,	0xFFFFFC1F, 0xD63F0C1F)
+__AARCH64_INSN_FUNCS(blraa,	0xFFFFFC00, 0xD73F0800)
+__AARCH64_INSN_FUNCS(blrab,	0xFFFFFC00, 0xD73F0C00)
 __AARCH64_INSN_FUNCS(ret,	0xFFFFFC1F, 0xD65F0000)
+__AARCH64_INSN_FUNCS(retaa,	0xFFFFFFFF, 0xD65F0BFF)
+__AARCH64_INSN_FUNCS(retab,	0xFFFFFFFF, 0xD65F0FFF)
 __AARCH64_INSN_FUNCS(eret,	0xFFFFFFFF, 0xD69F03E0)
+__AARCH64_INSN_FUNCS(eretaa,	0xFFFFFFFF, 0xD69F0BFF)
+__AARCH64_INSN_FUNCS(eretab,	0xFFFFFFFF, 0xD69F0FFF)
 __AARCH64_INSN_FUNCS(mrs,	0xFFF00000, 0xD5300000)
 __AARCH64_INSN_FUNCS(msr_imm,	0xFFF8F01F, 0xD500401F)
 __AARCH64_INSN_FUNCS(msr_reg,	0xFFF00000, 0xD5100000)
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index a107375005bc..27d5a52d5058 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -176,7 +176,7 @@ bool __kprobes aarch64_insn_uses_literal(u32 insn)
 
 bool __kprobes aarch64_insn_is_branch(u32 insn)
 {
-	/* b, bl, cb*, tb*, b.cond, br, blr */
+	/* b, bl, cb*, tb*, ret*, b.cond, br*, blr* */
 
 	return aarch64_insn_is_b(insn) ||
 		aarch64_insn_is_bl(insn) ||
@@ -185,9 +185,19 @@ bool __kprobes aarch64_insn_is_branch(u32 insn)
 		aarch64_insn_is_tbz(insn) ||
 		aarch64_insn_is_tbnz(insn) ||
 		aarch64_insn_is_ret(insn) ||
+		aarch64_insn_is_retaa(insn) ||
+		aarch64_insn_is_retab(insn) ||
 		aarch64_insn_is_br(insn) ||
 		aarch64_insn_is_blr(insn) ||
-		aarch64_insn_is_bcond(insn);
+		aarch64_insn_is_bcond(insn) ||
+		aarch64_insn_is_braaz(insn) ||
+		aarch64_insn_is_brabz(insn) ||
+		aarch64_insn_is_braa(insn) ||
+		aarch64_insn_is_brab(insn) ||
+		aarch64_insn_is_blraaz(insn) ||
+		aarch64_insn_is_blrabz(insn) ||
+		aarch64_insn_is_blraa(insn) ||
+		aarch64_insn_is_blrab(insn);
 }
 
 int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index 263d5fba4c8a..f9eb8210d6d3 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -29,7 +29,9 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 		    aarch64_insn_is_msr_imm(insn) ||
 		    aarch64_insn_is_msr_reg(insn) ||
 		    aarch64_insn_is_exception(insn) ||
-		    aarch64_insn_is_eret(insn))
+		    aarch64_insn_is_eret(insn) ||
+		    aarch64_insn_is_eretaa(insn) ||
+		    aarch64_insn_is_eretab(insn))
 			return false;
 
 		/*
-- 
2.17.1



* [PATCH v6 2/6] arm64: traps: Allow force_signal_inject to pass esr error code
From: Amit Daniel Kachhap @ 2020-09-04 10:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

Some error signals need to pass the proper ARM ESR error code to userspace
so that the cause of the signal can be better identified. The function
force_signal_inject is therefore extended to take this as a parameter.
Existing callers are not affected by this change, as they pass 0.
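
A simplified standalone sketch of the change (stand-in types and stubs,
not the kernel code; see the diff below for the real signatures): the
new err argument is forwarded to the notifier unchanged, and all
pre-existing callers pass 0, so their behaviour is identical.

#include <stdio.h>

/* stand-in for arm64_notify_die(), which takes err as its last argument */
static void notify_die(int signal, int code, unsigned long addr,
		       unsigned int err)
{
	printf("sig=%d code=%d addr=%#lx esr=%#x\n", signal, code, addr, err);
}

static void force_signal_inject(int signal, int code, unsigned long address,
				unsigned int err)
{
	notify_die(signal, code, address, err);
}

int main(void)
{
	force_signal_inject(4, 1, 0x400100UL, 0); /* existing caller: err is 0 */
	/* a new caller with a meaningful ESR (value made up for illustration) */
	force_signal_inject(4, 2, 0x400200UL, 0x72000000);
	return 0;
}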

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
Changes since v5: 
* Added Reviewed-by.

 arch/arm64/include/asm/traps.h |  2 +-
 arch/arm64/kernel/fpsimd.c     |  4 ++--
 arch/arm64/kernel/traps.c      | 14 +++++++-------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
index cee5928e1b7d..d96dc2c7c09d 100644
--- a/arch/arm64/include/asm/traps.h
+++ b/arch/arm64/include/asm/traps.h
@@ -24,7 +24,7 @@ struct undef_hook {
 
 void register_undef_hook(struct undef_hook *hook);
 void unregister_undef_hook(struct undef_hook *hook);
-void force_signal_inject(int signal, int code, unsigned long address);
+void force_signal_inject(int signal, int code, unsigned long address, unsigned int err);
 void arm64_notify_segfault(unsigned long addr);
 void arm64_force_sig_fault(int signo, int code, void __user *addr, const char *str);
 void arm64_force_sig_mceerr(int code, void __user *addr, short lsb, const char *str);
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 55c8f3ec6705..77484359d44a 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -312,7 +312,7 @@ static void fpsimd_save(void)
 				 * re-enter user with corrupt state.
 				 * There's no way to recover, so kill it:
 				 */
-				force_signal_inject(SIGKILL, SI_KERNEL, 0);
+				force_signal_inject(SIGKILL, SI_KERNEL, 0, 0);
 				return;
 			}
 
@@ -936,7 +936,7 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 {
 	/* Even if we chose not to use SVE, the hardware could still trap: */
 	if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) {
-		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 		return;
 	}
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 13ebd5ca2070..29fd00fe94f2 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -412,7 +412,7 @@ static int call_undef_hook(struct pt_regs *regs)
 	return fn ? fn(regs, instr) : 1;
 }
 
-void force_signal_inject(int signal, int code, unsigned long address)
+void force_signal_inject(int signal, int code, unsigned long address, unsigned int err)
 {
 	const char *desc;
 	struct pt_regs *regs = current_pt_regs();
@@ -438,7 +438,7 @@ void force_signal_inject(int signal, int code, unsigned long address)
 		signal = SIGKILL;
 	}
 
-	arm64_notify_die(desc, regs, signal, code, (void __user *)address, 0);
+	arm64_notify_die(desc, regs, signal, code, (void __user *)address, err);
 }
 
 /*
@@ -455,7 +455,7 @@ void arm64_notify_segfault(unsigned long addr)
 		code = SEGV_ACCERR;
 	mmap_read_unlock(current->mm);
 
-	force_signal_inject(SIGSEGV, code, addr);
+	force_signal_inject(SIGSEGV, code, addr, 0);
 }
 
 void do_undefinstr(struct pt_regs *regs)
@@ -468,14 +468,14 @@ void do_undefinstr(struct pt_regs *regs)
 		return;
 
 	BUG_ON(!user_mode(regs));
-	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 }
 NOKPROBE_SYMBOL(do_undefinstr);
 
 void do_bti(struct pt_regs *regs)
 {
 	BUG_ON(!user_mode(regs));
-	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 }
 NOKPROBE_SYMBOL(do_bti);
 
@@ -528,7 +528,7 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
 		__user_cache_maint("ic ivau", address, ret);
 		break;
 	default:
-		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 		return;
 	}
 
@@ -581,7 +581,7 @@ static void mrs_handler(unsigned int esr, struct pt_regs *regs)
 	sysreg = esr_sys64_to_sysreg(esr);
 
 	if (do_emulate_mrs(regs, sysreg, rt) != 0)
-		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 }
 
 static void wfi_handler(unsigned int esr, struct pt_regs *regs)
-- 
2.17.1



* [PATCH v6 3/6] arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements
From: Amit Daniel Kachhap @ 2020-09-04 10:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

Some Armv8.3 Pointer Authentication enhancements have been introduced
which are mandatory for Armv8.6 and optional for Armv8.3. These features
are:

* ARMv8.3-PAuth2 - An enhanced PAC generation logic is added which makes
  it harder to recover the correct PAC value of an authenticated pointer.

* ARMv8.3-FPAC - A fault is now generated when a ptrauth authentication
  instruction fails to authenticate the PAC present in the address. This
  is different from the earlier behaviour, where such failures just added
  an error code in the top byte and waited for a subsequent load/store to
  abort. The ptrauth instructions which may cause this fault are autiasp,
  retaa, etc.

The above features are now represented by additional configurations
for the Address Authentication cpufeature and a new ESR exception class.

A userspace fault received in the kernel due to ARMv8.3-FPAC is treated
as an illegal instruction, and hence the signal SIGILL is injected with
ILL_ILLOPN as the signal code. Note that this is different from earlier
ARMv8.3 ptrauth, where the signal SIGSEGV was issued for pointer
authentication failures. An in-kernel PAC fault causes the kernel to
crash.
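
With this change, userspace can recognise an FPAC-style authentication
failure from the SIGILL/ILL_ILLOPN combination. A minimal sketch of such
a handler (illustrative userspace code, not part of this series):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void sigill_handler(int sig, siginfo_t *info, void *uc)
{
	(void)sig;
	(void)uc;
	if (info->si_code == ILL_ILLOPN) /* pointer authentication failure */
		fprintf(stderr, "ptrauth failure at %p\n", info->si_addr);
	_exit(1);
}

int main(void)
{
	struct sigaction sa = { 0 };

	sa.sa_flags = SA_SIGINFO;
	sa.sa_sigaction = sigill_handler;
	sigaction(SIGILL, &sa, NULL);
	/* ... run code that may fail authentication, e.g. a retaa with a
	 * corrupted PAC ... */
	return 0;
}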

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
Changes since v5: 
* Modified comment as suggested by Dave.
* Added Reviewed-by.

 arch/arm64/include/asm/esr.h       |  4 +++-
 arch/arm64/include/asm/exception.h |  1 +
 arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
 arch/arm64/kernel/entry-common.c   | 21 +++++++++++++++++++++
 arch/arm64/kernel/traps.c          | 12 ++++++++++++
 5 files changed, 53 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 035003acfa87..22c81f1edda2 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -35,7 +35,9 @@
 #define ESR_ELx_EC_SYS64	(0x18)
 #define ESR_ELx_EC_SVE		(0x19)
 #define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
-/* Unallocated EC: 0x1b - 0x1E */
+/* Unallocated EC: 0x1B */
+#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
+/* Unallocated EC: 0x1D - 0x1E */
 #define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
 #define ESR_ELx_EC_IABT_LOW	(0x20)
 #define ESR_ELx_EC_IABT_CUR	(0x21)
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 7577a754d443..99b9383cd036 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
+void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 554a7e8ecb07..b738bc793369 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -636,14 +636,22 @@
 #define ID_AA64ISAR1_APA_SHIFT		4
 #define ID_AA64ISAR1_DPB_SHIFT		0
 
-#define ID_AA64ISAR1_APA_NI		0x0
-#define ID_AA64ISAR1_APA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_API_NI		0x0
-#define ID_AA64ISAR1_API_IMP_DEF	0x1
-#define ID_AA64ISAR1_GPA_NI		0x0
-#define ID_AA64ISAR1_GPA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_GPI_NI		0x0
-#define ID_AA64ISAR1_GPI_IMP_DEF	0x1
+#define ID_AA64ISAR1_APA_NI			0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_APA_ARCH_EPAC		0x2
+#define ID_AA64ISAR1_APA_ARCH_EPAC2		0x3
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_API_NI			0x0
+#define ID_AA64ISAR1_API_IMP_DEF		0x1
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC		0x2
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2		0x3
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_GPA_NI			0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_GPI_NI			0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF		0x1
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index d3be9dbf5490..43d4c329775f 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
 }
 NOKPROBE_SYMBOL(el1_dbg);
 
+static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	local_daif_inherit(regs);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el1_fpac);
+
 asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el1_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el1_fpac(regs, esr);
+		break;
 	default:
 		el1_inv(regs, esr);
 	}
@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_svc);
 
+static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el0_fpac);
+
 asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el0_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el0_fpac(regs, esr);
+		break;
 	default:
 		el0_inv(regs, esr);
 	}
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 29fd00fe94f2..b24f81197a68 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -479,6 +479,17 @@ void do_bti(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(do_bti);
 
+void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr)
+{
+	/*
+	 * Unexpected FPAC exception or pointer authentication failure in
+	 * the kernel: kill the task before it does any more harm.
+	 */
+	BUG_ON(!user_mode(regs));
+	force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
+}
+NOKPROBE_SYMBOL(do_ptrauth_fault);
+
 #define __user_cache_maint(insn, address, res)			\
 	if (address >= user_addr_max()) {			\
 		res = -EFAULT;					\
@@ -775,6 +786,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
 	[ESR_ELx_EC_SVE]		= "SVE",
 	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
+	[ESR_ELx_EC_FPAC]		= "FPAC",
 	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
 	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
 	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
-- 
2.17.1



* [PATCH v6 4/6] arm64: cpufeature: Modify address authentication cpufeature to exact
From: Amit Daniel Kachhap @ 2020-09-04 10:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

The current address authentication cpufeature levels are set as LOWER_SAFE,
which is not compatible with the different configurations added for the
Armv8.3 ptrauth enhancements, as the different levels have different
behaviour and there is no tunable to enable the lower safe versions. This
is rectified by setting those cpufeature types as EXACT.

The current cpufeature framework also does not prevent the booting of
non-exact secondary cpus, but rather marks them as tainted. As a workaround
this is fixed by replacing the generic match handler with a new handler
specific to ptrauth.

After this change, if there is any variation in the ptrauth configuration
of a secondary cpu from the boot cpu, then the mismatched cpus are parked
in an infinite loop.
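
The underlying check reduces to comparing a 4-bit ID register field
between the boot CPU and each secondary. A standalone sketch of that
comparison, using the API field at bits [11:8] of ID_AA64ISAR1_EL1
(helper names here are illustrative, not the kernel's):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ID_AA64ISAR1_API_SHIFT	8

static unsigned int extract_field(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & 0xf; /* ID register fields are 4 bits wide */
}

/* exact-match policy: a secondary passes only if its field equals the
 * boot CPU's value and both meet the feature's minimum */
static bool secondary_matches(uint64_t boot_reg, uint64_t sec_reg,
			      unsigned int shift, unsigned int min)
{
	unsigned int boot = extract_field(boot_reg, shift);
	unsigned int sec = extract_field(sec_reg, shift);

	return sec >= min && sec == boot;
}

int main(void)
{
	/* register values from the crash log below: the API field is 0x4
	 * on the boot CPU but 0x1 on CPU4, so the secondary is rejected */
	printf("CPU4 ok: %d\n",
	       secondary_matches(0x11111110211402ULL, 0x11111110211102ULL,
				 ID_AA64ISAR1_API_SHIFT, 1)); /* 0 */
	return 0;
}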

The following ptrauth crash log is observed on the Arm fast model with
mismatched cpus without this fix:

 CPU features: SANITY CHECK: Unexpected variation in SYS_ID_AA64ISAR1_EL1. Boot CPU: 0x11111110211402, CPU4: 0x11111110211102
 CPU features: Unsupported CPU feature variation detected.
 GICv3: CPU4: found redistributor 100 region 0:0x000000002f180000
 CPU4: Booted secondary processor 0x0000000100 [0x410fd0f0]
 Unable to handle kernel paging request at virtual address bfff800010dadf3c
 Mem abort info:
   ESR = 0x86000004
   EC = 0x21: IABT (current EL), IL = 32 bits
   SET = 0, FnV = 0
   EA = 0, S1PTW = 0
 [bfff800010dadf3c] address between user and kernel address ranges
 Internal error: Oops: 86000004 [#1] PREEMPT SMP
 Modules linked in:
 CPU: 4 PID: 29 Comm: migration/4 Tainted: G S                5.8.0-rc4-00005-ge658591d66d1-dirty #158
 Hardware name: Foundation-v8A (DT)
 pstate: 60000089 (nZCv daIf -PAN -UAO BTYPE=--)
 pc : 0xbfff800010dadf3c
 lr : __schedule+0x2b4/0x5a8
 sp : ffff800012043d70
 x29: ffff800012043d70 x28: 0080000000000000
 x27: ffff800011cbe000 x26: ffff00087ad37580
 x25: ffff00087ad37000 x24: ffff800010de7d50
 x23: ffff800011674018 x22: 0784800010dae2a8
 x21: ffff00087ad37000 x20: ffff00087acb8000
 x19: ffff00087f742100 x18: 0000000000000030
 x17: 0000000000000000 x16: 0000000000000000
 x15: ffff800011ac1000 x14: 00000000000001bd
 x13: 0000000000000000 x12: 0000000000000000
 x11: 0000000000000000 x10: 71519a147ddfeb82
 x9 : 825d5ec0fb246314 x8 : ffff00087ad37dd8
 x7 : 0000000000000000 x6 : 00000000fffedb0e
 x5 : 00000000ffffffff x4 : 0000000000000000
 x3 : 0000000000000028 x2 : ffff80086e11e000
 x1 : ffff00087ad37000 x0 : ffff00087acdc600
 Call trace:
  0xbfff800010dadf3c
  schedule+0x78/0x110
  schedule_preempt_disabled+0x24/0x40
  __kthread_parkme+0x68/0xd0
  kthread+0x138/0x160
  ret_from_fork+0x10/0x34
 Code: bad PC value

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
[Suzuki: Introduce new matching function for address authentication]
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Changes since v5: 
* Modified comment as suggested by Suzuki.
* Added Reviewed-by.

 arch/arm64/kernel/cpufeature.c | 46 +++++++++++++++++++++++++++-------
 1 file changed, 37 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6424584be01e..4bb3f2b2ffed 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -197,9 +197,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
@@ -1648,11 +1648,39 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
-			     int __unused)
+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
 {
-	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+	int boot_val, sec_val;
+
+	/* We don't expect to be called with SCOPE_SYSTEM */
+	WARN_ON(scope == SCOPE_SYSTEM);
+	/*
+	 * The ptr-auth feature levels are not intercompatible with lower
+	 * levels. Hence we must match ptr-auth feature level of the secondary
+	 * CPUs with that of the boot CPU. The level of boot cpu is fetched
+	 * from the sanitised register whereas direct register read is done for
+	 * the secondary CPUs.
+	 * The sanitised feature state is guaranteed to match that of the
+	 * boot CPU as a mismatched secondary CPU is parked before it gets
+	 * a chance to update the state, along with the capability.
+	 */
+	boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
+					       entry->field_pos, entry->sign);
+	if (scope & SCOPE_BOOT_CPU) {
+		return boot_val >= entry->min_field_value;
+	} else if (scope & SCOPE_LOCAL_CPU) {
+		sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
+						      entry->field_pos, entry->sign);
+		return (sec_val >= entry->min_field_value) && (sec_val == boot_val);
+	}
+	return false;
+}
+
+static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
+				     int scope)
+{
+	return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
+	       has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
 }
 
 static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
@@ -2021,7 +2049,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
-		.matches = has_cpuid_feature,
+		.matches = has_address_auth_cpucap,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -2031,12 +2059,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
-		.matches = has_cpuid_feature,
+		.matches = has_address_auth_cpucap,
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
 		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
-		.matches = has_address_auth,
+		.matches = has_address_auth_metacap,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
-- 
2.17.1



* [PATCH v6 5/6] arm64: kprobe: disable probe of fault prone ptrauth instruction
From: Amit Daniel Kachhap @ 2020-09-04 10:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

With the addition of the ARMv8.3-FPAC feature, probing the authenticate
ptrauth instructions (AUT*) may cause a ptrauth fault exception in case of
an authentication failure, so they cannot be safely single-stepped.

Hence probing of the authenticate instructions is disallowed, but the
corresponding pac ptrauth instructions (PAC*) are not affected and can
still be probed. Also, AUT* instructions do not make sense at function
entry points, so most realistic probes would be unaffected by this change.
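
For background, the PAC*/AUT* hint forms all share the HINT encoding
(mask 0xFFFFF01F, value 0xD503201F, per insn.h) and differ only in the
CRm:op2 immediate in bits [11:5], which is effectively what the whitelist
switch discriminates on. A standalone sketch of that decode (illustrative,
not kernel code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool is_hint(uint32_t insn)
{
	return (insn & 0xFFFFF01F) == 0xD503201F;
}

static unsigned int hint_imm(uint32_t insn)
{
	return (insn >> 5) & 0x7f; /* CRm:op2, bits [11:5] */
}

int main(void)
{
	uint32_t nop = 0xD503201F; /* HINT #0 */

	if (is_hint(nop))
		printf("hint immediate: %u\n", hint_imm(nop)); /* 0 */
	return 0;
}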

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
---
Changes since v5: 
* None.

 arch/arm64/kernel/insn.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 27d5a52d5058..e4bdc9186e58 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
 	case AARCH64_INSN_HINT_XPACLRI:
 	case AARCH64_INSN_HINT_PACIA_1716:
 	case AARCH64_INSN_HINT_PACIB_1716:
-	case AARCH64_INSN_HINT_AUTIA_1716:
-	case AARCH64_INSN_HINT_AUTIB_1716:
 	case AARCH64_INSN_HINT_PACIAZ:
 	case AARCH64_INSN_HINT_PACIASP:
 	case AARCH64_INSN_HINT_PACIBZ:
 	case AARCH64_INSN_HINT_PACIBSP:
-	case AARCH64_INSN_HINT_AUTIAZ:
-	case AARCH64_INSN_HINT_AUTIASP:
-	case AARCH64_INSN_HINT_AUTIBZ:
-	case AARCH64_INSN_HINT_AUTIBSP:
 	case AARCH64_INSN_HINT_BTI:
 	case AARCH64_INSN_HINT_BTIC:
 	case AARCH64_INSN_HINT_BTIJ:
-- 
2.17.1



* [PATCH v6 6/6] arm64: kprobe: clarify the comment of steppable hint instructions
From: Amit Daniel Kachhap @ 2020-09-04 10:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin

The existing comment about steppable hint instructions is incomplete
and only describes NOP instructions as steppable. As the function
aarch64_insn_is_steppable_hint allows all whitelisted instructions
to be probed, the comment is updated to reflect this.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
---
Changes since v5: 
* None.

 arch/arm64/kernel/probes/decode-insn.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index f9eb8210d6d3..288114d54fb5 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -44,8 +44,10 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 			     != AARCH64_INSN_SPCLREG_DAIF;
 
 		/*
-		 * The HINT instruction is is problematic when single-stepping,
-		 * except for the NOP case.
+		 * The HINT instruction is steppable only if it is in the
+		 * whitelist; all other HINT instructions are blocked from
+		 * single-stepping, as they may cause an exception or other
+		 * unintended behaviour.
 		 */
 		if (aarch64_insn_is_hint(insn))
 			return aarch64_insn_is_steppable_hint(insn);
-- 
2.17.1



* Re: [PATCH v6 1/6] arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
From: Will Deacon @ 2020-09-07 21:45 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Vincenzo Frascino, Dave Martin, linux-arm-kernel

On Fri, Sep 04, 2020 at 04:12:04PM +0530, Amit Daniel Kachhap wrote:
> Currently the ARMv8.3-PAuth combined branch instructions (braa, retaa,
> etc.) are not simulated for out-of-line execution with a handler. Hence
> uprobing such instructions leads to kernel warnings in a loop, as they are
> not explicitly checked and fall into the INSN_GOOD category. Other combined
> instructions like LDRAA and LDRAB can be probed.
> 
> The issue with the combined branch instructions is fixed by adding
> definitions for all such instructions and rejecting their probes.
> 
> Warning log:
>  WARNING: CPU: 5 PID: 249 at arch/arm64/kernel/probes/uprobes.c:182 uprobe_single_step_handler+0x34/0x50
>  Modules linked in:
>  CPU: 5 PID: 249 Comm: func Tainted: G        W         5.8.0-rc4-00005-ge658591d66d1-dirty #160
>  Hardware name: Foundation-v8A (DT)
>  pstate: 204003c9 (nzCv DAIF +PAN -UAO BTYPE=--)
>  pc : uprobe_single_step_handler+0x34/0x50
>  lr : single_step_handler+0x70/0xf8
>  sp : ffff800012afbe30
>  x29: ffff800012afbe30 x28: ffff000879f00ec0
>  x27: 0000000000000000 x26: 0000000000000000
>  x25: 0000000000000000 x24: 0000000000000000
>  x23: 0000000060001000 x22: 00000000cb000022
>  x21: ffff800011fc5a68 x20: ffff800012afbec0
>  x19: ffff800011fc86c0 x18: 0000000000000000
>  x17: 0000000000000000 x16: 0000000000000000
>  x15: 0000000000000000 x14: 0000000000000000
>  x13: 0000000000000000 x12: 0000000000000000
>  x11: 0000000000000000 x10: 0000000000000000
>  x9 : ffff800010085d50 x8 : 0000000000000000
>  x7 : 0000000000000000 x6 : ffff800011fba9c0
>  x5 : ffff800011fba000 x4 : ffff800012283070
>  x3 : ffff8000100a78e0 x2 : 00000000004005f0
>  x1 : 0000fffffffff008 x0 : ffff800012afbec0
>  Call trace:
>   uprobe_single_step_handler+0x34/0x50
>   single_step_handler+0x70/0xf8
>   do_debug_exception+0xb8/0x130
>   el0_sync_handler+0x7c/0x188
>   el0_sync+0x158/0x180
> 
> Fixes: 74afda4016a7 ("arm64: compile the kernel with ptrauth return address signing")
> Fixes: 04ca3204fa09 ("arm64: enable pointer authentication")
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
> ---
> Changes since v5: 
> * Slight change in commit log.
> * Added Reviewed-by.
> 
>  arch/arm64/include/asm/insn.h          | 12 ++++++++++++
>  arch/arm64/kernel/insn.c               | 14 ++++++++++++--
>  arch/arm64/kernel/probes/decode-insn.c |  4 +++-
>  3 files changed, 27 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> index 0bc46149e491..324234068fee 100644
> --- a/arch/arm64/include/asm/insn.h
> +++ b/arch/arm64/include/asm/insn.h
> @@ -359,9 +359,21 @@ __AARCH64_INSN_FUNCS(brk,	0xFFE0001F, 0xD4200000)
>  __AARCH64_INSN_FUNCS(exception,	0xFF000000, 0xD4000000)
>  __AARCH64_INSN_FUNCS(hint,	0xFFFFF01F, 0xD503201F)
>  __AARCH64_INSN_FUNCS(br,	0xFFFFFC1F, 0xD61F0000)
> +__AARCH64_INSN_FUNCS(braaz,	0xFFFFFC1F, 0xD61F081F)
> +__AARCH64_INSN_FUNCS(brabz,	0xFFFFFC1F, 0xD61F0C1F)
> +__AARCH64_INSN_FUNCS(braa,	0xFFFFFC00, 0xD71F0800)
> +__AARCH64_INSN_FUNCS(brab,	0xFFFFFC00, 0xD71F0C00)

When do we need to distinguish these variants? Can we modify the mask/value
pair so that we catch bra* in one go? That would match how they are
documented in the Arm ARM.
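
For illustration, one way to combine them: bit 24 selects the register
form (braa/brab) over the zero form (braaz/brabz), and bit 10 selects
the B key over the A key, so clearing both from the mask and requiring
bit 11 gives a single entry such as:

__AARCH64_INSN_FUNCS(bra,	0xFEFFF800, 0xD61F0800)

A quick standalone sanity check of that pair (a sketch only; the exact
encodings should be double-checked against the Arm ARM):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* braaz, brabz, braa, brab encodings from the patch above */
	uint32_t variants[] = { 0xD61F081F, 0xD61F0C1F,
				0xD71F0800, 0xD71F0C00 };
	uint32_t mask = 0xFEFFF800, val = 0xD61F0800;

	for (int i = 0; i < 4; i++) /* prints 1 four times */
		printf("variant %d matches: %d\n", i,
		       (variants[i] & mask) == val);
	/* plain br (0xD61F0000) must not match: bit 11 is clear there */
	printf("br matches: %d\n", (0xD61F0000 & mask) == val); /* 0 */
	return 0;
}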

>  __AARCH64_INSN_FUNCS(blr,	0xFFFFFC1F, 0xD63F0000)
> +__AARCH64_INSN_FUNCS(blraaz,	0xFFFFFC1F, 0xD63F081F)
> +__AARCH64_INSN_FUNCS(blrabz,	0xFFFFFC1F, 0xD63F0C1F)
> +__AARCH64_INSN_FUNCS(blraa,	0xFFFFFC00, 0xD73F0800)
> +__AARCH64_INSN_FUNCS(blrab,	0xFFFFFC00, 0xD73F0C00)

Same here for blra*

>  __AARCH64_INSN_FUNCS(ret,	0xFFFFFC1F, 0xD65F0000)
> +__AARCH64_INSN_FUNCS(retaa,	0xFFFFFFFF, 0xD65F0BFF)
> +__AARCH64_INSN_FUNCS(retab,	0xFFFFFFFF, 0xD65F0FFF)
>  __AARCH64_INSN_FUNCS(eret,	0xFFFFFFFF, 0xD69F03E0)
> +__AARCH64_INSN_FUNCS(eretaa,	0xFFFFFFFF, 0xD69F0BFF)
> +__AARCH64_INSN_FUNCS(eretab,	0xFFFFFFFF, 0xD69F0FFF)

... and here for ereta*.

Will


* Re: [PATCH v6 1/6] arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
From: Dave Martin @ 2020-09-08 10:51 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino,
	linux-arm-kernel

On Mon, Sep 07, 2020 at 10:45:51PM +0100, Will Deacon wrote:
> On Fri, Sep 04, 2020 at 04:12:04PM +0530, Amit Daniel Kachhap wrote:
> > Currently the ARMv8.3-PAuth combined branch instructions (braa, retaa,
> > etc.) are not simulated for out-of-line execution with a handler. Hence
> > uprobing such instructions leads to kernel warnings in a loop, as they are
> > not explicitly checked and fall into the INSN_GOOD category. Other combined
> > instructions like LDRAA and LDRAB can be probed.
> > 
> > The issue with the combined branch instructions is fixed by adding
> > definitions for all such instructions and rejecting their probes.
> > 
> > Warning log:
> >  WARNING: CPU: 5 PID: 249 at arch/arm64/kernel/probes/uprobes.c:182 uprobe_single_step_handler+0x34/0x50
> >  Modules linked in:
> >  CPU: 5 PID: 249 Comm: func Tainted: G        W         5.8.0-rc4-00005-ge658591d66d1-dirty #160
> >  Hardware name: Foundation-v8A (DT)
> >  pstate: 204003c9 (nzCv DAIF +PAN -UAO BTYPE=--)
> >  pc : uprobe_single_step_handler+0x34/0x50
> >  lr : single_step_handler+0x70/0xf8
> >  sp : ffff800012afbe30
> >  x29: ffff800012afbe30 x28: ffff000879f00ec0
> >  x27: 0000000000000000 x26: 0000000000000000
> >  x25: 0000000000000000 x24: 0000000000000000
> >  x23: 0000000060001000 x22: 00000000cb000022
> >  x21: ffff800011fc5a68 x20: ffff800012afbec0
> >  x19: ffff800011fc86c0 x18: 0000000000000000
> >  x17: 0000000000000000 x16: 0000000000000000
> >  x15: 0000000000000000 x14: 0000000000000000
> >  x13: 0000000000000000 x12: 0000000000000000
> >  x11: 0000000000000000 x10: 0000000000000000
> >  x9 : ffff800010085d50 x8 : 0000000000000000
> >  x7 : 0000000000000000 x6 : ffff800011fba9c0
> >  x5 : ffff800011fba000 x4 : ffff800012283070
> >  x3 : ffff8000100a78e0 x2 : 00000000004005f0
> >  x1 : 0000fffffffff008 x0 : ffff800012afbec0
> >  Call trace:
> >   uprobe_single_step_handler+0x34/0x50
> >   single_step_handler+0x70/0xf8
> >   do_debug_exception+0xb8/0x130
> >   el0_sync_handler+0x7c/0x188
> >   el0_sync+0x158/0x180
> > 
> > Fixes: 74afda4016a7 ("arm64: compile the kernel with ptrauth return address signing")
> > Fixes: 04ca3204fa09 ("arm64: enable pointer authentication")
> > Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> > Reviewed-by: Dave Martin <Dave.Martin@arm.com>
> > ---
> > Changes since v5: 
> > * Slight change in commit log.
> > * Added Reviewed-by.
> > 
> >  arch/arm64/include/asm/insn.h          | 12 ++++++++++++
> >  arch/arm64/kernel/insn.c               | 14 ++++++++++++--
> >  arch/arm64/kernel/probes/decode-insn.c |  4 +++-
> >  3 files changed, 27 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> > index 0bc46149e491..324234068fee 100644
> > --- a/arch/arm64/include/asm/insn.h
> > +++ b/arch/arm64/include/asm/insn.h
> > @@ -359,9 +359,21 @@ __AARCH64_INSN_FUNCS(brk,	0xFFE0001F, 0xD4200000)
> >  __AARCH64_INSN_FUNCS(exception,	0xFF000000, 0xD4000000)
> >  __AARCH64_INSN_FUNCS(hint,	0xFFFFF01F, 0xD503201F)
> >  __AARCH64_INSN_FUNCS(br,	0xFFFFFC1F, 0xD61F0000)
> > +__AARCH64_INSN_FUNCS(braaz,	0xFFFFFC1F, 0xD61F081F)
> > +__AARCH64_INSN_FUNCS(brabz,	0xFFFFFC1F, 0xD61F0C1F)
> > +__AARCH64_INSN_FUNCS(braa,	0xFFFFFC00, 0xD71F0800)
> > +__AARCH64_INSN_FUNCS(brab,	0xFFFFFC00, 0xD71F0C00)
> 
> When do we need to distinguish these variants? Can we modify the mask/value
> pair so that we catch bra* in one go? That would match how they are
> documented in the Arm ARM.
> 
> >  __AARCH64_INSN_FUNCS(blr,	0xFFFFFC1F, 0xD63F0000)
> > +__AARCH64_INSN_FUNCS(blraaz,	0xFFFFFC1F, 0xD63F081F)
> > +__AARCH64_INSN_FUNCS(blrabz,	0xFFFFFC1F, 0xD63F0C1F)
> > +__AARCH64_INSN_FUNCS(blraa,	0xFFFFFC00, 0xD73F0800)
> > +__AARCH64_INSN_FUNCS(blrab,	0xFFFFFC00, 0xD73F0C00)
> 
> Same here for blra*
> 
> >  __AARCH64_INSN_FUNCS(ret,	0xFFFFFC1F, 0xD65F0000)
> > +__AARCH64_INSN_FUNCS(retaa,	0xFFFFFFFF, 0xD65F0BFF)
> > +__AARCH64_INSN_FUNCS(retab,	0xFFFFFFFF, 0xD65F0FFF)
> >  __AARCH64_INSN_FUNCS(eret,	0xFFFFFFFF, 0xD69F03E0)
> > +__AARCH64_INSN_FUNCS(eretaa,	0xFFFFFFFF, 0xD69F0BFF)
> > +__AARCH64_INSN_FUNCS(eretab,	0xFFFFFFFF, 0xD69F0FFF)
> 
> ... and here for ereta*.

From my side:

I thought about this myself, but concluded that it may be easier to
maintain if we avoid lumping instructions together.

Some of these cases are probably trivial enough that they can be merged
at low risk, though, and we also have some other merged instruction
classes here already.  Avoiding pointless distinctions here will also
help the efficiency of code that uses these definitions in some cases,
though I don't have a feel for how significant the difference will
be -- probably not very.

If we become concerned about performance, we would probably want a
bigger overhaul of the affected code anyway.

I guess I'm happy either way.

Cheers
---Dave


* Re: [PATCH v6 1/6] arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
From: Will Deacon @ 2020-09-11 13:55 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino,
	linux-arm-kernel

On Tue, Sep 08, 2020 at 11:51:08AM +0100, Dave Martin wrote:
> On Mon, Sep 07, 2020 at 10:45:51PM +0100, Will Deacon wrote:
> > On Fri, Sep 04, 2020 at 04:12:04PM +0530, Amit Daniel Kachhap wrote:
> > > diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> > > index 0bc46149e491..324234068fee 100644
> > > --- a/arch/arm64/include/asm/insn.h
> > > +++ b/arch/arm64/include/asm/insn.h
> > > @@ -359,9 +359,21 @@ __AARCH64_INSN_FUNCS(brk,	0xFFE0001F, 0xD4200000)
> > >  __AARCH64_INSN_FUNCS(exception,	0xFF000000, 0xD4000000)
> > >  __AARCH64_INSN_FUNCS(hint,	0xFFFFF01F, 0xD503201F)
> > >  __AARCH64_INSN_FUNCS(br,	0xFFFFFC1F, 0xD61F0000)
> > > +__AARCH64_INSN_FUNCS(braaz,	0xFFFFFC1F, 0xD61F081F)
> > > +__AARCH64_INSN_FUNCS(brabz,	0xFFFFFC1F, 0xD61F0C1F)
> > > +__AARCH64_INSN_FUNCS(braa,	0xFFFFFC00, 0xD71F0800)
> > > +__AARCH64_INSN_FUNCS(brab,	0xFFFFFC00, 0xD71F0C00)
> > 
> > When do we need to distinguish these variants? Can we modify the mask/value
> > pair so that we catch bra* in one go? That would match how they are
> > documented in the Arm ARM.
> > 
> > >  __AARCH64_INSN_FUNCS(blr,	0xFFFFFC1F, 0xD63F0000)
> > > +__AARCH64_INSN_FUNCS(blraaz,	0xFFFFFC1F, 0xD63F081F)
> > > +__AARCH64_INSN_FUNCS(blrabz,	0xFFFFFC1F, 0xD63F0C1F)
> > > +__AARCH64_INSN_FUNCS(blraa,	0xFFFFFC00, 0xD73F0800)
> > > +__AARCH64_INSN_FUNCS(blrab,	0xFFFFFC00, 0xD73F0C00)
> > 
> > Same here for blra*
> > 
> > >  __AARCH64_INSN_FUNCS(ret,	0xFFFFFC1F, 0xD65F0000)
> > > +__AARCH64_INSN_FUNCS(retaa,	0xFFFFFFFF, 0xD65F0BFF)
> > > +__AARCH64_INSN_FUNCS(retab,	0xFFFFFFFF, 0xD65F0FFF)
> > >  __AARCH64_INSN_FUNCS(eret,	0xFFFFFFFF, 0xD69F03E0)
> > > +__AARCH64_INSN_FUNCS(eretaa,	0xFFFFFFFF, 0xD69F0BFF)
> > > +__AARCH64_INSN_FUNCS(eretab,	0xFFFFFFFF, 0xD69F0FFF)
> > 
> > ... and here for ereta*.
> 
> From my side:
> 
> I thought about this myself, but I thought that this may be easier to
> maintain if we avoid lumping instructions together.

Maybe, but I'm just suggesting to lump them together in the same way as the
Arm ARM, which I think helps readability because it lines up directly with
the text.

> I guess I'm happy either way.

Ok, thanks. Amit -- can you repost the series with that change, please, and
I'll queue the lot for 5.10?

Thanks,

Will


* Re: [PATCH v6 1/6] arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
From: Amit Kachhap @ 2020-09-14  8:42 UTC (permalink / raw)
  To: Will Deacon, Dave Martin
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Mark Brown,
	James Morse, Vincenzo Frascino, linux-arm-kernel



On 9/11/20 7:25 PM, Will Deacon wrote:
> On Tue, Sep 08, 2020 at 11:51:08AM +0100, Dave Martin wrote:
>> On Mon, Sep 07, 2020 at 10:45:51PM +0100, Will Deacon wrote:
>>> On Fri, Sep 04, 2020 at 04:12:04PM +0530, Amit Daniel Kachhap wrote:
>>>> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
>>>> index 0bc46149e491..324234068fee 100644
>>>> --- a/arch/arm64/include/asm/insn.h
>>>> +++ b/arch/arm64/include/asm/insn.h
>>>> @@ -359,9 +359,21 @@ __AARCH64_INSN_FUNCS(brk,	0xFFE0001F, 0xD4200000)
>>>>   __AARCH64_INSN_FUNCS(exception,	0xFF000000, 0xD4000000)
>>>>   __AARCH64_INSN_FUNCS(hint,	0xFFFFF01F, 0xD503201F)
>>>>   __AARCH64_INSN_FUNCS(br,	0xFFFFFC1F, 0xD61F0000)
>>>> +__AARCH64_INSN_FUNCS(braaz,	0xFFFFFC1F, 0xD61F081F)
>>>> +__AARCH64_INSN_FUNCS(brabz,	0xFFFFFC1F, 0xD61F0C1F)
>>>> +__AARCH64_INSN_FUNCS(braa,	0xFFFFFC00, 0xD71F0800)
>>>> +__AARCH64_INSN_FUNCS(brab,	0xFFFFFC00, 0xD71F0C00)
>>>
>>> When do we need to distinguish these variants? Can we modify the mask/value
>>> pair so that we catch bra* in one go? That would match how they are
>>> documented in the Arm ARM.
>>>
>>>>   __AARCH64_INSN_FUNCS(blr,	0xFFFFFC1F, 0xD63F0000)
>>>> +__AARCH64_INSN_FUNCS(blraaz,	0xFFFFFC1F, 0xD63F081F)
>>>> +__AARCH64_INSN_FUNCS(blrabz,	0xFFFFFC1F, 0xD63F0C1F)
>>>> +__AARCH64_INSN_FUNCS(blraa,	0xFFFFFC00, 0xD73F0800)
>>>> +__AARCH64_INSN_FUNCS(blrab,	0xFFFFFC00, 0xD73F0C00)
>>>
>>> Same here for blra*
>>>
>>>>   __AARCH64_INSN_FUNCS(ret,	0xFFFFFC1F, 0xD65F0000)
>>>> +__AARCH64_INSN_FUNCS(retaa,	0xFFFFFFFF, 0xD65F0BFF)
>>>> +__AARCH64_INSN_FUNCS(retab,	0xFFFFFFFF, 0xD65F0FFF)
>>>>   __AARCH64_INSN_FUNCS(eret,	0xFFFFFFFF, 0xD69F03E0)
>>>> +__AARCH64_INSN_FUNCS(eretaa,	0xFFFFFFFF, 0xD69F0BFF)
>>>> +__AARCH64_INSN_FUNCS(eretab,	0xFFFFFFFF, 0xD69F0FFF)
>>>
>>> ... and here for ereta*.
>>
>>  From my side:
>>
>> I thought about this myself, but I thought that this may be easier to
>> maintain if we avoid lumping instructions together.
> 
> Maybe, but I'm just suggesting to lump them together in the same way as the
> Arm ARM, which I think helps readability because it lines up directly with
> the text.
> 
>> I guess I'm happy either way.
> 
> Ok, thanks. Amit -- can you repost the series with that change, please, and
> I'll queue the lot for 5.10?

My v8 revision posted just now has this clubbing of instructions.

Thanks,
Amit

> 
> Thanks,
> 
> Will
> 

