* [PATCH v4 0/4] arm64: add Armv8.3 pointer authentication enhancements
@ 2020-07-10  8:00 Amit Daniel Kachhap
  2020-07-10  8:00 ` [PATCH v4 1/4] arm64: ptrauth: add Armv8.3 pointer authentication enhanced features Amit Daniel Kachhap
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Amit Daniel Kachhap @ 2020-07-10  8:00 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Dave Martin

This patch series adds support for the Armv8.3 pointer authentication
enhancements, which are mandatory for Armv8.6 and optional for Armv8.3.
These features are:

 * Enhanced PAC generation algorithm (ARMv8.3-PAuth2).
 * A fault is generated when an authenticate instruction fails (ARMv8.3-FPAC).

More details can be found here [1].

Changes since v3 [2]:
* Added a new patch "arm64: kprobe: clarify the comment of steppable hint instructions"
  as suggested in the last iteration.
* Removed the ptrauth fault handler from the el0 compat handler, as pointed
  out by Dave.
* Named the new features explicitly as ARMv8.3-FPAC and ARMv8.3-PAuth2, as per
  the Armv8-A reference manual.
* Commit log cleanups.

Changes since v2 [3]:
* Dropped the patch "arm64: cpufeature: Fix the handler for address authentication"
* Added a new matching function for address authentication, as the generic
  matching function has_cpuid_feature is specific to LOWER_SAFE
  features. This was suggested by Suzuki [3].
* Disabled probing of the ptrauth authenticate instructions, in line with Mark
  Brown's merged changes whitelisting the hint instructions.

This series is based on kernel version v5.8-rc4.

Regards,
Amit

[1]: https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/arm-architecture-developments-armv8-6-a
[2]: https://lore.kernel.org/linux-arm-kernel/1592457029-18547-1-git-send-email-amit.kachhap@arm.com/
[3]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-April/723751.html


Amit Daniel Kachhap (4):
  arm64: ptrauth: add Armv8.3 pointer authentication enhanced features
  arm64: cpufeature: Modify address authentication cpufeature to exact
  arm64: kprobe: disable probe of fault prone ptrauth instruction
  arm64: kprobe: clarify the comment of steppable hint instructions

 arch/arm64/include/asm/esr.h           |  4 +-
 arch/arm64/include/asm/exception.h     |  1 +
 arch/arm64/include/asm/sysreg.h        | 24 +++++++----
 arch/arm64/kernel/cpufeature.c         | 56 +++++++++++++++++++++-----
 arch/arm64/kernel/entry-common.c       | 21 ++++++++++
 arch/arm64/kernel/insn.c               |  6 ---
 arch/arm64/kernel/probes/decode-insn.c |  6 ++-
 arch/arm64/kernel/traps.c              | 10 +++++
 8 files changed, 102 insertions(+), 26 deletions(-)

-- 
2.17.1



* [PATCH v4 1/4] arm64: ptrauth: add Armv8.3 pointer authentication enhanced features
  2020-07-10  8:00 [PATCH v4 0/4] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
@ 2020-07-10  8:00 ` Amit Daniel Kachhap
  2020-07-29 11:07   ` Dave Martin
  2020-07-10  8:00 ` [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Amit Daniel Kachhap @ 2020-07-10  8:00 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Dave Martin

Some Armv8.3 Pointer Authentication enhancements have been introduced
which are mandatory for Armv8.6 and optional for Armv8.3. These features
are:

* ARMv8.3-PAuth2 - An enhanced PAC generation logic which makes it harder
  to guess the correct PAC value of an authenticated pointer.

* ARMv8.3-FPAC - A fault is now generated when a ptrauth authentication
  instruction fails to authenticate the PAC present in the address.
  This differs from the earlier behaviour, where such a failure merely
  placed an error code in the pointer's top byte and relied on a
  subsequent load/store to abort. The ptrauth instructions which may
  cause this fault include autiasp, retaa, etc.

The above features are now represented by additional configurations
for the Address Authentication cpufeature.

A userspace fault received in the kernel due to ARMv8.3-FPAC is treated
as an illegal instruction, and hence SIGILL is injected with ILL_ILLOPN
as the signal code. Note that this is different from earlier ARMv8.3
ptrauth, where SIGSEGV was issued for pointer authentication failures.
An in-kernel PAC fault causes the kernel to crash.
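
For illustration, a rough user-space sketch of the new behaviour (not part
of this series; needs -march=armv8.3-a, and the bit flipped to corrupt the
PAC is an arbitrary choice for the example):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t ptr = (uint64_t)&ptr;
		uint64_t mod = 0;

		/* Sign the pointer, then deliberately corrupt its PAC bits. */
		asm volatile("pacia %0, %1" : "+r" (ptr) : "r" (mod));
		ptr ^= 1UL << 53;

		/*
		 * On ARMv8.3-FPAC hardware this authenticate instruction
		 * itself faults and the task gets SIGILL/ILL_ILLOPN.  On
		 * older ptrauth hardware it merely poisons the pointer and
		 * a SIGSEGV only arrives on a later dereference.
		 */
		asm volatile("autia %0, %1" : "+r" (ptr) : "r" (mod));

		printf("authenticated pointer: %#llx\n",
		       (unsigned long long)ptr);
		return 0;
	}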

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v3:
 * Removed ptrauth fault handler from 32 bit compat layer handler.
 * Added a comment for in-kernel PAC fault kernel crash.
 * Commit logs cleanup.

 arch/arm64/include/asm/esr.h       |  4 +++-
 arch/arm64/include/asm/exception.h |  1 +
 arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
 arch/arm64/kernel/entry-common.c   | 21 +++++++++++++++++++++
 arch/arm64/kernel/traps.c          | 10 ++++++++++
 5 files changed, 51 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 035003acfa87..22c81f1edda2 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -35,7 +35,9 @@
 #define ESR_ELx_EC_SYS64	(0x18)
 #define ESR_ELx_EC_SVE		(0x19)
 #define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
-/* Unallocated EC: 0x1b - 0x1E */
+/* Unallocated EC: 0x1B */
+#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
+/* Unallocated EC: 0x1D - 0x1E */
 #define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
 #define ESR_ELx_EC_IABT_LOW	(0x20)
 #define ESR_ELx_EC_IABT_CUR	(0x21)
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 7577a754d443..3140b90f2e4f 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr);
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 463175f80341..c71bcd0c002a 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -633,14 +633,22 @@
 #define ID_AA64ISAR1_APA_SHIFT		4
 #define ID_AA64ISAR1_DPB_SHIFT		0
 
-#define ID_AA64ISAR1_APA_NI		0x0
-#define ID_AA64ISAR1_APA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_API_NI		0x0
-#define ID_AA64ISAR1_API_IMP_DEF	0x1
-#define ID_AA64ISAR1_GPA_NI		0x0
-#define ID_AA64ISAR1_GPA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_GPI_NI		0x0
-#define ID_AA64ISAR1_GPI_IMP_DEF	0x1
+#define ID_AA64ISAR1_APA_NI			0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_APA_ARCH_EPAC		0x2
+#define ID_AA64ISAR1_APA_ARCH_EPAC2		0x3
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_API_NI			0x0
+#define ID_AA64ISAR1_API_IMP_DEF		0x1
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC		0x2
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2		0x3
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_GPA_NI			0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_GPI_NI			0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF		0x1
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 3dbdf9752b11..8380ffb8160a 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
 }
 NOKPROBE_SYMBOL(el1_dbg);
 
+static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	local_daif_inherit(regs);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el1_fpac);
+
 asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el1_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el1_fpac(regs, esr);
+		break;
 	default:
 		el1_inv(regs, esr);
 	}
@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_svc);
 
+static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el0_fpac);
+
 asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el0_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el0_fpac(regs, esr);
+		break;
 	default:
 		el0_inv(regs, esr);
 	}
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 47f651df781c..0f1e587393a0 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -479,6 +479,15 @@ void do_bti(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(do_bti);
 
+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
+{
+	/* In-kernel Pointer authentication fault causes kernel crash */
+	BUG_ON(!user_mode(regs));
+	arm64_notify_die("pointer authentication fault", regs, SIGILL,
+			 ILL_ILLOPN, (void __user *)regs->pc, esr);
+}
+NOKPROBE_SYMBOL(do_ptrauth_fault);
+
 #define __user_cache_maint(insn, address, res)			\
 	if (address >= user_addr_max()) {			\
 		res = -EFAULT;					\
@@ -775,6 +784,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
 	[ESR_ELx_EC_SVE]		= "SVE",
 	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
+	[ESR_ELx_EC_FPAC]		= "FPAC",
 	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
 	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
 	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
-- 
2.17.1



* [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-07-10  8:00 [PATCH v4 0/4] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
  2020-07-10  8:00 ` [PATCH v4 1/4] arm64: ptrauth: add Armv8.3 pointer authentication enhanced features Amit Daniel Kachhap
@ 2020-07-10  8:00 ` Amit Daniel Kachhap
  2020-07-29 10:36   ` Dave Martin
  2020-07-10  8:00 ` [PATCH v4 3/4] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
  2020-07-10  8:00 ` [PATCH v4 4/4] arm64: kprobe: clarify the comment of steppable hint instructions Amit Daniel Kachhap
  3 siblings, 1 reply; 15+ messages in thread
From: Amit Daniel Kachhap @ 2020-07-10  8:00 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Dave Martin

The current address authentication cpufeature levels are set as LOWER_SAFE,
which is not compatible with the different configurations added for the
Armv8.3 ptrauth enhancements: the levels have different behaviour and there
is no tunable to enable only the lower, safe versions. This is rectified by
setting the cpufeature type to EXACT.

The current cpufeature framework also does not prevent non-exact secondary
cpus from booting; it merely marks them as tainted. As a workaround, the
generic match handler is replaced with a new handler specific to ptrauth.

After this change, if a secondary cpu's ptrauth configuration differs from
that of the boot cpu, the mismatched cpu is parked in an infinite loop.
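
For context, the per-field sanitisation logic behaves roughly as below
(paraphrased from arm64_ftr_safe_value() for illustration, not the exact
source); this is why switching to FTR_EXACT alone only affects the
sanitised system view and does not by itself stop a mismatched secondary
cpu from booting:

	/* Paraphrased sketch of how a mismatched field value is combined */
	switch (ftrp->type) {
	case FTR_LOWER_SAFE:
		safe = min(new, cur);	/* degrade to the lowest common level */
		break;
	case FTR_EXACT:
		safe = ftrp->safe_val;	/* fall back to the "safe" value */
		break;
	}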

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
[Suzuki: Introduce new matching function for address authentication]
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Changes since v3:
 * Commit logs cleanup.

 arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
 1 file changed, 47 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9fae0efc80c1..8ac8c18f70c9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
@@ -1605,11 +1605,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
-			     int __unused)
+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
 {
-	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+	int local_cpu_auth;
+	static int boot_cpu_auth_arch;
+	static int boot_cpu_auth_imp_def;
+
+	/* We don't expect to be called with SCOPE_SYSTEM */
+	WARN_ON(scope == SCOPE_SYSTEM);
+
+	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
+						     entry->field_pos, entry->sign);
+	/*
+	 * The ptr-auth feature levels are not intercompatible with
+	 * lower levels. Hence we must match all the CPUs with that
+	 * of the boot CPU. So cache the level of boot CPU and compare
+	 * it against the secondary CPUs.
+	 */
+	if (scope & SCOPE_BOOT_CPU) {
+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
+			boot_cpu_auth_imp_def = local_cpu_auth;
+			return boot_cpu_auth_imp_def >= entry->min_field_value;
+		} else if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
+			boot_cpu_auth_arch = local_cpu_auth;
+			return boot_cpu_auth_arch >= entry->min_field_value;
+		}
+	} else if (scope & SCOPE_LOCAL_CPU) {
+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
+			return (local_cpu_auth >= entry->min_field_value) &&
+			       (local_cpu_auth == boot_cpu_auth_imp_def);
+		}
+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
+			return (local_cpu_auth >= entry->min_field_value) &&
+			       (local_cpu_auth == boot_cpu_auth_arch);
+		}
+	}
+	return false;
+}
+
+static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
+			     int scope)
+{
+	return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
+	       has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
 }
 
 static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
@@ -1958,7 +1996,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
-		.matches = has_cpuid_feature,
+		.matches = has_address_auth_cpucap,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -1968,12 +2006,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
-		.matches = has_cpuid_feature,
+		.matches = has_address_auth_cpucap,
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
 		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
-		.matches = has_address_auth,
+		.matches = has_address_auth_metacap,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
-- 
2.17.1



* [PATCH v4 3/4] arm64: kprobe: disable probe of fault prone ptrauth instruction
  2020-07-10  8:00 [PATCH v4 0/4] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
  2020-07-10  8:00 ` [PATCH v4 1/4] arm64: ptrauth: add Armv8.3 pointer authentication enhanced features Amit Daniel Kachhap
  2020-07-10  8:00 ` [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
@ 2020-07-10  8:00 ` Amit Daniel Kachhap
  2020-07-29 10:43   ` Dave Martin
  2020-07-10  8:00 ` [PATCH v4 4/4] arm64: kprobe: clarify the comment of steppable hint instructions Amit Daniel Kachhap
  3 siblings, 1 reply; 15+ messages in thread
From: Amit Daniel Kachhap @ 2020-07-10  8:00 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Dave Martin

With the addition of the ARMv8.3-FPAC feature, probing the ptrauth
authenticate instructions (AUT*) may cause a ptrauth fault exception if
authentication fails, so they cannot be safely single stepped.

Hence probing of the authenticate instructions is disallowed. The
corresponding pac ptrauth instructions (PAC*) are not affected and can
still be probed. Also, AUT* instructions do not make sense at function
entry points, so most realistic probes are unaffected by this change.
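
As an illustration, a hypothetical kprobes module snippet (the symbol name
and offset below are invented; assume the offset lands on an AUTIASP hint):
after this change such a probe is rejected at registration time instead of
risking an FPAC fault while single stepping out of line:

	#include <linux/kprobes.h>
	#include <linux/module.h>

	static struct kprobe kp = {
		.symbol_name	= "some_kernel_function",	/* hypothetical */
		.offset		= 0x20,		/* assumed to be an AUTIASP */
	};
	static bool registered;

	static int __init probe_init(void)
	{
		int ret = register_kprobe(&kp);

		/* With this patch an AUT* target is rejected with -EINVAL. */
		pr_info("register_kprobe returned %d\n", ret);
		registered = (ret == 0);
		return 0;
	}

	static void __exit probe_exit(void)
	{
		if (registered)
			unregister_kprobe(&kp);
	}

	module_init(probe_init);
	module_exit(probe_exit);
	MODULE_LICENSE("GPL");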

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v3:
 * Commit logs cleanup.
 * Moved comment changes in a separate patch.

 arch/arm64/kernel/insn.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index a107375005bc..33d53cb46542 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
 	case AARCH64_INSN_HINT_XPACLRI:
 	case AARCH64_INSN_HINT_PACIA_1716:
 	case AARCH64_INSN_HINT_PACIB_1716:
-	case AARCH64_INSN_HINT_AUTIA_1716:
-	case AARCH64_INSN_HINT_AUTIB_1716:
 	case AARCH64_INSN_HINT_PACIAZ:
 	case AARCH64_INSN_HINT_PACIASP:
 	case AARCH64_INSN_HINT_PACIBZ:
 	case AARCH64_INSN_HINT_PACIBSP:
-	case AARCH64_INSN_HINT_AUTIAZ:
-	case AARCH64_INSN_HINT_AUTIASP:
-	case AARCH64_INSN_HINT_AUTIBZ:
-	case AARCH64_INSN_HINT_AUTIBSP:
 	case AARCH64_INSN_HINT_BTI:
 	case AARCH64_INSN_HINT_BTIC:
 	case AARCH64_INSN_HINT_BTIJ:
-- 
2.17.1



* [PATCH v4 4/4] arm64: kprobe: clarify the comment of steppable hint instructions
  2020-07-10  8:00 [PATCH v4 0/4] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
                   ` (2 preceding siblings ...)
  2020-07-10  8:00 ` [PATCH v4 3/4] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
@ 2020-07-10  8:00 ` Amit Daniel Kachhap
  2020-07-29 10:44   ` Dave Martin
  3 siblings, 1 reply; 15+ messages in thread
From: Amit Daniel Kachhap @ 2020-07-10  8:00 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Dave Martin

The existing comment about steppable hint instructions is incomplete and
only describes NOP instructions as steppable. Since the function
aarch64_insn_is_steppable_hint allows all whitelisted hint instructions
to be probed, the comment is updated to reflect this.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v3:
 * New patch.

 arch/arm64/kernel/probes/decode-insn.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index 263d5fba4c8a..3912a37cb117 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -42,8 +42,10 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 			     != AARCH64_INSN_SPCLREG_DAIF;
 
 		/*
-		 * The HINT instruction is is problematic when single-stepping,
-		 * except for the NOP case.
+		 * A HINT instruction is steppable only if it is in the
+		 * whitelist; all other hint instructions are blocked from
+		 * single stepping as they may cause an exception or other
+		 * unintended behaviour.
 		 */
 		if (aarch64_insn_is_hint(insn))
 			return aarch64_insn_is_steppable_hint(insn);
-- 
2.17.1



* Re: [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-07-10  8:00 ` [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
@ 2020-07-29 10:36   ` Dave Martin
  2020-07-29 12:28     ` Suzuki K Poulose
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Martin @ 2020-07-29 10:36 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	linux-arm-kernel

On Fri, Jul 10, 2020 at 01:30:08PM +0530, Amit Daniel Kachhap wrote:
> The current address authentication cpufeature levels are set as LOWER_SAFE
> which is not compatible with the different configurations added for Armv8.3
> ptrauth enhancements as the different levels have different behaviour and
> there is no tunable to enable the lower safe versions. This is rectified
> by setting those cpufeature type as EXACT.
> 
> The current cpufeature framework also does not interfere in the booting of
> non-exact secondary cpus but rather marks them as tainted. As a workaround
> this is fixed by replacing the generic match handler with a new handler
> specific to ptrauth.
> 
> After this change, if there is any variation in ptrauth configurations in
> secondary cpus from boot cpu then those mismatched cpus are parked in an
> infinite loop.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> [Suzuki: Introduce new matching function for address authentication]
> Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> Changes since v3:
>  * Commit logs cleanup.
> 
>  arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
>  1 file changed, 47 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9fae0efc80c1..8ac8c18f70c9 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>  	ARM64_FTR_END,
>  };
> @@ -1605,11 +1605,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
>  #endif /* CONFIG_ARM64_RAS_EXTN */
>  
>  #ifdef CONFIG_ARM64_PTR_AUTH
> -static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
> -			     int __unused)
> +static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
>  {
> -	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> -	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> +	int local_cpu_auth;
> +	static int boot_cpu_auth_arch;
> +	static int boot_cpu_auth_imp_def;

I guess having static variables here is probably safe, provided that
calls to this function are strictly serialised with sufficiently
heavyweight synchronisation.

Might be worth a comment.

Coming up with a cleaner approach using locks or atomics might be
overkill, but other folks might take a stronger view on this.

> +
> +	/* We don't expect to be called with SCOPE_SYSTEM */
> +	WARN_ON(scope == SCOPE_SYSTEM);
> +
> +	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
> +						     entry->field_pos, entry->sign);
> +	/*
> +	 * The ptr-auth feature levels are not intercompatible with
> +	 * lower levels. Hence we must match all the CPUs with that
> +	 * of the boot CPU. So cache the level of boot CPU and compare
> +	 * it against the secondary CPUs.
> +	 */
> +	if (scope & SCOPE_BOOT_CPU) {
> +		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
> +			boot_cpu_auth_imp_def = local_cpu_auth;
> +			return boot_cpu_auth_imp_def >= entry->min_field_value;
> +		} else if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
> +			boot_cpu_auth_arch = local_cpu_auth;
> +			return boot_cpu_auth_arch >= entry->min_field_value;
> +		}
> +	} else if (scope & SCOPE_LOCAL_CPU) {
> +		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
> +			return (local_cpu_auth >= entry->min_field_value) &&
> +			       (local_cpu_auth == boot_cpu_auth_imp_def);
> +		}
> +		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
> +			return (local_cpu_auth >= entry->min_field_value) &&
> +			       (local_cpu_auth == boot_cpu_auth_arch);
> +		}
> +	}

This looks complex.  I guess this is something to do with the change to
FTR_EXACT, but won't the cpufeatures code do the right thing anyway with
mismatched features?

I may be missing something here.

> +	return false;
> +}
> +
> +static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
> +			     int scope)
> +{
> +	return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
> +	       has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
>  }
>  
>  static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
> @@ -1958,7 +1996,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.sign = FTR_UNSIGNED,
>  		.field_pos = ID_AA64ISAR1_APA_SHIFT,
>  		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
> -		.matches = has_cpuid_feature,
> +		.matches = has_address_auth_cpucap,
>  	},
>  	{
>  		.desc = "Address authentication (IMP DEF algorithm)",
> @@ -1968,12 +2006,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.sign = FTR_UNSIGNED,
>  		.field_pos = ID_AA64ISAR1_API_SHIFT,
>  		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
> -		.matches = has_cpuid_feature,
> +		.matches = has_address_auth_cpucap,
>  	},
>  	{
>  		.capability = ARM64_HAS_ADDRESS_AUTH,
>  		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
> -		.matches = has_address_auth,
> +		.matches = has_address_auth_metacap,
>  	},
>  	{
>  		.desc = "Generic authentication (architected algorithm)",

[...]

Cheers
---Dave


* Re: [PATCH v4 3/4] arm64: kprobe: disable probe of fault prone ptrauth instruction
  2020-07-10  8:00 ` [PATCH v4 3/4] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
@ 2020-07-29 10:43   ` Dave Martin
  2020-08-03 10:16     ` Amit Kachhap
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Martin @ 2020-07-29 10:43 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	linux-arm-kernel

On Fri, Jul 10, 2020 at 01:30:09PM +0530, Amit Daniel Kachhap wrote:
> With the addition of ARMv8.3-FPAC feature, the probe of authenticate
> ptrauth instructions (AUT*) may cause ptrauth fault exception in case of
> authenticate failure so they cannot be safely single stepped.
> 
> Hence the probe of authenticate instructions is disallowed but the
> corresponding pac ptrauth instruction (PAC*) is not affected and they can
> still be probed. Also AUTH* instructions do not make sense at function
> entry points so most realistic probes would be unaffected by this change.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

I take it we don't need any special handling of things like RETAA now
that they are allowed to generate ptrauth faults?  IIUC such
instructions are already not simulated and not stepped out-of-line, so
we probably don't need to do anything.  Instructions like this won't
appear at normal function entry points.

Assuming what I've said above is correct:

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
> Changes since v3:
>  * Commit logs cleanup.
>  * Moved comment changes in a separate patch.
> 
>  arch/arm64/kernel/insn.c | 6 ------
>  1 file changed, 6 deletions(-)
> 
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index a107375005bc..33d53cb46542 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
>  	case AARCH64_INSN_HINT_XPACLRI:
>  	case AARCH64_INSN_HINT_PACIA_1716:
>  	case AARCH64_INSN_HINT_PACIB_1716:
> -	case AARCH64_INSN_HINT_AUTIA_1716:
> -	case AARCH64_INSN_HINT_AUTIB_1716:
>  	case AARCH64_INSN_HINT_PACIAZ:
>  	case AARCH64_INSN_HINT_PACIASP:
>  	case AARCH64_INSN_HINT_PACIBZ:
>  	case AARCH64_INSN_HINT_PACIBSP:
> -	case AARCH64_INSN_HINT_AUTIAZ:
> -	case AARCH64_INSN_HINT_AUTIASP:
> -	case AARCH64_INSN_HINT_AUTIBZ:
> -	case AARCH64_INSN_HINT_AUTIBSP:
>  	case AARCH64_INSN_HINT_BTI:
>  	case AARCH64_INSN_HINT_BTIC:
>  	case AARCH64_INSN_HINT_BTIJ:
> -- 
> 2.17.1
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [PATCH v4 4/4] arm64: kprobe: clarify the comment of steppable hint instructions
  2020-07-10  8:00 ` [PATCH v4 4/4] arm64: kprobe: clarify the comment of steppable hint instructions Amit Daniel Kachhap
@ 2020-07-29 10:44   ` Dave Martin
  0 siblings, 0 replies; 15+ messages in thread
From: Dave Martin @ 2020-07-29 10:44 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	linux-arm-kernel

On Fri, Jul 10, 2020 at 01:30:10PM +0530, Amit Daniel Kachhap wrote:
> The existing comment about steppable hint instruction is not complete
> and only describes NOP instructions as steppable. As the function
> aarch64_insn_is_steppable_hint allows all white-listed instruction
> to be probed so the comment is updated to reflect this.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
> Changes since v3:
>  * New patch.
> 
>  arch/arm64/kernel/probes/decode-insn.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
> index 263d5fba4c8a..3912a37cb117 100644
> --- a/arch/arm64/kernel/probes/decode-insn.c
> +++ b/arch/arm64/kernel/probes/decode-insn.c
> @@ -42,8 +42,10 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
>  			     != AARCH64_INSN_SPCLREG_DAIF;
>  
>  		/*
> -		 * The HINT instruction is is problematic when single-stepping,
> -		 * except for the NOP case.
> +		 * The HINT instruction is steppable only if it is in whitelist
> +		 * and the rest of other such instructions are blocked for
> +		 * single stepping as they may cause exception or other
> +		 * unintended behaviour.
>  		 */
>  		if (aarch64_insn_is_hint(insn))
>  			return aarch64_insn_is_steppable_hint(insn);
> -- 
> 2.17.1
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [PATCH v4 1/4] arm64: ptrauth: add Armv8.3 pointer authentication enhanced features
  2020-07-10  8:00 ` [PATCH v4 1/4] arm64: ptrauth: add Armv8.3 pointer authentication enhanced features Amit Daniel Kachhap
@ 2020-07-29 11:07   ` Dave Martin
  2020-08-03 10:13     ` Amit Kachhap
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Martin @ 2020-07-29 11:07 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	linux-arm-kernel

On Fri, Jul 10, 2020 at 01:30:07PM +0530, Amit Daniel Kachhap wrote:
> Some Armv8.3 Pointer Authentication enhancements have been introduced
> which are mandatory for Armv8.6 and optional for ARMv8.3. These features
> are,
> 
> * ARMv8.3-PAuth2 - An enhanced PAC generation logic is added which hardens
>   finding the correct PAC value of the authenticated pointer.
> 
> * ARMv8.3-FPAC - Fault is generated now when the ptrauth authentication
>   instruction fails in authenticating the PAC present in the address.
>   This is different from earlier case when such failures just adds an
>   error code in the top byte and waits for subsequent load/store to abort.
>   The ptrauth instructions which may cause this fault are autiasp, retaa
>   etc.

Did we add anything for armv8.5 EnhancedPAC?

I think that just changes some detail of the signing/authentication
algorithms, rather than adding new insns or exceptions.  So maybe there
was nothing to add anyway.


> The above features are now represented by additional configurations
> for the Address Authentication cpufeature.
> 
> The userspace fault received in the kernel due to ARMv8.3-FPAC is treated
> as Illegal instruction and hence signal SIGILL is injected with ILL_ILLOPN
> as the signal code. Note that this is different from earlier ARMv8.3
> ptrauth where signal SIGSEGV is issued due to Pointer authentication
> failures. The in-kernel PAC fault causes kernel to crash.

Presumably this breaks code that checks pointers using AUT* but doesn't
attempt to dereference them if the check fails, perhaps calling some
userspace callback to handle it or throwing an exception.
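
The pattern of concern would be something like this (contrived user-space
sketch; the signed pointer and modifier are assumed to come from elsewhere):

	#include <stdbool.h>
	#include <stdint.h>

	static bool pointer_looks_valid(void *expected, uint64_t signed_ptr,
					uint64_t modifier)
	{
		uint64_t p = signed_ptr;

		asm volatile("autia %0, %1" : "+r" (p) : "r" (modifier));

		/*
		 * Pre-FPAC, a failed authentication only corrupts the upper
		 * bits, so comparing (rather than dereferencing) was a safe
		 * explicit check.  With ARMv8.3-FPAC the autia above raises
		 * SIGILL instead of ever reaching this comparison.
		 */
		return (void *)p == expected;
	}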

Since the point of the architecture though is that you don't need to do
an explicit check, maybe code doesn't do this in practice.  Checking
codesearch.debian.net or similar for explicit AUT* instructions could be
a good idea.

If this is a concern we might have to stop reporting HWCAP_PACA and
HWCAP_PACG, and define new hwcaps.  That seems brutal, though.

> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v3:
>  * Removed ptrauth fault handler from 32 bit compat layer handler.
>  * Added a comment for in-kernel PAC fault kernel crash.
>  * Commit logs cleanup.
> 
>  arch/arm64/include/asm/esr.h       |  4 +++-
>  arch/arm64/include/asm/exception.h |  1 +
>  arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
>  arch/arm64/kernel/entry-common.c   | 21 +++++++++++++++++++++
>  arch/arm64/kernel/traps.c          | 10 ++++++++++
>  5 files changed, 51 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> index 035003acfa87..22c81f1edda2 100644
> --- a/arch/arm64/include/asm/esr.h
> +++ b/arch/arm64/include/asm/esr.h
> @@ -35,7 +35,9 @@
>  #define ESR_ELx_EC_SYS64	(0x18)
>  #define ESR_ELx_EC_SVE		(0x19)
>  #define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
> -/* Unallocated EC: 0x1b - 0x1E */
> +/* Unallocated EC: 0x1B */
> +#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
> +/* Unallocated EC: 0x1D - 0x1E */
>  #define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
>  #define ESR_ELx_EC_IABT_LOW	(0x20)
>  #define ESR_ELx_EC_IABT_CUR	(0x21)
> diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
> index 7577a754d443..3140b90f2e4f 100644
> --- a/arch/arm64/include/asm/exception.h
> +++ b/arch/arm64/include/asm/exception.h
> @@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
>  void do_cp15instr(unsigned int esr, struct pt_regs *regs);
>  void do_el0_svc(struct pt_regs *regs);
>  void do_el0_svc_compat(struct pt_regs *regs);
> +void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr);
>  #endif	/* __ASM_EXCEPTION_H */
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 463175f80341..c71bcd0c002a 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -633,14 +633,22 @@
>  #define ID_AA64ISAR1_APA_SHIFT		4
>  #define ID_AA64ISAR1_DPB_SHIFT		0
>  
> -#define ID_AA64ISAR1_APA_NI		0x0
> -#define ID_AA64ISAR1_APA_ARCHITECTED	0x1
> -#define ID_AA64ISAR1_API_NI		0x0
> -#define ID_AA64ISAR1_API_IMP_DEF	0x1
> -#define ID_AA64ISAR1_GPA_NI		0x0
> -#define ID_AA64ISAR1_GPA_ARCHITECTED	0x1
> -#define ID_AA64ISAR1_GPI_NI		0x0
> -#define ID_AA64ISAR1_GPI_IMP_DEF	0x1
> +#define ID_AA64ISAR1_APA_NI			0x0
> +#define ID_AA64ISAR1_APA_ARCHITECTED		0x1
> +#define ID_AA64ISAR1_APA_ARCH_EPAC		0x2
> +#define ID_AA64ISAR1_APA_ARCH_EPAC2		0x3
> +#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC	0x4
> +#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB	0x5
> +#define ID_AA64ISAR1_API_NI			0x0
> +#define ID_AA64ISAR1_API_IMP_DEF		0x1
> +#define ID_AA64ISAR1_API_IMP_DEF_EPAC		0x2
> +#define ID_AA64ISAR1_API_IMP_DEF_EPAC2		0x3
> +#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC	0x4
> +#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB	0x5
> +#define ID_AA64ISAR1_GPA_NI			0x0
> +#define ID_AA64ISAR1_GPA_ARCHITECTED		0x1
> +#define ID_AA64ISAR1_GPI_NI			0x0
> +#define ID_AA64ISAR1_GPI_IMP_DEF		0x1
>  
>  /* id_aa64pfr0 */
>  #define ID_AA64PFR0_CSV3_SHIFT		60
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 3dbdf9752b11..8380ffb8160a 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
>  }
>  NOKPROBE_SYMBOL(el1_dbg);
>  
> +static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
> +{
> +	local_daif_inherit(regs);
> +	do_ptrauth_fault(regs, esr);
> +}
> +NOKPROBE_SYMBOL(el1_fpac);
> +
>  asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
>  {
>  	unsigned long esr = read_sysreg(esr_el1);
> @@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
>  	case ESR_ELx_EC_BRK64:
>  		el1_dbg(regs, esr);
>  		break;
> +	case ESR_ELx_EC_FPAC:
> +		el1_fpac(regs, esr);
> +		break;
>  	default:
>  		el1_inv(regs, esr);
>  	}
> @@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
>  }
>  NOKPROBE_SYMBOL(el0_svc);
>  
> +static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
> +{
> +	user_exit_irqoff();
> +	local_daif_restore(DAIF_PROCCTX);
> +	do_ptrauth_fault(regs, esr);
> +}
> +NOKPROBE_SYMBOL(el0_fpac);
> +
>  asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
>  {
>  	unsigned long esr = read_sysreg(esr_el1);
> @@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
>  	case ESR_ELx_EC_BRK64:
>  		el0_dbg(regs, esr);
>  		break;
> +	case ESR_ELx_EC_FPAC:
> +		el0_fpac(regs, esr);
> +		break;
>  	default:
>  		el0_inv(regs, esr);
>  	}
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index 47f651df781c..0f1e587393a0 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -479,6 +479,15 @@ void do_bti(struct pt_regs *regs)
>  }
>  NOKPROBE_SYMBOL(do_bti);
>  
> +void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
> +{
> +	/* In-kernel Pointer authentication fault causes kernel crash */

Doesn't arm64_notify_die() already dump a backtrace and kill the task?
I wonder whether force_signal_inject() is a reasonable thing to use
here.  This is used in some similar places such as do_undefinstr(),
do_bti() etc.

force_signal_inject() might need some minor tweaking to handle this
case.
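
Something like the below, perhaps, mirroring do_bti() (completely untested
sketch; the esr would be dropped unless force_signal_inject() grows an
extra argument to carry it):

	void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
	{
		BUG_ON(!user_mode(regs));
		force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc);
	}
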

> +	BUG_ON(!user_mode(regs));

We need to kill the thread, but doesn't arm64_notify_die() already do
that?

(Don't take my word for it though...)

> +	arm64_notify_die("pointer authentication fault", regs, SIGILL,
> +			 ILL_ILLOPN, (void __user *)regs->pc, esr);
> +}
> +NOKPROBE_SYMBOL(do_ptrauth_fault);
> +
>  #define __user_cache_maint(insn, address, res)			\
>  	if (address >= user_addr_max()) {			\
>  		res = -EFAULT;					\
> @@ -775,6 +784,7 @@ static const char *esr_class_str[] = {
>  	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
>  	[ESR_ELx_EC_SVE]		= "SVE",
>  	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
> +	[ESR_ELx_EC_FPAC]		= "FPAC",
>  	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
>  	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
>  	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
> -- 
> 2.17.1
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-07-29 10:36   ` Dave Martin
@ 2020-07-29 12:28     ` Suzuki K Poulose
  2020-07-29 14:31       ` Dave Martin
  0 siblings, 1 reply; 15+ messages in thread
From: Suzuki K Poulose @ 2020-07-29 12:28 UTC (permalink / raw)
  To: dave.martin, amit.kachhap
  Cc: mark.rutland, keescook, catalin.marinas, broonie, james.morse,
	Vincenzo.Frascino, will, linux-arm-kernel

On 07/29/2020 11:36 AM, Dave Martin wrote:
> On Fri, Jul 10, 2020 at 01:30:08PM +0530, Amit Daniel Kachhap wrote:
>> The current address authentication cpufeature levels are set as LOWER_SAFE
>> which is not compatible with the different configurations added for Armv8.3
>> ptrauth enhancements as the different levels have different behaviour and
>> there is no tunable to enable the lower safe versions. This is rectified
>> by setting those cpufeature type as EXACT.
>>
>> The current cpufeature framework also does not interfere in the booting of
>> non-exact secondary cpus but rather marks them as tainted. As a workaround
>> this is fixed by replacing the generic match handler with a new handler
>> specific to ptrauth.
>>
>> After this change, if there is any variation in ptrauth configurations in
>> secondary cpus from boot cpu then those mismatched cpus are parked in an
>> infinite loop.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> [Suzuki: Introduce new matching function for address authentication]
>> Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> ---
>> Changes since v3:
>>   * Commit logs cleanup.
>>
>>   arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
>>   1 file changed, 47 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 9fae0efc80c1..8ac8c18f70c9 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>   	ARM64_FTR_END,
>>   };
>> @@ -1605,11 +1605,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
>>   #endif /* CONFIG_ARM64_RAS_EXTN */
>>   
>>   #ifdef CONFIG_ARM64_PTR_AUTH
>> -static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
>> -			     int __unused)
>> +static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
>>   {
>> -	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
>> -	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
>> +	int local_cpu_auth;
>> +	static int boot_cpu_auth_arch;
>> +	static int boot_cpu_auth_imp_def;
> 
> I guess having static variables here is probably safe, provided that
> calls to this function are strictly serialised with sufficiently
> heavyweight synchronisation.
> 

These are initialised with the primary boot CPU's values, and the function
is later called from secondary CPU bring up. So the accesses are
serialized, except when this_cpu_has_cap() could be called. But this is
fine, as it is a read operation.

> Might be worth a comment.
> 
> Coming up with a cleaner approach using locks or atomics might be
> overkill, but other folks might take a stronger view on this.
> 
>> +
>> +	/* We don't expect to be called with SCOPE_SYSTEM */
>> +	WARN_ON(scope == SCOPE_SYSTEM);
>> +
>> +	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
>> +						     entry->field_pos, entry->sign);
>> +	/*
>> +	 * The ptr-auth feature levels are not intercompatible with
>> +	 * lower levels. Hence we must match all the CPUs with that
>> +	 * of the boot CPU. So cache the level of boot CPU and compare
>> +	 * it against the secondary CPUs.
>> +	 */
>> +	if (scope & SCOPE_BOOT_CPU) {
>> +		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
>> +			boot_cpu_auth_imp_def = local_cpu_auth;
>> +			return boot_cpu_auth_imp_def >= entry->min_field_value;
>> +		} else if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
>> +			boot_cpu_auth_arch = local_cpu_auth;
>> +			return boot_cpu_auth_arch >= entry->min_field_value;
>> +		}
>> +	} else if (scope & SCOPE_LOCAL_CPU) {
>> +		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
>> +			return (local_cpu_auth >= entry->min_field_value) &&
>> +			       (local_cpu_auth == boot_cpu_auth_imp_def);
>> +		}
>> +		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
>> +			return (local_cpu_auth >= entry->min_field_value) &&
>> +			       (local_cpu_auth == boot_cpu_auth_arch);
>> +		}
>> +	}
> 
> This looks complex.  I guess this is something to do with the change to
> FTR_EXACT, but won't the cpufeatures code do the right thing anyway with
> mismatched features?

Part of the issue is that the capability is SCOPE_BOOT_CPU. So,
we need to verify the features on the secondary CPUs, making sure that
they match the "level" as detected by the boot CPU. Otherwise, we could
end up with mismatched CPUs, where all of them support different levels
of AUTH.

Also, this implies that the checks are performed before the cpufeature gets
updated with the secondary CPUs' values. Any conflict implies the
secondary CPU ends up parked.

So, the "FTR_EXACT" change only has an impact of keeping the cpufeature
infrastructure sanitised only in the absence of CONFIG_ARM64_PTRAUTH.

Cheers
Suzuki


* Re: [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-07-29 12:28     ` Suzuki K Poulose
@ 2020-07-29 14:31       ` Dave Martin
  2020-07-29 17:09         ` Suzuki K Poulose
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Martin @ 2020-07-29 14:31 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: mark.rutland, keescook, catalin.marinas, broonie, james.morse,
	amit.kachhap, Vincenzo.Frascino, will, linux-arm-kernel

On Wed, Jul 29, 2020 at 01:28:06PM +0100, Suzuki K Poulose wrote:
> On 07/29/2020 11:36 AM, Dave Martin wrote:
> >On Fri, Jul 10, 2020 at 01:30:08PM +0530, Amit Daniel Kachhap wrote:
> >>The current address authentication cpufeature levels are set as LOWER_SAFE
> >>which is not compatible with the different configurations added for Armv8.3
> >>ptrauth enhancements as the different levels have different behaviour and
> >>there is no tunable to enable the lower safe versions. This is rectified
> >>by setting those cpufeature type as EXACT.
> >>
> >>The current cpufeature framework also does not interfere in the booting of
> >>non-exact secondary cpus but rather marks them as tainted. As a workaround
> >>this is fixed by replacing the generic match handler with a new handler
> >>specific to ptrauth.
> >>
> >>After this change, if there is any variation in ptrauth configurations in
> >>secondary cpus from boot cpu then those mismatched cpus are parked in an
> >>infinite loop.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>[Suzuki: Introduce new matching function for address authentication]
> >>Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> >>---
> >>Changes since v3:
> >>  * Commit logs cleanup.
> >>
> >>  arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
> >>  1 file changed, 47 insertions(+), 9 deletions(-)
> >>
> >>diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> >>index 9fae0efc80c1..8ac8c18f70c9 100644
> >>--- a/arch/arm64/kernel/cpufeature.c
> >>+++ b/arch/arm64/kernel/cpufeature.c
> >>@@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> >>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> >>-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> >>+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> >>-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> >>+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> >>  	ARM64_FTR_END,
> >>  };
> >>@@ -1605,11 +1605,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
> >>  #endif /* CONFIG_ARM64_RAS_EXTN */
> >>  #ifdef CONFIG_ARM64_PTR_AUTH
> >>-static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
> >>-			     int __unused)
> >>+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
> >>  {
> >>-	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> >>-	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> >>+	int local_cpu_auth;
> >>+	static int boot_cpu_auth_arch;
> >>+	static int boot_cpu_auth_imp_def;
> >
> >I guess having static variables here is probably safe, provided that
> >calls to this function are strictly serialised with sufficiently
> >heavyweight synchronisation.
> >
> 
> These are initialised with the primary boot CPU. And later on called
> from CPU bring up. So they are serialized, except when
> this_cpu_has_cap() could be called. But this is fine, as it is a read
> operation.

To guarantee that any update to those variables is visible to a booting
CPU, we'd probably need a DSB and/or DMB somewhere.  For now I don't see
anything explicit, though it's likely we're getting them as a side
effect of something else.

In any case, I guess we've been managing for some time without this
being strictly correct.


> >Might be worth a comment.
> >
> >Coming up with a cleaner approach using locks or atomics might be
> >overkill, but other folks might take a stronger view on this.
> >
> >>+
> >>+	/* We don't expect to be called with SCOPE_SYSTEM */
> >>+	WARN_ON(scope == SCOPE_SYSTEM);
> >>+
> >>+	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
> >>+						     entry->field_pos, entry->sign);
> >>+	/*
> >>+	 * The ptr-auth feature levels are not intercompatible with
> >>+	 * lower levels. Hence we must match all the CPUs with that
> >>+	 * of the boot CPU. So cache the level of boot CPU and compare
> >>+	 * it against the secondary CPUs.
> >>+	 */
> >>+	if (scope & SCOPE_BOOT_CPU) {
> >>+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
> >>+			boot_cpu_auth_imp_def = local_cpu_auth;

Actually, can we make use of read_sanitised_ftr_reg() here rather than
having to cache the values in static variables?
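
Roughly like this, perhaps (untested sketch; it relies on the boot CPU's
value being the only one merged into the sanitised register at the point
the secondaries are verified):

	static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry,
					    int scope)
	{
		int boot_val, sec_val;

		boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
						       entry->field_pos, entry->sign);
		if (scope & SCOPE_BOOT_CPU)
			return boot_val >= entry->min_field_value;

		/* Secondary CPUs must match the boot CPU exactly. */
		sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
						      entry->field_pos, entry->sign);
		return sec_val >= entry->min_field_value && sec_val == boot_val;
	}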

> >>+			return boot_cpu_auth_imp_def >= entry->min_field_value;
> >>+		} else if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
> >>+			boot_cpu_auth_arch = local_cpu_auth;
> >>+			return boot_cpu_auth_arch >= entry->min_field_value;
> >>+		}
> >>+	} else if (scope & SCOPE_LOCAL_CPU) {
> >>+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
> >>+			return (local_cpu_auth >= entry->min_field_value) &&
> >>+			       (local_cpu_auth == boot_cpu_auth_imp_def);
> >>+		}
> >>+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
> >>+			return (local_cpu_auth >= entry->min_field_value) &&
> >>+			       (local_cpu_auth == boot_cpu_auth_arch);
> >>+		}
> >>+	}
> >
> >This looks complex.  I guess this is something to do with the change to
> >FTR_EXACT, but won't the cpufeatures code do the right thing anyway with
> >mismatched features?
> 
> Part of the issue is that the capability is SCOPE_BOOT_CPU. So,
> we need to verify the features on the secondary CPUs, making sure that
> they match the "level" as detected by the boot CPU. Otherwise, we could
> end up with mismatched CPUs, where all of them support different levels
> of AUTH.
> 
> Also, implies that the checks are performed before the cpufeature gets
> updated with the secondary CPUs values. Any conflict implies the
> secondary CPU ends up parked.
> 
> So, the "FTR_EXACT" change only has an impact of keeping the cpufeature
> infrastructure sanitised only in the absence of CONFIG_ARM64_PTRAUTH.

I think the framework could be improved to handle this kind of thing
more gracefully, but I don't see an obviously better way to do this for
now, especially if this is just a one-off case.

(I certainly hope we don't get more features that behave this way...)

Cheers
---Dave


* Re: [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-07-29 14:31       ` Dave Martin
@ 2020-07-29 17:09         ` Suzuki K Poulose
  2020-08-05  9:16           ` Amit Kachhap
  0 siblings, 1 reply; 15+ messages in thread
From: Suzuki K Poulose @ 2020-07-29 17:09 UTC (permalink / raw)
  To: dave.martin
  Cc: mark.rutland, keescook, catalin.marinas, broonie, james.morse,
	amit.kachhap, Vincenzo.Frascino, will, linux-arm-kernel

On 07/29/2020 03:31 PM, Dave Martin wrote:
> On Wed, Jul 29, 2020 at 01:28:06PM +0100, Suzuki K Poulose wrote:
>> On 07/29/2020 11:36 AM, Dave Martin wrote:
>>> On Fri, Jul 10, 2020 at 01:30:08PM +0530, Amit Daniel Kachhap wrote:
>>>> The current address authentication cpufeature levels are set as LOWER_SAFE
>>>> which is not compatible with the different configurations added for Armv8.3
>>>> ptrauth enhancements as the different levels have different behaviour and
>>>> there is no tunable to enable the lower safe versions. This is rectified
>>>> by setting those cpufeature types to EXACT.
>>>>
>>>> The current cpufeature framework also does not interfere in the booting of
>>>> non-exact secondary cpus but rather marks them as tainted. As a workaround
>>>> this is fixed by replacing the generic match handler with a new handler
>>>> specific to ptrauth.
>>>>
>>>> After this change, if there is any variation in ptrauth configurations in
>>>> secondary cpus from boot cpu then those mismatched cpus are parked in an
>>>> infinite loop.
>>>>
>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>> [Suzuki: Introduce new matching function for address authentication]
>>>> Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>>>> ---
>>>> Changes since v3:
>>>>   * Commit logs cleanup.
>>>>
>>>>   arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
>>>>   1 file changed, 47 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>>> index 9fae0efc80c1..8ac8c18f70c9 100644
>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>> @@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>   	ARM64_FTR_END,
>>>>   };
>>>> @@ -1605,11 +1605,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
>>>>   #endif /* CONFIG_ARM64_RAS_EXTN */
>>>>   #ifdef CONFIG_ARM64_PTR_AUTH
>>>> -static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
>>>> -			     int __unused)
>>>> +static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
>>>>   {
>>>> -	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
>>>> -	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
>>>> +	int local_cpu_auth;
>>>> +	static int boot_cpu_auth_arch;
>>>> +	static int boot_cpu_auth_imp_def;
>>>
>>> I guess having static variables here is probably safe, provided that
>>> calls to this function are strictly serialised with sufficiently
>>> heavyweight synchronisation.
>>>
>>
>> These are initialised with the primary boot CPU. And later on called
>> from CPU bring up. So they are serialized, except when
>> this_cpu_has_cap() could be called. But this is fine, as it is a read
>> operation.
> 
> To guarantee that any update to those variables is visible to a booting
> CPU, we'd probably need a DSB and/or DMB somewhere.  For now I don't see
> anything explicit, though it's likely we're getting them as a side
> effect of something else.
> 
> In any case, I guess we've been managing for some time without this
> being strictly correct.
> 

That is a good point. In fact we need this for init_cpu_features() and 
update_cpu_features() too.


> 
>>> Might be worth a comment.
>>>
>>> Coming up with a cleaner approach using locks or atomics might be
>>> overkill, but other folks might take a stronger view on this.
>>>
>>>> +
>>>> +	/* We don't expect to be called with SCOPE_SYSTEM */
>>>> +	WARN_ON(scope == SCOPE_SYSTEM);
>>>> +
>>>> +	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
>>>> +						     entry->field_pos, entry->sign);
>>>> +	/*
>>>> +	 * The ptr-auth feature levels are not intercompatible with
>>>> +	 * lower levels. Hence we must match all the CPUs with that
>>>> +	 * of the boot CPU. So cache the level of boot CPU and compare
>>>> +	 * it against the secondary CPUs.
>>>> +	 */
>>>> +	if (scope & SCOPE_BOOT_CPU) {
>>>> +		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
>>>> +			boot_cpu_auth_imp_def = local_cpu_auth;
> 
> Actually, can we make use of read_sanitised_ftr_reg() here rather than
> having to cache the values in static variables?

That's actually a good idea! However, the only catch is if there is a
case where the values become compatible, e.g. if APA >= 3 were all
compatible, in which case the sanitised value would indicate 0.
If we decide not to support such cases, then we could go ahead with the
sanitised_ftr value.

Suzuki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 1/4] arm64: ptrauth: add Armv8.3 pointer authentication enhanced features
  2020-07-29 11:07   ` Dave Martin
@ 2020-08-03 10:13     ` Amit Kachhap
  0 siblings, 0 replies; 15+ messages in thread
From: Amit Kachhap @ 2020-08-03 10:13 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	linux-arm-kernel

Hi,

On 7/29/20 4:37 PM, Dave Martin wrote:
> On Fri, Jul 10, 2020 at 01:30:07PM +0530, Amit Daniel Kachhap wrote:
>> Some Armv8.3 Pointer Authentication enhancements have been introduced
>> which are mandatory for Armv8.6 and optional for ARMv8.3. These features
>> are,
>>
>> * ARMv8.3-PAuth2 - An enhanced PAC generation logic is added which hardens
>>    finding the correct PAC value of the authenticated pointer.
>>
>> * ARMv8.3-FPAC - Fault is generated now when the ptrauth authentication
>>    instruction fails in authenticating the PAC present in the address.
>>    This is different from the earlier case where such failures just add an
>>    error code in the top byte and wait for a subsequent load/store to abort.
>>    The ptrauth instructions which may cause this fault are autiasp, retaa
>>    etc.
> 
> Did we add anything for armv8.5 EnhancedPAC?
>
> I think that just changes some detail of the signing/authentication
> algorithms, rather than adding new insns or exceptions.  So maybe there
> was nothing to add anyway.

Yes, agreed. Only small things are added here: cpufeature configurations
and a new exception class code. I will change the commit log to reflect
this.
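
For reference, these are the Address Authentication levels of the
ID_AA64ISAR1_EL1.APA field (and likewise API for the IMP DEF algorithm)
that the new configurations distinguish, per the architecture. The define
names below are only illustrative and may not match the patch exactly:

	#define ID_AA64ISAR1_APA_NI			0x0	/* no address authentication */
	#define ID_AA64ISAR1_APA_ARCHITECTED		0x1	/* base ARMv8.3-PAuth */
	#define ID_AA64ISAR1_APA_ARCH_EPAC		0x2	/* + EPAC */
	#define ID_AA64ISAR1_APA_ARCH_EPAC2		0x3	/* + ARMv8.3-PAuth2 */
	#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC	0x4	/* + ARMv8.3-FPAC */
	#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB	0x5	/* + FPACCOMBINE */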

> 
> 
>> The above features are now represented by additional configurations
>> for the Address Authentication cpufeature.
>>
>> The userspace fault received in the kernel due to ARMv8.3-FPAC is treated
>> as an illegal instruction and hence the signal SIGILL is injected with
>> ILL_ILLOPN as the signal code. Note that this is different from earlier
>> ARMv8.3 ptrauth where the signal SIGSEGV is issued on pointer
>> authentication failures. An in-kernel PAC fault causes the kernel to crash.
> 
> Presumably this breaks code that checks pointers using AUT* but doesn't
> attempt to dereference them if the check fails, perhaps calling some
> userspace callback to handle it or throwing an exception.

Should we worry about it? EPAC is essentially added so that the
intermediate result of an authenticate is not made available, as exposing
it would impact the security of the key or the PAC algorithm.

> 
> Since the point of the architecture though is that you don't need to do
> an explicit check, maybe code doesn't do this in practice.  Checking
> codesearch.debian.net or similar for explicit AUT* instructions could be
> good idea.

I will check if I can find any.
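
For concreteness, the kind of user-space pattern that would start faulting
is sketched below (hypothetical code, not taken from any real project;
needs a toolchain with -march=armv8.3-a):

	#include <stdbool.h>
	#include <stdint.h>

	/* Check a signed pointer without dereferencing it on failure. */
	static bool pac_check_ptr(void *signed_ptr, uint64_t modifier)
	{
		void *authed = signed_ptr;
		void *stripped = signed_ptr;

		/* Pre-FPAC a failed AUTIA only corrupts the PAC bits;
		 * with ARMv8.3-FPAC it faults right here instead. */
		asm volatile("autia %0, %1" : "+r"(authed) : "r"(modifier));
		/* Strip the PAC from a copy so a failure shows up as a mismatch. */
		asm volatile("xpaci %0" : "+r"(stripped));

		return authed == stripped;
	}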

> 
> If this is a concern we might have to stop reporting HWCAP_PACA and
> HWCAP_PACG, and define new hwcaps.  That seems brutal, though.

ok.

> 
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Changes since v3:
>>   * Removed ptrauth fault handler from 32 bit compat layer handler.
>>   * Added a comment for in-kernel PAC fault kernel crash.
>>   * Commit logs cleanup.
>>
>>   arch/arm64/include/asm/esr.h       |  4 +++-
>>   arch/arm64/include/asm/exception.h |  1 +
>>   arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
>>   arch/arm64/kernel/entry-common.c   | 21 +++++++++++++++++++++
>>   arch/arm64/kernel/traps.c          | 10 ++++++++++
>>   5 files changed, 51 insertions(+), 9 deletions(-)

>>   	default:
>>   		el0_inv(regs, esr);
>>   	}
>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>> index 47f651df781c..0f1e587393a0 100644
>> --- a/arch/arm64/kernel/traps.c
>> +++ b/arch/arm64/kernel/traps.c
>> @@ -479,6 +479,15 @@ void do_bti(struct pt_regs *regs)
>>   }
>>   NOKPROBE_SYMBOL(do_bti);
>>   
>> +void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
>> +{
>> +	/* In-kernel Pointer authentication fault causes kernel crash */
> 
> Doesn't arm64_notify_die() already dump a backtrace and kill the task?
> I wonder whether force_signal_inject() is a reasonable thing to use
> here.  This is used in some similar places such as do_undefinstr(),
> do_bti() etc.

Ok, I will make it similar to do_bti(). It will be clearer that way.

> 
> force_signal_inject() might need some minor tweaking to handle this
> case.
> 
>> +	BUG_ON(!user_mode(regs));
> 
> We need to kill the thread, but doesn't arm64_notify_die() already do
> that?

Yes, some other callers of arm64_notify_die() do not use BUG_ON() for the
kernel case.
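
For what it's worth, a do_bti()-style version might look roughly like the
sketch below, assuming force_signal_inject() grows an extra argument for
the ESR as suggested above (so this is a sketch, not the final code):

	void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
	{
		/*
		 * An FPAC exception taken from the kernel is unexpected and
		 * fatal, so keep the BUG_ON(); for user space just inject
		 * SIGILL, as do_bti() does for BTI failures.
		 */
		BUG_ON(!user_mode(regs));
		force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
	}
	NOKPROBE_SYMBOL(do_ptrauth_fault);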

Thanks,
Amit

> 
> (Don't take my word for it though...)
> 
>> +	arm64_notify_die("pointer authentication fault", regs, SIGILL,
>> +			 ILL_ILLOPN, (void __user *)regs->pc, esr);
>> +}
>> +NOKPROBE_SYMBOL(do_ptrauth_fault);
>> +
>>   #define __user_cache_maint(insn, address, res)			\
>>   	if (address >= user_addr_max()) {			\
>>   		res = -EFAULT;					\
>> @@ -775,6 +784,7 @@ static const char *esr_class_str[] = {
>>   	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
>>   	[ESR_ELx_EC_SVE]		= "SVE",
>>   	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
>> +	[ESR_ELx_EC_FPAC]		= "FPAC",
>>   	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
>>   	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
>>   	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
>> -- 
>> 2.17.1
>>
>>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 3/4] arm64: kprobe: disable probe of fault prone ptrauth instruction
  2020-07-29 10:43   ` Dave Martin
@ 2020-08-03 10:16     ` Amit Kachhap
  0 siblings, 0 replies; 15+ messages in thread
From: Amit Kachhap @ 2020-08-03 10:16 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	linux-arm-kernel

Hi,

On 7/29/20 4:13 PM, Dave Martin wrote:
> On Fri, Jul 10, 2020 at 01:30:09PM +0530, Amit Daniel Kachhap wrote:
>> With the addition of ARMv8.3-FPAC feature, the probe of authenticate
>> ptrauth instructions (AUT*) may cause ptrauth fault exception in case of
>> authenticate failure so they cannot be safely single stepped.
>>
>> Hence the probe of authenticate instructions is disallowed, but the
>> corresponding pac ptrauth instructions (PAC*) are not affected and can
>> still be probed. Also, AUT* instructions do not make sense at function
>> entry points, so most realistic probes would be unaffected by this change.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> 
> I take it we don't need any special handling of things like RETAA now
> that they are allowed to generate ptrauth faults?  IIUC such
> instructions are already not simulated and not stepped out-of-line, so
> we probably don't need to do anything.  Instructions like this won't
> appear at normal function entry points.

There is currently an issue with retaa (and the other combined
instructions), as such branch instructions are not checked and the code
breaks later. I will push a fix as a separate patch.
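
Purely as an illustration of what "combined instructions" means here (this
is not the fix itself): RETAA and RETAB sit in the branch-register encoding
space rather than the hint space, so a decoder that wants to refuse them
has to match them explicitly, e.g.:

	/* Encodings per the Arm ARM: RETAA = 0xd65f0bff, RETAB = 0xd65f0fff. */
	static bool insn_is_combined_ret(u32 insn)
	{
		return insn == 0xd65f0bff || insn == 0xd65f0fff;
	}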

> 
> Assuming what I've said above is correct:
> 
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>

Thanks for reviewing.

> 
>> ---
>> Changes since v3:
>>   * Commit logs cleanup.
>>   * Moved comment changes in a separate patch.
>>
>>   arch/arm64/kernel/insn.c | 6 ------
>>   1 file changed, 6 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index a107375005bc..33d53cb46542 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
>>   	case AARCH64_INSN_HINT_XPACLRI:
>>   	case AARCH64_INSN_HINT_PACIA_1716:
>>   	case AARCH64_INSN_HINT_PACIB_1716:
>> -	case AARCH64_INSN_HINT_AUTIA_1716:
>> -	case AARCH64_INSN_HINT_AUTIB_1716:
>>   	case AARCH64_INSN_HINT_PACIAZ:
>>   	case AARCH64_INSN_HINT_PACIASP:
>>   	case AARCH64_INSN_HINT_PACIBZ:
>>   	case AARCH64_INSN_HINT_PACIBSP:
>> -	case AARCH64_INSN_HINT_AUTIAZ:
>> -	case AARCH64_INSN_HINT_AUTIASP:
>> -	case AARCH64_INSN_HINT_AUTIBZ:
>> -	case AARCH64_INSN_HINT_AUTIBSP:
>>   	case AARCH64_INSN_HINT_BTI:
>>   	case AARCH64_INSN_HINT_BTIC:
>>   	case AARCH64_INSN_HINT_BTIJ:
>> -- 
>> 2.17.1
>>
>>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-07-29 17:09         ` Suzuki K Poulose
@ 2020-08-05  9:16           ` Amit Kachhap
  0 siblings, 0 replies; 15+ messages in thread
From: Amit Kachhap @ 2020-08-05  9:16 UTC (permalink / raw)
  To: Suzuki K Poulose, dave.martin
  Cc: mark.rutland, keescook, catalin.marinas, broonie, james.morse,
	Vincenzo.Frascino, will, linux-arm-kernel

Hi,

On 7/29/20 10:39 PM, Suzuki K Poulose wrote:
> On 07/29/2020 03:31 PM, Dave Martin wrote:
>> On Wed, Jul 29, 2020 at 01:28:06PM +0100, Suzuki K Poulose wrote:
>>> On 07/29/2020 11:36 AM, Dave Martin wrote:
>>>> On Fri, Jul 10, 2020 at 01:30:08PM +0530, Amit Daniel Kachhap wrote:
>>>>> The current address authentication cpufeature levels are set as 
>>>>> LOWER_SAFE
>>>>> which is not compatible with the different configurations added for 
>>>>> Armv8.3
>>>>> ptrauth enhancements as the different levels have different 
>>>>> behaviour and
>>>>> there is no tunable to enable the lower safe versions. This is 
>>>>> rectified
>>>>> by setting those cpufeature types to EXACT.
>>>>>
>>>>> The current cpufeature framework also does not interfere in the 
>>>>> booting of
>>>>> non-exact secondary cpus but rather marks them as tainted. As a 
>>>>> workaround
>>>>> this is fixed by replacing the generic match handler with a new 
>>>>> handler
>>>>> specific to ptrauth.
>>>>>
>>>>> After this change, if there is any variation in ptrauth 
>>>>> configurations in
>>>>> secondary cpus from boot cpu then those mismatched cpus are parked 
>>>>> in an
>>>>> infinite loop.
>>>>>
>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>> [Suzuki: Introduce new matching function for address authentication]
>>>>> Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>>>>> ---
>>>>> Changes since v3:
>>>>>   * Commit logs cleanup.
>>>>>
>>>>>   arch/arm64/kernel/cpufeature.c | 56 
>>>>> ++++++++++++++++++++++++++++------
>>>>>   1 file changed, 47 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/arch/arm64/kernel/cpufeature.c 
>>>>> b/arch/arm64/kernel/cpufeature.c
>>>>> index 9fae0efc80c1..8ac8c18f70c9 100644
>>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>>> @@ -210,9 +210,9 @@ static const struct arm64_ftr_bits 
>>>>> ftr_id_aa64isar1[] = {
>>>>>       ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>> ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>>       ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>> ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>>       ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>> -               FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 
>>>>> 4, 0),
>>>>> +               FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>       ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>> -               FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 
>>>>> 4, 0),
>>>>> +               FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>       ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>> ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>>       ARM64_FTR_END,
>>>>>   };
>>>>> @@ -1605,11 +1605,49 @@ static void cpu_clear_disr(const struct 
>>>>> arm64_cpu_capabilities *__unused)
>>>>>   #endif /* CONFIG_ARM64_RAS_EXTN */
>>>>>   #ifdef CONFIG_ARM64_PTR_AUTH
>>>>> -static bool has_address_auth(const struct arm64_cpu_capabilities 
>>>>> *entry,
>>>>> -                 int __unused)
>>>>> +static bool has_address_auth_cpucap(const struct 
>>>>> arm64_cpu_capabilities *entry, int scope)
>>>>>   {
>>>>> -    return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
>>>>> -           __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
>>>>> +    int local_cpu_auth;
>>>>> +    static int boot_cpu_auth_arch;
>>>>> +    static int boot_cpu_auth_imp_def;
>>>>
>>>> I guess having static variables here is probably safe, provided that
>>>> calls to this function are strictly serialised with sufficiently
>>>> heavyweight synchronisation.
>>>>
>>>
>>> These are initialised with the primary boot CPU. And later on called
>>> from CPU bring up. So they are serialized, except when
>>> this_cpu_has_cap() could be called. But this is fine, as it is a read
>>> operation.
>>
>> To guarantee that any update to those variables is visible to a booting
>> CPU, we'd probably need a DSB and/or DMB somewhere.  For now I don't see
>> anything explicit, though it's likely we're getting them as a side
>> effect of something else.
>>
>> In any case, I guess we've been managing for some time without this
>> being strictly correct.
>>
> 
> That is a good point. In fact we need this for init_cpu_features() and 
> update_cpu_features() too.

I just scanned for dsb in the boot flow. Is the dsb(ishst) called from
update_cpu_boot_status() in __cpu_up() (arch/arm64/kernel/smp.c) sufficient
to synchronise this information with the booting CPUs?

> 
> 
>>
>>>> Might be worth a comment.
>>>>
>>>> Coming up with a cleaner approach using locks or atomics might be
>>>> overkill, but other folks might take a stronger view on this.
>>>>
>>>>> +
>>>>> +    /* We don't expect to be called with SCOPE_SYSTEM */
>>>>> +    WARN_ON(scope == SCOPE_SYSTEM);
>>>>> +
>>>>> +    local_cpu_auth = 
>>>>> cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
>>>>> +                             entry->field_pos, entry->sign);
>>>>> +    /*
>>>>> +     * The ptr-auth feature levels are not intercompatible with
>>>>> +     * lower levels. Hence we must match all the CPUs with that
>>>>> +     * of the boot CPU. So cache the level of boot CPU and compare
>>>>> +     * it against the secondary CPUs.
>>>>> +     */
>>>>> +    if (scope & SCOPE_BOOT_CPU) {
>>>>> +        if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
>>>>> +            boot_cpu_auth_imp_def = local_cpu_auth;
>>
>> Actually, can we make use of read_sanitised_ftr_reg() here rather than
>> having to cache the values in static variables?
> 
> That's actually a good idea! However, the only catch is if there is a
> case where the values become compatible, e.g. if APA >= 3 were all
> compatible, in which case the sanitised value would indicate 0.
> If we decide not to support such cases, then we could go ahead with the
> sanitised_ftr value.

This is a clean method. I will use it in my next v5 version.
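
A rough sketch of that approach, reusing the helpers from the hunk above
(the actual v5 patch may differ in detail):

	static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
	{
		int boot_val, sec_val;

		/* We don't expect to be called with SCOPE_SYSTEM */
		WARN_ON(scope == SCOPE_SYSTEM);

		/*
		 * The sanitised value is guaranteed to match the boot CPU,
		 * because a mismatching secondary is parked before it gets a
		 * chance to update the sanitised state.
		 */
		boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
						       entry->field_pos, entry->sign);
		if (scope & SCOPE_BOOT_CPU)
			return boot_val >= entry->min_field_value;

		/* Secondary CPUs (SCOPE_LOCAL_CPU) must match the boot CPU exactly. */
		sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
						      entry->field_pos, entry->sign);
		return (sec_val >= entry->min_field_value) && (sec_val == boot_val);
	}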

Thanks,
Amit D
> 
> Suzuki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2020-08-05  9:18 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-10  8:00 [PATCH v4 0/4] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
2020-07-10  8:00 ` [PATCH v4 1/4] arm64: ptrauth: add Armv8.3 pointer authentication enhanced features Amit Daniel Kachhap
2020-07-29 11:07   ` Dave Martin
2020-08-03 10:13     ` Amit Kachhap
2020-07-10  8:00 ` [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
2020-07-29 10:36   ` Dave Martin
2020-07-29 12:28     ` Suzuki K Poulose
2020-07-29 14:31       ` Dave Martin
2020-07-29 17:09         ` Suzuki K Poulose
2020-08-05  9:16           ` Amit Kachhap
2020-07-10  8:00 ` [PATCH v4 3/4] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
2020-07-29 10:43   ` Dave Martin
2020-08-03 10:16     ` Amit Kachhap
2020-07-10  8:00 ` [PATCH v4 4/4] arm64: kprobe: clarify the comment of steppable hint instructions Amit Daniel Kachhap
2020-07-29 10:44   ` Dave Martin
