* [PATCH v3 0/3] arm64: add Armv8.6 pointer authentication
@ 2020-06-18  5:10 Amit Daniel Kachhap
  2020-06-18  5:10 ` [PATCH v3 1/3] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature Amit Daniel Kachhap
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-06-18  5:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Dave Martin

This patch series adds support for the pointer authentication enhanced
features that are mandatory in Armv8.6. These features are:

 * An enhanced PAC generation algorithm.
 * Generation of a fault when an authenticate instruction fails.

The first feature requires no code change; for the second, a ptrauth
fault handler is added.
More details can be found here [1].

Changes since v2 [2]:
* Dropped the patch "arm64: cpufeature: Fix the handler for address authentication"
* Added a new matching function for address authentication, as the
  generic matching function has_cpuid_feature is specific to LOWER_SAFE
  features. This was suggested by Suzuki [3].
* Disabled probing of the authenticate ptrauth instructions, in line
  with Mark Brown's merged changes whitelisting hint instructions.

This series is based on kernel version v5.8-rc1.

Regards,
Amit

[1]: https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/arm-architecture-developments-armv8-6-a
[2]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-April/723751.html
[3]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-May/733839.html


Amit Daniel Kachhap (3):
  arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature
  arm64: cpufeature: Modify address authentication cpufeature to exact
  arm64: kprobe: disable probe of fault prone ptrauth instruction

 arch/arm64/include/asm/esr.h           |  4 +-
 arch/arm64/include/asm/exception.h     |  1 +
 arch/arm64/include/asm/sysreg.h        | 24 +++++++----
 arch/arm64/kernel/cpufeature.c         | 56 +++++++++++++++++++++-----
 arch/arm64/kernel/entry-common.c       | 24 +++++++++++
 arch/arm64/kernel/insn.c               |  6 ---
 arch/arm64/kernel/probes/decode-insn.c |  2 +-
 arch/arm64/kernel/traps.c              |  9 +++++
 8 files changed, 101 insertions(+), 25 deletions(-)

-- 
2.17.1



* [PATCH v3 1/3] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature
  2020-06-18  5:10 [PATCH v3 0/3] arm64: add Armv8.6 pointer authentication Amit Daniel Kachhap
@ 2020-06-18  5:10 ` Amit Daniel Kachhap
       [not found]   ` <20200622142255.GS25945@arm.com>
  2020-06-18  5:10 ` [PATCH v3 2/3] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
  2020-06-18  5:10 ` [PATCH v3 3/3] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
  2 siblings, 1 reply; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-06-18  5:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Dave Martin

This patch adds changes for the Pointer Authentication enhanced features
that are mandatory for Armv8.6. These features are:

* An enhanced PAC generation logic which makes it harder to recover the
  correct PAC value of an authenticated pointer. No code change is
  needed for this.

* A fault (FPAC) is now generated when a ptrauth authentication
  instruction fails to authenticate the PAC present in the address. This
  differs from the earlier behaviour, where such a failure merely set an
  error code in the pointer's top byte and relied on a subsequent
  load/store to abort. The ptrauth instructions which may cause this
  fault are autiasp, retaa, etc.

The above features are now represented by additional configurations
for the Address Authentication cpufeature.

The FPAC fault received in the kernel is treated as an illegal
instruction and hence signal SIGILL is injected with ILL_ILLOPN as the
signal code. Note that this differs from the earlier Armv8.3 ptrauth
behaviour, where SIGSEGV was issued on pointer authentication failures.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
No change since v2.
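
Not part of the patch: a minimal userspace sketch of the behaviour
change, assuming hardware with address authentication and a toolchain
accepting -march=armv8.3-a (the paciza/autiza pair, the flipped bit and
the handler are purely illustrative):

  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void handler(int sig, siginfo_t *info, void *ctx)
  {
  	/* with FPAC: SIGILL at the autiza, si_code == ILL_ILLOPN */
  	printf("sig %d, si_code %d\n", sig, info->si_code);
  	exit(0);
  }

  int main(void)
  {
  	struct sigaction sa = {
  		.sa_sigaction	= handler,
  		.sa_flags	= SA_SIGINFO,
  	};
  	unsigned long ptr = (unsigned long)&main;

  	sigaction(SIGILL, &sa, NULL);
  	sigaction(SIGSEGV, &sa, NULL);

  	asm volatile("paciza %0" : "+r" (ptr));	/* sign with APIAKey */
  	ptr ^= 1UL << 53;			/* corrupt one PAC bit */
  	asm volatile("autiza %0" : "+r" (ptr));	/* auth fails; FPAC faults here */

  	/* without FPAC, we only fault when branching to the bad pointer */
  	((void (*)(void))ptr)();
  	return 0;
  }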

 arch/arm64/include/asm/esr.h       |  4 +++-
 arch/arm64/include/asm/exception.h |  1 +
 arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
 arch/arm64/kernel/entry-common.c   | 24 ++++++++++++++++++++++++
 arch/arm64/kernel/traps.c          |  9 +++++++++
 5 files changed, 53 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 035003acfa87..22c81f1edda2 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -35,7 +35,9 @@
 #define ESR_ELx_EC_SYS64	(0x18)
 #define ESR_ELx_EC_SVE		(0x19)
 #define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
-/* Unallocated EC: 0x1b - 0x1E */
+/* Unallocated EC: 0x1B */
+#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
+/* Unallocated EC: 0x1D - 0x1E */
 #define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
 #define ESR_ELx_EC_IABT_LOW	(0x20)
 #define ESR_ELx_EC_IABT_CUR	(0x21)
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 7577a754d443..3140b90f2e4f 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr);
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 463175f80341..c71bcd0c002a 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -633,14 +633,22 @@
 #define ID_AA64ISAR1_APA_SHIFT		4
 #define ID_AA64ISAR1_DPB_SHIFT		0
 
-#define ID_AA64ISAR1_APA_NI		0x0
-#define ID_AA64ISAR1_APA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_API_NI		0x0
-#define ID_AA64ISAR1_API_IMP_DEF	0x1
-#define ID_AA64ISAR1_GPA_NI		0x0
-#define ID_AA64ISAR1_GPA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_GPI_NI		0x0
-#define ID_AA64ISAR1_GPI_IMP_DEF	0x1
+#define ID_AA64ISAR1_APA_NI			0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_APA_ARCH_EPAC		0x2
+#define ID_AA64ISAR1_APA_ARCH_EPAC2		0x3
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_API_NI			0x0
+#define ID_AA64ISAR1_API_IMP_DEF		0x1
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC		0x2
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2		0x3
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_GPA_NI			0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_GPI_NI			0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF		0x1
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 3dbdf9752b11..08a4805a5cca 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
 }
 NOKPROBE_SYMBOL(el1_dbg);
 
+static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	local_daif_inherit(regs);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el1_fpac);
+
 asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el1_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el1_fpac(regs, esr);
+		break;
 	default:
 		el1_inv(regs, esr);
 	}
@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_svc);
 
+static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el0_fpac);
+
 asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el0_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el0_fpac(regs, esr);
+		break;
 	default:
 		el0_inv(regs, esr);
 	}
@@ -335,6 +356,9 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BKPT32:
 		el0_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el0_fpac(regs, esr);
+		break;
 	default:
 		el0_inv(regs, esr);
 	}
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 50cc30acf106..f596ef585560 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -479,6 +479,14 @@ void do_bti(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(do_bti);
 
+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
+{
+	BUG_ON(!user_mode(regs));
+	arm64_notify_die("pointer authentication fault", regs, SIGILL,
+			 ILL_ILLOPN, (void __user *)regs->pc, esr);
+}
+NOKPROBE_SYMBOL(do_ptrauth_fault);
+
 #define __user_cache_maint(insn, address, res)			\
 	if (address >= user_addr_max()) {			\
 		res = -EFAULT;					\
@@ -775,6 +783,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
 	[ESR_ELx_EC_SVE]		= "SVE",
 	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
+	[ESR_ELx_EC_FPAC]		= "FPAC",
 	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
 	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
 	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
-- 
2.17.1



* [PATCH v3 2/3] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-06-18  5:10 [PATCH v3 0/3] arm64: add Armv8.6 pointer authentication Amit Daniel Kachhap
  2020-06-18  5:10 ` [PATCH v3 1/3] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature Amit Daniel Kachhap
@ 2020-06-18  5:10 ` Amit Daniel Kachhap
       [not found]   ` <20200622143503.GT25945@arm.com>
  2020-06-18  5:10 ` [PATCH v3 3/3] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
  2 siblings, 1 reply; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-06-18  5:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Dave Martin

This patch modifies the address authentication cpufeature type to EXACT
from earlier LOWER_SAFE as the different configurations added for Armv8.6
enhanced PAC have different behaviour and there is no tunable to enable the
lower safe versions.

The current cpufeature framework does not prevent the booting of
non-exact secondary cpus but rather marks them as tainted. This patch fixes
it by replacing the generic match handler with a new handler specific to
ptrauth.

After this change, if the ptrauth configuration of a secondary CPU
differs from that of the boot CPU, the mismatched CPU is parked in an
infinite loop.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
[Suzuki: Introduce new matching function for address authentication]
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Change since v2:
 * Added new matching helper function has_address_auth_cpucap as address
   authentication cpufeature is now FTR_EXACT. The existing matching function
   has_cpuid_feature is specific to LOWER_SAFE.
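
For context, a paraphrased sketch of how the framework combines a
register field across CPUs (arm64_ftr_safe_value() in cpufeature.c);
FTR_EXACT on its own only degrades the sanitised field to safe_val and
taints, hence the dedicated matching handler below:

  static s64 ftr_safe_value(const struct arm64_ftr_bits *ftrp,
  			  s64 new, s64 cur)
  {
  	switch (ftrp->type) {
  	case FTR_EXACT:
  		/* any mismatch degrades the field to the safe value */
  		return ftrp->safe_val;
  	case FTR_LOWER_SAFE:
  		/* the lower of the two values is safe */
  		return new < cur ? new : cur;
  	case FTR_HIGHER_SAFE:
  		/* the higher of the two values is safe */
  		return new > cur ? new : cur;
  	default:
  		BUG();
  	}
  }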

 arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
 1 file changed, 47 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4ae41670c2e6..42670d615555 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
@@ -1601,11 +1601,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
-			     int __unused)
+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
 {
-	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+	int local_cpu_auth;
+	static int boot_cpu_auth_arch;
+	static int boot_cpu_auth_imp_def;
+
+	/* We don't expect to be called with SCOPE_SYSTEM */
+	WARN_ON(scope == SCOPE_SYSTEM);
+
+	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
+						     entry->field_pos, entry->sign);
+	/*
+	 * The ptr-auth feature levels are not intercompatible with
+	 * lower levels. Hence we must match all the CPUs with that
+	 * of the boot CPU. So cache the level of boot CPU and compare
+	 * it against the secondary CPUs.
+	 */
+	if (scope & SCOPE_BOOT_CPU) {
+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
+			boot_cpu_auth_imp_def = local_cpu_auth;
+			return boot_cpu_auth_imp_def >= entry->min_field_value;
+		} else if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
+			boot_cpu_auth_arch = local_cpu_auth;
+			return boot_cpu_auth_arch >= entry->min_field_value;
+		}
+	} else if (scope & SCOPE_LOCAL_CPU) {
+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
+			return (local_cpu_auth >= entry->min_field_value) &&
+			       (local_cpu_auth == boot_cpu_auth_imp_def);
+		}
+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
+			return (local_cpu_auth >= entry->min_field_value) &&
+			       (local_cpu_auth == boot_cpu_auth_arch);
+		}
+	}
+	return false;
+}
+
+static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
+			     int scope)
+{
+	return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
+	       has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
 }
 
 static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
@@ -1954,7 +1992,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
-		.matches = has_cpuid_feature,
+		.matches = has_address_auth_cpucap,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -1964,12 +2002,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
-		.matches = has_cpuid_feature,
+		.matches = has_address_auth_cpucap,
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
 		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
-		.matches = has_address_auth,
+		.matches = has_address_auth_metacap,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
-- 
2.17.1



* [PATCH v3 3/3] arm64: kprobe: disable probe of fault prone ptrauth instruction
  2020-06-18  5:10 [PATCH v3 0/3] arm64: add Armv8.6 pointer authentication Amit Daniel Kachhap
  2020-06-18  5:10 ` [PATCH v3 1/3] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature Amit Daniel Kachhap
  2020-06-18  5:10 ` [PATCH v3 2/3] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
@ 2020-06-18  5:10 ` Amit Daniel Kachhap
  2 siblings, 0 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-06-18  5:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Dave Martin

This patch disables kprobing of the authenticate ptrauth instructions
(AUT*), which fall within the HINT instruction space. This is done to
disallow probing of the authenticate instructions, which may lead to
ptrauth (FPAC) faults with the addition of the Armv8.6 enhanced ptrauth
features.

The corresponding PAC-inserting ptrauth instructions (PAC*) are not
disabled and can still be probed.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Change since v2:
 * Modified this patch to account for the merged whitelisting of hint
   instructions by commit 47d67e4d19184e ("arm64: insn: Report PAC and BTI").
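
For illustration, a minimal module-side sketch of the effect; the probed
symbol and offset are hypothetical, and the expectation (not verified
here) is that register_kprobe() now fails with -EINVAL on an AUT* hint
instead of trying to single-step it:

  #include <linux/kprobes.h>
  #include <linux/module.h>

  static struct kprobe kp = {
  	/* hypothetical: symbol+offset holding an autiasp hint */
  	.symbol_name	= "some_pac_protected_function",
  	.offset		= 0x1c,
  };

  static int __init autprobe_init(void)
  {
  	int ret = register_kprobe(&kp);

  	pr_info("register_kprobe() returned %d\n", ret);
  	return ret;
  }
  module_init(autprobe_init);

  static void __exit autprobe_exit(void)
  {
  	unregister_kprobe(&kp);
  }
  module_exit(autprobe_exit);
  MODULE_LICENSE("GPL");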

 arch/arm64/kernel/insn.c               | 6 ------
 arch/arm64/kernel/probes/decode-insn.c | 2 +-
 2 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 684d871ae38d..9cd10edefc96 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
 	case AARCH64_INSN_HINT_XPACLRI:
 	case AARCH64_INSN_HINT_PACIA_1716:
 	case AARCH64_INSN_HINT_PACIB_1716:
-	case AARCH64_INSN_HINT_AUTIA_1716:
-	case AARCH64_INSN_HINT_AUTIB_1716:
 	case AARCH64_INSN_HINT_PACIAZ:
 	case AARCH64_INSN_HINT_PACIASP:
 	case AARCH64_INSN_HINT_PACIBZ:
 	case AARCH64_INSN_HINT_PACIBSP:
-	case AARCH64_INSN_HINT_AUTIAZ:
-	case AARCH64_INSN_HINT_AUTIASP:
-	case AARCH64_INSN_HINT_AUTIBZ:
-	case AARCH64_INSN_HINT_AUTIBSP:
 	case AARCH64_INSN_HINT_BTI:
 	case AARCH64_INSN_HINT_BTIC:
 	case AARCH64_INSN_HINT_BTIJ:
diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index 263d5fba4c8a..c26c638b260e 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -42,7 +42,7 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 			     != AARCH64_INSN_SPCLREG_DAIF;
 
 		/*
-		 * The HINT instruction is is problematic when single-stepping,
+		 * The HINT instruction is problematic when single-stepping,
 		 * except for the NOP case.
 		 */
 		if (aarch64_insn_is_hint(insn))
-- 
2.17.1



* Re: [PATCH v3 1/3] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature
       [not found]     ` <d1d3b25d-12d8-15d6-086a-d23b36440dd5@arm.com>
@ 2020-06-23 14:45       ` Dave Martin
  2020-06-24  7:07         ` Amit Daniel Kachhap
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Martin @ 2020-06-23 14:45 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Vincenzo Frascino,
	Will Deacon, linux-arm-kernel

On Tue, Jun 23, 2020 at 06:46:28PM +0530, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 6/22/20 7:52 PM, Dave Martin wrote:
> >On Thu, Jun 18, 2020 at 10:40:27AM +0530, Amit Daniel Kachhap wrote:
> >>This patch adds changes for the Pointer Authentication enhanced features
> >>that are mandatory for Armv8.6. These features are:
> >>
> >>* An enhanced PAC generation logic which makes it harder to recover the
> >>  correct PAC value of an authenticated pointer. No code change is
> >>  needed for this.
> >>
> >>* A fault (FPAC) is now generated when a ptrauth authentication
> >>  instruction fails to authenticate the PAC present in the address. This
> >>  differs from the earlier behaviour, where such a failure merely set an
> >>  error code in the pointer's top byte and relied on a subsequent
> >>  load/store to abort. The ptrauth instructions which may cause this
> >>  fault are autiasp, retaa, etc.
> >>
> >>The above features are now represented by additional configurations
> >>for the Address Authentication cpufeature.
> >>
> >>The FPAC fault received in the kernel is treated as an illegal
> >>instruction and hence signal SIGILL is injected with ILL_ILLOPN as the
> >>signal code. Note that this differs from the earlier Armv8.3 ptrauth
> >>behaviour, where SIGSEGV was issued on pointer authentication failures.
> >
> >Seems reasonable.  It's a little unfortunate that the behaviour changes
> >here, but I don't see what alternative we have.  And getting a
> >synchronous fault is certainly better for debugging.
> >
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

[...]

> >>diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> >>index 3dbdf9752b11..08a4805a5cca 100644
> >>--- a/arch/arm64/kernel/entry-common.c
> >>+++ b/arch/arm64/kernel/entry-common.c
> >>@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
> >>  }
> >>  NOKPROBE_SYMBOL(el1_dbg);
> >>+static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
> >>+{
> >>+	local_daif_inherit(regs);
> >>+	do_ptrauth_fault(regs, esr);
> >>+}
> >>+NOKPROBE_SYMBOL(el1_fpac);
> >>+
> >>  asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
> >>  {
> >>  	unsigned long esr = read_sysreg(esr_el1);
> >>@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
> >>  	case ESR_ELx_EC_BRK64:
> >>  		el1_dbg(regs, esr);
> >>  		break;
> >>+	case ESR_ELx_EC_FPAC:
> >>+		el1_fpac(regs, esr);
> >>+		break;
> >>  	default:
> >>  		el1_inv(regs, esr);
> >>  	}
> >>@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
> >>  }
> >>  NOKPROBE_SYMBOL(el0_svc);
> >>+static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
> >>+{
> >>+	user_exit_irqoff();
> >>+	local_daif_restore(DAIF_PROCCTX);
> >>+	do_ptrauth_fault(regs, esr);
> >>+}
> >>+NOKPROBE_SYMBOL(el0_fpac);
> >>+
> >>  asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
> >>  {
> >>  	unsigned long esr = read_sysreg(esr_el1);
> >>@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
> >>  	case ESR_ELx_EC_BRK64:
> >>  		el0_dbg(regs, esr);
> >>  		break;
> >>+	case ESR_ELx_EC_FPAC:
> >>+		el0_fpac(regs, esr);
> >>+		break;
> >>  	default:
> >>  		el0_inv(regs, esr);
> >>  	}
> >>@@ -335,6 +356,9 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
> >>  	case ESR_ELx_EC_BKPT32:
> >>  		el0_dbg(regs, esr);
> >>  		break;
> >>+	case ESR_ELx_EC_FPAC:
> >>+		el0_fpac(regs, esr);
> >>+		break;
> >
> >Can this exception ever happen?  I thought ptrauth doesn't exist for
> >AArch32.
> 
> This path will be taken during an FPAC fault from userspace (EL0). Am I missing
> something here?

I thought AArch32 doesn't have pointer auth at all, due to a lack of
spare address bits.

el0_sync_compat_handler() is presumably only for exceptions coming from
AArch32?
> 
> >
> >>  	default:
> >>  		el0_inv(regs, esr);
> >>  	}
> >>diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> >>index 50cc30acf106..f596ef585560 100644
> >>--- a/arch/arm64/kernel/traps.c
> >>+++ b/arch/arm64/kernel/traps.c
> >>@@ -479,6 +479,14 @@ void do_bti(struct pt_regs *regs)
> >>  }
> >>  NOKPROBE_SYMBOL(do_bti);
> >>+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
> >>+{
> >>+	BUG_ON(!user_mode(regs));
> >
> >Doesn't el1_fpac() call this?  How does this interact with in-kernel
> >pointer auth?
> 
> Yes, it is called from el1_fpac. Currently we crash the kernel when this
> FPAC fault occurs, similar to the do_undefinstr implementation. Anyway,
> the kernel could be crashed from el1_fpac as well.

OK, might be worth a comment saying what this is checking -- or simply
replace the body of el1_fpac() with a BUG() and remove the BUG_ON()
here?

In its current form, it looks as though this function should not be called
in a kernel context, yet el1_fpac() explicitly does make such a call.
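
i.e. something like this (completely untested):

	static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
	{
		/*
		 * In-kernel pointer authentication faults are fatal:
		 * we have no way to recover the corrupted pointer.
		 */
		BUG();
	}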

Cheers
---Dave


* Re: [PATCH v3 2/3] arm64: cpufeature: Modify address authentication cpufeature to exact
       [not found]     ` <d4e29203-7a6b-c6e5-643c-6b0abc670feb@arm.com>
@ 2020-06-23 14:47       ` Dave Martin
  2020-06-24  7:13         ` Amit Daniel Kachhap
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Martin @ 2020-06-23 14:47 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Vincenzo Frascino,
	Will Deacon, linux-arm-kernel

On Tue, Jun 23, 2020 at 06:47:02PM +0530, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 6/22/20 8:05 PM, Dave Martin wrote:
> >On Thu, Jun 18, 2020 at 10:40:28AM +0530, Amit Daniel Kachhap wrote:
> >>This patch modifies the address authentication cpufeature type to EXACT
> >>from earlier LOWER_SAFE as the different configurations added for Armv8.6
> >>enhanced PAC have different behaviour and there is no tunable to enable the
> >>lower safe versions.
> >
> >The enhancements add no new instructions at EL0, right?  And no
> >behavioural change, provided that userspace doesn't care what signing/
> >authentication algorithm is used?
> 
> Yes, you are correct that no new instructions are added.
> 
> >
> >If so then I guess you're correct not to add a new hwcap, but I thought
> >it was worth asking.
> >
> >>non-exact secondary cpus but rather marks them as tainted. This patch fixes
> >>it by replacing the generic match handler with a new handler specific to
> >>ptrauth.
> >>
> >>After this change, if the ptrauth configuration of a secondary CPU
> >>differs from that of the boot CPU, the mismatched CPU is parked in an
> >>infinite loop.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>[Suzuki: Introduce new matching function for address authentication]
> >>Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> >
> >Does a corresponding patch need to go to stable?  As things stand, I
> >think older kernels that support pointer auth will go wrong on v8.7+
> >hardware that has these features.
> >
> >This patch in its current form may be too heavy for stable, though.
> >
> >See comment below though.
> >
> >>---
> >>Change since v2:
> >>  * Added new matching helper function has_address_auth_cpucap as address
> >>    authentication cpufeature is now FTR_EXACT. The existing matching function
> >>    has_cpuid_feature is specific to LOWER_SAFE.
> >>
> >>  arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
> >>  1 file changed, 47 insertions(+), 9 deletions(-)
> >>
> >>diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> >>index 4ae41670c2e6..42670d615555 100644
> >>--- a/arch/arm64/kernel/cpufeature.c
> >>+++ b/arch/arm64/kernel/cpufeature.c
> >>@@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> >>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> >>-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> >>+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> >>-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> >>+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> >>  	ARM64_FTR_END,
> >>  };
> >>@@ -1601,11 +1601,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
> >>  #endif /* CONFIG_ARM64_RAS_EXTN */
> >>  #ifdef CONFIG_ARM64_PTR_AUTH
> >>-static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
> >>-			     int __unused)
> >>+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
> >>  {
> >>-	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> >>-	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> >>+	int local_cpu_auth;
> >>+	static int boot_cpu_auth_arch;
> >>+	static int boot_cpu_auth_imp_def;
> >>+
> >>+	/* We don't expect to be called with SCOPE_SYSTEM */
> >>+	WARN_ON(scope == SCOPE_SYSTEM);
> >>+
> >>+	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
> >>+						     entry->field_pos, entry->sign);
> >>+	/*
> >>+	 * The ptr-auth feature levels are not intercompatible with
> >>+	 * lower levels. Hence we must match all the CPUs with that
> >>+	 * of the boot CPU. So cache the level of boot CPU and compare
> >>+	 * it against the secondary CPUs.
> >>+	 */
> >
> >Thinking about this, does it actually matter if different CPUs have
> >different trapping behaviours for ptrauth?
> >
> >If we are guaranteed that the signing algorithm is still compatible
> >across all CPUs, we might get away with it.
> 
> You may be right that these protections may not be required in practice.
> But the algorithm of each configuration is different, so theoretically the
> same software will produce different behaviour on different CPUs.
> 
> This code initially only changed FTR_LOWER_SAFE to FTR_EXACT. But there
> are many issues identified in the cpufeature framework here [1], so the
> rest of the defensive changes were added.
> 
> [1]: https://www.spinics.net/lists/arm-kernel/msg808238.html
> >
> >Possibly a dumb question -- I haven't checked the specs to find out
> >whether this makes any sense or not.
> 
> Your point is valid though.

OK.

Did you see my comment above about patches for stable?

[...]

Cheers
---Dave


* Re: [PATCH v3 1/3] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature
  2020-06-23 14:45       ` Dave Martin
@ 2020-06-24  7:07         ` Amit Daniel Kachhap
  0 siblings, 0 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-06-24  7:07 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Vincenzo Frascino,
	Will Deacon, linux-arm-kernel



On 6/23/20 8:15 PM, Dave Martin wrote:
> On Tue, Jun 23, 2020 at 06:46:28PM +0530, Amit Daniel Kachhap wrote:
>> Hi,
>>
>> On 6/22/20 7:52 PM, Dave Martin wrote:
>>> On Thu, Jun 18, 2020 at 10:40:27AM +0530, Amit Daniel Kachhap wrote:
>>>> This patch adds changes for the Pointer Authentication enhanced features
>>>> that are mandatory for Armv8.6. These features are:
>>>>

>>>> +
>>>>   asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
>>>>   {
>>>>   	unsigned long esr = read_sysreg(esr_el1);
>>>> @@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
>>>>   	case ESR_ELx_EC_BRK64:
>>>>   		el0_dbg(regs, esr);
>>>>   		break;
>>>> +	case ESR_ELx_EC_FPAC:
>>>> +		el0_fpac(regs, esr);
>>>> +		break;
>>>>   	default:
>>>>   		el0_inv(regs, esr);
>>>>   	}
>>>> @@ -335,6 +356,9 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
>>>>   	case ESR_ELx_EC_BKPT32:
>>>>   		el0_dbg(regs, esr);
>>>>   		break;
>>>> +	case ESR_ELx_EC_FPAC:
>>>> +		el0_fpac(regs, esr);
>>>> +		break;
>>>
>>> Can this exception ever happen?  I thought ptrauth doesn't exist for
>>> AArch32.
>>
>> This path will be taken during an FPAC fault from userspace (EL0). Am I missing
>> something here?
> 
> I thought AArch32 doesn't have pointer auth at all, due to a lack of
> spare address bits.
> 
> el0_sync_compat_handler() is presumably only for exceptions coming from
> AArch32?

OK, got it. I will remove el0_fpac from here. Thanks for pointing out this issue.

>>
>>>
>>>>   	default:
>>>>   		el0_inv(regs, esr);
>>>>   	}

> 


* Re: [PATCH v3 2/3] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-06-23 14:47       ` Dave Martin
@ 2020-06-24  7:13         ` Amit Daniel Kachhap
  2020-06-24  7:49           ` Will Deacon
  0 siblings, 1 reply; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-06-24  7:13 UTC (permalink / raw)
  To: Dave Martin, Catalin Marinas, Will Deacon
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Mark Brown, James Morse, Vincenzo Frascino, linux-arm-kernel



On 6/23/20 8:17 PM, Dave Martin wrote:
> On Tue, Jun 23, 2020 at 06:47:02PM +0530, Amit Daniel Kachhap wrote:
>> Hi,
>>
>> On 6/22/20 8:05 PM, Dave Martin wrote:
>>> On Thu, Jun 18, 2020 at 10:40:28AM +0530, Amit Daniel Kachhap wrote:
>>>> This patch modifies the address authentication cpufeature type to EXACT
>>>> from earlier LOWER_SAFE as the different configurations added for Armv8.6
>>>> enhanced PAC have different behaviour and there is no tunable to enable the
>>>> lower safe versions.
>>>
>>> The enhancements add no new instructions at EL0, right?  And no
>>> behavioural change, provided that userspace doesn't care what signing/
>>> authentication algorithm is used?
>>
>> Yes you are correct that no new instructions are added.
>>
>>>
>>> If so then I guess you're correct not to add a new hwcap, but I thought
>>> it was worth asking.
>>>
>>>> non-exact secondary cpus but rather marks them as tainted. This patch fixes
>>>> it by replacing the generic match handler with a new handler specific to
>>>> ptrauth.
>>>>
>>>> After this change, if the ptrauth configuration of a secondary CPU
>>>> differs from that of the boot CPU, the mismatched CPU is parked in an
>>>> infinite loop.
>>>>
>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>> [Suzuki: Introduce new matching function for address authentication]
>>>> Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>>>
>>> Does a corresponding patch need to go to stable?  As things stand, I
>>> think older kernels that support pointer auth will go wrong on v8.7+
>>> hardware that has these features.

Yes, it makes sense to add this patch to the stable version.

@Catalin, @Will - Shall I send this one with a fix subject line? Please 
let me know your suggestion.

>>>
>>> This patch in its current form may be too heavy for stable, though.
>>>
>>> See comment below though.
>>>
>>>> ---
>>>> Change since v2:
>>>>   * Added new matching helper function has_address_auth_cpucap as address
>>>>     authentication cpufeature is now FTR_EXACT. The existing matching function
>>>>     has_cpuid_feature is specific to LOWER_SAFE.
>>>>
>>>>   arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
>>>>   1 file changed, 47 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>>> index 4ae41670c2e6..42670d615555 100644
>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>> @@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>   	ARM64_FTR_END,
>>>>   };
>>>> @@ -1601,11 +1601,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
>>>>   #endif /* CONFIG_ARM64_RAS_EXTN */
>>>>   #ifdef CONFIG_ARM64_PTR_AUTH
>>>> -static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
>>>> -			     int __unused)
>>>> +static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
>>>>   {
>>>> -	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
>>>> -	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
>>>> +	int local_cpu_auth;
>>>> +	static int boot_cpu_auth_arch;
>>>> +	static int boot_cpu_auth_imp_def;
>>>> +
>>>> +	/* We don't expect to be called with SCOPE_SYSTEM */
>>>> +	WARN_ON(scope == SCOPE_SYSTEM);
>>>> +
>>>> +	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
>>>> +						     entry->field_pos, entry->sign);
>>>> +	/*
>>>> +	 * The ptr-auth feature levels are not intercompatible with
>>>> +	 * lower levels. Hence we must match all the CPUs with that
>>>> +	 * of the boot CPU. So cache the level of boot CPU and compare
>>>> +	 * it against the secondary CPUs.
>>>> +	 */
>>>
>>> Thinking about this, does it actually matter if different CPUs have
>>> different trapping behaviours for ptrauth?
>>>
>>> If we are guaranteed that the signing algorithm is still compatible
>>> across all CPUs, we might get away with it.
>>
>> You may be right that these protections may not be required in practice.
>> But the algorithm of each configuration is different, so theoretically the
>> same software will produce different behaviour on different CPUs.
>>
>> This code initially only changed FTR_LOWER_SAFE to FTR_EXACT. But there
>> are many issues identified in the cpufeature framework here [1], so the
>> rest of the defensive changes were added.
>>
>> [1]: https://www.spinics.net/lists/arm-kernel/msg808238.html
>>>
>>> Possibly a dumb question -- I haven't checked the specs to find out
>>> whether this makes any sense or not.
>>
>> Your point is valid though.
> 
> OK.
> 
> Did you see my comment above about patches for stable?

oops I missed this one.
> 
> [...]
> 
> Cheers
> ---Dave
> 


* Re: [PATCH v3 2/3] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-06-24  7:13         ` Amit Daniel Kachhap
@ 2020-06-24  7:49           ` Will Deacon
  2020-06-24 11:55             ` Amit Daniel Kachhap
  0 siblings, 1 reply; 10+ messages in thread
From: Will Deacon @ 2020-06-24  7:49 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Vincenzo Frascino,
	Dave Martin, linux-arm-kernel

On Wed, Jun 24, 2020 at 12:43:20PM +0530, Amit Daniel Kachhap wrote:
> On 6/23/20 8:17 PM, Dave Martin wrote:
> > On Tue, Jun 23, 2020 at 06:47:02PM +0530, Amit Daniel Kachhap wrote:
> > > On 6/22/20 8:05 PM, Dave Martin wrote:
> > > > On Thu, Jun 18, 2020 at 10:40:28AM +0530, Amit Daniel Kachhap wrote:
> > > > > This patch modifies the address authentication cpufeature type to EXACT
> > > > > from earlier LOWER_SAFE as the different configurations added for Armv8.6
> > > > > enhanced PAC have different behaviour and there is no tunable to enable the
> > > > > lower safe versions.

$ git grep 'This patch' Documentation/process/submitting-patches.rst

> > > > Does a corresponding patch need to go to stable?  As things stand, I
> > > > think older kernels that support pointer auth will go wrong on v8.7+
> > > > hardware that has these features.
> 
> Yes, it makes sense to add this patch to the stable version.
> 
> @Catalin, @Will - Shall I send this one with a fix subject line? Please let
> me know your suggestion.

What exactly goes wrong? As far as I can tell, we will taint and probably
(?) crash shortly afterwards if you run an old kernel on hardware with
mismatched pointer auth. If that's the case, I don't think this warrants
stable (especially since this file has seen enough churn that the backports
are likely risky).

Can you confirm that things only go wrong if we have a mismatched system,
and ideally provide an example of what the crash looks like (you should
be able to get a fast model to emulate this setup)?

Cheers,

Will


* Re: [PATCH v3 2/3] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-06-24  7:49           ` Will Deacon
@ 2020-06-24 11:55             ` Amit Daniel Kachhap
  0 siblings, 0 replies; 10+ messages in thread
From: Amit Daniel Kachhap @ 2020-06-24 11:55 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Vincenzo Frascino,
	Dave Martin, linux-arm-kernel

Hi,

On 6/24/20 1:19 PM, Will Deacon wrote:
> On Wed, Jun 24, 2020 at 12:43:20PM +0530, Amit Daniel Kachhap wrote:
>> On 6/23/20 8:17 PM, Dave Martin wrote:
>>> On Tue, Jun 23, 2020 at 06:47:02PM +0530, Amit Daniel Kachhap wrote:
>>>> On 6/22/20 8:05 PM, Dave Martin wrote:
>>>>> On Thu, Jun 18, 2020 at 10:40:28AM +0530, Amit Daniel Kachhap wrote:
>>>>>> This patch modifies the address authentication cpufeature type to EXACT
>>>>>> from earlier LOWER_SAFE as the different configurations added for Armv8.6
>>>>>> enhanced PAC have different behaviour and there is no tunable to enable the
>>>>>> lower safe versions.
> 
> $ git grep 'This patch' Documentation/process/submitting-patches.rst

OK, sure. I will update.

> 
>>>>> Does a corresponding patch need to go to stable?  As things stand, I
>>>>> think older kernels that support pointer auth will go wrong on v8.7+
>>>>> hardware that has these features.
>>
>> Yes It makes to add this patch to the stable version.
>>
>> @Catalin, @Will - Shall I send this one with a fix subject line? Please let
>> me know your suggestion.
> 
> What exactly goes wrong? As far as I can tell, we will taint and probably
> (?) crash shortly afterwards if you run an old kernel on hardware with
> mismatched pointer auth.

Yes, you are right that 8.6+ hardware will crash with an older kernel if
a ptrauth CPU variation occurs. Basically, EnhancedPAC2 (with the extra
XOR) and the old PAC algorithm may not be compatible, and a crash may
occur when a thread migrates between CPUs in the window between paciasp
and the matching autiasp [1].

[1]: https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/arm-architecture-developments-armv8-6-a
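
As a sketch, any non-leaf C function built with -mbranch-protection=pac-ret
has this window (the asm in the comments is roughly what the compiler
emits; the function names are made up):

  static void __attribute__((noinline)) callee(void) { }

  void caller(void)
  {
  	/* prologue: paciasp; stp x29, x30, ... -- LR signed on this CPU */
  	callee();
  	/*
  	 * A migration while inside callee() to a CPU whose PAC
  	 * algorithm differs means the epilogue below authenticates
  	 * with the wrong logic and the return address check fails.
  	 */
  	/* epilogue: ldp x29, x30, ...; autiasp; ret */
  }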

> If that's the case, I don't think this warrants
> stable (especially since this file has seen enough churn that the backports
> are likely risky).

OK. I was merely seeking an opinion :).

> 
> Can you confirm that things only go wrong if we have a mismatched system,
> and ideally provide an example of what the crash looks like (you should
> be able to get a fast model to emulate this setup)?

It is difficult to emulate this, as the fast model only provides a
cluster-level PAC config option and not a per-CPU one. I will check on
it further.

Thanks,
Amit Daniel

> 
> Cheers,
> 
> Will
> 

