* [PATCH v2 0/4] arm64: add Armv8.6 pointer authentication
@ 2020-04-14  5:31 Amit Daniel Kachhap
  2020-04-14  5:31 ` [PATCH v2 1/4] arm64: cpufeature: Extract meta-capability scope from list Amit Daniel Kachhap
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Amit Daniel Kachhap @ 2020-04-14  5:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Dave Martin

Hi all,

This patch series adds support for the mandatory pointer authentication
enhancements introduced in Armv8.6. These features are:

 * An enhanced PAC generation algorithm.
 * A fault (FPAC) generated when an authenticate instruction fails.

No code change is needed for the first feature; for the second, a ptrauth
fault handler is added.
More details can be found here [1].

Changes since v1:
* Patch 1 is newly added and fixes a meta-capability cpucap check.
* Patch 3 is split out from the earlier patch 1 ("arm64: ptrauth: add pointer
  authentication Armv8.6 enhanced feature"), as suggested by Will
  [2].

This series is based on kernel version v5.7-rc1.

Note: patch 4 may need some changes with Mark Brown's work on whitelisting
of hint instructions [3].

Regards,
Amit

[1]: https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/arm-architecture-developments-armv8-6-a
[2]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-March/715443.html
[3]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-March/720280.html

Amit Daniel Kachhap (4):
  arm64: cpufeature: Extract meta-capability scope from list
  arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature
  arm64: cpufeature: Modify address authentication cpufeature to exact
  arm64: kprobe: disable probe of fault prone ptrauth instruction

 arch/arm64/include/asm/esr.h           |  4 +++-
 arch/arm64/include/asm/exception.h     |  1 +
 arch/arm64/include/asm/insn.h          | 13 +++++++------
 arch/arm64/include/asm/sysreg.h        | 24 ++++++++++++++++--------
 arch/arm64/kernel/cpufeature.c         | 22 +++++++++++-----------
 arch/arm64/kernel/entry-common.c       | 25 +++++++++++++++++++++++++
 arch/arm64/kernel/insn.c               |  1 +
 arch/arm64/kernel/probes/decode-insn.c |  2 +-
 arch/arm64/kernel/traps.c              | 18 ++++++++++++++++++
 9 files changed, 83 insertions(+), 27 deletions(-)

-- 
2.17.1


* [PATCH v2 1/4] arm64: cpufeature: Extract meta-capability scope from list
  2020-04-14  5:31 [PATCH v2 0/4] arm64: add Armv8.6 pointer authentication Amit Daniel Kachhap
@ 2020-04-14  5:31 ` Amit Daniel Kachhap
  2020-05-06 15:00   ` Catalin Marinas
  2020-04-14  5:31 ` [PATCH v2 2/4] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature Amit Daniel Kachhap
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Amit Daniel Kachhap @ 2020-04-14  5:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Dave Martin

This fixes the earlier commit 3ff047f6971d3c ("arm64: cpufeature: Fix
meta-capability cpufeature check"). That commit removed the dependency of
the individual meta-cpucap checks on the array entry order, but it did so
specifically for cpufeatures of system scope.

However, the same dependency can occur for cpufeatures of boot scope, such
as ARM64_HAS_ADDRESS_AUTH, so this patch renames the helper function
__system_matches_cap to __cpufeature_matches_cap and invokes the match
handler with the scope fetched from the cpufeatures array list.

Fixes: 3ff047f6971d3c ("arm64: cpufeature: Fix meta-capability cpufeature check")
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
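[ For reference, and not part of the diff: the scope used here comes from
  the capability's type field. The existing helper in
  arch/arm64/include/asm/cpufeature.h looks roughly like this, which is why
  the renamed __cpufeature_matches_cap() can pass each entry's own default
  scope instead of hard-coding SCOPE_SYSTEM:

	static inline int
	cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
	{
		/* The default scope is encoded in the capability type bits */
		return cap->type & ARM64_CPUCAP_SCOPE_MASK;
	}
]
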
 arch/arm64/kernel/cpufeature.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9fac745aa7bb..08795025409c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -116,7 +116,7 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
 
 static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
 
-static bool __system_matches_cap(unsigned int n);
+static bool __cpufeature_matches_cap(unsigned int n);
 
 /*
  * NOTE: Any changes to the visibility of features should be kept in
@@ -1373,15 +1373,15 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
 			     int __unused)
 {
-	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+	return __cpufeature_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
+	       __cpufeature_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
 }
 
 static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
 			     int __unused)
 {
-	return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
-	       __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
+	return __cpufeature_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
+	       __cpufeature_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
 }
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
@@ -2251,16 +2251,16 @@ bool this_cpu_has_cap(unsigned int n)
 /*
  * This helper function is used in a narrow window when,
  * - The system wide safe registers are set with all the SMP CPUs and,
- * - The SYSTEM_FEATURE cpu_hwcaps may not have been set.
+ * - The cpu_hwcaps may not have been set.
  * In all other cases cpus_have_{const_}cap() should be used.
  */
-static bool __system_matches_cap(unsigned int n)
+static bool __cpufeature_matches_cap(unsigned int n)
 {
 	if (n < ARM64_NCAPS) {
 		const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n];
 
 		if (cap)
-			return cap->matches(cap, SCOPE_SYSTEM);
+			return cap->matches(cap, cpucap_default_scope(cap));
 	}
 	return false;
 }
@@ -2337,7 +2337,7 @@ void __init setup_cpu_features(void)
 static bool __maybe_unused
 cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
 {
-	return (__system_matches_cap(ARM64_HAS_PAN) && !__system_matches_cap(ARM64_HAS_UAO));
+	return (__cpufeature_matches_cap(ARM64_HAS_PAN) && !__cpufeature_matches_cap(ARM64_HAS_UAO));
 }
 
 static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
-- 
2.17.1


* [PATCH v2 2/4] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature
  2020-04-14  5:31 [PATCH v2 0/4] arm64: add Armv8.6 pointer authentication Amit Daniel Kachhap
  2020-04-14  5:31 ` [PATCH v2 1/4] arm64: cpufeature: Extract meta-capability scope from list Amit Daniel Kachhap
@ 2020-04-14  5:31 ` Amit Daniel Kachhap
  2020-05-06 16:31   ` Catalin Marinas
  2020-04-14  5:31 ` [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
  2020-04-14  5:31 ` [PATCH v2 4/4] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
  3 siblings, 1 reply; 18+ messages in thread
From: Amit Daniel Kachhap @ 2020-04-14  5:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Dave Martin

This patch adds changes for the Pointer Authentication enhanced features
that are mandatory in Armv8.6. These features are:

* An enhanced PAC generation logic, which makes it harder to find the
  correct PAC value of an authenticated pointer. No code change is needed
  for this.

* A fault (FPAC) is now generated when a ptrauth authentication instruction
  fails to authenticate the PAC present in the address. This differs from
  the earlier behaviour, where such a failure just set an error code in the
  pointer's top byte and relied on a subsequent load/store to abort. The
  ptrauth instructions which may cause this fault are autiasp, retaa, etc.

The above features are now represented by additional configurations
of the Address Authentication cpufeature.

An FPAC fault received in the kernel is treated as an illegal instruction,
so SIGILL is injected with ILL_ILLOPN as the signal code. Note that this
differs from the earlier Armv8.3 ptrauth, where SIGSEGV is issued on
pointer authentication failures.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
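[ Illustration only, not part of the patch: a minimal user-space sketch of
  the new behaviour. It assumes an Armv8.6 CPU with FPAC and that the
  kernel has enabled the task's APIA key; the HINT aliases are used so it
  assembles without an armv8.3-a toolchain, and the PAC bit chosen (bit 53)
  is an assumption that holds for the usual 39/48-bit VA configurations:

	#include <signal.h>
	#include <stdio.h>
	#include <stdlib.h>

	static void sigill_handler(int sig, siginfo_t *info, void *uc)
	{
		/* With FPAC, the failed AUTIASP traps here synchronously */
		printf("SIGILL at %p, si_code=%d\n", info->si_addr,
		       info->si_code);
		exit(0);
	}

	int main(void)
	{
		struct sigaction sa = {
			.sa_sigaction	= sigill_handler,
			.sa_flags	= SA_SIGINFO,
		};

		sigaction(SIGILL, &sa, NULL);

		/*
		 * hint #25 is PACIASP (sign x30 against sp) and hint #29 is
		 * AUTIASP. Flipping a PAC bit in between makes the
		 * authentication fail.
		 */
		asm volatile("hint	#25\n\t"
			     "eor	x30, x30, #(1 << 53)\n\t"
			     "hint	#29"
			     ::: "x30", "memory");

		/* Pre-Armv8.6: no fault here, x30 is merely poisoned */
		printf("no FPAC trap (pre-Armv8.6 behaviour)\n");
		exit(0);
	}
]
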
 arch/arm64/include/asm/esr.h       |  4 +++-
 arch/arm64/include/asm/exception.h |  1 +
 arch/arm64/include/asm/sysreg.h    | 24 ++++++++++++++++--------
 arch/arm64/kernel/entry-common.c   | 25 +++++++++++++++++++++++++
 arch/arm64/kernel/traps.c          | 18 ++++++++++++++++++
 5 files changed, 63 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 6a395a7e6707..f4faff45edd1 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -35,7 +35,9 @@
 #define ESR_ELx_EC_SYS64	(0x18)
 #define ESR_ELx_EC_SVE		(0x19)
 #define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
-/* Unallocated EC: 0x1b - 0x1E */
+/* Unallocated EC: 0x1B */
+#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
+/* Unallocated EC: 0x1D - 0x1E */
 #define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
 #define ESR_ELx_EC_IABT_LOW	(0x20)
 #define ESR_ELx_EC_IABT_CUR	(0x21)
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 7a6e81ca23a8..de76772fac81 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -46,4 +46,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr);
 #endif	/* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index ebc622432831..53904b968e5d 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -621,14 +621,22 @@
 #define ID_AA64ISAR1_APA_SHIFT		4
 #define ID_AA64ISAR1_DPB_SHIFT		0
 
-#define ID_AA64ISAR1_APA_NI		0x0
-#define ID_AA64ISAR1_APA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_API_NI		0x0
-#define ID_AA64ISAR1_API_IMP_DEF	0x1
-#define ID_AA64ISAR1_GPA_NI		0x0
-#define ID_AA64ISAR1_GPA_ARCHITECTED	0x1
-#define ID_AA64ISAR1_GPI_NI		0x0
-#define ID_AA64ISAR1_GPI_IMP_DEF	0x1
+#define ID_AA64ISAR1_APA_NI			0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_APA_ARCH_EPAC		0x2
+#define ID_AA64ISAR1_APA_ARCH_EPAC2		0x3
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_API_NI			0x0
+#define ID_AA64ISAR1_API_IMP_DEF		0x1
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC		0x2
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2		0x3
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC	0x4
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB	0x5
+#define ID_AA64ISAR1_GPA_NI			0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED		0x1
+#define ID_AA64ISAR1_GPI_NI			0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF		0x1
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index c839b5bf1904..870ff488708b 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -15,6 +15,7 @@
 #include <asm/exception.h>
 #include <asm/kprobes.h>
 #include <asm/mmu.h>
+#include <asm/pointer_auth.h>
 #include <asm/sysreg.h>
 
 static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
@@ -66,6 +67,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
 }
 NOKPROBE_SYMBOL(el1_dbg);
 
+static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	local_daif_inherit(regs);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el1_fpac);
+
 asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -92,6 +100,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el1_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el1_fpac(regs, esr);
+		break;
 	default:
 		el1_inv(regs, esr);
 	};
@@ -219,6 +230,14 @@ static void notrace el0_svc(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(el0_svc);
 
+static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_ptrauth_fault(regs, esr);
+}
+NOKPROBE_SYMBOL(el0_fpac);
+
 asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
@@ -261,6 +280,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BRK64:
 		el0_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el0_fpac(regs, esr);
+		break;
 	default:
 		el0_inv(regs, esr);
 	}
@@ -324,6 +346,9 @@ asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
 	case ESR_ELx_EC_BKPT32:
 		el0_dbg(regs, esr);
 		break;
+	case ESR_ELx_EC_FPAC:
+		el0_fpac(regs, esr);
+		break;
 	default:
 		el0_inv(regs, esr);
 	}
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index cf402be5c573..0ef9e9880194 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -411,6 +411,23 @@ void do_undefinstr(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(do_undefinstr);
 
+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
+{
+	const char *desc;
+
+	BUG_ON(!user_mode(regs));
+
+	/* Even if we chose not to use PTRAUTH, the hardware might still trap */
+	if (unlikely(!(system_supports_address_auth()))) {
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		return;
+	}
+
+	desc = "pointer authentication fault";
+	arm64_notify_die(desc, regs, SIGILL, ILL_ILLOPN, (void __user *)regs->pc, esr);
+}
+NOKPROBE_SYMBOL(do_ptrauth_fault);
+
 #define __user_cache_maint(insn, address, res)			\
 	if (address >= user_addr_max()) {			\
 		res = -EFAULT;					\
@@ -763,6 +780,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
 	[ESR_ELx_EC_SVE]		= "SVE",
 	[ESR_ELx_EC_ERET]		= "ERET/ERETAA/ERETAB",
+	[ESR_ELx_EC_FPAC]		= "FPAC",
 	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
 	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
 	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
-- 
2.17.1


* [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-04-14  5:31 [PATCH v2 0/4] arm64: add Armv8.6 pointer authentication Amit Daniel Kachhap
  2020-04-14  5:31 ` [PATCH v2 1/4] arm64: cpufeature: Extract meta-capability scope from list Amit Daniel Kachhap
  2020-04-14  5:31 ` [PATCH v2 2/4] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature Amit Daniel Kachhap
@ 2020-04-14  5:31 ` Amit Daniel Kachhap
  2020-05-06 17:13   ` Catalin Marinas
  2020-04-14  5:31 ` [PATCH v2 4/4] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
  3 siblings, 1 reply; 18+ messages in thread
From: Amit Daniel Kachhap @ 2020-04-14  5:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Dave Martin

This patch changes the address authentication cpufeature type from the
earlier LOWER_SAFE to EXACT, as the different configurations added for the
Armv8.6 enhanced PAC have different behaviours and there is no tunable to
enable only the lower safe versions. The safe value is set to 0.

After this change, if a secondary CPU's configuration differs from the boot
CPU's, that CPU is marked tainted. KVM guests may completely disable
address authentication if any such variation is detected.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
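[ For context, not part of the diff: FTR_EXACT changes how the sanitised
  value of the field is computed when CPUs disagree. Instead of taking the
  lower of the two values, the safe value (0 here, i.e. "feature absent")
  is used, which is what ends up tainting mismatching secondaries. The
  relevant logic in cpufeature.c is roughly:

	static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp,
					s64 new, s64 cur)
	{
		s64 ret = 0;

		switch (ftrp->type) {
		case FTR_EXACT:
			/* Any mismatch collapses to the safe value */
			ret = ftrp->safe_val;
			break;
		case FTR_LOWER_SAFE:
			ret = new < cur ? new : cur;
			break;
		case FTR_HIGHER_OR_ZERO_SAFE:
			if (!cur || !new)
				break;
			/* Fall through */
		case FTR_HIGHER_SAFE:
			ret = new > cur ? new : cur;
			break;
		default:
			BUG();
		}

		return ret;
	}
]
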
 arch/arm64/kernel/cpufeature.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 08795025409c..599b03df2f93 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
-- 
2.17.1


* [PATCH v2 4/4] arm64: kprobe: disable probe of fault prone ptrauth instruction
  2020-04-14  5:31 [PATCH v2 0/4] arm64: add Armv8.6 pointer authentication Amit Daniel Kachhap
                   ` (2 preceding siblings ...)
  2020-04-14  5:31 ` [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
@ 2020-04-14  5:31 ` Amit Daniel Kachhap
  3 siblings, 0 replies; 18+ messages in thread
From: Amit Daniel Kachhap @ 2020-04-14  5:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Mark Brown, James Morse, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Dave Martin

This patch disables probing of the authenticate ptrauth instruction
(AUTIASP), which falls in the hint instruction space. This disallows
probing of the authenticate instruction in the kernel, as it may lead to
ptrauth faults with the addition of the Armv8.6 enhanced ptrauth
features.

The corresponding PAC-inserting ptrauth instruction (PACIASP) is not
disabled and can still be probed.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
This patch may need some changes with Mark Brown's work on whitelisting
of hint instructions [1].

[1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-March/720280.html
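
For reference, the new enum value can be cross-checked against the A64 HINT
encoding, where op2 occupies bits [7:5] and CRm bits [11:8] of the
instruction; AUTIASP is HINT #29 (CRm = 0b0011, op2 = 0b101). A small
standalone sketch:

	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		uint32_t crm = 0x3, op2 = 0x5;

		/* The HINT immediate is CRm:op2, so AUTIASP is HINT #29 */
		assert(((crm << 3) | op2) == 29);

		/* At the in-instruction bit positions used by the enum */
		assert(((crm << 8) | (op2 << 5)) == ((0x3 << 8) | (0x5 << 5)));

		return 0;
	}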

 arch/arm64/include/asm/insn.h          | 13 +++++++------
 arch/arm64/kernel/insn.c               |  1 +
 arch/arm64/kernel/probes/decode-insn.c |  2 +-
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index bb313dde58a4..2e01db04c885 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -40,12 +40,13 @@ enum aarch64_insn_encoding_class {
 };
 
 enum aarch64_insn_hint_op {
-	AARCH64_INSN_HINT_NOP	= 0x0 << 5,
-	AARCH64_INSN_HINT_YIELD	= 0x1 << 5,
-	AARCH64_INSN_HINT_WFE	= 0x2 << 5,
-	AARCH64_INSN_HINT_WFI	= 0x3 << 5,
-	AARCH64_INSN_HINT_SEV	= 0x4 << 5,
-	AARCH64_INSN_HINT_SEVL	= 0x5 << 5,
+	AARCH64_INSN_HINT_NOP		= 0x0 << 5,
+	AARCH64_INSN_HINT_YIELD		= 0x1 << 5,
+	AARCH64_INSN_HINT_WFE		= 0x2 << 5,
+	AARCH64_INSN_HINT_WFI		= 0x3 << 5,
+	AARCH64_INSN_HINT_SEV		= 0x4 << 5,
+	AARCH64_INSN_HINT_SEVL		= 0x5 << 5,
+	AARCH64_INSN_HINT_AUTIASP	= (0x3 << 8) | (0x5 << 5),
 };
 
 enum aarch64_insn_imm_type {
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 4a9e773a177f..87f7c8a46b31 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -63,6 +63,7 @@ bool __kprobes aarch64_insn_is_nop(u32 insn)
 	case AARCH64_INSN_HINT_WFI:
 	case AARCH64_INSN_HINT_SEV:
 	case AARCH64_INSN_HINT_SEVL:
+	case AARCH64_INSN_HINT_AUTIASP:
 		return false;
 	default:
 		return true;
diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index b78fac9e546c..a7caf84a9baa 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -42,7 +42,7 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 			     != AARCH64_INSN_SPCLREG_DAIF;
 
 		/*
-		 * The HINT instruction is is problematic when single-stepping,
+		 * The HINT instruction is problematic when single-stepping,
 		 * except for the NOP case.
 		 */
 		if (aarch64_insn_is_hint(insn))
-- 
2.17.1


* Re: [PATCH v2 1/4] arm64: cpufeature: Extract meta-capability scope from list
  2020-04-14  5:31 ` [PATCH v2 1/4] arm64: cpufeature: Extract meta-capability scope from list Amit Daniel Kachhap
@ 2020-05-06 15:00   ` Catalin Marinas
  2020-05-06 16:14     ` Suzuki K Poulose
  0 siblings, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2020-05-06 15:00 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	Dave Martin, linux-arm-kernel

On Tue, Apr 14, 2020 at 11:01:51AM +0530, Amit Daniel Kachhap wrote:
> This fixes the earlier commit 3ff047f6971d3c ("arm64: cpufeature: Fix
> meta-capability cpufeature check"). This patch was added to fix the
> dependency of individual meta-cpucaps checks on the array entry order. This
> dependency was specifically added for cpufeature of system scope.
> 
> However this dependency can occur for cpufeature of boot scope such as
> ARM64_HAS_ADDRESS_AUTH so this patch renames the helper function
> __system_matches_cap to __cpufeature_matches_cap and invokes the match
> handler with the scope fetched from the cpufeatures array list.
> 
> Fixes: 3ff047f6971d3c ("arm64: cpufeature: Fix meta-capability cpufeature check")
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Does this patch need to be merged in 5.7? The fixed commit went in
5.7-rc1 but it doesn't look to me like we'd have a problem without this
fix. Basically we read the sanitised regs with SYSTEM_SCOPE rather than
the current CPU regs. These are already populated correctly to the
register values of the boot CPU.

Otherwise I'm fine with the patch, just disputing the Fixes tag.

-- 
Catalin

* Re: [PATCH v2 1/4] arm64: cpufeature: Extract meta-capability scope from list
  2020-05-06 15:00   ` Catalin Marinas
@ 2020-05-06 16:14     ` Suzuki K Poulose
  2020-05-07 15:27       ` Amit Kachhap
  0 siblings, 1 reply; 18+ messages in thread
From: Suzuki K Poulose @ 2020-05-06 16:14 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Kees Cook, Kristina Martsenko, Mark Brown,
	James Morse, Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Dave Martin, linux-arm-kernel

On Wed, May 06, 2020 at 04:00:01PM +0100, Catalin Marinas wrote:
> On Tue, Apr 14, 2020 at 11:01:51AM +0530, Amit Daniel Kachhap wrote:
> > This fixes the earlier commit 3ff047f6971d3c ("arm64: cpufeature: Fix
> > meta-capability cpufeature check"). This patch was added to fix the
> > dependency of individual meta-cpucaps checks on the array entry order. This
> > dependency was specifically added for cpufeature of system scope.
> > 
> > However this dependency can occur for cpufeature of boot scope such as
> > ARM64_HAS_ADDRESS_AUTH so this patch renames the helper function
> > __system_matches_cap to __cpufeature_matches_cap and invokes the match
> > handler with the scope fetched from the cpufeatures array list.
> > 
> > Fixes: 3ff047f6971d3c ("arm64: cpufeature: Fix meta-capability cpufeature check")
> > Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> 
> Does this patch need to be merged in 5.7? The fixed commit went in
> 5.7-rc1 but it doesn't look to me like we'd have a problem without this
> fix. Basically we read the sanitised regs with SYSTEM_SCOPE rather than

Yes, this fixes an actual issue. The code is fine for the boot CPU, where
we detect whether the system supports the capability. However, when
verifying the secondary CPUs, this check still succeeds because we only
look at the sanitised values, and a defective CPU could escape being parked.

I think something like the following is a better idea, to make sure we
do the appropriate checks.


diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9fac745aa7bb..5df74490d7d3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1373,15 +1373,23 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
-			     int __unused)
+			     int scope)
 {
-	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+	if (scope == SCOPE_SYSTEM)
+		return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
+		       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+	else
+		return this_cpu_has_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
+			this_cpu_has_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
 }
 
 static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
-			     int __unused)
+			     int scope)
 {
-	return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
-	       __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
+	if (scope == SCOPE_SYSTEM)
+		return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
+		       __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
+	else
+		return this_cpu_has_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
+			this_cpu_has_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
 }
 #endif /* CONFIG_ARM64_PTR_AUTH */
 

* Re: [PATCH v2 2/4] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature
  2020-04-14  5:31 ` [PATCH v2 2/4] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature Amit Daniel Kachhap
@ 2020-05-06 16:31   ` Catalin Marinas
  2020-05-07 15:28     ` Amit Kachhap
  0 siblings, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2020-05-06 16:31 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	Dave Martin, linux-arm-kernel

On Tue, Apr 14, 2020 at 11:01:52AM +0530, Amit Daniel Kachhap wrote:
> This patch add changes for Pointer Authentication enhanced features
> mandatory for Armv8.6. These features are,
> 
> * Uses an enhanced PAC generation logic which hardens finding the correct
>   PAC value of the authenticated pointer. However, no code change done
>   for this.
> 
> * Fault(FPAC) is generated now when the ptrauth authentication instruction
>   fails in authenticating the PAC present in the address. This is different
>   from earlier case when such failures just adds an error code in the top
>   byte and waits for subsequent load/store to abort. The ptrauth
>   instructions which may cause this fault are autiasp, retaa etc.
> 
> The above features are now represented by additional configurations
> for the Address Authentication cpufeature.
> 
> The fault received in the kernel due to FPAC is treated as Illegal
> instruction and hence signal SIGILL is injected with ILL_ILLOPN as the
> signal code. Note that this is different from earlier ARMv8.3 ptrauth
> where signal SIGSEGV is issued due to Pointer authentication failures.

Sorry if it was discussed before. Was there any reasoning behind
choosing ILL_ILLOPN vs something else like ILL_ILLADR?

> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index cf402be5c573..0ef9e9880194 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -411,6 +411,23 @@ void do_undefinstr(struct pt_regs *regs)
>  }
>  NOKPROBE_SYMBOL(do_undefinstr);
>  
> +void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
> +{
> +	const char *desc;
> +
> +	BUG_ON(!user_mode(regs));
> +
> +	/* Even if we chose not to use PTRAUTH, the hardware might still trap */
> +	if (unlikely(!(system_supports_address_auth()))) {

Nitpick: no need for braces around system_supports_address_auth().

> +		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
> +		return;
> +	}

So when do we execute this path? Is it on a big.LITTLE system where some
CPUs don't have the 8.6 behaviour? It's the same AUT instruction that
triggered it, so I don't think we should report a different ILL code.

It's a bit unfortunate that this new ptrauth feature doesn't have an
opt-in, so user-space would have to cope with both behaviours. In this
case I don't see why we need to differentiate on
system_supports_address_auth().

While the new behaviour is a lot more useful in practice, I wonder
whether we could still emulate the old one by setting regs->pc to a
faulting address and returning to user.

> +
> +	desc = "pointer authentication fault";
> +	arm64_notify_die(desc, regs, SIGILL, ILL_ILLOPN, (void __user *)regs->pc, esr);

Nitpick: you could pass the string directly, no need for an additional
variable.

-- 
Catalin

* Re: [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-04-14  5:31 ` [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
@ 2020-05-06 17:13   ` Catalin Marinas
  2020-05-08 16:21     ` Amit Kachhap
  0 siblings, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2020-05-06 17:13 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	Dave Martin, linux-arm-kernel

On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
> This patch modifies the address authentication cpufeature type to EXACT
> from earlier LOWER_SAFE as the different configurations added for Armv8.6
> enhanced PAC have different behaviour and there is no tunable to enable the
> lower safe versions. The safe value is set as 0.
> 
> After this change, if there is any variation in configurations in secondary
> cpus from boot cpu then those cpus are marked tainted. The KVM guests may
> completely disable address authentication if there is any such variations
> detected.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 08795025409c..599b03df2f93 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>  	ARM64_FTR_END,

Is this sufficient? If we have the boot CPU already enabling the ptrauth
and we get a secondary CPU with a different ISAR1 field that matches the
address auth in cpufeature.c, we still allow it to boot. We no longer
report the feature to the user system_supports_address_auth() is true
while system_supports_generic_auth() would be false as it checks the
sanitised feature registers.

-- 
Catalin

* Re: [PATCH v2 1/4] arm64: cpufeature: Extract meta-capability scope from list
  2020-05-06 16:14     ` Suzuki K Poulose
@ 2020-05-07 15:27       ` Amit Kachhap
  0 siblings, 0 replies; 18+ messages in thread
From: Amit Kachhap @ 2020-05-07 15:27 UTC (permalink / raw)
  To: Suzuki K Poulose, Catalin Marinas
  Cc: Mark Rutland, Kees Cook, Kristina Martsenko, Mark Brown,
	James Morse, Vincenzo Frascino, Will Deacon, Dave Martin,
	linux-arm-kernel

Hi,

On 5/6/20 9:44 PM, Suzuki K Poulose wrote:
> On Wed, May 06, 2020 at 04:00:01PM +0100, Catalin Marinas wrote:
>> On Tue, Apr 14, 2020 at 11:01:51AM +0530, Amit Daniel Kachhap wrote:
>>> This fixes the earlier commit 3ff047f6971d3c ("arm64: cpufeature: Fix
>>> meta-capability cpufeature check"). This patch was added to fix the
>>> dependency of individual meta-cpucaps checks on the array entry order. This
>>> dependency was specifically added for cpufeature of system scope.
>>>
>>> However this dependency can occur for cpufeature of boot scope such as
>>> ARM64_HAS_ADDRESS_AUTH so this patch renames the helper function
>>> __system_matches_cap to __cpufeature_matches_cap and invokes the match
>>> handler with the scope fetched from the cpufeatures array list.
>>>
>>> Fixes: 3ff047f6971d3c ("arm64: cpufeature: Fix meta-capability cpufeature check")
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>
>> Does this patch need to be merged in 5.7? The fixed commit went in
>> 5.7-rc1 but it doesn't look to me like we'd have a problem without this
>> fix. Basically we read the sanitised regs with SYSTEM_SCOPE rather than
> 
> Yes, this fixes an actual issue. The code is fine for BOOT cpu when we
> detect whether the system supports the capability. However, for verifying
> the secondary CPUs, this still succeeds as we only check the sanitised
> values and a defective CPU could escape from being parked.
> 
> I think something like the following is a better idea, to make sure we
> do the appropriate checks.

This approach looks better. I will use it in the next iteration.

Cheers,
Amit
> 
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9fac745aa7bb..5df74490d7d3 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1373,15 +1373,23 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
>   static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
>   			     int __unused)
>   {
> -	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> -	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> +	if (scope == SCOPE_SYSTEM)
> +		return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> +		       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> +	else
> +		return this_cpu_has_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> +			this_cpu_has_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
>   }
>   
>   static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
> -			     int __unused)
> +			     int scope)
>   {
> -	return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
> -	       __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
> +	if (scope == SCOPE_SYSTEM)
> +		return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
> +		       __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
> +	else
> +		return this_cpu_has_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
> +			this_cpu_has_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
>   }
>   #endif /* CONFIG_ARM64_PTR_AUTH */
>   
> 

* Re: [PATCH v2 2/4] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature
  2020-05-06 16:31   ` Catalin Marinas
@ 2020-05-07 15:28     ` Amit Kachhap
  2020-05-12 17:12       ` Catalin Marinas
  0 siblings, 1 reply; 18+ messages in thread
From: Amit Kachhap @ 2020-05-07 15:28 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	Dave Martin, linux-arm-kernel

Hi,

On 5/6/20 10:01 PM, Catalin Marinas wrote:
> On Tue, Apr 14, 2020 at 11:01:52AM +0530, Amit Daniel Kachhap wrote:
>> This patch add changes for Pointer Authentication enhanced features
>> mandatory for Armv8.6. These features are,
>>
>> * Uses an enhanced PAC generation logic which hardens finding the correct
>>    PAC value of the authenticated pointer. However, no code change done
>>    for this.
>>
>> * Fault(FPAC) is generated now when the ptrauth authentication instruction
>>    fails in authenticating the PAC present in the address. This is different
>>    from earlier case when such failures just adds an error code in the top
>>    byte and waits for subsequent load/store to abort. The ptrauth
>>    instructions which may cause this fault are autiasp, retaa etc.
>>
>> The above features are now represented by additional configurations
>> for the Address Authentication cpufeature.
>>
>> The fault received in the kernel due to FPAC is treated as Illegal
>> instruction and hence signal SIGILL is injected with ILL_ILLOPN as the
>> signal code. Note that this is different from earlier ARMv8.3 ptrauth
>> where signal SIGSEGV is issued due to Pointer authentication failures.
> 
> Sorry if it was discussed before. Was there any reasoning behind
> choosing ILL_ILLOPN vs something else like ILL_ILLADR?

No, it was not discussed earlier. I used it as I thought that autiasp
failed here due to the incorrect operands provided (sp, key, lr), although
sp and lr are addresses.

> 
>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>> index cf402be5c573..0ef9e9880194 100644
>> --- a/arch/arm64/kernel/traps.c
>> +++ b/arch/arm64/kernel/traps.c
>> @@ -411,6 +411,23 @@ void do_undefinstr(struct pt_regs *regs)
>>   }
>>   NOKPROBE_SYMBOL(do_undefinstr);
>>   
>> +void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr)
>> +{
>> +	const char *desc;
>> +
>> +	BUG_ON(!user_mode(regs));
>> +
>> +	/* Even if we chose not to use PTRAUTH, the hardware might still trap */
>> +	if (unlikely(!(system_supports_address_auth()))) {
> 
> Nitpick: no need for braces around system_supports_address_auth().

ok

> 
>> +		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
>> +		return;
>> +	}
> 
> So when do we execute this path? Is it on a big.LITTLE system where some
> CPUs don't have the 8.6 behaviour? It's the same AUT instruction that
> triggered it, so I don't think we should report a different ILL code.
> 
> It's a bit unfortunate that this new ptrauth feature doesn't have an
> opt-in, so user-space would have to cope with both behaviours. In this
> case I don't see why we need to differentiate on
> system_supports_address_auth().

I was referring to some similar checks present in do_sve_acc in
arch/arm64/kernel/fpsimd.c to guard against some unknown situations.
Anyway, I should probably drop this as the EC value is already matched.

> 
> While the new behaviour is a lot more useful in practice, I wonder
> whether we could still emulate the old one by setting regs->pc to a
> faulting address and returning to user.

However, even if we set regs->pc to the faulting lr address, this lr may
not be the same as the earlier one, as theoretically there can be two autia
instructions, so I am not sure complete emulation is possible. The other
work would be changing the ESR value, setting the error pattern in the
faulting address, etc., when the error pattern is itself not defined.

> 
>> +
>> +	desc = "pointer authentication fault";
>> +	arm64_notify_die(desc, regs, SIGILL, ILL_ILLOPN, (void __user *)regs->pc, esr);
> 
> Nitpick: you could pass the string directly, no need for an additional
> variable.

ok

Amit

* Re: [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-05-06 17:13   ` Catalin Marinas
@ 2020-05-08 16:21     ` Amit Kachhap
  2020-05-12 17:33       ` Catalin Marinas
  0 siblings, 1 reply; 18+ messages in thread
From: Amit Kachhap @ 2020-05-08 16:21 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	Dave Martin, linux-arm-kernel


On 5/6/20 10:43 PM, Catalin Marinas wrote:
> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>> This patch modifies the address authentication cpufeature type to EXACT
>> from earlier LOWER_SAFE as the different configurations added for Armv8.6
>> enhanced PAC have different behaviour and there is no tunable to enable the
>> lower safe versions. The safe value is set as 0.
>>
>> After this change, if there is any variation in configurations in secondary
>> cpus from boot cpu then those cpus are marked tainted. The KVM guests may
>> completely disable address authentication if there is any such variations
>> detected.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>>   arch/arm64/kernel/cpufeature.c | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 08795025409c..599b03df2f93 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>   	ARM64_FTR_END,
> 
> Is this sufficient? If we have the boot CPU already enabling the ptrauth
> and we get a secondary CPU with a different ISAR1 field that matches the
> address auth in cpufeature.c, we still allow it to boot. We no longer
> report the feature to the user system_supports_address_auth() is true
> while system_supports_generic_auth() would be false as it checks the
> sanitised feature registers.

Yes, agreed. Generic authentication also needs the FTR_EXACT cpufeature type.


* Re: [PATCH v2 2/4] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature
  2020-05-07 15:28     ` Amit Kachhap
@ 2020-05-12 17:12       ` Catalin Marinas
  0 siblings, 0 replies; 18+ messages in thread
From: Catalin Marinas @ 2020-05-12 17:12 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	Dave Martin, linux-arm-kernel

On Thu, May 07, 2020 at 08:58:51PM +0530, Amit Kachhap wrote:
> On 5/6/20 10:01 PM, Catalin Marinas wrote:
> > On Tue, Apr 14, 2020 at 11:01:52AM +0530, Amit Daniel Kachhap wrote:
> > > This patch add changes for Pointer Authentication enhanced features
> > > mandatory for Armv8.6. These features are,
> > > 
> > > * Uses an enhanced PAC generation logic which hardens finding the correct
> > >    PAC value of the authenticated pointer. However, no code change done
> > >    for this.
> > > 
> > > * Fault(FPAC) is generated now when the ptrauth authentication instruction
> > >    fails in authenticating the PAC present in the address. This is different
> > >    from earlier case when such failures just adds an error code in the top
> > >    byte and waits for subsequent load/store to abort. The ptrauth
> > >    instructions which may cause this fault are autiasp, retaa etc.
> > > 
> > > The above features are now represented by additional configurations
> > > for the Address Authentication cpufeature.
> > > 
> > > The fault received in the kernel due to FPAC is treated as Illegal
> > > instruction and hence signal SIGILL is injected with ILL_ILLOPN as the
> > > signal code. Note that this is different from earlier ARMv8.3 ptrauth
> > > where signal SIGSEGV is issued due to Pointer authentication failures.
[...]
> > While the new behaviour is a lot more useful in practice, I wonder
> > whether we could still emulate the old one by setting regs->pc to a
> > faulting address and returning to user.
> 
> However even if we set regs->pc to the faulting lr address but this lr
> may not be same as earlier one as theoretically there can be two autia
> instructions so I am not sure if complete emulation is possible. Also other
> work will be change ESR value, set error pattern in the faulting address etc
> when the error pattern is itself not defined.

You are right, we can't fully emulate this. I was thinking of getting a
fake faulting PC and returning to it so that we trigger a SIGSEGV.
Anyway, I don't think it buys us anything and I really doubt we have
user space relying on the 8.3 behaviour.

-- 
Catalin

* Re: [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-05-08 16:21     ` Amit Kachhap
@ 2020-05-12 17:33       ` Catalin Marinas
  2020-05-13 15:42         ` Amit Kachhap
  0 siblings, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2020-05-12 17:33 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	Dave Martin, linux-arm-kernel

On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
> On 5/6/20 10:43 PM, Catalin Marinas wrote:
> > On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
> > > This patch modifies the address authentication cpufeature type to EXACT
> > > from earlier LOWER_SAFE as the different configurations added for Armv8.6
> > > enhanced PAC have different behaviour and there is no tunable to enable the
> > > lower safe versions. The safe value is set as 0.
> > > 
> > > After this change, if there is any variation in configurations in secondary
> > > cpus from boot cpu then those cpus are marked tainted. The KVM guests may
> > > completely disable address authentication if there is any such variations
> > > detected.
> > > 
> > > Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> > > ---
> > >   arch/arm64/kernel/cpufeature.c | 4 ++--
> > >   1 file changed, 2 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > > index 08795025409c..599b03df2f93 100644
> > > --- a/arch/arm64/kernel/cpufeature.c
> > > +++ b/arch/arm64/kernel/cpufeature.c
> > > @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> > >   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> > >   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> > >   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> > > -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> > > +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
> > >   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> > > -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> > > +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> > >   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> > >   	ARM64_FTR_END,
> > 
> > Is this sufficient? If we have the boot CPU already enabling the ptrauth
> > and we get a secondary CPU with a different ISAR1 field that matches the
> > address auth in cpufeature.c, we still allow it to boot. We no longer
> > report the feature to the user system_supports_address_auth() is true
> > while system_supports_generic_auth() would be false as it checks the
> > sanitised feature registers.
> 
> Yes agreed. Generic authentication also needs EXACT cpufeature type.

I'm still not sure that's sufficient. If we boot the primary CPU with
ptrauth as detected in proc.S, we consider this a boot feature so all
secondary CPUs must have it. Subsequent CPUs are currently checked via
the arm64_features[] definitions and we allow them to boot if the ID is
at least that of the boot CPU. How does this interact with the above
FTR_EXACT changes?

My concern is that we boot with PAC enabled on all CPUs but because of
the FTR_EXACT, the sanitised ID registers no longer report the feature.

-- 
Catalin

* Re: [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-05-12 17:33       ` Catalin Marinas
@ 2020-05-13 15:42         ` Amit Kachhap
  2020-05-20 13:20           ` Suzuki K Poulose
  0 siblings, 1 reply; 18+ messages in thread
From: Amit Kachhap @ 2020-05-13 15:42 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Mark Brown, James Morse, Vincenzo Frascino, Will Deacon,
	Dave Martin, linux-arm-kernel



On 5/12/20 11:03 PM, Catalin Marinas wrote:
> On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
>> On 5/6/20 10:43 PM, Catalin Marinas wrote:
>>> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>>>> This patch modifies the address authentication cpufeature type to EXACT
>>>> from earlier LOWER_SAFE as the different configurations added for Armv8.6
>>>> enhanced PAC have different behaviour and there is no tunable to enable the
>>>> lower safe versions. The safe value is set as 0.
>>>>
>>>> After this change, if there is any variation in configurations in secondary
>>>> cpus from boot cpu then those cpus are marked tainted. The KVM guests may
>>>> completely disable address authentication if there is any such variations
>>>> detected.
>>>>
>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>> ---
>>>>    arch/arm64/kernel/cpufeature.c | 4 ++--
>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>>> index 08795025409c..599b03df2f93 100644
>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>    	ARM64_FTR_END,
>>>
>>> Is this sufficient? If we have the boot CPU already enabling the ptrauth
>>> and we get a secondary CPU with a different ISAR1 field that matches the
>>> address auth in cpufeature.c, we still allow it to boot. We no longer
>>> report the feature to the user system_supports_address_auth() is true
>>> while system_supports_generic_auth() would be false as it checks the
>>> sanitised feature registers.
>>
>> Yes agreed. Generic authentication also needs EXACT cpufeature type.
> 
> I'm still not sure that's sufficient. If we boot the primary CPU with
> ptrauth as detected in proc.S, we consider this a boot feature so all
> secondary CPUs must have it. Subsequent CPUs are currently checked via
> the arm64_features[] definitions and we allow them to boot if the ID is
> at least that of the boot CPU. How does this interact with the above
> FTR_EXACT changes?

Unfortunately, FTR_EXACT does not affect the boot flow directly; it only
marks the CPU as TAINTED and carries on.

> 
> My concern is that we boot with PAC enabled on all CPUs but because of
> the FTR_EXACT, the sanitised ID registers no longer report the feature.
> 

You are right that PAC is enabled in hardware but unreported to the user in
this case.

The issue here is in feature_matches(), which only validates against
entry->min_field_value. If we could modify this value to the boot CPU value
for the FTR_EXACT type, then such a CPU would fail to come online.
Maybe we can introduce a new structure or make arm64_features[] writable
for this.

Something like the code below.
-------------------------------------------------------------------------
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3ae35aadbc20..967928568edf 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -103,6 +103,8 @@ DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
  bool arm64_use_ng_mappings = false;
  EXPORT_SYMBOL(arm64_use_ng_mappings);

+u8 min_cap_value[ARM64_NCAPS];
+
  /*
   * Flag to indicate if we have computed the system wide
   * capabilities based on the boot time active CPUs. This
@@ -616,6 +618,17 @@ static void __init sort_ftr_regs(void)
                BUG_ON(arm64_ftr_regs[i].sys_id < arm64_ftr_regs[i - 1].sys_id);
  }

+static void __init update_cpu_capability_min(u32 sys_reg, s64 new)
+{
+       const struct arm64_cpu_capabilities *caps = arm64_features;
+       for (; caps->matches; caps++) {
+               if (caps->sys_reg == sys_reg) {
+                       if (caps->min_field_value)
+                               min_cap_value[caps->capability] = new;
+               }
+       }
+}
+
  /*
   * Initialise the CPU feature register from Boot CPU values.
   * Also initiliases the strict_mask for the register.
@@ -649,6 +662,8 @@ static void __init init_cpu_ftr_reg(u32 sys_reg, u64 
new)
                         reg->user_val = arm64_ftr_set_value(ftrp,
                                                             reg->user_val,
 
ftrp->safe_val);
+               if (ftrp->type == FTR_EXACT)
+                       update_cpu_capability_min(sys_reg, ftr_new);
         }

         val &= valid_mask;
@@ -1021,8 +1036,10 @@ static bool
  feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
  {
         int val = cpuid_feature_extract_field(reg, entry->field_pos, 
entry->sign);
-
-       return val >= entry->min_field_value;
+       if (min_cap_value[entry->capability])
+               return val >= min_cap_value[entry->capability];
+       else
+               return val >= entry->min_field_value;
  }

  static bool

-------------------------------------------------------------------------

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-05-13 15:42         ` Amit Kachhap
@ 2020-05-20 13:20           ` Suzuki K Poulose
  2020-05-21  8:09             ` Amit Kachhap
  0 siblings, 1 reply; 18+ messages in thread
From: Suzuki K Poulose @ 2020-05-20 13:20 UTC (permalink / raw)
  To: amit.kachhap, catalin.marinas
  Cc: mark.rutland, keescook, kristina.martsenko, broonie, james.morse,
	Vincenzo.Frascino, will, dave.martin, linux-arm-kernel

On 05/13/2020 04:42 PM, Amit Kachhap wrote:
> 
> 
> On 5/12/20 11:03 PM, Catalin Marinas wrote:
>> On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
>>> On 5/6/20 10:43 PM, Catalin Marinas wrote:
>>>> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>>>>> This patch modifies the address authentication cpufeature type to EXACT
>>>>> from the earlier LOWER_SAFE, as the different configurations added for
>>>>> Armv8.6 enhanced PAC have different behaviour and there is no tunable to
>>>>> enable the lower safe versions. The safe value is set as 0.
>>>>>
>>>>> After this change, if there is any variation in configuration between
>>>>> secondary CPUs and the boot CPU, those CPUs are marked tainted. KVM
>>>>> guests may completely disable address authentication if any such
>>>>> variation is detected.
>>>>>
>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>> ---
>>>>>    arch/arm64/kernel/cpufeature.c | 4 ++--
>>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>>>> index 08795025409c..599b03df2f93 100644
>>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_END,
>>>>
>>>> Is this sufficient? If we have the boot CPU already enabling the ptrauth
>>>> and we get a secondary CPU with a different ISAR1 field that matches the
>>>> address auth in cpufeature.c, we still allow it to boot. We no longer
>>>> report the feature to the user: system_supports_address_auth() is true
>>>> while system_supports_generic_auth() would be false, as the latter checks
>>>> the sanitised feature registers.
>>>
>>> Yes, agreed. Generic authentication also needs the EXACT cpufeature type.
>>
>> I'm still not sure that's sufficient. If we boot the primary CPU with
>> ptrauth as detected in proc.S, we consider this a boot feature so all
>> secondary CPUs must have it. Subsequent CPUs are currently checked via
>> the arm64_features[] definitions and we allow them to boot if the ID is
>> at least that of the boot CPU. How does this interact with the above
>> FTR_EXACT changes?
> 
> Unfortunately FTR_EXACT does not affect the boot flow directly; it only
> marks the CPU as TAINTED and lets the boot proceed.
> 
>>
>> My concern is that we boot with PAC enabled on all CPUs but because of
>> the FTR_EXACT, the sanitised ID registers no longer report the feature.
>>
> 
> You are right that PAC is enabled in hardware but unreported to the user
> in this case.
> 
> The issue here is in feature_matches(), which only validates against
> entry->min_field_value. If we could change this value to the boot CPU's
> value for the FTR_EXACT type, then such a CPU would fail to come online.
> Maybe we can introduce a new structure or make arm64_features[] writable
> for this.
> 
> Something like the code below.

The has_cpuid_feature() is for features with "FTR_LOWER_SAFE". Hacking
it to support EXACT doesn't look ideal. You may simply add your own
"matches()" for ptr-auth.

Something like:

static bool
has_addr_auth(const struct arm64_cpu_capabilities *entry, int scope)
{
	static int boot_cpu_auth;
	int local_cpu_auth;
	u64 isar1;

	/* We don't expect to be called with SCOPE_SYSTEM */
	WARN_ON(scope == SCOPE_SYSTEM);
	isar1 = read_sysreg_s(SYS_ID_AA64ISAR1_EL1);
	local_cpu_auth = cpuid_feature_extract_unsigned_field(isar1,
							      entry->field_pos);

	/*
	 * The ptr-auth feature levels are not intercompatible with
	 * lower levels. Hence we must match all the CPUs with that
	 * of the boot CPU. So cache the level of boot CPU and compare
	 * it against the secondary CPUs.
	 */
	if (scope & SCOPE_BOOT_CPU) {
		boot_cpu_auth = local_cpu_auth;
		return boot_cpu_auth > 0;
	}

	/* SCOPE_LOCAL_CPU: each secondary CPU must match the boot CPU */
	return local_cpu_auth == boot_cpu_auth;
}
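
Wiring it up would then just be a matter of pointing the existing ptr-auth
capability entries at the new matcher instead of has_cpuid_feature(). A
rough, untested sketch for the architected-algorithm entry (field values
mirror the current arm64_features[] table):

	{
		.desc = "Address authentication (architected algorithm)",
		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.sys_reg = SYS_ID_AA64ISAR1_EL1,
		.sign = FTR_UNSIGNED,
		.field_pos = ID_AA64ISAR1_APA_SHIFT,
		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
		.matches = has_addr_auth,	/* was has_cpuid_feature */
	},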

Suzuki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-05-20 13:20           ` Suzuki K Poulose
@ 2020-05-21  8:09             ` Amit Kachhap
  2020-05-21  9:00               ` Suzuki K Poulose
  0 siblings, 1 reply; 18+ messages in thread
From: Amit Kachhap @ 2020-05-21  8:09 UTC (permalink / raw)
  To: Suzuki K Poulose, catalin.marinas
  Cc: mark.rutland, keescook, kristina.martsenko, broonie, james.morse,
	Vincenzo.Frascino, will, dave.martin, linux-arm-kernel

Hi Suzuki,

On 5/20/20 6:50 PM, Suzuki K Poulose wrote:
> On 05/13/2020 04:42 PM, Amit Kachhap wrote:
>>
>>
>> On 5/12/20 11:03 PM, Catalin Marinas wrote:
>>> On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
>>>> On 5/6/20 10:43 PM, Catalin Marinas wrote:
>>>>> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>>>>>> This patch modifies the address authentication cpufeature type to EXACT
>>>>>> from the earlier LOWER_SAFE, as the different configurations added for
>>>>>> Armv8.6 enhanced PAC have different behaviour and there is no tunable to
>>>>>> enable the lower safe versions. The safe value is set as 0.
>>>>>>
>>>>>> After this change, if there is any variation in configuration between
>>>>>> secondary CPUs and the boot CPU, those CPUs are marked tainted. KVM
>>>>>> guests may completely disable address authentication if any such
>>>>>> variation is detected.
>>>>>>
>>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>>> ---
>>>>>>    arch/arm64/kernel/cpufeature.c | 4 ++--
>>>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>>>>> index 08795025409c..599b03df2f93 100644
>>>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>>>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>>>  	ARM64_FTR_END,
>>>>>
>>>>> Is this sufficient? If we have the boot CPU already enabling the ptrauth
>>>>> and we get a secondary CPU with a different ISAR1 field that matches the
>>>>> address auth in cpufeature.c, we still allow it to boot. We no longer
>>>>> report the feature to the user: system_supports_address_auth() is true
>>>>> while system_supports_generic_auth() would be false, as the latter checks
>>>>> the sanitised feature registers.
>>>>
>>>> Yes, agreed. Generic authentication also needs the EXACT cpufeature type.
>>>
>>> I'm still not sure that's sufficient. If we boot the primary CPU with
>>> ptrauth as detected in proc.S, we consider this a boot feature so all
>>> secondary CPUs must have it. Subsequent CPUs are currently checked via
>>> the arm64_features[] definitions and we allow them to boot if the ID is
>>> at least that of the boot CPU. How does this interact with the above
>>> FTR_EXACT changes?
>>
>> Unfortunately FTR_EXACT does not affect the boot flow directly; it only
>> marks the CPU as TAINTED and lets the boot proceed.
>>
>>>
>>> My concern is that we boot with PAC enabled on all CPUs but because of
>>> the FTR_EXACT, the sanitised ID registers no longer report the feature.
>>>
>>
>> You are right that PAC is enabled in hardware but unreported to the user
>> in this case.
>>
>> The issue here is in feature_matches(), which only validates against
>> entry->min_field_value. If we could change this value to the boot CPU's
>> value for the FTR_EXACT type, then such a CPU would fail to come online.
>> Maybe we can introduce a new structure or make arm64_features[] writable
>> for this.
>>
>> Something like the code below.
> 
> The has_cpuid_feature() is for features with "FTR_LOWER_SAFE". Hacking
> it to support EXACT doesn't look ideal. You may simply add your own
> "matches()" for ptr-auth.

Yes, it is reasonable to have a separate match() function. I was thinking
of adding a generic match function for FTR_EXACT that could be used by
other similar cpufeatures.
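
Roughly along these lines, perhaps (a hypothetical, untested sketch: it
caches the boot CPU's field value per capability, and reuses
__read_sysreg_by_encoding(), the helper that has_cpuid_feature() already
uses for the non-system scopes):

/*
 * Hypothetical generic matcher for FTR_EXACT capabilities: remember the
 * boot CPU's field value and require every secondary CPU to match it
 * exactly.
 */
static bool has_exact_cpuid_feature(const struct arm64_cpu_capabilities *entry,
				    int scope)
{
	static s64 boot_val[ARM64_NCAPS];
	u64 reg;
	s64 val;

	/* Only meant for SCOPE_BOOT_CPU / SCOPE_LOCAL_CPU checks */
	WARN_ON(scope == SCOPE_SYSTEM);
	reg = __read_sysreg_by_encoding(entry->sys_reg);
	val = cpuid_feature_extract_field(reg, entry->field_pos, entry->sign);

	if (scope & SCOPE_BOOT_CPU) {
		boot_val[entry->capability] = val;
		return val >= entry->min_field_value;
	}

	return val == boot_val[entry->capability];
}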

> 
> Something like:
> 
> static bool
> has_addr_auth(const struct arm64_cpu_capabilities *entry, int scope)
> {
>      static int boot_cpu_auth;

I suppose that if this new match() has to be used for both AUTH_ARCH and
AUTH_IMP_DEF, then we may need two such static variables.

>      int local_cpu_auth;
>      u64 isar1;
> 
>      /* We don't expect to be called with SCOPE_SYSTEM */
>      WARN_ON(scope == SCOPE_SYSTEM);
>      isar1 = read_sysreg_s(SYS_ID_AA64ISAR1_EL1);
>      local_cpu_auth = cpuid_feature_extract_unsigned_field(isar1,
>                                                            entry->field_pos);
> 
>      /*
>       * The ptr-auth feature levels are not intercompatible with
>       * lower levels. Hence we must match all the CPUs with that
>       * of the boot CPU. So cache the level of boot CPU and compare
>       * it against the secondary CPUs.
>       */
>      if (scope & SCOPE_BOOT_CPU) {
>          boot_cpu_auth = local_cpu_auth;
>          return boot_cpu_auth > 0;

Maybe:
return boot_cpu_auth >= entry->min_field_value;

>      }
> 
>      /* SCOPE_LOCAL_CPU: each secondary CPU must match the boot CPU */
>      return local_cpu_auth == boot_cpu_auth;
> }
> 
> Suzuki

Thanks,
Amit Daniel

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact
  2020-05-21  8:09             ` Amit Kachhap
@ 2020-05-21  9:00               ` Suzuki K Poulose
  0 siblings, 0 replies; 18+ messages in thread
From: Suzuki K Poulose @ 2020-05-21  9:00 UTC (permalink / raw)
  To: amit.kachhap, catalin.marinas
  Cc: mark.rutland, keescook, kristina.martsenko, broonie, james.morse,
	Vincenzo.Frascino, will, dave.martin, linux-arm-kernel

On 05/21/2020 09:09 AM, Amit Kachhap wrote:
> Hi Suzuki,
> 
> On 5/20/20 6:50 PM, Suzuki K Poulose wrote:
>> On 05/13/2020 04:42 PM, Amit Kachhap wrote:
>>>
>>>
>>> On 5/12/20 11:03 PM, Catalin Marinas wrote:
>>>> On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
>>>>> On 5/6/20 10:43 PM, Catalin Marinas wrote:
>>>>>> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>>>>>>> This patch modifies the address authentication cpufeature type to EXACT
>>>>>>> from the earlier LOWER_SAFE, as the different configurations added for
>>>>>>> Armv8.6 enhanced PAC have different behaviour and there is no tunable to
>>>>>>> enable the lower safe versions. The safe value is set as 0.
>>>>>>>
>>>>>>> After this change, if there is any variation in configuration between
>>>>>>> secondary CPUs and the boot CPU, those CPUs are marked tainted. KVM
>>>>>>> guests may completely disable address authentication if any such
>>>>>>> variation is detected.
>>>>>>>
>>>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>>>> ---
>>>>>>>    arch/arm64/kernel/cpufeature.c | 4 ++--
>>>>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>
>>>>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>>>>>> index 08795025409c..599b03df2f93 100644
>>>>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>>>>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>>>>  	ARM64_FTR_END,
>>>>>>
>>>>>> Is this sufficient? If we have the boot CPU already enabling the ptrauth
>>>>>> and we get a secondary CPU with a different ISAR1 field that matches the
>>>>>> address auth in cpufeature.c, we still allow it to boot. We no longer
>>>>>> report the feature to the user: system_supports_address_auth() is true
>>>>>> while system_supports_generic_auth() would be false, as the latter checks
>>>>>> the sanitised feature registers.
>>>>>
>>>>> Yes, agreed. Generic authentication also needs the EXACT cpufeature type.
>>>>
>>>> I'm still not sure that's sufficient. If we boot the primary CPU with
>>>> ptrauth as detected in proc.S, we consider this a boot feature so all
>>>> secondary CPUs must have it. Subsequent CPUs are currently checked via
>>>> the arm64_features[] definitions and we allow them to boot if the ID is
>>>> at least that of the boot CPU. How does this interact with the above
>>>> FTR_EXACT changes?
>>>
>>> Unfortunately FTR_EXACT does not affect the boot flow directly; it only
>>> marks the CPU as TAINTED and lets the boot proceed.
>>>
>>>>
>>>> My concern is that we boot with PAC enabled on all CPUs but because of
>>>> the FTR_EXACT, the sanitised ID registers no longer report the feature.
>>>>
>>>
>>> You are right that PAC is enabled in hardware but unreported to the user
>>> in this case.
>>>
>>> The issue here is in feature_matches(), which only validates against
>>> entry->min_field_value. If we could change this value to the boot CPU's
>>> value for the FTR_EXACT type, then such a CPU would fail to come online.
>>> Maybe we can introduce a new structure or make arm64_features[] writable
>>> for this.
>>>
>>> Something like the code below.
>>
>> The has_cpuid_feature() is for features with "FTR_LOWER_SAFE". Hacking
>> it to support EXACT doesn't look ideal. You may simply add your own
>> "matches()" for ptr-auth.
> 
> Yes, it is reasonable to have a separate match() function. I was thinking
> of adding a generic match function for FTR_EXACT that could be used by
> other similar cpufeatures.

Ideally, if we enabled the CPU feature sanity-check infrastructure to do
this for us (i.e. park a CPU which conflicts), we wouldn't have to
duplicate it here in the capabilities. For now, yes, let us use something
specific to ptr-auth, which may not cater for generic EXACT features.
Generalizing it for the different "scope"s is going to be tricky.
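
To illustrate what parking could mean here (a hypothetical, untested
sketch; taint_on_this_cpu() is a made-up predicate standing in for
whatever the register sanity checks would report), a conflicting
secondary could be taken offline the same way a failed capability
verification already is:

/*
 * Hypothetical: treat an FTR_EXACT conflict like a failed capability
 * verification and park the CPU, reusing the cpu_die_early() path that
 * verify_local_cpu_capabilities() takes today.
 */
static void check_exact_ftr_conflict(void)
{
	if (taint_on_this_cpu()) {	/* made-up predicate */
		pr_crit("CPU%d: conflicting FTR_EXACT field, parking\n",
			smp_processor_id());
		cpu_die_early();
	}
}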

> 
>>
>> Something like:
>>
>> static bool
>> has_addr_auth(const struct arm64_cpu_capabilities *entry, int scope)
>> {
>>      static int boot_cpu_auth;
> 
> I suppose that if this new match() has to be used for both AUTH_ARCH and
> AUTH_IMP_DEF, then we may need two such static variables.
> 
>>      int local_cpu_auth;
>>      u64 isar1;
>>
>>      /* We don't expect to be called with SCOPE_SYSTEM */
>>      WARN_ON(scope == SCOPE_SYSTEM);
>>      isar1 = read_sysreg_s(SYS_ID_AA64ISAR1_EL1);
>>      local_cpu_auth = cpuid_feature_extract_unsigned_field(isar1,
>>                                                            entry->field_pos);
>>
>>      /*
>>       * The ptr-auth feature levels are not intercompatible with
>>       * lower levels. Hence we must match all the CPUs with that
>>       * of the boot CPU. So cache the level of boot CPU and compare
>>       * it against the secondary CPUs.
>>       */
>>      if (scope & SCOPE_BOOT_CPU) {
>>          boot_cpu_auth = local_cpu_auth;
>>          return boot_cpu_auth > 0;
> 
> Maybe:
> return boot_cpu_auth >= entry->min_field_value;

Yes, that's fine.

Cheers
Suzuki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2020-05-21  8:56 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-14  5:31 [PATCH v2 0/4] arm64: add Armv8.6 pointer authentication Amit Daniel Kachhap
2020-04-14  5:31 ` [PATCH v2 1/4] arm64: cpufeature: Extract meta-capability scope from list Amit Daniel Kachhap
2020-05-06 15:00   ` Catalin Marinas
2020-05-06 16:14     ` Suzuki K Poulose
2020-05-07 15:27       ` Amit Kachhap
2020-04-14  5:31 ` [PATCH v2 2/4] arm64: ptrauth: add pointer authentication Armv8.6 enhanced feature Amit Daniel Kachhap
2020-05-06 16:31   ` Catalin Marinas
2020-05-07 15:28     ` Amit Kachhap
2020-05-12 17:12       ` Catalin Marinas
2020-04-14  5:31 ` [PATCH v2 3/4] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
2020-05-06 17:13   ` Catalin Marinas
2020-05-08 16:21     ` Amit Kachhap
2020-05-12 17:33       ` Catalin Marinas
2020-05-13 15:42         ` Amit Kachhap
2020-05-20 13:20           ` Suzuki K Poulose
2020-05-21  8:09             ` Amit Kachhap
2020-05-21  9:00               ` Suzuki K Poulose
2020-04-14  5:31 ` [PATCH v2 4/4] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
