* [RFC v2 0/7] arm64: return address signing
From: Kristina Martsenko @ 2019-05-29 19:03 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan,
	Amit Kachhap, Dave Martin

Hi,

This series improves function return address protection for the arm64 kernel by
compiling the kernel with ARMv8.3 Pointer Authentication instructions. This
should help protect the kernel against attacks using return-oriented
programming.

This series is based on v5.1-rc7.

These patches were previously posted as [RFC] as part of the series to enable
pointer authentication for userspace [1].

High-level changes since RFC v1 [1] (detailed changes are listed in patches):
 - Rebased onto v5.1-rc7
 - Updated the series to handle all 5 keys, as the current kernel exposes all 5
   to userspace (previously only APIAKey)
 - Fixed support for compilers without ptrauth
 - Added support for the new -mbranch-protection option
 - Switched to only protecting non-leaf functions
 - Dropped the patch that moved keys to thread_info, as that is already done in
   commit 750319756256 (and superseded by 84931327a807)

Questions / notes:

 - The patches make use of the sign-return-address/branch-protection compiler
   options and the corresponding function attributes (see the sketch after
   these notes). GCC supports both, but Clang/LLVM appears to support only
   the compiler option, not the function attribute, so with these patches
   (and CONFIG_ARM64_PTR_AUTH=y) an LLVM-built kernel will fail to boot on
   ARMv8.3 CPUs. I don't yet know why LLVM does not support it, or whether
   support can be added. This series may need to be rewritten to not use the
   attribute and instead move the functionality to assembly, or to disable
   return address signing when building with LLVM.

 - Each task has its own pointer authentication key for use in the kernel,
   initialized during fork. On systems without much entropy during early
   boot, the keys of early tasks will not be sufficiently random. Ideally the
   kernel should get early
   randomness from firmware. Currently, this should be possible on UEFI systems
   that support EFI_RNG_PROTOCOL (via LINUX_EFI_RANDOM_SEED_TABLE_GUID). A
   device tree based scheme is also under discussion [2]. Another option might
   be to generate some randomness for pointer auth during kernel build time.
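
For reference, a rough sketch of the two mechanisms the first note refers to
(the compiler flags and attribute follow the later patches in this series;
the function below is hypothetical):

	/*
	 * Whole kernel: sign/authenticate return addresses in non-leaf
	 * functions, via KBUILD_CFLAGS:
	 *   -msign-return-address=non-leaf   (GCC 7/8)
	 *   -mbranch-protection=pac-ret      (GCC 9)
	 */

	/* Per function: opt a single function back out again */
	__attribute__((target("branch-protection=none")))
	void enable_ptrauth_example(void)
	{
		/* this function itself is unprotected, so it may safely
		 * enable/disable pointer auth without failing to return */
	}

It is the per-function opt-out that LLVM appears to lack.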

This series is still an RFC, as there are a number of things still to look at:
 - rebase onto v5.2-rcX and the KVM guest ptrauth support
 - suspend/resume/hibernate
 - comparison of compiler options pac-ret vs pac-ret+leaf
 - ftrace, kprobes, other tracing
 - __builtin_return_address(n), kdump, other debug
 - other smaller things
 - more testing

Feedback welcome!

Thanks,
Kristina

[1] https://lore.kernel.org/linux-arm-kernel/20181005084754.20950-1-kristina.martsenko@arm.com/
[2] https://lore.kernel.org/linux-arm-kernel/20190527043336.112854-2-hsinyi@chromium.org/


Kristina Martsenko (6):
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: install user ptrauth keys at kernel exit time
  arm64: cpufeature: handle conflicts based on capability
  arm64: enable ptrauth earlier
  arm64: initialize and switch ptrauth kernel keys
  arm64: compile the kernel with ptrauth return address signing

Mark Rutland (1):
  arm64: unwind: strip PAC from kernel addresses

 arch/arm64/Kconfig                        | 16 ++++++-
 arch/arm64/Makefile                       |  6 +++
 arch/arm64/include/asm/asm_pointer_auth.h | 59 +++++++++++++++++++++++
 arch/arm64/include/asm/cpucaps.h          |  4 +-
 arch/arm64/include/asm/cpufeature.h       | 30 ++++++++++--
 arch/arm64/include/asm/pointer_auth.h     | 79 +++++++++++++++++++------------
 arch/arm64/include/asm/processor.h        |  1 +
 arch/arm64/kernel/asm-offsets.c           | 12 +++++
 arch/arm64/kernel/cpufeature.c            | 53 +++++++++++++--------
 arch/arm64/kernel/entry.S                 |  4 ++
 arch/arm64/kernel/pointer_auth.c          |  5 +-
 arch/arm64/kernel/process.c               |  2 +
 arch/arm64/kernel/smp.c                   | 10 +++-
 arch/arm64/kernel/stacktrace.c            |  3 ++
 14 files changed, 222 insertions(+), 62 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h

-- 
2.11.0



* [RFC v2 1/7] arm64: cpufeature: add pointer auth meta-capabilities
From: Kristina Martsenko @ 2019-05-29 19:03 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan,
	Amit Kachhap, Dave Martin

To enable pointer auth for the kernel, we're going to need to check for
the presence of address auth and generic auth using alternative_if. We
currently have two cpucaps for each, but alternative_if needs to check a
single cpucap. So define meta-capabilities that are present when either
of the current two capabilities is present.

Leave the existing four cpucaps in place, as they are still needed to
check for mismatched systems where one CPU has the architected algorithm
but another has the IMP DEF algorithm.
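
For illustration, this is the kind of single-cpucap check the
meta-capabilities enable (usage as in a later patch in this series):

alternative_if ARM64_HAS_ADDRESS_AUTH
	/* patched in only when some form of address auth is present */
alternative_else_nop_endif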

Note, the meta-capabilities were present before but were removed in
commits a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth
CPU caps from 6 to 4") and 1e013d06120c ("arm64: cpufeature: Rework ptr
auth hwcaps using multi_entry_cap_matches"), as they were not needed
then. Note, unlike before, the current patch checks the cpucap values
directly, instead of reading the CPU ID register value.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---

Changes since RFC v1:
 - New patch, as the meta-capabilities have been removed upstream

 arch/arm64/include/asm/cpucaps.h    |  4 +++-
 arch/arm64/include/asm/cpufeature.h |  6 ++----
 arch/arm64/kernel/cpufeature.c      | 25 ++++++++++++++++++++++++-
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f6a76e43f39e..601183b7b484 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -61,7 +61,9 @@
 #define ARM64_HAS_GENERIC_AUTH_ARCH		40
 #define ARM64_HAS_GENERIC_AUTH_IMP_DEF		41
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
+#define ARM64_HAS_ADDRESS_AUTH			43
+#define ARM64_HAS_GENERIC_AUTH			44
 
-#define ARM64_NCAPS				43
+#define ARM64_NCAPS				45
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e505e1fbd2b9..0522ea674253 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -605,15 +605,13 @@ static inline bool system_supports_cnp(void)
 static inline bool system_supports_address_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
 }
 
 static inline bool system_supports_generic_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
 static inline bool system_uses_irq_prio_masking(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4061de10cea6..166584deaed2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1205,6 +1205,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
 				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
 }
+
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+}
+
+static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
+}
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
@@ -1466,7 +1480,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -1477,6 +1490,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
 		.matches = has_cpuid_feature,
+	},
+	{
+		.capability = ARM64_HAS_ADDRESS_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_address_auth,
 		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
@@ -1499,6 +1517,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
 		.matches = has_cpuid_feature,
 	},
+	{
+		.capability = ARM64_HAS_GENERIC_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_generic_auth,
+	},
 #endif /* CONFIG_ARM64_PTR_AUTH */
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	{
-- 
2.11.0



* [RFC v2 2/7] arm64: install user ptrauth keys at kernel exit time
From: Kristina Martsenko @ 2019-05-29 19:03 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan,
	Amit Kachhap, Dave Martin

As we're going to enable pointer auth within the kernel and use a
different APIAKey for the kernel itself, move the user APIAKey
switch to EL0 exception return.

The other 4 keys could remain switched during task switch, but are also
moved to keep things simple.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---

Changes since RFC v1:
 - Install all 5 keys on kernel_exit
 - Updated prctl to have keys installed on kernel exit
 - Renamed ptrauth-asm.h to asm_pointer_auth.h
 - Minor cleanups
 - Updated the commit message

 arch/arm64/include/asm/asm_pointer_auth.h | 43 +++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 23 +----------------
 arch/arm64/kernel/asm-offsets.c           | 11 ++++++++
 arch/arm64/kernel/entry.S                 |  3 +++
 arch/arm64/kernel/pointer_auth.c          |  3 ---
 arch/arm64/kernel/process.c               |  1 -
 6 files changed, 58 insertions(+), 26 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
new file mode 100644
index 000000000000..e3bfddfe80b6
--- /dev/null
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_POINTER_AUTH_H
+#define __ASM_ASM_POINTER_AUTH_H
+
+#include <asm/alternative.h>
+#include <asm/asm-offsets.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_USER
+	add	\tmp1, \tsk, \tmp1
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
+	msr_s	SYS_APIBKEYLO_EL1, \tmp2
+	msr_s	SYS_APIBKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
+	msr_s	SYS_APDAKEYLO_EL1, \tmp2
+	msr_s	SYS_APDAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
+	msr_s	SYS_APDBKEYLO_EL1, \tmp2
+	msr_s	SYS_APDBKEYHI_EL1, \tmp3
+alternative_else_nop_endif
+alternative_if ARM64_HAS_GENERIC_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
+	msr_s	SYS_APGAKEYLO_EL1, \tmp2
+	msr_s	SYS_APGAKEYHI_EL1, \tmp3
+alternative_else_nop_endif
+	.endm
+
+#else /* CONFIG_ARM64_PTR_AUTH */
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	.endm
+
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 15d49515efdd..fc8dc70cc19f 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -50,19 +50,6 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)
 
-static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
-{
-	if (system_supports_address_auth()) {
-		__ptrauth_key_install(APIA, keys->apia);
-		__ptrauth_key_install(APIB, keys->apib);
-		__ptrauth_key_install(APDA, keys->apda);
-		__ptrauth_key_install(APDB, keys->apdb);
-	}
-
-	if (system_supports_generic_auth())
-		__ptrauth_key_install(APGA, keys->apga);
-}
-
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
@@ -78,20 +65,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }
 
 #define ptrauth_thread_init_user(tsk)					\
-do {									\
-	struct task_struct *__ptiu_tsk = (tsk);				\
-	ptrauth_keys_init(&__ptiu_tsk->thread.keys_user);		\
-	ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user);		\
-} while (0)
-
-#define ptrauth_thread_switch(tsk)	\
-	ptrauth_keys_switch(&(tsk)->thread.keys_user)
+	ptrauth_keys_init(&(tsk)->thread.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
-#define ptrauth_thread_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7f40dcbdd51d..edc471e4acb1 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -51,6 +51,9 @@ int main(void)
 #endif
   BLANK();
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+#endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
   DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
@@ -153,5 +156,13 @@ int main(void)
   DEFINE(SDEI_EVENT_INTREGS,	offsetof(struct sdei_registered_event, interrupted_regs));
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(PTRAUTH_KEY_APIA,	offsetof(struct ptrauth_keys, apia));
+  DEFINE(PTRAUTH_KEY_APIB,	offsetof(struct ptrauth_keys, apib));
+  DEFINE(PTRAUTH_KEY_APDA,	offsetof(struct ptrauth_keys, apda));
+  DEFINE(PTRAUTH_KEY_APDB,	offsetof(struct ptrauth_keys, apdb));
+  DEFINE(PTRAUTH_KEY_APGA,	offsetof(struct ptrauth_keys, apga));
+  BLANK();
+#endif
   return 0;
 }
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index c50a7a75f2e0..73a28d88f78d 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -25,6 +25,7 @@
 #include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
+#include <asm/asm_pointer_auth.h>
 #include <asm/cpufeature.h>
 #include <asm/errno.h>
 #include <asm/esr.h>
@@ -336,6 +337,8 @@ alternative_if ARM64_WORKAROUND_845719
 alternative_else_nop_endif
 #endif
 3:
+	ptrauth_keys_install_user tsk, x0, x1, x2
+
 	apply_ssbd 0, x0, x1
 	.endif
 
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index c507b584259d..95985be67891 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -19,7 +19,6 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 
 	if (!arg) {
 		ptrauth_keys_init(keys);
-		ptrauth_keys_switch(keys);
 		return 0;
 	}
 
@@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 	if (arg & PR_PAC_APGAKEY)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 
-	ptrauth_keys_switch(keys);
-
 	return 0;
 }
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 3767fb21a5b8..a9b30111b725 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -481,7 +481,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
-	ptrauth_thread_switch(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
-- 
2.11.0



* [RFC v2 3/7] arm64: cpufeature: handle conflicts based on capability
From: Kristina Martsenko @ 2019-05-29 19:03 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan,
	Amit Kachhap, Dave Martin

Each system capability can be of boot, local, or system scope,
depending on when the state of the capability is finalized. When we
detect a conflict on a late CPU, we either offline the CPU or panic the
system. We currently always panic if the conflict is caused by a boot
scope capability, and offline the CPU if the conflict is caused by a
local or system scope capability.

We're going to want to add a new capability (for pointer authentication)
which needs to be boot scope but doesn't need to panic the system when a
conflict is detected. So add a new flag to specify whether the
capability requires the system to panic or not. Current boot scope
capabilities are updated to set the flag, so there should be no
functional change as a result of this patch.
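
For illustration, a boot scope capability now chooses its conflict behaviour
through the new flag (the first definition is from this patch; the second is
added by a later patch in this series):

	/* a conflicting late CPU panics the system */
	#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
		(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)

	/* a conflicting late CPU is instead offlined */
	#define ARM64_CPUCAP_BOOT_CPU_FEATURE			\
		(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)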

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---

Changes since RFC v1:
 - New patch, to have ptrauth mismatches disable secondaries instead of
   panicking

 arch/arm64/include/asm/cpufeature.h | 15 ++++++++++++++-
 arch/arm64/kernel/cpufeature.c      | 23 +++++++++--------------
 2 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 0522ea674253..ad952f2e0a2b 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -217,6 +217,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  *     In some non-typical cases either both (a) and (b), or neither,
  *     should be permitted. This can be described by including neither
  *     or both flags in the capability's type field.
+ *
+ *     In case of a conflict, the CPU is prevented from booting. If the
+ *     ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability,
+ *     then a kernel panic is triggered.
  */
 
 
@@ -249,6 +253,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	((u16)BIT(4))
 /* Is it safe for a late CPU to miss this capability when system has it */
 #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((u16)BIT(5))
+/* Panic when a conflict is detected */
+#define ARM64_CPUCAP_PANIC_ON_CONFLICT		((u16)BIT(6))
 
 /*
  * CPU errata workarounds that need to be enabled at boot time if one or
@@ -290,7 +296,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  * CPU feature used early in the boot based on the boot CPU. All secondary
  * CPUs must match the state of the capability as detected by the boot CPU.
  */
-#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
+#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
+	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
 
 struct arm64_cpu_capabilities {
 	const char *desc;
@@ -354,6 +361,12 @@ cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
 	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
 }
 
+static inline bool
+cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
+}
+
 /*
 * Generic helper for handling capabilities with multiple (match,enable) pairs
  * of call backs, sharing the same capability bit.
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 166584deaed2..8a595b4cb0aa 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1796,10 +1796,8 @@ static void __init enable_cpu_capabilities(u16 scope_mask)
  * Run through the list of capabilities to check for conflicts.
  * If the system has already detected a capability, take necessary
  * action on this CPU.
- *
- * Returns "false" on conflicts.
  */
-static bool verify_local_cpu_caps(u16 scope_mask)
+static void verify_local_cpu_caps(u16 scope_mask)
 {
 	int i;
 	bool cpu_has_cap, system_has_cap;
@@ -1844,10 +1842,12 @@ static bool verify_local_cpu_caps(u16 scope_mask)
 		pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n",
 			smp_processor_id(), caps->capability,
 			caps->desc, system_has_cap, cpu_has_cap);
-		return false;
-	}
 
-	return true;
+		if (cpucap_panic_on_conflict(caps))
+			cpu_panic_kernel();
+		else
+			cpu_die_early();
+	}
 }
 
 /*
@@ -1857,12 +1857,8 @@ static bool verify_local_cpu_caps(u16 scope_mask)
 static void check_early_cpu_features(void)
 {
 	verify_cpu_asid_bits();
-	/*
-	 * Early features are used by the kernel already. If there
-	 * is a conflict, we cannot proceed further.
-	 */
-	if (!verify_local_cpu_caps(SCOPE_BOOT_CPU))
-		cpu_panic_kernel();
+
+	verify_local_cpu_caps(SCOPE_BOOT_CPU);
 }
 
 static void
@@ -1910,8 +1906,7 @@ static void verify_local_cpu_capabilities(void)
 	 * check_early_cpu_features(), as they need to be verified
 	 * on all secondary CPUs.
 	 */
-	if (!verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU))
-		cpu_die_early();
+	verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);
 
 	verify_local_elf_hwcaps(arm64_elf_hwcaps);
 
-- 
2.11.0



* [RFC v2 4/7] arm64: enable ptrauth earlier
From: Kristina Martsenko @ 2019-05-29 19:03 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan,
	Amit Kachhap, Dave Martin

When the kernel is compiled with pointer auth instructions, the boot CPU
needs to start using address auth very early, so change the cpucap to
account for this.

A function that enables pointer auth cannot return: its prologue's PACIASP
executes as a NOP while auth is still disabled, so the epilogue's AUTIASP
(run with auth now enabled) would fail to authenticate the unsigned return
address. Compile such functions without pointer auth instead (using a
compiler function attribute). The __no_ptrauth macro will be defined to the
required function attribute in a later patch.

Do not use the cpu_enable callback, to avoid having to compile the whole
call chain down to cpu_enable without pointer auth.

Note the change in behavior: if the boot CPU has address auth and a late
CPU does not, then we offline the late CPU. Until now we would have just
disabled address auth in this case.

Leave generic authentication as a "system scope" cpucap for now, since
initially the kernel will only use address authentication.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---

Changes since RFC v1:
 - Enable instructions for all 5 keys
 - Replaced __always_inline with __no_ptrauth as it turns out __always_inline
   is only a hint (and could therefore result in crashes)
 - Left the __no_ptrauth definition blank for now as it needs to be determined
   with more complex logic in a later patch
 - Updated the Kconfig symbol description
 - Minor cleanups
 - Updated the commit message

 arch/arm64/Kconfig                    |  4 ++++
 arch/arm64/include/asm/cpufeature.h   |  9 +++++++++
 arch/arm64/include/asm/pointer_auth.h | 19 +++++++++++++++++++
 arch/arm64/kernel/cpufeature.c        | 13 +++----------
 arch/arm64/kernel/smp.c               |  7 ++++++-
 5 files changed, 41 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9eba5de..f4c1e9b30129 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1304,6 +1304,10 @@ config ARM64_PTR_AUTH
 	  hardware it will not be advertised to userspace nor will it be
 	  enabled.
 
+	  If the feature is present on the primary CPU but not a secondary CPU,
+	  then the secondary CPU will be offlined. On such a system, this
+	  option should not be selected.
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index ad952f2e0a2b..e36a7948b763 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -299,6 +299,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
 	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
 
+/*
+ * CPU feature used early in the boot based on the boot CPU. It is safe for a
+ * late CPU to have this feature even though the boot CPU hasn't enabled it,
+ * although the feature will not be used by Linux in this case. If the boot CPU
+ * has enabled this feature already, then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_BOOT_CPU_FEATURE			\
+	 (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index fc8dc70cc19f..a97b7dc10bdb 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -11,6 +11,13 @@
 
 #ifdef CONFIG_ARM64_PTR_AUTH
 /*
+ * Compile the function without pointer authentication instructions. This
+ * allows pointer authentication to be enabled/disabled within the function
+ * (but leaves the function unprotected by pointer authentication).
+ */
+#define __no_ptrauth
+
+/*
  * Each key is a 128-bit quantity which is split across a pair of 64-bit
  * registers (Lo and Hi).
  */
@@ -50,6 +57,16 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)
 
+static inline void __no_ptrauth ptrauth_cpu_enable(void)
+{
+	if (!system_supports_address_auth())
+		return;
+
+	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
+				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
+	isb();
+}
+
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
@@ -68,6 +85,8 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 	ptrauth_keys_init(&(tsk)->thread.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
+#define __no_ptrauth
+#define ptrauth_cpu_enable()
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 8a595b4cb0aa..2cf042ebb237 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1200,12 +1200,6 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
-{
-	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
-				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
-}
-
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
 			     int __unused)
 {
@@ -1474,7 +1468,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (architected algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
@@ -1484,7 +1478,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
@@ -1493,9 +1487,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.matches = has_address_auth,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 824de7038967..eca6aa05ac66 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -54,6 +54,7 @@
 #include <asm/numa.h>
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
+#include <asm/pointer_auth.h>
 #include <asm/processor.h>
 #include <asm/smp_plat.h>
 #include <asm/sections.h>
@@ -238,6 +239,8 @@ asmlinkage notrace void secondary_start_kernel(void)
 	 */
 	check_local_cpu_capabilities();
 
+	ptrauth_cpu_enable();
+
 	if (cpu_ops[cpu]->cpu_postboot)
 		cpu_ops[cpu]->cpu_postboot();
 
@@ -432,7 +435,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
 	mark_linear_text_alias_ro();
 }
 
-void __init smp_prepare_boot_cpu(void)
+void __init __no_ptrauth smp_prepare_boot_cpu(void)
 {
 	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
 	/*
@@ -452,6 +455,8 @@ void __init smp_prepare_boot_cpu(void)
 	/* Conditionally switch to GIC PMR for interrupt masking */
 	if (system_uses_irq_prio_masking())
 		init_gic_priority_masking();
+
+	ptrauth_cpu_enable();
 }
 
 static u64 __init of_get_cpu_mpidr(struct device_node *dn)
-- 
2.11.0



* [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys
From: Kristina Martsenko @ 2019-05-29 19:03 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan,
	Amit Kachhap, Dave Martin

Set up keys to use pointer authentication within the kernel. The kernel
will be compiled with APIAKey instructions; the other keys are currently
unused. Each task is given its own APIAKey, which is initialized during
fork. The key is changed during context switch and on kernel entry from
EL0.

A function that changes the key cannot return (the return address would be
signed with the old key but authenticated with the new one), so compile such
functions without pointer auth (using __no_ptrauth, which will be defined to
a compiler function attribute later).
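
Roughly, the resulting key handling (a summary of this patch and patch 2,
not new code):

	exception entry from EL0:	install the task's kernel APIAKey
	__switch_to(next):		install next's kernel APIAKey
	exception return to EL0:	install the task's user keys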

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---

Changes since RFC v1:
 - Updated ptrauth_keys_init to not randomly initialize the other 4 (unused)
   kernel-space keys, to save entropy
 - Replaced __always_inline with __no_ptrauth as it turns out __always_inline
   is only a hint (and could therefore result in crashes)
 - Moved ptrauth_keys_install_kernel earlier in kernel_entry, in case in the
   future C function calls are added in kernel_entry
 - Added ISB after key install in kernel_exit, in case in the future C function
   calls are added after the macro
 - Dropped an outdated comment
 - Updated the commit message

 arch/arm64/include/asm/asm_pointer_auth.h | 16 +++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 33 +++++++++++++++++++++----------
 arch/arm64/include/asm/processor.h        |  1 +
 arch/arm64/kernel/asm-offsets.c           |  1 +
 arch/arm64/kernel/entry.S                 |  1 +
 arch/arm64/kernel/pointer_auth.c          |  2 +-
 arch/arm64/kernel/process.c               |  3 +++
 arch/arm64/kernel/smp.c                   |  3 +++
 8 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
index e3bfddfe80b6..f595da9661a4 100644
--- a/arch/arm64/include/asm/asm_pointer_auth.h
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -25,11 +25,24 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
 	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
 	msr_s	SYS_APDBKEYLO_EL1, \tmp2
 	msr_s	SYS_APDBKEYHI_EL1, \tmp3
+	isb
 alternative_else_nop_endif
 alternative_if ARM64_HAS_GENERIC_AUTH
 	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
 	msr_s	SYS_APGAKEYLO_EL1, \tmp2
 	msr_s	SYS_APGAKEYHI_EL1, \tmp3
+	isb
+alternative_else_nop_endif
+	.endm
+
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_KERNEL
+	add	\tmp1, \tsk, \tmp1
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	isb
 alternative_else_nop_endif
 	.endm
 
@@ -38,6 +51,9 @@ alternative_else_nop_endif
 	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
 	.endm
 
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	.endm
+
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index a97b7dc10bdb..79f35f5ecff5 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -25,10 +25,6 @@ struct ptrauth_key {
 	unsigned long lo, hi;
 };
 
-/*
- * We give each process its own keys, which are shared by all threads. The keys
- * are inherited upon fork(), and reinitialised upon exec*().
- */
 struct ptrauth_keys {
 	struct ptrauth_key apia;
 	struct ptrauth_key apib;
@@ -37,16 +33,18 @@ struct ptrauth_keys {
 	struct ptrauth_key apga;
 };
 
-static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+static inline void ptrauth_keys_init(struct ptrauth_keys *keys, bool user)
 {
 	if (system_supports_address_auth()) {
 		get_random_bytes(&keys->apia, sizeof(keys->apia));
-		get_random_bytes(&keys->apib, sizeof(keys->apib));
-		get_random_bytes(&keys->apda, sizeof(keys->apda));
-		get_random_bytes(&keys->apdb, sizeof(keys->apdb));
+		if (user) {
+			get_random_bytes(&keys->apib, sizeof(keys->apib));
+			get_random_bytes(&keys->apda, sizeof(keys->apda));
+			get_random_bytes(&keys->apdb, sizeof(keys->apdb));
+		}
 	}
 
-	if (system_supports_generic_auth())
+	if (system_supports_generic_auth() && user)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 }
 
@@ -57,6 +55,15 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)
 
+static inline void __no_ptrauth ptrauth_keys_switch(struct ptrauth_keys *keys)
+{
+	if (!system_supports_address_auth())
+		return;
+
+	__ptrauth_key_install(APIA, keys->apia);
+	isb();
+}
+
 static inline void __no_ptrauth ptrauth_cpu_enable(void)
 {
 	if (!system_supports_address_auth())
@@ -82,7 +89,11 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }
 
 #define ptrauth_thread_init_user(tsk)					\
-	ptrauth_keys_init(&(tsk)->thread.keys_user)
+	ptrauth_keys_init(&(tsk)->thread.keys_user, true)
+#define ptrauth_thread_init_kernel(tsk)					\
+	ptrauth_keys_init(&(tsk)->thread.keys_kernel, false)
+#define ptrauth_thread_switch(tsk)					\
+	ptrauth_keys_switch(&(tsk)->thread.keys_kernel)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define __no_ptrauth
@@ -90,6 +101,8 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
+#define ptrauth_thread_init_kernel(tsk)
+#define ptrauth_thread_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5d9ce62bdebd..f7684a021ca2 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -149,6 +149,7 @@ struct thread_struct {
 	struct debug_info	debug;		/* debugging */
 #ifdef CONFIG_ARM64_PTR_AUTH
 	struct ptrauth_keys	keys_user;
+	struct ptrauth_keys	keys_kernel;
 #endif
 };
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index edc471e4acb1..7dfebecd387d 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -53,6 +53,7 @@ int main(void)
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
 #ifdef CONFIG_ARM64_PTR_AUTH
   DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+  DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
 #endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 73a28d88f78d..001d508cd63f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -184,6 +184,7 @@ alternative_cb_end
 
 	apply_ssbd 1, x22, x23
 
+	ptrauth_keys_install_kernel tsk, x20, x22, x23
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index 95985be67891..4ca141b8c581 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -18,7 +18,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 		return -EINVAL;
 
 	if (!arg) {
-		ptrauth_keys_init(keys);
+		ptrauth_keys_init(keys, true);
 		return 0;
 	}
 
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index a9b30111b725..890d7185641b 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -378,6 +378,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 	 */
 	fpsimd_flush_task_state(p);
 
+	ptrauth_thread_init_kernel(p);
+
 	if (likely(!(p->flags & PF_KTHREAD))) {
 		*childregs = *current_pt_regs();
 		childregs->regs[0] = 0;
@@ -481,6 +483,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
+	ptrauth_thread_switch(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index eca6aa05ac66..c5a6f3e8660b 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -239,6 +239,7 @@ asmlinkage notrace void secondary_start_kernel(void)
 	 */
 	check_local_cpu_capabilities();
 
+	ptrauth_thread_switch(current);
 	ptrauth_cpu_enable();
 
 	if (cpu_ops[cpu]->cpu_postboot)
@@ -456,6 +457,8 @@ void __init __no_ptrauth smp_prepare_boot_cpu(void)
 	if (system_uses_irq_prio_masking())
 		init_gic_priority_masking();
 
+	ptrauth_thread_init_kernel(current);
+	ptrauth_thread_switch(current);
 	ptrauth_cpu_enable();
 }
 
-- 
2.11.0



* [RFC v2 6/7] arm64: unwind: strip PAC from kernel addresses
From: Kristina Martsenko @ 2019-05-29 19:03 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan,
	Amit Kachhap, Dave Martin

From: Mark Rutland <mark.rutland@arm.com>

When we enable pointer authentication in the kernel, LR values saved to
the stack will have a PAC which we must strip in order to retrieve the
real return address.

Strip PACs when unwinding the stack in order to account for this.
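
As a worked example (hypothetical pointer values, assuming 48-bit kernel
VAs): the PAC occupies bits 63:56 and 54:48 of a kernel instruction pointer,
while bit 55 still selects the kernel (TTBR1) half of the address space, so
stripping simply sets the PAC bits back to all-ones:

	signed LR on the stack:		0x3ea4008010a3b4c8
	after ptrauth_strip_insn_pac():	0xffff008010a3b4c8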

Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---

Changes since RFC v1:
 - Moved the patch later in the series

 arch/arm64/include/asm/pointer_auth.h | 10 +++++++---
 arch/arm64/kernel/stacktrace.c        |  3 +++
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 79f35f5ecff5..5491c34b4dc3 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -80,12 +80,16 @@ extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
  * The EL0 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
  */
-#define ptrauth_user_pac_mask()	GENMASK(54, vabits_user)
+#define ptrauth_user_pac_mask()		GENMASK(54, vabits_user)
+
+#define ptrauth_kernel_pac_mask()	(GENMASK(63, 56) | GENMASK(54, VA_BITS))
 
-/* Only valid for EL0 TTBR0 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-	return ptr & ~ptrauth_user_pac_mask();
+	if (ptr & BIT_ULL(55))
+		return ptr | ptrauth_kernel_pac_mask();
+	else
+		return ptr & ~ptrauth_user_pac_mask();
 }
 
 #define ptrauth_thread_init_user(tsk)					\
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index d908b5e9e949..df07c27a9673 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -24,6 +24,7 @@
 #include <linux/stacktrace.h>
 
 #include <asm/irq.h>
+#include <asm/pointer_auth.h>
 #include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>
 
@@ -56,6 +57,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
 	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
 
+	frame->pc = ptrauth_strip_insn_pac(frame->pc);
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (tsk->ret_stack &&
 			(frame->pc == (unsigned long)return_to_handler)) {
-- 
2.11.0



* [RFC v2 7/7] arm64: compile the kernel with ptrauth return address signing
From: Kristina Martsenko @ 2019-05-29 19:03 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan,
	Amit Kachhap, Dave Martin

Compile all non-leaf functions with two ptrauth instructions: PACIASP in
the prologue to sign the return address, and AUTIASP in the epilogue to
authenticate the return address (from the stack). If authentication
fails, the return will cause an instruction abort to be taken, followed
by an oops that kills the task. This should help protect the kernel
against attacks using return-oriented programming.

The new instructions are in the HINT encoding space, so on a system
without ptrauth they execute as NOPs.
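
As a rough sketch (not actual compiler output), a non-leaf function gains
something like:

example:
	paciasp				// sign LR using APIAKey and SP
	stp	x29, x30, [sp, #-16]!	// spill FP and signed LR
	...
	ldp	x29, x30, [sp], #16	// reload possibly-corrupted LR
	autiasp				// authenticate LR; on failure its top
					// bits are corrupted
	ret				// faults if authentication failed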

CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
guests, but also automatically builds the kernel with ptrauth
instructions if the compiler supports it. If there is no compiler
support, we do not warn that the kernel was built without ptrauth
instructions.

GCC 7 and 8 support the -msign-return-address option, while GCC 9
deprecates that option and replaces it with -mbranch-protection. Support
both options.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
---

Changes since RFC v1:
 - Fixed support for compilers without ptrauth
 - Added support for the new -mbranch-protection option
 - Switched from protecting all functions to only protecting non-leaf functions
   (for no good reason, I have not done e.g. gadget analysis)
 - Moved __no_ptrauth definition to this patch, depending on compiler support
 - Updated the Kconfig symbol description
 - Updated the commit message

 arch/arm64/Kconfig                    | 12 +++++++++++-
 arch/arm64/Makefile                   |  6 ++++++
 arch/arm64/include/asm/pointer_auth.h |  6 ++++++
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f4c1e9b30129..3ce93d88fae1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1295,11 +1295,15 @@ config ARM64_PTR_AUTH
 	  and other attacks.
 
 	  This option enables these instructions at EL0 (i.e. for userspace).
-
 	  Choosing this option will cause the kernel to initialise secret keys
 	  for each process at exec() time, with these keys being
 	  context-switched along with the process.
 
+	  If the compiler supports the -mbranch-protection or
+	  -msign-return-address flag (e.g. GCC 7 or later), then this option
+	  will also cause the kernel itself to be compiled with return address
+	  protection.
+
 	  The feature is detected at runtime. If the feature is not present in
 	  hardware it will not be advertised to userspace nor will it be
 	  enabled.
@@ -1308,6 +1312,12 @@ config ARM64_PTR_AUTH
 	  then the secondary CPU will be offlined. On such a system, this
 	  option should not be selected.
 
+config CC_HAS_BRANCH_PROT_PAC_RET
+	def_bool $(cc-option,-mbranch-protection=pac-ret)
+
+config CC_HAS_SIGN_RETURN_ADDRESS
+	def_bool $(cc-option,-msign-return-address=non-leaf)
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index b025304bde46..1dfbe755b531 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -66,6 +66,12 @@ stack_protector_prepare: prepare0
 					include/generated/asm-offsets.h))
 endif
 
+ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
+pac-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=non-leaf
+pac-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret
+KBUILD_CFLAGS += $(pac-flags-y)
+endif
+
 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
 KBUILD_CPPFLAGS	+= -mbig-endian
 CHECKFLAGS	+= -D__AARCH64EB__
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 5491c34b4dc3..3a83c40ffd8a 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -15,7 +15,13 @@
  * allows pointer authentication to be enabled/disabled within the function
  * (but leaves the function unprotected by pointer authentication).
  */
+#if defined(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET)
+#define __no_ptrauth	__attribute__((target("branch-protection=none")))
+#elif defined(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS)
+#define __no_ptrauth	__attribute__((target("sign-return-address=none")))
+#else
 #define __no_ptrauth
+#endif
 
 /*
  * Each key is a 128-bit quantity which is split across a pair of 64-bit
-- 
2.11.0



* Re: [RFC v2 1/7] arm64: cpufeature: add pointer auth meta-capabilities
From: Kees Cook @ 2019-05-30  1:58 UTC
  To: Kristina Martsenko
  Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:26PM +0100, Kristina Martsenko wrote:
> To enable pointer auth for the kernel, we're going to need to check for
> the presence of address auth and generic auth using alternative_if. We
> currently have two cpucaps for each, but alternative_if needs to check a
> single cpucap. So define meta-capabilities that are present when either
> of the current two capabilities is present.
> 
> Leave the existing four cpucaps in place, as they are still needed to
> check for mismatched systems where one CPU has the architected algorithm
> but another has the IMP DEF algorithm.
> 
> Note, the meta-capabilities were present before but were removed in
> commits a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth
> CPU caps from 6 to 4") and 1e013d06120c ("arm64: cpufeature: Rework ptr
> auth hwcaps using multi_entry_cap_matches"), as they were not needed
> then. Note, unlike before, the current patch checks the cpucap values
> directly, instead of reading the CPU ID register value.
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees


-- 
Kees Cook


* Re: [RFC v2 2/7] arm64: install user ptrauth keys at kernel exit time
  2019-05-29 19:03 ` [RFC v2 2/7] arm64: install user ptrauth keys at kernel exit time Kristina Martsenko
@ 2019-05-30  2:04   ` Kees Cook
  2019-06-06 16:26   ` Catalin Marinas
  1 sibling, 0 replies; 41+ messages in thread
From: Kees Cook @ 2019-05-30  2:04 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:27PM +0100, Kristina Martsenko wrote:
> As we're going to enable pointer auth within the kernel and use a
> different APIAKey for the kernel itself, then move the user APIAKey
> switch to EL0 exception return.
> 
> The other 4 keys could remain switched during task switch, but are also
> moved to keep things simple.
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
> 
> Changes since RFC v1:
>  - Install all 5 keys on kernel_exit
>  - Updated prctl to have keys installed on kernel exit
>  - Renamed ptrauth-asm.h to asm_pointer_auth.h
>  - Minor cleanups
>  - Updated the commit message
> 
>  arch/arm64/include/asm/asm_pointer_auth.h | 43 +++++++++++++++++++++++++++++++
>  arch/arm64/include/asm/pointer_auth.h     | 23 +----------------
>  arch/arm64/kernel/asm-offsets.c           | 11 ++++++++
>  arch/arm64/kernel/entry.S                 |  3 +++
>  arch/arm64/kernel/pointer_auth.c          |  3 ---
>  arch/arm64/kernel/process.c               |  1 -
>  6 files changed, 58 insertions(+), 26 deletions(-)
>  create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
> 
> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> new file mode 100644
> index 000000000000..e3bfddfe80b6
> --- /dev/null
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -0,0 +1,43 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_ASM_POINTER_AUTH_H
> +#define __ASM_ASM_POINTER_AUTH_H
> +
> +#include <asm/alternative.h>
> +#include <asm/asm-offsets.h>
> +#include <asm/cpufeature.h>
> +#include <asm/sysreg.h>
> +
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +
> +	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
> +	mov	\tmp1, #THREAD_KEYS_USER
> +	add	\tmp1, \tsk, \tmp1
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
> +	msr_s	SYS_APIAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APIAKEYHI_EL1, \tmp3
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
> +	msr_s	SYS_APIBKEYLO_EL1, \tmp2
> +	msr_s	SYS_APIBKEYHI_EL1, \tmp3
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
> +	msr_s	SYS_APDAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APDAKEYHI_EL1, \tmp3
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
> +	msr_s	SYS_APDBKEYLO_EL1, \tmp2
> +	msr_s	SYS_APDBKEYHI_EL1, \tmp3
> +alternative_else_nop_endif
> +alternative_if ARM64_HAS_GENERIC_AUTH
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
> +	msr_s	SYS_APGAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APGAKEYHI_EL1, \tmp3
> +alternative_else_nop_endif
> +	.endm
> +
> +#else /* CONFIG_ARM64_PTR_AUTH */
> +
> +	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
> +	.endm
> +
> +#endif /* CONFIG_ARM64_PTR_AUTH */
> +
> +#endif /* __ASM_ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index 15d49515efdd..fc8dc70cc19f 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -50,19 +50,6 @@ do {								\
>  	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
>  } while (0)
>  
> -static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
> -{
> -	if (system_supports_address_auth()) {
> -		__ptrauth_key_install(APIA, keys->apia);
> -		__ptrauth_key_install(APIB, keys->apib);
> -		__ptrauth_key_install(APDA, keys->apda);
> -		__ptrauth_key_install(APDB, keys->apdb);
> -	}
> -
> -	if (system_supports_generic_auth())
> -		__ptrauth_key_install(APGA, keys->apga);
> -}
> -
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>  
>  /*
> @@ -78,20 +65,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  }
>  
>  #define ptrauth_thread_init_user(tsk)					\
> -do {									\
> -	struct task_struct *__ptiu_tsk = (tsk);				\
> -	ptrauth_keys_init(&__ptiu_tsk->thread.keys_user);		\
> -	ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user);		\
> -} while (0)
> -
> -#define ptrauth_thread_switch(tsk)	\
> -	ptrauth_keys_switch(&(tsk)->thread.keys_user)
> +	ptrauth_keys_init(&(tsk)->thread.keys_user)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_thread_init_user(tsk)
> -#define ptrauth_thread_switch(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7f40dcbdd51d..edc471e4acb1 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -51,6 +51,9 @@ int main(void)
>  #endif
>    BLANK();
>    DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +  DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
> +#endif
>    BLANK();
>    DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
>    DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
> @@ -153,5 +156,13 @@ int main(void)
>    DEFINE(SDEI_EVENT_INTREGS,	offsetof(struct sdei_registered_event, interrupted_regs));
>    DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
>  #endif
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +  DEFINE(PTRAUTH_KEY_APIA,	offsetof(struct ptrauth_keys, apia));
> +  DEFINE(PTRAUTH_KEY_APIB,	offsetof(struct ptrauth_keys, apib));
> +  DEFINE(PTRAUTH_KEY_APDA,	offsetof(struct ptrauth_keys, apda));
> +  DEFINE(PTRAUTH_KEY_APDB,	offsetof(struct ptrauth_keys, apdb));
> +  DEFINE(PTRAUTH_KEY_APGA,	offsetof(struct ptrauth_keys, apga));
> +  BLANK();
> +#endif
>    return 0;
>  }
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index c50a7a75f2e0..73a28d88f78d 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -25,6 +25,7 @@
>  #include <asm/alternative.h>
>  #include <asm/assembler.h>
>  #include <asm/asm-offsets.h>
> +#include <asm/asm_pointer_auth.h>
>  #include <asm/cpufeature.h>
>  #include <asm/errno.h>
>  #include <asm/esr.h>
> @@ -336,6 +337,8 @@ alternative_if ARM64_WORKAROUND_845719
>  alternative_else_nop_endif
>  #endif
>  3:
> +	ptrauth_keys_install_user tsk, x0, x1, x2
> +
>  	apply_ssbd 0, x0, x1
>  	.endif
>  
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index c507b584259d..95985be67891 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -19,7 +19,6 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  
>  	if (!arg) {
>  		ptrauth_keys_init(keys);
> -		ptrauth_keys_switch(keys);
>  		return 0;
>  	}
>  
> @@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  	if (arg & PR_PAC_APGAKEY)
>  		get_random_bytes(&keys->apga, sizeof(keys->apga));
>  
> -	ptrauth_keys_switch(keys);
> -
>  	return 0;
>  }
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 3767fb21a5b8..a9b30111b725 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -481,7 +481,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
>  	contextidr_thread_switch(next);
>  	entry_task_switch(next);
>  	uao_thread_switch(next);
> -	ptrauth_thread_switch(next);
>  
>  	/*
>  	 * Complete any pending TLB or cache maintenance on this CPU in case
> -- 
> 2.11.0
> 

-- 
Kees Cook


* Re: [RFC v2 3/7] arm64: cpufeature: handle conflicts based on capability
  2019-05-29 19:03 ` [RFC v2 3/7] arm64: cpufeature: handle conflicts based on capability Kristina Martsenko
@ 2019-05-30  2:49   ` Kees Cook
  2019-05-30 14:16   ` Suzuki K Poulose
  1 sibling, 0 replies; 41+ messages in thread
From: Kees Cook @ 2019-05-30  2:49 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:28PM +0100, Kristina Martsenko wrote:
> Each system capability can be of either boot, local, or system scope,
> depending on when the state of the capability is finalized. When we
> detect a conflict on a late CPU, we either offline the CPU or panic the
> system. We currently always panic if the conflict is caused by a boot
> scope capability, and offline the CPU if the conflict is caused by a
> local or system scope capability.
> 
> We're going to want to add new capability (for pointer authentication)
> which needs to be boot scope but doesn't need to panic the system when a
> conflict is detected. So add a new flag to specify whether the
> capability requires the system to panic or not. Current boot scope
> capabilities are updated to set the flag, so there should be no
> functional change as a result of this patch.
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
> 
> Changes since RFC v1:
>  - New patch, to have ptrauth mismatches disable secondaries instead of
>    panicking
> 
>  arch/arm64/include/asm/cpufeature.h | 15 ++++++++++++++-
>  arch/arm64/kernel/cpufeature.c      | 23 +++++++++--------------
>  2 files changed, 23 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 0522ea674253..ad952f2e0a2b 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -217,6 +217,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>   *     In some non-typical cases either both (a) and (b), or neither,
>   *     should be permitted. This can be described by including neither
>   *     or both flags in the capability's type field.
> + *
> + *     In case of a conflict, the CPU is prevented from booting. If the
> + *     ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability,
> + *     then a kernel panic is triggered.
>   */
>  
>  
> @@ -249,6 +253,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>  #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	((u16)BIT(4))
>  /* Is it safe for a late CPU to miss this capability when system has it */
>  #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((u16)BIT(5))
> +/* Panic when a conflict is detected */
> +#define ARM64_CPUCAP_PANIC_ON_CONFLICT		((u16)BIT(6))
>  
>  /*
>   * CPU errata workarounds that need to be enabled at boot time if one or
> @@ -290,7 +296,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>   * CPU feature used early in the boot based on the boot CPU. All secondary
>   * CPUs must match the state of the capability as detected by the boot CPU.
>   */
> -#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
> +#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
> +	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
>  
>  struct arm64_cpu_capabilities {
>  	const char *desc;
> @@ -354,6 +361,12 @@ cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
>  	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
>  }
>  
> +static inline bool
> +cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
> +{
> +	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
> +}
> +
>  /*
>   * Generic helper for handling capabilties with multiple (match,enable) pairs
>   * of call backs, sharing the same capability bit.
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 166584deaed2..8a595b4cb0aa 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1796,10 +1796,8 @@ static void __init enable_cpu_capabilities(u16 scope_mask)
>   * Run through the list of capabilities to check for conflicts.
>   * If the system has already detected a capability, take necessary
>   * action on this CPU.
> - *
> - * Returns "false" on conflicts.
>   */
> -static bool verify_local_cpu_caps(u16 scope_mask)
> +static void verify_local_cpu_caps(u16 scope_mask)
>  {
>  	int i;
>  	bool cpu_has_cap, system_has_cap;
> @@ -1844,10 +1842,12 @@ static bool verify_local_cpu_caps(u16 scope_mask)
>  		pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n",
>  			smp_processor_id(), caps->capability,
>  			caps->desc, system_has_cap, cpu_has_cap);
> -		return false;
> -	}
>  
> -	return true;
> +		if (cpucap_panic_on_conflict(caps))
> +			cpu_panic_kernel();
> +		else
> +			cpu_die_early();
> +	}
>  }
>  
>  /*
> @@ -1857,12 +1857,8 @@ static bool verify_local_cpu_caps(u16 scope_mask)
>  static void check_early_cpu_features(void)
>  {
>  	verify_cpu_asid_bits();
> -	/*
> -	 * Early features are used by the kernel already. If there
> -	 * is a conflict, we cannot proceed further.
> -	 */
> -	if (!verify_local_cpu_caps(SCOPE_BOOT_CPU))
> -		cpu_panic_kernel();
> +
> +	verify_local_cpu_caps(SCOPE_BOOT_CPU);
>  }
>  
>  static void
> @@ -1910,8 +1906,7 @@ static void verify_local_cpu_capabilities(void)
>  	 * check_early_cpu_features(), as they need to be verified
>  	 * on all secondary CPUs.
>  	 */
> -	if (!verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU))
> -		cpu_die_early();
> +	verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);
>  
>  	verify_local_elf_hwcaps(arm64_elf_hwcaps);
>  
> -- 
> 2.11.0
> 

-- 
Kees Cook


* Re: [RFC v2 0/7] arm64: return address signing
  2019-05-29 19:03 [RFC v2 0/7] arm64: return address signing Kristina Martsenko
                   ` (6 preceding siblings ...)
  2019-05-29 19:03 ` [RFC v2 7/7] arm64: compile the kernel with ptrauth return address signing Kristina Martsenko
@ 2019-05-30  3:09 ` Kees Cook
  2019-05-30  7:25   ` Will Deacon
                     ` (2 more replies)
  7 siblings, 3 replies; 41+ messages in thread
From: Kees Cook @ 2019-05-30  3:09 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Diogo N. Sampaio, Ard Biesheuvel, Catalin Marinas,
	Luke Cheeseman, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
	Suzuki K Poulose, Dave Martin, linux-arm-kernel

On Wed, May 29, 2019 at 08:03:25PM +0100, Kristina Martsenko wrote:
> This series improves function return address protection for the arm64 kernel, by
> compiling the kernel with ARMv8.3 Pointer Authentication instructions. This
> should help protect the kernel against attacks using return-oriented
> programming.

Can you speak to whether this feature should be enabled in addition to
or instead of the standard stack canary option?

>  - The patches make use of the sign-return-address/branch-protection compiler
>    options and function attributes. GCC supports both, but Clang/LLVM appears
>    to only support the compiler option, not the function attribute, so with
>    these patches (and CONFIG_ARM64_PTR_AUTH=y) an LLVM-built kernel will fail
>    to boot on ARMv8.3 CPUs. I don't yet know why LLVM does not support it, or
>    whether support can be added. This series may need to be rewritten to not
>    use the attribute, and instead move the functionality to assembly, or to
>    disable return address signing when building with LLVM.

I've added Luke Cheeseman and Diogo N. Sampaio to CC. In looking quickly
at the LLVM support for branch-protection, I think it's just missing the
attribute target glue needed to "notice" the attribute markings. Luke,
is this expected to work in Clang currently? I'm not familiar yet with
how attributes get wired up, but I think it should be quite possible.
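
For reference, the attribute spellings at stake, as used for
__no_ptrauth in patch 7 (foo/bar are made-up names, just for
illustration):

    /* GCC 9 and later: */
    void __attribute__((target("branch-protection=none"))) foo(void);

    /* GCC 7 and 8: */
    void __attribute__((target("sign-return-address=none"))) bar(void);

If Clang accepts -mbranch-protection but quietly ignores the target
attribute, then functions that enable or switch the keys still get
signed, and (as patch 4's commit message notes) such functions cannot
return.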

>  - more testing

Is PAC emulated in QEMU yet? (I assume I can't get real hardware to help
test this yet...)

-- 
Kees Cook


* Re: [RFC v2 4/7] arm64: enable ptrauth earlier
  2019-05-29 19:03 ` [RFC v2 4/7] arm64: enable ptrauth earlier Kristina Martsenko
@ 2019-05-30  3:11   ` Kees Cook
  2019-06-13 15:41   ` Suzuki K Poulose
  1 sibling, 0 replies; 41+ messages in thread
From: Kees Cook @ 2019-05-30  3:11 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:29PM +0100, Kristina Martsenko wrote:
> When the kernel is compiled with pointer auth instructions, the boot CPU
> needs to start using address auth very early, so change the cpucap to
> account for this.
> 
> A function that enables pointer auth cannot return, so compile such
> functions without pointer auth (using a compiler function attribute).
> The __no_ptrauth macro will be defined to the required function
> attribute in a later patch.
> 
> Do not use the cpu_enable callback, to avoid compiling the whole
> callchain down to cpu_enable without pointer auth.
> 
> Note the change in behavior: if the boot CPU has address auth and a late
> CPU does not, then we offline the late CPU. Until now we would have just
> disabled address auth in this case.
> 
> Leave generic authentication as a "system scope" cpucap for now, since
> initially the kernel will only use address authentication.
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

Defining the empty __no_ptrauth here feels a little out of order.
The only better option I can think of is to split the compiler flag
patch in half to introduce the __no_ptrauth flag in full, on its own.
Either way:

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
> 
> Changes since RFC v1:
>  - Enable instructions for all 5 keys
>  - Replaced __always_inline with __no_ptrauth as it turns out __always_inline
>    is only a hint (and could therefore result in crashes)
>  - Left the __no_ptrauth definition blank for now as it needs to be determined
>    with more complex logic in a later patch
>  - Updated the Kconfig symbol description
>  - Minor cleanups
>  - Updated the commit message
> 
>  arch/arm64/Kconfig                    |  4 ++++
>  arch/arm64/include/asm/cpufeature.h   |  9 +++++++++
>  arch/arm64/include/asm/pointer_auth.h | 19 +++++++++++++++++++
>  arch/arm64/kernel/cpufeature.c        | 13 +++----------
>  arch/arm64/kernel/smp.c               |  7 ++++++-
>  5 files changed, 41 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7e34b9eba5de..f4c1e9b30129 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1304,6 +1304,10 @@ config ARM64_PTR_AUTH
>  	  hardware it will not be advertised to userspace nor will it be
>  	  enabled.
>  
> +	  If the feature is present on the primary CPU but not a secondary CPU,
> +	  then the secondary CPU will be offlined. On such a system, this
> +	  option should not be selected.
> +
>  endmenu
>  
>  config ARM64_SVE
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index ad952f2e0a2b..e36a7948b763 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -299,6 +299,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>  #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
>  	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
>  
> +/*
> + * CPU feature used early in the boot based on the boot CPU. It is safe for a
> + * late CPU to have this feature even though the boot CPU hasn't enabled it,
> + * although the feature will not be used by Linux in this case. If the boot CPU
> + * has enabled this feature already, then every late CPU must have it.
> + */
> +#define ARM64_CPUCAP_BOOT_CPU_FEATURE			\
> +	 (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
> +
>  struct arm64_cpu_capabilities {
>  	const char *desc;
>  	u16 capability;
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index fc8dc70cc19f..a97b7dc10bdb 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -11,6 +11,13 @@
>  
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  /*
> + * Compile the function without pointer authentication instructions. This
> + * allows pointer authentication to be enabled/disabled within the function
> + * (but leaves the function unprotected by pointer authentication).
> + */
> +#define __no_ptrauth
> +
> +/*
>   * Each key is a 128-bit quantity which is split across a pair of 64-bit
>   * registers (Lo and Hi).
>   */
> @@ -50,6 +57,16 @@ do {								\
>  	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
>  } while (0)
>  
> +static inline void __no_ptrauth ptrauth_cpu_enable(void)
> +{
> +	if (!system_supports_address_auth())
> +		return;
> +
> +	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
> +				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
> +	isb();
> +}
> +
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>  
>  /*
> @@ -68,6 +85,8 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  	ptrauth_keys_init(&(tsk)->thread.keys_user)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
> +#define __no_ptrauth
> +#define ptrauth_cpu_enable(tsk)
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_thread_init_user(tsk)
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 8a595b4cb0aa..2cf042ebb237 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1200,12 +1200,6 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
>  #endif /* CONFIG_ARM64_RAS_EXTN */
>  
>  #ifdef CONFIG_ARM64_PTR_AUTH
> -static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
> -{
> -	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
> -				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
> -}
> -
>  static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
>  			     int __unused)
>  {
> @@ -1474,7 +1468,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  	{
>  		.desc = "Address authentication (architected algorithm)",
>  		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
> -		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
>  		.sys_reg = SYS_ID_AA64ISAR1_EL1,
>  		.sign = FTR_UNSIGNED,
>  		.field_pos = ID_AA64ISAR1_APA_SHIFT,
> @@ -1484,7 +1478,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  	{
>  		.desc = "Address authentication (IMP DEF algorithm)",
>  		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
> -		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
>  		.sys_reg = SYS_ID_AA64ISAR1_EL1,
>  		.sign = FTR_UNSIGNED,
>  		.field_pos = ID_AA64ISAR1_API_SHIFT,
> @@ -1493,9 +1487,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  	},
>  	{
>  		.capability = ARM64_HAS_ADDRESS_AUTH,
> -		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
>  		.matches = has_address_auth,
> -		.cpu_enable = cpu_enable_address_auth,
>  	},
>  	{
>  		.desc = "Generic authentication (architected algorithm)",
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 824de7038967..eca6aa05ac66 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -54,6 +54,7 @@
>  #include <asm/numa.h>
>  #include <asm/pgtable.h>
>  #include <asm/pgalloc.h>
> +#include <asm/pointer_auth.h>
>  #include <asm/processor.h>
>  #include <asm/smp_plat.h>
>  #include <asm/sections.h>
> @@ -238,6 +239,8 @@ asmlinkage notrace void secondary_start_kernel(void)
>  	 */
>  	check_local_cpu_capabilities();
>  
> +	ptrauth_cpu_enable();
> +
>  	if (cpu_ops[cpu]->cpu_postboot)
>  		cpu_ops[cpu]->cpu_postboot();
>  
> @@ -432,7 +435,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
>  	mark_linear_text_alias_ro();
>  }
>  
> -void __init smp_prepare_boot_cpu(void)
> +void __init __no_ptrauth smp_prepare_boot_cpu(void)
>  {
>  	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
>  	/*
> @@ -452,6 +455,8 @@ void __init smp_prepare_boot_cpu(void)
>  	/* Conditionally switch to GIC PMR for interrupt masking */
>  	if (system_uses_irq_prio_masking())
>  		init_gic_priority_masking();
> +
> +	ptrauth_cpu_enable();
>  }
>  
>  static u64 __init of_get_cpu_mpidr(struct device_node *dn)
> -- 
> 2.11.0
> 

-- 
Kees Cook


* Re: [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys
  2019-05-29 19:03 ` [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys Kristina Martsenko
@ 2019-05-30  3:34   ` Kees Cook
  2019-05-30 16:26     ` Kristina Martsenko
  2019-06-04 10:03   ` Dave Martin
  2019-06-06 16:44   ` Catalin Marinas
  2 siblings, 1 reply; 41+ messages in thread
From: Kees Cook @ 2019-05-30  3:34 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:30PM +0100, Kristina Martsenko wrote:
> Set up keys to use pointer authentication within the kernel. The kernel
> will be compiled with APIAKey instructions, the other keys are currently
> unused. Each task is given its own APIAKey, which is initialized during
> fork. The key is changed during context switch and on kernel entry from
> EL0.
> 
> A function that changes the key cannot return, so compile such functions
> without pointer auth (using __no_ptrauth which will be defined to a
> compiler function attribute later).
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

Just so I'm reading this right: the kernel is only using APIAKey?

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
> 
> Changes since RFC v1:
>  - Updated ptrauth_keys_init to not randomly initialize the other 4 (unused)
>    kernel-space keys, to save entropy
>  - Replaced __always_inline with __no_ptrauth as it turns out __always_inline
>    is only a hint (and could therefore result in crashes)
>  - Moved ptrauth_keys_install_kernel earlier in kernel_entry, in case in the
>    future C function calls are added in kernel_entry
>  - Added ISB after key install in kernel_exit, in case in the future C function
>    calls are added after the macro
>  - Dropped an outdated comment
>  - Updated the commit message
> 
>  arch/arm64/include/asm/asm_pointer_auth.h | 16 +++++++++++++++
>  arch/arm64/include/asm/pointer_auth.h     | 33 +++++++++++++++++++++----------
>  arch/arm64/include/asm/processor.h        |  1 +
>  arch/arm64/kernel/asm-offsets.c           |  1 +
>  arch/arm64/kernel/entry.S                 |  1 +
>  arch/arm64/kernel/pointer_auth.c          |  2 +-
>  arch/arm64/kernel/process.c               |  3 +++
>  arch/arm64/kernel/smp.c                   |  3 +++
>  8 files changed, 49 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> index e3bfddfe80b6..f595da9661a4 100644
> --- a/arch/arm64/include/asm/asm_pointer_auth.h
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -25,11 +25,24 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
>  	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
>  	msr_s	SYS_APDBKEYLO_EL1, \tmp2
>  	msr_s	SYS_APDBKEYHI_EL1, \tmp3
> +	isb
>  alternative_else_nop_endif
>  alternative_if ARM64_HAS_GENERIC_AUTH
>  	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
>  	msr_s	SYS_APGAKEYLO_EL1, \tmp2
>  	msr_s	SYS_APGAKEYHI_EL1, \tmp3
> +	isb
> +alternative_else_nop_endif
> +	.endm
> +
> +	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
> +	mov	\tmp1, #THREAD_KEYS_KERNEL
> +	add	\tmp1, \tsk, \tmp1
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
> +	msr_s	SYS_APIAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APIAKEYHI_EL1, \tmp3
> +	isb
>  alternative_else_nop_endif
>  	.endm
>  
> @@ -38,6 +51,9 @@ alternative_else_nop_endif
>  	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
>  	.endm
>  
> +	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
> +	.endm
> +
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
>  #endif /* __ASM_ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index a97b7dc10bdb..79f35f5ecff5 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -25,10 +25,6 @@ struct ptrauth_key {
>  	unsigned long lo, hi;
>  };
>  
> -/*
> - * We give each process its own keys, which are shared by all threads. The keys
> - * are inherited upon fork(), and reinitialised upon exec*().
> - */
>  struct ptrauth_keys {
>  	struct ptrauth_key apia;
>  	struct ptrauth_key apib;
> @@ -37,16 +33,18 @@ struct ptrauth_keys {
>  	struct ptrauth_key apga;
>  };
>  
> -static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
> +static inline void ptrauth_keys_init(struct ptrauth_keys *keys, bool user)
>  {
>  	if (system_supports_address_auth()) {
>  		get_random_bytes(&keys->apia, sizeof(keys->apia));
> -		get_random_bytes(&keys->apib, sizeof(keys->apib));
> -		get_random_bytes(&keys->apda, sizeof(keys->apda));
> -		get_random_bytes(&keys->apdb, sizeof(keys->apdb));
> +		if (user) {
> +			get_random_bytes(&keys->apib, sizeof(keys->apib));
> +			get_random_bytes(&keys->apda, sizeof(keys->apda));
> +			get_random_bytes(&keys->apdb, sizeof(keys->apdb));
> +		}
>  	}
>  
> -	if (system_supports_generic_auth())
> +	if (system_supports_generic_auth() && user)
>  		get_random_bytes(&keys->apga, sizeof(keys->apga));
>  }
>  
> @@ -57,6 +55,15 @@ do {								\
>  	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
>  } while (0)
>  
> +static inline void __no_ptrauth ptrauth_keys_switch(struct ptrauth_keys *keys)
> +{
> +	if (!system_supports_address_auth())
> +		return;
> +
> +	__ptrauth_key_install(APIA, keys->apia);
> +	isb();
> +}
> +
>  static inline void __no_ptrauth ptrauth_cpu_enable(void)
>  {
>  	if (!system_supports_address_auth())
> @@ -82,7 +89,11 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  }
>  
>  #define ptrauth_thread_init_user(tsk)					\
> -	ptrauth_keys_init(&(tsk)->thread.keys_user)
> +	ptrauth_keys_init(&(tsk)->thread.keys_user, true)
> +#define ptrauth_thread_init_kernel(tsk)					\
> +	ptrauth_keys_init(&(tsk)->thread.keys_kernel, false)
> +#define ptrauth_thread_switch(tsk)					\
> +	ptrauth_keys_switch(&(tsk)->thread.keys_kernel)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define __no_ptrauth
> @@ -90,6 +101,8 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_thread_init_user(tsk)
> +#define ptrauth_thread_init_kernel(tsk)
> +#define ptrauth_thread_switch(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 5d9ce62bdebd..f7684a021ca2 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -149,6 +149,7 @@ struct thread_struct {
>  	struct debug_info	debug;		/* debugging */
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	struct ptrauth_keys	keys_user;
> +	struct ptrauth_keys	keys_kernel;
>  #endif
>  };
>  
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index edc471e4acb1..7dfebecd387d 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -53,6 +53,7 @@ int main(void)
>    DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
>  #ifdef CONFIG_ARM64_PTR_AUTH
>    DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
> +  DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
>  #endif
>    BLANK();
>    DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 73a28d88f78d..001d508cd63f 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -184,6 +184,7 @@ alternative_cb_end
>  
>  	apply_ssbd 1, x22, x23
>  
> +	ptrauth_keys_install_kernel tsk, x20, x22, x23
>  	.else
>  	add	x21, sp, #S_FRAME_SIZE
>  	get_current_task tsk
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index 95985be67891..4ca141b8c581 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -18,7 +18,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  		return -EINVAL;
>  
>  	if (!arg) {
> -		ptrauth_keys_init(keys);
> +		ptrauth_keys_init(keys, true);
>  		return 0;
>  	}
>  
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index a9b30111b725..890d7185641b 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -378,6 +378,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
>  	 */
>  	fpsimd_flush_task_state(p);
>  
> +	ptrauth_thread_init_kernel(p);
> +
>  	if (likely(!(p->flags & PF_KTHREAD))) {
>  		*childregs = *current_pt_regs();
>  		childregs->regs[0] = 0;
> @@ -481,6 +483,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
>  	contextidr_thread_switch(next);
>  	entry_task_switch(next);
>  	uao_thread_switch(next);
> +	ptrauth_thread_switch(next);
>  
>  	/*
>  	 * Complete any pending TLB or cache maintenance on this CPU in case
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index eca6aa05ac66..c5a6f3e8660b 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -239,6 +239,7 @@ asmlinkage notrace void secondary_start_kernel(void)
>  	 */
>  	check_local_cpu_capabilities();
>  
> +	ptrauth_thread_switch(current);
>  	ptrauth_cpu_enable();
>  
>  	if (cpu_ops[cpu]->cpu_postboot)
> @@ -456,6 +457,8 @@ void __init __no_ptrauth smp_prepare_boot_cpu(void)
>  	if (system_uses_irq_prio_masking())
>  		init_gic_priority_masking();
>  
> +	ptrauth_thread_init_kernel(current);
> +	ptrauth_thread_switch(current);
>  	ptrauth_cpu_enable();
>  }
>  
> -- 
> 2.11.0
> 

-- 
Kees Cook


* Re: [RFC v2 6/7] arm64: unwind: strip PAC from kernel addresses
  2019-05-29 19:03 ` [RFC v2 6/7] arm64: unwind: strip PAC from kernel addresses Kristina Martsenko
@ 2019-05-30  3:36   ` Kees Cook
  0 siblings, 0 replies; 41+ messages in thread
From: Kees Cook @ 2019-05-30  3:36 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:31PM +0100, Kristina Martsenko wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When we enable pointer authentication in the kernel, LR values saved to
> the stack will have a PAC which we must strip in order to retrieve the
> real return address.
> 
> Strip PACs when unwinding the stack in order to account for this.
> 
> Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
> 
> Changes since RFC v1:
>  - Moved the patch later in the series
> 
>  arch/arm64/include/asm/pointer_auth.h | 10 +++++++---
>  arch/arm64/kernel/stacktrace.c        |  3 +++
>  2 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index 79f35f5ecff5..5491c34b4dc3 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -80,12 +80,16 @@ extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>   * The EL0 pointer bits used by a pointer authentication code.
>   * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
>   */
> -#define ptrauth_user_pac_mask()	GENMASK(54, vabits_user)
> +#define ptrauth_user_pac_mask()		GENMASK(54, vabits_user)
> +
> +#define ptrauth_kernel_pac_mask()	(GENMASK(63, 56) | GENMASK(54, VA_BITS))
>  
> -/* Only valid for EL0 TTBR0 instruction pointers */
>  static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  {
> -	return ptr & ~ptrauth_user_pac_mask();
> +	if (ptr & BIT_ULL(55))
> +		return ptr | ptrauth_kernel_pac_mask();
> +	else
> +		return ptr & ~ptrauth_user_pac_mask();
>  }
>  
>  #define ptrauth_thread_init_user(tsk)					\
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index d908b5e9e949..df07c27a9673 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -24,6 +24,7 @@
>  #include <linux/stacktrace.h>
>  
>  #include <asm/irq.h>
> +#include <asm/pointer_auth.h>
>  #include <asm/stack_pointer.h>
>  #include <asm/stacktrace.h>
>  
> @@ -56,6 +57,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
>  	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
>  	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
>  
> +	frame->pc = ptrauth_strip_insn_pac(frame->pc);
> +
>  #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>  	if (tsk->ret_stack &&
>  			(frame->pc == (unsigned long)return_to_handler)) {
> -- 
> 2.11.0
> 

-- 
Kees Cook


* Re: [RFC v2 7/7] arm64: compile the kernel with ptrauth return address signing
  2019-05-29 19:03 ` [RFC v2 7/7] arm64: compile the kernel with ptrauth return address signing Kristina Martsenko
@ 2019-05-30  3:45   ` Kees Cook
  0 siblings, 0 replies; 41+ messages in thread
From: Kees Cook @ 2019-05-30  3:45 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:32PM +0100, Kristina Martsenko wrote:
> Compile all non-leaf functions with two ptrauth instructions: PACIASP in
> the prologue to sign the return address, and AUTIASP in the epilogue to
> authenticate the return address (from the stack). If authentication
> fails, the return will cause an instruction abort to be taken, followed
> by an oops and killing the task. This should help protect the kernel
> against attacks using return-oriented programming.
> 
> The new instructions are in the HINT encoding space, so on a system
> without ptrauth they execute as NOPs.
> 
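For readers who haven't seen the codegen, the expected shape of a
protected non-leaf function (sketched by hand from the description
above, not actual compiler output):

	foo:
		paciasp			// sign LR (x30) against SP
		stp	x29, x30, [sp, #-16]!
		...
		ldp	x29, x30, [sp], #16
		autiasp			// authenticate LR; a forged
		ret			// LR makes the ret fault
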
> CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
> guests, but also automatically builds the kernel with ptrauth
> instructions if the compiler supports it. If there is no compiler
> support, we do not warn that the kernel was built without ptrauth
> instructions.
> 
> GCC 7 and 8 support the -msign-return-address option, while GCC 9
> deprecates that option and replaces it with -mbranch-protection. Support
> both options.
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

Minor nits below...

Reviewed-by: Kees Cook <keescook@chromium.org>

> ---
> 
> Changes since RFC v1:
>  - Fixed support for compilers without ptrauth
>  - Added support for the new -mbranch-protection option
>  - Switched from protecting all functions to only protecting non-leaf functions
>    (for no good reason, I have not done e.g. gadget analysis)
>  - Moved __no_ptrauth definition to this patch, depending on compiler support
>  - Updated the Kconfig symbol description
>  - Updated the commit message
> 
>  arch/arm64/Kconfig                    | 12 +++++++++++-
>  arch/arm64/Makefile                   |  6 ++++++
>  arch/arm64/include/asm/pointer_auth.h |  6 ++++++
>  3 files changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index f4c1e9b30129..3ce93d88fae1 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1295,11 +1295,15 @@ config ARM64_PTR_AUTH
>  	  and other attacks.
>  
>  	  This option enables these instructions at EL0 (i.e. for userspace).
> -
>  	  Choosing this option will cause the kernel to initialise secret keys
>  	  for each process at exec() time, with these keys being
>  	  context-switched along with the process.
>  
> +	  If the compiler supports the -mbranch-protection or
> +	  -msign-return-address flag (e.g. GCC 7 or later), then this option
> +	  will also cause the kernel itself to be compiled with return address
> +	  protection.
> +
>  	  The feature is detected at runtime. If the feature is not present in
>  	  hardware it will not be advertised to userspace nor will it be
>  	  enabled.
> @@ -1308,6 +1312,12 @@ config ARM64_PTR_AUTH
>  	  then the secondary CPU will be offlined. On such a system, this
>  	  option should not be selected.
>  
> +config CC_HAS_BRANCH_PROT_PAC_RET
> +	def_bool $(cc-option,-mbranch-protection=pac-ret)
> +
> +config CC_HAS_SIGN_RETURN_ADDRESS
> +	def_bool $(cc-option,-msign-return-address=non-leaf)
> +

I would add comments here for "GCC 9, Clang" and "GCC 7, 8"
respectively, just so it's quickly obvious what's to be expected when
reading this later. :)
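
Something like this (untested sketch):

config CC_HAS_BRANCH_PROT_PAC_RET
	# GCC 9 and later, Clang
	def_bool $(cc-option,-mbranch-protection=pac-ret)

config CC_HAS_SIGN_RETURN_ADDRESS
	# GCC 7 and 8
	def_bool $(cc-option,-msign-return-address=non-leaf)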

>  endmenu
>  
>  config ARM64_SVE
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index b025304bde46..1dfbe755b531 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -66,6 +66,12 @@ stack_protector_prepare: prepare0
>  					include/generated/asm-offsets.h))
>  endif
>  
> +ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
> +pac-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=non-leaf
> +pac-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret
> +KBUILD_CFLAGS += $(pac-flags-y)
> +endif
> +
>  ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
>  KBUILD_CPPFLAGS	+= -mbig-endian
>  CHECKFLAGS	+= -D__AARCH64EB__
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index 5491c34b4dc3..3a83c40ffd8a 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -15,7 +15,13 @@
>   * allows pointer authentication to be enabled/disabled within the function
>   * (but leaves the function unprotected by pointer authentication).
>   */
> +#if defined(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET)
> +#define __no_ptrauth	__attribute__((target("branch-protection=none")))
> +#elif defined(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS)
> +#define __no_ptrauth	__attribute__((target("sign-return-address=none")))
> +#else
>  #define __no_ptrauth
> +#endif

Is arch/arm64/include/asm/pointer_auth.h always going to be included?
I suspect the correct place for this might end up being in
include/linux/compiler_types.h, but for now, only a few select places
need it, so this is probably fine as-is.

>  
>  /*
>   * Each key is a 128-bit quantity which is split across a pair of 64-bit
> -- 
> 2.11.0
> 

I'm excited to test this series! :)

-- 
Kees Cook


* Re: [RFC v2 0/7] arm64: return address signing
  2019-05-30  3:09 ` [RFC v2 0/7] arm64: " Kees Cook
@ 2019-05-30  7:25   ` Will Deacon
  2019-05-30  8:39     ` Ard Biesheuvel
  2019-05-30  9:12   ` Ramana Radhakrishnan
       [not found]   ` <DB7PR08MB3865C4AA36C9C465B2A687DABF180@DB7PR08MB3865.eurprd08.prod.outlook.com>
  2 siblings, 1 reply; 41+ messages in thread
From: Will Deacon @ 2019-05-30  7:25 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Diogo N. Sampaio, Ard Biesheuvel, Catalin Marinas,
	Luke Cheeseman, Kristina Martsenko, Ramana Radhakrishnan,
	Amit Kachhap, Suzuki K Poulose, Dave Martin, linux-arm-kernel

On Wed, May 29, 2019 at 08:09:23PM -0700, Kees Cook wrote:
> On Wed, May 29, 2019 at 08:03:25PM +0100, Kristina Martsenko wrote:
> > This series improves function return address protection for the arm64 kernel, by
> > compiling the kernel with ARMv8.3 Pointer Authentication instructions. This
> > should help protect the kernel against attacks using return-oriented
> > programming.
> 
> Can you speak to whether this feature should be enabled in addition to
> or instead of the standard stack canary option?

Hmm. That's a really interesting question. Given that PAC is optional in the
hardware and behaves as NOPs on older CPUs, I'd have thought that we'd need
to continue enabling stack canaries regardless. However, that then raises
the obvious question as to whether we could patch out the canary code if we
detect PAC at runtime, which probably needs compiler help...

Then again, perhaps there's benefit in having both features enabled.

So I think I agree with your question :)

Will


* Re: [RFC v2 0/7] arm64: return address signing
  2019-05-30  7:25   ` Will Deacon
@ 2019-05-30  8:39     ` Ard Biesheuvel
  2019-05-30  9:11       ` Ramana Radhakrishnan
  0 siblings, 1 reply; 41+ messages in thread
From: Ard Biesheuvel @ 2019-05-30  8:39 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Kees Cook, Diogo N. Sampaio, Luke Cheeseman,
	Catalin Marinas, Suzuki K Poulose, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Thu, 30 May 2019 at 09:25, Will Deacon <will.deacon@arm.com> wrote:
>
> On Wed, May 29, 2019 at 08:09:23PM -0700, Kees Cook wrote:
> > On Wed, May 29, 2019 at 08:03:25PM +0100, Kristina Martsenko wrote:
> > > This series improves function return address protection for the arm64 kernel, by
> > > compiling the kernel with ARMv8.3 Pointer Authentication instructions. This
> > > should help protect the kernel against attacks using return-oriented
> > > programming.
> >
> > Can you speak to whether this feature should be enabled in addition to
> > or instead of the standard stack canary option?
>
> Hmm. That's a really interesting question. Given that PAC is optional in the
> hardware and behaves as NOPs on older CPUs, I'd have thought that we'd need
> to continue enabling stack canaries regardless. However, that then raises
> the obvious question as to whether we could patch out the canary code if we
> detect PAC at runtime, which probably needs compiler help...
>
> Then again, perhaps there's benefit in having both features enabled.
>
> So I think I agree with your question :)
>

PAC only protects the return address, which is arguably the most
vulnerable piece of data in the stack frame, but not the only one. So
one question is whether we should take the hit of protecting ourselves
from stack buffer overrun exploits that manage to overwrite something
else in the stack frame while leaving the return address alone. I
don't think doing that adds any value. Or if you are paranoid, you
could argue that the stack protector also protects against a leak of
the PAC key. And obviously, you need hardware from the future to use
PAC :-)

So while we think we should retain it for the single kernel image, I
don't see why you would enable the stack protector on, e.g., Android
images built specifically for SoCs that enable PAC in the kernel.
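
To make the "overwrite something else in the stack frame" case
concrete, a made-up sketch (nothing here is from the series):

    #include <string.h>

    static void default_cb(void) { }

    void handler(const char *in, size_t len)
    {
            void (*cb)(void) = default_cb;
            char buf[64];

            memcpy(buf, in, len);   /* len > 64 may reach cb, depending
                                       on the frame layout */
            cb();                   /* hijacked before the canary is
                                       ever checked at return, and
                                       pac-ret only signs the LR */
    }

Neither pac-ret nor the stack protector stops that one, which is rather
the point.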


* Re: [RFC v2 0/7] arm64: return address signing
  2019-05-30  8:39     ` Ard Biesheuvel
@ 2019-05-30  9:11       ` Ramana Radhakrishnan
  0 siblings, 0 replies; 41+ messages in thread
From: Ramana Radhakrishnan @ 2019-05-30  9:11 UTC (permalink / raw)
  To: Ard Biesheuvel, Will Deacon
  Cc: Mark Rutland, Kees Cook, Diogo Sampaio, Luke Cheeseman,
	Catalin Marinas, Suzuki Poulose, Kristina Martsenko,
	Amit Kachhap, nd, Dave P Martin, linux-arm-kernel

On 30/05/2019 09:39, Ard Biesheuvel wrote:
> On Thu, 30 May 2019 at 09:25, Will Deacon <will.deacon@arm.com> wrote:
>>
>> On Wed, May 29, 2019 at 08:09:23PM -0700, Kees Cook wrote:
>> > On Wed, May 29, 2019 at 08:03:25PM +0100, Kristina Martsenko wrote:
>> > > This series improves function return address protection for the arm64 kernel, by
>> > > compiling the kernel with ARMv8.3 Pointer Authentication instructions. This
>> > > should help protect the kernel against attacks using return-oriented
>> > > programming.
>> >
>> > Can you speak to whether this feature should be enabled in addition to
>> > or instead of the standard stack canary option?
>>
>> Hmm. That's a really interesting question. Given that PAC is optional in the
>> hardware and behaves as NOPs on older CPUs, I'd have thought that we'd need
>> to continue enabling stack canaries regardless. However, that then raises
>> the obvious question as to whether we could patch out the canary code if we
>> detect PAC at runtime, which probably needs compiler help...
>>
>> Then again, perhaps there's benefit in having both features enabled.
>>
>> So I think I agree with your question :)
>>
> 
> PAC only protects the return address, which is arguably the most
> vulnerable piece of data in the stack frame, but not the only one. So
> one question is whether we should take the hit of protecting ourselves
> from stack buffer overrun exploits that manage to overwrite something
> else in the stack frame while leaving the return address alone. I
> don't think doing that adds any value. Or if you are paranoid, you
> could argue that the stack protector also protects against a leak of
> the PAC key. And obviously, you need hardware from the future to use
> PAC :-)
> 
> So while we think we should retain it for the single kernel image, I
> don't see why you would enable the stack protector on, e.g., android
> images built specifically for SOCs that enable PAC in the kernel.

Return address signing using PAC only protects the return address, and
reduces the surface area for ROP-oriented attacks on hardware that has
PAC instructions.

For a single image, the answer today I think would be to have both
return address signing and stack canaries. In a few years (decades?),
when everyone has 8.3 silicon, maybe canaries can be turned off?

If someone has control over deployment of kernels:
  - Return address signing only, if you know it's on 8.3+ hardware.
  - Stack canaries only, if on pre-8.3 hardware.


Regards,
Ramana

* Re: [RFC v2 0/7] arm64: return address signing
  2019-05-30  3:09 ` [RFC v2 0/7] arm64: " Kees Cook
  2019-05-30  7:25   ` Will Deacon
@ 2019-05-30  9:12   ` Ramana Radhakrishnan
  2019-06-06 17:44     ` Kristina Martsenko
       [not found]   ` <DB7PR08MB3865C4AA36C9C465B2A687DABF180@DB7PR08MB3865.eurprd08.prod.outlook.com>
  2 siblings, 1 reply; 41+ messages in thread
From: Ramana Radhakrishnan @ 2019-05-30  9:12 UTC (permalink / raw)
  To: Kees Cook, Kristina Martsenko
  Cc: Mark Rutland, Diogo Sampaio, Ard Biesheuvel, Catalin Marinas,
	Luke Cheeseman, Will Deacon, Amit Kachhap, nd, Suzuki Poulose,
	Dave P Martin, linux-arm-kernel


>>  - more testing
> 
> Is PAC emulated in QEMU yet? (I assume I can't get real hardware to help
> test this yet...)

AFAIK yes, QEMU trunk should have this.

Thanks,
Ramana
> 
> -- 
> Kees Cook



* Re: [RFC v2 1/7] arm64: cpufeature: add pointer auth meta-capabilities
  2019-05-29 19:03 ` [RFC v2 1/7] arm64: cpufeature: add pointer auth meta-capabilities Kristina Martsenko
  2019-05-30  1:58   ` Kees Cook
@ 2019-05-30 10:50   ` Suzuki K Poulose
  2019-06-13 16:13     ` Suzuki K Poulose
  1 sibling, 1 reply; 41+ messages in thread
From: Suzuki K Poulose @ 2019-05-30 10:50 UTC (permalink / raw)
  To: kristina.martsenko, linux-arm-kernel
  Cc: mark.rutland, keescook, ard.biesheuvel, catalin.marinas,
	will.deacon, ramana.radhakrishnan, amit.kachhap, dave.martin

Hi Kristina,

On 05/29/2019 08:03 PM, Kristina Martsenko wrote:
> To enable pointer auth for the kernel, we're going to need to check for
> the presence of address auth and generic auth using alternative_if. We
> currently have two cpucaps for each, but alternative_if needs to check a
> single cpucap. So define meta-capabilities that are present when either
> of the current two capabilities is present.
> 
> Leave the existing four cpucaps in place, as they are still needed to
> check for mismatched systems where one CPU has the architected algorithm
> but another has the IMP DEF algorithm.
> 
> Note, the meta-capabilities were present before but were removed in
> commits a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth
> CPU caps from 6 to 4") and 1e013d06120c ("arm64: cpufeature: Rework ptr
> auth hwcaps using multi_entry_cap_matches"), as they were not needed
> then. Note, unlike before, the current patch checks the cpucap values
> directly, instead of reading the CPU ID register value.
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

This patch as such looks fine to me. However, do we really make use of
the individual caps for ARCH/IMPDEF support? Do we plan to do something
about them in the future? If not, we might as well remove them and keep
just the generic ones. That could be done in a separate series as a
cleanup.

Either way, for this patch:

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>


* Re: [RFC v2 3/7] arm64: cpufeature: handle conflicts based on capability
  2019-05-29 19:03 ` [RFC v2 3/7] arm64: cpufeature: handle conflicts based on capability Kristina Martsenko
  2019-05-30  2:49   ` Kees Cook
@ 2019-05-30 14:16   ` Suzuki K Poulose
  2019-05-31 14:00     ` Kristina Martsenko
  1 sibling, 1 reply; 41+ messages in thread
From: Suzuki K Poulose @ 2019-05-30 14:16 UTC (permalink / raw)
  To: kristina.martsenko, linux-arm-kernel
  Cc: mark.rutland, keescook, ard.biesheuvel, catalin.marinas,
	will.deacon, ramana.radhakrishnan, amit.kachhap, dave.martin

On 05/29/2019 08:03 PM, Kristina Martsenko wrote:
> Each system capability can be of either boot, local, or system scope,
> depending on when the state of the capability is finalized. When we
> detect a conflict on a late CPU, we either offline the CPU or panic the
> system. We currently always panic if the conflict is caused by a boot
> scope capability, and offline the CPU if the conflict is caused by a
> local or system scope capability.
> 
> We're going to want to add a new capability (for pointer authentication)
> which needs to be boot scope but doesn't need to panic the system when a
> conflict is detected. So add a new flag to specify whether the
> capability requires the system to panic or not. Current boot scope
> capabilities are updated to set the flag, so there should be no
> functional change as a result of this patch.
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>


> ---
> 
> Changes since RFC v1:
>   - New patch, to have ptrauth mismatches disable secondaries instead of
>     panicking
> 
>   arch/arm64/include/asm/cpufeature.h | 15 ++++++++++++++-
>   arch/arm64/kernel/cpufeature.c      | 23 +++++++++--------------
>   2 files changed, 23 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 0522ea674253..ad952f2e0a2b 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -217,6 +217,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>    *     In some non-typical cases either both (a) and (b), or neither,
>    *     should be permitted. This can be described by including neither
>    *     or both flags in the capability's type field.
> + *
> + *     In case of a conflict, the CPU is prevented from booting. If the
> + *     ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability,
> + *     then a kernel panic is triggered.
>    */
>   
>   
> @@ -249,6 +253,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>   #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	((u16)BIT(4))
>   /* Is it safe for a late CPU to miss this capability when system has it */
>   #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((u16)BIT(5))
> +/* Panic when a conflict is detected */
> +#define ARM64_CPUCAP_PANIC_ON_CONFLICT		((u16)BIT(6))
>   
>   /*
>    * CPU errata workarounds that need to be enabled at boot time if one or
> @@ -290,7 +296,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>    * CPU feature used early in the boot based on the boot CPU. All secondary
>    * CPUs must match the state of the capability as detected by the boot CPU.
>    */
> -#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
> +#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
> +	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)

You may want to update the comment to mention that a conflict triggers
a kernel panic, as that behavior is now controlled by the "cap" itself.

With that:

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>


* Re: [RFC v2 0/7] arm64: return address signing
       [not found]   ` <DB7PR08MB3865C4AA36C9C465B2A687DABF180@DB7PR08MB3865.eurprd08.prod.outlook.com>
@ 2019-05-30 15:57     ` Kees Cook
       [not found]       ` <DB7PR08MB3865A83066179CE419D171EDBF180@DB7PR08MB3865.eurprd08.prod.outlook.com>
  0 siblings, 1 reply; 41+ messages in thread
From: Kees Cook @ 2019-05-30 15:57 UTC (permalink / raw)
  To: Luke Cheeseman
  Cc: Mark Rutland, Diogo Sampaio, Ard Biesheuvel, Catalin Marinas,
	Luke Cheeseman, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Kachhap, Suzuki Poulose,
	Dave P Martin, linux-arm-kernel

On Thu, May 30, 2019 at 10:33:33AM +0000, Luke Cheeseman wrote:
> > Luke, is this expected to work in Clang currently?
> 
> 
> Do you mean something like the following to control signing of each function?
> 
> 
> int __attribute__ ((target ("sign-return-address=all"))) foo(void) {
>   return 42;
> }

Well, yes, though in this usage the goal is to disable it for specific
functions:

int __attribute__((target("branch-protection=none"))) early_func(void)
{
    /* set up branch protection registers */
}

> Clang doesn't currently support any function attribute to control
> function signing to this level of granularity. We haven't added it and
> don't have a plan to do so at the moment.

What's needed to accomplish this? It looks to be a blocker for getting
PAC working on Android kernels.

-Kees

> 
> 
> Thanks,
> 
> Luke
> 
> 
> From: Kees Cook <keescook@chromium.org>
> Sent: 30 May 2019 04:09:23
> To: Kristina Martsenko
> Cc: Luke Cheeseman; Diogo Sampaio; linux-arm-kernel@lists.infradead.org; Amit Kachhap; Ard Biesheuvel; Catalin Marinas; Dave P Martin; Mark Rutland; Ramana Radhakrishnan; Suzuki Poulose; Will Deacon
> Subject: Re: [RFC v2 0/7] arm64: return address signing
> 
> On Wed, May 29, 2019 at 08:03:25PM +0100, Kristina Martsenko wrote:
> > This series improves function return address protection for the arm64 kernel, by
> > compiling the kernel with ARMv8.3 Pointer Authentication instructions. This
> > should help protect the kernel against attacks using return-oriented
> > programming.
> 
> Can you speak to whether this feature should be enabled in addition to
> or instead of the standard stack canary option?
> 
> >  - The patches make use of the sign-return-address/branch-protection compiler
> >    options and function attributes. GCC supports both, but Clang/LLVM appears
> >    to only support the compiler option, not the function attribute, so with
> >    these patches (and CONFIG_ARM64_PTR_AUTH=y) an LLVM-built kernel will fail
> >    to boot on ARMv8.3 CPUs. I don't yet know why LLVM does not support it, or
> >    whether support can be added. This series may need to be rewritten to not
> >    use the attribute, and instead move the functionality to assembly, or to
> >    disable return address signing when building with LLVM.
> 
> I've added Luke Cheeseman and Diogo N. Sampaio to CC. In looking quickly
> at the LLVM support for branch-protection, I think it's just missing the
> attribute target glue needed to "notice" the attribute markings. Luke,
> is this expected to work in Clang currently? I'm not familiar yet with
> how attributes get wired up, but I think it should be quite possible.
> 
> >  - more testing
> 
> Is PAC emulated in QEMU yet? (I assume I can't get real hardware to help
> test this yet...)
> 
> --
> Kees Cook

-- 
Kees Cook


* Re: [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys
  2019-05-30  3:34   ` Kees Cook
@ 2019-05-30 16:26     ` Kristina Martsenko
  0 siblings, 0 replies; 41+ messages in thread
From: Kristina Martsenko @ 2019-05-30 16:26 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On 30/05/2019 04:34, Kees Cook wrote:
> On Wed, May 29, 2019 at 08:03:30PM +0100, Kristina Martsenko wrote:
>> Set up keys to use pointer authentication within the kernel. The kernel
>> will be compiled with APIAKey instructions, the other keys are currently
>> unused. Each task is given its own APIAKey, which is initialized during
>> fork. The key is changed during context switch and on kernel entry from
>> EL0.
>>
>> A function that changes the key cannot return, so compile such functions
>> without pointer auth (using __no_ptrauth which will be defined to a
>> compiler function attribute later).
>>
>> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> Just so I'm reading this right: the kernel is only using APIAKey?

Yes, that's right. The compiler options (in patch #7) will compile the
kernel with only APIAKey instructions, so the kernel will only use
APIAKey. We don't initialize or install the other 4 keys for the kernel
(only for userspace). Let me know if the commit message could be clearer
in some way.

> Reviewed-by: Kees Cook <keescook@chromium.org>

Thanks for all the review!

Kristina


* Re: [RFC v2 0/7] arm64: return address signing
       [not found]       ` <DB7PR08MB3865A83066179CE419D171EDBF180@DB7PR08MB3865.eurprd08.prod.outlook.com>
@ 2019-05-30 18:05         ` Kees Cook
  2019-05-31  9:22           ` Will Deacon
  0 siblings, 1 reply; 41+ messages in thread
From: Kees Cook @ 2019-05-30 18:05 UTC (permalink / raw)
  To: Luke Cheeseman
  Cc: Mark Rutland, Diogo Sampaio, Ard Biesheuvel, Catalin Marinas,
	Luke Cheeseman, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Kachhap, Kristof Beyls,
	Christof Douma, Suzuki Poulose, Dave P Martin, linux-arm-kernel

On Thu, May 30, 2019 at 04:55:08PM +0000, Luke Cheeseman wrote:
> The semantics of this attribute are straightforward enough but it
> raises some questions. One question being why would I want to turn off
> BTI (also controlled by this option) for one function in a file? Which
> gets a bit odd.

It's about keeping very early CPU startup functions in the kernel from
getting marked up (since they are running before or during the PAC setup).

> I don't know if the alternatives have been suggested but it's
> possible to achieve the result you seem to be after (a function without
> return address signing) in a couple of ways. First and foremost,
> separating the function out into its own file and compiling with
> -mbranch-protection=none. Alternatively, writing the function in assembly
> or perhaps even a naked function with inline assembly.

Fair enough. :) Thanks for the clarification. Yeah, split on compilation
unit could work. (In the future, though, having the attribute flexibility
would be nice.)

Kristina, would it be feasible to split these functions into a separate
source file? (There doesn't seem to be a need to inline them, given
they're not performance sensitive and only used once, etc?)

-- 
Kees Cook


* Re: [RFC v2 0/7] arm64: return address signing
  2019-05-30 18:05         ` Kees Cook
@ 2019-05-31  9:22           ` Will Deacon
  2019-06-02 15:43             ` Kees Cook
  0 siblings, 1 reply; 41+ messages in thread
From: Will Deacon @ 2019-05-31  9:22 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Luke Cheeseman, Diogo Sampaio, Luke Cheeseman,
	Catalin Marinas, Ard Biesheuvel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Kachhap, Kristof Beyls,
	Christof Douma, Suzuki Poulose, Dave P Martin, linux-arm-kernel

On Thu, May 30, 2019 at 11:05:15AM -0700, Kees Cook wrote:
> On Thu, May 30, 2019 at 04:55:08PM +0000, Luke Cheeseman wrote:
> > The semantics of this attribute are straightforward enough but it
> > raises some questions. One question being why would I want to turn off
> > BTI (also controlled by this option) for one function in a file? Which
> > gets a bit odd.
> 
> > It's about keeping very early CPU startup functions in the kernel from
> getting marked up (since they are running before or during the PAC setup).
> 
> > I don't know if the alternatives have been suggested but it's
> > possible to achieve the result you seem to be after (a function without
> > return address signing) in a couple of ways. First and foremost,
> > > separating the function out into its own file and compiling with
> > -mbranch-protection=none. Alternatively, writing the function in assembly
> > or perhaps even a naked function with inline assembly.
> 
> Fair enough. :) Thanks for the clarification. Yeah, split on compilation
> unit could work. (In the future, though, having the attribute flexibility
> would be nice.)
> 
> Kristina, would it be feasible to split these functions into a separate
> source file? (There doesn't seem to be a need to inline them, given
> they're not performance sensitive and only used once, etc?)

Right, and we could call it kernel.c

Sarcasm aside, please fix this in the toolchain. Moving logically unrelated
functions into one file just because the toolchain doesn't yet support this
feature just messes up the codebase and removes the incentive to get this
implemented properly. After all, you need something to do now that asm goto
is out of the way, right? ;)

Will


* Re: [RFC v2 3/7] arm64: cpufeature: handle conflicts based on capability
  2019-05-30 14:16   ` Suzuki K Poulose
@ 2019-05-31 14:00     ` Kristina Martsenko
  2019-05-31 15:08       ` Suzuki K Poulose
  0 siblings, 1 reply; 41+ messages in thread
From: Kristina Martsenko @ 2019-05-31 14:00 UTC (permalink / raw)
  To: Suzuki K Poulose, linux-arm-kernel
  Cc: mark.rutland, keescook, ard.biesheuvel, catalin.marinas,
	will.deacon, ramana.radhakrishnan, amit.kachhap, dave.martin

On 30/05/2019 15:16, Suzuki K Poulose wrote:
> On 05/29/2019 08:03 PM, Kristina Martsenko wrote:
>> Each system capability can be of either boot, local, or system scope,
>> depending on when the state of the capability is finalized. When we
>> detect a conflict on a late CPU, we either offline the CPU or panic the
>> system. We currently always panic if the conflict is caused by a boot
>> scope capability, and offline the CPU if the conflict is caused by a
>> local or system scope capability.
>>
>> We're going to want to add a new capability (for pointer authentication)
>> which needs to be boot scope but doesn't need to panic the system when a
>> conflict is detected. So add a new flag to specify whether the
>> capability requires the system to panic or not. Current boot scope
>> capabilities are updated to set the flag, so there should be no
>> functional change as a result of this patch.
>>
>> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> 
>> ---
>>
>> Changes since RFC v1:
>>   - New patch, to have ptrauth mismatches disable secondaries instead of
>>     panicking
>>
>>   arch/arm64/include/asm/cpufeature.h | 15 ++++++++++++++-
>>   arch/arm64/kernel/cpufeature.c      | 23 +++++++++--------------
>>   2 files changed, 23 insertions(+), 15 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 0522ea674253..ad952f2e0a2b 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -217,6 +217,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>>    *     In some non-typical cases either both (a) and (b), or neither,
>>    *     should be permitted. This can be described by including neither
>>    *     or both flags in the capability's type field.
>> + *
>> + *     In case of a conflict, the CPU is prevented from booting. If the
>> + *     ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability,
>> + *     then a kernel panic is triggered.
>>    */
>>     @@ -249,6 +253,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>>   #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU    ((u16)BIT(4))
>>   /* Is it safe for a late CPU to miss this capability when system has it */
>>   #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU    ((u16)BIT(5))
>> +/* Panic when a conflict is detected */
>> +#define ARM64_CPUCAP_PANIC_ON_CONFLICT        ((u16)BIT(6))
>>     /*
>>    * CPU errata workarounds that need to be enabled at boot time if one or
>> @@ -290,7 +296,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>>    * CPU feature used early in the boot based on the boot CPU. All secondary
>>    * CPUs must match the state of the capability as detected by the boot CPU.
>>    */
>> -#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
>> +#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE        \
>> +    (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
> 
> You may want to update the comment to mention that a conflict triggers
> a kernel panic, as that behavior is now controlled by the "cap" itself.

Do you mean to update the comment above ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE?
To something like the following?

/*
 * CPU feature used early in the boot based on the boot CPU. All secondary
 * CPUs must match the state of the capability as detected by the boot CPU. In
 * case of a conflict, a kernel panic is triggered.
 */

> 
> With that:
> 
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

Thanks!

Kristina


* Re: [RFC v2 3/7] arm64: cpufeature: handle conflicts based on capability
  2019-05-31 14:00     ` Kristina Martsenko
@ 2019-05-31 15:08       ` Suzuki K Poulose
  0 siblings, 0 replies; 41+ messages in thread
From: Suzuki K Poulose @ 2019-05-31 15:08 UTC (permalink / raw)
  To: kristina.martsenko, linux-arm-kernel
  Cc: mark.rutland, keescook, ard.biesheuvel, catalin.marinas,
	will.deacon, ramana.radhakrishnan, Amit.Kachhap, dave.martin



On 31/05/2019 15:00, Kristina Martsenko wrote:
> On 30/05/2019 15:16, Suzuki K Poulose wrote:
>> On 05/29/2019 08:03 PM, Kristina Martsenko wrote:
>>> Each system capability can be of either boot, local, or system scope,
>>> depending on when the state of the capability is finalized. When we
>>> detect a conflict on a late CPU, we either offline the CPU or panic the
>>> system. We currently always panic if the conflict is caused by a boot
>>> scope capability, and offline the CPU if the conflict is caused by a
>>> local or system scope capability.
>>>
>>> We're going to want to add a new capability (for pointer authentication)
>>> which needs to be boot scope but doesn't need to panic the system when a
>>> conflict is detected. So add a new flag to specify whether the
>>> capability requires the system to panic or not. Current boot scope
>>> capabilities are updated to set the flag, so there should be no
>>> functional change as a result of this patch.
>>>
>>> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

>>>      @@ -249,6 +253,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>>>    #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU    ((u16)BIT(4))
>>>    /* Is it safe for a late CPU to miss this capability when system has it */
>>>    #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU    ((u16)BIT(5))
>>> +/* Panic when a conflict is detected */
>>> +#define ARM64_CPUCAP_PANIC_ON_CONFLICT        ((u16)BIT(6))
>>>      /*
>>>     * CPU errata workarounds that need to be enabled at boot time if one or
>>> @@ -290,7 +296,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>>>     * CPU feature used early in the boot based on the boot CPU. All secondary
>>>     * CPUs must match the state of the capability as detected by the boot CPU.
>>>     */
>>> -#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
>>> +#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE        \
>>> +    (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
>>
>> You may want to update the comment to mention that a conflict triggers
>> a kernel panic, as that behavior is now controlled by the "cap" itself.
> 
> Do you mean to update the comment above ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE?
> To something like the following?
> 
> /*
>   * CPU feature used early in the boot based on the boot CPU. All secondary
>   * CPUs must match the state of the capability as detected by the boot CPU. In
>   * case of a conflict, a kernel panic is triggered.
>   */
> 

Yes. That looks good.
Earlier it was up to the caller to decide what action to take when there
was a conflict with boot features. But with this change we panic for
STRICT_BOOT features.
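
Concretely, the conflict path can now key off the flag itself; a minimal
sketch (the helper name is an assumption, not necessarily what the patch
uses):

/* sketch: decide between panicking and offlining the late CPU */
static bool cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
{
	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
}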

Suzuki


* Re: [RFC v2 0/7] arm64: return address signing
  2019-05-31  9:22           ` Will Deacon
@ 2019-06-02 15:43             ` Kees Cook
  2019-06-03 10:40               ` Will Deacon
  0 siblings, 1 reply; 41+ messages in thread
From: Kees Cook @ 2019-06-02 15:43 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Luke Cheeseman, Diogo Sampaio, Luke Cheeseman,
	Catalin Marinas, Ard Biesheuvel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Kachhap, Kristof Beyls,
	Christof Douma, Suzuki Poulose, Dave P Martin, linux-arm-kernel

On Fri, May 31, 2019 at 10:22:02AM +0100, Will Deacon wrote:
> On Thu, May 30, 2019 at 11:05:15AM -0700, Kees Cook wrote:
> > On Thu, May 30, 2019 at 04:55:08PM +0000, Luke Cheeseman wrote:
> > > The semantics of this attribute are straightforward enough but it
> > > raises some questions. One question being why would I want to turn off
> > > BTI (also controlled by this option) for one function in a file? Which
> > > gets a bit odd.
> > 
> > It's about keeping very early CPU startup functions in the kernel from
> > getting marked up (since they are running before or during the PAC setup).
> > 
> > > I don't know if the alternatives have been suggested but it's
> > > possible to achieve the result you seem to be after (a function without
> > > return address signing) in a couple of ways. First and foremost,
> > > separating the function out into its own file and compiling with
> > > -mbranch-protection=none. Alternatively, writing the function in assembly
> > > or perhaps even a naked function with inline assembly.
> > 
> > Fair enough. :) Thanks for the clarification. Yeah, split on compilation
> > unit could work. (In the future, though, having the attribute flexibility
> > would be nice.)
> > 
> > Kristina, would it be feasible to split these functions into a separate
> > source file? (There doesn't seem to be a need to inline them, given
> > they're not performance sensitive and only used once, etc?)
> 
> Right, and we could call it kernel.c
> 
> Sarcasm aside, please fix this in the toolchain. Moving logically unrelated
> functions into one file just because the toolchain doesn't yet support this
> feature just messes up the codebase and removes the incentive to get this
> implemented properly. After all, you need something to do now that asm goto
> is out of the way, right? ;)

LLVM tracking bug created...
https://bugs.llvm.org/show_bug.cgi?id=42095

-- 
Kees Cook


* Re: [RFC v2 0/7] arm64: return address signing
  2019-06-02 15:43             ` Kees Cook
@ 2019-06-03 10:40               ` Will Deacon
  2019-06-04 13:52                 ` Luke Cheeseman
  0 siblings, 1 reply; 41+ messages in thread
From: Will Deacon @ 2019-06-03 10:40 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Luke Cheeseman, Diogo Sampaio, Luke Cheeseman,
	Catalin Marinas, Ard Biesheuvel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Kachhap, Kristof Beyls,
	Christof Douma, Suzuki Poulose, Dave P Martin, linux-arm-kernel

On Sun, Jun 02, 2019 at 08:43:55AM -0700, Kees Cook wrote:
> On Fri, May 31, 2019 at 10:22:02AM +0100, Will Deacon wrote:
> > On Thu, May 30, 2019 at 11:05:15AM -0700, Kees Cook wrote:
> > > On Thu, May 30, 2019 at 04:55:08PM +0000, Luke Cheeseman wrote:
> > > > The semantics of this attribute are straightforward enough but it
> > > > raises some questions. One question being why would I want to turn off
> > > > BTI (also controlled by this option) for one function in a file? Which
> > > > gets a bit odd.
> > > 
> > > It's about keeping very early CPU startup functions in the kernel from
> > > getting marked up (since they are running before or during the PAC setup).
> > > 
> > > > I don't know if the alternatives have been suggested but it's
> > > > possible to achieve the result you seem to be after (a function without
> > > > return address signing) in a couple of ways. First and foremost,
> > > > separating the function out into its own file and compiling with
> > > > -mbranch-protection=none. Alternatively, writing the function in assembly
> > > > or perhaps even a naked function with inline assembly.
> > > 
> > > Fair enough. :) Thanks for the clarification. Yeah, split on compilation
> > > unit could work. (In the future, though, having the attribute flexibility
> > > would be nice.)
> > > 
> > > Kristina, would it be feasible to split these functions into a separate
> > > source file? (There doesn't seem to be a need to inline them, given
> > > they're not performance sensitive and only used once, etc?)
> > 
> > Right, and we could call it kernel.c
> > 
> > Sarcasm aside, please fix this in the toolchain. Moving logically unrelated
> > functions into one file just because the toolchain doesn't yet support this
> > feature just messes up the codebase and removes the incentive to get this
> > implemented properly. After all, you need something to do now that asm goto
> > is out of the way, right? ;)
> 
> LLVM tracking bug created...
> https://bugs.llvm.org/show_bug.cgi?id=42095

Thanks, Kees!

Will


* Re: [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys
  2019-05-29 19:03 ` [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys Kristina Martsenko
  2019-05-30  3:34   ` Kees Cook
@ 2019-06-04 10:03   ` Dave Martin
  2019-06-06 16:44   ` Catalin Marinas
  2 siblings, 0 replies; 41+ messages in thread
From: Dave Martin @ 2019-06-04 10:03 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:30PM +0100, Kristina Martsenko wrote:
> Set up keys to use pointer authentication within the kernel. The kernel
> will be compiled with APIAKey instructions, the other keys are currently
> unused. Each task is given its own APIAKey, which is initialized during
> fork. The key is changed during context switch and on kernel entry from
> EL0.
> 
> A function that changes the key cannot return, so compile such functions
> without pointer auth (using __no_ptrauth which will be defined to a
> compiler function attribute later).
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> ---

[...]

> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index a97b7dc10bdb..79f35f5ecff5 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -25,10 +25,6 @@ struct ptrauth_key {
>  	unsigned long lo, hi;
>  };
>  
> -/*
> - * We give each process its own keys, which are shared by all threads. The keys
> - * are inherited upon fork(), and reinitialised upon exec*().
> - */
>  struct ptrauth_keys {
>  	struct ptrauth_key apia;
>  	struct ptrauth_key apib;
> @@ -37,16 +33,18 @@ struct ptrauth_keys {
>  	struct ptrauth_key apga;
>  };

Do we need all this struct for the kernel thread case?

It may be a bit pointless trying to optimise this, though.

[...]

Cheers
---Dave


* Re: [RFC v2 0/7] arm64: return address signing
  2019-06-03 10:40               ` Will Deacon
@ 2019-06-04 13:52                 ` Luke Cheeseman
  2019-06-06 17:43                   ` Kristina Martsenko
  0 siblings, 1 reply; 41+ messages in thread
From: Luke Cheeseman @ 2019-06-04 13:52 UTC (permalink / raw)
  To: Will Deacon, Kees Cook
  Cc: Mark Rutland, Diogo Sampaio, Ard Biesheuvel, Catalin Marinas,
	Luke Cheeseman, Kristina Martsenko, Ramana Radhakrishnan,
	Amit Kachhap, Kristof Beyls, Christof Douma, Suzuki Poulose,
	Dave P Martin, linux-arm-kernel

Hi,

It was suggested to me that forcing the compiler to inline the function may be another way to avoid moving the function into a separate file, while still avoiding the concerns around switching keys inside a function. For example:

static inline void __attribute__((always_inline)) switch_keys(int key) {
  // switch to the given key; always inlined into the caller
}

int main(void) {
  switch_keys(42);
  return 0;
}


switch_keys is always inlined, so you don't get the pac/aut pair. Is this something that would be useful?

For the feature request to disable branch protection (https://bugs.llvm.org/show_bug.cgi?id=42095), is there a time frame within which you need this?

Thanks,
Luke


From: Will Deacon <will.deacon@arm.com>
Sent: 03 June 2019 11:40
To: Kees Cook
Cc: Luke Cheeseman; Kristina Martsenko; Luke Cheeseman; Diogo Sampaio; linux-arm-kernel@lists.infradead.org; Amit Kachhap; Ard Biesheuvel; Catalin Marinas; Dave P Martin; Mark Rutland; Ramana Radhakrishnan; Suzuki Poulose; Kristof Beyls; Christof Douma
Subject: Re: [RFC v2 0/7] arm64: return address signing


On Sun, Jun 02, 2019 at 08:43:55AM -0700, Kees Cook wrote:
> On Fri, May 31, 2019 at 10:22:02AM +0100, Will Deacon wrote:
> > On Thu, May 30, 2019 at 11:05:15AM -0700, Kees Cook wrote:
> > > On Thu, May 30, 2019 at 04:55:08PM +0000, Luke Cheeseman wrote:
> > > > The semantics of this attribute are straightforward enough but it
> > > > raises some questions. One question being why would I want to turn off
> > > > BTI (also controlled by this option) for one function in a file? Which
> > > > gets a bit odd.
> > >
> > > It's about keeping very early CPU startup functions in the kernel from
> > > getting marked up (since they are running before or during the PAC setup).
> > >
> > > > I don't know if the alternatives have been suggested but it's
> > > > possible to achieve the result you seem to be after (a function without
> > > > return address signing) in a couple of ways. First and foremost,
> > > > separating the function out into its own file and compiling with
> > > > -mbranch-protection=none. Alternatively, writing the function in assembly
> > > > or perhaps even a naked function with inline assembly.
> > >
> > > Fair enough. :) Thanks for the clarification. Yeah, split on compilation
> > > unit could work. (In the future, though, having the attribute flexibility
> > > would be nice.)
> > >
> > > Kristina, would it be feasible to split these functions into a separate
> > > source file? (There doesn't seem to be a need to inline them, given
> > > they're not performance sensitive and only used once, etc?)
> >
> > Right, and we could call it kernel.c
> >
> > Sarcasm aside, please fix this in the toolchain. Moving logically unrelated
> > functions into one file just because the toolchain doesn't yet support this
> > feature just messes up the codebase and removes the incentive to get this
> > implemented properly. After all, you need something to do now that asm goto
> > is out of the way, right? ;)
>
> LLVM tracking bug created...
> https://bugs.llvm.org/show_bug.cgi?id=42095

Thanks, Kees!

Will



* Re: [RFC v2 2/7] arm64: install user ptrauth keys at kernel exit time
  2019-05-29 19:03 ` [RFC v2 2/7] arm64: install user ptrauth keys at kernel exit time Kristina Martsenko
  2019-05-30  2:04   ` Kees Cook
@ 2019-06-06 16:26   ` Catalin Marinas
  1 sibling, 0 replies; 41+ messages in thread
From: Catalin Marinas @ 2019-06-06 16:26 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:27PM +0100, Kristina Martsenko wrote:
> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> new file mode 100644
> index 000000000000..e3bfddfe80b6
> --- /dev/null
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -0,0 +1,43 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_ASM_POINTER_AUTH_H
> +#define __ASM_ASM_POINTER_AUTH_H
> +
> +#include <asm/alternative.h>
> +#include <asm/asm-offsets.h>
> +#include <asm/cpufeature.h>
> +#include <asm/sysreg.h>
> +
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +
> +	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
> +	mov	\tmp1, #THREAD_KEYS_USER
> +	add	\tmp1, \tsk, \tmp1
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
> +	msr_s	SYS_APIAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APIAKEYHI_EL1, \tmp3
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
> +	msr_s	SYS_APIBKEYLO_EL1, \tmp2
> +	msr_s	SYS_APIBKEYHI_EL1, \tmp3
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
> +	msr_s	SYS_APDAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APDAKEYHI_EL1, \tmp3
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
> +	msr_s	SYS_APDBKEYLO_EL1, \tmp2
> +	msr_s	SYS_APDBKEYHI_EL1, \tmp3
> +alternative_else_nop_endif

I wonder whether it's better to have a branch here over this sequence
(using alternative_if_not) than to execute 12 nops.

> +alternative_if ARM64_HAS_GENERIC_AUTH
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
> +	msr_s	SYS_APGAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APGAKEYHI_EL1, \tmp3
> +alternative_else_nop_endif

That's small enough with 3 nops.

-- 
Catalin


* Re: [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys
  2019-05-29 19:03 ` [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys Kristina Martsenko
  2019-05-30  3:34   ` Kees Cook
  2019-06-04 10:03   ` Dave Martin
@ 2019-06-06 16:44   ` Catalin Marinas
  2019-06-12 16:21     ` Kristina Martsenko
  2 siblings, 1 reply; 41+ messages in thread
From: Catalin Marinas @ 2019-06-06 16:44 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On Wed, May 29, 2019 at 08:03:30PM +0100, Kristina Martsenko wrote:

>  - Added ISB after key install in kernel_exit, in case in the future C function
>    calls are added after the macro
[...]
> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> index e3bfddfe80b6..f595da9661a4 100644
> --- a/arch/arm64/include/asm/asm_pointer_auth.h
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -25,11 +25,24 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
>  	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
>  	msr_s	SYS_APDBKEYLO_EL1, \tmp2
>  	msr_s	SYS_APDBKEYHI_EL1, \tmp3
> +	isb
>  alternative_else_nop_endif
>  alternative_if ARM64_HAS_GENERIC_AUTH
>  	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
>  	msr_s	SYS_APGAKEYLO_EL1, \tmp2
>  	msr_s	SYS_APGAKEYHI_EL1, \tmp3
> +	isb
> +alternative_else_nop_endif

I couldn't find the previous discussions, so why are the ISBs needed
here? Is this macro not invoked only on the kernel_exit path?

-- 
Catalin


* Re: [RFC v2 0/7] arm64: return address signing
  2019-06-04 13:52                 ` Luke Cheeseman
@ 2019-06-06 17:43                   ` Kristina Martsenko
  0 siblings, 0 replies; 41+ messages in thread
From: Kristina Martsenko @ 2019-06-06 17:43 UTC (permalink / raw)
  To: Luke Cheeseman, Will Deacon, Kees Cook
  Cc: Mark Rutland, Diogo Sampaio, Ard Biesheuvel, Catalin Marinas,
	Luke Cheeseman, Christof Douma, Ramana Radhakrishnan,
	Amit Kachhap, Kristof Beyls, Suzuki Poulose, Dave P Martin,
	linux-arm-kernel

On 04/06/2019 14:52, Luke Cheeseman wrote:
> Hi,
> 
> It was suggested to me that forcing the compiler to inline the function may be another way to avoid moving the function into a separate file, while still avoiding the concerns around switching keys inside a function. For example:
> 
> static inline void __attribute__((always_inline)) switch_keys(int key) {
>   // switch to the given key; always inlined into the caller
> }
> 
> int main(void) {
>   switch_keys(42);
>   return 0;
> }
> 
> switch_keys is always inlined, so you don't get the pac/aut pair. Is this something that would be useful?

This is useful in some cases, but not when the function and its caller
are in different compilation units. Unfortunately we have cases where
arm64-specific code (that sets up the keys) is being called from core
kernel code, which is in a different file. For this case we'd still need
the attribute to disable return address signing.
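
For illustration, the attribute-based approach we need would look
roughly like this (a sketch; GCC accepts the attribute as discussed
above, but the exact macro definition and function name here are
assumptions):

/* sketch: disable return address signing for one function (GCC only) */
#define __no_ptrauth	__attribute__((target("branch-protection=none")))

/* may be called from core kernel code in another compilation unit */
void __no_ptrauth ptrauth_keys_switch_kernel(struct ptrauth_keys *keys);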

Thanks,
Kristina

> 
> For the feature request to disable branch protection (https://bugs.llvm.org/show_bug.cgi?id=42095), is there a time frame within which you need this?
> 
> Thanks,
> Luke
> 
> 
> From: Will Deacon <will.deacon@arm.com>
> Sent: 03 June 2019 11:40
> To: Kees Cook
> Cc: Luke Cheeseman; Kristina Martsenko; Luke Cheeseman; Diogo Sampaio; linux-arm-kernel@lists.infradead.org; Amit Kachhap; Ard Biesheuvel; Catalin Marinas; Dave P Martin; Mark Rutland; Ramana Radhakrishnan; Suzuki Poulose; Kristof Beyls; Christof Douma
> Subject: Re: [RFC v2 0/7] arm64: return address signing
>   
> 
> On Sun, Jun 02, 2019 at 08:43:55AM -0700, Kees Cook wrote:
>> On Fri, May 31, 2019 at 10:22:02AM +0100, Will Deacon wrote:
>>> On Thu, May 30, 2019 at 11:05:15AM -0700, Kees Cook wrote:
>>>> On Thu, May 30, 2019 at 04:55:08PM +0000, Luke Cheeseman wrote:
>>>>> The semantics of this attribute are straightforward enough but it
>>>>> raises some questions. One question being why would I want to turn off
>>>>> BTI (also controlled by this option) for one function in a file? Which
>>>>> gets a bit odd.
>>>>
>>>> It's about keeping very early CPU startup functions in the kernel from
>>>> getting marked up (since they are running before or during the PAC setup).
>>>>
>>>>> I don't know if the alternatives have been suggested but it's
>>>>> possible to achieve the result you seem to be after (a function without
>>>>> return address signing) in a couple of ways. First and foremost,
>>>>> separating the function out into its own file and compiling with
>>>>> -mbranch-protection=none. Alternatively, writing the function in assembly
>>>>> or perhaps even a naked function with inline assembly.
>>>>
>>>> Fair enough. :) Thanks for the clarification. Yeah, split on compilation
>>>> unit could work. (In the future, though, having the attribute flexibility
>>>> would be nice.)
>>>>
>>>> Kristina, would it be feasible to split these functions into a separate
>>>> source file? (There doesn't seem to be a need to inline them, given
>>>> they're not performance sensitive and only used once, etc?)
>>>
>>> Right, and we could call it kernel.c
>>>
>>> Sarcasm aside, please fix this in the toolchain. Moving logically unrelated
>>> functions into one file just because the toolchain doesn't yet support this
>>> feature just messes up the codebase and removes the incentive to get this
>>> implemented properly. After all, you need something to do now that asm goto
>>> is out of the way, right? ;)
>>
>> LLVM tracking bug created...
>> https://bugs.llvm.org/show_bug.cgi?id=42095
> 
> Thanks, Kees!
> 
> Will
>     
> 



* Re: [RFC v2 0/7] arm64: return address signing
  2019-05-30  9:12   ` Ramana Radhakrishnan
@ 2019-06-06 17:44     ` Kristina Martsenko
  2019-06-08  4:09       ` Kees Cook
  0 siblings, 1 reply; 41+ messages in thread
From: Kristina Martsenko @ 2019-06-06 17:44 UTC (permalink / raw)
  To: Ramana Radhakrishnan, Kees Cook
  Cc: Mark Rutland, Diogo Sampaio, Ard Biesheuvel, Catalin Marinas,
	Luke Cheeseman, Will Deacon, Amit Kachhap, nd, Suzuki Poulose,
	Dave P Martin, linux-arm-kernel

On 30/05/2019 10:12, Ramana Radhakrishnan wrote:
> 
>>>   - more testing
>>
>> Is PAC emulated in QEMU yet? (I assume I can't get real hardware to help
>> test this yet...)
> 
> AFAIK, yes qemu trunk should have this.

I've been testing on the ARM FastModels, but I tried out QEMU 4.0.0 and
it seems to support PAC with the "-cpu max" option.

Thanks,
Kristina


* Re: [RFC v2 0/7] arm64: return address signing
  2019-06-06 17:44     ` Kristina Martsenko
@ 2019-06-08  4:09       ` Kees Cook
  0 siblings, 0 replies; 41+ messages in thread
From: Kees Cook @ 2019-06-08  4:09 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Diogo Sampaio, Luke Cheeseman, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
	nd, Suzuki Poulose, Dave P Martin, linux-arm-kernel

On Thu, Jun 06, 2019 at 06:44:41PM +0100, Kristina Martsenko wrote:
> On 30/05/2019 10:12, Ramana Radhakrishnan wrote:
> > 
> >>>   - more testing
> >>
> >> Is PAC emulated in QEMU yet? (I assume I can't get real hardware to help
> >> test this yet...)
> > 
> > AFAIK, yes qemu trunk should have this.
> 
> I've been testing on the ARM FastModels, but I tried out QEMU 4.0.0 and
> it seems to support PAC with the "-cpu max" option.

Ah-ha! I wasn't seeing it mentioned in dmesg (it should appear along
with PAN, etc, yes?) but I guess I need a newer QEMU:

$ qemu-system-aarch64 --version
QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-2ubuntu3.1)

I will go build QEMU myself! :)

-- 
Kees Cook


* Re: [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys
  2019-06-06 16:44   ` Catalin Marinas
@ 2019-06-12 16:21     ` Kristina Martsenko
  2019-06-13 10:44       ` Catalin Marinas
  0 siblings, 1 reply; 41+ messages in thread
From: Kristina Martsenko @ 2019-06-12 16:21 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

On 06/06/2019 17:44, Catalin Marinas wrote:
> On Wed, May 29, 2019 at 08:03:30PM +0100, Kristina Martsenko wrote:
> 
>>  - Added ISB after key install in kernel_exit, in case in the future C function
>>    calls are added after the macro
> [...]
>> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
>> index e3bfddfe80b6..f595da9661a4 100644
>> --- a/arch/arm64/include/asm/asm_pointer_auth.h
>> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
>> @@ -25,11 +25,24 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
>>  	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
>>  	msr_s	SYS_APDBKEYLO_EL1, \tmp2
>>  	msr_s	SYS_APDBKEYHI_EL1, \tmp3
>> +	isb
>>  alternative_else_nop_endif
>>  alternative_if ARM64_HAS_GENERIC_AUTH
>>  	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
>>  	msr_s	SYS_APGAKEYLO_EL1, \tmp2
>>  	msr_s	SYS_APGAKEYHI_EL1, \tmp3
>> +	isb
>> +alternative_else_nop_endif
> 
> I couldn't find the previous discussions, so why are the ISBs needed
> here? Is this macro not invoked only on the kernel_exit path?

It is invoked only in kernel_exit. There weren't any previous
discussions, I just started thinking that in the future someone could
add a call to a C function in kernel_exit after this macro (there are a
few C calls in kernel_exit already). If the function is compiled with
ptrauth instructions, and the above key system register writes take
effect in the middle of the function, then authentication would fail,
because we would enter the function with one key but exit with another.

This is probably overly cautious, so I'd be happy to remove the ISBs if
you prefer. Could also add a comment in kernel_exit. Also, thinking
about it now, we only use APIA key in the kernel, so the ISB after APGA
key is actually unnecessary, so I'll go ahead and remove that one.
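
To illustrate the failure mode (pseudo-code only; the helper name is
hypothetical, and the compiler-emitted instructions are shown as
comments):

void some_c_function(void)	/* compiled with ptrauth */
{
	/* compiler emits: paciasp -- LR signed with the old key */

	install_new_keys();	/* hypothetical helper: keys change here */

	/* compiler emits: autiasp -- LR is now checked against the new
	 * key, so authentication fails and we fault on return */
}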

Thanks,
Kristina


* Re: [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys
  2019-06-12 16:21     ` Kristina Martsenko
@ 2019-06-13 10:44       ` Catalin Marinas
  0 siblings, 0 replies; 41+ messages in thread
From: Catalin Marinas @ 2019-06-13 10:44 UTC (permalink / raw)
  To: Kristina Martsenko
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Suzuki K Poulose,
	Will Deacon, Ramana Radhakrishnan, Amit Kachhap, Dave Martin,
	linux-arm-kernel

Hi Kristina,

On Wed, Jun 12, 2019 at 05:21:20PM +0100, Kristina Martsenko wrote:
> On 06/06/2019 17:44, Catalin Marinas wrote:
> > On Wed, May 29, 2019 at 08:03:30PM +0100, Kristina Martsenko wrote:
> > 
> >>  - Added ISB after key install in kernel_exit, in case in the future C function
> >>    calls are added after the macro
> > [...]
> >> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> >> index e3bfddfe80b6..f595da9661a4 100644
> >> --- a/arch/arm64/include/asm/asm_pointer_auth.h
> >> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> >> @@ -25,11 +25,24 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
> >>  	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
> >>  	msr_s	SYS_APDBKEYLO_EL1, \tmp2
> >>  	msr_s	SYS_APDBKEYHI_EL1, \tmp3
> >> +	isb
> >>  alternative_else_nop_endif
> >>  alternative_if ARM64_HAS_GENERIC_AUTH
> >>  	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
> >>  	msr_s	SYS_APGAKEYLO_EL1, \tmp2
> >>  	msr_s	SYS_APGAKEYHI_EL1, \tmp3
> >> +	isb
> >> +alternative_else_nop_endif
> > 
> > I couldn't find the previous discussions, so why are the ISBs needed
> > here? Is this macro not invoked only on the kernel_exit path?
> 
> It is invoked only in kernel_exit. There weren't any previous
> discussions, I just started thinking that in the future someone could
> add a call to a C function in kernel_exit after this macro (there are a
> few C calls in kernel_exit already). If the function is compiled with
> ptrauth instructions, and the above key system register writes take
> effect in the middle of the function, then authentication would fail,
> because we would enter the function with one key but exit with another.

You are right if there are subsequent function calls after this, but I
don't think we should allow them. For example, at that point you have
already restored the keys to the user ones; do we want to run further
kernel code with the user keys?

> This is probably overly cautious, so I'd be happy to remove the ISBs if
> you prefer. Could also add a comment in kernel_exit. Also, thinking
> about it now, we only use APIA key in the kernel, so the ISB after APGA
> key is actually unnecessary, so I'll go ahead and remove that one.

I think it's better if we just add a comment in kernel_exit(),
something along the lines of "no function calls after this point". IIRC,
we only have the ssbd macro after this, followed by register restoring
(you could as well move the ptrauth key restoration after the apply_ssbd
one).

-- 
Catalin


* Re: [RFC v2 4/7] arm64: enable ptrauth earlier
  2019-05-29 19:03 ` [RFC v2 4/7] arm64: enable ptrauth earlier Kristina Martsenko
  2019-05-30  3:11   ` Kees Cook
@ 2019-06-13 15:41   ` Suzuki K Poulose
  1 sibling, 0 replies; 41+ messages in thread
From: Suzuki K Poulose @ 2019-06-13 15:41 UTC (permalink / raw)
  To: kristina.martsenko, linux-arm-kernel
  Cc: mark.rutland, keescook, ard.biesheuvel, catalin.marinas,
	will.deacon, ramana.radhakrishnan, Amit.Kachhap, dave.martin



On 29/05/2019 20:03, Kristina Martsenko wrote:
> When the kernel is compiled with pointer auth instructions, the boot CPU
> needs to start using address auth very early, so change the cpucap to
> account for this.
> 
> A function that enables pointer auth cannot return, so compile such
> functions without pointer auth (using a compiler function attribute).
> The __no_ptrauth macro will be defined to the required function
> attribute in a later patch.
> 
> Do not use the cpu_enable callback, to avoid compiling the whole
> callchain down to cpu_enable without pointer auth.
> 
> Note the change in behavior: if the boot CPU has address auth and a late
> CPU does not, then we offline the late CPU. Until now we would have just
> disabled address auth in this case.
> 
> Leave generic authentication as a "system scope" cpucap for now, since
> initially the kernel will only use address authentication.
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> ---
> 
> Changes since RFC v1:
>   - Enable instructions for all 5 keys
>   - Replaced __always_inline with __no_ptrauth as it turns out __always_inline
>     is only a hint (and could therefore result in crashes)
>   - Left the __no_ptrauth definition blank for now as it needs to be determined
>     with more complex logic in a later patch

This patch looks fine to me as such. But if you switch to doing the
SCTLR bits in assembly, managed by alternatives, you may not need
the attribute support for C functions.

You may be able to enable ptr-auth for all the CPUs early on in
assembly, and also park the CPUs which don't have the capability via
alternative code patching that gets applied just after the boot CPU is
initialized (via apply_boot_alternatives()).


Rough version, e.g.:

+
+/*
+ * ptr_auth_setup(bool primary)
+ *	x0 = 1 for primary, 0 for secondary
+ */
+ptr_auth_setup:
+#ifdef CONFIG_ARM64_PTR_AUTH
+	/* check ptr auth support */
+	mrs	x1, id_aa64isar1_el1
+	ubfx	x1, x1, 4, 8
+	cbz	x1, 1f
+
+	/* x2 = system_has_ptr_auth() */
+alternative_if ARM64_HAS_PTR_AUTH
+	mov	x2, 1
+alternative_else
+	mov	x2, 0
+alternative_endif
+	orr	x0, x2, x0		// primary || system_has_ptr_auth()
+	cbz	x0, 2f
+	mrs	x1, sctlr_el1
+	/* set the PTR_AUTH bits: orr	x1, x1, #(SCTLR_ELx_AUTH_BITS) */
+	msr	sctlr_el1, x1
+	b	2f
+1:	/* No pointer authentication support */
+alternative_if ARM64_HAS_PTR_AUTH
+	b	3f
+alternative_else_nop_endif
+2:
+#endif
+	ret
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+3:	/* Park the secondary CPU after updating the boot status */
+	update_early_cpu_boot_status \
+	   CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_PTR_AUTH, x0, x1
+	b	__cpu_park_early
+#endif
+ENDPROC(ptr_auth_setup)
+


Cheers
Suzuki


* Re: [RFC v2 1/7] arm64: cpufeature: add pointer auth meta-capabilities
  2019-05-30 10:50   ` Suzuki K Poulose
@ 2019-06-13 16:13     ` Suzuki K Poulose
  0 siblings, 0 replies; 41+ messages in thread
From: Suzuki K Poulose @ 2019-06-13 16:13 UTC (permalink / raw)
  To: kristina.martsenko, linux-arm-kernel
  Cc: mark.rutland, keescook, ard.biesheuvel, catalin.marinas,
	will.deacon, ramana.radhakrishnan, Amit.Kachhap, dave.martin



On 30/05/2019 11:50, Suzuki K Poulose wrote:
> Hi Kristina,
> 
> On 05/29/2019 08:03 PM, Kristina Martsenko wrote:
>> To enable pointer auth for the kernel, we're going to need to check for
>> the presence of address auth and generic auth using alternative_if. We
>> currently have two cpucaps for each, but alternative_if needs to check a
>> single cpucap. So define meta-capabilities that are present when either
>> of the current two capabilities is present.
>>
>> Leave the existing four cpucaps in place, as they are still needed to
>> check for mismatched systems where one CPU has the architected algorithm
>> but another has the IMP DEF algorithm.
>>
>> Note, the meta-capabilities were present before but were removed in
>> commits a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth
>> CPU caps from 6 to 4") and 1e013d06120c ("arm64: cpufeature: Rework ptr
>> auth hwcaps using multi_entry_cap_matches"), as they were not needed
>> then. Note, unlike before, the current patch checks the cpucap values
>> directly, instead of reading the CPU ID register value.
>>
>> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> This patch as such looks fine to me. However, do we really make use of
> the individual caps for ARCH/IMPDEF support ? Do we plan to do something
> about them in the future ? If not we could as well remove them and have
> the generic ones in place. That may be done in a separate series as a
> cleanup.

I take that back. I think both are needed to make sure the secondaries
match what the boot CPU initially reported. With a generic cap, we don't
have a reliable way to verify against the boot CPU. Sorry for the noise.
This makes sense when we make it a boot CPU feature.

> 
> Either way, for this patch:
> 
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> 

This still holds.

Cheers
Suzuki


Thread overview: 41+ messages
2019-05-29 19:03 [RFC v2 0/7] arm64: return address signing Kristina Martsenko
2019-05-29 19:03 ` [RFC v2 1/7] arm64: cpufeature: add pointer auth meta-capabilities Kristina Martsenko
2019-05-30  1:58   ` Kees Cook
2019-05-30 10:50   ` Suzuki K Poulose
2019-06-13 16:13     ` Suzuki K Poulose
2019-05-29 19:03 ` [RFC v2 2/7] arm64: install user ptrauth keys at kernel exit time Kristina Martsenko
2019-05-30  2:04   ` Kees Cook
2019-06-06 16:26   ` Catalin Marinas
2019-05-29 19:03 ` [RFC v2 3/7] arm64: cpufeature: handle conflicts based on capability Kristina Martsenko
2019-05-30  2:49   ` Kees Cook
2019-05-30 14:16   ` Suzuki K Poulose
2019-05-31 14:00     ` Kristina Martsenko
2019-05-31 15:08       ` Suzuki K Poulose
2019-05-29 19:03 ` [RFC v2 4/7] arm64: enable ptrauth earlier Kristina Martsenko
2019-05-30  3:11   ` Kees Cook
2019-06-13 15:41   ` Suzuki K Poulose
2019-05-29 19:03 ` [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys Kristina Martsenko
2019-05-30  3:34   ` Kees Cook
2019-05-30 16:26     ` Kristina Martsenko
2019-06-04 10:03   ` Dave Martin
2019-06-06 16:44   ` Catalin Marinas
2019-06-12 16:21     ` Kristina Martsenko
2019-06-13 10:44       ` Catalin Marinas
2019-05-29 19:03 ` [RFC v2 6/7] arm64: unwind: strip PAC from kernel addresses Kristina Martsenko
2019-05-30  3:36   ` Kees Cook
2019-05-29 19:03 ` [RFC v2 7/7] arm64: compile the kernel with ptrauth return address signing Kristina Martsenko
2019-05-30  3:45   ` Kees Cook
2019-05-30  3:09 ` [RFC v2 0/7] arm64: " Kees Cook
2019-05-30  7:25   ` Will Deacon
2019-05-30  8:39     ` Ard Biesheuvel
2019-05-30  9:11       ` Ramana Radhakrishnan
2019-05-30  9:12   ` Ramana Radhakrishnan
2019-06-06 17:44     ` Kristina Martsenko
2019-06-08  4:09       ` Kees Cook
     [not found]   ` <DB7PR08MB3865C4AA36C9C465B2A687DABF180@DB7PR08MB3865.eurprd08.prod.outlook.com>
2019-05-30 15:57     ` Kees Cook
     [not found]       ` <DB7PR08MB3865A83066179CE419D171EDBF180@DB7PR08MB3865.eurprd08.prod.outlook.com>
2019-05-30 18:05         ` Kees Cook
2019-05-31  9:22           ` Will Deacon
2019-06-02 15:43             ` Kees Cook
2019-06-03 10:40               ` Will Deacon
2019-06-04 13:52                 ` Luke Cheeseman
2019-06-06 17:43                   ` Kristina Martsenko
