linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 00/11] arm64: return address signing
@ 2019-10-17  8:14 Amit Daniel Kachhap
  2019-10-17  8:14 ` [PATCH 01/11] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
                   ` (11 more replies)
  0 siblings, 12 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

Hi,

This series improves function return address protection for the arm64 kernel, by
compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
to as ptrauth hereafter). This should help protect the kernel against attacks
using return-oriented programming.
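For intuition, here is a toy userspace model of what the PAC instructions do:
the signature lives in the pointer bits above the VA range, signing inserts it,
and authentication verifies and strips it. This is only a sketch — the mixing
function below stands in for the real QARMA-based MAC, and VA_BITS, the keys
and all constants are illustrative, not the kernel's values:

```c
#include <assert.h>
#include <stdint.h>

#define VA_BITS  48	/* illustrative: VAs use the low 48 bits */
#define PAC_MASK (~((1ULL << VA_BITS) - 1))

/* Stand-in for the hardware MAC (really QARMA over pointer, a 128-bit
 * key and a 64-bit modifier); here just a trivial mix for illustration. */
static uint64_t toy_pac(uint64_t ptr, uint64_t key, uint64_t mod)
{
	return ((ptr ^ key ^ mod) * 0x9E3779B97F4A7C15ULL) & PAC_MASK;
}

/* "pacia"-like: insert the signature into the unused upper bits */
static uint64_t toy_sign(uint64_t ptr, uint64_t key, uint64_t mod)
{
	ptr &= ~PAC_MASK;
	return ptr | toy_pac(ptr, key, mod);
}

/* "autia"-like: verify and strip; poison the pointer on failure so a
 * later use (e.g. a function return) faults instead of succeeding */
static uint64_t toy_auth(uint64_t signed_ptr, uint64_t key, uint64_t mod)
{
	uint64_t ptr = signed_ptr & ~PAC_MASK;

	if ((signed_ptr & PAC_MASK) != toy_pac(ptr, key, mod))
		return ptr | (1ULL << 63);	/* non-canonical: faults */
	return ptr;
}
```

An attacker overwriting a signed return address on the stack (or guessing
with the wrong key) ends up with a poisoned pointer, which is the property
the series relies on.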

Patches 9 and 10 are newly added and hence sent as RFC.

This series is based on v5.4-rc2.

High-level changes since RFC v2 [1] (detailed changes are listed in patches):
 - Moved enabling, key setup and context switch to assembly, to avoid using
   the pointer auth compiler attribute which Clang does not support (thanks
   Suzuki for the initial code!).
 - Added code to restore keys after cpu resume.
 - __builtin_return_address will now mask pac bits.
 - Changed GCC compiler options to add ptrauth instructions in all functions,
   not just non-leaf functions. This may be revisited later due to
   performance concerns.
 - Rebased onto v5.4-rc2.
 - Added Reviewed-by's.
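The __builtin_return_address masking mentioned above boils down to forcing a
kernel pointer's top bits back to their canonical all-ones form. A hedged
sketch (VA_BITS and the example values are illustrative, and this ignores the
bit-55 check that distinguishes kernel from user pointers):

```c
#include <stdint.h>

#define VA_BITS 48	/* illustrative; the kernel's value is configurable */

/* A signed kernel pointer carries a PAC in bits 63:VA_BITS. For kernel
 * addresses those bits are normally all ones, so stripping the PAC from
 * a return address is just OR-ing them back in. */
static inline uint64_t strip_kernel_pac(uint64_t lr)
{
	return lr | ~((1ULL << VA_BITS) - 1);
}
```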

This series does not yet implement a few things, and has some known limitations:
 - The ftrace function tracer does not work with this series. It works fine
   after applying the posted series [2] based on -fpatchable-function-entry.
 - kprobes/uprobes and other tracing may need some rework with ptrauth.
 - kdump and other debug facilities may need some rework with ptrauth.
 - Randomness for the ptrauth keys is not yet generated during early
   kernel boot.

Feedback welcome!

Thanks,
Amit Daniel

[1] https://lore.kernel.org/linux-arm-kernel/20190529190332.29753-1-kristina.martsenko@arm.com/
[2] https://patchwork.kernel.org/patch/10803279/

Amit Daniel Kachhap (3):
  arm64: create macro to park cpu in infinite loop
  arm64: suspend: restore the kernel ptrauth keys
  arm64: mask PAC bits of __builtin_return_address

Kristina Martsenko (8):
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: install user ptrauth keys at kernel exit time
  arm64: cpufeature: handle conflicts based on capability
  arm64: enable ptrauth earlier
  arm64: rename ptrauth key structures to be user-specific
  arm64: initialize and switch ptrauth kernel keys
  arm64: unwind: strip PAC from kernel addresses
  arm64: compile the kernel with ptrauth return address signing

 arch/arm64/Kconfig                        | 21 ++++++++-
 arch/arm64/Makefile                       |  6 +++
 arch/arm64/include/asm/asm_pointer_auth.h | 59 +++++++++++++++++++++++
 arch/arm64/include/asm/compiler.h         | 15 ++++++
 arch/arm64/include/asm/cpucaps.h          |  4 +-
 arch/arm64/include/asm/cpufeature.h       | 33 ++++++++++---
 arch/arm64/include/asm/pointer_auth.h     | 50 ++++++++------------
 arch/arm64/include/asm/processor.h        |  3 +-
 arch/arm64/include/asm/smp.h              |  3 ++
 arch/arm64/kernel/asm-offsets.c           | 15 ++++++
 arch/arm64/kernel/cpufeature.c            | 53 ++++++++++++---------
 arch/arm64/kernel/entry.S                 |  6 +++
 arch/arm64/kernel/head.S                  | 78 +++++++++++++++++++++++++++----
 arch/arm64/kernel/pointer_auth.c          |  7 +--
 arch/arm64/kernel/process.c               |  3 +-
 arch/arm64/kernel/ptrace.c                | 16 +++----
 arch/arm64/kernel/sleep.S                 |  6 +++
 arch/arm64/kernel/smp.c                   |  8 ++++
 arch/arm64/kernel/stacktrace.c            |  3 ++
 19 files changed, 306 insertions(+), 83 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
 create mode 100644 arch/arm64/include/asm/compiler.h

-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH 01/11] arm64: cpufeature: add pointer auth meta-capabilities
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-17  8:14 ` [PATCH 02/11] arm64: install user ptrauth keys at kernel exit time Amit Daniel Kachhap
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

To enable pointer auth for the kernel, we're going to need to check for
the presence of address auth and generic auth using alternative_if. We
currently have two cpucaps for each, but alternative_if needs to check a
single cpucap. So define meta-capabilities that are present when either
of the current two capabilities is present.

Leave the existing four cpucaps in place, as they are still needed to
check for mismatched systems where one CPU has the architected algorithm
but another has the IMP DEF algorithm.
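Conceptually, the meta-capability just precomputes the OR of the two detection
results so that alternative_if has a single cap bit to patch on. A minimal
userspace sketch of that logic (cap numbering and function names here are
illustrative, not the kernel's):

```c
#include <stdbool.h>

/* Illustrative cap numbering only; the real values live in cpucaps.h. */
enum {
	HAS_ADDRESS_AUTH_ARCH,
	HAS_ADDRESS_AUTH_IMP_DEF,
	HAS_ADDRESS_AUTH,		/* the new meta-capability */
	NCAPS,
};

static bool cpu_caps[NCAPS];

/* alternative_if patches code based on a single cap bit, so the OR of
 * the two per-algorithm detection results is folded into one meta-cap. */
static bool has_address_auth(void)
{
	return cpu_caps[HAS_ADDRESS_AUTH_ARCH] ||
	       cpu_caps[HAS_ADDRESS_AUTH_IMP_DEF];
}

static void update_meta_caps(void)
{
	if (has_address_auth())
		cpu_caps[HAS_ADDRESS_AUTH] = true;
}
```

In the real code, cpus_have_const_cap() plays the role of the cpu_caps[]
lookup, and the matches callback of the new capability entry performs the OR.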

Note that the meta-capabilities were present before but were removed in
commit a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth
CPU caps from 6 to 4") and commit 1e013d06120c ("arm64: cpufeature: Rework
ptr auth hwcaps using multi_entry_cap_matches"), as they were not needed
at the time. Unlike before, the current patch checks the cpucap values
directly, instead of reading the CPU ID register value.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - Small change in commit logs.

 arch/arm64/include/asm/cpucaps.h    |  4 +++-
 arch/arm64/include/asm/cpufeature.h |  6 ++----
 arch/arm64/kernel/cpufeature.c      | 25 ++++++++++++++++++++++++-
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f19fe4b..cbecfa5 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -52,7 +52,9 @@
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
 #define ARM64_HAS_DCPODP			43
 #define ARM64_WORKAROUND_1463225		44
+#define ARM64_HAS_ADDRESS_AUTH			45
+#define ARM64_HAS_GENERIC_AUTH			46
 
-#define ARM64_NCAPS				45
+#define ARM64_NCAPS				47
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9cde5d2..670497d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -590,15 +590,13 @@ static inline bool system_supports_cnp(void)
 static inline bool system_supports_address_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
 }
 
 static inline bool system_supports_generic_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
 static inline bool system_uses_irq_prio_masking(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9323bcc..a73400b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1242,6 +1242,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
 				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
 }
+
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+}
+
+static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
+}
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
@@ -1511,7 +1525,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -1522,6 +1535,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
 		.matches = has_cpuid_feature,
+	},
+	{
+		.capability = ARM64_HAS_ADDRESS_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_address_auth,
 		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
@@ -1544,6 +1562,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
 		.matches = has_cpuid_feature,
 	},
+	{
+		.capability = ARM64_HAS_GENERIC_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_generic_auth,
+	},
 #endif /* CONFIG_ARM64_PTR_AUTH */
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	{
-- 
2.7.4




* [PATCH 02/11] arm64: install user ptrauth keys at kernel exit time
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
  2019-10-17  8:14 ` [PATCH 01/11] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-17  8:14 ` [PATCH 03/11] arm64: cpufeature: handle conflicts based on capability Amit Daniel Kachhap
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

As we're going to enable pointer auth within the kernel and use a
different APIAKey for the kernel itself, move the user APIAKey
switch to the EL0 exception return path.

The other 4 keys could remain switched during task switch, but are also
moved to keep things consistent.
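Because the key install now happens in assembly on the kernel exit path, the
struct offsets have to be exported via asm-offsets.c so entry.S can address
the fields directly. A sketch of how those constants fall out — the field
names mirror the patch, but the surrounding layout is assumed for
illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* Each key is a 128-bit value, stored as a lo/hi pair that the assembly
 * loads with a single ldp and writes with two msr_s instructions. */
struct ptrauth_key { uint64_t lo, hi; };

struct ptrauth_keys {
	struct ptrauth_key apia, apib, apda, apdb, apga;
};

/* asm-offsets.c emits constants like these, so the macro can do e.g.
 * "ldp tmp2, tmp3, [tmp1, #PTRAUTH_KEY_APIA]" without knowing the
 * C struct definition. */
#define PTRAUTH_KEY_APIA offsetof(struct ptrauth_keys, apia)
#define PTRAUTH_KEY_APIB offsetof(struct ptrauth_keys, apib)
#define PTRAUTH_KEY_APGA offsetof(struct ptrauth_keys, apga)
```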

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - Replaced 12 NOPs with a branch in ptrauth_keys_install_user [Catalin]

 arch/arm64/include/asm/asm_pointer_auth.h | 45 +++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 30 +--------------------
 arch/arm64/kernel/asm-offsets.c           | 11 ++++++++
 arch/arm64/kernel/entry.S                 |  3 +++
 arch/arm64/kernel/pointer_auth.c          |  3 ---
 arch/arm64/kernel/process.c               |  1 -
 6 files changed, 60 insertions(+), 33 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
new file mode 100644
index 0000000..cb21a06
--- /dev/null
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_POINTER_AUTH_H
+#define __ASM_ASM_POINTER_AUTH_H
+
+#include <asm/alternative.h>
+#include <asm/asm-offsets.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_USER
+	add	\tmp1, \tsk, \tmp1
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+	b	.Laddr_auth_skip_\@
+alternative_else_nop_endif
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
+	msr_s	SYS_APIBKEYLO_EL1, \tmp2
+	msr_s	SYS_APIBKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
+	msr_s	SYS_APDAKEYLO_EL1, \tmp2
+	msr_s	SYS_APDAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
+	msr_s	SYS_APDBKEYLO_EL1, \tmp2
+	msr_s	SYS_APDBKEYHI_EL1, \tmp3
+.Laddr_auth_skip_\@:
+alternative_if ARM64_HAS_GENERIC_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
+	msr_s	SYS_APGAKEYLO_EL1, \tmp2
+	msr_s	SYS_APGAKEYHI_EL1, \tmp3
+alternative_else_nop_endif
+	.endm
+
+#else /* CONFIG_ARM64_PTR_AUTH */
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	.endm
+
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 7a24bad..21c2115 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -43,26 +43,6 @@ static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 }
 
-#define __ptrauth_key_install(k, v)				\
-do {								\
-	struct ptrauth_key __pki_v = (v);			\
-	write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1);	\
-	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
-} while (0)
-
-static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
-{
-	if (system_supports_address_auth()) {
-		__ptrauth_key_install(APIA, keys->apia);
-		__ptrauth_key_install(APIB, keys->apib);
-		__ptrauth_key_install(APDA, keys->apda);
-		__ptrauth_key_install(APDB, keys->apdb);
-	}
-
-	if (system_supports_generic_auth())
-		__ptrauth_key_install(APGA, keys->apga);
-}
-
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
@@ -78,20 +58,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }
 
 #define ptrauth_thread_init_user(tsk)					\
-do {									\
-	struct task_struct *__ptiu_tsk = (tsk);				\
-	ptrauth_keys_init(&__ptiu_tsk->thread.keys_user);		\
-	ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user);		\
-} while (0)
-
-#define ptrauth_thread_switch(tsk)	\
-	ptrauth_keys_switch(&(tsk)->thread.keys_user)
+	ptrauth_keys_init(&(tsk)->thread.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
-#define ptrauth_thread_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 2146857..ef0c24b 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -40,6 +40,9 @@ int main(void)
 #endif
   BLANK();
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+#endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
   DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
@@ -127,5 +130,13 @@ int main(void)
   DEFINE(SDEI_EVENT_INTREGS,	offsetof(struct sdei_registered_event, interrupted_regs));
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(PTRAUTH_KEY_APIA,	offsetof(struct ptrauth_keys, apia));
+  DEFINE(PTRAUTH_KEY_APIB,	offsetof(struct ptrauth_keys, apib));
+  DEFINE(PTRAUTH_KEY_APDA,	offsetof(struct ptrauth_keys, apda));
+  DEFINE(PTRAUTH_KEY_APDB,	offsetof(struct ptrauth_keys, apdb));
+  DEFINE(PTRAUTH_KEY_APGA,	offsetof(struct ptrauth_keys, apga));
+  BLANK();
+#endif
   return 0;
 }
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 84a8227..b6272a3 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -14,6 +14,7 @@
 #include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
+#include <asm/asm_pointer_auth.h>
 #include <asm/cpufeature.h>
 #include <asm/errno.h>
 #include <asm/esr.h>
@@ -341,6 +342,8 @@ alternative_else_nop_endif
 	msr	cntkctl_el1, x1
 4:
 #endif
+	ptrauth_keys_install_user tsk, x0, x1, x2
+
 	apply_ssbd 0, x0, x1
 	.endif
 
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index c507b58..95985be 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -19,7 +19,6 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 
 	if (!arg) {
 		ptrauth_keys_init(keys);
-		ptrauth_keys_switch(keys);
 		return 0;
 	}
 
@@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 	if (arg & PR_PAC_APGAKEY)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 
-	ptrauth_keys_switch(keys);
-
 	return 0;
 }
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index a47462d..5b2b158 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -505,7 +505,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
-	ptrauth_thread_switch(next);
 	ssbs_thread_switch(next);
 
 	/*
-- 
2.7.4




* [PATCH 03/11] arm64: cpufeature: handle conflicts based on capability
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
  2019-10-17  8:14 ` [PATCH 01/11] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
  2019-10-17  8:14 ` [PATCH 02/11] arm64: install user ptrauth keys at kernel exit time Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-17 14:06   ` Suzuki K Poulose
  2019-10-17  8:14 ` [PATCH 04/11] arm64: create macro to park cpu in an infinite loop Amit Daniel Kachhap
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

Each system capability can be of either boot, local, or system scope,
depending on when the state of the capability is finalized. When we
detect a conflict on a late CPU, we either offline the CPU or panic the
system. We currently always panic if the conflict is caused by a boot
scope capability, and offline the CPU if the conflict is caused by a
local or system scope capability.

We're going to want to add a new capability (for pointer authentication)
which needs to be boot scope but doesn't need to panic the system when a
conflict is detected. So add a new flag to specify whether the
capability requires the system to panic or not. Current boot scope
capabilities are updated to set the flag, so there should be no
functional change as a result of this patch.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - Updated comment above ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE [Suzuki]

 arch/arm64/include/asm/cpufeature.h | 18 ++++++++++++++++--
 arch/arm64/kernel/cpufeature.c      | 23 +++++++++--------------
 2 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 670497d..011a665 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -208,6 +208,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  *     In some non-typical cases either both (a) and (b), or neither,
  *     should be permitted. This can be described by including neither
  *     or both flags in the capability's type field.
+ *
+ *     In case of a conflict, the CPU is prevented from booting. If the
+ *     ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability,
+ *     then a kernel panic is triggered.
  */
 
 
@@ -240,6 +244,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	((u16)BIT(4))
 /* Is it safe for a late CPU to miss this capability when system has it */
 #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((u16)BIT(5))
+/* Panic when a conflict is detected */
+#define ARM64_CPUCAP_PANIC_ON_CONFLICT		((u16)BIT(6))
 
 /*
  * CPU errata workarounds that need to be enabled at boot time if one or
@@ -279,9 +285,11 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 
 /*
  * CPU feature used early in the boot based on the boot CPU. All secondary
- * CPUs must match the state of the capability as detected by the boot CPU.
+ * CPUs must match the state of the capability as detected by the boot CPU. In
+ * case of a conflict, a kernel panic is triggered.
  */
-#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
+#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
+	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
 
 struct arm64_cpu_capabilities {
 	const char *desc;
@@ -352,6 +360,12 @@ cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
 	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
 }
 
+static inline bool
+cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
+}
+
 /*
  * Generic helper for handling capabilties with multiple (match,enable) pairs
  * of call backs, sharing the same capability bit.
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a73400b..4ef40c9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1850,10 +1850,8 @@ static void __init enable_cpu_capabilities(u16 scope_mask)
  * Run through the list of capabilities to check for conflicts.
  * If the system has already detected a capability, take necessary
  * action on this CPU.
- *
- * Returns "false" on conflicts.
  */
-static bool verify_local_cpu_caps(u16 scope_mask)
+static void verify_local_cpu_caps(u16 scope_mask)
 {
 	int i;
 	bool cpu_has_cap, system_has_cap;
@@ -1898,10 +1896,12 @@ static bool verify_local_cpu_caps(u16 scope_mask)
 		pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n",
 			smp_processor_id(), caps->capability,
 			caps->desc, system_has_cap, cpu_has_cap);
-		return false;
-	}
 
-	return true;
+		if (cpucap_panic_on_conflict(caps))
+			cpu_panic_kernel();
+		else
+			cpu_die_early();
+	}
 }
 
 /*
@@ -1911,12 +1911,8 @@ static bool verify_local_cpu_caps(u16 scope_mask)
 static void check_early_cpu_features(void)
 {
 	verify_cpu_asid_bits();
-	/*
-	 * Early features are used by the kernel already. If there
-	 * is a conflict, we cannot proceed further.
-	 */
-	if (!verify_local_cpu_caps(SCOPE_BOOT_CPU))
-		cpu_panic_kernel();
+
+	verify_local_cpu_caps(SCOPE_BOOT_CPU);
 }
 
 static void
@@ -1964,8 +1960,7 @@ static void verify_local_cpu_capabilities(void)
 	 * check_early_cpu_features(), as they need to be verified
 	 * on all secondary CPUs.
 	 */
-	if (!verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU))
-		cpu_die_early();
+	verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);
 
 	verify_local_elf_hwcaps(arm64_elf_hwcaps);
 
-- 
2.7.4




* [PATCH 04/11] arm64: create macro to park cpu in an infinite loop
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
                   ` (2 preceding siblings ...)
  2019-10-17  8:14 ` [PATCH 03/11] arm64: cpufeature: handle conflicts based on capability Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-17  8:14 ` [PATCH 05/11] arm64: enable ptrauth earlier Amit Daniel Kachhap
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

Add a macro, early_park_cpu, to park a faulted cpu in an infinite loop.
This macro replaces the existing open-coded loops in two places and will
be reused in a subsequent pointer authentication patch.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - New patch in this version for code reuse.

 arch/arm64/kernel/head.S | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 989b194..e58e5975 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -761,6 +761,16 @@ ENDPROC(__secondary_too_slow)
 	.endm
 
 /*
+ * Macro to park the cpu in an infinite loop.
+ */
+	.macro	early_park_cpu
+.Lepc_\@:
+	wfe
+	wfi
+	b	.Lepc_\@
+	.endm
+
+/*
  * Enable the MMU.
  *
  *  x0  = SCTLR_EL1 value for turning on the MMU.
@@ -810,10 +820,7 @@ ENTRY(__cpu_secondary_check52bitva)
 
 	update_early_cpu_boot_status \
 		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_52_BIT_VA, x0, x1
-1:	wfe
-	wfi
-	b	1b
-
+	early_park_cpu
 #endif
 2:	ret
 ENDPROC(__cpu_secondary_check52bitva)
@@ -822,10 +829,7 @@ __no_granule_support:
 	/* Indicate that this CPU can't boot and is stuck in the kernel */
 	update_early_cpu_boot_status \
 		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_GRAN, x1, x2
-1:
-	wfe
-	wfi
-	b	1b
+	early_park_cpu
 ENDPROC(__no_granule_support)
 
 #ifdef CONFIG_RELOCATABLE
-- 
2.7.4




* [PATCH 05/11] arm64: enable ptrauth earlier
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
                   ` (3 preceding siblings ...)
  2019-10-17  8:14 ` [PATCH 04/11] arm64: create macro to park cpu in an infinite loop Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-17 14:13   ` Suzuki K Poulose
  2019-10-23 17:32   ` James Morse
  2019-10-17  8:14 ` [PATCH 06/11] arm64: rename ptrauth key structures to be user-specific Amit Daniel Kachhap
                   ` (6 subsequent siblings)
  11 siblings, 2 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

When the kernel is compiled with pointer auth instructions, the boot CPU
needs to start using address auth very early, so change the cpucap to
account for this.

Pointer auth must be enabled before we call C functions, because it is
not possible to enter a function with pointer auth disabled and exit it
with pointer auth enabled. Note, mismatches between architected and
IMPDEF algorithms will still be caught by the cpufeature framework (the
separate *_ARCH and *_IMP_DEF cpucaps).

Note the change in behavior: if the boot CPU has address auth and a late
CPU does not, then we offline the late CPU. Until now we would have just
disabled address auth in this case.

Leave generic authentication as a "system scope" cpucap for now, since
initially the kernel will only use address authentication.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - Moved early enabling from C to assembly, and no longer use the pointer auth
   C function attribute [Suzuki]

 arch/arm64/Kconfig                  |  4 ++++
 arch/arm64/include/asm/cpufeature.h |  9 +++++++
 arch/arm64/include/asm/smp.h        |  1 +
 arch/arm64/kernel/cpufeature.c      | 13 +++-------
 arch/arm64/kernel/head.S            | 48 +++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/smp.c             |  2 ++
 6 files changed, 67 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 41a9b42..253e3c5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1408,6 +1408,10 @@ config ARM64_PTR_AUTH
 	  be enabled. However, KVM guest also require VHE mode and hence
 	  CONFIG_ARM64_VHE=y option to use this feature.
 
+	  If the feature is present on the primary CPU but not a secondary CPU,
+	  then the secondary CPU will be offlined. On such a system, this
+	  option should not be selected.
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 011a665..5d61749 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -291,6 +291,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
 	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
 
+/*
+ * CPU feature used early in the boot based on the boot CPU. It is safe for a
+ * late CPU to have this feature even though the boot CPU hasn't enabled it,
+ * although the feature will not be used by Linux in this case. If the boot CPU
+ * has enabled this feature already, then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_BOOT_CPU_FEATURE			\
+	 (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index a0c8a0b..46e2b05 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -22,6 +22,7 @@
 
 #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
 #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)
+#define CPU_STUCK_REASON_NO_PTRAUTH	(UL(4) << CPU_STUCK_REASON_SHIFT)
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4ef40c9..507c057 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1237,12 +1237,6 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
-{
-	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
-				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
-}
-
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
 			     int __unused)
 {
@@ -1519,7 +1513,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (architected algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
@@ -1529,7 +1523,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
@@ -1538,9 +1532,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.matches = has_address_auth,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index e58e5975..157c811 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -13,6 +13,7 @@
 #include <linux/init.h>
 #include <linux/irqchip/arm-gic-v3.h>
 
+#include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/boot.h>
 #include <asm/ptrace.h>
@@ -119,6 +120,8 @@ ENTRY(stext)
 	 * the TCR will have been set.
 	 */
 	bl	__cpu_setup			// initialise processor
+	mov	x1, #1
+	bl	__ptrauth_setup
 	b	__primary_switch
 ENDPROC(stext)
 
@@ -713,6 +716,8 @@ secondary_startup:
 	 */
 	bl	__cpu_secondary_check52bitva
 	bl	__cpu_setup			// initialise processor
+	mov	x1, #0
+	bl	__ptrauth_setup
 	adrp	x1, swapper_pg_dir
 	bl	__enable_mmu
 	ldr	x8, =__secondary_switched
@@ -832,6 +837,49 @@ __no_granule_support:
 	early_park_cpu
 ENDPROC(__no_granule_support)
 
+/*
+ * Enable pointer authentication.
+ *   x0 = SCTLR_EL1
+ *   x1 = 1 for primary, 0 for secondary
+ */
+__ptrauth_setup:
+#ifdef CONFIG_ARM64_PTR_AUTH
+	/* Check if the CPU supports ptrauth */
+	mrs	x2, id_aa64isar1_el1
+	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
+	cbz	x2, 2f
+
+	/* x2 = system_supports_address_auth() */
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	mov	x2, 1
+alternative_else
+	mov	x2, 0
+alternative_endif
+	orr	x2, x2, x1	// primary || system_supports_address_auth()
+	cbz	x2, 3f
+
+	/* Enable ptrauth instructions */
+	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
+		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
+	orr	x0, x0, x2
+	b	3f
+
+2:	/* No ptrauth support */
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	b	4f
+alternative_else_nop_endif
+3:
+#endif
+	ret
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+4:	/* Park the secondary CPU */
+	update_early_cpu_boot_status \
+		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_PTRAUTH, x0, x1
+	early_park_cpu
+#endif
+ENDPROC(__ptrauth_setup)
+
 #ifdef CONFIG_RELOCATABLE
 __relocate_kernel:
 	/*
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index dc9fe87..a6a5f24 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -162,6 +162,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 				pr_crit("CPU%u: does not support 52-bit VAs\n", cpu);
 			if (status & CPU_STUCK_REASON_NO_GRAN)
 				pr_crit("CPU%u: does not support %luK granule \n", cpu, PAGE_SIZE / SZ_1K);
+			if (status & CPU_STUCK_REASON_NO_PTRAUTH)
+				pr_crit("CPU%u: does not support pointer authentication\n", cpu);
 			cpus_stuck_in_kernel++;
 			break;
 		case CPU_PANIC_KERNEL:
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 06/11] arm64: rename ptrauth key structures to be user-specific
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
                   ` (4 preceding siblings ...)
  2019-10-17  8:14 ` [PATCH 05/11] arm64: enable ptrauth earlier Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-29 23:27   ` Kees Cook
  2019-10-17  8:14 ` [PATCH 07/11] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

We currently enable ptrauth for userspace, but do not use it within the
kernel. We're going to enable it for the kernel as well, which will
require managing a separate set of ptrauth keys for kernel use.

We currently keep all 5 keys in struct ptrauth_keys. However, as the
kernel will only need to use 1 key, it is a bit wasteful to allocate a
whole ptrauth_keys struct for every thread.

Therefore, a subsequent patch will define a separate struct, with only 1
key, for the kernel. In preparation for that, rename the existing struct
(and associated macros and functions) to reflect that they are specific
to userspace.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - New patch in this version, to optimize struct usage [Dave]

 arch/arm64/include/asm/asm_pointer_auth.h | 10 +++++-----
 arch/arm64/include/asm/pointer_auth.h     |  6 +++---
 arch/arm64/include/asm/processor.h        |  2 +-
 arch/arm64/kernel/asm-offsets.c           | 10 +++++-----
 arch/arm64/kernel/pointer_auth.c          |  4 ++--
 arch/arm64/kernel/ptrace.c                | 16 ++++++++--------
 6 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
index cb21a06..3d39788 100644
--- a/arch/arm64/include/asm/asm_pointer_auth.h
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -15,21 +15,21 @@
 alternative_if_not ARM64_HAS_ADDRESS_AUTH
 	b	.Laddr_auth_skip_\@
 alternative_else_nop_endif
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
 	msr_s	SYS_APIAKEYLO_EL1, \tmp2
 	msr_s	SYS_APIAKEYHI_EL1, \tmp3
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
 	msr_s	SYS_APIBKEYLO_EL1, \tmp2
 	msr_s	SYS_APIBKEYHI_EL1, \tmp3
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA]
 	msr_s	SYS_APDAKEYLO_EL1, \tmp2
 	msr_s	SYS_APDAKEYHI_EL1, \tmp3
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB]
 	msr_s	SYS_APDBKEYLO_EL1, \tmp2
 	msr_s	SYS_APDBKEYHI_EL1, \tmp3
 .Laddr_auth_skip_\@:
 alternative_if ARM64_HAS_GENERIC_AUTH
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA]
 	msr_s	SYS_APGAKEYLO_EL1, \tmp2
 	msr_s	SYS_APGAKEYHI_EL1, \tmp3
 alternative_else_nop_endif
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 21c2115..cc42145 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -22,7 +22,7 @@ struct ptrauth_key {
  * We give each process its own keys, which are shared by all threads. The keys
  * are inherited upon fork(), and reinitialised upon exec*().
  */
-struct ptrauth_keys {
+struct ptrauth_keys_user {
 	struct ptrauth_key apia;
 	struct ptrauth_key apib;
 	struct ptrauth_key apda;
@@ -30,7 +30,7 @@ struct ptrauth_keys {
 	struct ptrauth_key apga;
 };
 
-static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 {
 	if (system_supports_address_auth()) {
 		get_random_bytes(&keys->apia, sizeof(keys->apia));
@@ -58,7 +58,7 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }
 
 #define ptrauth_thread_init_user(tsk)					\
-	ptrauth_keys_init(&(tsk)->thread.keys_user)
+	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5623685..8ec792d 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -144,7 +144,7 @@ struct thread_struct {
 	unsigned long		fault_code;	/* ESR_EL1 value */
 	struct debug_info	debug;		/* debugging */
 #ifdef CONFIG_ARM64_PTR_AUTH
-	struct ptrauth_keys	keys_user;
+	struct ptrauth_keys_user	keys_user;
 #endif
 };
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index ef0c24b..cf15182 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -131,11 +131,11 @@ int main(void)
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
 #endif
 #ifdef CONFIG_ARM64_PTR_AUTH
-  DEFINE(PTRAUTH_KEY_APIA,	offsetof(struct ptrauth_keys, apia));
-  DEFINE(PTRAUTH_KEY_APIB,	offsetof(struct ptrauth_keys, apib));
-  DEFINE(PTRAUTH_KEY_APDA,	offsetof(struct ptrauth_keys, apda));
-  DEFINE(PTRAUTH_KEY_APDB,	offsetof(struct ptrauth_keys, apdb));
-  DEFINE(PTRAUTH_KEY_APGA,	offsetof(struct ptrauth_keys, apga));
+  DEFINE(PTRAUTH_USER_KEY_APIA,		offsetof(struct ptrauth_keys_user, apia));
+  DEFINE(PTRAUTH_USER_KEY_APIB,		offsetof(struct ptrauth_keys_user, apib));
+  DEFINE(PTRAUTH_USER_KEY_APDA,		offsetof(struct ptrauth_keys_user, apda));
+  DEFINE(PTRAUTH_USER_KEY_APDB,		offsetof(struct ptrauth_keys_user, apdb));
+  DEFINE(PTRAUTH_USER_KEY_APGA,		offsetof(struct ptrauth_keys_user, apga));
   BLANK();
 #endif
   return 0;
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index 95985be..1e77736 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -9,7 +9,7 @@
 
 int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 {
-	struct ptrauth_keys *keys = &tsk->thread.keys_user;
+	struct ptrauth_keys_user *keys = &tsk->thread.keys_user;
 	unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
 				      PR_PAC_APDAKEY | PR_PAC_APDBKEY;
 	unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
@@ -18,7 +18,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 		return -EINVAL;
 
 	if (!arg) {
-		ptrauth_keys_init(keys);
+		ptrauth_keys_init_user(keys);
 		return 0;
 	}
 
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 21176d0..0793739 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -986,7 +986,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey)
 }
 
 static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
-				     const struct ptrauth_keys *keys)
+				     const struct ptrauth_keys_user *keys)
 {
 	ukeys->apiakey = pac_key_to_user(&keys->apia);
 	ukeys->apibkey = pac_key_to_user(&keys->apib);
@@ -994,7 +994,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
 	ukeys->apdbkey = pac_key_to_user(&keys->apdb);
 }
 
-static void pac_address_keys_from_user(struct ptrauth_keys *keys,
+static void pac_address_keys_from_user(struct ptrauth_keys_user *keys,
 				       const struct user_pac_address_keys *ukeys)
 {
 	keys->apia = pac_key_from_user(ukeys->apiakey);
@@ -1008,7 +1008,7 @@ static int pac_address_keys_get(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				void *kbuf, void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_address_keys user_keys;
 
 	if (!system_supports_address_auth())
@@ -1025,7 +1025,7 @@ static int pac_address_keys_set(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				const void *kbuf, const void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_address_keys user_keys;
 	int ret;
 
@@ -1043,12 +1043,12 @@ static int pac_address_keys_set(struct task_struct *target,
 }
 
 static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys,
-				     const struct ptrauth_keys *keys)
+				     const struct ptrauth_keys_user *keys)
 {
 	ukeys->apgakey = pac_key_to_user(&keys->apga);
 }
 
-static void pac_generic_keys_from_user(struct ptrauth_keys *keys,
+static void pac_generic_keys_from_user(struct ptrauth_keys_user *keys,
 				       const struct user_pac_generic_keys *ukeys)
 {
 	keys->apga = pac_key_from_user(ukeys->apgakey);
@@ -1059,7 +1059,7 @@ static int pac_generic_keys_get(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				void *kbuf, void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_generic_keys user_keys;
 
 	if (!system_supports_generic_auth())
@@ -1076,7 +1076,7 @@ static int pac_generic_keys_set(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				const void *kbuf, const void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_generic_keys user_keys;
 	int ret;
 
-- 
2.7.4




* [PATCH 07/11] arm64: initialize and switch ptrauth kernel keys
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
                   ` (5 preceding siblings ...)
  2019-10-17  8:14 ` [PATCH 06/11] arm64: rename ptrauth key structures to be user-specific Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-23 17:35   ` James Morse
  2019-10-17  8:14 ` [PATCH 08/11] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

Set up keys to use pointer authentication within the kernel. The kernel
will be compiled with APIAKey instructions; the other keys are currently
unused. Each task is given its own APIAKey, which is initialized during
fork. The key is changed during context switch and on kernel entry from
EL0.

The keys for idle threads need to be set before calling any C functions,
because it is not possible to enter and exit a function with different
keys.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - Moved early key setting and key switching from C to assembly, and no longer
   use the pointer auth C function attribute; added new secondary_data fields
   to do so [Suzuki]
 - Removed ISBs in ptrauth_keys_install_user, added a comment in kernel_exit [Catalin]
 - Separate struct for kernel keys [Dave]

 arch/arm64/include/asm/asm_pointer_auth.h | 14 ++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 13 +++++++++++++
 arch/arm64/include/asm/processor.h        |  1 +
 arch/arm64/include/asm/smp.h              |  2 ++
 arch/arm64/kernel/asm-offsets.c           |  4 ++++
 arch/arm64/kernel/entry.S                 |  3 +++
 arch/arm64/kernel/head.S                  | 10 ++++++++++
 arch/arm64/kernel/process.c               |  2 ++
 arch/arm64/kernel/smp.c                   |  6 ++++++
 9 files changed, 55 insertions(+)

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
index 3d39788..172548a 100644
--- a/arch/arm64/include/asm/asm_pointer_auth.h
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -35,11 +35,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH
 alternative_else_nop_endif
 	.endm
 
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_KERNEL
+	add	\tmp1, \tsk, \tmp1
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	isb
+alternative_else_nop_endif
+	.endm
+
 #else /* CONFIG_ARM64_PTR_AUTH */
 
 	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
 	.endm
 
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	.endm
+
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index cc42145..599dd09 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -30,6 +30,10 @@ struct ptrauth_keys_user {
 	struct ptrauth_key apga;
 };
 
+struct ptrauth_keys_kernel {
+	struct ptrauth_key apia;
+};
+
 static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 {
 	if (system_supports_address_auth()) {
@@ -43,6 +47,12 @@ static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 }
 
+static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
+{
+	if (system_supports_address_auth())
+		get_random_bytes(&keys->apia, sizeof(keys->apia));
+}
+
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
@@ -59,11 +69,14 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 
 #define ptrauth_thread_init_user(tsk)					\
 	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
+#define ptrauth_thread_init_kernel(tsk)					\
+	ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
+#define ptrauth_thread_init_kernel(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 8ec792d..c12c98d 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -145,6 +145,7 @@ struct thread_struct {
 	struct debug_info	debug;		/* debugging */
 #ifdef CONFIG_ARM64_PTR_AUTH
 	struct ptrauth_keys_user	keys_user;
+	struct ptrauth_keys_kernel	keys_kernel;
 #endif
 };
 
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 46e2b05..2294e93 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -88,6 +88,8 @@ asmlinkage void secondary_start_kernel(void);
 struct secondary_data {
 	void *stack;
 	struct task_struct *task;
+	unsigned long ptrauth_key_lo;
+	unsigned long ptrauth_key_hi;
 	long status;
 };
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index cf15182..78ab279 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -42,6 +42,7 @@ int main(void)
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
 #ifdef CONFIG_ARM64_PTR_AUTH
   DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+  DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
 #endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
@@ -90,6 +91,8 @@ int main(void)
   BLANK();
   DEFINE(CPU_BOOT_STACK,	offsetof(struct secondary_data, stack));
   DEFINE(CPU_BOOT_TASK,		offsetof(struct secondary_data, task));
+  DEFINE(CPU_BOOT_PTRAUTH_KEY_LO,	offsetof(struct secondary_data, ptrauth_key_lo));
+  DEFINE(CPU_BOOT_PTRAUTH_KEY_HI,	offsetof(struct secondary_data, ptrauth_key_hi));
   BLANK();
 #ifdef CONFIG_KVM_ARM_HOST
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
@@ -136,6 +139,7 @@ int main(void)
   DEFINE(PTRAUTH_USER_KEY_APDA,		offsetof(struct ptrauth_keys_user, apda));
   DEFINE(PTRAUTH_USER_KEY_APDB,		offsetof(struct ptrauth_keys_user, apdb));
   DEFINE(PTRAUTH_USER_KEY_APGA,		offsetof(struct ptrauth_keys_user, apga));
+  DEFINE(PTRAUTH_KERNEL_KEY_APIA,	offsetof(struct ptrauth_keys_kernel, apia));
   BLANK();
 #endif
   return 0;
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index b6272a3..0619065 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -173,6 +173,7 @@ alternative_cb_end
 
 	apply_ssbd 1, x22, x23
 
+	ptrauth_keys_install_kernel tsk, x20, x22, x23
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
@@ -342,6 +343,7 @@ alternative_else_nop_endif
 	msr	cntkctl_el1, x1
 4:
 #endif
+	/* No kernel C function calls after this as user keys are set. */
 	ptrauth_keys_install_user tsk, x0, x1, x2
 
 	apply_ssbd 0, x0, x1
@@ -1155,6 +1157,7 @@ ENTRY(cpu_switch_to)
 	ldr	lr, [x8]
 	mov	sp, x9
 	msr	sp_el0, x1
+	ptrauth_keys_install_kernel x1, x8, x9, x10
 	ret
 ENDPROC(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 157c811..e518511 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -858,6 +858,16 @@ alternative_endif
 	orr	x2, x2, x1	// primary || system_supports_address_auth()
 	cbz	x2, 3f
 
+	/* Install ptrauth key */
+	mov	x3, xzr
+	mov	x4, xzr
+	cbnz	x1, 1f
+	adr_l	x2, secondary_data
+	ldr	x3, [x2, CPU_BOOT_PTRAUTH_KEY_LO]
+	ldr	x4, [x2, CPU_BOOT_PTRAUTH_KEY_HI]
+1:	msr_s	SYS_APIAKEYLO_EL1, x3
+	msr_s	SYS_APIAKEYHI_EL1, x4
+
 	/* Enable ptrauth instructions */
 	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
 		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 5b2b158..b737f55 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -376,6 +376,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 	 */
 	fpsimd_flush_task_state(p);
 
+	ptrauth_thread_init_kernel(p);
+
 	if (likely(!(p->flags & PF_KTHREAD))) {
 		*childregs = *current_pt_regs();
 		childregs->regs[0] = 0;
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index a6a5f24..230d21f 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -110,6 +110,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	 */
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
+#if defined(CONFIG_ARM64_PTR_AUTH)
+	secondary_data.ptrauth_key_lo = idle->thread.keys_kernel.apia.lo;
+	secondary_data.ptrauth_key_hi = idle->thread.keys_kernel.apia.hi;
+#endif
 	update_cpu_boot_status(CPU_MMU_OFF);
 	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
 
@@ -136,6 +140,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
+	secondary_data.ptrauth_key_lo = 0;
+	secondary_data.ptrauth_key_hi = 0;
 	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
 	if (ret && status) {
-- 
2.7.4




* [PATCH 08/11] arm64: unwind: strip PAC from kernel addresses
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
                   ` (6 preceding siblings ...)
  2019-10-17  8:14 ` [PATCH 07/11] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-23 17:36   ` James Morse
  2019-10-17  8:14 ` [RFC PATCH 09/11] arm64: suspend: restore the kernel ptrauth keys Amit Daniel Kachhap
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

When we enable pointer authentication in the kernel, LR values saved to
the stack will have a PAC which we must strip in order to retrieve the
real return address.

Strip PACs when unwinding the stack in order to account for this.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - None

 arch/arm64/include/asm/pointer_auth.h | 9 ++++++---
 arch/arm64/kernel/stacktrace.c        | 3 +++
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 599dd09..a75dc89 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -59,12 +59,15 @@ extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
  * The EL0 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
  */
-#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
+#define ptrauth_user_pac_mask()		GENMASK(54, vabits_actual)
+#define ptrauth_kernel_pac_mask()	(GENMASK(63, 56) | GENMASK(54, VA_BITS))
 
-/* Only valid for EL0 TTBR0 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-	return ptr & ~ptrauth_user_pac_mask();
+	if (ptr & BIT_ULL(55))
+		return ptr | ptrauth_kernel_pac_mask();
+	else
+		return ptr & ~ptrauth_user_pac_mask();
 }
 
 #define ptrauth_thread_init_user(tsk)					\
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index a336cb1..49eb1c3 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -14,6 +14,7 @@
 #include <linux/stacktrace.h>
 
 #include <asm/irq.h>
+#include <asm/pointer_auth.h>
 #include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>
 
@@ -84,6 +85,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	frame->prev_fp = fp;
 	frame->prev_type = info.type;
 
+	frame->pc = ptrauth_strip_insn_pac(frame->pc);
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (tsk->ret_stack &&
 			(frame->pc == (unsigned long)return_to_handler)) {
-- 
2.7.4




* [RFC PATCH 09/11] arm64: suspend: restore the kernel ptrauth keys
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
                   ` (7 preceding siblings ...)
  2019-10-17  8:14 ` [PATCH 08/11] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-17  8:14 ` [RFC PATCH 10/11] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

This patch restores the kernel ptrauth keys from the current task during
CPU resume, after the MMU has been turned on and ptrauth enabled.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - New patch to support cpu suspend/resume.

 arch/arm64/kernel/sleep.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index f5b04dd..476e110 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -2,6 +2,7 @@
 #include <linux/errno.h>
 #include <linux/linkage.h>
 #include <asm/asm-offsets.h>
+#include <asm/asm_pointer_auth.h>
 #include <asm/assembler.h>
 
 	.text
@@ -137,6 +138,11 @@ ENTRY(_cpu_resume)
 	bl	kasan_unpoison_task_stack_below
 #endif
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+	get_current_task x1
+	ptrauth_keys_install_kernel x1, x2, x3, x4
+#endif
+
 	ldp	x19, x20, [x29, #16]
 	ldp	x21, x22, [x29, #32]
 	ldp	x23, x24, [x29, #48]
-- 
2.7.4




* [RFC PATCH 10/11] arm64: mask PAC bits of __builtin_return_address
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
                   ` (8 preceding siblings ...)
  2019-10-17  8:14 ` [RFC PATCH 09/11] arm64: suspend: restore the kernel ptrauth keys Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-17  8:14 ` [PATCH 11/11] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
  2019-10-23 17:31 ` [PATCH 00/11] arm64: " James Morse
  11 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

This patch redefines __builtin_return_address to mask the PAC bits when
pointer authentication is enabled. As __builtin_return_address is mostly
used to refer to the symbol address of the calling function, masking the
runtime-generated PAC bits gives the user a meaningful address.

This change fixes utilities such as cat /proc/vmallocinfo so that they
show correct symbol information.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - New patch

 arch/arm64/Kconfig                |  1 +
 arch/arm64/include/asm/compiler.h | 15 +++++++++++++++
 2 files changed, 16 insertions(+)
 create mode 100644 arch/arm64/include/asm/compiler.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 253e3c5..8e86f6a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -118,6 +118,7 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_COMPILER_H
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
new file mode 100644
index 0000000..229efca
--- /dev/null
+++ b/arch/arm64/include/asm/compiler.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ARM_COMPILER_H
+#define __ASM_ARM_COMPILER_H
+
+#ifndef __ASSEMBLY__
+
+#if defined(CONFIG_ARM64_PTR_AUTH)
+#define __builtin_return_address(val)				\
+	(void *)((unsigned long)__builtin_return_address(val) |	\
+	(GENMASK(63, 56) | GENMASK(54, VA_BITS)))
+#endif
+
+#endif
+
+#endif /* __ASM_ARM_COMPILER_H */
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 11/11] arm64: compile the kernel with ptrauth return address signing
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
                   ` (9 preceding siblings ...)
  2019-10-17  8:14 ` [RFC PATCH 10/11] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
@ 2019-10-17  8:14 ` Amit Daniel Kachhap
  2019-10-23 17:31 ` [PATCH 00/11] arm64: " James Morse
  11 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-17  8:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

Compile all functions with two ptrauth instructions: PACIASP in the
prologue to sign the return address, and AUTIASP in the epilogue to
authenticate the return address (from the stack). If authentication
fails, the return will cause an instruction abort to be taken, followed
by an oops and killing the task.

This should help protect the kernel against attacks using
return-oriented programming. As ptrauth protects the return address, it
can also serve as a replacement for CONFIG_STACKPROTECTOR, although note
that it does not protect other parts of the stack.

The new instructions are in the HINT encoding space, so on a system
without ptrauth they execute as NOPs.
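For reference, a function built with return-address signing typically gains
a prologue/epilogue pair like the following (illustrative AArch64 assembly,
not taken from the patch):

```asm
foo:
	paciasp				// sign x30 (LR), APIAKey + SP as modifier
	stp	x29, x30, [sp, #-16]!	// the signed LR is what gets spilled
	...				// function body
	ldp	x29, x30, [sp], #16
	autiasp				// authenticate x30; a corrupted PAC
	ret				// makes the return fault
```

On CPUs without ptrauth both instructions fall in the HINT space and execute
as NOPs, so the same kernel image still boots.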

CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
guests, but also automatically builds the kernel with ptrauth
instructions if the compiler supports it. If there is no compiler
support, we do not warn that the kernel was built without ptrauth
instructions.

GCC 7 and 8 support the -msign-return-address option, while GCC 9
deprecates that option and replaces it with -mbranch-protection. Support
both options.
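The two Kconfig probes behave like the following stand-alone sketch of
$(cc-option ...); the helper name and the use of ${CC:-gcc} are assumptions
made for illustration:

```shell
# Try the GCC 9+ flag first, fall back to the GCC 7/8 spelling,
# otherwise build without return address signing.
cc_option() {
    echo 'int main(void) { return 0; }' | \
        "${CC:-gcc}" -Werror "$1" -x c -c -o /dev/null - 2>/dev/null
}

if cc_option -mbranch-protection=pac-ret+leaf; then
    pac_flag='-mbranch-protection=pac-ret+leaf'
elif cc_option -msign-return-address=all; then
    pac_flag='-msign-return-address=all'
else
    pac_flag=''            # no compiler support: kernel built without ptrauth
fi
echo "PAC flags: ${pac_flag:-none}"
```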

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since RFC v2:
 - Removed function attribute definition [Suzuki]
 - Added comments about GCC versions to Kconfig symbols [Kees]
 - Added a note in Kconfig (and the commit message) about STACKPROTECTOR
 - Changed GCC option to protect all functions with ptrauth, not just non-leaf functions.

 arch/arm64/Kconfig  | 16 +++++++++++++++-
 arch/arm64/Makefile |  6 ++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 8e86f6a..f83db93 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1399,11 +1399,17 @@ config ARM64_PTR_AUTH
 	  and other attacks.
 
 	  This option enables these instructions at EL0 (i.e. for userspace).
-
 	  Choosing this option will cause the kernel to initialise secret keys
 	  for each process at exec() time, with these keys being
 	  context-switched along with the process.
 
+	  If the compiler supports the -mbranch-protection or
+	  -msign-return-address flag (e.g. GCC 7 or later), then this option
+	  will also cause the kernel itself to be compiled with return address
+	  protection. In this case, and if the target hardware is known to
+	  support pointer authentication, then CONFIG_STACKPROTECTOR can be
+	  disabled with minimal loss of protection.
+
 	  The feature is detected at runtime. If the feature is not present in
 	  hardware it will not be advertised to userspace/KVM guest nor will it
 	  be enabled. However, KVM guest also require VHE mode and hence
@@ -1413,6 +1419,14 @@ config ARM64_PTR_AUTH
 	  then the secondary CPU will be offlined. On such a system, this
 	  option should not be selected.
 
+config CC_HAS_BRANCH_PROT_PAC_RET
+	# GCC 9 or later
+	def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)
+
+config CC_HAS_SIGN_RETURN_ADDRESS
+	# GCC 7, 8
+	def_bool $(cc-option,-msign-return-address=all)
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 84a3d50..4dbe86a 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -88,6 +88,12 @@ stack_protector_prepare: prepare0
 					include/generated/asm-offsets.h))
 endif
 
+ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
+pac-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
+pac-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
+KBUILD_CFLAGS += $(pac-flags-y)
+endif
+
 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
 KBUILD_CPPFLAGS	+= -mbig-endian
 CHECKFLAGS	+= -D__AARCH64EB__
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH 03/11] arm64: cpufeature: handle conflicts based on capability
  2019-10-17  8:14 ` [PATCH 03/11] arm64: cpufeature: handle conflicts based on capability Amit Daniel Kachhap
@ 2019-10-17 14:06   ` Suzuki K Poulose
  2019-10-18  9:59     ` Amit Kachhap
  0 siblings, 1 reply; 25+ messages in thread
From: Suzuki K Poulose @ 2019-10-17 14:06 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin

Hi Amit,

On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> Each system capability can be of either boot, local, or system scope,
> depending on when the state of the capability is finalized. When we
> detect a conflict on a late CPU, we either offline the CPU or panic the
> system. We currently always panic if the conflict is caused by a boot
> scope capability, and offline the CPU if the conflict is caused by a
> local or system scope capability.
> 
> We're going to want to add a new capability (for pointer authentication)
> which needs to be boot scope but doesn't need to panic the system when a
> conflict is detected. So add a new flag to specify whether the
> capability requires the system to panic or not. Current boot scope
> capabilities are updated to set the flag, so there should be no
> functional change as a result of this patch.
> 
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---

With the early parking of the CPUs without ptr_auth, I believe this
change is not needed, as long as we use ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU
flag for the ptr_auth capability. Moreover, we may want to retain
the "panic" situation, as we don't expect a secondary CPU to turn
up without the ptr_auth and have a "conflict".

Cheers
Suzuki


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 05/11] arm64: enable ptrauth earlier
  2019-10-17  8:14 ` [PATCH 05/11] arm64: enable ptrauth earlier Amit Daniel Kachhap
@ 2019-10-17 14:13   ` Suzuki K Poulose
  2019-10-18 10:07     ` Amit Kachhap
  2019-10-23 17:32   ` James Morse
  1 sibling, 1 reply; 25+ messages in thread
From: Suzuki K Poulose @ 2019-10-17 14:13 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin


Hi Amit,

On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> When the kernel is compiled with pointer auth instructions, the boot CPU
> needs to start using address auth very early, so change the cpucap to
> account for this.
> 
> Pointer auth must be enabled before we call C functions, because it is
> not possible to enter a function with pointer auth disabled and exit it
> with pointer auth enabled. Note, mismatches between architected and
> IMPDEF algorithms will still be caught by the cpufeature framework (the
> separate *_ARCH and *_IMP_DEF cpucaps).
> 
> Note the change in behavior: if the boot CPU has address auth and a late
> CPU does not, then we offline the late CPU. Until now we would have just
> disabled address auth in this case.
> 
> Leave generic authentication as a "system scope" cpucap for now, since
> initially the kernel will only use address authentication.
> 
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since RFC v2:
>   - Moved early enabling from C to assembly, and no longer use the pointer auth
>     C function attribute [Suzuki]
> 
>   arch/arm64/Kconfig                  |  4 ++++
>   arch/arm64/include/asm/cpufeature.h |  9 +++++++
>   arch/arm64/include/asm/smp.h        |  1 +
>   arch/arm64/kernel/cpufeature.c      | 13 +++-------
>   arch/arm64/kernel/head.S            | 48 +++++++++++++++++++++++++++++++++++++
>   arch/arm64/kernel/smp.c             |  2 ++
>   6 files changed, 67 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 41a9b42..253e3c5 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1408,6 +1408,10 @@ config ARM64_PTR_AUTH
>   	  be enabled. However, KVM guest also require VHE mode and hence
>   	  CONFIG_ARM64_VHE=y option to use this feature.
>   
> +	  If the feature is present on the primary CPU but not a secondary CPU,
> +	  then the secondary CPU will be offlined. On such a system, this
> +	  option should not be selected.

We don't offline the CPU, but simply park them. You may want to update this to
reflect the reality.

> +
>   endmenu
>   
>   config ARM64_SVE
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 011a665..5d61749 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -291,6 +291,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>   #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
>   	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
>   
> +/*
> + * CPU feature used early in the boot based on the boot CPU. It is safe for a
> + * late CPU to have this feature even though the boot CPU hasn't enabled it,
> + * although the feature will not be used by Linux in this case. If the boot CPU
> + * has enabled this feature already, then every late CPU must have it.
> + */
> +#define ARM64_CPUCAP_BOOT_CPU_FEATURE			\
> +	 (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)

As mentioned in the previous patch, I think this must panic the system if ever a
CPU turns up without the ptr_auth.

Otherwise looks fine to me.

Cheers
Suzuki


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 03/11] arm64: cpufeature: handle conflicts based on capability
  2019-10-17 14:06   ` Suzuki K Poulose
@ 2019-10-18  9:59     ` Amit Kachhap
  0 siblings, 0 replies; 25+ messages in thread
From: Amit Kachhap @ 2019-10-18  9:59 UTC (permalink / raw)
  To: Suzuki Poulose, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave P Martin



On 10/17/19 7:36 PM, Suzuki K Poulose wrote:
> Hi Amit,
>
> On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
>> From: Kristina Martsenko <kristina.martsenko@arm.com>
>>
>> Each system capability can be of either boot, local, or system scope,
>> depending on when the state of the capability is finalized. When we
>> detect a conflict on a late CPU, we either offline the CPU or panic the
>> system. We currently always panic if the conflict is caused by a boot
>> scope capability, and offline the CPU if the conflict is caused by a
>> local or system scope capability.
>>
>> We're going to want to add a new capability (for pointer authentication)
>> which needs to be boot scope but doesn't need to panic the system when a
>> conflict is detected. So add a new flag to specify whether the
>> capability requires the system to panic or not. Current boot scope
>> capabilities are updated to set the flag, so there should be no
>> functional change as a result of this patch.
>>
>> Reviewed-by: Kees Cook <keescook@chromium.org>
>> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>
> With the early parking of the CPUs without ptr_auth, I believe this
> change is not needed, as long as we use ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU
> flag for the ptr_auth capability. Moreover, we may want to retain
> the "panic" situation, as we don't expect a secondary CPU to turn
> up without the ptr_auth and have a "conflict".
Yes, you are right that this patch is not required, and panic is better
for the error scenario. I will drop it in the next iteration.

Thanks,
Amit
>
> Cheers
> Suzuki

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 05/11] arm64: enable ptrauth earlier
  2019-10-17 14:13   ` Suzuki K Poulose
@ 2019-10-18 10:07     ` Amit Kachhap
  0 siblings, 0 replies; 25+ messages in thread
From: Amit Kachhap @ 2019-10-18 10:07 UTC (permalink / raw)
  To: Suzuki Poulose, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave P Martin

Hi,

On 10/17/19 7:43 PM, Suzuki K Poulose wrote:
>
> Hi Amit,
>
> On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
>> From: Kristina Martsenko <kristina.martsenko@arm.com>
>>
>> When the kernel is compiled with pointer auth instructions, the boot CPU
>> needs to start using address auth very early, so change the cpucap to
>> account for this.
>>
>> Pointer auth must be enabled before we call C functions, because it is
>> not possible to enter a function with pointer auth disabled and exit it
>> with pointer auth enabled. Note, mismatches between architected and
>> IMPDEF algorithms will still be caught by the cpufeature framework (the
>> separate *_ARCH and *_IMP_DEF cpucaps).
>>
>> Note the change in behavior: if the boot CPU has address auth and a late
>> CPU does not, then we offline the late CPU. Until now we would have just
>> disabled address auth in this case.
>>
>> Leave generic authentication as a "system scope" cpucap for now, since
>> initially the kernel will only use address authentication.
>>
>> Reviewed-by: Kees Cook <keescook@chromium.org>
>> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Changes since RFC v2:
>>   - Moved early enabling from C to assembly, and no longer use the
>> pointer auth
>>     C function attribute [Suzuki]
>>
>>   arch/arm64/Kconfig                  |  4 ++++
>>   arch/arm64/include/asm/cpufeature.h |  9 +++++++
>>   arch/arm64/include/asm/smp.h        |  1 +
>>   arch/arm64/kernel/cpufeature.c      | 13 +++-------
>>   arch/arm64/kernel/head.S            | 48
>> +++++++++++++++++++++++++++++++++++++
>>   arch/arm64/kernel/smp.c             |  2 ++
>>   6 files changed, 67 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 41a9b42..253e3c5 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -1408,6 +1408,10 @@ config ARM64_PTR_AUTH
>>         be enabled. However, KVM guest also require VHE mode and hence
>>         CONFIG_ARM64_VHE=y option to use this feature.
>> +      If the feature is present on the primary CPU but not a
>> secondary CPU,
>> +      then the secondary CPU will be offlined. On such a system, this
>> +      option should not be selected.
>
> We don't offline the CPU, but simply park them. You may want to update
> this to
> reflect the reality.
Yes, agreed. I will do it in the next iteration and in all other places
where offline is mentioned.
>
>> +
>>   endmenu
>>   config ARM64_SVE
>> diff --git a/arch/arm64/include/asm/cpufeature.h
>> b/arch/arm64/include/asm/cpufeature.h
>> index 011a665..5d61749 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -291,6 +291,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>>   #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE        \
>>       (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
>> +/*
>> + * CPU feature used early in the boot based on the boot CPU. It is
>> safe for a
>> + * late CPU to have this feature even though the boot CPU hasn't
>> enabled it,
>> + * although the feature will not be used by Linux in this case. If
>> the boot CPU
>> + * has enabled this feature already, then every late CPU must have it.
>> + */
>> +#define ARM64_CPUCAP_BOOT_CPU_FEATURE            \
>> +     (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
>
> As mentioned in the previous patch, I think this must panic the system
> if ever a
> CPU turns up without the ptr_auth.
Yes. Makes sense.
>
> Otherwise looks fine to me.
Thanks.
Amit
>
> Cheers
> Suzuki

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 00/11] arm64: return address signing
  2019-10-17  8:14 [PATCH 00/11] arm64: return address signing Amit Daniel Kachhap
                   ` (10 preceding siblings ...)
  2019-10-17  8:14 ` [PATCH 11/11] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
@ 2019-10-23 17:31 ` James Morse
  2019-10-30  3:59   ` Amit Daniel Kachhap
  11 siblings, 1 reply; 25+ messages in thread
From: James Morse @ 2019-10-23 17:31 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin

Hi Amit,

On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
> This series improves function return address protection for the arm64 kernel, by
> compiling the kernel with ARMv8.3 Pointer Authentication instructions (ptrauth
> referred hereafter). This should help protect the kernel against attacks using
> return-oriented programming.
> 
> Patch 9 and 10 are newly added and hence sent as RFC.

Please don't mix 'RFC' in a series. If one patch is RFC, the whole series should be marked
like that, including the cover letter. git format-patch's '--rfc' option will do this for
you.

If this is 'v3', please mark all the patches 'v3' too. Adding '-v 3' to git format-patch
will do this for you.


> High-level changes since RFC v2 [1] (detailed changes are listed in patches):
>  - Moved enabling, key setup and context switch to assembly, to avoid using
>    the pointer auth compiler attribute which Clang does not support (thanks
>    Suzuki for the initial code!).
>  - Added code to restore keys after cpu resume.
>  - __builtin_return_address will now mask pac bits.
>  - Changed gcc compiler options to add ptrauth instructions in all functions
>    and not just non-leaf functions. This may be revisited later due to 
>    performance concerns.
>  - Rebased onto v5.4-rc2.
>  - Added Reviewed-by's.

> This series do not implement few things or have known limitations:
>  - ftrace function tracer does not work with this series. But after using
>    the posted series [2] based on -fpatchable-function-entry, it works fine.
>  - kprobes/uprobes and other tracing may need some rework with ptrauth.
>  - kdump, other debug may need some rework with ptrauth.
>  - Generate some randomness for ptrauth keys during kernel early booting.

Its good to have this list in the cover letter. (thanks!)

Could you expand on the kprobes point? Is it emulating/stepping the ptrauth instructions,
or stuff like kretprobes, that overwrite the lr? (arch_prepare_kretprobe()).
(or both!)

SDEI (firmware assisted NMI) may be called with the user-keys, kernel-keys, or
half-way-through switching keys. I don't think this is a problem, it just means the key in
use is unknown.


Thanks,

James


> [1] https://lore.kernel.org/linux-arm-kernel/20190529190332.29753-1-kristina.martsenko@arm.com/
> [2] https://patchwork.kernel.org/patch/10803279/


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 05/11] arm64: enable ptrauth earlier
  2019-10-17  8:14 ` [PATCH 05/11] arm64: enable ptrauth earlier Amit Daniel Kachhap
  2019-10-17 14:13   ` Suzuki K Poulose
@ 2019-10-23 17:32   ` James Morse
  2019-10-30  4:01     ` Amit Daniel Kachhap
  1 sibling, 1 reply; 25+ messages in thread
From: James Morse @ 2019-10-23 17:32 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin

Hi Amit,

On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> When the kernel is compiled with pointer auth instructions, the boot CPU
> needs to start using address auth very early, so change the cpucap to
> account for this.
> 
> Pointer auth must be enabled before we call C functions, because it is
> not possible to enter a function with pointer auth disabled and exit it
> with pointer auth enabled. Note, mismatches between architected and
> IMPDEF algorithms will still be caught by the cpufeature framework (the
> separate *_ARCH and *_IMP_DEF cpucaps).
> 
> Note the change in behavior: if the boot CPU has address auth and a late
> CPU does not, then we offline the late CPU. Until now we would have just
> disabled address auth in this case.
> 
> Leave generic authentication as a "system scope" cpucap for now, since
> initially the kernel will only use address authentication.


> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index e58e5975..157c811 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -13,6 +13,7 @@
>  #include <linux/init.h>
>  #include <linux/irqchip/arm-gic-v3.h>
>  
> +#include <asm/alternative.h>
>  #include <asm/assembler.h>
>  #include <asm/boot.h>
>  #include <asm/ptrace.h>
> @@ -119,6 +120,8 @@ ENTRY(stext)
>  	 * the TCR will have been set.
>  	 */
>  	bl	__cpu_setup			// initialise processor
> +	mov	x1, #1
> +	bl	__ptrauth_setup
>  	b	__primary_switch
>  ENDPROC(stext)
>  
> @@ -713,6 +716,8 @@ secondary_startup:
>  	 */
>  	bl	__cpu_secondary_check52bitva
>  	bl	__cpu_setup			// initialise processor
> +	mov	x1, #0
> +	bl	__ptrauth_setup
>  	adrp	x1, swapper_pg_dir
>  	bl	__enable_mmu
>  	ldr	x8, =__secondary_switched

__cpu_setup creates the SCTLR_EL1 value for us, it already reads ID registers for stuff
like AFDBM. It seems odd that you don't do the ptrauth check in there.

Not putting it in __cpu_setup means you've missed the other caller: sleep.S's cpu_resume,
which brings the wakeup CPU back as if it were a secondary. (although the value set at
boot will be restored in _cpu_resume).


It looks like you only need this to be separate to pass in the primary/secondary flag, as
__ptrauth_setup has to work with 3 cases: the boot-CPU and secondary CPUs that must have
the feature, or can ignore the feature. Three cases with one alternative isn't possible.

Could we pull the '__cpu_secondary_checkptrauth' out, and run it earlier? This means the
setup call doesn't need to consider secondary CPUs that don't support ptrauth. (and it
matches what we do for 52bit support)

I think passing primary-boot-cpu into __cpu_setup is something we are going to need for
other features, so it makes sense to add it as a preparatory patch.

Now the setup call can enable the feature if its supported and we are the boot cpu.
If the feature is discovered, cpufeature can change that code to enable it unconditionally
as we know secondaries without support will be caught in __cpu_secondary_checkptrauth.

I think this would be simpler, but the proof is in the writing... what do you think?


Thanks,

James

> @@ -832,6 +837,49 @@ __no_granule_support:
>  	early_park_cpu
>  ENDPROC(__no_granule_support)
>  
> +/*
> + * Enable pointer authentication.
> + *   x0 = SCTLR_EL1
> + *   x1 = 1 for primary, 0 for secondary
> + */
> +__ptrauth_setup:
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +	/* Check if the CPU supports ptrauth */
> +	mrs	x2, id_aa64isar1_el1
> +	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
> +	cbz	x2, 2f
> +
> +	/* x2 = system_supports_address_auth() */
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	mov	x2, 1
> +alternative_else
> +	mov	x2, 0
> +alternative_endif
> +	orr	x2, x2, x1	// primary || system_supports_address_auth()
> +	cbz	x2, 3f
> +
> +	/* Enable ptrauth instructions */
> +	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
> +		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
> +	orr	x0, x0, x2
> +	b	3f
> +
> +2:	/* No ptrauth support */
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	b	4f
> +alternative_else_nop_endif
> +3:
> +#endif
> +	ret
> +
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +4:	/* Park the secondary CPU */
> +	update_early_cpu_boot_status \
> +		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_PTRAUTH, x0, x1
> +	early_park_cpu
> +#endif
> +ENDPROC(__ptrauth_setup)
> +
>  #ifdef CONFIG_RELOCATABLE
>  __relocate_kernel:
>  	/*


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 07/11] arm64: initialize and switch ptrauth kernel keys
  2019-10-17  8:14 ` [PATCH 07/11] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
@ 2019-10-23 17:35   ` James Morse
  2019-10-30  4:02     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 25+ messages in thread
From: James Morse @ 2019-10-23 17:35 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

Hi Amit,

On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> Set up keys to use pointer authentication within the kernel. The kernel
> will be compiled with APIAKey instructions, the other keys are currently
> unused. Each task is given its own APIAKey, which is initialized during
> fork. The key is changed during context switch and on kernel entry from
> EL0.
> 
> The keys for idle threads need to be set before calling any C functions,
> because it is not possible to enter and exit a function with different
> keys.


> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 8ec792d..c12c98d 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -145,6 +145,7 @@ struct thread_struct {
>  	struct debug_info	debug;		/* debugging */
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	struct ptrauth_keys_user	keys_user;
> +	struct ptrauth_keys_kernel	keys_kernel;
>  #endif
>  };
>  
> diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
> index 46e2b05..2294e93 100644
> --- a/arch/arm64/include/asm/smp.h
> +++ b/arch/arm64/include/asm/smp.h
> @@ -88,6 +88,8 @@ asmlinkage void secondary_start_kernel(void);
>  struct secondary_data {
>  	void *stack;
>  	struct task_struct *task;
> +	unsigned long ptrauth_key_lo;
> +	unsigned long ptrauth_key_hi;

How come this isn't a struct ptrauth_keys_kernel, like the thread struct?
Everywhere else you ldp it as one 128-bit quantity; here it's split into lo/hi
and the halves are loaded independently.

I think this is safe for big-endian... but it hurts to think about it.

I'd like the lo/hi not to matter, its a randomly generated key, as long as we always read
it from memory and write it to the CPU registers in the same way, it doesn't matter which
bits end up in the hi/lo registers.


>  	long status;
>  };
>  

> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 157c811..e518511 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -858,6 +858,16 @@ alternative_endif
>  	orr	x2, x2, x1	// primary || system_supports_address_auth()
>  	cbz	x2, 3f
>  
> +	/* Install ptrauth key */
> +	mov	x3, xzr
> +	mov	x4, xzr
> +	cbnz	x1, 1f
> +	adr_l	x2, secondary_data
> +	ldr	x3, [x2, CPU_BOOT_PTRAUTH_KEY_LO]
> +	ldr	x4, [x2, CPU_BOOT_PTRAUTH_KEY_HI]
> +1:	msr_s	SYS_APIAKEYLO_EL1, x3
> +	msr_s	SYS_APIAKEYHI_EL1, x4

Why do we store zero to those registers if !system_supports_address_auth()?
Is this for the boot CPU?

It would be good to keep the code accessing secondary_data together. The boot-cpu's values
are always going to come from somewhere funny, so it makes sense to keep the secondarys
separate.

(We test values from secondary_data in case the cpu-online code has given up on us. I
don't think we can test the PTRAUTH_KEY against 0 for too-slow secondaries, as 0 may be
the random key we generated).

This assembly function is a bit of a swiss-army-knife, it does everything to do with
ptrauth. The code-patching makes it difficult to work out what happens. In general I think
it would be simpler if these functions did one thing, even if there ends up being more
code overall.


Thanks,

James


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 08/11] arm64: unwind: strip PAC from kernel addresses
  2019-10-17  8:14 ` [PATCH 08/11] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
@ 2019-10-23 17:36   ` James Morse
  2019-10-30  4:02     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 25+ messages in thread
From: James Morse @ 2019-10-23 17:36 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin

Hi Amit,

On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> When we enable pointer authentication in the kernel, LR values saved to
> the stack will have a PAC which we must strip in order to retrieve the
> real return address.
> 
> Strip PACs when unwinding the stack in order to account for this.
> 
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Sign-off chain Nit:
These Signed-off-by are supposed to be a chain of who handled the patch before it got to
Linus' tree. The first entry should match the 'From', the last should match the person
posting the patch.


I suspect the __builtin_return_address() patch should appear before this one, as
start_backtrace() callers pass that in as the first 'pc' value.

> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index 599dd09..a75dc89 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -59,12 +59,15 @@ extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>   * The EL0 pointer bits used by a pointer authentication code.
>   * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.

It might be worth updating the comment now we have the kernel version too.


>   */
> -#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
> +#define ptrauth_user_pac_mask()		GENMASK(54, vabits_actual)
> +#define ptrauth_kernel_pac_mask()	(GENMASK(63, 56) | GENMASK(54, VA_BITS))

(I see everywhere else we use GENMASK_ULL() for >32 bit values. It seems to work without it)


> -/* Only valid for EL0 TTBR0 instruction pointers */

Hmm, I suspect this is because the pseudocode's AArch64.BranchAddr removes Tags and PAC.
If you get a value from the LR, it should have been a PC, so it can't have a tag. It might
have been signed, so has a PAC that this function removes.

If you gave this a Tagged pointer, it would keep the tag. Is that intended?
(If not, can we fix the comment instead of removing it.)


>  static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  {
> -	return ptr & ~ptrauth_user_pac_mask();
> +	if (ptr & BIT_ULL(55))
> +		return ptr | ptrauth_kernel_pac_mask();
> +	else
> +		return ptr & ~ptrauth_user_pac_mask();
>  }
>  
>  #define ptrauth_thread_init_user(tsk)					\
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index a336cb1..49eb1c3 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -14,6 +14,7 @@
>  #include <linux/stacktrace.h>
>  
>  #include <asm/irq.h>
> +#include <asm/pointer_auth.h>
>  #include <asm/stack_pointer.h>
>  #include <asm/stacktrace.h>
>  
> @@ -84,6 +85,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
>  	frame->prev_fp = fp;
>  	frame->prev_type = info.type;
>  
> +	frame->pc = ptrauth_strip_insn_pac(frame->pc);

Could this be against the frame->pc assignment? (It's evidently far enough away that diff
would trim this line out if someone adds something just after!)


Do you need to fixup __show_regs()? This reads regs->regs[30], and passes it to printk()s
%pS which will try to find the entry in kallsyms.


Thanks,

James


* Re: [PATCH 06/11] arm64: rename ptrauth key structures to be user-specific
  2019-10-17  8:14 ` [PATCH 06/11] arm64: rename ptrauth key structures to be user-specific Amit Daniel Kachhap
@ 2019-10-29 23:27   ` Kees Cook
  0 siblings, 0 replies; 25+ messages in thread
From: Kees Cook @ 2019-10-29 23:27 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Ard Biesheuvel,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

On Thu, Oct 17, 2019 at 01:44:20PM +0530, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> We currently enable ptrauth for userspace, but do not use it within the
> kernel. We're going to enable it for the kernel, and will need to manage
> a separate set of ptrauth keys for the kernel.
> 
> We currently keep all 5 keys in struct ptrauth_keys. However, as the
> kernel will only need to use 1 key, it is a bit wasteful to allocate a
> whole ptrauth_keys struct for every thread.
> 
> Therefore, a subsequent patch will define a separate struct, with only 1
> key, for the kernel. In preparation for that, rename the existing struct
> (and associated macros and functions) to reflect that they are specific
> to userspace.
> 
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>

This appears very mechanical; easy to review! ;)

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since RFC v2:
>  - New patch in this version, to optimize struct usage [Dave]
> 
>  arch/arm64/include/asm/asm_pointer_auth.h | 10 +++++-----
>  arch/arm64/include/asm/pointer_auth.h     |  6 +++---
>  arch/arm64/include/asm/processor.h        |  2 +-
>  arch/arm64/kernel/asm-offsets.c           | 10 +++++-----
>  arch/arm64/kernel/pointer_auth.c          |  4 ++--
>  arch/arm64/kernel/ptrace.c                | 16 ++++++++--------
>  6 files changed, 24 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> index cb21a06..3d39788 100644
> --- a/arch/arm64/include/asm/asm_pointer_auth.h
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -15,21 +15,21 @@
>  alternative_if_not ARM64_HAS_ADDRESS_AUTH
>  	b	.Laddr_auth_skip_\@
>  alternative_else_nop_endif
> -	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
>  	msr_s	SYS_APIAKEYLO_EL1, \tmp2
>  	msr_s	SYS_APIAKEYHI_EL1, \tmp3
> -	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
>  	msr_s	SYS_APIBKEYLO_EL1, \tmp2
>  	msr_s	SYS_APIBKEYHI_EL1, \tmp3
> -	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA]
>  	msr_s	SYS_APDAKEYLO_EL1, \tmp2
>  	msr_s	SYS_APDAKEYHI_EL1, \tmp3
> -	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB]
>  	msr_s	SYS_APDBKEYLO_EL1, \tmp2
>  	msr_s	SYS_APDBKEYHI_EL1, \tmp3
>  .Laddr_auth_skip_\@:
>  alternative_if ARM64_HAS_GENERIC_AUTH
> -	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA]
>  	msr_s	SYS_APGAKEYLO_EL1, \tmp2
>  	msr_s	SYS_APGAKEYHI_EL1, \tmp3
>  alternative_else_nop_endif
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index 21c2115..cc42145 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -22,7 +22,7 @@ struct ptrauth_key {
>   * We give each process its own keys, which are shared by all threads. The keys
>   * are inherited upon fork(), and reinitialised upon exec*().
>   */
> -struct ptrauth_keys {
> +struct ptrauth_keys_user {
>  	struct ptrauth_key apia;
>  	struct ptrauth_key apib;
>  	struct ptrauth_key apda;
> @@ -30,7 +30,7 @@ struct ptrauth_keys {
>  	struct ptrauth_key apga;
>  };
>  
> -static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
> +static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
>  {
>  	if (system_supports_address_auth()) {
>  		get_random_bytes(&keys->apia, sizeof(keys->apia));
> @@ -58,7 +58,7 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  }
>  
>  #define ptrauth_thread_init_user(tsk)					\
> -	ptrauth_keys_init(&(tsk)->thread.keys_user)
> +	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 5623685..8ec792d 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -144,7 +144,7 @@ struct thread_struct {
>  	unsigned long		fault_code;	/* ESR_EL1 value */
>  	struct debug_info	debug;		/* debugging */
>  #ifdef CONFIG_ARM64_PTR_AUTH
> -	struct ptrauth_keys	keys_user;
> +	struct ptrauth_keys_user	keys_user;
>  #endif
>  };
>  
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index ef0c24b..cf15182 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -131,11 +131,11 @@ int main(void)
>    DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
>  #endif
>  #ifdef CONFIG_ARM64_PTR_AUTH
> -  DEFINE(PTRAUTH_KEY_APIA,	offsetof(struct ptrauth_keys, apia));
> -  DEFINE(PTRAUTH_KEY_APIB,	offsetof(struct ptrauth_keys, apib));
> -  DEFINE(PTRAUTH_KEY_APDA,	offsetof(struct ptrauth_keys, apda));
> -  DEFINE(PTRAUTH_KEY_APDB,	offsetof(struct ptrauth_keys, apdb));
> -  DEFINE(PTRAUTH_KEY_APGA,	offsetof(struct ptrauth_keys, apga));
> +  DEFINE(PTRAUTH_USER_KEY_APIA,		offsetof(struct ptrauth_keys_user, apia));
> +  DEFINE(PTRAUTH_USER_KEY_APIB,		offsetof(struct ptrauth_keys_user, apib));
> +  DEFINE(PTRAUTH_USER_KEY_APDA,		offsetof(struct ptrauth_keys_user, apda));
> +  DEFINE(PTRAUTH_USER_KEY_APDB,		offsetof(struct ptrauth_keys_user, apdb));
> +  DEFINE(PTRAUTH_USER_KEY_APGA,		offsetof(struct ptrauth_keys_user, apga));
>    BLANK();
>  #endif
>    return 0;
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index 95985be..1e77736 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -9,7 +9,7 @@
>  
>  int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  {
> -	struct ptrauth_keys *keys = &tsk->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &tsk->thread.keys_user;
>  	unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
>  				      PR_PAC_APDAKEY | PR_PAC_APDBKEY;
>  	unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
> @@ -18,7 +18,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  		return -EINVAL;
>  
>  	if (!arg) {
> -		ptrauth_keys_init(keys);
> +		ptrauth_keys_init_user(keys);
>  		return 0;
>  	}
>  
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index 21176d0..0793739 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -986,7 +986,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey)
>  }
>  
>  static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
> -				     const struct ptrauth_keys *keys)
> +				     const struct ptrauth_keys_user *keys)
>  {
>  	ukeys->apiakey = pac_key_to_user(&keys->apia);
>  	ukeys->apibkey = pac_key_to_user(&keys->apib);
> @@ -994,7 +994,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
>  	ukeys->apdbkey = pac_key_to_user(&keys->apdb);
>  }
>  
> -static void pac_address_keys_from_user(struct ptrauth_keys *keys,
> +static void pac_address_keys_from_user(struct ptrauth_keys_user *keys,
>  				       const struct user_pac_address_keys *ukeys)
>  {
>  	keys->apia = pac_key_from_user(ukeys->apiakey);
> @@ -1008,7 +1008,7 @@ static int pac_address_keys_get(struct task_struct *target,
>  				unsigned int pos, unsigned int count,
>  				void *kbuf, void __user *ubuf)
>  {
> -	struct ptrauth_keys *keys = &target->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &target->thread.keys_user;
>  	struct user_pac_address_keys user_keys;
>  
>  	if (!system_supports_address_auth())
> @@ -1025,7 +1025,7 @@ static int pac_address_keys_set(struct task_struct *target,
>  				unsigned int pos, unsigned int count,
>  				const void *kbuf, const void __user *ubuf)
>  {
> -	struct ptrauth_keys *keys = &target->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &target->thread.keys_user;
>  	struct user_pac_address_keys user_keys;
>  	int ret;
>  
> @@ -1043,12 +1043,12 @@ static int pac_address_keys_set(struct task_struct *target,
>  }
>  
>  static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys,
> -				     const struct ptrauth_keys *keys)
> +				     const struct ptrauth_keys_user *keys)
>  {
>  	ukeys->apgakey = pac_key_to_user(&keys->apga);
>  }
>  
> -static void pac_generic_keys_from_user(struct ptrauth_keys *keys,
> +static void pac_generic_keys_from_user(struct ptrauth_keys_user *keys,
>  				       const struct user_pac_generic_keys *ukeys)
>  {
>  	keys->apga = pac_key_from_user(ukeys->apgakey);
> @@ -1059,7 +1059,7 @@ static int pac_generic_keys_get(struct task_struct *target,
>  				unsigned int pos, unsigned int count,
>  				void *kbuf, void __user *ubuf)
>  {
> -	struct ptrauth_keys *keys = &target->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &target->thread.keys_user;
>  	struct user_pac_generic_keys user_keys;
>  
>  	if (!system_supports_generic_auth())
> @@ -1076,7 +1076,7 @@ static int pac_generic_keys_set(struct task_struct *target,
>  				unsigned int pos, unsigned int count,
>  				const void *kbuf, const void __user *ubuf)
>  {
> -	struct ptrauth_keys *keys = &target->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &target->thread.keys_user;
>  	struct user_pac_generic_keys user_keys;
>  	int ret;
>  
> -- 
> 2.7.4
> 

-- 
Kees Cook


* Re: [PATCH 00/11] arm64: return address signing
  2019-10-23 17:31 ` [PATCH 00/11] arm64: " James Morse
@ 2019-10-30  3:59   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-30  3:59 UTC (permalink / raw)
  To: James Morse, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin

Hi,

On 10/23/19 11:01 PM, James Morse wrote:
> Hi Amit,
> 
> On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
>> This series improves function return address protection for the arm64 kernel, by
>> compiling the kernel with ARMv8.3 Pointer Authentication instructions (ptrauth
>> referred hereafter). This should help protect the kernel against attacks using
>> return-oriented programming.
>>
>> Patch 9 and 10 are newly added and hence sent as RFC.
> 
> Please don't mix 'RFC' in a series. If one patch is RFC, the whole series should be marked
> like that, including the cover letter. git format-patch's '--rfc' option will do this for
> you.
> 
> If this is 'v3', please mark all the patches 'v3' too. Adding '-v 3' to git format-patch
> will do this for you.
Yes, sure. I will do it like this.
> 
> 
>> High-level changes since RFC v2 [1] (detailed changes are listed in patches):
>>   - Moved enabling, key setup and context switch to assembly, to avoid using
>>     the pointer auth compiler attribute which Clang does not support (thanks
>>     Suzuki for the initial code!).
>>   - Added code to restore keys after cpu resume.
>>   - __builtin_return_address will now mask pac bits.
>>   - Changed gcc compiler options to add ptrauth instructions in all functions
>>     and not just non-leaf functions. This may be revisited later due to
>>     performance concerns.
>>   - Rebased onto v5.4-rc2.
>>   - Added Reviewed-by's.
> 
>> This series do not implement few things or have known limitations:
>>   - ftrace function tracer does not work with this series. But after using
>>     the posted series [2] based on -fpatchable-function-entry, it works fine.
>>   - kprobes/uprobes and other tracing may need some rework with ptrauth.
>>   - kdump, other debug may need some rework with ptrauth.
>>   - Generate some randomness for ptrauth keys during kernel early booting.
> 
> Its good to have this list in the cover letter. (thanks!)
> 
> Could you expand on the kprobes point? Is it emulating/stepping the ptrauth instructions,
> or stuff like kretprobes, that overwrite the lr? (arch_prepare_kretprobe()).
> (or both!)
Yes, I should have expanded it here. Currently it is able to step both
PACIASP and AUTIASP instructions, as kprobes/kretprobes keep the same
register context. In the negative case, kretprobe may cause some issues.
I need to look more into it.
> 
> SDEI (firmware assisted NMI) may be called with the user-keys, kernel-keys, or
> half-way-through switching keys. I don't think this is a problem, it just means the key in
> use is unknown.
Thanks for pointing this out. Yes, ptrauth key save/restore may need to be
added in the SDEI handler. I will check more on it.

Thanks,
Amit Daniel
> 
> 
> Thanks,
> 
> James
> 
> 
>> [1] https://lore.kernel.org/linux-arm-kernel/20190529190332.29753-1-kristina.martsenko@arm.com/
>> [2] https://patchwork.kernel.org/patch/10803279/


* Re: [PATCH 05/11] arm64: enable ptrauth earlier
  2019-10-23 17:32   ` James Morse
@ 2019-10-30  4:01     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-30  4:01 UTC (permalink / raw)
  To: James Morse, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin

Hi,

On 10/23/19 11:02 PM, James Morse wrote:
> Hi Amit,
> 
> On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
>> From: Kristina Martsenko <kristina.martsenko@arm.com>
>>
>> When the kernel is compiled with pointer auth instructions, the boot CPU
>> needs to start using address auth very early, so change the cpucap to
>> account for this.
>>
>> Pointer auth must be enabled before we call C functions, because it is
>> not possible to enter a function with pointer auth disabled and exit it
>> with pointer auth enabled. Note, mismatches between architected and
>> IMPDEF algorithms will still be caught by the cpufeature framework (the
>> separate *_ARCH and *_IMP_DEF cpucaps).
>>
>> Note the change in behavior: if the boot CPU has address auth and a late
>> CPU does not, then we offline the late CPU. Until now we would have just
>> disabled address auth in this case.
>>
>> Leave generic authentication as a "system scope" cpucap for now, since
>> initially the kernel will only use address authentication.
> 
> 
>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
>> index e58e5975..157c811 100644
>> --- a/arch/arm64/kernel/head.S
>> +++ b/arch/arm64/kernel/head.S
>> @@ -13,6 +13,7 @@
>>   #include <linux/init.h>
>>   #include <linux/irqchip/arm-gic-v3.h>
>>   
>> +#include <asm/alternative.h>
>>   #include <asm/assembler.h>
>>   #include <asm/boot.h>
>>   #include <asm/ptrace.h>
>> @@ -119,6 +120,8 @@ ENTRY(stext)
>>   	 * the TCR will have been set.
>>   	 */
>>   	bl	__cpu_setup			// initialise processor
>> +	mov	x1, #1
>> +	bl	__ptrauth_setup
>>   	b	__primary_switch
>>   ENDPROC(stext)
>>   
>> @@ -713,6 +716,8 @@ secondary_startup:
>>   	 */
>>   	bl	__cpu_secondary_check52bitva
>>   	bl	__cpu_setup			// initialise processor
>> +	mov	x1, #0
>> +	bl	__ptrauth_setup
>>   	adrp	x1, swapper_pg_dir
>>   	bl	__enable_mmu
>>   	ldr	x8, =__secondary_switched
> 
> __cpu_setup creates the SCTLR_EL1 value for us, it already reads ID registers for stuff
> like AFDBM. It seems odd that you don't do the ptrauth check in there.
Yes, it makes sense to do the ptrauth setup before those steps.
> 
> Not putting it in __cpu_setup means you've missed the other caller: sleep.S's cpu_resume,
> which brings the wakeup CPU back as if it were a secondary. (although the value set at
> boot will be restored in _cpu_resume).
Yes sctlr_el1 is overridden later.
	
> 
> 
> It looks like you only need this to be separate to pass in the primary/secondary flag, as
> __ptrauth_setup has to work with 3 cases: the boot-CPU and secondary CPUs that must have
> the feature, or can ignore the feature. Three cases with one alternative isn't possible.
> 
> Could we pull the '__cpu_secondary_checkptrauth' out, and run it earlier? This means the
> setup call doesn't need to consider secondary CPUs that don't support ptrauth. (and it
> matches what we do for 52bit support)
OK, separating out __cpu_secondary_checkptrauth makes sense. I will add it
by bifurcating __ptrauth_setup.
> 
> I think passing primary-boot-cpu into __cpu_setup is something we are going to need for
> other features, so it makes sense to add it as a preparatory patch.
Setting a __cpu_setup flag for the boot CPU and secondary CPUs seems easy in
head.S, but the __cpu_setup flag in sleep.S is tricky, as some context is
already stored there and only the remaining context needs to be set. So maybe
three flags could be passed to __cpu_setup: primary-cpu-full-ctxt,
secondary-cpu-full-ctxt and cpu-partial-ctxt. In the ptrauth case, no change
would be required for the cpu-partial-ctxt flag. I will check more on this.
> 
> Now the setup call can enable the feature if its supported and we are the boot cpu.
> If the feature is discovered, cpufeature can change that code to enable it unconditionally
> as we know secondaries without support will be caught in __cpu_secondary_checkptrauth.
ok.
> 
> I think this would be simpler, but the proof is in the writing... what do you think?
It should be possible to implement it this way as well.

Thanks,
Amit
> 
> 
> Thanks,
> 
> James
> 
>> @@ -832,6 +837,49 @@ __no_granule_support:
>>   	early_park_cpu
>>   ENDPROC(__no_granule_support)
>>   
>> +/*
>> + * Enable pointer authentication.
>> + *   x0 = SCTLR_EL1
>> + *   x1 = 1 for primary, 0 for secondary
>> + */
>> +__ptrauth_setup:
>> +#ifdef CONFIG_ARM64_PTR_AUTH
>> +	/* Check if the CPU supports ptrauth */
>> +	mrs	x2, id_aa64isar1_el1
>> +	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
>> +	cbz	x2, 2f
>> +
>> +	/* x2 = system_supports_address_auth() */
>> +alternative_if ARM64_HAS_ADDRESS_AUTH
>> +	mov	x2, 1
>> +alternative_else
>> +	mov	x2, 0
>> +alternative_endif
>> +	orr	x2, x2, x1	// primary || system_supports_address_auth()
>> +	cbz	x2, 3f
>> +
>> +	/* Enable ptrauth instructions */
>> +	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
>> +		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
>> +	orr	x0, x0, x2
>> +	b	3f
>> +
>> +2:	/* No ptrauth support */
>> +alternative_if ARM64_HAS_ADDRESS_AUTH
>> +	b	4f
>> +alternative_else_nop_endif
>> +3:
>> +#endif
>> +	ret
>> +
>> +#ifdef CONFIG_ARM64_PTR_AUTH
>> +4:	/* Park the secondary CPU */
>> +	update_early_cpu_boot_status \
>> +		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_PTRAUTH, x0, x1
>> +	early_park_cpu
>> +#endif
>> +ENDPROC(__ptrauth_setup)
>> +
>>   #ifdef CONFIG_RELOCATABLE
>>   __relocate_kernel:
>>   	/*


* Re: [PATCH 07/11] arm64: initialize and switch ptrauth kernel keys
  2019-10-23 17:35   ` James Morse
@ 2019-10-30  4:02     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-30  4:02 UTC (permalink / raw)
  To: James Morse
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

Hi James,

On 10/23/19 11:05 PM, James Morse wrote:
> Hi Amit,
> 
> On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
>> From: Kristina Martsenko <kristina.martsenko@arm.com>
>>
>> Set up keys to use pointer authentication within the kernel. The kernel
>> will be compiled with APIAKey instructions, the other keys are currently
>> unused. Each task is given its own APIAKey, which is initialized during
>> fork. The key is changed during context switch and on kernel entry from
>> EL0.
>>
>> The keys for idle threads need to be set before calling any C functions,
>> because it is not possible to enter and exit a function with different
>> keys.
> 
> 
>> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
>> index 8ec792d..c12c98d 100644
>> --- a/arch/arm64/include/asm/processor.h
>> +++ b/arch/arm64/include/asm/processor.h
>> @@ -145,6 +145,7 @@ struct thread_struct {
>>   	struct debug_info	debug;		/* debugging */
>>   #ifdef CONFIG_ARM64_PTR_AUTH
>>   	struct ptrauth_keys_user	keys_user;
>> +	struct ptrauth_keys_kernel	keys_kernel;
>>   #endif
>>   };
>>   
>> diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
>> index 46e2b05..2294e93 100644
>> --- a/arch/arm64/include/asm/smp.h
>> +++ b/arch/arm64/include/asm/smp.h
>> @@ -88,6 +88,8 @@ asmlinkage void secondary_start_kernel(void);
>>   struct secondary_data {
>>   	void *stack;
>>   	struct task_struct *task;
>> +	unsigned long ptrauth_key_lo;
>> +	unsigned long ptrauth_key_hi;
> 
> How come this isn't a struct ptrauth_keys_kernel, like the thread struct?
> Everywhere else you ldp it as one 128 bit quantity, here its split into lo/hi and loaded
> independently.
It can be updated to struct ptrauth_keys_kernel.
> 
> I think this is safe for big-endian... but it hurts to think about it.
> 
> I'd like the lo/hi not to matter, its a randomly generated key, as long as we always read
> it from memory and write it to the CPU registers in the same way, it doesn't matter which
> bits end up in the hi/lo registers.
> 
> 
>>   	long status;
>>   };
>>   
> 
>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
>> index 157c811..e518511 100644
>> --- a/arch/arm64/kernel/head.S
>> +++ b/arch/arm64/kernel/head.S
>> @@ -858,6 +858,16 @@ alternative_endif
>>   	orr	x2, x2, x1	// primary || system_supports_address_auth()
>>   	cbz	x2, 3f
>>   
>> +	/* Install ptrauth key */
>> +	mov	x3, xzr
>> +	mov	x4, xzr
>> +	cbnz	x1, 1f
>> +	adr_l	x2, secondary_data
>> +	ldr	x3, [x2, CPU_BOOT_PTRAUTH_KEY_LO]
>> +	ldr	x4, [x2, CPU_BOOT_PTRAUTH_KEY_HI]
>> +1:	msr_s	SYS_APIAKEYLO_EL1, x3
>> +	msr_s	SYS_APIAKEYHI_EL1, x4
> 
> Why do we store zero to those registers if !system_supports_address_auth()?
> Is this for the boot CPU?
Yes. This is a limitation in the current implementation. Is it better to 
leave it uninitialized here?
> 
> It would be good to keep the code accessing secondary_data together. The boot-CPU's values
> are always going to come from somewhere funny, so it makes sense to keep the secondaries
> separate.
ok.
> 
> (We test values from secondary_data in case the cpu-online code has given up on us. I
> don't think we can test the PTRAUTH_KEY against 0 for too-slow secondaries, as 0 may be
> the random key we generated).
OK, thanks for the details. After looking at the code, it seems
secondary_data.ptrauth_key_lo/hi is not required, as
secondary_data.task.thread.keys_kernel.apia.lo can provide the same
information.
> 
> This assembly function is a bit of a swiss-army-knife, it does everything to do with
> ptrauth. The code-patching makes it difficult to work out what happens. In general I think
> it would be simpler if these functions did one thing, even if there ends up being more
> code overall.

OK, I will try to simplify them in the next iteration.

Thanks,
Amit
> 
> 
> Thanks,
> 
> James
> 


* Re: [PATCH 08/11] arm64: unwind: strip PAC from kernel addresses
  2019-10-23 17:36   ` James Morse
@ 2019-10-30  4:02     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 25+ messages in thread
From: Amit Daniel Kachhap @ 2019-10-30  4:02 UTC (permalink / raw)
  To: James Morse, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin

Hi James,

On 10/23/19 11:06 PM, James Morse wrote:
> Hi Amit,
> 
> On 17/10/2019 09:14, Amit Daniel Kachhap wrote:
>> From: Kristina Martsenko <kristina.martsenko@arm.com>
>>
>> When we enable pointer authentication in the kernel, LR values saved to
>> the stack will have a PAC which we must strip in order to retrieve the
>> real return address.
>>
>> Strip PACs when unwinding the stack in order to account for this.
>>
>> Reviewed-by: Kees Cook <keescook@chromium.org>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> 
> Sign-off chain Nit:
> These Signed-off-by are supposed to be a chain of who handled the patch before it got to
> Linus' tree. The first entry should match the 'From', the last should match the person
> posting the patch.
ok will do.
> 
> 
> I suspect the __builtin_return_address() patch should appear before this one, as
> start_backtrace() callers pass that in as the first 'pc' value.
> 
>> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
>> index 599dd09..a75dc89 100644
>> --- a/arch/arm64/include/asm/pointer_auth.h
>> +++ b/arch/arm64/include/asm/pointer_auth.h
>> @@ -59,12 +59,15 @@ extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>>    * The EL0 pointer bits used by a pointer authentication code.
>>    * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
> 
> It might be worth updating the comment now we have the kernel version too.
ok.
> 
> 
>>    */
>> -#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
>> +#define ptrauth_user_pac_mask()		GENMASK(54, vabits_actual)
>> +#define ptrauth_kernel_pac_mask()	(GENMASK(63, 56) | GENMASK(54, VA_BITS))
> 
> (I see everywhere else we use GENMASK_ULL() for >32 bit values. It seems to work without it)
ok.
> 
> 
>> -/* Only valid for EL0 TTBR0 instruction pointers */
> 
> Hmm, I suspect this is because the pseudocode's AArch64.BranchAddr removes Tags and PAC.
> If you get a value from the LR, it should have been a PC, so it can't have a tag. It might
> have been signed, so has a PAC that this function removes.
yes.
> 
> If you gave this a Tagged pointer, it would keep the tag. Is that intended?
> (If not, can we fix the comment instead of removing it.)
I will fix the comment.
> 
> 
>>   static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>>   {
>> -	return ptr & ~ptrauth_user_pac_mask();
>> +	if (ptr & BIT_ULL(55))
>> +		return ptr | ptrauth_kernel_pac_mask();
>> +	else
>> +		return ptr & ~ptrauth_user_pac_mask();
>>   }
>>   
>>   #define ptrauth_thread_init_user(tsk)					\
>> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
>> index a336cb1..49eb1c3 100644
>> --- a/arch/arm64/kernel/stacktrace.c
>> +++ b/arch/arm64/kernel/stacktrace.c
>> @@ -14,6 +14,7 @@
>>   #include <linux/stacktrace.h>
>>   
>>   #include <asm/irq.h>
>> +#include <asm/pointer_auth.h>
>>   #include <asm/stack_pointer.h>
>>   #include <asm/stacktrace.h>
>>   
>> @@ -84,6 +85,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
>>   	frame->prev_fp = fp;
>>   	frame->prev_type = info.type;
>>   
>> +	frame->pc = ptrauth_strip_insn_pac(frame->pc);
> 
> Could this be against the frame->pc assignment? (It's evidently far enough away that diff
> would trim this line out if someone adds something just after!)
Yes there is some re-assignment later. I will check this one.
> 
> 
> Do you need to fixup __show_regs()? This reads regs->regs[30], and passes it to printk()s
> %pS which will try to find the entry in kallsyms.
Good pointer. I will check it.

Thanks,
Amit Daniel

> 
> 
> Thanks,
> 
> James
> 
