All of lore.kernel.org
* [PATCH v2 00/14] arm64: return address signing
@ 2019-11-19 12:32 Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 01/14] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
                   ` (14 more replies)
  0 siblings, 15 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

Hi,

This series improves function return address protection for the arm64 kernel, by
compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
ptrauth hereafter). This should help protect the kernel against attacks using
return-oriented programming.

This series is based on v5.4-rc8.

High-level changes since v1 [1] (detailed changes are listed in patches):
 - Dropped patch "arm64: cpufeature: handle conflicts based on capability",
   as pointed out by Suzuki.
 - Patches 4, 10, 12 and 14 are newly added.
 - Patch 12 adds support for blocking kprobes on the ptrauth authentication
   instructions.
 - Patch 14 adds an lkdtm test for kernel ptrauth.
 - In the last version, if the secondary cpus had ptrauth and the primary cpu
   did not, the secondaries would silently disable ptrauth and keep running.
   This version panics in that case, as suggested by Suzuki.
 - Implemented many suggestions from James.

This series does not yet implement a few things, and has known limitations:
 - The kdump tool may need some rework to work with ptrauth.
 - Randomness for the ptrauth keys is not yet generated/gathered during early
   kernel boot.

Feedback welcome!

Thanks,
Amit Daniel

[1]: https://www.spinics.net/lists/arm-kernel/msg761991.html

Amit Daniel Kachhap (7):
  arm64: create macro to park cpu in an infinite loop
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: mask PAC bits of __builtin_return_address
  arm64: __show_regs: strip PAC from lr in printk
  arm64: suspend: restore the kernel ptrauth keys
  arm64: kprobe: disable probe of ptrauth instruction
  lkdtm: arm64: test kernel pointer authentication

Kristina Martsenko (6):
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: install user ptrauth keys at kernel exit time
  arm64: enable ptrauth earlier
  arm64: rename ptrauth key structures to be user-specific
  arm64: initialize and switch ptrauth kernel keys
  arm64: compile the kernel with ptrauth return address signing

Mark Rutland (1):
  arm64: unwind: strip PAC from kernel addresses

 arch/arm64/Kconfig                        | 22 +++++++++-
 arch/arm64/Makefile                       |  6 +++
 arch/arm64/include/asm/asm_pointer_auth.h | 59 ++++++++++++++++++++++++++
 arch/arm64/include/asm/compiler.h         | 17 ++++++++
 arch/arm64/include/asm/cpucaps.h          |  4 +-
 arch/arm64/include/asm/cpufeature.h       |  6 +--
 arch/arm64/include/asm/insn.h             | 13 +++---
 arch/arm64/include/asm/pointer_auth.h     | 57 +++++++++++--------------
 arch/arm64/include/asm/processor.h        |  3 +-
 arch/arm64/include/asm/smp.h              | 10 +++++
 arch/arm64/kernel/asm-offsets.c           | 16 +++++++
 arch/arm64/kernel/cpufeature.c            | 30 +++++++++----
 arch/arm64/kernel/entry.S                 |  6 +++
 arch/arm64/kernel/head.S                  | 47 +++++++++++++++------
 arch/arm64/kernel/insn.c                  |  1 +
 arch/arm64/kernel/pointer_auth.c          |  7 +---
 arch/arm64/kernel/probes/decode-insn.c    |  2 +-
 arch/arm64/kernel/process.c               |  5 ++-
 arch/arm64/kernel/ptrace.c                | 16 +++----
 arch/arm64/kernel/sleep.S                 |  8 ++++
 arch/arm64/kernel/smp.c                   | 10 +++++
 arch/arm64/kernel/stacktrace.c            |  3 ++
 arch/arm64/mm/proc.S                      | 70 ++++++++++++++++++++++++++-----
 drivers/misc/lkdtm/bugs.c                 | 17 ++++++++
 drivers/misc/lkdtm/core.c                 |  1 +
 drivers/misc/lkdtm/lkdtm.h                |  1 +
 26 files changed, 345 insertions(+), 92 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
 create mode 100644 arch/arm64/include/asm/compiler.h

-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* [PATCH v2 01/14] arm64: cpufeature: add pointer auth meta-capabilities
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 02/14] arm64: install user ptrauth keys at kernel exit time Amit Daniel Kachhap
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

To enable pointer auth for the kernel, we're going to need to check for
the presence of address auth and generic auth using alternative_if. We
currently have two cpucaps for each, but alternative_if needs to check a
single cpucap. So define meta-capabilities that are present when either
of the current two capabilities is present.

Leave the existing four cpucaps in place, as they are still needed to
check for mismatched systems where one CPU has the architected algorithm
but another has the IMP DEF algorithm.

Note that the meta-capabilities were present before, but were removed in
commit a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth
CPU caps from 6 to 4") and commit 1e013d06120c ("arm64: cpufeature: Rework
ptr auth hwcaps using multi_entry_cap_matches"), as they were not needed
then. Unlike before, this patch checks the cpucap values directly instead
of reading the CPU ID register value.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit message and macro rebase]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
* Macro number.

 arch/arm64/include/asm/cpucaps.h    |  4 +++-
 arch/arm64/include/asm/cpufeature.h |  6 ++----
 arch/arm64/kernel/cpufeature.c      | 25 ++++++++++++++++++++++++-
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index ac1dbca..944c596 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -54,7 +54,9 @@
 #define ARM64_WORKAROUND_1463225		44
 #define ARM64_WORKAROUND_CAVIUM_TX2_219_TVM	45
 #define ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM	46
+#define ARM64_HAS_ADDRESS_AUTH			47
+#define ARM64_HAS_GENERIC_AUTH			48
 
-#define ARM64_NCAPS				47
+#define ARM64_NCAPS				49
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9cde5d2..670497d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -590,15 +590,13 @@ static inline bool system_supports_cnp(void)
 static inline bool system_supports_address_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
 }
 
 static inline bool system_supports_generic_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
 static inline bool system_uses_irq_prio_masking(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 80f459a..b6af43f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1248,6 +1248,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
 				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
 }
+
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+}
+
+static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
+}
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
@@ -1517,7 +1531,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -1528,6 +1541,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
 		.matches = has_cpuid_feature,
+	},
+	{
+		.capability = ARM64_HAS_ADDRESS_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_address_auth,
 		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
@@ -1550,6 +1568,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
 		.matches = has_cpuid_feature,
 	},
+	{
+		.capability = ARM64_HAS_GENERIC_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_generic_auth,
+	},
 #endif /* CONFIG_ARM64_PTR_AUTH */
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	{
-- 
2.7.4




* [PATCH v2 02/14] arm64: install user ptrauth keys at kernel exit time
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 01/14] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 03/14] arm64: create macro to park cpu in an infinite loop Amit Daniel Kachhap
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

As we're going to enable pointer auth within the kernel and use a
different APIAKey for the kernel itself, move the user APIAKey switch to
EL0 exception return.

The other 4 keys could remain switched during task switch, but are also
moved to keep things consistent.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit msg]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
 * Minor change in the commit message.
 
 arch/arm64/include/asm/asm_pointer_auth.h | 45 +++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 30 +--------------------
 arch/arm64/kernel/asm-offsets.c           | 11 ++++++++
 arch/arm64/kernel/entry.S                 |  3 +++
 arch/arm64/kernel/pointer_auth.c          |  3 ---
 arch/arm64/kernel/process.c               |  1 -
 6 files changed, 60 insertions(+), 33 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
new file mode 100644
index 0000000..cb21a06
--- /dev/null
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_POINTER_AUTH_H
+#define __ASM_ASM_POINTER_AUTH_H
+
+#include <asm/alternative.h>
+#include <asm/asm-offsets.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_USER
+	add	\tmp1, \tsk, \tmp1
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+	b	.Laddr_auth_skip_\@
+alternative_else_nop_endif
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
+	msr_s	SYS_APIBKEYLO_EL1, \tmp2
+	msr_s	SYS_APIBKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
+	msr_s	SYS_APDAKEYLO_EL1, \tmp2
+	msr_s	SYS_APDAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
+	msr_s	SYS_APDBKEYLO_EL1, \tmp2
+	msr_s	SYS_APDBKEYHI_EL1, \tmp3
+.Laddr_auth_skip_\@:
+alternative_if ARM64_HAS_GENERIC_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
+	msr_s	SYS_APGAKEYLO_EL1, \tmp2
+	msr_s	SYS_APGAKEYHI_EL1, \tmp3
+alternative_else_nop_endif
+	.endm
+
+#else /* CONFIG_ARM64_PTR_AUTH */
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	.endm
+
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 7a24bad..21c2115 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -43,26 +43,6 @@ static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 }
 
-#define __ptrauth_key_install(k, v)				\
-do {								\
-	struct ptrauth_key __pki_v = (v);			\
-	write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1);	\
-	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
-} while (0)
-
-static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
-{
-	if (system_supports_address_auth()) {
-		__ptrauth_key_install(APIA, keys->apia);
-		__ptrauth_key_install(APIB, keys->apib);
-		__ptrauth_key_install(APDA, keys->apda);
-		__ptrauth_key_install(APDB, keys->apdb);
-	}
-
-	if (system_supports_generic_auth())
-		__ptrauth_key_install(APGA, keys->apga);
-}
-
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
@@ -78,20 +58,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }
 
 #define ptrauth_thread_init_user(tsk)					\
-do {									\
-	struct task_struct *__ptiu_tsk = (tsk);				\
-	ptrauth_keys_init(&__ptiu_tsk->thread.keys_user);		\
-	ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user);		\
-} while (0)
-
-#define ptrauth_thread_switch(tsk)	\
-	ptrauth_keys_switch(&(tsk)->thread.keys_user)
+	ptrauth_keys_init(&(tsk)->thread.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
-#define ptrauth_thread_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 2146857..ef0c24b 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -40,6 +40,9 @@ int main(void)
 #endif
   BLANK();
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+#endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
   DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
@@ -127,5 +130,13 @@ int main(void)
   DEFINE(SDEI_EVENT_INTREGS,	offsetof(struct sdei_registered_event, interrupted_regs));
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(PTRAUTH_KEY_APIA,	offsetof(struct ptrauth_keys, apia));
+  DEFINE(PTRAUTH_KEY_APIB,	offsetof(struct ptrauth_keys, apib));
+  DEFINE(PTRAUTH_KEY_APDA,	offsetof(struct ptrauth_keys, apda));
+  DEFINE(PTRAUTH_KEY_APDB,	offsetof(struct ptrauth_keys, apdb));
+  DEFINE(PTRAUTH_KEY_APGA,	offsetof(struct ptrauth_keys, apga));
+  BLANK();
+#endif
   return 0;
 }
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index cf3bd29..6a4e402 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -14,6 +14,7 @@
 #include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
+#include <asm/asm_pointer_auth.h>
 #include <asm/cpufeature.h>
 #include <asm/errno.h>
 #include <asm/esr.h>
@@ -341,6 +342,8 @@ alternative_else_nop_endif
 	msr	cntkctl_el1, x1
 4:
 #endif
+	ptrauth_keys_install_user tsk, x0, x1, x2
+
 	apply_ssbd 0, x0, x1
 	.endif
 
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index c507b58..95985be 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -19,7 +19,6 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 
 	if (!arg) {
 		ptrauth_keys_init(keys);
-		ptrauth_keys_switch(keys);
 		return 0;
 	}
 
@@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 	if (arg & PR_PAC_APGAKEY)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 
-	ptrauth_keys_switch(keys);
-
 	return 0;
 }
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 71f788c..3716528 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -505,7 +505,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
-	ptrauth_thread_switch(next);
 	ssbs_thread_switch(next);
 
 	/*
-- 
2.7.4




* [PATCH v2 03/14] arm64: create macro to park cpu in an infinite loop
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 01/14] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 02/14] arm64: install user ptrauth keys at kernel exit time Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 04/14] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup Amit Daniel Kachhap
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

Add a macro, early_park_cpu, to park a faulted cpu in an infinite loop.
The macro replaces the existing open-coded loop in two places, and is
reused in a subsequent pointer authentication patch.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
* Added macro update_early_cpu_boot_status inside early_park_cpu.

 arch/arm64/kernel/head.S | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 989b194..3d18163 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -761,6 +761,17 @@ ENDPROC(__secondary_too_slow)
 	.endm
 
 /*
+ * Macro to park the cpu in an infinite loop.
+ */
+	.macro	early_park_cpu status
+	update_early_cpu_boot_status \status | CPU_STUCK_IN_KERNEL, x1, x2
+.Lepc_\@:
+	wfe
+	wfi
+	b	.Lepc_\@
+	.endm
+
+/*
  * Enable the MMU.
  *
  *  x0  = SCTLR_EL1 value for turning on the MMU.
@@ -808,24 +819,14 @@ ENTRY(__cpu_secondary_check52bitva)
 	and	x0, x0, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
 	cbnz	x0, 2f
 
-	update_early_cpu_boot_status \
-		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_52_BIT_VA, x0, x1
-1:	wfe
-	wfi
-	b	1b
-
+	early_park_cpu CPU_STUCK_REASON_52_BIT_VA
 #endif
 2:	ret
 ENDPROC(__cpu_secondary_check52bitva)
 
 __no_granule_support:
 	/* Indicate that this CPU can't boot and is stuck in the kernel */
-	update_early_cpu_boot_status \
-		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_GRAN, x1, x2
-1:
-	wfe
-	wfi
-	b	1b
+	early_park_cpu CPU_STUCK_REASON_NO_GRAN
 ENDPROC(__no_granule_support)
 
 #ifdef CONFIG_RELOCATABLE
-- 
2.7.4




* [PATCH v2 04/14] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (2 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 03/14] arm64: create macro to park cpu in an infinite loop Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 05/14] arm64: enable ptrauth earlier Amit Daniel Kachhap
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

This patch allows __cpu_setup to be invoked with one of these flags:
ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_LATE or ARM64_CPU_RUNTIME.
This is required as some cpufeatures need different handling in
different scenarios.

The flag passed in x0 is preserved until the end of the function so
that it can be used there.

There should be no functional change with this patch. It is groundwork
for the subsequent ptrauth patch, which makes use of the flags, and
some upcoming arm cpufeatures can also utilize them.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
* This is a new patch and is added as suggested by James[1].

[1]: https://www.spinics.net/lists/arm-kernel/msg763622.html

 arch/arm64/include/asm/smp.h |  5 +++++
 arch/arm64/kernel/head.S     |  2 ++
 arch/arm64/kernel/sleep.S    |  2 ++
 arch/arm64/mm/proc.S         | 26 +++++++++++++++-----------
 4 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index a0c8a0b..008d004 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -23,6 +23,11 @@
 #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
 #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)
 
+/* Options for __cpu_setup */
+#define ARM64_CPU_BOOT_PRIMARY		(1)
+#define ARM64_CPU_BOOT_LATE		(2)
+#define ARM64_CPU_RUNTIME		(3)
+
 #ifndef __ASSEMBLY__
 
 #include <asm/percpu.h>
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3d18163..5aaf1bb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -118,6 +118,7 @@ ENTRY(stext)
 	 * On return, the CPU will be ready for the MMU to be turned on and
 	 * the TCR will have been set.
 	 */
+	mov	x0, #ARM64_CPU_BOOT_PRIMARY
 	bl	__cpu_setup			// initialise processor
 	b	__primary_switch
 ENDPROC(stext)
@@ -712,6 +713,7 @@ secondary_startup:
 	 * Common entry point for secondary CPUs.
 	 */
 	bl	__cpu_secondary_check52bitva
+	mov	x0, #ARM64_CPU_BOOT_LATE
 	bl	__cpu_setup			// initialise processor
 	adrp	x1, swapper_pg_dir
 	bl	__enable_mmu
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index f5b04dd..7b2f2e6 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -3,6 +3,7 @@
 #include <linux/linkage.h>
 #include <asm/asm-offsets.h>
 #include <asm/assembler.h>
+#include <asm/smp.h>
 
 	.text
 /*
@@ -99,6 +100,7 @@ ENDPROC(__cpu_suspend_enter)
 	.pushsection ".idmap.text", "awx"
 ENTRY(cpu_resume)
 	bl	el2_setup		// if in EL2 drop to EL1 cleanly
+	mov	x0, #ARM64_CPU_RUNTIME
 	bl	__cpu_setup
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	adrp	x1, swapper_pg_dir
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index a1e0592..88cf7e4 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -400,21 +400,25 @@ ENDPROC(idmap_kpti_install_ng_mappings)
 /*
  *	__cpu_setup
  *
- *	Initialise the processor for turning the MMU on.  Return in x0 the
- *	value of the SCTLR_EL1 register.
+ *	Initialise the processor for turning the MMU on.
+ *
+ * Input:
+ *	x0 with a flag ARM64_CPU_BOOT_PRIMARY/ARM64_CPU_BOOT_LATE/ARM64_CPU_RUNTIME.
+ * Output:
+ *	Return in x0 the value of the SCTLR_EL1 register.
  */
 	.pushsection ".idmap.text", "awx"
 ENTRY(__cpu_setup)
 	tlbi	vmalle1				// Invalidate local TLB
 	dsb	nsh
 
-	mov	x0, #3 << 20
-	msr	cpacr_el1, x0			// Enable FP/ASIMD
-	mov	x0, #1 << 12			// Reset mdscr_el1 and disable
-	msr	mdscr_el1, x0			// access to the DCC from EL0
+	mov	x1, #3 << 20
+	msr	cpacr_el1, x1			// Enable FP/ASIMD
+	mov	x1, #1 << 12			// Reset mdscr_el1 and disable
+	msr	mdscr_el1, x1			// access to the DCC from EL0
 	isb					// Unmask debug exceptions now,
 	enable_dbg				// since this is per-cpu
-	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
+	reset_pmuserenr_el0 x1			// Disable PMU access from EL0
 	/*
 	 * Memory region attributes for LPAE:
 	 *
@@ -435,10 +439,6 @@ ENTRY(__cpu_setup)
 		     MAIR(0xbb, MT_NORMAL_WT)
 	msr	mair_el1, x5
 	/*
-	 * Prepare SCTLR
-	 */
-	mov_q	x0, SCTLR_EL1_SET
-	/*
 	 * Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for
 	 * both user and kernel.
 	 */
@@ -474,5 +474,9 @@ ENTRY(__cpu_setup)
 1:
 #endif	/* CONFIG_ARM64_HW_AFDBM */
 	msr	tcr_el1, x10
+	/*
+	 * Prepare SCTLR
+	 */
+	mov_q	x0, SCTLR_EL1_SET
 	ret					// return to head.S
 ENDPROC(__cpu_setup)
-- 
2.7.4




* [PATCH v2 05/14] arm64: enable ptrauth earlier
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (3 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 04/14] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 06/14] arm64: rename ptrauth key structures to be user-specific Amit Daniel Kachhap
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

When the kernel is compiled with pointer auth instructions, the boot CPU
needs to start using address auth very early, so change the cpucap to
account for this.

Pointer auth must be enabled before we call C functions, because it is
not possible to enter a function with pointer auth disabled and exit it
with pointer auth enabled. Note, mismatches between architected and
IMPDEF algorithms will still be caught by the cpufeature framework (the
separate *_ARCH and *_IMP_DEF cpucaps).

Note the change in behaviour: if the boot CPU has address auth and a late
CPU does not, then the late CPU is parked very early during boot.
Conversely, if the boot CPU does not have address auth but a late CPU
does, the system panics a little later, from inside the C code. Until
now we would just have disabled address auth in these cases.

Leave generic authentication as a "system scope" cpucap for now, since
initially the kernel will only use address authentication.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-worked ptrauth setup logic, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
* Added a check __cpu_secondary_checkptrauth for secondary cores which do
  not have ptrauth. [James]
* Moved ptrauth setup inside __cpu_setup. [James]
* Now if the secondary cpus do not have ptrauth and the primary does,
  this will cause a system panic. [Suzuki]

Link to above discussions: https://www.spinics.net/lists/arm-kernel/msg761993.html

 arch/arm64/Kconfig             |  5 +++++
 arch/arm64/include/asm/smp.h   |  1 +
 arch/arm64/kernel/cpufeature.c | 13 +++----------
 arch/arm64/kernel/head.S       | 20 ++++++++++++++++++++
 arch/arm64/kernel/smp.c        |  2 ++
 arch/arm64/mm/proc.S           | 31 +++++++++++++++++++++++++++++++
 6 files changed, 62 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3f047af..998248e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1436,6 +1436,11 @@ config ARM64_PTR_AUTH
 	  be enabled. However, KVM guest also require VHE mode and hence
 	  CONFIG_ARM64_VHE=y option to use this feature.
 
+	  If the feature is present on the primary CPU but not a secondary CPU,
+	  then the secondary CPU will be parked. Also, if the boot CPU does not
+	  have address auth and the late CPU has then system panic will occur.
+	  On such a system, this option should not be selected.
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 008d004..ddb6d70 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -22,6 +22,7 @@
 
 #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
 #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)
+#define CPU_STUCK_REASON_NO_PTRAUTH	(UL(4) << CPU_STUCK_REASON_SHIFT)
 
 /* Options for __cpu_setup */
 #define ARM64_CPU_BOOT_PRIMARY		(1)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b6af43f..c05c36a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1243,12 +1243,6 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
-{
-	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
-				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
-}
-
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
 			     int __unused)
 {
@@ -1525,7 +1519,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (architected algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_SCOPE_BOOT_CPU,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
@@ -1535,7 +1529,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_SCOPE_BOOT_CPU,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
@@ -1544,9 +1538,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_SCOPE_BOOT_CPU,
 		.matches = has_address_auth,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 5aaf1bb..c59c28f 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -13,6 +13,7 @@
 #include <linux/init.h>
 #include <linux/irqchip/arm-gic-v3.h>
 
+#include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/boot.h>
 #include <asm/ptrace.h>
@@ -713,6 +714,7 @@ secondary_startup:
 	 * Common entry point for secondary CPUs.
 	 */
 	bl	__cpu_secondary_check52bitva
+	bl	__cpu_secondary_checkptrauth
 	mov	x0, #ARM64_CPU_BOOT_LATE
 	bl	__cpu_setup			// initialise processor
 	adrp	x1, swapper_pg_dir
@@ -831,6 +833,24 @@ __no_granule_support:
 	early_park_cpu CPU_STUCK_REASON_NO_GRAN
 ENDPROC(__no_granule_support)
 
+ENTRY(__cpu_secondary_checkptrauth)
+#ifdef CONFIG_ARM64_PTR_AUTH
+	/* Check if the CPU supports ptrauth */
+	mrs	x2, id_aa64isar1_el1
+	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
+	cbnz	x2, 1f
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	mov	x3, 1
+alternative_else
+	mov	x3, 0
+alternative_endif
+	cbz	x3, 1f
+	/* Park the mismatched secondary CPU */
+	early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH
+#endif
+1:	ret
+ENDPROC(__cpu_secondary_checkptrauth)
+
 #ifdef CONFIG_RELOCATABLE
 __relocate_kernel:
 	/*
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index dc9fe87..a6a5f24 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -162,6 +162,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 				pr_crit("CPU%u: does not support 52-bit VAs\n", cpu);
 			if (status & CPU_STUCK_REASON_NO_GRAN)
 				pr_crit("CPU%u: does not support %luK granule \n", cpu, PAGE_SIZE / SZ_1K);
+			if (status & CPU_STUCK_REASON_NO_PTRAUTH)
+				pr_crit("CPU%u: does not support pointer authentication\n", cpu);
 			cpus_stuck_in_kernel++;
 			break;
 		case CPU_PANIC_KERNEL:
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 88cf7e4..8734d99 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -16,6 +16,7 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/cpufeature.h>
 #include <asm/alternative.h>
+#include <asm/smp.h>
 
 #ifdef CONFIG_ARM64_64K_PAGES
 #define TCR_TG_FLAGS	TCR_TG0_64K | TCR_TG1_64K
@@ -474,9 +475,39 @@ ENTRY(__cpu_setup)
 1:
 #endif	/* CONFIG_ARM64_HW_AFDBM */
 	msr	tcr_el1, x10
+	mov	x1, x0
 	/*
 	 * Prepare SCTLR
 	 */
 	mov_q	x0, SCTLR_EL1_SET
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+	/* No ptrauth setup for run-time CPUs */
+	cmp	x1, #ARM64_CPU_RUNTIME
+	b.eq	3f
+
+	/* Check if the CPU supports ptrauth */
+	mrs	x2, id_aa64isar1_el1
+	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
+	cbz	x2, 3f
+
+	msr_s	SYS_APIAKEYLO_EL1, xzr
+	msr_s	SYS_APIAKEYHI_EL1, xzr
+
+	/* Just enable ptrauth for primary cpu */
+	cmp	x1, #ARM64_CPU_BOOT_PRIMARY
+	b.eq	2f
+
+	/* if !system_supports_address_auth() then skip enable */
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+	b	3f
+alternative_else_nop_endif
+
+2:	/* Enable ptrauth instructions */
+	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
+		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
+	orr	x0, x0, x2
+3:
+#endif
 	ret					// return to head.S
 ENDPROC(__cpu_setup)
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 06/14] arm64: rename ptrauth key structures to be user-specific
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (4 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 05/14] arm64: enable ptrauth earlier Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-22 13:28   ` Ard Biesheuvel
  2019-11-19 12:32 ` [PATCH v2 07/14] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

We currently enable ptrauth for userspace, but do not use it within the
kernel. We're going to enable it for the kernel, and will need to manage
a separate set of ptrauth keys for the kernel.

We currently keep all 5 keys in struct ptrauth_keys. However, as the
kernel will only need to use 1 key, it is a bit wasteful to allocate a
whole ptrauth_keys struct for every thread.

Therefore, a subsequent patch will define a separate struct, with only 1
key, for the kernel. In preparation for that, rename the existing struct
(and associated macros and functions) to reflect that they are specific
to userspace.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
* None

 arch/arm64/include/asm/asm_pointer_auth.h | 10 +++++-----
 arch/arm64/include/asm/pointer_auth.h     |  6 +++---
 arch/arm64/include/asm/processor.h        |  2 +-
 arch/arm64/kernel/asm-offsets.c           | 10 +++++-----
 arch/arm64/kernel/pointer_auth.c          |  4 ++--
 arch/arm64/kernel/ptrace.c                | 16 ++++++++--------
 6 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
index cb21a06..3d39788 100644
--- a/arch/arm64/include/asm/asm_pointer_auth.h
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -15,21 +15,21 @@
 alternative_if_not ARM64_HAS_ADDRESS_AUTH
 	b	.Laddr_auth_skip_\@
 alternative_else_nop_endif
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
 	msr_s	SYS_APIAKEYLO_EL1, \tmp2
 	msr_s	SYS_APIAKEYHI_EL1, \tmp3
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
 	msr_s	SYS_APIBKEYLO_EL1, \tmp2
 	msr_s	SYS_APIBKEYHI_EL1, \tmp3
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA]
 	msr_s	SYS_APDAKEYLO_EL1, \tmp2
 	msr_s	SYS_APDAKEYHI_EL1, \tmp3
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB]
 	msr_s	SYS_APDBKEYLO_EL1, \tmp2
 	msr_s	SYS_APDBKEYHI_EL1, \tmp3
 .Laddr_auth_skip_\@:
 alternative_if ARM64_HAS_GENERIC_AUTH
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA]
 	msr_s	SYS_APGAKEYLO_EL1, \tmp2
 	msr_s	SYS_APGAKEYHI_EL1, \tmp3
 alternative_else_nop_endif
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 21c2115..cc42145 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -22,7 +22,7 @@ struct ptrauth_key {
  * We give each process its own keys, which are shared by all threads. The keys
  * are inherited upon fork(), and reinitialised upon exec*().
  */
-struct ptrauth_keys {
+struct ptrauth_keys_user {
 	struct ptrauth_key apia;
 	struct ptrauth_key apib;
 	struct ptrauth_key apda;
@@ -30,7 +30,7 @@ struct ptrauth_keys {
 	struct ptrauth_key apga;
 };
 
-static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 {
 	if (system_supports_address_auth()) {
 		get_random_bytes(&keys->apia, sizeof(keys->apia));
@@ -58,7 +58,7 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }
 
 #define ptrauth_thread_init_user(tsk)					\
-	ptrauth_keys_init(&(tsk)->thread.keys_user)
+	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5623685..8ec792d 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -144,7 +144,7 @@ struct thread_struct {
 	unsigned long		fault_code;	/* ESR_EL1 value */
 	struct debug_info	debug;		/* debugging */
 #ifdef CONFIG_ARM64_PTR_AUTH
-	struct ptrauth_keys	keys_user;
+	struct ptrauth_keys_user	keys_user;
 #endif
 };
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index ef0c24b..cf15182 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -131,11 +131,11 @@ int main(void)
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
 #endif
 #ifdef CONFIG_ARM64_PTR_AUTH
-  DEFINE(PTRAUTH_KEY_APIA,	offsetof(struct ptrauth_keys, apia));
-  DEFINE(PTRAUTH_KEY_APIB,	offsetof(struct ptrauth_keys, apib));
-  DEFINE(PTRAUTH_KEY_APDA,	offsetof(struct ptrauth_keys, apda));
-  DEFINE(PTRAUTH_KEY_APDB,	offsetof(struct ptrauth_keys, apdb));
-  DEFINE(PTRAUTH_KEY_APGA,	offsetof(struct ptrauth_keys, apga));
+  DEFINE(PTRAUTH_USER_KEY_APIA,		offsetof(struct ptrauth_keys_user, apia));
+  DEFINE(PTRAUTH_USER_KEY_APIB,		offsetof(struct ptrauth_keys_user, apib));
+  DEFINE(PTRAUTH_USER_KEY_APDA,		offsetof(struct ptrauth_keys_user, apda));
+  DEFINE(PTRAUTH_USER_KEY_APDB,		offsetof(struct ptrauth_keys_user, apdb));
+  DEFINE(PTRAUTH_USER_KEY_APGA,		offsetof(struct ptrauth_keys_user, apga));
   BLANK();
 #endif
   return 0;
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index 95985be..1e77736 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -9,7 +9,7 @@
 
 int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 {
-	struct ptrauth_keys *keys = &tsk->thread.keys_user;
+	struct ptrauth_keys_user *keys = &tsk->thread.keys_user;
 	unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
 				      PR_PAC_APDAKEY | PR_PAC_APDBKEY;
 	unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
@@ -18,7 +18,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 		return -EINVAL;
 
 	if (!arg) {
-		ptrauth_keys_init(keys);
+		ptrauth_keys_init_user(keys);
 		return 0;
 	}
 
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 21176d0..0793739 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -986,7 +986,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey)
 }
 
 static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
-				     const struct ptrauth_keys *keys)
+				     const struct ptrauth_keys_user *keys)
 {
 	ukeys->apiakey = pac_key_to_user(&keys->apia);
 	ukeys->apibkey = pac_key_to_user(&keys->apib);
@@ -994,7 +994,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
 	ukeys->apdbkey = pac_key_to_user(&keys->apdb);
 }
 
-static void pac_address_keys_from_user(struct ptrauth_keys *keys,
+static void pac_address_keys_from_user(struct ptrauth_keys_user *keys,
 				       const struct user_pac_address_keys *ukeys)
 {
 	keys->apia = pac_key_from_user(ukeys->apiakey);
@@ -1008,7 +1008,7 @@ static int pac_address_keys_get(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				void *kbuf, void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_address_keys user_keys;
 
 	if (!system_supports_address_auth())
@@ -1025,7 +1025,7 @@ static int pac_address_keys_set(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				const void *kbuf, const void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_address_keys user_keys;
 	int ret;
 
@@ -1043,12 +1043,12 @@ static int pac_address_keys_set(struct task_struct *target,
 }
 
 static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys,
-				     const struct ptrauth_keys *keys)
+				     const struct ptrauth_keys_user *keys)
 {
 	ukeys->apgakey = pac_key_to_user(&keys->apga);
 }
 
-static void pac_generic_keys_from_user(struct ptrauth_keys *keys,
+static void pac_generic_keys_from_user(struct ptrauth_keys_user *keys,
 				       const struct user_pac_generic_keys *ukeys)
 {
 	keys->apga = pac_key_from_user(ukeys->apgakey);
@@ -1059,7 +1059,7 @@ static int pac_generic_keys_get(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				void *kbuf, void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_generic_keys user_keys;
 
 	if (!system_supports_generic_auth())
@@ -1076,7 +1076,7 @@ static int pac_generic_keys_set(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				const void *kbuf, const void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_generic_keys user_keys;
 	int ret;
 
-- 
2.7.4



* [PATCH v2 07/14] arm64: initialize and switch ptrauth kernel keys
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (5 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 06/14] arm64: rename ptrauth key structures to be user-specific Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-22 19:19   ` Richard Henderson
  2019-11-19 12:32 ` [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

Set up keys to use pointer authentication within the kernel. The kernel
will be compiled with APIAKey instructions, the other keys are currently
unused. Each task is given its own APIAKey, which is initialized during
fork. The key is changed during context switch and on kernel entry from
EL0.

The keys for idle threads need to be set before calling any C functions,
because it is not possible to enter and exit a function with different
keys.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Modified secondary cores key structure, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
 * Used "struct ptrauth_keys_kernel" instead of 2 unsigned longs. [James]
 * Check secondary data validity for slow cpus. [James]

Link to above discussion: https://www.spinics.net/lists/arm-kernel/msg763623.html

 arch/arm64/include/asm/asm_pointer_auth.h | 14 ++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 13 +++++++++++++
 arch/arm64/include/asm/processor.h        |  1 +
 arch/arm64/include/asm/smp.h              |  4 ++++
 arch/arm64/kernel/asm-offsets.c           |  5 +++++
 arch/arm64/kernel/entry.S                 |  3 +++
 arch/arm64/kernel/process.c               |  2 ++
 arch/arm64/kernel/smp.c                   |  8 ++++++++
 arch/arm64/mm/proc.S                      | 13 +++++++++++++
 9 files changed, 63 insertions(+)

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
index 3d39788..172548a 100644
--- a/arch/arm64/include/asm/asm_pointer_auth.h
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -35,11 +35,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH
 alternative_else_nop_endif
 	.endm
 
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_KERNEL
+	add	\tmp1, \tsk, \tmp1
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	isb
+alternative_else_nop_endif
+	.endm
+
 #else /* CONFIG_ARM64_PTR_AUTH */
 
 	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
 	.endm
 
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	.endm
+
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index cc42145..599dd09 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -30,6 +30,10 @@ struct ptrauth_keys_user {
 	struct ptrauth_key apga;
 };
 
+struct ptrauth_keys_kernel {
+	struct ptrauth_key apia;
+};
+
 static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 {
 	if (system_supports_address_auth()) {
@@ -43,6 +47,12 @@ static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 }
 
+static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
+{
+	if (system_supports_address_auth())
+		get_random_bytes(&keys->apia, sizeof(keys->apia));
+}
+
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
@@ -59,11 +69,14 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 
 #define ptrauth_thread_init_user(tsk)					\
 	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
+#define ptrauth_thread_init_kernel(tsk)					\
+	ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
+#define ptrauth_thread_init_kernel(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 8ec792d..c12c98d 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -145,6 +145,7 @@ struct thread_struct {
 	struct debug_info	debug;		/* debugging */
 #ifdef CONFIG_ARM64_PTR_AUTH
 	struct ptrauth_keys_user	keys_user;
+	struct ptrauth_keys_kernel	keys_kernel;
 #endif
 };
 
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index ddb6d70..8664ec4 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -36,6 +36,7 @@
 #include <linux/threads.h>
 #include <linux/cpumask.h>
 #include <linux/thread_info.h>
+#include <asm/pointer_auth.h>
 
 DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
 
@@ -93,6 +94,9 @@ asmlinkage void secondary_start_kernel(void);
 struct secondary_data {
 	void *stack;
 	struct task_struct *task;
+#ifdef CONFIG_ARM64_PTR_AUTH
+	struct ptrauth_keys_kernel ptrauth_key;
+#endif
 	long status;
 };
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index cf15182..7e0e1bc 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -42,6 +42,7 @@ int main(void)
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
 #ifdef CONFIG_ARM64_PTR_AUTH
   DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+  DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
 #endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
@@ -90,6 +91,9 @@ int main(void)
   BLANK();
   DEFINE(CPU_BOOT_STACK,	offsetof(struct secondary_data, stack));
   DEFINE(CPU_BOOT_TASK,		offsetof(struct secondary_data, task));
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(CPU_BOOT_PTRAUTH_KEY,	offsetof(struct secondary_data, ptrauth_key));
+#endif
   BLANK();
 #ifdef CONFIG_KVM_ARM_HOST
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
@@ -136,6 +140,7 @@ int main(void)
   DEFINE(PTRAUTH_USER_KEY_APDA,		offsetof(struct ptrauth_keys_user, apda));
   DEFINE(PTRAUTH_USER_KEY_APDB,		offsetof(struct ptrauth_keys_user, apdb));
   DEFINE(PTRAUTH_USER_KEY_APGA,		offsetof(struct ptrauth_keys_user, apga));
+  DEFINE(PTRAUTH_KERNEL_KEY_APIA,	offsetof(struct ptrauth_keys_kernel, apia));
   BLANK();
 #endif
   return 0;
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 6a4e402..b85695c 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -173,6 +173,7 @@ alternative_cb_end
 
 	apply_ssbd 1, x22, x23
 
+	ptrauth_keys_install_kernel tsk, x20, x22, x23
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
@@ -342,6 +343,7 @@ alternative_else_nop_endif
 	msr	cntkctl_el1, x1
 4:
 #endif
+	/* No kernel C function calls after this as user keys are set. */
 	ptrauth_keys_install_user tsk, x0, x1, x2
 
 	apply_ssbd 0, x0, x1
@@ -1158,6 +1160,7 @@ ENTRY(cpu_switch_to)
 	ldr	lr, [x8]
 	mov	sp, x9
 	msr	sp_el0, x1
+	ptrauth_keys_install_kernel x1, x8, x9, x10
 	ret
 ENDPROC(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 3716528..0d4a3b8 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -376,6 +376,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 	 */
 	fpsimd_flush_task_state(p);
 
+	ptrauth_thread_init_kernel(p);
+
 	if (likely(!(p->flags & PF_KTHREAD))) {
 		*childregs = *current_pt_regs();
 		childregs->regs[0] = 0;
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index a6a5f24..b171237 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -110,6 +110,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	 */
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
+#if defined(CONFIG_ARM64_PTR_AUTH)
+	secondary_data.ptrauth_key.apia.lo = idle->thread.keys_kernel.apia.lo;
+	secondary_data.ptrauth_key.apia.hi = idle->thread.keys_kernel.apia.hi;
+#endif
 	update_cpu_boot_status(CPU_MMU_OFF);
 	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
 
@@ -136,6 +140,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
+#if defined(CONFIG_ARM64_PTR_AUTH)
+	secondary_data.ptrauth_key.apia.lo = 0;
+	secondary_data.ptrauth_key.apia.hi = 0;
+#endif
 	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
 	if (ret && status) {
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8734d99..c5a43ac 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -491,6 +491,11 @@ ENTRY(__cpu_setup)
 	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
 	cbz	x2, 3f
 
+	/*
+	 * TODO: The primary cpu keys are reset here and may be
+	 * re-initialised with values passed from bootloader or
+	 * some other way.
+	 */
 	msr_s	SYS_APIAKEYLO_EL1, xzr
 	msr_s	SYS_APIAKEYHI_EL1, xzr
 
@@ -503,6 +508,14 @@ alternative_if_not ARM64_HAS_ADDRESS_AUTH
 	b	3f
 alternative_else_nop_endif
 
+	/* Install ptrauth key for secondary cpus */
+	adr_l	x2, secondary_data
+	ldr	x3, [x2, #CPU_BOOT_TASK]	// get secondary_data.task
+	cbz	x3, 2f				// check for slow booting cpus
+	ldp	x3, x4, [x2, #CPU_BOOT_PTRAUTH_KEY]
+	msr_s	SYS_APIAKEYLO_EL1, x3
+	msr_s	SYS_APIAKEYHI_EL1, x4
+
 2:	/* Enable ptrauth instructions */
 	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
 		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
-- 
2.7.4



* [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (6 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 07/14] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-21 17:42   ` Ard Biesheuvel
  2019-11-19 12:32 ` [PATCH v2 09/14] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

This patch redefines __builtin_return_address to mask the PAC bits
when pointer authentication is enabled. As __builtin_return_address
is mostly used to refer to the caller's symbol address, masking the
runtime-generated PAC bits lets the kallsyms lookup find a match.

This change fixes utilities such as cat /proc/vmallocinfo so that
they show the correct symbol names.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Change since last version:
 * Comment modified.

 arch/arm64/Kconfig                |  1 +
 arch/arm64/include/asm/compiler.h | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)
 create mode 100644 arch/arm64/include/asm/compiler.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 998248e..c1844de 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -117,6 +117,7 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_COMPILER_H
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
new file mode 100644
index 0000000..5efe310
--- /dev/null
+++ b/arch/arm64/include/asm/compiler.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ARM_COMPILER_H
+#define __ASM_ARM_COMPILER_H
+
+#ifndef __ASSEMBLY__
+
+#if defined(CONFIG_ARM64_PTR_AUTH)
+
+/* As TBI1 is currently disabled, bits 63:56 also hold the PAC */
+#define __builtin_return_address(val)				\
+	(void *)((unsigned long)__builtin_return_address(val) |	\
+	(GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS)))
+#endif
+
+#endif
+
+#endif /* __ASM_ARM_COMPILER_H */
-- 
2.7.4



* [PATCH v2 09/14] arm64: unwind: strip PAC from kernel addresses
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (7 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 10/14] arm64: __show_regs: strip PAC from lr in printk Amit Daniel Kachhap
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Mark Rutland <mark.rutland@arm.com>

When we enable pointer authentication in the kernel, LR values saved to
the stack will have a PAC which we must strip in order to retrieve the
real return address.

Strip PACs when unwinding the stack in order to account for this.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-position ptrauth_strip_insn_pac, comment]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
 * Reposition ptrauth_strip_insn_pac. [James]
 * Added more comments for stripping PAC at EL1. [James]

Link to above discussion: https://www.spinics.net/lists/arm-kernel/msg763624.html

 arch/arm64/include/asm/pointer_auth.h | 16 +++++++++++-----
 arch/arm64/kernel/stacktrace.c        |  3 +++
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 599dd09..efd70b5 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -56,15 +56,21 @@ static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
- * The EL0 pointer bits used by a pointer authentication code.
- * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
+ * The EL0/EL1 pointer bits used by a pointer authentication code.
+ * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
  */
-#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
+#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
+/* As TBI1 is currently disabled, bits 63:56 also hold the PAC */
+#define ptrauth_kernel_pac_mask()	\
+				(GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS))
 
-/* Only valid for EL0 TTBR0 instruction pointers */
+/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-	return ptr & ~ptrauth_user_pac_mask();
+	if (ptr & BIT_ULL(55))
+		return ptr | ptrauth_kernel_pac_mask();
+	else
+		return ptr & ~ptrauth_user_pac_mask();
 }
 
 #define ptrauth_thread_init_user(tsk)					\
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index a336cb1..b479df7 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -14,6 +14,7 @@
 #include <linux/stacktrace.h>
 
 #include <asm/irq.h>
+#include <asm/pointer_auth.h>
 #include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>
 
@@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	}
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
+	frame->pc = ptrauth_strip_insn_pac(frame->pc);
+
 	/*
 	 * Frames created upon entry from EL0 have NULL FP and PC values, so
 	 * don't bother reporting these. Frames created by __noreturn functions
-- 
2.7.4



* [PATCH v2 10/14] arm64: __show_regs: strip PAC from lr in printk
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (8 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 09/14] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 11/14] arm64: suspend: restore the kernel ptrauth keys Amit Daniel Kachhap
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

The lr is printed with %pS, which tries to resolve the address to an
entry in kallsyms. After pointer authentication is enabled, this lookup
will fail due to the PAC present in the lr.

Strip the PAC from the lr so that the correct symbol name is displayed.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Change since last version:
 * New patch

 arch/arm64/kernel/process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 0d4a3b8..d35b4c0 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -262,7 +262,7 @@ void __show_regs(struct pt_regs *regs)
 
 	if (!user_mode(regs)) {
 		printk("pc : %pS\n", (void *)regs->pc);
-		printk("lr : %pS\n", (void *)lr);
+		printk("lr : %pS\n", (void *)ptrauth_strip_insn_pac(lr));
 	} else {
 		printk("pc : %016llx\n", regs->pc);
 		printk("lr : %016llx\n", lr);
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 11/14] arm64: suspend: restore the kernel ptrauth keys
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (9 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 10/14] arm64: __show_regs: strip PAC from lr in printk Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 12/14] arm64: kprobe: disable probe of ptrauth instruction Amit Daniel Kachhap
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

This patch restores the kernel ptrauth keys from the current task
during cpu resume, after the MMU is turned on and ptrauth is enabled.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Change since last version:
 * None

 arch/arm64/kernel/sleep.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 7b2f2e6..a6e9460 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -2,6 +2,7 @@
 #include <linux/errno.h>
 #include <linux/linkage.h>
 #include <asm/asm-offsets.h>
+#include <asm/asm_pointer_auth.h>
 #include <asm/assembler.h>
 #include <asm/smp.h>
 
@@ -139,6 +140,11 @@ ENTRY(_cpu_resume)
 	bl	kasan_unpoison_task_stack_below
 #endif
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+	get_current_task x1
+	ptrauth_keys_install_kernel x1, x2, x3, x4
+#endif
+
 	ldp	x19, x20, [x29, #16]
 	ldp	x21, x22, [x29, #32]
 	ldp	x23, x24, [x29, #48]
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 12/14] arm64: kprobe: disable probe of ptrauth instruction
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (10 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 11/14] arm64: suspend: restore the kernel ptrauth keys Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-19 12:32 ` [PATCH v2 13/14] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

This patch disables probing of the ptrauth authenticate instructions,
which fall within the hint instruction region. Probing these
instructions is disallowed because single-stepping them may lead
to ptrauth faults.

The corresponding PAC-inserting (sign) instructions are not disabled:
they are typically the first instruction in a function, so disabling
them would effectively disable probing of the function itself. Also,
inserting a PAC does not by itself cause any exception.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Change since last version:
 * New patch

 arch/arm64/include/asm/insn.h          | 13 +++++++------
 arch/arm64/kernel/insn.c               |  1 +
 arch/arm64/kernel/probes/decode-insn.c |  2 +-
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 39e7780..d055694 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -40,12 +40,13 @@ enum aarch64_insn_encoding_class {
 };
 
 enum aarch64_insn_hint_op {
-	AARCH64_INSN_HINT_NOP	= 0x0 << 5,
-	AARCH64_INSN_HINT_YIELD	= 0x1 << 5,
-	AARCH64_INSN_HINT_WFE	= 0x2 << 5,
-	AARCH64_INSN_HINT_WFI	= 0x3 << 5,
-	AARCH64_INSN_HINT_SEV	= 0x4 << 5,
-	AARCH64_INSN_HINT_SEVL	= 0x5 << 5,
+	AARCH64_INSN_HINT_NOP		= 0x0 << 5,
+	AARCH64_INSN_HINT_YIELD		= 0x1 << 5,
+	AARCH64_INSN_HINT_WFE		= 0x2 << 5,
+	AARCH64_INSN_HINT_WFI		= 0x3 << 5,
+	AARCH64_INSN_HINT_SEV		= 0x4 << 5,
+	AARCH64_INSN_HINT_SEVL		= 0x5 << 5,
+	AARCH64_INSN_HINT_AUTIASP	= (0x3 << 8) | (0x5 << 5),
 };
 
 enum aarch64_insn_imm_type {
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index d801a70..d172386 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -62,6 +62,7 @@ bool __kprobes aarch64_insn_is_nop(u32 insn)
 	case AARCH64_INSN_HINT_WFI:
 	case AARCH64_INSN_HINT_SEV:
 	case AARCH64_INSN_HINT_SEVL:
+	case AARCH64_INSN_HINT_AUTIASP:
 		return false;
 	default:
 		return true;
diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index b78fac9..a7caf84 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -42,7 +42,7 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 			     != AARCH64_INSN_SPCLREG_DAIF;
 
 		/*
-		 * The HINT instruction is is problematic when single-stepping,
+		 * The HINT instruction is problematic when single-stepping,
 		 * except for the NOP case.
 		 */
 		if (aarch64_insn_is_hint(insn))
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 13/14] arm64: compile the kernel with ptrauth return address signing
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (11 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 12/14] arm64: kprobe: disable probe of ptrauth instruction Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-21 15:06   ` Mark Brown
  2019-11-25 17:35   ` Mark Brown
  2019-11-19 12:32 ` [PATCH v2 14/14] lkdtm: arm64: test kernel pointer authentication Amit Daniel Kachhap
  2019-11-20 16:05 ` [PATCH v2 00/14] arm64: return address signing Ard Biesheuvel
  14 siblings, 2 replies; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

From: Kristina Martsenko <kristina.martsenko@arm.com>

Compile all functions with two ptrauth instructions: PACIASP in the
prologue to sign the return address, and AUTIASP in the epilogue to
authenticate the return address (from the stack). If authentication
fails, the return will cause an instruction abort to be taken, followed
by an oops and killing the task.

This should help protect the kernel against attacks using
return-oriented programming. As ptrauth protects the return address, it
can also serve as a replacement for CONFIG_STACKPROTECTOR, although note
that it does not protect other parts of the stack.

The new instructions are in the HINT encoding space, so on a system
without ptrauth they execute as NOPs.

CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
guests, but also automatically builds the kernel with ptrauth
instructions if the compiler supports it. If there is no compiler
support, we do not warn that the kernel was built without ptrauth
instructions.

GCC 7 and 8 support the -msign-return-address option, while GCC 9
deprecates that option and replaces it with -mbranch-protection. Support
both options.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Cover leaf function, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Change since last version:
 * Comments for different behaviour while booting secondary cores.

 arch/arm64/Kconfig  | 16 +++++++++++++++-
 arch/arm64/Makefile |  6 ++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c1844de..44d13fe 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1427,11 +1427,17 @@ config ARM64_PTR_AUTH
 	  and other attacks.
 
 	  This option enables these instructions at EL0 (i.e. for userspace).
-
 	  Choosing this option will cause the kernel to initialise secret keys
 	  for each process at exec() time, with these keys being
 	  context-switched along with the process.
 
+	  If the compiler supports the -mbranch-protection or
+	  -msign-return-address flag (e.g. GCC 7 or later), then this option
+	  will also cause the kernel itself to be compiled with return address
+	  protection. In this case, and if the target hardware is known to
+	  support pointer authentication, then CONFIG_STACKPROTECTOR can be
+	  disabled with minimal loss of protection.
+
 	  The feature is detected at runtime. If the feature is not present in
 	  hardware it will not be advertised to userspace/KVM guest nor will it
 	  be enabled. However, KVM guest also require VHE mode and hence
@@ -1442,6 +1448,14 @@ config ARM64_PTR_AUTH
 	  have address auth and the late CPU has then system panic will occur.
 	  On such a system, this option should not be selected.
 
+config CC_HAS_BRANCH_PROT_PAC_RET
+	# GCC 9 or later
+	def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)
+
+config CC_HAS_SIGN_RETURN_ADDRESS
+	# GCC 7, 8
+	def_bool $(cc-option,-msign-return-address=all)
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2c0238c..031462b 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -72,6 +72,12 @@ stack_protector_prepare: prepare0
 					include/generated/asm-offsets.h))
 endif
 
+ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
+pac-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
+pac-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
+KBUILD_CFLAGS += $(pac-flags-y)
+endif
+
 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
 KBUILD_CPPFLAGS	+= -mbig-endian
 CHECKFLAGS	+= -D__AARCH64EB__
-- 
2.7.4
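The cc-option fallback in the Makefile hunk above can be reproduced outside Kbuild with a small probe script. This is a sketch: $CC is an assumption (point it at your aarch64 cross compiler), and the precedence mirrors the Makefile, where the -mbranch-protection flag, when supported, overrides the deprecated -msign-return-address one:

```shell
# Probe the compiler the same way the Kconfig cc-option tests do.
CC=${CC:-cc}

cc_option() {
	echo 'int main(void) { return 0; }' |
		"$CC" "$1" -x c -c -o /dev/null - 2>/dev/null
}

if cc_option -mbranch-protection=pac-ret+leaf; then
	PAC_FLAGS=-mbranch-protection=pac-ret+leaf   # GCC 9 or later
elif cc_option -msign-return-address=all; then
	PAC_FLAGS=-msign-return-address=all          # GCC 7, 8
else
	PAC_FLAGS=   # no support: kernel silently built without ptrauth
fi
echo "PAC_FLAGS=$PAC_FLAGS"
```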



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 14/14] lkdtm: arm64: test kernel pointer authentication
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (12 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 13/14] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
@ 2019-11-19 12:32 ` Amit Daniel Kachhap
  2019-11-21 17:39   ` Ard Biesheuvel
  2019-11-20 16:05 ` [PATCH v2 00/14] arm64: return address signing Ard Biesheuvel
  14 siblings, 1 reply; 36+ messages in thread
From: Amit Daniel Kachhap @ 2019-11-19 12:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin

This test is specific to arm64. When the in-kernel Pointer
Authentication config is enabled, the return address stored on the
stack is signed. This feature helps mitigate ROP-style attacks. If the
signature is corrupted, authentication fails and leads to an abort.

e.g.
echo CORRUPT_PAC > /sys/kernel/debug/provoke-crash/DIRECT

[   13.118166] lkdtm: Performing direct entry CORRUPT_PAC
[   13.118298] lkdtm: Clearing PAC from the return address
[   13.118466] Unable to handle kernel paging request at virtual address bfff8000108648ec
[   13.118626] Mem abort info:
[   13.118666]   ESR = 0x86000004
[   13.118866]   EC = 0x21: IABT (current EL), IL = 32 bits
[   13.118966]   SET = 0, FnV = 0
[   13.119117]   EA = 0, S1PTW = 0

Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Change since last version:
 * New patch

 drivers/misc/lkdtm/bugs.c  | 17 +++++++++++++++++
 drivers/misc/lkdtm/core.c  |  1 +
 drivers/misc/lkdtm/lkdtm.h |  1 +
 3 files changed, 19 insertions(+)

diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
index 7284a22..c9bb493 100644
--- a/drivers/misc/lkdtm/bugs.c
+++ b/drivers/misc/lkdtm/bugs.c
@@ -337,3 +337,20 @@ void lkdtm_UNSET_SMEP(void)
 	pr_err("FAIL: this test is x86_64-only\n");
 #endif
 }
+
+void lkdtm_CORRUPT_PAC(void)
+{
+#if IS_ENABLED(CONFIG_ARM64_PTR_AUTH)
+	u64 ret;
+
+	pr_info("Clearing PAC from the return address\n");
+	/*
+	 * __builtin_return_address masks the PAC bits of return
+	 * address, so set the same again.
+	 */
+	ret = (u64)__builtin_return_address(0);
+	asm volatile("str %0, [sp, 8]" : : "r" (ret) : "memory");
+#else
+	pr_err("FAIL: For arm64 pointer authentication capable systems only\n");
+#endif
+}
diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index cbc4c90..b9c9927 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -116,6 +116,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(STACK_GUARD_PAGE_LEADING),
 	CRASHTYPE(STACK_GUARD_PAGE_TRAILING),
 	CRASHTYPE(UNSET_SMEP),
+	CRASHTYPE(CORRUPT_PAC),
 	CRASHTYPE(UNALIGNED_LOAD_STORE_WRITE),
 	CRASHTYPE(OVERWRITE_ALLOCATION),
 	CRASHTYPE(WRITE_AFTER_FREE),
diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index ab446e0..bf12b68 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -28,6 +28,7 @@ void lkdtm_CORRUPT_USER_DS(void);
 void lkdtm_STACK_GUARD_PAGE_LEADING(void);
 void lkdtm_STACK_GUARD_PAGE_TRAILING(void);
 void lkdtm_UNSET_SMEP(void);
+void lkdtm_CORRUPT_PAC(void);
 
 /* lkdtm_heap.c */
 void __init lkdtm_heap_init(void);
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 00/14] arm64: return address signing
  2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
                   ` (13 preceding siblings ...)
  2019-11-19 12:32 ` [PATCH v2 14/14] lkdtm: arm64: test kernel pointer authentication Amit Daniel Kachhap
@ 2019-11-20 16:05 ` Ard Biesheuvel
  2019-11-21 12:15   ` Amit Kachhap
  14 siblings, 1 reply; 36+ messages in thread
From: Ard Biesheuvel @ 2019-11-20 16:05 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>
> Hi,
>
> This series improves function return address protection for the arm64 kernel, by
> compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
> ptrauth hereafter). This should help protect the kernel against attacks using
> return-oriented programming.
>
> This series is based on v5.4-rc8.
>
> High-level changes since v1 [1] (detailed changes are listed in patches):
>  - Dropped patch "arm64: cpufeature: handle conflicts based on capability"
>    as pointed by Suzuki.
>  - Patch 4, 10, 12 and 14 are added newly added.
>  - Patch 12 adds support to block probe of authenticate ptrauth instructions.
>  - Patch 14 adds support for lkdtm to test ptrauth.
>  - In the last version if secondary cpus do have ptrauth and primary cpu do not
>    then the secondary will silently disable ptrauth and keep running. This version
>    creates panic in this case as suggested by Suzuki.
>  - Many suggestion from James implemented.
>
> This series do not implement few things or have known limitations:
>  - kdump tool may need some rework to work with ptrauth.
>  - Generate/Get some randomness for ptrauth keys during kernel early booting.
>

Hello Amit,

As we discussed off line, we still need some place to initialize the
PAC keys for the boot CPU.

We should follow the same approach as boot_init_stack_canary() is
currently taking: it is called from start_kernel(), never returns, and
it is marked as __always_inline, which means it does not set up a
stack frame and so its return address will not get signed with the
wrong key.

Something like the below should be acceptable for a generic header
file, and we can wire up kernel PAC in the arm64 version of the
stackprotector.h header whichever way we like.

-- 
Ard.




diff --git a/include/linux/stackprotector.h b/include/linux/stackprotector.h
index 6b792d080eee..4c678c4fec58 100644
--- a/include/linux/stackprotector.h
+++ b/include/linux/stackprotector.h
@@ -6,7 +6,7 @@
 #include <linux/sched.h>
 #include <linux/random.h>

-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) || defined(CONFIG_ARM64_PTR_AUTH)
 # include <asm/stackprotector.h>
 #else
 static inline void boot_init_stack_canary(void)


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 00/14] arm64: return address signing
  2019-11-20 16:05 ` [PATCH v2 00/14] arm64: return address signing Ard Biesheuvel
@ 2019-11-21 12:15   ` Amit Kachhap
  0 siblings, 0 replies; 36+ messages in thread
From: Amit Kachhap @ 2019-11-21 12:15 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

Hi Ard,

On 11/20/19 9:35 PM, Ard Biesheuvel wrote:
> On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>
>> Hi,
>>
>> This series improves function return address protection for the arm64 kernel, by
>> compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
>> ptrauth hereafter). This should help protect the kernel against attacks using
>> return-oriented programming.
>>
>> This series is based on v5.4-rc8.
>>
>> High-level changes since v1 [1] (detailed changes are listed in patches):
>>   - Dropped patch "arm64: cpufeature: handle conflicts based on capability"
>>     as pointed by Suzuki.
>>   - Patch 4, 10, 12 and 14 are added newly added.
>>   - Patch 12 adds support to block probe of authenticate ptrauth instructions.
>>   - Patch 14 adds support for lkdtm to test ptrauth.
>>   - In the last version if secondary cpus do have ptrauth and primary cpu do not
>>     then the secondary will silently disable ptrauth and keep running. This version
>>     creates panic in this case as suggested by Suzuki.
>>   - Many suggestion from James implemented.
>>
>> This series do not implement few things or have known limitations:
>>   - kdump tool may need some rework to work with ptrauth.
>>   - Generate/Get some randomness for ptrauth keys during kernel early booting.
>>
> 
> Hello Amit,
> 
> As we discussed off line, we still need some place to initialize the
> PAC keys for the boot CPU.
> 
> We should follow the same approach as boot_init_stack_canary() is
> currently taking: it is called from start_kernel(), never returns, and
> it is marked as __always_inline, which means it does not set up a
> stack frame and so its return address will not get signed with the
> wrong key.
> 
> Something like the below should be acceptable for a generic header
> file, and we can wire up kernel PAC in the arm64 version of the
> stackprotector.h header whichever way we like.
> 

This seems to be a practical approach. I tested it on my local system
and it works fine. The few functions that run before
boot_init_stack_canary() can afford to run without keys, as the
randomization driver is not initialised yet. Thanks for the pointer.

Regards,
Amit Daniel


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 13/14] arm64: compile the kernel with ptrauth return address signing
  2019-11-19 12:32 ` [PATCH v2 13/14] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
@ 2019-11-21 15:06   ` Mark Brown
  2019-11-26  7:00     ` Amit Kachhap
  2019-11-25 17:35   ` Mark Brown
  1 sibling, 1 reply; 36+ messages in thread
From: Mark Brown @ 2019-11-21 15:06 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel



On Tue, Nov 19, 2019 at 06:02:25PM +0530, Amit Daniel Kachhap wrote:

> +config CC_HAS_BRANCH_PROT_PAC_RET
> +	# GCC 9 or later
> +	def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)

clang also supports this option, as of clang-8 I think.

> +ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
> +pac-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
> +pac-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
> +KBUILD_CFLAGS += $(pac-flags-y)
> +endif

This is going to be a bit annoying with BTI, as we need to set
-mbranch-protection=bti too.  This means we end up with the type
bti+pac-ret+leaf, which is annoying to arrange.  There is the
convenient branch protection type 'standard' which enables both in one
option, but that only enables non-leaf pac-ret, so you need to spell
out pac-ret explicitly in order to turn on leaf as well.  I'm not sure
I can think of anything much better than adding another case for BTI
at the top, so we end up with something along the lines of:

ifeq ($(CONFIG_ARM64_BTI_KERNEL),y)
branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_BTI) := -mbranch-protection=bti+pac-ret+leaf
else ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
endif
KBUILD_CFLAGS += $(branch-prot-flags-y)

with a separate section for the sign-return-address case.  Splitting
things up with a more generic name now would avoid the immediate
refactoring when BTI support is added.



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 14/14] lkdtm: arm64: test kernel pointer authentication
  2019-11-19 12:32 ` [PATCH v2 14/14] lkdtm: arm64: test kernel pointer authentication Amit Daniel Kachhap
@ 2019-11-21 17:39   ` Ard Biesheuvel
  2019-11-22 18:51     ` Richard Henderson
  2019-11-25  5:34     ` Amit Kachhap
  0 siblings, 2 replies; 36+ messages in thread
From: Ard Biesheuvel @ 2019-11-21 17:39 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>
> This test is specific for arm64. When in-kernel Pointer Authentication
> config is enabled, the return address stored in the stack is signed. This
> feature helps in ROP kind of attack. If the matching signature is corrupted
> then this will fail in authentication and lead to abort.
>
> e.g.
> echo CORRUPT_PAC > /sys/kernel/debug/provoke-crash/DIRECT
>
> [   13.118166] lkdtm: Performing direct entry CORRUPT_PAC
> [   13.118298] lkdtm: Clearing PAC from the return address
> [   13.118466] Unable to handle kernel paging request at virtual address bfff8000108648ec
> [   13.118626] Mem abort info:
> [   13.118666]   ESR = 0x86000004
> [   13.118866]   EC = 0x21: IABT (current EL), IL = 32 bits
> [   13.118966]   SET = 0, FnV = 0
> [   13.119117]   EA = 0, S1PTW = 0
>
> Cc: Kees Cook <keescook@chromium.org>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Change since last version:
>  * New patch
>
>  drivers/misc/lkdtm/bugs.c  | 17 +++++++++++++++++
>  drivers/misc/lkdtm/core.c  |  1 +
>  drivers/misc/lkdtm/lkdtm.h |  1 +
>  3 files changed, 19 insertions(+)
>
> diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
> index 7284a22..c9bb493 100644
> --- a/drivers/misc/lkdtm/bugs.c
> +++ b/drivers/misc/lkdtm/bugs.c
> @@ -337,3 +337,20 @@ void lkdtm_UNSET_SMEP(void)
>         pr_err("FAIL: this test is x86_64-only\n");
>  #endif
>  }
> +
> +void lkdtm_CORRUPT_PAC(void)
> +{
> +#if IS_ENABLED(CONFIG_ARM64_PTR_AUTH)
> +       u64 ret;
> +
> +       pr_info("Clearing PAC from the return address\n");
> +       /*
> +        * __builtin_return_address masks the PAC bits of return
> +        * address, so set the same again.
> +        */
> +       ret = (u64)__builtin_return_address(0);
> +       asm volatile("str %0, [sp, 8]" : : "r" (ret) : "memory");

This looks a bit fragile to me. You are making assumptions about the
location of the return address in the stack frame which are not
guaranteed to hold.

Couldn't you do this test simply by changing the key?

> +#else
> +       pr_err("FAIL: For arm64 pointer authentication capable systems only\n");
> +#endif
> +}
> diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
> index cbc4c90..b9c9927 100644
> --- a/drivers/misc/lkdtm/core.c
> +++ b/drivers/misc/lkdtm/core.c
> @@ -116,6 +116,7 @@ static const struct crashtype crashtypes[] = {
>         CRASHTYPE(STACK_GUARD_PAGE_LEADING),
>         CRASHTYPE(STACK_GUARD_PAGE_TRAILING),
>         CRASHTYPE(UNSET_SMEP),
> +       CRASHTYPE(CORRUPT_PAC),
>         CRASHTYPE(UNALIGNED_LOAD_STORE_WRITE),
>         CRASHTYPE(OVERWRITE_ALLOCATION),
>         CRASHTYPE(WRITE_AFTER_FREE),
> diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
> index ab446e0..bf12b68 100644
> --- a/drivers/misc/lkdtm/lkdtm.h
> +++ b/drivers/misc/lkdtm/lkdtm.h
> @@ -28,6 +28,7 @@ void lkdtm_CORRUPT_USER_DS(void);
>  void lkdtm_STACK_GUARD_PAGE_LEADING(void);
>  void lkdtm_STACK_GUARD_PAGE_TRAILING(void);
>  void lkdtm_UNSET_SMEP(void);
> +void lkdtm_CORRUPT_PAC(void);
>
>  /* lkdtm_heap.c */
>  void __init lkdtm_heap_init(void);
> --
> 2.7.4
>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address
  2019-11-19 12:32 ` [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
@ 2019-11-21 17:42   ` Ard Biesheuvel
  2019-11-22  8:48     ` Richard Henderson
  2019-11-25  5:42     ` Amit Kachhap
  0 siblings, 2 replies; 36+ messages in thread
From: Ard Biesheuvel @ 2019-11-21 17:42 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>
> This patch redefines __builtin_return_address to mask the PAC bits
> when Pointer Authentication is enabled. As __builtin_return_address
> is mostly used to refer to the caller function's symbol address,
> masking the runtime-generated PAC bits helps the lookup find a match.
>
> This change fixes utilities like cat /proc/vmallocinfo so that they
> now show the correct symbol names.
>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Change since last version:
>  * Comment modified.
>
>  arch/arm64/Kconfig                |  1 +
>  arch/arm64/include/asm/compiler.h | 17 +++++++++++++++++
>  2 files changed, 18 insertions(+)
>  create mode 100644 arch/arm64/include/asm/compiler.h
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 998248e..c1844de 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -117,6 +117,7 @@ config ARM64
>         select HAVE_ALIGNED_STRUCT_PAGE if SLUB
>         select HAVE_ARCH_AUDITSYSCALL
>         select HAVE_ARCH_BITREVERSE
> +       select HAVE_ARCH_COMPILER_H
>         select HAVE_ARCH_HUGE_VMAP
>         select HAVE_ARCH_JUMP_LABEL
>         select HAVE_ARCH_JUMP_LABEL_RELATIVE
> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> new file mode 100644
> index 0000000..5efe310
> --- /dev/null
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -0,0 +1,17 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_ARM_COMPILER_H
> +#define __ASM_ARM_COMPILER_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#if defined(CONFIG_ARM64_PTR_AUTH)
> +
> +/* As TBI1 is disabled currently, so bits 63:56 also has PAC */
> +#define __builtin_return_address(val)                          \
> +       (void *)((unsigned long)__builtin_return_address(val) | \
> +       (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS)))
> +#endif
> +
> +#endif
> +
> +#endif /* __ASM_ARM_COMPILER_H */

It seems to me like we are accumulating a lot of cruft for khwasan as
well as PAC to convert addresses into their untagged format.

Are there any untagging helpers we can already reuse? If not, can we
introduce something that can be shared between all these use cases?


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address
  2019-11-21 17:42   ` Ard Biesheuvel
@ 2019-11-22  8:48     ` Richard Henderson
  2019-11-22 13:27       ` Ard Biesheuvel
  2019-11-25  9:12       ` Amit Kachhap
  2019-11-25  5:42     ` Amit Kachhap
  1 sibling, 2 replies; 36+ messages in thread
From: Richard Henderson @ 2019-11-22  8:48 UTC (permalink / raw)
  To: Ard Biesheuvel, Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

On 11/21/19 5:42 PM, Ard Biesheuvel wrote:
> On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>
>> This patch redefines __builtin_return_address to mask pac bits
>> when Pointer Authentication is enabled. As __builtin_return_address
>> is used mostly used to refer to the caller function symbol address
>> so masking runtime generated pac bits will help to find the match.
>>
>> This change fixes the utilities like cat /proc/vmallocinfo to now
>> show the correct logs.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Change since last version:
>>  * Comment modified.
>>
>>  arch/arm64/Kconfig                |  1 +
>>  arch/arm64/include/asm/compiler.h | 17 +++++++++++++++++
>>  2 files changed, 18 insertions(+)
>>  create mode 100644 arch/arm64/include/asm/compiler.h
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 998248e..c1844de 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -117,6 +117,7 @@ config ARM64
>>         select HAVE_ALIGNED_STRUCT_PAGE if SLUB
>>         select HAVE_ARCH_AUDITSYSCALL
>>         select HAVE_ARCH_BITREVERSE
>> +       select HAVE_ARCH_COMPILER_H
>>         select HAVE_ARCH_HUGE_VMAP
>>         select HAVE_ARCH_JUMP_LABEL
>>         select HAVE_ARCH_JUMP_LABEL_RELATIVE
>> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
>> new file mode 100644
>> index 0000000..5efe310
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/compiler.h
>> @@ -0,0 +1,17 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +#ifndef __ASM_ARM_COMPILER_H
>> +#define __ASM_ARM_COMPILER_H
>> +
>> +#ifndef __ASSEMBLY__
>> +
>> +#if defined(CONFIG_ARM64_PTR_AUTH)
>> +
>> +/* As TBI1 is disabled currently, so bits 63:56 also has PAC */
>> +#define __builtin_return_address(val)                          \
>> +       (void *)((unsigned long)__builtin_return_address(val) | \
>> +       (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS)))
>> +#endif
>> +
>> +#endif
>> +
>> +#endif /* __ASM_ARM_COMPILER_H */
> 
> It seems to me like we are accumulating a lot of cruft for khwasan as
> well as PAC to convert address into their untagged format.
> 
> Are there are untagging helpers we can already reuse? If not, can we
> introduce something that can be shared between all these use cases?

xpaci will strip the pac from an instruction pointer, but requires the
instruction set to be enabled, so you'd have to fiddle with alternatives.  You
*could* force the use of lr as input/output and use xpaclri, which is a nop if
the instruction set is not enabled.

Also, this definition is not correct, because bit 55 needs to be
propagated to all of the bits being masked out here, so that you get a
large negative number for kernel-space addresses.


r~


* Re: [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address
  2019-11-22  8:48     ` Richard Henderson
@ 2019-11-22 13:27       ` Ard Biesheuvel
  2019-11-25  9:18         ` Amit Kachhap
  2019-11-25  9:12       ` Amit Kachhap
  1 sibling, 1 reply; 36+ messages in thread
From: Ard Biesheuvel @ 2019-11-22 13:27 UTC (permalink / raw)
  To: Richard Henderson
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Dave Martin, linux-arm-kernel

On Fri, 22 Nov 2019 at 09:48, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 11/21/19 5:42 PM, Ard Biesheuvel wrote:
> > On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
> >>
> >> This patch redefines __builtin_return_address to mask pac bits
> >> when Pointer Authentication is enabled. As __builtin_return_address
> >> is used mostly used to refer to the caller function symbol address
> >> so masking runtime generated pac bits will help to find the match.
> >>
> >> This change fixes the utilities like cat /proc/vmallocinfo to now
> >> show the correct logs.
> >>
> >> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >> ---
> >> Change since last version:
> >>  * Comment modified.
> >>
> >>  arch/arm64/Kconfig                |  1 +
> >>  arch/arm64/include/asm/compiler.h | 17 +++++++++++++++++
> >>  2 files changed, 18 insertions(+)
> >>  create mode 100644 arch/arm64/include/asm/compiler.h
> >>
> >> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> >> index 998248e..c1844de 100644
> >> --- a/arch/arm64/Kconfig
> >> +++ b/arch/arm64/Kconfig
> >> @@ -117,6 +117,7 @@ config ARM64
> >>         select HAVE_ALIGNED_STRUCT_PAGE if SLUB
> >>         select HAVE_ARCH_AUDITSYSCALL
> >>         select HAVE_ARCH_BITREVERSE
> >> +       select HAVE_ARCH_COMPILER_H
> >>         select HAVE_ARCH_HUGE_VMAP
> >>         select HAVE_ARCH_JUMP_LABEL
> >>         select HAVE_ARCH_JUMP_LABEL_RELATIVE
> >> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> >> new file mode 100644
> >> index 0000000..5efe310
> >> --- /dev/null
> >> +++ b/arch/arm64/include/asm/compiler.h
> >> @@ -0,0 +1,17 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +#ifndef __ASM_ARM_COMPILER_H
> >> +#define __ASM_ARM_COMPILER_H
> >> +
> >> +#ifndef __ASSEMBLY__
> >> +
> >> +#if defined(CONFIG_ARM64_PTR_AUTH)
> >> +
> >> +/* As TBI1 is disabled currently, so bits 63:56 also has PAC */
> >> +#define __builtin_return_address(val)                          \
> >> +       (void *)((unsigned long)__builtin_return_address(val) | \
> >> +       (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS)))
> >> +#endif
> >> +
> >> +#endif
> >> +
> >> +#endif /* __ASM_ARM_COMPILER_H */
> >
> > It seems to me like we are accumulating a lot of cruft for khwasan as
> > well as PAC to convert address into their untagged format.
> >
> > Are there are untagging helpers we can already reuse? If not, can we
> > introduce something that can be shared between all these use cases?
>
> xpaci will strip the pac from an instruction pointer, but requires the
> instruction set to be enabled, so you'd have to fiddle with alternatives.  You
> *could* force the use of lr as input/output and use xpaclri, which is a nop if
> the instruction set is not enabled.
>
> Also, this definition of is not correct, because bit 55 needs to be propagated
> to all of the bits being masked out here, so that you get a large negative
> number for kernel space addresses.
>

Indeed. Even though bit 55 is generally guaranteed to be set, it would
be better to simply reuse ptrauth_strip_insn_pac() that you introduce
in the next patch.

Also, please use __ASM_COMPILER_H as the header guard (which is more
idiomatic), and drop the unnecessary 'ifndef __ASSEMBLY__'.

Finally, could you add a comment that this header is transitively
included (via include/linux/compiler_types.h) on the compiler command
line, so it is guaranteed to be loaded by users of this macro, and
there is no risk of the wrong version being used.


* Re: [PATCH v2 06/14] arm64: rename ptrauth key structures to be user-specific
  2019-11-19 12:32 ` [PATCH v2 06/14] arm64: rename ptrauth key structures to be user-specific Amit Daniel Kachhap
@ 2019-11-22 13:28   ` Ard Biesheuvel
  2019-11-25  9:22     ` Amit Kachhap
  0 siblings, 1 reply; 36+ messages in thread
From: Ard Biesheuvel @ 2019-11-22 13:28 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>
> From: Kristina Martsenko <kristina.martsenko@arm.com>
>
> We currently enable ptrauth for userspace, but do not use it within the
> kernel. We're going to enable it for the kernel, and will need to manage
> a separate set of ptrauth keys for the kernel.
>
> We currently keep all 5 keys in struct ptrauth_keys. However, as the
> kernel will only need to use 1 key, it is a bit wasteful to allocate a
> whole ptrauth_keys struct for every thread.
>
> Therefore, a subsequent patch will define a separate struct, with only 1
> key, for the kernel. In preparation for that, rename the existing struct
> (and associated macros and functions) to reflect that they are specific
> to userspace.
>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Could we combine this patch with #2 somehow? You are modifying lots of
code that you just introduced there.

> ---
> Changes since last version:
> * None
>
>  arch/arm64/include/asm/asm_pointer_auth.h | 10 +++++-----
>  arch/arm64/include/asm/pointer_auth.h     |  6 +++---
>  arch/arm64/include/asm/processor.h        |  2 +-
>  arch/arm64/kernel/asm-offsets.c           | 10 +++++-----
>  arch/arm64/kernel/pointer_auth.c          |  4 ++--
>  arch/arm64/kernel/ptrace.c                | 16 ++++++++--------
>  6 files changed, 24 insertions(+), 24 deletions(-)
>
> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> index cb21a06..3d39788 100644
> --- a/arch/arm64/include/asm/asm_pointer_auth.h
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -15,21 +15,21 @@
>  alternative_if_not ARM64_HAS_ADDRESS_AUTH
>         b       .Laddr_auth_skip_\@
>  alternative_else_nop_endif
> -       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
> +       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
>         msr_s   SYS_APIAKEYLO_EL1, \tmp2
>         msr_s   SYS_APIAKEYHI_EL1, \tmp3
> -       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
> +       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
>         msr_s   SYS_APIBKEYLO_EL1, \tmp2
>         msr_s   SYS_APIBKEYHI_EL1, \tmp3
> -       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
> +       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA]
>         msr_s   SYS_APDAKEYLO_EL1, \tmp2
>         msr_s   SYS_APDAKEYHI_EL1, \tmp3
> -       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
> +       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB]
>         msr_s   SYS_APDBKEYLO_EL1, \tmp2
>         msr_s   SYS_APDBKEYHI_EL1, \tmp3
>  .Laddr_auth_skip_\@:
>  alternative_if ARM64_HAS_GENERIC_AUTH
> -       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
> +       ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA]
>         msr_s   SYS_APGAKEYLO_EL1, \tmp2
>         msr_s   SYS_APGAKEYHI_EL1, \tmp3
>  alternative_else_nop_endif
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index 21c2115..cc42145 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -22,7 +22,7 @@ struct ptrauth_key {
>   * We give each process its own keys, which are shared by all threads. The keys
>   * are inherited upon fork(), and reinitialised upon exec*().
>   */
> -struct ptrauth_keys {
> +struct ptrauth_keys_user {
>         struct ptrauth_key apia;
>         struct ptrauth_key apib;
>         struct ptrauth_key apda;
> @@ -30,7 +30,7 @@ struct ptrauth_keys {
>         struct ptrauth_key apga;
>  };
>
> -static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
> +static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
>  {
>         if (system_supports_address_auth()) {
>                 get_random_bytes(&keys->apia, sizeof(keys->apia));
> @@ -58,7 +58,7 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  }
>
>  #define ptrauth_thread_init_user(tsk)                                  \
> -       ptrauth_keys_init(&(tsk)->thread.keys_user)
> +       ptrauth_keys_init_user(&(tsk)->thread.keys_user)
>
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)     (-EINVAL)
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 5623685..8ec792d 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -144,7 +144,7 @@ struct thread_struct {
>         unsigned long           fault_code;     /* ESR_EL1 value */
>         struct debug_info       debug;          /* debugging */
>  #ifdef CONFIG_ARM64_PTR_AUTH
> -       struct ptrauth_keys     keys_user;
> +       struct ptrauth_keys_user        keys_user;
>  #endif
>  };
>
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index ef0c24b..cf15182 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -131,11 +131,11 @@ int main(void)
>    DEFINE(SDEI_EVENT_PRIORITY,  offsetof(struct sdei_registered_event, priority));
>  #endif
>  #ifdef CONFIG_ARM64_PTR_AUTH
> -  DEFINE(PTRAUTH_KEY_APIA,     offsetof(struct ptrauth_keys, apia));
> -  DEFINE(PTRAUTH_KEY_APIB,     offsetof(struct ptrauth_keys, apib));
> -  DEFINE(PTRAUTH_KEY_APDA,     offsetof(struct ptrauth_keys, apda));
> -  DEFINE(PTRAUTH_KEY_APDB,     offsetof(struct ptrauth_keys, apdb));
> -  DEFINE(PTRAUTH_KEY_APGA,     offsetof(struct ptrauth_keys, apga));
> +  DEFINE(PTRAUTH_USER_KEY_APIA,                offsetof(struct ptrauth_keys_user, apia));
> +  DEFINE(PTRAUTH_USER_KEY_APIB,                offsetof(struct ptrauth_keys_user, apib));
> +  DEFINE(PTRAUTH_USER_KEY_APDA,                offsetof(struct ptrauth_keys_user, apda));
> +  DEFINE(PTRAUTH_USER_KEY_APDB,                offsetof(struct ptrauth_keys_user, apdb));
> +  DEFINE(PTRAUTH_USER_KEY_APGA,                offsetof(struct ptrauth_keys_user, apga));
>    BLANK();
>  #endif
>    return 0;
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index 95985be..1e77736 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -9,7 +9,7 @@
>
>  int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  {
> -       struct ptrauth_keys *keys = &tsk->thread.keys_user;
> +       struct ptrauth_keys_user *keys = &tsk->thread.keys_user;
>         unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
>                                       PR_PAC_APDAKEY | PR_PAC_APDBKEY;
>         unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
> @@ -18,7 +18,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>                 return -EINVAL;
>
>         if (!arg) {
> -               ptrauth_keys_init(keys);
> +               ptrauth_keys_init_user(keys);
>                 return 0;
>         }
>
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index 21176d0..0793739 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -986,7 +986,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey)
>  }
>
>  static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
> -                                    const struct ptrauth_keys *keys)
> +                                    const struct ptrauth_keys_user *keys)
>  {
>         ukeys->apiakey = pac_key_to_user(&keys->apia);
>         ukeys->apibkey = pac_key_to_user(&keys->apib);
> @@ -994,7 +994,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
>         ukeys->apdbkey = pac_key_to_user(&keys->apdb);
>  }
>
> -static void pac_address_keys_from_user(struct ptrauth_keys *keys,
> +static void pac_address_keys_from_user(struct ptrauth_keys_user *keys,
>                                        const struct user_pac_address_keys *ukeys)
>  {
>         keys->apia = pac_key_from_user(ukeys->apiakey);
> @@ -1008,7 +1008,7 @@ static int pac_address_keys_get(struct task_struct *target,
>                                 unsigned int pos, unsigned int count,
>                                 void *kbuf, void __user *ubuf)
>  {
> -       struct ptrauth_keys *keys = &target->thread.keys_user;
> +       struct ptrauth_keys_user *keys = &target->thread.keys_user;
>         struct user_pac_address_keys user_keys;
>
>         if (!system_supports_address_auth())
> @@ -1025,7 +1025,7 @@ static int pac_address_keys_set(struct task_struct *target,
>                                 unsigned int pos, unsigned int count,
>                                 const void *kbuf, const void __user *ubuf)
>  {
> -       struct ptrauth_keys *keys = &target->thread.keys_user;
> +       struct ptrauth_keys_user *keys = &target->thread.keys_user;
>         struct user_pac_address_keys user_keys;
>         int ret;
>
> @@ -1043,12 +1043,12 @@ static int pac_address_keys_set(struct task_struct *target,
>  }
>
>  static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys,
> -                                    const struct ptrauth_keys *keys)
> +                                    const struct ptrauth_keys_user *keys)
>  {
>         ukeys->apgakey = pac_key_to_user(&keys->apga);
>  }
>
> -static void pac_generic_keys_from_user(struct ptrauth_keys *keys,
> +static void pac_generic_keys_from_user(struct ptrauth_keys_user *keys,
>                                        const struct user_pac_generic_keys *ukeys)
>  {
>         keys->apga = pac_key_from_user(ukeys->apgakey);
> @@ -1059,7 +1059,7 @@ static int pac_generic_keys_get(struct task_struct *target,
>                                 unsigned int pos, unsigned int count,
>                                 void *kbuf, void __user *ubuf)
>  {
> -       struct ptrauth_keys *keys = &target->thread.keys_user;
> +       struct ptrauth_keys_user *keys = &target->thread.keys_user;
>         struct user_pac_generic_keys user_keys;
>
>         if (!system_supports_generic_auth())
> @@ -1076,7 +1076,7 @@ static int pac_generic_keys_set(struct task_struct *target,
>                                 unsigned int pos, unsigned int count,
>                                 const void *kbuf, const void __user *ubuf)
>  {
> -       struct ptrauth_keys *keys = &target->thread.keys_user;
> +       struct ptrauth_keys_user *keys = &target->thread.keys_user;
>         struct user_pac_generic_keys user_keys;
>         int ret;
>
> --
> 2.7.4
>


* Re: [PATCH v2 14/14] lkdtm: arm64: test kernel pointer authentication
  2019-11-21 17:39   ` Ard Biesheuvel
@ 2019-11-22 18:51     ` Richard Henderson
  2019-11-25  9:25       ` Amit Kachhap
  2019-11-25  5:34     ` Amit Kachhap
  1 sibling, 1 reply; 36+ messages in thread
From: Richard Henderson @ 2019-11-22 18:51 UTC (permalink / raw)
  To: Ard Biesheuvel, Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

On 11/21/19 5:39 PM, Ard Biesheuvel wrote:
> On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>
>> This test is specific for arm64. When in-kernel Pointer Authentication
>> config is enabled, the return address stored in the stack is signed. This
>> feature helps in ROP kind of attack. If the matching signature is corrupted
>> then this will fail in authentication and lead to abort.
>>
>> e.g.
>> echo CORRUPT_PAC > /sys/kernel/debug/provoke-crash/DIRECT
>>
>> [   13.118166] lkdtm: Performing direct entry CORRUPT_PAC
>> [   13.118298] lkdtm: Clearing PAC from the return address
>> [   13.118466] Unable to handle kernel paging request at virtual address bfff8000108648ec
>> [   13.118626] Mem abort info:
>> [   13.118666]   ESR = 0x86000004
>> [   13.118866]   EC = 0x21: IABT (current EL), IL = 32 bits
>> [   13.118966]   SET = 0, FnV = 0
>> [   13.119117]   EA = 0, S1PTW = 0
>>
>> Cc: Kees Cook <keescook@chromium.org>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Change since last version:
>>  * New patch
>>
>>  drivers/misc/lkdtm/bugs.c  | 17 +++++++++++++++++
>>  drivers/misc/lkdtm/core.c  |  1 +
>>  drivers/misc/lkdtm/lkdtm.h |  1 +
>>  3 files changed, 19 insertions(+)
>>
>> diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
>> index 7284a22..c9bb493 100644
>> --- a/drivers/misc/lkdtm/bugs.c
>> +++ b/drivers/misc/lkdtm/bugs.c
>> @@ -337,3 +337,20 @@ void lkdtm_UNSET_SMEP(void)
>>         pr_err("FAIL: this test is x86_64-only\n");
>>  #endif
>>  }
>> +
>> +void lkdtm_CORRUPT_PAC(void)
>> +{
>> +#if IS_ENABLED(CONFIG_ARM64_PTR_AUTH)
>> +       u64 ret;
>> +
>> +       pr_info("Clearing PAC from the return address\n");
>> +       /*
>> +        * __builtin_return_address masks the PAC bits of return
>> +        * address, so set the same again.
>> +        */
>> +       ret = (u64)__builtin_return_address(0);
>> +       asm volatile("str %0, [sp, 8]" : : "r" (ret) : "memory");
> 
> This looks a bit fragile to me. You are making assumptions about the
> location of the return address in the stack frame which are not
> guaranteed to hold.

Indeed.

> Couldn't you do this test simply by changing the key?

That, at least, means you don't have to know the stack frame layout.  However,
there's a chance (1/32767, I think, for the 48-bit vma case w/o TBI) that
changing the key will result in the same hash.

Even when the stack frame happens to be laid out as Amit guesses, the result
is akin to changing the key, such that hash(key, salt, ptr) == 0.

While testing this in qemu, I iterate until I find a <key, salt, ptr> tuple
that definitely produces a different hash.  Usually this loop iterates just
once, but the occasional failures that I got without iterating were annoying
(with TBI enabled in userspace, the odds drop to 1 in 127, so collisions are
much more frequent).
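A rough sanity check of those odds, assuming the PAC occupies bits
54:VA_BITS plus, without TBI, bits 63:56 (bit 55 is the address-space
select bit and never holds PAC); the helper below is purely illustrative:

```c
/*
 * Count the PAC bits for a given VA size: bits 54:va_bits always,
 * plus the top byte (63:56) when TBI is disabled (with TBI that
 * byte is a tag, not PAC).
 */
static int pac_bits(int va_bits, int tbi)
{
	int bits = 55 - va_bits;	/* bits 54:va_bits */

	if (!tbi)
		bits += 8;		/* bits 63:56 */
	return bits;
}
```

The same-hash odds quoted above then come out as 1 in 2^bits - 1: 15 PAC
bits for 48-bit VAs without TBI gives 1/32767, and 7 bits with TBI gives
1/127.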


r~


* Re: [PATCH v2 07/14] arm64: initialize and switch ptrauth kernel keys
  2019-11-19 12:32 ` [PATCH v2 07/14] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
@ 2019-11-22 19:19   ` Richard Henderson
  2019-11-25  9:34     ` Amit Kachhap
  0 siblings, 1 reply; 36+ messages in thread
From: Richard Henderson @ 2019-11-22 19:19 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin

On 11/19/19 12:32 PM, Amit Daniel Kachhap wrote:
> --- a/arch/arm64/include/asm/asm_pointer_auth.h
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -35,11 +35,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH
>  alternative_else_nop_endif
>  	.endm
>  
> +	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
> +	mov	\tmp1, #THREAD_KEYS_KERNEL
> +	add	\tmp1, \tsk, \tmp1
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
> +	msr_s	SYS_APIAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APIAKEYHI_EL1, \tmp3
> +	isb
> +alternative_else_nop_endif
> +	.endm

Any reason you didn't put the first two insns in the alternative?

You could have re-used tmp1 instead of requiring tmp3, but at no point are we
lacking tmp registers so it doesn't matter.


r~


* Re: [PATCH v2 14/14] lkdtm: arm64: test kernel pointer authentication
  2019-11-21 17:39   ` Ard Biesheuvel
  2019-11-22 18:51     ` Richard Henderson
@ 2019-11-25  5:34     ` Amit Kachhap
  1 sibling, 0 replies; 36+ messages in thread
From: Amit Kachhap @ 2019-11-25  5:34 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

Hi,

On 11/21/19 11:09 PM, Ard Biesheuvel wrote:
> On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>
>> This test is specific for arm64. When in-kernel Pointer Authentication
>> config is enabled, the return address stored in the stack is signed. This
>> feature helps in ROP kind of attack. If the matching signature is corrupted
>> then this will fail in authentication and lead to abort.
>>
>> e.g.
>> echo CORRUPT_PAC > /sys/kernel/debug/provoke-crash/DIRECT
>>
>> [   13.118166] lkdtm: Performing direct entry CORRUPT_PAC
>> [   13.118298] lkdtm: Clearing PAC from the return address
>> [   13.118466] Unable to handle kernel paging request at virtual address bfff8000108648ec
>> [   13.118626] Mem abort info:
>> [   13.118666]   ESR = 0x86000004
>> [   13.118866]   EC = 0x21: IABT (current EL), IL = 32 bits
>> [   13.118966]   SET = 0, FnV = 0
>> [   13.119117]   EA = 0, S1PTW = 0
>>
>> Cc: Kees Cook <keescook@chromium.org>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Change since last version:
>>   * New patch
>>
>>   drivers/misc/lkdtm/bugs.c  | 17 +++++++++++++++++
>>   drivers/misc/lkdtm/core.c  |  1 +
>>   drivers/misc/lkdtm/lkdtm.h |  1 +
>>   3 files changed, 19 insertions(+)
>>
>> diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
>> index 7284a22..c9bb493 100644
>> --- a/drivers/misc/lkdtm/bugs.c
>> +++ b/drivers/misc/lkdtm/bugs.c
>> @@ -337,3 +337,20 @@ void lkdtm_UNSET_SMEP(void)
>>          pr_err("FAIL: this test is x86_64-only\n");
>>   #endif
>>   }
>> +
>> +void lkdtm_CORRUPT_PAC(void)
>> +{
>> +#if IS_ENABLED(CONFIG_ARM64_PTR_AUTH)
>> +       u64 ret;
>> +
>> +       pr_info("Clearing PAC from the return address\n");
>> +       /*
>> +        * __builtin_return_address masks the PAC bits of return
>> +        * address, so set the same again.
>> +        */
>> +       ret = (u64)__builtin_return_address(0);
>> +       asm volatile("str %0, [sp, 8]" : : "r" (ret) : "memory");
> 
> This looks a bit fragile to me. You are making assumptions about the
> location of the return address in the stack frame which are not
> guaranteed to hold.
Yes agreed.
> 
> Couldn't you do this test simply by changing the key?
Yes it should be possible. I will update it in my next iteration.

Regards,
Amit D
> 
>> +#else
>> +       pr_err("FAIL: For arm64 pointer authentication capable systems only\n");
>> +#endif
>> +}
>> diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
>> index cbc4c90..b9c9927 100644
>> --- a/drivers/misc/lkdtm/core.c
>> +++ b/drivers/misc/lkdtm/core.c
>> @@ -116,6 +116,7 @@ static const struct crashtype crashtypes[] = {
>>          CRASHTYPE(STACK_GUARD_PAGE_LEADING),
>>          CRASHTYPE(STACK_GUARD_PAGE_TRAILING),
>>          CRASHTYPE(UNSET_SMEP),
>> +       CRASHTYPE(CORRUPT_PAC),
>>          CRASHTYPE(UNALIGNED_LOAD_STORE_WRITE),
>>          CRASHTYPE(OVERWRITE_ALLOCATION),
>>          CRASHTYPE(WRITE_AFTER_FREE),
>> diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
>> index ab446e0..bf12b68 100644
>> --- a/drivers/misc/lkdtm/lkdtm.h
>> +++ b/drivers/misc/lkdtm/lkdtm.h
>> @@ -28,6 +28,7 @@ void lkdtm_CORRUPT_USER_DS(void);
>>   void lkdtm_STACK_GUARD_PAGE_LEADING(void);
>>   void lkdtm_STACK_GUARD_PAGE_TRAILING(void);
>>   void lkdtm_UNSET_SMEP(void);
>> +void lkdtm_CORRUPT_PAC(void);
>>
>>   /* lkdtm_heap.c */
>>   void __init lkdtm_heap_init(void);
>> --
>> 2.7.4
>>


* Re: [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address
  2019-11-21 17:42   ` Ard Biesheuvel
  2019-11-22  8:48     ` Richard Henderson
@ 2019-11-25  5:42     ` Amit Kachhap
  1 sibling, 0 replies; 36+ messages in thread
From: Amit Kachhap @ 2019-11-25  5:42 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

Hi Ard,

On 11/21/19 11:12 PM, Ard Biesheuvel wrote:
> On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>
>> This patch redefines __builtin_return_address to mask pac bits
>> when Pointer Authentication is enabled. As __builtin_return_address
>> is used mostly used to refer to the caller function symbol address
>> so masking runtime generated pac bits will help to find the match.
>>
>> This change fixes the utilities like cat /proc/vmallocinfo to now
>> show the correct logs.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Change since last version:
>>   * Comment modified.
>>
>>   arch/arm64/Kconfig                |  1 +
>>   arch/arm64/include/asm/compiler.h | 17 +++++++++++++++++
>>   2 files changed, 18 insertions(+)
>>   create mode 100644 arch/arm64/include/asm/compiler.h
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 998248e..c1844de 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -117,6 +117,7 @@ config ARM64
>>          select HAVE_ALIGNED_STRUCT_PAGE if SLUB
>>          select HAVE_ARCH_AUDITSYSCALL
>>          select HAVE_ARCH_BITREVERSE
>> +       select HAVE_ARCH_COMPILER_H
>>          select HAVE_ARCH_HUGE_VMAP
>>          select HAVE_ARCH_JUMP_LABEL
>>          select HAVE_ARCH_JUMP_LABEL_RELATIVE
>> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
>> new file mode 100644
>> index 0000000..5efe310
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/compiler.h
>> @@ -0,0 +1,17 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +#ifndef __ASM_ARM_COMPILER_H
>> +#define __ASM_ARM_COMPILER_H
>> +
>> +#ifndef __ASSEMBLY__
>> +
>> +#if defined(CONFIG_ARM64_PTR_AUTH)
>> +
>> +/* As TBI1 is disabled currently, so bits 63:56 also has PAC */
>> +#define __builtin_return_address(val)                          \
>> +       (void *)((unsigned long)__builtin_return_address(val) | \
>> +       (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS)))
>> +#endif
>> +
>> +#endif
>> +
>> +#endif /* __ASM_ARM_COMPILER_H */
> 
> It seems to me like we are accumulating a lot of cruft for khwasan as
> well as PAC to convert address into their untagged format.

ok

> 
> Are there are untagging helpers we can already reuse? If not, can we
> introduce something that can be shared between all these use cases?

I tried to include <asm/pointer_auth.h> here, but it produced a lot of
header inclusion errors, as include/linux/compiler_types.h, which
includes it, is a very sensitive header.
I will check whether some kind of shared header can be added, or at
least write proper commit logs.

Regards,
Amit D
> 



* Re: [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address
  2019-11-22  8:48     ` Richard Henderson
  2019-11-22 13:27       ` Ard Biesheuvel
@ 2019-11-25  9:12       ` Amit Kachhap
  1 sibling, 0 replies; 36+ messages in thread
From: Amit Kachhap @ 2019-11-25  9:12 UTC (permalink / raw)
  To: Richard Henderson, Ard Biesheuvel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel


Hi,

On 11/22/19 2:18 PM, Richard Henderson wrote:
> On 11/21/19 5:42 PM, Ard Biesheuvel wrote:
>> On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>>
>>> This patch redefines __builtin_return_address to mask pac bits
>>> when Pointer Authentication is enabled. As __builtin_return_address
>>> is used mostly used to refer to the caller function symbol address
>>> so masking runtime generated pac bits will help to find the match.
>>>
>>> This change fixes the utilities like cat /proc/vmallocinfo to now
>>> show the correct logs.
>>>
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> ---
>>> Change since last version:
>>>   * Comment modified.
>>>
>>>   arch/arm64/Kconfig                |  1 +
>>>   arch/arm64/include/asm/compiler.h | 17 +++++++++++++++++
>>>   2 files changed, 18 insertions(+)
>>>   create mode 100644 arch/arm64/include/asm/compiler.h
>>>
>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>> index 998248e..c1844de 100644
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -117,6 +117,7 @@ config ARM64
>>>          select HAVE_ALIGNED_STRUCT_PAGE if SLUB
>>>          select HAVE_ARCH_AUDITSYSCALL
>>>          select HAVE_ARCH_BITREVERSE
>>> +       select HAVE_ARCH_COMPILER_H
>>>          select HAVE_ARCH_HUGE_VMAP
>>>          select HAVE_ARCH_JUMP_LABEL
>>>          select HAVE_ARCH_JUMP_LABEL_RELATIVE
>>> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
>>> new file mode 100644
>>> index 0000000..5efe310
>>> --- /dev/null
>>> +++ b/arch/arm64/include/asm/compiler.h
>>> @@ -0,0 +1,17 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +#ifndef __ASM_ARM_COMPILER_H
>>> +#define __ASM_ARM_COMPILER_H
>>> +
>>> +#ifndef __ASSEMBLY__
>>> +
>>> +#if defined(CONFIG_ARM64_PTR_AUTH)
>>> +
>>> +/* As TBI1 is disabled currently, so bits 63:56 also has PAC */
>>> +#define __builtin_return_address(val)                          \
>>> +       (void *)((unsigned long)__builtin_return_address(val) | \
>>> +       (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS)))
>>> +#endif
>>> +
>>> +#endif
>>> +
>>> +#endif /* __ASM_ARM_COMPILER_H */
>>
>> It seems to me like we are accumulating a lot of cruft for khwasan as
>> well as PAC to convert address into their untagged format.
>>
>> Are there are untagging helpers we can already reuse? If not, can we
>> introduce something that can be shared between all these use cases?
> 
> xpaci will strip the pac from an instruction pointer, but requires the
> instruction set to be enabled, so you'd have to fiddle with alternatives.  You
> *could* force the use of lr as input/output and use xpaclri, which is a nop if
> the instruction set is not enabled.

The xpaclri instruction seems easier to use here, since including any 
header such as "alternative.h" creates a lot of header inclusion 
errors. Thanks for the suggestion.

> 
> Also, this definition of is not correct, because bit 55 needs to be propagated
> to all of the bits being masked out here, so that you get a large negative
> number for kernel space addresses.

Yes, agreed.

Regards,
Amit D
> 
> 
> r~
> 


* Re: [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address
  2019-11-22 13:27       ` Ard Biesheuvel
@ 2019-11-25  9:18         ` Amit Kachhap
  0 siblings, 0 replies; 36+ messages in thread
From: Amit Kachhap @ 2019-11-25  9:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Richard Henderson
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

Hi,

On 11/22/19 6:57 PM, Ard Biesheuvel wrote:
> On Fri, 22 Nov 2019 at 09:48, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> On 11/21/19 5:42 PM, Ard Biesheuvel wrote:
>>> On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>>>
>>>> This patch redefines __builtin_return_address to mask pac bits
>>>> when Pointer Authentication is enabled. As __builtin_return_address
>>>> is used mostly used to refer to the caller function symbol address
>>>> so masking runtime generated pac bits will help to find the match.
>>>>
>>>> This change fixes the utilities like cat /proc/vmallocinfo to now
>>>> show the correct logs.
>>>>
>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>> ---
>>>> Change since last version:
>>>>   * Comment modified.
>>>>
>>>>   arch/arm64/Kconfig                |  1 +
>>>>   arch/arm64/include/asm/compiler.h | 17 +++++++++++++++++
>>>>   2 files changed, 18 insertions(+)
>>>>   create mode 100644 arch/arm64/include/asm/compiler.h
>>>>
>>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>>> index 998248e..c1844de 100644
>>>> --- a/arch/arm64/Kconfig
>>>> +++ b/arch/arm64/Kconfig
>>>> @@ -117,6 +117,7 @@ config ARM64
>>>>          select HAVE_ALIGNED_STRUCT_PAGE if SLUB
>>>>          select HAVE_ARCH_AUDITSYSCALL
>>>>          select HAVE_ARCH_BITREVERSE
>>>> +       select HAVE_ARCH_COMPILER_H
>>>>          select HAVE_ARCH_HUGE_VMAP
>>>>          select HAVE_ARCH_JUMP_LABEL
>>>>          select HAVE_ARCH_JUMP_LABEL_RELATIVE
>>>> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
>>>> new file mode 100644
>>>> index 0000000..5efe310
>>>> --- /dev/null
>>>> +++ b/arch/arm64/include/asm/compiler.h
>>>> @@ -0,0 +1,17 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>> +#ifndef __ASM_ARM_COMPILER_H
>>>> +#define __ASM_ARM_COMPILER_H
>>>> +
>>>> +#ifndef __ASSEMBLY__
>>>> +
>>>> +#if defined(CONFIG_ARM64_PTR_AUTH)
>>>> +
>>>> +/* As TBI1 is disabled currently, so bits 63:56 also has PAC */
>>>> +#define __builtin_return_address(val)                          \
>>>> +       (void *)((unsigned long)__builtin_return_address(val) | \
>>>> +       (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS)))
>>>> +#endif
>>>> +
>>>> +#endif
>>>> +
>>>> +#endif /* __ASM_ARM_COMPILER_H */
>>>
>>> It seems to me like we are accumulating a lot of cruft for khwasan as
>>> well as PAC to convert address into their untagged format.
>>>
>>> Are there are untagging helpers we can already reuse? If not, can we
>>> introduce something that can be shared between all these use cases?
>>
>> xpaci will strip the pac from an instruction pointer, but requires the
>> instruction set to be enabled, so you'd have to fiddle with alternatives.  You
>> *could* force the use of lr as input/output and use xpaclri, which is a nop if
>> the instruction set is not enabled.
>>
>> Also, this definition of is not correct, because bit 55 needs to be propagated
>> to all of the bits being masked out here, so that you get a large negative
>> number for kernel space addresses.
>>
> 
> Indeed. Even though bit 55 is generally guaranteed to be set, it would
> be better to simply reuse ptrauth_strip_insn_pac() that you introduce
> in the next patch.
Earlier I tried re-using it, but it produced a lot of header inclusion 
errors. I will check whether that can be fixed.
> 
> Also, please use __ASM_COMPILER_H as the header guard (which is more
> idiomatic), and drop the unnecessary 'ifndef __ASSEMBLY__'.

Yes, sure.
> 
> Finally, could you add a comment that this header is transitively
> included (via include/compiler_types.h) on the compiler command line,
> so it is guaranteed to be loaded by users of this macro, and so there
> is no risk of the wrong version being used.

Yes sure.

> 


* Re: [PATCH v2 06/14] arm64: rename ptrauth key structures to be user-specific
  2019-11-22 13:28   ` Ard Biesheuvel
@ 2019-11-25  9:22     ` Amit Kachhap
  0 siblings, 0 replies; 36+ messages in thread
From: Amit Kachhap @ 2019-11-25  9:22 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel



On 11/22/19 6:58 PM, Ard Biesheuvel wrote:
> On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>
>> From: Kristina Martsenko <kristina.martsenko@arm.com>
>>
>> We currently enable ptrauth for userspace, but do not use it within the
>> kernel. We're going to enable it for the kernel, and will need to manage
>> a separate set of ptrauth keys for the kernel.
>>
>> We currently keep all 5 keys in struct ptrauth_keys. However, as the
>> kernel will only need to use 1 key, it is a bit wasteful to allocate a
>> whole ptrauth_keys struct for every thread.
>>
>> Therefore, a subsequent patch will define a separate struct, with only 1
>> key, for the kernel. In preparation for that, rename the existing struct
>> (and associated macros and functions) to reflect that they are specific
>> to userspace.
>>
>> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> 
> Could we combine this patch with #2 somehow? You are modifying lots of
> code that you just introduced there.
Yes, sure.
> 
>> ---
>> Changes since last version:
>> * None
>>
>>   arch/arm64/include/asm/asm_pointer_auth.h | 10 +++++-----
>>   arch/arm64/include/asm/pointer_auth.h     |  6 +++---
>>   arch/arm64/include/asm/processor.h        |  2 +-
>>   arch/arm64/kernel/asm-offsets.c           | 10 +++++-----
>>   arch/arm64/kernel/pointer_auth.c          |  4 ++--
>>   arch/arm64/kernel/ptrace.c                | 16 ++++++++--------
>>   6 files changed, 24 insertions(+), 24 deletions(-)
>>


* Re: [PATCH v2 14/14] lkdtm: arm64: test kernel pointer authentication
  2019-11-22 18:51     ` Richard Henderson
@ 2019-11-25  9:25       ` Amit Kachhap
  0 siblings, 0 replies; 36+ messages in thread
From: Amit Kachhap @ 2019-11-25  9:25 UTC (permalink / raw)
  To: Richard Henderson, Ard Biesheuvel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel


On 11/23/19 12:21 AM, Richard Henderson wrote:
> On 11/21/19 5:39 PM, Ard Biesheuvel wrote:
>> On Tue, 19 Nov 2019 at 13:33, Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>>
>>> This test is specific for arm64. When in-kernel Pointer Authentication
>>> config is enabled, the return address stored in the stack is signed. This
>>> feature helps in ROP kind of attack. If the matching signature is corrupted
>>> then this will fail in authentication and lead to abort.
>>>
>>> e.g.
>>> echo CORRUPT_PAC > /sys/kernel/debug/provoke-crash/DIRECT
>>>
>>> [   13.118166] lkdtm: Performing direct entry CORRUPT_PAC
>>> [   13.118298] lkdtm: Clearing PAC from the return address
>>> [   13.118466] Unable to handle kernel paging request at virtual address bfff8000108648ec
>>> [   13.118626] Mem abort info:
>>> [   13.118666]   ESR = 0x86000004
>>> [   13.118866]   EC = 0x21: IABT (current EL), IL = 32 bits
>>> [   13.118966]   SET = 0, FnV = 0
>>> [   13.119117]   EA = 0, S1PTW = 0
>>>
>>> Cc: Kees Cook <keescook@chromium.org>
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> ---
>>> Change since last version:
>>>   * New patch
>>>
>>>   drivers/misc/lkdtm/bugs.c  | 17 +++++++++++++++++
>>>   drivers/misc/lkdtm/core.c  |  1 +
>>>   drivers/misc/lkdtm/lkdtm.h |  1 +
>>>   3 files changed, 19 insertions(+)
>>>
>>> diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
>>> index 7284a22..c9bb493 100644
>>> --- a/drivers/misc/lkdtm/bugs.c
>>> +++ b/drivers/misc/lkdtm/bugs.c
>>> @@ -337,3 +337,20 @@ void lkdtm_UNSET_SMEP(void)
>>>          pr_err("FAIL: this test is x86_64-only\n");
>>>   #endif
>>>   }
>>> +
>>> +void lkdtm_CORRUPT_PAC(void)
>>> +{
>>> +#if IS_ENABLED(CONFIG_ARM64_PTR_AUTH)
>>> +       u64 ret;
>>> +
>>> +       pr_info("Clearing PAC from the return address\n");
>>> +       /*
>>> +        * __builtin_return_address masks the PAC bits of return
>>> +        * address, so set the same again.
>>> +        */
>>> +       ret = (u64)__builtin_return_address(0);
>>> +       asm volatile("str %0, [sp, 8]" : : "r" (ret) : "memory");
>>
>> This looks a bit fragile to me. You are making assumptions about the
>> location of the return address in the stack frame which are not
>> guaranteed to hold.
> 
> Indeed.
> 
>> Couldn't you do this test simply by changing the key?
> 
> That, at least, means you don't have to know the stack frame layout.  However,
> there's a chance (1/32767, I think, for the 48-bit vma case w/o TBI) that
> changing the key will result in the same hash.
> 
> Even when the stack frame happens to be layed out as Amit guesses, the result
> is akin to changing the key, such that hash(key, salt, ptr) == 0.
> 
> While testing this in qemu, I iterate until I find a <key, salt, ptr> tuple
> that definitely produces a different hash.  Usually this loop iterates just
> once, but the occasional failures that I got without iterating were annoying
> (with TBI enabled in userspace, the chance drops to 1/127, so much more frequent).

Nice suggestion. I will try doing it this way in the next iteration.

> 
> 
> r~
> 


* Re: [PATCH v2 07/14] arm64: initialize and switch ptrauth kernel keys
  2019-11-22 19:19   ` Richard Henderson
@ 2019-11-25  9:34     ` Amit Kachhap
  2019-11-25  9:39       ` Ard Biesheuvel
  0 siblings, 1 reply; 36+ messages in thread
From: Amit Kachhap @ 2019-11-25  9:34 UTC (permalink / raw)
  To: Richard Henderson, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin



On 11/23/19 12:49 AM, Richard Henderson wrote:
> On 11/19/19 12:32 PM, Amit Daniel Kachhap wrote:
>> --- a/arch/arm64/include/asm/asm_pointer_auth.h
>> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
>> @@ -35,11 +35,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH
>>   alternative_else_nop_endif
>>   	.endm
>>   
>> +	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
>> +	mov	\tmp1, #THREAD_KEYS_KERNEL
>> +	add	\tmp1, \tsk, \tmp1
>> +alternative_if ARM64_HAS_ADDRESS_AUTH
>> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
>> +	msr_s	SYS_APIAKEYLO_EL1, \tmp2
>> +	msr_s	SYS_APIAKEYHI_EL1, \tmp3
>> +	isb
>> +alternative_else_nop_endif
>> +	.endm
> 
> Any reason you didn't put the first two insns in the alternative?

Yes these 2 instructions can be moved below. Thanks for the catch.

> 
> You could have re-used tmp1 instead of requiring tmp3, but at no point are we
> lacking tmp registers so it doesn't matter.
> 
> 
> r~
> 


* Re: [PATCH v2 07/14] arm64: initialize and switch ptrauth kernel keys
  2019-11-25  9:34     ` Amit Kachhap
@ 2019-11-25  9:39       ` Ard Biesheuvel
  2019-11-25 11:01         ` Amit Kachhap
  0 siblings, 1 reply; 36+ messages in thread
From: Ard Biesheuvel @ 2019-11-25  9:39 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Richard Henderson, Will Deacon, James Morse, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

On Mon, 25 Nov 2019 at 10:34, Amit Kachhap <amit.kachhap@arm.com> wrote:
>
>
>
> On 11/23/19 12:49 AM, Richard Henderson wrote:
> > On 11/19/19 12:32 PM, Amit Daniel Kachhap wrote:
> >> --- a/arch/arm64/include/asm/asm_pointer_auth.h
> >> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> >> @@ -35,11 +35,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH
> >>   alternative_else_nop_endif
> >>      .endm
> >>
> >> +    .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
> >> +    mov     \tmp1, #THREAD_KEYS_KERNEL
> >> +    add     \tmp1, \tsk, \tmp1
> >> +alternative_if ARM64_HAS_ADDRESS_AUTH
> >> +    ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
> >> +    msr_s   SYS_APIAKEYLO_EL1, \tmp2
> >> +    msr_s   SYS_APIAKEYHI_EL1, \tmp3
> >> +    isb
> >> +alternative_else_nop_endif
> >> +    .endm
> >
> > Any reason you didn't put the first two insns in the alternative?
>
> Yes these 2 instructions can be moved below. Thanks for the catch.
>

Do you even need them? Isn't it possible to do

ldp \tmp1, \tmp2, [\tsk, #(THREAD_KEYS_KERNEL + PTRAUTH_KERNEL_KEY_APIA)]

? Or is the range for the offset insufficient?


> >
> > You could have re-used tmp1 instead of requiring tmp3, but at no point are we
> > lacking tmp registers so it doesn't matter.
> >

I think we should fix it nonetheless.


* Re: [PATCH v2 07/14] arm64: initialize and switch ptrauth kernel keys
  2019-11-25  9:39       ` Ard Biesheuvel
@ 2019-11-25 11:01         ` Amit Kachhap
  0 siblings, 0 replies; 36+ messages in thread
From: Amit Kachhap @ 2019-11-25 11:01 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Richard Henderson, Will Deacon, James Morse, Kristina Martsenko,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel



On 11/25/19 3:09 PM, Ard Biesheuvel wrote:
> On Mon, 25 Nov 2019 at 10:34, Amit Kachhap <amit.kachhap@arm.com> wrote:
>>
>>
>>
>> On 11/23/19 12:49 AM, Richard Henderson wrote:
>>> On 11/19/19 12:32 PM, Amit Daniel Kachhap wrote:
>>>> --- a/arch/arm64/include/asm/asm_pointer_auth.h
>>>> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
>>>> @@ -35,11 +35,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH
>>>>    alternative_else_nop_endif
>>>>       .endm
>>>>
>>>> +    .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
>>>> +    mov     \tmp1, #THREAD_KEYS_KERNEL
>>>> +    add     \tmp1, \tsk, \tmp1
>>>> +alternative_if ARM64_HAS_ADDRESS_AUTH
>>>> +    ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
>>>> +    msr_s   SYS_APIAKEYLO_EL1, \tmp2
>>>> +    msr_s   SYS_APIAKEYHI_EL1, \tmp3
>>>> +    isb
>>>> +alternative_else_nop_endif
>>>> +    .endm
>>>
>>> Any reason you didn't put the first two insns in the alternative?
>>
>> Yes these 2 instructions can be moved below. Thanks for the catch.
>>
> 
> Do you even need them? Isn't it possible to do
> 
> ldp \tmp1, \tmp2, [\tsk, #(THREAD_KEYS_KERNEL + PTRAUTH_KERNEL_KEY_APIA)]
> 
> ? Or is the range for the offset insufficient?

Yes, the offset exceeds the maximum immediate range of ldp, so it is 
done this way.
> 
> 
>>>
>>> You could have re-used tmp1 instead of requiring tmp3, but at no point are we
>>> lacking tmp registers so it doesn't matter.
>>>
> 
> I think we should fix it nonetheless.

yes.
> 


* Re: [PATCH v2 13/14] arm64: compile the kernel with ptrauth return address signing
  2019-11-19 12:32 ` [PATCH v2 13/14] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
  2019-11-21 15:06   ` Mark Brown
@ 2019-11-25 17:35   ` Mark Brown
  1 sibling, 0 replies; 36+ messages in thread
From: Mark Brown @ 2019-11-25 17:35 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel



On Tue, Nov 19, 2019 at 06:02:25PM +0530, Amit Daniel Kachhap wrote:

> +config CC_HAS_BRANCH_PROT_PAC_RET
> +	# GCC 9 or later
> +	def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)

This breaks the build for me with CC=clang-9:

  CC      arch/arm64/kernel/vdso/vgettimeofday.o
/tmp/vgettimeofday-1a520b.s: Assembler messages:
/tmp/vgettimeofday-1a520b.s:25: Error: selected processor does not support `paciasp'
/tmp/vgettimeofday-1a520b.s:26: Error: unknown pseudo-op: `.cfi_negate_ra_state'
/tmp/vgettimeofday-1a520b.s:120: Error: selected processor does not support `autiasp'

(and various other errors with the assembler not understanding stuff).
This happens because clang is using the system assembler (that from
Debian stable in my case, 2.31.1) and it requires additional options to
enable newer instructions.  We need to pass -mcpu=all or similar to the
assembler (eg, with -Wa,-mcpu=all in CC).  This'd be fine if the
cc-option check detected the assembler issues but sadly it doesn't get
that far.
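Something along the following lines might work (an untested sketch; the exact assembler flag spelling is a guess, and it mirrors the Makefile fragment from patch 13):

```make
# Hypothetical: make the flag test exercise the external assembler too,
# so a toolchain whose gas lacks ARMv8.3-A fails the detection instead
# of breaking the build later.
ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
pac-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := \
	-mbranch-protection=pac-ret+leaf -Wa,-march=armv8.3-a
KBUILD_CFLAGS += $(pac-flags-y)
endif
```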


* Re: [PATCH v2 13/14] arm64: compile the kernel with ptrauth return address signing
  2019-11-21 15:06   ` Mark Brown
@ 2019-11-26  7:00     ` Amit Kachhap
  0 siblings, 0 replies; 36+ messages in thread
From: Amit Kachhap @ 2019-11-26  7:00 UTC (permalink / raw)
  To: Mark Brown
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Kristina Martsenko, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Dave Martin,
	linux-arm-kernel

Hi Mark,

On 11/21/19 8:36 PM, Mark Brown wrote:
> On Tue, Nov 19, 2019 at 06:02:25PM +0530, Amit Daniel Kachhap wrote:
> 
>> +config CC_HAS_BRANCH_PROT_PAC_RET
>> +	# GCC 9 or later
>> +	def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)
> 
> clang also supports this option, as of clang-8 I think.

OK, I will test and update the comment here.
> 
>> +ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
>> +pac-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
>> +pac-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
>> +KBUILD_CFLAGS += $(pac-flags-y)
>> +endif
> 
> This is going to be a bit annoying with BTI as we need to set
> -mbranch-protection=bti too.  This means we end up with type
> bti+pac-ret+leaf which is annoying to arrange.  There is the convenient
> branch protection type standard which does enable both in one option but
> that only enables non-leaf pac-ret so you need to explicitly spell out
> pac-ret so you can turn on leaf as well.  I'm not sure I can think of
> anything much better than adding another case for BTI at the top so we
> end up with something along the lines of:

Yes. Per the earlier offline discussions, the reason for keeping 
pac-ret+leaf is to cover leaf functions as well. As you pointed out, I 
can rename the config to CC_HAS_BRANCH_PROT_PAC_RET_LEAF to make it 
more meaningful.
> 
> ifeq ($(CONFIG_ARM64_BTI_KERNEL),y)
> branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_BTI) := -mbranch-protection=bti+pac-ret+leaf
> else ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
> branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
> endif
> KBUILD_CFLAGS += $(branch-prot-flags-y)
> 
> with a separate section for the signed return address bit.  It would be
> helpful to avoid the immediate refactoring when adding BTI by splitting
> things up with a more generic name.

I agree with your concern about needing a separate section when BTI 
support is added. I will do it in my next iteration.

//Amit

> 


end of thread, other threads:[~2019-11-26  7:00 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-19 12:32 [PATCH v2 00/14] arm64: return address signing Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 01/14] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 02/14] arm64: install user ptrauth keys at kernel exit time Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 03/14] arm64: create macro to park cpu in an infinite loop Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 04/14] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 05/14] arm64: enable ptrauth earlier Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 06/14] arm64: rename ptrauth key structures to be user-specific Amit Daniel Kachhap
2019-11-22 13:28   ` Ard Biesheuvel
2019-11-25  9:22     ` Amit Kachhap
2019-11-19 12:32 ` [PATCH v2 07/14] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
2019-11-22 19:19   ` Richard Henderson
2019-11-25  9:34     ` Amit Kachhap
2019-11-25  9:39       ` Ard Biesheuvel
2019-11-25 11:01         ` Amit Kachhap
2019-11-19 12:32 ` [PATCH v2 08/14] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
2019-11-21 17:42   ` Ard Biesheuvel
2019-11-22  8:48     ` Richard Henderson
2019-11-22 13:27       ` Ard Biesheuvel
2019-11-25  9:18         ` Amit Kachhap
2019-11-25  9:12       ` Amit Kachhap
2019-11-25  5:42     ` Amit Kachhap
2019-11-19 12:32 ` [PATCH v2 09/14] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 10/14] arm64: __show_regs: strip PAC from lr in printk Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 11/14] arm64: suspend: restore the kernel ptrauth keys Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 12/14] arm64: kprobe: disable probe of ptrauth instruction Amit Daniel Kachhap
2019-11-19 12:32 ` [PATCH v2 13/14] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
2019-11-21 15:06   ` Mark Brown
2019-11-26  7:00     ` Amit Kachhap
2019-11-25 17:35   ` Mark Brown
2019-11-19 12:32 ` [PATCH v2 14/14] lkdtm: arm64: test kernel pointer authentication Amit Daniel Kachhap
2019-11-21 17:39   ` Ard Biesheuvel
2019-11-22 18:51     ` Richard Henderson
2019-11-25  9:25       ` Amit Kachhap
2019-11-25  5:34     ` Amit Kachhap
2019-11-20 16:05 ` [PATCH v2 00/14] arm64: return address signing Ard Biesheuvel
2019-11-21 12:15   ` Amit Kachhap
