* [PATCH v6 00/18] arm64: return address signing
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

Hi,

This series improves function return address protection for the arm64 kernel by
compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
to as ptrauth hereafter). This should help protect the kernel against attacks
using return-oriented programming.
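
As a rough illustration (assuming a toolchain that supports
-mbranch-protection=pac-ret; the exact sequence is compiler-dependent), a
non-leaf function compiled with return address signing looks like:

func:
	paciasp				// sign LR (x30) with APIAKey, SP as modifier
	stp	x29, x30, [sp, #-16]!	// spill the signed return address
	...
	ldp	x29, x30, [sp], #16
	autiasp				// authenticate LR; a forged LR faults at ret
	ret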

Changes since v5 [1]:
 - Added a new patch (arm64: cpufeature: Move cpu capability helpers inside
   C file) to move the cpu capability type helpers into cpufeature.c. This
   makes adding new cpu capabilities easier.
 - Moved the kernel key restore to __cpu_setup (proc.S), as suggested by Catalin.
 - Added more comments to the as-option Kconfig option to address concerns
   raised by Masahiro.
 - Clarified the comments on the -march=armv8.3-a option for non-integrated
   assemblers.

Changes since v4 [2]:
 - Rebased the patch series to v5.6-rc2.
 - Patch "arm64: cpufeature: Fix meta-capability" updated as per Suzuki's
   review comments.

Some additional work, not implemented here, will be taken up separately:
 - The kdump tools may need some rework to work with ptrauth, as they may
   need the ptrauth information to strip PAC bits from kernel addresses.
   This will be sent as a separate patch.
 - A few more generic ptrauth lkdtm tests, as requested by Kees Cook.
 - Generating compile-time warnings if a requested Kconfig feature is not
   supported by the compiler.

This series is based on Linux v5.6-rc4. The complete series can be found at
git://linux-arm.org/linux-ak.git (branch PAC_mainline_v6) for reference.

Feedback welcome!

Thanks,
Amit Daniel

[1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-February/711699.html 
[2]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-January/707567.html

Amit Daniel Kachhap (9):
  arm64: cpufeature: Fix meta-capability cpufeature check
  arm64: create macro to park cpu in an infinite loop
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: cpufeature: Move cpu capability helpers inside C file
  arm64: initialize ptrauth keys for kernel booting task
  arm64: mask PAC bits of __builtin_return_address
  arm64: __show_regs: strip PAC from lr in printk
  arm64: suspend: restore the kernel ptrauth keys
  lkdtm: arm64: test kernel pointer authentication

Kristina Martsenko (7):
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: rename ptrauth key structures to be user-specific
  arm64: install user ptrauth keys at kernel exit time
  arm64: cpufeature: handle conflicts based on capability
  arm64: enable ptrauth earlier
  arm64: initialize and switch ptrauth kernel keys
  arm64: compile the kernel with ptrauth return address signing

Mark Rutland (1):
  arm64: unwind: strip PAC from kernel addresses

Vincenzo Frascino (1):
  kconfig: Add support for 'as-option'

 arch/arm64/Kconfig                        | 27 +++++++++-
 arch/arm64/Makefile                       | 11 ++++
 arch/arm64/include/asm/asm_pointer_auth.h | 65 +++++++++++++++++++++++
 arch/arm64/include/asm/compiler.h         | 20 +++++++
 arch/arm64/include/asm/cpucaps.h          |  4 +-
 arch/arm64/include/asm/cpufeature.h       | 39 +++++++-------
 arch/arm64/include/asm/pointer_auth.h     | 54 +++++++++----------
 arch/arm64/include/asm/processor.h        |  3 +-
 arch/arm64/include/asm/smp.h              | 10 ++++
 arch/arm64/include/asm/stackprotector.h   |  5 ++
 arch/arm64/kernel/asm-offsets.c           | 16 ++++++
 arch/arm64/kernel/cpufeature.c            | 87 +++++++++++++++++++++++--------
 arch/arm64/kernel/entry.S                 |  6 +++
 arch/arm64/kernel/head.S                  | 27 +++++-----
 arch/arm64/kernel/pointer_auth.c          |  7 +--
 arch/arm64/kernel/process.c               |  5 +-
 arch/arm64/kernel/ptrace.c                | 16 +++---
 arch/arm64/kernel/sleep.S                 |  2 +
 arch/arm64/kernel/smp.c                   | 10 ++++
 arch/arm64/kernel/stacktrace.c            |  3 ++
 arch/arm64/mm/proc.S                      | 71 +++++++++++++++++++++----
 drivers/misc/lkdtm/bugs.c                 | 36 +++++++++++++
 drivers/misc/lkdtm/core.c                 |  1 +
 drivers/misc/lkdtm/lkdtm.h                |  1 +
 include/linux/stackprotector.h            |  2 +-
 scripts/Kconfig.include                   |  6 +++
 26 files changed, 424 insertions(+), 110 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
 create mode 100644 arch/arm64/include/asm/compiler.h

-- 
2.7.4


* [PATCH v6 01/18] arm64: cpufeature: Fix meta-capability cpufeature check
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

Some existing/future meta cpucaps' match functions need to check for the
presence of individual cpucaps. Currently this is done by checking an
array-based flag, which introduces a dependency on the array entry order.
This limitation exists only for system-scope cpufeatures.

This patch introduces an internal helper function (__system_matches_cap)
to invoke the match handler with system scope. This helper must only be
used during a narrow window when:
 - the system-wide safe registers have been set up from all the SMP CPUs, and
 - the SYSTEM_FEATURE cpu_hwcaps may not yet have been set.

Normal users should use the existing cpus_have_{const_}cap() global
functions.
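
For illustration, a system-scope meta-capability would use the helper in
its match function roughly like this (ARM64_HAS_FOO_ARCH and
ARM64_HAS_FOO_IMP_DEF are placeholder cpucaps; a later patch in this
series does this for real for the pointer auth capabilities):

static bool has_foo_auth(const struct arm64_cpu_capabilities *entry,
			 int __unused)
{
	/* Only valid inside the narrow window described above. */
	return __system_matches_cap(ARM64_HAS_FOO_ARCH) ||
	       __system_matches_cap(ARM64_HAS_FOO_IMP_DEF);
}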

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 0b67156..3818685 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -116,6 +116,8 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
 
 static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
 
+static bool __system_matches_cap(unsigned int n);
+
 /*
  * NOTE: Any changes to the visibility of features should be kept in
  * sync with the documentation of the CPU feature register ABI.
@@ -2146,6 +2148,17 @@ bool this_cpu_has_cap(unsigned int n)
 	return false;
 }
 
+static bool __system_matches_cap(unsigned int n)
+{
+	if (n < ARM64_NCAPS) {
+		const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n];
+
+		if (cap)
+			return cap->matches(cap, SCOPE_SYSTEM);
+	}
+	return false;
+}
+
 void cpu_set_feature(unsigned int num)
 {
 	WARN_ON(num >= MAX_CPU_FEATURES);
@@ -2218,7 +2231,7 @@ void __init setup_cpu_features(void)
 static bool __maybe_unused
 cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
 {
-	return (cpus_have_const_cap(ARM64_HAS_PAN) && !cpus_have_const_cap(ARM64_HAS_UAO));
+	return (__system_matches_cap(ARM64_HAS_PAN) && !__system_matches_cap(ARM64_HAS_UAO));
 }
 
 static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
-- 
2.7.4


* [PATCH v6 02/18] arm64: cpufeature: add pointer auth meta-capabilities
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

From: Kristina Martsenko <kristina.martsenko@arm.com>

To enable pointer auth for the kernel, we're going to need to check for
the presence of address auth and generic auth using alternative_if. We
currently have two cpucaps for each, but alternative_if needs to check a
single cpucap. So define meta-capabilities that are present when either
of the current two capabilities is present.

Leave the existing four cpucaps in place, as they are still needed to
check for mismatched systems where one CPU has the architected algorithm
but another has the IMP DEF algorithm.

Note that the meta-capabilities were present before but were removed in
commit a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth
CPU caps from 6 to 4") and commit 1e013d06120c ("arm64: cpufeature: Rework
ptr auth hwcaps using multi_entry_cap_matches"), as they were not needed
at the time. Unlike before, the current patch checks the cpucap values
directly instead of reading the CPU ID register value.
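
For context, this is the kind of assembly-side check a single cpucap
makes possible via the alternatives framework (the body here is
illustrative):

alternative_if ARM64_HAS_ADDRESS_AUTH
	/* instructions patched in only when address auth is in use */
alternative_else_nop_endif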

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit message and macro rebase, use __system_matches_cap]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/include/asm/cpucaps.h    |  4 +++-
 arch/arm64/include/asm/cpufeature.h |  6 ++----
 arch/arm64/kernel/cpufeature.c      | 25 ++++++++++++++++++++++++-
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 865e025..72e4e05 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -58,7 +58,9 @@
 #define ARM64_WORKAROUND_SPECULATIVE_AT_NVHE	48
 #define ARM64_HAS_E0PD				49
 #define ARM64_HAS_RNG				50
+#define ARM64_HAS_ADDRESS_AUTH			51
+#define ARM64_HAS_GENERIC_AUTH			52
 
-#define ARM64_NCAPS				51
+#define ARM64_NCAPS				53
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 2a746b9..0fd1feb 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -590,15 +590,13 @@ static __always_inline bool system_supports_cnp(void)
 static inline bool system_supports_address_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
 }
 
 static inline bool system_supports_generic_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
 static inline bool system_uses_irq_prio_masking(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3818685..b12e386 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1323,6 +1323,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
 				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
 }
+
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
+	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+}
+
+static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
+	       __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
+}
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #ifdef CONFIG_ARM64_E0PD
@@ -1600,7 +1614,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -1611,6 +1624,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
 		.matches = has_cpuid_feature,
+	},
+	{
+		.capability = ARM64_HAS_ADDRESS_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_address_auth,
 		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
@@ -1633,6 +1651,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
 		.matches = has_cpuid_feature,
 	},
+	{
+		.capability = ARM64_HAS_GENERIC_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_generic_auth,
+	},
 #endif /* CONFIG_ARM64_PTR_AUTH */
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	{
-- 
2.7.4


* [PATCH v6 03/18] arm64: rename ptrauth key structures to be user-specific
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

From: Kristina Martsenko <kristina.martsenko@arm.com>

We currently enable ptrauth for userspace, but do not use it within the
kernel. We're going to enable it for the kernel, and will need to manage
a separate set of ptrauth keys for the kernel.

We currently keep all 5 keys in struct ptrauth_keys. However, as the
kernel will only need to use 1 key, it is a bit wasteful to allocate a
whole ptrauth_keys struct for every thread.

Therefore, a subsequent patch will define a separate struct, with only 1
key, for the kernel. In preparation for that, rename the existing struct
(and associated macros and functions) to reflect that they are specific
to userspace.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-positioned the patch to reduce the diff]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/include/asm/pointer_auth.h | 12 ++++++------
 arch/arm64/include/asm/processor.h    |  2 +-
 arch/arm64/kernel/pointer_auth.c      |  8 ++++----
 arch/arm64/kernel/ptrace.c            | 16 ++++++++--------
 4 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 7a24bad..799b079 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -22,7 +22,7 @@ struct ptrauth_key {
  * We give each process its own keys, which are shared by all threads. The keys
  * are inherited upon fork(), and reinitialised upon exec*().
  */
-struct ptrauth_keys {
+struct ptrauth_keys_user {
 	struct ptrauth_key apia;
 	struct ptrauth_key apib;
 	struct ptrauth_key apda;
@@ -30,7 +30,7 @@ struct ptrauth_keys {
 	struct ptrauth_key apga;
 };
 
-static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 {
 	if (system_supports_address_auth()) {
 		get_random_bytes(&keys->apia, sizeof(keys->apia));
@@ -50,7 +50,7 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)
 
-static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
+static inline void ptrauth_keys_switch_user(struct ptrauth_keys_user *keys)
 {
 	if (system_supports_address_auth()) {
 		__ptrauth_key_install(APIA, keys->apia);
@@ -80,12 +80,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 #define ptrauth_thread_init_user(tsk)					\
 do {									\
 	struct task_struct *__ptiu_tsk = (tsk);				\
-	ptrauth_keys_init(&__ptiu_tsk->thread.keys_user);		\
-	ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user);		\
+	ptrauth_keys_init_user(&__ptiu_tsk->thread.keys_user);		\
+	ptrauth_keys_switch_user(&__ptiu_tsk->thread.keys_user);		\
 } while (0)
 
 #define ptrauth_thread_switch(tsk)	\
-	ptrauth_keys_switch(&(tsk)->thread.keys_user)
+	ptrauth_keys_switch_user(&(tsk)->thread.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5ba6320..496a928 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -146,7 +146,7 @@ struct thread_struct {
 	unsigned long		fault_code;	/* ESR_EL1 value */
 	struct debug_info	debug;		/* debugging */
 #ifdef CONFIG_ARM64_PTR_AUTH
-	struct ptrauth_keys	keys_user;
+	struct ptrauth_keys_user	keys_user;
 #endif
 };
 
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index c507b58..af5a638 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -9,7 +9,7 @@
 
 int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 {
-	struct ptrauth_keys *keys = &tsk->thread.keys_user;
+	struct ptrauth_keys_user *keys = &tsk->thread.keys_user;
 	unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
 				      PR_PAC_APDAKEY | PR_PAC_APDBKEY;
 	unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
@@ -18,8 +18,8 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 		return -EINVAL;
 
 	if (!arg) {
-		ptrauth_keys_init(keys);
-		ptrauth_keys_switch(keys);
+		ptrauth_keys_init_user(keys);
+		ptrauth_keys_switch_user(keys);
 		return 0;
 	}
 
@@ -41,7 +41,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 	if (arg & PR_PAC_APGAKEY)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 
-	ptrauth_keys_switch(keys);
+	ptrauth_keys_switch_user(keys);
 
 	return 0;
 }
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index cd6e5fa..b3d3005 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -999,7 +999,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey)
 }
 
 static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
-				     const struct ptrauth_keys *keys)
+				     const struct ptrauth_keys_user *keys)
 {
 	ukeys->apiakey = pac_key_to_user(&keys->apia);
 	ukeys->apibkey = pac_key_to_user(&keys->apib);
@@ -1007,7 +1007,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
 	ukeys->apdbkey = pac_key_to_user(&keys->apdb);
 }
 
-static void pac_address_keys_from_user(struct ptrauth_keys *keys,
+static void pac_address_keys_from_user(struct ptrauth_keys_user *keys,
 				       const struct user_pac_address_keys *ukeys)
 {
 	keys->apia = pac_key_from_user(ukeys->apiakey);
@@ -1021,7 +1021,7 @@ static int pac_address_keys_get(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				void *kbuf, void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_address_keys user_keys;
 
 	if (!system_supports_address_auth())
@@ -1038,7 +1038,7 @@ static int pac_address_keys_set(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				const void *kbuf, const void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_address_keys user_keys;
 	int ret;
 
@@ -1056,12 +1056,12 @@ static int pac_address_keys_set(struct task_struct *target,
 }
 
 static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys,
-				     const struct ptrauth_keys *keys)
+				     const struct ptrauth_keys_user *keys)
 {
 	ukeys->apgakey = pac_key_to_user(&keys->apga);
 }
 
-static void pac_generic_keys_from_user(struct ptrauth_keys *keys,
+static void pac_generic_keys_from_user(struct ptrauth_keys_user *keys,
 				       const struct user_pac_generic_keys *ukeys)
 {
 	keys->apga = pac_key_from_user(ukeys->apgakey);
@@ -1072,7 +1072,7 @@ static int pac_generic_keys_get(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				void *kbuf, void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_generic_keys user_keys;
 
 	if (!system_supports_generic_auth())
@@ -1089,7 +1089,7 @@ static int pac_generic_keys_set(struct task_struct *target,
 				unsigned int pos, unsigned int count,
 				const void *kbuf, const void __user *ubuf)
 {
-	struct ptrauth_keys *keys = &target->thread.keys_user;
+	struct ptrauth_keys_user *keys = &target->thread.keys_user;
 	struct user_pac_generic_keys user_keys;
 	int ret;
 
-- 
2.7.4


* [PATCH v6 04/18] arm64: install user ptrauth keys at kernel exit time
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

From: Kristina Martsenko <kristina.martsenko@arm.com>

As we're going to enable pointer auth within the kernel and use a
different APIAKey for the kernel itself, move the user APIAKey switch to
EL0 exception return.

The other 4 keys could remain switched during task switch, but are also
moved to keep things consistent.
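
Conceptually (a sketch, not the exact entry.S code), the EL0 return path
becomes:

	/* kernel_exit, \el == 0 case */
	...
	ptrauth_keys_install_user tsk, x0, x1, x2	// user keys take effect here
	...
	eret						// enter EL0 with user keys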

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit msg, re-positioned the patch, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/include/asm/asm_pointer_auth.h | 49 +++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 23 +--------------
 arch/arm64/kernel/asm-offsets.c           | 11 +++++++
 arch/arm64/kernel/entry.S                 |  3 ++
 arch/arm64/kernel/pointer_auth.c          |  3 --
 arch/arm64/kernel/process.c               |  1 -
 6 files changed, 64 insertions(+), 26 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
new file mode 100644
index 0000000..f820a13
--- /dev/null
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_POINTER_AUTH_H
+#define __ASM_ASM_POINTER_AUTH_H
+
+#include <asm/alternative.h>
+#include <asm/asm-offsets.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+/*
+ * thread.keys_user.ap* as offset exceeds the #imm offset range
+ * so use the base value of ldp as thread.keys_user and offset as
+ * keys_user.ap*.
+ */
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_USER
+	add	\tmp1, \tsk, \tmp1
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+	b	.Laddr_auth_skip_\@
+alternative_else_nop_endif
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
+	msr_s	SYS_APIBKEYLO_EL1, \tmp2
+	msr_s	SYS_APIBKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA]
+	msr_s	SYS_APDAKEYLO_EL1, \tmp2
+	msr_s	SYS_APDAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB]
+	msr_s	SYS_APDBKEYLO_EL1, \tmp2
+	msr_s	SYS_APDBKEYHI_EL1, \tmp3
+.Laddr_auth_skip_\@:
+alternative_if ARM64_HAS_GENERIC_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA]
+	msr_s	SYS_APGAKEYLO_EL1, \tmp2
+	msr_s	SYS_APGAKEYHI_EL1, \tmp3
+alternative_else_nop_endif
+	.endm
+
+#else /* CONFIG_ARM64_PTR_AUTH */
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	.endm
+
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 799b079..dabe026 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -50,19 +50,6 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)
 
-static inline void ptrauth_keys_switch_user(struct ptrauth_keys_user *keys)
-{
-	if (system_supports_address_auth()) {
-		__ptrauth_key_install(APIA, keys->apia);
-		__ptrauth_key_install(APIB, keys->apib);
-		__ptrauth_key_install(APDA, keys->apda);
-		__ptrauth_key_install(APDB, keys->apdb);
-	}
-
-	if (system_supports_generic_auth())
-		__ptrauth_key_install(APGA, keys->apga);
-}
-
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
@@ -78,20 +65,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }
 
 #define ptrauth_thread_init_user(tsk)					\
-do {									\
-	struct task_struct *__ptiu_tsk = (tsk);				\
-	ptrauth_keys_init_user(&__ptiu_tsk->thread.keys_user);		\
-	ptrauth_keys_switch_user(&__ptiu_tsk->thread.keys_user);		\
-} while (0)
-
-#define ptrauth_thread_switch(tsk)	\
-	ptrauth_keys_switch_user(&(tsk)->thread.keys_user)
+	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
-#define ptrauth_thread_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index a5bdce8..7b1ea2a 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -40,6 +40,9 @@ int main(void)
 #endif
   BLANK();
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+#endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
   DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
@@ -128,5 +131,13 @@ int main(void)
   DEFINE(SDEI_EVENT_INTREGS,	offsetof(struct sdei_registered_event, interrupted_regs));
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(PTRAUTH_USER_KEY_APIA,		offsetof(struct ptrauth_keys_user, apia));
+  DEFINE(PTRAUTH_USER_KEY_APIB,		offsetof(struct ptrauth_keys_user, apib));
+  DEFINE(PTRAUTH_USER_KEY_APDA,		offsetof(struct ptrauth_keys_user, apda));
+  DEFINE(PTRAUTH_USER_KEY_APDB,		offsetof(struct ptrauth_keys_user, apdb));
+  DEFINE(PTRAUTH_USER_KEY_APGA,		offsetof(struct ptrauth_keys_user, apga));
+  BLANK();
+#endif
   return 0;
 }
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 9461d81..684e475 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -14,6 +14,7 @@
 #include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
+#include <asm/asm_pointer_auth.h>
 #include <asm/cpufeature.h>
 #include <asm/errno.h>
 #include <asm/esr.h>
@@ -341,6 +342,8 @@ alternative_else_nop_endif
 	msr	cntkctl_el1, x1
 4:
 #endif
+	ptrauth_keys_install_user tsk, x0, x1, x2
+
 	apply_ssbd 0, x0, x1
 	.endif
 
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index af5a638..1e77736 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -19,7 +19,6 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 
 	if (!arg) {
 		ptrauth_keys_init_user(keys);
-		ptrauth_keys_switch_user(keys);
 		return 0;
 	}
 
@@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 	if (arg & PR_PAC_APGAKEY)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 
-	ptrauth_keys_switch_user(keys);
-
 	return 0;
 }
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 0062605..6140e79 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -512,7 +512,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
-	ptrauth_thread_switch(next);
 	ssbs_thread_switch(next);
 
 	/*
-- 
2.7.4


* [PATCH v6 05/18] arm64: create macro to park cpu in an infinite loop
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

Add a macro, early_park_cpu, to park a faulty CPU in an infinite loop.
The macro currently replaces two open-coded instances and may be reused
in future.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/kernel/head.S | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 989b194..3d18163 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -761,6 +761,17 @@ ENDPROC(__secondary_too_slow)
 	.endm
 
 /*
+ * Macro to park the cpu in an infinite loop.
+ */
+	.macro	early_park_cpu status
+	update_early_cpu_boot_status \status | CPU_STUCK_IN_KERNEL, x1, x2
+.Lepc_\@:
+	wfe
+	wfi
+	b	.Lepc_\@
+	.endm
+
+/*
  * Enable the MMU.
  *
  *  x0  = SCTLR_EL1 value for turning on the MMU.
@@ -808,24 +819,14 @@ ENTRY(__cpu_secondary_check52bitva)
 	and	x0, x0, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
 	cbnz	x0, 2f
 
-	update_early_cpu_boot_status \
-		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_52_BIT_VA, x0, x1
-1:	wfe
-	wfi
-	b	1b
-
+	early_park_cpu CPU_STUCK_REASON_52_BIT_VA
 #endif
 2:	ret
 ENDPROC(__cpu_secondary_check52bitva)
 
 __no_granule_support:
 	/* Indicate that this CPU can't boot and is stuck in the kernel */
-	update_early_cpu_boot_status \
-		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_GRAN, x1, x2
-1:
-	wfe
-	wfi
-	b	1b
+	early_park_cpu CPU_STUCK_REASON_NO_GRAN
 ENDPROC(__no_granule_support)
 
 #ifdef CONFIG_RELOCATABLE
-- 
2.7.4


* [PATCH v6 06/18] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

This patch allows __cpu_setup to be invoked with one of these flags:
ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_SECONDARY or ARM64_CPU_RUNTIME.
This is required as some cpufeatures need different handling in
different boot scenarios.

The input flag in x0 is preserved until the end of the function so that
it can be consulted there.

There should be no functional change from this patch; it is groundwork
for the subsequent ptrauth patch, which uses the flags. Some upcoming
arm64 cpufeatures can also make use of them.
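
A minimal sketch of the resulting calling convention (matching the hunks
below):

	mov	x0, #ARM64_CPU_BOOT_PRIMARY	// or _SECONDARY / ARM64_CPU_RUNTIME
	bl	__cpu_setup			// consumes the flag in x0 and
						// returns the SCTLR_EL1 value in x0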

Suggested-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/include/asm/smp.h |  5 +++++
 arch/arm64/kernel/head.S     |  2 ++
 arch/arm64/kernel/sleep.S    |  2 ++
 arch/arm64/mm/proc.S         | 26 +++++++++++++++-----------
 4 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index a0c8a0b..8159000 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -23,6 +23,11 @@
 #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
 #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)
 
+/* Options for __cpu_setup */
+#define ARM64_CPU_BOOT_PRIMARY		(1)
+#define ARM64_CPU_BOOT_SECONDARY	(2)
+#define ARM64_CPU_RUNTIME		(3)
+
 #ifndef __ASSEMBLY__
 
 #include <asm/percpu.h>
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3d18163..5a7ce15 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -118,6 +118,7 @@ ENTRY(stext)
 	 * On return, the CPU will be ready for the MMU to be turned on and
 	 * the TCR will have been set.
 	 */
+	mov	x0, #ARM64_CPU_BOOT_PRIMARY
 	bl	__cpu_setup			// initialise processor
 	b	__primary_switch
 ENDPROC(stext)
@@ -712,6 +713,7 @@ secondary_startup:
 	 * Common entry point for secondary CPUs.
 	 */
 	bl	__cpu_secondary_check52bitva
+	mov	x0, #ARM64_CPU_BOOT_SECONDARY
 	bl	__cpu_setup			// initialise processor
 	adrp	x1, swapper_pg_dir
 	bl	__enable_mmu
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index f5b04dd..7b2f2e6 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -3,6 +3,7 @@
 #include <linux/linkage.h>
 #include <asm/asm-offsets.h>
 #include <asm/assembler.h>
+#include <asm/smp.h>
 
 	.text
 /*
@@ -99,6 +100,7 @@ ENDPROC(__cpu_suspend_enter)
 	.pushsection ".idmap.text", "awx"
 ENTRY(cpu_resume)
 	bl	el2_setup		// if in EL2 drop to EL1 cleanly
+	mov	x0, #ARM64_CPU_RUNTIME
 	bl	__cpu_setup
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	adrp	x1, swapper_pg_dir
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index aafed69..ea0db17 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -408,31 +408,31 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
 /*
  *	__cpu_setup
  *
- *	Initialise the processor for turning the MMU on.  Return in x0 the
- *	value of the SCTLR_EL1 register.
+ *	Initialise the processor for turning the MMU on.
+ *
+ * Input:
+ *	x0 with a flag ARM64_CPU_BOOT_PRIMARY/ARM64_CPU_BOOT_SECONDARY/ARM64_CPU_RUNTIME.
+ * Output:
+ *	Return in x0 the value of the SCTLR_EL1 register.
  */
 	.pushsection ".idmap.text", "awx"
 SYM_FUNC_START(__cpu_setup)
 	tlbi	vmalle1				// Invalidate local TLB
 	dsb	nsh
 
-	mov	x0, #3 << 20
-	msr	cpacr_el1, x0			// Enable FP/ASIMD
-	mov	x0, #1 << 12			// Reset mdscr_el1 and disable
-	msr	mdscr_el1, x0			// access to the DCC from EL0
+	mov	x1, #3 << 20
+	msr	cpacr_el1, x1			// Enable FP/ASIMD
+	mov	x1, #1 << 12			// Reset mdscr_el1 and disable
+	msr	mdscr_el1, x1			// access to the DCC from EL0
 	isb					// Unmask debug exceptions now,
 	enable_dbg				// since this is per-cpu
-	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
+	reset_pmuserenr_el0 x1			// Disable PMU access from EL0
 	/*
 	 * Memory region attributes
 	 */
 	mov_q	x5, MAIR_EL1_SET
 	msr	mair_el1, x5
 	/*
-	 * Prepare SCTLR
-	 */
-	mov_q	x0, SCTLR_EL1_SET
-	/*
 	 * Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for
 	 * both user and kernel.
 	 */
@@ -468,5 +468,9 @@ SYM_FUNC_START(__cpu_setup)
 1:
 #endif	/* CONFIG_ARM64_HW_AFDBM */
 	msr	tcr_el1, x10
+	/*
+	 * Prepare SCTLR
+	 */
+	mov_q	x0, SCTLR_EL1_SET
 	ret					// return to head.S
 SYM_FUNC_END(__cpu_setup)
-- 
2.7.4


* [PATCH v6 07/18] arm64: cpufeature: Move cpu capability helpers inside C file
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

These helpers are used only by functions inside cpufeature.c, so it
makes sense to move them from cpufeature.h to cpufeature.c, as they are
not expected to be used globally.

This change helps reduce the header file size and makes it easier to add
future cpu capability types without confusion: only the cpu capability
type macros need to be exposed globally.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v5:
 * New patch.

 arch/arm64/include/asm/cpufeature.h | 12 ------------
 arch/arm64/kernel/cpufeature.c      | 13 +++++++++++++
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 0fd1feb..ae9673a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -340,18 +340,6 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
 	return cap->type & ARM64_CPUCAP_SCOPE_MASK;
 }
 
-static inline bool
-cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
-{
-	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
-}
-
-static inline bool
-cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
-{
-	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
-}
-
 /*
  * Generic helper for handling capabilties with multiple (match,enable) pairs
  * of call backs, sharing the same capability bit.
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b12e386..865dce6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1363,6 +1363,19 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
 }
 #endif
 
+/* Internal helper functions to match cpu capability type */
+static bool
+cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
+}
+
+static bool
+cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
-- 
2.7.4


* [PATCH v6 08/18] arm64: cpufeature: handle conflicts based on capability
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

From: Kristina Martsenko <kristina.martsenko@arm.com>

Each system capability can be of either boot, local, or system scope,
depending on when the state of the capability is finalized. When we
detect a conflict on a late CPU, we either offline the CPU or panic the
system. We currently always panic if the conflict is caused by a boot
scope capability, and offline the CPU if the conflict is caused by a
local or system scope capability.

We're going to want to add a new capability (for pointer authentication)
which needs to be boot scope but doesn't need to panic the system when a
conflict is detected. So add a new flag to specify whether the
capability requires the system to panic or not. Current boot scope
capabilities are updated to set the flag, so there should be no
functional change as a result of this patch.
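
As a sketch, the distinction is carried entirely in the capability's
type field:

	/* Boot-scope capability where a conflicting late CPU panics the system: */
	.type = ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT,

	/* Boot-scope capability where a conflicting late CPU is parked instead: */
	.type = ARM64_CPUCAP_SCOPE_BOOT_CPU,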

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v5:
 * Moved cpucap_panic_on_conflict helper function inside cpufeature.c.

 arch/arm64/include/asm/cpufeature.h | 12 ++++++++++--
 arch/arm64/kernel/cpufeature.c      | 29 +++++++++++++++--------------
 2 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index ae9673a..9818ff8 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -208,6 +208,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  *     In some non-typical cases either both (a) and (b), or neither,
  *     should be permitted. This can be described by including neither
  *     or both flags in the capability's type field.
+ *
+ *     In case of a conflict, the CPU is prevented from booting. If the
+ *     ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability,
+ *     then a kernel panic is triggered.
  */
 
 
@@ -240,6 +244,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	((u16)BIT(4))
 /* Is it safe for a late CPU to miss this capability when system has it */
 #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((u16)BIT(5))
+/* Panic when a conflict is detected */
+#define ARM64_CPUCAP_PANIC_ON_CONFLICT		((u16)BIT(6))
 
 /*
  * CPU errata workarounds that need to be enabled at boot time if one or
@@ -279,9 +285,11 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 
 /*
  * CPU feature used early in the boot based on the boot CPU. All secondary
- * CPUs must match the state of the capability as detected by the boot CPU.
+ * CPUs must match the state of the capability as detected by the boot CPU. In
+ * case of a conflict, a kernel panic is triggered.
  */
-#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
+#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
+	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
 
 struct arm64_cpu_capabilities {
 	const char *desc;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 865dce6..09906ff 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1376,6 +1376,12 @@ cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
 	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
 }
 
+static bool
+cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -2018,10 +2024,8 @@ static void __init enable_cpu_capabilities(u16 scope_mask)
  * Run through the list of capabilities to check for conflicts.
  * If the system has already detected a capability, take necessary
  * action on this CPU.
- *
- * Returns "false" on conflicts.
  */
-static bool verify_local_cpu_caps(u16 scope_mask)
+static void verify_local_cpu_caps(u16 scope_mask)
 {
 	int i;
 	bool cpu_has_cap, system_has_cap;
@@ -2066,10 +2070,12 @@ static bool verify_local_cpu_caps(u16 scope_mask)
 		pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n",
 			smp_processor_id(), caps->capability,
 			caps->desc, system_has_cap, cpu_has_cap);
-		return false;
-	}
 
-	return true;
+		if (cpucap_panic_on_conflict(caps))
+			cpu_panic_kernel();
+		else
+			cpu_die_early();
+	}
 }
 
 /*
@@ -2079,12 +2085,8 @@ static bool verify_local_cpu_caps(u16 scope_mask)
 static void check_early_cpu_features(void)
 {
 	verify_cpu_asid_bits();
-	/*
-	 * Early features are used by the kernel already. If there
-	 * is a conflict, we cannot proceed further.
-	 */
-	if (!verify_local_cpu_caps(SCOPE_BOOT_CPU))
-		cpu_panic_kernel();
+
+	verify_local_cpu_caps(SCOPE_BOOT_CPU);
 }
 
 static void
@@ -2132,8 +2134,7 @@ static void verify_local_cpu_capabilities(void)
 	 * check_early_cpu_features(), as they need to be verified
 	 * on all secondary CPUs.
 	 */
-	if (!verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU))
-		cpu_die_early();
+	verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);
 
 	verify_local_elf_hwcaps(arm64_elf_hwcaps);
 
-- 
2.7.4


* [PATCH v6 09/18] arm64: enable ptrauth earlier
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

From: Kristina Martsenko <kristina.martsenko@arm.com>

When the kernel is compiled with pointer auth instructions, the boot CPU
needs to start using address auth very early, so change the cpucap to
account for this.

Pointer auth must be enabled before we call C functions, because it is
not possible to enter a function with pointer auth disabled and exit it
with pointer auth enabled. Note, mismatches between architected and
IMPDEF algorithms will still be caught by the cpufeature framework (the
separate *_ARCH and *_IMP_DEF cpucaps).
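
To see why, consider a sketch of a signed function that straddles the
point where address auth is enabled (one direction shown; the reverse is
equally broken):

func:
	paciasp			// EnIA == 0: behaves as a NOP, LR stays unsigned
	...			// SCTLR_EL1.EnIA gets enabled somewhere in here
	autiasp			// EnIA == 1: authenticating the unsigned LR fails
	ret			// faults on the corrupted return address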

Note the change in behavior: if the boot CPU has address auth and a
late CPU does not, then the late CPU is parked by the cpufeature
framework. Conversely, if the boot CPU does not have address auth and a
late CPU does, then the late CPU will still boot, but with the ptrauth
feature disabled.

Leave generic authentication as a "system scope" cpucap for now, since
initially the kernel will only use address authentication.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-worked ptrauth setup logic, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/Kconfig                  |  6 ++++++
 arch/arm64/include/asm/cpufeature.h |  9 +++++++++
 arch/arm64/include/asm/smp.h        |  1 +
 arch/arm64/kernel/cpufeature.c      | 13 +++----------
 arch/arm64/kernel/smp.c             |  2 ++
 arch/arm64/mm/proc.S                | 31 +++++++++++++++++++++++++++++++
 6 files changed, 52 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0b30e88..87e2cbb 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1515,6 +1515,12 @@ config ARM64_PTR_AUTH
 	  be enabled. However, KVM guest also require VHE mode and hence
 	  CONFIG_ARM64_VHE=y option to use this feature.
 
+	  If the feature is present on the boot CPU but not on a late CPU, then
+	  the late CPU will be parked. Also, if the boot CPU does not have
+	  address auth and the late CPU has then the late CPU will still boot
+	  but with the feature disabled. On such a system, this option should
+	  not be selected.
+
 endmenu
 
 menu "ARMv8.5 architectural features"
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9818ff8..9cffe7e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -291,6 +291,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
 	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
 
+/*
+ * CPU feature used early in the boot based on the boot CPU. It is safe for a
+ * late CPU to have this feature even though the boot CPU hasn't enabled it,
+ * although the feature will not be used by Linux in this case. If the boot CPU
+ * has enabled this feature already, then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_BOOT_CPU_FEATURE                  \
+	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 8159000..5334d69 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -22,6 +22,7 @@
 
 #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
 #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)
+#define CPU_STUCK_REASON_NO_PTRAUTH	(UL(4) << CPU_STUCK_REASON_SHIFT)
 
 /* Options for __cpu_setup */
 #define ARM64_CPU_BOOT_PRIMARY		(1)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 09906ff..819fc69 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1318,12 +1318,6 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
-{
-	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
-				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
-}
-
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
 			     int __unused)
 {
@@ -1627,7 +1621,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (architected algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
@@ -1637,7 +1631,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
@@ -1646,9 +1640,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.matches = has_address_auth,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index d4ed9a1..f2761a9 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -164,6 +164,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 				pr_crit("CPU%u: does not support 52-bit VAs\n", cpu);
 			if (status & CPU_STUCK_REASON_NO_GRAN)
 				pr_crit("CPU%u: does not support %luK granule \n", cpu, PAGE_SIZE / SZ_1K);
+			if (status & CPU_STUCK_REASON_NO_PTRAUTH)
+				pr_crit("CPU%u: does not support pointer authentication\n", cpu);
 			cpus_stuck_in_kernel++;
 			break;
 		case CPU_PANIC_KERNEL:
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index ea0db17..4cf19a2 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -16,6 +16,7 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/cpufeature.h>
 #include <asm/alternative.h>
+#include <asm/smp.h>
 
 #ifdef CONFIG_ARM64_64K_PAGES
 #define TCR_TG_FLAGS	TCR_TG0_64K | TCR_TG1_64K
@@ -468,9 +469,39 @@ SYM_FUNC_START(__cpu_setup)
 1:
 #endif	/* CONFIG_ARM64_HW_AFDBM */
 	msr	tcr_el1, x10
+	mov	x1, x0
 	/*
 	 * Prepare SCTLR
 	 */
 	mov_q	x0, SCTLR_EL1_SET
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+	/* No ptrauth setup for run time cpus */
+	cmp	x1, #ARM64_CPU_RUNTIME
+	b.eq	3f
+
+	/* Check if the CPU supports ptrauth */
+	mrs	x2, id_aa64isar1_el1
+	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
+	cbz	x2, 3f
+
+	msr_s	SYS_APIAKEYLO_EL1, xzr
+	msr_s	SYS_APIAKEYHI_EL1, xzr
+
+	/* Just enable ptrauth for primary cpu */
+	cmp	x1, #ARM64_CPU_BOOT_PRIMARY
+	b.eq	2f
+
+	/* if !system_supports_address_auth() then skip enable */
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+	b	3f
+alternative_else_nop_endif
+
+2:	/* Enable ptrauth instructions */
+	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
+		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
+	orr	x0, x0, x2
+3:
+#endif
 	ret					// return to head.S
 SYM_FUNC_END(__cpu_setup)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 10/18] arm64: initialize and switch ptrauth kernel keys
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (8 preceding siblings ...)
  2020-03-06  6:35 ` [PATCH v6 09/18] arm64: enable ptrauth earlier Amit Daniel Kachhap
@ 2020-03-06  6:35 ` Amit Daniel Kachhap
  2020-03-10 15:07   ` Vincenzo Frascino
  2020-03-06  6:35 ` [PATCH v6 11/18] arm64: initialize ptrauth keys for kernel booting task Amit Daniel Kachhap
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 67+ messages in thread
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

From: Kristina Martsenko <kristina.martsenko@arm.com>

Set up keys to use pointer authentication within the kernel. The kernel
will be compiled with APIAKey instructions; the other keys are currently
unused. Each task is given its own APIAKey, which is initialized during
fork. The key is changed during context switch and on kernel entry from
EL0.

The keys for idle threads need to be set before calling any C functions,
because it is not possible to enter and exit a function with different
keys.
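
As a userspace-only analogy of this lifecycle (a sketch, not kernel
code; in the kernel the "live" key is the APIAKEY{LO,HI}_EL1 register
pair, and the structures mirror the ones added in the diff below):

        #include <stdio.h>
        #include <stdlib.h>

        struct ptrauth_key { unsigned long lo, hi; };
        struct ptrauth_keys_kernel { struct ptrauth_key apia; };
        struct task { struct ptrauth_keys_kernel keys_kernel; };

        /* stand-in for the APIAKEYLO/HI_EL1 system registers */
        static struct ptrauth_key live_key;

        /* fork time: each task gets its own random APIA key */
        static void keys_init_kernel(struct task *t)
        {
                t->keys_kernel.apia.lo = (unsigned long)rand();
                t->keys_kernel.apia.hi = (unsigned long)rand();
        }

        /* context switch / kernel entry: install the task's key */
        static void keys_install_kernel(const struct task *next)
        {
                live_key = next->keys_kernel.apia; /* msr + isb in reality */
        }

        int main(void)
        {
                struct task a, b;

                keys_init_kernel(&a);
                keys_init_kernel(&b);
                keys_install_kernel(&a);        /* run task a */
                keys_install_kernel(&b);        /* switch to task b */
                printf("live APIA key lo: %lx\n", live_key.lo);
                return 0;
        }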

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Modified secondary cores key structure, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/include/asm/asm_pointer_auth.h | 14 ++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 13 +++++++++++++
 arch/arm64/include/asm/processor.h        |  1 +
 arch/arm64/include/asm/smp.h              |  4 ++++
 arch/arm64/kernel/asm-offsets.c           |  5 +++++
 arch/arm64/kernel/entry.S                 |  3 +++
 arch/arm64/kernel/process.c               |  2 ++
 arch/arm64/kernel/smp.c                   |  8 ++++++++
 arch/arm64/mm/proc.S                      | 12 ++++++++++++
 9 files changed, 62 insertions(+)

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
index f820a13..4152afe 100644
--- a/arch/arm64/include/asm/asm_pointer_auth.h
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -39,11 +39,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH
 alternative_else_nop_endif
 	.endm
 
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	mov	\tmp1, #THREAD_KEYS_KERNEL
+	add	\tmp1, \tsk, \tmp1
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	isb
+alternative_else_nop_endif
+	.endm
+
 #else /* CONFIG_ARM64_PTR_AUTH */
 
 	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
 	.endm
 
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	.endm
+
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index dabe026..aa956ca 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -30,6 +30,10 @@ struct ptrauth_keys_user {
 	struct ptrauth_key apga;
 };
 
+struct ptrauth_keys_kernel {
+	struct ptrauth_key apia;
+};
+
 static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 {
 	if (system_supports_address_auth()) {
@@ -50,6 +54,12 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)
 
+static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
+{
+	if (system_supports_address_auth())
+		get_random_bytes(&keys->apia, sizeof(keys->apia));
+}
+
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
@@ -66,11 +76,14 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 
 #define ptrauth_thread_init_user(tsk)					\
 	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
+#define ptrauth_thread_init_kernel(tsk)					\
+	ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
+#define ptrauth_thread_init_kernel(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 496a928..4c77da5 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -147,6 +147,7 @@ struct thread_struct {
 	struct debug_info	debug;		/* debugging */
 #ifdef CONFIG_ARM64_PTR_AUTH
 	struct ptrauth_keys_user	keys_user;
+	struct ptrauth_keys_kernel	keys_kernel;
 #endif
 };
 
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 5334d69..4e92150 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -36,6 +36,7 @@
 #include <linux/threads.h>
 #include <linux/cpumask.h>
 #include <linux/thread_info.h>
+#include <asm/pointer_auth.h>
 
 DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
 
@@ -93,6 +94,9 @@ asmlinkage void secondary_start_kernel(void);
 struct secondary_data {
 	void *stack;
 	struct task_struct *task;
+#ifdef CONFIG_ARM64_PTR_AUTH
+	struct ptrauth_keys_kernel ptrauth_key;
+#endif
 	long status;
 };
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7b1ea2a..9981a0a 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -42,6 +42,7 @@ int main(void)
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
 #ifdef CONFIG_ARM64_PTR_AUTH
   DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+  DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
 #endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
@@ -91,6 +92,9 @@ int main(void)
   BLANK();
   DEFINE(CPU_BOOT_STACK,	offsetof(struct secondary_data, stack));
   DEFINE(CPU_BOOT_TASK,		offsetof(struct secondary_data, task));
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(CPU_BOOT_PTRAUTH_KEY,	offsetof(struct secondary_data, ptrauth_key));
+#endif
   BLANK();
 #ifdef CONFIG_KVM_ARM_HOST
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
@@ -137,6 +141,7 @@ int main(void)
   DEFINE(PTRAUTH_USER_KEY_APDA,		offsetof(struct ptrauth_keys_user, apda));
   DEFINE(PTRAUTH_USER_KEY_APDB,		offsetof(struct ptrauth_keys_user, apdb));
   DEFINE(PTRAUTH_USER_KEY_APGA,		offsetof(struct ptrauth_keys_user, apga));
+  DEFINE(PTRAUTH_KERNEL_KEY_APIA,	offsetof(struct ptrauth_keys_kernel, apia));
   BLANK();
 #endif
   return 0;
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 684e475..3dad2d0 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -178,6 +178,7 @@ alternative_cb_end
 
 	apply_ssbd 1, x22, x23
 
+	ptrauth_keys_install_kernel tsk, x20, x22, x23
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
@@ -342,6 +343,7 @@ alternative_else_nop_endif
 	msr	cntkctl_el1, x1
 4:
 #endif
+	/* No kernel C function calls after this as user keys are set. */
 	ptrauth_keys_install_user tsk, x0, x1, x2
 
 	apply_ssbd 0, x0, x1
@@ -898,6 +900,7 @@ ENTRY(cpu_switch_to)
 	ldr	lr, [x8]
 	mov	sp, x9
 	msr	sp_el0, x1
+	ptrauth_keys_install_kernel x1, x8, x9, x10
 	ret
 ENDPROC(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 6140e79..7db0302 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -376,6 +376,8 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long stack_start,
 	 */
 	fpsimd_flush_task_state(p);
 
+	ptrauth_thread_init_kernel(p);
+
 	if (likely(!(p->flags & PF_KTHREAD))) {
 		*childregs = *current_pt_regs();
 		childregs->regs[0] = 0;
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index f2761a9..3fa0fbf 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -112,6 +112,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	 */
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
+#if defined(CONFIG_ARM64_PTR_AUTH)
+	secondary_data.ptrauth_key.apia.lo = idle->thread.keys_kernel.apia.lo;
+	secondary_data.ptrauth_key.apia.hi = idle->thread.keys_kernel.apia.hi;
+#endif
 	update_cpu_boot_status(CPU_MMU_OFF);
 	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
 
@@ -138,6 +142,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
+#if defined(CONFIG_ARM64_PTR_AUTH)
+	secondary_data.ptrauth_key.apia.lo = 0;
+	secondary_data.ptrauth_key.apia.hi = 0;
+#endif
 	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
 	if (ret && status) {
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 4cf19a2..5a11a89 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -485,6 +485,10 @@ SYM_FUNC_START(__cpu_setup)
 	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
 	cbz	x2, 3f
 
+	/*
+	 * The primary cpu keys are reset here and can be
+	 * re-initialised with some proper values later.
+	 */
 	msr_s	SYS_APIAKEYLO_EL1, xzr
 	msr_s	SYS_APIAKEYHI_EL1, xzr
 
@@ -497,6 +501,14 @@ alternative_if_not ARM64_HAS_ADDRESS_AUTH
 	b	3f
 alternative_else_nop_endif
 
+	/* Install ptrauth key for secondary cpus */
+	adr_l	x2, secondary_data
+	ldr	x3, [x2, #CPU_BOOT_TASK]	// get secondary_data.task
+	cbz	x3, 2f				// check for slow booting cpus
+	ldp	x3, x4, [x2, #CPU_BOOT_PTRAUTH_KEY]
+	msr_s	SYS_APIAKEYLO_EL1, x3
+	msr_s	SYS_APIAKEYHI_EL1, x4
+
 2:	/* Enable ptrauth instructions */
 	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
 		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 11/18] arm64: initialize ptrauth keys for kernel booting task
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (9 preceding siblings ...)
  2020-03-06  6:35 ` [PATCH v6 10/18] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
@ 2020-03-06  6:35 ` Amit Daniel Kachhap
  2020-03-10 15:09   ` Vincenzo Frascino
  2020-03-06  6:35 ` [PATCH v6 12/18] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 67+ messages in thread
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

This patch uses the existing boot_init_stack_canary arch function
to initialize the ptrauth keys for the booting task in the primary
core. The requirements here are that the function is always inlined
and that the caller never returns, as a function cannot be entered
and exited with different keys in place.

As pointer authentication also detects a subset of stack corruption,
it makes sense to place this code here.

Both the pointer authentication and stack canary code paths are
protected by their respective config options.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/include/asm/pointer_auth.h   | 11 ++++++++++-
 arch/arm64/include/asm/stackprotector.h |  5 +++++
 include/linux/stackprotector.h          |  2 +-
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index aa956ca..833d3f9 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -54,12 +54,18 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)
 
-static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
+static __always_inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
 {
 	if (system_supports_address_auth())
 		get_random_bytes(&keys->apia, sizeof(keys->apia));
 }
 
+static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys)
+{
+	if (system_supports_address_auth())
+		__ptrauth_key_install(APIA, keys->apia);
+}
+
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
 /*
@@ -78,12 +84,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
 #define ptrauth_thread_init_kernel(tsk)					\
 	ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
+#define ptrauth_thread_switch_kernel(tsk)				\
+	ptrauth_keys_switch_kernel(&(tsk)->thread.keys_kernel)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
 #define ptrauth_thread_init_kernel(tsk)
+#define ptrauth_thread_switch_kernel(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/stackprotector.h b/arch/arm64/include/asm/stackprotector.h
index 5884a2b..7263e0b 100644
--- a/arch/arm64/include/asm/stackprotector.h
+++ b/arch/arm64/include/asm/stackprotector.h
@@ -15,6 +15,7 @@
 
 #include <linux/random.h>
 #include <linux/version.h>
+#include <asm/pointer_auth.h>
 
 extern unsigned long __stack_chk_guard;
 
@@ -26,6 +27,7 @@ extern unsigned long __stack_chk_guard;
  */
 static __always_inline void boot_init_stack_canary(void)
 {
+#if defined(CONFIG_STACKPROTECTOR)
 	unsigned long canary;
 
 	/* Try to get a semi random initial value. */
@@ -36,6 +38,9 @@ static __always_inline void boot_init_stack_canary(void)
 	current->stack_canary = canary;
 	if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK))
 		__stack_chk_guard = current->stack_canary;
+#endif
+	ptrauth_thread_init_kernel(current);
+	ptrauth_thread_switch_kernel(current);
 }
 
 #endif	/* _ASM_STACKPROTECTOR_H */
diff --git a/include/linux/stackprotector.h b/include/linux/stackprotector.h
index 6b792d0..4c678c4 100644
--- a/include/linux/stackprotector.h
+++ b/include/linux/stackprotector.h
@@ -6,7 +6,7 @@
 #include <linux/sched.h>
 #include <linux/random.h>
 
-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) || defined(CONFIG_ARM64_PTR_AUTH)
 # include <asm/stackprotector.h>
 #else
 static inline void boot_init_stack_canary(void)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 12/18] arm64: mask PAC bits of __builtin_return_address
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (10 preceding siblings ...)
  2020-03-06  6:35 ` [PATCH v6 11/18] arm64: initialize ptrauth keys for kernel booting task Amit Daniel Kachhap
@ 2020-03-06  6:35 ` Amit Daniel Kachhap
  2020-03-06 19:07   ` James Morse
  2020-03-06  6:35 ` [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
                   ` (7 subsequent siblings)
  19 siblings, 1 reply; 67+ messages in thread
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

This redefines __builtin_return_address to mask PAC bits
when pointer authentication is enabled. As __builtin_return_address
is mostly used to refer to the caller's function symbol address,
masking out the runtime-generated PAC bits makes the value match
kernel symbols again.

This patch adds a new file (asm/compiler.h), which is transitively
included (via include/compiler_types.h) on the compiler command line,
so it is guaranteed to be loaded and the users of this macro will
not pick up a wrong version.

A helper macro ptrauth_kernel_pac_mask is created for this purpose
and added in this file. A similar macro ptrauth_user_pac_mask exists
in pointer_auth.h and is now moved here for the sake of consistency.

This change fixes utilities such as cat /proc/vmallocinfo so that
they show correct symbol names.
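
The effect of the masks can be sketched in a standalone C demo
(hypothetical, assuming vabits_actual == 48; the sample value is the
faulting address from the lkdtm test later in this series):

        #include <stdint.h>
        #include <stdio.h>

        #define VA_BITS 48      /* stand-in for vabits_actual */
        #define GENMASK_ULL(h, l) \
                (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
        #define kernel_pac_mask()       GENMASK_ULL(63, VA_BITS)
        #define user_pac_mask()         GENMASK_ULL(54, VA_BITS)

        static uint64_t strip_insn_pac(uint64_t ptr)
        {
                if (ptr & (1ULL << 55)) /* bit 55: TTBR1 (kernel) pointer */
                        return ptr | kernel_pac_mask(); /* top bits back to 1s */
                return ptr & ~user_pac_mask();          /* clear the PAC bits */
        }

        int main(void)
        {
                uint64_t signed_lr = 0xbfff8000108648ecULL; /* PAC in 63:48 */

                printf("%016llx\n",
                       (unsigned long long)strip_insn_pac(signed_lr));
                /* prints ffff8000108648ec, a symbolizable kernel address */
                return 0;
        }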

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/Kconfig                    |  1 +
 arch/arm64/include/asm/compiler.h     | 20 ++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h | 13 +++++--------
 3 files changed, 26 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/include/asm/compiler.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 87e2cbb..115ceea 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -118,6 +118,7 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_COMPILER_H
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
new file mode 100644
index 0000000..085e7cd0
--- /dev/null
+++ b/arch/arm64/include/asm/compiler.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_COMPILER_H
+#define __ASM_COMPILER_H
+
+#if defined(CONFIG_ARM64_PTR_AUTH)
+
+/*
+ * The EL0/EL1 pointer bits used by a pointer authentication code.
+ * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
+ */
+#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
+#define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)
+
+#define __builtin_return_address(val)				\
+	(void *)((unsigned long)__builtin_return_address(val) |	\
+	ptrauth_kernel_pac_mask())
+
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_COMPILER_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 833d3f9..5340dbb 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -68,16 +68,13 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
 
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
-/*
- * The EL0 pointer bits used by a pointer authentication code.
- * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
- */
-#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
-
-/* Only valid for EL0 TTBR0 instruction pointers */
+/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-	return ptr & ~ptrauth_user_pac_mask();
+	if (ptr & BIT_ULL(55))
+		return ptr | ptrauth_kernel_pac_mask();
+	else
+		return ptr & ~ptrauth_user_pac_mask();
 }
 
 #define ptrauth_thread_init_user(tsk)					\
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (11 preceding siblings ...)
  2020-03-06  6:35 ` [PATCH v6 12/18] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
@ 2020-03-06  6:35 ` Amit Daniel Kachhap
  2020-03-09 19:03   ` James Morse
  2020-03-06  6:35 ` [PATCH v6 14/18] arm64: __show_regs: strip PAC from lr in printk Amit Daniel Kachhap
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 67+ messages in thread
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

From: Mark Rutland <mark.rutland@arm.com>

When we enable pointer authentication in the kernel, LR values saved to
the stack will have a PAC, which we must strip in order to retrieve the
real return address.

Strip PACs when unwinding the stack in order to account for this.

Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-position ptrauth_strip_insn_pac, comment]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/kernel/stacktrace.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index a336cb1..b479df7 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -14,6 +14,7 @@
 #include <linux/stacktrace.h>
 
 #include <asm/irq.h>
+#include <asm/pointer_auth.h>
 #include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>
 
@@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	}
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
+	frame->pc = ptrauth_strip_insn_pac(frame->pc);
+
 	/*
 	 * Frames created upon entry from EL0 have NULL FP and PC values, so
 	 * don't bother reporting these. Frames created by __noreturn functions
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 14/18] arm64: __show_regs: strip PAC from lr in printk
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (12 preceding siblings ...)
  2020-03-06  6:35 ` [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
@ 2020-03-06  6:35 ` Amit Daniel Kachhap
  2020-03-10 15:11   ` Vincenzo Frascino
  2020-03-06  6:35 ` [PATCH v6 15/18] arm64: suspend: restore the kernel ptrauth keys Amit Daniel Kachhap
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 67+ messages in thread
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

lr is printed with %pS, which will try to find an entry in kallsyms.
After enabling pointer authentication, this match will fail due to the
PAC present in lr.

Strip the PAC from lr to display the correct symbol name.

Suggested-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/kernel/process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 7db0302..cacae29 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -262,7 +262,7 @@ void __show_regs(struct pt_regs *regs)
 
 	if (!user_mode(regs)) {
 		printk("pc : %pS\n", (void *)regs->pc);
-		printk("lr : %pS\n", (void *)lr);
+		printk("lr : %pS\n", (void *)ptrauth_strip_insn_pac(lr));
 	} else {
 		printk("pc : %016llx\n", regs->pc);
 		printk("lr : %016llx\n", lr);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 15/18] arm64: suspend: restore the kernel ptrauth keys
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (13 preceding siblings ...)
  2020-03-06  6:35 ` [PATCH v6 14/18] arm64: __show_regs: strip PAC from lr in printk Amit Daniel Kachhap
@ 2020-03-06  6:35 ` Amit Daniel Kachhap
  2020-03-10 15:18   ` Vincenzo Frascino
  2020-03-06  6:35   ` Amit Daniel Kachhap
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 67+ messages in thread
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

This patch restores the kernel keys from the current task during cpu
resume, after the MMU is turned on and ptrauth is enabled.

A flag is added to the macro ptrauth_keys_install_kernel to indicate
whether an isb instruction needs to be executed.
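
(In the hunks below, cpu_do_resume passes 0 for this flag because it
already executes an isb immediately afterwards, while the entry.S
callers keep passing 1.)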

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v5:
 * Moved ptrauth_keys_install_kernel inside function cpu_do_resume.
 * Added a flag to ptrauth_keys_install_kernel to make the isb
   instruction optional.

 arch/arm64/include/asm/asm_pointer_auth.h | 6 ++++--
 arch/arm64/kernel/entry.S                 | 4 ++--
 arch/arm64/mm/proc.S                      | 2 ++
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
index 4152afe..899a007 100644
--- a/arch/arm64/include/asm/asm_pointer_auth.h
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -39,14 +39,16 @@ alternative_if ARM64_HAS_GENERIC_AUTH
 alternative_else_nop_endif
 	.endm
 
-	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	.macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
 alternative_if ARM64_HAS_ADDRESS_AUTH
 	mov	\tmp1, #THREAD_KEYS_KERNEL
 	add	\tmp1, \tsk, \tmp1
 	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
 	msr_s	SYS_APIAKEYLO_EL1, \tmp2
 	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	.if     \sync == 1
 	isb
+	.endif
 alternative_else_nop_endif
 	.endm
 
@@ -55,7 +57,7 @@ alternative_else_nop_endif
 	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
 	.endm
 
-	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	.macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
 	.endm
 
 #endif /* CONFIG_ARM64_PTR_AUTH */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 3dad2d0..6273d7b 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -178,7 +178,7 @@ alternative_cb_end
 
 	apply_ssbd 1, x22, x23
 
-	ptrauth_keys_install_kernel tsk, x20, x22, x23
+	ptrauth_keys_install_kernel tsk, 1, x20, x22, x23
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
@@ -900,7 +900,7 @@ ENTRY(cpu_switch_to)
 	ldr	lr, [x8]
 	mov	sp, x9
 	msr	sp_el0, x1
-	ptrauth_keys_install_kernel x1, x8, x9, x10
+	ptrauth_keys_install_kernel x1, 1, x8, x9, x10
 	ret
 ENDPROC(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 5a11a89..4450dc8 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -11,6 +11,7 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
+#include <asm/asm_pointer_auth.h>
 #include <asm/hwcap.h>
 #include <asm/pgtable.h>
 #include <asm/pgtable-hwdef.h>
@@ -137,6 +138,7 @@ alternative_if ARM64_HAS_RAS_EXTN
 	msr_s	SYS_DISR_EL1, xzr
 alternative_else_nop_endif
 
+	ptrauth_keys_install_kernel x14, 0, x1, x2, x3
 	isb
 	ret
 SYM_FUNC_END(cpu_do_resume)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 16/18] kconfig: Add support for 'as-option'
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
@ 2020-03-06  6:35   ` Amit Daniel Kachhap
  2020-03-06  6:35 ` [PATCH v6 02/18] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
                     ` (18 subsequent siblings)
  19 siblings, 0 replies; 67+ messages in thread
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan,
	Kristina Martsenko, Dave Martin, Amit Daniel Kachhap,
	James Morse, Vincenzo Frascino, Mark Brown, Vincenzo Frascino,
	Masahiro Yamada, linux-kbuild

From: Vincenzo Frascino <vincenzo.frascino@arm.com>

Currently kconfig does not have a feature that allows it to detect
whether the assembler in use supports a specific compilation option.

Introduce 'as-option' to serve this purpose in the context of Kconfig:

        config X
                def_bool $(as-option,...)
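
For example, later in this series (patch 17) the same helper is used
to probe the assembler for ARMv8.3-A support:

        config AS_HAS_PAC
                def_bool $(as-option,-Wa$(comma)-march=armv8.3-a)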

Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: linux-kbuild@vger.kernel.org
Acked-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v5:
 * More descriptions for using /dev/zero.

 scripts/Kconfig.include | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include
index 85334dc..a1c1925 100644
--- a/scripts/Kconfig.include
+++ b/scripts/Kconfig.include
@@ -31,6 +31,12 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -S -x c /dev/null -o /de
 # Return y if the linker supports <flag>, n otherwise
 ld-option = $(success,$(LD) -v $(1))
 
+# $(as-option,<flag>)
+# /dev/zero is used as output instead of /dev/null as some assembler cribs when
+# both input and output are same. Also both of them have same write behaviour so
+# can be easily substituted.
+as-option = $(success, $(CC) $(CLANG_FLAGS) $(1) -c -x assembler /dev/null -o /dev/zero)
+
 # $(as-instr,<instr>)
 # Return y if the assembler supports <instr>, n otherwise
 as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 17/18] arm64: compile the kernel with ptrauth return address signing
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (15 preceding siblings ...)
  2020-03-06  6:35   ` Amit Daniel Kachhap
@ 2020-03-06  6:35 ` Amit Daniel Kachhap
  2020-03-10 15:20   ` Vincenzo Frascino
  2020-03-06  6:35 ` [PATCH v6 18/18] lkdtm: arm64: test kernel pointer authentication Amit Daniel Kachhap
                   ` (2 subsequent siblings)
  19 siblings, 1 reply; 67+ messages in thread
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Masahiro Yamada, Mark Brown,
	James Morse, Ramana Radhakrishnan, Amit Daniel Kachhap,
	Vincenzo Frascino, Will Deacon, Ard Biesheuvel

From: Kristina Martsenko <kristina.martsenko@arm.com>

Compile all functions with two ptrauth instructions: PACIASP in the
prologue to sign the return address, and AUTIASP in the epilogue to
authenticate the return address (from the stack). If authentication
fails, the return will cause an instruction abort to be taken, followed
by an oops, and the task will be killed.

This should help protect the kernel against attacks using
return-oriented programming. As ptrauth protects the return address, it
can also serve as a replacement for CONFIG_STACKPROTECTOR, although note
that it does not protect other parts of the stack.

The new instructions are in the HINT encoding space, so on a system
without ptrauth they execute as NOPs.
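
As a rough illustration (a sketch of the intended codegen, not the
exact output of any particular compiler), a function built with
-mbranch-protection=pac-ret+leaf gets a signing/authentication pair
wrapped around its body:

        /* what pac-ret+leaf does to an ordinary function, roughly */
        int add(int a, int b)
        {
                /* prologue: paciasp - sign x30 (LR) against SP */
                return a + b;
                /* epilogue: autiasp - authenticate x30; a bad PAC
                 * leaves a non-canonical address, so ret faults */
        }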

CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
guests, but also automatically builds the kernel with ptrauth
instructions if the compiler supports it. If there is no compiler
support, we do not warn that the kernel was built without ptrauth
instructions.

GCC 7 and 8 support the -msign-return-address option, while GCC 9
deprecates that option and replaces it with -mbranch-protection. Support
both options.

Clang uses an external assembler, hence this patch makes sure that the
correct parameter (-march=armv8.3-a) is passed down to help it recognize
the ptrauth instructions.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Cover leaf function, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v5:
 * Clarified assembler option for GNU toolchain.

 arch/arm64/Kconfig  | 20 +++++++++++++++++++-
 arch/arm64/Makefile | 11 +++++++++++
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 115ceea..0f3ea01 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1499,6 +1499,7 @@ config ARM64_PTR_AUTH
 	bool "Enable support for pointer authentication"
 	default y
 	depends on !KVM || ARM64_VHE
+	depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC
 	help
 	  Pointer authentication (part of the ARMv8.3 Extensions) provides
 	  instructions for signing and authenticating pointers against secret
@@ -1506,11 +1507,17 @@ config ARM64_PTR_AUTH
 	  and other attacks.
 
 	  This option enables these instructions at EL0 (i.e. for userspace).
-
 	  Choosing this option will cause the kernel to initialise secret keys
 	  for each process at exec() time, with these keys being
 	  context-switched along with the process.
 
+	  If the compiler supports the -mbranch-protection or
+	  -msign-return-address flag (e.g. GCC 7 or later), then this option
+	  will also cause the kernel itself to be compiled with return address
+	  protection. In this case, and if the target hardware is known to
+	  support pointer authentication, then CONFIG_STACKPROTECTOR can be
+	  disabled with minimal loss of protection.
+
 	  The feature is detected at runtime. If the feature is not present in
 	  hardware it will not be advertised to userspace/KVM guest nor will it
 	  be enabled. However, KVM guest also require VHE mode and hence
@@ -1522,6 +1529,17 @@ config ARM64_PTR_AUTH
 	  but with the feature disabled. On such a system, this option should
 	  not be selected.
 
+config CC_HAS_BRANCH_PROT_PAC_RET
+	# GCC 9 or later, clang 8 or later
+	def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)
+
+config CC_HAS_SIGN_RETURN_ADDRESS
+	# GCC 7, 8
+	def_bool $(cc-option,-msign-return-address=all)
+
+config AS_HAS_PAC
+	def_bool $(as-option,-Wa$(comma)-march=armv8.3-a)
+
 endmenu
 
 menu "ARMv8.5 architectural features"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index dca1a97..f15f92b 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -65,6 +65,17 @@ stack_protector_prepare: prepare0
 					include/generated/asm-offsets.h))
 endif
 
+ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
+branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
+branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
+# -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the
+# compiler to generate them and consequently to break the single image contract
+# we pass it only to the assembler. This option is utilized only in case of non
+# integrated assemblers.
+branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a
+KBUILD_CFLAGS += $(branch-prot-flags-y)
+endif
+
 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
 KBUILD_CPPFLAGS	+= -mbig-endian
 CHECKFLAGS	+= -D__AARCH64EB__
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 18/18] lkdtm: arm64: test kernel pointer authentication
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (16 preceding siblings ...)
  2020-03-06  6:35 ` [PATCH v6 17/18] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
@ 2020-03-06  6:35 ` Amit Daniel Kachhap
  2020-03-10 15:59 ` [PATCH v6 00/18] arm64: return address signing Rémi Denis-Courmont
  2020-03-11  9:28 ` James Morse
  19 siblings, 0 replies; 67+ messages in thread
From: Amit Daniel Kachhap @ 2020-03-06  6:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Amit Daniel Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel

This test is specific to arm64. When the in-kernel Pointer
Authentication config is enabled, the return address stored on the
stack is signed. This feature helps mitigate ROP-style attacks. If any
of the parameters used to generate the PAC (<key, sp, lr>) is
modified, authentication fails and leads to an abort.

This test changes one PAC input parameter, the APIA kernel key, to
cause an abort. The PAC computed from the new key can be the same as
the previous one due to a hash collision, so the key change is retried
a few times, as there is no reliable way to compare the PACs. Even if
this test does not abort after the retries, the changed key may still
cause an authentication failure later, when earlier functions return.
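
A back-of-the-envelope estimate of why a few retries are enough
(assuming roughly 15 PAC bits, as for kernel pointers with 48-bit VAs;
the exact width depends on the VA configuration):

        /* build with: cc demo.c -lm */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
                double p_same = pow(2.0, -15.0);   /* one re-key collides */
                /* all 10 re-keys collide and the test never aborts: */
                printf("%g\n", pow(p_same, 10.0)); /* ~7e-46 */
                return 0;
        }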

This test can be invoked as,
echo CORRUPT_PAC > /sys/kernel/debug/provoke-crash/DIRECT

or as below if inserted as a module,
insmod lkdtm.ko cpoint_name=DIRECT cpoint_type=CORRUPT_PAC cpoint_count=1

[   13.118166] lkdtm: Performing direct entry CORRUPT_PAC
[   13.118298] lkdtm: Clearing PAC from the return address
[   13.118466] Unable to handle kernel paging request at virtual address bfff8000108648ec
[   13.118626] Mem abort info:
[   13.118666]   ESR = 0x86000004
[   13.118866]   EC = 0x21: IABT (current EL), IL = 32 bits
[   13.118966]   SET = 0, FnV = 0
[   13.119117]   EA = 0, S1PTW = 0

Cc: Kees Cook <keescook@chromium.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 drivers/misc/lkdtm/bugs.c  | 36 ++++++++++++++++++++++++++++++++++++
 drivers/misc/lkdtm/core.c  |  1 +
 drivers/misc/lkdtm/lkdtm.h |  1 +
 3 files changed, 38 insertions(+)

diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
index de87693..cc92bc3 100644
--- a/drivers/misc/lkdtm/bugs.c
+++ b/drivers/misc/lkdtm/bugs.c
@@ -378,3 +378,39 @@ void lkdtm_DOUBLE_FAULT(void)
 	pr_err("XFAIL: this test is ia32-only\n");
 #endif
 }
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+static noinline void change_pac_parameters(void)
+{
+	/* Reset the keys of current task */
+	ptrauth_thread_init_kernel(current);
+	ptrauth_thread_switch_kernel(current);
+}
+
+#define CORRUPT_PAC_ITERATE	10
+noinline void lkdtm_CORRUPT_PAC(void)
+{
+	int i;
+
+	if (!system_supports_address_auth()) {
+		pr_err("FAIL: arm64 pointer authentication feature not present\n");
+		return;
+	}
+
+	pr_info("Change the PAC parameters to force function return failure\n");
+	/*
+	 * Pac is a hash value computed from input keys, return address and
+	 * stack pointer. As pac has fewer bits so there is a chance of
+	 * collision, so iterate few times to reduce the collision probability.
+	 */
+	for (i = 0; i < CORRUPT_PAC_ITERATE; i++)
+		change_pac_parameters();
+
+	pr_err("FAIL: %s test failed. Kernel may be unstable from here\n", __func__);
+}
+#else /* !CONFIG_ARM64_PTR_AUTH */
+noinline void lkdtm_CORRUPT_PAC(void)
+{
+	pr_err("FAIL: arm64 pointer authentication config disabled\n");
+}
+#endif
diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index ee0d6e7..5ce4ac8 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -116,6 +116,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(STACK_GUARD_PAGE_LEADING),
 	CRASHTYPE(STACK_GUARD_PAGE_TRAILING),
 	CRASHTYPE(UNSET_SMEP),
+	CRASHTYPE(CORRUPT_PAC),
 	CRASHTYPE(UNALIGNED_LOAD_STORE_WRITE),
 	CRASHTYPE(OVERWRITE_ALLOCATION),
 	CRASHTYPE(WRITE_AFTER_FREE),
diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index c56d23e..8d13d01 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -31,6 +31,7 @@ void lkdtm_UNSET_SMEP(void);
 #ifdef CONFIG_X86_32
 void lkdtm_DOUBLE_FAULT(void);
 #endif
+void lkdtm_CORRUPT_PAC(void);
 
 /* lkdtm_heap.c */
 void __init lkdtm_heap_init(void);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 16/18] kconfig: Add support for 'as-option'
  2020-03-06  6:35   ` Amit Daniel Kachhap
@ 2020-03-06 11:37     ` Masahiro Yamada
  -1 siblings, 0 replies; 67+ messages in thread
From: Masahiro Yamada @ 2020-03-06 11:37 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Mark Rutland, Kees Cook, Ard Biesheuvel,
	Catalin Marinas, Suzuki K Poulose, Will Deacon,
	Ramana Radhakrishnan, Kristina Martsenko, Dave Martin,
	James Morse, Vincenzo Frascino, Mark Brown,
	Linux Kbuild mailing list

On Fri, Mar 6, 2020 at 3:36 PM Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>
> From: Vincenzo Frascino <vincenzo.frascino@arm.com>
>
> Currently kconfig does not have a feature that allows to detect if the
> used assembler supports a specific compilation option.
>
> Introduce 'as-option' to serve this purpose in the context of Kconfig:
>
>         config X
>                 def_bool $(as-option,...)
>
> Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
> Cc: linux-kbuild@vger.kernel.org
> Acked-by: Masahiro Yamada <masahiroy@kernel.org>
> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v5:
>  * More descriptions for using /dev/zero.


FYI:

This has been fixed:

https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=3c968de5c7d1719b2f9b538f2f7f5f5922e5f311


So, this will not be a problem for
future releases of binutils.

Anyway, we need to take care of the released ones,
so I am fine with /dev/zero.





>  scripts/Kconfig.include | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include
> index 85334dc..a1c1925 100644
> --- a/scripts/Kconfig.include
> +++ b/scripts/Kconfig.include
> @@ -31,6 +31,12 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -S -x c /dev/null -o /de
>  # Return y if the linker supports <flag>, n otherwise
>  ld-option = $(success,$(LD) -v $(1))
>
> +# $(as-option,<flag>)
> +# /dev/zero is used as output instead of /dev/null as some assembler cribs when
> +# both input and output are same. Also both of them have same write behaviour so
> +# can be easily substituted.
> +as-option = $(success, $(CC) $(CLANG_FLAGS) $(1) -c -x assembler /dev/null -o /dev/zero)
> +
>  # $(as-instr,<instr>)
>  # Return y if the assembler supports <instr>, n otherwise
>  as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
> --
> 2.7.4
>


-- 
Best Regards
Masahiro Yamada

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 16/18] kconfig: Add support for 'as-option'
  2020-03-06 11:37     ` Masahiro Yamada
@ 2020-03-06 11:49       ` Vincenzo Frascino
  -1 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-06 11:49 UTC (permalink / raw)
  To: Masahiro Yamada, Amit Daniel Kachhap
  Cc: linux-arm-kernel, Mark Rutland, Kees Cook, Ard Biesheuvel,
	Catalin Marinas, Suzuki K Poulose, Will Deacon,
	Ramana Radhakrishnan, Kristina Martsenko, Dave Martin,
	James Morse, Mark Brown, Linux Kbuild mailing list

Hi Masahiro,

On 3/6/20 11:37 AM, Masahiro Yamada wrote:
> On Fri, Mar 6, 2020 at 3:36 PM Amit Daniel Kachhap <amit.kachhap@arm.com> wrote:
>>
>> From: Vincenzo Frascino <vincenzo.frascino@arm.com>
>>
>> Currently kconfig does not have a feature that allows to detect if the
>> used assembler supports a specific compilation option.
>>
>> Introduce 'as-option' to serve this purpose in the context of Kconfig:
>>
>>         config X
>>                 def_bool $(as-option,...)
>>
>> Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
>> Cc: linux-kbuild@vger.kernel.org
>> Acked-by: Masahiro Yamada <masahiroy@kernel.org>
>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Changes since v5:
>>  * More descriptions for using /dev/zero.
> 
> 
> FYI:
> 
> This has been fixed:
> 
> https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=3c968de5c7d1719b2f9b538f2f7f5f5922e5f311
> 
> 
> So, this will not be a problem for the
> future release of binutils.
> 
> Anyway, we need to take care of the released ones,
> so I am fine with /dev/zero.
> 

Thank you for letting us know.

I did not realize it was a compiler issue otherwise I would have reported it. I
thought it was a mechanism to prevent people from trashing their code, but
thinking at it more carefully, for devices does not make sense hence it is good
that there is a fix already.

[...]

-- 
Regards,
Vincenzo

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 12/18] arm64: mask PAC bits of __builtin_return_address
  2020-03-06  6:35 ` [PATCH v6 12/18] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
@ 2020-03-06 19:07   ` James Morse
  2020-03-09 12:27     ` Amit Kachhap
  0 siblings, 1 reply; 67+ messages in thread
From: James Morse @ 2020-03-06 19:07 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Amit,

On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
> This redefines __builtin_return_address to mask pac bits
> when Pointer Authentication is enabled. As __builtin_return_address
> is used mostly used to refer to the caller function symbol address
> so masking runtime generated pac bits will help to find the match.

I'm not entirely sure what problem you are trying to solve from this paragraph.


> This patch adds a new file (asm/compiler.h) and is transitively
> included (via include/compiler_types.h) on the compiler command line
> so it is guaranteed to be loaded and the users of this macro will
> not find a wrong version.
> 
> A helper macro ptrauth_kernel_pac_mask is created for this purpose
> and added in this file. A similar macro ptrauth_user_pac_mask exists
> in pointer_auth.h and is now moved here for the sake of consistency.

> This change fixes the utilities like cat /proc/vmallocinfo to show
> correct symbol names.

This is to avoid things like:
| 0x(____ptrval____)-0x(____ptrval____)   20480 0xb8b8000100f75fc pages=4 vmalloc N0=4
| 0x(____ptrval____)-0x(____ptrval____)   20480 0xc0f28000100f75fc pages=4 vmalloc N0=4

Where those 64bit values should be the same symbol name, not different LR values.

Could you phrase the commit message to describe the problem, then how you fix it?
Something like:
| Functions like vmap() record how much memory has been allocated by their callers;
| callers are identified using __builtin_return_address(). Once the kernel is using
| pointer-auth the return address will be signed. This means it will not match any kernel
| symbol, and will vary between threads even for the same caller.
|
| Use the pre-processor to add logic to strip the PAC to __builtin_return_address()
| callers.


> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> new file mode 100644
> index 0000000..085e7cd0
> --- /dev/null
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_COMPILER_H
> +#define __ASM_COMPILER_H
> +
> +#if defined(CONFIG_ARM64_PTR_AUTH)
> +
> +/*
> + * The EL0/EL1 pointer bits used by a pointer authentication code.
> + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
> + */
> +#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
> +#define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)
> +
> +#define __builtin_return_address(val)				\
> +	(void *)((unsigned long)__builtin_return_address(val) |	\
> +	ptrauth_kernel_pac_mask())


This is pretty manky; ideally we would have some __arch_return_address() hook for this,
but this works, and lets us postpone any tree-wide noise until it's absolutely necessary.
(I assume if this does ever break, it will be a build error)


You add ptrauth_strip_insn_pac() in this patch, but don't use it until the next one.
However... could you use it here?

This would go wrong if __builtin_return_address() legitimately returned a value that was
mapped by TTBR0: we would force the top bits to be set instead of clearing the PAC bits.
This would corrupt the value instead of stripping the PAC from it.

I don't think there is anywhere this could happen today, but passing callbacks into UEFI
runtime services or making kernel calls from an idmap may both do this.
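
A sketch of that hazard (illustrative only, not part of the patch; it assumes
vabits_actual == 48 and uses the bit-55 test from the hunk below):

	static void pac_strip_hazard_demo(void)
	{
		/* hypothetical TTBR0-mapped return address: bit 55 is clear */
		unsigned long idmap_lr = 0x81f00abcUL;
		unsigned long bad, good;

		/* unconditional OR: 0x81f00abc -> 0xffff000081f00abc */
		bad = idmap_lr | GENMASK_ULL(63, 48);

		/* bit-55 test: clear the (empty) PAC field instead */
		good = (idmap_lr & BIT_ULL(55)) ?
			(idmap_lr | GENMASK_ULL(63, 48)) :
			(idmap_lr & ~GENMASK_ULL(54, 48));

		WARN_ON(bad == idmap_lr);	/* the OR path corrupted it */
		WARN_ON(good != idmap_lr);	/* the bit-55 path preserved it */
	}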


With that:
Reviewed-by: James Morse <james.morse@arm.com>



Thanks!

James


> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -68,16 +68,13 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
>  
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>  
> -/*
> - * The EL0 pointer bits used by a pointer authentication code.
> - * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
> - */
> -#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
> -
> -/* Only valid for EL0 TTBR0 instruction pointers */
> +/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
>  static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  {
> -	return ptr & ~ptrauth_user_pac_mask();
> +	if (ptr & BIT_ULL(55))
> +		return ptr | ptrauth_kernel_pac_mask();
> +	else
> +		return ptr & ~ptrauth_user_pac_mask();
>  }
>  
>  #define ptrauth_thread_init_user(tsk)					\
> 



* Re: [PATCH v6 06/18] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  2020-03-06  6:35 ` [PATCH v6 06/18] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup Amit Daniel Kachhap
@ 2020-03-06 19:07   ` James Morse
  2020-03-09 17:04     ` Catalin Marinas
  2020-03-10 12:14   ` Vincenzo Frascino
  1 sibling, 1 reply; 67+ messages in thread
From: James Morse @ 2020-03-06 19:07 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Amit,

On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
> This patch allows __cpu_setup to be invoked with one of these flags,
> ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_SECONDARY or ARM64_CPU_RUNTIME.
> This is required as some cpufeatures need different handling during
> different scenarios.
> 
> The input parameter in x0 is preserved till the end to be used inside
> this function.
> 
> There should be no functional change with this patch; it is useful
> for the subsequent ptrauth patch, which utilizes it. Some upcoming
> arm cpufeatures can also utilize these flags.

Reviewed-by: James Morse <james.morse@arm.com>


(this will conflict with Ionela's AMU series, which will need to not clobber x0 during
__cpu_setup.)


Thanks,

James


* Re: [PATCH v6 04/18] arm64: install user ptrauth keys at kernel exit time
  2020-03-06  6:35 ` [PATCH v6 04/18] arm64: install user ptrauth keys at kernel exit time Amit Daniel Kachhap
@ 2020-03-06 19:07   ` James Morse
  2020-03-10 11:48     ` Vincenzo Frascino
  0 siblings, 1 reply; 67+ messages in thread
From: James Morse @ 2020-03-06 19:07 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Amit,

On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> As we're going to enable pointer auth within the kernel and use a
> different APIAKey for the kernel itself, move the user APIAKey
> switch to EL0 exception return.
> 
> The other 4 keys could remain switched during task switch, but are also
> moved to keep things consistent.

> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> new file mode 100644
> index 0000000..f820a13
> --- /dev/null
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -0,0 +1,49 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_ASM_POINTER_AUTH_H
> +#define __ASM_ASM_POINTER_AUTH_H
> +
> +#include <asm/alternative.h>
> +#include <asm/asm-offsets.h>
> +#include <asm/cpufeature.h>
> +#include <asm/sysreg.h>
> +
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +/*
> + * thread.keys_user.ap* as offset exceeds the #imm offset range
> + * so use the base value of ldp as thread.keys_user and offset as

> + * keys_user.ap*.

(Nit: thread.keys_user.ap*)

> + */
> +	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3

Reviewed-by: James Morse <james.morse@arm.com>


Thanks,

James


* Re: [PATCH v6 12/18] arm64: mask PAC bits of __builtin_return_address
  2020-03-06 19:07   ` James Morse
@ 2020-03-09 12:27     ` Amit Kachhap
  0 siblings, 0 replies; 67+ messages in thread
From: Amit Kachhap @ 2020-03-09 12:27 UTC (permalink / raw)
  To: James Morse
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi James,

On 3/7/20 12:37 AM, James Morse wrote:
> Hi Amit,
> 
> On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
>> This redefines __builtin_return_address to mask PAC bits
>> when Pointer Authentication is enabled. As __builtin_return_address
>> is mostly used to refer to the caller function's symbol address,
>> masking the runtime-generated PAC bits helps to find the match.
> 
> I'm not entirely sure what problem you are trying to solve from this paragraph.
> 
> 
>> This patch adds a new file (asm/compiler.h) which is transitively
>> included (via include/compiler_types.h) on the compiler command line,
>> so it is guaranteed to be loaded and the users of this macro will
>> not pick up a wrong version.
>>
>> A helper macro ptrauth_kernel_pac_mask is created for this purpose
>> and added in this file. A similar macro ptrauth_user_pac_mask exists
>> in pointer_auth.h and is now moved here for the sake of consistency.
> 
>> This change fixes utilities like 'cat /proc/vmallocinfo' so that they
>> show correct symbol names.
> 
> This is to avoid things like:
> | 0x(____ptrval____)-0x(____ptrval____)   20480 0xb8b8000100f75fc pages=4 vmalloc N0=4
> | 0x(____ptrval____)-0x(____ptrval____)   20480 0xc0f28000100f75fc pages=4 vmalloc N0=4
> 
> Where those 64bit values should be the same symbol name, not different LR values.
> 
> Could you rephrase the commit message to describe the problem, then how you fix it?
> Something like:
> | Functions like vmap() record how much memory has been allocated by their callers,
> | callers are identified using __builtin_return_address(). Once the kernel is using
> | pointer-auth the return address will be signed. This means it will not match any kernel
> | symbol, and will vary between threads even for the same caller.
> |
> | Use the pre-processor to add logic to strip the PAC to __builtin_return_address()
> | callers.
> 

Thanks for the detailed description. I will update my commit message.

> 
>> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
>> new file mode 100644
>> index 0000000..085e7cd0
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/compiler.h
>> @@ -0,0 +1,20 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +#ifndef __ASM_COMPILER_H
>> +#define __ASM_COMPILER_H
>> +
>> +#if defined(CONFIG_ARM64_PTR_AUTH)
>> +
>> +/*
>> + * The EL0/EL1 pointer bits used by a pointer authentication code.
>> + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
>> + */
>> +#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
>> +#define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)
>> +
>> +#define __builtin_return_address(val)				\
>> +	(void *)((unsigned long)__builtin_return_address(val) |	\
>> +	ptrauth_kernel_pac_mask())
> 
> 
> This is pretty manky; ideally we would have some __arch_return_address() hook for this,
> but this works, and lets us postpone any tree-wide noise until it's absolutely necessary.
> (I assume if this does ever break, it will be a build error)

ok.

> 
> 
> You add ptrauth_strip_insn_pac() in this patch, but don't use it until the next one.
> However... could you use it here?

Yes, ptrauth_strip_insn_pac can be used here, not as a C function but as a
macro. I will post it in my next version.
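
A rough sketch of the shape that macro form might take (my assumption only;
the helper name ptrauth_clear_pac is hypothetical, not the posted code):

	/* in asm/compiler.h, so no out-of-line C call is needed */
	#define ptrauth_clear_pac(ptr)						\
		(((ptr) & BIT_ULL(55)) ? ((ptr) | ptrauth_kernel_pac_mask()) :	\
					 ((ptr) & ~ptrauth_user_pac_mask()))

	#define __builtin_return_address(val)					\
		(void *)(ptrauth_clear_pac(					\
			(unsigned long)__builtin_return_address(val)))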

> 
> This would go wrong if __builtin_return_address() legitimately returned a value that was
> mapped by TTBR0: we would force the top bits to be set instead of clearing the PAC bits.
> This would corrupt the value instead of stripping the PAC from it.
> 
> I don't think there is anywhere this could happen today, but passing callbacks into UEFI
> runtime services or making kernel calls from an idmap may both do this.

I didn't think about this scenario, so I did it this way.

> 
> 
> With that:
> Reviewed-by: James Morse <james.morse@arm.com>

Thank you.

Cheers,
Amit
> 
> 
> 
> Thanks!
> 
> James
> 
> 
>> --- a/arch/arm64/include/asm/pointer_auth.h
>> +++ b/arch/arm64/include/asm/pointer_auth.h
>> @@ -68,16 +68,13 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
>>   
>>   extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>>   
>> -/*
>> - * The EL0 pointer bits used by a pointer authentication code.
>> - * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
>> - */
>> -#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
>> -
>> -/* Only valid for EL0 TTBR0 instruction pointers */
>> +/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
>>   static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>>   {
>> -	return ptr & ~ptrauth_user_pac_mask();
>> +	if (ptr & BIT_ULL(55))
>> +		return ptr | ptrauth_kernel_pac_mask();
>> +	else
>> +		return ptr & ~ptrauth_user_pac_mask();
>>   }
>>   
>>   #define ptrauth_thread_init_user(tsk)					\
>>
> 


* Re: [PATCH v6 06/18] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  2020-03-06 19:07   ` James Morse
@ 2020-03-09 17:04     ` Catalin Marinas
  0 siblings, 0 replies; 67+ messages in thread
From: Catalin Marinas @ 2020-03-09 17:04 UTC (permalink / raw)
  To: James Morse
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Dave Martin, Mark Brown, Ramana Radhakrishnan,
	Amit Daniel Kachhap, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

On Fri, Mar 06, 2020 at 07:07:19PM +0000, James Morse wrote:
> Hi Amit,
> 
> On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
> > This patch allows __cpu_setup to be invoked with one of these flags,
> > ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_SECONDARY or ARM64_CPU_RUNTIME.
> > This is required as some cpufeatures need different handling during
> > different scenarios.
> > 
> > The input parameter in x0 is preserved till the end to be used inside
> > this function.
> > 
> > There should be no functional change with this patch; it is useful
> > for the subsequent ptrauth patch, which utilizes it. Some upcoming
> > arm cpufeatures can also utilize these flags.
> 
> Reviewed-by: James Morse <james.morse@arm.com>
> 
> (this will conflict with Ionela's AMU series, which will need to not clobber x0 during
> __cpu_setup.)

Thanks for the heads-up. I'll fix it up when merging the series.

-- 
Catalin


* Re: [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses
  2020-03-06  6:35 ` [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
@ 2020-03-09 19:03   ` James Morse
  2020-03-10 12:28     ` Amit Kachhap
  0 siblings, 1 reply; 67+ messages in thread
From: James Morse @ 2020-03-09 19:03 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Amit,

On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When we enable pointer authentication in the kernel, LR values saved to
> the stack will have a PAC which we must strip in order to retrieve the
> real return address.
> 
> Strip PACs when unwinding the stack in order to account for this.

This patch had me looking at the wider pointer-auth + ftrace interaction...


> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index a336cb1..b479df7 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -14,6 +14,7 @@
>  #include <linux/stacktrace.h>
>  
>  #include <asm/irq.h>
> +#include <asm/pointer_auth.h>
>  #include <asm/stack_pointer.h>
>  #include <asm/stacktrace.h>
>  
> @@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)

There is an earlier reader of frame->pc:
| #ifdef CONFIG_FUNCTION_GRAPH_TRACER
| 	if (tsk->ret_stack &&
| 			(frame->pc == (unsigned long)return_to_handler)) {


Which leads down the rat-hole of: does this need ptrauth_strip_insn_pac()?

The version of GCC on my desktop supports patchable-function-entry: the function preamble
has two nops for use by ftrace[0]. This means if prepare_ftrace_return() re-writes the
saved LR, it does so before the caller paciasp's it.

I think that means if you stack-trace from a function that had been hooked by the
function_graph_tracer, you will see the LR with a PAC, meaning the above == won't match.


The version of LLVM on my desktop, however, doesn't support patchable-function-entry; it
uses _mcount() to do the ftrace stuff[1]. Here prepare_ftrace_return() overwrites a
paciasp'd LR with one that isn't, which will fail.


Could the ptrauth_strip_insn_pac() call move above the CONFIG_FUNCTION_GRAPH_TRACER block,
and could we add something like:
|	depends on (!FTRACE || HAVE_DYNAMIC_FTRACE_WITH_REGS)

to the Kconfig to prevent both FTRACE and PTR_AUTH being enabled unless the compiler has
support for patchable-function-entry?


>  	}
>  #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
>  
> +	frame->pc = ptrauth_strip_insn_pac(frame->pc);
> +
>  	/*
>  	 * Frames created upon entry from EL0 have NULL FP and PC values, so
>  	 * don't bother reporting these. Frames created by __noreturn functions


Thanks,

James

[0] gcc (Debian 9.2.1-28) 9.2.1 20200203
0000000000000048 <sync_icache_aliases>:
  48:   d503201f        nop
  4c:   d503201f        nop
  50:   90000002        adrp    x2, 0 <__icache_flags>
  54:   d503233f        paciasp
  58:   a9bf7bfd        stp     x29, x30, [sp, #-16]!
  5c:   910003fd        mov     x29, sp
  60:   f9400044        ldr     x4, [x2]
  64:   36000124        tbz     w4, #0, 88 <sync_icache_al


[1] clang version 9.0.0-1 (tags/RELEASE_900/final)
0000000000000000 <sync_icache_aliases>:
   0:   d503233f        paciasp
   4:   a9be4ff4        stp     x20, x19, [sp, #-32]!
   8:   a9017bfd        stp     x29, x30, [sp, #16]
   c:   910043fd        add     x29, sp, #0x10
  10:   aa0103f4        mov     x20, x1
  14:   aa0003f3        mov     x19, x0
  18:   94000000        bl      0 <_mcount>
  1c:   90000008        adrp    x8, 0 <__icache_flags>
  20:   f9400108        ldr     x8, [x8]
  24:   370000a8        tbnz    w8, #0, 38 <sync_icache_aliases+0x38>


* Re: [PATCH v6 01/18] arm64: cpufeature: Fix meta-capability cpufeature check
  2020-03-06  6:35 ` [PATCH v6 01/18] arm64: cpufeature: Fix meta-capability cpufeature check Amit Daniel Kachhap
@ 2020-03-10 10:59   ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 10:59 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi Amit,

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> Some existing/future meta cpucaps need the presence of individual
> cpucaps to match. Currently the individual cpucaps are checked via an
> array-based flag, and this introduces a dependency on the array entry
> order. This limitation exists only for system-scope cpufeatures.
> 
> This patch introduces an internal helper function (__system_matches_cap)
> to invoke the matching handler for system scope. This helper has to be
> used during a narrow window when:
> - the system-wide safe registers have been set from all the SMP CPUs, and
> - the SYSTEM_FEATURE cpu_hwcaps may not have been set yet.
> 
> Normal users should use the existing cpus_have_{const_}cap() global
> function.
> 
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 0b67156..3818685 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -116,6 +116,8 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
>  
>  static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>  
> +static bool __system_matches_cap(unsigned int n);
> +
>  /*
>   * NOTE: Any changes to the visibility of features should be kept in
>   * sync with the documentation of the CPU feature register ABI.
> @@ -2146,6 +2148,17 @@ bool this_cpu_has_cap(unsigned int n)
>  	return false;
>  }
>  

Nit: you might want to add a comment on top of __system_matches_cap() that
explains why we introduce this function and when it should be used.
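
For instance, something like this above the definition (a suggested wording
only, drawn from the commit message):

	/*
	 * Check whether a capability matches at SYSTEM scope by invoking
	 * its matches() handler directly. Only for use in the narrow
	 * window after the system-wide safe registers have been set from
	 * all the SMP CPUs but before the SYSTEM_FEATURE cpu_hwcaps have
	 * been finalised; everyone else should use cpus_have_{const_}cap().
	 */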

Otherwise looks good to me.

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> +static bool __system_matches_cap(unsigned int n)
> +{
> +	if (n < ARM64_NCAPS) {
> +		const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n];
> +
> +		if (cap)
> +			return cap->matches(cap, SCOPE_SYSTEM);
> +	}
> +	return false;
> +}
> +
>  void cpu_set_feature(unsigned int num)
>  {
>  	WARN_ON(num >= MAX_CPU_FEATURES);
> @@ -2218,7 +2231,7 @@ void __init setup_cpu_features(void)
>  static bool __maybe_unused
>  cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
>  {
> -	return (cpus_have_const_cap(ARM64_HAS_PAN) && !cpus_have_const_cap(ARM64_HAS_UAO));
> +	return (__system_matches_cap(ARM64_HAS_PAN) && !__system_matches_cap(ARM64_HAS_UAO));
>  }
>  
>  static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
> 

-- 
Regards,
Vincenzo


* Re: [PATCH v6 02/18] arm64: cpufeature: add pointer auth meta-capabilities
  2020-03-06  6:35 ` [PATCH v6 02/18] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
@ 2020-03-10 11:18   ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 11:18 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel


On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> To enable pointer auth for the kernel, we're going to need to check for
> the presence of address auth and generic auth using alternative_if. We
> currently have two cpucaps for each, but alternative_if needs to check a
> single cpucap. So define meta-capabilities that are present when either
> of the current two capabilities is present.
> 
> Leave the existing four cpucaps in place, as they are still needed to
> check for mismatched systems where one CPU has the architected algorithm
> but another has the IMP DEF algorithm.
> 
> Note, the meta-capabilities were present before but were removed in
> commit a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth
> CPU caps from 6 to 4") and commit 1e013d06120c ("arm64: cpufeature: Rework
> ptr auth hwcaps using multi_entry_cap_matches"), as they were not needed
> then. Note, unlike before, the current patch checks the cpucap values
> directly, instead of reading the CPU ID register value.
>

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> [Amit: commit message and macro rebase, use __system_matches_cap]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/include/asm/cpucaps.h    |  4 +++-
>  arch/arm64/include/asm/cpufeature.h |  6 ++----
>  arch/arm64/kernel/cpufeature.c      | 25 ++++++++++++++++++++++++-
>  3 files changed, 29 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> index 865e025..72e4e05 100644
> --- a/arch/arm64/include/asm/cpucaps.h
> +++ b/arch/arm64/include/asm/cpucaps.h
> @@ -58,7 +58,9 @@
>  #define ARM64_WORKAROUND_SPECULATIVE_AT_NVHE	48
>  #define ARM64_HAS_E0PD				49
>  #define ARM64_HAS_RNG				50
> +#define ARM64_HAS_ADDRESS_AUTH			51
> +#define ARM64_HAS_GENERIC_AUTH			52
>  
> -#define ARM64_NCAPS				51
> +#define ARM64_NCAPS				53
>  
>  #endif /* __ASM_CPUCAPS_H */
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 2a746b9..0fd1feb 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -590,15 +590,13 @@ static __always_inline bool system_supports_cnp(void)
>  static inline bool system_supports_address_auth(void)
>  {
>  	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> -		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> -		 cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
> +		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
>  }
>  
>  static inline bool system_supports_generic_auth(void)
>  {
>  	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> -		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
> -		 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
> +		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
>  }
>  
>  static inline bool system_uses_irq_prio_masking(void)
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 3818685..b12e386 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1323,6 +1323,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
>  	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
>  				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
>  }
> +
> +static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
> +			     int __unused)
> +{
> +	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> +	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> +}
> +
> +static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
> +			     int __unused)
> +{
> +	return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
> +	       __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
> +}
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
>  #ifdef CONFIG_ARM64_E0PD
> @@ -1600,7 +1614,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.field_pos = ID_AA64ISAR1_APA_SHIFT,
>  		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
>  		.matches = has_cpuid_feature,
> -		.cpu_enable = cpu_enable_address_auth,
>  	},
>  	{
>  		.desc = "Address authentication (IMP DEF algorithm)",
> @@ -1611,6 +1624,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.field_pos = ID_AA64ISAR1_API_SHIFT,
>  		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
>  		.matches = has_cpuid_feature,
> +	},
> +	{
> +		.capability = ARM64_HAS_ADDRESS_AUTH,
> +		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.matches = has_address_auth,
>  		.cpu_enable = cpu_enable_address_auth,
>  	},
>  	{
> @@ -1633,6 +1651,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
>  		.matches = has_cpuid_feature,
>  	},
> +	{
> +		.capability = ARM64_HAS_GENERIC_AUTH,
> +		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.matches = has_generic_auth,
> +	},
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  #ifdef CONFIG_ARM64_PSEUDO_NMI
>  	{
> 

-- 
Regards,
Vincenzo


* Re: [PATCH v6 03/18] arm64: rename ptrauth key structures to be user-specific
  2020-03-06  6:35 ` [PATCH v6 03/18] arm64: rename ptrauth key structures to be user-specific Amit Daniel Kachhap
@ 2020-03-10 11:35   ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 11:35 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> We currently enable ptrauth for userspace, but do not use it within the
> kernel. We're going to enable it for the kernel, and will need to manage
> a separate set of ptrauth keys for the kernel.
> 
> We currently keep all 5 keys in struct ptrauth_keys. However, as the
> kernel will only need to use 1 key, it is a bit wasteful to allocate a
> whole ptrauth_keys struct for every thread.
> 
> Therefore, a subsequent patch will define a separate struct, with only 1
> key, for the kernel. In preparation for that, rename the existing struct
> (and associated macros and functions) to reflect that they are specific
> to userspace.
>

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> [Amit: Re-positioned the patch to reduce the diff]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/include/asm/pointer_auth.h | 12 ++++++------
>  arch/arm64/include/asm/processor.h    |  2 +-
>  arch/arm64/kernel/pointer_auth.c      |  8 ++++----
>  arch/arm64/kernel/ptrace.c            | 16 ++++++++--------
>  4 files changed, 19 insertions(+), 19 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index 7a24bad..799b079 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -22,7 +22,7 @@ struct ptrauth_key {
>   * We give each process its own keys, which are shared by all threads. The keys
>   * are inherited upon fork(), and reinitialised upon exec*().
>   */
> -struct ptrauth_keys {
> +struct ptrauth_keys_user {
>  	struct ptrauth_key apia;
>  	struct ptrauth_key apib;
>  	struct ptrauth_key apda;
> @@ -30,7 +30,7 @@ struct ptrauth_keys {
>  	struct ptrauth_key apga;
>  };
>  
> -static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
> +static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
>  {
>  	if (system_supports_address_auth()) {
>  		get_random_bytes(&keys->apia, sizeof(keys->apia));
> @@ -50,7 +50,7 @@ do {								\
>  	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
>  } while (0)
>  
> -static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
> +static inline void ptrauth_keys_switch_user(struct ptrauth_keys_user *keys)
>  {
>  	if (system_supports_address_auth()) {
>  		__ptrauth_key_install(APIA, keys->apia);
> @@ -80,12 +80,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  #define ptrauth_thread_init_user(tsk)					\
>  do {									\
>  	struct task_struct *__ptiu_tsk = (tsk);				\
> -	ptrauth_keys_init(&__ptiu_tsk->thread.keys_user);		\
> -	ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user);		\
> +	ptrauth_keys_init_user(&__ptiu_tsk->thread.keys_user);		\
> +	ptrauth_keys_switch_user(&__ptiu_tsk->thread.keys_user);		\
>  } while (0)
>  
>  #define ptrauth_thread_switch(tsk)	\
> -	ptrauth_keys_switch(&(tsk)->thread.keys_user)
> +	ptrauth_keys_switch_user(&(tsk)->thread.keys_user)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 5ba6320..496a928 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -146,7 +146,7 @@ struct thread_struct {
>  	unsigned long		fault_code;	/* ESR_EL1 value */
>  	struct debug_info	debug;		/* debugging */
>  #ifdef CONFIG_ARM64_PTR_AUTH
> -	struct ptrauth_keys	keys_user;
> +	struct ptrauth_keys_user	keys_user;
>  #endif
>  };
>  
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index c507b58..af5a638 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -9,7 +9,7 @@
>  
>  int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  {
> -	struct ptrauth_keys *keys = &tsk->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &tsk->thread.keys_user;
>  	unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
>  				      PR_PAC_APDAKEY | PR_PAC_APDBKEY;
>  	unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
> @@ -18,8 +18,8 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  		return -EINVAL;
>  
>  	if (!arg) {
> -		ptrauth_keys_init(keys);
> -		ptrauth_keys_switch(keys);
> +		ptrauth_keys_init_user(keys);
> +		ptrauth_keys_switch_user(keys);
>  		return 0;
>  	}
>  
> @@ -41,7 +41,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  	if (arg & PR_PAC_APGAKEY)
>  		get_random_bytes(&keys->apga, sizeof(keys->apga));
>  
> -	ptrauth_keys_switch(keys);
> +	ptrauth_keys_switch_user(keys);
>  
>  	return 0;
>  }
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index cd6e5fa..b3d3005 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -999,7 +999,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey)
>  }
>  
>  static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
> -				     const struct ptrauth_keys *keys)
> +				     const struct ptrauth_keys_user *keys)
>  {
>  	ukeys->apiakey = pac_key_to_user(&keys->apia);
>  	ukeys->apibkey = pac_key_to_user(&keys->apib);
> @@ -1007,7 +1007,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys,
>  	ukeys->apdbkey = pac_key_to_user(&keys->apdb);
>  }
>  
> -static void pac_address_keys_from_user(struct ptrauth_keys *keys,
> +static void pac_address_keys_from_user(struct ptrauth_keys_user *keys,
>  				       const struct user_pac_address_keys *ukeys)
>  {
>  	keys->apia = pac_key_from_user(ukeys->apiakey);
> @@ -1021,7 +1021,7 @@ static int pac_address_keys_get(struct task_struct *target,
>  				unsigned int pos, unsigned int count,
>  				void *kbuf, void __user *ubuf)
>  {
> -	struct ptrauth_keys *keys = &target->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &target->thread.keys_user;
>  	struct user_pac_address_keys user_keys;
>  
>  	if (!system_supports_address_auth())
> @@ -1038,7 +1038,7 @@ static int pac_address_keys_set(struct task_struct *target,
>  				unsigned int pos, unsigned int count,
>  				const void *kbuf, const void __user *ubuf)
>  {
> -	struct ptrauth_keys *keys = &target->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &target->thread.keys_user;
>  	struct user_pac_address_keys user_keys;
>  	int ret;
>  
> @@ -1056,12 +1056,12 @@ static int pac_address_keys_set(struct task_struct *target,
>  }
>  
>  static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys,
> -				     const struct ptrauth_keys *keys)
> +				     const struct ptrauth_keys_user *keys)
>  {
>  	ukeys->apgakey = pac_key_to_user(&keys->apga);
>  }
>  
> -static void pac_generic_keys_from_user(struct ptrauth_keys *keys,
> +static void pac_generic_keys_from_user(struct ptrauth_keys_user *keys,
>  				       const struct user_pac_generic_keys *ukeys)
>  {
>  	keys->apga = pac_key_from_user(ukeys->apgakey);
> @@ -1072,7 +1072,7 @@ static int pac_generic_keys_get(struct task_struct *target,
>  				unsigned int pos, unsigned int count,
>  				void *kbuf, void __user *ubuf)
>  {
> -	struct ptrauth_keys *keys = &target->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &target->thread.keys_user;
>  	struct user_pac_generic_keys user_keys;
>  
>  	if (!system_supports_generic_auth())
> @@ -1089,7 +1089,7 @@ static int pac_generic_keys_set(struct task_struct *target,
>  				unsigned int pos, unsigned int count,
>  				const void *kbuf, const void __user *ubuf)
>  {
> -	struct ptrauth_keys *keys = &target->thread.keys_user;
> +	struct ptrauth_keys_user *keys = &target->thread.keys_user;
>  	struct user_pac_generic_keys user_keys;
>  	int ret;
>  
> 

-- 
Regards,
Vincenzo


* Re: [PATCH v6 04/18] arm64: install user ptrauth keys at kernel exit time
  2020-03-06 19:07   ` James Morse
@ 2020-03-10 11:48     ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 11:48 UTC (permalink / raw)
  To: James Morse, Amit Daniel Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel,
	linux-arm-kernel


On 3/6/20 7:07 PM, James Morse wrote:
> Hi Amit,
> 
> On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
>> From: Kristina Martsenko <kristina.martsenko@arm.com>
>>
>> As we're going to enable pointer auth within the kernel and use a
>> different APIAKey for the kernel itself, move the user APIAKey
>> switch to EL0 exception return.
>>
>> The other 4 keys could remain switched during task switch, but are also
>> moved to keep things consistent.
> 
>> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
>> new file mode 100644
>> index 0000000..f820a13
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
>> @@ -0,0 +1,49 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +#ifndef __ASM_ASM_POINTER_AUTH_H
>> +#define __ASM_ASM_POINTER_AUTH_H
>> +
>> +#include <asm/alternative.h>
>> +#include <asm/asm-offsets.h>
>> +#include <asm/cpufeature.h>
>> +#include <asm/sysreg.h>
>> +
>> +#ifdef CONFIG_ARM64_PTR_AUTH
>> +/*
>> + * thread.keys_user.ap* as offset exceeds the #imm offset range
>> + * so use the base value of ldp as thread.keys_user and offset as
> 
>> + * keys_user.ap*.
> 
> (Nit: thread.keys_user.ap*)
> 
>> + */
>> +	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
> 
> Reviewed-by: James Morse <james.morse@arm.com>
>

Apart from what James already mentioned:

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> 
> Thanks,
> 
> James
> 

-- 
Regards,
Vincenzo


* Re: [PATCH v6 05/18] arm64: create macro to park cpu in an infinite loop
  2020-03-06  6:35 ` [PATCH v6 05/18] arm64: create macro to park cpu in an infinite loop Amit Daniel Kachhap
@ 2020-03-10 12:02   ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 12:02 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> A macro early_park_cpu is added to park the faulted cpu in an infinite
> loop. Currently, this macro is used in two places and may be
> reused in the future.
> 

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/kernel/head.S | 25 +++++++++++++------------
>  1 file changed, 13 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 989b194..3d18163 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -761,6 +761,17 @@ ENDPROC(__secondary_too_slow)
>  	.endm
>  
>  /*
> + * Macro to park the cpu in an infinite loop.
> + */
> +	.macro	early_park_cpu status
> +	update_early_cpu_boot_status \status | CPU_STUCK_IN_KERNEL, x1, x2
> +.Lepc_\@:
> +	wfe
> +	wfi
> +	b	.Lepc_\@
> +	.endm
> +
> +/*
>   * Enable the MMU.
>   *
>   *  x0  = SCTLR_EL1 value for turning on the MMU.
> @@ -808,24 +819,14 @@ ENTRY(__cpu_secondary_check52bitva)
>  	and	x0, x0, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
>  	cbnz	x0, 2f
>  
> -	update_early_cpu_boot_status \
> -		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_52_BIT_VA, x0, x1
> -1:	wfe
> -	wfi
> -	b	1b
> -
> +	early_park_cpu CPU_STUCK_REASON_52_BIT_VA
>  #endif
>  2:	ret
>  ENDPROC(__cpu_secondary_check52bitva)
>  
>  __no_granule_support:
>  	/* Indicate that this CPU can't boot and is stuck in the kernel */
> -	update_early_cpu_boot_status \
> -		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_GRAN, x1, x2
> -1:
> -	wfe
> -	wfi
> -	b	1b
> +	early_park_cpu CPU_STUCK_REASON_NO_GRAN
>  ENDPROC(__no_granule_support)
>  
>  #ifdef CONFIG_RELOCATABLE
> 

-- 
Regards,
Vincenzo


* Re: [PATCH v6 06/18] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  2020-03-06  6:35 ` [PATCH v6 06/18] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup Amit Daniel Kachhap
  2020-03-06 19:07   ` James Morse
@ 2020-03-10 12:14   ` Vincenzo Frascino
  2020-03-11  9:28     ` Amit Kachhap
  1 sibling, 1 reply; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 12:14 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi Amit,

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> This patch allows __cpu_setup to be invoked with one of these flags,
> ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_SECONDARY or ARM64_CPU_RUNTIME.
> This is required as some cpufeatures need different handling during
> different scenarios.
> 

I could not find any explanation in this patch on what these flags stand for.
Could you please add one? Maybe near where you define them.
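
For example, something along these lines next to the definitions (a suggested
wording only; the meanings are inferred from the callers in the hunks below):

	/*
	 * Options for __cpu_setup:
	 *  ARM64_CPU_BOOT_PRIMARY   - boot CPU, cold-boot path (stext)
	 *  ARM64_CPU_BOOT_SECONDARY - secondary CPU bring-up (secondary_startup)
	 *  ARM64_CPU_RUNTIME        - CPU coming back up at runtime (cpu_resume)
	 */
	#define ARM64_CPU_BOOT_PRIMARY		(1)
	#define ARM64_CPU_BOOT_SECONDARY	(2)
	#define ARM64_CPU_RUNTIME		(3)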

With this:

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> The input parameter in x0 is preserved till the end to be used inside
> this function.
> 
> There should be no functional change with this patch; it is useful
> for the subsequent ptrauth patch, which utilizes it. Some upcoming
> arm cpufeatures can also utilize these flags.
> 
> Suggested-by: James Morse <james.morse@arm.com>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/include/asm/smp.h |  5 +++++
>  arch/arm64/kernel/head.S     |  2 ++
>  arch/arm64/kernel/sleep.S    |  2 ++
>  arch/arm64/mm/proc.S         | 26 +++++++++++++++-----------
>  4 files changed, 24 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
> index a0c8a0b..8159000 100644
> --- a/arch/arm64/include/asm/smp.h
> +++ b/arch/arm64/include/asm/smp.h
> @@ -23,6 +23,11 @@
>  #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
>  #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)
>  
> +/* Options for __cpu_setup */
> +#define ARM64_CPU_BOOT_PRIMARY		(1)
> +#define ARM64_CPU_BOOT_SECONDARY	(2)
> +#define ARM64_CPU_RUNTIME		(3)
> +
>  #ifndef __ASSEMBLY__
>  
>  #include <asm/percpu.h>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 3d18163..5a7ce15 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -118,6 +118,7 @@ ENTRY(stext)
>  	 * On return, the CPU will be ready for the MMU to be turned on and
>  	 * the TCR will have been set.
>  	 */
> +	mov	x0, #ARM64_CPU_BOOT_PRIMARY
>  	bl	__cpu_setup			// initialise processor
>  	b	__primary_switch
>  ENDPROC(stext)
> @@ -712,6 +713,7 @@ secondary_startup:
>  	 * Common entry point for secondary CPUs.
>  	 */
>  	bl	__cpu_secondary_check52bitva
> +	mov	x0, #ARM64_CPU_BOOT_SECONDARY
>  	bl	__cpu_setup			// initialise processor
>  	adrp	x1, swapper_pg_dir
>  	bl	__enable_mmu
> diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
> index f5b04dd..7b2f2e6 100644
> --- a/arch/arm64/kernel/sleep.S
> +++ b/arch/arm64/kernel/sleep.S
> @@ -3,6 +3,7 @@
>  #include <linux/linkage.h>
>  #include <asm/asm-offsets.h>
>  #include <asm/assembler.h>
> +#include <asm/smp.h>
>  
>  	.text
>  /*
> @@ -99,6 +100,7 @@ ENDPROC(__cpu_suspend_enter)
>  	.pushsection ".idmap.text", "awx"
>  ENTRY(cpu_resume)
>  	bl	el2_setup		// if in EL2 drop to EL1 cleanly
> +	mov	x0, #ARM64_CPU_RUNTIME
>  	bl	__cpu_setup
>  	/* enable the MMU early - so we can access sleep_save_stash by va */
>  	adrp	x1, swapper_pg_dir
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index aafed69..ea0db17 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -408,31 +408,31 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
>  /*
>   *	__cpu_setup
>   *
> - *	Initialise the processor for turning the MMU on.  Return in x0 the
> - *	value of the SCTLR_EL1 register.
> + *	Initialise the processor for turning the MMU on.
> + *
> + * Input:
> + *	x0 with a flag ARM64_CPU_BOOT_PRIMARY/ARM64_CPU_BOOT_SECONDARY/ARM64_CPU_RUNTIME.
> + * Output:
> + *	Return in x0 the value of the SCTLR_EL1 register.
>   */
>  	.pushsection ".idmap.text", "awx"
>  SYM_FUNC_START(__cpu_setup)
>  	tlbi	vmalle1				// Invalidate local TLB
>  	dsb	nsh
>  
> -	mov	x0, #3 << 20
> -	msr	cpacr_el1, x0			// Enable FP/ASIMD
> -	mov	x0, #1 << 12			// Reset mdscr_el1 and disable
> -	msr	mdscr_el1, x0			// access to the DCC from EL0
> +	mov	x1, #3 << 20
> +	msr	cpacr_el1, x1			// Enable FP/ASIMD
> +	mov	x1, #1 << 12			// Reset mdscr_el1 and disable
> +	msr	mdscr_el1, x1			// access to the DCC from EL0
>  	isb					// Unmask debug exceptions now,
>  	enable_dbg				// since this is per-cpu
> -	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
> +	reset_pmuserenr_el0 x1			// Disable PMU access from EL0
>  	/*
>  	 * Memory region attributes
>  	 */
>  	mov_q	x5, MAIR_EL1_SET
>  	msr	mair_el1, x5
>  	/*
> -	 * Prepare SCTLR
> -	 */
> -	mov_q	x0, SCTLR_EL1_SET
> -	/*
>  	 * Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for
>  	 * both user and kernel.
>  	 */
> @@ -468,5 +468,9 @@ SYM_FUNC_START(__cpu_setup)
>  1:
>  #endif	/* CONFIG_ARM64_HW_AFDBM */
>  	msr	tcr_el1, x10
> +	/*
> +	 * Prepare SCTLR
> +	 */
> +	mov_q	x0, SCTLR_EL1_SET
>  	ret					// return to head.S
>  SYM_FUNC_END(__cpu_setup)
> 

-- 
Regards,
Vincenzo


* Re: [PATCH v6 07/18] arm64: cpufeature: Move cpu capability helpers inside C file
  2020-03-06  6:35 ` [PATCH v6 07/18] arm64: cpufeature: Move cpu capability helpers inside C file Amit Daniel Kachhap
@ 2020-03-10 12:20   ` Vincenzo Frascino
  2020-03-10 12:53     ` Amit Kachhap
  0 siblings, 1 reply; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 12:20 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi Amit,

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:

[...]

>  
> -static inline bool
> -cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
> -{
> -	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
> -}
> -
> -static inline bool
> -cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
> -{
> -	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
> -}
> -
>  /*
>   * Generic helper for handling capabilties with multiple (match,enable) pairs
>   * of call backs, sharing the same capability bit.
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index b12e386..865dce6 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1363,6 +1363,19 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
>  }
>  #endif
>  
> +/* Internal helper functions to match cpu capability type */
> +static bool
> +cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
> +{
> +	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
> +}
> +
> +static bool
> +cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
> +{
> +	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
> +}
> +
>  static const struct arm64_cpu_capabilities arm64_features[] = {
>  	{
>  		.desc = "GIC system register CPU interface",
> 

It seems that the signature of the functions above changed during the migration.
In particular, you dropped "inline". Is there any specific reason?

-- 
Regards,
Vincenzo


* Re: [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses
  2020-03-09 19:03   ` James Morse
@ 2020-03-10 12:28     ` Amit Kachhap
  2020-03-10 17:37       ` James Morse
  0 siblings, 1 reply; 67+ messages in thread
From: Amit Kachhap @ 2020-03-10 12:28 UTC (permalink / raw)
  To: James Morse
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi James,

On 3/10/20 12:33 AM, James Morse wrote:
> Hi Amit,
> 
> On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When we enable pointer authentication in the kernel, LR values saved to
>> the stack will have a PAC which we must strip in order to retrieve the
>> real return address.
>>
>> Strip PACs when unwinding the stack in order to account for this.
> 
> This patch had me looking at the wider pointer-auth + ftrace interaction...

Thanks for your effort.

> 
> 
>> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
>> index a336cb1..b479df7 100644
>> --- a/arch/arm64/kernel/stacktrace.c
>> +++ b/arch/arm64/kernel/stacktrace.c
>> @@ -14,6 +14,7 @@
>>   #include <linux/stacktrace.h>
>>   
>>   #include <asm/irq.h>
>> +#include <asm/pointer_auth.h>
>>   #include <asm/stack_pointer.h>
>>   #include <asm/stacktrace.h>
>>   
>> @@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
> 
> There is an earlier reader of frame->pc:
> | #ifdef CONFIG_FUNCTION_GRAPH_TRACER
> | 	if (tsk->ret_stack &&
> | 			(frame->pc == (unsigned long)return_to_handler)) {
> 
> 
> Which leads down the rat-hole of: does this need ptrauth_strip_insn_pac()?
> 
> The version of GCC on my desktop supports patchable-function-entry: the function preamble
> has two nops for use by ftrace[0]. This means if prepare_ftrace_return() re-writes the
> saved LR, it does so before the caller paciasp's it.
> 
> I think that means if you stack-trace from a function that had been hooked by the
> function_graph_tracer, you will see the LR with a PAC, meaning the above == won't match.
> 
> 
> The version of LLVM on my desktop, however, doesn't support patchable-function-entry; it
> uses _mcount() to do the ftrace stuff[1]. Here prepare_ftrace_return() overwrites a
> paciasp'd LR with one that isn't, which will fail.
> 
> 
> Could the ptrauth_strip_insn_pac() call move above the CONFIG_FUNCTION_GRAPH_TRACER block,

This may not be required, as we never explicitly sign return_to_handler,
so with patchable-function-entry frame->pc may hold it without any PAC
bits.

While testing patchable-function-entry, I noticed something about the
WARN messages:

[  541.030265] Hardware name: Foundation-v8A (DT)
[  541.033500] pstate: 60400009 (nZCv daif +PAN -UAO)
[  541.036880] pc : change_pac_parameters+0x40/0x4c
[  541.040279] lr : return_to_handler+0x0/0x3c
[  541.043373] sp : ffff8000126e3d00

Here lr may need some logic to display the correct return address, although
that is unrelated to this ptrauth series. (arch/arm64/kernel/process.c +264)

> and could we add something like:
> |	depends on (!FTRACE || HAVE_DYNAMIC_FTRACE_WITH_REGS)
> 
> to the Kconfig to prevent both FTRACE and PTR_AUTH being enabled unless the compiler has
> support for patchable-function-entry?

Yes, this is a good condition to have. I feel the condition below is more
suitable, as the issue only exists with FUNCTION_GRAPH_TRACER:

depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)

Thanks,
Amit Daniel

> 
> 
>>   	}
>>   #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
>>   
>> +	frame->pc = ptrauth_strip_insn_pac(frame->pc);
>> +
>>   	/*
>>   	 * Frames created upon entry from EL0 have NULL FP and PC values, so
>>   	 * don't bother reporting these. Frames created by __noreturn functions
> 
> 
> Thanks,
> 
> James
> 
> [0] gcc (Debian 9.2.1-28) 9.2.1 20200203
> 0000000000000048 <sync_icache_aliases>:
>    48:   d503201f        nop
>    4c:   d503201f        nop
>    50:   90000002        adrp    x2, 0 <__icache_flags>
>    54:   d503233f        paciasp
>    58:   a9bf7bfd        stp     x29, x30, [sp, #-16]!
>    5c:   910003fd        mov     x29, sp
>    60:   f9400044        ldr     x4, [x2]
>    64:   36000124        tbz     w4, #0, 88 <sync_icache_al
> 
> 
> [1] clang version 9.0.0-1 (tags/RELEASE_900/final)
> 0000000000000000 <sync_icache_aliases>:
>     0:   d503233f        paciasp
>     4:   a9be4ff4        stp     x20, x19, [sp, #-32]!
>     8:   a9017bfd        stp     x29, x30, [sp, #16]
>     c:   910043fd        add     x29, sp, #0x10
>    10:   aa0103f4        mov     x20, x1
>    14:   aa0003f3        mov     x19, x0
>    18:   94000000        bl      0 <_mcount>
>    1c:   90000008        adrp    x8, 0 <__icache_flags>
>    20:   f9400108        ldr     x8, [x8]
>    24:   370000a8        tbnz    w8, #0, 38 <sync_icache_aliases+0x38>
> 


* Re: [PATCH v6 08/18] arm64: cpufeature: handle conflicts based on capability
  2020-03-06  6:35 ` [PATCH v6 08/18] arm64: cpufeature: handle conflicts based on capability Amit Daniel Kachhap
@ 2020-03-10 12:31   ` Vincenzo Frascino
  2020-03-11 11:03     ` Catalin Marinas
  0 siblings, 1 reply; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 12:31 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi Amit,

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:

[...]

>  
> +static bool
> +cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
> +{
> +	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
> +}
> +

If there is no specific reason in the previous patch for changing the signature,
could you please make this function "inline" as well, for symmetry with the others?

[...]

-- 
Regards,
Vincenzo


* Re: [PATCH v6 07/18] arm64: cpufeature: Move cpu capability helpers inside C file
  2020-03-10 12:20   ` Vincenzo Frascino
@ 2020-03-10 12:53     ` Amit Kachhap
  2020-03-11 10:50       ` Catalin Marinas
  0 siblings, 1 reply; 67+ messages in thread
From: Amit Kachhap @ 2020-03-10 12:53 UTC (permalink / raw)
  To: Vincenzo Frascino, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi Vincenzo,

On 3/10/20 5:50 PM, Vincenzo Frascino wrote:
> Hi Amit,
> 
> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> 
> [...]
> 
>>   
>> -static inline bool
>> -cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
>> -{
>> -	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
>> -}
>> -
>> -static inline bool
>> -cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
>> -{
>> -	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
>> -}
>> -
>>   /*
>>    * Generic helper for handling capabilties with multiple (match,enable) pairs
>>    * of call backs, sharing the same capability bit.
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index b12e386..865dce6 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -1363,6 +1363,19 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
>>   }
>>   #endif
>>   
>> +/* Internal helper functions to match cpu capability type */
>> +static bool
>> +cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
>> +{
>> +	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
>> +}
>> +
>> +static bool
>> +cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
>> +{
>> +	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
>> +}
>> +
>>   static const struct arm64_cpu_capabilities arm64_features[] = {
>>   	{
>>   		.desc = "GIC system register CPU interface",
>>
> 
> It seems that the signature of the functions above changed during the migration.
> In particular, you dropped "inline". Is there any specific reason?

Earlier Catalin pointed me here [1]. I guess it's not a hot-path function.

[1]: https://www.spinics.net/lists/arm-kernel/msg789696.html

Cheers,
Amit
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 10/18] arm64: initialize and switch ptrauth kernel keys
  2020-03-06  6:35 ` [PATCH v6 10/18] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
@ 2020-03-10 15:07   ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 15:07 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> Set up keys to use pointer authentication within the kernel. The kernel
> will be compiled with APIAKey instructions, the other keys are currently
> unused. Each task is given its own APIAKey, which is initialized during
> fork. The key is changed during context switch and on kernel entry from
> EL0.
> 
> The keys for idle threads need to be set before calling any C functions,
> because it is not possible to enter and exit a function with different
> keys.
> 
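
(To illustrate the constraint in the last quoted paragraph — an entry/exit
pair signed with different keys cannot authenticate; a comment sketch, not
code from the patch:)

	/*
	 * paciasp	<- prologue signs LR with the APIA key live at entry
	 * ...		<- if the key were switched mid-function ...
	 * autiasp	<- ... the epilogue would authenticate with the new key
	 * ret		<- and the resulting corrupted LR would fault
	 */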

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> Reviewed-by: Kees Cook <keescook@chromium.org>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> [Amit: Modified secondary cores key structure, comments]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/include/asm/asm_pointer_auth.h | 14 ++++++++++++++
>  arch/arm64/include/asm/pointer_auth.h     | 13 +++++++++++++
>  arch/arm64/include/asm/processor.h        |  1 +
>  arch/arm64/include/asm/smp.h              |  4 ++++
>  arch/arm64/kernel/asm-offsets.c           |  5 +++++
>  arch/arm64/kernel/entry.S                 |  3 +++
>  arch/arm64/kernel/process.c               |  2 ++
>  arch/arm64/kernel/smp.c                   |  8 ++++++++
>  arch/arm64/mm/proc.S                      | 12 ++++++++++++
>  9 files changed, 62 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> index f820a13..4152afe 100644
> --- a/arch/arm64/include/asm/asm_pointer_auth.h
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -39,11 +39,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH
>  alternative_else_nop_endif
>  	.endm
>  
> +	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	mov	\tmp1, #THREAD_KEYS_KERNEL
> +	add	\tmp1, \tsk, \tmp1
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
> +	msr_s	SYS_APIAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APIAKEYHI_EL1, \tmp3
> +	isb
> +alternative_else_nop_endif
> +	.endm
> +
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  
>  	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
>  	.endm
>  
> +	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
> +	.endm
> +
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
>  #endif /* __ASM_ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index dabe026..aa956ca 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -30,6 +30,10 @@ struct ptrauth_keys_user {
>  	struct ptrauth_key apga;
>  };
>  
> +struct ptrauth_keys_kernel {
> +	struct ptrauth_key apia;
> +};
> +
>  static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
>  {
>  	if (system_supports_address_auth()) {
> @@ -50,6 +54,12 @@ do {								\
>  	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
>  } while (0)
>  
> +static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
> +{
> +	if (system_supports_address_auth())
> +		get_random_bytes(&keys->apia, sizeof(keys->apia));
> +}
> +
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>  
>  /*
> @@ -66,11 +76,14 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  
>  #define ptrauth_thread_init_user(tsk)					\
>  	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
> +#define ptrauth_thread_init_kernel(tsk)					\
> +	ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_thread_init_user(tsk)
> +#define ptrauth_thread_init_kernel(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 496a928..4c77da5 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -147,6 +147,7 @@ struct thread_struct {
>  	struct debug_info	debug;		/* debugging */
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	struct ptrauth_keys_user	keys_user;
> +	struct ptrauth_keys_kernel	keys_kernel;
>  #endif
>  };
>  
> diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
> index 5334d69..4e92150 100644
> --- a/arch/arm64/include/asm/smp.h
> +++ b/arch/arm64/include/asm/smp.h
> @@ -36,6 +36,7 @@
>  #include <linux/threads.h>
>  #include <linux/cpumask.h>
>  #include <linux/thread_info.h>
> +#include <asm/pointer_auth.h>
>  
>  DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
>  
> @@ -93,6 +94,9 @@ asmlinkage void secondary_start_kernel(void);
>  struct secondary_data {
>  	void *stack;
>  	struct task_struct *task;
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +	struct ptrauth_keys_kernel ptrauth_key;
> +#endif
>  	long status;
>  };
>  
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7b1ea2a..9981a0a 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -42,6 +42,7 @@ int main(void)
>    DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
>  #ifdef CONFIG_ARM64_PTR_AUTH
>    DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
> +  DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
>  #endif
>    BLANK();
>    DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
> @@ -91,6 +92,9 @@ int main(void)
>    BLANK();
>    DEFINE(CPU_BOOT_STACK,	offsetof(struct secondary_data, stack));
>    DEFINE(CPU_BOOT_TASK,		offsetof(struct secondary_data, task));
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +  DEFINE(CPU_BOOT_PTRAUTH_KEY,	offsetof(struct secondary_data, ptrauth_key));
> +#endif
>    BLANK();
>  #ifdef CONFIG_KVM_ARM_HOST
>    DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
> @@ -137,6 +141,7 @@ int main(void)
>    DEFINE(PTRAUTH_USER_KEY_APDA,		offsetof(struct ptrauth_keys_user, apda));
>    DEFINE(PTRAUTH_USER_KEY_APDB,		offsetof(struct ptrauth_keys_user, apdb));
>    DEFINE(PTRAUTH_USER_KEY_APGA,		offsetof(struct ptrauth_keys_user, apga));
> +  DEFINE(PTRAUTH_KERNEL_KEY_APIA,	offsetof(struct ptrauth_keys_kernel, apia));
>    BLANK();
>  #endif
>    return 0;
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 684e475..3dad2d0 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -178,6 +178,7 @@ alternative_cb_end
>  
>  	apply_ssbd 1, x22, x23
>  
> +	ptrauth_keys_install_kernel tsk, x20, x22, x23
>  	.else
>  	add	x21, sp, #S_FRAME_SIZE
>  	get_current_task tsk
> @@ -342,6 +343,7 @@ alternative_else_nop_endif
>  	msr	cntkctl_el1, x1
>  4:
>  #endif
> +	/* No kernel C function calls after this as user keys are set. */
>  	ptrauth_keys_install_user tsk, x0, x1, x2
>  
>  	apply_ssbd 0, x0, x1
> @@ -898,6 +900,7 @@ ENTRY(cpu_switch_to)
>  	ldr	lr, [x8]
>  	mov	sp, x9
>  	msr	sp_el0, x1
> +	ptrauth_keys_install_kernel x1, x8, x9, x10
>  	ret
>  ENDPROC(cpu_switch_to)
>  NOKPROBE(cpu_switch_to)
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 6140e79..7db0302 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -376,6 +376,8 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long stack_start,
>  	 */
>  	fpsimd_flush_task_state(p);
>  
> +	ptrauth_thread_init_kernel(p);
> +
>  	if (likely(!(p->flags & PF_KTHREAD))) {
>  		*childregs = *current_pt_regs();
>  		childregs->regs[0] = 0;
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index f2761a9..3fa0fbf 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -112,6 +112,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
>  	 */
>  	secondary_data.task = idle;
>  	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
> +#if defined(CONFIG_ARM64_PTR_AUTH)
> +	secondary_data.ptrauth_key.apia.lo = idle->thread.keys_kernel.apia.lo;
> +	secondary_data.ptrauth_key.apia.hi = idle->thread.keys_kernel.apia.hi;
> +#endif
>  	update_cpu_boot_status(CPU_MMU_OFF);
>  	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
>  
> @@ -138,6 +142,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
>  
>  	secondary_data.task = NULL;
>  	secondary_data.stack = NULL;
> +#if defined(CONFIG_ARM64_PTR_AUTH)
> +	secondary_data.ptrauth_key.apia.lo = 0;
> +	secondary_data.ptrauth_key.apia.hi = 0;
> +#endif
>  	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
>  	status = READ_ONCE(secondary_data.status);
>  	if (ret && status) {
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 4cf19a2..5a11a89 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -485,6 +485,10 @@ SYM_FUNC_START(__cpu_setup)
>  	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
>  	cbz	x2, 3f
>  
> +	/*
> +	 * The primary cpu keys are reset here and can be
> +	 * re-initialised with some proper values later.
> +	 */
>  	msr_s	SYS_APIAKEYLO_EL1, xzr
>  	msr_s	SYS_APIAKEYHI_EL1, xzr
>  
> @@ -497,6 +501,14 @@ alternative_if_not ARM64_HAS_ADDRESS_AUTH
>  	b	3f
>  alternative_else_nop_endif
>  
> +	/* Install ptrauth key for secondary cpus */
> +	adr_l	x2, secondary_data
> +	ldr	x3, [x2, #CPU_BOOT_TASK]	// get secondary_data.task
> +	cbz	x3, 2f				// check for slow booting cpus
> +	ldp	x3, x4, [x2, #CPU_BOOT_PTRAUTH_KEY]
> +	msr_s	SYS_APIAKEYLO_EL1, x3
> +	msr_s	SYS_APIAKEYHI_EL1, x4
> +
>  2:	/* Enable ptrauth instructions */
>  	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
>  		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
> 

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 11/18] arm64: initialize ptrauth keys for kernel booting task
  2020-03-06  6:35 ` [PATCH v6 11/18] arm64: initialize ptrauth keys for kernel booting task Amit Daniel Kachhap
@ 2020-03-10 15:09   ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 15:09 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> This patch uses the existing boot_init_stack_canary arch function
> to initialize the ptrauth keys for the booting task in the primary
> core. The requirement here is that it should be always inline and
> the caller must never return.
> 
> As pointer authentication also detects a subset of stack corruption,
> it makes sense to place this code here.
> 
> Both pointer authentication and stack canary codes are protected
> by their respective config option.
>

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> Suggested-by: Ard Biesheuvel <ardb@kernel.org>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/include/asm/pointer_auth.h   | 11 ++++++++++-
>  arch/arm64/include/asm/stackprotector.h |  5 +++++
>  include/linux/stackprotector.h          |  2 +-
>  3 files changed, 16 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index aa956ca..833d3f9 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -54,12 +54,18 @@ do {								\
>  	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
>  } while (0)
>  
> -static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
> +static __always_inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
>  {
>  	if (system_supports_address_auth())
>  		get_random_bytes(&keys->apia, sizeof(keys->apia));
>  }
>  
> +static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys)
> +{
> +	if (system_supports_address_auth())
> +		__ptrauth_key_install(APIA, keys->apia);
> +}
> +
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>  
>  /*
> @@ -78,12 +84,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  	ptrauth_keys_init_user(&(tsk)->thread.keys_user)
>  #define ptrauth_thread_init_kernel(tsk)					\
>  	ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
> +#define ptrauth_thread_switch_kernel(tsk)				\
> +	ptrauth_keys_switch_kernel(&(tsk)->thread.keys_kernel)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_thread_init_user(tsk)
>  #define ptrauth_thread_init_kernel(tsk)
> +#define ptrauth_thread_switch_kernel(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/stackprotector.h b/arch/arm64/include/asm/stackprotector.h
> index 5884a2b..7263e0b 100644
> --- a/arch/arm64/include/asm/stackprotector.h
> +++ b/arch/arm64/include/asm/stackprotector.h
> @@ -15,6 +15,7 @@
>  
>  #include <linux/random.h>
>  #include <linux/version.h>
> +#include <asm/pointer_auth.h>
>  
>  extern unsigned long __stack_chk_guard;
>  
> @@ -26,6 +27,7 @@ extern unsigned long __stack_chk_guard;
>   */
>  static __always_inline void boot_init_stack_canary(void)
>  {
> +#if defined(CONFIG_STACKPROTECTOR)
>  	unsigned long canary;
>  
>  	/* Try to get a semi random initial value. */
> @@ -36,6 +38,9 @@ static __always_inline void boot_init_stack_canary(void)
>  	current->stack_canary = canary;
>  	if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK))
>  		__stack_chk_guard = current->stack_canary;
> +#endif
> +	ptrauth_thread_init_kernel(current);
> +	ptrauth_thread_switch_kernel(current);
>  }
>  
>  #endif	/* _ASM_STACKPROTECTOR_H */
> diff --git a/include/linux/stackprotector.h b/include/linux/stackprotector.h
> index 6b792d0..4c678c4 100644
> --- a/include/linux/stackprotector.h
> +++ b/include/linux/stackprotector.h
> @@ -6,7 +6,7 @@
>  #include <linux/sched.h>
>  #include <linux/random.h>
>  
> -#ifdef CONFIG_STACKPROTECTOR
> +#if defined(CONFIG_STACKPROTECTOR) || defined(CONFIG_ARM64_PTR_AUTH)
>  # include <asm/stackprotector.h>
>  #else
>  static inline void boot_init_stack_canary(void)
> 
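
(For context, the call site relied upon here is in generic code — an
illustrative excerpt, quoted from memory rather than from this patch:)

	/* init/main.c */
	asmlinkage __visible void __init start_kernel(void)
	{
		...
		/* Must be __always_inline and the caller must never return:
		 * the ptrauth keys change inside, so returning from here
		 * would authenticate the LR with the wrong key. */
		boot_init_stack_canary();
		...
	}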

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 14/18] arm64: __show_regs: strip PAC from lr in printk
  2020-03-06  6:35 ` [PATCH v6 14/18] arm64: __show_regs: strip PAC from lr in printk Amit Daniel Kachhap
@ 2020-03-10 15:11   ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 15:11 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> lr is printed with %pS which will try to find an entry in kallsyms.
> After enabling pointer authentication, this match will fail due to
> PAC present in the lr.
> 
> Strip PAC from the lr to display the correct symbol name.
>

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> Suggested-by: James Morse <james.morse@arm.com>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/kernel/process.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 7db0302..cacae29 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -262,7 +262,7 @@ void __show_regs(struct pt_regs *regs)
>  
>  	if (!user_mode(regs)) {
>  		printk("pc : %pS\n", (void *)regs->pc);
> -		printk("lr : %pS\n", (void *)lr);
> +		printk("lr : %pS\n", (void *)ptrauth_strip_insn_pac(lr));
>  	} else {
>  		printk("pc : %016llx\n", regs->pc);
>  		printk("lr : %016llx\n", lr);
> 
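
(Background on the helper: a PAC on a kernel pointer occupies the otherwise
sign-extended upper bits, so stripping amounts to forcing those bits back to
1. A rough sketch assuming 48-bit VAs — hypothetical, the real helper derives
the mask from the configured VA size:)

	#include <linux/bits.h>

	/* Illustration only, not the kernel's ptrauth_strip_insn_pac(). */
	static inline unsigned long strip_kernel_pac(unsigned long ptr)
	{
		return ptr | GENMASK(63, 48);	/* restore the PAC field to ones */
	}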

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 15/18] arm64: suspend: restore the kernel ptrauth keys
  2020-03-06  6:35 ` [PATCH v6 15/18] arm64: suspend: restore the kernel ptrauth keys Amit Daniel Kachhap
@ 2020-03-10 15:18   ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 15:18 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> This patch restores the kernel keys from current task during cpu resume
> after the mmu is turned on and ptrauth is enabled.
> 
> A flag is added in macro ptrauth_keys_install_kernel to check if isb
> instruction needs to executed.
>

Nit: s/needs to executed/needs to be executed/

With this:

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v5:
>  * Moved ptrauth_keys_install_kernel inside function cpu_do_resume.
>  * Added a flag in ptrauth_keys_install_kernel to provide options for isb
>    instruction.
> 
>  arch/arm64/include/asm/asm_pointer_auth.h | 6 ++++--
>  arch/arm64/kernel/entry.S                 | 4 ++--
>  arch/arm64/mm/proc.S                      | 2 ++
>  3 files changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> index 4152afe..899a007 100644
> --- a/arch/arm64/include/asm/asm_pointer_auth.h
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -39,14 +39,16 @@ alternative_if ARM64_HAS_GENERIC_AUTH
>  alternative_else_nop_endif
>  	.endm
>  
> -	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
> +	.macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
>  alternative_if ARM64_HAS_ADDRESS_AUTH
>  	mov	\tmp1, #THREAD_KEYS_KERNEL
>  	add	\tmp1, \tsk, \tmp1
>  	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
>  	msr_s	SYS_APIAKEYLO_EL1, \tmp2
>  	msr_s	SYS_APIAKEYHI_EL1, \tmp3
> +	.if     \sync == 1
>  	isb
> +	.endif
>  alternative_else_nop_endif
>  	.endm
>  
> @@ -55,7 +57,7 @@ alternative_else_nop_endif
>  	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
>  	.endm
>  
> -	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
> +	.macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
>  	.endm
>  
>  #endif /* CONFIG_ARM64_PTR_AUTH */
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 3dad2d0..6273d7b 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -178,7 +178,7 @@ alternative_cb_end
>  
>  	apply_ssbd 1, x22, x23
>  
> -	ptrauth_keys_install_kernel tsk, x20, x22, x23
> +	ptrauth_keys_install_kernel tsk, 1, x20, x22, x23
>  	.else
>  	add	x21, sp, #S_FRAME_SIZE
>  	get_current_task tsk
> @@ -900,7 +900,7 @@ ENTRY(cpu_switch_to)
>  	ldr	lr, [x8]
>  	mov	sp, x9
>  	msr	sp_el0, x1
> -	ptrauth_keys_install_kernel x1, x8, x9, x10
> +	ptrauth_keys_install_kernel x1, 1, x8, x9, x10
>  	ret
>  ENDPROC(cpu_switch_to)
>  NOKPROBE(cpu_switch_to)
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 5a11a89..4450dc8 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -11,6 +11,7 @@
>  #include <linux/linkage.h>
>  #include <asm/assembler.h>
>  #include <asm/asm-offsets.h>
> +#include <asm/asm_pointer_auth.h>
>  #include <asm/hwcap.h>
>  #include <asm/pgtable.h>
>  #include <asm/pgtable-hwdef.h>
> @@ -137,6 +138,7 @@ alternative_if ARM64_HAS_RAS_EXTN
>  	msr_s	SYS_DISR_EL1, xzr
>  alternative_else_nop_endif
>  
> +	ptrauth_keys_install_kernel x14, 0, x1, x2, x3
>  	isb
>  	ret
>  SYM_FUNC_END(cpu_do_resume)
> 
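
(On the 0 passed as the new sync argument in the cpu_do_resume hunk — the
semantics of the flag, as a comment sketch:)

	/*
	 * sync == 1: the caller (kernel entry, cpu_switch_to) may run signed
	 *            code immediately, so the macro issues its own isb.
	 * sync == 0: cpu_do_resume already executes an isb right after the
	 *            macro, so a second barrier would be redundant.
	 */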

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 17/18] arm64: compile the kernel with ptrauth return address signing
  2020-03-06  6:35 ` [PATCH v6 17/18] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
@ 2020-03-10 15:20   ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 15:20 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Masahiro Yamada, Mark Brown,
	James Morse, Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> Compile all functions with two ptrauth instructions: PACIASP in the
> prologue to sign the return address, and AUTIASP in the epilogue to
> authenticate the return address (from the stack). If authentication
> fails, the return will cause an instruction abort to be taken, followed
> by an oops and killing the task.
> 
> This should help protect the kernel against attacks using
> return-oriented programming. As ptrauth protects the return address, it
> can also serve as a replacement for CONFIG_STACKPROTECTOR, although note
> that it does not protect other parts of the stack.
> 
> The new instructions are in the HINT encoding space, so on a system
> without ptrauth they execute as NOPs.
> 
> CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
> guests, but also automatically builds the kernel with ptrauth
> instructions if the compiler supports it. If there is no compiler
> support, we do not warn that the kernel was built without ptrauth
> instructions.
> 
> GCC 7 and 8 support the -msign-return-address option, while GCC 9
> deprecates that option and replaces it with -mbranch-protection. Support
> both options.
> 
> Clang uses an external assembler, hence this patch makes sure that the
> correct parameters (-march=armv8.3-a) are passed down to help it recognize
> the ptrauth instructions.
>

Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> # not co-dev parts

> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> [Amit: Cover leaf function, comments]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v5:
>  * Clarified assembler option for GNU toochain.
> 
>  arch/arm64/Kconfig  | 20 +++++++++++++++++++-
>  arch/arm64/Makefile | 11 +++++++++++
>  2 files changed, 30 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 115ceea..0f3ea01 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1499,6 +1499,7 @@ config ARM64_PTR_AUTH
>  	bool "Enable support for pointer authentication"
>  	default y
>  	depends on !KVM || ARM64_VHE
> +	depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC
>  	help
>  	  Pointer authentication (part of the ARMv8.3 Extensions) provides
>  	  instructions for signing and authenticating pointers against secret
> @@ -1506,11 +1507,17 @@ config ARM64_PTR_AUTH
>  	  and other attacks.
>  
>  	  This option enables these instructions at EL0 (i.e. for userspace).
> -
>  	  Choosing this option will cause the kernel to initialise secret keys
>  	  for each process at exec() time, with these keys being
>  	  context-switched along with the process.
>  
> +	  If the compiler supports the -mbranch-protection or
> +	  -msign-return-address flag (e.g. GCC 7 or later), then this option
> +	  will also cause the kernel itself to be compiled with return address
> +	  protection. In this case, and if the target hardware is known to
> +	  support pointer authentication, then CONFIG_STACKPROTECTOR can be
> +	  disabled with minimal loss of protection.
> +
>  	  The feature is detected at runtime. If the feature is not present in
>  	  hardware it will not be advertised to userspace/KVM guest nor will it
>  	  be enabled. However, KVM guest also require VHE mode and hence
> @@ -1522,6 +1529,17 @@ config ARM64_PTR_AUTH
>  	  but with the feature disabled. On such a system, this option should
>  	  not be selected.
>  
> +config CC_HAS_BRANCH_PROT_PAC_RET
> +	# GCC 9 or later, clang 8 or later
> +	def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)
> +
> +config CC_HAS_SIGN_RETURN_ADDRESS
> +	# GCC 7, 8
> +	def_bool $(cc-option,-msign-return-address=all)
> +
> +config AS_HAS_PAC
> +	def_bool $(as-option,-Wa$(comma)-march=armv8.3-a)
> +
>  endmenu
>  
>  menu "ARMv8.5 architectural features"
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index dca1a97..f15f92b 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -65,6 +65,17 @@ stack_protector_prepare: prepare0
>  					include/generated/asm-offsets.h))
>  endif
>  
> +ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
> +branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
> +branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
> +# -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the
> +# compiler to generate them and consequently to break the single image contract
> +# we pass it only to the assembler. This option is utilized only in case of non
> +# integrated assemblers.
> +branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a
> +KBUILD_CFLAGS += $(branch-prot-flags-y)
> +endif
> +
>  ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
>  KBUILD_CPPFLAGS	+= -mbig-endian
>  CHECKFLAGS	+= -D__AARCH64EB__
> 
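
(Net effect of the Makefile hunk above for the toolchains named in the commit
message — a summary sketch, assuming each flag test succeeds:)

	# GCC 7/8:           branch-prot-flags-y := -msign-return-address=all
	# GCC 9+ / Clang 8+: branch-prot-flags-y := -mbranch-protection=pac-ret+leaf
	#                    (the later := overrides the earlier one when both match)
	# Non-integrated assembler: -Wa,-march=armv8.3-a is appended as well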

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 09/18] arm64: enable ptrauth earlier
  2020-03-06  6:35 ` [PATCH v6 09/18] arm64: enable ptrauth earlier Amit Daniel Kachhap
@ 2020-03-10 15:45   ` Vincenzo Frascino
  2020-03-11  6:26     ` Amit Kachhap
  0 siblings, 1 reply; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-10 15:45 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi Amit,

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
> 
> When the kernel is compiled with pointer auth instructions, the boot CPU
> needs to start using address auth very early, so change the cpucap to
> account for this.
> 
> Pointer auth must be enabled before we call C functions, because it is
> not possible to enter a function with pointer auth disabled and exit it
> with pointer auth enabled. Note, mismatches between architected and
> IMPDEF algorithms will still be caught by the cpufeature framework (the
> separate *_ARCH and *_IMP_DEF cpucaps).
> 
> Note the change in behavior: if the boot CPU has address auth and a
> late CPU does not, then the late CPU is parked by the cpufeature
> framework. Also, if the boot CPU does not have address auth and the late
> CPU does, then the late CPU will still boot, but with the ptrauth feature
> disabled.
> 
> Leave generic authentication as a "system scope" cpucap for now, since
> initially the kernel will only use address authentication.
> 

I can't find in this patch where CPU_STUCK_REASON_NO_PTRAUTH is set. Maybe I am
missing something. Please feel free to correct me if I am wrong.

My expectation is that you should call early_park_cpu to do that if the
secondary does not support PTRAUTH similar to what you did in v2 of this series:

ENTRY(__cpu_secondary_checkptrauth)
#ifdef CONFIG_ARM64_PTR_AUTH
       /* Check if the CPU supports ptrauth */
       mrs     x2, id_aa64isar1_el1
       ubfx    x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
       cbnz    x2, 1f
alternative_if ARM64_HAS_ADDRESS_AUTH
       mov     x3, 1
alternative_else
       mov     x3, 0
alternative_endif
       cbz     x3, 1f
       /* Park the mismatched secondary CPU */
       early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH
#endif
1:     ret
ENDPROC(__cpu_secondary_checkptrauth)

and then check it during the secondary_startup, similar to what happens for
52BIT_VA for example.

In this way "update_early_cpu_boot_status" would update the
CPU_STUCK_REASON_NO_PTRAUTH flag.

[...]

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 00/18] arm64: return address signing
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (17 preceding siblings ...)
  2020-03-06  6:35 ` [PATCH v6 18/18] lkdtm: arm64: test kernel pointer authentication Amit Daniel Kachhap
@ 2020-03-10 15:59 ` Rémi Denis-Courmont
  2020-03-11  9:28 ` James Morse
  19 siblings, 0 replies; 67+ messages in thread
From: Rémi Denis-Courmont @ 2020-03-10 15:59 UTC (permalink / raw)
  To: linux-arm-kernel

	Hello,

On Friday, 6 March 2020 at 8.35.07 EET, Amit Daniel Kachhap wrote:
> This series improves function return address protection for the arm64
> kernel, by compiling the kernel with ARMv8.3 Pointer Authentication
> instructions (referred ptrauth hereafter). This should help protect the
> kernel against attacks using return-oriented programming.

Is there any notion of what the runtime overhead will be?

We tried to estimate the overhead of PAuth-based return address signing, and of
an attempt of ours to improve it (ArXiv:1912.04145, to be presented at DAC
2020), but we lacked proper hardware.

-- 
Rémi Denis-Courmont
http://www.remlab.net/




_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses
  2020-03-10 12:28     ` Amit Kachhap
@ 2020-03-10 17:37       ` James Morse
  2020-03-11  6:07         ` Amit Kachhap
  0 siblings, 1 reply; 67+ messages in thread
From: James Morse @ 2020-03-10 17:37 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Amit,

On 10/03/2020 12:28, Amit Kachhap wrote:
> On 3/10/20 12:33 AM, James Morse wrote:
>> On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland <mark.rutland@arm.com>
>>>
>>> When we enable pointer authentication in the kernel, LR values saved to
>>> the stack will have a PAC which we must strip in order to retrieve the
>>> real return address.
>>>
>>> Strip PACs when unwinding the stack in order to account for this.
>>
>> This patch had me looking at the wider pointer-auth + ftrace interaction...

>>> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
>>> index a336cb1..b479df7 100644
>>> --- a/arch/arm64/kernel/stacktrace.c
>>> +++ b/arch/arm64/kernel/stacktrace.c
>>> @@ -14,6 +14,7 @@
>>>   #include <linux/stacktrace.h>
>>>     #include <asm/irq.h>
>>> +#include <asm/pointer_auth.h>
>>>   #include <asm/stack_pointer.h>
>>>   #include <asm/stacktrace.h>
>>>   @@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct
>>> stackframe *frame)
>>
>> There is an earlier reader of frame->pc:
>> | #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>> |     if (tsk->ret_stack &&
>> |             (frame->pc == (unsigned long)return_to_handler)) {
>>
>>
>> Which leads down the rat-hole of: does this need ptrauth_strip_insn_pac()?
>>
>> The version of GCC on my desktop supports patchable-function-entry: the function preamble
>> has two nops for use by ftrace[0]. This means if prepare_ftrace_return() re-writes the
>> saved LR, it does it before the caller paciasp's it.
>>
>> I think that means if you stack-trace from a function that had been hooked by the
>> function_graph_tracer, you will see the LR with a PAC, meaning the above == won't match.
>>
>>
>> The version of LLVM on my desktop, however, doesn't support patchable-function-entry; it
>> uses _mcount() to do the ftrace stuff[1]. Here prepare_ftrace_return() overwrites a
>> paciasp'd LR with one that isn't, which will fail.
>>
>>
>> Could the ptrauth_strip_insn_pac() call move above the CONFIG_FUNCTION_GRAPH_TRACER block,

> This may not be required as we never explicitly sign return_to_handler

Doesn't the original caller sign it? (I agree that assembly is tricky to work out)

ftrace_graph_caller passes 'parent' to prepare_ftrace_return() as the LR in regs:
| add	x1, sp, #S_LR

prepare_ftrace_return() may overwrite it with an unsigned value.

ftrace_common_return restores this location to x30:
| ldr	x30, [sp, #S_LR]

Then returns to the first real instruction of the original caller: paciasp.

(when navigating that assembly, there are two stack frames, each with an LR, and one LR in
the regs...)


> and frame->pc may
> store it without any PAC signature for patchable-function-entry.

How does return_to_handler() run? Surely when the original caller pulls the LR off the
stack, it runs:
| autiasp
| ret

Wouldn't autiasp transform an unsigned return_to_handler() to be a bogus address?

I agree the 'unsigned' case does happen if you're using _mcount(); this will be caught by
autiasp, hence we need to depend on HAVE_DYNAMIC_FTRACE_WITH_REGS.
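
(Spelling the mcount failure out — an illustrative sequence, not taken from a
real build:)

	caller:
		paciasp				// sign LR on entry
		stp	x29, x30, [sp, #-16]!	// signed LR saved to the stack
		bl	_mcount			// prepare_ftrace_return() replaces
						// the saved LR with the *unsigned*
						// address of return_to_handler
		...
		ldp	x29, x30, [sp], #16
		autiasp				// authenticates an unsigned value,
						// producing a bogus address
		ret				// the return then faults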


> While testing patchable-function-entry, I had an observation regarding WARN messages,
> 
> [  541.030265] Hardware name: Foundation-v8A (DT)
> [  541.033500] pstate: 60400009 (nZCv daif +PAN -UAO)
> [  541.036880] pc : change_pac_parameters+0x40/0x4c
> [  541.040279] lr : return_to_handler+0x0/0x3c
> [  541.043373] sp : ffff8000126e3d00

(a WARN()ing?, where?! Ah, you mean triggered deliberately to check they look right...)


> Here lr may need some logic to display the correct return address, although it is
> unrelated to this ptrauth series. (arch/arm64/kernel/process.c +264)

Yes, this happens when a function that has been hooked by ftrace hits a WARN_ON():
show_regs() will report the real LR. I don't think that's a problem; it's helpful to know
that ftrace has hooked this call.

Presumably return_to_handler() doesn't appear in the call-trace? (that would be a problem)


>> and could we add something like:
>> |    depends on (!FTRACE || HAVE_DYNAMIC_FTRACE_WITH_REGS)
>>
>> to the Kconfig to prevent both FTRACE and PTR_AUTH being enabled unless the compiler has
>> support for patchable-function-entry?
> 
> Yes, this is a good condition to have. I feel the condition below is more suitable, as the
> issue is only with FUNCTION_GRAPH_TRACER,

Er, yes!
Because its callers of prepare_ftrace_return() that have the problem, and that is behind
#ifdef FUNCTION_GRAPH_TRACER.


Thanks,

James

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses
  2020-03-10 17:37       ` James Morse
@ 2020-03-11  6:07         ` Amit Kachhap
  2020-03-11  9:09           ` James Morse
  0 siblings, 1 reply; 67+ messages in thread
From: Amit Kachhap @ 2020-03-11  6:07 UTC (permalink / raw)
  To: James Morse
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi,

On 3/10/20 11:07 PM, James Morse wrote:
> Hi Amit,
> 
> On 10/03/2020 12:28, Amit Kachhap wrote:
>> On 3/10/20 12:33 AM, James Morse wrote:
>>> On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
>>>> From: Mark Rutland <mark.rutland@arm.com>
>>>>
>>>> When we enable pointer authentication in the kernel, LR values saved to
>>>> the stack will have a PAC which we must strip in order to retrieve the
>>>> real return address.
>>>>
>>>> Strip PACs when unwinding the stack in order to account for this.
>>>
>>> This patch had me looking at the wider pointer-auth + ftrace interaction...
> 
>>>> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
>>>> index a336cb1..b479df7 100644
>>>> --- a/arch/arm64/kernel/stacktrace.c
>>>> +++ b/arch/arm64/kernel/stacktrace.c
>>>> @@ -14,6 +14,7 @@
>>>>    #include <linux/stacktrace.h>
>>>>      #include <asm/irq.h>
>>>> +#include <asm/pointer_auth.h>
>>>>    #include <asm/stack_pointer.h>
>>>>    #include <asm/stacktrace.h>
>>>>    @@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct
>>>> stackframe *frame)
>>>
>>> There is an earlier reader of frame->pc:
>>> | #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>>> |     if (tsk->ret_stack &&
>>> |             (frame->pc == (unsigned long)return_to_handler)) {
>>>
>>>
>>> Which leads down the rat-hole of: does this need ptrauth_strip_insn_pac()?
>>>
>>> The version of GCC on my desktop supports patchable-function-entry: the function preamble
>>> has two nops for use by ftrace[0]. This means if prepare_ftrace_return() re-writes the
>>> saved LR, it does it before the caller paciasp's it.
>>>
>>> I think that means if you stack-trace from a function that had been hooked by the
>>> function_graph_tracer, you will see the LR with a PAC, meaning the above == won't match.
>>>
>>>
>>> The version of LLVM on my desktop, however, doesn't support patchable-function-entry; it
>>> uses _mcount() to do the ftrace stuff[1]. Here prepare_ftrace_return() overwrites a
>>> paciasp'd LR with one that isn't, which will fail.
>>>
>>>
>>> Could the ptrauth_strip_insn_pac() call move above the CONFIG_FUNCTION_GRAPH_TRACER block,
> 
>> This may not be required as we never explicitly sign return_to_handler
> 
> Doesn't the original caller sign it? (I agree that assembly is tricky to work out)
> 
> ftrace_graph_caller passes 'parent' to prepare_ftrace_return() as the LR in regs:
> | add	x1, sp, #S_LR
> 
> prepare_ftrace_return() may overwrite it with an unsigned value.
> 
> ftrace_common_return restores this location to x30:
> | ldr	x30, [sp, #S_LR]
> 
> Then returns to the first real instruction of the original caller: paciasp.
> 
> (when navigating that assembly, there are two stack frames, each with an LR, and one LR in
> the regs...)
> 
> 
>> and frame->pc may
>> store it without any PAC signature for patchable-function-entry.
> 
> How does return_to_handler() run? Surely when the original caller pulls the LR off the
> stack, it runs:
> | autiasp
> | ret

I used dump_stack() instead of WARN_ON and was able to reproduce the issue.
Yes, ptrauth_strip_insn_pac needs to move up to fix this. Thanks for the
details.
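
(Concretely, the agreed fix hoists the strip above the graph-tracer check in
unwind_frame() — a sketch against the quoted hunk, not the final patch:)

	frame->pc = ptrauth_strip_insn_pac(frame->pc);

	#ifdef CONFIG_FUNCTION_GRAPH_TRACER
		if (tsk->ret_stack &&
		    (frame->pc == (unsigned long)return_to_handler)) {
			/* ... fetch the real frame->pc from the ret_stack ... */
		}
	#endif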

> 
> Wouldn't autiasp transform an unsigned return_to_handler() to be a bogus address?
> 
> I agree the 'unsigned' case does happen if you're using _mcount(), this will be caught by
> autiasp, hence we need to depend on HAVE_DYNAMIC_FTRACE_WITH_REGS.
> 
> 
>> While testing patchable-function-entry, I had an observation regarding WARN messages,
>>
>> [  541.030265] Hardware name: Foundation-v8A (DT)
>> [  541.033500] pstate: 60400009 (nZCv daif +PAN -UAO)
>> [  541.036880] pc : change_pac_parameters+0x40/0x4c
>> [  541.040279] lr : return_to_handler+0x0/0x3c
>> [  541.043373] sp : ffff8000126e3d00
> 
> (a WARN()ing?, where?! Ah, you mean triggered deliberately to check they look right...)
> 
> 
>> Here lr may need some logic to display the correct return address, although it is
>> unrelated to this ptrauth series. (arch/arm64/kernel/process.c +264)
> 
> Yes, this happens when a function that has been hooked by ftrace hits a WARN_ON():
> show_regs() will report the real LR. I don't think that's a problem; it's helpful to know
> that ftrace has hooked this call.

ok

> 
> Presumably return_to_handler() doesn't appear in the call-trace? (that would be a problem)

No it doesn't appear.

> 
> 
>>> and could we add something like:
>>> |    depends on (!FTRACE || HAVE_DYNAMIC_FTRACE_WITH_REGS)
>>>
>>> to the Kconfig to prevent both FTRACE and PTR_AUTH being enabled unless the compiler has
>>> support for patchable-function-entry?
>>
>> Yes, this is a good condition to have. I feel the condition below is more suitable, as the
>> issue is only with FUNCTION_GRAPH_TRACER,
> 
> Er, yes!
> Because it's the callers of prepare_ftrace_return() that have the problem, and that is behind
> #ifdef FUNCTION_GRAPH_TRACER.

ok

Cheers,
Amit
> 
> 
> Thanks,
> 
> James
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 09/18] arm64: enable ptrauth earlier
  2020-03-10 15:45   ` Vincenzo Frascino
@ 2020-03-11  6:26     ` Amit Kachhap
  2020-03-11 10:26       ` Vincenzo Frascino
  0 siblings, 1 reply; 67+ messages in thread
From: Amit Kachhap @ 2020-03-11  6:26 UTC (permalink / raw)
  To: Vincenzo Frascino, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel



On 3/10/20 9:15 PM, Vincenzo Frascino wrote:
> Hi Amit,
> 
> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>> From: Kristina Martsenko <kristina.martsenko@arm.com>
>>
>> When the kernel is compiled with pointer auth instructions, the boot CPU
>> needs to start using address auth very early, so change the cpucap to
>> account for this.
>>
>> Pointer auth must be enabled before we call C functions, because it is
>> not possible to enter a function with pointer auth disabled and exit it
>> with pointer auth enabled. Note, mismatches between architected and
>> IMPDEF algorithms will still be caught by the cpufeature framework (the
>> separate *_ARCH and *_IMP_DEF cpucaps).
>>
>> Note the change in behavior: if the boot CPU has address auth and a
>> late CPU does not, then the late CPU is parked by the cpufeature
>> framework. Also, if the boot CPU does not have address auth and the late
>> CPU does, then the late CPU will still boot, but with the ptrauth feature
>> disabled.
>>
>> Leave generic authentication as a "system scope" cpucap for now, since
>> initially the kernel will only use address authentication.
>>
> 
> I can't find in this patch where CPU_STUCK_REASON_NO_PTRAUTH is set. Maybe I am
> missing something. Please feel free to correct me if I am wrong.
> 
> My expectation is that you should call early_park_cpu to do that if the
> secondary does not support PTRAUTH similar to what you did in v2 of this series:
> 
> ENTRY(__cpu_secondary_checkptrauth)
> #ifdef CONFIG_ARM64_PTR_AUTH
>         /* Check if the CPU supports ptrauth */
>         mrs     x2, id_aa64isar1_el1
>         ubfx    x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
>         cbnz    x2, 1f
> alternative_if ARM64_HAS_ADDRESS_AUTH
>         mov     x3, 1
> alternative_else
>         mov     x3, 0
> alternative_endif
>         cbz     x3, 1f
>         /* Park the mismatched secondary CPU */
>         early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH
> #endif
> 1:     ret
> ENDPROC(__cpu_secondary_checkptrauth)
> 
> and then check it during the secondary_startup, similar to what happens for
> 52BIT_VA for example.
> 
> In this way "update_early_cpu_boot_status" would update the
> CPU_STUCK_REASON_NO_PTRAUTH flag.

It was implemented this way earlier. Catalin suggested that the pointer auth
variation between CPUs is not critical enough, and that the cpufeature
framework can park such a CPU a little later [1].

I agree that I should have removed all definitions of
CPU_STUCK_REASON_NO_PTRAUTH, to prevent unnecessary confusion.

[1] : https://www.spinics.net/lists/arm-kernel/msg780766.html
> 
> [...]
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses
  2020-03-11  6:07         ` Amit Kachhap
@ 2020-03-11  9:09           ` James Morse
  0 siblings, 0 replies; 67+ messages in thread
From: James Morse @ 2020-03-11  9:09 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Amit,

On 3/11/20 6:07 AM, Amit Kachhap wrote:
> On 3/10/20 11:07 PM, James Morse wrote:
>> On 10/03/2020 12:28, Amit Kachhap wrote:
>>> On 3/10/20 12:33 AM, James Morse wrote:
>>>> On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
>>>>> From: Mark Rutland <mark.rutland@arm.com>
>>>>>
>>>>> When we enable pointer authentication in the kernel, LR values
>>>>> saved to
>>>>> the stack will have a PAC which we must strip in order to retrieve the
>>>>> real return address.
>>>>>
>>>>> Strip PACs when unwinding the stack in order to account for this.
>>>>
>>>> This patch had me looking at the wider pointer-auth + ftrace
>>>> interaction...

>>>>> diff --git a/arch/arm64/kernel/stacktrace.c
>>>>> b/arch/arm64/kernel/stacktrace.c
>>>>> index a336cb1..b479df7 100644
>>>>> --- a/arch/arm64/kernel/stacktrace.c
>>>>> +++ b/arch/arm64/kernel/stacktrace.c
>>>>> @@ -14,6 +14,7 @@
>>>>>    #include <linux/stacktrace.h>
>>>>>      #include <asm/irq.h>
>>>>> +#include <asm/pointer_auth.h>
>>>>>    #include <asm/stack_pointer.h>
>>>>>    #include <asm/stacktrace.h>
>>>>>    @@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct
>>>>> *tsk, struct
>>>>> stackframe *frame)
>>>>
>>>> There is an earlier reader of frame->pc:
>>>> | #ifdef CONFIG_FUNCTION_GRAPH_TRACER
>>>> |     if (tsk->ret_stack &&
>>>> |             (frame->pc == (unsigned long)return_to_handler)) {
>>>>

>>>> Could the ptrauth_strip_insn_pac() call move above the
>>>> CONFIG_FUNCTION_GRAPH_TRACER block,
>>
>>> This may not be required as we never explicitly sign return_to_handler
>>
>> Doesn't the original caller sign it? (I agree that assembly is tricky
>> to work out)

> I used dump_stack() instead of WARN_ON and was able to reproduce the issue.
> Yes, ptrauth_strip_insn_pac needs to move up to fix this. Thanks for the
> details.

Great!


>>>> and could we add something like:
>>>> |    depends on (!FTRACE || HAVE_DYNAMIC_FTRACE_WITH_REGS)
>>>>
>>>> to the Kconfig to prevent both FTRACE and PTR_AUTH being enabled
>>>> unless the compiler has
>>>> support for patchable-function-entry?
>>>
>>> Yes, this is a good condition to have. I feel the condition below is more
>>> suitable, as the
>>> issue is only with FUNCTION_GRAPH_TRACER,
>>
>> Er, yes!
>> Because it's the callers of prepare_ftrace_return() that have the problem,
>> and that is behind
>> #ifdef FUNCTION_GRAPH_TRACER.

With the ptrauth_strip_insn_pac() moved, and your better version of that
Kconfig suggestion:
Reviewed-by: James Morse <james.morse@arm.com>


Thanks!

James

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 00/18] arm64: return address signing
  2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
                   ` (18 preceding siblings ...)
  2020-03-10 15:59 ` [PATCH v6 00/18] arm64: return address signing Rémi Denis-Courmont
@ 2020-03-11  9:28 ` James Morse
  2020-03-12  6:53   ` Amit Kachhap
  19 siblings, 1 reply; 67+ messages in thread
From: James Morse @ 2020-03-11  9:28 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Mark Rutland, Marc Zyngier, Kees Cook, Suzuki K Poulose,
	Catalin Marinas, Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Amit,

(CC: +Marc)

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> This series improves function return address protection for the arm64 kernel, by
> compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
> ptrauth hereafter). This should help protect the kernel against attacks using
> return-oriented programming.

(as it looks like there may be another version of this:)

Am I right in thinking that after your patch 10 changing
cpu_switch_to(), only the A key is live during kernel execution?

KVM is still saving/restoring 4 extra keys around guest entry/exit. As you
restore all the keys on return to user-space, is this still necessary?

(insert cross-tree headache here)


Thanks,

James

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 06/18] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  2020-03-10 12:14   ` Vincenzo Frascino
@ 2020-03-11  9:28     ` Amit Kachhap
  0 siblings, 0 replies; 67+ messages in thread
From: Amit Kachhap @ 2020-03-11  9:28 UTC (permalink / raw)
  To: Vincenzo Frascino, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi,

On 3/10/20 5:44 PM, Vincenzo Frascino wrote:
> Hi Amit,
> 
> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>> This patch allows __cpu_setup to be invoked with one of these flags,
>> ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_SECONDARY or ARM64_CPU_RUNTIME.
>> This is required as some cpufeatures need different handling during
>> different scenarios.
>>
> 
> I could not find any explanation in this patch on what these flags stand for.
> Could you please add one? Maybe near where you define them.

I will add in my V7 version.
> 
> With this:
> 
> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

Thanks.
> 
>> The input parameter in x0 is preserved till the end to be used inside
>> this function.
>>
>> There should be no functional change with this patch and is useful
>> for the subsequent ptrauth patch which utilizes it. Some upcoming
>> arm cpufeatures can also utilize these flags.
>>
>> Suggested-by: James Morse <james.morse@arm.com>
>> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 09/18] arm64: enable ptrauth earlier
  2020-03-11  6:26     ` Amit Kachhap
@ 2020-03-11 10:26       ` Vincenzo Frascino
  2020-03-11 10:46         ` Amit Kachhap
  0 siblings, 1 reply; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-11 10:26 UTC (permalink / raw)
  To: Amit Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi Amit,

On 3/11/20 6:26 AM, Amit Kachhap wrote:

[...]

>>
>> My expectation is that you should call early_park_cpu to do that if the
>> secondary does not support PTRAUTH similar to what you did in v2 of this series:
>>
>> ENTRY(__cpu_secondary_checkptrauth)
>> #ifdef CONFIG_ARM64_PTR_AUTH
>>         /* Check if the CPU supports ptrauth */
>>         mrs     x2, id_aa64isar1_el1
>>         ubfx    x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
>>         cbnz    x2, 1f
>> alternative_if ARM64_HAS_ADDRESS_AUTH
>>         mov     x3, 1
>> alternative_else
>>         mov     x3, 0
>> alternative_endif
>>         cbz     x3, 1f
>>         /* Park the mismatched secondary CPU */
>>         early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH
>> #endif
>> 1:     ret
>> ENDPROC(__cpu_secondary_checkptrauth)
>>
>> and then check it during the secondary_startup, similar to what happens for
>> 52BIT_VA for example.
>>
>> In this way "update_early_cpu_boot_status" would update the
>> CPU_STUCK_REASON_NO_PTRAUTH flag.
> 
> It was implemented this way earlier. Catalin suggested that the pointer auth
> variation between CPUs is not critical enough, and that the cpufeature
> framework can park such a CPU a little later [1].
> 
> I agree that I should have removed all definitions of
> CPU_STUCK_REASON_NO_PTRAUTH, to prevent unnecessary confusion.
> 
> [1] : https://www.spinics.net/lists/arm-kernel/msg780766.html

It was either/or :) Sorry, I did not see Catalin's comment; please go ahead and
remove the definition of CPU_STUCK_REASON_NO_PTRAUTH and the code that uses it
in this case.

Maybe you also want to expand your commit message (which already seems to
cover this case) to explain why it is possible to let the cpufeature framework
deal with it. That should make things clearer, in my opinion.

Another question still remains: do we need to introduce early_park_cpu as part
of this series? It does not seem you are using it anywhere.

>>
>> [...]
>>

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 09/18] arm64: enable ptrauth earlier
  2020-03-11 10:26       ` Vincenzo Frascino
@ 2020-03-11 10:46         ` Amit Kachhap
  2020-03-11 10:49           ` Vincenzo Frascino
  0 siblings, 1 reply; 67+ messages in thread
From: Amit Kachhap @ 2020-03-11 10:46 UTC (permalink / raw)
  To: Vincenzo Frascino, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi Vincenzo,

On 3/11/20 3:56 PM, Vincenzo Frascino wrote:
> Hi Amit,
> 
> On 3/11/20 6:26 AM, Amit Kachhap wrote:
> 
> [...]
> 
>>>
>>> My expectation is that you should call early_park_cpu to do that if the
>>> secondary does not support PTRAUTH similar to what you did in v2 of this series:
>>>
>>> ENTRY(__cpu_secondary_checkptrauth)
>>> #ifdef CONFIG_ARM64_PTR_AUTH
>>>          /* Check if the CPU supports ptrauth */
>>>          mrs     x2, id_aa64isar1_el1
>>>          ubfx    x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
>>>          cbnz    x2, 1f
>>> alternative_if ARM64_HAS_ADDRESS_AUTH
>>>          mov     x3, 1
>>> alternative_else
>>>          mov     x3, 0
>>> alternative_endif
>>>          cbz     x3, 1f
>>>          /* Park the mismatched secondary CPU */
>>>          early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH
>>> #endif
>>> 1:     ret
>>> ENDPROC(__cpu_secondary_checkptrauth)
>>>
>>> and then check it during the secondary_startup, similar to what happens for
>>> 52BIT_VA for example.
>>>
>>> In this way "update_early_cpu_boot_status" would update the
>>> CPU_STUCK_REASON_NO_PTRAUTH flag.
>>
>> It was implemented this way earlier. Catalin suggested that the pointer auth
>> variation across cpus is not critical enough, and that the cpufeature
>> framework can park the mismatched cpu a little later [1].
>>
>> Agreed, I should have removed all definitions of CPU_STUCK_REASON_NO_PTRAUTH
>> to prevent unnecessary confusion.
>>
>> [1] : https://www.spinics.net/lists/arm-kernel/msg780766.html
> 
> It was either/or :) Sorry, I did not see Catalin's comment; please go ahead and
> remove the definition of CPU_STUCK_REASON_NO_PTRAUTH and the code that uses it
> in this case.

ok

> 
> Maybe you also want to expand your commit message (which already seems to
> cover this case) to explain why it is possible to let the cpufeature framework
> deal with it. That should make things clearer, in my opinion.

sure.

> 
> Another question still remains: do we need to introduce early_park_cpu
> as part of this series? It does not seem to be used anywhere.

I should probably drop this cleanup patch from this series and maybe
send it separately.

> 
>>>
>>> [...]
>>>
> 


* Re: [PATCH v6 09/18] arm64: enable ptrauth earlier
  2020-03-11 10:46         ` Amit Kachhap
@ 2020-03-11 10:49           ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-11 10:49 UTC (permalink / raw)
  To: Amit Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Will Deacon, Ard Biesheuvel

Hi Amit,

On 3/11/20 10:46 AM, Amit Kachhap wrote:
> Hi Vincenzo,
> 
> On 3/11/20 3:56 PM, Vincenzo Frascino wrote:
>> Hi Amit,
>>
>> On 3/11/20 6:26 AM, Amit Kachhap wrote:
>>
>> [...]
>>
>>>>
>>>> My expectation is that you should call early_park_cpu to do that if the
>>>> secondary does not support PTRAUTH similar to what you did in v2 of this
>>>> series:
>>>>
>>>> ENTRY(__cpu_secondary_checkptrauth)
>>>> #ifdef CONFIG_ARM64_PTR_AUTH
>>>>          /* Check if the CPU supports ptrauth */
>>>>          mrs     x2, id_aa64isar1_el1
>>>>          ubfx    x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
>>>>          cbnz    x2, 1f
>>>> alternative_if ARM64_HAS_ADDRESS_AUTH
>>>>          mov     x3, 1
>>>> alternative_else
>>>>          mov     x3, 0
>>>> alternative_endif
>>>>          cbz     x3, 1f
>>>>          /* Park the mismatched secondary CPU */
>>>>          early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH
>>>> #endif
>>>> 1:     ret
>>>> ENDPROC(__cpu_secondary_checkptrauth)
>>>>
>>>> and then check it during the secondary_startup, similar to what happens for
>>>> 52BIT_VA for example.
>>>>
>>>> In this way "update_early_cpu_boot_status" would update the
>>>> CPU_STUCK_REASON_NO_PTRAUTH flag.
>>>
>>> It was implemented this way earlier. Catalin suggested that the pointer auth
>>> variation across cpus is not critical enough, and that the cpufeature
>>> framework can park the mismatched cpu a little later [1].
>>>
>>> Agreed, I should have removed all definitions of CPU_STUCK_REASON_NO_PTRAUTH
>>> to prevent unnecessary confusion.
>>>
>>> [1] : https://www.spinics.net/lists/arm-kernel/msg780766.html
>>
>> It was either/or :) Sorry, I did not see Catalin's comment; please go ahead and
>> remove the definition of CPU_STUCK_REASON_NO_PTRAUTH and the code that uses it
>> in this case.
> 
> ok
> 
>>
>> Maybe you also want to expand your commit message (which already seems to
>> cover this case) to explain why it is possible to let the cpufeature framework
>> deal with it. That should make things clearer, in my opinion.
> 
> sure.
> 
>>
>> Another question still remains: do we need to introduce early_park_cpu
>> as part of this series? It does not seem to be used anywhere.
> 
> I should probably drop this cleanup patch from this series and maybe
> send it separately.
>

Thanks!

With these comments addressed:
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>

>>
>>>>
>>>> [...]
>>>>
>>

-- 
Regards,
Vincenzo


* Re: [PATCH v6 07/18] arm64: cpufeature: Move cpu capability helpers inside C file
  2020-03-10 12:53     ` Amit Kachhap
@ 2020-03-11 10:50       ` Catalin Marinas
  2020-03-11 11:44         ` Vincenzo Frascino
  0 siblings, 1 reply; 67+ messages in thread
From: Catalin Marinas @ 2020-03-11 10:50 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Dave Martin, Mark Brown, James Morse, Ramana Radhakrishnan,
	Vincenzo Frascino, Will Deacon, Ard Biesheuvel, linux-arm-kernel

On Tue, Mar 10, 2020 at 06:23:15PM +0530, Amit Kachhap wrote:
> On 3/10/20 5:50 PM, Vincenzo Frascino wrote:
> > On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> > 
> > [...]
> > 
> > > -static inline bool
> > > -cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
> > > -{
> > > -	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
> > > -}
> > > -
> > > -static inline bool
> > > -cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
> > > -{
> > > -	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
> > > -}
> > > -
> > >   /*
> > >    * Generic helper for handling capabilties with multiple (match,enable) pairs
> > >    * of call backs, sharing the same capability bit.
> > > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > > index b12e386..865dce6 100644
> > > --- a/arch/arm64/kernel/cpufeature.c
> > > +++ b/arch/arm64/kernel/cpufeature.c
> > > @@ -1363,6 +1363,19 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
> > >   }
> > >   #endif
> > > +/* Internal helper functions to match cpu capability type */
> > > +static bool
> > > +cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
> > > +{
> > > +	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
> > > +}
> > > +
> > > +static bool
> > > +cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
> > > +{
> > > +	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
> > > +}
> > > +
> > >   static const struct arm64_cpu_capabilities arm64_features[] = {
> > >   	{
> > >   		.desc = "GIC system register CPU interface",
> > > 
> > 
> > It seems that the signature of the functions above changed during the migration.
> > In particular, you dropped "inline". Is there any specific reason?
> 
> Earlier Catalin pointed me here [1]. I guess it's not a hot-path function.
> 
> [1]: https://www.spinics.net/lists/arm-kernel/msg789696.html

Indeed, it had to be static inline in a header, but that's no longer
needed in a .c file. Also, if it's really essential for it to be inlined
and the compiler doesn't do this automatically, use __always_inline. But
my preference is not to bother unless you can back it up with numbers.
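
E.g. (illustrative only; the second form needs numbers to justify it):

static bool
cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
{
	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
}

/* only if profiling shows the call overhead actually matters */
static __always_inline bool
cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
{
	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
}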

-- 
Catalin


* Re: [PATCH v6 08/18] arm64: cpufeature: handle conflicts based on capability
  2020-03-10 12:31   ` Vincenzo Frascino
@ 2020-03-11 11:03     ` Catalin Marinas
  2020-03-11 11:46       ` Vincenzo Frascino
  0 siblings, 1 reply; 67+ messages in thread
From: Catalin Marinas @ 2020-03-11 11:03 UTC (permalink / raw)
  To: Vincenzo Frascino
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Dave Martin, Mark Brown, James Morse, Ramana Radhakrishnan,
	Amit Daniel Kachhap, Will Deacon, Ard Biesheuvel,
	linux-arm-kernel

On Tue, Mar 10, 2020 at 12:31:56PM +0000, Vincenzo Frascino wrote:
> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> 
> [...]
> 
> >  
> > +static bool
> > +cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
> > +{
> > +	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
> > +}
> > +
> 
> If there is no specific reason in the previous patch for changing the signature,
> could you please make this function "inline" as well, for symmetry with the others?

Please don't add new 'inline' unless you have a real justification (in
which case __always_inline is better suited). Also symmetry with others
is not a good argument.

https://www.kernel.org/doc/html/latest/process/coding-style.html#the-inline-disease

-- 
Catalin


* Re: [PATCH v6 07/18] arm64: cpufeature: Move cpu capability helpers inside C file
  2020-03-11 10:50       ` Catalin Marinas
@ 2020-03-11 11:44         ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-11 11:44 UTC (permalink / raw)
  To: Catalin Marinas, Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Dave Martin, Mark Brown, James Morse, Ramana Radhakrishnan,
	Will Deacon, Ard Biesheuvel, linux-arm-kernel

On 3/11/20 10:50 AM, Catalin Marinas wrote:
> On Tue, Mar 10, 2020 at 06:23:15PM +0530, Amit Kachhap wrote:
>> On 3/10/20 5:50 PM, Vincenzo Frascino wrote:
>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>
>>> [...]
>>>
>>> It seems that the signature of the functions above changed during the migration.
>>> In particular, you dropped "inline". Is there any specific reason?
>>
>> Earlier Catalin pointed me here [1]. I guess it's not a hot-path function.
>>
>> [1]: https://www.spinics.net/lists/arm-kernel/msg789696.html
> 
> Indeed, it had to be static inline in a header, but that's no longer
> needed in a .c file. Also, if it's really essential for it to be inlined
> and the compiler doesn't do this automatically, use __always_inline. But
> my preference is not to bother unless you can back it up with numbers.
> 

Ok, fine by me.

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

-- 
Regards,
Vincenzo


* Re: [PATCH v6 08/18] arm64: cpufeature: handle conflicts based on capability
  2020-03-11 11:03     ` Catalin Marinas
@ 2020-03-11 11:46       ` Vincenzo Frascino
  0 siblings, 0 replies; 67+ messages in thread
From: Vincenzo Frascino @ 2020-03-11 11:46 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Kristina Martsenko,
	Dave Martin, Mark Brown, James Morse, Ramana Radhakrishnan,
	Amit Daniel Kachhap, Will Deacon, Ard Biesheuvel,
	linux-arm-kernel

On 3/11/20 11:03 AM, Catalin Marinas wrote:
> On Tue, Mar 10, 2020 at 12:31:56PM +0000, Vincenzo Frascino wrote:
>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>
>> [...]
>>
>>>  
>>> +static bool
>>> +cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
>>> +{
>>> +	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
>>> +}
>>> +
>>
>> If there is no specific reason in the previous patch for changing the signature,
>> could you please make this function "inline" as well, for symmetry with the others?
> 
> Please don't add new 'inline' unless you have a real justification (in
> which case __always_inline is better suited). Also symmetry with others
> is not a good argument.
> 
> https://www.kernel.org/doc/html/latest/process/coding-style.html#the-inline-disease
> 

Ok, thanks for the explanation.

With this:

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

-- 
Regards,
Vincenzo


* Re: [PATCH v6 00/18] arm64: return address signing
  2020-03-11  9:28 ` James Morse
@ 2020-03-12  6:53   ` Amit Kachhap
  2020-03-12  8:06     ` Amit Kachhap
  0 siblings, 1 reply; 67+ messages in thread
From: Amit Kachhap @ 2020-03-12  6:53 UTC (permalink / raw)
  To: James Morse
  Cc: Mark Rutland, Marc Zyngier, Kees Cook, Suzuki K Poulose,
	Catalin Marinas, Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi James,

On 3/11/20 2:58 PM, James Morse wrote:
> Hi Amit,
> 
> (CC: +Marc)
> 
> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>> This series improves function return address protection for the arm64 kernel, by
>> compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
>> ptrauth hereafter). This should help protect the kernel against attacks using
>> return-oriented programming.
> 
> (as it looks like there may be another version of this:)
> 
> Am I right in thinking that after your patch 10 changing
> cpu_switch_to(), only the A key is live during kernel execution?

Yes

> 
> KVM is still save/restoring 4 extra keys around guest-entry/exit. As you
> restore all the keys on return to user-space, is this still necessary?

Yes. It's a good optimization to skip the 4 non-A keys. I was wondering
whether to do it in this series or send it separately.

Cheers,
Amit Daniel

> 
> (insert cross-tree headache here)
> 
> 
> Thanks,
> 
> James
> 


* Re: [PATCH v6 00/18] arm64: return address signing
  2020-03-12  6:53   ` Amit Kachhap
@ 2020-03-12  8:06     ` Amit Kachhap
  2020-03-12 12:47       ` [PATCH v6 00/18] (as long a Marc Zyngier
  0 siblings, 1 reply; 67+ messages in thread
From: Amit Kachhap @ 2020-03-12  8:06 UTC (permalink / raw)
  To: James Morse
  Cc: Mark Rutland, Marc Zyngier, Kees Cook, Suzuki K Poulose,
	Catalin Marinas, Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi James,

On 3/12/20 12:23 PM, Amit Kachhap wrote:
> Hi James,
> 
> On 3/11/20 2:58 PM, James Morse wrote:
>> Hi Amit,
>>
>> (CC: +Marc)
>>
>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>> This series improves function return address protection for the arm64 
>>> kernel, by
>>> compiling the kernel with ARMv8.3 Pointer Authentication instructions 
>>> (referred
>>> ptrauth hereafter). This should help protect the kernel against 
>>> attacks using
>>> return-oriented programming.
>>
>> (as it looks like there may be another version of this:)
>>
>> Am I right in thinking that after your patch 10 changing
>> cpu_switch_to(), only the A key is live during kernel execution?
> 
> Yes
> 
>>
>> KVM is still save/restoring 4 extra keys around guest-entry/exit. As you
>> restore all the keys on return to user-space, is this still necessary?
> 
> Yes. It's a good optimization to skip the 4 non-A keys. I was wondering
> whether to do it in this series or send it separately.

I suppose we can only skip the non-A key save/restore for the host
context. If we skip the non-A keys for the guest context, then a guest
with an old implementation will break. Let me know your opinion.

//Amit
> 
> Cheers,
> Amit Daniel
> 
>>
>> (insert cross-tree headache here)
>>
>>
>> Thanks,
>>
>> James
>>


* Re: [PATCH v6 00/18]  (as long a
  2020-03-12  8:06     ` Amit Kachhap
@ 2020-03-12 12:47       ` Marc Zyngier
  2020-03-12 13:21         ` Amit Kachhap
  0 siblings, 1 reply; 67+ messages in thread
From: Marc Zyngier @ 2020-03-12 12:47 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Amit,

On 2020-03-12 08:06, Amit Kachhap wrote:
> Hi James,
> 
> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>> Hi James,
>> 
>> On 3/11/20 2:58 PM, James Morse wrote:
>>> Hi Amit,
>>> 
>>> (CC: +Marc)
>>> 
>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>> This series improves function return address protection for the 
>>>> arm64 kernel, by
>>>> compiling the kernel with ARMv8.3 Pointer Authentication 
>>>> instructions (referred
>>>> ptrauth hereafter). This should help protect the kernel against 
>>>> attacks using
>>>> return-oriented programming.
>>> 
>>> (as it looks like there may be another version of this:)
>>> 
>>> Am I right in thinking that after your patch 10 changing
>>> cpu_switch_to(), only the A key is live during kernel execution?
>> 
>> Yes
>> 
>>> 
>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. As 
>>> you
>>> restore all the keys on return to user-space, is this still 
>>> necessary?
>> 
>> Yes. It's a good optimization to skip the 4 non-A keys. I was wondering
>> whether to do it in this series or send it separately.
> 
> I suppose we can only skip the non-A key save/restore for the host
> context. If we skip the non-A keys for the guest context, then a guest
> with an old implementation will break. Let me know your opinion.

I don't think you can skip anything as far as the guest is concerned.
But being able to skip the B keys (which is what I expect you call the
non-A keys) on the host would certainly be useful.

I assume you have a way to hide them from userspace, though.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v6 00/18] (as long a
  2020-03-12 12:47       ` [PATCH v6 00/18] (as long a Marc Zyngier
@ 2020-03-12 13:21         ` Amit Kachhap
  2020-03-12 15:05           ` [PATCH v6 00/18] arm64: return address signing Marc Zyngier
  0 siblings, 1 reply; 67+ messages in thread
From: Amit Kachhap @ 2020-03-12 13:21 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Marc,

On 3/12/20 6:17 PM, Marc Zyngier wrote:
> Hi Amit,
> 
> On 2020-03-12 08:06, Amit Kachhap wrote:
>> Hi James,
>>
>> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>>> Hi James,
>>>
>>> On 3/11/20 2:58 PM, James Morse wrote:
>>>> Hi Amit,
>>>>
>>>> (CC: +Marc)
>>>>
>>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>>> This series improves function return address protection for the 
>>>>> arm64 kernel, by
>>>>> compiling the kernel with ARMv8.3 Pointer Authentication 
>>>>> instructions (referred
>>>>> ptrauth hereafter). This should help protect the kernel against 
>>>>> attacks using
>>>>> return-oriented programming.
>>>>
>>>> (as it looks like there may be another version of this:)
>>>>
>>>> Am I right in thinking that after your patch 10 changing
>>>> cpu_switch_to(), only the A key is live during kernel execution?
>>>
>>> Yes
>>>
>>>>
>>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. As 
>>>> you
>>>> restore all the keys on return to user-space, is this still necessary?
>>>
>>> Yes. It's a good optimization to skip the 4 non-A keys. I was wondering
>>> whether to do it in this series or send it separately.
>>
>> I suppose we can only skip the non-A key save/restore for the host
>> context. If we skip the non-A keys for the guest context, then a guest
>> with an old implementation will break. Let me know your opinion.
> 
> I don't think you can skip anything as far as the guest is concerned.
> But being able to skip the B keys (which is what I expect you call the
> non-A keys) on the host would certainly be useful.

Thanks for the clarification.

> 
> I assume you have a way to hide them from userspace, though.

You mean hide all the keys from userspace, like below?

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3e909b1..29cc74f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1023,7 +1023,7 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
  static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
                         const struct sys_reg_desc *rd)
  {
-       return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
+       return vcpu_has_ptrauth(vcpu) ? REG_HIDDEN_USER : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
  }

  #define __PTRAUTH_KEY(k)

I don't remember why it was not done this way last time.

Cheers,
Amit

> 
> Thanks,
> 
>          M.


* Re: [PATCH v6 00/18] arm64: return address signing
  2020-03-12 13:21         ` Amit Kachhap
@ 2020-03-12 15:05           ` Marc Zyngier
  2020-03-12 17:26             ` James Morse
  0 siblings, 1 reply; 67+ messages in thread
From: Marc Zyngier @ 2020-03-12 15:05 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown, James Morse,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

[Somehow I managed to butcher the subject line. No idea how...]

On 2020-03-12 13:21, Amit Kachhap wrote:
> Hi Marc,
> 
> On 3/12/20 6:17 PM, Marc Zyngier wrote:
>> Hi Amit,
>> 
>> On 2020-03-12 08:06, Amit Kachhap wrote:
>>> Hi James,
>>> 
>>> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>>>> Hi James,
>>>> 
>>>> On 3/11/20 2:58 PM, James Morse wrote:
>>>>> Hi Amit,
>>>>> 
>>>>> (CC: +Marc)
>>>>> 
>>>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>>>> This series improves function return address protection for the 
>>>>>> arm64 kernel, by
>>>>>> compiling the kernel with ARMv8.3 Pointer Authentication 
>>>>>> instructions (referred
>>>>>> ptrauth hereafter). This should help protect the kernel against 
>>>>>> attacks using
>>>>>> return-oriented programming.
>>>>> 
>>>>> (as it looks like there may be another version of this:)
>>>>> 
>>>>> Am I right in thinking that after your patch 10 changing
>>>>> cpu_switch_to(), only the A key is live during kernel execution?
>>>> 
>>>> Yes
>>>> 
>>>>> 
>>>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. 
>>>>> As you
>>>>> restore all the keys on return to user-space, is this still 
>>>>> necessary?
>>>> 
>>>> Yes. It's a good optimization to skip the 4 non-A keys. I was
>>>> wondering whether to do it in this series or send it separately.
>>> 
>>> I suppose we can only skip the non-A key save/restore for the host
>>> context. If we skip the non-A keys for the guest context, then a guest
>>> with an old implementation will break. Let me know your opinion.
>> 
>> I don't think you can skip anything as far as the guest is concerned.
>> But being able to skip the B keys (which is what I expect you call the
>> non-A keys) on the host would certainly be useful.
> 
> Thanks for the clarification.
> 
>> 
>> I assume you have a way to hide them from userspace, though.
> 
> You mean hide all the keys from userspace, like below?
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 3e909b1..29cc74f 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1023,7 +1023,7 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>  static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
>                         const struct sys_reg_desc *rd)
>  {
> -       return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
> +       return vcpu_has_ptrauth(vcpu) ? REG_HIDDEN_USER : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
>  }
> 
>  #define __PTRAUTH_KEY(k)
> 
> I don't remember why it was not done this way last time.

No, that's not what I meant. What you're describing is preventing keys
from being exposed to the VMM controlling the guest, and that'd be
pretty bad (you need to be able to save/restore them for migration).

But if KVM doesn't save/restore the host's B-keys in the world switch,
then you must make sure that no host userspace can make use of them,
as they would be the guest's keys.

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v6 00/18] arm64: return address signing
  2020-03-12 15:05           ` [PATCH v6 00/18] arm64: return address signing Marc Zyngier
@ 2020-03-12 17:26             ` James Morse
  2020-03-12 17:31               ` Marc Zyngier
  0 siblings, 1 reply; 67+ messages in thread
From: James Morse @ 2020-03-12 17:26 UTC (permalink / raw)
  To: Marc Zyngier, Amit Kachhap
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Vincenzo Frascino, Will Deacon,
	Ard Biesheuvel, linux-arm-kernel

Hi Amit, Marc,

On 12/03/2020 15:05, Marc Zyngier wrote:
> On 2020-03-12 13:21, Amit Kachhap wrote:
>> On 3/12/20 6:17 PM, Marc Zyngier wrote:
>>> On 2020-03-12 08:06, Amit Kachhap wrote:
>>>> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>>>>> On 3/11/20 2:58 PM, James Morse wrote:
>>>>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>>>>> This series improves function return address protection for the arm64 kernel, by
>>>>>>> compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
>>>>>>> ptrauth hereafter). This should help protect the kernel against attacks using
>>>>>>> return-oriented programming.
>>>>>>
>>>>>> (as it looks like there may be another version of this:)
>>>>>>
>>>>>> Am I right in thinking that after your patch 10 changing
>>>>>> cpu_switch_to(), only the A key is live during kernel execution?
>>>>>
>>>>> Yes

>>>>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. As you
>>>>>> restore all the keys on return to user-space, is this still necessary?
>>>>>
>>>>> Yes. It's a good optimization to skip the 4 non-A keys. I was wondering whether to do
>>>>> it in this series or send it separately.
>>>>
>>>> I suppose we can only skip the non-A key save/restore for the host
>>>> context. If we skip the non-A keys for the guest context, then a guest
>>>> with an old implementation will break. Let me know your opinion.
>>>
>>> I don't think you can skip anything as far as the guest is concerned.
>>> But being able to skip the B keys (which is what I expect you call the
>>> non-A keys) on the host would certainly be useful.

> But if KVM doesn't save/restore the host's B-keys in the world switch,
> then you must make sure that no host userspace can make use of them,
> as they would be the guest's keys.

Yes, the arch code's entry.S changes cover this with ptrauth_keys_install_user.
It restores the 4 address keys and the generic key.


We always need to save/restore all the guest keys (as we do today).
But when ptrauth_restore_state restores the host keys, it only needs to
restore the one the kernel uses. (possibly using the same macros so it
stays up to date?!)
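
Something like this, as a sketch only — it reuses the flavour of this
series' ptrauth_keys_install_kernel; THREAD_KEYS_KERNEL and
PTRAUTH_KERNEL_KEY_APIA are the asm-offsets from this series, and x0 is
assumed to hold the host task pointer:

alternative_if ARM64_HAS_ADDRESS_AUTH
	mov	x1, #THREAD_KEYS_KERNEL
	add	x1, x0, x1
	ldp	x2, x3, [x1, #PTRAUTH_KERNEL_KEY_APIA]
	msr_s	SYS_APIAKEYLO_EL1, x2
	msr_s	SYS_APIAKEYHI_EL1, x3
	isb
alternative_else_nop_endif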

If we return to user-space, the arch code's entry code does the right thing.
KVM's user-space peeking at the keys will see the saved values.


My original question was more around: do we need to do this now, or can we clean it up in
a later kernel version?

(and a sanity check that it doesn't lead to a correctness problem)



Thanks,

James


* Re: [PATCH v6 00/18] arm64: return address signing
  2020-03-12 17:26             ` James Morse
@ 2020-03-12 17:31               ` Marc Zyngier
  0 siblings, 0 replies; 67+ messages in thread
From: Marc Zyngier @ 2020-03-12 17:31 UTC (permalink / raw)
  To: James Morse
  Cc: Mark Rutland, Kees Cook, Suzuki K Poulose, Catalin Marinas,
	Kristina Martsenko, Dave Martin, Mark Brown,
	Ramana Radhakrishnan, Amit Kachhap, Vincenzo Frascino,
	Will Deacon, Ard Biesheuvel, linux-arm-kernel

Hi James,

On 2020-03-12 17:26, James Morse wrote:
> Hi Amit, Marc,
> 
> On 12/03/2020 15:05, Marc Zyngier wrote:
>> On 2020-03-12 13:21, Amit Kachhap wrote:
>>> On 3/12/20 6:17 PM, Marc Zyngier wrote:
>>>> On 2020-03-12 08:06, Amit Kachhap wrote:
>>>>> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>>>>>> On 3/11/20 2:58 PM, James Morse wrote:
>>>>>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>>>>>> This series improves function return address protection for the 
>>>>>>>> arm64 kernel, by
>>>>>>>> compiling the kernel with ARMv8.3 Pointer Authentication 
>>>>>>>> instructions (referred
>>>>>>>> ptrauth hereafter). This should help protect the kernel against 
>>>>>>>> attacks using
>>>>>>>> return-oriented programming.
>>>>>>> 
>>>>>>> (as it looks like there may be another version of this:)
>>>>>>> 
>>>>>>> Am I right in thinking that after your patch 10 changing
>>>>>>> cpu_switch_to(), only the A key is live during kernel execution?
>>>>>> 
>>>>>> Yes
> 
>>>>>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. 
>>>>>>> As you
>>>>>>> restore all the keys on return to user-space, is this still 
>>>>>>> necessary?
>>>>>> 
>>>>>> Yes. It's a good optimization to skip the 4 non-A keys. I was
>>>>>> wondering whether to do it in this series or send it separately.
>>>>> 
>>>>> I suppose we can only skip the non-A key save/restore for the host
>>>>> context. If we skip the non-A keys for the guest context, then a
>>>>> guest with an old implementation will break. Let me know your opinion.
>>>> 
>>>> I don't think you can skip anything as far as the guest is 
>>>> concerned.
>>>> But being able to skip the B keys (which is what I expect you call 
>>>> the
>>>> non-A keys) on the host would certainly be useful.
> 
>> But if KVM doesn't save/restore the host's B-keys in the world switch,
>> then you must make sure that no host userspace can make use of them,
>> as they would be the guest's keys.
> 
> Yes, the arch code's entry.S changes cover this with
> ptrauth_keys_install_user. It restores the 4 address keys and the
> generic key.
> 
> 
> We always need to save/restore all the guest keys (as we do today).
> But when ptrauth_restore_state restores the host keys, it only needs
> to restore the one the kernel uses. (possibly using the same macros
> so it stays up to date?!)
> 
> If we return to user-space, the arch code's entry code does the right 
> thing.
> KVM's user-space peeking at the keys will see the saved values.
> 
> 
> My original question was more around: do we need to do this now, or
> can we clean it up in
> a later kernel version?
> 
> (and a sanity check that it doesn't lead to a correctness problem)

I think what we have now is sane, and doesn't seem to lead to any
issue (at least that I can see). We can always optimize this at a
later point.

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...


end of thread (newest message: 2020-03-12 17:31 UTC)

Thread overview: 67+ messages
2020-03-06  6:35 [PATCH v6 00/18] arm64: return address signing Amit Daniel Kachhap
2020-03-06  6:35 ` [PATCH v6 01/18] arm64: cpufeature: Fix meta-capability cpufeature check Amit Daniel Kachhap
2020-03-10 10:59   ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 02/18] arm64: cpufeature: add pointer auth meta-capabilities Amit Daniel Kachhap
2020-03-10 11:18   ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 03/18] arm64: rename ptrauth key structures to be user-specific Amit Daniel Kachhap
2020-03-10 11:35   ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 04/18] arm64: install user ptrauth keys at kernel exit time Amit Daniel Kachhap
2020-03-06 19:07   ` James Morse
2020-03-10 11:48     ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 05/18] arm64: create macro to park cpu in an infinite loop Amit Daniel Kachhap
2020-03-10 12:02   ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 06/18] arm64: ptrauth: Add bootup/runtime flags for __cpu_setup Amit Daniel Kachhap
2020-03-06 19:07   ` James Morse
2020-03-09 17:04     ` Catalin Marinas
2020-03-10 12:14   ` Vincenzo Frascino
2020-03-11  9:28     ` Amit Kachhap
2020-03-06  6:35 ` [PATCH v6 07/18] arm64: cpufeature: Move cpu capability helpers inside C file Amit Daniel Kachhap
2020-03-10 12:20   ` Vincenzo Frascino
2020-03-10 12:53     ` Amit Kachhap
2020-03-11 10:50       ` Catalin Marinas
2020-03-11 11:44         ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 08/18] arm64: cpufeature: handle conflicts based on capability Amit Daniel Kachhap
2020-03-10 12:31   ` Vincenzo Frascino
2020-03-11 11:03     ` Catalin Marinas
2020-03-11 11:46       ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 09/18] arm64: enable ptrauth earlier Amit Daniel Kachhap
2020-03-10 15:45   ` Vincenzo Frascino
2020-03-11  6:26     ` Amit Kachhap
2020-03-11 10:26       ` Vincenzo Frascino
2020-03-11 10:46         ` Amit Kachhap
2020-03-11 10:49           ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 10/18] arm64: initialize and switch ptrauth kernel keys Amit Daniel Kachhap
2020-03-10 15:07   ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 11/18] arm64: initialize ptrauth keys for kernel booting task Amit Daniel Kachhap
2020-03-10 15:09   ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 12/18] arm64: mask PAC bits of __builtin_return_address Amit Daniel Kachhap
2020-03-06 19:07   ` James Morse
2020-03-09 12:27     ` Amit Kachhap
2020-03-06  6:35 ` [PATCH v6 13/18] arm64: unwind: strip PAC from kernel addresses Amit Daniel Kachhap
2020-03-09 19:03   ` James Morse
2020-03-10 12:28     ` Amit Kachhap
2020-03-10 17:37       ` James Morse
2020-03-11  6:07         ` Amit Kachhap
2020-03-11  9:09           ` James Morse
2020-03-06  6:35 ` [PATCH v6 14/18] arm64: __show_regs: strip PAC from lr in printk Amit Daniel Kachhap
2020-03-10 15:11   ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 15/18] arm64: suspend: restore the kernel ptrauth keys Amit Daniel Kachhap
2020-03-10 15:18   ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 16/18] kconfig: Add support for 'as-option' Amit Daniel Kachhap
2020-03-06  6:35   ` Amit Daniel Kachhap
2020-03-06 11:37   ` Masahiro Yamada
2020-03-06 11:37     ` Masahiro Yamada
2020-03-06 11:49     ` Vincenzo Frascino
2020-03-06 11:49       ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 17/18] arm64: compile the kernel with ptrauth return address signing Amit Daniel Kachhap
2020-03-10 15:20   ` Vincenzo Frascino
2020-03-06  6:35 ` [PATCH v6 18/18] lkdtm: arm64: test kernel pointer authentication Amit Daniel Kachhap
2020-03-10 15:59 ` [PATCH v6 00/18] arm64: return address signing Rémi Denis-Courmont
2020-03-11  9:28 ` James Morse
2020-03-12  6:53   ` Amit Kachhap
2020-03-12  8:06     ` Amit Kachhap
2020-03-12 12:47       ` [PATCH v6 00/18] (as long a Marc Zyngier
2020-03-12 13:21         ` Amit Kachhap
2020-03-12 15:05           ` [PATCH v6 00/18] arm64: return address signing Marc Zyngier
2020-03-12 17:26             ` James Morse
2020-03-12 17:31               ` Marc Zyngier
