Hi,

This series improves function return address protection for the arm64 kernel,
by compiling the kernel with ARMv8.3 Pointer Authentication instructions
(referred to as ptrauth hereafter). This should help protect the kernel
against attacks using return-oriented programming.

This series is based on v5.5-rc2.

High-level changes since v2 [1] (detailed changes are in individual patches):
 - Added support to generate randomness for the ptrauth keys of the early
   booting task on the primary core, as suggested by Ard.
 - Modified the lkdtm ptrauth test case to change the keys to cause a crash,
   instead of modifying the lr in the stack.
 - Resolved a clang compilation issue.
 - Re-positioned "arm64: rename ptrauth key structures to be user-specific"
   to reduce code churn.

This series does not implement a few things, and has known limitations:
 - kdump tools may need some rework to work with ptrauth. The kdump tools
   may need the ptrauth information to strip PAC bits.

Feedback welcome!

Thanks,
Amit Daniel

[1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2019-November/695089.html

Amit Daniel Kachhap (8):
  arm64: create macro to park cpu in an infinite loop
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: initialize ptrauth keys for kernel booting task
  arm64: mask PAC bits of __builtin_return_address
  arm64: __show_regs: strip PAC from lr in printk
  arm64: suspend: restore the kernel ptrauth keys
  arm64: kprobe: disable probe of ptrauth instruction
  lkdtm: arm64: test kernel pointer authentication

Kristina Martsenko (6):
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: rename ptrauth key structures to be user-specific
  arm64: install user ptrauth keys at kernel exit time
  arm64: enable ptrauth earlier
  arm64: initialize and switch ptrauth kernel keys
  arm64: compile the kernel with ptrauth return address signing

Mark Rutland (1):
  arm64: unwind: strip PAC from kernel addresses

Vincenzo Frascino (1):
  kconfig: Add support for 'as-option'

 arch/arm64/Kconfig                        | 27 +++++++++++-
 arch/arm64/Makefile                       | 11 +++++
 arch/arm64/include/asm/asm_pointer_auth.h | 59 ++++++++++++++++++++++++++
 arch/arm64/include/asm/compiler.h         | 20 +++++++++
 arch/arm64/include/asm/cpucaps.h          |  4 +-
 arch/arm64/include/asm/cpufeature.h       |  6 +--
 arch/arm64/include/asm/insn.h             | 13 +++---
 arch/arm64/include/asm/pointer_auth.h     | 54 ++++++++++++------------
 arch/arm64/include/asm/processor.h        |  3 +-
 arch/arm64/include/asm/smp.h              | 10 +++++
 arch/arm64/include/asm/stackprotector.h   |  5 +++
 arch/arm64/kernel/asm-offsets.c           | 16 +++++++
 arch/arm64/kernel/cpufeature.c            | 30 ++++++++++----
 arch/arm64/kernel/entry.S                 |  6 +++
 arch/arm64/kernel/head.S                  | 47 +++++++++++++++------
 arch/arm64/kernel/insn.c                  |  1 +
 arch/arm64/kernel/pointer_auth.c          |  7 +---
 arch/arm64/kernel/probes/decode-insn.c    |  2 +-
 arch/arm64/kernel/process.c               |  5 ++-
 arch/arm64/kernel/ptrace.c                | 16 +++----
 arch/arm64/kernel/sleep.S                 |  8 ++++
 arch/arm64/kernel/smp.c                   | 10 +++++
 arch/arm64/kernel/stacktrace.c            |  3 ++
 arch/arm64/mm/proc.S                      | 69 ++++++++++++++++++++++++++-----
 drivers/misc/lkdtm/bugs.c                 | 36 ++++++++++++++++
 drivers/misc/lkdtm/core.c                 |  1 +
 drivers/misc/lkdtm/lkdtm.h                |  1 +
 include/linux/stackprotector.h            |  2 +-
 scripts/Kconfig.include                   |  4 ++
 29 files changed, 388 insertions(+), 88 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
 create mode 100644 arch/arm64/include/asm/compiler.h

-- 
2.7.4

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
From: Kristina Martsenko <kristina.martsenko@arm.com>

To enable pointer auth for the kernel, we're going to need to check for the
presence of address auth and generic auth using alternative_if. We currently
have two cpucaps for each, but alternative_if needs to check a single
cpucap. So define meta-capabilities that are present when either of the
current two capabilities is present.

Leave the existing four cpucaps in place, as they are still needed to check
for mismatched systems where one CPU has the architected algorithm but
another has the IMP DEF algorithm.

Note, the meta-capabilities were present before but were removed in commit
a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth CPU caps
from 6 to 4") and commit 1e013d06120c ("arm64: cpufeature: Rework ptr auth
hwcaps using multi_entry_cap_matches"), as they were not needed then. Note,
unlike before, the current patch checks the cpucap values directly, instead
of reading the CPU ID register value.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit message and macro rebase]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
 * Macro number.
 arch/arm64/include/asm/cpucaps.h    |  4 +++-
 arch/arm64/include/asm/cpufeature.h |  6 ++----
 arch/arm64/kernel/cpufeature.c      | 25 ++++++++++++++++++++++++-
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index b926838..6674ad1 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -56,7 +56,9 @@
 #define ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM	46
 #define ARM64_WORKAROUND_1542419		47
 #define ARM64_WORKAROUND_1319367		48
+#define ARM64_HAS_ADDRESS_AUTH			49
+#define ARM64_HAS_GENERIC_AUTH			50
 
-#define ARM64_NCAPS				49
+#define ARM64_NCAPS				51
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 4261d55..c4e27a1 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -590,15 +590,13 @@ static inline bool system_supports_cnp(void)
 static inline bool system_supports_address_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
 }
 
 static inline bool system_supports_generic_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
 static inline bool system_uses_irq_prio_masking(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 04cf64e..cf42c46 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1249,6 +1249,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
 				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
 }
+
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+}
+
+static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
+}
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
@@ -1518,7 +1532,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -1529,6 +1542,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
 		.matches = has_cpuid_feature,
+	},
+	{
+		.capability = ARM64_HAS_ADDRESS_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_address_auth,
 		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
@@ -1551,6 +1569,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
 		.matches = has_cpuid_feature,
 	},
+	{
+		.capability = ARM64_HAS_GENERIC_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_generic_auth,
+	},
 #endif /* CONFIG_ARM64_PTR_AUTH */
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	{
-- 
2.7.4
From: Kristina Martsenko <kristina.martsenko@arm.com>

We currently enable ptrauth for userspace, but do not use it within the
kernel. We're going to enable it for the kernel, and will need to manage a
separate set of ptrauth keys for the kernel.

We currently keep all 5 keys in struct ptrauth_keys. However, as the kernel
will only need to use 1 key, it is a bit wasteful to allocate a whole
ptrauth_keys struct for every thread. Therefore, a subsequent patch will
define a separate struct, with only 1 key, for the kernel.

In preparation for that, rename the existing struct (and associated macros
and functions) to reflect that they are specific to userspace.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-positioned the patch to reduce the diff]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
 * This patch is now positioned before the patch "arm64: install user
   ptrauth keys at kernel exit time". It was also suggested by Ard to
   reduce the diff in the patch [1].

[1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2019-November/695915.html

 arch/arm64/include/asm/pointer_auth.h | 12 ++++++------
 arch/arm64/include/asm/processor.h    |  2 +-
 arch/arm64/kernel/pointer_auth.c      |  8 ++++----
 arch/arm64/kernel/ptrace.c            | 16 ++++++++--------
 4 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 7a24bad..799b079 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -22,7 +22,7 @@ struct ptrauth_key {
  * We give each process its own keys, which are shared by all threads. The keys
  * are inherited upon fork(), and reinitialised upon exec*().
*/ -struct ptrauth_keys { +struct ptrauth_keys_user { struct ptrauth_key apia; struct ptrauth_key apib; struct ptrauth_key apda; @@ -30,7 +30,7 @@ struct ptrauth_keys { struct ptrauth_key apga; }; -static inline void ptrauth_keys_init(struct ptrauth_keys *keys) +static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys) { if (system_supports_address_auth()) { get_random_bytes(&keys->apia, sizeof(keys->apia)); @@ -50,7 +50,7 @@ do { \ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \ } while (0) -static inline void ptrauth_keys_switch(struct ptrauth_keys *keys) +static inline void ptrauth_keys_switch_user(struct ptrauth_keys_user *keys) { if (system_supports_address_auth()) { __ptrauth_key_install(APIA, keys->apia); @@ -80,12 +80,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) #define ptrauth_thread_init_user(tsk) \ do { \ struct task_struct *__ptiu_tsk = (tsk); \ - ptrauth_keys_init(&__ptiu_tsk->thread.keys_user); \ - ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user); \ + ptrauth_keys_init_user(&__ptiu_tsk->thread.keys_user); \ + ptrauth_keys_switch_user(&__ptiu_tsk->thread.keys_user); \ } while (0) #define ptrauth_thread_switch(tsk) \ - ptrauth_keys_switch(&(tsk)->thread.keys_user) + ptrauth_keys_switch_user(&(tsk)->thread.keys_user) #else /* CONFIG_ARM64_PTR_AUTH */ #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 5ba6320..496a928 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -146,7 +146,7 @@ struct thread_struct { unsigned long fault_code; /* ESR_EL1 value */ struct debug_info debug; /* debugging */ #ifdef CONFIG_ARM64_PTR_AUTH - struct ptrauth_keys keys_user; + struct ptrauth_keys_user keys_user; #endif }; diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c index c507b58..af5a638 100644 --- a/arch/arm64/kernel/pointer_auth.c +++ 
b/arch/arm64/kernel/pointer_auth.c @@ -9,7 +9,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) { - struct ptrauth_keys *keys = &tsk->thread.keys_user; + struct ptrauth_keys_user *keys = &tsk->thread.keys_user; unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY; unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY; @@ -18,8 +18,8 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) return -EINVAL; if (!arg) { - ptrauth_keys_init(keys); - ptrauth_keys_switch(keys); + ptrauth_keys_init_user(keys); + ptrauth_keys_switch_user(keys); return 0; } @@ -41,7 +41,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) if (arg & PR_PAC_APGAKEY) get_random_bytes(&keys->apga, sizeof(keys->apga)); - ptrauth_keys_switch(keys); + ptrauth_keys_switch_user(keys); return 0; } diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 6771c39..ddaff0f 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -986,7 +986,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey) } static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys, - const struct ptrauth_keys *keys) + const struct ptrauth_keys_user *keys) { ukeys->apiakey = pac_key_to_user(&keys->apia); ukeys->apibkey = pac_key_to_user(&keys->apib); @@ -994,7 +994,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys, ukeys->apdbkey = pac_key_to_user(&keys->apdb); } -static void pac_address_keys_from_user(struct ptrauth_keys *keys, +static void pac_address_keys_from_user(struct ptrauth_keys_user *keys, const struct user_pac_address_keys *ukeys) { keys->apia = pac_key_from_user(ukeys->apiakey); @@ -1008,7 +1008,7 @@ static int pac_address_keys_get(struct task_struct *target, unsigned int pos, unsigned int count, void *kbuf, void __user *ubuf) { - struct ptrauth_keys *keys = &target->thread.keys_user; + struct ptrauth_keys_user 
*keys = &target->thread.keys_user; struct user_pac_address_keys user_keys; if (!system_supports_address_auth()) @@ -1025,7 +1025,7 @@ static int pac_address_keys_set(struct task_struct *target, unsigned int pos, unsigned int count, const void *kbuf, const void __user *ubuf) { - struct ptrauth_keys *keys = &target->thread.keys_user; + struct ptrauth_keys_user *keys = &target->thread.keys_user; struct user_pac_address_keys user_keys; int ret; @@ -1043,12 +1043,12 @@ static int pac_address_keys_set(struct task_struct *target, } static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys, - const struct ptrauth_keys *keys) + const struct ptrauth_keys_user *keys) { ukeys->apgakey = pac_key_to_user(&keys->apga); } -static void pac_generic_keys_from_user(struct ptrauth_keys *keys, +static void pac_generic_keys_from_user(struct ptrauth_keys_user *keys, const struct user_pac_generic_keys *ukeys) { keys->apga = pac_key_from_user(ukeys->apgakey); @@ -1059,7 +1059,7 @@ static int pac_generic_keys_get(struct task_struct *target, unsigned int pos, unsigned int count, void *kbuf, void __user *ubuf) { - struct ptrauth_keys *keys = &target->thread.keys_user; + struct ptrauth_keys_user *keys = &target->thread.keys_user; struct user_pac_generic_keys user_keys; if (!system_supports_generic_auth()) @@ -1076,7 +1076,7 @@ static int pac_generic_keys_set(struct task_struct *target, unsigned int pos, unsigned int count, const void *kbuf, const void __user *ubuf) { - struct ptrauth_keys *keys = &target->thread.keys_user; + struct ptrauth_keys_user *keys = &target->thread.keys_user; struct user_pac_generic_keys user_keys; int ret; -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
From: Kristina Martsenko <kristina.martsenko@arm.com>

We're going to enable pointer auth within the kernel and use a different
APIAKey for the kernel itself, so move the user APIAKey switch to EL0
exception return.

The other 4 keys could remain switched during task switch, but are also
moved to keep things consistent.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit msg, re-positioned the patch]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
 * This patch is now placed after "arm64: rename ptrauth key structures to
   be user-specific" to avoid unnecessary code churn.

 arch/arm64/include/asm/asm_pointer_auth.h | 45 +++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 23 +---------------
 arch/arm64/kernel/asm-offsets.c           | 11 ++++++++
 arch/arm64/kernel/entry.S                 |  3 +++
 arch/arm64/kernel/pointer_auth.c          |  3 ---
 arch/arm64/kernel/process.c               |  1 -
 6 files changed, 60 insertions(+), 26 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
new file mode 100644
index 0000000..3d39788
--- /dev/null
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_POINTER_AUTH_H
+#define __ASM_ASM_POINTER_AUTH_H
+
+#include <asm/alternative.h>
+#include <asm/asm-offsets.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_USER
+	add	\tmp1, \tsk, \tmp1
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+	b	.Laddr_auth_skip_\@
+alternative_else_nop_endif
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
+	msr_s	SYS_APIBKEYLO_EL1, \tmp2
+
msr_s SYS_APIBKEYHI_EL1, \tmp3 + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA] + msr_s SYS_APDAKEYLO_EL1, \tmp2 + msr_s SYS_APDAKEYHI_EL1, \tmp3 + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB] + msr_s SYS_APDBKEYLO_EL1, \tmp2 + msr_s SYS_APDBKEYHI_EL1, \tmp3 +.Laddr_auth_skip_\@: +alternative_if ARM64_HAS_GENERIC_AUTH + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA] + msr_s SYS_APGAKEYLO_EL1, \tmp2 + msr_s SYS_APGAKEYHI_EL1, \tmp3 +alternative_else_nop_endif + .endm + +#else /* CONFIG_ARM64_PTR_AUTH */ + + .macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3 + .endm + +#endif /* CONFIG_ARM64_PTR_AUTH */ + +#endif /* __ASM_ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h index 799b079..dabe026 100644 --- a/arch/arm64/include/asm/pointer_auth.h +++ b/arch/arm64/include/asm/pointer_auth.h @@ -50,19 +50,6 @@ do { \ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \ } while (0) -static inline void ptrauth_keys_switch_user(struct ptrauth_keys_user *keys) -{ - if (system_supports_address_auth()) { - __ptrauth_key_install(APIA, keys->apia); - __ptrauth_key_install(APIB, keys->apib); - __ptrauth_key_install(APDA, keys->apda); - __ptrauth_key_install(APDB, keys->apdb); - } - - if (system_supports_generic_auth()) - __ptrauth_key_install(APGA, keys->apga); -} - extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg); /* @@ -78,20 +65,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) } #define ptrauth_thread_init_user(tsk) \ -do { \ - struct task_struct *__ptiu_tsk = (tsk); \ - ptrauth_keys_init_user(&__ptiu_tsk->thread.keys_user); \ - ptrauth_keys_switch_user(&__ptiu_tsk->thread.keys_user); \ -} while (0) - -#define ptrauth_thread_switch(tsk) \ - ptrauth_keys_switch_user(&(tsk)->thread.keys_user) + ptrauth_keys_init_user(&(tsk)->thread.keys_user) #else /* CONFIG_ARM64_PTR_AUTH */ #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) #define 
ptrauth_strip_insn_pac(lr) (lr) #define ptrauth_thread_init_user(tsk) -#define ptrauth_thread_switch(tsk) #endif /* CONFIG_ARM64_PTR_AUTH */ #endif /* __ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index a5bdce8..7b1ea2a 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -40,6 +40,9 @@ int main(void) #endif BLANK(); DEFINE(THREAD_CPU_CONTEXT, offsetof(struct task_struct, thread.cpu_context)); +#ifdef CONFIG_ARM64_PTR_AUTH + DEFINE(THREAD_KEYS_USER, offsetof(struct task_struct, thread.keys_user)); +#endif BLANK(); DEFINE(S_X0, offsetof(struct pt_regs, regs[0])); DEFINE(S_X2, offsetof(struct pt_regs, regs[2])); @@ -128,5 +131,13 @@ int main(void) DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs)); DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority)); #endif +#ifdef CONFIG_ARM64_PTR_AUTH + DEFINE(PTRAUTH_USER_KEY_APIA, offsetof(struct ptrauth_keys_user, apia)); + DEFINE(PTRAUTH_USER_KEY_APIB, offsetof(struct ptrauth_keys_user, apib)); + DEFINE(PTRAUTH_USER_KEY_APDA, offsetof(struct ptrauth_keys_user, apda)); + DEFINE(PTRAUTH_USER_KEY_APDB, offsetof(struct ptrauth_keys_user, apdb)); + DEFINE(PTRAUTH_USER_KEY_APGA, offsetof(struct ptrauth_keys_user, apga)); + BLANK(); +#endif return 0; } diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 7c6a0a4..18067bb 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -14,6 +14,7 @@ #include <asm/alternative.h> #include <asm/assembler.h> #include <asm/asm-offsets.h> +#include <asm/asm_pointer_auth.h> #include <asm/cpufeature.h> #include <asm/errno.h> #include <asm/esr.h> @@ -344,6 +345,8 @@ alternative_else_nop_endif msr cntkctl_el1, x1 4: #endif + ptrauth_keys_install_user tsk, x0, x1, x2 + apply_ssbd 0, x0, x1 .endif diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c index af5a638..1e77736 100644 --- 
a/arch/arm64/kernel/pointer_auth.c +++ b/arch/arm64/kernel/pointer_auth.c @@ -19,7 +19,6 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) if (!arg) { ptrauth_keys_init_user(keys); - ptrauth_keys_switch_user(keys); return 0; } @@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) if (arg & PR_PAC_APGAKEY) get_random_bytes(&keys->apga, sizeof(keys->apga)); - ptrauth_keys_switch_user(keys); - return 0; } diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 71f788c..3716528 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -505,7 +505,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, contextidr_thread_switch(next); entry_task_switch(next); uao_thread_switch(next); - ptrauth_thread_switch(next); ssbs_thread_switch(next); /* -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
A macro, early_park_cpu, is added to park a faulting cpu in an infinite
loop. This macro currently replaces the open-coded loop in two places, and
is reused in a subsequent pointer authentication patch.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
 * None.

 arch/arm64/kernel/head.S | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 989b194..3d18163 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -761,6 +761,17 @@ ENDPROC(__secondary_too_slow)
 	.endm
 
 /*
+ * Macro to park the cpu in an infinite loop.
+ */
+	.macro early_park_cpu status
+	update_early_cpu_boot_status \status | CPU_STUCK_IN_KERNEL, x1, x2
+.Lepc_\@:
+	wfe
+	wfi
+	b	.Lepc_\@
+	.endm
+
+/*
  * Enable the MMU.
  *
  * x0  = SCTLR_EL1 value for turning on the MMU.
@@ -808,24 +819,14 @@ ENTRY(__cpu_secondary_check52bitva)
 	and	x0, x0, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
 	cbnz	x0, 2f
 
-	update_early_cpu_boot_status \
-		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_52_BIT_VA, x0, x1
-1:	wfe
-	wfi
-	b	1b
-
+	early_park_cpu CPU_STUCK_REASON_52_BIT_VA
 #endif
 2:	ret
 ENDPROC(__cpu_secondary_check52bitva)
 
 __no_granule_support:
 	/* Indicate that this CPU can't boot and is stuck in the kernel */
-	update_early_cpu_boot_status \
-		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_GRAN, x1, x2
-1:
-	wfe
-	wfi
-	b	1b
+	early_park_cpu CPU_STUCK_REASON_NO_GRAN
 ENDPROC(__no_granule_support)
 
 #ifdef CONFIG_RELOCATABLE
-- 
2.7.4
This patch allows __cpu_setup to be invoked with one of the following flags:
ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_LATE or ARM64_CPU_RUNTIME. This is
required as some cpufeatures need different handling in different
scenarios. The input parameter in x0 is preserved until the end, to be used
inside this function.

There should be no functional change with this patch; it is preparation for
the subsequent ptrauth patch, which utilizes these flags. Some upcoming arm
cpufeatures can also utilize them.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
 * None.

 arch/arm64/include/asm/smp.h |  5 +++++
 arch/arm64/kernel/head.S     |  2 ++
 arch/arm64/kernel/sleep.S    |  2 ++
 arch/arm64/mm/proc.S         | 26 +++++++++++++-----------
 4 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index a0c8a0b..008d004 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -23,6 +23,11 @@
 #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
 #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)
 
+/* Options for __cpu_setup */
+#define ARM64_CPU_BOOT_PRIMARY		(1)
+#define ARM64_CPU_BOOT_LATE		(2)
+#define ARM64_CPU_RUNTIME		(3)
+
 #ifndef __ASSEMBLY__
 
 #include <asm/percpu.h>
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3d18163..5aaf1bb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -118,6 +118,7 @@ ENTRY(stext)
 	 * On return, the CPU will be ready for the MMU to be turned on and
 	 * the TCR will have been set.
 	 */
+	mov	x0, #ARM64_CPU_BOOT_PRIMARY
 	bl	__cpu_setup			// initialise processor
 	b	__primary_switch
 ENDPROC(stext)
@@ -712,6 +713,7 @@ secondary_startup:
 	 * Common entry point for secondary CPUs.
 	 */
 	bl	__cpu_secondary_check52bitva
+	mov	x0, #ARM64_CPU_BOOT_LATE
 	bl	__cpu_setup			// initialise processor
 	adrp	x1, swapper_pg_dir
 	bl	__enable_mmu
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index f5b04dd..7b2f2e6 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -3,6 +3,7 @@
 #include <linux/linkage.h>
 #include <asm/asm-offsets.h>
 #include <asm/assembler.h>
+#include <asm/smp.h>
 
 	.text
 /*
@@ -99,6 +100,7 @@ ENDPROC(__cpu_suspend_enter)
 	.pushsection ".idmap.text", "awx"
 ENTRY(cpu_resume)
 	bl	el2_setup		// if in EL2 drop to EL1 cleanly
+	mov	x0, #ARM64_CPU_RUNTIME
 	bl	__cpu_setup
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	adrp	x1, swapper_pg_dir
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index a1e0592..88cf7e4 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -400,21 +400,25 @@ ENDPROC(idmap_kpti_install_ng_mappings)
 /*
  *	__cpu_setup
  *
- *	Initialise the processor for turning the MMU on.  Return in x0 the
- *	value of the SCTLR_EL1 register.
+ *	Initialise the processor for turning the MMU on.
+ *
+ * Input:
+ *	x0 with a flag ARM64_CPU_BOOT_PRIMARY/ARM64_CPU_BOOT_LATE/ARM64_CPU_RUNTIME.
+ * Output:
+ *	Return in x0 the value of the SCTLR_EL1 register.
 */
	.pushsection ".idmap.text", "awx"
 ENTRY(__cpu_setup)
 	tlbi	vmalle1				// Invalidate local TLB
 	dsb	nsh
 
-	mov	x0, #3 << 20
-	msr	cpacr_el1, x0			// Enable FP/ASIMD
-	mov	x0, #1 << 12			// Reset mdscr_el1 and disable
-	msr	mdscr_el1, x0			// access to the DCC from EL0
+	mov	x1, #3 << 20
+	msr	cpacr_el1, x1			// Enable FP/ASIMD
+	mov	x1, #1 << 12			// Reset mdscr_el1 and disable
+	msr	mdscr_el1, x1			// access to the DCC from EL0
 	isb					// Unmask debug exceptions now,
 	enable_dbg				// since this is per-cpu
-	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
+	reset_pmuserenr_el0 x1			// Disable PMU access from EL0
 	/*
 	 * Memory region attributes for LPAE:
 	 *
@@ -435,10 +439,6 @@ ENTRY(__cpu_setup)
 	MAIR(0xbb, MT_NORMAL_WT)
 	msr	mair_el1, x5
 	/*
-	 * Prepare SCTLR
-	 */
-	mov_q	x0, SCTLR_EL1_SET
-	/*
 	 * Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for
 	 * both user and kernel.
 	 */
@@ -474,5 +474,9 @@ ENTRY(__cpu_setup)
1:
 #endif	/* CONFIG_ARM64_HW_AFDBM */
 	msr	tcr_el1, x10
+	/*
+	 * Prepare SCTLR
+	 */
+	mov_q	x0, SCTLR_EL1_SET
 	ret					// return to head.S
ENDPROC(__cpu_setup)
-- 
2.7.4
From: Kristina Martsenko <kristina.martsenko@arm.com>

When the kernel is compiled with pointer auth instructions, the boot CPU
needs to start using address auth very early, so change the cpucap to
account for this. Pointer auth must be enabled before we call C functions,
because it is not possible to enter a function with pointer auth disabled
and exit it with pointer auth enabled.

Note, mismatches between architected and IMPDEF algorithms will still be
caught by the cpufeature framework (the separate *_ARCH and *_IMP_DEF
cpucaps).

Note the change in behavior: if the boot CPU has address auth and a late
CPU does not, then we park the late CPU very early in booting. Also, if the
boot CPU does not have address auth and a late CPU does, then a system
panic will occur a little later, from inside the C code. Until now we would
have just disabled address auth in this case.

Leave generic authentication as a "system scope" cpucap for now, since
initially the kernel will only use address authentication.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-worked ptrauth setup logic, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
 * None.

 arch/arm64/Kconfig             |  5 +++++
 arch/arm64/include/asm/smp.h   |  1 +
 arch/arm64/kernel/cpufeature.c | 13 +++----------
 arch/arm64/kernel/head.S       | 20 ++++++++++++++++++++
 arch/arm64/kernel/smp.c        |  2 ++
 arch/arm64/mm/proc.S           | 31 +++++++++++++++++++++++++++++++
 6 files changed, 62 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1b4476..5aabe8a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1482,6 +1482,11 @@ config ARM64_PTR_AUTH
 	  be enabled. However, KVM guest also require VHE mode and hence
 	  CONFIG_ARM64_VHE=y option to use this feature.
 
+	  If the feature is present on the primary CPU but not a secondary CPU,
+	  then the secondary CPU will be parked.
Also, if the boot CPU does not + have address auth and the late CPU has then system panic will occur. + On such a system, this option should not be selected. + endmenu config ARM64_SVE diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h index 008d004..ddb6d70 100644 --- a/arch/arm64/include/asm/smp.h +++ b/arch/arm64/include/asm/smp.h @@ -22,6 +22,7 @@ #define CPU_STUCK_REASON_52_BIT_VA (UL(1) << CPU_STUCK_REASON_SHIFT) #define CPU_STUCK_REASON_NO_GRAN (UL(2) << CPU_STUCK_REASON_SHIFT) +#define CPU_STUCK_REASON_NO_PTRAUTH (UL(4) << CPU_STUCK_REASON_SHIFT) /* Options for __cpu_setup */ #define ARM64_CPU_BOOT_PRIMARY (1) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index cf42c46..771c435 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1244,12 +1244,6 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused) #endif /* CONFIG_ARM64_RAS_EXTN */ #ifdef CONFIG_ARM64_PTR_AUTH -static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap) -{ - sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | - SCTLR_ELx_ENDA | SCTLR_ELx_ENDB); -} - static bool has_address_auth(const struct arm64_cpu_capabilities *entry, int __unused) { @@ -1526,7 +1520,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { { .desc = "Address authentication (architected algorithm)", .capability = ARM64_HAS_ADDRESS_AUTH_ARCH, - .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .type = ARM64_CPUCAP_SCOPE_BOOT_CPU, .sys_reg = SYS_ID_AA64ISAR1_EL1, .sign = FTR_UNSIGNED, .field_pos = ID_AA64ISAR1_APA_SHIFT, @@ -1536,7 +1530,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { { .desc = "Address authentication (IMP DEF algorithm)", .capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF, - .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .type = ARM64_CPUCAP_SCOPE_BOOT_CPU, .sys_reg = SYS_ID_AA64ISAR1_EL1, .sign = FTR_UNSIGNED, .field_pos = ID_AA64ISAR1_API_SHIFT, @@ -1545,9 
+1539,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = { }, { .capability = ARM64_HAS_ADDRESS_AUTH, - .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .type = ARM64_CPUCAP_SCOPE_BOOT_CPU, .matches = has_address_auth, - .cpu_enable = cpu_enable_address_auth, }, { .desc = "Generic authentication (architected algorithm)", diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 5aaf1bb..c59c28f 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -13,6 +13,7 @@ #include <linux/init.h> #include <linux/irqchip/arm-gic-v3.h> +#include <asm/alternative.h> #include <asm/assembler.h> #include <asm/boot.h> #include <asm/ptrace.h> @@ -713,6 +714,7 @@ secondary_startup: * Common entry point for secondary CPUs. */ bl __cpu_secondary_check52bitva + bl __cpu_secondary_checkptrauth mov x0, #ARM64_CPU_BOOT_LATE bl __cpu_setup // initialise processor adrp x1, swapper_pg_dir @@ -831,6 +833,24 @@ __no_granule_support: early_park_cpu CPU_STUCK_REASON_NO_GRAN ENDPROC(__no_granule_support) +ENTRY(__cpu_secondary_checkptrauth) +#ifdef CONFIG_ARM64_PTR_AUTH + /* Check if the CPU supports ptrauth */ + mrs x2, id_aa64isar1_el1 + ubfx x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8 + cbnz x2, 1f +alternative_if ARM64_HAS_ADDRESS_AUTH + mov x3, 1 +alternative_else + mov x3, 0 +alternative_endif + cbz x3, 1f + /* Park the mismatched secondary CPU */ + early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH +#endif +1: ret +ENDPROC(__cpu_secondary_checkptrauth) + #ifdef CONFIG_RELOCATABLE __relocate_kernel: /* diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index d4ed9a1..f2761a9 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -164,6 +164,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle) pr_crit("CPU%u: does not support 52-bit VAs\n", cpu); if (status & CPU_STUCK_REASON_NO_GRAN) pr_crit("CPU%u: does not support %luK granule \n", cpu, PAGE_SIZE / SZ_1K); + if (status & CPU_STUCK_REASON_NO_PTRAUTH) + pr_crit("CPU%u: does not support 
pointer authentication\n", cpu); cpus_stuck_in_kernel++; break; case CPU_PANIC_KERNEL: diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 88cf7e4..8734d99 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -16,6 +16,7 @@ #include <asm/pgtable-hwdef.h> #include <asm/cpufeature.h> #include <asm/alternative.h> +#include <asm/smp.h> #ifdef CONFIG_ARM64_64K_PAGES #define TCR_TG_FLAGS TCR_TG0_64K | TCR_TG1_64K @@ -474,9 +475,39 @@ ENTRY(__cpu_setup) 1: #endif /* CONFIG_ARM64_HW_AFDBM */ msr tcr_el1, x10 + mov x1, x0 /* * Prepare SCTLR */ mov_q x0, SCTLR_EL1_SET + +#ifdef CONFIG_ARM64_PTR_AUTH + /* No ptrauth setup for run time cpus */ + cmp x1, #ARM64_CPU_RUNTIME + b.eq 3f + + /* Check if the CPU supports ptrauth */ + mrs x2, id_aa64isar1_el1 + ubfx x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8 + cbz x2, 3f + + msr_s SYS_APIAKEYLO_EL1, xzr + msr_s SYS_APIAKEYHI_EL1, xzr + + /* Just enable ptrauth for primary cpu */ + cmp x1, #ARM64_CPU_BOOT_PRIMARY + b.eq 2f + + /* if !system_supports_address_auth() then skip enable */ +alternative_if_not ARM64_HAS_ADDRESS_AUTH + b 3f +alternative_else_nop_endif + +2: /* Enable ptrauth instructions */ + ldr x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \ + SCTLR_ELx_ENDA | SCTLR_ELx_ENDB + orr x0, x0, x2 +3: +#endif ret // return to head.S ENDPROC(__cpu_setup) -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
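The early secondary-CPU check added in head.S above boils down to a few bit operations on ID_AA64ISAR1_EL1. The following is an illustrative user-space model of that policy, not kernel code; only the field shifts are the architectural values, and the function names are made up for the sketch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Field positions in ID_AA64ISAR1_EL1 (architectural values). */
#define ID_AA64ISAR1_APA_SHIFT	4	/* address auth, architected algorithm */
#define ID_AA64ISAR1_API_SHIFT	8	/* address auth, IMP DEF algorithm */

/*
 * Mirrors the "ubfx x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8" above: one
 * 8-bit extract starting at APA covers both the APA and API fields,
 * so a non-zero result means some form of address auth is present.
 */
static bool cpu_has_address_auth(uint64_t id_aa64isar1)
{
	return ((id_aa64isar1 >> ID_AA64ISAR1_APA_SHIFT) & 0xff) != 0;
}

/*
 * The late-CPU policy from this patch: if the boot CPU enabled address
 * auth (the ARM64_HAS_ADDRESS_AUTH alternative was applied) but this
 * secondary lacks the feature, park it with CPU_STUCK_REASON_NO_PTRAUTH.
 */
static bool must_park_secondary(bool boot_cpu_has_auth, uint64_t secondary_isar1)
{
	return boot_cpu_has_auth && !cpu_has_address_auth(secondary_isar1);
}
```

A mismatched secondary (boot CPU with address auth, secondary without) is the only combination that parks; the reverse mismatch panics later from C code, as the commit message notes.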
From: Kristina Martsenko <kristina.martsenko@arm.com> Set up keys to use pointer authentication within the kernel. The kernel will be compiled with APIAKey instructions, the other keys are currently unused. Each task is given its own APIAKey, which is initialized during fork. The key is changed during context switch and on kernel entry from EL0. The keys for idle threads need to be set before calling any C functions, because it is not possible to enter and exit a function with different keys. Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> [Amit: Modified secondary cores key structure, comments] Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> --- Changes since last version: * Moved 2 instructions inside alternative_if as pointed by Richard. arch/arm64/include/asm/asm_pointer_auth.h | 14 ++++++++++++++ arch/arm64/include/asm/pointer_auth.h | 13 +++++++++++++ arch/arm64/include/asm/processor.h | 1 + arch/arm64/include/asm/smp.h | 4 ++++ arch/arm64/kernel/asm-offsets.c | 5 +++++ arch/arm64/kernel/entry.S | 3 +++ arch/arm64/kernel/process.c | 2 ++ arch/arm64/kernel/smp.c | 8 ++++++++ arch/arm64/mm/proc.S | 12 ++++++++++++ 9 files changed, 62 insertions(+) diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h index 3d39788..c5fbbf5 100644 --- a/arch/arm64/include/asm/asm_pointer_auth.h +++ b/arch/arm64/include/asm/asm_pointer_auth.h @@ -35,11 +35,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH alternative_else_nop_endif .endm + .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3 +alternative_if ARM64_HAS_ADDRESS_AUTH + mov \tmp1, #THREAD_KEYS_KERNEL + add \tmp1, \tsk, \tmp1 + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA] + msr_s SYS_APIAKEYLO_EL1, \tmp2 + msr_s SYS_APIAKEYHI_EL1, \tmp3 + isb +alternative_else_nop_endif + .endm + #else /* CONFIG_ARM64_PTR_AUTH */ .macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3 .endm + .macro 
ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3 + .endm + #endif /* CONFIG_ARM64_PTR_AUTH */ #endif /* __ASM_ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h index dabe026..aa956ca 100644 --- a/arch/arm64/include/asm/pointer_auth.h +++ b/arch/arm64/include/asm/pointer_auth.h @@ -30,6 +30,10 @@ struct ptrauth_keys_user { struct ptrauth_key apga; }; +struct ptrauth_keys_kernel { + struct ptrauth_key apia; +}; + static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys) { if (system_supports_address_auth()) { @@ -50,6 +54,12 @@ do { \ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \ } while (0) +static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys) +{ + if (system_supports_address_auth()) + get_random_bytes(&keys->apia, sizeof(keys->apia)); +} + extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg); /* @@ -66,11 +76,14 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) #define ptrauth_thread_init_user(tsk) \ ptrauth_keys_init_user(&(tsk)->thread.keys_user) +#define ptrauth_thread_init_kernel(tsk) \ + ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel) #else /* CONFIG_ARM64_PTR_AUTH */ #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) #define ptrauth_strip_insn_pac(lr) (lr) #define ptrauth_thread_init_user(tsk) +#define ptrauth_thread_init_kernel(tsk) #endif /* CONFIG_ARM64_PTR_AUTH */ #endif /* __ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 496a928..4c77da5 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -147,6 +147,7 @@ struct thread_struct { struct debug_info debug; /* debugging */ #ifdef CONFIG_ARM64_PTR_AUTH struct ptrauth_keys_user keys_user; + struct ptrauth_keys_kernel keys_kernel; #endif }; diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h index ddb6d70..8664ec4 100644 
--- a/arch/arm64/include/asm/smp.h +++ b/arch/arm64/include/asm/smp.h @@ -36,6 +36,7 @@ #include <linux/threads.h> #include <linux/cpumask.h> #include <linux/thread_info.h> +#include <asm/pointer_auth.h> DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number); @@ -93,6 +94,9 @@ asmlinkage void secondary_start_kernel(void); struct secondary_data { void *stack; struct task_struct *task; +#ifdef CONFIG_ARM64_PTR_AUTH + struct ptrauth_keys_kernel ptrauth_key; +#endif long status; }; diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 7b1ea2a..9981a0a 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -42,6 +42,7 @@ int main(void) DEFINE(THREAD_CPU_CONTEXT, offsetof(struct task_struct, thread.cpu_context)); #ifdef CONFIG_ARM64_PTR_AUTH DEFINE(THREAD_KEYS_USER, offsetof(struct task_struct, thread.keys_user)); + DEFINE(THREAD_KEYS_KERNEL, offsetof(struct task_struct, thread.keys_kernel)); #endif BLANK(); DEFINE(S_X0, offsetof(struct pt_regs, regs[0])); @@ -91,6 +92,9 @@ int main(void) BLANK(); DEFINE(CPU_BOOT_STACK, offsetof(struct secondary_data, stack)); DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task)); +#ifdef CONFIG_ARM64_PTR_AUTH + DEFINE(CPU_BOOT_PTRAUTH_KEY, offsetof(struct secondary_data, ptrauth_key)); +#endif BLANK(); #ifdef CONFIG_KVM_ARM_HOST DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt)); @@ -137,6 +141,7 @@ int main(void) DEFINE(PTRAUTH_USER_KEY_APDA, offsetof(struct ptrauth_keys_user, apda)); DEFINE(PTRAUTH_USER_KEY_APDB, offsetof(struct ptrauth_keys_user, apdb)); DEFINE(PTRAUTH_USER_KEY_APGA, offsetof(struct ptrauth_keys_user, apga)); + DEFINE(PTRAUTH_KERNEL_KEY_APIA, offsetof(struct ptrauth_keys_kernel, apia)); BLANK(); #endif return 0; diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 18067bb..5e6ebcc 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -174,6 +174,7 @@ alternative_cb_end apply_ssbd 1, x22, x23 + 
ptrauth_keys_install_kernel tsk, x20, x22, x23 .else add x21, sp, #S_FRAME_SIZE get_current_task tsk @@ -345,6 +346,7 @@ alternative_else_nop_endif msr cntkctl_el1, x1 4: #endif + /* No kernel C function calls after this as user keys are set. */ ptrauth_keys_install_user tsk, x0, x1, x2 apply_ssbd 0, x0, x1 @@ -898,6 +900,7 @@ ENTRY(cpu_switch_to) ldr lr, [x8] mov sp, x9 msr sp_el0, x1 + ptrauth_keys_install_kernel x1, x8, x9, x10 ret ENDPROC(cpu_switch_to) NOKPROBE(cpu_switch_to) diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 3716528..0d4a3b8 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -376,6 +376,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, */ fpsimd_flush_task_state(p); + ptrauth_thread_init_kernel(p); + if (likely(!(p->flags & PF_KTHREAD))) { *childregs = *current_pt_regs(); childregs->regs[0] = 0; diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index f2761a9..3fa0fbf 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -112,6 +112,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle) */ secondary_data.task = idle; secondary_data.stack = task_stack_page(idle) + THREAD_SIZE; +#if defined(CONFIG_ARM64_PTR_AUTH) + secondary_data.ptrauth_key.apia.lo = idle->thread.keys_kernel.apia.lo; + secondary_data.ptrauth_key.apia.hi = idle->thread.keys_kernel.apia.hi; +#endif update_cpu_boot_status(CPU_MMU_OFF); __flush_dcache_area(&secondary_data, sizeof(secondary_data)); @@ -138,6 +142,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle) secondary_data.task = NULL; secondary_data.stack = NULL; +#if defined(CONFIG_ARM64_PTR_AUTH) + secondary_data.ptrauth_key.apia.lo = 0; + secondary_data.ptrauth_key.apia.hi = 0; +#endif __flush_dcache_area(&secondary_data, sizeof(secondary_data)); status = READ_ONCE(secondary_data.status); if (ret && status) { diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 8734d99..bae8584 100644 --- 
a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -491,6 +491,10 @@ ENTRY(__cpu_setup) ubfx x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8 cbz x2, 3f + /* + * The primary cpu keys are reset here and can be + * re-initialised with some proper values later. + */ msr_s SYS_APIAKEYLO_EL1, xzr msr_s SYS_APIAKEYHI_EL1, xzr @@ -503,6 +507,14 @@ alternative_if_not ARM64_HAS_ADDRESS_AUTH b 3f alternative_else_nop_endif + /* Install ptrauth key for secondary cpus */ + adr_l x2, secondary_data + ldr x3, [x2, #CPU_BOOT_TASK] // get secondary_data.task + cbz x3, 2f // check for slow booting cpus + ldp x3, x4, [x2, #CPU_BOOT_PTRAUTH_KEY] + msr_s SYS_APIAKEYLO_EL1, x3 + msr_s SYS_APIAKEYHI_EL1, x4 + 2: /* Enable ptrauth instructions */ ldr x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \ SCTLR_ELx_ENDA | SCTLR_ELx_ENDB -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
This patch uses the existing boot_init_stack_canary arch function to initialize the ptrauth keys for the booting task in the primary core. This function is required to be always inline, and its caller must never return. Since pointer authentication also detects a subset of stack corruptions, it makes sense to place this code here. Both the pointer authentication and stack canary code paths are guarded by their respective config options. Suggested-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> --- Changes since last version: * New patch. arch/arm64/include/asm/pointer_auth.h | 9 +++++++++ arch/arm64/include/asm/stackprotector.h | 5 +++++ include/linux/stackprotector.h | 2 +- 3 files changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h index aa956ca..0f89f59 100644 --- a/arch/arm64/include/asm/pointer_auth.h +++ b/arch/arm64/include/asm/pointer_auth.h @@ -60,6 +60,12 @@ static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys) get_random_bytes(&keys->apia, sizeof(keys->apia)); } +static inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys) +{ + if (system_supports_address_auth()) + __ptrauth_key_install(APIA, keys->apia); +} + extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg); /* @@ -78,12 +84,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) ptrauth_keys_init_user(&(tsk)->thread.keys_user) #define ptrauth_thread_init_kernel(tsk) \ ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel) +#define ptrauth_thread_switch_kernel(tsk) \ + ptrauth_keys_switch_kernel(&(tsk)->thread.keys_kernel) #else /* CONFIG_ARM64_PTR_AUTH */ #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) #define ptrauth_strip_insn_pac(lr) (lr) #define ptrauth_thread_init_user(tsk) #define ptrauth_thread_init_kernel(tsk) +#define ptrauth_thread_switch_kernel(tsk) #endif /*
CONFIG_ARM64_PTR_AUTH */ #endif /* __ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/include/asm/stackprotector.h b/arch/arm64/include/asm/stackprotector.h index 5884a2b..7263e0b 100644 --- a/arch/arm64/include/asm/stackprotector.h +++ b/arch/arm64/include/asm/stackprotector.h @@ -15,6 +15,7 @@ #include <linux/random.h> #include <linux/version.h> +#include <asm/pointer_auth.h> extern unsigned long __stack_chk_guard; @@ -26,6 +27,7 @@ extern unsigned long __stack_chk_guard; */ static __always_inline void boot_init_stack_canary(void) { +#if defined(CONFIG_STACKPROTECTOR) unsigned long canary; /* Try to get a semi random initial value. */ @@ -36,6 +38,9 @@ static __always_inline void boot_init_stack_canary(void) current->stack_canary = canary; if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK)) __stack_chk_guard = current->stack_canary; +#endif + ptrauth_thread_init_kernel(current); + ptrauth_thread_switch_kernel(current); } #endif /* _ASM_STACKPROTECTOR_H */ diff --git a/include/linux/stackprotector.h b/include/linux/stackprotector.h index 6b792d0..4c678c4 100644 --- a/include/linux/stackprotector.h +++ b/include/linux/stackprotector.h @@ -6,7 +6,7 @@ #include <linux/sched.h> #include <linux/random.h> -#ifdef CONFIG_STACKPROTECTOR +#if defined(CONFIG_STACKPROTECTOR) || defined(CONFIG_ARM64_PTR_AUTH) # include <asm/stackprotector.h> #else static inline void boot_init_stack_canary(void) -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
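The reason boot_init_stack_canary must be always-inline and its caller must never return applies equally to the ptrauth key setup: a return address signed with the old key cannot be authenticated after the key changes. A small user-space model makes this concrete; the toy hash below stands in for the real QARMA-based PAC, and all names here are illustrative, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

#define PAC_MASK 0xff00000000000000ull	/* toy: PAC lives in the top byte */

/*
 * Toy stand-in for the PAC computation. The real PAC depends on the
 * pointer, a modifier (SP) and a 128-bit key; a simple mix is enough
 * to show the prologue/epilogue pairing.
 */
static uint64_t toy_pac(uint64_t ptr, uint64_t sp, uint64_t key)
{
	return (((ptr ^ sp) * 0x9e3779b97f4a7c15ull) ^ key) & PAC_MASK;
}

static uint64_t sign_lr(uint64_t lr, uint64_t sp, uint64_t key)	/* ~PACIASP */
{
	return (lr & ~PAC_MASK) | toy_pac(lr & ~PAC_MASK, sp, key);
}

static uint64_t auth_lr(uint64_t lr, uint64_t sp, uint64_t key)	/* ~AUTIASP */
{
	uint64_t plain = lr & ~PAC_MASK;

	if (toy_pac(plain, sp, key) == (lr & PAC_MASK))
		return plain;
	return plain | (1ull << 62);	/* poison bit: a later return faults */
}
```

If the key installed at function entry differs from the key present at function exit, authentication yields a poisoned pointer, which is why the keys are set from a function that never returns through a signed frame.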
This redefines __builtin_return_address to mask PAC bits when pointer authentication is enabled. As __builtin_return_address is mostly used to refer to the caller function's symbol address, masking the runtime-generated PAC bits helps find the match. This patch adds a new file (asm/compiler.h) which is transitively included (via include/compiler_types.h) on the compiler command line, so it is guaranteed to be loaded and users of this macro will not see a wrong version. A helper macro ptrauth_kernel_pac_mask is created for this purpose and added in this file. A similar macro ptrauth_user_pac_mask exists in pointer_auth.h and is now moved here for the sake of consistency. This change fixes utilities like cat /proc/vmallocinfo so that they show correct symbol names. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> --- Changes since last version: * Moved pac mask macros from pointer_auth.h to asm/compiler.h as suggested by Ard. * Richard suggested using the xpaclri/xpac instruction to clear the PAC, but this needs the -march=armv8.3-a GCC option, which also generates non-NOP instructions like retaa and hence may break backward compatibility. So it is left as before.
arch/arm64/Kconfig | 1 + arch/arm64/include/asm/compiler.h | 20 ++++++++++++++++++++ arch/arm64/include/asm/pointer_auth.h | 13 +++++-------- 3 files changed, 26 insertions(+), 8 deletions(-) create mode 100644 arch/arm64/include/asm/compiler.h diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 5aabe8a..06b5025 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -116,6 +116,7 @@ config ARM64 select HAVE_ALIGNED_STRUCT_PAGE if SLUB select HAVE_ARCH_AUDITSYSCALL select HAVE_ARCH_BITREVERSE + select HAVE_ARCH_COMPILER_H select HAVE_ARCH_HUGE_VMAP select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_JUMP_LABEL_RELATIVE diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h new file mode 100644 index 0000000..3cb06f9 --- /dev/null +++ b/arch/arm64/include/asm/compiler.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_COMPILER_H +#define __ASM_COMPILER_H + +#if defined(CONFIG_ARM64_PTR_AUTH) + +/* + * The EL0/EL1 pointer bits used by a pointer authentication code. + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply. + */ +#define ptrauth_user_pac_mask() GENMASK_ULL(54, vabits_actual) +#define ptrauth_kernel_pac_mask() (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS)) + +#define __builtin_return_address(val) \ + (void *)((unsigned long)__builtin_return_address(val) | \ + ptrauth_kernel_pac_mask()) + +#endif /* CONFIG_ARM64_PTR_AUTH */ + +#endif /* __ASM_COMPILER_H */ diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h index 0f89f59..ec33af3 100644 --- a/arch/arm64/include/asm/pointer_auth.h +++ b/arch/arm64/include/asm/pointer_auth.h @@ -68,16 +68,13 @@ static inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys) extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg); -/* - * The EL0 pointer bits used by a pointer authentication code. 
- * This is dependent on TBI0 being enabled, or bits 63:56 would also apply. - */ -#define ptrauth_user_pac_mask() GENMASK(54, vabits_actual) - -/* Only valid for EL0 TTBR0 instruction pointers */ +/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) { - return ptr & ~ptrauth_user_pac_mask(); + if (ptr & BIT_ULL(55)) + return ptr | ptrauth_kernel_pac_mask(); + else + return ptr & ~ptrauth_user_pac_mask(); } #define ptrauth_thread_init_user(tsk) \ -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
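The two masks and the bit-55 dispatch introduced here can be checked in isolation. The sketch below is illustrative and assumes a fixed 48-bit VA configuration (in the kernel, the user mask uses the runtime vabits_actual rather than a constant):

```c
#include <assert.h>
#include <stdint.h>

#define VA_BITS 48	/* assumption for this sketch */

#define GENMASK_ULL(h, l) \
	(((~0ull) << (l)) & (~0ull >> (63 - (h))))

/* PAC bits for TTBR1 (kernel) pointers: 63:56 and 54:VA_BITS. */
#define ptrauth_kernel_pac_mask() \
	(GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS))
/* PAC bits for TTBR0 (user) pointers: TBI0 covers 63:56, so 54:VA_BITS. */
#define ptrauth_user_pac_mask()	GENMASK_ULL(54, VA_BITS)

/* Bit 55 selects the address space and is never used for the PAC. */
static uint64_t strip_pac(uint64_t ptr)
{
	if (ptr & (1ull << 55))
		return ptr | ptrauth_kernel_pac_mask();	/* restore 1s */
	return ptr & ~ptrauth_user_pac_mask();		/* restore 0s */
}
```

Stripping restores the sign-extended canonical form: kernel (TTBR1) pointers get their PAC bits set back to 1, user (TTBR0) pointers get them cleared to 0, so %pS and kallsyms lookups match again.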
From: Mark Rutland <mark.rutland@arm.com> When we enable pointer authentication in the kernel, LR values saved to the stack will have a PAC which we must strip in order to retrieve the real return address. Strip PACs when unwinding the stack in order to account for this. Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> [Amit: Re-position ptrauth_strip_insn_pac, comment] Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> --- Changes since last version: * Just use ptrauth_strip_insn_pac function introduced in previous patch. arch/arm64/kernel/stacktrace.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c index a336cb1..b479df7 100644 --- a/arch/arm64/kernel/stacktrace.c +++ b/arch/arm64/kernel/stacktrace.c @@ -14,6 +14,7 @@ #include <linux/stacktrace.h> #include <asm/irq.h> +#include <asm/pointer_auth.h> #include <asm/stack_pointer.h> #include <asm/stacktrace.h> @@ -101,6 +102,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) } #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ + frame->pc = ptrauth_strip_insn_pac(frame->pc); + /* * Frames created upon entry from EL0 have NULL FP and PC values, so * don't bother reporting these. Frames created by __noreturn functions -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
lr is printed with %pS, which will try to find an entry in kallsyms. After enabling pointer authentication, this match will fail due to the PAC present in the lr. Strip the PAC from the lr to display the correct symbol name. Suggested-by: James Morse <james.morse@arm.com> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> --- Changes since last version: * None. arch/arm64/kernel/process.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 0d4a3b8..d35b4c0 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -262,7 +262,7 @@ void __show_regs(struct pt_regs *regs) if (!user_mode(regs)) { printk("pc : %pS\n", (void *)regs->pc); - printk("lr : %pS\n", (void *)lr); + printk("lr : %pS\n", (void *)ptrauth_strip_insn_pac(lr)); } else { printk("pc : %016llx\n", regs->pc); printk("lr : %016llx\n", lr); -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
This patch restores the kernel ptrauth keys from the current task during cpu resume, after the MMU is turned on and ptrauth is enabled. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> --- Changes since last version: * None. arch/arm64/kernel/sleep.S | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S index 7b2f2e6..a6e9460 100644 --- a/arch/arm64/kernel/sleep.S +++ b/arch/arm64/kernel/sleep.S @@ -2,6 +2,7 @@ #include <linux/errno.h> #include <linux/linkage.h> #include <asm/asm-offsets.h> +#include <asm/asm_pointer_auth.h> #include <asm/assembler.h> #include <asm/smp.h> @@ -139,6 +140,11 @@ ENTRY(_cpu_resume) bl kasan_unpoison_task_stack_below #endif +#ifdef CONFIG_ARM64_PTR_AUTH + get_current_task x1 + ptrauth_keys_install_kernel x1, x2, x3, x4 +#endif + ldp x19, x20, [x29, #16] ldp x21, x22, [x29, #32] ldp x23, x24, [x29, #48] -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
This patch disables probing of the ptrauth authenticate instruction, which falls in the HINT instruction encoding space. This is done to disallow probing an instruction which may lead to ptrauth faults. The corresponding PAC-inserting ptrauth instruction is not disabled, as it is typically the first instruction of a function, so disabling its probe would amount to disabling probing of the function itself. Also, inserting a PAC does not in itself cause any exception. Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> --- Changes since last version: * None. arch/arm64/include/asm/insn.h | 13 +++++++------ arch/arm64/kernel/insn.c | 1 + arch/arm64/kernel/probes/decode-insn.c | 2 +- 3 files changed, 9 insertions(+), 7 deletions(-) diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h index bb313dd..2e01db0 100644 --- a/arch/arm64/include/asm/insn.h +++ b/arch/arm64/include/asm/insn.h @@ -40,12 +40,13 @@ enum aarch64_insn_encoding_class { }; enum aarch64_insn_hint_op { - AARCH64_INSN_HINT_NOP = 0x0 << 5, - AARCH64_INSN_HINT_YIELD = 0x1 << 5, - AARCH64_INSN_HINT_WFE = 0x2 << 5, - AARCH64_INSN_HINT_WFI = 0x3 << 5, - AARCH64_INSN_HINT_SEV = 0x4 << 5, - AARCH64_INSN_HINT_SEVL = 0x5 << 5, + AARCH64_INSN_HINT_NOP = 0x0 << 5, + AARCH64_INSN_HINT_YIELD = 0x1 << 5, + AARCH64_INSN_HINT_WFE = 0x2 << 5, + AARCH64_INSN_HINT_WFI = 0x3 << 5, + AARCH64_INSN_HINT_SEV = 0x4 << 5, + AARCH64_INSN_HINT_SEVL = 0x5 << 5, + AARCH64_INSN_HINT_AUTIASP = (0x3 << 8) | (0x5 << 5), }; enum aarch64_insn_imm_type { diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c index 4a9e773..87f7c8a 100644 --- a/arch/arm64/kernel/insn.c +++ b/arch/arm64/kernel/insn.c @@ -63,6 +63,7 @@ bool __kprobes aarch64_insn_is_nop(u32 insn) case AARCH64_INSN_HINT_WFI: case AARCH64_INSN_HINT_SEV: case AARCH64_INSN_HINT_SEVL: + case AARCH64_INSN_HINT_AUTIASP: return false; default: return true; diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c index b78fac9..a7caf84
100644 --- a/arch/arm64/kernel/probes/decode-insn.c +++ b/arch/arm64/kernel/probes/decode-insn.c @@ -42,7 +42,7 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn) != AARCH64_INSN_SPCLREG_DAIF; /* - * The HINT instruction is is problematic when single-stepping, + * The HINT instruction is problematic when single-stepping, * except for the NOP case. */ if (aarch64_insn_is_hint(insn)) -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
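For reference, the encodings involved can be modelled in a few lines. HINT is 0xd503201f with CRm in bits [11:8] and op2 in bits [7:5], so the enum values above are pre-shifted CRm:op2 pairs and AUTIASP assembles to 0xd50323bf. This is an illustrative check only, not the kernel's decoder (which also refuses YIELD/WFE/WFI/SEV/SEVL):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HINT_BASE	0xd503201fu	/* HINT with CRm:op2 == 0 (NOP) */
#define HINT_OP_MASK	0x00000fe0u	/* CRm:op2 field, bits [11:5] */

/* Pre-shifted CRm:op2 value, matching the kernel's enum in this patch. */
#define HINT_AUTIASP	((0x3u << 8) | (0x5u << 5))

static bool insn_is_hint(uint32_t insn)
{
	return (insn & ~HINT_OP_MASK) == HINT_BASE;
}

/*
 * Simplified model of aarch64_insn_is_nop() after this patch: AUTIASP
 * is no longer treated as a benign NOP-like hint, so kprobes will not
 * single-step it. (The real function excludes several other hints too.)
 */
static bool hint_is_steppable(uint32_t insn)
{
	return insn_is_hint(insn) && (insn & HINT_OP_MASK) != HINT_AUTIASP;
}
```

PACIASP (hint #25, 0xd503233f) is deliberately left steppable, per the commit message, since blacklisting it would block probing of function entry points.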
From: Vincenzo Frascino <vincenzo.frascino@arm.com> Currently kconfig does not have a feature that allows it to detect whether the assembler in use supports a specific compilation option. Introduce 'as-option' to serve this purpose in the context of Kconfig: config X def_bool $(as-option,...) Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: linux-kbuild@vger.kernel.org Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> --- Changes since last version: * None. scripts/Kconfig.include | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include index d4adfbe..cc465dd 100644 --- a/scripts/Kconfig.include +++ b/scripts/Kconfig.include @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de # Return y if the linker supports <flag>, n otherwise ld-option = $(success,$(LD) -v $(1)) +# $(as-option,<flag>) +# Return y if the assembler supports <flag>, n otherwise +as-option = $(success, $(CC) $(CLANG_FLAGS) $(1) -E -x assembler /dev/null -o /dev/null) + # check if $(CC) and $(LD) exist $(error-if,$(failure,command -v $(CC)),compiler '$(CC)' not found) $(error-if,$(failure,command -v $(LD)),linker '$(LD)' not found) -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
From: Kristina Martsenko <kristina.martsenko@arm.com> Compile all functions with two ptrauth instructions: PACIASP in the prologue to sign the return address, and AUTIASP in the epilogue to authenticate the return address (from the stack). If authentication fails, the return will cause an instruction abort to be taken, followed by an oops and killing the task. This should help protect the kernel against attacks using return-oriented programming. As ptrauth protects the return address, it can also serve as a replacement for CONFIG_STACKPROTECTOR, although note that it does not protect other parts of the stack. The new instructions are in the HINT encoding space, so on a system without ptrauth they execute as NOPs. CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM guests, but also automatically builds the kernel with ptrauth instructions if the compiler supports it. If there is no compiler support, we do not warn that the kernel was built without ptrauth instructions. GCC 7 and 8 support the -msign-return-address option, while GCC 9 deprecates that option and replaces it with -mbranch-protection. Support both options. Clang uses an external assembler hence this patch makes sure that the correct parameters (-march=armv8.3-a) are passed down to help it recognize the ptrauth instructions. This option is not used for GNU toolchain. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> [Amit: Cover leaf function, comments] Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> --- Change since last version: * Introduced clang compiler support by passing flag -march=armv8.3-a to external assembler. There were some changes required for dwarfs `.cfi_negate_ra_state' in GCC toolchain. 
They have been taken care of and will be present in newer GCC toolchains, with backward compatibility. The issue reported by Mark Brown [1] is now fixed. [1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2019-November/696162.html arch/arm64/Kconfig | 21 ++++++++++++++++++++- arch/arm64/Makefile | 11 +++++++++++ 2 files changed, 31 insertions(+), 1 deletion(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 06b5025..f0798b7 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1466,6 +1466,7 @@ config ARM64_PTR_AUTH bool "Enable support for pointer authentication" default y depends on !KVM || ARM64_VHE + depends on GCC_VERSION >= 70000 || CLANG_VERSION >= 80000 help Pointer authentication (part of the ARMv8.3 Extensions) provides instructions for signing and authenticating pointers against secret @@ -1473,11 +1474,17 @@ config ARM64_PTR_AUTH and other attacks. This option enables these instructions at EL0 (i.e. for userspace). - Choosing this option will cause the kernel to initialise secret keys for each process at exec() time, with these keys being context-switched along with the process. + If the compiler supports the -mbranch-protection or + -msign-return-address flag (e.g. GCC 7 or later), then this option + will also cause the kernel itself to be compiled with return address + protection. In this case, and if the target hardware is known to + support pointer authentication, then CONFIG_STACKPROTECTOR can be + disabled with minimal loss of protection. + The feature is detected at runtime. If the feature is not present in hardware it will not be advertised to userspace/KVM guest nor will it be enabled. However, KVM guest also require VHE mode and hence @@ -1488,6 +1495,18 @@ config ARM64_PTR_AUTH have address auth and the late CPU has then system panic will occur. On such a system, this option should not be selected.
+config CC_HAS_BRANCH_PROT_PAC_RET + # GCC 9 or later, clang 8 or later + def_bool $(cc-option,-mbranch-protection=pac-ret+leaf) + +config CC_HAS_SIGN_RETURN_ADDRESS + # GCC 7, 8 + def_bool $(cc-option,-msign-return-address=all) + +config AS_HAS_PAC + def_bool $(as-option,-Wa,-march=armv8.3-a) + depends on CC_IS_CLANG + endmenu config ARM64_SVE diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 1fbe24d..6a1da59 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -72,6 +72,17 @@ stack_protector_prepare: prepare0 include/generated/asm-offsets.h)) endif +ifeq ($(CONFIG_ARM64_PTR_AUTH),y) +branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all +branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf +# -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the compiler +# to generate them and consequently to break the single image contract we pass it +# only to the assembler when clang is selected as a compiler. For the GNU toolchain +# this option is not used. +branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a +KBUILD_CFLAGS += $(branch-prot-flags-y) +endif + ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__AARCH64EB__ -- 2.7.4 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
This test is specific to arm64. When the in-kernel Pointer Authentication config is enabled, the return address stored on the stack is signed. This feature helps mitigate ROP-style attacks. If any of the parameters used to generate the PAC (<key, sp, lr>) is modified, authentication will fail and lead to an abort.

This test changes the APIA kernel key, one of the input parameters, to cause an abort. The PAC computed with the new key can be the same as the old one due to a hash collision, so the test is retried a few times, as there is no reliable way to compare the PACs. Even if this test fails after the retries, the key change may still cause an authentication failure at a later stage, when an earlier function returns.

This test can be invoked as,

echo CORRUPT_PAC > /sys/kernel/debug/provoke-crash/DIRECT

or as below if inserted as a module,

insmod lkdtm.ko cpoint_name=DIRECT cpoint_type=CORRUPT_PAC cpoint_count=1

[   13.118166] lkdtm: Performing direct entry CORRUPT_PAC
[   13.118298] lkdtm: Clearing PAC from the return address
[   13.118466] Unable to handle kernel paging request at virtual address bfff8000108648ec
[   13.118626] Mem abort info:
[   13.118666]   ESR = 0x86000004
[   13.118866]   EC = 0x21: IABT (current EL), IL = 32 bits
[   13.118966]   SET = 0, FnV = 0
[   13.119117]   EA = 0, S1PTW = 0

Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
* The pointer authentication error is now caused by changing the kernel
  keys, as suggested by Ard.
* Richard [1] suggested iterating until a new PAC value is generated, but
  the variables required to compute the PAC cannot be acquired in a
  generic way, so this test just retries crashing for some time and then
  stops.
[1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2019-November/696084.html

 drivers/misc/lkdtm/bugs.c  | 36 ++++++++++++++++++++++++++++++++++++
 drivers/misc/lkdtm/core.c  |  1 +
 drivers/misc/lkdtm/lkdtm.h |  1 +
 3 files changed, 38 insertions(+)

diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
index a4fdad0..b9ef583 100644
--- a/drivers/misc/lkdtm/bugs.c
+++ b/drivers/misc/lkdtm/bugs.c
@@ -376,3 +376,39 @@ void lkdtm_DOUBLE_FAULT(void)
 	panic("tried to double fault but didn't die\n");
 }
 #endif
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+static noinline void change_pac_parameters(void)
+{
+	/* Reset the keys of current task */
+	ptrauth_thread_init_kernel(current);
+	ptrauth_thread_switch_kernel(current);
+}
+
+#define CORRUPT_PAC_ITERATE	10
+noinline void lkdtm_CORRUPT_PAC(void)
+{
+	int i;
+
+	if (!system_supports_address_auth()) {
+		pr_err("FAIL: arm64 pointer authentication feature not present\n");
+		return;
+	}
+
+	pr_info("Change the PAC parameters to force function return failure\n");
+	/*
+	 * Pac is a hash value computed from input keys, return address and
+	 * stack pointer. As pac has fewer bits so there is a chance of
+	 * collision, so iterate few times to reduce the collision probability.
+	 */
+	for (i = 0; i < CORRUPT_PAC_ITERATE; i++)
+		change_pac_parameters();
+
+	pr_err("FAIL: %s test failed. Kernel may be unstable from here\n", __func__);
+}
+#else /* !CONFIG_ARM64_PTR_AUTH */
+noinline void lkdtm_CORRUPT_PAC(void)
+{
+	pr_err("FAIL: arm64 pointer authentication config disabled\n");
+}
+#endif

diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index ee0d6e7..5ce4ac8 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -116,6 +116,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(STACK_GUARD_PAGE_LEADING),
 	CRASHTYPE(STACK_GUARD_PAGE_TRAILING),
 	CRASHTYPE(UNSET_SMEP),
+	CRASHTYPE(CORRUPT_PAC),
 	CRASHTYPE(UNALIGNED_LOAD_STORE_WRITE),
 	CRASHTYPE(OVERWRITE_ALLOCATION),
 	CRASHTYPE(WRITE_AFTER_FREE),

diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index c56d23e..8d13d01 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -31,6 +31,7 @@ void lkdtm_UNSET_SMEP(void);
 #ifdef CONFIG_X86_32
 void lkdtm_DOUBLE_FAULT(void);
 #endif
+void lkdtm_CORRUPT_PAC(void);

 /* lkdtm_heap.c */
 void __init lkdtm_heap_init(void);

-- 
2.7.4
Hi,

This series improves function return address protection for the arm64 kernel, by compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred to as ptrauth hereafter). This should help protect the kernel against attacks using return-oriented programming.

This series is based on v5.5-rc2.

High-level changes since v2 [1] (detailed changes are in individual patches):
- Added support to generate randomness for ptrauth keys for the early booting
  task on the primary core, as suggested by Ard.
- Modified the lkdtm ptrauth test case to change keys to cause a crash, instead
  of modifying the lr in the stack.
- Resolved a clang compilation issue.
- Re-positioned "arm64: rename ptrauth key structures to be user-specific" to
  reduce code churn.
- Verified all the vdso test cases with ptrauth [2].

This series does not implement a few things, and has known limitations:
- kdump tools may need some rework to work with ptrauth. The kdump
  tools may need the ptrauth information to strip PAC bits.

Feedback welcome!
Thanks,
Amit Daniel

[1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2019-November/695089.html
[2]: https://github.com/nlynch-mentor/vdsotest.git

Amit Daniel Kachhap (8):
  arm64: create macro to park cpu in an infinite loop
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: initialize ptrauth keys for kernel booting task
  arm64: mask PAC bits of __builtin_return_address
  arm64: __show_regs: strip PAC from lr in printk
  arm64: suspend: restore the kernel ptrauth keys
  arm64: kprobe: disable probe of ptrauth instruction
  lkdtm: arm64: test kernel pointer authentication

Kristina Martsenko (6):
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: rename ptrauth key structures to be user-specific
  arm64: install user ptrauth keys at kernel exit time
  arm64: enable ptrauth earlier
  arm64: initialize and switch ptrauth kernel keys
  arm64: compile the kernel with ptrauth return address signing

Mark Rutland (1):
  arm64: unwind: strip PAC from kernel addresses

Vincenzo Frascino (1):
  kconfig: Add support for 'as-option'

 arch/arm64/Kconfig                        | 27 +++++++++++-
 arch/arm64/Makefile                       | 11 +++++
 arch/arm64/include/asm/asm_pointer_auth.h | 59 ++++++++++++++++++++++++++
 arch/arm64/include/asm/compiler.h         | 20 +++++++++
 arch/arm64/include/asm/cpucaps.h          |  4 +-
 arch/arm64/include/asm/cpufeature.h       |  6 +--
 arch/arm64/include/asm/insn.h             | 13 +++---
 arch/arm64/include/asm/pointer_auth.h     | 54 ++++++++++++------------
 arch/arm64/include/asm/processor.h        |  3 +-
 arch/arm64/include/asm/smp.h              | 10 +++++
 arch/arm64/include/asm/stackprotector.h   |  5 +++
 arch/arm64/kernel/asm-offsets.c           | 16 +++++++
 arch/arm64/kernel/cpufeature.c            | 30 ++++++++++----
 arch/arm64/kernel/entry.S                 |  6 +++
 arch/arm64/kernel/head.S                  | 47 +++++++++++++++------
 arch/arm64/kernel/insn.c                  |  1 +
 arch/arm64/kernel/pointer_auth.c          |  7 +---
 arch/arm64/kernel/probes/decode-insn.c    |  2 +-
 arch/arm64/kernel/process.c               |  5 ++-
 arch/arm64/kernel/ptrace.c                | 16 +++----
 arch/arm64/kernel/sleep.S                 |  8 ++++
 arch/arm64/kernel/smp.c                   | 10 +++++
 arch/arm64/kernel/stacktrace.c            |  3 ++
 arch/arm64/mm/proc.S                      | 69 ++++++++++++++++++++++++++-----
 drivers/misc/lkdtm/bugs.c                 | 36 ++++++++++++++++
 drivers/misc/lkdtm/core.c                 |  1 +
 drivers/misc/lkdtm/lkdtm.h                |  1 +
 include/linux/stackprotector.h            |  2 +-
 scripts/Kconfig.include                   |  4 ++
 29 files changed, 388 insertions(+), 88 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
 create mode 100644 arch/arm64/include/asm/compiler.h

-- 
2.7.4
On Mon, Dec 16, 2019 at 5:48 PM Amit Daniel Kachhap <amit.kachhap@arm.com> wrote: > > From: Vincenzo Frascino <vincenzo.frascino@arm.com> > > Currently kconfig does not have a feature that allows to detect if the > used assembler supports a specific compilation option. > > Introduce 'as-option' to serve this purpose in the context of Kconfig: > > config X > def_bool $(as-option,...) > > Cc: Masahiro Yamada <yamada.masahiro@socionext.com> > Cc: linux-kbuild@vger.kernel.org > Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> > Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> > --- If you apply this to arm64 tree, just go ahead. Acked-by: Masahiro Yamada <masahiroy@kernel.org> But, please be aware of a possible merge conflict with: https://patchwork.kernel.org/patch/11285929/ -- Best Regards Masahiro Yamada _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
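The acked patch body is not quoted here, but the macro it adds follows the existing `cc-option` pattern in scripts/Kconfig.include, roughly along these lines (a sketch of the idea, not the exact posted hunk):

```
# as-option: return y if the assembler (driven through $(CC)) accepts
# the given option when assembling an empty input file.
# Kconfig usage, as in the cover letter's AS_HAS_PAC:
#   def_bool $(as-option,-Wa,-march=armv8.3-a)
as-option = $(success,$(CC) $(CLANG_FLAGS) $(1) -c -x assembler /dev/null -o /dev/null)
```

Routing the probe through $(CC) rather than invoking the assembler directly matters for clang, where `-Wa,...` options are forwarded to the integrated or external assembler by the compiler driver.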
On Mon, Dec 16, 2019 at 02:17:02PM +0530, Amit Daniel Kachhap wrote: > This series improves function return address protection for the arm64 kernel, by > compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred > ptrauth hereafter). This should help protect the kernel against attacks using > return-oriented programming. Exciting! Can this be emulated in qemu yet? I'd like to see more specific LKDTM tests added for this (similar to the forward-edge CFI tests[1]), but I won't be able to do these tests myself since I don't have ARMv8.3 hardware. :) IIUC, the existing lkdtm_CORRUPT_STACK*() tests[2] should trip with this protection enabled... Thanks! -Kees [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/misc/lkdtm/cfi.c [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/misc/lkdtm/bugs.c#n114 > > This series is based on v5.5-rc2. > > High-level changes since v2 [1] (detailed changes are in individual patches): > - Added support to generate randomness for ptrauth keys for early booting task > in primary core as suggested by Ard. > - Modified lkdtm ptrauth test-case to change keys to cause crash instead of > modifying the lr in the stack. > - Resolved a clang compilation issue. > - Re-positioned "arm64: rename ptrauth key structures to be user-specific" to > reduce code churnings. > > This series do not implement few things or have known limitations: > - kdump tools may need some rework to work with ptrauth. The kdump > tools may need the ptrauth information to strip PAC bits. > > Feedback welcome! 
> > Thanks, > Amit Daniel > > [1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2019-November/695089.html > > Amit Daniel Kachhap (8): > arm64: create macro to park cpu in an infinite loop > arm64: ptrauth: Add bootup/runtime flags for __cpu_setup > arm64: initialize ptrauth keys for kernel booting task > arm64: mask PAC bits of __builtin_return_address > arm64: __show_regs: strip PAC from lr in printk > arm64: suspend: restore the kernel ptrauth keys > arm64: kprobe: disable probe of ptrauth instruction > lkdtm: arm64: test kernel pointer authentication > > Kristina Martsenko (6): > arm64: cpufeature: add pointer auth meta-capabilities > arm64: rename ptrauth key structures to be user-specific > arm64: install user ptrauth keys at kernel exit time > arm64: enable ptrauth earlier > arm64: initialize and switch ptrauth kernel keys > arm64: compile the kernel with ptrauth return address signing > > Mark Rutland (1): > arm64: unwind: strip PAC from kernel addresses > > Vincenzo Frascino (1): > kconfig: Add support for 'as-option' > > arch/arm64/Kconfig | 27 +++++++++++- > arch/arm64/Makefile | 11 +++++ > arch/arm64/include/asm/asm_pointer_auth.h | 59 ++++++++++++++++++++++++++ > arch/arm64/include/asm/compiler.h | 20 +++++++++ > arch/arm64/include/asm/cpucaps.h | 4 +- > arch/arm64/include/asm/cpufeature.h | 6 +-- > arch/arm64/include/asm/insn.h | 13 +++--- > arch/arm64/include/asm/pointer_auth.h | 54 ++++++++++++------------ > arch/arm64/include/asm/processor.h | 3 +- > arch/arm64/include/asm/smp.h | 10 +++++ > arch/arm64/include/asm/stackprotector.h | 5 +++ > arch/arm64/kernel/asm-offsets.c | 16 +++++++ > arch/arm64/kernel/cpufeature.c | 30 ++++++++++---- > arch/arm64/kernel/entry.S | 6 +++ > arch/arm64/kernel/head.S | 47 +++++++++++++++------ > arch/arm64/kernel/insn.c | 1 + > arch/arm64/kernel/pointer_auth.c | 7 +--- > arch/arm64/kernel/probes/decode-insn.c | 2 +- > arch/arm64/kernel/process.c | 5 ++- > arch/arm64/kernel/ptrace.c | 16 +++---- > 
arch/arm64/kernel/sleep.S | 8 ++++ > arch/arm64/kernel/smp.c | 10 +++++ > arch/arm64/kernel/stacktrace.c | 3 ++ > arch/arm64/mm/proc.S | 69 ++++++++++++++++++++++++++----- > drivers/misc/lkdtm/bugs.c | 36 ++++++++++++++++ > drivers/misc/lkdtm/core.c | 1 + > drivers/misc/lkdtm/lkdtm.h | 1 + > include/linux/stackprotector.h | 2 +- > scripts/Kconfig.include | 4 ++ > 29 files changed, 388 insertions(+), 88 deletions(-) > create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h > create mode 100644 arch/arm64/include/asm/compiler.h > > -- > 2.7.4 > -- Kees Cook _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Hi Kees, On 12/31/19 12:39 AM, Kees Cook wrote: > On Mon, Dec 16, 2019 at 02:17:02PM +0530, Amit Daniel Kachhap wrote: >> This series improves function return address protection for the arm64 kernel, by >> compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred >> ptrauth hereafter). This should help protect the kernel against attacks using >> return-oriented programming. > > Exciting! Can this be emulated in qemu yet? I'd like to see more specific Yes I just checked qemu 4.1 version. ptrauth can be emulated by using option -cpu max. Even the lkdtm test provided in this series works fine. > LKDTM tests added for this (similar to the forward-edge CFI tests[1]), Ok sure I will check on this if I can add more tests. > but I won't be able to do these tests myself since I don't have ARMv8.3 > hardware. :) IIUC, the existing lkdtm_CORRUPT_STACK*() tests[2] should trip > with this protection enabled... Yes lkdtm_CORRUPT_STACK test works fine along ptrauth instructions. Thanks, Amit > > Thanks! > > -Kees > > [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/misc/lkdtm/cfi.c > [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/misc/lkdtm/bugs.c#n114 > >> >> This series is based on v5.5-rc2. >> >> High-level changes since v2 [1] (detailed changes are in individual patches): >> - Added support to generate randomness for ptrauth keys for early booting task >> in primary core as suggested by Ard. >> - Modified lkdtm ptrauth test-case to change keys to cause crash instead of >> modifying the lr in the stack. >> - Resolved a clang compilation issue. >> - Re-positioned "arm64: rename ptrauth key structures to be user-specific" to >> reduce code churnings. >> >> This series do not implement few things or have known limitations: >> - kdump tools may need some rework to work with ptrauth. The kdump >> tools may need the ptrauth information to strip PAC bits. >> >> Feedback welcome! 
>> >> Thanks, >> Amit Daniel >> >> [1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2019-November/695089.html >> >> Amit Daniel Kachhap (8): >> arm64: create macro to park cpu in an infinite loop >> arm64: ptrauth: Add bootup/runtime flags for __cpu_setup >> arm64: initialize ptrauth keys for kernel booting task >> arm64: mask PAC bits of __builtin_return_address >> arm64: __show_regs: strip PAC from lr in printk >> arm64: suspend: restore the kernel ptrauth keys >> arm64: kprobe: disable probe of ptrauth instruction >> lkdtm: arm64: test kernel pointer authentication >> >> Kristina Martsenko (6): >> arm64: cpufeature: add pointer auth meta-capabilities >> arm64: rename ptrauth key structures to be user-specific >> arm64: install user ptrauth keys at kernel exit time >> arm64: enable ptrauth earlier >> arm64: initialize and switch ptrauth kernel keys >> arm64: compile the kernel with ptrauth return address signing >> >> Mark Rutland (1): >> arm64: unwind: strip PAC from kernel addresses >> >> Vincenzo Frascino (1): >> kconfig: Add support for 'as-option' >> >> arch/arm64/Kconfig | 27 +++++++++++- >> arch/arm64/Makefile | 11 +++++ >> arch/arm64/include/asm/asm_pointer_auth.h | 59 ++++++++++++++++++++++++++ >> arch/arm64/include/asm/compiler.h | 20 +++++++++ >> arch/arm64/include/asm/cpucaps.h | 4 +- >> arch/arm64/include/asm/cpufeature.h | 6 +-- >> arch/arm64/include/asm/insn.h | 13 +++--- >> arch/arm64/include/asm/pointer_auth.h | 54 ++++++++++++------------ >> arch/arm64/include/asm/processor.h | 3 +- >> arch/arm64/include/asm/smp.h | 10 +++++ >> arch/arm64/include/asm/stackprotector.h | 5 +++ >> arch/arm64/kernel/asm-offsets.c | 16 +++++++ >> arch/arm64/kernel/cpufeature.c | 30 ++++++++++---- >> arch/arm64/kernel/entry.S | 6 +++ >> arch/arm64/kernel/head.S | 47 +++++++++++++++------ >> arch/arm64/kernel/insn.c | 1 + >> arch/arm64/kernel/pointer_auth.c | 7 +--- >> arch/arm64/kernel/probes/decode-insn.c | 2 +- >> arch/arm64/kernel/process.c | 5 ++- >> 
arch/arm64/kernel/ptrace.c | 16 +++---- >> arch/arm64/kernel/sleep.S | 8 ++++ >> arch/arm64/kernel/smp.c | 10 +++++ >> arch/arm64/kernel/stacktrace.c | 3 ++ >> arch/arm64/mm/proc.S | 69 ++++++++++++++++++++++++++----- >> drivers/misc/lkdtm/bugs.c | 36 ++++++++++++++++ >> drivers/misc/lkdtm/core.c | 1 + >> drivers/misc/lkdtm/lkdtm.h | 1 + >> include/linux/stackprotector.h | 2 +- >> scripts/Kconfig.include | 4 ++ >> 29 files changed, 388 insertions(+), 88 deletions(-) >> create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h >> create mode 100644 arch/arm64/include/asm/compiler.h >> >> -- >> 2.7.4 >> > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
On 16/12/2019 08:47, Amit Daniel Kachhap wrote:
> A macro early_park_cpu is added to park the faulted cpu in an infinite
> loop. Currently, this macro is substituted in two instances and is reused
> in the subsequent pointer authentication patch.
>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
On 16/12/2019 08:47, Amit Daniel Kachhap wrote: > This patch allows __cpu_setup to be invoked with one of these flags, > ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_LATE or ARM64_CPU_RUNTIME. > This is required as some cpufeatures need different handling during > different scenarios. > > The input parameter in x0 is preserved till the end to be used inside > this function. > > There should be no functional change with this patch and is useful > for the subsequent ptrauth patch which utilizes it. Some upcoming > arm cpufeatures can also utilize these flags. > > Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> You may add: Suggested-by: James Morse <james.morse@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
On 16/12/2019 08:47, Amit Daniel Kachhap wrote: > From: Kristina Martsenko <kristina.martsenko@arm.com> > > When the kernel is compiled with pointer auth instructions, the boot CPU > needs to start using address auth very early, so change the cpucap to > account for this. > > Pointer auth must be enabled before we call C functions, because it is > not possible to enter a function with pointer auth disabled and exit it > with pointer auth enabled. Note, mismatches between architected and > IMPDEF algorithms will still be caught by the cpufeature framework (the > separate *_ARCH and *_IMP_DEF cpucaps). > > Note the change in behavior: if the boot CPU has address auth and a late > CPU does not, then we park the late CPU very early in booting. Also, if > the boot CPU does not have address auth and the late CPU has then system > panic will occur little later from inside the C code. Until now we would > have just disabled address auth in this case. > > Leave generic authentication as a "system scope" cpucap for now, since > initially the kernel will only use address authentication. > > Reviewed-by: Kees Cook <keescook@chromium.org> > Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> > [Amit: Re-worked ptrauth setup logic, comments] > Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> > --- > Changes since last version: > * None. > > arch/arm64/Kconfig | 5 +++++ > arch/arm64/include/asm/smp.h | 1 + > arch/arm64/kernel/cpufeature.c | 13 +++---------- > arch/arm64/kernel/head.S | 20 ++++++++++++++++++++ > arch/arm64/kernel/smp.c | 2 ++ > arch/arm64/mm/proc.S | 31 +++++++++++++++++++++++++++++++ > 6 files changed, 62 insertions(+), 10 deletions(-) > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig > index b1b4476..5aabe8a 100644 > --- a/arch/arm64/Kconfig > +++ b/arch/arm64/Kconfig > @@ -1482,6 +1482,11 @@ config ARM64_PTR_AUTH > be enabled. However, KVM guest also require VHE mode and hence > CONFIG_ARM64_VHE=y option to use this feature. 
> > + If the feature is present on the primary CPU but not a secondary CPU, > + then the secondary CPU will be parked. --- > Also, if the boot CPU does not > + have address auth and the late CPU has then system panic will occur. > + On such a system, this option should not be selected. Is this part of the text true ? We do not enable ptr-auth on the CPUs if we are missing the support on primary. So, given we disable SCTLR bits, the ptr-auth instructions should be a NOP and is thus safe. The rest looks good to me. With the above text removed, Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Hi Suzuki, On 1/7/20 5:05 PM, Suzuki Kuruppassery Poulose wrote: > On 16/12/2019 08:47, Amit Daniel Kachhap wrote: >> From: Kristina Martsenko <kristina.martsenko@arm.com> >> >> When the kernel is compiled with pointer auth instructions, the boot CPU >> needs to start using address auth very early, so change the cpucap to >> account for this. >> >> Pointer auth must be enabled before we call C functions, because it is >> not possible to enter a function with pointer auth disabled and exit it >> with pointer auth enabled. Note, mismatches between architected and >> IMPDEF algorithms will still be caught by the cpufeature framework (the >> separate *_ARCH and *_IMP_DEF cpucaps). >> >> Note the change in behavior: if the boot CPU has address auth and a late >> CPU does not, then we park the late CPU very early in booting. Also, if >> the boot CPU does not have address auth and the late CPU has then system >> panic will occur little later from inside the C code. Until now we would >> have just disabled address auth in this case. >> >> Leave generic authentication as a "system scope" cpucap for now, since >> initially the kernel will only use address authentication. >> >> Reviewed-by: Kees Cook <keescook@chromium.org> >> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> >> [Amit: Re-worked ptrauth setup logic, comments] >> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> >> --- >> Changes since last version: >> * None. >> >> arch/arm64/Kconfig | 5 +++++ >> arch/arm64/include/asm/smp.h | 1 + >> arch/arm64/kernel/cpufeature.c | 13 +++---------- >> arch/arm64/kernel/head.S | 20 ++++++++++++++++++++ >> arch/arm64/kernel/smp.c | 2 ++ >> arch/arm64/mm/proc.S | 31 +++++++++++++++++++++++++++++++ >> 6 files changed, 62 insertions(+), 10 deletions(-) >> >> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig >> index b1b4476..5aabe8a 100644 >> --- a/arch/arm64/Kconfig >> +++ b/arch/arm64/Kconfig >> @@ -1482,6 +1482,11 @@ config ARM64_PTR_AUTH >> be enabled. 
However, KVM guest also require VHE mode and hence >> CONFIG_ARM64_VHE=y option to use this feature. >> + If the feature is present on the primary CPU but not a >> secondary CPU, >> + then the secondary CPU will be parked. > > --- > >> Also, if the boot CPU does not >> + have address auth and the late CPU has then system panic will >> occur. >> + On such a system, this option should not be selected. > > Is this part of the text true ? We do not enable ptr-auth on the CPUs if > we are missing the support on primary. So, given we disable SCTLR bits, > the ptr-auth instructions should be a NOP and is thus safe. I got little confused with your earlier comments [1] and made the secondary cpu's panic in case they have ptrauth and primary don't. In this case ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU will leave it running and not panic as you mentioned. I will append ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU feature and update the comments over here accordingly in my next iteration. [1]: https://patchwork.kernel.org/patch/11195087/ > > The rest looks good to me. With the above text removed, > > Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Thanks for reviewing. > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
On 1/7/20 4:48 PM, Suzuki Kuruppassery Poulose wrote: > On 16/12/2019 08:47, Amit Daniel Kachhap wrote: >> This patch allows __cpu_setup to be invoked with one of these flags, >> ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_LATE or ARM64_CPU_RUNTIME. >> This is required as some cpufeatures need different handling during >> different scenarios. >> >> The input parameter in x0 is preserved till the end to be used inside >> this function. >> >> There should be no functional change with this patch and is useful >> for the subsequent ptrauth patch which utilizes it. Some upcoming >> arm cpufeatures can also utilize these flags. >> >> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> > > You may add: > > Suggested-by: James Morse <james.morse@arm.com> Sure. I missed it. > > > Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Thanks for reviewing. > > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
On 09/01/2020 08:29, Amit Kachhap wrote: > Hi Suzuki, > > On 1/7/20 5:05 PM, Suzuki Kuruppassery Poulose wrote: >> On 16/12/2019 08:47, Amit Daniel Kachhap wrote: >>> From: Kristina Martsenko <kristina.martsenko@arm.com> >>> >>> When the kernel is compiled with pointer auth instructions, the boot CPU >>> needs to start using address auth very early, so change the cpucap to >>> account for this. >>> >>> Pointer auth must be enabled before we call C functions, because it is >>> not possible to enter a function with pointer auth disabled and exit it >>> with pointer auth enabled. Note, mismatches between architected and >>> IMPDEF algorithms will still be caught by the cpufeature framework (the >>> separate *_ARCH and *_IMP_DEF cpucaps). >>> >>> Note the change in behavior: if the boot CPU has address auth and a late >>> CPU does not, then we park the late CPU very early in booting. Also, if >>> the boot CPU does not have address auth and the late CPU has then system >>> panic will occur little later from inside the C code. Until now we would >>> have just disabled address auth in this case. >>> >>> Leave generic authentication as a "system scope" cpucap for now, since >>> initially the kernel will only use address authentication. >>> >>> Reviewed-by: Kees Cook <keescook@chromium.org> >>> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> >>> [Amit: Re-worked ptrauth setup logic, comments] >>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> >>> --- >>> Changes since last version: >>> * None. 
>>> >>> arch/arm64/Kconfig | 5 +++++ >>> arch/arm64/include/asm/smp.h | 1 + >>> arch/arm64/kernel/cpufeature.c | 13 +++---------- >>> arch/arm64/kernel/head.S | 20 ++++++++++++++++++++ >>> arch/arm64/kernel/smp.c | 2 ++ >>> arch/arm64/mm/proc.S | 31 +++++++++++++++++++++++++++++++ >>> 6 files changed, 62 insertions(+), 10 deletions(-) >>> >>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig >>> index b1b4476..5aabe8a 100644 >>> --- a/arch/arm64/Kconfig >>> +++ b/arch/arm64/Kconfig >>> @@ -1482,6 +1482,11 @@ config ARM64_PTR_AUTH >>> be enabled. However, KVM guest also require VHE mode and hence >>> CONFIG_ARM64_VHE=y option to use this feature. >>> + If the feature is present on the primary CPU but not a >>> secondary CPU, >>> + then the secondary CPU will be parked. >> >> --- >> >>> Also, if the boot CPU does not >>> + have address auth and the late CPU has then system panic will >>> occur. >>> + On such a system, this option should not be selected. >> >> Is this part of the text true ? We do not enable ptr-auth on the CPUs if >> we are missing the support on primary. So, given we disable SCTLR bits, >> the ptr-auth instructions should be a NOP and is thus safe. > > I got little confused with your earlier comments [1] and made the > secondary cpu's panic in case they have ptrauth and primary don't. In > this case ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU will leave it running and > not panic as you mentioned. Yes please. Sorry about the confusion. Suzuki _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
On Mon, Dec 16, 2019 at 02:17:03PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 04cf64e..cf42c46 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1249,6 +1249,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
> 	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
> 			 SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
> }
> +
> +static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
> +			     int __unused)
> +{
> +	return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> +	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> +}
> +
> +static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
> +			     int __unused)
> +{
> +	return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
> +	       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
> +}

Do these rely on the order in which the entries are listed in the
arm64_features[] array? It looks like we do the same for PAN_NOT_UAO but
that's pretty fragile. I'd prefer if we invoked the
cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH]->matches() directly here, maybe
hidden behind a helper (I couldn't find one at a quick look).

-- 
Catalin

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
On 15/01/2020 12:26, Catalin Marinas wrote:
> On Mon, Dec 16, 2019 at 02:17:03PM +0530, Amit Daniel Kachhap wrote:
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 04cf64e..cf42c46 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -1249,6 +1249,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
>>  	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
>>  			 SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
>>  }
>> +
>> +static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
>> +			     int __unused)
>> +{
>> +	return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
>> +	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
>> +}
>> +
>> +static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
>> +			     int __unused)
>> +{
>> +	return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
>> +	       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
>> +}
>
> Do these rely on the order in which the entries are listed in the
> arm64_features[] array? It looks like we do the same for PAN_NOT_UAO but
> that's pretty fragile.

Yes, it surely depends on the order in which they are listed.

> I'd prefer if we invoked the
> cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH]->matches() directly here, maybe

Yes, calling matches(cap, SCOPE_SYSTEM) should work and is much better.

> hidden behind a helper (I couldn't find one at a quick look).

There is no helper for this operation at SCOPE_SYSTEM, and it is only
needed for caps that depend on other caps.

Maybe we could hide the conversion of the number to a "cap" as:

static inline struct arm64_cpu_capabilities *cpu_cap_from_number(int n)
{
	if (n < ARM64_NCAPS)
		return cpu_hwcaps_ptrs[n];
	return NULL;
}

And use this for this_cpu_has_cap() too.

Suzuki
On Wed, Jan 15, 2020 at 01:52:24PM +0000, Suzuki K Poulose wrote:
> On 15/01/2020 12:26, Catalin Marinas wrote:
> > On Mon, Dec 16, 2019 at 02:17:03PM +0530, Amit Daniel Kachhap wrote:
> > > [...]
> >
> > Do these rely on the order in which the entries are listed in the
> > arm64_features[] array? It looks like we do the same for PAN_NOT_UAO but
> > that's pretty fragile.
>
> Yes, it surely depends on the order in which they are listed.
>
> > I'd prefer if we invoked the
> > cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH]->matches() directly here, maybe
>
> Yes, calling matches(cap, SCOPE_SYSTEM) should work and is much better.
>
> > hidden behind a helper (I couldn't find one at a quick look).
>
> There is no helper for this operation at SCOPE_SYSTEM, and it is only
> needed for caps that depend on other caps.
>
> Maybe we could hide the conversion of the number to a "cap" as:
>
> static inline struct arm64_cpu_capabilities *cpu_cap_from_number(int n)
> {
> 	if (n < ARM64_NCAPS)
> 		return cpu_hwcaps_ptrs[n];
> 	return NULL;
> }
>
> And use this for this_cpu_has_cap() too.

I'm not bothered about the cpu_cap_from_number() part. I was actually
thinking of something like the diff below:

-----------8<-------------------------
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 2595c2886d3f..2ea4c84fcc8a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2008,6 +2008,18 @@ bool this_cpu_has_cap(unsigned int n)
 	return false;
 }
 
+static bool system_has_cap(unsigned int n)
+{
+	if (n < ARM64_NCAPS) {
+		const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n];
+
+		if (cap)
+			return cap->matches(cap, SCOPE_SYSTEM);
+	}
+
+	return false;
+}
+
 void cpu_set_feature(unsigned int num)
 {
 	WARN_ON(num >= MAX_CPU_FEATURES);
@@ -2081,7 +2093,7 @@ void __init setup_cpu_features(void)
 static bool __maybe_unused
 cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
 {
-	return (cpus_have_const_cap(ARM64_HAS_PAN) && !cpus_have_const_cap(ARM64_HAS_UAO));
+	return system_has_cap(ARM64_HAS_PAN) && !system_has_cap(ARM64_HAS_UAO);
 }
 
 static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)

--
Catalin
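For readers following along outside the kernel tree, the dependent-capability pattern under discussion can be sketched in stand-alone C. Everything below is a mock-up (the struct, the table contents and the match results are invented stand-ins for the real cpufeature.c machinery); it only illustrates how a system_has_cap()-style helper re-evaluates a capability at system scope instead of relying on arm64_features[] ordering:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mocked-up capability indices and scope (stand-ins, not kernel values). */
#define ARM64_HAS_PAN 0
#define ARM64_HAS_UAO 1
#define ARM64_NCAPS   2
#define SCOPE_SYSTEM  1

struct arm64_cpu_capabilities {
	bool (*matches)(const struct arm64_cpu_capabilities *cap, int scope);
};

/* Pretend PAN is present on all CPUs and UAO is absent. */
static bool pan_matches(const struct arm64_cpu_capabilities *cap, int scope)
{
	(void)cap; (void)scope;
	return true;
}

static bool uao_matches(const struct arm64_cpu_capabilities *cap, int scope)
{
	(void)cap; (void)scope;
	return false;
}

static const struct arm64_cpu_capabilities pan_cap = { .matches = pan_matches };
static const struct arm64_cpu_capabilities uao_cap = { .matches = uao_matches };

/* Mirrors the role of cpu_hwcaps_ptrs[]: one slot per capability number. */
static const struct arm64_cpu_capabilities *cpu_hwcaps_ptrs[ARM64_NCAPS] = {
	[ARM64_HAS_PAN] = &pan_cap,
	[ARM64_HAS_UAO] = &uao_cap,
};

/* Shape of the suggested helper: re-run the cap's own check at system
 * scope, so the result no longer depends on table ordering. */
static bool system_has_cap(unsigned int n)
{
	if (n < ARM64_NCAPS) {
		const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n];

		if (cap)
			return cap->matches(cap, SCOPE_SYSTEM);
	}
	return false;
}

/* PAN_NOT_UAO expressed on top of the helper. */
static bool cpufeature_pan_not_uao(void)
{
	return system_has_cap(ARM64_HAS_PAN) && !system_has_cap(ARM64_HAS_UAO);
}
```

An out-of-range capability number simply returns false, matching the bounds check in the proposed diff.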
On Mon, Dec 16, 2019 at 02:17:04PM +0530, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
>
> We currently enable ptrauth for userspace, but do not use it within the
> kernel. We're going to enable it for the kernel, and will need to manage
> a separate set of ptrauth keys for the kernel.
>
> We currently keep all 5 keys in struct ptrauth_keys. However, as the
> kernel will only need to use 1 key, it is a bit wasteful to allocate a
> whole ptrauth_keys struct for every thread.
>
> Therefore, a subsequent patch will define a separate struct, with only 1
> key, for the kernel. In preparation for that, rename the existing struct
> (and associated macros and functions) to reflect that they are specific
> to userspace.
>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> [Amit: Re-positioned the patch to reduce the diff]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
On Mon, Dec 16, 2019 at 02:17:05PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
> new file mode 100644
> index 0000000..3d39788
> --- /dev/null
> +++ b/arch/arm64/include/asm/asm_pointer_auth.h
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_ASM_POINTER_AUTH_H
> +#define __ASM_ASM_POINTER_AUTH_H
> +
> +#include <asm/alternative.h>
> +#include <asm/asm-offsets.h>
> +#include <asm/cpufeature.h>
> +#include <asm/sysreg.h>
> +
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +
> +	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
> +	mov	\tmp1, #THREAD_KEYS_USER
> +	add	\tmp1, \tsk, \tmp1

I think we can remove these instructions (assuming that the ldp #imm
range is sufficient),

> +alternative_if_not ARM64_HAS_ADDRESS_AUTH
> +	b	.Laddr_auth_skip_\@
> +alternative_else_nop_endif
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]

use \tsk directly here (see below)

> +	msr_s	SYS_APIAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APIAKEYHI_EL1, \tmp3
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
> +	msr_s	SYS_APIBKEYLO_EL1, \tmp2
> +	msr_s	SYS_APIBKEYHI_EL1, \tmp3
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA]
> +	msr_s	SYS_APDAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APDAKEYHI_EL1, \tmp3
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB]
> +	msr_s	SYS_APDBKEYLO_EL1, \tmp2
> +	msr_s	SYS_APDBKEYHI_EL1, \tmp3
> +.Laddr_auth_skip_\@:
> +alternative_if ARM64_HAS_GENERIC_AUTH
> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA]
> +	msr_s	SYS_APGAKEYLO_EL1, \tmp2
> +	msr_s	SYS_APGAKEYHI_EL1, \tmp3
> +alternative_else_nop_endif
> +	.endm

[...]

> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index a5bdce8..7b1ea2a 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -40,6 +40,9 @@ int main(void)
>  #endif
>  	BLANK();
>  	DEFINE(THREAD_CPU_CONTEXT, offsetof(struct task_struct, thread.cpu_context));
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +	DEFINE(THREAD_KEYS_USER, offsetof(struct task_struct, thread.keys_user));
> +#endif
>  	BLANK();
>  	DEFINE(S_X0, offsetof(struct pt_regs, regs[0]));
>  	DEFINE(S_X2, offsetof(struct pt_regs, regs[2]));
> @@ -128,5 +131,13 @@ int main(void)
>  	DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs));
>  	DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority));
>  #endif
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +	DEFINE(PTRAUTH_USER_KEY_APIA, offsetof(struct ptrauth_keys_user, apia));
> +	DEFINE(PTRAUTH_USER_KEY_APIB, offsetof(struct ptrauth_keys_user, apib));
> +	DEFINE(PTRAUTH_USER_KEY_APDA, offsetof(struct ptrauth_keys_user, apda));
> +	DEFINE(PTRAUTH_USER_KEY_APDB, offsetof(struct ptrauth_keys_user, apdb));
> +	DEFINE(PTRAUTH_USER_KEY_APGA, offsetof(struct ptrauth_keys_user, apga));
> +	BLANK();
> +#endif

and define the above as

  offsetof(struct task_struct, thread.keys_user.apia)

--
Catalin
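The point about collapsing the two constants can be checked with plain offsetof() arithmetic. The structures below are invented placeholders (the real task_struct layout is configuration-dependent); the sketch only demonstrates that offsetof() reaches through nested members, so a single per-key constant equals THREAD_KEYS_USER plus the key's offset within ptrauth_keys_user:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel structures; the field names mirror
 * the patch, the surrounding layout is made up. */
struct ptrauth_key {
	uint64_t lo, hi;
};

struct ptrauth_keys_user {
	struct ptrauth_key apia, apib, apda, apdb, apga;
};

struct thread_struct {
	uint64_t cpu_context[13];	/* placeholder contents */
	struct ptrauth_keys_user keys_user;
};

struct task_struct {
	uint64_t other_state[32];	/* placeholder contents */
	struct thread_struct thread;
};

/* The per-key constant asm-offsets.c would emit under the suggested
 * scheme: one offset usable directly against the task pointer. */
static size_t ptrauth_user_key_apia(void)
{
	return offsetof(struct task_struct, thread.keys_user.apia);
}

/* The two-constant scheme from the patch as posted. */
static size_t thread_keys_user(void)
{
	return offsetof(struct task_struct, thread.keys_user);
}
```

The nested offset is exactly the sum of the two constants, which is why the extra mov/add in the asm macro becomes unnecessary.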
On Mon, Dec 16, 2019 at 02:17:06PM +0530, Amit Daniel Kachhap wrote:
> A macro early_park_cpu is added to park the faulted cpu in an infinite
> loop. Currently, this macro is substituted in two instances and is
> reused in the subsequent pointer authentication patch.
>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
On Mon, Dec 16, 2019 at 02:17:07PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
> index a0c8a0b..008d004 100644
> --- a/arch/arm64/include/asm/smp.h
> +++ b/arch/arm64/include/asm/smp.h
> @@ -23,6 +23,11 @@
>  #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
>  #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)
>
> +/* Options for __cpu_setup */
> +#define ARM64_CPU_BOOT_PRIMARY		(1)
> +#define ARM64_CPU_BOOT_LATE		(2)

Nitpick: I'd call this ARM64_CPU_BOOT_SECONDARY. I thought we tended to
use "late" for CPUs brought up after user space has started (at least
that was my impression from the cpufeature.c code).

--
Catalin
Hi,

On 1/15/20 9:31 PM, Catalin Marinas wrote:
> On Wed, Jan 15, 2020 at 01:52:24PM +0000, Suzuki K Poulose wrote:
> [...]
>> Maybe we could hide the conversion of the number to a "cap" as:
>>
>> static inline struct arm64_cpu_capabilities *cpu_cap_from_number(int n)
>> {
>> 	if (n < ARM64_NCAPS)
>> 		return cpu_hwcaps_ptrs[n];
>> 	return NULL;
>> }
>>
>> And use this for this_cpu_has_cap() too.
>
> I'm not bothered about the cpu_cap_from_number() part. I was actually
> thinking of something like the diff below:
>
> -----------8<-------------------------
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 2595c2886d3f..2ea4c84fcc8a 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2008,6 +2008,18 @@ bool this_cpu_has_cap(unsigned int n)
>  	return false;
>  }
>
> +static bool system_has_cap(unsigned int n)
> +{
> +	if (n < ARM64_NCAPS) {
> +		const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n];
> +
> +		if (cap)
> +			return cap->matches(cap, SCOPE_SYSTEM);
> +	}
> +
> +	return false;
> +}
> +

This patch looks fine. The ARM64_HAS_ADDRESS_AUTH_* cpufeatures are moved
to SCOPE_BOOT in the subsequent patches, so instead of system_has_cap()
the existing this_cpu_has_cap() can be used for them. The new function can
still be used for the other system-scope meta-capability cpufeatures.

>  void cpu_set_feature(unsigned int num)
>  {
>  	WARN_ON(num >= MAX_CPU_FEATURES);
> @@ -2081,7 +2093,7 @@ void __init setup_cpu_features(void)
>  static bool __maybe_unused
>  cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
>  {
> -	return (cpus_have_const_cap(ARM64_HAS_PAN) && !cpus_have_const_cap(ARM64_HAS_UAO));
> +	return system_has_cap(ARM64_HAS_PAN) && !system_has_cap(ARM64_HAS_UAO);
>  }
>
>  static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
On 1/15/20 10:32 PM, Catalin Marinas wrote:
> On Mon, Dec 16, 2019 at 02:17:05PM +0530, Amit Daniel Kachhap wrote:
>> [...]
>> +	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
>> +	mov	\tmp1, #THREAD_KEYS_USER
>> +	add	\tmp1, \tsk, \tmp1
>
> I think we can remove these instructions (assuming that the ldp #imm
> range is sufficient),

No, none of the #imm offsets exceeds the ldp range, so these instructions
can be removed. A comment to that effect here would probably be useful.

>> +alternative_if_not ARM64_HAS_ADDRESS_AUTH
>> +	b	.Laddr_auth_skip_\@
>> +alternative_else_nop_endif
>> +	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
>
> use \tsk directly here (see below)
>
> [...]
>
>> +#ifdef CONFIG_ARM64_PTR_AUTH
>> +	DEFINE(PTRAUTH_USER_KEY_APIA, offsetof(struct ptrauth_keys_user, apia));
>> [...]
>> +	BLANK();
>> +#endif
>
> and define the above as
>
>   offsetof(struct task_struct, thread.keys_user.apia)
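The "#imm range" in question is the LDP addressing mode: for 64-bit registers, LDP encodes a signed 7-bit immediate scaled by 8, i.e. byte offsets from -512 to 504 in multiples of 8. A small sanity checker for candidate offsets (the concrete keys_user offsets are kernel-config-dependent, so the values exercised below are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

/* LDP <Xt1>, <Xt2>, [Xn, #imm]: imm is a signed 7-bit field scaled by 8,
 * giving byte offsets in [-512, 504] that are multiples of 8. */
static bool ldp64_imm_ok(long off)
{
	return off >= -512 && off <= 504 && (off % 8) == 0;
}
```

So as long as each per-key offset within task_struct fits this window, the ldp instructions can address off \tsk directly with no preliminary mov/add.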
On 1/15/20 11:00 PM, Catalin Marinas wrote:
> On Mon, Dec 16, 2019 at 02:17:07PM +0530, Amit Daniel Kachhap wrote:
>> [...]
>> +/* Options for __cpu_setup */
>> +#define ARM64_CPU_BOOT_PRIMARY	(1)
>> +#define ARM64_CPU_BOOT_LATE		(2)
>
> Nitpick: I'd call this ARM64_CPU_BOOT_SECONDARY. I thought we tend to
> use "late" for CPUs brought up after user space started (at least that
> was my impression from the cpufeature.c code).

Sure. I will update it in the next version.
On Mon, Dec 16, 2019 at 02:17:08PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 5aaf1bb..c59c28f 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -13,6 +13,7 @@
>  #include <linux/init.h>
>  #include <linux/irqchip/arm-gic-v3.h>
>
> +#include <asm/alternative.h>
>  #include <asm/assembler.h>
>  #include <asm/boot.h>
>  #include <asm/ptrace.h>
> @@ -713,6 +714,7 @@ secondary_startup:
>  	 * Common entry point for secondary CPUs.
>  	 */
>  	bl	__cpu_secondary_check52bitva
> +	bl	__cpu_secondary_checkptrauth
>  	mov	x0, #ARM64_CPU_BOOT_LATE
>  	bl	__cpu_setup			// initialise processor
>  	adrp	x1, swapper_pg_dir
> @@ -831,6 +833,24 @@ __no_granule_support:
>  	early_park_cpu CPU_STUCK_REASON_NO_GRAN
>  ENDPROC(__no_granule_support)
>
> +ENTRY(__cpu_secondary_checkptrauth)
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +	/* Check if the CPU supports ptrauth */
> +	mrs	x2, id_aa64isar1_el1
> +	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
> +	cbnz	x2, 1f
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	mov	x3, 1
> +alternative_else
> +	mov	x3, 0
> +alternative_endif
> +	cbz	x3, 1f
> +	/* Park the mismatched secondary CPU */
> +	early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH
> +#endif
> +1:	ret
> +ENDPROC(__cpu_secondary_checkptrauth)

Do we actually need to park secondary CPUs early? Let's say a secondary
CPU doesn't have PAC: __cpu_setup won't set the corresponding SCTLR_EL1
bits and the instructions are NOPs. Wouldn't the cpufeature framework
park it later anyway?

--
Catalin
On Mon, Dec 16, 2019 at 02:17:10PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index aa956ca..0f89f59 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -60,6 +60,12 @@ static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
>  	get_random_bytes(&keys->apia, sizeof(keys->apia));
>  }
>
> +static inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys)

I think we should use __always_inline here, just in case the compiler
ignores the hint. Otherwise:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
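The concern behind __always_inline is that plain `inline` is only a hint: if the compiler emitted ptrauth_keys_switch_kernel() as a real out-of-line function, it could not safely enter and exit with different keys (the same constraint the series notes for idle threads). The kernel's macro amounts to the GCC attribute, sketched here in isolation as a trivial stand-alone example:

```c
#include <assert.h>

/* Equivalent of the kernel's __always_inline: force inlining even when
 * the plain `inline` hint would be ignored (e.g. at -O0). */
#define __always_inline inline __attribute__((__always_inline__))

/* Hypothetical helper just to show the attribute compiles and behaves
 * like a normal function at the source level. */
static __always_inline int add_one(int x)
{
	return x + 1;
}
```

With the attribute, GCC and clang diagnose any call site where inlining is impossible instead of silently emitting a call.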
On Mon, Dec 16, 2019 at 02:17:09PM +0530, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
>
> Set up keys to use pointer authentication within the kernel. The kernel
> will be compiled with APIAKey instructions, the other keys are currently
> unused. Each task is given its own APIAKey, which is initialized during
> fork. The key is changed during context switch and on kernel entry from
> EL0.
>
> The keys for idle threads need to be set before calling any C functions,
> because it is not possible to enter and exit a function with different
> keys.
>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> [Amit: Modified secondary cores key structure, comments]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
On Mon, Dec 16, 2019 at 02:17:11PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> new file mode 100644
> index 0000000..3cb06f9
> --- /dev/null
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_COMPILER_H
> +#define __ASM_COMPILER_H
> +
> +#if defined(CONFIG_ARM64_PTR_AUTH)
> +
> +/*
> + * The EL0/EL1 pointer bits used by a pointer authentication code.
> + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
> + */
> +#define ptrauth_user_pac_mask()	GENMASK_ULL(54, vabits_actual)

That's the current behaviour, but I guess we could extend the mask to 63
here without breaking anything, since we don't expect instruction
addresses to be tagged. I also think we should turn TCR_EL1.TBID0 on when
we have PAC present (in a separate patch for both the mask change and the
TCR_EL1 bit, as this may be slightly more controversial, a theoretical
ABI change).

> +#define ptrauth_kernel_pac_mask()	(GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS))

I think the kernel mask should be GENMASK_ULL(63, vabits_actual); there is
no need to skip bit 55 since it's 1 already. With regard to VA_BITS (a
constant), I'm not sure that's correct: ARMv8.2-LVA (52-bit VA) is an
optional feature and I don't think PAC in ARMv8.3 mandates it.

--
Catalin
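The mask arithmetic in question is easy to check in isolation. The sketch below copies the GENMASK_ULL definition and compares the posted kernel mask with the suggested form, for a hypothetical vabits_actual of 48 (the values are illustrative; on real hardware vabits_actual depends on the VA configuration):

```c
#include <assert.h>
#include <stdint.h>

/* Local copy of the kernel's GENMASK_ULL(h, l): bits h down to l set. */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/* Kernel PAC mask as posted: bits 63:56 plus 54:vabits, skipping bit 55. */
static uint64_t kernel_pac_mask_posted(unsigned int vabits)
{
	return GENMASK_ULL(63, 56) | GENMASK_ULL(54, vabits);
}

/* Suggested form: bit 55 is already 1 in kernel addresses, so the mask
 * can simply cover bits 63:vabits. */
static uint64_t kernel_pac_mask_suggested(unsigned int vabits)
{
	return GENMASK_ULL(63, vabits);
}

/* Stripping a PAC from a kernel address ORs the mask back in, since all
 * the top bits of a kernel VA are 1. */
static uint64_t strip_kernel_pac(uint64_t ptr, unsigned int vabits)
{
	return ptr | kernel_pac_mask_suggested(vabits);
}
```

For vabits = 48 the two forms differ only in bit 55, which is always 1 in a kernel pointer anyway, so ORing either mask back in restores the same address.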
On Mon, Dec 16, 2019 at 02:17:12PM +0530, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
>
> When we enable pointer authentication in the kernel, LR values saved to
> the stack will have a PAC which we must strip in order to retrieve the
> real return address.
>
> Strip PACs when unwinding the stack in order to account for this.
>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> [Amit: Re-position ptrauth_strip_insn_pac, comment]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
On Mon, Dec 16, 2019 at 02:17:13PM +0530, Amit Daniel Kachhap wrote:
> lr is printed with %pS, which will try to find an entry in kallsyms.
> After enabling pointer authentication, this match will fail due to the
> PAC present in the lr.
>
> Strip the PAC from the lr to display the correct symbol name.
>
> Suggested-by: James Morse <james.morse@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
On Mon, Dec 16, 2019 at 02:17:14PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
> index 7b2f2e6..a6e9460 100644
> --- a/arch/arm64/kernel/sleep.S
> +++ b/arch/arm64/kernel/sleep.S
> @@ -2,6 +2,7 @@
>  #include <linux/errno.h>
>  #include <linux/linkage.h>
>  #include <asm/asm-offsets.h>
> +#include <asm/asm_pointer_auth.h>
>  #include <asm/assembler.h>
>  #include <asm/smp.h>
>
> @@ -139,6 +140,11 @@ ENTRY(_cpu_resume)
>  	bl	kasan_unpoison_task_stack_below
>  #endif
>
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +	get_current_task x1
> +	ptrauth_keys_install_kernel x1, x2, x3, x4
> +#endif

We should initialise the keys earlier, since
kasan_unpoison_task_stack_below() is a C function.

--
Catalin
On 1/16/20 9:54 PM, Catalin Marinas wrote:
> On Mon, Dec 16, 2019 at 02:17:08PM +0530, Amit Daniel Kachhap wrote:
>> [...]
>> +ENTRY(__cpu_secondary_checkptrauth)
>> [...]
>> +ENDPROC(__cpu_secondary_checkptrauth)
>
> Do we actually need to park secondary CPUs early? Let's say a secondary
> CPU doesn't have PAC, __cpu_setup won't set the corresponding SCTLR_EL1
> bits and the instructions are NOPs. Wouldn't the cpufeature framework
> park it later anyway?

In the current cpufeature framework, such a missing cpufeature in a
secondary cpu leads to a kernel panic (inside check_early_cpufeatures)
rather than the cpu being taken offline. However, Kristina added support
for parking it in her RFC v2 [1]. That work was later dropped when
enabling ptrauth was moved to assembly; Suzuki provided template code for
doing it [2].

Later, James suggested doing this like the existing
__cpu_secondary_check52bitva, which parks the secondary cpu very early and
also avoids wasting cpu cycles [3].

So your question is still valid: it could be done in cpufeature. Let me
know which approach you think is better.

[1]: https://lore.kernel.org/linux-arm-kernel/20190529190332.29753-4-kristina.martsenko@arm.com/
[2]: https://lore.kernel.org/linux-arm-kernel/9886324a-5a12-5dd8-b84c-3f32098e3d35@arm.com/
[3]: https://www.spinics.net/lists/arm-kernel/msg763622.html
On Mon, Dec 16, 2019 at 02:17:15PM +0530, Amit Daniel Kachhap wrote:
> This patch disables probing of the authenticate ptrauth instructions,
> which fall in the HINT instruction space. This is done to disallow
> probing of instructions which may lead to ptrauth faults.
>
> The corresponding PAC-inserting ptrauth instructions are not disabled,
> as they are typically the first instruction in a function, so disabling
> them would disable probing of the function itself. Also, inserting a PAC
> does not cause any exception by itself.

Neither does AUTIASP in v8.3; only the subsequent dereferencing does. So
why can kprobes cope with PACIASP but not AUTIASP?

--
Catalin
On Mon, Dec 16, 2019 at 02:17:16PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include
> index d4adfbe..cc465dd 100644
> --- a/scripts/Kconfig.include
> +++ b/scripts/Kconfig.include
> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
>  # Return y if the linker supports <flag>, n otherwise
>  ld-option = $(success,$(LD) -v $(1))
>
> +# $(as-option,<flag>)
> +# Return y if the assembler supports <flag>, n otherwise
> +as-option = $(success, $(CC) $(CLANG_FLAGS) $(1) -E -x assembler /dev/null -o /dev/null)

I experimented with this for MTE and noticed that clang does not honour
the -Wa, option (which you use in a subsequent patch). So I'm not sure
how useful as-option is.

--
Catalin
On Mon, Dec 16, 2019 at 02:17:17PM +0530, Amit Daniel Kachhap wrote: > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig > index 06b5025..f0798b7 100644 > --- a/arch/arm64/Kconfig > +++ b/arch/arm64/Kconfig > @@ -1466,6 +1466,7 @@ config ARM64_PTR_AUTH > bool "Enable support for pointer authentication" > default y > depends on !KVM || ARM64_VHE > + depends on GCC_VERSION >= 70000 || CLANG_VERSION >= 80000 Please don't add checks on compiler versions. Use cc-option when you need a certain option rather than guessing which compiler version supports it. People may do backports of different features, so the version is not relevant. > help > Pointer authentication (part of the ARMv8.3 Extensions) provides > instructions for signing and authenticating pointers against secret > @@ -1473,11 +1474,17 @@ config ARM64_PTR_AUTH > and other attacks. > > This option enables these instructions at EL0 (i.e. for userspace). > - > Choosing this option will cause the kernel to initialise secret keys > for each process at exec() time, with these keys being > context-switched along with the process. > > + If the compiler supports the -mbranch-protection or > + -msign-return-address flag (e.g. GCC 7 or later), then this option > + will also cause the kernel itself to be compiled with return address > + protection. In this case, and if the target hardware is known to > + support pointer authentication, then CONFIG_STACKPROTECTOR can be > + disabled with minimal loss of protection. > + > The feature is detected at runtime. If the feature is not present in > hardware it will not be advertised to userspace/KVM guest nor will it > be enabled. However, KVM guest also require VHE mode and hence > @@ -1488,6 +1495,18 @@ config ARM64_PTR_AUTH > have address auth and the late CPU has then system panic will occur. > On such a system, this option should not be selected. 
> > +config CC_HAS_BRANCH_PROT_PAC_RET > + # GCC 9 or later, clang 8 or later > + def_bool $(cc-option,-mbranch-protection=pac-ret+leaf) > + > +config CC_HAS_SIGN_RETURN_ADDRESS > + # GCC 7, 8 > + def_bool $(cc-option,-msign-return-address=all) > + > +config AS_HAS_PAC > + def_bool $(as-option,-Wa,-march=armv8.3-a) > + depends on CC_IS_CLANG First, as I commented on the previous patch, clang seems to ignore -Wa, so you can write whatever you want after it and it seems to be always true. Second, if you need the assembler to support certain features, it needs to be checked irrespective of whether the compiler is gcc or clang. Binutils is a separate package. > + > endmenu > > config ARM64_SVE > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile > index 1fbe24d..6a1da59 100644 > --- a/arch/arm64/Makefile > +++ b/arch/arm64/Makefile > @@ -72,6 +72,17 @@ stack_protector_prepare: prepare0 > include/generated/asm-offsets.h)) > endif > > +ifeq ($(CONFIG_ARM64_PTR_AUTH),y) > +branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all > +branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf > +# -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the compiler > +# to generate them and consequently to break the single image contract we pass it > +# only to the assembler when clang is selected as a compiler. For the GNU toolchain > +# this option is not used. > +branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a > +KBUILD_CFLAGS += $(branch-prot-flags-y) > +endif So, does this actually work with clang? Please check the clang issue in case I'm mistaken. Otherwise, you could use as-instr as in this patch: https://lore.kernel.org/linux-arm-kernel/20200115113008.3334-3-catalin.marinas@arm.com/ Also Will had a preference for warning during build if the user requested a feature in .config (i.e. PAC) but the compiler/assembler does not support it (that was for the LSE patch above). 
You could attempt something similar with this patch. -- Catalin
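Catalin's advice to probe the compiler rather than match version numbers is exactly what Kconfig's $(cc-option,...) macro does. As a rough standalone illustration (simplified: the real macro in scripts/Kconfig.include also passes $(CLANG_FLAGS); the flag names below are only examples), the probe boils down to:

```shell
# Minimal re-creation of Kconfig's $(cc-option,<flag>) probe: run the
# preprocessor with the flag and report y/n based on the exit status.
cc_option() {
    if ${CC:-cc} -Werror "$@" -E -x c /dev/null -o /dev/null 2>/dev/null; then
        echo y
    else
        echo n
    fi
}

# A plain preprocessor define is accepted by any working compiler,
# while an unknown option makes the probe report n.
echo "probe -DFOO: $(cc_option -DFOO)"
echo "probe bogus flag: $(cc_option --not-a-real-flag)"
```

This is why version checks are unnecessary: a backported -mbranch-protection simply makes the probe succeed, whatever the version string says.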
On Mon, Dec 16, 2019 at 02:17:18PM +0530, Amit Daniel Kachhap wrote: > This test is specific for arm64. When in-kernel Pointer Authentication > config is enabled, the return address stored in the stack is signed. > This feature helps in ROP kind of attack. If any parameters used to > generate the pac (<key, sp, lr>) is modified then this will fail in > the authentication stage and will lead to abort. > > This test changes the input parameter APIA kernel keys to cause abort. > The pac computed from the new key can be same as last due to hash > collision so this is retried for few times as there is no reliable way > to compare the pacs. Even though this test may fail even after retries > but this may cause authentication failure at a later stage in earlier > function returns. > > This test can be invoked as, > echo CORRUPT_PAC > /sys/kernel/debug/provoke-crash/DIRECT > > or as below if inserted as a module, > insmod lkdtm.ko cpoint_name=DIRECT cpoint_type=CORRUPT_PAC cpoint_count=1 > > [ 13.118166] lkdtm: Performing direct entry CORRUPT_PAC > [ 13.118298] lkdtm: Clearing PAC from the return address > [ 13.118466] Unable to handle kernel paging request at virtual address bfff8000108648ec > [ 13.118626] Mem abort info: > [ 13.118666] ESR = 0x86000004 > [ 13.118866] EC = 0x21: IABT (current EL), IL = 32 bits > [ 13.118966] SET = 0, FnV = 0 > [ 13.119117] EA = 0, S1PTW = 0 > > Cc: Kees Cook <keescook@chromium.org> > Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
On Fri, Jan 17, 2020 at 04:13:06PM +0530, Amit Kachhap wrote: > On 1/16/20 9:54 PM, Catalin Marinas wrote: > > On Mon, Dec 16, 2019 at 02:17:08PM +0530, Amit Daniel Kachhap wrote: > > > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S > > > index 5aaf1bb..c59c28f 100644 > > > --- a/arch/arm64/kernel/head.S > > > +++ b/arch/arm64/kernel/head.S [...] > > > +ENTRY(__cpu_secondary_checkptrauth) > > > +#ifdef CONFIG_ARM64_PTR_AUTH > > > + /* Check if the CPU supports ptrauth */ > > > + mrs x2, id_aa64isar1_el1 > > > + ubfx x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8 > > > + cbnz x2, 1f > > > +alternative_if ARM64_HAS_ADDRESS_AUTH > > > + mov x3, 1 > > > +alternative_else > > > + mov x3, 0 > > > +alternative_endif > > > + cbz x3, 1f > > > + /* Park the mismatched secondary CPU */ > > > + early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH > > > +#endif > > > +1: ret > > > +ENDPROC(__cpu_secondary_checkptrauth) > > > > Do we actually need to park secondary CPUs early? Let's say a secondary > > CPU doesn't have PAC, __cpu_setup won't set the corresponding SCTLR_EL1 > > bits and the instructions are NOPs. Wouldn't the cpufeature framework > > park it later anyway? > > In the current cpufeature framework, such missing cpufeature in > secondary cpu will lead to kernel panic (inside check_early_cpufeatures) > and not cpu offline. However Kristina in her RFC V2 [1] added such > feature to park it. I remember discussing how to avoid the kernel panic with her at the time. > Later for moving the enabling ptrauth to assembly this work got dropped. > Suzuki provided the template code for doing that [2]. > > Later James suggested to do this like existing > __cpu_secondary_check52bitva which parks the secondary cpu very early > and also to save wasted cpu cycles [3]. I don't really care about a few cycles lost during boot. > So your question is still valid that it can be done in cpufeature. Let > me know your opinion that which one is better. My preference is for Kristina's approach. 
The 52-bit VA is slightly different (as is VHE) as we cannot guarantee the secondary CPU to even reach the CPU framework. With PAC, I don't see why it would fail reaching the C code, so I'd prefer a more readable C implementation than the assembler one. Anyway, I'm open to counterarguments here. -- Catalin
On Fri, Jan 17, 2020 at 8:33 PM Catalin Marinas <catalin.marinas@arm.com> wrote: > > On Mon, Dec 16, 2019 at 02:17:16PM +0530, Amit Daniel Kachhap wrote: > > diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include > > index d4adfbe..cc465dd 100644 > > --- a/scripts/Kconfig.include > > +++ b/scripts/Kconfig.include > > @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de > > # Return y if the linker supports <flag>, n otherwise > > ld-option = $(success,$(LD) -v $(1)) > > > > +# $(as-option,<flag>) > > +# Return y if the assembler supports <flag>, n otherwise > > +as-option = $(success, $(CC) $(CLANG_FLAGS) $(1) -E -x assembler /dev/null -o /dev/null) > > I had different experiments with this for MTE and noticed that clang > does not honour the -Wa, option (which you use in a subsequent patch). > So not sure how useful as-option is. I think this is because it uses the '-E' option. To invoke the assembler, -c is needed. I replaced -E with -c and tested it; it seems to work for both gcc and clang. I noticed a similar case for cc-option: https://patchwork.kernel.org/patch/11339567/ -- Best Regards Masahiro Yamada
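Masahiro's -E vs -c point can be reproduced outside Kconfig. A sketch of the fixed probe (using the host cc; the bogus flag name is purely illustrative): with -E only the preprocessor runs, so -Wa,<flag> is never exercised, whereas -c forces the driver to invoke the assembler and an unsupported flag makes the probe fail as intended.

```shell
# Re-creation of the fixed as-option probe: -c (rather than -E) makes
# the compiler driver actually run the assembler on an empty input,
# so any -Wa,<flag> is really passed down and validated.
as_option() {
    if ${CC:-cc} -Werror "$@" -c -x assembler /dev/null -o /dev/null 2>/dev/null; then
        echo y
    else
        echo n
    fi
}

echo "baseline (no flag): $(as_option)"
echo "bogus -Wa flag: $(as_option -Wa,--not-a-real-as-flag)"
```

With the original -E variant, the second probe would report y on clang, which is the false positive Catalin observed.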
Hi Catalin, On 17/01/2020 11:33, Catalin Marinas wrote: > On Mon, Dec 16, 2019 at 02:17:16PM +0530, Amit Daniel Kachhap wrote: >> diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include >> index d4adfbe..cc465dd 100644 >> --- a/scripts/Kconfig.include >> +++ b/scripts/Kconfig.include >> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de >> # Return y if the linker supports <flag>, n otherwise >> ld-option = $(success,$(LD) -v $(1)) >> >> +# $(as-option,<flag>) >> +# Return y if the assembler supports <flag>, n otherwise >> +as-option = $(success, $(CC) $(CLANG_FLAGS) $(1) -E -x assembler /dev/null -o /dev/null) > > I had different experiments with this for MTE and noticed that clang > does not honour the -Wa, option (which you use in a subsequent patch). > So not sure how useful as-option is. > I am not sure about your experiments for MTE, but here the kernel is built with clang while still using the GNU assembler. The -Wa, option is passed down to the (GNU) assembler, which honors it. That said, I agree with Masahiro Yamada that the "-c" option should be used in place of "-E", and in that sense I provided a patch to Amit last December which should be integrated in the next series (Amit can confirm, I hope ;) ). -- Regards, Vincenzo
On 1/16/20 11:29 PM, Catalin Marinas wrote: > On Mon, Dec 16, 2019 at 02:17:10PM +0530, Amit Daniel Kachhap wrote: >> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h >> index aa956ca..0f89f59 100644 >> --- a/arch/arm64/include/asm/pointer_auth.h >> +++ b/arch/arm64/include/asm/pointer_auth.h >> @@ -60,6 +60,12 @@ static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys) >> get_random_bytes(&keys->apia, sizeof(keys->apia)); >> } >> >> +static inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys) > > I think we should use __always_inline here, just in case the compiler > ignores the hint. Yes, agreed. The same applies to the other function, ptrauth_keys_init_kernel. > > Otherwise: > > Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Thanks.
On 1/17/20 10:14 AM, Catalin Marinas wrote: > On Mon, Dec 16, 2019 at 02:17:11PM +0530, Amit Daniel Kachhap wrote: >> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h >> new file mode 100644 >> index 0000000..3cb06f9 >> --- /dev/null >> +++ b/arch/arm64/include/asm/compiler.h >> @@ -0,0 +1,20 @@ >> +/* SPDX-License-Identifier: GPL-2.0 */ >> +#ifndef __ASM_COMPILER_H >> +#define __ASM_COMPILER_H >> + >> +#if defined(CONFIG_ARM64_PTR_AUTH) >> + >> +/* >> + * The EL0/EL1 pointer bits used by a pointer authentication code. >> + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply. >> + */ >> +#define ptrauth_user_pac_mask() GENMASK_ULL(54, vabits_actual) > > That's the current behaviour but I guess we could extend the mask to 63 > here without breaking anything since we don't expect instruction > addresses to be tagged. I also think we should turn TCR_EL1.TBID0 on > when we have PAC present (in a separate patch for both the mask change > and the TCR_EL1 bit as this may be slightly more controversial, a > theoretical ABI change). Ok. For this there will then have to be two masks, as ptrace passes both masks to userspace. #define ptrauth_user_ins_pac_mask() GENMASK_ULL(63, vabits_actual) #define ptrauth_user_data_pac_mask() GENMASK_ULL(54, vabits_actual) > >> +#define ptrauth_kernel_pac_mask() (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS)) > > I think the kernel mask should be GENMASK_ULL(63, vabits_actual), no > need to skip bit 55 since it's 1 already. > > With regards to VA_BITS (a constant), I'm not sure that's correct. > ARMv8.2-LVA (52-bit VA) is an optional feature and I don't think PAC in > 8.3 mandates it. Yes, thanks for the correction.
On 1/17/20 11:16 AM, Catalin Marinas wrote: > On Mon, Dec 16, 2019 at 02:17:15PM +0530, Amit Daniel Kachhap wrote: >> This patch disables the probing of authenticate ptrauth >> instruction which falls under the hint instructions region. >> This is done to disallow probe of instruction which may lead >> to ptrauth faults. >> >> The corresponding append pac ptrauth instruction is not >> disabled as they are typically the first instruction in the >> function so disabling them will be disabling the function >> probe itself. Also, appending pac do not cause any exception >> in itself. > > Neither does AUTIASP in v8.3, only the subsequent dereferencing, so why > can kprobes cope with PACIASP but not AUTIASP? Ok, I will add this patch as part of FPAC in ARMv8.6 ptrauth.
On 1/17/20 12:00 PM, Catalin Marinas wrote: > On Fri, Jan 17, 2020 at 04:13:06PM +0530, Amit Kachhap wrote: >> On 1/16/20 9:54 PM, Catalin Marinas wrote: >>> On Mon, Dec 16, 2019 at 02:17:08PM +0530, Amit Daniel Kachhap wrote: >>>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S >>>> index 5aaf1bb..c59c28f 100644 >>>> --- a/arch/arm64/kernel/head.S >>>> +++ b/arch/arm64/kernel/head.S > [...] >>>> +ENTRY(__cpu_secondary_checkptrauth) >>>> +#ifdef CONFIG_ARM64_PTR_AUTH >>>> + /* Check if the CPU supports ptrauth */ >>>> + mrs x2, id_aa64isar1_el1 >>>> + ubfx x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8 >>>> + cbnz x2, 1f >>>> +alternative_if ARM64_HAS_ADDRESS_AUTH >>>> + mov x3, 1 >>>> +alternative_else >>>> + mov x3, 0 >>>> +alternative_endif >>>> + cbz x3, 1f >>>> + /* Park the mismatched secondary CPU */ >>>> + early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH >>>> +#endif >>>> +1: ret >>>> +ENDPROC(__cpu_secondary_checkptrauth) >>> >>> Do we actually need to park secondary CPUs early? Let's say a secondary >>> CPU doesn't have PAC, __cpu_setup won't set the corresponding SCTLR_EL1 >>> bits and the instructions are NOPs. Wouldn't the cpufeature framework >>> park it later anyway? >> >> In the current cpufeature framework, such missing cpufeature in >> secondary cpu will lead to kernel panic (inside check_early_cpufeatures) >> and not cpu offline. However Kristina in her RFC V2 [1] added such >> feature to park it. > > I remember discussing how to avoid the kernel panic with her at the > time. > >> Later for moving the enabling ptrauth to assembly this work got dropped. >> Suzuki provided the template code for doing that [2]. >> >> Later James suggested to do this like existing >> __cpu_secondary_check52bitva which parks the secondary cpu very early >> and also to save wasted cpu cycles [3]. > > I don't really care about a few cycles lost during boot. > >> So your question is still valid that it can be done in cpufeature. 
Let >> me know your opinion that which one is better. > > My preference is for Kristina's approach. The 52-bit VA is slightly > different (as is VHE) as we cannot guarantee the secondary CPU to even > reach the CPU framework. With PAC, I don't see why it would fail > reaching the C code, so I'd prefer a more readable C implementation than > the assembler one. Ok, I will use this approach in my next iteration. > > Anyway, I'm open to counterarguments here.
On 1/17/20 11:49 AM, Catalin Marinas wrote: > On Mon, Dec 16, 2019 at 02:17:17PM +0530, Amit Daniel Kachhap wrote: >> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig >> index 06b5025..f0798b7 100644 >> --- a/arch/arm64/Kconfig >> +++ b/arch/arm64/Kconfig >> @@ -1466,6 +1466,7 @@ config ARM64_PTR_AUTH >> bool "Enable support for pointer authentication" >> default y >> depends on !KVM || ARM64_VHE >> + depends on GCC_VERSION >= 70000 || CLANG_VERSION >= 80000 > > Please don't add checks on compiler versions. Use cc-option when you > need a certain option rather than guessing which compiler version > supports it. People may do backports of different features, so the > version is not relevant. ok this is fixed locally. > >> help >> Pointer authentication (part of the ARMv8.3 Extensions) provides >> instructions for signing and authenticating pointers against secret >> @@ -1473,11 +1474,17 @@ config ARM64_PTR_AUTH >> and other attacks. >> >> This option enables these instructions at EL0 (i.e. for userspace). >> - >> Choosing this option will cause the kernel to initialise secret keys >> for each process at exec() time, with these keys being >> context-switched along with the process. >> >> + If the compiler supports the -mbranch-protection or >> + -msign-return-address flag (e.g. GCC 7 or later), then this option >> + will also cause the kernel itself to be compiled with return address >> + protection. In this case, and if the target hardware is known to >> + support pointer authentication, then CONFIG_STACKPROTECTOR can be >> + disabled with minimal loss of protection. >> + >> The feature is detected at runtime. If the feature is not present in >> hardware it will not be advertised to userspace/KVM guest nor will it >> be enabled. However, KVM guest also require VHE mode and hence >> @@ -1488,6 +1495,18 @@ config ARM64_PTR_AUTH >> have address auth and the late CPU has then system panic will occur. >> On such a system, this option should not be selected. 
>> >> +config CC_HAS_BRANCH_PROT_PAC_RET >> + # GCC 9 or later, clang 8 or later >> + def_bool $(cc-option,-mbranch-protection=pac-ret+leaf) >> + >> +config CC_HAS_SIGN_RETURN_ADDRESS >> + # GCC 7, 8 >> + def_bool $(cc-option,-msign-return-address=all) >> + >> +config AS_HAS_PAC >> + def_bool $(as-option,-Wa,-march=armv8.3-a) >> + depends on CC_IS_CLANG > > First, as I commented on the previous patch, clang seems to ignore -Wa, > so you can write whatever you want after it and it seems to be always > true. > > Second, if you need the assembler to support certain features, it needs > to be checked irrespective of whether the compiler is gcc or clang. > Binutils is a separate package. Ok, agreed and done locally. > >> + >> endmenu >> >> config ARM64_SVE >> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile >> index 1fbe24d..6a1da59 100644 >> --- a/arch/arm64/Makefile >> +++ b/arch/arm64/Makefile >> @@ -72,6 +72,17 @@ stack_protector_prepare: prepare0 >> include/generated/asm-offsets.h)) >> endif >> >> +ifeq ($(CONFIG_ARM64_PTR_AUTH),y) >> +branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all >> +branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf >> +# -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the compiler >> +# to generate them and consequently to break the single image contract we pass it >> +# only to the assembler when clang is selected as a compiler. For the GNU toolchain >> +# this option is not used. >> +branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a >> +KBUILD_CFLAGS += $(branch-prot-flags-y) >> +endif > > So, does this actually work with clang? Yes, this works with clang. If I add -v to KBUILD_CFLAGS, the output shows that the -Wa, arguments are passed to the GNU assembler and that the clang compiler itself does not use them. > > Please check the clang issue in case I'm mistaken.
Otherwise, you could > use as-instr as in this patch: > > https://lore.kernel.org/linux-arm-kernel/20200115113008.3334-3-catalin.marinas@arm.com/ > > Also Will had a preference for warning during build if the user > requested a feature in .config (i.e. PAC) but the compiler/assembler > does not support it (that was for the LSE patch above). You could > attempt something similar with this patch. I tried to add warnings like the ones below, along similar lines to the above link:

ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
+  ifneq ($(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS),y)
+    ifneq ($(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET),y)
+$(warning Pointer authentication not supported by compiler)
+    endif
+  endif
+  ifneq ($(CONFIG_AS_HAS_PAC),y)
+$(warning Pointer authentication not supported by assembler)
+  endif
endif

But the issue is that the warnings are printed twice, which becomes confusing: the first warning is computed with the incorrect Kconfig flags and the later one with the correctly computed Kconfig flags. This may be due to arch/arm64/Kconfig being sourced twice.
On 21/01/2020 14:37, Amit Kachhap wrote: [...] >> >> Also Will had a preference for warning during build if the user >> requested a feature in .config (i.e. PAC) but the compiler/assembler >> does not support it (that was for the LSE patch above). You could >> attempt something similar with this patch. > > I tried to add warnings like below which are in similar line to the above link, > > ifeq ($(CONFIG_ARM64_PTR_AUTH),y) > + ifneq ($(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS),y) > + ifneq ($(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET),y) > +$(warning Pointer authentication not supported by compiler) > + endif > + endif > + ifneq ($(CONFIG_AS_HAS_PAC),y) > +$(warning Pointer authentication not supported by assembler) > + endif > endif > > But the issue is that warnings are printed twice and becomes confusing. First > warning computed with the incorrect Kconfig flags and later with the correct > computed Kconfig flags. This may be due to arch/arm64/Kconfig sourced twice. > This seems similar to the problem we had on vDSOs some time ago. The main Makefile includes the arch-specific one twice: once for generating the build scripts and once for building the kernel. As a consequence, the arch Kconfig is sourced twice. For the vDSOs we discarded the warning because it was causing confusion. -- Regards, Vincenzo
On Tue, Jan 21, 2020 at 02:37:44PM +0000, Amit Kachhap wrote: > On 1/17/20 11:49 AM, Catalin Marinas wrote: > > Also Will had a preference for warning during build if the user > > requested a feature in .config (i.e. PAC) but the compiler/assembler > > does not support it (that was for the LSE patch above). You could > > attempt something similar with this patch. > > I tried to add warnings like below which are in similar line to the above > link, > > ifeq ($(CONFIG_ARM64_PTR_AUTH),y) > + ifneq ($(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS),y) > + ifneq ($(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET),y) > +$(warning Pointer authentication not supported by compiler) > + endif > + endif > + ifneq ($(CONFIG_AS_HAS_PAC),y) > +$(warning Pointer authentication not supported by assembler) > + endif > endif > > But the issue is that warnings are printed twice and becomes confusing. > First warning computed with the incorrect Kconfig flags and later with the > correct computed Kconfig flags. This may be due to arch/arm64/Kconfig > sourced twice. I think there are two passes over the arch Makefile, hence a warning based on the old config. Maybe there is a way to check which pass this is and skip the check in the first instance. It needs more digging, otherwise we could move the check to kernel/Makefile. Yet another option would be to check whether CONFIG_AS_HAS_PAC in .config is backed by the actual assembler option via the Makefile. It means we duplicate the as-option check but it may be the cleanest. Anyway, for the time being, just get the dependencies right in Kconfig without the warnings. -- Catalin
On Mon, Jan 20, 2020 at 02:20:17PM +0000, Amit Kachhap wrote: > On 1/17/20 10:14 AM, Catalin Marinas wrote: > > On Mon, Dec 16, 2019 at 02:17:11PM +0530, Amit Daniel Kachhap wrote: > > > diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h > > > new file mode 100644 > > > index 0000000..3cb06f9 > > > --- /dev/null > > > +++ b/arch/arm64/include/asm/compiler.h > > > @@ -0,0 +1,20 @@ > > > +/* SPDX-License-Identifier: GPL-2.0 */ > > > +#ifndef __ASM_COMPILER_H > > > +#define __ASM_COMPILER_H > > > + > > > +#if defined(CONFIG_ARM64_PTR_AUTH) > > > + > > > +/* > > > + * The EL0/EL1 pointer bits used by a pointer authentication code. > > > + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply. > > > + */ > > > +#define ptrauth_user_pac_mask() GENMASK_ULL(54, vabits_actual) > > > > That's the current behaviour but I guess we could extend the mask to 63 > > here without breaking anything since we don't expect instruction > > addresses to be tagged. I also think we should turn TCR_EL1.TBID0 on > > when we have PAC present (in a separate patch for both the mask change > > and the TCR_EL1 bit as this may be slightly more controversial, a > > theoretical ABI change). > > ok. For this there has to be 2 mask then as ptrace passes both the masks to > user. > > #define ptrauth_user_ins_pac_mask() GENMASK_ULL(63, vabits_actual) > > #define ptrauth_user_data_pac_mask() GENMASK_ULL(54, vabits_actual) Yes. But as I said, keep this patch as is and change the above in a separate patch, possibly even a separate series. -- Catalin