All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-10-14  5:51 ` Peter Collingbourne
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-10-14  5:51 UTC (permalink / raw)
  To: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany,
	Vincenzo Frascino, Dave Martin
  Cc: Peter Collingbourne, Linux ARM, Kevin Brodsky, Andrey Konovalov,
	Will Deacon, linux-api

This prctl allows the user program to control which PAC keys are enabled
in a particular task. The main reason why this is useful is to enable a
userspace ABI that uses PAC to sign and authenticate function pointers
and other pointers exposed outside of the function, while still allowing
binaries conforming to the ABI to interoperate with legacy binaries that
do not sign or authenticate pointers.

The idea is that a dynamic loader or early startup code would issue
this prctl very early after establishing that a process may load legacy
binaries, but before executing any PAC instructions.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
---
This patch must be applied on top of Catalin's MTE series:

  git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte

Alternatively the series with this patch on top may be
downloaded from Gerrit by following the link above.

v2:
- added prctl(PR_PAC_GET_ENABLED_KEYS)
- added ptrace APIs for getting and setting the set of enabled
  keys
- optimized the instruction sequence for kernel entry/exit
- rebased on top of MTE series

 .../arm64/pointer-authentication.rst          | 27 +++++++++
 arch/arm64/include/asm/pointer_auth.h         | 11 ++++
 arch/arm64/include/asm/processor.h            | 21 ++++++-
 arch/arm64/include/asm/sysreg.h               |  4 +-
 arch/arm64/kernel/asm-offsets.c               |  1 +
 arch/arm64/kernel/entry.S                     | 27 +++++++++
 arch/arm64/kernel/mte.c                       | 39 +++----------
 arch/arm64/kernel/pointer_auth.c              | 56 +++++++++++++++++++
 arch/arm64/kernel/process.c                   | 37 ++++++++++++
 arch/arm64/kernel/ptrace.c                    | 41 ++++++++++++++
 arch/arm64/kernel/setup.c                     | 10 ++++
 include/uapi/linux/elf.h                      |  1 +
 include/uapi/linux/prctl.h                    |  4 ++
 kernel/sys.c                                  | 16 ++++++
 14 files changed, 261 insertions(+), 34 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.rst b/Documentation/arm64/pointer-authentication.rst
index 30b2ab06526b..1f7e064deeb3 100644
--- a/Documentation/arm64/pointer-authentication.rst
+++ b/Documentation/arm64/pointer-authentication.rst
@@ -107,3 +107,30 @@ filter out the Pointer Authentication system key registers from
 KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
 register. Any attempt to use the Pointer Authentication instructions will
 result in an UNDEFINED exception being injected into the guest.
+
+
+Enabling and disabling keys
+---------------------------
+
+The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
+PAC keys are enabled in a particular task. It takes two arguments, the
+first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
+and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
+and the second being a bitmask of the same bits specifying whether the key
+should be enabled or disabled. For example::
+
+  prctl(PR_PAC_SET_ENABLED_KEYS,
+        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
+        PR_PAC_APIBKEY, 0, 0);
+
+disables all keys except the IB key.
+
+The main reason why this is useful is to enable a userspace ABI that uses PAC
+instructions to sign and authenticate function pointers and other pointers
+exposed outside of the function, while still allowing binaries conforming to
+the ABI to interoperate with legacy binaries that do not sign or authenticate
+pointers.
+
+The idea is that a dynamic loader or early startup code would issue this
+prctl very early after establishing that a process may load legacy binaries,
+but before executing any PAC instructions.
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index c6b4f0603024..662a005d9070 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -3,6 +3,7 @@
 #define __ASM_POINTER_AUTH_H
 
 #include <linux/bitops.h>
+#include <linux/prctl.h>
 #include <linux/random.h>
 
 #include <asm/cpufeature.h>
@@ -71,6 +72,11 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
 
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
+extern int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk,
+					  unsigned long keys,
+					  unsigned long enabled);
+extern int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk);
+
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
 	return ptrauth_clear_pac(ptr);
@@ -85,10 +91,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
+#define ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)	(-EINVAL)
+#define ptrauth_prctl_get_enabled_keys(tsk)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
 #define ptrauth_thread_init_kernel(tsk)
 #define ptrauth_thread_switch_kernel(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
+#define PR_PAC_ENABLED_KEYS_MASK                                               \
+	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
+
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index fec204d28fce..68c9d8064759 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -152,11 +152,17 @@ struct thread_struct {
 	struct ptrauth_keys_kernel	keys_kernel;
 #endif
 #ifdef CONFIG_ARM64_MTE
-	u64			sctlr_tcf0;
 	u64			gcr_user_incl;
 #endif
+	u64			sctlr;
 };
 
+extern u64 init_sctlr;
+
+#define SCTLR_TASK_MASK                                                        \
+	(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB |   \
+	 SCTLR_EL1_TCF0_MASK)
+
 static inline void arch_thread_struct_whitelist(unsigned long *offset,
 						unsigned long *size)
 {
@@ -263,6 +269,14 @@ extern void release_thread(struct task_struct *);
 
 unsigned long get_wchan(struct task_struct *p);
 
+#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
+void set_task_sctlr_el1(u64 sctlr);
+#else
+static inline void set_task_sctlr_el1(u64 sctlr)
+{
+}
+#endif
+
 /* Thread switching */
 extern struct task_struct *cpu_switch_to(struct task_struct *prev,
 					 struct task_struct *next);
@@ -317,6 +331,11 @@ extern void __init minsigstksz_setup(void);
 /* PR_PAC_RESET_KEYS prctl */
 #define PAC_RESET_KEYS(tsk, arg)	ptrauth_prctl_reset_keys(tsk, arg)
 
+/* PR_PAC_{SET,GET}_ENABLED_KEYS prctl */
+#define PAC_SET_ENABLED_KEYS(tsk, keys, enabled)				\
+	ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)
+#define PAC_GET_ENABLED_KEYS(tsk) ptrauth_prctl_get_enabled_keys(tsk)
+
 #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
 /* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
 long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 52eefe2f7d95..912150361338 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -548,8 +548,10 @@
 #define SCTLR_ELx_TCF_ASYNC	(UL(0x2) << SCTLR_ELx_TCF_SHIFT)
 #define SCTLR_ELx_TCF_MASK	(UL(0x3) << SCTLR_ELx_TCF_SHIFT)
 
+#define SCTLR_ELx_ENIA_SHIFT	31
+
 #define SCTLR_ELx_ITFSB	(BIT(37))
-#define SCTLR_ELx_ENIA	(BIT(31))
+#define SCTLR_ELx_ENIA	(BIT(SCTLR_ELx_ENIA_SHIFT))
 #define SCTLR_ELx_ENIB	(BIT(30))
 #define SCTLR_ELx_ENDA	(BIT(27))
 #define SCTLR_ELx_EE    (BIT(25))
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7d32fc959b1a..cb49d4f31540 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -48,6 +48,7 @@ int main(void)
   DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
   DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
 #endif
+  DEFINE(THREAD_SCTLR,		offsetof(struct task_struct, thread.sctlr));
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
   DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ff34461524d4..19d24666b529 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -214,6 +214,18 @@ alternative_else_nop_endif
 
 	ptrauth_keys_install_kernel tsk, x20, x22, x23
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	/* Enable IA for in-kernel PAC if the task had it disabled. */
+	ldr	x0, [tsk, THREAD_SCTLR]
+	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
+	mrs	x0, sctlr_el1
+	orr	x0, x0, SCTLR_ELx_ENIA
+	msr	sctlr_el1, x0
+1:
+alternative_else_nop_endif
+#endif
+
 	scs_load tsk, x20
 	.else
 	add	x21, sp, #S_FRAME_SIZE
@@ -332,6 +344,21 @@ alternative_else_nop_endif
 	/* No kernel C function calls after this as user keys are set. */
 	ptrauth_keys_install_user tsk, x0, x1, x2
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	/*
+	 * IA was enabled for in-kernel PAC. Disable it now if needed.
+	 * All other per-task SCTLR bits were updated on task switch.
+	 */
+	ldr	x0, [tsk, THREAD_SCTLR]
+	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
+	mrs	x0, sctlr_el1
+	bic	x0, x0, SCTLR_ELx_ENIA
+	msr	sctlr_el1, x0
+1:
+alternative_else_nop_endif
+#endif
+
 	apply_ssbd 0, x0, x1
 	.endif
 
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 52a0638ed967..60238a5e3c82 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -72,26 +72,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
 	return ret;
 }
 
-static void update_sctlr_el1_tcf0(u64 tcf0)
-{
-	/* ISB required for the kernel uaccess routines */
-	sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCF0_MASK, tcf0);
-	isb();
-}
-
-static void set_sctlr_el1_tcf0(u64 tcf0)
-{
-	/*
-	 * mte_thread_switch() checks current->thread.sctlr_tcf0 as an
-	 * optimisation. Disable preemption so that it does not see
-	 * the variable update before the SCTLR_EL1.TCF0 one.
-	 */
-	preempt_disable();
-	current->thread.sctlr_tcf0 = tcf0;
-	update_sctlr_el1_tcf0(tcf0);
-	preempt_enable();
-}
-
 static void update_gcr_el1_excl(u64 incl)
 {
 	u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
@@ -120,8 +100,6 @@ void flush_mte_state(void)
 	dsb(ish);
 	write_sysreg_s(0, SYS_TFSRE0_EL1);
 	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
-	/* disable tag checking */
-	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
 	/* reset tag generation mask */
 	set_gcr_el1_excl(0);
 }
@@ -131,9 +109,6 @@ void mte_thread_switch(struct task_struct *next)
 	if (!system_supports_mte())
 		return;
 
-	/* avoid expensive SCTLR_EL1 accesses if no change */
-	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
-		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
 	update_gcr_el1_excl(next->thread.gcr_user_incl);
 }
 
@@ -147,7 +122,7 @@ void mte_suspend_exit(void)
 
 long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 {
-	u64 tcf0;
+	u64 sctlr = task->thread.sctlr & ~SCTLR_EL1_TCF0_MASK;
 	u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
 
 	if (!system_supports_mte())
@@ -155,23 +130,23 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 
 	switch (arg & PR_MTE_TCF_MASK) {
 	case PR_MTE_TCF_NONE:
-		tcf0 = SCTLR_EL1_TCF0_NONE;
+		sctlr |= SCTLR_EL1_TCF0_NONE;
 		break;
 	case PR_MTE_TCF_SYNC:
-		tcf0 = SCTLR_EL1_TCF0_SYNC;
+		sctlr |= SCTLR_EL1_TCF0_SYNC;
 		break;
 	case PR_MTE_TCF_ASYNC:
-		tcf0 = SCTLR_EL1_TCF0_ASYNC;
+		sctlr |= SCTLR_EL1_TCF0_ASYNC;
 		break;
 	default:
 		return -EINVAL;
 	}
 
 	if (task != current) {
-		task->thread.sctlr_tcf0 = tcf0;
+		task->thread.sctlr = sctlr;
 		task->thread.gcr_user_incl = gcr_incl;
 	} else {
-		set_sctlr_el1_tcf0(tcf0);
+		set_task_sctlr_el1(sctlr);
 		set_gcr_el1_excl(gcr_incl);
 	}
 
@@ -187,7 +162,7 @@ long get_mte_ctrl(struct task_struct *task)
 
 	ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
 
-	switch (task->thread.sctlr_tcf0) {
+	switch (task->thread.sctlr & SCTLR_EL1_TCF0_MASK) {
 	case SCTLR_EL1_TCF0_NONE:
 		return PR_MTE_TCF_NONE;
 	case SCTLR_EL1_TCF0_SYNC:
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index 1e77736a4f66..0e763e8babd8 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include <linux/compat.h>
 #include <linux/errno.h>
 #include <linux/prctl.h>
 #include <linux/random.h>
@@ -42,3 +43,58 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 
 	return 0;
 }
+
+static u64 arg_to_enxx_mask(unsigned long arg)
+{
+	u64 sctlr_enxx_mask = 0;
+	if (arg & PR_PAC_APIAKEY)
+		sctlr_enxx_mask |= SCTLR_ELx_ENIA;
+	if (arg & PR_PAC_APIBKEY)
+		sctlr_enxx_mask |= SCTLR_ELx_ENIB;
+	if (arg & PR_PAC_APDAKEY)
+		sctlr_enxx_mask |= SCTLR_ELx_ENDA;
+	if (arg & PR_PAC_APDBKEY)
+		sctlr_enxx_mask |= SCTLR_ELx_ENDB;
+	return sctlr_enxx_mask;
+}
+
+int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
+				   unsigned long enabled)
+{
+	u64 sctlr = tsk->thread.sctlr;
+
+	if (!system_supports_address_auth() || is_compat_task())
+		return -EINVAL;
+
+	if ((keys & ~PR_PAC_ENABLED_KEYS_MASK) || (enabled & ~keys))
+		return -EINVAL;
+
+	sctlr &= ~arg_to_enxx_mask(keys);
+	sctlr |= arg_to_enxx_mask(enabled);
+	if (tsk == current) {
+		set_task_sctlr_el1(sctlr);
+	} else {
+		tsk->thread.sctlr = sctlr;
+	}
+
+	return 0;
+}
+
+int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk)
+{
+	int retval = 0;
+
+	if (!system_supports_address_auth() || is_compat_task())
+		return -EINVAL;
+
+	if (tsk->thread.sctlr & SCTLR_ELx_ENIA)
+		retval |= PR_PAC_APIAKEY;
+	if (tsk->thread.sctlr & SCTLR_ELx_ENIB)
+		retval |= PR_PAC_APIBKEY;
+	if (tsk->thread.sctlr & SCTLR_ELx_ENDA)
+		retval |= PR_PAC_APDAKEY;
+	if (tsk->thread.sctlr & SCTLR_ELx_ENDB)
+		retval |= PR_PAC_APDBKEY;
+
+	return retval;
+}
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 05a9cdd0b471..7fb28ccdf862 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
 	write_sysreg(val, cntkctl_el1);
 }
 
+#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
+static void update_sctlr_el1(u64 sctlr)
+{
+	/*
+	 * EnIA must not be cleared while in the kernel as this is necessary for
+	 * in-kernel PAC. It will be cleared on kernel exit if needed.
+	 */
+	sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
+
+	/* ISB required for the kernel uaccess routines when setting TCF0. */
+	isb();
+}
+
+void set_task_sctlr_el1(u64 sctlr)
+{
+	/*
+	 * __switch_to() checks current->thread.sctlr as an
+	 * optimisation. Disable preemption so that it does not see
+	 * the variable update before the SCTLR_EL1 one.
+	 */
+	preempt_disable();
+	current->thread.sctlr = sctlr;
+	update_sctlr_el1(sctlr);
+	preempt_enable();
+}
+#else
+static void update_sctlr_el1(u64 sctlr)
+{
+}
+#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
+
 /*
  * Thread switching.
  */
@@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	 */
 	dsb(ish);
 
+	/* avoid expensive SCTLR_EL1 accesses if no change */
+	if (prev->thread.sctlr != next->thread.sctlr)
+		update_sctlr_el1(next->thread.sctlr);
+
 	/*
 	 * MTE thread switching must happen after the DSB above to ensure that
 	 * any asynchronous tag check faults have been logged in the TFSR*_EL1
@@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
 void arch_setup_new_exec(void)
 {
 	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
+	if (current->thread.sctlr != init_sctlr)
+		set_task_sctlr_el1(init_sctlr);
 
 	ptrauth_thread_init_user(current);
 }
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index f49b349e16a3..2ed17fb07666 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -911,6 +911,38 @@ static int pac_mask_get(struct task_struct *target,
 	return membuf_write(&to, &uregs, sizeof(uregs));
 }
 
+static int pac_enabled_keys_get(struct task_struct *target,
+				const struct user_regset *regset,
+				struct membuf to)
+{
+	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
+
+	if (IS_ERR_VALUE(enabled_keys))
+		return enabled_keys;
+
+	return membuf_write(&to, &enabled_keys, sizeof(enabled_keys));
+}
+
+static int pac_enabled_keys_set(struct task_struct *target,
+				const struct user_regset *regset,
+				unsigned int pos, unsigned int count,
+				const void *kbuf, const void __user *ubuf)
+{
+	int ret;
+	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
+
+	if (IS_ERR_VALUE(enabled_keys))
+		return enabled_keys;
+
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+				 &enabled_keys, 0, -1);
+	if (ret)
+		return ret;
+
+	return ptrauth_prctl_set_enabled_keys(target, PR_PAC_ENABLED_KEYS_MASK,
+					      enabled_keys);
+}
+
 #ifdef CONFIG_CHECKPOINT_RESTORE
 static __uint128_t pac_key_to_user(const struct ptrauth_key *key)
 {
@@ -1076,6 +1108,7 @@ enum aarch64_regset {
 #endif
 #ifdef CONFIG_ARM64_PTR_AUTH
 	REGSET_PAC_MASK,
+	REGSET_PAC_ENABLED_KEYS,
 #ifdef CONFIG_CHECKPOINT_RESTORE
 	REGSET_PACA_KEYS,
 	REGSET_PACG_KEYS,
@@ -1162,6 +1195,14 @@ static const struct user_regset aarch64_regsets[] = {
 		.regset_get = pac_mask_get,
 		/* this cannot be set dynamically */
 	},
+	[REGSET_PAC_ENABLED_KEYS] = {
+		.core_note_type = NT_ARM_PAC_ENABLED_KEYS,
+		.n = 1,
+		.size = sizeof(long),
+		.align = sizeof(long),
+		.regset_get = pac_enabled_keys_get,
+		.set = pac_enabled_keys_set,
+	},
 #ifdef CONFIG_CHECKPOINT_RESTORE
 	[REGSET_PACA_KEYS] = {
 		.core_note_type = NT_ARM_PACA_KEYS,
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 77c4c9bad1b8..91932215a6dd 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
 }
 EXPORT_SYMBOL_GPL(cpu_logical_map);
 
+u64 init_sctlr;
+
 void __init __no_sanitize_address setup_arch(char **cmdline_p)
 {
 	init_mm.start_code = (unsigned long) _text;
@@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
 #endif
 
+	/*
+	 * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
+	 * the value that it was set to by the early startup code.
+	 */
+	asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));
+	init_sctlr &= SCTLR_TASK_MASK;
+	init_task.thread.sctlr = init_sctlr;
+
 	if (boot_args[1] || boot_args[2] || boot_args[3]) {
 		pr_err("WARNING: x1-x3 nonzero in violation of boot protocol:\n"
 			"\tx1: %016llx\n\tx2: %016llx\n\tx3: %016llx\n"
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index 30f68b42eeb5..61bf4774b8f2 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -426,6 +426,7 @@ typedef struct elf64_shdr {
 #define NT_ARM_PACA_KEYS	0x407	/* ARM pointer authentication address keys */
 #define NT_ARM_PACG_KEYS	0x408	/* ARM pointer authentication generic key */
 #define NT_ARM_TAGGED_ADDR_CTRL	0x409	/* arm64 tagged address control (prctl()) */
+#define NT_ARM_PAC_ENABLED_KEYS	0x40a	/* arm64 ptr auth enabled keys (prctl()) */
 #define NT_ARC_V2	0x600		/* ARCv2 accumulator/extra registers */
 #define NT_VMCOREDD	0x700		/* Vmcore Device Dump Note */
 #define NT_MIPS_DSP	0x800		/* MIPS DSP ASE registers */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 7f0827705c9a..0d1bb3a2e59a 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -247,4 +247,8 @@ struct prctl_mm_map {
 #define PR_SET_IO_FLUSHER		57
 #define PR_GET_IO_FLUSHER		58
 
+/* Set/get enabled arm64 pointer authentication keys */
+#define PR_PAC_SET_ENABLED_KEYS		59
+#define PR_PAC_GET_ENABLED_KEYS		60
+
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index ab6c409b1159..0cbd5841e521 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -119,6 +119,12 @@
 #ifndef PAC_RESET_KEYS
 # define PAC_RESET_KEYS(a, b)	(-EINVAL)
 #endif
+#ifndef PAC_SET_ENABLED_KEYS
+# define PAC_SET_ENABLED_KEYS(a, b, c)	(-EINVAL)
+#endif
+#ifndef PAC_GET_ENABLED_KEYS
+# define PAC_GET_ENABLED_KEYS(a)	(-EINVAL)
+#endif
 #ifndef SET_TAGGED_ADDR_CTRL
 # define SET_TAGGED_ADDR_CTRL(a)	(-EINVAL)
 #endif
@@ -2497,6 +2503,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 			return -EINVAL;
 		error = PAC_RESET_KEYS(me, arg2);
 		break;
+	case PR_PAC_SET_ENABLED_KEYS:
+		if (arg4 || arg5)
+			return -EINVAL;
+		error = PAC_SET_ENABLED_KEYS(me, arg2, arg3);
+		break;
+	case PR_PAC_GET_ENABLED_KEYS:
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = PAC_GET_ENABLED_KEYS(me);
+		break;
 	case PR_SET_TAGGED_ADDR_CTRL:
 		if (arg3 || arg4 || arg5)
 			return -EINVAL;
-- 
2.28.0.1011.ga647a8990f-goog


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-10-14  5:51 ` Peter Collingbourne
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-10-14  5:51 UTC (permalink / raw)
  To: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany,
	Vincenzo Frascino, Dave Martin
  Cc: Peter Collingbourne, Andrey Konovalov, Kevin Brodsky, linux-api,
	Will Deacon, Linux ARM

This prctl allows the user program to control which PAC keys are enabled
in a particular task. The main reason why this is useful is to enable a
userspace ABI that uses PAC to sign and authenticate function pointers
and other pointers exposed outside of the function, while still allowing
binaries conforming to the ABI to interoperate with legacy binaries that
do not sign or authenticate pointers.

The idea is that a dynamic loader or early startup code would issue
this prctl very early after establishing that a process may load legacy
binaries, but before executing any PAC instructions.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
---
This patch must be applied on top of Catalin's MTE series:

  git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte

Alternatively the series with this patch on top may be
downloaded from Gerrit by following the link above.

v2:
- added prctl(PR_PAC_GET_ENABLED_KEYS)
- added ptrace APIs for getting and setting the set of enabled
  keys
- optimized the instruction sequence for kernel entry/exit
- rebased on top of MTE series

 .../arm64/pointer-authentication.rst          | 27 +++++++++
 arch/arm64/include/asm/pointer_auth.h         | 11 ++++
 arch/arm64/include/asm/processor.h            | 21 ++++++-
 arch/arm64/include/asm/sysreg.h               |  4 +-
 arch/arm64/kernel/asm-offsets.c               |  1 +
 arch/arm64/kernel/entry.S                     | 27 +++++++++
 arch/arm64/kernel/mte.c                       | 39 +++----------
 arch/arm64/kernel/pointer_auth.c              | 56 +++++++++++++++++++
 arch/arm64/kernel/process.c                   | 37 ++++++++++++
 arch/arm64/kernel/ptrace.c                    | 41 ++++++++++++++
 arch/arm64/kernel/setup.c                     | 10 ++++
 include/uapi/linux/elf.h                      |  1 +
 include/uapi/linux/prctl.h                    |  4 ++
 kernel/sys.c                                  | 16 ++++++
 14 files changed, 261 insertions(+), 34 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.rst b/Documentation/arm64/pointer-authentication.rst
index 30b2ab06526b..1f7e064deeb3 100644
--- a/Documentation/arm64/pointer-authentication.rst
+++ b/Documentation/arm64/pointer-authentication.rst
@@ -107,3 +107,30 @@ filter out the Pointer Authentication system key registers from
 KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
 register. Any attempt to use the Pointer Authentication instructions will
 result in an UNDEFINED exception being injected into the guest.
+
+
+Enabling and disabling keys
+---------------------------
+
+The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
+PAC keys are enabled in a particular task. It takes two arguments, the
+first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
+and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
+and the second being a bitmask of the same bits specifying whether the key
+should be enabled or disabled. For example::
+
+  prctl(PR_PAC_SET_ENABLED_KEYS,
+        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
+        PR_PAC_APIBKEY, 0, 0);
+
+disables all keys except the IB key.
+
+The main reason why this is useful is to enable a userspace ABI that uses PAC
+instructions to sign and authenticate function pointers and other pointers
+exposed outside of the function, while still allowing binaries conforming to
+the ABI to interoperate with legacy binaries that do not sign or authenticate
+pointers.
+
+The idea is that a dynamic loader or early startup code would issue this
+prctl very early after establishing that a process may load legacy binaries,
+but before executing any PAC instructions.
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index c6b4f0603024..662a005d9070 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -3,6 +3,7 @@
 #define __ASM_POINTER_AUTH_H
 
 #include <linux/bitops.h>
+#include <linux/prctl.h>
 #include <linux/random.h>
 
 #include <asm/cpufeature.h>
@@ -71,6 +72,11 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
 
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
+extern int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk,
+					  unsigned long keys,
+					  unsigned long enabled);
+extern int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk);
+
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
 	return ptrauth_clear_pac(ptr);
@@ -85,10 +91,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
+#define ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)	(-EINVAL)
+#define ptrauth_prctl_get_enabled_keys(tsk)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
 #define ptrauth_thread_init_kernel(tsk)
 #define ptrauth_thread_switch_kernel(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
+#define PR_PAC_ENABLED_KEYS_MASK                                               \
+	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
+
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index fec204d28fce..68c9d8064759 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -152,11 +152,17 @@ struct thread_struct {
 	struct ptrauth_keys_kernel	keys_kernel;
 #endif
 #ifdef CONFIG_ARM64_MTE
-	u64			sctlr_tcf0;
 	u64			gcr_user_incl;
 #endif
+	u64			sctlr;
 };
 
+extern u64 init_sctlr;
+
+#define SCTLR_TASK_MASK                                                        \
+	(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB |   \
+	 SCTLR_EL1_TCF0_MASK)
+
 static inline void arch_thread_struct_whitelist(unsigned long *offset,
 						unsigned long *size)
 {
@@ -263,6 +269,14 @@ extern void release_thread(struct task_struct *);
 
 unsigned long get_wchan(struct task_struct *p);
 
+#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
+void set_task_sctlr_el1(u64 sctlr);
+#else
+static inline void set_task_sctlr_el1(u64 sctlr)
+{
+}
+#endif
+
 /* Thread switching */
 extern struct task_struct *cpu_switch_to(struct task_struct *prev,
 					 struct task_struct *next);
@@ -317,6 +331,11 @@ extern void __init minsigstksz_setup(void);
 /* PR_PAC_RESET_KEYS prctl */
 #define PAC_RESET_KEYS(tsk, arg)	ptrauth_prctl_reset_keys(tsk, arg)
 
+/* PR_PAC_{SET,GET}_ENABLED_KEYS prctl */
+#define PAC_SET_ENABLED_KEYS(tsk, keys, enabled)				\
+	ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)
+#define PAC_GET_ENABLED_KEYS(tsk) ptrauth_prctl_get_enabled_keys(tsk)
+
 #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
 /* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
 long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 52eefe2f7d95..912150361338 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -548,8 +548,10 @@
 #define SCTLR_ELx_TCF_ASYNC	(UL(0x2) << SCTLR_ELx_TCF_SHIFT)
 #define SCTLR_ELx_TCF_MASK	(UL(0x3) << SCTLR_ELx_TCF_SHIFT)
 
+#define SCTLR_ELx_ENIA_SHIFT	31
+
 #define SCTLR_ELx_ITFSB	(BIT(37))
-#define SCTLR_ELx_ENIA	(BIT(31))
+#define SCTLR_ELx_ENIA	(BIT(SCTLR_ELx_ENIA_SHIFT))
 #define SCTLR_ELx_ENIB	(BIT(30))
 #define SCTLR_ELx_ENDA	(BIT(27))
 #define SCTLR_ELx_EE    (BIT(25))
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7d32fc959b1a..cb49d4f31540 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -48,6 +48,7 @@ int main(void)
   DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
   DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
 #endif
+  DEFINE(THREAD_SCTLR,		offsetof(struct task_struct, thread.sctlr));
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
   DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ff34461524d4..19d24666b529 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -214,6 +214,18 @@ alternative_else_nop_endif
 
 	ptrauth_keys_install_kernel tsk, x20, x22, x23
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	/* Enable IA for in-kernel PAC if the task had it disabled. */
+	ldr	x0, [tsk, THREAD_SCTLR]
+	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
+	mrs	x0, sctlr_el1
+	orr	x0, x0, SCTLR_ELx_ENIA
+	msr	sctlr_el1, x0
+1:
+alternative_else_nop_endif
+#endif
+
 	scs_load tsk, x20
 	.else
 	add	x21, sp, #S_FRAME_SIZE
@@ -332,6 +344,21 @@ alternative_else_nop_endif
 	/* No kernel C function calls after this as user keys are set. */
 	ptrauth_keys_install_user tsk, x0, x1, x2
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	/*
+	 * IA was enabled for in-kernel PAC. Disable it now if needed.
+	 * All other per-task SCTLR bits were updated on task switch.
+	 */
+	ldr	x0, [tsk, THREAD_SCTLR]
+	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
+	mrs	x0, sctlr_el1
+	bic	x0, x0, SCTLR_ELx_ENIA
+	msr	sctlr_el1, x0
+1:
+alternative_else_nop_endif
+#endif
+
 	apply_ssbd 0, x0, x1
 	.endif
 
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 52a0638ed967..60238a5e3c82 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -72,26 +72,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
 	return ret;
 }
 
-static void update_sctlr_el1_tcf0(u64 tcf0)
-{
-	/* ISB required for the kernel uaccess routines */
-	sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCF0_MASK, tcf0);
-	isb();
-}
-
-static void set_sctlr_el1_tcf0(u64 tcf0)
-{
-	/*
-	 * mte_thread_switch() checks current->thread.sctlr_tcf0 as an
-	 * optimisation. Disable preemption so that it does not see
-	 * the variable update before the SCTLR_EL1.TCF0 one.
-	 */
-	preempt_disable();
-	current->thread.sctlr_tcf0 = tcf0;
-	update_sctlr_el1_tcf0(tcf0);
-	preempt_enable();
-}
-
 static void update_gcr_el1_excl(u64 incl)
 {
 	u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
@@ -120,8 +100,6 @@ void flush_mte_state(void)
 	dsb(ish);
 	write_sysreg_s(0, SYS_TFSRE0_EL1);
 	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
-	/* disable tag checking */
-	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
 	/* reset tag generation mask */
 	set_gcr_el1_excl(0);
 }
@@ -131,9 +109,6 @@ void mte_thread_switch(struct task_struct *next)
 	if (!system_supports_mte())
 		return;
 
-	/* avoid expensive SCTLR_EL1 accesses if no change */
-	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
-		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
 	update_gcr_el1_excl(next->thread.gcr_user_incl);
 }
 
@@ -147,7 +122,7 @@ void mte_suspend_exit(void)
 
 long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 {
-	u64 tcf0;
+	u64 sctlr = task->thread.sctlr & ~SCTLR_EL1_TCF0_MASK;
 	u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
 
 	if (!system_supports_mte())
@@ -155,23 +130,23 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 
 	switch (arg & PR_MTE_TCF_MASK) {
 	case PR_MTE_TCF_NONE:
-		tcf0 = SCTLR_EL1_TCF0_NONE;
+		sctlr |= SCTLR_EL1_TCF0_NONE;
 		break;
 	case PR_MTE_TCF_SYNC:
-		tcf0 = SCTLR_EL1_TCF0_SYNC;
+		sctlr |= SCTLR_EL1_TCF0_SYNC;
 		break;
 	case PR_MTE_TCF_ASYNC:
-		tcf0 = SCTLR_EL1_TCF0_ASYNC;
+		sctlr |= SCTLR_EL1_TCF0_ASYNC;
 		break;
 	default:
 		return -EINVAL;
 	}
 
 	if (task != current) {
-		task->thread.sctlr_tcf0 = tcf0;
+		task->thread.sctlr = sctlr;
 		task->thread.gcr_user_incl = gcr_incl;
 	} else {
-		set_sctlr_el1_tcf0(tcf0);
+		set_task_sctlr_el1(sctlr);
 		set_gcr_el1_excl(gcr_incl);
 	}
 
@@ -187,7 +162,7 @@ long get_mte_ctrl(struct task_struct *task)
 
 	ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
 
-	switch (task->thread.sctlr_tcf0) {
+	switch (task->thread.sctlr & SCTLR_EL1_TCF0_MASK) {
 	case SCTLR_EL1_TCF0_NONE:
 		return PR_MTE_TCF_NONE;
 	case SCTLR_EL1_TCF0_SYNC:
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index 1e77736a4f66..0e763e8babd8 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include <linux/compat.h>
 #include <linux/errno.h>
 #include <linux/prctl.h>
 #include <linux/random.h>
@@ -42,3 +43,58 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 
 	return 0;
 }
+
+static u64 arg_to_enxx_mask(unsigned long arg)
+{
+	u64 sctlr_enxx_mask = 0;
+	if (arg & PR_PAC_APIAKEY)
+		sctlr_enxx_mask |= SCTLR_ELx_ENIA;
+	if (arg & PR_PAC_APIBKEY)
+		sctlr_enxx_mask |= SCTLR_ELx_ENIB;
+	if (arg & PR_PAC_APDAKEY)
+		sctlr_enxx_mask |= SCTLR_ELx_ENDA;
+	if (arg & PR_PAC_APDBKEY)
+		sctlr_enxx_mask |= SCTLR_ELx_ENDB;
+	return sctlr_enxx_mask;
+}
+
+int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
+				   unsigned long enabled)
+{
+	u64 sctlr = tsk->thread.sctlr;
+
+	if (!system_supports_address_auth() || is_compat_task())
+		return -EINVAL;
+
+	if ((keys & ~PR_PAC_ENABLED_KEYS_MASK) || (enabled & ~keys))
+		return -EINVAL;
+
+	sctlr &= ~arg_to_enxx_mask(keys);
+	sctlr |= arg_to_enxx_mask(enabled);
+	if (tsk == current) {
+		set_task_sctlr_el1(sctlr);
+	} else {
+		tsk->thread.sctlr = sctlr;
+	}
+
+	return 0;
+}
+
+int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk)
+{
+	int retval = 0;
+
+	if (!system_supports_address_auth() || is_compat_task())
+		return -EINVAL;
+
+	if (tsk->thread.sctlr & SCTLR_ELx_ENIA)
+		retval |= PR_PAC_APIAKEY;
+	if (tsk->thread.sctlr & SCTLR_ELx_ENIB)
+		retval |= PR_PAC_APIBKEY;
+	if (tsk->thread.sctlr & SCTLR_ELx_ENDA)
+		retval |= PR_PAC_APDAKEY;
+	if (tsk->thread.sctlr & SCTLR_ELx_ENDB)
+		retval |= PR_PAC_APDBKEY;
+
+	return retval;
+}
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 05a9cdd0b471..7fb28ccdf862 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
 	write_sysreg(val, cntkctl_el1);
 }
 
+#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
+static void update_sctlr_el1(u64 sctlr)
+{
+	/*
+	 * EnIA must not be cleared while in the kernel as this is necessary for
+	 * in-kernel PAC. It will be cleared on kernel exit if needed.
+	 */
+	sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
+
+	/* ISB required for the kernel uaccess routines when setting TCF0. */
+	isb();
+}
+
+void set_task_sctlr_el1(u64 sctlr)
+{
+	/*
+	 * __switch_to() checks current->thread.sctlr as an
+	 * optimisation. Disable preemption so that it does not see
+	 * the variable update before the SCTLR_EL1 one.
+	 */
+	preempt_disable();
+	current->thread.sctlr = sctlr;
+	update_sctlr_el1(sctlr);
+	preempt_enable();
+}
+#else
+static void update_sctlr_el1(u64 sctlr)
+{
+}
+#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
+
 /*
  * Thread switching.
  */
@@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	 */
 	dsb(ish);
 
+	/* avoid expensive SCTLR_EL1 accesses if no change */
+	if (prev->thread.sctlr != next->thread.sctlr)
+		update_sctlr_el1(next->thread.sctlr);
+
 	/*
 	 * MTE thread switching must happen after the DSB above to ensure that
 	 * any asynchronous tag check faults have been logged in the TFSR*_EL1
@@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
 void arch_setup_new_exec(void)
 {
 	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
+	if (current->thread.sctlr != init_sctlr)
+		set_task_sctlr_el1(init_sctlr);
 
 	ptrauth_thread_init_user(current);
 }
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index f49b349e16a3..2ed17fb07666 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -911,6 +911,38 @@ static int pac_mask_get(struct task_struct *target,
 	return membuf_write(&to, &uregs, sizeof(uregs));
 }
 
+static int pac_enabled_keys_get(struct task_struct *target,
+				const struct user_regset *regset,
+				struct membuf to)
+{
+	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
+
+	if (IS_ERR_VALUE(enabled_keys))
+		return enabled_keys;
+
+	return membuf_write(&to, &enabled_keys, sizeof(enabled_keys));
+}
+
+static int pac_enabled_keys_set(struct task_struct *target,
+				const struct user_regset *regset,
+				unsigned int pos, unsigned int count,
+				const void *kbuf, const void __user *ubuf)
+{
+	int ret;
+	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
+
+	if (IS_ERR_VALUE(enabled_keys))
+		return enabled_keys;
+
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+				 &enabled_keys, 0, -1);
+	if (ret)
+		return ret;
+
+	return ptrauth_prctl_set_enabled_keys(target, PR_PAC_ENABLED_KEYS_MASK,
+					      enabled_keys);
+}
+
 #ifdef CONFIG_CHECKPOINT_RESTORE
 static __uint128_t pac_key_to_user(const struct ptrauth_key *key)
 {
@@ -1076,6 +1108,7 @@ enum aarch64_regset {
 #endif
 #ifdef CONFIG_ARM64_PTR_AUTH
 	REGSET_PAC_MASK,
+	REGSET_PAC_ENABLED_KEYS,
 #ifdef CONFIG_CHECKPOINT_RESTORE
 	REGSET_PACA_KEYS,
 	REGSET_PACG_KEYS,
@@ -1162,6 +1195,14 @@ static const struct user_regset aarch64_regsets[] = {
 		.regset_get = pac_mask_get,
 		/* this cannot be set dynamically */
 	},
+	[REGSET_PAC_ENABLED_KEYS] = {
+		.core_note_type = NT_ARM_PAC_ENABLED_KEYS,
+		.n = 1,
+		.size = sizeof(long),
+		.align = sizeof(long),
+		.regset_get = pac_enabled_keys_get,
+		.set = pac_enabled_keys_set,
+	},
 #ifdef CONFIG_CHECKPOINT_RESTORE
 	[REGSET_PACA_KEYS] = {
 		.core_note_type = NT_ARM_PACA_KEYS,
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 77c4c9bad1b8..91932215a6dd 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
 }
 EXPORT_SYMBOL_GPL(cpu_logical_map);
 
+u64 init_sctlr;
+
 void __init __no_sanitize_address setup_arch(char **cmdline_p)
 {
 	init_mm.start_code = (unsigned long) _text;
@@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
 #endif
 
+	/*
+	 * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
+	 * the value that it was set to by the early startup code.
+	 */
+	asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));
+	init_sctlr &= SCTLR_TASK_MASK;
+	init_task.thread.sctlr = init_sctlr;
+
 	if (boot_args[1] || boot_args[2] || boot_args[3]) {
 		pr_err("WARNING: x1-x3 nonzero in violation of boot protocol:\n"
 			"\tx1: %016llx\n\tx2: %016llx\n\tx3: %016llx\n"
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index 30f68b42eeb5..61bf4774b8f2 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -426,6 +426,7 @@ typedef struct elf64_shdr {
 #define NT_ARM_PACA_KEYS	0x407	/* ARM pointer authentication address keys */
 #define NT_ARM_PACG_KEYS	0x408	/* ARM pointer authentication generic key */
 #define NT_ARM_TAGGED_ADDR_CTRL	0x409	/* arm64 tagged address control (prctl()) */
+#define NT_ARM_PAC_ENABLED_KEYS	0x40a	/* arm64 ptr auth enabled keys (prctl()) */
 #define NT_ARC_V2	0x600		/* ARCv2 accumulator/extra registers */
 #define NT_VMCOREDD	0x700		/* Vmcore Device Dump Note */
 #define NT_MIPS_DSP	0x800		/* MIPS DSP ASE registers */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 7f0827705c9a..0d1bb3a2e59a 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -247,4 +247,8 @@ struct prctl_mm_map {
 #define PR_SET_IO_FLUSHER		57
 #define PR_GET_IO_FLUSHER		58
 
+/* Set/get enabled arm64 pointer authentication keys */
+#define PR_PAC_SET_ENABLED_KEYS		59
+#define PR_PAC_GET_ENABLED_KEYS		60
+
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index ab6c409b1159..0cbd5841e521 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -119,6 +119,12 @@
 #ifndef PAC_RESET_KEYS
 # define PAC_RESET_KEYS(a, b)	(-EINVAL)
 #endif
+#ifndef PAC_SET_ENABLED_KEYS
+# define PAC_SET_ENABLED_KEYS(a, b, c)	(-EINVAL)
+#endif
+#ifndef PAC_GET_ENABLED_KEYS
+# define PAC_GET_ENABLED_KEYS(a)	(-EINVAL)
+#endif
 #ifndef SET_TAGGED_ADDR_CTRL
 # define SET_TAGGED_ADDR_CTRL(a)	(-EINVAL)
 #endif
@@ -2497,6 +2503,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 			return -EINVAL;
 		error = PAC_RESET_KEYS(me, arg2);
 		break;
+	case PR_PAC_SET_ENABLED_KEYS:
+		if (arg4 || arg5)
+			return -EINVAL;
+		error = PAC_SET_ENABLED_KEYS(me, arg2, arg3);
+		break;
+	case PR_PAC_GET_ENABLED_KEYS:
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = PAC_GET_ENABLED_KEYS(me);
+		break;
 	case PR_SET_TAGGED_ADDR_CTRL:
 		if (arg3 || arg4 || arg5)
 			return -EINVAL;
-- 
2.28.0.1011.ga647a8990f-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-10-14  5:51 ` Peter Collingbourne
@ 2020-11-17 17:29   ` Catalin Marinas
  -1 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2020-11-17 17:29 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: Evgenii Stepanov, Kostya Serebryany, Vincenzo Frascino,
	Dave Martin, Linux ARM, Kevin Brodsky, Andrey Konovalov,
	Will Deacon, linux-api, Szabolcs Nagy, libc-alpha

Adding Szabolcs and libc-alpha. The original patch below and also here:

https://lore.kernel.org/r/20201014055106.25164-1-pcc@google.com

The patch looks fine to me but I'd like the glibc community to confirm
that they are happy with this ABI addition. I'd also like Dave Martin to
ack the patch since he was involved in the discussion for v1.

Thanks.

Catalin

On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> This prctl allows the user program to control which PAC keys are enabled
> in a particular task. The main reason why this is useful is to enable a
> userspace ABI that uses PAC to sign and authenticate function pointers
> and other pointers exposed outside of the function, while still allowing
> binaries conforming to the ABI to interoperate with legacy binaries that
> do not sign or authenticate pointers.
> 
> The idea is that a dynamic loader or early startup code would issue
> this prctl very early after establishing that a process may load legacy
> binaries, but before executing any PAC instructions.
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
> ---
> This patch must be applied on top of Catalin's MTE series:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte
> 
> Alternatively the series with this patch on top may be
> downloaded from Gerrit by following the link above.
> 
> v2:
> - added prctl(PR_PAC_GET_ENABLED_KEYS)
> - added ptrace APIs for getting and setting the set of enabled
>   keys
> - optimized the instruction sequence for kernel entry/exit
> - rebased on top of MTE series
> 
>  .../arm64/pointer-authentication.rst          | 27 +++++++++
>  arch/arm64/include/asm/pointer_auth.h         | 11 ++++
>  arch/arm64/include/asm/processor.h            | 21 ++++++-
>  arch/arm64/include/asm/sysreg.h               |  4 +-
>  arch/arm64/kernel/asm-offsets.c               |  1 +
>  arch/arm64/kernel/entry.S                     | 27 +++++++++
>  arch/arm64/kernel/mte.c                       | 39 +++----------
>  arch/arm64/kernel/pointer_auth.c              | 56 +++++++++++++++++++
>  arch/arm64/kernel/process.c                   | 37 ++++++++++++
>  arch/arm64/kernel/ptrace.c                    | 41 ++++++++++++++
>  arch/arm64/kernel/setup.c                     | 10 ++++
>  include/uapi/linux/elf.h                      |  1 +
>  include/uapi/linux/prctl.h                    |  4 ++
>  kernel/sys.c                                  | 16 ++++++
>  14 files changed, 261 insertions(+), 34 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.rst b/Documentation/arm64/pointer-authentication.rst
> index 30b2ab06526b..1f7e064deeb3 100644
> --- a/Documentation/arm64/pointer-authentication.rst
> +++ b/Documentation/arm64/pointer-authentication.rst
> @@ -107,3 +107,30 @@ filter out the Pointer Authentication system key registers from
>  KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
>  register. Any attempt to use the Pointer Authentication instructions will
>  result in an UNDEFINED exception being injected into the guest.
> +
> +
> +Enabling and disabling keys
> +---------------------------
> +
> +The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
> +PAC keys are enabled in a particular task. It takes two arguments, the
> +first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
> +and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
> +and the second being a bitmask of the same bits specifying whether the key
> +should be enabled or disabled. For example::
> +
> +  prctl(PR_PAC_SET_ENABLED_KEYS,
> +        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
> +        PR_PAC_APIBKEY, 0, 0);
> +
> +disables all keys except the IB key.
> +
> +The main reason why this is useful is to enable a userspace ABI that uses PAC
> +instructions to sign and authenticate function pointers and other pointers
> +exposed outside of the function, while still allowing binaries conforming to
> +the ABI to interoperate with legacy binaries that do not sign or authenticate
> +pointers.
> +
> +The idea is that a dynamic loader or early startup code would issue this
> +prctl very early after establishing that a process may load legacy binaries,
> +but before executing any PAC instructions.
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index c6b4f0603024..662a005d9070 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -3,6 +3,7 @@
>  #define __ASM_POINTER_AUTH_H
>  
>  #include <linux/bitops.h>
> +#include <linux/prctl.h>
>  #include <linux/random.h>
>  
>  #include <asm/cpufeature.h>
> @@ -71,6 +72,11 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
>  
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>  
> +extern int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk,
> +					  unsigned long keys,
> +					  unsigned long enabled);
> +extern int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk);
> +
>  static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  {
>  	return ptrauth_clear_pac(ptr);
> @@ -85,10 +91,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
> +#define ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)	(-EINVAL)
> +#define ptrauth_prctl_get_enabled_keys(tsk)	(-EINVAL)
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_thread_init_user(tsk)
>  #define ptrauth_thread_init_kernel(tsk)
>  #define ptrauth_thread_switch_kernel(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
> +#define PR_PAC_ENABLED_KEYS_MASK                                               \
> +	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
> +
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index fec204d28fce..68c9d8064759 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -152,11 +152,17 @@ struct thread_struct {
>  	struct ptrauth_keys_kernel	keys_kernel;
>  #endif
>  #ifdef CONFIG_ARM64_MTE
> -	u64			sctlr_tcf0;
>  	u64			gcr_user_incl;
>  #endif
> +	u64			sctlr;
>  };
>  
> +extern u64 init_sctlr;
> +
> +#define SCTLR_TASK_MASK                                                        \
> +	(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB |   \
> +	 SCTLR_EL1_TCF0_MASK)
> +
>  static inline void arch_thread_struct_whitelist(unsigned long *offset,
>  						unsigned long *size)
>  {
> @@ -263,6 +269,14 @@ extern void release_thread(struct task_struct *);
>  
>  unsigned long get_wchan(struct task_struct *p);
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +void set_task_sctlr_el1(u64 sctlr);
> +#else
> +static inline void set_task_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif
> +
>  /* Thread switching */
>  extern struct task_struct *cpu_switch_to(struct task_struct *prev,
>  					 struct task_struct *next);
> @@ -317,6 +331,11 @@ extern void __init minsigstksz_setup(void);
>  /* PR_PAC_RESET_KEYS prctl */
>  #define PAC_RESET_KEYS(tsk, arg)	ptrauth_prctl_reset_keys(tsk, arg)
>  
> +/* PR_PAC_{SET,GET}_ENABLED_KEYS prctl */
> +#define PAC_SET_ENABLED_KEYS(tsk, keys, enabled)				\
> +	ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)
> +#define PAC_GET_ENABLED_KEYS(tsk) ptrauth_prctl_get_enabled_keys(tsk)
> +
>  #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
>  /* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
>  long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 52eefe2f7d95..912150361338 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -548,8 +548,10 @@
>  #define SCTLR_ELx_TCF_ASYNC	(UL(0x2) << SCTLR_ELx_TCF_SHIFT)
>  #define SCTLR_ELx_TCF_MASK	(UL(0x3) << SCTLR_ELx_TCF_SHIFT)
>  
> +#define SCTLR_ELx_ENIA_SHIFT	31
> +
>  #define SCTLR_ELx_ITFSB	(BIT(37))
> -#define SCTLR_ELx_ENIA	(BIT(31))
> +#define SCTLR_ELx_ENIA	(BIT(SCTLR_ELx_ENIA_SHIFT))
>  #define SCTLR_ELx_ENIB	(BIT(30))
>  #define SCTLR_ELx_ENDA	(BIT(27))
>  #define SCTLR_ELx_EE    (BIT(25))
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7d32fc959b1a..cb49d4f31540 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -48,6 +48,7 @@ int main(void)
>    DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
>    DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
>  #endif
> +  DEFINE(THREAD_SCTLR,		offsetof(struct task_struct, thread.sctlr));
>    BLANK();
>    DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
>    DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ff34461524d4..19d24666b529 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -214,6 +214,18 @@ alternative_else_nop_endif
>  
>  	ptrauth_keys_install_kernel tsk, x20, x22, x23
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/* Enable IA for in-kernel PAC if the task had it disabled. */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	orr	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0
> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	scs_load tsk, x20
>  	.else
>  	add	x21, sp, #S_FRAME_SIZE
> @@ -332,6 +344,21 @@ alternative_else_nop_endif
>  	/* No kernel C function calls after this as user keys are set. */
>  	ptrauth_keys_install_user tsk, x0, x1, x2
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/*
> +	 * IA was enabled for in-kernel PAC. Disable it now if needed.
> +	 * All other per-task SCTLR bits were updated on task switch.
> +	 */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	bic	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0
> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	apply_ssbd 0, x0, x1
>  	.endif
>  
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 52a0638ed967..60238a5e3c82 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -72,26 +72,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
>  	return ret;
>  }
>  
> -static void update_sctlr_el1_tcf0(u64 tcf0)
> -{
> -	/* ISB required for the kernel uaccess routines */
> -	sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCF0_MASK, tcf0);
> -	isb();
> -}
> -
> -static void set_sctlr_el1_tcf0(u64 tcf0)
> -{
> -	/*
> -	 * mte_thread_switch() checks current->thread.sctlr_tcf0 as an
> -	 * optimisation. Disable preemption so that it does not see
> -	 * the variable update before the SCTLR_EL1.TCF0 one.
> -	 */
> -	preempt_disable();
> -	current->thread.sctlr_tcf0 = tcf0;
> -	update_sctlr_el1_tcf0(tcf0);
> -	preempt_enable();
> -}
> -
>  static void update_gcr_el1_excl(u64 incl)
>  {
>  	u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
> @@ -120,8 +100,6 @@ void flush_mte_state(void)
>  	dsb(ish);
>  	write_sysreg_s(0, SYS_TFSRE0_EL1);
>  	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
> -	/* disable tag checking */
> -	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
>  	/* reset tag generation mask */
>  	set_gcr_el1_excl(0);
>  }
> @@ -131,9 +109,6 @@ void mte_thread_switch(struct task_struct *next)
>  	if (!system_supports_mte())
>  		return;
>  
> -	/* avoid expensive SCTLR_EL1 accesses if no change */
> -	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
> -		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
>  	update_gcr_el1_excl(next->thread.gcr_user_incl);
>  }
>  
> @@ -147,7 +122,7 @@ void mte_suspend_exit(void)
>  
>  long set_mte_ctrl(struct task_struct *task, unsigned long arg)
>  {
> -	u64 tcf0;
> +	u64 sctlr = task->thread.sctlr & ~SCTLR_EL1_TCF0_MASK;
>  	u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
>  
>  	if (!system_supports_mte())
> @@ -155,23 +130,23 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
>  
>  	switch (arg & PR_MTE_TCF_MASK) {
>  	case PR_MTE_TCF_NONE:
> -		tcf0 = SCTLR_EL1_TCF0_NONE;
> +		sctlr |= SCTLR_EL1_TCF0_NONE;
>  		break;
>  	case PR_MTE_TCF_SYNC:
> -		tcf0 = SCTLR_EL1_TCF0_SYNC;
> +		sctlr |= SCTLR_EL1_TCF0_SYNC;
>  		break;
>  	case PR_MTE_TCF_ASYNC:
> -		tcf0 = SCTLR_EL1_TCF0_ASYNC;
> +		sctlr |= SCTLR_EL1_TCF0_ASYNC;
>  		break;
>  	default:
>  		return -EINVAL;
>  	}
>  
>  	if (task != current) {
> -		task->thread.sctlr_tcf0 = tcf0;
> +		task->thread.sctlr = sctlr;
>  		task->thread.gcr_user_incl = gcr_incl;
>  	} else {
> -		set_sctlr_el1_tcf0(tcf0);
> +		set_task_sctlr_el1(sctlr);
>  		set_gcr_el1_excl(gcr_incl);
>  	}
>  
> @@ -187,7 +162,7 @@ long get_mte_ctrl(struct task_struct *task)
>  
>  	ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
>  
> -	switch (task->thread.sctlr_tcf0) {
> +	switch (task->thread.sctlr & SCTLR_EL1_TCF0_MASK) {
>  	case SCTLR_EL1_TCF0_NONE:
>  		return PR_MTE_TCF_NONE;
>  	case SCTLR_EL1_TCF0_SYNC:
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index 1e77736a4f66..0e763e8babd8 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -1,5 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0
>  
> +#include <linux/compat.h>
>  #include <linux/errno.h>
>  #include <linux/prctl.h>
>  #include <linux/random.h>
> @@ -42,3 +43,58 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  
>  	return 0;
>  }
> +
> +static u64 arg_to_enxx_mask(unsigned long arg)
> +{
> +	u64 sctlr_enxx_mask = 0;
> +	if (arg & PR_PAC_APIAKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENIA;
> +	if (arg & PR_PAC_APIBKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENIB;
> +	if (arg & PR_PAC_APDAKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENDA;
> +	if (arg & PR_PAC_APDBKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENDB;
> +	return sctlr_enxx_mask;
> +}
> +
> +int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
> +				   unsigned long enabled)
> +{
> +	u64 sctlr = tsk->thread.sctlr;
> +
> +	if (!system_supports_address_auth() || is_compat_task())
> +		return -EINVAL;
> +
> +	if ((keys & ~PR_PAC_ENABLED_KEYS_MASK) || (enabled & ~keys))
> +		return -EINVAL;
> +
> +	sctlr &= ~arg_to_enxx_mask(keys);
> +	sctlr |= arg_to_enxx_mask(enabled);
> +	if (tsk == current) {
> +		set_task_sctlr_el1(sctlr);
> +	} else {
> +		tsk->thread.sctlr = sctlr;
> +	}
> +
> +	return 0;
> +}
> +
> +int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk)
> +{
> +	int retval = 0;
> +
> +	if (!system_supports_address_auth() || is_compat_task())
> +		return -EINVAL;
> +
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENIA)
> +		retval |= PR_PAC_APIAKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENIB)
> +		retval |= PR_PAC_APIBKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENDA)
> +		retval |= PR_PAC_APDAKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENDB)
> +		retval |= PR_PAC_APDBKEY;
> +
> +	return retval;
> +}
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 05a9cdd0b471..7fb28ccdf862 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
>  	write_sysreg(val, cntkctl_el1);
>  }
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * EnIA must not be cleared while in the kernel as this is necessary for
> +	 * in-kernel PAC. It will be cleared on kernel exit if needed.
> +	 */
> +	sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
> +
> +	/* ISB required for the kernel uaccess routines when setting TCF0. */
> +	isb();
> +}
> +
> +void set_task_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * __switch_to() checks current->thread.sctlr as an
> +	 * optimisation. Disable preemption so that it does not see
> +	 * the variable update before the SCTLR_EL1 one.
> +	 */
> +	preempt_disable();
> +	current->thread.sctlr = sctlr;
> +	update_sctlr_el1(sctlr);
> +	preempt_enable();
> +}
> +#else
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
> +
>  /*
>   * Thread switching.
>   */
> @@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
>  	 */
>  	dsb(ish);
>  
> +	/* avoid expensive SCTLR_EL1 accesses if no change */
> +	if (prev->thread.sctlr != next->thread.sctlr)
> +		update_sctlr_el1(next->thread.sctlr);
> +
>  	/*
>  	 * MTE thread switching must happen after the DSB above to ensure that
>  	 * any asynchronous tag check faults have been logged in the TFSR*_EL1
> @@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
>  void arch_setup_new_exec(void)
>  {
>  	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
> +	if (current->thread.sctlr != init_sctlr)
> +		set_task_sctlr_el1(init_sctlr);
>  
>  	ptrauth_thread_init_user(current);
>  }
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index f49b349e16a3..2ed17fb07666 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -911,6 +911,38 @@ static int pac_mask_get(struct task_struct *target,
>  	return membuf_write(&to, &uregs, sizeof(uregs));
>  }
>  
> +static int pac_enabled_keys_get(struct task_struct *target,
> +				const struct user_regset *regset,
> +				struct membuf to)
> +{
> +	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
> +
> +	if (IS_ERR_VALUE(enabled_keys))
> +		return enabled_keys;
> +
> +	return membuf_write(&to, &enabled_keys, sizeof(enabled_keys));
> +}
> +
> +static int pac_enabled_keys_set(struct task_struct *target,
> +				const struct user_regset *regset,
> +				unsigned int pos, unsigned int count,
> +				const void *kbuf, const void __user *ubuf)
> +{
> +	int ret;
> +	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
> +
> +	if (IS_ERR_VALUE(enabled_keys))
> +		return enabled_keys;
> +
> +	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
> +				 &enabled_keys, 0, -1);
> +	if (ret)
> +		return ret;
> +
> +	return ptrauth_prctl_set_enabled_keys(target, PR_PAC_ENABLED_KEYS_MASK,
> +					      enabled_keys);
> +}
> +
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  static __uint128_t pac_key_to_user(const struct ptrauth_key *key)
>  {
> @@ -1076,6 +1108,7 @@ enum aarch64_regset {
>  #endif
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	REGSET_PAC_MASK,
> +	REGSET_PAC_ENABLED_KEYS,
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  	REGSET_PACA_KEYS,
>  	REGSET_PACG_KEYS,
> @@ -1162,6 +1195,14 @@ static const struct user_regset aarch64_regsets[] = {
>  		.regset_get = pac_mask_get,
>  		/* this cannot be set dynamically */
>  	},
> +	[REGSET_PAC_ENABLED_KEYS] = {
> +		.core_note_type = NT_ARM_PAC_ENABLED_KEYS,
> +		.n = 1,
> +		.size = sizeof(long),
> +		.align = sizeof(long),
> +		.regset_get = pac_enabled_keys_get,
> +		.set = pac_enabled_keys_set,
> +	},
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  	[REGSET_PACA_KEYS] = {
>  		.core_note_type = NT_ARM_PACA_KEYS,
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 77c4c9bad1b8..91932215a6dd 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
>  }
>  EXPORT_SYMBOL_GPL(cpu_logical_map);
>  
> +u64 init_sctlr;
> +
>  void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  {
>  	init_mm.start_code = (unsigned long) _text;
> @@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
>  #endif
>  
> +	/*
> +	 * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
> +	 * the value that it was set to by the early startup code.
> +	 */
> +	asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));
> +	init_sctlr &= SCTLR_TASK_MASK;
> +	init_task.thread.sctlr = init_sctlr;
> +
>  	if (boot_args[1] || boot_args[2] || boot_args[3]) {
>  		pr_err("WARNING: x1-x3 nonzero in violation of boot protocol:\n"
>  			"\tx1: %016llx\n\tx2: %016llx\n\tx3: %016llx\n"
> diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
> index 30f68b42eeb5..61bf4774b8f2 100644
> --- a/include/uapi/linux/elf.h
> +++ b/include/uapi/linux/elf.h
> @@ -426,6 +426,7 @@ typedef struct elf64_shdr {
>  #define NT_ARM_PACA_KEYS	0x407	/* ARM pointer authentication address keys */
>  #define NT_ARM_PACG_KEYS	0x408	/* ARM pointer authentication generic key */
>  #define NT_ARM_TAGGED_ADDR_CTRL	0x409	/* arm64 tagged address control (prctl()) */
> +#define NT_ARM_PAC_ENABLED_KEYS	0x40a	/* arm64 ptr auth enabled keys (prctl()) */
>  #define NT_ARC_V2	0x600		/* ARCv2 accumulator/extra registers */
>  #define NT_VMCOREDD	0x700		/* Vmcore Device Dump Note */
>  #define NT_MIPS_DSP	0x800		/* MIPS DSP ASE registers */
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 7f0827705c9a..0d1bb3a2e59a 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -247,4 +247,8 @@ struct prctl_mm_map {
>  #define PR_SET_IO_FLUSHER		57
>  #define PR_GET_IO_FLUSHER		58
>  
> +/* Set/get enabled arm64 pointer authentication keys */
> +#define PR_PAC_SET_ENABLED_KEYS		59
> +#define PR_PAC_GET_ENABLED_KEYS		60
> +
>  #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index ab6c409b1159..0cbd5841e521 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -119,6 +119,12 @@
>  #ifndef PAC_RESET_KEYS
>  # define PAC_RESET_KEYS(a, b)	(-EINVAL)
>  #endif
> +#ifndef PAC_SET_ENABLED_KEYS
> +# define PAC_SET_ENABLED_KEYS(a, b, c)	(-EINVAL)
> +#endif
> +#ifndef PAC_GET_ENABLED_KEYS
> +# define PAC_GET_ENABLED_KEYS(a)	(-EINVAL)
> +#endif
>  #ifndef SET_TAGGED_ADDR_CTRL
>  # define SET_TAGGED_ADDR_CTRL(a)	(-EINVAL)
>  #endif
> @@ -2497,6 +2503,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>  			return -EINVAL;
>  		error = PAC_RESET_KEYS(me, arg2);
>  		break;
> +	case PR_PAC_SET_ENABLED_KEYS:
> +		if (arg4 || arg5)
> +			return -EINVAL;
> +		error = PAC_SET_ENABLED_KEYS(me, arg2, arg3);
> +		break;
> +	case PR_PAC_GET_ENABLED_KEYS:
> +		if (arg2 || arg3 || arg4 || arg5)
> +			return -EINVAL;
> +		error = PAC_GET_ENABLED_KEYS(me);
> +		break;
>  	case PR_SET_TAGGED_ADDR_CTRL:
>  		if (arg3 || arg4 || arg5)
>  			return -EINVAL;
> -- 
> 2.28.0.1011.ga647a8990f-goog

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-17 17:29   ` Catalin Marinas
  0 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2020-11-17 17:29 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: libc-alpha, Szabolcs Nagy, Andrey Konovalov, Kevin Brodsky,
	Kostya Serebryany, Evgenii Stepanov, linux-api,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

Adding Szabolcs and libc-alpha. The original patch below and also here:

https://lore.kernel.org/r/20201014055106.25164-1-pcc@google.com

The patch looks fine to me but I'd like the glibc community to confirm
that they are happy with this ABI addition. I'd also like Dave Martin to
ack the patch since he was involved in the discussion for v1.

Thanks.

Catalin

On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> This prctl allows the user program to control which PAC keys are enabled
> in a particular task. The main reason why this is useful is to enable a
> userspace ABI that uses PAC to sign and authenticate function pointers
> and other pointers exposed outside of the function, while still allowing
> binaries conforming to the ABI to interoperate with legacy binaries that
> do not sign or authenticate pointers.
> 
> The idea is that a dynamic loader or early startup code would issue
> this prctl very early after establishing that a process may load legacy
> binaries, but before executing any PAC instructions.
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
> ---
> This patch must be applied on top of Catalin's MTE series:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte
> 
> Alternatively the series with this patch on top may be
> downloaded from Gerrit by following the link above.
> 
> v2:
> - added prctl(PR_PAC_GET_ENABLED_KEYS)
> - added ptrace APIs for getting and setting the set of enabled
>   keys
> - optimized the instruction sequence for kernel entry/exit
> - rebased on top of MTE series
> 
>  .../arm64/pointer-authentication.rst          | 27 +++++++++
>  arch/arm64/include/asm/pointer_auth.h         | 11 ++++
>  arch/arm64/include/asm/processor.h            | 21 ++++++-
>  arch/arm64/include/asm/sysreg.h               |  4 +-
>  arch/arm64/kernel/asm-offsets.c               |  1 +
>  arch/arm64/kernel/entry.S                     | 27 +++++++++
>  arch/arm64/kernel/mte.c                       | 39 +++----------
>  arch/arm64/kernel/pointer_auth.c              | 56 +++++++++++++++++++
>  arch/arm64/kernel/process.c                   | 37 ++++++++++++
>  arch/arm64/kernel/ptrace.c                    | 41 ++++++++++++++
>  arch/arm64/kernel/setup.c                     | 10 ++++
>  include/uapi/linux/elf.h                      |  1 +
>  include/uapi/linux/prctl.h                    |  4 ++
>  kernel/sys.c                                  | 16 ++++++
>  14 files changed, 261 insertions(+), 34 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.rst b/Documentation/arm64/pointer-authentication.rst
> index 30b2ab06526b..1f7e064deeb3 100644
> --- a/Documentation/arm64/pointer-authentication.rst
> +++ b/Documentation/arm64/pointer-authentication.rst
> @@ -107,3 +107,30 @@ filter out the Pointer Authentication system key registers from
>  KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
>  register. Any attempt to use the Pointer Authentication instructions will
>  result in an UNDEFINED exception being injected into the guest.
> +
> +
> +Enabling and disabling keys
> +---------------------------
> +
> +The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
> +PAC keys are enabled in a particular task. It takes two arguments, the
> +first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
> +and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
> +and the second being a bitmask of the same bits specifying whether the key
> +should be enabled or disabled. For example::
> +
> +  prctl(PR_PAC_SET_ENABLED_KEYS,
> +        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
> +        PR_PAC_APIBKEY, 0, 0);
> +
> +disables all keys except the IB key.
> +
> +The main reason why this is useful is to enable a userspace ABI that uses PAC
> +instructions to sign and authenticate function pointers and other pointers
> +exposed outside of the function, while still allowing binaries conforming to
> +the ABI to interoperate with legacy binaries that do not sign or authenticate
> +pointers.
> +
> +The idea is that a dynamic loader or early startup code would issue this
> +prctl very early after establishing that a process may load legacy binaries,
> +but before executing any PAC instructions.
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index c6b4f0603024..662a005d9070 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -3,6 +3,7 @@
>  #define __ASM_POINTER_AUTH_H
>  
>  #include <linux/bitops.h>
> +#include <linux/prctl.h>
>  #include <linux/random.h>
>  
>  #include <asm/cpufeature.h>
> @@ -71,6 +72,11 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
>  
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>  
> +extern int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk,
> +					  unsigned long keys,
> +					  unsigned long enabled);
> +extern int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk);
> +
>  static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  {
>  	return ptrauth_clear_pac(ptr);
> @@ -85,10 +91,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
> +#define ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)	(-EINVAL)
> +#define ptrauth_prctl_get_enabled_keys(tsk)	(-EINVAL)
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_thread_init_user(tsk)
>  #define ptrauth_thread_init_kernel(tsk)
>  #define ptrauth_thread_switch_kernel(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
> +#define PR_PAC_ENABLED_KEYS_MASK                                               \
> +	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
> +
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index fec204d28fce..68c9d8064759 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -152,11 +152,17 @@ struct thread_struct {
>  	struct ptrauth_keys_kernel	keys_kernel;
>  #endif
>  #ifdef CONFIG_ARM64_MTE
> -	u64			sctlr_tcf0;
>  	u64			gcr_user_incl;
>  #endif
> +	u64			sctlr;
>  };
>  
> +extern u64 init_sctlr;
> +
> +#define SCTLR_TASK_MASK                                                        \
> +	(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB |   \
> +	 SCTLR_EL1_TCF0_MASK)
> +
>  static inline void arch_thread_struct_whitelist(unsigned long *offset,
>  						unsigned long *size)
>  {
> @@ -263,6 +269,14 @@ extern void release_thread(struct task_struct *);
>  
>  unsigned long get_wchan(struct task_struct *p);
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +void set_task_sctlr_el1(u64 sctlr);
> +#else
> +static inline void set_task_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif
> +
>  /* Thread switching */
>  extern struct task_struct *cpu_switch_to(struct task_struct *prev,
>  					 struct task_struct *next);
> @@ -317,6 +331,11 @@ extern void __init minsigstksz_setup(void);
>  /* PR_PAC_RESET_KEYS prctl */
>  #define PAC_RESET_KEYS(tsk, arg)	ptrauth_prctl_reset_keys(tsk, arg)
>  
> +/* PR_PAC_{SET,GET}_ENABLED_KEYS prctl */
> +#define PAC_SET_ENABLED_KEYS(tsk, keys, enabled)				\
> +	ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)
> +#define PAC_GET_ENABLED_KEYS(tsk) ptrauth_prctl_get_enabled_keys(tsk)
> +
>  #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
>  /* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
>  long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 52eefe2f7d95..912150361338 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -548,8 +548,10 @@
>  #define SCTLR_ELx_TCF_ASYNC	(UL(0x2) << SCTLR_ELx_TCF_SHIFT)
>  #define SCTLR_ELx_TCF_MASK	(UL(0x3) << SCTLR_ELx_TCF_SHIFT)
>  
> +#define SCTLR_ELx_ENIA_SHIFT	31
> +
>  #define SCTLR_ELx_ITFSB	(BIT(37))
> -#define SCTLR_ELx_ENIA	(BIT(31))
> +#define SCTLR_ELx_ENIA	(BIT(SCTLR_ELx_ENIA_SHIFT))
>  #define SCTLR_ELx_ENIB	(BIT(30))
>  #define SCTLR_ELx_ENDA	(BIT(27))
>  #define SCTLR_ELx_EE    (BIT(25))
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7d32fc959b1a..cb49d4f31540 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -48,6 +48,7 @@ int main(void)
>    DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
>    DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
>  #endif
> +  DEFINE(THREAD_SCTLR,		offsetof(struct task_struct, thread.sctlr));
>    BLANK();
>    DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
>    DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ff34461524d4..19d24666b529 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -214,6 +214,18 @@ alternative_else_nop_endif
>  
>  	ptrauth_keys_install_kernel tsk, x20, x22, x23
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/* Enable IA for in-kernel PAC if the task had it disabled. */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	orr	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0
> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	scs_load tsk, x20
>  	.else
>  	add	x21, sp, #S_FRAME_SIZE
> @@ -332,6 +344,21 @@ alternative_else_nop_endif
>  	/* No kernel C function calls after this as user keys are set. */
>  	ptrauth_keys_install_user tsk, x0, x1, x2
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/*
> +	 * IA was enabled for in-kernel PAC. Disable it now if needed.
> +	 * All other per-task SCTLR bits were updated on task switch.
> +	 */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	bic	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0
> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	apply_ssbd 0, x0, x1
>  	.endif
>  
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 52a0638ed967..60238a5e3c82 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -72,26 +72,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
>  	return ret;
>  }
>  
> -static void update_sctlr_el1_tcf0(u64 tcf0)
> -{
> -	/* ISB required for the kernel uaccess routines */
> -	sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCF0_MASK, tcf0);
> -	isb();
> -}
> -
> -static void set_sctlr_el1_tcf0(u64 tcf0)
> -{
> -	/*
> -	 * mte_thread_switch() checks current->thread.sctlr_tcf0 as an
> -	 * optimisation. Disable preemption so that it does not see
> -	 * the variable update before the SCTLR_EL1.TCF0 one.
> -	 */
> -	preempt_disable();
> -	current->thread.sctlr_tcf0 = tcf0;
> -	update_sctlr_el1_tcf0(tcf0);
> -	preempt_enable();
> -}
> -
>  static void update_gcr_el1_excl(u64 incl)
>  {
>  	u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
> @@ -120,8 +100,6 @@ void flush_mte_state(void)
>  	dsb(ish);
>  	write_sysreg_s(0, SYS_TFSRE0_EL1);
>  	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
> -	/* disable tag checking */
> -	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
>  	/* reset tag generation mask */
>  	set_gcr_el1_excl(0);
>  }
> @@ -131,9 +109,6 @@ void mte_thread_switch(struct task_struct *next)
>  	if (!system_supports_mte())
>  		return;
>  
> -	/* avoid expensive SCTLR_EL1 accesses if no change */
> -	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
> -		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
>  	update_gcr_el1_excl(next->thread.gcr_user_incl);
>  }
>  
> @@ -147,7 +122,7 @@ void mte_suspend_exit(void)
>  
>  long set_mte_ctrl(struct task_struct *task, unsigned long arg)
>  {
> -	u64 tcf0;
> +	u64 sctlr = task->thread.sctlr & ~SCTLR_EL1_TCF0_MASK;
>  	u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
>  
>  	if (!system_supports_mte())
> @@ -155,23 +130,23 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
>  
>  	switch (arg & PR_MTE_TCF_MASK) {
>  	case PR_MTE_TCF_NONE:
> -		tcf0 = SCTLR_EL1_TCF0_NONE;
> +		sctlr |= SCTLR_EL1_TCF0_NONE;
>  		break;
>  	case PR_MTE_TCF_SYNC:
> -		tcf0 = SCTLR_EL1_TCF0_SYNC;
> +		sctlr |= SCTLR_EL1_TCF0_SYNC;
>  		break;
>  	case PR_MTE_TCF_ASYNC:
> -		tcf0 = SCTLR_EL1_TCF0_ASYNC;
> +		sctlr |= SCTLR_EL1_TCF0_ASYNC;
>  		break;
>  	default:
>  		return -EINVAL;
>  	}
>  
>  	if (task != current) {
> -		task->thread.sctlr_tcf0 = tcf0;
> +		task->thread.sctlr = sctlr;
>  		task->thread.gcr_user_incl = gcr_incl;
>  	} else {
> -		set_sctlr_el1_tcf0(tcf0);
> +		set_task_sctlr_el1(sctlr);
>  		set_gcr_el1_excl(gcr_incl);
>  	}
>  
> @@ -187,7 +162,7 @@ long get_mte_ctrl(struct task_struct *task)
>  
>  	ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
>  
> -	switch (task->thread.sctlr_tcf0) {
> +	switch (task->thread.sctlr & SCTLR_EL1_TCF0_MASK) {
>  	case SCTLR_EL1_TCF0_NONE:
>  		return PR_MTE_TCF_NONE;
>  	case SCTLR_EL1_TCF0_SYNC:
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index 1e77736a4f66..0e763e8babd8 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -1,5 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0
>  
> +#include <linux/compat.h>
>  #include <linux/errno.h>
>  #include <linux/prctl.h>
>  #include <linux/random.h>
> @@ -42,3 +43,58 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  
>  	return 0;
>  }
> +
> +static u64 arg_to_enxx_mask(unsigned long arg)
> +{
> +	u64 sctlr_enxx_mask = 0;
> +	if (arg & PR_PAC_APIAKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENIA;
> +	if (arg & PR_PAC_APIBKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENIB;
> +	if (arg & PR_PAC_APDAKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENDA;
> +	if (arg & PR_PAC_APDBKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENDB;
> +	return sctlr_enxx_mask;
> +}
> +
> +int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
> +				   unsigned long enabled)
> +{
> +	u64 sctlr = tsk->thread.sctlr;
> +
> +	if (!system_supports_address_auth() || is_compat_task())
> +		return -EINVAL;
> +
> +	if ((keys & ~PR_PAC_ENABLED_KEYS_MASK) || (enabled & ~keys))
> +		return -EINVAL;
> +
> +	sctlr &= ~arg_to_enxx_mask(keys);
> +	sctlr |= arg_to_enxx_mask(enabled);
> +	if (tsk == current) {
> +		set_task_sctlr_el1(sctlr);
> +	} else {
> +		tsk->thread.sctlr = sctlr;
> +	}
> +
> +	return 0;
> +}
> +
> +int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk)
> +{
> +	int retval = 0;
> +
> +	if (!system_supports_address_auth() || is_compat_task())
> +		return -EINVAL;
> +
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENIA)
> +		retval |= PR_PAC_APIAKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENIB)
> +		retval |= PR_PAC_APIBKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENDA)
> +		retval |= PR_PAC_APDAKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENDB)
> +		retval |= PR_PAC_APDBKEY;
> +
> +	return retval;
> +}
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 05a9cdd0b471..7fb28ccdf862 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
>  	write_sysreg(val, cntkctl_el1);
>  }
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * EnIA must not be cleared while in the kernel as this is necessary for
> +	 * in-kernel PAC. It will be cleared on kernel exit if needed.
> +	 */
> +	sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
> +
> +	/* ISB required for the kernel uaccess routines when setting TCF0. */
> +	isb();
> +}
> +
> +void set_task_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * __switch_to() checks current->thread.sctlr as an
> +	 * optimisation. Disable preemption so that it does not see
> +	 * the variable update before the SCTLR_EL1 one.
> +	 */
> +	preempt_disable();
> +	current->thread.sctlr = sctlr;
> +	update_sctlr_el1(sctlr);
> +	preempt_enable();
> +}
> +#else
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
> +
>  /*
>   * Thread switching.
>   */
> @@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
>  	 */
>  	dsb(ish);
>  
> +	/* avoid expensive SCTLR_EL1 accesses if no change */
> +	if (prev->thread.sctlr != next->thread.sctlr)
> +		update_sctlr_el1(next->thread.sctlr);
> +
>  	/*
>  	 * MTE thread switching must happen after the DSB above to ensure that
>  	 * any asynchronous tag check faults have been logged in the TFSR*_EL1
> @@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
>  void arch_setup_new_exec(void)
>  {
>  	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
> +	if (current->thread.sctlr != init_sctlr)
> +		set_task_sctlr_el1(init_sctlr);
>  
>  	ptrauth_thread_init_user(current);
>  }
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index f49b349e16a3..2ed17fb07666 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -911,6 +911,38 @@ static int pac_mask_get(struct task_struct *target,
>  	return membuf_write(&to, &uregs, sizeof(uregs));
>  }
>  
> +static int pac_enabled_keys_get(struct task_struct *target,
> +				const struct user_regset *regset,
> +				struct membuf to)
> +{
> +	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
> +
> +	if (IS_ERR_VALUE(enabled_keys))
> +		return enabled_keys;
> +
> +	return membuf_write(&to, &enabled_keys, sizeof(enabled_keys));
> +}
> +
> +static int pac_enabled_keys_set(struct task_struct *target,
> +				const struct user_regset *regset,
> +				unsigned int pos, unsigned int count,
> +				const void *kbuf, const void __user *ubuf)
> +{
> +	int ret;
> +	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
> +
> +	if (IS_ERR_VALUE(enabled_keys))
> +		return enabled_keys;
> +
> +	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
> +				 &enabled_keys, 0, -1);
> +	if (ret)
> +		return ret;
> +
> +	return ptrauth_prctl_set_enabled_keys(target, PR_PAC_ENABLED_KEYS_MASK,
> +					      enabled_keys);
> +}
> +
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  static __uint128_t pac_key_to_user(const struct ptrauth_key *key)
>  {
> @@ -1076,6 +1108,7 @@ enum aarch64_regset {
>  #endif
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	REGSET_PAC_MASK,
> +	REGSET_PAC_ENABLED_KEYS,
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  	REGSET_PACA_KEYS,
>  	REGSET_PACG_KEYS,
> @@ -1162,6 +1195,14 @@ static const struct user_regset aarch64_regsets[] = {
>  		.regset_get = pac_mask_get,
>  		/* this cannot be set dynamically */
>  	},
> +	[REGSET_PAC_ENABLED_KEYS] = {
> +		.core_note_type = NT_ARM_PAC_ENABLED_KEYS,
> +		.n = 1,
> +		.size = sizeof(long),
> +		.align = sizeof(long),
> +		.regset_get = pac_enabled_keys_get,
> +		.set = pac_enabled_keys_set,
> +	},
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  	[REGSET_PACA_KEYS] = {
>  		.core_note_type = NT_ARM_PACA_KEYS,
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 77c4c9bad1b8..91932215a6dd 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
>  }
>  EXPORT_SYMBOL_GPL(cpu_logical_map);
>  
> +u64 init_sctlr;
> +
>  void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  {
>  	init_mm.start_code = (unsigned long) _text;
> @@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
>  #endif
>  
> +	/*
> +	 * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
> +	 * the value that it was set to by the early startup code.
> +	 */
> +	asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));
> +	init_sctlr &= SCTLR_TASK_MASK;
> +	init_task.thread.sctlr = init_sctlr;
> +
>  	if (boot_args[1] || boot_args[2] || boot_args[3]) {
>  		pr_err("WARNING: x1-x3 nonzero in violation of boot protocol:\n"
>  			"\tx1: %016llx\n\tx2: %016llx\n\tx3: %016llx\n"
> diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
> index 30f68b42eeb5..61bf4774b8f2 100644
> --- a/include/uapi/linux/elf.h
> +++ b/include/uapi/linux/elf.h
> @@ -426,6 +426,7 @@ typedef struct elf64_shdr {
>  #define NT_ARM_PACA_KEYS	0x407	/* ARM pointer authentication address keys */
>  #define NT_ARM_PACG_KEYS	0x408	/* ARM pointer authentication generic key */
>  #define NT_ARM_TAGGED_ADDR_CTRL	0x409	/* arm64 tagged address control (prctl()) */
> +#define NT_ARM_PAC_ENABLED_KEYS	0x40a	/* arm64 ptr auth enabled keys (prctl()) */
>  #define NT_ARC_V2	0x600		/* ARCv2 accumulator/extra registers */
>  #define NT_VMCOREDD	0x700		/* Vmcore Device Dump Note */
>  #define NT_MIPS_DSP	0x800		/* MIPS DSP ASE registers */
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 7f0827705c9a..0d1bb3a2e59a 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -247,4 +247,8 @@ struct prctl_mm_map {
>  #define PR_SET_IO_FLUSHER		57
>  #define PR_GET_IO_FLUSHER		58
>  
> +/* Set/get enabled arm64 pointer authentication keys */
> +#define PR_PAC_SET_ENABLED_KEYS		59
> +#define PR_PAC_GET_ENABLED_KEYS		60
> +
>  #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index ab6c409b1159..0cbd5841e521 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -119,6 +119,12 @@
>  #ifndef PAC_RESET_KEYS
>  # define PAC_RESET_KEYS(a, b)	(-EINVAL)
>  #endif
> +#ifndef PAC_SET_ENABLED_KEYS
> +# define PAC_SET_ENABLED_KEYS(a, b, c)	(-EINVAL)
> +#endif
> +#ifndef PAC_GET_ENABLED_KEYS
> +# define PAC_GET_ENABLED_KEYS(a)	(-EINVAL)
> +#endif
>  #ifndef SET_TAGGED_ADDR_CTRL
>  # define SET_TAGGED_ADDR_CTRL(a)	(-EINVAL)
>  #endif
> @@ -2497,6 +2503,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>  			return -EINVAL;
>  		error = PAC_RESET_KEYS(me, arg2);
>  		break;
> +	case PR_PAC_SET_ENABLED_KEYS:
> +		if (arg4 || arg5)
> +			return -EINVAL;
> +		error = PAC_SET_ENABLED_KEYS(me, arg2, arg3);
> +		break;
> +	case PR_PAC_GET_ENABLED_KEYS:
> +		if (arg2 || arg3 || arg4 || arg5)
> +			return -EINVAL;
> +		error = PAC_GET_ENABLED_KEYS(me);
> +		break;
>  	case PR_SET_TAGGED_ADDR_CTRL:
>  		if (arg3 || arg4 || arg5)
>  			return -EINVAL;
> -- 
> 2.28.0.1011.ga647a8990f-goog

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-10-14  5:51 ` Peter Collingbourne
@ 2020-11-17 17:48   ` Florian Weimer
  -1 siblings, 0 replies; 34+ messages in thread
From: Florian Weimer @ 2020-11-17 17:48 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany,
	Vincenzo Frascino, Dave Martin, Linux ARM, Kevin Brodsky,
	Andrey Konovalov, Will Deacon, linux-api, libc-alpha

* Peter Collingbourne:

> This prctl allows the user program to control which PAC keys are enabled
> in a particular task. The main reason why this is useful is to enable a
> userspace ABI that uses PAC to sign and authenticate function pointers
> and other pointers exposed outside of the function, while still allowing
> binaries conforming to the ABI to interoperate with legacy binaries that
> do not sign or authenticate pointers.
>
> The idea is that a dynamic loader or early startup code would issue
> this prctl very early after establishing that a process may load legacy
> binaries, but before executing any PAC instructions.

I thought that the silicon did not support this?

What exactly does this switch on and off?  The signing itself (so that
the bits are zero again), or just the verification?

I do not know how easy it will be to adjust the glibc dynamic linker
to this because I expect it to use PAC instructions itself.  (It is an
interesting target, I suppose, so this makes sense to me.)  The loader
code used for initial process setup and later dlopen is the same.
Worst case, we could compile the loader twice.

There is also an issue with LD_AUDIT, where we run user-supplied code
(which might be PAC-compatible) before loading code that is not.  I
guess we could disable PAC by default in LD_AUDIT mode (which is
unusual, no relation to the kernel audit subsystem).

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-17 17:48   ` Florian Weimer
  0 siblings, 0 replies; 34+ messages in thread
From: Florian Weimer @ 2020-11-17 17:48 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: libc-alpha, Catalin Marinas, Kevin Brodsky, linux-api,
	Kostya Serebryany, Evgenii Stepanov, Andrey Konovalov,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

* Peter Collingbourne:

> This prctl allows the user program to control which PAC keys are enabled
> in a particular task. The main reason why this is useful is to enable a
> userspace ABI that uses PAC to sign and authenticate function pointers
> and other pointers exposed outside of the function, while still allowing
> binaries conforming to the ABI to interoperate with legacy binaries that
> do not sign or authenticate pointers.
>
> The idea is that a dynamic loader or early startup code would issue
> this prctl very early after establishing that a process may load legacy
> binaries, but before executing any PAC instructions.

I thought that the silicon did not support this?

What exactly does this switch on and off?  The signing itself (so that
the bits are zero again), or just the verification?

I do not know how easy it will be to adjust the glibc dynamic linker
to this because I expect it to use PAC instructions itself.  (It is an
interesting target, I suppose, so this makes sense to me.)  The loader
code used for initial process setup and later dlopen is the same.
Worst case, we could compile the loader twice.

There is also an issue with LD_AUDIT, where we run user-supplied code
(which might be PAC-compatible) before loading code that is not.  I
guess we could disable PAC by default in LD_AUDIT mode (which is
unusual, no relation to the kernel audit subsystem).

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-17 17:29   ` Catalin Marinas
@ 2020-11-17 18:14     ` Szabolcs Nagy
  -1 siblings, 0 replies; 34+ messages in thread
From: Szabolcs Nagy @ 2020-11-17 18:14 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Peter Collingbourne, Evgenii Stepanov, Kostya Serebryany,
	Vincenzo Frascino, Dave Martin, Linux ARM, Kevin Brodsky,
	Andrey Konovalov, Will Deacon, linux-api, libc-alpha

The 11/17/2020 17:29, Catalin Marinas wrote:
> Adding Szabolcs and libc-alpha. The original patch below and also here:
> 
> https://lore.kernel.org/r/20201014055106.25164-1-pcc@google.com
> 
> The patch looks fine to me but I'd like the glibc community to confirm
> that they are happy with this ABI addition. I'd also like Dave Martin to
> ack the patch since he was involved in the discussion for v1.
> 
> Thanks.
> 
> Catalin
> 
> On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> > This prctl allows the user program to control which PAC keys are enabled
> > in a particular task. The main reason why this is useful is to enable a
> > userspace ABI that uses PAC to sign and authenticate function pointers
> > and other pointers exposed outside of the function, while still allowing
> > binaries conforming to the ABI to interoperate with legacy binaries that
> > do not sign or authenticate pointers.
> > 
> > The idea is that a dynamic loader or early startup code would issue
> > this prctl very early after establishing that a process may load legacy
> > binaries, but before executing any PAC instructions.
> > 
> > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
> > ---
> > This patch must be applied on top of Catalin's MTE series:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte
> > 
> > Alternatively the series with this patch on top may be
> > downloaded from Gerrit by following the link above.
...
> > +Enabling and disabling keys
> > +---------------------------
> > +
> > +The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
> > +PAC keys are enabled in a particular task. It takes two arguments, the
> > +first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
> > +and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
> > +and the second being a bitmask of the same bits specifying whether the key
> > +should be enabled or disabled. For example::
> > +
> > +  prctl(PR_PAC_SET_ENABLED_KEYS,
> > +        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
> > +        PR_PAC_APIBKEY, 0, 0);
> > +
> > +disables all keys except the IB key.
> > +
> > +The main reason why this is useful is to enable a userspace ABI that uses PAC
> > +instructions to sign and authenticate function pointers and other pointers
> > +exposed outside of the function, while still allowing binaries conforming to
> > +the ABI to interoperate with legacy binaries that do not sign or authenticate
> > +pointers.
> > +
> > +The idea is that a dynamic loader or early startup code would issue this
> > +prctl very early after establishing that a process may load legacy binaries,
> > +but before executing any PAC instructions.

which keys are enabled by default when HWCAP_PACA is
set in AT_HWCAPS ? i think that would be useful to
point out here.

per process enable/disable was deemed costly to do when
pac support was added, did this change? (i'm happy if it
did, because even without a new PAC specific pointer ABI,
plain PAC-RET can cause problems if a process tries to do
its own DWARF unwinding but does not understand the new
opcode that toggles PAC status of the return address,
possibly in a third party library, so a way to opt-out of
PAC is useful. currently it's a kernel config option and
system wide on or off.)


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-17 18:14     ` Szabolcs Nagy
  0 siblings, 0 replies; 34+ messages in thread
From: Szabolcs Nagy @ 2020-11-17 18:14 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: libc-alpha, Will Deacon, Andrey Konovalov, Kevin Brodsky,
	Kostya Serebryany, Evgenii Stepanov, linux-api,
	Vincenzo Frascino, Peter Collingbourne, Dave Martin, Linux ARM

The 11/17/2020 17:29, Catalin Marinas wrote:
> Adding Szabolcs and libc-alpha. The original patch below and also here:
> 
> https://lore.kernel.org/r/20201014055106.25164-1-pcc@google.com
> 
> The patch looks fine to me but I'd like the glibc community to confirm
> that they are happy with this ABI addition. I'd also like Dave Martin to
> ack the patch since he was involved in the discussion for v1.
> 
> Thanks.
> 
> Catalin
> 
> On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> > This prctl allows the user program to control which PAC keys are enabled
> > in a particular task. The main reason why this is useful is to enable a
> > userspace ABI that uses PAC to sign and authenticate function pointers
> > and other pointers exposed outside of the function, while still allowing
> > binaries conforming to the ABI to interoperate with legacy binaries that
> > do not sign or authenticate pointers.
> > 
> > The idea is that a dynamic loader or early startup code would issue
> > this prctl very early after establishing that a process may load legacy
> > binaries, but before executing any PAC instructions.
> > 
> > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
> > ---
> > This patch must be applied on top of Catalin's MTE series:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte
> > 
> > Alternatively the series with this patch on top may be
> > downloaded from Gerrit by following the link above.
...
> > +Enabling and disabling keys
> > +---------------------------
> > +
> > +The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
> > +PAC keys are enabled in a particular task. It takes two arguments, the
> > +first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
> > +and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
> > +and the second being a bitmask of the same bits specifying whether the key
> > +should be enabled or disabled. For example::
> > +
> > +  prctl(PR_PAC_SET_ENABLED_KEYS,
> > +        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
> > +        PR_PAC_APIBKEY, 0, 0);
> > +
> > +disables all keys except the IB key.
> > +
> > +The main reason why this is useful is to enable a userspace ABI that uses PAC
> > +instructions to sign and authenticate function pointers and other pointers
> > +exposed outside of the function, while still allowing binaries conforming to
> > +the ABI to interoperate with legacy binaries that do not sign or authenticate
> > +pointers.
> > +
> > +The idea is that a dynamic loader or early startup code would issue this
> > +prctl very early after establishing that a process may load legacy binaries,
> > +but before executing any PAC instructions.

which keys are enabled by default when HWCAP_PACA is
set in AT_HWCAPS ? i think that would be useful to
point out here.

per process enable/disable was deemed costly to do when
pac support was added, did this change? (i'm happy if it
did, because even without a new PAC specific pointer ABI,
plain PAC-RET can cause problems if a process tries to do
its own DWARF unwinding but does not understand the new
opcode that toggles PAC status of the return address,
possibly in a third party library, so a way to opt-out of
PAC is useful. currently it's a kernel config option and
system wide on or off.)


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-17 17:48   ` Florian Weimer
@ 2020-11-17 18:17     ` Peter Collingbourne
  -1 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-11-17 18:17 UTC (permalink / raw)
  To: Florian Weimer
  Cc: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany,
	Vincenzo Frascino, Dave Martin, Linux ARM, Kevin Brodsky,
	Andrey Konovalov, Will Deacon, Linux API, libc-alpha

On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
>
> * Peter Collingbourne:
>
> > This prctl allows the user program to control which PAC keys are enabled
> > in a particular task. The main reason why this is useful is to enable a
> > userspace ABI that uses PAC to sign and authenticate function pointers
> > and other pointers exposed outside of the function, while still allowing
> > binaries conforming to the ABI to interoperate with legacy binaries that
> > do not sign or authenticate pointers.
> >
> > The idea is that a dynamic loader or early startup code would issue
> > this prctl very early after establishing that a process may load legacy
> > binaries, but before executing any PAC instructions.
>
> I thought that the silicon did not support this?

See e.g. the documentation for SCTLR_EL1.EnIA [1] for details. There
are also enable bits for the other three keys.

> What exactly does this switch on and off?  The signing itself (so that
> the bits are zero again), or just the verification?

Both the PAC* and AUT* instructions for the specific key become
no-ops, so the bits will be zero.

> I do not know how easy it will be to adjust the glibc dynamic linker
> to this because I expect it to use PAC instructions itself.  (It is an
> interesting target, I suppose, so this makes sense to me.)  The loader
> code used for initial process setup and later dlopen is the same.
> Worst case, we could compile the loader twice.

If you can avoid creating function pointers before the loader has
finished recursively scanning all libraries, and the ABI uses
different keys for function pointers and return addresses, you may be
able to get away with making the decision in the loader. The idea is
that you would disable the function pointer key and leave the return
address key enabled. This would also have the advantage of at least
providing return address protection for some libraries if function
pointer protection can't be enabled.

> There is also an issue with LD_AUDIT, where we run user-supplied code
> (which might be PAC-compatible) before loading code that is not.  I
> guess we could disable PAC by default in LD_AUDIT mode (which is
> unusual, no relation to the kernel audit subsystem).

Yes, LD_AUDIT may be difficult to deal with if it can influence which
libraries are loaded at startup. I agree that LD_AUDIT should disable
PAC by default.

Peter

[1] https://developer.arm.com/docs/ddi0601/d/aarch64-system-registers/sctlr_el1#EnIA_31

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-17 18:17     ` Peter Collingbourne
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-11-17 18:17 UTC (permalink / raw)
  To: Florian Weimer
  Cc: libc-alpha, Catalin Marinas, Kevin Brodsky, Linux API,
	Kostya Serebryany, Evgenii Stepanov, Andrey Konovalov,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
>
> * Peter Collingbourne:
>
> > This prctl allows the user program to control which PAC keys are enabled
> > in a particular task. The main reason why this is useful is to enable a
> > userspace ABI that uses PAC to sign and authenticate function pointers
> > and other pointers exposed outside of the function, while still allowing
> > binaries conforming to the ABI to interoperate with legacy binaries that
> > do not sign or authenticate pointers.
> >
> > The idea is that a dynamic loader or early startup code would issue
> > this prctl very early after establishing that a process may load legacy
> > binaries, but before executing any PAC instructions.
>
> I thought that the silicon did not support this?

See e.g. the documentation for SCTLR_EL1.EnIA [1] for details. There
are also enable bits for the other three keys.

> What exactly does this switch on and off?  The signing itself (so that
> the bits are zero again), or just the verification?

Both the PAC* and AUT* instructions for the specific key become
no-ops, so the bits will be zero.

> I do not know how easy it will be to adjust the glibc dynamic linker
> to this because I expect it to use PAC instructions itself.  (It is an
> interesting target, I suppose, so this makes sense to me.)  The loader
> code used for initial process setup and later dlopen is the same.
> Worst case, we could compile the loader twice.

If you can avoid creating function pointers before the loader has
finished recursively scanning all libraries, and the ABI uses
different keys for function pointers and return addresses, you may be
able to get away with making the decision in the loader. The idea is
that you would disable the function pointer key and leave the return
address key enabled. This would also have the advantage of at least
providing return address protection for some libraries if function
pointer protection can't be enabled.

> There is also an issue with LD_AUDIT, where we run user-supplied code
> (which might be PAC-compatible) before loading code that is not.  I
> guess we could disable PAC by default in LD_AUDIT mode (which is
> unusual, no relation to the kernel audit subsystem).

Yes, LD_AUDIT may be difficult to deal with if it can influence which
libraries are loaded at startup. I agree that LD_AUDIT should disable
PAC by default.

Peter

[1] https://developer.arm.com/docs/ddi0601/d/aarch64-system-registers/sctlr_el1#EnIA_31

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-17 18:17     ` Peter Collingbourne
@ 2020-11-17 18:39       ` Szabolcs Nagy
  -1 siblings, 0 replies; 34+ messages in thread
From: Szabolcs Nagy @ 2020-11-17 18:39 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: Florian Weimer, libc-alpha, Catalin Marinas, Kevin Brodsky,
	Linux API, Kostya Serebryany, Evgenii Stepanov, Andrey Konovalov,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

The 11/17/2020 10:17, Peter Collingbourne via Libc-alpha wrote:
> On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
> >
> > * Peter Collingbourne:
> >
> > > This prctl allows the user program to control which PAC keys are enabled
> > > in a particular task. The main reason why this is useful is to enable a
> > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > and other pointers exposed outside of the function, while still allowing
> > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > do not sign or authenticate pointers.
> > >
> > > The idea is that a dynamic loader or early startup code would issue
> > > this prctl very early after establishing that a process may load legacy
> > > binaries, but before executing any PAC instructions.
> >
> > I thought that the silicon did not support this?
> 
> See e.g. the documentation for SCTLR_EL1.EnIA [1] for details. There
> are also enable bits for the other three keys.

i think it was insufficiently clear in the architecture
spec how that can be context switched. (but it probably
changed)

> If you can avoid creating function pointers before the loader has
> finished recursively scanning all libraries, and the ABI uses
> different keys for function pointers and return addresses, you may be
> able to get away with making the decision in the loader. The idea is

glibc currently only supports pac-ret, so if we enable
or disable pac keys it would be for pac-ret.

(i.e. we would need to do the enable/disable logic in
the dynamic linker entry code before c code is executed)

> that you would disable the function pointer key and leave the return
> address key enabled. This would also have the advantage of at least
> providing return address protection for some libraries if function
> pointer protection can't be enabled.
> 
> > There is also an issue with LD_AUDIT, where we run user-supplied code
> > (which might be PAC-compatible) before loading code that is not.  I
> > guess we could disable PAC by default in LD_AUDIT mode (which is
> > unusual, no relation to the kernel audit subsystem).
> 
> Yes, LD_AUDIT may be difficult to deal with if it can influence which
> libraries are loaded at startup. I agree that LD_AUDIT should disable
> PAC by default.
> 
> Peter
> 
> [1] https://developer.arm.com/docs/ddi0601/d/aarch64-system-registers/sctlr_el1#EnIA_31

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-17 18:39       ` Szabolcs Nagy
  0 siblings, 0 replies; 34+ messages in thread
From: Szabolcs Nagy @ 2020-11-17 18:39 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: libc-alpha, Catalin Marinas, Kevin Brodsky, Andrey Konovalov,
	Kostya Serebryany, Florian Weimer, Linux ARM, Linux API,
	Vincenzo Frascino, Will Deacon, Dave Martin, Evgenii Stepanov

The 11/17/2020 10:17, Peter Collingbourne via Libc-alpha wrote:
> On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
> >
> > * Peter Collingbourne:
> >
> > > This prctl allows the user program to control which PAC keys are enabled
> > > in a particular task. The main reason why this is useful is to enable a
> > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > and other pointers exposed outside of the function, while still allowing
> > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > do not sign or authenticate pointers.
> > >
> > > The idea is that a dynamic loader or early startup code would issue
> > > this prctl very early after establishing that a process may load legacy
> > > binaries, but before executing any PAC instructions.
> >
> > I thought that the silicon did not support this?
> 
> See e.g. the documentation for SCTLR_EL1.EnIA [1] for details. There
> are also enable bits for the other three keys.

i think it was insufficiently clear in the architecture
spec how that can be context switched. (but it probably
changed)

> If you can avoid creating function pointers before the loader has
> finished recursively scanning all libraries, and the ABI uses
> different keys for function pointers and return addresses, you may be
> able to get away with making the decision in the loader. The idea is

glibc currently only supports pac-ret, so if we enable
or disable pac keys it would be for pac-ret.

(i.e. we would need to do the enable/disable logic in
the dynamic linker entry code before c code is executed)

> that you would disable the function pointer key and leave the return
> address key enabled. This would also have the advantage of at least
> providing return address protection for some libraries if function
> pointer protection can't be enabled.
> 
> > There is also an issue with LD_AUDIT, where we run user-supplied code
> > (which might be PAC-compatible) before loading code that is not.  I
> > guess we could disable PAC by default in LD_AUDIT mode (which is
> > unusual, no relation to the kernel audit subsystem).
> 
> Yes, LD_AUDIT may be difficult to deal with if it can influence which
> libraries are loaded at startup. I agree that LD_AUDIT should disable
> PAC by default.
> 
> Peter
> 
> [1] https://developer.arm.com/docs/ddi0601/d/aarch64-system-registers/sctlr_el1#EnIA_31

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-17 18:14     ` Szabolcs Nagy
@ 2020-11-17 18:40       ` Peter Collingbourne
  -1 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-11-17 18:40 UTC (permalink / raw)
  To: Szabolcs Nagy
  Cc: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany,
	Vincenzo Frascino, Dave Martin, Linux ARM, Kevin Brodsky,
	Andrey Konovalov, Will Deacon, Linux API, libc-alpha

On Tue, Nov 17, 2020 at 10:14 AM Szabolcs Nagy <szabolcs.nagy@arm.com> wrote:
>
> The 11/17/2020 17:29, Catalin Marinas wrote:
> > Adding Szabolcs and libc-alpha. The original patch below and also here:
> >
> > https://lore.kernel.org/r/20201014055106.25164-1-pcc@google.com
> >
> > The patch looks fine to me but I'd like the glibc community to confirm
> > that they are happy with this ABI addition. I'd also like Dave Martin to
> > ack the patch since he was involved in the discussion for v1.
> >
> > Thanks.
> >
> > Catalin
> >
> > On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> > > This prctl allows the user program to control which PAC keys are enabled
> > > in a particular task. The main reason why this is useful is to enable a
> > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > and other pointers exposed outside of the function, while still allowing
> > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > do not sign or authenticate pointers.
> > >
> > > The idea is that a dynamic loader or early startup code would issue
> > > this prctl very early after establishing that a process may load legacy
> > > binaries, but before executing any PAC instructions.
> > >
> > > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > > Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
> > > ---
> > > This patch must be applied on top of Catalin's MTE series:
> > >
> > >   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte
> > >
> > > Alternatively the series with this patch on top may be
> > > downloaded from Gerrit by following the link above.
> ...
> > > +Enabling and disabling keys
> > > +---------------------------
> > > +
> > > +The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
> > > +PAC keys are enabled in a particular task. It takes two arguments, the
> > > +first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
> > > +and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
> > > +and the second being a bitmask of the same bits specifying whether the key
> > > +should be enabled or disabled. For example::
> > > +
> > > +  prctl(PR_PAC_SET_ENABLED_KEYS,
> > > +        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
> > > +        PR_PAC_APIBKEY, 0, 0);
> > > +
> > > +disables all keys except the IB key.
> > > +
> > > +The main reason why this is useful is to enable a userspace ABI that uses PAC
> > > +instructions to sign and authenticate function pointers and other pointers
> > > +exposed outside of the function, while still allowing binaries conforming to
> > > +the ABI to interoperate with legacy binaries that do not sign or authenticate
> > > +pointers.
> > > +
> > > +The idea is that a dynamic loader or early startup code would issue this
> > > +prctl very early after establishing that a process may load legacy binaries,
> > > +but before executing any PAC instructions.
>
> which keys are enabled by default when HWCAP_PACA is
> set in AT_HWCAPS ? i think that would be useful to
> point out here.

For compatibility with previous kernel versions all keys are enabled
by default at process startup. But I agree it would be useful to point
it out here.

> per process enable/disable was deemed costly to do when
> pac support was added, did this change? (i'm happy if it
> did, because even without a new PAC specific pointer ABI,
> plain PAC-RET can cause problems if a process tries to do
> its own DWARF unwinding but does not understand the new
> opcode that toggles PAC status of the return address,
> possibly in a third party library, so a way to opt-out of
> PAC is useful. currently it's a kernel config option and
> system wide on or off.)

My understanding is that it was considered too expensive to do without
a use case, but we did come up with one. In general we only need to
update the system register on process switch as well as on entry/exit
if we need to disable IA which is also needed by in-kernel PAC. I did
benchmark a variant of the code sequences on existing non-PAC HW [1]
which showed that the entry/exit overhead in the common case where IA
is left enabled is expected to be low. But with the new Apple chips I
think it should be possible to benchmark the real entry/exit code
sequences on supporting HW in the enabled and disabled cases.

Peter

[1] https://lore.kernel.org/linux-arm-kernel/CAMn1gO7HCJiXEzDvBb-=T7znATqyaxE_t-zezqKL0dyXRCG-nQ@mail.gmail.com/

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-17 18:40       ` Peter Collingbourne
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-11-17 18:40 UTC (permalink / raw)
  To: Szabolcs Nagy
  Cc: libc-alpha, Catalin Marinas, Kevin Brodsky, Linux API,
	Kostya Serebryany, Evgenii Stepanov, Andrey Konovalov,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

On Tue, Nov 17, 2020 at 10:14 AM Szabolcs Nagy <szabolcs.nagy@arm.com> wrote:
>
> The 11/17/2020 17:29, Catalin Marinas wrote:
> > Adding Szabolcs and libc-alpha. The original patch below and also here:
> >
> > https://lore.kernel.org/r/20201014055106.25164-1-pcc@google.com
> >
> > The patch looks fine to me but I'd like the glibc community to confirm
> > that they are happy with this ABI addition. I'd also like Dave Martin to
> > ack the patch since he was involved in the discussion for v1.
> >
> > Thanks.
> >
> > Catalin
> >
> > On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> > > This prctl allows the user program to control which PAC keys are enabled
> > > in a particular task. The main reason why this is useful is to enable a
> > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > and other pointers exposed outside of the function, while still allowing
> > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > do not sign or authenticate pointers.
> > >
> > > The idea is that a dynamic loader or early startup code would issue
> > > this prctl very early after establishing that a process may load legacy
> > > binaries, but before executing any PAC instructions.
> > >
> > > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > > Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
> > > ---
> > > This patch must be applied on top of Catalin's MTE series:
> > >
> > >   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte
> > >
> > > Alternatively the series with this patch on top may be
> > > downloaded from Gerrit by following the link above.
> ...
> > > +Enabling and disabling keys
> > > +---------------------------
> > > +
> > > +The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
> > > +PAC keys are enabled in a particular task. It takes two arguments, the
> > > +first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
> > > +and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
> > > +and the second being a bitmask of the same bits specifying whether the key
> > > +should be enabled or disabled. For example::
> > > +
> > > +  prctl(PR_PAC_SET_ENABLED_KEYS,
> > > +        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
> > > +        PR_PAC_APIBKEY, 0, 0);
> > > +
> > > +disables all keys except the IB key.
> > > +
> > > +The main reason why this is useful is to enable a userspace ABI that uses PAC
> > > +instructions to sign and authenticate function pointers and other pointers
> > > +exposed outside of the function, while still allowing binaries conforming to
> > > +the ABI to interoperate with legacy binaries that do not sign or authenticate
> > > +pointers.
> > > +
> > > +The idea is that a dynamic loader or early startup code would issue this
> > > +prctl very early after establishing that a process may load legacy binaries,
> > > +but before executing any PAC instructions.
>
> which keys are enabled by default when HWCAP_PACA is
> set in AT_HWCAPS ? i think that would be useful to
> point out here.

For compatibility with previous kernel versions all keys are enabled
by default at process startup. But I agree it would be useful to point
it out here.

> per process enable/disable was deemed costly to do when
> pac support was added, did this change? (i'm happy if it
> did, because even without a new PAC specific pointer ABI,
> plain PAC-RET can cause problems if a process tries to do
> its own DWARF unwinding but does not understand the new
> opcode that toggles PAC status of the return address,
> possibly in a third party library, so a way to opt-out of
> PAC is useful. currently it's a kernel config option and
> system wide on or off.)

My understanding is that it was considered too expensive to do without
a use case, but we did come up with one. In general we only need to
update the system register on process switch as well as on entry/exit
if we need to disable IA which is also needed by in-kernel PAC. I did
benchmark a variant of the code sequences on existing non-PAC HW [1]
which showed that the entry/exit overhead in the common case where IA
is left enabled is expected to be low. But with the new Apple chips I
think it should be possible to benchmark the real entry/exit code
sequences on supporting HW in the enabled and disabled cases.

Peter

[1] https://lore.kernel.org/linux-arm-kernel/CAMn1gO7HCJiXEzDvBb-=T7znATqyaxE_t-zezqKL0dyXRCG-nQ@mail.gmail.com/

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-10-14  5:51 ` Peter Collingbourne
@ 2020-11-18 12:25   ` Catalin Marinas
  -1 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2020-11-18 12:25 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: Evgenii Stepanov, Kostya Serebryany, Vincenzo Frascino,
	Dave Martin, Linux ARM, Kevin Brodsky, Andrey Konovalov,
	Will Deacon, linux-api, Szabolcs Nagy, libc-alpha

On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ff34461524d4..19d24666b529 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -214,6 +214,18 @@ alternative_else_nop_endif
>  
>  	ptrauth_keys_install_kernel tsk, x20, x22, x23
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH

If we get too many instructions, we might as well do (same further
down):

alternative_if_not ARM64_HAS_ADDRESS_AUTH
	b	1f
alternative_else_nop_endif

> +	/* Enable IA for in-kernel PAC if the task had it disabled. */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	orr	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0

I now realised that this is missing an ISB. Writes to system registers
in general are not visible until the context is synchronised. I suggest
you change the line above your hunk to
ptrauth_keys_install_kernel_nosync followed by an explicit isb here.

Note that this ISB may affect your benchmark results slightly. I think
you only used MSR without ISB.

> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	scs_load tsk, x20
>  	.else
>  	add	x21, sp, #S_FRAME_SIZE
> @@ -332,6 +344,21 @@ alternative_else_nop_endif
>  	/* No kernel C function calls after this as user keys are set. */
>  	ptrauth_keys_install_user tsk, x0, x1, x2
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/*
> +	 * IA was enabled for in-kernel PAC. Disable it now if needed.
> +	 * All other per-task SCTLR bits were updated on task switch.
> +	 */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	bic	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0

That's correct, no need for context synchronisation here as we are soon
doing an ERET.

> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	apply_ssbd 0, x0, x1
>  	.endif
>  
[...]
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 05a9cdd0b471..7fb28ccdf862 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
>  	write_sysreg(val, cntkctl_el1);
>  }
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * EnIA must not be cleared while in the kernel as this is necessary for
> +	 * in-kernel PAC. It will be cleared on kernel exit if needed.
> +	 */
> +	sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
> +
> +	/* ISB required for the kernel uaccess routines when setting TCF0. */
> +	isb();
> +}
> +
> +void set_task_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * __switch_to() checks current->thread.sctlr as an
> +	 * optimisation. Disable preemption so that it does not see
> +	 * the variable update before the SCTLR_EL1 one.
> +	 */
> +	preempt_disable();
> +	current->thread.sctlr = sctlr;
> +	update_sctlr_el1(sctlr);
> +	preempt_enable();
> +}
> +#else
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
> +
>  /*
>   * Thread switching.
>   */
> @@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
>  	 */
>  	dsb(ish);
>  
> +	/* avoid expensive SCTLR_EL1 accesses if no change */
> +	if (prev->thread.sctlr != next->thread.sctlr)
> +		update_sctlr_el1(next->thread.sctlr);
> +
>  	/*
>  	 * MTE thread switching must happen after the DSB above to ensure that
>  	 * any asynchronous tag check faults have been logged in the TFSR*_EL1
> @@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
>  void arch_setup_new_exec(void)
>  {
>  	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
> +	if (current->thread.sctlr != init_sctlr)
> +		set_task_sctlr_el1(init_sctlr);

init_sctlr may not have the full information when initialised in
setup_arch(). We probably get away with this for the current features
but it may have unexpected behaviour in the long run.

I think we could start from the existing current->thread.sctlr and just
set the PAC enable bits via ptrauth_thread_init_user(). Similarly, we
could call a mte_thread_init_user() (just rename flush_mte_state() and
move its call place) which sets the current->thread.stclr bits that we
need.

Since we set the bits explicitly within the SCTLR_TASK_MASK, I don't
think we need any init_sctlr.

> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 77c4c9bad1b8..91932215a6dd 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
>  }
>  EXPORT_SYMBOL_GPL(cpu_logical_map);
>  
> +u64 init_sctlr;
> +
>  void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  {
>  	init_mm.start_code = (unsigned long) _text;
> @@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
>  #endif
>  
> +	/*
> +	 * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
> +	 * the value that it was set to by the early startup code.
> +	 */
> +	asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));

We have a read_sysreg(sctlr_el1) but I don't think we need it.

> +	init_sctlr &= SCTLR_TASK_MASK;
> +	init_task.thread.sctlr = init_sctlr;

At this point we have the full SCTLR_EL1. The cpufeature framework may
turn on additional bits when SMP is fully operational. Also, a user
thread is not interested in bits that are specific to the kernel only.

As I mentioned above, if we set the bits explicitly when creating a user
thread, we can just leave init_task.thread.sctlr to 0.

Also, I'd rename thread.sctlr to sctlr_user so that it's clearer that it
doesn't affect the kernel settings.

-- 
Catalin

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-18 12:25   ` Catalin Marinas
  0 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2020-11-18 12:25 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: libc-alpha, Szabolcs Nagy, Andrey Konovalov, Kevin Brodsky,
	Kostya Serebryany, Evgenii Stepanov, linux-api,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ff34461524d4..19d24666b529 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -214,6 +214,18 @@ alternative_else_nop_endif
>  
>  	ptrauth_keys_install_kernel tsk, x20, x22, x23
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH

If we get too many instructions, we might as well do (same further
down):

alternative_if_not ARM64_HAS_ADDRESS_AUTH
	b	1f
alternative_else_nop_endif

> +	/* Enable IA for in-kernel PAC if the task had it disabled. */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	orr	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0

I now realised that this is missing an ISB. Writes to system registers
in general are not visible until the context is synchronised. I suggest
you change the line above your hunk to
ptrauth_keys_install_kernel_nosync followed by an explicit isb here.

Note that this ISB may affect your benchmark results slightly. I think
you only used MSR without ISB.

> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	scs_load tsk, x20
>  	.else
>  	add	x21, sp, #S_FRAME_SIZE
> @@ -332,6 +344,21 @@ alternative_else_nop_endif
>  	/* No kernel C function calls after this as user keys are set. */
>  	ptrauth_keys_install_user tsk, x0, x1, x2
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/*
> +	 * IA was enabled for in-kernel PAC. Disable it now if needed.
> +	 * All other per-task SCTLR bits were updated on task switch.
> +	 */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	bic	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0

That's correct, no need for context synchronisation here as we are soon
doing an ERET.

> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	apply_ssbd 0, x0, x1
>  	.endif
>  
[...]
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 05a9cdd0b471..7fb28ccdf862 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
>  	write_sysreg(val, cntkctl_el1);
>  }
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * EnIA must not be cleared while in the kernel as this is necessary for
> +	 * in-kernel PAC. It will be cleared on kernel exit if needed.
> +	 */
> +	sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
> +
> +	/* ISB required for the kernel uaccess routines when setting TCF0. */
> +	isb();
> +}
> +
> +void set_task_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * __switch_to() checks current->thread.sctlr as an
> +	 * optimisation. Disable preemption so that it does not see
> +	 * the variable update before the SCTLR_EL1 one.
> +	 */
> +	preempt_disable();
> +	current->thread.sctlr = sctlr;
> +	update_sctlr_el1(sctlr);
> +	preempt_enable();
> +}
> +#else
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
> +
>  /*
>   * Thread switching.
>   */
> @@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
>  	 */
>  	dsb(ish);
>  
> +	/* avoid expensive SCTLR_EL1 accesses if no change */
> +	if (prev->thread.sctlr != next->thread.sctlr)
> +		update_sctlr_el1(next->thread.sctlr);
> +
>  	/*
>  	 * MTE thread switching must happen after the DSB above to ensure that
>  	 * any asynchronous tag check faults have been logged in the TFSR*_EL1
> @@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
>  void arch_setup_new_exec(void)
>  {
>  	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
> +	if (current->thread.sctlr != init_sctlr)
> +		set_task_sctlr_el1(init_sctlr);

init_sctlr may not have the full information when initialised in
setup_arch(). We probably get away with this for the current features
but it may have unexpected behaviour in the long run.

I think we could start from the existing current->thread.sctlr and just
set the PAC enable bits via ptrauth_thread_init_user(). Similarly, we
could call a mte_thread_init_user() (just rename flush_mte_state() and
move its call place) which sets the current->thread.stclr bits that we
need.

Since we set the bits explicitly within the SCTLR_TASK_MASK, I don't
think we need any init_sctlr.

> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 77c4c9bad1b8..91932215a6dd 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
>  }
>  EXPORT_SYMBOL_GPL(cpu_logical_map);
>  
> +u64 init_sctlr;
> +
>  void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  {
>  	init_mm.start_code = (unsigned long) _text;
> @@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
>  #endif
>  
> +	/*
> +	 * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
> +	 * the value that it was set to by the early startup code.
> +	 */
> +	asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));

We have a read_sysreg(sctlr_el1) but I don't think we need it.

> +	init_sctlr &= SCTLR_TASK_MASK;
> +	init_task.thread.sctlr = init_sctlr;

At this point we have the full SCTLR_EL1. The cpufeature framework may
turn on additional bits when SMP is fully operational. Also, a user
thread is not interested in bits that are specific to the kernel only.

As I mentioned above, if we set the bits explicitly when creating a user
thread, we can just leave init_task.thread.sctlr to 0.

Also, I'd rename thread.sctlr to sctlr_user so that it's clearer that it
doesn't affect the kernel settings.

-- 
Catalin

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-17 18:39       ` Szabolcs Nagy
@ 2020-11-18 12:33         ` Catalin Marinas
  -1 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2020-11-18 12:33 UTC (permalink / raw)
  To: Szabolcs Nagy
  Cc: Peter Collingbourne, Florian Weimer, libc-alpha, Kevin Brodsky,
	Linux API, Kostya Serebryany, Evgenii Stepanov, Andrey Konovalov,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

On Tue, Nov 17, 2020 at 06:39:13PM +0000, Szabolcs Nagy wrote:
> The 11/17/2020 10:17, Peter Collingbourne via Libc-alpha wrote:
> > On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
> > >
> > > * Peter Collingbourne:
> > >
> > > > This prctl allows the user program to control which PAC keys are enabled
> > > > in a particular task. The main reason why this is useful is to enable a
> > > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > > and other pointers exposed outside of the function, while still allowing
> > > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > > do not sign or authenticate pointers.
> > > >
> > > > The idea is that a dynamic loader or early startup code would issue
> > > > this prctl very early after establishing that a process may load legacy
> > > > binaries, but before executing any PAC instructions.
> > >
> > > I thought that the silicon did not support this?

I think the past discussion we had was around enabling PAC for kernel
while disabling it for user. The hardware doesn't give us separate bits,
so Peter's patch toggles them on kernel entry/return, with some overhead
given by the MSR+ISB (to be added).

> > See e.g. the documentation for SCTLR_EL1.EnIA [1] for details. There
> > are also enable bits for the other three keys.
> 
> i think it was insufficiently clear in the architecture
> spec how that can be context switched. (but it probably
> changed)

The bits that we can't toggle easily have the comment "This field is
permitted to be cached in the TLB" in the ARM ARM. Luckily, it's not the
case for EnI*.

-- 
Catalin

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-18 12:33         ` Catalin Marinas
  0 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2020-11-18 12:33 UTC (permalink / raw)
  To: Szabolcs Nagy
  Cc: libc-alpha, Will Deacon, Linux API, Kevin Brodsky,
	Kostya Serebryany, Florian Weimer, Linux ARM, Andrey Konovalov,
	Vincenzo Frascino, Peter Collingbourne, Dave Martin,
	Evgenii Stepanov

On Tue, Nov 17, 2020 at 06:39:13PM +0000, Szabolcs Nagy wrote:
> The 11/17/2020 10:17, Peter Collingbourne via Libc-alpha wrote:
> > On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
> > >
> > > * Peter Collingbourne:
> > >
> > > > This prctl allows the user program to control which PAC keys are enabled
> > > > in a particular task. The main reason why this is useful is to enable a
> > > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > > and other pointers exposed outside of the function, while still allowing
> > > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > > do not sign or authenticate pointers.
> > > >
> > > > The idea is that a dynamic loader or early startup code would issue
> > > > this prctl very early after establishing that a process may load legacy
> > > > binaries, but before executing any PAC instructions.
> > >
> > > I thought that the silicon did not support this?

I think the past discussion we had was around enabling PAC for kernel
while disabling it for user. The hardware doesn't give us separate bits,
so Peter's patch toggles them on kernel entry/return, with some overhead
given by the MSR+ISB (to be added).

> > See e.g. the documentation for SCTLR_EL1.EnIA [1] for details. There
> > are also enable bits for the other three keys.
> 
> i think it was insufficiently clear in the architecture
> spec how that can be context switched. (but it probably
> changed)

The bits that we can't toggle easily have the comment "This field is
permitted to be cached in the TLB" in the ARM ARM. Luckily, it's not the
case for EnI*.

-- 
Catalin

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-18 12:33         ` Catalin Marinas
@ 2020-11-18 13:31           ` Szabolcs Nagy
  -1 siblings, 0 replies; 34+ messages in thread
From: Szabolcs Nagy @ 2020-11-18 13:31 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Peter Collingbourne, Florian Weimer, libc-alpha, Kevin Brodsky,
	Linux API, Kostya Serebryany, Evgenii Stepanov, Andrey Konovalov,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

The 11/18/2020 12:33, Catalin Marinas wrote:
> On Tue, Nov 17, 2020 at 06:39:13PM +0000, Szabolcs Nagy wrote:
> > The 11/17/2020 10:17, Peter Collingbourne via Libc-alpha wrote:
> > > On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
> > > >
> > > > * Peter Collingbourne:
> > > >
> > > > > This prctl allows the user program to control which PAC keys are enabled
> > > > > in a particular task. The main reason why this is useful is to enable a
> > > > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > > > and other pointers exposed outside of the function, while still allowing
> > > > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > > > do not sign or authenticate pointers.
> > > > >
> > > > > The idea is that a dynamic loader or early startup code would issue
> > > > > this prctl very early after establishing that a process may load legacy
> > > > > binaries, but before executing any PAC instructions.
> > > >
> > > > I thought that the silicon did not support this?
> 
> I think the past discussion we had was around enabling PAC for kernel
> while disabling it for user. The hardware doesn't give us separate bits,
> so Peter's patch toggles them on kernel entry/return, with some overhead
> given by the MSR+ISB (to be added).

ah ok. i probably incorrectly thought that toggling that sys
register bit is too much overhead at kernel entry so assumed
we cannot have PAC off by default or set per process.

(i think with the proposed prctl it's possible to disable PAC
at early ld.so startup and reenable it before calling into the
main exe entry code, if we ever need to disable PAC-RET.)

i assume thread creation/fork inherits the setting but exec
does not (this is another point that may be worth adding to
the documentation).

if we run into issues in userspace with PAC then a prctl that
can be inherited across exec is a possible workaround (so PAC
can be disabled for an entire process tree).

> 
> > > See e.g. the documentation for SCTLR_EL1.EnIA [1] for details. There
> > > are also enable bits for the other three keys.
> > 
> > i think it was insufficiently clear in the architecture
> > spec how that can be context switched. (but it probably
> > changed)
> 
> The bits that we can't toggle easily have the comment "This field is
> permitted to be cached in the TLB" in the ARM ARM. Luckily, it's not the
> case for EnI*.

ok. thanks.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-18 13:31           ` Szabolcs Nagy
  0 siblings, 0 replies; 34+ messages in thread
From: Szabolcs Nagy @ 2020-11-18 13:31 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: libc-alpha, Will Deacon, Linux API, Kevin Brodsky,
	Kostya Serebryany, Florian Weimer, Linux ARM, Andrey Konovalov,
	Vincenzo Frascino, Peter Collingbourne, Dave Martin,
	Evgenii Stepanov

The 11/18/2020 12:33, Catalin Marinas wrote:
> On Tue, Nov 17, 2020 at 06:39:13PM +0000, Szabolcs Nagy wrote:
> > The 11/17/2020 10:17, Peter Collingbourne via Libc-alpha wrote:
> > > On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
> > > >
> > > > * Peter Collingbourne:
> > > >
> > > > > This prctl allows the user program to control which PAC keys are enabled
> > > > > in a particular task. The main reason why this is useful is to enable a
> > > > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > > > and other pointers exposed outside of the function, while still allowing
> > > > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > > > do not sign or authenticate pointers.
> > > > >
> > > > > The idea is that a dynamic loader or early startup code would issue
> > > > > this prctl very early after establishing that a process may load legacy
> > > > > binaries, but before executing any PAC instructions.
> > > >
> > > > I thought that the silicon did not support this?
> 
> I think the past discussion we had was around enabling PAC for kernel
> while disabling it for user. The hardware doesn't give us separate bits,
> so Peter's patch toggles them on kernel entry/return, with some overhead
> given by the MSR+ISB (to be added).

ah ok. i probably incorrectly thought that toggling that sys
register bit is too much overhead at kernel entry so assumed
we cannot have PAC off by default or set per process.

(i think with the proposed prctl it's possible to disable PAC
at early ld.so startup and reenable it before calling into the
main exe entry code, if we ever need to disable PAC-RET.)

i assume thread creation/fork inherits the setting but exec
does not (this is another point that may be worth adding to
the documentation).

if we run into issues in userspace with PAC then a prctl that
can be inherited across exec is a possible workaround (so PAC
can be disabled for an entire process tree).

> 
> > > See e.g. the documentation for SCTLR_EL1.EnIA [1] for details. There
> > > are also enable bits for the other three keys.
> > 
> > i think it was insufficiently clear in the architecture
> > spec how that can be context switched. (but it probably
> > changed)
> 
> The bits that we can't toggle easily have the comment "This field is
> permitted to be cached in the TLB" in the ARM ARM. Luckily, it's not the
> case for EnI*.

ok. thanks.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-18 13:31           ` Szabolcs Nagy
@ 2020-11-18 13:37             ` Catalin Marinas
  -1 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2020-11-18 13:37 UTC (permalink / raw)
  To: Szabolcs Nagy
  Cc: Peter Collingbourne, Florian Weimer, libc-alpha, Kevin Brodsky,
	Linux API, Kostya Serebryany, Evgenii Stepanov, Andrey Konovalov,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

On Wed, Nov 18, 2020 at 01:31:21PM +0000, Szabolcs Nagy wrote:
> The 11/18/2020 12:33, Catalin Marinas wrote:
> > On Tue, Nov 17, 2020 at 06:39:13PM +0000, Szabolcs Nagy wrote:
> > > The 11/17/2020 10:17, Peter Collingbourne via Libc-alpha wrote:
> > > > On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
> > > > >
> > > > > * Peter Collingbourne:
> > > > >
> > > > > > This prctl allows the user program to control which PAC keys are enabled
> > > > > > in a particular task. The main reason why this is useful is to enable a
> > > > > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > > > > and other pointers exposed outside of the function, while still allowing
> > > > > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > > > > do not sign or authenticate pointers.
> > > > > >
> > > > > > The idea is that a dynamic loader or early startup code would issue
> > > > > > this prctl very early after establishing that a process may load legacy
> > > > > > binaries, but before executing any PAC instructions.
> > > > >
> > > > > I thought that the silicon did not support this?
> > 
> > I think the past discussion we had was around enabling PAC for kernel
> > while disabling it for user. The hardware doesn't give us separate bits,
> > so Peter's patch toggles them on kernel entry/return, with some overhead
> > given by the MSR+ISB (to be added).
> 
> ah ok. i probably incorrectly thought that toggling that sys
> register bit is too much overhead at kernel entry so assumed
> we cannot have PAC off by default or set per process.

I think Peter can rerun his benchmarks but with the ISB added after MSR.
If they are not too bad, we can take this series.

> (i think with the proposed prctl it's possible to disable PAC
> at early ld.so startup and reenable it before calling into the
> main exe entry code, if we ever need to disable PAC-RET.)
> 
> i assume thread creation/fork inherits the setting but exec
> does not (this is another point that may be worth adding to
> the documentation).

Yes, that's my understanding from the patch. It should be documented
explicitly (I haven't read the doc updates, maybe it does this already).

> if we run into issues in userspace with PAC then a prctl that
> can be inherited across exec is a possible workaround (so PAC
> can be disabled for an entire process tree).

Can you do something similar to MTE with an environment variable forcing
PAC off?

-- 
Catalin

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-18 13:37             ` Catalin Marinas
  0 siblings, 0 replies; 34+ messages in thread
From: Catalin Marinas @ 2020-11-18 13:37 UTC (permalink / raw)
  To: Szabolcs Nagy
  Cc: libc-alpha, Will Deacon, Linux API, Kevin Brodsky,
	Kostya Serebryany, Florian Weimer, Linux ARM, Andrey Konovalov,
	Vincenzo Frascino, Peter Collingbourne, Dave Martin,
	Evgenii Stepanov

On Wed, Nov 18, 2020 at 01:31:21PM +0000, Szabolcs Nagy wrote:
> The 11/18/2020 12:33, Catalin Marinas wrote:
> > On Tue, Nov 17, 2020 at 06:39:13PM +0000, Szabolcs Nagy wrote:
> > > The 11/17/2020 10:17, Peter Collingbourne via Libc-alpha wrote:
> > > > On Tue, Nov 17, 2020 at 9:48 AM Florian Weimer <fw@deneb.enyo.de> wrote:
> > > > >
> > > > > * Peter Collingbourne:
> > > > >
> > > > > > This prctl allows the user program to control which PAC keys are enabled
> > > > > > in a particular task. The main reason why this is useful is to enable a
> > > > > > userspace ABI that uses PAC to sign and authenticate function pointers
> > > > > > and other pointers exposed outside of the function, while still allowing
> > > > > > binaries conforming to the ABI to interoperate with legacy binaries that
> > > > > > do not sign or authenticate pointers.
> > > > > >
> > > > > > The idea is that a dynamic loader or early startup code would issue
> > > > > > this prctl very early after establishing that a process may load legacy
> > > > > > binaries, but before executing any PAC instructions.
> > > > >
> > > > > I thought that the silicon did not support this?
> > 
> > I think the past discussion we had was around enabling PAC for kernel
> > while disabling it for user. The hardware doesn't give us separate bits,
> > so Peter's patch toggles them on kernel entry/return, with some overhead
> > given by the MSR+ISB (to be added).
> 
> ah ok. i probably incorrectly thought that toggling that sys
> register bit is too much overhead at kernel entry so assumed
> we cannot have PAC off by default or set per process.

I think Peter can rerun his benchmarks but with the ISB added after MSR.
If they are not too bad, we can take this series.

> (i think with the proposed prctl it's possible to disable PAC
> at early ld.so startup and reenable it before calling into the
> main exe entry code, if we ever need to disable PAC-RET.)
> 
> i assume thread creation/fork inherits the setting but exec
> does not (this is another point that may be worth adding to
> the documentation).

Yes, that's my understanding from the patch. It should be documented
explicitly (I haven't read the doc updates, maybe it does this already).

> if we run into issues in userspace with PAC then a prctl that
> can be inherited across exec is a possible workaround (so PAC
> can be disabled for an entire process tree).

Can you do something similar to MTE with an environment variable forcing
PAC off?

-- 
Catalin

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-17 17:48   ` Florian Weimer
@ 2020-11-18 17:19     ` Dave Martin
  -1 siblings, 0 replies; 34+ messages in thread
From: Dave Martin @ 2020-11-18 17:19 UTC (permalink / raw)
  To: Florian Weimer
  Cc: Peter Collingbourne, libc-alpha, Catalin Marinas, Kevin Brodsky,
	linux-api, Kostya Serebryany, Evgenii Stepanov, Andrey Konovalov,
	Vincenzo Frascino, Will Deacon, Linux ARM

On Tue, Nov 17, 2020 at 06:48:16PM +0100, Florian Weimer wrote:
> * Peter Collingbourne:
> 
> > This prctl allows the user program to control which PAC keys are enabled
> > in a particular task. The main reason why this is useful is to enable a
> > userspace ABI that uses PAC to sign and authenticate function pointers
> > and other pointers exposed outside of the function, while still allowing
> > binaries conforming to the ABI to interoperate with legacy binaries that
> > do not sign or authenticate pointers.
> >
> > The idea is that a dynamic loader or early startup code would issue
> > this prctl very early after establishing that a process may load legacy
> > binaries, but before executing any PAC instructions.
> 
> I thought that the silicon did not support this?
> 
> What exactly does this switch on and off?  The signing itself (so that
> the bits are zero again), or just the verification?
> 
> I do not know how easy it will be to adjust the glibc dynamic linker
> to this because I expect it to use PAC instructions itself.  (It is an
> interesting target, I suppose, so this makes sense to me.)  The loader
> code used for initial process setup and later dlopen is the same.
> Worst case, we could compile the loader twice.

I don't think this would matter if only the B key is turned on and off,
since the compiler and libc should only be using the A key (or no key at
all) when built standard compiler options.

IIUC the default compiler options when using PAC will only use the
A key, and only use the PAC instructions that execute as NOPs when the
affected key is disabled (precisely so that the code still runs on non-
PAC supporting hardware).  But you can't simply flip it on and off while
there are function frames on the stack that assume it's either on or off.


However, the kernel interface should not assume any particular userspace
environment, so the controls offered should be general.  There are
plenty of other prctl()s (as well as regular syscalls) that will confuse
or break glibc; this is not really any different.

[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-18 17:19     ` Dave Martin
  0 siblings, 0 replies; 34+ messages in thread
From: Dave Martin @ 2020-11-18 17:19 UTC (permalink / raw)
  To: Florian Weimer
  Cc: libc-alpha, Will Deacon, linux-api, Kevin Brodsky,
	Andrey Konovalov, Kostya Serebryany, Linux ARM, Catalin Marinas,
	Vincenzo Frascino, Peter Collingbourne, Evgenii Stepanov

On Tue, Nov 17, 2020 at 06:48:16PM +0100, Florian Weimer wrote:
> * Peter Collingbourne:
> 
> > This prctl allows the user program to control which PAC keys are enabled
> > in a particular task. The main reason why this is useful is to enable a
> > userspace ABI that uses PAC to sign and authenticate function pointers
> > and other pointers exposed outside of the function, while still allowing
> > binaries conforming to the ABI to interoperate with legacy binaries that
> > do not sign or authenticate pointers.
> >
> > The idea is that a dynamic loader or early startup code would issue
> > this prctl very early after establishing that a process may load legacy
> > binaries, but before executing any PAC instructions.
> 
> I thought that the silicon did not support this?
> 
> What exactly does this switch on and off?  The signing itself (so that
> the bits are zero again), or just the verification?
> 
> I do not know how easy it will be to adjust the glibc dynamic linker
> to this because I expect it to use PAC instructions itself.  (It is an
> interesting target, I suppose, so this makes sense to me.)  The loader
> code used for initial process setup and later dlopen is the same.
> Worst case, we could compile the loader twice.

I don't think this would matter if only the B key is turned on and off,
since the compiler and libc should only be using the A key (or no key at
all) when built standard compiler options.

IIUC the default compiler options when using PAC will only use the
A key, and only use the PAC instructions that execute as NOPs when the
affected key is disabled (precisely so that the code still runs on non-
PAC supporting hardware).  But you can't simply flip it on and off while
there are function frames on the stack that assume it's either on or off.


However, the kernel interface should not assume any particular userspace
environment, so the controls offered should be general.  There are
plenty of other prctl()s (as well as regular syscalls) that will confuse
or break glibc; this is not really any different.

[...]

Cheers
---Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-18 17:19     ` Dave Martin
@ 2020-11-18 17:31       ` Florian Weimer
  -1 siblings, 0 replies; 34+ messages in thread
From: Florian Weimer @ 2020-11-18 17:31 UTC (permalink / raw)
  To: Dave Martin
  Cc: Peter Collingbourne, libc-alpha, Catalin Marinas, Kevin Brodsky,
	linux-api, Kostya Serebryany, Evgenii Stepanov, Andrey Konovalov,
	Vincenzo Frascino, Will Deacon, Linux ARM

* Dave Martin:

> IIUC the default compiler options when using PAC will only use the
> A key, and only use the PAC instructions that execute as NOPs when the
> affected key is disabled (precisely so that the code still runs on non-
> PAC supporting hardware).  But you can't simply flip it on and off while
> there are function frames on the stack that assume it's either on or off.

I think we can do the switch at the top-most frame, at least in ld.so.
I have not thought about statically linked binaries. 8-/

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-18 17:31       ` Florian Weimer
  0 siblings, 0 replies; 34+ messages in thread
From: Florian Weimer @ 2020-11-18 17:31 UTC (permalink / raw)
  To: Dave Martin
  Cc: libc-alpha, Will Deacon, linux-api, Kevin Brodsky,
	Andrey Konovalov, Kostya Serebryany, Linux ARM, Catalin Marinas,
	Vincenzo Frascino, Peter Collingbourne, Evgenii Stepanov

* Dave Martin:

> IIUC the default compiler options when using PAC will only use the
> A key, and only use the PAC instructions that execute as NOPs when the
> affected key is disabled (precisely so that the code still runs on non-
> PAC supporting hardware).  But you can't simply flip it on and off while
> there are function frames on the stack that assume it's either on or off.

I think we can do the switch at the top-most frame, at least in ld.so.
I have not thought about statically linked binaries. 8-/

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-10-14  5:51 ` Peter Collingbourne
@ 2020-11-18 17:55   ` Dave Martin
  -1 siblings, 0 replies; 34+ messages in thread
From: Dave Martin @ 2020-11-18 17:55 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany,
	Vincenzo Frascino, Andrey Konovalov, Kevin Brodsky, linux-api,
	Will Deacon, Linux ARM

On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> This prctl allows the user program to control which PAC keys are enabled
> in a particular task. The main reason why this is useful is to enable a
> userspace ABI that uses PAC to sign and authenticate function pointers
> and other pointers exposed outside of the function, while still allowing
> binaries conforming to the ABI to interoperate with legacy binaries that
> do not sign or authenticate pointers.

Does this boil down to using the B key(s) to sign pointers that flow
across library boundaries?

ld.so would then need a control to turn off the B key off if, say,
the main program or some random library it uses hasn't been rebuilt
to enable this signing.

(I think we discussed this before, but I keep forgetting the exact
rationale.)

> The idea is that a dynamic loader or early startup code would issue
> this prctl very early after establishing that a process may load legacy
> binaries, but before executing any PAC instructions.

We probably need a new program property in ELF to describe this.

Currently, there is GNU_PROPERTY_AARCH64_FEATURE_1_PAC, which does not
specify a key -- but I think it could be defined retrospectively to
apply to just the APIA key, say.

This raises the question of whether the kernel should actually turn the
keys on or off based on these flags.  For historical compatibility,
probably not -- but we could perhaps do that for the B key since the ABI
for use of the B key is entirely unstandardised so far.


I'll take a look at the patch tomorrow -- my brain isn't functioning
right now.

Cheers
---Dave

> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
> ---
> This patch must be applied on top of Catalin's MTE series:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte
> 
> Alternatively the series with this patch on top may be
> downloaded from Gerrit by following the link above.
> 
> v2:
> - added prctl(PR_PAC_GET_ENABLED_KEYS)
> - added ptrace APIs for getting and setting the set of enabled
>   keys
> - optimized the instruction sequence for kernel entry/exit
> - rebased on top of MTE series
> 
>  .../arm64/pointer-authentication.rst          | 27 +++++++++
>  arch/arm64/include/asm/pointer_auth.h         | 11 ++++
>  arch/arm64/include/asm/processor.h            | 21 ++++++-
>  arch/arm64/include/asm/sysreg.h               |  4 +-
>  arch/arm64/kernel/asm-offsets.c               |  1 +
>  arch/arm64/kernel/entry.S                     | 27 +++++++++
>  arch/arm64/kernel/mte.c                       | 39 +++----------
>  arch/arm64/kernel/pointer_auth.c              | 56 +++++++++++++++++++
>  arch/arm64/kernel/process.c                   | 37 ++++++++++++
>  arch/arm64/kernel/ptrace.c                    | 41 ++++++++++++++
>  arch/arm64/kernel/setup.c                     | 10 ++++
>  include/uapi/linux/elf.h                      |  1 +
>  include/uapi/linux/prctl.h                    |  4 ++
>  kernel/sys.c                                  | 16 ++++++
>  14 files changed, 261 insertions(+), 34 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.rst b/Documentation/arm64/pointer-authentication.rst
> index 30b2ab06526b..1f7e064deeb3 100644
> --- a/Documentation/arm64/pointer-authentication.rst
> +++ b/Documentation/arm64/pointer-authentication.rst
> @@ -107,3 +107,30 @@ filter out the Pointer Authentication system key registers from
>  KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
>  register. Any attempt to use the Pointer Authentication instructions will
>  result in an UNDEFINED exception being injected into the guest.
> +
> +
> +Enabling and disabling keys
> +---------------------------
> +
> +The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
> +PAC keys are enabled in a particular task. It takes two arguments, the
> +first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
> +and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
> +and the second being a bitmask of the same bits specifying whether the key
> +should be enabled or disabled. For example::
> +
> +  prctl(PR_PAC_SET_ENABLED_KEYS,
> +        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
> +        PR_PAC_APIBKEY, 0, 0);
> +
> +disables all keys except the IB key.
> +
> +The main reason why this is useful is to enable a userspace ABI that uses PAC
> +instructions to sign and authenticate function pointers and other pointers
> +exposed outside of the function, while still allowing binaries conforming to
> +the ABI to interoperate with legacy binaries that do not sign or authenticate
> +pointers.
> +
> +The idea is that a dynamic loader or early startup code would issue this
> +prctl very early after establishing that a process may load legacy binaries,
> +but before executing any PAC instructions.
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index c6b4f0603024..662a005d9070 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -3,6 +3,7 @@
>  #define __ASM_POINTER_AUTH_H
>  
>  #include <linux/bitops.h>
> +#include <linux/prctl.h>
>  #include <linux/random.h>
>  
>  #include <asm/cpufeature.h>
> @@ -71,6 +72,11 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
>  
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>  
> +extern int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk,
> +					  unsigned long keys,
> +					  unsigned long enabled);
> +extern int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk);
> +
>  static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  {
>  	return ptrauth_clear_pac(ptr);
> @@ -85,10 +91,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
> +#define ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)	(-EINVAL)
> +#define ptrauth_prctl_get_enabled_keys(tsk)	(-EINVAL)
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_thread_init_user(tsk)
>  #define ptrauth_thread_init_kernel(tsk)
>  #define ptrauth_thread_switch_kernel(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
> +#define PR_PAC_ENABLED_KEYS_MASK                                               \
> +	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
> +
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index fec204d28fce..68c9d8064759 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -152,11 +152,17 @@ struct thread_struct {
>  	struct ptrauth_keys_kernel	keys_kernel;
>  #endif
>  #ifdef CONFIG_ARM64_MTE
> -	u64			sctlr_tcf0;
>  	u64			gcr_user_incl;
>  #endif
> +	u64			sctlr;
>  };
>  
> +extern u64 init_sctlr;
> +
> +#define SCTLR_TASK_MASK                                                        \
> +	(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB |   \
> +	 SCTLR_EL1_TCF0_MASK)
> +
>  static inline void arch_thread_struct_whitelist(unsigned long *offset,
>  						unsigned long *size)
>  {
> @@ -263,6 +269,14 @@ extern void release_thread(struct task_struct *);
>  
>  unsigned long get_wchan(struct task_struct *p);
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +void set_task_sctlr_el1(u64 sctlr);
> +#else
> +static inline void set_task_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif
> +
>  /* Thread switching */
>  extern struct task_struct *cpu_switch_to(struct task_struct *prev,
>  					 struct task_struct *next);
> @@ -317,6 +331,11 @@ extern void __init minsigstksz_setup(void);
>  /* PR_PAC_RESET_KEYS prctl */
>  #define PAC_RESET_KEYS(tsk, arg)	ptrauth_prctl_reset_keys(tsk, arg)
>  
> +/* PR_PAC_{SET,GET}_ENABLED_KEYS prctl */
> +#define PAC_SET_ENABLED_KEYS(tsk, keys, enabled)				\
> +	ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)
> +#define PAC_GET_ENABLED_KEYS(tsk) ptrauth_prctl_get_enabled_keys(tsk)
> +
>  #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
>  /* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
>  long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 52eefe2f7d95..912150361338 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -548,8 +548,10 @@
>  #define SCTLR_ELx_TCF_ASYNC	(UL(0x2) << SCTLR_ELx_TCF_SHIFT)
>  #define SCTLR_ELx_TCF_MASK	(UL(0x3) << SCTLR_ELx_TCF_SHIFT)
>  
> +#define SCTLR_ELx_ENIA_SHIFT	31
> +
>  #define SCTLR_ELx_ITFSB	(BIT(37))
> -#define SCTLR_ELx_ENIA	(BIT(31))
> +#define SCTLR_ELx_ENIA	(BIT(SCTLR_ELx_ENIA_SHIFT))
>  #define SCTLR_ELx_ENIB	(BIT(30))
>  #define SCTLR_ELx_ENDA	(BIT(27))
>  #define SCTLR_ELx_EE    (BIT(25))
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7d32fc959b1a..cb49d4f31540 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -48,6 +48,7 @@ int main(void)
>    DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
>    DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
>  #endif
> +  DEFINE(THREAD_SCTLR,		offsetof(struct task_struct, thread.sctlr));
>    BLANK();
>    DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
>    DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ff34461524d4..19d24666b529 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -214,6 +214,18 @@ alternative_else_nop_endif
>  
>  	ptrauth_keys_install_kernel tsk, x20, x22, x23
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/* Enable IA for in-kernel PAC if the task had it disabled. */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	orr	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0
> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	scs_load tsk, x20
>  	.else
>  	add	x21, sp, #S_FRAME_SIZE
> @@ -332,6 +344,21 @@ alternative_else_nop_endif
>  	/* No kernel C function calls after this as user keys are set. */
>  	ptrauth_keys_install_user tsk, x0, x1, x2
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/*
> +	 * IA was enabled for in-kernel PAC. Disable it now if needed.
> +	 * All other per-task SCTLR bits were updated on task switch.
> +	 */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	bic	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0
> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	apply_ssbd 0, x0, x1
>  	.endif
>  
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 52a0638ed967..60238a5e3c82 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -72,26 +72,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
>  	return ret;
>  }
>  
> -static void update_sctlr_el1_tcf0(u64 tcf0)
> -{
> -	/* ISB required for the kernel uaccess routines */
> -	sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCF0_MASK, tcf0);
> -	isb();
> -}
> -
> -static void set_sctlr_el1_tcf0(u64 tcf0)
> -{
> -	/*
> -	 * mte_thread_switch() checks current->thread.sctlr_tcf0 as an
> -	 * optimisation. Disable preemption so that it does not see
> -	 * the variable update before the SCTLR_EL1.TCF0 one.
> -	 */
> -	preempt_disable();
> -	current->thread.sctlr_tcf0 = tcf0;
> -	update_sctlr_el1_tcf0(tcf0);
> -	preempt_enable();
> -}
> -
>  static void update_gcr_el1_excl(u64 incl)
>  {
>  	u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
> @@ -120,8 +100,6 @@ void flush_mte_state(void)
>  	dsb(ish);
>  	write_sysreg_s(0, SYS_TFSRE0_EL1);
>  	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
> -	/* disable tag checking */
> -	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
>  	/* reset tag generation mask */
>  	set_gcr_el1_excl(0);
>  }
> @@ -131,9 +109,6 @@ void mte_thread_switch(struct task_struct *next)
>  	if (!system_supports_mte())
>  		return;
>  
> -	/* avoid expensive SCTLR_EL1 accesses if no change */
> -	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
> -		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
>  	update_gcr_el1_excl(next->thread.gcr_user_incl);
>  }
>  
> @@ -147,7 +122,7 @@ void mte_suspend_exit(void)
>  
>  long set_mte_ctrl(struct task_struct *task, unsigned long arg)
>  {
> -	u64 tcf0;
> +	u64 sctlr = task->thread.sctlr & ~SCTLR_EL1_TCF0_MASK;
>  	u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
>  
>  	if (!system_supports_mte())
> @@ -155,23 +130,23 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
>  
>  	switch (arg & PR_MTE_TCF_MASK) {
>  	case PR_MTE_TCF_NONE:
> -		tcf0 = SCTLR_EL1_TCF0_NONE;
> +		sctlr |= SCTLR_EL1_TCF0_NONE;
>  		break;
>  	case PR_MTE_TCF_SYNC:
> -		tcf0 = SCTLR_EL1_TCF0_SYNC;
> +		sctlr |= SCTLR_EL1_TCF0_SYNC;
>  		break;
>  	case PR_MTE_TCF_ASYNC:
> -		tcf0 = SCTLR_EL1_TCF0_ASYNC;
> +		sctlr |= SCTLR_EL1_TCF0_ASYNC;
>  		break;
>  	default:
>  		return -EINVAL;
>  	}
>  
>  	if (task != current) {
> -		task->thread.sctlr_tcf0 = tcf0;
> +		task->thread.sctlr = sctlr;
>  		task->thread.gcr_user_incl = gcr_incl;
>  	} else {
> -		set_sctlr_el1_tcf0(tcf0);
> +		set_task_sctlr_el1(sctlr);
>  		set_gcr_el1_excl(gcr_incl);
>  	}
>  
> @@ -187,7 +162,7 @@ long get_mte_ctrl(struct task_struct *task)
>  
>  	ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
>  
> -	switch (task->thread.sctlr_tcf0) {
> +	switch (task->thread.sctlr & SCTLR_EL1_TCF0_MASK) {
>  	case SCTLR_EL1_TCF0_NONE:
>  		return PR_MTE_TCF_NONE;
>  	case SCTLR_EL1_TCF0_SYNC:
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index 1e77736a4f66..0e763e8babd8 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -1,5 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0
>  
> +#include <linux/compat.h>
>  #include <linux/errno.h>
>  #include <linux/prctl.h>
>  #include <linux/random.h>
> @@ -42,3 +43,58 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  
>  	return 0;
>  }
> +
> +static u64 arg_to_enxx_mask(unsigned long arg)
> +{
> +	u64 sctlr_enxx_mask = 0;
> +	if (arg & PR_PAC_APIAKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENIA;
> +	if (arg & PR_PAC_APIBKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENIB;
> +	if (arg & PR_PAC_APDAKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENDA;
> +	if (arg & PR_PAC_APDBKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENDB;
> +	return sctlr_enxx_mask;
> +}
> +
> +int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
> +				   unsigned long enabled)
> +{
> +	u64 sctlr = tsk->thread.sctlr;
> +
> +	if (!system_supports_address_auth() || is_compat_task())
> +		return -EINVAL;
> +
> +	if ((keys & ~PR_PAC_ENABLED_KEYS_MASK) || (enabled & ~keys))
> +		return -EINVAL;
> +
> +	sctlr &= ~arg_to_enxx_mask(keys);
> +	sctlr |= arg_to_enxx_mask(enabled);
> +	if (tsk == current) {
> +		set_task_sctlr_el1(sctlr);
> +	} else {
> +		tsk->thread.sctlr = sctlr;
> +	}
> +
> +	return 0;
> +}
> +
> +int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk)
> +{
> +	int retval = 0;
> +
> +	if (!system_supports_address_auth() || is_compat_task())
> +		return -EINVAL;
> +
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENIA)
> +		retval |= PR_PAC_APIAKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENIB)
> +		retval |= PR_PAC_APIBKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENDA)
> +		retval |= PR_PAC_APDAKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENDB)
> +		retval |= PR_PAC_APDBKEY;
> +
> +	return retval;
> +}
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 05a9cdd0b471..7fb28ccdf862 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
>  	write_sysreg(val, cntkctl_el1);
>  }
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * EnIA must not be cleared while in the kernel as this is necessary for
> +	 * in-kernel PAC. It will be cleared on kernel exit if needed.
> +	 */
> +	sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
> +
> +	/* ISB required for the kernel uaccess routines when setting TCF0. */
> +	isb();
> +}
> +
> +void set_task_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * __switch_to() checks current->thread.sctlr as an
> +	 * optimisation. Disable preemption so that it does not see
> +	 * the variable update before the SCTLR_EL1 one.
> +	 */
> +	preempt_disable();
> +	current->thread.sctlr = sctlr;
> +	update_sctlr_el1(sctlr);
> +	preempt_enable();
> +}
> +#else
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
> +
>  /*
>   * Thread switching.
>   */
> @@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
>  	 */
>  	dsb(ish);
>  
> +	/* avoid expensive SCTLR_EL1 accesses if no change */
> +	if (prev->thread.sctlr != next->thread.sctlr)
> +		update_sctlr_el1(next->thread.sctlr);
> +
>  	/*
>  	 * MTE thread switching must happen after the DSB above to ensure that
>  	 * any asynchronous tag check faults have been logged in the TFSR*_EL1
> @@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
>  void arch_setup_new_exec(void)
>  {
>  	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
> +	if (current->thread.sctlr != init_sctlr)
> +		set_task_sctlr_el1(init_sctlr);
>  
>  	ptrauth_thread_init_user(current);
>  }
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index f49b349e16a3..2ed17fb07666 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -911,6 +911,38 @@ static int pac_mask_get(struct task_struct *target,
>  	return membuf_write(&to, &uregs, sizeof(uregs));
>  }
>  
> +static int pac_enabled_keys_get(struct task_struct *target,
> +				const struct user_regset *regset,
> +				struct membuf to)
> +{
> +	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
> +
> +	if (IS_ERR_VALUE(enabled_keys))
> +		return enabled_keys;
> +
> +	return membuf_write(&to, &enabled_keys, sizeof(enabled_keys));
> +}
> +
> +static int pac_enabled_keys_set(struct task_struct *target,
> +				const struct user_regset *regset,
> +				unsigned int pos, unsigned int count,
> +				const void *kbuf, const void __user *ubuf)
> +{
> +	int ret;
> +	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
> +
> +	if (IS_ERR_VALUE(enabled_keys))
> +		return enabled_keys;
> +
> +	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
> +				 &enabled_keys, 0, -1);
> +	if (ret)
> +		return ret;
> +
> +	return ptrauth_prctl_set_enabled_keys(target, PR_PAC_ENABLED_KEYS_MASK,
> +					      enabled_keys);
> +}
> +
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  static __uint128_t pac_key_to_user(const struct ptrauth_key *key)
>  {
> @@ -1076,6 +1108,7 @@ enum aarch64_regset {
>  #endif
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	REGSET_PAC_MASK,
> +	REGSET_PAC_ENABLED_KEYS,
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  	REGSET_PACA_KEYS,
>  	REGSET_PACG_KEYS,
> @@ -1162,6 +1195,14 @@ static const struct user_regset aarch64_regsets[] = {
>  		.regset_get = pac_mask_get,
>  		/* this cannot be set dynamically */
>  	},
> +	[REGSET_PAC_ENABLED_KEYS] = {
> +		.core_note_type = NT_ARM_PAC_ENABLED_KEYS,
> +		.n = 1,
> +		.size = sizeof(long),
> +		.align = sizeof(long),
> +		.regset_get = pac_enabled_keys_get,
> +		.set = pac_enabled_keys_set,
> +	},
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  	[REGSET_PACA_KEYS] = {
>  		.core_note_type = NT_ARM_PACA_KEYS,
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 77c4c9bad1b8..91932215a6dd 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
>  }
>  EXPORT_SYMBOL_GPL(cpu_logical_map);
>  
> +u64 init_sctlr;
> +
>  void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  {
>  	init_mm.start_code = (unsigned long) _text;
> @@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
>  #endif
>  
> +	/*
> +	 * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
> +	 * the value that it was set to by the early startup code.
> +	 */
> +	asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));
> +	init_sctlr &= SCTLR_TASK_MASK;
> +	init_task.thread.sctlr = init_sctlr;
> +
>  	if (boot_args[1] || boot_args[2] || boot_args[3]) {
>  		pr_err("WARNING: x1-x3 nonzero in violation of boot protocol:\n"
>  			"\tx1: %016llx\n\tx2: %016llx\n\tx3: %016llx\n"
> diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
> index 30f68b42eeb5..61bf4774b8f2 100644
> --- a/include/uapi/linux/elf.h
> +++ b/include/uapi/linux/elf.h
> @@ -426,6 +426,7 @@ typedef struct elf64_shdr {
>  #define NT_ARM_PACA_KEYS	0x407	/* ARM pointer authentication address keys */
>  #define NT_ARM_PACG_KEYS	0x408	/* ARM pointer authentication generic key */
>  #define NT_ARM_TAGGED_ADDR_CTRL	0x409	/* arm64 tagged address control (prctl()) */
> +#define NT_ARM_PAC_ENABLED_KEYS	0x40a	/* arm64 ptr auth enabled keys (prctl()) */
>  #define NT_ARC_V2	0x600		/* ARCv2 accumulator/extra registers */
>  #define NT_VMCOREDD	0x700		/* Vmcore Device Dump Note */
>  #define NT_MIPS_DSP	0x800		/* MIPS DSP ASE registers */
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 7f0827705c9a..0d1bb3a2e59a 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -247,4 +247,8 @@ struct prctl_mm_map {
>  #define PR_SET_IO_FLUSHER		57
>  #define PR_GET_IO_FLUSHER		58
>  
> +/* Set/get enabled arm64 pointer authentication keys */
> +#define PR_PAC_SET_ENABLED_KEYS		59
> +#define PR_PAC_GET_ENABLED_KEYS		60
> +
>  #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index ab6c409b1159..0cbd5841e521 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -119,6 +119,12 @@
>  #ifndef PAC_RESET_KEYS
>  # define PAC_RESET_KEYS(a, b)	(-EINVAL)
>  #endif
> +#ifndef PAC_SET_ENABLED_KEYS
> +# define PAC_SET_ENABLED_KEYS(a, b, c)	(-EINVAL)
> +#endif
> +#ifndef PAC_GET_ENABLED_KEYS
> +# define PAC_GET_ENABLED_KEYS(a)	(-EINVAL)
> +#endif
>  #ifndef SET_TAGGED_ADDR_CTRL
>  # define SET_TAGGED_ADDR_CTRL(a)	(-EINVAL)
>  #endif
> @@ -2497,6 +2503,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>  			return -EINVAL;
>  		error = PAC_RESET_KEYS(me, arg2);
>  		break;
> +	case PR_PAC_SET_ENABLED_KEYS:
> +		if (arg4 || arg5)
> +			return -EINVAL;
> +		error = PAC_SET_ENABLED_KEYS(me, arg2, arg3);
> +		break;
> +	case PR_PAC_GET_ENABLED_KEYS:
> +		if (arg2 || arg3 || arg4 || arg5)
> +			return -EINVAL;
> +		error = PAC_GET_ENABLED_KEYS(me);
> +		break;
>  	case PR_SET_TAGGED_ADDR_CTRL:
>  		if (arg3 || arg4 || arg5)
>  			return -EINVAL;
> -- 
> 2.28.0.1011.ga647a8990f-goog
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-18 17:55   ` Dave Martin
  0 siblings, 0 replies; 34+ messages in thread
From: Dave Martin @ 2020-11-18 17:55 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: Andrey Konovalov, Kevin Brodsky, linux-api, Kostya Serebryany,
	Linux ARM, Catalin Marinas, Vincenzo Frascino, Will Deacon,
	Evgenii Stepanov

On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> This prctl allows the user program to control which PAC keys are enabled
> in a particular task. The main reason why this is useful is to enable a
> userspace ABI that uses PAC to sign and authenticate function pointers
> and other pointers exposed outside of the function, while still allowing
> binaries conforming to the ABI to interoperate with legacy binaries that
> do not sign or authenticate pointers.

Does this boil down to using the B key(s) to sign pointers that flow
across library boundaries?

ld.so would then need a control to turn off the B key off if, say,
the main program or some random library it uses hasn't been rebuilt
to enable this signing.

(I think we discussed this before, but I keep forgetting the exact
rationale.)

> The idea is that a dynamic loader or early startup code would issue
> this prctl very early after establishing that a process may load legacy
> binaries, but before executing any PAC instructions.

We probably need a new program property in ELF to describe this.

Currently, there is GNU_PROPERTY_AARCH64_FEATURE_1_PAC, which does not
specify a key -- but I think it could be defined retrospectively to
apply to just the APIA key, say.

This raises the question of whether the kernel should actually turn the
keys on or off based on these flags.  For historical compatibility,
probably not -- but we could perhaps do that for the B key since the ABI
for use of the B key is entirely unstandardised so far.


I'll take a look at the patch tomorrow -- my brain isn't functioning
right now.

Cheers
---Dave

> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
> ---
> This patch must be applied on top of Catalin's MTE series:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/mte
> 
> Alternatively the series with this patch on top may be
> downloaded from Gerrit by following the link above.
> 
> v2:
> - added prctl(PR_PAC_GET_ENABLED_KEYS)
> - added ptrace APIs for getting and setting the set of enabled
>   keys
> - optimized the instruction sequence for kernel entry/exit
> - rebased on top of MTE series
> 
>  .../arm64/pointer-authentication.rst          | 27 +++++++++
>  arch/arm64/include/asm/pointer_auth.h         | 11 ++++
>  arch/arm64/include/asm/processor.h            | 21 ++++++-
>  arch/arm64/include/asm/sysreg.h               |  4 +-
>  arch/arm64/kernel/asm-offsets.c               |  1 +
>  arch/arm64/kernel/entry.S                     | 27 +++++++++
>  arch/arm64/kernel/mte.c                       | 39 +++----------
>  arch/arm64/kernel/pointer_auth.c              | 56 +++++++++++++++++++
>  arch/arm64/kernel/process.c                   | 37 ++++++++++++
>  arch/arm64/kernel/ptrace.c                    | 41 ++++++++++++++
>  arch/arm64/kernel/setup.c                     | 10 ++++
>  include/uapi/linux/elf.h                      |  1 +
>  include/uapi/linux/prctl.h                    |  4 ++
>  kernel/sys.c                                  | 16 ++++++
>  14 files changed, 261 insertions(+), 34 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.rst b/Documentation/arm64/pointer-authentication.rst
> index 30b2ab06526b..1f7e064deeb3 100644
> --- a/Documentation/arm64/pointer-authentication.rst
> +++ b/Documentation/arm64/pointer-authentication.rst
> @@ -107,3 +107,30 @@ filter out the Pointer Authentication system key registers from
>  KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
>  register. Any attempt to use the Pointer Authentication instructions will
>  result in an UNDEFINED exception being injected into the guest.
> +
> +
> +Enabling and disabling keys
> +---------------------------
> +
> +The prctl PR_PAC_SET_ENABLED_KEYS allows the user program to control which
> +PAC keys are enabled in a particular task. It takes two arguments, the
> +first being a bitmask of PR_PAC_APIAKEY, PR_PAC_APIBKEY, PR_PAC_APDAKEY
> +and PR_PAC_APDBKEY specifying which keys shall be affected by this prctl,
> +and the second being a bitmask of the same bits specifying whether the key
> +should be enabled or disabled. For example::
> +
> +  prctl(PR_PAC_SET_ENABLED_KEYS,
> +        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
> +        PR_PAC_APIBKEY, 0, 0);
> +
> +disables all keys except the IB key.
> +
> +The main reason why this is useful is to enable a userspace ABI that uses PAC
> +instructions to sign and authenticate function pointers and other pointers
> +exposed outside of the function, while still allowing binaries conforming to
> +the ABI to interoperate with legacy binaries that do not sign or authenticate
> +pointers.
> +
> +The idea is that a dynamic loader or early startup code would issue this
> +prctl very early after establishing that a process may load legacy binaries,
> +but before executing any PAC instructions.
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index c6b4f0603024..662a005d9070 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -3,6 +3,7 @@
>  #define __ASM_POINTER_AUTH_H
>  
>  #include <linux/bitops.h>
> +#include <linux/prctl.h>
>  #include <linux/random.h>
>  
>  #include <asm/cpufeature.h>
> @@ -71,6 +72,11 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
>  
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>  
> +extern int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk,
> +					  unsigned long keys,
> +					  unsigned long enabled);
> +extern int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk);
> +
>  static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  {
>  	return ptrauth_clear_pac(ptr);
> @@ -85,10 +91,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  
>  #else /* CONFIG_ARM64_PTR_AUTH */
>  #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
> +#define ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)	(-EINVAL)
> +#define ptrauth_prctl_get_enabled_keys(tsk)	(-EINVAL)
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_thread_init_user(tsk)
>  #define ptrauth_thread_init_kernel(tsk)
>  #define ptrauth_thread_switch_kernel(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
> +#define PR_PAC_ENABLED_KEYS_MASK                                               \
> +	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
> +
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index fec204d28fce..68c9d8064759 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -152,11 +152,17 @@ struct thread_struct {
>  	struct ptrauth_keys_kernel	keys_kernel;
>  #endif
>  #ifdef CONFIG_ARM64_MTE
> -	u64			sctlr_tcf0;
>  	u64			gcr_user_incl;
>  #endif
> +	u64			sctlr;
>  };
>  
> +extern u64 init_sctlr;
> +
> +#define SCTLR_TASK_MASK                                                        \
> +	(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB |   \
> +	 SCTLR_EL1_TCF0_MASK)
> +
>  static inline void arch_thread_struct_whitelist(unsigned long *offset,
>  						unsigned long *size)
>  {
> @@ -263,6 +269,14 @@ extern void release_thread(struct task_struct *);
>  
>  unsigned long get_wchan(struct task_struct *p);
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +void set_task_sctlr_el1(u64 sctlr);
> +#else
> +static inline void set_task_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif
> +
>  /* Thread switching */
>  extern struct task_struct *cpu_switch_to(struct task_struct *prev,
>  					 struct task_struct *next);
> @@ -317,6 +331,11 @@ extern void __init minsigstksz_setup(void);
>  /* PR_PAC_RESET_KEYS prctl */
>  #define PAC_RESET_KEYS(tsk, arg)	ptrauth_prctl_reset_keys(tsk, arg)
>  
> +/* PR_PAC_{SET,GET}_ENABLED_KEYS prctl */
> +#define PAC_SET_ENABLED_KEYS(tsk, keys, enabled)				\
> +	ptrauth_prctl_set_enabled_keys(tsk, keys, enabled)
> +#define PAC_GET_ENABLED_KEYS(tsk) ptrauth_prctl_get_enabled_keys(tsk)
> +
>  #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
>  /* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
>  long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 52eefe2f7d95..912150361338 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -548,8 +548,10 @@
>  #define SCTLR_ELx_TCF_ASYNC	(UL(0x2) << SCTLR_ELx_TCF_SHIFT)
>  #define SCTLR_ELx_TCF_MASK	(UL(0x3) << SCTLR_ELx_TCF_SHIFT)
>  
> +#define SCTLR_ELx_ENIA_SHIFT	31
> +
>  #define SCTLR_ELx_ITFSB	(BIT(37))
> -#define SCTLR_ELx_ENIA	(BIT(31))
> +#define SCTLR_ELx_ENIA	(BIT(SCTLR_ELx_ENIA_SHIFT))
>  #define SCTLR_ELx_ENIB	(BIT(30))
>  #define SCTLR_ELx_ENDA	(BIT(27))
>  #define SCTLR_ELx_EE    (BIT(25))
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7d32fc959b1a..cb49d4f31540 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -48,6 +48,7 @@ int main(void)
>    DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
>    DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
>  #endif
> +  DEFINE(THREAD_SCTLR,		offsetof(struct task_struct, thread.sctlr));
>    BLANK();
>    DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
>    DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ff34461524d4..19d24666b529 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -214,6 +214,18 @@ alternative_else_nop_endif
>  
>  	ptrauth_keys_install_kernel tsk, x20, x22, x23
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/* Enable IA for in-kernel PAC if the task had it disabled. */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	orr	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0
> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	scs_load tsk, x20
>  	.else
>  	add	x21, sp, #S_FRAME_SIZE
> @@ -332,6 +344,21 @@ alternative_else_nop_endif
>  	/* No kernel C function calls after this as user keys are set. */
>  	ptrauth_keys_install_user tsk, x0, x1, x2
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	/*
> +	 * IA was enabled for in-kernel PAC. Disable it now if needed.
> +	 * All other per-task SCTLR bits were updated on task switch.
> +	 */
> +	ldr	x0, [tsk, THREAD_SCTLR]
> +	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
> +	mrs	x0, sctlr_el1
> +	bic	x0, x0, SCTLR_ELx_ENIA
> +	msr	sctlr_el1, x0
> +1:
> +alternative_else_nop_endif
> +#endif
> +
>  	apply_ssbd 0, x0, x1
>  	.endif
>  
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 52a0638ed967..60238a5e3c82 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -72,26 +72,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
>  	return ret;
>  }
>  
> -static void update_sctlr_el1_tcf0(u64 tcf0)
> -{
> -	/* ISB required for the kernel uaccess routines */
> -	sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCF0_MASK, tcf0);
> -	isb();
> -}
> -
> -static void set_sctlr_el1_tcf0(u64 tcf0)
> -{
> -	/*
> -	 * mte_thread_switch() checks current->thread.sctlr_tcf0 as an
> -	 * optimisation. Disable preemption so that it does not see
> -	 * the variable update before the SCTLR_EL1.TCF0 one.
> -	 */
> -	preempt_disable();
> -	current->thread.sctlr_tcf0 = tcf0;
> -	update_sctlr_el1_tcf0(tcf0);
> -	preempt_enable();
> -}
> -
>  static void update_gcr_el1_excl(u64 incl)
>  {
>  	u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
> @@ -120,8 +100,6 @@ void flush_mte_state(void)
>  	dsb(ish);
>  	write_sysreg_s(0, SYS_TFSRE0_EL1);
>  	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
> -	/* disable tag checking */
> -	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
>  	/* reset tag generation mask */
>  	set_gcr_el1_excl(0);
>  }
> @@ -131,9 +109,6 @@ void mte_thread_switch(struct task_struct *next)
>  	if (!system_supports_mte())
>  		return;
>  
> -	/* avoid expensive SCTLR_EL1 accesses if no change */
> -	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
> -		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
>  	update_gcr_el1_excl(next->thread.gcr_user_incl);
>  }
>  
> @@ -147,7 +122,7 @@ void mte_suspend_exit(void)
>  
>  long set_mte_ctrl(struct task_struct *task, unsigned long arg)
>  {
> -	u64 tcf0;
> +	u64 sctlr = task->thread.sctlr & ~SCTLR_EL1_TCF0_MASK;
>  	u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
>  
>  	if (!system_supports_mte())
> @@ -155,23 +130,23 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
>  
>  	switch (arg & PR_MTE_TCF_MASK) {
>  	case PR_MTE_TCF_NONE:
> -		tcf0 = SCTLR_EL1_TCF0_NONE;
> +		sctlr |= SCTLR_EL1_TCF0_NONE;
>  		break;
>  	case PR_MTE_TCF_SYNC:
> -		tcf0 = SCTLR_EL1_TCF0_SYNC;
> +		sctlr |= SCTLR_EL1_TCF0_SYNC;
>  		break;
>  	case PR_MTE_TCF_ASYNC:
> -		tcf0 = SCTLR_EL1_TCF0_ASYNC;
> +		sctlr |= SCTLR_EL1_TCF0_ASYNC;
>  		break;
>  	default:
>  		return -EINVAL;
>  	}
>  
>  	if (task != current) {
> -		task->thread.sctlr_tcf0 = tcf0;
> +		task->thread.sctlr = sctlr;
>  		task->thread.gcr_user_incl = gcr_incl;
>  	} else {
> -		set_sctlr_el1_tcf0(tcf0);
> +		set_task_sctlr_el1(sctlr);
>  		set_gcr_el1_excl(gcr_incl);
>  	}
>  
> @@ -187,7 +162,7 @@ long get_mte_ctrl(struct task_struct *task)
>  
>  	ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
>  
> -	switch (task->thread.sctlr_tcf0) {
> +	switch (task->thread.sctlr & SCTLR_EL1_TCF0_MASK) {
>  	case SCTLR_EL1_TCF0_NONE:
>  		return PR_MTE_TCF_NONE;
>  	case SCTLR_EL1_TCF0_SYNC:
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index 1e77736a4f66..0e763e8babd8 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -1,5 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0
>  
> +#include <linux/compat.h>
>  #include <linux/errno.h>
>  #include <linux/prctl.h>
>  #include <linux/random.h>
> @@ -42,3 +43,58 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
>  
>  	return 0;
>  }
> +
> +static u64 arg_to_enxx_mask(unsigned long arg)
> +{
> +	u64 sctlr_enxx_mask = 0;
> +	if (arg & PR_PAC_APIAKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENIA;
> +	if (arg & PR_PAC_APIBKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENIB;
> +	if (arg & PR_PAC_APDAKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENDA;
> +	if (arg & PR_PAC_APDBKEY)
> +		sctlr_enxx_mask |= SCTLR_ELx_ENDB;
> +	return sctlr_enxx_mask;
> +}
> +
> +int ptrauth_prctl_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
> +				   unsigned long enabled)
> +{
> +	u64 sctlr = tsk->thread.sctlr;
> +
> +	if (!system_supports_address_auth() || is_compat_task())
> +		return -EINVAL;
> +
> +	if ((keys & ~PR_PAC_ENABLED_KEYS_MASK) || (enabled & ~keys))
> +		return -EINVAL;
> +
> +	sctlr &= ~arg_to_enxx_mask(keys);
> +	sctlr |= arg_to_enxx_mask(enabled);
> +	if (tsk == current) {
> +		set_task_sctlr_el1(sctlr);
> +	} else {
> +		tsk->thread.sctlr = sctlr;
> +	}
> +
> +	return 0;
> +}
> +
> +int ptrauth_prctl_get_enabled_keys(struct task_struct *tsk)
> +{
> +	int retval = 0;
> +
> +	if (!system_supports_address_auth() || is_compat_task())
> +		return -EINVAL;
> +
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENIA)
> +		retval |= PR_PAC_APIAKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENIB)
> +		retval |= PR_PAC_APIBKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENDA)
> +		retval |= PR_PAC_APDAKEY;
> +	if (tsk->thread.sctlr & SCTLR_ELx_ENDB)
> +		retval |= PR_PAC_APDBKEY;
> +
> +	return retval;
> +}
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 05a9cdd0b471..7fb28ccdf862 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
>  	write_sysreg(val, cntkctl_el1);
>  }
>  
> +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * EnIA must not be cleared while in the kernel as this is necessary for
> +	 * in-kernel PAC. It will be cleared on kernel exit if needed.
> +	 */
> +	sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
> +
> +	/* ISB required for the kernel uaccess routines when setting TCF0. */
> +	isb();
> +}
> +
> +void set_task_sctlr_el1(u64 sctlr)
> +{
> +	/*
> +	 * __switch_to() checks current->thread.sctlr as an
> +	 * optimisation. Disable preemption so that it does not see
> +	 * the variable update before the SCTLR_EL1 one.
> +	 */
> +	preempt_disable();
> +	current->thread.sctlr = sctlr;
> +	update_sctlr_el1(sctlr);
> +	preempt_enable();
> +}
> +#else
> +static void update_sctlr_el1(u64 sctlr)
> +{
> +}
> +#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
> +
>  /*
>   * Thread switching.
>   */
> @@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
>  	 */
>  	dsb(ish);
>  
> +	/* avoid expensive SCTLR_EL1 accesses if no change */
> +	if (prev->thread.sctlr != next->thread.sctlr)
> +		update_sctlr_el1(next->thread.sctlr);
> +
>  	/*
>  	 * MTE thread switching must happen after the DSB above to ensure that
>  	 * any asynchronous tag check faults have been logged in the TFSR*_EL1
> @@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
>  void arch_setup_new_exec(void)
>  {
>  	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
> +	if (current->thread.sctlr != init_sctlr)
> +		set_task_sctlr_el1(init_sctlr);
>  
>  	ptrauth_thread_init_user(current);
>  }
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index f49b349e16a3..2ed17fb07666 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -911,6 +911,38 @@ static int pac_mask_get(struct task_struct *target,
>  	return membuf_write(&to, &uregs, sizeof(uregs));
>  }
>  
> +static int pac_enabled_keys_get(struct task_struct *target,
> +				const struct user_regset *regset,
> +				struct membuf to)
> +{
> +	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
> +
> +	if (IS_ERR_VALUE(enabled_keys))
> +		return enabled_keys;
> +
> +	return membuf_write(&to, &enabled_keys, sizeof(enabled_keys));
> +}
> +
> +static int pac_enabled_keys_set(struct task_struct *target,
> +				const struct user_regset *regset,
> +				unsigned int pos, unsigned int count,
> +				const void *kbuf, const void __user *ubuf)
> +{
> +	int ret;
> +	long enabled_keys = ptrauth_prctl_get_enabled_keys(target);
> +
> +	if (IS_ERR_VALUE(enabled_keys))
> +		return enabled_keys;
> +
> +	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
> +				 &enabled_keys, 0, -1);
> +	if (ret)
> +		return ret;
> +
> +	return ptrauth_prctl_set_enabled_keys(target, PR_PAC_ENABLED_KEYS_MASK,
> +					      enabled_keys);
> +}
> +
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  static __uint128_t pac_key_to_user(const struct ptrauth_key *key)
>  {
> @@ -1076,6 +1108,7 @@ enum aarch64_regset {
>  #endif
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	REGSET_PAC_MASK,
> +	REGSET_PAC_ENABLED_KEYS,
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  	REGSET_PACA_KEYS,
>  	REGSET_PACG_KEYS,
> @@ -1162,6 +1195,14 @@ static const struct user_regset aarch64_regsets[] = {
>  		.regset_get = pac_mask_get,
>  		/* this cannot be set dynamically */
>  	},
> +	[REGSET_PAC_ENABLED_KEYS] = {
> +		.core_note_type = NT_ARM_PAC_ENABLED_KEYS,
> +		.n = 1,
> +		.size = sizeof(long),
> +		.align = sizeof(long),
> +		.regset_get = pac_enabled_keys_get,
> +		.set = pac_enabled_keys_set,
> +	},
>  #ifdef CONFIG_CHECKPOINT_RESTORE
>  	[REGSET_PACA_KEYS] = {
>  		.core_note_type = NT_ARM_PACA_KEYS,
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 77c4c9bad1b8..91932215a6dd 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
>  }
>  EXPORT_SYMBOL_GPL(cpu_logical_map);
>  
> +u64 init_sctlr;
> +
>  void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  {
>  	init_mm.start_code = (unsigned long) _text;
> @@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
>  #endif
>  
> +	/*
> +	 * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
> +	 * the value that it was set to by the early startup code.
> +	 */
> +	asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));
> +	init_sctlr &= SCTLR_TASK_MASK;
> +	init_task.thread.sctlr = init_sctlr;
> +
>  	if (boot_args[1] || boot_args[2] || boot_args[3]) {
>  		pr_err("WARNING: x1-x3 nonzero in violation of boot protocol:\n"
>  			"\tx1: %016llx\n\tx2: %016llx\n\tx3: %016llx\n"
> diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
> index 30f68b42eeb5..61bf4774b8f2 100644
> --- a/include/uapi/linux/elf.h
> +++ b/include/uapi/linux/elf.h
> @@ -426,6 +426,7 @@ typedef struct elf64_shdr {
>  #define NT_ARM_PACA_KEYS	0x407	/* ARM pointer authentication address keys */
>  #define NT_ARM_PACG_KEYS	0x408	/* ARM pointer authentication generic key */
>  #define NT_ARM_TAGGED_ADDR_CTRL	0x409	/* arm64 tagged address control (prctl()) */
> +#define NT_ARM_PAC_ENABLED_KEYS	0x40a	/* arm64 ptr auth enabled keys (prctl()) */
>  #define NT_ARC_V2	0x600		/* ARCv2 accumulator/extra registers */
>  #define NT_VMCOREDD	0x700		/* Vmcore Device Dump Note */
>  #define NT_MIPS_DSP	0x800		/* MIPS DSP ASE registers */
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 7f0827705c9a..0d1bb3a2e59a 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -247,4 +247,8 @@ struct prctl_mm_map {
>  #define PR_SET_IO_FLUSHER		57
>  #define PR_GET_IO_FLUSHER		58
>  
> +/* Set/get enabled arm64 pointer authentication keys */
> +#define PR_PAC_SET_ENABLED_KEYS		59
> +#define PR_PAC_GET_ENABLED_KEYS		60
> +
>  #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index ab6c409b1159..0cbd5841e521 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -119,6 +119,12 @@
>  #ifndef PAC_RESET_KEYS
>  # define PAC_RESET_KEYS(a, b)	(-EINVAL)
>  #endif
> +#ifndef PAC_SET_ENABLED_KEYS
> +# define PAC_SET_ENABLED_KEYS(a, b, c)	(-EINVAL)
> +#endif
> +#ifndef PAC_GET_ENABLED_KEYS
> +# define PAC_GET_ENABLED_KEYS(a)	(-EINVAL)
> +#endif
>  #ifndef SET_TAGGED_ADDR_CTRL
>  # define SET_TAGGED_ADDR_CTRL(a)	(-EINVAL)
>  #endif
> @@ -2497,6 +2503,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>  			return -EINVAL;
>  		error = PAC_RESET_KEYS(me, arg2);
>  		break;
> +	case PR_PAC_SET_ENABLED_KEYS:
> +		if (arg4 || arg5)
> +			return -EINVAL;
> +		error = PAC_SET_ENABLED_KEYS(me, arg2, arg3);
> +		break;
> +	case PR_PAC_GET_ENABLED_KEYS:
> +		if (arg2 || arg3 || arg4 || arg5)
> +			return -EINVAL;
> +		error = PAC_GET_ENABLED_KEYS(me);
> +		break;
>  	case PR_SET_TAGGED_ADDR_CTRL:
>  		if (arg3 || arg4 || arg5)
>  			return -EINVAL;
> -- 
> 2.28.0.1011.ga647a8990f-goog
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-18 17:31       ` Florian Weimer
@ 2020-11-18 18:18         ` Dave Martin
  -1 siblings, 0 replies; 34+ messages in thread
From: Dave Martin @ 2020-11-18 18:18 UTC (permalink / raw)
  To: Florian Weimer
  Cc: libc-alpha, Will Deacon, linux-api, Kevin Brodsky,
	Andrey Konovalov, Kostya Serebryany, Linux ARM, Catalin Marinas,
	Vincenzo Frascino, Peter Collingbourne, Evgenii Stepanov

On Wed, Nov 18, 2020 at 06:31:21PM +0100, Florian Weimer wrote:
> * Dave Martin:
> 
> > IIUC the default compiler options when using PAC will only use the
> > A key, and only use the PAC instructions that execute as NOPs when the
> > affected key is disabled (precisely so that the code still runs on non-
> > PAC supporting hardware).  But you can't simply flip it on and off while
> > there are function frames on the stack that assume it's either on or off.
> 
> I think we can do the switch at the top-most frame, at least in ld.so.
> I have not thought about statically linked binaries. 8-/

I guess that's one argument for doing this in the kernel, if it can be
done in a compatible way.

We might want an antiproperty for this: so ..._PAC means that PAC is
used and _PAC|_NOIBKEY|_NODBKEY etc. disables specific keys.  
(With no flags, we could keep the legacy behaviour of just enabling all
the keys, but the program might not use PAC at all.)

Alternatively, if the linker provides symbols for the property section,
maybe the libc startup could inspect it?  I think this section is mapped
to PT_LOAD segment in practice.

Cheers
--Dave

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-18 18:18         ` Dave Martin
  0 siblings, 0 replies; 34+ messages in thread
From: Dave Martin @ 2020-11-18 18:18 UTC (permalink / raw)
  To: Florian Weimer
  Cc: libc-alpha, Peter Collingbourne, Andrey Konovalov, Kevin Brodsky,
	Evgenii Stepanov, Kostya Serebryany, Catalin Marinas, linux-api,
	Vincenzo Frascino, Will Deacon, Linux ARM

On Wed, Nov 18, 2020 at 06:31:21PM +0100, Florian Weimer wrote:
> * Dave Martin:
> 
> > IIUC the default compiler options when using PAC will only use the
> > A key, and only use the PAC instructions that execute as NOPs when the
> > affected key is disabled (precisely so that the code still runs on non-
> > PAC supporting hardware).  But you can't simply flip it on and off while
> > there are function frames on the stack that assume it's either on or off.
> 
> I think we can do the switch at the top-most frame, at least in ld.so.
> I have not thought about statically linked binaries. 8-/

I guess that's one argument for doing this in the kernel, if it can be
done in a compatible way.

We might want an antiproperty for this: so ..._PAC means that PAC is
used and _PAC|_NOIBKEY|_NODBKEY etc. disables specific keys.  
(With no flags, we could keep the legacy behaviour of just enabling all
the keys, but the program might not use PAC at all.)

Alternatively, if the linker provides symbols for the property section,
maybe the libc startup could inspect it?  I think this section is mapped
to PT_LOAD segment in practice.

Cheers
--Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-18 17:55   ` Dave Martin
@ 2020-11-18 19:05     ` Peter Collingbourne
  -1 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-11-18 19:05 UTC (permalink / raw)
  To: Dave Martin
  Cc: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany,
	Vincenzo Frascino, Andrey Konovalov, Kevin Brodsky, Linux API,
	Will Deacon, Linux ARM

On Wed, Nov 18, 2020 at 9:55 AM Dave Martin <Dave.Martin@arm.com> wrote:
>
> On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> > This prctl allows the user program to control which PAC keys are enabled
> > in a particular task. The main reason why this is useful is to enable a
> > userspace ABI that uses PAC to sign and authenticate function pointers
> > and other pointers exposed outside of the function, while still allowing
> > binaries conforming to the ABI to interoperate with legacy binaries that
> > do not sign or authenticate pointers.
>
> Does this boil down to using the B key(s) to sign pointers that flow
> across library boundaries?

Right, the B key or whichever I key you select as part of the
interprocedural ABI (most likely B to avoid the kernel's entry/exit
slow path if it needs to be disabled).

> ld.so would then need a control to turn off the B key off if, say,
> the main program or some random library it uses hasn't been rebuilt
> to enable this signing.
>
> (I think we discussed this before, but I keep forgetting the exact
> rationale.)

See [1] where we discussed this before.

> > The idea is that a dynamic loader or early startup code would issue
> > this prctl very early after establishing that a process may load legacy
> > binaries, but before executing any PAC instructions.
>
> We probably need a new program property in ELF to describe this.

There is a draft PAuth ABI [2] which will specify how the feature
works in ELF. I believe that the specification will include details of
how the ELF markings are going to work.

> Currently, there is GNU_PROPERTY_AARCH64_FEATURE_1_PAC, which does not
> specify a key -- but I think it could be defined retrospectively to
> apply to just the APIA key, say.
>
> This raises the question of whether the kernel should actually turn the
> keys on or off based on these flags.  For historical compatibility,
> probably not -- but we could perhaps do that for the B key since the ABI
> for use of the B key is entirely unstandardised so far.

I would be against having the kernel read any ELF properties at this
point. All of the processing can be done in the dynamic loader, and
ELF properties are strictly less powerful than having the dynamic
loader decide what to do. It's not enough to look at the main
executable anyway because you would also need to read the recursive
dynamic libraries and that's a task better done in userspace.
Userspace may also require more complex logic than what can be
expressed with ELF properties (for example, on Android we would need
the same executable to be launched twice, once with keys enabled and
once with keys disabled). If we do anything at all it should probably
be based on whatever we come up with for the PAuth ABI which hasn't
been finalized yet.

> I'll take a look at the patch tomorrow -- my brain isn't functioning
> right now.

Thanks.

Peter

[1] https://lore.kernel.org/linux-arm-kernel/CAMn1gO5JV4-xDO0TAJcY8DmNVgZb_sBH=RUEQY1gTUmmFPGHqQ@mail.gmail.com/
[2] https://github.com/ARM-software/abi-aa/blob/master/pauthabielf64/pauthabielf64.rst

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-18 19:05     ` Peter Collingbourne
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-11-18 19:05 UTC (permalink / raw)
  To: Dave Martin
  Cc: Andrey Konovalov, Kevin Brodsky, Linux API, Kostya Serebryany,
	Linux ARM, Catalin Marinas, Vincenzo Frascino, Will Deacon,
	Evgenii Stepanov

On Wed, Nov 18, 2020 at 9:55 AM Dave Martin <Dave.Martin@arm.com> wrote:
>
> On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> > This prctl allows the user program to control which PAC keys are enabled
> > in a particular task. The main reason why this is useful is to enable a
> > userspace ABI that uses PAC to sign and authenticate function pointers
> > and other pointers exposed outside of the function, while still allowing
> > binaries conforming to the ABI to interoperate with legacy binaries that
> > do not sign or authenticate pointers.
>
> Does this boil down to using the B key(s) to sign pointers that flow
> across library boundaries?

Right, the B key or whichever I key you select as part of the
interprocedural ABI (most likely B to avoid the kernel's entry/exit
slow path if it needs to be disabled).

> ld.so would then need a control to turn off the B key off if, say,
> the main program or some random library it uses hasn't been rebuilt
> to enable this signing.
>
> (I think we discussed this before, but I keep forgetting the exact
> rationale.)

See [1] where we discussed this before.

> > The idea is that a dynamic loader or early startup code would issue
> > this prctl very early after establishing that a process may load legacy
> > binaries, but before executing any PAC instructions.
>
> We probably need a new program property in ELF to describe this.

There is a draft PAuth ABI [2] which will specify how the feature
works in ELF. I believe that the specification will include details of
how the ELF markings are going to work.

> Currently, there is GNU_PROPERTY_AARCH64_FEATURE_1_PAC, which does not
> specify a key -- but I think it could be defined retrospectively to
> apply to just the APIA key, say.
>
> This raises the question of whether the kernel should actually turn the
> keys on or off based on these flags.  For historical compatibility,
> probably not -- but we could perhaps do that for the B key since the ABI
> for use of the B key is entirely unstandardised so far.

I would be against having the kernel read any ELF properties at this
point. All of the processing can be done in the dynamic loader, and
ELF properties are strictly less powerful than having the dynamic
loader decide what to do. It's not enough to look at the main
executable anyway because you would also need to read the recursive
dynamic libraries and that's a task better done in userspace.
Userspace may also require more complex logic than what can be
expressed with ELF properties (for example, on Android we would need
the same executable to be launched twice, once with keys enabled and
once with keys disabled). If we do anything at all it should probably
be based on whatever we come up with for the PAuth ABI which hasn't
been finalized yet.

> I'll take a look at the patch tomorrow -- my brain isn't functioning
> right now.

Thanks.

Peter

[1] https://lore.kernel.org/linux-arm-kernel/CAMn1gO5JV4-xDO0TAJcY8DmNVgZb_sBH=RUEQY1gTUmmFPGHqQ@mail.gmail.com/
[2] https://github.com/ARM-software/abi-aa/blob/master/pauthabielf64/pauthabielf64.rst

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  2020-11-18 12:25   ` Catalin Marinas
@ 2020-11-19  5:20     ` Peter Collingbourne
  -1 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-11-19  5:20 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Evgenii Stepanov, Kostya Serebryany, Vincenzo Frascino,
	Dave Martin, Linux ARM, Kevin Brodsky, Andrey Konovalov,
	Will Deacon, Linux API, Szabolcs Nagy, libc-alpha

On Wed, Nov 18, 2020 at 4:25 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index ff34461524d4..19d24666b529 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -214,6 +214,18 @@ alternative_else_nop_endif
> >
> >       ptrauth_keys_install_kernel tsk, x20, x22, x23
> >
> > +#ifdef CONFIG_ARM64_PTR_AUTH
> > +alternative_if ARM64_HAS_ADDRESS_AUTH
>
> If we get too many instructions, we might as well do (same further
> down):
>
> alternative_if_not ARM64_HAS_ADDRESS_AUTH
>         b       1f
> alternative_else_nop_endif

I benchmarked this on the DragonBoard 845c and the nops were faster
(461.4ns vs 463.8ns) so I left it as is.

> > +     /* Enable IA for in-kernel PAC if the task had it disabled. */
> > +     ldr     x0, [tsk, THREAD_SCTLR]
> > +     tbnz    x0, SCTLR_ELx_ENIA_SHIFT, 1f
> > +     mrs     x0, sctlr_el1
> > +     orr     x0, x0, SCTLR_ELx_ENIA
> > +     msr     sctlr_el1, x0
>
> I now realised that this is missing an ISB. Writes to system registers
> in general are not visible until the context is synchronised. I suggest
> you change the line above your hunk to
> ptrauth_keys_install_kernel_nosync followed by an explicit isb here.
>
> Note that this ISB may affect your benchmark results slightly. I think
> you only used MSR without ISB.

Okay, so for these benchmarks we need to simulate what's happening
after we install the kernel keys (that part we can't do on existing
HW). So we need to compare ISB against conditionally enabling EnIA
followed by ISB. Again I simulate conditional enabling using
conditional disabling to make it compatible with existing HW. I also
measured conditional disabling where the branches are not taken (i.e.
slow path). On the DragonBoard I get:

ISB: 488.1ns
EnIA+ISB (fast path): 493.0ns
EnIA+ISB (slow path): 531.8ns

So the delta between ISB and fast path turns out to be a little bit
less than what I measured before.

The exact changes that I tested are here:
https://github.com/pcc/linux/tree/pac-enabled-keys-bench

> > +1:
> > +alternative_else_nop_endif
> > +#endif
> > +
> >       scs_load tsk, x20
> >       .else
> >       add     x21, sp, #S_FRAME_SIZE
> > @@ -332,6 +344,21 @@ alternative_else_nop_endif
> >       /* No kernel C function calls after this as user keys are set. */
> >       ptrauth_keys_install_user tsk, x0, x1, x2
> >
> > +#ifdef CONFIG_ARM64_PTR_AUTH
> > +alternative_if ARM64_HAS_ADDRESS_AUTH
> > +     /*
> > +      * IA was enabled for in-kernel PAC. Disable it now if needed.
> > +      * All other per-task SCTLR bits were updated on task switch.
> > +      */
> > +     ldr     x0, [tsk, THREAD_SCTLR]
> > +     tbnz    x0, SCTLR_ELx_ENIA_SHIFT, 1f
> > +     mrs     x0, sctlr_el1
> > +     bic     x0, x0, SCTLR_ELx_ENIA
> > +     msr     sctlr_el1, x0
>
> That's correct, no need for context synchronisation here as we are soon
> doing an ERET.

Ack.

> > +1:
> > +alternative_else_nop_endif
> > +#endif
> > +
> >       apply_ssbd 0, x0, x1
> >       .endif
> >
> [...]
> > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> > index 05a9cdd0b471..7fb28ccdf862 100644
> > --- a/arch/arm64/kernel/process.c
> > +++ b/arch/arm64/kernel/process.c
> > @@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
> >       write_sysreg(val, cntkctl_el1);
> >  }
> >
> > +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> > +static void update_sctlr_el1(u64 sctlr)
> > +{
> > +     /*
> > +      * EnIA must not be cleared while in the kernel as this is necessary for
> > +      * in-kernel PAC. It will be cleared on kernel exit if needed.
> > +      */
> > +     sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
> > +
> > +     /* ISB required for the kernel uaccess routines when setting TCF0. */
> > +     isb();
> > +}
> > +
> > +void set_task_sctlr_el1(u64 sctlr)
> > +{
> > +     /*
> > +      * __switch_to() checks current->thread.sctlr as an
> > +      * optimisation. Disable preemption so that it does not see
> > +      * the variable update before the SCTLR_EL1 one.
> > +      */
> > +     preempt_disable();
> > +     current->thread.sctlr = sctlr;
> > +     update_sctlr_el1(sctlr);
> > +     preempt_enable();
> > +}
> > +#else
> > +static void update_sctlr_el1(u64 sctlr)
> > +{
> > +}
> > +#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
> > +
> >  /*
> >   * Thread switching.
> >   */
> > @@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
> >        */
> >       dsb(ish);
> >
> > +     /* avoid expensive SCTLR_EL1 accesses if no change */
> > +     if (prev->thread.sctlr != next->thread.sctlr)
> > +             update_sctlr_el1(next->thread.sctlr);
> > +
> >       /*
> >        * MTE thread switching must happen after the DSB above to ensure that
> >        * any asynchronous tag check faults have been logged in the TFSR*_EL1
> > @@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
> >  void arch_setup_new_exec(void)
> >  {
> >       current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
> > +     if (current->thread.sctlr != init_sctlr)
> > +             set_task_sctlr_el1(init_sctlr);
>
> init_sctlr may not have the full information when initialised in
> setup_arch(). We probably get away with this for the current features
> but it may have unexpected behaviour in the long run.
>
> I think we could start from the existing current->thread.sctlr and just
> set the PAC enable bits via ptrauth_thread_init_user(). Similarly, we
> could call a mte_thread_init_user() (just rename flush_mte_state() and
> move its call place) which sets the current->thread.stclr bits that we
> need.
>
> Since we set the bits explicitly within the SCTLR_TASK_MASK, I don't
> think we need any init_sctlr.

Okay, I've changed the code to work like that and removed init_sctlr in v3.

> > diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> > index 77c4c9bad1b8..91932215a6dd 100644
> > --- a/arch/arm64/kernel/setup.c
> > +++ b/arch/arm64/kernel/setup.c
> > @@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
> >  }
> >  EXPORT_SYMBOL_GPL(cpu_logical_map);
> >
> > +u64 init_sctlr;
> > +
> >  void __init __no_sanitize_address setup_arch(char **cmdline_p)
> >  {
> >       init_mm.start_code = (unsigned long) _text;
> > @@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
> >       init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
> >  #endif
> >
> > +     /*
> > +      * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
> > +      * the value that it was set to by the early startup code.
> > +      */
> > +     asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));
>
> We have a read_sysreg(sctlr_el1) but I don't think we need it.
>
> > +     init_sctlr &= SCTLR_TASK_MASK;
> > +     init_task.thread.sctlr = init_sctlr;
>
> At this point we have the full SCTLR_EL1. The cpufeature framework may
> turn on additional bits when SMP is fully operational. Also, a user
> thread is not interested in bits that are specific to the kernel only.
>
> As I mentioned above, if we set the bits explicitly when creating a user
> thread, we can just leave init_task.thread.sctlr to 0.
>
> Also, I'd rename thread.sctlr to sctlr_user so that it's clearer that it
> doesn't affect the kernel settings.

Renamed in v3.

Peter

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
@ 2020-11-19  5:20     ` Peter Collingbourne
  0 siblings, 0 replies; 34+ messages in thread
From: Peter Collingbourne @ 2020-11-19  5:20 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: libc-alpha, Szabolcs Nagy, Andrey Konovalov, Kevin Brodsky,
	Kostya Serebryany, Evgenii Stepanov, Linux API,
	Vincenzo Frascino, Will Deacon, Dave Martin, Linux ARM

On Wed, Nov 18, 2020 at 4:25 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Tue, Oct 13, 2020 at 10:51:06PM -0700, Peter Collingbourne wrote:
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index ff34461524d4..19d24666b529 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -214,6 +214,18 @@ alternative_else_nop_endif
> >
> >       ptrauth_keys_install_kernel tsk, x20, x22, x23
> >
> > +#ifdef CONFIG_ARM64_PTR_AUTH
> > +alternative_if ARM64_HAS_ADDRESS_AUTH
>
> If we get too many instructions, we might as well do (same further
> down):
>
> alternative_if_not ARM64_HAS_ADDRESS_AUTH
>         b       1f
> alternative_else_nop_endif

I benchmarked this on the DragonBoard 845c and the nops were faster
(461.4ns vs 463.8ns) so I left it as is.

> > +     /* Enable IA for in-kernel PAC if the task had it disabled. */
> > +     ldr     x0, [tsk, THREAD_SCTLR]
> > +     tbnz    x0, SCTLR_ELx_ENIA_SHIFT, 1f
> > +     mrs     x0, sctlr_el1
> > +     orr     x0, x0, SCTLR_ELx_ENIA
> > +     msr     sctlr_el1, x0
>
> I now realised that this is missing an ISB. Writes to system registers
> in general are not visible until the context is synchronised. I suggest
> you change the line above your hunk to
> ptrauth_keys_install_kernel_nosync followed by an explicit isb here.
>
> Note that this ISB may affect your benchmark results slightly. I think
> you only used MSR without ISB.

Okay, so for these benchmarks we need to simulate what's happening
after we install the kernel keys (that part we can't do on existing
HW). So we need to compare ISB against conditionally enabling EnIA
followed by ISB. Again I simulate conditional enabling using
conditional disabling to make it compatible with existing HW. I also
measured conditional disabling where the branches are not taken (i.e.
slow path). On the DragonBoard I get:

ISB: 488.1ns
EnIA+ISB (fast path): 493.0ns
EnIA+ISB (slow path): 531.8ns

So the delta between ISB and fast path turns out to be a little bit
less than what I measured before.

The exact changes that I tested are here:
https://github.com/pcc/linux/tree/pac-enabled-keys-bench

> > +1:
> > +alternative_else_nop_endif
> > +#endif
> > +
> >       scs_load tsk, x20
> >       .else
> >       add     x21, sp, #S_FRAME_SIZE
> > @@ -332,6 +344,21 @@ alternative_else_nop_endif
> >       /* No kernel C function calls after this as user keys are set. */
> >       ptrauth_keys_install_user tsk, x0, x1, x2
> >
> > +#ifdef CONFIG_ARM64_PTR_AUTH
> > +alternative_if ARM64_HAS_ADDRESS_AUTH
> > +     /*
> > +      * IA was enabled for in-kernel PAC. Disable it now if needed.
> > +      * All other per-task SCTLR bits were updated on task switch.
> > +      */
> > +     ldr     x0, [tsk, THREAD_SCTLR]
> > +     tbnz    x0, SCTLR_ELx_ENIA_SHIFT, 1f
> > +     mrs     x0, sctlr_el1
> > +     bic     x0, x0, SCTLR_ELx_ENIA
> > +     msr     sctlr_el1, x0
>
> That's correct, no need for context synchronisation here as we are soon
> doing an ERET.

Ack.

> > +1:
> > +alternative_else_nop_endif
> > +#endif
> > +
> >       apply_ssbd 0, x0, x1
> >       .endif
> >
> [...]
> > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> > index 05a9cdd0b471..7fb28ccdf862 100644
> > --- a/arch/arm64/kernel/process.c
> > +++ b/arch/arm64/kernel/process.c
> > @@ -552,6 +552,37 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
> >       write_sysreg(val, cntkctl_el1);
> >  }
> >
> > +#if defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE)
> > +static void update_sctlr_el1(u64 sctlr)
> > +{
> > +     /*
> > +      * EnIA must not be cleared while in the kernel as this is necessary for
> > +      * in-kernel PAC. It will be cleared on kernel exit if needed.
> > +      */
> > +     sysreg_clear_set(sctlr_el1, SCTLR_TASK_MASK & ~SCTLR_ELx_ENIA, sctlr);
> > +
> > +     /* ISB required for the kernel uaccess routines when setting TCF0. */
> > +     isb();
> > +}
> > +
> > +void set_task_sctlr_el1(u64 sctlr)
> > +{
> > +     /*
> > +      * __switch_to() checks current->thread.sctlr as an
> > +      * optimisation. Disable preemption so that it does not see
> > +      * the variable update before the SCTLR_EL1 one.
> > +      */
> > +     preempt_disable();
> > +     current->thread.sctlr = sctlr;
> > +     update_sctlr_el1(sctlr);
> > +     preempt_enable();
> > +}
> > +#else
> > +static void update_sctlr_el1(u64 sctlr)
> > +{
> > +}
> > +#endif  /* defined(CONFIG_ARM64_PTR_AUTH) || defined(CONFIG_ARM64_MTE) */
> > +
> >  /*
> >   * Thread switching.
> >   */
> > @@ -577,6 +608,10 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
> >        */
> >       dsb(ish);
> >
> > +     /* avoid expensive SCTLR_EL1 accesses if no change */
> > +     if (prev->thread.sctlr != next->thread.sctlr)
> > +             update_sctlr_el1(next->thread.sctlr);
> > +
> >       /*
> >        * MTE thread switching must happen after the DSB above to ensure that
> >        * any asynchronous tag check faults have been logged in the TFSR*_EL1
> > @@ -631,6 +666,8 @@ unsigned long arch_align_stack(unsigned long sp)
> >  void arch_setup_new_exec(void)
> >  {
> >       current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
> > +     if (current->thread.sctlr != init_sctlr)
> > +             set_task_sctlr_el1(init_sctlr);
>
> init_sctlr may not have the full information when initialised in
> setup_arch(). We probably get away with this for the current features
> but it may have unexpected behaviour in the long run.
>
> I think we could start from the existing current->thread.sctlr and just
> set the PAC enable bits via ptrauth_thread_init_user(). Similarly, we
> could call a mte_thread_init_user() (just rename flush_mte_state() and
> move its call place) which sets the current->thread.stclr bits that we
> need.
>
> Since we set the bits explicitly within the SCTLR_TASK_MASK, I don't
> think we need any init_sctlr.

Okay, I've changed the code to work like that and removed init_sctlr in v3.

> > diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> > index 77c4c9bad1b8..91932215a6dd 100644
> > --- a/arch/arm64/kernel/setup.c
> > +++ b/arch/arm64/kernel/setup.c
> > @@ -282,6 +282,8 @@ u64 cpu_logical_map(int cpu)
> >  }
> >  EXPORT_SYMBOL_GPL(cpu_logical_map);
> >
> > +u64 init_sctlr;
> > +
> >  void __init __no_sanitize_address setup_arch(char **cmdline_p)
> >  {
> >       init_mm.start_code = (unsigned long) _text;
> > @@ -370,6 +372,14 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
> >       init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
> >  #endif
> >
> > +     /*
> > +      * Stash a task's initial SCTLR_EL1 per-task bits, which is the same as
> > +      * the value that it was set to by the early startup code.
> > +      */
> > +     asm("mrs %0, sctlr_el1" : "=r"(init_sctlr));
>
> We have a read_sysreg(sctlr_el1) but I don't think we need it.
>
> > +     init_sctlr &= SCTLR_TASK_MASK;
> > +     init_task.thread.sctlr = init_sctlr;
>
> At this point we have the full SCTLR_EL1. The cpufeature framework may
> turn on additional bits when SMP is fully operational. Also, a user
> thread is not interested in bits that are specific to the kernel only.
>
> As I mentioned above, if we set the bits explicitly when creating a user
> thread, we can just leave init_task.thread.sctlr to 0.
>
> Also, I'd rename thread.sctlr to sctlr_user so that it's clearer that it
> doesn't affect the kernel settings.

Renamed in v3.

Peter

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2020-11-19  5:21 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-14  5:51 [PATCH v2] arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS) Peter Collingbourne
2020-10-14  5:51 ` Peter Collingbourne
2020-11-17 17:29 ` Catalin Marinas
2020-11-17 17:29   ` Catalin Marinas
2020-11-17 18:14   ` Szabolcs Nagy
2020-11-17 18:14     ` Szabolcs Nagy
2020-11-17 18:40     ` Peter Collingbourne
2020-11-17 18:40       ` Peter Collingbourne
2020-11-17 17:48 ` Florian Weimer
2020-11-17 17:48   ` Florian Weimer
2020-11-17 18:17   ` Peter Collingbourne
2020-11-17 18:17     ` Peter Collingbourne
2020-11-17 18:39     ` Szabolcs Nagy
2020-11-17 18:39       ` Szabolcs Nagy
2020-11-18 12:33       ` Catalin Marinas
2020-11-18 12:33         ` Catalin Marinas
2020-11-18 13:31         ` Szabolcs Nagy
2020-11-18 13:31           ` Szabolcs Nagy
2020-11-18 13:37           ` Catalin Marinas
2020-11-18 13:37             ` Catalin Marinas
2020-11-18 17:19   ` Dave Martin
2020-11-18 17:19     ` Dave Martin
2020-11-18 17:31     ` Florian Weimer
2020-11-18 17:31       ` Florian Weimer
2020-11-18 18:18       ` Dave Martin
2020-11-18 18:18         ` Dave Martin
2020-11-18 12:25 ` Catalin Marinas
2020-11-18 12:25   ` Catalin Marinas
2020-11-19  5:20   ` Peter Collingbourne
2020-11-19  5:20     ` Peter Collingbourne
2020-11-18 17:55 ` Dave Martin
2020-11-18 17:55   ` Dave Martin
2020-11-18 19:05   ` Peter Collingbourne
2020-11-18 19:05     ` Peter Collingbourne

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.