From: Will Deacon <will@kernel.org>
To: Peter Collingbourne <pcc@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	Vincenzo Frascino <vincenzo.frascino@arm.com>,
	Evgenii Stepanov <eugenis@google.com>,
	Szabolcs Nagy <szabolcs.nagy@arm.com>,
	Tejas Belagod <Tejas.Belagod@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Subject: Re: [PATCH v10 3/5] arm64: move preemption disablement to prctl handlers
Date: Wed, 14 Jul 2021 12:43:38 +0100	[thread overview]
Message-ID: <20210714114337.GC31686@willie-the-truck> (raw)
In-Reply-To: <20210713234801.3858018-4-pcc@google.com>

On Tue, Jul 13, 2021 at 04:47:59PM -0700, Peter Collingbourne wrote:
> In the next patch, we will start reading sctlr_user from
> mte_update_sctlr_user and subsequently writing a new value based on the
> task's TCF setting and potentially the per-CPU TCF preference. This
> means that we need to be careful to disable preemption around any
> code sequences that read from sctlr_user and subsequently write to
> sctlr_user and/or SCTLR_EL1, so that we don't end up writing a stale
> value (based on the previous CPU's TCF preference) to either of them.
> 
> We currently have four such sequences, in the prctl handlers for
> PR_SET_TAGGED_ADDR_CTRL and PR_PAC_SET_ENABLED_KEYS, as well as in
> the task initialization code that resets the prctl settings. Change
> the prctl handlers to disable preemption in the handlers themselves
> rather than the functions that they call, and change the task
> initialization code to call the respective prctl handlers instead of
> setting sctlr_user directly.
> 
> As a result of this change, we no longer need the helper function
> set_task_sctlr_el1, nor does its behavior make sense any more, so
> remove it.
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/Ic0e8a0c00bb47d786c1e8011df0b7fe99bee4bb5
> ---
>  arch/arm64/include/asm/pointer_auth.h | 12 ++++++------
>  arch/arm64/include/asm/processor.h    |  2 +-
>  arch/arm64/kernel/mte.c               |  8 ++++----
>  arch/arm64/kernel/pointer_auth.c      | 10 ++++++----
>  arch/arm64/kernel/process.c           | 21 +++++++--------------
>  5 files changed, 24 insertions(+), 29 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index d50416be99be..592968f0bc22 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -10,6 +10,9 @@
>  #include <asm/memory.h>
>  #include <asm/sysreg.h>
>  
> +#define PR_PAC_ENABLED_KEYS_MASK                                               \
> +	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
> +
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  /*
>   * Each key is a 128-bit quantity which is split across a pair of 64-bit
> @@ -113,9 +116,9 @@ static __always_inline void ptrauth_enable(void)
>  									       \
>  		/* enable all keys */                                          \
>  		if (system_supports_address_auth())                            \
> -			set_task_sctlr_el1(current->thread.sctlr_user |        \
> -					   SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |   \
> -					   SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);   \
> +			ptrauth_set_enabled_keys(current,                      \
> +						 PR_PAC_ENABLED_KEYS_MASK,     \
> +						 PR_PAC_ENABLED_KEYS_MASK);    \
>  	} while (0)
>  
>  #define ptrauth_thread_switch_user(tsk)                                        \
> @@ -139,7 +142,4 @@ static __always_inline void ptrauth_enable(void)
>  #define ptrauth_thread_switch_kernel(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  
> -#define PR_PAC_ENABLED_KEYS_MASK                                               \
> -	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
> -
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 80ceb9cbdd60..ebb3b1aefed7 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -257,7 +257,7 @@ extern void release_thread(struct task_struct *);
>  
>  unsigned long get_wchan(struct task_struct *p);
>  
> -void set_task_sctlr_el1(u64 sctlr);
> +void update_sctlr_el1(u64 sctlr);
>  
>  /* Thread switching */
>  extern struct task_struct *cpu_switch_to(struct task_struct *prev,
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 53d89915029d..432d9b641e9c 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -222,9 +222,7 @@ void mte_thread_init_user(void)
>  	write_sysreg_s(0, SYS_TFSRE0_EL1);
>  	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
>  	/* disable tag checking and reset tag generation mask */
> -	current->thread.mte_ctrl = MTE_CTRL_GCR_USER_EXCL_MASK;
> -	mte_update_sctlr_user(current);
> -	set_task_sctlr_el1(current->thread.sctlr_user);
> +	set_mte_ctrl(current, 0);
>  }
>  
>  void mte_thread_switch(struct task_struct *next)
> @@ -281,8 +279,10 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
>  
>  	task->thread.mte_ctrl = mte_ctrl;
>  	if (task == current) {
> +		preempt_disable();
>  		mte_update_sctlr_user(task);
> -		set_task_sctlr_el1(task->thread.sctlr_user);
> +		update_sctlr_el1(task->thread.sctlr_user);
> +		preempt_enable();
>  	}
>  
>  	return 0;
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> index 60901ab0a7fe..2708b620b4ae 100644
> --- a/arch/arm64/kernel/pointer_auth.c
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -67,7 +67,7 @@ static u64 arg_to_enxx_mask(unsigned long arg)
>  int ptrauth_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
>  			     unsigned long enabled)
>  {
> -	u64 sctlr = tsk->thread.sctlr_user;
> +	u64 sctlr;
>  
>  	if (!system_supports_address_auth())
>  		return -EINVAL;
> @@ -78,12 +78,14 @@ int ptrauth_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
>  	if ((keys & ~PR_PAC_ENABLED_KEYS_MASK) || (enabled & ~keys))
>  		return -EINVAL;
>  
> +	preempt_disable();
> +	sctlr = tsk->thread.sctlr_user;
>  	sctlr &= ~arg_to_enxx_mask(keys);
>  	sctlr |= arg_to_enxx_mask(enabled);
> +	tsk->thread.sctlr_user = sctlr;
>  	if (tsk == current)
> -		set_task_sctlr_el1(sctlr);
> -	else
> -		tsk->thread.sctlr_user = sctlr;
> +		update_sctlr_el1(sctlr);
> +	preempt_enable();

This should work as long as tsk is stopped when tsk != current; otherwise I
think there are a bunch of existing races between
ptrauth_set_enabled_keys(), set_mte_ctrl() and ptrauth_get_enabled_keys().
That's (probably?) true for the ptrace case, which is the only scenario
where tsk != current afaict.

I can't help but feel we should add _something_ (as a separate patch) to
avoid tripping over this in future. Maybe a WARN_ON(!task_is_stopped(tsk))
if tsk != current? Failing that, we could handle the concurrency by adding a
helper to update sctlr_user, which takes a clear and a set mask and does a
cmpxchg_relaxed() under the hood? That might also eliminate the need for
some of these preempt_disable() sections.

Anyway, that's a pre-existing issue, so for this patch:

Acked-by: Will Deacon <will@kernel.org>

Will
