From: "Alex Bennée" <alex.bennee@linaro.org>
To: Dave Martin <Dave.Martin@arm.com>
Cc: linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Szabolcs Nagy <szabolcs.nagy@arm.com>,
	Okamoto Takayuki <tokamoto@jp.fujitsu.com>,
	kvmarm@lists.cs.columbia.edu, libc-alpha@sourceware.org,
	linux-arch@vger.kernel.org
Subject: Re: [PATCH v5 15/30] arm64/sve: Signal handling support
Date: Tue, 07 Nov 2017 13:22:33 +0000	[thread overview]
Message-ID: <87vaimql46.fsf@linaro.org> (raw)
In-Reply-To: <1509465082-30427-16-git-send-email-Dave.Martin@arm.com>


Dave Martin <Dave.Martin@arm.com> writes:

> This patch implements support for saving and restoring the SVE
> registers around signals.
>
> A fixed-size header struct sve_context is always included in the
> signal frame encoding the thread's vector length at the time of
> signal delivery, optionally followed by a variable-layout structure
> encoding the SVE registers.
>
> Because of the need to preserve backwards compatibility, the FPSIMD
> view of the SVE registers is always dumped as a struct
> fpsimd_context in the usual way, in addition to any sve_context.
>
> The SVE vector registers are dumped in full, including bits 127:0
> of each register which alias the corresponding FPSIMD vector
> registers in the hardware.  To avoid any ambiguity about which
> alias to restore during sigreturn, the kernel always restores bits
> 127:0 of each SVE vector register from the fpsimd_context in the
> signal frame (which must be present): userspace needs to take this
> into account if it wants to modify the SVE vector register contents
> on return from a signal.
>
> FPSR and FPCR, which are used by both FPSIMD and SVE, are not
> included in sve_context because they are always present in
> fpsimd_context anyway.
>
> For signal delivery, a new helper
> fpsimd_signal_preserve_current_state() is added to update _both_
> the FPSIMD and SVE views in the task struct, to make it easier to
> populate this information into the signal frame.  Because of the
> redundancy between the two views of the state, only one is updated
> otherwise.
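
[The "two views" described above can be pictured with a toy model: the
low 16 bytes of each packed Z register in sve_state are exactly the
corresponding V register. Illustration only — names and layout are
simplified from the kernel's sve_to_fpsimd() in the diff, and the real
packed buffer also holds the P registers and FFR after the Z block:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NUM_VREGS	32
#define VREG_BYTES	16	/* an FPSIMD V register is 128 bits */

/* Toy model of the conversion: sve_state packs each Z register as
 * vq * 16 bytes; the FPSIMD view holds 32 x 128-bit V registers. */
static void sve_to_fpsimd_view(const uint8_t *sve_state, unsigned int vq,
			       uint8_t vregs[NUM_VREGS][VREG_BYTES])
{
	unsigned int i;

	/* Bits 127:0 of Zn alias Vn in hardware, so only the first
	 * 16 bytes of each packed Z register reach the FPSIMD view. */
	for (i = 0; i < NUM_VREGS; i++)
		memcpy(vregs[i], sve_state + (size_t)i * vq * VREG_BYTES,
		       VREG_BYTES);
}
```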
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Alex Bennée <alex.bennee@linaro.org>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>

After adding SVE support to RISU I was able to confirm consistent
signal delivery with a decent context over multiple runs; after
hacking 'echo 32 > /proc/sys/abi/sve_default_vector_length' to change
the default vector length, the tests failed as expected.

Tested-by: Alex Bennée <alex.bennee@linaro.org>

>
> ---
>
> **Dropped** Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> (Non-trivial changes.)
>
> Changes since v4
> ----------------
>
> Requested by Will Deacon:
>
>  * Fix inconsistent return semantics in restore_sve_fpsimd_context().
>
>    Currently a nonzero return value from __copy_from_user() is passed
>    back as-is to the caller of restore_sve_fpsimd_context(), rather than
>    translating it to an error code as is done elsewhere.
>
>    Callers of restore_sve_fpsimd_context() only care whether the return
>    value is 0 or not, so this isn't a big deal, but it is more
>    consistent to return proper error codes on failure in case we grow a
>    use for them later.
>
>    This patch returns -EFAULT in the __copy_from_user() error cases
>    that weren't explicitly handled.
>
> Miscellaneous:
>
>  * Change inconsistent copy_to_user() calls to __copy_to_user() in
>    preserve_sve_context().
>
>    There are already __put_user_error() calls here.
>
>    The whole extended signal frame is already checked for
>    access_ok(VERIFY_WRITE) in get_sigframe().
> ---
>  arch/arm64/include/asm/fpsimd.h |   1 +
>  arch/arm64/kernel/fpsimd.c      |  55 ++++++++++---
>  arch/arm64/kernel/signal.c      | 167 ++++++++++++++++++++++++++++++++++++++--
>  arch/arm64/kernel/signal32.c    |   2 +-
>  4 files changed, 206 insertions(+), 19 deletions(-)
>
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index 5655fe1..9bbd74c 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -63,6 +63,7 @@ extern void fpsimd_load_state(struct fpsimd_state *state);
>  extern void fpsimd_thread_switch(struct task_struct *next);
>  extern void fpsimd_flush_thread(void);
>
> +extern void fpsimd_signal_preserve_current_state(void);
>  extern void fpsimd_preserve_current_state(void);
>  extern void fpsimd_restore_current_state(void);
>  extern void fpsimd_update_current_state(struct fpsimd_state *state);
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index c91be8e..e0b5ef5 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -299,6 +299,32 @@ static void fpsimd_to_sve(struct task_struct *task)
>  		       sizeof(fst->vregs[i]));
>  }
>
> +/*
> + * Transfer the SVE state in task->thread.sve_state to
> + * task->thread.fpsimd_state.
> + *
> + * Task can be a non-runnable task, or current.  In the latter case,
> + * softirqs (and preemption) must be disabled.
> + * task->thread.sve_state must point to at least sve_state_size(task)
> + * bytes of allocated kernel memory.
> + * task->thread.sve_state must be up to date before calling this function.
> + */
> +static void sve_to_fpsimd(struct task_struct *task)
> +{
> +	unsigned int vq;
> +	void const *sst = task->thread.sve_state;
> +	struct fpsimd_state *fst = &task->thread.fpsimd_state;
> +	unsigned int i;
> +
> +	if (!system_supports_sve())
> +		return;
> +
> +	vq = sve_vq_from_vl(task->thread.sve_vl);
> +	for (i = 0; i < 32; ++i)
> +		memcpy(&fst->vregs[i], ZREG(sst, vq, i),
> +		       sizeof(fst->vregs[i]));
> +}
> +
>  #ifdef CONFIG_ARM64_SVE
>
>  /*
> @@ -500,9 +526,6 @@ void fpsimd_flush_thread(void)
>  /*
>   * Save the userland FPSIMD state of 'current' to memory, but only if the state
>   * currently held in the registers does in fact belong to 'current'
> - *
> - * Currently, SVE tasks can't exist, so just WARN in that case.
> - * Subsequent patches will add full SVE support here.
>   */
>  void fpsimd_preserve_current_state(void)
>  {
> @@ -510,16 +533,23 @@ void fpsimd_preserve_current_state(void)
>  		return;
>
>  	local_bh_disable();
> -
> -	if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
> -		fpsimd_save_state(&current->thread.fpsimd_state);
> -
> -	WARN_ON_ONCE(test_and_clear_thread_flag(TIF_SVE));
> -
> +	task_fpsimd_save();
>  	local_bh_enable();
>  }
>
>  /*
> + * Like fpsimd_preserve_current_state(), but ensure that
> + * current->thread.fpsimd_state is updated so that it can be copied to
> + * the signal frame.
> + */
> +void fpsimd_signal_preserve_current_state(void)
> +{
> +	fpsimd_preserve_current_state();
> +	if (system_supports_sve() && test_thread_flag(TIF_SVE))
> +		sve_to_fpsimd(current);
> +}
> +
> +/*
>   * Load the userland FPSIMD state of 'current' from memory, but only if the
>   * FPSIMD state already held in the registers is /not/ the most recent FPSIMD
>   * state of 'current'
> @@ -554,7 +584,12 @@ void fpsimd_update_current_state(struct fpsimd_state *state)
>
>  	local_bh_disable();
>
> -	fpsimd_load_state(state);
> +	if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
> +		current->thread.fpsimd_state = *state;
> +		fpsimd_to_sve(current);
> +	}
> +	task_fpsimd_load();
> +
>  	if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
>  		struct fpsimd_state *st = &current->thread.fpsimd_state;
>
> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
> index 4716729..64abf88 100644
> --- a/arch/arm64/kernel/signal.c
> +++ b/arch/arm64/kernel/signal.c
> @@ -63,6 +63,7 @@ struct rt_sigframe_user_layout {
>
>  	unsigned long fpsimd_offset;
>  	unsigned long esr_offset;
> +	unsigned long sve_offset;
>  	unsigned long extra_offset;
>  	unsigned long end_offset;
>  };
> @@ -179,9 +180,6 @@ static int preserve_fpsimd_context(struct fpsimd_context __user *ctx)
>  	struct fpsimd_state *fpsimd = &current->thread.fpsimd_state;
>  	int err;
>
> -	/* dump the hardware registers to the fpsimd_state structure */
> -	fpsimd_preserve_current_state();
> -
>  	/* copy the FP and status/control registers */
>  	err = __copy_to_user(ctx->vregs, fpsimd->vregs, sizeof(fpsimd->vregs));
>  	__put_user_error(fpsimd->fpsr, &ctx->fpsr, err);
> @@ -214,6 +212,8 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
>  	__get_user_error(fpsimd.fpsr, &ctx->fpsr, err);
>  	__get_user_error(fpsimd.fpcr, &ctx->fpcr, err);
>
> +	clear_thread_flag(TIF_SVE);
> +
>  	/* load the hardware registers from the fpsimd_state structure */
>  	if (!err)
>  		fpsimd_update_current_state(&fpsimd);
> @@ -221,10 +221,118 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
>  	return err ? -EFAULT : 0;
>  }
>
> +
>  struct user_ctxs {
>  	struct fpsimd_context __user *fpsimd;
> +	struct sve_context __user *sve;
>  };
>
> +#ifdef CONFIG_ARM64_SVE
> +
> +static int preserve_sve_context(struct sve_context __user *ctx)
> +{
> +	int err = 0;
> +	u16 reserved[ARRAY_SIZE(ctx->__reserved)];
> +	unsigned int vl = current->thread.sve_vl;
> +	unsigned int vq = 0;
> +
> +	if (test_thread_flag(TIF_SVE))
> +		vq = sve_vq_from_vl(vl);
> +
> +	memset(reserved, 0, sizeof(reserved));
> +
> +	__put_user_error(SVE_MAGIC, &ctx->head.magic, err);
> +	__put_user_error(round_up(SVE_SIG_CONTEXT_SIZE(vq), 16),
> +			 &ctx->head.size, err);
> +	__put_user_error(vl, &ctx->vl, err);
> +	BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved));
> +	err |= __copy_to_user(&ctx->__reserved, reserved, sizeof(reserved));
> +
> +	if (vq) {
> +		/*
> +		 * This assumes that the SVE state has already been saved to
> +		 * the task struct by calling preserve_fpsimd_context().
> +		 */
> +		err |= __copy_to_user((char __user *)ctx + SVE_SIG_REGS_OFFSET,
> +				      current->thread.sve_state,
> +				      SVE_SIG_REGS_SIZE(vq));
> +	}
> +
> +	return err ? -EFAULT : 0;
> +}
> +
> +static int restore_sve_fpsimd_context(struct user_ctxs *user)
> +{
> +	int err;
> +	unsigned int vq;
> +	struct fpsimd_state fpsimd;
> +	struct sve_context sve;
> +
> +	if (__copy_from_user(&sve, user->sve, sizeof(sve)))
> +		return -EFAULT;
> +
> +	if (sve.vl != current->thread.sve_vl)
> +		return -EINVAL;
> +
> +	if (sve.head.size <= sizeof(*user->sve)) {
> +		clear_thread_flag(TIF_SVE);
> +		goto fpsimd_only;
> +	}
> +
> +	vq = sve_vq_from_vl(sve.vl);
> +
> +	if (sve.head.size < SVE_SIG_CONTEXT_SIZE(vq))
> +		return -EINVAL;
> +
> +	/*
> +	 * Careful: we are about to __copy_from_user() directly into
> +	 * thread.sve_state with preemption enabled, so protection is
> +	 * needed to prevent a racing context switch from writing stale
> +	 * registers back over the new data.
> +	 */
> +
> +	fpsimd_flush_task_state(current);
> +	barrier();
> +	/* From now, fpsimd_thread_switch() won't clear TIF_FOREIGN_FPSTATE */
> +
> +	set_thread_flag(TIF_FOREIGN_FPSTATE);
> +	barrier();
> +	/* From now, fpsimd_thread_switch() won't touch thread.sve_state */
> +
> +	sve_alloc(current);
> +	err = __copy_from_user(current->thread.sve_state,
> +			       (char __user const *)user->sve +
> +					SVE_SIG_REGS_OFFSET,
> +			       SVE_SIG_REGS_SIZE(vq));
> +	if (err)
> +		return -EFAULT;
> +
> +	set_thread_flag(TIF_SVE);
> +
> +fpsimd_only:
> +	/* copy the FP and status/control registers */
> +	/* restore_sigframe() already checked that user->fpsimd != NULL. */
> +	err = __copy_from_user(fpsimd.vregs, user->fpsimd->vregs,
> +			       sizeof(fpsimd.vregs));
> +	__get_user_error(fpsimd.fpsr, &user->fpsimd->fpsr, err);
> +	__get_user_error(fpsimd.fpcr, &user->fpsimd->fpcr, err);
> +
> +	/* load the hardware registers from the fpsimd_state structure */
> +	if (!err)
> +		fpsimd_update_current_state(&fpsimd);
> +
> +	return err ? -EFAULT : 0;
> +}
> +
> +#else /* ! CONFIG_ARM64_SVE */
> +
> +/* Turn any non-optimised out attempts to use these into a link error: */
> +extern int preserve_sve_context(void __user *ctx);
> +extern int restore_sve_fpsimd_context(struct user_ctxs *user);
> +
> +#endif /* ! CONFIG_ARM64_SVE */
> +
> +
>  static int parse_user_sigframe(struct user_ctxs *user,
>  			       struct rt_sigframe __user *sf)
>  {
> @@ -237,6 +345,7 @@ static int parse_user_sigframe(struct user_ctxs *user,
>  	char const __user *const sfp = (char const __user *)sf;
>
>  	user->fpsimd = NULL;
> +	user->sve = NULL;
>
>  	if (!IS_ALIGNED((unsigned long)base, 16))
>  		goto invalid;
> @@ -287,6 +396,19 @@ static int parse_user_sigframe(struct user_ctxs *user,
>  			/* ignore */
>  			break;
>
> +		case SVE_MAGIC:
> +			if (!system_supports_sve())
> +				goto invalid;
> +
> +			if (user->sve)
> +				goto invalid;
> +
> +			if (size < sizeof(*user->sve))
> +				goto invalid;
> +
> +			user->sve = (struct sve_context __user *)head;
> +			break;
> +
>  		case EXTRA_MAGIC:
>  			if (have_extra_context)
>  				goto invalid;
> @@ -363,9 +485,6 @@ static int parse_user_sigframe(struct user_ctxs *user,
>  	}
>
>  done:
> -	if (!user->fpsimd)
> -		goto invalid;
> -
>  	return 0;
>
>  invalid:
> @@ -399,8 +518,19 @@ static int restore_sigframe(struct pt_regs *regs,
>  	if (err == 0)
>  		err = parse_user_sigframe(&user, sf);
>
> -	if (err == 0)
> -		err = restore_fpsimd_context(user.fpsimd);
> +	if (err == 0) {
> +		if (!user.fpsimd)
> +			return -EINVAL;
> +
> +		if (user.sve) {
> +			if (!system_supports_sve())
> +				return -EINVAL;
> +
> +			err = restore_sve_fpsimd_context(&user);
> +		} else {
> +			err = restore_fpsimd_context(user.fpsimd);
> +		}
> +	}
>
>  	return err;
>  }
> @@ -459,6 +589,18 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user)
>  			return err;
>  	}
>
> +	if (system_supports_sve()) {
> +		unsigned int vq = 0;
> +
> +		if (test_thread_flag(TIF_SVE))
> +			vq = sve_vq_from_vl(current->thread.sve_vl);
> +
> +		err = sigframe_alloc(user, &user->sve_offset,
> +				     SVE_SIG_CONTEXT_SIZE(vq));
> +		if (err)
> +			return err;
> +	}
> +
>  	return sigframe_alloc_end(user);
>  }
>
> @@ -500,6 +642,13 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user,
>  		__put_user_error(current->thread.fault_code, &esr_ctx->esr, err);
>  	}
>
> +	/* Scalable Vector Extension state, if present */
> +	if (system_supports_sve() && err == 0 && user->sve_offset) {
> +		struct sve_context __user *sve_ctx =
> +			apply_user_offset(user, user->sve_offset);
> +		err |= preserve_sve_context(sve_ctx);
> +	}
> +
>  	if (err == 0 && user->extra_offset) {
>  		char __user *sfp = (char __user *)user->sigframe;
>  		char __user *userp =
> @@ -599,6 +748,8 @@ static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
>  	struct rt_sigframe __user *frame;
>  	int err = 0;
>
> +	fpsimd_signal_preserve_current_state();
> +
>  	if (get_sigframe(&user, ksig, regs))
>  		return 1;
>
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index e09bf5d..22711ee 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -239,7 +239,7 @@ static int compat_preserve_vfp_context(struct compat_vfp_sigframe __user *frame)
>  	 * Note that this also saves V16-31, which aren't visible
>  	 * in AArch32.
>  	 */
> -	fpsimd_preserve_current_state();
> +	fpsimd_signal_preserve_current_state();
>
>  	/* Place structure header on the stack */
>  	__put_user_error(magic, &frame->magic, err);

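[For anyone auditing the sigframe_alloc() numbers above: vl is the
vector length in bytes, vq = vl / 16 counts 128-bit quadwords, and the
register dump scales linearly with vq. A simplified re-derivation of
the arithmetic — the real SVE_SIG_* macros in the uapi header also
include alignment padding that I ignore here, so exact sizes differ
slightly:]

```c
#include <assert.h>

#define SVE_VQ_BYTES	16	/* one vector-length quantum = 128 bits */

/* vl is the vector length in bytes; vq counts 128-bit quadwords. */
static unsigned int sve_vq_from_vl(unsigned int vl)
{
	return vl / SVE_VQ_BYTES;
}

/* Approximate payload of the SVE register dump in the signal frame:
 * 32 Z registers of vq * 16 bytes, plus 16 P registers and FFR of
 * vq * 2 bytes each (header and alignment padding ignored). */
static unsigned int sve_sig_regs_size(unsigned int vq)
{
	return 32 * vq * SVE_VQ_BYTES + 17 * vq * 2;
}
```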
WARNING: multiple messages have this Message-ID (diff)
From: "Alex Bennée" <alex.bennee@linaro.org>
To: Dave Martin <Dave.Martin@arm.com>
Cc: linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Szabolcs Nagy <szabolcs.nagy@arm.com>,
	Okamoto Takayuki <tokamoto@jp.fujitsu.com>,
	kvmarm@lists.cs.columbia.edu, libc-alpha@sourceware.org,
	linux-arch@vger.kernel.org
Subject: Re: [PATCH v5 15/30] arm64/sve: Signal handling support
Date: Tue, 07 Nov 2017 13:22:33 +0000	[thread overview]
Message-ID: <87vaimql46.fsf@linaro.org> (raw)
Message-ID: <20171107132233.MCC4fwRYGOxzQa5W3KIx4N_xYgHpg7O40FVn6COkJwQ@z> (raw)
In-Reply-To: <1509465082-30427-16-git-send-email-Dave.Martin@arm.com>


Dave Martin <Dave.Martin@arm.com> writes:

> This patch implements support for saving and restoring the SVE
> registers around signals.
>
> A fixed-size header struct sve_context is always included in the
> signal frame encoding the thread's vector length at the time of
> signal delivery, optionally followed by a variable-layout structure
> encoding the SVE registers.
>
> Because of the need to preserve backwards compatibility, the FPSIMD
> view of the SVE registers is always dumped as a struct
> fpsimd_context in the usual way, in addition to any sve_context.
>
> The SVE vector registers are dumped in full, including bits 127:0
> of each register which alias the corresponding FPSIMD vector
> registers in the hardware.  To avoid any ambiguity about which
> alias to restore during sigreturn, the kernel always restores bits
> 127:0 of each SVE vector register from the fpsimd_context in the
> signal frame (which must be present): userspace needs to take this
> into account if it wants to modify the SVE vector register contents
> on return from a signal.
>
> FPSR and FPCR, which are used by both FPSIMD and SVE, are not
> included in sve_context because they are always present in
> fpsimd_context anyway.
>
> For signal delivery, a new helper
> fpsimd_signal_preserve_current_state() is added to update _both_
> the FPSIMD and SVE views in the task struct, to make it easier to
> populate this information into the signal frame.  Because of the
> redundancy between the two views of the state, only one is updated
> otherwise.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Alex Bennée <alex.bennee@linaro.org>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>

After adding SVE support for RISU I was able to confirm consistent
signal delivery with a decent context over multiple runs and after
hacking echo "32" > /proc/sys/abi/sve_default_vector_length to change
the default length the tests would fail (as expected).

Tested-by: Alex Bennée <alex.bennee@linaro.org>

>
> ---
>
> **Dropped** Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> (Non-trivial changes.)
>
> Changes since v4
> ----------------
>
> Requested by Will Deacon:
>
>  * Fix inconsistent return semantics in restore_sve_fpsimd_context().
>
>    Currently a nonzero return value from __copy_from_user() is passed
>    back as-is to the caller or restore_sve_fpsimd_context(), rather than
>    translating it do an error code as is done elsewhere.
>
>    Callers of restore_sve_fpsimd_context() only care whether the return
>    value is 0 or not, so this isn't a big deal, but it is more
>    consistent to return proper error codes on failure in case we grow a
>    use for them later.
>
>    This patch returns -EFAULT in the __copy_from_user() error cases
>    that weren't explicitly handled.
>
> Miscellaneous:
>
>  * Change inconsistent copy_to_user() calls to __copy_to_user() in
>    preserve_sve_context().
>
>    There are already __put_user_error() calls here.
>
>    The whole extended signal frame is already checked for
>    access_ok(VERIFY_WRITE) in get_sigframe().
> ---
>  arch/arm64/include/asm/fpsimd.h |   1 +
>  arch/arm64/kernel/fpsimd.c      |  55 ++++++++++---
>  arch/arm64/kernel/signal.c      | 167 ++++++++++++++++++++++++++++++++++++++--
>  arch/arm64/kernel/signal32.c    |   2 +-
>  4 files changed, 206 insertions(+), 19 deletions(-)
>
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index 5655fe1..9bbd74c 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -63,6 +63,7 @@ extern void fpsimd_load_state(struct fpsimd_state *state);
>  extern void fpsimd_thread_switch(struct task_struct *next);
>  extern void fpsimd_flush_thread(void);
>
> +extern void fpsimd_signal_preserve_current_state(void);
>  extern void fpsimd_preserve_current_state(void);
>  extern void fpsimd_restore_current_state(void);
>  extern void fpsimd_update_current_state(struct fpsimd_state *state);
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index c91be8e..e0b5ef5 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -299,6 +299,32 @@ static void fpsimd_to_sve(struct task_struct *task)
>  		       sizeof(fst->vregs[i]));
>  }
>
> +/*
> + * Transfer the SVE state in task->thread.sve_state to
> + * task->thread.fpsimd_state.
> + *
> + * Task can be a non-runnable task, or current.  In the latter case,
> + * softirqs (and preemption) must be disabled.
> + * task->thread.sve_state must point to at least sve_state_size(task)
> + * bytes of allocated kernel memory.
> + * task->thread.sve_state must be up to date before calling this function.
> + */
> +static void sve_to_fpsimd(struct task_struct *task)
> +{
> +	unsigned int vq;
> +	void const *sst = task->thread.sve_state;
> +	struct fpsimd_state *fst = &task->thread.fpsimd_state;
> +	unsigned int i;
> +
> +	if (!system_supports_sve())
> +		return;
> +
> +	vq = sve_vq_from_vl(task->thread.sve_vl);
> +	for (i = 0; i < 32; ++i)
> +		memcpy(&fst->vregs[i], ZREG(sst, vq, i),
> +		       sizeof(fst->vregs[i]));
> +}
> +
>  #ifdef CONFIG_ARM64_SVE
>
>  /*
> @@ -500,9 +526,6 @@ void fpsimd_flush_thread(void)
>  /*
>   * Save the userland FPSIMD state of 'current' to memory, but only if the state
>   * currently held in the registers does in fact belong to 'current'
> - *
> - * Currently, SVE tasks can't exist, so just WARN in that case.
> - * Subsequent patches will add full SVE support here.
>   */
>  void fpsimd_preserve_current_state(void)
>  {
> @@ -510,16 +533,23 @@ void fpsimd_preserve_current_state(void)
>  		return;
>
>  	local_bh_disable();
> -
> -	if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
> -		fpsimd_save_state(&current->thread.fpsimd_state);
> -
> -	WARN_ON_ONCE(test_and_clear_thread_flag(TIF_SVE));
> -
> +	task_fpsimd_save();
>  	local_bh_enable();
>  }
>
>  /*
> + * Like fpsimd_preserve_current_state(), but ensure that
> + * current->thread.fpsimd_state is updated so that it can be copied to
> + * the signal frame.
> + */
> +void fpsimd_signal_preserve_current_state(void)
> +{
> +	fpsimd_preserve_current_state();
> +	if (system_supports_sve() && test_thread_flag(TIF_SVE))
> +		sve_to_fpsimd(current);
> +}
> +
> +/*
>   * Load the userland FPSIMD state of 'current' from memory, but only if the
>   * FPSIMD state already held in the registers is /not/ the most recent FPSIMD
>   * state of 'current'
> @@ -554,7 +584,12 @@ void fpsimd_update_current_state(struct fpsimd_state *state)
>
>  	local_bh_disable();
>
> -	fpsimd_load_state(state);
> +	if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
> +		current->thread.fpsimd_state = *state;
> +		fpsimd_to_sve(current);
> +	}
> +	task_fpsimd_load();
> +
>  	if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
>  		struct fpsimd_state *st = &current->thread.fpsimd_state;
>
> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
> index 4716729..64abf88 100644
> --- a/arch/arm64/kernel/signal.c
> +++ b/arch/arm64/kernel/signal.c
> @@ -63,6 +63,7 @@ struct rt_sigframe_user_layout {
>
>  	unsigned long fpsimd_offset;
>  	unsigned long esr_offset;
> +	unsigned long sve_offset;
>  	unsigned long extra_offset;
>  	unsigned long end_offset;
>  };
> @@ -179,9 +180,6 @@ static int preserve_fpsimd_context(struct fpsimd_context __user *ctx)
>  	struct fpsimd_state *fpsimd = &current->thread.fpsimd_state;
>  	int err;
>
> -	/* dump the hardware registers to the fpsimd_state structure */
> -	fpsimd_preserve_current_state();
> -
>  	/* copy the FP and status/control registers */
>  	err = __copy_to_user(ctx->vregs, fpsimd->vregs, sizeof(fpsimd->vregs));
>  	__put_user_error(fpsimd->fpsr, &ctx->fpsr, err);
> @@ -214,6 +212,8 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
>  	__get_user_error(fpsimd.fpsr, &ctx->fpsr, err);
>  	__get_user_error(fpsimd.fpcr, &ctx->fpcr, err);
>
> +	clear_thread_flag(TIF_SVE);
> +
>  	/* load the hardware registers from the fpsimd_state structure */
>  	if (!err)
>  		fpsimd_update_current_state(&fpsimd);
> @@ -221,10 +221,118 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
>  	return err ? -EFAULT : 0;
>  }
>
> +
>  struct user_ctxs {
>  	struct fpsimd_context __user *fpsimd;
> +	struct sve_context __user *sve;
>  };
>
> +#ifdef CONFIG_ARM64_SVE
> +
> +static int preserve_sve_context(struct sve_context __user *ctx)
> +{
> +	int err = 0;
> +	u16 reserved[ARRAY_SIZE(ctx->__reserved)];
> +	unsigned int vl = current->thread.sve_vl;
> +	unsigned int vq = 0;
> +
> +	if (test_thread_flag(TIF_SVE))
> +		vq = sve_vq_from_vl(vl);
> +
> +	memset(reserved, 0, sizeof(reserved));
> +
> +	__put_user_error(SVE_MAGIC, &ctx->head.magic, err);
> +	__put_user_error(round_up(SVE_SIG_CONTEXT_SIZE(vq), 16),
> +			 &ctx->head.size, err);
> +	__put_user_error(vl, &ctx->vl, err);
> +	BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved));
> +	err |= __copy_to_user(&ctx->__reserved, reserved, sizeof(reserved));
> +
> +	if (vq) {
> +		/*
> +		 * This assumes that the SVE state has already been saved to
> +		 * the task struct by calling preserve_fpsimd_context().
> +		 */
> +		err |= __copy_to_user((char __user *)ctx + SVE_SIG_REGS_OFFSET,
> +				      current->thread.sve_state,
> +				      SVE_SIG_REGS_SIZE(vq));
> +	}
> +
> +	return err ? -EFAULT : 0;
> +}
> +
> +static int restore_sve_fpsimd_context(struct user_ctxs *user)
> +{
> +	int err;
> +	unsigned int vq;
> +	struct fpsimd_state fpsimd;
> +	struct sve_context sve;
> +
> +	if (__copy_from_user(&sve, user->sve, sizeof(sve)))
> +		return -EFAULT;
> +
> +	if (sve.vl != current->thread.sve_vl)
> +		return -EINVAL;
> +
> +	if (sve.head.size <= sizeof(*user->sve)) {
> +		clear_thread_flag(TIF_SVE);
> +		goto fpsimd_only;
> +	}
> +
> +	vq = sve_vq_from_vl(sve.vl);
> +
> +	if (sve.head.size < SVE_SIG_CONTEXT_SIZE(vq))
> +		return -EINVAL;
> +
> +	/*
> +	 * Careful: we are about __copy_from_user() directly into
> +	 * thread.sve_state with preemption enabled, so protection is
> +	 * needed to prevent a racing context switch from writing stale
> +	 * registers back over the new data.
> +	 */
> +
> +	fpsimd_flush_task_state(current);
> +	barrier();
> +	/* From now, fpsimd_thread_switch() won't clear TIF_FOREIGN_FPSTATE */
> +
> +	set_thread_flag(TIF_FOREIGN_FPSTATE);
> +	barrier();
> +	/* From now, fpsimd_thread_switch() won't touch thread.sve_state */
> +
> +	sve_alloc(current);
> +	err = __copy_from_user(current->thread.sve_state,
> +			       (char __user const *)user->sve +
> +					SVE_SIG_REGS_OFFSET,
> +			       SVE_SIG_REGS_SIZE(vq));
> +	if (err)
> +		return -EFAULT;
> +
> +	set_thread_flag(TIF_SVE);
> +
> +fpsimd_only:
> +	/* copy the FP and status/control registers */
> +	/* restore_sigframe() already checked that user->fpsimd != NULL. */
> +	err = __copy_from_user(fpsimd.vregs, user->fpsimd->vregs,
> +			       sizeof(fpsimd.vregs));
> +	__get_user_error(fpsimd.fpsr, &user->fpsimd->fpsr, err);
> +	__get_user_error(fpsimd.fpcr, &user->fpsimd->fpcr, err);
> +
> +	/* load the hardware registers from the fpsimd_state structure */
> +	if (!err)
> +		fpsimd_update_current_state(&fpsimd);
> +
> +	return err ? -EFAULT : 0;
> +}
> +
> +#else /* ! CONFIG_ARM64_SVE */
> +
> +/* Turn any non-optimised out attempts to use these into a link error: */
> +extern int preserve_sve_context(void __user *ctx);
> +extern int restore_sve_fpsimd_context(struct user_ctxs *user);
> +
> +#endif /* ! CONFIG_ARM64_SVE */
> +
> +
>  static int parse_user_sigframe(struct user_ctxs *user,
>  			       struct rt_sigframe __user *sf)
>  {
> @@ -237,6 +345,7 @@ static int parse_user_sigframe(struct user_ctxs *user,
>  	char const __user *const sfp = (char const __user *)sf;
>
>  	user->fpsimd = NULL;
> +	user->sve = NULL;
>
>  	if (!IS_ALIGNED((unsigned long)base, 16))
>  		goto invalid;
> @@ -287,6 +396,19 @@ static int parse_user_sigframe(struct user_ctxs *user,
>  			/* ignore */
>  			break;
>
> +		case SVE_MAGIC:
> +			if (!system_supports_sve())
> +				goto invalid;
> +
> +			if (user->sve)
> +				goto invalid;
> +
> +			if (size < sizeof(*user->sve))
> +				goto invalid;
> +
> +			user->sve = (struct sve_context __user *)head;
> +			break;
> +
>  		case EXTRA_MAGIC:
>  			if (have_extra_context)
>  				goto invalid;
> @@ -363,9 +485,6 @@ static int parse_user_sigframe(struct user_ctxs *user,
>  	}
>
>  done:
> -	if (!user->fpsimd)
> -		goto invalid;
> -
>  	return 0;
>
>  invalid:
> @@ -399,8 +518,19 @@ static int restore_sigframe(struct pt_regs *regs,
>  	if (err == 0)
>  		err = parse_user_sigframe(&user, sf);
>
> -	if (err == 0)
> -		err = restore_fpsimd_context(user.fpsimd);
> +	if (err == 0) {
> +		if (!user.fpsimd)
> +			return -EINVAL;
> +
> +		if (user.sve) {
> +			if (!system_supports_sve())
> +				return -EINVAL;
> +
> +			err = restore_sve_fpsimd_context(&user);
> +		} else {
> +			err = restore_fpsimd_context(user.fpsimd);
> +		}
> +	}
>
>  	return err;
>  }
> @@ -459,6 +589,18 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user)
>  			return err;
>  	}
>
> +	if (system_supports_sve()) {
> +		unsigned int vq = 0;
> +
> +		if (test_thread_flag(TIF_SVE))
> +			vq = sve_vq_from_vl(current->thread.sve_vl);
> +
> +		err = sigframe_alloc(user, &user->sve_offset,
> +				     SVE_SIG_CONTEXT_SIZE(vq));
> +		if (err)
> +			return err;
> +	}
> +
>  	return sigframe_alloc_end(user);
>  }
>
> @@ -500,6 +642,13 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user,
>  		__put_user_error(current->thread.fault_code, &esr_ctx->esr, err);
>  	}
>
> +	/* Scalable Vector Extension state, if present */
> +	if (system_supports_sve() && err == 0 && user->sve_offset) {
> +		struct sve_context __user *sve_ctx =
> +			apply_user_offset(user, user->sve_offset);
> +		err |= preserve_sve_context(sve_ctx);
> +	}
> +
>  	if (err == 0 && user->extra_offset) {
>  		char __user *sfp = (char __user *)user->sigframe;
>  		char __user *userp =
> @@ -599,6 +748,8 @@ static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
>  	struct rt_sigframe __user *frame;
>  	int err = 0;
>
> +	fpsimd_signal_preserve_current_state();
> +
>  	if (get_sigframe(&user, ksig, regs))
>  		return 1;
>
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index e09bf5d..22711ee 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -239,7 +239,7 @@ static int compat_preserve_vfp_context(struct compat_vfp_sigframe __user *frame)
>  	 * Note that this also saves V16-31, which aren't visible
>  	 * in AArch32.
>  	 */
> -	fpsimd_preserve_current_state();
> +	fpsimd_signal_preserve_current_state();
>
>  	/* Place structure header on the stack */
>  	__put_user_error(magic, &frame->magic, err);


--
Alex Bennée

