From: Paolo Bonzini <pbonzini@redhat.com>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
linux-kernel@vger.kernel.org
Cc: x86@kernel.org, "Andy Lutomirski" <luto@kernel.org>,
"Radim Krčmář" <rkrcmar@redhat.com>,
kvm@vger.kernel.org, "Jason A. Donenfeld" <Jason@zx2c4.com>,
"Rik van Riel" <riel@surriel.com>
Subject: Re: [RFC PATCH 04/10] x86/fpu: eager switch PKRU state
Date: Wed, 12 Sep 2018 16:18:44 +0200 [thread overview]
Message-ID: <8e5b64e4-b3e6-f884-beb6-b7b69ab2d8c1@redhat.com> (raw)
In-Reply-To: <20180912133353.20595-5-bigeasy@linutronix.de>
On 12/09/2018 15:33, Sebastian Andrzej Siewior wrote:
> From: Rik van Riel <riel@surriel.com>
>
> While most of a task's FPU state is only needed in user space,
> the protection keys need to be in place immediately after a
> context switch.
>
> The reason is that any accesses to userspace memory while running
> in kernel mode also need to abide by the memory permissions
> specified in the protection keys.
>
> The pkru info is put in its own cache line in the fpu struct because
> that cache line is accessed anyway at context switch time, and the
> location of the pkru info in the xsave buffer changes depending on
> what other FPU registers are in use if the CPU uses compressed xsave
> state (on by default).
>
> The initial state of pkru is zeroed out automatically by fpstate_init.
>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> [bigeasy: load PKRU state only if we also load FPU content]
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> arch/x86/include/asm/fpu/internal.h | 11 +++++++++--
> arch/x86/include/asm/fpu/types.h | 10 ++++++++++
> arch/x86/include/asm/pgtable.h | 6 +-----
> arch/x86/mm/pkeys.c | 14 ++++++++++++++
> 4 files changed, 34 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
> index 16c4077ffc945..57bd1576e033d 100644
> --- a/arch/x86/include/asm/fpu/internal.h
> +++ b/arch/x86/include/asm/fpu/internal.h
> @@ -573,8 +573,15 @@ static inline void switch_fpu_finish(struct fpu *new_fpu, int cpu)
> bool preload = static_cpu_has(X86_FEATURE_FPU) &&
> new_fpu->initialized;
>
> - if (preload)
> - __fpregs_load_activate(new_fpu, cpu);
> + if (!preload)
> + return;
> +
> + __fpregs_load_activate(new_fpu, cpu);
> + /* Protection keys need to be in place right at context switch time. */
> + if (boot_cpu_has(X86_FEATURE_OSPKE)) {
> + if (new_fpu->pkru != __read_pkru())
> + __write_pkru(new_fpu->pkru);
> + }
> }
>
> /*
> diff --git a/arch/x86/include/asm/fpu/types.h b/arch/x86/include/asm/fpu/types.h
> index 202c53918ecfa..6fa58d37938d2 100644
> --- a/arch/x86/include/asm/fpu/types.h
> +++ b/arch/x86/include/asm/fpu/types.h
> @@ -293,6 +293,16 @@ struct fpu {
> */
> unsigned int last_cpu;
>
> + /*
> + * Protection key bits. These also live inside fpu.state.xsave,
> + * but the location varies if the CPU uses the compressed format
> + * for XSAVE(OPT).
> + *
> + * The protection key needs to be switched out immediately at context
> + * switch time, so it is in place for things like copy_to_user.
> + */
> + unsigned int pkru;
> +
> /*
> * @initialized:
> *
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index 690c0307afed0..cc36f91011ad7 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -132,11 +132,7 @@ static inline u32 read_pkru(void)
> return 0;
> }
>
> -static inline void write_pkru(u32 pkru)
> -{
> - if (boot_cpu_has(X86_FEATURE_OSPKE))
> - __write_pkru(pkru);
> -}
> +extern void write_pkru(u32 pkru);
>
> static inline int pte_young(pte_t pte)
> {
> diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
> index 6e98e0a7c9231..c7a7b6bd64009 100644
> --- a/arch/x86/mm/pkeys.c
> +++ b/arch/x86/mm/pkeys.c
> @@ -18,6 +18,20 @@
>
> #include <asm/cpufeature.h> /* boot_cpu_has, ... */
> #include <asm/mmu_context.h> /* vma_pkey() */
> +#include <asm/fpu/internal.h>
> +
> +void write_pkru(u32 pkru)
> +{
> + if (!boot_cpu_has(X86_FEATURE_OSPKE))
> + return;
> +
> + current->thread.fpu.pkru = pkru;
> +
> + __fpregs_changes_begin();
> +	__fpregs_load_activate(&current->thread.fpu, smp_processor_id());
> + __write_pkru(pkru);
> + __fpregs_changes_end();
> +}
>
> int __execute_only_pkey(struct mm_struct *mm)
> {
>
I think you can go a step further and exclude PKRU state from
copy_kernel_to_fpregs altogether; just use RDPKRU/WRPKRU instead.  This
also means you don't need to call the __fpregs_* functions in write_pkru.

Thanks,

Paolo