From: Ingo Molnar <mingo@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: Andy Lutomirski <luto@amacapital.net>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [PATCH 134/208] x86/fpu: Simplify FPU handling by embedding the fpstate in task_struct (again)
Date: Tue,  5 May 2015 19:50:26 +0200	[thread overview]
Message-ID: <1430848300-27877-56-git-send-email-mingo@kernel.org> (raw)
In-Reply-To: <1430848300-27877-1-git-send-email-mingo@kernel.org>

So 6 years ago we made the FPU fpstate dynamically allocated:

  aa283f49276e ("x86, fpu: lazy allocation of FPU area - v5")
  61c4628b5386 ("x86, fpu: split FPU state from task struct - v5")

In hindsight this was a mistake:

   - it complicated context allocation failure handling, such as:

		/* kthread execs. TODO: cleanup this horror. */
		if (WARN_ON(fpstate_alloc_init(fpu)))
			force_sig(SIGKILL, tsk);

   - it caused us to enable irqs in fpu__restore():

                local_irq_enable();
                /*
                 * does a slab alloc which can sleep
                 */
                if (fpstate_alloc_init(fpu)) {
                        /*
                         * ran out of memory!
                         */
                        do_group_exit(SIGKILL);
                        return;
                }
                local_irq_disable();

   - it (slightly) slowed down task creation/destruction by adding
     slab allocation/free patterns.

   - it made access to context contents (slightly) slower by adding
     one more pointer dereference (illustrated below).
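
For example, the cwd accessor (quoted from the internal.h hunk further
below) loses one pointer dereference:

	/* old: */  return tsk->thread.fpu.state->fxsave.cwd;
	/* new: */  return tsk->thread.fpu.state.fxsave.cwd;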

The motivation for the dynamic allocation was two-fold:

   - reduce memory consumption by non-FPU tasks

   - allocate and handle only the necessary amount of context for
     various XSAVE processors that have varying hardware frame
     sizes.

These days, with glibc using SSE memcpy by default and GCC optimizing
for SSE/AVX by default, the scope of FPU-using apps on an x86 system is
much larger than it was 6 years ago.

For example on a freshly installed Fedora 21 desktop system, with a
recent kernel, all non-kthread tasks have used the FPU shortly after
bootup.

Also, even modern embedded x86 CPUs try to support the latest vector
instruction set - so they too will often use the larger xstate frame
sizes.

So remove the dynamic allocation complication by embedding the FPU
fpstate in task_struct again. This should make the FPU a lot more
accessible to all sorts of atomic contexts.
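
Concretely, the 'struct fpu' layout becomes (abbreviated - see the
fpu/types.h hunk below):

	struct fpu {
		unsigned int		last_cpu;
		unsigned int		fpregs_active;
		union thread_xstate	state;	/* was: 'union thread_xstate *state' */
		...
	};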

We could still optimize for the xstate frame size in the future,
by moving the state structure to the last element of task_struct,
and allocating only a part of that.
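
A hypothetical sketch of that future optimization (task_struct_alloc_size()
is a made-up helper name - this is not part of this patch):

	size_t task_struct_alloc_size(void)
	{
		/*
		 * Only valid if 'thread.fpu.state' is the very last member of
		 * task_struct: then we only need to allocate as much xstate
		 * space as the CPU actually supports:
		 */
		return offsetof(struct task_struct, thread.fpu.state) + xstate_size;
	}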

This change is kept minimal by still keeping the fpstate_alloc()/fpstate_free()
routines (which now do nothing substantial) - we'll remove them in
the following patches.

Reviewed-by: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/fpu/internal.h | 34 +++++++++++++++++-----------------
 arch/x86/include/asm/fpu/types.h    |  2 +-
 arch/x86/kernel/fpu/core.c          | 52 ++++++++++++++++++++--------------------------------
 arch/x86/kernel/fpu/xsave.c         | 12 ++++++------
 arch/x86/kernel/traps.c             |  2 +-
 arch/x86/kvm/x86.c                  | 14 +++++++-------
 arch/x86/math-emu/fpu_aux.c         |  2 +-
 arch/x86/math-emu/fpu_entry.c       |  4 ++--
 arch/x86/math-emu/fpu_system.h      |  2 +-
 arch/x86/mm/mpx.c                   |  2 +-
 10 files changed, 57 insertions(+), 69 deletions(-)

diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index 10663b02ee22..4ce830fb3f31 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -232,9 +232,9 @@ static inline int frstor_user(struct i387_fsave_struct __user *fx)
 static inline void fpu_fxsave(struct fpu *fpu)
 {
 	if (config_enabled(CONFIG_X86_32))
-		asm volatile( "fxsave %[fx]" : [fx] "=m" (fpu->state->fxsave));
+		asm volatile( "fxsave %[fx]" : [fx] "=m" (fpu->state.fxsave));
 	else if (config_enabled(CONFIG_AS_FXSAVEQ))
-		asm volatile("fxsaveq %[fx]" : [fx] "=m" (fpu->state->fxsave));
+		asm volatile("fxsaveq %[fx]" : [fx] "=m" (fpu->state.fxsave));
 	else {
 		/* Using "rex64; fxsave %0" is broken because, if the memory
 		 * operand uses any extended registers for addressing, a second
@@ -251,15 +251,15 @@ static inline void fpu_fxsave(struct fpu *fpu)
 		 * an extended register is needed for addressing (fix submitted
 		 * to mainline 2005-11-21).
 		 *
-		 *  asm volatile("rex64/fxsave %0" : "=m" (fpu->state->fxsave));
+		 *  asm volatile("rex64/fxsave %0" : "=m" (fpu->state.fxsave));
 		 *
 		 * This, however, we can work around by forcing the compiler to
 		 * select an addressing mode that doesn't require extended
 		 * registers.
 		 */
 		asm volatile( "rex64/fxsave (%[fx])"
-			     : "=m" (fpu->state->fxsave)
-			     : [fx] "R" (&fpu->state->fxsave));
+			     : "=m" (fpu->state.fxsave)
+			     : [fx] "R" (&fpu->state.fxsave));
 	}
 }
 
@@ -276,7 +276,7 @@ static inline void fpu_fxsave(struct fpu *fpu)
 static inline int copy_fpregs_to_fpstate(struct fpu *fpu)
 {
 	if (likely(use_xsave())) {
-		xsave_state(&fpu->state->xsave);
+		xsave_state(&fpu->state.xsave);
 		return 1;
 	}
 
@@ -289,7 +289,7 @@ static inline int copy_fpregs_to_fpstate(struct fpu *fpu)
 	 * Legacy FPU register saving, FNSAVE always clears FPU registers,
 	 * so we have to mark them inactive:
 	 */
-	asm volatile("fnsave %[fx]; fwait" : [fx] "=m" (fpu->state->fsave));
+	asm volatile("fnsave %[fx]; fwait" : [fx] "=m" (fpu->state.fsave));
 
 	return 0;
 }
@@ -299,11 +299,11 @@ extern void fpu__save(struct fpu *fpu);
 static inline int fpu_restore_checking(struct fpu *fpu)
 {
 	if (use_xsave())
-		return fpu_xrstor_checking(&fpu->state->xsave);
+		return fpu_xrstor_checking(&fpu->state.xsave);
 	else if (use_fxsr())
-		return fxrstor_checking(&fpu->state->fxsave);
+		return fxrstor_checking(&fpu->state.fxsave);
 	else
-		return frstor_checking(&fpu->state->fsave);
+		return frstor_checking(&fpu->state.fsave);
 }
 
 static inline int restore_fpu_checking(struct fpu *fpu)
@@ -454,7 +454,7 @@ switch_fpu_prepare(struct fpu *old_fpu, struct fpu *new_fpu, int cpu)
 		if (fpu.preload) {
 			new_fpu->counter++;
 			__fpregs_activate(new_fpu);
-			prefetch(new_fpu->state);
+			prefetch(&new_fpu->state);
 		} else if (!use_eager_fpu())
 			stts();
 	} else {
@@ -465,7 +465,7 @@ switch_fpu_prepare(struct fpu *old_fpu, struct fpu *new_fpu, int cpu)
 			if (fpu_want_lazy_restore(new_fpu, cpu))
 				fpu.preload = 0;
 			else
-				prefetch(new_fpu->state);
+				prefetch(&new_fpu->state);
 			fpregs_activate(new_fpu);
 		}
 	}
@@ -534,25 +534,25 @@ static inline void user_fpu_begin(void)
 static inline unsigned short get_fpu_cwd(struct task_struct *tsk)
 {
 	if (cpu_has_fxsr) {
-		return tsk->thread.fpu.state->fxsave.cwd;
+		return tsk->thread.fpu.state.fxsave.cwd;
 	} else {
-		return (unsigned short)tsk->thread.fpu.state->fsave.cwd;
+		return (unsigned short)tsk->thread.fpu.state.fsave.cwd;
 	}
 }
 
 static inline unsigned short get_fpu_swd(struct task_struct *tsk)
 {
 	if (cpu_has_fxsr) {
-		return tsk->thread.fpu.state->fxsave.swd;
+		return tsk->thread.fpu.state.fxsave.swd;
 	} else {
-		return (unsigned short)tsk->thread.fpu.state->fsave.swd;
+		return (unsigned short)tsk->thread.fpu.state.fsave.swd;
 	}
 }
 
 static inline unsigned short get_fpu_mxcsr(struct task_struct *tsk)
 {
 	if (cpu_has_xmm) {
-		return tsk->thread.fpu.state->fxsave.mxcsr;
+		return tsk->thread.fpu.state.fxsave.mxcsr;
 	} else {
 		return MXCSR_DEFAULT;
 	}
diff --git a/arch/x86/include/asm/fpu/types.h b/arch/x86/include/asm/fpu/types.h
index 231a8f53b2f8..3a15ac6032eb 100644
--- a/arch/x86/include/asm/fpu/types.h
+++ b/arch/x86/include/asm/fpu/types.h
@@ -143,7 +143,7 @@ struct fpu {
 	unsigned int			last_cpu;
 
 	unsigned int			fpregs_active;
-	union thread_xstate		*state;
+	union thread_xstate		state;
 	/*
 	 * This counter contains the number of consecutive context switches
 	 * during which the FPU stays used. If this is over a threshold, the
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index 538f2541b7f7..422cbb4bbe01 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -174,9 +174,9 @@ static void __save_fpu(struct fpu *fpu)
 {
 	if (use_xsave()) {
 		if (unlikely(system_state == SYSTEM_BOOTING))
-			xsave_state_booting(&fpu->state->xsave);
+			xsave_state_booting(&fpu->state.xsave);
 		else
-			xsave_state(&fpu->state->xsave);
+			xsave_state(&fpu->state.xsave);
 	} else {
 		fpu_fxsave(fpu);
 	}
@@ -207,16 +207,16 @@ EXPORT_SYMBOL_GPL(fpu__save);
 void fpstate_init(struct fpu *fpu)
 {
 	if (!cpu_has_fpu) {
-		finit_soft_fpu(&fpu->state->soft);
+		finit_soft_fpu(&fpu->state.soft);
 		return;
 	}
 
-	memset(fpu->state, 0, xstate_size);
+	memset(&fpu->state, 0, xstate_size);
 
 	if (cpu_has_fxsr) {
-		fx_finit(&fpu->state->fxsave);
+		fx_finit(&fpu->state.fxsave);
 	} else {
-		struct i387_fsave_struct *fp = &fpu->state->fsave;
+		struct i387_fsave_struct *fp = &fpu->state.fsave;
 		fp->cwd = 0xffff037fu;
 		fp->swd = 0xffff0000u;
 		fp->twd = 0xffffffffu;
@@ -241,15 +241,8 @@ void fpstate_cache_init(void)
 
 int fpstate_alloc(struct fpu *fpu)
 {
-	if (fpu->state)
-		return 0;
-
-	fpu->state = kmem_cache_alloc(task_xstate_cachep, GFP_KERNEL);
-	if (!fpu->state)
-		return -ENOMEM;
-
 	/* The CPU requires the FPU state to be aligned to 16 byte boundaries: */
-	WARN_ON((unsigned long)fpu->state & 15);
+	WARN_ON((unsigned long)&fpu->state & 15);
 
 	return 0;
 }
@@ -257,10 +250,6 @@ EXPORT_SYMBOL_GPL(fpstate_alloc);
 
 void fpstate_free(struct fpu *fpu)
 {
-	if (fpu->state) {
-		kmem_cache_free(task_xstate_cachep, fpu->state);
-		fpu->state = NULL;
-	}
 }
 EXPORT_SYMBOL_GPL(fpstate_free);
 
@@ -277,11 +266,11 @@ static void fpu_copy(struct fpu *dst_fpu, struct fpu *src_fpu)
 	WARN_ON(src_fpu != &current->thread.fpu);
 
 	if (use_eager_fpu()) {
-		memset(&dst_fpu->state->xsave, 0, xstate_size);
+		memset(&dst_fpu->state.xsave, 0, xstate_size);
 		__save_fpu(dst_fpu);
 	} else {
 		fpu__save(src_fpu);
-		memcpy(dst_fpu->state, src_fpu->state, xstate_size);
+		memcpy(&dst_fpu->state, &src_fpu->state, xstate_size);
 	}
 }
 
@@ -289,7 +278,6 @@ int fpu__copy(struct fpu *dst_fpu, struct fpu *src_fpu)
 {
 	dst_fpu->counter = 0;
 	dst_fpu->fpregs_active = 0;
-	dst_fpu->state = NULL;
 	dst_fpu->last_cpu = -1;
 
 	if (src_fpu->fpstate_active) {
@@ -483,7 +471,7 @@ int xfpregs_get(struct task_struct *target, const struct user_regset *regset,
 	sanitize_i387_state(target);
 
 	return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
-				   &fpu->state->fxsave, 0, -1);
+				   &fpu->state.fxsave, 0, -1);
 }
 
 int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
@@ -503,19 +491,19 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
 	sanitize_i387_state(target);
 
 	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
-				 &fpu->state->fxsave, 0, -1);
+				 &fpu->state.fxsave, 0, -1);
 
 	/*
 	 * mxcsr reserved bits must be masked to zero for security reasons.
 	 */
-	fpu->state->fxsave.mxcsr &= mxcsr_feature_mask;
+	fpu->state.fxsave.mxcsr &= mxcsr_feature_mask;
 
 	/*
 	 * update the header bits in the xsave header, indicating the
 	 * presence of FP and SSE state.
 	 */
 	if (cpu_has_xsave)
-		fpu->state->xsave.header.xfeatures |= XSTATE_FPSSE;
+		fpu->state.xsave.header.xfeatures |= XSTATE_FPSSE;
 
 	return ret;
 }
@@ -535,7 +523,7 @@ int xstateregs_get(struct task_struct *target, const struct user_regset *regset,
 	if (ret)
 		return ret;
 
-	xsave = &fpu->state->xsave;
+	xsave = &fpu->state.xsave;
 
 	/*
 	 * Copy the 48bytes defined by the software first into the xstate
@@ -566,7 +554,7 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset,
 	if (ret)
 		return ret;
 
-	xsave = &fpu->state->xsave;
+	xsave = &fpu->state.xsave;
 
 	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, xsave, 0, -1);
 	/*
@@ -657,7 +645,7 @@ static inline u32 twd_fxsr_to_i387(struct i387_fxsave_struct *fxsave)
 void
 convert_from_fxsr(struct user_i387_ia32_struct *env, struct task_struct *tsk)
 {
-	struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state->fxsave;
+	struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state.fxsave;
 	struct _fpreg *to = (struct _fpreg *) &env->st_space[0];
 	struct _fpxreg *from = (struct _fpxreg *) &fxsave->st_space[0];
 	int i;
@@ -695,7 +683,7 @@ void convert_to_fxsr(struct task_struct *tsk,
 		     const struct user_i387_ia32_struct *env)
 
 {
-	struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state->fxsave;
+	struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state.fxsave;
 	struct _fpreg *from = (struct _fpreg *) &env->st_space[0];
 	struct _fpxreg *to = (struct _fpxreg *) &fxsave->st_space[0];
 	int i;
@@ -736,7 +724,7 @@ int fpregs_get(struct task_struct *target, const struct user_regset *regset,
 
 	if (!cpu_has_fxsr)
 		return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
-					   &fpu->state->fsave, 0,
+					   &fpu->state.fsave, 0,
 					   -1);
 
 	sanitize_i387_state(target);
@@ -770,7 +758,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset,
 
 	if (!cpu_has_fxsr)
 		return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
-					  &fpu->state->fsave, 0,
+					  &fpu->state.fsave, 0,
 					  -1);
 
 	if (pos > 0 || count < sizeof(env))
@@ -785,7 +773,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset,
 	 * presence of FP.
 	 */
 	if (cpu_has_xsave)
-		fpu->state->xsave.header.xfeatures |= XSTATE_FP;
+		fpu->state.xsave.header.xfeatures |= XSTATE_FP;
 	return ret;
 }
 
diff --git a/arch/x86/kernel/fpu/xsave.c b/arch/x86/kernel/fpu/xsave.c
index a23236358fb0..c7d48eb0a194 100644
--- a/arch/x86/kernel/fpu/xsave.c
+++ b/arch/x86/kernel/fpu/xsave.c
@@ -44,14 +44,14 @@ static unsigned int xfeatures_nr;
  */
 void __sanitize_i387_state(struct task_struct *tsk)
 {
-	struct i387_fxsave_struct *fx = &tsk->thread.fpu.state->fxsave;
+	struct i387_fxsave_struct *fx = &tsk->thread.fpu.state.fxsave;
 	int feature_bit;
 	u64 xfeatures;
 
 	if (!fx)
 		return;
 
-	xfeatures = tsk->thread.fpu.state->xsave.header.xfeatures;
+	xfeatures = tsk->thread.fpu.state.xsave.header.xfeatures;
 
 	/*
 	 * None of the feature bits are in init state. So nothing else
@@ -147,7 +147,7 @@ static inline int check_for_xstate(struct i387_fxsave_struct __user *buf,
 static inline int save_fsave_header(struct task_struct *tsk, void __user *buf)
 {
 	if (use_fxsr()) {
-		struct xsave_struct *xsave = &tsk->thread.fpu.state->xsave;
+		struct xsave_struct *xsave = &tsk->thread.fpu.state.xsave;
 		struct user_i387_ia32_struct env;
 		struct _fpstate_ia32 __user *fp = buf;
 
@@ -245,7 +245,7 @@ static inline int save_user_xstate(struct xsave_struct __user *buf)
  */
 int save_xstate_sig(void __user *buf, void __user *buf_fx, int size)
 {
-	struct xsave_struct *xsave = &current->thread.fpu.state->xsave;
+	struct xsave_struct *xsave = &current->thread.fpu.state.xsave;
 	struct task_struct *tsk = current;
 	int ia32_fxstate = (buf != buf_fx);
 
@@ -288,7 +288,7 @@ sanitize_restored_xstate(struct task_struct *tsk,
 			 struct user_i387_ia32_struct *ia32_env,
 			 u64 xfeatures, int fx_only)
 {
-	struct xsave_struct *xsave = &tsk->thread.fpu.state->xsave;
+	struct xsave_struct *xsave = &tsk->thread.fpu.state.xsave;
 	struct xstate_header *header = &xsave->header;
 
 	if (use_xsave()) {
@@ -402,7 +402,7 @@ int __restore_xstate_sig(void __user *buf, void __user *buf_fx, int size)
 		 */
 		drop_fpu(fpu);
 
-		if (__copy_from_user(&fpu->state->xsave, buf_fx, state_size) ||
+		if (__copy_from_user(&fpu->state.xsave, buf_fx, state_size) ||
 		    __copy_from_user(&env, buf, sizeof(env))) {
 			fpstate_init(fpu);
 			err = -1;
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index f028f1da3480..48dfcd9ed351 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -396,7 +396,7 @@ dotraplinkage void do_bounds(struct pt_regs *regs, long error_code)
 	 * do an xsave and then pull it out of the xsave buffer.
 	 */
 	copy_fpregs_to_fpstate(&tsk->thread.fpu);
-	xsave_buf = &(tsk->thread.fpu.state->xsave);
+	xsave_buf = &(tsk->thread.fpu.state.xsave);
 	bndcsr = get_xsave_addr(xsave_buf, XSTATE_BNDCSR);
 	if (!bndcsr)
 		goto exit_trap;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d90bf4afa2b0..8bb0de5bf9c0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3196,7 +3196,7 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
 
 static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu)
 {
-	struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state->xsave;
+	struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state.xsave;
 	u64 xstate_bv = xsave->header.xfeatures;
 	u64 valid;
 
@@ -3232,7 +3232,7 @@ static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu)
 
 static void load_xsave(struct kvm_vcpu *vcpu, u8 *src)
 {
-	struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state->xsave;
+	struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state.xsave;
 	u64 xstate_bv = *(u64 *)(src + XSAVE_HDR_OFFSET);
 	u64 valid;
 
@@ -3277,7 +3277,7 @@ static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
 		fill_xsave((u8 *) guest_xsave->region, vcpu);
 	} else {
 		memcpy(guest_xsave->region,
-			&vcpu->arch.guest_fpu.state->fxsave,
+			&vcpu->arch.guest_fpu.state.fxsave,
 			sizeof(struct i387_fxsave_struct));
 		*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] =
 			XSTATE_FPSSE;
@@ -3302,7 +3302,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
 	} else {
 		if (xstate_bv & ~XSTATE_FPSSE)
 			return -EINVAL;
-		memcpy(&vcpu->arch.guest_fpu.state->fxsave,
+		memcpy(&vcpu->arch.guest_fpu.state.fxsave,
 			guest_xsave->region, sizeof(struct i387_fxsave_struct));
 	}
 	return 0;
@@ -6973,7 +6973,7 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
 int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 {
 	struct i387_fxsave_struct *fxsave =
-			&vcpu->arch.guest_fpu.state->fxsave;
+			&vcpu->arch.guest_fpu.state.fxsave;
 
 	memcpy(fpu->fpr, fxsave->st_space, 128);
 	fpu->fcw = fxsave->cwd;
@@ -6990,7 +6990,7 @@ int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 {
 	struct i387_fxsave_struct *fxsave =
-			&vcpu->arch.guest_fpu.state->fxsave;
+			&vcpu->arch.guest_fpu.state.fxsave;
 
 	memcpy(fxsave->st_space, fpu->fpr, 128);
 	fxsave->cwd = fpu->fcw;
@@ -7014,7 +7014,7 @@ int fx_init(struct kvm_vcpu *vcpu)
 
 	fpstate_init(&vcpu->arch.guest_fpu);
 	if (cpu_has_xsaves)
-		vcpu->arch.guest_fpu.state->xsave.header.xcomp_bv =
+		vcpu->arch.guest_fpu.state.xsave.header.xcomp_bv =
 			host_xcr0 | XSTATE_COMPACTION_ENABLED;
 
 	/*
diff --git a/arch/x86/math-emu/fpu_aux.c b/arch/x86/math-emu/fpu_aux.c
index dc8adad10a2f..7562341ce299 100644
--- a/arch/x86/math-emu/fpu_aux.c
+++ b/arch/x86/math-emu/fpu_aux.c
@@ -52,7 +52,7 @@ void finit_soft_fpu(struct i387_soft_struct *soft)
 
 void finit(void)
 {
-	finit_soft_fpu(&current->thread.fpu.state->soft);
+	finit_soft_fpu(&current->thread.fpu.state.soft);
 }
 
 /*
diff --git a/arch/x86/math-emu/fpu_entry.c b/arch/x86/math-emu/fpu_entry.c
index cf843855e4f6..5e003704ebfa 100644
--- a/arch/x86/math-emu/fpu_entry.c
+++ b/arch/x86/math-emu/fpu_entry.c
@@ -683,7 +683,7 @@ int fpregs_soft_set(struct task_struct *target,
 		    unsigned int pos, unsigned int count,
 		    const void *kbuf, const void __user *ubuf)
 {
-	struct i387_soft_struct *s387 = &target->thread.fpu.state->soft;
+	struct i387_soft_struct *s387 = &target->thread.fpu.state.soft;
 	void *space = s387->st_space;
 	int ret;
 	int offset, other, i, tags, regnr, tag, newtop;
@@ -735,7 +735,7 @@ int fpregs_soft_get(struct task_struct *target,
 		    unsigned int pos, unsigned int count,
 		    void *kbuf, void __user *ubuf)
 {
-	struct i387_soft_struct *s387 = &target->thread.fpu.state->soft;
+	struct i387_soft_struct *s387 = &target->thread.fpu.state.soft;
 	const void *space = s387->st_space;
 	int ret;
 	int offset = (S387->ftop & 7) * 10, other = 80 - offset;
diff --git a/arch/x86/math-emu/fpu_system.h b/arch/x86/math-emu/fpu_system.h
index 2c614410a5f3..9ccecb61a4fa 100644
--- a/arch/x86/math-emu/fpu_system.h
+++ b/arch/x86/math-emu/fpu_system.h
@@ -31,7 +31,7 @@
 #define SEG_EXPAND_DOWN(s)	(((s).b & ((1 << 11) | (1 << 10))) \
 				 == (1 << 10))
 
-#define I387			(current->thread.fpu.state)
+#define I387			(&current->thread.fpu.state)
 #define FPU_info		(I387->soft.info)
 
 #define FPU_CS			(*(unsigned short *) &(FPU_info->regs->cs))
diff --git a/arch/x86/mm/mpx.c b/arch/x86/mm/mpx.c
index 3287215be60a..ea5b367b63a9 100644
--- a/arch/x86/mm/mpx.c
+++ b/arch/x86/mm/mpx.c
@@ -358,7 +358,7 @@ static __user void *task_get_bounds_dir(struct task_struct *tsk)
 	 * only accessible if we first do an xsave.
 	 */
 	copy_fpregs_to_fpstate(&tsk->thread.fpu);
-	bndcsr = get_xsave_addr(&tsk->thread.fpu.state->xsave, XSTATE_BNDCSR);
+	bndcsr = get_xsave_addr(&tsk->thread.fpu.state.xsave, XSTATE_BNDCSR);
 	if (!bndcsr)
 		return MPX_INVALID_BOUNDS_DIR;
 
-- 
2.1.0


Thread overview: 111+ messages
2015-05-05 17:49 [PATCH 000/208] big x86 FPU code rewrite Ingo Molnar
2015-05-05 17:49 ` [PATCH 080/208] x86/fpu: Rename 'xstate_features' to 'xfeatures_nr' Ingo Molnar
2015-05-05 17:49 ` [PATCH 081/208] x86/fpu: Move XCR0 manipulation to the FPU code proper Ingo Molnar
2015-05-05 17:49 ` [PATCH 082/208] x86/fpu: Clean up regset functions Ingo Molnar
2015-05-05 17:49 ` [PATCH 083/208] x86/fpu: Rename 'xsave_hdr' to 'header' Ingo Molnar
2015-05-05 17:49 ` [PATCH 084/208] x86/fpu: Rename xsave.header::xstate_bv to 'xfeatures' Ingo Molnar
2015-05-05 17:57   ` Dave Hansen
2015-05-05 18:16     ` Ingo Molnar
2015-05-05 18:25       ` Dave Hansen
2015-05-06  6:16         ` Ingo Molnar
2015-05-06 12:46           ` Ingo Molnar
2015-05-06 15:09             ` Dave Hansen
2015-05-07 11:46               ` Ingo Molnar
2015-05-06 18:27           ` Dave Hansen
2015-05-07 10:59             ` Borislav Petkov
2015-05-07 12:22             ` Ingo Molnar
2015-05-07 14:58               ` Dave Hansen
2015-05-07 15:33                 ` Ingo Molnar
2015-05-07 15:58                   ` Dave Hansen
2015-05-07 19:35                     ` Ingo Molnar
2015-05-05 17:49 ` [PATCH 085/208] x86/fpu: Clean up and fix MXCSR handling Ingo Molnar
2015-05-05 17:49 ` [PATCH 086/208] x86/fpu: Rename regset FPU register accessors Ingo Molnar
2015-05-05 17:49 ` [PATCH 087/208] x86/fpu: Explain the AVX register layout in the xsave area Ingo Molnar
2015-05-05 17:49 ` [PATCH 088/208] x86/fpu: Improve the __sanitize_i387_state() documentation Ingo Molnar
2015-05-05 17:49 ` [PATCH 089/208] x86/fpu: Rename fpu->has_fpu to fpu->fpregs_active Ingo Molnar
2015-05-05 17:49 ` [PATCH 090/208] x86/fpu: Rename __thread_set_has_fpu() to __fpregs_activate() Ingo Molnar
2015-05-05 17:49 ` [PATCH 091/208] x86/fpu: Rename __thread_clear_has_fpu() to __fpregs_deactivate() Ingo Molnar
2015-05-05 17:49 ` [PATCH 092/208] x86/fpu: Rename __thread_fpu_begin() to fpregs_activate() Ingo Molnar
2015-05-05 17:49 ` [PATCH 093/208] x86/fpu: Rename __thread_fpu_end() to fpregs_deactivate() Ingo Molnar
2015-05-05 17:49 ` [PATCH 094/208] x86/fpu: Remove fpstate_xstate_init_size() boot quirk Ingo Molnar
2015-05-05 17:49 ` [PATCH 095/208] x86/fpu: Remove xsave_init() bootmem allocations Ingo Molnar
2015-05-05 17:49 ` [PATCH 096/208] x86/fpu: Make setup_init_fpu_buf() run-once explicitly Ingo Molnar
2015-05-05 17:49 ` [PATCH 097/208] x86/fpu: Remove 'init_xstate_buf' bootmem allocation Ingo Molnar
2015-07-14 19:46   ` 4.2-rc2: early boot memory corruption from FPU rework Dave Hansen
2015-07-15  1:25     ` H. Peter Anvin
2015-07-15 11:07     ` Ingo Molnar
2015-07-16  0:34       ` [REGRESSION] " Dave Hansen
2015-07-16  2:39         ` Linus Torvalds
2015-07-16  2:51         ` Linus Torvalds
2015-07-17  7:45         ` Ingo Molnar
2015-07-17  8:51           ` Ingo Molnar
2015-07-17 16:58           ` Dave Hansen
2015-07-17 19:32             ` Ingo Molnar
2015-07-17 20:01               ` Dave Hansen
2015-05-05 17:49 ` [PATCH 098/208] x86/fpu: Split fpu__cpu_init() into early-boot and cpu-boot parts Ingo Molnar
2015-05-05 17:49 ` [PATCH 099/208] x86/fpu: Make the system/cpu init distinction clear in the xstate code as well Ingo Molnar
2015-05-05 17:49 ` [PATCH 100/208] x86/fpu: Move CPU capability check into fpu__init_cpu_xstate() Ingo Molnar
2015-05-05 17:49 ` [PATCH 101/208] x86/fpu: Move legacy check to fpu__init_system_xstate() Ingo Molnar
2015-05-05 17:49 ` [PATCH 102/208] x86/fpu: Propagate once per boot quirk into fpu__init_system_xstate() Ingo Molnar
2015-05-05 17:49 ` [PATCH 103/208] x86/fpu: Remove xsave_init() Ingo Molnar
2015-05-05 17:49 ` [PATCH 104/208] x86/fpu: Do fpu__init_system_xstate only from fpu__init_system() Ingo Molnar
2015-05-05 17:49 ` [PATCH 105/208] x86/fpu: Set up the legacy FPU init image " Ingo Molnar
2015-05-05 17:49 ` [PATCH 106/208] x86/fpu: Remove setup_init_fpu_buf() call from eager_fpu_init() Ingo Molnar
2015-05-05 17:49 ` [PATCH 107/208] x86/fpu: Move all eager-fpu setup code to eager_fpu_init() Ingo Molnar
2015-05-05 17:50 ` [PATCH 108/208] x86/fpu: Move eager_fpu_init() to fpu/init.c Ingo Molnar
2015-05-05 17:50 ` [PATCH 109/208] x86/fpu: Clean up eager_fpu_init() and rename it to fpu__ctx_switch_init() Ingo Molnar
2015-05-05 17:50 ` [PATCH 110/208] x86/fpu: Split fpu__ctx_switch_init() into _cpu() and _system() portions Ingo Molnar
2015-05-05 17:50 ` [PATCH 111/208] x86/fpu: Do CLTS fpu__init_system() Ingo Molnar
2015-05-05 17:50 ` [PATCH 112/208] x86/fpu: Move the fpstate_xstate_init_size() call into fpu__init_system() Ingo Molnar
2015-05-05 17:50 ` [PATCH 113/208] x86/fpu: Call fpu__init_cpu_ctx_switch() from fpu__init_cpu() Ingo Molnar
2015-05-05 17:50 ` [PATCH 114/208] x86/fpu: Do system-wide setup from fpu__detect() Ingo Molnar
2015-05-05 17:50 ` [PATCH 115/208] x86/fpu: Remove fpu__init_cpu_ctx_switch() call from fpu__init_system() Ingo Molnar
2015-05-05 17:50 ` [PATCH 116/208] x86/fpu: Simplify fpu__cpu_init() Ingo Molnar
2015-05-05 17:50 ` [PATCH 117/208] x86/fpu: Factor out fpu__init_cpu_generic() Ingo Molnar
2015-05-05 17:50 ` [PATCH 118/208] x86/fpu: Factor out fpu__init_system_generic() Ingo Molnar
2015-05-05 17:50 ` [PATCH 119/208] x86/fpu: Factor out fpu__init_system_early_generic() Ingo Molnar
2015-05-05 17:50 ` [PATCH 120/208] x86/fpu: Move !FPU check ingo fpu__init_system_early_generic() Ingo Molnar
2015-05-05 17:50 ` [PATCH 121/208] x86/fpu: Factor out FPU bug checks into fpu/bugs.c Ingo Molnar
2015-05-05 17:50 ` [PATCH 122/208] x86/fpu: Make check_fpu() init ordering independent Ingo Molnar
2015-05-05 17:50 ` [PATCH 123/208] x86/fpu: Move fpu__init_system_early_generic() out of fpu__detect() Ingo Molnar
2015-05-05 17:50 ` [PATCH 124/208] x86/fpu: Remove the extra fpu__detect() layer Ingo Molnar
2015-05-05 17:50 ` [PATCH 125/208] x86/fpu: Rename fpstate_xstate_init_size() to fpu__init_system_xstate_size_legacy() Ingo Molnar
2015-05-05 17:50 ` [PATCH 126/208] x86/fpu: Reorder init methods Ingo Molnar
2015-05-05 17:50 ` [PATCH 127/208] x86/fpu: Add more comments to the FPU init code Ingo Molnar
2015-05-05 17:50 ` [PATCH 128/208] x86/fpu: Move fpu__save() to fpu/internals.h Ingo Molnar
2015-05-05 17:50 ` [PATCH 129/208] x86/fpu: Uninline kernel_fpu_begin()/end() Ingo Molnar
2015-05-05 17:50 ` [PATCH 130/208] x86/fpu: Move various internal function prototypes to fpu/internal.h Ingo Molnar
2015-05-05 17:50 ` [PATCH 131/208] x86/fpu: Uninline the irq_ts_save()/restore() functions Ingo Molnar
2015-05-05 17:50 ` [PATCH 132/208] x86/fpu: Rename fpu_save_init() to copy_fpregs_to_fpstate() Ingo Molnar
2015-05-05 17:50 ` [PATCH 133/208] x86/fpu: Optimize copy_fpregs_to_fpstate() by removing the FNCLEX synchronization with FP exceptions Ingo Molnar
2015-05-05 17:50 ` Ingo Molnar [this message]
2015-05-05 17:50 ` [PATCH 135/208] x86/fpu: Remove failure paths from fpstate-alloc low level functions Ingo Molnar
2015-05-05 17:50 ` [PATCH 136/208] x86/fpu: Remove failure return from fpstate_alloc_init() Ingo Molnar
2015-05-05 17:50 ` [PATCH 137/208] x86/fpu: Rename fpstate_alloc_init() to fpstate_init_curr() Ingo Molnar
2015-05-05 17:50 ` [PATCH 138/208] x86/fpu: Simplify fpu__unlazy_stopped() error handling Ingo Molnar
2015-05-05 17:50 ` [PATCH 139/208] x86/fpu, kvm: Simplify fx_init() Ingo Molnar
2015-05-05 17:50 ` [PATCH 140/208] x86/fpu: Simplify fpstate_init_curr() usage Ingo Molnar
2015-05-05 17:50 ` [PATCH 141/208] x86/fpu: Rename fpu__unlazy_stopped() to fpu__activate_stopped() Ingo Molnar
2015-05-05 17:50 ` [PATCH 142/208] x86/fpu: Factor out FPU hw activation/deactivation Ingo Molnar
2015-05-05 17:50 ` [PATCH 143/208] x86/fpu: Simplify __save_fpu() Ingo Molnar
2015-05-05 17:50 ` [PATCH 144/208] x86/fpu: Eliminate __save_fpu() Ingo Molnar
2015-05-05 17:50 ` [PATCH 145/208] x86/fpu: Simplify fpu__save() Ingo Molnar
2015-05-05 17:50 ` [PATCH 146/208] x86/fpu: Optimize fpu__save() Ingo Molnar
2015-05-05 17:50 ` [PATCH 147/208] x86/fpu: Optimize fpu_copy() Ingo Molnar
2015-05-05 17:50 ` [PATCH 148/208] x86/fpu: Optimize fpu_copy() some more on lazy switching systems Ingo Molnar
2015-05-05 17:50 ` [PATCH 149/208] x86/fpu: Rename fpu/xsave.h to fpu/xstate.h Ingo Molnar
2015-05-05 17:50 ` [PATCH 150/208] x86/fpu: Rename fpu/xsave.c to fpu/xstate.c Ingo Molnar
2015-05-05 17:50 ` [PATCH 151/208] x86/fpu: Introduce cpu_has_xfeatures(xfeatures_mask, feature_name) Ingo Molnar
2015-05-05 22:15   ` Yu, Fenghua
2015-05-06  5:00     ` Ingo Molnar
2015-05-05 17:50 ` [PATCH 152/208] x86/fpu: Simplify print_xstate_features() Ingo Molnar
2015-05-05 17:50 ` [PATCH 153/208] x86/fpu: Enumerate xfeature bits Ingo Molnar
2015-05-05 17:50 ` [PATCH 154/208] x86/fpu: Move xfeature type enumeration to fpu/types.h Ingo Molnar
2015-05-05 17:50 ` [PATCH 155/208] x86/fpu, crypto x86/camellia_aesni_avx: Simplify the camellia_aesni_init() xfeature checks Ingo Molnar
2015-05-05 17:50 ` [PATCH 156/208] x86/fpu, crypto x86/sha256_ssse3: Simplify the sha256_ssse3_mod_init() " Ingo Molnar
2015-05-05 17:50 ` [PATCH 157/208] x86/fpu, crypto x86/camellia_aesni_avx2: Simplify the camellia_aesni_init() " Ingo Molnar
2015-05-05 17:50 ` [PATCH 158/208] x86/fpu, crypto x86/twofish_avx: Simplify the twofish_init() " Ingo Molnar
2015-05-05 17:50 ` [PATCH 159/208] x86/fpu, crypto x86/serpent_avx: Simplify the serpent_init() " Ingo Molnar
2015-05-05 17:50 ` [PATCH 160/208] x86/fpu, crypto x86/cast5_avx: Simplify the cast5_init() " Ingo Molnar
2015-05-05 17:50 ` [PATCH 161/208] x86/fpu, crypto x86/sha512_ssse3: Simplify the sha512_ssse3_mod_init() " Ingo Molnar
2015-05-05 17:50 ` [PATCH 162/208] x86/fpu, crypto x86/cast6_avx: Simplify the cast6_init() " Ingo Molnar
