linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch
@ 2015-01-11 21:46 riel
  2015-01-11 21:46 ` [RFC PATCH 01/11] x86,fpu: document the data structures a little riel
                   ` (11 more replies)
  0 siblings, 12 replies; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

Currently the kernel will always load the FPU context, even
when switching to a kernel thread or to an idle thread. When
a task on a KVM VCPU goes idle for a bit and wakes up again
later, this creates a vastly inefficient chain of FPU context
saves & loads:

1) save task FPU context, load idle task FPU context (in KVM guest)
2) trap to host
3) save VCPU guest FPU context, load VCPU userspace context (__kernel_fpu_end)
4) save VCPU userspace context, load idle thread FPU context
5) save idle thread FPU context, load VCPU userspace FPU context
6) save VCPU userspace FPU context, load guest FPU context (__kernel_fpu_begin)
7) enter guest
8) save idle task FPU context, load task FPU context (in KVM guest)

This is a total of 6 FPU saves and 6 restores, touching 4 different
FPU contexts, only one of which is ever used. The hardware optimizes
FPU save and restore pretty well, but 12 operations involving 384
bytes of data add substantial overhead. Additionally, the XSAVEOPT
optimization does not work across VMENTER / VMEXIT boundaries, so
things are slower than they would be on bare metal.

This patch series reduces that to two saves, steps (1) and (3), and
one load, step (6), if the VCPU and the task inside the guest both
stay on the same CPU. The load could be optimized away in a subsequent
series, by recognizing that the emulator did not touch the in-memory
FPU state for the guest.

This could also give a small performance gain for bare metal
applications that wake up and go idle repeatedly, staying on the
same CPU.
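
In outline, the series moves the FPU restore out of __switch_to() and
into the return-to-userspace path. A simplified sketch of the intended
flow (the real code in the patches below also handles lazy FPU mode,
fpu_counter, CR0.TS and error cases):

	/* At context switch time: save the old state, only mark the new. */
	static inline void switch_fpu_prepare(struct task_struct *old,
					      struct task_struct *new, int cpu)
	{
		if (__thread_has_fpu(old))
			__save_init_fpu(old);
		if (tsk_used_math(new))
			set_thread_flag(TIF_LOAD_FPU);
	}

	/* On the way back to userspace, called from do_notify_resume(). */
	static inline void switch_fpu_finish(void)
	{
		if (unlikely(restore_fpu_checking(current)))
			drop_init_fpu(current);
	}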

Where it all falls apart (probably due to a stupid mistake on my end)
is the signal handling code.

In the signal handling code, the registers (including the FPU state,
via save_xstate_sig) are all saved to the user space stack, and on
sigreturn they are loaded back in (via __restore_xstate_sig). The
signal handler setup code needs to be adjusted to deal with the other
changes, but I am apparently doing that incorrectly.

I have been staring at the code for a few weeks now, and do not
appear to be any closer to figuring out what I did wrong in the last
patch of this series.

I would really appreciate it if people with better knowledge of the
signal handler and/or FPU code could take a look :)



* [RFC PATCH 01/11] x86,fpu: document the data structures a little
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
@ 2015-01-11 21:46 ` riel
  2015-01-12 21:18   ` Borislav Petkov
  2015-01-12 21:52   ` Dave Hansen
  2015-01-11 21:46 ` [RFC PATCH 02/11] x86,fpu: replace fpu_switch_t with a thread flag riel
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

Add some documentation to data structures used for FPU context
switching.

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/processor.h | 9 +++++++--
 arch/x86/kernel/cpu/common.c     | 1 +
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index a092a0c..17bd8a0 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -400,6 +400,11 @@ struct xsave_struct {
 	/* new processor state extensions will go here */
 } __attribute__ ((packed, aligned (64)));
 
+/*
+ * The FPU state used depends on the capabilities of the hardware; the
+ * registers used for vector instructions on newer hardware are included
+ * in the FPU state.
+ */
 union thread_xstate {
 	struct i387_fsave_struct	fsave;
 	struct i387_fxsave_struct	fxsave;
@@ -408,8 +413,8 @@ union thread_xstate {
 };
 
 struct fpu {
-	unsigned int last_cpu;
-	unsigned int has_fpu;
+	unsigned int last_cpu;		/* FPU state last loaded on this CPU */
+	unsigned int has_fpu;		/* FPU state in current use on CPU */
 	union thread_xstate *state;
 };
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index c604965..36837bb 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1143,6 +1143,7 @@ DEFINE_PER_CPU(unsigned int, irq_count) __visible = -1;
 DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
 EXPORT_PER_CPU_SYMBOL(__preempt_count);
 
+/* Task that currently has its FPU state loaded on the CPU. */
 DEFINE_PER_CPU(struct task_struct *, fpu_owner_task);
 
 /*
-- 
1.9.3



* [RFC PATCH 02/11] x86,fpu: replace fpu_switch_t with a thread flag
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
  2015-01-11 21:46 ` [RFC PATCH 01/11] x86,fpu: document the data structures a little riel
@ 2015-01-11 21:46 ` riel
  2015-01-13 15:24   ` Oleg Nesterov
  2015-01-11 21:46 ` [RFC PATCH 03/11] x86,fpu: move __thread_fpu_begin to when the task has the fpu riel
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

Replace fpu_switch_t with a thread flag, in preparation for only
restoring the FPU state on return to user space.

I have left the code around fpu_lazy_restore intact, even though
there appears to be no protection against races with e.g. ptrace,
and the optimization appears equally valid in eager fpu mode.
This is addressed later in the series.
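
Note that set_thread_flag() operates on current's thread_info; the
generic definition is roughly:

	#define set_thread_flag(flag) \
		set_ti_thread_flag(current_thread_info(), flag)

so the flag set in switch_fpu_prepare() lands on whichever task is
current at that point; set_tsk_thread_flag(new, TIF_LOAD_FPU) would be
the variant that targets the new task explicitly.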

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 38 +++++++++++++------------------------
 arch/x86/include/asm/thread_info.h  |  4 +++-
 arch/x86/kernel/process_32.c        |  5 ++---
 arch/x86/kernel/process_64.c        |  5 ++---
 4 files changed, 20 insertions(+), 32 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index e97622f..5f8f971 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -385,20 +385,6 @@ static inline void drop_init_fpu(struct task_struct *tsk)
 }
 
 /*
- * FPU state switching for scheduling.
- *
- * This is a two-stage process:
- *
- *  - switch_fpu_prepare() saves the old state and
- *    sets the new state of the CR0.TS bit. This is
- *    done within the context of the old process.
- *
- *  - switch_fpu_finish() restores the new state as
- *    necessary.
- */
-typedef struct { int preload; } fpu_switch_t;
-
-/*
  * Must be run with preemption disabled: this clears the fpu_owner_task,
  * on this CPU.
  *
@@ -416,15 +402,13 @@ static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
 		cpu == new->thread.fpu.last_cpu;
 }
 
-static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct task_struct *new, int cpu)
+static inline void switch_fpu_prepare(struct task_struct *old, struct task_struct *new, int cpu)
 {
-	fpu_switch_t fpu;
-
 	/*
 	 * If the task has used the math, pre-load the FPU on xsave processors
 	 * or if the past 5 consecutive context-switches used math.
 	 */
-	fpu.preload = tsk_used_math(new) && (use_eager_fpu() ||
+	bool preload = tsk_used_math(new) && (use_eager_fpu() ||
 					     new->thread.fpu_counter > 5);
 	if (__thread_has_fpu(old)) {
 		if (!__save_init_fpu(old))
@@ -433,8 +417,9 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 		old->thread.fpu.has_fpu = 0;	/* But leave fpu_owner_task! */
 
 		/* Don't change CR0.TS if we just switch! */
-		if (fpu.preload) {
+		if (preload) {
 			new->thread.fpu_counter++;
+			set_thread_flag(TIF_LOAD_FPU);
 			__thread_set_has_fpu(new);
 			prefetch(new->thread.fpu.state);
 		} else if (!use_eager_fpu())
@@ -442,16 +427,19 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 	} else {
 		old->thread.fpu_counter = 0;
 		old->thread.fpu.last_cpu = ~0;
-		if (fpu.preload) {
+		if (preload) {
 			new->thread.fpu_counter++;
 			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
-				fpu.preload = 0;
-			else
+				/* XXX: is this safe against ptrace??? */
+				__thread_fpu_begin(new);
+			else {
 				prefetch(new->thread.fpu.state);
+				set_thread_flag(TIF_LOAD_FPU);
+			}
 			__thread_fpu_begin(new);
 		}
+		/* else: CR0.TS is still set from a previous FPU switch */
 	}
-	return fpu;
 }
 
 /*
@@ -460,9 +448,9 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
  * state - all we need to do is to conditionally restore the register
  * state itself.
  */
-static inline void switch_fpu_finish(struct task_struct *new, fpu_switch_t fpu)
+static inline void switch_fpu_finish(struct task_struct *new)
 {
-	if (fpu.preload) {
+	if (test_and_clear_thread_flag(TIF_LOAD_FPU)) {
 		if (unlikely(restore_fpu_checking(new)))
 			drop_init_fpu(new);
 	}
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 547e344..077fcd9 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -91,6 +91,7 @@ struct thread_info {
 #define TIF_SYSCALL_TRACEPOINT	28	/* syscall tracepoint instrumentation */
 #define TIF_ADDR32		29	/* 32-bit address space on 64 bits */
 #define TIF_X32			30	/* 32-bit native x86-64 binary */
+#define TIF_LOAD_FPU		31	/* load FPU on return to userspace */
 
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
@@ -115,6 +116,7 @@ struct thread_info {
 #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
 #define _TIF_ADDR32		(1 << TIF_ADDR32)
 #define _TIF_X32		(1 << TIF_X32)
+#define _TIF_LOAD_FPU		(1 << TIF_LOAD_FPU)
 
 /* work to do in syscall_trace_enter() */
 #define _TIF_WORK_SYSCALL_ENTRY	\
@@ -141,7 +143,7 @@ struct thread_info {
 /* Only used for 64 bit */
 #define _TIF_DO_NOTIFY_MASK						\
 	(_TIF_SIGPENDING | _TIF_MCE_NOTIFY | _TIF_NOTIFY_RESUME |	\
-	 _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE)
+	 _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE | _TIF_LOAD_FPU)
 
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW							\
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 8f3ebfe..c4b00e6 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -249,11 +249,10 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 				 *next = &next_p->thread;
 	int cpu = smp_processor_id();
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
-	fpu_switch_t fpu;
 
 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
 
-	fpu = switch_fpu_prepare(prev_p, next_p, cpu);
+	switch_fpu_prepare(prev_p, next_p, cpu);
 
 	/*
 	 * Reload esp0.
@@ -320,7 +319,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	if (prev->gs | next->gs)
 		lazy_load_gs(next->gs);
 
-	switch_fpu_finish(next_p, fpu);
+	switch_fpu_finish(next_p);
 
 	this_cpu_write(current_task, next_p);
 
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 3ed4a68..ee3824f 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -279,9 +279,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	int cpu = smp_processor_id();
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
 	unsigned fsindex, gsindex;
-	fpu_switch_t fpu;
 
-	fpu = switch_fpu_prepare(prev_p, next_p, cpu);
+	switch_fpu_prepare(prev_p, next_p, cpu);
 
 	/*
 	 * Reload esp0, LDT and the page table pointer:
@@ -351,7 +350,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 		wrmsrl(MSR_KERNEL_GS_BASE, next->gs);
 	prev->gsindex = gsindex;
 
-	switch_fpu_finish(next_p, fpu);
+	switch_fpu_finish(next_p);
 
 	/*
 	 * Switch the PDA and FPU contexts.
-- 
1.9.3



* [RFC PATCH 03/11] x86,fpu: move __thread_fpu_begin to when the task has the fpu
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
  2015-01-11 21:46 ` [RFC PATCH 01/11] x86,fpu: document the data structures a little riel
  2015-01-11 21:46 ` [RFC PATCH 02/11] x86,fpu: replace fpu_switch_t with a thread flag riel
@ 2015-01-11 21:46 ` riel
  2015-01-13 15:24   ` Oleg Nesterov
  2015-01-11 21:46 ` [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace riel
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

Move the call to __thread_fpu_begin, which in turn calls
__thread_set_has_fpu, to a spot where the task actually has
the FPU.

This is in preparation for the next patch.

This changeset introduces an extraneous clts() call when
switching from one FPU-using task to another FPU-using
task in non-eager FPU switching mode. The next patch gets
rid of that.

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 5f8f971..27556f4 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -420,7 +420,6 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 		if (preload) {
 			new->thread.fpu_counter++;
 			set_thread_flag(TIF_LOAD_FPU);
-			__thread_set_has_fpu(new);
 			prefetch(new->thread.fpu.state);
 		} else if (!use_eager_fpu())
 			stts();
@@ -436,7 +435,6 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 				prefetch(new->thread.fpu.state);
 				set_thread_flag(TIF_LOAD_FPU);
 			}
-			__thread_fpu_begin(new);
 		}
 		/* else: CR0.TS is still set from a previous FPU switch */
 	}
@@ -451,6 +449,7 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 static inline void switch_fpu_finish(struct task_struct *new)
 {
 	if (test_and_clear_thread_flag(TIF_LOAD_FPU)) {
+		__thread_fpu_begin(new);
 		if (unlikely(restore_fpu_checking(new)))
 			drop_init_fpu(new);
 	}
-- 
1.9.3



* [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
                   ` (2 preceding siblings ...)
  2015-01-11 21:46 ` [RFC PATCH 03/11] x86,fpu: move __thread_fpu_begin to when the task has the fpu riel
@ 2015-01-11 21:46 ` riel
  2015-01-13 15:53   ` Oleg Nesterov
                     ` (3 more replies)
  2015-01-11 21:46 ` [RFC PATCH 05/11] x86,fpu: ensure FPU state is reloaded from memory if task is traced riel
                   ` (7 subsequent siblings)
  11 siblings, 4 replies; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

Defer restoring the FPU state, if so desired, until the task returns to
userspace.

In the case of kernel threads, KVM VCPU threads, and tasks performing
longer-running operations in kernel space, this can mean skipping the
FPU state restore entirely for several context switches.

This also allows the kernel to preserve the FPU state of a userspace task
that gets interrupted by a kernel thread in kernel mode.

If the old task left TIF_LOAD_FPU set, and the new task does not want
an FPU state restore (or does not have an FPU state), clear the flag
to prevent attempted restoring of a non-existent FPU state.

TODO: remove the superfluous irq disabling from entry_{32,64}.S; Andy has
some conflicting changes in that part of the code, so that is deferred for now.

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 33 +++++++++++++++++++++------------
 arch/x86/kernel/process_32.c        |  2 --
 arch/x86/kernel/process_64.c        |  2 --
 arch/x86/kernel/signal.c            |  9 +++++++++
 4 files changed, 30 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 27556f4..539b050 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -382,6 +382,7 @@ static inline void drop_init_fpu(struct task_struct *tsk)
 		else
 			fxrstor_checking(&init_xstate_buf->i387);
 	}
+	clear_thread_flag(TIF_LOAD_FPU);
 }
 
 /*
@@ -435,24 +436,32 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 				prefetch(new->thread.fpu.state);
 				set_thread_flag(TIF_LOAD_FPU);
 			}
-		}
-		/* else: CR0.TS is still set from a previous FPU switch */
+		} else
+			/*
+			 * The new task does not want an FPU state restore,
+			 * and may not even have an FPU state. However, the
+			 * old task may have left TIF_LOAD_FPU set.
+			 * Clear it to avoid trouble.
+			 *
+			 * CR0.TS is still set from a previous FPU switch
+			 */
+			clear_thread_flag(TIF_LOAD_FPU);
 	}
 }
 
 /*
- * By the time this gets called, we've already cleared CR0.TS and
- * given the process the FPU if we are going to preload the FPU
- * state - all we need to do is to conditionally restore the register
- * state itself.
+ * This is called if and when the task returns to userspace.
+ * Clear CR0.TS if necessary, so the task can access the FPU register
+ * state this function restores.
  */
-static inline void switch_fpu_finish(struct task_struct *new)
+static inline void switch_fpu_finish(void)
 {
-	if (test_and_clear_thread_flag(TIF_LOAD_FPU)) {
-		__thread_fpu_begin(new);
-		if (unlikely(restore_fpu_checking(new)))
-			drop_init_fpu(new);
-	}
+	struct task_struct *tsk = current;
+
+	__thread_fpu_begin(tsk);
+
+	if (unlikely(restore_fpu_checking(tsk)))
+		drop_init_fpu(tsk);
 }
 
 /*
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index c4b00e6..4da02ae 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -319,8 +319,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	if (prev->gs | next->gs)
 		lazy_load_gs(next->gs);
 
-	switch_fpu_finish(next_p);
-
 	this_cpu_write(current_task, next_p);
 
 	return prev_p;
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index ee3824f..20e206e 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -350,8 +350,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 		wrmsrl(MSR_KERNEL_GS_BASE, next->gs);
 	prev->gsindex = gsindex;
 
-	switch_fpu_finish(next_p);
-
 	/*
 	 * Switch the PDA and FPU contexts.
 	 */
diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
index ed37a76..46e3008 100644
--- a/arch/x86/kernel/signal.c
+++ b/arch/x86/kernel/signal.c
@@ -761,6 +761,15 @@ do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags)
 		fire_user_return_notifiers();
 
 	user_enter();
+
+	/*
+	 * Test thread flag, as the signal code may have cleared TIF_LOAD_FPU.
+	 * We cannot reschedule after loading the FPU state back into the CPU.
+	 * IRQs will be re-enabled on the switch to userspace.
+	 */
+	local_irq_disable();
+	if (test_thread_flag(TIF_LOAD_FPU))
+		switch_fpu_finish();
 }
 
 void signal_fault(struct pt_regs *regs, void __user *frame, char *where)
-- 
1.9.3



* [RFC PATCH 05/11] x86,fpu: ensure FPU state is reloaded from memory if task is traced
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
                   ` (3 preceding siblings ...)
  2015-01-11 21:46 ` [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace riel
@ 2015-01-11 21:46 ` riel
  2015-01-13 16:19   ` Oleg Nesterov
  2015-01-11 21:46 ` [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too riel
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

If the old task is in a state where its FPU state could be changed by
a debugger, ensure the FPU state is always restored from memory on the
next context switch.

Currently the system only skips FPU reloads when !use_eager_fpu()
and the task's FPU state is still loaded on the CPU.
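
The kind of modification this guards against, seen from the tracer
side (an illustrative userspace sketch; "pid" is a hypothetical tracee
that is already ptrace-stopped):

	#include <sys/ptrace.h>
	#include <sys/user.h>

	struct user_fpregs_struct fpregs;

	/* Read the tracee's in-memory FPU state... */
	ptrace(PTRACE_GETFPREGS, pid, NULL, &fpregs);
	/* ...modify it, e.g. mask all x87 exceptions... */
	fpregs.cwd |= 0x3f;
	/* ...and write it back into the tracee's task_struct. */
	ptrace(PTRACE_SETFPREGS, pid, NULL, &fpregs);

After PTRACE_SETFPREGS, any copy still sitting in the CPU registers is
stale, so the next context switch to the tracee must reload from memory.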

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 539b050..4db8781 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -412,8 +412,14 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 	bool preload = tsk_used_math(new) && (use_eager_fpu() ||
 					     new->thread.fpu_counter > 5);
 	if (__thread_has_fpu(old)) {
-		if (!__save_init_fpu(old))
+		/*
+		 * Make sure the FPU state is restored from memory next time,
+		 * if the task has an FPU exception pending, or the task's in
+		 * memory FPU state could be changed by a debugger.
+		 */
+		if (!__save_init_fpu(old) || task_is_stopped_or_traced(old))
 			cpu = ~0;
+
 		old->thread.fpu.last_cpu = cpu;
 		old->thread.fpu.has_fpu = 0;	/* But leave fpu_owner_task! */
 
-- 
1.9.3



* [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
                   ` (4 preceding siblings ...)
  2015-01-11 21:46 ` [RFC PATCH 05/11] x86,fpu: ensure FPU state is reloaded from memory if task is traced riel
@ 2015-01-11 21:46 ` riel
  2015-01-13 17:11   ` Andy Lutomirski
  2015-01-14 18:36   ` Oleg Nesterov
  2015-01-11 21:46 ` [RFC PATCH 07/11] x86,fpu: store current fpu pointer, instead of fpu_owner_task riel
                   ` (5 subsequent siblings)
  11 siblings, 2 replies; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

If the next task still has its FPU state present in the FPU registers,
there is no need to restore it from memory.

This is no big deal on bare metal, where XSAVEOPT / XRSTOR are heavily
optimized, but those optimizations do not carry across VMENTER / VMEXIT.

Skipping the call to fpu_restore_checking when the FPU state is already
loaded in the CPU could save a little bit of overhead on bare metal too,
so this is not just a KVM optimization.
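
For reference, the check that decides whether the in-register copy can
be reused is fpu_lazy_restore(), unchanged at this point in the series:

	static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
	{
		return new == this_cpu_read_stable(fpu_owner_task) &&
			cpu == new->thread.fpu.last_cpu;
	}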

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 4db8781..a5a40c7 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -435,13 +435,9 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 		old->thread.fpu.last_cpu = ~0;
 		if (preload) {
 			new->thread.fpu_counter++;
-			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
-				/* XXX: is this safe against ptrace??? */
-				__thread_fpu_begin(new);
-			else {
+			set_thread_flag(TIF_LOAD_FPU);
+			if (!fpu_lazy_restore(new, cpu))
 				prefetch(new->thread.fpu.state);
-				set_thread_flag(TIF_LOAD_FPU);
-			}
 		} else
 			/*
 			 * The new task does not want an FPU state restore,
@@ -466,6 +462,10 @@ static inline void switch_fpu_finish(void)
 
 	__thread_fpu_begin(tsk);
 
+	/* The FPU registers already have this task's FPU state. */
+	if (fpu_lazy_restore(tsk, raw_smp_processor_id()))
+		return;
+
 	if (unlikely(restore_fpu_checking(tsk)))
 		drop_init_fpu(tsk);
 }
-- 
1.9.3



* [RFC PATCH 07/11] x86,fpu: store current fpu pointer, instead of fpu_owner_task
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
                   ` (5 preceding siblings ...)
  2015-01-11 21:46 ` [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too riel
@ 2015-01-11 21:46 ` riel
  2015-01-11 21:46 ` [RFC PATCH 08/11] x86,fpu: restore user FPU state lazily after __kernel_fpu_end riel
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

This change has no impact on normal tasks, but it allows tasks
with multiple FPU states (like a KVM vcpu thread) to check
whether their other FPU state is still loaded.

Exported so KVM can use it.

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 15 ++++++++-------
 arch/x86/kernel/cpu/common.c        |  5 +++--
 arch/x86/kernel/i387.c              |  2 +-
 3 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index a5a40c7..8546c0a 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -42,7 +42,7 @@ extern unsigned int mxcsr_feature_mask;
 extern void fpu_init(void);
 extern void eager_fpu_init(void);
 
-DECLARE_PER_CPU(struct task_struct *, fpu_owner_task);
+DECLARE_PER_CPU(struct fpu *, fpu_owner);
 
 extern void convert_from_fxsr(struct user_i387_ia32_struct *env,
 			      struct task_struct *tsk);
@@ -318,14 +318,14 @@ static inline int __thread_has_fpu(struct task_struct *tsk)
 static inline void __thread_clear_has_fpu(struct task_struct *tsk)
 {
 	tsk->thread.fpu.has_fpu = 0;
-	this_cpu_write(fpu_owner_task, NULL);
+	this_cpu_write(fpu_owner, NULL);
 }
 
 /* Must be paired with a 'clts' before! */
 static inline void __thread_set_has_fpu(struct task_struct *tsk)
 {
 	tsk->thread.fpu.has_fpu = 1;
-	this_cpu_write(fpu_owner_task, tsk);
+	this_cpu_write(fpu_owner, &tsk->thread.fpu);
 }
 
 /*
@@ -394,13 +394,14 @@ static inline void drop_init_fpu(struct task_struct *tsk)
  */
 static inline void __cpu_disable_lazy_restore(unsigned int cpu)
 {
-	per_cpu(fpu_owner_task, cpu) = NULL;
+	per_cpu(fpu_owner, cpu) = NULL;
 }
 
 static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
 {
-	return new == this_cpu_read_stable(fpu_owner_task) &&
-		cpu == new->thread.fpu.last_cpu;
+	struct fpu *new_fpu = &new->thread.fpu;
+	return new_fpu == this_cpu_read_stable(fpu_owner) &&
+		cpu == new_fpu->last_cpu;
 }
 
 static inline void switch_fpu_prepare(struct task_struct *old, struct task_struct *new, int cpu)
@@ -421,7 +422,7 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 			cpu = ~0;
 
 		old->thread.fpu.last_cpu = cpu;
-		old->thread.fpu.has_fpu = 0;	/* But leave fpu_owner_task! */
+		old->thread.fpu.has_fpu = 0;	/* But leave fpu_owner! */
 
 		/* Don't change CR0.TS if we just switch! */
 		if (preload) {
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 36837bb..ad4d1a9 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1143,8 +1143,9 @@ DEFINE_PER_CPU(unsigned int, irq_count) __visible = -1;
 DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
 EXPORT_PER_CPU_SYMBOL(__preempt_count);
 
-/* Task that currently has its FPU state loaded on the CPU. */
-DEFINE_PER_CPU(struct task_struct *, fpu_owner_task);
+/* FPU state that is currently loaded on the CPU. */
+DEFINE_PER_CPU(struct fpu *, fpu_owner);
+EXPORT_PER_CPU_SYMBOL_GPL(fpu_owner);
 
 /*
  * Special IST stacks which the CPU switches to when it calls
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index a9a4229..c98f88d 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -78,7 +78,7 @@ void __kernel_fpu_begin(void)
 		__save_init_fpu(me);
 		/* We do 'stts()' in __kernel_fpu_end() */
 	} else if (!use_eager_fpu()) {
-		this_cpu_write(fpu_owner_task, NULL);
+		this_cpu_write(fpu_owner, NULL);
 		clts();
 	}
 }
-- 
1.9.3



* [RFC PATCH 08/11] x86,fpu: restore user FPU state lazily after __kernel_fpu_end
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
                   ` (6 preceding siblings ...)
  2015-01-11 21:46 ` [RFC PATCH 07/11] x86,fpu: store current fpu pointer, instead of fpu_owner_task riel
@ 2015-01-11 21:46 ` riel
  2015-01-14 18:43   ` Oleg Nesterov
  2015-01-11 21:46 ` [RFC PATCH 09/11] x86,fpu,kvm: keep vcpu FPU active as long as it is resident riel
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

Tasks may have multiple invocations of kernel_fpu_begin and kernel_fpu_end
in sequence without ever hitting userspace in between.

Delaying the restore of the user FPU state until the task returns to
userspace means the kernel only has to save the user FPU state on the
first invocation of kernel_fpu_begin, making the other invocations
cheaper.

This is especially true for KVM vcpu threads, which can handle lots
of guest events and exceptions entirely in kernel mode.
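
For example, a typical in-kernel SIMD user follows this pattern
(illustrative only):

	kernel_fpu_begin();	/* save the user FPU state, grab the FPU */
	/* ... use SSE/AVX registers, e.g. for checksumming ... */
	kernel_fpu_end();	/* previously: restore the user state here */

With this patch, a task running several such sections back to back
saves the user state once, at the first kernel_fpu_begin, and restores
it once, on the way back to userspace, instead of around every pair.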

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/kernel/i387.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index c98f88d..cfbf325 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -89,13 +89,11 @@ void __kernel_fpu_end(void)
 	if (use_eager_fpu()) {
 		/*
 		 * For eager fpu, most the time, tsk_used_math() is true.
-		 * Restore the user math as we are done with the kernel usage.
-		 * At few instances during thread exit, signal handling etc,
-		 * tsk_used_math() is false. Those few places will take proper
-		 * actions, so we don't need to restore the math here.
+		 * Make sure the user math state is restored on return to
+		 * userspace.
 		 */
 		if (likely(tsk_used_math(current)))
-			math_state_restore();
+			set_thread_flag(TIF_LOAD_FPU);
 	} else {
 		stts();
 	}
-- 
1.9.3



* [RFC PATCH 09/11] x86,fpu,kvm: keep vcpu FPU active as long as it is resident
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
                   ` (7 preceding siblings ...)
  2015-01-11 21:46 ` [RFC PATCH 08/11] x86,fpu: restore user FPU state lazily after __kernel_fpu_end riel
@ 2015-01-11 21:46 ` riel
  2015-01-11 21:46 ` [RFC PATCH 10/11] x86,fpu: fix fpu_copy to deal with not-loaded fpu riel
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

Currently KVM always deactivates the FPU on VCPU unload, only to
reactivate it next time the guest uses it. This can make using the
FPU inside a KVM guest fairly expensive.

On the other hand, restoring the FPU state for a KVM guest is also
significantly more involved (and expensive) than restoring the FPU
state on bare metal, so it is unlikely we will want to do this all
the time for every single guest.

This patch strikes a fairly conservative balance, where the FPU
state is re-used as long as nothing else has used the FPU registers
on the same CPU. This means the guest FPU will stay active across
KVM event handling in the host, and even across periods where the
guest VCPU went idle or got interrupted by a kernel thread, as long
as no userspace task ran on the CPU.

Smarter, and more aggressive policies may make sense.
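
In effect, the policy reduces to one pointer comparison before VMENTER
(a sketch of the logic in the vcpu_enter_guest hunk below):

	if (this_cpu_read(fpu_owner) == &vcpu->arch.guest_fpu) {
		/* Our state is still resident in the registers. */
		kvm_load_guest_fpu(vcpu);
	} else {
		/* Someone else used the FPU; reactivate on demand. */
		vcpu->fpu_active = 0;
		kvm_x86_ops->fpu_deactivate(vcpu);
	}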

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/kvm/x86.c       | 26 +++++++++++++++++++-------
 include/linux/kvm_host.h |  2 +-
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0033df3..a5b01e6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6128,10 +6128,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			r = 0;
 			goto out;
 		}
-		if (kvm_check_request(KVM_REQ_DEACTIVATE_FPU, vcpu)) {
-			vcpu->fpu_active = 0;
-			kvm_x86_ops->fpu_deactivate(vcpu);
-		}
 		if (kvm_check_request(KVM_REQ_APF_HALT, vcpu)) {
 			/* Page is swapped out. Do synthetic halt */
 			vcpu->arch.apf.halted = true;
@@ -6188,8 +6184,23 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	preempt_disable();
 
 	kvm_x86_ops->prepare_guest_switch(vcpu);
-	if (vcpu->fpu_active)
-		kvm_load_guest_fpu(vcpu);
+
+	if (vcpu->fpu_active) {
+		if (this_cpu_read(fpu_owner) == &vcpu->arch.guest_fpu) {
+			/*
+			 * The FPU is all ours. Still restore the FPU state
+			 * from memory, just in case the emulator touched it.
+			 */
+			kvm_load_guest_fpu(vcpu);
+		} else {
+			/*
+			 * Something else is using the FPU. The guest will
+			 * re-activate it on demand, if needed.
+			 */
+			vcpu->fpu_active = 0;
+			kvm_x86_ops->fpu_deactivate(vcpu);
+		}
+	}
 	kvm_load_guest_xcr0(vcpu);
 
 	vcpu->mode = IN_GUEST_MODE;
@@ -6903,6 +6914,7 @@ void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 	kvm_put_guest_xcr0(vcpu);
 	vcpu->guest_fpu_loaded = 1;
 	__kernel_fpu_begin();
+	this_cpu_write(fpu_owner, &vcpu->arch.guest_fpu);
 	fpu_restore_checking(&vcpu->arch.guest_fpu);
 	trace_kvm_fpu(1);
 }
@@ -6917,8 +6929,8 @@ void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 	vcpu->guest_fpu_loaded = 0;
 	fpu_save_init(&vcpu->arch.guest_fpu);
 	__kernel_fpu_end();
+	/* but keep fpu_owner pointed at &vcpu->arch.guest_fpu */
 	++vcpu->stat.fpu_reload;
-	kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
 	trace_kvm_fpu(0);
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a6059bd..503259f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -121,7 +121,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_MMU_SYNC           7
 #define KVM_REQ_CLOCK_UPDATE       8
 #define KVM_REQ_KICK               9
-#define KVM_REQ_DEACTIVATE_FPU    10
+
 #define KVM_REQ_EVENT             11
 #define KVM_REQ_APF_HALT          12
 #define KVM_REQ_STEAL_UPDATE      13
-- 
1.9.3



* [RFC PATCH 10/11] x86,fpu: fix fpu_copy to deal with not-loaded fpu
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
                   ` (8 preceding siblings ...)
  2015-01-11 21:46 ` [RFC PATCH 09/11] x86,fpu,kvm: keep vcpu FPU active as long as it is resident riel
@ 2015-01-11 21:46 ` riel
  2015-01-11 21:46 ` [RFC PATCH 11/11] (BROKEN) x86,fpu: broken signal handler stack setup riel
  2015-01-15 19:19 ` [PATCH 0/3] x86, fpu: kernel_fpu_begin/end initial cleanups/fix Oleg Nesterov
  11 siblings, 0 replies; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

It is possible to hit fpu_copy in eager fpu mode, but without
the current task's FPU context actually loaded into the CPU.

In that case, we should copy the FPU context from memory, not
save it from registers.
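
For context, fpu_copy is reached from the fork path, roughly (a sketch
of the existing arch_dup_task_struct, abbreviated):

	int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
	{
		int ret;

		*dst = *src;
		if (fpu_allocated(&src->thread.fpu)) {
			dst->thread.fpu.state = NULL;
			ret = fpu_alloc(&dst->thread.fpu);
			if (ret)
				return ret;
			fpu_copy(dst, src);	/* src may not own the FPU here */
		}
		return 0;
	}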

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 8546c0a..095dacc 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -594,7 +594,8 @@ static inline void fpu_free(struct fpu *fpu)
 
 static inline void fpu_copy(struct task_struct *dst, struct task_struct *src)
 {
-	if (use_eager_fpu()) {
+	preempt_disable();
+	if (use_eager_fpu() && __thread_has_fpu(src)) {
 		memset(&dst->thread.fpu.state->xsave, 0, xstate_size);
 		__save_fpu(dst);
 	} else {
@@ -604,6 +605,7 @@ static inline void fpu_copy(struct task_struct *dst, struct task_struct *src)
 		unlazy_fpu(src);
 		memcpy(dfpu->state, sfpu->state, xstate_size);
 	}
+	preempt_enable();
 }
 
 static inline unsigned long
-- 
1.9.3



* [RFC PATCH 11/11] (BROKEN) x86,fpu: broken signal handler stack setup
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
                   ` (9 preceding siblings ...)
  2015-01-11 21:46 ` [RFC PATCH 10/11] x86,fpu: fix fpu_copy to deal with not-loaded fpu riel
@ 2015-01-11 21:46 ` riel
  2015-01-15 19:19 ` [PATCH 0/3] x86, fpu: kernel_fpu_begin/end initial cleanups/fix Oleg Nesterov
  11 siblings, 0 replies; 76+ messages in thread
From: riel @ 2015-01-11 21:46 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

From: Rik van Riel <riel@redhat.com>

The previous patches result in situations where the FPU state
for a task is not present in the FPU registers, when using eager
fpu mode. The signal frame setup and restore code needs to be
adjusted to deal with that situation.

Without this patch, the signal handler stack setup is broken.

With it, it is still broken.

However, I have been staring at it for too long to figure out
why. This patch needs other eyeballs.

In other words: HELP!

The LTP signal handling tests all seem to pass, but SPECjbb2005
crashes in minutes in the signal handler code:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f4df1f3e753, pid=1711, tid=139972332160768
#
# JRE version: OpenJDK Runtime Environment (8.0) (build 1.8.0-internal-0)
# Java VM: OpenJDK 64-Bit Server VM (25.0-b57-internal mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V  [libjvm.so+0x854753]  JVM_handle_linux_signal+0x5e193

Not-signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 31 +++++++++++++++++-----------
 arch/x86/kernel/xsave.c             | 41 ++++++++++++++++++-------------------
 2 files changed, 39 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 095dacc..ac45074 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -358,6 +358,8 @@ static inline void __drop_fpu(struct task_struct *tsk)
 			     _ASM_EXTABLE(1b, 2b));
 		__thread_fpu_end(tsk);
 	}
+	/* disable lazy fpu restore */
+	tsk->thread.fpu.last_cpu = ~0;
 }
 
 static inline void drop_fpu(struct task_struct *tsk)
@@ -377,12 +379,12 @@ static inline void drop_init_fpu(struct task_struct *tsk)
 	if (!use_eager_fpu())
 		drop_fpu(tsk);
 	else {
-		if (use_xsave())
-			xrstor_state(init_xstate_buf, -1);
-		else
-			fxrstor_checking(&init_xstate_buf->i387);
+		preempt_disable();
+		__thread_fpu_end(tsk);
+		fpu_finit(&tsk->thread.fpu);
+		set_thread_flag(TIF_LOAD_FPU);
+		preempt_enable();
 	}
-	clear_thread_flag(TIF_LOAD_FPU);
 }
 
 /*
@@ -467,7 +469,7 @@ static inline void switch_fpu_finish(void)
 	if (fpu_lazy_restore(tsk, raw_smp_processor_id()))
 		return;
 
-	if (unlikely(restore_fpu_checking(tsk)))
+	if (restore_fpu_checking(tsk))
 		drop_init_fpu(tsk);
 }
 
@@ -525,16 +527,21 @@ static inline void __save_fpu(struct task_struct *tsk)
  */
 static inline void save_init_fpu(struct task_struct *tsk)
 {
-	WARN_ON_ONCE(!__thread_has_fpu(tsk));
+	preempt_disable();
 
-	if (use_eager_fpu()) {
+	if (unlikely(!__thread_has_fpu(tsk)))
+		goto out;
+
+	if (use_eager_fpu())
 		__save_fpu(tsk);
-		return;
-	}
+	else
+		__save_init_fpu(tsk);
 
-	preempt_disable();
-	__save_init_fpu(tsk);
 	__thread_fpu_end(tsk);
+
+ out:
+	/* disable lazy fpu restore */
+	tsk->thread.fpu.last_cpu = ~0;
 	preempt_enable();
 }
 
diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c
index 4c540c4..5434491 100644
--- a/arch/x86/kernel/xsave.c
+++ b/arch/x86/kernel/xsave.c
@@ -251,18 +251,13 @@ int save_xstate_sig(void __user *buf, void __user *buf_fx, int size)
 			sizeof(struct user_i387_ia32_struct), NULL,
 			(struct _fpstate_ia32 __user *) buf) ? -1 : 1;
 
-	if (user_has_fpu()) {
-		/* Save the live register state to the user directly. */
-		if (save_user_xstate(buf_fx))
-			return -1;
-		/* Update the thread's fxstate to save the fsave header. */
-		if (ia32_fxstate)
-			fpu_fxsave(&tsk->thread.fpu);
-	} else {
-		sanitize_i387_state(tsk);
-		if (__copy_to_user(buf_fx, xsave, xstate_size))
-			return -1;
-	}
+	/* Atomically save the FPU state from registers. */
+	save_init_fpu(tsk);
+
+	/* Then copy it to userspace. */
+	sanitize_i387_state(tsk);
+	if (__copy_to_user(buf_fx, xsave, xstate_size))
+		return -1;
 
 	/* Save the fsave header for the 32-bit frames. */
 	if ((ia32_fxstate || !use_fxsr()) && save_fsave_header(tsk, buf))
@@ -400,23 +395,27 @@ int __restore_xstate_sig(void __user *buf, void __user *buf_fx, int size)
 			set_used_math();
 		}
 
-		if (use_eager_fpu()) {
-			preempt_disable();
-			math_state_restore();
-			preempt_enable();
-		}
+		if (use_eager_fpu())
+			set_thread_flag(TIF_LOAD_FPU);
 
 		return err;
 	} else {
+		struct xsave_struct *xsave = &tsk->thread.fpu.state->xsave;
 		/*
-		 * For 64-bit frames and 32-bit fsave frames, restore the user
-		 * state to the registers directly (with exceptions handled).
+		 * Copy the xstate from user space into the kernel buffer.
+		 * Clear task used math during the operation, to ensure the
+		 * context switching code does not overwrite the xstate buffer
+		 * with whatever is in the FPU registers.
 		 */
-		user_fpu_begin();
-		if (restore_user_xstate(buf_fx, xstate_bv, fx_only)) {
+		drop_fpu(tsk);
+		if (__copy_from_user(xsave, buf_fx, state_size)) {
 			drop_init_fpu(tsk);
 			return -1;
 		}
+		set_used_math();
+
+		if (use_eager_fpu())
+			set_thread_flag(TIF_LOAD_FPU);
 	}
 
 	return 0;
-- 
1.9.3



* Re: [RFC PATCH 01/11] x86,fpu: document the data structures a little
  2015-01-11 21:46 ` [RFC PATCH 01/11] x86,fpu: document the data structures a little riel
@ 2015-01-12 21:18   ` Borislav Petkov
  2015-01-12 21:38     ` Rik van Riel
  2015-01-12 21:52   ` Dave Hansen
  1 sibling, 1 reply; 76+ messages in thread
From: Borislav Petkov @ 2015-01-12 21:18 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, oleg, pbonzini, tglx, luto

On Sun, Jan 11, 2015 at 04:46:23PM -0500, riel@redhat.com wrote:
> From: Rik van Riel <riel@redhat.com>
> 
> Add some documentation to data structures used for FPU context
> switching.
> 
> Signed-off-by: Rik van Riel <riel@redhat.com>
> ---
>  arch/x86/include/asm/processor.h | 9 +++++++--
>  arch/x86/kernel/cpu/common.c     | 1 +
>  2 files changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index a092a0c..17bd8a0 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -400,6 +400,11 @@ struct xsave_struct {
>  	/* new processor state extensions will go here */
>  } __attribute__ ((packed, aligned (64)));
>  
> +/*
> + * The FPU state used depends on the capabilities of the hardware; the
> + * registers used for vector instructions on newer hardware are included
> + * in the FPU state.
> + */
>  union thread_xstate {
>  	struct i387_fsave_struct	fsave;
>  	struct i387_fxsave_struct	fxsave;
> @@ -408,8 +413,8 @@ union thread_xstate {
>  };
>  
>  struct fpu {
> -	unsigned int last_cpu;
> -	unsigned int has_fpu;
> +	unsigned int last_cpu;		/* FPU state last loaded on this CPU */

Isn't that the last CPU which had the FPU?

> +	unsigned int has_fpu;		/* FPU state in current use on CPU */

I understand ->has_fpu as this thread has the FPU. See comment over
user_has_fpu().

-- 
Regards/Gruss,
    Boris.

Sent from a fat crate under my desk. Formatting is fine.
--


* Re: [RFC PATCH 01/11] x86,fpu: document the data structures a little
  2015-01-12 21:18   ` Borislav Petkov
@ 2015-01-12 21:38     ` Rik van Riel
  0 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-12 21:38 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, mingo, hpa, matt.fleming, oleg, pbonzini, tglx, luto

On 01/12/2015 04:18 PM, Borislav Petkov wrote:
> On Sun, Jan 11, 2015 at 04:46:23PM -0500, riel@redhat.com wrote:

>> +++ b/arch/x86/include/asm/processor.h
>> @@ -400,6 +400,11 @@ struct xsave_struct {
>>  	/* new processor state extensions will go here */
>>  } __attribute__ ((packed, aligned (64)));
>> 
>> +/*
>> + * The FPU state used depends on the capabilities of the hardware; the
>> + * registers used for vector instructions on newer hardware are included
>> + * in the FPU state.
>> + */
>>  union thread_xstate {
>>  	struct i387_fsave_struct	fsave;
>>  	struct i387_fxsave_struct	fxsave;
>> @@ -408,8 +413,8 @@ union thread_xstate {
>>  };
>> 
>>  struct fpu {
>> -	unsigned int last_cpu;
>> -	unsigned int has_fpu;
>> +	unsigned int last_cpu;		/* FPU state last loaded on this CPU */
> 
> Isn't that the last CPU which had the FPU?

Indeed it is. Good catch. I will fix it.

>> +	unsigned int has_fpu;		/* FPU state in current use on CPU */
> 
> I understand ->has_fpu as this thread has the FPU. See comment
> over user_has_fpu().

It does that currently, but the patch to __kernel_fpu_begin() changes it
so it can point to a kernel FPU state as well.

-- 
All rights reversed


* Re: [RFC PATCH 01/11] x86,fpu: document the data structures a little
  2015-01-11 21:46 ` [RFC PATCH 01/11] x86,fpu: document the data structures a little riel
  2015-01-12 21:18   ` Borislav Petkov
@ 2015-01-12 21:52   ` Dave Hansen
  2015-01-13 15:59     ` Rik van Riel
  1 sibling, 1 reply; 76+ messages in thread
From: Dave Hansen @ 2015-01-12 21:52 UTC (permalink / raw)
  To: riel, linux-kernel
  Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

On 01/11/2015 01:46 PM, riel@redhat.com wrote:
> +/*
> + * The FPU state used depends on the capabilities of the hardware; the
> + * registers used for vector instructions on newer hardware are included
> + * in the FPU state.
> + */
>  union thread_xstate {

Tiny nit, if you happen to revise this set at some point...

While you probably don't need/want to have an exhaustive list of
everything that gets saved in there, we should probably still mention
that it's a lot more than just vector instructions.

I'd say:

	The FPU state used depends on the capabilities of the
	hardware.  This state increasingly contains things
	not related to the FPU itself, like registers used for
	vector instructions.


* Re: [RFC PATCH 02/11] x86,fpu: replace fpu_switch_t with a thread flag
  2015-01-11 21:46 ` [RFC PATCH 02/11] x86,fpu: replace fpu_switch_t with a thread flag riel
@ 2015-01-13 15:24   ` Oleg Nesterov
  2015-01-13 16:35     ` Rik van Riel
  0 siblings, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 15:24 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

Rik,

I can't review this series, I forgot almost everything I learned about
this code. The only thing I can recall is that it needs cleanups and
fixes ;) Just a couple of random questions.

On 01/11, riel@redhat.com wrote:
>
> +static inline void switch_fpu_prepare(struct task_struct *old, struct task_struct *new, int cpu)
>  {
> -	fpu_switch_t fpu;
> -
>  	/*
>  	 * If the task has used the math, pre-load the FPU on xsave processors
>  	 * or if the past 5 consecutive context-switches used math.
>  	 */
> -	fpu.preload = tsk_used_math(new) && (use_eager_fpu() ||
> +	bool preload = tsk_used_math(new) && (use_eager_fpu() ||
>  					     new->thread.fpu_counter > 5);
>  	if (__thread_has_fpu(old)) {
>  		if (!__save_init_fpu(old))
> @@ -433,8 +417,9 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
>  		old->thread.fpu.has_fpu = 0;	/* But leave fpu_owner_task! */
>
>  		/* Don't change CR0.TS if we just switch! */
> -		if (fpu.preload) {
> +		if (preload) {
>  			new->thread.fpu_counter++;
> +			set_thread_flag(TIF_LOAD_FPU);
>  			__thread_set_has_fpu(new);
>  			prefetch(new->thread.fpu.state);
>  		} else if (!use_eager_fpu())
> @@ -442,16 +427,19 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
>  	} else {
>  		old->thread.fpu_counter = 0;
>  		old->thread.fpu.last_cpu = ~0;
> -		if (fpu.preload) {
> +		if (preload) {
>  			new->thread.fpu_counter++;
>  			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
> -				fpu.preload = 0;
> -			else
> +				/* XXX: is this safe against ptrace??? */

Could you explain your concerns?

> +				__thread_fpu_begin(new);

this looks strange/unnecessary; there is another unconditional
__thread_fpu_begin(new) below.

OK, the next patch moves it to switch_fpu_finish(), so perhaps this change
should go into 3/11.

And I am not sure I understand set_thread_flag(TIF_LOAD_FPU). This is called
before __switch_to() updates kernel_stack, so it seems that the old thread
gets this flag set, not new?

Even if this is correct, perhaps set_tsk_thread_flag(new) will look better?

The same for switch_fpu_finish(). I guess this should actually work after
this patch, because switch_fpu_finish() is called before
this_cpu_write(kernel_stack) too and thus both prepare/finish will use the
same thread_info, but this looks confusing at least.

> --- a/arch/x86/include/asm/thread_info.h
> +++ b/arch/x86/include/asm/thread_info.h
> @@ -91,6 +91,7 @@ struct thread_info {
>  #define TIF_SYSCALL_TRACEPOINT	28	/* syscall tracepoint instrumentation */
>  #define TIF_ADDR32		29	/* 32-bit address space on 64 bits */
>  #define TIF_X32			30	/* 32-bit native x86-64 binary */
> +#define TIF_LOAD_FPU		31	/* load FPU on return to userspace */

Well, the comment is wrong after this patch, but I see 4/11...

>  #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
>  #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
> @@ -115,6 +116,7 @@ struct thread_info {
>  #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
>  #define _TIF_ADDR32		(1 << TIF_ADDR32)
>  #define _TIF_X32		(1 << TIF_X32)
> +#define _TIF_LOAD_FPU		(1 << TIF_LOAD_FPU)
>
>  /* work to do in syscall_trace_enter() */
>  #define _TIF_WORK_SYSCALL_ENTRY	\
> @@ -141,7 +143,7 @@ struct thread_info {
>  /* Only used for 64 bit */
>  #define _TIF_DO_NOTIFY_MASK						\
>  	(_TIF_SIGPENDING | _TIF_MCE_NOTIFY | _TIF_NOTIFY_RESUME |	\
> -	 _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE)
> +	 _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE | _TIF_LOAD_FPU)

This too. I mean, this change has no effect until 4/11.

Oleg.



* Re: [RFC PATCH 03/11] x86,fpu: move __thread_fpu_begin to when the task has the fpu
  2015-01-11 21:46 ` [RFC PATCH 03/11] x86,fpu: move __thread_fpu_begin to when the task has the fpu riel
@ 2015-01-13 15:24   ` Oleg Nesterov
  2015-01-13 16:37     ` Rik van Riel
  0 siblings, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 15:24 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/11, riel@redhat.com wrote:
>
> --- a/arch/x86/include/asm/fpu-internal.h
> +++ b/arch/x86/include/asm/fpu-internal.h
> @@ -420,7 +420,6 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>  		if (preload) {
>  			new->thread.fpu_counter++;
>  			set_thread_flag(TIF_LOAD_FPU);
> -			__thread_set_has_fpu(new);
>  			prefetch(new->thread.fpu.state);
>  		} else if (!use_eager_fpu())
>  			stts();
> @@ -436,7 +435,6 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>  				prefetch(new->thread.fpu.state);
>  				set_thread_flag(TIF_LOAD_FPU);
>  			}
> -			__thread_fpu_begin(new);
>  		}
>  		/* else: CR0.TS is still set from a previous FPU switch */
>  	}
> @@ -451,6 +449,7 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>  static inline void switch_fpu_finish(struct task_struct *new)
>  {
>  	if (test_and_clear_thread_flag(TIF_LOAD_FPU)) {
> +		__thread_fpu_begin(new);
>  		if (unlikely(restore_fpu_checking(new)))
>  			drop_init_fpu(new);
>  	}

Then perhaps it makes sense to move fpu_lazy_restore() to switch_fpu_finish() too?

Either way, afaics we do not need use_eager_fpu() before fpu_lazy_restore(),
and this reminds me that every use_eager_fpu() check in switch_fpu_prepare()
looks confusing.

Oleg.



* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-11 21:46 ` [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace riel
@ 2015-01-13 15:53   ` Oleg Nesterov
  2015-01-13 17:07   ` Andy Lutomirski
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 15:53 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/11, riel@redhat.com wrote:
>
> @@ -382,6 +382,7 @@ static inline void drop_init_fpu(struct task_struct *tsk)
>  		else
>  			fxrstor_checking(&init_xstate_buf->i387);
>  	}
> +	clear_thread_flag(TIF_LOAD_FPU);

OK, in this case tsk should be current. Still I think clear_tsk_thread_flag()
will look more consistent.

> @@ -435,24 +436,32 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>  				prefetch(new->thread.fpu.state);

I am wondering if these prefetch()es in switch_fpu_prepare() make
sense after this patch.

> +		} else
> +			/*
> +			 * The new task does not want an FPU state restore,
> +			 * and may not even have an FPU state. However, the
> +			 * old task may have left TIF_LOAD_FPU set.
> +			 * Clear it to avoid trouble.
> +			 *
> +			 * CR0.TS is still set from a previous FPU switch
> +			 */
> +			clear_thread_flag(TIF_LOAD_FPU);

I got lost ;) Simply can't understand what this change tries to do.

And it looks "obviously wrong"... OK, suppose that a TIF_LOAD_FPU task
schedules before it returns to user mode (and calls switch_fpu_finish).
Why should we clear its flag if the new task doesn't want FPU ?

"CR0.TS is still set" is not true if use_eager_fpu()... OTOH, .TS can
be also set even if preload == T above, when we set TIF_LOAD_FPU.

> -static inline void switch_fpu_finish(struct task_struct *new)
> +static inline void switch_fpu_finish(void)
>  {
> -	if (test_and_clear_thread_flag(TIF_LOAD_FPU)) {
> -		__thread_fpu_begin(new);
> -		if (unlikely(restore_fpu_checking(new)))
> -			drop_init_fpu(new);
> -	}
> +	struct task_struct *tsk = current;
> +
> +	__thread_fpu_begin(tsk);
> +
> +	if (unlikely(restore_fpu_checking(tsk)))
> +		drop_init_fpu(tsk);
>  }

Again, I am totally confused. After this patch the usage of set_thread_flag()
in switch_fpu_prepare() becomes wrong (see my reply to 2/11), but it is quite
possible I misunderstood these patches.

Oleg.



* Re: [RFC PATCH 01/11] x86,fpu: document the data structures a little
  2015-01-12 21:52   ` Dave Hansen
@ 2015-01-13 15:59     ` Rik van Riel
  0 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 15:59 UTC (permalink / raw)
  To: Dave Hansen, linux-kernel
  Cc: mingo, hpa, matt.fleming, bp, oleg, pbonzini, tglx, luto

On 01/12/2015 04:52 PM, Dave Hansen wrote:
> On 01/11/2015 01:46 PM, riel@redhat.com wrote:
>> +/*
>> + * The FPU state used depends on the capabilities of the hardware; the
>> + * registers used for vector instructions on newer hardware are included
>> + * in the FPU state.
>> + */
>>  union thread_xstate {
> 
> Tiny nit, if you happen to revise this set at some point...
> 
> While you probably don't need/want to have an exhaustive list of 
> everything that gets saved in there, we should probably still
> mention that it's a lot more than just vector instructions.
> 
> I'd say:
> 
> The FPU state used depends on the capabilities of the hardware.
> This state increasingly contains things not related to the FPU
> itself, like registers used for vector instructions.
> 
I like it.  I replaced my comment with your improved one.

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 05/11] x86,fpu: ensure FPU state is reloaded from memory if task is traced
  2015-01-11 21:46 ` [RFC PATCH 05/11] x86,fpu: ensure FPU state is reloaded from memory if task is traced riel
@ 2015-01-13 16:19   ` Oleg Nesterov
  2015-01-13 16:33     ` Rik van Riel
  0 siblings, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 16:19 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/11, riel@redhat.com wrote:
>
> @@ -412,8 +412,14 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>  	bool preload = tsk_used_math(new) && (use_eager_fpu() ||
>  					     new->thread.fpu_counter > 5);
>  	if (__thread_has_fpu(old)) {
> -		if (!__save_init_fpu(old))
> +		/*
> +		 * Make sure the FPU state is restored from memory next time,
> +		 * if the task has an FPU exception pending, or the task's in
> +		 * memory FPU state could be changed by a debugger.
> +		 */
> +		if (!__save_init_fpu(old) || task_is_stopped_or_traced(old))
>  			cpu = ~0;

Well, if debugger wants to change FPU state, it should call init_fpu()
which resets .last_cpu ?

Plus "is_stopped" is not needed, ptracer no longer can do STOPPED -> TRACED
transition.

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 05/11] x86,fpu: ensure FPU state is reloaded from memory if task is traced
  2015-01-13 16:19   ` Oleg Nesterov
@ 2015-01-13 16:33     ` Rik van Riel
  2015-01-13 16:50       ` Oleg Nesterov
  0 siblings, 1 reply; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 16:33 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13/2015 11:19 AM, Oleg Nesterov wrote:
> On 01/11, riel@redhat.com wrote:
>> 
>> @@ -412,8 +412,14 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>>  	bool preload = tsk_used_math(new) && (use_eager_fpu() ||
>>  					     new->thread.fpu_counter > 5);
>>  	if (__thread_has_fpu(old)) {
>> -		if (!__save_init_fpu(old))
>> +		/*
>> +		 * Make sure the FPU state is restored from memory next time,
>> +		 * if the task has an FPU exception pending, or the task's in
>> +		 * memory FPU state could be changed by a debugger.
>> +		 */
>> +		if (!__save_init_fpu(old) || task_is_stopped_or_traced(old))
>>  			cpu = ~0;
> 
> Well, if debugger wants to change FPU state, it should call
> init_fpu() which resets .last_cpu ?

Does the ptrace (and utrace, and ... ) code actually do that?

> Plus "is_stopped" is not needed, ptracer no longer can do STOPPED
> -> TRACED transition.

I will remove _is_stopped.

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 02/11] x86,fpu: replace fpu_switch_t with a thread flag
  2015-01-13 15:24   ` Oleg Nesterov
@ 2015-01-13 16:35     ` Rik van Riel
  2015-01-13 16:55       ` Oleg Nesterov
  0 siblings, 1 reply; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 16:35 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13/2015 10:24 AM, Oleg Nesterov wrote:
> Rik,
> 
> I can't review this series, I forgot almost everything I learned
> about this code. The only thing I can recall is that it needs
> cleanups and fixes ;) Just a couple of random questions.
> 
> On 01/11, riel@redhat.com wrote:
>> 
>> +static inline void switch_fpu_prepare(struct task_struct *old, struct task_struct *new, int cpu)
>>  {
>> -	fpu_switch_t fpu;
>> -
>>  	/*
>>  	 * If the task has used the math, pre-load the FPU on xsave processors
>>  	 * or if the past 5 consecutive context-switches used math.
>>  	 */
>> -	fpu.preload = tsk_used_math(new) && (use_eager_fpu() ||
>> +	bool preload = tsk_used_math(new) && (use_eager_fpu() ||
>>  					     new->thread.fpu_counter > 5);
>>  	if (__thread_has_fpu(old)) {
>>  		if (!__save_init_fpu(old))
>> @@ -433,8 +417,9 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
>>  		old->thread.fpu.has_fpu = 0;	/* But leave fpu_owner_task! */
>>
>>  		/* Don't change CR0.TS if we just switch! */
>> -		if (fpu.preload) {
>> +		if (preload) {
>>  			new->thread.fpu_counter++;
>> +			set_thread_flag(TIF_LOAD_FPU);
>>  			__thread_set_has_fpu(new);
>>  			prefetch(new->thread.fpu.state);
>>  		} else if (!use_eager_fpu())
>> @@ -442,16 +427,19 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
>>  	} else {
>>  		old->thread.fpu_counter = 0;
>>  		old->thread.fpu.last_cpu = ~0;
>> -		if (fpu.preload) {
>> +		if (preload) {
>>  			new->thread.fpu_counter++;
>>  			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
>> -				fpu.preload = 0;
>> -			else
>> +				/* XXX: is this safe against ptrace??? */
> 
> Could you explain your concerns?

Ptrace could modify the in-memory copy of a task's FPU context,
while fpu_lazy_restore() could decide that the task's FPU context
is still loaded in the registers (nothing else on the CPU has used
the FPU since it last ran), and does not need to be re-loaded.

I address this later in the series.
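
For reference, the check in question is tiny (from fpu-internal.h):

	static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
	{
		return new == this_cpu_read_stable(fpu_owner_task) &&
			cpu == new->thread.fpu.last_cpu;
	}

Nothing in it can notice that the in-memory state was changed behind
our back, which is what worried me.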

>> +				__thread_fpu_begin(new);
> 
> this looks strange/unnecessary, there is another unconditonal 
> __thread_fpu_begin(new) below.
> 
> OK, the next patch moves it to switch_fpu_finish(), so perhaps this
> change should go into 3/11.

I would like to keep each patch small. I waffled between merging
patches 2 & 3 into one larger patch, and keeping patch 2 somewhat
awkward but having both easier to review.

> And I am not sure I understand set_thread_flag(TIF_LOAD_FPU). This
> is called before __switch_to() updates kernel_stack, so it seems
> that the old thread gets this flag set, not new?
> 
> Even if this is correct, perhaps set_tsk_thread_flag(new) will look
> better?

It is correct in this patch, but this may just be the bug that has been
plaguing me later in the series!  Thanks for spotting this one, Oleg!

>> --- a/arch/x86/include/asm/thread_info.h
>> +++ b/arch/x86/include/asm/thread_info.h
>> @@ -91,6 +91,7 @@ struct thread_info {
>>  #define TIF_SYSCALL_TRACEPOINT	28	/* syscall tracepoint instrumentation */
>>  #define TIF_ADDR32		29	/* 32-bit address space on 64 bits */
>>  #define TIF_X32			30	/* 32-bit native x86-64 binary */
>> +#define TIF_LOAD_FPU		31	/* load FPU on return to userspace */
> 
> Well, the comment is wrong after this patch, but I see 4/11...

I did not want to change that same line in two different patches,
with the idea that that would make things harder to review.

>>  /* work to do in syscall_trace_enter() */
>>  #define _TIF_WORK_SYSCALL_ENTRY	\
>> @@ -141,7 +143,7 @@ struct thread_info {
>>  /* Only used for 64 bit */
>>  #define _TIF_DO_NOTIFY_MASK						\
>>  	(_TIF_SIGPENDING | _TIF_MCE_NOTIFY | _TIF_NOTIFY_RESUME |	\
>> -	 _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE)
>> +	 _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE | _TIF_LOAD_FPU)
> 
> This too. I mean, this change has no effect until 4/11.

I can move this line to patch 4/11 if you prefer.

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 03/11] x86,fpu: move __thread_fpu_begin to when the task has the fpu
  2015-01-13 15:24   ` Oleg Nesterov
@ 2015-01-13 16:37     ` Rik van Riel
  0 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 16:37 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13/2015 10:24 AM, Oleg Nesterov wrote:
> On 01/11, riel@redhat.com wrote:
>> 
>> --- a/arch/x86/include/asm/fpu-internal.h
>> +++ b/arch/x86/include/asm/fpu-internal.h
>> @@ -420,7 +420,6 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>>  		if (preload) {
>>  			new->thread.fpu_counter++;
>>  			set_thread_flag(TIF_LOAD_FPU);
>> -			__thread_set_has_fpu(new);
>>  			prefetch(new->thread.fpu.state);
>>  		} else if (!use_eager_fpu())
>>  			stts();
>> @@ -436,7 +435,6 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>>  				prefetch(new->thread.fpu.state);
>>  				set_thread_flag(TIF_LOAD_FPU);
>>  			}
>> -			__thread_fpu_begin(new);
>>  		}
>>  		/* else: CR0.TS is still set from a previous FPU switch */
>>  	}
>> @@ -451,6 +449,7 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>>  static inline void switch_fpu_finish(struct task_struct *new)
>>  {
>>  	if (test_and_clear_thread_flag(TIF_LOAD_FPU)) {
>> +		__thread_fpu_begin(new);
>>  		if (unlikely(restore_fpu_checking(new)))
>>  			drop_init_fpu(new);
>>  	}
> 
> Then perhaps it makes sense to move fpu_lazy_restore() to
> fpu_finish() too ?

I do that later in the series.  I would like to keep that in a
separate changeset, for bisecting reasons.

> Either way, afaics we do not need use_eager_fpu() before
> fpu_lazy_restore(), and this reminds me that every use_eager_fpu()
> check in switch_fpu_prepare() looks confusing.

Agreed. The patch that moves fpu_lazy_restore() into
switch_fpu_finish() also removes the use_eager_fpu()
condition for that.

Another reason for it to be by itself in its own
changeset :)

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 05/11] x86,fpu: ensure FPU state is reloaded from memory if task is traced
  2015-01-13 16:33     ` Rik van Riel
@ 2015-01-13 16:50       ` Oleg Nesterov
  2015-01-13 16:57         ` Rik van Riel
  0 siblings, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 16:50 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13, Rik van Riel wrote:
>
> >> @@ -412,8 +412,14 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
> >>  	bool preload = tsk_used_math(new) && (use_eager_fpu() ||
> >>  					     new->thread.fpu_counter > 5);
> >>  	if (__thread_has_fpu(old)) {
> >> -		if (!__save_init_fpu(old))
> >> +		/*
> >> +		 * Make sure the FPU state is restored from memory next time,
> >> +		 * if the task has an FPU exception pending, or the task's in
> >> +		 * memory FPU state could be changed by a debugger.
> >> +		 */
> >> +		if (!__save_init_fpu(old) || task_is_stopped_or_traced(old))
> >>  			cpu = ~0;
> >
> > Well, if debugger wants to change FPU state, it should call
> > init_fpu() which resets .last_cpu ?
>
> Does the ptrace (and utrace, and ... ) code actually do that?

Yes, see xfpregs_get/set. So I think this change is not needed (but I
didn't look at the next patches).

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 02/11] x86,fpu: replace fpu_switch_t with a thread flag
  2015-01-13 16:35     ` Rik van Riel
@ 2015-01-13 16:55       ` Oleg Nesterov
  0 siblings, 0 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 16:55 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13, Rik van Riel wrote:
>
> On 01/13/2015 10:24 AM, Oleg Nesterov wrote:
> > Rik,
> >
> > I can't review this series, I forgot almost everything I learned
> > about this code. The only thing I can recall is that it needs
> > cleanups and fixes ;) Just a couple of random questions.
> >
> > On 01/11, riel@redhat.com wrote:
> >>
> >> +static inline void switch_fpu_prepare(struct task_struct *old, struct task_struct *new, int cpu)
> >>  {
> >> -	fpu_switch_t fpu;
> >> -
> >>  	/*
> >>  	 * If the task has used the math, pre-load the FPU on xsave processors
> >>  	 * or if the past 5 consecutive context-switches used math.
> >>  	 */
> >> -	fpu.preload = tsk_used_math(new) && (use_eager_fpu() ||
> >> +	bool preload = tsk_used_math(new) && (use_eager_fpu() ||
> >>  					     new->thread.fpu_counter > 5);
> >>  	if (__thread_has_fpu(old)) {
> >>  		if (!__save_init_fpu(old))
> >> @@ -433,8 +417,9 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
> >>  		old->thread.fpu.has_fpu = 0;	/* But leave fpu_owner_task! */
> >>
> >>  		/* Don't change CR0.TS if we just switch! */
> >> -		if (fpu.preload) {
> >> +		if (preload) {
> >>  			new->thread.fpu_counter++;
> >> +			set_thread_flag(TIF_LOAD_FPU);
> >>  			__thread_set_has_fpu(new);
> >>  			prefetch(new->thread.fpu.state);
> >>  		} else if (!use_eager_fpu())
> >> @@ -442,16 +427,19 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
> >>  	} else {
> >>  		old->thread.fpu_counter = 0;
> >>  		old->thread.fpu.last_cpu = ~0;
> >> -		if (fpu.preload) {
> >> +		if (preload) {
> >>  			new->thread.fpu_counter++;
> >>  			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
> >> -				fpu.preload = 0;
> >> -			else
> >> +				/* XXX: is this safe against ptrace??? */
> >
> > Could you explain your concerns?
>
> Ptrace could modify the in-memory copy of a task's FPU context,
> while fpu_lazy_restore() could decide that the task's FPU context
> is still loaded in the registers (nothing else on the CPU has used
> the FPU since it last ran), and does not need to be re-loaded.

This connects to our discussion about 5/11. Debugger should reset
.last_cpu in this case.

Yes, yes, I agree this all needs cleanups ;)

> > Well, the comment is wrong after this patch, but I see 4/11...
>
> I did not want to change that same line in two different patches,
> with the idea that that would make things harder to review.
>
> >>  /* work to do in syscall_trace_enter() */
> >>  #define _TIF_WORK_SYSCALL_ENTRY	\
> >> @@ -141,7 +143,7 @@ struct thread_info {
> >>  /* Only used for 64 bit */
> >>  #define _TIF_DO_NOTIFY_MASK						\
> >>  	(_TIF_SIGPENDING | _TIF_MCE_NOTIFY | _TIF_NOTIFY_RESUME |	\
> >> -	 _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE)
> >> +	 _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE | _TIF_LOAD_FPU)
> >
> > This too. I mean, this change has no effect until 4/11.
>
> I can move this line to patch 4/11 if you prefer.

No, no, please ignore. I mentioned this only because I was a bit confused
initially.

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 05/11] x86,fpu: ensure FPU state is reloaded from memory if task is traced
  2015-01-13 16:50       ` Oleg Nesterov
@ 2015-01-13 16:57         ` Rik van Riel
  0 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 16:57 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13/2015 11:50 AM, Oleg Nesterov wrote:
> On 01/13, Rik van Riel wrote:
>> 
>>>> @@ -412,8 +412,14 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>>>>  	bool preload = tsk_used_math(new) && (use_eager_fpu() ||
>>>>  					     new->thread.fpu_counter > 5);
>>>>  	if (__thread_has_fpu(old)) {
>>>> -		if (!__save_init_fpu(old))
>>>> +		/*
>>>> +		 * Make sure the FPU state is restored from memory next time,
>>>> +		 * if the task has an FPU exception pending, or the task's in
>>>> +		 * memory FPU state could be changed by a debugger.
>>>> +		 */
>>>> +		if (!__save_init_fpu(old) || task_is_stopped_or_traced(old))
>>>>  			cpu = ~0;
>>> 
>>> Well, if debugger wants to change FPU state, it should call 
>>> init_fpu() which resets .last_cpu ?
>> 
>> Does the ptrace (and utrace, and ... ) code actually do that?
> 
> Yes, see xfpregs_get/set. So I think this change is not needed (but
> I didn't look at the next patches).

Excellent, I will drop this then.

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-11 21:46 ` [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace riel
  2015-01-13 15:53   ` Oleg Nesterov
@ 2015-01-13 17:07   ` Andy Lutomirski
  2015-01-13 17:11   ` Oleg Nesterov
  2015-01-13 17:58   ` Oleg Nesterov
  3 siblings, 0 replies; 76+ messages in thread
From: Andy Lutomirski @ 2015-01-13 17:07 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, Ingo Molnar, H. Peter Anvin, Matt Fleming,
	Borislav Petkov, Oleg Nesterov, Paolo Bonzini, Thomas Gleixner

On Sun, Jan 11, 2015 at 1:46 PM,  <riel@redhat.com> wrote:
> From: Rik van Riel <riel@redhat.com>
>
> Defer restoring the FPU state, if so desired, until the task returns to
> userspace.
>
> In case of kernel threads, KVM VCPU threads, and tasks performing longer
> running operations in kernel space, this could mean skipping the FPU state
> restore entirely for several context switches.
>
> This also allows the kernel to preserve the FPU state of a userspace task
> that gets interrupted by a kernel thread in kernel mode.
>
> If the old task left TIF_LOAD_FPU set, and the new task does not want
> an FPU state restore (or does not have an FPU state), clear the flag
> to prevent attempted restoring of a non-existent FPU state.
>
> TODO: remove superfluous irq disabling from entry_{32,64}.S, Andy has
> some conflicting changes in that part of the code, so wait with that.

This is more a general comment than a specific comment on this patch,
but maybe it belongs here: would it make sense to add some assertions,
possibly in __switch_to, that has_fpu, last_cpu, TIF_LOAD_FPU, etc are
all consistent?

For example, this patch seems to be trying to make sure that tasks
that are not running don't have TIF_LOAD_FPU set (is that what the new
code here that clears the bit is for?), so maybe __switch_to or
switch_fpu_prepare should assert that.  And presumably TIF_LOAD_FPU
shouldn't be set if has_fpu is clear (maybe -- I could have confused
myself there).
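
Concretely, something like this (untested, and I may well have the
invariants wrong):

	/* e.g. at the top of switch_fpu_prepare() */
	WARN_ON_ONCE(test_tsk_thread_flag(new, TIF_LOAD_FPU));
	WARN_ON_ONCE(test_tsk_thread_flag(old, TIF_LOAD_FPU) &&
		     __thread_has_fpu(old));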

--Andy

>
> Signed-off-by: Rik van Riel <riel@redhat.com>
> ---
>  arch/x86/include/asm/fpu-internal.h | 33 +++++++++++++++++++++------------
>  arch/x86/kernel/process_32.c        |  2 --
>  arch/x86/kernel/process_64.c        |  2 --
>  arch/x86/kernel/signal.c            |  9 +++++++++
>  4 files changed, 30 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
> index 27556f4..539b050 100644
> --- a/arch/x86/include/asm/fpu-internal.h
> +++ b/arch/x86/include/asm/fpu-internal.h
> @@ -382,6 +382,7 @@ static inline void drop_init_fpu(struct task_struct *tsk)
>                 else
>                         fxrstor_checking(&init_xstate_buf->i387);
>         }
> +       clear_thread_flag(TIF_LOAD_FPU);
>  }
>
>  /*
> @@ -435,24 +436,32 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>                                 prefetch(new->thread.fpu.state);
>                                 set_thread_flag(TIF_LOAD_FPU);
>                         }
> -               }
> -               /* else: CR0.TS is still set from a previous FPU switch */
> +               } else
> +                       /*
> +                        * The new task does not want an FPU state restore,
> +                        * and may not even have an FPU state. However, the
> +                        * old task may have left TIF_LOAD_FPU set.
> +                        * Clear it to avoid trouble.
> +                        *
> +                        * CR0.TS is still set from a previous FPU switch
> +                        */
> +                       clear_thread_flag(TIF_LOAD_FPU);
>         }
>  }
>
>  /*
> - * By the time this gets called, we've already cleared CR0.TS and
> - * given the process the FPU if we are going to preload the FPU
> - * state - all we need to do is to conditionally restore the register
> - * state itself.
> + * This is called if and when the task returns to userspace.
> + * Clear CR0.TS if necessary, so the task can access the FPU register
> + * state this function restores.
>   */
> -static inline void switch_fpu_finish(struct task_struct *new)
> +static inline void switch_fpu_finish(void)
>  {
> -       if (test_and_clear_thread_flag(TIF_LOAD_FPU)) {
> -               __thread_fpu_begin(new);
> -               if (unlikely(restore_fpu_checking(new)))
> -                       drop_init_fpu(new);
> -       }
> +       struct task_struct *tsk = current;
> +
> +       __thread_fpu_begin(tsk);
> +
> +       if (unlikely(restore_fpu_checking(tsk)))
> +               drop_init_fpu(tsk);
>  }
>
>  /*
> diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
> index c4b00e6..4da02ae 100644
> --- a/arch/x86/kernel/process_32.c
> +++ b/arch/x86/kernel/process_32.c
> @@ -319,8 +319,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
>         if (prev->gs | next->gs)
>                 lazy_load_gs(next->gs);
>
> -       switch_fpu_finish(next_p);
> -
>         this_cpu_write(current_task, next_p);
>
>         return prev_p;
> diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
> index ee3824f..20e206e 100644
> --- a/arch/x86/kernel/process_64.c
> +++ b/arch/x86/kernel/process_64.c
> @@ -350,8 +350,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
>                 wrmsrl(MSR_KERNEL_GS_BASE, next->gs);
>         prev->gsindex = gsindex;
>
> -       switch_fpu_finish(next_p);
> -
>         /*
>          * Switch the PDA and FPU contexts.
>          */
> diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
> index ed37a76..46e3008 100644
> --- a/arch/x86/kernel/signal.c
> +++ b/arch/x86/kernel/signal.c
> @@ -761,6 +761,15 @@ do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags)
>                 fire_user_return_notifiers();
>
>         user_enter();
> +
> +       /*
> +        * Test thread flag, as the signal code may have cleared TIF_LOAD_FPU.
> +        * We cannot reschedule after loading the FPU state back into the CPU.
> +        * IRQs will be re-enabled on the switch to userspace.
> +        */
> +       local_irq_disable();
> +       if (test_thread_flag(TIF_LOAD_FPU))
> +               switch_fpu_finish();
>  }
>
>  void signal_fault(struct pt_regs *regs, void __user *frame, char *where)
> --
> 1.9.3
>



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too
  2015-01-11 21:46 ` [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too riel
@ 2015-01-13 17:11   ` Andy Lutomirski
  2015-01-13 20:43     ` Rik van Riel
  2015-01-14 18:36   ` Oleg Nesterov
  1 sibling, 1 reply; 76+ messages in thread
From: Andy Lutomirski @ 2015-01-13 17:11 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, Ingo Molnar, H. Peter Anvin, Matt Fleming,
	Borislav Petkov, Oleg Nesterov, Paolo Bonzini, Thomas Gleixner

On Sun, Jan 11, 2015 at 1:46 PM,  <riel@redhat.com> wrote:
> From: Rik van Riel <riel@redhat.com>
>
> If the next task still has its FPU state present in the FPU registers,
> there is no need to restore it from memory.
>
> This is no big deal on bare metal, where XSAVEOPT / XRSTOR are heavily
> optimized, but those optimizations do not carry across VMENTER / VMEXIT.
>
> Skipping the call to fpu_restore_checking when the FPU state is already
> loaded in the CPU could save a little bit of overhead on bare metal too,
> so this is not just a KVM optimization.
>
> Signed-off-by: Rik van Riel <riel@redhat.com>
> ---
>  arch/x86/include/asm/fpu-internal.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
> index 4db8781..a5a40c7 100644
> --- a/arch/x86/include/asm/fpu-internal.h
> +++ b/arch/x86/include/asm/fpu-internal.h
> @@ -435,13 +435,9 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>                 old->thread.fpu.last_cpu = ~0;
>                 if (preload) {
>                         new->thread.fpu_counter++;
> -                       if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
> -                               /* XXX: is this safe against ptrace??? */
> -                               __thread_fpu_begin(new);
> -                       else {
> +                       set_thread_flag(TIF_LOAD_FPU);
> +                       if (!fpu_lazy_restore(new, cpu))
>                                 prefetch(new->thread.fpu.state);

Is this prefetch still worth keeping?  Wouldn't prefetching the fpu
effectively require grabbing more than one cacheline anyway?

--Andy

> -                               set_thread_flag(TIF_LOAD_FPU);
> -                       }
>                 } else
>                         /*
>                          * The new task does not want an FPU state restore,
> @@ -466,6 +462,10 @@ static inline void switch_fpu_finish(void)
>
>         __thread_fpu_begin(tsk);
>
> +       /* The FPU registers already have this task's FPU state. */
> +       if (fpu_lazy_restore(tsk, raw_smp_processor_id()))
> +               return;
> +
>         if (unlikely(restore_fpu_checking(tsk)))
>                 drop_init_fpu(tsk);
>  }
> --
> 1.9.3
>



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-11 21:46 ` [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace riel
  2015-01-13 15:53   ` Oleg Nesterov
  2015-01-13 17:07   ` Andy Lutomirski
@ 2015-01-13 17:11   ` Oleg Nesterov
  2015-01-13 17:18     ` Andy Lutomirski
  2015-01-13 17:54     ` Rik van Riel
  2015-01-13 17:58   ` Oleg Nesterov
  3 siblings, 2 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 17:11 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/11, riel@redhat.com wrote:
>
> Defer restoring the FPU state, if so desired, until the task returns to
> userspace.

And I have another concern.

Afaics with this patch the idle threads will run with TIF_LOAD_FPU set but
without fpu.has_fpu.

This is fine by itself, but this (performance-wise) breaks kernel_fpu_begin()
if use_eager_fpu() == T. Please see the changelog in 5187b28ff08249ab8a162e8
and note that this patch cc's @stable.

Yes, yes, even if I am right we should blame kernel_fpu_begin() and other
eager_fpu oddities. I tried to start the cleanups/fixes in this area some
time ago, but they were ignored.

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 17:11   ` Oleg Nesterov
@ 2015-01-13 17:18     ` Andy Lutomirski
  2015-01-13 17:44       ` Rik van Riel
  2015-01-13 17:54     ` Rik van Riel
  1 sibling, 1 reply; 76+ messages in thread
From: Andy Lutomirski @ 2015-01-13 17:18 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Rik van Riel, linux-kernel, Ingo Molnar, H. Peter Anvin,
	Matt Fleming, Borislav Petkov, Paolo Bonzini, Thomas Gleixner

On Tue, Jan 13, 2015 at 9:11 AM, Oleg Nesterov <oleg@redhat.com> wrote:
> On 01/11, riel@redhat.com wrote:
>>
>> Defer restoring the FPU state, if so desired, until the task returns to
>> userspace.
>
> And I have another concern.
>
> Afaics with this patch the idle threads will run with TIF_LOAD_FPU set but
> without fpu.has_fpu.

Yuck.  IMO there are still too many possible states.

AFAICS the sensible states for a task are:

 - Task is current on some cpu and FPU is loaded into that cpu.
 - Task is current on some cpu and FPU is in memory.
 - Task is current on some cpu and FPU is loaded into a different cpu.
 - Task is not current and FPU is in memory.
 - Task is not current and FPU is loaded into some cpu.

Am I missing anything?  (In lazy mode, there are a few more involving CR0.TS.)

That's five states, plus an optional cpu number.  But we have tons of
state variables that can express all kinds of nonsense things.
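
(If the state were explicit, it could be as small as something like
this -- names made up:

	enum fpu_state {
		FPU_CUR_REGS_HERE,	/* current; regs live on this cpu */
		FPU_CUR_IN_MEMORY,
		FPU_CUR_REGS_ELSEWHERE,
		FPU_OTHER_IN_MEMORY,
		FPU_OTHER_REGS_LOADED,	/* plus a cpu number */
	};

instead of deriving it from has_fpu, last_cpu, fpu_owner_task, and TIF
flags.)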

If we asserted that we were in a sensible state and fixed the things
that exited the sensible states, maybe this would be easier to
understand and debug.

--Andy

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 17:18     ` Andy Lutomirski
@ 2015-01-13 17:44       ` Rik van Riel
  2015-01-13 17:57         ` Andy Lutomirski
  0 siblings, 1 reply; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 17:44 UTC (permalink / raw)
  To: Andy Lutomirski, Oleg Nesterov
  Cc: linux-kernel, Ingo Molnar, H. Peter Anvin, Matt Fleming,
	Borislav Petkov, Paolo Bonzini, Thomas Gleixner

On 01/13/2015 12:18 PM, Andy Lutomirski wrote:
> On Tue, Jan 13, 2015 at 9:11 AM, Oleg Nesterov <oleg@redhat.com>
> wrote:
>> On 01/11, riel@redhat.com wrote:
>>> 
>>> Defer restoring the FPU state, if so desired, until the task
>>> returns to userspace.
>> 
>> And I have another concern.
>> 
>> Afaics with this patch the idle threads will run with
>> TIF_LOAD_FPU set but without fpu.has_fpu.
> 
> Yuck.  IMO there are still too many possible states.
> 
> AFAICS the sensible states for a task are:
> 
> - Task is current on some cpu and FPU is loaded into that cpu.
> - Task is current on some cpu and FPU is in memory.
> - Task is current on some cpu and FPU is loaded into a different cpu.

When switching a task out, in switch_fpu_prepare() we will always save
the FPU state to memory. The third case can be treated identically to
the second case.

> - Task is not current and FPU is in memory.
> - Task is not current and FPU is loaded into some cpu.

Same for this one. When the task is not current, the FPU state
will have been saved to memory. If we try running the task
somewhere else, it devolves to "FPU is in memory".

> Am I missing anything?  (In lazy mode, there are a few more
> involving CR0.TS.)
> 
> That's five states, plus an optional cpu number.  But we have tons
> of state variables that can express all kinds of nonsense things.
> 
> If we asserted that we were in a sensible state and fixed the
> things that exited the sensible states, maybe this would be easier
> to understand and debug.

Let's see what things we could test, at different points.

1) At context switch time, we need to make sure that the previous task
   will no longer have __thread_has_fpu()

2) When loading the FPU state, we need to make sure that the current
   task does not have __thread_has_fpu()

3) ... what else?

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 17:11   ` Oleg Nesterov
  2015-01-13 17:18     ` Andy Lutomirski
@ 2015-01-13 17:54     ` Rik van Riel
  2015-01-13 18:22       ` Oleg Nesterov
  1 sibling, 1 reply; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 17:54 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13/2015 12:11 PM, Oleg Nesterov wrote:
> On 01/11, riel@redhat.com wrote:
>> 
>> Defer restoring the FPU state, if so desired, until the task
>> returns to userspace.
> 
> And I have another concern.
> 
> > Afaics with this patch the idle threads will run with TIF_LOAD_FPU
> set but without fpu.has_fpu.
> 
> This is fine by itself, but this (performance-wise) breaks
> kernel_fpu_begin() if use_eager_fpu() == T. Please see the
> changelog in 5187b28ff08249ab8a162e8 and note that this patch cc's
> @stable.
> 
> Yes, yes, even if I am right we should blame kernel_fpu_begin() and
> other eager_fpu oddities. I tried to start the cleanups/fixes in
> this area some time ago, but they were ignored.

I suppose we could make kernel_fpu_begin() explicitly point
the cpu's fpu pointer at a special value to indicate that
we are in the middle of a kernel_fpu_begin() / kernel_fpu_end()
session, and should not use the FPU from interrupt context
right now.
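
Hand-waving sketch (the sentinel value is made up):

	#define FPU_OWNED_BY_KERNEL ((struct task_struct *)1)

	/* in __kernel_fpu_begin() */
	this_cpu_write(fpu_owner_task, FPU_OWNED_BY_KERNEL);

	/* in irq_fpu_usable() */
	if (this_cpu_read(fpu_owner_task) == FPU_OWNED_BY_KERNEL)
		return false;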

Does that seem a reasonable fix?

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 17:44       ` Rik van Riel
@ 2015-01-13 17:57         ` Andy Lutomirski
  2015-01-13 18:13           ` Rik van Riel
  0 siblings, 1 reply; 76+ messages in thread
From: Andy Lutomirski @ 2015-01-13 17:57 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Oleg Nesterov, linux-kernel, Ingo Molnar, H. Peter Anvin,
	Matt Fleming, Borislav Petkov, Paolo Bonzini, Thomas Gleixner

On Tue, Jan 13, 2015 at 9:44 AM, Rik van Riel <riel@redhat.com> wrote:
>
> On 01/13/2015 12:18 PM, Andy Lutomirski wrote:
>> On Tue, Jan 13, 2015 at 9:11 AM, Oleg Nesterov <oleg@redhat.com>
>> wrote:
>>> On 01/11, riel@redhat.com wrote:
>>>>
>>>> Defer restoring the FPU state, if so desired, until the task
>>>> returns to userspace.
>>>
>>> And I have another concern.
>>>
>>> Afaics with this patch the idle threads will run with
>>> TIF_LOAD_FPU set but without fpu.has_fpu.
>>
>> Yuck.  IMO there are still too many possible states.
>>
>> AFAICS the sensible states for a task are:
>>
>> - Task is current on some cpu and FPU is loaded into that cpu.
>> - Task is current on some cpu and FPU is in memory.
>> - Task is current on some cpu and FPU is loaded into a different cpu.
>
> When switching a task out, in switch_fpu_prepare() we will always save
> the FPU state to memory. The third case can be treated identically to
> the second case.

Then what's last_cpu for?

>
>> - Task is not current and FPU is in memory.
>> - Task is not current and FPU is loaded into some cpu.
>
> Same for this one. When the task is not current, the FPU state
> will have been saved to memory. If we try running the task
> somewhere else, it devolves to "FPU is in memory".
>

Isn't there a case where the FPU is in memory *and* in the cpu regs?
Isn't that how you can skip reloading the FPU after going idle and
returning?  Is this what fpu_lazy_restore is for?  Confused.

>> Am I missing anything?  (In lazy mode, there are a few more
>> involving CR0.TS.)
>>
>> That's five states, plus an optional cpu number.  But we have tons
>> of state variables that can express all kinds of nonsense things.
>>
>> If we asserted that we were in a sensible state and fixed the
>> things that exited the sensible states, maybe this would be easier
>> to understand and debug.
>
> Let's see what things we could test, at different points.
>
> 1) At context switch time, we need to make sure that the previous task
>    will no longer have __thread_has_fpu()
>
> 2) When loading the FPU state, we need to make sure that the current
>    task does not have __thread_has_fpu()

Examples, any of which may be wrong:

If !current, then !TIF_LOAD_FPU

If switching out a task with TIF_LOAD_FPU set, then !has_fpu

If last_cpu == smp_processor_id(), then fpu_owner == fpu.

If has_fpu, then the task must be current somewhere and last_cpu must
be the cpu on which it's current.

At the very least, asking these questions may help fill in holes in my
understanding :)

--Andy

>
> 3) ... what else?
>
> --
> All rights reversed



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-11 21:46 ` [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace riel
                     ` (2 preceding siblings ...)
  2015-01-13 17:11   ` Oleg Nesterov
@ 2015-01-13 17:58   ` Oleg Nesterov
  2015-01-13 19:32     ` Rik van Riel
  3 siblings, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 17:58 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/11, riel@redhat.com wrote:
>
> Defer restoring the FPU state, if so desired, until the task returns to
> userspace.

And yet another concern ;) Although I feel that I am totally confused and
probably wrong.

> --- a/arch/x86/include/asm/fpu-internal.h
> +++ b/arch/x86/include/asm/fpu-internal.h
> @@ -382,6 +382,7 @@ static inline void drop_init_fpu(struct task_struct *tsk)
>  		else
>  			fxrstor_checking(&init_xstate_buf->i387);
>  	}
> +	clear_thread_flag(TIF_LOAD_FPU);
>  }

OK, but shouldn't (say) restore_user_xstate() clear TIF_LOAD_FPU too?
Otherwise, can't switch_fpu_finish() restore the wrong context later?

Or. Perhaps switch_fpu_finish() should do nothing if fpu.has_fpu == T,
I dunno.
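
I mean something like (untested):

	static inline void switch_fpu_finish(void)
	{
		struct task_struct *tsk = current;

		/* someone already put the registers back, nothing to do */
		if (__thread_has_fpu(tsk))
			return;

		__thread_fpu_begin(tsk);
		if (unlikely(restore_fpu_checking(tsk)))
			drop_init_fpu(tsk);
	}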

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 17:57         ` Andy Lutomirski
@ 2015-01-13 18:13           ` Rik van Riel
  2015-01-13 18:26             ` Andy Lutomirski
  0 siblings, 1 reply; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 18:13 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Oleg Nesterov, linux-kernel, Ingo Molnar, H. Peter Anvin,
	Matt Fleming, Borislav Petkov, Paolo Bonzini, Thomas Gleixner

On 01/13/2015 12:57 PM, Andy Lutomirski wrote:
> On Tue, Jan 13, 2015 at 9:44 AM, Rik van Riel <riel@redhat.com>
> wrote: On 01/13/2015 12:18 PM, Andy Lutomirski wrote:

>>>> - Task is not current and FPU is in memory.
>>>> - Task is not current and FPU is loaded into some cpu.
> 
>> Same for this one. When the task is not current, the FPU state 
>> will have been saved to memory. If we try running the task 
>> somewhere else, it devolves to "FPU is in memory".
> 
> 
> Isn't there a case where the FPU is in memory *and* in the cpu
> regs? Isn't that how you can skip reloading the FPU after going
> idle and returning?  Is this what fpu_lazy_restore is for?
> Confused.

Indeed, if we end up running the task on the same CPU again, and the
FPU still has the state loaded, we may skip restoring the FPU state.

>>>> Am I missing anything?  (In lazy mode, there are a few more 
>>>> involving CR0.TS.)
>>>> 
>>>> That's five states, plus an optional cpu number.  But we have
>>>> tons of state variables that can express all kinds of nonsense
>>>> things.
>>>> 
>>>> If we asserted that we were in a sensible state and fixed
>>>> the things that exited the sensible states, maybe this would
>>>> be easier to understand and debug.
> 
> Let's see what things we could test, at different points.
> 
> 1) At context switch time, we need to make sure that the previous
> task will no longer have __thread_has_fpu()
> 
> 2) When loading the FPU state, we need to make sure that the
> current task does not have __thread_has_fpu()
> 
>> Examples, any of which may be wrong:
> 
>> If !current, then !TIF_LOAD_FPU

We set TIF_LOAD_FPU on the next task at context switch time,
which is different from the current task. I suspect we can
deal with that exception, though :)

What we can test is that "new" does not already have TIF_LOAD_FPU
set...

>> If switching out a task with TIF_LOAD_FPU set, then !has_fpu

... and that old does not have both TIF_LOAD_FPU and has_fpu.

>> If last_cpu == smp_processor_id(), then fpu_owner == fpu.

Not necessarily, since the task may not have entered userspace in
a very long time, so it may not have loaded its FPU context.

>> If has_fpu, then the task must be current somewhere and last_cpu
>> must be the cpu on which it's current.

Indeed, if has_fpu, then last_cpu must match the current cpu.


--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 17:54     ` Rik van Riel
@ 2015-01-13 18:22       ` Oleg Nesterov
  2015-01-13 18:30         ` Oleg Nesterov
  0 siblings, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 18:22 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13, Rik van Riel wrote:
>
> On 01/13/2015 12:11 PM, Oleg Nesterov wrote:
> > On 01/11, riel@redhat.com wrote:
> >>
> >> Defer restoring the FPU state, if so desired, until the task
> >> returns to userspace.
> >
> > And I have another concern.
> >
> > Afaics with this patch the idle threads will run with TIF_LOAD_FPU
> > set but without fpu.has_fpu.
> >
> > This is fine by itself, but this (performance-wise) breaks
> > kernel_fpu_begin() if use_eager_fpu() == T. Please see the
> > changelog in 5187b28ff08249ab8a162e8 and note that this patch cc's
> > @stable.
> >
> > Yes, yes, even if I am right we should blame kernel_fpu_begin() and
> > other eager_fpu oddities. I tried to start the cleanups/fixes in
> > this area some time ago, but they were ignored.
>
> I suppose we could make kernel_fpu_begin() explicitly point
> the cpu's fpu pointer at a special value to indicate that
> we are in the middle of a kernel_fpu_begin() / kernel_fpu_end()
> session, and should not use the FPU from interrupt context
> right now.

Not sure I understand...

But yes, I think we need the per-cpu "in_kernel_fpu" and irq_fpu_usable()
must die. Please look at http://marc.info/?l=linux-kernel&m=14096628660929
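
The core of that approach, from memory (simplified; the real
irq_fpu_usable() has to check more than this):

	static DEFINE_PER_CPU(bool, in_kernel_fpu);

	void __kernel_fpu_begin(void)
	{
		this_cpu_write(in_kernel_fpu, true);
		/* ... save the interrupted FPU state as before ... */
	}

	void __kernel_fpu_end(void)
	{
		/* ... restore the interrupted FPU state ... */
		this_cpu_write(in_kernel_fpu, false);
	}

	bool irq_fpu_usable(void)
	{
		return !in_interrupt() || !this_cpu_read(in_kernel_fpu);
	}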

Until then imo this series should try to ensure that kernel_fpu_begin/end
will work if the interrupted thread is the idle task.

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 18:13           ` Rik van Riel
@ 2015-01-13 18:26             ` Andy Lutomirski
  0 siblings, 0 replies; 76+ messages in thread
From: Andy Lutomirski @ 2015-01-13 18:26 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Oleg Nesterov, linux-kernel, Ingo Molnar, H. Peter Anvin,
	Matt Fleming, Borislav Petkov, Paolo Bonzini, Thomas Gleixner

On Tue, Jan 13, 2015 at 10:13 AM, Rik van Riel <riel@redhat.com> wrote:
> On 01/13/2015 12:57 PM, Andy Lutomirski wrote:
>> On Tue, Jan 13, 2015 at 9:44 AM, Rik van Riel <riel@redhat.com>
>> wrote: On 01/13/2015 12:18 PM, Andy Lutomirski wrote:
>
>>>>> - Task is not current and FPU is in memory.
>>>>> - Task is not current and FPU is loaded into some cpu.
>>
>>> Same for this one. When the task is not current, the FPU state
>>> will have been saved to memory. If we try running the task
>>> somewhere else, it devolves to "FPU is in memory".
>>
>>
>> Isn't there a case where the FPU is in memory *and* in the cpu
>> regs? Isn't that how you can skip reloading the FPU after going
>> idle and returning?  Is this what fpu_lazy_restore is for?
>> Confused.
>
> Indeed, if we end up running the task on the same CPU again, and the
> FPU still has the state loaded, we may skip restoring the FPU state.
>
>>>>> Am I missing anything?  (In lazy mode, there are a few more
>>>>> involving CR0.TS.)
>>>>>
>>>>> That's five states, plus an optional cpu number.  But we have
>>>>> tons of state variables that can express all kinds of nonsense
>>>>> things.
>>>>>
>>>>> If we asserted that we were in a sensible state and fixed
>>>>> the things that exited the sensible states, maybe this would
>>>>> be easier to understand and debug.
>>
>> Let's see what things we could test, at different points.
>>
>> 1) At context switch time, we need to make sure that the previous
>> task will no longer have __thread_has_fpu()
>>
>> 2) When loading the FPU state, we need to make sure that the
>> current task does not have __thread_has_fpu()
>>
>>> Examples, any of which may be wrong:
>>
>>> If !current, then !TIF_LOAD_FPU
>
> We set TIF_LOAD_FPU on the next task at context switch time,
> which is different from the current task. I suspect we can
> deal with that exception, though :)
>
> What we can test is that "new" does not already have TIF_LOAD_FPU
> set...
>
>>> If switching out a task with TIF_LOAD_FPU set, then !has_fpu
>
> ... and that old does not have both TIF_LOAD_FPU and has_fpu.
>
>>> If last_cpu == smp_processor_id(), then fpu_owner == fpu.
>
> Not necessarily, since the task may not have entered userspace in
> a very long time, so it may not have loaded its FPU context.
>

Is the idea that it's possible for fpu_owner == fpu if the task is
brand new (i.e. fpu_owner really refers to some now-dead task whose
memory is reused) but that, if this is the case, then last_cpu won't
match that cpu?

If so, then I think that I'm finally starting to understand that part of it.

>>> If has_fpu, then the task must be current somewhere and last_cpu
>>> must be the cpu on which it's current.
>
> Indeed, if has_fpu, then last_cpu must match the current cpu.

Phew.

Thanks,
Andy

>
>
> --
> All rights reversed



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 18:22       ` Oleg Nesterov
@ 2015-01-13 18:30         ` Oleg Nesterov
  2015-01-13 20:06           ` Rik van Riel
  0 siblings, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-13 18:30 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13, Oleg Nesterov wrote:
>
> But yes, I think we need the per-cpu "in_kernel_fpu" and irq_fpu_usable()
> must die. Please look at http://marc.info/?l=linux-kernel&m=14096628660929

Plus I think we need this to fix the race with math_state_restore()...
Please see http://marc.info/?l=linux-kernel&m=140992479607206

Unfortunately I forgot other problems I found in this area... And
I completely forgot the plan I had in mind ;)

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 17:58   ` Oleg Nesterov
@ 2015-01-13 19:32     ` Rik van Riel
  0 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 19:32 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13/2015 12:58 PM, Oleg Nesterov wrote:
> On 01/11, riel@redhat.com wrote:
>> 
>> Defer restoring the FPU state, if so desired, until the task
>> returns to userspace.
> 
> And yet another concern ;) Although I feel that I am totally
> confused and probably wrong.
> 
>> --- a/arch/x86/include/asm/fpu-internal.h
>> +++ b/arch/x86/include/asm/fpu-internal.h
>> @@ -382,6 +382,7 @@ static inline void drop_init_fpu(struct task_struct *tsk)
>>  		else
>>  			fxrstor_checking(&init_xstate_buf->i387);
>>  	}
>> +	clear_thread_flag(TIF_LOAD_FPU);
>>  }
> 
> OK, but shouldn't (say) restore_user_xstate() clear TIF_LOAD_FPU
> too? Otherwise, can't switch_fpu_finish() restore the wrong context
> later?

I address restore_user_xstate and save_user_xstate later in the
patch series. Once I have all the code working, I should refactor
that, in order to get a cleanly bisectable series.

> Or. Perhaps switch_fpu_finish() should do nothing if fpu.has_fpu ==
> T, I dunno.

One of the later patches in the series implements that.

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 18:30         ` Oleg Nesterov
@ 2015-01-13 20:06           ` Rik van Riel
  2015-01-14 17:56             ` Oleg Nesterov
  0 siblings, 1 reply; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 20:06 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13/2015 01:30 PM, Oleg Nesterov wrote:
> On 01/13, Oleg Nesterov wrote:
>> 
>> But yes, I think we need the per-cpu "in_kernel_fpu" and
>> irq_fpu_usable() must die. Please look at
>> http://marc.info/?l=linux-kernel&m=14096628660929

This link does not work for me. I wonder if that email is not
on the marc.info node I managed to send my http request to...

> Plus I think we need this to fix the race with
> math_state_restore()... Please see
> http://marc.info/?l=linux-kernel&m=140992479607206
> 
> Unfortunately I forgot other problems I found in this area... And I
> completely forgot the plan I had in mind ;)

That I can certainly merge into my series.

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too
  2015-01-13 17:11   ` Andy Lutomirski
@ 2015-01-13 20:43     ` Rik van Riel
  0 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-13 20:43 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, H. Peter Anvin, Matt Fleming,
	Borislav Petkov, Oleg Nesterov, Paolo Bonzini, Thomas Gleixner

On 01/13/2015 12:11 PM, Andy Lutomirski wrote:
> On Sun, Jan 11, 2015 at 1:46 PM,  <riel@redhat.com> wrote:

>> diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
>> index 4db8781..a5a40c7 100644
>> --- a/arch/x86/include/asm/fpu-internal.h
>> +++ b/arch/x86/include/asm/fpu-internal.h
>> @@ -435,13 +435,9 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>>  		old->thread.fpu.last_cpu = ~0;
>>  		if (preload) {
>>  			new->thread.fpu_counter++;
>> -			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
>> -				/* XXX: is this safe against ptrace??? */
>> -				__thread_fpu_begin(new);
>> -			else {
>> +			set_thread_flag(TIF_LOAD_FPU);
>> +			if (!fpu_lazy_restore(new, cpu))
>>  				prefetch(new->thread.fpu.state);
> 
> Is this prefetch still worth keeping?  Wouldn't prefetching the
> fpu effectively require grabbing more than one cacheline anyway?

Probably not. I'll drop it, for more code simplification.

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace
  2015-01-13 20:06           ` Rik van Riel
@ 2015-01-14 17:56             ` Oleg Nesterov
  0 siblings, 0 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-14 17:56 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/13, Rik van Riel wrote:
>
> On 01/13/2015 01:30 PM, Oleg Nesterov wrote:
> > On 01/13, Oleg Nesterov wrote:
> >>
> >> But yes, I think we need the per-cpu "in_kernel_fpu" and
> >> irq_fpu_usable() must die. Please look at
> >> http://marc.info/?l=linux-kernel&m=14096628660929
>
> This link does not work for me. I wonder if that email is missing
> from the marc.info node my HTTP request happened to reach...

Sorry, copy-and-paste error,
http://marc.info/?l=linux-kernel&m=140966286609295

> > Plus I think we need this to fix the race with
> > math_state_restore()... Please see
> > http://marc.info/?l=linux-kernel&m=140992479607206
> >
> > Unfortunately I forgot other problems I found in this area... And I
> > completely forgot the plan I had in mind ;)
>
> That I can certainly merge into my series.

Perhaps I should resend this series... I'll recheck; it should not
conflict with your changes (at least not too much).

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too
  2015-01-11 21:46 ` [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too riel
  2015-01-13 17:11   ` Andy Lutomirski
@ 2015-01-14 18:36   ` Oleg Nesterov
  2015-01-15  2:49     ` Rik van Riel
  1 sibling, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-14 18:36 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/11, riel@redhat.com wrote:
>
> If the next task still has its FPU state present in the FPU registers,
> there is no need to restore it from memory.

Another patch I can't understand...

> --- a/arch/x86/include/asm/fpu-internal.h
> +++ b/arch/x86/include/asm/fpu-internal.h
> @@ -435,13 +435,9 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>  		old->thread.fpu.last_cpu = ~0;
>  		if (preload) {
>  			new->thread.fpu_counter++;
> -			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
> -				/* XXX: is this safe against ptrace??? */
> -				__thread_fpu_begin(new);
> -			else {
> +			set_thread_flag(TIF_LOAD_FPU);
> +			if (!fpu_lazy_restore(new, cpu))
>  				prefetch(new->thread.fpu.state);
> -				set_thread_flag(TIF_LOAD_FPU);
> -			}

It is not clear to me why we set TIF_LOAD_FPU if fpu_lazy_restore()
succeeds. __thread_fpu_begin() is cheap.

At the same time, if switch_fpu_finish() does fpu_lazy_restore() anyway,
why doesn't this patch remove it from switch_fpu_prepare()?

However,

> @@ -466,6 +462,10 @@ static inline void switch_fpu_finish(void)
>
>  	__thread_fpu_begin(tsk);
>
> +	/* The FPU registers already have this task's FPU state. */
> +	if (fpu_lazy_restore(tsk, raw_smp_processor_id()))
> +		return;
> +

Now that this is called before return to user-mode, I am not sure this is
correct. Note that __kernel_fpu_begin() doesn't clear fpu_owner_task if
use_eager_fpu().

OK, interrupted_kernel_fpu_idle() should fail in this case... but as we
already discussed this means a performance regression, so this should
be changed.
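
For reference, the __kernel_fpu_begin() under discussion currently looks
roughly like this (a sketch pieced together from the hunks quoted later
in this thread, not a verbatim copy):

void __kernel_fpu_begin(void)
{
	struct task_struct *me = current;

	if (__thread_has_fpu(me)) {
		__thread_clear_has_fpu(me);	/* also clears fpu_owner_task */
		__save_init_fpu(me);
		/* We do 'stts()' in __kernel_fpu_end() */
	} else if (!use_eager_fpu()) {
		this_cpu_write(fpu_owner_task, NULL);
		clts();
	}
}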

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 08/11] x86,fpu: restore user FPU state lazily after __kernel_fpu_end
  2015-01-11 21:46 ` [RFC PATCH 08/11] x86,fpu: restore user FPU state lazily after __kernel_fpu_end riel
@ 2015-01-14 18:43   ` Oleg Nesterov
  2015-01-14 19:08     ` Oleg Nesterov
  0 siblings, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-14 18:43 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/11, riel@redhat.com wrote:
>
> --- a/arch/x86/kernel/i387.c
> +++ b/arch/x86/kernel/i387.c
> @@ -89,13 +89,11 @@ void __kernel_fpu_end(void)
>  	if (use_eager_fpu()) {
>  		/*
>  		 * For eager fpu, most the time, tsk_used_math() is true.
> -		 * Restore the user math as we are done with the kernel usage.
> -		 * At few instances during thread exit, signal handling etc,
> -		 * tsk_used_math() is false. Those few places will take proper
> -		 * actions, so we don't need to restore the math here.
> +		 * Make sure the user math state is restored on return to
> +		 * userspace.
>  		 */
>  		if (likely(tsk_used_math(current)))
> -			math_state_restore();
> +			set_thread_flag(TIF_LOAD_FPU);

Hmm. And this looks obviously wrong if interrupted_user_mode().

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 08/11] x86,fpu: restore user FPU state lazily after __kernel_fpu_end
  2015-01-14 18:43   ` Oleg Nesterov
@ 2015-01-14 19:08     ` Oleg Nesterov
  0 siblings, 0 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-14 19:08 UTC (permalink / raw)
  To: riel; +Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/14, Oleg Nesterov wrote:
>
> On 01/11, riel@redhat.com wrote:
> >
> > --- a/arch/x86/kernel/i387.c
> > +++ b/arch/x86/kernel/i387.c
> > @@ -89,13 +89,11 @@ void __kernel_fpu_end(void)
> >  	if (use_eager_fpu()) {
> >  		/*
> >  		 * For eager fpu, most the time, tsk_used_math() is true.
> > -		 * Restore the user math as we are done with the kernel usage.
> > -		 * At few instances during thread exit, signal handling etc,
> > -		 * tsk_used_math() is false. Those few places will take proper
> > -		 * actions, so we don't need to restore the math here.
> > +		 * Make sure the user math state is restored on return to
> > +		 * userspace.
> >  		 */
> >  		if (likely(tsk_used_math(current)))
> > -			math_state_restore();
> > +			set_thread_flag(TIF_LOAD_FPU);
>
> Hmm. And this looks obviously wrong if interrupted_user_mode().

OOPS, sorry, I misread this change.

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too
  2015-01-14 18:36   ` Oleg Nesterov
@ 2015-01-15  2:49     ` Rik van Riel
  2015-01-15 19:34       ` Oleg Nesterov
  0 siblings, 1 reply; 76+ messages in thread
From: Rik van Riel @ 2015-01-15  2:49 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/14/2015 01:36 PM, Oleg Nesterov wrote:
> On 01/11, riel@redhat.com wrote:
>> 
>> If the next task still has its FPU state present in the FPU
>> registers, there is no need to restore it from memory.
> 
> Another patch I can't understand...
> 
>> --- a/arch/x86/include/asm/fpu-internal.h
>> +++ b/arch/x86/include/asm/fpu-internal.h
>> @@ -435,13 +435,9 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
>>  		old->thread.fpu.last_cpu = ~0;
>>  		if (preload) {
>>  			new->thread.fpu_counter++;
>> -			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
>> -				/* XXX: is this safe against ptrace??? */
>> -				__thread_fpu_begin(new);
>> -			else {
>> +			set_thread_flag(TIF_LOAD_FPU);
>> +			if (!fpu_lazy_restore(new, cpu))
>>  				prefetch(new->thread.fpu.state);
>> -				set_thread_flag(TIF_LOAD_FPU);
>> -			}
> 
> It is not clear to me why we set TIF_LOAD_FPU if
> fpu_lazy_restore() succeeds. __thread_fpu_begin() is cheap.
> 
> At the same time, if switch_fpu_finish() does fpu_lazy_restore()
> anyway, why doesn't this patch remove it from switch_fpu_prepare()?

I have it removed now, because the prefetch does not make
much sense (as was pointed out by either you or Andy).
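
With the prefetch gone, the preload branch reduces to just this (a sketch
of my current tree, not the posted patch):

		if (preload) {
			new->thread.fpu_counter++;
			set_thread_flag(TIF_LOAD_FPU);
		}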

>> @@ -466,6 +462,10 @@ static inline void switch_fpu_finish(void)
>> 
>>  	__thread_fpu_begin(tsk);
>> 
>> +	/* The FPU registers already have this task's FPU state. */
>> +	if (fpu_lazy_restore(tsk, raw_smp_processor_id()))
>> +		return;
>> +
> 
> Now that this is called before return to user-mode, I am not sure
> this is correct. Note that __kernel_fpu_begin() doesn't clear
> fpu_owner_task if use_eager_fpu().

However, __kernel_fpu_begin() does call __thread_clear_has_fpu(),
which clears the per-cpu fpu_owner_task variable, which is also
checked by fpu_lazy_restore(), so I think this is actually
correct.
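
Concretely, the two helpers involved look roughly like this in
fpu-internal.h (a sketch from memory, not a verbatim quote):

static inline void __thread_clear_has_fpu(struct task_struct *tsk)
{
	tsk->thread.fpu.has_fpu = 0;
	this_cpu_write(fpu_owner_task, NULL);
}

static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
{
	/* Fails once fpu_owner_task has been cleared. */
	return new == this_cpu_read_stable(fpu_owner_task) &&
		cpu == new->thread.fpu.last_cpu;
}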

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH 0/3] x86, fpu: kernel_fpu_begin/end initial cleanups/fix
  2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
                   ` (10 preceding siblings ...)
  2015-01-11 21:46 ` [RFC PATCH 11/11] (BROKEN) x86,fpu: broken signal handler stack setup riel
@ 2015-01-15 19:19 ` Oleg Nesterov
  2015-01-15 19:19   ` [PATCH 1/3] x86, fpu: introduce per-cpu "bool in_kernel_fpu" Oleg Nesterov
                     ` (3 more replies)
  11 siblings, 4 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-15 19:19 UTC (permalink / raw)
  To: Rik van Riel, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

Add cc's.

On 01/11, riel@redhat.com wrote:
>
> Currently the kernel will always load the FPU context, even
> when switching to a kernel thread, or to an idle thread. In
> the case of a task on a KVM VCPU going idle for a bit, and
> waking up again later, this creates a vastly inefficient
> chain of FPU context saves & loads:

I assume you will send v2.

Let me send the initial kernel_fpu_begin/end cleanups; I believe they make
sense anyway and won't conflict with your changes.

This is actually a resend; I sent more patches some time ago but they were
ignored.

Note that (I hope) we can do more changes on top of this series, in
particular:

	- remove all checks from irq_fpu_usable() except in_kernel_fpu

	- do not abuse FPU in kernel threads, this makes sense even if
	  use_eager_fpu(), and with or without the changes you proposed.

Please review.

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH 1/3] x86, fpu: introduce per-cpu "bool in_kernel_fpu"
  2015-01-15 19:19 ` [PATCH 0/3] x86, fpu: kernel_fpu_begin/end initial cleanups/fix Oleg Nesterov
@ 2015-01-15 19:19   ` Oleg Nesterov
  2015-01-16  2:22     ` Rik van Riel
  2015-01-20 12:54     ` [tip:x86/fpu] x86, fpu: Introduce per-cpu in_kernel_fpu state tip-bot for Oleg Nesterov
  2015-01-15 19:20   ` [PATCH 2/3] x86, fpu: don't abuse ->has_fpu in __kernel_fpu_{begin,end}() Oleg Nesterov
                     ` (2 subsequent siblings)
  3 siblings, 2 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-15 19:19 UTC (permalink / raw)
  To: Rik van Riel, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

interrupted_kernel_fpu_idle() tries to detect if kernel_fpu_begin()
is safe or not. In particular it should obviously deny the nested
kernel_fpu_begin() and this logic looks very confusing.

If use_eager_fpu() == T we rely on a) __thread_has_fpu() check in
interrupted_kernel_fpu_idle(), and b) on the fact that _begin() does
__thread_clear_has_fpu().

Otherwise we demand that the interrupted task has no FPU if it is in
kernel mode; this works because __kernel_fpu_begin() does clts() and
interrupted_kernel_fpu_idle() checks X86_CR0_TS.

Add the per-cpu "bool in_kernel_fpu" variable, and change this code
to check/set/clear it. This allows us to do more cleanups and fixes; see
the next changes.

The patch also moves WARN_ON_ONCE() under preempt_disable() just to
make this_cpu_read() look better; this is not really needed. And in
fact I think we should move it into __kernel_fpu_begin().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 arch/x86/include/asm/i387.h |    2 +-
 arch/x86/kernel/i387.c      |    9 +++++++++
 2 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index ed8089d..5e275d3 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -40,8 +40,8 @@ extern void __kernel_fpu_end(void);
 
 static inline void kernel_fpu_begin(void)
 {
-	WARN_ON_ONCE(!irq_fpu_usable());
 	preempt_disable();
+	WARN_ON_ONCE(!irq_fpu_usable());
 	__kernel_fpu_begin();
 }
 
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index a9a4229..a815723 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -19,6 +19,8 @@
 #include <asm/fpu-internal.h>
 #include <asm/user.h>
 
+static DEFINE_PER_CPU(bool, in_kernel_fpu);
+
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
@@ -33,6 +35,9 @@
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
+	if (this_cpu_read(in_kernel_fpu))
+		return false;
+
 	if (use_eager_fpu())
 		return __thread_has_fpu(current);
 
@@ -73,6 +78,8 @@ void __kernel_fpu_begin(void)
 {
 	struct task_struct *me = current;
 
+	this_cpu_write(in_kernel_fpu, true);
+
 	if (__thread_has_fpu(me)) {
 		__thread_clear_has_fpu(me);
 		__save_init_fpu(me);
@@ -99,6 +106,8 @@ void __kernel_fpu_end(void)
 	} else {
 		stts();
 	}
+
+	this_cpu_write(in_kernel_fpu, false);
 }
 EXPORT_SYMBOL(__kernel_fpu_end);
 
-- 
1.5.5.1



^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 2/3] x86, fpu: don't abuse ->has_fpu in __kernel_fpu_{begin,end}()
  2015-01-15 19:19 ` [PATCH 0/3] x86, fpu: kernel_fpu_begin/end initial cleanups/fix Oleg Nesterov
  2015-01-15 19:19   ` [PATCH 1/3] x86, fpu: introduce per-cpu "bool in_kernel_fpu" Oleg Nesterov
@ 2015-01-15 19:20   ` Oleg Nesterov
  2015-01-16  2:27     ` Rik van Riel
  2015-01-20 12:55     ` [tip:x86/fpu] x86, fpu: Don't abuse has_fpu in __kernel_fpu_begin /end() tip-bot for Oleg Nesterov
  2015-01-15 19:20   ` [PATCH 3/3] x86, fpu: fix math_state_restore() race with kernel_fpu_begin() Oleg Nesterov
  2015-01-19 18:51   ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Oleg Nesterov
  3 siblings, 2 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-15 19:20 UTC (permalink / raw)
  To: Rik van Riel, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

Now that we have in_kernel_fpu we can remove __thread_clear_has_fpu()
in __kernel_fpu_begin(). And this allows us to replace the asymmetrical
and nontrivial use_eager_fpu + tsk_used_math check in kernel_fpu_end()
with the same __thread_has_fpu() check.

The logic becomes really simple; if _begin() does save() then _end()
needs restore(), and this is controlled by __thread_has_fpu(). Otherwise
they do clts/stts unless use_eager_fpu().

Not only does this make begin/end symmetrical and imo more understandable,
it potentially allows us to change irq_fpu_usable() to avoid all other
checks except "in_kernel_fpu".

Also, with this patch __kernel_fpu_end() does restore_fpu_checking()
and WARNs if it fails instead of math_state_restore(). I think this
looks better because we no longer need __thread_fpu_begin(), and it
would be better to report the failure in this case.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 arch/x86/kernel/i387.c |   19 ++++++-------------
 1 files changed, 6 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index a815723..12088a3 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -81,9 +81,7 @@ void __kernel_fpu_begin(void)
 	this_cpu_write(in_kernel_fpu, true);
 
 	if (__thread_has_fpu(me)) {
-		__thread_clear_has_fpu(me);
 		__save_init_fpu(me);
-		/* We do 'stts()' in __kernel_fpu_end() */
 	} else if (!use_eager_fpu()) {
 		this_cpu_write(fpu_owner_task, NULL);
 		clts();
@@ -93,17 +91,12 @@ EXPORT_SYMBOL(__kernel_fpu_begin);
 
 void __kernel_fpu_end(void)
 {
-	if (use_eager_fpu()) {
-		/*
-		 * For eager fpu, most the time, tsk_used_math() is true.
-		 * Restore the user math as we are done with the kernel usage.
-		 * At few instances during thread exit, signal handling etc,
-		 * tsk_used_math() is false. Those few places will take proper
-		 * actions, so we don't need to restore the math here.
-		 */
-		if (likely(tsk_used_math(current)))
-			math_state_restore();
-	} else {
+	struct task_struct *me = current;
+
+	if (__thread_has_fpu(me)) {
+		if (WARN_ON(restore_fpu_checking(me)))
+			drop_init_fpu(me);
+	} else if (!use_eager_fpu()) {
 		stts();
 	}
 
-- 
1.5.5.1



^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 3/3] x86, fpu: fix math_state_restore() race with kernel_fpu_begin()
  2015-01-15 19:19 ` [PATCH 0/3] x86, fpu: kernel_fpu_begin/end initial cleanups/fix Oleg Nesterov
  2015-01-15 19:19   ` [PATCH 1/3] x86, fpu: introduce per-cpu "bool in_kernel_fpu" Oleg Nesterov
  2015-01-15 19:20   ` [PATCH 2/3] x86, fpu: don't abuse ->has_fpu in __kernel_fpu_{begin,end}() Oleg Nesterov
@ 2015-01-15 19:20   ` Oleg Nesterov
  2015-01-16  2:30     ` Rik van Riel
  2015-01-20 12:55     ` [tip:x86/fpu] x86, fpu: Fix " tip-bot for Oleg Nesterov
  2015-01-19 18:51   ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Oleg Nesterov
  3 siblings, 2 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-15 19:20 UTC (permalink / raw)
  To: Rik van Riel, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

math_state_restore() can race with kernel_fpu_begin() if an irq comes
right after __thread_fpu_begin(): __save_init_fpu() will overwrite the
fpu->state we are going to restore.
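
In other words, the race looks like this (a sketch):

/*
 *   task T (math_state_restore)          irq on the same CPU
 *   ---------------------------------    ------------------------------
 *   __thread_fpu_begin(tsk);
 *                                        kernel_fpu_begin()
 *                                          __save_init_fpu(tsk);
 *                                          ... overwrites tsk->thread.fpu.state
 *                                              with the not-yet-restored regs
 *   restore_fpu_checking(tsk);
 *   ... restores the clobbered state
 */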

Add 2 simple helpers, kernel_fpu_disable() and kernel_fpu_enable()
which simply set/clear in_kernel_fpu, and change math_state_restore()
to exclude kernel_fpu_begin() in between.

Alternatively we could use local_irq_save/restore, but probably these
new helpers can have more users.

Perhaps they should disable/enable preemption themselves; in this case
we can remove preempt_disable() in __restore_xstate_sig().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 arch/x86/include/asm/i387.h |    4 ++++
 arch/x86/kernel/i387.c      |   11 +++++++++++
 arch/x86/kernel/traps.c     |   12 +++++-------
 3 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index 5e275d3..6eb6fcb 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -51,6 +51,10 @@ static inline void kernel_fpu_end(void)
 	preempt_enable();
 }
 
+/* Must be called with preempt disabled */
+extern void kernel_fpu_disable(void);
+extern void kernel_fpu_enable(void);
+
 /*
  * Some instructions like VIA's padlock instructions generate a spurious
  * DNA fault but don't modify SSE registers. And these instructions
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 12088a3..81049ff 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -21,6 +21,17 @@
 
 static DEFINE_PER_CPU(bool, in_kernel_fpu);
 
+void kernel_fpu_disable(void)
+{
+	WARN_ON(this_cpu_read(in_kernel_fpu));
+	this_cpu_write(in_kernel_fpu, true);
+}
+
+void kernel_fpu_enable(void)
+{
+	this_cpu_write(in_kernel_fpu, false);
+}
+
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 88900e2..fb4cb6a 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -788,18 +788,16 @@ void math_state_restore(void)
 		local_irq_disable();
 	}
 
+	/* Avoid __kernel_fpu_begin() right after __thread_fpu_begin() */
+	kernel_fpu_disable();
 	__thread_fpu_begin(tsk);
-
-	/*
-	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
-	 */
 	if (unlikely(restore_fpu_checking(tsk))) {
 		drop_init_fpu(tsk);
 		force_sig_info(SIGSEGV, SEND_SIG_PRIV, tsk);
-		return;
+	} else {
+		tsk->thread.fpu_counter++;
 	}
-
-	tsk->thread.fpu_counter++;
+	kernel_fpu_enable();
 }
 EXPORT_SYMBOL_GPL(math_state_restore);
 
-- 
1.5.5.1



^ permalink raw reply related	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too
  2015-01-15  2:49     ` Rik van Riel
@ 2015-01-15 19:34       ` Oleg Nesterov
  0 siblings, 0 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-15 19:34 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/14, Rik van Riel wrote:
>
> On 01/14/2015 01:36 PM, Oleg Nesterov wrote:
>
> >> @@ -466,6 +462,10 @@ static inline void switch_fpu_finish(void)
> >>
> >>  	__thread_fpu_begin(tsk);
> >>
> >> +	/* The FPU registers already have this task's FPU state. */
> >> +	if (fpu_lazy_restore(tsk, raw_smp_processor_id()))
> >> +		return;
> >> +
> >
> > Now that this is called before return to user-mode, I am not sure
> > this is correct. Note that __kernel_fpu_begin() doesn't clear
> > fpu_owner_task if use_eager_fpu().
>
> However, __kernel_fpu_begin() does call __thread_clear_has_fpu(),
> which clears the per-cpu fpu_owner_task variable, which is also
> checked by fpu_lazy_restore(), so I think this is actually
> correct.

Sure, but only if __thread_has_fpu().

But please ignore. My comment was confusing, sorry. What I actually
tried to say is that this patch is another reason why (I think) we
should start with kernel_fpu_begin/end.

If nothing else:

	1. interrupted_kernel_fpu_idle() should not fail if
	   use_eager_fpu() && !__thread_has_fpu(), otherwise your
	   changes will introduce a performance regression.

	   And in fact I think that it should only fail if
	   kernel_fpu_begin() is already in progress.

	2. And in this case this_cpu_write(fpu_owner_task, NULL)
	   can't depend on use_eager_fpu().

	   And in fact I think it should not depend on it in any case;
	   this only adds more confusion.

Please look at the initial cleanups I sent.
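
Concretely, point 1 amounts to something like this (a sketch of where
interrupted_kernel_fpu_idle() should end up):

static inline bool interrupted_kernel_fpu_idle(void)
{
	if (this_cpu_read(in_kernel_fpu))
		return false;

	if (use_eager_fpu())
		return true;

	return !__thread_has_fpu(current) &&
		(read_cr0() & X86_CR0_TS);
}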

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 1/3] x86, fpu: introduce per-cpu "bool in_kernel_fpu"
  2015-01-15 19:19   ` [PATCH 1/3] x86, fpu: introduce per-cpu "bool in_kernel_fpu" Oleg Nesterov
@ 2015-01-16  2:22     ` Rik van Riel
  2015-01-20 12:54     ` [tip:x86/fpu] x86, fpu: Introduce per-cpu in_kernel_fpu state tip-bot for Oleg Nesterov
  1 sibling, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-16  2:22 UTC (permalink / raw)
  To: Oleg Nesterov, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/15/2015 02:19 PM, Oleg Nesterov wrote:
> interrupted_kernel_fpu_idle() tries to detect if
> kernel_fpu_begin() is safe or not. In particular it should
> obviously deny the nested kernel_fpu_begin() and this logic looks
> very confusing.
> 
> If use_eager_fpu() == T we rely on a) __thread_has_fpu() check in 
> interrupted_kernel_fpu_idle(), and b) on the fact that _begin()
> does __thread_clear_has_fpu().
> 
> Otherwise we demand that the interrupted task has no FPU if it is
> in kernel mode; this works because __kernel_fpu_begin() does clts()
> and interrupted_kernel_fpu_idle() checks X86_CR0_TS.
> 
> Add the per-cpu "bool in_kernel_fpu" variable, and change this
> code to check/set/clear it. This allows us to do more cleanups and
> fixes; see the next changes.
> 
> The patch also moves WARN_ON_ONCE() under preempt_disable() just
> to make this_cpu_read() look better; this is not really needed. And
> in fact I think we should move it into __kernel_fpu_begin().
> 
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Reviewed-by: Rik van Riel <riel@redhat.com>


--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 2/3] x86, fpu: don't abuse ->has_fpu in __kernel_fpu_{begin,end}()
  2015-01-15 19:20   ` [PATCH 2/3] x86, fpu: don't abuse ->has_fpu in __kernel_fpu_{begin,end}() Oleg Nesterov
@ 2015-01-16  2:27     ` Rik van Riel
  2015-01-16 15:54       ` Oleg Nesterov
  2015-01-20 12:55     ` [tip:x86/fpu] x86, fpu: Don't abuse has_fpu in __kernel_fpu_begin /end() tip-bot for Oleg Nesterov
  1 sibling, 1 reply; 76+ messages in thread
From: Rik van Riel @ 2015-01-16  2:27 UTC (permalink / raw)
  To: Oleg Nesterov, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/15/2015 02:20 PM, Oleg Nesterov wrote:
> Now that we have in_kernel_fpu we can remove
> __thread_clear_has_fpu() in __kernel_fpu_begin(). And this allows
> us to replace the asymmetrical and nontrivial use_eager_fpu +
> tsk_used_math check in kernel_fpu_end() with the same
> __thread_has_fpu() check.
> 
> The logic becomes really simple; if _begin() does save() then
> _end() needs restore(), and this is controlled by __thread_has_fpu().
> Otherwise they do clts/stts unless use_eager_fpu().
> 
> Not only does this make begin/end symmetrical and imo more
> understandable, it potentially allows us to change irq_fpu_usable()
> to avoid all other checks except "in_kernel_fpu".
> 
> Also, with this patch __kernel_fpu_end() does
> restore_fpu_checking() and WARNs if it fails instead of
> math_state_restore(). I think this looks better because we no
> longer need __thread_fpu_begin(), and it would be better to report
> the failure in this case.
> 
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>
> ---
>  arch/x86/kernel/i387.c |   19 ++++++-------------
>  1 files changed, 6 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
> index a815723..12088a3 100644
> --- a/arch/x86/kernel/i387.c
> +++ b/arch/x86/kernel/i387.c
> @@ -81,9 +81,7 @@ void __kernel_fpu_begin(void)
>  	this_cpu_write(in_kernel_fpu, true);
> 
>  	if (__thread_has_fpu(me)) {
> -		__thread_clear_has_fpu(me);

I will put that line back in the patch series that defers the
loading of FPU state until the switch to user space, but I
guess we can go either way for now...

>  		__save_init_fpu(me);
> -		/* We do 'stts()' in __kernel_fpu_end() */
>  	} else if (!use_eager_fpu()) {
>  		this_cpu_write(fpu_owner_task, NULL);
>  		clts();
> @@ -93,17 +91,12 @@ EXPORT_SYMBOL(__kernel_fpu_begin);
> 
>  void __kernel_fpu_end(void)
>  {
> -	if (use_eager_fpu()) {
> -		/*
> -		 * For eager fpu, most the time, tsk_used_math() is true.
> -		 * Restore the user math as we are done with the kernel usage.
> -		 * At few instances during thread exit, signal handling etc,
> -		 * tsk_used_math() is false. Those few places will take proper
> -		 * actions, so we don't need to restore the math here.
> -		 */
> -		if (likely(tsk_used_math(current)))
> -			math_state_restore();
> -	} else {
> +	struct task_struct *me = current;
> +
> +	if (__thread_has_fpu(me)) {
> +		if (WARN_ON(restore_fpu_checking(me)))
> +			drop_init_fpu(me);
> +	} else if (!use_eager_fpu()) {
>  		stts();
>  	}
> 


--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 3/3] x86, fpu: fix math_state_restore() race with kernel_fpu_begin()
  2015-01-15 19:20   ` [PATCH 3/3] x86, fpu: fix math_state_restore() race with kernel_fpu_begin() Oleg Nesterov
@ 2015-01-16  2:30     ` Rik van Riel
  2015-01-16 16:03       ` Oleg Nesterov
  2015-01-20 12:55     ` [tip:x86/fpu] x86, fpu: Fix " tip-bot for Oleg Nesterov
  1 sibling, 1 reply; 76+ messages in thread
From: Rik van Riel @ 2015-01-16  2:30 UTC (permalink / raw)
  To: Oleg Nesterov, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/15/2015 02:20 PM, Oleg Nesterov wrote:
> math_state_restore() can race with kernel_fpu_begin() if an irq comes
> right after __thread_fpu_begin(): __save_init_fpu() will overwrite the
> fpu->state we are going to restore.
> 
> Add 2 simple helpers, kernel_fpu_disable() and kernel_fpu_enable() 
> which simply set/clear in_kernel_fpu, and change
> math_state_restore() to exclude kernel_fpu_begin() in between.
> 
> Alternatively we could use local_irq_save/restore, but probably
> these new helpers can have more users.
> 
> Perhaps they should disable/enable preemption themselves; in this
> case we can remove preempt_disable() in __restore_xstate_sig().

Given that math_state_restore does an implicit preempt_disable
through local_irq_disable, I am not sure whether adding an
explicit preempt_disable would be good or bad.

It's not like the additional locking rule makes this code any
more complex.

> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Reviewed-by: Rik van Riel <riel@redhat.com>

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 2/3] x86, fpu: don't abuse ->has_fpu in __kernel_fpu_{begin,end}()
  2015-01-16  2:27     ` Rik van Riel
@ 2015-01-16 15:54       ` Oleg Nesterov
  2015-01-16 16:07         ` Rik van Riel
  0 siblings, 1 reply; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-16 15:54 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Linus Torvalds, Suresh Siddha, linux-kernel, mingo, hpa,
	matt.fleming, bp, pbonzini, tglx, luto

On 01/15, Rik van Riel wrote:
>
> On 01/15/2015 02:20 PM, Oleg Nesterov wrote:
> > @@ -81,9 +81,7 @@ void __kernel_fpu_begin(void)
> >  	this_cpu_write(in_kernel_fpu, true);
> >
> >  	if (__thread_has_fpu(me)) {
> > -		__thread_clear_has_fpu(me);
>
> I will put that line back in the patch series that defers the
> loading of FPU state until the switch to user space,

I think this needs more discussion.

Firstly, __thread_clear_has_fpu() should go into __kernel_fpu_end(); it
should replace restore_fpu_checking().

But does your series actually need this change? Correctness-wise this is
not needed (afaics). Performance-wise I am not sure; kernel_fpu_begin()
is an unlikely event. Plus I am not sure this is a win (in general), but
I can easily be wrong.

> but I
> guess we can go either way for now...

Yes, this should not really conflict with your changes in any case.

Given that you acked 1/3 and 3/3, perhaps you can ack this one as well?

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 3/3] x86, fpu: fix math_state_restore() race with kernel_fpu_begin()
  2015-01-16  2:30     ` Rik van Riel
@ 2015-01-16 16:03       ` Oleg Nesterov
  0 siblings, 0 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-16 16:03 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Linus Torvalds, Suresh Siddha, linux-kernel, mingo, hpa,
	matt.fleming, bp, pbonzini, tglx, luto

On 01/15, Rik van Riel wrote:
>
> On 01/15/2015 02:20 PM, Oleg Nesterov wrote:
> > math_state_restore() can race with kernel_fpu_begin() if an irq comes
> > right after __thread_fpu_begin(): __save_init_fpu() will overwrite the
> > fpu->state we are going to restore.
> >
> > Add 2 simple helpers, kernel_fpu_disable() and kernel_fpu_enable()
> > which simply set/clear in_kernel_fpu, and change
> > math_state_restore() to exclude kernel_fpu_begin() in between.
> >
> > Alternatively we could use local_irq_save/restore, but probably
> > these new helpers can have more users.
> >
> > Perhaps they should disable/enable preemption themselves; in this
> > case we can remove preempt_disable() in __restore_xstate_sig().
>
> Given that math_state_restore does an implicit preempt_disable
> through local_irq_disable,

Not really. do_device_not_available() calls it with irqs disabled;
__restore_xstate_sig() calls it under preempt_disable().

> I am not sure whether adding an
> explicit preempt_disable would be good or bad.

Me too, but this code needs cleanups in any case imo; let's do this
later.

> Reviewed-by: Rik van Riel <riel@redhat.com>

Thanks!


I'll try to send another short series today; we need to remove
__thread_has_fpu() from interrupted_kernel_fpu_idle() before we
add TIF_LOAD_FPU.

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 2/3] x86, fpu: don't abuse ->has_fpu in __kernel_fpu_{begin,end}()
  2015-01-16 15:54       ` Oleg Nesterov
@ 2015-01-16 16:07         ` Rik van Riel
  0 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-16 16:07 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Linus Torvalds, Suresh Siddha, linux-kernel, mingo, hpa,
	matt.fleming, bp, pbonzini, tglx, luto

On 01/16/2015 10:54 AM, Oleg Nesterov wrote:
> On 01/15, Rik van Riel wrote:
>> 
>> On 01/15/2015 02:20 PM, Oleg Nesterov wrote:
>>> @@ -81,9 +81,7 @@ void __kernel_fpu_begin(void)
>>>  	this_cpu_write(in_kernel_fpu, true);
>>> 
>>>  	if (__thread_has_fpu(me)) {
>>> -		__thread_clear_has_fpu(me);
>> 
>> I will put that line back in the patch series that defers the 
>> loading of FPU state until the switch to user space,
> 
> I think this needs more discussion.
> 
> Firstly, __thread_clear_has_fpu() should go into
> __kernel_fpu_end(); it should replace restore_fpu_checking().
> 
> But does your series actually need this change? Correctness-wise
> this is not needed (afaics). Performance-wise I am not sure;
> kernel_fpu_begin() is an unlikely event. Plus I am not sure this is
> a win (in general), but I can easily be wrong.

__kernel_fpu_begin() / __kernel_fpu_end() are used all the time
when running KVM guests. The KVM VCPU thread has both user space
FPU state and guest VCPU FPU state. Generally the VCPU thread stays
in the kernel, and rarely (if ever) exits to user space.
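
The pattern in question looks roughly like this (a sketch simplified from
kvm_load_guest_fpu()/kvm_put_guest_fpu(); names shortened, error handling
and details elided):

static void load_guest_fpu(struct kvm_vcpu *vcpu)
{
	__kernel_fpu_begin();				/* save user FPU state */
	fpu_restore_checking(&vcpu->arch.guest_fpu);	/* load guest state */
}

static void put_guest_fpu(struct kvm_vcpu *vcpu)
{
	fpu_save_init(&vcpu->arch.guest_fpu);	/* save guest FPU state */
	__kernel_fpu_end();			/* restore user FPU state */
}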

>> but I guess we can go either way for now...
> 
> Yes, this should not really conflict with your changes in any
> case.
> 
> Given that you acked 1/3 and 3/3, perhaps you can ack this one as
> well?

Sure, we can figure out the cleanest thing to do for KVM later on.
Let's get these cleanups merged first.

Acked-by: Rik van Riel <riel@redhat.com>

--
All rights reversed

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH 0/3] x86, fpu: more eagerfpu cleanups
  2015-01-15 19:19 ` [PATCH 0/3] x86, fpu: kernel_fpu_begin/end initial cleanups/fix Oleg Nesterov
                     ` (2 preceding siblings ...)
  2015-01-15 19:20   ` [PATCH 3/3] x86, fpu: fix math_state_restore() race with kernel_fpu_begin() Oleg Nesterov
@ 2015-01-19 18:51   ` Oleg Nesterov
  2015-01-19 18:51     ` [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu() Oleg Nesterov
                       ` (3 more replies)
  3 siblings, 4 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-19 18:51 UTC (permalink / raw)
  To: Rik van Riel, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/15, Oleg Nesterov wrote:
>
> Let me send the initial kernel_fpu_begin/end cleanups; I believe they make
> sense anyway and won't conflict with your changes.
>
> This is actually a resend; I sent more patches some time ago but they were
> ignored.
>
> Note that (I hope) we can do more changes on top of this series, in
> particular:
>
> 	- remove all checks from irq_fpu_usable() except in_kernel_fpu
>
> 	- do not abuse FPU in kernel threads, this makes sense even if
> 	  use_eager_fpu(), and with or without the changes you proposed.

On top of this series.

Initially I was going to make more changes, but then I decided to delay
the cleanups. IMHO this code needs them in any case. math_state_restore()
and its usage do not look nice. init_fpu() too, and unlazy_fpu(current)
is simply wrong afaics. Fortunately the only caller of init_fpu(current)
is coredump, so this task can't return to user-mode; still, this doesn't
look good. And it should be unified with save_init_fpu(), which has the
wrong WARN_ON_ONCE(!__thread_has_fpu(tsk)).

And I am not sure that unlazy_fpu() is correct wrt __kernel_fpu_begin(),
but probably this is because I do not know how the FPU works. If the
nested __save_init_fpu() is fine, then why did __kernel_fpu_begin()
(before the previous changes) do __thread_clear_has_fpu() first?

Rik, as a reminder, I think that your changes need 1 + 2 at least, to
avoid a performance regression. Perhaps this needs a single patch.

3/3 is not strictly necessary, but imo it makes sense anyway, even
without your changes. And if we add TIF_LOAD_FPU, it would be nice to
filter out kthreads automatically.


Could someone review this series?

If this makes any sense, I'll try to make the cleanups later.

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu()
  2015-01-19 18:51   ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Oleg Nesterov
@ 2015-01-19 18:51     ` Oleg Nesterov
  2015-01-20 14:15       ` Rik van Riel
                         ` (2 more replies)
  2015-01-19 18:51     ` [PATCH 2/3] x86, fpu: always allow FPU in interrupt " Oleg Nesterov
                       ` (2 subsequent siblings)
  3 siblings, 3 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-19 18:51 UTC (permalink / raw)
  To: Rik van Riel, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

__kernel_fpu_begin() does nothing if !__thread_has_fpu() && use_eager_fpu();
perhaps it assumes that this case is simply impossible. This is certainly
not possible if in_interrupt() == T; interrupted_user_mode() should have
FPU, and interrupted_kernel_fpu_idle() should fail if !__thread_has_fpu().

However, even if use_eager_fpu() == T a task can do drop_fpu(), then switch
to another thread which becomes fpu_owner_task, then resume and call some
function which does kernel_fpu_begin(). Say, an exiting task does a lot of
things after exit_thread(); it is not safe to assume that it can't use FPU
in these paths.
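
Spelled out, the problematic sequence is (a sketch; use_fpu_in_kernel()
stands for any kernel_fpu_begin() user on the exit path):

	exit_thread();		/* drop_fpu(): task no longer has the FPU */
	...
	schedule();		/* another task runs here and becomes
				 * fpu_owner_task on this CPU */
	...
	kernel_fpu_begin();	/* !__thread_has_fpu(current), so the old
				 * code did nothing, leaving fpu_owner_task
				 * pointing at the other task... */
	use_fpu_in_kernel();	/* ...while clobbering its live registers */
	kernel_fpu_end();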

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 arch/x86/kernel/i387.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 81049ff..26f0e80 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -93,9 +93,10 @@ void __kernel_fpu_begin(void)
 
 	if (__thread_has_fpu(me)) {
 		__save_init_fpu(me);
-	} else if (!use_eager_fpu()) {
+	} else {
 		this_cpu_write(fpu_owner_task, NULL);
-		clts();
+		if (!use_eager_fpu())
+			clts();
 	}
 }
 EXPORT_SYMBOL(__kernel_fpu_begin);
-- 
1.5.5.1



^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 2/3] x86, fpu: always allow FPU in interrupt if use_eager_fpu()
  2015-01-19 18:51   ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Oleg Nesterov
  2015-01-19 18:51     ` [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu() Oleg Nesterov
@ 2015-01-19 18:51     ` Oleg Nesterov
  2015-01-20 14:46       ` Rik van Riel
                         ` (3 more replies)
  2015-01-19 18:52     ` [PATCH 3/3] x86, fpu: don't abuse FPU in kernel threads " Oleg Nesterov
  2015-02-20 12:10     ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Borislav Petkov
  3 siblings, 4 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-19 18:51 UTC (permalink / raw)
  To: Rik van Riel, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

The __thread_has_fpu() check in interrupted_kernel_fpu_idle() was needed
to prevent the nested kernel_fpu_begin(). Now that we have in_kernel_fpu,
and the !__thread_has_fpu() case in __kernel_fpu_begin() does not depend
on use_eager_fpu() (except for clts), we can remove it.

__thread_has_fpu() can be false even if use_eager_fpu(), but this case
does not differ from the !use_eager_fpu() case except that we should not
worry about X86_CR0_TS; __kernel_fpu_begin/end will not touch this bit.

Note: I think we can kill all irq_fpu_usable() checks except in_kernel_fpu;
we just need to record the state of X86_CR0_TS in __kernel_fpu_begin() and
conditionalize stts() in __kernel_fpu_end(), but this needs another patch.
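
Something like this (a sketch only, not part of this patch; "saved_ts" is
a hypothetical per-cpu flag recording X86_CR0_TS at _begin() time):

static DEFINE_PER_CPU(bool, saved_ts);

void __kernel_fpu_begin(void)
{
	struct task_struct *me = current;

	this_cpu_write(in_kernel_fpu, true);

	if (__thread_has_fpu(me)) {
		__save_init_fpu(me);
	} else {
		this_cpu_write(fpu_owner_task, NULL);
		if (!use_eager_fpu()) {
			this_cpu_write(saved_ts, read_cr0() & X86_CR0_TS);
			clts();
		}
	}
}

void __kernel_fpu_end(void)
{
	struct task_struct *me = current;

	if (__thread_has_fpu(me)) {
		if (WARN_ON(restore_fpu_checking(me)))
			drop_init_fpu(me);
	} else if (!use_eager_fpu() && this_cpu_read(saved_ts)) {
		stts();
	}

	this_cpu_write(in_kernel_fpu, false);
}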

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 arch/x86/kernel/i387.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 26f0e80..4734865 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -41,8 +41,8 @@ void kernel_fpu_enable(void)
  * be set (so that the clts/stts pair does nothing that is
  * visible in the interrupted kernel thread).
  *
- * Except for the eagerfpu case when we return 1 unless we've already
- * been eager and saved the state in kernel_fpu_begin().
+ * Except for the eagerfpu case when we return true; in the likely case
+ * the thread has FPU but we are not going to set/clear TS.
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
@@ -50,7 +50,7 @@ static inline bool interrupted_kernel_fpu_idle(void)
 		return false;
 
 	if (use_eager_fpu())
-		return __thread_has_fpu(current);
+		return true;
 
 	return !__thread_has_fpu(current) &&
 		(read_cr0() & X86_CR0_TS);
-- 
1.5.5.1



^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 3/3] x86, fpu: don't abuse FPU in kernel threads if use_eager_fpu()
  2015-01-19 18:51   ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Oleg Nesterov
  2015-01-19 18:51     ` [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu() Oleg Nesterov
  2015-01-19 18:51     ` [PATCH 2/3] x86, fpu: always allow FPU in interrupt " Oleg Nesterov
@ 2015-01-19 18:52     ` Oleg Nesterov
  2015-01-20 14:53       ` Rik van Riel
                         ` (2 more replies)
  2015-02-20 12:10     ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Borislav Petkov
  3 siblings, 3 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-01-19 18:52 UTC (permalink / raw)
  To: Rik van Riel, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

Afaics there is no reason why kernel threads should have FPU context
even if use_eager_fpu() == T. Now that interrupted_kernel_fpu_idle()
does not check __thread_has_fpu() we can remove the init_fpu() code
from eager_fpu_init() and change flush_thread() called by do_execve()
to initialize FPU.

Note: of course, the change in flush_thread() is horrible and must be
cleaned up. We need a new helper, and flush_thread() should return the
error if init_fpu() fails.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 arch/x86/kernel/process.c |    7 +++++++
 arch/x86/kernel/xsave.c   |   13 +------------
 2 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index e127dda..dd9a069 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -130,6 +130,7 @@ void flush_thread(void)
 
 	flush_ptrace_hw_breakpoint(tsk);
 	memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
+
 	drop_init_fpu(tsk);
 	/*
 	 * Free the FPU state for non xsave platforms. They get reallocated
@@ -137,6 +138,12 @@ void flush_thread(void)
 	 */
 	if (!use_eager_fpu())
 		free_thread_xstate(tsk);
+	else if (!used_math()) {
+		/* kthread execs. TODO: cleanup this horror. */
+		if (WARN_ON(init_fpu(current)))
+			force_sig(SIGKILL, current);
+		math_state_restore();
+	}
 }
 
 static void hard_disable_TSC(void)
diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c
index 0de1fae..de9dcf8 100644
--- a/arch/x86/kernel/xsave.c
+++ b/arch/x86/kernel/xsave.c
@@ -688,7 +688,7 @@ void eager_fpu_init(void)
 {
 	static __refdata void (*boot_func)(void) = eager_fpu_init_bp;
 
-	clear_used_math();
+	WARN_ON(used_math());
 	current_thread_info()->status = 0;
 
 	if (eagerfpu == ENABLE)
@@ -703,17 +703,6 @@ void eager_fpu_init(void)
 		boot_func();
 		boot_func = NULL;
 	}
-
-	/*
-	 * This is same as math_state_restore(). But use_xsave() is
-	 * not yet patched to use math_state_restore().
-	 */
-	init_fpu(current);
-	__thread_fpu_begin(current);
-	if (cpu_has_xsave)
-		xrstor_state(init_xstate_buf, -1);
-	else
-		fxrstor_checking(&init_xstate_buf->i387);
 }
 
 /*
-- 
1.5.5.1



^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [tip:x86/fpu] x86, fpu: Introduce per-cpu in_kernel_fpu state
  2015-01-15 19:19   ` [PATCH 1/3] x86, fpu: introduce per-cpu "bool in_kernel_fpu" Oleg Nesterov
  2015-01-16  2:22     ` Rik van Riel
@ 2015-01-20 12:54     ` tip-bot for Oleg Nesterov
  1 sibling, 0 replies; 76+ messages in thread
From: tip-bot for Oleg Nesterov @ 2015-01-20 12:54 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, oleg, tglx, torvalds, sbsiddha, linux-kernel, riel, mingo

Commit-ID:  14e153ef75eecae8fd0738ffb42120f4962a00cd
Gitweb:     http://git.kernel.org/tip/14e153ef75eecae8fd0738ffb42120f4962a00cd
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Thu, 15 Jan 2015 20:19:43 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Tue, 20 Jan 2015 13:53:07 +0100

x86, fpu: Introduce per-cpu in_kernel_fpu state

interrupted_kernel_fpu_idle() tries to detect if kernel_fpu_begin()
is safe or not. In particular it should obviously deny the nested
kernel_fpu_begin() and this logic looks very confusing.

If use_eager_fpu() == T we rely on a) __thread_has_fpu() check in
interrupted_kernel_fpu_idle(), and b) on the fact that _begin() does
__thread_clear_has_fpu().

Otherwise we demand that the interrupted task has no FPU if it is in
kernel mode; this works because __kernel_fpu_begin() does clts() and
interrupted_kernel_fpu_idle() checks X86_CR0_TS.

Add the per-cpu "bool in_kernel_fpu" variable, and change this code
to check/set/clear it. This allows us to do more cleanups and fixes; see
the next changes.

The patch also moves WARN_ON_ONCE() under preempt_disable() just to
make this_cpu_read() look better; this is not really needed. And in
fact I think we should move it into __kernel_fpu_begin().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: matt.fleming@intel.com
Cc: bp@suse.de
Cc: pbonzini@redhat.com
Cc: luto@amacapital.net
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Suresh Siddha <sbsiddha@gmail.com>
Link: http://lkml.kernel.org/r/20150115191943.GB27332@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/i387.h | 2 +-
 arch/x86/kernel/i387.c      | 9 +++++++++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index ed8089d..5e275d3 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -40,8 +40,8 @@ extern void __kernel_fpu_end(void);
 
 static inline void kernel_fpu_begin(void)
 {
-	WARN_ON_ONCE(!irq_fpu_usable());
 	preempt_disable();
+	WARN_ON_ONCE(!irq_fpu_usable());
 	__kernel_fpu_begin();
 }
 
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index a9a4229..a815723 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -19,6 +19,8 @@
 #include <asm/fpu-internal.h>
 #include <asm/user.h>
 
+static DEFINE_PER_CPU(bool, in_kernel_fpu);
+
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
@@ -33,6 +35,9 @@
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
+	if (this_cpu_read(in_kernel_fpu))
+		return false;
+
 	if (use_eager_fpu())
 		return __thread_has_fpu(current);
 
@@ -73,6 +78,8 @@ void __kernel_fpu_begin(void)
 {
 	struct task_struct *me = current;
 
+	this_cpu_write(in_kernel_fpu, true);
+
 	if (__thread_has_fpu(me)) {
 		__thread_clear_has_fpu(me);
 		__save_init_fpu(me);
@@ -99,6 +106,8 @@ void __kernel_fpu_end(void)
 	} else {
 		stts();
 	}
+
+	this_cpu_write(in_kernel_fpu, false);
 }
 EXPORT_SYMBOL(__kernel_fpu_end);
 

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [tip:x86/fpu] x86, fpu: Don't abuse has_fpu in __kernel_fpu_begin /end()
  2015-01-15 19:20   ` [PATCH 2/3] x86, fpu: don't abuse ->has_fpu in __kernel_fpu_{begin,end}() Oleg Nesterov
  2015-01-16  2:27     ` Rik van Riel
@ 2015-01-20 12:55     ` tip-bot for Oleg Nesterov
  1 sibling, 0 replies; 76+ messages in thread
From: tip-bot for Oleg Nesterov @ 2015-01-20 12:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, sbsiddha, riel, hpa, linux-kernel, torvalds, tglx, oleg

Commit-ID:  33a3ebdc077fd85f1bf4d4586eea579b297461ae
Gitweb:     http://git.kernel.org/tip/33a3ebdc077fd85f1bf4d4586eea579b297461ae
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Thu, 15 Jan 2015 20:20:05 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Tue, 20 Jan 2015 13:53:07 +0100

x86, fpu: Don't abuse has_fpu in __kernel_fpu_begin/end()

Now that we have in_kernel_fpu we can remove __thread_clear_has_fpu()
in __kernel_fpu_begin(). And this allows us to replace the asymmetrical
and nontrivial use_eager_fpu + tsk_used_math check in kernel_fpu_end()
with the same __thread_has_fpu() check.

The logic becomes really simple; if _begin() does save() then _end()
needs restore(), and this is controlled by __thread_has_fpu(). Otherwise
they do clts/stts unless use_eager_fpu().

Not only does this make begin/end symmetrical and imo more understandable,
it potentially allows us to change irq_fpu_usable() to avoid all other
checks except "in_kernel_fpu".

Also, with this patch __kernel_fpu_end() does restore_fpu_checking()
and WARNs if it fails instead of math_state_restore(). I think this
looks better because we no longer need __thread_fpu_begin(), and it
would be better to report the failure in this case.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: matt.fleming@intel.com
Cc: bp@suse.de
Cc: pbonzini@redhat.com
Cc: luto@amacapital.net
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Suresh Siddha <sbsiddha@gmail.com>
Link: http://lkml.kernel.org/r/20150115192005.GC27332@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/i387.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index a815723..12088a3 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -81,9 +81,7 @@ void __kernel_fpu_begin(void)
 	this_cpu_write(in_kernel_fpu, true);
 
 	if (__thread_has_fpu(me)) {
-		__thread_clear_has_fpu(me);
 		__save_init_fpu(me);
-		/* We do 'stts()' in __kernel_fpu_end() */
 	} else if (!use_eager_fpu()) {
 		this_cpu_write(fpu_owner_task, NULL);
 		clts();
@@ -93,17 +91,12 @@ EXPORT_SYMBOL(__kernel_fpu_begin);
 
 void __kernel_fpu_end(void)
 {
-	if (use_eager_fpu()) {
-		/*
-		 * For eager fpu, most the time, tsk_used_math() is true.
-		 * Restore the user math as we are done with the kernel usage.
-		 * At few instances during thread exit, signal handling etc,
-		 * tsk_used_math() is false. Those few places will take proper
-		 * actions, so we don't need to restore the math here.
-		 */
-		if (likely(tsk_used_math(current)))
-			math_state_restore();
-	} else {
+	struct task_struct *me = current;
+
+	if (__thread_has_fpu(me)) {
+		if (WARN_ON(restore_fpu_checking(me)))
+			drop_init_fpu(me);
+	} else if (!use_eager_fpu()) {
 		stts();
 	}
 

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [tip:x86/fpu] x86, fpu: Fix math_state_restore() race with kernel_fpu_begin()
  2015-01-15 19:20   ` [PATCH 3/3] x86, fpu: fix math_state_restore() race with kernel_fpu_begin() Oleg Nesterov
  2015-01-16  2:30     ` Rik van Riel
@ 2015-01-20 12:55     ` tip-bot for Oleg Nesterov
  1 sibling, 0 replies; 76+ messages in thread
From: tip-bot for Oleg Nesterov @ 2015-01-20 12:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: riel, hpa, mingo, tglx, linux-kernel, oleg, torvalds, sbsiddha

Commit-ID:  7575637ab293861a799f3bbafe0d8c597389f4e9
Gitweb:     http://git.kernel.org/tip/7575637ab293861a799f3bbafe0d8c597389f4e9
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Thu, 15 Jan 2015 20:20:28 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Tue, 20 Jan 2015 13:53:07 +0100

x86, fpu: Fix math_state_restore() race with kernel_fpu_begin()

math_state_restore() can race with kernel_fpu_begin() if an irq comes
right after __thread_fpu_begin(): __save_init_fpu() will overwrite the
fpu->state we are going to restore.

Add 2 simple helpers, kernel_fpu_disable() and kernel_fpu_enable()
which simply set/clear in_kernel_fpu, and change math_state_restore()
to exclude kernel_fpu_begin() in between.

Alternatively we could use local_irq_save/restore, but probably these
new helpers can have more users.

Perhaps they should disable/enable preemption themselves; in this case
we can remove preempt_disable() in __restore_xstate_sig().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: matt.fleming@intel.com
Cc: bp@suse.de
Cc: pbonzini@redhat.com
Cc: luto@amacapital.net
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Suresh Siddha <sbsiddha@gmail.com>
Link: http://lkml.kernel.org/r/20150115192028.GD27332@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/i387.h |  4 ++++
 arch/x86/kernel/i387.c      | 11 +++++++++++
 arch/x86/kernel/traps.c     | 12 +++++-------
 3 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index 5e275d3..6eb6fcb 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -51,6 +51,10 @@ static inline void kernel_fpu_end(void)
 	preempt_enable();
 }
 
+/* Must be called with preempt disabled */
+extern void kernel_fpu_disable(void);
+extern void kernel_fpu_enable(void);
+
 /*
  * Some instructions like VIA's padlock instructions generate a spurious
  * DNA fault but don't modify SSE registers. And these instructions
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 12088a3..81049ff 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -21,6 +21,17 @@
 
 static DEFINE_PER_CPU(bool, in_kernel_fpu);
 
+void kernel_fpu_disable(void)
+{
+	WARN_ON(this_cpu_read(in_kernel_fpu));
+	this_cpu_write(in_kernel_fpu, true);
+}
+
+void kernel_fpu_enable(void)
+{
+	this_cpu_write(in_kernel_fpu, false);
+}
+
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 88900e2..fb4cb6a 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -788,18 +788,16 @@ void math_state_restore(void)
 		local_irq_disable();
 	}
 
+	/* Avoid __kernel_fpu_begin() right after __thread_fpu_begin() */
+	kernel_fpu_disable();
 	__thread_fpu_begin(tsk);
-
-	/*
-	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
-	 */
 	if (unlikely(restore_fpu_checking(tsk))) {
 		drop_init_fpu(tsk);
 		force_sig_info(SIGSEGV, SEND_SIG_PRIV, tsk);
-		return;
+	} else {
+		tsk->thread.fpu_counter++;
 	}
-
-	tsk->thread.fpu_counter++;
+	kernel_fpu_enable();
 }
 EXPORT_SYMBOL_GPL(math_state_restore);
 

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* Re: [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu()
  2015-01-19 18:51     ` [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu() Oleg Nesterov
@ 2015-01-20 14:15       ` Rik van Riel
  2015-02-20 18:13       ` Borislav Petkov
  2015-03-03 11:27       ` [tip:x86/fpu] x86/fpu: " tip-bot for Oleg Nesterov
  2 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-20 14:15 UTC (permalink / raw)
  To: Oleg Nesterov, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/19/2015 01:51 PM, Oleg Nesterov wrote:
> __kernel_fpu_begin() does nothing if !__thread_has_fpu() && use_eager_fpu();
> perhaps it assumes that this case is simply impossible. This is certainly
> not possible if in_interrupt() == T; interrupted_user_mode() should have
> the FPU, and interrupted_kernel_fpu_idle() should fail if !__thread_has_fpu().
>
> However, even if use_eager_fpu() == T a task can do drop_fpu(), then switch
> to another thread which becomes fpu_owner_task, then resume and call some
> function which does kernel_fpu_begin(). Say, an exiting task does a lot of
> things after exit_thread(); it is not safe to assume that it can't use the
> FPU in these paths.
>
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Reviewed-by: Rik van Riel <riel@redhat.com>


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 2/3] x86, fpu: always allow FPU in interrupt if use_eager_fpu()
  2015-01-19 18:51     ` [PATCH 2/3] x86, fpu: always allow FPU in interrupt " Oleg Nesterov
@ 2015-01-20 14:46       ` Rik van Riel
  2015-01-20 22:46       ` Andy Lutomirski
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-20 14:46 UTC (permalink / raw)
  To: Oleg Nesterov, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/19/2015 01:51 PM, Oleg Nesterov wrote:
> The __thread_has_fpu() check in interrupted_kernel_fpu_idle() was needed
> to prevent nested kernel_fpu_begin(). Now that we have in_kernel_fpu,
> and the !__thread_has_fpu() case in __kernel_fpu_begin() does not depend
> on use_eager_fpu() (except for clts), we can remove it.
>
> __thread_has_fpu() can be false even if use_eager_fpu(), but this case
> does not differ from the !use_eager_fpu() case, except that we should not
> worry about X86_CR0_TS: __kernel_fpu_begin/end will not touch this bit.
>
> Note: I think we can kill all irq_fpu_usable() checks except in_kernel_fpu;
> we just need to record the state of X86_CR0_TS in __kernel_fpu_begin() and
> conditionalize stts() in __kernel_fpu_end(), but this needs another patch.
>
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Reviewed-by: Rik van Riel <riel@redhat.com>


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 3/3] x86, fpu: don't abuse FPU in kernel threads if use_eager_fpu()
  2015-01-19 18:52     ` [PATCH 3/3] x86, fpu: don't abuse FPU in kernel threads " Oleg Nesterov
@ 2015-01-20 14:53       ` Rik van Riel
  2015-02-23 15:31       ` Borislav Petkov
  2015-03-03 11:28       ` [tip:x86/fpu] x86/fpu: Don't " tip-bot for Oleg Nesterov
  2 siblings, 0 replies; 76+ messages in thread
From: Rik van Riel @ 2015-01-20 14:53 UTC (permalink / raw)
  To: Oleg Nesterov, Linus Torvalds, Suresh Siddha
  Cc: linux-kernel, mingo, hpa, matt.fleming, bp, pbonzini, tglx, luto

On 01/19/2015 01:52 PM, Oleg Nesterov wrote:
> Afaics there is no reason why kernel threads should have FPU context
> even if use_eager_fpu() == T. Now that interrupted_kernel_fpu_idle()
> does not check __thread_has_fpu() we can remove the init_fpu() code
> from eager_fpu_init() and change flush_thread(), called by do_execve(),
> to initialize the FPU.
>
> Note: of course, the change in flush_thread() is horrible and must be
> cleaned up. We need a new helper, and flush_thread() should return an
> error if init_fpu() fails.
>
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Nice. This should help prevent us from loading and saving
the totally unused FPU state for things like idle threads
and other kernel threads.

Reviewed-by: Rik van Riel <riel@redhat.com>


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 2/3] x86, fpu: always allow FPU in interrupt if use_eager_fpu()
  2015-01-19 18:51     ` [PATCH 2/3] x86, fpu: always allow FPU in interrupt " Oleg Nesterov
  2015-01-20 14:46       ` Rik van Riel
@ 2015-01-20 22:46       ` Andy Lutomirski
  2015-02-20 21:48       ` Borislav Petkov
  2015-03-03 11:28       ` [tip:x86/fpu] x86/fpu: Always " tip-bot for Oleg Nesterov
  3 siblings, 0 replies; 76+ messages in thread
From: Andy Lutomirski @ 2015-01-20 22:46 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Rik van Riel, Linus Torvalds, Suresh Siddha, linux-kernel,
	Ingo Molnar, H. Peter Anvin, Matt Fleming, Borislav Petkov,
	Paolo Bonzini, Thomas Gleixner

On Mon, Jan 19, 2015 at 10:51 AM, Oleg Nesterov <oleg@redhat.com> wrote:
> The __thread_has_fpu() check in interrupted_kernel_fpu_idle() was needed
> to prevent nested kernel_fpu_begin(). Now that we have in_kernel_fpu,
> and the !__thread_has_fpu() case in __kernel_fpu_begin() does not depend
> on use_eager_fpu() (except for clts), we can remove it.
>
> __thread_has_fpu() can be false even if use_eager_fpu(), but this case
> does not differ from the !use_eager_fpu() case, except that we should not
> worry about X86_CR0_TS: __kernel_fpu_begin/end will not touch this bit.
>
> Note: I think we can kill all irq_fpu_usable() checks except in_kernel_fpu;
> we just need to record the state of X86_CR0_TS in __kernel_fpu_begin() and
> conditionalize stts() in __kernel_fpu_end(), but this needs another patch.
>
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>
> ---
>  arch/x86/kernel/i387.c |    6 +++---
>  1 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
> index 26f0e80..4734865 100644
> --- a/arch/x86/kernel/i387.c
> +++ b/arch/x86/kernel/i387.c
> @@ -41,8 +41,8 @@ void kernel_fpu_enable(void)
>   * be set (so that the clts/stts pair does nothing that is
>   * visible in the interrupted kernel thread).
>   *
> - * Except for the eagerfpu case when we return 1 unless we've already
> - * been eager and saved the state in kernel_fpu_begin().
> + * Except for the eagerfpu case when we return true; in the likely case
> + * the thread has FPU but we are not going to set/clear TS.
>   */
>  static inline bool interrupted_kernel_fpu_idle(void)
>  {
> @@ -50,7 +50,7 @@ static inline bool interrupted_kernel_fpu_idle(void)
>                 return false;
>
>         if (use_eager_fpu())
> -               return __thread_has_fpu(current);
> +               return true;
>
>         return !__thread_has_fpu(current) &&
>                 (read_cr0() & X86_CR0_TS);
> --
> 1.5.5.1
>
>

Looks good to me.

--Andy

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 0/3] x86, fpu: more eagerfpu cleanups
  2015-01-19 18:51   ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Oleg Nesterov
                       ` (2 preceding siblings ...)
  2015-01-19 18:52     ` [PATCH 3/3] x86, fpu: don't abuse FPU in kernel threads " Oleg Nesterov
@ 2015-02-20 12:10     ` Borislav Petkov
  2015-02-20 13:30       ` Oleg Nesterov
  3 siblings, 1 reply; 76+ messages in thread
From: Borislav Petkov @ 2015-02-20 12:10 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Rik van Riel, Linus Torvalds, Suresh Siddha, linux-kernel, mingo,
	hpa, matt.fleming, bp, pbonzini, tglx, luto

On Mon, Jan 19, 2015 at 07:51:09PM +0100, Oleg Nesterov wrote:
> math_state_restore() and its usage don't look nice. init_fpu() too,

Yeah, about that:

I see:

math_state_restore
...
	 if (!tsk_used_math(tsk))
		init_fpu()

and init_fpu() then does:

	if (tsk_used_math(tsk))

Could use a cleanup and so on... Perhaps it is in the works already :)
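
Purely as an illustration of the direction (fpu_alloc_init() is a made-up
name, not anything posted), the duplicated test could collapse to:

	/* hypothetical: do the used_math() check exactly once */
	static int fpu_alloc_init(struct task_struct *tsk)
	{
		if (tsk_used_math(tsk))
			return 0;		/* already initialized */
		return init_fpu(tsk);		/* assumes init_fpu() drops
						   its own recheck */
	}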

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 0/3] x86, fpu: more eagerfpu cleanups
  2015-02-20 12:10     ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Borislav Petkov
@ 2015-02-20 13:30       ` Oleg Nesterov
  0 siblings, 0 replies; 76+ messages in thread
From: Oleg Nesterov @ 2015-02-20 13:30 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Rik van Riel, Linus Torvalds, Suresh Siddha, linux-kernel, mingo,
	hpa, matt.fleming, bp, pbonzini, tglx, luto

On 02/20, Borislav Petkov wrote:
>
> On Mon, Jan 19, 2015 at 07:51:09PM +0100, Oleg Nesterov wrote:
> > math_state_restore() and its usage don't look nice. init_fpu() too,
>
> Yeah, about that:
>
> I see:
>
> math_state_restore
> ...
> 	 if (!tsk_used_math(tsk))
> 		init_fpu()
>
> and init_fpu() then does:
>
> 	if (tsk_used_math(tsk))
>

Yes, and more. math_state_restore() assumes that it is called with irqs
disabled, at least if !tsk_used_math. That is why 3/3 calls init_fpu()
first. Not only does this not look clean, it is simply not true in
general.

The comment above init_fpu() is simply wrong, and the unlazy_fpu() call
in there doesn't look nice. It mixes two completely different things.

> Could use a cleanup and so on... Perhaps it is in the works already :)

Yes, I'll try to make the cleanups on top of these changes. And let me
repeat that there is another reason for 1/3 and 2/3 at least (3/3 makes
sense too): if we add TIF_LOAD_FPU we need to avoid a performance
regression (irq_fpu_usable() should not fail if !__thread_has_fpu()).

Oleg.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu()
  2015-01-19 18:51     ` [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu() Oleg Nesterov
  2015-01-20 14:15       ` Rik van Riel
@ 2015-02-20 18:13       ` Borislav Petkov
  2015-03-03 11:27       ` [tip:x86/fpu] x86/fpu: " tip-bot for Oleg Nesterov
  2 siblings, 0 replies; 76+ messages in thread
From: Borislav Petkov @ 2015-02-20 18:13 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Rik van Riel, Linus Torvalds, Suresh Siddha, linux-kernel, mingo,
	hpa, matt.fleming, bp, pbonzini, tglx, luto

On Mon, Jan 19, 2015 at 07:51:32PM +0100, Oleg Nesterov wrote:
> __kernel_fpu_begin() does nothing if !__thread_has_fpu() && use_eager_fpu();
> perhaps it assumes that this case is simply impossible. This is certainly
> not possible if in_interrupt() == T; interrupted_user_mode() should have
> the FPU, and interrupted_kernel_fpu_idle() should fail if !__thread_has_fpu().
>
> However, even if use_eager_fpu() == T a task can do drop_fpu(), then switch
> to another thread which becomes fpu_owner_task, then resume and call some
> function which does kernel_fpu_begin(). Say, an exiting task does a lot of
> things after exit_thread(); it is not safe to assume that it can't use the
> FPU in these paths.

Yap, that makes sense. Applied.

> Signed-off-by: Oleg Nesterov <oleg@redhat.com>
> ---
>  arch/x86/kernel/i387.c |    5 +++--
>  1 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
> index 81049ff..26f0e80 100644
> --- a/arch/x86/kernel/i387.c
> +++ b/arch/x86/kernel/i387.c
> @@ -93,9 +93,10 @@ void __kernel_fpu_begin(void)
>  
>  	if (__thread_has_fpu(me)) {
>  		__save_init_fpu(me);
> -	} else if (!use_eager_fpu()) {
> +	} else {
>  		this_cpu_write(fpu_owner_task, NULL);
> -		clts();
> +		if (!use_eager_fpu())
> +			clts();
> 	}

Some git archeology:

304bceda6a18 ("x86, fpu: use non-lazy fpu restore for processors supporting xsave")

added that different handling on use_eager_fpu() boxes.
interrupted_kernel_fpu_idle() always failed on those machines back then,
though, and when it was switched to

	if (use_eager_fpu())
		return __thread_has_fpu(current);

in

5187b28ff082 ("x86: Allow FPU to be used at interrupt time even with eagerfpu")

that commit forgot to correct the "else if" in __kernel_fpu_begin().

Here's the relevant hunk from 304bceda6a18:

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index ab6a2e8028ae..528557470ddb 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -22,7 +22,15 @@
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
- * We can do a kernel_fpu_begin/end() pair *ONLY* if that
+ * For now, on xsave platforms we will return interrupted
+ * kernel FPU as not-idle. TBD: As we use non-lazy FPU restore
+ * for xsave platforms, ideally we can change the return value
+ * to something like __thread_has_fpu(current). But we need to
+ * be careful of doing __thread_clear_has_fpu() before saving
+ * the FPU etc for supporting nested uses etc. For now, take
+ * the simple route!
+ *
+ * On others, we can do a kernel_fpu_begin/end() pair *ONLY* if that
  * pair does nothing at all: the thread must not have fpu (so
  * that we don't try to save the FPU state), and TS must
  * be set (so that the clts/stts pair does nothing that is
@@ -30,6 +38,9 @@
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
+       if (use_xsave())
+               return 0;
+
        return !__thread_has_fpu(current) &&
                (read_cr0() & X86_CR0_TS);
 }
@@ -73,7 +84,7 @@ void kernel_fpu_begin(void)
                __save_init_fpu(me);
                __thread_clear_has_fpu(me);
                /* We do 'stts()' in kernel_fpu_end() */
-       } else {
+       } else if (!use_xsave()) {
                this_cpu_write(fpu_owner_task, NULL);
                clts();
        }

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* Re: [PATCH 2/3] x86, fpu: always allow FPU in interrupt if use_eager_fpu()
  2015-01-19 18:51     ` [PATCH 2/3] x86, fpu: always allow FPU in interrupt " Oleg Nesterov
  2015-01-20 14:46       ` Rik van Riel
  2015-01-20 22:46       ` Andy Lutomirski
@ 2015-02-20 21:48       ` Borislav Petkov
  2015-03-03 11:28       ` [tip:x86/fpu] x86/fpu: Always " tip-bot for Oleg Nesterov
  3 siblings, 0 replies; 76+ messages in thread
From: Borislav Petkov @ 2015-02-20 21:48 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Rik van Riel, Linus Torvalds, Suresh Siddha, linux-kernel, mingo,
	hpa, matt.fleming, bp, pbonzini, tglx, luto

On Mon, Jan 19, 2015 at 07:51:51PM +0100, Oleg Nesterov wrote:
> The __thread_has_fpu() check in interrupted_kernel_fpu_idle() was needed
> to prevent nested kernel_fpu_begin(). Now that we have in_kernel_fpu,
> and the !__thread_has_fpu() case in __kernel_fpu_begin() does not depend
> on use_eager_fpu() (except for clts), we can remove it.
>
> __thread_has_fpu() can be false even if use_eager_fpu(), but this case
> does not differ from the !use_eager_fpu() case, except that we should not
> worry about X86_CR0_TS: __kernel_fpu_begin/end will not touch this bit.
>
> Note: I think we can kill all irq_fpu_usable() checks except in_kernel_fpu;
> we just need to record the state of X86_CR0_TS in __kernel_fpu_begin() and
> conditionalize stts() in __kernel_fpu_end(), but this needs another patch.
> 
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Applied, thanks.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 3/3] x86, fpu: don't abuse FPU in kernel threads if use_eager_fpu()
  2015-01-19 18:52     ` [PATCH 3/3] x86, fpu: don't abuse FPU in kernel threads " Oleg Nesterov
  2015-01-20 14:53       ` Rik van Riel
@ 2015-02-23 15:31       ` Borislav Petkov
  2015-03-03 11:28       ` [tip:x86/fpu] x86/fpu: Don't " tip-bot for Oleg Nesterov
  2 siblings, 0 replies; 76+ messages in thread
From: Borislav Petkov @ 2015-02-23 15:31 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Rik van Riel, Linus Torvalds, Suresh Siddha, linux-kernel, mingo,
	hpa, matt.fleming, bp, pbonzini, tglx, luto

On Mon, Jan 19, 2015 at 07:52:12PM +0100, Oleg Nesterov wrote:
> Afaics there is no reason why kernel threads should have FPU context
> even if use_eager_fpu() == T. Now that interrupted_kernel_fpu_idle()
> does not check __thread_has_fpu() we can remove the init_fpu() code
				   ^
				... in the use_eager_fpu() case...

I added it to the commit message.

> from eager_fpu_init() and change flush_thread(), called by do_execve(),
> to initialize the FPU.
>
> Note: of course, the change in flush_thread() is horrible and must be
> cleaned up. We need a new helper, and flush_thread() should return an
> error if init_fpu() fails.
> 
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>

Applied, thanks.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [tip:x86/fpu] x86/fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu()
  2015-01-19 18:51     ` [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu() Oleg Nesterov
  2015-01-20 14:15       ` Rik van Riel
  2015-02-20 18:13       ` Borislav Petkov
@ 2015-03-03 11:27       ` tip-bot for Oleg Nesterov
  2 siblings, 0 replies; 76+ messages in thread
From: tip-bot for Oleg Nesterov @ 2015-03-03 11:27 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, torvalds, tglx, mingo, bp, luto, priikone, riel,
	sbsiddha, oleg, hpa

Commit-ID:  7aeccb83e76316b365e4b44a1dd982ee22a7d8b2
Gitweb:     http://git.kernel.org/tip/7aeccb83e76316b365e4b44a1dd982ee22a7d8b2
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Mon, 19 Jan 2015 19:51:32 +0100
Committer:  Borislav Petkov <bp@suse.de>
CommitDate: Mon, 23 Feb 2015 15:50:28 +0100

x86/fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu()

__kernel_fpu_begin() does nothing if !__thread_has_fpu() && use_eager_fpu();
perhaps it assumes that this case is simply impossible. This is certainly
not possible if in_interrupt() == T; interrupted_user_mode() should have
the FPU, and interrupted_kernel_fpu_idle() should fail if !__thread_has_fpu().

However, even if use_eager_fpu() == T a task can do drop_fpu(), then switch
to another thread which becomes fpu_owner_task, then resume and call some
function which does kernel_fpu_begin(). Say, an exiting task does a lot of
things after exit_thread(); it is not safe to assume that it can't use the
FPU in these paths.
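
As a call sequence, the scenario reads roughly like this (the exact path
an exiting task takes is an illustration; the point is the stale
fpu_owner_task):

	/* use_eager_fpu() == T, task A */
	drop_fpu(current);	/* A gives up the FPU			*/
	schedule();		/* B runs, becomes fpu_owner_task	*/
	/* ... A resumes ... */
	kernel_fpu_begin();	/* clobbers the FPU registers; without
				   clearing fpu_owner_task the lazy
				   restore shortcut may later let B run
				   on the clobbered registers		*/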

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Suresh Siddha <sbsiddha@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Pekka Riikonen <priikone@iki.fi>
Link: http://lkml.kernel.org/r/20150119185132.GB16427@redhat.com
Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/kernel/i387.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index f59d806..ad3a2a2 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -93,9 +93,10 @@ void __kernel_fpu_begin(void)
 
 	if (__thread_has_fpu(me)) {
 		__save_init_fpu(me);
-	} else if (!use_eager_fpu()) {
+	} else {
 		this_cpu_write(fpu_owner_task, NULL);
-		clts();
+		if (!use_eager_fpu())
+			clts();
 	}
 }
 EXPORT_SYMBOL(__kernel_fpu_begin);

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [tip:x86/fpu] x86/fpu: Always allow FPU in interrupt if use_eager_fpu()
  2015-01-19 18:51     ` [PATCH 2/3] x86, fpu: always allow FPU in interrupt " Oleg Nesterov
                         ` (2 preceding siblings ...)
  2015-02-20 21:48       ` Borislav Petkov
@ 2015-03-03 11:28       ` tip-bot for Oleg Nesterov
  3 siblings, 0 replies; 76+ messages in thread
From: tip-bot for Oleg Nesterov @ 2015-03-03 11:28 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: bp, sbsiddha, linux-kernel, mingo, hpa, torvalds, luto, riel, tglx, oleg

Commit-ID:  4b2e762e2e53c721458a83d547b222178bb72a34
Gitweb:     http://git.kernel.org/tip/4b2e762e2e53c721458a83d547b222178bb72a34
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Mon, 19 Jan 2015 19:51:51 +0100
Committer:  Borislav Petkov <bp@suse.de>
CommitDate: Mon, 23 Feb 2015 15:50:41 +0100

x86/fpu: Always allow FPU in interrupt if use_eager_fpu()

The __thread_has_fpu() check in interrupted_kernel_fpu_idle() was needed
to prevent nested kernel_fpu_begin(). Now that we have in_kernel_fpu,
and the !__thread_has_fpu() case in __kernel_fpu_begin() does not depend
on use_eager_fpu() (except for clts), we can remove it.

__thread_has_fpu() can be false even if use_eager_fpu(), but this case
does not differ from the !use_eager_fpu() case, except that we should not
worry about X86_CR0_TS: __kernel_fpu_begin()/end() will not touch this bit.

Note: I think we can kill all irq_fpu_usable() checks except in_kernel_fpu;
we just need to record the state of X86_CR0_TS in __kernel_fpu_begin() and
conditionalize stts() in __kernel_fpu_end(), but this needs another patch.
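
The follow-up sketched in the note could look roughly like this (an
untested illustration; the per-cpu variable name is invented here):

	static DEFINE_PER_CPU(bool, fpu_cr0_ts_was_set);	/* invented */

	/* in the !__thread_has_fpu() path of __kernel_fpu_begin(): */
	this_cpu_write(fpu_cr0_ts_was_set, read_cr0() & X86_CR0_TS);
	clts();

	/* and in __kernel_fpu_end(): */
	if (this_cpu_read(fpu_cr0_ts_was_set))
		stts();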

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Suresh Siddha <sbsiddha@gmail.com>
Link: http://lkml.kernel.org/r/20150119185151.GC16427@redhat.com
Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/kernel/i387.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index ad3a2a2..8416b5f 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -41,8 +41,8 @@ void kernel_fpu_enable(void)
  * be set (so that the clts/stts pair does nothing that is
  * visible in the interrupted kernel thread).
  *
- * Except for the eagerfpu case when we return 1 unless we've already
- * been eager and saved the state in kernel_fpu_begin().
+ * Except for the eagerfpu case when we return true; in the likely case
+ * the thread has FPU but we are not going to set/clear TS.
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
@@ -50,7 +50,7 @@ static inline bool interrupted_kernel_fpu_idle(void)
 		return false;
 
 	if (use_eager_fpu())
-		return __thread_has_fpu(current);
+		return true;
 
 	return !__thread_has_fpu(current) &&
 		(read_cr0() & X86_CR0_TS);

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [tip:x86/fpu] x86/fpu: Don't abuse FPU in kernel threads if use_eager_fpu()
  2015-01-19 18:52     ` [PATCH 3/3] x86, fpu: don't abuse FPU in kernel threads " Oleg Nesterov
  2015-01-20 14:53       ` Rik van Riel
  2015-02-23 15:31       ` Borislav Petkov
@ 2015-03-03 11:28       ` tip-bot for Oleg Nesterov
  2 siblings, 0 replies; 76+ messages in thread
From: tip-bot for Oleg Nesterov @ 2015-03-03 11:28 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: oleg, tglx, torvalds, riel, luto, linux-kernel, bp, sbsiddha, mingo, hpa

Commit-ID:  110d7f7513bbb916b8654da9e2973ac5bed929a9
Gitweb:     http://git.kernel.org/tip/110d7f7513bbb916b8654da9e2973ac5bed929a9
Author:     Oleg Nesterov <oleg@redhat.com>
AuthorDate: Mon, 19 Jan 2015 19:52:12 +0100
Committer:  Borislav Petkov <bp@suse.de>
CommitDate: Mon, 23 Feb 2015 15:50:45 +0100

x86/fpu: Don't abuse FPU in kernel threads if use_eager_fpu()

AFAICS, there is no reason why kernel threads should have FPU context
even if use_eager_fpu() == T. Now that interrupted_kernel_fpu_idle()
does not check __thread_has_fpu() in the use_eager_fpu() case, we
can remove the init_fpu() code from eager_fpu_init() and change
flush_thread(), called by do_execve(), to initialize the FPU.

Note: of course, the change in flush_thread() is horrible and must be
cleaned up. We need a new helper, and flush_thread() should return an
error if init_fpu() fails.
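
A possible shape for the helper the note asks for (hypothetical; the name
fpu_exec_init() is invented, and the patch itself open-codes the logic in
flush_thread()):

	/* hypothetical: set up a usable FPU context for an exec'ing kthread */
	static int fpu_exec_init(void)
	{
		int err = init_fpu(current);

		if (err)
			return err;
		math_state_restore();	/* load the freshly initialized state */
		return 0;
	}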

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Suresh Siddha <sbsiddha@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/20150119185212.GD16427@redhat.com
Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/kernel/process.c |  7 +++++++
 arch/x86/kernel/xsave.c   | 13 +------------
 2 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index ce8b103..8348037 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -130,6 +130,7 @@ void flush_thread(void)
 
 	flush_ptrace_hw_breakpoint(tsk);
 	memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
+
 	drop_init_fpu(tsk);
 	/*
 	 * Free the FPU state for non xsave platforms. They get reallocated
@@ -137,6 +138,12 @@ void flush_thread(void)
 	 */
 	if (!use_eager_fpu())
 		free_thread_xstate(tsk);
+	else if (!used_math()) {
+		/* kthread execs. TODO: cleanup this horror. */
+		if (WARN_ON(init_fpu(current)))
+			force_sig(SIGKILL, current);
+		math_state_restore();
+	}
 }
 
 static void hard_disable_TSC(void)
diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c
index 0de1fae..de9dcf8 100644
--- a/arch/x86/kernel/xsave.c
+++ b/arch/x86/kernel/xsave.c
@@ -688,7 +688,7 @@ void eager_fpu_init(void)
 {
 	static __refdata void (*boot_func)(void) = eager_fpu_init_bp;
 
-	clear_used_math();
+	WARN_ON(used_math());
 	current_thread_info()->status = 0;
 
 	if (eagerfpu == ENABLE)
@@ -703,17 +703,6 @@ void eager_fpu_init(void)
 		boot_func();
 		boot_func = NULL;
 	}
-
-	/*
-	 * This is same as math_state_restore(). But use_xsave() is
-	 * not yet patched to use math_state_restore().
-	 */
-	init_fpu(current);
-	__thread_fpu_begin(current);
-	if (cpu_has_xsave)
-		xrstor_state(init_xstate_buf, -1);
-	else
-		fxrstor_checking(&init_xstate_buf->i387);
 }
 
 /*

^ permalink raw reply related	[flat|nested] 76+ messages in thread

end of thread, other threads:[~2015-03-03 11:28 UTC | newest]

Thread overview: 76+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-01-11 21:46 [RFC PATCH 0/11 BROKEN] move FPU context loading to userspace switch riel
2015-01-11 21:46 ` [RFC PATCH 01/11] x86,fpu: document the data structures a little riel
2015-01-12 21:18   ` Borislav Petkov
2015-01-12 21:38     ` Rik van Riel
2015-01-12 21:52   ` Dave Hansen
2015-01-13 15:59     ` Rik van Riel
2015-01-11 21:46 ` [RFC PATCH 02/11] x86,fpu: replace fpu_switch_t with a thread flag riel
2015-01-13 15:24   ` Oleg Nesterov
2015-01-13 16:35     ` Rik van Riel
2015-01-13 16:55       ` Oleg Nesterov
2015-01-11 21:46 ` [RFC PATCH 03/11] x86,fpu: move __thread_fpu_begin to when the task has the fpu riel
2015-01-13 15:24   ` Oleg Nesterov
2015-01-13 16:37     ` Rik van Riel
2015-01-11 21:46 ` [RFC PATCH 04/11] x86,fpu: defer FPU restore until return to userspace riel
2015-01-13 15:53   ` Oleg Nesterov
2015-01-13 17:07   ` Andy Lutomirski
2015-01-13 17:11   ` Oleg Nesterov
2015-01-13 17:18     ` Andy Lutomirski
2015-01-13 17:44       ` Rik van Riel
2015-01-13 17:57         ` Andy Lutomirski
2015-01-13 18:13           ` Rik van Riel
2015-01-13 18:26             ` Andy Lutomirski
2015-01-13 17:54     ` Rik van Riel
2015-01-13 18:22       ` Oleg Nesterov
2015-01-13 18:30         ` Oleg Nesterov
2015-01-13 20:06           ` Rik van Riel
2015-01-14 17:56             ` Oleg Nesterov
2015-01-13 17:58   ` Oleg Nesterov
2015-01-13 19:32     ` Rik van Riel
2015-01-11 21:46 ` [RFC PATCH 05/11] x86,fpu: ensure FPU state is reloaded from memory if task is traced riel
2015-01-13 16:19   ` Oleg Nesterov
2015-01-13 16:33     ` Rik van Riel
2015-01-13 16:50       ` Oleg Nesterov
2015-01-13 16:57         ` Rik van Riel
2015-01-11 21:46 ` [RFC PATCH 06/11] x86,fpu: lazily skip fpu restore with eager fpu mode, too riel
2015-01-13 17:11   ` Andy Lutomirski
2015-01-13 20:43     ` Rik van Riel
2015-01-14 18:36   ` Oleg Nesterov
2015-01-15  2:49     ` Rik van Riel
2015-01-15 19:34       ` Oleg Nesterov
2015-01-11 21:46 ` [RFC PATCH 07/11] x86,fpu: store current fpu pointer, instead of fpu_owner_task riel
2015-01-11 21:46 ` [RFC PATCH 08/11] x86,fpu: restore user FPU state lazily after __kernel_fpu_end riel
2015-01-14 18:43   ` Oleg Nesterov
2015-01-14 19:08     ` Oleg Nesterov
2015-01-11 21:46 ` [RFC PATCH 09/11] x86,fpu,kvm: keep vcpu FPU active as long as it is resident riel
2015-01-11 21:46 ` [RFC PATCH 10/11] x86,fpu: fix fpu_copy to deal with not-loaded fpu riel
2015-01-11 21:46 ` [RFC PATCH 11/11] (BROKEN) x86,fpu: broken signal handler stack setup riel
2015-01-15 19:19 ` [PATCH 0/3] x86, fpu: kernel_fpu_begin/end initial cleanups/fix Oleg Nesterov
2015-01-15 19:19   ` [PATCH 1/3] x86, fpu: introduce per-cpu "bool in_kernel_fpu" Oleg Nesterov
2015-01-16  2:22     ` Rik van Riel
2015-01-20 12:54     ` [tip:x86/fpu] x86, fpu: Introduce per-cpu in_kernel_fpu state tip-bot for Oleg Nesterov
2015-01-15 19:20   ` [PATCH 2/3] x86, fpu: don't abuse ->has_fpu in __kernel_fpu_{begin,end}() Oleg Nesterov
2015-01-16  2:27     ` Rik van Riel
2015-01-16 15:54       ` Oleg Nesterov
2015-01-16 16:07         ` Rik van Riel
2015-01-20 12:55     ` [tip:x86/fpu] x86, fpu: Don't abuse has_fpu in __kernel_fpu_begin/end() tip-bot for Oleg Nesterov
2015-01-15 19:20   ` [PATCH 3/3] x86, fpu: fix math_state_restore() race with kernel_fpu_begin() Oleg Nesterov
2015-01-16  2:30     ` Rik van Riel
2015-01-16 16:03       ` Oleg Nesterov
2015-01-20 12:55     ` [tip:x86/fpu] x86, fpu: Fix " tip-bot for Oleg Nesterov
2015-01-19 18:51   ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Oleg Nesterov
2015-01-19 18:51     ` [PATCH 1/3] x86, fpu: __kernel_fpu_begin() should clear fpu_owner_task even if use_eager_fpu() Oleg Nesterov
2015-01-20 14:15       ` Rik van Riel
2015-02-20 18:13       ` Borislav Petkov
2015-03-03 11:27       ` [tip:x86/fpu] x86/fpu: " tip-bot for Oleg Nesterov
2015-01-19 18:51     ` [PATCH 2/3] x86, fpu: always allow FPU in interrupt " Oleg Nesterov
2015-01-20 14:46       ` Rik van Riel
2015-01-20 22:46       ` Andy Lutomirski
2015-02-20 21:48       ` Borislav Petkov
2015-03-03 11:28       ` [tip:x86/fpu] x86/fpu: Always " tip-bot for Oleg Nesterov
2015-01-19 18:52     ` [PATCH 3/3] x86, fpu: don't abuse FPU in kernel threads " Oleg Nesterov
2015-01-20 14:53       ` Rik van Riel
2015-02-23 15:31       ` Borislav Petkov
2015-03-03 11:28       ` [tip:x86/fpu] x86/fpu: Don't " tip-bot for Oleg Nesterov
2015-02-20 12:10     ` [PATCH 0/3] x86, fpu: more eagerfpu cleanups Borislav Petkov
2015-02-20 13:30       ` Oleg Nesterov
