From: Ingo Molnar <mingo@kernel.org>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>, Dave Hansen <dave@sr71.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	Oleg Nesterov <oleg@redhat.com>
Subject: [GIT PULL] x86 fixes
Date: Sat, 18 Jul 2015 05:18:10 +0200
Message-ID: <20150718031810.GA19818@gmail.com>

Linus,

Please pull the latest x86-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86-urgent-for-linus

   # HEAD: 5aaeb5c01c5b6c0be7b7aadbf3ace9f3a4458c3d x86/fpu, sched: Introduce CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT and use it on x86

Two families of fixes:

  - Fix an FPU context related boot crash on newer x86 hardware with larger 
    context sizes than what most people test. To fix this without ugly kludges or 
    extensive reverts we had to touch the core task allocator, to allow x86 to 
    determine the task_struct size dynamically, at boot time. (The core of the 
    idea is sketched below.)

    I've tested it on a number of x86 platforms, and I cross-built it to a handful 
    of architectures:

	                                 (warns)               (warns)
	testing     x86-64:  -git:  pass (    0),  -tip:  pass (    0)
	testing     x86-32:  -git:  pass (    0),  -tip:  pass (    0)
	testing        arm:  -git:  pass ( 1359),  -tip:  pass ( 1359)
	testing       cris:  -git:  pass ( 1031),  -tip:  pass ( 1031)
	testing       m32r:  -git:  pass ( 1135),  -tip:  pass ( 1135)
	testing       m68k:  -git:  pass ( 1471),  -tip:  pass ( 1471)
	testing       mips:  -git:  pass ( 1162),  -tip:  pass ( 1162)
	testing    mn10300:  -git:  pass ( 1058),  -tip:  pass ( 1058)
	testing     parisc:  -git:  pass ( 1846),  -tip:  pass ( 1846)
	testing      sparc:  -git:  pass ( 1185),  -tip:  pass ( 1185)

     ... so I hope the cross-arch impact is 'none', as intended.

    (by Dave Hansen)
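
    A condensed sketch of the sizing logic (the real code is in 
    fpu__init_task_struct_size() in the patch below):

	/*
	 * Start from the static size of task_struct, subtract the
	 * statically-sized (worst-case padded) FPU register state at
	 * its end, then add back the register state size the boot CPU
	 * actually needs:
	 */
	int task_size = sizeof(struct task_struct);

	task_size -= sizeof(((struct task_struct *)0)->thread.fpu.state);
	task_size += xstate_size;

	arch_task_struct_size = task_size;

    This only works because 'fpu' is the last member of 'thread_struct' and 
    'thread' is the last member of 'task_struct' - the patch enforces both 
    with build-time asserts.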

  - Fix various NMI-handling related bugs unearthed by the big asm code rewrite, 
    and generally make the NMI code more robust and more maintainable while at 
    it (the new nesting protocol is sketched below). These changes are a bit 
    late in the cycle; I hope they are still acceptable.

    (by Andy Lutomirski)
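
    The nesting protocol itself, condensed from the do_nmi() rework in the 
    patch below:

	/* A nested NMI is latched and replayed by the outer NMI: */
	if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
		this_cpu_write(nmi_state, NMI_LATCHED);
		return;
	}
	this_cpu_write(nmi_state, NMI_EXECUTING);
	this_cpu_write(nmi_cr2, read_cr2());
nmi_restart:

	/* ... nmi_enter(), handlers, nmi_exit() ... */

	/* An NMI-time page fault can clobber CR2 - restore it: */
	if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))
		write_cr2(this_cpu_read(nmi_cr2));

	/* If another NMI got latched meanwhile, replay it: */
	if (this_cpu_dec_return(nmi_state))
		goto nmi_restart;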

  out-of-topic modifications in x86-urgent-for-linus:
  -----------------------------------------------------
  arch/Kconfig                       # 5aaeb5c01c5b: x86/fpu, sched: Introduce CO
  fs/proc/kcore.c                    # 5aaeb5c01c5b: x86/fpu, sched: Introduce CO
  kernel/fork.c                      # 0c8c0f03e3a2: x86/fpu, sched: Dynamically 

 Thanks,

	Ingo

------------------>
Andy Lutomirski (9):
      x86/nmi: Enable nested do_nmi() handling for 64-bit kernels
      x86/nmi/64: Remove asm code that saves CR2
      x86/nmi/64: Switch stacks on userspace NMI entry
      x86/nmi/64: Improve nested NMI comments
      x86/nmi/64: Reorder nested NMI checks
      x86/nmi/64: Use DF to avoid userspace RSP confusing nested NMI detection
      x86/nmi/64: Minor asm simplification
      x86/nmi/64: Make the "NMI executing" variable more consistent
      x86/entry/64, x86/nmi/64: Add CONFIG_DEBUG_ENTRY NMI testing code

Dave Hansen (1):
      x86/fpu, sched: Dynamically allocate 'struct fpu'

Ingo Molnar (1):
      x86/fpu, sched: Introduce CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT and use it on x86


 arch/Kconfig                     |   4 +
 arch/x86/Kconfig                 |   1 +
 arch/x86/Kconfig.debug           |  12 ++
 arch/x86/entry/entry_64.S        | 299 ++++++++++++++++++++++++++-------------
 arch/x86/include/asm/fpu/types.h |  72 +++++-----
 arch/x86/include/asm/processor.h |  10 +-
 arch/x86/kernel/fpu/init.c       |  40 ++++++
 arch/x86/kernel/nmi.c            | 123 +++++++---------
 arch/x86/kernel/process.c        |   2 +-
 fs/proc/kcore.c                  |   4 +-
 include/linux/sched.h            |  16 ++-
 kernel/fork.c                    |   7 +-
 12 files changed, 376 insertions(+), 214 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index bec6666a3cc4..8a8ea7110de8 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -221,6 +221,10 @@ config ARCH_TASK_STRUCT_ALLOCATOR
 config ARCH_THREAD_INFO_ALLOCATOR
 	bool
 
+# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
+config ARCH_WANTS_DYNAMIC_TASK_STRUCT
+	bool
+
 config HAVE_REGS_AND_STACK_ACCESS_API
 	bool
 	help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3dbb7e7909ca..b3a1a5d77d92 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -41,6 +41,7 @@ config X86
 	select ARCH_USE_CMPXCHG_LOCKREF		if X86_64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
+	select ARCH_WANTS_DYNAMIC_TASK_STRUCT
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_WANT_IPC_PARSE_VERSION	if X86_32
 	select ARCH_WANT_OPTIONAL_GPIOLIB
diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
index a15893d17c55..d8c0d3266173 100644
--- a/arch/x86/Kconfig.debug
+++ b/arch/x86/Kconfig.debug
@@ -297,6 +297,18 @@ config OPTIMIZE_INLINING
 
 	  If unsure, say N.
 
+config DEBUG_ENTRY
+	bool "Debug low-level entry code"
+	depends on DEBUG_KERNEL
+	---help---
+	  This option enables sanity checks in x86's low-level entry code.
+	  Some of these sanity checks may slow down kernel entries and
+	  exits or otherwise impact performance.
+
+	  This is currently used to help test NMI code.
+
+	  If unsure, say N.
+
 config DEBUG_NMI_SELFTEST
 	bool "NMI Selftest"
 	depends on DEBUG_KERNEL && X86_LOCAL_APIC
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 3bb2c4302df1..8cb3e438f21e 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1237,11 +1237,12 @@ ENTRY(nmi)
 	 *  If the variable is not set and the stack is not the NMI
 	 *  stack then:
 	 *    o Set the special variable on the stack
-	 *    o Copy the interrupt frame into a "saved" location on the stack
-	 *    o Copy the interrupt frame into a "copy" location on the stack
+	 *    o Copy the interrupt frame into an "outermost" location on the
+	 *      stack
+	 *    o Copy the interrupt frame into an "iret" location on the stack
 	 *    o Continue processing the NMI
 	 *  If the variable is set or the previous stack is the NMI stack:
-	 *    o Modify the "copy" location to jump to the repeate_nmi
+	 *    o Modify the "iret" location to jump to the repeat_nmi
 	 *    o return back to the first NMI
 	 *
 	 * Now on exit of the first NMI, we first clear the stack variable
@@ -1250,31 +1251,151 @@ ENTRY(nmi)
 	 * a nested NMI that updated the copy interrupt stack frame, a
 	 * jump will be made to the repeat_nmi code that will handle the second
 	 * NMI.
+	 *
+	 * However, espfix prevents us from directly returning to userspace
+	 * with a single IRET instruction.  Similarly, IRET to user mode
+	 * can fault.  We therefore handle NMIs from user space like
+	 * other IST entries.
 	 */
 
 	/* Use %rdx as our temp variable throughout */
 	pushq	%rdx
 
+	testb	$3, CS-RIP+8(%rsp)
+	jz	.Lnmi_from_kernel
+
+	/*
+	 * NMI from user mode.  We need to run on the thread stack, but we
+	 * can't go through the normal entry paths: NMIs are masked, and
+	 * we don't want to enable interrupts, because then we'll end
+	 * up in an awkward situation in which IRQs are on but NMIs
+	 * are off.
+	 */
+
+	SWAPGS
+	cld
+	movq	%rsp, %rdx
+	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+	pushq	5*8(%rdx)	/* pt_regs->ss */
+	pushq	4*8(%rdx)	/* pt_regs->rsp */
+	pushq	3*8(%rdx)	/* pt_regs->flags */
+	pushq	2*8(%rdx)	/* pt_regs->cs */
+	pushq	1*8(%rdx)	/* pt_regs->rip */
+	pushq   $-1		/* pt_regs->orig_ax */
+	pushq   %rdi		/* pt_regs->di */
+	pushq   %rsi		/* pt_regs->si */
+	pushq   (%rdx)		/* pt_regs->dx */
+	pushq   %rcx		/* pt_regs->cx */
+	pushq   %rax		/* pt_regs->ax */
+	pushq   %r8		/* pt_regs->r8 */
+	pushq   %r9		/* pt_regs->r9 */
+	pushq   %r10		/* pt_regs->r10 */
+	pushq   %r11		/* pt_regs->r11 */
+	pushq	%rbx		/* pt_regs->rbx */
+	pushq	%rbp		/* pt_regs->rbp */
+	pushq	%r12		/* pt_regs->r12 */
+	pushq	%r13		/* pt_regs->r13 */
+	pushq	%r14		/* pt_regs->r14 */
+	pushq	%r15		/* pt_regs->r15 */
+
+	/*
+	 * At this point we no longer need to worry about stack damage
+	 * due to nesting -- we're on the normal thread stack and we're
+	 * done with the NMI stack.
+	 */
+
+	movq	%rsp, %rdi
+	movq	$-1, %rsi
+	call	do_nmi
+
+	/*
+	 * Return back to user mode.  We must *not* do the normal exit
+	 * work, because we don't want to enable interrupts.  Fortunately,
+	 * do_nmi doesn't modify pt_regs.
+	 */
+	SWAPGS
+	jmp	restore_c_regs_and_iret
+
+.Lnmi_from_kernel:
+	/*
+	 * Here's what our stack frame will look like:
+	 * +---------------------------------------------------------+
+	 * | original SS                                             |
+	 * | original Return RSP                                     |
+	 * | original RFLAGS                                         |
+	 * | original CS                                             |
+	 * | original RIP                                            |
+	 * +---------------------------------------------------------+
+	 * | temp storage for rdx                                    |
+	 * +---------------------------------------------------------+
+	 * | "NMI executing" variable                                |
+	 * +---------------------------------------------------------+
+	 * | iret SS          } Copied from "outermost" frame        |
+	 * | iret Return RSP  } on each loop iteration; overwritten  |
+	 * | iret RFLAGS      } by a nested NMI to force another     |
+	 * | iret CS          } iteration if needed.                 |
+	 * | iret RIP         }                                      |
+	 * +---------------------------------------------------------+
+	 * | outermost SS          } initialized in first_nmi;       |
+	 * | outermost Return RSP  } will not be changed before      |
+	 * | outermost RFLAGS      } NMI processing is done.         |
+	 * | outermost CS          } Copied to "iret" frame on each  |
+	 * | outermost RIP         } iteration.                      |
+	 * +---------------------------------------------------------+
+	 * | pt_regs                                                 |
+	 * +---------------------------------------------------------+
+	 *
+	 * The "original" frame is used by hardware.  Before re-enabling
+	 * NMIs, we need to be done with it, and we need to leave enough
+	 * space for the asm code here.
+	 *
+	 * We return by executing IRET while RSP points to the "iret" frame.
+	 * That will either return for real or it will loop back into NMI
+	 * processing.
+	 *
+	 * The "outermost" frame is copied to the "iret" frame on each
+	 * iteration of the loop, so each iteration starts with the "iret"
+	 * frame pointing to the final return target.
+	 */
+
 	/*
-	 * If %cs was not the kernel segment, then the NMI triggered in user
-	 * space, which means it is definitely not nested.
+	 * Determine whether we're a nested NMI.
+	 *
+	 * If we interrupted kernel code between repeat_nmi and
+	 * end_repeat_nmi, then we are a nested NMI.  We must not
+	 * modify the "iret" frame because it's being written by
+	 * the outer NMI.  That's okay; the outer NMI handler is
+	 * about to call do_nmi anyway, so we can just
+	 * resume the outer NMI.
 	 */
-	cmpl	$__KERNEL_CS, 16(%rsp)
-	jne	first_nmi
+
+	movq	$repeat_nmi, %rdx
+	cmpq	8(%rsp), %rdx
+	ja	1f
+	movq	$end_repeat_nmi, %rdx
+	cmpq	8(%rsp), %rdx
+	ja	nested_nmi_out
+1:
 
 	/*
-	 * Check the special variable on the stack to see if NMIs are
-	 * executing.
+	 * Now check "NMI executing".  If it's set, then we're nested.
+	 * This will not detect if we interrupted an outer NMI just
+	 * before IRET.
 	 */
 	cmpl	$1, -8(%rsp)
 	je	nested_nmi
 
 	/*
-	 * Now test if the previous stack was an NMI stack.
-	 * We need the double check. We check the NMI stack to satisfy the
-	 * race when the first NMI clears the variable before returning.
-	 * We check the variable because the first NMI could be in a
-	 * breakpoint routine using a breakpoint stack.
+	 * Now test if the previous stack was an NMI stack.  This covers
+	 * the case where we interrupt an outer NMI after it clears
+	 * "NMI executing" but before IRET.  We need to be careful, though:
+	 * there is one case in which RSP could point to the NMI stack
+	 * despite there being no NMI active: naughty userspace controls
+	 * RSP at the very beginning of the SYSCALL targets.  We can
+	 * pull a fast one on naughty userspace, though: we program
+	 * SYSCALL to mask DF, so userspace cannot cause DF to be set
+	 * if it controls the kernel's RSP.  We set DF before we clear
+	 * "NMI executing".
 	 */
 	lea	6*8(%rsp), %rdx
 	/* Compare the NMI stack (rdx) with the stack we came from (4*8(%rsp)) */
@@ -1286,25 +1407,20 @@ ENTRY(nmi)
 	cmpq	%rdx, 4*8(%rsp)
 	/* If it is below the NMI stack, it is a normal NMI */
 	jb	first_nmi
-	/* Ah, it is within the NMI stack, treat it as nested */
+
+	/* Ah, it is within the NMI stack. */
+
+	testb	$(X86_EFLAGS_DF >> 8), (3*8 + 1)(%rsp)
+	jz	first_nmi	/* RSP was user controlled. */
+
+	/* This is a nested NMI. */
 
 nested_nmi:
 	/*
-	 * Do nothing if we interrupted the fixup in repeat_nmi.
-	 * It's about to repeat the NMI handler, so we are fine
-	 * with ignoring this one.
+	 * Modify the "iret" frame to point to repeat_nmi, forcing another
+	 * iteration of NMI handling.
 	 */
-	movq	$repeat_nmi, %rdx
-	cmpq	8(%rsp), %rdx
-	ja	1f
-	movq	$end_repeat_nmi, %rdx
-	cmpq	8(%rsp), %rdx
-	ja	nested_nmi_out
-
-1:
-	/* Set up the interrupted NMIs stack to jump to repeat_nmi */
-	leaq	-1*8(%rsp), %rdx
-	movq	%rdx, %rsp
+	subq	$8, %rsp
 	leaq	-10*8(%rsp), %rdx
 	pushq	$__KERNEL_DS
 	pushq	%rdx
@@ -1318,61 +1434,42 @@ ENTRY(nmi)
 nested_nmi_out:
 	popq	%rdx
 
-	/* No need to check faults here */
+	/* We are returning to kernel mode, so this cannot result in a fault. */
 	INTERRUPT_RETURN
 
 first_nmi:
-	/*
-	 * Because nested NMIs will use the pushed location that we
-	 * stored in rdx, we must keep that space available.
-	 * Here's what our stack frame will look like:
-	 * +-------------------------+
-	 * | original SS             |
-	 * | original Return RSP     |
-	 * | original RFLAGS         |
-	 * | original CS             |
-	 * | original RIP            |
-	 * +-------------------------+
-	 * | temp storage for rdx    |
-	 * +-------------------------+
-	 * | NMI executing variable  |
-	 * +-------------------------+
-	 * | copied SS               |
-	 * | copied Return RSP       |
-	 * | copied RFLAGS           |
-	 * | copied CS               |
-	 * | copied RIP              |
-	 * +-------------------------+
-	 * | Saved SS                |
-	 * | Saved Return RSP        |
-	 * | Saved RFLAGS            |
-	 * | Saved CS                |
-	 * | Saved RIP               |
-	 * +-------------------------+
-	 * | pt_regs                 |
-	 * +-------------------------+
-	 *
-	 * The saved stack frame is used to fix up the copied stack frame
-	 * that a nested NMI may change to make the interrupted NMI iret jump
-	 * to the repeat_nmi. The original stack frame and the temp storage
-	 * is also used by nested NMIs and can not be trusted on exit.
-	 */
-	/* Do not pop rdx, nested NMIs will corrupt that part of the stack */
+	/* Restore rdx. */
 	movq	(%rsp), %rdx
 
-	/* Set the NMI executing variable on the stack. */
-	pushq	$1
+	/* Make room for "NMI executing". */
+	pushq	$0
 
-	/* Leave room for the "copied" frame */
+	/* Leave room for the "iret" frame */
 	subq	$(5*8), %rsp
 
-	/* Copy the stack frame to the Saved frame */
+	/* Copy the "original" frame to the "outermost" frame */
 	.rept 5
 	pushq	11*8(%rsp)
 	.endr
 
 	/* Everything up to here is safe from nested NMIs */
 
+#ifdef CONFIG_DEBUG_ENTRY
+	/*
+	 * For ease of testing, unmask NMIs right away.  Disabled by
+	 * default because IRET is very expensive.
+	 */
+	pushq	$0		/* SS */
+	pushq	%rsp		/* RSP (minus 8 because of the previous push) */
+	addq	$8, (%rsp)	/* Fix up RSP */
+	pushfq			/* RFLAGS */
+	pushq	$__KERNEL_CS	/* CS */
+	pushq	$1f		/* RIP */
+	INTERRUPT_RETURN	/* continues at repeat_nmi below */
+1:
+#endif
+
+repeat_nmi:
 	/*
 	 * If there was a nested NMI, the first NMI's iret will return
 	 * here. But NMIs are still enabled and we can take another
@@ -1381,16 +1478,20 @@ ENTRY(nmi)
 	 * it will just return, as we are about to repeat an NMI anyway.
 	 * This makes it safe to copy to the stack frame that a nested
 	 * NMI will update.
+	 *
+	 * RSP is pointing to "outermost RIP".  gsbase is unknown, but, if
+	 * we're repeating an NMI, gsbase has the same value that it had on
+	 * the first iteration.  paranoid_entry will load the kernel
+	 * gsbase if needed before we call do_nmi.  "NMI executing"
+	 * is zero.
 	 */
-repeat_nmi:
+	movq	$1, 10*8(%rsp)		/* Set "NMI executing". */
+
 	/*
-	 * Update the stack variable to say we are still in NMI (the update
-	 * is benign for the non-repeat case, where 1 was pushed just above
-	 * to this very stack slot).
+	 * Copy the "outermost" frame to the "iret" frame.  NMIs that nest
+	 * here must not modify the "iret" frame while we're writing to
+	 * it or it will end up containing garbage.
 	 */
-	movq	$1, 10*8(%rsp)
-
-	/* Make another copy, this one may be modified by nested NMIs */
 	addq	$(10*8), %rsp
 	.rept 5
 	pushq	-6*8(%rsp)
@@ -1399,9 +1500,9 @@ ENTRY(nmi)
 end_repeat_nmi:
 
 	/*
-	 * Everything below this point can be preempted by a nested
-	 * NMI if the first NMI took an exception and reset our iret stack
-	 * so that we repeat another NMI.
+	 * Everything below this point can be preempted by a nested NMI.
+	 * If this happens, then the inner NMI will change the "iret"
+	 * frame to point back to repeat_nmi.
 	 */
 	pushq	$-1				/* ORIG_RAX: no syscall to restart */
 	ALLOC_PT_GPREGS_ON_STACK
@@ -1415,28 +1516,11 @@ ENTRY(nmi)
 	 */
 	call	paranoid_entry
 
-	/*
-	 * Save off the CR2 register. If we take a page fault in the NMI then
-	 * it could corrupt the CR2 value. If the NMI preempts a page fault
-	 * handler before it was able to read the CR2 register, and then the
-	 * NMI itself takes a page fault, the page fault that was preempted
-	 * will read the information from the NMI page fault and not the
-	 * origin fault. Save it off and restore it if it changes.
-	 * Use the r12 callee-saved register.
-	 */
-	movq	%cr2, %r12
-
 	/* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
 	movq	%rsp, %rdi
 	movq	$-1, %rsi
 	call	do_nmi
 
-	/* Did the NMI take a page fault? Restore cr2 if it did */
-	movq	%cr2, %rcx
-	cmpq	%rcx, %r12
-	je	1f
-	movq	%r12, %cr2
-1:
 	testl	%ebx, %ebx			/* swapgs needed? */
 	jnz	nmi_restore
 nmi_swapgs:
@@ -1444,11 +1528,26 @@ ENTRY(nmi)
 nmi_restore:
 	RESTORE_EXTRA_REGS
 	RESTORE_C_REGS
-	/* Pop the extra iret frame at once */
+
+	/* Point RSP at the "iret" frame. */
 	REMOVE_PT_GPREGS_FROM_STACK 6*8
 
-	/* Clear the NMI executing stack variable */
-	movq	$0, 5*8(%rsp)
+	/*
+	 * Clear "NMI executing".  Set DF first so that we can easily
+	 * distinguish the remaining code between here and IRET from
+	 * the SYSCALL entry and exit paths.  On a native kernel, we
+	 * could just inspect RIP, but, on paravirt kernels,
+	 * INTERRUPT_RETURN can translate into a jump into a
+	 * hypercall page.
+	 */
+	std
+	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
+
+	/*
+	 * INTERRUPT_RETURN reads the "iret" frame and exits the NMI
+	 * stack in a single instruction.  We are returning to kernel
+	 * mode, so this cannot result in a fault.
+	 */
 	INTERRUPT_RETURN
 END(nmi)
 
diff --git a/arch/x86/include/asm/fpu/types.h b/arch/x86/include/asm/fpu/types.h
index 0637826292de..c49c5173158e 100644
--- a/arch/x86/include/asm/fpu/types.h
+++ b/arch/x86/include/asm/fpu/types.h
@@ -189,6 +189,7 @@ union fpregs_state {
 	struct fxregs_state		fxsave;
 	struct swregs_state		soft;
 	struct xregs_state		xsave;
+	u8 __padding[PAGE_SIZE];
 };
 
 /*
@@ -198,40 +199,6 @@ union fpregs_state {
  */
 struct fpu {
 	/*
-	 * @state:
-	 *
-	 * In-memory copy of all FPU registers that we save/restore
-	 * over context switches. If the task is using the FPU then
-	 * the registers in the FPU are more recent than this state
-	 * copy. If the task context-switches away then they get
-	 * saved here and represent the FPU state.
-	 *
-	 * After context switches there may be a (short) time period
-	 * during which the in-FPU hardware registers are unchanged
-	 * and still perfectly match this state, if the tasks
-	 * scheduled afterwards are not using the FPU.
-	 *
-	 * This is the 'lazy restore' window of optimization, which
-	 * we track though 'fpu_fpregs_owner_ctx' and 'fpu->last_cpu'.
-	 *
-	 * We detect whether a subsequent task uses the FPU via setting
-	 * CR0::TS to 1, which causes any FPU use to raise a #NM fault.
-	 *
-	 * During this window, if the task gets scheduled again, we
-	 * might be able to skip having to do a restore from this
-	 * memory buffer to the hardware registers - at the cost of
-	 * incurring the overhead of #NM fault traps.
-	 *
-	 * Note that on modern CPUs that support the XSAVEOPT (or other
-	 * optimized XSAVE instructions), we don't use #NM traps anymore,
-	 * as the hardware can track whether FPU registers need saving
-	 * or not. On such CPUs we activate the non-lazy ('eagerfpu')
-	 * logic, which unconditionally saves/restores all FPU state
-	 * across context switches. (if FPU state exists.)
-	 */
-	union fpregs_state		state;
-
-	/*
 	 * @last_cpu:
 	 *
 	 * Records the last CPU on which this context was loaded into
@@ -288,6 +255,43 @@ struct fpu {
 	 * deal with bursty apps that only use the FPU for a short time:
 	 */
 	unsigned char			counter;
+	/*
+	 * @state:
+	 *
+	 * In-memory copy of all FPU registers that we save/restore
+	 * over context switches. If the task is using the FPU then
+	 * the registers in the FPU are more recent than this state
+	 * copy. If the task context-switches away then they get
+	 * saved here and represent the FPU state.
+	 *
+	 * After context switches there may be a (short) time period
+	 * during which the in-FPU hardware registers are unchanged
+	 * and still perfectly match this state, if the tasks
+	 * scheduled afterwards are not using the FPU.
+	 *
+	 * This is the 'lazy restore' window of optimization, which
+	 * we track through 'fpu_fpregs_owner_ctx' and 'fpu->last_cpu'.
+	 *
+	 * We detect whether a subsequent task uses the FPU via setting
+	 * CR0::TS to 1, which causes any FPU use to raise a #NM fault.
+	 *
+	 * During this window, if the task gets scheduled again, we
+	 * might be able to skip having to do a restore from this
+	 * memory buffer to the hardware registers - at the cost of
+	 * incurring the overhead of #NM fault traps.
+	 *
+	 * Note that on modern CPUs that support the XSAVEOPT (or other
+	 * optimized XSAVE instructions), we don't use #NM traps anymore,
+	 * as the hardware can track whether FPU registers need saving
+	 * or not. On such CPUs we activate the non-lazy ('eagerfpu')
+	 * logic, which unconditionally saves/restores all FPU state
+	 * across context switches. (if FPU state exists.)
+	 */
+	union fpregs_state		state;
+	/*
+	 * WARNING: 'state' is dynamically-sized.  Do not put
+	 * anything after it here.
+	 */
 };
 
 #endif /* _ASM_X86_FPU_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 43e6519df0d5..944f1785ed0d 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -390,9 +390,6 @@ struct thread_struct {
 #endif
 	unsigned long		gs;
 
-	/* Floating point and extended processor state */
-	struct fpu		fpu;
-
 	/* Save middle states of ptrace breakpoints */
 	struct perf_event	*ptrace_bps[HBP_NUM];
 	/* Debug status used for traps, single steps, etc... */
@@ -418,6 +415,13 @@ struct thread_struct {
 	unsigned long		iopl;
 	/* Max allowed port in the bitmap, in bytes: */
 	unsigned		io_bitmap_max;
+
+	/* Floating point and extended processor state */
+	struct fpu		fpu;
+	/*
+	 * WARNING: 'fpu' is dynamically-sized.  It *MUST* be at
+	 * the end.
+	 */
 };
 
 /*
diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
index 32826791e675..0b39173dd971 100644
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -4,6 +4,8 @@
 #include <asm/fpu/internal.h>
 #include <asm/tlbflush.h>
 
+#include <linux/sched.h>
+
 /*
  * Initialize the TS bit in CR0 according to the style of context-switches
  * we are using:
@@ -136,6 +138,43 @@ static void __init fpu__init_system_generic(void)
 unsigned int xstate_size;
 EXPORT_SYMBOL_GPL(xstate_size);
 
+/* Enforce that 'MEMBER' is the last field of 'TYPE': */
+#define CHECK_MEMBER_AT_END_OF(TYPE, MEMBER) \
+	BUILD_BUG_ON(sizeof(TYPE) != offsetofend(TYPE, MEMBER))
+
+/*
+ * We append the 'struct fpu' to the task_struct:
+ */
+static void __init fpu__init_task_struct_size(void)
+{
+	int task_size = sizeof(struct task_struct);
+
+	/*
+	 * Subtract off the static size of the register state.
+	 * It potentially has a bunch of padding.
+	 */
+	task_size -= sizeof(((struct task_struct *)0)->thread.fpu.state);
+
+	/*
+	 * Add back the dynamically-calculated register state
+	 * size.
+	 */
+	task_size += xstate_size;
+
+	/*
+	 * We dynamically size 'struct fpu', so we require that
+	 * it be at the end of 'thread_struct' and that
+	 * 'thread_struct' be at the end of 'task_struct'.  If
+	 * you hit a compile error here, check the structure to
+	 * see if something got added to the end.
+	 */
+	CHECK_MEMBER_AT_END_OF(struct fpu, state);
+	CHECK_MEMBER_AT_END_OF(struct thread_struct, fpu);
+	CHECK_MEMBER_AT_END_OF(struct task_struct, thread);
+
+	arch_task_struct_size = task_size;
+}
+
 /*
  * Set up the xstate_size based on the legacy FPU context size.
  *
@@ -287,6 +326,7 @@ void __init fpu__init_system(struct cpuinfo_x86 *c)
 	fpu__init_system_generic();
 	fpu__init_system_xstate_size_legacy();
 	fpu__init_system_xstate();
+	fpu__init_task_struct_size();
 
 	fpu__init_system_ctx_switch();
 }
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index c3e985d1751c..d05bd2e2ee91 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -408,15 +408,15 @@ static void default_do_nmi(struct pt_regs *regs)
 NOKPROBE_SYMBOL(default_do_nmi);
 
 /*
- * NMIs can hit breakpoints which will cause it to lose its
- * NMI context with the CPU when the breakpoint does an iret.
- */
-#ifdef CONFIG_X86_32
-/*
- * For i386, NMIs use the same stack as the kernel, and we can
- * add a workaround to the iret problem in C (preventing nested
- * NMIs if an NMI takes a trap). Simply have 3 states the NMI
- * can be in:
+ * NMIs can page fault or hit breakpoints, which will cause the CPU to
+ * lose its NMI context when the breakpoint or page fault does an IRET.
+ *
+ * As a result, NMIs can nest if NMIs get unmasked due to an IRET during
+ * NMI processing.  On x86_64, the asm glue protects us from nested NMIs
+ * if the outer NMI came from kernel mode, but we can still nest if the
+ * outer NMI came from user mode.
+ *
+ * To handle these nested NMIs, we have three states:
  *
  *  1) not running
  *  2) executing
@@ -430,15 +430,14 @@ NOKPROBE_SYMBOL(default_do_nmi);
  * (Note, the latch is binary, thus multiple NMIs triggering,
  *  when one is running, are ignored. Only one NMI is restarted.)
  *
- * If an NMI hits a breakpoint that executes an iret, another
- * NMI can preempt it. We do not want to allow this new NMI
- * to run, but we want to execute it when the first one finishes.
- * We set the state to "latched", and the exit of the first NMI will
- * perform a dec_return, if the result is zero (NOT_RUNNING), then
- * it will simply exit the NMI handler. If not, the dec_return
- * would have set the state to NMI_EXECUTING (what we want it to
- * be when we are running). In this case, we simply jump back
- * to rerun the NMI handler again, and restart the 'latched' NMI.
+ * If an NMI executes an iret, another NMI can preempt it. We do not
+ * want to allow this new NMI to run, but we want to execute it when the
+ * first one finishes.  We set the state to "latched", and the exit of
+ * the first NMI will perform a dec_return, if the result is zero
+ * (NOT_RUNNING), then it will simply exit the NMI handler. If not, the
+ * dec_return would have set the state to NMI_EXECUTING (what we want it
+ * to be when we are running). In this case, we simply jump back to
+ * rerun the NMI handler again, and restart the 'latched' NMI.
  *
  * No trap (breakpoint or page fault) should be hit before nmi_restart,
  * thus there is no race between the first check of state for NOT_RUNNING
@@ -461,49 +460,36 @@ enum nmi_states {
 static DEFINE_PER_CPU(enum nmi_states, nmi_state);
 static DEFINE_PER_CPU(unsigned long, nmi_cr2);
 
-#define nmi_nesting_preprocess(regs)					\
-	do {								\
-		if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {	\
-			this_cpu_write(nmi_state, NMI_LATCHED);		\
-			return;						\
-		}							\
-		this_cpu_write(nmi_state, NMI_EXECUTING);		\
-		this_cpu_write(nmi_cr2, read_cr2());			\
-	} while (0);							\
-	nmi_restart:
-
-#define nmi_nesting_postprocess()					\
-	do {								\
-		if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))	\
-			write_cr2(this_cpu_read(nmi_cr2));		\
-		if (this_cpu_dec_return(nmi_state))			\
-			goto nmi_restart;				\
-	} while (0)
-#else /* x86_64 */
+#ifdef CONFIG_X86_64
 /*
- * In x86_64 things are a bit more difficult. This has the same problem
- * where an NMI hitting a breakpoint that calls iret will remove the
- * NMI context, allowing a nested NMI to enter. What makes this more
- * difficult is that both NMIs and breakpoints have their own stack.
- * When a new NMI or breakpoint is executed, the stack is set to a fixed
- * point. If an NMI is nested, it will have its stack set at that same
- * fixed address that the first NMI had, and will start corrupting the
- * stack. This is handled in entry_64.S, but the same problem exists with
- * the breakpoint stack.
+ * In x86_64, we need to handle breakpoint -> NMI -> breakpoint.  Without
+ * some care, the inner breakpoint will clobber the outer breakpoint's
+ * stack.
  *
- * If a breakpoint is being processed, and the debug stack is being used,
- * if an NMI comes in and also hits a breakpoint, the stack pointer
- * will be set to the same fixed address as the breakpoint that was
- * interrupted, causing that stack to be corrupted. To handle this case,
- * check if the stack that was interrupted is the debug stack, and if
- * so, change the IDT so that new breakpoints will use the current stack
- * and not switch to the fixed address. On return of the NMI, switch back
- * to the original IDT.
+ * If a breakpoint is being processed, and the debug stack is being
+ * used, if an NMI comes in and also hits a breakpoint, the stack
+ * pointer will be set to the same fixed address as the breakpoint that
+ * was interrupted, causing that stack to be corrupted. To handle this
+ * case, check if the stack that was interrupted is the debug stack, and
+ * if so, change the IDT so that new breakpoints will use the current
+ * stack and not switch to the fixed address. On return of the NMI,
+ * switch back to the original IDT.
  */
 static DEFINE_PER_CPU(int, update_debug_stack);
+#endif
 
-static inline void nmi_nesting_preprocess(struct pt_regs *regs)
+dotraplinkage notrace void
+do_nmi(struct pt_regs *regs, long error_code)
 {
+	if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
+		this_cpu_write(nmi_state, NMI_LATCHED);
+		return;
+	}
+	this_cpu_write(nmi_state, NMI_EXECUTING);
+	this_cpu_write(nmi_cr2, read_cr2());
+nmi_restart:
+
+#ifdef CONFIG_X86_64
 	/*
 	 * If we interrupted a breakpoint, it is possible that
 	 * the nmi handler will have breakpoints too. We need to
@@ -514,22 +500,8 @@ static inline void nmi_nesting_preprocess(struct pt_regs *regs)
 		debug_stack_set_zero();
 		this_cpu_write(update_debug_stack, 1);
 	}
-}
-
-static inline void nmi_nesting_postprocess(void)
-{
-	if (unlikely(this_cpu_read(update_debug_stack))) {
-		debug_stack_reset();
-		this_cpu_write(update_debug_stack, 0);
-	}
-}
 #endif
 
-dotraplinkage notrace void
-do_nmi(struct pt_regs *regs, long error_code)
-{
-	nmi_nesting_preprocess(regs);
-
 	nmi_enter();
 
 	inc_irq_stat(__nmi_count);
@@ -539,8 +511,17 @@ do_nmi(struct pt_regs *regs, long error_code)
 
 	nmi_exit();
 
-	/* On i386, may loop back to preprocess */
-	nmi_nesting_postprocess();
+#ifdef CONFIG_X86_64
+	if (unlikely(this_cpu_read(update_debug_stack))) {
+		debug_stack_reset();
+		this_cpu_write(update_debug_stack, 0);
+	}
+#endif
+
+	if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))
+		write_cr2(this_cpu_read(nmi_cr2));
+	if (this_cpu_dec_return(nmi_state))
+		goto nmi_restart;
 }
 NOKPROBE_SYMBOL(do_nmi);
 
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 9cad694ed7c4..397688beed4b 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -81,7 +81,7 @@ EXPORT_SYMBOL_GPL(idle_notifier_unregister);
  */
 int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 {
-	*dst = *src;
+	memcpy(dst, src, arch_task_struct_size);
 
 	return fpu__copy(&dst->thread.fpu, &src->thread.fpu);
 }
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 91a4e6426321..92e6726f6e37 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -92,7 +92,7 @@ static size_t get_kcore_size(int *nphdr, size_t *elf_buflen)
 			     roundup(sizeof(CORE_STR), 4)) +
 			roundup(sizeof(struct elf_prstatus), 4) +
 			roundup(sizeof(struct elf_prpsinfo), 4) +
-			roundup(sizeof(struct task_struct), 4);
+			roundup(arch_task_struct_size, 4);
 	*elf_buflen = PAGE_ALIGN(*elf_buflen);
 	return size + *elf_buflen;
 }
@@ -415,7 +415,7 @@ static void elf_kcore_store_hdr(char *bufp, int nphdr, int dataoff)
 	/* set up the task structure */
 	notes[2].name	= CORE_STR;
 	notes[2].type	= NT_TASKSTRUCT;
-	notes[2].datasz	= sizeof(struct task_struct);
+	notes[2].datasz	= arch_task_struct_size;
 	notes[2].data	= current;
 
 	nhdr->p_filesz	+= notesize(&notes[2]);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ae21f1591615..04b5ada460b4 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1522,8 +1522,6 @@ struct task_struct {
 /* hung task detection */
 	unsigned long last_switch_count;
 #endif
-/* CPU-specific state of this task */
-	struct thread_struct thread;
 /* filesystem information */
 	struct fs_struct *fs;
 /* open file information */
@@ -1778,8 +1776,22 @@ struct task_struct {
 	unsigned long	task_state_change;
 #endif
 	int pagefault_disabled;
+/* CPU-specific state of this task */
+	struct thread_struct thread;
+/*
+ * WARNING: on x86, 'thread_struct' contains a variable-sized
+ * structure.  It *MUST* be at the end of 'task_struct'.
+ *
+ * Do not put anything below here!
+ */
 };
 
+#ifdef CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT
+extern int arch_task_struct_size __read_mostly;
+#else
+# define arch_task_struct_size (sizeof(struct task_struct))
+#endif
+
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
 #define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed)
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 1bfefc6f96a4..dbd9b8d7b7cc 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -287,6 +287,11 @@ static void set_max_threads(unsigned int max_threads_suggested)
 	max_threads = clamp_t(u64, threads, MIN_THREADS, MAX_THREADS);
 }
 
+#ifdef CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT
+/* Initialized by the architecture: */
+int arch_task_struct_size __read_mostly;
+#endif
+
 void __init fork_init(void)
 {
 #ifndef CONFIG_ARCH_TASK_STRUCT_ALLOCATOR
@@ -295,7 +300,7 @@ void __init fork_init(void)
 #endif
 	/* create a slab on which task_structs can be allocated */
 	task_struct_cachep =
-		kmem_cache_create("task_struct", sizeof(struct task_struct),
+		kmem_cache_create("task_struct", arch_task_struct_size,
 			ARCH_MIN_TASKALIGN, SLAB_PANIC | SLAB_NOTRACK, NULL);
 #endif
 
