From: "Mika Penttilä" <mika.penttila@nextfour.com>
To: Andy Lutomirski <luto@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	<x86@kernel.org>, Borislav Petkov <bp@alien8.de>
Cc: Nadav Amit <nadav.amit@gmail.com>,
	Kees Cook <keescook@chromium.org>,
	Brian Gerst <brgerst@gmail.com>,
	"kernel-hardening@lists.openwall.com" 
	<kernel-hardening@lists.openwall.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Josh Poimboeuf <jpoimboe@redhat.com>
Subject: Re: [PATCH 12/13] x86/mm/64: Enable vmapped stacks
Date: Thu, 16 Jun 2016 07:17:41 +0300
Message-ID: <57622865.2070701@nextfour.com>
In-Reply-To: <3f0299bde58d0161c1dad75e0b7f93f074a6cd12.1466036668.git.luto@kernel.org>

Hi,

On 06/16/2016 03:28 AM, Andy Lutomirski wrote:
> This allows x86_64 kernels to enable vmapped stacks.  There are a
> couple of interesting bits.
> 
> First, x86 lazily faults in top-level paging entries for the vmalloc
> area.  This won't work if we get a page fault while trying to access
> the stack: the CPU will promote it to a double-fault and we'll die.
> To avoid this problem, probe the new stack when switching stacks and
> forcibly populate the pgd entry for the stack when switching mms.
> 
> Second, once we have guard pages around the stack, we'll want to
> detect and handle stack overflow.
> 
> I didn't enable it on x86_32.  We'd need to rework the double-fault
> code a bit and I'm concerned about running out of vmalloc virtual
> addresses under some workloads.
> 
> This patch, by itself, will behave somewhat erratically when the
> stack overflows while RSP is still more than a few tens of bytes
> above the bottom of the stack.  Specifically, we'll get #PF and make
> it to no_context and an oops without triggering a double-fault, and
> no_context doesn't know about stack overflows.  The next patch will
> improve that case.
> 
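(For readers following along: the stacks themselves come from the earlier
"fork: Add generic vmalloced stack support" patch in this series; if I read
that patch right, the allocation is roughly

	stack = __vmalloc_node_range(THREAD_SIZE, THREAD_SIZE,
				     VMALLOC_START, VMALLOC_END,
				     THREADINFO_GFP | __GFP_HIGHMEM,
				     PAGE_KERNEL, 0, node,
				     __builtin_return_address(0));

i.e. THREAD_SIZE bytes at THREAD_SIZE alignment, so a stack never straddles
a top-level paging boundary, and the unmapped guard holes that vmalloc
leaves between areas provide the guard pages.)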
> Signed-off-by: Andy Lutomirski <luto@kernel.org>
> ---
>  arch/x86/Kconfig                 |  1 +
>  arch/x86/include/asm/switch_to.h | 28 +++++++++++++++++++++++++++-
>  arch/x86/kernel/traps.c          | 32 ++++++++++++++++++++++++++++++++
>  arch/x86/mm/tlb.c                | 15 +++++++++++++++
>  4 files changed, 75 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 0a7b885964ba..b624b24d1dc1 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -92,6 +92,7 @@ config X86
>  	select HAVE_ARCH_TRACEHOOK
>  	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
>  	select HAVE_EBPF_JIT			if X86_64
> +	select HAVE_ARCH_VMAP_STACK		if X86_64
>  	select HAVE_CC_STACKPROTECTOR
>  	select HAVE_CMPXCHG_DOUBLE
>  	select HAVE_CMPXCHG_LOCAL
> diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
> index 8f321a1b03a1..14e4b20f0aaf 100644
> --- a/arch/x86/include/asm/switch_to.h
> +++ b/arch/x86/include/asm/switch_to.h
> @@ -8,6 +8,28 @@ struct tss_struct;
>  void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
>  		      struct tss_struct *tss);
>  
> +/* This runs on the previous thread's stack. */
> +static inline void prepare_switch_to(struct task_struct *prev,
> +				     struct task_struct *next)
> +{
> +#ifdef CONFIG_VMAP_STACK
> +	/*
> +	 * If we switch to a stack that has a top-level paging entry
> +	 * that is not present in the current mm, the resulting #PF
> +	 * will be promoted to a double-fault and we'll panic.  Probe
> +	 * the new stack now so that vmalloc_fault can fix up the page
> +	 * tables if needed.  This can only happen if we use a stack
> +	 * in vmap space.
> +	 *
> +	 * We assume that the stack is aligned so that it never spans
> +	 * more than one top-level paging entry.
> +	 *
> +	 * To minimize cache pollution, just follow the stack pointer.
> +	 */
> +	READ_ONCE(*(unsigned char *)next->thread.sp);
> +#endif
> +}
> +
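(The fixup this probe relies on is the pgd-level part of vmalloc_fault() in
arch/x86/mm/fault.c, which on 64-bit is, from memory, roughly

	pgd = pgd_offset(current->active_mm, address);
	pgd_ref = pgd_offset_k(address);
	if (pgd_none(*pgd_ref))
		return -1;
	if (pgd_none(*pgd))
		set_pgd(pgd, *pgd_ref);

i.e. a top-level entry missing from the current mm is copied over from
init_mm, which owns the page tables for the whole vmalloc area.)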
>  #ifdef CONFIG_X86_32
>  
>  #ifdef CONFIG_CC_STACKPROTECTOR
> @@ -39,6 +61,8 @@ do {									\
>  	 */								\
>  	unsigned long ebx, ecx, edx, esi, edi;				\
>  									\
> +	prepare_switch_to(prev, next);					\
> +									\
>  	asm volatile("pushl %%ebp\n\t"		/* save    EBP   */	\
>  		     "movl %%esp,%[prev_sp]\n\t"	/* save    ESP   */ \
>  		     "movl %[next_sp],%%esp\n\t"	/* restore ESP   */ \
> @@ -103,7 +127,9 @@ do {									\
>   * clean in kernel mode, with the possible exception of IOPL.  Kernel IOPL
>   * has no effect.
>   */
> -#define switch_to(prev, next, last) \
> +#define switch_to(prev, next, last)					  \
> +	prepare_switch_to(prev, next);					  \
> +									  \
>  	asm volatile(SAVE_CONTEXT					  \
>  	     "movq %%rsp,%P[threadrsp](%[prev])\n\t" /* save RSP */	  \
>  	     "movq %P[threadrsp](%[next]),%%rsp\n\t" /* restore RSP */	  \
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index 00f03d82e69a..9cb7ea781176 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -292,12 +292,30 @@ DO_ERROR(X86_TRAP_NP,     SIGBUS,  "segment not present",	segment_not_present)
>  DO_ERROR(X86_TRAP_SS,     SIGBUS,  "stack segment",		stack_segment)
>  DO_ERROR(X86_TRAP_AC,     SIGBUS,  "alignment check",		alignment_check)
>  
> +#ifdef CONFIG_VMAP_STACK
> +static void __noreturn handle_stack_overflow(const char *message,
> +					     struct pt_regs *regs,
> +					     unsigned long fault_address)
> +{
> +	printk(KERN_EMERG "BUG: stack guard page was hit at %p (stack is %p..%p)\n",
> +		 (void *)fault_address, current->stack,
> +		 (char *)current->stack + THREAD_SIZE - 1);
> +	die(message, regs, 0);
> +
> +	/* Be absolutely certain we don't return. */
> +	panic(message);
> +}
> +#endif
> +
>  #ifdef CONFIG_X86_64
>  /* Runs on IST stack */
>  dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
>  {
>  	static const char str[] = "double fault";
>  	struct task_struct *tsk = current;
> +#ifdef CONFIG_VMAP_STACK
> +	unsigned long cr2;
> +#endif
>  
>  #ifdef CONFIG_X86_ESPFIX64
>  	extern unsigned char native_irq_return_iret[];
> @@ -332,6 +350,20 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
>  	tsk->thread.error_code = error_code;
>  	tsk->thread.trap_nr = X86_TRAP_DF;
>  
> +#ifdef CONFIG_VMAP_STACK
> +	/*
> +	 * If we overflow the stack into a guard page, the CPU will fail
> +	 * to deliver #PF and will send #DF instead.  CR2 will contain
> +	 * the linear address of the second fault, which will be in the
> +	 * guard page below the bottom of the stack.
> +	 */
> +	cr2 = read_cr2();
> +	if ((unsigned long)tsk->stack - 1 - cr2 < PAGE_SIZE)
> +		handle_stack_overflow(
> +			"kernel stack overflow (double-fault)",
> +			regs, cr2);
> +#endif
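(The range check is the usual unsigned-underflow trick: with
s = (unsigned long)tsk->stack, "s - 1 - cr2 < PAGE_SIZE" holds exactly when
cr2 is in [s - PAGE_SIZE, s - 1], i.e. inside the guard page immediately
below the stack; any cr2 >= s underflows to a huge value and fails the
comparison.)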
> +
>  #ifdef CONFIG_DOUBLEFAULT
>  	df_debug(regs, error_code);
>  #endif
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 5643fd0b1a7d..fbf036ae72ac 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -77,10 +77,25 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  	unsigned cpu = smp_processor_id();
>  
>  	if (likely(prev != next)) {
> +		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
> +			/*
> +			 * If our current stack is in vmalloc space and isn't
> +			 * mapped in the new pgd, we'll double-fault.  Forcibly
> +			 * map it.
> +			 */
> +			unsigned int stack_pgd_index =
> +				pgd_index(current_stack_pointer());


The stack pointer here is still the previous task's; current_stack_pointer()
returns that, not the next task's, which I guess was the intention. Things
may happen to work when both stacks fall under the same pgd entry, but at
least the boot CPU's init_task stack is special.
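Something like this is what I would have expected (untested sketch; it
assumes the tsk argument of switch_mm_irqs_off() is the incoming task and
that tsk->thread.sp already points into the new stack at this point):

	/* Probe the pgd slot of the *next* task's stack, not our own. */
	unsigned int stack_pgd_index =
		pgd_index((unsigned long)tsk->thread.sp);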


> +			pgd_t *pgd = next->pgd + stack_pgd_index;
> +
> +			if (unlikely(pgd_none(*pgd)))
> +				set_pgd(pgd, init_mm.pgd[stack_pgd_index]);
> +		}
> +
>  #ifdef CONFIG_SMP
>  		this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
>  		this_cpu_write(cpu_tlbstate.active_mm, next);
>  #endif
> +
>  		cpumask_set_cpu(cpu, mm_cpumask(next));
>  
>  		/*
> 

--Mika
