* [PATCH] x86/mm: disable preemption during CR3 read+write
@ 2016-08-05 13:37 ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-08-05 13:37 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: x86, Borislav Petkov, Andy Lutomirski, Rik van Riel, Mel Gorman,
	Peter Zijlstra

Usually current->mm (and therefore mm->pgd) stays the same during the
lifetime of a task so it does not matter if a task gets preempted during
the read and write of the CR3.

But then, there is this scenario on x86-UP:
TaskA is in do_exit() and exit_mm() sets current->mm = NULL followed by
mmput() -> exit_mmap() -> tlb_finish_mmu() -> tlb_flush_mmu() ->
tlb_flush_mmu_tlbonly() -> tlb_flush() -> flush_tlb_mm_range() ->
__flush_tlb_up() -> __flush_tlb() ->  __native_flush_tlb().

At this point current->mm is NULL but current->active_mm still points to
the "old" mm.
Let's preempt taskA _after_ native_read_cr3() by taskB. TaskB has its
own mm so CR3 has changed.
Now preempt back to taskA. TaskA has no ->mm set so it borrows taskB's
mm and so CR3 remains unchanged. Once taskA gets active it continues
where it was interrupted and that means it writes its old CR3 value
back. Everything is fine because userland won't need its memory
anymore.

Now the fun part. Let's preempt taskA one more time and get back to
taskB. This time switch_mm() won't do a thing because oldmm
(->active_mm) is the same as mm (as per context_switch()). So we remain
with a bad CR3 / pgd and return to userland.
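
Roughly, the scheduler logic being referenced looks like this (a simplified
sketch, paraphrased from memory of kernel/sched/core.c and the x86
switch_mm() of that era, not verbatim kernel code):

	/* context_switch(): mm handling only */
	mm = next->mm;
	oldmm = prev->active_mm;
	if (!mm) {
		/* kernel thread or exiting task: borrow the previous mm */
		next->active_mm = oldmm;
		atomic_inc(&oldmm->mm_count);
		enter_lazy_tlb(oldmm, next);
	} else
		switch_mm(oldmm, mm, next);

	/* switch_mm(): CR3 is reloaded only if the mm actually changes */
	if (likely(prev != next))
		load_cr3(next->pgd);	/* skipped when oldmm == mm */

Once taskA's ->active_mm is already taskB's mm, the prev != next check
fails, CR3 is not reloaded, and the stale value written back in
__native_flush_tlb() survives the switch.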
The next thing that happens is handle_mm_fault() with an address for the
execution of its code in userland. handle_mm_fault() realizes that it
has a PTE with proper rights so it returns doing nothing. But the CPU
looks at the wrong pgd and insists that something is wrong and faults
again. And again. And one more time…

This page-fault cycle continues until the scheduler gets tired of it and
puts another task on the CPU. It gets a little more difficult if the task
is an RT task with a high priority. The system will either freeze or it
gets fixed by the software watchdog thread, which usually runs at RT-max
prio. But waiting for the watchdog will increase the latency of the RT
task, which is no good.
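
For completeness: the race is specific to UP builds because the SMP flush
path already runs with preemption disabled around the CR3 rewrite. Roughly
(a simplified sketch from memory of that era's arch/x86/mm/tlb.c and the
!CONFIG_SMP side of tlbflush.h, not verbatim):

	/* SMP */
	void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
				unsigned long end, unsigned long vmflag)
	{
		preempt_disable();
		/* ... local flush plus IPIs to the other CPUs ... */
		preempt_enable();
	}

	/* UP */
	static inline void flush_tlb_mm_range(struct mm_struct *mm,
			unsigned long start, unsigned long end,
			unsigned long vmflag)
	{
		if (mm == current->active_mm)
			__flush_tlb_up();	/* -> __flush_tlb(), preemptible */
	}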

Cc: stable@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/x86/include/asm/tlbflush.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 4e5be94e079a..1ee065954e24 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -135,7 +135,14 @@ static inline void cr4_set_bits_and_update_boot(unsigned long mask)
 
 static inline void __native_flush_tlb(void)
 {
+	/*
+	 * if current->mm == NULL then we borrow a mm which may change during a
+	 * task switch and therefore we must not be preempted while we write CR3
+	 * back.
+	 */
+	preempt_disable();
 	native_write_cr3(native_read_cr3());
+	preempt_enable();
 }
 
 static inline void __native_flush_tlb_global_irq_disabled(void)
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86/mm: disable preemption during CR3 read+write
  2016-08-05 13:37 ` Sebastian Andrzej Siewior
@ 2016-08-05 13:53   ` Peter Zijlstra
  -1 siblings, 0 replies; 13+ messages in thread
From: Peter Zijlstra @ 2016-08-05 13:53 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-mm, linux-kernel, x86, Borislav Petkov, Andy Lutomirski,
	Rik van Riel, Mel Gorman

On Fri, Aug 05, 2016 at 03:37:39PM +0200, Sebastian Andrzej Siewior wrote:
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 4e5be94e079a..1ee065954e24 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -135,7 +135,14 @@ static inline void cr4_set_bits_and_update_boot(unsigned long mask)
>  
>  static inline void __native_flush_tlb(void)
>  {
> +	/*
> +	 * if current->mm == NULL then we borrow a mm which may change during a
> +	 * task switch and therefore we must not be preempted while we write CR3
> +	 * back.
> +	 */
> +	preempt_disable();
>  	native_write_cr3(native_read_cr3());
> +	preempt_enable();
>  }

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86/mm: disable preemption during CR3 read+write
  2016-08-05 13:37 ` Sebastian Andrzej Siewior
  (?)
  (?)
@ 2016-08-05 14:38 ` Rik van Riel
  -1 siblings, 0 replies; 13+ messages in thread
From: Rik van Riel @ 2016-08-05 14:38 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, linux-mm, linux-kernel
  Cc: x86, Borislav Petkov, Andy Lutomirski, Mel Gorman, Peter Zijlstra

On Fri, 2016-08-05 at 15:37 +0200, Sebastian Andrzej Siewior wrote:
> 
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -135,7 +135,14 @@ static inline void
> cr4_set_bits_and_update_boot(unsigned long mask)
>  
>  static inline void __native_flush_tlb(void)
>  {
> +	/*
> +	 * if current->mm == NULL then we borrow a mm which may
> change during a
> +	 * task switch and therefore we must not be preempted while
> we write CR3
> +	 * back.
> +	 */
> +	preempt_disable();
>  	native_write_cr3(native_read_cr3());
> +	preempt_enable();
>  }

That is one subtle race!

Acked-by: Rik van Riel <riel@redhat.com>

-- 

All Rights Reversed.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86/mm: disable preemption during CR3 read+write
  2016-08-05 13:37 ` Sebastian Andrzej Siewior
@ 2016-08-05 15:42   ` Andy Lutomirski
  -1 siblings, 0 replies; 13+ messages in thread
From: Andy Lutomirski @ 2016-08-05 15:42 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-mm, linux-kernel, X86 ML, Borislav Petkov, Andy Lutomirski,
	Rik van Riel, Mel Gorman, Peter Zijlstra

On Fri, Aug 5, 2016 at 6:37 AM, Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
> Usually current->mm (and therefore mm->pgd) stays the same during the
> lifetime of a task so it does not matter if a task gets preempted during
> the read and write of the CR3.
>
> But then, there is this scenario on x86-UP:
> TaskA is in do_exit() and exit_mm() sets current->mm = NULL followed by
> mmput() -> exit_mmap() -> tlb_finish_mmu() -> tlb_flush_mmu() ->
> tlb_flush_mmu_tlbonly() -> tlb_flush() -> flush_tlb_mm_range() ->
> __flush_tlb_up() -> __flush_tlb() ->  __native_flush_tlb().
>
> At this point current->mm is NULL but current->active_mm still points to
> the "old" mm.
> Let's preempt taskA _after_ native_read_cr3() by taskB. TaskB has its
> own mm so CR3 has changed.
> Now preempt back to taskA. TaskA has no ->mm set so it borrows taskB's
> mm and so CR3 remains unchanged. Once taskA gets active it continues
> where it was interrupted and that means it writes its old CR3 value
> back. Everything is fine because userland won't need its memory
> anymore.

This should affect kernel threads too, right?

Acked-by: Andy Lutomirski <luto@kernel.org>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86/mm: disable preemption during CR3 read+write
  2016-08-05 15:42   ` Andy Lutomirski
@ 2016-08-05 15:52     ` Sebastian Andrzej Siewior
  -1 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-08-05 15:52 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: linux-mm, linux-kernel, X86 ML, Borislav Petkov, Andy Lutomirski,
	Rik van Riel, Mel Gorman, Peter Zijlstra

On 08/05/2016 05:42 PM, Andy Lutomirski wrote:
> 
> This should affect kernel threads too, right?

I don't think so because they don't have an mm in the first place, so
they shouldn't need to flush the TLB. But then there is iounmap()
and vfree(), for instance, which do

vmap_debug_free_range(unsigned long start, unsigned long end)
{
   if (debug_pagealloc_enabled()) {
         vunmap_page_range(start, end);
         flush_tlb_kernel_range(start, end);
   }
}

so it looks like a candidate.
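
FWIW, on a !SMP build that kernel-range flush collapses to the same local
flush. Approximately (from memory of that era's tlbflush.h, simplified, and
it only reaches the CR3 read+write when the CPU has no PGE so that
__flush_tlb_all() falls back to __flush_tlb()):

	static inline void flush_tlb_kernel_range(unsigned long start,
						  unsigned long end)
	{
		flush_tlb_all();	/* -> __flush_tlb_all() -> __flush_tlb() */
	}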

> Acked-by: Andy Lutomirski <luto@kernel.org>

Sebastian

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [tip:x86/urgent] x86/mm: Disable preemption during CR3 read+write
  2016-08-05 13:37 ` Sebastian Andrzej Siewior
                   ` (3 preceding siblings ...)
  (?)
@ 2016-08-10 18:10 ` tip-bot for Sebastian Andrzej Siewior
  -1 siblings, 0 replies; 13+ messages in thread
From: tip-bot for Sebastian Andrzej Siewior @ 2016-08-10 18:10 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: bp, torvalds, luto, mgorman, bp, linux-kernel, dvlasenk,
	a.p.zijlstra, riel, hpa, tglx, brgerst, mingo, jpoimboe, bigeasy,
	peterz

Commit-ID:  5cf0791da5c162ebc14b01eb01631cfa7ed4fa6e
Gitweb:     http://git.kernel.org/tip/5cf0791da5c162ebc14b01eb01631cfa7ed4fa6e
Author:     Sebastian Andrzej Siewior <bigeasy@linutronix.de>
AuthorDate: Fri, 5 Aug 2016 15:37:39 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 10 Aug 2016 15:37:16 +0200

x86/mm: Disable preemption during CR3 read+write

There's a subtle preemption race on UP kernels:

Usually current->mm (and therefore mm->pgd) stays the same during the
lifetime of a task so it does not matter if a task gets preempted during
the read and write of the CR3.

But then, there is this scenario on x86-UP:

TaskA is in do_exit() and exit_mm() sets current->mm = NULL followed by:

 -> mmput()
 -> exit_mmap()
 -> tlb_finish_mmu()
 -> tlb_flush_mmu()
 -> tlb_flush_mmu_tlbonly()
 -> tlb_flush()
 -> flush_tlb_mm_range()
 -> __flush_tlb_up()
 -> __flush_tlb()
 ->  __native_flush_tlb()

At this point current->mm is NULL but current->active_mm still points to
the "old" mm.

Let's preempt taskA _after_ native_read_cr3() by taskB. TaskB has its
own mm so CR3 has changed.

Now preempt back to taskA. TaskA has no ->mm set so it borrows taskB's
mm and so CR3 remains unchanged. Once taskA gets active it continues
where it was interrupted and that means it writes its old CR3 value
back. Everything is fine because userland won't need its memory
anymore.

Now the fun part:

Let's preempt taskA one more time and get back to taskB. This
time switch_mm() won't do a thing because oldmm (->active_mm)
is the same as mm (as per context_switch()). So we remain
with a bad CR3 / PGD and return to userland.

The next thing that happens is handle_mm_fault() with an address for
the execution of its code in userland. handle_mm_fault() realizes that
it has a PTE with proper rights so it returns doing nothing. But the
CPU looks at the wrong PGD and insists that something is wrong and
faults again. And again. And one more time…

This page-fault cycle continues until the scheduler gets tired of it and
puts another task on the CPU. It gets a little more difficult if the task
is an RT task with a high priority. The system will either freeze or it
gets fixed by the software watchdog thread, which usually runs at RT-max
prio. But waiting for the watchdog will increase the latency of the RT
task, which is no good.

Fix this by disabling preemption across the critical code section.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1470404259-26290-1-git-send-email-bigeasy@linutronix.de
[ Prettified the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/tlbflush.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 4e5be94..6fa8594 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -135,7 +135,14 @@ static inline void cr4_set_bits_and_update_boot(unsigned long mask)
 
 static inline void __native_flush_tlb(void)
 {
+	/*
+	 * If current->mm == NULL then we borrow a mm which may change during a
+	 * task switch and therefore we must not be preempted while we write CR3
+	 * back:
+	 */
+	preempt_disable();
 	native_write_cr3(native_read_cr3());
+	preempt_enable();
 }
 
 static inline void __native_flush_tlb_global_irq_disabled(void)

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86/mm: Disable preemption during CR3 read+write
  2017-10-18 15:01 ` Bernhard Kaindl
@ 2017-10-18 16:51   ` Greg KH
  0 siblings, 0 replies; 13+ messages in thread
From: Greg KH @ 2017-10-18 16:51 UTC (permalink / raw)
  To: Bernhard Kaindl; +Cc: Sebastian Andrzej Siewior, stable, SPIES Werner

On Wed, Oct 18, 2017 at 05:01:57PM +0200, Bernhard Kaindl wrote:
> Hi Greg!
> 
> On 17.10.2017 17:57, Sebastian Andrzej Siewior wrote:
> > Upstream commit 5cf0791da5c162ebc14b01eb01631cfa7ed4fa6e
> 
> This race happens when a process exit is preempted at the wrong time, and
> we can confirm that the bug fixed by this commit happens on Linux-3.18 systems:
> 
> This is how we are affected without this fix which is missing in 3.18.x:
> RT Process fails to make progress -> HW watchdog fires -> System resets.
> 
> We were able to see the commit's sequence of events and the CR3 having the
> PGD of a dead process using ftrace + SW watchdog.
> 
> Systems with SCHED_FIFO tasks are especially vulnerable, because when
> the SCHED_FIFO task with the highest priority gets the deceased PGD@CR3, the
> task will page fault forever without making progress and no other process
> can be scheduled anymore.
> 
> If this SCHED_FIFO task is the one triggering the HW watchdog, the HW watchdog
> will fire, but if not, the system will still respond to ping but not do anything else.
> 
> With kernel.sched_rt_runtime_us > 0, SCHED_OTHER processes could cause
> a context switch after kernel.sched_rt_period_us expires, so usually
> this would allow the system to recover, because then CR3 would be switched,
> but this is too late, and a real-time system would have failed
> at this point already.
> 
> With kernel.sched_rt_runtime_us < 0, the only recovery in this case is a HW
> watchdog resetting the machine, but with devastating loss of function until
> the system is up again.
> 
> All UP preemptible-kernel x86 real-time systems, including industrial
> control/automation, SCADA, Linux-based PLCs (e.g. using Intel Quark),
> are definitely affected when process termination collides with HW/SW
> interrupts.
> 
> Non-real time systems: Except for some threads occasionally failing to make
> progress, the system will recover:
> Other processes will eventually be scheduled, causing CR3 to be loaded again
> correctly from task->mm->pgd, resolving the problem.
> 
> > This patch is already part of various stable trees but is missing in the
> >   v3.18
> >   v4.1
> Yes, the long-term branches of 3.2, 3.10, 3.16 and 4.4 have got the fix
> (a long time ago!), and 4.9 already has it merged in mainline.
> 
> > tree and applies cleanly on top of
> >  � v3.18.69
> >  � v4.1.43
> > 
> > I've been contacted by Bernhard Kaindl (Cc:) and he asked about the
> > whereabouts of the patch in the two stable trees. He can confirm that
> > this patch cures his problem on the v3.18 stable tree he is using.
> > He assumes that the same problem might occur on the v4.1 tree and should
> > be fixed by the patch but he has no working setup with v4.1 kernel to
> > confirm this.
> 
> I confirm - here is a quick summary of what we found:
> 
> We saw that our watchdog process got the PGD of a dead process in CR3,
> causing failure to pulse the watchdog because of the page fault loop
> described in the commit log.
> 
> We had a lab of 16 machines available for testing the crash fixed by this
> commit.
> 
> We found this fix by pure luck thanks to Google after a lot of searches by
> several people. With the fix, over this weekend, in the lab, we didn't
> trigger this issue anymore.
> 
> (Actually, we found another issue in our own code and had one unexplained machine
> hang; to debug it we would need more specific HW which we don't have ATM,
> but it is likely that this was also the same issue, caused by our own bug.)
> 
> Before having the fix, we reproduced the exact sequence of events which the
> commit log describes within one hour on a single machine.
> 
> With Linux-4.4.64 (which does have this fix), we didn't see this bug.
> 
> Because it appears to fix both 3.18 and 4.4, it makes sense to apply it to
> the v4.1.x longterm branch too.

Thanks for the detailed description, much appreciated.  I've queued it
up for 3.18, it's up to Sasha to do it for 4.1.

thanks again,

greg k-h

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86/mm: Disable preemption during CR3 read+write
  2017-10-17 15:57 [PATCH] " Sebastian Andrzej Siewior
@ 2017-10-18 15:01 ` Bernhard Kaindl
  2017-10-18 16:51   ` Greg KH
  0 siblings, 1 reply; 13+ messages in thread
From: Bernhard Kaindl @ 2017-10-18 15:01 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, stable, SPIES Werner

Hi Greg!

On 17.10.2017 17:57, Sebastian Andrzej Siewior wrote:
 > Upstream commit 5cf0791da5c162ebc14b01eb01631cfa7ed4fa6e

This race happens when a process exit is preempted at the wrong time, and
we can confirm that the bug fixed by this commit happens on Linux-3.18 systems:

This is how we are affected without this fix which is missing in 3.18.x:
RT Process fails to make progress -> HW watchdog fires -> System resets.

We were able to see the commit's sequence of events and the CR3 having 
the PGD of a dead process using ftrace + SW watchdog.

Systems with SCHED_FIFO tasks are especially vulnerable, because when
the SCHED_FIFO task with the highest priority gets the deceased PGD@CR3, 
the task will page fault forever without making progress and no other 
process can be scheduled anymore.

If this SCHED_FIFO task is the one triggering the HW watchdog, the HW
watchdog will fire, but if not, the system will still respond to ping but
not do anything else.

With kernel.sched_rt_runtime_us > 0, SCHED_OTHER processes could cause
a context switch after kernel.sched_rt_period_us expires, so usually
this would allow the system to recover, because then CR3 would be
switched, but this is too late, and a real-time system would have failed
at this point already.

With kernel.sched_rt_runtime_us < 0, the only recovery in this case is a 
HW watchdog resetting the machine, but with devastating loss of function 
until the system is up again.

All UP preemptible-kernel x86 real-time systems, including industrial
control/automation, SCADA, Linux-based PLCs (e.g. using Intel Quark),
are definitely affected when process termination collides with HW/SW
interrupts.

Non-real time systems: Except for some threads occasionally failing to 
make progress, the system will recover:
Other processes will eventually be scheduled, causing CR3 to be loaded 
again correctly from task->mm->pgd, resolving the problem.

> This patch is already part of various stable trees but is missing in the
>   v3.18
>   v4.1
Yes, the long-term branches of 3.2, 3.10, 3.16 and 4.4 have got the fix
(a long time ago!), and 4.9 already has it merged in mainline.

> tree and applies cleanly on top of
>    v3.18.69
>    v4.1.43
> 
> I've been contacted by Bernhard Kaindl (Cc:) and he asked about the
> whereabouts of the patch in the two stable trees. He can confirm that
> this patch cures his problem on the v3.18 stable tree he is using.
> He assumes that the same problem might occur on the v4.1 tree and should
> be fixed by the patch but he has no working setup with v4.1 kernel to
> confirm this.

I confirm - here is a quick summary of what we found:

We saw that our watchdog process got the PGD of a dead process in CR3, 
causing failure to pulse the watchdog because of the page fault loop 
described in the commit log.

We had a lab of 16 machines available for testing the crash fixed by 
this commit.

We found this fix by pure luck thanks to Google after a lot of searches 
by several people. With the fix, over this weekend, in the lab, we 
didn't trigger this issue anymore.

(Actually, we found another issue in our own code and had one unexplained
machine hang; to debug it we would need more specific HW which we don't
have ATM, but it is likely that this was also the same issue, caused by
our own bug.)

Before having the fix, we reproduced the exact sequence of events which the
commit log describes within one hour on a single machine.

With Linux-4.4.64 (which does have this fix), we didn't see this bug.

Because it appears to fix both 3.18 and 4.4, it makes sense to apply it 
to the v4.1.x longterm branch too.

Bernhard

-- 
PS: Here is further detail if you want more confirmation that we indeed
     managed to trigger and analyze this race, which enabled us to find this
     fix:

The test case we used to reproduce the issue is not simple and requires
a specific HW lab and SW environment; it is sensitive to the tiniest changes.

By adding tracing of CR3 to a light-weight ftrace setup, and using a SW
watchdog that boots a kdump kernel when it fires, we found this:

In the ftrace buffer of the vmcore of 3.18.26 (we crashed 3.18.72
too since it does not have the fix), the behavior which the commit
log describes is visible:

* A process exits as described in the commit log of the fix
   and gets rescheduled during the exit.

* With the CR3 of the exiting process, another process continues to run, 
  but due to CR3 pointing to the incorrect PGD, the endless page faults
   described in the commit log happen.

     Several task switches between threads of the process and the kernel
   happen, but since no CR3 change is needed, each thread continues to get
   the endless page faults with identical IP and address.

     Finally, the watchdog thread of the same process is scheduled (again
   no CR3 switch needed) and the watchdog thread gets the repeated page
   faults as well, causing our kernel's SW watchdog to fire and call into
   panic().

   At this point, in the vmcore, we saw the whole story unfold in the
ftrace buffer, using the crash tool with the trace extension loaded and
trace-cmd to get the symbolic ftrace output.

--
Bernhard Kaindl
SW Engineer for Control Platform Systems
----------------------------------------------------
Thales Austria GmbH
Handelskai 92, 1200 Vienna, Austria
Tel.: +43 1 27711-5095
Email: Bernhard.Kaindl@thalesgroup.com
www.thalesgroup.com/austria

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH] x86/mm: Disable preemption during CR3 read+write
@ 2017-10-17 15:57 Sebastian Andrzej Siewior
  2017-10-18 15:01 ` Bernhard Kaindl
  0 siblings, 1 reply; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2017-10-17 15:57 UTC (permalink / raw)
  To: stable; +Cc: Bernhard Kaindl

Upstream commit 5cf0791da5c162ebc14b01eb01631cfa7ed4fa6e

This patch is already part of various stable trees but is missing in the
  v3.18
  v4.1

trees and applies cleanly on top of
  v3.18.69
  v4.1.43

I've been contacted by Bernhard Kaindl (Cc:) and he asked about the
whereabouts of the patch in the two stable trees. He can confirm that
this patch cures his problem on the v3.18 stable tree he is using.
He assumes that the same problem might occur on the v4.1 tree and should
be fixed by the patch but he has no working setup with v4.1 kernel to
confirm this.

Sebastian

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2017-10-18 16:51 UTC | newest]

Thread overview: 13+ messages
2016-08-05 13:37 [PATCH] x86/mm: disable preemption during CR3 read+write Sebastian Andrzej Siewior
2016-08-05 13:37 ` Sebastian Andrzej Siewior
2016-08-05 13:53 ` Peter Zijlstra
2016-08-05 13:53   ` Peter Zijlstra
2016-08-05 14:38 ` Rik van Riel
2016-08-05 15:42 ` Andy Lutomirski
2016-08-05 15:42   ` Andy Lutomirski
2016-08-05 15:52   ` Sebastian Andrzej Siewior
2016-08-05 15:52     ` Sebastian Andrzej Siewior
2016-08-10 18:10 ` [tip:x86/urgent] x86/mm: Disable " tip-bot for Sebastian Andrzej Siewior
2017-10-17 15:57 [PATCH] " Sebastian Andrzej Siewior
2017-10-18 15:01 ` Bernhard Kaindl
2017-10-18 16:51   ` Greg KH
