Subject: [PATCH - sort of] x86: Livelock in handle_pte_fault
From: Stanislav Meduna
Date: 2013-05-17  8:42 UTC
To: linux-rt-users, linux-kernel
Cc: rostedt, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86

Hi all,

I don't know whether this is linux-rt specific or applies to
the mainline too, so I'll repeat some things the linux-rt
readers already know.

Environment:

- Geode LX or Celeron M
- _not_ CONFIG_SMP
- linux 3.4 with realtime patches and full preempt configured
- an application consisting of several mostly RR-class threads
- the application runs with mlockall() (this kind of setup is
  sketched after the list)
- there is no swap
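
For context, a minimal sketch of this kind of setup (just the pattern,
not our actual application; the priority value is arbitrary):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

static void *rt_worker(void *arg)
{
	/* the application's real-time work loop would go here */
	return NULL;
}

int main(void)
{
	pthread_t tid;
	pthread_attr_t attr;
	struct sched_param sp = { .sched_priority = 50 };

	/* lock current and future mappings so the threads normally
	 * never fault once they are up and running */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");
		return EXIT_FAILURE;
	}

	/* create an RR-class worker with an explicitly set priority */
	pthread_attr_init(&attr);
	pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
	pthread_attr_setschedpolicy(&attr, SCHED_RR);
	pthread_attr_setschedparam(&attr, &sp);

	if (pthread_create(&tid, &attr, rt_worker, NULL) != 0) {
		fprintf(stderr, "pthread_create failed\n");
		return EXIT_FAILURE;
	}

	pthread_join(tid, NULL);
	return EXIT_SUCCESS;
}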

Problem:

- after several hours to 1-2 weeks, some of the threads start to loop
  in the following way:

  0d...0 62811.755382: function:  do_page_fault
  0....0 62811.755386: function:     handle_mm_fault
  0....0 62811.755389: function:        handle_pte_fault
  0d...0 62811.755394: function:  do_page_fault
  0....0 62811.755396: function:     handle_mm_fault
  0....0 62811.755398: function:        handle_pte_fault
  0d...0 62811.755402: function:  do_page_fault
  0....0 62811.755404: function:     handle_mm_fault
  0....0 62811.755406: function:        handle_pte_fault

  and stay in the loop until RT throttling gets activated.
  One of the faulting addresses was in code (after returning
  from a syscall), another was on the stack (inside put_user right
  before a syscall ends); both were definitely mapped.

- After the RT throttler activates, the problem somehow magically
  fixes itself, probably (not verified) because another _process_
  gets scheduled. When throttled, the RR and FF threads are not
  allowed to run for a while (20 ms in my configuration). The
  livelock lasts around 1-3 seconds, and there is a SCHED_OTHER
  process that runs every 2 seconds.

- Kernel threads with a higher priority than the faulting one
  (linux-rt irq threads) run normally. A higher-priority user thread
  from the same process gets scheduled and then enters the same
  faulting loop.

- In ps -o min_flt,maj_flt the number of minor page faults
  for the offending thread skyrockets to hundreds of thousands
  (normally it stays at zero, as everything is already mapped
  when the thread starts)

- The code in handle_pte_fault proceeds through the
    entry = pte_mkyoung(entry);
  line, and the following
    ptep_set_access_flags
  call returns zero (the relevant snippet is quoted below).
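
  For reference, the tail of handle_pte_fault() in mm/memory.c looks
  roughly like this in 3.4 (quoted from memory, so treat it as
  approximate):

	entry = pte_mkyoung(entry);
	if (ptep_set_access_flags(vma, address, pte, entry,
				  flags & FAULT_FLAG_WRITE)) {
		/* the PTE changed - push the update to the hardware */
		update_mmu_cache(vma, address, pte);
	} else {
		/*
		 * The PTE was already up to date, i.e. the fault was
		 * spurious. Only a write fault gets a TLB flush here;
		 * on a read fault nothing at all is done.
		 */
		if (flags & FAULT_FLAG_WRITE)
			flush_tlb_fix_spurious_fault(vma, address);
	}

  So when ptep_set_access_flags() returns zero, the fault handler
  effectively does nothing that would change the situation, which
  fits the fault firing again immediately.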

- The livelock is extremely timing-sensitive - with different
  workloads it either does not happen at all or happens far later.

- I was able to make this happen a bit faster (once per ~4 hours)
  with the RT thread repeatedly causing the kernel to try to
  invoke modprobe to load a missing module - so there is a load
  of kworkers launching modprobes. (In case anyone wonders how that
  can happen: it was a bug in our application - an invalid level
  passed to setsockopt made the kernel search for a TCP congestion
  module instead of setting SO_LINGER; see the sketch below.)
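
  To illustrate (a hypothetical reconstruction, not our actual code):
  on Linux SO_LINGER and TCP_CONGESTION share the numeric value 13,
  so passing the TCP level instead of SOL_SOCKET turns a linger
  request into a congestion-control lookup and a modprobe attempt:

#include <netinet/in.h>
#include <sys/socket.h>

static int set_linger_buggy(int fd)
{
	struct linger lg = { .l_onoff = 1, .l_linger = 0 };

	/* wrong level: should be SOL_SOCKET; with IPPROTO_TCP the kernel
	 * sees optname 13 as TCP_CONGESTION, treats the struct linger
	 * bytes as an algorithm name and ends up asking modprobe for a
	 * nonexistent tcp_* module */
	return setsockopt(fd, IPPROTO_TCP, SO_LINGER, &lg, sizeof(lg));
}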

- the symptoms are similar to
    http://lkml.indiana.edu/hypermail/linux/kernel/1103.0/01364.html
  which got fixed by
    https://lkml.org/lkml/2011/3/15/516
  but this fix does not apply to the processors in question

- the patch below _seems_ to fix it, or at least massively delays it -
  the testcase now runs for 2.5 days instead of 4 hours. I doubt
  it is the proper fix (it brutally reloads CR3 every time
  a thread with a userspace mapping is switched to). I just got the
  suspicion that there is some way for the kernel to forget to update
  the memory mapping when going from a userspace thread through
  some kernel ones back to another userspace one, and tried to make
  sure the mapping is always reloaded.

- the whole history starts at
    http://www.spinics.net/lists/linux-rt-users/msg09758.html
  I originally thought the problem was in timerfd and hunted for it
  in several places until I learned to use the tracing infrastructure
  and started pinning it down with trace prints etc. :)

- A trace file of the hang is at
  http://www.meduna.org/tmp/trace.mmfaulthang.dat.gz

Does this ring a bell with someone?

Thanks
                                              Stano




diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 6902152..3d54a15 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -54,21 +54,23 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 		if (unlikely(prev->context.ldt != next->context.ldt))
 			load_LDT_nolock(&next->context);
 	}
-#ifdef CONFIG_SMP
 	else {
+#ifdef CONFIG_SMP
 		percpu_write(cpu_tlbstate.state, TLBSTATE_OK);
 		BUG_ON(percpu_read(cpu_tlbstate.active_mm) != next);

 		if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next))) {
+#endif
 			/* We were in lazy tlb mode and leave_mm disabled
 			 * tlb flush IPI delivery. We must reload CR3
 			 * to make sure to use no freed page tables.
 			 */
 			load_cr3(next->pgd);
 			load_LDT_nolock(&next->context);
+#ifdef CONFIG_SMP
 		}
-	}
 #endif
+	}
 }

 #define activate_mm(prev, next)


