linux-kernel.vger.kernel.org archive mirror
* [PATCH] mm/x86: Flush lazy MMU when DEBUG_PAGEALLOC is set
@ 2013-02-26 22:56 Boris Ostrovsky
  2013-02-26 23:38 ` H. Peter Anvin
  0 siblings, 1 reply; 6+ messages in thread
From: Boris Ostrovsky @ 2013-02-26 22:56 UTC (permalink / raw)
  To: tglx, mingo, hpa; +Cc: konrad.wilk, xen-devel, linux-kernel, Boris Ostrovsky

When CONFIG_DEBUG_PAGEALLOC is set, page table updates made by
kernel_map_pages() are not made visible (via a TLB flush) immediately if
lazy MMU mode is on. In environments that support lazy MMU (e.g. Xen) this
may lead to fatal page faults, for example when zap_pte_range() needs to
allocate pages in __tlb_remove_page() -> tlb_next_batch().

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 arch/x86/mm/pageattr.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index ca1f1c2..7b3216e 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1369,6 +1369,8 @@ void kernel_map_pages(struct page *page, int numpages, int enable)
 	 * but that can deadlock->flush only current cpu:
 	 */
 	__flush_tlb_all();
+
+	arch_flush_lazy_mmu_mode();
 }
 
 #ifdef CONFIG_HIBERNATION
-- 
1.8.1.2


* Re: [PATCH] mm/x86: Flush lazy MMU when DEBUG_PAGEALLOC is set
@ 2013-02-26 23:57 Boris Ostrovsky
  2013-02-27 22:40 ` H. Peter Anvin
  0 siblings, 1 reply; 6+ messages in thread
From: Boris Ostrovsky @ 2013-02-26 23:57 UTC (permalink / raw)
  To: hpa; +Cc: mingo, konrad.wilk, tglx, xen-devel, linux-kernel


----- hpa@zytor.com wrote:

> On 02/26/2013 02:56 PM, Boris Ostrovsky wrote:
> > When CONFIG_DEBUG_PAGEALLOC is set page table updates made by
> > kernel_map_pages() are not made visible (via TLB flush) immediately
> > if lazy MMU is on. In environments that support lazy MMU (e.g. Xen)
> > this may lead to fatal page faults, for example, when zap_pte_range()
> > needs to allocate pages in __tlb_remove_page() -> tlb_next_batch().
> > 
> > Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > ---
> >  arch/x86/mm/pageattr.c | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> > index ca1f1c2..7b3216e 100644
> > --- a/arch/x86/mm/pageattr.c
> > +++ b/arch/x86/mm/pageattr.c
> > @@ -1369,6 +1369,8 @@ void kernel_map_pages(struct page *page, int numpages, int enable)
> >  	 * but that can deadlock->flush only current cpu:
> >  	 */
> >  	__flush_tlb_all();
> > +
> > +	arch_flush_lazy_mmu_mode();
> >  }
> >  
> >  #ifdef CONFIG_HIBERNATION
> > 
> 
> This sounds like a critical fix, i.e. a -stable candidate.  Am I
> correct?

I considered Cc'ing stable, but decided against it: this is a debugging
feature (kernel_map_pages() is only defined when CONFIG_DEBUG_PAGEALLOC is
set), and my thinking was that stable kernels usually don't enable it.


-boris


end of thread, other threads:[~2013-02-27 23:08 UTC | newest]

Thread overview: 6+ messages
-- links below jump to the message on this page --
2013-02-26 22:56 [PATCH] mm/x86: Flush lazy MMU when DEBUG_PAGEALLOC is set Boris Ostrovsky
2013-02-26 23:38 ` H. Peter Anvin
2013-02-26 23:57 Boris Ostrovsky
2013-02-27 22:40 ` H. Peter Anvin
2013-02-27 23:00   ` Greg KH
2013-02-27 23:07     ` H. Peter Anvin
