From: Michal Hocko <mhocko@kernel.org>
To: Will Deacon <will.deacon@arm.com>
Cc: Minchan Kim <minchan@kernel.org>, Wang Nan <wangnan0@huawei.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Bob Liu <liubo95@huawei.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Rientjes <rientjes@google.com>,
	Ingo Molnar <mingo@kernel.org>, Roman Gushchin <guro@fb.com>,
	Konstantin Khlebnikov <khlebnikov@yandex-team.ru>,
	Andrea Arcangeli <aarcange@redhat.com>
Subject: Re: [PATCH] arch, mm: introduce arch_tlb_gather_mmu_lazy (was: Re: [RESEND PATCH] mm, oom_reaper: gather each vma to prevent) leaking TLB entry
Date: Thu, 16 Nov 2017 10:20:42 +0100	[thread overview]
Message-ID: <20171116092042.esxqtnfxdrozfwey@dhcp22.suse.cz> (raw)
In-Reply-To: <20171115173332.GL19071@arm.com>

On Wed 15-11-17 17:33:32, Will Deacon wrote:
> Hi Michal,
> 
> On Fri, Nov 10, 2017 at 01:26:35PM +0100, Michal Hocko wrote:
> > From 7f0fcd2cab379ddac5611b2a520cdca8a77a235b Mon Sep 17 00:00:00 2001
> > From: Michal Hocko <mhocko@suse.com>
> > Date: Fri, 10 Nov 2017 11:27:17 +0100
> > Subject: [PATCH] arch, mm: introduce arch_tlb_gather_mmu_lazy
> > 
> > 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1")
> > introduced an optimization to skip the TLB flush when the whole
> > address space is being torn down. Will goes on to explain:
> > 
> > : Basically, we tag each address space with an ASID (PCID on x86) which
> > : is resident in the TLB. This means we can elide TLB invalidation when
> > : pulling down a full mm because we won't ever assign that ASID to
> > : another mm without doing TLB invalidation elsewhere (which actually
> > : just nukes the whole TLB).
> > 
> > This is all nice, but tlb_gather users are not aware of it, and that
> > can cause real problems. E.g. the oom_reaper tries to reap the whole
> > address space but it might race with threads accessing the memory [1].
> > Soft-dirty handling might suffer from the same problem [2].
> > 
> > Introduce an explicit lazy variant, tlb_gather_mmu_lazy, which allows
> > the behavior arm64 implements for the fullmm case, driven by an
> > explicit lazy flag in the mmu_gather structure. The exit_mmap path is
> > then converted to the lazy variant. Other architectures simply ignore
> > the flag.
> > 
> > [1] http://lkml.kernel.org/r/20171106033651.172368-1-wangnan0@huawei.com
> > [2] http://lkml.kernel.org/r/20171110001933.GA12421@bbox
> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > ---
> >  arch/arm/include/asm/tlb.h   |  3 ++-
> >  arch/arm64/include/asm/tlb.h |  2 +-
> >  arch/ia64/include/asm/tlb.h  |  3 ++-
> >  arch/s390/include/asm/tlb.h  |  3 ++-
> >  arch/sh/include/asm/tlb.h    |  2 +-
> >  arch/um/include/asm/tlb.h    |  2 +-
> >  include/asm-generic/tlb.h    |  6 ++++--
> >  include/linux/mm_types.h     |  2 ++
> >  mm/memory.c                  | 17 +++++++++++++++--
> >  mm/mmap.c                    |  2 +-
> >  10 files changed, 31 insertions(+), 11 deletions(-)
> > 
> > diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
> > index d5562f9ce600..fe9042aee8e9 100644
> > --- a/arch/arm/include/asm/tlb.h
> > +++ b/arch/arm/include/asm/tlb.h
> > @@ -149,7 +149,8 @@ static inline void tlb_flush_mmu(struct mmu_gather *tlb)
> >  
> >  static inline void
> >  arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
> > -			unsigned long start, unsigned long end)
> > +			unsigned long start, unsigned long end,
> > +			bool lazy)
> >  {
> >  	tlb->mm = mm;
> >  	tlb->fullmm = !(start | (end+1));
> > diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> > index ffdaea7954bb..7adde19b2bcc 100644
> > --- a/arch/arm64/include/asm/tlb.h
> > +++ b/arch/arm64/include/asm/tlb.h
> > @@ -43,7 +43,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
> >  	 * The ASID allocator will either invalidate the ASID or mark
> >  	 * it as used.
> >  	 */
> > -	if (tlb->fullmm)
> > +	if (tlb->lazy)
> >  		return;
> 
> This looks like the right idea, but I'd rather make this check:
> 
> 	if (tlb->fullmm && tlb->lazy)
> 
> since the optimisation doesn't work for anything other than tearing down
> the entire address space.

OK, that makes sense.

> Alternatively, I could actually go check MMF_UNSTABLE in tlb->mm, which
> would save you having to add an extra flag in the first place, e.g.:
> 
> 	if (tlb->fullmm && !test_bit(MMF_UNSTABLE, &tlb->mm->flags))
> 
> which is a nice one-liner.

But that would make it oom_reaper-specific. What about the soft-dirty
case Minchan mentioned earlier?
-- 
Michal Hocko
SUSE Labs


Thread overview: 38+ messages
2017-11-07  9:54 [RESEND PATCH] mm, oom_reaper: gather each vma to prevent leaking TLB entry Wang Nan
2017-11-07 10:09 ` Michal Hocko
2017-11-10  0:19 ` Minchan Kim
2017-11-10 10:15   ` Michal Hocko
2017-11-10 12:26     ` [PATCH] arch, mm: introduce arch_tlb_gather_mmu_lazy (was: Re: [RESEND PATCH] mm, oom_reaper: gather each vma to prevent) " Michal Hocko
2017-11-13  0:28       ` Minchan Kim
2017-11-13  9:51         ` Michal Hocko
2017-11-14  1:45           ` Minchan Kim
2017-11-14  7:21             ` Michal Hocko
2017-11-15  0:12               ` Minchan Kim
2017-11-15  8:14         ` Michal Hocko
2017-11-16  0:44           ` Minchan Kim
2017-11-16  9:19             ` Michal Hocko
2017-11-15 17:33       ` Will Deacon
2017-11-16  9:20         ` Michal Hocko [this message]
2017-11-20 14:24           ` Will Deacon
2017-11-20 16:04             ` [PATCH] arch, mm: introduce arch_tlb_gather_mmu_lazy Michal Hocko
2017-11-22 19:30               ` Will Deacon
2017-11-23  6:18                 ` Minchan Kim
