From: Michal Hocko <mhocko@kernel.org>
To: Will Deacon <will.deacon@arm.com>
Cc: Minchan Kim <minchan@kernel.org>, Wang Nan <wangnan0@huawei.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Bob Liu <liubo95@huawei.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Rientjes <rientjes@google.com>, Ingo Molnar <mingo@kernel.org>,
	Roman Gushchin <guro@fb.com>,
	Konstantin Khlebnikov <khlebnikov@yandex-team.ru>,
	Andrea Arcangeli <aarcange@redhat.com>
Subject: [PATCH] arch, mm: introduce arch_tlb_gather_mmu_lazy
Date: Mon, 20 Nov 2017 17:04:22 +0100	[thread overview]
Message-ID: <20171120160422.5ieustt5ovbyelyx@dhcp22.suse.cz> (raw)
In-Reply-To: <20171120142444.GA32488@arm.com>

On Mon 20-11-17 14:24:44, Will Deacon wrote:
> On Thu, Nov 16, 2017 at 10:20:42AM +0100, Michal Hocko wrote:
> > On Wed 15-11-17 17:33:32, Will Deacon wrote:
[...]
> > > > diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> > > > index ffdaea7954bb..7adde19b2bcc 100644
> > > > --- a/arch/arm64/include/asm/tlb.h
> > > > +++ b/arch/arm64/include/asm/tlb.h
> > > > @@ -43,7 +43,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
> > > >  	 * The ASID allocator will either invalidate the ASID or mark
> > > >  	 * it as used.
> > > >  	 */
> > > > -	if (tlb->fullmm)
> > > > +	if (tlb->lazy)
> > > >  		return;
> > > 
> > > This looks like the right idea, but I'd rather make this check:
> > > 
> > > 	if (tlb->fullmm && tlb->lazy)
> > > 
> > > since the optimisation doesn't work for anything other than tearing down
> > > the entire address space.
> > 
> > OK, that makes sense.
> > 
> > > Alternatively, I could actually go check MMF_UNSTABLE in tlb->mm, which
> > > would save you having to add an extra flag in the first place, e.g.:
> > > 
> > > 	if (tlb->fullmm && !test_bit(MMF_UNSTABLE, &tlb->mm->flags))
> > > 
> > > which is a nice one-liner.
> > 
> > But that would make it oom_reaper specific. What about the softdirty
> > case Minchan has mentioned earlier?
> 
> We don't (yet) support that on arm64, so we're ok for now. If we do grow
> support for it, then I agree that we want a flag to identify the case where
> the address space is going away and only elide the invalidation then.

What do you think about the following patch instead? I have to confess
I do not really understand the fullmm semantic so I might introduce some
duplication by this flag. If you think this is a good idea, I will post
it in a separate thread.
---
From ba2633169d355a77dc17bda11735432b554efc28 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Mon, 20 Nov 2017 17:02:00 +0100
Subject: [PATCH] arch, mm: introduce arch_tlb_gather_mmu_lazy

Commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1")
introduced an optimization to not flush the TLB when tearing the whole
address space down. Will goes on to explain:

: Basically, we tag each address space with an ASID (PCID on x86) which
: is resident in the TLB. This means we can elide TLB invalidation when
: pulling down a full mm because we won't ever assign that ASID to
: another mm without doing TLB invalidation elsewhere (which actually
: just nukes the whole TLB).

This is all nice, but tlb_gather users are not aware of it and that can
actually cause some real problems. E.g. the oom_reaper tries to reap the
whole address space but it might race with threads accessing the memory [1].
It is possible that soft-dirty handling might suffer from the same
problem [2] as soon as it starts supporting the feature.

Introduce an explicit lazy variant, tlb_gather_mmu_lazy, which enables
the behavior arm64 implements for the fullmm case, replacing the
implicit fullmm check with an explicit lazy flag in the mmu_gather
structure. The exit_mmap path is then switched to the explicit lazy
variant. Other architectures simply ignore the flag.
[1] http://lkml.kernel.org/r/20171106033651.172368-1-wangnan0@huawei.com
[2] http://lkml.kernel.org/r/20171110001933.GA12421@bbox

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 arch/arm/include/asm/tlb.h   |  3 ++-
 arch/arm64/include/asm/tlb.h |  2 +-
 arch/ia64/include/asm/tlb.h  |  3 ++-
 arch/s390/include/asm/tlb.h  |  3 ++-
 arch/sh/include/asm/tlb.h    |  2 +-
 arch/um/include/asm/tlb.h    |  2 +-
 include/asm-generic/tlb.h    |  6 ++++--
 include/linux/mm_types.h     |  2 ++
 mm/memory.c                  | 17 +++++++++++++++--
 mm/mmap.c                    |  2 +-
 10 files changed, 31 insertions(+), 11 deletions(-)

diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index d5562f9ce600..fe9042aee8e9 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -149,7 +149,8 @@ static inline void tlb_flush_mmu(struct mmu_gather *tlb)
 
 static inline void
 arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-			unsigned long start, unsigned long end)
+			unsigned long start, unsigned long end,
+			bool lazy)
 {
 	tlb->mm = mm;
 	tlb->fullmm = !(start | (end+1));
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index ffdaea7954bb..52bfbeae2aca 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -43,7 +43,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	 * The ASID allocator will either invalidate the ASID or mark
 	 * it as used.
 	 */
-	if (tlb->fullmm)
+	if (tlb->fullmm && tlb->lazy)
 		return;
 
 	/*
diff --git a/arch/ia64/include/asm/tlb.h b/arch/ia64/include/asm/tlb.h
index cbe5ac3699bf..50c440f5b7bc 100644
--- a/arch/ia64/include/asm/tlb.h
+++ b/arch/ia64/include/asm/tlb.h
@@ -169,7 +169,8 @@ static inline void __tlb_alloc_page(struct mmu_gather *tlb)
 
 static inline void
 arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-			unsigned long start, unsigned long end)
+			unsigned long start, unsigned long end,
+			bool lazy)
 {
 	tlb->mm = mm;
 	tlb->max = ARRAY_SIZE(tlb->local);
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index 2eb8ff0d6fca..2310657b64c4 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -49,7 +49,8 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 
 static inline void
 arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-			unsigned long start, unsigned long end)
+			unsigned long start, unsigned long end,
+			bool lazy)
 {
 	tlb->mm = mm;
 	tlb->start = start;
diff --git a/arch/sh/include/asm/tlb.h b/arch/sh/include/asm/tlb.h
index 51a8bc967e75..ae4c50a7c1ec 100644
--- a/arch/sh/include/asm/tlb.h
+++ b/arch/sh/include/asm/tlb.h
@@ -37,7 +37,7 @@ static inline void init_tlb_gather(struct mmu_gather *tlb)
 
 static inline void
 arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-		unsigned long start, unsigned long end)
+		unsigned long start, unsigned long end, bool lazy)
 {
 	tlb->mm = mm;
 	tlb->start = start;
diff --git a/arch/um/include/asm/tlb.h b/arch/um/include/asm/tlb.h
index 344d95619d03..f24af66d07a4 100644
--- a/arch/um/include/asm/tlb.h
+++ b/arch/um/include/asm/tlb.h
@@ -46,7 +46,7 @@ static inline void init_tlb_gather(struct mmu_gather *tlb)
 
 static inline void
 arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-		unsigned long start, unsigned long end)
+		unsigned long start, unsigned long end, bool lazy)
 {
 	tlb->mm = mm;
 	tlb->start = start;
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index faddde44de8c..e6f0b8715e52 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -101,7 +101,8 @@ struct mmu_gather {
 	unsigned int		fullmm : 1,	/* we have performed an operation which
 						 * requires a complete flush of the tlb */
-				need_flush_all : 1;
+				need_flush_all : 1,
+				lazy : 1;
 
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
@@ -113,7 +114,8 @@ struct mmu_gather {
 #define HAVE_GENERIC_MMU_GATHER
 
 void arch_tlb_gather_mmu(struct mmu_gather *tlb,
-	struct mm_struct *mm, unsigned long start, unsigned long end);
+	struct mm_struct *mm, unsigned long start, unsigned long end,
+	bool lazy);
 void tlb_flush_mmu(struct mmu_gather *tlb);
 void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 			 unsigned long start, unsigned long end, bool force);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2a728317cba0..3208bea0356f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -523,6 +523,8 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 struct mmu_gather;
 extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 				unsigned long start, unsigned long end);
+extern void tlb_gather_mmu_lazy(struct mmu_gather *tlb, struct mm_struct *mm,
+				unsigned long start, unsigned long end);
 extern void tlb_finish_mmu(struct mmu_gather *tlb,
 				unsigned long start, unsigned long end);
 
diff --git a/mm/memory.c b/mm/memory.c
index 590709e84a43..7dfdd4d8224f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -218,13 +218,15 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 }
 
 void arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-				unsigned long start, unsigned long end)
+				unsigned long start, unsigned long end,
+				bool lazy)
 {
 	tlb->mm = mm;
 
 	/* Is it from 0 to ~0? */
 	tlb->fullmm     = !(start | (end+1));
 	tlb->need_flush_all = 0;
+	tlb->lazy	= lazy;
 	tlb->local.next = NULL;
 	tlb->local.nr   = 0;
 	tlb->local.max  = ARRAY_SIZE(tlb->__pages);
@@ -408,7 +410,18 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
 void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 			unsigned long start, unsigned long end)
 {
-	arch_tlb_gather_mmu(tlb, mm, start, end);
+	arch_tlb_gather_mmu(tlb, mm, start, end, false);
+	inc_tlb_flush_pending(tlb->mm);
+}
+
+/* tlb_gather_mmu_lazy
+ *	Basically same as tlb_gather_mmu except it allows architectures to
+ *	skip tlb flushing if they can ensure that nobody will reuse tlb entries
+ */
+void tlb_gather_mmu_lazy(struct mmu_gather *tlb, struct mm_struct *mm,
+			unsigned long start, unsigned long end)
+{
+	arch_tlb_gather_mmu(tlb, mm, start, end, true);
 	inc_tlb_flush_pending(tlb->mm);
 }
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 680506faceae..43594a6a2eac 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2997,7 +2997,7 @@ void exit_mmap(struct mm_struct *mm)
 
 	lru_add_drain();
 	flush_cache_mm(mm);
-	tlb_gather_mmu(&tlb, mm, 0, -1);
+	tlb_gather_mmu_lazy(&tlb, mm, 0, -1);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
-- 
2.15.0

-- 
Michal Hocko
SUSE Labs