Subject: [RESEND PATCH] mm, oom_reaper: gather each vma to prevent leaking TLB entry
From: Wang Nan
Date: 2017-11-07 09:54 UTC
To: linux-mm, linux-kernel, mhocko, will.deacon
Cc: Wang Nan, Bob Liu, Andrew Morton, Michal Hocko, David Rientjes,
	Ingo Molnar, Roman Gushchin, Konstantin Khlebnikov,
	Andrea Arcangeli

tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
space. In this case, tlb->fullmm is true. Some architectures, such as
arm64, do not flush the TLB when tlb->fullmm is true:

  commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1").

This leaks stale TLB entries.
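
For reference, the arm64 tlb_flush() of that era looks roughly like the
sketch below (paraphrased from arch/arm64/include/asm/tlb.h after the
commit above; details may differ slightly): when fullmm is set it simply
returns and relies on the ASID allocator instead of issuing an explicit
invalidation.

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
  	struct vm_area_struct vma = { .vm_mm = tlb->mm, };

  	/*
  	 * The ASID allocator will either invalidate the ASID or mark
  	 * it as used, so no explicit TLB invalidation is issued for a
  	 * full-mm teardown.
  	 */
  	if (tlb->fullmm)
  		return;

  	flush_tlb_range(&vma, tlb->start, tlb->end);
  }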

Will clarified his patch as follows:

> Basically, we tag each address space with an ASID (PCID on x86) which
> is resident in the TLB. This means we can elide TLB invalidation when
> pulling down a full mm because we won't ever assign that ASID to another mm
> without doing TLB invalidation elsewhere (which actually just nukes the
> whole TLB).
>
> I think that means that we could potentially not fault on a kernel uaccess,
> because we could hit in the TLB.

There is a window between complete_signal() sending IPIs to other cores
and all threads sharing this mm actually being kicked off those cores.
In this window, the oom reaper may call tlb_flush_mmu_tlbonly() to flush
the TLB and then free the pages. However, due to the problem above, the
TLB entries are not really flushed on arm64, so other threads can still
access these pages through stale TLB entries. Moreover, a copy_to_user()
can write to these pages without generating a page fault, causing
use-after-free bugs.
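
To make the failing path concrete, the generic flush helper of that era
looks roughly like the sketch below (paraphrased from mm/memory.c of that
period; details may differ). With a fullmm gather, the tlb_flush() call
is the arm64 no-op shown above, so nothing is invalidated before the
gathered pages are freed.

  static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
  {
  	if (!tlb->end)
  		return;

  	/* On arm64 this returns immediately when tlb->fullmm is set. */
  	tlb_flush(tlb);
  	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
  	__tlb_reset_range(tlb);
  }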

This patch gathers each vma instead of gathering the full vm space, so
tlb->fullmm is not true. The behavior of the oom reaper then becomes
similar to munmapping before do_exit, which should be safe for all
architectures.
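
The generic mmu_gather setup of that era treats only a (0, -1) range as a
full-mm teardown, roughly via fullmm = !(start | (end + 1)); a per-vma
range therefore keeps fullmm clear. Below is a minimal stand-alone
illustration of that check (not kernel code; the expression is assumed to
match the generic code of that period and the addresses are made up):

  #include <stdio.h>

  /* Mimics the generic fullmm check: true only for a (0, -1) gather. */
  static int fullmm(unsigned long start, unsigned long end)
  {
  	return !(start | (end + 1));
  }

  int main(void)
  {
  	printf("gather(0, -1):            fullmm = %d\n",
  	       fullmm(0, -1UL));
  	printf("gather(vm_start, vm_end): fullmm = %d\n",
  	       fullmm(0x400000UL, 0x600000UL));
  	return 0;
  }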

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Bob Liu <liubo95@huawei.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Andrea Arcangeli <aarcange@redhat.com>
---
 mm/oom_kill.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index dee0f75..18c5b35 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -532,7 +532,6 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 	 */
 	set_bit(MMF_UNSTABLE, &mm->flags);
 
-	tlb_gather_mmu(&tlb, mm, 0, -1);
 	for (vma = mm->mmap ; vma; vma = vma->vm_next) {
 		if (!can_madv_dontneed_vma(vma))
 			continue;
@@ -547,11 +546,13 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 		 * we do not want to block exit_mmap by keeping mm ref
 		 * count elevated without a good reason.
 		 */
-		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
+		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
+			tlb_gather_mmu(&tlb, mm, vma->vm_start, vma->vm_end);
 			unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
 					 NULL);
+			tlb_finish_mmu(&tlb, vma->vm_start, vma->vm_end);
+		}
 	}
-	tlb_finish_mmu(&tlb, 0, -1);
 	pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
 			task_pid_nr(tsk), tsk->comm,
 			K(get_mm_counter(mm, MM_ANONPAGES)),
-- 
2.10.1
