From: Nadav Amit <nadav.amit@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Nadav Amit <namit@vmware.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Andy Lutomirski <luto@kernel.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>,
Will Deacon <will@kernel.org>, Yu Zhao <yuzhao@google.com>,
Nick Piggin <npiggin@gmail.com>,
x86@kernel.org
Subject: [RFC 06/20] fs/task_mmu: use mmu_gather interface of clear-soft-dirty
Date: Sat, 30 Jan 2021 16:11:18 -0800
Message-ID: <20210131001132.3368247-7-namit@vmware.com>
In-Reply-To: <20210131001132.3368247-1-namit@vmware.com>
From: Nadav Amit <namit@vmware.com>
Use the mmu_gather interface in task_mmu instead of open-coded
{inc|dec}_tlb_flush_pending(). This allows the code to be consolidated
and avoids potential bugs.
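
For reference, the conversion follows the usual mmu_gather pattern;
roughly (a sketch only, not the exact code -- see the diff below for the
actual call sites):

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm);	/* start batching for this mm */
	tlb_start_vma(&tlb, vma);	/* per-VMA setup (pre_vma callback) */
	...
	/* for each PTE whose soft-dirty bit is cleared: */
	tlb_flush_pte_range(&tlb, addr, PAGE_SIZE); /* record stale range */
	...
	tlb_end_vma(&tlb, vma);		/* per-VMA teardown (post_vma callback) */
	tlb_finish_mmu(&tlb);		/* flush the recorded ranges */

The mmu_gather core performs the pending-flush accounting internally
(tlb_gather_mmu() and tlb_finish_mmu() do the inc/dec), which is why the
open-coded {inc|dec}_tlb_flush_pending() calls and the explicit
flush_tlb_mm() become redundant.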
Signed-off-by: Nadav Amit <namit@vmware.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: x86@kernel.org
---
fs/proc/task_mmu.c | 27 ++++++++++++++++++++++++---
1 file changed, 24 insertions(+), 3 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3cec6fbef725..4cd048ffa0f6 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1032,8 +1032,25 @@ enum clear_refs_types {
struct clear_refs_private {
enum clear_refs_types type;
+ struct mmu_gather tlb;
};
+static int tlb_pre_vma(unsigned long start, unsigned long end,
+ struct mm_walk *walk)
+{
+ struct clear_refs_private *cp = walk->private;
+
+ tlb_start_vma(&cp->tlb, walk->vma);
+ return 0;
+}
+
+static void tlb_post_vma(struct mm_walk *walk)
+{
+ struct clear_refs_private *cp = walk->private;
+
+ tlb_end_vma(&cp->tlb, walk->vma);
+}
+
#ifdef CONFIG_MEM_SOFT_DIRTY
#define is_cow_mapping(flags) (((flags) & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE)
@@ -1140,6 +1157,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
/* Clear accessed and referenced bits. */
pmdp_test_and_clear_young(vma, addr, pmd);
test_and_clear_page_young(page);
+ tlb_flush_pmd_range(&cp->tlb, addr, HPAGE_PMD_SIZE);
ClearPageReferenced(page);
out:
spin_unlock(ptl);
@@ -1155,6 +1173,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
clear_soft_dirty(vma, addr, pte);
+ tlb_flush_pte_range(&cp->tlb, addr, PAGE_SIZE);
continue;
}
@@ -1168,6 +1187,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
/* Clear accessed and referenced bits. */
ptep_test_and_clear_young(vma, addr, pte);
test_and_clear_page_young(page);
+ tlb_flush_pte_range(&cp->tlb, addr, PAGE_SIZE);
ClearPageReferenced(page);
}
pte_unmap_unlock(pte - 1, ptl);
@@ -1198,6 +1218,8 @@ static int clear_refs_test_walk(unsigned long start, unsigned long end,
}
static const struct mm_walk_ops clear_refs_walk_ops = {
+ .pre_vma = tlb_pre_vma,
+ .post_vma = tlb_post_vma,
.pmd_entry = clear_refs_pte_range,
.test_walk = clear_refs_test_walk,
};
@@ -1248,6 +1270,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
goto out_unlock;
}
+ tlb_gather_mmu(&cp.tlb, mm);
if (type == CLEAR_REFS_SOFT_DIRTY) {
for (vma = mm->mmap; vma; vma = vma->vm_next) {
if (!(vma->vm_flags & VM_SOFTDIRTY))
@@ -1256,7 +1279,6 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
vma_set_page_prot(vma);
}
- inc_tlb_flush_pending(mm);
mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY,
0, NULL, mm, 0, -1UL);
mmu_notifier_invalidate_range_start(&range);
@@ -1265,10 +1287,9 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
&cp);
if (type == CLEAR_REFS_SOFT_DIRTY) {
mmu_notifier_invalidate_range_end(&range);
- flush_tlb_mm(mm);
- dec_tlb_flush_pending(mm);
}
out_unlock:
+ tlb_finish_mmu(&cp.tlb);
mmap_write_unlock(mm);
out_mm:
mmput(mm);
--
2.25.1
Thread overview: 67+ messages
2021-01-31 0:11 [RFC 00/20] TLB batching consolidation and enhancements Nadav Amit
2021-01-31 0:11 ` [RFC 01/20] mm/tlb: fix fullmm semantics Nadav Amit
2021-01-31 1:02 ` Andy Lutomirski
2021-01-31 1:19 ` Nadav Amit
2021-01-31 2:57 ` Andy Lutomirski
2021-02-01 7:30 ` Nadav Amit
2021-02-01 11:36 ` Peter Zijlstra
2021-02-02 9:32 ` Nadav Amit
2021-02-02 11:00 ` Peter Zijlstra
2021-02-02 21:35 ` Nadav Amit
2021-02-03 9:44 ` Will Deacon
2021-02-04 3:20 ` Nadav Amit
2021-01-31 0:11 ` [RFC 02/20] mm/mprotect: use mmu_gather Nadav Amit
2021-01-31 0:11 ` [RFC 03/20] mm/mprotect: do not flush on permission promotion Nadav Amit
2021-01-31 1:07 ` Andy Lutomirski
2021-01-31 1:17 ` Nadav Amit
2021-01-31 2:59 ` Andy Lutomirski
[not found] ` <7a6de15a-a570-31f2-14d6-a8010296e694@citrix.com>
2021-02-01 5:58 ` Nadav Amit
2021-02-01 15:38 ` Andrew Cooper
2021-01-31 0:11 ` [RFC 04/20] mm/mapping_dirty_helpers: use mmu_gather Nadav Amit
2021-01-31 0:11 ` [RFC 05/20] mm/tlb: move BATCHED_UNMAP_TLB_FLUSH to tlb.h Nadav Amit
2021-01-31 0:11 ` Nadav Amit [this message]
2021-01-31 0:11 ` [RFC 07/20] mm: move x86 tlb_gen to generic code Nadav Amit
2021-01-31 18:26 ` Andy Lutomirski
2021-01-31 0:11 ` [RFC 08/20] mm: store completed TLB generation Nadav Amit
2021-01-31 20:32 ` Andy Lutomirski
2021-02-01 7:28 ` Nadav Amit
2021-02-01 16:53 ` Andy Lutomirski
2021-02-01 11:52 ` Peter Zijlstra
2021-01-31 0:11 ` [RFC 09/20] mm: create pte/pmd_tlb_flush_pending() Nadav Amit
2021-01-31 0:11 ` [RFC 10/20] mm: add pte_to_page() Nadav Amit
2021-01-31 0:11 ` [RFC 11/20] mm/tlb: remove arch-specific tlb_start/end_vma() Nadav Amit
2021-02-01 12:09 ` Peter Zijlstra
2021-02-02 6:41 ` Nicholas Piggin
2021-02-02 7:20 ` Nadav Amit
2021-02-02 9:31 ` Peter Zijlstra
2021-02-02 9:54 ` Nadav Amit
2021-02-02 11:04 ` Peter Zijlstra
2021-01-31 0:11 ` [RFC 12/20] mm/tlb: save the VMA that is flushed during tlb_start_vma() Nadav Amit
2021-02-01 12:28 ` Peter Zijlstra
2021-01-31 0:11 ` [RFC 13/20] mm/tlb: introduce tlb_start_ptes() and tlb_end_ptes() Nadav Amit
2021-01-31 9:57 ` Damian Tometzki
2021-01-31 10:07 ` Damian Tometzki
2021-02-01 7:29 ` Nadav Amit
2021-02-01 13:19 ` Peter Zijlstra
2021-02-01 23:00 ` Nadav Amit
2021-01-31 0:11 ` [RFC 14/20] mm: move inc/dec_tlb_flush_pending() to mmu_gather.c Nadav Amit
2021-01-31 0:11 ` [RFC 15/20] mm: detect deferred TLB flushes in vma granularity Nadav Amit
2021-02-01 22:04 ` Nadav Amit
2021-02-02 0:14 ` Andy Lutomirski
2021-02-02 20:51 ` Nadav Amit
2021-02-04 4:35 ` Andy Lutomirski
2021-01-31 0:11 ` [RFC 16/20] mm/tlb: per-page table generation tracking Nadav Amit
2021-01-31 0:11 ` [RFC 17/20] mm/tlb: updated completed deferred TLB flush conditionally Nadav Amit
2021-01-31 0:11 ` [RFC 18/20] mm: make mm_cpumask() volatile Nadav Amit
2021-01-31 0:11 ` [RFC 19/20] lib/cpumask: introduce cpumask_atomic_or() Nadav Amit
2021-01-31 0:11 ` [RFC 20/20] mm/rmap: avoid potential races Nadav Amit
2021-08-23 8:05 ` Huang, Ying
2021-08-23 15:50 ` Nadav Amit
2021-08-24 0:36 ` Huang, Ying
2021-01-31 0:39 ` [RFC 00/20] TLB batching consolidation and enhancements Andy Lutomirski
2021-01-31 1:08 ` Nadav Amit
2021-01-31 3:30 ` Nicholas Piggin
2021-01-31 7:57 ` Nadav Amit
2021-01-31 8:14 ` Nadav Amit
2021-02-01 12:44 ` Peter Zijlstra
2021-02-02 7:14 ` Nicholas Piggin