From: Hugh Dickins <hughd@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>, Vlastimil Babka <vbabka@suse.cz>,
"Kirill A. Shutemov" <kirill@shutemov.name>,
Matthew Wilcox <willy@infradead.org>,
David Hildenbrand <david@redhat.com>,
Alistair Popple <apopple@nvidia.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Rik van Riel <riel@surriel.com>,
Suren Baghdasaryan <surenb@google.com>,
Yu Zhao <yuzhao@google.com>, Greg Thelen <gthelen@google.com>,
Shakeel Butt <shakeelb@google.com>,
Hillf Danton <hdanton@sina.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 07/13] mm/munlock: mlock_pte_range() when mlocking or munlocking
Date: Mon, 14 Feb 2022 18:31:48 -0800 (PST)
Message-ID: <d39f6e4d-aa4f-731a-68ee-e77cdbf1d7bb@google.com>
In-Reply-To: <55a49083-37f9-3766-1de9-9feea7428ac@google.com>

Fill in missing pieces: reimplementation of munlock_vma_pages_range(),
required to lower the mlock_counts when munlocking without munmapping;
and its complement, implementation of mlock_vma_pages_range(), required
to raise the mlock_counts on pages already there when a range is mlocked.
Combine them into just the one function mlock_vma_pages_range(), using
walk_page_range() to run mlock_pte_range(). This approach fixes the
"Very slow unlockall()" of unpopulated PROT_NONE areas, reported in
https://lore.kernel.org/linux-mm/70885d37-62b7-748b-29df-9e94f3291736@gmail.com/
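
For reference, a rough userspace sketch (not part of this patch, and not
taken verbatim from that report) of the kind of workload involved: a large
PROT_NONE mapping that is never populated, mlocked and then munlocked.
The 64GB size and the MCL_ONFAULT flag are illustrative assumptions only;
it needs CAP_IPC_LOCK or a big enough RLIMIT_MEMLOCK to run.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
	/* Large PROT_NONE area, never touched: no page tables built for it */
	size_t size = 1UL << 36;		/* 64GB, illustrative, 64-bit only */
	void *p = mmap(NULL, size, PROT_NONE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	if (mlockall(MCL_CURRENT | MCL_ONFAULT)) {	/* sets VM_LOCKED on the vma */
		perror("mlockall");
		return 1;
	}

	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	munlockall();		/* previously walked the whole range page by page */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("munlockall took %.3fs\n", (t1.tv_sec - t0.tv_sec) +
	       (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}
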
Munlock clears VM_LOCKED at the start, under exclusive mmap_lock; but if
a racing truncate or holepunch (depending on i_mmap_rwsem) gets to the
pte first, it will not try to munlock the page: leaving release_pages()
to correct it when the last reference to the page is gone - that's okay,
a page is not evictable anyway while it is held by an extra reference.

Mlock sets VM_LOCKED at the start, under exclusive mmap_lock; but if
a racing remove_migration_pte() or try_to_unmap_one() (depending on
i_mmap_rwsem) gets to the pte first, it will mlock the page, and then
mlock_pte_range() will mlock it a second time. This is harder to
reproduce, but it is a more serious race because it could leave the page
unevictable indefinitely, even though the area is munlocked afterwards.
Guard against it by setting the (inappropriate) VM_IO flag for the
duration of the walk, and modifying mlock_vma_page() to decline such vmas.

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
v2: Delete reintroduced VM_LOCKED_CLEAR_MASKing here, per Vlastimil.
Use oldflags instead of vma->vm_flags in mlock_fixup(), per Vlastimil.
Make clearer comment on mmap_lock and WRITE_ONCE, per Hillf.

 mm/internal.h |   3 +-
 mm/mlock.c    | 111 ++++++++++++++++++++++++++++++++++++++++----------
 2 files changed, 91 insertions(+), 23 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index a43d79335c16..b3f0dd3ffba2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -412,7 +412,8 @@ void mlock_page(struct page *page);
static inline void mlock_vma_page(struct page *page,
struct vm_area_struct *vma, bool compound)
{
- if (unlikely(vma->vm_flags & VM_LOCKED) &&
+ /* VM_IO check prevents migration from double-counting during mlock */
+ if (unlikely((vma->vm_flags & (VM_LOCKED|VM_IO)) == VM_LOCKED) &&
(compound || !PageTransCompound(page)))
mlock_page(page);
}
diff --git a/mm/mlock.c b/mm/mlock.c
index f8a3a54687dd..581ea8bf1b83 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -14,6 +14,7 @@
#include <linux/swapops.h>
#include <linux/pagemap.h>
#include <linux/pagevec.h>
+#include <linux/pagewalk.h>
#include <linux/mempolicy.h>
#include <linux/syscalls.h>
#include <linux/sched.h>
@@ -127,25 +128,91 @@ void munlock_page(struct page *page)
unlock_page_memcg(page);
}
+static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
+ unsigned long end, struct mm_walk *walk)
+
+{
+ struct vm_area_struct *vma = walk->vma;
+ spinlock_t *ptl;
+ pte_t *start_pte, *pte;
+ struct page *page;
+
+ ptl = pmd_trans_huge_lock(pmd, vma);
+ if (ptl) {
+ if (!pmd_present(*pmd))
+ goto out;
+ if (is_huge_zero_pmd(*pmd))
+ goto out;
+ page = pmd_page(*pmd);
+ if (vma->vm_flags & VM_LOCKED)
+ mlock_page(page);
+ else
+ munlock_page(page);
+ goto out;
+ }
+
+ start_pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+ for (pte = start_pte; addr != end; pte++, addr += PAGE_SIZE) {
+ if (!pte_present(*pte))
+ continue;
+ page = vm_normal_page(vma, addr, *pte);
+ if (!page)
+ continue;
+ if (PageTransCompound(page))
+ continue;
+ if (vma->vm_flags & VM_LOCKED)
+ mlock_page(page);
+ else
+ munlock_page(page);
+ }
+ pte_unmap(start_pte);
+out:
+ spin_unlock(ptl);
+ cond_resched();
+ return 0;
+}
+
/*
- * munlock_vma_pages_range() - munlock all pages in the vma range.'
- * @vma - vma containing range to be munlock()ed.
+ * mlock_vma_pages_range() - mlock any pages already in the range,
+ * or munlock all pages in the range.
+ * @vma - vma containing range to be mlock()ed or munlock()ed
* @start - start address in @vma of the range
- * @end - end of range in @vma.
- *
- * For mremap(), munmap() and exit().
+ * @end - end of range in @vma
+ * @newflags - the new set of flags for @vma.
*
- * Called with @vma VM_LOCKED.
- *
- * Returns with VM_LOCKED cleared. Callers must be prepared to
- * deal with this.
+ * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED;
+ * called for munlock() and munlockall(), to clear VM_LOCKED from @vma.
*/
-static void munlock_vma_pages_range(struct vm_area_struct *vma,
- unsigned long start, unsigned long end)
+static void mlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end, vm_flags_t newflags)
{
- vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+ static const struct mm_walk_ops mlock_walk_ops = {
+ .pmd_entry = mlock_pte_range,
+ };
- /* Reimplementation to follow in later commit */
+ /*
+ * There is a slight chance that concurrent page migration,
+ * or page reclaim finding a page of this now-VM_LOCKED vma,
+ * will call mlock_vma_page() and raise page's mlock_count:
+ * double counting, leaving the page unevictable indefinitely.
+ * Communicate this danger to mlock_vma_page() with VM_IO,
+ * which is a VM_SPECIAL flag not allowed on VM_LOCKED vmas.
+ * mmap_lock is held in write mode here, so this weird
+ * combination should not be visible to other mmap_lock users;
+ * but WRITE_ONCE so rmap walkers must see VM_IO if VM_LOCKED.
+ */
+ if (newflags & VM_LOCKED)
+ newflags |= VM_IO;
+ WRITE_ONCE(vma->vm_flags, newflags);
+
+ lru_add_drain();
+ walk_page_range(vma->vm_mm, start, end, &mlock_walk_ops, NULL);
+ lru_add_drain();
+
+ if (newflags & VM_IO) {
+ newflags &= ~VM_IO;
+ WRITE_ONCE(vma->vm_flags, newflags);
+ }
}
/*
@@ -164,10 +231,9 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
pgoff_t pgoff;
int nr_pages;
int ret = 0;
- int lock = !!(newflags & VM_LOCKED);
- vm_flags_t old_flags = vma->vm_flags;
+ vm_flags_t oldflags = vma->vm_flags;
- if (newflags == vma->vm_flags || (vma->vm_flags & VM_SPECIAL) ||
+ if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
vma_is_dax(vma) || vma_is_secretmem(vma))
/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
@@ -199,9 +265,9 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
* Keep track of amount of locked VM.
*/
nr_pages = (end - start) >> PAGE_SHIFT;
- if (!lock)
+ if (!(newflags & VM_LOCKED))
nr_pages = -nr_pages;
- else if (old_flags & VM_LOCKED)
+ else if (oldflags & VM_LOCKED)
nr_pages = 0;
mm->locked_vm += nr_pages;
@@ -211,11 +277,12 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
* set VM_LOCKED, populate_vma_page_range will bring it back.
*/
- if (lock)
+ if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) {
+ /* No work to do, and mlocking twice would be wrong */
vma->vm_flags = newflags;
- else
- munlock_vma_pages_range(vma, start, end);
-
+ } else {
+ mlock_vma_pages_range(vma, start, end, newflags);
+ }
out:
*prev = vma;
return ret;
--
2.34.1