* [PATCH v1 0/6] Anon rmap cleanups
@ 2023-09-13 12:51 David Hildenbrand
  2023-09-13 12:51 ` [PATCH v1 1/6] mm/rmap: drop stale comment in page_add_anon_rmap and hugepage_add_anon_rmap() David Hildenbrand
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: David Hildenbrand @ 2023-09-13 12:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song

Some cleanups around rmap handling for anon pages. I'm working on more
cleanups around file rmap as well -- to handle the "compound" parameter
internally only and to let hugetlb use page_add_file_rmap() -- but these
changes make sense on their own.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>

David Hildenbrand (6):
  mm/rmap: drop stale comment in page_add_anon_rmap and
    hugepage_add_anon_rmap()
  mm/rmap: move SetPageAnonExclusive out of __page_set_anon_rmap()
  mm/rmap: move folio_test_anon() check out of __folio_set_anon()
  mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap()
  mm/rmap: simplify PageAnonExclusive sanity checks when adding anon
    rmap
  mm/rmap: pass folio to hugepage_add_anon_rmap()

 include/linux/rmap.h |  2 +-
 mm/migrate.c         |  2 +-
 mm/rmap.c            | 79 +++++++++++++++++++++-----------------------
 3 files changed, 39 insertions(+), 44 deletions(-)

-- 
2.41.0



* [PATCH v1 1/6] mm/rmap: drop stale comment in page_add_anon_rmap and hugepage_add_anon_rmap()
  2023-09-13 12:51 [PATCH v1 0/6] Anon rmap cleanups David Hildenbrand
@ 2023-09-13 12:51 ` David Hildenbrand
  2023-09-13 12:51 ` [PATCH v1 2/6] mm/rmap: move SetPageAnonExclusive out of __page_set_anon_rmap() David Hildenbrand
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: David Hildenbrand @ 2023-09-13 12:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song

That comment was added in commit 5dbe0af47f8a ("mm: fix kernel BUG at
mm/rmap.c:1017!") to document why we can see vma->vm_end getting
adjusted concurrently due to a VMA split.

However, the optimized locking code was changed again in commit bf181b9f9d8
("mm anon rmap: replace same_anon_vma linked list with an interval tree.").

... and later, the comment was changed in commit 0503ea8f5ba7 ("mm/mmap:
remove __vma_adjust()") to talk about "vma_merge" although the original
issue was with VMA splitting.

Let's just remove that comment. Nowadays, it's outdated, imprecise and
confusing.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/rmap.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..ca2787c1fe05 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1245,7 +1245,6 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
 
 	if (likely(!folio_test_ksm(folio))) {
-		/* address might be in next vma when migration races vma_merge */
 		if (first)
 			__page_set_anon_rmap(folio, page, vma, address,
 					     !!(flags & RMAP_EXCLUSIVE));
@@ -2536,7 +2535,6 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 
 	BUG_ON(!folio_test_locked(folio));
 	BUG_ON(!anon_vma);
-	/* address might be in next vma when migration races vma_merge */
 	first = atomic_inc_and_test(&folio->_entire_mapcount);
 	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
 	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
-- 
2.41.0



* [PATCH v1 2/6] mm/rmap: move SetPageAnonExclusive out of __page_set_anon_rmap()
  2023-09-13 12:51 [PATCH v1 0/6] Anon rmap cleanups David Hildenbrand
  2023-09-13 12:51 ` [PATCH v1 1/6] mm/rmap: drop stale comment in page_add_anon_rmap and hugepage_add_anon_rmap() David Hildenbrand
@ 2023-09-13 12:51 ` David Hildenbrand
  2023-09-13 14:32   ` Matthew Wilcox
  2023-09-13 12:51 ` [PATCH v1 3/6] mm/rmap: move folio_test_anon() check out of __folio_set_anon() David Hildenbrand
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: David Hildenbrand @ 2023-09-13 12:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song

Let's handle SetPageAnonExclusive() in the caller; then there is no need
to pass the page into the function anymore. While at it, rename it to
__folio_set_anon() and pass "bool exclusive" instead of "int exclusive".
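
In page_add_anon_rmap(), for example, this results in the following
pattern (simplified sketch; the full changes are in the diff below):

	if (likely(!folio_test_ksm(folio))) {
		if (first)
			__folio_set_anon(folio, vma, address,
					 !!(flags & RMAP_EXCLUSIVE));
		else
			__page_check_anon_rmap(folio, page, vma, address);
	}
	if (flags & RMAP_EXCLUSIVE)
		SetPageAnonExclusive(page);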

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/rmap.c | 41 +++++++++++++++++++++--------------------
 1 file changed, 21 insertions(+), 20 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index ca2787c1fe05..ab16baa0fcb3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1122,27 +1122,25 @@ void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
 }
 
 /**
- * __page_set_anon_rmap - set up new anonymous rmap
- * @folio:	Folio which contains page.
- * @page:	Page to add to rmap.
- * @vma:	VM area to add page to.
+ * __folio_set_anon - set up a new anonymous rmap for a folio
+ * @folio:	The folio to set up the new anonymous rmap for.
+ * @vma:	VM area to add the folio to.
  * @address:	User virtual address of the mapping
- * @exclusive:	the page is exclusively owned by the current process
+ * @exclusive:	Whether the folio is exclusive to the process.
  */
-static void __page_set_anon_rmap(struct folio *folio, struct page *page,
-	struct vm_area_struct *vma, unsigned long address, int exclusive)
+static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
+			     unsigned long address, bool exclusive)
 {
 	struct anon_vma *anon_vma = vma->anon_vma;
 
 	BUG_ON(!anon_vma);
 
 	if (folio_test_anon(folio))
-		goto out;
+		return;
 
 	/*
-	 * If the page isn't exclusively mapped into this vma,
-	 * we must use the _oldest_ possible anon_vma for the
-	 * page mapping!
+	 * If the folio isn't exclusive to this vma, we must use the _oldest_
+	 * possible anon_vma for the folio mapping!
 	 */
 	if (!exclusive)
 		anon_vma = anon_vma->root;
@@ -1156,9 +1154,6 @@ static void __page_set_anon_rmap(struct folio *folio, struct page *page,
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
 	WRITE_ONCE(folio->mapping, (struct address_space *) anon_vma);
 	folio->index = linear_page_index(vma, address);
-out:
-	if (exclusive)
-		SetPageAnonExclusive(page);
 }
 
 /**
@@ -1246,11 +1241,13 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 
 	if (likely(!folio_test_ksm(folio))) {
 		if (first)
-			__page_set_anon_rmap(folio, page, vma, address,
-					     !!(flags & RMAP_EXCLUSIVE));
+			__folio_set_anon(folio, vma, address,
+					 !!(flags & RMAP_EXCLUSIVE));
 		else
 			__page_check_anon_rmap(folio, page, vma, address);
 	}
+	if (flags & RMAP_EXCLUSIVE)
+		SetPageAnonExclusive(page);
 
 	mlock_vma_folio(folio, vma, compound);
 }
@@ -1289,7 +1286,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 	}
 
 	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
-	__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
+	__folio_set_anon(folio, vma, address, true);
+	SetPageAnonExclusive(&folio->page);
 }
 
 /**
@@ -2539,8 +2537,10 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
 	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
 	if (first)
-		__page_set_anon_rmap(folio, page, vma, address,
-				     !!(flags & RMAP_EXCLUSIVE));
+		__folio_set_anon(folio, vma, address,
+				 !!(flags & RMAP_EXCLUSIVE));
+	if (flags & RMAP_EXCLUSIVE)
+		SetPageAnonExclusive(page);
 }
 
 void hugepage_add_new_anon_rmap(struct folio *folio,
@@ -2550,6 +2550,7 @@ void hugepage_add_new_anon_rmap(struct folio *folio,
 	/* increment count (starts at -1) */
 	atomic_set(&folio->_entire_mapcount, 0);
 	folio_clear_hugetlb_restore_reserve(folio);
-	__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
+	__folio_set_anon(folio, vma, address, true);
+	SetPageAnonExclusive(&folio->page);
 }
 #endif /* CONFIG_HUGETLB_PAGE */
-- 
2.41.0



* [PATCH v1 3/6] mm/rmap: move folio_test_anon() check out of __folio_set_anon()
  2023-09-13 12:51 [PATCH v1 0/6] Anon rmap cleanups David Hildenbrand
  2023-09-13 12:51 ` [PATCH v1 1/6] mm/rmap: drop stale comment in page_add_anon_rmap and hugepage_add_anon_rmap() David Hildenbrand
  2023-09-13 12:51 ` [PATCH v1 2/6] mm/rmap: move SetPageAnonExclusive out of __page_set_anon_rmap() David Hildenbrand
@ 2023-09-13 12:51 ` David Hildenbrand
  2023-09-13 12:51 ` [PATCH v1 4/6] mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap() David Hildenbrand
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: David Hildenbrand @ 2023-09-13 12:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song

Let's handle the folio_test_anon() check in the caller; then there is no
need for the "first" check based on the mapcount.

We really only end up with !anon pages in page_add_anon_rmap() via
do_swap_page(), where we hold the folio lock. So races are not possible.
Add a VM_WARN_ON_FOLIO() to make sure that we really hold the folio
lock.

In the future, we might want to let do_swap_page() use
folio_add_new_anon_rmap() on new pages instead; however, we might then
have to pass in whether the folio is exclusive or not. So keep that
handling in there for now.

For hugetlb we never expect to have a non-anon page in
hugepage_add_anon_rmap(). Remove that code, along with some other checks
that are either not required or were checked in
hugepage_add_new_anon_rmap() already.
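
The check in page_add_anon_rmap() then looks roughly as follows
(simplified sketch; the actual change is in the diff below):

	if (unlikely(!folio_test_anon(folio))) {
		/* Only do_swap_page() gets here, with the folio locked. */
		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
		__folio_set_anon(folio, vma, address,
				 !!(flags & RMAP_EXCLUSIVE));
	} else if (likely(!folio_test_ksm(folio))) {
		__page_check_anon_rmap(folio, page, vma, address);
	}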

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/rmap.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index ab16baa0fcb3..1ac5bd1b8169 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1135,9 +1135,6 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
 
 	BUG_ON(!anon_vma);
 
-	if (folio_test_anon(folio))
-		return;
-
 	/*
 	 * If the folio isn't exclusive to this vma, we must use the _oldest_
 	 * possible anon_vma for the folio mapping!
@@ -1239,12 +1236,12 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	if (nr)
 		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
 
-	if (likely(!folio_test_ksm(folio))) {
-		if (first)
-			__folio_set_anon(folio, vma, address,
-					 !!(flags & RMAP_EXCLUSIVE));
-		else
-			__page_check_anon_rmap(folio, page, vma, address);
+	if (unlikely(!folio_test_anon(folio))) {
+		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+		__folio_set_anon(folio, vma, address,
+				 !!(flags & RMAP_EXCLUSIVE));
+	} else if (likely(!folio_test_ksm(folio))) {
+		__page_check_anon_rmap(folio, page, vma, address);
 	}
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
@@ -2528,17 +2525,13 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 			    unsigned long address, rmap_t flags)
 {
 	struct folio *folio = page_folio(page);
-	struct anon_vma *anon_vma = vma->anon_vma;
 	int first;
 
-	BUG_ON(!folio_test_locked(folio));
-	BUG_ON(!anon_vma);
+	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
 	first = atomic_inc_and_test(&folio->_entire_mapcount);
 	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
 	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
-	if (first)
-		__folio_set_anon(folio, vma, address,
-				 !!(flags & RMAP_EXCLUSIVE));
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
 }
-- 
2.41.0



* [PATCH v1 4/6] mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap()
  2023-09-13 12:51 [PATCH v1 0/6] Anon rmap cleanups David Hildenbrand
                   ` (2 preceding siblings ...)
  2023-09-13 12:51 ` [PATCH v1 3/6] mm/rmap: move folio_test_anon() check out of __folio_set_anon() David Hildenbrand
@ 2023-09-13 12:51 ` David Hildenbrand
  2023-09-13 12:51 ` [PATCH v1 5/6] mm/rmap: simplify PageAnonExclusive sanity checks when adding anon rmap David Hildenbrand
  2023-09-13 12:51 ` [PATCH v1 6/6] mm/rmap: pass folio to hugepage_add_anon_rmap() David Hildenbrand
  5 siblings, 0 replies; 10+ messages in thread
From: David Hildenbrand @ 2023-09-13 12:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song

If the swapin code were ever to stop using order-0 pages and supply a
PTE-mapped large folio, we would have to change how we call
__folio_set_anon() -- eventually with exclusive=false and an adjusted
address. For now, let's add a VM_WARN_ON_FOLIO() with a comment about the
situation.
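
Recall that __folio_set_anon() derives folio->index from the address it
is given:

	/* in __folio_set_anon() */
	folio->index = linear_page_index(vma, address);

so for a PTE-mapped large folio we would have to pass the address of the
head page (or adjust the address accordingly).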

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/rmap.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index 1ac5bd1b8169..489c142d073b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1238,6 +1238,13 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 
 	if (unlikely(!folio_test_anon(folio))) {
 		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+		/*
+		 * For a PTE-mapped large folio, we only know that the single
+		 * PTE is exclusive. Further, __folio_set_anon() might not get
+		 * folio->index right when not given the address of the head
+		 * page.
+		 */
+		VM_WARN_ON_FOLIO(folio_test_large(folio) && !compound, folio);
 		__folio_set_anon(folio, vma, address,
 				 !!(flags & RMAP_EXCLUSIVE));
 	} else if (likely(!folio_test_ksm(folio))) {
-- 
2.41.0



* [PATCH v1 5/6] mm/rmap: simplify PageAnonExclusive sanity checks when adding anon rmap
  2023-09-13 12:51 [PATCH v1 0/6] Anon rmap cleanups David Hildenbrand
                   ` (3 preceding siblings ...)
  2023-09-13 12:51 ` [PATCH v1 4/6] mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap() David Hildenbrand
@ 2023-09-13 12:51 ` David Hildenbrand
  2023-09-18  9:59   ` David Hildenbrand
  2023-09-13 12:51 ` [PATCH v1 6/6] mm/rmap: pass folio to hugepage_add_anon_rmap() David Hildenbrand
  5 siblings, 1 reply; 10+ messages in thread
From: David Hildenbrand @ 2023-09-13 12:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song

Let's sanity-check PageAnonExclusive vs. mapcount in page_add_anon_rmap()
and hugepage_add_anon_rmap() after setting PageAnonExclusive simply by
re-reading the mapcounts.

We can stop initializing the "first" variable in page_add_anon_rmap()
and no longer need an atomic_inc_and_test() in hugepage_add_anon_rmap().

While at it, switch to VM_WARN_ON_FOLIO().
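
The resulting checks simply (re-)read the mapcount after possibly setting
the flag, e.g., in page_add_anon_rmap() (simplified sketch; see the diff
below):

	if (flags & RMAP_EXCLUSIVE)
		SetPageAnonExclusive(page);
	VM_WARN_ON_FOLIO(page_mapcount(page) > 1 && PageAnonExclusive(page),
			 folio);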

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/rmap.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 489c142d073b..10d477a0991f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1199,7 +1199,7 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	atomic_t *mapped = &folio->_nr_pages_mapped;
 	int nr = 0, nr_pmdmapped = 0;
 	bool compound = flags & RMAP_COMPOUND;
-	bool first = true;
+	bool first;
 
 	/* Is page being mapped by PTE? Is this its first map to be added? */
 	if (likely(!compound)) {
@@ -1228,9 +1228,6 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 		}
 	}
 
-	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
-	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
-
 	if (nr_pmdmapped)
 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr_pmdmapped);
 	if (nr)
@@ -1252,6 +1249,8 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	}
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
+	VM_WARN_ON_FOLIO(page_mapcount(page) > 1 && PageAnonExclusive(page),
+			 folio);
 
 	mlock_vma_folio(folio, vma, compound);
 }
@@ -2532,15 +2531,14 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 			    unsigned long address, rmap_t flags)
 {
 	struct folio *folio = page_folio(page);
-	int first;
 
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
-	first = atomic_inc_and_test(&folio->_entire_mapcount);
-	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
-	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
+	atomic_inc(&folio->_entire_mapcount);
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
+	VM_WARN_ON_FOLIO(folio_entire_mapcount(folio) > 1 &&
+			 PageAnonExclusive(page), folio);
 }
 
 void hugepage_add_new_anon_rmap(struct folio *folio,
-- 
2.41.0



* [PATCH v1 6/6] mm/rmap: pass folio to hugepage_add_anon_rmap()
  2023-09-13 12:51 [PATCH v1 0/6] Anon rmap cleanups David Hildenbrand
                   ` (4 preceding siblings ...)
  2023-09-13 12:51 ` [PATCH v1 5/6] mm/rmap: simplify PageAnonExclusive sanity checks when adding anon rmap David Hildenbrand
@ 2023-09-13 12:51 ` David Hildenbrand
  5 siblings, 0 replies; 10+ messages in thread
From: David Hildenbrand @ 2023-09-13 12:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Kravetz, Muchun Song

Let's pass a folio; we are always mapping the entire thing.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/rmap.h | 2 +-
 mm/migrate.c         | 2 +-
 mm/rmap.c            | 8 +++-----
 3 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 51cc21ebb568..d22f4d21a11c 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -203,7 +203,7 @@ void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 
-void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
+void hugepage_add_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
 void hugepage_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
diff --git a/mm/migrate.c b/mm/migrate.c
index 123fc4dc0bc4..d4e8cbf0a704 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -247,7 +247,7 @@ static bool remove_migration_pte(struct folio *folio,
 
 			pte = arch_make_huge_pte(pte, shift, vma->vm_flags);
 			if (folio_test_anon(folio))
-				hugepage_add_anon_rmap(new, vma, pvmw.address,
+				hugepage_add_anon_rmap(folio, vma, pvmw.address,
 						       rmap_flags);
 			else
 				page_dup_file_rmap(new, true);
diff --git a/mm/rmap.c b/mm/rmap.c
index 10d477a0991f..789a2beb8b3a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2527,18 +2527,16 @@ void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc)
  *
  * RMAP_COMPOUND is ignored.
  */
-void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
+void hugepage_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 			    unsigned long address, rmap_t flags)
 {
-	struct folio *folio = page_folio(page);
-
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
 	atomic_inc(&folio->_entire_mapcount);
 	if (flags & RMAP_EXCLUSIVE)
-		SetPageAnonExclusive(page);
+		SetPageAnonExclusive(&folio->page);
 	VM_WARN_ON_FOLIO(folio_entire_mapcount(folio) > 1 &&
-			 PageAnonExclusive(page), folio);
+			 PageAnonExclusive(&folio->page), folio);
 }
 
 void hugepage_add_new_anon_rmap(struct folio *folio,
-- 
2.41.0



* Re: [PATCH v1 2/6] mm/rmap: move SetPageAnonExclusive out of __page_set_anon_rmap()
  2023-09-13 12:51 ` [PATCH v1 2/6] mm/rmap: move SetPageAnonExclusive out of __page_set_anon_rmap() David Hildenbrand
@ 2023-09-13 14:32   ` Matthew Wilcox
  2023-09-13 14:34     ` David Hildenbrand
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2023-09-13 14:32 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, Andrew Morton, Mike Kravetz, Muchun Song

On Wed, Sep 13, 2023 at 02:51:09PM +0200, David Hildenbrand wrote:
> @@ -1246,11 +1241,13 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>  
>  	if (likely(!folio_test_ksm(folio))) {
>  		if (first)
> -			__page_set_anon_rmap(folio, page, vma, address,
> -					     !!(flags & RMAP_EXCLUSIVE));
> +			__folio_set_anon(folio, vma, address,
> +					 !!(flags & RMAP_EXCLUSIVE));
>  		else
>  			__page_check_anon_rmap(folio, page, vma, address);
>  	}
> +	if (flags & RMAP_EXCLUSIVE)
> +		SetPageAnonExclusive(page);

Won't we end up setting AnonExclusive on ksm pages, or do we make sure
to never pass RMAP_EXCLUSIVE for ksm pages?

Maybe better to move these last two lines inside the previous test,
just to avoid the question.


* Re: [PATCH v1 2/6] mm/rmap: move SetPageAnonExclusive out of __page_set_anon_rmap()
  2023-09-13 14:32   ` Matthew Wilcox
@ 2023-09-13 14:34     ` David Hildenbrand
  0 siblings, 0 replies; 10+ messages in thread
From: David Hildenbrand @ 2023-09-13 14:34 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-kernel, linux-mm, Andrew Morton, Mike Kravetz, Muchun Song

On 13.09.23 16:32, Matthew Wilcox wrote:
> On Wed, Sep 13, 2023 at 02:51:09PM +0200, David Hildenbrand wrote:
>> @@ -1246,11 +1241,13 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>>   
>>   	if (likely(!folio_test_ksm(folio))) {
>>   		if (first)
>> -			__page_set_anon_rmap(folio, page, vma, address,
>> -					     !!(flags & RMAP_EXCLUSIVE));
>> +			__folio_set_anon(folio, vma, address,
>> +					 !!(flags & RMAP_EXCLUSIVE));
>>   		else
>>   			__page_check_anon_rmap(folio, page, vma, address);
>>   	}
>> +	if (flags & RMAP_EXCLUSIVE)
>> +		SetPageAnonExclusive(page);
> 
> Won't we end up setting AnonExclusive on ksm pages, or do we make sure
> to never pass RMAP_EXCLUSIVE for ksm pages?

Only if there is a bug and someone passes RMAP_EXCLUSIVE for these. :)

Fortunately, we do have

VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);

in SetPageAnonExclusive() to catch such bugs.
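
For context, SetPageAnonExclusive() currently looks roughly like this
(include/linux/page-flags.h):

static __always_inline void SetPageAnonExclusive(struct page *page)
{
	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
	set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
}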

> 
> Maybe better to move these last two lines inside the previous test,
> just to avoid the question.

That could end up hiding another BUG, so I'd rather let 
SetPageAnonExclusive() catch it.

-- 
Cheers,

David / dhildenb



* Re: [PATCH v1 5/6] mm/rmap: simplify PageAnonExclusive sanity checks when adding anon rmap
  2023-09-13 12:51 ` [PATCH v1 5/6] mm/rmap: simplify PageAnonExclusive sanity checks when adding anon rmap David Hildenbrand
@ 2023-09-18  9:59   ` David Hildenbrand
  0 siblings, 0 replies; 10+ messages in thread
From: David Hildenbrand @ 2023-09-18  9:59 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, Andrew Morton, Mike Kravetz, Muchun Song

On 13.09.23 14:51, David Hildenbrand wrote:
> Let's sanity-check PageAnonExclusive vs. mapcount in page_add_anon_rmap()
> and hugepage_add_anon_rmap() after setting PageAnonExclusive simply by
> re-reading the mapcounts.
> 
> We can stop initializing the "first" variable in page_add_anon_rmap()
> and no longer need an atomic_inc_and_test() in hugepage_add_anon_rmap().
> 
> While at it, switch to VM_WARN_ON_FOLIO().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---

Hi Andrew, the following fixup on top:

 From 4b945ca15817b491123cc922f022b253134075b7 Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Mon, 18 Sep 2023 11:16:57 +0200
Subject: [PATCH] fixup: mm/rmap: simplify PageAnonExclusive sanity checks when
  adding anon rmap

While PTE-mapping a THP, we temporarily have an exclusive page of the THP
mapped by both a PMD and a PTE at the same time. Update our check to
allow for that combination.

Reported-by: syzbot+6e4f59235036c3c2e296@syzkaller.appspotmail.com
Signed-off-by: David Hildenbrand <david@redhat.com>
---
  mm/rmap.c | 6 ++++--
  1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 789a2beb8b3a..f13a2927163d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1249,8 +1249,10 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
  	}
  	if (flags & RMAP_EXCLUSIVE)
  		SetPageAnonExclusive(page);
-	VM_WARN_ON_FOLIO(page_mapcount(page) > 1 && PageAnonExclusive(page),
-			 folio);
+	/* While PTE-mapping a THP we have a PMD and a PTE mapping. */
+	VM_WARN_ON_FOLIO((atomic_read(&page->_mapcount) > 0 ||
+			  (folio_test_large(folio) && folio_entire_mapcount(folio) > 1)) &&
+			 PageAnonExclusive(page), folio);
  
  	mlock_vma_folio(folio, vma, compound);
  }
-- 
2.41.0


-- 
Cheers,

David / dhildenb



end of thread

Thread overview: 10+ messages
2023-09-13 12:51 [PATCH v1 0/6] Anon rmap cleanups David Hildenbrand
2023-09-13 12:51 ` [PATCH v1 1/6] mm/rmap: drop stale comment in page_add_anon_rmap and hugepage_add_anon_rmap() David Hildenbrand
2023-09-13 12:51 ` [PATCH v1 2/6] mm/rmap: move SetPageAnonExclusive out of __page_set_anon_rmap() David Hildenbrand
2023-09-13 14:32   ` Matthew Wilcox
2023-09-13 14:34     ` David Hildenbrand
2023-09-13 12:51 ` [PATCH v1 3/6] mm/rmap: move folio_test_anon() check out of __folio_set_anon() David Hildenbrand
2023-09-13 12:51 ` [PATCH v1 4/6] mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap() David Hildenbrand
2023-09-13 12:51 ` [PATCH v1 5/6] mm/rmap: simplify PageAnonExclusive sanity checks when adding anon rmap David Hildenbrand
2023-09-18  9:59   ` David Hildenbrand
2023-09-13 12:51 ` [PATCH v1 6/6] mm/rmap: pass folio to hugepage_add_anon_rmap() David Hildenbrand
