* [PATCH v3 0/5] update mlock to use folios
@ 2022-12-26  8:44 Lorenzo Stoakes
  2022-12-26  8:44 ` [PATCH v3 1/5] mm: pagevec: add folio_batch_reinit() Lorenzo Stoakes
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Lorenzo Stoakes @ 2022-12-26  8:44 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
	William Kucharski, Christian Brauner, Jonathan Corbet,
	Mike Rapoport, Joel Fernandes, Geert Uytterhoeven,
	Lorenzo Stoakes

This series updates mlock to use folios, converting the internal interface
to use folios exclusively and exposing the folio interface externally.

As a product of this, the series moves to using a folio batch rather than a
pagevec for mlock folios, bringing it in line with the core folio batches
contained in mm/swap.c.

Lorenzo Stoakes (5):
  mm: pagevec: add folio_batch_reinit()
  mm: mlock: use folios and a folio batch internally
  m68k/mm/motorola: specify pmd_page() type
  mm: mlock: update the interface to use folios
  Documentation/mm: Update references to __m[un]lock_page() to *_folio()

 Documentation/mm/unevictable-lru.rst     |  12 +-
 arch/m68k/include/asm/motorola_pgtable.h |   2 +-
 include/linux/pagevec.h                  |   5 +
 mm/internal.h                            |  26 ++-
 mm/mlock.c                               | 266 +++++++++++------------
 mm/swap.c                                |   2 +-
 6 files changed, 162 insertions(+), 151 deletions(-)

--
2.39.0


* [PATCH v3 1/5] mm: pagevec: add folio_batch_reinit()
  2022-12-26  8:44 [PATCH v3 0/5] update mlock to use folios Lorenzo Stoakes
@ 2022-12-26  8:44 ` Lorenzo Stoakes
  2023-01-12  9:57   ` Vlastimil Babka
  2022-12-26  8:44 ` [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally Lorenzo Stoakes
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Lorenzo Stoakes @ 2022-12-26  8:44 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
	William Kucharski, Christian Brauner, Jonathan Corbet,
	Mike Rapoport, Joel Fernandes, Geert Uytterhoeven,
	Lorenzo Stoakes

This performs the same task as pagevec_reinit(), only modifying a folio
batch rather than a pagevec.
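
For context, the only intended difference from folio_batch_init() is that the
reinit variant resets the count while leaving percpu_pvec_drained untouched;
roughly, side by side (an abridged sketch, see the hunk below for the actual
addition):

static inline void folio_batch_init(struct folio_batch *fbatch)
{
	fbatch->nr = 0;				/* empty the batch */
	fbatch->percpu_pvec_drained = false;	/* and reset drain state */
}

static inline void folio_batch_reinit(struct folio_batch *fbatch)
{
	fbatch->nr = 0;				/* empty the batch only */
}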

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 include/linux/pagevec.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 215eb6c3bdc9..2a6f61a0c10a 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -103,6 +103,11 @@ static inline void folio_batch_init(struct folio_batch *fbatch)
 	fbatch->percpu_pvec_drained = false;
 }
 
+static inline void folio_batch_reinit(struct folio_batch *fbatch)
+{
+	fbatch->nr = 0;
+}
+
 static inline unsigned int folio_batch_count(struct folio_batch *fbatch)
 {
 	return fbatch->nr;
-- 
2.39.0



* [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally
  2022-12-26  8:44 [PATCH v3 0/5] update mlock to use folios Lorenzo Stoakes
  2022-12-26  8:44 ` [PATCH v3 1/5] mm: pagevec: add folio_batch_reinit() Lorenzo Stoakes
@ 2022-12-26  8:44 ` Lorenzo Stoakes
  2023-01-12 10:31   ` Vlastimil Babka
  2022-12-26  8:44 ` [PATCH v3 3/5] m68k/mm/motorola: specify pmd_page() type Lorenzo Stoakes
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Lorenzo Stoakes @ 2022-12-26  8:44 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
	William Kucharski, Christian Brauner, Jonathan Corbet,
	Mike Rapoport, Joel Fernandes, Geert Uytterhoeven,
	Lorenzo Stoakes

This brings mlock in line with the folio batches declared in mm/swap.c and
makes the code more consistent across the two.

The existing mechanism for identifying which operation each folio in the
batch is undergoing is maintained, i.e. using the lower 2 bits of the
struct folio address (previously struct page address). This should continue
to function correctly as folios remain at least system word-aligned.
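
As a standalone illustration of the idea (a userspace toy, not kernel code;
the names here are made up, the real helpers are mlock_lru(), mlock_new() and
mlock_folio_batch() in the diff below):

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define LRU_TAG 0x1u
#define NEW_TAG 0x2u

struct folio { long dummy; };	/* stand-in: real folios are word-aligned too */

static struct folio *tag(struct folio *folio, unsigned int op)
{
	/* The low two bits of an aligned pointer are free to carry the op. */
	return (struct folio *)((uintptr_t)folio + op);
}

static struct folio *untag(struct folio *tagged, unsigned int *op)
{
	*op = (uintptr_t)tagged & (LRU_TAG | NEW_TAG);
	return (struct folio *)((uintptr_t)tagged - *op);
}

int main(void)
{
	struct folio *folio = malloc(sizeof(*folio));
	unsigned int op;

	/* The operation round-trips and the original pointer is recovered. */
	assert(untag(tag(folio, NEW_TAG), &op) == folio && op == NEW_TAG);
	free(folio);
	return 0;
}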

All invocations of mlock() pass either a non-compound page or the head of a
THP-compound page, and no tail pages need updating, so this functionality
works with struct folios being used internally rather than struct pages.

In this patch the external interface is kept identical to before in order
to maintain separation between patches in the series, using a rather
awkward conversion from struct page to struct folio in relevant functions.

However, this maintenance of the existing interface is intended to be
temporary - the next patch in the series will update the interfaces to
accept folios directly.

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 mm/mlock.c | 238 +++++++++++++++++++++++++++--------------------------
 1 file changed, 120 insertions(+), 118 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 7032f6dd0ce1..e9ba47fe67ed 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -28,12 +28,12 @@
 
 #include "internal.h"
 
-struct mlock_pvec {
+struct mlock_fbatch {
 	local_lock_t lock;
-	struct pagevec vec;
+	struct folio_batch fbatch;
 };
 
-static DEFINE_PER_CPU(struct mlock_pvec, mlock_pvec) = {
+static DEFINE_PER_CPU(struct mlock_fbatch, mlock_fbatch) = {
 	.lock = INIT_LOCAL_LOCK(lock),
 };
 
@@ -48,192 +48,192 @@ bool can_do_mlock(void)
 EXPORT_SYMBOL(can_do_mlock);
 
 /*
- * Mlocked pages are marked with PageMlocked() flag for efficient testing
+ * Mlocked folios are marked with the PG_mlocked flag for efficient testing
  * in vmscan and, possibly, the fault path; and to support semi-accurate
  * statistics.
  *
- * An mlocked page [PageMlocked(page)] is unevictable.  As such, it will
- * be placed on the LRU "unevictable" list, rather than the [in]active lists.
- * The unevictable list is an LRU sibling list to the [in]active lists.
- * PageUnevictable is set to indicate the unevictable state.
+ * An mlocked folio [folio_test_mlocked(folio)] is unevictable.  As such, it
+ * will be ostensibly placed on the LRU "unevictable" list (actually no such
+ * list exists), rather than the [in]active lists. PG_unevictable is set to
+ * indicate the unevictable state.
  */
 
-static struct lruvec *__mlock_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__mlock_folio(struct folio *folio, struct lruvec *lruvec)
 {
 	/* There is nothing more we can do while it's off LRU */
-	if (!TestClearPageLRU(page))
+	if (!folio_test_clear_lru(folio))
 		return lruvec;
 
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
-	if (unlikely(page_evictable(page))) {
+	if (unlikely(folio_evictable(folio))) {
 		/*
-		 * This is a little surprising, but quite possible:
-		 * PageMlocked must have got cleared already by another CPU.
-		 * Could this page be on the Unevictable LRU?  I'm not sure,
-		 * but move it now if so.
+		 * This is a little surprising, but quite possible: PG_mlocked
+		 * must have got cleared already by another CPU.  Could this
+		 * folio be unevictable?  I'm not sure, but move it now if so.
 		 */
-		if (PageUnevictable(page)) {
-			del_page_from_lru_list(page, lruvec);
-			ClearPageUnevictable(page);
-			add_page_to_lru_list(page, lruvec);
+		if (folio_test_unevictable(folio)) {
+			lruvec_del_folio(lruvec, folio);
+			folio_clear_unevictable(folio);
+			lruvec_add_folio(lruvec, folio);
+
 			__count_vm_events(UNEVICTABLE_PGRESCUED,
-					  thp_nr_pages(page));
+					  folio_nr_pages(folio));
 		}
 		goto out;
 	}
 
-	if (PageUnevictable(page)) {
-		if (PageMlocked(page))
-			page->mlock_count++;
+	if (folio_test_unevictable(folio)) {
+		if (folio_test_mlocked(folio))
+			folio->mlock_count++;
 		goto out;
 	}
 
-	del_page_from_lru_list(page, lruvec);
-	ClearPageActive(page);
-	SetPageUnevictable(page);
-	page->mlock_count = !!PageMlocked(page);
-	add_page_to_lru_list(page, lruvec);
-	__count_vm_events(UNEVICTABLE_PGCULLED, thp_nr_pages(page));
+	lruvec_del_folio(lruvec, folio);
+	folio_clear_active(folio);
+	folio_set_unevictable(folio);
+	folio->mlock_count = !!folio_test_mlocked(folio);
+	lruvec_add_folio(lruvec, folio);
+	__count_vm_events(UNEVICTABLE_PGCULLED, folio_nr_pages(folio));
 out:
-	SetPageLRU(page);
+	folio_set_lru(folio);
 	return lruvec;
 }
 
-static struct lruvec *__mlock_new_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__mlock_new_folio(struct folio *folio, struct lruvec *lruvec)
 {
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
 	/* As above, this is a little surprising, but possible */
-	if (unlikely(page_evictable(page)))
+	if (unlikely(folio_evictable(folio)))
 		goto out;
 
-	SetPageUnevictable(page);
-	page->mlock_count = !!PageMlocked(page);
-	__count_vm_events(UNEVICTABLE_PGCULLED, thp_nr_pages(page));
+	folio_set_unevictable(folio);
+	folio->mlock_count = !!folio_test_mlocked(folio);
+	__count_vm_events(UNEVICTABLE_PGCULLED, folio_nr_pages(folio));
 out:
-	add_page_to_lru_list(page, lruvec);
-	SetPageLRU(page);
+	lruvec_add_folio(lruvec, folio);
+	folio_set_lru(folio);
 	return lruvec;
 }
 
-static struct lruvec *__munlock_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__munlock_folio(struct folio *folio, struct lruvec *lruvec)
 {
-	int nr_pages = thp_nr_pages(page);
+	int nr_pages = folio_nr_pages(folio);
 	bool isolated = false;
 
-	if (!TestClearPageLRU(page))
+	if (!folio_test_clear_lru(folio))
 		goto munlock;
 
 	isolated = true;
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
-	if (PageUnevictable(page)) {
+	if (folio_test_unevictable(folio)) {
 		/* Then mlock_count is maintained, but might undercount */
-		if (page->mlock_count)
-			page->mlock_count--;
-		if (page->mlock_count)
+		if (folio->mlock_count)
+			folio->mlock_count--;
+		if (folio->mlock_count)
 			goto out;
 	}
 	/* else assume that was the last mlock: reclaim will fix it if not */
 
 munlock:
-	if (TestClearPageMlocked(page)) {
-		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
-		if (isolated || !PageUnevictable(page))
+	if (folio_test_clear_mlocked(folio)) {
+		zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
+		if (isolated || !folio_test_unevictable(folio))
 			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
 		else
 			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
 	}
 
-	/* page_evictable() has to be checked *after* clearing Mlocked */
-	if (isolated && PageUnevictable(page) && page_evictable(page)) {
-		del_page_from_lru_list(page, lruvec);
-		ClearPageUnevictable(page);
-		add_page_to_lru_list(page, lruvec);
+	/* folio_evictable() has to be checked *after* clearing Mlocked */
+	if (isolated && folio_test_unevictable(folio) && folio_evictable(folio)) {
+		lruvec_del_folio(lruvec, folio);
+		folio_clear_unevictable(folio);
+		lruvec_add_folio(lruvec, folio);
 		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	}
 out:
 	if (isolated)
-		SetPageLRU(page);
+		folio_set_lru(folio);
 	return lruvec;
 }
 
 /*
- * Flags held in the low bits of a struct page pointer on the mlock_pvec.
+ * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
  */
 #define LRU_PAGE 0x1
 #define NEW_PAGE 0x2
-static inline struct page *mlock_lru(struct page *page)
+static inline struct folio *mlock_lru(struct folio *folio)
 {
-	return (struct page *)((unsigned long)page + LRU_PAGE);
+	return (struct folio *)((unsigned long)folio + LRU_PAGE);
 }
 
-static inline struct page *mlock_new(struct page *page)
+static inline struct folio *mlock_new(struct folio *folio)
 {
-	return (struct page *)((unsigned long)page + NEW_PAGE);
+	return (struct folio *)((unsigned long)folio + NEW_PAGE);
 }
 
 /*
- * mlock_pagevec() is derived from pagevec_lru_move_fn():
- * perhaps that can make use of such page pointer flags in future,
- * but for now just keep it for mlock.  We could use three separate
- * pagevecs instead, but one feels better (munlocking a full pagevec
- * does not need to drain mlocking pagevecs first).
+ * mlock_folio_batch() is derived from folio_batch_move_lru(): perhaps that can
+ * make use of such page pointer flags in future, but for now just keep it for
+ * mlock.  We could use three separate folio batches instead, but one feels
+ * better (munlocking a full folio batch does not need to drain mlocking folio
+ * batches first).
  */
-static void mlock_pagevec(struct pagevec *pvec)
+static void mlock_folio_batch(struct folio_batch *fbatch)
 {
 	struct lruvec *lruvec = NULL;
 	unsigned long mlock;
-	struct page *page;
+	struct folio *folio;
 	int i;
 
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		page = pvec->pages[i];
-		mlock = (unsigned long)page & (LRU_PAGE | NEW_PAGE);
-		page = (struct page *)((unsigned long)page - mlock);
-		pvec->pages[i] = page;
+	for (i = 0; i < folio_batch_count(fbatch); i++) {
+		folio = fbatch->folios[i];
+		mlock = (unsigned long)folio & (LRU_PAGE | NEW_PAGE);
+		folio = (struct folio *)((unsigned long)folio - mlock);
+		fbatch->folios[i] = folio;
 
 		if (mlock & LRU_PAGE)
-			lruvec = __mlock_page(page, lruvec);
+			lruvec = __mlock_folio(folio, lruvec);
 		else if (mlock & NEW_PAGE)
-			lruvec = __mlock_new_page(page, lruvec);
+			lruvec = __mlock_new_folio(folio, lruvec);
 		else
-			lruvec = __munlock_page(page, lruvec);
+			lruvec = __munlock_folio(folio, lruvec);
 	}
 
 	if (lruvec)
 		unlock_page_lruvec_irq(lruvec);
-	release_pages(pvec->pages, pvec->nr);
-	pagevec_reinit(pvec);
+	release_pages(fbatch->folios, fbatch->nr);
+	folio_batch_reinit(fbatch);
 }
 
 void mlock_page_drain_local(void)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
-	if (pagevec_count(pvec))
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
+	if (folio_batch_count(fbatch))
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 void mlock_page_drain_remote(int cpu)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
 	WARN_ON_ONCE(cpu_online(cpu));
-	pvec = &per_cpu(mlock_pvec.vec, cpu);
-	if (pagevec_count(pvec))
-		mlock_pagevec(pvec);
+	fbatch = &per_cpu(mlock_fbatch.fbatch, cpu);
+	if (folio_batch_count(fbatch))
+		mlock_folio_batch(fbatch);
 }
 
 bool need_mlock_page_drain(int cpu)
 {
-	return pagevec_count(&per_cpu(mlock_pvec.vec, cpu));
+	return folio_batch_count(&per_cpu(mlock_fbatch.fbatch, cpu));
 }
 
 /**
@@ -242,10 +242,10 @@ bool need_mlock_page_drain(int cpu)
  */
 void mlock_folio(struct folio *folio)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
 
 	if (!folio_test_set_mlocked(folio)) {
 		int nr_pages = folio_nr_pages(folio);
@@ -255,10 +255,10 @@ void mlock_folio(struct folio *folio)
 	}
 
 	folio_get(folio);
-	if (!pagevec_add(pvec, mlock_lru(&folio->page)) ||
+	if (!folio_batch_add(fbatch, mlock_lru(folio)) ||
 	    folio_test_large(folio) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 /**
@@ -267,20 +267,22 @@ void mlock_folio(struct folio *folio)
  */
 void mlock_new_page(struct page *page)
 {
-	struct pagevec *pvec;
-	int nr_pages = thp_nr_pages(page);
+	struct folio_batch *fbatch;
+	struct folio *folio = page_folio(page);
+	int nr_pages = folio_nr_pages(folio);
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
-	SetPageMlocked(page);
-	mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
+	folio_set_mlocked(folio);
+
+	zone_stat_mod_folio(folio, NR_MLOCK, nr_pages);
 	__count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
 
-	get_page(page);
-	if (!pagevec_add(pvec, mlock_new(page)) ||
-	    PageHead(page) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	folio_get(folio);
+	if (!folio_batch_add(fbatch, mlock_new(folio)) ||
+	    folio_test_large(folio) || lru_cache_disabled())
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 /**
@@ -289,20 +291,20 @@ void mlock_new_page(struct page *page)
  */
 void munlock_page(struct page *page)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
+	struct folio *folio = page_folio(page);
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
 	/*
-	 * TestClearPageMlocked(page) must be left to __munlock_page(),
-	 * which will check whether the page is multiply mlocked.
+	 * folio_test_clear_mlocked(folio) must be left to __munlock_folio(),
+	 * which will check whether the folio is multiply mlocked.
 	 */
-
-	get_page(page);
-	if (!pagevec_add(pvec, page) ||
-	    PageHead(page) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	folio_get(folio);
+	if (!folio_batch_add(fbatch, folio) ||
+	    folio_test_large(folio) || lru_cache_disabled())
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
-- 
2.39.0



* [PATCH v3 3/5] m68k/mm/motorola: specify pmd_page() type
  2022-12-26  8:44 [PATCH v3 0/5] update mlock to use folios Lorenzo Stoakes
  2022-12-26  8:44 ` [PATCH v3 1/5] mm: pagevec: add folio_batch_reinit() Lorenzo Stoakes
  2022-12-26  8:44 ` [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally Lorenzo Stoakes
@ 2022-12-26  8:44 ` Lorenzo Stoakes
  2022-12-27  9:38   ` Geert Uytterhoeven
  2023-01-12 10:38   ` Vlastimil Babka
  2022-12-26  8:44 ` [PATCH v3 4/5] mm: mlock: update the interface to use folios Lorenzo Stoakes
  2022-12-26  8:44 ` [PATCH v3 5/5] Documentation/mm: Update references to __m[un]lock_page() to *_folio() Lorenzo Stoakes
  4 siblings, 2 replies; 14+ messages in thread
From: Lorenzo Stoakes @ 2022-12-26  8:44 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
	William Kucharski, Christian Brauner, Jonathan Corbet,
	Mike Rapoport, Joel Fernandes, Geert Uytterhoeven,
	Lorenzo Stoakes

Failing to specify a type here breaks anything that relies on the type
being explicitly known, such as page_folio().

Make explicit the type of the null pointer returned here.
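
To illustrate why (a toy sketch only; the real page_folio() is more involved,
but it likewise selects on the static type of its argument, so a bare NULL
gives it nothing to match):

#include <stddef.h>

struct page  { int dummy; };
struct folio { int dummy; };

/* Toy stand-in for the real macro, keeping only the type dispatch. */
#define page_folio(p)	_Generic((p), struct page *: (struct folio *)(void *)(p))

#define pmd_page_untyped(pmd)	NULL			/* old: no struct page * type */
#define pmd_page_typed(pmd)	((struct page *)NULL)	/* new: type is explicit */

int main(void)
{
	struct folio *folio = page_folio(pmd_page_typed(0));	/* compiles */
	/*
	 * page_folio(pmd_page_untyped(0)) would fail to compile: NULL has no
	 * struct page * association in the _Generic() selection above.
	 */
	return folio != NULL;
}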

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 arch/m68k/include/asm/motorola_pgtable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
index 7ac3d64c6b33..562b54e09850 100644
--- a/arch/m68k/include/asm/motorola_pgtable.h
+++ b/arch/m68k/include/asm/motorola_pgtable.h
@@ -124,7 +124,7 @@ static inline void pud_set(pud_t *pudp, pmd_t *pmdp)
  * expects pmd_page() to exists, only to then DCE it all. Provide a dummy to
  * make the compiler happy.
  */
-#define pmd_page(pmd)		NULL
+#define pmd_page(pmd)		((struct page *)NULL)
 
 
 #define pud_none(pud)		(!pud_val(pud))
-- 
2.39.0



* [PATCH v3 4/5] mm: mlock: update the interface to use folios
  2022-12-26  8:44 [PATCH v3 0/5] update mlock to use folios Lorenzo Stoakes
                   ` (2 preceding siblings ...)
  2022-12-26  8:44 ` [PATCH v3 3/5] m68k/mm/motorola: specify pmd_page() type Lorenzo Stoakes
@ 2022-12-26  8:44 ` Lorenzo Stoakes
  2023-01-12 10:55   ` Vlastimil Babka
  2022-12-26  8:44 ` [PATCH v3 5/5] Documentation/mm: Update references to __m[un]lock_page() to *_folio() Lorenzo Stoakes
  4 siblings, 1 reply; 14+ messages in thread
From: Lorenzo Stoakes @ 2022-12-26  8:44 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
	William Kucharski, Christian Brauner, Jonathan Corbet,
	Mike Rapoport, Joel Fernandes, Geert Uytterhoeven,
	Lorenzo Stoakes

This patch updates the mlock interface to accept folios rather than pages,
bringing the interface in line with the internal implementation.

munlock_vma_page() still requires a page_folio() conversion; however, this
is consistent with the existing mlock_vma_page() implementation and a
product of rmap still dealing in pages rather than folios.

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 mm/internal.h | 26 ++++++++++++++++----------
 mm/mlock.c    | 32 +++++++++++++++-----------------
 mm/swap.c     |  2 +-
 3 files changed, 32 insertions(+), 28 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 1d6f4e168510..8a6e83315369 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -515,10 +515,9 @@ extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
  * should be called with vma's mmap_lock held for read or write,
  * under page table lock for the pte/pmd being added or removed.
  *
- * mlock is usually called at the end of page_add_*_rmap(),
- * munlock at the end of page_remove_rmap(); but new anon
- * pages are managed by lru_cache_add_inactive_or_unevictable()
- * calling mlock_new_page().
+ * mlock is usually called at the end of page_add_*_rmap(), munlock at
+ * the end of page_remove_rmap(); but new anon folios are managed by
+ * folio_add_lru_vma() calling mlock_new_folio().
  *
  * @compound is used to include pmd mappings of THPs, but filter out
  * pte mappings of THPs, which cannot be consistently counted: a pte
@@ -547,15 +546,22 @@ static inline void mlock_vma_page(struct page *page,
 	mlock_vma_folio(page_folio(page), vma, compound);
 }
 
-void munlock_page(struct page *page);
-static inline void munlock_vma_page(struct page *page,
+void munlock_folio(struct folio *folio);
+
+static inline void munlock_vma_folio(struct folio *folio,
 			struct vm_area_struct *vma, bool compound)
 {
 	if (unlikely(vma->vm_flags & VM_LOCKED) &&
-	    (compound || !PageTransCompound(page)))
-		munlock_page(page);
+	    (compound || !folio_test_large(folio)))
+		munlock_folio(folio);
+}
+
+static inline void munlock_vma_page(struct page *page,
+			struct vm_area_struct *vma, bool compound)
+{
+	munlock_vma_folio(page_folio(page), vma, compound);
 }
-void mlock_new_page(struct page *page);
+void mlock_new_folio(struct folio *folio);
 bool need_mlock_page_drain(int cpu);
 void mlock_page_drain_local(void);
 void mlock_page_drain_remote(int cpu);
@@ -647,7 +653,7 @@ static inline void mlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
 static inline void munlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
-static inline void mlock_new_page(struct page *page) { }
+static inline void mlock_new_folio(struct folio *folio) { }
 static inline bool need_mlock_page_drain(int cpu) { return false; }
 static inline void mlock_page_drain_local(void) { }
 static inline void mlock_page_drain_remote(int cpu) { }
diff --git a/mm/mlock.c b/mm/mlock.c
index e9ba47fe67ed..3982ef4d1632 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -262,13 +262,12 @@ void mlock_folio(struct folio *folio)
 }
 
 /**
- * mlock_new_page - mlock a newly allocated page not yet on LRU
- * @page: page to be mlocked, either a normal page or a THP head.
+ * mlock_new_folio - mlock a newly allocated folio not yet on LRU
+ * @folio: folio to be mlocked, either normal or a THP head.
  */
-void mlock_new_page(struct page *page)
+void mlock_new_folio(struct folio *folio)
 {
 	struct folio_batch *fbatch;
-	struct folio *folio = page_folio(page);
 	int nr_pages = folio_nr_pages(folio);
 
 	local_lock(&mlock_fbatch.lock);
@@ -286,13 +285,12 @@ void mlock_new_page(struct page *page)
 }
 
 /**
- * munlock_page - munlock a page
- * @page: page to be munlocked, either a normal page or a THP head.
+ * munlock_folio - munlock a folio
+ * @folio: folio to be munlocked, either normal or a THP head.
  */
-void munlock_page(struct page *page)
+void munlock_folio(struct folio *folio)
 {
 	struct folio_batch *fbatch;
-	struct folio *folio = page_folio(page);
 
 	local_lock(&mlock_fbatch.lock);
 	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
@@ -314,7 +312,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	struct vm_area_struct *vma = walk->vma;
 	spinlock_t *ptl;
 	pte_t *start_pte, *pte;
-	struct page *page;
+	struct folio *folio;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -322,11 +320,11 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 			goto out;
 		if (is_huge_zero_pmd(*pmd))
 			goto out;
-		page = pmd_page(*pmd);
+		folio = page_folio(pmd_page(*pmd));
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_folio(page_folio(page));
+			mlock_folio(folio);
 		else
-			munlock_page(page);
+			munlock_folio(folio);
 		goto out;
 	}
 
@@ -334,15 +332,15 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	for (pte = start_pte; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (!page || is_zone_device_page(page))
+		folio = vm_normal_folio(vma, addr, *pte);
+		if (!folio || folio_is_zone_device(folio))
 			continue;
-		if (PageTransCompound(page))
+		if (folio_test_large(folio))
 			continue;
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_folio(page_folio(page));
+			mlock_folio(folio);
 		else
-			munlock_page(page);
+			munlock_folio(folio);
 	}
 	pte_unmap(start_pte);
 out:
diff --git a/mm/swap.c b/mm/swap.c
index e54e2a252e27..7df297b143f9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -562,7 +562,7 @@ void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma)
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
 	if (unlikely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED))
-		mlock_new_page(&folio->page);
+		mlock_new_folio(folio);
 	else
 		folio_add_lru(folio);
 }
-- 
2.39.0



* [PATCH v3 5/5] Documentation/mm: Update references to __m[un]lock_page() to *_folio()
  2022-12-26  8:44 [PATCH v3 0/5] update mlock to use folios Lorenzo Stoakes
                   ` (3 preceding siblings ...)
  2022-12-26  8:44 ` [PATCH v3 4/5] mm: mlock: update the interface to use folios Lorenzo Stoakes
@ 2022-12-26  8:44 ` Lorenzo Stoakes
  2023-01-12 11:04   ` Vlastimil Babka
  4 siblings, 1 reply; 14+ messages in thread
From: Lorenzo Stoakes @ 2022-12-26  8:44 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
	William Kucharski, Christian Brauner, Jonathan Corbet,
	Mike Rapoport, Joel Fernandes, Geert Uytterhoeven,
	Lorenzo Stoakes

We now pass folios to these functions, so update the documentation
accordingly.

Additionally, correct the outdated reference to __pagevec_lru_add_fn(); the
referenced action now occurs directly in __munlock_folio().

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 Documentation/mm/unevictable-lru.rst | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 4a0e158aa9ce..153629e0c100 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -308,22 +308,22 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
 fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
 
 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
-calls mlock_vma_page(), which calls mlock_page() when the VMA is VM_LOCKED
+calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
 (unless it is a PTE mapping of a part of a transparent huge page).  Or when
 it is a newly allocated anonymous page, lru_cache_add_inactive_or_unevictable()
-calls mlock_new_page() instead: similar to mlock_page(), but can make better
+calls mlock_new_folio() instead: similar to mlock_folio(), but can make better
 judgments, since this page is held exclusively and known not to be on LRU yet.
 
-mlock_page() sets PageMlocked immediately, then places the page on the CPU's
-mlock pagevec, to batch up the rest of the work to be done under lru_lock by
-__mlock_page().  __mlock_page() sets PageUnevictable, initializes mlock_count
+mlock_folio() sets PageMlocked immediately, then places the page on the CPU's
+mlock folio batch, to batch up the rest of the work to be done under lru_lock by
+__mlock_folio().  __mlock_folio() sets PageUnevictable, initializes mlock_count
 and moves the page to unevictable state ("the unevictable LRU", but with
 mlock_count in place of LRU threading).  Or if the page was already PageLRU
 and PageUnevictable and PageMlocked, it simply increments the mlock_count.
 
 But in practice that may not work ideally: the page may not yet be on an LRU, or
 it may have been temporarily isolated from LRU.  In such cases the mlock_count
-field cannot be touched, but will be set to 0 later when __pagevec_lru_add_fn()
+field cannot be touched, but will be set to 0 later when __munlock_folio()
 returns the page to "LRU".  Races prohibit mlock_count from being set to 1 then:
 rather than risk stranding a page indefinitely as unevictable, always err with
 mlock_count on the low side, so that when munlocked the page will be rescued to
-- 
2.39.0



* Re: [PATCH v3 3/5] m68k/mm/motorola: specify pmd_page() type
  2022-12-26  8:44 ` [PATCH v3 3/5] m68k/mm/motorola: specify pmd_page() type Lorenzo Stoakes
@ 2022-12-27  9:38   ` Geert Uytterhoeven
  2023-01-12 10:38   ` Vlastimil Babka
  1 sibling, 0 replies; 14+ messages in thread
From: Geert Uytterhoeven @ 2022-12-27  9:38 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: linux-mm, Andrew Morton, linux-kernel, Matthew Wilcox,
	Hugh Dickins, Vlastimil Babka, Liam Howlett, William Kucharski,
	Christian Brauner, Jonathan Corbet, Mike Rapoport,
	Joel Fernandes

On Mon, Dec 26, 2022 at 9:45 AM Lorenzo Stoakes <lstoakes@gmail.com> wrote:
> Failing to specify a specific type here breaks anything that relies on the type
> being explicitly known, such as page_folio().
>
> Make explicit the type of null pointer returned here.
>
> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>

Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH v3 1/5] mm: pagevec: add folio_batch_reinit()
  2022-12-26  8:44 ` [PATCH v3 1/5] mm: pagevec: add folio_batch_reinit() Lorenzo Stoakes
@ 2023-01-12  9:57   ` Vlastimil Babka
  0 siblings, 0 replies; 14+ messages in thread
From: Vlastimil Babka @ 2023-01-12  9:57 UTC (permalink / raw)
  To: Lorenzo Stoakes, linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Liam Howlett, William Kucharski,
	Christian Brauner, Jonathan Corbet, Mike Rapoport,
	Joel Fernandes, Geert Uytterhoeven

On 12/26/22 09:44, Lorenzo Stoakes wrote:
> This performs the same task as pagevec_reinit(), only modifying a folio
> batch rather than a pagevec.
> 
> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  include/linux/pagevec.h | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
> index 215eb6c3bdc9..2a6f61a0c10a 100644
> --- a/include/linux/pagevec.h
> +++ b/include/linux/pagevec.h
> @@ -103,6 +103,11 @@ static inline void folio_batch_init(struct folio_batch *fbatch)
>  	fbatch->percpu_pvec_drained = false;
>  }
>  
> +static inline void folio_batch_reinit(struct folio_batch *fbatch)
> +{
> +	fbatch->nr = 0;
> +}
> +
>  static inline unsigned int folio_batch_count(struct folio_batch *fbatch)
>  {
>  	return fbatch->nr;



* Re: [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally
  2022-12-26  8:44 ` [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally Lorenzo Stoakes
@ 2023-01-12 10:31   ` Vlastimil Babka
  2023-01-12 11:27     ` Lorenzo Stoakes
  0 siblings, 1 reply; 14+ messages in thread
From: Vlastimil Babka @ 2023-01-12 10:31 UTC (permalink / raw)
  To: Lorenzo Stoakes, linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Liam Howlett, William Kucharski,
	Christian Brauner, Jonathan Corbet, Mike Rapoport,
	Joel Fernandes, Geert Uytterhoeven

On 12/26/22 09:44, Lorenzo Stoakes wrote:
> This brings mlock in line with the folio batches declared in mm/swap.c and
> makes the code more consistent across the two.
> 
> The existing mechanism for identifying which operation each folio in the
> batch is undergoing is maintained, i.e. using the lower 2 bits of the
> struct folio address (previously struct page address). This should continue
> to function correctly as folios remain at least system word-aligned.
> 
> All invocations of mlock() pass either a non-compound page or the head of a
> THP-compound page and no tail pages need updating so this functionality
> works with struct folios being used internally rather than struct pages.
> 
> In this patch the external interface is kept identical to before in order
> to maintain separation between patches in the series, using a rather
> awkward conversion from struct page to struct folio in relevant functions.
> 
> However, this maintenance of the existing interface is intended to be
> temporary - the next patch in the series will update the interfaces to
> accept folios directly.
> 
> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

with some nits:

> -static struct lruvec *__munlock_page(struct page *page, struct lruvec *lruvec)
> +static struct lruvec *__munlock_folio(struct folio *folio, struct lruvec *lruvec)
>  {
> -	int nr_pages = thp_nr_pages(page);
> +	int nr_pages = folio_nr_pages(folio);
>  	bool isolated = false;
>  
> -	if (!TestClearPageLRU(page))
> +	if (!folio_test_clear_lru(folio))
>  		goto munlock;
>  
>  	isolated = true;
> -	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
> +	lruvec = folio_lruvec_relock_irq(folio, lruvec);
>  
> -	if (PageUnevictable(page)) {
> +	if (folio_test_unevictable(folio)) {
>  		/* Then mlock_count is maintained, but might undercount */
> -		if (page->mlock_count)
> -			page->mlock_count--;
> -		if (page->mlock_count)
> +		if (folio->mlock_count)
> +			folio->mlock_count--;
> +		if (folio->mlock_count)
>  			goto out;
>  	}
>  	/* else assume that was the last mlock: reclaim will fix it if not */
>  
>  munlock:
> -	if (TestClearPageMlocked(page)) {
> -		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> -		if (isolated || !PageUnevictable(page))
> +	if (folio_test_clear_mlocked(folio)) {
> +		zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);

AFAIK the 1:1 replacement would be __zone_stat_mod_folio(); this is stronger,
thus not causing a bug, but unnecessary?

> +		if (isolated || !folio_test_unevictable(folio))
>  			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
>  		else
>  			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
>  	}
>  
> -	/* page_evictable() has to be checked *after* clearing Mlocked */
> -	if (isolated && PageUnevictable(page) && page_evictable(page)) {
> -		del_page_from_lru_list(page, lruvec);
> -		ClearPageUnevictable(page);
> -		add_page_to_lru_list(page, lruvec);
> +	/* folio_evictable() has to be checked *after* clearing Mlocked */
> +	if (isolated && folio_test_unevictable(folio) && folio_evictable(folio)) {
> +		lruvec_del_folio(lruvec, folio);
> +		folio_clear_unevictable(folio);
> +		lruvec_add_folio(lruvec, folio);
>  		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
>  	}
>  out:
>  	if (isolated)
> -		SetPageLRU(page);
> +		folio_set_lru(folio);
>  	return lruvec;
>  }
>  
>  /*
> - * Flags held in the low bits of a struct page pointer on the mlock_pvec.
> + * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
>   */
>  #define LRU_PAGE 0x1
>  #define NEW_PAGE 0x2

Should it be X_FOLIO now?

> -static inline struct page *mlock_lru(struct page *page)
> +static inline struct folio *mlock_lru(struct folio *folio)
>  {
> -	return (struct page *)((unsigned long)page + LRU_PAGE);
> +	return (struct folio *)((unsigned long)folio + LRU_PAGE);
>  }
>  
> -static inline struct page *mlock_new(struct page *page)
> +static inline struct folio *mlock_new(struct folio *folio)
>  {
> -	return (struct page *)((unsigned long)page + NEW_PAGE);
> +	return (struct folio *)((unsigned long)folio + NEW_PAGE);
>  }
>  
>  /*
> - * mlock_pagevec() is derived from pagevec_lru_move_fn():
> - * perhaps that can make use of such page pointer flags in future,
> - * but for now just keep it for mlock.  We could use three separate
> - * pagevecs instead, but one feels better (munlocking a full pagevec
> - * does not need to drain mlocking pagevecs first).
> + * mlock_folio_batch() is derived from folio_batch_move_lru(): perhaps that can
> + * make use of such page pointer flags in future, but for now just keep it for

		       ^ folio?	

> + * mlock.  We could use three separate folio batches instead, but one feels
> + * better (munlocking a full folio batch does not need to drain mlocking folio
> + * batches first).
>   */
> -static void mlock_pagevec(struct pagevec *pvec)
> +static void mlock_folio_batch(struct folio_batch *fbatch)



* Re: [PATCH v3 3/5] m68k/mm/motorola: specify pmd_page() type
  2022-12-26  8:44 ` [PATCH v3 3/5] m68k/mm/motorola: specify pmd_page() type Lorenzo Stoakes
  2022-12-27  9:38   ` Geert Uytterhoeven
@ 2023-01-12 10:38   ` Vlastimil Babka
  1 sibling, 0 replies; 14+ messages in thread
From: Vlastimil Babka @ 2023-01-12 10:38 UTC (permalink / raw)
  To: Lorenzo Stoakes, linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Liam Howlett, William Kucharski,
	Christian Brauner, Jonathan Corbet, Mike Rapoport,
	Joel Fernandes, Geert Uytterhoeven

On 12/26/22 09:44, Lorenzo Stoakes wrote:
> Failing to specify a specific type here breaks anything that relies on the type
> being explicitly known, such as page_folio().
> 
> Make explicit the type of null pointer returned here.
> 
> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  arch/m68k/include/asm/motorola_pgtable.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
> index 7ac3d64c6b33..562b54e09850 100644
> --- a/arch/m68k/include/asm/motorola_pgtable.h
> +++ b/arch/m68k/include/asm/motorola_pgtable.h
> @@ -124,7 +124,7 @@ static inline void pud_set(pud_t *pudp, pmd_t *pmdp)
>   * expects pmd_page() to exists, only to then DCE it all. Provide a dummy to
>   * make the compiler happy.
>   */
> -#define pmd_page(pmd)		NULL
> +#define pmd_page(pmd)		((struct page *)NULL)
>  
>  
>  #define pud_none(pud)		(!pud_val(pud))



* Re: [PATCH v3 4/5] mm: mlock: update the interface to use folios
  2022-12-26  8:44 ` [PATCH v3 4/5] mm: mlock: update the interface to use folios Lorenzo Stoakes
@ 2023-01-12 10:55   ` Vlastimil Babka
  2023-01-12 12:01     ` Lorenzo Stoakes
  0 siblings, 1 reply; 14+ messages in thread
From: Vlastimil Babka @ 2023-01-12 10:55 UTC (permalink / raw)
  To: Lorenzo Stoakes, linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Liam Howlett, William Kucharski,
	Christian Brauner, Jonathan Corbet, Mike Rapoport,
	Joel Fernandes, Geert Uytterhoeven

On 12/26/22 09:44, Lorenzo Stoakes wrote:
> This patch updates the mlock interface to accept folios rather than pages,
> bringing the interface in line with the internal implementation.
> 
> munlock_vma_page() still requires a page_folio() conversion, however this
> is consistent with the existent mlock_vma_page() implementation and a
> product of rmap still dealing in pages rather than folios.
> 
> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

With some suggestion:

> ---
>  mm/internal.h | 26 ++++++++++++++++----------
>  mm/mlock.c    | 32 +++++++++++++++-----------------
>  mm/swap.c     |  2 +-
>  3 files changed, 32 insertions(+), 28 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 1d6f4e168510..8a6e83315369 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -515,10 +515,9 @@ extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
>   * should be called with vma's mmap_lock held for read or write,
>   * under page table lock for the pte/pmd being added or removed.
>   *
> - * mlock is usually called at the end of page_add_*_rmap(),
> - * munlock at the end of page_remove_rmap(); but new anon
> - * pages are managed by lru_cache_add_inactive_or_unevictable()
> - * calling mlock_new_page().
> + * mlock is usually called at the end of page_add_*_rmap(), munlock at
> + * the end of page_remove_rmap(); but new anon folios are managed by
> + * folio_add_lru_vma() calling mlock_new_folio().
>   *
>   * @compound is used to include pmd mappings of THPs, but filter out
>   * pte mappings of THPs, which cannot be consistently counted: a pte
> @@ -547,15 +546,22 @@ static inline void mlock_vma_page(struct page *page,
>  	mlock_vma_folio(page_folio(page), vma, compound);
>  }
>  
> -void munlock_page(struct page *page);
> -static inline void munlock_vma_page(struct page *page,
> +void munlock_folio(struct folio *folio);
> +
> +static inline void munlock_vma_folio(struct folio *folio,
>  			struct vm_area_struct *vma, bool compound)
>  {
>  	if (unlikely(vma->vm_flags & VM_LOCKED) &&
> -	    (compound || !PageTransCompound(page)))
> -		munlock_page(page);
> +	    (compound || !folio_test_large(folio)))
> +		munlock_folio(folio);
> +}
> +
> +static inline void munlock_vma_page(struct page *page,
> +			struct vm_area_struct *vma, bool compound)
> +{
> +	munlock_vma_folio(page_folio(page), vma, compound);
>  }
> -void mlock_new_page(struct page *page);
> +void mlock_new_folio(struct folio *folio);
>  bool need_mlock_page_drain(int cpu);
>  void mlock_page_drain_local(void);
>  void mlock_page_drain_remote(int cpu);

I think these drain related functions could use a rename as well?
Maybe replace "page" with "fbatch" or "folio_batch"? Even the old name isn't
great, should have been "pagevec".
But maybe it would fit patch 2/5 rather than 4/5 as it's logically internal
even if in a .h file.




* Re: [PATCH v3 5/5] Documentation/mm: Update references to __m[un]lock_page() to *_folio()
  2022-12-26  8:44 ` [PATCH v3 5/5] Documentation/mm: Update references to __m[un]lock_page() to *_folio() Lorenzo Stoakes
@ 2023-01-12 11:04   ` Vlastimil Babka
  0 siblings, 0 replies; 14+ messages in thread
From: Vlastimil Babka @ 2023-01-12 11:04 UTC (permalink / raw)
  To: Lorenzo Stoakes, linux-mm, Andrew Morton, linux-kernel
  Cc: Matthew Wilcox, Hugh Dickins, Liam Howlett, William Kucharski,
	Christian Brauner, Jonathan Corbet, Mike Rapoport,
	Joel Fernandes, Geert Uytterhoeven

On 12/26/22 09:44, Lorenzo Stoakes wrote:
> We now pass folios to these functions, so update the documentation
> accordingly.
> 
> Additionally, correct the outdated reference to __pagevec_lru_add_fn(), the
> referenced action occurs in __munlock_folio() directly now.
> 
> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

With:

> ---
>  Documentation/mm/unevictable-lru.rst | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
> index 4a0e158aa9ce..153629e0c100 100644
> --- a/Documentation/mm/unevictable-lru.rst
> +++ b/Documentation/mm/unevictable-lru.rst
> @@ -308,22 +308,22 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
>  fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
>  
>  For each PTE (or PMD) being faulted into a VMA, the page add rmap function
> -calls mlock_vma_page(), which calls mlock_page() when the VMA is VM_LOCKED
> +calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
>  (unless it is a PTE mapping of a part of a transparent huge page).  Or when
>  it is a newly allocated anonymous page, lru_cache_add_inactive_or_unevictable()

Think it would be more appropriate now:    ^ folio_add_lru_vma()

> -calls mlock_new_page() instead: similar to mlock_page(), but can make better
> +calls mlock_new_folio() instead: similar to mlock_folio(), but can make better
>  judgments, since this page is held exclusively and known not to be on LRU yet.
>  
> -mlock_page() sets PageMlocked immediately, then places the page on the CPU's

		     PG_mlocked?

> -mlock pagevec, to batch up the rest of the work to be done under lru_lock by
> -__mlock_page().  __mlock_page() sets PageUnevictable, initializes mlock_count

					PG_unevictable

ditto below

> +mlock_folio() sets PageMlocked immediately, then places the page on the CPU's
> +mlock folio batch, to batch up the rest of the work to be done under lru_lock by
> +__mlock_folio().  __mlock_folio() sets PageUnevictable, initializes mlock_count
>  and moves the page to unevictable state ("the unevictable LRU", but with
>  mlock_count in place of LRU threading).  Or if the page was already PageLRU
>  and PageUnevictable and PageMlocked, it simply increments the mlock_count.
>  
>  But in practice that may not work ideally: the page may not yet be on an LRU, or
>  it may have been temporarily isolated from LRU.  In such cases the mlock_count
> -field cannot be touched, but will be set to 0 later when __pagevec_lru_add_fn()
> +field cannot be touched, but will be set to 0 later when __munlock_folio()
>  returns the page to "LRU".  Races prohibit mlock_count from being set to 1 then:
>  rather than risk stranding a page indefinitely as unevictable, always err with
>  mlock_count on the low side, so that when munlocked the page will be rescued to



* Re: [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally
  2023-01-12 10:31   ` Vlastimil Babka
@ 2023-01-12 11:27     ` Lorenzo Stoakes
  0 siblings, 0 replies; 14+ messages in thread
From: Lorenzo Stoakes @ 2023-01-12 11:27 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: linux-mm, Andrew Morton, linux-kernel, Matthew Wilcox,
	Hugh Dickins, Liam Howlett, William Kucharski, Christian Brauner,
	Jonathan Corbet, Mike Rapoport, Joel Fernandes,
	Geert Uytterhoeven

On Thu, Jan 12, 2023 at 11:31:49AM +0100, Vlastimil Babka wrote:
> On 12/26/22 09:44, Lorenzo Stoakes wrote:
> > This brings mlock in line with the folio batches declared in mm/swap.c and
> > makes the code more consistent across the two.
> >
> > The existing mechanism for identifying which operation each folio in the
> > batch is undergoing is maintained, i.e. using the lower 2 bits of the
> > struct folio address (previously struct page address). This should continue
> > to function correctly as folios remain at least system word-aligned.
> >
> > All invocations of mlock() pass either a non-compound page or the head of a
> > THP-compound page and no tail pages need updating so this functionality
> > works with struct folios being used internally rather than struct pages.
> >
> > In this patch the external interface is kept identical to before in order
> > to maintain separation between patches in the series, using a rather
> > awkward conversion from struct page to struct folio in relevant functions.
> >
> > However, this maintenance of the existing interface is intended to be
> > temporary - the next patch in the series will update the interfaces to
> > accept folios directly.
> >
> > Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> with some nits:
>
> > -static struct lruvec *__munlock_page(struct page *page, struct lruvec *lruvec)
> > +static struct lruvec *__munlock_folio(struct folio *folio, struct lruvec *lruvec)
> >  {
> > -	int nr_pages = thp_nr_pages(page);
> > +	int nr_pages = folio_nr_pages(folio);
> >  	bool isolated = false;
> >
> > -	if (!TestClearPageLRU(page))
> > +	if (!folio_test_clear_lru(folio))
> >  		goto munlock;
> >
> >  	isolated = true;
> > -	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
> > +	lruvec = folio_lruvec_relock_irq(folio, lruvec);
> >
> > -	if (PageUnevictable(page)) {
> > +	if (folio_test_unevictable(folio)) {
> >  		/* Then mlock_count is maintained, but might undercount */
> > -		if (page->mlock_count)
> > -			page->mlock_count--;
> > -		if (page->mlock_count)
> > +		if (folio->mlock_count)
> > +			folio->mlock_count--;
> > +		if (folio->mlock_count)
> >  			goto out;
> >  	}
> >  	/* else assume that was the last mlock: reclaim will fix it if not */
> >
> >  munlock:
> > -	if (TestClearPageMlocked(page)) {
> > -		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> > -		if (isolated || !PageUnevictable(page))
> > +	if (folio_test_clear_mlocked(folio)) {
> > +		zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
>
> AFAIK the 1:1 replacement would be __zone_stat_mod_folio(), this is stronger
> thus not causing a bug, but unneccessary?

I used this rather than __zone_stat_mod_folio() as this is what mlock_folio()
does and I wanted to maintain consistency with that function.

However, given we were previously using the weaker page version of this function,
I agree that we should do the same with the folio, will change!

>
> > +		if (isolated || !folio_test_unevictable(folio))
> >  			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
> >  		else
> >  			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
> >  	}
> >
> > -	/* page_evictable() has to be checked *after* clearing Mlocked */
> > -	if (isolated && PageUnevictable(page) && page_evictable(page)) {
> > -		del_page_from_lru_list(page, lruvec);
> > -		ClearPageUnevictable(page);
> > -		add_page_to_lru_list(page, lruvec);
> > +	/* folio_evictable() has to be checked *after* clearing Mlocked */
> > +	if (isolated && folio_test_unevictable(folio) && folio_evictable(folio)) {
> > +		lruvec_del_folio(lruvec, folio);
> > +		folio_clear_unevictable(folio);
> > +		lruvec_add_folio(lruvec, folio);
> >  		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
> >  	}
> >  out:
> >  	if (isolated)
> > -		SetPageLRU(page);
> > +		folio_set_lru(folio);
> >  	return lruvec;
> >  }
> >
> >  /*
> > - * Flags held in the low bits of a struct page pointer on the mlock_pvec.
> > + * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
> >   */
> >  #define LRU_PAGE 0x1
> >  #define NEW_PAGE 0x2
>
> Should it be X_FOLIO now?
>
> > -static inline struct page *mlock_lru(struct page *page)
> > +static inline struct folio *mlock_lru(struct folio *folio)
> >  {
> > -	return (struct page *)((unsigned long)page + LRU_PAGE);
> > +	return (struct folio *)((unsigned long)folio + LRU_PAGE);
> >  }
> >
> > -static inline struct page *mlock_new(struct page *page)
> > +static inline struct folio *mlock_new(struct folio *folio)
> >  {
> > -	return (struct page *)((unsigned long)page + NEW_PAGE);
> > +	return (struct folio *)((unsigned long)folio + NEW_PAGE);
> >  }
> >
> >  /*
> > - * mlock_pagevec() is derived from pagevec_lru_move_fn():
> > - * perhaps that can make use of such page pointer flags in future,
> > - * but for now just keep it for mlock.  We could use three separate
> > - * pagevecs instead, but one feels better (munlocking a full pagevec
> > - * does not need to drain mlocking pagevecs first).
> > + * mlock_folio_batch() is derived from folio_batch_move_lru(): perhaps that can
> > + * make use of such page pointer flags in future, but for now just keep it for
>
> 		       ^ folio?
>
> > + * mlock.  We could use three separate folio batches instead, but one feels
> > + * better (munlocking a full folio batch does not need to drain mlocking folio
> > + * batches first).
> >   */
> > -static void mlock_pagevec(struct pagevec *pvec)
> > +static void mlock_folio_batch(struct folio_batch *fbatch)
>

Ack on all remaining comments also, will spin a v4, thanks for the review!


* Re: [PATCH v3 4/5] mm: mlock: update the interface to use folios
  2023-01-12 10:55   ` Vlastimil Babka
@ 2023-01-12 12:01     ` Lorenzo Stoakes
  0 siblings, 0 replies; 14+ messages in thread
From: Lorenzo Stoakes @ 2023-01-12 12:01 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: linux-mm, Andrew Morton, linux-kernel, Matthew Wilcox,
	Hugh Dickins, Liam Howlett, William Kucharski, Christian Brauner,
	Jonathan Corbet, Mike Rapoport, Joel Fernandes,
	Geert Uytterhoeven

On Thu, Jan 12, 2023 at 11:55:13AM +0100, Vlastimil Babka wrote:
> On 12/26/22 09:44, Lorenzo Stoakes wrote:
> > This patch updates the mlock interface to accept folios rather than pages,
> > bringing the interface in line with the internal implementation.
> >
> > munlock_vma_page() still requires a page_folio() conversion, however this
> > is consistent with the existent mlock_vma_page() implementation and a
> > product of rmap still dealing in pages rather than folios.
> >
> > Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> With some suggestion:
>
> > ---
> >  mm/internal.h | 26 ++++++++++++++++----------
> >  mm/mlock.c    | 32 +++++++++++++++-----------------
> >  mm/swap.c     |  2 +-
> >  3 files changed, 32 insertions(+), 28 deletions(-)
> >
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 1d6f4e168510..8a6e83315369 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -515,10 +515,9 @@ extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> >   * should be called with vma's mmap_lock held for read or write,
> >   * under page table lock for the pte/pmd being added or removed.
> >   *
> > - * mlock is usually called at the end of page_add_*_rmap(),
> > - * munlock at the end of page_remove_rmap(); but new anon
> > - * pages are managed by lru_cache_add_inactive_or_unevictable()
> > - * calling mlock_new_page().
> > + * mlock is usually called at the end of page_add_*_rmap(), munlock at
> > + * the end of page_remove_rmap(); but new anon folios are managed by
> > + * folio_add_lru_vma() calling mlock_new_folio().
> >   *
> >   * @compound is used to include pmd mappings of THPs, but filter out
> >   * pte mappings of THPs, which cannot be consistently counted: a pte
> > @@ -547,15 +546,22 @@ static inline void mlock_vma_page(struct page *page,
> >  	mlock_vma_folio(page_folio(page), vma, compound);
> >  }
> >
> > -void munlock_page(struct page *page);
> > -static inline void munlock_vma_page(struct page *page,
> > +void munlock_folio(struct folio *folio);
> > +
> > +static inline void munlock_vma_folio(struct folio *folio,
> >  			struct vm_area_struct *vma, bool compound)
> >  {
> >  	if (unlikely(vma->vm_flags & VM_LOCKED) &&
> > -	    (compound || !PageTransCompound(page)))
> > -		munlock_page(page);
> > +	    (compound || !folio_test_large(folio)))
> > +		munlock_folio(folio);
> > +}
> > +
> > +static inline void munlock_vma_page(struct page *page,
> > +			struct vm_area_struct *vma, bool compound)
> > +{
> > +	munlock_vma_folio(page_folio(page), vma, compound);
> >  }
> > -void mlock_new_page(struct page *page);
> > +void mlock_new_folio(struct folio *folio);
> >  bool need_mlock_page_drain(int cpu);
> >  void mlock_page_drain_local(void);
> >  void mlock_page_drain_remote(int cpu);
>
> I think these drain related functions could use a rename as well?
> Maybe replace "page" with "fbatch" or "folio_batch"? Even the old name isn't
> great, should have been "pagevec".

Agreed, though I feel it's more readable if we just drop this bit altogether,
which is also more consistent with the core batch drain functions like
e.g. lru_add_drain().

In this case we'd go to need_mlock_drain(), mlock_drain_local() and
mlock_drain_remote().
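
Roughly, the declarations in mm/internal.h would then read (just a sketch of
the proposed follow-up, keeping the existing signatures):

bool need_mlock_drain(int cpu);
void mlock_drain_local(void);
void mlock_drain_remote(int cpu);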

> But maybe it would fit patch 2/5 rather than 4/5 as it's logically internal
> even if in a .h file.
>
>

Even though it is an internal interface across the board I feel like it makes
the patch series a little easier to read keeping this separate, so I think it
makes sense to keep it here so we can have a separation between
internal-to-mlock changes vs. internal-to-mm ones :)

