* [PATCH v7 0/3] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
@ 2024-04-16  3:34 Lance Yang
  2024-04-16  3:34 ` [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper Lance Yang
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Lance Yang @ 2024-04-16  3:34 UTC (permalink / raw)
  To: akpm
  Cc: ryan.roberts, david, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel, Lance Yang

Hi All,

This patchset adds support for lazyfreeing multi-size THP (mTHP) without
needing to first split the large folio via split_folio(). However, we
still need to split a large folio that is not fully mapped within the
target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. But note that
the behavior is changed; previously, any failure of this sort would cause
the entire operation to give up. As large folios become more common,
sticking to the old way could result in wasted opportunities.

Performance Testing
===================

On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
the same size results in the following runtimes for madvise(MADV_FREE)
in seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%
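
For reference, the numbers above come from timing madvise(MADV_FREE) on a
1GiB anonymous mapping that has been fully faulted in beforehand. A minimal
harness along these lines reproduces that kind of measurement (a sketch
only, not the exact program behind these numbers; it assumes the folio size
under test has already been selected, e.g. via the per-size mTHP sysfs
knobs):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define LEN (1UL << 30)	/* 1GiB VMA, as in the table above */

int main(void)
{
	struct timespec t0, t1;
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	madvise(p, LEN, MADV_HUGEPAGE);	/* allow (m)THP for the range */
	memset(p, 1, LEN);		/* fault in the whole range */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	madvise(p, LEN, MADV_FREE);	/* the call being timed */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.6f\n", (t1.tv_sec - t0.tv_sec) +
			 (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}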

---
This patchset applies against mm-unstable (3aec6b2b34e2). 

The performance numbers are from v2. I did a quick benchmark run of v7 and
nothing significantly changed.

Changes since v6 [6]
====================
 - Fix a bug with incorrect bitwise operations (Thanks to Ryan Roberts)
 - Use a cmpxchg loop when clearing only one of the flags, to prevent a race
   with the HW (per Ryan Roberts)

Changes since v5 [5]
====================
 - Convert mkold_ptes() to clear_young_dirty_ptes() (per Ryan Roberts)
 - Use the __bitwise flags as the input for clear_young_dirty_ptes()
   (per David Hildenbrand)
 - Follow the pattern already established by the original code
   (per Ryan Roberts)

Changes since v4 [4]
====================
 - The first patch implements the MADV_FREE change and introduces
   mkold_clean_ptes() with a generic implementation. The second patch
   specializes mkold_clean_ptes() for arm64, providing a performance boost
   specific to arm64 (per Ryan Roberts)
 - Drop the full parameter and call ptep_get_and_clear() in mkold_clean_ptes()
   (per Ryan Roberts)
 - Keep the previous behavior that avoids locking the folio if it wasn't in the
   swapcache or if it wasn't dirty (per Ryan Roberts)

Changes since v3 [3]
====================
 - Rename refresh_full_ptes -> mkold_clean_ptes (per Ryan Roberts)
 - Override mkold_clean_ptes() for arm64 to make it faster (per Ryan Roberts)
 - Update the changelog

Changes since v2 [2]
====================
 - Only skip all the PTEs for nr_pages when the number of batched PTEs matches
   nr_pages (per Barry Song)
 - Change folio_pte_batch() to consume optional *any_dirty and *any_young
   parameters (per David Hildenbrand)
 - Move the ptep_get_and_clear_full() loop into refresh_full_ptes() (per
   David Hildenbrand)
 - Follow a similar pattern for madvise_free_pte_range() (per Ryan Roberts)

Changes since v1 [1]
====================
 - Update the performance numbers
 - Update the changelog (per Ryan Roberts)
 - Check the COW folio (per Yin Fengwei)
 - Check if we are mapping all subpages (per Barry Song, David Hildenbrand,
   Ryan Roberts)

[1] https://lore.kernel.org/linux-mm/20240225123215.86503-1-ioworker0@gmail.com
[2] https://lore.kernel.org/linux-mm/20240307061425.21013-1-ioworker0@gmail.com
[3] https://lore.kernel.org/linux-mm/20240316102952.39233-1-ioworker0@gmail.com
[4] https://lore.kernel.org/linux-mm/20240402124029.47846-1-ioworker0@gmail.com
[5] https://lore.kernel.org/linux-mm/20240408042437.10951-1-ioworker0@gmail.com
[6] https://lore.kernel.org/linux-mm/20240413002219.71246-1-ioworker0@gmail.com

Thanks,
Lance

Lance Yang (3):
 mm/madvise: introduce clear_young_dirty_ptes() batch helper
 mm/arm64: override clear_young_dirty_ptes() batch helper
 mm/madvise: optimize lazyfreeing with mTHP in madvise_free

 arch/arm64/include/asm/pgtable.h |  55 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c          |  29 +++++++++++++++++
 include/linux/mm_types.h         |   9 ++++++
 include/linux/pgtable.h          |  74 +++++++++++++++++++++++++--------------
 mm/internal.h                    |  12 +++++--
 mm/madvise.c                     | 147 ++++++++++++++++++++++++++++++---------
 mm/memory.c                      |   4 +--
 7 files changed, 233 insertions(+), 97 deletions(-)

-- 
2.33.1



* [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper
  2024-04-16  3:34 [PATCH v7 0/3] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Lance Yang
@ 2024-04-16  3:34 ` Lance Yang
  2024-04-16 15:01   ` David Hildenbrand
  2024-04-16 16:25   ` Ryan Roberts
  2024-04-16  3:34 ` [PATCH v7 2/3] mm/arm64: override " Lance Yang
  2024-04-16  3:34 ` [PATCH v7 3/3] mm/madvise: optimize lazyfreeing with mTHP in madvise_free Lance Yang
  2 siblings, 2 replies; 18+ messages in thread
From: Lance Yang @ 2024-04-16  3:34 UTC (permalink / raw)
  To: akpm
  Cc: ryan.roberts, david, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel, Lance Yang

This commit introduces clear_young_dirty_ptes() to replace mkold_ptes().
By doing so, we can use the same function for both use cases
(madvise_pageout and madvise_free), and it also provides the flexibility
to only clear the dirty flag in the future if needed.

Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Lance Yang <ioworker0@gmail.com>
---
 include/linux/mm_types.h |  9 +++++
 include/linux/pgtable.h  | 74 ++++++++++++++++++++++++----------------
 mm/madvise.c             |  3 +-
 3 files changed, 55 insertions(+), 31 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c432add95913..28822cd65d2a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1367,6 +1367,15 @@ enum fault_flag {
 
 typedef unsigned int __bitwise zap_flags_t;
 
+/* Flags for clear_young_dirty_ptes(). */
+typedef int __bitwise cydp_t;
+
+/* Clear the access bit */
+#define CYDP_CLEAR_YOUNG		((__force cydp_t)BIT(0))
+
+/* Clear the dirty bit */
+#define CYDP_CLEAR_DIRTY		((__force cydp_t)BIT(1))
+
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
  * other. Here is what they mean, and how to use them:
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index e2f45e22a6d1..18019f037bae 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -361,36 +361,6 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 }
 #endif
 
-#ifndef mkold_ptes
-/**
- * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old.
- * @vma: VMA the pages are mapped into.
- * @addr: Address the first page is mapped at.
- * @ptep: Page table pointer for the first entry.
- * @nr: Number of entries to mark old.
- *
- * May be overridden by the architecture; otherwise, implemented as a simple
- * loop over ptep_test_and_clear_young().
- *
- * Note that PTE bits in the PTE range besides the PFN can differ. For example,
- * some PTEs might be write-protected.
- *
- * Context: The caller holds the page table lock.  The PTEs map consecutive
- * pages that belong to the same folio.  The PTEs are all in the same PMD.
- */
-static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
-		pte_t *ptep, unsigned int nr)
-{
-	for (;;) {
-		ptep_test_and_clear_young(vma, addr, ptep);
-		if (--nr == 0)
-			break;
-		ptep++;
-		addr += PAGE_SIZE;
-	}
-}
-#endif
-
 #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
@@ -489,6 +459,50 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 }
 #endif
 
+#ifndef clear_young_dirty_ptes
+/**
+ * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
+ *		same folio as old/clean.
+ * @vma: VMA the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old/clean.
+ * @flags: Flags to modify the PTE batch semantics.
+ *
+ * May be overridden by the architecture; otherwise, implemented by
+ * get_and_clear/modify/set for each pte in the range.
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock.  The PTEs map consecutive
+ * pages that belong to the same folio.  The PTEs are all in the same PMD.
+ */
+static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t *ptep,
+					  unsigned int nr, cydp_t flags)
+{
+	pte_t pte;
+
+	for (;;) {
+		if (flags == CYDP_CLEAR_YOUNG)
+			ptep_test_and_clear_young(vma, addr, ptep);
+		else {
+			pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
+			if (flags & CYDP_CLEAR_YOUNG)
+				pte = pte_mkold(pte);
+			if (flags & CYDP_CLEAR_DIRTY)
+				pte = pte_mkclean(pte);
+			set_pte_at(vma->vm_mm, addr, ptep, pte);
+		}
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+	}
+}
+#endif
+
 static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep)
 {
diff --git a/mm/madvise.c b/mm/madvise.c
index f59169888b8e..edb592adb749 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -507,7 +507,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			continue;
 
 		if (!pageout && pte_young(ptent)) {
-			mkold_ptes(vma, addr, pte, nr);
+			clear_young_dirty_ptes(vma, addr, pte, nr,
+					       CYDP_CLEAR_YOUNG);
 			tlb_remove_tlb_entries(tlb, pte, nr, addr);
 		}
 
-- 
2.33.1



* [PATCH v7 2/3] mm/arm64: override clear_young_dirty_ptes() batch helper
  2024-04-16  3:34 [PATCH v7 0/3] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Lance Yang
  2024-04-16  3:34 ` [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper Lance Yang
@ 2024-04-16  3:34 ` Lance Yang
  2024-04-16 15:02   ` David Hildenbrand
  2024-04-16 16:29   ` Ryan Roberts
  2024-04-16  3:34 ` [PATCH v7 3/3] mm/madvise: optimize lazyfreeing with mTHP in madvise_free Lance Yang
  2 siblings, 2 replies; 18+ messages in thread
From: Lance Yang @ 2024-04-16  3:34 UTC (permalink / raw)
  To: akpm
  Cc: ryan.roberts, david, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel, Lance Yang

The per-pte get_and_clear/modify/set approach would result in
unfolding/refolding for contpte mappings on arm64. So we need
to override clear_young_dirty_ptes() for arm64 to avoid it.

Suggested-by: David Hildenbrand <david@redhat.com>
Suggested-by: Barry Song <21cnbao@gmail.com>
Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Lance Yang <ioworker0@gmail.com>
---
 arch/arm64/include/asm/pgtable.h | 55 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c          | 29 +++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 9fd8613b2db2..1303d30287dc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1223,6 +1223,46 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
 		__ptep_set_wrprotect(mm, address, ptep);
 }
 
+static inline void __clear_young_dirty_pte(struct vm_area_struct *vma,
+					   unsigned long addr, pte_t *ptep,
+					   pte_t pte, cydp_t flags)
+{
+	pte_t old_pte;
+
+	do {
+		old_pte = pte;
+
+		if (flags & CYDP_CLEAR_YOUNG)
+			pte = pte_mkold(pte);
+		if (flags & CYDP_CLEAR_DIRTY)
+			pte = pte_mkclean(pte);
+
+		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
+					       pte_val(old_pte), pte_val(pte));
+	} while (pte_val(pte) != pte_val(old_pte));
+}
+
+static inline void __clear_young_dirty_ptes(struct vm_area_struct *vma,
+					    unsigned long addr, pte_t *ptep,
+					    unsigned int nr, cydp_t flags)
+{
+	pte_t pte;
+
+	for (;;) {
+		pte = __ptep_get(ptep);
+
+		if (flags == (CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY))
+			__set_pte(ptep, pte_mkclean(pte_mkold(pte)));
+		else
+			__clear_young_dirty_pte(vma, addr, ptep, pte, flags);
+
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+	}
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
@@ -1379,6 +1419,9 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t entry, int dirty);
+extern void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
+				unsigned long addr, pte_t *ptep,
+				unsigned int nr, cydp_t flags);
 
 static __always_inline void contpte_try_fold(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep, pte_t pte)
@@ -1603,6 +1646,17 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 	return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
 }
 
+#define clear_young_dirty_ptes clear_young_dirty_ptes
+static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t *ptep,
+					  unsigned int nr, cydp_t flags)
+{
+	if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
+		__clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
+	else
+		contpte_clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
+}
+
 #else /* CONFIG_ARM64_CONTPTE */
 
 #define ptep_get				__ptep_get
@@ -1622,6 +1676,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 #define wrprotect_ptes				__wrprotect_ptes
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 #define ptep_set_access_flags			__ptep_set_access_flags
+#define clear_young_dirty_ptes			__clear_young_dirty_ptes
 
 #endif /* CONFIG_ARM64_CONTPTE */
 
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 1b64b4c3f8bf..9f9486de0004 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -361,6 +361,35 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
 
+void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
+				    unsigned long addr, pte_t *ptep,
+				    unsigned int nr, cydp_t flags)
+{
+	/*
+	 * We can safely clear access/dirty without needing to unfold from
+	 * the architectures perspective, even when contpte is set. If the
+	 * range starts or ends midway through a contpte block, we can just
+	 * expand to include the full contpte block. While this is not
+	 * exactly what the core-mm asked for, it tracks access/dirty per
+	 * folio, not per page. And since we only create a contpte block
+	 * when it is covered by a single folio, we can get away with
+	 * clearing access/dirty for the whole block.
+	 */
+	unsigned long start = addr;
+	unsigned long end = start + nr * PAGE_SIZE;
+
+	if (pte_cont(__ptep_get(ptep + nr - 1)))
+		end = ALIGN(end, CONT_PTE_SIZE);
+
+	if (pte_cont(__ptep_get(ptep))) {
+		start = ALIGN_DOWN(start, CONT_PTE_SIZE);
+		ptep = contpte_align_down(ptep);
+	}
+
+	__clear_young_dirty_ptes(vma, start, ptep, (end - start) / PAGE_SIZE, flags);
+}
+EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
+
 int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 					unsigned long addr, pte_t *ptep,
 					pte_t entry, int dirty)
-- 
2.33.1



* [PATCH v7 3/3] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-16  3:34 [PATCH v7 0/3] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Lance Yang
  2024-04-16  3:34 ` [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper Lance Yang
  2024-04-16  3:34 ` [PATCH v7 2/3] mm/arm64: override " Lance Yang
@ 2024-04-16  3:34 ` Lance Yang
  2024-04-16 16:35   ` Ryan Roberts
  2 siblings, 1 reply; 18+ messages in thread
From: Lance Yang @ 2024-04-16  3:34 UTC (permalink / raw)
  To: akpm
  Cc: ryan.roberts, david, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel, Lance Yang

This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
(Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
splitting if the large folio is fully mapped within the target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. But note that
the behavior is changed; previously, any failure of this sort would cause
the entire operation to give up. As large folios become more common,
sticking to the old way could result in wasted opportunities.

On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
the same size results in the following runtimes for madvise(MADV_FREE) in
seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%

[1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
[2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com

Signed-off-by: Lance Yang <ioworker0@gmail.com>
---
 mm/internal.h |  12 ++++-
 mm/madvise.c  | 144 ++++++++++++++++++++++++++++----------------------
 mm/memory.c   |   4 +-
 3 files changed, 94 insertions(+), 66 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 2adc3f616b71..5d5e49b86fe3 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -134,6 +134,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  *		  first one is writable.
  * @any_young: Optional pointer to indicate whether any entry except the
  *		  first one is young.
+ * @any_dirty: Optional pointer to indicate whether any entry except the
+ *		  first one is dirty.
  *
  * Detect a PTE batch: consecutive (present) PTEs that map consecutive
  * pages of the same large folio.
@@ -149,18 +151,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  */
 static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
-		bool *any_writable, bool *any_young)
+		bool *any_writable, bool *any_young, bool *any_dirty)
 {
 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
 	const pte_t *end_ptep = start_ptep + max_nr;
 	pte_t expected_pte, *ptep;
-	bool writable, young;
+	bool writable, young, dirty;
 	int nr;
 
 	if (any_writable)
 		*any_writable = false;
 	if (any_young)
 		*any_young = false;
+	if (any_dirty)
+		*any_dirty = false;
 
 	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
 	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
@@ -176,6 +180,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			writable = !!pte_write(pte);
 		if (any_young)
 			young = !!pte_young(pte);
+		if (any_dirty)
+			dirty = !!pte_dirty(pte);
 		pte = __pte_batch_clear_ignored(pte, flags);
 
 		if (!pte_same(pte, expected_pte))
@@ -193,6 +199,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			*any_writable |= writable;
 		if (any_young)
 			*any_young |= young;
+		if (any_dirty)
+			*any_dirty |= dirty;
 
 		nr = pte_batch_hint(ptep, pte);
 		expected_pte = pte_advance_pfn(expected_pte, nr);
diff --git a/mm/madvise.c b/mm/madvise.c
index edb592adb749..a6bfbbd881e9 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
 	       file_permission(vma->vm_file, MAY_WRITE) == 0;
 }
 
+static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
+					  struct folio *folio, pte_t *ptep,
+					  pte_t pte, bool *any_young,
+					  bool *any_dirty)
+{
+	int max_nr = (end - addr) / PAGE_SIZE;
+	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+			       any_young, any_dirty);
+}
+
+static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
+					   unsigned long addr,
+					   struct folio *folio, pte_t **pte,
+					   spinlock_t **ptl)
+{
+	int err;
+
+	if (!folio_trylock(folio))
+		return false;
+
+	folio_get(folio);
+	pte_unmap_unlock(*pte, *ptl);
+	err = split_folio(folio);
+	folio_unlock(folio);
+	folio_put(folio);
+
+	*pte = pte_offset_map_lock(mm, pmd, addr, ptl);
+
+	return err == 0;
+}
+
 static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 				unsigned long addr, unsigned long end,
 				struct mm_walk *walk)
@@ -456,41 +489,30 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
-						FPB_IGNORE_SOFT_DIRTY;
-			int max_nr = (end - addr) / PAGE_SIZE;
 			bool any_young;
 
-			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
-					     fpb_flags, NULL, &any_young);
-			if (any_young)
-				ptent = pte_mkyoung(ptent);
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     ptent, &any_young, NULL);
 
 			if (nr < folio_nr_pages(folio)) {
-				int err;
-
 				if (folio_likely_mapped_shared(folio))
 					continue;
 				if (pageout_anon_only_filter && !folio_test_anon(folio))
 					continue;
-				if (!folio_trylock(folio))
-					continue;
-				folio_get(folio);
+
 				arch_leave_lazy_mmu_mode();
-				pte_unmap_unlock(start_pte, ptl);
-				start_pte = NULL;
-				err = split_folio(folio);
-				folio_unlock(folio);
-				folio_put(folio);
-				start_pte = pte =
-					pte_offset_map_lock(mm, pmd, addr, &ptl);
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
 				if (!start_pte)
 					break;
+				pte = start_pte;
 				arch_enter_lazy_mmu_mode();
-				if (!err)
-					nr = 0;
 				continue;
 			}
+
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
 		}
 
 		/*
@@ -688,44 +710,51 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			continue;
 
 		/*
-		 * If pmd isn't transhuge but the folio is large and
-		 * is owned by only this process, split it and
-		 * deactivate all pages.
+		 * If we encounter a large folio, only split it if it is not
+		 * fully mapped within the range we are operating on. Otherwise
+		 * leave it as is so that it can be marked as lazyfree. If we
+		 * fail to split a folio, leave it in place and advance to the
+		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			int err;
+			bool any_young, any_dirty;
 
-			if (folio_likely_mapped_shared(folio))
-				break;
-			if (!folio_trylock(folio))
-				break;
-			folio_get(folio);
-			arch_leave_lazy_mmu_mode();
-			pte_unmap_unlock(start_pte, ptl);
-			start_pte = NULL;
-			err = split_folio(folio);
-			folio_unlock(folio);
-			folio_put(folio);
-			if (err)
-				break;
-			start_pte = pte =
-				pte_offset_map_lock(mm, pmd, addr, &ptl);
-			if (!start_pte)
-				break;
-			arch_enter_lazy_mmu_mode();
-			pte--;
-			addr -= PAGE_SIZE;
-			continue;
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     ptent, &any_young, &any_dirty);
+
+			if (nr < folio_nr_pages(folio)) {
+				if (folio_likely_mapped_shared(folio))
+					continue;
+
+				arch_leave_lazy_mmu_mode();
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
+				if (!start_pte)
+					break;
+				pte = start_pte;
+				arch_enter_lazy_mmu_mode();
+				continue;
+			}
+
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
+			if (any_dirty)
+				ptent = pte_mkdirty(ptent);
 		}
 
+		if (folio_mapcount(folio) != folio_nr_pages(folio))
+			continue;
+
 		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
 			if (!folio_trylock(folio))
 				continue;
 			/*
-			 * If folio is shared with others, we mustn't clear
-			 * the folio's dirty flag.
+			 * If we have a large folio at this point, we know it is
+			 * fully mapped so if its mapcount is the same as its
+			 * number of pages, it must be exclusive.
 			 */
-			if (folio_mapcount(folio) != 1) {
+			if (folio_mapcount(folio) != folio_nr_pages(folio)) {
 				folio_unlock(folio);
 				continue;
 			}
@@ -741,19 +770,10 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		}
 
 		if (pte_young(ptent) || pte_dirty(ptent)) {
-			/*
-			 * Some of architecture(ex, PPC) don't update TLB
-			 * with set_pte_at and tlb_remove_tlb_entry so for
-			 * the portability, remap the pte with old|clean
-			 * after pte clearing.
-			 */
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
-
-			ptent = pte_mkold(ptent);
-			ptent = pte_mkclean(ptent);
-			set_pte_at(mm, addr, pte, ptent);
-			tlb_remove_tlb_entry(tlb, pte, addr);
+			clear_young_dirty_ptes(vma, addr, pte, nr,
+					       CYDP_CLEAR_YOUNG |
+						       CYDP_CLEAR_DIRTY);
+			tlb_remove_tlb_entries(tlb, pte, nr, addr);
 		}
 		folio_mark_lazyfree(folio);
 	}
diff --git a/mm/memory.c b/mm/memory.c
index 33d87b64d15d..9e07d1b9020c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 			flags |= FPB_IGNORE_SOFT_DIRTY;
 
 		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
-				     &any_writable, NULL);
+				     &any_writable, NULL, NULL);
 		folio_ref_add(folio, nr);
 		if (folio_test_anon(folio)) {
 			if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
@@ -1558,7 +1558,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 	 */
 	if (unlikely(folio_test_large(folio) && max_nr != 1)) {
 		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
-				     NULL, NULL);
+				     NULL, NULL, NULL);
 
 		zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
 				       addr, details, rss, force_flush,
-- 
2.33.1



* Re: [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper
  2024-04-16  3:34 ` [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper Lance Yang
@ 2024-04-16 15:01   ` David Hildenbrand
  2024-04-17  4:12     ` Lance Yang
  2024-04-16 16:25   ` Ryan Roberts
  1 sibling, 1 reply; 18+ messages in thread
From: David Hildenbrand @ 2024-04-16 15:01 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: ryan.roberts, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 16.04.24 05:34, Lance Yang wrote:
> This commit introduces clear_young_dirty_ptes() to replace mkold_ptes().
> By doing so, we can use the same function for both use cases
> (madvise_pageout and madvise_free), and it also provides the flexibility
> to only clear the dirty flag in the future if needed.
> 
> Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> ---
>   include/linux/mm_types.h |  9 +++++
>   include/linux/pgtable.h  | 74 ++++++++++++++++++++++++----------------
>   mm/madvise.c             |  3 +-
>   3 files changed, 55 insertions(+), 31 deletions(-)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index c432add95913..28822cd65d2a 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -1367,6 +1367,15 @@ enum fault_flag {
>   
>   typedef unsigned int __bitwise zap_flags_t;
>   
> +/* Flags for clear_young_dirty_ptes(). */
> +typedef int __bitwise cydp_t;
> +
> +/* Clear the access bit */
> +#define CYDP_CLEAR_YOUNG		((__force cydp_t)BIT(0))
> +
> +/* Clear the dirty bit */
> +#define CYDP_CLEAR_DIRTY		((__force cydp_t)BIT(1))
> +
>   /*
>    * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
>    * other. Here is what they mean, and how to use them:
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index e2f45e22a6d1..18019f037bae 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -361,36 +361,6 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>   }
>   #endif
>   
> -#ifndef mkold_ptes
> -/**
> - * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old.
> - * @vma: VMA the pages are mapped into.
> - * @addr: Address the first page is mapped at.
> - * @ptep: Page table pointer for the first entry.
> - * @nr: Number of entries to mark old.
> - *
> - * May be overridden by the architecture; otherwise, implemented as a simple
> - * loop over ptep_test_and_clear_young().
> - *
> - * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> - * some PTEs might be write-protected.
> - *
> - * Context: The caller holds the page table lock.  The PTEs map consecutive
> - * pages that belong to the same folio.  The PTEs are all in the same PMD.
> - */
> -static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
> -		pte_t *ptep, unsigned int nr)
> -{
> -	for (;;) {
> -		ptep_test_and_clear_young(vma, addr, ptep);
> -		if (--nr == 0)
> -			break;
> -		ptep++;
> -		addr += PAGE_SIZE;
> -	}
> -}
> -#endif
> -
>   #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
>   #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
>   static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> @@ -489,6 +459,50 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>   }
>   #endif
>   
> +#ifndef clear_young_dirty_ptes
> +/**
> + * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
> + *		same folio as old/clean.
> + * @vma: VMA the pages are mapped into.
> + * @addr: Address the first page is mapped at.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of entries to mark old/clean.
> + * @flags: Flags to modify the PTE batch semantics.
> + *
> + * May be overridden by the architecture; otherwise, implemented by
> + * get_and_clear/modify/set for each pte in the range.
> + *
> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> + * some PTEs might be write-protected.
> + *
> + * Context: The caller holds the page table lock.  The PTEs map consecutive
> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
> + */
> +static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> +					  unsigned long addr, pte_t *ptep,
> +					  unsigned int nr, cydp_t flags)
> +{
> +	pte_t pte;
> +
> +	for (;;) {
> +		if (flags == CYDP_CLEAR_YOUNG)
> +			ptep_test_and_clear_young(vma, addr, ptep);
> +		else {
> +			pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
> +			if (flags & CYDP_CLEAR_YOUNG)
> +				pte = pte_mkold(pte);
> +			if (flags & CYDP_CLEAR_DIRTY)
> +				pte = pte_mkclean(pte);
> +			set_pte_at(vma->vm_mm, addr, ptep, pte);
> +		}
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		addr += PAGE_SIZE;
> +	}
> +}

The compiler *might* generate a bit faster code if you check for
CYDP_CLEAR_YOUNG outside of the loop, so you don't have to recheck on 
each loop iteration.
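
Completely untested, but for illustration, that restructuring could look
roughly like this (same helpers as above, just hoisting the check out of
the loop):

	/* untested sketch: evaluate the flags check once, not per PTE */
	if (flags == CYDP_CLEAR_YOUNG) {
		for (;;) {
			ptep_test_and_clear_young(vma, addr, ptep);
			if (--nr == 0)
				break;
			ptep++;
			addr += PAGE_SIZE;
		}
		return;
	}

	for (;;) {
		pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
		if (flags & CYDP_CLEAR_YOUNG)
			pte = pte_mkold(pte);
		if (flags & CYDP_CLEAR_DIRTY)
			pte = pte_mkclean(pte);
		set_pte_at(vma->vm_mm, addr, ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}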

For now, nothing to lose sleep about

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



* Re: [PATCH v7 2/3] mm/arm64: override clear_young_dirty_ptes() batch helper
  2024-04-16  3:34 ` [PATCH v7 2/3] mm/arm64: override " Lance Yang
@ 2024-04-16 15:02   ` David Hildenbrand
  2024-04-16 16:29   ` Ryan Roberts
  1 sibling, 0 replies; 18+ messages in thread
From: David Hildenbrand @ 2024-04-16 15:02 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: ryan.roberts, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 16.04.24 05:34, Lance Yang wrote:
> The per-pte get_and_clear/modify/set approach would result in
> unfolding/refolding for contpte mappings on arm64. So we need
> to override clear_young_dirty_ptes() for arm64 to avoid it.
> 
> Suggested-by: David Hildenbrand <david@redhat.com>

I could have sworn my suggestion would have better applied to patch #1 :)

-- 
Cheers,

David / dhildenb



* Re: [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper
  2024-04-16  3:34 ` [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper Lance Yang
  2024-04-16 15:01   ` David Hildenbrand
@ 2024-04-16 16:25   ` Ryan Roberts
  2024-04-17  4:13     ` Lance Yang
  1 sibling, 1 reply; 18+ messages in thread
From: Ryan Roberts @ 2024-04-16 16:25 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 16/04/2024 04:34, Lance Yang wrote:
> This commit introduces clear_young_dirty_ptes() to replace mkold_ptes().
> By doing so, we can use the same function for both use cases
> (madvise_pageout and madvise_free), and it also provides the flexibility
> to only clear the dirty flag in the future if needed.
> 
> Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Lance Yang <ioworker0@gmail.com>

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> ---
>  include/linux/mm_types.h |  9 +++++
>  include/linux/pgtable.h  | 74 ++++++++++++++++++++++++----------------
>  mm/madvise.c             |  3 +-
>  3 files changed, 55 insertions(+), 31 deletions(-)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index c432add95913..28822cd65d2a 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -1367,6 +1367,15 @@ enum fault_flag {
>  
>  typedef unsigned int __bitwise zap_flags_t;
>  
> +/* Flags for clear_young_dirty_ptes(). */
> +typedef int __bitwise cydp_t;
> +
> +/* Clear the access bit */
> +#define CYDP_CLEAR_YOUNG		((__force cydp_t)BIT(0))
> +
> +/* Clear the dirty bit */
> +#define CYDP_CLEAR_DIRTY		((__force cydp_t)BIT(1))
> +
>  /*
>   * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
>   * other. Here is what they mean, and how to use them:
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index e2f45e22a6d1..18019f037bae 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -361,36 +361,6 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>  }
>  #endif
>  
> -#ifndef mkold_ptes
> -/**
> - * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old.
> - * @vma: VMA the pages are mapped into.
> - * @addr: Address the first page is mapped at.
> - * @ptep: Page table pointer for the first entry.
> - * @nr: Number of entries to mark old.
> - *
> - * May be overridden by the architecture; otherwise, implemented as a simple
> - * loop over ptep_test_and_clear_young().
> - *
> - * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> - * some PTEs might be write-protected.
> - *
> - * Context: The caller holds the page table lock.  The PTEs map consecutive
> - * pages that belong to the same folio.  The PTEs are all in the same PMD.
> - */
> -static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
> -		pte_t *ptep, unsigned int nr)
> -{
> -	for (;;) {
> -		ptep_test_and_clear_young(vma, addr, ptep);
> -		if (--nr == 0)
> -			break;
> -		ptep++;
> -		addr += PAGE_SIZE;
> -	}
> -}
> -#endif
> -
>  #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
>  static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> @@ -489,6 +459,50 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  }
>  #endif
>  
> +#ifndef clear_young_dirty_ptes
> +/**
> + * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
> + *		same folio as old/clean.
> + * @vma: VMA the pages are mapped into.
> + * @addr: Address the first page is mapped at.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of entries to mark old/clean.
> + * @flags: Flags to modify the PTE batch semantics.
> + *
> + * May be overridden by the architecture; otherwise, implemented by
> + * get_and_clear/modify/set for each pte in the range.
> + *
> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> + * some PTEs might be write-protected.
> + *
> + * Context: The caller holds the page table lock.  The PTEs map consecutive
> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
> + */
> +static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> +					  unsigned long addr, pte_t *ptep,
> +					  unsigned int nr, cydp_t flags)
> +{
> +	pte_t pte;
> +
> +	for (;;) {
> +		if (flags == CYDP_CLEAR_YOUNG)
> +			ptep_test_and_clear_young(vma, addr, ptep);
> +		else {
> +			pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
> +			if (flags & CYDP_CLEAR_YOUNG)
> +				pte = pte_mkold(pte);
> +			if (flags & CYDP_CLEAR_DIRTY)
> +				pte = pte_mkclean(pte);
> +			set_pte_at(vma->vm_mm, addr, ptep, pte);
> +		}
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		addr += PAGE_SIZE;
> +	}
> +}
> +#endif
> +
>  static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>  			      pte_t *ptep)
>  {
> diff --git a/mm/madvise.c b/mm/madvise.c
> index f59169888b8e..edb592adb749 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -507,7 +507,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  			continue;
>  
>  		if (!pageout && pte_young(ptent)) {
> -			mkold_ptes(vma, addr, pte, nr);
> +			clear_young_dirty_ptes(vma, addr, pte, nr,
> +					       CYDP_CLEAR_YOUNG);
>  			tlb_remove_tlb_entries(tlb, pte, nr, addr);
>  		}
>  



* Re: [PATCH v7 2/3] mm/arm64: override clear_young_dirty_ptes() batch helper
  2024-04-16  3:34 ` [PATCH v7 2/3] mm/arm64: override " Lance Yang
  2024-04-16 15:02   ` David Hildenbrand
@ 2024-04-16 16:29   ` Ryan Roberts
  2024-04-17  4:16     ` Lance Yang
  1 sibling, 1 reply; 18+ messages in thread
From: Ryan Roberts @ 2024-04-16 16:29 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 16/04/2024 04:34, Lance Yang wrote:
> The per-pte get_and_clear/modify/set approach would result in
> unfolding/refolding for contpte mappings on arm64. So we need
> to override clear_young_dirty_ptes() for arm64 to avoid it.
> 
> Suggested-by: David Hildenbrand <david@redhat.com>
> Suggested-by: Barry Song <21cnbao@gmail.com>
> Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Lance Yang <ioworker0@gmail.com>

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> ---
>  arch/arm64/include/asm/pgtable.h | 55 ++++++++++++++++++++++++++++++++
>  arch/arm64/mm/contpte.c          | 29 +++++++++++++++++
>  2 files changed, 84 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 9fd8613b2db2..1303d30287dc 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1223,6 +1223,46 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
>  		__ptep_set_wrprotect(mm, address, ptep);
>  }
>  
> +static inline void __clear_young_dirty_pte(struct vm_area_struct *vma,
> +					   unsigned long addr, pte_t *ptep,
> +					   pte_t pte, cydp_t flags)
> +{
> +	pte_t old_pte;
> +
> +	do {
> +		old_pte = pte;
> +
> +		if (flags & CYDP_CLEAR_YOUNG)
> +			pte = pte_mkold(pte);
> +		if (flags & CYDP_CLEAR_DIRTY)
> +			pte = pte_mkclean(pte);
> +
> +		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
> +					       pte_val(old_pte), pte_val(pte));
> +	} while (pte_val(pte) != pte_val(old_pte));
> +}
> +
> +static inline void __clear_young_dirty_ptes(struct vm_area_struct *vma,
> +					    unsigned long addr, pte_t *ptep,
> +					    unsigned int nr, cydp_t flags)
> +{
> +	pte_t pte;
> +
> +	for (;;) {
> +		pte = __ptep_get(ptep);
> +
> +		if (flags == (CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY))
> +			__set_pte(ptep, pte_mkclean(pte_mkold(pte)));
> +		else
> +			__clear_young_dirty_pte(vma, addr, ptep, pte, flags);
> +
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		addr += PAGE_SIZE;
> +	}
> +}
> +
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_PMDP_SET_WRPROTECT
>  static inline void pmdp_set_wrprotect(struct mm_struct *mm,
> @@ -1379,6 +1419,9 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>  extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>  				unsigned long addr, pte_t *ptep,
>  				pte_t entry, int dirty);
> +extern void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
> +				unsigned long addr, pte_t *ptep,
> +				unsigned int nr, cydp_t flags);
>  
>  static __always_inline void contpte_try_fold(struct mm_struct *mm,
>  				unsigned long addr, pte_t *ptep, pte_t pte)
> @@ -1603,6 +1646,17 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
>  	return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
>  }
>  
> +#define clear_young_dirty_ptes clear_young_dirty_ptes
> +static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> +					  unsigned long addr, pte_t *ptep,
> +					  unsigned int nr, cydp_t flags)
> +{
> +	if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
> +		__clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
> +	else
> +		contpte_clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
> +}
> +
>  #else /* CONFIG_ARM64_CONTPTE */
>  
>  #define ptep_get				__ptep_get
> @@ -1622,6 +1676,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
>  #define wrprotect_ptes				__wrprotect_ptes
>  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
>  #define ptep_set_access_flags			__ptep_set_access_flags
> +#define clear_young_dirty_ptes			__clear_young_dirty_ptes
>  
>  #endif /* CONFIG_ARM64_CONTPTE */
>  
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index 1b64b4c3f8bf..9f9486de0004 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -361,6 +361,35 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>  }
>  EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
>  
> +void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
> +				    unsigned long addr, pte_t *ptep,
> +				    unsigned int nr, cydp_t flags)
> +{
> +	/*
> +	 * We can safely clear access/dirty without needing to unfold from
> +	 * the architectures perspective, even when contpte is set. If the
> +	 * range starts or ends midway through a contpte block, we can just
> +	 * expand to include the full contpte block. While this is not
> +	 * exactly what the core-mm asked for, it tracks access/dirty per
> +	 * folio, not per page. And since we only create a contpte block
> +	 * when it is covered by a single folio, we can get away with
> +	 * clearing access/dirty for the whole block.
> +	 */
> +	unsigned long start = addr;
> +	unsigned long end = start + nr * PAGE_SIZE;
> +
> +	if (pte_cont(__ptep_get(ptep + nr - 1)))
> +		end = ALIGN(end, CONT_PTE_SIZE);
> +
> +	if (pte_cont(__ptep_get(ptep))) {
> +		start = ALIGN_DOWN(start, CONT_PTE_SIZE);
> +		ptep = contpte_align_down(ptep);
> +	}
> +
> +	__clear_young_dirty_ptes(vma, start, ptep, (end - start) / PAGE_SIZE, flags);
> +}
> +EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
> +
>  int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>  					unsigned long addr, pte_t *ptep,
>  					pte_t entry, int dirty)



* Re: [PATCH v7 3/3] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-16  3:34 ` [PATCH v7 3/3] mm/madvise: optimize lazyfreeing with mTHP in madvise_free Lance Yang
@ 2024-04-16 16:35   ` Ryan Roberts
  2024-04-16 16:52     ` David Hildenbrand
  0 siblings, 1 reply; 18+ messages in thread
From: Ryan Roberts @ 2024-04-16 16:35 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 16/04/2024 04:34, Lance Yang wrote:
> This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
> (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
> splitting if the large folio is fully mapped within the target range.
> 
> If a large folio is locked or shared, or if we fail to split it, we just
> leave it in place and advance to the next PTE in the range. But note that
> the behavior is changed; previously, any failure of this sort would cause
> the entire operation to give up. As large folios become more common,
> sticking to the old way could result in wasted opportunities.
> 
> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> the same size results in the following runtimes for madvise(MADV_FREE) in
> seconds (shorter is better):
> 
> Folio Size |   Old    |   New    | Change
> ------------------------------------------
>       4KiB | 0.590251 | 0.590259 |    0%
>      16KiB | 2.990447 | 0.185655 |  -94%
>      32KiB | 2.547831 | 0.104870 |  -95%
>      64KiB | 2.457796 | 0.052812 |  -97%
>     128KiB | 2.281034 | 0.032777 |  -99%
>     256KiB | 2.230387 | 0.017496 |  -99%
>     512KiB | 2.189106 | 0.010781 |  -99%
>    1024KiB | 2.183949 | 0.007753 |  -99%
>    2048KiB | 0.002799 | 0.002804 |    0%
> 
> [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
> [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
> 
> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> ---
>  mm/internal.h |  12 ++++-
>  mm/madvise.c  | 144 ++++++++++++++++++++++++++++----------------------
>  mm/memory.c   |   4 +-
>  3 files changed, 94 insertions(+), 66 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 2adc3f616b71..5d5e49b86fe3 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -134,6 +134,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>   *		  first one is writable.
>   * @any_young: Optional pointer to indicate whether any entry except the
>   *		  first one is young.
> + * @any_dirty: Optional pointer to indicate whether any entry except the
> + *		  first one is dirty.
>   *
>   * Detect a PTE batch: consecutive (present) PTEs that map consecutive
>   * pages of the same large folio.
> @@ -149,18 +151,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>   */
>  static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>  		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> -		bool *any_writable, bool *any_young)
> +		bool *any_writable, bool *any_young, bool *any_dirty)
>  {
>  	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
>  	const pte_t *end_ptep = start_ptep + max_nr;
>  	pte_t expected_pte, *ptep;
> -	bool writable, young;
> +	bool writable, young, dirty;
>  	int nr;
>  
>  	if (any_writable)
>  		*any_writable = false;
>  	if (any_young)
>  		*any_young = false;
> +	if (any_dirty)
> +		*any_dirty = false;
>  
>  	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
>  	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
> @@ -176,6 +180,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>  			writable = !!pte_write(pte);
>  		if (any_young)
>  			young = !!pte_young(pte);
> +		if (any_dirty)
> +			dirty = !!pte_dirty(pte);
>  		pte = __pte_batch_clear_ignored(pte, flags);
>  
>  		if (!pte_same(pte, expected_pte))
> @@ -193,6 +199,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>  			*any_writable |= writable;
>  		if (any_young)
>  			*any_young |= young;
> +		if (any_dirty)
> +			*any_dirty |= dirty;
>  
>  		nr = pte_batch_hint(ptep, pte);
>  		expected_pte = pte_advance_pfn(expected_pte, nr);
> diff --git a/mm/madvise.c b/mm/madvise.c
> index edb592adb749..a6bfbbd881e9 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
>  	       file_permission(vma->vm_file, MAY_WRITE) == 0;
>  }
>  
> +static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
> +					  struct folio *folio, pte_t *ptep,
> +					  pte_t pte, bool *any_young,
> +					  bool *any_dirty)
> +{
> +	int max_nr = (end - addr) / PAGE_SIZE;
> +	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +
> +	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> +			       any_young, any_dirty);
> +}
> +
> +static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
> +					   unsigned long addr,
> +					   struct folio *folio, pte_t **pte,
> +					   spinlock_t **ptl)
> +{
> +	int err;
> +
> +	if (!folio_trylock(folio))
> +		return false;
> +
> +	folio_get(folio);
> +	pte_unmap_unlock(*pte, *ptl);
> +	err = split_folio(folio);
> +	folio_unlock(folio);
> +	folio_put(folio);
> +
> +	*pte = pte_offset_map_lock(mm, pmd, addr, ptl);
> +
> +	return err == 0;
> +}
> +
>  static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  				unsigned long addr, unsigned long end,
>  				struct mm_walk *walk)
> @@ -456,41 +489,30 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  		 * next pte in the range.
>  		 */
>  		if (folio_test_large(folio)) {
> -			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
> -						FPB_IGNORE_SOFT_DIRTY;
> -			int max_nr = (end - addr) / PAGE_SIZE;
>  			bool any_young;
>  
> -			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> -					     fpb_flags, NULL, &any_young);
> -			if (any_young)
> -				ptent = pte_mkyoung(ptent);
> +			nr = madvise_folio_pte_batch(addr, end, folio, pte,
> +						     ptent, &any_young, NULL);
>  
>  			if (nr < folio_nr_pages(folio)) {
> -				int err;
> -
>  				if (folio_likely_mapped_shared(folio))
>  					continue;
>  				if (pageout_anon_only_filter && !folio_test_anon(folio))
>  					continue;
> -				if (!folio_trylock(folio))
> -					continue;
> -				folio_get(folio);
> +
>  				arch_leave_lazy_mmu_mode();
> -				pte_unmap_unlock(start_pte, ptl);
> -				start_pte = NULL;
> -				err = split_folio(folio);
> -				folio_unlock(folio);
> -				folio_put(folio);
> -				start_pte = pte =
> -					pte_offset_map_lock(mm, pmd, addr, &ptl);
> +				if (madvise_pte_split_folio(mm, pmd, addr,
> +							    folio, &start_pte, &ptl))
> +					nr = 0;
>  				if (!start_pte)
>  					break;
> +				pte = start_pte;
>  				arch_enter_lazy_mmu_mode();
> -				if (!err)
> -					nr = 0;
>  				continue;
>  			}
> +
> +			if (any_young)
> +				ptent = pte_mkyoung(ptent);
>  		}
>  
>  		/*
> @@ -688,44 +710,51 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>  			continue;
>  
>  		/*
> -		 * If pmd isn't transhuge but the folio is large and
> -		 * is owned by only this process, split it and
> -		 * deactivate all pages.
> +		 * If we encounter a large folio, only split it if it is not
> +		 * fully mapped within the range we are operating on. Otherwise
> +		 * leave it as is so that it can be marked as lazyfree. If we
> +		 * fail to split a folio, leave it in place and advance to the
> +		 * next pte in the range.
>  		 */
>  		if (folio_test_large(folio)) {
> -			int err;
> +			bool any_young, any_dirty;
>  
> -			if (folio_likely_mapped_shared(folio))
> -				break;
> -			if (!folio_trylock(folio))
> -				break;
> -			folio_get(folio);
> -			arch_leave_lazy_mmu_mode();
> -			pte_unmap_unlock(start_pte, ptl);
> -			start_pte = NULL;
> -			err = split_folio(folio);
> -			folio_unlock(folio);
> -			folio_put(folio);
> -			if (err)
> -				break;
> -			start_pte = pte =
> -				pte_offset_map_lock(mm, pmd, addr, &ptl);
> -			if (!start_pte)
> -				break;
> -			arch_enter_lazy_mmu_mode();
> -			pte--;
> -			addr -= PAGE_SIZE;
> -			continue;
> +			nr = madvise_folio_pte_batch(addr, end, folio, pte,
> +						     ptent, &any_young, &any_dirty);
> +
> +			if (nr < folio_nr_pages(folio)) {
> +				if (folio_likely_mapped_shared(folio))
> +					continue;
> +
> +				arch_leave_lazy_mmu_mode();
> +				if (madvise_pte_split_folio(mm, pmd, addr,
> +							    folio, &start_pte, &ptl))
> +					nr = 0;
> +				if (!start_pte)
> +					break;
> +				pte = start_pte;
> +				arch_enter_lazy_mmu_mode();
> +				continue;
> +			}
> +
> +			if (any_young)
> +				ptent = pte_mkyoung(ptent);
> +			if (any_dirty)
> +				ptent = pte_mkdirty(ptent);
>  		}
>  
> +		if (folio_mapcount(folio) != folio_nr_pages(folio))
> +			continue;

Why is this here? I thought we had previously concluded to only do this test
inside the below if statement (where you have it duplicated).

> +
>  		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
>  			if (!folio_trylock(folio))
>  				continue;
>  			/*
> -			 * If folio is shared with others, we mustn't clear
> -			 * the folio's dirty flag.
> +			 * If we have a large folio at this point, we know it is
> +			 * fully mapped so if its mapcount is the same as its
> +			 * number of pages, it must be exclusive.
>  			 */
> -			if (folio_mapcount(folio) != 1) {
> +			if (folio_mapcount(folio) != folio_nr_pages(folio)) {
>  				folio_unlock(folio);
>  				continue;
>  			}
> @@ -741,19 +770,10 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>  		}
>  
>  		if (pte_young(ptent) || pte_dirty(ptent)) {
> -			/*
> -			 * Some of architecture(ex, PPC) don't update TLB
> -			 * with set_pte_at and tlb_remove_tlb_entry so for
> -			 * the portability, remap the pte with old|clean
> -			 * after pte clearing.
> -			 */
> -			ptent = ptep_get_and_clear_full(mm, addr, pte,
> -							tlb->fullmm);
> -
> -			ptent = pte_mkold(ptent);
> -			ptent = pte_mkclean(ptent);
> -			set_pte_at(mm, addr, pte, ptent);
> -			tlb_remove_tlb_entry(tlb, pte, addr);
> +			clear_young_dirty_ptes(vma, addr, pte, nr,
> +					       CYDP_CLEAR_YOUNG |
> +						       CYDP_CLEAR_DIRTY);
> +			tlb_remove_tlb_entries(tlb, pte, nr, addr);
>  		}
>  		folio_mark_lazyfree(folio);
>  	}
> diff --git a/mm/memory.c b/mm/memory.c
> index 33d87b64d15d..9e07d1b9020c 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>  			flags |= FPB_IGNORE_SOFT_DIRTY;
>  
>  		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
> -				     &any_writable, NULL);
> +				     &any_writable, NULL, NULL);
>  		folio_ref_add(folio, nr);
>  		if (folio_test_anon(folio)) {
>  			if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> @@ -1558,7 +1558,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
>  	 */
>  	if (unlikely(folio_test_large(folio) && max_nr != 1)) {
>  		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
> -				     NULL, NULL);
> +				     NULL, NULL, NULL);
>  
>  		zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
>  				       addr, details, rss, force_flush,



* Re: [PATCH v7 3/3] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-16 16:35   ` Ryan Roberts
@ 2024-04-16 16:52     ` David Hildenbrand
  2024-04-17  4:35       ` Lance Yang
  0 siblings, 1 reply; 18+ messages in thread
From: David Hildenbrand @ 2024-04-16 16:52 UTC (permalink / raw)
  To: Ryan Roberts, Lance Yang, akpm
  Cc: 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301, xiehuan09,
	wangkefeng.wang, songmuchun, peterx, minchan, linux-mm,
	linux-kernel

>> +			nr = madvise_folio_pte_batch(addr, end, folio, pte,
>> +						     ptent, &any_young, &any_dirty);
>> +
>> +			if (nr < folio_nr_pages(folio)) {
>> +				if (folio_likely_mapped_shared(folio))
>> +					continue;
>> +
>> +				arch_leave_lazy_mmu_mode();
>> +				if (madvise_pte_split_folio(mm, pmd, addr,
>> +							    folio, &start_pte, &ptl))
>> +					nr = 0;
>> +				if (!start_pte)
>> +					break;
>> +				pte = start_pte;
>> +				arch_enter_lazy_mmu_mode();
>> +				continue;
>> +			}
>> +
>> +			if (any_young)
>> +				ptent = pte_mkyoung(ptent);
>> +			if (any_dirty)
>> +				ptent = pte_mkdirty(ptent);
>>   		}
>>   
>> +		if (folio_mapcount(folio) != folio_nr_pages(folio))
>> +			continue;
> 
> Why is this here? I thought we had previously concluded to only do this test
> inside the below if statement (where you have it duplicated).

I stumbled over the same thing while reviewing. It's not exactly a 
duplicate, because it's unreliable without the folio lock. It looks more 
like a best-effort early check.

But then, we also add it to cases where we previously wouldn't check the 
mapcount at all: when the folio was added to the swapcache or is already 
dirty.

In that case, we would even see a change for order-0 folios with that 
new check.
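
Condensed from the hunk above (paraphrased, with the enclosing PTE-scan 
loop and the rest of the locked path omitted), the two checks being 
discussed look like this:

	/* New, unlocked: can race, so only a best-effort early skip. */
	if (folio_mapcount(folio) != folio_nr_pages(folio))
		continue;

	if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
		if (!folio_trylock(folio))
			continue;
		/* Re-checked here with the folio lock held. */
		if (folio_mapcount(folio) != folio_nr_pages(folio)) {
			folio_unlock(folio);
			continue;
		}
		/* ... proceed with the folio locked ... */
	}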

-- 
Cheers,

David / dhildenb



* Re: [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper
  2024-04-16 15:01   ` David Hildenbrand
@ 2024-04-17  4:12     ` Lance Yang
  2024-04-17  5:04       ` Lance Yang
  0 siblings, 1 reply; 18+ messages in thread
From: Lance Yang @ 2024-04-17  4:12 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: akpm, ryan.roberts, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel

On Tue, Apr 16, 2024 at 11:03 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 16.04.24 05:34, Lance Yang wrote:
> > This commit introduces clear_young_dirty_ptes() to replace mkold_ptes().
> > By doing so, we can use the same function for both use cases
> > (madvise_pageout and madvise_free), and it also provides the flexibility
> > to only clear the dirty flag in the future if needed.
> >
> > Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
> > Signed-off-by: Lance Yang <ioworker0@gmail.com>
> > ---
> >   include/linux/mm_types.h |  9 +++++
> >   include/linux/pgtable.h  | 74 ++++++++++++++++++++++++----------------
> >   mm/madvise.c             |  3 +-
> >   3 files changed, 55 insertions(+), 31 deletions(-)
> >
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index c432add95913..28822cd65d2a 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -1367,6 +1367,15 @@ enum fault_flag {
> >
> >   typedef unsigned int __bitwise zap_flags_t;
> >
> > +/* Flags for clear_young_dirty_ptes(). */
> > +typedef int __bitwise cydp_t;
> > +
> > +/* Clear the access bit */
> > +#define CYDP_CLEAR_YOUNG             ((__force cydp_t)BIT(0))
> > +
> > +/* Clear the dirty bit */
> > +#define CYDP_CLEAR_DIRTY             ((__force cydp_t)BIT(1))
> > +
> >   /*
> >    * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
> >    * other. Here is what they mean, and how to use them:
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index e2f45e22a6d1..18019f037bae 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -361,36 +361,6 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> >   }
> >   #endif
> >
> > -#ifndef mkold_ptes
> > -/**
> > - * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old.
> > - * @vma: VMA the pages are mapped into.
> > - * @addr: Address the first page is mapped at.
> > - * @ptep: Page table pointer for the first entry.
> > - * @nr: Number of entries to mark old.
> > - *
> > - * May be overridden by the architecture; otherwise, implemented as a simple
> > - * loop over ptep_test_and_clear_young().
> > - *
> > - * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> > - * some PTEs might be write-protected.
> > - *
> > - * Context: The caller holds the page table lock.  The PTEs map consecutive
> > - * pages that belong to the same folio.  The PTEs are all in the same PMD.
> > - */
> > -static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
> > -             pte_t *ptep, unsigned int nr)
> > -{
> > -     for (;;) {
> > -             ptep_test_and_clear_young(vma, addr, ptep);
> > -             if (--nr == 0)
> > -                     break;
> > -             ptep++;
> > -             addr += PAGE_SIZE;
> > -     }
> > -}
> > -#endif
> > -
> >   #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
> >   #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
> >   static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> > @@ -489,6 +459,50 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> >   }
> >   #endif
> >
> > +#ifndef clear_young_dirty_ptes
> > +/**
> > + * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
> > + *           same folio as old/clean.
> > + * @vma: VMA the pages are mapped into.
> > + * @addr: Address the first page is mapped at.
> > + * @ptep: Page table pointer for the first entry.
> > + * @nr: Number of entries to mark old/clean.
> > + * @flags: Flags to modify the PTE batch semantics.
> > + *
> > + * May be overridden by the architecture; otherwise, implemented by
> > + * get_and_clear/modify/set for each pte in the range.
> > + *
> > + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> > + * some PTEs might be write-protected.
> > + *
> > + * Context: The caller holds the page table lock.  The PTEs map consecutive
> > + * pages that belong to the same folio.  The PTEs are all in the same PMD.
> > + */
> > +static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> > +                                       unsigned long addr, pte_t *ptep,
> > +                                       unsigned int nr, cydp_t flags)
> > +{
> > +     pte_t pte;
> > +
> > +     for (;;) {
> > +             if (flags == CYDP_CLEAR_YOUNG)
> > +                     ptep_test_and_clear_young(vma, addr, ptep);
> > +             else {
> > +                     pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
> > +                     if (flags & CYDP_CLEAR_YOUNG)
> > +                             pte = pte_mkold(pte);
> > +                     if (flags & CYDP_CLEAR_DIRTY)
> > +                             pte = pte_mkclean(pte);
> > +                     set_pte_at(vma->vm_mm, addr, ptep, pte);
> > +             }
> > +             if (--nr == 0)
> > +                     break;
> > +             ptep++;
> > +             addr += PAGE_SIZE;
> > +     }
> > +}
>

Hey David,

Thanks for taking the time to review!

> The compiler *might* generate slightly faster code if you check for
> CYDP_CLEAR_YOUNG outside of the loop, so you don't have to recheck on
> each loop iteration.

Nice! I think moving the CYDP_CLEAR_YOUNG check outside of the loop
could speed up the code. I'll make this change in the next version.

>
> For now, nothing to lose sleep about
>
> Acked-by: David Hildenbrand <david@redhat.com>

Thanks again for the review!

Best,
Lance

>
> --
> Cheers,
>
> David / dhildenb
>


* Re: [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper
  2024-04-16 16:25   ` Ryan Roberts
@ 2024-04-17  4:13     ` Lance Yang
  0 siblings, 0 replies; 18+ messages in thread
From: Lance Yang @ 2024-04-17  4:13 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: akpm, david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On Wed, Apr 17, 2024 at 12:25 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 16/04/2024 04:34, Lance Yang wrote:
> > This commit introduces clear_young_dirty_ptes() to replace mkold_ptes().
> > By doing so, we can use the same function for both use cases
> > (madvise_pageout and madvise_free), and it also provides the flexibility
> > to only clear the dirty flag in the future if needed.
> >
> > Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
> > Signed-off-by: Lance Yang <ioworker0@gmail.com>
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

Hey Ryan,

Thanks for taking the time to review!

Best,
Lance

>
> > ---
> >  include/linux/mm_types.h |  9 +++++
> >  include/linux/pgtable.h  | 74 ++++++++++++++++++++++++----------------
> >  mm/madvise.c             |  3 +-
> >  3 files changed, 55 insertions(+), 31 deletions(-)
> >
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index c432add95913..28822cd65d2a 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -1367,6 +1367,15 @@ enum fault_flag {
> >
> >  typedef unsigned int __bitwise zap_flags_t;
> >
> > +/* Flags for clear_young_dirty_ptes(). */
> > +typedef int __bitwise cydp_t;
> > +
> > +/* Clear the access bit */
> > +#define CYDP_CLEAR_YOUNG             ((__force cydp_t)BIT(0))
> > +
> > +/* Clear the dirty bit */
> > +#define CYDP_CLEAR_DIRTY             ((__force cydp_t)BIT(1))
> > +
> >  /*
> >   * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
> >   * other. Here is what they mean, and how to use them:
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index e2f45e22a6d1..18019f037bae 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -361,36 +361,6 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> >  }
> >  #endif
> >
> > -#ifndef mkold_ptes
> > -/**
> > - * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old.
> > - * @vma: VMA the pages are mapped into.
> > - * @addr: Address the first page is mapped at.
> > - * @ptep: Page table pointer for the first entry.
> > - * @nr: Number of entries to mark old.
> > - *
> > - * May be overridden by the architecture; otherwise, implemented as a simple
> > - * loop over ptep_test_and_clear_young().
> > - *
> > - * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> > - * some PTEs might be write-protected.
> > - *
> > - * Context: The caller holds the page table lock.  The PTEs map consecutive
> > - * pages that belong to the same folio.  The PTEs are all in the same PMD.
> > - */
> > -static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
> > -             pte_t *ptep, unsigned int nr)
> > -{
> > -     for (;;) {
> > -             ptep_test_and_clear_young(vma, addr, ptep);
> > -             if (--nr == 0)
> > -                     break;
> > -             ptep++;
> > -             addr += PAGE_SIZE;
> > -     }
> > -}
> > -#endif
> > -
> >  #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
> >  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
> >  static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> > @@ -489,6 +459,50 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> >  }
> >  #endif
> >
> > +#ifndef clear_young_dirty_ptes
> > +/**
> > + * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
> > + *           same folio as old/clean.
> > + * @vma: VMA the pages are mapped into.
> > + * @addr: Address the first page is mapped at.
> > + * @ptep: Page table pointer for the first entry.
> > + * @nr: Number of entries to mark old/clean.
> > + * @flags: Flags to modify the PTE batch semantics.
> > + *
> > + * May be overridden by the architecture; otherwise, implemented by
> > + * get_and_clear/modify/set for each pte in the range.
> > + *
> > + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> > + * some PTEs might be write-protected.
> > + *
> > + * Context: The caller holds the page table lock.  The PTEs map consecutive
> > + * pages that belong to the same folio.  The PTEs are all in the same PMD.
> > + */
> > +static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> > +                                       unsigned long addr, pte_t *ptep,
> > +                                       unsigned int nr, cydp_t flags)
> > +{
> > +     pte_t pte;
> > +
> > +     for (;;) {
> > +             if (flags == CYDP_CLEAR_YOUNG)
> > +                     ptep_test_and_clear_young(vma, addr, ptep);
> > +             else {
> > +                     pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
> > +                     if (flags & CYDP_CLEAR_YOUNG)
> > +                             pte = pte_mkold(pte);
> > +                     if (flags & CYDP_CLEAR_DIRTY)
> > +                             pte = pte_mkclean(pte);
> > +                     set_pte_at(vma->vm_mm, addr, ptep, pte);
> > +             }
> > +             if (--nr == 0)
> > +                     break;
> > +             ptep++;
> > +             addr += PAGE_SIZE;
> > +     }
> > +}
> > +#endif
> > +
> >  static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> >                             pte_t *ptep)
> >  {
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index f59169888b8e..edb592adb749 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -507,7 +507,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >                       continue;
> >
> >               if (!pageout && pte_young(ptent)) {
> > -                     mkold_ptes(vma, addr, pte, nr);
> > +                     clear_young_dirty_ptes(vma, addr, pte, nr,
> > +                                            CYDP_CLEAR_YOUNG);
> >                       tlb_remove_tlb_entries(tlb, pte, nr, addr);
> >               }
> >
>


* Re: [PATCH v7 2/3] mm/arm64: override clear_young_dirty_ptes() batch helper
  2024-04-16 16:29   ` Ryan Roberts
@ 2024-04-17  4:16     ` Lance Yang
  0 siblings, 0 replies; 18+ messages in thread
From: Lance Yang @ 2024-04-17  4:16 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: akpm, david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On Wed, Apr 17, 2024 at 12:29 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 16/04/2024 04:34, Lance Yang wrote:
> > The per-pte get_and_clear/modify/set approach would result in
> > unfolding/refolding for contpte mappings on arm64. So we need
> > to override clear_young_dirty_ptes() for arm64 to avoid it.
> >
> > Suggested-by: David Hildenbrand <david@redhat.com>
> > Suggested-by: Barry Song <21cnbao@gmail.com>
> > Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
> > Signed-off-by: Lance Yang <ioworker0@gmail.com>
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

Hey Ryan,

Thanks for taking the time to review!

Best,
Lance

>
> > ---
> >  arch/arm64/include/asm/pgtable.h | 55 ++++++++++++++++++++++++++++++++
> >  arch/arm64/mm/contpte.c          | 29 +++++++++++++++++
> >  2 files changed, 84 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 9fd8613b2db2..1303d30287dc 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -1223,6 +1223,46 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
> >               __ptep_set_wrprotect(mm, address, ptep);
> >  }
> >
> > +static inline void __clear_young_dirty_pte(struct vm_area_struct *vma,
> > +                                        unsigned long addr, pte_t *ptep,
> > +                                        pte_t pte, cydp_t flags)
> > +{
> > +     pte_t old_pte;
> > +
> > +     do {
> > +             old_pte = pte;
> > +
> > +             if (flags & CYDP_CLEAR_YOUNG)
> > +                     pte = pte_mkold(pte);
> > +             if (flags & CYDP_CLEAR_DIRTY)
> > +                     pte = pte_mkclean(pte);
> > +
> > +             pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
> > +                                            pte_val(old_pte), pte_val(pte));
> > +     } while (pte_val(pte) != pte_val(old_pte));
> > +}
> > +
> > +static inline void __clear_young_dirty_ptes(struct vm_area_struct *vma,
> > +                                         unsigned long addr, pte_t *ptep,
> > +                                         unsigned int nr, cydp_t flags)
> > +{
> > +     pte_t pte;
> > +
> > +     for (;;) {
> > +             pte = __ptep_get(ptep);
> > +
> > +             if (flags == (CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY))
> > +                     __set_pte(ptep, pte_mkclean(pte_mkold(pte)));
> > +             else
> > +                     __clear_young_dirty_pte(vma, addr, ptep, pte, flags);
> > +
> > +             if (--nr == 0)
> > +                     break;
> > +             ptep++;
> > +             addr += PAGE_SIZE;
> > +     }
> > +}
> > +
> >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >  #define __HAVE_ARCH_PMDP_SET_WRPROTECT
> >  static inline void pmdp_set_wrprotect(struct mm_struct *mm,
> > @@ -1379,6 +1419,9 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
> >  extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> >                               unsigned long addr, pte_t *ptep,
> >                               pte_t entry, int dirty);
> > +extern void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
> > +                             unsigned long addr, pte_t *ptep,
> > +                             unsigned int nr, cydp_t flags);
> >
> >  static __always_inline void contpte_try_fold(struct mm_struct *mm,
> >                               unsigned long addr, pte_t *ptep, pte_t pte)
> > @@ -1603,6 +1646,17 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> >       return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
> >  }
> >
> > +#define clear_young_dirty_ptes clear_young_dirty_ptes
> > +static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> > +                                       unsigned long addr, pte_t *ptep,
> > +                                       unsigned int nr, cydp_t flags)
> > +{
> > +     if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
> > +             __clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
> > +     else
> > +             contpte_clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
> > +}
> > +
> >  #else /* CONFIG_ARM64_CONTPTE */
> >
> >  #define ptep_get                             __ptep_get
> > @@ -1622,6 +1676,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> >  #define wrprotect_ptes                               __wrprotect_ptes
> >  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> >  #define ptep_set_access_flags                        __ptep_set_access_flags
> > +#define clear_young_dirty_ptes                       __clear_young_dirty_ptes
> >
> >  #endif /* CONFIG_ARM64_CONTPTE */
> >
> > diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> > index 1b64b4c3f8bf..9f9486de0004 100644
> > --- a/arch/arm64/mm/contpte.c
> > +++ b/arch/arm64/mm/contpte.c
> > @@ -361,6 +361,35 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
> >  }
> >  EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
> >
> > +void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
> > +                                 unsigned long addr, pte_t *ptep,
> > +                                 unsigned int nr, cydp_t flags)
> > +{
> > +     /*
> > +      * We can safely clear access/dirty without needing to unfold from
> > +      * the architectures perspective, even when contpte is set. If the
> > +      * range starts or ends midway through a contpte block, we can just
> > +      * expand to include the full contpte block. While this is not
> > +      * exactly what the core-mm asked for, it tracks access/dirty per
> > +      * folio, not per page. And since we only create a contpte block
> > +      * when it is covered by a single folio, we can get away with
> > +      * clearing access/dirty for the whole block.
> > +      */
> > +     unsigned long start = addr;
> > +     unsigned long end = start + nr * PAGE_SIZE;
> > +
> > +     if (pte_cont(__ptep_get(ptep + nr - 1)))
> > +             end = ALIGN(end, CONT_PTE_SIZE);
> > +
> > +     if (pte_cont(__ptep_get(ptep))) {
> > +             start = ALIGN_DOWN(start, CONT_PTE_SIZE);
> > +             ptep = contpte_align_down(ptep);
> > +     }
> > +
> > +     __clear_young_dirty_ptes(vma, start, ptep, (end - start) / PAGE_SIZE, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
> > +
> >  int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> >                                       unsigned long addr, pte_t *ptep,
> >                                       pte_t entry, int dirty)
>
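
As a worked illustration of the contpte range expansion in the hunk above
(the addresses are made up, and a 4KiB page size is assumed, so a contpte
block is 16 PTEs and CONT_PTE_SIZE is 64KiB):

	addr = 0x23000, nr = 4            ->  start = 0x23000, end = 0x27000
	last PTE sits in a contpte block  ->  end   = ALIGN(end, 64KiB)        = 0x30000
	first PTE sits in a contpte block ->  start = ALIGN_DOWN(start, 64KiB) = 0x20000
	                                      ptep  = contpte_align_down(ptep)

so all 16 PTEs of the block get their access/dirty bits cleared, which is
fine because, as the comment says, these bits are tracked per folio, not
per page.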


* Re: [PATCH v7 3/3] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-16 16:52     ` David Hildenbrand
@ 2024-04-17  4:35       ` Lance Yang
  0 siblings, 0 replies; 18+ messages in thread
From: Lance Yang @ 2024-04-17  4:35 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Ryan Roberts, akpm, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel

Hey Ryan, David,

Thanks for taking the time to review!

On Wed, Apr 17, 2024 at 12:52 AM David Hildenbrand <david@redhat.com> wrote:
>
> >> +                    nr = madvise_folio_pte_batch(addr, end, folio, pte,
> >> +                                                 ptent, &any_young, &any_dirty);
> >> +
> >> +                    if (nr < folio_nr_pages(folio)) {
> >> +                            if (folio_likely_mapped_shared(folio))
> >> +                                    continue;
> >> +
> >> +                            arch_leave_lazy_mmu_mode();
> >> +                            if (madvise_pte_split_folio(mm, pmd, addr,
> >> +                                                        folio, &start_pte, &ptl))
> >> +                                    nr = 0;
> >> +                            if (!start_pte)
> >> +                                    break;
> >> +                            pte = start_pte;
> >> +                            arch_enter_lazy_mmu_mode();
> >> +                            continue;
> >> +                    }
> >> +
> >> +                    if (any_young)
> >> +                            ptent = pte_mkyoung(ptent);
> >> +                    if (any_dirty)
> >> +                            ptent = pte_mkdirty(ptent);
> >>              }
> >>
> >> +            if (folio_mapcount(folio) != folio_nr_pages(folio))
> >> +                    continue;
> >
> > Why is this here? I thought we had previously concluded to only do this test
> > inside the below if statement (where you have it duplicated).

My bad for this mistake - sorry!

>
> I stumbled over the same thing while reviewing. It's not exactly a
> duplicate, because it's unreliable without the folio lock. It looks more
> like a best-effort early check.
>
> But then, we also add it to cases where we previously wouldn't check the
> mapcount at all: when the folio was added to the swapcache or is already
> dirty.
>
> In that case, we would even see a change for order-0 folios with that
> new check.

Thanks for pointing that out! I'll remove this check here in the next version.

I overlooked that this is a new check for order-0 folios :(

Thanks,
Lance

>
> --
> Cheers,
>
> David / dhildenb
>


* Re: [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper
  2024-04-17  4:12     ` Lance Yang
@ 2024-04-17  5:04       ` Lance Yang
  2024-04-17  8:19         ` David Hildenbrand
  0 siblings, 1 reply; 18+ messages in thread
From: Lance Yang @ 2024-04-17  5:04 UTC (permalink / raw)
  To: ioworker0
  Cc: 21cnbao, akpm, david, fengwei.yin, linux-kernel, linux-mm,
	mhocko, minchan, peterx, ryan.roberts, shy828301, songmuchun,
	wangkefeng.wang, xiehuan09, zokeefe

Hey David, Ryan,

How about this change?

static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
					  unsigned long addr, pte_t *ptep,
					  unsigned int nr, cydp_t flags)
{
	if (flags == CYDP_CLEAR_YOUNG) {
		for (;;) {
			ptep_test_and_clear_young(vma, addr, ptep);
			if (--nr == 0)
				break;
			ptep++;
			addr += PAGE_SIZE;
		}
		return;
	}

	pte_t pte;

	for (;;) {
		pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);

		if (flags & CYDP_CLEAR_YOUNG)
			pte = pte_mkold(pte);
		if (flags & CYDP_CLEAR_DIRTY)
			pte = pte_mkclean(pte);
		set_pte_at(vma->vm_mm, addr, ptep, pte);

		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}
}

Thanks,
Lance


* Re: [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper
  2024-04-17  5:04       ` Lance Yang
@ 2024-04-17  8:19         ` David Hildenbrand
  2024-04-17  9:01           ` Lance Yang
  0 siblings, 1 reply; 18+ messages in thread
From: David Hildenbrand @ 2024-04-17  8:19 UTC (permalink / raw)
  To: Lance Yang
  Cc: 21cnbao, akpm, fengwei.yin, linux-kernel, linux-mm, mhocko,
	minchan, peterx, ryan.roberts, shy828301, songmuchun,
	wangkefeng.wang, xiehuan09, zokeefe

On 17.04.24 07:04, Lance Yang wrote:
> Hey David, Ryan,
> 
> How about this change?
> 
> static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> 					  unsigned long addr, pte_t *ptep,
> 					  unsigned int nr, cydp_t flags)
> {
> 	if (flags == CYDP_CLEAR_YOUNG) {
> 		for (;;) {
> 			ptep_test_and_clear_young(vma, addr, ptep);
> 			if (--nr == 0)
> 				break;
> 			ptep++;
> 			addr += PAGE_SIZE;
> 		}
> 		return;
> 	}
> 
> 	pte_t pte;
> 
> 	for (;;) {
> 		pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
> 
> 		if (flags & CYDP_CLEAR_YOUNG)
> 			pte = pte_mkold(pte);
> 		if (flags & CYDP_CLEAR_DIRTY)
> 			pte = pte_mkclean(pte);
> 
> 		if (--nr == 0)
> 			break;
> 		ptep++;
> 		addr += PAGE_SIZE;
> 	}
> }

Likely it might be best to just KIS for now and leave it as is. The 
compiler should optimize out based on flags already, that's what I ignored.

-- 
Cheers,

David / dhildenb



* Re: [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper
  2024-04-17  8:19         ` David Hildenbrand
@ 2024-04-17  9:01           ` Lance Yang
  2024-04-17 10:53             ` Ryan Roberts
  0 siblings, 1 reply; 18+ messages in thread
From: Lance Yang @ 2024-04-17  9:01 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: 21cnbao, akpm, fengwei.yin, linux-kernel, linux-mm, mhocko,
	minchan, peterx, ryan.roberts, shy828301, songmuchun,
	wangkefeng.wang, xiehuan09, zokeefe

On Wed, Apr 17, 2024 at 4:19 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 17.04.24 07:04, Lance Yang wrote:
> > Hey David, Ryan,
> >
> > How about this change?
> >
> > static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> >                                         unsigned long addr, pte_t *ptep,
> >                                         unsigned int nr, cydp_t flags)
> > {
> >       if (flags == CYDP_CLEAR_YOUNG) {
> >               for (;;) {
> >                       ptep_test_and_clear_young(vma, addr, ptep);
> >                       if (--nr == 0)
> >                               break;
> >                       ptep++;
> >                       addr += PAGE_SIZE;
> >               }
> >               return;
> >       }
> >
> >       pte_t pte;
> >
> >       for (;;) {
> >               pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
> >
> >               if (flags & CYDP_CLEAR_YOUNG)
> >                       pte = pte_mkold(pte);
> >               if (flags & CYDP_CLEAR_DIRTY)
> >                       pte = pte_mkclean(pte);
> >
> >               if (--nr == 0)
> >                       break;
> >               ptep++;
> >               addr += PAGE_SIZE;
> >       }
> > }
>
> Likely it might be best to just KIS for now and leave it as is. The
> compiler should optimize out based on flags already, that's what I ignored.

Got it. Let's keep it as is for now :)

Thanks,
Lance

>
> --
> Cheers,
>
> David / dhildenb
>


* Re: [PATCH v7 1/3] mm/madvise: introduce clear_young_dirty_ptes() batch helper
  2024-04-17  9:01           ` Lance Yang
@ 2024-04-17 10:53             ` Ryan Roberts
  0 siblings, 0 replies; 18+ messages in thread
From: Ryan Roberts @ 2024-04-17 10:53 UTC (permalink / raw)
  To: Lance Yang, David Hildenbrand
  Cc: 21cnbao, akpm, fengwei.yin, linux-kernel, linux-mm, mhocko,
	minchan, peterx, shy828301, songmuchun, wangkefeng.wang,
	xiehuan09, zokeefe

On 17/04/2024 10:01, Lance Yang wrote:
> On Wed, Apr 17, 2024 at 4:19 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 17.04.24 07:04, Lance Yang wrote:
>>> Hey David, Ryan,
>>>
>>> How about this change?
>>>
>>> static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
>>>                                         unsigned long addr, pte_t *ptep,
>>>                                         unsigned int nr, cydp_t flags)
>>> {
>>>       if (flags == CYDP_CLEAR_YOUNG) {
>>>               for (;;) {
>>>                       ptep_test_and_clear_young(vma, addr, ptep);
>>>                       if (--nr == 0)
>>>                               break;
>>>                       ptep++;
>>>                       addr += PAGE_SIZE;
>>>               }
>>>               return;
>>>       }
>>>
>>>       pte_t pte;
>>>
>>>       for (;;) {
>>>               pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
>>>
>>>               if (flags & CYDP_CLEAR_YOUNG)
>>>                       pte = pte_mkold(pte);
>>>               if (flags & CYDP_CLEAR_DIRTY)
>>>                       pte = pte_mkclean(pte);
>>>
>>>               if (--nr == 0)
>>>                       break;
>>>               ptep++;
>>>               addr += PAGE_SIZE;
>>>       }
>>> }
>>
>> Likely it might be best to just KIS for now and leave it as is. The
>> compiler should optimize out based on flags already, that's what I ignored.
> 
> Got it. Let's keep it as is for now :)

Yep agreed; you're passing the flags as constants so the compiler should be
completely removing one half of the conditional. And if the input isn't a
constant, I'd still expect the compiler to hoist the conditional out of the loop.
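
As a rough illustration (toy code only; the helper and flag names below are
made up for the example and are not kernel APIs):

#define DEMO_CLEAR_YOUNG	(1u << 0)
#define DEMO_CLEAR_DIRTY	(1u << 1)

/* Pretend bit 0 of each entry is "young" and bit 1 is "dirty". */
static inline void demo_clear_flags(unsigned long *entries, unsigned int nr,
				    unsigned int flags)
{
	for (unsigned int i = 0; i < nr; i++) {
		if (flags & DEMO_CLEAR_YOUNG)
			entries[i] &= ~(1ul << 0);
		if (flags & DEMO_CLEAR_DIRTY)
			entries[i] &= ~(1ul << 1);
	}
}

With a constant argument like demo_clear_flags(e, n, DEMO_CLEAR_YOUNG), the
inlined copy keeps only the bit-0 clear and both flag tests disappear; with
a non-constant flags value the tests are loop-invariant, so they can still
be hoisted out of the loop.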

> 
> Thanks,
> Lance
> 
>>
>> --
>> Cheers,
>>
>> David / dhildenb
>>


