* [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
@ 2024-04-08  4:24 Lance Yang
  2024-04-08  4:24 ` [PATCH v5 1/2] mm/madvise: optimize " Lance Yang
                   ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: Lance Yang @ 2024-04-08  4:24 UTC (permalink / raw)
  To: akpm
  Cc: ryan.roberts, david, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel, Lance Yang

Hi All,

This patchset adds support for lazyfreeing multi-size THP (mTHP) without
needing to first split the large folio via split_folio(). However, we
still need to split a large folio that is not fully mapped within the
target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. But note that
the behavior is changed; previously, any failure of this sort would cause
the entire operation to give up. As large folios become more common,
sticking to the old way could result in wasted opportunities.

Performance Testing
===================

On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
the same size results in the following runtimes for madvise(MADV_FREE)
in seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%
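
For reference, the test is roughly the following (a minimal sketch rather than
the exact harness; the mTHP folio size is selected beforehand via the
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled knobs):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define SIZE	(1UL << 30)	/* 1GiB VMA */

int main(void)
{
	struct timespec t0, t1;
	char *buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	/* Fault in the whole range so it is backed by large folios. */
	memset(buf, 1, SIZE);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	madvise(buf, SIZE, MADV_FREE);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%f\n", (t1.tv_sec - t0.tv_sec) +
		       (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}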

---
This patchset applies against mm-unstable (f43b3aae9451). 

The performance numbers are from v2. I did a quick benchmark run of v5 and
nothing significantly changed.

Changes since v4 [4]
====================
 - The first patch implements the MADV_FREE change and introduces
   mkold_clean_ptes() with a generic implementation. The second patch
   specializes mkold_clean_ptes() for arm64, providing a performance boost
   specific to arm64 (per Ryan Roberts)
 - Drop the full parameter and call ptep_get_and_clear() in mkold_clean_ptes()
   (per Ryan Roberts)
 - Keep the previous behavior that avoids locking the folio if it wasn't in the
   swapcache or if it wasn't dirty (per Ryan Roberts)

Changes since v3 [3]
====================
 - Rename refresh_full_ptes -> mkold_clean_ptes (per Ryan Roberts)
 - Override mkold_clean_ptes() for arm64 to make it faster (per Ryan Roberts)
 - Update the changelog

Changes since v2 [2]
====================
 - Only skip all the PTEs for nr_pages when the number of batched PTEs matches
   nr_pages (per Barry Song)
 - Change folio_pte_batch() to take optional *any_dirty and *any_young
   parameters (per David Hildenbrand)
 - Move the ptep_get_and_clear_full() loop into refresh_full_ptes() (per
   David Hildenbrand)
 - Follow a similar pattern for madvise_free_pte_range() (per Ryan Roberts)

Changes since v1 [1]
====================
 - Update the performance numbers
 - Update the changelog (per Ryan Roberts)
 - Check the COW folio (per Yin Fengwei)
 - Check if we are mapping all subpages (per Barry Song, David Hildenbrand,
   Ryan Roberts)

[1] https://lore.kernel.org/linux-mm/20240225123215.86503-1-ioworker0@gmail.com
[2] https://lore.kernel.org/linux-mm/20240307061425.21013-1-ioworker0@gmail.com
[3] https://lore.kernel.org/linux-mm/20240316102952.39233-1-ioworker0@gmail.com
[4] https://lore.kernel.org/linux-mm/20240402124029.47846-1-ioworker0@gmail.com

Thanks,
Lance

Lance Yang (2):
 mm/madvise: optimize lazyfreeing with mTHP in madvise_free
 mm/arm64: override mkold_clean_ptes() batch helper

 arch/arm64/include/asm/pgtable.h |  57 +++++++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c          |  15 +++++++++
 include/linux/pgtable.h          |  35 ++++++++++++++++++++
 mm/internal.h                    |  12 +++++--
 mm/madvise.c                     | 149 +++++++++++++++++++++++++++++++++++----
 mm/memory.c                      |   4 +--
 6 files changed, 202 insertions(+), 70 deletions(-)

-- 
2.33.1


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-08  4:24 [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Lance Yang
@ 2024-04-08  4:24 ` Lance Yang
  2024-04-11 11:11   ` Ryan Roberts
  2024-04-08  4:24 ` [PATCH v5 2/2] mm/arm64: override mkold_clean_ptes() batch helper Lance Yang
  2024-04-10 21:50 ` [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Andrew Morton
  2 siblings, 1 reply; 21+ messages in thread
From: Lance Yang @ 2024-04-08  4:24 UTC (permalink / raw)
  To: akpm
  Cc: ryan.roberts, david, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel, Lance Yang

This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
(Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
splitting if the large folio is fully mapped within the target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. But note that
the behavior is changed; previously, any failure of this sort would cause
the entire operation to give up. As large folios become more common,
sticking to the old way could result in wasted opportunities.

On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
the same size results in the following runtimes for madvise(MADV_FREE) in
seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%

[1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
[2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com

Signed-off-by: Lance Yang <ioworker0@gmail.com>
---
 include/linux/pgtable.h |  34 +++++++++
 mm/internal.h           |  12 +++-
 mm/madvise.c            | 149 ++++++++++++++++++++++------------------
 mm/memory.c             |   4 +-
 4 files changed, 129 insertions(+), 70 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 0f4b2faa1d71..4dd442787420 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 }
 #endif
 
+#ifndef mkold_clean_ptes
+/**
+ * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
+ *		as old and clean.
+ * @mm: Address space the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old and clean.
+ *
+ * May be overridden by the architecture; otherwise, implemented by
+ * get_and_clear/modify/set for each pte in the range.
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock.  The PTEs map consecutive
+ * pages that belong to the same folio.  The PTEs are all in the same PMD.
+ */
+static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+				    pte_t *ptep, unsigned int nr)
+{
+	pte_t pte;
+
+	for (;;) {
+		pte = ptep_get_and_clear(mm, addr, ptep);
+		set_pte_at(mm, addr, ptep, pte_mkclean(pte_mkold(pte)));
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+	}
+}
+#endif
+
 static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep)
 {
diff --git a/mm/internal.h b/mm/internal.h
index 57c1055d5568..792a9baf0d14 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -132,6 +132,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  *		  first one is writable.
  * @any_young: Optional pointer to indicate whether any entry except the
  *		  first one is young.
+ * @any_dirty: Optional pointer to indicate whether any entry except the
+ *		  first one is dirty.
  *
  * Detect a PTE batch: consecutive (present) PTEs that map consecutive
  * pages of the same large folio.
@@ -147,18 +149,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  */
 static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
-		bool *any_writable, bool *any_young)
+		bool *any_writable, bool *any_young, bool *any_dirty)
 {
 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
 	const pte_t *end_ptep = start_ptep + max_nr;
 	pte_t expected_pte, *ptep;
-	bool writable, young;
+	bool writable, young, dirty;
 	int nr;
 
 	if (any_writable)
 		*any_writable = false;
 	if (any_young)
 		*any_young = false;
+	if (any_dirty)
+		*any_dirty = false;
 
 	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
 	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
@@ -174,6 +178,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			writable = !!pte_write(pte);
 		if (any_young)
 			young = !!pte_young(pte);
+		if (any_dirty)
+			dirty = !!pte_dirty(pte);
 		pte = __pte_batch_clear_ignored(pte, flags);
 
 		if (!pte_same(pte, expected_pte))
@@ -191,6 +197,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			*any_writable |= writable;
 		if (any_young)
 			*any_young |= young;
+		if (any_dirty)
+			*any_dirty |= dirty;
 
 		nr = pte_batch_hint(ptep, pte);
 		expected_pte = pte_advance_pfn(expected_pte, nr);
diff --git a/mm/madvise.c b/mm/madvise.c
index bf26cf2b7715..0777df2e3691 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
 	       file_permission(vma->vm_file, MAY_WRITE) == 0;
 }
 
+static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
+					  struct folio *folio, pte_t *ptep,
+					  pte_t pte, bool *any_young,
+					  bool *any_dirty)
+{
+	int max_nr = (end - addr) / PAGE_SIZE;
+	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+			       any_young, any_dirty);
+}
+
+static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
+					   unsigned long addr,
+					   struct folio *folio, pte_t **pte,
+					   spinlock_t **ptl)
+{
+	int err;
+
+	if (!folio_trylock(folio))
+		return false;
+
+	folio_get(folio);
+	pte_unmap_unlock(*pte, *ptl);
+	err = split_folio(folio);
+	folio_unlock(folio);
+	folio_put(folio);
+
+	*pte = pte_offset_map_lock(mm, pmd, addr, ptl);
+
+	return err == 0;
+}
+
 static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 				unsigned long addr, unsigned long end,
 				struct mm_walk *walk)
@@ -456,41 +489,29 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
-						FPB_IGNORE_SOFT_DIRTY;
-			int max_nr = (end - addr) / PAGE_SIZE;
 			bool any_young;
-
-			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
-					     fpb_flags, NULL, &any_young);
-			if (any_young)
-				ptent = pte_mkyoung(ptent);
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     ptent, &any_young, NULL);
 
 			if (nr < folio_nr_pages(folio)) {
-				int err;
-
 				if (folio_likely_mapped_shared(folio))
 					continue;
 				if (pageout_anon_only_filter && !folio_test_anon(folio))
 					continue;
-				if (!folio_trylock(folio))
-					continue;
-				folio_get(folio);
+
 				arch_leave_lazy_mmu_mode();
-				pte_unmap_unlock(start_pte, ptl);
-				start_pte = NULL;
-				err = split_folio(folio);
-				folio_unlock(folio);
-				folio_put(folio);
-				start_pte = pte =
-					pte_offset_map_lock(mm, pmd, addr, &ptl);
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
 				if (!start_pte)
 					break;
+				pte = start_pte;
 				arch_enter_lazy_mmu_mode();
-				if (!err)
-					nr = 0;
 				continue;
 			}
+
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
 		}
 
 		/*
@@ -687,47 +708,54 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			continue;
 
 		/*
-		 * If pmd isn't transhuge but the folio is large and
-		 * is owned by only this process, split it and
-		 * deactivate all pages.
+		 * If we encounter a large folio, only split it if it is not
+		 * fully mapped within the range we are operating on. Otherwise
+		 * leave it as is so that it can be marked as lazyfree. If we
+		 * fail to split a folio, leave it in place and advance to the
+		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			int err;
+			bool any_young, any_dirty;
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     ptent, &any_young, &any_dirty);
 
-			if (folio_likely_mapped_shared(folio))
-				break;
-			if (!folio_trylock(folio))
-				break;
-			folio_get(folio);
-			arch_leave_lazy_mmu_mode();
-			pte_unmap_unlock(start_pte, ptl);
-			start_pte = NULL;
-			err = split_folio(folio);
+			if (nr < folio_nr_pages(folio)) {
+				if (folio_likely_mapped_shared(folio))
+					continue;
+
+				arch_leave_lazy_mmu_mode();
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
+				if (!start_pte)
+					break;
+				pte = start_pte;
+				arch_enter_lazy_mmu_mode();
+				continue;
+			}
+
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
+			if (any_dirty)
+				ptent = pte_mkdirty(ptent);
+		}
+
+		if (!folio_trylock(folio))
+			continue;
+		/*
+		 * If we have a large folio at this point, we know it is fully mapped
+		 * so if its mapcount is the same as its number of pages, it must be
+		 * exclusive.
+		 */
+		if (folio_mapcount(folio) != folio_nr_pages(folio)) {
 			folio_unlock(folio);
-			folio_put(folio);
-			if (err)
-				break;
-			start_pte = pte =
-				pte_offset_map_lock(mm, pmd, addr, &ptl);
-			if (!start_pte)
-				break;
-			arch_enter_lazy_mmu_mode();
-			pte--;
-			addr -= PAGE_SIZE;
 			continue;
 		}
+		folio_unlock(folio);
 
 		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
 			if (!folio_trylock(folio))
 				continue;
-			/*
-			 * If folio is shared with others, we mustn't clear
-			 * the folio's dirty flag.
-			 */
-			if (folio_mapcount(folio) != 1) {
-				folio_unlock(folio);
-				continue;
-			}
 
 			if (folio_test_swapcache(folio) &&
 			    !folio_free_swap(folio)) {
@@ -740,19 +768,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		}
 
 		if (pte_young(ptent) || pte_dirty(ptent)) {
-			/*
-			 * Some of architecture(ex, PPC) don't update TLB
-			 * with set_pte_at and tlb_remove_tlb_entry so for
-			 * the portability, remap the pte with old|clean
-			 * after pte clearing.
-			 */
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
-
-			ptent = pte_mkold(ptent);
-			ptent = pte_mkclean(ptent);
-			set_pte_at(mm, addr, pte, ptent);
-			tlb_remove_tlb_entry(tlb, pte, addr);
+			mkold_clean_ptes(mm, addr, pte, nr);
+			tlb_remove_tlb_entries(tlb, pte, nr, addr);
 		}
 		folio_mark_lazyfree(folio);
 	}
diff --git a/mm/memory.c b/mm/memory.c
index 1723c8ddf9cb..fe9d4d64c627 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 			flags |= FPB_IGNORE_SOFT_DIRTY;
 
 		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
-				     &any_writable, NULL);
+				     &any_writable, NULL, NULL);
 		folio_ref_add(folio, nr);
 		if (folio_test_anon(folio)) {
 			if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
@@ -1559,7 +1559,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 	 */
 	if (unlikely(folio_test_large(folio) && max_nr != 1)) {
 		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
-				     NULL, NULL);
+				     NULL, NULL, NULL);
 
 		zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
 				       addr, details, rss, force_flush,
-- 
2.33.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v5 2/2] mm/arm64: override mkold_clean_ptes() batch helper
  2024-04-08  4:24 [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Lance Yang
  2024-04-08  4:24 ` [PATCH v5 1/2] mm/madvise: optimize " Lance Yang
@ 2024-04-08  4:24 ` Lance Yang
  2024-04-11 13:17   ` Ryan Roberts
  2024-04-10 21:50 ` [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Andrew Morton
  2 siblings, 1 reply; 21+ messages in thread
From: Lance Yang @ 2024-04-08  4:24 UTC (permalink / raw)
  To: akpm
  Cc: ryan.roberts, david, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel, Lance Yang

The per-pte get_and_clear/modify/set approach would result in
unfolding/refolding for contpte mappings on arm64. So we need
to override mkold_clean_ptes() for arm64 to avoid it.

Suggested-by: David Hildenbrand <david@redhat.com>
Suggested-by: Barry Song <21cnbao@gmail.com>
Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Lance Yang <ioworker0@gmail.com>
---
 arch/arm64/include/asm/pgtable.h | 55 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c          | 15 +++++++++
 2 files changed, 70 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 9fd8613b2db2..395754638a9a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1223,6 +1223,34 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
 		__ptep_set_wrprotect(mm, address, ptep);
 }
 
+static inline void ___ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
+				       pte_t *ptep, pte_t pte)
+{
+	pte_t old_pte;
+
+	do {
+		old_pte = pte;
+		pte = pte_mkclean(pte_mkold(pte));
+		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
+					       pte_val(old_pte), pte_val(pte));
+	} while (pte_val(pte) != pte_val(old_pte));
+}
+
+static inline void __ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
+				      pte_t *ptep)
+{
+	___ptep_mkold_clean(mm, addr, ptep, __ptep_get(ptep));
+}
+
+static inline void __mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+				      pte_t *ptep, unsigned int nr)
+{
+	unsigned int i;
+
+	for (i = 0; i < nr; i++, addr += PAGE_SIZE, ptep++)
+		__ptep_mkold_clean(mm, addr, ptep);
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
@@ -1379,6 +1407,8 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t entry, int dirty);
+extern void contpte_mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+				pte_t *ptep, unsigned int nr);
 
 static __always_inline void contpte_try_fold(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep, pte_t pte)
@@ -1603,6 +1633,30 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 	return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
 }
 
+#define mkold_clean_ptes mkold_clean_ptes
+static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+				    pte_t *ptep, unsigned int nr)
+{
+	if (likely(nr == 1)) {
+		/*
+		 * Optimization: mkold_clean_ptes() can only be called for present
+		 * ptes so we only need to check contig bit as condition for unfold,
+		 * and we can remove the contig bit from the pte we read to avoid
+		 * re-reading. This speeds up madvise(MADV_FREE) which is sensitive
+		 * for order-0 folios. Equivalent to contpte_try_unfold().
+		 */
+		pte_t orig_pte = __ptep_get(ptep);
+
+		if (unlikely(pte_cont(orig_pte))) {
+			__contpte_try_unfold(mm, addr, ptep, orig_pte);
+			orig_pte = pte_mknoncont(orig_pte);
+		}
+		___ptep_mkold_clean(mm, addr, ptep, orig_pte);
+	} else {
+		contpte_mkold_clean_ptes(mm, addr, ptep, nr);
+	}
+}
+
 #else /* CONFIG_ARM64_CONTPTE */
 
 #define ptep_get				__ptep_get
@@ -1622,6 +1676,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 #define wrprotect_ptes				__wrprotect_ptes
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 #define ptep_set_access_flags			__ptep_set_access_flags
+#define mkold_clean_ptes			__mkold_clean_ptes
 
 #endif /* CONFIG_ARM64_CONTPTE */
 
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 1b64b4c3f8bf..dbff9c5e9eff 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -361,6 +361,21 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
 
+void contpte_mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep, unsigned int nr)
+{
+	/*
+	 * If clearing the young and dirty bits for an entire contig range, we can
+	 * avoid unfolding. Just set old/clean and wait for the later mmu_gather
+	 * flush to invalidate the tlb. If it's a partial range though, we need to
+	 * unfold.
+	 */
+
+	contpte_try_unfold_partial(mm, addr, ptep, nr);
+	__mkold_clean_ptes(mm, addr, ptep, nr);
+}
+EXPORT_SYMBOL_GPL(contpte_mkold_clean_ptes);
+
 int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 					unsigned long addr, pte_t *ptep,
 					pte_t entry, int dirty)
-- 
2.33.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
  2024-04-08  4:24 [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Lance Yang
  2024-04-08  4:24 ` [PATCH v5 1/2] mm/madvise: optimize " Lance Yang
  2024-04-08  4:24 ` [PATCH v5 2/2] mm/arm64: override mkold_clean_ptes() batch helper Lance Yang
@ 2024-04-10 21:50 ` Andrew Morton
  2024-04-11  5:01   ` Lance Yang
  2 siblings, 1 reply; 21+ messages in thread
From: Andrew Morton @ 2024-04-10 21:50 UTC (permalink / raw)
  To: Lance Yang
  Cc: ryan.roberts, david, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel

On Mon,  8 Apr 2024 12:24:35 +0800 Lance Yang <ioworker0@gmail.com> wrote:

> Hi All,
> 
> This patchset adds support for lazyfreeing multi-size THP (mTHP) without
> needing to first split the large folio via split_folio(). However, we
> still need to split a large folio that is not fully mapped within the
> target range.
> 
> If a large folio is locked or shared, or if we fail to split it, we just
> leave it in place and advance to the next PTE in the range. But note that
> the behavior is changed; previously, any failure of this sort would cause
> the entire operation to give up. As large folios become more common,
> sticking to the old way could result in wasted opportunities.
> 
> Performance Testing
> ===================
> 
> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> the same size results in the following runtimes for madvise(MADV_FREE)
> in seconds (shorter is better):
> 
> Folio Size |   Old    |   New    | Change
> ------------------------------------------
>       4KiB | 0.590251 | 0.590259 |    0%
>      16KiB | 2.990447 | 0.185655 |  -94%
>      32KiB | 2.547831 | 0.104870 |  -95%
>      64KiB | 2.457796 | 0.052812 |  -97%
>     128KiB | 2.281034 | 0.032777 |  -99%
>     256KiB | 2.230387 | 0.017496 |  -99%
>     512KiB | 2.189106 | 0.010781 |  -99%
>    1024KiB | 2.183949 | 0.007753 |  -99%
>    2048KiB | 0.002799 | 0.002804 |    0%

That looks nice but punting work to another thread can slightly
increase overall system load and can mess up utilization accounting by
attributing work to threads which didn't initiate that work.

And there's a corner-case risk where the thread running madvise() has
realtime policy (SCHED_RR/SCHED_FIFO) on a single-CPU system,
preventing any other threads from executing, resulting in indefinitely
deferred freeing resulting in memory squeezes or even OOM conditions.

It would be good if the changelog(s) were to show some consideration of
such matters and some demonstration that the benefits exceed the risks
and costs.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
  2024-04-10 21:50 ` [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Andrew Morton
@ 2024-04-11  5:01   ` Lance Yang
  2024-04-11 10:29     ` Ryan Roberts
  0 siblings, 1 reply; 21+ messages in thread
From: Lance Yang @ 2024-04-11  5:01 UTC (permalink / raw)
  To: Andrew Morton
  Cc: ryan.roberts, david, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel

On Thu, Apr 11, 2024 at 5:50 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon,  8 Apr 2024 12:24:35 +0800 Lance Yang <ioworker0@gmail.com> wrote:
>
> > Hi All,
> >
> > This patchset adds support for lazyfreeing multi-size THP (mTHP) without
> > needing to first split the large folio via split_folio(). However, we
> > still need to split a large folio that is not fully mapped within the
> > target range.
> >
> > If a large folio is locked or shared, or if we fail to split it, we just
> > leave it in place and advance to the next PTE in the range. But note that
> > the behavior is changed; previously, any failure of this sort would cause
> > the entire operation to give up. As large folios become more common,
> > sticking to the old way could result in wasted opportunities.
> >
> > Performance Testing
> > ===================
> >
> > On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> > the same size results in the following runtimes for madvise(MADV_FREE)
> > in seconds (shorter is better):
> >
> > Folio Size |   Old    |   New    | Change
> > ------------------------------------------
> >       4KiB | 0.590251 | 0.590259 |    0%
> >      16KiB | 2.990447 | 0.185655 |  -94%
> >      32KiB | 2.547831 | 0.104870 |  -95%
> >      64KiB | 2.457796 | 0.052812 |  -97%
> >     128KiB | 2.281034 | 0.032777 |  -99%
> >     256KiB | 2.230387 | 0.017496 |  -99%
> >     512KiB | 2.189106 | 0.010781 |  -99%
> >    1024KiB | 2.183949 | 0.007753 |  -99%
> >    2048KiB | 0.002799 | 0.002804 |    0%
>
> That looks nice but punting work to another thread can slightly
> increase overall system load and can mess up utilization accounting by
> attributing work to threads which didn't initiate that work.
>
> And there's a corner-case risk where the thread running madvise() has
> realtime policy (SCHED_RR/SCHED_FIFO) on a single-CPU system,
> preventing any other threads from executing, resulting in indefinitely
> deferred freeing resulting in memory squeezes or even OOM conditions.
>
> It would be good if the changelog(s) were to show some consideration of
> such matters and some demonstration that the benefits exceed the risks
> and costs.
>

Hey Andrew,

Thanks for bringing up these concerns!

I completely agree that we need to consider such matters and include
them in the changelog(s). Additionally, I'll do my best to show that the
benefits exceed the risks and costs, and then update the changelog(s)
accordingly.

Thanks again for your time!
Lance

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
  2024-04-11  5:01   ` Lance Yang
@ 2024-04-11 10:29     ` Ryan Roberts
  0 siblings, 0 replies; 21+ messages in thread
From: Ryan Roberts @ 2024-04-11 10:29 UTC (permalink / raw)
  To: Lance Yang, Andrew Morton
  Cc: david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 11/04/2024 06:01, Lance Yang wrote:
> On Thu, Apr 11, 2024 at 5:50 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>>
>> On Mon,  8 Apr 2024 12:24:35 +0800 Lance Yang <ioworker0@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> This patchset adds support for lazyfreeing multi-size THP (mTHP) without
>>> needing to first split the large folio via split_folio(). However, we
>>> still need to split a large folio that is not fully mapped within the
>>> target range.
>>>
>>> If a large folio is locked or shared, or if we fail to split it, we just
>>> leave it in place and advance to the next PTE in the range. But note that
>>> the behavior is changed; previously, any failure of this sort would cause
>>> the entire operation to give up. As large folios become more common,
>>> sticking to the old way could result in wasted opportunities.
>>>
>>> Performance Testing
>>> ===================
>>>
>>> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
>>> the same size results in the following runtimes for madvise(MADV_FREE)
>>> in seconds (shorter is better):
>>>
>>> Folio Size |   Old    |   New    | Change
>>> ------------------------------------------
>>>       4KiB | 0.590251 | 0.590259 |    0%
>>>      16KiB | 2.990447 | 0.185655 |  -94%
>>>      32KiB | 2.547831 | 0.104870 |  -95%
>>>      64KiB | 2.457796 | 0.052812 |  -97%
>>>     128KiB | 2.281034 | 0.032777 |  -99%
>>>     256KiB | 2.230387 | 0.017496 |  -99%
>>>     512KiB | 2.189106 | 0.010781 |  -99%
>>>    1024KiB | 2.183949 | 0.007753 |  -99%
>>>    2048KiB | 0.002799 | 0.002804 |    0%
>>
>> That looks nice but punting work to another thread can slightly
>> increase overall system load and can mess up utilization accounting by
>> attributing work to threads which didn't initiate that work.

My understanding is that this speedup is all coming from the avoidance of
splitting folios synchronously in the context of madvise(MADV_FREE). It's not
actually punting any more work to be done lazily; it's just avoiding doing extra
unnecessary work up front. In fact, it would result in less work to do at
lazyfree time because the folios remain large, so there are fewer folios to free.

Perhaps I've misunderstood your point?

Thanks,
Ryan

>>
>> And there's a corner-case risk where the thread running madvise() has
>> realtime policy (SCHED_RR/SCHED_FIFO) on a single-CPU system,
>> preventing any other threads from executing, resulting in indefinitely
>> deferred freeing resulting in memory squeezes or even OOM conditions.
>>
>> It would be good if the changelog(s) were to show some consideration of
>> such matters and some demonstration that the benefits exceed the risks
>> and costs.
>>
> 
> Hey Andrew,
> 
> Thanks for bringing up these concerns!
> 
> I completely agree that we need to consider such matters and include
> them in the changelog(s). Additionally, I'll do my best to show that the
> benefits exceed the risks and costs, and then update the changelog(s)
> accordingly.
> 
> Thanks again for your time!
> Lance


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-08  4:24 ` [PATCH v5 1/2] mm/madvise: optimize " Lance Yang
@ 2024-04-11 11:11   ` Ryan Roberts
  2024-04-11 11:20     ` David Hildenbrand
  2024-04-11 12:46     ` Lance Yang
  0 siblings, 2 replies; 21+ messages in thread
From: Ryan Roberts @ 2024-04-11 11:11 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 08/04/2024 05:24, Lance Yang wrote:
> This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
> (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
> splitting if the large folio is fully mapped within the target range.
> 
> If a large folio is locked or shared, or if we fail to split it, we just
> leave it in place and advance to the next PTE in the range. But note that
> the behavior is changed; previously, any failure of this sort would cause
> the entire operation to give up. As large folios become more common,
> sticking to the old way could result in wasted opportunities.
> 
> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> the same size results in the following runtimes for madvise(MADV_FREE) in
> seconds (shorter is better):
> 
> Folio Size |   Old    |   New    | Change
> ------------------------------------------
>       4KiB | 0.590251 | 0.590259 |    0%
>      16KiB | 2.990447 | 0.185655 |  -94%
>      32KiB | 2.547831 | 0.104870 |  -95%
>      64KiB | 2.457796 | 0.052812 |  -97%
>     128KiB | 2.281034 | 0.032777 |  -99%
>     256KiB | 2.230387 | 0.017496 |  -99%
>     512KiB | 2.189106 | 0.010781 |  -99%
>    1024KiB | 2.183949 | 0.007753 |  -99%
>    2048KiB | 0.002799 | 0.002804 |    0%
> 
> [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
> [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
> 
> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> ---
>  include/linux/pgtable.h |  34 +++++++++
>  mm/internal.h           |  12 +++-
>  mm/madvise.c            | 149 ++++++++++++++++++++++------------------
>  mm/memory.c             |   4 +-
>  4 files changed, 129 insertions(+), 70 deletions(-)
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 0f4b2faa1d71..4dd442787420 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  }
>  #endif
>  
> +#ifndef mkold_clean_ptes
> +/**
> + * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
> + *		as old and clean.
> + * @mm: Address space the pages are mapped into.
> + * @addr: Address the first page is mapped at.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of entries to mark old and clean.
> + *
> + * May be overridden by the architecture; otherwise, implemented by
> + * get_and_clear/modify/set for each pte in the range.
> + *
> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> + * some PTEs might be write-protected.
> + *
> + * Context: The caller holds the page table lock.  The PTEs map consecutive
> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
> + */
> +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> +				    pte_t *ptep, unsigned int nr)

Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
(which I added as part of swap-out) to something like:

clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
		       pte_t *ptep, unsigned int nr,
		       bool clear_young, bool clear_dirty);

Then we can use the same function for both use cases and also have the ability
to only clear dirty in future if we ever need it. The other advantage is that we
only need to plumb a single function down the arm64 arch code. As it currently
stands, those 2 functions would be duplicating most of their code.

Generated code would still be the same since I'd expect the callsites to be
passing in constants for clear_young and clear_dirty.
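
The generic fallback could then be roughly (untested, just to show the shape;
it is basically your existing mkold_clean_ptes() loop with the two bools):

static inline void clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
					  pte_t *ptep, unsigned int nr,
					  bool clear_young, bool clear_dirty)
{
	pte_t pte;

	for (;;) {
		pte = ptep_get_and_clear(mm, addr, ptep);
		/* Clear only what the caller asked for before re-installing the pte. */
		if (clear_young)
			pte = pte_mkold(pte);
		if (clear_dirty)
			pte = pte_mkclean(pte);
		set_pte_at(mm, addr, ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}
}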

> +{
> +	pte_t pte;
> +
> +	for (;;) {
> +		pte = ptep_get_and_clear(mm, addr, ptep);
> +		set_pte_at(mm, addr, ptep, pte_mkclean(pte_mkold(pte)));
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		addr += PAGE_SIZE;
> +	}
> +}
> +#endif
> +
>  static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>  			      pte_t *ptep)
>  {
> diff --git a/mm/internal.h b/mm/internal.h
> index 57c1055d5568..792a9baf0d14 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -132,6 +132,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>   *		  first one is writable.
>   * @any_young: Optional pointer to indicate whether any entry except the
>   *		  first one is young.
> + * @any_dirty: Optional pointer to indicate whether any entry except the
> + *		  first one is dirty.
>   *
>   * Detect a PTE batch: consecutive (present) PTEs that map consecutive
>   * pages of the same large folio.
> @@ -147,18 +149,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>   */
>  static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>  		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> -		bool *any_writable, bool *any_young)
> +		bool *any_writable, bool *any_young, bool *any_dirty)
>  {
>  	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
>  	const pte_t *end_ptep = start_ptep + max_nr;
>  	pte_t expected_pte, *ptep;
> -	bool writable, young;
> +	bool writable, young, dirty;
>  	int nr;
>  
>  	if (any_writable)
>  		*any_writable = false;
>  	if (any_young)
>  		*any_young = false;
> +	if (any_dirty)
> +		*any_dirty = false;
>  
>  	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
>  	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
> @@ -174,6 +178,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>  			writable = !!pte_write(pte);
>  		if (any_young)
>  			young = !!pte_young(pte);
> +		if (any_dirty)
> +			dirty = !!pte_dirty(pte);
>  		pte = __pte_batch_clear_ignored(pte, flags);
>  
>  		if (!pte_same(pte, expected_pte))
> @@ -191,6 +197,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>  			*any_writable |= writable;
>  		if (any_young)
>  			*any_young |= young;
> +		if (any_dirty)
> +			*any_dirty |= dirty;
>  
>  		nr = pte_batch_hint(ptep, pte);
>  		expected_pte = pte_advance_pfn(expected_pte, nr);
> diff --git a/mm/madvise.c b/mm/madvise.c
> index bf26cf2b7715..0777df2e3691 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
>  	       file_permission(vma->vm_file, MAY_WRITE) == 0;
>  }
>  
> +static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
> +					  struct folio *folio, pte_t *ptep,
> +					  pte_t pte, bool *any_young,
> +					  bool *any_dirty)
> +{
> +	int max_nr = (end - addr) / PAGE_SIZE;
> +	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +
> +	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> +			       any_young, any_dirty);
> +}
> +
> +static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
> +					   unsigned long addr,
> +					   struct folio *folio, pte_t **pte,
> +					   spinlock_t **ptl)
> +{
> +	int err;
> +
> +	if (!folio_trylock(folio))
> +		return false;
> +
> +	folio_get(folio);
> +	pte_unmap_unlock(*pte, *ptl);
> +	err = split_folio(folio);
> +	folio_unlock(folio);
> +	folio_put(folio);
> +
> +	*pte = pte_offset_map_lock(mm, pmd, addr, ptl);
> +
> +	return err == 0;
> +}
> +
>  static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  				unsigned long addr, unsigned long end,
>  				struct mm_walk *walk)
> @@ -456,41 +489,29 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  		 * next pte in the range.
>  		 */
>  		if (folio_test_large(folio)) {
> -			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
> -						FPB_IGNORE_SOFT_DIRTY;
> -			int max_nr = (end - addr) / PAGE_SIZE;
>  			bool any_young;
> -

nit: there should be a blank line between variable declarations and following
code. You have removed it here (and similar in free function). Did you run
checkpatch.pl? It would have caught these things.

> -			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> -					     fpb_flags, NULL, &any_young);
> -			if (any_young)
> -				ptent = pte_mkyoung(ptent);
> +			nr = madvise_folio_pte_batch(addr, end, folio, pte,
> +						     ptent, &any_young, NULL);
>  
>  			if (nr < folio_nr_pages(folio)) {
> -				int err;
> -
>  				if (folio_likely_mapped_shared(folio))
>  					continue;
>  				if (pageout_anon_only_filter && !folio_test_anon(folio))
>  					continue;
> -				if (!folio_trylock(folio))
> -					continue;
> -				folio_get(folio);
> +
>  				arch_leave_lazy_mmu_mode();
> -				pte_unmap_unlock(start_pte, ptl);
> -				start_pte = NULL;
> -				err = split_folio(folio);
> -				folio_unlock(folio);
> -				folio_put(folio);
> -				start_pte = pte =
> -					pte_offset_map_lock(mm, pmd, addr, &ptl);
> +				if (madvise_pte_split_folio(mm, pmd, addr,
> +							    folio, &start_pte, &ptl))
> +					nr = 0;
>  				if (!start_pte)
>  					break;
> +				pte = start_pte;
>  				arch_enter_lazy_mmu_mode();
> -				if (!err)
> -					nr = 0;
>  				continue;
>  			}
> +
> +			if (any_young)
> +				ptent = pte_mkyoung(ptent);
>  		}
>  
>  		/*
> @@ -687,47 +708,54 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>  			continue;
>  
>  		/*
> -		 * If pmd isn't transhuge but the folio is large and
> -		 * is owned by only this process, split it and
> -		 * deactivate all pages.
> +		 * If we encounter a large folio, only split it if it is not
> +		 * fully mapped within the range we are operating on. Otherwise
> +		 * leave it as is so that it can be marked as lazyfree. If we
> +		 * fail to split a folio, leave it in place and advance to the
> +		 * next pte in the range.
>  		 */
>  		if (folio_test_large(folio)) {
> -			int err;
> +			bool any_young, any_dirty;
> +			nr = madvise_folio_pte_batch(addr, end, folio, pte,
> +						     ptent, &any_young, &any_dirty);
>  
> -			if (folio_likely_mapped_shared(folio))
> -				break;
> -			if (!folio_trylock(folio))
> -				break;
> -			folio_get(folio);
> -			arch_leave_lazy_mmu_mode();
> -			pte_unmap_unlock(start_pte, ptl);
> -			start_pte = NULL;
> -			err = split_folio(folio);
> +			if (nr < folio_nr_pages(folio)) {
> +				if (folio_likely_mapped_shared(folio))
> +					continue;
> +
> +				arch_leave_lazy_mmu_mode();
> +				if (madvise_pte_split_folio(mm, pmd, addr,
> +							    folio, &start_pte, &ptl))
> +					nr = 0;
> +				if (!start_pte)
> +					break;
> +				pte = start_pte;
> +				arch_enter_lazy_mmu_mode();
> +				continue;
> +			}
> +
> +			if (any_young)
> +				ptent = pte_mkyoung(ptent);
> +			if (any_dirty)
> +				ptent = pte_mkdirty(ptent);
> +		}
> +
> +		if (!folio_trylock(folio))
> +			continue;

This is still wrong. This should all be protected by the "if
(folio_test_swapcache(folio) || folio_test_dirty(folio))" as it was previously
so that you only call folio_trylock() if that condition is true. You are
unconditionally locking here, then unlocking, then relocking below if the
condition is met. Just put everything inside the condition and lock once.
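
i.e. roughly (untested sketch):

		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
			if (!folio_trylock(folio))
				continue;
			/*
			 * A fully mapped large folio whose mapcount matches its
			 * number of pages must be exclusive.
			 */
			if (folio_mapcount(folio) != folio_nr_pages(folio)) {
				folio_unlock(folio);
				continue;
			}
			/* ... existing swapcache/dirty handling, ending with folio_unlock() ... */
		}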

Thanks,
Ryan

> +		/*
> +		 * If we have a large folio at this point, we know it is fully mapped
> +		 * so if its mapcount is the same as its number of pages, it must be
> +		 * exclusive.
> +		 */
> +		if (folio_mapcount(folio) != folio_nr_pages(folio)) {
>  			folio_unlock(folio);
> -			folio_put(folio);
> -			if (err)
> -				break;
> -			start_pte = pte =
> -				pte_offset_map_lock(mm, pmd, addr, &ptl);
> -			if (!start_pte)
> -				break;
> -			arch_enter_lazy_mmu_mode();
> -			pte--;
> -			addr -= PAGE_SIZE;
>  			continue;
>  		}
> +		folio_unlock(folio);
>  
>  		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
>  			if (!folio_trylock(folio))
>  				continue;
> -			/*
> -			 * If folio is shared with others, we mustn't clear
> -			 * the folio's dirty flag.
> -			 */
> -			if (folio_mapcount(folio) != 1) {
> -				folio_unlock(folio);
> -				continue;
> -			}
>  
>  			if (folio_test_swapcache(folio) &&
>  			    !folio_free_swap(folio)) {
> @@ -740,19 +768,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>  		}
>  
>  		if (pte_young(ptent) || pte_dirty(ptent)) {
> -			/*
> -			 * Some of architecture(ex, PPC) don't update TLB
> -			 * with set_pte_at and tlb_remove_tlb_entry so for
> -			 * the portability, remap the pte with old|clean
> -			 * after pte clearing.
> -			 */
> -			ptent = ptep_get_and_clear_full(mm, addr, pte,
> -							tlb->fullmm);
> -
> -			ptent = pte_mkold(ptent);
> -			ptent = pte_mkclean(ptent);
> -			set_pte_at(mm, addr, pte, ptent);
> -			tlb_remove_tlb_entry(tlb, pte, addr);
> +			mkold_clean_ptes(mm, addr, pte, nr);
> +			tlb_remove_tlb_entries(tlb, pte, nr, addr);
>  		}
>  		folio_mark_lazyfree(folio);
>  	}
> diff --git a/mm/memory.c b/mm/memory.c
> index 1723c8ddf9cb..fe9d4d64c627 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>  			flags |= FPB_IGNORE_SOFT_DIRTY;
>  
>  		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
> -				     &any_writable, NULL);
> +				     &any_writable, NULL, NULL);
>  		folio_ref_add(folio, nr);
>  		if (folio_test_anon(folio)) {
>  			if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> @@ -1559,7 +1559,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
>  	 */
>  	if (unlikely(folio_test_large(folio) && max_nr != 1)) {
>  		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
> -				     NULL, NULL);
> +				     NULL, NULL, NULL);
>  
>  		zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
>  				       addr, details, rss, force_flush,


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 11:11   ` Ryan Roberts
@ 2024-04-11 11:20     ` David Hildenbrand
  2024-04-11 11:27       ` Ryan Roberts
  2024-04-11 12:46     ` Lance Yang
  1 sibling, 1 reply; 21+ messages in thread
From: David Hildenbrand @ 2024-04-11 11:20 UTC (permalink / raw)
  To: Ryan Roberts, Lance Yang, akpm
  Cc: 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301, xiehuan09,
	wangkefeng.wang, songmuchun, peterx, minchan, linux-mm,
	linux-kernel

On 11.04.24 13:11, Ryan Roberts wrote:
> On 08/04/2024 05:24, Lance Yang wrote:
>> This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
>> (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
>> splitting if the large folio is fully mapped within the target range.
>>
>> If a large folio is locked or shared, or if we fail to split it, we just
>> leave it in place and advance to the next PTE in the range. But note that
>> the behavior is changed; previously, any failure of this sort would cause
>> the entire operation to give up. As large folios become more common,
>> sticking to the old way could result in wasted opportunities.
>>
>> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
>> the same size results in the following runtimes for madvise(MADV_FREE) in
>> seconds (shorter is better):
>>
>> Folio Size |   Old    |   New    | Change
>> ------------------------------------------
>>        4KiB | 0.590251 | 0.590259 |    0%
>>       16KiB | 2.990447 | 0.185655 |  -94%
>>       32KiB | 2.547831 | 0.104870 |  -95%
>>       64KiB | 2.457796 | 0.052812 |  -97%
>>      128KiB | 2.281034 | 0.032777 |  -99%
>>      256KiB | 2.230387 | 0.017496 |  -99%
>>      512KiB | 2.189106 | 0.010781 |  -99%
>>     1024KiB | 2.183949 | 0.007753 |  -99%
>>     2048KiB | 0.002799 | 0.002804 |    0%
>>
>> [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
>> [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
>>
>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
>> ---
>>   include/linux/pgtable.h |  34 +++++++++
>>   mm/internal.h           |  12 +++-
>>   mm/madvise.c            | 149 ++++++++++++++++++++++------------------
>>   mm/memory.c             |   4 +-
>>   4 files changed, 129 insertions(+), 70 deletions(-)
>>
>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>> index 0f4b2faa1d71..4dd442787420 100644
>> --- a/include/linux/pgtable.h
>> +++ b/include/linux/pgtable.h
>> @@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>   }
>>   #endif
>>   
>> +#ifndef mkold_clean_ptes
>> +/**
>> + * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
>> + *		as old and clean.
>> + * @mm: Address space the pages are mapped into.
>> + * @addr: Address the first page is mapped at.
>> + * @ptep: Page table pointer for the first entry.
>> + * @nr: Number of entries to mark old and clean.
>> + *
>> + * May be overridden by the architecture; otherwise, implemented by
>> + * get_and_clear/modify/set for each pte in the range.
>> + *
>> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
>> + * some PTEs might be write-protected.
>> + *
>> + * Context: The caller holds the page table lock.  The PTEs map consecutive
>> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
>> + */
>> +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
>> +				    pte_t *ptep, unsigned int nr)
> 
> Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
> (which I added as part of swap-out) to something like:
> 
> clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
> 		       pte_t *ptep, unsigned int nr,
> 		       bool clear_young, bool clear_dirty);
> 
> Then we can use the same function for both use cases and also have the ability
> to only clear dirty in future if we ever need it. The other advantage is that we
> only need to plumb a single function down the arm64 arch code. As it currently
> stands, those 2 functions would be duplicating most of their code.

Yes. Maybe better use proper __bitwise flags, the compiler should be 
smart enough to optimize either way.
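
E.g. something like this (illustrative only, the names are made up), replacing
the two bools with a single flags argument:

typedef unsigned int __bitwise cydp_t;

#define CYDP_CLEAR_YOUNG	((__force cydp_t)BIT(0))
#define CYDP_CLEAR_DIRTY	((__force cydp_t)BIT(1))

static inline void clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
					  pte_t *ptep, unsigned int nr,
					  cydp_t flags);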

-- 
Cheers,

David / dhildenb


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 11:20     ` David Hildenbrand
@ 2024-04-11 11:27       ` Ryan Roberts
  2024-04-11 12:23         ` Lance Yang
  0 siblings, 1 reply; 21+ messages in thread
From: Ryan Roberts @ 2024-04-11 11:27 UTC (permalink / raw)
  To: David Hildenbrand, Lance Yang, akpm
  Cc: 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301, xiehuan09,
	wangkefeng.wang, songmuchun, peterx, minchan, linux-mm,
	linux-kernel

On 11/04/2024 12:20, David Hildenbrand wrote:
> On 11.04.24 13:11, Ryan Roberts wrote:
>> On 08/04/2024 05:24, Lance Yang wrote:
>>> This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
>>> (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
>>> splitting if the large folio is fully mapped within the target range.
>>>
>>> If a large folio is locked or shared, or if we fail to split it, we just
>>> leave it in place and advance to the next PTE in the range. But note that
>>> the behavior is changed; previously, any failure of this sort would cause
>>> the entire operation to give up. As large folios become more common,
>>> sticking to the old way could result in wasted opportunities.
>>>
>>> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
>>> the same size results in the following runtimes for madvise(MADV_FREE) in
>>> seconds (shorter is better):
>>>
>>> Folio Size |   Old    |   New    | Change
>>> ------------------------------------------
>>>        4KiB | 0.590251 | 0.590259 |    0%
>>>       16KiB | 2.990447 | 0.185655 |  -94%
>>>       32KiB | 2.547831 | 0.104870 |  -95%
>>>       64KiB | 2.457796 | 0.052812 |  -97%
>>>      128KiB | 2.281034 | 0.032777 |  -99%
>>>      256KiB | 2.230387 | 0.017496 |  -99%
>>>      512KiB | 2.189106 | 0.010781 |  -99%
>>>     1024KiB | 2.183949 | 0.007753 |  -99%
>>>     2048KiB | 0.002799 | 0.002804 |    0%
>>>
>>> [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
>>> [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
>>>
>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
>>> ---
>>>   include/linux/pgtable.h |  34 +++++++++
>>>   mm/internal.h           |  12 +++-
>>>   mm/madvise.c            | 149 ++++++++++++++++++++++------------------
>>>   mm/memory.c             |   4 +-
>>>   4 files changed, 129 insertions(+), 70 deletions(-)
>>>
>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>> index 0f4b2faa1d71..4dd442787420 100644
>>> --- a/include/linux/pgtable.h
>>> +++ b/include/linux/pgtable.h
>>> @@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct
>>> *mm,
>>>   }
>>>   #endif
>>>   +#ifndef mkold_clean_ptes
>>> +/**
>>> + * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
>>> + *        as old and clean.
>>> + * @mm: Address space the pages are mapped into.
>>> + * @addr: Address the first page is mapped at.
>>> + * @ptep: Page table pointer for the first entry.
>>> + * @nr: Number of entries to mark old and clean.
>>> + *
>>> + * May be overridden by the architecture; otherwise, implemented by
>>> + * get_and_clear/modify/set for each pte in the range.
>>> + *
>>> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
>>> + * some PTEs might be write-protected.
>>> + *
>>> + * Context: The caller holds the page table lock.  The PTEs map consecutive
>>> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
>>> + */
>>> +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
>>> +                    pte_t *ptep, unsigned int nr)
>>
>> Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
>> (which I added as part of swap-out) to something like:
>>
>> clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
>>                pte_t *ptep, unsigned int nr,
>>                bool clear_young, bool clear_dirty);
>>
>> Then we can use the same function for both use cases and also have the ability
>> to only clear dirty in future if we ever need it. The other advantage is that we
>> only need to plumb a single function down the arm64 arch code. As it currently
>> stands, those 2 functions would be duplicating most of their code.
> 
> Yes. Maybe better use proper __bitwise flags, the compiler should be smart
> enough to optimize either way.

Agreed. I was also thinking perhaps it makes sense to start using output bitwise
flags for folio_pte_batch() since this patch set takes us up to 3 optional bool
pointers for different things. Might be cleaner to have input flags to tell it
what we care about and output flags to highlight those things. I guess the
compiler should be able to optimize in the same way.
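
Something like this, purely illustrative (flag names invented, on top of the
existing FPB_IGNORE_* bits):

#define FPB_HANDLE_WRITABLE	((__force fpb_t)BIT(2))
#define FPB_HANDLE_YOUNG	((__force fpb_t)BIT(3))
#define FPB_HANDLE_DIRTY	((__force fpb_t)BIT(4))

static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
		fpb_t *any_flags);

where callers set FPB_HANDLE_* in flags to say what they care about and read
the same bits back from *any_flags.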


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 11:27       ` Ryan Roberts
@ 2024-04-11 12:23         ` Lance Yang
  2024-04-11 13:51           ` Ryan Roberts
  0 siblings, 1 reply; 21+ messages in thread
From: Lance Yang @ 2024-04-11 12:23 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: David Hildenbrand, akpm, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel

On Thu, Apr 11, 2024 at 7:27 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 11/04/2024 12:20, David Hildenbrand wrote:
> > On 11.04.24 13:11, Ryan Roberts wrote:
> >> On 08/04/2024 05:24, Lance Yang wrote:
> >>> This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
> >>> (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
> >>> splitting if the large folio is fully mapped within the target range.
> >>>
> >>> If a large folio is locked or shared, or if we fail to split it, we just
> >>> leave it in place and advance to the next PTE in the range. But note that
> >>> the behavior is changed; previously, any failure of this sort would cause
> >>> the entire operation to give up. As large folios become more common,
> >>> sticking to the old way could result in wasted opportunities.
> >>>
> >>> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> >>> the same size results in the following runtimes for madvise(MADV_FREE) in
> >>> seconds (shorter is better):
> >>>
> >>> Folio Size |   Old    |   New    | Change
> >>> ------------------------------------------
> >>>        4KiB | 0.590251 | 0.590259 |    0%
> >>>       16KiB | 2.990447 | 0.185655 |  -94%
> >>>       32KiB | 2.547831 | 0.104870 |  -95%
> >>>       64KiB | 2.457796 | 0.052812 |  -97%
> >>>      128KiB | 2.281034 | 0.032777 |  -99%
> >>>      256KiB | 2.230387 | 0.017496 |  -99%
> >>>      512KiB | 2.189106 | 0.010781 |  -99%
> >>>     1024KiB | 2.183949 | 0.007753 |  -99%
> >>>     2048KiB | 0.002799 | 0.002804 |    0%
> >>>
> >>> [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
> >>> [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
> >>>
> >>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> >>> ---
> >>>   include/linux/pgtable.h |  34 +++++++++
> >>>   mm/internal.h           |  12 +++-
> >>>   mm/madvise.c            | 149 ++++++++++++++++++++++------------------
> >>>   mm/memory.c             |   4 +-
> >>>   4 files changed, 129 insertions(+), 70 deletions(-)
> >>>
> >>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> >>> index 0f4b2faa1d71..4dd442787420 100644
> >>> --- a/include/linux/pgtable.h
> >>> +++ b/include/linux/pgtable.h
> >>> @@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct
> >>> *mm,
> >>>   }
> >>>   #endif
> >>>   +#ifndef mkold_clean_ptes
> >>> +/**
> >>> + * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
> >>> + *        as old and clean.
> >>> + * @mm: Address space the pages are mapped into.
> >>> + * @addr: Address the first page is mapped at.
> >>> + * @ptep: Page table pointer for the first entry.
> >>> + * @nr: Number of entries to mark old and clean.
> >>> + *
> >>> + * May be overridden by the architecture; otherwise, implemented by
> >>> + * get_and_clear/modify/set for each pte in the range.
> >>> + *
> >>> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> >>> + * some PTEs might be write-protected.
> >>> + *
> >>> + * Context: The caller holds the page table lock.  The PTEs map consecutive
> >>> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
> >>> + */
> >>> +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> >>> +                    pte_t *ptep, unsigned int nr)
> >>

Thanks for the suggestions, Ryan, David!

> >> Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
> >> (which I added as part of swap-out) to something like:

Yeah, this is definitely cleaner than before.

> >>
> >> clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
> >>                pte_t *ptep, unsigned int nr,
> >>                bool clear_young, bool clear_dirty);
> >>
> >> Then we can use the same function for both use cases and also have the ability
> >> to only clear dirty in future if we ever need it. The other advantage is that we
> >> only need to plumb a single function down the arm64 arch code. As it currently
> >> stands, those 2 functions would be duplicating most of their code.

Agreed. It's indeed a good idea to use a single function for both use cases.

> >
> > Yes. Maybe better use proper __bitwise flags, the compiler should be smart
> > enough to optimize either way.

Nice. I'll use the __bitwise flags as the input.
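
For illustration, the rough shape could be something like the sketch below
(the flag names are placeholders, not a final proposal; the body just mirrors
the generic get_and_clear/modify/set loop from the current patch):

typedef unsigned int __bitwise cydp_t;

#define CYDP_CLEAR_YOUNG	((__force cydp_t)BIT(0))
#define CYDP_CLEAR_DIRTY	((__force cydp_t)BIT(1))

static inline void clear_young_dirty_ptes(struct mm_struct *mm,
		unsigned long addr, pte_t *ptep, unsigned int nr,
		cydp_t flags)
{
	pte_t pte;

	for (;;) {
		pte = ptep_get_and_clear(mm, addr, ptep);
		if (flags & CYDP_CLEAR_YOUNG)
			pte = pte_mkold(pte);
		if (flags & CYDP_CLEAR_DIRTY)
			pte = pte_mkclean(pte);
		set_pte_at(mm, addr, ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}
}

Since the callsites pass constant flags, the compiler should be able to drop
the dead branches either way.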

>
> Agreed. I was also thinking perhaps it makes sense to start using output bitwise
> flags for folio_pte_batch() since this patch set takes us up to 3 optional bool
> pointers for different things. Might be cleaner to have input flags to tell it
> what we care about and output flags to highlight those things. I guess the
> compiler should be able to optimize in the same way.
>

Should I start using output bitwise flags for folio_pte_batch() in
this patch set?

Thanks,
Lance


* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 11:11   ` Ryan Roberts
  2024-04-11 11:20     ` David Hildenbrand
@ 2024-04-11 12:46     ` Lance Yang
  2024-04-11 13:48       ` Ryan Roberts
  1 sibling, 1 reply; 21+ messages in thread
From: Lance Yang @ 2024-04-11 12:46 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: akpm, david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On Thu, Apr 11, 2024 at 7:11 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 08/04/2024 05:24, Lance Yang wrote:
> > This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
> > (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
> > splitting if the large folio is fully mapped within the target range.
> >
> > If a large folio is locked or shared, or if we fail to split it, we just
> > leave it in place and advance to the next PTE in the range. But note that
> > the behavior is changed; previously, any failure of this sort would cause
> > the entire operation to give up. As large folios become more common,
> > sticking to the old way could result in wasted opportunities.
> >
> > On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> > the same size results in the following runtimes for madvise(MADV_FREE) in
> > seconds (shorter is better):
> >
> > Folio Size |   Old    |   New    | Change
> > ------------------------------------------
> >       4KiB | 0.590251 | 0.590259 |    0%
> >      16KiB | 2.990447 | 0.185655 |  -94%
> >      32KiB | 2.547831 | 0.104870 |  -95%
> >      64KiB | 2.457796 | 0.052812 |  -97%
> >     128KiB | 2.281034 | 0.032777 |  -99%
> >     256KiB | 2.230387 | 0.017496 |  -99%
> >     512KiB | 2.189106 | 0.010781 |  -99%
> >    1024KiB | 2.183949 | 0.007753 |  -99%
> >    2048KiB | 0.002799 | 0.002804 |    0%
> >
> > [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
> > [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
> >
> > Signed-off-by: Lance Yang <ioworker0@gmail.com>
> > ---
> >  include/linux/pgtable.h |  34 +++++++++
> >  mm/internal.h           |  12 +++-
> >  mm/madvise.c            | 149 ++++++++++++++++++++++------------------
> >  mm/memory.c             |   4 +-
> >  4 files changed, 129 insertions(+), 70 deletions(-)
> >
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 0f4b2faa1d71..4dd442787420 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> >  }
> >  #endif
> >
> > +#ifndef mkold_clean_ptes
> > +/**
> > + * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
> > + *           as old and clean.
> > + * @mm: Address space the pages are mapped into.
> > + * @addr: Address the first page is mapped at.
> > + * @ptep: Page table pointer for the first entry.
> > + * @nr: Number of entries to mark old and clean.
> > + *
> > + * May be overridden by the architecture; otherwise, implemented by
> > + * get_and_clear/modify/set for each pte in the range.
> > + *
> > + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> > + * some PTEs might be write-protected.
> > + *
> > + * Context: The caller holds the page table lock.  The PTEs map consecutive
> > + * pages that belong to the same folio.  The PTEs are all in the same PMD.
> > + */
> > +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> > +                                 pte_t *ptep, unsigned int nr)
>
> Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
> (which I added as part of swap-out) to something like:
>
> clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
>                        pte_t *ptep, unsigned int nr,
>                        bool clear_young, bool clear_dirty);
>
> Then we can use the same function for both use cases and also have the ability
> to only clear dirty in future if we ever need it. The other advantage is that we
> only need to plumb a single function down the arm64 arch code. As it currently
> stands, those 2 functions would be duplicating most of their code.
>
> Generated code would still be the same since I'd expect the callsites to be
> passing in constants for clear_young and clear_dirty.
>
> > +{
> > +     pte_t pte;
> > +
> > +     for (;;) {
> > +             pte = ptep_get_and_clear(mm, addr, ptep);
> > +             set_pte_at(mm, addr, ptep, pte_mkclean(pte_mkold(pte)));
> > +             if (--nr == 0)
> > +                     break;
> > +             ptep++;
> > +             addr += PAGE_SIZE;
> > +     }
> > +}
> > +#endif
> > +
> >  static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> >                             pte_t *ptep)
> >  {
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 57c1055d5568..792a9baf0d14 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -132,6 +132,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> >   *             first one is writable.
> >   * @any_young: Optional pointer to indicate whether any entry except the
> >   *             first one is young.
> > + * @any_dirty: Optional pointer to indicate whether any entry except the
> > + *             first one is dirty.
> >   *
> >   * Detect a PTE batch: consecutive (present) PTEs that map consecutive
> >   * pages of the same large folio.
> > @@ -147,18 +149,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> >   */
> >  static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >               pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> > -             bool *any_writable, bool *any_young)
> > +             bool *any_writable, bool *any_young, bool *any_dirty)
> >  {
> >       unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> >       const pte_t *end_ptep = start_ptep + max_nr;
> >       pte_t expected_pte, *ptep;
> > -     bool writable, young;
> > +     bool writable, young, dirty;
> >       int nr;
> >
> >       if (any_writable)
> >               *any_writable = false;
> >       if (any_young)
> >               *any_young = false;
> > +     if (any_dirty)
> > +             *any_dirty = false;
> >
> >       VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> >       VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
> > @@ -174,6 +178,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >                       writable = !!pte_write(pte);
> >               if (any_young)
> >                       young = !!pte_young(pte);
> > +             if (any_dirty)
> > +                     dirty = !!pte_dirty(pte);
> >               pte = __pte_batch_clear_ignored(pte, flags);
> >
> >               if (!pte_same(pte, expected_pte))
> > @@ -191,6 +197,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >                       *any_writable |= writable;
> >               if (any_young)
> >                       *any_young |= young;
> > +             if (any_dirty)
> > +                     *any_dirty |= dirty;
> >
> >               nr = pte_batch_hint(ptep, pte);
> >               expected_pte = pte_advance_pfn(expected_pte, nr);
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index bf26cf2b7715..0777df2e3691 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
> >              file_permission(vma->vm_file, MAY_WRITE) == 0;
> >  }
> >
> > +static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
> > +                                       struct folio *folio, pte_t *ptep,
> > +                                       pte_t pte, bool *any_young,
> > +                                       bool *any_dirty)
> > +{
> > +     int max_nr = (end - addr) / PAGE_SIZE;
> > +     const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> > +
> > +     return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> > +                            any_young, any_dirty);
> > +}
> > +
> > +static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
> > +                                        unsigned long addr,
> > +                                        struct folio *folio, pte_t **pte,
> > +                                        spinlock_t **ptl)
> > +{
> > +     int err;
> > +
> > +     if (!folio_trylock(folio))
> > +             return false;
> > +
> > +     folio_get(folio);
> > +     pte_unmap_unlock(*pte, *ptl);
> > +     err = split_folio(folio);
> > +     folio_unlock(folio);
> > +     folio_put(folio);
> > +
> > +     *pte = pte_offset_map_lock(mm, pmd, addr, ptl);
> > +
> > +     return err == 0;
> > +}
> > +
> >  static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >                               unsigned long addr, unsigned long end,
> >                               struct mm_walk *walk)
> > @@ -456,41 +489,29 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >                * next pte in the range.
> >                */
> >               if (folio_test_large(folio)) {
> > -                     const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
> > -                                             FPB_IGNORE_SOFT_DIRTY;
> > -                     int max_nr = (end - addr) / PAGE_SIZE;
> >                       bool any_young;
> > -
>
> nit: there should be a blank line between variable declarations and following
> code. You have removed it here (and similar in free function). Did you run
> checkpatch.pl? It would have caught these things.

Sorry for that. I did see this warning msg, but I didn't take it seriously :(

>
> > -                     nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> > -                                          fpb_flags, NULL, &any_young);
> > -                     if (any_young)
> > -                             ptent = pte_mkyoung(ptent);
> > +                     nr = madvise_folio_pte_batch(addr, end, folio, pte,
> > +                                                  ptent, &any_young, NULL);
> >
> >                       if (nr < folio_nr_pages(folio)) {
> > -                             int err;
> > -
> >                               if (folio_likely_mapped_shared(folio))
> >                                       continue;
> >                               if (pageout_anon_only_filter && !folio_test_anon(folio))
> >                                       continue;
> > -                             if (!folio_trylock(folio))
> > -                                     continue;
> > -                             folio_get(folio);
> > +
> >                               arch_leave_lazy_mmu_mode();
> > -                             pte_unmap_unlock(start_pte, ptl);
> > -                             start_pte = NULL;
> > -                             err = split_folio(folio);
> > -                             folio_unlock(folio);
> > -                             folio_put(folio);
> > -                             start_pte = pte =
> > -                                     pte_offset_map_lock(mm, pmd, addr, &ptl);
> > +                             if (madvise_pte_split_folio(mm, pmd, addr,
> > +                                                         folio, &start_pte, &ptl))
> > +                                     nr = 0;
> >                               if (!start_pte)
> >                                       break;
> > +                             pte = start_pte;
> >                               arch_enter_lazy_mmu_mode();
> > -                             if (!err)
> > -                                     nr = 0;
> >                               continue;
> >                       }
> > +
> > +                     if (any_young)
> > +                             ptent = pte_mkyoung(ptent);
> >               }
> >
> >               /*
> > @@ -687,47 +708,54 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >                       continue;
> >
> >               /*
> > -              * If pmd isn't transhuge but the folio is large and
> > -              * is owned by only this process, split it and
> > -              * deactivate all pages.
> > +              * If we encounter a large folio, only split it if it is not
> > +              * fully mapped within the range we are operating on. Otherwise
> > +              * leave it as is so that it can be marked as lazyfree. If we
> > +              * fail to split a folio, leave it in place and advance to the
> > +              * next pte in the range.
> >                */
> >               if (folio_test_large(folio)) {
> > -                     int err;
> > +                     bool any_young, any_dirty;
> > +                     nr = madvise_folio_pte_batch(addr, end, folio, pte,
> > +                                                  ptent, &any_young, &any_dirty);
> >
> > -                     if (folio_likely_mapped_shared(folio))
> > -                             break;
> > -                     if (!folio_trylock(folio))
> > -                             break;
> > -                     folio_get(folio);
> > -                     arch_leave_lazy_mmu_mode();
> > -                     pte_unmap_unlock(start_pte, ptl);
> > -                     start_pte = NULL;
> > -                     err = split_folio(folio);
> > +                     if (nr < folio_nr_pages(folio)) {
> > +                             if (folio_likely_mapped_shared(folio))
> > +                                     continue;
> > +
> > +                             arch_leave_lazy_mmu_mode();
> > +                             if (madvise_pte_split_folio(mm, pmd, addr,
> > +                                                         folio, &start_pte, &ptl))
> > +                                     nr = 0;
> > +                             if (!start_pte)
> > +                                     break;
> > +                             pte = start_pte;
> > +                             arch_enter_lazy_mmu_mode();
> > +                             continue;
> > +                     }
> > +
> > +                     if (any_young)
> > +                             ptent = pte_mkyoung(ptent);
> > +                     if (any_dirty)
> > +                             ptent = pte_mkdirty(ptent);
> > +             }
> > +
> > +             if (!folio_trylock(folio))
> > +                     continue;
>
> This is still wrong. This should all be protected by the "if
> (folio_test_swapcache(folio) || folio_test_dirty(folio))" as it was previously
> so that you only call folio_trylock() if that condition is true. You are
> unconditionally locking here, then unlocking, then relocking below if the
> condition is met. Just put everything inside the condition and lock once.

I'm not sure if it's safe to call folio_mapcount() without holding the
folio lock.

As David mentioned earlier in v2 [1]:
> What could work for large folios is making sure that #ptes that map the
> folio here correspond to the folio_mapcount(). And folio_mapcount()
> should be called under folio lock, to avoid racing with swapout/migration.

[1] https://lore.kernel.org/all/5cc05529-eb80-410e-bc26-233b0ba0b21f@redhat.com/

Thanks,
Lance
>
> Thanks,
> Ryan
>
> > +             /*
> > +              * If we have a large folio at this point, we know it is fully mapped
> > +              * so if its mapcount is the same as its number of pages, it must be
> > +              * exclusive.
> > +              */
> > +             if (folio_mapcount(folio) != folio_nr_pages(folio)) {
> >                       folio_unlock(folio);
> > -                     folio_put(folio);
> > -                     if (err)
> > -                             break;
> > -                     start_pte = pte =
> > -                             pte_offset_map_lock(mm, pmd, addr, &ptl);
> > -                     if (!start_pte)
> > -                             break;
> > -                     arch_enter_lazy_mmu_mode();
> > -                     pte--;
> > -                     addr -= PAGE_SIZE;
> >                       continue;
> >               }
> > +             folio_unlock(folio);
> >
> >               if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
> >                       if (!folio_trylock(folio))
> >                               continue;
> > -                     /*
> > -                      * If folio is shared with others, we mustn't clear
> > -                      * the folio's dirty flag.
> > -                      */
> > -                     if (folio_mapcount(folio) != 1) {
> > -                             folio_unlock(folio);
> > -                             continue;
> > -                     }
> >
> >                       if (folio_test_swapcache(folio) &&
> >                           !folio_free_swap(folio)) {
> > @@ -740,19 +768,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >               }
> >
> >               if (pte_young(ptent) || pte_dirty(ptent)) {
> > -                     /*
> > -                      * Some of architecture(ex, PPC) don't update TLB
> > -                      * with set_pte_at and tlb_remove_tlb_entry so for
> > -                      * the portability, remap the pte with old|clean
> > -                      * after pte clearing.
> > -                      */
> > -                     ptent = ptep_get_and_clear_full(mm, addr, pte,
> > -                                                     tlb->fullmm);
> > -
> > -                     ptent = pte_mkold(ptent);
> > -                     ptent = pte_mkclean(ptent);
> > -                     set_pte_at(mm, addr, pte, ptent);
> > -                     tlb_remove_tlb_entry(tlb, pte, addr);
> > +                     mkold_clean_ptes(mm, addr, pte, nr);
> > +                     tlb_remove_tlb_entries(tlb, pte, nr, addr);
> >               }
> >               folio_mark_lazyfree(folio);
> >       }
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 1723c8ddf9cb..fe9d4d64c627 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
> >                       flags |= FPB_IGNORE_SOFT_DIRTY;
> >
> >               nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
> > -                                  &any_writable, NULL);
> > +                                  &any_writable, NULL, NULL);
> >               folio_ref_add(folio, nr);
> >               if (folio_test_anon(folio)) {
> >                       if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> > @@ -1559,7 +1559,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
> >        */
> >       if (unlikely(folio_test_large(folio) && max_nr != 1)) {
> >               nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
> > -                                  NULL, NULL);
> > +                                  NULL, NULL, NULL);
> >
> >               zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
> >                                      addr, details, rss, force_flush,
>


* Re: [PATCH v5 2/2] mm/arm64: override mkold_clean_ptes() batch helper
  2024-04-08  4:24 ` [PATCH v5 2/2] mm/arm64: override mkold_clean_ptes() batch helper Lance Yang
@ 2024-04-11 13:17   ` Ryan Roberts
  2024-04-12  2:09     ` Lance Yang
  0 siblings, 1 reply; 21+ messages in thread
From: Ryan Roberts @ 2024-04-11 13:17 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 08/04/2024 05:24, Lance Yang wrote:
> The per-pte get_and_clear/modify/set approach would result in
> unfolding/refolding for contpte mappings on arm64. So we need
> to override mkold_clean_ptes() for arm64 to avoid it.

IIRC, in the last version, I suggested copying the wrprotect_ptes() pattern to
correctly iterate over contpte blocks. I meant for you to take it as inspiration,
but it looks like you have done a carbon copy, including lots of things that are
unneeded here. That's my fault for not being clear - sorry!


> 
> Suggested-by: David Hildenbrand <david@redhat.com>
> Suggested-by: Barry Song <21cnbao@gmail.com>
> Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> ---
>  arch/arm64/include/asm/pgtable.h | 55 ++++++++++++++++++++++++++++++++
>  arch/arm64/mm/contpte.c          | 15 +++++++++
>  2 files changed, 70 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 9fd8613b2db2..395754638a9a 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1223,6 +1223,34 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
>  		__ptep_set_wrprotect(mm, address, ptep);
>  }
>  
> +static inline void ___ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
> +				       pte_t *ptep, pte_t pte)
> +{
> +	pte_t old_pte;
> +
> +	do {
> +		old_pte = pte;
> +		pte = pte_mkclean(pte_mkold(pte));
> +		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
> +					       pte_val(old_pte), pte_val(pte));
> +	} while (pte_val(pte) != pte_val(old_pte));
> +}

Given you are clearing both old and dirty, you have nothing to race against, so
you shouldn't need the cmpxchg loop here; just a get/modify/set should do? Of
course, if you were only clearing one or the other, you would still need the loop.
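
To make that concrete, a minimal sketch of the non-atomic variant (only valid
because both bits are being cleared, as noted above; it assumes the arm64
__ptep_get()/__set_pte() helpers) could be:

static inline void __ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep)
{
	pte_t pte = __ptep_get(ptep);

	/* A racing HW update of AF/dirty is irrelevant: both bits are cleared. */
	__set_pte(ptep, pte_mkclean(pte_mkold(pte)));
}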

> +
> +static inline void __ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
> +				      pte_t *ptep)
> +{
> +	___ptep_mkold_clean(mm, addr, ptep, __ptep_get(ptep));
> +}

I don't see a need for this intermediate function.

> +
> +static inline void __mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> +				      pte_t *ptep, unsigned int nr)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < nr; i++, addr += PAGE_SIZE, ptep++)

It would probably be good to use the for() loop pattern used by the generic
impls here too.
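
i.e. something like the sketch below, reusing __ptep_mkold_clean() from above:

static inline void __mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep, unsigned int nr)
{
	for (;;) {
		__ptep_mkold_clean(mm, addr, ptep);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}
}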

> +		__ptep_mkold_clean(mm, addr, ptep);
> +}
> +
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_PMDP_SET_WRPROTECT
>  static inline void pmdp_set_wrprotect(struct mm_struct *mm,
> @@ -1379,6 +1407,8 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>  extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>  				unsigned long addr, pte_t *ptep,
>  				pte_t entry, int dirty);
> +extern void contpte_mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> +				pte_t *ptep, unsigned int nr);
>  
>  static __always_inline void contpte_try_fold(struct mm_struct *mm,
>  				unsigned long addr, pte_t *ptep, pte_t pte)
> @@ -1603,6 +1633,30 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
>  	return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
>  }
>  
> +#define mkold_clean_ptes mkold_clean_ptes
> +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> +				    pte_t *ptep, unsigned int nr)
> +{
> +	if (likely(nr == 1)) {
> +		/*
> +		 * Optimization: mkold_clean_ptes() can only be called for present
> +		 * ptes so we only need to check contig bit as condition for unfold,
> +		 * and we can remove the contig bit from the pte we read to avoid
> +		 * re-reading. This speeds up madvise(MADV_FREE) which is sensitive
> +		 * for order-0 folios. Equivalent to contpte_try_unfold().
> +		 */

Is this true? Do you have data that shows the cost? If not, I'd prefer to avoid
the optimization and do it the more standard way:

contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));

> +		pte_t orig_pte = __ptep_get(ptep);
> +
> +		if (unlikely(pte_cont(orig_pte))) {
> +			__contpte_try_unfold(mm, addr, ptep, orig_pte);
> +			orig_pte = pte_mknoncont(orig_pte);
> +		}
> +		___ptep_mkold_clean(mm, addr, ptep, orig_pte);
> +	} else {
> +		contpte_mkold_clean_ptes(mm, addr, ptep, nr);
> +	}

...but I don't think you should ever need to unfold in the first place. Even if
it's folded and you are trying to clear access/dirty for a single pte, you can
just clear the whole block. See existing comment in
contpte_ptep_test_and_clear_young().

So this ends up as something like:

static inline void clear_young_dirty_ptes(struct mm_struct *mm,
			unsigned long addr, pte_t *ptep, unsigned int nr,
			bool clear_young, bool clear_dirty)
{
	if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
		__clear_young_dirty_ptes(mm, addr, ptep, nr,
					 clear_young, clear_dirty);
	else
		contpte_clear_young_dirty_ptes(mm, addr, ptep, nr,
					clear_young, clear_dirty);
}


> +}
> +
>  #else /* CONFIG_ARM64_CONTPTE */
>  
>  #define ptep_get				__ptep_get
> @@ -1622,6 +1676,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
>  #define wrprotect_ptes				__wrprotect_ptes
>  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
>  #define ptep_set_access_flags			__ptep_set_access_flags
> +#define mkold_clean_ptes			__mkold_clean_ptes
>  
>  #endif /* CONFIG_ARM64_CONTPTE */
>  
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index 1b64b4c3f8bf..dbff9c5e9eff 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -361,6 +361,21 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>  }
>  EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
>  
> +void contpte_mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> +			      pte_t *ptep, unsigned int nr)
> +{
> +	/*
> +	 * If clearing the young and dirty bits for an entire contig range, we can
> +	 * avoid unfolding. Just set old/clean and wait for the later mmu_gather
> +	 * flush to invalidate the tlb. If it's a partial range though, we need to
> +	 * unfold.
> +	 */

nit: Please reflow comments like this to 80 cols.

We can avoid unfolding in all cases. See existing comment in
contpte_ptep_test_and_clear_young(). Suggest something like this (untested):

void clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
			    pte_t *ptep, unsigned int nr,
			    bool clear_young, bool clear_dirty)
{
	/*
	 * We can safely clear access/dirty without needing to unfold from the
	 * architectures perspective, even when contpte is set. If the range
	 * starts or ends midway through a contpte block, we can just expand to
	 * include the full contpte block. While this is not exactly what the
	 * core-mm asked for, it tracks access/dirty per folio, not per page.
	 * And since we only create a contpte block when it is covered by a
	 * single folio, we can get away with clearing access/dirty for the
	 * whole block.
	 */

	unsigned long start = addr;
	unsigned long end = start + nr * PAGE_SIZE;

	if (pte_cont(__ptep_get(ptep + nr - 1)))
		end = ALIGN(end, CONT_PTE_SIZE);

	if (pte_cont(__ptep_get(ptep))) {
		start = ALIGN_DOWN(start, CONT_PTE_SIZE);
		ptep = contpte_align_down(ptep);
	}

	__clear_young_dirty_ptes(mm, start, ptep, (end - start) / PAGE_SIZE,
				 clear_young, clear_dirty);
}

Thanks,
Ryan

> +
> +	contpte_try_unfold_partial(mm, addr, ptep, nr);
> +	__mkold_clean_ptes(mm, addr, ptep, nr);
> +}
> +EXPORT_SYMBOL_GPL(contpte_mkold_clean_ptes);
> +
>  int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>  					unsigned long addr, pte_t *ptep,
>  					pte_t entry, int dirty)



* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 12:46     ` Lance Yang
@ 2024-04-11 13:48       ` Ryan Roberts
  2024-04-11 14:07         ` Lance Yang
  0 siblings, 1 reply; 21+ messages in thread
From: Ryan Roberts @ 2024-04-11 13:48 UTC (permalink / raw)
  To: Lance Yang
  Cc: akpm, david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

[...]

>>> +
>>> +             if (!folio_trylock(folio))
>>> +                     continue;
>>
>> This is still wrong. This should all be protected by the "if
>> (folio_test_swapcache(folio) || folio_test_dirty(folio))" as it was previously
>> so that you only call folio_trylock() if that condition is true. You are
>> unconditionally locking here, then unlocking, then relocking below if the
>> condition is met. Just put everything inside the condition and lock once.
> 
> I'm not sure if it's safe to call folio_mapcount() without holding the
> folio lock.
> 
> As mentioned earlier by David in the v2[1]
>> What could work for large folios is making sure that #ptes that map the
>> folio here correspond to the folio_mapcount(). And folio_mapcount()
>> should be called under folio lock, to avoid racing with swapout/migration.
> 
> [1] https://lore.kernel.org/all/5cc05529-eb80-410e-bc26-233b0ba0b21f@redhat.com/

But I'm not suggesting that you should call folio_mapcount() without the lock.
I'm proposing this:

                if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
                        if (!folio_trylock(folio))
                                continue;
                        /*
-                        * If folio is shared with others, we mustn't clear
-                        * the folio's dirty flag.
+                        * If we have a large folio at this point, we know it is
+                        * fully mapped so if its mapcount is the same as its
+                        * number of pages, it must be exclusive.
                         */
-                       if (folio_mapcount(folio) != 1) {
+                       if (folio_mapcount(folio) != folio_nr_pages(folio)) {
                                folio_unlock(folio);
                                continue;
                        }

What am I missing?



* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 12:23         ` Lance Yang
@ 2024-04-11 13:51           ` Ryan Roberts
  2024-04-11 13:55             ` David Hildenbrand
  0 siblings, 1 reply; 21+ messages in thread
From: Ryan Roberts @ 2024-04-11 13:51 UTC (permalink / raw)
  To: Lance Yang
  Cc: David Hildenbrand, akpm, 21cnbao, mhocko, fengwei.yin, zokeefe,
	shy828301, xiehuan09, wangkefeng.wang, songmuchun, peterx,
	minchan, linux-mm, linux-kernel

On 11/04/2024 13:23, Lance Yang wrote:
> On Thu, Apr 11, 2024 at 7:27 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 11/04/2024 12:20, David Hildenbrand wrote:
>>> On 11.04.24 13:11, Ryan Roberts wrote:
>>>> On 08/04/2024 05:24, Lance Yang wrote:
>>>>> This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
>>>>> (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
>>>>> splitting if the large folio is fully mapped within the target range.
>>>>>
>>>>> If a large folio is locked or shared, or if we fail to split it, we just
>>>>> leave it in place and advance to the next PTE in the range. But note that
>>>>> the behavior is changed; previously, any failure of this sort would cause
>>>>> the entire operation to give up. As large folios become more common,
>>>>> sticking to the old way could result in wasted opportunities.
>>>>>
>>>>> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
>>>>> the same size results in the following runtimes for madvise(MADV_FREE) in
>>>>> seconds (shorter is better):
>>>>>
>>>>> Folio Size |   Old    |   New    | Change
>>>>> ------------------------------------------
>>>>>        4KiB | 0.590251 | 0.590259 |    0%
>>>>>       16KiB | 2.990447 | 0.185655 |  -94%
>>>>>       32KiB | 2.547831 | 0.104870 |  -95%
>>>>>       64KiB | 2.457796 | 0.052812 |  -97%
>>>>>      128KiB | 2.281034 | 0.032777 |  -99%
>>>>>      256KiB | 2.230387 | 0.017496 |  -99%
>>>>>      512KiB | 2.189106 | 0.010781 |  -99%
>>>>>     1024KiB | 2.183949 | 0.007753 |  -99%
>>>>>     2048KiB | 0.002799 | 0.002804 |    0%
>>>>>
>>>>> [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
>>>>> [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
>>>>>
>>>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
>>>>> ---
>>>>>   include/linux/pgtable.h |  34 +++++++++
>>>>>   mm/internal.h           |  12 +++-
>>>>>   mm/madvise.c            | 149 ++++++++++++++++++++++------------------
>>>>>   mm/memory.c             |   4 +-
>>>>>   4 files changed, 129 insertions(+), 70 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>> index 0f4b2faa1d71..4dd442787420 100644
>>>>> --- a/include/linux/pgtable.h
>>>>> +++ b/include/linux/pgtable.h
>>>>> @@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct
>>>>> *mm,
>>>>>   }
>>>>>   #endif
>>>>>   +#ifndef mkold_clean_ptes
>>>>> +/**
>>>>> + * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
>>>>> + *        as old and clean.
>>>>> + * @mm: Address space the pages are mapped into.
>>>>> + * @addr: Address the first page is mapped at.
>>>>> + * @ptep: Page table pointer for the first entry.
>>>>> + * @nr: Number of entries to mark old and clean.
>>>>> + *
>>>>> + * May be overridden by the architecture; otherwise, implemented by
>>>>> + * get_and_clear/modify/set for each pte in the range.
>>>>> + *
>>>>> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
>>>>> + * some PTEs might be write-protected.
>>>>> + *
>>>>> + * Context: The caller holds the page table lock.  The PTEs map consecutive
>>>>> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
>>>>> + */
>>>>> +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
>>>>> +                    pte_t *ptep, unsigned int nr)
>>>>
> 
> Thanks for the suggestions, Ryan, David!
> 
>>>> Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
>>>> (which I added as part of swap-out) to something like:
> 
> Yeah, this is definitely cleaner than before.
> 
>>>>
>>>> clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
>>>>                pte_t *ptep, unsigned int nr,
>>>>                bool clear_young, bool clear_dirty);
>>>>
>>>> Then we can use the same function for both use cases and also have the ability
>>>> to only clear dirty in future if we ever need it. The other advantage is that we
>>>> only need to plumb a single function down the arm64 arch code. As it currently
>>>> stands, those 2 functions would be duplicating most of their code.
> 
> Agreed. It's indeed a good idea to use a single function for both use cases.
> 
>>>
>>> Yes. Maybe better use proper __bitwise flags, the compiler should be smart
>>> enough to optimize either way.
> 
> Nice. I'll use the __bitwise flags as the input.
> 
>>
>> Agreed. I was also thinking perhaps it makes sense to start using output bitwise
>> flags for folio_pte_batch() since this patch set takes us up to 3 optional bool
>> pointers for different things. Might be cleaner to have input flags to tell it
>> what we care about and output flags to highlight those things. I guess the
>> compiler should be able to optimize in the same way.
>>
> 
> Should I start using output bitwise flags for folio_pte_batch() in
> this patch set?

I don't think it's crucial (yet). I'd leave it as you have done it for now,
unless David shouts.

> 
> Thanks,
> Lance



* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 13:51           ` Ryan Roberts
@ 2024-04-11 13:55             ` David Hildenbrand
  0 siblings, 0 replies; 21+ messages in thread
From: David Hildenbrand @ 2024-04-11 13:55 UTC (permalink / raw)
  To: Ryan Roberts, Lance Yang
  Cc: akpm, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 11.04.24 15:51, Ryan Roberts wrote:
> On 11/04/2024 13:23, Lance Yang wrote:
>> On Thu, Apr 11, 2024 at 7:27 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>
>>> On 11/04/2024 12:20, David Hildenbrand wrote:
>>>> On 11.04.24 13:11, Ryan Roberts wrote:
>>>>> On 08/04/2024 05:24, Lance Yang wrote:
>>>>>> This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
>>>>>> (Inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
>>>>>> splitting if the large folio is fully mapped within the target range.
>>>>>>
>>>>>> If a large folio is locked or shared, or if we fail to split it, we just
>>>>>> leave it in place and advance to the next PTE in the range. But note that
>>>>>> the behavior is changed; previously, any failure of this sort would cause
>>>>>> the entire operation to give up. As large folios become more common,
>>>>>> sticking to the old way could result in wasted opportunities.
>>>>>>
>>>>>> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
>>>>>> the same size results in the following runtimes for madvise(MADV_FREE) in
>>>>>> seconds (shorter is better):
>>>>>>
>>>>>> Folio Size |   Old    |   New    | Change
>>>>>> ------------------------------------------
>>>>>>         4KiB | 0.590251 | 0.590259 |    0%
>>>>>>        16KiB | 2.990447 | 0.185655 |  -94%
>>>>>>        32KiB | 2.547831 | 0.104870 |  -95%
>>>>>>        64KiB | 2.457796 | 0.052812 |  -97%
>>>>>>       128KiB | 2.281034 | 0.032777 |  -99%
>>>>>>       256KiB | 2.230387 | 0.017496 |  -99%
>>>>>>       512KiB | 2.189106 | 0.010781 |  -99%
>>>>>>      1024KiB | 2.183949 | 0.007753 |  -99%
>>>>>>      2048KiB | 0.002799 | 0.002804 |    0%
>>>>>>
>>>>>> [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
>>>>>> [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
>>>>>>
>>>>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
>>>>>> ---
>>>>>>    include/linux/pgtable.h |  34 +++++++++
>>>>>>    mm/internal.h           |  12 +++-
>>>>>>    mm/madvise.c            | 149 ++++++++++++++++++++++------------------
>>>>>>    mm/memory.c             |   4 +-
>>>>>>    4 files changed, 129 insertions(+), 70 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>> index 0f4b2faa1d71..4dd442787420 100644
>>>>>> --- a/include/linux/pgtable.h
>>>>>> +++ b/include/linux/pgtable.h
>>>>>> @@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct
>>>>>> *mm,
>>>>>>    }
>>>>>>    #endif
>>>>>>    +#ifndef mkold_clean_ptes
>>>>>> +/**
>>>>>> + * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
>>>>>> + *        as old and clean.
>>>>>> + * @mm: Address space the pages are mapped into.
>>>>>> + * @addr: Address the first page is mapped at.
>>>>>> + * @ptep: Page table pointer for the first entry.
>>>>>> + * @nr: Number of entries to mark old and clean.
>>>>>> + *
>>>>>> + * May be overridden by the architecture; otherwise, implemented by
>>>>>> + * get_and_clear/modify/set for each pte in the range.
>>>>>> + *
>>>>>> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
>>>>>> + * some PTEs might be write-protected.
>>>>>> + *
>>>>>> + * Context: The caller holds the page table lock.  The PTEs map consecutive
>>>>>> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
>>>>>> + */
>>>>>> +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
>>>>>> +                    pte_t *ptep, unsigned int nr)
>>>>>
>>
>> Thanks for the suggestions, Ryan, David!
>>
>>>>> Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
>>>>> (which I added as part of swap-out) to something like:
>>
>> Yeah, this is definitely cleaner than before.
>>
>>>>>
>>>>> clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
>>>>>                 pte_t *ptep, unsigned int nr,
>>>>>                 bool clear_young, bool clear_dirty);
>>>>>
>>>>> Then we can use the same function for both use cases and also have the ability
>>>>> to only clear dirty in future if we ever need it. The other advantage is that we
>>>>> only need to plumb a single function down the arm64 arch code. As it currently
>>>>> stands, those 2 functions would be duplicating most of their code.
>>
>> Agreed. It's indeed a good idea to use a single function for both use cases.
>>
>>>>
>>>> Yes. Maybe better use proper __bitwise flags, the compiler should be smart
>>>> enough to optimize either way.
>>
>> Nice. I'll use the __bitwise flags as the input.
>>
>>>
>>> Agreed. I was also thinking perhaps it makes sense to start using output bitwise
>>> flags for folio_pte_batch() since this patch set takes us up to 3 optional bool
>>> pointers for different things. Might be cleaner to have input flags to tell it
>>> what we care about and output flags to highlight those things. I guess the
>>> compiler should be able to optimize in the same way.
>>>
>>
>> Should I start using output bitwise flags for folio_pte_batch() in
>> this patch set?
> 
> I don't think its crucial (yet). I'd leave it as you have done it for now,
> unless David shouts.

Let's do that separately, and investigate if the compiler actually is 
smart enough ... :)

-- 
Cheers,

David / dhildenb



* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 13:48       ` Ryan Roberts
@ 2024-04-11 14:07         ` Lance Yang
  2024-04-11 14:39           ` Ryan Roberts
  0 siblings, 1 reply; 21+ messages in thread
From: Lance Yang @ 2024-04-11 14:07 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: akpm, david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On Thu, Apr 11, 2024 at 9:48 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> [...]
>
> >>> +
> >>> +             if (!folio_trylock(folio))
> >>> +                     continue;
> >>
> >> This is still wrong. This should all be protected by the "if
> >> (folio_test_swapcache(folio) || folio_test_dirty(folio))" as it was previously
> >> so that you only call folio_trylock() if that condition is true. You are
> >> unconditionally locking here, then unlocking, then relocking below if the
> >> condition is met. Just put everything inside the condition and lock once.
> >
> > I'm not sure if it's safe to call folio_mapcount() without holding the
> > folio lock.
> >
> > As mentioned earlier by David in the v2[1]
> >> What could work for large folios is making sure that #ptes that map the
> >> folio here correspond to the folio_mapcount(). And folio_mapcount()
> >> should be called under folio lock, to avoid racing with swapout/migration.
> >
> > [1] https://lore.kernel.org/all/5cc05529-eb80-410e-bc26-233b0ba0b21f@redhat.com/
>
> But I'm not suggesting that you should call folio_mapcount() without the lock.
> I'm proposing this:
>
>                 if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
>                         if (!folio_trylock(folio))
>                                 continue;
>                         /*
> -                        * If folio is shared with others, we mustn't clear
> -                        * the folio's dirty flag.
> +                        * If we have a large folio at this point, we know it is
> +                        * fully mapped so if its mapcount is the same as its
> +                        * number of pages, it must be exclusive.
>                          */
> -                       if (folio_mapcount(folio) != 1) {
> +                       if (folio_mapcount(folio) != folio_nr_pages(folio)) {
>                                 folio_unlock(folio);
>                                 continue;
>                         }

IIUC, if the folio is clean and not in the swapcache, we still need to
compare the number of batched PTEs against folio_mapcount().

Thanks,
Lance

>
> What am I missing?
>


* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 14:07         ` Lance Yang
@ 2024-04-11 14:39           ` Ryan Roberts
  2024-04-11 14:42             ` David Hildenbrand
  2024-04-12  1:48             ` Lance Yang
  0 siblings, 2 replies; 21+ messages in thread
From: Ryan Roberts @ 2024-04-11 14:39 UTC (permalink / raw)
  To: Lance Yang
  Cc: akpm, david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 11/04/2024 15:07, Lance Yang wrote:
> On Thu, Apr 11, 2024 at 9:48 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> [...]
>>
>>>>> +
>>>>> +             if (!folio_trylock(folio))
>>>>> +                     continue;
>>>>
>>>> This is still wrong. This should all be protected by the "if
>>>> (folio_test_swapcache(folio) || folio_test_dirty(folio))" as it was previously
>>>> so that you only call folio_trylock() if that condition is true. You are
>>>> unconditionally locking here, then unlocking, then relocking below if the
>>>> condition is met. Just put everything inside the condition and lock once.
>>>
>>> I'm not sure if it's safe to call folio_mapcount() without holding the
>>> folio lock.
>>>
>>> As mentioned earlier by David in the v2[1]
>>>> What could work for large folios is making sure that #ptes that map the
>>>> folio here correspond to the folio_mapcount(). And folio_mapcount()
>>>> should be called under folio lock, to avoid racing with swapout/migration.
>>>
>>> [1] https://lore.kernel.org/all/5cc05529-eb80-410e-bc26-233b0ba0b21f@redhat.com/
>>
>> But I'm not suggesting that you should call folio_mapcount() without the lock.
>> I'm proposing this:
>>
>>                 if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
>>                         if (!folio_trylock(folio))
>>                                 continue;
>>                         /*
>> -                        * If folio is shared with others, we mustn't clear
>> -                        * the folio's dirty flag.
>> +                        * If we have a large folio at this point, we know it is
>> +                        * fully mapped so if its mapcount is the same as its
>> +                        * number of pages, it must be exclusive.
>>                          */
>> -                       if (folio_mapcount(folio) != 1) {
>> +                       if (folio_mapcount(folio) != folio_nr_pages(folio)) {
>>                                 folio_unlock(folio);
>>                                 continue;
>>                         }
> 
> IIUC, if the folio is clean and not in the swapcache, we still need to
> compare the number of batched PTEs against folio_mapcount().

Why? That's not how the old code worked. In fact the comment says that the
reason for the exclusive check is to avoid marking a dirty *folio* as clean if
shared; that would be bad because we could throw away data that others relied
upon. It's perfectly safe to clear the dirty flag from the *pte* even if it is
shared; the ptes are private to the process so that won't affect sharers.

You should just follow the pattern already established by the original code.
The only difference is that because the folio is now (potentially) large, you
have to change the way to detect exclusivity.

> 
> Thanks,
> Lance
> 
>>
>> What am I missing?
>>



* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 14:39           ` Ryan Roberts
@ 2024-04-11 14:42             ` David Hildenbrand
  2024-04-12  1:48             ` Lance Yang
  1 sibling, 0 replies; 21+ messages in thread
From: David Hildenbrand @ 2024-04-11 14:42 UTC (permalink / raw)
  To: Ryan Roberts, Lance Yang
  Cc: akpm, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 11.04.24 16:39, Ryan Roberts wrote:
> On 11/04/2024 15:07, Lance Yang wrote:
>> On Thu, Apr 11, 2024 at 9:48 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>
>>> [...]
>>>
>>>>>> +
>>>>>> +             if (!folio_trylock(folio))
>>>>>> +                     continue;
>>>>>
>>>>> This is still wrong. This should all be protected by the "if
>>>>> (folio_test_swapcache(folio) || folio_test_dirty(folio))" as it was previously
>>>>> so that you only call folio_trylock() if that condition is true. You are
>>>>> unconditionally locking here, then unlocking, then relocking below if the
>>>>> condition is met. Just put everything inside the condition and lock once.
>>>>
>>>> I'm not sure if it's safe to call folio_mapcount() without holding the
>>>> folio lock.
>>>>
>>>> As mentioned earlier by David in the v2[1]
>>>>> What could work for large folios is making sure that #ptes that map the
>>>>> folio here correspond to the folio_mapcount(). And folio_mapcount()
>>>>> should be called under folio lock, to avoid racing with swapout/migration.
>>>>
>>>> [1] https://lore.kernel.org/all/5cc05529-eb80-410e-bc26-233b0ba0b21f@redhat.com/
>>>
>>> But I'm not suggesting that you should call folio_mapcount() without the lock.
>>> I'm proposing this:
>>>
>>>                  if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
>>>                          if (!folio_trylock(folio))
>>>                                  continue;
>>>                          /*
>>> -                        * If folio is shared with others, we mustn't clear
>>> -                        * the folio's dirty flag.
>>> +                        * If we have a large folio at this point, we know it is
>>> +                        * fully mapped so if its mapcount is the same as its
>>> +                        * number of pages, it must be exclusive.
>>>                           */
>>> -                       if (folio_mapcount(folio) != 1) {
>>> +                       if (folio_mapcount(folio) != folio_nr_pages(folio)) {
>>>                                  folio_unlock(folio);
>>>                                  continue;
>>>                          }
>>
>> IIUC, if the folio is clean and not in the swapcache, we still need to
>> compare the number of batched PTEs against folio_mapcount().
> 
> Why? That's not how the old code worked. In fact the comment says that the
> reason for the exclusive check is to avoid marking a dirty *folio* as clean if
> shared; that would be bad because we could throw away data that others relied
> upon. It's perfectly safe to clear the dirty flag from the *pte* even if it is
> shared; the ptes are private to the process so that won't affect sharers.
> 
> You should just follow the pattern already estabilished by the original code.
> The only difference is that because the folio is now (potentially) large, you
> have to change the way to detect exclusivity.

+1

-- 
Cheers,

David / dhildenb



* Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
  2024-04-11 14:39           ` Ryan Roberts
  2024-04-11 14:42             ` David Hildenbrand
@ 2024-04-12  1:48             ` Lance Yang
  1 sibling, 0 replies; 21+ messages in thread
From: Lance Yang @ 2024-04-12  1:48 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: akpm, david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On Thu, Apr 11, 2024 at 10:39 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 11/04/2024 15:07, Lance Yang wrote:
> > On Thu, Apr 11, 2024 at 9:48 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> [...]
> >>
> >>>>> +
> >>>>> +             if (!folio_trylock(folio))
> >>>>> +                     continue;
> >>>>
> >>>> This is still wrong. This should all be protected by the "if
> >>>> (folio_test_swapcache(folio) || folio_test_dirty(folio))" as it was previously
> >>>> so that you only call folio_trylock() if that condition is true. You are
> >>>> unconditionally locking here, then unlocking, then relocking below if the
> >>>> condition is met. Just put everything inside the condition and lock once.
> >>>
> >>> I'm not sure if it's safe to call folio_mapcount() without holding the
> >>> folio lock.
> >>>
> >>> As mentioned earlier by David in the v2[1]
> >>>> What could work for large folios is making sure that #ptes that map the
> >>>> folio here correspond to the folio_mapcount(). And folio_mapcount()
> >>>> should be called under folio lock, to avoid racing with swapout/migration.
> >>>
> >>> [1] https://lore.kernel.org/all/5cc05529-eb80-410e-bc26-233b0ba0b21f@redhat.com/
> >>
> >> But I'm not suggesting that you should call folio_mapcount() without the lock.
> >> I'm proposing this:
> >>
> >>                 if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
> >>                         if (!folio_trylock(folio))
> >>                                 continue;
> >>                         /*
> >> -                        * If folio is shared with others, we mustn't clear
> >> -                        * the folio's dirty flag.
> >> +                        * If we have a large folio at this point, we know it is
> >> +                        * fully mapped so if its mapcount is the same as its
> >> +                        * number of pages, it must be exclusive.
> >>                          */
> >> -                       if (folio_mapcount(folio) != 1) {
> >> +                       if (folio_mapcount(folio) != folio_nr_pages(folio)) {
> >>                                 folio_unlock(folio);
> >>                                 continue;
> >>                         }
> >
> > IIUC, if the folio is clean and not in the swapcache, we still need to
> > compare the number of batched PTEs against folio_mapcount().
>
> Why? That's not how the old code worked. In fact the comment says that the
> reason for the exclusive check is to avoid marking a dirty *folio* as clean if
> shared; that would be bad because we could throw away data that others relied
> upon. It's perfectly safe to clear the dirty flag from the *pte* even if it is
> shared; the ptes are private to the process so that won't affect sharers.
>
> You should just follow the pattern already established by the original code.
> The only difference is that because the folio is now (potentially) large, you
> have to change the way to detect exclusivity.

Thanks a lot for your patience and help!

My bad for the oversight and mistake :(
I'll take another look at the original code and make adjustments following the
established pattern.
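
Roughly like this, keeping the existing swapcache/dirty gating and only
changing the exclusivity test for large folios (untested sketch of the gated
block only; the folio_free_swap() handling stays as in the current order-0
path):

if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
        if (!folio_trylock(folio))
                continue;
        /*
         * If the folio is shared with others, we mustn't clear its
         * dirty flag. A large folio is known to be fully mapped here,
         * so mapcount == nr_pages means it is exclusive.
         */
        if (folio_mapcount(folio) != folio_nr_pages(folio)) {
                folio_unlock(folio);
                continue;
        }

        if (folio_test_swapcache(folio) &&
            !folio_free_swap(folio)) {
                folio_unlock(folio);
                continue;
        }

        folio_clear_dirty(folio);
        folio_unlock(folio);
}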

Thanks,
Lance

>
> >
> > Thanks,
> > Lance
> >
> >>
> >> What am I missing?
> >>
>

* Re: [PATCH v5 2/2] mm/arm64: override mkold_clean_ptes() batch helper
  2024-04-11 13:17   ` Ryan Roberts
@ 2024-04-12  2:09     ` Lance Yang
  2024-04-12 11:21       ` Ryan Roberts
  0 siblings, 1 reply; 21+ messages in thread
From: Lance Yang @ 2024-04-12  2:09 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: akpm, david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On Thu, Apr 11, 2024 at 9:17 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 08/04/2024 05:24, Lance Yang wrote:
> > The per-pte get_and_clear/modify/set approach would result in
> > unfolding/refolding for contpte mappings on arm64. So we need
> > to override mkold_clean_ptes() for arm64 to avoid it.
>
> IIRC, in the last version, I suggested copying the wrprotect_ptes() pattern to
> correctly iterate over contpte blocks. I meant for you to take it as inspiration
> but looks like you have done a carbon copy, including lots of things that are
> unneeded here. That's my fault for not being clear - sorry!

My bad. I must have misunderstood your intention.

>
>
> >
> > Suggested-by: David Hildenbrand <david@redhat.com>
> > Suggested-by: Barry Song <21cnbao@gmail.com>
> > Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
> > Signed-off-by: Lance Yang <ioworker0@gmail.com>
> > ---
> >  arch/arm64/include/asm/pgtable.h | 55 ++++++++++++++++++++++++++++++++
> >  arch/arm64/mm/contpte.c          | 15 +++++++++
> >  2 files changed, 70 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 9fd8613b2db2..395754638a9a 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -1223,6 +1223,34 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
> >               __ptep_set_wrprotect(mm, address, ptep);
> >  }
> >
> > +static inline void ___ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
> > +                                    pte_t *ptep, pte_t pte)
> > +{
> > +     pte_t old_pte;
> > +
> > +     do {
> > +             old_pte = pte;
> > +             pte = pte_mkclean(pte_mkold(pte));
> > +             pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
> > +                                            pte_val(old_pte), pte_val(pte));
> > +     } while (pte_val(pte) != pte_val(old_pte));
> > +}
>
> Given you are clearing old and dirty, you have nothing to race against, so you
> shouldn't need the cmpxchg loop here; just a get/modify/set should do? Of course
> if you are setting one or the other, then you need the loop.

Got it.
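
For this helper, something like the below then (untested sketch; since both
the young and dirty bits are always cleared, a racing hardware AF/DBM update
would only set bits we are about to clear anyway):

static inline void ___ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
                                       pte_t *ptep, pte_t pte)
{
        /*
         * Both bits are cleared, so there is nothing to preserve from a
         * concurrent update; a plain read/modify/write via __set_pte()
         * is sufficient.
         */
        __set_pte(ptep, pte_mkclean(pte_mkold(pte)));
}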

>
> > +
> > +static inline void __ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
> > +                                   pte_t *ptep)
> > +{
> > +     ___ptep_mkold_clean(mm, addr, ptep, __ptep_get(ptep));
> > +}
>
> I don't see a need for this intermediate function.
>
> > +
> > +static inline void __mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> > +                                   pte_t *ptep, unsigned int nr)
> > +{
> > +     unsigned int i;
> > +
> > +     for (i = 0; i < nr; i++, addr += PAGE_SIZE, ptep++)
>
> It would probably be good to use the for() loop pattern used by the generic
> impls here too.

Got it.
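
So, following the loop used by the generic wrprotect_ptes() (untested sketch,
also dropping the intermediate __ptep_mkold_clean() helper as suggested
above):

static inline void __mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
                                      pte_t *ptep, unsigned int nr)
{
        for (;;) {
                ___ptep_mkold_clean(mm, addr, ptep, __ptep_get(ptep));
                if (--nr == 0)
                        break;
                ptep++;
                addr += PAGE_SIZE;
        }
}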

>
> > +             __ptep_mkold_clean(mm, addr, ptep);
> > +}
> > +
> >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >  #define __HAVE_ARCH_PMDP_SET_WRPROTECT
> >  static inline void pmdp_set_wrprotect(struct mm_struct *mm,
> > @@ -1379,6 +1407,8 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
> >  extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> >                               unsigned long addr, pte_t *ptep,
> >                               pte_t entry, int dirty);
> > +extern void contpte_mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> > +                             pte_t *ptep, unsigned int nr);
> >
> >  static __always_inline void contpte_try_fold(struct mm_struct *mm,
> >                               unsigned long addr, pte_t *ptep, pte_t pte)
> > @@ -1603,6 +1633,30 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> >       return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
> >  }
> >
> > +#define mkold_clean_ptes mkold_clean_ptes
> > +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> > +                                 pte_t *ptep, unsigned int nr)
> > +{
> > +     if (likely(nr == 1)) {
> > +             /*
> > +              * Optimization: mkold_clean_ptes() can only be called for present
> > +              * ptes so we only need to check contig bit as condition for unfold,
> > +              * and we can remove the contig bit from the pte we read to avoid
> > +              * re-reading. This speeds up madvise(MADV_FREE) which is sensitive
> > +              * for order-0 folios. Equivalent to contpte_try_unfold().
> > +              */
>
> Is this true? Do you have data that shows the cost? If not, I'd prefer to avoid
> the optimization and do it the more standard way:
>
> contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>
> > +             pte_t orig_pte = __ptep_get(ptep);
> > +
> > +             if (unlikely(pte_cont(orig_pte))) {
> > +                     __contpte_try_unfold(mm, addr, ptep, orig_pte);
> > +                     orig_pte = pte_mknoncont(orig_pte);
> > +             }
> > +             ___ptep_mkold_clean(mm, addr, ptep, orig_pte);
> > +     } else {
> > +             contpte_mkold_clean_ptes(mm, addr, ptep, nr);
> > +     }
>
> ...but I don't think you should ever need to unfold in the first place. Even if
> it's folded and you are trying to clear access/dirty for a single pte, you can
> just clear the whole block. See existing comment in
> contpte_ptep_test_and_clear_young().

Thanks for pointing that out.

>
> So this ends up as something like:
>
> static inline void clear_young_dirty_ptes(struct mm_struct *mm,
>                         unsigned long addr, pte_t *ptep, unsigned int nr,
>                         bool clear_young, bool clear_dirty)
> {
>         if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
>                 __clear_young_dirty_ptes(mm, addr, ptep, nr,
>                                         clear_young, clear_dirty);
>         else
>                 contpte_clear_young_dirty_ptes(mm, addr, ptep, nr,
>                                         clear_young, clear_dirty);
> }

Nice. I'll make sure to follow this approach.

>
>
> > +}
> > +
> >  #else /* CONFIG_ARM64_CONTPTE */
> >
> >  #define ptep_get                             __ptep_get
> > @@ -1622,6 +1676,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> >  #define wrprotect_ptes                               __wrprotect_ptes
> >  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> >  #define ptep_set_access_flags                        __ptep_set_access_flags
> > +#define mkold_clean_ptes                     __mkold_clean_ptes
> >
> >  #endif /* CONFIG_ARM64_CONTPTE */
> >
> > diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> > index 1b64b4c3f8bf..dbff9c5e9eff 100644
> > --- a/arch/arm64/mm/contpte.c
> > +++ b/arch/arm64/mm/contpte.c
> > @@ -361,6 +361,21 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
> >  }
> >  EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
> >
> > +void contpte_mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> > +                           pte_t *ptep, unsigned int nr)
> > +{
> > +     /*
> > +      * If clearing the young and dirty bits for an entire contig range, we can
> > +      * avoid unfolding. Just set old/clean and wait for the later mmu_gather
> > +      * flush to invalidate the tlb. If it's a partial range though, we need to
> > +      * unfold.
> > +      */
>
> nit: Please reflow comments like this to 80 cols.
>
> We can avoid unfolding in all cases. See existing comment in
> contpte_ptep_test_and_clear_young(). Suggest something like this (untested):
>
> void clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
>                             pte_t *ptep, unsigned int nr,
>                             bool clear_young, bool clear_dirty)
> {
>         /*
>          * We can safely clear access/dirty without needing to unfold from the
>          * architecture's perspective, even when contpte is set. If the range
>          * starts or ends midway through a contpte block, we can just expand to
>          * include the full contpte block. While this is not exactly what the
>          * core-mm asked for, it tracks access/dirty per folio, not per page.
>          * And since we only create a contpte block when it is covered by a
>          * single folio, we can get away with clearing access/dirty for the
>          * whole block.
>          */
>
>         unsigned long start = addr;
>         unsigned long end = start + nr * PAGE_SIZE;
>
>         if (pte_cont(__ptep_get(ptep + nr - 1)))
>                 end = ALIGN(end, CONT_PTE_SIZE);
>
>         if (pte_cont(__ptep_get(ptep))) {
>                 start = ALIGN_DOWN(start, CONT_PTE_SIZE);
>                 ptep = contpte_align_down(ptep);
>         }
>
>         __clear_young_dirty_ptes(mm, start, ptep, (end - start) / PAGE_SIZE,
>                                  clear_young, clear_dirty);
> }

Nice. Thanks a lot for your help!

Thanks,
Lance

>
> Thanks,
> Ryan
>
> > +
> > +     contpte_try_unfold_partial(mm, addr, ptep, nr);
> > +     __mkold_clean_ptes(mm, addr, ptep, nr);
> > +}
> > +EXPORT_SYMBOL_GPL(contpte_mkold_clean_ptes);
> > +
> >  int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> >                                       unsigned long addr, pte_t *ptep,
> >                                       pte_t entry, int dirty)
>

* Re: [PATCH v5 2/2] mm/arm64: override mkold_clean_ptes() batch helper
  2024-04-12  2:09     ` Lance Yang
@ 2024-04-12 11:21       ` Ryan Roberts
  0 siblings, 0 replies; 21+ messages in thread
From: Ryan Roberts @ 2024-04-12 11:21 UTC (permalink / raw)
  To: Lance Yang
  Cc: akpm, david, 21cnbao, mhocko, fengwei.yin, zokeefe, shy828301,
	xiehuan09, wangkefeng.wang, songmuchun, peterx, minchan,
	linux-mm, linux-kernel

On 12/04/2024 03:09, Lance Yang wrote:
> On Thu, Apr 11, 2024 at 9:17 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 08/04/2024 05:24, Lance Yang wrote:
>>> The per-pte get_and_clear/modify/set approach would result in
>>> unfolding/refolding for contpte mappings on arm64. So we need
>>> to override mkold_clean_ptes() for arm64 to avoid it.
>>
>> IIRC, in the last version, I suggested copying the wrprotect_ptes() pattern to
>> correctly iterate over contpte blocks. I meant for you to take it as inspiration
>> but looks like you have done a carbon copy, including lots of things that are
>> unneeded here. That's my fault for not being clear - sorry!
> 
> My bad. I must have misunderstood your intention.

Not at all - it was my bad. wrprotect_ptes() is nothing like what I eventually
suggested below, so sorry for the bad steer.


Thread overview: 21+ messages
2024-04-08  4:24 [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Lance Yang
2024-04-08  4:24 ` [PATCH v5 1/2] mm/madvise: optimize " Lance Yang
2024-04-11 11:11   ` Ryan Roberts
2024-04-11 11:20     ` David Hildenbrand
2024-04-11 11:27       ` Ryan Roberts
2024-04-11 12:23         ` Lance Yang
2024-04-11 13:51           ` Ryan Roberts
2024-04-11 13:55             ` David Hildenbrand
2024-04-11 12:46     ` Lance Yang
2024-04-11 13:48       ` Ryan Roberts
2024-04-11 14:07         ` Lance Yang
2024-04-11 14:39           ` Ryan Roberts
2024-04-11 14:42             ` David Hildenbrand
2024-04-12  1:48             ` Lance Yang
2024-04-08  4:24 ` [PATCH v5 2/2] mm/arm64: override mkold_clean_ptes() batch helper Lance Yang
2024-04-11 13:17   ` Ryan Roberts
2024-04-12  2:09     ` Lance Yang
2024-04-12 11:21       ` Ryan Roberts
2024-04-10 21:50 ` [PATCH v5 0/2] mm/madvise: enhance lazyfreeing with mTHP in madvise_free Andrew Morton
2024-04-11  5:01   ` Lance Yang
2024-04-11 10:29     ` Ryan Roberts
