* [PATCH -next rfc 0/9] mm: convert page cpupid functions to folios
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

The cpupid (or access time) used by NUMA balancing is stored in a
page's flags, or in its _last_cpupid field when
LAST_CPUPID_NOT_IN_PAGE_FLAGS is defined. This series converts the
page cpupid functions to folios: a new _last_cpupid field is added to
struct folio, which lets us use folio->_last_cpupid directly, and
page_cpupid_reset_last(), page_cpupid_xchg_last(),
xchg_page_access_time(), and page_cpupid_last() are converted to
their folio equivalents.
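
For reference, the converted helpers at the end of the series look
roughly like the sketch below (the LAST_CPUPID_NOT_IN_PAGE_FLAGS
variants, condensed from the diffs in the individual patches):

	/* Sketch of the converted API (LAST_CPUPID_NOT_IN_PAGE_FLAGS case). */
	static inline int folio_cpupid_last(struct folio *folio)
	{
		return folio->_last_cpupid;
	}

	static inline int folio_cpupid_xchg_last(struct folio *folio, int cpupid)
	{
		return xchg(&folio->_last_cpupid, cpupid & LAST_CPUPID_MASK);
	}

	static inline void folio_cpupid_reset_last(struct folio *folio)
	{
		folio->_last_cpupid = -1 & LAST_CPUPID_MASK;
	}

	static inline int xchg_folio_access_time(struct folio *folio, int time)
	{
		int last_time;

		last_time = folio_cpupid_xchg_last(folio,
						   time >> PAGE_ACCESS_TIME_BUCKETS);
		return last_time << PAGE_ACCESS_TIME_BUCKETS;
	}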

Kefeng Wang (9):
  mm_types: add _last_cpupid into folio
  mm: mprotect: use a folio in change_pte_range()
  mm: huge_memory: use a folio in change_huge_pmd()
  mm: convert xchg_page_access_time to xchg_folio_access_time()
  mm: convert page_cpupid_last() to folio_cpupid_last()
  mm: make wp_page_reuse() and finish_mkwrite_fault() to take a folio
  mm: convert page_cpupid_xchg_last() to folio_cpupid_xchg_last()
  mm: page_alloc: use a folio in free_pages_prepare()
  mm: convert page_cpupid_reset_last() to folio_cpupid_reset_last()

 include/linux/mm.h       | 40 ++++++++++++++++++++--------------------
 include/linux/mm_types.h | 13 +++++++++----
 kernel/sched/fair.c      |  4 ++--
 mm/huge_memory.c         | 17 +++++++++--------
 mm/memory.c              | 39 +++++++++++++++++++++------------------
 mm/migrate.c             |  4 ++--
 mm/mm_init.c             |  1 -
 mm/mmzone.c              |  6 +++---
 mm/mprotect.c            | 16 +++++++++-------
 mm/page_alloc.c          | 17 +++++++++--------
 10 files changed, 84 insertions(+), 73 deletions(-)

-- 
2.27.0


* [PATCH -next 1/9] mm_types: add _last_cpupid into folio
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

At present, only arc, sparc and m68k define WANT_PAGE_VIRTUAL, and
none of them support NUMA balancing; since struct page is aligned to
_struct_page_alignment, it is safe to move _last_cpupid before
'virtual' in struct page. Meanwhile, add it to struct folio, which
lets us use folio->_last_cpupid directly.
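
The FOLIO_MATCH assertions added below keep this overlay honest: each
one statically checks that the folio field sits at the same offset as
the corresponding page field, roughly:

	/* FOLIO_MATCH(_last_cpupid, _last_cpupid) expands to roughly: */
	static_assert(offsetof(struct page, _last_cpupid) ==
		      offsetof(struct folio, _last_cpupid));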

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm_types.h | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5a995089cbf5..2fdfddd8264a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -183,6 +183,9 @@ struct page {
 #ifdef CONFIG_MEMCG
 	unsigned long memcg_data;
 #endif
+#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
+	int _last_cpupid;
+#endif
 
 	/*
 	 * On machines where all RAM is mapped into kernel address space,
@@ -210,10 +213,6 @@ struct page {
 	struct page *kmsan_shadow;
 	struct page *kmsan_origin;
 #endif
-
-#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
-	int _last_cpupid;
-#endif
 } _struct_page_alignment;
 
 /*
@@ -328,6 +327,9 @@ struct folio {
 			atomic_t _refcount;
 #ifdef CONFIG_MEMCG
 			unsigned long memcg_data;
+#endif
+#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
+			int _last_cpupid;
 #endif
 	/* private: the union with struct page is transitional */
 		};
@@ -384,6 +386,9 @@ FOLIO_MATCH(_refcount, _refcount);
 #ifdef CONFIG_MEMCG
 FOLIO_MATCH(memcg_data, memcg_data);
 #endif
+#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
+FOLIO_MATCH(_last_cpupid, _last_cpupid);
+#endif
 #undef FOLIO_MATCH
 #define FOLIO_MATCH(pg, fl)						\
 	static_assert(offsetof(struct folio, fl) ==			\
-- 
2.27.0


* [PATCH -next 2/9] mm: mprotect: use a folio in change_pte_range()
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

Use a folio in change_pte_range() to save three compound_head() calls.
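
Each of the replaced page-based tests resolves the head page on every
call; a simplified sketch of what that per-call lookup does (condensed
from the real compound_head(), details omitted):

	/* Simplified sketch of the per-call head-page lookup. */
	static inline struct page *compound_head(struct page *page)
	{
		unsigned long head = READ_ONCE(page->compound_head);

		if (head & 1)			/* tail page: low bit set */
			return (struct page *)(head - 1);
		return page;
	}

With vm_normal_folio(), the lookup happens once and the folio_*
helpers use the result directly.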

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/mprotect.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index b94fbb45d5c7..459daa987131 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -114,7 +114,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 			 * pages. See similar comment in change_huge_pmd.
 			 */
 			if (prot_numa) {
-				struct page *page;
+				struct folio *folio;
 				int nid;
 				bool toptier;
 
@@ -122,13 +122,14 @@ static long change_pte_range(struct mmu_gather *tlb,
 				if (pte_protnone(oldpte))
 					continue;
 
-				page = vm_normal_page(vma, addr, oldpte);
-				if (!page || is_zone_device_page(page) || PageKsm(page))
+				folio = vm_normal_folio(vma, addr, oldpte);
+				if (!folio || folio_is_zone_device(folio) ||
+				    folio_test_ksm(folio))
 					continue;
 
 				/* Also skip shared copy-on-write pages */
 				if (is_cow_mapping(vma->vm_flags) &&
-				    page_count(page) != 1)
+				    folio_ref_count(folio) != 1)
 					continue;
 
 				/*
@@ -136,14 +137,15 @@ static long change_pte_range(struct mmu_gather *tlb,
 				 * it cannot move them all from MIGRATE_ASYNC
 				 * context.
 				 */
-				if (page_is_file_lru(page) && PageDirty(page))
+				if (folio_is_file_lru(folio) &&
+				    folio_test_dirty(folio))
 					continue;
 
 				/*
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
 				 */
-				nid = page_to_nid(page);
+				nid = folio_nid(folio);
 				if (target_node == nid)
 					continue;
 				toptier = node_is_toptier(nid);
@@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 					continue;
 				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 				    !toptier)
-					xchg_page_access_time(page,
+					xchg_page_access_time(&folio->page,
 						jiffies_to_msecs(jiffies));
 			}
 
-- 
2.27.0


* [PATCH -next 3/9] mm: huge_memory: use a folio in change_huge_pmd()
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

Use a folio in change_huge_pmd(); this is in preparation for the
conversion of xchg_page_access_time() to folios.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0f93a73115f7..c7efa214add8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1849,7 +1849,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	if (is_swap_pmd(*pmd)) {
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
-		struct page *page = pfn_swap_entry_to_page(entry);
+		struct folio *folio = page_folio(pfn_swap_entry_to_page(entry));
 		pmd_t newpmd;
 
 		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
@@ -1858,7 +1858,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			 * A protection check is difficult so
 			 * just be safe and disable write
 			 */
-			if (PageAnon(page))
+			if (folio_test_anon(folio))
 				entry = make_readable_exclusive_migration_entry(swp_offset(entry));
 			else
 				entry = make_readable_migration_entry(swp_offset(entry));
@@ -1880,7 +1880,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #endif
 
 	if (prot_numa) {
-		struct page *page;
+		struct folio *folio;
 		bool toptier;
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
@@ -1893,8 +1893,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (pmd_protnone(*pmd))
 			goto unlock;
 
-		page = pmd_page(*pmd);
-		toptier = node_is_toptier(page_to_nid(page));
+		folio = page_folio(pmd_page(*pmd));
+		toptier = node_is_toptier(folio_nid(folio));
 		/*
 		 * Skip scanning top tier node if normal numa
 		 * balancing is disabled
@@ -1905,7 +1905,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 		    !toptier)
-			xchg_page_access_time(page, jiffies_to_msecs(jiffies));
+			xchg_page_access_time(&folio->page,
+					      jiffies_to_msecs(jiffies));
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
-- 
2.27.0


* [PATCH -next 4/9] mm: convert xchg_page_access_time to xchg_folio_access_time()
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

Make xchg_page_access_time() take a folio, and rename it to
xchg_folio_access_time(), since all callers now have a folio.
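
The shift in this helper trades low-order time bits for space in the
cpupid field; a worked example with a hypothetical bucket shift of 4
(the real PAGE_ACCESS_TIME_BUCKETS value is configuration-dependent):

	int time = 0x1234;		/* ms */
	int stored = time >> 4;		/* 0x123 is what gets recorded */
	int back = stored << 4;		/* 0x1230: the low bits are lost */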

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h  | 7 ++++---
 kernel/sched/fair.c | 2 +-
 mm/huge_memory.c    | 4 ++--
 mm/mprotect.c       | 2 +-
 4 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a1d0c82ac9a7..49b9fa383e7d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1711,11 +1711,12 @@ static inline void page_cpupid_reset_last(struct page *page)
 }
 #endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */
 
-static inline int xchg_page_access_time(struct page *page, int time)
+static inline int xchg_folio_access_time(struct folio *folio, int time)
 {
 	int last_time;
 
-	last_time = page_cpupid_xchg_last(page, time >> PAGE_ACCESS_TIME_BUCKETS);
+	last_time = page_cpupid_xchg_last(&folio->page,
+					  time >> PAGE_ACCESS_TIME_BUCKETS);
 	return last_time << PAGE_ACCESS_TIME_BUCKETS;
 }
 
@@ -1734,7 +1735,7 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 	return page_to_nid(page); /* XXX */
 }
 
-static inline int xchg_page_access_time(struct page *page, int time)
+static inline int xchg_folio_access_time(struct folio *folio, int time)
 {
 	return 0;
 }
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b507ec29e1e1..afb9dc98a8ee 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1714,7 +1714,7 @@ static int numa_hint_fault_latency(struct folio *folio)
 	int last_time, time;
 
 	time = jiffies_to_msecs(jiffies);
-	last_time = xchg_page_access_time(&folio->page, time);
+	last_time = xchg_folio_access_time(folio, time);
 
 	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c7efa214add8..c4f4951615fd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1905,8 +1905,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 		    !toptier)
-			xchg_page_access_time(&folio->page,
-					      jiffies_to_msecs(jiffies));
+			xchg_folio_access_time(folio,
+					       jiffies_to_msecs(jiffies));
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 459daa987131..1c556651888a 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -159,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 					continue;
 				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 				    !toptier)
-					xchg_page_access_time(&folio->page,
+					xchg_folio_access_time(folio,
 						jiffies_to_msecs(jiffies));
 			}
 
-- 
2.27.0


* [PATCH -next 5/9] mm: convert page_cpupid_last() to folio_cpupid_last()
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

Make page_cpupid_last() take a folio, and rename it to
folio_cpupid_last(), since all callers now have a folio.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 12 ++++++------
 mm/huge_memory.c   |  4 ++--
 mm/memory.c        |  2 +-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 49b9fa383e7d..aa7fdda1b56c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1689,18 +1689,18 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 	return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK);
 }
 
-static inline int page_cpupid_last(struct page *page)
+static inline int folio_cpupid_last(struct folio *folio)
 {
-	return page->_last_cpupid;
+	return folio->_last_cpupid;
 }
 static inline void page_cpupid_reset_last(struct page *page)
 {
 	page->_last_cpupid = -1 & LAST_CPUPID_MASK;
 }
 #else
-static inline int page_cpupid_last(struct page *page)
+static inline int folio_cpupid_last(struct folio *folio)
 {
-	return (page->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
+	return (folio->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
 }
 
 extern int page_cpupid_xchg_last(struct page *page, int cpupid);
@@ -1740,9 +1740,9 @@ static inline int xchg_folio_access_time(struct folio *folio, int time)
 	return 0;
 }
 
-static inline int page_cpupid_last(struct page *page)
+static inline int folio_cpupid_last(struct folio *folio)
 {
-	return page_to_nid(page); /* XXX */
+	return folio_nid(folio); /* XXX */
 }
 
 static inline int cpupid_to_nid(int cpupid)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c4f4951615fd..93981a759daf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1555,7 +1555,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	 * to record page access time.  So use default value.
 	 */
 	if (node_is_toptier(nid))
-		last_cpupid = page_cpupid_last(&folio->page);
+		last_cpupid = folio_cpupid_last(folio);
 	target_nid = numa_migrate_prep(folio, vma, haddr, nid, &flags);
 	if (target_nid == NUMA_NO_NODE) {
 		folio_put(folio);
@@ -2508,7 +2508,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	if (page_is_idle(head))
 		set_page_idle(page_tail);
 
-	page_cpupid_xchg_last(page_tail, page_cpupid_last(head));
+	page_cpupid_xchg_last(page_tail, folio_cpupid_last(folio));
 
 	/*
 	 * always add to the tail because some iterators expect new
diff --git a/mm/memory.c b/mm/memory.c
index 29c5618c91e5..5ab6e8d45a7d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4814,7 +4814,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	    !node_is_toptier(nid))
 		last_cpupid = (-1 & LAST_CPUPID_MASK);
 	else
-		last_cpupid = page_cpupid_last(&folio->page);
+		last_cpupid = folio_cpupid_last(folio);
 	target_nid = numa_migrate_prep(folio, vma, vmf->address, nid, &flags);
 	if (target_nid == NUMA_NO_NODE) {
 		folio_put(folio);
-- 
2.27.0


* [PATCH -next 6/9] mm: make wp_page_reuse() and finish_mkwrite_fault() to take a folio
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

Make finish_mkwrite_fault() static, since its only callers live in
mm/memory.c, and convert wp_page_reuse() and finish_mkwrite_fault()
to take a folio, in preparation for converting
page_cpupid_xchg_last() to folios.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h |  1 -
 mm/memory.c        | 37 ++++++++++++++++++++-----------------
 2 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index aa7fdda1b56c..9933f6345e66 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1335,7 +1335,6 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		struct page *page, unsigned int nr, unsigned long addr);
 
 vm_fault_t finish_fault(struct vm_fault *vmf);
-vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #endif
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 5ab6e8d45a7d..119c40e4465e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3014,23 +3014,24 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
  * case, all we need to do here is to mark the page as writable and update
  * any related book-keeping.
  */
-static inline void wp_page_reuse(struct vm_fault *vmf)
+static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
 	__releases(vmf->ptl)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct page *page = vmf->page;
 	pte_t entry;
 
 	VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
-	VM_BUG_ON(page && PageAnon(page) && !PageAnonExclusive(page));
+	if (folio) {
+		VM_BUG_ON(folio_test_anon(folio) &&
+			  !PageAnonExclusive(vmf->page));
 
-	/*
-	 * Clear the pages cpupid information as the existing
-	 * information potentially belongs to a now completely
-	 * unrelated process.
-	 */
-	if (page)
-		page_cpupid_xchg_last(page, (1 << LAST_CPUPID_SHIFT) - 1);
+		/*
+		 * Clear the pages cpupid information as the existing
+		 * information potentially belongs to a now completely
+		 * unrelated process.
+		 */
+		page_cpupid_xchg_last(vmf->page, (1 << LAST_CPUPID_SHIFT) - 1);
+	}
 
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 	entry = pte_mkyoung(vmf->orig_pte);
@@ -3223,6 +3224,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
  *			  writeable once the page is prepared
  *
  * @vmf: structure describing the fault
+ * @folio: the folio of vmf->page
  *
  * This function handles all that is needed to finish a write page fault in a
  * shared mapping due to PTE being read-only once the mapped page is prepared.
@@ -3234,7 +3236,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
  * Return: %0 on success, %VM_FAULT_NOPAGE when PTE got changed before
  * we acquired PTE lock.
  */
-vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
+static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf,
+				       struct folio *folio)
 {
 	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
 	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
@@ -3250,7 +3253,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return VM_FAULT_NOPAGE;
 	}
-	wp_page_reuse(vmf);
+	wp_page_reuse(vmf, folio);
 	return 0;
 }
 
@@ -3275,9 +3278,9 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
 		ret = vma->vm_ops->pfn_mkwrite(vmf);
 		if (ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))
 			return ret;
-		return finish_mkwrite_fault(vmf);
+		return finish_mkwrite_fault(vmf, NULL);
 	}
-	wp_page_reuse(vmf);
+	wp_page_reuse(vmf, NULL);
 	return 0;
 }
 
@@ -3305,14 +3308,14 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf, struct folio *folio)
 			folio_put(folio);
 			return tmp;
 		}
-		tmp = finish_mkwrite_fault(vmf);
+		tmp = finish_mkwrite_fault(vmf, folio);
 		if (unlikely(tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))) {
 			folio_unlock(folio);
 			folio_put(folio);
 			return tmp;
 		}
 	} else {
-		wp_page_reuse(vmf);
+		wp_page_reuse(vmf, folio);
 		folio_lock(folio);
 	}
 	ret |= fault_dirty_shared_page(vmf);
@@ -3436,7 +3439,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			return 0;
 		}
-		wp_page_reuse(vmf);
+		wp_page_reuse(vmf, folio);
 		return 0;
 	}
 copy:
-- 
2.27.0


* [PATCH -next 7/9] mm: convert page_cpupid_xchg_last() to folio_cpupid_xchg_last()
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

Make page_cpupid_xchg_last() take a folio, and rename it to
folio_cpupid_xchg_last(), since all callers now have a folio.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h  | 14 +++++++-------
 kernel/sched/fair.c |  2 +-
 mm/huge_memory.c    |  2 +-
 mm/memory.c         |  2 +-
 mm/migrate.c        |  4 ++--
 mm/mmzone.c         |  6 +++---
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9933f6345e66..a6f4b55bf469 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1683,9 +1683,9 @@ static inline bool __cpupid_match_pid(pid_t task_pid, int cpupid)
 
 #define cpupid_match_pid(task, cpupid) __cpupid_match_pid(task->pid, cpupid)
 #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
-static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
+static inline int folio_cpupid_xchg_last(struct folio *folio, int cpupid)
 {
-	return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK);
+	return xchg(&folio->_last_cpupid, cpupid & LAST_CPUPID_MASK);
 }
 
 static inline int folio_cpupid_last(struct folio *folio)
@@ -1702,7 +1702,7 @@ static inline int folio_cpupid_last(struct folio *folio)
 	return (folio->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
 }
 
-extern int page_cpupid_xchg_last(struct page *page, int cpupid);
+extern int folio_cpupid_xchg_last(struct folio *folio, int cpupid);
 
 static inline void page_cpupid_reset_last(struct page *page)
 {
@@ -1714,8 +1714,8 @@ static inline int xchg_folio_access_time(struct folio *folio, int time)
 {
 	int last_time;
 
-	last_time = page_cpupid_xchg_last(&folio->page,
-					  time >> PAGE_ACCESS_TIME_BUCKETS);
+	last_time = folio_cpupid_xchg_last(folio,
+					   time >> PAGE_ACCESS_TIME_BUCKETS);
 	return last_time << PAGE_ACCESS_TIME_BUCKETS;
 }
 
@@ -1729,9 +1729,9 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
 	}
 }
 #else /* !CONFIG_NUMA_BALANCING */
-static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
+static inline int folio_cpupid_xchg_last(struct folio *folio, int cpupid)
 {
-	return page_to_nid(page); /* XXX */
+	return folio_nid(folio); /* XXX */
 }
 
 static inline int xchg_folio_access_time(struct folio *folio, int time)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index afb9dc98a8ee..dca1546aa9c1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1810,7 +1810,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio,
 	}
 
 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
-	last_cpupid = page_cpupid_xchg_last(&folio->page, this_cpupid);
+	last_cpupid = folio_cpupid_xchg_last(folio, this_cpupid);
 
 	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
 	    !node_is_toptier(src_nid) && !cpupid_valid(last_cpupid))
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 93981a759daf..89e65ff46ad4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2508,7 +2508,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	if (page_is_idle(head))
 		set_page_idle(page_tail);
 
-	page_cpupid_xchg_last(page_tail, folio_cpupid_last(folio));
+	folio_cpupid_xchg_last(new_folio, folio_cpupid_last(folio));
 
 	/*
 	 * always add to the tail because some iterators expect new
diff --git a/mm/memory.c b/mm/memory.c
index 119c40e4465e..bf07ebdc24a0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3030,7 +3030,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
 		 * information potentially belongs to a now completely
 		 * unrelated process.
 		 */
-		page_cpupid_xchg_last(vmf->page, (1 << LAST_CPUPID_SHIFT) - 1);
+		folio_cpupid_xchg_last(folio, (1 << LAST_CPUPID_SHIFT) - 1);
 	}
 
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
diff --git a/mm/migrate.c b/mm/migrate.c
index 7d1804c4a5d9..d41139ccbd3f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -588,7 +588,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 	 * Copy NUMA information to the new page, to prevent over-eager
 	 * future migrations of this same page.
 	 */
-	cpupid = page_cpupid_xchg_last(&folio->page, -1);
+	cpupid = folio_cpupid_xchg_last(folio, -1);
 	/*
 	 * For memory tiering mode, when migrate between slow and fast
 	 * memory node, reset cpupid, because that is used to record
@@ -601,7 +601,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 		if (f_toptier != t_toptier)
 			cpupid = -1;
 	}
-	page_cpupid_xchg_last(&newfolio->page, cpupid);
+	folio_cpupid_xchg_last(newfolio, cpupid);
 
 	folio_migrate_ksm(newfolio, folio);
 	/*
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 68e1511be12d..cd473f82b647 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -93,19 +93,19 @@ void lruvec_init(struct lruvec *lruvec)
 }
 
 #if defined(CONFIG_NUMA_BALANCING) && !defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS)
-int page_cpupid_xchg_last(struct page *page, int cpupid)
+int folio_cpupid_xchg_last(struct folio *folio, int cpupid)
 {
 	unsigned long old_flags, flags;
 	int last_cpupid;
 
-	old_flags = READ_ONCE(page->flags);
+	old_flags = READ_ONCE(folio->flags);
 	do {
 		flags = old_flags;
 		last_cpupid = (flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
 
 		flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT);
 		flags |= (cpupid & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT;
-	} while (unlikely(!try_cmpxchg(&page->flags, &old_flags, flags)));
+	} while (unlikely(!try_cmpxchg(&folio->flags, &old_flags, flags)));
 
 	return last_cpupid;
 }
-- 
2.27.0


* [PATCH -next 8/9] mm: page_alloc: use a folio in free_pages_prepare()
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

The page should not be a tail page in free_pages_prepare(), so use
a folio in free_pages_prepare() to save several compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/page_alloc.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 06be8821d833..a888b9d57751 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1070,6 +1070,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			unsigned int order, fpi_t fpi_flags)
 {
 	int bad = 0;
+	struct folio *folio = page_folio(page);
 	bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
 	bool init = want_init_on_free();
 
@@ -1078,12 +1079,12 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	trace_mm_page_free(page, order);
 	kmsan_free_page(page, order);
 
-	if (unlikely(PageHWPoison(page)) && !order) {
+	if (unlikely(folio_test_hwpoison(folio)) && !order) {
 		/*
 		 * Do not let hwpoison pages hit pcplists/buddy
 		 * Untie memcg state and reset page's owner
 		 */
-		if (memcg_kmem_online() && PageMemcgKmem(page))
+		if (memcg_kmem_online() && folio_memcg_kmem(folio))
 			__memcg_kmem_uncharge_page(page, order);
 		reset_page_owner(page, order);
 		page_table_check_free(page, order);
@@ -1095,10 +1096,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 * avoid checking PageCompound for order-0 pages.
 	 */
 	if (unlikely(order)) {
-		bool compound = PageCompound(page);
+		bool compound = folio_test_large(folio);
 		int i;
 
-		VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
+		VM_BUG_ON_FOLIO(compound && folio_order(folio) != order, folio);
 
 		if (compound)
 			page[1].flags &= ~PAGE_FLAGS_SECOND;
@@ -1114,9 +1115,9 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		}
 	}
-	if (PageMappingFlags(page))
+	if (folio_mapping_flags(folio))
 		page->mapping = NULL;
-	if (memcg_kmem_online() && PageMemcgKmem(page))
+	if (memcg_kmem_online() && folio_memcg_kmem(folio))
 		__memcg_kmem_uncharge_page(page, order);
 	if (is_check_pages_enabled()) {
 		if (free_page_is_bad(page))
@@ -1130,7 +1131,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	reset_page_owner(page, order);
 	page_table_check_free(page, order);
 
-	if (!PageHighMem(page)) {
+	if (!folio_test_highmem(folio)) {
 		debug_check_no_locks_freed(page_address(page),
 					   PAGE_SIZE << order);
 		debug_check_no_obj_freed(page_address(page),
-- 
2.27.0


* [PATCH -next 9/9] mm: convert page_cpupid_reset_last() to folio_cpupid_reset_last()
From: Kefeng Wang @ 2023-09-26  0:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, linux-mm,
	linux-kernel, ying.huang, Zi Yan, Kefeng Wang

There is no need to fill in the default cpupid value for every struct
page: cpupid is only used for NUMA balancing, the pages used for NUMA
balancing all come from the buddy allocator, and
page_cpupid_reset_last() is already called by free_pages_prepare()
to initialize it. So drop the page_cpupid_reset_last() call in
__init_single_page(), then make page_cpupid_reset_last() take a folio
and rename it to folio_cpupid_reset_last().
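
For reference, the reset value written here is the all-ones pattern of
the cpupid field, which NUMA balancing treats as "no valid last
cpupid"; a worked example with a hypothetical 8-bit mask:

	/* Hypothetical LAST_CPUPID_MASK of 0xff, for illustration only. */
	int reset = -1 & 0xff;	/* == 0xff, the "no last cpupid" sentinel */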

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 10 +++++-----
 mm/mm_init.c       |  1 -
 mm/page_alloc.c    |  2 +-
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a6f4b55bf469..ca66a05eb2ed 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1692,9 +1692,9 @@ static inline int folio_cpupid_last(struct folio *folio)
 {
 	return folio->_last_cpupid;
 }
-static inline void page_cpupid_reset_last(struct page *page)
+static inline void folio_cpupid_reset_last(struct folio *folio)
 {
-	page->_last_cpupid = -1 & LAST_CPUPID_MASK;
+	folio->_last_cpupid = -1 & LAST_CPUPID_MASK;
 }
 #else
 static inline int folio_cpupid_last(struct folio *folio)
@@ -1704,9 +1704,9 @@ static inline int folio_cpupid_last(struct folio *folio)
 
 extern int folio_cpupid_xchg_last(struct folio *folio, int cpupid);
 
-static inline void page_cpupid_reset_last(struct page *page)
+static inline void folio_cpupid_reset_last(struct folio *folio)
 {
-	page->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
+	folio->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
 }
 #endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */
 
@@ -1769,7 +1769,7 @@ static inline bool cpupid_pid_unset(int cpupid)
 	return true;
 }
 
-static inline void page_cpupid_reset_last(struct page *page)
+static inline void folio_cpupid_reset_last(struct folio *folio)
 {
 }
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 06a72c223bce..74c0dc27fbf1 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -563,7 +563,6 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	set_page_links(page, zone, nid, pfn);
 	init_page_count(page);
 	page_mapcount_reset(page);
-	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
 
 	INIT_LIST_HEAD(&page->lru);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a888b9d57751..852fc78ddb34 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1126,7 +1126,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			return false;
 	}
 
-	page_cpupid_reset_last(page);
+	folio_cpupid_reset_last(folio);
 	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	reset_page_owner(page, order);
 	page_table_check_free(page, order);
-- 
2.27.0


* Re: [PATCH -next 8/9] mm: page_alloc: use a folio in free_pages_prepare()
From: David Hildenbrand @ 2023-09-26  7:49 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, linux-mm, linux-kernel,
	ying.huang, Zi Yan

On 26.09.23 02:52, Kefeng Wang wrote:
> The page should not be a tail page in free_pages_prepare(), so use
> a folio in free_pages_prepare() to save several compound_head() calls.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>   mm/page_alloc.c | 15 ++++++++-------
>   1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 06be8821d833..a888b9d57751 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1070,6 +1070,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
>   			unsigned int order, fpi_t fpi_flags)
>   {
>   	int bad = 0;
> +	struct folio *folio = page_folio(page);

We might have higher-order pages here that are not folios (not compound 
pages). It looks a bit like this function really shouldn't be working 
with folios in the generic way, for that reason.

Wrong level of abstraction in that function.
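
For instance (a minimal sketch, assuming an allocation without
__GFP_COMP):

	/* An order-2 allocation without __GFP_COMP spans four pages but
	 * is not a compound page; page_folio() on it is effectively a
	 * cast, and folio_test_large()/folio_order() then report
	 * false/0 despite the four pages. */
	struct page *page = alloc_pages(GFP_KERNEL, 2);
	struct folio *folio = page_folio(page);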

What am I missing?

-- 
Cheers,

David / dhildenb


* Re: [PATCH -next 8/9] mm: page_alloc: use a folio in free_pages_prepare()
From: Kefeng Wang @ 2023-09-26  9:39 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, linux-mm, linux-kernel,
	ying.huang, Zi Yan



On 2023/9/26 15:49, David Hildenbrand wrote:
> On 26.09.23 02:52, Kefeng Wang wrote:
>> The page should not be a tail page in free_pages_prepare(), so use
>> a folio in free_pages_prepare() to save several compound_head() calls.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   mm/page_alloc.c | 15 ++++++++-------
>>   1 file changed, 8 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 06be8821d833..a888b9d57751 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1070,6 +1070,7 @@ static __always_inline bool 
>> free_pages_prepare(struct page *page,
>>               unsigned int order, fpi_t fpi_flags)
>>   {
>>       int bad = 0;
>> +    struct folio *folio = page_folio(page);
> 
> We might have higher-order pages here that are not folios (not compound 
> pages). It looks a bit like this function really shouldn't be working 
> with folios in the generic way, for that reason.
> 
> Wrong level of abstraction in that function.

Thanks for pointing this out; the change does look unnecessary. The
main purpose of using a folio in this function was to prepare for
converting page_cpupid_reset_last() to folios, but with higher-order
non-compound pages the next patch is not correct either, so I will
reconsider it.

> 
> What am I missing?
> 

* Re: [PATCH -next 8/9] mm: page_alloc: use a folio in free_pages_prepare()
From: Kefeng Wang @ 2023-09-27 12:08 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton
  Cc: Mike Rapoport, Matthew Wilcox, linux-mm, linux-kernel,
	ying.huang, Zi Yan

Hi David and all,

On 2023/9/26 17:39, Kefeng Wang wrote:
> 
> 
> On 2023/9/26 15:49, David Hildenbrand wrote:
>> On 26.09.23 02:52, Kefeng Wang wrote:
>>> The page should not be a tail page in free_pages_prepare(), so use
>>> a folio in free_pages_prepare() to save several compound_head() calls.
>>>
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>> ---
>>>   mm/page_alloc.c | 15 ++++++++-------
>>>   1 file changed, 8 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 06be8821d833..a888b9d57751 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -1070,6 +1070,7 @@ static __always_inline bool 
>>> free_pages_prepare(struct page *page,
>>>               unsigned int order, fpi_t fpi_flags)
>>>   {
>>>       int bad = 0;
>>> +    struct folio *folio = page_folio(page);
>>
>> We might have higher-order pages here that are not folios (not 
>> compound pages). It looks a bit like this function really shouldn't be 
>> working with folios in the generic way, for that reason.
>>
>> Wrong level of abstraction in that function.
> 
> Thanks for your point this, also the change also looks unnecessary too,
> the main purpose to use a folio in this function is prepared for
> converting page_cpupid_reset_last() to folio, as the higher-order pages
> the next patch is not right, I will reconsider it.
> 

As David mentioned, free_pages_prepare() should not use a folio, so I
won't convert page_cpupid_reset_last(); that is, only the first 7
patches are retained. Any comments on the above patches are welcome,
many thanks.
