linux-mm.kvack.org archive mirror
* [PATCH -next v2 00/19] mm: convert page cpupid functions to folios
@ 2023-10-13  8:55 Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 01/19] mm_types: add virtual and _last_cpupid into struct folio Kefeng Wang
                   ` (18 more replies)
  0 siblings, 19 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

The cpupid (or access time) used by NUMA balancing is stored either in
page flags or in page->_last_cpupid (when LAST_CPUPID_NOT_IN_PAGE_FLAGS
is defined). This series converts the page cpupid helpers to folio ones:
a new _last_cpupid field is added to struct folio so that
folio->_last_cpupid can be used directly, and the page cpupid functions
are replaced by folio equivalents.

  page_cpupid_last()		-> folio_last_cpupid()
  xchg_page_access_time()	-> folio_xchg_access_time()
  page_cpupid_xchg_last()	-> folio_xchg_last_cpupid()

v2:
- add virtual into folio too
- re-write and split patches to make them easier to review
- rename to folio_last_cpupid/folio_xchg_last_cpupid/folio_xchg_access_time
- rebased on next-20231013

v1:
- drop inappropriate page_cpupid_reset_last conversion from RFC
- rebased on next-20231009

Kefeng Wang (19):
  mm_types: add virtual and _last_cpupid into struct folio
  mm: add folio_last_cpupid()
  mm: memory: use folio_last_cpupid() in do_numa_page()
  mm: huge_memory: use folio_last_cpupid() in do_huge_pmd_numa_page()
  mm: huge_memory: use folio_last_cpupid() in __split_huge_page_tail()
  mm: remove page_cpupid_last()
  mm: add folio_xchg_access_time()
  sched/fair: use folio_xchg_access_time() in numa_hint_fault_latency()
  mm: mprotect: use a folio in change_pte_range()
  mm: huge_memory: use a folio in change_huge_pmd()
  mm: remove xchg_page_access_time()
  mm: add folio_xchg_last_cpupid()
  sched/fair: use folio_xchg_last_cpupid() in
    should_numa_migrate_memory()
  mm: migrate: use folio_xchg_last_cpupid() in folio_migrate_flags()
  mm: huge_memory: use folio_xchg_last_cpupid() in
    __split_huge_page_tail()
  mm: make finish_mkwrite_fault() static
  mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio
  mm: use folio_xchg_last_cpupid() in wp_page_reuse()
  mm: remove page_cpupid_xchg_last()

 include/linux/mm.h       | 30 +++++++++++++++---------------
 include/linux/mm_types.h | 22 ++++++++++++++++++----
 kernel/sched/fair.c      |  4 ++--
 mm/huge_memory.c         | 17 +++++++++--------
 mm/memory.c              | 37 +++++++++++++++++++------------------
 mm/migrate.c             |  8 ++++----
 mm/mmzone.c              |  6 +++---
 mm/mprotect.c            | 16 +++++++++-------
 8 files changed, 79 insertions(+), 61 deletions(-)

-- 
2.27.0



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH -next v2 01/19] mm_types: add virtual and _last_cpupid into struct folio
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 02/19] mm: add folio_last_cpupid() Kefeng Wang
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

When WANT_PAGE_VIRTUAL and LAST_CPUPID_NOT_IN_PAGE_FLAGS are defined,
struct page contains the 'virtual' and '_last_cpupid' fields. Since
_last_cpupid is used by the NUMA balancing feature, move it ahead of
the KMSAN metadata in struct page, and add both fields to struct folio
so they can be accessed directly through the folio.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm_types.h | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index cc8bb767c003..34466be945a9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -199,6 +199,10 @@ struct page {
 					   not kmapped, ie. highmem) */
 #endif /* WANT_PAGE_VIRTUAL */
 
+#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
+	int _last_cpupid;
+#endif
+
 #ifdef CONFIG_KMSAN
 	/*
 	 * KMSAN metadata for this page:
@@ -210,10 +214,6 @@ struct page {
 	struct page *kmsan_shadow;
 	struct page *kmsan_origin;
 #endif
-
-#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
-	int _last_cpupid;
-#endif
 } _struct_page_alignment;
 
 /*
@@ -272,6 +272,8 @@ typedef struct {
  * @_refcount: Do not access this member directly.  Use folio_ref_count()
  *    to find how many references there are to this folio.
  * @memcg_data: Memory Control Group data.
+ * @virtual: Virtual address in the kernel direct map.
+ * @_last_cpupid: IDs of last CPU and last process that accessed the folio.
  * @_entire_mapcount: Do not use directly, call folio_entire_mapcount().
  * @_nr_pages_mapped: Do not use directly, call folio_mapcount().
  * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
@@ -317,6 +319,12 @@ struct folio {
 			atomic_t _refcount;
 #ifdef CONFIG_MEMCG
 			unsigned long memcg_data;
+#endif
+#if defined(WANT_PAGE_VIRTUAL)
+			void *virtual;
+#endif
+#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
+			int _last_cpupid;
 #endif
 	/* private: the union with struct page is transitional */
 		};
@@ -373,6 +381,12 @@ FOLIO_MATCH(_refcount, _refcount);
 #ifdef CONFIG_MEMCG
 FOLIO_MATCH(memcg_data, memcg_data);
 #endif
+#if defined(WANT_PAGE_VIRTUAL)
+FOLIO_MATCH(virtual, virtual);
+#endif
+#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
+FOLIO_MATCH(_last_cpupid, _last_cpupid);
+#endif
 #undef FOLIO_MATCH
 #define FOLIO_MATCH(pg, fl)						\
 	static_assert(offsetof(struct folio, fl) ==			\
-- 
2.27.0




* [PATCH -next v2 02/19] mm: add folio_last_cpupid()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 01/19] mm_types: add virtual and _last_cpupid into struct folio Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 03/19] mm: memory: use folio_last_cpupid() in do_numa_page() Kefeng Wang
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Add the folio_last_cpupid() wrapper, which is required to convert
page_cpupid_last() to the folio version later in the series.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d005beccaa5d..1c393a72037b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1794,6 +1794,11 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+static inline int folio_last_cpupid(struct folio *folio)
+{
+	return page_cpupid_last(&folio->page);
+}
+
 #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
 
 /*
-- 
2.27.0




* [PATCH -next v2 03/19] mm: memory: use folio_last_cpupid() in do_numa_page()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 01/19] mm_types: add virtual and _last_cpupid into struct folio Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 02/19] mm: add folio_last_cpupid() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 04/19] mm: huge_memory: use folio_last_cpupid() in do_huge_pmd_numa_page() Kefeng Wang
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Convert to use folio_last_cpupid() in do_numa_page().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index c4b4aa4c1180..a1cf25a3ff16 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4861,7 +4861,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	    !node_is_toptier(nid))
 		last_cpupid = (-1 & LAST_CPUPID_MASK);
 	else
-		last_cpupid = page_cpupid_last(&folio->page);
+		last_cpupid = folio_last_cpupid(folio);
 	target_nid = numa_migrate_prep(folio, vma, vmf->address, nid, &flags);
 	if (target_nid == NUMA_NO_NODE) {
 		folio_put(folio);
-- 
2.27.0




* [PATCH -next v2 04/19] mm: huge_memory: use folio_last_cpupid() in do_huge_pmd_numa_page()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (2 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 03/19] mm: memory: use folio_last_cpupid() in do_numa_page() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 05/19] mm: huge_memory: use folio_last_cpupid() in __split_huge_page_tail() Kefeng Wang
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Convert to use folio_last_cpupid() in do_huge_pmd_numa_page().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c9cbcbf6697e..f9571bf92603 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1562,7 +1562,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	 * to record page access time.  So use default value.
 	 */
 	if (node_is_toptier(nid))
-		last_cpupid = page_cpupid_last(&folio->page);
+		last_cpupid = folio_last_cpupid(folio);
 	target_nid = numa_migrate_prep(folio, vma, haddr, nid, &flags);
 	if (target_nid == NUMA_NO_NODE) {
 		folio_put(folio);
-- 
2.27.0




* [PATCH -next v2 05/19] mm: huge_memory: use folio_last_cpupid() in __split_huge_page_tail()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (3 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 04/19] mm: huge_memory: use folio_last_cpupid() in do_huge_pmd_numa_page() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 06/19] mm: remove page_cpupid_last() Kefeng Wang
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Convert to use folio_last_cpupid() in __split_huge_page_tail().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f9571bf92603..5455dfe4c3c7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2514,7 +2514,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	if (page_is_idle(head))
 		set_page_idle(page_tail);
 
-	page_cpupid_xchg_last(page_tail, page_cpupid_last(head));
+	page_cpupid_xchg_last(page_tail, folio_last_cpupid(folio));
 
 	/*
 	 * always add to the tail because some iterators expect new
-- 
2.27.0




* [PATCH -next v2 06/19] mm: remove page_cpupid_last()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (4 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 05/19] mm: huge_memory: use folio_last_cpupid() in __split_huge_page_tail() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 07/19] mm: add folio_xchg_access_time() Kefeng Wang
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Since all callers now use folio_last_cpupid(), remove page_cpupid_last().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1c393a72037b..1d56a818b212 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1700,18 +1700,18 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 	return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK);
 }
 
-static inline int page_cpupid_last(struct page *page)
+static inline int folio_last_cpupid(struct folio *folio)
 {
-	return page->_last_cpupid;
+	return folio->_last_cpupid;
 }
 static inline void page_cpupid_reset_last(struct page *page)
 {
 	page->_last_cpupid = -1 & LAST_CPUPID_MASK;
 }
 #else
-static inline int page_cpupid_last(struct page *page)
+static inline int folio_last_cpupid(struct folio *folio)
 {
-	return (page->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
+	return (folio->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
 }
 
 extern int page_cpupid_xchg_last(struct page *page, int cpupid);
@@ -1750,9 +1750,9 @@ static inline int xchg_page_access_time(struct page *page, int time)
 	return 0;
 }
 
-static inline int page_cpupid_last(struct page *page)
+static inline int folio_last_cpupid(struct folio *folio)
 {
-	return page_to_nid(page); /* XXX */
+	return folio_nid(folio); /* XXX */
 }
 
 static inline int cpupid_to_nid(int cpupid)
@@ -1794,11 +1794,6 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
-static inline int folio_last_cpupid(struct folio *folio)
-{
-	return page_cpupid_last(&folio->page);
-}
-
 #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
 
 /*
-- 
2.27.0




* [PATCH -next v2 07/19] mm: add folio_xchg_access_time()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (5 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 06/19] mm: remove page_cpupid_last() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 08/19] sched/fair: use folio_xchg_access_time() in numa_hint_fault_latency() Kefeng Wang
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Add the folio_xchg_access_time() wrapper, which is required to convert
xchg_page_access_time() to the folio version later in the series.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1d56a818b212..1238ab784d8b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1794,6 +1794,11 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+static inline int folio_xchg_access_time(struct folio *folio, int time)
+{
+	return xchg_page_access_time(&folio->page, time);
+}
+
 #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
 
 /*
-- 
2.27.0




* [PATCH -next v2 08/19] sched/fair: use folio_xchg_access_time() in numa_hint_fault_latency()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (6 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 07/19] mm: add folio_xchg_access_time() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range() Kefeng Wang
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Convert to use folio_xchg_access_time() in numa_hint_fault_latency().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 78ad23fcb7f9..bc07f29a4a42 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1766,7 +1766,7 @@ static int numa_hint_fault_latency(struct folio *folio)
 	int last_time, time;
 
 	time = jiffies_to_msecs(jiffies);
-	last_time = xchg_page_access_time(&folio->page, time);
+	last_time = folio_xchg_access_time(folio, time);
 
 	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
 }
-- 
2.27.0




* [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (7 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 08/19] sched/fair: use folio_xchg_access_time() in numa_hint_fault_latency() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13 15:13   ` Matthew Wilcox
  2023-10-13  8:55 ` [PATCH -next v2 10/19] mm: huge_memory: use a folio in change_huge_pmd() Kefeng Wang
                   ` (9 subsequent siblings)
  18 siblings, 1 reply; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Use a folio in change_pte_range() to save three compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/mprotect.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index f1dc8f8c84ef..81991102f785 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -114,7 +114,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 			 * pages. See similar comment in change_huge_pmd.
 			 */
 			if (prot_numa) {
-				struct page *page;
+				struct folio *folio;
 				int nid;
 				bool toptier;
 
@@ -122,13 +122,14 @@ static long change_pte_range(struct mmu_gather *tlb,
 				if (pte_protnone(oldpte))
 					continue;
 
-				page = vm_normal_page(vma, addr, oldpte);
-				if (!page || is_zone_device_page(page) || PageKsm(page))
+				folio = vm_normal_folio(vma, addr, oldpte);
+				if (!folio || folio_is_zone_device(folio) ||
+				    folio_test_ksm(folio))
 					continue;
 
 				/* Also skip shared copy-on-write pages */
 				if (is_cow_mapping(vma->vm_flags) &&
-				    page_count(page) != 1)
+				    folio_ref_count(folio) != 1)
 					continue;
 
 				/*
@@ -136,14 +137,15 @@ static long change_pte_range(struct mmu_gather *tlb,
 				 * it cannot move them all from MIGRATE_ASYNC
 				 * context.
 				 */
-				if (page_is_file_lru(page) && PageDirty(page))
+				if (folio_is_file_lru(folio) &&
+				    folio_test_dirty(folio))
 					continue;
 
 				/*
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
 				 */
-				nid = page_to_nid(page);
+				nid = folio_nid(folio);
 				if (target_node == nid)
 					continue;
 				toptier = node_is_toptier(nid);
@@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 					continue;
 				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 				    !toptier)
-					xchg_page_access_time(page,
+					folio_xchg_access_time(folio,
 						jiffies_to_msecs(jiffies));
 			}
 
-- 
2.27.0




* [PATCH -next v2 10/19] mm: huge_memory: use a folio in change_huge_pmd()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (8 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 11/19] mm: remove xchg_page_access_time() Kefeng Wang
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Use a folio in change_huge_pmd(), which helps to remove the last
caller of xchg_page_access_time().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5455dfe4c3c7..f01f345141da 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1856,7 +1856,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	if (is_swap_pmd(*pmd)) {
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
-		struct page *page = pfn_swap_entry_to_page(entry);
+		struct folio *folio = page_folio(pfn_swap_entry_to_page(entry));
 		pmd_t newpmd;
 
 		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
@@ -1865,7 +1865,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			 * A protection check is difficult so
 			 * just be safe and disable write
 			 */
-			if (PageAnon(page))
+			if (folio_test_anon(folio))
 				entry = make_readable_exclusive_migration_entry(swp_offset(entry));
 			else
 				entry = make_readable_migration_entry(swp_offset(entry));
@@ -1887,7 +1887,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #endif
 
 	if (prot_numa) {
-		struct page *page;
+		struct folio *folio;
 		bool toptier;
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
@@ -1900,8 +1900,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (pmd_protnone(*pmd))
 			goto unlock;
 
-		page = pmd_page(*pmd);
-		toptier = node_is_toptier(page_to_nid(page));
+		folio = page_folio(pmd_page(*pmd));
+		toptier = node_is_toptier(folio_nid(folio));
 		/*
 		 * Skip scanning top tier node if normal numa
 		 * balancing is disabled
@@ -1912,7 +1912,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 		    !toptier)
-			xchg_page_access_time(page, jiffies_to_msecs(jiffies));
+			folio_xchg_access_time(folio,
+					       jiffies_to_msecs(jiffies));
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
-- 
2.27.0




* [PATCH -next v2 11/19] mm: remove xchg_page_access_time()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (9 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 10/19] mm: huge_memory: use a folio in change_huge_pmd() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 12/19] mm: add folio_xchg_last_cpupid() Kefeng Wang
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Since all callers now use folio_xchg_access_time(), remove
xchg_page_access_time().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1238ab784d8b..8a2ff345338b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1722,11 +1722,12 @@ static inline void page_cpupid_reset_last(struct page *page)
 }
 #endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */
 
-static inline int xchg_page_access_time(struct page *page, int time)
+static inline int folio_xchg_access_time(struct folio *folio, int time)
 {
 	int last_time;
 
-	last_time = page_cpupid_xchg_last(page, time >> PAGE_ACCESS_TIME_BUCKETS);
+	last_time = page_cpupid_xchg_last(&folio->page,
+					  time >> PAGE_ACCESS_TIME_BUCKETS);
 	return last_time << PAGE_ACCESS_TIME_BUCKETS;
 }
 
@@ -1745,7 +1746,7 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 	return page_to_nid(page); /* XXX */
 }
 
-static inline int xchg_page_access_time(struct page *page, int time)
+static inline int folio_xchg_access_time(struct folio *folio, int time)
 {
 	return 0;
 }
@@ -1794,11 +1795,6 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
-static inline int folio_xchg_access_time(struct folio *folio, int time)
-{
-	return xchg_page_access_time(&folio->page, time);
-}
-
 #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
 
 /*
-- 
2.27.0




* [PATCH -next v2 12/19] mm: add folio_xchg_last_cpupid()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (10 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 11/19] mm: remove xchg_page_access_time() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 13/19] sched/fair: use folio_xchg_last_cpupid() in should_numa_migrate_memory() Kefeng Wang
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Add the folio_xchg_last_cpupid() wrapper, which is required to convert
page_cpupid_xchg_last() to the folio version later in the series.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8a2ff345338b..8229137e093b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1795,6 +1795,11 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+static inline int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
+{
+	return page_cpupid_xchg_last(&folio->page, cpupid);
+}
+
 #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
 
 /*
-- 
2.27.0




* [PATCH -next v2 13/19] sched/fair: use folio_xchg_last_cpupid() in should_numa_migrate_memory()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (11 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 12/19] mm: add folio_xchg_last_cpupid() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 14/19] mm: migrate: use folio_xchg_last_cpupid() in folio_migrate_flags() Kefeng Wang
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Convert to use folio_xchg_last_cpupid() in should_numa_migrate_memory().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc07f29a4a42..f3cb4c8974c5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1862,7 +1862,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio,
 	}
 
 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
-	last_cpupid = page_cpupid_xchg_last(&folio->page, this_cpupid);
+	last_cpupid = folio_xchg_last_cpupid(folio, this_cpupid);
 
 	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
 	    !node_is_toptier(src_nid) && !cpupid_valid(last_cpupid))
-- 
2.27.0




* [PATCH -next v2 14/19] mm: migrate: use folio_xchg_last_cpupid() in folio_migrate_flags()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (12 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 13/19] sched/fair: use folio_xchg_last_cpupid() in should_numa_migrate_memory() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:55 ` [PATCH -next v2 15/19] mm: huge_memory: use folio_xchg_last_cpupid() in __split_huge_page_tail() Kefeng Wang
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Convert to use folio_xchg_last_cpupid() in folio_migrate_flags(), and
use folio_nid() directly instead of page_to_nid(&folio->page).

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 5348827bd958..821c42d61ed0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -588,20 +588,20 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 	 * Copy NUMA information to the new page, to prevent over-eager
 	 * future migrations of this same page.
 	 */
-	cpupid = page_cpupid_xchg_last(&folio->page, -1);
+	cpupid = folio_xchg_last_cpupid(folio, -1);
 	/*
 	 * For memory tiering mode, when migrate between slow and fast
 	 * memory node, reset cpupid, because that is used to record
 	 * page access time in slow memory node.
 	 */
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) {
-		bool f_toptier = node_is_toptier(page_to_nid(&folio->page));
-		bool t_toptier = node_is_toptier(page_to_nid(&newfolio->page));
+		bool f_toptier = node_is_toptier(folio_nid(folio));
+		bool t_toptier = node_is_toptier(folio_nid(newfolio));
 
 		if (f_toptier != t_toptier)
 			cpupid = -1;
 	}
-	page_cpupid_xchg_last(&newfolio->page, cpupid);
+	folio_xchg_last_cpupid(newfolio, cpupid);
 
 	folio_migrate_ksm(newfolio, folio);
 	/*
-- 
2.27.0




* [PATCH -next v2 15/19] mm: huge_memory: use folio_xchg_last_cpupid() in __split_huge_page_tail()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (13 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 14/19] mm: migrate: use folio_xchg_last_cpupid() in folio_migrate_flags() Kefeng Wang
@ 2023-10-13  8:55 ` Kefeng Wang
  2023-10-13  8:56 ` [PATCH -next v2 16/19] mm: make finish_mkwrite_fault() static Kefeng Wang
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Convert to use folio_xchg_last_cpupid() in __split_huge_page_tail().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f01f345141da..f31f02472396 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2515,7 +2515,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	if (page_is_idle(head))
 		set_page_idle(page_tail);
 
-	page_cpupid_xchg_last(page_tail, folio_last_cpupid(folio));
+	folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
 
 	/*
 	 * always add to the tail because some iterators expect new
-- 
2.27.0




* [PATCH -next v2 16/19] mm: make finish_mkwrite_fault() static
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (14 preceding siblings ...)
  2023-10-13  8:55 ` [PATCH -next v2 15/19] mm: huge_memory: use folio_xchg_last_cpupid() in __split_huge_page_tail() Kefeng Wang
@ 2023-10-13  8:56 ` Kefeng Wang
  2023-10-13  8:56 ` [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio Kefeng Wang
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:56 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Make finish_mkwrite_fault() static, since it is not used outside of
mm/memory.c.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 1 -
 mm/memory.c        | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8229137e093b..70eae2e7d5e5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1346,7 +1346,6 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		struct page *page, unsigned int nr, unsigned long addr);
 
 vm_fault_t finish_fault(struct vm_fault *vmf);
-vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #endif
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index a1cf25a3ff16..b6cc24257683 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3272,7 +3272,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
  * Return: %0 on success, %VM_FAULT_NOPAGE when PTE got changed before
  * we acquired PTE lock.
  */
-vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
+static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
 {
 	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
 	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
-- 
2.27.0




* [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (15 preceding siblings ...)
  2023-10-13  8:56 ` [PATCH -next v2 16/19] mm: make finish_mkwrite_fault() static Kefeng Wang
@ 2023-10-13  8:56 ` Kefeng Wang
  2023-10-17  7:33   ` kernel test robot
  2023-10-13  8:56 ` [PATCH -next v2 18/19] mm: use folio_xchg_last_cpupid() in wp_page_reuse() Kefeng Wang
  2023-10-13  8:56 ` [PATCH -next v2 19/19] mm: remove page_cpupid_xchg_last() Kefeng Wang
  18 siblings, 1 reply; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:56 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

This saves one compound_head() call, and is also in preparation for the
page_cpupid_xchg_last() conversion.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index b6cc24257683..6b58ceb0961f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3018,7 +3018,7 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
  * case, all we need to do here is to mark the page as writable and update
  * any related book-keeping.
  */
-static inline void wp_page_reuse(struct vm_fault *vmf)
+static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
 	__releases(vmf->ptl)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -3026,7 +3026,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	pte_t entry;
 
 	VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
-	VM_BUG_ON(page && PageAnon(page) && !PageAnonExclusive(page));
+	VM_BUG_ON(folio && folio_test_anon(folio) && !PageAnonExclusive(page));
 
 	/*
 	 * Clear the pages cpupid information as the existing
@@ -3272,7 +3272,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
  * Return: %0 on success, %VM_FAULT_NOPAGE when PTE got changed before
  * we acquired PTE lock.
  */
-static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
+static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio)
 {
 	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
 	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
@@ -3288,7 +3288,7 @@ static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return VM_FAULT_NOPAGE;
 	}
-	wp_page_reuse(vmf);
+	wp_page_reuse(vmf, folio);
 	return 0;
 }
 
@@ -3312,9 +3312,9 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
 		ret = vma->vm_ops->pfn_mkwrite(vmf);
 		if (ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))
 			return ret;
-		return finish_mkwrite_fault(vmf);
+		return finish_mkwrite_fault(vmf, NULL);
 	}
-	wp_page_reuse(vmf);
+	wp_page_reuse(vmf, NULL);
 	return 0;
 }
 
@@ -3342,14 +3342,14 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf, struct folio *folio)
 			folio_put(folio);
 			return tmp;
 		}
-		tmp = finish_mkwrite_fault(vmf);
+		tmp = finish_mkwrite_fault(vmf, folio);
 		if (unlikely(tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))) {
 			folio_unlock(folio);
 			folio_put(folio);
 			return tmp;
 		}
 	} else {
-		wp_page_reuse(vmf);
+		wp_page_reuse(vmf, folio);
 		folio_lock(folio);
 	}
 	ret |= fault_dirty_shared_page(vmf);
@@ -3494,7 +3494,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			return 0;
 		}
-		wp_page_reuse(vmf);
+		wp_page_reuse(vmf, folio);
 		return 0;
 	}
 	/*
-- 
2.27.0




* [PATCH -next v2 18/19] mm: use folio_xchg_last_cpupid() in wp_page_reuse()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (16 preceding siblings ...)
  2023-10-13  8:56 ` [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio Kefeng Wang
@ 2023-10-13  8:56 ` Kefeng Wang
  2023-10-13 15:19   ` Matthew Wilcox
  2023-10-13  8:56 ` [PATCH -next v2 19/19] mm: remove page_cpupid_xchg_last() Kefeng Wang
  18 siblings, 1 reply; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:56 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Convert wp_page_reuse() to use folio_xchg_last_cpupid(), and remove the
local page variable.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 6b58ceb0961f..e85c009917b4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3022,19 +3022,20 @@ static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
 	__releases(vmf->ptl)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct page *page = vmf->page;
 	pte_t entry;
 
 	VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
-	VM_BUG_ON(folio && folio_test_anon(folio) && !PageAnonExclusive(page));
 
-	/*
-	 * Clear the pages cpupid information as the existing
-	 * information potentially belongs to a now completely
-	 * unrelated process.
-	 */
-	if (page)
-		page_cpupid_xchg_last(page, (1 << LAST_CPUPID_SHIFT) - 1);
+	if (folio) {
+		VM_BUG_ON(folio_test_anon(folio) &&
+			  !PageAnonExclusive(vmf->page));
+		/*
+		 * Clear the pages cpupid information as the existing
+		 * information potentially belongs to a now completely
+		 * unrelated process.
+		 */
+		folio_xchg_last_cpupid(folio, (1 << LAST_CPUPID_SHIFT) - 1);
+	}
 
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 	entry = pte_mkyoung(vmf->orig_pte);
-- 
2.27.0




* [PATCH -next v2 19/19] mm: remove page_cpupid_xchg_last()
  2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
                   ` (17 preceding siblings ...)
  2023-10-13  8:56 ` [PATCH -next v2 18/19] mm: use folio_xchg_last_cpupid() in wp_page_reuse() Kefeng Wang
@ 2023-10-13  8:56 ` Kefeng Wang
  18 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-13  8:56 UTC (permalink / raw)
  To: Andrew Morton
  Cc: willy, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Kefeng Wang

Since all callers now use folio_xchg_last_cpupid(), remove
page_cpupid_xchg_last().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 19 +++++++------------
 mm/mmzone.c        |  6 +++---
 2 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 70eae2e7d5e5..287d52ace444 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1694,9 +1694,9 @@ static inline bool __cpupid_match_pid(pid_t task_pid, int cpupid)
 
 #define cpupid_match_pid(task, cpupid) __cpupid_match_pid(task->pid, cpupid)
 #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
-static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
+static inline int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
 {
-	return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK);
+	return xchg(&folio->_last_cpupid, cpupid & LAST_CPUPID_MASK);
 }
 
 static inline int folio_last_cpupid(struct folio *folio)
@@ -1713,7 +1713,7 @@ static inline int folio_last_cpupid(struct folio *folio)
 	return (folio->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
 }
 
-extern int page_cpupid_xchg_last(struct page *page, int cpupid);
+int folio_xchg_last_cpupid(struct folio *folio, int cpupid);
 
 static inline void page_cpupid_reset_last(struct page *page)
 {
@@ -1725,8 +1725,8 @@ static inline int folio_xchg_access_time(struct folio *folio, int time)
 {
 	int last_time;
 
-	last_time = page_cpupid_xchg_last(&folio->page,
-					  time >> PAGE_ACCESS_TIME_BUCKETS);
+	last_time = folio_xchg_last_cpupid(folio,
+					   time >> PAGE_ACCESS_TIME_BUCKETS);
 	return last_time << PAGE_ACCESS_TIME_BUCKETS;
 }
 
@@ -1740,9 +1740,9 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
 	}
 }
 #else /* !CONFIG_NUMA_BALANCING */
-static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
+static inline int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
 {
-	return page_to_nid(page); /* XXX */
+	return folio_nid(folio); /* XXX */
 }
 
 static inline int folio_xchg_access_time(struct folio *folio, int time)
@@ -1794,11 +1794,6 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
-static inline int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
-{
-	return page_cpupid_xchg_last(&folio->page, cpupid);
-}
-
 #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
 
 /*
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 68e1511be12d..b594d3f268fe 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -93,19 +93,19 @@ void lruvec_init(struct lruvec *lruvec)
 }
 
 #if defined(CONFIG_NUMA_BALANCING) && !defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS)
-int page_cpupid_xchg_last(struct page *page, int cpupid)
+int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
 {
 	unsigned long old_flags, flags;
 	int last_cpupid;
 
-	old_flags = READ_ONCE(page->flags);
+	old_flags = READ_ONCE(folio->flags);
 	do {
 		flags = old_flags;
 		last_cpupid = (flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
 
 		flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT);
 		flags |= (cpupid & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT;
-	} while (unlikely(!try_cmpxchg(&page->flags, &old_flags, flags)));
+	} while (unlikely(!try_cmpxchg(&folio->flags, &old_flags, flags)));
 
 	return last_cpupid;
 }
-- 
2.27.0




* Re: [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range()
  2023-10-13  8:55 ` [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range() Kefeng Wang
@ 2023-10-13 15:13   ` Matthew Wilcox
  2023-10-14  3:11     ` Kefeng Wang
  0 siblings, 1 reply; 26+ messages in thread
From: Matthew Wilcox @ 2023-10-13 15:13 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot

On Fri, Oct 13, 2023 at 04:55:53PM +0800, Kefeng Wang wrote:
> Use a folio in change_pte_range() to save three compound_head() calls.

Yes, but here we have a change of behaviour, which should be argued
is desirable.  Before if a partial THP was mapped, or a fs large
folio, we would do this to individual pages.  Now we're doing it to the
entire folio.  Is that desirable?  I don't have the background to argue
either way.

> @@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>  					continue;
>  				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
>  				    !toptier)
> -					xchg_page_access_time(page,
> +					folio_xchg_access_time(folio,
>  						jiffies_to_msecs(jiffies));
>  			}



* Re: [PATCH -next v2 18/19] mm: use folio_xchg_last_cpupid() in wp_page_reuse()
  2023-10-13  8:56 ` [PATCH -next v2 18/19] mm: use folio_xchg_last_cpupid() in wp_page_reuse() Kefeng Wang
@ 2023-10-13 15:19   ` Matthew Wilcox
  0 siblings, 0 replies; 26+ messages in thread
From: Matthew Wilcox @ 2023-10-13 15:19 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot

On Fri, Oct 13, 2023 at 04:56:02PM +0800, Kefeng Wang wrote:
> Convert to use folio_xchg_last_cpupid() in wp_page_reuse(), and remove
> page variable.

... another case where we're changing behaviour and need to argue it's
desirable.

> -	/*
> -	 * Clear the pages cpupid information as the existing
> -	 * information potentially belongs to a now completely
> -	 * unrelated process.
> -	 */
> -	if (page)
> -		page_cpupid_xchg_last(page, (1 << LAST_CPUPID_SHIFT) - 1);
> +	if (folio) {
> +		VM_BUG_ON(folio_test_anon(folio) &&
> +			  !PageAnonExclusive(vmf->page));
> +		/*
> +		 * Clear the pages cpupid information as the existing

s/pages/folio's/

> +		 * information potentially belongs to a now completely
> +		 * unrelated process.
> +		 */
> +		folio_xchg_last_cpupid(folio, (1 << LAST_CPUPID_SHIFT) - 1);
> +	}



* Re: [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range()
  2023-10-13 15:13   ` Matthew Wilcox
@ 2023-10-14  3:11     ` Kefeng Wang
  0 siblings, 0 replies; 26+ messages in thread
From: Kefeng Wang @ 2023-10-14  3:11 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, linux-mm, linux-kernel, ying.huang, david, Zi Yan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot



On 2023/10/13 23:13, Matthew Wilcox wrote:
> On Fri, Oct 13, 2023 at 04:55:53PM +0800, Kefeng Wang wrote:
>> Use a folio in change_pte_range() to save three compound_head() calls.
> 
> Yes, but here we have a change of behaviour, which should be argued
> is desirable.  Before if a partial THP was mapped, or a fs large
> folio, we would do this to individual pages.  Now we're doing it to the
> entire folio.  Is that desirable?  I don't have the background to argue
> either way.

Huang's reply in v1 [1] already covered this: we only use the
last_cpupid from the head page, and large folios are not handled in
do_numa_page(). If large folio numa balancing is supported later, we
could migrate an entire large folio mapped by a single process, or
maybe split a large folio mapped by multiple processes; when splitting,
the last_cpupid is copied from the head page to the tail pages.
Either way, I think neither this change nor the wp_page_reuse() one
breaks current numa balancing.

Thanks.


[1]https://lore.kernel.org/linux-mm/874jixhfeu.fsf@yhuang6-desk2.ccr.corp.intel.com/
> 
>> @@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>>   					continue;
>>   				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
>>   				    !toptier)
>> -					xchg_page_access_time(page,
>> +					folio_xchg_access_time(folio,
>>   						jiffies_to_msecs(jiffies));
>>   			}
> 



* Re: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio
  2023-10-13  8:56 ` [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio Kefeng Wang
@ 2023-10-17  7:33   ` kernel test robot
  2023-10-17  9:04     ` Kefeng Wang
  0 siblings, 1 reply; 26+ messages in thread
From: kernel test robot @ 2023-10-17  7:33 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, willy, linux-kernel,
	ying.huang, david, Zi Yan, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Kefeng Wang

Hi Kefeng,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm_types-add-virtual-and-_last_cpupid-into-struct-folio/20231017-121040
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20231013085603.1227349-18-wangkefeng.wang%40huawei.com
patch subject: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio
config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20231017/202310171537.XhmrkImn-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231017/202310171537.XhmrkImn-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202310171537.XhmrkImn-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> mm/memory.c:3276: warning: Function parameter or member 'folio' not described in 'finish_mkwrite_fault'


vim +3276 mm/memory.c

2f38ab2c3c7fef Shachar Raindel 2015-04-14  3258  
66a6197c118540 Jan Kara        2016-12-14  3259  /**
66a6197c118540 Jan Kara        2016-12-14  3260   * finish_mkwrite_fault - finish page fault for a shared mapping, making PTE
66a6197c118540 Jan Kara        2016-12-14  3261   *			  writeable once the page is prepared
66a6197c118540 Jan Kara        2016-12-14  3262   *
66a6197c118540 Jan Kara        2016-12-14  3263   * @vmf: structure describing the fault
66a6197c118540 Jan Kara        2016-12-14  3264   *
66a6197c118540 Jan Kara        2016-12-14  3265   * This function handles all that is needed to finish a write page fault in a
66a6197c118540 Jan Kara        2016-12-14  3266   * shared mapping due to PTE being read-only once the mapped page is prepared.
a862f68a8b3600 Mike Rapoport   2019-03-05  3267   * It handles locking of PTE and modifying it.
66a6197c118540 Jan Kara        2016-12-14  3268   *
66a6197c118540 Jan Kara        2016-12-14  3269   * The function expects the page to be locked or other protection against
66a6197c118540 Jan Kara        2016-12-14  3270   * concurrent faults / writeback (such as DAX radix tree locks).
a862f68a8b3600 Mike Rapoport   2019-03-05  3271   *
2797e79f1a491f Liu Xiang       2021-06-28  3272   * Return: %0 on success, %VM_FAULT_NOPAGE when PTE got changed before
a862f68a8b3600 Mike Rapoport   2019-03-05  3273   * we acquired PTE lock.
66a6197c118540 Jan Kara        2016-12-14  3274   */
60fe935fc6b035 Kefeng Wang     2023-10-13  3275  static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio)
66a6197c118540 Jan Kara        2016-12-14 @3276  {
66a6197c118540 Jan Kara        2016-12-14  3277  	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
66a6197c118540 Jan Kara        2016-12-14  3278  	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
66a6197c118540 Jan Kara        2016-12-14  3279  				       &vmf->ptl);
3db82b9374ca92 Hugh Dickins    2023-06-08  3280  	if (!vmf->pte)
3db82b9374ca92 Hugh Dickins    2023-06-08  3281  		return VM_FAULT_NOPAGE;
66a6197c118540 Jan Kara        2016-12-14  3282  	/*
66a6197c118540 Jan Kara        2016-12-14  3283  	 * We might have raced with another page fault while we released the
66a6197c118540 Jan Kara        2016-12-14  3284  	 * pte_offset_map_lock.
66a6197c118540 Jan Kara        2016-12-14  3285  	 */
c33c794828f212 Ryan Roberts    2023-06-12  3286  	if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte)) {
7df676974359f9 Bibo Mao        2020-05-27  3287  		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
66a6197c118540 Jan Kara        2016-12-14  3288  		pte_unmap_unlock(vmf->pte, vmf->ptl);
a19e25536ed3a2 Jan Kara        2016-12-14  3289  		return VM_FAULT_NOPAGE;
66a6197c118540 Jan Kara        2016-12-14  3290  	}
60fe935fc6b035 Kefeng Wang     2023-10-13  3291  	wp_page_reuse(vmf, folio);
a19e25536ed3a2 Jan Kara        2016-12-14  3292  	return 0;
66a6197c118540 Jan Kara        2016-12-14  3293  }
66a6197c118540 Jan Kara        2016-12-14  3294  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



* Re: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio
  2023-10-17  7:33   ` kernel test robot
@ 2023-10-17  9:04     ` Kefeng Wang
  2023-10-17 14:51       ` Andrew Morton
  0 siblings, 1 reply; 26+ messages in thread
From: Kefeng Wang @ 2023-10-17  9:04 UTC (permalink / raw)
  To: kernel test robot, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, willy, linux-kernel,
	ying.huang, david, Zi Yan, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot



On 2023/10/17 15:33, kernel test robot wrote:
> Hi Kefeng,
> 
> kernel test robot noticed the following build warnings:
> 
> [auto build test WARNING on akpm-mm/mm-everything]
> 
> url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm_types-add-virtual-and-_last_cpupid-into-struct-folio/20231017-121040
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link:    https://lore.kernel.org/r/20231013085603.1227349-18-wangkefeng.wang%40huawei.com
> patch subject: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio
> config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20231017/202310171537.XhmrkImn-lkp@intel.com/config)
> compiler: m68k-linux-gcc (GCC) 13.2.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231017/202310171537.XhmrkImn-lkp@intel.com/reproduce)
> 
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202310171537.XhmrkImn-lkp@intel.com/
> 
> All warnings (new ones prefixed by >>):
> 
>>> mm/memory.c:3276: warning: Function parameter or member 'folio' not described in 'finish_mkwrite_fault'
> 

Hi Andrew, should I resend this patch, or could you help update it?
There is also a comment fix (page -> folio's) for patch 18, thanks.

> 
> vim +3276 mm/memory.c
> 
> 2f38ab2c3c7fef Shachar Raindel 2015-04-14  3258
> 66a6197c118540 Jan Kara        2016-12-14  3259  /**
> 66a6197c118540 Jan Kara        2016-12-14  3260   * finish_mkwrite_fault - finish page fault for a shared mapping, making PTE
> 66a6197c118540 Jan Kara        2016-12-14  3261   *			  writeable once the page is prepared
> 66a6197c118540 Jan Kara        2016-12-14  3262   *
> 66a6197c118540 Jan Kara        2016-12-14  3263   * @vmf: structure describing the fault
> 66a6197c118540 Jan Kara        2016-12-14  3264   *
> 66a6197c118540 Jan Kara        2016-12-14  3265   * This function handles all that is needed to finish a write page fault in a
> 66a6197c118540 Jan Kara        2016-12-14  3266   * shared mapping due to PTE being read-only once the mapped page is prepared.
> a862f68a8b3600 Mike Rapoport   2019-03-05  3267   * It handles locking of PTE and modifying it.
> 66a6197c118540 Jan Kara        2016-12-14  3268   *
> 66a6197c118540 Jan Kara        2016-12-14  3269   * The function expects the page to be locked or other protection against
> 66a6197c118540 Jan Kara        2016-12-14  3270   * concurrent faults / writeback (such as DAX radix tree locks).
> a862f68a8b3600 Mike Rapoport   2019-03-05  3271   *
> 2797e79f1a491f Liu Xiang       2021-06-28  3272   * Return: %0 on success, %VM_FAULT_NOPAGE when PTE got changed before
> a862f68a8b3600 Mike Rapoport   2019-03-05  3273   * we acquired PTE lock.
> 66a6197c118540 Jan Kara        2016-12-14  3274   */
> 60fe935fc6b035 Kefeng Wang     2023-10-13  3275  static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio)
> 66a6197c118540 Jan Kara        2016-12-14 @3276  {
> 66a6197c118540 Jan Kara        2016-12-14  3277  	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
> 66a6197c118540 Jan Kara        2016-12-14  3278  	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
> 66a6197c118540 Jan Kara        2016-12-14  3279  				       &vmf->ptl);
> 3db82b9374ca92 Hugh Dickins    2023-06-08  3280  	if (!vmf->pte)
> 3db82b9374ca92 Hugh Dickins    2023-06-08  3281  		return VM_FAULT_NOPAGE;
> 66a6197c118540 Jan Kara        2016-12-14  3282  	/*
> 66a6197c118540 Jan Kara        2016-12-14  3283  	 * We might have raced with another page fault while we released the
> 66a6197c118540 Jan Kara        2016-12-14  3284  	 * pte_offset_map_lock.
> 66a6197c118540 Jan Kara        2016-12-14  3285  	 */
> c33c794828f212 Ryan Roberts    2023-06-12  3286  	if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte)) {
> 7df676974359f9 Bibo Mao        2020-05-27  3287  		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
> 66a6197c118540 Jan Kara        2016-12-14  3288  		pte_unmap_unlock(vmf->pte, vmf->ptl);
> a19e25536ed3a2 Jan Kara        2016-12-14  3289  		return VM_FAULT_NOPAGE;
> 66a6197c118540 Jan Kara        2016-12-14  3290  	}
> 60fe935fc6b035 Kefeng Wang     2023-10-13  3291  	wp_page_reuse(vmf, folio);
> a19e25536ed3a2 Jan Kara        2016-12-14  3292  	return 0;
> 66a6197c118540 Jan Kara        2016-12-14  3293  }
> 66a6197c118540 Jan Kara        2016-12-14  3294
> 



* Re: [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio
  2023-10-17  9:04     ` Kefeng Wang
@ 2023-10-17 14:51       ` Andrew Morton
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Morton @ 2023-10-17 14:51 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: kernel test robot, oe-kbuild-all, Linux Memory Management List,
	willy, linux-kernel, ying.huang, david, Zi Yan, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot

On Tue, 17 Oct 2023 17:04:41 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:

> >>> mm/memory.c:3276: warning: Function parameter or member 'folio' not described in 'finish_mkwrite_fault'
> > 
> 
> Hi Andrew, should I resend this patch? or could you help me to update
> it, also a comment(page -> folio's) on patch18, thanks.

I'd assumed a new series would be sent, addressing Matthew's comments.



Thread overview: 26+ messages
2023-10-13  8:55 [PATCH -next v2 00/19] mm: convert page cpupid functions to folios Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 01/19] mm_types: add virtual and _last_cpupid into struct folio Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 02/19] mm: add folio_last_cpupid() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 03/19] mm: memory: use folio_last_cpupid() in do_numa_page() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 04/19] mm: huge_memory: use folio_last_cpupid() in do_huge_pmd_numa_page() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 05/19] mm: huge_memory: use folio_last_cpupid() in __split_huge_page_tail() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 06/19] mm: remove page_cpupid_last() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 07/19] mm: add folio_xchg_access_time() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 08/19] sched/fair: use folio_xchg_access_time() in numa_hint_fault_latency() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 09/19] mm: mprotect: use a folio in change_pte_range() Kefeng Wang
2023-10-13 15:13   ` Matthew Wilcox
2023-10-14  3:11     ` Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 10/19] mm: huge_memory: use a folio in change_huge_pmd() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 11/19] mm: remove xchg_page_access_time() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 12/19] mm: add folio_xchg_last_cpupid() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 13/19] sched/fair: use folio_xchg_last_cpupid() in should_numa_migrate_memory() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 14/19] mm: migrate: use folio_xchg_last_cpupid() in folio_migrate_flags() Kefeng Wang
2023-10-13  8:55 ` [PATCH -next v2 15/19] mm: huge_memory: use folio_xchg_last_cpupid() in __split_huge_page_tail() Kefeng Wang
2023-10-13  8:56 ` [PATCH -next v2 16/19] mm: make finish_mkwrite_fault() static Kefeng Wang
2023-10-13  8:56 ` [PATCH -next v2 17/19] mm: convert wp_page_reuse() and finish_mkwrite_fault() to take a folio Kefeng Wang
2023-10-17  7:33   ` kernel test robot
2023-10-17  9:04     ` Kefeng Wang
2023-10-17 14:51       ` Andrew Morton
2023-10-13  8:56 ` [PATCH -next v2 18/19] mm: use folio_xchg_last_cpupid() in wp_page_reuse() Kefeng Wang
2023-10-13 15:19   ` Matthew Wilcox
2023-10-13  8:56 ` [PATCH -next v2 19/19] mm: remove page_cpupid_xchg_last() Kefeng Wang
