damon.lists.linux.dev archive mirror
* [PATCH -next v2 0/7] mm: convert page_idle/damon to use folios
@ 2022-12-27 12:27 Kefeng Wang
  2022-12-27 12:27 ` [PATCH -next v2 1/7] mm: page_idle: Convert page idle " Kefeng Wang
                   ` (6 more replies)
  0 siblings, 7 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-12-27 12:27 UTC (permalink / raw)
  To: Andrew Morton, SeongJae Park
  Cc: damon, linux-mm, vishal.moola, willy, david, Kefeng Wang

v2:
- drop unsafe pfn_to_online_folio(); convert page_idle_get_page() and
  damon_get_page() to return a folio after grabbing a reference
- convert damon hugetlb-related functions
- rebased on next-20221226.

Kefeng Wang (7):
  mm: page_idle: Convert page idle to use folios
  mm: damon: introduce damon_get_folio()
  mm: damon: convert damon_ptep/pmdp_mkold() to use folios
  mm: damon: paddr: convert damon_pa_*() to use folios
  mm: damon: vaddr: convert damon_young_pmd_entry() to use folio
  mm: damon: remove unneeded damon_get_page()
  mm: damon: vaddr: convert hugetlb related function to use folios

 mm/damon/ops-common.c | 34 +++++++++++++------------
 mm/damon/ops-common.h |  2 +-
 mm/damon/paddr.c      | 59 +++++++++++++++++++------------------------
 mm/damon/vaddr.c      | 38 ++++++++++++++--------------
 mm/page_idle.c        | 45 ++++++++++++++++-----------------
 5 files changed, 86 insertions(+), 92 deletions(-)

-- 
2.35.3


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH -next v2 1/7] mm: page_idle: Convert page idle to use folios
  2022-12-27 12:27 [PATCH -next v2 0/7] mm: convert page_idle/damon to use folios Kefeng Wang
@ 2022-12-27 12:27 ` Kefeng Wang
  2022-12-27 18:14   ` Matthew Wilcox
  2022-12-27 12:27 ` [PATCH -next v2 2/7] mm: damon: introduce damon_get_folio() Kefeng Wang
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Kefeng Wang @ 2022-12-27 12:27 UTC (permalink / raw)
  To: Andrew Morton, SeongJae Park
  Cc: damon, linux-mm, vishal.moola, willy, david, Kefeng Wang

First, make page_idle_get_page() return a folio once it successfully
grabs a reference to a page, and rename it to page_idle_get_folio();
then use it to convert the page_idle_bitmap_read()/write() functions.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/page_idle.c | 45 ++++++++++++++++++++++-----------------------
 1 file changed, 22 insertions(+), 23 deletions(-)

diff --git a/mm/page_idle.c b/mm/page_idle.c
index bc08332a609c..4f248e500e08 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -31,19 +31,20 @@
  *
  * This function tries to get a user memory page by pfn as described above.
  */
-static struct page *page_idle_get_page(unsigned long pfn)
+static struct folio *page_idle_get_folio(unsigned long pfn)
 {
 	struct page *page = pfn_to_online_page(pfn);
+	struct folio *folio;
 
-	if (!page || !PageLRU(page) ||
-	    !get_page_unless_zero(page))
+	if (!page || !PageLRU(page) || !get_page_unless_zero(page))
 		return NULL;
 
-	if (unlikely(!PageLRU(page))) {
-		put_page(page);
-		page = NULL;
+	folio = page_folio(page);
+	if (unlikely(!folio_test_lru(folio))) {
+		folio_put(folio);
+		folio = NULL;
 	}
-	return page;
+	return folio;
 }
 
 static bool page_idle_clear_pte_refs_one(struct folio *folio,
@@ -83,10 +84,8 @@ static bool page_idle_clear_pte_refs_one(struct folio *folio,
 	return true;
 }
 
-static void page_idle_clear_pte_refs(struct page *page)
+static void page_idle_clear_pte_refs(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-
 	/*
 	 * Since rwc.try_lock is unused, rwc is effectively immutable, so we
 	 * can make it static to save some cycles and stack.
@@ -115,7 +114,7 @@ static ssize_t page_idle_bitmap_read(struct file *file, struct kobject *kobj,
 				     loff_t pos, size_t count)
 {
 	u64 *out = (u64 *)buf;
-	struct page *page;
+	struct folio *folio;
 	unsigned long pfn, end_pfn;
 	int bit;
 
@@ -134,19 +133,19 @@ static ssize_t page_idle_bitmap_read(struct file *file, struct kobject *kobj,
 		bit = pfn % BITMAP_CHUNK_BITS;
 		if (!bit)
 			*out = 0ULL;
-		page = page_idle_get_page(pfn);
-		if (page) {
-			if (page_is_idle(page)) {
+		folio = page_idle_get_folio(pfn);
+		if (folio) {
+			if (folio_test_idle(folio)) {
 				/*
 				 * The page might have been referenced via a
 				 * pte, in which case it is not idle. Clear
 				 * refs and recheck.
 				 */
-				page_idle_clear_pte_refs(page);
-				if (page_is_idle(page))
+				page_idle_clear_pte_refs(folio);
+				if (folio_test_idle(folio))
 					*out |= 1ULL << bit;
 			}
-			put_page(page);
+			folio_put(folio);
 		}
 		if (bit == BITMAP_CHUNK_BITS - 1)
 			out++;
@@ -160,7 +159,7 @@ static ssize_t page_idle_bitmap_write(struct file *file, struct kobject *kobj,
 				      loff_t pos, size_t count)
 {
 	const u64 *in = (u64 *)buf;
-	struct page *page;
+	struct folio *folio;
 	unsigned long pfn, end_pfn;
 	int bit;
 
@@ -178,11 +177,11 @@ static ssize_t page_idle_bitmap_write(struct file *file, struct kobject *kobj,
 	for (; pfn < end_pfn; pfn++) {
 		bit = pfn % BITMAP_CHUNK_BITS;
 		if ((*in >> bit) & 1) {
-			page = page_idle_get_page(pfn);
-			if (page) {
-				page_idle_clear_pte_refs(page);
-				set_page_idle(page);
-				put_page(page);
+			folio = page_idle_get_folio(pfn);
+			if (folio) {
+				page_idle_clear_pte_refs(folio);
+				folio_set_idle(folio);
+				folio_put(folio);
 			}
 		}
 		if (bit == BITMAP_CHUNK_BITS - 1)
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 16+ messages in thread
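
For context, the interface this patch converts is idle page tracking,
which userspace drives through /sys/kernel/mm/page_idle/bitmap, one bit
per pfn, read and written in 8-byte chunks. A minimal sketch of the
userspace side (a hypothetical example; a real tool would first map a
process's virtual addresses to pfns via /proc/<pid>/pagemap, and the
file requires CONFIG_IDLE_PAGE_TRACKING plus sufficient privileges):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint64_t chunk = ~0ULL;	/* one bit per pfn, 64 pfns per chunk */
	int fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);

	if (fd < 0)
		return 1;
	/* Writing a set bit marks that pfn idle (ignored for non-LRU pages). */
	if (pwrite(fd, &chunk, sizeof(chunk), 0) != (ssize_t)sizeof(chunk))
		perror("pwrite");
	/* ... let the workload run; bits still set on re-read are idle. */
	if (pread(fd, &chunk, sizeof(chunk), 0) == (ssize_t)sizeof(chunk))
		printf("idle bits for pfns 0-63: %016llx\n",
		       (unsigned long long)chunk);
	close(fd);
	return 0;
}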

* [PATCH -next v2 2/7] mm: damon: introduce damon_get_folio()
  2022-12-27 12:27 [PATCH -next v2 0/7] mm: convert page_idle/damon to use folios Kefeng Wang
  2022-12-27 12:27 ` [PATCH -next v2 1/7] mm: page_idle: Convert page idle " Kefeng Wang
@ 2022-12-27 12:27 ` Kefeng Wang
  2022-12-27 19:42   ` SeongJae Park
  2022-12-27 12:27 ` [PATCH -next v2 3/7] mm: damon: convert damon_ptep/pmdp_mkold() to use folios Kefeng Wang
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Kefeng Wang @ 2022-12-27 12:27 UTC (permalink / raw)
  To: Andrew Morton, SeongJae Park
  Cc: damon, linux-mm, vishal.moola, willy, david, Kefeng Wang

Introduce damon_get_folio() and the temporary wrapper function
damon_get_page(), which helps us convert damon-related functions
to use folios; the wrapper will be dropped once the conversion is
completed.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/damon/ops-common.c | 14 ++++++++------
 mm/damon/ops-common.h |  8 +++++++-
 2 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index 75409601f934..ff38a19aa92e 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -16,21 +16,23 @@
  * Get an online page for a pfn if it's in the LRU list.  Otherwise, returns
  * NULL.
  *
- * The body of this function is stolen from the 'page_idle_get_page()'.  We
+ * The body of this function is stolen from the 'page_idle_get_folio()'.  We
  * steal rather than reuse it because the code is quite simple.
  */
-struct page *damon_get_page(unsigned long pfn)
+struct folio *damon_get_folio(unsigned long pfn)
 {
 	struct page *page = pfn_to_online_page(pfn);
+	struct folio *folio;
 
 	if (!page || !PageLRU(page) || !get_page_unless_zero(page))
 		return NULL;
 
-	if (unlikely(!PageLRU(page))) {
-		put_page(page);
-		page = NULL;
+	folio = page_folio(page);
+	if (unlikely(!folio_test_lru(folio))) {
+		folio_put(folio);
+		folio = NULL;
 	}
-	return page;
+	return folio;
 }
 
 void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
index 8d82d3722204..4ee607813981 100644
--- a/mm/damon/ops-common.h
+++ b/mm/damon/ops-common.h
@@ -7,7 +7,13 @@
 
 #include <linux/damon.h>
 
-struct page *damon_get_page(unsigned long pfn);
+struct folio *damon_get_folio(unsigned long pfn);
+static inline struct page *damon_get_page(unsigned long pfn)
+{
+	struct folio *folio = damon_get_folio(pfn);
+
+	return &folio->page;
+}
 
 void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr);
 void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr);
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH -next v2 3/7] mm: damon: convert damon_ptep/pmdp_mkold() to use folios
  2022-12-27 12:27 [PATCH -next v2 0/7] mm: convert page_idle/damon to use folios Kefeng Wang
  2022-12-27 12:27 ` [PATCH -next v2 1/7] mm: page_idle: Convert page idle " Kefeng Wang
  2022-12-27 12:27 ` [PATCH -next v2 2/7] mm: damon: introduce damon_get_folio() Kefeng Wang
@ 2022-12-27 12:27 ` Kefeng Wang
  2022-12-27 12:27 ` [PATCH -next v2 4/7] mm: damon: paddr: convert damon_pa_*() " Kefeng Wang
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-12-27 12:27 UTC (permalink / raw)
  To: Andrew Morton, SeongJae Park
  Cc: damon, linux-mm, vishal.moola, willy, david, Kefeng Wang

With damon_get_folio(), let's convert damon_ptep_mkold() and
damon_pmdp_mkold() to use folios.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/damon/ops-common.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index ff38a19aa92e..6f9ac8750ca6 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -38,9 +38,9 @@ struct folio *damon_get_folio(unsigned long pfn)
 void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
 {
 	bool referenced = false;
-	struct page *page = damon_get_page(pte_pfn(*pte));
+	struct folio *folio = damon_get_folio(pte_pfn(*pte));
 
-	if (!page)
+	if (!folio)
 		return;
 
 	if (pte_young(*pte)) {
@@ -54,19 +54,19 @@ void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
 #endif /* CONFIG_MMU_NOTIFIER */
 
 	if (referenced)
-		set_page_young(page);
+		folio_set_young(folio);
 
-	set_page_idle(page);
-	put_page(page);
+	folio_set_idle(folio);
+	folio_put(folio);
 }
 
 void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	bool referenced = false;
-	struct page *page = damon_get_page(pmd_pfn(*pmd));
+	struct folio *folio = damon_get_folio(pmd_pfn(*pmd));
 
-	if (!page)
+	if (!folio)
 		return;
 
 	if (pmd_young(*pmd)) {
@@ -80,10 +80,10 @@ void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr)
 #endif /* CONFIG_MMU_NOTIFIER */
 
 	if (referenced)
-		set_page_young(page);
+		folio_set_young(folio);
 
-	set_page_idle(page);
-	put_page(page);
+	folio_set_idle(folio);
+	folio_put(folio);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 }
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH -next v2 4/7] mm: damon: paddr: convert damon_pa_*() to use folios
  2022-12-27 12:27 [PATCH -next v2 0/7] mm: convert page_idle/damon to use folios Kefeng Wang
                   ` (2 preceding siblings ...)
  2022-12-27 12:27 ` [PATCH -next v2 3/7] mm: damon: convert damon_ptep/pmdp_mkold() to use folios Kefeng Wang
@ 2022-12-27 12:27 ` Kefeng Wang
  2022-12-27 19:50   ` SeongJae Park
  2022-12-27 12:27 ` [PATCH -next v2 5/7] mm: damon: vaddr: convert damon_young_pmd_entry() to use folio Kefeng Wang
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Kefeng Wang @ 2022-12-27 12:27 UTC (permalink / raw)
  To: Andrew Morton, SeongJae Park
  Cc: damon, linux-mm, vishal.moola, willy, david, Kefeng Wang

With damon_get_folio(), let's convert all the damon_pa_*() to use folios.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/damon/paddr.c | 59 +++++++++++++++++++++---------------------------
 1 file changed, 26 insertions(+), 33 deletions(-)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 6334c99e5152..728a96c929fc 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -33,17 +33,15 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
 
 static void damon_pa_mkold(unsigned long paddr)
 {
-	struct folio *folio;
-	struct page *page = damon_get_page(PHYS_PFN(paddr));
+	struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
 	struct rmap_walk_control rwc = {
 		.rmap_one = __damon_pa_mkold,
 		.anon_lock = folio_lock_anon_vma_read,
 	};
 	bool need_lock;
 
-	if (!page)
+	if (!folio)
 		return;
-	folio = page_folio(page);
 
 	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
 		folio_set_idle(folio);
@@ -58,7 +56,6 @@ static void damon_pa_mkold(unsigned long paddr)
 
 	if (need_lock)
 		folio_unlock(folio);
-
 out:
 	folio_put(folio);
 }
@@ -122,8 +119,7 @@ static bool __damon_pa_young(struct folio *folio, struct vm_area_struct *vma,
 
 static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
 {
-	struct folio *folio;
-	struct page *page = damon_get_page(PHYS_PFN(paddr));
+	struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
 	struct damon_pa_access_chk_result result = {
 		.page_sz = PAGE_SIZE,
 		.accessed = false,
@@ -135,9 +131,8 @@ static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
 	};
 	bool need_lock;
 
-	if (!page)
+	if (!folio)
 		return false;
-	folio = page_folio(page);
 
 	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
 		if (folio_test_idle(folio))
@@ -203,18 +198,18 @@ static unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)
 }
 
 static bool __damos_pa_filter_out(struct damos_filter *filter,
-		struct page *page)
+		struct folio *folio)
 {
 	bool matched = false;
 	struct mem_cgroup *memcg;
 
 	switch (filter->type) {
 	case DAMOS_FILTER_TYPE_ANON:
-		matched = PageAnon(page);
+		matched = folio_test_anon(folio);
 		break;
 	case DAMOS_FILTER_TYPE_MEMCG:
 		rcu_read_lock();
-		memcg = page_memcg_check(page);
+		memcg = page_memcg_check(folio_page(folio, 0));
 		if (!memcg)
 			matched = false;
 		else
@@ -231,12 +226,12 @@ static bool __damos_pa_filter_out(struct damos_filter *filter,
 /*
  * damos_pa_filter_out - Return true if the page should be filtered out.
  */
-static bool damos_pa_filter_out(struct damos *scheme, struct page *page)
+static bool damos_pa_filter_out(struct damos *scheme, struct folio *folio)
 {
 	struct damos_filter *filter;
 
 	damos_for_each_filter(filter, scheme) {
-		if (__damos_pa_filter_out(filter, page))
+		if (__damos_pa_filter_out(filter, folio))
 			return true;
 	}
 	return false;
@@ -245,33 +240,33 @@ static bool damos_pa_filter_out(struct damos *scheme, struct page *page)
 static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 {
 	unsigned long addr, applied;
-	LIST_HEAD(page_list);
+	LIST_HEAD(folio_list);
 
 	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
-		struct page *page = damon_get_page(PHYS_PFN(addr));
+		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
 
-		if (!page)
+		if (!folio)
 			continue;
 
-		if (damos_pa_filter_out(s, page)) {
-			put_page(page);
+		if (damos_pa_filter_out(s, folio)) {
+			folio_put(folio);
 			continue;
 		}
 
-		ClearPageReferenced(page);
-		test_and_clear_page_young(page);
-		if (isolate_lru_page(page)) {
-			put_page(page);
+		folio_clear_referenced(folio);
+		folio_test_clear_young(folio);
+		if (folio_isolate_lru(folio)) {
+			folio_put(folio);
 			continue;
 		}
-		if (PageUnevictable(page)) {
-			putback_lru_page(page);
+		if (folio_test_unevictable(folio)) {
+			folio_putback_lru(folio);
 		} else {
-			list_add(&page->lru, &page_list);
-			put_page(page);
+			list_add(&folio->lru, &folio_list);
+			folio_put(folio);
 		}
 	}
-	applied = reclaim_pages(&page_list);
+	applied = reclaim_pages(&folio_list);
 	cond_resched();
 	return applied * PAGE_SIZE;
 }
@@ -282,14 +277,12 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
 	unsigned long addr, applied = 0;
 
 	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
-		struct page *page = damon_get_page(PHYS_PFN(addr));
-		struct folio *folio;
+		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
 
-		if (!page)
+		if (!folio)
 			continue;
-		folio = page_folio(page);
 
-		if (damos_pa_filter_out(s, &folio->page)) {
+		if (damos_pa_filter_out(s, folio)) {
 			folio_put(folio);
 			continue;
 		}
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 16+ messages in thread
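
Restating the converted pageout loop with the reference counting made
explicit (the body mirrors the diff above; the comments are annotations
added for illustration, not part of the patch):

static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
{
	unsigned long addr, applied;
	LIST_HEAD(folio_list);

	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
		/* Takes a reference on success; every path below drops it. */
		struct folio *folio = damon_get_folio(PHYS_PFN(addr));

		if (!folio)
			continue;	/* pfn not online, or not an LRU page */

		if (damos_pa_filter_out(s, folio)) {
			folio_put(folio);
			continue;
		}

		/* Make the folio look cold to the reclaim heuristics. */
		folio_clear_referenced(folio);
		folio_test_clear_young(folio);

		/* In this tree, a nonzero return means isolation failed. */
		if (folio_isolate_lru(folio)) {
			folio_put(folio);
			continue;
		}
		if (folio_test_unevictable(folio)) {
			folio_putback_lru(folio);	/* e.g. mlocked */
		} else {
			/*
			 * The isolation reference keeps the folio alive on
			 * the private list; drop the lookup reference.
			 */
			list_add(&folio->lru, &folio_list);
			folio_put(folio);
		}
	}
	applied = reclaim_pages(&folio_list);	/* pages actually reclaimed */
	cond_resched();
	return applied * PAGE_SIZE;
}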

* [PATCH -next v2 5/7] mm: damon: vaddr: convert damon_young_pmd_entry() to use folio
  2022-12-27 12:27 [PATCH -next v2 0/7] mm: convert page_idle/damon to use folios Kefeng Wang
                   ` (3 preceding siblings ...)
  2022-12-27 12:27 ` [PATCH -next v2 4/7] mm: damon: paddr: convert damon_pa_*() " Kefeng Wang
@ 2022-12-27 12:27 ` Kefeng Wang
  2022-12-27 12:27 ` [PATCH -next v2 6/7] mm: damon: remove unneeded damon_get_page() Kefeng Wang
  2022-12-27 12:27 ` [PATCH -next v2 7/7] mm: damon: vaddr: convert hugetlb related function to use folios Kefeng Wang
  6 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-12-27 12:27 UTC (permalink / raw)
  To: Andrew Morton, SeongJae Park
  Cc: damon, linux-mm, vishal.moola, willy, david, Kefeng Wang

With damon_get_folio(), let's convert damon_young_pmd_entry()
to use folios.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/damon/vaddr.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 15f03df66db6..29227b7a6032 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -431,7 +431,7 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
 {
 	pte_t *pte;
 	spinlock_t *ptl;
-	struct page *page;
+	struct folio *folio;
 	struct damon_young_walk_private *priv = walk->private;
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -446,16 +446,16 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
 			spin_unlock(ptl);
 			goto regular_page;
 		}
-		page = damon_get_page(pmd_pfn(*pmd));
-		if (!page)
+		folio = damon_get_folio(pmd_pfn(*pmd));
+		if (!folio)
 			goto huge_out;
-		if (pmd_young(*pmd) || !page_is_idle(page) ||
+		if (pmd_young(*pmd) || !folio_test_idle(folio) ||
 					mmu_notifier_test_young(walk->mm,
 						addr)) {
 			*priv->page_sz = HPAGE_PMD_SIZE;
 			priv->young = true;
 		}
-		put_page(page);
+		folio_put(folio);
 huge_out:
 		spin_unlock(ptl);
 		return 0;
@@ -469,15 +469,15 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	if (!pte_present(*pte))
 		goto out;
-	page = damon_get_page(pte_pfn(*pte));
-	if (!page)
+	folio = damon_get_folio(pte_pfn(*pte));
+	if (!folio)
 		goto out;
-	if (pte_young(*pte) || !page_is_idle(page) ||
+	if (pte_young(*pte) || !folio_test_idle(folio) ||
 			mmu_notifier_test_young(walk->mm, addr)) {
 		*priv->page_sz = PAGE_SIZE;
 		priv->young = true;
 	}
-	put_page(page);
+	folio_put(folio);
 out:
 	pte_unmap_unlock(pte, ptl);
 	return 0;
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH -next v2 6/7] mm: damon: remove unneeded damon_get_page()
  2022-12-27 12:27 [PATCH -next v2 0/7] mm: convert page_idle/damon to use folios Kefeng Wang
                   ` (4 preceding siblings ...)
  2022-12-27 12:27 ` [PATCH -next v2 5/7] mm: damon: vaddr: convert damon_young_pmd_entry() to use folio Kefeng Wang
@ 2022-12-27 12:27 ` Kefeng Wang
  2022-12-27 12:27 ` [PATCH -next v2 7/7] mm: damon: vaddr: convert hugetlb related function to use folios Kefeng Wang
  6 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-12-27 12:27 UTC (permalink / raw)
  To: Andrew Morton, SeongJae Park
  Cc: damon, linux-mm, vishal.moola, willy, david, Kefeng Wang

After all damon_get_page() callers are converted to damon_get_folio(),
remove the now-unneeded wrapper damon_get_page().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/damon/ops-common.h | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
index 4ee607813981..14f4bc69f29b 100644
--- a/mm/damon/ops-common.h
+++ b/mm/damon/ops-common.h
@@ -8,12 +8,6 @@
 #include <linux/damon.h>
 
 struct folio *damon_get_folio(unsigned long pfn);
-static inline struct page *damon_get_page(unsigned long pfn)
-{
-	struct folio *folio = damon_get_folio(pfn);
-
-	return &folio->page;
-}
 
 void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr);
 void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr);
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH -next v2 7/7] mm: damon: vaddr: convert hugetlb related function to use folios
  2022-12-27 12:27 [PATCH -next v2 0/7] mm: convert page_idle/damon to use folios Kefeng Wang
                   ` (5 preceding siblings ...)
  2022-12-27 12:27 ` [PATCH -next v2 6/7] mm: damon: remove unneeded damon_get_page() Kefeng Wang
@ 2022-12-27 12:27 ` Kefeng Wang
  6 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-12-27 12:27 UTC (permalink / raw)
  To: Andrew Morton, SeongJae Park
  Cc: damon, linux-mm, vishal.moola, willy, david, Kefeng Wang

Convert damon_hugetlb_mkold() and damon_young_hugetlb_entry() to
use folios.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/damon/vaddr.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 29227b7a6032..9d92c5eb3a1f 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -335,9 +335,9 @@ static void damon_hugetlb_mkold(pte_t *pte, struct mm_struct *mm,
 {
 	bool referenced = false;
 	pte_t entry = huge_ptep_get(pte);
-	struct page *page = pte_page(entry);
+	struct folio *folio = pfn_folio(pte_pfn(entry));
 
-	get_page(page);
+	folio_get(folio);
 
 	if (pte_young(entry)) {
 		referenced = true;
@@ -352,10 +352,10 @@ static void damon_hugetlb_mkold(pte_t *pte, struct mm_struct *mm,
 #endif /* CONFIG_MMU_NOTIFIER */
 
 	if (referenced)
-		set_page_young(page);
+		folio_set_young(folio);
 
-	set_page_idle(page);
-	put_page(page);
+	folio_set_idle(folio);
+	folio_put(folio);
 }
 
 static int damon_mkold_hugetlb_entry(pte_t *pte, unsigned long hmask,
@@ -490,7 +490,7 @@ static int damon_young_hugetlb_entry(pte_t *pte, unsigned long hmask,
 {
 	struct damon_young_walk_private *priv = walk->private;
 	struct hstate *h = hstate_vma(walk->vma);
-	struct page *page;
+	struct folio *folio;
 	spinlock_t *ptl;
 	pte_t entry;
 
@@ -499,16 +499,16 @@ static int damon_young_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	if (!pte_present(entry))
 		goto out;
 
-	page = pte_page(entry);
-	get_page(page);
+	folio = pfn_folio(pte_pfn(entry));
+	folio_get(folio);
 
-	if (pte_young(entry) || !page_is_idle(page) ||
+	if (pte_young(entry) || !folio_test_idle(folio) ||
 	    mmu_notifier_test_young(walk->mm, addr)) {
 		*priv->page_sz = huge_page_size(h);
 		priv->young = true;
 	}
 
-	put_page(page);
+	folio_put(folio);
 
 out:
 	spin_unlock(ptl);
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH -next v2 1/7] mm: page_idle: Convert page idle to use folios
  2022-12-27 12:27 ` [PATCH -next v2 1/7] mm: page_idle: Convert page idle " Kefeng Wang
@ 2022-12-27 18:14   ` Matthew Wilcox
  2022-12-28  1:18     ` Kefeng Wang
  0 siblings, 1 reply; 16+ messages in thread
From: Matthew Wilcox @ 2022-12-27 18:14 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, SeongJae Park, damon, linux-mm, vishal.moola, david

On Tue, Dec 27, 2022 at 08:27:08PM +0800, Kefeng Wang wrote:
> -static struct page *page_idle_get_page(unsigned long pfn)
> +static struct folio *page_idle_get_folio(unsigned long pfn)
>  {
>  	struct page *page = pfn_to_online_page(pfn);
> +	struct folio *folio;
>  
> -	if (!page || !PageLRU(page) ||
> -	    !get_page_unless_zero(page))
> +	if (!page || !PageLRU(page) || !get_page_unless_zero(page))
>  		return NULL;

Mmmm, no.  PageLRU hides a compound_head() call.  Try doing this instead:

	if (!page || PageTail(page))
		return NULL;
	folio = page_folio(page);
	if (!folio_test_lru(folio) || !folio_try_get(folio))
		return NULL;
	if (page_folio(page) != folio || !folio_test_lru(folio)) {
		folio_put(folio);
		folio = NULL;
	}

	return folio;


^ permalink raw reply	[flat|nested] 16+ messages in thread
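
Spelling out the race this pattern closes: between page_folio() and
folio_try_get(), the folio can be split, or freed and its memory
reused, so both the page-to-folio membership and the LRU state must be
rechecked after the reference is taken. A sketch of the helper with the
suggestion applied (the explanatory comments are illustrative
additions, not taken from the posted patch):

static struct folio *page_idle_get_folio(unsigned long pfn)
{
	struct page *page = pfn_to_online_page(pfn);
	struct folio *folio;

	/* Tail pages share state with their head page; report only the head. */
	if (!page || PageTail(page))
		return NULL;

	folio = page_folio(page);
	/*
	 * folio_test_lru() avoids the compound_head() call hidden inside
	 * PageLRU(); folio_try_get() fails if the refcount already hit zero.
	 */
	if (!folio_test_lru(folio) || !folio_try_get(folio))
		return NULL;
	/*
	 * The folio may have been split, or freed and reused, between
	 * page_folio() and folio_try_get(); recheck that the page still
	 * belongs to this folio and that it is still on the LRU.
	 */
	if (page_folio(page) != folio || !folio_test_lru(folio)) {
		folio_put(folio);
		folio = NULL;
	}

	return folio;
}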

* Re: [PATCH -next v2 2/7] mm: damon: introduce damon_get_folio()
  2022-12-27 12:27 ` [PATCH -next v2 2/7] mm: damon: introduce damon_get_folio() Kefeng Wang
@ 2022-12-27 19:42   ` SeongJae Park
  2022-12-27 19:49     ` Matthew Wilcox
  2022-12-28  2:06     ` Kefeng Wang
  0 siblings, 2 replies; 16+ messages in thread
From: SeongJae Park @ 2022-12-27 19:42 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, SeongJae Park, damon, linux-mm, vishal.moola,
	willy, david

Hi Kefeng,

Could we use 'mm/damon:' prefix instead of 'mm: damon:' for the patch subjects?

On Tue, 27 Dec 2022 20:27:09 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:

> Introduce damon_get_folio() and the temporary wrapper function
> damon_get_page(), which helps us convert damon-related functions
> to use folios; the wrapper will be dropped once the conversion is
> completed.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/damon/ops-common.c | 14 ++++++++------
>  mm/damon/ops-common.h |  8 +++++++-
>  2 files changed, 15 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
> index 75409601f934..ff38a19aa92e 100644
> --- a/mm/damon/ops-common.c
> +++ b/mm/damon/ops-common.c
> @@ -16,21 +16,23 @@
>   * Get an online page for a pfn if it's in the LRU list.  Otherwise, returns
>   * NULL.
>   *
> - * The body of this function is stolen from the 'page_idle_get_page()'.  We
> + * The body of this function is stolen from the 'page_idle_get_folio()'.  We
>   * steal rather than reuse it because the code is quite simple.
>   */
> -struct page *damon_get_page(unsigned long pfn)
> +struct folio *damon_get_folio(unsigned long pfn)
>  {
>  	struct page *page = pfn_to_online_page(pfn);
> +	struct folio *folio;
>  
>  	if (!page || !PageLRU(page) || !get_page_unless_zero(page))
>  		return NULL;
>  
> -	if (unlikely(!PageLRU(page))) {
> -		put_page(page);
> -		page = NULL;
> +	folio = page_folio(page);
> +	if (unlikely(!folio_test_lru(folio))) {
> +		folio_put(folio);
> +		folio = NULL;
>  	}

I think Matthew's comment for 'page_idle_get_folio()'[1] could be applied here?


[1] https://lore.kernel.org/damon/Y6s2HPjrON2Sx%2Fgr@casper.infradead.org/

> -	return page;
> +	return folio;
>  }
>  
>  void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
> diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
> index 8d82d3722204..4ee607813981 100644
> --- a/mm/damon/ops-common.h
> +++ b/mm/damon/ops-common.h
> @@ -7,7 +7,13 @@
>  
>  #include <linux/damon.h>
>  
> -struct page *damon_get_page(unsigned long pfn);
> +struct folio *damon_get_folio(unsigned long pfn);
> +static inline struct page *damon_get_page(unsigned long pfn)
> +{
> +	struct folio *folio = damon_get_folio(pfn);
> +
> +	return &folio->page;

I think we should check if folio is NULL before dereferencing it?

> +}
>  
>  void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr);
>  void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr);
> -- 
> 2.35.3
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH -next v2 2/7] mm: damon: introduce damon_get_folio()
  2022-12-27 19:42   ` SeongJae Park
@ 2022-12-27 19:49     ` Matthew Wilcox
  2022-12-27 21:38       ` SeongJae Park
  2022-12-28  2:06     ` Kefeng Wang
  1 sibling, 1 reply; 16+ messages in thread
From: Matthew Wilcox @ 2022-12-27 19:49 UTC (permalink / raw)
  To: SeongJae Park
  Cc: Kefeng Wang, Andrew Morton, damon, linux-mm, vishal.moola, david

On Tue, Dec 27, 2022 at 07:42:57PM +0000, SeongJae Park wrote:
> > +static inline struct page *damon_get_page(unsigned long pfn)
> > +{
> > +	struct folio *folio = damon_get_folio(pfn);
> > +
> > +	return &folio->page;
> 
> I think we should check if folio is NULL before dereferencing it?

&folio->page does not dereference folio.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH -next v2 4/7] mm: damon: paddr: convert damon_pa_*() to use folios
  2022-12-27 12:27 ` [PATCH -next v2 4/7] mm: damon: paddr: convert damon_pa_*() " Kefeng Wang
@ 2022-12-27 19:50   ` SeongJae Park
  2022-12-28  1:26     ` Kefeng Wang
  0 siblings, 1 reply; 16+ messages in thread
From: SeongJae Park @ 2022-12-27 19:50 UTC (permalink / raw)
  Cc: Andrew Morton, SeongJae Park, damon, linux-mm, vishal.moola,
	willy, david, Kefeng Wang

Hi Kefeng,

> With damon_get_folio(), let's convert all the damon_pa_*() to use folios.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/damon/paddr.c | 59 +++++++++++++++++++++---------------------------
>  1 file changed, 26 insertions(+), 33 deletions(-)
> 
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index 6334c99e5152..728a96c929fc 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -33,17 +33,15 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
>  
>  static void damon_pa_mkold(unsigned long paddr)
>  {
> -	struct folio *folio;
> -	struct page *page = damon_get_page(PHYS_PFN(paddr));
> +	struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
>  	struct rmap_walk_control rwc = {
>  		.rmap_one = __damon_pa_mkold,
>  		.anon_lock = folio_lock_anon_vma_read,
>  	};
>  	bool need_lock;
>  
> -	if (!page)
> +	if (!folio)
>  		return;
> -	folio = page_folio(page);
>  
>  	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
>  		folio_set_idle(folio);
> @@ -58,7 +56,6 @@ static void damon_pa_mkold(unsigned long paddr)
>  
>  	if (need_lock)
>  		folio_unlock(folio);
> -

Seems unnecessary change?

>  out:
>  	folio_put(folio);
>  }
> @@ -122,8 +119,7 @@ static bool __damon_pa_young(struct folio *folio, struct vm_area_struct *vma,
>  
>  static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
>  {
> -	struct folio *folio;
> -	struct page *page = damon_get_page(PHYS_PFN(paddr));
> +	struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
>  	struct damon_pa_access_chk_result result = {
>  		.page_sz = PAGE_SIZE,
>  		.accessed = false,
> @@ -135,9 +131,8 @@ static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
>  	};
>  	bool need_lock;
>  
> -	if (!page)
> +	if (!folio)
>  		return false;
> -	folio = page_folio(page);
>  
>  	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
>  		if (folio_test_idle(folio))
> @@ -203,18 +198,18 @@ static unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)
>  }
>  
>  static bool __damos_pa_filter_out(struct damos_filter *filter,
> -		struct page *page)
> +		struct folio *folio)
>  {
>  	bool matched = false;
>  	struct mem_cgroup *memcg;
>  
>  	switch (filter->type) {
>  	case DAMOS_FILTER_TYPE_ANON:
> -		matched = PageAnon(page);
> +		matched = folio_test_anon(folio);
>  		break;
>  	case DAMOS_FILTER_TYPE_MEMCG:
>  		rcu_read_lock();
> -		memcg = page_memcg_check(page);
> +		memcg = page_memcg_check(folio_page(folio, 0));
>  		if (!memcg)
>  			matched = false;
>  		else
> @@ -231,12 +226,12 @@ static bool __damos_pa_filter_out(struct damos_filter *filter,
>  /*
>   * damos_pa_filter_out - Return true if the page should be filtered out.
>   */
> -static bool damos_pa_filter_out(struct damos *scheme, struct page *page)
> +static bool damos_pa_filter_out(struct damos *scheme, struct folio *folio)
>  {
>  	struct damos_filter *filter;
>  
>  	damos_for_each_filter(filter, scheme) {
> -		if (__damos_pa_filter_out(filter, page))
> +		if (__damos_pa_filter_out(filter, folio))
>  			return true;
>  	}
>  	return false;
> @@ -245,33 +240,33 @@ static bool damos_pa_filter_out(struct damos *scheme, struct page *page)
>  static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
>  {
>  	unsigned long addr, applied;
> -	LIST_HEAD(page_list);
> +	LIST_HEAD(folio_list);
>  
>  	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
> -		struct page *page = damon_get_page(PHYS_PFN(addr));
> +		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
>  
> -		if (!page)
> +		if (!folio)
>  			continue;
>  
> -		if (damos_pa_filter_out(s, page)) {
> -			put_page(page);
> +		if (damos_pa_filter_out(s, folio)) {
> +			folio_put(folio);
>  			continue;
>  		}
>  
> -		ClearPageReferenced(page);
> -		test_and_clear_page_young(page);
> -		if (isolate_lru_page(page)) {
> -			put_page(page);
> +		folio_clear_referenced(folio);
> +		folio_test_clear_young(folio);
> +		if (folio_isolate_lru(folio)) {
> +			folio_put(folio);
>  			continue;
>  		}
> -		if (PageUnevictable(page)) {
> -			putback_lru_page(page);
> +		if (folio_test_unevictable(folio)) {
> +			folio_putback_lru(folio);
>  		} else {
> -			list_add(&page->lru, &page_list);
> -			put_page(page);
> +			list_add(&folio->lru, &folio_list);
> +			folio_put(folio);
>  		}
>  	}
> -	applied = reclaim_pages(&page_list);
> +	applied = reclaim_pages(&folio_list);
>  	cond_resched();
>  	return applied * PAGE_SIZE;
>  }
> @@ -282,14 +277,12 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
>  	unsigned long addr, applied = 0;
>  
>  	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
> -		struct page *page = damon_get_page(PHYS_PFN(addr));
> -		struct folio *folio;
> +		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
>  
> -		if (!page)
> +		if (!folio)
>  			continue;
> -		folio = page_folio(page);
>  
> -		if (damos_pa_filter_out(s, &folio->page)) {
> +		if (damos_pa_filter_out(s, folio)) {
>  			folio_put(folio);
>  			continue;
>  		}
> -- 
> 2.35.3


Thanks,
SJ

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH -next v2 2/7] mm: damon: introduce damon_get_folio()
  2022-12-27 19:49     ` Matthew Wilcox
@ 2022-12-27 21:38       ` SeongJae Park
  0 siblings, 0 replies; 16+ messages in thread
From: SeongJae Park @ 2022-12-27 21:38 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: SeongJae Park, Kefeng Wang, Andrew Morton, damon, linux-mm,
	vishal.moola, david

On Tue, 27 Dec 2022 19:49:56 +0000 Matthew Wilcox <willy@infradead.org> wrote:

> On Tue, Dec 27, 2022 at 07:42:57PM +0000, SeongJae Park wrote:
> > > +static inline struct page *damon_get_page(unsigned long pfn)
> > > +{
> > > +	struct folio *folio = damon_get_folio(pfn);
> > > +
> > > +	return &folio->page;
> > 
> > I think we should check if folio is NULL before dereferencing it?
> 
> &folio->page does not dereference folio.

Ah, ok.  And this is safe because ->page is at the beginning of folio, right?

Kefeng, could we add a comment explaining it, for people having bad eyes like
me?


Thanks,
SJ

^ permalink raw reply	[flat|nested] 16+ messages in thread
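
To make the point concrete: struct page is the first member of struct
folio, so &folio->page is pointer arithmetic with offset zero and reads
nothing from memory; a NULL folio therefore yields a NULL page pointer,
which is why the temporary damon_get_page() wrapper can get away
without an explicit NULL check. A standalone illustration using
simplified stand-in types (not the kernel definitions; ISO C
technically leaves member access on a null pointer undefined, but the
kernel's C dialect relies on the plain arithmetic reading):

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel types, for illustration only. */
struct page { unsigned long flags; };
struct folio { struct page page; /* first member, as in the kernel */ };

static struct page *folio_to_page(struct folio *folio)
{
	/* folio + offsetof(struct folio, page) == folio + 0: no load occurs. */
	return &folio->page;
}

int main(void)
{
	printf("offsetof(struct folio, page) = %zu\n",
	       offsetof(struct folio, page));
	printf("NULL folio maps to page %p\n", (void *)folio_to_page(NULL));
	return 0;
}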

* Re: [PATCH -next v2 1/7] mm: page_idle: Convert page idle to use folios
  2022-12-27 18:14   ` Matthew Wilcox
@ 2022-12-28  1:18     ` Kefeng Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-12-28  1:18 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, SeongJae Park, damon, linux-mm, vishal.moola, david



On 2022/12/28 2:14, Matthew Wilcox wrote:
> On Tue, Dec 27, 2022 at 08:27:08PM +0800, Kefeng Wang wrote:
>> -static struct page *page_idle_get_page(unsigned long pfn)
>> +static struct folio *page_idle_get_folio(unsigned long pfn)
>>   {
>>   	struct page *page = pfn_to_online_page(pfn);
>> +	struct folio *folio;
>>   
>> -	if (!page || !PageLRU(page) ||
>> -	    !get_page_unless_zero(page))
>> +	if (!page || !PageLRU(page) || !get_page_unless_zero(page))
>>   		return NULL;
> 
> Mmmm, no.  PageLRU hides a compound_head() call.  Try doing this instead:
> 
> 	if (!page || PageTail(page))
> 		return NULL;
> 	folio = page_folio(page);
> 	if (!folio_test_lru(folio) || !folio_try_get(folio))
> 		return NULL;
> 	if (page_folio(page) != folio || !folio_test_lru(folio)) {
> 		folio_put(folio);
> 		folio = NULL;
> 	}
> 
Thanks Matthew, this is more complete, will update.
> 	return folio;
> 
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH -next v2 4/7] mm: damon: paddr: convert damon_pa_*() to use folios
  2022-12-27 19:50   ` SeongJae Park
@ 2022-12-28  1:26     ` Kefeng Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-12-28  1:26 UTC (permalink / raw)
  To: SeongJae Park; +Cc: Andrew Morton, damon, linux-mm, vishal.moola, willy, david



On 2022/12/28 3:50, SeongJae Park wrote:
> Hi Kefeng,
> 
>> With damon_get_folio(), let's convert all the damon_pa_*() to use folios.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   mm/damon/paddr.c | 59 +++++++++++++++++++++---------------------------
>>   1 file changed, 26 insertions(+), 33 deletions(-)
>>
>> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
>> index 6334c99e5152..728a96c929fc 100644
>> --- a/mm/damon/paddr.c
>> +++ b/mm/damon/paddr.c
>> @@ -33,17 +33,15 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
>>   
>>   static void damon_pa_mkold(unsigned long paddr)
>>   {
>> -	struct folio *folio;
>> -	struct page *page = damon_get_page(PHYS_PFN(paddr));
>> +	struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
>>   	struct rmap_walk_control rwc = {
>>   		.rmap_one = __damon_pa_mkold,
>>   		.anon_lock = folio_lock_anon_vma_read,
>>   	};
>>   	bool need_lock;
>>   
>> -	if (!page)
>> +	if (!folio)
>>   		return;
>> -	folio = page_folio(page);
>>   
>>   	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
>>   		folio_set_idle(folio);
>> @@ -58,7 +56,6 @@ static void damon_pa_mkold(unsigned long paddr)
>>   
>>   	if (need_lock)
>>   		folio_unlock(folio);
>> -
> 
> Seems unnecessary change?

oh, will drop this change, thanks

> 
> Thanks,
> SJ

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH -next v2 2/7] mm: damon: introduce damon_get_folio()
  2022-12-27 19:42   ` SeongJae Park
  2022-12-27 19:49     ` Matthew Wilcox
@ 2022-12-28  2:06     ` Kefeng Wang
  1 sibling, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-12-28  2:06 UTC (permalink / raw)
  To: SeongJae Park; +Cc: Andrew Morton, damon, linux-mm, vishal.moola, willy, david



On 2022/12/28 3:42, SeongJae Park wrote:
> Hi Kefeng,
> 
> Could we use 'mm/damon:' prefix instead of 'mm: damon:' for the patch subjects?

Sure.

> 
> On Tue, 27 Dec 2022 20:27:09 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
> 
>> Introduce damon_get_folio() and the temporary wrapper function
>> damon_get_page(), which helps us convert damon-related functions
>> to use folios; the wrapper will be dropped once the conversion is
>> completed.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   mm/damon/ops-common.c | 14 ++++++++------
>>   mm/damon/ops-common.h |  8 +++++++-
>>   2 files changed, 15 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
>> index 75409601f934..ff38a19aa92e 100644
>> --- a/mm/damon/ops-common.c
>> +++ b/mm/damon/ops-common.c
>> @@ -16,21 +16,23 @@
>>    * Get an online page for a pfn if it's in the LRU list.  Otherwise, returns
>>    * NULL.
>>    *
>> - * The body of this function is stolen from the 'page_idle_get_page()'.  We
>> + * The body of this function is stolen from the 'page_idle_get_folio()'.  We
>>    * steal rather than reuse it because the code is quite simple.
>>    */
>> -struct page *damon_get_page(unsigned long pfn)
>> +struct folio *damon_get_folio(unsigned long pfn)
>>   {
>>   	struct page *page = pfn_to_online_page(pfn);
>> +	struct folio *folio;
>>   
>>   	if (!page || !PageLRU(page) || !get_page_unless_zero(page))
>>   		return NULL;
>>   
>> -	if (unlikely(!PageLRU(page))) {
>> -		put_page(page);
>> -		page = NULL;
>> +	folio = page_folio(page);
>> +	if (unlikely(!folio_test_lru(folio))) {
>> +		folio_put(folio);
>> +		folio = NULL;
>>   	}
> 
> I think Matthew's comment for 'page_idle_get_folio()'[1] could be applied here?
> 

Will change too.

> 
> [1] https://lore.kernel.org/damon/Y6s2HPjrON2Sx%2Fgr@casper.infradead.org/
> 
>> -	return page;
>> +	return folio;
>>   }
>>   
>>   void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
>> diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
>> index 8d82d3722204..4ee607813981 100644
>> --- a/mm/damon/ops-common.h
>> +++ b/mm/damon/ops-common.h
>> @@ -7,7 +7,13 @@
>>   
>>   #include <linux/damon.h>
>>   
>> -struct page *damon_get_page(unsigned long pfn);
>> +struct folio *damon_get_folio(unsigned long pfn);
>> +static inline struct page *damon_get_page(unsigned long pfn)
>> +{
>> +	struct folio *folio = damon_get_folio(pfn);
>> +
>> +	return &folio->page;
> 
> I think we should check if folio is NULL before dereferencing it?

I could add a comment here, thanks.

> 
>> +}
>>   
>>   void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr);
>>   void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr);
>> -- 
>> 2.35.3
>>

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2022-12-28  2:06 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-27 12:27 [PATCH -next v2 0/7] mm: convert page_idle/damon to use folios Kefeng Wang
2022-12-27 12:27 ` [PATCH -next v2 1/7] mm: page_idle: Convert page idle " Kefeng Wang
2022-12-27 18:14   ` Matthew Wilcox
2022-12-28  1:18     ` Kefeng Wang
2022-12-27 12:27 ` [PATCH -next v2 2/7] mm: damon: introduce damon_get_folio() Kefeng Wang
2022-12-27 19:42   ` SeongJae Park
2022-12-27 19:49     ` Matthew Wilcox
2022-12-27 21:38       ` SeongJae Park
2022-12-28  2:06     ` Kefeng Wang
2022-12-27 12:27 ` [PATCH -next v2 3/7] mm: damon: convert damon_ptep/pmdp_mkold() to use folios Kefeng Wang
2022-12-27 12:27 ` [PATCH -next v2 4/7] mm: damon: paddr: convert damon_pa_*() " Kefeng Wang
2022-12-27 19:50   ` SeongJae Park
2022-12-28  1:26     ` Kefeng Wang
2022-12-27 12:27 ` [PATCH -next v2 5/7] mm: damon: vaddr: convert damon_young_pmd_entry() to use folio Kefeng Wang
2022-12-27 12:27 ` [PATCH -next v2 6/7] mm: damon: remove unneeded damon_get_page() Kefeng Wang
2022-12-27 12:27 ` [PATCH -next v2 7/7] mm: damon: vaddr: convert hugetlb related function to use folios Kefeng Wang
