linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/4] Convert deactivate_page() to folio_deactivate()
@ 2022-12-08 20:34 Vishal Moola (Oracle)
  2022-12-08 20:35 ` [PATCH v3 1/4] mm/memory: Add vm_normal_folio() Vishal Moola (Oracle)
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-08 20:34 UTC (permalink / raw)
  To: linux-mm; +Cc: damon, linux-kernel, akpm, sj, Vishal Moola (Oracle)

deactivate_page() has already been converted to use folios internally.
This patch series converts the callers of deactivate_page() to use
folios as well. It also introduces vm_normal_folio() to assist with
folio conversions, and finally converts deactivate_page() into
folio_deactivate(), which takes a folio directly.

---
v3:
  Introduce vm_normal_folio() wrapper function to return a folio
  Fix madvise missing folio_mapcount()

v2:
  Fix a compilation issue
  Some minor rewording of comments/descriptions

Vishal Moola (Oracle) (4):
  mm/memory: Add vm_normal_folio()
  madvise: Convert madvise_cold_or_pageout_pte_range() to use folios
  mm/damon: Convert damon_pa_mark_accessed_or_deactivate() to use folios
  mm/swap: Convert deactivate_page() to folio_deactivate()

 include/linux/mm.h   |  2 +
 include/linux/swap.h |  2 +-
 mm/damon/paddr.c     | 11 ++++--
 mm/madvise.c         | 92 ++++++++++++++++++++++----------------------
 mm/memory.c          | 10 +++++
 mm/swap.c            | 14 +++----
 6 files changed, 72 insertions(+), 59 deletions(-)

-- 
2.38.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v3 1/4] mm/memory: Add vm_normal_folio()
  2022-12-08 20:34 [PATCH v3 0/4] Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
@ 2022-12-08 20:35 ` Vishal Moola (Oracle)
  2022-12-08 20:56   ` Matthew Wilcox
  2022-12-08 20:35 ` [PATCH v3 2/4] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios Vishal Moola (Oracle)
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-08 20:35 UTC (permalink / raw)
  To: linux-mm; +Cc: damon, linux-kernel, akpm, sj, Vishal Moola (Oracle)

Introduce a wrapper function, vm_normal_folio(), which calls
vm_normal_page() and returns the folio of the page found, or NULL if no
page is found.

This allows callers to get a folio from a pte, which will eventually
let them replace their struct page variables with struct folio
entirely.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 10 ++++++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8bbcccbc5565..626ae0757bd3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1860,6 +1860,8 @@ static inline bool can_do_mlock(void) { return false; }
 extern int user_shm_lock(size_t, struct ucounts *);
 extern void user_shm_unlock(size_t, struct ucounts *);
 
+struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
+			     pte_t pte);
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t pte);
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/memory.c b/mm/memory.c
index f88c351aecd4..1247c19c516c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -672,6 +672,16 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 	return pfn_to_page(pfn);
 }
 
+struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
+			    pte_t pte)
+{
+	struct page *page = vm_normal_page(vma, addr, pte);
+
+	if (page)
+		return page_folio(page);
+	return NULL;
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 				pmd_t pmd)
-- 
2.38.1



* [PATCH v3 2/4] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios
  2022-12-08 20:34 [PATCH v3 0/4] Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
  2022-12-08 20:35 ` [PATCH v3 1/4] mm/memory: Add vm_normal_folio() Vishal Moola (Oracle)
@ 2022-12-08 20:35 ` Vishal Moola (Oracle)
  2022-12-08 20:57   ` Matthew Wilcox
  2022-12-08 20:35 ` [PATCH v3 3/4] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() " Vishal Moola (Oracle)
  2022-12-08 20:35 ` [PATCH v3 4/4] mm/swap: Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
  3 siblings, 1 reply; 11+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-08 20:35 UTC (permalink / raw)
  To: linux-mm; +Cc: damon, linux-kernel, akpm, sj, Vishal Moola (Oracle)

This change removes a number of calls to compound_head(), and saves 1319
bytes of kernel text.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/madvise.c | 92 ++++++++++++++++++++++++++--------------------------
 1 file changed, 46 insertions(+), 46 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 2baa93ca2310..2a84b5dfbb4c 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -332,8 +332,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	struct vm_area_struct *vma = walk->vma;
 	pte_t *orig_pte, *pte, ptent;
 	spinlock_t *ptl;
-	struct page *page = NULL;
-	LIST_HEAD(page_list);
+	struct folio *folio = NULL;
+	LIST_HEAD(folio_list);
 
 	if (fatal_signal_pending(current))
 		return -EINTR;
@@ -358,23 +358,23 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			goto huge_unlock;
 		}
 
-		page = pmd_page(orig_pmd);
+		folio = pfn_folio(pmd_pfn(orig_pmd));
 
-		/* Do not interfere with other mappings of this page */
-		if (page_mapcount(page) != 1)
+		/* Do not interfere with other mappings of this folio */
+		if (folio_mapcount(folio) != 1)
 			goto huge_unlock;
 
 		if (next - addr != HPAGE_PMD_SIZE) {
 			int err;
 
-			get_page(page);
+			folio_get(folio);
 			spin_unlock(ptl);
-			lock_page(page);
-			err = split_huge_page(page);
-			unlock_page(page);
-			put_page(page);
+			folio_lock(folio);
+			err = split_folio(folio);
+			folio_unlock(folio);
+			folio_put(folio);
 			if (!err)
-				goto regular_page;
+				goto regular_folio;
 			return 0;
 		}
 
@@ -386,25 +386,25 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
 		}
 
-		ClearPageReferenced(page);
-		test_and_clear_page_young(page);
+		folio_clear_referenced(folio);
+		folio_test_clear_young(folio);
 		if (pageout) {
-			if (!isolate_lru_page(page)) {
-				if (PageUnevictable(page))
-					putback_lru_page(page);
+			if (!folio_isolate_lru(folio)) {
+				if (folio_test_unevictable(folio))
+					folio_putback_lru(folio);
 				else
-					list_add(&page->lru, &page_list);
+					list_add(&folio->lru, &folio_list);
 			}
 		} else
-			deactivate_page(page);
+			deactivate_page(&folio->page);
 huge_unlock:
 		spin_unlock(ptl);
 		if (pageout)
-			reclaim_pages(&page_list);
+			reclaim_pages(&folio_list);
 		return 0;
 	}
 
-regular_page:
+regular_folio:
 	if (pmd_trans_unstable(pmd))
 		return 0;
 #endif
@@ -421,31 +421,31 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		if (!pte_present(ptent))
 			continue;
 
-		page = vm_normal_page(vma, addr, ptent);
-		if (!page || is_zone_device_page(page))
+		folio = vm_normal_folio(vma, addr, ptent);
+		if (!folio || folio_is_zone_device(folio))
 			continue;
 
 		/*
 		 * Creating a THP page is expensive so split it only if we
 		 * are sure it's worth. Split it if we are only owner.
 		 */
-		if (PageTransCompound(page)) {
-			if (page_mapcount(page) != 1)
+		if (folio_test_large(folio)) {
+			if (folio_mapcount(folio) != 1)
 				break;
-			get_page(page);
-			if (!trylock_page(page)) {
-				put_page(page);
+			folio_get(folio);
+			if (!folio_trylock(folio)) {
+				folio_put(folio);
 				break;
 			}
 			pte_unmap_unlock(orig_pte, ptl);
-			if (split_huge_page(page)) {
-				unlock_page(page);
-				put_page(page);
+			if (split_folio(folio)) {
+				folio_unlock(folio);
+				folio_put(folio);
 				orig_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 				break;
 			}
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			orig_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 			pte--;
 			addr -= PAGE_SIZE;
@@ -453,13 +453,13 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		}
 
 		/*
-		 * Do not interfere with other mappings of this page and
-		 * non-LRU page.
+		 * Do not interfere with other mappings of this folio and
+		 * non-LRU folio.
 		 */
-		if (!PageLRU(page) || page_mapcount(page) != 1)
+		if (!folio_test_lru(folio) || folio_mapcount(folio) != 1)
 			continue;
 
-		VM_BUG_ON_PAGE(PageTransCompound(page), page);
+		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
 
 		if (pte_young(ptent)) {
 			ptent = ptep_get_and_clear_full(mm, addr, pte,
@@ -470,28 +470,28 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		}
 
 		/*
-		 * We are deactivating a page for accelerating reclaiming.
-		 * VM couldn't reclaim the page unless we clear PG_young.
+		 * We are deactivating a folio for accelerating reclaiming.
+		 * VM couldn't reclaim the folio unless we clear PG_young.
 		 * As a side effect, it makes confuse idle-page tracking
 		 * because they will miss recent referenced history.
 		 */
-		ClearPageReferenced(page);
-		test_and_clear_page_young(page);
+		folio_clear_referenced(folio);
+		folio_test_clear_young(folio);
 		if (pageout) {
-			if (!isolate_lru_page(page)) {
-				if (PageUnevictable(page))
-					putback_lru_page(page);
+			if (!folio_isolate_lru(folio)) {
+				if (folio_test_unevictable(folio))
+					folio_putback_lru(folio);
 				else
-					list_add(&page->lru, &page_list);
+					list_add(&folio->lru, &folio_list);
 			}
 		} else
-			deactivate_page(page);
+			deactivate_page(&folio->page);
 	}
 
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
 	if (pageout)
-		reclaim_pages(&page_list);
+		reclaim_pages(&folio_list);
 	cond_resched();
 
 	return 0;
-- 
2.38.1



* [PATCH v3 3/4] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() to use folios
  2022-12-08 20:34 [PATCH v3 0/4] Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
  2022-12-08 20:35 ` [PATCH v3 1/4] mm/memory: Add vm_normal_folio() Vishal Moola (Oracle)
  2022-12-08 20:35 ` [PATCH v3 2/4] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios Vishal Moola (Oracle)
@ 2022-12-08 20:35 ` Vishal Moola (Oracle)
  2022-12-08 20:57   ` Matthew Wilcox
  2022-12-08 21:52   ` SeongJae Park
  2022-12-08 20:35 ` [PATCH v3 4/4] mm/swap: Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
  3 siblings, 2 replies; 11+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-08 20:35 UTC (permalink / raw)
  To: linux-mm; +Cc: damon, linux-kernel, akpm, sj, Vishal Moola (Oracle)

This change replaces 2 calls to compound_head() with one. This is in
preparation for the conversion of deactivate_page() to
folio_deactivate().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/damon/paddr.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index e1a4315c4be6..73548bc82297 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -238,15 +238,18 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
 
 	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
 		struct page *page = damon_get_page(PHYS_PFN(addr));
+		struct folio *folio;
 
 		if (!page)
 			continue;
+		folio = page_folio(page);
+
 		if (mark_accessed)
-			mark_page_accessed(page);
+			folio_mark_accessed(folio);
 		else
-			deactivate_page(page);
-		put_page(page);
-		applied++;
+			deactivate_page(&folio->page);
+		folio_put(folio);
+		applied += folio_nr_pages(folio);
 	}
 	return applied * PAGE_SIZE;
 }
-- 
2.38.1



* [PATCH v3 4/4] mm/swap: Convert deactivate_page() to folio_deactivate()
  2022-12-08 20:34 [PATCH v3 0/4] Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
                   ` (2 preceding siblings ...)
  2022-12-08 20:35 ` [PATCH v3 3/4] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() " Vishal Moola (Oracle)
@ 2022-12-08 20:35 ` Vishal Moola (Oracle)
  2022-12-08 20:59   ` Matthew Wilcox
  2022-12-08 21:56   ` SeongJae Park
  3 siblings, 2 replies; 11+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-08 20:35 UTC (permalink / raw)
  To: linux-mm; +Cc: damon, linux-kernel, akpm, sj, Vishal Moola (Oracle)

deactivate_page() has already been converted to use folios; this change
converts it to take a folio argument instead of calling page_folio().
It also renames the function to folio_deactivate() to be more
consistent with other folio functions.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/swap.h |  2 +-
 mm/damon/paddr.c     |  2 +-
 mm/madvise.c         |  4 ++--
 mm/swap.c            | 14 ++++++--------
 4 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a18cf4b7c724..6427b3af30c3 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -409,7 +409,7 @@ extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
-extern void deactivate_page(struct page *page);
+void folio_deactivate(struct folio *folio);
 extern void mark_page_lazyfree(struct page *page);
 extern void swap_setup(void);
 
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 73548bc82297..6b36de1396a4 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -247,7 +247,7 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
 		if (mark_accessed)
 			folio_mark_accessed(folio);
 		else
-			deactivate_page(&folio->page);
+			folio_deactivate(folio);
 		folio_put(folio);
 		applied += folio_nr_pages(folio);
 	}
diff --git a/mm/madvise.c b/mm/madvise.c
index 2a84b5dfbb4c..1ab293019862 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -396,7 +396,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 					list_add(&folio->lru, &folio_list);
 			}
 		} else
-			deactivate_page(&folio->page);
+			folio_deactivate(folio);
 huge_unlock:
 		spin_unlock(ptl);
 		if (pageout)
@@ -485,7 +485,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 					list_add(&folio->lru, &folio_list);
 			}
 		} else
-			deactivate_page(&folio->page);
+			folio_deactivate(folio);
 	}
 
 	arch_leave_lazy_mmu_mode();
diff --git a/mm/swap.c b/mm/swap.c
index 955930f41d20..9cc8215acdbb 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -720,17 +720,15 @@ void deactivate_file_folio(struct folio *folio)
 }
 
 /*
- * deactivate_page - deactivate a page
- * @page: page to deactivate
+ * folio_deactivate - deactivate a folio
+ * @folio: folio to deactivate
  *
- * deactivate_page() moves @page to the inactive list if @page was on the active
- * list and was not an unevictable page.  This is done to accelerate the reclaim
- * of @page.
+ * folio_deactivate() moves @folio to the inactive list if @folio was on the
+ * active list and was not unevictable. This is done to accelerate the
+ * reclaim of @folio.
  */
-void deactivate_page(struct page *page)
+void folio_deactivate(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-
 	if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
 	    (folio_test_active(folio) || lru_gen_enabled())) {
 		struct folio_batch *fbatch;
-- 
2.38.1



* Re: [PATCH v3 1/4] mm/memory: Add vm_normal_folio()
  2022-12-08 20:35 ` [PATCH v3 1/4] mm/memory: Add vm_normal_folio() Vishal Moola (Oracle)
@ 2022-12-08 20:56   ` Matthew Wilcox
  0 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2022-12-08 20:56 UTC (permalink / raw)
  To: Vishal Moola (Oracle); +Cc: linux-mm, damon, linux-kernel, akpm, sj

On Thu, Dec 08, 2022 at 12:35:00PM -0800, Vishal Moola (Oracle) wrote:
> Introduce a wrapper function, vm_normal_folio(), which calls
> vm_normal_page() and returns the folio of the page found, or NULL if
> no page is found.
> 
> This allows callers to get a folio from a pte, which will eventually
> let them replace their struct page variables with struct folio
> entirely.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>


* Re: [PATCH v3 3/4] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() to use folios
  2022-12-08 20:35 ` [PATCH v3 3/4] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() " Vishal Moola (Oracle)
@ 2022-12-08 20:57   ` Matthew Wilcox
  2022-12-08 21:52   ` SeongJae Park
  1 sibling, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2022-12-08 20:57 UTC (permalink / raw)
  To: Vishal Moola (Oracle); +Cc: linux-mm, damon, linux-kernel, akpm, sj

On Thu, Dec 08, 2022 at 12:35:02PM -0800, Vishal Moola (Oracle) wrote:
> This change replaces 2 calls to compound_head() with one. This is in
> preparation for the conversion of deactivate_page() to
> folio_deactivate().
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>



* Re: [PATCH v3 2/4] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios
  2022-12-08 20:35 ` [PATCH v3 2/4] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios Vishal Moola (Oracle)
@ 2022-12-08 20:57   ` Matthew Wilcox
  0 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2022-12-08 20:57 UTC (permalink / raw)
  To: Vishal Moola (Oracle); +Cc: linux-mm, damon, linux-kernel, akpm, sj

On Thu, Dec 08, 2022 at 12:35:01PM -0800, Vishal Moola (Oracle) wrote:
> This change removes a number of calls to compound_head(), and saves 1319
> bytes of kernel text.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>



* Re: [PATCH v3 4/4] mm/swap: Convert deactivate_page() to folio_deactivate()
  2022-12-08 20:35 ` [PATCH v3 4/4] mm/swap: Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
@ 2022-12-08 20:59   ` Matthew Wilcox
  2022-12-08 21:56   ` SeongJae Park
  1 sibling, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2022-12-08 20:59 UTC (permalink / raw)
  To: Vishal Moola (Oracle); +Cc: linux-mm, damon, linux-kernel, akpm, sj

On Thu, Dec 08, 2022 at 12:35:03PM -0800, Vishal Moola (Oracle) wrote:
> deactivate_page() has already been converted to use folios; this
> change converts it to take a folio argument instead of calling
> page_folio(). It also renames the function to folio_deactivate() to
> be more consistent with other folio functions.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

(for future series like this, it's slightly fewer changes to introduce
folio_deactivate() first and change deactivate_page() to be a wrapper
around it. Then patches 2 & 3 in this series could use
folio_deactivate() directly instead of being changed twice.  I wouldn't
ask you to redo the patch series at this point, but next time ...)


* Re: [PATCH v3 3/4] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() to use folios
  2022-12-08 20:35 ` [PATCH v3 3/4] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() " Vishal Moola (Oracle)
  2022-12-08 20:57   ` Matthew Wilcox
@ 2022-12-08 21:52   ` SeongJae Park
  1 sibling, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2022-12-08 21:52 UTC (permalink / raw)
  To: Vishal Moola (Oracle); +Cc: linux-mm, damon, linux-kernel, akpm, sj

On Thu, 8 Dec 2022 12:35:02 -0800 "Vishal Moola (Oracle)" <vishal.moola@gmail.com> wrote:

> This change replaces 2 calls to compound_head() with one.

I had hoped this would be more detailed (e.g., the 2 calls from
mark_page_accessed() and put_page() replaced with 1 call from
page_folio()), but it isn't a blocker, as we already had this
discussion on v1 of this patch.

> This is in preparation for the conversion of deactivate_page() to
> folio_deactivate().
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: SeongJae Park <sj@kernel.org>

Please note that this conflicts with one of my patches that is under
review[1] at the moment.  I will rebase and send the patch again if
Andrew merges this first.

[1] https://lore.kernel.org/damon/20221205230830.144349-3-sj@kernel.org/


Thanks,
SJ

> ---
>  mm/damon/paddr.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index e1a4315c4be6..73548bc82297 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -238,15 +238,18 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
>  
>  	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
>  		struct page *page = damon_get_page(PHYS_PFN(addr));
> +		struct folio *folio;
>  
>  		if (!page)
>  			continue;
> +		folio = page_folio(page);
> +
>  		if (mark_accessed)
> -			mark_page_accessed(page);
> +			folio_mark_accessed(folio);
>  		else
> -			deactivate_page(page);
> -		put_page(page);
> -		applied++;
> +			deactivate_page(&folio->page);
> +		folio_put(folio);
> +		applied += folio_nr_pages(folio);
>  	}
>  	return applied * PAGE_SIZE;
>  }
> -- 
> 2.38.1


* Re: [PATCH v3 4/4] mm/swap: Convert deactivate_page() to folio_deactivate()
  2022-12-08 20:35 ` [PATCH v3 4/4] mm/swap: Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
  2022-12-08 20:59   ` Matthew Wilcox
@ 2022-12-08 21:56   ` SeongJae Park
  1 sibling, 0 replies; 11+ messages in thread
From: SeongJae Park @ 2022-12-08 21:56 UTC (permalink / raw)
  To: Vishal Moola (Oracle); +Cc: linux-mm, damon, linux-kernel, akpm, sj

On Thu, 8 Dec 2022 12:35:03 -0800 "Vishal Moola (Oracle)" <vishal.moola@gmail.com> wrote:

> deactivate_page() has already been converted to use folios; this
> change converts it to take a folio argument instead of calling
> page_folio(). It also renames the function to folio_deactivate() to
> be more consistent with other folio functions.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: SeongJae Park <sj@kernel.org>


Thanks,
SJ

> ---
>  include/linux/swap.h |  2 +-
>  mm/damon/paddr.c     |  2 +-
>  mm/madvise.c         |  4 ++--
>  mm/swap.c            | 14 ++++++--------
>  4 files changed, 10 insertions(+), 12 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index a18cf4b7c724..6427b3af30c3 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -409,7 +409,7 @@ extern void lru_add_drain(void);
>  extern void lru_add_drain_cpu(int cpu);
>  extern void lru_add_drain_cpu_zone(struct zone *zone);
>  extern void lru_add_drain_all(void);
> -extern void deactivate_page(struct page *page);
> +void folio_deactivate(struct folio *folio);
>  extern void mark_page_lazyfree(struct page *page);
>  extern void swap_setup(void);
>  
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index 73548bc82297..6b36de1396a4 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -247,7 +247,7 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
>  		if (mark_accessed)
>  			folio_mark_accessed(folio);
>  		else
> -			deactivate_page(&folio->page);
> +			folio_deactivate(folio);
>  		folio_put(folio);
>  		applied += folio_nr_pages(folio);
>  	}
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 2a84b5dfbb4c..1ab293019862 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -396,7 +396,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  					list_add(&folio->lru, &folio_list);
>  			}
>  		} else
> -			deactivate_page(&folio->page);
> +			folio_deactivate(folio);
>  huge_unlock:
>  		spin_unlock(ptl);
>  		if (pageout)
> @@ -485,7 +485,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  					list_add(&folio->lru, &folio_list);
>  			}
>  		} else
> -			deactivate_page(&folio->page);
> +			folio_deactivate(folio);
>  	}
>  
>  	arch_leave_lazy_mmu_mode();
> diff --git a/mm/swap.c b/mm/swap.c
> index 955930f41d20..9cc8215acdbb 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -720,17 +720,15 @@ void deactivate_file_folio(struct folio *folio)
>  }
>  
>  /*
> - * deactivate_page - deactivate a page
> - * @page: page to deactivate
> + * folio_deactivate - deactivate a folio
> + * @folio: folio to deactivate
>   *
> - * deactivate_page() moves @page to the inactive list if @page was on the active
> - * list and was not an unevictable page.  This is done to accelerate the reclaim
> - * of @page.
> + * folio_deactivate() moves @folio to the inactive list if @folio was on the
> + * active list and was not unevictable. This is done to accelerate the
> + * reclaim of @folio.
>   */
> -void deactivate_page(struct page *page)
> +void folio_deactivate(struct folio *folio)
>  {
> -	struct folio *folio = page_folio(page);
> -
>  	if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
>  	    (folio_test_active(folio) || lru_gen_enabled())) {
>  		struct folio_batch *fbatch;
> -- 
> 2.38.1
> 

