* [PATCH 1/2] mm: huge_memory: Convert madvise_free_huge_pmd to use a folio
@ 2022-12-07  2:34 Kefeng Wang
  2022-12-07  2:34 ` [PATCH 2/2] mm: swap: Convert mark_page_lazyfree() to mark_folio_lazyfree() Kefeng Wang
  2022-12-12 20:00 ` [PATCH 1/2] mm: huge_memory: Convert madvise_free_huge_pmd to use a folio Vishal Moola
  0 siblings, 2 replies; 7+ messages in thread
From: Kefeng Wang @ 2022-12-07  2:34 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Matthew Wilcox (Oracle), linux-kernel, Kefeng Wang

Using folios instead of pages removes several calls to compound_head().
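
For context: the page-based flag helpers go through compound_head() on
every call to resolve a possible tail page to its head page, while the
folio variants operate on the folio directly. A simplified sketch of the
difference (not the exact page-flags.h macro expansion; the _sketch
names are illustrative only):

/* Roughly what a page-based test like PageDirty() does under the hood: */
static inline int page_test_dirty_sketch(struct page *page)
{
	/* must first resolve a possible tail page to its head page */
	return test_bit(PG_dirty, &compound_head(page)->flags);
}

/* The folio variant already refers to the whole (head) folio: */
static inline bool folio_test_dirty_sketch(struct folio *folio)
{
	return test_bit(PG_dirty, &folio->flags);
}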

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index abe6cfd92ffa..6e76c770529b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1603,7 +1603,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 {
 	spinlock_t *ptl;
 	pmd_t orig_pmd;
-	struct page *page;
+	struct folio *folio;
 	struct mm_struct *mm = tlb->mm;
 	bool ret = false;
 
@@ -1623,15 +1623,15 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		goto out;
 	}
 
-	page = pmd_page(orig_pmd);
+	folio = pfn_folio(pmd_pfn(orig_pmd));
 	/*
-	 * If other processes are mapping this page, we couldn't discard
-	 * the page unless they all do MADV_FREE so let's skip the page.
+	 * If other processes are mapping this folio, we couldn't discard
+	 * the folio unless they all do MADV_FREE so let's skip the folio.
 	 */
-	if (total_mapcount(page) != 1)
+	if (folio_mapcount(folio) != 1)
 		goto out;
 
-	if (!trylock_page(page))
+	if (!folio_trylock(folio))
 		goto out;
 
 	/*
@@ -1639,17 +1639,17 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	 * will deactivate only them.
 	 */
 	if (next - addr != HPAGE_PMD_SIZE) {
-		get_page(page);
+		folio_get(folio);
 		spin_unlock(ptl);
-		split_huge_page(page);
-		unlock_page(page);
-		put_page(page);
+		split_folio(folio);
+		folio_unlock(folio);
+		folio_put(folio);
 		goto out_unlocked;
 	}
 
-	if (PageDirty(page))
-		ClearPageDirty(page);
-	unlock_page(page);
+	if (folio_test_dirty(folio))
+		folio_clear_dirty(folio);
+	folio_unlock(folio);
 
 	if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
 		pmdp_invalidate(vma, addr, pmd);
@@ -1660,7 +1660,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
 	}
 
-	mark_page_lazyfree(page);
+	mark_page_lazyfree(&folio->page);
 	ret = true;
 out:
 	spin_unlock(ptl);
-- 
2.35.3



* [PATCH 2/2] mm: swap: Convert mark_page_lazyfree() to mark_folio_lazyfree()
  2022-12-07  2:34 [PATCH 1/2] mm: huge_memory: Convert madvise_free_huge_pmd to use a folio Kefeng Wang
@ 2022-12-07  2:34 ` Kefeng Wang
  2022-12-08 21:32   ` Vishal Moola
  2022-12-09  2:06   ` [PATCH v2] mm: swap: Convert mark_page_lazyfree() to folio_mark_lazyfree() Kefeng Wang
  2022-12-12 20:00 ` [PATCH 1/2] mm: huge_memory: Convert madvise_free_huge_pmd to use a folio Vishal Moola
  1 sibling, 2 replies; 7+ messages in thread
From: Kefeng Wang @ 2022-12-07  2:34 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Matthew Wilcox (Oracle), linux-kernel, Kefeng Wang

mark_page_lazyfree() and its callers have been converted to use folios,
so rename it and make it take a folio argument instead of calling
page_folio().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/swap.h |  2 +-
 mm/huge_memory.c     |  2 +-
 mm/madvise.c         |  2 +-
 mm/swap.c            | 12 +++++-------
 4 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 2787b84eaf12..ae4e8f0d9951 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -402,7 +402,7 @@ extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
 extern void deactivate_page(struct page *page);
-extern void mark_page_lazyfree(struct page *page);
+extern void mark_folio_lazyfree(struct folio *folio);
 extern void swap_setup(void);
 
 extern void lru_cache_add_inactive_or_unevictable(struct page *page,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6e76c770529b..91e149a11f2d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1660,7 +1660,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
 	}
 
-	mark_page_lazyfree(&folio->page);
+	mark_folio_lazyfree(folio);
 	ret = true;
 out:
 	spin_unlock(ptl);
diff --git a/mm/madvise.c b/mm/madvise.c
index 09bb97f118ac..7ae1743b67c1 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -727,7 +727,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			set_pte_at(mm, addr, pte, ptent);
 			tlb_remove_tlb_entry(tlb, pte, addr);
 		}
-		mark_page_lazyfree(&folio->page);
+		mark_folio_lazyfree(folio);
 	}
 out:
 	if (nr_swap) {
diff --git a/mm/swap.c b/mm/swap.c
index 70e2063ef43a..a6046cd5f3a2 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -757,16 +757,14 @@ void deactivate_page(struct page *page)
 }
 
 /**
- * mark_page_lazyfree - make an anon page lazyfree
- * @page: page to deactivate
+ * mark_folio_lazyfree - make an anon folio lazyfree
+ * @folio: folio to deactivate
  *
- * mark_page_lazyfree() moves @page to the inactive file list.
- * This is done to accelerate the reclaim of @page.
+ * mark_folio_lazyfree() moves @folio to the inactive file list.
+ * This is done to accelerate the reclaim of @folio.
  */
-void mark_page_lazyfree(struct page *page)
+void mark_folio_lazyfree(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-
 	if (folio_test_lru(folio) && folio_test_anon(folio) &&
 	    folio_test_swapbacked(folio) && !folio_test_swapcache(folio) &&
 	    !folio_test_unevictable(folio)) {
-- 
2.35.3



* Re: [PATCH 2/2] mm: swap: Convert mark_page_lazyfree() to mark_folio_lazyfree()
  2022-12-07  2:34 ` [PATCH 2/2] mm: swap: Convert mark_page_lazyfree() to mark_folio_lazyfree() Kefeng Wang
@ 2022-12-08 21:32   ` Vishal Moola
  2022-12-09  1:15     ` Kefeng Wang
  2022-12-09  2:06   ` [PATCH v2] mm: swap: Convert mark_page_lazyfree() to folio_mark_lazyfree() Kefeng Wang
  1 sibling, 1 reply; 7+ messages in thread
From: Vishal Moola @ 2022-12-08 21:32 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: linux-mm, Andrew Morton, Matthew Wilcox (Oracle), linux-kernel

On Wed, Dec 07, 2022 at 10:34:31AM +0800, Kefeng Wang wrote:
> @@ -402,7 +402,7 @@ extern void lru_add_drain_cpu(int cpu);
>  extern void lru_add_drain_cpu_zone(struct zone *zone);
>  extern void lru_add_drain_all(void);
>  extern void deactivate_page(struct page *page);
> -extern void mark_page_lazyfree(struct page *page);
> +extern void mark_folio_lazyfree(struct folio *folio);
>  extern void swap_setup(void);

Can we rename this function to folio_mark_lazyfree() instead so it's more
consistent with the other folio functions? Also, I believe we can get rid
of the 'extern' keyword.
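
(For reference, 'extern' is redundant on a function declaration, since
functions have external linkage by default, so the two forms below are
equivalent; the folio_mark_lazyfree() name assumes the rename suggested
above:)

extern void folio_mark_lazyfree(struct folio *folio);	/* 'extern' adds nothing */
void folio_mark_lazyfree(struct folio *folio);		/* same meaning, less noise */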


* Re: [PATCH 2/2] mm: swap: Convert mark_page_lazyfree() to mark_folio_lazyfree()
  2022-12-08 21:32   ` Vishal Moola
@ 2022-12-09  1:15     ` Kefeng Wang
  0 siblings, 0 replies; 7+ messages in thread
From: Kefeng Wang @ 2022-12-09  1:15 UTC (permalink / raw)
  To: Vishal Moola
  Cc: linux-mm, Andrew Morton, Matthew Wilcox (Oracle), linux-kernel


On 2022/12/9 5:32, Vishal Moola wrote:
> On Wed, Dec 07, 2022 at 10:34:31AM +0800, Kefeng Wang wrote:
>> @@ -402,7 +402,7 @@ extern void lru_add_drain_cpu(int cpu);
>>   extern void lru_add_drain_cpu_zone(struct zone *zone);
>>   extern void lru_add_drain_all(void);
>>   extern void deactivate_page(struct page *page);
>> -extern void mark_page_lazyfree(struct page *page);
>> +extern void mark_folio_lazyfree(struct folio *folio);
>>   extern void swap_setup(void);
> Can we rename this function to folio_mark_lazyfree() instead so it's more
> consistent with the other folio functions? Also, I believe we can get rid
> of the 'extern' keyword.
ok, will change, thanks


* [PATCH v2] mm: swap: Convert mark_page_lazyfree() to folio_mark_lazyfree()
  2022-12-07  2:34 ` [PATCH 2/2] mm: swap: Convert mark_page_lazyfree() to mark_folio_lazyfree() Kefeng Wang
  2022-12-08 21:32   ` Vishal Moola
@ 2022-12-09  2:06   ` Kefeng Wang
  2022-12-12 20:01     ` Vishal Moola
  1 sibling, 1 reply; 7+ messages in thread
From: Kefeng Wang @ 2022-12-09  2:06 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Matthew Wilcox (Oracle), linux-kernel, Vishal Moola, Kefeng Wang

mark_page_lazyfree() and its callers have been converted to use folios,
so rename it and make it take a folio argument instead of calling
page_folio().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v2: use new name, folio_mark_lazyfree() and drop extern suggested by
    Vishal Moola

 include/linux/swap.h |  2 +-
 mm/huge_memory.c     |  2 +-
 mm/madvise.c         |  2 +-
 mm/swap.c            | 12 +++++-------
 4 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 2787b84eaf12..93f1cebd8545 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -402,7 +402,7 @@ extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
 extern void deactivate_page(struct page *page);
-extern void mark_page_lazyfree(struct page *page);
+void folio_mark_lazyfree(struct folio *folio);
 extern void swap_setup(void);
 
 extern void lru_cache_add_inactive_or_unevictable(struct page *page,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6e76c770529b..eb17c24c5fda 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1660,7 +1660,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
 	}
 
-	mark_page_lazyfree(&folio->page);
+	folio_mark_lazyfree(folio);
 	ret = true;
 out:
 	spin_unlock(ptl);
diff --git a/mm/madvise.c b/mm/madvise.c
index 87703a19bbef..72565e58e067 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -728,7 +728,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			set_pte_at(mm, addr, pte, ptent);
 			tlb_remove_tlb_entry(tlb, pte, addr);
 		}
-		mark_page_lazyfree(&folio->page);
+		folio_mark_lazyfree(folio);
 	}
 out:
 	if (nr_swap) {
diff --git a/mm/swap.c b/mm/swap.c
index 70e2063ef43a..5e5eba186930 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -757,16 +757,14 @@ void deactivate_page(struct page *page)
 }
 
 /**
- * mark_page_lazyfree - make an anon page lazyfree
- * @page: page to deactivate
+ * folio_mark_lazyfree - make an anon folio lazyfree
+ * @folio: folio to deactivate
  *
- * mark_page_lazyfree() moves @page to the inactive file list.
- * This is done to accelerate the reclaim of @page.
+ * folio_mark_lazyfree() moves @folio to the inactive file list.
+ * This is done to accelerate the reclaim of @folio.
  */
-void mark_page_lazyfree(struct page *page)
+void folio_mark_lazyfree(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-
 	if (folio_test_lru(folio) && folio_test_anon(folio) &&
 	    folio_test_swapbacked(folio) && !folio_test_swapcache(folio) &&
 	    !folio_test_unevictable(folio)) {
-- 
2.35.3



* Re: [PATCH 1/2] mm: huge_memory: Convert madvise_free_huge_pmd to use a folio
  2022-12-07  2:34 [PATCH 1/2] mm: huge_memory: Convert madvise_free_huge_pmd to use a folio Kefeng Wang
  2022-12-07  2:34 ` [PATCH 2/2] mm: swap: Convert mark_page_lazyfree() to mark_folio_lazyfree() Kefeng Wang
@ 2022-12-12 20:00 ` Vishal Moola
  1 sibling, 0 replies; 7+ messages in thread
From: Vishal Moola @ 2022-12-12 20:00 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: linux-mm, Andrew Morton, Matthew Wilcox (Oracle), linux-kernel

On Wed, Dec 07, 2022 at 10:34:30AM +0800, Kefeng Wang wrote:
> Using folios instead of pages removes several calls to compound_head().
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
 


* Re: [PATCH v2] mm: swap: Convert mark_page_lazyfree() to folio_mark_lazyfree()
  2022-12-09  2:06   ` [PATCH v2] mm: swap: Convert mark_page_lazyfree() to folio_mark_lazyfree() Kefeng Wang
@ 2022-12-12 20:01     ` Vishal Moola
  0 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola @ 2022-12-12 20:01 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: linux-mm, Andrew Morton, Matthew Wilcox (Oracle), linux-kernel

On Fri, Dec 09, 2022 at 10:06:18AM +0800, Kefeng Wang wrote:
> mark_page_lazyfree() and its callers have been converted to use folios,
> so rename it and make it take a folio argument instead of calling
> page_folio().
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

 

