* [PATCH v2 0/6] mm: cleanup and use more folio in page fault
@ 2023-11-13 15:22 Kefeng Wang
2023-11-13 15:22 ` [PATCH v2 1/6] mm: ksm: use more folio api in ksm_might_need_to_copy() Kefeng Wang
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: Kefeng Wang @ 2023-11-13 15:22 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Sidhartha Kumar, Kefeng Wang
Rename page_copy_prealloc() to folio_prealloc() so it can be reused
by more functions, and do more folio conversion in the page fault paths.
v2:
- add a folio_test_large() check in ksm_might_need_to_copy() and
replace page->index with folio->index, per David and Matthew
- add Sidhartha's Reviewed-by
Kefeng Wang (6):
mm: ksm: use more folio api in ksm_might_need_to_copy()
mm: memory: use a folio in validate_page_before_insert()
mm: memory: rename page_copy_prealloc() to folio_prealloc()
mm: memory: use a folio in do_cow_page()
mm: memory: use folio_prealloc() in wp_page_copy()
mm: memory: use folio_prealloc() in do_anonymous_page()
include/linux/ksm.h | 4 +--
mm/ksm.c | 39 ++++++++++++------------
mm/memory.c | 72 +++++++++++++++++++--------------------------
3 files changed, 53 insertions(+), 62 deletions(-)
--
2.27.0
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v2 1/6] mm: ksm: use more folio api in ksm_might_need_to_copy()
2023-11-13 15:22 [PATCH v2 0/6] mm: cleanup and use more folio in page fault Kefeng Wang
@ 2023-11-13 15:22 ` Kefeng Wang
2023-11-13 15:22 ` [PATCH v2 2/6] mm: memory: use a folio in validate_page_before_insert() Kefeng Wang
` (4 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2023-11-13 15:22 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Sidhartha Kumar, Kefeng Wang
Since KSM only supports normal pages, there is no swap-in/out of KSM
large folios either, so add a large folio check in
ksm_might_need_to_copy(), and convert page->index to folio->index as
page->index is going away.
Then convert ksm_might_need_to_copy() to use more of the folio API,
which saves nine compound_head() calls; also shorten 'address' to
'addr' to stay within the maximum line length.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
include/linux/ksm.h | 4 ++--
mm/ksm.c | 39 +++++++++++++++++++++------------------
2 files changed, 23 insertions(+), 20 deletions(-)
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index c2dd786a30e1..4643d5244e77 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -77,7 +77,7 @@ static inline void ksm_exit(struct mm_struct *mm)
* but what if the vma was unmerged while the page was swapped out?
*/
struct page *ksm_might_need_to_copy(struct page *page,
- struct vm_area_struct *vma, unsigned long address);
+ struct vm_area_struct *vma, unsigned long addr);
void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
@@ -130,7 +130,7 @@ static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
}
static inline struct page *ksm_might_need_to_copy(struct page *page,
- struct vm_area_struct *vma, unsigned long address)
+ struct vm_area_struct *vma, unsigned long addr)
{
return page;
}
diff --git a/mm/ksm.c b/mm/ksm.c
index 7efcc68ccc6e..e9d72254e66c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2876,48 +2876,51 @@ void __ksm_exit(struct mm_struct *mm)
}
struct page *ksm_might_need_to_copy(struct page *page,
- struct vm_area_struct *vma, unsigned long address)
+ struct vm_area_struct *vma, unsigned long addr)
{
struct folio *folio = page_folio(page);
struct anon_vma *anon_vma = folio_anon_vma(folio);
- struct page *new_page;
+ struct folio *new_folio;
- if (PageKsm(page)) {
- if (page_stable_node(page) &&
+ if (folio_test_large(folio))
+ return page;
+
+ if (folio_test_ksm(folio)) {
+ if (folio_stable_node(folio) &&
!(ksm_run & KSM_RUN_UNMERGE))
return page; /* no need to copy it */
} else if (!anon_vma) {
return page; /* no need to copy it */
- } else if (page->index == linear_page_index(vma, address) &&
+ } else if (folio->index == linear_page_index(vma, addr) &&
anon_vma->root == vma->anon_vma->root) {
return page; /* still no need to copy it */
}
if (PageHWPoison(page))
return ERR_PTR(-EHWPOISON);
- if (!PageUptodate(page))
+ if (!folio_test_uptodate(folio))
return page; /* let do_swap_page report the error */
- new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
- if (new_page &&
- mem_cgroup_charge(page_folio(new_page), vma->vm_mm, GFP_KERNEL)) {
- put_page(new_page);
- new_page = NULL;
+ new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr, false);
+ if (new_folio &&
+ mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL)) {
+ folio_put(new_folio);
+ new_folio = NULL;
}
- if (new_page) {
- if (copy_mc_user_highpage(new_page, page, address, vma)) {
- put_page(new_page);
+ if (new_folio) {
+ if (copy_mc_user_highpage(&new_folio->page, page, addr, vma)) {
+ folio_put(new_folio);
memory_failure_queue(page_to_pfn(page), 0);
return ERR_PTR(-EHWPOISON);
}
- SetPageDirty(new_page);
- __SetPageUptodate(new_page);
- __SetPageLocked(new_page);
+ folio_set_dirty(new_folio);
+ __folio_mark_uptodate(new_folio);
+ __folio_set_locked(new_folio);
#ifdef CONFIG_SWAP
count_vm_event(KSM_SWPIN_COPY);
#endif
}
- return new_page;
+ return new_folio ? &new_folio->page : NULL;
}
void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
--
2.27.0
* [PATCH v2 2/6] mm: memory: use a folio in validate_page_before_insert()
2023-11-13 15:22 [PATCH v2 0/6] mm: cleanup and use more folio in page fault Kefeng Wang
2023-11-13 15:22 ` [PATCH v2 1/6] mm: ksm: use more folio api in ksm_might_need_to_copy() Kefeng Wang
@ 2023-11-13 15:22 ` Kefeng Wang
2023-11-13 19:30 ` Vishal Moola
2023-11-13 15:22 ` [PATCH v2 3/6] mm: memory: rename page_copy_prealloc() to folio_prealloc() Kefeng Wang
` (3 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Kefeng Wang @ 2023-11-13 15:22 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Sidhartha Kumar, Kefeng Wang
Use a folio in validate_page_before_insert() to save two
compound_head() calls.
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/memory.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index c32954e16b28..379354b35891 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1841,9 +1841,12 @@ pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
static int validate_page_before_insert(struct page *page)
{
- if (PageAnon(page) || PageSlab(page) || page_has_type(page))
+ struct folio *folio = page_folio(page);
+
+ if (folio_test_anon(folio) || folio_test_slab(folio) ||
+ page_has_type(page))
return -EINVAL;
- flush_dcache_page(page);
+ flush_dcache_folio(folio);
return 0;
}
--
2.27.0
* [PATCH v2 3/6] mm: memory: rename page_copy_prealloc() to folio_prealloc()
2023-11-13 15:22 [PATCH v2 0/6] mm: cleanup and use more folio in page fault Kefeng Wang
2023-11-13 15:22 ` [PATCH v2 1/6] mm: ksm: use more folio api in ksm_might_need_to_copy() Kefeng Wang
2023-11-13 15:22 ` [PATCH v2 2/6] mm: memory: use a folio in validate_page_before_insert() Kefeng Wang
@ 2023-11-13 15:22 ` Kefeng Wang
2023-11-13 19:31 ` Vishal Moola
2023-11-13 15:22 ` [PATCH v2 4/6] mm: memory: use a folio in do_cow_page() Kefeng Wang
` (2 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Kefeng Wang @ 2023-11-13 15:22 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Sidhartha Kumar, Kefeng Wang
Let's rename page_copy_prealloc() to folio_prealloc() so it can be
reused in more functions. Since callers may want the new page zeroed,
pass a new need_zero argument and call vma_alloc_zeroed_movable_folio()
when need_zero is true.
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/memory.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 379354b35891..d85df1c59f52 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -992,12 +992,17 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
return 0;
}
-static inline struct folio *page_copy_prealloc(struct mm_struct *src_mm,
- struct vm_area_struct *vma, unsigned long addr)
+static inline struct folio *folio_prealloc(struct mm_struct *src_mm,
+ struct vm_area_struct *vma, unsigned long addr, bool need_zero)
{
struct folio *new_folio;
- new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr, false);
+ if (need_zero)
+ new_folio = vma_alloc_zeroed_movable_folio(vma, addr);
+ else
+ new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
+ addr, false);
+
if (!new_folio)
return NULL;
@@ -1129,7 +1134,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
} else if (ret == -EBUSY) {
goto out;
} else if (ret == -EAGAIN) {
- prealloc = page_copy_prealloc(src_mm, src_vma, addr);
+ prealloc = folio_prealloc(src_mm, src_vma, addr, false);
if (!prealloc)
return -ENOMEM;
} else if (ret) {
--
2.27.0
* [PATCH v2 4/6] mm: memory: use a folio in do_cow_page()
2023-11-13 15:22 [PATCH v2 0/6] mm: cleanup and use more folio in page fault Kefeng Wang
` (2 preceding siblings ...)
2023-11-13 15:22 ` [PATCH v2 3/6] mm: memory: rename page_copy_prealloc() to folio_prealloc() Kefeng Wang
@ 2023-11-13 15:22 ` Kefeng Wang
2023-11-13 19:51 ` Vishal Moola
2023-11-13 15:22 ` [PATCH v2 5/6] mm: memory: use folio_prealloc() in wp_page_copy() Kefeng Wang
2023-11-13 15:22 ` [PATCH v2 6/6] mm: memory: use folio_prealloc() in do_anonymous_page() Kefeng Wang
5 siblings, 1 reply; 11+ messages in thread
From: Kefeng Wang @ 2023-11-13 15:22 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Sidhartha Kumar, Kefeng Wang
Use the folio_prealloc() helper and convert to use a folio in
do_cow_page(), which saves five compound_head() calls.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/memory.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index d85df1c59f52..f350ab2a324f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4653,6 +4653,7 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
static vm_fault_t do_cow_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
+ struct folio *folio;
vm_fault_t ret;
ret = vmf_can_call_fault(vmf);
@@ -4661,16 +4662,11 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
if (ret)
return ret;
- vmf->cow_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vmf->address);
- if (!vmf->cow_page)
+ folio = folio_prealloc(vma->vm_mm, vma, vmf->address, false);
+ if (!folio)
return VM_FAULT_OOM;
- if (mem_cgroup_charge(page_folio(vmf->cow_page), vma->vm_mm,
- GFP_KERNEL)) {
- put_page(vmf->cow_page);
- return VM_FAULT_OOM;
- }
- folio_throttle_swaprate(page_folio(vmf->cow_page), GFP_KERNEL);
+ vmf->cow_page = &folio->page;
ret = __do_fault(vmf);
if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
@@ -4679,7 +4675,7 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
return ret;
copy_user_highpage(vmf->cow_page, vmf->page, vmf->address, vma);
- __SetPageUptodate(vmf->cow_page);
+ __folio_mark_uptodate(folio);
ret |= finish_fault(vmf);
unlock_page(vmf->page);
@@ -4688,7 +4684,7 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
goto uncharge_out;
return ret;
uncharge_out:
- put_page(vmf->cow_page);
+ folio_put(folio);
return ret;
}
--
2.27.0
* [PATCH v2 5/6] mm: memory: use folio_prealloc() in wp_page_copy()
2023-11-13 15:22 [PATCH v2 0/6] mm: cleanup and use more folio in page fault Kefeng Wang
` (3 preceding siblings ...)
2023-11-13 15:22 ` [PATCH v2 4/6] mm: memory: use a folio in do_cow_page() Kefeng Wang
@ 2023-11-13 15:22 ` Kefeng Wang
2023-11-13 20:07 ` Vishal Moola
2023-11-13 15:22 ` [PATCH v2 6/6] mm: memory: use folio_prealloc() in do_anonymous_page() Kefeng Wang
5 siblings, 1 reply; 11+ messages in thread
From: Kefeng Wang @ 2023-11-13 15:22 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Sidhartha Kumar, Kefeng Wang
Use folio_prealloc() helper to simplify code a bit.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/memory.c | 22 +++++++---------------
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index f350ab2a324f..03226566bf8e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3114,6 +3114,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
int page_copied = 0;
struct mmu_notifier_range range;
vm_fault_t ret;
+ bool pfn_is_zero;
delayacct_wpcopy_start();
@@ -3123,16 +3124,13 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
if (unlikely(ret))
goto out;
- if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
- new_folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
- if (!new_folio)
- goto oom;
- } else {
+ pfn_is_zero = is_zero_pfn(pte_pfn(vmf->orig_pte));
+ new_folio = folio_prealloc(mm, vma, vmf->address, pfn_is_zero);
+ if (!new_folio)
+ goto oom;
+
+ if (!pfn_is_zero) {
int err;
- new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
- vmf->address, false);
- if (!new_folio)
- goto oom;
err = __wp_page_copy_user(&new_folio->page, vmf->page, vmf);
if (err) {
@@ -3153,10 +3151,6 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
kmsan_copy_page_meta(&new_folio->page, vmf->page);
}
- if (mem_cgroup_charge(new_folio, mm, GFP_KERNEL))
- goto oom_free_new;
- folio_throttle_swaprate(new_folio, GFP_KERNEL);
-
__folio_mark_uptodate(new_folio);
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
@@ -3255,8 +3249,6 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
delayacct_wpcopy_end();
return 0;
-oom_free_new:
- folio_put(new_folio);
oom:
ret = VM_FAULT_OOM;
out:
--
2.27.0
* [PATCH v2 6/6] mm: memory: use folio_prealloc() in do_anonymous_page()
2023-11-13 15:22 [PATCH v2 0/6] mm: cleanup and use more folio in page fault Kefeng Wang
` (4 preceding siblings ...)
2023-11-13 15:22 ` [PATCH v2 5/6] mm: memory: use folio_prealloc() in wp_page_copy() Kefeng Wang
@ 2023-11-13 15:22 ` Kefeng Wang
5 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2023-11-13 15:22 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Sidhartha Kumar, Kefeng Wang
Use folio_prealloc() to simplify code a bit.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/memory.c | 14 +++-----------
1 file changed, 3 insertions(+), 11 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 03226566bf8e..4995efbb6e83 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4172,14 +4172,10 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
/* Allocate our own private page. */
if (unlikely(anon_vma_prepare(vma)))
- goto oom;
- folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
+ return VM_FAULT_OOM;
+ folio = folio_prealloc(vma->vm_mm, vma, vmf->address, true);
if (!folio)
- goto oom;
-
- if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
- goto oom_free_page;
- folio_throttle_swaprate(folio, GFP_KERNEL);
+ return VM_FAULT_OOM;
/*
* The memory barrier inside __folio_mark_uptodate makes sure that
@@ -4230,10 +4226,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
release:
folio_put(folio);
goto unlock;
-oom_free_page:
- folio_put(folio);
-oom:
- return VM_FAULT_OOM;
}
/*
--
2.27.0
* Re: [PATCH v2 2/6] mm: memory: use a folio in validate_page_before_insert()
2023-11-13 15:22 ` [PATCH v2 2/6] mm: memory: use a folio in validate_page_before_insert() Kefeng Wang
@ 2023-11-13 19:30 ` Vishal Moola
0 siblings, 0 replies; 11+ messages in thread
From: Vishal Moola @ 2023-11-13 19:30 UTC (permalink / raw)
To: Kefeng Wang
Cc: Andrew Morton, linux-kernel, linux-mm, Matthew Wilcox,
David Hildenbrand, Sidhartha Kumar
On Mon, Nov 13, 2023 at 11:22:18PM +0800, Kefeng Wang wrote:
> Use a folio in validate_page_before_insert() to save two
> compound_head() calls.
>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> mm/memory.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index c32954e16b28..379354b35891 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1841,9 +1841,12 @@ pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
>
> static int validate_page_before_insert(struct page *page)
> {
> - if (PageAnon(page) || PageSlab(page) || page_has_type(page))
> + struct folio *folio = page_folio(page);
> +
> + if (folio_test_anon(folio) || folio_test_slab(folio) ||
> + page_has_type(page))
> return -EINVAL;
> - flush_dcache_page(page);
> + flush_dcache_folio(folio);
> return 0;
> }
>
> --
> 2.27.0
* Re: [PATCH v2 3/6] mm: memory: rename page_copy_prealloc() to folio_prealloc()
2023-11-13 15:22 ` [PATCH v2 3/6] mm: memory: rename page_copy_prealloc() to folio_prealloc() Kefeng Wang
@ 2023-11-13 19:31 ` Vishal Moola
0 siblings, 0 replies; 11+ messages in thread
From: Vishal Moola @ 2023-11-13 19:31 UTC (permalink / raw)
To: Kefeng Wang
Cc: Andrew Morton, linux-kernel, linux-mm, Matthew Wilcox,
David Hildenbrand, Sidhartha Kumar
On Mon, Nov 13, 2023 at 11:22:19PM +0800, Kefeng Wang wrote:
> Let's rename page_copy_prealloc() to folio_prealloc() so it can be
> reused in more functions. Since callers may want the new page zeroed,
> pass a new need_zero argument and call vma_alloc_zeroed_movable_folio()
> when need_zero is true.
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> mm/memory.c | 13 +++++++++----
> 1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 379354b35891..d85df1c59f52 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -992,12 +992,17 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> return 0;
> }
>
> -static inline struct folio *page_copy_prealloc(struct mm_struct *src_mm,
> - struct vm_area_struct *vma, unsigned long addr)
> +static inline struct folio *folio_prealloc(struct mm_struct *src_mm,
> + struct vm_area_struct *vma, unsigned long addr, bool need_zero)
> {
> struct folio *new_folio;
>
> - new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr, false);
> + if (need_zero)
> + new_folio = vma_alloc_zeroed_movable_folio(vma, addr);
> + else
> + new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
> + addr, false);
> +
> if (!new_folio)
> return NULL;
>
> @@ -1129,7 +1134,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> } else if (ret == -EBUSY) {
> goto out;
> } else if (ret == -EAGAIN) {
> - prealloc = page_copy_prealloc(src_mm, src_vma, addr);
> + prealloc = folio_prealloc(src_mm, src_vma, addr, false);
> if (!prealloc)
> return -ENOMEM;
> } else if (ret) {
> --
> 2.27.0
* Re: [PATCH v2 4/6] mm: memory: use a folio in do_cow_page()
2023-11-13 15:22 ` [PATCH v2 4/6] mm: memory: use a folio in do_cow_page() Kefeng Wang
@ 2023-11-13 19:51 ` Vishal Moola
0 siblings, 0 replies; 11+ messages in thread
From: Vishal Moola @ 2023-11-13 19:51 UTC (permalink / raw)
To: Kefeng Wang
Cc: Andrew Morton, linux-kernel, linux-mm, Matthew Wilcox,
David Hildenbrand, Sidhartha Kumar
On Mon, Nov 13, 2023 at 11:22:20PM +0800, Kefeng Wang wrote:
> Use the folio_prealloc() helper and convert to use a folio in
> do_cow_page(), which saves five compound_head() calls.
s/do_cow_page()/do_cow_fault()/
Aside from that,
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> mm/memory.c | 16 ++++++----------
> 1 file changed, 6 insertions(+), 10 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index d85df1c59f52..f350ab2a324f 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4653,6 +4653,7 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
> static vm_fault_t do_cow_fault(struct vm_fault *vmf)
> {
> struct vm_area_struct *vma = vmf->vma;
> + struct folio *folio;
> vm_fault_t ret;
>
> ret = vmf_can_call_fault(vmf);
> @@ -4661,16 +4662,11 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
> if (ret)
> return ret;
>
> - vmf->cow_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vmf->address);
> - if (!vmf->cow_page)
> + folio = folio_prealloc(vma->vm_mm, vma, vmf->address, false);
> + if (!folio)
> return VM_FAULT_OOM;
>
> - if (mem_cgroup_charge(page_folio(vmf->cow_page), vma->vm_mm,
> - GFP_KERNEL)) {
> - put_page(vmf->cow_page);
> - return VM_FAULT_OOM;
> - }
> - folio_throttle_swaprate(page_folio(vmf->cow_page), GFP_KERNEL);
> + vmf->cow_page = &folio->page;
>
> ret = __do_fault(vmf);
> if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
> @@ -4679,7 +4675,7 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
> return ret;
>
> copy_user_highpage(vmf->cow_page, vmf->page, vmf->address, vma);
> - __SetPageUptodate(vmf->cow_page);
> + __folio_mark_uptodate(folio);
>
> ret |= finish_fault(vmf);
> unlock_page(vmf->page);
> @@ -4688,7 +4684,7 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
> goto uncharge_out;
> return ret;
> uncharge_out:
> - put_page(vmf->cow_page);
> + folio_put(folio);
> return ret;
> }
>
> --
> 2.27.0
* Re: [PATCH v2 5/6] mm: memory: use folio_prealloc() in wp_page_copy()
2023-11-13 15:22 ` [PATCH v2 5/6] mm: memory: use folio_prealloc() in wp_page_copy() Kefeng Wang
@ 2023-11-13 20:07 ` Vishal Moola
0 siblings, 0 replies; 11+ messages in thread
From: Vishal Moola @ 2023-11-13 20:07 UTC (permalink / raw)
To: Kefeng Wang
Cc: Andrew Morton, linux-kernel, linux-mm, Matthew Wilcox,
David Hildenbrand, Sidhartha Kumar
On Mon, Nov 13, 2023 at 11:22:21PM +0800, Kefeng Wang wrote:
> Use folio_prealloc() helper to simplify code a bit.
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> mm/memory.c | 22 +++++++---------------
> 1 file changed, 7 insertions(+), 15 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index f350ab2a324f..03226566bf8e 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3114,6 +3114,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> int page_copied = 0;
> struct mmu_notifier_range range;
> vm_fault_t ret;
> + bool pfn_is_zero;
>
> delayacct_wpcopy_start();
>
> @@ -3123,16 +3124,13 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> if (unlikely(ret))
> goto out;
>
> - if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
> - new_folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
> - if (!new_folio)
> - goto oom;
> - } else {
> + pfn_is_zero = is_zero_pfn(pte_pfn(vmf->orig_pte));
> + new_folio = folio_prealloc(mm, vma, vmf->address, pfn_is_zero);
> + if (!new_folio)
> + goto oom;
> +
> + if (!pfn_is_zero) {
> int err;
> - new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
> - vmf->address, false);
> - if (!new_folio)
> - goto oom;
>
> err = __wp_page_copy_user(&new_folio->page, vmf->page, vmf);
> if (err) {
> @@ -3153,10 +3151,6 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> kmsan_copy_page_meta(&new_folio->page, vmf->page);
> }
>
> - if (mem_cgroup_charge(new_folio, mm, GFP_KERNEL))
> - goto oom_free_new;
> - folio_throttle_swaprate(new_folio, GFP_KERNEL);
> -
> __folio_mark_uptodate(new_folio);
>
> mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
> @@ -3255,8 +3249,6 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
>
> delayacct_wpcopy_end();
> return 0;
> -oom_free_new:
> - folio_put(new_folio);
> oom:
> ret = VM_FAULT_OOM;
> out:
> --
> 2.27.0