* [PATCH] mm/khugepaged: Cleanup memcg uncharge for failure path
@ 2023-03-03 15:12 Peter Xu
2023-03-03 15:22 ` Zach O'Keefe
2023-03-03 19:00 ` Yang Shi
0 siblings, 2 replies; 3+ messages in thread
From: Peter Xu @ 2023-03-03 15:12 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: David Stevens, Andrew Morton, Yang Shi, peterx, Johannes Weiner,
Zach O'Keefe
Explicit memcg uncharging is not needed when the memcg accounting has the
same lifespan as the page/folio. That is now the case for khugepaged
after Yang & Zach's recent rework: the hpage is allocated afresh for each
collapse rather than being cached.
Clean up the explicit memcg uncharge in the khugepaged failure path and
leave that to put_page().
Suggested-by: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
mm/khugepaged.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 941d1c7ea910..dd5a7d9bc593 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1230,10 +1230,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
out_up_write:
mmap_write_unlock(mm);
out_nolock:
- if (hpage) {
- mem_cgroup_uncharge(page_folio(hpage));
+ if (hpage)
put_page(hpage);
- }
trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
return result;
}
@@ -2250,10 +2248,8 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
unlock_page(hpage);
out:
VM_BUG_ON(!list_empty(&pagelist));
- if (hpage) {
- mem_cgroup_uncharge(page_folio(hpage));
+ if (hpage)
put_page(hpage);
- }
trace_mm_khugepaged_collapse_file(mm, hpage, index, is_shmem, addr, file, nr, result);
return result;
--
2.39.1
* Re: [PATCH] mm/khugepaged: Cleanup memcg uncharge for failure path
2023-03-03 15:12 [PATCH] mm/khugepaged: Cleanup memcg uncharge for failure path Peter Xu
@ 2023-03-03 15:22 ` Zach O'Keefe
2023-03-03 19:00 ` Yang Shi
1 sibling, 0 replies; 3+ messages in thread
From: Zach O'Keefe @ 2023-03-03 15:22 UTC (permalink / raw)
To: Peter Xu
Cc: linux-mm, linux-kernel, David Stevens, Andrew Morton, Yang Shi,
Johannes Weiner
Thanks Peter!
On Mar 03 10:12, Peter Xu wrote:
> Explicit memcg uncharging is not needed when the memcg accounting has the
> same lifespan as the page/folio. That is now the case for khugepaged
> after Yang & Zach's recent rework: the hpage is allocated afresh for each
> collapse rather than being cached.
>
> Clean up the explicit memcg uncharge in the khugepaged failure path and
> leave that to put_page().
>
> Suggested-by: Zach O'Keefe <zokeefe@google.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
> ---
> mm/khugepaged.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 941d1c7ea910..dd5a7d9bc593 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1230,10 +1230,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> out_up_write:
> mmap_write_unlock(mm);
> out_nolock:
> - if (hpage) {
> - mem_cgroup_uncharge(page_folio(hpage));
> + if (hpage)
> put_page(hpage);
> - }
> trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
> return result;
> }
> @@ -2250,10 +2248,8 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> unlock_page(hpage);
> out:
> VM_BUG_ON(!list_empty(&pagelist));
> - if (hpage) {
> - mem_cgroup_uncharge(page_folio(hpage));
> + if (hpage)
> put_page(hpage);
> - }
>
> trace_mm_khugepaged_collapse_file(mm, hpage, index, is_shmem, addr, file, nr, result);
> return result;
> --
> 2.39.1
>
* Re: [PATCH] mm/khugepaged: Cleanup memcg uncharge for failure path
2023-03-03 15:12 [PATCH] mm/khugepaged: Cleanup memcg uncharge for failure path Peter Xu
2023-03-03 15:22 ` Zach O'Keefe
@ 2023-03-03 19:00 ` Yang Shi
1 sibling, 0 replies; 3+ messages in thread
From: Yang Shi @ 2023-03-03 19:00 UTC (permalink / raw)
To: Peter Xu
Cc: linux-mm, linux-kernel, David Stevens, Andrew Morton,
Johannes Weiner, Zach O'Keefe
On Fri, Mar 3, 2023 at 7:12 AM Peter Xu <peterx@redhat.com> wrote:
>
> Explicit memcg uncharging is not needed when the memcg accounting has the
> same lifespan as the page/folio. That is now the case for khugepaged
> after Yang & Zach's recent rework: the hpage is allocated afresh for each
> collapse rather than being cached.
>
> Clean up the explicit memcg uncharge in the khugepaged failure path and
> leave that to put_page().
Thanks for the cleanup. Reviewed-by: Yang Shi <shy828301@gmail.com>
>
> Suggested-by: Zach O'Keefe <zokeefe@google.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> mm/khugepaged.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 941d1c7ea910..dd5a7d9bc593 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1230,10 +1230,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> out_up_write:
> mmap_write_unlock(mm);
> out_nolock:
> - if (hpage) {
> - mem_cgroup_uncharge(page_folio(hpage));
> + if (hpage)
> put_page(hpage);
> - }
> trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
> return result;
> }
> @@ -2250,10 +2248,8 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> unlock_page(hpage);
> out:
> VM_BUG_ON(!list_empty(&pagelist));
> - if (hpage) {
> - mem_cgroup_uncharge(page_folio(hpage));
> + if (hpage)
> put_page(hpage);
> - }
>
> trace_mm_khugepaged_collapse_file(mm, hpage, index, is_shmem, addr, file, nr, result);
> return result;
> --
> 2.39.1
>