* [PATCH] mm/page_alloc: simplify kmem cgroup charge/uncharge code
@ 2020-12-07 14:22 Hui Su
2020-12-07 14:42 ` [External] " Muchun Song
2020-12-07 17:28 ` Shakeel Butt
0 siblings, 2 replies; 6+ messages in thread
From: Hui Su @ 2020-12-07 14:22 UTC (permalink / raw)
To: akpm, shakeelb, linux-mm, linux-kernel; +Cc: songmuchun
Since commit 60cd4bcd6238 ("memcg: localize memcg_kmem_enabled()
check"), the memcg_kmem_charge_page() and memcg_kmem_uncharge_page()
wrappers check memcg_kmem_enabled() internally, so callers no longer
need to do so explicitly.
Signed-off-by: Hui Su <sh_def@163.com>
---
mm/page_alloc.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eaa227a479e4..dc990a899ded 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1214,8 +1214,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
* Do not let hwpoison pages hit pcplists/buddy
* Untie memcg state and reset page's owner
*/
- if (memcg_kmem_enabled() && PageKmemcg(page))
- __memcg_kmem_uncharge_page(page, order);
+ if (PageKmemcg(page))
+ memcg_kmem_uncharge_page(page, order);
reset_page_owner(page, order);
return false;
}
@@ -1244,8 +1244,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
}
if (PageMappingFlags(page))
page->mapping = NULL;
- if (memcg_kmem_enabled() && PageKmemcg(page))
- __memcg_kmem_uncharge_page(page, order);
+ if (PageKmemcg(page))
+ memcg_kmem_uncharge_page(page, order);
if (check_free)
bad += check_free_page(page);
if (bad)
@@ -4965,8 +4965,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
page = __alloc_pages_slowpath(alloc_mask, order, &ac);
out:
- if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
- unlikely(__memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
+ if ((gfp_mask & __GFP_ACCOUNT) && page &&
+ unlikely(memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
__free_pages(page, order);
page = NULL;
}
--
2.29.2
* Re: [External] [PATCH] mm/page_alloc: simplify kmem cgroup charge/uncharge code
2020-12-07 14:22 [PATCH] mm/page_alloc: simplify kmem cgroup charge/uncharge code Hui Su
@ 2020-12-07 14:42 ` Muchun Song
2020-12-07 17:28 ` Shakeel Butt
1 sibling, 0 replies; 6+ messages in thread
From: Muchun Song @ 2020-12-07 14:42 UTC (permalink / raw)
To: Hui Su; +Cc: Andrew Morton, Shakeel Butt, Linux Memory Management List, LKML
On Mon, Dec 7, 2020 at 10:22 PM Hui Su <sh_def@163.com> wrote:
>
> Since commit 60cd4bcd6238 ("memcg: localize memcg_kmem_enabled()
> check"), the memcg_kmem_charge_page() and memcg_kmem_uncharge_page()
> wrappers check memcg_kmem_enabled() internally, so callers no longer
> need to do so explicitly.
>
> Signed-off-by: Hui Su <sh_def@163.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> ---
> mm/page_alloc.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index eaa227a479e4..dc990a899ded 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1214,8 +1214,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
> * Do not let hwpoison pages hit pcplists/buddy
> * Untie memcg state and reset page's owner
> */
> - if (memcg_kmem_enabled() && PageKmemcg(page))
> - __memcg_kmem_uncharge_page(page, order);
> + if (PageKmemcg(page))
> + memcg_kmem_uncharge_page(page, order);
> reset_page_owner(page, order);
> return false;
> }
> @@ -1244,8 +1244,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
> }
> if (PageMappingFlags(page))
> page->mapping = NULL;
> - if (memcg_kmem_enabled() && PageKmemcg(page))
> - __memcg_kmem_uncharge_page(page, order);
> + if (PageKmemcg(page))
> + memcg_kmem_uncharge_page(page, order);
> if (check_free)
> bad += check_free_page(page);
> if (bad)
> @@ -4965,8 +4965,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
> page = __alloc_pages_slowpath(alloc_mask, order, &ac);
>
> out:
> - if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
> - unlikely(__memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
> + if ((gfp_mask & __GFP_ACCOUNT) && page &&
> + unlikely(memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
> __free_pages(page, order);
> page = NULL;
> }
> --
> 2.29.2
>
>
--
Yours,
Muchun
* Re: [PATCH] mm/page_alloc: simplify kmem cgroup charge/uncharge code
2020-12-07 14:22 [PATCH] mm/page_alloc: simplify kmem cgroup charge/uncharge code Hui Su
2020-12-07 14:42 ` [External] " Muchun Song
@ 2020-12-07 17:28 ` Shakeel Butt
[not found] ` <20201208060747.GA56968@rlk>
1 sibling, 1 reply; 6+ messages in thread
From: Shakeel Butt @ 2020-12-07 17:28 UTC (permalink / raw)
To: Hui Su; +Cc: Andrew Morton, Linux MM, LKML, Muchun Song
On Mon, Dec 7, 2020 at 6:22 AM Hui Su <sh_def@163.com> wrote:
>
> Since commit 60cd4bcd6238 ("memcg: localize memcg_kmem_enabled()
> check"), the memcg_kmem_charge_page() and memcg_kmem_uncharge_page()
> wrappers check memcg_kmem_enabled() internally, so callers no longer
> need to do so explicitly.
>
> Signed-off-by: Hui Su <sh_def@163.com>
> ---
> mm/page_alloc.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index eaa227a479e4..dc990a899ded 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1214,8 +1214,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
> * Do not let hwpoison pages hit pcplists/buddy
> * Untie memcg state and reset page's owner
> */
> - if (memcg_kmem_enabled() && PageKmemcg(page))
> - __memcg_kmem_uncharge_page(page, order);
> + if (PageKmemcg(page))
> + memcg_kmem_uncharge_page(page, order);
> reset_page_owner(page, order);
> return false;
> }
> @@ -1244,8 +1244,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
> }
> if (PageMappingFlags(page))
> page->mapping = NULL;
> - if (memcg_kmem_enabled() && PageKmemcg(page))
> - __memcg_kmem_uncharge_page(page, order);
> + if (PageKmemcg(page))
> + memcg_kmem_uncharge_page(page, order);
> if (check_free)
> bad += check_free_page(page);
> if (bad)
> @@ -4965,8 +4965,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
> page = __alloc_pages_slowpath(alloc_mask, order, &ac);
>
> out:
> - if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
> - unlikely(__memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
> + if ((gfp_mask & __GFP_ACCOUNT) && page &&
> + unlikely(memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
> __free_pages(page, order);
> page = NULL;
> }
The reason to keep the __memcg_kmem_[un]charge_page() functions is that
they are called in a very hot path. Can you please check the performance
impact of your change, and whether the generated code is actually the
same or different?
end of thread, other threads:[~2020-12-09 18:15 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-07 14:22 [PATCH] mm/page_alloc: simplify kmem cgroup charge/uncharge code Hui Su
2020-12-07 14:42 ` [External] " Muchun Song
2020-12-07 17:28 ` Shakeel Butt
[not found] ` <20201208060747.GA56968@rlk>
2020-12-08 17:12 ` Shakeel Butt
2020-12-09 16:29 ` Michal Hocko
2020-12-09 18:15 ` Shakeel Butt