linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
@ 2021-07-10  0:36 Suren Baghdasaryan
  2021-07-10  0:36 ` [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-10  0:36 UTC
  To: tj
  Cc: hannes, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, richard.weiyang, vbabka, axboe, iamjoonsoo.kim,
	david, willy, apopple, minchan, linmiaohe, linux-kernel, cgroups,
	linux-mm, kernel-team, surenb

Add mem_cgroup_disabled checks in the vmpressure, mem_cgroup_uncharge_swap
and cgroup_throttle_swaprate functions. This minimizes the memcg overhead
in the pagefault and exit_mmap paths when memcgs are disabled using the
cgroup_disable=memory command-line option.
This change results in ~2.1% overhead reduction when running the PFT test
comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
ARM64 Android device.
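
For reference, mem_cgroup_disabled() compiles down to a static-branch
test, so the disabled case costs a patched-out jump rather than a load
and compare. A simplified sketch of the check (paraphrased from
include/linux/memcontrol.h, not the verbatim definition):

  static inline bool mem_cgroup_disabled(void)
  {
          /* cgroup_subsys_enabled() expands to a static_branch_likely()
           * test on the subsystem's enabled key, flipped once at boot */
          return !cgroup_subsys_enabled(memory_cgrp_subsys);
  }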

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 3 +++
 mm/swapfile.c   | 3 +++
 mm/vmpressure.c | 7 ++++++-
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ae1f5d0cb581..a228cd51c4bd 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
+	if (mem_cgroup_disabled())
+		return;
+
 	id = swap_cgroup_record(entry, 0, nr_pages);
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1e07d1c776f2..707fa0481bb4 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 	struct swap_info_struct *si, *next;
 	int nid = page_to_nid(page);
 
+	if (mem_cgroup_disabled())
+		return;
+
 	if (!(gfp_mask & __GFP_IO))
 		return;
 
diff --git a/mm/vmpressure.c b/mm/vmpressure.c
index d69019fc3789..9b172561fded 100644
--- a/mm/vmpressure.c
+++ b/mm/vmpressure.c
@@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
 void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
 		unsigned long scanned, unsigned long reclaimed)
 {
-	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
+	struct vmpressure *vmpr;
+
+	if (mem_cgroup_disabled())
+		return;
+
+	vmpr = memcg_to_vmpressure(memcg);
 
 	/*
 	 * Here we only want to account pressure that userland is able to
-- 
2.32.0.93.g670b81a890-goog



* [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-10  0:36 [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
@ 2021-07-10  0:36 ` Suren Baghdasaryan
  2021-07-10 11:08   ` [External] " Muchun Song
                     ` (2 more replies)
  2021-07-10  0:36 ` [PATCH v3 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-10  0:36 UTC
  To: tj
  Cc: hannes, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, richard.weiyang, vbabka, axboe, iamjoonsoo.kim,
	david, willy, apopple, minchan, linmiaohe, linux-kernel, cgroups,
	linux-mm, kernel-team, surenb

Inline the mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
functions to perform the mem_cgroup_disabled static key check inline
before calling the main body of the function. This minimizes the memcg
overhead in the pagefault and exit_mmap paths when memcgs are disabled
using the cgroup_disable=memory command-line option.
This change results in ~0.4% overhead reduction when running the PFT test
comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
configuration on an 8-core ARM64 Android device.
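
The idiom, in general form (a sketch with hypothetical names, not the
actual kernel API): keep the cheap predicate in an inline wrapper so a
disabled feature costs a single patched jump at the call site instead
of a function call into the out-of-line body:

  void __do_charge(struct foo *f);        /* out of line, in foo.c */

  static inline void do_charge(struct foo *f)
  {
          if (feature_disabled())         /* static-branch test, ~free if off */
                  return;                 /* disabled: no call is made */
          __do_charge(f);                 /* slow path, only when enabled */
  }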

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
---
 include/linux/memcontrol.h | 28 +++++++++++++++++++++++++---
 mm/memcontrol.c            | 29 ++++++++++-------------------
 2 files changed, 35 insertions(+), 22 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bfe5c486f4ad..39fa88051a42 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -693,13 +693,35 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 		page_counter_read(&memcg->memory);
 }
 
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
+int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+			gfp_t gfp_mask);
+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+				    gfp_t gfp_mask)
+{
+	if (mem_cgroup_disabled())
+		return 0;
+	return __mem_cgroup_charge(page, mm, gfp_mask);
+}
+
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
 
-void mem_cgroup_uncharge(struct page *page);
-void mem_cgroup_uncharge_list(struct list_head *page_list);
+void __mem_cgroup_uncharge(struct page *page);
+static inline void mem_cgroup_uncharge(struct page *page)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge(page);
+}
+
+void __mem_cgroup_uncharge_list(struct list_head *page_list);
+static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge_list(page_list);
+}
 
 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a228cd51c4bd..cdaf7003b43d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6701,8 +6701,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 			atomic_long_read(&parent->memory.children_low_usage)));
 }
 
-static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
-			       gfp_t gfp)
+static int charge_memcg(struct page *page, struct mem_cgroup *memcg, gfp_t gfp)
 {
 	unsigned int nr_pages = thp_nr_pages(page);
 	int ret;
@@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 }
 
 /**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
+ * __mem_cgroup_charge - charge a newly allocated page to a cgroup
  * @page: page to charge
  * @mm: mm context of the victim
  * @gfp_mask: reclaim mode
@@ -6736,16 +6735,14 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
  *
  * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
+int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+			gfp_t gfp_mask)
 {
 	struct mem_cgroup *memcg;
 	int ret;
 
-	if (mem_cgroup_disabled())
-		return 0;
-
 	memcg = get_mem_cgroup_from_mm(mm);
-	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+	ret = charge_memcg(page, memcg, gfp_mask);
 	css_put(&memcg->css);
 
 	return ret;
@@ -6780,7 +6777,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 		memcg = get_mem_cgroup_from_mm(mm);
 	rcu_read_unlock();
 
-	ret = __mem_cgroup_charge(page, memcg, gfp);
+	ret = charge_memcg(page, memcg, gfp);
 
 	css_put(&memcg->css);
 	return ret;
@@ -6916,18 +6913,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 }
 
 /**
- * mem_cgroup_uncharge - uncharge a page
+ * __mem_cgroup_uncharge - uncharge a page
  * @page: page to uncharge
  *
  * Uncharge a page previously charged with mem_cgroup_charge().
  */
-void mem_cgroup_uncharge(struct page *page)
+void __mem_cgroup_uncharge(struct page *page)
 {
 	struct uncharge_gather ug;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	/* Don't touch page->lru of any random page, pre-check: */
 	if (!page_memcg(page))
 		return;
@@ -6938,20 +6932,17 @@ void mem_cgroup_uncharge(struct page *page)
 }
 
 /**
- * mem_cgroup_uncharge_list - uncharge a list of page
+ * __mem_cgroup_uncharge_list - uncharge a list of page
  * @page_list: list of pages to uncharge
  *
  * Uncharge a list of pages previously charged with
  * mem_cgroup_charge().
  */
-void mem_cgroup_uncharge_list(struct list_head *page_list)
+void __mem_cgroup_uncharge_list(struct list_head *page_list)
 {
 	struct uncharge_gather ug;
 	struct page *page;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	uncharge_gather_clear(&ug);
 	list_for_each_entry(page, page_list, lru)
 		uncharge_page(page, &ug);
-- 
2.32.0.93.g670b81a890-goog



* [PATCH v3 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config
  2021-07-10  0:36 [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
  2021-07-10  0:36 ` [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
@ 2021-07-10  0:36 ` Suren Baghdasaryan
  2021-07-10 11:19   ` [External] " Muchun Song
  2021-07-12  7:17   ` Michal Hocko
  2021-07-10  1:52 ` [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Miaohe Lin
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-10  0:36 UTC
  To: tj
  Cc: hannes, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, richard.weiyang, vbabka, axboe, iamjoonsoo.kim,
	david, willy, apopple, minchan, linmiaohe, linux-kernel, cgroups,
	linux-mm, kernel-team, surenb

Inline the mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
cgroup_throttle_swaprate functions to perform the mem_cgroup_disabled
static key check inline before calling the main body of the function.
This minimizes the memcg overhead in the pagefault and exit_mmap paths
when memcgs are disabled using the cgroup_disable=memory command-line
option.
This change results in ~1% overhead reduction when running the PFT test
comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
configuration on an 8-core ARM64 Android device.
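
For context, the static key that mem_cgroup_disabled() tests is flipped
once during early boot. Roughly (a paraphrase of the __setup handler in
kernel/cgroup/cgroup.c, with the name-matching loop elided):

  static int __init cgroup_disable(char *str)
  {
          /* for each comma-separated token matching a subsystem name
           * ("memory" here), permanently turn off its enabled key */
          static_branch_disable(cgroup_subsys_enabled_key[memory_cgrp_id]);
          return 1;
  }
  __setup("cgroup_disable=", cgroup_disable);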

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/swap.h | 26 +++++++++++++++++++++++---
 mm/memcontrol.c      | 14 ++++----------
 mm/swapfile.c        |  5 +----
 3 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 6f5a43251593..f30d26b0f71d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -721,7 +721,13 @@ static inline int mem_cgroup_swappiness(struct mem_cgroup *mem)
 #endif
 
 #if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-extern void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
+extern void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
+static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__cgroup_throttle_swaprate(page, gfp_mask);
+}
 #else
 static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
@@ -730,8 +736,22 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 
 #ifdef CONFIG_MEMCG_SWAP
 extern void mem_cgroup_swapout(struct page *page, swp_entry_t entry);
-extern int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
-extern void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
+extern int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
+static inline int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+{
+	if (mem_cgroup_disabled())
+		return 0;
+	return __mem_cgroup_try_charge_swap(page, entry);
+}
+
+extern void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
+static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge_swap(entry, nr_pages);
+}
+
 extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
 extern bool mem_cgroup_swap_full(struct page *page);
 #else
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cdaf7003b43d..0b05322836ec 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7234,7 +7234,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 }
 
 /**
- * mem_cgroup_try_charge_swap - try charging swap space for a page
+ * __mem_cgroup_try_charge_swap - try charging swap space for a page
  * @page: page being added to swap
  * @entry: swap entry to charge
  *
@@ -7242,16 +7242,13 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
  *
  * Returns 0 on success, -ENOMEM on failure.
  */
-int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 {
 	unsigned int nr_pages = thp_nr_pages(page);
 	struct page_counter *counter;
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
 
-	if (mem_cgroup_disabled())
-		return 0;
-
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
 
@@ -7287,18 +7284,15 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 }
 
 /**
- * mem_cgroup_uncharge_swap - uncharge swap space
+ * __mem_cgroup_uncharge_swap - uncharge swap space
  * @entry: swap entry to uncharge
  * @nr_pages: the amount of swap space to uncharge
  */
-void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
+void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 {
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	id = swap_cgroup_record(entry, 0, nr_pages);
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 707fa0481bb4..04a0c83f1313 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3773,14 +3773,11 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
 }
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
+void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
 	struct swap_info_struct *si, *next;
 	int nid = page_to_nid(page);
 
-	if (mem_cgroup_disabled())
-		return;
-
 	if (!(gfp_mask & __GFP_IO))
 		return;
 
-- 
2.32.0.93.g670b81a890-goog



* Re: [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
  2021-07-10  0:36 [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
  2021-07-10  0:36 ` [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
  2021-07-10  0:36 ` [PATCH v3 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
@ 2021-07-10  1:52 ` Miaohe Lin
  2021-07-10  2:40   ` Suren Baghdasaryan
  2021-07-10 10:54 ` [External] " Muchun Song
  2021-07-12  7:11 ` Michal Hocko
  4 siblings, 1 reply; 20+ messages in thread
From: Miaohe Lin @ 2021-07-10  1:52 UTC
  To: Suren Baghdasaryan
  Cc: hannes, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, richard.weiyang, vbabka, axboe, iamjoonsoo.kim,
	david, willy, apopple, minchan, linux-kernel, cgroups, linux-mm,
	kernel-team, Tejun Heo

On 2021/7/10 8:36, Suren Baghdasaryan wrote:
> Add mem_cgroup_disabled checks in the vmpressure, mem_cgroup_uncharge_swap
> and cgroup_throttle_swaprate functions. This minimizes the memcg overhead
> in the pagefault and exit_mmap paths when memcgs are disabled using the
> cgroup_disable=memory command-line option.
> This change results in ~2.1% overhead reduction when running the PFT test
> comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
> CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
> ARM64 Android device.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  mm/memcontrol.c | 3 +++
>  mm/swapfile.c   | 3 +++
>  mm/vmpressure.c | 7 ++++++-
>  3 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ae1f5d0cb581..a228cd51c4bd 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
>  	struct mem_cgroup *memcg;
>  	unsigned short id;
>  
> +	if (mem_cgroup_disabled())
> +		return;
> +
>  	id = swap_cgroup_record(entry, 0, nr_pages);
>  	rcu_read_lock();
>  	memcg = mem_cgroup_from_id(id);
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 1e07d1c776f2..707fa0481bb4 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
>  	struct swap_info_struct *si, *next;
>  	int nid = page_to_nid(page);
>  
> +	if (mem_cgroup_disabled())
> +		return;
> +

Many thanks for your patch. But I'm somewhat confused about this change.
IMO, cgroup_throttle_swaprate() is only related to blk_cgroup and it seems
irrelevant to mem_cgroup. Could you please explain this for me?

Thanks!

>  	if (!(gfp_mask & __GFP_IO))
>  		return;
>  
> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> index d69019fc3789..9b172561fded 100644
> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
>  void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
>  		unsigned long scanned, unsigned long reclaimed)
>  {
> -	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
> +	struct vmpressure *vmpr;
> +
> +	if (mem_cgroup_disabled())
> +		return;
> +
> +	vmpr = memcg_to_vmpressure(memcg);
>  
>  	/*
>  	 * Here we only want to account pressure that userland is able to
> 



* Re: [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
  2021-07-10  1:52 ` [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Miaohe Lin
@ 2021-07-10  2:40   ` Suren Baghdasaryan
  2021-07-10  3:37     ` Miaohe Lin
  0 siblings, 1 reply; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-10  2:40 UTC
  To: Miaohe Lin
  Cc: Johannes Weiner, Michal Hocko, vdavydov.dev, Andrew Morton,
	Shakeel Butt, Roman Gushchin, songmuchun, Yang Shi, alexs,
	richard.weiyang, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
	David Hildenbrand, Matthew Wilcox, apopple, Minchan Kim, LKML,
	cgroups mailinglist, linux-mm, kernel-team, Tejun Heo

On Fri, Jul 9, 2021 at 6:52 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> On 2021/7/10 8:36, Suren Baghdasaryan wrote:
> > Add mem_cgroup_disabled checks in the vmpressure, mem_cgroup_uncharge_swap
> > and cgroup_throttle_swaprate functions. This minimizes the memcg overhead
> > in the pagefault and exit_mmap paths when memcgs are disabled using the
> > cgroup_disable=memory command-line option.
> > This change results in ~2.1% overhead reduction when running the PFT test
> > comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
> > CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
> > ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Reviewed-by: Shakeel Butt <shakeelb@google.com>
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> > ---
> >  mm/memcontrol.c | 3 +++
> >  mm/swapfile.c   | 3 +++
> >  mm/vmpressure.c | 7 ++++++-
> >  3 files changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index ae1f5d0cb581..a228cd51c4bd 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
> >       struct mem_cgroup *memcg;
> >       unsigned short id;
> >
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +
> >       id = swap_cgroup_record(entry, 0, nr_pages);
> >       rcu_read_lock();
> >       memcg = mem_cgroup_from_id(id);
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 1e07d1c776f2..707fa0481bb4 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
> >       struct swap_info_struct *si, *next;
> >       int nid = page_to_nid(page);
> >
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +
>
> Many thanks for your patch. But I'm somewhat confused about this change.
> IMO, cgroup_throttle_swaprate() is only related to blk_cgroup and it seems
> irrelevant to mem_cgroup. Could you please explain this for me?

cgroup_throttle_swaprate() is a no-op when CONFIG_MEMCG=n (see:
https://elixir.bootlin.com/linux/latest/source/include/linux/swap.h#L699),
therefore I assume we can safely skip it when memcgs are disabled via
"cgroup_disable=memory". From perf results I also see no hits on this
function when CONFIG_MEMCG=n.
However, looking into the code, I'm not sure why it should depend on
CONFIG_MEMCG. But it's Friday night and I might be missing some
details here...
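
For reference, the stub behind that link is just an empty static inline
(so the call compiles away entirely when the config is off):

  static inline void cgroup_throttle_swaprate(struct page *page,
                                              gfp_t gfp_mask)
  {
  }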

>
> Thanks!
>
> >       if (!(gfp_mask & __GFP_IO))
> >               return;
> >
> > diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> > index d69019fc3789..9b172561fded 100644
> > --- a/mm/vmpressure.c
> > +++ b/mm/vmpressure.c
> > @@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
> >  void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
> >               unsigned long scanned, unsigned long reclaimed)
> >  {
> > -     struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
> > +     struct vmpressure *vmpr;
> > +
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +
> > +     vmpr = memcg_to_vmpressure(memcg);
> >
> >       /*
> >        * Here we only want to account pressure that userland is able to
> >
>


* Re: [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
  2021-07-10  2:40   ` Suren Baghdasaryan
@ 2021-07-10  3:37     ` Miaohe Lin
  0 siblings, 0 replies; 20+ messages in thread
From: Miaohe Lin @ 2021-07-10  3:37 UTC
  To: Suren Baghdasaryan
  Cc: Johannes Weiner, Michal Hocko, vdavydov.dev, Andrew Morton,
	Shakeel Butt, Roman Gushchin, songmuchun, Yang Shi, alexs,
	richard.weiyang, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
	David Hildenbrand, Matthew Wilcox, apopple, Minchan Kim, LKML,
	cgroups mailinglist, linux-mm, kernel-team, Tejun Heo

On 2021/7/10 10:40, Suren Baghdasaryan wrote:
> On Fri, Jul 9, 2021 at 6:52 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> On 2021/7/10 8:36, Suren Baghdasaryan wrote:
>>> Add mem_cgroup_disabled checks in the vmpressure, mem_cgroup_uncharge_swap
>>> and cgroup_throttle_swaprate functions. This minimizes the memcg overhead
>>> in the pagefault and exit_mmap paths when memcgs are disabled using the
>>> cgroup_disable=memory command-line option.
>>> This change results in ~2.1% overhead reduction when running the PFT test
>>> comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
>>> CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
>>> ARM64 Android device.
>>>
>>> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>>> Reviewed-by: Shakeel Butt <shakeelb@google.com>
>>> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
>>> ---
>>>  mm/memcontrol.c | 3 +++
>>>  mm/swapfile.c   | 3 +++
>>>  mm/vmpressure.c | 7 ++++++-
>>>  3 files changed, 12 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>> index ae1f5d0cb581..a228cd51c4bd 100644
>>> --- a/mm/memcontrol.c
>>> +++ b/mm/memcontrol.c
>>> @@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
>>>       struct mem_cgroup *memcg;
>>>       unsigned short id;
>>>
>>> +     if (mem_cgroup_disabled())
>>> +             return;
>>> +
>>>       id = swap_cgroup_record(entry, 0, nr_pages);
>>>       rcu_read_lock();
>>>       memcg = mem_cgroup_from_id(id);
>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>> index 1e07d1c776f2..707fa0481bb4 100644
>>> --- a/mm/swapfile.c
>>> +++ b/mm/swapfile.c
>>> @@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
>>>       struct swap_info_struct *si, *next;
>>>       int nid = page_to_nid(page);
>>>
>>> +     if (mem_cgroup_disabled())
>>> +             return;
>>> +
>>
>> Many thanks for your patch. But I'm somewhat confused about this change.
>> IMO, cgroup_throttle_swaprate() is only related to blk_cgroup and it seems
>> irrelevant to mem_cgroup. Could you please explain this for me?
> 
> cgroup_throttle_swaprate() is a no-op when CONFIG_MEMCG=n (see:
> https://elixir.bootlin.com/linux/latest/source/include/linux/swap.h#L699),

I browsed the git history related to cgroup_throttle_swaprate() and found this:

"""
mm: memcontrol: move out cgroup swaprate throttling

    The cgroup swaprate throttling is about matching new anon allocations to
    the rate of available IO when that is being throttled.  It's the io
    controller hooking into the VM, rather than a memory controller thing.
"""

It seems cgroup_throttle_swaprate() works on memory allocations, so
mem_cgroup matters here. But I'm not sure...

> therefore I assume we can safely skip it when memcgs are disabled via
> "cgroup_disable=memory". From perf results I also see no hits on this
> function when CONFIG_MEMCG=n.
> However, looking into the code, I'm not sure why it should depend on
> CONFIG_MEMCG. But it's Friday night and I might be missing some
> details here...

Many thanks for your reply on a Friday night. :)

> 
>>
>> Thanks!
>>
>>>       if (!(gfp_mask & __GFP_IO))
>>>               return;
>>>
>>> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
>>> index d69019fc3789..9b172561fded 100644
>>> --- a/mm/vmpressure.c
>>> +++ b/mm/vmpressure.c
>>> @@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
>>>  void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
>>>               unsigned long scanned, unsigned long reclaimed)
>>>  {
>>> -     struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
>>> +     struct vmpressure *vmpr;
>>> +
>>> +     if (mem_cgroup_disabled())
>>> +             return;
>>> +
>>> +     vmpr = memcg_to_vmpressure(memcg);
>>>
>>>       /*
>>>        * Here we only want to account pressure that userland is able to
>>>
>>
> .
> 



* Re: [External] [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
  2021-07-10  0:36 [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
                   ` (2 preceding siblings ...)
  2021-07-10  1:52 ` [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Miaohe Lin
@ 2021-07-10 10:54 ` Muchun Song
  2021-07-12  7:11 ` Michal Hocko
  4 siblings, 0 replies; 20+ messages in thread
From: Muchun Song @ 2021-07-10 10:54 UTC
  To: Suren Baghdasaryan
  Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Shakeel Butt, Roman Gushchin, Yang Shi, Alex Shi,
	Wei Yang, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
	David Hildenbrand, Matthew Wilcox, apopple, Minchan Kim,
	Miaohe Lin, LKML, Cgroups, Linux Memory Management List,
	kernel-team

On Sat, Jul 10, 2021 at 8:36 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> Add mem_cgroup_disabled checks in the vmpressure, mem_cgroup_uncharge_swap
> and cgroup_throttle_swaprate functions. This minimizes the memcg overhead
> in the pagefault and exit_mmap paths when memcgs are disabled using the
> cgroup_disable=memory command-line option.
> This change results in ~2.1% overhead reduction when running the PFT test
> comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
> CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
> ARM64 Android device.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

The changes are straightforward. LGTM.
Reviewed-by: Muchun Song <songmuchun@bytedance.com>


* Re: [External] [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-10  0:36 ` [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
@ 2021-07-10 11:08   ` Muchun Song
  2021-07-13  1:12     ` Suren Baghdasaryan
  2021-07-12  7:15   ` Michal Hocko
  2021-07-18 16:55   ` Matthew Wilcox
  2 siblings, 1 reply; 20+ messages in thread
From: Muchun Song @ 2021-07-10 11:08 UTC
  To: Suren Baghdasaryan
  Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Shakeel Butt, Roman Gushchin, Yang Shi, Alex Shi,
	Wei Yang, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
	David Hildenbrand, Matthew Wilcox, apopple, Minchan Kim,
	Miaohe Lin, LKML, Cgroups, Linux Memory Management List,
	kernel-team

On Sat, Jul 10, 2021 at 8:36 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> Inline the mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> functions to perform the mem_cgroup_disabled static key check inline
> before calling the main body of the function. This minimizes the memcg
> overhead in the pagefault and exit_mmap paths when memcgs are disabled
> using the cgroup_disable=memory command-line option.
> This change results in ~0.4% overhead reduction when running the PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

But some nits below.

> ---
>  include/linux/memcontrol.h | 28 +++++++++++++++++++++++++---
>  mm/memcontrol.c            | 29 ++++++++++-------------------
>  2 files changed, 35 insertions(+), 22 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index bfe5c486f4ad..39fa88051a42 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -693,13 +693,35 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
>                 page_counter_read(&memcg->memory);
>  }
>
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +                       gfp_t gfp_mask);
> +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +                                   gfp_t gfp_mask)
> +{
> +       if (mem_cgroup_disabled())
> +               return 0;
> +       return __mem_cgroup_charge(page, mm, gfp_mask);
> +}
> +
>  int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
>                                   gfp_t gfp, swp_entry_t entry);
>  void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
>
> -void mem_cgroup_uncharge(struct page *page);
> -void mem_cgroup_uncharge_list(struct list_head *page_list);
> +void __mem_cgroup_uncharge(struct page *page);
> +static inline void mem_cgroup_uncharge(struct page *page)
> +{
> +       if (mem_cgroup_disabled())
> +               return;
> +       __mem_cgroup_uncharge(page);
> +}
> +
> +void __mem_cgroup_uncharge_list(struct list_head *page_list);
> +static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
> +{
> +       if (mem_cgroup_disabled())
> +               return;
> +       __mem_cgroup_uncharge_list(page_list);
> +}
>
>  void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index a228cd51c4bd..cdaf7003b43d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6701,8 +6701,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
>                         atomic_long_read(&parent->memory.children_low_usage)));
>  }
>
> -static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> -                              gfp_t gfp)
> +static int charge_memcg(struct page *page, struct mem_cgroup *memcg, gfp_t gfp)
>  {
>         unsigned int nr_pages = thp_nr_pages(page);
>         int ret;
> @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
>  }
>
>  /**
> - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
>   * @page: page to charge
>   * @mm: mm context of the victim
>   * @gfp_mask: reclaim mode
> @@ -6736,16 +6735,14 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
>   *
>   * Returns 0 on success. Otherwise, an error code is returned.
>   */
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +                       gfp_t gfp_mask)
>  {
>         struct mem_cgroup *memcg;
>         int ret;
>
> -       if (mem_cgroup_disabled())
> -               return 0;
> -
>         memcg = get_mem_cgroup_from_mm(mm);
> -       ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> +       ret = charge_memcg(page, memcg, gfp_mask);
>         css_put(&memcg->css);
>
>         return ret;
> @@ -6780,7 +6777,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
>                 memcg = get_mem_cgroup_from_mm(mm);
>         rcu_read_unlock();
>
> -       ret = __mem_cgroup_charge(page, memcg, gfp);
> +       ret = charge_memcg(page, memcg, gfp);
>
>         css_put(&memcg->css);
>         return ret;
> @@ -6916,18 +6913,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  }
>
>  /**
> - * mem_cgroup_uncharge - uncharge a page
> + * __mem_cgroup_uncharge - uncharge a page
>   * @page: page to uncharge
>   *
>   * Uncharge a page previously charged with mem_cgroup_charge().

The comment here also needs to be updated.

mem_cgroup_uncharge() -> __mem_cgroup_uncharge()

>   */
> -void mem_cgroup_uncharge(struct page *page)
> +void __mem_cgroup_uncharge(struct page *page)
>  {
>         struct uncharge_gather ug;
>
> -       if (mem_cgroup_disabled())
> -               return;
> -
>         /* Don't touch page->lru of any random page, pre-check: */
>         if (!page_memcg(page))
>                 return;
> @@ -6938,20 +6932,17 @@ void mem_cgroup_uncharge(struct page *page)
>  }
>
>  /**
> - * mem_cgroup_uncharge_list - uncharge a list of page
> + * __mem_cgroup_uncharge_list - uncharge a list of page
>   * @page_list: list of pages to uncharge
>   *
>   * Uncharge a list of pages previously charged with
>   * mem_cgroup_charge().

Should be __mem_cgroup_charge().

Thanks.

>   */
> -void mem_cgroup_uncharge_list(struct list_head *page_list)
> +void __mem_cgroup_uncharge_list(struct list_head *page_list)
>  {
>         struct uncharge_gather ug;
>         struct page *page;
>
> -       if (mem_cgroup_disabled())
> -               return;
> -
>         uncharge_gather_clear(&ug);
>         list_for_each_entry(page, page_list, lru)
>                 uncharge_page(page, &ug);
> --
> 2.32.0.93.g670b81a890-goog
>


* Re: [External] [PATCH v3 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config
  2021-07-10  0:36 ` [PATCH v3 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
@ 2021-07-10 11:19   ` Muchun Song
  2021-07-12  7:17   ` Michal Hocko
  1 sibling, 0 replies; 20+ messages in thread
From: Muchun Song @ 2021-07-10 11:19 UTC
  To: Suren Baghdasaryan
  Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Shakeel Butt, Roman Gushchin, Yang Shi, Alex Shi,
	Wei Yang, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
	David Hildenbrand, Matthew Wilcox, apopple, Minchan Kim,
	Miaohe Lin, LKML, Cgroups, Linux Memory Management List,
	kernel-team

On Sat, Jul 10, 2021 at 8:36 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> Inline the mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
> cgroup_throttle_swaprate functions to perform the mem_cgroup_disabled
> static key check inline before calling the main body of the function.
> This minimizes the memcg overhead in the pagefault and exit_mmap paths
> when memcgs are disabled using the cgroup_disable=memory command-line
> option.
> This change results in ~1% overhead reduction when running the PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

LGTM.

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Thanks.


* Re: [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
  2021-07-10  0:36 [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
                   ` (3 preceding siblings ...)
  2021-07-10 10:54 ` [External] " Muchun Song
@ 2021-07-12  7:11 ` Michal Hocko
  2021-07-12 15:55   ` Suren Baghdasaryan
  4 siblings, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2021-07-12  7:11 UTC
  To: Suren Baghdasaryan
  Cc: tj, hannes, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, richard.weiyang, vbabka, axboe, iamjoonsoo.kim,
	david, willy, apopple, minchan, linmiaohe, linux-kernel, cgroups,
	linux-mm, kernel-team

On Fri 09-07-21 17:36:24, Suren Baghdasaryan wrote:
> Add mem_cgroup_disabled checks in the vmpressure, mem_cgroup_uncharge_swap
> and cgroup_throttle_swaprate functions. This minimizes the memcg overhead
> in the pagefault and exit_mmap paths when memcgs are disabled using the
> cgroup_disable=memory command-line option.
> This change results in ~2.1% overhead reduction when running the PFT test

What is the PFT test?

> comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
> CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
> ARM64 Android device.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
>  mm/memcontrol.c | 3 +++
>  mm/swapfile.c   | 3 +++
>  mm/vmpressure.c | 7 ++++++-
>  3 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ae1f5d0cb581..a228cd51c4bd 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
>  	struct mem_cgroup *memcg;
>  	unsigned short id;
>  
> +	if (mem_cgroup_disabled())
> +		return;
> +
>  	id = swap_cgroup_record(entry, 0, nr_pages);
>  	rcu_read_lock();
>  	memcg = mem_cgroup_from_id(id);
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 1e07d1c776f2..707fa0481bb4 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
>  	struct swap_info_struct *si, *next;
>  	int nid = page_to_nid(page);
>  
> +	if (mem_cgroup_disabled())
> +		return;
> +
>  	if (!(gfp_mask & __GFP_IO))
>  		return;
>  
> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> index d69019fc3789..9b172561fded 100644
> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
>  void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
>  		unsigned long scanned, unsigned long reclaimed)
>  {
> -	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
> +	struct vmpressure *vmpr;
> +
> +	if (mem_cgroup_disabled())
> +		return;
> +
> +	vmpr = memcg_to_vmpressure(memcg);
>  
>  	/*
>  	 * Here we only want to account pressure that userland is able to
> -- 
> 2.32.0.93.g670b81a890-goog

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-10  0:36 ` [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
  2021-07-10 11:08   ` [External] " Muchun Song
@ 2021-07-12  7:15   ` Michal Hocko
  2021-07-12 15:55     ` Suren Baghdasaryan
  2021-07-18 16:55   ` Matthew Wilcox
  2 siblings, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2021-07-12  7:15 UTC
  To: Suren Baghdasaryan
  Cc: tj, hannes, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, richard.weiyang, vbabka, axboe, iamjoonsoo.kim,
	david, willy, apopple, minchan, linmiaohe, linux-kernel, cgroups,
	linux-mm, kernel-team

On Fri 09-07-21 17:36:25, Suren Baghdasaryan wrote:
> Inline the mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> functions to perform the mem_cgroup_disabled static key check inline
> before calling the main body of the function. This minimizes the memcg
> overhead in the pagefault and exit_mmap paths when memcgs are disabled
> using the cgroup_disable=memory command-line option.
> This change results in ~0.4% overhead reduction when running the PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>

With doc updated as suggested by Muchun Song
Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
>  include/linux/memcontrol.h | 28 +++++++++++++++++++++++++---
>  mm/memcontrol.c            | 29 ++++++++++-------------------
>  2 files changed, 35 insertions(+), 22 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index bfe5c486f4ad..39fa88051a42 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -693,13 +693,35 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
>  		page_counter_read(&memcg->memory);
>  }
>  
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +			gfp_t gfp_mask);
> +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +				    gfp_t gfp_mask)
> +{
> +	if (mem_cgroup_disabled())
> +		return 0;
> +	return __mem_cgroup_charge(page, mm, gfp_mask);
> +}
> +
>  int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
>  				  gfp_t gfp, swp_entry_t entry);
>  void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
>  
> -void mem_cgroup_uncharge(struct page *page);
> -void mem_cgroup_uncharge_list(struct list_head *page_list);
> +void __mem_cgroup_uncharge(struct page *page);
> +static inline void mem_cgroup_uncharge(struct page *page)
> +{
> +	if (mem_cgroup_disabled())
> +		return;
> +	__mem_cgroup_uncharge(page);
> +}
> +
> +void __mem_cgroup_uncharge_list(struct list_head *page_list);
> +static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
> +{
> +	if (mem_cgroup_disabled())
> +		return;
> +	__mem_cgroup_uncharge_list(page_list);
> +}
>  
>  void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
>  
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index a228cd51c4bd..cdaf7003b43d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6701,8 +6701,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
>  			atomic_long_read(&parent->memory.children_low_usage)));
>  }
>  
> -static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> -			       gfp_t gfp)
> +static int charge_memcg(struct page *page, struct mem_cgroup *memcg, gfp_t gfp)
>  {
>  	unsigned int nr_pages = thp_nr_pages(page);
>  	int ret;
> @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
>  }
>  
>  /**
> - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
>   * @page: page to charge
>   * @mm: mm context of the victim
>   * @gfp_mask: reclaim mode
> @@ -6736,16 +6735,14 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
>   *
>   * Returns 0 on success. Otherwise, an error code is returned.
>   */
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +			gfp_t gfp_mask)
>  {
>  	struct mem_cgroup *memcg;
>  	int ret;
>  
> -	if (mem_cgroup_disabled())
> -		return 0;
> -
>  	memcg = get_mem_cgroup_from_mm(mm);
> -	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> +	ret = charge_memcg(page, memcg, gfp_mask);
>  	css_put(&memcg->css);
>  
>  	return ret;
> @@ -6780,7 +6777,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
>  		memcg = get_mem_cgroup_from_mm(mm);
>  	rcu_read_unlock();
>  
> -	ret = __mem_cgroup_charge(page, memcg, gfp);
> +	ret = charge_memcg(page, memcg, gfp);
>  
>  	css_put(&memcg->css);
>  	return ret;
> @@ -6916,18 +6913,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  }
>  
>  /**
> - * mem_cgroup_uncharge - uncharge a page
> + * __mem_cgroup_uncharge - uncharge a page
>   * @page: page to uncharge
>   *
>   * Uncharge a page previously charged with mem_cgroup_charge().
>   */
> -void mem_cgroup_uncharge(struct page *page)
> +void __mem_cgroup_uncharge(struct page *page)
>  {
>  	struct uncharge_gather ug;
>  
> -	if (mem_cgroup_disabled())
> -		return;
> -
>  	/* Don't touch page->lru of any random page, pre-check: */
>  	if (!page_memcg(page))
>  		return;
> @@ -6938,20 +6932,17 @@ void mem_cgroup_uncharge(struct page *page)
>  }
>  
>  /**
> - * mem_cgroup_uncharge_list - uncharge a list of page
> + * __mem_cgroup_uncharge_list - uncharge a list of page
>   * @page_list: list of pages to uncharge
>   *
>   * Uncharge a list of pages previously charged with
>   * mem_cgroup_charge().
>   */
> -void mem_cgroup_uncharge_list(struct list_head *page_list)
> +void __mem_cgroup_uncharge_list(struct list_head *page_list)
>  {
>  	struct uncharge_gather ug;
>  	struct page *page;
>  
> -	if (mem_cgroup_disabled())
> -		return;
> -
>  	uncharge_gather_clear(&ug);
>  	list_for_each_entry(page, page_list, lru)
>  		uncharge_page(page, &ug);
> -- 
> 2.32.0.93.g670b81a890-goog

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v3 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config
  2021-07-10  0:36 ` [PATCH v3 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
  2021-07-10 11:19   ` [External] " Muchun Song
@ 2021-07-12  7:17   ` Michal Hocko
  2021-07-12 15:57     ` Suren Baghdasaryan
  1 sibling, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2021-07-12  7:17 UTC
  To: Suren Baghdasaryan
  Cc: tj, hannes, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, richard.weiyang, vbabka, axboe, iamjoonsoo.kim,
	david, willy, apopple, minchan, linmiaohe, linux-kernel, cgroups,
	linux-mm, kernel-team

On Fri 09-07-21 17:36:26, Suren Baghdasaryan wrote:
> Inline the mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
> cgroup_throttle_swaprate functions to perform the mem_cgroup_disabled
> static key check inline before calling the main body of the function.
> This minimizes the memcg overhead in the pagefault and exit_mmap paths
> when memcgs are disabled using the cgroup_disable=memory command-line
> option.
> This change results in ~1% overhead reduction when running the PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

I find it a bit surprising to see such a big difference over a function
call in a slow path like swap in/out.

Anyway the change makes sense.

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
>  include/linux/swap.h | 26 +++++++++++++++++++++++---
>  mm/memcontrol.c      | 14 ++++----------
>  mm/swapfile.c        |  5 +----
>  3 files changed, 28 insertions(+), 17 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 6f5a43251593..f30d26b0f71d 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -721,7 +721,13 @@ static inline int mem_cgroup_swappiness(struct mem_cgroup *mem)
>  #endif
>  
>  #if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
> -extern void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
> +extern void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
> +static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
> +{
> +	if (mem_cgroup_disabled())
> +		return;
> +	__cgroup_throttle_swaprate(page, gfp_mask);
> +}
>  #else
>  static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
>  {
> @@ -730,8 +736,22 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
>  
>  #ifdef CONFIG_MEMCG_SWAP
>  extern void mem_cgroup_swapout(struct page *page, swp_entry_t entry);
> -extern int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
> -extern void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
> +extern int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
> +static inline int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> +{
> +	if (mem_cgroup_disabled())
> +		return 0;
> +	return __mem_cgroup_try_charge_swap(page, entry);
> +}
> +
> +extern void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
> +static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
> +{
> +	if (mem_cgroup_disabled())
> +		return;
> +	__mem_cgroup_uncharge_swap(entry, nr_pages);
> +}
> +
>  extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
>  extern bool mem_cgroup_swap_full(struct page *page);
>  #else
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index cdaf7003b43d..0b05322836ec 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -7234,7 +7234,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
>  }
>  
>  /**
> - * mem_cgroup_try_charge_swap - try charging swap space for a page
> + * __mem_cgroup_try_charge_swap - try charging swap space for a page
>   * @page: page being added to swap
>   * @entry: swap entry to charge
>   *
> @@ -7242,16 +7242,13 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
>   *
>   * Returns 0 on success, -ENOMEM on failure.
>   */
> -int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> +int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
>  {
>  	unsigned int nr_pages = thp_nr_pages(page);
>  	struct page_counter *counter;
>  	struct mem_cgroup *memcg;
>  	unsigned short oldid;
>  
> -	if (mem_cgroup_disabled())
> -		return 0;
> -
>  	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
>  		return 0;
>  
> @@ -7287,18 +7284,15 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
>  }
>  
>  /**
> - * mem_cgroup_uncharge_swap - uncharge swap space
> + * __mem_cgroup_uncharge_swap - uncharge swap space
>   * @entry: swap entry to uncharge
>   * @nr_pages: the amount of swap space to uncharge
>   */
> -void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
> +void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
>  {
>  	struct mem_cgroup *memcg;
>  	unsigned short id;
>  
> -	if (mem_cgroup_disabled())
> -		return;
> -
>  	id = swap_cgroup_record(entry, 0, nr_pages);
>  	rcu_read_lock();
>  	memcg = mem_cgroup_from_id(id);
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 707fa0481bb4..04a0c83f1313 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -3773,14 +3773,11 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
>  }
>  
>  #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
> -void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
> +void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
>  {
>  	struct swap_info_struct *si, *next;
>  	int nid = page_to_nid(page);
>  
> -	if (mem_cgroup_disabled())
> -		return;
> -
>  	if (!(gfp_mask & __GFP_IO))
>  		return;
>  
> -- 
> 2.32.0.93.g670b81a890-goog

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
  2021-07-12  7:11 ` Michal Hocko
@ 2021-07-12 15:55   ` Suren Baghdasaryan
  0 siblings, 0 replies; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-12 15:55 UTC
  To: Michal Hocko
  Cc: Tejun Heo, Johannes Weiner, vdavydov.dev, Andrew Morton,
	Shakeel Butt, Roman Gushchin, songmuchun, Yang Shi, alexs,
	richard.weiyang, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
	David Hildenbrand, Matthew Wilcox, apopple, Minchan Kim,
	Miaohe Lin, LKML, cgroups mailinglist, linux-mm, kernel-team

On Mon, Jul 12, 2021 at 12:11 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Fri 09-07-21 17:36:24, Suren Baghdasaryan wrote:
> > Add mem_cgroup_disabled checks in the vmpressure, mem_cgroup_uncharge_swap
> > and cgroup_throttle_swaprate functions. This minimizes the memcg overhead
> > in the pagefault and exit_mmap paths when memcgs are disabled using the
> > cgroup_disable=memory command-line option.
> > This change results in ~2.1% overhead reduction when running the PFT test
>
> What is the PFT test?

Christoph Lameter’s pagefault test (pft) tool
(https://lkml.org/lkml/2006/8/29/294). I'll add the link in the
description for clarity.

>
> > comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
> > CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
> > ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Reviewed-by: Shakeel Butt <shakeelb@google.com>
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
>
> Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

>
> Thanks!
>
> > ---
> >  mm/memcontrol.c | 3 +++
> >  mm/swapfile.c   | 3 +++
> >  mm/vmpressure.c | 7 ++++++-
> >  3 files changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index ae1f5d0cb581..a228cd51c4bd 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
> >       struct mem_cgroup *memcg;
> >       unsigned short id;
> >
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +
> >       id = swap_cgroup_record(entry, 0, nr_pages);
> >       rcu_read_lock();
> >       memcg = mem_cgroup_from_id(id);
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 1e07d1c776f2..707fa0481bb4 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
> >       struct swap_info_struct *si, *next;
> >       int nid = page_to_nid(page);
> >
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +
> >       if (!(gfp_mask & __GFP_IO))
> >               return;
> >
> > diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> > index d69019fc3789..9b172561fded 100644
> > --- a/mm/vmpressure.c
> > +++ b/mm/vmpressure.c
> > @@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
> >  void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
> >               unsigned long scanned, unsigned long reclaimed)
> >  {
> > -     struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
> > +     struct vmpressure *vmpr;
> > +
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +
> > +     vmpr = memcg_to_vmpressure(memcg);
> >
> >       /*
> >        * Here we only want to account pressure that userland is able to
> > --
> > 2.32.0.93.g670b81a890-goog
>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-12  7:15   ` Michal Hocko
@ 2021-07-12 15:55     ` Suren Baghdasaryan
  0 siblings, 0 replies; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-12 15:55 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Tejun Heo, Johannes Weiner, vdavydov.dev, Andrew Morton,
	Shakeel Butt, Roman Gushchin, songmuchun, Yang Shi, alexs,
	richard.weiyang, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
	David Hildenbrand, Matthew Wilcox, apopple, Minchan Kim,
	Miaohe Lin, LKML, cgroups mailinglist, linux-mm, kernel-team

On Mon, Jul 12, 2021 at 12:15 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Fri 09-07-21 17:36:25, Suren Baghdasaryan wrote:
> > Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> > functions to perform mem_cgroup_disabled static key check inline before
> > calling the main body of the function. This minimizes the memcg overhead
> > in the pagefault and exit_mmap paths when memcgs are disabled using
> > cgroup_disable=memory command-line option.
> > This change results in ~0.4% overhead reduction when running PFT test
> > comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> > configuration on an 8-core ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Reviewed-by: Shakeel Butt <shakeelb@google.com>
>
> With doc updated as suggested by Muchun Song
> Acked-by: Michal Hocko <mhocko@suse.com>

Thanks! Will fix the comment and post v4 later today.

>
> Thanks!
>
> > ---
> >  include/linux/memcontrol.h | 28 +++++++++++++++++++++++++---
> >  mm/memcontrol.c            | 29 ++++++++++-------------------
> >  2 files changed, 35 insertions(+), 22 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index bfe5c486f4ad..39fa88051a42 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -693,13 +693,35 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> >               page_counter_read(&memcg->memory);
> >  }
> >
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                     gfp_t gfp_mask);
> > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                                 gfp_t gfp_mask)
> > +{
> > +     if (mem_cgroup_disabled())
> > +             return 0;
> > +     return __mem_cgroup_charge(page, mm, gfp_mask);
> > +}
> > +
> >  int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
> >                                 gfp_t gfp, swp_entry_t entry);
> >  void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
> >
> > -void mem_cgroup_uncharge(struct page *page);
> > -void mem_cgroup_uncharge_list(struct list_head *page_list);
> > +void __mem_cgroup_uncharge(struct page *page);
> > +static inline void mem_cgroup_uncharge(struct page *page)
> > +{
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +     __mem_cgroup_uncharge(page);
> > +}
> > +
> > +void __mem_cgroup_uncharge_list(struct list_head *page_list);
> > +static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
> > +{
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +     __mem_cgroup_uncharge_list(page_list);
> > +}
> >
> >  void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index a228cd51c4bd..cdaf7003b43d 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6701,8 +6701,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
> >                       atomic_long_read(&parent->memory.children_low_usage)));
> >  }
> >
> > -static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > -                            gfp_t gfp)
> > +static int charge_memcg(struct page *page, struct mem_cgroup *memcg, gfp_t gfp)
> >  {
> >       unsigned int nr_pages = thp_nr_pages(page);
> >       int ret;
> > @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> >  }
> >
> >  /**
> > - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
> >   * @page: page to charge
> >   * @mm: mm context of the victim
> >   * @gfp_mask: reclaim mode
> > @@ -6736,16 +6735,14 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> >   *
> >   * Returns 0 on success. Otherwise, an error code is returned.
> >   */
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> > +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                     gfp_t gfp_mask)
> >  {
> >       struct mem_cgroup *memcg;
> >       int ret;
> >
> > -     if (mem_cgroup_disabled())
> > -             return 0;
> > -
> >       memcg = get_mem_cgroup_from_mm(mm);
> > -     ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > +     ret = charge_memcg(page, memcg, gfp_mask);
> >       css_put(&memcg->css);
> >
> >       return ret;
> > @@ -6780,7 +6777,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
> >               memcg = get_mem_cgroup_from_mm(mm);
> >       rcu_read_unlock();
> >
> > -     ret = __mem_cgroup_charge(page, memcg, gfp);
> > +     ret = charge_memcg(page, memcg, gfp);
> >
> >       css_put(&memcg->css);
> >       return ret;
> > @@ -6916,18 +6913,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
> >  }
> >
> >  /**
> > - * mem_cgroup_uncharge - uncharge a page
> > + * __mem_cgroup_uncharge - uncharge a page
> >   * @page: page to uncharge
> >   *
> >   * Uncharge a page previously charged with mem_cgroup_charge().
> >   */
> > -void mem_cgroup_uncharge(struct page *page)
> > +void __mem_cgroup_uncharge(struct page *page)
> >  {
> >       struct uncharge_gather ug;
> >
> > -     if (mem_cgroup_disabled())
> > -             return;
> > -
> >       /* Don't touch page->lru of any random page, pre-check: */
> >       if (!page_memcg(page))
> >               return;
> > @@ -6938,20 +6932,17 @@ void mem_cgroup_uncharge(struct page *page)
> >  }
> >
> >  /**
> > - * mem_cgroup_uncharge_list - uncharge a list of page
> > + * __mem_cgroup_uncharge_list - uncharge a list of page
> >   * @page_list: list of pages to uncharge
> >   *
> >   * Uncharge a list of pages previously charged with
> >   * mem_cgroup_charge().
> >   */
> > -void mem_cgroup_uncharge_list(struct list_head *page_list)
> > +void __mem_cgroup_uncharge_list(struct list_head *page_list)
> >  {
> >       struct uncharge_gather ug;
> >       struct page *page;
> >
> > -     if (mem_cgroup_disabled())
> > -             return;
> > -
> >       uncharge_gather_clear(&ug);
> >       list_for_each_entry(page, page_list, lru)
> >               uncharge_page(page, &ug);
> > --
> > 2.32.0.93.g670b81a890-goog
>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config
  2021-07-12  7:17   ` Michal Hocko
@ 2021-07-12 15:57     ` Suren Baghdasaryan
  0 siblings, 0 replies; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-12 15:57 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Tejun Heo, Johannes Weiner, vdavydov.dev, Andrew Morton,
	Shakeel Butt, Roman Gushchin, songmuchun, Yang Shi, alexs,
	richard.weiyang, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
	David Hildenbrand, Matthew Wilcox, apopple, Minchan Kim,
	Miaohe Lin, LKML, cgroups mailinglist, linux-mm, kernel-team

On Mon, Jul 12, 2021 at 12:17 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Fri 09-07-21 17:36:26, Suren Baghdasaryan wrote:
> > Inline mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
> > cgroup_throttle_swaprate functions to perform mem_cgroup_disabled static
> > key check inline before calling the main body of the function. This
> > minimizes the memcg overhead in the pagefault and exit_mmap paths when
> > memcgs are disabled using cgroup_disable=memory command-line option.
> > This change results in ~1% overhead reduction when running PFT test
> > comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> > configuration on an 8-core ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Reviewed-by: Shakeel Butt <shakeelb@google.com>
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
>
> I find it a bit surprising to see such a big difference over a function
> call in a slow path like swap in/out.

Might be due to the nature of the test. It is designed to generate an
avalanche of anonymous pagefaults and that might be the reason swap
functions are hit this hard.
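
To be clear about where the savings come from: it is mostly the
call/return itself. With the check inlined, the disabled case reduces
to a static-key branch in the caller. A rough sketch, assuming the
usual static-key plumbing behind cgroup_subsys_enabled() (simplified;
see include/linux/memcontrol.h):

static inline bool mem_cgroup_disabled(void)
{
	/*
	 * cgroup_subsys_enabled() expands to static_branch_likely(),
	 * i.e. a patched jump/no-op, so the disabled case costs no
	 * memory load and no function call in the fault path.
	 */
	return !cgroup_subsys_enabled(memory_cgrp_subsys);
}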

>
> Anyway the change makes sense.
>
> Acked-by: Michal Hocko <mhocko@suse.com>
>
> Thanks!
>
> > ---
> >  include/linux/swap.h | 26 +++++++++++++++++++++++---
> >  mm/memcontrol.c      | 14 ++++----------
> >  mm/swapfile.c        |  5 +----
> >  3 files changed, 28 insertions(+), 17 deletions(-)
> >
> > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > index 6f5a43251593..f30d26b0f71d 100644
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -721,7 +721,13 @@ static inline int mem_cgroup_swappiness(struct mem_cgroup *mem)
> >  #endif
> >
> >  #if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
> > -extern void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
> > +extern void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
> > +static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
> > +{
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +     __cgroup_throttle_swaprate(page, gfp_mask);
> > +}
> >  #else
> >  static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
> >  {
> > @@ -730,8 +736,22 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
> >
> >  #ifdef CONFIG_MEMCG_SWAP
> >  extern void mem_cgroup_swapout(struct page *page, swp_entry_t entry);
> > -extern int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
> > -extern void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
> > +extern int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
> > +static inline int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> > +{
> > +     if (mem_cgroup_disabled())
> > +             return 0;
> > +     return __mem_cgroup_try_charge_swap(page, entry);
> > +}
> > +
> > +extern void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
> > +static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
> > +{
> > +     if (mem_cgroup_disabled())
> > +             return;
> > +     __mem_cgroup_uncharge_swap(entry, nr_pages);
> > +}
> > +
> >  extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
> >  extern bool mem_cgroup_swap_full(struct page *page);
> >  #else
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index cdaf7003b43d..0b05322836ec 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -7234,7 +7234,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
> >  }
> >
> >  /**
> > - * mem_cgroup_try_charge_swap - try charging swap space for a page
> > + * __mem_cgroup_try_charge_swap - try charging swap space for a page
> >   * @page: page being added to swap
> >   * @entry: swap entry to charge
> >   *
> > @@ -7242,16 +7242,13 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
> >   *
> >   * Returns 0 on success, -ENOMEM on failure.
> >   */
> > -int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> > +int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> >  {
> >       unsigned int nr_pages = thp_nr_pages(page);
> >       struct page_counter *counter;
> >       struct mem_cgroup *memcg;
> >       unsigned short oldid;
> >
> > -     if (mem_cgroup_disabled())
> > -             return 0;
> > -
> >       if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> >               return 0;
> >
> > @@ -7287,18 +7284,15 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> >  }
> >
> >  /**
> > - * mem_cgroup_uncharge_swap - uncharge swap space
> > + * __mem_cgroup_uncharge_swap - uncharge swap space
> >   * @entry: swap entry to uncharge
> >   * @nr_pages: the amount of swap space to uncharge
> >   */
> > -void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
> > +void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
> >  {
> >       struct mem_cgroup *memcg;
> >       unsigned short id;
> >
> > -     if (mem_cgroup_disabled())
> > -             return;
> > -
> >       id = swap_cgroup_record(entry, 0, nr_pages);
> >       rcu_read_lock();
> >       memcg = mem_cgroup_from_id(id);
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 707fa0481bb4..04a0c83f1313 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -3773,14 +3773,11 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
> >  }
> >
> >  #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
> > -void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
> > +void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
> >  {
> >       struct swap_info_struct *si, *next;
> >       int nid = page_to_nid(page);
> >
> > -     if (mem_cgroup_disabled())
> > -             return;
> > -
> >       if (!(gfp_mask & __GFP_IO))
> >               return;
> >
> > --
> > 2.32.0.93.g670b81a890-goog
>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [External] [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-10 11:08   ` [External] " Muchun Song
@ 2021-07-13  1:12     ` Suren Baghdasaryan
  0 siblings, 0 replies; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-13  1:12 UTC (permalink / raw)
  To: Muchun Song
  Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Shakeel Butt, Roman Gushchin, Yang Shi, Alex Shi,
	Wei Yang, Vlastimil Babka, Jens Axboe, Joonsoo Kim,
	David Hildenbrand, Matthew Wilcox, apopple, Minchan Kim,
	Miaohe Lin, LKML, Cgroups, Linux Memory Management List,
	kernel-team

On Sat, Jul 10, 2021 at 4:08 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Sat, Jul 10, 2021 at 8:36 AM Suren Baghdasaryan <surenb@google.com> wrote:
> >
> > Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> > functions to perform mem_cgroup_disabled static key check inline before
> > calling the main body of the function. This minimizes the memcg overhead
> > in the pagefault and exit_mmap paths when memcgs are disabled using
> > cgroup_disable=memory command-line option.
> > This change results in ~0.4% overhead reduction when running PFT test
> > comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> > configuration on an 8-core ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Reviewed-by: Shakeel Butt <shakeelb@google.com>
>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
>
> But some nits below.
>
> > ---
> >  include/linux/memcontrol.h | 28 +++++++++++++++++++++++++---
> >  mm/memcontrol.c            | 29 ++++++++++-------------------
> >  2 files changed, 35 insertions(+), 22 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index bfe5c486f4ad..39fa88051a42 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -693,13 +693,35 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> >                 page_counter_read(&memcg->memory);
> >  }
> >
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                       gfp_t gfp_mask);
> > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                                   gfp_t gfp_mask)
> > +{
> > +       if (mem_cgroup_disabled())
> > +               return 0;
> > +       return __mem_cgroup_charge(page, mm, gfp_mask);
> > +}
> > +
> >  int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
> >                                   gfp_t gfp, swp_entry_t entry);
> >  void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
> >
> > -void mem_cgroup_uncharge(struct page *page);
> > -void mem_cgroup_uncharge_list(struct list_head *page_list);
> > +void __mem_cgroup_uncharge(struct page *page);
> > +static inline void mem_cgroup_uncharge(struct page *page)
> > +{
> > +       if (mem_cgroup_disabled())
> > +               return;
> > +       __mem_cgroup_uncharge(page);
> > +}
> > +
> > +void __mem_cgroup_uncharge_list(struct list_head *page_list);
> > +static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
> > +{
> > +       if (mem_cgroup_disabled())
> > +               return;
> > +       __mem_cgroup_uncharge_list(page_list);
> > +}
> >
> >  void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index a228cd51c4bd..cdaf7003b43d 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6701,8 +6701,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
> >                         atomic_long_read(&parent->memory.children_low_usage)));
> >  }
> >
> > -static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > -                              gfp_t gfp)
> > +static int charge_memcg(struct page *page, struct mem_cgroup *memcg, gfp_t gfp)
> >  {
> >         unsigned int nr_pages = thp_nr_pages(page);
> >         int ret;
> > @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> >  }
> >
> >  /**
> > - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
> >   * @page: page to charge
> >   * @mm: mm context of the victim
> >   * @gfp_mask: reclaim mode
> > @@ -6736,16 +6735,14 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> >   *
> >   * Returns 0 on success. Otherwise, an error code is returned.
> >   */
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> > +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                       gfp_t gfp_mask)
> >  {
> >         struct mem_cgroup *memcg;
> >         int ret;
> >
> > -       if (mem_cgroup_disabled())
> > -               return 0;
> > -
> >         memcg = get_mem_cgroup_from_mm(mm);
> > -       ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > +       ret = charge_memcg(page, memcg, gfp_mask);
> >         css_put(&memcg->css);
> >
> >         return ret;
> > @@ -6780,7 +6777,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
> >                 memcg = get_mem_cgroup_from_mm(mm);
> >         rcu_read_unlock();
> >
> > -       ret = __mem_cgroup_charge(page, memcg, gfp);
> > +       ret = charge_memcg(page, memcg, gfp);
> >
> >         css_put(&memcg->css);
> >         return ret;
> > @@ -6916,18 +6913,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
> >  }
> >
> >  /**
> > - * mem_cgroup_uncharge - uncharge a page
> > + * __mem_cgroup_uncharge - uncharge a page
> >   * @page: page to uncharge
> >   *
> >   * Uncharge a page previously charged with mem_cgroup_charge().
>
> The comment here also needs to be updated.
>
> mem_cgroup_uncharge() -> __mem_cgroup_uncharge()
>
> >   */
> > -void mem_cgroup_uncharge(struct page *page)
> > +void __mem_cgroup_uncharge(struct page *page)
> >  {
> >         struct uncharge_gather ug;
> >
> > -       if (mem_cgroup_disabled())
> > -               return;
> > -
> >         /* Don't touch page->lru of any random page, pre-check: */
> >         if (!page_memcg(page))
> >                 return;
> > @@ -6938,20 +6932,17 @@ void mem_cgroup_uncharge(struct page *page)
> >  }
> >
> >  /**
> > - * mem_cgroup_uncharge_list - uncharge a list of page
> > + * __mem_cgroup_uncharge_list - uncharge a list of page
> >   * @page_list: list of pages to uncharge
> >   *
> >   * Uncharge a list of pages previously charged with
> >   * mem_cgroup_charge().
>
> Should be __mem_cgroup_charge().

Fixed and posted in v4 at:
https://lore.kernel.org/patchwork/project/lkml/list/?series=507943
Thanks!

>
> Thanks.
>
> >   */
> > -void mem_cgroup_uncharge_list(struct list_head *page_list)
> > +void __mem_cgroup_uncharge_list(struct list_head *page_list)
> >  {
> >         struct uncharge_gather ug;
> >         struct page *page;
> >
> > -       if (mem_cgroup_disabled())
> > -               return;
> > -
> >         uncharge_gather_clear(&ug);
> >         list_for_each_entry(page, page_list, lru)
> >                 uncharge_page(page, &ug);
> > --
> > 2.32.0.93.g670b81a890-goog
> >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-10  0:36 ` [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
  2021-07-10 11:08   ` [External] " Muchun Song
  2021-07-12  7:15   ` Michal Hocko
@ 2021-07-18 16:55   ` Matthew Wilcox
  2021-07-18 21:25     ` Suren Baghdasaryan
  2 siblings, 1 reply; 20+ messages in thread
From: Matthew Wilcox @ 2021-07-18 16:55 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: tj, hannes, mhocko, vdavydov.dev, akpm, shakeelb, guro,
	songmuchun, shy828301, alexs, richard.weiyang, vbabka, axboe,
	iamjoonsoo.kim, david, apopple, minchan, linmiaohe, linux-kernel,
	cgroups, linux-mm, kernel-team

On Fri, Jul 09, 2021 at 05:36:25PM -0700, Suren Baghdasaryan wrote:
> @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
>  }
>  
>  /**
> - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
>   * @page: page to charge
>   * @mm: mm context of the victim
>   * @gfp_mask: reclaim mode

This patch conflicts with the folio work, so I'm just rebasing the
folio patches on top of this, and I think this part of the patch is a
mistake.  We don't want to document the __mem_cgroup_charge() function.
That's an implementation detail.  This patch should instead have moved the
kernel-doc to memcontrol.h and continued to document mem_cgroup_charge().
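
Concretely, something along these lines in memcontrol.h (sketch only,
reusing the existing doc text):

/**
 * mem_cgroup_charge - charge a newly allocated page to a cgroup
 * @page: page to charge
 * @mm: mm context of the victim
 * @gfp_mask: reclaim mode
 *
 * Returns 0 on success. Otherwise, an error code is returned.
 */
static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
				    gfp_t gfp_mask)
{
	if (mem_cgroup_disabled())
		return 0;
	return __mem_cgroup_charge(page, mm, gfp_mask);
}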

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-18 16:55   ` Matthew Wilcox
@ 2021-07-18 21:25     ` Suren Baghdasaryan
  2021-07-18 21:29       ` Matthew Wilcox
  0 siblings, 1 reply; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-18 21:25 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Shakeel Butt, Roman Gushchin, Muchun Song,
	Yang Shi, Alex Shi, Wei Yang, Vlastimil Babka, Jens Axboe,
	Joonsoo Kim, David Hildenbrand, apopple, Minchan Kim, Miaohe Lin,
	LKML, cgroups mailinglist, linux-mm, kernel-team

On Sun, Jul 18, 2021 at 9:56 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Jul 09, 2021 at 05:36:25PM -0700, Suren Baghdasaryan wrote:
> > @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> >  }
> >
> >  /**
> > - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
> >   * @page: page to charge
> >   * @mm: mm context of the victim
> >   * @gfp_mask: reclaim mode
>
> This patch conflicts with the folio work, so I'm just rebasing the
> folio patches on top of this, and I think this part of the patch is a
> mistake.  We don't want to document the __mem_cgroup_charge() function.
> That's an implementation detail.  This patch should instead have moved the
> kernel-doc to memcontrol.h and continued to document mem_cgroup_charge().

Ack.
There was a v4 version of this patch:
https://lore.kernel.org/patchwork/patch/1458907 which was picked up by
Andrew already. If others agree that documentation should be moved
into the header file then I'll gladly post another version. Or I can
post a separate patch moving the documentation only. Whatever works
best. Andrew, Michal, Johannes, WDYT?

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-18 21:25     ` Suren Baghdasaryan
@ 2021-07-18 21:29       ` Matthew Wilcox
  2021-07-18 21:32         ` Suren Baghdasaryan
  0 siblings, 1 reply; 20+ messages in thread
From: Matthew Wilcox @ 2021-07-18 21:29 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Shakeel Butt, Roman Gushchin, Muchun Song,
	Yang Shi, Alex Shi, Wei Yang, Vlastimil Babka, Jens Axboe,
	Joonsoo Kim, David Hildenbrand, apopple, Minchan Kim, Miaohe Lin,
	LKML, cgroups mailinglist, linux-mm, kernel-team

On Sun, Jul 18, 2021 at 02:25:50PM -0700, Suren Baghdasaryan wrote:
> On Sun, Jul 18, 2021 at 9:56 AM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Fri, Jul 09, 2021 at 05:36:25PM -0700, Suren Baghdasaryan wrote:
> > > @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > >  }
> > >
> > >  /**
> > > - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > > + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
> > >   * @page: page to charge
> > >   * @mm: mm context of the victim
> > >   * @gfp_mask: reclaim mode
> >
> > This patch conflicts with the folio work, so I'm just rebasing the
> > folio patches on top of this, and I think this part of the patch is a
> > mistake.  We don't want to document the __mem_cgroup_charge() function.
> > That's an implementation detail.  This patch should instead have moved the
> > kernel-doc to memcontrol.h and continued to document mem_cgroup_charge().
> 
> Ack.
> There was a v4 version of this patch:
> https://lore.kernel.org/patchwork/patch/1458907 which was picked up by
> Andrew already. If others agree that documentation should be moved
> into the header file then I'll gladly post another version. Or I can
> post a separate patch moving the documentation only. Whatever works
> best. Andrew, Michal, Johannes, WDYT?

At this point, I've moved the documentation as part of the folio patch.
I'd rather not redo that patch again ...

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-18 21:29       ` Matthew Wilcox
@ 2021-07-18 21:32         ` Suren Baghdasaryan
  0 siblings, 0 replies; 20+ messages in thread
From: Suren Baghdasaryan @ 2021-07-18 21:32 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Shakeel Butt, Roman Gushchin, Muchun Song,
	Yang Shi, Alex Shi, Wei Yang, Vlastimil Babka, Jens Axboe,
	Joonsoo Kim, David Hildenbrand, apopple, Minchan Kim, Miaohe Lin,
	LKML, cgroups mailinglist, linux-mm, kernel-team

On Sun, Jul 18, 2021 at 2:30 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Sun, Jul 18, 2021 at 02:25:50PM -0700, Suren Baghdasaryan wrote:
> > On Sun, Jul 18, 2021 at 9:56 AM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Fri, Jul 09, 2021 at 05:36:25PM -0700, Suren Baghdasaryan wrote:
> > > > @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > > >  }
> > > >
> > > >  /**
> > > > - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > > > + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
> > > >   * @page: page to charge
> > > >   * @mm: mm context of the victim
> > > >   * @gfp_mask: reclaim mode
> > >
> > > This patch conflicts with the folio work, so I'm just rebasing the
> > > folio patches on top of this, and I think this part of the patch is a
> > > mistake.  We don't want to document the __mem_cgroup_charge() function.
> > > That's an implementation detail.  This patch should instead have moved the
> > > kernel-doc to memcontrol.h and continued to document mem_cgroup_charge().
> >
> > Ack.
> > There was a v4 version of this patch:
> > https://lore.kernel.org/patchwork/patch/1458907 which was picked up by
> > Andrew already. If others agree that documentation should be moved
> > into the header file then I'll gladly post another version. Or I can
> > post a separate patch moving the documentation only. Whatever works
> > best. Andrew, Michal, Johannes, WDYT?
>
> At this point, I've moved the documentation as part of the folio patch.
> I'd rather not redo that patch again ...

Ok. If you need me to redo anything please let me know.

^ permalink raw reply	[flat|nested] 20+ messages in thread
