linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] mm, memcg: Optimizations to minimize overhead when memcgs are disabled
@ 2021-07-09  0:05 Suren Baghdasaryan
  2021-07-09  0:05 ` [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Suren Baghdasaryan @ 2021-07-09  0:05 UTC (permalink / raw)
  To: tj
  Cc: hannes, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, alexander.h.duyck, richard.weiyang, vbabka,
	axboe, iamjoonsoo.kim, david, willy, apopple, minchan, linmiaohe,
	linux-kernel, cgroups, linux-mm, kernel-team, surenb

Disabling memcgs on Android from the kernel command-line has become important
because new requirements oblige all vendors to share the same kernel config,
and because some vendors use memcgs while others don't. The ones who don't
have to disable memcgs with the "cgroup_disable=memory" kernel command-line
option, and we would like to minimize the cost of disabling memcgs this way
compared to disabling them with CONFIG_MEMCG=n. This patchset is focused on
minimizing the performance cost of this option.

When running the pft test with memcgs disabled via CONFIG_MEMCG=n vs the
"cgroup_disable=memory" command-line option, we measured a ~6% drop in the
average pagefault/sec rate with a stddev of ~2%. The results were obtained by
running the pft test 1500 times and averaging the results on an 8-core ARM64
Android device with system services stopped, the performance governor set, and
only Big or only Little cores enabled in a given run to minimize noise.

Using perf, a number of relatively high-cost areas were identified where extra
operations can be avoided. The patchset consists of optimizations that
gradually reduce this regression. Patches were applied incrementally while
testing and recording the impact of each one:

6.01% with vanilla cgroup_disable vs CONFIG_MEMCG=n
3.87% after patch #1 adding mem_cgroup_disabled checks vs CONFIG_MEMCG=n
3.49% after patch #2 inlining mem_cgroup_{charge/uncharge} vs CONFIG_MEMCG=n
2.48% after patch #3 inlining swap-related functions vs CONFIG_MEMCG=n

I kept them separate because they vary in their "impact vs readability cost"
and I'm not sure which ones pass the acceptable threshold.
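
The common shape of patches #2 and #3 is to split each entry point into a
static inline wrapper in the header, which performs the mem_cgroup_disabled()
static key check, and a "__"-prefixed out-of-line body in memcontrol.c or
swapfile.c. A minimal sketch of the pattern (foo/__foo are placeholder names,
not functions touched by the patches):

	/* header */
	void __foo(struct page *page);
	static inline void foo(struct page *page)
	{
		if (mem_cgroup_disabled())	/* static key test, near-free when memcgs are enabled */
			return;
		__foo(page);
	}

	/* .c file: the old foo() body becomes __foo() and drops its own
	 * mem_cgroup_disabled() check */

With "cgroup_disable=memory" the caller then pays only for the inline
static-branch test instead of a call into the memcg code.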

Suren Baghdasaryan (3):
  mm, memcg: add mem_cgroup_disabled checks in vmpressure and
    swap-related functions
  mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled
    memcg config
  mm, memcg: inline swap-related functions to improve disabled memcg
    config

 include/linux/memcontrol.h | 54 ++++++++++++++++++++++++++++++++++----
 include/linux/swap.h       | 26 +++++++++++++++---
 mm/memcontrol.c            | 52 +++++-------------------------------
 mm/swapfile.c              |  2 +-
 mm/vmpressure.c            |  7 ++++-
 5 files changed, 86 insertions(+), 55 deletions(-)

-- 
2.32.0.93.g670b81a890-goog


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
  2021-07-09  0:05 [PATCH 0/3] mm, memcg: Optimizations to minimize overhead when memcgs are disabled Suren Baghdasaryan
@ 2021-07-09  0:05 ` Suren Baghdasaryan
  2021-07-09 14:41   ` Johannes Weiner
  2021-07-09 16:00   ` Shakeel Butt
  2021-07-09  0:05 ` [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
  2021-07-09  0:05 ` [PATCH 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
  2 siblings, 2 replies; 11+ messages in thread
From: Suren Baghdasaryan @ 2021-07-09  0:05 UTC (permalink / raw)
  To: tj
  Cc: hannes, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, alexander.h.duyck, richard.weiyang, vbabka,
	axboe, iamjoonsoo.kim, david, willy, apopple, minchan, linmiaohe,
	linux-kernel, cgroups, linux-mm, kernel-team, surenb

Add mem_cgroup_disabled check in vmpressure, mem_cgroup_uncharge_swap and
cgroup_throttle_swaprate functions. This minimizes the memcg overhead in
the pagefault and exit_mmap paths when memcgs are disabled using
cgroup_disable=memory command-line option.
This change results in ~2.1% overhead reduction when running PFT test
comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
ARM64 Android device.
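
For context, mem_cgroup_disabled() is a static-key test, so the checks added
below cost roughly one patched-out jump when memcgs are enabled and skip the
rest of the function when memcgs are disabled on the command line. Its
definition is approximately the following (reproduced for reference, not part
of this patch):

	static inline bool mem_cgroup_disabled(void)
	{
		/* cgroup_subsys_enabled() expands to a static_branch test */
		return !cgroup_subsys_enabled(memory_cgrp_subsys);
	}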

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/memcontrol.c | 3 +++
 mm/swapfile.c   | 3 +++
 mm/vmpressure.c | 7 ++++++-
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ae1f5d0cb581..a228cd51c4bd 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
+	if (mem_cgroup_disabled())
+		return;
+
 	id = swap_cgroup_record(entry, 0, nr_pages);
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1e07d1c776f2..707fa0481bb4 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 	struct swap_info_struct *si, *next;
 	int nid = page_to_nid(page);
 
+	if (mem_cgroup_disabled())
+		return;
+
 	if (!(gfp_mask & __GFP_IO))
 		return;
 
diff --git a/mm/vmpressure.c b/mm/vmpressure.c
index d69019fc3789..9b172561fded 100644
--- a/mm/vmpressure.c
+++ b/mm/vmpressure.c
@@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
 void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
 		unsigned long scanned, unsigned long reclaimed)
 {
-	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
+	struct vmpressure *vmpr;
+
+	if (mem_cgroup_disabled())
+		return;
+
+	vmpr = memcg_to_vmpressure(memcg);
 
 	/*
 	 * Here we only want to account pressure that userland is able to
-- 
2.32.0.93.g670b81a890-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-09  0:05 [PATCH 0/3] mm, memcg: Optimizations to minimize overhead when memcgs are disabled Suren Baghdasaryan
  2021-07-09  0:05 ` [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
@ 2021-07-09  0:05 ` Suren Baghdasaryan
  2021-07-09 14:48   ` Johannes Weiner
  2021-07-09  0:05 ` [PATCH 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
  2 siblings, 1 reply; 11+ messages in thread
From: Suren Baghdasaryan @ 2021-07-09  0:05 UTC (permalink / raw)
  To: tj
  Cc: hannes, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, alexander.h.duyck, richard.weiyang, vbabka,
	axboe, iamjoonsoo.kim, david, willy, apopple, minchan, linmiaohe,
	linux-kernel, cgroups, linux-mm, kernel-team, surenb

Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list functions
to perform the mem_cgroup_disabled static key check inline before
calling the main body of the function. This minimizes the memcg overhead
in the pagefault and exit_mmap paths when memcgs are disabled using
cgroup_disable=memory command-line option.
This change results in ~0.4% overhead reduction when running PFT test
comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
configuration on an 8-core ARM64 Android device.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/memcontrol.h | 54 ++++++++++++++++++++++++++++++++++----
 mm/memcontrol.c            | 43 +++---------------------------
 2 files changed, 53 insertions(+), 44 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bfe5c486f4ad..480815feb116 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 		page_counter_read(&memcg->memory);
 }
 
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
+struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
+
+int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+			gfp_t gfp);
+/**
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
+ * @page: page to charge
+ * @mm: mm context of the victim
+ * @gfp_mask: reclaim mode
+ *
+ * Try to charge @page to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * charge to the active memcg.
+ *
+ * Do not use this for pages allocated for swapin.
+ *
+ * Returns 0 on success. Otherwise, an error code is returned.
+ */
+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+				    gfp_t gfp_mask)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	if (mem_cgroup_disabled())
+		return 0;
+
+	memcg = get_mem_cgroup_from_mm(mm);
+	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+	css_put(&memcg->css);
+
+	return ret;
+}
+
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
 
-void mem_cgroup_uncharge(struct page *page);
-void mem_cgroup_uncharge_list(struct list_head *page_list);
+void __mem_cgroup_uncharge(struct page *page);
+static inline void mem_cgroup_uncharge(struct page *page)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge(page);
+}
+
+void __mem_cgroup_uncharge_list(struct list_head *page_list);
+static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge_list(page_list);
+}
 
 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
 
@@ -756,8 +802,6 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
 
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
-struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
-
 struct lruvec *lock_page_lruvec(struct page *page);
 struct lruvec *lock_page_lruvec_irq(struct page *page);
 struct lruvec *lock_page_lruvec_irqsave(struct page *page,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a228cd51c4bd..da677b55b2fe 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6701,8 +6701,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 			atomic_long_read(&parent->memory.children_low_usage)));
 }
 
-static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
-			       gfp_t gfp)
+int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+			gfp_t gfp)
 {
 	unsigned int nr_pages = thp_nr_pages(page);
 	int ret;
@@ -6722,35 +6722,6 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 	return ret;
 }
 
-/**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
- * @page: page to charge
- * @mm: mm context of the victim
- * @gfp_mask: reclaim mode
- *
- * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary. if @mm is NULL, try to
- * charge to the active memcg.
- *
- * Do not use this for pages allocated for swapin.
- *
- * Returns 0 on success. Otherwise, an error code is returned.
- */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
-{
-	struct mem_cgroup *memcg;
-	int ret;
-
-	if (mem_cgroup_disabled())
-		return 0;
-
-	memcg = get_mem_cgroup_from_mm(mm);
-	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
-	css_put(&memcg->css);
-
-	return ret;
-}
-
 /**
  * mem_cgroup_swapin_charge_page - charge a newly allocated page for swapin
  * @page: page to charge
@@ -6921,13 +6892,10 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
  *
  * Uncharge a page previously charged with mem_cgroup_charge().
  */
-void mem_cgroup_uncharge(struct page *page)
+void __mem_cgroup_uncharge(struct page *page)
 {
 	struct uncharge_gather ug;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	/* Don't touch page->lru of any random page, pre-check: */
 	if (!page_memcg(page))
 		return;
@@ -6944,14 +6912,11 @@ void mem_cgroup_uncharge(struct page *page)
  * Uncharge a list of pages previously charged with
  * mem_cgroup_charge().
  */
-void mem_cgroup_uncharge_list(struct list_head *page_list)
+void __mem_cgroup_uncharge_list(struct list_head *page_list)
 {
 	struct uncharge_gather ug;
 	struct page *page;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	uncharge_gather_clear(&ug);
 	list_for_each_entry(page, page_list, lru)
 		uncharge_page(page, &ug);
-- 
2.32.0.93.g670b81a890-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config
  2021-07-09  0:05 [PATCH 0/3] mm, memcg: Optimizations to minimize overhead when memcgs are disabled Suren Baghdasaryan
  2021-07-09  0:05 ` [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
  2021-07-09  0:05 ` [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
@ 2021-07-09  0:05 ` Suren Baghdasaryan
  2021-07-09 14:49   ` Johannes Weiner
  2021-07-09 16:05   ` Shakeel Butt
  2 siblings, 2 replies; 11+ messages in thread
From: Suren Baghdasaryan @ 2021-07-09  0:05 UTC (permalink / raw)
  To: tj
  Cc: hannes, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, alexander.h.duyck, richard.weiyang, vbabka,
	axboe, iamjoonsoo.kim, david, willy, apopple, minchan, linmiaohe,
	linux-kernel, cgroups, linux-mm, kernel-team, surenb

Inline mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
cgroup_throttle_swaprate functions to perform mem_cgroup_disabled static
key check inline before calling the main body of the function. This
minimizes the memcg overhead in the pagefault and exit_mmap paths when
memcgs are disabled using cgroup_disable=memory command-line option.
This change results in ~1% overhead reduction when running PFT test
comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
configuration on an 8-core ARM64 Android device.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/swap.h | 26 +++++++++++++++++++++++---
 mm/memcontrol.c      | 12 +++---------
 mm/swapfile.c        |  5 +----
 3 files changed, 27 insertions(+), 16 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 6f5a43251593..f30d26b0f71d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -721,7 +721,13 @@ static inline int mem_cgroup_swappiness(struct mem_cgroup *mem)
 #endif
 
 #if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-extern void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
+extern void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
+static inline  void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__cgroup_throttle_swaprate(page, gfp_mask);
+}
 #else
 static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
@@ -730,8 +736,22 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 
 #ifdef CONFIG_MEMCG_SWAP
 extern void mem_cgroup_swapout(struct page *page, swp_entry_t entry);
-extern int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
-extern void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
+extern int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
+static inline int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+{
+	if (mem_cgroup_disabled())
+		return 0;
+	return __mem_cgroup_try_charge_swap(page, entry);
+}
+
+extern void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
+static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge_swap(entry, nr_pages);
+}
+
 extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
 extern bool mem_cgroup_swap_full(struct page *page);
 #else
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index da677b55b2fe..43f3f50a4751 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7208,7 +7208,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 }
 
 /**
- * mem_cgroup_try_charge_swap - try charging swap space for a page
+ * __mem_cgroup_try_charge_swap - try charging swap space for a page
  * @page: page being added to swap
  * @entry: swap entry to charge
  *
@@ -7216,16 +7216,13 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
  *
  * Returns 0 on success, -ENOMEM on failure.
  */
-int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 {
 	unsigned int nr_pages = thp_nr_pages(page);
 	struct page_counter *counter;
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
 
-	if (mem_cgroup_disabled())
-		return 0;
-
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
 
@@ -7265,14 +7262,11 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
  * @entry: swap entry to uncharge
  * @nr_pages: the amount of swap space to uncharge
  */
-void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
+void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 {
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	id = swap_cgroup_record(entry, 0, nr_pages);
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 707fa0481bb4..04a0c83f1313 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3773,14 +3773,11 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
 }
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
+void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
 	struct swap_info_struct *si, *next;
 	int nid = page_to_nid(page);
 
-	if (mem_cgroup_disabled())
-		return;
-
 	if (!(gfp_mask & __GFP_IO))
 		return;
 
-- 
2.32.0.93.g670b81a890-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
  2021-07-09  0:05 ` [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
@ 2021-07-09 14:41   ` Johannes Weiner
  2021-07-09 16:00   ` Shakeel Butt
  1 sibling, 0 replies; 11+ messages in thread
From: Johannes Weiner @ 2021-07-09 14:41 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: tj, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, alexander.h.duyck, richard.weiyang, vbabka,
	axboe, iamjoonsoo.kim, david, willy, apopple, minchan, linmiaohe,
	linux-kernel, cgroups, linux-mm, kernel-team

On Thu, Jul 08, 2021 at 05:05:07PM -0700, Suren Baghdasaryan wrote:
> Add mem_cgroup_disabled check in vmpressure, mem_cgroup_uncharge_swap and
> cgroup_throttle_swaprate functions. This minimizes the memcg overhead in
> the pagefault and exit_mmap paths when memcgs are disabled using
> cgroup_disable=memory command-line option.
> This change results in ~2.1% overhead reduction when running PFT test
> comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
> CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
> ARM64 Android device.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-09  0:05 ` [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
@ 2021-07-09 14:48   ` Johannes Weiner
  2021-07-09 15:18     ` Suren Baghdasaryan
  0 siblings, 1 reply; 11+ messages in thread
From: Johannes Weiner @ 2021-07-09 14:48 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: tj, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, alexander.h.duyck, richard.weiyang, vbabka,
	axboe, iamjoonsoo.kim, david, willy, apopple, minchan, linmiaohe,
	linux-kernel, cgroups, linux-mm, kernel-team

On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list functions
> to perform the mem_cgroup_disabled static key check inline before
> calling the main body of the function. This minimizes the memcg overhead
> in the pagefault and exit_mmap paths when memcgs are disabled using
> cgroup_disable=memory command-line option.
> This change results in ~0.4% overhead reduction when running PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Sounds reasonable to me as well. One comment:

> @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
>  		page_counter_read(&memcg->memory);
>  }
>  
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> +
> +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> +			gfp_t gfp);
> +/**
> + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> + * @page: page to charge
> + * @mm: mm context of the victim
> + * @gfp_mask: reclaim mode
> + *
> + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> + * pages according to @gfp_mask if necessary. if @mm is NULL, try to
> + * charge to the active memcg.
> + *
> + * Do not use this for pages allocated for swapin.
> + *
> + * Returns 0 on success. Otherwise, an error code is returned.
> + */
> +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +				    gfp_t gfp_mask)
> +{
> +	struct mem_cgroup *memcg;
> +	int ret;
> +
> +	if (mem_cgroup_disabled())
> +		return 0;
> +
> +	memcg = get_mem_cgroup_from_mm(mm);
> +	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> +	css_put(&memcg->css);
> +
> +	return ret;

Why not do

int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
			gfp_t gfp_mask);

static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
				    gfp_t gfp_mask)
{
	if (mem_cgroup_disabled())
		return 0;

	return __mem_cgroup_charge(page, memcg, gfp_mask);
}

like in the other cases as well?

That would avoid inlining two separate function calls into all the
callsites...

There is an (internal) __mem_cgroup_charge() already, but you can
rename it charge_memcg().
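
Put together, the suggested shape would be roughly the following (a sketch
assembled from the snippets above and the original patch; the actual v2 is
linked later in the thread):

	/* include/linux/memcontrol.h */
	int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
				gfp_t gfp_mask);

	static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
					    gfp_t gfp_mask)
	{
		if (mem_cgroup_disabled())
			return 0;
		return __mem_cgroup_charge(page, mm, gfp_mask);
	}

	/* mm/memcontrol.c: the old internal helper, renamed */
	static int charge_memcg(struct page *page, struct mem_cgroup *memcg,
				gfp_t gfp);

	int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
				gfp_t gfp_mask)
	{
		struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
		int ret = charge_memcg(page, memcg, gfp_mask);

		css_put(&memcg->css);
		return ret;
	}

This keeps a single call into memcontrol.c on the enabled path while the
disabled path stays a single inline branch.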


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config
  2021-07-09  0:05 ` [PATCH 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
@ 2021-07-09 14:49   ` Johannes Weiner
  2021-07-09 16:05   ` Shakeel Butt
  1 sibling, 0 replies; 11+ messages in thread
From: Johannes Weiner @ 2021-07-09 14:49 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: tj, mhocko, vdavydov.dev, akpm, shakeelb, guro, songmuchun,
	shy828301, alexs, alexander.h.duyck, richard.weiyang, vbabka,
	axboe, iamjoonsoo.kim, david, willy, apopple, minchan, linmiaohe,
	linux-kernel, cgroups, linux-mm, kernel-team

On Thu, Jul 08, 2021 at 05:05:09PM -0700, Suren Baghdasaryan wrote:
> Inline mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
> cgroup_throttle_swaprate functions to perform mem_cgroup_disabled static
> key check inline before calling the main body of the function. This
> minimizes the memcg overhead in the pagefault and exit_mmap paths when
> memcgs are disabled using cgroup_disable=memory command-line option.
> This change results in ~1% overhead reduction when running PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Looks reasonable to me as well.

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-09 14:48   ` Johannes Weiner
@ 2021-07-09 15:18     ` Suren Baghdasaryan
  2021-07-09 17:17       ` Suren Baghdasaryan
  0 siblings, 1 reply; 11+ messages in thread
From: Suren Baghdasaryan @ 2021-07-09 15:18 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Tejun Heo, Michal Hocko, vdavydov.dev, Andrew Morton,
	Shakeel Butt, Roman Gushchin, songmuchun, Yang Shi, alexs,
	alexander.h.duyck, richard.weiyang, Vlastimil Babka, Jens Axboe,
	Joonsoo Kim, David Hildenbrand, Matthew Wilcox, apopple,
	Minchan Kim, linmiaohe, LKML, cgroups mailinglist, linux-mm,
	kernel-team

On Fri, Jul 9, 2021 at 7:48 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> > Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list functions
> > to perform the mem_cgroup_disabled static key check inline before
> > calling the main body of the function. This minimizes the memcg overhead
> > in the pagefault and exit_mmap paths when memcgs are disabled using
> > cgroup_disable=memory command-line option.
> > This change results in ~0.4% overhead reduction when running PFT test
> > comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> > configuration on an 8-core ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>
> Sounds reasonable to me as well. One comment:
>
> > @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> >               page_counter_read(&memcg->memory);
> >  }
> >
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> > +
> > +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > +                     gfp_t gfp);
> > +/**
> > + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > + * @page: page to charge
> > + * @mm: mm context of the victim
> > + * @gfp_mask: reclaim mode
> > + *
> > + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> > + * pages according to @gfp_mask if necessary. if @mm is NULL, try to
> > + * charge to the active memcg.
> > + *
> > + * Do not use this for pages allocated for swapin.
> > + *
> > + * Returns 0 on success. Otherwise, an error code is returned.
> > + */
> > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                                 gfp_t gfp_mask)
> > +{
> > +     struct mem_cgroup *memcg;
> > +     int ret;
> > +
> > +     if (mem_cgroup_disabled())
> > +             return 0;
> > +
> > +     memcg = get_mem_cgroup_from_mm(mm);
> > +     ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > +     css_put(&memcg->css);
> > +
> > +     return ret;
>
> Why not do
>
> int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
>                         gfp_t gfp_mask);
>
> static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
>                                     gfp_t gfp_mask)
> {
>         if (mem_cgroup_disabled())
>                 return 0;
>
>         return __mem_cgroup_charge(page, memcg, gfp_mask);
> }
>
> like in the other cases as well?
>
> That would avoid inlining two separate function calls into all the
> callsites...
>
> There is an (internal) __mem_cgroup_charge() already, but you can
> rename it charge_memcg().

Sounds good. I'll post an updated version with your suggestion.
Thanks for the review, Johannes!

>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
  2021-07-09  0:05 ` [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
  2021-07-09 14:41   ` Johannes Weiner
@ 2021-07-09 16:00   ` Shakeel Butt
  1 sibling, 0 replies; 11+ messages in thread
From: Shakeel Butt @ 2021-07-09 16:00 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Roman Gushchin, Muchun Song, Yang Shi, alexs,
	Alexander Duyck, Wei Yang, Vlastimil Babka, Jens Axboe,
	Joonsoo Kim, David Hildenbrand, Matthew Wilcox, Alistair Popple,
	Minchan Kim, Miaohe Lin, LKML, Cgroups, Linux MM, kernel-team

On Thu, Jul 8, 2021 at 5:05 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
[...]

> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
>  void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
>                 unsigned long scanned, unsigned long reclaimed)
>  {
> -       struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
> +       struct vmpressure *vmpr;
> +
> +       if (mem_cgroup_disabled())
> +               return;
> +
> +       vmpr = memcg_to_vmpressure(memcg);

I was wondering why this was not crashing but realized that we
allocate root_mem_cgroup even in the cgroup_disable=memory case.
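
For reference, memcg_to_vmpressure() is roughly the following (reproduced
from the same kernel generation, not part of this patch), which is why a
NULL or root memcg still yields a valid vmpressure pointer here:

	struct vmpressure *memcg_to_vmpressure(struct mem_cgroup *memcg)
	{
		if (!memcg)
			memcg = root_mem_cgroup;
		return &memcg->vmpressure;
	}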

Reviewed-by: Shakeel Butt <shakeelb@google.com>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config
  2021-07-09  0:05 ` [PATCH 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
  2021-07-09 14:49   ` Johannes Weiner
@ 2021-07-09 16:05   ` Shakeel Butt
  1 sibling, 0 replies; 11+ messages in thread
From: Shakeel Butt @ 2021-07-09 16:05 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Roman Gushchin, Muchun Song, Yang Shi, alexs,
	Alexander Duyck, Wei Yang, Vlastimil Babka, Jens Axboe,
	Joonsoo Kim, David Hildenbrand, Matthew Wilcox, Alistair Popple,
	Minchan Kim, Miaohe Lin, LKML, Cgroups, Linux MM, kernel-team

On Thu, Jul 8, 2021 at 5:05 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
> Inline mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
> cgroup_throttle_swaprate functions to perform mem_cgroup_disabled static
> key check inline before calling the main body of the function. This
> minimizes the memcg overhead in the pagefault and exit_mmap paths when
> memcgs are disabled using cgroup_disable=memory command-line option.
> This change results in ~1% overhead reduction when running PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
  2021-07-09 15:18     ` Suren Baghdasaryan
@ 2021-07-09 17:17       ` Suren Baghdasaryan
  0 siblings, 0 replies; 11+ messages in thread
From: Suren Baghdasaryan @ 2021-07-09 17:17 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Tejun Heo, Michal Hocko, vdavydov.dev, Andrew Morton,
	Shakeel Butt, Roman Gushchin, songmuchun, Yang Shi, alexs,
	alexander.h.duyck, richard.weiyang, Vlastimil Babka, Jens Axboe,
	Joonsoo Kim, David Hildenbrand, Matthew Wilcox, apopple,
	Minchan Kim, linmiaohe, LKML, cgroups mailinglist, linux-mm,
	kernel-team

On Fri, Jul 9, 2021 at 8:18 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Fri, Jul 9, 2021 at 7:48 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> > > Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list functions
> > > to perform the mem_cgroup_disabled static key check inline before
> > > calling the main body of the function. This minimizes the memcg overhead
> > > in the pagefault and exit_mmap paths when memcgs are disabled using
> > > cgroup_disable=memory command-line option.
> > > This change results in ~0.4% overhead reduction when running PFT test
> > > comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> > > configuration on an 8-core ARM64 Android device.
> > >
> > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> >
> > Sounds reasonable to me as well. One comment:
> >
> > > @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> > >               page_counter_read(&memcg->memory);
> > >  }
> > >
> > > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > > +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> > > +
> > > +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > > +                     gfp_t gfp);
> > > +/**
> > > + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > > + * @page: page to charge
> > > + * @mm: mm context of the victim
> > > + * @gfp_mask: reclaim mode
> > > + *
> > > + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> > > + * pages according to @gfp_mask if necessary. if @mm is NULL, try to
> > > + * charge to the active memcg.
> > > + *
> > > + * Do not use this for pages allocated for swapin.
> > > + *
> > > + * Returns 0 on success. Otherwise, an error code is returned.
> > > + */
> > > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > > +                                 gfp_t gfp_mask)
> > > +{
> > > +     struct mem_cgroup *memcg;
> > > +     int ret;
> > > +
> > > +     if (mem_cgroup_disabled())
> > > +             return 0;
> > > +
> > > +     memcg = get_mem_cgroup_from_mm(mm);
> > > +     ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > > +     css_put(&memcg->css);
> > > +
> > > +     return ret;
> >
> > Why not do
> >
> > int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> >                         gfp_t gfp_mask);
> >
> > static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> >                                     gfp_t gfp_mask)
> > {
> >         if (mem_cgroup_disabled())
> >                 return 0;
> >
> >         return __mem_cgroup_charge(page, memcg, gfp_mask);
> > }
> >
> > like in the other cases as well?
> >
> > That would avoid inlining two separate function calls into all the
> > callsites...
> >
> > There is an (internal) __mem_cgroup_charge() already, but you can
> > rename it charge_memcg().
>
> Sounds good. I'll post an updated version with your suggestion.
> Thanks for the review, Johannes!

Posted v2 just for this patch at
https://lore.kernel.org/patchwork/patch/1455550 . Please let me know
if you want me to resend the whole patchset instead of just this
patch.
>
> >

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2021-07-09 17:18 UTC | newest]

Thread overview: 11+ messages
2021-07-09  0:05 [PATCH 0/3] mm, memcg: Optimizations to minimize overhead when memcgs are disabled Suren Baghdasaryan
2021-07-09  0:05 ` [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
2021-07-09 14:41   ` Johannes Weiner
2021-07-09 16:00   ` Shakeel Butt
2021-07-09  0:05 ` [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
2021-07-09 14:48   ` Johannes Weiner
2021-07-09 15:18     ` Suren Baghdasaryan
2021-07-09 17:17       ` Suren Baghdasaryan
2021-07-09  0:05 ` [PATCH 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
2021-07-09 14:49   ` Johannes Weiner
2021-07-09 16:05   ` Shakeel Butt
