* [PATCH v5 0/2] Ignore non-LRU-based reclaim in memcg reclaim
@ 2023-04-05 18:54 Yosry Ahmed
  2023-04-05 18:54 ` [PATCH v5 1/2] mm: vmscan: ignore " Yosry Ahmed
  2023-04-05 18:54 ` [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers Yosry Ahmed
  0 siblings, 2 replies; 13+ messages in thread
From: Yosry Ahmed @ 2023-04-05 18:54 UTC (permalink / raw)
  To: Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu,
	NeilBrown, Shakeel Butt, Michal Hocko, Yu Zhao, Dave Chinner
  Cc: linux-fsdevel, linux-kernel, linux-xfs, linux-mm, Yosry Ahmed

Upon running some proactive reclaim tests using memory.reclaim, we
noticed some tests flaking where writing to memory.reclaim would be
successful even though we did not reclaim the requested amount fully.
Looking further into it, I discovered that *sometimes* we over-report
the number of reclaimed pages in memcg reclaim.
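
For context, the write handler for memory.reclaim keeps looping until the
*reported* number of reclaimed pages covers the request. A rough sketch of
that loop (modeled on memory_reclaim() in mm/memcontrol.c; details
elided/simplified):

	while (nr_reclaimed < nr_to_reclaim) {
		unsigned long reclaimed;

		reclaimed = try_to_free_mem_cgroup_pages(memcg,
					nr_to_reclaim - nr_reclaimed,
					GFP_KERNEL, reclaim_options);
		/* an over-reported value here ends the loop too early */
		if (!reclaimed && !nr_retries--)
			return -EAGAIN;
		nr_reclaimed += reclaimed;
	}
	return nbytes;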

Pages reclaimed through means other than LRU-based reclaim are tracked
through reclaim_state in struct scan_control, which is stashed in the
current task_struct. These pages are added to the number of pages
reclaimed through LRUs. For memcg reclaim, these pages generally cannot
be linked to the memcg under reclaim, and can cause an overestimated
count of reclaimed pages. This short series tries to address that.
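
Concretely, the pre-series flow looks roughly like this (simplified from
mm/vmscan.c, e.g. the try_to_free_mem_cgroup_pages() -> shrink_node()
path):

	set_task_reclaim_state(current, &sc.reclaim_state);
	...
	shrink_node_memcgs(pgdat, sc);

	/* freed slab/inode/xfs-buffer pages are folded in unconditionally */
	if (reclaim_state) {
		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
		reclaim_state->reclaimed_slab = 0;
	}
	...
	set_task_reclaim_state(current, NULL);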

Patch 1 ignores pages reclaimed outside of LRU reclaim in memcg reclaim.
The pages are uncharged anyway, so even if we end up under-reporting
reclaimed pages we will still succeed in making progress during
charging.

Patch 2 is just refactoring: it adds helpers that wrap some
operations on current->reclaim_state, and renames
reclaim_state->reclaimed_slab to reclaim_state->reclaimed. It also adds
a huge comment explaining why we ignore pages reclaimed outside of LRU
reclaim in memcg reclaim.

The patches are split this way so that patch 1 can be easily backported
without all the refactoring noise.

v4 -> v5:
- Separate the functional fix into its own patch, and squash all the
  refactoring into a single second patch for ease of backporting (Andrew
  Morton).

v4: https://lore.kernel.org/lkml/20230404001353.468224-1-yosryahmed@google.com/

Yosry Ahmed (2):
  mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
  mm: vmscan: refactor reclaim_state helpers

 fs/inode.c           |  3 +-
 fs/xfs/xfs_buf.c     |  3 +-
 include/linux/swap.h | 17 ++++++++++-
 mm/slab.c            |  3 +-
 mm/slob.c            |  6 ++--
 mm/slub.c            |  5 ++-
 mm/vmscan.c          | 73 +++++++++++++++++++++++++++++++++-----------
 7 files changed, 78 insertions(+), 32 deletions(-)

-- 
2.40.0.348.gf938b09366-goog


* [PATCH v5 1/2] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
  2023-04-05 18:54 [PATCH v5 0/2] Ignore non-LRU-based reclaim in memcg reclaim Yosry Ahmed
@ 2023-04-05 18:54 ` Yosry Ahmed
  2023-04-06 10:30   ` David Hildenbrand
  2023-04-05 18:54 ` [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers Yosry Ahmed
  1 sibling, 1 reply; 13+ messages in thread
From: Yosry Ahmed @ 2023-04-05 18:54 UTC (permalink / raw)
  To: Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu,
	NeilBrown, Shakeel Butt, Michal Hocko, Yu Zhao, Dave Chinner
  Cc: linux-fsdevel, linux-kernel, linux-xfs, linux-mm, Yosry Ahmed, stable

We keep track of different types of reclaimed pages through
reclaim_state->reclaimed_slab, and we add them to the reported number
of reclaimed pages.  For non-memcg reclaim, this makes sense. For memcg
reclaim, we have no clue if those pages are charged to the memcg under
reclaim.

Slab pages are shared by different memcgs, so a freed slab page may have
only been partially charged to the memcg under reclaim.  The same goes for
clean file pages from pruned inodes (on highmem systems) or xfs buffer
pages, there is no simple way to currently link them to the memcg under
reclaim.

Stop reporting those freed pages as reclaimed pages during memcg reclaim.
This should make the return value of writing to memory.reclaim more
accurate, and may help reduce unnecessary reclaim retries during memcg
charging.  Writing to
memory.reclaim on the root memcg is considered as cgroup_reclaim(), but
for this case we want to include any freed pages, so use the
global_reclaim() check instead of !cgroup_reclaim().

Generally, this should make the return value of
try_to_free_mem_cgroup_pages() more accurate. In some limited cases (e.g.
freed a slab page that was mostly charged to the memcg under reclaim),
the return value of try_to_free_mem_cgroup_pages() can be underestimated,
but this should be fine. The freed pages will be uncharged anyway, and we
can charge the memcg the next time around as we usually do memcg reclaim
in a retry loop.
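
For reference, the charging side retries roughly like this (a heavily
abridged sketch of try_charge() in mm/memcontrol.c):

retry:
	if (page_counter_try_charge(&memcg->memory, batch, &counter))
		goto done_restock;	/* charge succeeded */
	...
	nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages,
						    gfp_mask, reclaim_options);
	/* the margin check consults the actual counters, not nr_reclaimed */
	if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
		goto retry;
	...
	if (nr_retries--)
		goto retry;

So an underestimated return value at worst costs an extra iteration; the
actual uncharges are always observed by the page counters.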

The next patch performs some cleanups around reclaim_state and adds an
elaborate comment explaining this to the code. This patch is kept
minimal for easy backporting.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Cc: stable@vger.kernel.org
---

global_reclaim(sc) does not exist in kernels before 6.3. It can be
replaced with:
!cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup)
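
For reference, as of 6.3 these helpers are defined in mm/vmscan.c along
these lines:

	static bool cgroup_reclaim(struct scan_control *sc)
	{
		return sc->target_mem_cgroup;
	}

	/* true for global reclaim and for reclaim on the root memcg */
	static bool global_reclaim(struct scan_control *sc)
	{
		return !sc->target_mem_cgroup ||
			mem_cgroup_is_root(sc->target_mem_cgroup);
	}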

---
 mm/vmscan.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9c1c5e8b24b8f..c82bd89f90364 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5346,8 +5346,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
 		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
 			   sc->nr_reclaimed - reclaimed);
 
-	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
-	current->reclaim_state->reclaimed_slab = 0;
+	if (global_reclaim(sc)) {
+		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
+		current->reclaim_state->reclaimed_slab = 0;
+	}
 
 	return success ? MEMCG_LRU_YOUNG : 0;
 }
@@ -6472,7 +6474,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 	shrink_node_memcgs(pgdat, sc);
 
-	if (reclaim_state) {
+	if (reclaim_state && global_reclaim(sc)) {
 		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
 		reclaim_state->reclaimed_slab = 0;
 	}
-- 
2.40.0.348.gf938b09366-goog


* [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers
  2023-04-05 18:54 [PATCH v5 0/2] Ignore non-LRU-based reclaim in memcg reclaim Yosry Ahmed
  2023-04-05 18:54 ` [PATCH v5 1/2] mm: vmscan: ignore " Yosry Ahmed
@ 2023-04-05 18:54 ` Yosry Ahmed
  2023-04-06 17:31   ` Tim Chen
  2023-04-06 20:45   ` Peter Xu
  1 sibling, 2 replies; 13+ messages in thread
From: Yosry Ahmed @ 2023-04-05 18:54 UTC (permalink / raw)
  To: Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu,
	NeilBrown, Shakeel Butt, Michal Hocko, Yu Zhao, Dave Chinner
  Cc: linux-fsdevel, linux-kernel, linux-xfs, linux-mm, Yosry Ahmed

During reclaim, we keep track of pages reclaimed by means other than
LRU-based reclaim through scan_control->reclaim_state->reclaimed_slab,
which we stash a pointer to in current task_struct.

However, we keep track of more than just reclaimed slab pages through
this. We also use it for clean file pages dropped through pruned inodes,
and freed xfs buffer pages. Rename reclaimed_slab to reclaimed, and add
a helper function that wraps updating it through current, so that future
changes to this logic are contained within mm/vmscan.c.

Additionally, add a flush_reclaim_state() helper to wrap using
reclaim_state->reclaimed to update sc->nr_reclaimed, and use that
helper to add an elaborate comment about why we only do the update for
global reclaim.

Finally, move set_task_reclaim_state() next to flush_reclaim_state() so
that all reclaim_state helpers are in close proximity for easier
readability.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 fs/inode.c           |  3 +-
 fs/xfs/xfs_buf.c     |  3 +-
 include/linux/swap.h | 17 +++++++++-
 mm/slab.c            |  3 +-
 mm/slob.c            |  6 ++--
 mm/slub.c            |  5 ++-
 mm/vmscan.c          | 75 ++++++++++++++++++++++++++++++++------------
 7 files changed, 78 insertions(+), 34 deletions(-)

diff --git a/fs/inode.c b/fs/inode.c
index 4558dc2f13557..e60fcc41faf17 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -864,8 +864,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
 				__count_vm_events(KSWAPD_INODESTEAL, reap);
 			else
 				__count_vm_events(PGINODESTEAL, reap);
-			if (current->reclaim_state)
-				current->reclaim_state->reclaimed_slab += reap;
+			mm_account_reclaimed_pages(reap);
 		}
 		iput(inode);
 		spin_lock(lru_lock);
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 54c774af6e1c6..15d1e5a7c2d34 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -286,8 +286,7 @@ xfs_buf_free_pages(
 		if (bp->b_pages[i])
 			__free_page(bp->b_pages[i]);
 	}
-	if (current->reclaim_state)
-		current->reclaim_state->reclaimed_slab += bp->b_page_count;
+	mm_account_reclaimed_pages(bp->b_page_count);
 
 	if (bp->b_pages != bp->b_page_array)
 		kmem_free(bp->b_pages);
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 209a425739a9f..e131ac155fb95 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -153,13 +153,28 @@ union swap_header {
  * memory reclaim
  */
 struct reclaim_state {
-	unsigned long reclaimed_slab;
+	/* pages reclaimed outside of LRU-based reclaim */
+	unsigned long reclaimed;
 #ifdef CONFIG_LRU_GEN
 	/* per-thread mm walk data */
 	struct lru_gen_mm_walk *mm_walk;
 #endif
 };
 
+/*
+ * mm_account_reclaimed_pages(): account reclaimed pages outside of LRU-based
+ * reclaim
+ * @pages: number of pages reclaimed
+ *
+ * If the current process is undergoing a reclaim operation, increment the
+ * number of reclaimed pages by @pages.
+ */
+static inline void mm_account_reclaimed_pages(unsigned long pages)
+{
+	if (current->reclaim_state)
+		current->reclaim_state->reclaimed += pages;
+}
+
 #ifdef __KERNEL__
 
 struct address_space;
diff --git a/mm/slab.c b/mm/slab.c
index dabc2a671fc6f..64bf1de817b24 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1392,8 +1392,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
 	smp_wmb();
 	__folio_clear_slab(folio);
 
-	if (current->reclaim_state)
-		current->reclaim_state->reclaimed_slab += 1 << order;
+	mm_account_reclaimed_pages(1 << order);
 	unaccount_slab(slab, order, cachep);
 	__free_pages(&folio->page, order);
 }
diff --git a/mm/slob.c b/mm/slob.c
index fe567fcfa3a39..79cc8680c973c 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -61,7 +61,7 @@
 #include <linux/slab.h>
 
 #include <linux/mm.h>
-#include <linux/swap.h> /* struct reclaim_state */
+#include <linux/swap.h> /* mm_account_reclaimed_pages() */
 #include <linux/cache.h>
 #include <linux/init.h>
 #include <linux/export.h>
@@ -211,9 +211,7 @@ static void slob_free_pages(void *b, int order)
 {
 	struct page *sp = virt_to_page(b);
 
-	if (current->reclaim_state)
-		current->reclaim_state->reclaimed_slab += 1 << order;
-
+	mm_account_reclaimed_pages(1 << order);
 	mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
 			    -(PAGE_SIZE << order));
 	__free_pages(sp, order);
diff --git a/mm/slub.c b/mm/slub.c
index 39327e98fce34..7aa30eef82350 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -11,7 +11,7 @@
  */
 
 #include <linux/mm.h>
-#include <linux/swap.h> /* struct reclaim_state */
+#include <linux/swap.h> /* mm_account_reclaimed_pages() */
 #include <linux/module.h>
 #include <linux/bit_spinlock.h>
 #include <linux/interrupt.h>
@@ -2063,8 +2063,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	/* Make the mapping reset visible before clearing the flag */
 	smp_wmb();
 	__folio_clear_slab(folio);
-	if (current->reclaim_state)
-		current->reclaim_state->reclaimed_slab += pages;
+	mm_account_reclaimed_pages(pages);
 	unaccount_slab(slab, order, s);
 	__free_pages(&folio->page, order);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c82bd89f90364..049e39202e6ce 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -188,18 +188,6 @@ struct scan_control {
  */
 int vm_swappiness = 60;
 
-static void set_task_reclaim_state(struct task_struct *task,
-				   struct reclaim_state *rs)
-{
-	/* Check for an overwrite */
-	WARN_ON_ONCE(rs && task->reclaim_state);
-
-	/* Check for the nulling of an already-nulled member */
-	WARN_ON_ONCE(!rs && !task->reclaim_state);
-
-	task->reclaim_state = rs;
-}
-
 LIST_HEAD(shrinker_list);
 DECLARE_RWSEM(shrinker_rwsem);
 
@@ -511,6 +499,59 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 }
 #endif
 
+static void set_task_reclaim_state(struct task_struct *task,
+				   struct reclaim_state *rs)
+{
+	/* Check for an overwrite */
+	WARN_ON_ONCE(rs && task->reclaim_state);
+
+	/* Check for the nulling of an already-nulled member */
+	WARN_ON_ONCE(!rs && !task->reclaim_state);
+
+	task->reclaim_state = rs;
+}
+
+/*
+ * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
+ * scan_control->nr_reclaimed.
+ */
+static void flush_reclaim_state(struct scan_control *sc,
+				struct reclaim_state *rs)
+{
+	/*
+	 * Currently, reclaim_state->reclaimed includes three types of pages
+	 * freed outside of vmscan:
+	 * (1) Slab pages.
+	 * (2) Clean file pages from pruned inodes.
+	 * (3) XFS freed buffer pages.
+	 *
+	 * For all of these cases, we have no way of finding out whether these
+	 * pages were related to the memcg under reclaim. For example, a freed
+	 * slab page could have had only a single object charged to the memcg
+	 * under reclaim. Also, populated inodes are not on shrinker LRUs
+	 * anymore except on highmem systems.
+	 *
+	 * Instead of over-reporting the reclaimed pages in a memcg reclaim,
+	 * only count such pages in global reclaim. This prevents unnecessary
+	 * retries during memcg charging and false positives from proactive
+	 * reclaim (memory.reclaim).
+	 *
+	 * For uncommon cases where the freed pages were actually significantly
+	 * charged to the memcg under reclaim, and we end up under-reporting, it
+	 * should be fine. The freed pages will be uncharged anyway, even if
+	 * they are not reported properly, and we will be able to make forward
+	 * progress in charging (which is usually in a retry loop).
+	 *
+	 * We can go one step further, and report the uncharged objcg pages in
+	 * memcg reclaim, to make reporting more accurate and reduce
+	 * under-reporting, but it's probably not worth the complexity for now.
+	 */
+	if (rs && global_reclaim(sc)) {
+		sc->nr_reclaimed += rs->reclaimed;
+		rs->reclaimed = 0;
+	}
+}
+
 static long xchg_nr_deferred(struct shrinker *shrinker,
 			     struct shrink_control *sc)
 {
@@ -5346,10 +5387,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
 		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
 			   sc->nr_reclaimed - reclaimed);
 
-	if (global_reclaim(sc)) {
-		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
-		current->reclaim_state->reclaimed_slab = 0;
-	}
+	flush_reclaim_state(sc, current->reclaim_state);
 
 	return success ? MEMCG_LRU_YOUNG : 0;
 }
@@ -6474,10 +6512,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 	shrink_node_memcgs(pgdat, sc);
 
-	if (reclaim_state && global_reclaim(sc)) {
-		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
-		reclaim_state->reclaimed_slab = 0;
-	}
+	flush_reclaim_state(sc, reclaim_state);
 
 	/* Record the subtree's reclaim efficiency */
 	if (!sc->proactive)
-- 
2.40.0.348.gf938b09366-goog


* Re: [PATCH v5 1/2] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
  2023-04-05 18:54 ` [PATCH v5 1/2] mm: vmscan: ignore " Yosry Ahmed
@ 2023-04-06 10:30   ` David Hildenbrand
  2023-04-06 14:07     ` Yosry Ahmed
  2023-04-06 22:25     ` Andrew Morton
  0 siblings, 2 replies; 13+ messages in thread
From: David Hildenbrand @ 2023-04-06 10:30 UTC (permalink / raw)
  To: Yosry Ahmed, Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, Johannes Weiner, Peter Xu, NeilBrown, Shakeel Butt,
	Michal Hocko, Yu Zhao, Dave Chinner
  Cc: linux-fsdevel, linux-kernel, linux-xfs, linux-mm, stable

On 05.04.23 20:54, Yosry Ahmed wrote:
> We keep track of different types of reclaimed pages through
> reclaim_state->reclaimed_slab, and we add them to the reported number
> of reclaimed pages.  For non-memcg reclaim, this makes sense. For memcg
> reclaim, we have no clue if those pages are charged to the memcg under
> reclaim.
> 
> Slab pages are shared by different memcgs, so a freed slab page may have
> only been partially charged to the memcg under reclaim.  The same goes for
> clean file pages from pruned inodes (on highmem systems) or xfs buffer
> pages, there is no simple way to currently link them to the memcg under
> reclaim.
> 
> Stop reporting those freed pages as reclaimed pages during memcg reclaim.
> This should make the return value of writing to memory.reclaim more
> accurate, and may help reduce unnecessary reclaim retries during memcg
> charging.  Writing to
> memory.reclaim on the root memcg is considered as cgroup_reclaim(), but
> for this case we want to include any freed pages, so use the
> global_reclaim() check instead of !cgroup_reclaim().
> 
> Generally, this should make the return value of
> try_to_free_mem_cgroup_pages() more accurate. In some limited cases (e.g.
> freed a slab page that was mostly charged to the memcg under reclaim),
> the return value of try_to_free_mem_cgroup_pages() can be underestimated,
> but this should be fine. The freed pages will be uncharged anyway, and we

Can't we end up in extreme situations where 
try_to_free_mem_cgroup_pages() returns close to 0 although a huge amount 
of memory for that cgroup was freed up.

Can you extend on why "this should be fine" ?

I suspect that overestimation might be worse than underestimation. (see 
my comment proposal below)

> can charge the memcg the next time around as we usually do memcg reclaim
> in a retry loop.
> 
> The next patch performs some cleanups around reclaim_state and adds an
> elaborate comment explaining this to the code. This patch is kept
> minimal for easy backporting.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> Cc: stable@vger.kernel.org

Fixes: ?

Otherwise it's hard to judge how far to backport this.

> ---
> 
> global_reclaim(sc) does not exist in kernels before 6.3. It can be
> replaced with:
> !cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup)
> 
> ---
>   mm/vmscan.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9c1c5e8b24b8f..c82bd89f90364 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -5346,8 +5346,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>   		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
>   			   sc->nr_reclaimed - reclaimed);
>   
> -	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> -	current->reclaim_state->reclaimed_slab = 0;

Worth adding a comment like

/*
  * Slab pages cannot universally be linked to a single memcg. So only
  * account them as reclaimed during global reclaim. Note that we might
  * underestimate the amount of memory reclaimed (but won't overestimate
  * it).
  */

but ...

> +	if (global_reclaim(sc)) {
> +		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> +		current->reclaim_state->reclaimed_slab = 0;
> +	}
>   
>   	return success ? MEMCG_LRU_YOUNG : 0;
>   }
> @@ -6472,7 +6474,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>   
>   	shrink_node_memcgs(pgdat, sc);
>   

... do we want to factor the add+clear into a simple helper such that we 
can have above comment there?

static void cond_account_reclaimed_slab(reclaim_state, sc)
{	
	/*
  	 * Slab pages cannot universally be linked to a single memcg. So
	 * only account them as reclaimed during global reclaim. Note
	 * that we might underestimate the amount of memory reclaimed
	 * (but won't overestimate it).
	 */
	if (global_reclaim(sc)) {
		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
		reclaim_state->reclaimed_slab = 0;
	}
}

Yes, effectively a couple LOC more, but still straightforward for a
stable backport.

> -	if (reclaim_state) {
> +	if (reclaim_state && global_reclaim(sc)) {
>   		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
>   		reclaim_state->reclaimed_slab = 0;
>   	}

-- 
Thanks,

David / dhildenb


* Re: [PATCH v5 1/2] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
  2023-04-06 10:30   ` David Hildenbrand
@ 2023-04-06 14:07     ` Yosry Ahmed
  2023-04-06 17:49       ` David Hildenbrand
  2023-04-06 22:25     ` Andrew Morton
  1 sibling, 1 reply; 13+ messages in thread
From: Yosry Ahmed @ 2023-04-06 14:07 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, Johannes Weiner, Peter Xu, NeilBrown, Shakeel Butt,
	Michal Hocko, Yu Zhao, Dave Chinner, linux-fsdevel, linux-kernel,
	linux-xfs, linux-mm, stable

Thanks for taking a look, David!

On Thu, Apr 6, 2023 at 3:31 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 05.04.23 20:54, Yosry Ahmed wrote:
> > We keep track of different types of reclaimed pages through
> > reclaim_state->reclaimed_slab, and we add them to the reported number
> > of reclaimed pages.  For non-memcg reclaim, this makes sense. For memcg
> > reclaim, we have no clue if those pages are charged to the memcg under
> > reclaim.
> >
> > Slab pages are shared by different memcgs, so a freed slab page may have
> > only been partially charged to the memcg under reclaim.  The same goes for
> > clean file pages from pruned inodes (on highmem systems) or xfs buffer
> > pages, there is no simple way to currently link them to the memcg under
> > reclaim.
> >
> > Stop reporting those freed pages as reclaimed pages during memcg reclaim.
> > This should make the return value of writing to memory.reclaim more
> > accurate, and may help reduce unnecessary reclaim retries during memcg
> > charging.  Writing to
> > memory.reclaim on the root memcg is considered as cgroup_reclaim(), but
> > for this case we want to include any freed pages, so use the
> > global_reclaim() check instead of !cgroup_reclaim().
> >
> > Generally, this should make the return value of
> > try_to_free_mem_cgroup_pages() more accurate. In some limited cases (e.g.
> > freed a slab page that was mostly charged to the memcg under reclaim),
> > the return value of try_to_free_mem_cgroup_pages() can be underestimated,
> > but this should be fine. The freed pages will be uncharged anyway, and we
>
> Can't we end up in extreme situations where
> try_to_free_mem_cgroup_pages() returns close to 0 although a huge amount
> of memory for that cgroup was freed up.
>
> Can you extend on why "this should be fine" ?
>
> I suspect that overestimation might be worse than underestimation. (see
> my comment proposal below)

In such extreme scenarios even though try_to_free_mem_cgroup_pages()
would return an underestimated value, the freed memory for the cgroup
will be uncharged. try_charge() (and most callers of
try_to_free_mem_cgroup_pages()) do so in a retry loop, so even if
try_to_free_mem_cgroup_pages() returns an underestimated value
charging will succeed the next time around.

The only case where this might be a problem is if it happens in the
final retry, but I guess we need to be *really* unlucky for this
extreme scenario to happen. One could argue that if we reach such a
situation the cgroup will probably OOM soon anyway.

>
> > can charge the memcg the next time around as we usually do memcg reclaim
> > in a retry loop.
> >
> > The next patch performs some cleanups around reclaim_state and adds an
> > elaborate comment explaining this to the code. This patch is kept
> > minimal for easy backporting.
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> > Cc: stable@vger.kernel.org
>
> Fixes: ?
>
> Otherwise it's hard to judge how far to backport this.

It's hard to judge. The issue has been there for a while, but
memory.reclaim just made it more user visible. I think we can
attribute it to per-object slab accounting, because before that any
freed slab pages in cgroup reclaim would be entirely charged to that
cgroup.

Although in all fairness, other types of freed pages that use
reclaim_state->reclaimed_slab and cannot be attributed to the cgroup under
reclaim have been there before that. I guess slab is the most
significant among them tho, so for the purposes of backporting I
guess:

Fixes: f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
instead of pages")

>
> > ---
> >
> > global_reclaim(sc) does not exist in kernels before 6.3. It can be
> > replaced with:
> > !cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup)
> >
> > ---
> >   mm/vmscan.c | 8 +++++---
> >   1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 9c1c5e8b24b8f..c82bd89f90364 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -5346,8 +5346,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
> >               vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
> >                          sc->nr_reclaimed - reclaimed);
> >
> > -     sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> > -     current->reclaim_state->reclaimed_slab = 0;
>
> Worth adding a comment like
>
> /*
>   * Slab pages cannot universally be linked to a single memcg. So only
>   * account them as reclaimed during global reclaim. Note that we might
>   * underestimate the amount of memory reclaimed (but won't overestimate
>   * it).
>   */
>
> but ...
>
> > +     if (global_reclaim(sc)) {
> > +             sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> > +             current->reclaim_state->reclaimed_slab = 0;
> > +     }
> >
> >       return success ? MEMCG_LRU_YOUNG : 0;
> >   }
> > @@ -6472,7 +6474,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> >
> >       shrink_node_memcgs(pgdat, sc);
> >
>
> ... do we want to factor the add+clear into a simple helper such that we
> can have above comment there?
>
> static void cond_account_reclaimed_slab(reclaim_state, sc)
> {
>         /*
>          * Slab pages cannot universally be linked to a single memcg. So
>          * only account them as reclaimed during global reclaim. Note
>          * that we might underestimate the amount of memory reclaimed
>          * (but won't overestimate it).
>          */
>         if (global_reclaim(sc)) {
>                 sc->nr_reclaimed += reclaim_state->reclaimed_slab;
>                 reclaim_state->reclaimed_slab = 0;
>         }
> }
>
> Yes, effectively a couple LOC more, but still straightforward for a
> stable backport.

The next patch in the series performs some refactoring and cleanups,
among which we add a helper called flush_reclaim_state() that does
exactly that and contains a sizable comment. I left this outside of
this patch in v5 to make the effective change as small as possible for
backporting. Looks like it can be confusing tho without the comment.

How about I pull this part to this patch as well for v6?

>
> > -     if (reclaim_state) {
> > +     if (reclaim_state && global_reclaim(sc)) {
> >               sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> >               reclaim_state->reclaimed_slab = 0;
> >       }
>
> --
> Thanks,
>
> David / dhildenb
>

* Re: [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers
  2023-04-05 18:54 ` [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers Yosry Ahmed
@ 2023-04-06 17:31   ` Tim Chen
  2023-04-06 17:43     ` Yosry Ahmed
  2023-04-06 19:42     ` Matthew Wilcox
  2023-04-06 20:45   ` Peter Xu
  1 sibling, 2 replies; 13+ messages in thread
From: Tim Chen @ 2023-04-06 17:31 UTC (permalink / raw)
  To: Yosry Ahmed, Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu,
	NeilBrown, Shakeel Butt, Michal Hocko, Yu Zhao, Dave Chinner
  Cc: linux-fsdevel, linux-kernel, linux-xfs, linux-mm

On Wed, 2023-04-05 at 18:54 +0000, Yosry Ahmed wrote:
> During reclaim, we keep track of pages reclaimed by means other than
> LRU-based reclaim through scan_control->reclaim_state->reclaimed_slab,
> which we stash a pointer to in current task_struct.
> 
> However, we keep track of more than just reclaimed slab pages through
> this. We also use it for clean file pages dropped through pruned inodes,
> and freed xfs buffer pages. Rename reclaimed_slab to reclaimed, and add
> a helper function that wraps updating it through current, so that future
> changes to this logic are contained within mm/vmscan.c.
> 
> Additionally, add a flush_reclaim_state() helper to wrap using
> reclaim_state->reclaimed to update sc->nr_reclaimed, and use that
> helper to add an elaborate comment about why we only do the update for
> global reclaim.
> 
> Finally, move set_task_reclaim_state() next to flush_reclaim_state() so
> that all reclaim_state helpers are in close proximity for easier
> readability.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> ---
>  fs/inode.c           |  3 +-
>  fs/xfs/xfs_buf.c     |  3 +-
>  include/linux/swap.h | 17 +++++++++-
>  mm/slab.c            |  3 +-
>  mm/slob.c            |  6 ++--
>  mm/slub.c            |  5 ++-
>  mm/vmscan.c          | 75 ++++++++++++++++++++++++++++++++------------
>  7 files changed, 78 insertions(+), 34 deletions(-)
> 
> diff --git a/fs/inode.c b/fs/inode.c
> index 4558dc2f13557..e60fcc41faf17 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -864,8 +864,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
>  				__count_vm_events(KSWAPD_INODESTEAL, reap);
>  			else
>  				__count_vm_events(PGINODESTEAL, reap);
> -			if (current->reclaim_state)
> -				current->reclaim_state->reclaimed_slab += reap;
> +			mm_account_reclaimed_pages(reap);
>  		}
>  		iput(inode);
>  		spin_lock(lru_lock);
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 54c774af6e1c6..15d1e5a7c2d34 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -286,8 +286,7 @@ xfs_buf_free_pages(
>  		if (bp->b_pages[i])
>  			__free_page(bp->b_pages[i]);
>  	}
> -	if (current->reclaim_state)
> -		current->reclaim_state->reclaimed_slab += bp->b_page_count;
> +	mm_account_reclaimed_pages(bp->b_page_count);
>  
>  	if (bp->b_pages != bp->b_page_array)
>  		kmem_free(bp->b_pages);
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 209a425739a9f..e131ac155fb95 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -153,13 +153,28 @@ union swap_header {
>   * memory reclaim
>   */
>  struct reclaim_state {
> -	unsigned long reclaimed_slab;
> +	/* pages reclaimed outside of LRU-based reclaim */
> +	unsigned long reclaimed;
>  #ifdef CONFIG_LRU_GEN
>  	/* per-thread mm walk data */
>  	struct lru_gen_mm_walk *mm_walk;
>  #endif
>  };
>  
> +/*
> + * mm_account_reclaimed_pages(): account reclaimed pages outside of LRU-based
> + * reclaim
> + * @pages: number of pages reclaimed
> + *
> + * If the current process is undergoing a reclaim operation, increment the
> + * number of reclaimed pages by @pages.
> + */
> +static inline void mm_account_reclaimed_pages(unsigned long pages)
> +{
> +	if (current->reclaim_state)
> +		current->reclaim_state->reclaimed += pages;
> +}
> +
>  #ifdef __KERNEL__
>  
>  struct address_space;
> diff --git a/mm/slab.c b/mm/slab.c
> index dabc2a671fc6f..64bf1de817b24 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1392,8 +1392,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
>  	smp_wmb();
>  	__folio_clear_slab(folio);
>  
> -	if (current->reclaim_state)
> -		current->reclaim_state->reclaimed_slab += 1 << order;
> +	mm_account_reclaimed_pages(1 << order);
>  	unaccount_slab(slab, order, cachep);
>  	__free_pages(&folio->page, order);
>  }
> diff --git a/mm/slob.c b/mm/slob.c
> index fe567fcfa3a39..79cc8680c973c 100644
> --- a/mm/slob.c
> +++ b/mm/slob.c
> @@ -61,7 +61,7 @@
>  #include <linux/slab.h>
>  
>  #include <linux/mm.h>
> -#include <linux/swap.h> /* struct reclaim_state */
> +#include <linux/swap.h> /* mm_account_reclaimed_pages() */
>  #include <linux/cache.h>
>  #include <linux/init.h>
>  #include <linux/export.h>
> @@ -211,9 +211,7 @@ static void slob_free_pages(void *b, int order)
>  {
>  	struct page *sp = virt_to_page(b);
>  
> -	if (current->reclaim_state)
> -		current->reclaim_state->reclaimed_slab += 1 << order;
> -
> +	mm_account_reclaimed_pages(1 << order);
>  	mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
>  			    -(PAGE_SIZE << order));
>  	__free_pages(sp, order);
> diff --git a/mm/slub.c b/mm/slub.c
> index 39327e98fce34..7aa30eef82350 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -11,7 +11,7 @@
>   */
>  
>  #include <linux/mm.h>
> -#include <linux/swap.h> /* struct reclaim_state */
> +#include <linux/swap.h> /* mm_account_reclaimed_pages() */
>  #include <linux/module.h>
>  #include <linux/bit_spinlock.h>
>  #include <linux/interrupt.h>
> @@ -2063,8 +2063,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
>  	/* Make the mapping reset visible before clearing the flag */
>  	smp_wmb();
>  	__folio_clear_slab(folio);
> -	if (current->reclaim_state)
> -		current->reclaim_state->reclaimed_slab += pages;
> +	mm_account_reclaimed_pages(pages);
>  	unaccount_slab(slab, order, s);
>  	__free_pages(&folio->page, order);
>  }
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c82bd89f90364..049e39202e6ce 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -188,18 +188,6 @@ struct scan_control {
>   */
>  int vm_swappiness = 60;
>  
> -static void set_task_reclaim_state(struct task_struct *task,
> -				   struct reclaim_state *rs)
> -{
> -	/* Check for an overwrite */
> -	WARN_ON_ONCE(rs && task->reclaim_state);
> -
> -	/* Check for the nulling of an already-nulled member */
> -	WARN_ON_ONCE(!rs && !task->reclaim_state);
> -
> -	task->reclaim_state = rs;
> -}
> -
>  LIST_HEAD(shrinker_list);
>  DECLARE_RWSEM(shrinker_rwsem);
>  
> @@ -511,6 +499,59 @@ static bool writeback_throttling_sane(struct scan_control *sc)
>  }
>  #endif
>  
> +static void set_task_reclaim_state(struct task_struct *task,
> +				   struct reclaim_state *rs)
> +{
> +	/* Check for an overwrite */
> +	WARN_ON_ONCE(rs && task->reclaim_state);
> +
> +	/* Check for the nulling of an already-nulled member */
> +	WARN_ON_ONCE(!rs && !task->reclaim_state);
> +
> +	task->reclaim_state = rs;
> +}
> +
> +/*
> + * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
> + * scan_control->nr_reclaimed.
> + */
> +static void flush_reclaim_state(struct scan_control *sc,
> +				struct reclaim_state *rs)
> +{
> +	/*
> +	 * Currently, reclaim_state->reclaimed includes three types of pages
> +	 * freed outside of vmscan:
> +	 * (1) Slab pages.
> +	 * (2) Clean file pages from pruned inodes.
> +	 * (3) XFS freed buffer pages.
> +	 *
> +	 * For all of these cases, we have no way of finding out whether these
> +	 * pages were related to the memcg under reclaim. For example, a freed
> +	 * slab page could have had only a single object charged to the memcg

Minor nits:
s/could have had/could have

> +	 * under reclaim. Also, populated inodes are not on shrinker LRUs
> +	 * anymore except on highmem systems.
> +	 *
> +	 * Instead of over-reporting the reclaimed pages in a memcg reclaim,
> +	 * only count such pages in global reclaim. This prevents unnecessary

May be clearer to say:
This prevents under-reclaiming the target memcg, and unnecessary

> +	 * retries during memcg charging and false positives from proactive
> +	 * reclaim (memory.reclaim).
> +	 *

Tim

* Re: [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers
  2023-04-06 17:31   ` Tim Chen
@ 2023-04-06 17:43     ` Yosry Ahmed
  2023-04-06 19:42     ` Matthew Wilcox
  1 sibling, 0 replies; 13+ messages in thread
From: Yosry Ahmed @ 2023-04-06 17:43 UTC (permalink / raw)
  To: Tim Chen
  Cc: Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu,
	NeilBrown, Shakeel Butt, Michal Hocko, Yu Zhao, Dave Chinner,
	linux-fsdevel, linux-kernel, linux-xfs, linux-mm

On Thu, Apr 6, 2023 at 10:32 AM Tim Chen <tim.c.chen@linux.intel.com> wrote:
>
> On Wed, 2023-04-05 at 18:54 +0000, Yosry Ahmed wrote:
> > During reclaim, we keep track of pages reclaimed by means other than
> > LRU-based reclaim through scan_control->reclaim_state->reclaimed_slab,
> > which we stash a pointer to in current task_struct.
> >
> > However, we keep track of more than just reclaimed slab pages through
> > this. We also use it for clean file pages dropped through pruned inodes,
> > and freed xfs buffer pages. Rename reclaimed_slab to reclaimed, and add
> > a helper function that wraps updating it through current, so that future
> > changes to this logic are contained within mm/vmscan.c.
> >
> > Additionally, add a flush_reclaim_state() helper to wrap using
> > reclaim_state->reclaimed to update sc->nr_reclaimed, and use that
> > helper to add an elaborate comment about why we only do the update for
> > global reclaim.
> >
> > Finally, move set_task_reclaim_state() next to flush_reclaim_state() so
> > that all reclaim_state helpers are in close proximity for easier
> > readability.
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> > ---
> >  fs/inode.c           |  3 +-
> >  fs/xfs/xfs_buf.c     |  3 +-
> >  include/linux/swap.h | 17 +++++++++-
> >  mm/slab.c            |  3 +-
> >  mm/slob.c            |  6 ++--
> >  mm/slub.c            |  5 ++-
> >  mm/vmscan.c          | 75 ++++++++++++++++++++++++++++++++------------
> >  7 files changed, 78 insertions(+), 34 deletions(-)
> >
> > diff --git a/fs/inode.c b/fs/inode.c
> > index 4558dc2f13557..e60fcc41faf17 100644
> > --- a/fs/inode.c
> > +++ b/fs/inode.c
> > @@ -864,8 +864,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
> >                               __count_vm_events(KSWAPD_INODESTEAL, reap);
> >                       else
> >                               __count_vm_events(PGINODESTEAL, reap);
> > -                     if (current->reclaim_state)
> > -                             current->reclaim_state->reclaimed_slab += reap;
> > +                     mm_account_reclaimed_pages(reap);
> >               }
> >               iput(inode);
> >               spin_lock(lru_lock);
> > diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> > index 54c774af6e1c6..15d1e5a7c2d34 100644
> > --- a/fs/xfs/xfs_buf.c
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -286,8 +286,7 @@ xfs_buf_free_pages(
> >               if (bp->b_pages[i])
> >                       __free_page(bp->b_pages[i]);
> >       }
> > -     if (current->reclaim_state)
> > -             current->reclaim_state->reclaimed_slab += bp->b_page_count;
> > +     mm_account_reclaimed_pages(bp->b_page_count);
> >
> >       if (bp->b_pages != bp->b_page_array)
> >               kmem_free(bp->b_pages);
> > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > index 209a425739a9f..e131ac155fb95 100644
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -153,13 +153,28 @@ union swap_header {
> >   * memory reclaim
> >   */
> >  struct reclaim_state {
> > -     unsigned long reclaimed_slab;
> > +     /* pages reclaimed outside of LRU-based reclaim */
> > +     unsigned long reclaimed;
> >  #ifdef CONFIG_LRU_GEN
> >       /* per-thread mm walk data */
> >       struct lru_gen_mm_walk *mm_walk;
> >  #endif
> >  };
> >
> > +/*
> > + * mm_account_reclaimed_pages(): account reclaimed pages outside of LRU-based
> > + * reclaim
> > + * @pages: number of pages reclaimed
> > + *
> > + * If the current process is undergoing a reclaim operation, increment the
> > + * number of reclaimed pages by @pages.
> > + */
> > +static inline void mm_account_reclaimed_pages(unsigned long pages)
> > +{
> > +     if (current->reclaim_state)
> > +             current->reclaim_state->reclaimed += pages;
> > +}
> > +
> >  #ifdef __KERNEL__
> >
> >  struct address_space;
> > diff --git a/mm/slab.c b/mm/slab.c
> > index dabc2a671fc6f..64bf1de817b24 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -1392,8 +1392,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
> >       smp_wmb();
> >       __folio_clear_slab(folio);
> >
> > -     if (current->reclaim_state)
> > -             current->reclaim_state->reclaimed_slab += 1 << order;
> > +     mm_account_reclaimed_pages(1 << order);
> >       unaccount_slab(slab, order, cachep);
> >       __free_pages(&folio->page, order);
> >  }
> > diff --git a/mm/slob.c b/mm/slob.c
> > index fe567fcfa3a39..79cc8680c973c 100644
> > --- a/mm/slob.c
> > +++ b/mm/slob.c
> > @@ -61,7 +61,7 @@
> >  #include <linux/slab.h>
> >
> >  #include <linux/mm.h>
> > -#include <linux/swap.h> /* struct reclaim_state */
> > +#include <linux/swap.h> /* mm_account_reclaimed_pages() */
> >  #include <linux/cache.h>
> >  #include <linux/init.h>
> >  #include <linux/export.h>
> > @@ -211,9 +211,7 @@ static void slob_free_pages(void *b, int order)
> >  {
> >       struct page *sp = virt_to_page(b);
> >
> > -     if (current->reclaim_state)
> > -             current->reclaim_state->reclaimed_slab += 1 << order;
> > -
> > +     mm_account_reclaimed_pages(1 << order);
> >       mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
> >                           -(PAGE_SIZE << order));
> >       __free_pages(sp, order);
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 39327e98fce34..7aa30eef82350 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -11,7 +11,7 @@
> >   */
> >
> >  #include <linux/mm.h>
> > -#include <linux/swap.h> /* struct reclaim_state */
> > +#include <linux/swap.h> /* mm_account_reclaimed_pages() */
> >  #include <linux/module.h>
> >  #include <linux/bit_spinlock.h>
> >  #include <linux/interrupt.h>
> > @@ -2063,8 +2063,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
> >       /* Make the mapping reset visible before clearing the flag */
> >       smp_wmb();
> >       __folio_clear_slab(folio);
> > -     if (current->reclaim_state)
> > -             current->reclaim_state->reclaimed_slab += pages;
> > +     mm_account_reclaimed_pages(pages);
> >       unaccount_slab(slab, order, s);
> >       __free_pages(&folio->page, order);
> >  }
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index c82bd89f90364..049e39202e6ce 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -188,18 +188,6 @@ struct scan_control {
> >   */
> >  int vm_swappiness = 60;
> >
> > -static void set_task_reclaim_state(struct task_struct *task,
> > -                                struct reclaim_state *rs)
> > -{
> > -     /* Check for an overwrite */
> > -     WARN_ON_ONCE(rs && task->reclaim_state);
> > -
> > -     /* Check for the nulling of an already-nulled member */
> > -     WARN_ON_ONCE(!rs && !task->reclaim_state);
> > -
> > -     task->reclaim_state = rs;
> > -}
> > -
> >  LIST_HEAD(shrinker_list);
> >  DECLARE_RWSEM(shrinker_rwsem);
> >
> > @@ -511,6 +499,59 @@ static bool writeback_throttling_sane(struct scan_control *sc)
> >  }
> >  #endif
> >
> > +static void set_task_reclaim_state(struct task_struct *task,
> > +                                struct reclaim_state *rs)
> > +{
> > +     /* Check for an overwrite */
> > +     WARN_ON_ONCE(rs && task->reclaim_state);
> > +
> > +     /* Check for the nulling of an already-nulled member */
> > +     WARN_ON_ONCE(!rs && !task->reclaim_state);
> > +
> > +     task->reclaim_state = rs;
> > +}
> > +
> > +/*
> > + * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
> > + * scan_control->nr_reclaimed.
> > + */
> > +static void flush_reclaim_state(struct scan_control *sc,
> > +                             struct reclaim_state *rs)
> > +{
> > +     /*
> > +      * Currently, reclaim_state->reclaimed includes three types of pages
> > +      * freed outside of vmscan:
> > +      * (1) Slab pages.
> > +      * (2) Clean file pages from pruned inodes.
> > +      * (3) XFS freed buffer pages.
> > +      *
> > +      * For all of these cases, we have no way of finding out whether these
> > +      * pages were related to the memcg under reclaim. For example, a freed
> > +      * slab page could have had only a single object charged to the memcg
>
> Minor nits:
> s/could have had/could have
>
> > +      * under reclaim. Also, populated inodes are not on shrinker LRUs
> > +      * anymore except on highmem systems.
> > +      *
> > +      * Instead of over-reporting the reclaimed pages in a memcg reclaim,
> > +      * only count such pages in global reclaim. This prevents unnecessary
>
> May be clearer to say:
> This prevents under-reclaiming the target memcg, and unnecessary

Thanks, will rephrase for the next version!

>
> > +      * retries during memcg charging and false positives from proactive
> > +      * reclaim (memory.reclaim).
> > +      *
>
> Tim
>

* Re: [PATCH v5 1/2] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
  2023-04-06 14:07     ` Yosry Ahmed
@ 2023-04-06 17:49       ` David Hildenbrand
  2023-04-06 17:52         ` Yosry Ahmed
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2023-04-06 17:49 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, Johannes Weiner, Peter Xu, NeilBrown, Shakeel Butt,
	Michal Hocko, Yu Zhao, Dave Chinner, linux-fsdevel, linux-kernel,
	linux-xfs, linux-mm, stable

On 06.04.23 16:07, Yosry Ahmed wrote:
> Thanks for taking a look, David!
> 
> On Thu, Apr 6, 2023 at 3:31 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 05.04.23 20:54, Yosry Ahmed wrote:
>>> We keep track of different types of reclaimed pages through
>>> reclaim_state->reclaimed_slab, and we add them to the reported number
>>> of reclaimed pages.  For non-memcg reclaim, this makes sense. For memcg
>>> reclaim, we have no clue if those pages are charged to the memcg under
>>> reclaim.
>>>
>>> Slab pages are shared by different memcgs, so a freed slab page may have
>>> only been partially charged to the memcg under reclaim.  The same goes for
>>> clean file pages from pruned inodes (on highmem systems) or xfs buffer
>>> pages, there is no simple way to currently link them to the memcg under
>>> reclaim.
>>>
>>> Stop reporting those freed pages as reclaimed pages during memcg reclaim.
>>> This should make the return value of writing to memory.reclaim more
>>> accurate, and may help reduce unnecessary reclaim retries during memcg
>>> charging.  Writing to
>>> memory.reclaim on the root memcg is considered as cgroup_reclaim(), but
>>> for this case we want to include any freed pages, so use the
>>> global_reclaim() check instead of !cgroup_reclaim().
>>>
>>> Generally, this should make the return value of
>>> try_to_free_mem_cgroup_pages() more accurate. In some limited cases (e.g.
>>> freed a slab page that was mostly charged to the memcg under reclaim),
>>> the return value of try_to_free_mem_cgroup_pages() can be underestimated,
>>> but this should be fine. The freed pages will be uncharged anyway, and we
>>
>> Can't we end up in extreme situations where
>> try_to_free_mem_cgroup_pages() returns close to 0 although a huge amount
>> of memory for that cgroup was freed up.
>>
>> Can you extend on why "this should be fine" ?
>>
>> I suspect that overestimation might be worse than underestimation. (see
>> my comment proposal below)
> 
> In such extreme scenarios even though try_to_free_mem_cgroup_pages()
> would return an underestimated value, the freed memory for the cgroup
> will be uncharged. try_charge() (and most callers of
> try_to_free_mem_cgroup_pages()) do so in a retry loop, so even if
> try_to_free_mem_cgroup_pages() returns an underestimated value
> charging will succeed the next time around.
> 
> The only case where this might be a problem is if it happens in the
> final retry, but I guess we need to be *really* unlucky for this
> extreme scenario to happen. One could argue that if we reach such a
> situation the cgroup will probably OOM soon anyway.
> 
>>
>>> can charge the memcg the next time around as we usually do memcg reclaim
>>> in a retry loop.
>>>
>>> The next patch performs some cleanups around reclaim_state and adds an
>>> elaborate comment explaining this to the code. This patch is kept
>>> minimal for easy backporting.
>>>
>>> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
>>> Cc: stable@vger.kernel.org
>>
>> Fixes: ?
>>
>> Otherwise it's hard to judge how far to backport this.
> 
> It's hard to judge. The issue has been there for a while, but
> memory.reclaim just made it more user visible. I think we can
> attribute it to per-object slab accounting, because before that any
> freed slab pages in cgroup reclaim would be entirely charged to that
> cgroup.
> 
> Although in all fairness, other types of freed pages that use
> reclaim_state->reclaimed_slab and cannot be attributed to the cgroup under
> reclaim have been there before that. I guess slab is the most
> significant among them tho, so for the purposes of backporting I
> guess:
> 
> Fixes: f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
> instead of pages")
> 
>>
>>> ---
>>>
>>> global_reclaim(sc) does not exist in kernels before 6.3. It can be
>>> replaced with:
>>> !cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup)
>>>
>>> ---
>>>    mm/vmscan.c | 8 +++++---
>>>    1 file changed, 5 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index 9c1c5e8b24b8f..c82bd89f90364 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -5346,8 +5346,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>>>                vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
>>>                           sc->nr_reclaimed - reclaimed);
>>>
>>> -     sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
>>> -     current->reclaim_state->reclaimed_slab = 0;
>>
>> Worth adding a comment like
>>
>> /*
>>    * Slab pages cannot universally be linked to a single memcg. So only
>>    * account them as reclaimed during global reclaim. Note that we might
>>    * underestimate the amount of memory reclaimed (but won't overestimate
>>    * it).
>>    */
>>
>> but ...
>>
>>> +     if (global_reclaim(sc)) {
>>> +             sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
>>> +             current->reclaim_state->reclaimed_slab = 0;
>>> +     }
>>>
>>>        return success ? MEMCG_LRU_YOUNG : 0;
>>>    }
>>> @@ -6472,7 +6474,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>>>
>>>        shrink_node_memcgs(pgdat, sc);
>>>
>>
>> ... do we want to factor the add+clear into a simple helper such that we
>> can have above comment there?
>>
>> static void cond_account_reclaimed_slab(reclaim_state, sc)
>> {
>>          /*
>>           * Slab pages cannot universally be linked to a single memcg. So
>>           * only account them as reclaimed during global reclaim. Note
>>           * that we might underestimate the amount of memory reclaimed
>>           * (but won't overestimate it).
>>           */
>>          if (global_reclaim(sc)) {
>>                  sc->nr_reclaimed += reclaim_state->reclaimed_slab;
>>                  reclaim_state->reclaimed_slab = 0;
>>          }
>> }
>>
> >> Yes, effectively a couple LOC more, but still straightforward for a
> >> stable backport.
> 
> The next patch in the series performs some refactoring and cleanups,
> among which we add a helper called flush_reclaim_state() that does
> exactly that and contains a sizable comment. I left this outside of
> this patch in v5 to make the effective change as small as possible for
> backporting. Looks like it can be confusing tho without the comment.
> 
> How about I pull this part to this patch as well for v6?

As long as it's a helper similar to what I proposed, I think that makes 
a lot of sense (and doesn't particularly bloat this patch).

-- 
Thanks,

David / dhildenb


* Re: [PATCH v5 1/2] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
  2023-04-06 17:49       ` David Hildenbrand
@ 2023-04-06 17:52         ` Yosry Ahmed
  0 siblings, 0 replies; 13+ messages in thread
From: Yosry Ahmed @ 2023-04-06 17:52 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, Johannes Weiner, Peter Xu, NeilBrown, Shakeel Butt,
	Michal Hocko, Yu Zhao, Dave Chinner, linux-fsdevel, linux-kernel,
	linux-xfs, linux-mm, stable

On Thu, Apr 6, 2023 at 10:50 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 06.04.23 16:07, Yosry Ahmed wrote:
> > Thanks for taking a look, David!
> >
> > On Thu, Apr 6, 2023 at 3:31 AM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 05.04.23 20:54, Yosry Ahmed wrote:
> >>> We keep track of different types of reclaimed pages through
> >>> reclaim_state->reclaimed_slab, and we add them to the reported number
> >>> of reclaimed pages.  For non-memcg reclaim, this makes sense. For memcg
> >>> reclaim, we have no clue if those pages are charged to the memcg under
> >>> reclaim.
> >>>
> >>> Slab pages are shared by different memcgs, so a freed slab page may have
> >>> only been partially charged to the memcg under reclaim.  The same goes for
> >>> clean file pages from pruned inodes (on highmem systems) or xfs buffer
> >>> pages, there is no simple way to currently link them to the memcg under
> >>> reclaim.
> >>>
> >>> Stop reporting those freed pages as reclaimed pages during memcg reclaim.
> >>> This should make the return value of writing to memory.reclaim more
> >>> accurate, and may help reduce unnecessary reclaim retries during memcg
> >>> charging.  Writing to
> >>> memory.reclaim on the root memcg is considered as cgroup_reclaim(), but
> >>> for this case we want to include any freed pages, so use the
> >>> global_reclaim() check instead of !cgroup_reclaim().
> >>>
> >>> Generally, this should make the return value of
> >>> try_to_free_mem_cgroup_pages() more accurate. In some limited cases (e.g.
> >>> freed a slab page that was mostly charged to the memcg under reclaim),
> >>> the return value of try_to_free_mem_cgroup_pages() can be underestimated,
> >>> but this should be fine. The freed pages will be uncharged anyway, and we
> >>
> >> Can't we end up in extreme situations where
> >> try_to_free_mem_cgroup_pages() returns close to 0 although a huge amount
> >> of memory for that cgroup was freed up.
> >>
> >> Can you extend on why "this should be fine" ?
> >>
> >> I suspect that overestimation might be worse than underestimation. (see
> >> my comment proposal below)
> >
> > In such extreme scenarios even though try_to_free_mem_cgroup_pages()
> > would return an underestimated value, the freed memory for the cgroup
> > will be uncharged. try_charge() (and most callers of
> > try_to_free_mem_cgroup_pages()) do so in a retry loop, so even if
> > try_to_free_mem_cgroup_pages() returns an underestimated value
> > charging will succeed the next time around.
> >
> > The only case where this might be a problem is if it happens in the
> > final retry, but I guess we need to be *really* unlucky for this
> > extreme scenario to happen. One could argue that if we reach such a
> > situation the cgroup will probably OOM soon anyway.
> >
> >>
> >>> can charge the memcg the next time around as we usually do memcg reclaim
> >>> in a retry loop.
> >>>
> >>> The next patch performs some cleanups around reclaim_state and adds an
> >>> elaborate comment explaining this to the code. This patch is kept
> >>> minimal for easy backporting.
> >>>
> >>> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> >>> Cc: stable@vger.kernel.org
> >>
> >> Fixes: ?
> >>
> >> Otherwise it's hard to judge how far to backport this.
> >
> > It's hard to judge. The issue has been there for a while, but
> > memory.reclaim just made it more user visible. I think we can
> > attribute it to per-object slab accounting, because before that any
> > freed slab pages in cgroup reclaim would be entirely charged to that
> > cgroup.
> >
> > Although in all fairness, other types of freed pages that use
> > reclaim_state->reclaimed_slab and cannot be attributed to the cgroup
> > under reclaim have been there before that. I guess slab is the most
> > significant among them, though, so for the purposes of backporting:
> >
> > Fixes: f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
> > instead of pages")
> >
> >>
> >>> ---
> >>>
> >>> global_reclaim(sc) does not exist in kernels before 6.3. It can be
> >>> replaced with:
> >>> !cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup)
> >>>
> >>> ---
> >>>    mm/vmscan.c | 8 +++++---
> >>>    1 file changed, 5 insertions(+), 3 deletions(-)
> >>>
> >>> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >>> index 9c1c5e8b24b8f..c82bd89f90364 100644
> >>> --- a/mm/vmscan.c
> >>> +++ b/mm/vmscan.c
> >>> @@ -5346,8 +5346,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
> >>>                vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
> >>>                           sc->nr_reclaimed - reclaimed);
> >>>
> >>> -     sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> >>> -     current->reclaim_state->reclaimed_slab = 0;
> >>
> >> Worth adding a comment like
> >>
> >> /*
> >>    * Slab pages cannot universally be linked to a single memcg. So only
> >>    * account them as reclaimed during global reclaim. Note that we might
> >>    * underestimate the amount of memory reclaimed (but won't overestimate
> >>    * it).
> >>    */
> >>
> >> but ...
> >>
> >>> +     if (global_reclaim(sc)) {
> >>> +             sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> >>> +             current->reclaim_state->reclaimed_slab = 0;
> >>> +     }
> >>>
> >>>        return success ? MEMCG_LRU_YOUNG : 0;
> >>>    }
> >>> @@ -6472,7 +6474,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> >>>
> >>>        shrink_node_memcgs(pgdat, sc);
> >>>
> >>
> >> ... do we want to factor the add+clear into a simple helper such that we
> >> can have the above comment there?
> >>
> >> static void cond_account_reclaimed_slab(reclaim_state, sc)
> >> {
> >>          /*
> >>           * Slab pages cannot universally be linked to a single memcg. So
> >>           * only account them as reclaimed during global reclaim. Note
> >>           * that we might underestimate the amount of memory reclaimed
> >>           * (but won't overestimate it).
> >>           */
> >>          if (global_reclaim(sc)) {
> >>                  sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> >>                  reclaim_state->reclaimed_slab = 0;
> >>          }
> >> }
> >>
> >> Yes, effectively a couple LOC more, but still straightforward for a
> >> stable backport.
> >
> > The next patch in the series performs some refactoring and cleanups,
> > among which we add a helper called flush_reclaim_state() that does
> > exactly that and contains a sizable comment. I left this outside of
> > this patch in v5 to make the effective change as small as possible for
> > backporting. Looks like it can be confusing without the comment, though.
> >
> > How about I pull this part to this patch as well for v6?
>
> As long as it's a helper similar to what I proposed, I think that makes
> a lot of sense (and doesn't particularly bloat this patch).

Sounds good to me, I will do that and respin.

Thanks David!

>
> --
> Thanks,
>
> David / dhildenb
>
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers
  2023-04-06 17:31   ` Tim Chen
  2023-04-06 17:43     ` Yosry Ahmed
@ 2023-04-06 19:42     ` Matthew Wilcox
  1 sibling, 0 replies; 13+ messages in thread
From: Matthew Wilcox @ 2023-04-06 19:42 UTC (permalink / raw)
  To: Tim Chen
  Cc: Yosry Ahmed, Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Miaohe Lin, David Hildenbrand,
	Johannes Weiner, Peter Xu, NeilBrown, Shakeel Butt, Michal Hocko,
	Yu Zhao, Dave Chinner, linux-fsdevel, linux-kernel, linux-xfs,
	linux-mm

On Thu, Apr 06, 2023 at 10:31:53AM -0700, Tim Chen wrote:
> On Wed, 2023-04-05 at 18:54 +0000, Yosry Ahmed wrote:
> > +	 * For all of these cases, we have no way of finding out whether these
> > +	 * pages were related to the memcg under reclaim. For example, a freed
> > +	 * slab page could have had only a single object charged to the memcg
> 
> Minor nits:
> s/could have had/could have

No ... "could have had" is correct.  I'm a native English speaker, so I
have no idea what the rule here is, but I can ask my linguist wife later
if you want to know ;-)

Maybe it's something like this:
https://www.englishgrammar.org/have-had-and-had-had/


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers
  2023-04-05 18:54 ` [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers Yosry Ahmed
  2023-04-06 17:31   ` Tim Chen
@ 2023-04-06 20:45   ` Peter Xu
  2023-04-07  1:02     ` Yosry Ahmed
  1 sibling, 1 reply; 13+ messages in thread
From: Peter Xu @ 2023-04-06 20:45 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, David Hildenbrand, Johannes Weiner, NeilBrown,
	Shakeel Butt, Michal Hocko, Yu Zhao, Dave Chinner, linux-fsdevel,
	linux-kernel, linux-xfs, linux-mm

Hi, Yosry,

On Wed, Apr 05, 2023 at 06:54:27PM +0000, Yosry Ahmed wrote:

[...]

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c82bd89f90364..049e39202e6ce 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -188,18 +188,6 @@ struct scan_control {
>   */
>  int vm_swappiness = 60;
>  
> -static void set_task_reclaim_state(struct task_struct *task,
> -				   struct reclaim_state *rs)
> -{
> -	/* Check for an overwrite */
> -	WARN_ON_ONCE(rs && task->reclaim_state);
> -
> -	/* Check for the nulling of an already-nulled member */
> -	WARN_ON_ONCE(!rs && !task->reclaim_state);
> -
> -	task->reclaim_state = rs;
> -}
> -
>  LIST_HEAD(shrinker_list);
>  DECLARE_RWSEM(shrinker_rwsem);
>  
> @@ -511,6 +499,59 @@ static bool writeback_throttling_sane(struct scan_control *sc)
>  }
>  #endif
>  
> +static void set_task_reclaim_state(struct task_struct *task,
> +				   struct reclaim_state *rs)
> +{
> +	/* Check for an overwrite */
> +	WARN_ON_ONCE(rs && task->reclaim_state);
> +
> +	/* Check for the nulling of an already-nulled member */
> +	WARN_ON_ONCE(!rs && !task->reclaim_state);
> +
> +	task->reclaim_state = rs;
> +}

Nit: I just think such movement is not necessary, and it easily loses
the "git blame" information.

Instead of moving this here without major benefit, why not just define
flush_reclaim_state() right after the existing set_task_reclaim_state()?

> +
> +/*
> + * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
> + * scan_control->nr_reclaimed.
> + */
> +static void flush_reclaim_state(struct scan_control *sc,
> +				struct reclaim_state *rs)
> +{
> +	/*
> +	 * Currently, reclaim_state->reclaimed includes three types of pages
> +	 * freed outside of vmscan:
> +	 * (1) Slab pages.
> +	 * (2) Clean file pages from pruned inodes.
> +	 * (3) XFS freed buffer pages.
> +	 *
> +	 * For all of these cases, we have no way of finding out whether these
> +	 * pages were related to the memcg under reclaim. For example, a freed
> +	 * slab page could have had only a single object charged to the memcg
> +	 * under reclaim. Also, populated inodes are not on shrinker LRUs
> +	 * anymore except on highmem systems.
> +	 *
> +	 * Instead of over-reporting the reclaimed pages in a memcg reclaim,
> +	 * only count such pages in global reclaim. This prevents unnecessary
> +	 * retries during memcg charging and false positive from proactive
> +	 * reclaim (memory.reclaim).
> +	 *
> +	 * For uncommon cases were the freed pages were actually significantly
> +	 * charged to the memcg under reclaim, and we end up under-reporting, it
> +	 * should be fine. The freed pages will be uncharged anyway, even if
> +	 * they are not reported properly, and we will be able to make forward
> +	 * progress in charging (which is usually in a retry loop).
> +	 *
> +	 * We can go one step further, and report the uncharged objcg pages in
> +	 * memcg reclaim, to make reporting more accurate and reduce
> +	 * under-reporting, but it's probably not worth the complexity for now.
> +	 */
> +	if (rs && global_reclaim(sc)) {
> +		sc->nr_reclaimed += rs->reclaimed;
> +		rs->reclaimed = 0;
> +	}
> +}
> +
>  static long xchg_nr_deferred(struct shrinker *shrinker,
>  			     struct shrink_control *sc)
>  {
> @@ -5346,10 +5387,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>  		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
>  			   sc->nr_reclaimed - reclaimed);
>  
> -	if (global_reclaim(sc)) {
> -		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> -		current->reclaim_state->reclaimed_slab = 0;
> -	}
> +	flush_reclaim_state(sc, current->reclaim_state);
>  
>  	return success ? MEMCG_LRU_YOUNG : 0;
>  }
> @@ -6474,10 +6512,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>  
>  	shrink_node_memcgs(pgdat, sc);
>  
> -	if (reclaim_state && global_reclaim(sc)) {
> -		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> -		reclaim_state->reclaimed_slab = 0;
> -	}
> +	flush_reclaim_state(sc, reclaim_state);

IIUC reclaim_state here still points to current->reclaim_state.  Could it
change at all?

Is it cleaner to make flush_reclaim_state() take "sc" only, if it always
references current->reclaim_state?
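
Something like the below is what I have in mind, as a completely
untested sketch (reusing only names already present in this patch):

static void flush_reclaim_state(struct scan_control *sc)
{
	struct reclaim_state *rs = current->reclaim_state;

	/* The big comment from this patch would stay here unchanged. */
	if (rs && global_reclaim(sc)) {
		sc->nr_reclaimed += rs->reclaimed;
		rs->reclaimed = 0;
	}
}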

>  
>  	/* Record the subtree's reclaim efficiency */
>  	if (!sc->proactive)
> -- 
> 2.40.0.348.gf938b09366-goog
> 

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 1/2] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
  2023-04-06 10:30   ` David Hildenbrand
  2023-04-06 14:07     ` Yosry Ahmed
@ 2023-04-06 22:25     ` Andrew Morton
  1 sibling, 0 replies; 13+ messages in thread
From: Andrew Morton @ 2023-04-06 22:25 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Yosry Ahmed, Alexander Viro, Darrick J. Wong, Christoph Lameter,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, Johannes Weiner, Peter Xu, NeilBrown, Shakeel Butt,
	Michal Hocko, Yu Zhao, Dave Chinner, linux-fsdevel, linux-kernel,
	linux-xfs, linux-mm, stable

On Thu, 6 Apr 2023 12:30:56 +0200 David Hildenbrand <david@redhat.com> wrote:

> Otherwise it's hard to judge how far to backport this.

The case for backporting sounded rather unconvincing to me, which is
why I'm still sitting on the v4 series.

What are your thoughts on the desirability of a backport?

It makes sense to design the forthcoming v6 series for backportability,
so that even if we decide "no", others can still take it easily if they
wish to.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers
  2023-04-06 20:45   ` Peter Xu
@ 2023-04-07  1:02     ` Yosry Ahmed
  0 siblings, 0 replies; 13+ messages in thread
From: Yosry Ahmed @ 2023-04-07  1:02 UTC (permalink / raw)
  To: Peter Xu
  Cc: Andrew Morton, Alexander Viro, Darrick J. Wong,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo, Matthew Wilcox (Oracle),
	Miaohe Lin, David Hildenbrand, Johannes Weiner, NeilBrown,
	Shakeel Butt, Michal Hocko, Yu Zhao, Dave Chinner, linux-fsdevel,
	linux-kernel, linux-xfs, linux-mm

On Thu, Apr 6, 2023 at 1:45 PM Peter Xu <peterx@redhat.com> wrote:
>
> Hi, Yosry,
>
> On Wed, Apr 05, 2023 at 06:54:27PM +0000, Yosry Ahmed wrote:
>
> [...]
>
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index c82bd89f90364..049e39202e6ce 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -188,18 +188,6 @@ struct scan_control {
> >   */
> >  int vm_swappiness = 60;
> >
> > -static void set_task_reclaim_state(struct task_struct *task,
> > -                                struct reclaim_state *rs)
> > -{
> > -     /* Check for an overwrite */
> > -     WARN_ON_ONCE(rs && task->reclaim_state);
> > -
> > -     /* Check for the nulling of an already-nulled member */
> > -     WARN_ON_ONCE(!rs && !task->reclaim_state);
> > -
> > -     task->reclaim_state = rs;
> > -}
> > -
> >  LIST_HEAD(shrinker_list);
> >  DECLARE_RWSEM(shrinker_rwsem);
> >
> > @@ -511,6 +499,59 @@ static bool writeback_throttling_sane(struct scan_control *sc)
> >  }
> >  #endif
> >
> > +static void set_task_reclaim_state(struct task_struct *task,
> > +                                struct reclaim_state *rs)
> > +{
> > +     /* Check for an overwrite */
> > +     WARN_ON_ONCE(rs && task->reclaim_state);
> > +
> > +     /* Check for the nulling of an already-nulled member */
> > +     WARN_ON_ONCE(!rs && !task->reclaim_state);
> > +
> > +     task->reclaim_state = rs;
> > +}
>
> Nit: I just think such movement is not necessary, and it easily loses
> the "git blame" information.
>
> Instead of moving this here without major benefit, why not just define
> flush_reclaim_state() right after the existing set_task_reclaim_state()?

An earlier version did that, but we would have to add a forward
declaration of global_reclaim() (or cgroup_reclaim()), as they are
defined after the previous position of set_task_reclaim_state().
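
For reference, that forward declaration would be a one-liner near the
top of mm/vmscan.c, something like (untested sketch):

static bool global_reclaim(struct scan_control *sc);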

>
> > +
> > +/*
> > + * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
> > + * scan_control->nr_reclaimed.
> > + */
> > +static void flush_reclaim_state(struct scan_control *sc,
> > +                             struct reclaim_state *rs)
> > +{
> > +     /*
> > +      * Currently, reclaim_state->reclaimed includes three types of pages
> > +      * freed outside of vmscan:
> > +      * (1) Slab pages.
> > +      * (2) Clean file pages from pruned inodes.
> > +      * (3) XFS freed buffer pages.
> > +      *
> > +      * For all of these cases, we have no way of finding out whether these
> > +      * pages were related to the memcg under reclaim. For example, a freed
> > +      * slab page could have had only a single object charged to the memcg
> > +      * under reclaim. Also, populated inodes are not on shrinker LRUs
> > +      * anymore except on highmem systems.
> > +      *
> > +      * Instead of over-reporting the reclaimed pages in a memcg reclaim,
> > +      * only count such pages in global reclaim. This prevents unnecessary
> > +      * retries during memcg charging and false positive from proactive
> > +      * reclaim (memory.reclaim).
> > +      *
> > +      * For uncommon cases were the freed pages were actually significantly
> > +      * charged to the memcg under reclaim, and we end up under-reporting, it
> > +      * should be fine. The freed pages will be uncharged anyway, even if
> > +      * they are not reported properly, and we will be able to make forward
> > +      * progress in charging (which is usually in a retry loop).
> > +      *
> > +      * We can go one step further, and report the uncharged objcg pages in
> > +      * memcg reclaim, to make reporting more accurate and reduce
> > +      * under-reporting, but it's probably not worth the complexity for now.
> > +      */
> > +     if (rs && global_reclaim(sc)) {
> > +             sc->nr_reclaimed += rs->reclaimed;
> > +             rs->reclaimed = 0;
> > +     }
> > +}
> > +
> >  static long xchg_nr_deferred(struct shrinker *shrinker,
> >                            struct shrink_control *sc)
> >  {
> > @@ -5346,10 +5387,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
> >               vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
> >                          sc->nr_reclaimed - reclaimed);
> >
> > -     if (global_reclaim(sc)) {
> > -             sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> > -             current->reclaim_state->reclaimed_slab = 0;
> > -     }
> > +     flush_reclaim_state(sc, current->reclaim_state);
> >
> >       return success ? MEMCG_LRU_YOUNG : 0;
> >  }
> > @@ -6474,10 +6512,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> >
> >       shrink_node_memcgs(pgdat, sc);
> >
> > -     if (reclaim_state && global_reclaim(sc)) {
> > -             sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> > -             reclaim_state->reclaimed_slab = 0;
> > -     }
> > +     flush_reclaim_state(sc, reclaim_state);
>
> IIUC reclaim_state here still points to current->reclaim_state.  Could it
> change at all?
>
> Is it cleaner to make flush_reclaim_state() take "sc" only, if it always
> references current->reclaim_state?

Good point. I think it's always current->reclaim_state.

I think we can make flush_reclaim_state() only take "sc" as an
argument, and remove the "reclaim_state" local variable in
shrink_node() completely.
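
Roughly, as an untested sketch on top of this patch:

-static void flush_reclaim_state(struct scan_control *sc,
-				struct reclaim_state *rs)
+static void flush_reclaim_state(struct scan_control *sc)
 {
 	...
-	if (rs && global_reclaim(sc)) {
-		sc->nr_reclaimed += rs->reclaimed;
-		rs->reclaimed = 0;
+	if (current->reclaim_state && global_reclaim(sc)) {
+		sc->nr_reclaimed += current->reclaim_state->reclaimed;
+		current->reclaim_state->reclaimed = 0;
 	}
 }

with both call sites becoming flush_reclaim_state(sc).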

>
> >
> >       /* Record the subtree's reclaim efficiency */
> >       if (!sc->proactive)
> > --
> > 2.40.0.348.gf938b09366-goog
> >
>
> --
> Peter Xu
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2023-04-07  1:03 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-05 18:54 [PATCH v5 0/2] Ignore non-LRU-based reclaim in memcg reclaim Yosry Ahmed
2023-04-05 18:54 ` [PATCH v5 1/2] mm: vmscan: ignore " Yosry Ahmed
2023-04-06 10:30   ` David Hildenbrand
2023-04-06 14:07     ` Yosry Ahmed
2023-04-06 17:49       ` David Hildenbrand
2023-04-06 17:52         ` Yosry Ahmed
2023-04-06 22:25     ` Andrew Morton
2023-04-05 18:54 ` [PATCH v5 2/2] mm: vmscan: refactor reclaim_state helpers Yosry Ahmed
2023-04-06 17:31   ` Tim Chen
2023-04-06 17:43     ` Yosry Ahmed
2023-04-06 19:42     ` Matthew Wilcox
2023-04-06 20:45   ` Peter Xu
2023-04-07  1:02     ` Yosry Ahmed
