linux-kernel.vger.kernel.org archive mirror
* [PATCH v3] vm_swappiness=0 should still try to avoid swapping anon memory
@ 2021-08-09 22:37 Nico Pache
  2021-08-10 15:27 ` Johannes Weiner
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Nico Pache @ 2021-08-09 22:37 UTC (permalink / raw)
  To: linux-mm, akpm, linux-kernel
  Cc: hannes, npache, aquini, shakeelb, llong, mhocko, hakavlad

Since commit 170b04b7ae49 ("mm/workingset: prepare the workingset detection
infrastructure for anon LRU") and commit b91ac374346b ("mm: vmscan: enforce
inactive:active ratio at the reclaim root"), reclaim can prematurely swap
anon memory. This is due to the assumption that refaulting anon should
always allow the shrinker to target anon memory. Add a check that swappiness
is >0 before indiscriminately targeting anon. Before these commits,
when a user set swappiness=0, anon memory would rarely get swapped; this
behavior has remained constant since RHEL5. This commit keeps that behavior
intact and prevents the new workingset refaulting from challenging the anon
memory when swappiness=0.

Anon can still be swapped to prevent OOM. This does not completely disable
swapping, but rather tames the refaulting logic that allows anon memory
to be deactivated.

We have two customer workloads that discovered this issue:
1) A VM claiming 95% of the host's memory followed by file reads (never dirty)
   which begin to challenge the anon. Refaulting the anon working set then
   causes indiscriminate swapping of the anon.

2) A VM running an in-memory DB being populated from file reads.
   Swappiness is set to 0 or 1 to defer write I/O as much as possible. Once
   the customer experiences low memory, anon swapping starts, with
   little-to-no page cache being reclaimed.

Previously the file cache would account for almost all of the memory
reclaimed and reads would throttle. Although the two LRU changes mentioned
allow for less thrashing of the file cache, customers would like to keep
the swappiness=0 behavior that has been present in the kernel for a long
time.

A similar solution may be possible in get_scan_count(), which determines the
reclaim pressure for each LRU; however I believe that kind of solution may be
too aggressive, and would prevent other parts of the code (like direct reclaim)
from targeting the active_anon list. This way we stop the problem at the heart
of what is causing the issue, with the least amount of interference in other
code paths. Furthermore, shrink_lruvec() can modify the reclaim pressure of each
LRU, which may make the get_scan_count() solution even trickier.

Changelog:
 -V3:
    * Blame the right commit and be more descriptive in my log message.
    * inactive_is_low should remain independent from the new swappiness check.
    * Change how we get the swappiness value. shrink_node() can be called with
      a NULL target_mem_cgroup, so we depend on the target_lruvec to do the
      NULL check on the memcg.

 -V2:
     * Made this mem_cgroup specific so it now works with cgroup v1, v2, and
       no cgroups.
     * I've also touched up my commit log.

Signed-off-by: Nico Pache <npache@redhat.com>
---
 mm/vmscan.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4620df62f0ff..9f2420da4037 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2883,8 +2883,12 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	struct lruvec *target_lruvec;
 	bool reclaimable = false;
 	unsigned long file;
+	struct mem_cgroup *memcg;
+	int swappiness;
 
 	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
+	memcg = lruvec_memcg(target_lruvec);
+	swappiness = mem_cgroup_swappiness(memcg);
 
 again:
 	memset(&sc->nr, 0, sizeof(sc->nr));
@@ -2909,7 +2913,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 		refaults = lruvec_page_state(target_lruvec,
 				WORKINGSET_ACTIVATE_ANON);
-		if (refaults != target_lruvec->refaults[0] ||
+		if ((swappiness && refaults != target_lruvec->refaults[0]) ||
 			inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
 			sc->may_deactivate |= DEACTIVATE_ANON;
 		else
-- 
2.31.1



Thread overview: 16+ messages
2021-08-09 22:37 [PATCH v3] vm_swappiness=0 should still try to avoid swapping anon memory Nico Pache
2021-08-10 15:27 ` Johannes Weiner
2021-08-10 19:24   ` Nico Pache
2021-08-10 21:17     ` Shakeel Butt
2021-08-10 22:16       ` Nico Pache
2021-08-10 22:29         ` Shakeel Butt
2021-08-10 21:16   ` Shakeel Butt
2021-08-10 15:37 ` Waiman Long
2022-04-19 18:11 ` Nico Pache
2022-04-19 18:46   ` Johannes Weiner
2022-04-19 19:37     ` Nico Pache
2022-04-19 23:54     ` Nico Pache
2022-04-20 14:01       ` Johannes Weiner
2022-04-20 17:34         ` Nico Pache
2022-04-20 18:44           ` Johannes Weiner
2022-04-21 16:21             ` Nico Pache
