From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jia He <hejianet@gmail.com>, Michal Hocko <mhocko@suse.cz>, Mel Gorman <mgorman@suse.de>, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 5/9] mm: don't avoid high-priority reclaim on unreclaimable nodes
Date: Tue, 28 Feb 2017 16:40:03 -0500
Message-ID: <20170228214007.5621-6-hannes@cmpxchg.org> (raw)
In-Reply-To: <20170228214007.5621-1-hannes@cmpxchg.org>

246e87a93934 ("memcg: fix get_scan_count() for small targets") sought
to avoid high reclaim priorities for kswapd by forcing it to scan a
minimum amount of pages when lru_pages >> priority yielded nothing.

b95a2f2d486d ("mm: vmscan: convert global reclaim to per-memcg LRU
lists"), due to switching global reclaim to a round-robin scheme over
all cgroups, had to restrict this forceful behavior to unreclaimable
zones in order to prevent massive overreclaim with many cgroups.

The latter patch effectively neutered the behavior completely for all
but extreme memory pressure. But in those situations we might as well
drop the reclaimers to lower priority levels. Remove the check.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/vmscan.c | 19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 911957b66622..46b6223fe7f3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2129,22 +2129,13 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 	int pass;
 
 	/*
-	 * If the zone or memcg is small, nr[l] can be 0. This
-	 * results in no scanning on this priority and a potential
-	 * priority drop. Global direct reclaim can go to the next
-	 * zone and tends to have no problems. Global kswapd is for
-	 * zone balancing and it needs to scan a minimum amount. When
+	 * If the zone or memcg is small, nr[l] can be 0. When
 	 * reclaiming for a memcg, a priority drop can cause high
-	 * latencies, so it's better to scan a minimum amount there as
-	 * well.
+	 * latencies, so it's better to scan a minimum amount. When a
+	 * cgroup has already been deleted, scrape out the remaining
+	 * cache forcefully to get rid of the lingering state.
 	 */
-	if (current_is_kswapd()) {
-		if (!pgdat_reclaimable(pgdat))
-			force_scan = true;
-		if (!mem_cgroup_online(memcg))
-			force_scan = true;
-	}
-	if (!global_reclaim(sc))
+	if (!global_reclaim(sc) || !mem_cgroup_online(memcg))
 		force_scan = true;
 
 	/* If we have no swap space, do not bother scanning anon pages. */
-- 
2.11.1