* Patch "mm, vmscan: consider eligible zones in get_scan_count" has been added to the 4.10-stable tree
@ 2017-03-08 13:11 gregkh
From: gregkh @ 2017-03-08 13:11 UTC
  To: mhocko, akpm, gregkh, hannes, hillf.zj, mgorman, minchan,
	torvalds, trevor
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    mm, vmscan: consider eligible zones in get_scan_count

to the 4.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-vmscan-consider-eligible-zones-in-get_scan_count.patch
and it can be found in the queue-4.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From 71ab6cfe88dcf9f6e6a65eb85cf2bda20a257682 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Wed, 22 Feb 2017 15:46:01 -0800
Subject: mm, vmscan: consider eligible zones in get_scan_count

From: Michal Hocko <mhocko@suse.com>

commit 71ab6cfe88dcf9f6e6a65eb85cf2bda20a257682 upstream.

get_scan_count() considers the whole node LRU size when

 - doing SCAN_FILE due to many page cache inactive pages
 - calculating the number of pages to scan

In both cases this might lead to unexpected behavior, especially on
32-bit systems where we can expect lowmem memory pressure very often.

A large highmem zone can easily distort the SCAN_FILE heuristic because
there might be only a few file pages from the eligible zones on the node
LRU, yet we would still enforce file LRU scanning, which can lead to
thrashing while we could still scan anonymous pages.
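
As a hypothetical illustration (the numbers are made up for the sake of
the example): assume 1,000,000 inactive file pages on the node but only
2,000 of them in the eligible lowmem zones.  At the default priority of
12 the old whole-node check

    lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES) >> sc->priority

gives 1,000,000 >> 12 = 244, which is non-zero, so SCAN_FILE is forced
even though almost none of those file pages can satisfy a lowmem
request.  Counting only the eligible zones gives 2,000 >> 12 = 0, so the
heuristic is not triggered and anonymous pages remain reclaim
candidates.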

The later use of lruvec_lru_size can be problematic as well, especially
when there are not many pages from the eligible zones.  We would have to
skip over many pages to find anything to reclaim, but shrink_node_memcg
only reduces the remaining number to scan by SWAP_CLUSTER_MAX at
maximum.  Therefore we can end up going over a large LRU many times
without actually having a chance to reclaim much, if anything at all.
The closer the lowmem zone is to running out of memory, the worse the
problem becomes.
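
Illustrating with the same hypothetical numbers: the whole-node size
makes scan = size >> sc->priority come out at 1,000,000 >> 12 = 244
pages for that LRU, while shrink_node_memcg reduces that budget by at
most SWAP_CLUSTER_MAX (32 pages) per pass, so reclaim can make about
244 / 32 = 8 passes over an LRU made up almost entirely of ineligible
highmem pages without reclaiming anything useful, and the budget only
grows as the priority drops.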

Fix this by filtering out all the ineligible zones when calculating the
LRU size for both paths, considering only zones up to sc->reclaim_idx.
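
For readers unfamiliar with the lruvec helpers, here is a minimal sketch
of the idea; it is not the actual lruvec_lru_size implementation, and
the per-zone accessor zone_lru_count used below is a hypothetical
stand-in for the kernel's per-zone LRU counters:

    /*
     * Sketch only: sum the per-zone LRU counts for every zone the
     * allocation context is allowed to reclaim from.  A zone_idx of
     * MAX_NR_ZONES covers all zones, including highmem, while passing
     * sc->reclaim_idx restricts the sum to the eligible zones.
     */
    static unsigned long eligible_lru_size(struct lruvec *lruvec,
                                           enum lru_list lru, int zone_idx)
    {
            unsigned long size = 0;
            int zid;

            for (zid = 0; zid <= zone_idx && zid < MAX_NR_ZONES; zid++)
                    size += zone_lru_count(lruvec, lru, zid); /* hypothetical */

            return size;
    }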

The patch would need to be tweaked a bit to apply to 4.10 and older but
I will do that as soon as it hits the Linus tree in the next merge
window.

Link: http://lkml.kernel.org/r/20170117103702.28542-3-mhocko@kernel.org
Fixes: b2e18757f2c9 ("mm, vmscan: begin reclaiming pages on a per-node basis")
Signed-off-by: Michal Hocko <mhocko@suse.com>
Tested-by: Trevor Cordes <trevor@tecnopolis.ca>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


---
 mm/vmscan.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2205,7 +2205,7 @@ static void get_scan_count(struct lruvec
 	 * system is under heavy pressure.
 	 */
 	if (!inactive_list_is_low(lruvec, true, sc) &&
-	    lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES) >> sc->priority) {
+	    lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, sc->reclaim_idx) >> sc->priority) {
 		scan_balance = SCAN_FILE;
 		goto out;
 	}
@@ -2272,7 +2272,7 @@ out:
 			unsigned long size;
 			unsigned long scan;
 
-			size = lruvec_lru_size(lruvec, lru, MAX_NR_ZONES);
+			size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
 			scan = size >> sc->priority;
 
 			if (!scan && pass && force_scan)


Patches currently in stable-queue which might be from mhocko@suse.com are

queue-4.10/mm-vmpressure-fix-sending-wrong-events-on-underflow.patch
queue-4.10/mm-page_alloc-fix-nodes-for-reclaim-in-fast-path.patch
queue-4.10/mm-vmscan-consider-eligible-zones-in-get_scan_count.patch
queue-4.10/mm-vmscan-cleanup-lru-size-claculations.patch
queue-4.10/mm-devm_memremap_pages-hold-device_hotplug-lock-over-mem_hotplug_-begin-done.patch
queue-4.10/mm-do-not-access-page-mapping-directly-on-page_endio.patch
