Date: Mon, 17 Apr 2017 17:06:20 -0700 (PDT)
From: David Rientjes
X-X-Sender: rientjes@chino.kir.corp.google.com
To: Andrew Morton
cc: Johannes Weiner, Mel Gorman, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [patch] mm, vmscan: avoid thrashing anon lru when free + file is low
Message-ID:
User-Agent: Alpine 2.10 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

The purpose of the code that commit 623762517e23 ("revert 'mm: vmscan: do
not swap anon pages just because free+file is low'") reintroduces is to
prefer swapping anonymous memory rather than thrashing the file lru.

If all anonymous memory is unevictable, however, this insistence on
SCAN_ANON ends up thrashing that lru instead.

Check that enough evictable anon memory is actually on this lruvec before
insisting on SCAN_ANON.  SWAP_CLUSTER_MAX is used as the threshold to
determine if only scanning anon is beneficial.  Otherwise, fall back to
balanced reclaim so the file lru doesn't remain untouched.

Signed-off-by: David Rientjes
---
 mm/vmscan.c | 41 +++++++++++++++++++++++------------------
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2186,26 +2186,31 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
	 * anon pages. Try to detect this based on file LRU size.
	 */
	if (global_reclaim(sc)) {
-		unsigned long pgdatfile;
-		unsigned long pgdatfree;
-		int z;
-		unsigned long total_high_wmark = 0;
-
-		pgdatfree = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
-		pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
-			   node_page_state(pgdat, NR_INACTIVE_FILE);
-
-		for (z = 0; z < MAX_NR_ZONES; z++) {
-			struct zone *zone = &pgdat->node_zones[z];
-			if (!managed_zone(zone))
-				continue;
+		anon = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON, sc->reclaim_idx) +
+			lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx);
+		if (likely(anon >= SWAP_CLUSTER_MAX)) {
+			unsigned long total_high_wmark = 0;
+			unsigned long pgdatfile;
+			unsigned long pgdatfree;
+			int z;
+
+			pgdatfree = sum_zone_node_page_state(pgdat->node_id,
+							     NR_FREE_PAGES);
+			pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
+				   node_page_state(pgdat, NR_INACTIVE_FILE);
+
+			for (z = 0; z < MAX_NR_ZONES; z++) {
+				struct zone *zone = &pgdat->node_zones[z];
+				if (!managed_zone(zone))
+					continue;
 
-			total_high_wmark += high_wmark_pages(zone);
-		}
+				total_high_wmark += high_wmark_pages(zone);
+			}
 
-		if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
-			scan_balance = SCAN_ANON;
-			goto out;
+			if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
+				scan_balance = SCAN_ANON;
+				goto out;
+			}
 		}
 	}
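
As an aside for review (not part of the patch): the heuristic now boils
down to a two-part test, sketched below as a standalone userspace model.
The counts that get_scan_count() reads from the lruvec and the node
vmstat counters are passed in here as plain numbers, SWAP_CLUSTER_MAX is
hardcoded to its usual value of 32, and the fallback is simplified to
SCAN_EQUAL where the kernel really falls through to its remaining
balancing heuristics.

/*
 * Illustrative model of the decision above, not kernel code.
 */
#include <stdio.h>

#define SWAP_CLUSTER_MAX	32UL

enum scan_balance { SCAN_EQUAL, SCAN_ANON };

static enum scan_balance choose_scan_balance(unsigned long evictable_anon,
					     unsigned long file,
					     unsigned long free,
					     unsigned long total_high_wmark)
{
	/*
	 * Only insist on scanning anon when there is enough evictable
	 * anon memory on the lruvec to make progress; otherwise fall
	 * back so the file lru is not left untouched.
	 */
	if (evictable_anon >= SWAP_CLUSTER_MAX &&
	    file + free <= total_high_wmark)
		return SCAN_ANON;
	return SCAN_EQUAL;
}

int main(void)
{
	/* Nearly all anon unevictable: balanced despite low file+free. */
	printf("%d\n", choose_scan_balance(8, 100, 50, 1000));
	/* Ample evictable anon, file+free under the watermarks: swap. */
	printf("%d\n", choose_scan_balance(4096, 100, 50, 1000));
	return 0;
}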