From: Johannes Weiner <hannes@cmpxchg.org>
Subject: [PATCH 1/7] mm: workingset: don't drop refault information prematurely
Date: Mon, 7 May 2018 17:01:29 -0400
Message-Id: <20180507210135.1823-2-hannes@cmpxchg.org>
In-Reply-To: <20180507210135.1823-1-hannes@cmpxchg.org>
References: <20180507210135.1823-1-hannes@cmpxchg.org>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-block@vger.kernel.org, cgroups@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Andrew Morton, Tejun Heo, Balbir Singh,
	Mike Galbraith, Oliver Yang, Shakeel Butt, xxx xxx, Taras Kondratiuk,
	Daniel Walker, Vinayak Menon, Ruslan Ruslichenko, kernel-team@fb.com

If we just keep enough refault information to match the current page
cache during reclaim time, we could lose a lot of events when there is
only a temporary spike in non-cache memory consumption that pushes out
all the cache. Once cache comes back, we won't see those refaults.
They might not be actionable for LRU aging, but we want to know about
them for measuring memory pressure.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/workingset.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index 40ee02c83978..53759a3cf99a 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -364,7 +364,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 {
 	unsigned long max_nodes;
 	unsigned long nodes;
-	unsigned long cache;
+	unsigned long pages;
 
 	/* list_lru lock nests inside the IRQ-safe i_pages lock */
 	local_irq_disable();
@@ -393,14 +393,14 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 	 *
 	 * PAGE_SIZE / radix_tree_nodes / node_entries * 8 / PAGE_SIZE
 	 */
-	if (sc->memcg) {
-		cache = mem_cgroup_node_nr_lru_pages(sc->memcg, sc->nid,
-						     LRU_ALL_FILE);
-	} else {
-		cache = node_page_state(NODE_DATA(sc->nid), NR_ACTIVE_FILE) +
-			node_page_state(NODE_DATA(sc->nid), NR_INACTIVE_FILE);
-	}
-	max_nodes = cache >> (RADIX_TREE_MAP_SHIFT - 3);
+#ifdef CONFIG_MEMCG
+	if (sc->memcg)
+		pages = page_counter_read(&sc->memcg->memory);
+	else
+#endif
+		pages = node_present_pages(sc->nid);
+
+	max_nodes = pages >> (RADIX_TREE_MAP_SHIFT - 3);
 
 	if (nodes <= max_nodes)
 		return 0;
-- 
2.17.0
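
The new ceiling works out to one allowed shadow node per 8 pages of the
node's (or cgroup's) total memory, rather than per 8 pages of currently
resident cache. Below is a minimal userspace sketch of that calculation,
not part of the patch, assuming RADIX_TREE_MAP_SHIFT == 6 (the common
64-bit configuration) and an illustrative 4 GiB node:

	#include <stdio.h>

	/* Assumption: RADIX_TREE_MAP_SHIFT is 6 on typical 64-bit
	 * builds, i.e. 64 slots per radix tree node. */
	#define RADIX_TREE_MAP_SHIFT 6

	int main(void)
	{
		/* Illustrative machine: 4 GiB worth of 4 KiB pages. */
		unsigned long pages = (4UL << 30) / 4096;

		/* Patched ceiling: one shadow node per 8 pages of
		 * present memory, not of resident cache. */
		unsigned long max_nodes = pages >> (RADIX_TREE_MAP_SHIFT - 3);

		printf("pages=%lu max_nodes=%lu\n", pages, max_nodes);
		return 0;
	}

With the worst-case population density assumed in the code comment, 8
entries per 64-slot node, this caps shadow entries at roughly one per
page of memory, which is what lets refault information survive a
transient cache wipeout instead of being reclaimed along with the cache
it describes.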