From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755582AbbJVEWV (ORCPT );
	Thu, 22 Oct 2015 00:22:21 -0400
Received: from gum.cmpxchg.org ([85.214.110.215]:39326 "EHLO gum.cmpxchg.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751791AbbJVEWQ (ORCPT );
	Thu, 22 Oct 2015 00:22:16 -0400
From: Johannes Weiner
To: "David S. Miller" , Andrew Morton
Cc: Michal Hocko , Vladimir Davydov , Tejun Heo ,
	netdev@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 7/8] mm: vmscan: report vmpressure at the level of reclaim activity
Date: Thu, 22 Oct 2015 00:21:35 -0400
Message-Id: <1445487696-21545-8-git-send-email-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.6.1
In-Reply-To: <1445487696-21545-1-git-send-email-hannes@cmpxchg.org>
References: <1445487696-21545-1-git-send-email-hannes@cmpxchg.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

The vmpressure metric is based on reclaim efficiency, which in turn is
an attribute of the LRU. However, vmpressure events are currently
reported at the source of pressure rather than at the reclaim level.

Switch the reporting to the reclaim level to allow finer-grained
analysis of which memcg is having trouble reclaiming its pages.

As far as the memory.pressure_level interface semantics go, events are
escalated up the hierarchy until a listener is found, so this won't
affect existing users that listen at higher levels.

This also prepares vmpressure for hooking it up to the networking
stack's memory pressure code.
Signed-off-by: Johannes Weiner
---
 mm/vmscan.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index ecc2125..50630c8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2404,6 +2404,7 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
 		memcg = mem_cgroup_iter(root, NULL, &reclaim);
 		do {
 			unsigned long lru_pages;
+			unsigned long reclaimed;
 			unsigned long scanned;
 			struct lruvec *lruvec;
 			int swappiness;
@@ -2416,6 +2417,7 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
 			lruvec = mem_cgroup_zone_lruvec(zone, memcg);
 			swappiness = mem_cgroup_swappiness(memcg);
+			reclaimed = sc->nr_reclaimed;
 			scanned = sc->nr_scanned;
 
 			shrink_lruvec(lruvec, swappiness, sc, &lru_pages);
 
@@ -2437,6 +2439,10 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
 				}
 			}
 
+			vmpressure(sc->gfp_mask, memcg,
+				   sc->nr_scanned - scanned,
+				   sc->nr_reclaimed - reclaimed);
+
 			/*
 			 * Direct reclaim and kswapd have to scan all memory
 			 * cgroups to fulfill the overall scan target for the
@@ -2454,10 +2460,6 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
 			}
 		} while ((memcg = mem_cgroup_iter(root, memcg, &reclaim)));
 
-		vmpressure(sc->gfp_mask, sc->target_mem_cgroup,
-			   sc->nr_scanned - nr_scanned,
-			   sc->nr_reclaimed - nr_reclaimed);
-
 		if (sc->nr_reclaimed - nr_reclaimed)
 			reclaimable = true;
 
-- 
2.6.1