From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752801AbcFFTva (ORCPT ); Mon, 6 Jun 2016 15:51:30 -0400
Received: from gum.cmpxchg.org ([85.214.110.215]:58126 "EHLO gum.cmpxchg.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752075AbcFFTv2 (ORCPT ); Mon, 6 Jun 2016 15:51:28 -0400
From: Johannes Weiner 
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton ,
	Rik van Riel ,
	Mel Gorman ,
	Andrea Arcangeli ,
	Andi Kleen ,
	Michal Hocko ,
	Tim Chen ,
	kernel-team@fb.com
Subject: [PATCH 09/10] mm: only count actual rotations as LRU reclaim cost
Date: Mon, 6 Jun 2016 15:48:35 -0400
Message-Id: <20160606194836.3624-10-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.8.3
In-Reply-To: <20160606194836.3624-1-hannes@cmpxchg.org>
References: <20160606194836.3624-1-hannes@cmpxchg.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Noting a reference on an active file page but still deactivating it
represents a smaller cost of reclaim than noting a referenced
anonymous page and actually physically rotating it back to the head.
The file page *might* refault later on, but it's definite progress
toward freeing pages, whereas rotating the anonymous page costs us
real time without making progress toward the reclaim goal.

Don't treat both events as equal. The following patch will hook up
LRU balancing to cache and swap refaults, which are a much more
concrete cost signal for reclaiming one list over the other. Remove
the maybe-IO cost bias from page references, and only note the CPU
cost for actual rotations that prevent the pages from getting
reclaimed.

Signed-off-by: Johannes Weiner 
---
 mm/vmscan.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 06e381e1004c..acbd212eab6e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1821,7 +1821,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
 
 		if (page_referenced(page, 0, sc->target_mem_cgroup,
 				    &vm_flags)) {
-			nr_rotated += hpage_nr_pages(page);
 			/*
 			 * Identify referenced, file-backed active pages and
 			 * give them one more trip around the active list. So
@@ -1832,6 +1831,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			 * so we ignore them here.
 			 */
 			if ((vm_flags & VM_EXEC) && page_is_file_cache(page)) {
+				nr_rotated += hpage_nr_pages(page);
 				list_add(&page->lru, &l_active);
 				continue;
 			}
@@ -1846,10 +1846,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	spin_lock_irq(&zone->lru_lock);
 	/*
-	 * Count referenced pages from currently used mappings as rotated,
-	 * even though only some of them are actually re-activated. This
-	 * helps balance scan pressure between file and anonymous pages in
-	 * get_scan_count.
+	 * Rotating pages costs CPU without actually
+	 * progressing toward the reclaim goal.
 	 */
 	lru_note_cost(lruvec, file, nr_rotated);
 
-- 
2.8.3
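
P.S. For illustration only (not part of the patch, and none of these
names exist in the kernel), here is a minimal userspace sketch of the
accounting rule the change adopts: a referenced page that is still
deactivated is progress and costs nothing, while a page that is
actually rotated back onto the active list is charged as CPU cost
against its LRU, which is what later biases scan pressure away from
that list.

/*
 * Toy model, not kernel code: all names below are made up for
 * illustration of the accounting rule only.
 */
#include <stdbool.h>
#include <stdio.h>

struct lru_cost {
	unsigned long anon;	/* rotation cost charged to anon pages */
	unsigned long file;	/* rotation cost charged to file pages */
};

/* Charge nr pages of rotation cost to the anon or file side. */
static void note_cost(struct lru_cost *c, bool file, unsigned long nr)
{
	if (file)
		c->file += nr;
	else
		c->anon += nr;
}

/*
 * One page coming off the active list: only referenced executable
 * file pages get another trip around the list, and only they are
 * charged; everything else is deactivated, which is free.
 */
static bool scan_one(struct lru_cost *c, bool file, bool referenced,
		     bool exec, unsigned long nr)
{
	if (referenced && exec && file) {
		note_cost(c, file, nr);
		return true;	/* rotated: stays active, costs CPU */
	}
	return false;		/* deactivated: progress, no cost */
}

int main(void)
{
	struct lru_cost c = { 0, 0 };

	scan_one(&c, true, true, false, 32);	/* referenced file page: deactivated for free */
	scan_one(&c, true, true, true, 4);	/* referenced exec file page: rotated, charged */

	printf("anon cost: %lu, file cost: %lu\n", c.anon, c.file);
	return 0;
}

In this toy, only the 4 rotated pages show up as file cost; the next
patch in the series then adds cache and swap refaults on top, so the
IO side of the cost is measured rather than guessed.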