From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 18 Sep 2020 21:20:12 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, cai@lca.pw, hannes@cmpxchg.org, hughd@google.com,
 linux-mm@kvack.org, mhocko@suse.com, mike.kravetz@oracle.com,
 mm-commits@vger.kernel.org, shakeelb@google.com, shy828301@gmail.com,
 torvalds@linux-foundation.org
Subject: [patch 05/15] mm: fix check_move_unevictable_pages() on THP
Message-ID: <20200919042012.n9ppaodpA%akpm@linux-foundation.org>
In-Reply-To: <20200918211925.7e97f0ef63d92f5cfe5ccbc5@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: Hugh Dickins <hughd@google.com>
Subject: mm: fix check_move_unevictable_pages() on THP

check_move_unevictable_pages() is used in making unevictable shmem pages
evictable: by shmem_unlock_mapping(), drm_gem_check_release_pagevec() and
i915/gem check_release_pagevec().  Those may pass down subpages of a huge
page, when /sys/kernel/mm/transparent_hugepage/shmem_enabled is "force".

That does not crash or warn at present, but the accounting of vmstats
unevictable_pgs_scanned and unevictable_pgs_rescued is inconsistent:
scanned being incremented on each subpage, rescued only on the head
(since tails already appear evictable once the head has been updated).

5.8 commit 5d91f31faf8e ("mm: swap: fix vmstats for huge page") has
established that vm_events in general (and unevictable_pgs_rescued in
particular) should count every subpage: so follow that precedent here.

Do this in such a way that if mem_cgroup_page_lruvec() is made stricter
(to check page->mem_cgroup is always set), no problem: skip the tails
before calling it, and add thp_nr_pages() to vmstats on the head.

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2008301405000.5954@eggly.anvils
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Yang Shi <shy828301@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Qian Cai <cai@lca.pw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/vmscan.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

--- a/mm/vmscan.c~mm-fix-check_move_unevictable_pages-on-thp
+++ a/mm/vmscan.c
@@ -4268,8 +4268,14 @@ void check_move_unevictable_pages(struct
 	for (i = 0; i < pvec->nr; i++) {
 		struct page *page = pvec->pages[i];
 		struct pglist_data *pagepgdat = page_pgdat(page);
+		int nr_pages;
+
+		if (PageTransTail(page))
+			continue;
+
+		nr_pages = thp_nr_pages(page);
+		pgscanned += nr_pages;
 
-		pgscanned++;
 		if (pagepgdat != pgdat) {
 			if (pgdat)
 				spin_unlock_irq(&pgdat->lru_lock);
@@ -4288,7 +4294,7 @@ void check_move_unevictable_pages(struct
 			ClearPageUnevictable(page);
 			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
 			add_page_to_lru_list(page, lruvec, lru);
-			pgrescued++;
+			pgrescued += nr_pages;
 		}
 	}
 
_
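
To see the accounting change in isolation, here is a minimal standalone C
sketch of the before/after behaviour.  It is not kernel code: fake_page,
THP_SUBPAGES and the local counters are made-up stand-ins for the pagevec
walk and the unevictable_pgs_scanned/unevictable_pgs_rescued vm_events,
assuming one 2MB THP of 512 base pages passed down as subpages.

#include <stdbool.h>
#include <stdio.h>

#define THP_SUBPAGES 512	/* assumed: one 2MB THP of 4kB base pages */

struct fake_page {
	bool is_head;		/* compound head of the THP */
	bool is_tail;		/* tail subpage of the THP */
};

int main(void)
{
	struct fake_page pvec[THP_SUBPAGES];
	long scanned_old = 0, rescued_old = 0;
	long scanned_new = 0, rescued_new = 0;
	int i;

	/* Build a pagevec-like array holding every subpage of one THP. */
	for (i = 0; i < THP_SUBPAGES; i++) {
		pvec[i].is_head = (i == 0);
		pvec[i].is_tail = (i != 0);
	}

	for (i = 0; i < THP_SUBPAGES; i++) {
		/*
		 * Old behaviour: every subpage bumps "scanned", but only
		 * the head still looks unevictable, so "rescued" is bumped
		 * just once per THP.
		 */
		scanned_old++;
		if (pvec[i].is_head)
			rescued_old++;

		/*
		 * New behaviour: skip the tails entirely and account the
		 * whole THP (thp_nr_pages() in the kernel) on the head,
		 * for both counters.
		 */
		if (pvec[i].is_tail)
			continue;
		scanned_new += THP_SUBPAGES;
		rescued_new += THP_SUBPAGES;
	}

	printf("old: scanned=%ld rescued=%ld (inconsistent)\n",
	       scanned_old, rescued_old);
	printf("new: scanned=%ld rescued=%ld (consistent)\n",
	       scanned_new, rescued_new);
	return 0;
}

Built with e.g. "cc -o thp_vmstat_model thp_vmstat_model.c", it prints
scanned=512 rescued=1 for the old loop and scanned=512 rescued=512 for
the new one, matching the inconsistency described in the changelog.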