From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 17 Sep 2020 21:00:41 -0600
In-Reply-To: <20200918030051.650890-1-yuzhao@google.com>
Message-Id: <20200918030051.650890-4-yuzhao@google.com>
Mime-Version: 1.0
References: <20200918030051.650890-1-yuzhao@google.com>
X-Mailer: git-send-email 2.28.0.681.g6f77f65b4e-goog
Subject: [PATCH 03/13] mm: move __ClearPageLRU() into page_off_lru()
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton, Michal Hocko
Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Yu Zhao
Content-Type: text/plain; charset="UTF-8"

Now we have a total of three places that free LRU pages when their
references become zero (after we drop the reference from isolation).
Before this patch, they all do:

	__ClearPageLRU()
	page_off_lru()
	del_page_from_lru_list()

After this patch, they become:

	page_off_lru()
		__ClearPageLRU()
	del_page_from_lru_list()

This change should have no side effects.
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h | 1 +
 mm/swap.c                 | 2 --
 mm/vmscan.c               | 1 -
 3 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 8fc71e9d7bb0..be9418425e41 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -92,6 +92,7 @@ static __always_inline enum lru_list page_off_lru(struct page *page)
 {
 	enum lru_list lru;
 
+	__ClearPageLRU(page);
 	if (PageUnevictable(page)) {
 		__ClearPageUnevictable(page);
 		lru = LRU_UNEVICTABLE;
diff --git a/mm/swap.c b/mm/swap.c
index 40bf20a75278..8362083f00c9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -86,7 +86,6 @@ static void __page_cache_release(struct page *page)
 		spin_lock_irqsave(&pgdat->lru_lock, flags);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
-		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
@@ -895,7 +894,6 @@ void release_pages(struct page **pages, int nr)
 
 			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
-			__ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		}
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f257d2f61574..f9a186a96410 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1862,7 +1862,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 
 		if (put_page_testzero(page)) {
-			__ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 
 			if (unlikely(PageCompound(page))) {
-- 
2.28.0.681.g6f77f65b4e-goog