Date: Thu, 17 Sep 2020 21:00:43 -0600
In-Reply-To: <20200918030051.650890-1-yuzhao@google.com>
Message-Id: <20200918030051.650890-6-yuzhao@google.com>
MIME-Version: 1.0
References: <20200918030051.650890-1-yuzhao@google.com>
X-Mailer: git-send-email 2.28.0.681.g6f77f65b4e-goog
Subject: [PATCH 05/13] mm: don't pass enum lru_list to lru list addition functions
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton, Michal Hocko
Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner, Vladimir Davydov,
    Roman Gushchin, Shakeel Butt, Chris Down, Yafang Shao, Vlastimil Babka,
    Huang Ying, Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov,
    Minchan Kim, Jaewon Kim, cgroups@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Yu Zhao
Content-Type: text/plain; charset="UTF-8"

The enum lru_list parameter to add_page_to_lru_list() and
add_page_to_lru_list_tail() is redundant: it can be derived from the
struct page parameter by page_lru(). The one caveat is that PageActive()
and PageUnevictable() must already be set or cleared correctly by the
time these two functions are called, because page_lru() computes the
target list from those flags; all existing callers satisfy this. This
change should have no side effects.
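Purely for illustration (not part of the patch), here is what the
conversion looks like at a typical call site, based on the
__activate_page() hunk below; the point is that the page flags must be
final before the add, since the helper now derives the list itself:

	/* Before: the caller computes the target list and passes it in. */
	del_page_from_lru_list(page, lruvec, lru);
	SetPageActive(page);
	lru += LRU_ACTIVE;
	add_page_to_lru_list(page, lruvec, lru);

	/*
	 * After: add_page_to_lru_list() calls page_lru() internally, so
	 * SetPageActive() must come first and the "lru += LRU_ACTIVE"
	 * step disappears.
	 */
	del_page_from_lru_list(page, lruvec, lru);
	SetPageActive(page);
	add_page_to_lru_list(page, lruvec);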
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h |  8 ++++++--
 mm/swap.c                 | 18 ++++++++----------
 mm/vmscan.c               |  6 ++----
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index bfa30c752804..199ff51bf2a0 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -107,15 +107,19 @@ static __always_inline enum lru_list page_lru(struct page *page)
 }
 
 static __always_inline void add_page_to_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
+	enum lru_list lru = page_lru(page);
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add(&page->lru, &lruvec->lists[lru]);
 }
 
 static __always_inline void add_page_to_lru_list_tail(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
+	enum lru_list lru = page_lru(page);
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add_tail(&page->lru, &lruvec->lists[lru]);
 }
diff --git a/mm/swap.c b/mm/swap.c
index 8362083f00c9..8d0e31d43852 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -238,7 +238,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && !PageUnevictable(page)) {
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
-		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
+		add_page_to_lru_list_tail(page, lruvec);
 		(*pgmoved) += thp_nr_pages(page);
 	}
 }
@@ -322,8 +322,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 
 		del_page_from_lru_list(page, lruvec, lru);
 		SetPageActive(page);
-		lru += LRU_ACTIVE;
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 		trace_mm_lru_activate(page);
 
 		__count_vm_events(PGACTIVATE, nr_pages);
@@ -555,14 +554,14 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		 * It can make readahead confusing. But race window
 		 * is _really_ small and it's non-critical problem.
 		 */
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 		SetPageReclaim(page);
 	} else {
 		/*
 		 * The page's writeback ends up during pagevec
 		 * We moves tha page into tail of inactive.
 		 */
-		add_page_to_lru_list_tail(page, lruvec, lru);
+		add_page_to_lru_list_tail(page, lruvec);
 		__count_vm_events(PGROTATED, nr_pages);
 	}
 
@@ -583,7 +582,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
@@ -609,7 +608,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 		 * anonymous pages
 		 */
 		ClearPageSwapBacked(page);
-		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);
+		add_page_to_lru_list(page, lruvec);
 
 		__count_vm_events(PGLAZYFREE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE,
@@ -955,8 +954,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 		 * Put page_tail on the list at the correct position
 		 * so they all end up in order.
 		 */
-		add_page_to_lru_list_tail(page_tail, lruvec,
-					  page_lru(page_tail));
+		add_page_to_lru_list_tail(page_tail, lruvec);
 	}
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -1011,7 +1009,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
 	}
 
-	add_page_to_lru_list(page, lruvec, lru);
+	add_page_to_lru_list(page, lruvec);
 	trace_mm_lru_insertion(page, lru);
 }
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f9a186a96410..895be9fb96ec 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1859,7 +1859,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		SetPageLRU(page);
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 
 		if (put_page_testzero(page)) {
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
@@ -4276,12 +4276,10 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 			continue;
 
 		if (page_evictable(page)) {
-			enum lru_list lru = page_lru_base_type(page);
-
 			VM_BUG_ON_PAGE(PageActive(page), page);
 			ClearPageUnevictable(page);
 			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
-			add_page_to_lru_list(page, lruvec, lru);
+			add_page_to_lru_list(page, lruvec);
 			pgrescued++;
 		}
 	}
-- 
2.28.0.681.g6f77f65b4e-goog