From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
    Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Yu Zhao
Subject: [PATCH 05/11] mm: don't pass "enum lru_list" to del_page_from_lru_list()
Date: Mon, 7 Dec 2020 15:09:43 -0700
Message-Id: <20201207220949.830352-6-yuzhao@google.com>
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
X-Mailer: git-send-email 2.29.2.576.ga3fc446d84-goog

The "enum lru_list" parameter to del_page_from_lru_list() is redundant:
it can be extracted from the "struct page" parameter by page_lru().

We just need to make sure that an existing PageActive() or
PageUnevictable() flag remains set until the function returns, because
page_lru() reads those flags to compute the list index. A few callers
don't conform, and simple reordering fixes them.

This patch leaves page_off_lru() looking a bit odd; the next patch
takes care of it.

Signed-off-by: Yu Zhao
---
 include/linux/mm_inline.h |  5 +++--
 mm/compaction.c           |  2 +-
 mm/mlock.c                |  3 +--
 mm/swap.c                 | 26 ++++++++++----------------
 mm/vmscan.c               |  4 ++--
 5 files changed, 17 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 130ba3201d3f..ffacc6273678 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -124,9 +124,10 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
 }
 
 static __always_inline void del_page_from_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
 	list_del(&page->lru);
-	update_lru_size(lruvec, lru, page_zonenum(page), -thp_nr_pages(page));
+	update_lru_size(lruvec, page_lru(page), page_zonenum(page),
+			-thp_nr_pages(page));
 }
 #endif
diff --git a/mm/compaction.c b/mm/compaction.c
index 8049d3530812..fd2058330497 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1034,7 +1034,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			low_pfn += compound_nr(page) - 1;
 
 		/* Successfully isolated */
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		mod_node_page_state(page_pgdat(page),
 				    NR_ISOLATED_ANON + page_is_file_lru(page),
 				    thp_nr_pages(page));
diff --git a/mm/mlock.c b/mm/mlock.c
index 55b3b3672977..73960bb3464d 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -278,8 +278,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 			 */
 			if (TestClearPageLRU(page)) {
 				lruvec = relock_page_lruvec_irq(page, lruvec);
-				del_page_from_lru_list(page, lruvec,
-							page_lru(page));
+				del_page_from_lru_list(page, lruvec);
 				continue;
 			} else
 				__munlock_isolation_failed(page);
diff --git a/mm/swap.c b/mm/swap.c
index e053b4db108a..d55a0c27d804 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -85,7 +85,8 @@ static void __page_cache_release(struct page *page)
 		lruvec = lock_page_lruvec_irqsave(page, &flags);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		__ClearPageLRU(page);
-		del_page_from_lru_list(page, lruvec, page_off_lru(page));
+		del_page_from_lru_list(page, lruvec);
+		page_off_lru(page);
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 	}
 	__ClearPageWaiters(page);
@@ -229,7 +230,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 {
 	if (!PageUnevictable(page)) {
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec);
 		__count_vm_events(PGROTATED, thp_nr_pages(page));
@@ -308,10 +309,9 @@ void lru_note_cost_page(struct page *page)
 static void __activate_page(struct page *page, struct lruvec *lruvec)
 {
 	if (!PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec, lru);
+		del_page_from_lru_list(page, lruvec);
 		SetPageActive(page);
 		add_page_to_lru_list(page, lruvec);
 		trace_mm_lru_activate(page);
@@ -518,8 +518,7 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
  */
 static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 {
-	int lru;
-	bool active;
+	bool active = PageActive(page);
 	int nr_pages = thp_nr_pages(page);
 
 	if (PageUnevictable(page))
@@ -529,10 +528,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 	if (page_mapped(page))
 		return;
 
-	active = PageActive(page);
-	lru = page_lru_base_type(page);
-
-	del_page_from_lru_list(page, lruvec, lru + active);
+	del_page_from_lru_list(page, lruvec);
 	ClearPageActive(page);
 	ClearPageReferenced(page);
 
@@ -563,10 +559,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 {
 	if (PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		add_page_to_lru_list(page, lruvec);
@@ -581,11 +576,9 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
 {
 	if (PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
-		bool active = PageActive(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec,
-					LRU_INACTIVE_ANON + active);
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		/*
@@ -919,7 +912,8 @@ void release_pages(struct page **pages, int nr)
 
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			__ClearPageLRU(page);
-			del_page_from_lru_list(page, lruvec, page_off_lru(page));
+			del_page_from_lru_list(page, lruvec);
+			page_off_lru(page);
 		}
 
 		__ClearPageWaiters(page);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8fc8f2c9d7ec..49451899037c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1764,7 +1764,7 @@ int isolate_lru_page(struct page *page)
 
 		get_page(page);
 		lruvec = lock_page_lruvec_irq(page);
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		unlock_page_lruvec_irq(lruvec);
 		ret = 0;
 	}
@@ -4281,8 +4281,8 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		lruvec = relock_page_lruvec_irq(page, lruvec);
 		if (page_evictable(page) && PageUnevictable(page)) {
 			VM_BUG_ON_PAGE(PageActive(page), page);
+			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
-			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
 			add_page_to_lru_list(page, lruvec);
 			pgrescued += nr_pages;
 		}
-- 
2.29.2.576.ga3fc446d84-goog
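
To see why the ordering matters: page_lru() derives the list index from
the page flags, so del_page_from_lru_list() must run while those flags
are still intact. A minimal sketch of that derivation, simplified from
the pre-folio include/linux/mm_inline.h helpers (illustrative, not part
of the patch):

	/* Sketch: map a page's flags to the LRU list it sits on. */
	static __always_inline enum lru_list page_lru(struct page *page)
	{
		enum lru_list lru;

		if (PageUnevictable(page))
			lru = LRU_UNEVICTABLE;
		else {
			/* LRU_INACTIVE_ANON or LRU_INACTIVE_FILE */
			lru = page_lru_base_type(page);
			if (PageActive(page))
				lru += LRU_ACTIVE;
		}
		return lru;
	}

Hence the reordering in the check_move_unevictable_pages() hunk above:
the page has to leave the list while PageUnevictable() is still set,

	del_page_from_lru_list(page, lruvec);	/* sees LRU_UNEVICTABLE */
	ClearPageUnevictable(page);
	add_page_to_lru_list(page, lruvec);

otherwise update_lru_size() would decrement the count of a list the
page was never on.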