From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 02/10] mm/truncate: Inline invalidate_complete_page() into its one caller
From: Miaohe Lin <linmiaohe@huawei.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
References: <20220214200017.3150590-1-willy@infradead.org> <20220214200017.3150590-3-willy@infradead.org>
Message-ID:
<13f176e8-179f-8878-9722-8835f8c130c1@huawei.com>
Date: Wed, 16 Feb 2022 10:45:11 +0800
In-Reply-To: <20220214200017.3150590-3-willy@infradead.org>

On 2022/2/15 4:00, Matthew Wilcox (Oracle) wrote:
> invalidate_inode_page() is the only caller of invalidate_complete_page()
> and inlining it reveals that the first check is unnecessary (because we
> hold the page locked, and we just retrieved the mapping from the page).
> Actually, it does make a difference, in that tail pages no longer fail
> at this check, so it's now possible to remove a tail page from a mapping.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

LGTM. Thanks.
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

>  mm/truncate.c | 28 +++++-----------------------
>  1 file changed, 5 insertions(+), 23 deletions(-)
>
> diff --git a/mm/truncate.c b/mm/truncate.c
> index 9dbf0b75da5d..e5e2edaa0b76 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -193,27 +193,6 @@ static void truncate_cleanup_folio(struct folio *folio)
>  	folio_clear_mappedtodisk(folio);
>  }
>
> -/*
> - * This is for invalidate_mapping_pages(). That function can be called at
> - * any time, and is not supposed to throw away dirty pages. But pages can
> - * be marked dirty at any time too, so use remove_mapping which safely
> - * discards clean, unused pages.
> - *
> - * Returns non-zero if the page was successfully invalidated.
> - */
> -static int
> -invalidate_complete_page(struct address_space *mapping, struct page *page)
> -{
> -
> -	if (page->mapping != mapping)
> -		return 0;
> -
> -	if (page_has_private(page) && !try_to_release_page(page, 0))
> -		return 0;
> -
> -	return remove_mapping(mapping, page);
> -}
> -
>  int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
>  {
>  	if (folio->mapping != mapping)
> @@ -309,7 +288,10 @@ int invalidate_inode_page(struct page *page)
>  		return 0;
>  	if (page_mapped(page))
>  		return 0;
> -	return invalidate_complete_page(mapping, page);
> +	if (page_has_private(page) && !try_to_release_page(page, 0))
> +		return 0;
> +
> +	return remove_mapping(mapping, page);
>  }
>
>  /**
> @@ -584,7 +566,7 @@ void invalidate_mapping_pagevec(struct address_space *mapping,
>  }
>
>  /*
> - * This is like invalidate_complete_page(), except it ignores the page's
> + * This is like invalidate_inode_page(), except it ignores the page's
>   * refcount. We do this because invalidate_inode_pages2() needs stronger
>   * invalidation guarantees, and cannot afford to leave pages behind because
>   * shrink_page_list() has a temp ref on them, or because they're transiently
>