From: Miaohe Lin <linmiaohe@huawei.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
<linux-fsdevel@vger.kernel.org>, <linux-mm@kvack.org>
Subject: Re: [PATCH 06/10] mm/truncate: Split invalidate_inode_page() into mapping_shrink_folio()
Date: Tue, 15 Feb 2022 17:37:21 +0800 [thread overview]
Message-ID: <9685489d-0a77-7b3e-b78e-c54aab45a9da@huawei.com> (raw)
In-Reply-To: <20220214200017.3150590-7-willy@infradead.org>
On 2022/2/15 4:00, Matthew Wilcox (Oracle) wrote:
> Some of the callers already have the address_space and can avoid calling
> folio_mapping() and checking if the folio was already truncated. Also
> add kernel-doc and fix the return type (in case we ever support folios
> larger than 4TB).
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
LGTM. Thanks.
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> include/linux/mm.h | 1 -
> mm/internal.h | 1 +
> mm/memory-failure.c | 4 ++--
> mm/truncate.c | 34 +++++++++++++++++++++++-----------
> 4 files changed, 26 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 4637368d9455..53b301dc5c14 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1853,7 +1853,6 @@ extern void truncate_setsize(struct inode *inode, loff_t newsize);
> void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to);
> void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
> int generic_error_remove_page(struct address_space *mapping, struct page *page);
> -int invalidate_inode_page(struct page *page);
>
> #ifdef CONFIG_MMU
> extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
> diff --git a/mm/internal.h b/mm/internal.h
> index b7a2195c12b1..927a17d58b85 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -100,6 +100,7 @@ void filemap_free_folio(struct address_space *mapping, struct folio *folio);
> int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
> bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
> loff_t end);
> +long invalidate_inode_page(struct page *page);
>
> /**
> * folio_evictable - Test whether a folio is evictable.
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 97a9ed8f87a9..0b72a936b8dd 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2139,7 +2139,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
> */
> static int __soft_offline_page(struct page *page)
> {
> - int ret = 0;
> + long ret = 0;
> unsigned long pfn = page_to_pfn(page);
> struct page *hpage = compound_head(page);
> char const *msg_page[] = {"page", "hugepage"};
> @@ -2196,7 +2196,7 @@ static int __soft_offline_page(struct page *page)
> if (!list_empty(&pagelist))
> putback_movable_pages(&pagelist);
>
> - pr_info("soft offline: %#lx: %s migration failed %d, type %pGp\n",
> + pr_info("soft offline: %#lx: %s migration failed %ld, type %pGp\n",
> pfn, msg_page[huge], ret, &page->flags);
> if (ret > 0)
> ret = -EBUSY;
> diff --git a/mm/truncate.c b/mm/truncate.c
> index 8aa86e294775..b1bdc61198f6 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -273,18 +273,9 @@ int generic_error_remove_page(struct address_space *mapping, struct page *page)
> }
> EXPORT_SYMBOL(generic_error_remove_page);
>
> -/*
> - * Safely invalidate one page from its pagecache mapping.
> - * It only drops clean, unused pages. The page must be locked.
> - *
> - * Returns 1 if the page is successfully invalidated, otherwise 0.
> - */
> -int invalidate_inode_page(struct page *page)
> +static long mapping_shrink_folio(struct address_space *mapping,
> + struct folio *folio)
> {
> - struct folio *folio = page_folio(page);
> - struct address_space *mapping = folio_mapping(folio);
> - if (!mapping)
> - return 0;
> if (folio_test_dirty(folio) || folio_test_writeback(folio))
> return 0;
> if (folio_ref_count(folio) > folio_nr_pages(folio) + 1)
> @@ -295,6 +286,27 @@ int invalidate_inode_page(struct page *page)
> return remove_mapping(mapping, folio);
> }
>
> +/**
> + * invalidate_inode_page() - Remove an unused page from the pagecache.
> + * @page: The page to remove.
> + *
> + * Safely invalidate one page from its pagecache mapping.
> + * It only drops clean, unused pages.
> + *
> + * Context: Page must be locked.
> + * Return: The number of pages successfully removed.
> + */
> +long invalidate_inode_page(struct page *page)
> +{
> + struct folio *folio = page_folio(page);
> + struct address_space *mapping = folio_mapping(folio);
> +
> + /* The page may have been truncated before it was locked */
> + if (!mapping)
> + return 0;
> + return mapping_shrink_folio(mapping, folio);
> +}
> +
> /**
> * truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets
> * @mapping: mapping to truncate
>
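For readers skimming the archive, the shape of this refactor may be easier to see outside the diff. Below is a minimal, self-compiling C sketch of the pattern: a thin public wrapper that resolves the mapping once and delegates to a static helper taking the mapping explicitly. The `struct mapping` / `struct folio` types, field names, and function names here are hypothetical stand-ins, not the real kernel `struct address_space` / `struct folio` API.

```c
#include <stddef.h>

/* Toy stand-ins for the kernel types (NOT the real definitions). */
struct mapping { int dirty; };
struct folio  { struct mapping *mapping; long nr_pages; };

/*
 * Mirrors mapping_shrink_folio(): callers that already hold the
 * address_space pass it in directly, skipping the lookup and the
 * "already truncated" check.
 */
static long shrink_folio(struct mapping *mapping, struct folio *folio)
{
	if (mapping->dirty)
		return 0;		/* only clean, unused folios are dropped */
	return folio->nr_pages;	/* a count, hence the long return type */
}

/*
 * Mirrors the new invalidate_inode_page(): look the mapping up once,
 * handle the truncated-before-locked case, then delegate.
 */
static long invalidate_page(struct folio *folio)
{
	if (!folio->mapping)	/* truncated before it was locked */
		return 0;
	return shrink_folio(folio->mapping, folio);
}
```

The `long` return also illustrates the commit-message point about the count: with one page per return unit, an `int` would overflow for folios larger than 4TB of 4KB pages.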