From: John Hubbard <jhubbard@nvidia.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Cc: Christoph Hellwig <hch@infradead.org>,
	William Kucharski <william.kucharski@oracle.com>,
	linux-kernel@vger.kernel.org, Jason Gunthorpe <jgg@ziepe.ca>
Subject: Re: [PATCH v2 27/28] mm: Add isolate_lru_folio()
Date: Mon, 10 Jan 2022 23:49:02 -0800	[thread overview]
Message-ID: <c5aabc87-0f1d-5967-f60e-924322456e60@nvidia.com> (raw)
In-Reply-To: <20220110042406.499429-28-willy@infradead.org>

On 1/9/22 20:24, Matthew Wilcox (Oracle) wrote:
> Turn isolate_lru_page() into a wrapper around isolate_lru_folio().
> TestClearPageLRU() would have always failed on a tail page, so
> returning -EBUSY is the same behaviour.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   arch/powerpc/include/asm/mmu_context.h |  1 -
>   mm/folio-compat.c                      |  8 +++++
>   mm/internal.h                          |  3 +-
>   mm/vmscan.c                            | 43 ++++++++++++--------------
>   4 files changed, 29 insertions(+), 26 deletions(-)

Reviewed-by: John Hubbard <jhubbard@nvidia.com>


thanks,
-- 
John Hubbard
NVIDIA

> 
> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
> index 9ba6b585337f..b9cab0a11421 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -21,7 +21,6 @@ extern void destroy_context(struct mm_struct *mm);
>   #ifdef CONFIG_SPAPR_TCE_IOMMU
>   struct mm_iommu_table_group_mem_t;
>   
> -extern int isolate_lru_page(struct page *page);	/* from internal.h */
>   extern bool mm_iommu_preregistered(struct mm_struct *mm);
>   extern long mm_iommu_new(struct mm_struct *mm,
>   		unsigned long ua, unsigned long entries,
> diff --git a/mm/folio-compat.c b/mm/folio-compat.c
> index 749555a232a8..782e766cd1ee 100644
> --- a/mm/folio-compat.c
> +++ b/mm/folio-compat.c
> @@ -7,6 +7,7 @@
>   #include <linux/migrate.h>
>   #include <linux/pagemap.h>
>   #include <linux/swap.h>
> +#include "internal.h"
>   
>   struct address_space *page_mapping(struct page *page)
>   {
> @@ -151,3 +152,10 @@ int try_to_release_page(struct page *page, gfp_t gfp)
>   	return filemap_release_folio(page_folio(page), gfp);
>   }
>   EXPORT_SYMBOL(try_to_release_page);
> +
> +int isolate_lru_page(struct page *page)
> +{
> +	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
> +		return -EBUSY;
> +	return isolate_lru_folio((struct folio *)page);
> +}
> diff --git a/mm/internal.h b/mm/internal.h
> index 9a72d1ecdab4..8b90db90e7f2 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -157,7 +157,8 @@ extern unsigned long highest_memmap_pfn;
>   /*
>    * in mm/vmscan.c:
>    */
> -extern int isolate_lru_page(struct page *page);
> +int isolate_lru_page(struct page *page);
> +int isolate_lru_folio(struct folio *folio);
>   extern void putback_lru_page(struct page *page);
>   extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
>   
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index fb9584641ac7..ac2f5b76cdb2 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2168,45 +2168,40 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>   }
>   
>   /**
> - * isolate_lru_page - tries to isolate a page from its LRU list
> - * @page: page to isolate from its LRU list
> + * isolate_lru_folio - Try to isolate a folio from its LRU list.
> + * @folio: Folio to isolate from its LRU list.
>    *
> - * Isolates a @page from an LRU list, clears PageLRU and adjusts the
> - * vmstat statistic corresponding to whatever LRU list the page was on.
> + * Isolate a @folio from an LRU list and adjust the vmstat statistic
> + * corresponding to whatever LRU list the folio was on.
>    *
> - * Returns 0 if the page was removed from an LRU list.
> - * Returns -EBUSY if the page was not on an LRU list.
> - *
> - * The returned page will have PageLRU() cleared.  If it was found on
> - * the active list, it will have PageActive set.  If it was found on
> - * the unevictable list, it will have the PageUnevictable bit set. That flag
> + * The folio will have its LRU flag cleared.  If it was found on the
> + * active list, it will have the Active flag set.  If it was found on the
> + * unevictable list, it will have the Unevictable flag set.  These flags
>    * may need to be cleared by the caller before letting the page go.
>    *
> - * The vmstat statistic corresponding to the list on which the page was
> - * found will be decremented.
> - *
> - * Restrictions:
> + * Context:
>    *
>    * (1) Must be called with an elevated refcount on the page. This is a
> - *     fundamental difference from isolate_lru_pages (which is called
> + *     fundamental difference from isolate_lru_pages() (which is called
>    *     without a stable reference).
> - * (2) the lru_lock must not be held.
> - * (3) interrupts must be enabled.
> + * (2) The lru_lock must not be held.
> + * (3) Interrupts must be enabled.
> + *
> + * Return: 0 if the folio was removed from an LRU list.
> + * -EBUSY if the folio was not on an LRU list.
>    */
> -int isolate_lru_page(struct page *page)
> +int isolate_lru_folio(struct folio *folio)
>   {
> -	struct folio *folio = page_folio(page);
>   	int ret = -EBUSY;
>   
> -	VM_BUG_ON_PAGE(!page_count(page), page);
> -	WARN_RATELIMIT(PageTail(page), "trying to isolate tail page");
> +	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
>   
> -	if (TestClearPageLRU(page)) {
> +	if (folio_test_clear_lru(folio)) {
>   		struct lruvec *lruvec;
>   
> -		get_page(page);
> +		folio_get(folio);
>   		lruvec = folio_lruvec_lock_irq(folio);
> -		del_page_from_lru_list(page, lruvec);
> +		lruvec_del_folio(lruvec, folio);
>   		unlock_page_lruvec_irq(lruvec);
>   		ret = 0;
>   	}

