From: John Hubbard <jhubbard@nvidia.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 16/17] mm: Add isolate_lru_folio()
Date: Wed, 5 Jan 2022 00:44:38 -0800	[thread overview]
Message-ID: <1f3aaaca-052b-704b-aa72-5f19cf8038fa@nvidia.com> (raw)
In-Reply-To: <20220102215729.2943705-17-willy@infradead.org>

On 1/2/22 13:57, Matthew Wilcox (Oracle) wrote:
> Turn isolate_lru_page() into a wrapper around isolate_lru_folio().
> TestClearPageLRU() would have always failed on a tail page, so
> returning -EBUSY is the same behaviour.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   arch/powerpc/include/asm/mmu_context.h |  1 -
>   mm/folio-compat.c                      |  8 +++++
>   mm/internal.h                          |  3 +-
>   mm/vmscan.c                            | 43 ++++++++++++--------------
>   4 files changed, 29 insertions(+), 26 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
> index 9ba6b585337f..b9cab0a11421 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -21,7 +21,6 @@ extern void destroy_context(struct mm_struct *mm);
>   #ifdef CONFIG_SPAPR_TCE_IOMMU
>   struct mm_iommu_table_group_mem_t;
>   
> -extern int isolate_lru_page(struct page *page);	/* from internal.h */
>   extern bool mm_iommu_preregistered(struct mm_struct *mm);
>   extern long mm_iommu_new(struct mm_struct *mm,
>   		unsigned long ua, unsigned long entries,
> diff --git a/mm/folio-compat.c b/mm/folio-compat.c
> index 749555a232a8..782e766cd1ee 100644
> --- a/mm/folio-compat.c
> +++ b/mm/folio-compat.c
> @@ -7,6 +7,7 @@
>   #include <linux/migrate.h>
>   #include <linux/pagemap.h>
>   #include <linux/swap.h>
> +#include "internal.h"
>   
>   struct address_space *page_mapping(struct page *page)
>   {
> @@ -151,3 +152,10 @@ int try_to_release_page(struct page *page, gfp_t gfp)
>   	return filemap_release_folio(page_folio(page), gfp);
>   }
>   EXPORT_SYMBOL(try_to_release_page);
> +
> +int isolate_lru_page(struct page *page)
> +{
> +	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
> +		return -EBUSY;
> +	return isolate_lru_folio((struct folio *)page);

This cast is not great to have, but it is correct, given the above
constraints about tail pages...and I expect this sort of thing is
temporary, anyway?

Reviewed-by: John Hubbard <jhubbard@nvidia.com>


thanks,
-- 
John Hubbard
NVIDIA

> +}
> diff --git a/mm/internal.h b/mm/internal.h
> index e989d8ceec91..977d5116d327 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -178,7 +178,8 @@ extern unsigned long highest_memmap_pfn;
>   /*
>    * in mm/vmscan.c:
>    */
> -extern int isolate_lru_page(struct page *page);
> +int isolate_lru_page(struct page *page);
> +int isolate_lru_folio(struct folio *folio);
>   extern void putback_lru_page(struct page *page);
>   extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
>   
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index fb9584641ac7..ac2f5b76cdb2 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2168,45 +2168,40 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>   }
>   
>   /**
> - * isolate_lru_page - tries to isolate a page from its LRU list
> - * @page: page to isolate from its LRU list
> + * isolate_lru_folio - Try to isolate a folio from its LRU list.
> + * @folio: Folio to isolate from its LRU list.
>    *
> - * Isolates a @page from an LRU list, clears PageLRU and adjusts the
> - * vmstat statistic corresponding to whatever LRU list the page was on.
> + * Isolate a @folio from an LRU list and adjust the vmstat statistic
> + * corresponding to whatever LRU list the folio was on.
>    *
> - * Returns 0 if the page was removed from an LRU list.
> - * Returns -EBUSY if the page was not on an LRU list.
> - *
> - * The returned page will have PageLRU() cleared.  If it was found on
> - * the active list, it will have PageActive set.  If it was found on
> - * the unevictable list, it will have the PageUnevictable bit set. That flag
> + * The folio will have its LRU flag cleared.  If it was found on the
> + * active list, it will have the Active flag set.  If it was found on the
> + * unevictable list, it will have the Unevictable flag set.  These flags
>    * may need to be cleared by the caller before letting the page go.
>    *
> - * The vmstat statistic corresponding to the list on which the page was
> - * found will be decremented.
> - *
> - * Restrictions:
> + * Context:
>    *
>    * (1) Must be called with an elevated refcount on the page. This is a
> - *     fundamental difference from isolate_lru_pages (which is called
> + *     fundamental difference from isolate_lru_pages() (which is called
>    *     without a stable reference).
> - * (2) the lru_lock must not be held.
> - * (3) interrupts must be enabled.
> + * (2) The lru_lock must not be held.
> + * (3) Interrupts must be enabled.
> + *
> + * Return: 0 if the folio was removed from an LRU list.
> + * -EBUSY if the folio was not on an LRU list.
>    */
> -int isolate_lru_page(struct page *page)
> +int isolate_lru_folio(struct folio *folio)
>   {
> -	struct folio *folio = page_folio(page);
>   	int ret = -EBUSY;
>   
> -	VM_BUG_ON_PAGE(!page_count(page), page);
> -	WARN_RATELIMIT(PageTail(page), "trying to isolate tail page");
> +	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
>   
> -	if (TestClearPageLRU(page)) {
> +	if (folio_test_clear_lru(folio)) {
>   		struct lruvec *lruvec;
>   
> -		get_page(page);
> +		folio_get(folio);
>   		lruvec = folio_lruvec_lock_irq(folio);
> -		del_page_from_lru_list(page, lruvec);
> +		lruvec_del_folio(lruvec, folio);
>   		unlock_page_lruvec_irq(lruvec);
>   		ret = 0;
>   	}



