From: John Hubbard <jhubbard@nvidia.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 14/17] gup: Convert for_each_compound_head() to gup_for_each_folio()
Date: Wed, 5 Jan 2022 00:17:46 -0800
Message-ID: <a0d968aa-e7c5-bbcc-4261-8767c9ca3ecb@nvidia.com>
In-Reply-To: <20220102215729.2943705-15-willy@infradead.org>

On 1/2/22 13:57, Matthew Wilcox (Oracle) wrote:
> This macro can be considerably simplified by returning the folio from
> gup_folio_next() instead of void from compound_next().  Convert both
> callers to work on folios instead of pages.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   mm/gup.c | 47 ++++++++++++++++++++++++-----------------------
>   1 file changed, 24 insertions(+), 23 deletions(-)
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index 7bd1e4a2648a..eaffa6807609 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -239,31 +239,29 @@ static inline void compound_range_next(unsigned long i, unsigned long npages,
>   	     __i < __npages; __i += __ntails, \
>   	     compound_range_next(__i, __npages, __list, &(__head), &(__ntails)))
>   
> -static inline void compound_next(unsigned long i, unsigned long npages,
> -				 struct page **list, struct page **head,
> -				 unsigned int *ntails)
> +static inline struct folio *gup_folio_next(unsigned long i,
> +		unsigned long npages, struct page **list, unsigned int *ntails)
>   {
> -	struct page *page;
> +	struct folio *folio;
>   	unsigned int nr;
>   
>   	if (i >= npages)
> -		return;
> +		return NULL;
>   
> -	page = compound_head(list[i]);
> +	folio = page_folio(list[i]);
>   	for (nr = i + 1; nr < npages; nr++) {
> -		if (compound_head(list[nr]) != page)
> +		if (page_folio(list[nr]) != folio)
>   			break;
>   	}
>   
> -	*head = page;
>   	*ntails = nr - i;
> +	return folio;
>   }
>   
> -#define for_each_compound_head(__i, __list, __npages, __head, __ntails) \
> -	for (__i = 0, \
> -	     compound_next(__i, __npages, __list, &(__head), &(__ntails)); \
> -	     __i < __npages; __i += __ntails, \
> -	     compound_next(__i, __npages, __list, &(__head), &(__ntails)))
> +#define gup_for_each_folio(__i, __list, __npages, __folio, __ntails) \
> +	for (__i = 0; \
> +	     (__folio = gup_folio_next(__i, __npages, __list, &(__ntails))) != NULL; \
> +	     __i += __ntails)


This is nice. I find these pre-existing macros to be really quite
horrible, but I was unable to suggest anything better at the time, so
it's good to see the simplification. :)
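
For what it's worth, expanding the new macro by hand makes the win
obvious. The unpin_user_pages() loop further down, for example, ends up
roughly as (hand-expanded sketch, not part of the patch):

	for (index = 0;
	     (folio = gup_folio_next(index, npages, pages, &nr)) != NULL;
	     index += nr)
		gup_put_folio(folio, nr, FOLL_PIN);

No more calling the helper from both the init and the increment clauses,
and the NULL return doubles as the loop termination condition.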

>   
>   /**
>    * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
> @@ -291,15 +289,15 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
>   				 bool make_dirty)
>   {
>   	unsigned long index;
> -	struct page *head;
> -	unsigned int ntails;
> +	struct folio *folio;
> +	unsigned int nr;
>   
>   	if (!make_dirty) {
>   		unpin_user_pages(pages, npages);
>   		return;
>   	}
>   
> -	for_each_compound_head(index, pages, npages, head, ntails) {
> +	gup_for_each_folio(index, pages, npages, folio, nr) {
>   		/*
>   		 * Checking PageDirty at this point may race with
>   		 * clear_page_dirty_for_io(), but that's OK. Two key
> @@ -320,9 +318,12 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
>   		 * written back, so it gets written back again in the
>   		 * next writeback cycle. This is harmless.
>   		 */
> -		if (!PageDirty(head))
> -			set_page_dirty_lock(head);
> -		put_compound_head(head, ntails, FOLL_PIN);
> +		if (!folio_test_dirty(folio)) {
> +			folio_lock(folio);
> +			folio_mark_dirty(folio);
> +			folio_unlock(folio);

At some point, maybe even here, I suspect that creating the folio
version of set_page_dirty_lock() would help. I'm sure you have
a better feel for whether it helps, after doing all of this conversion
work, but it just sort of jumped out at me as surprising to see it
in this form.
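
Something along these lines (untested sketch; the name is just a guess,
since no such helper exists as of this series) would keep the caller to a
single call, mirroring what set_page_dirty_lock() does for pages:

	static bool folio_mark_dirty_lock(struct folio *folio)
	{
		bool ret;

		folio_lock(folio);
		ret = folio_mark_dirty(folio);
		folio_unlock(folio);
		return ret;
	}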

In any case, this all looks correct, so


Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
-- 
John Hubbard
NVIDIA

> +		}
> +		gup_put_folio(folio, nr, FOLL_PIN);
>   	}
>   }
>   EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
> @@ -375,8 +376,8 @@ EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
>   void unpin_user_pages(struct page **pages, unsigned long npages)
>   {
>   	unsigned long index;
> -	struct page *head;
> -	unsigned int ntails;
> +	struct folio *folio;
> +	unsigned int nr;
>   
>   	/*
>   	 * If this WARN_ON() fires, then the system *might* be leaking pages (by
> @@ -386,8 +387,8 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
>   	if (WARN_ON(IS_ERR_VALUE(npages)))
>   		return;
>   
> -	for_each_compound_head(index, pages, npages, head, ntails)
> -		put_compound_head(head, ntails, FOLL_PIN);
> +	gup_for_each_folio(index, pages, npages, folio, nr)
> +		gup_put_folio(folio, nr, FOLL_PIN);
>   }
>   EXPORT_SYMBOL(unpin_user_pages);
>   



Thread overview: 72+ messages
2022-01-02 21:57 [PATCH 00/17] Convert GUP to folios Matthew Wilcox (Oracle)
2022-01-02 21:57 ` [PATCH 01/17] mm: Add folio_put_refs() Matthew Wilcox (Oracle)
2022-01-04  8:00   ` Christoph Hellwig
2022-01-04 21:15   ` John Hubbard
2022-01-02 21:57 ` [PATCH 02/17] mm: Add folio_pincount_available() Matthew Wilcox (Oracle)
2022-01-04  8:01   ` Christoph Hellwig
2022-01-04 18:25     ` Matthew Wilcox
2022-01-04 21:40   ` John Hubbard
2022-01-05  5:04     ` Matthew Wilcox
2022-01-05  6:24       ` John Hubbard
2022-01-02 21:57 ` [PATCH 03/17] mm: Add folio_pincount_ptr() Matthew Wilcox (Oracle)
2022-01-04  8:02   ` Christoph Hellwig
2022-01-04 21:43   ` John Hubbard
2022-01-06 21:57   ` William Kucharski
2022-01-02 21:57 ` [PATCH 04/17] mm: Convert page_maybe_dma_pinned() to use a folio Matthew Wilcox (Oracle)
2022-01-04  8:03   ` Christoph Hellwig
2022-01-04 22:01   ` John Hubbard
2022-01-02 21:57 ` [PATCH 05/17] gup: Add try_get_folio() Matthew Wilcox (Oracle)
2022-01-04  8:18   ` Christoph Hellwig
2022-01-05  1:25   ` John Hubbard
2022-01-05  7:00     ` John Hubbard
2022-01-07 18:23     ` Jason Gunthorpe
2022-01-08  1:37     ` Matthew Wilcox
2022-01-08  2:36       ` John Hubbard
2022-01-10 15:01       ` Jason Gunthorpe
2022-01-02 21:57 ` [PATCH 06/17] mm: Remove page_cache_add_speculative() and page_cache_get_speculative() Matthew Wilcox (Oracle)
2022-01-04  8:18   ` Christoph Hellwig
2022-01-05  1:29   ` John Hubbard
2022-01-02 21:57 ` [PATCH 07/17] gup: Add gup_put_folio() Matthew Wilcox (Oracle)
2022-01-04  8:22   ` Christoph Hellwig
2022-01-05  6:52   ` John Hubbard
2022-01-06 22:05   ` William Kucharski
2022-01-02 21:57 ` [PATCH 08/17] gup: Add try_grab_folio() Matthew Wilcox (Oracle)
2022-01-04  8:24   ` Christoph Hellwig
2022-01-05  7:06   ` John Hubbard
2022-01-02 21:57 ` [PATCH 09/17] gup: Convert gup_pte_range() to use a folio Matthew Wilcox (Oracle)
2022-01-04  8:25   ` Christoph Hellwig
2022-01-05  7:36   ` John Hubbard
2022-01-05  7:52     ` Matthew Wilcox
2022-01-05  7:57       ` John Hubbard
2022-01-02 21:57 ` [PATCH 10/17] gup: Convert gup_hugepte() " Matthew Wilcox (Oracle)
2022-01-04  8:26   ` Christoph Hellwig
2022-01-05  7:46   ` John Hubbard
2022-01-02 21:57 ` [PATCH 11/17] gup: Convert gup_huge_pmd() " Matthew Wilcox (Oracle)
2022-01-04  8:26   ` Christoph Hellwig
2022-01-05  7:50   ` John Hubbard
2022-01-02 21:57 ` [PATCH 12/17] gup: Convert gup_huge_pud() " Matthew Wilcox (Oracle)
2022-01-05  7:58   ` John Hubbard
2022-01-02 21:57 ` [PATCH 13/17] gup: Convert gup_huge_pgd() " Matthew Wilcox (Oracle)
2022-01-05  7:58   ` John Hubbard
2022-01-02 21:57 ` [PATCH 14/17] gup: Convert for_each_compound_head() to gup_for_each_folio() Matthew Wilcox (Oracle)
2022-01-04  8:32   ` Christoph Hellwig
2022-01-05  8:17   ` John Hubbard [this message]
2022-01-09  4:39     ` Matthew Wilcox
2022-01-09  8:01       ` John Hubbard
2022-01-10 15:22         ` Jan Kara
2022-01-10 15:52           ` Matthew Wilcox
2022-01-10 20:36             ` Jan Kara
2022-01-10 21:10               ` Matthew Wilcox
2022-01-17 12:07                 ` Jan Kara
2022-01-02 21:57 ` [PATCH 15/17] gup: Convert for_each_compound_range() to gup_for_each_folio_range() Matthew Wilcox (Oracle)
2022-01-04  8:35   ` Christoph Hellwig
2022-01-05  8:30   ` John Hubbard
2022-01-02 21:57 ` [PATCH 16/17] mm: Add isolate_lru_folio() Matthew Wilcox (Oracle)
2022-01-04  8:36   ` Christoph Hellwig
2022-01-05  8:44   ` John Hubbard
2022-01-06  0:34     ` Matthew Wilcox
2022-01-02 21:57 ` [PATCH 17/17] gup: Convert check_and_migrate_movable_pages() to use a folio Matthew Wilcox (Oracle)
2022-01-04  8:37   ` Christoph Hellwig
2022-01-05  9:00   ` John Hubbard
2022-01-06 22:12 ` [PATCH 00/17] Convert GUP to folios William Kucharski
2022-01-07 18:54 ` Jason Gunthorpe
