From: John Hubbard <jhubbard@nvidia.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Cc: Christoph Hellwig <hch@infradead.org>,
William Kucharski <william.kucharski@oracle.com>,
linux-kernel@vger.kernel.org, Jason Gunthorpe <jgg@ziepe.ca>
Subject: Re: [PATCH v2 25/28] gup: Convert compound_next() to gup_folio_next()
Date: Mon, 10 Jan 2022 23:41:31 -0800 [thread overview]
Message-ID: <2f400a3b-d58b-9bd4-df8f-eb3de06ade2a@nvidia.com> (raw)
In-Reply-To: <20220110042406.499429-26-willy@infradead.org>
On 1/9/22 20:24, Matthew Wilcox (Oracle) wrote:
> Convert both callers to work on folios instead of pages.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> mm/gup.c | 41 ++++++++++++++++++++++-------------------
> 1 file changed, 22 insertions(+), 19 deletions(-)
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
thanks,
--
John Hubbard
NVIDIA
>
> diff --git a/mm/gup.c b/mm/gup.c
> index b5786e83c418..0cf2d5fd8d2d 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -220,20 +220,20 @@ static inline struct page *compound_range_next(unsigned long i,
> return page;
> }
>
> -static inline struct page *compound_next(unsigned long i,
> +static inline struct folio *gup_folio_next(unsigned long i,
> unsigned long npages, struct page **list, unsigned int *ntails)
> {
> - struct page *page;
> + struct folio *folio;
> unsigned int nr;
>
> - page = compound_head(list[i]);
> + folio = page_folio(list[i]);
> for (nr = i + 1; nr < npages; nr++) {
> - if (compound_head(list[nr]) != page)
> + if (page_folio(list[nr]) != folio)
> break;
> }
>
> *ntails = nr - i;
> - return page;
> + return folio;
> }
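
(For anyone following along: the batching logic above can be modeled in
plain user-space C. This is a simplified sketch, not kernel code — the
`page_folio()` stand-in and the 4-pages-per-folio mapping are illustrative
assumptions, with pages represented as plain indices.)

```c
#include <assert.h>

/* Illustrative stand-in: map a "page" index to a "folio" id by integer
 * division, i.e. 4 consecutive pages share one folio. */
#define PAGES_PER_FOLIO 4

static long model_page_folio(long page)
{
	return page / PAGES_PER_FOLIO;
}

/* Mirrors the structure of gup_folio_next(): return the folio for
 * list[i], and set *ntails to the number of consecutive entries in
 * list[] that belong to that same folio. */
static long folio_next(unsigned long i, unsigned long npages,
		       const long *list, unsigned int *ntails)
{
	long folio = model_page_folio(list[i]);
	unsigned long nr;

	for (nr = i + 1; nr < npages; nr++) {
		if (model_page_folio(list[nr]) != folio)
			break;
	}

	*ntails = nr - i;
	return folio;
}
```

So for a list like {0, 1, 2, 4, 5, 9} the walk yields three batches:
pages 0-2 (folio 0, ntails 3), pages 4-5 (folio 1, ntails 2), and
page 9 (folio 2, ntails 1).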
>
> /**
> @@ -261,17 +261,17 @@ static inline struct page *compound_next(unsigned long i,
> void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
> bool make_dirty)
> {
> - unsigned long index;
> - struct page *head;
> - unsigned int ntails;
> + unsigned long i;
> + struct folio *folio;
> + unsigned int nr;
>
> if (!make_dirty) {
> unpin_user_pages(pages, npages);
> return;
> }
>
> - for (index = 0; index < npages; index += ntails) {
> - head = compound_next(index, npages, pages, &ntails);
> + for (i = 0; i < npages; i += nr) {
> + folio = gup_folio_next(i, npages, pages, &nr);
> /*
> * Checking PageDirty at this point may race with
> * clear_page_dirty_for_io(), but that's OK. Two key
> @@ -292,9 +292,12 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
> * written back, so it gets written back again in the
> * next writeback cycle. This is harmless.
> */
> - if (!PageDirty(head))
> - set_page_dirty_lock(head);
> - put_compound_head(head, ntails, FOLL_PIN);
> + if (!folio_test_dirty(folio)) {
> + folio_lock(folio);
> + folio_mark_dirty(folio);
> + folio_unlock(folio);
> + }
> + gup_put_folio(folio, nr, FOLL_PIN);
> }
> }
> EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
> @@ -347,9 +350,9 @@ EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
> */
> void unpin_user_pages(struct page **pages, unsigned long npages)
> {
> - unsigned long index;
> - struct page *head;
> - unsigned int ntails;
> + unsigned long i;
> + struct folio *folio;
> + unsigned int nr;
>
> /*
> * If this WARN_ON() fires, then the system *might* be leaking pages (by
> @@ -359,9 +362,9 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
> if (WARN_ON(IS_ERR_VALUE(npages)))
> return;
>
> - for (index = 0; index < npages; index += ntails) {
> - head = compound_next(index, npages, pages, &ntails);
> - put_compound_head(head, ntails, FOLL_PIN);
> + for (i = 0; i < npages; i += nr) {
> + folio = gup_folio_next(i, npages, pages, &nr);
> + gup_put_folio(folio, nr, FOLL_PIN);
> }
> }
> EXPORT_SYMBOL(unpin_user_pages);
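
(The caller pattern in both converted functions — advance the index by
the tail count and do one put per folio rather than one per page — can
be exercised with a self-contained sketch. Again purely illustrative:
the 4-pages-per-folio model and the helper names are assumptions, not
kernel API.)

```c
#include <assert.h>

/* Illustrative stand-in: 4 consecutive page indices share one folio. */
#define PAGES_PER_FOLIO 4

static long model_page_folio(long page)
{
	return page / PAGES_PER_FOLIO;
}

/* Walks a page array the way the converted unpin_user_pages() does:
 * group consecutive same-folio entries, then perform a single "put"
 * of nr references per folio instead of one per page.  Returns the
 * number of per-folio put operations, a proxy for the atomic ops
 * saved by batching. */
static unsigned long batched_puts(const long *pages, unsigned long npages)
{
	unsigned long i, ops = 0;
	unsigned int nr;

	for (i = 0; i < npages; i += nr) {
		long folio = model_page_folio(pages[i]);

		for (nr = 1; i + nr < npages; nr++) {
			if (model_page_folio(pages[i + nr]) != folio)
				break;
		}
		ops++;	/* stands in for gup_put_folio(folio, nr, FOLL_PIN) */
	}
	return ops;
}
```

E.g. six pinned pages {0, 1, 2, 3, 8, 9} spanning two folios need only
two put operations instead of six.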
Thread overview: 96+ messages
2022-01-10 4:23 [PATCH v2 00/28] Convert GUP to folios Matthew Wilcox (Oracle)
2022-01-10 4:23 ` [PATCH v2 01/28] gup: Remove for_each_compound_range() Matthew Wilcox (Oracle)
2022-01-10 8:22 ` Christoph Hellwig
2022-01-11 1:07 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 02/28] gup: Remove for_each_compound_head() Matthew Wilcox (Oracle)
2022-01-10 8:23 ` Christoph Hellwig
2022-01-11 1:11 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 03/28] gup: Change the calling convention for compound_range_next() Matthew Wilcox (Oracle)
2022-01-10 8:25 ` Christoph Hellwig
2022-01-11 1:14 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 04/28] gup: Optimise compound_range_next() Matthew Wilcox (Oracle)
2022-01-10 8:26 ` Christoph Hellwig
2022-01-11 1:16 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 05/28] gup: Change the calling convention for compound_next() Matthew Wilcox (Oracle)
2022-01-10 8:27 ` Christoph Hellwig
2022-01-10 13:28 ` Matthew Wilcox
2022-01-11 1:18 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 06/28] gup: Fix some contiguous memmap assumptions Matthew Wilcox (Oracle)
2022-01-10 8:29 ` Christoph Hellwig
2022-01-10 13:37 ` Matthew Wilcox
2022-01-10 19:05 ` [External] : " Mike Kravetz
2022-01-11 1:47 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 07/28] gup: Remove an assumption of a contiguous memmap Matthew Wilcox (Oracle)
2022-01-10 8:30 ` Christoph Hellwig
2022-01-11 3:27 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 08/28] gup: Handle page split race more efficiently Matthew Wilcox (Oracle)
2022-01-10 8:31 ` Christoph Hellwig
2022-01-11 3:30 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 09/28] gup: Turn hpage_pincount_add() into page_pincount_add() Matthew Wilcox (Oracle)
2022-01-10 8:31 ` Christoph Hellwig
2022-01-11 3:35 ` John Hubbard
2022-01-11 4:32 ` John Hubbard
2022-01-11 13:46 ` Matthew Wilcox
2022-01-10 4:23 ` [PATCH v2 10/28] gup: Turn hpage_pincount_sub() into page_pincount_sub() Matthew Wilcox (Oracle)
2022-01-10 8:32 ` Christoph Hellwig
2022-01-11 6:40 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 11/28] mm: Make compound_pincount always available Matthew Wilcox (Oracle)
2022-01-11 4:06 ` John Hubbard
2022-01-11 4:38 ` Matthew Wilcox
2022-01-11 5:10 ` John Hubbard
2022-01-20 9:15 ` Christoph Hellwig
2022-01-10 4:23 ` [PATCH v2 12/28] mm: Add folio_put_refs() Matthew Wilcox (Oracle)
2022-01-10 8:32 ` Christoph Hellwig
2022-01-11 4:14 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 13/28] mm: Add folio_pincount_ptr() Matthew Wilcox (Oracle)
2022-01-10 8:33 ` Christoph Hellwig
2022-01-11 4:22 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 14/28] mm: Convert page_maybe_dma_pinned() to use a folio Matthew Wilcox (Oracle)
2022-01-10 8:33 ` Christoph Hellwig
2022-01-11 4:27 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 15/28] gup: Add try_get_folio() and try_grab_folio() Matthew Wilcox (Oracle)
2022-01-10 8:34 ` Christoph Hellwig
2022-01-11 5:00 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 16/28] mm: Remove page_cache_add_speculative() and page_cache_get_speculative() Matthew Wilcox (Oracle)
2022-01-10 8:35 ` Christoph Hellwig
2022-01-11 5:14 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 17/28] gup: Add gup_put_folio() Matthew Wilcox (Oracle)
2022-01-10 8:35 ` Christoph Hellwig
2022-01-11 6:44 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 18/28] hugetlb: Use try_grab_folio() instead of try_grab_compound_head() Matthew Wilcox (Oracle)
2022-01-10 8:36 ` Christoph Hellwig
2022-01-11 6:47 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 19/28] gup: Convert try_grab_page() to call try_grab_folio() Matthew Wilcox (Oracle)
2022-01-10 8:36 ` Christoph Hellwig
2022-01-11 7:01 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 20/28] gup: Convert gup_pte_range() to use a folio Matthew Wilcox (Oracle)
2022-01-10 8:37 ` Christoph Hellwig
2022-01-11 7:06 ` John Hubbard
2022-01-10 4:23 ` [PATCH v2 21/28] gup: Convert gup_hugepte() " Matthew Wilcox (Oracle)
2022-01-10 8:37 ` Christoph Hellwig
2022-01-11 7:33 ` John Hubbard
2022-01-10 4:24 ` [PATCH v2 22/28] gup: Convert gup_huge_pmd() " Matthew Wilcox (Oracle)
2022-01-10 8:37 ` Christoph Hellwig
2022-01-11 7:36 ` John Hubbard
2022-01-10 4:24 ` [PATCH v2 23/28] gup: Convert gup_huge_pud() " Matthew Wilcox (Oracle)
2022-01-10 8:38 ` Christoph Hellwig
2022-01-11 7:38 ` John Hubbard
2022-01-10 4:24 ` [PATCH v2 24/28] gup: Convert gup_huge_pgd() " Matthew Wilcox (Oracle)
2022-01-10 8:38 ` Christoph Hellwig
2022-01-11 7:38 ` John Hubbard
2022-01-10 4:24 ` [PATCH v2 25/28] gup: Convert compound_next() to gup_folio_next() Matthew Wilcox (Oracle)
2022-01-10 8:39 ` Christoph Hellwig
2022-01-11 7:41 ` John Hubbard [this message]
2022-01-10 4:24 ` [PATCH v2 26/28] gup: Convert compound_range_next() to gup_folio_range_next() Matthew Wilcox (Oracle)
2022-01-10 8:41 ` Christoph Hellwig
2022-01-10 13:41 ` Matthew Wilcox
2022-01-11 7:44 ` John Hubbard
2022-01-10 4:24 ` [PATCH v2 27/28] mm: Add isolate_lru_folio() Matthew Wilcox (Oracle)
2022-01-10 8:42 ` Christoph Hellwig
2022-01-11 7:49 ` John Hubbard
2022-01-10 4:24 ` [PATCH v2 28/28] gup: Convert check_and_migrate_movable_pages() to use a folio Matthew Wilcox (Oracle)
2022-01-10 8:42 ` Christoph Hellwig
2022-01-11 7:52 ` John Hubbard
2022-01-10 15:31 ` [PATCH v2 00/28] Convert GUP to folios Jason Gunthorpe
2022-01-10 16:09 ` Matthew Wilcox
2022-01-10 17:26 ` William Kucharski