From: Michal Hocko <mhocko@kernel.org>
To: john.hubbard@gmail.com
Cc: Andrew Morton <akpm@linux-foundation.org>,
Christoph Hellwig <hch@infradead.org>,
Ira Weiny <ira.weiny@intel.com>, Jan Kara <jack@suse.cz>,
Jason Gunthorpe <jgg@ziepe.ca>,
Jerome Glisse <jglisse@redhat.com>,
LKML <linux-kernel@vger.kernel.org>,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
John Hubbard <jhubbard@nvidia.com>,
Dan Williams <dan.j.williams@intel.com>,
Daniel Black <daniel@linux.ibm.com>,
Matthew Wilcox <willy@infradead.org>,
Mike Kravetz <mike.kravetz@oracle.com>
Subject: Re: [PATCH 1/3] mm/mlock.c: convert put_page() to put_user_page*()
Date: Wed, 7 Aug 2019 13:01:47 +0200
Message-ID: <20190807110147.GT11812@dhcp22.suse.cz>
In-Reply-To: <20190805222019.28592-2-jhubbard@nvidia.com>
On Mon 05-08-19 15:20:17, john.hubbard@gmail.com wrote:
> From: John Hubbard <jhubbard@nvidia.com>
>
> For pages that were retained via get_user_pages*(), release those pages
> via the new put_user_page*() routines, instead of via put_page() or
> release_pages().
Hmm, this is an interesting code path. There seems to be a mix of page
references in play here. We get one page via follow_page_mask(), but the
other pages in the range are filled in by __munlock_pagevec_fill(), and
that does a direct pte walk. Is using put_user_page() correct in this
case? Could you explain why in the changelog?
> This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
> ("mm: introduce put_user_page*(), placeholder versions").
>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Daniel Black <daniel@linux.ibm.com>
> Cc: Jan Kara <jack@suse.cz>
> Cc: Jérôme Glisse <jglisse@redhat.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
> mm/mlock.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/mlock.c b/mm/mlock.c
> index a90099da4fb4..b980e6270e8a 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -345,7 +345,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
> get_page(page); /* for putback_lru_page() */
> __munlock_isolated_page(page);
> unlock_page(page);
> - put_page(page); /* from follow_page_mask() */
> + put_user_page(page); /* from follow_page_mask() */
> }
> }
> }
> @@ -467,7 +467,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
> if (page && !IS_ERR(page)) {
> if (PageTransTail(page)) {
> VM_BUG_ON_PAGE(PageMlocked(page), page);
> - put_page(page); /* follow_page_mask() */
> + put_user_page(page); /* follow_page_mask() */
> } else if (PageTransHuge(page)) {
> lock_page(page);
> /*
> @@ -478,7 +478,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
> */
> page_mask = munlock_vma_page(page);
> unlock_page(page);
> - put_page(page); /* follow_page_mask() */
> + put_user_page(page); /* follow_page_mask() */
> } else {
> /*
> * Non-huge pages are handled in batches via
> --
> 2.22.0
--
Michal Hocko
SUSE Labs
Thread overview: 23+ messages
2019-08-05 22:20 [PATCH 0/3] mm/: 3 more put_user_page() conversions john.hubbard
2019-08-05 22:20 ` [PATCH 1/3] mm/mlock.c: convert put_page() to put_user_page*() john.hubbard
2019-08-07 11:01 ` Michal Hocko [this message]
2019-08-07 23:32 ` John Hubbard
2019-08-08 6:21 ` Michal Hocko
2019-08-08 11:09 ` Vlastimil Babka
2019-08-08 19:20 ` John Hubbard
2019-08-08 22:59 ` John Hubbard
2019-08-08 23:41 ` Ira Weiny
2019-08-08 23:57 ` John Hubbard
2019-08-09 18:22 ` Weiny, Ira
2019-08-09 8:12 ` Vlastimil Babka
2019-08-09 8:23 ` Michal Hocko
2019-08-09 9:05 ` John Hubbard
2019-08-09 9:16 ` Michal Hocko
2019-08-09 13:58 ` Jan Kara
2019-08-09 17:52 ` Michal Hocko
2019-08-09 18:14 ` Weiny, Ira
2019-08-09 18:36 ` John Hubbard
2019-08-05 22:20 ` [PATCH 2/3] mm/mempolicy.c: " john.hubbard
2019-08-05 22:20 ` [PATCH 3/3] mm/ksm: " john.hubbard
2019-08-06 21:59 ` [PATCH 0/3] mm/: 3 more put_user_page() conversions Andrew Morton
2019-08-06 22:05 ` John Hubbard