From: Vlastimil Babka <vbabka@suse.cz>
To: John Hubbard <jhubbard@nvidia.com>, Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Christoph Hellwig <hch@infradead.org>,
	Ira Weiny <ira.weiny@intel.com>, Jan Kara <jack@suse.cz>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Jerome Glisse <jglisse@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	Dan Williams <dan.j.williams@intel.com>,
	Daniel Black <daniel@linux.ibm.com>,
	Matthew Wilcox <willy@infradead.org>,
	Mike Kravetz <mike.kravetz@oracle.com>
Subject: Re: [PATCH 1/3] mm/mlock.c: convert put_page() to put_user_page*()
Date: Fri, 9 Aug 2019 10:12:48 +0200
Message-ID: <420a5039-a79c-3872-38ea-807cedca3b8a@suse.cz>
In-Reply-To: <d1ecb0d4-ea6a-637d-7029-687b950b783f@nvidia.com>

On 8/9/19 12:59 AM, John Hubbard wrote:
>>> That's true. However, I'm not sure munlocking is where the
>>> put_user_page() machinery is intended to be used anyway? These are
>>> short-term pins for struct page manipulation, not e.g. dirtying of page
>>> contents. Reading commit fc1d8e7cca2d I don't think this case falls
>>> within the reasoning there. Perhaps not all GUP users should be
>>> converted to the planned separate GUP tracking, and instead we should
>>> have a GUP/follow_page_mask() variant that keeps using get_page/put_page?
>>>  
>>
>> Interesting. So far, the approach has been to get all the gup callers to
>> release via put_user_page(), but if we add in Jan's and Ira's vaddr_pin_pages()
>> wrapper, then maybe we could leave some sites unconverted.
>>
>> However, in order to do so, we would have to change things so that we have
>> one set of APIs (gup) that do *not* increment a pin count, and another set
>> (vaddr_pin_pages) that do. 
>>
>> Is that where we want to go...?
>>

We already have a FOLL_LONGTERM flag; isn't that somehow related? And if
it's not exactly the same thing, perhaps we need a new gup flag to
distinguish which kind of pinning to use?
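
Completely untested sketch, and FOLL_GET_ONLY is a name I'm making up on
the spot, but at the call site I imagine the distinction could look
roughly like this:

	/* long-term pin, e.g. RDMA: tracked separately, released via
	 * put_user_page() */
	ret = get_user_pages(start, nr_pages, FOLL_LONGTERM | FOLL_WRITE,
			     pages, NULL);

	/* hypothetical short-term variant that keeps plain get_page()/
	 * put_page() semantics, so callers like munlock would not need
	 * converting */
	ret = get_user_pages(start, nr_pages, FOLL_GET_ONLY, pages, NULL);

where FOLL_GET_ONLY would tell gup to skip whatever separate pin
accounting ends up being added for the tracked case.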

> Oh, and meanwhile, I'm leaning toward a cheap fix: just use gup_fast() instead
> of get_page(), and also fix the releasing code. So this incremental patch, on
> top of the existing one, should do it:
> 
...
> @@ -411,7 +409,13 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
>                 if (PageTransCompound(page))
>                         break;
>  
> -               get_page(page);
> +               /*
> +                * Use get_user_pages_fast() instead of get_page(), so that the
> +                * releasing code can unconditionally call put_user_page().
> +                */
> +               ret = get_user_pages_fast(start, 1, 0, &page);

Um, the whole reason for __munlock_pagevec_fill() was to avoid the full
page walk cost, which made a 14% difference; see commit 7a8010cd3627
("mm: munlock: manual pte walk in fast path instead of
follow_page_mask()"). Replacing a simple get_page() with a full page
walk just to satisfy API requirements seems rather suboptimal to me.
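
To illustrate what's at stake: the fill loop just advances a pte pointer
under a single lock. Roughly (eliding the zone check and most comments),
it is:

	pte = get_locked_pte(vma->vm_mm, start, &ptl);
	/* the first page is already in the pagevec; start with the next */
	start += PAGE_SIZE;
	while (start < end) {
		struct page *page = NULL;

		pte++;
		if (pte_present(*pte))
			page = vm_normal_page(vma, start, *pte);
		if (!page || PageTransCompound(page))
			break;
		/* just a refcount bump, no page table walk */
		get_page(page);
		start += PAGE_SIZE;
		if (pagevec_add(pvec, page) == 0)
			break;
	}
	pte_unmap_unlock(pte - 1, ptl);

whereas get_user_pages_fast() has to start over from the top of the
page table hierarchy for every single page it's asked for.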

> +               if (ret != 1)
> +                       break;
>                 /*
>                  * Increase the address that will be returned *before* the
>                  * eventual break due to pvec becoming full by adding the page
> 
> 
> thanks,
> 

