Linux-Fsdevel Archive on lore.kernel.org
From: Jerome Glisse <jglisse@redhat.com>
To: John Hubbard <jhubbard@nvidia.com>
Cc: Jan Kara <jack@suse.cz>, Matthew Wilcox <willy@infradead.org>,
	Dave Chinner <david@fromorbit.com>,
	Dan Williams <dan.j.williams@intel.com>,
	John Hubbard <john.hubbard@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux MM <linux-mm@kvack.org>,
	tom@talpey.com, Al Viro <viro@zeniv.linux.org.uk>,
	benve@cisco.com, Christoph Hellwig <hch@infradead.org>,
	Christopher Lameter <cl@linux.com>,
	"Dalessandro, Dennis" <dennis.dalessandro@intel.com>,
	Doug Ledford <dledford@redhat.com>,
	Jason Gunthorpe <jgg@ziepe.ca>, Michal Hocko <mhocko@kernel.org>,
	mike.marciniszyn@intel.com, rcampbell@nvidia.com,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: Re: [PATCH 1/2] mm: introduce put_user_page*(), placeholder versions
Date: Tue, 15 Jan 2019 20:56:11 -0500
Message-ID: <20190116015610.GH3696@redhat.com> (raw)
In-Reply-To: <99110c19-3168-f6a9-fbde-0a0e57f67279@nvidia.com>

On Tue, Jan 15, 2019 at 04:44:41PM -0800, John Hubbard wrote:
> On 1/15/19 2:12 PM, Jerome Glisse wrote:
> > On Tue, Jan 15, 2019 at 01:56:51PM -0800, John Hubbard wrote:
> >> On 1/15/19 9:15 AM, Jerome Glisse wrote:
> >>> On Tue, Jan 15, 2019 at 09:07:59AM +0100, Jan Kara wrote:
> >>>> On Mon 14-01-19 12:21:25, Jerome Glisse wrote:
> >>>>> On Mon, Jan 14, 2019 at 03:54:47PM +0100, Jan Kara wrote:
> >>>>>> On Fri 11-01-19 19:06:08, John Hubbard wrote:
> >>>>>>> On 1/11/19 6:46 PM, Jerome Glisse wrote:
> >>>>>>>> On Fri, Jan 11, 2019 at 06:38:44PM -0800, John Hubbard wrote:
> >>>>>>>> [...]
> >>>>>>>>
> >>>>>>>>>>> The other idea that you and Dan (and maybe others) pointed out was a debug
> >>>>>>>>>>> option, which we'll certainly need in order to safely convert all the call
> >>>>>>>>>>> sites. (Mirror the mappings at a different kernel offset, so that put_page()
> >>>>>>>>>>> and put_user_page() can verify that the right call was made.)  That will be
> >>>>>>>>>>> a separate patchset, as you recommended.
> >>>>>>>>>>>
> >>>>>>>>>>> I'll even go as far as recommending the page lock itself. I realize that this 
> >>>>>>>>>>> adds overhead to gup(), but we *must* hold off page_mkclean(), and I believe
> >>>>>>>>>>> that this (below) has similar overhead to the notes above--but is *much* easier
> >>>>>>>>>>> to verify correct. (If the page lock is unacceptable due to being so widely used,
> >>>>>>>>>>> then I'd recommend using another page bit to do the same thing.)
> >>>>>>>>>>
> >>>>>>>>>> Please page lock is pointless and it will not work for GUP fast. The above
> >>>>>>>>>> scheme do work and is fine. I spend the day again thinking about all memory
> >>>>>>>>>> ordering and i do not see any issues.
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Why is it that page lock cannot be used for gup fast, btw?
> >>>>>>>>
> >>>>>>>> Well it can not happen within the preempt disable section. But after
> >>>>>>>> as a post pass before GUP_fast return and after reenabling preempt then
> >>>>>>>> it is fine like it would be for regular GUP. But locking page for GUP
> >>>>>>>> is also likely to slow down some workload (with direct-IO).
> >>>>>>>>
> >>>>>>>
> >>>>>>> Right, and so to crux of the matter: taking an uncontended page lock
> >>>>>>> involves pretty much the same set of operations that your approach does.
> >>>>>>> (If gup ends up contended with the page lock for other reasons than these
> >>>>>>> paths, that seems surprising.) I'd expect very similar performance.
> >>>>>>>
> >>>>>>> But the page lock approach leads to really dramatically simpler code (and
> >>>>>>> code reviews, let's not forget). Any objection to my going that
> >>>>>>> direction, and keeping this idea as a Plan B? I think the next step will
> >>>>>>> be, once again, to gather some performance metrics, so maybe that will
> >>>>>>> help us decide.
> >>>>>>
> >>>>>> FWIW I agree that using page lock for protecting page pinning (and thus
> >>>>>> avoid races with page_mkclean()) looks simpler to me as well and I'm not
> >>>>>> convinced there will be measurable difference to the more complex scheme
> >>>>>> with barriers Jerome suggests unless that page lock contended. Jerome is
> >>>>>> right that you cannot just do lock_page() in gup_fast() path. There you
> >>>>>> have to do trylock_page() and if that fails just bail out to the slow gup
> >>>>>> path.
> >>>>>>
> >>>>>> Regarding places other than page_mkclean() that need to check pinned state:
> >>>>>> Definitely page migration will want to check whether the page is pinned or
> >>>>>> not so that it can deal differently with short-term page references vs
> >>>>>> longer-term pins.
> >>>>>>
> >>>>>> Also there is one more idea I had how to record number of pins in the page:
> >>>>>>
> >>>>>> #define PAGE_PIN_BIAS	1024
> >>>>>>
> >>>>>> get_page_pin()
> >>>>>> 	atomic_add(&page->_refcount, PAGE_PIN_BIAS);
> >>>>>>
> >>>>>> put_page_pin();
> >>>>>> 	atomic_add(&page->_refcount, -PAGE_PIN_BIAS);
> >>>>>>
> >>>>>> page_pinned(page)
> >>>>>> 	(atomic_read(&page->_refcount) - page_mapcount(page)) > PAGE_PIN_BIAS
> >>>>>>
> >>>>>> This is pretty trivial scheme. It still gives us 22-bits for page pins
> >>>>>> which should be plenty (but we should check for that and bail with error if
> >>>>>> it would overflow). Also there will be no false negatives and false
> >>>>>> positives only if there are more than 1024 non-page-table references to the
> >>>>>> page which I expect to be rare (we might want to also subtract
> >>>>>> hpage_nr_pages() for radix tree references to avoid excessive false
> >>>>>> positives for huge pages although at this point I don't think they would
> >>>>>> matter). Thoughts?
> >>>>>
> >>>>> Racing PUP are as likely to cause issues:
> >>>>>
> >>>>> CPU0                        | CPU1       | CPU2
> >>>>>                             |            |
> >>>>>                             | PUP()      |
> >>>>>     page_pinned(page)       |            |
> >>>>>       (page_count(page) -   |            |
> >>>>>        page_mapcount(page)) |            |
> >>>>>                             |            | GUP()
> >>>>>
> >>>>> So here the refcount snap-shot does not include the second GUP and
> >>>>> we can have a false negative ie the page_pinned() will return false
> >>>>> because of the PUP happening just before on CPU1 despite the racing
> >>>>> GUP on CPU2 just after.
> >>>>>
> >>>>> I believe only either lock or memory ordering with barrier can
> >>>>> guarantee that we do not miss GUP ie no false negative. Still the
> >>>>> bias idea might be usefull as with it we should not need a flag.
> >>>>
> >>>> Right. We need similar synchronization (i.e., page lock or careful checks
> >>>> with memory barriers) if we want to get a reliable page pin information.
> >>>>
> >>>>> So to make the above safe it would still need the page write back
> >>>>> double check that i described so that GUP back-off if it raced with
> >>>>> page_mkclean,clear_page_dirty_for_io and the fs write page call back
> >>>>> which call test_set_page_writeback() (yes it is very unlikely but
> >>>>> might still happen).
> >>>>
> >>>> Agreed. So with page lock it would actually look like:
> >>>>
> >>>> get_page_pin()
> >>>> 	lock_page(page);
> >>>> 	wait_for_stable_page();
> >>>> 	atomic_add(&page->_refcount, PAGE_PIN_BIAS);
> >>>> 	unlock_page(page);
> >>>>
> >>>> And if we perform page_pinned() check under page lock, then if
> >>>> page_pinned() returned false, we are sure page is not and will not be
> >>>> pinned until we drop the page lock (and also until page writeback is
> >>>> completed if needed).
> >>>>
> >>
> >> OK. Avoiding a new page flag, *and* avoiding the _mapcount auditing and
> >> compensation steps, is a pretty major selling point. And if we do the above
> >> locking, that does look correct to me. I wasn't able to visualize the
> >> locking you had in mind, until just now (above), but now it is clear, 
> >> thanks for spelling it out.
> >>
> >>>
> >>> So i still can't see anything wrong with that idea, i had similar
> >>> one in the past and diss-missed and i can't remember why :( But
> >>> thinking over and over i do not see any issue beside refcount wrap
> >>> around. Which is something that can happens today thought i don't
> >>> think it can be use in an evil way and we can catch it and be
> >>> loud about it.
> >>>
> >>> So i think the following would be bullet proof:
> >>>
> >>>
> >>> get_page_pin()
> >>>     atomic_add(&page->_refcount, PAGE_PIN_BIAS);
> >>>     smp_wmb();
> >>>     if (PageWriteback(page)) {
> >>>         // back off
> >>>         atomic_add(&page->_refcount, -PAGE_PIN_BIAS);
> >>>         // re-enable preempt if in fast
> >>>         wait_on_page_writeback(page);
> >>>         goto retry;
> >>>     }
> >>>
> >>> put_page_pin();
> >>> 	atomic_add(&page->_refcount, -PAGE_PIN_BIAS);
> >>>
> >>> page_pinned(page)
> >>> 	(atomic_read(&page->_refcount) - page_mapcount(page)) > PAGE_PIN_BIAS
> >>>
> >>> test_set_page_writeback()
> >>>     ...
> >>>     wb = TestSetPageWriteback(page)
> >>
> >> Minor point, but using PageWriteback for synchronization may rule out using
> >> wait_for_stable_page(), because wait_for_stable_page() might not actually 
> >> wait_on_page_writeback. Jan pointed out in the other thread, that we should
> >> prefer wait_for_stable_page(). 
> > 
> > Yes, but wait_for_stable_page() has no page flag so nothing we can
> > synchronize against. So my advice would be:
> >     if (PageWriteback(page)) {
> >         wait_for_stable_page(page);
> >         if (PageWriteback(page))
> >             wait_on_page_writeback(page);
> >     }
> > 
> > wait_for_stable_page() can optimize out the wait_on_page_writeback()
> > if it is safe to do so. So we can improve the above slightly too.
> > 
> >>
> >>>     smp_mb();
> >>>     if (page_pinned(page)) {
> >>>         // report page as pinned to caller of test_set_page_writeback()
> >>>     }
> >>>     ...
> >>>
> >>> This is text book memory barrier. Either get_page_pin() see racing
> >>> test_set_page_writeback() or test_set_page_writeback() see racing GUP
> >>>
> >>>
> >>
> >> This approach is probably workable, but again, it's more complex and comes
> >> without any lockdep support. Maybe it's faster, maybe not. Therefore, I want 
> >> to use it as either "do this after everything is up and running and stable", 
> >> or else as Plan B, if there is some performance implication from the page lock.
> >>
> >> Simple and correct first, then performance optimization, *if* necessary.
> > 
> > I do not like taking page lock while there are no good reasons to do so.
> 
> There actually are very good reasons to do so! These include:
> 
> 1) Simpler code that is less likely to have subtle bugs in the initial 
>    implementations.

It is not simpler; a memory barrier is one line of code ...

> 
> 2) Pre-existing, known locking constructs that include instrumentation and
>    visibility.

Like I said, I don't think the page lock benefits from those, as it is
very struct page specific. I need to check what is available, but you
definitely do not get all the bells and whistles you get with a regular
lock.

> 
> 3) ...and all of the other goodness that comes from smaller and simpler code.
> 
> I'm not saying that those reasons necessarily prevail here, but it's not
> fair to say "there are no good reasons". Less code is still worth something,
> even in the kernel.

Again, a memory barrier is just one line of code; I do not see a lock as
something simpler than that.

> 
> > The above is textbook memory barrier as explain in Documentations/
> > Forcing page lock for GUP will inevitably slow down some workload and
> 
> Such as?
> 
> Here's the thing: if a workload is taking the page lock for some
> reason, and also competing with GUP, that's actually something that I worry
> about: what is changing in page state, while we're setting up GUP? Either
> we audit for that, or we let runtime locking rules (taking the page lock)
> keep us out of trouble in the first place.
> 
> In other words, if there is a performance hit, it might very likely be
> due to a required synchronization that is taking place.

You need to take the page lock for several things; off the top of my mind:
inserting a mapping, migration, truncation, swapping, reverse mapping, mlock,
cgroup, madvise, ... so if GUP now also needs it, then you force
synchronization with all of that for direct-IO.

You do not need to synchronize with most of the above, as they do not care
about GUP. In fact only the writeback path needs synchronization; I cannot
think of anything else that would need to synchronize with GUP.

> > report for such can takes time to trickle down to mailing list and it
> > can takes time for people to actualy figure out that this are the GUP
> > changes that introduce such regression.
> > 
> > So if we could minimize performance regression with something like
> > memory barrier we should definitly do that.
> 
> We do not yet know that the more complex memory barrier approach is actually
> faster. That's worth repeating.

I would be surprised if a memory barrier were slower than a lock. A lock
can contend; a memory barrier does not. A lock requires an atomic
operation, and thus an implied barrier, so a lock should translate into
something slower than a memory barrier alone.


> > Also i do not think that page lock has lock dep (as it is not using
> > any of the usual locking function) but that's just my memory of that
> > code.
> > 
> 
> Lock page is pretty thoroughly instrumented. It uses wait_on_page_bit_common(),
> which in turn uses spin locks and more.

It does not give you all the bells and whistles you get with spinlock
debugging. The spinlock taken in wait_on_page_bit_common() is for the
waitqueue the page belongs to, so you only get debugging on that, not
on the individual page lock bit. So I do not think there is anything
that would help debug the page lock itself, such as double unlock or
deadlock detection.


> The more I think about this, the more I want actual performance data to 
> justify anything involving the more complicated custom locking. So I think
> it's best to build the page lock based version, do some benchmarks, and see
> where we stand.

This is not custom locking; we already employ memory barriers in several
places. Memory barriers are quite common in the kernel, and we should
favor them when there is no need for a lock.

A memory barrier never contends, so you know you will never have lock
contention ... so a memory barrier can only be faster than anything
with a lock. The contrary would surprise me.

Using a lock and believing it will be as fast as a memory barrier is
hoping that you will never contend on that lock. So I would rather see
proof that GUP will never contend on the page lock.


To make it clear:

Lock code:
    GUP()
        ...
        lock_page(page);
        if (PageWriteback(page)) {
            unlock_page(page);
            wait_for_stable_page(page);
            goto retry;
        }
        atomic_add(PAGE_PIN_BIAS, &page->_refcount);
        unlock_page(page);

    test_set_page_writeback()
        bool pinned = false;
        ...
        pinned = page_pinned(page); // could be after TestSetPageWriteback
        TestSetPageWriteback(page);
        ...
        return pinned;

Memory barrier:
    GUP()
        ...
        atomic_add(PAGE_PIN_BIAS, &page->_refcount);
        smp_mb();
        if (PageWriteback(page)) {
            atomic_add(-PAGE_PIN_BIAS, &page->_refcount);
            wait_for_stable_page(page);
            goto retry;
        }

    test_set_page_writeback()
        bool pinned = false;
        ...
        TestSetPageWriteback(page);
        smp_mb();
        pinned = page_pinned(page);
        ...
        return pinned;


One is not more complex than the other. One can contend, the other
will _never_ contend.

Cheers,
Jérôme
