From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: John Hubbard <jhubbard@nvidia.com>
Cc: Jerome Glisse <jglisse@redhat.com>,
	john.hubbard@gmail.com, Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, Al Viro <viro@zeniv.linux.org.uk>,
	Christian Benvenuti <benve@cisco.com>,
	Christoph Hellwig <hch@infradead.org>,
	Christopher Lameter <cl@linux.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Chinner <david@fromorbit.com>,
	Dennis Dalessandro <dennis.dalessandro@intel.com>,
	Doug Ledford <dledford@redhat.com>,
	Ira Weiny <ira.weiny@intel.com>, Jan Kara <jack@suse.cz>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Matthew Wilcox <willy@infradead.org>,
	Michal Hocko <mhocko@kernel.org>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Mike Marciniszyn <mike.marciniszyn@intel.com>,
	Ralph Campbell <rcampbell@nvidia.com>,
	Tom Talpey <tom@talpey.com>, LKML <linux-kernel@vger.kernel.org>,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v4 1/1] mm: introduce put_user_page*(), placeholder versions
Date: Wed, 20 Mar 2019 12:28:36 +0300
Message-ID: <20190320092836.rbc3fscxxpibgt3m@kshutemo-mobl1>
In-Reply-To: <6aa32cca-d97a-a3e5-b998-c67d0a6cc52a@nvidia.com>

On Tue, Mar 19, 2019 at 01:01:01PM -0700, John Hubbard wrote:
> On 3/19/19 7:06 AM, Kirill A. Shutemov wrote:
> > On Tue, Mar 19, 2019 at 09:47:24AM -0400, Jerome Glisse wrote:
> >> On Tue, Mar 19, 2019 at 03:04:17PM +0300, Kirill A. Shutemov wrote:
> >>> On Fri, Mar 08, 2019 at 01:36:33PM -0800, john.hubbard@gmail.com wrote:
> >>>> From: John Hubbard <jhubbard@nvidia.com>
> >>
> >> [...]
> >>
> >>>> diff --git a/mm/gup.c b/mm/gup.c
> >>>> index f84e22685aaa..37085b8163b1 100644
> >>>> --- a/mm/gup.c
> >>>> +++ b/mm/gup.c
> >>>> @@ -28,6 +28,88 @@ struct follow_page_context {
> >>>>  	unsigned int page_mask;
> >>>>  };
> >>>>  
> >>>> +typedef int (*set_dirty_func_t)(struct page *page);
> >>>> +
> >>>> +static void __put_user_pages_dirty(struct page **pages,
> >>>> +				   unsigned long npages,
> >>>> +				   set_dirty_func_t sdf)
> >>>> +{
> >>>> +	unsigned long index;
> >>>> +
> >>>> +	for (index = 0; index < npages; index++) {
> >>>> +		struct page *page = compound_head(pages[index]);
> >>>> +
> >>>> +		if (!PageDirty(page))
> >>>> +			sdf(page);
> >>>
> >>> How is this safe? What prevents the page from being cleared under
> >>> you?
> >>>
> >>> If it's safe to race with clear_page_dirty*(), that has to be stated
> >>> explicitly, with a reason why. It's not very clear to me as it is.
> >>
> >> The PageDirty() optimization above is fine to race with clearing of
> >> the page flag, because that would mean the race happens just after a
> >> page_mkclean(), and the GUP user is done with the page, i.e. the page
> >> is about to be written back. So if (!PageDirty(page)) sees the page
> >> as dirty and skips the sdf() call, while a split second later
> >> TestClearPageDirty() happens, then the racing clear is about to write
> >> the page back, and all is fine (the page was dirty and is being
> >> cleared for writeback).
> >>
> >> If we do call sdf() while racing with writeback, then we have just
> >> redirtied the page, exactly like clear_page_dirty_for_io() would do
> >> if page_mkclean() failed, so nothing harmful comes of that either.
> >> The page stays dirty despite the writeback; it just means the page
> >> might be written back twice in a row.
> > 
> > Fair enough. Should we get it into a comment here?
> 
> How does this read to you? I reworded and slightly expanded Jerome's 
> description:
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index d1df7b8ba973..86397ae23922 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -61,6 +61,24 @@ static void __put_user_pages_dirty(struct page **pages,
>         for (index = 0; index < npages; index++) {
>                 struct page *page = compound_head(pages[index]);
>  
> +               /*
> +                * Checking PageDirty at this point may race with
> +                * clear_page_dirty_for_io(), but that's OK. Two key cases:
> +                *
> +                * 1) This code sees the page as already dirty, so it skips
> +                * the call to sdf(). That could happen because
> +                * clear_page_dirty_for_io() called page_mkclean(),
> +                * followed by set_page_dirty(). However, now the page is
> +                * going to get written back, which meets the original
> +                * intention of setting it dirty, so all is well:
> +                * clear_page_dirty_for_io() goes on to call
> +                * TestClearPageDirty(), and write the page back.
> +                *
> +                * 2) This code sees the page as clean, so it calls sdf().
> +                * The page stays dirty, despite being written back, so it
> +                * gets written back again in the next writeback cycle.
> +                * This is harmless.
> +                */
>                 if (!PageDirty(page))
>                         sdf(page);

Looks good to me.

Other nit: effectively the same type of callback is called 'spd' in
set_page_dirty(). Should we rename 'sdf' to 'spd' here too, for
consistency?
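
For reference, set_page_dirty() looks roughly like this (abbreviated
from memory, so treat the details as approximate):

int set_page_dirty(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	if (likely(mapping)) {
		/* The 'spd' local: same shape as set_dirty_func_t above. */
		int (*spd)(struct page *) = mapping->a_ops->set_page_dirty;

		[...]
		return (*spd)(page);
	}
	[...]
	return 0;
}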

> >>>> +void put_user_pages(struct page **pages, unsigned long npages)
> >>>> +{
> >>>> +	unsigned long index;
> >>>> +
> >>>> +	for (index = 0; index < npages; index++)
> >>>> +		put_user_page(pages[index]);
> >>>
> >>> I believe there's room for improvement for compound pages.
> >>>
> >>> If there are multiple consecutive pages in the array that belong to
> >>> the same compound page, we can get away with a single atomic
> >>> operation to handle them all.
> >>
> >> Yes, maybe just add a comment about that for now and leave this kind
> >> of optimization for later?
> > 
> > Sounds good to me.
> > 
> 
> Here's a comment for that:
> 
> @@ -127,6 +145,11 @@ void put_user_pages(struct page **pages, unsigned long npages)
>  {
>         unsigned long index;
>  
> +       /*
> +        * TODO: this can be optimized for huge pages: if a series of pages is
> +        * physically contiguous and part of the same compound page, then a

Compound pages are always physically contiguous. I initially meant that
the optimization makes sense if they are next to each other in the
'pages' array; see the sketch below the quoted hunk.

> +        * single operation to the head page should suffice.
> +        */
>         for (index = 0; index < npages; index++)
>                 put_user_page(pages[index]);
>  }
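
Something along these lines is what I had in mind -- a rough, untested
sketch. The name put_user_pages_batched() is made up for illustration,
and it ignores the devmap special case that put_page() handles:

static void put_user_pages_batched(struct page **pages,
				   unsigned long npages)
{
	unsigned long index = 0;

	while (index < npages) {
		struct page *head = compound_head(pages[index]);
		unsigned long refs = 1;

		/* Count consecutive entries sharing the same head page. */
		while (index + refs < npages &&
		       compound_head(pages[index + refs]) == head)
			refs++;

		/*
		 * We hold 'refs' references on 'head', so the count
		 * cannot reach zero early; drop them all with a single
		 * atomic operation and free the page if it hits zero.
		 */
		if (page_ref_sub_and_test(head, refs))
			__put_page(head);

		index += refs;
	}
}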
> 
> 
> thanks,
> -- 
> John Hubbard
> NVIDIA

-- 
 Kirill A. Shutemov

