linux-rdma.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: John Hubbard <jhubbard@nvidia.com>
To: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Christoph Hellwig <hch@lst.de>, Jason Gunthorpe <jgg@ziepe.ca>,
	John Hubbard <john.hubbard@gmail.com>,
	Michal Hocko <mhocko@kernel.org>,
	Christopher Lameter <cl@linux.com>, Linux MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-rdma <linux-rdma@vger.kernel.org>
Subject: Re: [PATCH 2/2] mm: set PG_dma_pinned on get_user_pages*()
Date: Mon, 25 Jun 2018 12:03:37 -0700	[thread overview]
Message-ID: <d007873c-4454-4c70-4829-b222155be0ff@nvidia.com> (raw)
In-Reply-To: <20180625152150.jnf5suiubecfppcl@quack2.suse.cz>

On 06/25/2018 08:21 AM, Jan Kara wrote:
> On Thu 21-06-18 18:30:36, Jan Kara wrote:
>> On Wed 20-06-18 15:55:41, John Hubbard wrote:
>>> On 06/20/2018 05:08 AM, Jan Kara wrote:
>>>> On Tue 19-06-18 11:11:48, John Hubbard wrote:
>>>>> On 06/19/2018 03:41 AM, Jan Kara wrote:
>>>>>> On Tue 19-06-18 02:02:55, Matthew Wilcox wrote:
>>>>>>> On Tue, Jun 19, 2018 at 10:29:49AM +0200, Jan Kara wrote:
>>> [...]
> I've spent some time on this. There are two obstacles with my approach of
> putting special entry into inode's VMA tree:
> 
> 1) If I want to place this special entry in inode's VMA tree, I either need
> to allocate full VMA, somehow initiate it so that it's clear it's a special
> "pinned" range, not a VMA => uses unnecessarily too much memory, it is
> ugly. Another solution I was hoping for was that I would factor out some
> common bits of vm_area_struct (pgoff, rb_node, ..) into a structure common
> for VMA and the locked range => doable but causes a lot of churn as VMAs
> are accessed (and modified!) at hundreds of places in the kernel. Some
> accessor functions would help to reduce the churn a bit but then stuff like
> vma_set_pgoff(vma, pgoff) isn't exactly beautiful either.
> 
> 2) Some users of GUP (e.g. direct IO) get a block of pages and then put
> references to these pages at different times and in random order -
> basically when IO for given page is completed, reference is dropped and one
> GUP call can acquire page references for pages which end up in multiple
> different bios (we don't know in advance). This makes is difficult to
> implement counterpart to GUP to 'unpin' a range of pages - we'd either have
> to support partial unpins (and splitting of pinned ranges and all such fun)
> or just have to track internally in how many pages are still pinned in the
> originally pinned range and release the pin once all individual pages are
> unpinned but then it's difficult to e.g. get to this internal structure
> from IO completion callback where we only have the bio.
>
> So I think the Matthew's idea of removing pinned pages from LRU is
> definitely worth trying to see how complex that would end up being. Did you
> get to looking into it? If not, I can probably find some time to try that
> out.
> 

OK. Even if we remove the pages from the LRU, we still have to insert a "put_gup_page"
or similarly named call. But it could be a simple replacement for put_page, with
that approach, so that does make it much much easier.
 
I was (and still am) planning on tackling this today, so let me see how far I
get before yelling for help. :)

thanks,
-- 
John Hubbard
NVIDIA

  reply	other threads:[~2018-06-25 19:03 UTC|newest]

Thread overview: 57+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-06-17  1:25 [PATCH 0/2] mm: gup: don't unmap or drop filesystem buffers john.hubbard
2018-06-17  1:25 ` [PATCH 1/2] consolidate get_user_pages error handling john.hubbard
2018-06-17  1:25 ` [PATCH 2/2] mm: set PG_dma_pinned on get_user_pages*() john.hubbard
2018-06-17 19:53   ` Dan Williams
2018-06-17 20:04     ` Jason Gunthorpe
2018-06-17 20:10       ` Dan Williams
2018-06-17 20:28         ` John Hubbard
2018-06-18  8:12           ` Christoph Hellwig
2018-06-18 17:50             ` John Hubbard
2018-06-18 17:56               ` Dan Williams
2018-06-18 18:14                 ` John Hubbard
2018-06-18 19:21                   ` Dan Williams
2018-06-18 19:31                     ` Jason Gunthorpe
2018-06-18 20:04                       ` Dan Williams
2018-06-18 21:36                     ` John Hubbard
2018-06-19  8:29                       ` Jan Kara
2018-06-19  9:02                         ` Matthew Wilcox
2018-06-19 10:41                           ` Jan Kara
2018-06-19 18:11                             ` John Hubbard
2018-06-20  1:24                               ` Dan Williams
2018-06-20  1:34                                 ` John Hubbard
2018-06-20  1:57                                   ` Dan Williams
2018-06-20  2:03                                     ` John Hubbard
2018-06-20 12:08                               ` Jan Kara
2018-06-20 22:55                                 ` John Hubbard
2018-06-21 16:30                                   ` Jan Kara
2018-06-25 15:21                                     ` Jan Kara
2018-06-25 19:03                                       ` John Hubbard [this message]
2018-06-26  7:52                                         ` Jan Kara
2018-06-26  6:31                                       ` John Hubbard
2018-06-26 11:48                                         ` Jan Kara
2018-06-26 13:47                     ` Michal Hocko
2018-06-26 16:48                       ` Jan Kara
2018-06-27 11:32                         ` Michal Hocko
2018-06-27 11:53                           ` Jan Kara
2018-06-27 11:59                             ` Michal Hocko
2018-06-27 12:42                               ` Jan Kara
2018-06-27 14:57                                 ` Jason Gunthorpe
2018-06-27 17:02                                   ` Jan Kara
2018-06-28  2:42                                     ` John Hubbard
2018-06-28  9:17                                       ` Jan Kara
2018-07-02  5:52                                         ` Leon Romanovsky
2018-07-02  6:10                                           ` John Hubbard
2018-07-02  6:34                                             ` Leon Romanovsky
2018-07-02  6:41                                               ` John Hubbard
2018-07-02 10:36                                                 ` Michal Hocko
2018-07-02  7:02                                             ` Jan Kara
2018-07-02 14:48                                               ` Michal Hocko
2018-07-02  6:58                                           ` Jan Kara
2018-06-18  8:11         ` Christoph Hellwig
2018-06-19  6:15           ` Leon Romanovsky
2018-06-17 22:19     ` John Hubbard
2018-06-18  7:56   ` Christoph Hellwig
2018-06-18 17:44     ` John Hubbard
2018-06-17 21:54 ` [PATCH 0/2] mm: gup: don't unmap or drop filesystem buffers Christopher Lameter
2018-06-17 22:23   ` John Hubbard
2018-06-18  8:10   ` Christoph Hellwig

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=d007873c-4454-4c70-4829-b222155be0ff@nvidia.com \
    --to=jhubbard@nvidia.com \
    --cc=cl@linux.com \
    --cc=dan.j.williams@intel.com \
    --cc=hch@lst.de \
    --cc=jack@suse.cz \
    --cc=jgg@ziepe.ca \
    --cc=john.hubbard@gmail.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=linux-rdma@vger.kernel.org \
    --cc=mhocko@kernel.org \
    --cc=willy@infradead.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).