From: John Hubbard
Subject: Re: [PATCH 2/2] mm: set PG_dma_pinned on get_user_pages*()
Date: Tue, 19 Jun 2018 11:11:48 -0700
Message-ID: <70001987-3938-d33e-11e0-de5b19ca3bdf@nvidia.com>
In-Reply-To: <20180619104142.lpilc6esz7w3a54i@quack2.suse.cz>
To: Jan Kara, Matthew Wilcox
Cc: Dan Williams, Christoph Hellwig, Jason Gunthorpe, John Hubbard,
 Michal Hocko, Christopher Lameter, Linux MM, LKML, linux-rdma
List-Id: linux-rdma@vger.kernel.org

On 06/19/2018 03:41 AM, Jan Kara wrote:
> On Tue 19-06-18 02:02:55, Matthew Wilcox wrote:
>> On Tue, Jun 19, 2018 at 10:29:49AM +0200, Jan Kara wrote:
>>> And for the record, the problem with page cache pages is not only
>>> that try_to_unmap() may unmap them. It is also that page_mkclean()
>>> can write-protect them. And once PTEs are write-protected,
>>> filesystems may end up doing bad things if DMA then modifies the
>>> page contents (DIF/DIX failures, data corruption, oopses). As such,
>>> I don't think that solutions based on the page reference count have
>>> much chance of dealing with the problem.
>>>
>>> And your page flag approach would also need to take page_mkclean()
>>> into account. And there the issue is that until the flag is cleared
>>> (i.e., until we are sure there are no writers using references from
>>> GUP) you cannot write the page back safely, which does not work well
>>> with your idea of clearing the flag only once the page is evicted
>>> from the page cache (hint: a page cache page cannot get evicted
>>> until it has been written back).
>>>
>>> So, as sad as it is, I don't see an easy solution here.
>>
>> Pages which are "got" don't need to be on the LRU list. They'll be
>> marked dirty when they're put, so we can use page->lru for fun things
>> like a "got" refcount. If we use bit 1 of page->lru for PageGot,
>> we've got 30/62 bits in the first word and a full 64 bits in the
>> second word.
>
> Interesting idea! It would destroy the aging information for the page,
> but for pages accessed through GUP references that is a very vague
> concept anyway. It might be a bit tricky, because pulling a page off
> the LRU requires the page lock, but I don't think that's a huge
> problem. And page cache pages that are not on the LRU already exist
> today while they are under reclaim, so hopefully there won't be too
> many places in MM that would need fixing up for such pages.

This sounds promising; I'll try it out!
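Just to be sure I'm following the layout you have in mind, here is an
untested sketch. All of the names below (PAGE_GOT, PageGot(),
SetPageGot(), got_count_inc()) are invented for illustration; none of
this is existing kernel API:

#include <linux/mm_types.h>

/*
 * Sketch of the PageGot idea: once the page is locked and isolated
 * from the LRU, page->lru is free for reuse. Bit 0 of the first word
 * stays clear (the compound_head encoding uses it), bit 1 marks the
 * page as "got", and the second word becomes a full-width count of
 * outstanding GUP references.
 */
#define PAGE_GOT        (1UL << 1)      /* bit 1 of page->lru.next */

static inline bool PageGot(struct page *page)
{
        return (unsigned long)page->lru.next & PAGE_GOT;
}

/* Call with the page locked and already pulled off the LRU. */
static inline void SetPageGot(struct page *page)
{
        page->lru.next = (struct list_head *)PAGE_GOT;
        page->lru.prev = (struct list_head *)1UL;  /* got count = 1 */
}

static inline void got_count_inc(struct page *page)
{
        page->lru.prev = (struct list_head *)
                         ((unsigned long)page->lru.prev + 1);
}

If the count has to be updated without the page lock held, then this
would want to be a cmpxchg loop instead, but you get the idea.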
> I'm also still pondering the idea of inserting a "virtual" VMA into
> the vma interval tree in the inode - as the GUP references are IMHO
> closest to an mlocked mapping - and that would achieve all the
> functionality we need as well. I just didn't have time to experiment
> with it.

How would this work? Would it have the same virtual address range? And
how does it avoid the problems we've been discussing? Sorry to be a bit
slow here. :)

> And then there's the aspect that both of these approaches are a bit
> too heavyweight for some get_user_pages_fast() users (e.g. direct IO).
> Al Viro had an idea to use the page lock for that path, but e.g.
> fs/direct-io.c would have problems due to lock ordering constraints
> (the filesystem's ->get_block would suddenly get called with the page
> lock held). But we can probably leave performance optimizations for
> phase two.

So I assume that phase one would be to apply this approach only to
get_user_pages_longterm(). (Please let me know if that's wrong.)

thanks,
--
John Hubbard
NVIDIA
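P.S.: to make my assumption about the phase-one scope concrete, this is
roughly the shape I have in mind. It's a sketch only: it drops the
CONFIG_FS_DAX handling that the real get_user_pages_longterm() does,
and SetPageDmaPinned() assumes the PG_dma_pinned accessor that this
series would add:

long get_user_pages_longterm(unsigned long start, unsigned long nr_pages,
                             unsigned int gup_flags, struct page **pages,
                             struct vm_area_struct **vmas)
{
        long rc, i;

        rc = get_user_pages(start, nr_pages, gup_flags, pages, vmas);

        /* Mark each page before any DMA is set up against it. */
        for (i = 0; i < rc; i++)
                SetPageDmaPinned(pages[i]);

        return rc;
}

That would keep get_user_pages_fast() and the other GUP variants
untouched until phase two.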