From: Matthew Wilcox <willy@infradead.org>
To: Christoph Hellwig <hch@infradead.org>
Cc: Ming Mao <maoming.maoming@huawei.com>,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
linux-mm@kvack.org, alex.williamson@redhat.com,
akpm@linux-foundation.org, cohuck@redhat.com,
jianjay.zhou@huawei.com, weidong.huang@huawei.com,
peterx@redhat.com, aarcange@redhat.com, wangyunjian@huawei.com,
jhubbard@nvidia.com
Subject: Re: [PATCH V4 1/2] vfio dma_map/unmap: optimized for hugetlbfs pages
Date: Wed, 9 Sep 2020 14:41:36 +0100 [thread overview]
Message-ID: <20200909134136.GG6583@casper.infradead.org> (raw)
In-Reply-To: <20200909080114.GA8321@infradead.org>
On Wed, Sep 09, 2020 at 09:01:14AM +0100, Christoph Hellwig wrote:
> I really don't think this approach is any good. You workaround
> a deficiency in the pin_user_pages API in one particular caller for
> one particular use case.
>
> I think you'd rather want either:
>
> (1) a FOLL_HUGEPAGE flag for the pin_user_pages API family that returns
> a single struct page for any kind of huge page, which would also
> benefit all kinds of other users rather than adding these kinds of
> hacks to vfio.
This seems to be similar to a flag I added last week to
pagecache_get_page() called FGP_HEAD:
+ * * %FGP_HEAD - If the page is present and a THP, return the head page
+ * rather than the exact page specified by the index.
I think "return the head page" is probably what we want from what I
understand of this patch. The caller can figure out the appropriate
bv_offset / bv_len for a bio_vec, if that's what they want to do with it.
http://git.infradead.org/users/willy/pagecache.git/commitdiff/ee88eeeb6b0f35e95ef82b11dfc24dc04c3dcad8 is the exact commit where I added that, but it depends on a number of other patches in this series:
http://git.infradead.org/users/willy/pagecache.git/shortlog
I'm going to send out a subset of patches later today which will include
that one and some others. I haven't touched the GUP paths at all in
that series, but it's certainly going to make THPs (of various sizes)
much more present in the system.
Thread overview: 10+ messages
2020-09-08 13:32 [PATCH V4 0/2] vfio: optimized for hugetlbf pages when dma map/unmap Ming Mao
2020-09-08 13:32 ` [PATCH V4 1/2] vfio dma_map/unmap: optimized for hugetlbfs pages Ming Mao
2020-09-09 8:01 ` Christoph Hellwig
2020-09-09 13:05 ` Jason Gunthorpe
2020-09-09 14:29 ` Christoph Hellwig
2020-09-09 15:00 ` Matthew Wilcox
2020-09-09 15:09 ` Jason Gunthorpe
2020-09-09 13:41 ` Matthew Wilcox [this message]
2020-09-09 17:04 ` Matthew Wilcox
2020-09-08 13:32 ` [PATCH V4 2/2] vfio: optimized for unpinning pages Ming Mao