From: Jason Gunthorpe <jgg@nvidia.com>
To: "Thomas Hellström (Intel)" <thomas_os@shipmail.org>
Cc: David Airlie <airlied@linux.ie>,
linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Christian Koenig <christian.koenig@amd.com>
Subject: Re: [RFC PATCH 1/2] mm,drm/ttm: Block fast GUP to TTM huge pages
Date: Wed, 24 Mar 2021 10:48:33 -0300 [thread overview]
Message-ID: <20210324134833.GE2356281@nvidia.com> (raw)
In-Reply-To: <6c9acb90-8e91-d8af-7abd-e762d9a901aa@shipmail.org>
On Wed, Mar 24, 2021 at 02:35:38PM +0100, Thomas Hellström (Intel) wrote:
> > In an ideal world the creation/destruction of page table levels would
> > be dynamic at this point, like THP.
>
> Hmm, but I'm not sure what problem we're trying to solve by changing the
> interface in this way?
We are trying to make a sensible driver API to deal with huge pages.
> Currently if the core vm requests a huge pud, we give it one, and if we
> can't or don't want to (because of dirty-tracking, for example, which is
> always done on 4K page-level) we just return VM_FAULT_FALLBACK, and the
> fault is retried at a lower level.
Well, my thought would be to move the pte-related stuff into
vmf_insert_range instead of recursing back via VM_FAULT_FALLBACK.
I don't know if the locking works out, but it feels cleaner for the
driver to tell the vm how big a page it can stuff in, rather than the
vm telling the driver to stuff in a certain size page that it might
not want to.
Some devices want to work on an in-between page size like 64k, so they
can't form 2M pages but they can stuff 64k worth of 4K pages in a batch
on every fault.
That idea doesn't fit naturally if the VM is driving the size.
Jason