From: "Thomas Hellström (Intel)" <thomas_os@shipmail.org>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: David Airlie <airlied@linux.ie>,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Christian Koenig <christian.koenig@amd.com>
Subject: Re: [RFC PATCH 1/2] mm,drm/ttm: Block fast GUP to TTM huge pages
Date: Wed, 24 Mar 2021 14:35:38 +0100
Message-ID: <6c9acb90-8e91-d8af-7abd-e762d9a901aa@shipmail.org>
In-Reply-To: <20210324124127.GY2356281@nvidia.com>


On 3/24/21 1:41 PM, Jason Gunthorpe wrote:
> On Wed, Mar 24, 2021 at 01:35:17PM +0100, Thomas Hellström (Intel) wrote:
>> On 3/24/21 1:24 PM, Jason Gunthorpe wrote:
>>> On Wed, Mar 24, 2021 at 10:56:43AM +0100, Daniel Vetter wrote:
>>>> On Tue, Mar 23, 2021 at 06:06:53PM +0100, Thomas Hellström (Intel) wrote:
>>>>> On 3/23/21 5:37 PM, Jason Gunthorpe wrote:
>>>>>> On Tue, Mar 23, 2021 at 05:34:51PM +0100, Thomas Hellström (Intel) wrote:
>>>>>>
>>>>>>>>> @@ -210,6 +211,20 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf,
>>>>>>>>>      	if ((pfn & (fault_page_size - 1)) != 0)
>>>>>>>>>      		goto out_fallback;
>>>>>>>>> +	/*
>>>>>>>>> +	 * Huge entries must be special, that is, marked as devmap
>>>>>>>>> +	 * with no backing device map range. If there is a backing
>>>>>>>>> +	 * range, don't insert a huge entry.
>>>>>>>>> +	 * If this check turns out to be too much of a performance hit,
>>>>>>>>> +	 * we can instead have drivers indicate whether they may have
>>>>>>>>> +	 * backing device map ranges and if not, skip this lookup.
>>>>>>>>> +	 */
>>>>>>>> I think we can do this statically:
>>>>>>>> - if it's system memory we know there's no devmap for it, and we do the
>>>>>>>>       trick to block gup_fast
>>>>>>> Yes, that should work.
>>>>>>>> - if it's iomem, we know gup_fast won't work anyway if we don't set PFN_DEV,
>>>>>>>>       so might as well not do that
>>>>>>> I think gup_fast will unfortunately mistake a huge iomem page for an
>>>>>>> ordinary page and try to access a non-existent struct page for it, unless we
>>>>>>> do the devmap trick.
>>>>>>>
>>>>>>> And the lookup would then be for the rare case where a driver would have
>>>>>>> already registered a dev_pagemap for an iomem area which may also be mapped
>>>>>>> through TTM (like the patch from Felix a couple of weeks ago). If a driver
>>>>>>> can promise not to do that, then we can safely remove the lookup.
>>>>>> Isn't the devmap PTE flag arch optional? Does this fall back to not
>>>>>> using huge pages on arches that don't support it?
>>>>> Good point. No, currently it's only conditioned on transhuge page support.
>>>>> We need to condition it on devmap support as well.
>>>>>
>>>>>> Also, I feel like this code to install "pte_special" huge pages does
>>>>>> not belong in the drm subsystem.
>>>>> I could add helpers in huge_memory.c:
>>>>>
>>>>> vmf_insert_pfn_pmd_prot_special() and
>>>>> vmf_insert_pfn_pud_prot_special()
>>>> The somewhat annoying thing is that we'd need an error code so we fall
>>>> back to pte fault handling. That's at least my understanding of how
>>>> pud/pmd fault handling works. Not sure how awkward that is going to be
>>>> with the overall fault handling flow.
>>>>
>>>> But aside from that I think this makes tons of sense.
>>> Why should the driver be so specific?
>>>
>>> vmf_insert_pfn_range_XXX()
>>>
>>> And it will figure out the optimal way to build the page tables.
>>>
>>> Driver should provide the largest physically contiguous range it can
>> I figure that would probably work, but since the huge_fault() interface
>> already provides the size of the fault based on how the page table is
>> currently populated, that would mean moving a lot of that logic into
>> that helper...
> But we don't really care about the size of the fault when we stuff the
> pfns.
>
> The device might use it when handling the fault, but once the fault is
> handled the device knows what the contiguous pfn range is that it has
> available to stuff into the page tables; it just tells the vmf_insert
> what it was able to create, and it creates the necessary page table
> structure.
>
> The size of the hole in the page table is really only advisory; the
> device may not want to make a 2M or 1G page entry and may prefer to
> only create 4k.
>
> In an ideal world the creation/destruction of page table levels would
> be dynamic at this point, like THP.
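
So something like the following, I assume (purely a hypothetical sketch of
the proposed interface; no such helper exists in mm today):

vm_fault_t vmf_insert_pfn_range(struct vm_fault *vmf, unsigned long pfn,
				unsigned long nr_pages, pgprot_t prot);

with the core mm choosing PUD-, PMD- or PTE-level entries based on the
alignment and size of the contiguous pfn range the driver hands in.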

Hmm, but I'm not sure what problem we're trying to solve by changing the 
interface in this way.

Currently if the core vm requests a huge pud, we give it one, and if we 
can't or don't want to (because of dirty-tracking, for example, which is 
always done at the 4K page level) we just return VM_FAULT_FALLBACK, and 
the fault is retried at a lower level. Also, determining whether we have 
a contiguous range is not free, so we don't want to do that unnecessarily.
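
I.e., roughly the following pattern (a minimal sketch with hypothetical
driver helpers, not the actual TTM code):

static vm_fault_t my_huge_fault(struct vm_fault *vmf,
				enum page_entry_size pe_size)
{
	/*
	 * Dirty tracking is done at the 4K level, so refuse huge
	 * entries and let the core mm retry the fault at a lower
	 * page-table level.
	 */
	if (my_bo_is_dirty_tracked(vmf))	/* hypothetical */
		return VM_FAULT_FALLBACK;

	switch (pe_size) {
	case PE_SIZE_PMD:
		return my_insert_huge(vmf, PMD_SIZE);	/* hypothetical */
	case PE_SIZE_PUD:
		return my_insert_huge(vmf, PUD_SIZE);	/* hypothetical */
	default:
		return VM_FAULT_FALLBACK;
	}
}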

/Thomas




