From: Joao Martins <joao.m.martins@oracle.com>
To: Mike Kravetz <mike.kravetz@oracle.com>, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	John Hubbard <jhubbard@nvidia.com>,
	Jason Gunthorpe <jgg@ziepe.ca>
Subject: Re: [PATCH 2/2] mm/hugetlb: refactor subpage recording
Date: Tue, 26 Jan 2021 23:20:15 +0000
Message-ID: <90e28835-cc36-5432-bb3b-4142fbbb2c21@oracle.com>
In-Reply-To: <4d3914e9-f448-8a86-9fc6-e71cec581115@oracle.com>

On 1/26/21 9:21 PM, Mike Kravetz wrote:
> On 1/26/21 11:21 AM, Joao Martins wrote:
>> On 1/26/21 6:08 PM, Mike Kravetz wrote:
>>> On 1/25/21 12:57 PM, Joao Martins wrote:
>>>>
>>>> +static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
>>>> +				 int refs, struct page **pages,
>>>> +				 struct vm_area_struct **vmas)
>>>> +{
>>>> +	int nr;
>>>> +
>>>> +	for (nr = 0; nr < refs; nr++) {
>>>> +		if (likely(pages))
>>>> +			pages[nr] = page++;
>>>> +		if (vmas)
>>>> +			vmas[nr] = vma;
>>>> +	}
>>>> +}
>>>> +
>>>>  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>>>>  			 struct page **pages, struct vm_area_struct **vmas,
>>>>  			 unsigned long *position, unsigned long *nr_pages,
>>>> @@ -4918,28 +4932,16 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>>>>  			continue;
>>>>  		}
>>>>  
>>>> -		refs = 0;
>>>> +		refs = min3(pages_per_huge_page(h) - pfn_offset,
>>>> +			    (vma->vm_end - vaddr) >> PAGE_SHIFT, remainder);
>>>>  
>>>> -same_page:
>>>> -		if (pages)
>>>> -			pages[i] = mem_map_offset(page, pfn_offset);
>>>> +		if (pages || vmas)
>>>> +			record_subpages_vmas(mem_map_offset(page, pfn_offset),
>>>
>>> The assumption made here is that mem_map is contiguous for the range of
>>> pages in the hugetlb page.  I do not believe you can make this assumption
>>> for (gigantic) hugetlb pages which are > MAX_ORDER_NR_PAGES.  For example,
>>>
> 
> Thinking about this a bit more ...
> 
> mem_map can be accessed contiguously if we have a virtual memmap.  Correct?

Right.

> I suspect virtual memmap may be the most common configuration today.  However,
> it seems we do need to handle other configurations.
> 

At the moment mem_map_offset(page, n) in turn does this for offsets >= MAX_ORDER_NR_PAGES:

	pfn_to_page(page_to_pfn(page) + n)


For CONFIG_SPARSE_VMEMMAP or FLATMEM this resolves into something like:

	vmemmap + ((page - vmemmap) + n)

which isn't really different from incrementing the @page pointer directly.

I can only think of CONFIG_SPARSEMEM (without VMEMMAP) and CONFIG_DISCONTIGMEM as the
offending cases, which respectively look into the section info or pgdat.

[CONFIG_DISCONTIGMEM isn't auto-selected by any arch at the moment.]
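To make the equivalence above concrete, here is a userspace sketch (a flat array stands in for the virtually contiguous memmap of the VMEMMAP/FLATMEM case; struct page and all names are illustrative stand-ins, not the kernel's definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the kernel's struct page. */
struct page { int dummy; };

/* A flat array models a virtually contiguous memmap
 * (CONFIG_SPARSE_VMEMMAP / FLATMEM): the "vmemmap" base. */
static struct page memmap[512];

static unsigned long page_to_pfn(const struct page *p)
{
	return (unsigned long)(p - memmap);
}

static struct page *pfn_to_page(unsigned long pfn)
{
	return &memmap[pfn];
}

/* What mem_map_offset(page, n) boils down to for large offsets: the
 * pfn round-trip, which with a contiguous memmap is just page + n. */
static struct page *mem_map_offset_large(struct page *page, unsigned long n)
{
	return pfn_to_page(page_to_pfn(page) + n);
}
```

With a contiguous memmap the round-trip collapses to plain pointer arithmetic; it is only the non-VMEMMAP SPARSEMEM/DISCONTIGMEM translations that can break the `page + n` shortcut.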

>> That would mean get_user_pages_fast() and pin_user_pages_fast() are broken for anything
>> handling PUDs or above? See record_subpages() in gup_huge_pud() or even gup_huge_pgd().
>> It's using the same page++.
> 
> Yes, I believe those would also have the issue.
> Cc: John and Jason as they have spent a significant amount of time in gup
> code recently.  There may be something that makes that code safe?
> 
Maybe. Looking back, gup-fast has always relied on that page pointer arithmetic, even
before the refactors around __record_subpages() and the like.
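For reference, the gup-fast pattern in question is essentially the following (a simplified userspace sketch of record_subpages() with a stand-in struct page; the real code lives in mm/gup.c):

```c
#include <assert.h>

/* Stand-in for the kernel's struct page. */
struct page { int dummy; };

/* Simplified shape of gup-fast's record_subpages(): fill @pages with
 * consecutive struct page pointers. This is only valid when the memmap
 * backing the whole huge page is (virtually) contiguous. */
static void record_subpages(struct page *page, int nr, struct page **pages)
{
	int i;

	for (i = 0; i < nr; i++)
		pages[i] = page + i;	/* the "page++" arithmetic in question */
}
```

Both gup_huge_pud() and gup_huge_pgd() feed this with the head page of a mapping larger than MAX_ORDER_NR_PAGES, so they carry the same contiguity assumption.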

>> This adjustment below probably is what you're trying to suggest.
>>
>> Also, nth_page() is slightly more expensive and so the numbers above change from ~4.4k
>> usecs to ~7.8k usecs.
> 
> If my thoughts about virtual memmap are correct, then could we simply have
> a !vmemmap version of mem_map_offset (or similar routine) to avoid overhead?
In that case, we could ifdef out on SPARSEMEM || DISCONTIGMEM for mem_map_offset() either
internally or within the helper I added for follow_hugetlb_page().
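One shape that ifdef could take (a sketch only: the helper name is hypothetical, and the config checks mirror how nth_page() is gated in the kernel; in userspace the CONFIG_ symbols are undefined, so the pointer-arithmetic branch is selected and the sketch is self-contained):

```c
#include <assert.h>

/* Userspace stand-in so the sketch compiles outside the kernel. */
struct page { int dummy; };

/* Sketch: pay the pfn round-trip only when the memmap may be
 * discontiguous (SPARSEMEM without VMEMMAP, or DISCONTIGMEM);
 * otherwise plain pointer arithmetic is safe and cheaper. */
#if (defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)) || \
    defined(CONFIG_DISCONTIGMEM)
#define hugetlb_subpage(page, n)	pfn_to_page(page_to_pfn(page) + (n))
#else
#define hugetlb_subpage(page, n)	((page) + (n))
#endif
```

That would confine the nth_page() overhead to the configurations that actually need it, keeping the fast pointer arithmetic for the common vmemmap case.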

Thread overview: 15+ messages
2021-01-25 20:57 [PATCH 0/2] mm/hugetlb: follow_hugetlb_page() improvements Joao Martins
2021-01-25 20:57 ` [PATCH 1/2] mm/hugetlb: grab head page refcount once per group of subpages Joao Martins
2021-01-26  2:14   ` Mike Kravetz
2021-01-25 20:57 ` [PATCH 2/2] mm/hugetlb: refactor subpage recording Joao Martins
2021-01-26 18:08   ` Mike Kravetz
2021-01-26 19:21     ` Joao Martins
2021-01-26 19:24       ` Joao Martins
2021-01-26 21:21       ` Mike Kravetz
2021-01-26 23:20         ` Joao Martins [this message]
2021-01-27  0:07         ` Jason Gunthorpe
2021-01-27  1:58           ` Mike Kravetz
2021-01-27  2:10             ` Jason Gunthorpe
2021-02-13 21:12               ` Mike Kravetz
2021-01-27  2:24           ` Matthew Wilcox
2021-01-27  2:50             ` Zi Yan
