From: Zi Yan <ziy@nvidia.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Michal Hocko <mhocko@kernel.org>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Mel Gorman <mgorman@techsingularity.net>,
	John Hubbard <jhubbard@nvidia.com>,
	Mark Hairgrove <mhairgrove@nvidia.com>,
	Nitin Gupta <nigupta@nvidia.com>,
	David Nellans <dnellans@nvidia.com>
Subject: Re: [RFC PATCH 00/31] Generating physically contiguous memory after page allocation
Date: Tue, 19 Feb 2019 21:19:05 -0800
Message-ID: <EB22370B-C5FB-435A-A8D0-95159E403B83@nvidia.com>
In-Reply-To: <5395a183-063f-d409-b957-51a8d02854b2@oracle.com>

On 19 Feb 2019, at 19:18, Mike Kravetz wrote:

> On 2/19/19 6:33 PM, Zi Yan wrote:
>> On 19 Feb 2019, at 17:42, Mike Kravetz wrote:
>>
>>> On 2/15/19 2:08 PM, Zi Yan wrote:
>>>
>>> Thanks for working on this issue!
>>>
>>> I have not yet had a chance to take a look at the code.  However, I do
>>> have some general questions/comments on the approach.
>>
>> Thanks for replying. The code is very intrusive and has a lot of hacks,
>> so it is OK for us to discuss the general idea first. :)
>>
>>
>>>> Patch structure
>>>> ----
>>>>
>>>> The patchset I developed to generate physically contiguous
>>>> memory/arbitrary sized pages merely moves pages around. There are
>>>> three components in this patchset:
>>>>
>>>> 1) a new page migration mechanism, called exchange pages, that
>>>> exchanges the content of two in-use pages instead of performing two
>>>> back-to-back page migrations. It saves on overheads and avoids page
>>>> reclaim and memory compaction in the page allocation path, although
>>>> it is not strictly required if enough free memory is available in
>>>> the system.
>>>>
>>>> 2) a new mechanism that utilizes both page migration and exchange
>>>> pages to produce physically contiguous memory/arbitrary sized pages
>>>> without allocating any new pages, unlike what khugepaged does. It
>>>> works on a per-VMA basis, creating physically contiguous memory out
>>>> of each VMA, which is virtually contiguous. A simple range tree is
>>>> used to ensure no two VMAs overlap with each other in the physical
>>>> address space.
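
To make the exchange idea quoted above concrete, here is a rough,
illustrative sketch in plain C. This is a toy model only, not the actual
exchange_pages() code from patch 01; the structure and function names are
made up. The point is that the data and metadata of two in-use pages are
swapped directly, instead of migrating each page to a third, free page.

#include <string.h>

#define TOY_PAGE_SIZE 4096

struct toy_page {
	unsigned char data[TOY_PAGE_SIZE];
	unsigned long flags;    /* stand-in for page flags */
	void *mapping;          /* stand-in for mapping/rmap state */
};

/* Swap the contents and metadata of two in-use pages via a temporary
 * copy; no new page is allocated from the buddy allocator, unlike two
 * back-to-back migrations. The real kernel code may do the copy
 * differently (e.g. in smaller chunks). */
static void toy_exchange_pages(struct toy_page *a, struct toy_page *b)
{
	struct toy_page tmp;

	memcpy(&tmp, a, sizeof(tmp));
	memcpy(a, b, sizeof(*a));
	memcpy(b, &tmp, sizeof(*b));
}
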
>>>
>>> This appears to be a new approach to generating contiguous areas.
>>> Previous attempts had relied on finding a contiguous area that can
>>> then be used for various purposes including user mappings.  Here, you
>>> take an existing mapping and make it contiguous.  [RFC PATCH 04/31]
>>> mm: add mem_defrag functionality talks about creating a (VPN, PFN)
>>> anchor pair for each vma and then using this pair as the base for
>>> creating a contiguous area.
>>>
>>> I'm curious, how 'fixed' is the anchor?  As you know, there could be a
>>> non-movable page in the PFN range.  As a result, you will not be able
>>> to create a contiguous area starting at that PFN.  In such a case, do
>>> we try another PFN?  I know this could result in much page shuffling.
>>> I'm just trying to figure out how we satisfy a user who really wants a
>>> contiguous area.  Is there some method to keep trying?
>>
>> Good question. The anchor is determined on a per-VMA basis, which can
>> be changed easily, but in this patchset I used a very simple strategy:
>> keep all VMAs non-overlapping in the physical address space to get
>> maximum overall contiguity, and do not change anchors even if
>> non-moveable pages are encountered when generating physically
>> contiguous pages.
>>
>> Basically, the first VMA in the virtual address space, VMA1, has its
>> anchor at (VMA1_start_VPN, ZONE_start_PFN), the second VMA, VMA2, has
>> its anchor at (VMA2_start_VPN, ZONE_start_PFN + VMA1_size), and so on.
>> This keeps all VMAs non-overlapping in the physical address space
>> during contiguous memory generation. When there is a non-moveable page,
>> the anchor is not changed, because no matter whether we assign a new
>> anchor or not, the contiguous pages stop at the non-moveable page. If
>> we tried to pick a new anchor, more effort would be needed to avoid
>> overlapping the new anchor with existing contiguous pages; any overlap
>> would nullify the existing contiguous pages.
>>
>> To satisfy a user who wants a contiguous area of N pages, the minimal
>> distance between any two non-moveable pages in the system memory has to
>> be bigger than N pages; otherwise, nothing would work. If there is such
>> an area (PFN1, PFN1+N) in the physical address space, you can set the
>> anchor to (VPN_USER, PFN1) and use exchange_pages() to generate a
>> contiguous area of N pages. Alternatively, alloc_contig_pages(PFN1,
>> PFN1+N, …) could also work, but only at page allocation time. It also
>> requires that the system have N free pages while alloc_contig_pages()
>> is migrating the pages in (PFN1, PFN1+N) away, or you need to swap
>> pages out to make space.
>>
>> Let me know if this makes sense to you.
>>
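
To make the anchor layout described above concrete, here is a rough,
illustrative sketch. The names are made up and this is not the actual
mem_defrag code from patch 04; it only shows how anchors could be handed
out back to back in the physical address space, so that VMA1 starts at
ZONE_start_PFN, VMA2 starts right after VMA1's span, and so on.

struct toy_vma {
	unsigned long start_vpn;   /* first virtual page number of the VMA */
	unsigned long nr_pages;    /* VMA size in pages */
	unsigned long anchor_vpn;  /* anchor, virtual side */
	unsigned long anchor_pfn;  /* anchor, physical side */
};

static void toy_assign_anchors(struct toy_vma *vmas, int nr_vmas,
			       unsigned long zone_start_pfn)
{
	unsigned long next_pfn = zone_start_pfn;
	int i;

	for (i = 0; i < nr_vmas; i++) {
		/* The anchor maps the VMA's first VPN to the next unused PFN. */
		vmas[i].anchor_vpn = vmas[i].start_vpn;
		vmas[i].anchor_pfn = next_pfn;
		/* Reserve this VMA's whole span so the next VMA's anchor does
		 * not overlap it in the physical address space. */
		next_pfn += vmas[i].nr_pages;
	}
}
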
>
> Yes, that is how I expected the implementation would work.  Thank you.
>
> Another high level question.  One of the benefits of this approach is
> that exchanging pages does not require N free pages as you describe
> above.  This assumes that the vma which we are trying to make contiguous
> is already populated.  If it is not populated, then you also need to
> have N free pages.  Correct?  If this is true, then is the expected use
> case to first populate a vma, and then try to make contiguous?  I would
> assume that if it is not populated and a request to make contiguous is
> given, we should try to allocate/populate the vma with contiguous pages
> at that time?

Yes, I assume the pages within the VMA are already populated but not 
contiguous yet.

My approach treats memory contiguity as an on-demand resource. In some
phases of an application, accelerators or RDMA controllers would
process/transfer data in one or more VMAs, at which time contiguous
memory can help reduce address translation overheads or lift certain
constraints. Different VMAs could be processed in different program
phases, so it might be hard to get contiguous memory for all these VMAs
at allocation time using alloc_contig_pages(). My approach can help get
contiguous memory later, when the demand comes.
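
As a usage sketch of this populate-first, defragment-on-demand flow: the
/proc path below is only a hypothetical placeholder, not the interface
added by this patchset, and the rest is ordinary user-space C.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64UL << 20;          /* a 64MB working buffer */
	char *buf = malloc(len);
	char path[64];
	FILE *f;

	if (!buf)
		return 1;
	memset(buf, 0, len);              /* populate the VMA first */

	/* Demand arrives (e.g. right before accelerator processing or RDMA
	 * registration): ask the kernel to defragment this process's VMAs.
	 * The path below is a made-up placeholder, not the real interface. */
	snprintf(path, sizeof(path), "/proc/%d/mem_defrag_placeholder",
		 (int)getpid());
	f = fopen(path, "w");
	if (f) {
		fprintf(f, "1\n");
		fclose(f);
	}

	/* ... hand buf to the accelerator / register it as an RDMA buffer ... */
	free(buf);
	return 0;
}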

In some cases, you definitely can use alloc_contig_pages() to give users
a contiguous area at page allocation time, if you know the user is going
to use this area for accelerator data processing or as an RDMA buffer and
the area size is fixed.

In addition, we can also use a khugepaged-like approach, having a daemon
periodically scan VMAs and use alloc_contig_pages() to convert
non-contiguous pages in a VMA to contiguous pages, but it would require N
free pages during the conversion.

In sum, my approach complements alloc_contig_pages() and provides more
flexibility. It is not trying to replace alloc_contig_pages().


--
Best Regards,
Yan Zi


