From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: David Hildenbrand <david@redhat.com>, akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, wangkefeng.wang@huawei.com,
	jhubbard@nvidia.com, ying.huang@intel.com, 21cnbao@gmail.com,
	ryan.roberts@arm.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] mm: support multi-size THP numa balancing
Date: Thu, 28 Mar 2024 22:18:28 +0800	[thread overview]
Message-ID: <87f9ae34-91af-4df3-b4be-5ec4e0eb9982@linux.alibaba.com> (raw)
In-Reply-To: <3f465638-f96f-4d81-87b2-779897c03b21@redhat.com>



On 2024/3/28 20:25, David Hildenbrand wrote:
> On 28.03.24 13:07, David Hildenbrand wrote:
>> On 28.03.24 12:34, Baolin Wang wrote:
>>>
>>>
>>> On 2024/3/28 17:25, David Hildenbrand wrote:
>>>> On 26.03.24 12:51, Baolin Wang wrote:
>>>>> Now the anonymous page allocation already supports multi-size THP
>>>>> (mTHP), but numa balancing still prohibits mTHP migration even
>>>>> though it is an exclusive mapping, which is unreasonable.
>>>>>
>>>>> Allow scanning mTHP:
>>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data
>>>>> section pages") skips shared CoW pages' NUMA page migration to avoid
>>>>> shared data segment migration. In addition, commit 80d47f5de5e3
>>>>> ("mm: don't try to NUMA-migrate COW pages that have other uses")
>>>>> changed to use page_count() to avoid migrating GUP pages, which will
>>>>> also skip mTHP numa scanning. Theoretically, we can use
>>>>> folio_maybe_dma_pinned() to detect the GUP issue; although there is
>>>>> still a GUP race, that issue seems to have been resolved by commit
>>>>> 80d47f5de5e3. Meanwhile, use folio_likely_mapped_shared() to skip
>>>>> shared CoW pages, though this is not a precise sharers count. To
>>>>> check if the folio is shared, ideally we want to make sure every
>>>>> page is mapped to the same process, but doing that seems expensive,
>>>>> and using the estimated mapcount seems to work when running the
>>>>> autonuma benchmark.
>>>>>
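A minimal sketch of the skip checks described above (the exact placement in
do_numa_page() is an assumption on my side, and locking/error handling is
elided):

	/*
	 * Sketch only: skip NUMA hinting for folios that are likely mapped
	 * by multiple processes (shared CoW) or that may be DMA-pinned
	 * (GUP), instead of the old page_count() test.
	 */
	if (folio_likely_mapped_shared(folio) || folio_maybe_dma_pinned(folio))
		goto out_map;	/* leave the mapping as-is, no migration */
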
>>>>> Allow migrating mTHP:
>>>>> As mentioned in the previous thread[1], large folios (including THP)
>>>>> are more susceptible to false sharing issues among threads than 4K
>>>>> base pages, leading to pages ping-ponging back and forth during numa
>>>>> balancing, which is currently not easy to resolve. Therefore, as a
>>>>> start to support mTHP numa balancing, we can follow the PMD-mapped
>>>>> THP's strategy; that is, we can reuse the 2-stage filter in
>>>>> should_numa_migrate_memory() to check if the mTHP is being heavily
>>>>> contended among threads (by checking the CPU id and pid of the last
>>>>> access) to avoid false sharing to some degree. Thus, we can restore
>>>>> all PTE maps upon the first hint page fault of a large folio to
>>>>> follow the PMD-mapped THP's strategy. In the future, we can continue
>>>>> to optimize the NUMA balancing algorithm to avoid the false sharing
>>>>> issue with large folios as much as possible.
>>>>>
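For reference, the "2-stage filter" mentioned above can be illustrated with
a simplified, userspace-style sketch. This is conceptual only, not the
kernel's should_numa_migrate_memory() implementation, and the names below
(struct last_access, two_stage_filter) are made up for illustration:

	#include <stdbool.h>

	/* Hypothetical per-folio record of the last NUMA hint fault. */
	struct last_access {
		int cpu;
		int pid;
	};

	/*
	 * Only report "migrate" when the same CPU/task faults on the folio
	 * twice in a row, which hints at private access rather than false
	 * sharing between threads. Conceptual sketch only.
	 */
	static bool two_stage_filter(struct last_access *rec, int cpu, int pid)
	{
		bool repeat_access = (rec->cpu == cpu && rec->pid == pid);

		/* Remember this access for the next hint fault. */
		rec->cpu = cpu;
		rec->pid = pid;

		return repeat_access;
	}
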
>>>>> Performance data:
>>>>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>>>>> Base: 2024-03-25 mm-unstable branch
>>>>> Enable mTHP to run autonuma-benchmark
>>>>>
>>>>> mTHP:16K
>>>>>                          Base     Patched
>>>>> numa01                 224.70      137.23
>>>>> numa01_THREAD_ALLOC    118.05       50.57
>>>>> numa02                  13.45        9.30
>>>>> numa02_SMT              14.80        7.43
>>>>>
>>>>> mTHP:64K
>>>>>                          Base     Patched
>>>>> numa01                 216.15      135.20
>>>>> numa01_THREAD_ALLOC    115.35       46.93
>>>>> numa02                  13.24        9.24
>>>>> numa02_SMT              14.67        7.31
>>>>>
>>>>> mTHP:128K
>>>>>                          Base     Patched
>>>>> numa01                 205.13      140.41
>>>>> numa01_THREAD_ALLOC    112.93       44.78
>>>>> numa02                  13.16        9.19
>>>>> numa02_SMT              14.81        7.39
>>>>>
>>>>> [1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>> ---
>>>>>     mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
>>>>>     mm/mprotect.c |  3 ++-
>>>>>     2 files changed, 50 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index c30fb4b95e15..36191a9c799c 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>>>>>         update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>>>>     }
>>>>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
>>>>> +                       struct folio *folio, pte_t fault_pte, bool ignore_writable)
>>>>> +{
>>>>> +    int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>>>>> +    unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
>>>>> +    unsigned long end = min(start + folio_nr_pages(folio) * PAGE_SIZE, vma->vm_end);
>>>>> +    pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>>>>> +    bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>>>>> +    unsigned long addr;
>>>>> +
>>>>> +    /* Restore all PTEs' mapping of the large folio */
>>>>> +    for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
>>>>> +        pte_t pte, old_pte;
>>>>> +        pte_t ptent = ptep_get(start_ptep);
>>>>> +        bool writable = false;
>>>>> +
>>>>> +        if (!pte_present(ptent) || !pte_protnone(ptent))
>>>>> +            continue;
>>>>> +
>>>>> +        if (vm_normal_folio(vma, addr, ptent) != folio)
>>>>> +            continue;
>>>>> +
>>>>
>>>> Should you be using folio_pte_batch() in the caller to collect all
>>>> applicable PTEs, and then only have a function that batch-changes a
>>>> given nr of PTEs?
>>>>
>>>> (just like we are now batching other stuff)
>>>
>>> It seems folio_pte_batch() is not suitable for numa balancing, since we
>>> do not care about other PTE bits, only about the protnone bits. And
>>
>> You should be able to ignore most of the bits it cares about; which case
>> are you concerned folio_pte_batch() would miss? Hand-crafting our own
>> functions to cover some corner cases nobody cares about is likely a bad
>> idea.
> 
> Note that the reason why I am asking is that folio_pte_batch() can 
> optimize-out repeated ptep_get() with cont-ptes.

IIUC, the protnone PTEs will not have the cont-pte bit set.

Another concern is that the protnone PTEs of the large folio might not 
be contiguous. For example, if a middle section of the large folio has 
been zapped, we would still like to restore all the protnone PTE mappings 
for the entire folio. However, folio_pte_batch() seems to only help 
identify the initial run of contiguous protnone PTEs.


> Are you sure about that?

Sorry for the noise, I was wrong. Folio validation is needed for some 
corner cases, but I may optimize the code with a simple PFN validation.
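
For illustration, a rough and untested sketch of that simplification
(variable names follow the hunk quoted above; the actual restore of each
PTE and the locking details are elided):

	unsigned long folio_start_pfn = folio_pfn(folio);
	unsigned long folio_end_pfn = folio_start_pfn + folio_nr_pages(folio);

	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
		pte_t ptent = ptep_get(start_ptep);
		unsigned long pfn;

		if (!pte_present(ptent) || !pte_protnone(ptent))
			continue;

		/* Cheap validation: the PTE must still point into this folio */
		pfn = pte_pfn(ptent);
		if (pfn < folio_start_pfn || pfn >= folio_end_pfn)
			continue;

		/* ... rebuild this PTE's mapping here, as in the patch ... */
	}

That would avoid the vm_normal_folio() lookup per PTE while still catching
the corner cases where a PTE no longer belongs to the folio.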


Thread overview: 13+ messages
2024-03-26 11:51 [PATCH 0/2] support multi-size THP numa balancing Baolin Wang
2024-03-26 11:51 ` [PATCH 1/2] mm: factor out the numa mapping rebuilding into a new helper Baolin Wang
2024-03-26 11:51 ` [PATCH 2/2] mm: support multi-size THP numa balancing Baolin Wang
2024-03-27  2:04   ` Huang, Ying
2024-03-27  8:09     ` Baolin Wang
2024-03-27  8:21       ` Huang, Ying
2024-03-27  8:47         ` David Hildenbrand
2024-03-28  1:09           ` Huang, Ying
2024-03-28  9:25   ` David Hildenbrand
2024-03-28 11:34     ` Baolin Wang
2024-03-28 12:07       ` David Hildenbrand
2024-03-28 12:25         ` David Hildenbrand
2024-03-28 14:18           ` Baolin Wang [this message]
