* [PATCH 0/2] support multi-size THP numa balancing
@ 2024-03-26 11:51 Baolin Wang
  2024-03-26 11:51 ` [PATCH 1/2] mm: factor out the numa mapping rebuilding into a new helper Baolin Wang
  2024-03-26 11:51 ` [PATCH 2/2] mm: support multi-size THP numa balancing Baolin Wang
  0 siblings, 2 replies; 13+ messages in thread
From: Baolin Wang @ 2024-03-26 11:51 UTC (permalink / raw)
  To: akpm
  Cc: david, mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
	ryan.roberts, baolin.wang, linux-mm, linux-kernel

This patchset adds support for mTHP numa balancing. As a simple solution
to start with, the NUMA balancing algorithm for mTHP follows the THP
strategy as the basic support. Please find details in each patch.

Changes from RFC v2:
 - Follow the THP algorithm per Huang, Ying.

Changes from RFC v1:
 - Add some performance data per Huang, Ying.
 - Allow mTHP scanning per David Hildenbrand.
 - Skip shared mappings for numa balancing to avoid false sharing.
 - Expand the commit messages.

Baolin Wang (2):
  mm: factor out the numa mapping rebuilding into a new helper
  mm: support multi-size THP numa balancing

 mm/memory.c   | 76 +++++++++++++++++++++++++++++++++++++++++----------
 mm/mprotect.c |  3 +-
 2 files changed, 64 insertions(+), 15 deletions(-)

-- 
2.39.3



* [PATCH 1/2] mm: factor out the numa mapping rebuilding into a new helper
  2024-03-26 11:51 [PATCH 0/2] support multi-size THP numa balancing Baolin Wang
@ 2024-03-26 11:51 ` Baolin Wang
  2024-03-26 11:51 ` [PATCH 2/2] mm: support multi-size THP numa balancing Baolin Wang
  1 sibling, 0 replies; 13+ messages in thread
From: Baolin Wang @ 2024-03-26 11:51 UTC (permalink / raw)
  To: akpm
  Cc: david, mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
	ryan.roberts, baolin.wang, linux-mm, linux-kernel

In preparation for supporting NUMA balancing of large folios, factor out
the numa mapping rebuilding into a new helper.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/memory.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 62ee4a15092a..c30fb4b95e15 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5054,6 +5054,20 @@ int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
 	return mpol_misplaced(folio, vmf, addr);
 }
 
+static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
+					bool writable)
+{
+	pte_t pte, old_pte;
+
+	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
+	pte = pte_modify(old_pte, vma->vm_page_prot);
+	pte = pte_mkyoung(pte);
+	if (writable)
+		pte = pte_mkwrite(pte, vma);
+	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
+}
+
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -5159,13 +5173,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	 * Make it present again, depending on how arch implements
 	 * non-accessible ptes, some can allow access by kernel mode.
 	 */
-	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
-	pte = pte_modify(old_pte, vma->vm_page_prot);
-	pte = pte_mkyoung(pte);
-	if (writable)
-		pte = pte_mkwrite(pte, vma);
-	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
-	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
+	numa_rebuild_single_mapping(vmf, vma, writable);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	goto out;
 }
-- 
2.39.3



* [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-26 11:51 [PATCH 0/2] support multi-size THP numa balancing Baolin Wang
  2024-03-26 11:51 ` [PATCH 1/2] mm: factor out the numa mapping rebuilding into a new helper Baolin Wang
@ 2024-03-26 11:51 ` Baolin Wang
  2024-03-27  2:04   ` Huang, Ying
  2024-03-28  9:25   ` David Hildenbrand
  1 sibling, 2 replies; 13+ messages in thread
From: Baolin Wang @ 2024-03-26 11:51 UTC (permalink / raw)
  To: akpm
  Cc: david, mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
	ryan.roberts, baolin.wang, linux-mm, linux-kernel

The anonymous page allocation now supports multi-size THP (mTHP), but
NUMA balancing still prohibits mTHP migration even when it is an
exclusive mapping, which is unreasonable.

Allow scanning mTHP:
Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
pages") skips NUMA migration of shared CoW pages to avoid migrating shared
data segments. In addition, commit 80d47f5de5e3 ("mm: don't try to
NUMA-migrate COW pages that have other uses") changed to using page_count()
to avoid migrating GUP pages, which also skips mTHP numa scanning.
Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
issue; although there is still a GUP race, the issue seems to have been
resolved by commit 80d47f5de5e3. Meanwhile, use folio_likely_mapped_shared()
to skip shared CoW pages, though this is not a precise sharers count. To
check whether the folio is shared, ideally we want to make sure every page
is mapped to the same process, but doing that seems expensive, and using
the estimated mapcount seems to work when running the autonuma benchmark.

Allow migrating mTHP:
As mentioned in the previous thread[1], large folios (including THP) are
more susceptible to false sharing issues among threads than 4K base pages,
leading to pages ping-ponging back and forth during numa balancing, which
is currently not easy to resolve. Therefore, as a start to support mTHP
numa balancing, we can follow the PMD-mapped THP's strategy, which means
we can reuse the 2-stage filter in should_numa_migrate_memory() to check
whether the mTHP is being heavily contended among threads (by checking the
CPU id and pid of the last access) to avoid false sharing to some degree.
Thus, we can restore all PTE maps upon the first hint page fault of a
large folio to follow the PMD-mapped THP's strategy. In the future, we can
continue to optimize the NUMA balancing algorithm to avoid the false
sharing issue with large folios as much as possible.

Performance data:
Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
Base: 2024-03-25 mm-unstable branch
Enable mTHP to run autonuma-benchmark
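
For reference, each mTHP size below is enabled through its per-size sysfs
knob; e.g., for the 16K case (assuming a 4K base page size), something
like:

  echo always > /sys/kernel/mm/transparent_hugepage/hugepages-16kB/enabled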

mTHP:16K
Base				Patched
numa01				numa01
224.70				137.23
numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
118.05				50.57
numa02				numa02
13.45				9.30
numa02_SMT			numa02_SMT
14.80				7.43

mTHP:64K
Base				Patched
numa01				numa01
216.15				135.20
numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
115.35				46.93
numa02				numa02
13.24				9.24
numa02_SMT			numa02_SMT
14.67				7.31

mTHP:128K
Base				Patched
numa01				numa01
205.13				140.41
numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
112.93				44.78
numa02				numa02
13.16				9.19
numa02_SMT			numa02_SMT
14.81				7.39

[1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
 mm/mprotect.c |  3 ++-
 2 files changed, 50 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c30fb4b95e15..36191a9c799c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
 	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 }
 
+static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
+				       struct folio *folio, pte_t fault_pte, bool ignore_writable)
+{
+	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
+	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
+	unsigned long end = min(start + folio_nr_pages(folio) * PAGE_SIZE, vma->vm_end);
+	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
+	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
+	unsigned long addr;
+
+	/* Restore all PTEs' mapping of the large folio */
+	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
+		pte_t pte, old_pte;
+		pte_t ptent = ptep_get(start_ptep);
+		bool writable = false;
+
+		if (!pte_present(ptent) || !pte_protnone(ptent))
+			continue;
+
+		if (vm_normal_folio(vma, addr, ptent) != folio)
+			continue;
+
+		if (!ignore_writable) {
+			writable = pte_write(pte);
+			if (!writable && pte_write_upgrade &&
+			    can_change_pte_writable(vma, addr, pte))
+				writable = true;
+		}
+
+		old_pte = ptep_modify_prot_start(vma, addr, start_ptep);
+		pte = pte_modify(old_pte, vma->vm_page_prot);
+		pte = pte_mkyoung(pte);
+		if (writable)
+			pte = pte_mkwrite(pte, vma);
+		ptep_modify_prot_commit(vma, addr, start_ptep, old_pte, pte);
+		update_mmu_cache_range(vmf, vma, addr, start_ptep, 1);
+	}
+}
+
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct folio *folio = NULL;
 	int nid = NUMA_NO_NODE;
-	bool writable = false;
+	bool writable = false, ignore_writable = false;
 	int last_cpupid;
 	int target_nid;
 	pte_t pte, old_pte;
-	int flags = 0;
+	int flags = 0, nr_pages;
 
 	/*
 	 * The pte cannot be used safely until we verify, while holding the page
@@ -5107,10 +5146,6 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (!folio || folio_is_zone_device(folio))
 		goto out_map;
 
-	/* TODO: handle PTE-mapped THP */
-	if (folio_test_large(folio))
-		goto out_map;
-
 	/*
 	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
 	 * much anyway since they can be in shared cache state. This misses
@@ -5130,6 +5165,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 		flags |= TNF_SHARED;
 
 	nid = folio_nid(folio);
+	nr_pages = folio_nr_pages(folio);
 	/*
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time.  So use default value.
@@ -5146,6 +5182,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	}
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	writable = false;
+	ignore_writable = true;
 
 	/* Migrate to the requested node */
 	if (migrate_misplaced_folio(folio, vma, target_nid)) {
@@ -5166,14 +5203,17 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 
 out:
 	if (nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, nid, 1, flags);
+		task_numa_fault(last_cpupid, nid, nr_pages, flags);
 	return 0;
 out_map:
 	/*
 	 * Make it present again, depending on how arch implements
 	 * non-accessible ptes, some can allow access by kernel mode.
 	 */
-	numa_rebuild_single_mapping(vmf, vma, writable);
+	if (folio && folio_test_large(folio))
+		numa_rebuild_large_mapping(vmf, vma, folio, pte, ignore_writable);
+	else
+		numa_rebuild_single_mapping(vmf, vma, writable);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	goto out;
 }
diff --git a/mm/mprotect.c b/mm/mprotect.c
index f8a4544b4601..94878c39ee32 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -129,7 +129,8 @@ static long change_pte_range(struct mmu_gather *tlb,
 
 				/* Also skip shared copy-on-write pages */
 				if (is_cow_mapping(vma->vm_flags) &&
-				    folio_ref_count(folio) != 1)
+				    (folio_maybe_dma_pinned(folio) ||
+				     folio_likely_mapped_shared(folio)))
 					continue;
 
 				/*
-- 
2.39.3



* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-26 11:51 ` [PATCH 2/2] mm: support multi-size THP numa balancing Baolin Wang
@ 2024-03-27  2:04   ` Huang, Ying
  2024-03-27  8:09     ` Baolin Wang
  2024-03-28  9:25   ` David Hildenbrand
  1 sibling, 1 reply; 13+ messages in thread
From: Huang, Ying @ 2024-03-27  2:04 UTC (permalink / raw)
  To: Baolin Wang
  Cc: akpm, david, mgorman, wangkefeng.wang, jhubbard, 21cnbao,
	ryan.roberts, linux-mm, linux-kernel

Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> Now the anonymous page allocation already supports multi-size THP (mTHP),
> but the numa balancing still prohibits mTHP migration even though it is an
> exclusive mapping, which is unreasonable.
>
> Allow scanning mTHP:
> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
> pages") skips shared CoW pages' NUMA page migration to avoid shared data
> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
> NUMA-migrate COW pages that have other uses") change to use page_count()
> to avoid GUP pages migration, that will also skip the mTHP numa scaning.
> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
> issue, although there is still a GUP race, the issue seems to have been
> resolved by commit 80d47f5de5e3. Meanwhile, use the folio_likely_mapped_shared()
> to skip shared CoW pages though this is not a precise sharers count. To
> check if the folio is shared, ideally we want to make sure every page is
> mapped to the same process, but doing that seems expensive and using
> the estimated mapcount seems can work when running autonuma benchmark.

Because we can now deal with shared mTHP, it appears even possible to
remove the folio_likely_mapped_shared() check?

> Allow migrating mTHP:
> As mentioned in the previous thread[1], large folios (including THP) are
> more susceptible to false sharing issues among threads than 4K base page,
> leading to pages ping-pong back and forth during numa balancing, which is
> currently not easy to resolve. Therefore, as a start to support mTHP numa
> balancing, we can follow the PMD mapped THP's strategy, that means we can
> reuse the 2-stage filter in should_numa_migrate_memory() to check if the
> mTHP is being heavily contended among threads (through checking the CPU id
> and pid of the last access) to avoid false sharing at some degree. Thus,
> we can restore all PTE maps upon the first hint page fault of a large folio
> to follow the PMD mapped THP's strategy. In the future, we can continue to
> optimize the NUMA balancing algorithm to avoid the false sharing issue with
> large folios as much as possible.
>
> Performance data:
> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
> Base: 2024-03-25 mm-unstable branch
> Enable mTHP to run autonuma-benchmark
>
> mTHP:16K
> Base				Patched
> numa01				numa01
> 224.70				137.23
> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
> 118.05				50.57
> numa02				numa02
> 13.45				9.30
> numa02_SMT			numa02_SMT
> 14.80				7.43
>
> mTHP:64K
> Base				Patched
> numa01				numa01
> 216.15				135.20
> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
> 115.35				46.93
> numa02				numa02
> 13.24				9.24
> numa02_SMT			numa02_SMT
> 14.67				7.31
>
> mTHP:128K
> Base				Patched
> numa01				numa01
> 205.13				140.41
> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
> 112.93				44.78
> numa02				numa02
> 13.16				9.19
> numa02_SMT			numa02_SMT
> 14.81				7.39
>
> [1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
>  mm/mprotect.c |  3 ++-
>  2 files changed, 50 insertions(+), 9 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index c30fb4b95e15..36191a9c799c 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>  	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>  }
>  
> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
> +				       struct folio *folio, pte_t fault_pte, bool ignore_writable)
> +{
> +	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
> +	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
> +	unsigned long end = min(start + folio_nr_pages(folio) * PAGE_SIZE, vma->vm_end);

If start is in the middle of the folio, it's possible for end to go beyond
the end of the folio.  So it should be something like below?

	unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
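
As a worked example (hypothetical numbers, assuming 4K pages, only to
illustrate): take an order-2 folio (4 pages) mapped at 0x10000-0x13fff,
with vma->vm_start = 0x12000 and a fault at vmf->address = 0x12000, so
nr = 2.  Then start = max(0x12000 - 2 * PAGE_SIZE, 0x12000) = 0x12000,
and the current formula gives end = 0x12000 + 4 * PAGE_SIZE = 0x16000,
two pages past the end of the folio, while the formula above gives
end = 0x12000 + (4 - 2) * PAGE_SIZE = 0x14000, the real end of the folio.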

> +	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
> +	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
> +	unsigned long addr;
> +
> +	/* Restore all PTEs' mapping of the large folio */
> +	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
> +		pte_t pte, old_pte;
> +		pte_t ptent = ptep_get(start_ptep);
> +		bool writable = false;
> +
> +		if (!pte_present(ptent) || !pte_protnone(ptent))
> +			continue;
> +
> +		if (vm_normal_folio(vma, addr, ptent) != folio)
> +			continue;
> +
> +		if (!ignore_writable) {
> +			writable = pte_write(pte);
> +			if (!writable && pte_write_upgrade &&
> +			    can_change_pte_writable(vma, addr, pte))
> +				writable = true;
> +		}
> +
> +		old_pte = ptep_modify_prot_start(vma, addr, start_ptep);
> +		pte = pte_modify(old_pte, vma->vm_page_prot);
> +		pte = pte_mkyoung(pte);
> +		if (writable)
> +			pte = pte_mkwrite(pte, vma);
> +		ptep_modify_prot_commit(vma, addr, start_ptep, old_pte, pte);
> +		update_mmu_cache_range(vmf, vma, addr, start_ptep, 1);

Can this be batched for the whole folio?

> +	}
> +}
> +

[snip]

--
Best Regards,
Huang, Ying


* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-27  2:04   ` Huang, Ying
@ 2024-03-27  8:09     ` Baolin Wang
  2024-03-27  8:21       ` Huang, Ying
  0 siblings, 1 reply; 13+ messages in thread
From: Baolin Wang @ 2024-03-27  8:09 UTC (permalink / raw)
  To: Huang, Ying
  Cc: akpm, david, mgorman, wangkefeng.wang, jhubbard, 21cnbao,
	ryan.roberts, linux-mm, linux-kernel



On 2024/3/27 10:04, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
> 
>> Now the anonymous page allocation already supports multi-size THP (mTHP),
>> but the numa balancing still prohibits mTHP migration even though it is an
>> exclusive mapping, which is unreasonable.
>>
>> Allow scanning mTHP:
>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>> pages") skips shared CoW pages' NUMA page migration to avoid shared data
>> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
>> NUMA-migrate COW pages that have other uses") change to use page_count()
>> to avoid GUP pages migration, that will also skip the mTHP numa scaning.
>> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
>> issue, although there is still a GUP race, the issue seems to have been
>> resolved by commit 80d47f5de5e3. Meanwhile, use the folio_likely_mapped_shared()
>> to skip shared CoW pages though this is not a precise sharers count. To
>> check if the folio is shared, ideally we want to make sure every page is
>> mapped to the same process, but doing that seems expensive and using
>> the estimated mapcount seems can work when running autonuma benchmark.
> 
> Because now we can deal with shared mTHP, it appears even possible to
> remove folio_likely_mapped_shared() check?

IMO, the issue solved by commit 859d4adc3415 is about shared CoW
mappings, and I prefer to measure that in another patch :)

>> Allow migrating mTHP:
>> As mentioned in the previous thread[1], large folios (including THP) are
>> more susceptible to false sharing issues among threads than 4K base page,
>> leading to pages ping-pong back and forth during numa balancing, which is
>> currently not easy to resolve. Therefore, as a start to support mTHP numa
>> balancing, we can follow the PMD mapped THP's strategy, that means we can
>> reuse the 2-stage filter in should_numa_migrate_memory() to check if the
>> mTHP is being heavily contended among threads (through checking the CPU id
>> and pid of the last access) to avoid false sharing at some degree. Thus,
>> we can restore all PTE maps upon the first hint page fault of a large folio
>> to follow the PMD mapped THP's strategy. In the future, we can continue to
>> optimize the NUMA balancing algorithm to avoid the false sharing issue with
>> large folios as much as possible.
>>
>> Performance data:
>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>> Base: 2024-03-25 mm-unstable branch
>> Enable mTHP to run autonuma-benchmark
>>
>> mTHP:16K
>> Base				Patched
>> numa01				numa01
>> 224.70				137.23
>> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
>> 118.05				50.57
>> numa02				numa02
>> 13.45				9.30
>> numa02_SMT			numa02_SMT
>> 14.80				7.43
>>
>> mTHP:64K
>> Base				Patched
>> numa01				numa01
>> 216.15				135.20
>> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
>> 115.35				46.93
>> numa02				numa02
>> 13.24				9.24
>> numa02_SMT			numa02_SMT
>> 14.67				7.31
>>
>> mTHP:128K
>> Base				Patched
>> numa01				numa01
>> 205.13				140.41
>> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
>> 112.93				44.78
>> numa02				numa02
>> 13.16				9.19
>> numa02_SMT			numa02_SMT
>> 14.81				7.39
>>
>> [1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>   mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
>>   mm/mprotect.c |  3 ++-
>>   2 files changed, 50 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c30fb4b95e15..36191a9c799c 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>>   	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>   }
>>   
>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
>> +				       struct folio *folio, pte_t fault_pte, bool ignore_writable)
>> +{
>> +	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>> +	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
>> +	unsigned long end = min(start + folio_nr_pages(folio) * PAGE_SIZE, vma->vm_end);
> 
> If start is in the middle of folio, it's possible for end to go beyond
> the end of folio.  So, should be something like below?

Yes, good catch, even though the iteration below can skip over the parts
that exceed the size of that folio.

> 	unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);

Yes, this looks good to me. Will do in next version. Thanks.

>> +	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>> +	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>> +	unsigned long addr;
>> +
>> +	/* Restore all PTEs' mapping of the large folio */
>> +	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
>> +		pte_t pte, old_pte;
>> +		pte_t ptent = ptep_get(start_ptep);
>> +		bool writable = false;
>> +
>> +		if (!pte_present(ptent) || !pte_protnone(ptent))
>> +			continue;
>> +
>> +		if (vm_normal_folio(vma, addr, ptent) != folio)
>> +			continue;
>> +
>> +		if (!ignore_writable) {
>> +			writable = pte_write(pte);
>> +			if (!writable && pte_write_upgrade &&
>> +			    can_change_pte_writable(vma, addr, pte))
>> +				writable = true;
>> +		}
>> +
>> +		old_pte = ptep_modify_prot_start(vma, addr, start_ptep);
>> +		pte = pte_modify(old_pte, vma->vm_page_prot);
>> +		pte = pte_mkyoung(pte);
>> +		if (writable)
>> +			pte = pte_mkwrite(pte, vma);
>> +		ptep_modify_prot_commit(vma, addr, start_ptep, old_pte, pte);
>> +		update_mmu_cache_range(vmf, vma, addr, start_ptep, 1);
> 
> Can this be batched for the whole folio?

I thought about it, but things are a little tricky. The folio may not
contain contiguous protnone PTEs, so we should skip non-present or
non-protnone PTEs.

Moreover, it is necessary to define architecture-specific
ptep_modify_prot_start*_nr and ptep_modify_prot_commit*_nr helpers that
can handle multiple PTEs, which is on my TODO list along with batching
the numa scanning in change_pte_range().
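
Just to sketch the shape of the interface I have in mind (hypothetical,
these are not existing kernel APIs; a generic fallback could simply loop
over the existing per-PTE helpers, while architectures with cont-PTE
support could handle the whole range at once):

	/* hypothetical batched variants of the prot_start/commit helpers */
	pte_t ptep_modify_prot_start_nr(struct vm_area_struct *vma,
					unsigned long addr, pte_t *ptep,
					unsigned int nr);
	void ptep_modify_prot_commit_nr(struct vm_area_struct *vma,
					unsigned long addr, pte_t *ptep,
					pte_t old_pte, pte_t pte,
					unsigned int nr);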


* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-27  8:09     ` Baolin Wang
@ 2024-03-27  8:21       ` Huang, Ying
  2024-03-27  8:47         ` David Hildenbrand
  0 siblings, 1 reply; 13+ messages in thread
From: Huang, Ying @ 2024-03-27  8:21 UTC (permalink / raw)
  To: Baolin Wang
  Cc: akpm, david, mgorman, wangkefeng.wang, jhubbard, 21cnbao,
	ryan.roberts, linux-mm, linux-kernel

Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> On 2024/3/27 10:04, Huang, Ying wrote:
>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>> 
>>> Now the anonymous page allocation already supports multi-size THP (mTHP),
>>> but the numa balancing still prohibits mTHP migration even though it is an
>>> exclusive mapping, which is unreasonable.
>>>
>>> Allow scanning mTHP:
>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>>> pages") skips shared CoW pages' NUMA page migration to avoid shared data
>>> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
>>> NUMA-migrate COW pages that have other uses") change to use page_count()
>>> to avoid GUP pages migration, that will also skip the mTHP numa scaning.
>>> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
>>> issue, although there is still a GUP race, the issue seems to have been
>>> resolved by commit 80d47f5de5e3. Meanwhile, use the folio_likely_mapped_shared()
>>> to skip shared CoW pages though this is not a precise sharers count. To
>>> check if the folio is shared, ideally we want to make sure every page is
>>> mapped to the same process, but doing that seems expensive and using
>>> the estimated mapcount seems can work when running autonuma benchmark.
>> Because now we can deal with shared mTHP, it appears even possible
>> to
>> remove folio_likely_mapped_shared() check?
>
> IMO, the issue solved by commit 859d4adc3415 is about shared CoW
> mapping, and I prefer to measure it in another patch:)

I mean we can deal with shared mTHP (by multiple threads or multiple
processes) with this patch.  Right?

[snip]

--
Best Regards,
Huang, Ying



* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-27  8:21       ` Huang, Ying
@ 2024-03-27  8:47         ` David Hildenbrand
  2024-03-28  1:09           ` Huang, Ying
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-03-27  8:47 UTC (permalink / raw)
  To: Huang, Ying, Baolin Wang
  Cc: akpm, mgorman, wangkefeng.wang, jhubbard, 21cnbao, ryan.roberts,
	linux-mm, linux-kernel

On 27.03.24 09:21, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
> 
>> On 2024/3/27 10:04, Huang, Ying wrote:
>>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>>>
>>>> Now the anonymous page allocation already supports multi-size THP (mTHP),
>>>> but the numa balancing still prohibits mTHP migration even though it is an
>>>> exclusive mapping, which is unreasonable.
>>>>
>>>> Allow scanning mTHP:
>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>>>> pages") skips shared CoW pages' NUMA page migration to avoid shared data
>>>> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
>>>> NUMA-migrate COW pages that have other uses") change to use page_count()
>>>> to avoid GUP pages migration, that will also skip the mTHP numa scaning.
>>>> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
>>>> issue, although there is still a GUP race, the issue seems to have been
>>>> resolved by commit 80d47f5de5e3. Meanwhile, use the folio_likely_mapped_shared()
>>>> to skip shared CoW pages though this is not a precise sharers count. To
>>>> check if the folio is shared, ideally we want to make sure every page is
>>>> mapped to the same process, but doing that seems expensive and using
>>>> the estimated mapcount seems can work when running autonuma benchmark.
>>> Because now we can deal with shared mTHP, it appears even possible
>>> to
>>> remove folio_likely_mapped_shared() check?
>>
>> IMO, the issue solved by commit 859d4adc3415 is about shared CoW
>> mapping, and I prefer to measure it in another patch:)
> 
> I mean we can deal with shared mTHP (by multiple threads or multiple
> processes) with this patch.  Right?

It's independent of the folio order. We don't want to mess with shared COW pages, see

commit 859d4adc3415a64ccb8b0c50dc4e3a888dcb5805
Author: Henry Willard <henry.willard@oracle.com>
Date:   Wed Jan 31 16:21:07 2018 -0800

     mm: numa: do not trap faults on shared data section pages.
     
     Workloads consisting of a large number of processes running the same
     program with a very large shared data segment may experience performance
     problems when numa balancing attempts to migrate the shared cow pages.
     This manifests itself with many processes or tasks in
     TASK_UNINTERRUPTIBLE state waiting for the shared pages to be migrated.
...

that introduced this handling.

-- 
Cheers,

David / dhildenb



* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-27  8:47         ` David Hildenbrand
@ 2024-03-28  1:09           ` Huang, Ying
  0 siblings, 0 replies; 13+ messages in thread
From: Huang, Ying @ 2024-03-28  1:09 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Baolin Wang, akpm, mgorman, wangkefeng.wang, jhubbard, 21cnbao,
	ryan.roberts, linux-mm, linux-kernel

David Hildenbrand <david@redhat.com> writes:

> On 27.03.24 09:21, Huang, Ying wrote:
>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>> 
>>> On 2024/3/27 10:04, Huang, Ying wrote:
>>>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>>>>
>>>>> Now the anonymous page allocation already supports multi-size THP (mTHP),
>>>>> but the numa balancing still prohibits mTHP migration even though it is an
>>>>> exclusive mapping, which is unreasonable.
>>>>>
>>>>> Allow scanning mTHP:
>>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>>>>> pages") skips shared CoW pages' NUMA page migration to avoid shared data
>>>>> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
>>>>> NUMA-migrate COW pages that have other uses") change to use page_count()
>>>>> to avoid GUP pages migration, that will also skip the mTHP numa scaning.
>>>>> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
>>>>> issue, although there is still a GUP race, the issue seems to have been
>>>>> resolved by commit 80d47f5de5e3. Meanwhile, use the folio_likely_mapped_shared()
>>>>> to skip shared CoW pages though this is not a precise sharers count. To
>>>>> check if the folio is shared, ideally we want to make sure every page is
>>>>> mapped to the same process, but doing that seems expensive and using
>>>>> the estimated mapcount seems can work when running autonuma benchmark.
>>>> Because now we can deal with shared mTHP, it appears even possible
>>>> to
>>>> remove folio_likely_mapped_shared() check?
>>>
>>> IMO, the issue solved by commit 859d4adc3415 is about shared CoW
>>> mapping, and I prefer to measure it in another patch:)
>> I mean we can deal with shared mTHP (by multiple threads or multiple
>> processes) with this patch.  Right?
>
> It's independent of the folio order. We don't want to mess with shared COW pages, see
>
> commit 859d4adc3415a64ccb8b0c50dc4e3a888dcb5805
> Author: Henry Willard <henry.willard@oracle.com>
> Date:   Wed Jan 31 16:21:07 2018 -0800
>
>     mm: numa: do not trap faults on shared data section pages.
>          Workloads consisting of a large number of processes running
>         the same
>     program with a very large shared data segment may experience performance
>     problems when numa balancing attempts to migrate the shared cow pages.
>     This manifests itself with many processes or tasks in
>     TASK_UNINTERRUPTIBLE state waiting for the shared pages to be migrated.
> ...
>
> that introduced this handling.

Sorry, I misunderstood your words.

--
Best Regards,
Huang, Ying


* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-26 11:51 ` [PATCH 2/2] mm: support multi-size THP numa balancing Baolin Wang
  2024-03-27  2:04   ` Huang, Ying
@ 2024-03-28  9:25   ` David Hildenbrand
  2024-03-28 11:34     ` Baolin Wang
  1 sibling, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-03-28  9:25 UTC (permalink / raw)
  To: Baolin Wang, akpm
  Cc: mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
	ryan.roberts, linux-mm, linux-kernel

On 26.03.24 12:51, Baolin Wang wrote:
> Now the anonymous page allocation already supports multi-size THP (mTHP),
> but the numa balancing still prohibits mTHP migration even though it is an
> exclusive mapping, which is unreasonable.
> 
> Allow scanning mTHP:
> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
> pages") skips shared CoW pages' NUMA page migration to avoid shared data
> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
> NUMA-migrate COW pages that have other uses") change to use page_count()
> to avoid GUP pages migration, that will also skip the mTHP numa scaning.
> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
> issue, although there is still a GUP race, the issue seems to have been
> resolved by commit 80d47f5de5e3. Meanwhile, use the folio_likely_mapped_shared()
> to skip shared CoW pages though this is not a precise sharers count. To
> check if the folio is shared, ideally we want to make sure every page is
> mapped to the same process, but doing that seems expensive and using
> the estimated mapcount seems can work when running autonuma benchmark.
> 
> Allow migrating mTHP:
> As mentioned in the previous thread[1], large folios (including THP) are
> more susceptible to false sharing issues among threads than 4K base page,
> leading to pages ping-pong back and forth during numa balancing, which is
> currently not easy to resolve. Therefore, as a start to support mTHP numa
> balancing, we can follow the PMD mapped THP's strategy, that means we can
> reuse the 2-stage filter in should_numa_migrate_memory() to check if the
> mTHP is being heavily contended among threads (through checking the CPU id
> and pid of the last access) to avoid false sharing at some degree. Thus,
> we can restore all PTE maps upon the first hint page fault of a large folio
> to follow the PMD mapped THP's strategy. In the future, we can continue to
> optimize the NUMA balancing algorithm to avoid the false sharing issue with
> large folios as much as possible.
> 
> Performance data:
> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
> Base: 2024-03-25 mm-unstable branch
> Enable mTHP to run autonuma-benchmark
> 
> mTHP:16K
> Base				Patched
> numa01				numa01
> 224.70				137.23
> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
> 118.05				50.57
> numa02				numa02
> 13.45				9.30
> numa02_SMT			numa02_SMT
> 14.80				7.43
> 
> mTHP:64K
> Base				Patched
> numa01				numa01
> 216.15				135.20
> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
> 115.35				46.93
> numa02				numa02
> 13.24				9.24
> numa02_SMT			numa02_SMT
> 14.67				7.31
> 
> mTHP:128K
> Base				Patched
> numa01				numa01
> 205.13				140.41
> numa01_THREAD_ALLOC		numa01_THREAD_ALLOC
> 112.93				44.78
> numa02				numa02
> 13.16				9.19
> numa02_SMT			numa02_SMT
> 14.81				7.39
> 
> [1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>   mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
>   mm/mprotect.c |  3 ++-
>   2 files changed, 50 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index c30fb4b95e15..36191a9c799c 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>   	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>   }
>   
> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
> +				       struct folio *folio, pte_t fault_pte, bool ignore_writable)
> +{
> +	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
> +	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
> +	unsigned long end = min(start + folio_nr_pages(folio) * PAGE_SIZE, vma->vm_end);
> +	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
> +	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
> +	unsigned long addr;
> +
> +	/* Restore all PTEs' mapping of the large folio */
> +	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
> +		pte_t pte, old_pte;
> +		pte_t ptent = ptep_get(start_ptep);
> +		bool writable = false;
> +
> +		if (!pte_present(ptent) || !pte_protnone(ptent))
> +			continue;
> +
> +		if (vm_normal_folio(vma, addr, ptent) != folio)
> +			continue;
> +

Should you be using folio_pte_batch() in the caller to collect all
applicable PTEs and then only have a function that batch-changes a given
nr of PTEs?

(just like we are now batching other stuff)
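
Something like the following rough sketch (untested; assuming the
folio_pte_batch() variant we currently have in mm/internal.h, with
dirty/soft-dirty bits ignored, and with start_ptep/addr/end already
clamped to the folio as in your patch):

	bool any_writable;
	int max_nr, nr;

	max_nr = (end - addr) >> PAGE_SHIFT;
	nr = folio_pte_batch(folio, addr, start_ptep, ptent, max_nr,
			     FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY,
			     &any_writable);
	/* ... then batch-change these nr PTEs in one go ... */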


-- 
Cheers,

David / dhildenb



* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-28  9:25   ` David Hildenbrand
@ 2024-03-28 11:34     ` Baolin Wang
  2024-03-28 12:07       ` David Hildenbrand
  0 siblings, 1 reply; 13+ messages in thread
From: Baolin Wang @ 2024-03-28 11:34 UTC (permalink / raw)
  To: David Hildenbrand, akpm
  Cc: mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
	ryan.roberts, linux-mm, linux-kernel



On 2024/3/28 17:25, David Hildenbrand wrote:
> On 26.03.24 12:51, Baolin Wang wrote:
>> Now the anonymous page allocation already supports multi-size THP (mTHP),
>> but the numa balancing still prohibits mTHP migration even though it 
>> is an
>> exclusive mapping, which is unreasonable.
>>
>> Allow scanning mTHP:
>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>> pages") skips shared CoW pages' NUMA page migration to avoid shared data
>> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
>> NUMA-migrate COW pages that have other uses") change to use page_count()
>> to avoid GUP pages migration, that will also skip the mTHP numa scaning.
>> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
>> issue, although there is still a GUP race, the issue seems to have been
>> resolved by commit 80d47f5de5e3. Meanwhile, use the 
>> folio_likely_mapped_shared()
>> to skip shared CoW pages though this is not a precise sharers count. To
>> check if the folio is shared, ideally we want to make sure every page is
>> mapped to the same process, but doing that seems expensive and using
>> the estimated mapcount seems can work when running autonuma benchmark.
>>
>> Allow migrating mTHP:
>> As mentioned in the previous thread[1], large folios (including THP) are
>> more susceptible to false sharing issues among threads than 4K base page,
>> leading to pages ping-pong back and forth during numa balancing, which is
>> currently not easy to resolve. Therefore, as a start to support mTHP numa
>> balancing, we can follow the PMD mapped THP's strategy, that means we can
>> reuse the 2-stage filter in should_numa_migrate_memory() to check if the
>> mTHP is being heavily contended among threads (through checking the 
>> CPU id
>> and pid of the last access) to avoid false sharing at some degree. Thus,
>> we can restore all PTE maps upon the first hint page fault of a large 
>> folio
>> to follow the PMD mapped THP's strategy. In the future, we can 
>> continue to
>> optimize the NUMA balancing algorithm to avoid the false sharing issue 
>> with
>> large folios as much as possible.
>>
>> Performance data:
>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>> Base: 2024-03-25 mm-unstable branch
>> Enable mTHP to run autonuma-benchmark
>>
>> mTHP:16K
>> Base                Patched
>> numa01                numa01
>> 224.70                137.23
>> numa01_THREAD_ALLOC        numa01_THREAD_ALLOC
>> 118.05                50.57
>> numa02                numa02
>> 13.45                9.30
>> numa02_SMT            numa02_SMT
>> 14.80                7.43
>>
>> mTHP:64K
>> Base                Patched
>> numa01                numa01
>> 216.15                135.20
>> numa01_THREAD_ALLOC        numa01_THREAD_ALLOC
>> 115.35                46.93
>> numa02                numa02
>> 13.24                9.24
>> numa02_SMT            numa02_SMT
>> 14.67                7.31
>>
>> mTHP:128K
>> Base                Patched
>> numa01                numa01
>> 205.13                140.41
>> numa01_THREAD_ALLOC        numa01_THREAD_ALLOC
>> 112.93                44.78
>> numa02                numa02
>> 13.16                9.19
>> numa02_SMT            numa02_SMT
>> 14.81                7.39
>>
>> [1] 
>> https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>   mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
>>   mm/mprotect.c |  3 ++-
>>   2 files changed, 50 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c30fb4b95e15..36191a9c799c 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct 
>> vm_fault *vmf, struct vm_area_str
>>       update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>   }
>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct 
>> vm_area_struct *vma,
>> +                       struct folio *folio, pte_t fault_pte, bool 
>> ignore_writable)
>> +{
>> +    int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>> +    unsigned long start = max(vmf->address - nr * PAGE_SIZE, 
>> vma->vm_start);
>> +    unsigned long end = min(start + folio_nr_pages(folio) * 
>> PAGE_SIZE, vma->vm_end);
>> +    pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>> +    bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>> +    unsigned long addr;
>> +
>> +    /* Restore all PTEs' mapping of the large folio */
>> +    for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
>> +        pte_t pte, old_pte;
>> +        pte_t ptent = ptep_get(start_ptep);
>> +        bool writable = false;
>> +
>> +        if (!pte_present(ptent) || !pte_protnone(ptent))
>> +            continue;
>> +
>> +        if (vm_normal_folio(vma, addr, ptent) != folio)
>> +            continue;
>> +
> 
> Should you be using folio_pte_batch() in the caller to collect all 
> applicable PTEs and then only have function that batch-changes a given 
> nr of PTEs?
> 
> (just like we are now batching other stuff)

It seems folio_pte_batch() is not suitable for numa balancing, since we
do not care about the other PTE bits, only the protnone bits. And after
more thought, I think I can drop the vm_normal_folio() validation, since
all PTEs are ensured to be within the range of the folio size.


* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-28 11:34     ` Baolin Wang
@ 2024-03-28 12:07       ` David Hildenbrand
  2024-03-28 12:25         ` David Hildenbrand
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-03-28 12:07 UTC (permalink / raw)
  To: Baolin Wang, akpm
  Cc: mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
	ryan.roberts, linux-mm, linux-kernel

On 28.03.24 12:34, Baolin Wang wrote:
> 
> 
> On 2024/3/28 17:25, David Hildenbrand wrote:
>> On 26.03.24 12:51, Baolin Wang wrote:
>>> Now the anonymous page allocation already supports multi-size THP (mTHP),
>>> but the numa balancing still prohibits mTHP migration even though it
>>> is an
>>> exclusive mapping, which is unreasonable.
>>>
>>> Allow scanning mTHP:
>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>>> pages") skips shared CoW pages' NUMA page migration to avoid shared data
>>> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
>>> NUMA-migrate COW pages that have other uses") change to use page_count()
>>> to avoid GUP pages migration, that will also skip the mTHP numa scaning.
>>> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
>>> issue, although there is still a GUP race, the issue seems to have been
>>> resolved by commit 80d47f5de5e3. Meanwhile, use the
>>> folio_likely_mapped_shared()
>>> to skip shared CoW pages though this is not a precise sharers count. To
>>> check if the folio is shared, ideally we want to make sure every page is
>>> mapped to the same process, but doing that seems expensive and using
>>> the estimated mapcount seems can work when running autonuma benchmark.
>>>
>>> Allow migrating mTHP:
>>> As mentioned in the previous thread[1], large folios (including THP) are
>>> more susceptible to false sharing issues among threads than 4K base page,
>>> leading to pages ping-pong back and forth during numa balancing, which is
>>> currently not easy to resolve. Therefore, as a start to support mTHP numa
>>> balancing, we can follow the PMD mapped THP's strategy, that means we can
>>> reuse the 2-stage filter in should_numa_migrate_memory() to check if the
>>> mTHP is being heavily contended among threads (through checking the
>>> CPU id
>>> and pid of the last access) to avoid false sharing at some degree. Thus,
>>> we can restore all PTE maps upon the first hint page fault of a large
>>> folio
>>> to follow the PMD mapped THP's strategy. In the future, we can
>>> continue to
>>> optimize the NUMA balancing algorithm to avoid the false sharing issue
>>> with
>>> large folios as much as possible.
>>>
>>> Performance data:
>>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>>> Base: 2024-03-25 mm-unstable branch
>>> Enable mTHP to run autonuma-benchmark
>>>
>>> mTHP:16K
>>> Base                Patched
>>> numa01                numa01
>>> 224.70                137.23
>>> numa01_THREAD_ALLOC        numa01_THREAD_ALLOC
>>> 118.05                50.57
>>> numa02                numa02
>>> 13.45                9.30
>>> numa02_SMT            numa02_SMT
>>> 14.80                7.43
>>>
>>> mTHP:64K
>>> Base                Patched
>>> numa01                numa01
>>> 216.15                135.20
>>> numa01_THREAD_ALLOC        numa01_THREAD_ALLOC
>>> 115.35                46.93
>>> numa02                numa02
>>> 13.24                9.24
>>> numa02_SMT            numa02_SMT
>>> 14.67                7.31
>>>
>>> mTHP:128K
>>> Base                Patched
>>> numa01                numa01
>>> 205.13                140.41
>>> numa01_THREAD_ALLOC        numa01_THREAD_ALLOC
>>> 112.93                44.78
>>> numa02                numa02
>>> 13.16                9.19
>>> numa02_SMT            numa02_SMT
>>> 14.81                7.39
>>>
>>> [1]
>>> https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>> ---
>>>    mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
>>>    mm/mprotect.c |  3 ++-
>>>    2 files changed, 50 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index c30fb4b95e15..36191a9c799c 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct
>>> vm_fault *vmf, struct vm_area_str
>>>        update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>>    }
>>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct
>>> vm_area_struct *vma,
>>> +                       struct folio *folio, pte_t fault_pte, bool
>>> ignore_writable)
>>> +{
>>> +    int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>>> +    unsigned long start = max(vmf->address - nr * PAGE_SIZE,
>>> vma->vm_start);
>>> +    unsigned long end = min(start + folio_nr_pages(folio) *
>>> PAGE_SIZE, vma->vm_end);
>>> +    pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>>> +    bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>>> +    unsigned long addr;
>>> +
>>> +    /* Restore all PTEs' mapping of the large folio */
>>> +    for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
>>> +        pte_t pte, old_pte;
>>> +        pte_t ptent = ptep_get(start_ptep);
>>> +        bool writable = false;
>>> +
>>> +        if (!pte_present(ptent) || !pte_protnone(ptent))
>>> +            continue;
>>> +
>>> +        if (vm_normal_folio(vma, addr, ptent) != folio)
>>> +            continue;
>>> +
>>
>> Should you be using folio_pte_batch() in the caller to collect all
>> applicable PTEs and then only have function that batch-changes a given
>> nr of PTEs?
>>
>> (just like we are now batching other stuff)
> 
> Seems folio_pte_batch() is not suitable for numa balancing, since we did
> not care about other PTE bits, only care about the protnone bits. And

You should be able to ignore most PTE bits; which case are you concerned
folio_pte_batch() would miss? Hand-crafting our own functions to cover
some corner cases nobody cares about is likely a bad idea.

> after more thinking, I think I can drop the vm_normal_folio()
> validation, since all PTEs are ensured to be within the range of the
> folio size.

Are you sure about that?

-- 
Cheers,

David / dhildenb



* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-28 12:07       ` David Hildenbrand
@ 2024-03-28 12:25         ` David Hildenbrand
  2024-03-28 14:18           ` Baolin Wang
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-03-28 12:25 UTC (permalink / raw)
  To: Baolin Wang, akpm
  Cc: mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
	ryan.roberts, linux-mm, linux-kernel

On 28.03.24 13:07, David Hildenbrand wrote:
> On 28.03.24 12:34, Baolin Wang wrote:
>>
>>
>> On 2024/3/28 17:25, David Hildenbrand wrote:
>>> On 26.03.24 12:51, Baolin Wang wrote:
>>>> Now the anonymous page allocation already supports multi-size THP (mTHP),
>>>> but the numa balancing still prohibits mTHP migration even though it
>>>> is an
>>>> exclusive mapping, which is unreasonable.
>>>>
>>>> Allow scanning mTHP:
>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>>>> pages") skips shared CoW pages' NUMA page migration to avoid shared data
>>>> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
>>>> NUMA-migrate COW pages that have other uses") change to use page_count()
>>>> to avoid GUP pages migration, that will also skip the mTHP numa scaning.
>>>> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
>>>> issue, although there is still a GUP race, the issue seems to have been
>>>> resolved by commit 80d47f5de5e3. Meanwhile, use the
>>>> folio_likely_mapped_shared()
>>>> to skip shared CoW pages though this is not a precise sharers count. To
>>>> check if the folio is shared, ideally we want to make sure every page is
>>>> mapped to the same process, but doing that seems expensive and using
>>>> the estimated mapcount seems can work when running autonuma benchmark.
>>>>
>>>> Allow migrating mTHP:
>>>> As mentioned in the previous thread[1], large folios (including THP) are
>>>> more susceptible to false sharing issues among threads than 4K base page,
>>>> leading to pages ping-pong back and forth during numa balancing, which is
>>>> currently not easy to resolve. Therefore, as a start to support mTHP numa
>>>> balancing, we can follow the PMD mapped THP's strategy, that means we can
>>>> reuse the 2-stage filter in should_numa_migrate_memory() to check if the
>>>> mTHP is being heavily contended among threads (through checking the
>>>> CPU id
>>>> and pid of the last access) to avoid false sharing at some degree. Thus,
>>>> we can restore all PTE maps upon the first hint page fault of a large
>>>> folio
>>>> to follow the PMD mapped THP's strategy. In the future, we can
>>>> continue to
>>>> optimize the NUMA balancing algorithm to avoid the false sharing issue
>>>> with
>>>> large folios as much as possible.
>>>>
>>>> Performance data:
>>>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>>>> Base: 2024-03-25 mm-unstable branch
>>>> Enable mTHP to run autonuma-benchmark
>>>>
>>>> mTHP:16K
>>>> Base                Patched
>>>> numa01                numa01
>>>> 224.70                137.23
>>>> numa01_THREAD_ALLOC        numa01_THREAD_ALLOC
>>>> 118.05                50.57
>>>> numa02                numa02
>>>> 13.45                9.30
>>>> numa02_SMT            numa02_SMT
>>>> 14.80                7.43
>>>>
>>>> mTHP:64K
>>>> Base                Patched
>>>> numa01                numa01
>>>> 216.15                135.20
>>>> numa01_THREAD_ALLOC        numa01_THREAD_ALLOC
>>>> 115.35                46.93
>>>> numa02                numa02
>>>> 13.24                9.24
>>>> numa02_SMT            numa02_SMT
>>>> 14.67                7.31
>>>>
>>>> mTHP:128K
>>>> Base                Patched
>>>> numa01                numa01
>>>> 205.13                140.41
>>>> numa01_THREAD_ALLOC        numa01_THREAD_ALLOC
>>>> 112.93                44.78
>>>> numa02                numa02
>>>> 13.16                9.19
>>>> numa02_SMT            numa02_SMT
>>>> 14.81                7.39
>>>>
>>>> [1]
>>>> https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>> ---
>>>>     mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++--------
>>>>     mm/mprotect.c |  3 ++-
>>>>     2 files changed, 50 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index c30fb4b95e15..36191a9c799c 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct
>>>> vm_fault *vmf, struct vm_area_str
>>>>         update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>>>     }
>>>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct
>>>> vm_area_struct *vma,
>>>> +                       struct folio *folio, pte_t fault_pte, bool
>>>> ignore_writable)
>>>> +{
>>>> +    int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>>>> +    unsigned long start = max(vmf->address - nr * PAGE_SIZE,
>>>> vma->vm_start);
>>>> +    unsigned long end = min(start + folio_nr_pages(folio) *
>>>> PAGE_SIZE, vma->vm_end);
>>>> +    pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>>>> +    bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>>>> +    unsigned long addr;
>>>> +
>>>> +    /* Restore all PTEs' mapping of the large folio */
>>>> +    for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
>>>> +        pte_t pte, old_pte;
>>>> +        pte_t ptent = ptep_get(start_ptep);
>>>> +        bool writable = false;
>>>> +
>>>> +        if (!pte_present(ptent) || !pte_protnone(ptent))
>>>> +            continue;
>>>> +
>>>> +        if (vm_normal_folio(vma, addr, ptent) != folio)
>>>> +            continue;
>>>> +
>>>
>>> Should you be using folio_pte_batch() in the caller to collect all
>>> applicable PTEs and then only have function that batch-changes a given
>>> nr of PTEs?
>>>
>>> (just like we are now batching other stuff)
>>
>> Seems folio_pte_batch() is not suitable for numa balancing, since we did
>> not care about other PTE bits, only care about the protnone bits. And
> 
> You should be able to ignore most bits we care about, which case are you
> concerned about folio_pte_batch() would miss. Hand crafting own
> functions to cover some corner cases nobody cares about is likely a bad
> idea.

Note that the reason why I am asking is that folio_pte_batch() can
optimize out repeated ptep_get() calls with cont-PTEs.

-- 
Cheers,

David / dhildenb



* Re: [PATCH 2/2] mm: support multi-size THP numa balancing
  2024-03-28 12:25         ` David Hildenbrand
@ 2024-03-28 14:18           ` Baolin Wang
  0 siblings, 0 replies; 13+ messages in thread
From: Baolin Wang @ 2024-03-28 14:18 UTC (permalink / raw)
  To: David Hildenbrand, akpm
  Cc: mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
	ryan.roberts, linux-mm, linux-kernel



On 2024/3/28 20:25, David Hildenbrand wrote:
> On 28.03.24 13:07, David Hildenbrand wrote:
>> On 28.03.24 12:34, Baolin Wang wrote:
>>>
>>>
>>> On 2024/3/28 17:25, David Hildenbrand wrote:
>>>> On 26.03.24 12:51, Baolin Wang wrote:
>>>>> Anonymous page allocation already supports multi-size THP (mTHP), but
>>>>> numa balancing still prohibits mTHP migration even when the mapping is
>>>>> exclusive, which is unreasonable.
>>>>>
>>>>> Allow scanning mTHP:
>>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data
>>>>> section pages") skips NUMA migration of shared CoW pages to avoid
>>>>> migrating shared data segments. In addition, commit 80d47f5de5e3 ("mm:
>>>>> don't try to NUMA-migrate COW pages that have other uses") switched to
>>>>> using page_count() to avoid migrating GUP pages, which also skips mTHP
>>>>> numa scanning. Theoretically, we can use folio_maybe_dma_pinned() to
>>>>> detect the GUP issue; although there is still a GUP race, that issue
>>>>> seems to have been resolved by commit 80d47f5de5e3. Meanwhile, use
>>>>> folio_likely_mapped_shared() to skip shared CoW pages, even though it
>>>>> is not a precise sharers count. To check whether the folio is shared,
>>>>> ideally we want to make sure every page is mapped by the same process,
>>>>> but doing that seems expensive, and the estimated mapcount appears to
>>>>> work well when running the autonuma benchmark.
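
A minimal sketch of the kind of sharing/pinning check described above
(illustrative only, not the code from this patch; folio_likely_mapped_shared()
and folio_maybe_dma_pinned() are existing helpers, the wrapper name is made
up):

/*
 * Sketch: skip numa hinting/migration for large folios that are likely
 * mapped by more than one process or that may be pinned via GUP.
 */
static bool mthp_numa_migrate_suitable(struct folio *folio)
{
	/* Not a precise sharers count, but cheap and good enough here. */
	if (folio_likely_mapped_shared(folio))
		return false;
	/* Best effort: GUP-pinned folios should not be migrated. */
	if (folio_maybe_dma_pinned(folio))
		return false;
	return true;
}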
>>>>>
>>>>> Allow migrating mTHP:
>>>>> As mentioned in the previous thread[1], large folios (including THP)
>>>>> are more susceptible to false-sharing issues among threads than 4K
>>>>> base pages, leading to pages ping-ponging back and forth during numa
>>>>> balancing, which is currently not easy to resolve. Therefore, as a
>>>>> start for mTHP numa balancing, we can follow the PMD-mapped THP
>>>>> strategy: reuse the 2-stage filter in should_numa_migrate_memory() to
>>>>> check whether the mTHP is being heavily contended among threads (by
>>>>> checking the CPU id and pid of the last access) and so avoid false
>>>>> sharing to some degree. Thus, we restore all PTE mappings upon the
>>>>> first hint page fault of a large folio, again following the PMD-mapped
>>>>> THP strategy. In the future, we can continue to optimize the NUMA
>>>>> balancing algorithm to avoid the false-sharing issue with large folios
>>>>> as much as possible.
>>>>>
>>>>> Performance data:
>>>>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>>>>> Base: 2024-03-25 mm-unstable branch
>>>>> Enable mTHP to run autonuma-benchmark
>>>>>
>>>>> mTHP:16K
>>>>>                        Base      Patched
>>>>> numa01                 224.70    137.23
>>>>> numa01_THREAD_ALLOC    118.05    50.57
>>>>> numa02                 13.45     9.30
>>>>> numa02_SMT             14.80     7.43
>>>>>
>>>>> mTHP:64K
>>>>>                        Base      Patched
>>>>> numa01                 216.15    135.20
>>>>> numa01_THREAD_ALLOC    115.35    46.93
>>>>> numa02                 13.24     9.24
>>>>> numa02_SMT             14.67     7.31
>>>>>
>>>>> mTHP:128K
>>>>>                        Base      Patched
>>>>> numa01                 205.13    140.41
>>>>> numa01_THREAD_ALLOC    112.93    44.78
>>>>> numa02                 13.16     9.19
>>>>> numa02_SMT             14.81     7.39
>>>>>
>>>>> [1]
>>>>> https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>> ---
>>>>>     mm/memory.c   | 56 
>>>>> +++++++++++++++++++++++++++++++++++++++++++--------
>>>>>     mm/mprotect.c |  3 ++-
>>>>>     2 files changed, 50 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index c30fb4b95e15..36191a9c799c 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct
>>>>> vm_fault *vmf, struct vm_area_str
>>>>>         update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>>>>     }
>>>>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct
>>>>> vm_area_struct *vma,
>>>>> +                       struct folio *folio, pte_t fault_pte, bool
>>>>> ignore_writable)
>>>>> +{
>>>>> +    int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>>>>> +    unsigned long start = max(vmf->address - nr * PAGE_SIZE,
>>>>> vma->vm_start);
>>>>> +    unsigned long end = min(start + folio_nr_pages(folio) *
>>>>> PAGE_SIZE, vma->vm_end);
>>>>> +    pte_t *start_ptep = vmf->pte - (vmf->address - start) / 
>>>>> PAGE_SIZE;
>>>>> +    bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>>>>> +    unsigned long addr;
>>>>> +
>>>>> +    /* Restore all PTEs' mapping of the large folio */
>>>>> +    for (addr = start; addr != end; start_ptep++, addr += 
>>>>> PAGE_SIZE) {
>>>>> +        pte_t pte, old_pte;
>>>>> +        pte_t ptent = ptep_get(start_ptep);
>>>>> +        bool writable = false;
>>>>> +
>>>>> +        if (!pte_present(ptent) || !pte_protnone(ptent))
>>>>> +            continue;
>>>>> +
>>>>> +        if (vm_normal_folio(vma, addr, ptent) != folio)
>>>>> +            continue;
>>>>> +
>>>>
>>>> Should you be using folio_pte_batch() in the caller to collect all
>>>> applicable PTEs and then only have a function that batch-changes a given
>>>> nr of PTEs?
>>>>
>>>> (just like we are now batching other stuff)
>>>
>>> It seems folio_pte_batch() is not suitable for numa balancing, since we do
>>> not care about the other PTE bits, only about the protnone bits. And
>>
>> You should be able to ignore most bits we care about; which case are you
>> concerned folio_pte_batch() would miss? Hand-crafting our own
>> functions to cover some corner cases nobody cares about is likely a bad
>> idea.
> 
> Note that the reason I am asking is that folio_pte_batch() can
> optimize out repeated ptep_get() calls for cont-pte mappings.

IIUC, the protnone PTEs will not have the cont-pte bit set.

Another concern is that the protnone PTEs of the large folio might not
be contiguous. For example, if a middle section of the large folio has
been zapped, we would still like to restore all the protnone PTE mappings
for the entire folio. However, folio_pte_batch() seems to only help
identify the initial run of contiguous protnone PTEs.


 > Are you sure about that?

Sorry for the noise, I was wrong. Folio validation is needed for some corner
cases, but I may optimize the code with a simple pfn validation.
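
For illustration, one possible shape of that simple pfn validation (a sketch
only, not the final code; the helper name is made up):

/* Sketch: does this PTE still point into the given large folio? */
static bool pte_pfn_in_folio(pte_t ptent, struct folio *folio)
{
	unsigned long pfn = pte_pfn(ptent);
	unsigned long first_pfn = folio_pfn(folio);

	return pfn >= first_pfn && pfn - first_pfn < folio_nr_pages(folio);
}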

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2024-03-28 14:18 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-26 11:51 [PATCH 0/2] support multi-size THP numa balancing Baolin Wang
2024-03-26 11:51 ` [PATCH 1/2] mm: factor out the numa mapping rebuilding into a new helper Baolin Wang
2024-03-26 11:51 ` [PATCH 2/2] mm: support multi-size THP numa balancing Baolin Wang
2024-03-27  2:04   ` Huang, Ying
2024-03-27  8:09     ` Baolin Wang
2024-03-27  8:21       ` Huang, Ying
2024-03-27  8:47         ` David Hildenbrand
2024-03-28  1:09           ` Huang, Ying
2024-03-28  9:25   ` David Hildenbrand
2024-03-28 11:34     ` Baolin Wang
2024-03-28 12:07       ` David Hildenbrand
2024-03-28 12:25         ` David Hildenbrand
2024-03-28 14:18           ` Baolin Wang
