From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: david@redhat.com, mgorman@techsingularity.net,
	wangkefeng.wang@huawei.com, jhubbard@nvidia.com,
	ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
	baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2] mm: support multi-size THP numa balancing
Date: Fri, 15 Mar 2024 17:18:14 +0800	[thread overview]
Message-ID: <903bf13fc3e68b8dc1f256570d78b55b2dd9c96f.1710493587.git.baolin.wang@linux.alibaba.com> (raw)

Anonymous page allocation already supports multi-size THP (mTHP), but NUMA
balancing still prohibits mTHP migration even when the folio is exclusively
mapped, which is unreasonable. Thus, let's start by supporting NUMA balancing
for exclusively mapped mTHP.

Allow scanning mTHP:
Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
pages") skips NUMA migration of shared CoW pages to avoid migrating shared
data segments. In addition, commit 80d47f5de5e3 ("mm: don't try to
NUMA-migrate COW pages that have other uses") switched to page_count() to
avoid migrating GUP pages, which also skips mTHP during NUMA scanning.
Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP case;
although a GUP race remains, that issue appears to have been resolved by
commit 80d47f5de5e3. Meanwhile, use folio_estimated_sharers() to skip shared
CoW pages, even though it is not a precise sharers count. To check whether a
folio is shared, ideally we would verify that every page is mapped by the
same process, but doing so seems expensive, and the estimated mapcount works
well enough when running the autonuma benchmark (see the reference sketch
below).
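
For reference, folio_estimated_sharers() at the time of this patch is roughly
the following (quoted as a context-only sketch, not part of this change; the
exact helper may differ between trees). It only samples the mapcount of the
folio's first page, which is why it is an estimate rather than a precise
per-folio sharers count:

  /*
   * Context-only sketch: the estimate looks at a single page's mapcount
   * instead of walking every page in the folio, so a partially shared
   * large folio can be misclassified.
   */
  static inline int folio_estimated_sharers(struct folio *folio)
  {
          return page_mapcount(folio_page(folio, 0));
  }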

Allow migrating mTHP:
As mentioned in the previous thread[1], large folios are more susceptible
to false sharing issues, leading to pages ping-pong back and forth during
numa balancing, which is currently hard to resolve. Therefore, as a start to
support mTHP numa balancing, only exclusive mappings are allowed to perform
numa migration to avoid the false sharing issues with large folios. Similarly,
use the estimated mapcount to skip shared mappings, which seems can work
in most cases (?), and we've used folio_estimated_sharers() to skip shared
mappings in migrate_misplaced_folio() for numa balancing, seems no real
complaints.
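
For context, the existing check in migrate_misplaced_folio() (mm/migrate.c)
referenced above looks roughly like this at the time of this patch; quoted as
a sketch, so the exact condition and comment may differ slightly in the
target tree:

  /*
   * Don't migrate file folios that are mapped in multiple processes
   * with execute permissions, as they are probably shared libraries;
   * the estimated mapcount is used instead of walking every page.
   */
  if (folio_estimated_sharers(folio) != 1 && folio_is_file_lru(folio) &&
      (vma->vm_flags & VM_EXEC))
          goto out;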

Performance data:
Machine environment: 2 NUMA nodes, 128 cores, Intel(R) Xeon(R) Platinum
Base: mm-unstable branch as of 2024-03-15
64K mTHP was enabled while running autonuma-benchmark (see the example
command below)
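
A sketch of how the 64K mTHP size is typically enabled through the per-size
sysfs interface (assumed setup; the available sizes and policy value depend
on the kernel and architecture):

  # enable 64K mTHP for anonymous mappings (assumed configuration)
  echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled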

Base without the patch:
numa01
222.97
numa01_THREAD_ALLOC
115.78
numa02
13.04
numa02_SMT
14.69

Base with the patch:
numa01
125.36
numa01_THREAD_ALLOC
44.58
numa02
9.22
numa02_SMT
7.46

[1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Changes from RFC v1:
 - Add some performance data per Huang, Ying.
 - Allow mTHP scanning per David Hildenbrand.
 - Avoid migrating shared mappings during numa balancing to avoid false sharing.
 - Expand the commit message.
---
 mm/memory.c   | 9 +++++----
 mm/mprotect.c | 3 ++-
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f2bc6dd15eb8..b9d5d88c5a76 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5059,7 +5059,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	int last_cpupid;
 	int target_nid;
 	pte_t pte, old_pte;
-	int flags = 0;
+	int flags = 0, nr_pages = 0;
 
 	/*
 	 * The pte cannot be used safely until we verify, while holding the page
@@ -5089,8 +5089,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (!folio || folio_is_zone_device(folio))
 		goto out_map;
 
-	/* TODO: handle PTE-mapped THP */
-	if (folio_test_large(folio))
+	/* Avoid large folio false sharing */
+	if (folio_test_large(folio) && folio_estimated_sharers(folio) > 1)
 		goto out_map;
 
 	/*
@@ -5112,6 +5112,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 		flags |= TNF_SHARED;
 
 	nid = folio_nid(folio);
+	nr_pages = folio_nr_pages(folio);
 	/*
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time.  So use default value.
@@ -5148,7 +5149,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 
 out:
 	if (nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, nid, 1, flags);
+		task_numa_fault(last_cpupid, nid, nr_pages, flags);
 	return 0;
 out_map:
 	/*
diff --git a/mm/mprotect.c b/mm/mprotect.c
index f8a4544b4601..f0b9c974aaae 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -129,7 +129,8 @@ static long change_pte_range(struct mmu_gather *tlb,
 
 				/* Also skip shared copy-on-write pages */
 				if (is_cow_mapping(vma->vm_flags) &&
-				    folio_ref_count(folio) != 1)
+				    (folio_maybe_dma_pinned(folio) ||
+				     folio_estimated_sharers(folio) > 1))
 					continue;
 
 				/*
-- 
2.39.3


