From: Huang Ying <ying.huang@intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Huang Ying <ying.huang@intel.com>, Zi Yan <ziy@nvidia.com>,
Yang Shi <shy828301@gmail.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Oscar Salvador <osalvador@suse.de>,
Matthew Wilcox <willy@infradead.org>
Subject: [RFC 6/6] mm/migrate_pages: batch flushing TLB
Date: Wed, 21 Sep 2022 14:06:16 +0800
Message-ID: <20220921060616.73086-7-ying.huang@intel.com>
In-Reply-To: <20220921060616.73086-1-ying.huang@intel.com>
TLB flushing can cost quite a few CPU cycles during page migration
in some situations, for example, when a page of a process with
multiple active threads running on multiple CPUs is migrated: each
unmapped page then triggers TLB flushing IPIs to all of those CPUs.

After batching the _unmap and _move stages in migrate_pages(), the
TLB flushing can be batched easily with the existing TLB flush
batching mechanism: the PTEs are cleared with the flush deferred,
and one flush then covers the whole batch. This patch implements
that.
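For reference, the resulting control flow in migrate_pages_batch()
is roughly the following. This is a simplified sketch only: the real
function wraps each phase in retry loops, THP handling and error
paths, the argument lists are elided here, and the _move helper name
is inferred from this series' _unmap()/_move() split.

	/* Phase 1: unmap all folios, deferring the TLB flush. */
	list_for_each_entry(folio, from, lru)
		migrate_page_unmap(...);	/* uses TTU_BATCH_FLUSH */

	/* Phase 2: one flush (one round of IPIs) for the whole batch. */
	try_to_unmap_flush();

	/* Phase 3: copy the folios and remove the migration PTEs. */
	list_for_each_entry(folio, from, lru)
		migrate_page_move(...);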
We use the following test case to test the patch: on a 2-socket
Intel server,

- Run the pmbench memory accessing benchmark.
- Run `migratepages` to migrate the pages of pmbench between node 0
  and node 1 back and forth.

With the patch, the number of TLB flushing IPIs is reduced by 99.1%
during the test, and the number of pages migrated successfully per
second increases by 291.7%.
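For reference, one way to observe the IPI numbers (an assumption
about the measurement method, which is not spelled out above) is the
TLB shootdown counters on x86:

	grep TLB /proc/interrupts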
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
---
mm/migrate.c | 4 +++-
mm/rmap.c | 24 ++++++++++++++++++++----
2 files changed, 23 insertions(+), 5 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 042fa147f302..a0de0d9b4d41 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1179,7 +1179,7 @@ static int migrate_page_unmap(new_page_t get_new_page, free_page_t put_new_page,
/* Establish migration ptes */
VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma,
page);
- try_to_migrate(folio, 0);
+ try_to_migrate(folio, TTU_BATCH_FLUSH);
page_was_mapped = 1;
}
@@ -1647,6 +1647,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
nr_thp_failed += thp_retry;
nr_failed_pages += nr_retry_pages;
move:
+ try_to_unmap_flush();
+
retry = 1;
thp_retry = 1;
for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
diff --git a/mm/rmap.c b/mm/rmap.c
index 93d5a6f793d2..ab88136720dc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1960,8 +1960,24 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
} else {
flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
- /* Nuke the page table entry. */
- pteval = ptep_clear_flush(vma, address, pvmw.pte);
+ /*
+ * Nuke the page table entry.
+ */
+ if (should_defer_flush(mm, flags)) {
+ /*
+ * We clear the PTE but do not flush so potentially
+ * a remote CPU could still be writing to the folio.
+ * If the entry was previously clean then the
+ * architecture must guarantee that a clear->dirty
+ * transition on a cached TLB entry is written through
+ * and traps if the PTE is unmapped.
+ */
+ pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+
+ set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+ } else {
+ pteval = ptep_clear_flush(vma, address, pvmw.pte);
+ }
}
/* Set the dirty flag on the folio now the pte is gone. */
@@ -2128,10 +2144,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
/*
* Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
- * TTU_SPLIT_HUGE_PMD and TTU_SYNC flags.
+ * TTU_SPLIT_HUGE_PMD, TTU_SYNC and TTU_BATCH_FLUSH flags.
*/
if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
- TTU_SYNC)))
+ TTU_SYNC | TTU_BATCH_FLUSH)))
return;
if (folio_is_zone_device(folio) &&
--
2.35.1