* [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration
@ 2024-04-18  6:15 Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 1/8] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
                   ` (9 more replies)
  0 siblings, 10 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-18  6:15 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Hi everyone,

While working with a tiered memory system, e.g. CXL memory, I have been
facing migration overhead, especially tlb shootdown, on promotion or
demotion between different tiers.  Most tlb shootdowns on migration
through hinting fault can already be avoided thanks to Huang Ying's
work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
is inaccessible").  See the following link for more information:

https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/

However, that only covers migration triggered by hinting faults.  It
would be much better to have a general mechanism that reduces the tlb
numbers and can ultimately be applied to any type of migration.

I'm suggesting a mechanism called MIGRC, which stands for 'Migration
Read Copy'.  It reduces the number of tlb flushes by deferring them
until the source folios of a migration actually get reused, which is
safe only as long as the target PTEs don't have write permission.

To achieve that (a rough sketch follows this list):

   1. For the folios that map only to non-writable tlb entries, prevent
      tlb flush during migration but perform it just before the source
      folios actually get reused out of buddy or pcp.

   2. When any non-writable tlb entry changes to writable, e.g. through
      the fault handler, give up the migrc mechanism and perform the
      required tlb flush right away.
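
The sketch below is only an illustration of where the deferred flush
happens; the wrapper functions and their parameters are hypothetical,
while folio_put_mgen(), folio_put(), try_to_unmap_flush() and
check_flush_task_mgen() are APIs this series introduces or already
exist:

   /* Release a migration source folio (illustration only). */
   static void release_source(struct folio *src, unsigned short int mgen, bool ro_only)
   {
   	if (ro_only) {
   		/* All mappings were read-only: defer the tlb flush. */
   		folio_put_mgen(src, mgen);
   	} else {
   		/* A writable mapping was involved: flush right away. */
   		try_to_unmap_flush();
   		folio_put(src);
   	}
   }

   /* On the allocation side, before the page gets handed out again: */
   static void before_reuse(void)
   {
   	/* Performs any tlb flush deferred above (patches 7 and 8). */
   	check_flush_task_mgen();
   }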

No matter what type of workload is used for performance evaluation, the
result should be positive thanks to the unconditional reduction of tlb
flushes, tlb misses and interrupts.  For the test, I picked XSBench,
which is widely used for performance analysis on high performance
computing architectures - https://github.com/ANL-CESAR/XSBench.

The result depends on memory latency and on how often reclaim runs,
which determine the tlb miss overhead and how many migrations happen.
The slower the memory and the more often reclaim runs, the better migrc
works and the bigger the improvement.  On my system, the results are:

   1. itlb flushes are reduced by over 90%.
   2. itlb misses are reduced by over 30%.
   3. All the other tlb numbers also improve.
   4. tlb shootdown interrupts are reduced by over 90%.
   5. The test program's runtime is reduced by over 5%.

The test environment:

   Architecture - x86_64
   QEMU - kvm enabled, host cpu
   Numa - 2 nodes (16 CPUs 1GB, no CPUs 99GB)
   Linux Kernel - v6.9-rc4, numa balancing tiering on, demotion enabled

< measurement: raw data - tlb and interrupt numbers >

   $ perf stat -a \
           -e itlb.itlb_flush \
           -e tlb_flush.dtlb_thread \
           -e tlb_flush.stlb_any \
           -e dtlb-load-misses \
           -e dtlb-store-misses \
           -e itlb-load-misses \
           XSBench -t 16 -p 50000000

   $ grep "TLB shootdowns" /proc/interrupts

   BEFORE
   ------
   40417078     itlb.itlb_flush
   234852566    tlb_flush.dtlb_thread
   153192357    tlb_flush.stlb_any
   119001107892 dTLB-load-misses
   307921167    dTLB-store-misses
   1355272118   iTLB-load-misses

   TLB: 1364803    1303670    1333921    1349607
        1356934    1354216    1332972    1342842
        1350265    1316443    1355928    1360793
        1298239    1326358    1343006    1340971
        TLB shootdowns

   AFTER
   -----
   3316495      itlb.itlb_flush
   138912511    tlb_flush.dtlb_thread
   115199341    tlb_flush.stlb_any
   117610390021 dTLB-load-misses
   198042233    dTLB-store-misses
   840066984    iTLB-load-misses

   TLB: 117257     119219     117178     115737
        117967     118948     117508     116079
        116962     117266     117320     117215
        105808     103934     115672     117610
        TLB shootdowns

< measurement: user experience - runtime >

   $ time XSBench -t 16 -p 50000000

   BEFORE
   ------
   Threads:     16
   Runtime:     968.783 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,754,778

   15208.91s user 141.44s system 1564% cpu 16:20.98 total

   AFTER
   -----
   Threads:     16
   Runtime:     913.210 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,861,565

   14351.69s user 138.23s system 1565% cpu 15:25.47 total

---

Changes from v8:
	1. Rebase on akpm/mm.git mm-unstable as of April 18, 2024.
	2. Supplement comments and commit message.
	3. Change the candidates the migrc mechanism applies to:

	   BEFORE - The source folios at demotion and promotion.
	   AFTER  - The source folios at any type of migration.

	4. Change how the migrc mechanism works:

	   BEFORE - Reduce tlb flushes by deferring folio_free() for
	            source folios during demotion and promotion.
	   AFTER  - Reduce tlb flushes by deferring the tlb flush until
	            the source folios actually get reused, out of pcp or
	            buddy. The current version of migrc does *not* defer
	            calling folio_free() but lets it proceed just as the
	            vanilla kernel does, with the folios marked as
	            needing a tlb flush. The flush is then handled when
	            the page exits from pcp or buddy, so vm stats such
	            as the number of free pages are not affected.

Changes from v7:
	1. Rewrite the cover letter to explain what the 'migrc'
	   mechanism is. (feedback from Andrew Morton)
	2. Supplement the commit message of the patch 'mm: Add APIs to
	   free a folio directly to the buddy bypassing pcp'.
	   (feedback from Andrew Morton)

Changes from v6:
	1. Fix build errors when CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
	   is disabled by moving the migrc_flush_{start,end}() calls from
	   arch code to try_to_unmap_flush() in mm/rmap.c.

Changes from v5:
	1. Fix build errors when CONFIG_MIGRATION is disabled or
	   CONFIG_HWPOISON_INJECT is built as a module. (feedback from
	   the kernel test robot and Raymond Jay Golo)
	2. Organize the migrc code with two kconfigs, CONFIG_MIGRATION
	   and CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.

Changes from v4:

	1. Rebase on v6.7.
	2. Fix build errors on arm64, which does nothing for batched tlb
	   flush but has CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.
	   (reported by the kernel test robot)
	3. Don't use any page flag. The system gives up the migrc
	   mechanism more often as a result, but that's okay; the final
	   improvement is still good enough.
	4. Instead, optimize the full tlb flush (arch_tlbbatch_flush())
	   by excluding CPUs for which the flush would be redundant.

Changes from v3:

	1. Don't use the kconfig CONFIG_MIGRC, and remove the sysctl
	   knob, migrc_enable. (feedback from Nadav)
	2. Remove the optimization that skipped CPUs which had already
	   performed the needed tlb flushes for any reason when migrc
	   performs its tlb flushes, because I couldn't measure any
	   performance difference with and without it.
	   (feedback from Nadav)
	3. Minimize arch-specific code. While at it, move all the migrc
	   declarations and inline functions from include/linux/mm.h to
	   mm/internal.h. (feedback from Dave Hansen, Nadav)
	4. Separate the part that pauses migrc when the system is under
	   high memory pressure into another patch. (feedback from Nadav)
	5. Rename:
	      a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
	      b. tlb_ubc_nowr to tlb_ubc_ro,
	      c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
	      d. migrc_stop to migrc_pause.
	   (feedback from Nadav)
	6. Use the ->lru list_head instead of introducing a new
	   llist_head. (feedback from Nadav)
	7. Use non-atomic page-flag operations where it's safe.
	   (feedback from Nadav)
	8. Use the stack instead of keeping a pointer to 'struct
	   migrc_req' in struct task, which is only manipulated locally.
	   (feedback from Nadav)
	9. Replace a lot of simple functions with inline functions
	   placed in a header, mm/internal.h. (feedback from Nadav)
	10. Add sufficient additional comments. (feedback from Nadav)
	11. Remove a lot of wrapper functions. (feedback from Nadav)

Changes from RFC v2:

	1. Remove the additional field in struct page. To do that, union
	   migrc's list with the lru field and add a page flag. I know a
	   new page flag is something we don't like to add, but there is
	   no choice because migrc has to distinguish folios under its
	   control from the others. To limit the impact, migrc is only
	   used on 64 bit systems.
	2. Remove the internal object allocator that I had introduced to
	   minimize the impact on the system; a ton of tests showed it
	   made no difference.
	3. Stop migrc from working when the system is under high memory
	   pressure, e.g. about to perform direct reclaim. Under heavy
	   swap usage, the system suffered a regression without this
	   control.
	4. Exclude folios with pte_dirty() == true from migrc's interest
	   so that migrc can work more simply.
	5. Combine several tightly coupled patches into one.
	6. Add sufficient comments for better review.
	7. Manage migrc's requests per node (instead of globally).
	8. Add the tlb miss improvement to the commit message.
	9. Test with more CPUs (4 -> 16) to see a bigger improvement.

Changes from RFC:

	1. Fix a bug triggered when a destination folio of a previous
	   migration becomes a source folio of the next migration before
	   the folio has been handled properly, which left the folio's
	   state inconsistent.
	2. Split the patch set into more pieces for easier review.
	   (feedback from Nadav Amit)
	3. Fix a wrong usage of a barrier, e.g. smp_mb__after_atomic().
	   (feedback from Nadav Amit)
	4. Add sufficient comments to explain the patch set better.
	   (feedback from Nadav Amit)

Byungchul Park (8):
  x86/tlb: add APIs manipulating tlb batch's arch data
  arm64: tlbflush: add APIs manipulating tlb batch's arch data
  mm/rmap: recognize read-only tlb entries during batched tlb flush
  x86/tlb, mm/rmap: separate arch_tlbbatch_clear() out of
    arch_tlbbatch_flush()
  mm: separate move/undo parts from migrate_pages_batch()
  mm: buddy: make room for a new variable, mgen, in struct page
  mm: add folio_put_mgen() to deliver migrc's generation number to pcp
    or buddy
  mm: defer tlb flush until the source folios at migration actually get
    used

 arch/arm64/include/asm/tlbflush.h |  18 ++
 arch/x86/include/asm/tlbflush.h   |  18 ++
 arch/x86/mm/tlb.c                 |   2 -
 include/linux/mm.h                |  22 ++
 include/linux/mm_types.h          |  39 ++-
 include/linux/sched.h             |  10 +
 mm/compaction.c                   |  10 +
 mm/internal.h                     |  85 +++++-
 mm/memory.c                       |   8 +
 mm/migrate.c                      | 487 ++++++++++++++++++++++++++----
 mm/page_alloc.c                   | 155 ++++++++--
 mm/page_isolation.c               |   6 +
 mm/page_reporting.c               |  10 +
 mm/rmap.c                         |  40 ++-
 mm/swap.c                         |  20 +-
 15 files changed, 826 insertions(+), 104 deletions(-)


base-commit: f52bcd4a9f6058704a6f6b6b50418f579defd4fe
-- 
2.17.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v9 rebase on mm-unstable 1/8] x86/tlb: add APIs manipulating tlb batch's arch data
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
@ 2024-04-18  6:15 ` Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 2/8] arm64: tlbflush: " Byungchul Park
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-18  6:15 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

This is a preparation for the migrc mechanism, which needs to recognize
read-only tlb entries during migration by splitting the tlb batch's
arch data into two parts, one for read-only entries and the other for
writable ones, and merging the two when needed.

Migrc also needs to optimize tlb shootdown by skipping CPUs that have
already performed the required tlb flush.  To support that, add the
APIs manipulating the arch data for x86.
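
For illustration, a minimal usage sketch of the new APIs; the callers
below are hypothetical and not part of this patch (the real users
appear in the mm/rmap.c and migrc patches later in the series):

	/* Merge a read-only batch into the main one (hypothetical caller). */
	static void example_merge(struct arch_tlbflush_unmap_batch *main,
				  struct arch_tlbflush_unmap_batch *ro)
	{
		arch_tlbbatch_fold(main, ro);	/* main's cpumask |= ro's cpumask */
		arch_tlbbatch_clear(ro);	/* empty ro's cpumask */
	}

	/* Skip a pending flush already covered by a completed one. */
	static bool example_covered(struct arch_tlbflush_unmap_batch *pending,
				    struct arch_tlbflush_unmap_batch *done)
	{
		/* Drops the CPUs 'done' flushed; true if none are left. */
		return arch_tlbbatch_done(pending, done);
	}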

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/x86/include/asm/tlbflush.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 25726893c6f4..a14f77c5cdde 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -5,6 +5,7 @@
 #include <linux/mm_types.h>
 #include <linux/mmu_notifier.h>
 #include <linux/sched.h>
+#include <linux/cpumask.h>
 
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
@@ -293,6 +294,23 @@ static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+	cpumask_clear(&batch->cpumask);
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+		struct arch_tlbflush_unmap_batch *bsrc)
+{
+	cpumask_or(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+		struct arch_tlbflush_unmap_batch *bsrc)
+{
+	return !cpumask_andnot(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
 static inline bool pte_flags_need_flush(unsigned long oldflags,
 					unsigned long newflags,
 					bool ignore_access)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v9 rebase on mm-unstable 2/8] arm64: tlbflush: add APIs manipulating tlb batch's arch data
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 1/8] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
@ 2024-04-18  6:15 ` Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 3/8] mm/rmap: recognize read-only tlb entries during batched tlb flush Byungchul Park
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-18  6:15 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

This is a preparation for the migrc mechanism, which requires
manipulating the tlb batch's arch data.  Even though arm64 does nothing
with it, any arch with CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH should
provide the APIs.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/arm64/include/asm/tlbflush.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a75de2665d84..b8c7fbc1c68e 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -347,6 +347,24 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	dsb(ish);
 }
 
+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+	/* nothing to do */
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+			       struct arch_tlbflush_unmap_batch *bsrc)
+{
+	/* nothing to do */
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+			       struct arch_tlbflush_unmap_batch *bsrc)
+{
+	/* The kernel can consider the tlb batch as always done. */
+	return true;
+}
+
 /*
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v9 rebase on mm-unstable 3/8] mm/rmap: recognize read-only tlb entries during batched tlb flush
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 1/8] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 2/8] arm64: tlbflush: " Byungchul Park
@ 2024-04-18  6:15 ` Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 4/8] x86/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-18  6:15 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Functionally, no change.  This is a preparation for the migrc
mechanism, which requires recognizing read-only tlb entries and
handling them differently.  The newly introduced API, fold_ubc(), will
be used by the migrc mechanism.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |  1 +
 mm/internal.h         |  4 ++++
 mm/rmap.c             | 31 ++++++++++++++++++++++++++++++-
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4118b3f959c3..f9f8091f354f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,7 @@ struct task_struct {
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
+	struct tlbflush_unmap_batch	tlb_ubc_ro;
 
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/internal.h b/mm/internal.h
index c6483f73ec13..b34d9e627132 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1100,6 +1100,7 @@ extern struct workqueue_struct *mm_percpu_wq;
 void try_to_unmap_flush(void);
 void try_to_unmap_flush_dirty(void);
 void flush_tlb_batched_pending(struct mm_struct *mm);
+void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src);
 #else
 static inline void try_to_unmap_flush(void)
 {
@@ -1110,6 +1111,9 @@ static inline void try_to_unmap_flush_dirty(void)
 static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 }
+static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src)
+{
+}
 #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
 
 extern const struct trace_print_flags pageflag_names[];
diff --git a/mm/rmap.c b/mm/rmap.c
index 2608c40dffad..c37ff1648cf1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -635,6 +635,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 }
 
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+void fold_ubc(struct tlbflush_unmap_batch *dst,
+	      struct tlbflush_unmap_batch *src)
+{
+	if (!src->flush_required)
+		return;
+
+	/*
+	 * Fold src to dst.
+	 */
+	arch_tlbbatch_fold(&dst->arch, &src->arch);
+	dst->writable = dst->writable || src->writable;
+	dst->flush_required = true;
+
+	/*
+	 * Reset src.
+	 */
+	arch_tlbbatch_clear(&src->arch);
+	src->flush_required = false;
+	src->writable = false;
+}
+
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs. It is
  * important if a PTE was dirty when it was unmapped that it's flushed
@@ -644,7 +666,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 void try_to_unmap_flush(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
 
+	fold_ubc(tlb_ubc, tlb_ubc_ro);
 	if (!tlb_ubc->flush_required)
 		return;
 
@@ -675,13 +699,18 @@ void try_to_unmap_flush_dirty(void)
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 				      unsigned long uaddr)
 {
-	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc;
 	int batch;
 	bool writable = pte_dirty(pteval);
 
 	if (!pte_accessible(mm, pteval))
 		return;
 
+	if (pte_write(pteval) || writable)
+		tlb_ubc = &current->tlb_ubc;
+	else
+		tlb_ubc = &current->tlb_ubc_ro;
+
 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
 	tlb_ubc->flush_required = true;
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v9 rebase on mm-unstable 4/8] x86/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush()
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
                   ` (2 preceding siblings ...)
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 3/8] mm/rmap: recognize read-only tlb entries during batched tlb flush Byungchul Park
@ 2024-04-18  6:15 ` Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 5/8] mm: separate move/undo parts from migrate_pages_batch() Byungchul Park
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-18  6:15 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

This is a preparation for the migrc mechanism, which needs to avoid
redundant tlb flushes by manipulating the tlb batch's arch data.  To
achieve that, separate the part that clears the tlb batch's arch data
out of arch_tlbbatch_flush().

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/x86/mm/tlb.c | 2 --
 mm/rmap.c         | 1 +
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 44ac64f3a047..24bce69222cd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1265,8 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}
 
-	cpumask_clear(&batch->cpumask);
-
 	put_flush_tlb_info();
 	put_cpu();
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index c37ff1648cf1..513e49840da7 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -673,6 +673,7 @@ void try_to_unmap_flush(void)
 		return;
 
 	arch_tlbbatch_flush(&tlb_ubc->arch);
+	arch_tlbbatch_clear(&tlb_ubc->arch);
 	tlb_ubc->flush_required = false;
 	tlb_ubc->writable = false;
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v9 rebase on mm-unstable 5/8] mm: separate move/undo parts from migrate_pages_batch()
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
                   ` (3 preceding siblings ...)
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 4/8] x86/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
@ 2024-04-18  6:15 ` Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 6/8] mm: buddy: make room for a new variable, mgen, in struct page Byungchul Park
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-18  6:15 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Functionally, no change.  This is a preparation for the migrc
mechanism, which requires separate folio lists for its own handling
during migration.  Refactor migrate_pages_batch() by separating the
move and undo parts out of it.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 mm/migrate.c | 134 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c7692f303fa7..f9ed7a2b8720 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1609,6 +1609,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
 	return nr_failed;
 }
 
+static void migrate_folios_move(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		enum migrate_mode mode, int reason,
+		struct list_head *ret_folios,
+		struct migrate_pages_stats *stats,
+		int *retry, int *thp_retry, int *nr_failed,
+		int *nr_retry_pages)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+	bool is_thp;
+	int nr_pages;
+	int rc;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+		nr_pages = folio_nr_pages(folio);
+
+		cond_resched();
+
+		rc = migrate_folio_move(put_new_folio, private,
+				folio, dst, mode,
+				reason, ret_folios);
+		/*
+		 * The rules are:
+		 *	Success: folio will be freed
+		 *	-EAGAIN: stay on the unmap_folios list
+		 *	Other errno: put on ret_folios list
+		 */
+		switch(rc) {
+		case -EAGAIN:
+			*retry += 1;
+			*thp_retry += is_thp;
+			*nr_retry_pages += nr_pages;
+			break;
+		case MIGRATEPAGE_SUCCESS:
+			stats->nr_succeeded += nr_pages;
+			stats->nr_thp_succeeded += is_thp;
+			break;
+		default:
+			*nr_failed += 1;
+			stats->nr_thp_failed += is_thp;
+			stats->nr_failed_pages += nr_pages;
+			break;
+		}
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
+static void migrate_folios_undo(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		struct list_head *ret_folios)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		int old_page_state = 0;
+		struct anon_vma *anon_vma = NULL;
+
+		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
+		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+				anon_vma, true, ret_folios);
+		list_del(&dst->lru);
+		migrate_folio_undo_dst(dst, true, put_new_folio, private);
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
 /*
  * migrate_pages_batch() first unmaps folios in the from list as many as
  * possible, then move the unmapped folios.
@@ -1631,7 +1706,7 @@ static int migrate_pages_batch(struct list_head *from,
 	int pass = 0;
 	bool is_thp = false;
 	bool is_large = false;
-	struct folio *folio, *folio2, *dst = NULL, *dst2;
+	struct folio *folio, *folio2, *dst = NULL;
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
@@ -1790,42 +1865,11 @@ static int migrate_pages_batch(struct list_head *from,
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
-		dst = list_first_entry(&dst_folios, struct folio, lru);
-		dst2 = list_next_entry(dst, lru);
-		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-			is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
-			nr_pages = folio_nr_pages(folio);
-
-			cond_resched();
-
-			rc = migrate_folio_move(put_new_folio, private,
-						folio, dst, mode,
-						reason, ret_folios);
-			/*
-			 * The rules are:
-			 *	Success: folio will be freed
-			 *	-EAGAIN: stay on the unmap_folios list
-			 *	Other errno: put on ret_folios list
-			 */
-			switch(rc) {
-			case -EAGAIN:
-				retry++;
-				thp_retry += is_thp;
-				nr_retry_pages += nr_pages;
-				break;
-			case MIGRATEPAGE_SUCCESS:
-				stats->nr_succeeded += nr_pages;
-				stats->nr_thp_succeeded += is_thp;
-				break;
-			default:
-				nr_failed++;
-				stats->nr_thp_failed += is_thp;
-				stats->nr_failed_pages += nr_pages;
-				break;
-			}
-			dst = dst2;
-			dst2 = list_next_entry(dst, lru);
-		}
+		/* Move the unmapped folios */
+		migrate_folios_move(&unmap_folios, &dst_folios,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1834,20 +1878,8 @@ static int migrate_pages_batch(struct list_head *from,
 	rc = rc_saved ? : nr_failed;
 out:
 	/* Cleanup remaining folios */
-	dst = list_first_entry(&dst_folios, struct folio, lru);
-	dst2 = list_next_entry(dst, lru);
-	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-		int old_page_state = 0;
-		struct anon_vma *anon_vma = NULL;
-
-		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
-		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
-				       anon_vma, true, ret_folios);
-		list_del(&dst->lru);
-		migrate_folio_undo_dst(dst, true, put_new_folio, private);
-		dst = dst2;
-		dst2 = list_next_entry(dst, lru);
-	}
+	migrate_folios_undo(&unmap_folios, &dst_folios,
+			put_new_folio, private, ret_folios);
 
 	return rc;
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v9 rebase on mm-unstable 6/8] mm: buddy: make room for a new variable, mgen, in struct page
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
                   ` (4 preceding siblings ...)
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 5/8] mm: separate move/undo parts from migrate_pages_batch() Byungchul Park
@ 2024-04-18  6:15 ` Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 7/8] mm: add folio_put_mgen() to deliver migrc's generation number to pcp or buddy Byungchul Park
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-18  6:15 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Functionally, no change.  This is a preparation for the migrc
mechanism, which tracks the need for a tlb flush for each page residing
in buddy, using a generation number in struct page.

Fortunately, the private field in struct page is used in buddy only to
store the page order, ranging from 0 to MAX_PAGE_ORDER, which fits in
an unsigned short int.  So split it into two smaller fields, order and
mgen, so that both can be used in buddy at the same time.
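
For reference, a compile-time check along these lines (an illustrative
sketch, not part of the patch) documents the assumption that the buddy
order always fits in the new 16-bit field:

	/* Illustration only: MAX_PAGE_ORDER is a small constant, e.g. 10. */
	static_assert(MAX_PAGE_ORDER <= USHRT_MAX,
		      "buddy order must fit in the unsigned short page->order");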

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/mm_types.h | 39 ++++++++++++++++++++++++++++++++-------
 mm/internal.h            |  4 ++--
 mm/page_alloc.c          | 13 ++++++++-----
 3 files changed, 42 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index db0adf5721cc..47fd3780bd19 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -108,13 +108,24 @@ struct page {
 				pgoff_t index;		/* Our offset within mapping. */
 				unsigned long share;	/* share count for fsdax */
 			};
-			/**
-			 * @private: Mapping-private opaque data.
-			 * Usually used for buffer_heads if PagePrivate.
-			 * Used for swp_entry_t if PageSwapCache.
-			 * Indicates order in the buddy system if PageBuddy.
-			 */
-			unsigned long private;
+			union {
+				/**
+				 * @private: Mapping-private opaque data.
+				 * Usually used for buffer_heads if PagePrivate.
+				 * Used for swp_entry_t if PageSwapCache.
+				 */
+				unsigned long private;
+				struct {
+					/*
+					 * Indicates order in the buddy system if PageBuddy.
+					 */
+					unsigned short int order;
+					/*
+					 * Tracks need of tlb flush used by migrc
+					 */
+					unsigned short int mgen;
+				};
+			};
 		};
 		struct {	/* page_pool used by netstack */
 			/**
@@ -521,6 +532,20 @@ static inline void set_page_private(struct page *page, unsigned long private)
 	page->private = private;
 }
 
+#define page_buddy_order(page)		((page)->order)
+
+static inline void set_page_buddy_order(struct page *page, unsigned int order)
+{
+	page->order = (unsigned short int)order;
+}
+
+#define page_buddy_mgen(page)		((page)->mgen)
+
+static inline void set_page_buddy_mgen(struct page *page, unsigned short int mgen)
+{
+	page->mgen = mgen;
+}
+
 static inline void *folio_get_private(struct folio *folio)
 {
 	return folio->private;
diff --git a/mm/internal.h b/mm/internal.h
index b34d9e627132..0336375c6e8b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -453,7 +453,7 @@ struct alloc_context {
 static inline unsigned int buddy_order(struct page *page)
 {
 	/* PageBuddy() must be checked by the caller */
-	return page_private(page);
+	return page_buddy_order(page);
 }
 
 /*
@@ -467,7 +467,7 @@ static inline unsigned int buddy_order(struct page *page)
  * times, potentially observing different values in the tests and the actual
  * use of the result.
  */
-#define buddy_order_unsafe(page)	READ_ONCE(page_private(page))
+#define buddy_order_unsafe(page)	READ_ONCE(page_buddy_order(page))
 
 /*
  * This function checks whether a page is free && is the buddy
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 33d4a1be927b..cbde22c4c189 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -565,9 +565,12 @@ void prep_compound_page(struct page *page, unsigned int order)
 	prep_compound_head(page, order);
 }
 
-static inline void set_buddy_order(struct page *page, unsigned int order)
+static inline void set_buddy_order_mgen(struct page *page,
+					unsigned int order,
+					unsigned short int mgen)
 {
-	set_page_private(page, order);
+	set_page_buddy_order(page, order);
+	set_page_buddy_mgen(page, mgen);
 	__SetPageBuddy(page);
 }
 
@@ -834,7 +837,7 @@ static inline void __free_one_page(struct page *page,
 	}
 
 done_merging:
-	set_buddy_order(page, order);
+	set_buddy_order_mgen(page, order, 0);
 
 	if (fpi_flags & FPI_TO_TAIL)
 		to_tail = true;
@@ -1344,7 +1347,7 @@ static inline void expand(struct zone *zone, struct page *page,
 			continue;
 
 		__add_to_free_list(&page[size], zone, high, migratetype, false);
-		set_buddy_order(&page[size], high);
+		set_buddy_order_mgen(&page[size], high, 0);
 		nr_added += size;
 	}
 	account_freepages(zone, nr_added, migratetype);
@@ -6802,7 +6805,7 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
 			continue;
 
 		add_to_free_list(current_buddy, zone, high, migratetype, false);
-		set_buddy_order(current_buddy, high);
+		set_buddy_order_mgen(current_buddy, high, 0);
 	}
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v9 rebase on mm-unstable 7/8] mm: add folio_put_mgen() to deliver migrc's generation number to pcp or buddy
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
                   ` (5 preceding siblings ...)
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 6/8] mm: buddy: make room for a new variable, mgen, in struct page Byungchul Park
@ 2024-04-18  6:15 ` Byungchul Park
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 8/8] mm: defer tlb flush until the source folios at migration actually get used Byungchul Park
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-18  6:15 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Introduce a new API, folio_put_mgen(), to deliver migrc's generation
number to pcp or buddy, which the migrc mechanism will use to track the
need for a tlb flush for each page residing in pcp or buddy.

migrc decides whether a tlb flush is needed based on the generation
number stored in the page of interest and the global generation number
up to which the required tlb flushes have been completed.

For now, the delivery works only for the following call path, not for
other paths that don't release source folios during migration:

	folio_put_mgen()
	   __folio_put_mgen()
	      free_unref_page()
	         free_unref_page_commit()
	         free_one_page()
	            __free_one_page()

The generation number has to be handed over properly when pages travel
between pcp and buddy, and the necessary handling must be done when a
page exits from pcp or buddy.

It's worth noting that this patch doesn't include the actual body of
the tlb flush on that exit, which will be filled in by the main migrc
patch.
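
To make the comparison concrete, here is a self-contained sketch of the
wrap-around-safe ordering (illustration only; the real helper is
mgen_latest() in mm/internal.h added by this patch, and 0 is reserved
to mean 'no generation'):

	static inline unsigned short int newest_mgen(unsigned short int a,
						     unsigned short int b)
	{
		if (!a || !b)
			return a + b;	/* 0 means 'no generation' */
		/*
		 * The signed 16-bit difference is negative when 'a' is
		 * older than 'b', even across a wrap-around.
		 */
		return (short int)(a - b) < 0 ? b : a;
	}

	/*
	 * Examples:
	 *	newest_mgen(5, 7)     == 7	(7 is newer)
	 *	newest_mgen(65535, 2) == 2	(the counter wrapped around)
	 *	newest_mgen(0, 7)     == 7	(no generation on one side)
	 */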

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/mm.h    |  22 +++++++
 include/linux/sched.h |   1 +
 mm/compaction.c       |  10 +++
 mm/internal.h         |  41 +++++++++++-
 mm/page_alloc.c       | 144 ++++++++++++++++++++++++++++++++++--------
 mm/page_isolation.c   |   6 ++
 mm/page_reporting.c   |  10 +++
 mm/swap.c             |  20 +++++-
 8 files changed, 226 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc33f8269fb5..2e266dca1577 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1312,6 +1312,7 @@ static inline struct folio *virt_to_folio(const void *x)
 }
 
 void __folio_put(struct folio *folio);
+void __folio_put_mgen(struct folio *folio, unsigned short int mgen);
 
 void put_pages_list(struct list_head *pages);
 
@@ -1509,6 +1510,27 @@ static inline void folio_put(struct folio *folio)
 		__folio_put(folio);
 }
 
+/**
+ * folio_put_mgen - Decrement the last reference count on a folio.
+ * @folio: The folio.
+ * @mgen: The migrc generation # of TLB flush that the folio requires.
+ *
+ * The folio's reference count should be one since the only user, the
+ * folio migration code, calls folio_put_mgen() only when the folio has
+ * no other references.  The memory will be released back to the page
+ * allocator and may be used by another allocation immediately.  Do not
+ * access the memory or the struct folio after calling folio_put_mgen().
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context.  May be called while holding a spinlock.
+ */
+static inline void folio_put_mgen(struct folio *folio, unsigned short int mgen)
+{
+	if (WARN_ON(!folio_put_testzero(folio)))
+		return;
+	__folio_put_mgen(folio, mgen);
+}
+
 /**
  * folio_put_refs - Reduce the reference count on a folio.
  * @folio: The folio.
diff --git a/include/linux/sched.h b/include/linux/sched.h
index f9f8091f354f..8125014dd57d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1340,6 +1340,7 @@ struct task_struct {
 
 	struct tlbflush_unmap_batch	tlb_ubc;
 	struct tlbflush_unmap_batch	tlb_ubc_ro;
+	unsigned short int		mgen;
 
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..cf7cbffc411e 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -701,6 +701,11 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	if (locked)
 		spin_unlock_irqrestore(&cc->zone->lock, flags);
 
+	/*
+	 * Check and flush before using the isolated pages.
+	 */
+	check_flush_task_mgen();
+
 	/*
 	 * Be careful to not go outside of the pageblock.
 	 */
@@ -1673,6 +1678,11 @@ static void fast_isolate_freepages(struct compact_control *cc)
 
 		spin_unlock_irqrestore(&cc->zone->lock, flags);
 
+		/*
+		 * Check and flush before using the isolated pages.
+		 */
+		check_flush_task_mgen();
+
 		/* Skip fast search if enough freepages isolated */
 		if (cc->nr_freepages >= cc->nr_migratepages)
 			break;
diff --git a/mm/internal.h b/mm/internal.h
index 0336375c6e8b..484bb960aeb7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -638,7 +638,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);
 
 extern int user_min_free_kbytes;
 
-void free_unref_page(struct page *page, unsigned int order);
+void free_unref_page(struct page *page, unsigned int order, unsigned short int mgen);
 void free_unref_folios(struct folio_batch *fbatch);
 
 extern void zone_pcp_reset(struct zone *zone);
@@ -1516,4 +1516,43 @@ static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
 void workingset_update_node(struct xa_node *node);
 extern struct list_lru shadow_nodes;
 
+#if defined(CONFIG_MIGRATION) && defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+static inline unsigned short int mgen_latest(unsigned short int a, unsigned short int b)
+{
+	if (!a || !b)
+		return a + b;
+
+	/*
+	 * The mgen is wrapped around so let's use this trick.
+	 */
+	if ((short int)(a - b) < 0)
+		return b;
+	else
+		return a;
+}
+
+static inline void update_task_mgen(unsigned short int mgen)
+{
+	current->mgen = mgen_latest(current->mgen, mgen);
+}
+
+static inline unsigned int hand_over_task_mgen(void)
+{
+	return xchg(&current->mgen, 0);
+}
+
+static inline void check_flush_task_mgen(void)
+{
+	/*
+	 * XXX: migrc mechanism will handle this. For now, do nothing
+	 * but reset current's mgen to finalize this turn.
+	 */
+	current->mgen = 0;
+}
+#else /* CONFIG_MIGRATION && CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline unsigned short int mgen_latest(unsigned short int a, unsigned short int b) { return 0; }
+static inline void update_task_mgen(unsigned short int mgen) {}
+static inline unsigned int hand_over_task_mgen(void) { return 0; }
+static inline void check_flush_task_mgen(void) {}
+#endif
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cbde22c4c189..7343882f077a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -696,6 +696,7 @@ static inline void __del_page_from_free_list(struct page *page, struct zone *zon
 	if (page_reported(page))
 		__ClearPageReported(page);
 
+	update_task_mgen(page_buddy_mgen(page));
 	list_del(&page->buddy_list);
 	__ClearPageBuddy(page);
 	set_page_private(page, 0);
@@ -768,7 +769,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 static inline void __free_one_page(struct page *page,
 		unsigned long pfn,
 		struct zone *zone, unsigned int order,
-		int migratetype, fpi_t fpi_flags)
+		int migratetype, fpi_t fpi_flags, unsigned short int mgen)
 {
 	struct capture_control *capc = task_capc(zone);
 	unsigned long buddy_pfn = 0;
@@ -783,12 +784,22 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 
+	/*
+	 * Ensure private is zero before using it inside buddy.
+	 */
+	set_page_private(page, 0);
+
 	account_freepages(zone, 1 << order, migratetype);
 
 	while (order < MAX_PAGE_ORDER) {
 		int buddy_mt = migratetype;
 
 		if (compaction_capture(capc, page, order, migratetype)) {
+			/*
+			 * Capturer will check_flush_task_mgen() through
+			 * prep_new_page().
+			 */
+			update_task_mgen(mgen);
 			account_freepages(zone, -(1 << order), migratetype);
 			return;
 		}
@@ -819,6 +830,11 @@ static inline void __free_one_page(struct page *page,
 		if (page_is_guard(buddy))
 			clear_page_guard(zone, buddy, order);
 		else
+			/*
+			 * __del_page_from_free_list() updates current's
+			 * mgen that pairs with hand_over_task_mgen() below
+			 * in this function.
+			 */
 			__del_page_from_free_list(buddy, zone, order, buddy_mt);
 
 		if (unlikely(buddy_mt != migratetype)) {
@@ -837,7 +853,8 @@ static inline void __free_one_page(struct page *page,
 	}
 
 done_merging:
-	set_buddy_order_mgen(page, order, 0);
+	mgen = mgen_latest(mgen, hand_over_task_mgen());
+	set_buddy_order_mgen(page, order, mgen);
 
 	if (fpi_flags & FPI_TO_TAIL)
 		to_tail = true;
@@ -1048,6 +1065,11 @@ __always_inline bool free_pages_prepare(struct page *page,
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
+	/*
+	 * Ensure private is zero before using it inside pcp.
+	 */
+	set_page_private(page, 0);
+
 	trace_mm_page_free(page, order);
 	kmsan_free_page(page, order);
 
@@ -1179,17 +1201,23 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		do {
 			unsigned long pfn;
 			int mt;
+			unsigned short int mgen;
 
 			page = list_last_entry(list, struct page, pcp_list);
 			pfn = page_to_pfn(page);
 			mt = get_pfnblock_migratetype(page, pfn);
 
+			/*
+			 * pcp uses private to store mgen.
+			 */
+			mgen = page_private(page);
+
 			/* must delete to avoid corrupting pcp list */
 			list_del(&page->pcp_list);
 			count -= nr_pages;
 			pcp->count -= nr_pages;
 
-			__free_one_page(page, pfn, zone, order, mt, FPI_NONE);
+			__free_one_page(page, pfn, zone, order, mt, FPI_NONE, mgen);
 			trace_mm_page_pcpu_drain(page, order, mt);
 		} while (count > 0 && !list_empty(list));
 	}
@@ -1199,14 +1227,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 
 static void free_one_page(struct zone *zone, struct page *page,
 			  unsigned long pfn, unsigned int order,
-			  fpi_t fpi_flags)
+			  fpi_t fpi_flags, unsigned short int mgen)
 {
 	unsigned long flags;
 	int migratetype;
 
 	spin_lock_irqsave(&zone->lock, flags);
 	migratetype = get_pfnblock_migratetype(page, pfn);
-	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
+	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags, mgen);
 	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
@@ -1219,7 +1247,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	if (!free_pages_prepare(page, order))
 		return;
 
-	free_one_page(zone, page, pfn, order, fpi_flags);
+	free_one_page(zone, page, pfn, order, fpi_flags, 0);
 
 	__count_vm_events(PGFREE, 1 << order);
 }
@@ -1484,6 +1512,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 							unsigned int alloc_flags)
 {
+	/*
+	 * Check and flush before using the pages.
+	 */
+	check_flush_task_mgen();
 	post_alloc_hook(page, order, gfp_flags);
 
 	if (order && (gfp_flags & __GFP_COMP))
@@ -1519,6 +1551,10 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 		page = get_page_from_free_area(area, migratetype);
 		if (!page)
 			continue;
+		/*
+		 * del_page_from_free_list() updates current's mgen that
+		 * pairs with check_flush_task_mgen() in prep_new_page().
+		 */
 		del_page_from_free_list(page, zone, current_order, migratetype);
 		expand(zone, page, order, current_order, migratetype);
 		trace_mm_page_alloc_zone_locked(page, order, migratetype,
@@ -1681,7 +1717,8 @@ static unsigned long find_large_buddy(unsigned long start_pfn)
 
 /* Split a multi-block free page into its individual pageblocks */
 static void split_large_buddy(struct zone *zone, struct page *page,
-			      unsigned long pfn, int order)
+			      unsigned long pfn, int order,
+			      unsigned short int mgen)
 {
 	unsigned long end_pfn = pfn + (1 << order);
 
@@ -1694,7 +1731,7 @@ static void split_large_buddy(struct zone *zone, struct page *page,
 	while (pfn != end_pfn) {
 		int mt = get_pfnblock_migratetype(page, pfn);
 
-		__free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE);
+		__free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE, mgen);
 		pfn += pageblock_nr_pages;
 		page = pfn_to_page(pfn);
 	}
@@ -1736,22 +1773,34 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
 	if (pfn != start_pfn) {
 		struct page *buddy = pfn_to_page(pfn);
 		int order = buddy_order(buddy);
+		unsigned short int mgen;
 
+		/*
+		 * del_page_from_free_list() updates current's mgen that
+		 * pairs with the following hand_over_task_mgen().
+		 */
 		del_page_from_free_list(buddy, zone, order,
 					get_pfnblock_migratetype(buddy, pfn));
+		mgen = hand_over_task_mgen();
 		set_pageblock_migratetype(page, migratetype);
-		split_large_buddy(zone, buddy, pfn, order);
+		split_large_buddy(zone, buddy, pfn, order, mgen);
 		return true;
 	}
 
 	/* We're the starting block of a larger buddy */
 	if (PageBuddy(page) && buddy_order(page) > pageblock_order) {
 		int order = buddy_order(page);
+		unsigned short int mgen;
 
+		/*
+		 * del_page_from_free_list() updates current's mgen that
+		 * pairs with the following hand_over_task_mgen().
+		 */
 		del_page_from_free_list(page, zone, order,
 					get_pfnblock_migratetype(page, pfn));
+		mgen = hand_over_task_mgen();
 		set_pageblock_migratetype(page, migratetype);
-		split_large_buddy(zone, page, pfn, order);
+		split_large_buddy(zone, page, pfn, order, mgen);
 		return true;
 	}
 move:
@@ -1871,6 +1920,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
 
 	/* Take ownership for orders >= pageblock_order */
 	if (current_order >= pageblock_order) {
+		/*
+		 * del_page_from_free_list() updates current's mgen that
+		 * pairs with check_flush_task_mgen() in prep_new_page().
+		 */
 		del_page_from_free_list(page, zone, current_order, block_type);
 		change_pageblock_range(page, current_order, start_type);
 		expand(zone, page, order, current_order, start_type);
@@ -1926,6 +1979,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
 	}
 
 single_page:
+	/*
+	 * del_page_from_free_list() updates current's mgen that pairs
+	 * with check_flush_task_mgen() in prep_new_page().
+	 */
 	del_page_from_free_list(page, zone, current_order, block_type);
 	expand(zone, page, order, current_order, block_type);
 	return page;
@@ -2547,7 +2604,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 
 static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 				   struct page *page, int migratetype,
-				   unsigned int order)
+				   unsigned int order, unsigned short int mgen)
 {
 	int high, batch;
 	int pindex;
@@ -2561,6 +2618,11 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	pcp->alloc_factor >>= 1;
 	__count_vm_events(PGFREE, 1 << order);
 	pindex = order_to_pindex(migratetype, order);
+
+	/*
+	 * pcp uses private to store mgen.
+	 */
+	set_page_private(page, mgen);
 	list_add(&page->pcp_list, &pcp->lists[pindex]);
 	pcp->count += 1 << order;
 
@@ -2596,7 +2658,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 /*
  * Free a pcp page
  */
-void free_unref_page(struct page *page, unsigned int order)
+void free_unref_page(struct page *page, unsigned int order,
+		     unsigned short int mgen)
 {
 	unsigned long __maybe_unused UP_flags;
 	struct per_cpu_pages *pcp;
@@ -2622,7 +2685,7 @@ void free_unref_page(struct page *page, unsigned int order)
 	migratetype = get_pfnblock_migratetype(page, pfn);
 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(page_zone(page), page, pfn, order, FPI_NONE);
+			free_one_page(page_zone(page), page, pfn, order, FPI_NONE, mgen);
 			return;
 		}
 		migratetype = MIGRATE_MOVABLE;
@@ -2632,10 +2695,10 @@ void free_unref_page(struct page *page, unsigned int order)
 	pcp_trylock_prepare(UP_flags);
 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 	if (pcp) {
-		free_unref_page_commit(zone, pcp, page, migratetype, order);
+		free_unref_page_commit(zone, pcp, page, migratetype, order, mgen);
 		pcp_spin_unlock(pcp);
 	} else {
-		free_one_page(zone, page, pfn, order, FPI_NONE);
+		free_one_page(zone, page, pfn, order, FPI_NONE, mgen);
 	}
 	pcp_trylock_finish(UP_flags);
 }
@@ -2666,7 +2729,7 @@ void free_unref_folios(struct folio_batch *folios)
 		 */
 		if (!pcp_allowed_order(order)) {
 			free_one_page(folio_zone(folio), &folio->page,
-				      pfn, order, FPI_NONE);
+				      pfn, order, FPI_NONE, 0);
 			continue;
 		}
 		folio->private = (void *)(unsigned long)order;
@@ -2702,7 +2765,7 @@ void free_unref_folios(struct folio_batch *folios)
 			 */
 			if (is_migrate_isolate(migratetype)) {
 				free_one_page(zone, &folio->page, pfn,
-					      order, FPI_NONE);
+					      order, FPI_NONE, 0);
 				continue;
 			}
 
@@ -2715,7 +2778,7 @@ void free_unref_folios(struct folio_batch *folios)
 			if (unlikely(!pcp)) {
 				pcp_trylock_finish(UP_flags);
 				free_one_page(zone, &folio->page, pfn,
-					      order, FPI_NONE);
+					      order, FPI_NONE, 0);
 				continue;
 			}
 			locked_zone = zone;
@@ -2730,7 +2793,7 @@ void free_unref_folios(struct folio_batch *folios)
 
 		trace_mm_page_free_batched(&folio->page);
 		free_unref_page_commit(zone, pcp, &folio->page, migratetype,
-				order);
+				order, 0);
 	}
 
 	if (pcp) {
@@ -2781,6 +2844,11 @@ int __isolate_free_page(struct page *page, unsigned int order)
 			return 0;
 	}
 
+	/*
+	 * del_page_from_free_list() updates current's mgen. The user of
+	 * the isolated page should check_flush_task_mgen() before using
+	 * it.
+	 */
 	del_page_from_free_list(page, zone, order, mt);
 
 	/*
@@ -2822,7 +2890,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
-			FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
+			FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL, 0);
 }
 
 /*
@@ -2965,6 +3033,11 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
 		}
 
 		page = list_first_entry(list, struct page, pcp_list);
+
+		/*
+		 * Pairs with check_flush_task_mgen() in prep_new_page().
+		 */
+		update_task_mgen(page_private(page));
 		list_del(&page->pcp_list);
 		pcp->count -= 1 << order;
 	} while (check_new_pages(page, order));
@@ -4791,11 +4864,11 @@ void __free_pages(struct page *page, unsigned int order)
 	struct alloc_tag *tag = pgalloc_tag_get(page);
 
 	if (put_page_testzero(page))
-		free_unref_page(page, order);
+		free_unref_page(page, order, 0);
 	else if (!head) {
 		pgalloc_tag_sub_pages(tag, (1 << order) - 1);
 		while (order-- > 0)
-			free_unref_page(page + (1 << order), order);
+			free_unref_page(page + (1 << order), order, 0);
 	}
 }
 EXPORT_SYMBOL(__free_pages);
@@ -4857,7 +4930,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
 
 	if (page_ref_sub_and_test(page, count))
-		free_unref_page(page, compound_order(page));
+		free_unref_page(page, compound_order(page), 0);
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
@@ -4898,7 +4971,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 
 		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
+			free_unref_page(page, compound_order(page), 0);
 			goto refill;
 		}
 
@@ -4942,7 +5015,7 @@ void page_frag_free(void *addr)
 	struct page *page = virt_to_head_page(addr);
 
 	if (unlikely(put_page_testzero(page)))
-		free_unref_page(page, compound_order(page));
+		free_unref_page(page, compound_order(page), 0);
 }
 EXPORT_SYMBOL(page_frag_free);
 
@@ -6751,10 +6824,19 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 		BUG_ON(!PageBuddy(page));
 		VM_WARN_ON(get_pageblock_migratetype(page) != MIGRATE_ISOLATE);
 		order = buddy_order(page);
+		/*
+		 * del_page_from_free_list() updates current's mgen that
+		 * pairs with check_flush_task_mgen() below in this function.
+		 */
 		del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
 		pfn += (1 << order);
 	}
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	/*
+	 * Check and flush before using it.
+	 */
+	check_flush_task_mgen();
 }
 #endif
 
@@ -6830,6 +6912,11 @@ bool take_page_off_buddy(struct page *page)
 			int migratetype = get_pfnblock_migratetype(page_head,
 								   pfn_head);
 
+			/*
+			 * del_page_from_free_list() updates current's
+			 * mgen that pairs with check_flush_task_mgen() below
+			 * in this function.
+			 */
 			del_page_from_free_list(page_head, zone, page_order,
 						migratetype);
 			break_down_buddy_pages(zone, page_head, page, 0,
@@ -6842,6 +6929,11 @@ bool take_page_off_buddy(struct page *page)
 			break;
 	}
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	/*
+	 * Check and flush before using it.
+	 */
+	check_flush_task_mgen();
 	return ret;
 }
 
@@ -6860,7 +6952,7 @@ bool put_page_back_buddy(struct page *page)
 		int migratetype = get_pfnblock_migratetype(page, pfn);
 
 		ClearPageHWPoisonTakenOff(page);
-		__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
+		__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE, 0);
 		if (TestClearPageHWPoison(page)) {
 			ret = true;
 		}
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 042937d5abe4..ab90481cf0fa 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -260,6 +260,12 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 	zone->nr_isolate_pageblock--;
 out:
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	/*
+	 * Check and flush for the pages that have been isolated.
+	 */
+	if (isolated_page)
+		check_flush_task_mgen();
 }
 
 static inline struct page *
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index e4c428e61d8c..95b771ae4653 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -221,6 +221,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 		/* release lock before waiting on report processing */
 		spin_unlock_irq(&zone->lock);
 
+		/*
+		 * Check and flush before using the isolated pages.
+		 */
+		check_flush_task_mgen();
+
 		/* begin processing pages in local list */
 		err = prdev->report(prdev, sgl, PAGE_REPORTING_CAPACITY);
 
@@ -253,6 +258,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 
 	spin_unlock_irq(&zone->lock);
 
+	/*
+	 * Check and flush before using the isolated pages.
+	 */
+	check_flush_task_mgen();
+
 	return err;
 }
 
diff --git a/mm/swap.c b/mm/swap.c
index f0d478eee292..95c11547e831 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -126,10 +126,28 @@ void __folio_put(struct folio *folio)
 	if (folio_test_large(folio) && folio_test_large_rmappable(folio))
 		folio_undo_large_rmappable(folio);
 	mem_cgroup_uncharge(folio);
-	free_unref_page(&folio->page, folio_order(folio));
+	free_unref_page(&folio->page, folio_order(folio), 0);
 }
 EXPORT_SYMBOL(__folio_put);
 
+void __folio_put_mgen(struct folio *folio, unsigned short int mgen)
+{
+	if (unlikely(folio_is_zone_device(folio)))
+		WARN_ON(1);
+	else if (unlikely(folio_test_hugetlb(folio)))
+		WARN_ON(1);
+	else if (unlikely(folio_test_large(folio)))
+		WARN_ON(1);
+	/*
+	 * For now, migrc supports this case only.
+	 */
+	else {
+		page_cache_release(folio);
+		mem_cgroup_uncharge(folio);
+		free_unref_page(&folio->page, 0, mgen);
+	}
+}
+
 /**
  * put_pages_list() - release a list of pages
  * @pages: list of pages threaded on page->lru
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v9 rebase on mm-unstable 8/8] mm: defer tlb flush until the source folios at migration actually get used
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
                   ` (6 preceding siblings ...)
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 7/8] mm: add folio_put_mgen() to deliver migrc's generation number to pcp or buddy Byungchul Park
@ 2024-04-18  6:15 ` Byungchul Park
  2024-04-18 20:17 ` [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Andrew Morton
  2024-04-19  6:06 ` Huang, Ying
  9 siblings, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-18  6:15 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

This is an implementation of the MIGRC mechanism, which stands for
'Migration Read Copy'.  We always face migration overhead at either
promotion or demotion while working with tiered memory e.g. CXL memory,
and tlb shootdown turned out to be an overhead worth getting rid of if
possible.

Fortunately, the tlb flush can be deferred as long as it is guaranteed
to be performed before the source folios at migration actually become
used, of course, only if the target PTE entries have read-only
permission, precisely, don't have write permission.  Otherwise, the
system would no doubt get messed up.

To achieve that:

   1. For the folios that map only to non-writable tlb entries, prevent
      tlb flush during migration but perform it just before the source
      folios actually become used out of buddy or pcp.

   2. When any non-writable tlb entry changes to writable e.g. through
      fault handler, give up migrc mechanism and perform tlb flush
      required right away.
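
For illustration, the two rules above boil down to something like the
following condensed pseudo code.  The function names are the ones this
series introduces, but the bodies and control flow here are simplified
and are not the actual hunks:

   /* rule 1: at unmap time during migration */
   if (can_migrc_test()) {
           /* all the mappings were read-only: defer the tlb flush */
           mgen = migrc_add_pending_ubc(&pending_ubc);
           folio_put_mgen(src, mgen);   /* mgen travels with the folio */
   } else {
           try_to_unmap_flush();        /* ordinary migration path */
           folio_put(src);
   }

   /* rule 2: a non-writable pte is about to become writable */
   migrc_flush();                       /* give up deferring, flush now */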

No matter what type of workload is used for performance evaluation, the
result would be positive thanks to the unconditional reduction of tlb
flushes, tlb misses and interrupts.  For the test, I picked up XSBench
that is widely used for performance analysis on high performance
computing architectures - https://github.com/ANL-CESAR/XSBench.

The result would depend on memory latency and how often reclaim runs,
which implies tlb miss overhead and how many times migration happens.
The slower the memory is and the more reclaim runs, the better migrc
works so as to obtain the better result.  In my system, the result
shows:

   1. itlb flushes are reduced over 90%.
   2. itlb misses are reduced over 30%.
   3. All the other tlb numbers also get enhanced.
   4. tlb shootdown interrupts are reduced over 90%.
   5. The test program runtime is reduced over 5%.

The test envitonment:

   Architecture - x86_64
   QEMU - kvm enabled, host cpu
   Numa - 2 nodes (16 CPUs 1GB, no CPUs 99GB)
   Linux Kernel - v6.9-rc4, numa balancing tiering on, demotion enabled

< measurement: raw data - tlb and interrupt numbers >

   $ perf stat -a \
           -e itlb.itlb_flush \
           -e tlb_flush.dtlb_thread \
           -e tlb_flush.stlb_any \
           -e dtlb-load-misses \
           -e dtlb-store-misses \
           -e itlb-load-misses \
	   XSBench -t 16 -p 50000000

   $ grep "TLB shootdowns" /proc/interrupts

   BEFORE
   ------
   40417078	itlb.itlb_flush
   234852566	tlb_flush.dtlb_thread
   153192357	tlb_flush.stlb_any
   119001107892	dTLB-load-misses
   307921167	dTLB-store-misses
   1355272118	iTLB-load-misses

   TLB: 1364803    1303670    1333921    1349607
        1356934    1354216    1332972    1342842
        1350265    1316443    1355928    1360793
        1298239    1326358    1343006    1340971
        TLB shootdowns

   AFTER
   -----
   3316495	itlb.itlb_flush
   138912511	tlb_flush.dtlb_thread
   115199341	tlb_flush.stlb_any
   117610390021 dTLB-load-misses
   198042233	dTLB-store-misses
   840066984	iTLB-load-misses

   TLB: 117257     119219     117178     115737
        117967     118948     117508     116079
        116962     117266     117320     117215
        105808     103934     115672     117610
        TLB shootdowns

< measurement: user experience - runtime >

   $ time XSBench -t 16 -p 50000000

   BEFORE
   ------
   Threads:     16
   Runtime:     968.783 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,754,778

   15208.91s user 141.44s system 1564% cpu 16:20.98 total

   AFTER
   -----
   Threads:     16
   Runtime:     913.210 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,861,565

   14351.69s user 138.23s system 1565% cpu 15:25.47 total

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |   8 +
 mm/internal.h         |  46 +++++-
 mm/memory.c           |   8 +
 mm/migrate.c          | 359 ++++++++++++++++++++++++++++++++++++++++--
 mm/rmap.c             |  12 +-
 5 files changed, 414 insertions(+), 19 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8125014dd57d..66e27e0ec251 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1342,6 +1342,14 @@ struct task_struct {
 	struct tlbflush_unmap_batch	tlb_ubc_ro;
 	unsigned short int		mgen;
 
+#if defined(CONFIG_MIGRATION) && defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+	/*
+	 * whether all the mappings of a folio during unmap are read-only
+	 * so that migrc can work on the folio
+	 */
+	bool				can_migrc;
+#endif
+
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
 
diff --git a/mm/internal.h b/mm/internal.h
index 484bb960aeb7..2539edd8aa00 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1517,6 +1517,39 @@ void workingset_update_node(struct xa_node *node);
 extern struct list_lru shadow_nodes;
 
 #if defined(CONFIG_MIGRATION) && defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+void check_migrc_flush(unsigned short int mgen);
+void migrc_flush(void);
+void rmap_flush_start(void);
+void rmap_flush_end(struct tlbflush_unmap_batch *batch);
+
+/*
+ * Reset the indicator indicating there are no writable mappings at the
+ * beginning of every rmap traverse for unmap.  migrc can work only when
+ * all the mappings are read-only.
+ */
+static inline void can_migrc_init(void)
+{
+	current->can_migrc = true;
+}
+
+/*
+ * Mark the folio as not applicable to migrc once a writable or dirty
+ * pte is found during rmap traverse for unmap.
+ */
+static inline void can_migrc_fail(void)
+{
+	current->can_migrc = false;
+}
+
+/*
+ * Check if all the mappings are read-only and read-only mappings even
+ * exist.
+ */
+static inline bool can_migrc_test(void)
+{
+	return current->can_migrc && current->tlb_ubc_ro.flush_required;
+}
+
 static inline unsigned short int mgen_latest(unsigned short int a, unsigned short int b)
 {
 	if (!a || !b)
@@ -1543,13 +1576,16 @@ static inline unsigned int hand_over_task_mgen(void)
 
 static inline void check_flush_task_mgen(void)
 {
-	/*
-	 * XXX: migrc mechanism will handle this. For now, do nothing
-	 * but reset current's mgen to finalize this turn.
-	 */
-	current->mgen = 0;
+	check_migrc_flush(xchg(&current->mgen, 0));
 }
 #else /* CONFIG_MIGRATION && CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline void check_migrc_flush(unsigned short int mgen) {}
+static inline void migrc_flush(void) {}
+static inline void rmap_flush_start(void) {}
+static inline void rmap_flush_end(struct tlbflush_unmap_batch *batch) {}
+static inline void can_migrc_init(void) {}
+static inline void can_migrc_fail(void) {}
+static inline bool can_migrc_test(void) { return false; }
 static inline unsigned short int mgen_latest(unsigned short int a, unsigned short int b) { return 0; }
 static inline void update_task_mgen(unsigned short int mgen) {}
 static inline unsigned int hand_over_task_mgen(void) { return 0; }
diff --git a/mm/memory.c b/mm/memory.c
index 33d87b64d15d..ef40a6527a96 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3617,6 +3617,14 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 	if (vmf->page)
 		folio = page_folio(vmf->page);
 
+	/*
+	 * The folio may or may not be one that is under migrc's control
+	 * and about to change its permission from read-only to writable.
+	 * Conservatively give up deferring tlb flush just in case.
+	 */
+	if (folio)
+		migrc_flush();
+
 	/*
 	 * Shared mapping: we are guaranteed to have VM_WRITE and
 	 * FAULT_FLAG_WRITE set at this point.
diff --git a/mm/migrate.c b/mm/migrate.c
index f9ed7a2b8720..cf5875ec0ca0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,6 +57,279 @@
 
 #include "internal.h"
 
+#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+static struct tlbflush_unmap_batch migrc_ubc;
+static DEFINE_SPINLOCK(migrc_lock);
+
+/*
+ * Don't be zero to distinguish from invalid mgen, 0.
+ */
+static unsigned short int mgen_next(unsigned short int a)
+{
+	return a + 1 ?: a + 2;
+}
+
+static bool mgen_before(unsigned short int a, unsigned short int b)
+{
+	return (short int)(a - b) < 0;
+}
+
+static void init_tlb_ubc(struct tlbflush_unmap_batch *ubc)
+{
+	arch_tlbbatch_clear(&ubc->arch);
+	ubc->flush_required = false;
+	ubc->writable = false;
+}
+
+/*
+ * Need to synchronize between tlb flush and managing pending CPUs in
+ * migrc_ubc.  Take a look at the following scenario, where CPU0 is in
+ * try_to_unmap_flush() and CPU1 is in migrate_pages_batch():
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *	tlb flush
+ *				unmap folios (needing tlb flush)
+ *				add pending CPUs to migrc_ubc
+ *				<-- not performed tlb flush needed by
+ *				    the unmap above yet but the request
+ *				    will be cleared by CPU0 shortly. bug!
+ *	clear the CPUs from migrc_ubc
+ *
+ * The pending CPUs added in CPU1 should not be cleared from migrc_ubc
+ * in CPU0 because the tlb flush for migrc_ubc added in CPU1 has not
+ * been performed this turn.  To avoid this, using 'on_flushing'
+ * variable, prevent adding pending CPUs to migrc_ubc and give up migrc
+ * mechanism if someone is in the middle of tlb flush, like:
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *	on_flushing++
+ *	tlb flush
+ *				unmap folios (needing tlb flush)
+ *				if on_flushing == 0:
+ *				   add pending CPUs to migrc_ubc
+ *				else: <-- hit
+ *				   give up migrc mechanism
+ *	clear the CPUs from migrc_ubc
+ *	on_flushing--
+ *
+ * Only the following case would be allowed for migrc mechanism to work:
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *				unmap folios (needing tlb flush)
+ *				if on_flushing == 0: <-- hit
+ *				   add pending CPUs to migrc_ubc
+ *				else:
+ *				   give up migrc mechanism
+ *	on_flushing++
+ *	tlb flush
+ *	clear the CPUs from migrc_ubc
+ *	on_flushing--
+ */
+static int on_flushing;
+
+/*
+ * When more than one thread enter check_migrc_flush() at the same
+ * time, each should wait for the request on progress to be done to
+ * avoid the following scenario, where the both CPUs are in
+ * check_migrc_flush():
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *	if !migrc_ubc.flush_required:
+ *	   return
+ *	migrc_ubc.flush_required = false
+ *				if !migrc_ubc.flush_required: <-- hit
+ *				   return <-- not performed tlb flush
+ *				              needed yet but return. bug!
+ *				migrc_ubc.flush_required = false
+ *				try_to_unmap_flush()
+ *				finalize
+ *	try_to_unmap_flush() <-- performs tlb flush needed
+ *	finalize
+ *
+ * So it should be handled:
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *	atomically execute {
+ *	   if migrc_on_flushing:
+ *	      wait for the completion
+ *	      return
+ *	   if !migrc_ubc.flush_required:
+ *	      return
+ *	   migrc_ubc.flush_required = false
+ *	   migrc_on_flushing = true
+ *	}
+ *				atomically execute {
+ *				   if migrc_on_flushing: <-- hit
+ *				      wait for the completion
+ *				      return <-- tlb flush needed is done
+ *				   if !migrc_ubc.flush_required:
+ *				      return
+ *				   migrc_ubc.flush_required = false
+ *				   migrc_on_flushing = true
+ *				}
+ *
+ *				try_to_unmap_flush()
+ *				migrc_on_flushing = false
+ *				finalize
+ *	try_to_unmap_flush() <-- performs tlb flush needed
+ *	migrc_on_flushing = false
+ *	finalize
+ */
+static bool migrc_on_flushing;
+
+/*
+ * Generation number for the current request of deferred tlb flush.
+ */
+static unsigned short int migrc_gen;
+
+/*
+ * Generation number for the next request.
+ */
+static unsigned short int migrc_gen_next = 1;
+
+/*
+ * Generation number for the latest request handled.
+ */
+static unsigned short int migrc_gen_done;
+
+static unsigned short int migrc_add_pending_ubc(struct tlbflush_unmap_batch *ubc)
+{
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	unsigned long flags;
+	unsigned short int mgen;
+
+	spin_lock_irqsave(&migrc_lock, flags);
+	if (on_flushing || migrc_on_flushing) {
+		spin_unlock_irqrestore(&migrc_lock, flags);
+
+		/*
+		 * Give up the migrc mechanism.  Let the needed tlb flush
+		 * be handled by try_to_unmap_flush() at the caller side.
+		 */
+		fold_ubc(tlb_ubc, ubc);
+		return 0;
+	}
+	fold_ubc(&migrc_ubc, ubc);
+	mgen = migrc_gen = migrc_gen_next;
+	spin_unlock_irqrestore(&migrc_lock, flags);
+
+	return mgen;
+}
+
+void rmap_flush_start(void)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&migrc_lock, flags);
+	on_flushing++;
+	spin_unlock_irqrestore(&migrc_lock, flags);
+}
+
+void rmap_flush_end(struct tlbflush_unmap_batch *batch)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&migrc_lock, flags);
+	if (arch_tlbbatch_done(&migrc_ubc.arch, &batch->arch)) {
+		migrc_ubc.flush_required = false;
+		migrc_ubc.writable = false;
+	}
+	on_flushing--;
+	spin_unlock_irqrestore(&migrc_lock, flags);
+}
+
+/*
+ * Even if multiple contexts are requesting tlb flush at the same time,
+ * it must guarantee to have completed tlb flush requested on return.
+ */
+void check_migrc_flush(unsigned short int mgen)
+{
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	unsigned long flags;
+
+	/*
+	 * Nothing has been requested.  We are done.
+	 */
+	if (!mgen)
+		return;
+retry:
+	/*
+	 * We can see migrc_gen_done equal to or later than mgen, which
+	 * means the tlb flush we need has already been done.
+	 */
+	if (!mgen_before(READ_ONCE(migrc_gen_done), mgen))
+		return;
+
+	spin_lock_irqsave(&migrc_lock, flags);
+
+	/*
+	 * With migrc_lock held, we might read migrc_gen_done updated.
+	 */
+	if (mgen_next(migrc_gen_done) != mgen) {
+		spin_unlock_irqrestore(&migrc_lock, flags);
+		return;
+	}
+
+	/*
+	 * Others are already working for us.
+	 */
+	if (migrc_on_flushing) {
+		spin_unlock_irqrestore(&migrc_lock, flags);
+		goto retry;
+	}
+
+	if (!migrc_ubc.flush_required) {
+		spin_unlock_irqrestore(&migrc_lock, flags);
+		return;
+	}
+
+	fold_ubc(tlb_ubc, &migrc_ubc);
+	migrc_gen_next = mgen_next(migrc_gen);
+	migrc_on_flushing = true;
+	spin_unlock_irqrestore(&migrc_lock, flags);
+
+	try_to_unmap_flush();
+
+	spin_lock_irqsave(&migrc_lock, flags);
+	migrc_on_flushing = false;
+
+	/*
+	 * migrc_gen_done can be read by another with migrc_lock not
+	 * held so use WRITE_ONCE() to prevent tearing.
+	 */
+	WRITE_ONCE(migrc_gen_done, mgen);
+	spin_unlock_irqrestore(&migrc_lock, flags);
+}
+
+void migrc_flush(void)
+{
+	unsigned long flags;
+	unsigned short int mgen;
+
+	/*
+	 * Obtain the latest mgen number.
+	 */
+	spin_lock_irqsave(&migrc_lock, flags);
+	mgen = migrc_gen;
+	spin_unlock_irqrestore(&migrc_lock, flags);
+
+	check_migrc_flush(mgen);
+}
+#else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static void init_tlb_ubc(struct tlbflush_unmap_batch *ubc)
+{
+}
+static unsigned int migrc_add_pending_ubc(struct tlbflush_unmap_batch *ubc)
+{
+	return 0;
+}
+#endif
+
 bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 {
 	struct folio *folio = folio_get_nontail_page(page);
@@ -1090,7 +1363,8 @@ static void migrate_folio_undo_dst(struct folio *dst, bool locked,
 
 /* Cleanup src folio upon migration success */
 static void migrate_folio_done(struct folio *src,
-			       enum migrate_reason reason)
+			       enum migrate_reason reason,
+			       unsigned short int mgen)
 {
 	/*
 	 * Compaction can migrate also non-LRU pages which are
@@ -1101,8 +1375,15 @@ static void migrate_folio_done(struct folio *src,
 		mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
 				    folio_is_file_lru(src), -folio_nr_pages(src));
 
-	if (reason != MR_MEMORY_FAILURE)
-		/* We release the page in page_handle_poison. */
+	/* We release the page in page_handle_poison. */
+	if (reason == MR_MEMORY_FAILURE) {
+		check_migrc_flush(mgen);
+		return;
+	}
+
+	if (mgen)
+		folio_put_mgen(src, mgen);
+	else
 		folio_put(src);
 }
 
@@ -1126,7 +1407,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		folio_clear_unevictable(src);
 		/* free_pages_prepare() will clear PG_isolated. */
 		list_del(&src->lru);
-		migrate_folio_done(src, reason);
+		migrate_folio_done(src, reason, 0);
 		return MIGRATEPAGE_SUCCESS;
 	}
 
@@ -1272,7 +1553,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 			      struct folio *src, struct folio *dst,
 			      enum migrate_mode mode, enum migrate_reason reason,
-			      struct list_head *ret)
+			      struct list_head *ret, unsigned short int mgen)
 {
 	int rc;
 	int old_page_state = 0;
@@ -1322,11 +1603,12 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	 * and will be freed.
 	 */
 	list_del(&src->lru);
+
 	/* Drop an anon_vma reference if we took one */
 	if (anon_vma)
 		put_anon_vma(anon_vma);
 	folio_unlock(src);
-	migrate_folio_done(src, reason);
+	migrate_folio_done(src, reason, mgen);
 
 	return rc;
 out:
@@ -1616,7 +1898,7 @@ static void migrate_folios_move(struct list_head *src_folios,
 		struct list_head *ret_folios,
 		struct migrate_pages_stats *stats,
 		int *retry, int *thp_retry, int *nr_failed,
-		int *nr_retry_pages)
+		int *nr_retry_pages, unsigned short int mgen)
 {
 	struct folio *folio, *folio2, *dst, *dst2;
 	bool is_thp;
@@ -1633,7 +1915,7 @@ static void migrate_folios_move(struct list_head *src_folios,
 
 		rc = migrate_folio_move(put_new_folio, private,
 				folio, dst, mode,
-				reason, ret_folios);
+				reason, ret_folios, mgen);
 		/*
 		 * The rules are:
 		 *	Success: folio will be freed
@@ -1706,24 +1988,36 @@ static int migrate_pages_batch(struct list_head *from,
 	int pass = 0;
 	bool is_thp = false;
 	bool is_large = false;
+	bool is_zone_device = false;
 	struct folio *folio, *folio2, *dst = NULL;
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
+	LIST_HEAD(unmap_folios_migrc);
+	LIST_HEAD(dst_folios_migrc);
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
+	struct tlbflush_unmap_batch pending_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+	unsigned short int mgen;
 
 	VM_WARN_ON_ONCE(mode != MIGRATE_ASYNC &&
 			!list_empty(from) && !list_is_singular(from));
 
+	init_tlb_ubc(&pending_ubc);
+
 	for (pass = 0; pass < nr_pass && retry; pass++) {
 		retry = 0;
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
 		list_for_each_entry_safe(folio, folio2, from, lru) {
+			bool can_migrc;
+
 			is_large = folio_test_large(folio);
 			is_thp = is_large && folio_test_pmd_mappable(folio);
 			nr_pages = folio_nr_pages(folio);
+			is_zone_device = folio_is_zone_device(folio);
 
 			cond_resched();
 
@@ -1773,9 +2067,25 @@ static int migrate_pages_batch(struct list_head *from,
 				continue;
 			}
 
+			can_migrc_init();
 			rc = migrate_folio_unmap(get_new_folio, put_new_folio,
 					private, folio, &dst, mode, reason,
 					ret_folios);
+			can_migrc = can_migrc_test();
+
+			/*
+			 * XXX: No way to handle zone device folio after
+			 * freeing.  Remove the following constraint
+			 * once migrc can handle it.
+			 */
+			can_migrc = can_migrc && likely(!is_zone_device);
+
+			/*
+			 * XXX: Remove the following constraint once
+			 * migrc handles large folio.
+			 */
+			can_migrc = can_migrc && likely(!is_large);
+
 			/*
 			 * The rules are:
 			 *	Success: folio will be freed
@@ -1821,7 +2131,8 @@ static int migrate_pages_batch(struct list_head *from,
 				/* nr_failed isn't updated for not used */
 				stats->nr_thp_failed += thp_retry;
 				rc_saved = rc;
-				if (list_empty(&unmap_folios))
+				if (list_empty(&unmap_folios) &&
+				    list_empty(&unmap_folios_migrc))
 					goto out;
 				else
 					goto move;
@@ -1835,8 +2146,19 @@ static int migrate_pages_batch(struct list_head *from,
 				stats->nr_thp_succeeded += is_thp;
 				break;
 			case MIGRATEPAGE_UNMAP:
-				list_move_tail(&folio->lru, &unmap_folios);
-				list_add_tail(&dst->lru, &dst_folios);
+				if (can_migrc) {
+					list_move_tail(&folio->lru, &unmap_folios_migrc);
+					list_add_tail(&dst->lru, &dst_folios_migrc);
+
+					/*
+					 * Gather ro batch data to add
+					 * to migrc_ubc after unmap.
+					 */
+					fold_ubc(&pending_ubc, tlb_ubc_ro);
+				} else {
+					list_move_tail(&folio->lru, &unmap_folios);
+					list_add_tail(&dst->lru, &dst_folios);
+				}
 				break;
 			default:
 				/*
@@ -1850,12 +2172,19 @@ static int migrate_pages_batch(struct list_head *from,
 				stats->nr_failed_pages += nr_pages;
 				break;
 			}
+			/*
+			 * Done with the current folio.  Fold the ro
+			 * batch data gathered to the normal batch.
+			 */
+			fold_ubc(tlb_ubc, tlb_ubc_ro);
 		}
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
 	stats->nr_failed_pages += nr_retry_pages;
 move:
+	/* Should be before try_to_unmap_flush() */
+	mgen = migrc_add_pending_ubc(&pending_ubc);
 	/* Flush TLBs for all unmapped folios */
 	try_to_unmap_flush();
 
@@ -1869,7 +2198,11 @@ static int migrate_pages_batch(struct list_head *from,
 		migrate_folios_move(&unmap_folios, &dst_folios,
 				put_new_folio, private, mode, reason,
 				ret_folios, stats, &retry, &thp_retry,
-				&nr_failed, &nr_retry_pages);
+				&nr_failed, &nr_retry_pages, 0);
+		migrate_folios_move(&unmap_folios_migrc, &dst_folios_migrc,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages, mgen);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1880,6 +2213,8 @@ static int migrate_pages_batch(struct list_head *from,
 	/* Cleanup remaining folios */
 	migrate_folios_undo(&unmap_folios, &dst_folios,
 			put_new_folio, private, ret_folios);
+	migrate_folios_undo(&unmap_folios_migrc, &dst_folios_migrc,
+			put_new_folio, private, ret_folios);
 
 	return rc;
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 513e49840da7..b5cea0f7daef 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -672,7 +672,9 @@ void try_to_unmap_flush(void)
 	if (!tlb_ubc->flush_required)
 		return;
 
+	rmap_flush_start();
 	arch_tlbbatch_flush(&tlb_ubc->arch);
+	rmap_flush_end(tlb_ubc);
 	arch_tlbbatch_clear(&tlb_ubc->arch);
 	tlb_ubc->flush_required = false;
 	tlb_ubc->writable = false;
@@ -707,9 +709,15 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 	if (!pte_accessible(mm, pteval))
 		return;
 
-	if (pte_write(pteval) || writable)
+	if (pte_write(pteval) || writable) {
 		tlb_ubc = &current->tlb_ubc;
-	else
+
+		/*
+		 * migrc cannot work with the folio once a writable or
+		 * dirty mapping is found on it.
+		 */
+		can_migrc_fail();
+	} else
 		tlb_ubc = &current->tlb_ubc_ro;
 
 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
                   ` (7 preceding siblings ...)
  2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 8/8] mm: defer tlb flush until the source folios at migration actually get used Byungchul Park
@ 2024-04-18 20:17 ` Andrew Morton
  2024-04-19  6:02   ` Byungchul Park
  2024-04-19  6:06 ` Huang, Ying
  9 siblings, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2024-04-18 20:17 UTC (permalink / raw)
  To: Byungchul Park
  Cc: linux-kernel, linux-mm, kernel_team, ying.huang, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Thu, 18 Apr 2024 15:15:28 +0900 Byungchul Park <byungchul@sk.com> wrote:

>    $ time XSBench -t 16 -p 50000000
> 
>    BEFORE
>    ------
>    Threads:     16
>    Runtime:     968.783 seconds
>    Lookups:     1,700,000,000
>    Lookups/s:   1,754,778
> 
>    15208.91s user 141.44s system 1564% cpu 16:20.98 total
> 
>    AFTER
>    -----
>    Threads:     16
>    Runtime:     913.210 seconds
>    Lookups:     1,700,000,000
>    Lookups/s:   1,861,565
> 
>    14351.69s user 138.23s system 1565% cpu 15:25.47 total

Well that's nice.  What exactly is XSBench doing in this situation? 
What sort of improvements can we expect to see in useful workloads?

I see it no longer consumes an additional page flag, good.

The patches show no evidence of review activity and I'm not seeing much
on the mailing list (patchset title was changed.  Previous title
"Reduce TLB flushes under some specific conditions").  Perhaps a better
description of the overall benefit to our users would help to motivate
reviewers.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration
  2024-04-18 20:17 ` [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Andrew Morton
@ 2024-04-19  6:02   ` Byungchul Park
  0 siblings, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-19  6:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel_team, ying.huang, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Thu, Apr 18, 2024 at 01:17:57PM -0700, Andrew Morton wrote:
> On Thu, 18 Apr 2024 15:15:28 +0900 Byungchul Park <byungchul@sk.com> wrote:
> 
> >    $ time XSBench -t 16 -p 50000000
> > 
> >    BEFORE
> >    ------
> >    Threads:     16
> >    Runtime:     968.783 seconds
> >    Lookups:     1,700,000,000
> >    Lookups/s:   1,754,778
> > 
> >    15208.91s user 141.44s system 1564% cpu 16:20.98 total
> > 
> >    AFTER
> >    -----
> >    Threads:     16
> >    Runtime:     913.210 seconds
> >    Lookups:     1,700,000,000
> >    Lookups/s:   1,861,565
> > 
> >    14351.69s user 138.23s system 1565% cpu 15:25.47 total
> 
> Well that's nice.  What exactly is XSBench doing in this situation? 

As far as I know, it frequently and continuously accesses anon areas
within an address range of about 6GB, from multiple threads.  Thus, it
triggers a lot of promotions through the hinting faults of numa
balancing tiering and a lot of demotions by kswapd as well, resulting in
a ton of tlb flushes.

All I need is a system suffering from memory reclaim or any type of
folio migration, since the migrc mechanism is meant to mitigate the
overhead of folio migration.  To see the benefits of migrc, it doesn't
have to be XSBench; any workload suffering from reclaim will do.
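
For example, the tlb and interrupt numbers in the cover letter can be
collected around any workload like this (a rough sketch; substitute your
own workload and adjust the perf events to your CPU):

   $ grep "TLB shootdowns" /proc/interrupts    # snapshot before the run
   $ perf stat -a -e itlb.itlb_flush -e tlb_flush.stlb_any <workload>
   $ grep "TLB shootdowns" /proc/interrupts    # snapshot after the run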

> What sort of improvements can we expect to see in useful workloads?

Increased throughput (= reduced runtime of each piece of work in the
system).

   1. Because migrc removes the CPU time that would've been spent in the
      IPI handler due to tlb shootdown, by skipping a lot of tlb
      shootdowns.

   2. Because migrc reduces tlb misses so as to utilize the tlb cache
      better, by skipping a lot of tlb flushes.

Besides, I expect overall scheduler latencies to be enhanced as well,
though the worst latencies measured using some ftrace tracers showed no
change.

> I see it no longer consumes an additional page flag, good.
> 
> The patches show no evidence of review activity and I'm not seeing much
> on the mailing list (patchset title was changed.  Previous title
> "Reduce TLB flushes under some specific conditions").  Perhaps a better

I changed the title because the mechanism was originally supposed to
work only with numa balancing tiering, i.e. promotion and demotion, but,
for now, migrc works with any type of folio migration.  Thus, I can say
migrc demonstrates its benefits as long as a system is under pressure
from reclaim and folio migration.

	Byungchul

> description of the overall benefit to our users would help to motivate
> reviewers.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration
  2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
                   ` (8 preceding siblings ...)
  2024-04-18 20:17 ` [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Andrew Morton
@ 2024-04-19  6:06 ` Huang, Ying
  2024-04-19  6:21   ` Byungchul Park
  2024-05-09  7:42   ` Byungchul Park
  9 siblings, 2 replies; 14+ messages in thread
From: Huang, Ying @ 2024-04-19  6:06 UTC (permalink / raw)
  To: Byungchul Park
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

Byungchul Park <byungchul@sk.com> writes:

> Hi everyone,
>
> While I'm working with a tiered memory system e.g. CXL memory, I have
> been facing migration overhead esp. tlb shootdown on promotion or
> demotion between different tiers.  Yeah..  most tlb shootdowns on
> migration through hinting fault can be avoided thanks to Huang Ying's
> work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> is inaccessible").  See the following link for more information:
>
> https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
>
> However, it's only for ones using hinting fault.  I thought it'd be much
> better if we have a general mechanism to reduce all tlb numbers that we
> can ultimately apply to any type of migration.
>
> I'm suggesting a mechanism called MIGRC that stands for 'Migration Read
> Copy', to reduce tlb numbers by deferring tlb flush until the source
> folios at migration actually become used, of course, only if the target
> PTE don't have write permission.
>
> To achieve that:
>
>    1. For the folios that map only to non-writable tlb entries, prevent
>       tlb flush during migration but perform it just before the source
>       folios actually become used out of buddy or pcp.
>
>    2. When any non-writable tlb entry changes to writable e.g. through
>       fault handler, give up migrc mechanism and perform tlb flush
>       required right away.
>
> No matter what type of workload is used for performance evaluation, the
> result would be positive thanks to the unconditional reduction of tlb
> flushes, tlb misses and interrupts.  For the test, I picked up XSBench
> that is widely used for performance analysis on high performance
> computing architectures - https://github.com/ANL-CESAR/XSBench.
>
> The result would depend on memory latency and how often reclaim runs,
> which implies tlb miss overhead and how many times migration happens.
> The slower the memory is and the more reclaim runs, the better migrc
> works so as to obtain the better result.  In my system, the result
> shows:
>
>    1. itlb flushes are reduced over 90%.
>    2. itlb misses are reduced over 30%.
>    3. All the other tlb numbers also get enhanced.
>    4. tlb shootdown interrupts are reduced over 90%.
>    5. The test program runtime is reduced over 5%.
>
> The test envitonment:
>
>    Architecture - x86_64
>    QEMU - kvm enabled, host cpu

The test is run in VM?  Do you have test results in bare metal
environment?

>    Numa - 2 nodes (16 CPUs 1GB, no CPUs 99GB)

The configuration looks quite abnormal.  Have you tested with other
configuration, such 1:4 or 1:8?

>    Linux Kernel - v6.9-rc4, numa balancing tiering on, demotion enabled
>
> < measurement: raw data - tlb and interrupt numbers >
>
>    $ perf stat -a \
>            -e itlb.itlb_flush \
>            -e tlb_flush.dtlb_thread \
>            -e tlb_flush.stlb_any \
>            -e dtlb-load-misses \
>            -e dtlb-store-misses \
>            -e itlb-load-misses \
>            XSBench -t 16 -p 50000000
>
>    $ grep "TLB shootdowns" /proc/interrupts
>
>    BEFORE
>    ------
>    40417078     itlb.itlb_flush
>    234852566    tlb_flush.dtlb_thread
>    153192357    tlb_flush.stlb_any
>    119001107892 dTLB-load-misses
>    307921167    dTLB-store-misses
>    1355272118   iTLB-load-misses
>
>    TLB: 1364803    1303670    1333921    1349607
>         1356934    1354216    1332972    1342842
>         1350265    1316443    1355928    1360793
>         1298239    1326358    1343006    1340971
>         TLB shootdowns
>
>    AFTER
>    -----
>    3316495      itlb.itlb_flush
>    138912511    tlb_flush.dtlb_thread
>    115199341    tlb_flush.stlb_any
>    117610390021 dTLB-load-misses
>    198042233    dTLB-store-misses
>    840066984    iTLB-load-misses
>
>    TLB: 117257     119219     117178     115737
>         117967     118948     117508     116079
>         116962     117266     117320     117215
>         105808     103934     115672     117610
>         TLB shootdowns
>
> < measurement: user experience - runtime >
>
>    $ time XSBench -t 16 -p 50000000
>
>    BEFORE
>    ------
>    Threads:     16
>    Runtime:     968.783 seconds
>    Lookups:     1,700,000,000
>    Lookups/s:   1,754,778
>
>    15208.91s user 141.44s system 1564% cpu 16:20.98 total
>
>    AFTER
>    -----
>    Threads:     16
>    Runtime:     913.210 seconds
>    Lookups:     1,700,000,000
>    Lookups/s:   1,861,565
>
>    14351.69s user 138.23s system 1565% cpu 15:25.47 total

IIUC, the memory footprint will be larger with the patchset.  Do you
have data?

--
Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration
  2024-04-19  6:06 ` Huang, Ying
@ 2024-04-19  6:21   ` Byungchul Park
  2024-05-09  7:42   ` Byungchul Park
  1 sibling, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-04-19  6:21 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On Fri, Apr 19, 2024 at 02:06:30PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > Hi everyone,
> >
> > While I'm working with a tiered memory system e.g. CXL memory, I have
> > been facing migration overhead esp. tlb shootdown on promotion or
> > demotion between different tiers.  Yeah..  most tlb shootdowns on
> > migration through hinting fault can be avoided thanks to Huang Ying's
> > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> > is inaccessible").  See the following link for more information:
> >
> > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> >
> > However, it's only for ones using hinting fault.  I thought it'd be much
> > better if we have a general mechanism to reduce all tlb numbers that we
> > can ultimately apply to any type of migration.
> >
> > I'm suggesting a mechanism called MIGRC that stands for 'Migration Read
> > Copy', to reduce tlb numbers by deferring tlb flush until the source
> > folios at migration actually become used, of course, only if the target
> > PTE don't have write permission.
> >
> > To achieve that:
> >
> >    1. For the folios that map only to non-writable tlb entries, prevent
> >       tlb flush during migration but perform it just before the source
> >       folios actually become used out of buddy or pcp.
> >
> >    2. When any non-writable tlb entry changes to writable e.g. through
> >       fault handler, give up migrc mechanism and perform tlb flush
> >       required right away.
> >
> > No matter what type of workload is used for performance evaluation, the
> > result would be positive thanks to the unconditional reduction of tlb
> > flushes, tlb misses and interrupts.  For the test, I picked up XSBench
> > that is widely used for performance analysis on high performance
> > computing architectures - https://github.com/ANL-CESAR/XSBench.
> >
> > The result would depend on memory latency and how often reclaim runs,
> > which implies tlb miss overhead and how many times migration happens.
> > The slower the memory is and the more reclaim runs, the better migrc
> > works so as to obtain the better result.  In my system, the result
> > shows:
> >
> >    1. itlb flushes are reduced over 90%.
> >    2. itlb misses are reduced over 30%.
> >    3. All the other tlb numbers also get enhanced.
> >    4. tlb shootdown interrupts are reduced over 90%.
> >    5. The test program runtime is reduced over 5%.
> >
> > The test envitonment:
> >
> >    Architecture - x86_64
> >    QEMU - kvm enabled, host cpu
> 
> The test is run in VM?  Do you have test results in bare metal
> environment?

I will test in a bare metal environment and share the result.

> >    Numa - 2 nodes (16 CPUs 1GB, no CPUs 99GB)
> 
> The configuration looks quite abnormal.  Have you tested with other
> configuration, such 1:4 or 1:8?

Okay I will test with the configurations.

> >    Linux Kernel - v6.9-rc4, numa balancing tiering on, demotion enabled
> >
> > < measurement: raw data - tlb and interrupt numbers >
> >
> >    $ perf stat -a \
> >            -e itlb.itlb_flush \
> >            -e tlb_flush.dtlb_thread \
> >            -e tlb_flush.stlb_any \
> >            -e dtlb-load-misses \
> >            -e dtlb-store-misses \
> >            -e itlb-load-misses \
> >            XSBench -t 16 -p 50000000
> >
> >    $ grep "TLB shootdowns" /proc/interrupts
> >
> >    BEFORE
> >    ------
> >    40417078     itlb.itlb_flush
> >    234852566    tlb_flush.dtlb_thread
> >    153192357    tlb_flush.stlb_any
> >    119001107892 dTLB-load-misses
> >    307921167    dTLB-store-misses
> >    1355272118   iTLB-load-misses
> >
> >    TLB: 1364803    1303670    1333921    1349607
> >         1356934    1354216    1332972    1342842
> >         1350265    1316443    1355928    1360793
> >         1298239    1326358    1343006    1340971
> >         TLB shootdowns
> >
> >    AFTER
> >    -----
> >    3316495      itlb.itlb_flush
> >    138912511    tlb_flush.dtlb_thread
> >    115199341    tlb_flush.stlb_any
> >    117610390021 dTLB-load-misses
> >    198042233    dTLB-store-misses
> >    840066984    iTLB-load-misses
> >
> >    TLB: 117257     119219     117178     115737
> >         117967     118948     117508     116079
> >         116962     117266     117320     117215
> >         105808     103934     115672     117610
> >         TLB shootdowns
> >
> > < measurement: user experience - runtime >
> >
> >    $ time XSBench -t 16 -p 50000000
> >
> >    BEFORE
> >    ------
> >    Threads:     16
> >    Runtime:     968.783 seconds
> >    Lookups:     1,700,000,000
> >    Lookups/s:   1,754,778
> >
> >    15208.91s user 141.44s system 1564% cpu 16:20.98 total
> >
> >    AFTER
> >    -----
> >    Threads:     16
> >    Runtime:     913.210 seconds
> >    Lookups:     1,700,000,000
> >    Lookups/s:   1,861,565
> >
> >    14351.69s user 138.23s system 1565% cpu 15:25.47 total
> 
> IIUC, the memory footprint will be larger with the patchset.  Do you
> have data?

No.  The footprint is, I expect, the same as vanilla with this patchset.
I will share the data.

Last time, you pointed out that the footprint seemed to be larger with
the previous patchset because it worked by deferring the freeing of
folios anyway.

That made me rework it so as to avoid tweaking the original behavior of
mm.  Instead, the current version of migrc lets folios go exactly as
they do with vanilla until the interesting folios exit from pcp or
buddy, and performs the tlb flush there if needed.
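
A condensed sketch of that timing, with the names from this series and
the bodies simplified for illustration:

	/* free path: the unmapped source folio records migrc's mgen */
	free_unref_page(&folio->page, 0, mgen);

	/* alloc path: just before the page leaves pcp or buddy again */
	update_task_mgen(page_private(page));	/* take over the page's mgen */
	check_flush_task_mgen();		/* tlb flush only if still pending */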

	Byungchul

> --
> Best Regards,
> Huang, Ying

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration
  2024-04-19  6:06 ` Huang, Ying
  2024-04-19  6:21   ` Byungchul Park
@ 2024-05-09  7:42   ` Byungchul Park
  1 sibling, 0 replies; 14+ messages in thread
From: Byungchul Park @ 2024-05-09  7:42 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On Fri, Apr 19, 2024 at 02:06:30PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > The test envitonment:
> >
> >    Architecture - x86_64
> >    QEMU - kvm enabled, host cpu
> 
> The test is run in VM?  Do you have test results in bare metal
> environment?

I tested it on a bare metal server.  See the results below.

> >    Numa - 2 nodes (16 CPUs 1GB, no CPUs 99GB)
> 
> The configuration looks quite abnormal.  Have you tested with other
> configuration, such 1:4 or 1:8?

I tested with DRAM : CXL expander = 42GB : 98GB.

> >    Linux Kernel - v6.9-rc4, numa balancing tiering on, demotion enabled
> >
> > < measurement: raw data - tlb and interrupt numbers >
> >
> >    $ perf stat -a \
> >            -e itlb.itlb_flush \
> >            -e tlb_flush.dtlb_thread \
> >            -e tlb_flush.stlb_any \
> >            -e dtlb-load-misses \
> >            -e dtlb-store-misses \
> >            -e itlb-load-misses \
> >            XSBench -t 16 -p 50000000
> >
> >    $ grep "TLB shootdowns" /proc/interrupts
> >
> >    BEFORE
> >    ------
> >    40417078     itlb.itlb_flush
> >    234852566    tlb_flush.dtlb_thread
> >    153192357    tlb_flush.stlb_any
> >    119001107892 dTLB-load-misses
> >    307921167    dTLB-store-misses
> >    1355272118   iTLB-load-misses
> >
> >    TLB: 1364803    1303670    1333921    1349607
> >         1356934    1354216    1332972    1342842
> >         1350265    1316443    1355928    1360793
> >         1298239    1326358    1343006    1340971
> >         TLB shootdowns
> >
> >    AFTER
> >    -----
> >    3316495      itlb.itlb_flush
> >    138912511    tlb_flush.dtlb_thread
> >    115199341    tlb_flush.stlb_any
> >    117610390021 dTLB-load-misses
> >    198042233    dTLB-store-misses
> >    840066984    iTLB-load-misses
> >
> >    TLB: 117257     119219     117178     115737
> >         117967     118948     117508     116079
> >         116962     117266     117320     117215
> >         105808     103934     115672     117610
> >         TLB shootdowns
> >
> > < measurement: user experience - runtime >
> >
> >    $ time XSBench -t 16 -p 50000000
> >
> >    BEFORE
> >    ------
> >    Threads:     16
> >    Runtime:     968.783 seconds
> >    Lookups:     1,700,000,000
> >    Lookups/s:   1,754,778
> >
> >    15208.91s user 141.44s system 1564% cpu 16:20.98 total
> >
> >    AFTER
> >    -----
> >    Threads:     16
> >    Runtime:     913.210 seconds
> >    Lookups:     1,700,000,000
> >    Lookups/s:   1,861,565
> >
> >    14351.69s user 138.23s system 1565% cpu 15:25.47 total
> 
> IIUC, the memory footprint will be larger with the patchset.  Do you
> have data?

As I already told you, from version 9 on, the footprint is exactly the
same between the patched kernel and the vanilla kernel because migrc
lets folios go as is and only controls the timing of the TLB flush.

There are two things to note.

1. I changed the patchset and will post the next version shortly:

	BEFORE - Defer the TLB flush required until the interesting folios
	         exit either pcp or buddy.  The interesting folios are
	         source folios unmapped during folio migration.

	AFTER  - Defer the TLB flush required until the interesting folios
	         exit either pcp or buddy.  The interesting folios are
	         source folios unmapped during folio migration, *plus
	         folios unmapped while reclaiming folios in
	         shrink_folio_list()*.

2. I changed the workload for testing because XSBench doesn't struggle
   against lack of memory on such a big server.  Instead, I picked a
   very real workload, the LLM inference engine llama.cpp.

I tested with the two changes.  The test results are as follows:

---

   Kernel version: mm-unstable around v6.9-rc4
   Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
   CPU: 1 socket 64 core with hyper thread on
   Numa: 2 nodes (64 CPUs DRAM 42GB, no CPUs CXL(expander) 98GB)
   Config: swap off, numa balancing tiering on, demotion enabled

   1 set of test workload:

      echo 3 > /proc/sys/vm/drop_caches
      llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
      llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
      llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
      wait
   
   where -t: nr of threads, -s: seed used to make the runtime stable,
   -n: nr of tokens determining the runtime, -p: prompt to ask, -m: LLM
   model to use.

   Run this set 10 times successively.  So I got 30 total runtimes since
   each inference prints its runtime at the end of each run.  The results
   are as follows:

   BEFORE
   ------
   llama_print_timings:       total time = 1002461.95 ms /    24 tokens
   llama_print_timings:       total time = 1044978.38 ms /    24 tokens
   llama_print_timings:       total time = 1000653.09 ms /    24 tokens
   llama_print_timings:       total time = 1047104.80 ms /    24 tokens
   llama_print_timings:       total time = 1069430.36 ms /    24 tokens
   llama_print_timings:       total time = 1068201.16 ms /    24 tokens
   llama_print_timings:       total time = 1078092.59 ms /    24 tokens
   llama_print_timings:       total time = 1073200.45 ms /    24 tokens
   llama_print_timings:       total time = 1067136.00 ms /    24 tokens
   llama_print_timings:       total time = 1076442.56 ms /    24 tokens
   llama_print_timings:       total time = 1004142.64 ms /    24 tokens
   llama_print_timings:       total time = 1042942.65 ms /    24 tokens
   llama_print_timings:       total time =  999933.76 ms /    24 tokens
   llama_print_timings:       total time = 1046548.83 ms /    24 tokens
   llama_print_timings:       total time = 1068671.48 ms /    24 tokens
   llama_print_timings:       total time = 1068285.76 ms /    24 tokens
   llama_print_timings:       total time = 1077789.63 ms /    24 tokens
   llama_print_timings:       total time = 1071558.93 ms /    24 tokens
   llama_print_timings:       total time = 1066181.55 ms /    24 tokens
   llama_print_timings:       total time = 1076767.53 ms /    24 tokens
   llama_print_timings:       total time = 1004065.63 ms /    24 tokens
   llama_print_timings:       total time = 1044522.13 ms /    24 tokens
   llama_print_timings:       total time =  999725.33 ms /    24 tokens
   llama_print_timings:       total time = 1047510.77 ms /    24 tokens
   llama_print_timings:       total time = 1068010.27 ms /    24 tokens
   llama_print_timings:       total time = 1068999.31 ms /    24 tokens
   llama_print_timings:       total time = 1077648.05 ms /    24 tokens
   llama_print_timings:       total time = 1071378.96 ms /    24 tokens
   llama_print_timings:       total time = 1066326.32 ms /    24 tokens
   llama_print_timings:       total time = 1077088.92 ms /    24 tokens

   AFTER
   -----
   llama_print_timings:       total time =  988522.03 ms /    24 tokens
   llama_print_timings:       total time =  997204.52 ms /    24 tokens
   llama_print_timings:       total time =  996605.86 ms /    24 tokens
   llama_print_timings:       total time =  991985.50 ms /    24 tokens
   llama_print_timings:       total time = 1035143.31 ms /    24 tokens
   llama_print_timings:       total time =  993660.18 ms /    24 tokens
   llama_print_timings:       total time =  983082.14 ms /    24 tokens
   llama_print_timings:       total time =  990431.36 ms /    24 tokens
   llama_print_timings:       total time =  992707.09 ms /    24 tokens
   llama_print_timings:       total time =  992673.27 ms /    24 tokens
   llama_print_timings:       total time =  989285.43 ms /    24 tokens
   llama_print_timings:       total time =  996710.06 ms /    24 tokens
   llama_print_timings:       total time =  996534.64 ms /    24 tokens
   llama_print_timings:       total time =  991344.17 ms /    24 tokens
   llama_print_timings:       total time = 1035210.84 ms /    24 tokens
   llama_print_timings:       total time =  994714.13 ms /    24 tokens
   llama_print_timings:       total time =  984184.15 ms /    24 tokens
   llama_print_timings:       total time =  990909.45 ms /    24 tokens
   llama_print_timings:       total time =  991881.48 ms /    24 tokens
   llama_print_timings:       total time =  993918.03 ms /    24 tokens
   llama_print_timings:       total time =  990061.34 ms /    24 tokens
   llama_print_timings:       total time =  998076.69 ms /    24 tokens
   llama_print_timings:       total time =  997082.59 ms /    24 tokens
   llama_print_timings:       total time =  990677.58 ms /    24 tokens
   llama_print_timings:       total time = 1036054.94 ms /    24 tokens
   llama_print_timings:       total time =  994125.93 ms /    24 tokens
   llama_print_timings:       total time =  982467.01 ms /    24 tokens
   llama_print_timings:       total time =  990191.60 ms /    24 tokens
   llama_print_timings:       total time =  993319.24 ms /    24 tokens
   llama_print_timings:       total time =  992540.57 ms /    24 tokens
   
   The difference in TLB shootdowns (/proc/interrupts) is as follows:

   BEFORE
   ------
   TLB:
   125553646  141418810  161932620  176853972  186655697  190399283
   192143823  196414038  192872439  193313658  193395617  192521416
   190788161  195067598  198016061  193607347  194293972  190786732
   191545637  194856822  191801931  189634535  190399803  196365922
   195268398  190115840  188050050  193194908  195317617  190820190
   190164820  185556071  226797214  229592631  216112464  209909495
   205575979  205950252  204948111  197999795  198892232  205287952
   199344631  195015158  195869844  198858745  195692876  200961904
   203463252  205921722  199850838  206145986  199613202  199961345
   200129577  203020521  207873649  203697671  197093386  204243803
   205993323  200934664  204193128  194435376  TLB shootdowns                                                  

   AFTER
   -----
   TLB:
   5648092    6610142    7032849    7882308    8088518    8352310
   8656536    8705136    8647426    8905583    8985408    8704522
   8884344    9026261    8929974    8869066    8877575    8810096
   8770984    8754503    8801694    8865925    8787524    8656432
   8755912    8682034    8773935    8832925    8797997    8515777
   8481240    8891258   10595243   10285973    9756935    9573681
   9398968    9069244    9242984    8899009    9310690    9029095
   9069758    9105825    9092703    9270202    9460287    9258546
   9180415    9232723    9270611    9175020    9490420    9360316
   9420818    9057663    9525631    9310152    9152242    8654483
   9181804    9050847    8919916    8883856    TLB shootdowns                                                  

   The difference in 'perf stat' tlb numbers during one set of tests,
   with the drop_caches step excluded, is as follows:

   BEFORE
   ------
   3163679332	dTLB-load-misses     
   2017751856	dTLB-store-misses    
   327092903	iTLB-load-misses     
   1357543886	tlb:tlb_flush        

   AFTER
   -----
   2394694609	dTLB-load-misses     
   861144167	dTLB-store-misses    
   64055579	iTLB-load-misses     
   69175002	tlb:tlb_flush        

---

I'm happy to share these results.  I used a real workload that is very
popular these days and got good results.  I will post the next version
of the patchset shortly, after organizing and refining things.

	Byungchul

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2024-05-09  7:42 UTC | newest]

Thread overview: 14+ messages
2024-04-18  6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 1/8] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 2/8] arm64: tlbflush: " Byungchul Park
2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 3/8] mm/rmap: recognize read-only tlb entries during batched tlb flush Byungchul Park
2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 4/8] x86/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 5/8] mm: separate move/undo parts from migrate_pages_batch() Byungchul Park
2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 6/8] mm: buddy: make room for a new variable, mgen, in struct page Byungchul Park
2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 7/8] mm: add folio_put_mgen() to deliver migrc's generation number to pcp or buddy Byungchul Park
2024-04-18  6:15 ` [PATCH v9 rebase on mm-unstable 8/8] mm: defer tlb flush until the source folios at migration actually get used Byungchul Park
2024-04-18 20:17 ` [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Andrew Morton
2024-04-19  6:02   ` Byungchul Park
2024-04-19  6:06 ` Huang, Ying
2024-04-19  6:21   ` Byungchul Park
2024-05-09  7:42   ` Byungchul Park
