* [RESEND PATCH v9 0/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration
@ 2023-05-18  6:59 Yicong Yang
  2023-05-18  6:59 ` [RESEND PATCH v9 1/2] mm/tlbbatch: Introduce arch_tlbbatch_should_defer() Yicong Yang
  2023-05-18  6:59 ` [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration Yicong Yang
  0 siblings, 2 replies; 8+ messages in thread
From: Yicong Yang @ 2023-05-18  6:59 UTC (permalink / raw)
  To: akpm, linux-mm, linux-arm-kernel, x86, catalin.marinas,
	mark.rutland, ryan.roberts, will, anshuman.khandual, linux-doc
  Cc: corbet, peterz, arnd, punit.agrawal, linux-kernel, darren,
	yangyicong, huzhanyuan, lipeifeng, zhangshiming, guojian,
	realmz6, linux-mips, openrisc, linuxppc-dev, linux-riscv,
	linux-s390, Barry Song, wangkefeng.wang, xhao, prime.zeng,
	Jonathan.Cameron

From: Yicong Yang <yangyicong@hisilicon.com>

Though ARM64 has the hardware to do tlb shootdown, the hardware
broadcasting is not free. A simple micro benchmark shows that even on
a snapdragon 888 with only 8 cores, the overhead of ptep_clear_flush
is huge even for paging out one page mapped by only one process:
5.36%  a.out    [kernel.kallsyms]  [k] ptep_clear_flush

When pages are mapped by multiple processes or the HW has more CPUs,
the cost becomes even higher due to the bad scalability of tlb
shootdown.

The same benchmark can result in 16.99% CPU consumption on an ARM64
server with around 100 cores, according to Yicong's test in patch
2/2.

This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH by
1. only sending tlbi instructions in the first stage -
	arch_tlbbatch_add_mm() (renamed to arch_tlbbatch_add_pending()
	in patch 2/2)
2. waiting for the completion of tlbi by dsb while doing tlbbatch
	sync in arch_tlbbatch_flush()
Testing on snapdragon shows the overhead of ptep_clear_flush is
removed by the patchset. The micro benchmark becomes 5% faster even
for one page mapped by a single process on snapdragon 888.

This support also optimizes page migration by more than 50% with
batched TLB flushing [*].

[*] https://lore.kernel.org/linux-mm/20230213123444.155149-1-ying.huang@intel.com/
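
For illustration only, below is a rough sketch of how the two stages fit
together in the reclaim path. It is not part of the patches themselves;
the real logic lives in mm/rmap.c and the arm64 helpers added in patch
2/2, and the surrounding variables (mm, vma, address, pte) are just
placeholders:

	/* stage 1: for each page being unmapped, queue the invalidation */
	if (arch_tlbbatch_should_defer(mm))
		/* arm64: nosync "tlbi vale1is"; x86: record mm's CPUs */
		arch_tlbbatch_add_pending(&current->tlb_ubc.arch, mm, address);
	else
		ptep_clear_flush(vma, address, pte);	/* flush immediately */

	/* ... after the whole batch has been unmapped ... */

	/* stage 2: wait for all pending invalidations once */
	/* arm64: a single "dsb ish"; x86: one IPI broadcast */
	arch_tlbbatch_flush(&current->tlb_ubc.arch);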

-v9:
1. Use a runtime tunable to control batched TLB flush, per Catalin's comment on v7.
   Sorry for missing this in v8.
Link: https://lore.kernel.org/all/20230329035512.57392-1-yangyicong@huawei.com/

-v8:
1. Rebase on 6.3-rc4
2. Tested the optimization on page migration and mentioned it in the commit
3. Thanks to Anshuman for the review.
Link: https://lore.kernel.org/linux-mm/20221117082648.47526-1-yangyicong@huawei.com/

-v7:
1. rename arch_tlbbatch_add_mm() to arch_tlbbatch_add_pending() as suggested, since it
   takes an extra address for arm64, per Nadav and Anshuman. Also mentioned in the commit.
2. add tags from Xin Hao, thanks.
Link: https://lore.kernel.org/lkml/20221115031425.44640-1-yangyicong@huawei.com/

-v6:
1. comment we don't defer TLB flush on platforms affected by ARM64_WORKAROUND_REPEAT_TLBI
2. use cpus_have_const_cap() instead of this_cpu_has_cap()
3. add tags from Punit, Thanks.
4. enable the feature by default when cpus >= 8 rather than > 8, since the original
   improvement was observed on snapdragon 888 with 8 cores.
Link: https://lore.kernel.org/lkml/20221028081255.19157-1-yangyicong@huawei.com/

-v5:
1. Make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend on EXPERT for this stage on arm64.
2. Add a threshold of CPU numbers for enabling batched TLB flush on arm64
Link: https://lore.kernel.org/linux-arm-kernel/20220921084302.43631-1-yangyicong@huawei.com/T/

-v4:
1. Add tags from Kefeng and Anshuman, Thanks.
2. Limit the TLB batch/defer on systems with >4 CPUs, per Anshuman
3. Merge previous Patch 1,2-3 into one, per Anshuman
Link: https://lore.kernel.org/linux-mm/20220822082120.8347-1-yangyicong@huawei.com/

-v3:
1. Declare arch's tlbbatch defer support by arch_tlbbatch_should_defer() instead
   of ARCH_HAS_MM_CPUMASK, per Barry and Kefeng
2. Add Tested-by from Xin Hao
Link: https://lore.kernel.org/linux-mm/20220711034615.482895-1-21cnbao@gmail.com/

-v2:
1. Collected Yicong's test result on kunpeng920 ARM64 server;
2. Removed the redundant vma parameter in arch_tlbbatch_add_mm()
   according to the comments of Peter Zijlstra and Dave Hansen
3. Added ARCH_HAS_MM_CPUMASK rather than checking if mm_cpumask
   is empty according to the comments of Nadav Amit

Thanks to Peter, Dave and Nadav for your testing, reviewing and
comments.

-v1:
https://lore.kernel.org/lkml/20220707125242.425242-1-21cnbao@gmail.com/

Anshuman Khandual (1):
  mm/tlbbatch: Introduce arch_tlbbatch_should_defer()

Barry Song (1):
  arm64: support batched/deferred tlb shootdown during page
    reclamation/migration

 .../features/vm/TLB/arch-support.txt          |  2 +-
 arch/arm64/Kconfig                            |  1 +
 arch/arm64/include/asm/tlbbatch.h             | 12 ++++
 arch/arm64/include/asm/tlbflush.h             | 33 ++++++++-
 arch/arm64/mm/flush.c                         | 69 +++++++++++++++++++
 arch/x86/include/asm/tlbflush.h               | 17 ++++-
 include/linux/mm_types_task.h                 |  4 +-
 mm/rmap.c                                     | 21 +++---
 8 files changed, 139 insertions(+), 20 deletions(-)
 create mode 100644 arch/arm64/include/asm/tlbbatch.h

-- 
2.24.0




* [RESEND PATCH v9 1/2] mm/tlbbatch: Introduce arch_tlbbatch_should_defer()
  2023-05-18  6:59 [RESEND PATCH v9 0/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration Yicong Yang
@ 2023-05-18  6:59 ` Yicong Yang
  2023-05-18  6:59 ` [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration Yicong Yang
  1 sibling, 0 replies; 8+ messages in thread
From: Yicong Yang @ 2023-05-18  6:59 UTC (permalink / raw)
  To: akpm, linux-mm, linux-arm-kernel, x86, catalin.marinas,
	mark.rutland, ryan.roberts, will, anshuman.khandual, linux-doc
  Cc: corbet, peterz, arnd, punit.agrawal, linux-kernel, darren,
	yangyicong, huzhanyuan, lipeifeng, zhangshiming, guojian,
	realmz6, linux-mips, openrisc, linuxppc-dev, linux-riscv,
	linux-s390, Barry Song, wangkefeng.wang, xhao, prime.zeng,
	Jonathan.Cameron, Anshuman Khandual, Barry Song

From: Anshuman Khandual <khandual@linux.vnet.ibm.com>

The entire scheme of deferred TLB flush in the reclaim path rests on
the fact that the cost to refill TLB entries is less than flushing out
individual entries by sending IPIs to remote CPUs. But architectures
can have different ways to evaluate that. Hence apart from checking
TTU_BATCH_FLUSH in the TTU flags, the rest of the decision should be
architecture specific.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
[https://lore.kernel.org/linuxppc-dev/20171101101735.2318-2-khandual@linux.vnet.ibm.com/]
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
[Rebase and fix incorrect return value type]
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
Tested-by: Punit Agrawal <punit.agrawal@bytedance.com>
---
 arch/x86/include/asm/tlbflush.h | 12 ++++++++++++
 mm/rmap.c                       |  9 +--------
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 75bfaa421030..46bdff73217c 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -260,6 +260,18 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
 }
 
+static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
+{
+	bool should_defer = false;
+
+	/* If remote CPUs need to be flushed then defer batch the flush */
+	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
+		should_defer = true;
+	put_cpu();
+
+	return should_defer;
+}
+
 static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 {
 	/*
diff --git a/mm/rmap.c b/mm/rmap.c
index 19392e090bec..b45f95ab0c04 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -688,17 +688,10 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
  */
 static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 {
-	bool should_defer = false;
-
 	if (!(flags & TTU_BATCH_FLUSH))
 		return false;
 
-	/* If remote CPUs need to be flushed then defer batch the flush */
-	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
-		should_defer = true;
-	put_cpu();
-
-	return should_defer;
+	return arch_tlbbatch_should_defer(mm);
 }
 
 /*
-- 
2.24.0




* [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration
  2023-05-18  6:59 [RESEND PATCH v9 0/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration Yicong Yang
  2023-05-18  6:59 ` [RESEND PATCH v9 1/2] mm/tlbbatch: Introduce arch_tlbbatch_should_defer() Yicong Yang
@ 2023-05-18  6:59 ` Yicong Yang
  2023-06-29 16:31   ` Catalin Marinas
  1 sibling, 1 reply; 8+ messages in thread
From: Yicong Yang @ 2023-05-18  6:59 UTC (permalink / raw)
  To: akpm, linux-mm, linux-arm-kernel, x86, catalin.marinas,
	mark.rutland, ryan.roberts, will, anshuman.khandual, linux-doc
  Cc: corbet, peterz, arnd, punit.agrawal, linux-kernel, darren,
	yangyicong, huzhanyuan, lipeifeng, zhangshiming, guojian,
	realmz6, linux-mips, openrisc, linuxppc-dev, linux-riscv,
	linux-s390, Barry Song, wangkefeng.wang, xhao, prime.zeng,
	Jonathan.Cameron, Barry Song, Nadav Amit, Mel Gorman

From: Barry Song <v-songbaohua@oppo.com>

On x86, batched and deferred tlb shootdown has led to a 90%
performance increase on tlb shootdown. On arm64, HW can do
tlb shootdown without software IPI. But sync tlbi is still
quite expensive.

Even running a simple program which requires swapout can
prove this is true:
 #include <sys/types.h>
 #include <unistd.h>
 #include <sys/mman.h>
 #include <string.h>

 int main()
 {
 #define SIZE (1 * 1024 * 1024)
         volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);

         memset(p, 0x88, SIZE);

         for (int k = 0; k < 10000; k++) {
                 /* swap in */
                 for (int i = 0; i < SIZE; i += 4096) {
                         (void)p[i];
                 }

                 /* swap out */
                 madvise(p, SIZE, MADV_PAGEOUT);
         }
 }

Perf result on snapdragon 888 with 8 cores, using zRAM
as the swap block device:

 ~ # perf record taskset -c 4 ./a.out
 [ perf record: Woken up 10 times to write data ]
 [ perf record: Captured and wrote 2.297 MB perf.data (60084 samples) ]
 ~ # perf report
 # To display the perf.data header info, please use --header/--header-only options.
 # To display the perf.data header info, please use --header/--header-only options.
 #
 #
 # Total Lost Samples: 0
 #
 # Samples: 60K of event 'cycles'
 # Event count (approx.): 35706225414
 #
 # Overhead  Command  Shared Object      Symbol
 # ........  .......  .................  .............................................................................
 #
    21.07%  a.out    [kernel.kallsyms]  [k] _raw_spin_unlock_irq
     8.23%  a.out    [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
     6.67%  a.out    [kernel.kallsyms]  [k] filemap_map_pages
     6.16%  a.out    [kernel.kallsyms]  [k] __zram_bvec_write
     5.36%  a.out    [kernel.kallsyms]  [k] ptep_clear_flush
     3.71%  a.out    [kernel.kallsyms]  [k] _raw_spin_lock
     3.49%  a.out    [kernel.kallsyms]  [k] memset64
     1.63%  a.out    [kernel.kallsyms]  [k] clear_page
     1.42%  a.out    [kernel.kallsyms]  [k] _raw_spin_unlock
     1.26%  a.out    [kernel.kallsyms]  [k] mod_zone_state.llvm.8525150236079521930
     1.23%  a.out    [kernel.kallsyms]  [k] xas_load
     1.15%  a.out    [kernel.kallsyms]  [k] zram_slot_lock

ptep_clear_flush() takes 5.36% CPU in the micro-benchmark
swapping in/out a page mapped by only one process. If the
page is mapped by multiple processes, typically more than
100 on a phone, the overhead would be much higher as we
have to run the tlb flush 100 times for one single page.
Plus, the tlb flush overhead will increase with the number
of CPU cores due to the bad scalability of tlb shootdown
in HW, so ARM64 servers should expect much higher overhead.

Further perf annotate shows 95% of the cpu time of ptep_clear_flush
is actually used by the final dsb() to wait for the completion
of the tlb flush. This provides us a very good chance to leverage
the existing batched tlb flush in the kernel. The minimum modification
is that we only send the async tlbi in the first stage and issue
the dsb when we have to sync in the second stage.
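
To make that concrete, the per-page synchronous path on arm64 boils
down to roughly the following. This is a paraphrase of flush_tlb_page()
/ __flush_tlb_page_nosync() as touched by this patch, for illustration
only, not verbatim kernel code:

	dsb(ishst);				/* order the PTE update first */
	addr = __TLBI_VADDR(uaddr, ASID(mm));
	__tlbi(vale1is, addr);			/* broadcast the invalidation */
	__tlbi_user(vale1is, addr);
	dsb(ish);				/* wait for completion: the ~95% */

With this patch the invalidations are still issued per page via
arch_tlbbatch_add_pending(), but the final dsb(ish) is issued only once
per batch in arch_tlbbatch_flush().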

With the above simple micro benchmark, the elapsed time to
finish the program decreases by around 5%.

Typical elapsed time w/o patch:
 ~ # time taskset -c 4 ./a.out
 0.21user 14.34system 0:14.69elapsed
w/ patch:
 ~ # time taskset -c 4 ./a.out
 0.22user 13.45system 0:13.80elapsed

Also, Yicong Yang added the following observation.
	Tested with the benchmark in the commit on a Kunpeng920 arm64
	server, observed an improvement of around 12.5% with the
	command `time ./swap_bench`.
		w/o		w/
	real	0m13.460s	0m11.771s
	user	0m0.248s	0m0.279s
	sys	0m12.039s	0m11.458s

	Originally a 16.99% overhead of ptep_clear_flush() was noticed,
	which has been eliminated by this patch:

	[root@localhost yang]# perf record -- ./swap_bench && perf report
	[...]
	16.99%  swap_bench  [kernel.kallsyms]  [k] ptep_clear_flush

This is tested on 4-, 8- and 128-CPU platforms and shows to be
beneficial on large systems, but it may not bring an improvement on
small systems such as a 4-CPU platform. So make
ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend on CONFIG_EXPERT for this
stage and add a runtime tunable (/proc/sys/vm/batched_tlb_enabled)
to allow disabling it according to the scenario.

This patch also improves the performance of page migration. Using
pmbench and migrating the pages of pmbench between node 0 and node 1
for 20 times, this patch decreases the time used by more than 50% and
saves the time used by ptep_clear_flush().

This patch extends arch_tlbbatch_add_mm() to take the address of the
target page to support the feature on arm64. It is also renamed to
arch_tlbbatch_add_pending() to better match its function, since we
don't need to handle the mm on arm64 and add_mm is no longer a proper
name. add_pending makes sense for both architectures: on x86 we are
pending the TLB flush operations while on arm64 we are pending the
synchronization operations.

Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Nadav Amit <namit@vmware.com>
Cc: Mel Gorman <mgorman@suse.de>
Tested-by: Yicong Yang <yangyicong@hisilicon.com>
Tested-by: Xin Hao <xhao@linux.alibaba.com>
Tested-by: Punit Agrawal <punit.agrawal@bytedance.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 .../features/vm/TLB/arch-support.txt          |  2 +-
 arch/arm64/Kconfig                            |  1 +
 arch/arm64/include/asm/tlbbatch.h             | 12 ++++
 arch/arm64/include/asm/tlbflush.h             | 33 ++++++++-
 arch/arm64/mm/flush.c                         | 69 +++++++++++++++++++
 arch/x86/include/asm/tlbflush.h               |  5 +-
 include/linux/mm_types_task.h                 |  4 +-
 mm/rmap.c                                     | 12 ++--
 8 files changed, 126 insertions(+), 12 deletions(-)
 create mode 100644 arch/arm64/include/asm/tlbbatch.h

diff --git a/Documentation/features/vm/TLB/arch-support.txt b/Documentation/features/vm/TLB/arch-support.txt
index 7f049c251a79..76208db88f3b 100644
--- a/Documentation/features/vm/TLB/arch-support.txt
+++ b/Documentation/features/vm/TLB/arch-support.txt
@@ -9,7 +9,7 @@
     |       alpha: | TODO |
     |         arc: | TODO |
     |         arm: | TODO |
-    |       arm64: | N/A  |
+    |       arm64: |  ok  |
     |        csky: | TODO |
     |     hexagon: | TODO |
     |        ia64: | TODO |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1201d25a8a4..b3fc652dc902 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -96,6 +96,7 @@ config ARM64
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_SUPPORTS_PER_VMA_LOCK
+	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if EXPERT
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
 	select ARCH_WANT_DEFAULT_BPF_JIT
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
diff --git a/arch/arm64/include/asm/tlbbatch.h b/arch/arm64/include/asm/tlbbatch.h
new file mode 100644
index 000000000000..fedb0b87b8db
--- /dev/null
+++ b/arch/arm64/include/asm/tlbbatch.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ARCH_ARM64_TLBBATCH_H
+#define _ARCH_ARM64_TLBBATCH_H
+
+struct arch_tlbflush_unmap_batch {
+	/*
+	 * For arm64, HW can do tlb shootdown, so we don't
+	 * need to record cpumask for sending IPI
+	 */
+};
+
+#endif /* _ARCH_ARM64_TLBBATCH_H */
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25..8041905e26b9 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -254,17 +254,23 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	dsb(ish);
 }
 
-static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
+static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
 					 unsigned long uaddr)
 {
 	unsigned long addr;
 
 	dsb(ishst);
-	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+	addr = __TLBI_VADDR(uaddr, ASID(mm));
 	__tlbi(vale1is, addr);
 	__tlbi_user(vale1is, addr);
 }
 
+static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
+					 unsigned long uaddr)
+{
+	return __flush_tlb_page_nosync(vma->vm_mm, uaddr);
+}
+
 static inline void flush_tlb_page(struct vm_area_struct *vma,
 				  unsigned long uaddr)
 {
@@ -272,6 +278,29 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 	dsb(ish);
 }
 
+#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+extern struct static_key_false batched_tlb_enabled;
+
+static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
+{
+	return static_branch_likely(&batched_tlb_enabled);
+}
+
+static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+					     struct mm_struct *mm,
+					     unsigned long uaddr)
+{
+	__flush_tlb_page_nosync(mm, uaddr);
+}
+
+static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
+{
+	dsb(ish);
+}
+
+#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+
 /*
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 5f9379b3c8c8..84a8e15cda96 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -7,8 +7,10 @@
  */
 
 #include <linux/export.h>
+#include <linux/jump_label.h>
 #include <linux/mm.h>
 #include <linux/pagemap.h>
+#include <linux/sysctl.h>
 
 #include <asm/cacheflush.h>
 #include <asm/cache.h>
@@ -107,3 +109,70 @@ void arch_invalidate_pmem(void *addr, size_t size)
 }
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
+
+#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+DEFINE_STATIC_KEY_FALSE(batched_tlb_enabled);
+
+static bool batched_tlb_flush_supported(void)
+{
+#ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
+	/*
+	 * TLB flush deferral is not required on systems, which are affected with
+	 * ARM64_WORKAROUND_REPEAT_TLBI, as __tlbi()/__tlbi_user() implementation
+	 * will have two consecutive TLBI instructions with a dsb(ish) in between
+	 * defeating the purpose (i.e save overall 'dsb ish' cost).
+	 */
+	if (unlikely(cpus_have_const_cap(ARM64_WORKAROUND_REPEAT_TLBI)))
+		return false;
+#endif
+	return true;
+}
+
+int batched_tlb_enabled_handler(struct ctl_table *table, int write,
+				      void *buffer, size_t *lenp, loff_t *ppos)
+{
+	unsigned int enabled = static_branch_unlikely(&batched_tlb_enabled);
+	struct ctl_table t;
+	int err;
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	t = *table;
+	t.data = &enabled;
+	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
+	if (!err && write) {
+		if (enabled && batched_tlb_flush_supported())
+			static_branch_enable(&batched_tlb_enabled);
+		else
+			static_branch_disable(&batched_tlb_enabled);
+	}
+
+	return err;
+}
+
+static struct ctl_table batched_tlb_sysctls[] = {
+	{
+		.procname	= "batched_tlb_enabled",
+		.data		= NULL,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= batched_tlb_enabled_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+	{}
+};
+
+static int __init batched_tlb_sysctls_init(void)
+{
+	if (batched_tlb_flush_supported())
+		static_branch_enable(&batched_tlb_enabled);
+
+	register_sysctl_init("vm", batched_tlb_sysctls);
+	return 0;
+}
+late_initcall(batched_tlb_sysctls_init);
+
+#endif
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 46bdff73217c..2eb4b69ce38b 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -283,8 +283,9 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 	return atomic64_inc_return(&mm->context.tlb_gen);
 }
 
-static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
-					struct mm_struct *mm)
+static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+					     struct mm_struct *mm,
+					     unsigned long uaddr)
 {
 	inc_mm_tlb_gen(mm);
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index 5414b5c6a103..aa44fff8bb9d 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -52,8 +52,8 @@ struct tlbflush_unmap_batch {
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 	/*
 	 * The arch code makes the following promise: generic code can modify a
-	 * PTE, then call arch_tlbbatch_add_mm() (which internally provides all
-	 * needed barriers), then call arch_tlbbatch_flush(), and the entries
+	 * PTE, then call arch_tlbbatch_add_pending() (which internally provides
+	 * all needed barriers), then call arch_tlbbatch_flush(), and the entries
 	 * will be flushed on all CPUs by the time that arch_tlbbatch_flush()
 	 * returns.
 	 */
diff --git a/mm/rmap.c b/mm/rmap.c
index b45f95ab0c04..9ef497228d45 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -642,7 +642,8 @@ void try_to_unmap_flush_dirty(void)
 #define TLB_FLUSH_BATCH_PENDING_LARGE			\
 	(TLB_FLUSH_BATCH_PENDING_MASK / 2)
 
-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
+				      unsigned long uaddr)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
 	int batch;
@@ -651,7 +652,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
 	if (!pte_accessible(mm, pteval))
 		return;
 
-	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
+	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
 	tlb_ubc->flush_required = true;
 
 	/*
@@ -726,7 +727,8 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
 	}
 }
 #else
-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
+				      unsigned long uaddr)
 {
 }
 
@@ -1577,7 +1579,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-				set_tlb_ubc_flush_pending(mm, pteval);
+				set_tlb_ubc_flush_pending(mm, pteval, address);
 			} else {
 				pteval = ptep_clear_flush(vma, address, pvmw.pte);
 			}
@@ -1958,7 +1960,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-				set_tlb_ubc_flush_pending(mm, pteval);
+				set_tlb_ubc_flush_pending(mm, pteval, address);
 			} else {
 				pteval = ptep_clear_flush(vma, address, pvmw.pte);
 			}
-- 
2.24.0




* Re: [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration
  2023-05-18  6:59 ` [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration Yicong Yang
@ 2023-06-29 16:31   ` Catalin Marinas
  2023-06-29 17:26     ` Catalin Marinas
  0 siblings, 1 reply; 8+ messages in thread
From: Catalin Marinas @ 2023-06-29 16:31 UTC (permalink / raw)
  To: Yicong Yang
  Cc: akpm, linux-mm, linux-arm-kernel, x86, mark.rutland,
	ryan.roberts, will, anshuman.khandual, linux-doc, corbet, peterz,
	arnd, punit.agrawal, linux-kernel, darren, yangyicong,
	huzhanyuan, lipeifeng, zhangshiming, guojian, realmz6,
	linux-mips, openrisc, linuxppc-dev, linux-riscv, linux-s390,
	Barry Song, wangkefeng.wang, xhao, prime.zeng, Jonathan.Cameron,
	Barry Song, Nadav Amit, Mel Gorman

On Thu, May 18, 2023 at 02:59:34PM +0800, Yicong Yang wrote:
> From: Barry Song <v-songbaohua@oppo.com>
> 
> on x86, batched and deferred tlb shootdown has lead to 90%
> performance increase on tlb shootdown. on arm64, HW can do
> tlb shootdown without software IPI. But sync tlbi is still
> quite expensive.
[...]
>  .../features/vm/TLB/arch-support.txt          |  2 +-
>  arch/arm64/Kconfig                            |  1 +
>  arch/arm64/include/asm/tlbbatch.h             | 12 ++++
>  arch/arm64/include/asm/tlbflush.h             | 33 ++++++++-
>  arch/arm64/mm/flush.c                         | 69 +++++++++++++++++++
>  arch/x86/include/asm/tlbflush.h               |  5 +-
>  include/linux/mm_types_task.h                 |  4 +-
>  mm/rmap.c                                     | 12 ++--

First of all, this patch needs to be split in some preparatory patches
introducing/renaming functions with no functional change for x86. Once
done, you can add the arm64-only changes.

Now, on the implementation, I had some comments on v7 but we didn't get
to a conclusion and the thread eventually died:

https://lore.kernel.org/linux-mm/Y7cToj5mWd1ZbMyQ@arm.com/

I know I said a command line argument is better than Kconfig or some
random number of CPUs heuristics but it would be even better if we don't
bother with any, just make this always on. Barry had some comments
around mprotect() being racy and that's why we have
flush_tlb_batched_pending() but I don't think it's needed (or, for
arm64, it can be a DSB since this patch issues the TLBIs but without the
DVM Sync). So we need to clarify this (see Barry's last email on the
above thread) and before attempting new versions of this patchset. With
flush_tlb_batched_pending() removed (or DSB), I have a suspicion such
implementation would be faster on any SoC irrespective of the number of
CPUs.

-- 
Catalin



* Re: [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration
  2023-06-29 16:31   ` Catalin Marinas
@ 2023-06-29 17:26     ` Catalin Marinas
  2023-07-04 14:36       ` Yicong Yang
  0 siblings, 1 reply; 8+ messages in thread
From: Catalin Marinas @ 2023-06-29 17:26 UTC (permalink / raw)
  To: Yicong Yang
  Cc: akpm, linux-mm, linux-arm-kernel, x86, mark.rutland,
	ryan.roberts, will, anshuman.khandual, linux-doc, corbet, peterz,
	arnd, punit.agrawal, linux-kernel, darren, yangyicong,
	huzhanyuan, lipeifeng, zhangshiming, guojian, realmz6,
	linux-mips, openrisc, linuxppc-dev, linux-riscv, linux-s390,
	Barry Song, wangkefeng.wang, xhao, prime.zeng, Jonathan.Cameron,
	Barry Song, Nadav Amit, Mel Gorman

On Thu, Jun 29, 2023 at 05:31:36PM +0100, Catalin Marinas wrote:
> On Thu, May 18, 2023 at 02:59:34PM +0800, Yicong Yang wrote:
> > From: Barry Song <v-songbaohua@oppo.com>
> > 
> > on x86, batched and deferred tlb shootdown has lead to 90%
> > performance increase on tlb shootdown. on arm64, HW can do
> > tlb shootdown without software IPI. But sync tlbi is still
> > quite expensive.
> [...]
> >  .../features/vm/TLB/arch-support.txt          |  2 +-
> >  arch/arm64/Kconfig                            |  1 +
> >  arch/arm64/include/asm/tlbbatch.h             | 12 ++++
> >  arch/arm64/include/asm/tlbflush.h             | 33 ++++++++-
> >  arch/arm64/mm/flush.c                         | 69 +++++++++++++++++++
> >  arch/x86/include/asm/tlbflush.h               |  5 +-
> >  include/linux/mm_types_task.h                 |  4 +-
> >  mm/rmap.c                                     | 12 ++--
> 
> First of all, this patch needs to be split in some preparatory patches
> introducing/renaming functions with no functional change for x86. Once
> done, you can add the arm64-only changes.
> 
> Now, on the implementation, I had some comments on v7 but we didn't get
> to a conclusion and the thread eventually died:
> 
> https://lore.kernel.org/linux-mm/Y7cToj5mWd1ZbMyQ@arm.com/
> 
> I know I said a command line argument is better than Kconfig or some
> random number of CPUs heuristics but it would be even better if we don't
> bother with any, just make this always on. Barry had some comments
> around mprotect() being racy and that's why we have
> flush_tlb_batched_pending() but I don't think it's needed (or, for
> arm64, it can be a DSB since this patch issues the TLBIs but without the
> DVM Sync). So we need to clarify this (see Barry's last email on the
> above thread) and before attempting new versions of this patchset. With
> flush_tlb_batched_pending() removed (or DSB), I have a suspicion such
> implementation would be faster on any SoC irrespective of the number of
> CPUs.

I think I got the need for flush_tlb_batched_pending(). If
try_to_unmap() marks the pte !present and we have a pending TLBI,
change_pte_range() will skip the TLB maintenance altogether since it did
not change the pte. So we could be left with stale TLB entries after
mprotect() before TTU does the batch flushing.

We can have an arch-specific flush_tlb_batched_pending() that can be a
DSB only on arm64 and a full mm flush on x86.
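
For illustration (untested, and the name is just a placeholder), the
arm64 side of that could be as small as:

	static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
	{
		/* complete the nosync TLBIs already issued for this batch */
		dsb(ish);
	}

with the generic code keeping the current flush_tlb_mm(mm) behaviour as
the fallback for x86 and the others.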

-- 
Catalin



* Re: [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration
  2023-06-29 17:26     ` Catalin Marinas
@ 2023-07-04 14:36       ` Yicong Yang
  2023-07-05  8:43         ` Barry Song
  0 siblings, 1 reply; 8+ messages in thread
From: Yicong Yang @ 2023-07-04 14:36 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: yangyicong, akpm, linux-mm, linux-arm-kernel, x86, mark.rutland,
	ryan.roberts, will, anshuman.khandual, linux-doc, corbet, peterz,
	arnd, punit.agrawal, linux-kernel, darren, huzhanyuan, lipeifeng,
	zhangshiming, guojian, realmz6, linux-mips, openrisc,
	linuxppc-dev, linux-riscv, linux-s390, Barry Song,
	wangkefeng.wang, xhao, prime.zeng, Jonathan.Cameron, Barry Song,
	Nadav Amit, Mel Gorman

On 2023/6/30 1:26, Catalin Marinas wrote:
> On Thu, Jun 29, 2023 at 05:31:36PM +0100, Catalin Marinas wrote:
>> On Thu, May 18, 2023 at 02:59:34PM +0800, Yicong Yang wrote:
>>> From: Barry Song <v-songbaohua@oppo.com>
>>>
>>> on x86, batched and deferred tlb shootdown has lead to 90%
>>> performance increase on tlb shootdown. on arm64, HW can do
>>> tlb shootdown without software IPI. But sync tlbi is still
>>> quite expensive.
>> [...]
>>>  .../features/vm/TLB/arch-support.txt          |  2 +-
>>>  arch/arm64/Kconfig                            |  1 +
>>>  arch/arm64/include/asm/tlbbatch.h             | 12 ++++
>>>  arch/arm64/include/asm/tlbflush.h             | 33 ++++++++-
>>>  arch/arm64/mm/flush.c                         | 69 +++++++++++++++++++
>>>  arch/x86/include/asm/tlbflush.h               |  5 +-
>>>  include/linux/mm_types_task.h                 |  4 +-
>>>  mm/rmap.c                                     | 12 ++--
>>
>> First of all, this patch needs to be split in some preparatory patches
>> introducing/renaming functions with no functional change for x86. Once
>> done, you can add the arm64-only changes.
>>

got it. will try to split this patch as suggested.

>> Now, on the implementation, I had some comments on v7 but we didn't get
>> to a conclusion and the thread eventually died:
>>
>> https://lore.kernel.org/linux-mm/Y7cToj5mWd1ZbMyQ@arm.com/
>>
>> I know I said a command line argument is better than Kconfig or some
>> random number of CPUs heuristics but it would be even better if we don't
>> bother with any, just make this always on.

ok, will make this always on.

>> Barry had some comments
>> around mprotect() being racy and that's why we have
>> flush_tlb_batched_pending() but I don't think it's needed (or, for
>> arm64, it can be a DSB since this patch issues the TLBIs but without the
>> DVM Sync). So we need to clarify this (see Barry's last email on the
>> above thread) and before attempting new versions of this patchset. With
>> flush_tlb_batched_pending() removed (or DSB), I have a suspicion such
>> implementation would be faster on any SoC irrespective of the number of
>> CPUs.
> 
> I think I got the need for flush_tlb_batched_pending(). If
> try_to_unmap() marks the pte !present and we have a pending TLBI,
> change_pte_range() will skip the TLB maintenance altogether since it did
> not change the pte. So we could be left with stale TLB entries after
> mprotect() before TTU does the batch flushing.
> 
> We can have an arch-specific flush_tlb_batched_pending() that can be a
> DSB only on arm64 and a full mm flush on x86.
> 

We need to do a flush/dsb in flush_tlb_batched_pending() only in a race
condition, so we first check whether there's a pending batched flush and
if so do the tlb flush. The pending check is common and the difference
among the archs is how to flush the TLB within flush_tlb_batched_pending();
on arm64 it should only be a dsb.

As we only need to maintain the TLBs already pending in the batched flush,
does it make sense to only handle those TLBs in flush_tlb_batched_pending()?
Then we can use arch_tlbbatch_flush() rather than flush_tlb_mm() in
flush_tlb_batched_pending() and no arch specific function is needed.

Thanks.




* Re: [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration
  2023-07-04 14:36       ` Yicong Yang
@ 2023-07-05  8:43         ` Barry Song
  2023-07-05 10:24           ` Yicong Yang
  0 siblings, 1 reply; 8+ messages in thread
From: Barry Song @ 2023-07-05  8:43 UTC (permalink / raw)
  To: Yicong Yang
  Cc: Catalin Marinas, yangyicong, akpm, linux-mm, linux-arm-kernel,
	x86, mark.rutland, ryan.roberts, will, anshuman.khandual,
	linux-doc, corbet, peterz, arnd, punit.agrawal, linux-kernel,
	darren, huzhanyuan, lipeifeng, zhangshiming, guojian, realmz6,
	linux-mips, openrisc, linuxppc-dev, linux-riscv, linux-s390,
	wangkefeng.wang, xhao, prime.zeng, Jonathan.Cameron, Barry Song,
	Nadav Amit, Mel Gorman

On Tue, Jul 4, 2023 at 10:36 PM Yicong Yang <yangyicong@huawei.com> wrote:
>
> On 2023/6/30 1:26, Catalin Marinas wrote:
> > On Thu, Jun 29, 2023 at 05:31:36PM +0100, Catalin Marinas wrote:
> >> On Thu, May 18, 2023 at 02:59:34PM +0800, Yicong Yang wrote:
> >>> From: Barry Song <v-songbaohua@oppo.com>
> >>>
> >>> on x86, batched and deferred tlb shootdown has lead to 90%
> >>> performance increase on tlb shootdown. on arm64, HW can do
> >>> tlb shootdown without software IPI. But sync tlbi is still
> >>> quite expensive.
> >> [...]
> >>>  .../features/vm/TLB/arch-support.txt          |  2 +-
> >>>  arch/arm64/Kconfig                            |  1 +
> >>>  arch/arm64/include/asm/tlbbatch.h             | 12 ++++
> >>>  arch/arm64/include/asm/tlbflush.h             | 33 ++++++++-
> >>>  arch/arm64/mm/flush.c                         | 69 +++++++++++++++++++
> >>>  arch/x86/include/asm/tlbflush.h               |  5 +-
> >>>  include/linux/mm_types_task.h                 |  4 +-
> >>>  mm/rmap.c                                     | 12 ++--
> >>
> >> First of all, this patch needs to be split in some preparatory patches
> >> introducing/renaming functions with no functional change for x86. Once
> >> done, you can add the arm64-only changes.
> >>
>
> got it. will try to split this patch as suggested.
>
> >> Now, on the implementation, I had some comments on v7 but we didn't get
> >> to a conclusion and the thread eventually died:
> >>
> >> https://lore.kernel.org/linux-mm/Y7cToj5mWd1ZbMyQ@arm.com/
> >>
> >> I know I said a command line argument is better than Kconfig or some
> >> random number of CPUs heuristics but it would be even better if we don't
> >> bother with any, just make this always on.
>
> ok, will make this always on.
>
> >> Barry had some comments
> >> around mprotect() being racy and that's why we have
> >> flush_tlb_batched_pending() but I don't think it's needed (or, for
> >> arm64, it can be a DSB since this patch issues the TLBIs but without the
> >> DVM Sync). So we need to clarify this (see Barry's last email on the
> >> above thread) and before attempting new versions of this patchset. With
> >> flush_tlb_batched_pending() removed (or DSB), I have a suspicion such
> >> implementation would be faster on any SoC irrespective of the number of
> >> CPUs.
> >
> > I think I got the need for flush_tlb_batched_pending(). If
> > try_to_unmap() marks the pte !present and we have a pending TLBI,
> > change_pte_range() will skip the TLB maintenance altogether since it did
> > not change the pte. So we could be left with stale TLB entries after
> > mprotect() before TTU does the batch flushing.
> >

Good catch.
This could also be true for MADV_DONTNEED. After try_to_unmap(), we run
MADV_DONTNEED on this area; as the pte is not present, we don't do
anything on this PTE in zap_pte_range() afterwards.

> > We can have an arch-specific flush_tlb_batched_pending() that can be a
> > DSB only on arm64 and a full mm flush on x86.
> >
>
> We need to do a flush/dsb in flush_tlb_batched_pending() only in a race
> condition so we first check whether there's a pended batched flush and
> if so do the tlb flush. The pending checking is common and the differences
> among the archs is how to flush the TLB here within the flush_tlb_batched_pending(),
> on arm64 it should only be a dsb.
>
> As we only needs to maintain the TLBs already pended in batched flush,
> does it make sense to only handle those TLBs in flush_tlb_batched_pending()?
> Then we can use the arch_tlbbatch_flush() rather than flush_tlb_mm() in
> flush_tlb_batched_pending() and no arch specific function needed.

As we have issued no-sync tlbi on those pending addresses, that means
our hardware has already "recorded" what should be flushed in the
specific mm, so a DSB alone will flush them correctly, right?

>
> Thanks.
>

Barry



* Re: [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration
  2023-07-05  8:43         ` Barry Song
@ 2023-07-05 10:24           ` Yicong Yang
  0 siblings, 0 replies; 8+ messages in thread
From: Yicong Yang @ 2023-07-05 10:24 UTC (permalink / raw)
  To: Barry Song
  Cc: yangyicong, Catalin Marinas, akpm, linux-mm, linux-arm-kernel,
	x86, mark.rutland, ryan.roberts, will, anshuman.khandual,
	linux-doc, corbet, peterz, arnd, punit.agrawal, linux-kernel,
	darren, huzhanyuan, lipeifeng, zhangshiming, guojian, realmz6,
	linux-mips, openrisc, linuxppc-dev, linux-riscv, linux-s390,
	wangkefeng.wang, xhao, prime.zeng, Jonathan.Cameron, Barry Song,
	Nadav Amit, Mel Gorman

On 2023/7/5 16:43, Barry Song wrote:
> On Tue, Jul 4, 2023 at 10:36 PM Yicong Yang <yangyicong@huawei.com> wrote:
>>
>> On 2023/6/30 1:26, Catalin Marinas wrote:
>>> On Thu, Jun 29, 2023 at 05:31:36PM +0100, Catalin Marinas wrote:
>>>> On Thu, May 18, 2023 at 02:59:34PM +0800, Yicong Yang wrote:
>>>>> From: Barry Song <v-songbaohua@oppo.com>
>>>>>
>>>>> on x86, batched and deferred tlb shootdown has lead to 90%
>>>>> performance increase on tlb shootdown. on arm64, HW can do
>>>>> tlb shootdown without software IPI. But sync tlbi is still
>>>>> quite expensive.
>>>> [...]
>>>>>  .../features/vm/TLB/arch-support.txt          |  2 +-
>>>>>  arch/arm64/Kconfig                            |  1 +
>>>>>  arch/arm64/include/asm/tlbbatch.h             | 12 ++++
>>>>>  arch/arm64/include/asm/tlbflush.h             | 33 ++++++++-
>>>>>  arch/arm64/mm/flush.c                         | 69 +++++++++++++++++++
>>>>>  arch/x86/include/asm/tlbflush.h               |  5 +-
>>>>>  include/linux/mm_types_task.h                 |  4 +-
>>>>>  mm/rmap.c                                     | 12 ++--
>>>>
>>>> First of all, this patch needs to be split in some preparatory patches
>>>> introducing/renaming functions with no functional change for x86. Once
>>>> done, you can add the arm64-only changes.
>>>>
>>
>> got it. will try to split this patch as suggested.
>>
>>>> Now, on the implementation, I had some comments on v7 but we didn't get
>>>> to a conclusion and the thread eventually died:
>>>>
>>>> https://lore.kernel.org/linux-mm/Y7cToj5mWd1ZbMyQ@arm.com/
>>>>
>>>> I know I said a command line argument is better than Kconfig or some
>>>> random number of CPUs heuristics but it would be even better if we don't
>>>> bother with any, just make this always on.
>>
>> ok, will make this always on.
>>
>>>> Barry had some comments
>>>> around mprotect() being racy and that's why we have
>>>> flush_tlb_batched_pending() but I don't think it's needed (or, for
>>>> arm64, it can be a DSB since this patch issues the TLBIs but without the
>>>> DVM Sync). So we need to clarify this (see Barry's last email on the
>>>> above thread) and before attempting new versions of this patchset. With
>>>> flush_tlb_batched_pending() removed (or DSB), I have a suspicion such
>>>> implementation would be faster on any SoC irrespective of the number of
>>>> CPUs.
>>>
>>> I think I got the need for flush_tlb_batched_pending(). If
>>> try_to_unmap() marks the pte !present and we have a pending TLBI,
>>> change_pte_range() will skip the TLB maintenance altogether since it did
>>> not change the pte. So we could be left with stale TLB entries after
>>> mprotect() before TTU does the batch flushing.
>>>
> 
> Good catch.
> This could be also true for MADV_DONTNEED. after try_to_unmap, we run
> MADV_DONTNEED on this area, as pte is not present, we don't do anything
> on this PTE in zap_pte_range afterwards.
> 
>>> We can have an arch-specific flush_tlb_batched_pending() that can be a
>>> DSB only on arm64 and a full mm flush on x86.
>>>
>>
>> We need to do a flush/dsb in flush_tlb_batched_pending() only in a race
>> condition so we first check whether there's a pended batched flush and
>> if so do the tlb flush. The pending checking is common and the differences
>> among the archs is how to flush the TLB here within the flush_tlb_batched_pending(),
>> on arm64 it should only be a dsb.
>>
>> As we only needs to maintain the TLBs already pended in batched flush,
>> does it make sense to only handle those TLBs in flush_tlb_batched_pending()?
>> Then we can use the arch_tlbbatch_flush() rather than flush_tlb_mm() in
>> flush_tlb_batched_pending() and no arch specific function needed.
> 
> as we have issued no-sync tlbi on those pending addresses , that means
> our hardware
> has already "recorded" what should be flushed in the specific mm. so
> DSB only will flush
> them correctly. right?
> 

Yes, it's right. I just thought of something like below.
arch_tlbbatch_flush() will only be a dsb on arm64 so this will match
what Catalin wants. But as you said this may be incorrect on x86, so
we'd better have an arch specific implementation for
flush_tlb_batched_pending() as suggested.

diff --git a/mm/rmap.c b/mm/rmap.c
index 9699c6011b0e..afa3571503a0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -717,7 +717,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
        int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

        if (pending != flushed) {
-               flush_tlb_mm(mm);
+               arch_tlbbatch_flush(&current->tlb_ubc.arch);
                /*
                 * If the new TLB flushing is pending during flushing, leave
                 * mm->tlb_flush_batched as is, to avoid losing flushing.


