* [PATCH 0/9] x86: Concurrent TLB flushes and other improvements
@ 2019-06-13  6:48 Nadav Amit
  2019-06-13  6:48 ` [PATCH 1/9] smp: Remove smp_call_function() and on_each_cpu() return values Nadav Amit
                   ` (10 more replies)
  0 siblings, 11 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit, Richard Henderson, Ivan Kokshaysky,
	Matt Turner, Tony Luck, Fenghua Yu, Andrew Morton, Rik van Riel,
	Josh Poimboeuf, Paolo Bonzini

Currently, local and remote TLB flushes are not performed concurrently,
which introduces unnecessary overhead: each INVLPG can take hundreds of
cycles. This patch-set allows TLB flushes to run concurrently: first
request the remote CPUs to initiate the flush, then run it locally, and
finally wait for the remote CPUs to finish their work.

In addition, there are various small optimizations to avoid unwarranted
false-sharing and atomic operations.

The proposed changes should also improve the performance of other
invocations of on_each_cpu(). Hopefully, no one relies on the previous
behavior of on_each_cpu(), which invoked the function first remotely and
only then locally [Peter says he remembers someone might do so, but
without further information it is hard to know how to address it].

Running sysbench on dax w/emulated-pmem, write-cache disabled, and
various mitigations (PTI, Spectre, MDS) disabled on Haswell:

 sysbench fileio --file-total-size=3G --file-test-mode=rndwr \
  --file-io-mode=mmap --threads=4 --file-fsync-mode=fdatasync run

			events (avg/stddev)
			-------------------
  5.2-rc3:		1247669.0000/16075.39
  +patchset:		1290607.0000/13617.56 (+3.4%)

Patch 1 performs a small cleanup. Patches 2-5 implement the concurrent
execution of TLB flushes. Patches 6-9 deal with false-sharing and
unnecessary atomic operations. Xen and Hyper-V do not yet make use of
the concurrent TLB flush interface.

There are various additional possible optimizations that were sent or
are in development (async flushes, x2apic shorthands, fewer mm_tlb_gen
accesses, etc.), but based on Andy's feedback, they will be sent later.

RFCv2 -> v1:
* Fix comment on flush_tlb_multi [Juergen]
* Remove async invalidation optimizations [Andy]
* Add KVM support [Paolo]

Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>

Nadav Amit (9):
  smp: Remove smp_call_function() and on_each_cpu() return values
  smp: Run functions concurrently in smp_call_function_many()
  x86/mm/tlb: Refactor common code into flush_tlb_on_cpus()
  x86/mm/tlb: Flush remote and local TLBs concurrently
  x86/mm/tlb: Optimize local TLB flushes
  KVM: x86: Provide paravirtualized flush_tlb_multi()
  smp: Do not mark call_function_data as shared
  x86/tlb: Privatize cpu_tlbstate
  x86/apic: Use non-atomic operations when possible

 arch/alpha/kernel/smp.c               |  19 +---
 arch/alpha/oprofile/common.c          |   6 +-
 arch/ia64/kernel/perfmon.c            |  12 +--
 arch/ia64/kernel/uncached.c           |   8 +-
 arch/x86/hyperv/mmu.c                 |   2 +
 arch/x86/include/asm/paravirt.h       |   8 ++
 arch/x86/include/asm/paravirt_types.h |   6 ++
 arch/x86/include/asm/tlbflush.h       |  46 ++++----
 arch/x86/kernel/apic/apic_flat_64.c   |   4 +-
 arch/x86/kernel/apic/x2apic_cluster.c |   2 +-
 arch/x86/kernel/kvm.c                 |  11 +-
 arch/x86/kernel/paravirt.c            |   3 +
 arch/x86/kernel/smp.c                 |   2 +-
 arch/x86/lib/cache-smp.c              |   3 +-
 arch/x86/mm/init.c                    |   2 +-
 arch/x86/mm/tlb.c                     | 150 ++++++++++++++++++--------
 arch/x86/xen/mmu_pv.c                 |   2 +
 drivers/char/agp/generic.c            |   3 +-
 include/linux/smp.h                   |  32 ++++--
 kernel/smp.c                          | 141 +++++++++++-------------
 kernel/up.c                           |   3 +-
 21 files changed, 272 insertions(+), 193 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 1/9] smp: Remove smp_call_function() and on_each_cpu() return values
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-23 12:32   ` [tip:smp/hotplug] " tip-bot for Nadav Amit
  2019-06-13  6:48 ` [PATCH 2/9] smp: Run functions concurrently in smp_call_function_many() Nadav Amit
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit, Richard Henderson, Ivan Kokshaysky,
	Matt Turner, Tony Luck, Fenghua Yu, Andrew Morton

The return value of these functions is always zero, so it carries no
information. Remove it and amend the callers.

Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/alpha/kernel/smp.c      | 19 +++++--------------
 arch/alpha/oprofile/common.c |  6 +++---
 arch/ia64/kernel/perfmon.c   | 12 ++----------
 arch/ia64/kernel/uncached.c  |  8 ++++----
 arch/x86/lib/cache-smp.c     |  3 ++-
 drivers/char/agp/generic.c   |  3 +--
 include/linux/smp.h          |  7 +++----
 kernel/smp.c                 | 10 +++-------
 kernel/up.c                  |  3 +--
 9 files changed, 24 insertions(+), 47 deletions(-)

diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index d0dccae53ba9..5f90df30be20 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ -614,8 +614,7 @@ void
 smp_imb(void)
 {
 	/* Must wait other processors to flush their icache before continue. */
-	if (on_each_cpu(ipi_imb, NULL, 1))
-		printk(KERN_CRIT "smp_imb: timed out\n");
+	on_each_cpu(ipi_imb, NULL, 1);
 }
 EXPORT_SYMBOL(smp_imb);
 
@@ -630,9 +629,7 @@ flush_tlb_all(void)
 {
 	/* Although we don't have any data to pass, we do want to
 	   synchronize with the other processors.  */
-	if (on_each_cpu(ipi_flush_tlb_all, NULL, 1)) {
-		printk(KERN_CRIT "flush_tlb_all: timed out\n");
-	}
+	on_each_cpu(ipi_flush_tlb_all, NULL, 1);
 }
 
 #define asn_locked() (cpu_data[smp_processor_id()].asn_lock)
@@ -667,9 +664,7 @@ flush_tlb_mm(struct mm_struct *mm)
 		}
 	}
 
-	if (smp_call_function(ipi_flush_tlb_mm, mm, 1)) {
-		printk(KERN_CRIT "flush_tlb_mm: timed out\n");
-	}
+	smp_call_function(ipi_flush_tlb_mm, mm, 1);
 
 	preempt_enable();
 }
@@ -720,9 +715,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 	data.mm = mm;
 	data.addr = addr;
 
-	if (smp_call_function(ipi_flush_tlb_page, &data, 1)) {
-		printk(KERN_CRIT "flush_tlb_page: timed out\n");
-	}
+	smp_call_function(ipi_flush_tlb_page, &data, 1);
 
 	preempt_enable();
 }
@@ -772,9 +765,7 @@ flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
 		}
 	}
 
-	if (smp_call_function(ipi_flush_icache_page, mm, 1)) {
-		printk(KERN_CRIT "flush_icache_page: timed out\n");
-	}
+	smp_call_function(ipi_flush_icache_page, mm, 1);
 
 	preempt_enable();
 }
diff --git a/arch/alpha/oprofile/common.c b/arch/alpha/oprofile/common.c
index 310a4ce1dccc..1b1259c7d7d1 100644
--- a/arch/alpha/oprofile/common.c
+++ b/arch/alpha/oprofile/common.c
@@ -65,7 +65,7 @@ op_axp_setup(void)
 	model->reg_setup(&reg, ctr, &sys);
 
 	/* Configure the registers on all cpus.  */
-	(void)smp_call_function(model->cpu_setup, &reg, 1);
+	smp_call_function(model->cpu_setup, &reg, 1);
 	model->cpu_setup(&reg);
 	return 0;
 }
@@ -86,7 +86,7 @@ op_axp_cpu_start(void *dummy)
 static int
 op_axp_start(void)
 {
-	(void)smp_call_function(op_axp_cpu_start, NULL, 1);
+	smp_call_function(op_axp_cpu_start, NULL, 1);
 	op_axp_cpu_start(NULL);
 	return 0;
 }
@@ -101,7 +101,7 @@ op_axp_cpu_stop(void *dummy)
 static void
 op_axp_stop(void)
 {
-	(void)smp_call_function(op_axp_cpu_stop, NULL, 1);
+	smp_call_function(op_axp_cpu_stop, NULL, 1);
 	op_axp_cpu_stop(NULL);
 }
 
diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index 58a6337c0690..7c52bd2695a2 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -6390,11 +6390,7 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	}
 
 	/* save the current system wide pmu states */
-	ret = on_each_cpu(pfm_alt_save_pmu_state, NULL, 1);
-	if (ret) {
-		DPRINT(("on_each_cpu() failed: %d\n", ret));
-		goto cleanup_reserve;
-	}
+	on_each_cpu(pfm_alt_save_pmu_state, NULL, 1);
 
 	/* officially change to the alternate interrupt handler */
 	pfm_alt_intr_handler = hdl;
@@ -6421,7 +6417,6 @@ int
 pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 {
 	int i;
-	int ret;
 
 	if (hdl == NULL) return -EINVAL;
 
@@ -6435,10 +6430,7 @@ pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 
 	pfm_alt_intr_handler = NULL;
 
-	ret = on_each_cpu(pfm_alt_restore_pmu_state, NULL, 1);
-	if (ret) {
-		DPRINT(("on_each_cpu() failed: %d\n", ret));
-	}
+	on_each_cpu(pfm_alt_restore_pmu_state, NULL, 1);
 
 	for_each_online_cpu(i) {
 		pfm_unreserve_session(NULL, 1, i);
diff --git a/arch/ia64/kernel/uncached.c b/arch/ia64/kernel/uncached.c
index edcdfc149311..16c6d377c502 100644
--- a/arch/ia64/kernel/uncached.c
+++ b/arch/ia64/kernel/uncached.c
@@ -121,8 +121,8 @@ static int uncached_add_chunk(struct uncached_pool *uc_pool, int nid)
 	status = ia64_pal_prefetch_visibility(PAL_VISIBILITY_PHYSICAL);
 	if (status == PAL_VISIBILITY_OK_REMOTE_NEEDED) {
 		atomic_set(&uc_pool->status, 0);
-		status = smp_call_function(uncached_ipi_visibility, uc_pool, 1);
-		if (status || atomic_read(&uc_pool->status))
+		smp_call_function(uncached_ipi_visibility, uc_pool, 1);
+		if (atomic_read(&uc_pool->status))
 			goto failed;
 	} else if (status != PAL_VISIBILITY_OK)
 		goto failed;
@@ -143,8 +143,8 @@ static int uncached_add_chunk(struct uncached_pool *uc_pool, int nid)
 	if (status != PAL_STATUS_SUCCESS)
 		goto failed;
 	atomic_set(&uc_pool->status, 0);
-	status = smp_call_function(uncached_ipi_mc_drain, uc_pool, 1);
-	if (status || atomic_read(&uc_pool->status))
+	smp_call_function(uncached_ipi_mc_drain, uc_pool, 1);
+	if (atomic_read(&uc_pool->status))
 		goto failed;
 
 	/*
diff --git a/arch/x86/lib/cache-smp.c b/arch/x86/lib/cache-smp.c
index 1811fa4a1b1a..7c48ff4ae8d1 100644
--- a/arch/x86/lib/cache-smp.c
+++ b/arch/x86/lib/cache-smp.c
@@ -15,6 +15,7 @@ EXPORT_SYMBOL(wbinvd_on_cpu);
 
 int wbinvd_on_all_cpus(void)
 {
-	return on_each_cpu(__wbinvd, NULL, 1);
+	on_each_cpu(__wbinvd, NULL, 1);
+	return 0;
 }
 EXPORT_SYMBOL(wbinvd_on_all_cpus);
diff --git a/drivers/char/agp/generic.c b/drivers/char/agp/generic.c
index 658664a5a5aa..df1edb5ec0ad 100644
--- a/drivers/char/agp/generic.c
+++ b/drivers/char/agp/generic.c
@@ -1311,8 +1311,7 @@ static void ipi_handler(void *null)
 
 void global_cache_flush(void)
 {
-	if (on_each_cpu(ipi_handler, NULL, 1) != 0)
-		panic(PFX "timed out waiting for the other CPUs!\n");
+	on_each_cpu(ipi_handler, NULL, 1);
 }
 EXPORT_SYMBOL(global_cache_flush);
 
diff --git a/include/linux/smp.h b/include/linux/smp.h
index a56f08ff3097..bb8b451ab01f 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -35,7 +35,7 @@ int smp_call_function_single(int cpuid, smp_call_func_t func, void *info,
 /*
  * Call a function on all processors
  */
-int on_each_cpu(smp_call_func_t func, void *info, int wait);
+void on_each_cpu(smp_call_func_t func, void *info, int wait);
 
 /*
  * Call a function on processors specified by mask, which might include
@@ -101,7 +101,7 @@ extern void smp_cpus_done(unsigned int max_cpus);
 /*
  * Call a function on all other processors
  */
-int smp_call_function(smp_call_func_t func, void *info, int wait);
+void smp_call_function(smp_call_func_t func, void *info, int wait);
 void smp_call_function_many(const struct cpumask *mask,
 			    smp_call_func_t func, void *info, bool wait);
 
@@ -144,9 +144,8 @@ static inline void smp_send_stop(void) { }
  *	These macros fold the SMP functionality into a single CPU system
  */
 #define raw_smp_processor_id()			0
-static inline int up_smp_call_function(smp_call_func_t func, void *info)
+static inline void up_smp_call_function(smp_call_func_t func, void *info)
 {
-	return 0;
 }
 #define smp_call_function(func, info, wait) \
 			(up_smp_call_function(func, info))
diff --git a/kernel/smp.c b/kernel/smp.c
index d155374632eb..708d2ac1db72 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -487,13 +487,11 @@ EXPORT_SYMBOL(smp_call_function_many);
  * You must not call this function with disabled interrupts or from a
  * hardware interrupt handler or from a bottom half handler.
  */
-int smp_call_function(smp_call_func_t func, void *info, int wait)
+void smp_call_function(smp_call_func_t func, void *info, int wait)
 {
 	preempt_disable();
 	smp_call_function_many(cpu_online_mask, func, info, wait);
 	preempt_enable();
-
-	return 0;
 }
 EXPORT_SYMBOL(smp_call_function);
 
@@ -594,18 +592,16 @@ void __init smp_init(void)
  * early_boot_irqs_disabled is set.  Use local_irq_save/restore() instead
  * of local_irq_disable/enable().
  */
-int on_each_cpu(void (*func) (void *info), void *info, int wait)
+void on_each_cpu(void (*func) (void *info), void *info, int wait)
 {
 	unsigned long flags;
-	int ret = 0;
 
 	preempt_disable();
-	ret = smp_call_function(func, info, wait);
+	smp_call_function(func, info, wait);
 	local_irq_save(flags);
 	func(info);
 	local_irq_restore(flags);
 	preempt_enable();
-	return ret;
 }
 EXPORT_SYMBOL(on_each_cpu);
 
diff --git a/kernel/up.c b/kernel/up.c
index 483c9962c999..862b460ab97a 100644
--- a/kernel/up.c
+++ b/kernel/up.c
@@ -35,14 +35,13 @@ int smp_call_function_single_async(int cpu, call_single_data_t *csd)
 }
 EXPORT_SYMBOL(smp_call_function_single_async);
 
-int on_each_cpu(smp_call_func_t func, void *info, int wait)
+void on_each_cpu(smp_call_func_t func, void *info, int wait)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
 	func(info);
 	local_irq_restore(flags);
-	return 0;
 }
 EXPORT_SYMBOL(on_each_cpu);
 
-- 
2.20.1



* [PATCH 2/9] smp: Run functions concurrently in smp_call_function_many()
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
  2019-06-13  6:48 ` [PATCH 1/9] smp: Remove smp_call_function() and on_each_cpu() return values Nadav Amit
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-13  6:48 ` [PATCH 3/9] x86/mm/tlb: Refactor common code into flush_tlb_on_cpus() Nadav Amit
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit, Rik van Riel, Josh Poimboeuf

Currently, on_each_cpu() and similar functions do not exploit the
potential for concurrency: the function is first executed remotely and
only then locally. Functions such as TLB flushes can take considerable
time, so this provides an opportunity for performance optimization.

To do so, introduce __smp_call_function_many(), which allows the callers
to provide both local and remote functions that should be executed, and
runs them concurrently. Keep the semantics of smp_call_function_many()
as they are today for backward compatibility: the called function is not
executed locally in this case.

__smp_call_function_many() does not use the optimized version for a
single remote target that smp_call_function_single() implements. For a
synchronous function call, smp_call_function_single() keeps the
call_single_data (which is used for synchronization) on the stack.
Interestingly, it seems that not using this optimization provides a
greater performance improvement (a greater speedup with a single remote
target than with multiple ones). Presumably, holding data structures
that are intended for synchronization on the stack can introduce
overhead due to TLB misses and false-sharing when the stack is used for
other purposes.

Adding support to run the functions concurrently required removing a
micro-optimization in on_each_cpu() that disabled/enabled IRQs instead
of saving/restoring them. The benefit of running the local and remote
code concurrently is expected to be greater.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 include/linux/smp.h |  27 ++++++---
 kernel/smp.c        | 133 +++++++++++++++++++++-----------------------
 2 files changed, 83 insertions(+), 77 deletions(-)

diff --git a/include/linux/smp.h b/include/linux/smp.h
index bb8b451ab01f..b69abc88259d 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -32,11 +32,6 @@ extern unsigned int total_cpus;
 int smp_call_function_single(int cpuid, smp_call_func_t func, void *info,
 			     int wait);
 
-/*
- * Call a function on all processors
- */
-void on_each_cpu(smp_call_func_t func, void *info, int wait);
-
 /*
  * Call a function on processors specified by mask, which might include
  * the local one.
@@ -44,6 +39,15 @@ void on_each_cpu(smp_call_func_t func, void *info, int wait);
 void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
 		void *info, bool wait);
 
+/*
+ * Call a function on all processors.  May be used during early boot while
+ * early_boot_irqs_disabled is set.
+ */
+static inline void on_each_cpu(smp_call_func_t func, void *info, int wait)
+{
+	on_each_cpu_mask(cpu_online_mask, func, info, wait);
+}
+
 /*
  * Call a function on each processor for which the supplied function
  * cond_func returns a positive value. This may include the local
@@ -102,8 +106,17 @@ extern void smp_cpus_done(unsigned int max_cpus);
  * Call a function on all other processors
  */
 void smp_call_function(smp_call_func_t func, void *info, int wait);
-void smp_call_function_many(const struct cpumask *mask,
-			    smp_call_func_t func, void *info, bool wait);
+
+void __smp_call_function_many(const struct cpumask *mask,
+			      smp_call_func_t remote_func,
+			      smp_call_func_t local_func,
+			      void *info, bool wait);
+
+static inline void smp_call_function_many(const struct cpumask *mask,
+				smp_call_func_t func, void *info, bool wait)
+{
+	__smp_call_function_many(mask, func, NULL, info, wait);
+}
 
 int smp_call_function_any(const struct cpumask *mask,
 			  smp_call_func_t func, void *info, int wait);
diff --git a/kernel/smp.c b/kernel/smp.c
index 708d2ac1db72..d9189e4d6464 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -388,9 +388,13 @@ int smp_call_function_any(const struct cpumask *mask,
 EXPORT_SYMBOL_GPL(smp_call_function_any);
 
 /**
- * smp_call_function_many(): Run a function on a set of other CPUs.
+ * __smp_call_function_many(): Run a function on a set of CPUs.
  * @mask: The set of cpus to run on (only runs on online subset).
- * @func: The function to run. This must be fast and non-blocking.
+ * @remote_func: The function to run on remote cores. This must be fast and
+ *		 non-blocking.
+ * @local_func: The function that should be run on this CPU. This must be
+ *		fast and non-blocking. If NULL is provided, no function will
+ *		be executed on this CPU.
  * @info: An arbitrary pointer to pass to the function.
  * @wait: If true, wait (atomically) until function has completed
  *        on other CPUs.
@@ -401,11 +405,16 @@ EXPORT_SYMBOL_GPL(smp_call_function_any);
  * hardware interrupt handler or from a bottom half handler. Preemption
  * must be disabled when calling this function.
  */
-void smp_call_function_many(const struct cpumask *mask,
-			    smp_call_func_t func, void *info, bool wait)
+void __smp_call_function_many(const struct cpumask *mask,
+			      smp_call_func_t remote_func,
+			      smp_call_func_t local_func,
+			      void *info, bool wait)
 {
+	int cpu, last_cpu, this_cpu = smp_processor_id();
 	struct call_function_data *cfd;
-	int cpu, next_cpu, this_cpu = smp_processor_id();
+	bool run_remote = false;
+	bool run_local = false;
+	int nr_cpus = 0;
 
 	/*
 	 * Can deadlock when called with interrupts disabled.
@@ -413,55 +422,62 @@ void smp_call_function_many(const struct cpumask *mask,
 	 * send smp call function interrupt to this cpu and as such deadlocks
 	 * can't happen.
 	 */
-	WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
-		     && !oops_in_progress && !early_boot_irqs_disabled);
+	if (cpu_online(this_cpu) && !oops_in_progress && !early_boot_irqs_disabled)
+		lockdep_assert_irqs_enabled();
+
+	/* Check if we need local execution. */
+	if (local_func && cpumask_test_cpu(this_cpu, mask))
+		run_local = true;
 
-	/* Try to fastpath.  So, what's a CPU they want? Ignoring this one. */
+	/* Check if we need remote execution, i.e., any CPU excluding this one. */
 	cpu = cpumask_first_and(mask, cpu_online_mask);
 	if (cpu == this_cpu)
 		cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
+	if (cpu < nr_cpu_ids)
+		run_remote = true;
 
-	/* No online cpus?  We're done. */
-	if (cpu >= nr_cpu_ids)
-		return;
+	if (run_remote) {
+		cfd = this_cpu_ptr(&cfd_data);
 
-	/* Do we have another CPU which isn't us? */
-	next_cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
-	if (next_cpu == this_cpu)
-		next_cpu = cpumask_next_and(next_cpu, mask, cpu_online_mask);
-
-	/* Fastpath: do that cpu by itself. */
-	if (next_cpu >= nr_cpu_ids) {
-		smp_call_function_single(cpu, func, info, wait);
-		return;
-	}
+		cpumask_and(cfd->cpumask, mask, cpu_online_mask);
+		__cpumask_clear_cpu(this_cpu, cfd->cpumask);
 
-	cfd = this_cpu_ptr(&cfd_data);
-
-	cpumask_and(cfd->cpumask, mask, cpu_online_mask);
-	__cpumask_clear_cpu(this_cpu, cfd->cpumask);
+		cpumask_clear(cfd->cpumask_ipi);
+		for_each_cpu(cpu, cfd->cpumask) {
+			call_single_data_t *csd = per_cpu_ptr(cfd->csd, cpu);
+
+			nr_cpus++;
+			last_cpu = cpu;
+
+			csd_lock(csd);
+			if (wait)
+				csd->flags |= CSD_FLAG_SYNCHRONOUS;
+			csd->func = remote_func;
+			csd->info = info;
+			if (llist_add(&csd->llist, &per_cpu(call_single_queue, cpu)))
+				__cpumask_set_cpu(cpu, cfd->cpumask_ipi);
+		}
 
-	/* Some callers race with other cpus changing the passed mask */
-	if (unlikely(!cpumask_weight(cfd->cpumask)))
-		return;
+		/*
+		 * Choose the most efficient way to send an IPI. Note that the
+		 * number of CPUs might be zero due to concurrent changes to the
+		 * provided mask.
+		 */
+		if (nr_cpus == 1)
+			arch_send_call_function_single_ipi(last_cpu);
+		else if (likely(nr_cpus > 1))
+			arch_send_call_function_ipi_mask(cfd->cpumask_ipi);
+	}
 
-	cpumask_clear(cfd->cpumask_ipi);
-	for_each_cpu(cpu, cfd->cpumask) {
-		call_single_data_t *csd = per_cpu_ptr(cfd->csd, cpu);
+	if (run_local) {
+		unsigned long flags;
 
-		csd_lock(csd);
-		if (wait)
-			csd->flags |= CSD_FLAG_SYNCHRONOUS;
-		csd->func = func;
-		csd->info = info;
-		if (llist_add(&csd->llist, &per_cpu(call_single_queue, cpu)))
-			__cpumask_set_cpu(cpu, cfd->cpumask_ipi);
+		local_irq_save(flags);
+		local_func(info);
+		local_irq_restore(flags);
 	}
 
-	/* Send a message to all CPUs in the map */
-	arch_send_call_function_ipi_mask(cfd->cpumask_ipi);
-
-	if (wait) {
+	if (run_remote && wait) {
 		for_each_cpu(cpu, cfd->cpumask) {
 			call_single_data_t *csd;
 
@@ -470,7 +486,7 @@ void smp_call_function_many(const struct cpumask *mask,
 		}
 	}
 }
-EXPORT_SYMBOL(smp_call_function_many);
+EXPORT_SYMBOL(__smp_call_function_many);
 
 /**
  * smp_call_function(): Run a function on all other CPUs.
@@ -587,24 +603,6 @@ void __init smp_init(void)
 	smp_cpus_done(setup_max_cpus);
 }
 
-/*
- * Call a function on all processors.  May be used during early boot while
- * early_boot_irqs_disabled is set.  Use local_irq_save/restore() instead
- * of local_irq_disable/enable().
- */
-void on_each_cpu(void (*func) (void *info), void *info, int wait)
-{
-	unsigned long flags;
-
-	preempt_disable();
-	smp_call_function(func, info, wait);
-	local_irq_save(flags);
-	func(info);
-	local_irq_restore(flags);
-	preempt_enable();
-}
-EXPORT_SYMBOL(on_each_cpu);
-
 /**
  * on_each_cpu_mask(): Run a function on processors specified by
  * cpumask, which may include the local processor.
@@ -624,16 +622,11 @@ EXPORT_SYMBOL(on_each_cpu);
 void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
 			void *info, bool wait)
 {
-	int cpu = get_cpu();
+	preempt_disable();
 
-	smp_call_function_many(mask, func, info, wait);
-	if (cpumask_test_cpu(cpu, mask)) {
-		unsigned long flags;
-		local_irq_save(flags);
-		func(info);
-		local_irq_restore(flags);
-	}
-	put_cpu();
+	__smp_call_function_many(mask, func, func, info, wait);
+
+	preempt_enable();
 }
 EXPORT_SYMBOL(on_each_cpu_mask);
 
-- 
2.20.1



* [PATCH 3/9] x86/mm/tlb: Refactor common code into flush_tlb_on_cpus()
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
  2019-06-13  6:48 ` [PATCH 1/9] smp: Remove smp_call_function() and on_each_cpu() return values Nadav Amit
  2019-06-13  6:48 ` [PATCH 2/9] smp: Run functions concurrently in smp_call_function_many() Nadav Amit
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-25 21:07   ` Dave Hansen
  2019-06-13  6:48   ` Nadav Amit via Virtualization
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit

arch_tlbbatch_flush() and flush_tlb_mm_range() have very similar code
that is effectively the same. Extract the common code into a new
function, flush_tlb_on_cpus().

There is one functional change, which should not affect correctness:
flush_tlb_mm_range() compared loaded_mm with the mm to figure out
whether a local flush is needed. Instead, the common code looks at
mm_cpumask(), which should give the same result. Performance should not
be affected, since this cpumask should not change at a frequency that
would introduce cache contention.

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/mm/tlb.c | 62 ++++++++++++++++++++++++++---------------------
 1 file changed, 34 insertions(+), 28 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 91f6db92554c..c34bcf03f06f 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -734,7 +734,11 @@ static inline struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
 			unsigned int stride_shift, bool freed_tables,
 			u64 new_tlb_gen)
 {
-	struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);
+	struct flush_tlb_info *info;
+
+	preempt_disable();
+
+	info = this_cpu_ptr(&flush_tlb_info);
 
 #ifdef CONFIG_DEBUG_VM
 	/*
@@ -762,6 +766,23 @@ static inline void put_flush_tlb_info(void)
 	barrier();
 	this_cpu_dec(flush_tlb_info_idx);
 #endif
+	preempt_enable();
+}
+
+static void flush_tlb_on_cpus(const cpumask_t *cpumask,
+			      const struct flush_tlb_info *info)
+{
+	int this_cpu = smp_processor_id();
+
+	if (cpumask_test_cpu(this_cpu, cpumask)) {
+		lockdep_assert_irqs_enabled();
+		local_irq_disable();
+		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
+		local_irq_enable();
+	}
+
+	if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
+		flush_tlb_others(cpumask, info);
 }
 
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
@@ -770,9 +791,6 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 {
 	struct flush_tlb_info *info;
 	u64 new_tlb_gen;
-	int cpu;
-
-	cpu = get_cpu();
 
 	/* Should we flush just the requested range? */
 	if ((end == TLB_FLUSH_ALL) ||
@@ -787,18 +805,18 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	info = get_flush_tlb_info(mm, start, end, stride_shift, freed_tables,
 				  new_tlb_gen);
 
-	if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
-		lockdep_assert_irqs_enabled();
-		local_irq_disable();
-		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
-		local_irq_enable();
-	}
+	/*
+	 * Assert that mm_cpumask() corresponds with the loaded mm. We got one
+	 * exception: for init_mm we do not need to flush anything, and the
+	 * cpumask does not correspond with loaded_mm.
+	 */
+	VM_WARN_ON_ONCE(cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)) !=
+			(mm == this_cpu_read(cpu_tlbstate.loaded_mm)) &&
+			mm != &init_mm);
 
-	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
-		flush_tlb_others(mm_cpumask(mm), info);
+	flush_tlb_on_cpus(mm_cpumask(mm), info);
 
 	put_flush_tlb_info();
-	put_cpu();
 }
 
 
@@ -833,13 +851,11 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	} else {
 		struct flush_tlb_info *info;
 
-		preempt_disable();
 		info = get_flush_tlb_info(NULL, start, end, 0, false, 0);
 
 		on_each_cpu(do_kernel_range_flush, info, 1);
 
 		put_flush_tlb_info();
-		preempt_enable();
 	}
 }
 
@@ -857,21 +873,11 @@ static const struct flush_tlb_info full_flush_tlb_info = {
 
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
-	int cpu = get_cpu();
-
-	if (cpumask_test_cpu(cpu, &batch->cpumask)) {
-		lockdep_assert_irqs_enabled();
-		local_irq_disable();
-		flush_tlb_func_local(&full_flush_tlb_info, TLB_LOCAL_SHOOTDOWN);
-		local_irq_enable();
-	}
-
-	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids)
-		flush_tlb_others(&batch->cpumask, &full_flush_tlb_info);
+	preempt_disable();
+	flush_tlb_on_cpus(&batch->cpumask, &full_flush_tlb_info);
+	preempt_enable();
 
 	cpumask_clear(&batch->cpumask);
-
-	put_cpu();
 }
 
 static ssize_t tlbflush_read_file(struct file *file, char __user *user_buf,
-- 
2.20.1



* [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
  2019-06-13  6:48 ` [PATCH 1/9] smp: Remove smp_call_function() and on_each_cpu() return values Nadav Amit
@ 2019-06-13  6:48   ` Nadav Amit via Virtualization
  2019-06-13  6:48 ` [PATCH 3/9] x86/mm/tlb: Refactor common code into flush_tlb_on_cpus() Nadav Amit
                     ` (8 subsequent siblings)
  10 siblings, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Sasha Levin, Juergen Gross, Paolo Bonzini,
	Boris Ostrovsky, linux-hyperv, virtualization, kvm, xen-devel

To improve TLB shootdown performance, flush the remote and local TLBs
concurrently. Introduce flush_tlb_multi(), which does so. The current
flush_tlb_others() interface is kept, since the paravirtual interfaces
need to be adapted first before it can be removed; this is left for
future work. In such PV environments, TLB flushes are not yet performed
concurrently.

Add a static key to tell whether this new interface is supported.
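
The static key selects between the two flush paths. A minimal sketch of that dispatch, as a hypothetical user-space model (the key becomes a bool and the flushes become counters — none of this is the actual kernel code):

```c
#include <assert.h>
#include <stdbool.h>

static bool flush_tlb_multi_enabled = true;	/* models the static key */
static int multi_flushes, local_flushes, remote_flushes;

/* Model of flush_tlb_on_cpus() with this patch applied: when the new
 * interface is supported, flush_tlb_multi() handles local and remote
 * CPUs concurrently; otherwise fall back to a local flush followed by
 * flush_tlb_others(). */
static void flush_tlb_on_cpus_model(unsigned long cpumask, int this_cpu)
{
	if (flush_tlb_multi_enabled) {
		multi_flushes++;		/* flush_tlb_multi() */
		return;
	}
	if (cpumask & (1UL << this_cpu))
		local_flushes++;		/* flush_tlb_func_local() */
	if (cpumask & ~(1UL << this_cpu))
		remote_flushes++;		/* flush_tlb_others() */
}
```

A PV guest that lacks the new hook disables the key at init time and transparently keeps the old local-then-remote sequence.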

Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: Juergen Gross <jgross@suse.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: linux-hyperv@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/hyperv/mmu.c                 |  2 +
 arch/x86/include/asm/paravirt.h       |  8 +++
 arch/x86/include/asm/paravirt_types.h |  6 +++
 arch/x86/include/asm/tlbflush.h       |  6 +++
 arch/x86/kernel/kvm.c                 |  1 +
 arch/x86/kernel/paravirt.c            |  3 ++
 arch/x86/mm/tlb.c                     | 71 ++++++++++++++++++++++-----
 arch/x86/xen/mmu_pv.c                 |  2 +
 8 files changed, 87 insertions(+), 12 deletions(-)

diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index e65d7fe6489f..ca28b400c87c 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -233,4 +233,6 @@ void hyperv_setup_mmu_ops(void)
 	pr_info("Using hypercall for remote TLB flush\n");
 	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
 	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
+
+	static_key_disable(&flush_tlb_multi_enabled.key);
 }
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index c25c38a05c1c..192be7254457 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -47,6 +47,8 @@ static inline void slow_down_io(void)
 #endif
 }
 
+DECLARE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
+
 static inline void __flush_tlb(void)
 {
 	PVOP_VCALL0(mmu.flush_tlb_user);
@@ -62,6 +64,12 @@ static inline void __flush_tlb_one_user(unsigned long addr)
 	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
 }
 
+static inline void flush_tlb_multi(const struct cpumask *cpumask,
+				   const struct flush_tlb_info *info)
+{
+	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
+}
+
 static inline void flush_tlb_others(const struct cpumask *cpumask,
 				    const struct flush_tlb_info *info)
 {
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 946f8f1f1efc..b93b3d90729a 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -211,6 +211,12 @@ struct pv_mmu_ops {
 	void (*flush_tlb_user)(void);
 	void (*flush_tlb_kernel)(void);
 	void (*flush_tlb_one_user)(unsigned long addr);
+	/*
+	 * flush_tlb_multi() is the preferred interface; it is capable of
+	 * flushing both local and remote CPUs.
+	 */
+	void (*flush_tlb_multi)(const struct cpumask *cpus,
+				const struct flush_tlb_info *info);
 	void (*flush_tlb_others)(const struct cpumask *cpus,
 				 const struct flush_tlb_info *info);
 
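The ops-table plumbing above follows the usual paravirt pattern: both hooks default to the native implementation, and a hypervisor that only knows the old interface overrides flush_tlb_others and disables the static key. A hedged sketch with hypothetical names (function pointers plus a bool standing in for pv_ops and the static key):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct flush_info;	/* left opaque in this sketch */

struct mmu_ops_model {
	void (*flush_tlb_multi)(unsigned long mask, const struct flush_info *info);
	void (*flush_tlb_others)(unsigned long mask, const struct flush_info *info);
};

static bool flush_tlb_multi_enabled = true;	/* models the static key */
static int native_calls, hv_calls;

static void native_flush_model(unsigned long mask, const struct flush_info *info)
{
	(void)mask; (void)info;
	native_calls++;
}

static void hv_flush_others_model(unsigned long mask, const struct flush_info *info)
{
	(void)mask; (void)info;
	hv_calls++;
}

/* Native boot: both hooks point at the native implementation. */
static struct mmu_ops_model mmu_ops = {
	.flush_tlb_multi  = native_flush_model,
	.flush_tlb_others = native_flush_model,
};

/* What hyperv_setup_mmu_ops() / kvm_guest_init() do in this patch:
 * install the hypervisor's flush_tlb_others hook and turn the new
 * interface off, as static_key_disable() does for the real key. */
static void setup_hv_mmu_ops(void)
{
	mmu_ops.flush_tlb_others = hv_flush_others_model;
	flush_tlb_multi_enabled = false;
}
```

Callers that test the key never reach flush_tlb_multi on such a guest, so the unadapted hook is never invoked.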
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index dee375831962..79272938cf79 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -569,6 +569,9 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
 }
 
+void native_flush_tlb_multi(const struct cpumask *cpumask,
+			     const struct flush_tlb_info *info);
+
 void native_flush_tlb_others(const struct cpumask *cpumask,
 			     const struct flush_tlb_info *info);
 
@@ -593,6 +596,9 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 #ifndef CONFIG_PARAVIRT
+#define flush_tlb_multi(mask, info)	\
+	native_flush_tlb_multi(mask, info)
+
 #define flush_tlb_others(mask, info)	\
 	native_flush_tlb_others(mask, info)
 
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5169b8cc35bb..00d81e898717 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -630,6 +630,7 @@ static void __init kvm_guest_init(void)
 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
 		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
+		static_key_disable(&flush_tlb_multi_enabled.key);
 	}
 
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 98039d7fb998..ac00afed5570 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -159,6 +159,8 @@ unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
 	return insn_len;
 }
 
+DEFINE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
+
 static void native_flush_tlb(void)
 {
 	__native_flush_tlb();
@@ -363,6 +365,7 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.flush_tlb_user	= native_flush_tlb,
 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
+	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
 	.mmu.flush_tlb_others	= native_flush_tlb_others,
 	.mmu.tlb_remove_table	=
 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index c34bcf03f06f..db73d5f1dd43 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 		 * garbage into our TLB.  Since switching to init_mm is barely
 		 * slower than a minimal flush, just switch to init_mm.
 		 *
-		 * This should be rare, with native_flush_tlb_others skipping
+		 * This should be rare, with native_flush_tlb_multi skipping
 		 * IPIs to lazy TLB mode CPUs.
 		 */
 		switch_mm_irqs_off(NULL, &init_mm, NULL);
@@ -635,9 +635,12 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
 }
 
-static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
+static void flush_tlb_func_local(void *info)
 {
 	const struct flush_tlb_info *f = info;
+	enum tlb_flush_reason reason;
+
+	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
 
 	flush_tlb_func_common(f, true, reason);
 }
@@ -655,14 +658,21 @@ static void flush_tlb_func_remote(void *info)
 	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
 }
 
-static bool tlb_is_not_lazy(int cpu, void *data)
+static inline bool tlb_is_not_lazy(int cpu)
 {
 	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
 }
 
-void native_flush_tlb_others(const struct cpumask *cpumask,
-			     const struct flush_tlb_info *info)
+static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
+
+void native_flush_tlb_multi(const struct cpumask *cpumask,
+			    const struct flush_tlb_info *info)
 {
+	/*
+	 * Do accounting and tracing. Note that there are (and have always been)
+	 * cases in which a remote TLB flush is traced, but does not
+	 * eventually happen.
+	 */
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
 	if (info->end == TLB_FLUSH_ALL)
 		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
@@ -682,10 +692,14 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 		 * means that the percpu tlb_gen variables won't be updated
 		 * and we'll do pointless flushes on future context switches.
 		 *
-		 * Rather than hooking native_flush_tlb_others() here, I think
+		 * Rather than hooking native_flush_tlb_multi() here, I think
 		 * that UV should be updated so that smp_call_function_many(),
 		 * etc, are optimal on UV.
 		 */
+		local_irq_disable();
+		flush_tlb_func_local((__force void *)info);
+		local_irq_enable();
+
 		cpumask = uv_flush_tlb_others(cpumask, info);
 		if (cpumask)
 			smp_call_function_many(cpumask, flush_tlb_func_remote,
@@ -704,11 +718,39 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 	 * doing a speculative memory access.
 	 */
 	if (info->freed_tables)
-		smp_call_function_many(cpumask, flush_tlb_func_remote,
-			       (void *)info, 1);
-	else
-		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
-				(void *)info, 1, GFP_ATOMIC, cpumask);
+		__smp_call_function_many(cpumask, flush_tlb_func_remote,
+					 flush_tlb_func_local, (void *)info, 1);
+	else {
+		/*
+		 * Although we could have used on_each_cpu_cond_mask(),
+		 * open-coding it has several performance advantages: (1) we can
+		 * use specialized functions for remote and local flushes; (2)
+		 * there is no need for an indirect branch to test whether the
+		 * TLB is lazy; (3) we can use a designated cpumask for
+		 * evaluating the condition instead of allocating a new one.
+		 *
+		 * This works under the assumption that there are no nested TLB
+		 * flushes, an assumption that is already made in
+		 * flush_tlb_mm_range().
+		 */
+		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
+		int cpu;
+
+		cpumask_clear(cond_cpumask);
+
+		for_each_cpu(cpu, cpumask) {
+			if (tlb_is_not_lazy(cpu))
+				__cpumask_set_cpu(cpu, cond_cpumask);
+		}
+		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
+					 flush_tlb_func_local, (void *)info, 1);
+	}
+}
+
+void native_flush_tlb_others(const struct cpumask *cpumask,
+			     const struct flush_tlb_info *info)
+{
+	native_flush_tlb_multi(cpumask, info);
 }
 
 /*
@@ -774,10 +816,15 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
 {
 	int this_cpu = smp_processor_id();
 
+	if (static_branch_likely(&flush_tlb_multi_enabled)) {
+		flush_tlb_multi(cpumask, info);
+		return;
+	}
+
 	if (cpumask_test_cpu(this_cpu, cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
-		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
+		flush_tlb_func_local((__force void *)info);
 		local_irq_enable();
 	}
 
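The open-coded condition evaluation above — copy only the non-lazy CPUs of the target mask into a preallocated per-CPU cpumask instead of allocating one — can be sketched as follows (a hypothetical user-space model; a bitmask and a bool array stand in for cpumask_t and cpu_tlbstate.is_lazy):

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS_MODEL 8

/* A set entry means that CPU is in lazy TLB mode: it can skip the IPI
 * and will flush when it switches back to the mm. */
static bool is_lazy[NR_CPUS_MODEL];

/* Models the loop in native_flush_tlb_multi(): clear the preallocated
 * condition mask, then set a bit for every non-lazy CPU in @cpumask. */
static unsigned long build_cond_mask(unsigned long cpumask)
{
	unsigned long cond_mask = 0;	/* cpumask_clear() */
	int cpu;

	for (cpu = 0; cpu < NR_CPUS_MODEL; cpu++) {
		if ((cpumask & (1UL << cpu)) && !is_lazy[cpu])
			cond_mask |= 1UL << cpu;	/* __cpumask_set_cpu() */
	}
	return cond_mask;
}
```

Because the mask is preallocated per CPU, this relies on the no-nested-flushes assumption the comment states; a nested flush on the same CPU would clobber the mask mid-use.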
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index beb44e22afdf..0cb277848cb4 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2474,6 +2474,8 @@ void __init xen_init_mmu_ops(void)
 
 	pv_ops.mmu = xen_mmu_ops;
 
+	static_key_disable(&flush_tlb_multi_enabled.key);
+
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread


* [PATCH 5/9] x86/mm/tlb: Optimize local TLB flushes
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
                   ` (3 preceding siblings ...)
  2019-06-13  6:48   ` Nadav Amit via Virtualization
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-25 21:36   ` Dave Hansen
  2019-06-13  6:48 ` [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi() Nadav Amit
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit

While the updated smp infrastructure is capable of running a function on
a single local core, it is not optimized for this case. The multiple
function calls and the indirect branch introduce some overhead, making
local TLB flushes slower than they were before the recent changes.

Before calling the SMP infrastructure, check whether only a local TLB
flush is needed, to restore the lost performance in this common case.
This requires checking mm_cpumask() one more time, but unless this mask
is updated very frequently, the extra check should not hurt performance.

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/mm/tlb.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index db73d5f1dd43..ceb03b8cad32 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -815,8 +815,12 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
 			      const struct flush_tlb_info *info)
 {
 	int this_cpu = smp_processor_id();
+	bool flush_others = false;
 
-	if (static_branch_likely(&flush_tlb_multi_enabled)) {
+	if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
+		flush_others = true;
+
+	if (static_branch_likely(&flush_tlb_multi_enabled) && flush_others) {
 		flush_tlb_multi(cpumask, info);
 		return;
 	}
@@ -828,7 +832,7 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
 		local_irq_enable();
 	}
 
-	if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
+	if (flush_others)
 		flush_tlb_others(cpumask, info);
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
                   ` (4 preceding siblings ...)
  2019-06-13  6:48 ` [PATCH 5/9] x86/mm/tlb: Optimize local TLB flushes Nadav Amit
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-25 21:40   ` Dave Hansen
  2019-06-13  6:48 ` [PATCH 7/9] smp: Do not mark call_function_data as shared Nadav Amit
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit, Paolo Bonzini, kvm

Support the new flush_tlb_multi() interface, which also flushes the
local CPU's TLB, instead of flush_tlb_others(), which does not. The new
interface performs better since it parallelizes remote and local TLB
flushes.

The actual implementation of flush_tlb_multi() is almost identical to
that of flush_tlb_others().

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/kvm.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 00d81e898717..d00d551d4a2a 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -580,7 +580,7 @@ static void __init kvm_apf_trap_init(void)
 
 static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);
 
-static void kvm_flush_tlb_others(const struct cpumask *cpumask,
+static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
 			const struct flush_tlb_info *info)
 {
 	u8 state;
@@ -594,6 +594,11 @@ static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 	 * queue flush_on_enter for pre-empted vCPUs
 	 */
 	for_each_cpu(cpu, flushmask) {
+		/*
+		 * The local vCPU is never preempted, so we do not explicitly
+		 * skip the check for the local vCPU - its bit is never cleared
+		 * from flushmask.
+		 */
 		src = &per_cpu(steal_time, cpu);
 		state = READ_ONCE(src->preempted);
 		if ((state & KVM_VCPU_PREEMPTED)) {
@@ -603,7 +608,7 @@ static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 		}
 	}
 
-	native_flush_tlb_others(flushmask, info);
+	native_flush_tlb_multi(flushmask, info);
 }
 
 static void __init kvm_guest_init(void)
@@ -628,9 +633,8 @@ static void __init kvm_guest_init(void)
 	if (kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH) &&
 	    !kvm_para_has_hint(KVM_HINTS_REALTIME) &&
 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
-		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
+		pv_ops.mmu.flush_tlb_multi = kvm_flush_tlb_multi;
 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
-		static_key_disable(&flush_tlb_multi_enabled.key);
 	}
 
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 7/9] smp: Do not mark call_function_data as shared
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
                   ` (5 preceding siblings ...)
  2019-06-13  6:48 ` [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi() Nadav Amit
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-23 12:31   ` [tip:smp/hotplug] " tip-bot for Nadav Amit
  2019-06-13  6:48 ` [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate Nadav Amit
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit, Rik van Riel

cfd_data is marked as shared, but although it holds pointers to shared
data structures, it is itself private to each core.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rik van Riel <riel@surriel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 kernel/smp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index d9189e4d6464..d5d8dee8e3f3 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -34,7 +34,7 @@ struct call_function_data {
 	cpumask_var_t		cpumask_ipi;
 };
 
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data);
+static DEFINE_PER_CPU_ALIGNED(struct call_function_data, cfd_data);
 
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct llist_head, call_single_queue);
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
                   ` (6 preceding siblings ...)
  2019-06-13  6:48 ` [PATCH 7/9] smp: Do not mark call_function_data as shared Nadav Amit
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-14 15:58   ` Sean Christopherson
  2019-06-25 21:52   ` Dave Hansen
  2019-06-13  6:48 ` [PATCH 9/9] x86/apic: Use non-atomic operations when possible Nadav Amit
                   ` (2 subsequent siblings)
  10 siblings, 2 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit

cpu_tlbstate is mostly private; only the is_lazy variable is shared.
This causes some false sharing when TLB flushes are performed.

Break cpu_tlbstate into cpu_tlbstate and cpu_tlbstate_shared, and mark
each one accordingly.

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/tlbflush.h | 40 ++++++++++++++++++---------------
 arch/x86/mm/init.c              |  2 +-
 arch/x86/mm/tlb.c               | 15 ++++++++-----
 3 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 79272938cf79..a1fea36d5292 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -178,23 +178,6 @@ struct tlb_state {
 	u16 loaded_mm_asid;
 	u16 next_asid;
 
-	/*
-	 * We can be in one of several states:
-	 *
-	 *  - Actively using an mm.  Our CPU's bit will be set in
-	 *    mm_cpumask(loaded_mm) and is_lazy == false;
-	 *
-	 *  - Not using a real mm.  loaded_mm == &init_mm.  Our CPU's bit
-	 *    will not be set in mm_cpumask(&init_mm) and is_lazy == false.
-	 *
-	 *  - Lazily using a real mm.  loaded_mm != &init_mm, our bit
-	 *    is set in mm_cpumask(loaded_mm), but is_lazy == true.
-	 *    We're heuristically guessing that the CR3 load we
-	 *    skipped more than makes up for the overhead added by
-	 *    lazy mode.
-	 */
-	bool is_lazy;
-
 	/*
 	 * If set we changed the page tables in such a way that we
 	 * needed an invalidation of all contexts (aka. PCIDs / ASIDs).
@@ -240,7 +223,27 @@ struct tlb_state {
 	 */
 	struct tlb_context ctxs[TLB_NR_DYN_ASIDS];
 };
-DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);
+DECLARE_PER_CPU_ALIGNED(struct tlb_state, cpu_tlbstate);
+
+struct tlb_state_shared {
+	/*
+	 * We can be in one of several states:
+	 *
+	 *  - Actively using an mm.  Our CPU's bit will be set in
+	 *    mm_cpumask(loaded_mm) and is_lazy == false;
+	 *
+	 *  - Not using a real mm.  loaded_mm == &init_mm.  Our CPU's bit
+	 *    will not be set in mm_cpumask(&init_mm) and is_lazy == false.
+	 *
+	 *  - Lazily using a real mm.  loaded_mm != &init_mm, our bit
+	 *    is set in mm_cpumask(loaded_mm), but is_lazy == true.
+	 *    We're heuristically guessing that the CR3 load we
+	 *    skipped more than makes up for the overhead added by
+	 *    lazy mode.
+	 */
+	bool is_lazy;
+};
+DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
 
 /*
  * Blindly accessing user memory from NMI context can be dangerous
@@ -439,6 +442,7 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
 {
 	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
 
+	//invpcid_flush_one(kern_pcid(loaded_mm_asid), addr);
 	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
 
 	if (!static_cpu_has(X86_FEATURE_PTI))
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index fd10d91a6115..34027f36a944 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -951,7 +951,7 @@ void __init zone_sizes_init(void)
 	free_area_init_nodes(max_zone_pfns);
 }
 
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate) = {
+__visible DEFINE_PER_CPU_ALIGNED(struct tlb_state, cpu_tlbstate) = {
 	.loaded_mm = &init_mm,
 	.next_asid = 1,
 	.cr4 = ~0UL,	/* fail hard if we screw up cr4 shadow initialization */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ceb03b8cad32..ffa3c94abe6a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -145,7 +145,7 @@ void leave_mm(int cpu)
 		return;
 
 	/* Warn if we're not lazy. */
-	WARN_ON(!this_cpu_read(cpu_tlbstate.is_lazy));
+	WARN_ON(!this_cpu_read(cpu_tlbstate_shared.is_lazy));
 
 	switch_mm(NULL, &init_mm, NULL);
 }
@@ -277,7 +277,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 {
 	struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
 	u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
-	bool was_lazy = this_cpu_read(cpu_tlbstate.is_lazy);
+	bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);
 	unsigned cpu = smp_processor_id();
 	u64 next_tlb_gen;
 	bool need_flush;
@@ -322,7 +322,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		__flush_tlb_all();
 	}
 #endif
-	this_cpu_write(cpu_tlbstate.is_lazy, false);
+	this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
 
 	/*
 	 * The membarrier system call requires a full memory barrier and
@@ -463,7 +463,7 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
-	this_cpu_write(cpu_tlbstate.is_lazy, true);
+	this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
 }
 
 /*
@@ -544,7 +544,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 	VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].ctx_id) !=
 		   loaded_mm->context.ctx_id);
 
-	if (this_cpu_read(cpu_tlbstate.is_lazy)) {
+	if (this_cpu_read(cpu_tlbstate_shared.is_lazy)) {
 		/*
 		 * We're in lazy mode.  We need to at least flush our
 		 * paging-structure cache to avoid speculatively reading
@@ -660,11 +660,14 @@ static void flush_tlb_func_remote(void *info)
 
 static inline bool tlb_is_not_lazy(int cpu)
 {
-	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
+	return !per_cpu(cpu_tlbstate_shared.is_lazy, cpu);
 }
 
 static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
 
+DEFINE_PER_CPU_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
+EXPORT_PER_CPU_SYMBOL(cpu_tlbstate_shared);
+
 void native_flush_tlb_multi(const struct cpumask *cpumask,
 			    const struct flush_tlb_info *info)
 {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 9/9] x86/apic: Use non-atomic operations when possible
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
                   ` (7 preceding siblings ...)
  2019-06-13  6:48 ` [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate Nadav Amit
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-23 12:16   ` [tip:x86/apic] " tip-bot for Nadav Amit
  2019-06-25 21:58   ` [PATCH 9/9] " Dave Hansen
  2019-06-23 12:37 ` [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Thomas Gleixner
  2019-06-25 22:02 ` Dave Hansen
  10 siblings, 2 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit

Using __clear_bit() and __cpumask_clear_cpu() is more efficient than
using their atomic counterparts. Use them when atomicity is not needed,
such as when manipulating bitmasks that are on the stack.

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/apic/apic_flat_64.c   | 4 ++--
 arch/x86/kernel/apic/x2apic_cluster.c | 2 +-
 arch/x86/kernel/smp.c                 | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/apic/apic_flat_64.c b/arch/x86/kernel/apic/apic_flat_64.c
index bf083c3f1d73..bbdca603f94a 100644
--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -78,7 +78,7 @@ flat_send_IPI_mask_allbutself(const struct cpumask *cpumask, int vector)
 	int cpu = smp_processor_id();
 
 	if (cpu < BITS_PER_LONG)
-		clear_bit(cpu, &mask);
+		__clear_bit(cpu, &mask);
 
 	_flat_send_IPI_mask(mask, vector);
 }
@@ -92,7 +92,7 @@ static void flat_send_IPI_allbutself(int vector)
 			unsigned long mask = cpumask_bits(cpu_online_mask)[0];
 
 			if (cpu < BITS_PER_LONG)
-				clear_bit(cpu, &mask);
+				__clear_bit(cpu, &mask);
 
 			_flat_send_IPI_mask(mask, vector);
 		}
diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
index 7685444a106b..609e499387a1 100644
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -50,7 +50,7 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
 	cpumask_copy(tmpmsk, mask);
 	/* If IPI should not be sent to self, clear current CPU */
 	if (apic_dest != APIC_DEST_ALLINC)
-		cpumask_clear_cpu(smp_processor_id(), tmpmsk);
+		__cpumask_clear_cpu(smp_processor_id(), tmpmsk);
 
 	/* Collapse cpus in a cluster so a single IPI per cluster is sent */
 	for_each_cpu(cpu, tmpmsk) {
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 4693e2f3a03e..96421f97e75c 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -144,7 +144,7 @@ void native_send_call_func_ipi(const struct cpumask *mask)
 	}
 
 	cpumask_copy(allbutself, cpu_online_mask);
-	cpumask_clear_cpu(smp_processor_id(), allbutself);
+	__cpumask_clear_cpu(smp_processor_id(), allbutself);
 
 	if (cpumask_equal(mask, allbutself) &&
 	    cpumask_equal(cpu_online_mask, cpu_callout_mask))
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate
  2019-06-13  6:48 ` [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate Nadav Amit
@ 2019-06-14 15:58   ` Sean Christopherson
  2019-06-17 17:10     ` Nadav Amit
  2019-06-25 21:52   ` Dave Hansen
  1 sibling, 1 reply; 61+ messages in thread
From: Sean Christopherson @ 2019-06-14 15:58 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Peter Zijlstra, Andy Lutomirski, linux-kernel, Ingo Molnar,
	Borislav Petkov, x86, Thomas Gleixner, Dave Hansen

On Wed, Jun 12, 2019 at 11:48:12PM -0700, Nadav Amit wrote:
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 79272938cf79..a1fea36d5292 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h

...

> @@ -439,6 +442,7 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
>  {
>  	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
>  
> +	//invpcid_flush_one(kern_pcid(loaded_mm_asid), addr);

Leftover debug/testing code.

>  	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
>  
>  	if (!static_cpu_has(X86_FEATURE_PTI))

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate
  2019-06-14 15:58   ` Sean Christopherson
@ 2019-06-17 17:10     ` Nadav Amit
  0 siblings, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-17 17:10 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen

> On Jun 14, 2019, at 8:58 AM, Sean Christopherson <sean.j.christopherson@intel.com> wrote:
> 
> On Wed, Jun 12, 2019 at 11:48:12PM -0700, Nadav Amit wrote:
>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>> index 79272938cf79..a1fea36d5292 100644
>> --- a/arch/x86/include/asm/tlbflush.h
>> +++ b/arch/x86/include/asm/tlbflush.h
> 
> ...
> 
>> @@ -439,6 +442,7 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
>> {
>> 	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
>> 
>> +	//invpcid_flush_one(kern_pcid(loaded_mm_asid), addr);
> 
> Leftover debug/testing code.

Indeed, thanks. I will fix on v2 (once I get some more feedback).


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [tip:x86/apic] x86/apic: Use non-atomic operations when possible
  2019-06-13  6:48 ` [PATCH 9/9] x86/apic: Use non-atomic operations when possible Nadav Amit
@ 2019-06-23 12:16   ` tip-bot for Nadav Amit
  2019-06-25 21:58   ` [PATCH 9/9] " Dave Hansen
  1 sibling, 0 replies; 61+ messages in thread
From: tip-bot for Nadav Amit @ 2019-06-23 12:16 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, luto, dave.hansen, bp, hpa, linux-kernel, mingo, namit, tglx

Commit-ID:  dde3626f815e38bbf96fddd5185038c4b4d395a8
Gitweb:     https://git.kernel.org/tip/dde3626f815e38bbf96fddd5185038c4b4d395a8
Author:     Nadav Amit <namit@vmware.com>
AuthorDate: Wed, 12 Jun 2019 23:48:13 -0700
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 23 Jun 2019 14:07:23 +0200

x86/apic: Use non-atomic operations when possible

Using __clear_bit() and __cpumask_clear_cpu() is more efficient than using
their atomic counterparts.

Use them when atomicity is not needed, such as when manipulating bitmasks
that are on the stack.

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lkml.kernel.org/r/20190613064813.8102-10-namit@vmware.com

---
 arch/x86/kernel/apic/apic_flat_64.c   | 4 ++--
 arch/x86/kernel/apic/x2apic_cluster.c | 2 +-
 arch/x86/kernel/smp.c                 | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/apic/apic_flat_64.c b/arch/x86/kernel/apic/apic_flat_64.c
index 0005c284a5c5..65072858f553 100644
--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -78,7 +78,7 @@ flat_send_IPI_mask_allbutself(const struct cpumask *cpumask, int vector)
 	int cpu = smp_processor_id();
 
 	if (cpu < BITS_PER_LONG)
-		clear_bit(cpu, &mask);
+		__clear_bit(cpu, &mask);
 
 	_flat_send_IPI_mask(mask, vector);
 }
@@ -92,7 +92,7 @@ static void flat_send_IPI_allbutself(int vector)
 			unsigned long mask = cpumask_bits(cpu_online_mask)[0];
 
 			if (cpu < BITS_PER_LONG)
-				clear_bit(cpu, &mask);
+				__clear_bit(cpu, &mask);
 
 			_flat_send_IPI_mask(mask, vector);
 		}
diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
index 7685444a106b..609e499387a1 100644
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -50,7 +50,7 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
 	cpumask_copy(tmpmsk, mask);
 	/* If IPI should not be sent to self, clear current CPU */
 	if (apic_dest != APIC_DEST_ALLINC)
-		cpumask_clear_cpu(smp_processor_id(), tmpmsk);
+		__cpumask_clear_cpu(smp_processor_id(), tmpmsk);
 
 	/* Collapse cpus in a cluster so a single IPI per cluster is sent */
 	for_each_cpu(cpu, tmpmsk) {
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 04adc8d60aed..acddd988602d 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -146,7 +146,7 @@ void native_send_call_func_ipi(const struct cpumask *mask)
 	}
 
 	cpumask_copy(allbutself, cpu_online_mask);
-	cpumask_clear_cpu(smp_processor_id(), allbutself);
+	__cpumask_clear_cpu(smp_processor_id(), allbutself);
 
 	if (cpumask_equal(mask, allbutself) &&
 	    cpumask_equal(cpu_online_mask, cpu_callout_mask))

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip:smp/hotplug] smp: Do not mark call_function_data as shared
  2019-06-13  6:48 ` [PATCH 7/9] smp: Do not mark call_function_data as shared Nadav Amit
@ 2019-06-23 12:31   ` tip-bot for Nadav Amit
  0 siblings, 0 replies; 61+ messages in thread
From: tip-bot for Nadav Amit @ 2019-06-23 12:31 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: dave.hansen, hpa, tglx, peterz, riel, bp, mingo, namit,
	linux-kernel, luto

Commit-ID:  a22793c79d6ea0a492ce1a308ec46df52ee9406e
Gitweb:     https://git.kernel.org/tip/a22793c79d6ea0a492ce1a308ec46df52ee9406e
Author:     Nadav Amit <namit@vmware.com>
AuthorDate: Wed, 12 Jun 2019 23:48:11 -0700
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 23 Jun 2019 14:26:25 +0200

smp: Do not mark call_function_data as shared

cfd_data is marked as shared, but although it holds pointers to shared
data structures, it is itself private to each core.

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Rik van Riel <riel@surriel.com>
Link: https://lkml.kernel.org/r/20190613064813.8102-8-namit@vmware.com

---
 kernel/smp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index d155374632eb..220ad142f5dd 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -34,7 +34,7 @@ struct call_function_data {
 	cpumask_var_t		cpumask_ipi;
 };
 
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data);
+static DEFINE_PER_CPU_ALIGNED(struct call_function_data, cfd_data);
 
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct llist_head, call_single_queue);
 

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip:smp/hotplug] smp: Remove smp_call_function() and on_each_cpu() return values
  2019-06-13  6:48 ` [PATCH 1/9] smp: Remove smp_call_function() and on_each_cpu() return values Nadav Amit
@ 2019-06-23 12:32   ` tip-bot for Nadav Amit
  0 siblings, 0 replies; 61+ messages in thread
From: tip-bot for Nadav Amit @ 2019-06-23 12:32 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: dave.hansen, namit, linux-kernel, fenghua.yu, hpa, ink, akpm,
	luto, peterz, mattst88, bp, tglx, tony.luck, mingo, rth

Commit-ID:  caa759323c73676b3e48c8d9c86093c88b4aba97
Gitweb:     https://git.kernel.org/tip/caa759323c73676b3e48c8d9c86093c88b4aba97
Author:     Nadav Amit <namit@vmware.com>
AuthorDate: Wed, 12 Jun 2019 23:48:05 -0700
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 23 Jun 2019 14:26:26 +0200

smp: Remove smp_call_function() and on_each_cpu() return values

The return value is fixed. Remove it and amend the callers.

[ tglx: Fixup arm/bL_switcher and powerpc/rtas ]

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.kernel.org/r/20190613064813.8102-2-namit@vmware.com

---
 arch/alpha/kernel/smp.c       | 19 +++++--------------
 arch/alpha/oprofile/common.c  |  6 +++---
 arch/arm/common/bL_switcher.c |  6 ++----
 arch/ia64/kernel/perfmon.c    | 12 ++----------
 arch/ia64/kernel/uncached.c   |  8 ++++----
 arch/powerpc/kernel/rtas.c    |  3 +--
 arch/x86/lib/cache-smp.c      |  3 ++-
 drivers/char/agp/generic.c    |  3 +--
 include/linux/smp.h           |  7 +++----
 kernel/smp.c                  | 10 +++-------
 kernel/up.c                   |  3 +--
 11 files changed, 27 insertions(+), 53 deletions(-)

diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index d0dccae53ba9..5f90df30be20 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ -614,8 +614,7 @@ void
 smp_imb(void)
 {
 	/* Must wait other processors to flush their icache before continue. */
-	if (on_each_cpu(ipi_imb, NULL, 1))
-		printk(KERN_CRIT "smp_imb: timed out\n");
+	on_each_cpu(ipi_imb, NULL, 1);
 }
 EXPORT_SYMBOL(smp_imb);
 
@@ -630,9 +629,7 @@ flush_tlb_all(void)
 {
 	/* Although we don't have any data to pass, we do want to
 	   synchronize with the other processors.  */
-	if (on_each_cpu(ipi_flush_tlb_all, NULL, 1)) {
-		printk(KERN_CRIT "flush_tlb_all: timed out\n");
-	}
+	on_each_cpu(ipi_flush_tlb_all, NULL, 1);
 }
 
 #define asn_locked() (cpu_data[smp_processor_id()].asn_lock)
@@ -667,9 +664,7 @@ flush_tlb_mm(struct mm_struct *mm)
 		}
 	}
 
-	if (smp_call_function(ipi_flush_tlb_mm, mm, 1)) {
-		printk(KERN_CRIT "flush_tlb_mm: timed out\n");
-	}
+	smp_call_function(ipi_flush_tlb_mm, mm, 1);
 
 	preempt_enable();
 }
@@ -720,9 +715,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 	data.mm = mm;
 	data.addr = addr;
 
-	if (smp_call_function(ipi_flush_tlb_page, &data, 1)) {
-		printk(KERN_CRIT "flush_tlb_page: timed out\n");
-	}
+	smp_call_function(ipi_flush_tlb_page, &data, 1);
 
 	preempt_enable();
 }
@@ -772,9 +765,7 @@ flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
 		}
 	}
 
-	if (smp_call_function(ipi_flush_icache_page, mm, 1)) {
-		printk(KERN_CRIT "flush_icache_page: timed out\n");
-	}
+	smp_call_function(ipi_flush_icache_page, mm, 1);
 
 	preempt_enable();
 }
diff --git a/arch/alpha/oprofile/common.c b/arch/alpha/oprofile/common.c
index 310a4ce1dccc..1b1259c7d7d1 100644
--- a/arch/alpha/oprofile/common.c
+++ b/arch/alpha/oprofile/common.c
@@ -65,7 +65,7 @@ op_axp_setup(void)
 	model->reg_setup(&reg, ctr, &sys);
 
 	/* Configure the registers on all cpus.  */
-	(void)smp_call_function(model->cpu_setup, &reg, 1);
+	smp_call_function(model->cpu_setup, &reg, 1);
 	model->cpu_setup(&reg);
 	return 0;
 }
@@ -86,7 +86,7 @@ op_axp_cpu_start(void *dummy)
 static int
 op_axp_start(void)
 {
-	(void)smp_call_function(op_axp_cpu_start, NULL, 1);
+	smp_call_function(op_axp_cpu_start, NULL, 1);
 	op_axp_cpu_start(NULL);
 	return 0;
 }
@@ -101,7 +101,7 @@ op_axp_cpu_stop(void *dummy)
 static void
 op_axp_stop(void)
 {
-	(void)smp_call_function(op_axp_cpu_stop, NULL, 1);
+	smp_call_function(op_axp_cpu_stop, NULL, 1);
 	op_axp_cpu_stop(NULL);
 }
 
diff --git a/arch/arm/common/bL_switcher.c b/arch/arm/common/bL_switcher.c
index 57f3b7512636..17bc259729e2 100644
--- a/arch/arm/common/bL_switcher.c
+++ b/arch/arm/common/bL_switcher.c
@@ -542,16 +542,14 @@ static void bL_switcher_trace_trigger_cpu(void *__always_unused info)
 
 int bL_switcher_trace_trigger(void)
 {
-	int ret;
-
 	preempt_disable();
 
 	bL_switcher_trace_trigger_cpu(NULL);
-	ret = smp_call_function(bL_switcher_trace_trigger_cpu, NULL, true);
+	smp_call_function(bL_switcher_trace_trigger_cpu, NULL, true);
 
 	preempt_enable();
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(bL_switcher_trace_trigger);
 
diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index 58a6337c0690..7c52bd2695a2 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -6390,11 +6390,7 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	}
 
 	/* save the current system wide pmu states */
-	ret = on_each_cpu(pfm_alt_save_pmu_state, NULL, 1);
-	if (ret) {
-		DPRINT(("on_each_cpu() failed: %d\n", ret));
-		goto cleanup_reserve;
-	}
+	on_each_cpu(pfm_alt_save_pmu_state, NULL, 1);
 
 	/* officially change to the alternate interrupt handler */
 	pfm_alt_intr_handler = hdl;
@@ -6421,7 +6417,6 @@ int
 pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 {
 	int i;
-	int ret;
 
 	if (hdl == NULL) return -EINVAL;
 
@@ -6435,10 +6430,7 @@ pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 
 	pfm_alt_intr_handler = NULL;
 
-	ret = on_each_cpu(pfm_alt_restore_pmu_state, NULL, 1);
-	if (ret) {
-		DPRINT(("on_each_cpu() failed: %d\n", ret));
-	}
+	on_each_cpu(pfm_alt_restore_pmu_state, NULL, 1);
 
 	for_each_online_cpu(i) {
 		pfm_unreserve_session(NULL, 1, i);
diff --git a/arch/ia64/kernel/uncached.c b/arch/ia64/kernel/uncached.c
index 583f7ff6b589..c618d0745e22 100644
--- a/arch/ia64/kernel/uncached.c
+++ b/arch/ia64/kernel/uncached.c
@@ -124,8 +124,8 @@ static int uncached_add_chunk(struct uncached_pool *uc_pool, int nid)
 	status = ia64_pal_prefetch_visibility(PAL_VISIBILITY_PHYSICAL);
 	if (status == PAL_VISIBILITY_OK_REMOTE_NEEDED) {
 		atomic_set(&uc_pool->status, 0);
-		status = smp_call_function(uncached_ipi_visibility, uc_pool, 1);
-		if (status || atomic_read(&uc_pool->status))
+		smp_call_function(uncached_ipi_visibility, uc_pool, 1);
+		if (atomic_read(&uc_pool->status))
 			goto failed;
 	} else if (status != PAL_VISIBILITY_OK)
 		goto failed;
@@ -146,8 +146,8 @@ static int uncached_add_chunk(struct uncached_pool *uc_pool, int nid)
 	if (status != PAL_STATUS_SUCCESS)
 		goto failed;
 	atomic_set(&uc_pool->status, 0);
-	status = smp_call_function(uncached_ipi_mc_drain, uc_pool, 1);
-	if (status || atomic_read(&uc_pool->status))
+	smp_call_function(uncached_ipi_mc_drain, uc_pool, 1);
+	if (atomic_read(&uc_pool->status))
 		goto failed;
 
 	/*
diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
index fbc676160adf..64d95eb6ffff 100644
--- a/arch/powerpc/kernel/rtas.c
+++ b/arch/powerpc/kernel/rtas.c
@@ -994,8 +994,7 @@ int rtas_ibm_suspend_me(u64 handle)
 	/* Call function on all CPUs.  One of us will make the
 	 * rtas call
 	 */
-	if (on_each_cpu(rtas_percpu_suspend_me, &data, 0))
-		atomic_set(&data.error, -EINVAL);
+	on_each_cpu(rtas_percpu_suspend_me, &data, 0);
 
 	wait_for_completion(&done);
 
diff --git a/arch/x86/lib/cache-smp.c b/arch/x86/lib/cache-smp.c
index 1811fa4a1b1a..7c48ff4ae8d1 100644
--- a/arch/x86/lib/cache-smp.c
+++ b/arch/x86/lib/cache-smp.c
@@ -15,6 +15,7 @@ EXPORT_SYMBOL(wbinvd_on_cpu);
 
 int wbinvd_on_all_cpus(void)
 {
-	return on_each_cpu(__wbinvd, NULL, 1);
+	on_each_cpu(__wbinvd, NULL, 1);
+	return 0;
 }
 EXPORT_SYMBOL(wbinvd_on_all_cpus);
diff --git a/drivers/char/agp/generic.c b/drivers/char/agp/generic.c
index 658664a5a5aa..df1edb5ec0ad 100644
--- a/drivers/char/agp/generic.c
+++ b/drivers/char/agp/generic.c
@@ -1311,8 +1311,7 @@ static void ipi_handler(void *null)
 
 void global_cache_flush(void)
 {
-	if (on_each_cpu(ipi_handler, NULL, 1) != 0)
-		panic(PFX "timed out waiting for the other CPUs!\n");
+	on_each_cpu(ipi_handler, NULL, 1);
 }
 EXPORT_SYMBOL(global_cache_flush);
 
diff --git a/include/linux/smp.h b/include/linux/smp.h
index a56f08ff3097..bb8b451ab01f 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -35,7 +35,7 @@ int smp_call_function_single(int cpuid, smp_call_func_t func, void *info,
 /*
  * Call a function on all processors
  */
-int on_each_cpu(smp_call_func_t func, void *info, int wait);
+void on_each_cpu(smp_call_func_t func, void *info, int wait);
 
 /*
  * Call a function on processors specified by mask, which might include
@@ -101,7 +101,7 @@ extern void smp_cpus_done(unsigned int max_cpus);
 /*
  * Call a function on all other processors
  */
-int smp_call_function(smp_call_func_t func, void *info, int wait);
+void smp_call_function(smp_call_func_t func, void *info, int wait);
 void smp_call_function_many(const struct cpumask *mask,
 			    smp_call_func_t func, void *info, bool wait);
 
@@ -144,9 +144,8 @@ static inline void smp_send_stop(void) { }
  *	These macros fold the SMP functionality into a single CPU system
  */
 #define raw_smp_processor_id()			0
-static inline int up_smp_call_function(smp_call_func_t func, void *info)
+static inline void up_smp_call_function(smp_call_func_t func, void *info)
 {
-	return 0;
 }
 #define smp_call_function(func, info, wait) \
 			(up_smp_call_function(func, info))
diff --git a/kernel/smp.c b/kernel/smp.c
index 220ad142f5dd..616d4d114847 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -487,13 +487,11 @@ EXPORT_SYMBOL(smp_call_function_many);
  * You must not call this function with disabled interrupts or from a
  * hardware interrupt handler or from a bottom half handler.
  */
-int smp_call_function(smp_call_func_t func, void *info, int wait)
+void smp_call_function(smp_call_func_t func, void *info, int wait)
 {
 	preempt_disable();
 	smp_call_function_many(cpu_online_mask, func, info, wait);
 	preempt_enable();
-
-	return 0;
 }
 EXPORT_SYMBOL(smp_call_function);
 
@@ -594,18 +592,16 @@ void __init smp_init(void)
  * early_boot_irqs_disabled is set.  Use local_irq_save/restore() instead
  * of local_irq_disable/enable().
  */
-int on_each_cpu(void (*func) (void *info), void *info, int wait)
+void on_each_cpu(void (*func) (void *info), void *info, int wait)
 {
 	unsigned long flags;
-	int ret = 0;
 
 	preempt_disable();
-	ret = smp_call_function(func, info, wait);
+	smp_call_function(func, info, wait);
 	local_irq_save(flags);
 	func(info);
 	local_irq_restore(flags);
 	preempt_enable();
-	return ret;
 }
 EXPORT_SYMBOL(on_each_cpu);
 
diff --git a/kernel/up.c b/kernel/up.c
index 483c9962c999..862b460ab97a 100644
--- a/kernel/up.c
+++ b/kernel/up.c
@@ -35,14 +35,13 @@ int smp_call_function_single_async(int cpu, call_single_data_t *csd)
 }
 EXPORT_SYMBOL(smp_call_function_single_async);
 
-int on_each_cpu(smp_call_func_t func, void *info, int wait)
+void on_each_cpu(smp_call_func_t func, void *info, int wait)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
 	func(info);
 	local_irq_restore(flags);
-	return 0;
 }
 EXPORT_SYMBOL(on_each_cpu);
 


* Re: [PATCH 0/9] x86: Concurrent TLB flushes and other improvements
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
                   ` (8 preceding siblings ...)
  2019-06-13  6:48 ` [PATCH 9/9] x86/apic: Use non-atomic operations when possible Nadav Amit
@ 2019-06-23 12:37 ` Thomas Gleixner
  2019-06-25 22:02 ` Dave Hansen
  10 siblings, 0 replies; 61+ messages in thread
From: Thomas Gleixner @ 2019-06-23 12:37 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Peter Zijlstra, Andy Lutomirski, linux-kernel, Ingo Molnar,
	Borislav Petkov, x86, Dave Hansen, Richard Henderson,
	Ivan Kokshaysky, Matt Turner, Tony Luck, Fenghua Yu,
	Andrew Morton, Rik van Riel, Josh Poimboeuf, Paolo Bonzini

Nadav,

On Wed, 12 Jun 2019, Nadav Amit wrote:

> Patch 1 does small cleanup. Patches 2-5 implement the concurrent
> execution of TLB flushes. Patches 6-9 deal with false-sharing and
> unnecessary atomic operations. There is still no implementation that
> uses the concurrent TLB flushes by Xen and Hyper-V. 

I've picked up the patches which make sense independent of the TLB
optimization. So they are out of your way :)

> There are various additional possible optimizations that were sent or
> are in development (async flushes, x2apic shorthands, fewer mm_tlb_gen
> accesses, etc.), but based on Andy's feedback, they will be sent later.

Vs. x2apic shorthands. It's not only x2apic. We generally do not make use
of shorthands when CPU hotplug is enabled as that needs some deep care
vs. the state whether a present CPU had been brought up at least once,
similar to the MCE issue which causes us to do the online/offline dance for
SMT control at boot. There are a few other nasty details like the APIC
state on CPU hot unplug and NMI shorthands.

I have a WIP series in the pipeline which I'm going to post soon. I'll put
you on CC.

For the TLB related parts, I delegate the review happily to our x86 mm/tlb
wizards. Hint, hint ...

Thanks,

	tglx


* Re: [PATCH 3/9] x86/mm/tlb: Refactor common code into flush_tlb_on_cpus()
  2019-06-13  6:48 ` [PATCH 3/9] x86/mm/tlb: Refactor common code into flush_tlb_on_cpus() Nadav Amit
@ 2019-06-25 21:07   ` Dave Hansen
  2019-06-26  1:57     ` Nadav Amit
  0 siblings, 1 reply; 61+ messages in thread
From: Dave Hansen @ 2019-06-25 21:07 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen

On 6/12/19 11:48 PM, Nadav Amit wrote:
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 91f6db92554c..c34bcf03f06f 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -734,7 +734,11 @@ static inline struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
>  			unsigned int stride_shift, bool freed_tables,
>  			u64 new_tlb_gen)
>  {
> -	struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);
> +	struct flush_tlb_info *info;
> +
> +	preempt_disable();
> +
> +	info = this_cpu_ptr(&flush_tlb_info);
>  
>  #ifdef CONFIG_DEBUG_VM
>  	/*
> @@ -762,6 +766,23 @@ static inline void put_flush_tlb_info(void)
>  	barrier();
>  	this_cpu_dec(flush_tlb_info_idx);
>  #endif
> +	preempt_enable();
> +}

The addition of this disable/enable pair is unchangelogged and
uncommented.  I think it makes sense since we do need to make sure we
stay on this CPU, but it would be nice to mention.

> +static void flush_tlb_on_cpus(const cpumask_t *cpumask,
> +			      const struct flush_tlb_info *info)
> +{

Might be nice to mention that preempt must be disabled.  It's kinda
implied from the smp_processor_id(), but being explicit is always nice too.

> +	int this_cpu = smp_processor_id();
> +
> +	if (cpumask_test_cpu(this_cpu, cpumask)) {
> +		lockdep_assert_irqs_enabled();
> +		local_irq_disable();
> +		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
> +		local_irq_enable();
> +	}
> +
> +	if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
> +		flush_tlb_others(cpumask, info);
>  }
>  
>  void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
> @@ -770,9 +791,6 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
>  {
>  	struct flush_tlb_info *info;
>  	u64 new_tlb_gen;
> -	int cpu;
> -
> -	cpu = get_cpu();
>  
>  	/* Should we flush just the requested range? */
>  	if ((end == TLB_FLUSH_ALL) ||
> @@ -787,18 +805,18 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
>  	info = get_flush_tlb_info(mm, start, end, stride_shift, freed_tables,
>  				  new_tlb_gen);
>  
> -	if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
> -		lockdep_assert_irqs_enabled();
> -		local_irq_disable();
> -		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
> -		local_irq_enable();
> -	}
> +	/*
> +	 * Assert that mm_cpumask() corresponds with the loaded mm. We got one
> +	 * exception: for init_mm we do not need to flush anything, and the
> +	 * cpumask does not correspond with loaded_mm.
> +	 */
> +	VM_WARN_ON_ONCE(cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)) !=
> +			(mm == this_cpu_read(cpu_tlbstate.loaded_mm)) &&
> +			mm != &init_mm);

Very very cool.  You thought "these should be equivalent", and you added
a corresponding warning to ensure they are.

> -	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
> -		flush_tlb_others(mm_cpumask(mm), info);
> +	flush_tlb_on_cpus(mm_cpumask(mm), info);
>  
>  	put_flush_tlb_info();
> -	put_cpu();
>  }



Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>



* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-13  6:48   ` Nadav Amit via Virtualization
@ 2019-06-25 21:29     ` Dave Hansen
  -1 siblings, 0 replies; 61+ messages in thread
From: Dave Hansen @ 2019-06-25 21:29 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Sasha Levin, Juergen Gross, Paolo Bonzini, Boris Ostrovsky,
	linux-hyperv, virtualization, kvm, xen-devel

On 6/12/19 11:48 PM, Nadav Amit wrote:
> To improve TLB shootdown performance, flush the remote and local TLBs
> concurrently. Introduce flush_tlb_multi() that does so. The current
> flush_tlb_others() interface is kept, since paravirtual interfaces need
> to be adapted first before it can be removed. This is left for future
> work. In such PV environments, TLB flushes are not performed, at this
> time, concurrently.
> 
> Add a static key to tell whether this new interface is supported.
> 
> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
> Cc: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Stephen Hemminger <sthemmin@microsoft.com>
> Cc: Sasha Levin <sashal@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: x86@kernel.org
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: linux-hyperv@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: virtualization@lists.linux-foundation.org
> Cc: kvm@vger.kernel.org
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
>  arch/x86/hyperv/mmu.c                 |  2 +
>  arch/x86/include/asm/paravirt.h       |  8 +++
>  arch/x86/include/asm/paravirt_types.h |  6 +++
>  arch/x86/include/asm/tlbflush.h       |  6 +++
>  arch/x86/kernel/kvm.c                 |  1 +
>  arch/x86/kernel/paravirt.c            |  3 ++
>  arch/x86/mm/tlb.c                     | 71 ++++++++++++++++++++++-----
>  arch/x86/xen/mmu_pv.c                 |  2 +
>  8 files changed, 87 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
> index e65d7fe6489f..ca28b400c87c 100644
> --- a/arch/x86/hyperv/mmu.c
> +++ b/arch/x86/hyperv/mmu.c
> @@ -233,4 +233,6 @@ void hyperv_setup_mmu_ops(void)
>  	pr_info("Using hypercall for remote TLB flush\n");
>  	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
>  	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
> +
> +	static_key_disable(&flush_tlb_multi_enabled.key);
>  }
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index c25c38a05c1c..192be7254457 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -47,6 +47,8 @@ static inline void slow_down_io(void)
>  #endif
>  }
>  
> +DECLARE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
> +
>  static inline void __flush_tlb(void)
>  {
>  	PVOP_VCALL0(mmu.flush_tlb_user);
> @@ -62,6 +64,12 @@ static inline void __flush_tlb_one_user(unsigned long addr)
>  	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
>  }
>  
> +static inline void flush_tlb_multi(const struct cpumask *cpumask,
> +				   const struct flush_tlb_info *info)
> +{
> +	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
> +}
> +
>  static inline void flush_tlb_others(const struct cpumask *cpumask,
>  				    const struct flush_tlb_info *info)
>  {
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 946f8f1f1efc..b93b3d90729a 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -211,6 +211,12 @@ struct pv_mmu_ops {
>  	void (*flush_tlb_user)(void);
>  	void (*flush_tlb_kernel)(void);
>  	void (*flush_tlb_one_user)(unsigned long addr);
> +	/*
> +	 * flush_tlb_multi() is the preferred interface, which is capable to
> +	 * flush both local and remote CPUs.
> +	 */
> +	void (*flush_tlb_multi)(const struct cpumask *cpus,
> +				const struct flush_tlb_info *info);
>  	void (*flush_tlb_others)(const struct cpumask *cpus,
>  				 const struct flush_tlb_info *info);
>  
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index dee375831962..79272938cf79 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -569,6 +569,9 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>  	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>  }
>  
> +void native_flush_tlb_multi(const struct cpumask *cpumask,
> +			     const struct flush_tlb_info *info);
> +
>  void native_flush_tlb_others(const struct cpumask *cpumask,
>  			     const struct flush_tlb_info *info);
>  
> @@ -593,6 +596,9 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>  extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>  
>  #ifndef CONFIG_PARAVIRT
> +#define flush_tlb_multi(mask, info)	\
> +	native_flush_tlb_multi(mask, info)
> +
>  #define flush_tlb_others(mask, info)	\
>  	native_flush_tlb_others(mask, info)
>  
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5169b8cc35bb..00d81e898717 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -630,6 +630,7 @@ static void __init kvm_guest_init(void)
>  	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
>  		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
>  		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
> +		static_key_disable(&flush_tlb_multi_enabled.key);
>  	}
>  
>  	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index 98039d7fb998..ac00afed5570 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -159,6 +159,8 @@ unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
>  	return insn_len;
>  }
>  
> +DEFINE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
> +
>  static void native_flush_tlb(void)
>  {
>  	__native_flush_tlb();
> @@ -363,6 +365,7 @@ struct paravirt_patch_template pv_ops = {
>  	.mmu.flush_tlb_user	= native_flush_tlb,
>  	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
>  	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
> +	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
>  	.mmu.flush_tlb_others	= native_flush_tlb_others,
>  	.mmu.tlb_remove_table	=
>  			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index c34bcf03f06f..db73d5f1dd43 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>  		 * garbage into our TLB.  Since switching to init_mm is barely
>  		 * slower than a minimal flush, just switch to init_mm.
>  		 *
> -		 * This should be rare, with native_flush_tlb_others skipping
> +		 * This should be rare, with native_flush_tlb_multi skipping
>  		 * IPIs to lazy TLB mode CPUs.
>  		 */

Nit, since we're messing with this, it can now be
"native_flush_tlb_multi()" since it is a function.

>  		switch_mm_irqs_off(NULL, &init_mm, NULL);
> @@ -635,9 +635,12 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>  	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
>  }
>  
> -static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
> +static void flush_tlb_func_local(void *info)
>  {
>  	const struct flush_tlb_info *f = info;
> +	enum tlb_flush_reason reason;
> +
> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;

Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
it like this, but seems like it would be nicer and easier to track down
the origins of these things if we did this at the caller.

>  	flush_tlb_func_common(f, true, reason);
>  }
> @@ -655,14 +658,21 @@ static void flush_tlb_func_remote(void *info)
>  	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
>  }
>  
> -static bool tlb_is_not_lazy(int cpu, void *data)
> +static inline bool tlb_is_not_lazy(int cpu)
>  {
>  	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
>  }

Nit: the compiler will probably inline this sucker anyway.  So, for
these kinds of patches, I'd resist the urge to do these kinds of tweaks,
especially since it starts to hide the important change on the line.

> -void native_flush_tlb_others(const struct cpumask *cpumask,
> -			     const struct flush_tlb_info *info)
> +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
> +
> +void native_flush_tlb_multi(const struct cpumask *cpumask,
> +			    const struct flush_tlb_info *info)
>  {
> +	/*
> +	 * Do accounting and tracing. Note that there are (and have always been)
> +	 * cases in which a remote TLB flush will be traced, but eventually
> +	 * would not happen.
> +	 */
>  	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
>  	if (info->end == TLB_FLUSH_ALL)
>  		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
> @@ -682,10 +692,14 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>  		 * means that the percpu tlb_gen variables won't be updated
>  		 * and we'll do pointless flushes on future context switches.
>  		 *
> -		 * Rather than hooking native_flush_tlb_others() here, I think
> +		 * Rather than hooking native_flush_tlb_multi() here, I think
>  		 * that UV should be updated so that smp_call_function_many(),
>  		 * etc, are optimal on UV.
>  		 */
> +		local_irq_disable();
> +		flush_tlb_func_local((__force void *)info);
> +		local_irq_enable();
> +
>  		cpumask = uv_flush_tlb_others(cpumask, info);
>  		if (cpumask)
>  			smp_call_function_many(cpumask, flush_tlb_func_remote,
> @@ -704,11 +718,39 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>  	 * doing a speculative memory access.
>  	 */
>  	if (info->freed_tables)
> -		smp_call_function_many(cpumask, flush_tlb_func_remote,
> -			       (void *)info, 1);
> -	else
> -		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
> -				(void *)info, 1, GFP_ATOMIC, cpumask);
> +		__smp_call_function_many(cpumask, flush_tlb_func_remote,
> +					 flush_tlb_func_local, (void *)info, 1);
> +	else {

I prefer brackets be added for 'if' blocks like this since it doesn't
take up any meaningful space and makes it less prone to compile errors.

> +		/*
> +		 * Although we could have used on_each_cpu_cond_mask(),
> +		 * open-coding it has several performance advantages: (1) we can
> +		 * use specialized functions for remote and local flushes; (2)
> +		 * no need for indirect branch to test if TLB is lazy; (3) we
> +		 * can use a designated cpumask for evaluating the condition
> +		 * instead of allocating a new one.
> +		 *
> +		 * This works under the assumption that there are no nested TLB
> +		 * flushes, an assumption that is already made in
> +		 * flush_tlb_mm_range().
> +		 */
> +		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);

This is logically a stack-local variable, right?  But, since we've got
preempt off and cpumasks can be huge, we don't want to allocate it on
the stack.  That might be worth a comment somewhere.

> +		int cpu;
> +
> +		cpumask_clear(cond_cpumask);
> +
> +		for_each_cpu(cpu, cpumask) {
> +			if (tlb_is_not_lazy(cpu))
> +				__cpumask_set_cpu(cpu, cond_cpumask);
> +		}

FWIW, it's probably worth calling out in the changelog that this loop
exists in on_each_cpu_cond_mask() too.  It looks bad here, but it's no
worse than what it replaces.

> +		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
> +					 flush_tlb_func_local, (void *)info, 1);
> +	}
> +}

There was a __force on an earlier 'info' cast.  Could you talk about
that for a minute and explain why that one is needed?

> +void native_flush_tlb_others(const struct cpumask *cpumask,
> +			     const struct flush_tlb_info *info)
> +{
> +	native_flush_tlb_multi(cpumask, info);
>  }
>  
>  /*
> @@ -774,10 +816,15 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
>  {
>  	int this_cpu = smp_processor_id();
>  
> +	if (static_branch_likely(&flush_tlb_multi_enabled)) {
> +		flush_tlb_multi(cpumask, info);
> +		return;
> +	}

Probably needs a comment for posterity above the if()^^:

	/* Use the optimized flush_tlb_multi() where we can. */

> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -2474,6 +2474,8 @@ void __init xen_init_mmu_ops(void)
>  
>  	pv_ops.mmu = xen_mmu_ops;
>  
> +	static_key_disable(&flush_tlb_multi_enabled.key);
> +
>  	memset(dummy_mapping, 0xff, PAGE_SIZE);
>  }

More comments, please.  Perhaps:

	Existing paravirt TLB flushes are incompatible with
	flush_tlb_multi() because....  Disable it when they are
	in use.


* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
@ 2019-06-25 21:29     ` Dave Hansen
  0 siblings, 0 replies; 61+ messages in thread
From: Dave Hansen @ 2019-06-25 21:29 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: Sasha Levin, Juergen Gross, linux-hyperv, Dave Hansen,
	Stephen Hemminger, kvm, Haiyang Zhang, x86, linux-kernel,
	virtualization, Ingo Molnar, Borislav Petkov, xen-devel,
	Paolo Bonzini, Thomas Gleixner, Boris Ostrovsky

On 6/12/19 11:48 PM, Nadav Amit wrote:
> To improve TLB shootdown performance, flush the remote and local TLBs
> concurrently. Introduce flush_tlb_multi() that does so. The current
> flush_tlb_others() interface is kept, since paravirtual interfaces need
> to be adapted first before it can be removed. This is left for future
> work. In such PV environments, TLB flushes are not performed, at this
> time, concurrently.
> 
> Add a static key to tell whether this new interface is supported.
> 
> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
> Cc: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Stephen Hemminger <sthemmin@microsoft.com>
> Cc: Sasha Levin <sashal@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: x86@kernel.org
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: linux-hyperv@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: virtualization@lists.linux-foundation.org
> Cc: kvm@vger.kernel.org
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
>  arch/x86/hyperv/mmu.c                 |  2 +
>  arch/x86/include/asm/paravirt.h       |  8 +++
>  arch/x86/include/asm/paravirt_types.h |  6 +++
>  arch/x86/include/asm/tlbflush.h       |  6 +++
>  arch/x86/kernel/kvm.c                 |  1 +
>  arch/x86/kernel/paravirt.c            |  3 ++
>  arch/x86/mm/tlb.c                     | 71 ++++++++++++++++++++++-----
>  arch/x86/xen/mmu_pv.c                 |  2 +
>  8 files changed, 87 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
> index e65d7fe6489f..ca28b400c87c 100644
> --- a/arch/x86/hyperv/mmu.c
> +++ b/arch/x86/hyperv/mmu.c
> @@ -233,4 +233,6 @@ void hyperv_setup_mmu_ops(void)
>  	pr_info("Using hypercall for remote TLB flush\n");
>  	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
>  	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
> +
> +	static_key_disable(&flush_tlb_multi_enabled.key);
>  }
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index c25c38a05c1c..192be7254457 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -47,6 +47,8 @@ static inline void slow_down_io(void)
>  #endif
>  }
>  
> +DECLARE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
> +
>  static inline void __flush_tlb(void)
>  {
>  	PVOP_VCALL0(mmu.flush_tlb_user);
> @@ -62,6 +64,12 @@ static inline void __flush_tlb_one_user(unsigned long addr)
>  	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
>  }
>  
> +static inline void flush_tlb_multi(const struct cpumask *cpumask,
> +				   const struct flush_tlb_info *info)
> +{
> +	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
> +}
> +
>  static inline void flush_tlb_others(const struct cpumask *cpumask,
>  				    const struct flush_tlb_info *info)
>  {
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 946f8f1f1efc..b93b3d90729a 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -211,6 +211,12 @@ struct pv_mmu_ops {
>  	void (*flush_tlb_user)(void);
>  	void (*flush_tlb_kernel)(void);
>  	void (*flush_tlb_one_user)(unsigned long addr);
> +	/*
> +	 * flush_tlb_multi() is the preferred interface, which is capable to
> +	 * flush both local and remote CPUs.
> +	 */
> +	void (*flush_tlb_multi)(const struct cpumask *cpus,
> +				const struct flush_tlb_info *info);
>  	void (*flush_tlb_others)(const struct cpumask *cpus,
>  				 const struct flush_tlb_info *info);
>  
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index dee375831962..79272938cf79 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -569,6 +569,9 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>  	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>  }
>  
> +void native_flush_tlb_multi(const struct cpumask *cpumask,
> +			     const struct flush_tlb_info *info);
> +
>  void native_flush_tlb_others(const struct cpumask *cpumask,
>  			     const struct flush_tlb_info *info);
>  
> @@ -593,6 +596,9 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>  extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>  
>  #ifndef CONFIG_PARAVIRT
> +#define flush_tlb_multi(mask, info)	\
> +	native_flush_tlb_multi(mask, info)
> +
>  #define flush_tlb_others(mask, info)	\
>  	native_flush_tlb_others(mask, info)
>  
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5169b8cc35bb..00d81e898717 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -630,6 +630,7 @@ static void __init kvm_guest_init(void)
>  	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
>  		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
>  		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
> +		static_key_disable(&flush_tlb_multi_enabled.key);
>  	}
>  
>  	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index 98039d7fb998..ac00afed5570 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -159,6 +159,8 @@ unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
>  	return insn_len;
>  }
>  
> +DEFINE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
> +
>  static void native_flush_tlb(void)
>  {
>  	__native_flush_tlb();
> @@ -363,6 +365,7 @@ struct paravirt_patch_template pv_ops = {
>  	.mmu.flush_tlb_user	= native_flush_tlb,
>  	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
>  	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
> +	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
>  	.mmu.flush_tlb_others	= native_flush_tlb_others,
>  	.mmu.tlb_remove_table	=
>  			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index c34bcf03f06f..db73d5f1dd43 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>  		 * garbage into our TLB.  Since switching to init_mm is barely
>  		 * slower than a minimal flush, just switch to init_mm.
>  		 *
> -		 * This should be rare, with native_flush_tlb_others skipping
> +		 * This should be rare, with native_flush_tlb_multi skipping
>  		 * IPIs to lazy TLB mode CPUs.
>  		 */

Nit, since we're messing with this, it can now be
"native_flush_tlb_multi()" since it is a function.

>  		switch_mm_irqs_off(NULL, &init_mm, NULL);
> @@ -635,9 +635,12 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>  	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
>  }
>  
> -static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
> +static void flush_tlb_func_local(void *info)
>  {
>  	const struct flush_tlb_info *f = info;
> +	enum tlb_flush_reason reason;
> +
> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;

Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
it like this, but seems like it would be nicer and easier to track down
the origins of these things if we did this at the caller.

>  	flush_tlb_func_common(f, true, reason);
>  }
> @@ -655,14 +658,21 @@ static void flush_tlb_func_remote(void *info)
>  	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
>  }
>  
> -static bool tlb_is_not_lazy(int cpu, void *data)
> +static inline bool tlb_is_not_lazy(int cpu)
>  {
>  	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
>  }

Nit: the compiler will probably inline this sucker anyway.  So, for
these kinds of patches, I'd resist the urge to do these kinds of tweaks,
especially since it starts to hide the important change on the line.

> -void native_flush_tlb_others(const struct cpumask *cpumask,
> -			     const struct flush_tlb_info *info)
> +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
> +
> +void native_flush_tlb_multi(const struct cpumask *cpumask,
> +			    const struct flush_tlb_info *info)
>  {
> +	/*
> +	 * Do accounting and tracing. Note that there are (and have always been)
> +	 * cases in which a remote TLB flush will be traced, but eventually
> +	 * would not happen.
> +	 */
>  	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
>  	if (info->end == TLB_FLUSH_ALL)
>  		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
> @@ -682,10 +692,14 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>  		 * means that the percpu tlb_gen variables won't be updated
>  		 * and we'll do pointless flushes on future context switches.
>  		 *
> -		 * Rather than hooking native_flush_tlb_others() here, I think
> +		 * Rather than hooking native_flush_tlb_multi() here, I think
>  		 * that UV should be updated so that smp_call_function_many(),
>  		 * etc, are optimal on UV.
>  		 */
> +		local_irq_disable();
> +		flush_tlb_func_local((__force void *)info);
> +		local_irq_enable();
> +
>  		cpumask = uv_flush_tlb_others(cpumask, info);
>  		if (cpumask)
>  			smp_call_function_many(cpumask, flush_tlb_func_remote,
> @@ -704,11 +718,39 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>  	 * doing a speculative memory access.
>  	 */
>  	if (info->freed_tables)
> -		smp_call_function_many(cpumask, flush_tlb_func_remote,
> -			       (void *)info, 1);
> -	else
> -		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
> -				(void *)info, 1, GFP_ATOMIC, cpumask);
> +		__smp_call_function_many(cpumask, flush_tlb_func_remote,
> +					 flush_tlb_func_local, (void *)info, 1);
> +	else {

I prefer braces be added for 'if' blocks like this since it doesn't
take up any meaningful space and makes it less prone to compile errors.

> +		/*
> +		 * Although we could have used on_each_cpu_cond_mask(),
> +		 * open-coding it has several performance advantages: (1) we can
> +		 * use specialized functions for remote and local flushes; (2)
> +		 * no need for indirect branch to test if TLB is lazy; (3) we
> +		 * can use a designated cpumask for evaluating the condition
> +		 * instead of allocating a new one.
> +		 *
> +		 * This works under the assumption that there are no nested TLB
> +		 * flushes, an assumption that is already made in
> +		 * flush_tlb_mm_range().
> +		 */
> +		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);

This is logically a stack-local variable, right?  But, since we've got
preempt off and cpumasks can be huge, we don't want to allocate it on
the stack.  That might be worth a comment somewhere.

> +		int cpu;
> +
> +		cpumask_clear(cond_cpumask);
> +
> +		for_each_cpu(cpu, cpumask) {
> +			if (tlb_is_not_lazy(cpu))
> +				__cpumask_set_cpu(cpu, cond_cpumask);
> +		}

FWIW, it's probably worth calling out in the changelog that this loop
exists in on_each_cpu_cond_mask() too.  It looks bad here, but it's no
worse than what it replaces.

> +		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
> +					 flush_tlb_func_local, (void *)info, 1);
> +	}
> +}

There was a __force on an earlier 'info' cast.  Could you talk about
that for a minute and explain why that one is needed?

> +void native_flush_tlb_others(const struct cpumask *cpumask,
> +			     const struct flush_tlb_info *info)
> +{
> +	native_flush_tlb_multi(cpumask, info);
>  }
>  
>  /*
> @@ -774,10 +816,15 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
>  {
>  	int this_cpu = smp_processor_id();
>  
> +	if (static_branch_likely(&flush_tlb_multi_enabled)) {
> +		flush_tlb_multi(cpumask, info);
> +		return;
> +	}

Probably needs a comment for posterity above the if()^^:

	/* Use the optimized flush_tlb_multi() where we can. */

> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -2474,6 +2474,8 @@ void __init xen_init_mmu_ops(void)
>  
>  	pv_ops.mmu = xen_mmu_ops;
>  
> +	static_key_disable(&flush_tlb_multi_enabled.key);
> +
>  	memset(dummy_mapping, 0xff, PAGE_SIZE);
>  }

More comments, please.  Perhaps:

	Existing paravirt TLB flushes are incompatible with
	flush_tlb_multi() because....  Disable it when they are
	in use.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 5/9] x86/mm/tlb: Optimize local TLB flushes
  2019-06-13  6:48 ` [PATCH 5/9] x86/mm/tlb: Optimize local TLB flushes Nadav Amit
@ 2019-06-25 21:36   ` Dave Hansen
  2019-06-26 16:33     ` Andy Lutomirski
  0 siblings, 1 reply; 61+ messages in thread
From: Dave Hansen @ 2019-06-25 21:36 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen

On 6/12/19 11:48 PM, Nadav Amit wrote:
> While the updated smp infrastructure is capable of running a function on
> a single local core, it is not optimized for this case. 

OK, so flush_tlb_multi() is optimized for flushing local+remote at the
same time and is also (near?) the most optimal way to flush remote-only.
But it's not as optimized for doing local-only flushes, whereas
flush_tlb_on_cpus() *is* optimized for local-only flushes.

Right?

Can we distill that down to any particular advice that we can comment
these suckers with?  For instance, flush_tlb_multi() is apparently not
safe to call by itself without a 'flush_tlb_multi_enabled' check
first.  It's also suboptimal for local flushes, apparently.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-13  6:48 ` [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi() Nadav Amit
@ 2019-06-25 21:40   ` Dave Hansen
  2019-06-26  2:39     ` Nadav Amit
  0 siblings, 1 reply; 61+ messages in thread
From: Dave Hansen @ 2019-06-25 21:40 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

On 6/12/19 11:48 PM, Nadav Amit wrote:
> Support the new interface of flush_tlb_multi, which also flushes the
> local CPU's TLB, instead of flush_tlb_others that does not. This
> interface is more performant since it parallelizes remote and local TLB
> flushes.
> 
> The actual implementation of flush_tlb_multi() is almost identical to
> that of flush_tlb_others().

This confused me a bit.  I thought we didn't support paravirtualized
flush_tlb_multi() from reading earlier in the series.

But it seems like that might be Xen-only and doesn't apply to KVM, and
paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
that right?  It might be good to include some of that background in the
changelog to set the context.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate
  2019-06-13  6:48 ` [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate Nadav Amit
  2019-06-14 15:58   ` Sean Christopherson
@ 2019-06-25 21:52   ` Dave Hansen
  2019-06-26  1:22     ` Nadav Amit
  2019-06-26  3:57     ` Andy Lutomirski
  1 sibling, 2 replies; 61+ messages in thread
From: Dave Hansen @ 2019-06-25 21:52 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen

On 6/12/19 11:48 PM, Nadav Amit wrote:
> cpu_tlbstate is mostly private and only the variable is_lazy is shared.
> This causes some false-sharing when TLB flushes are performed.

Presumably, all CPUs doing TLB flushes read 'is_lazy'.  Because of this,
when we write to it we have to do the cache coherency dance to get rid
of all the CPUs that might have a read-only copy.

I would have *thought* that we only do writes when we enter or exit
lazy mode.  That's partially true.  We do write in enter_lazy_tlb(), but
we also *unconditionally* write in switch_mm_irqs_off().  That seems
like it might be responsible for a chunk (or even a vast majority) of
the cacheline bounces.

Is there anything preventing us from turning the switch_mm_irqs_off()
write into:

	if (was_lazy)
		this_cpu_write(cpu_tlbstate.is_lazy, false);

?

I think this patch is probably still a good general idea, but I just
wonder if reducing the writes is a better way to reduce bounces.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 9/9] x86/apic: Use non-atomic operations when possible
  2019-06-13  6:48 ` [PATCH 9/9] x86/apic: Use non-atomic operations when possible Nadav Amit
  2019-06-23 12:16   ` [tip:x86/apic] " tip-bot for Nadav Amit
@ 2019-06-25 21:58   ` Dave Hansen
  2019-06-25 22:03     ` Thomas Gleixner
  1 sibling, 1 reply; 61+ messages in thread
From: Dave Hansen @ 2019-06-25 21:58 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen

> diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
> index 7685444a106b..609e499387a1 100644
> --- a/arch/x86/kernel/apic/x2apic_cluster.c
> +++ b/arch/x86/kernel/apic/x2apic_cluster.c
> @@ -50,7 +50,7 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
>  	cpumask_copy(tmpmsk, mask);
>  	/* If IPI should not be sent to self, clear current CPU */
>  	if (apic_dest != APIC_DEST_ALLINC)
> -		cpumask_clear_cpu(smp_processor_id(), tmpmsk);
> +		__cpumask_clear_cpu(smp_processor_id(), tmpmsk);

tmpmsk is on-stack, but it's a pointer to a per-cpu variable:

        tmpmsk = this_cpu_cpumask_var_ptr(ipi_mask);

So this one doesn't appear as obviously correct as a mask which itself
is on the stack.  The other three look obviously OK, though.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 0/9] x86: Concurrent TLB flushes and other improvements
  2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
                   ` (9 preceding siblings ...)
  2019-06-23 12:37 ` [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Thomas Gleixner
@ 2019-06-25 22:02 ` Dave Hansen
  2019-06-26  1:34   ` Nadav Amit
  10 siblings, 1 reply; 61+ messages in thread
From: Dave Hansen @ 2019-06-25 22:02 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Richard Henderson, Ivan Kokshaysky, Matt Turner,
	Tony Luck, Fenghua Yu, Andrew Morton, Rik van Riel,
	Josh Poimboeuf, Paolo Bonzini

On 6/12/19 11:48 PM, Nadav Amit wrote:
> Running sysbench on dax w/emulated-pmem, write-cache disabled, and
> various mitigations (PTI, Spectre, MDS) disabled on Haswell:
> 
>  sysbench fileio --file-total-size=3G --file-test-mode=rndwr \
>   --file-io-mode=mmap --threads=4 --file-fsync-mode=fdatasync run
> 
> 			events (avg/stddev)
> 			-------------------
>   5.2-rc3:		1247669.0000/16075.39
>   +patchset:		1290607.0000/13617.56 (+3.4%)

Why did you decide on disabling the side-channel mitigations?  While
they make things slower, they're also going to be with us for a while,
so they really are part of real-world testing IMNHO.  I'd be curious
whether this set has more or less of an advantage when all the
mitigations are on.

Also, why only 4 threads?  Does this set help most when using a moderate
number of threads since the local and remote cost are (relatively) close
vs. a large system where doing lots of remote flushes is *way* more
time-consuming than a local flush?

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 9/9] x86/apic: Use non-atomic operations when possible
  2019-06-25 21:58   ` [PATCH 9/9] " Dave Hansen
@ 2019-06-25 22:03     ` Thomas Gleixner
  0 siblings, 0 replies; 61+ messages in thread
From: Thomas Gleixner @ 2019-06-25 22:03 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Nadav Amit, Peter Zijlstra, Andy Lutomirski, linux-kernel,
	Ingo Molnar, Borislav Petkov, x86, Dave Hansen

On Tue, 25 Jun 2019, Dave Hansen wrote:

> > diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
> > index 7685444a106b..609e499387a1 100644
> > --- a/arch/x86/kernel/apic/x2apic_cluster.c
> > +++ b/arch/x86/kernel/apic/x2apic_cluster.c
> > @@ -50,7 +50,7 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
> >  	cpumask_copy(tmpmsk, mask);
> >  	/* If IPI should not be sent to self, clear current CPU */
> >  	if (apic_dest != APIC_DEST_ALLINC)
> > -		cpumask_clear_cpu(smp_processor_id(), tmpmsk);
> > +		__cpumask_clear_cpu(smp_processor_id(), tmpmsk);
> 
> tmpmsk is on-stack, but it's a pointer to a per-cpu variable:
> 
>         tmpmsk = this_cpu_cpumask_var_ptr(ipi_mask);
> 
> So this one doesn't appear as obviously correct as a mask which itself
> is on the stack.  The other three look obviously OK, though.

It's still correct. The mask is per cpu and protected because interrupts
are disabled. I noticed and wanted to amend the change log and forgot.

Thanks,

	tglx


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate
  2019-06-25 21:52   ` Dave Hansen
@ 2019-06-26  1:22     ` Nadav Amit
  2019-06-26  3:57     ` Andy Lutomirski
  1 sibling, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  1:22 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen

> On Jun 25, 2019, at 2:52 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> cpu_tlbstate is mostly private and only the variable is_lazy is shared.
>> This causes some false-sharing when TLB flushes are performed.
> 
> Presumably, all CPUs doing TLB flushes read 'is_lazy'.  Because of this,
> when we write to it we have to do the cache coherency dance to get rid
> of all the CPUs that might have a read-only copy.
> 
> I would have *thought* that we only do writes when we enter or exist
> lazy mode.  That's partially true.  We do write in enter_lazy_tlb(), but
> we also *unconditionally* write in switch_mm_irqs_off().  That seems
> like it might be responsible for a chunk (or even a vast majority) of
> the cacheline bounces.
> 
> Is there anything preventing us from turning the switch_mm_irqs_off()
> write into:
> 
> 	if (was_lazy)
> 		this_cpu_write(cpu_tlbstate.is_lazy, false);
> 
> ?
> 
> I think this patch is probably still a good general idea, but I just
> wonder if reducing the writes is a better way to reduce bounces.

Sounds good. I will add another patch based on your idea.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 0/9] x86: Concurrent TLB flushes and other improvements
  2019-06-25 22:02 ` Dave Hansen
@ 2019-06-26  1:34   ` Nadav Amit
  0 siblings, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  1:34 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, Richard Henderson, Ivan Kokshaysky, Matt Turner,
	Tony Luck, Fenghua Yu, Andrew Morton, Rik van Riel,
	Josh Poimboeuf, Paolo Bonzini

> On Jun 25, 2019, at 3:02 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> Running sysbench on dax w/emulated-pmem, write-cache disabled, and
>> various mitigations (PTI, Spectre, MDS) disabled on Haswell:
>> 
>> sysbench fileio --file-total-size=3G --file-test-mode=rndwr \
>>  --file-io-mode=mmap --threads=4 --file-fsync-mode=fdatasync run
>> 
>> 			events (avg/stddev)
>> 			-------------------
>>  5.2-rc3:		1247669.0000/16075.39
>>  +patchset:		1290607.0000/13617.56 (+3.4%)
> 
> Why did you decide on disabling the side-channel mitigations?  While
> they make things slower, they're also going to be with us for a while,
> so they really are part of real-world testing IMNHO.  I'd be curious
> whether this set has more or less of an advantage when all the
> mitigations are on.

It seemed reasonable since I wanted to avoid all kinds of “noise”. I presume
the relative speedup would be smaller due to the overhead of the
mitigations. Note that in this benchmark every TLB invalidation is of a
single entry. The benefit (in terms of absolute time saved) would have
been greater if a flush were of multiple entries.

> Also, why only 4 threads?  Does this set help most when using a moderate
> number of threads since the local and remote cost are (relatively) close
> vs. a large system where doing lots of remote flushes is *way* more
> time-consuming than a local flush?

Don’t overthink it. My server was busy doing something else, so I was
running the tests on a lame desktop I have. I will rerun it on a bigger
machine.

I presume the performance benefit will be smaller when more cores are
involved, since the TLB shootdown time will be dominated by the inter-core
communication time (IPI+cache coherency) and the tail latency of the IPI
delivery (if interrupts are disabled on the target).

I am working on some patches to reduce these overheads as well.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 3/9] x86/mm/tlb: Refactor common code into flush_tlb_on_cpus()
  2019-06-25 21:07   ` Dave Hansen
@ 2019-06-26  1:57     ` Nadav Amit
  0 siblings, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  1:57 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Peter Zijlstra, Andy Lutomirski, linux-kernel, Ingo Molnar,
	Borislav Petkov, x86, Thomas Gleixner, Dave Hansen

> On Jun 25, 2019, at 2:07 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index 91f6db92554c..c34bcf03f06f 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -734,7 +734,11 @@ static inline struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
>> 			unsigned int stride_shift, bool freed_tables,
>> 			u64 new_tlb_gen)
>> {
>> -	struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);
>> +	struct flush_tlb_info *info;
>> +
>> +	preempt_disable();
>> +
>> +	info = this_cpu_ptr(&flush_tlb_info);
>> 
>> #ifdef CONFIG_DEBUG_VM
>> 	/*
>> @@ -762,6 +766,23 @@ static inline void put_flush_tlb_info(void)
>> 	barrier();
>> 	this_cpu_dec(flush_tlb_info_idx);
>> #endif
>> +	preempt_enable();
>> +}
> 
> The addition of this disable/enable pair is unchangelogged and
> uncommented.  I think it makes sense since we do need to make sure we
> stay on this CPU, but it would be nice to mention.

I’ll add some comments and update the changelog. I see I marked
get_flush_tlb_info() as “inline” for no reason. I’m going to remove the
“inline” in this patch, unless you say it should be in a separate patch.

>> +static void flush_tlb_on_cpus(const cpumask_t *cpumask,
>> +			      const struct flush_tlb_info *info)
>> +{
> 
> Might be nice to mention that preempt must be disabled.  It's kinda
> implied from the smp_processor_id(), but being explicit is always nice too.

I will add a comment, although smp_processor_id() should anyhow shout at you
if you use it while preemptible with CONFIG_DEBUG_PREEMPT=y.

>> +	int this_cpu = smp_processor_id();
>> +
>> +	if (cpumask_test_cpu(this_cpu, cpumask)) {
>> +		lockdep_assert_irqs_enabled();
>> +		local_irq_disable();
>> +		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
>> +		local_irq_enable();
>> +	}
>> +
>> +	if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
>> +		flush_tlb_others(cpumask, info);
>> }
>> 
>> void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
>> @@ -770,9 +791,6 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
>> {
>> 	struct flush_tlb_info *info;
>> 	u64 new_tlb_gen;
>> -	int cpu;
>> -
>> -	cpu = get_cpu();
>> 
>> 	/* Should we flush just the requested range? */
>> 	if ((end == TLB_FLUSH_ALL) ||
>> @@ -787,18 +805,18 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
>> 	info = get_flush_tlb_info(mm, start, end, stride_shift, freed_tables,
>> 				  new_tlb_gen);
>> 
>> -	if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
>> -		lockdep_assert_irqs_enabled();
>> -		local_irq_disable();
>> -		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
>> -		local_irq_enable();
>> -	}
>> +	/*
>> +	 * Assert that mm_cpumask() corresponds with the loaded mm. We got one
>> +	 * exception: for init_mm we do not need to flush anything, and the
>> +	 * cpumask does not correspond with loaded_mm.
>> +	 */
>> +	VM_WARN_ON_ONCE(cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)) !=
>> +			(mm == this_cpu_read(cpu_tlbstate.loaded_mm)) &&
>> +			mm != &init_mm);
> 
> Very very cool.  You thought "these should be equivalent", and you added
> a corresponding warning to ensure they are.

The credit for this assertion goes to Peter who suggested I add it...

> 
>> -	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
>> -		flush_tlb_others(mm_cpumask(mm), info);
>> +	flush_tlb_on_cpus(mm_cpumask(mm), info);
>> 
>> 	put_flush_tlb_info();
>> -	put_cpu();
>> }
> 
> 
> 
> Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>

Thanks for the reviews of this patch and the others (don’t worry, I won’t
add the “Reviewed-by” tag to the others).

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-25 21:29     ` Dave Hansen
@ 2019-06-26  2:35       ` Nadav Amit
  -1 siblings, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  2:35 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Sasha Levin, Juergen Gross, Paolo Bonzini, Boris Ostrovsky,
	linux-hyperv, virtualization, kvm, xen-devel

> On Jun 25, 2019, at 2:29 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> To improve TLB shootdown performance, flush the remote and local TLBs
>> concurrently. Introduce flush_tlb_multi() that does so. The current
>> flush_tlb_others() interface is kept, since paravirtual interfaces need
>> to be adapted first before it can be removed. This is left for future
>> work. In such PV environments, TLB flushes are not performed, at this
>> time, concurrently.
>> 
>> Add a static key to tell whether this new interface is supported.
>> 
>> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
>> Cc: Haiyang Zhang <haiyangz@microsoft.com>
>> Cc: Stephen Hemminger <sthemmin@microsoft.com>
>> Cc: Sasha Levin <sashal@kernel.org>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Borislav Petkov <bp@alien8.de>
>> Cc: x86@kernel.org
>> Cc: Juergen Gross <jgross@suse.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: linux-hyperv@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Cc: virtualization@lists.linux-foundation.org
>> Cc: kvm@vger.kernel.org
>> Cc: xen-devel@lists.xenproject.org
>> Signed-off-by: Nadav Amit <namit@vmware.com>
>> ---
>> arch/x86/hyperv/mmu.c                 |  2 +
>> arch/x86/include/asm/paravirt.h       |  8 +++
>> arch/x86/include/asm/paravirt_types.h |  6 +++
>> arch/x86/include/asm/tlbflush.h       |  6 +++
>> arch/x86/kernel/kvm.c                 |  1 +
>> arch/x86/kernel/paravirt.c            |  3 ++
>> arch/x86/mm/tlb.c                     | 71 ++++++++++++++++++++++-----
>> arch/x86/xen/mmu_pv.c                 |  2 +
>> 8 files changed, 87 insertions(+), 12 deletions(-)
>> 
>> diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
>> index e65d7fe6489f..ca28b400c87c 100644
>> --- a/arch/x86/hyperv/mmu.c
>> +++ b/arch/x86/hyperv/mmu.c
>> @@ -233,4 +233,6 @@ void hyperv_setup_mmu_ops(void)
>> 	pr_info("Using hypercall for remote TLB flush\n");
>> 	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
>> 	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
>> +
>> +	static_key_disable(&flush_tlb_multi_enabled.key);
>> }
>> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
>> index c25c38a05c1c..192be7254457 100644
>> --- a/arch/x86/include/asm/paravirt.h
>> +++ b/arch/x86/include/asm/paravirt.h
>> @@ -47,6 +47,8 @@ static inline void slow_down_io(void)
>> #endif
>> }
>> 
>> +DECLARE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
>> +
>> static inline void __flush_tlb(void)
>> {
>> 	PVOP_VCALL0(mmu.flush_tlb_user);
>> @@ -62,6 +64,12 @@ static inline void __flush_tlb_one_user(unsigned long addr)
>> 	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
>> }
>> 
>> +static inline void flush_tlb_multi(const struct cpumask *cpumask,
>> +				   const struct flush_tlb_info *info)
>> +{
>> +	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
>> +}
>> +
>> static inline void flush_tlb_others(const struct cpumask *cpumask,
>> 				    const struct flush_tlb_info *info)
>> {
>> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
>> index 946f8f1f1efc..b93b3d90729a 100644
>> --- a/arch/x86/include/asm/paravirt_types.h
>> +++ b/arch/x86/include/asm/paravirt_types.h
>> @@ -211,6 +211,12 @@ struct pv_mmu_ops {
>> 	void (*flush_tlb_user)(void);
>> 	void (*flush_tlb_kernel)(void);
>> 	void (*flush_tlb_one_user)(unsigned long addr);
>> +	/*
>> +	 * flush_tlb_multi() is the preferred interface, which is capable to
>> +	 * flush both local and remote CPUs.
>> +	 */
>> +	void (*flush_tlb_multi)(const struct cpumask *cpus,
>> +				const struct flush_tlb_info *info);
>> 	void (*flush_tlb_others)(const struct cpumask *cpus,
>> 				 const struct flush_tlb_info *info);
>> 
>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>> index dee375831962..79272938cf79 100644
>> --- a/arch/x86/include/asm/tlbflush.h
>> +++ b/arch/x86/include/asm/tlbflush.h
>> @@ -569,6 +569,9 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>> 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>> }
>> 
>> +void native_flush_tlb_multi(const struct cpumask *cpumask,
>> +			     const struct flush_tlb_info *info);
>> +
>> void native_flush_tlb_others(const struct cpumask *cpumask,
>> 			     const struct flush_tlb_info *info);
>> 
>> @@ -593,6 +596,9 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>> extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>> 
>> #ifndef CONFIG_PARAVIRT
>> +#define flush_tlb_multi(mask, info)	\
>> +	native_flush_tlb_multi(mask, info)
>> +
>> #define flush_tlb_others(mask, info)	\
>> 	native_flush_tlb_others(mask, info)
>> 
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 5169b8cc35bb..00d81e898717 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -630,6 +630,7 @@ static void __init kvm_guest_init(void)
>> 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
>> 		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
>> 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
>> +		static_key_disable(&flush_tlb_multi_enabled.key);
>> 	}
>> 
>> 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
>> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
>> index 98039d7fb998..ac00afed5570 100644
>> --- a/arch/x86/kernel/paravirt.c
>> +++ b/arch/x86/kernel/paravirt.c
>> @@ -159,6 +159,8 @@ unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
>> 	return insn_len;
>> }
>> 
>> +DEFINE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
>> +
>> static void native_flush_tlb(void)
>> {
>> 	__native_flush_tlb();
>> @@ -363,6 +365,7 @@ struct paravirt_patch_template pv_ops = {
>> 	.mmu.flush_tlb_user	= native_flush_tlb,
>> 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
>> 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
>> +	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
>> 	.mmu.flush_tlb_others	= native_flush_tlb_others,
>> 	.mmu.tlb_remove_table	=
>> 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index c34bcf03f06f..db73d5f1dd43 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>> 		 * garbage into our TLB.  Since switching to init_mm is barely
>> 		 * slower than a minimal flush, just switch to init_mm.
>> 		 *
>> -		 * This should be rare, with native_flush_tlb_others skipping
>> +		 * This should be rare, with native_flush_tlb_multi skipping
>> 		 * IPIs to lazy TLB mode CPUs.
>> 		 */
> 
> Nit, since we're messing with this, it can now be
> "native_flush_tlb_multi()" since it is a function.

Sure.

> 
>> switch_mm_irqs_off(NULL, &init_mm, NULL);
>> @@ -635,9 +635,12 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>> 	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
>> }
>> 
>> -static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
>> +static void flush_tlb_func_local(void *info)
>> {
>> 	const struct flush_tlb_info *f = info;
>> +	enum tlb_flush_reason reason;
>> +
>> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
> 
> Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
> it like this, but seems like it would be nicer and easier to track down
> the origins of these things if we did this at the caller.

I prefer not to. I later want to inline flush_tlb_info into the same
cacheline that holds call_function_data. Increasing the size of
flush_tlb_info for no good reason will not help…

>> flush_tlb_func_common(f, true, reason);
>> }
>> @@ -655,14 +658,21 @@ static void flush_tlb_func_remote(void *info)
>> 	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
>> }
>> 
>> -static bool tlb_is_not_lazy(int cpu, void *data)
>> +static inline bool tlb_is_not_lazy(int cpu)
>> {
>> 	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
>> }
> 
> Nit: the compiler will probably inline this sucker anyway.  So, for
> these kinds of patches, I'd resist the urge to do these kinds of tweaks,
> especially since it starts to hide the important change on the line.

Of course.

> 
>> -void native_flush_tlb_others(const struct cpumask *cpumask,
>> -			     const struct flush_tlb_info *info)
>> +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
>> +
>> +void native_flush_tlb_multi(const struct cpumask *cpumask,
>> +			    const struct flush_tlb_info *info)
>> {
>> +	/*
>> +	 * Do accounting and tracing. Note that there are (and have always been)
>> +	 * cases in which a remote TLB flush will be traced, but eventually
>> +	 * would not happen.
>> +	 */
>> 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
>> 	if (info->end == TLB_FLUSH_ALL)
>> 		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
>> @@ -682,10 +692,14 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>> 		 * means that the percpu tlb_gen variables won't be updated
>> 		 * and we'll do pointless flushes on future context switches.
>> 		 *
>> -		 * Rather than hooking native_flush_tlb_others() here, I think
>> +		 * Rather than hooking native_flush_tlb_multi() here, I think
>> 		 * that UV should be updated so that smp_call_function_many(),
>> 		 * etc, are optimal on UV.
>> 		 */
>> +		local_irq_disable();
>> +		flush_tlb_func_local((__force void *)info);
>> +		local_irq_enable();
>> +
>> 		cpumask = uv_flush_tlb_others(cpumask, info);
>> 		if (cpumask)
>> 			smp_call_function_many(cpumask, flush_tlb_func_remote,
>> @@ -704,11 +718,39 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>> 	 * doing a speculative memory access.
>> 	 */
>> 	if (info->freed_tables)
>> -		smp_call_function_many(cpumask, flush_tlb_func_remote,
>> -			       (void *)info, 1);
>> -	else
>> -		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
>> -				(void *)info, 1, GFP_ATOMIC, cpumask);
>> +		__smp_call_function_many(cpumask, flush_tlb_func_remote,
>> +					 flush_tlb_func_local, (void *)info, 1);
>> +	else {
> 
> I prefer brackets be added for 'if' blocks like this since it doesn't
> take up any meaningful space and makes it less prone to compile errors.

If you say so.

> 
>> +		/*
>> +		 * Although we could have used on_each_cpu_cond_mask(),
>> +		 * open-coding it has several performance advantages: (1) we can
>> +		 * use specialized functions for remote and local flushes; (2)
>> +		 * no need for indirect branch to test if TLB is lazy; (3) we
>> +		 * can use a designated cpumask for evaluating the condition
>> +		 * instead of allocating a new one.
>> +		 *
>> +		 * This works under the assumption that there are no nested TLB
>> +		 * flushes, an assumption that is already made in
>> +		 * flush_tlb_mm_range().
>> +		 */
>> +		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
> 
> This is logically a stack-local variable, right?  But, since we've got
> preempt off and cpumasks can be huge, we don't want to allocate it on
> the stack.  That might be worth a comment somewhere.

I will add a comment here.

> 
>> +		int cpu;
>> +
>> +		cpumask_clear(cond_cpumask);
>> +
>> +		for_each_cpu(cpu, cpumask) {
>> +			if (tlb_is_not_lazy(cpu))
>> +				__cpumask_set_cpu(cpu, cond_cpumask);
>> +		}
> 
> FWIW, it's probably worth calling out in the changelog that this loop
> exists in on_each_cpu_cond_mask() too.  It looks bad here, but it's no
> worse than what it replaces.

Added.

> 
>> +		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
>> +					 flush_tlb_func_local, (void *)info, 1);
>> +	}
>> +}
> 
> There was a __force on an earlier 'info' cast.  Could you talk about
> that for a minute and explain why that one is needed?

I have no idea where the __force came from. I’ll remove it.

> 
>> +void native_flush_tlb_others(const struct cpumask *cpumask,
>> +			     const struct flush_tlb_info *info)
>> +{
>> +	native_flush_tlb_multi(cpumask, info);
>> }
>> 
>> /*
>> @@ -774,10 +816,15 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
>> {
>> 	int this_cpu = smp_processor_id();
>> 
>> +	if (static_branch_likely(&flush_tlb_multi_enabled)) {
>> +		flush_tlb_multi(cpumask, info);
>> +		return;
>> +	}
> 
> Probably needs a comment for posterity above the if()^^:
> 
> 	/* Use the optimized flush_tlb_multi() where we can. */

Right.

> 
>> --- a/arch/x86/xen/mmu_pv.c
>> +++ b/arch/x86/xen/mmu_pv.c
>> @@ -2474,6 +2474,8 @@ void __init xen_init_mmu_ops(void)
>> 
>> 	pv_ops.mmu = xen_mmu_ops;
>> 
>> +	static_key_disable(&flush_tlb_multi_enabled.key);
>> +
>> 	memset(dummy_mapping, 0xff, PAGE_SIZE);
>> }
> 
> More comments, please.  Perhaps:
> 
> 	Existing paravirt TLB flushes are incompatible with
> 	flush_tlb_multi() because....  Disable it when they are
> 	in use.

There is no inherent reason for them to be incompatible. Someone needs to
adapt them. I will use my affiliation as an excuse to dodge the question
“why don’t you do it?” ;-)

Anyhow, I will add a comment.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
@ 2019-06-26  2:35       ` Nadav Amit
  0 siblings, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  2:35 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Sasha Levin, Juergen Gross, linux-hyperv, Dave Hansen,
	Stephen Hemminger, kvm, Peter Zijlstra, Haiyang Zhang,
	the arch/x86 maintainers, LKML, virtualization, Ingo Molnar,
	Borislav Petkov, Andy Lutomirski, xen-devel, Paolo Bonzini,
	Thomas Gleixner, K. Y. Srinivasan, Boris Ostrovsky

> On Jun 25, 2019, at 2:29 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> To improve TLB shootdown performance, flush the remote and local TLBs
>> concurrently. Introduce flush_tlb_multi() that does so. The current
>> flush_tlb_others() interface is kept, since paravirtual interfaces need
>> to be adapted before it can be removed. This is left for future work.
>> In such PV environments, TLB flushes are not, at this time, performed
>> concurrently.
>> 
>> Add a static key to tell whether this new interface is supported.
>> 
>> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
>> Cc: Haiyang Zhang <haiyangz@microsoft.com>
>> Cc: Stephen Hemminger <sthemmin@microsoft.com>
>> Cc: Sasha Levin <sashal@kernel.org>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Borislav Petkov <bp@alien8.de>
>> Cc: x86@kernel.org
>> Cc: Juergen Gross <jgross@suse.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: linux-hyperv@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Cc: virtualization@lists.linux-foundation.org
>> Cc: kvm@vger.kernel.org
>> Cc: xen-devel@lists.xenproject.org
>> Signed-off-by: Nadav Amit <namit@vmware.com>
>> ---
>> arch/x86/hyperv/mmu.c                 |  2 +
>> arch/x86/include/asm/paravirt.h       |  8 +++
>> arch/x86/include/asm/paravirt_types.h |  6 +++
>> arch/x86/include/asm/tlbflush.h       |  6 +++
>> arch/x86/kernel/kvm.c                 |  1 +
>> arch/x86/kernel/paravirt.c            |  3 ++
>> arch/x86/mm/tlb.c                     | 71 ++++++++++++++++++++++-----
>> arch/x86/xen/mmu_pv.c                 |  2 +
>> 8 files changed, 87 insertions(+), 12 deletions(-)
>> 
>> diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
>> index e65d7fe6489f..ca28b400c87c 100644
>> --- a/arch/x86/hyperv/mmu.c
>> +++ b/arch/x86/hyperv/mmu.c
>> @@ -233,4 +233,6 @@ void hyperv_setup_mmu_ops(void)
>> 	pr_info("Using hypercall for remote TLB flush\n");
>> 	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
>> 	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
>> +
>> +	static_key_disable(&flush_tlb_multi_enabled.key);
>> }
>> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
>> index c25c38a05c1c..192be7254457 100644
>> --- a/arch/x86/include/asm/paravirt.h
>> +++ b/arch/x86/include/asm/paravirt.h
>> @@ -47,6 +47,8 @@ static inline void slow_down_io(void)
>> #endif
>> }
>> 
>> +DECLARE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
>> +
>> static inline void __flush_tlb(void)
>> {
>> 	PVOP_VCALL0(mmu.flush_tlb_user);
>> @@ -62,6 +64,12 @@ static inline void __flush_tlb_one_user(unsigned long addr)
>> 	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
>> }
>> 
>> +static inline void flush_tlb_multi(const struct cpumask *cpumask,
>> +				   const struct flush_tlb_info *info)
>> +{
>> +	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
>> +}
>> +
>> static inline void flush_tlb_others(const struct cpumask *cpumask,
>> 				    const struct flush_tlb_info *info)
>> {
>> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
>> index 946f8f1f1efc..b93b3d90729a 100644
>> --- a/arch/x86/include/asm/paravirt_types.h
>> +++ b/arch/x86/include/asm/paravirt_types.h
>> @@ -211,6 +211,12 @@ struct pv_mmu_ops {
>> 	void (*flush_tlb_user)(void);
>> 	void (*flush_tlb_kernel)(void);
>> 	void (*flush_tlb_one_user)(unsigned long addr);
>> +	/*
>> +	 * flush_tlb_multi() is the preferred interface, which is capable of
>> +	 * flushing both local and remote CPUs.
>> +	 */
>> +	void (*flush_tlb_multi)(const struct cpumask *cpus,
>> +				const struct flush_tlb_info *info);
>> 	void (*flush_tlb_others)(const struct cpumask *cpus,
>> 				 const struct flush_tlb_info *info);
>> 
>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>> index dee375831962..79272938cf79 100644
>> --- a/arch/x86/include/asm/tlbflush.h
>> +++ b/arch/x86/include/asm/tlbflush.h
>> @@ -569,6 +569,9 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>> 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>> }
>> 
>> +void native_flush_tlb_multi(const struct cpumask *cpumask,
>> +			     const struct flush_tlb_info *info);
>> +
>> void native_flush_tlb_others(const struct cpumask *cpumask,
>> 			     const struct flush_tlb_info *info);
>> 
>> @@ -593,6 +596,9 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>> extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>> 
>> #ifndef CONFIG_PARAVIRT
>> +#define flush_tlb_multi(mask, info)	\
>> +	native_flush_tlb_multi(mask, info)
>> +
>> #define flush_tlb_others(mask, info)	\
>> 	native_flush_tlb_others(mask, info)
>> 
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 5169b8cc35bb..00d81e898717 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -630,6 +630,7 @@ static void __init kvm_guest_init(void)
>> 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
>> 		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
>> 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
>> +		static_key_disable(&flush_tlb_multi_enabled.key);
>> 	}
>> 
>> 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
>> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
>> index 98039d7fb998..ac00afed5570 100644
>> --- a/arch/x86/kernel/paravirt.c
>> +++ b/arch/x86/kernel/paravirt.c
>> @@ -159,6 +159,8 @@ unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
>> 	return insn_len;
>> }
>> 
>> +DEFINE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
>> +
>> static void native_flush_tlb(void)
>> {
>> 	__native_flush_tlb();
>> @@ -363,6 +365,7 @@ struct paravirt_patch_template pv_ops = {
>> 	.mmu.flush_tlb_user	= native_flush_tlb,
>> 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
>> 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
>> +	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
>> 	.mmu.flush_tlb_others	= native_flush_tlb_others,
>> 	.mmu.tlb_remove_table	=
>> 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index c34bcf03f06f..db73d5f1dd43 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>> 		 * garbage into our TLB.  Since switching to init_mm is barely
>> 		 * slower than a minimal flush, just switch to init_mm.
>> 		 *
>> -		 * This should be rare, with native_flush_tlb_others skipping
>> +		 * This should be rare, with native_flush_tlb_multi skipping
>> 		 * IPIs to lazy TLB mode CPUs.
>> 		 */
> 
> Nit, since we're messing with this, it can now be
> "native_flush_tlb_multi()" since it is a function.

Sure.

> 
>> switch_mm_irqs_off(NULL, &init_mm, NULL);
>> @@ -635,9 +635,12 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>> 	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
>> }
>> 
>> -static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
>> +static void flush_tlb_func_local(void *info)
>> {
>> 	const struct flush_tlb_info *f = info;
>> +	enum tlb_flush_reason reason;
>> +
>> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
> 
> Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
> it like this, but seems like it would be nicer and easier to track down
> the origins of these things if we did this at the caller.

I prefer not to. I want later to inline flush_tlb_info into the same
cacheline that holds call_function_data. Increasing the size of
flush_tlb_info for no good reason will not help…

>> flush_tlb_func_common(f, true, reason);
>> }
>> @@ -655,14 +658,21 @@ static void flush_tlb_func_remote(void *info)
>> 	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
>> }
>> 
>> -static bool tlb_is_not_lazy(int cpu, void *data)
>> +static inline bool tlb_is_not_lazy(int cpu)
>> {
>> 	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
>> }
> 
> Nit: the compiler will probably inline this sucker anyway.  So, for
> these kinds of patches, I'd resist the urge to do these kinds of tweaks,
> especially since it starts to hide the important change on the line.

Of course.

> 
>> -void native_flush_tlb_others(const struct cpumask *cpumask,
>> -			     const struct flush_tlb_info *info)
>> +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
>> +
>> +void native_flush_tlb_multi(const struct cpumask *cpumask,
>> +			    const struct flush_tlb_info *info)
>> {
>> +	/*
>> +	 * Do accounting and tracing. Note that there are (and have always been)
>> +	 * cases in which a remote TLB flush will be traced, but eventually
>> +	 * would not happen.
>> +	 */
>> 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
>> 	if (info->end == TLB_FLUSH_ALL)
>> 		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
>> @@ -682,10 +692,14 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>> 		 * means that the percpu tlb_gen variables won't be updated
>> 		 * and we'll do pointless flushes on future context switches.
>> 		 *
>> -		 * Rather than hooking native_flush_tlb_others() here, I think
>> +		 * Rather than hooking native_flush_tlb_multi() here, I think
>> 		 * that UV should be updated so that smp_call_function_many(),
>> 		 * etc, are optimal on UV.
>> 		 */
>> +		local_irq_disable();
>> +		flush_tlb_func_local((__force void *)info);
>> +		local_irq_enable();
>> +
>> 		cpumask = uv_flush_tlb_others(cpumask, info);
>> 		if (cpumask)
>> 			smp_call_function_many(cpumask, flush_tlb_func_remote,
>> @@ -704,11 +718,39 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>> 	 * doing a speculative memory access.
>> 	 */
>> 	if (info->freed_tables)
>> -		smp_call_function_many(cpumask, flush_tlb_func_remote,
>> -			       (void *)info, 1);
>> -	else
>> -		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
>> -				(void *)info, 1, GFP_ATOMIC, cpumask);
>> +		__smp_call_function_many(cpumask, flush_tlb_func_remote,
>> +					 flush_tlb_func_local, (void *)info, 1);
>> +	else {
> 
> I prefer brackets be added for 'if' blocks like this since it doesn't
> take up any meaningful space and makes it less prone to compile errors.

If you say so.

> 
>> +		/*
>> +		 * Although we could have used on_each_cpu_cond_mask(),
>> +		 * open-coding it has several performance advantages: (1) we can
>> +		 * use specialized functions for remote and local flushes; (2)
>> +		 * no need for indirect branch to test if TLB is lazy; (3) we
>> +		 * can use a designated cpumask for evaluating the condition
>> +		 * instead of allocating a new one.
>> +		 *
>> +		 * This works under the assumption that there are no nested TLB
>> +		 * flushes, an assumption that is already made in
>> +		 * flush_tlb_mm_range().
>> +		 */
>> +		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
> 
> This is logically a stack-local variable, right?  But, since we've got
> preempt off and cpumasks can be huge, we don't want to allocate it on
> the stack.  That might be worth a comment somewhere.

I will add a comment here.

> 
>> +		int cpu;
>> +
>> +		cpumask_clear(cond_cpumask);
>> +
>> +		for_each_cpu(cpu, cpumask) {
>> +			if (tlb_is_not_lazy(cpu))
>> +				__cpumask_set_cpu(cpu, cond_cpumask);
>> +		}
> 
> FWIW, it's probably worth calling out in the changelog that this loop
> exists in on_each_cpu_cond_mask() too.  It looks bad here, but it's no
> worse than what it replaces.

Added.

> 
>> +		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
>> +					 flush_tlb_func_local, (void *)info, 1);
>> +	}
>> +}
> 
> There was a __force on an earlier 'info' cast.  Could you talk about
> that for a minute an explain why that one is needed?

I have no idea where the __force came from. I’ll remove it.

> 
>> +void native_flush_tlb_others(const struct cpumask *cpumask,
>> +			     const struct flush_tlb_info *info)
>> +{
>> +	native_flush_tlb_multi(cpumask, info);
>> }
>> 
>> /*
>> @@ -774,10 +816,15 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
>> {
>> 	int this_cpu = smp_processor_id();
>> 
>> +	if (static_branch_likely(&flush_tlb_multi_enabled)) {
>> +		flush_tlb_multi(cpumask, info);
>> +		return;
>> +	}
> 
> Probably needs a comment for posterity above the if()^^:
> 
> 	/* Use the optimized flush_tlb_multi() where we can. */

Right.

> 
>> --- a/arch/x86/xen/mmu_pv.c
>> +++ b/arch/x86/xen/mmu_pv.c
>> @@ -2474,6 +2474,8 @@ void __init xen_init_mmu_ops(void)
>> 
>> 	pv_ops.mmu = xen_mmu_ops;
>> 
>> +	static_key_disable(&flush_tlb_multi_enabled.key);
>> +
>> 	memset(dummy_mapping, 0xff, PAGE_SIZE);
>> }
> 
> More comments, please.  Perhaps:
> 
> 	Existing paravirt TLB flushes are incompatible with
> 	flush_tlb_multi() because....  Disable it when they are
> 	in use.

There is no inherent reason for them to be incompatible. Someone needs to
adapt them. I will use my affiliation as an excuse for the question “why
don’t you do it?” ;-)

Anyhow, I will add a comment.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-25 21:40   ` Dave Hansen
@ 2019-06-26  2:39     ` Nadav Amit
  2019-06-26  3:35       ` Andy Lutomirski
  0 siblings, 1 reply; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  2:39 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> Support the new interface of flush_tlb_multi, which also flushes the
>> local CPU's TLB, instead of flush_tlb_others that does not. This
>> interface is more performant since it parallelizes remote and local TLB
>> flushes.
>> 
>> The actual implementation of flush_tlb_multi() is almost identical to
>> that of flush_tlb_others().
> 
> This confused me a bit.  I thought we didn't support paravirtualized
> flush_tlb_multi() from reading earlier in the series.
> 
> But, it seems like that might be Xen-only and doesn't apply to KVM and
> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> that right?  It might be good to include some of that background in the
> changelog to set the context.

I’ll try to improve the change-logs a bit. There is no inherent reason for
PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
for future work, and here are some reasons:

1. Hyper-V/Xen TLB-flushing code is not very simple
2. I don’t have a proper setup
3. I am lazy



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-26  2:35       ` [Xen-devel] " Nadav Amit
@ 2019-06-26  3:00         ` Dave Hansen
  -1 siblings, 0 replies; 61+ messages in thread
From: Dave Hansen @ 2019-06-26  3:00 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Sasha Levin, Juergen Gross, Paolo Bonzini, Boris Ostrovsky,
	linux-hyperv, virtualization, kvm, xen-devel

On 6/25/19 7:35 PM, Nadav Amit wrote:
>>> 	const struct flush_tlb_info *f = info;
>>> +	enum tlb_flush_reason reason;
>>> +
>>> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
>>
>> Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
>> it like this, but seems like it would be nicer and easier to track down
>> the origins of these things if we did this at the caller.
> 
> I prefer not to. I want later to inline flush_tlb_info into the same
> cacheline that holds call_function_data. Increasing the size of
> flush_tlb_info for no good reason will not help…

Well, flush_tlb_info is at 6/8ths of a cacheline at the moment.
call_function_data is 3/8ths.  To me, that means we have some slack in
the size.


^ permalink raw reply	[flat|nested] 61+ messages in thread



* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-26  3:00         ` [Xen-devel] " Dave Hansen
  (?)
@ 2019-06-26  3:32           ` Nadav Amit via Virtualization
  -1 siblings, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  3:32 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Sasha Levin, Juergen Gross, Paolo Bonzini, Boris Ostrovsky,
	linux-hyperv, virtualization, kvm, xen-devel

> On Jun 25, 2019, at 8:00 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/25/19 7:35 PM, Nadav Amit wrote:
>>>> const struct flush_tlb_info *f = info;
>>>> +	enum tlb_flush_reason reason;
>>>> +
>>>> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
>>> 
>>> Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
>>> it like this, but seems like it would be nicer and easier to track down
>>> the origins of these things if we did this at the caller.
>> 
>> I prefer not to. I want later to inline flush_tlb_info into the same
>> cacheline that holds call_function_data. Increasing the size of
>> flush_tlb_info for no good reason will not help…
> 
> Well, flush_tlb_info is at 6/8ths of a cacheline at the moment.
> call_function_data is 3/8ths.  To me, that means we have some slack in
> the size.

I do not understand your math.. :(

6 + 3 > 8, so putting both flush_tlb_info and call_function_data together
does not leave us any slack (we can save one qword, so we can actually put
them on the same cacheline).

You can see my current implementation here:

https://lore.kernel.org/lkml/20190531063645.4697-4-namit@vmware.com/T/#m0ab5fe0799ba9ff0d41197f1095679fe26aebd57
https://lore.kernel.org/lkml/20190531063645.4697-4-namit@vmware.com/T/#m7b35a93dffd23fbb7ca813c795a0777d4cdcb51b


^ permalink raw reply	[flat|nested] 61+ messages in thread



* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  2:39     ` Nadav Amit
@ 2019-06-26  3:35       ` Andy Lutomirski
  2019-06-26  3:41         ` Nadav Amit
  0 siblings, 1 reply; 61+ messages in thread
From: Andy Lutomirski @ 2019-06-26  3:35 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Dave Hansen, Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> >
> > On 6/12/19 11:48 PM, Nadav Amit wrote:
> >> Support the new interface of flush_tlb_multi, which also flushes the
> >> local CPU's TLB, instead of flush_tlb_others that does not. This
> >> interface is more performant since it parallelizes remote and local TLB
> >> flushes.
> >>
> >> The actual implementation of flush_tlb_multi() is almost identical to
> >> that of flush_tlb_others().
> >
> > This confused me a bit.  I thought we didn't support paravirtualized
> > flush_tlb_multi() from reading earlier in the series.
> >
> > But, it seems like that might be Xen-only and doesn't apply to KVM and
> > paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> > that right?  It might be good to include some of that background in the
> > changelog to set the context.
>
> I’ll try to improve the change-logs a bit. There is no inherent reason for
> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
> for future work, and here are some reasons:
>
> 1. Hyper-V/Xen TLB-flushing code is not very simple
> 2. I don’t have a proper setup
> 3. I am lazy
>

In the long run, I think that we're going to want a way for one CPU to
do a remote flush and then, with appropriate locking, update the
tlb_gen fields for the remote CPU.  Getting this right may be a bit
nontrivial.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-13  6:48   ` Nadav Amit via Virtualization
  (?)
@ 2019-06-26  3:36     ` Andy Lutomirski
  -1 siblings, 0 replies; 61+ messages in thread
From: Andy Lutomirski @ 2019-06-26  3:36 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, X86 ML, Thomas Gleixner, Dave Hansen,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Sasha Levin,
	Juergen Gross, Paolo Bonzini, Boris Ostrovsky, linux-hyperv,
	Linux Virtualization, kvm list, xen-devel

On Wed, Jun 12, 2019 at 11:49 PM Nadav Amit <namit@vmware.com> wrote:
>
> To improve TLB shootdown performance, flush the remote and local TLBs
> concurrently. Introduce flush_tlb_multi() that does so. The current
> flush_tlb_others() interface is kept, since paravirtual interfaces need
> to be adapted before it can be removed. This is left for future work.
> In such PV environments, TLB flushes are not, at this time, performed
> concurrently.

Would it be straightforward to have a default PV flush_tlb_multi()
that uses flush_tlb_others() under the hood?

^ permalink raw reply	[flat|nested] 61+ messages in thread



* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  3:35       ` Andy Lutomirski
@ 2019-06-26  3:41         ` Nadav Amit
  2019-06-26  3:56           ` Andy Lutomirski
  0 siblings, 1 reply; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  3:41 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar, Borislav Petkov,
	the arch/x86 maintainers, Thomas Gleixner, Dave Hansen,
	Paolo Bonzini, kvm

> On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
>>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
>>> 
>>> On 6/12/19 11:48 PM, Nadav Amit wrote:
>>>> Support the new interface of flush_tlb_multi, which also flushes the
>>>> local CPU's TLB, instead of flush_tlb_others that does not. This
>>>> interface is more performant since it parallelizes remote and local TLB
>>>> flushes.
>>>> 
>>>> The actual implementation of flush_tlb_multi() is almost identical to
>>>> that of flush_tlb_others().
>>> 
>>> This confused me a bit.  I thought we didn't support paravirtualized
>>> flush_tlb_multi() from reading earlier in the series.
>>> 
>>> But, it seems like that might be Xen-only and doesn't apply to KVM and
>>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
>>> that right?  It might be good to include some of that background in the
>>> changelog to set the context.
>> 
>> I’ll try to improve the change-logs a bit. There is no inherent reason for
>> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
>> for future work, and here are some reasons:
>> 
>> 1. Hyper-V/Xen TLB-flushing code is not very simple
>> 2. I don’t have a proper setup
>> 3. I am lazy
> 
> In the long run, I think that we're going to want a way for one CPU to
> do a remote flush and then, with appropriate locking, update the
> tlb_gen fields for the remote CPU.  Getting this right may be a bit
> nontrivial.

What do you mean by “do a remote flush”?



* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-26  3:36     ` Andy Lutomirski
@ 2019-06-26  3:48       ` Nadav Amit via Virtualization
  -1 siblings, 0 replies; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  3:48 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Peter Zijlstra, LKML, Ingo Molnar, Borislav Petkov, X86 ML,
	Thomas Gleixner, Dave Hansen, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Sasha Levin, Juergen Gross, Paolo Bonzini,
	Boris Ostrovsky, linux-hyperv, Linux Virtualization, kvm list,
	xen-devel

> On Jun 25, 2019, at 8:36 PM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Wed, Jun 12, 2019 at 11:49 PM Nadav Amit <namit@vmware.com> wrote:
>> To improve TLB shootdown performance, flush the remote and local TLBs
>> concurrently. Introduce flush_tlb_multi() that does so. The current
>> flush_tlb_others() interface is kept, since paravirtual interfaces need
>> to be adapted first before it can be removed. This is left for future
>> work. In such PV environments, TLB flushes are not performed, at this
>> time, concurrently.
> 
> Would it be straightforward to have a default PV flush_tlb_multi()
> that uses flush_tlb_others() under the hood?

I prefer not to have a default PV implementation that should anyhow go away.

I can create unoptimized untested versions for Xen and Hyper-V, if you want.
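
For reference, a user-space sketch of what such a default could look like, with flush_tlb_multi() layered on the remote-only flush_tlb_others() plus a local flush. All helpers here are simplified stand-ins, not the real kernel APIs:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 4

/* Toy stand-ins: enough to show the shape of a default PV
 * flush_tlb_multi() that layers the new "local CPU included"
 * semantics on top of the remote-only flush_tlb_others(). */
static int this_cpu;                 /* pretend we always run on CPU 0 */
static int flush_count[NR_CPUS];     /* flushes observed per CPU       */

/* Existing interface: flushes every CPU in the mask EXCEPT the local one. */
static void flush_tlb_others(const bool *mask)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (mask[cpu] && cpu != this_cpu)
            flush_count[cpu]++;      /* remote CPUs only */
}

static void local_flush_tlb(void)
{
    flush_count[this_cpu]++;
}

/* Hypothetical default implementation of the new interface. */
static void default_flush_tlb_multi(const bool *mask)
{
    flush_tlb_others(mask);          /* kick the remote CPUs */
    if (mask[this_cpu])
        local_flush_tlb();           /* then flush locally   */
}
```

Note this sequential shape is exactly why such a default should eventually go away: it gives the old semantics a new name without the concurrency win.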




* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-26  3:48       ` Nadav Amit via Virtualization
@ 2019-06-26  3:51         ` Andy Lutomirski
  -1 siblings, 0 replies; 61+ messages in thread
From: Andy Lutomirski @ 2019-06-26  3:51 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Andy Lutomirski, Peter Zijlstra, LKML, Ingo Molnar,
	Borislav Petkov, X86 ML, Thomas Gleixner, Dave Hansen,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Sasha Levin,
	Juergen Gross, Paolo Bonzini, Boris Ostrovsky, linux-hyperv,
	Linux Virtualization, kvm list, xen-devel

On Tue, Jun 25, 2019 at 8:48 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 8:36 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Wed, Jun 12, 2019 at 11:49 PM Nadav Amit <namit@vmware.com> wrote:
> >> To improve TLB shootdown performance, flush the remote and local TLBs
> >> concurrently. Introduce flush_tlb_multi() that does so. The current
> >> flush_tlb_others() interface is kept, since paravirtual interfaces need
> >> to be adapted first before it can be removed. This is left for future
> >> work. In such PV environments, TLB flushes are not performed, at this
> >> time, concurrently.
> >
> > Would it be straightforward to have a default PV flush_tlb_multi()
> > that uses flush_tlb_others() under the hood?
>
> I prefer not to have a default PV implementation that should anyhow go away.
>
> I can create unoptimized untested versions for Xen and Hyper-V, if you want.
>

I think I prefer that approach.  We should be able to get the
maintainers to test it.  I don't love having legacy paths in there,
ahem, UV.



* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  3:41         ` Nadav Amit
@ 2019-06-26  3:56           ` Andy Lutomirski
  2019-06-26  6:30             ` Nadav Amit
  0 siblings, 1 reply; 61+ messages in thread
From: Andy Lutomirski @ 2019-06-26  3:56 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Andy Lutomirski, Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

On Tue, Jun 25, 2019 at 8:41 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
> >>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> >>>
> >>> On 6/12/19 11:48 PM, Nadav Amit wrote:
> >>>> Support the new interface of flush_tlb_multi, which also flushes the
> >>>> local CPU's TLB, instead of flush_tlb_others that does not. This
> >>>> interface is more performant since it parallelizes remote and local TLB
> >>>> flushes.
> >>>>
> >>>> The actual implementation of flush_tlb_multi() is almost identical to
> >>>> that of flush_tlb_others().
> >>>
> >>> This confused me a bit.  I thought we didn't support paravirtualized
> >>> flush_tlb_multi() from reading earlier in the series.
> >>>
> >>> But, it seems like that might be Xen-only and doesn't apply to KVM and
> >>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> >>> that right?  It might be good to include some of that background in the
> >>> changelog to set the context.
> >>
> >> I’ll try to improve the change-logs a bit. There is no inherent reason for
> >> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
> >> for future work, and here are some reasons:
> >>
> >> 1. Hyper-V/Xen TLB-flushing code is not very simple
> >> 2. I don’t have a proper setup
> >> 3. I am lazy
> >
> > In the long run, I think that we're going to want a way for one CPU to
> > do a remote flush and then, with appropriate locking, update the
> > tlb_gen fields for the remote CPU.  Getting this right may be a bit
> > nontrivial.
>
> What do you mean by “do a remote flush”?
>

I mean a PV-assisted flush on a CPU other than the CPU that started
it.  If you look at flush_tlb_func_common(), it's doing some work that
is rather fancier than just flushing the TLB.  By replacing it with
just a pure flush on Xen or Hyper-V, we're losing the potential CR3
switch and this bit:

        /* Both paths above update our state to mm_tlb_gen. */
        this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);

Skipping the former can hurt idle performance, although we should
consider just disabling all the lazy optimizations on systems with PV
flush.  (And I've asked Intel to help us out here in future hardware.
I have no idea what the result of asking will be.)  Skipping the
cpu_tlbstate write means that we will do unnecessary flushes in the
future, and that's not doing us any favors.

In principle, we should be able to do something like:

flush_tlb_multi(...);
for(each CPU that got flushed) {
  spin_lock(something appropriate?);
  per_cpu_write(cpu, cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, f->new_tlb_gen);
  spin_unlock(...);
}

with the caveat that it's more complicated than this if the flush is a
partial flush, and that we'll want to check that the ctx_id still
matches, etc.

Does this make sense?
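
A toy, user-space version of that update loop, with hypothetical names standing in for the per-CPU tlbstate, and with the "appropriate locking" elided:

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS 4

/* Toy stand-in for cpu_tlbstate.ctxs[loaded_mm_asid]; the names are
 * hypothetical.  In the kernel this cross-CPU update would need real
 * locking, which this sketch leaves out. */
struct ctx_state {
    uint64_t ctx_id;   /* which mm context this CPU is running */
    uint64_t tlb_gen;  /* newest generation this CPU has seen  */
};

static struct ctx_state cpu_ctx[NR_CPUS];

/* After a PV-assisted flush brought each CPU in @cpus up to
 * @new_tlb_gen, record that fact so later flushes can be skipped,
 * but only if the CPU still runs the same context and only moving
 * the generation forward (the ctx_id and partial-flush caveats). */
static void propagate_tlb_gen(const int *cpus, int n,
                              uint64_t ctx_id, uint64_t new_tlb_gen)
{
    for (int i = 0; i < n; i++) {
        struct ctx_state *cs = &cpu_ctx[cpus[i]];
        if (cs->ctx_id == ctx_id && cs->tlb_gen < new_tlb_gen)
            cs->tlb_gen = new_tlb_gen;
    }
}
```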


* Re: [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate
  2019-06-25 21:52   ` Dave Hansen
  2019-06-26  1:22     ` Nadav Amit
@ 2019-06-26  3:57     ` Andy Lutomirski
  1 sibling, 0 replies; 61+ messages in thread
From: Andy Lutomirski @ 2019-06-26  3:57 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Nadav Amit, Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, X86 ML, Thomas Gleixner, Dave Hansen

On Tue, Jun 25, 2019 at 2:52 PM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 6/12/19 11:48 PM, Nadav Amit wrote:
> > cpu_tlbstate is mostly private and only the variable is_lazy is shared.
> > This causes some false-sharing when TLB flushes are performed.
>
> Presumably, all CPUs doing TLB flushes read 'is_lazy'.  Because of this,
> when we write to it we have to do the cache coherency dance to get rid
> of all the CPUs that might have a read-only copy.
>
> I would have *thought* that we only do writes when we enter or exit
> lazy mode.  That's partially true.  We do write in enter_lazy_tlb(), but
> we also *unconditionally* write in switch_mm_irqs_off().  That seems
> like it might be responsible for a chunk (or even a vast majority) of
> the cacheline bounces.
>
> Is there anything preventing us from turning the switch_mm_irqs_off()
> write into:
>
>         if (was_lazy)
>                 this_cpu_write(cpu_tlbstate.is_lazy, false);
>
> ?
>
> I think this patch is probably still a good general idea, but I just
> wonder if reducing the writes is a better way to reduce bounces.

Good catch!  I'm usually pretty good about this for
test_and_set()-style things, but I totally missed this obvious
unnecessary write when I did this.  I hereby apologize for all the
cycles I wasted :)

--Andy
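
The guard Dave suggests can be illustrated with a toy model of the shared is_lazy field, where a write counter stands in for the cache-coherency cost. Names are simplified, not the kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the is_lazy write in switch_mm_irqs_off().  The field
 * is read by every CPU doing a flush, so each write forces a
 * cache-coherency round trip; guarding the write with "was it
 * actually lazy?" drops the writes that change nothing. */
static bool is_lazy;
static unsigned long write_count;     /* proxy for cacheline bounces */

static void switch_mm_unconditional(void)
{
    is_lazy = false;                  /* current code: always write */
    write_count++;
}

static void switch_mm_conditional(void)
{
    bool was_lazy = is_lazy;
    if (was_lazy) {                   /* write only on a transition */
        is_lazy = false;
        write_count++;
    }
}
```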


* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  3:56           ` Andy Lutomirski
@ 2019-06-26  6:30             ` Nadav Amit
  2019-06-26 16:37               ` Andy Lutomirski
  0 siblings, 1 reply; 61+ messages in thread
From: Nadav Amit @ 2019-06-26  6:30 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar, Borislav Petkov,
	the arch/x86 maintainers, Thomas Gleixner, Dave Hansen,
	Paolo Bonzini, kvm

> On Jun 25, 2019, at 8:56 PM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Tue, Jun 25, 2019 at 8:41 PM Nadav Amit <namit@vmware.com> wrote:
>>> On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
>>> 
>>> On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
>>>>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
>>>>> 
>>>>> On 6/12/19 11:48 PM, Nadav Amit wrote:
>>>>>> Support the new interface of flush_tlb_multi, which also flushes the
>>>>>> local CPU's TLB, instead of flush_tlb_others that does not. This
>>>>>> interface is more performant since it parallelizes remote and local TLB
>>>>>> flushes.
>>>>>> 
>>>>>> The actual implementation of flush_tlb_multi() is almost identical to
>>>>>> that of flush_tlb_others().
>>>>> 
>>>>> This confused me a bit.  I thought we didn't support paravirtualized
>>>>> flush_tlb_multi() from reading earlier in the series.
>>>>> 
>>>>> But, it seems like that might be Xen-only and doesn't apply to KVM and
>>>>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
>>>>> that right?  It might be good to include some of that background in the
>>>>> changelog to set the context.
>>>> 
>>>> I’ll try to improve the change-logs a bit. There is no inherent reason for
>>>> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
>>>> for future work, and here are some reasons:
>>>> 
>>>> 1. Hyper-V/Xen TLB-flushing code is not very simple
>>>> 2. I don’t have a proper setup
>>>> 3. I am lazy
>>> 
>>> In the long run, I think that we're going to want a way for one CPU to
>>> do a remote flush and then, with appropriate locking, update the
>>> tlb_gen fields for the remote CPU.  Getting this right may be a bit
>>> nontrivial.
>> 
>> What do you mean by “do a remote flush”?
> 
> I mean a PV-assisted flush on a CPU other than the CPU that started
> it.  If you look at flush_tlb_func_common(), it's doing some work that
> is rather fancier than just flushing the TLB.  By replacing it with
> just a pure flush on Xen or Hyper-V, we're losing the potential CR3
> switch and this bit:
> 
>        /* Both paths above update our state to mm_tlb_gen. */
>        this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
> 
> Skipping the former can hurt idle performance, although we should
> consider just disabling all the lazy optimizations on systems with PV
> flush.  (And I've asked Intel to help us out here in future hardware.
> I have no idea what the result of asking will be.)  Skipping the
> cpu_tlbstate write means that we will do unnecessary flushes in the
> future, and that's not doing us any favors.
> 
> In principle, we should be able to do something like:
> 
> flush_tlb_multi(...);
> for(each CPU that got flushed) {
>  spin_lock(something appropriate?);
>  per_cpu_write(cpu, cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, f->new_tlb_gen);
>  spin_unlock(...);
> }
> 
> with the caveat that it's more complicated than this if the flush is a
> partial flush, and that we'll want to check that the ctx_id still
> matches, etc.
> 
> Does this make sense?

Thanks for the detailed explanation. Let me check that I got it right. 

You want to optimize cases in which:

1. A virtual machine

2. Which issues multiple (remote) TLB shootdowns

3. To a remote vCPU which is preempted by the hypervisor

4. And unlike KVM, the hypervisor does not provide facilities for the VM to
know which vCPU is preempted, and atomically request TLB flush when the vCPU
is scheduled.

Right?



* Re: [PATCH 5/9] x86/mm/tlb: Optimize local TLB flushes
  2019-06-25 21:36   ` Dave Hansen
@ 2019-06-26 16:33     ` Andy Lutomirski
  2019-06-26 16:39       ` Nadav Amit
  0 siblings, 1 reply; 61+ messages in thread
From: Andy Lutomirski @ 2019-06-26 16:33 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Nadav Amit, Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, X86 ML, Thomas Gleixner, Dave Hansen

On Tue, Jun 25, 2019 at 2:36 PM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 6/12/19 11:48 PM, Nadav Amit wrote:
> > While the updated smp infrastructure is capable of running a function on
> > a single local core, it is not optimized for this case.
>
> OK, so flush_tlb_multi() is optimized for flushing local+remote at the
> same time and is also (near?) the most optimal way to flush remote-only.
>  But, it's not as optimized at doing local-only flushes.  But,
> flush_tlb_on_cpus() *is* optimized for local-only flushes.

Can we stick the optimization into flush_tlb_multi() in the interest
of keeping this stuff readable?

Also, would this series be easier to understand if there was a patch
to just remove the UV optimization before making other changes?

--Andy


* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  6:30             ` Nadav Amit
@ 2019-06-26 16:37               ` Andy Lutomirski
  2019-06-26 17:41                 ` Vitaly Kuznetsov
  0 siblings, 1 reply; 61+ messages in thread
From: Andy Lutomirski @ 2019-06-26 16:37 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Andy Lutomirski, Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

On Tue, Jun 25, 2019 at 11:30 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 8:56 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Tue, Jun 25, 2019 at 8:41 PM Nadav Amit <namit@vmware.com> wrote:
> >>> On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >>>
> >>> On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
> >>>>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> >>>>>
> >>>>> On 6/12/19 11:48 PM, Nadav Amit wrote:
> >>>>>> Support the new interface of flush_tlb_multi, which also flushes the
> >>>>>> local CPU's TLB, instead of flush_tlb_others that does not. This
> >>>>>> interface is more performant since it parallelizes remote and local TLB
> >>>>>> flushes.
> >>>>>>
> >>>>>> The actual implementation of flush_tlb_multi() is almost identical to
> >>>>>> that of flush_tlb_others().
> >>>>>
> >>>>> This confused me a bit.  I thought we didn't support paravirtualized
> >>>>> flush_tlb_multi() from reading earlier in the series.
> >>>>>
> >>>>> But, it seems like that might be Xen-only and doesn't apply to KVM and
> >>>>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> >>>>> that right?  It might be good to include some of that background in the
> >>>>> changelog to set the context.
> >>>>
> >>>> I’ll try to improve the change-logs a bit. There is no inherent reason for
> >>>> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
> >>>> for future work, and here are some reasons:
> >>>>
> >>>> 1. Hyper-V/Xen TLB-flushing code is not very simple
> >>>> 2. I don’t have a proper setup
> >>>> 3. I am lazy
> >>>
> >>> In the long run, I think that we're going to want a way for one CPU to
> >>> do a remote flush and then, with appropriate locking, update the
> >>> tlb_gen fields for the remote CPU.  Getting this right may be a bit
> >>> nontrivial.
> >>
> >> What do you mean by “do a remote flush”?
> >
> > I mean a PV-assisted flush on a CPU other than the CPU that started
> > it.  If you look at flush_tlb_func_common(), it's doing some work that
> > is rather fancier than just flushing the TLB.  By replacing it with
> > just a pure flush on Xen or Hyper-V, we're losing the potential CR3
> > switch and this bit:
> >
> >        /* Both paths above update our state to mm_tlb_gen. */
> >        this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
> >
> > Skipping the former can hurt idle performance, although we should
> > consider just disabling all the lazy optimizations on systems with PV
> > flush.  (And I've asked Intel to help us out here in future hardware.
> > I have no idea what the result of asking will be.)  Skipping the
> > cpu_tlbstate write means that we will do unnecessary flushes in the
> > future, and that's not doing us any favors.
> >
> > In principle, we should be able to do something like:
> >
> > flush_tlb_multi(...);
> > for(each CPU that got flushed) {
> >  spin_lock(something appropriate?);
> >  per_cpu_write(cpu, cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, f->new_tlb_gen);
> >  spin_unlock(...);
> > }
> >
> > with the caveat that it's more complicated than this if the flush is a
> > partial flush, and that we'll want to check that the ctx_id still
> > matches, etc.
> >
> > Does this make sense?
>
> Thanks for the detailed explanation. Let me check that I got it right.
>
> You want to optimize cases in which:
>
> 1. A virtual machine

Yes.

>
> 2. Which issues multiple (remote) TLB shootdowns

Yes.  Or just one followed by a context switch.  Right now it's
suboptimal with just two vCPUs and a single remote flush.  If CPU 0
does a remote PV flush of CPU1 and then CPU1 context switches away
from the running mm and back, it will do an unnecessary flush on the
way back because the tlb_gen won't match.

>
> 3. To a remote vCPU which is preempted by the hypervisor

Yes, or even one that isn't preempted.

>
> 4. And unlike KVM, the hypervisor does not provide facilities for the VM to
> know which vCPU is preempted, and atomically request TLB flush when the vCPU
> is scheduled.
>

I'm not sure this makes much difference to the case I'm thinking of.

All this being said, do we currently have any system that supports
PCID *and* remote flushes?  I guess KVM has some mechanism, but I'm
not that familiar with its exact capabilities.  If I remember right,
Hyper-V doesn't expose PCID yet.


> Right?
>
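
The two-vCPU spurious-flush case described above can be modeled in a few lines; the generation counters are simplified stand-ins for the mm's tlb_gen and the per-CPU copy of it:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the spurious-flush case: a PV-assisted remote flush
 * brings the hardware TLB up to date but does not record that in the
 * per-CPU software state, so switching back to the mm flushes again. */
static uint64_t mm_tlb_gen = 1;       /* generation of the mm's page tables */
static uint64_t cpu1_seen_gen = 1;    /* what CPU 1 has recorded            */
static int unnecessary_flushes;

static void pv_remote_flush_cpu1(void)
{
    mm_tlb_gen++;                     /* the flush request bumps the gen */
    /* The hypervisor flushes CPU 1's TLB here, but cpu1_seen_gen is
     * NOT updated -- that is the missing bookkeeping. */
}

static void cpu1_switch_back_to_mm(void)
{
    if (cpu1_seen_gen < mm_tlb_gen) { /* looks stale, so flush again */
        unnecessary_flushes++;
        cpu1_seen_gen = mm_tlb_gen;
    }
}
```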


* Re: [PATCH 5/9] x86/mm/tlb: Optimize local TLB flushes
  2019-06-26 16:33     ` Andy Lutomirski
@ 2019-06-26 16:39       ` Nadav Amit
  2019-06-26 16:50         ` Andy Lutomirski
  0 siblings, 1 reply; 61+ messages in thread
From: Nadav Amit @ 2019-06-26 16:39 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar, Borislav Petkov,
	X86 ML, Thomas Gleixner, Dave Hansen

> On Jun 26, 2019, at 9:33 AM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Tue, Jun 25, 2019 at 2:36 PM Dave Hansen <dave.hansen@intel.com> wrote:
>> On 6/12/19 11:48 PM, Nadav Amit wrote:
>>> While the updated smp infrastructure is capable of running a function on
>>> a single local core, it is not optimized for this case.
>> 
>> OK, so flush_tlb_multi() is optimized for flushing local+remote at the
>> same time and is also (near?) the most optimal way to flush remote-only.
>> But, it's not as optimized at doing local-only flushes.  But,
>> flush_tlb_on_cpus() *is* optimized for local-only flushes.
> 
> Can we stick the optimization into flush_tlb_multi() in the interest
> of keeping this stuff readable?

flush_tlb_on_cpus() will be much simpler once I remove the fallback
path that is in there for Xen and hyper-v. I can then open-code it in
flush_tlb_mm_range() and arch_tlbbatch_flush().

> 
> Also, would this series be easier to understand if there was a patch
> to just remove the UV optimization before making other changes?

If you just want me to remove it, I can do it. I don’t know who uses it and
what the impact might be.



* Re: [PATCH 5/9] x86/mm/tlb: Optimize local TLB flushes
  2019-06-26 16:39       ` Nadav Amit
@ 2019-06-26 16:50         ` Andy Lutomirski
  0 siblings, 0 replies; 61+ messages in thread
From: Andy Lutomirski @ 2019-06-26 16:50 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Andy Lutomirski, Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar,
	Borislav Petkov, X86 ML, Thomas Gleixner, Dave Hansen

On Wed, Jun 26, 2019 at 9:39 AM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 26, 2019, at 9:33 AM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Tue, Jun 25, 2019 at 2:36 PM Dave Hansen <dave.hansen@intel.com> wrote:
> >> On 6/12/19 11:48 PM, Nadav Amit wrote:
> >>> While the updated smp infrastructure is capable of running a function on
> >>> a single local core, it is not optimized for this case.
> >>
> >> OK, so flush_tlb_multi() is optimized for flushing local+remote at the
> >> same time and is also (near?) the most optimal way to flush remote-only.
> >> But, it's not as optimized at doing local-only flushes.  But,
> >> flush_tlb_on_cpus() *is* optimized for local-only flushes.
> >
> > Can we stick the optimization into flush_tlb_multi() in the interest
> > of keeping this stuff readable?
>
> flush_tlb_on_cpus() will be much simpler once I remove the fallback
> path that is in there for Xen and hyper-v. I can then open-code it in
> flush_tlb_mm_range() and arch_tlbbatch_flush().
>
> >
> > Also, would this series be easier to understand if there was a patch
> > to just remove the UV optimization before making other changes?
>
> If you just want me to remove it, I can do it. I don’t know who uses it and
> what the impact might be.
>

Only if you think it simplifies things.  The impact will be somewhat
slower flushes on affected hardware.  The UV maintainers know how to
fix this more sustainably, and maybe this will encourage them to do it
:)


* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26 16:37               ` Andy Lutomirski
@ 2019-06-26 17:41                 ` Vitaly Kuznetsov
  2019-06-26 18:21                   ` Andy Lutomirski
  0 siblings, 1 reply; 61+ messages in thread
From: Vitaly Kuznetsov @ 2019-06-26 17:41 UTC (permalink / raw)
  To: Andy Lutomirski, Nadav Amit
  Cc: Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar, Borislav Petkov,
	the arch/x86 maintainers, Thomas Gleixner, Dave Hansen,
	Paolo Bonzini, kvm

Andy Lutomirski <luto@kernel.org> writes:

> All this being said, do we currently have any system that supports
> PCID *and* remote flushes?  I guess KVM has some mechanism, but I'm
> not that familiar with its exact capabilities.  If I remember right,
> Hyper-V doesn't expose PCID yet.
>

It already does (and supports it to a certain extent), see

commit 617ab45c9a8900e64a78b43696c02598b8cad68b
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date:   Wed Jan 24 11:36:29 2018 +0100

    x86/hyperv: Stop suppressing X86_FEATURE_PCID

-- 
Vitaly


* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26 17:41                 ` Vitaly Kuznetsov
@ 2019-06-26 18:21                   ` Andy Lutomirski
  0 siblings, 0 replies; 61+ messages in thread
From: Andy Lutomirski @ 2019-06-26 18:21 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Andy Lutomirski, Nadav Amit, Dave Hansen, Peter Zijlstra, LKML,
	Ingo Molnar, Borislav Petkov, the arch/x86 maintainers,
	Thomas Gleixner, Dave Hansen, Paolo Bonzini, kvm

On Wed, Jun 26, 2019 at 10:41 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
> Andy Lutomirski <luto@kernel.org> writes:
>
> > All this being said, do we currently have any system that supports
> > PCID *and* remote flushes?  I guess KVM has some mechanism, but I'm
> > not that familiar with its exact capabilities.  If I remember right,
> > Hyper-V doesn't expose PCID yet.
> >
>
> It already does (and supports it to a certain extent); see
>
> commit 617ab45c9a8900e64a78b43696c02598b8cad68b
> Author: Vitaly Kuznetsov <vkuznets@redhat.com>
> Date:   Wed Jan 24 11:36:29 2018 +0100
>
>     x86/hyperv: Stop suppressing X86_FEATURE_PCID
>

Hmm.  Once the dust settles from Nadav's patches, I think we should
see about supporting it better :)


end of thread, other threads:[~2019-06-26 18:22 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-13  6:48 [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Nadav Amit
2019-06-13  6:48 ` [PATCH 1/9] smp: Remove smp_call_function() and on_each_cpu() return values Nadav Amit
2019-06-23 12:32   ` [tip:smp/hotplug] " tip-bot for Nadav Amit
2019-06-13  6:48 ` [PATCH 2/9] smp: Run functions concurrently in smp_call_function_many() Nadav Amit
2019-06-13  6:48 ` [PATCH 3/9] x86/mm/tlb: Refactor common code into flush_tlb_on_cpus() Nadav Amit
2019-06-25 21:07   ` Dave Hansen
2019-06-26  1:57     ` Nadav Amit
2019-06-13  6:48 ` [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently Nadav Amit
2019-06-13  6:48   ` [Xen-devel] " Nadav Amit
2019-06-13  6:48   ` Nadav Amit via Virtualization
2019-06-25 21:29   ` Dave Hansen
2019-06-25 21:29     ` [Xen-devel] " Dave Hansen
2019-06-25 21:29     ` Dave Hansen
2019-06-26  2:35     ` Nadav Amit
2019-06-26  2:35       ` [Xen-devel] " Nadav Amit
2019-06-26  3:00       ` Dave Hansen
2019-06-26  3:00         ` [Xen-devel] " Dave Hansen
2019-06-26  3:32         ` Nadav Amit
2019-06-26  3:32           ` [Xen-devel] " Nadav Amit
2019-06-26  3:32           ` Nadav Amit via Virtualization
2019-06-26  3:00       ` Dave Hansen
2019-06-26  2:35     ` Nadav Amit via Virtualization
2019-06-26  3:36   ` Andy Lutomirski
2019-06-26  3:36     ` [Xen-devel] " Andy Lutomirski
2019-06-26  3:36     ` Andy Lutomirski
2019-06-26  3:48     ` Nadav Amit
2019-06-26  3:48       ` [Xen-devel] " Nadav Amit
2019-06-26  3:48       ` Nadav Amit via Virtualization
2019-06-26  3:51       ` Andy Lutomirski
2019-06-26  3:51       ` Andy Lutomirski
2019-06-26  3:51         ` [Xen-devel] " Andy Lutomirski
2019-06-13  6:48 ` [PATCH 5/9] x86/mm/tlb: Optimize local TLB flushes Nadav Amit
2019-06-25 21:36   ` Dave Hansen
2019-06-26 16:33     ` Andy Lutomirski
2019-06-26 16:39       ` Nadav Amit
2019-06-26 16:50         ` Andy Lutomirski
2019-06-13  6:48 ` [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi() Nadav Amit
2019-06-25 21:40   ` Dave Hansen
2019-06-26  2:39     ` Nadav Amit
2019-06-26  3:35       ` Andy Lutomirski
2019-06-26  3:41         ` Nadav Amit
2019-06-26  3:56           ` Andy Lutomirski
2019-06-26  6:30             ` Nadav Amit
2019-06-26 16:37               ` Andy Lutomirski
2019-06-26 17:41                 ` Vitaly Kuznetsov
2019-06-26 18:21                   ` Andy Lutomirski
2019-06-13  6:48 ` [PATCH 7/9] smp: Do not mark call_function_data as shared Nadav Amit
2019-06-23 12:31   ` [tip:smp/hotplug] " tip-bot for Nadav Amit
2019-06-13  6:48 ` [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate Nadav Amit
2019-06-14 15:58   ` Sean Christopherson
2019-06-17 17:10     ` Nadav Amit
2019-06-25 21:52   ` Dave Hansen
2019-06-26  1:22     ` Nadav Amit
2019-06-26  3:57     ` Andy Lutomirski
2019-06-13  6:48 ` [PATCH 9/9] x86/apic: Use non-atomic operations when possible Nadav Amit
2019-06-23 12:16   ` [tip:x86/apic] " tip-bot for Nadav Amit
2019-06-25 21:58   ` [PATCH 9/9] " Dave Hansen
2019-06-25 22:03     ` Thomas Gleixner
2019-06-23 12:37 ` [PATCH 0/9] x86: Concurrent TLB flushes and other improvements Thomas Gleixner
2019-06-25 22:02 ` Dave Hansen
2019-06-26  1:34   ` Nadav Amit
