kvm.vger.kernel.org archive mirror
* [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
       [not found] <20190613064813.8102-1-namit@vmware.com>
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-25 21:29   ` Dave Hansen
  2019-06-26  3:36   ` Andy Lutomirski
  2019-06-13  6:48 ` [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi() Nadav Amit
  1 sibling, 2 replies; 18+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Sasha Levin, Juergen Gross, Paolo Bonzini,
	Boris Ostrovsky, linux-hyperv, virtualization, kvm, xen-devel

To improve TLB shootdown performance, flush the remote and local TLBs
concurrently. Introduce flush_tlb_multi(), which does so. The current
flush_tlb_others() interface is kept, since the paravirtual interfaces
need to be adapted before it can be removed; that work is left for the
future. In such PV environments, TLB flushes are not currently
performed concurrently.

Add a static key to tell whether this new interface is supported.

Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: Juergen Gross <jgross@suse.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: linux-hyperv@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/hyperv/mmu.c                 |  2 +
 arch/x86/include/asm/paravirt.h       |  8 +++
 arch/x86/include/asm/paravirt_types.h |  6 +++
 arch/x86/include/asm/tlbflush.h       |  6 +++
 arch/x86/kernel/kvm.c                 |  1 +
 arch/x86/kernel/paravirt.c            |  3 ++
 arch/x86/mm/tlb.c                     | 71 ++++++++++++++++++++++-----
 arch/x86/xen/mmu_pv.c                 |  2 +
 8 files changed, 87 insertions(+), 12 deletions(-)

diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index e65d7fe6489f..ca28b400c87c 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -233,4 +233,6 @@ void hyperv_setup_mmu_ops(void)
 	pr_info("Using hypercall for remote TLB flush\n");
 	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
 	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
+
+	static_key_disable(&flush_tlb_multi_enabled.key);
 }
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index c25c38a05c1c..192be7254457 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -47,6 +47,8 @@ static inline void slow_down_io(void)
 #endif
 }
 
+DECLARE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
+
 static inline void __flush_tlb(void)
 {
 	PVOP_VCALL0(mmu.flush_tlb_user);
@@ -62,6 +64,12 @@ static inline void __flush_tlb_one_user(unsigned long addr)
 	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
 }
 
+static inline void flush_tlb_multi(const struct cpumask *cpumask,
+				   const struct flush_tlb_info *info)
+{
+	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
+}
+
 static inline void flush_tlb_others(const struct cpumask *cpumask,
 				    const struct flush_tlb_info *info)
 {
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 946f8f1f1efc..b93b3d90729a 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -211,6 +211,12 @@ struct pv_mmu_ops {
 	void (*flush_tlb_user)(void);
 	void (*flush_tlb_kernel)(void);
 	void (*flush_tlb_one_user)(unsigned long addr);
+	/*
+	 * flush_tlb_multi() is the preferred interface; it is capable of
+	 * flushing both local and remote CPUs.
+	 */
+	void (*flush_tlb_multi)(const struct cpumask *cpus,
+				const struct flush_tlb_info *info);
 	void (*flush_tlb_others)(const struct cpumask *cpus,
 				 const struct flush_tlb_info *info);
 
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index dee375831962..79272938cf79 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -569,6 +569,9 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
 }
 
+void native_flush_tlb_multi(const struct cpumask *cpumask,
+			     const struct flush_tlb_info *info);
+
 void native_flush_tlb_others(const struct cpumask *cpumask,
 			     const struct flush_tlb_info *info);
 
@@ -593,6 +596,9 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 #ifndef CONFIG_PARAVIRT
+#define flush_tlb_multi(mask, info)	\
+	native_flush_tlb_multi(mask, info)
+
 #define flush_tlb_others(mask, info)	\
 	native_flush_tlb_others(mask, info)
 
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5169b8cc35bb..00d81e898717 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -630,6 +630,7 @@ static void __init kvm_guest_init(void)
 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
 		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
+		static_key_disable(&flush_tlb_multi_enabled.key);
 	}
 
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 98039d7fb998..ac00afed5570 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -159,6 +159,8 @@ unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
 	return insn_len;
 }
 
+DEFINE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
+
 static void native_flush_tlb(void)
 {
 	__native_flush_tlb();
@@ -363,6 +365,7 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.flush_tlb_user	= native_flush_tlb,
 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
+	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
 	.mmu.flush_tlb_others	= native_flush_tlb_others,
 	.mmu.tlb_remove_table	=
 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index c34bcf03f06f..db73d5f1dd43 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 		 * garbage into our TLB.  Since switching to init_mm is barely
 		 * slower than a minimal flush, just switch to init_mm.
 		 *
-		 * This should be rare, with native_flush_tlb_others skipping
+		 * This should be rare, with native_flush_tlb_multi skipping
 		 * IPIs to lazy TLB mode CPUs.
 		 */
 		switch_mm_irqs_off(NULL, &init_mm, NULL);
@@ -635,9 +635,12 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
 }
 
-static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
+static void flush_tlb_func_local(void *info)
 {
 	const struct flush_tlb_info *f = info;
+	enum tlb_flush_reason reason;
+
+	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
 
 	flush_tlb_func_common(f, true, reason);
 }
@@ -655,14 +658,21 @@ static void flush_tlb_func_remote(void *info)
 	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
 }
 
-static bool tlb_is_not_lazy(int cpu, void *data)
+static inline bool tlb_is_not_lazy(int cpu)
 {
 	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
 }
 
-void native_flush_tlb_others(const struct cpumask *cpumask,
-			     const struct flush_tlb_info *info)
+static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
+
+void native_flush_tlb_multi(const struct cpumask *cpumask,
+			    const struct flush_tlb_info *info)
 {
+	/*
+	 * Do accounting and tracing. Note that there are (and have always
+	 * been) cases in which a remote TLB flush is traced but does not
+	 * actually take place.
+	 */
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
 	if (info->end == TLB_FLUSH_ALL)
 		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
@@ -682,10 +692,14 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 		 * means that the percpu tlb_gen variables won't be updated
 		 * and we'll do pointless flushes on future context switches.
 		 *
-		 * Rather than hooking native_flush_tlb_others() here, I think
+		 * Rather than hooking native_flush_tlb_multi() here, I think
 		 * that UV should be updated so that smp_call_function_many(),
 		 * etc, are optimal on UV.
 		 */
+		local_irq_disable();
+		flush_tlb_func_local((__force void *)info);
+		local_irq_enable();
+
 		cpumask = uv_flush_tlb_others(cpumask, info);
 		if (cpumask)
 			smp_call_function_many(cpumask, flush_tlb_func_remote,
@@ -704,11 +718,39 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 	 * doing a speculative memory access.
 	 */
 	if (info->freed_tables)
-		smp_call_function_many(cpumask, flush_tlb_func_remote,
-			       (void *)info, 1);
-	else
-		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
-				(void *)info, 1, GFP_ATOMIC, cpumask);
+		__smp_call_function_many(cpumask, flush_tlb_func_remote,
+					 flush_tlb_func_local, (void *)info, 1);
+	else {
+		/*
+		 * Although we could have used on_each_cpu_cond_mask(),
+		 * open-coding it has several performance advantages: (1) we can
+		 * use specialized functions for remote and local flushes; (2)
+		 * no need for indirect branch to test if TLB is lazy; (3) we
+		 * can use a designated cpumask for evaluating the condition
+		 * instead of allocating a new one.
+		 *
+		 * This works under the assumption that there are no nested TLB
+		 * flushes, an assumption that is already made in
+		 * flush_tlb_mm_range().
+		 */
+		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
+		int cpu;
+
+		cpumask_clear(cond_cpumask);
+
+		for_each_cpu(cpu, cpumask) {
+			if (tlb_is_not_lazy(cpu))
+				__cpumask_set_cpu(cpu, cond_cpumask);
+		}
+		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
+					 flush_tlb_func_local, (void *)info, 1);
+	}
+}
+
+void native_flush_tlb_others(const struct cpumask *cpumask,
+			     const struct flush_tlb_info *info)
+{
+	native_flush_tlb_multi(cpumask, info);
 }
 
 /*
@@ -774,10 +816,15 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
 {
 	int this_cpu = smp_processor_id();
 
+	if (static_branch_likely(&flush_tlb_multi_enabled)) {
+		flush_tlb_multi(cpumask, info);
+		return;
+	}
+
 	if (cpumask_test_cpu(this_cpu, cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
-		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
+		flush_tlb_func_local((__force void *)info);
 		local_irq_enable();
 	}
 
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index beb44e22afdf..0cb277848cb4 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2474,6 +2474,8 @@ void __init xen_init_mmu_ops(void)
 
 	pv_ops.mmu = xen_mmu_ops;
 
+	static_key_disable(&flush_tlb_multi_enabled.key);
+
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
       [not found] <20190613064813.8102-1-namit@vmware.com>
  2019-06-13  6:48 ` [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently Nadav Amit
@ 2019-06-13  6:48 ` Nadav Amit
  2019-06-25 21:40   ` Dave Hansen
  1 sibling, 1 reply; 18+ messages in thread
From: Nadav Amit @ 2019-06-13  6:48 UTC (permalink / raw)
  To: Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Nadav Amit, Paolo Bonzini, kvm

Support the new flush_tlb_multi() interface, which also flushes the
local CPU's TLB, instead of flush_tlb_others(), which does not. The new
interface is more performant since it parallelizes remote and local TLB
flushes.

The actual implementation of flush_tlb_multi() is almost identical to
that of flush_tlb_others().

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/kvm.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 00d81e898717..d00d551d4a2a 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -580,7 +580,7 @@ static void __init kvm_apf_trap_init(void)
 
 static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);
 
-static void kvm_flush_tlb_others(const struct cpumask *cpumask,
+static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
 			const struct flush_tlb_info *info)
 {
 	u8 state;
@@ -594,6 +594,11 @@ static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 	 * queue flush_on_enter for pre-empted vCPUs
 	 */
 	for_each_cpu(cpu, flushmask) {
+		/*
+		 * The local vCPU is never preempted, so there is no need to
+		 * explicitly skip it here - it will never be cleared from
+		 * flushmask.
+		 */
 		src = &per_cpu(steal_time, cpu);
 		state = READ_ONCE(src->preempted);
 		if ((state & KVM_VCPU_PREEMPTED)) {
@@ -603,7 +608,7 @@ static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 		}
 	}
 
-	native_flush_tlb_others(flushmask, info);
+	native_flush_tlb_multi(flushmask, info);
 }
 
 static void __init kvm_guest_init(void)
@@ -628,9 +633,8 @@ static void __init kvm_guest_init(void)
 	if (kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH) &&
 	    !kvm_para_has_hint(KVM_HINTS_REALTIME) &&
 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
-		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
+		pv_ops.mmu.flush_tlb_multi = kvm_flush_tlb_multi;
 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
-		static_key_disable(&flush_tlb_multi_enabled.key);
 	}
 
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
-- 
2.20.1



* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-13  6:48 ` [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently Nadav Amit
@ 2019-06-25 21:29   ` Dave Hansen
  2019-06-26  2:35     ` Nadav Amit
  2019-06-26  3:36   ` Andy Lutomirski
  1 sibling, 1 reply; 18+ messages in thread
From: Dave Hansen @ 2019-06-25 21:29 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Sasha Levin, Juergen Gross, Paolo Bonzini, Boris Ostrovsky,
	linux-hyperv, virtualization, kvm, xen-devel

On 6/12/19 11:48 PM, Nadav Amit wrote:
> To improve TLB shootdown performance, flush the remote and local TLBs
> concurrently. Introduce flush_tlb_multi() that does so. The current
> flush_tlb_others() interface is kept, since paravirtual interfaces need
> to be adapted first before it can be removed. This is left for future
> work. In such PV environments, TLB flushes are not performed, at this
> time, concurrently.
> 
> Add a static key to tell whether this new interface is supported.
> 
> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
> Cc: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Stephen Hemminger <sthemmin@microsoft.com>
> Cc: Sasha Levin <sashal@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: x86@kernel.org
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: linux-hyperv@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: virtualization@lists.linux-foundation.org
> Cc: kvm@vger.kernel.org
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
>  arch/x86/hyperv/mmu.c                 |  2 +
>  arch/x86/include/asm/paravirt.h       |  8 +++
>  arch/x86/include/asm/paravirt_types.h |  6 +++
>  arch/x86/include/asm/tlbflush.h       |  6 +++
>  arch/x86/kernel/kvm.c                 |  1 +
>  arch/x86/kernel/paravirt.c            |  3 ++
>  arch/x86/mm/tlb.c                     | 71 ++++++++++++++++++++++-----
>  arch/x86/xen/mmu_pv.c                 |  2 +
>  8 files changed, 87 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
> index e65d7fe6489f..ca28b400c87c 100644
> --- a/arch/x86/hyperv/mmu.c
> +++ b/arch/x86/hyperv/mmu.c
> @@ -233,4 +233,6 @@ void hyperv_setup_mmu_ops(void)
>  	pr_info("Using hypercall for remote TLB flush\n");
>  	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
>  	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
> +
> +	static_key_disable(&flush_tlb_multi_enabled.key);
>  }
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index c25c38a05c1c..192be7254457 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -47,6 +47,8 @@ static inline void slow_down_io(void)
>  #endif
>  }
>  
> +DECLARE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
> +
>  static inline void __flush_tlb(void)
>  {
>  	PVOP_VCALL0(mmu.flush_tlb_user);
> @@ -62,6 +64,12 @@ static inline void __flush_tlb_one_user(unsigned long addr)
>  	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
>  }
>  
> +static inline void flush_tlb_multi(const struct cpumask *cpumask,
> +				   const struct flush_tlb_info *info)
> +{
> +	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
> +}
> +
>  static inline void flush_tlb_others(const struct cpumask *cpumask,
>  				    const struct flush_tlb_info *info)
>  {
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 946f8f1f1efc..b93b3d90729a 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -211,6 +211,12 @@ struct pv_mmu_ops {
>  	void (*flush_tlb_user)(void);
>  	void (*flush_tlb_kernel)(void);
>  	void (*flush_tlb_one_user)(unsigned long addr);
> +	/*
> +	 * flush_tlb_multi() is the preferred interface, which is capable to
> +	 * flush both local and remote CPUs.
> +	 */
> +	void (*flush_tlb_multi)(const struct cpumask *cpus,
> +				const struct flush_tlb_info *info);
>  	void (*flush_tlb_others)(const struct cpumask *cpus,
>  				 const struct flush_tlb_info *info);
>  
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index dee375831962..79272938cf79 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -569,6 +569,9 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>  	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>  }
>  
> +void native_flush_tlb_multi(const struct cpumask *cpumask,
> +			     const struct flush_tlb_info *info);
> +
>  void native_flush_tlb_others(const struct cpumask *cpumask,
>  			     const struct flush_tlb_info *info);
>  
> @@ -593,6 +596,9 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>  extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>  
>  #ifndef CONFIG_PARAVIRT
> +#define flush_tlb_multi(mask, info)	\
> +	native_flush_tlb_multi(mask, info)
> +
>  #define flush_tlb_others(mask, info)	\
>  	native_flush_tlb_others(mask, info)
>  
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5169b8cc35bb..00d81e898717 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -630,6 +630,7 @@ static void __init kvm_guest_init(void)
>  	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
>  		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
>  		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
> +		static_key_disable(&flush_tlb_multi_enabled.key);
>  	}
>  
>  	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index 98039d7fb998..ac00afed5570 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -159,6 +159,8 @@ unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
>  	return insn_len;
>  }
>  
> +DEFINE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
> +
>  static void native_flush_tlb(void)
>  {
>  	__native_flush_tlb();
> @@ -363,6 +365,7 @@ struct paravirt_patch_template pv_ops = {
>  	.mmu.flush_tlb_user	= native_flush_tlb,
>  	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
>  	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
> +	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
>  	.mmu.flush_tlb_others	= native_flush_tlb_others,
>  	.mmu.tlb_remove_table	=
>  			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index c34bcf03f06f..db73d5f1dd43 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>  		 * garbage into our TLB.  Since switching to init_mm is barely
>  		 * slower than a minimal flush, just switch to init_mm.
>  		 *
> -		 * This should be rare, with native_flush_tlb_others skipping
> +		 * This should be rare, with native_flush_tlb_multi skipping
>  		 * IPIs to lazy TLB mode CPUs.
>  		 */

Nit, since we're messing with this, it can now be
"native_flush_tlb_multi()" since it is a function.

>  		switch_mm_irqs_off(NULL, &init_mm, NULL);
> @@ -635,9 +635,12 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>  	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
>  }
>  
> -static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
> +static void flush_tlb_func_local(void *info)
>  {
>  	const struct flush_tlb_info *f = info;
> +	enum tlb_flush_reason reason;
> +
> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;

Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
it like this, but seems like it would be nicer and easier to track down
the origins of these things if we did this at the caller.

>  	flush_tlb_func_common(f, true, reason);
>  }
> @@ -655,14 +658,21 @@ static void flush_tlb_func_remote(void *info)
>  	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
>  }
>  
> -static bool tlb_is_not_lazy(int cpu, void *data)
> +static inline bool tlb_is_not_lazy(int cpu)
>  {
>  	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
>  }

Nit: the compiler will probably inline this sucker anyway.  So, for
these kinds of patches, I'd resist the urge to do these kinds of tweaks,
especially since it starts to hide the important change on the line.

> -void native_flush_tlb_others(const struct cpumask *cpumask,
> -			     const struct flush_tlb_info *info)
> +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
> +
> +void native_flush_tlb_multi(const struct cpumask *cpumask,
> +			    const struct flush_tlb_info *info)
>  {
> +	/*
> +	 * Do accounting and tracing. Note that there are (and have always been)
> +	 * cases in which a remote TLB flush will be traced, but eventually
> +	 * would not happen.
> +	 */
>  	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
>  	if (info->end == TLB_FLUSH_ALL)
>  		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
> @@ -682,10 +692,14 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>  		 * means that the percpu tlb_gen variables won't be updated
>  		 * and we'll do pointless flushes on future context switches.
>  		 *
> -		 * Rather than hooking native_flush_tlb_others() here, I think
> +		 * Rather than hooking native_flush_tlb_multi() here, I think
>  		 * that UV should be updated so that smp_call_function_many(),
>  		 * etc, are optimal on UV.
>  		 */
> +		local_irq_disable();
> +		flush_tlb_func_local((__force void *)info);
> +		local_irq_enable();
> +
>  		cpumask = uv_flush_tlb_others(cpumask, info);
>  		if (cpumask)
>  			smp_call_function_many(cpumask, flush_tlb_func_remote,
> @@ -704,11 +718,39 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>  	 * doing a speculative memory access.
>  	 */
>  	if (info->freed_tables)
> -		smp_call_function_many(cpumask, flush_tlb_func_remote,
> -			       (void *)info, 1);
> -	else
> -		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
> -				(void *)info, 1, GFP_ATOMIC, cpumask);
> +		__smp_call_function_many(cpumask, flush_tlb_func_remote,
> +					 flush_tlb_func_local, (void *)info, 1);
> +	else {

I prefer brackets be added for 'if' blocks like this since it doesn't
take up any meaningful space and makes it less prone to compile errors.

> +		/*
> +		 * Although we could have used on_each_cpu_cond_mask(),
> +		 * open-coding it has several performance advantages: (1) we can
> +		 * use specialized functions for remote and local flushes; (2)
> +		 * no need for indirect branch to test if TLB is lazy; (3) we
> +		 * can use a designated cpumask for evaluating the condition
> +		 * instead of allocating a new one.
> +		 *
> +		 * This works under the assumption that there are no nested TLB
> +		 * flushes, an assumption that is already made in
> +		 * flush_tlb_mm_range().
> +		 */
> +		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);

This is logically a stack-local variable, right?  But, since we've got
preempt off and cpumasks can be huge, we don't want to allocate it on
the stack.  That might be worth a comment somewhere.

> +		int cpu;
> +
> +		cpumask_clear(cond_cpumask);
> +
> +		for_each_cpu(cpu, cpumask) {
> +			if (tlb_is_not_lazy(cpu))
> +				__cpumask_set_cpu(cpu, cond_cpumask);
> +		}

FWIW, it's probably worth calling out in the changelog that this loop
exists in on_each_cpu_cond_mask() too.  It looks bad here, but it's no
worse than what it replaces.

> +		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
> +					 flush_tlb_func_local, (void *)info, 1);
> +	}
> +}

There was a __force on an earlier 'info' cast.  Could you talk about
that for a minute and explain why that one is needed?

> +void native_flush_tlb_others(const struct cpumask *cpumask,
> +			     const struct flush_tlb_info *info)
> +{
> +	native_flush_tlb_multi(cpumask, info);
>  }
>  
>  /*
> @@ -774,10 +816,15 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
>  {
>  	int this_cpu = smp_processor_id();
>  
> +	if (static_branch_likely(&flush_tlb_multi_enabled)) {
> +		flush_tlb_multi(cpumask, info);
> +		return;
> +	}

Probably needs a comment for posterity above the if()^^:

	/* Use the optimized flush_tlb_multi() where we can. */

> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -2474,6 +2474,8 @@ void __init xen_init_mmu_ops(void)
>  
>  	pv_ops.mmu = xen_mmu_ops;
>  
> +	static_key_disable(&flush_tlb_multi_enabled.key);
> +
>  	memset(dummy_mapping, 0xff, PAGE_SIZE);
>  }

More comments, please.  Perhaps:

	Existing paravirt TLB flushes are incompatible with
	flush_tlb_multi() because....  Disable it when they are
	in use.


* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-13  6:48 ` [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi() Nadav Amit
@ 2019-06-25 21:40   ` Dave Hansen
  2019-06-26  2:39     ` Nadav Amit
  0 siblings, 1 reply; 18+ messages in thread
From: Dave Hansen @ 2019-06-25 21:40 UTC (permalink / raw)
  To: Nadav Amit, Peter Zijlstra, Andy Lutomirski
  Cc: linux-kernel, Ingo Molnar, Borislav Petkov, x86, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

On 6/12/19 11:48 PM, Nadav Amit wrote:
> Support the new interface of flush_tlb_multi, which also flushes the
> local CPU's TLB, instead of flush_tlb_others that does not. This
> interface is more performant since it parallelize remote and local TLB
> flushes.
> 
> The actual implementation of flush_tlb_multi() is almost identical to
> that of flush_tlb_others().

This confused me a bit.  I thought we didn't support paravirtualized
flush_tlb_multi() from reading earlier in the series.

But, it seems like that might be Xen-only and doesn't apply to KVM and
paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
that right?  It might be good to include some of that background in the
changelog to set the context.


* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-25 21:29   ` Dave Hansen
@ 2019-06-26  2:35     ` Nadav Amit
  2019-06-26  3:00       ` Dave Hansen
  0 siblings, 1 reply; 18+ messages in thread
From: Nadav Amit @ 2019-06-26  2:35 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Sasha Levin, Juergen Gross, Paolo Bonzini, Boris Ostrovsky,
	linux-hyperv, virtualization, kvm, xen-devel

> On Jun 25, 2019, at 2:29 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> To improve TLB shootdown performance, flush the remote and local TLBs
>> concurrently. Introduce flush_tlb_multi() that does so. The current
>> flush_tlb_others() interface is kept, since paravirtual interfaces need
>> to be adapted first before it can be removed. This is left for future
>> work. In such PV environments, TLB flushes are not performed, at this
>> time, concurrently.
>> 
>> Add a static key to tell whether this new interface is supported.
>> 
>> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
>> Cc: Haiyang Zhang <haiyangz@microsoft.com>
>> Cc: Stephen Hemminger <sthemmin@microsoft.com>
>> Cc: Sasha Levin <sashal@kernel.org>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Borislav Petkov <bp@alien8.de>
>> Cc: x86@kernel.org
>> Cc: Juergen Gross <jgross@suse.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: linux-hyperv@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Cc: virtualization@lists.linux-foundation.org
>> Cc: kvm@vger.kernel.org
>> Cc: xen-devel@lists.xenproject.org
>> Signed-off-by: Nadav Amit <namit@vmware.com>
>> ---
>> arch/x86/hyperv/mmu.c                 |  2 +
>> arch/x86/include/asm/paravirt.h       |  8 +++
>> arch/x86/include/asm/paravirt_types.h |  6 +++
>> arch/x86/include/asm/tlbflush.h       |  6 +++
>> arch/x86/kernel/kvm.c                 |  1 +
>> arch/x86/kernel/paravirt.c            |  3 ++
>> arch/x86/mm/tlb.c                     | 71 ++++++++++++++++++++++-----
>> arch/x86/xen/mmu_pv.c                 |  2 +
>> 8 files changed, 87 insertions(+), 12 deletions(-)
>> 
>> diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
>> index e65d7fe6489f..ca28b400c87c 100644
>> --- a/arch/x86/hyperv/mmu.c
>> +++ b/arch/x86/hyperv/mmu.c
>> @@ -233,4 +233,6 @@ void hyperv_setup_mmu_ops(void)
>> 	pr_info("Using hypercall for remote TLB flush\n");
>> 	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
>> 	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
>> +
>> +	static_key_disable(&flush_tlb_multi_enabled.key);
>> }
>> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
>> index c25c38a05c1c..192be7254457 100644
>> --- a/arch/x86/include/asm/paravirt.h
>> +++ b/arch/x86/include/asm/paravirt.h
>> @@ -47,6 +47,8 @@ static inline void slow_down_io(void)
>> #endif
>> }
>> 
>> +DECLARE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
>> +
>> static inline void __flush_tlb(void)
>> {
>> 	PVOP_VCALL0(mmu.flush_tlb_user);
>> @@ -62,6 +64,12 @@ static inline void __flush_tlb_one_user(unsigned long addr)
>> 	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
>> }
>> 
>> +static inline void flush_tlb_multi(const struct cpumask *cpumask,
>> +				   const struct flush_tlb_info *info)
>> +{
>> +	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
>> +}
>> +
>> static inline void flush_tlb_others(const struct cpumask *cpumask,
>> 				    const struct flush_tlb_info *info)
>> {
>> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
>> index 946f8f1f1efc..b93b3d90729a 100644
>> --- a/arch/x86/include/asm/paravirt_types.h
>> +++ b/arch/x86/include/asm/paravirt_types.h
>> @@ -211,6 +211,12 @@ struct pv_mmu_ops {
>> 	void (*flush_tlb_user)(void);
>> 	void (*flush_tlb_kernel)(void);
>> 	void (*flush_tlb_one_user)(unsigned long addr);
>> +	/*
>> +	 * flush_tlb_multi() is the preferred interface, which is capable of
>> +	 * flushing both local and remote CPUs.
>> +	 */
>> +	void (*flush_tlb_multi)(const struct cpumask *cpus,
>> +				const struct flush_tlb_info *info);
>> 	void (*flush_tlb_others)(const struct cpumask *cpus,
>> 				 const struct flush_tlb_info *info);
>> 
>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>> index dee375831962..79272938cf79 100644
>> --- a/arch/x86/include/asm/tlbflush.h
>> +++ b/arch/x86/include/asm/tlbflush.h
>> @@ -569,6 +569,9 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>> 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>> }
>> 
>> +void native_flush_tlb_multi(const struct cpumask *cpumask,
>> +			     const struct flush_tlb_info *info);
>> +
>> void native_flush_tlb_others(const struct cpumask *cpumask,
>> 			     const struct flush_tlb_info *info);
>> 
>> @@ -593,6 +596,9 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>> extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>> 
>> #ifndef CONFIG_PARAVIRT
>> +#define flush_tlb_multi(mask, info)	\
>> +	native_flush_tlb_multi(mask, info)
>> +
>> #define flush_tlb_others(mask, info)	\
>> 	native_flush_tlb_others(mask, info)
>> 
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 5169b8cc35bb..00d81e898717 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -630,6 +630,7 @@ static void __init kvm_guest_init(void)
>> 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
>> 		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
>> 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
>> +		static_key_disable(&flush_tlb_multi_enabled.key);
>> 	}
>> 
>> 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
>> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
>> index 98039d7fb998..ac00afed5570 100644
>> --- a/arch/x86/kernel/paravirt.c
>> +++ b/arch/x86/kernel/paravirt.c
>> @@ -159,6 +159,8 @@ unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
>> 	return insn_len;
>> }
>> 
>> +DEFINE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
>> +
>> static void native_flush_tlb(void)
>> {
>> 	__native_flush_tlb();
>> @@ -363,6 +365,7 @@ struct paravirt_patch_template pv_ops = {
>> 	.mmu.flush_tlb_user	= native_flush_tlb,
>> 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
>> 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
>> +	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
>> 	.mmu.flush_tlb_others	= native_flush_tlb_others,
>> 	.mmu.tlb_remove_table	=
>> 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index c34bcf03f06f..db73d5f1dd43 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>> 		 * garbage into our TLB.  Since switching to init_mm is barely
>> 		 * slower than a minimal flush, just switch to init_mm.
>> 		 *
>> -		 * This should be rare, with native_flush_tlb_others skipping
>> +		 * This should be rare, with native_flush_tlb_multi skipping
>> 		 * IPIs to lazy TLB mode CPUs.
>> 		 */
> 
> Nit, since we're messing with this, it can now be
> "native_flush_tlb_multi()" since it is a function.

Sure.

> 
>> switch_mm_irqs_off(NULL, &init_mm, NULL);
>> @@ -635,9 +635,12 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
>> 	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
>> }
>> 
>> -static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
>> +static void flush_tlb_func_local(void *info)
>> {
>> 	const struct flush_tlb_info *f = info;
>> +	enum tlb_flush_reason reason;
>> +
>> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
> 
> Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
> it like this, but seems like it would be nicer and easier to track down
> the origins of these things if we did this at the caller.

I prefer not to. I want later to inline flush_tlb_info into the same
cacheline that holds call_function_data. Increasing the size of
flush_tlb_info for no good reason will not help…

>> flush_tlb_func_common(f, true, reason);
>> }
>> @@ -655,14 +658,21 @@ static void flush_tlb_func_remote(void *info)
>> 	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
>> }
>> 
>> -static bool tlb_is_not_lazy(int cpu, void *data)
>> +static inline bool tlb_is_not_lazy(int cpu)
>> {
>> 	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
>> }
> 
> Nit: the compiler will probably inline this sucker anyway.  So, for
> these kinds of patches, I'd resist the urge to do these kinds of tweaks,
> especially since it starts to hide the important change on the line.

Of course.

> 
>> -void native_flush_tlb_others(const struct cpumask *cpumask,
>> -			     const struct flush_tlb_info *info)
>> +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
>> +
>> +void native_flush_tlb_multi(const struct cpumask *cpumask,
>> +			    const struct flush_tlb_info *info)
>> {
>> +	/*
>> +	 * Do accounting and tracing. Note that there are (and have always been)
>> +	 * cases in which a remote TLB flush is traced but does not
>> +	 * eventually happen.
>> +	 */
>> 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
>> 	if (info->end == TLB_FLUSH_ALL)
>> 		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
>> @@ -682,10 +692,14 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>> 		 * means that the percpu tlb_gen variables won't be updated
>> 		 * and we'll do pointless flushes on future context switches.
>> 		 *
>> -		 * Rather than hooking native_flush_tlb_others() here, I think
>> +		 * Rather than hooking native_flush_tlb_multi() here, I think
>> 		 * that UV should be updated so that smp_call_function_many(),
>> 		 * etc, are optimal on UV.
>> 		 */
>> +		local_irq_disable();
>> +		flush_tlb_func_local((__force void *)info);
>> +		local_irq_enable();
>> +
>> 		cpumask = uv_flush_tlb_others(cpumask, info);
>> 		if (cpumask)
>> 			smp_call_function_many(cpumask, flush_tlb_func_remote,
>> @@ -704,11 +718,39 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
>> 	 * doing a speculative memory access.
>> 	 */
>> 	if (info->freed_tables)
>> -		smp_call_function_many(cpumask, flush_tlb_func_remote,
>> -			       (void *)info, 1);
>> -	else
>> -		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
>> -				(void *)info, 1, GFP_ATOMIC, cpumask);
>> +		__smp_call_function_many(cpumask, flush_tlb_func_remote,
>> +					 flush_tlb_func_local, (void *)info, 1);
>> +	else {
> 
> I prefer brackets be added for 'if' blocks like this since it doesn't
> take up any meaningful space and makes it less prone to compile errors.

If you say so.

> 
>> +		/*
>> +		 * Although we could have used on_each_cpu_cond_mask(),
>> +		 * open-coding it has several performance advantages: (1) we can
>> +		 * use specialized functions for remote and local flushes; (2)
>> +		 * no need for indirect branch to test if TLB is lazy; (3) we
>> +		 * can use a designated cpumask for evaluating the condition
>> +		 * instead of allocating a new one.
>> +		 *
>> +		 * This works under the assumption that there are no nested TLB
>> +		 * flushes, an assumption that is already made in
>> +		 * flush_tlb_mm_range().
>> +		 */
>> +		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
> 
> This is logically a stack-local variable, right?  But, since we've got
> preempt off and cpumasks can be huge, we don't want to allocate it on
> the stack.  That might be worth a comment somewhere.

I will add a comment here.

> 
>> +		int cpu;
>> +
>> +		cpumask_clear(cond_cpumask);
>> +
>> +		for_each_cpu(cpu, cpumask) {
>> +			if (tlb_is_not_lazy(cpu))
>> +				__cpumask_set_cpu(cpu, cond_cpumask);
>> +		}
> 
> FWIW, it's probably worth calling out in the changelog that this loop
> exists in on_each_cpu_cond_mask() too.  It looks bad here, but it's no
> worse than what it replaces.

Added.
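
The lazy-CPU filtering loop discussed above can be modeled in plain userspace C. This is only a sketch: NR_CPUS, the uint32_t bitmask, and build_flush_mask() are illustrative stand-ins for the kernel's cpumask type and the per-CPU cpu_tlbstate.is_lazy flag, not the real API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NR_CPUS 8

/* Stand-in for per_cpu(cpu_tlbstate.is_lazy, cpu). */
static bool cpu_is_lazy[NR_CPUS];

/*
 * Build the condition mask: of the requested CPUs, keep only those
 * that are not in lazy TLB mode.  This mirrors the open-coded loop
 * that replaces on_each_cpu_cond_mask() in the patch.
 */
static uint32_t build_flush_mask(uint32_t requested)
{
	uint32_t cond = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if ((requested & (1u << cpu)) && !cpu_is_lazy[cpu])
			cond |= 1u << cpu;
	}
	return cond;
}
```

As in the patch, the resulting mask would then be handed to a single call that IPIs the remote CPUs and runs the local flush concurrently.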

> 
>> +		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
>> +					 flush_tlb_func_local, (void *)info, 1);
>> +	}
>> +}
> 
> There was a __force on an earlier 'info' cast.  Could you talk about
> that for a minute an explain why that one is needed?

I have no idea where the __force came from. I’ll remove it.

> 
>> +void native_flush_tlb_others(const struct cpumask *cpumask,
>> +			     const struct flush_tlb_info *info)
>> +{
>> +	native_flush_tlb_multi(cpumask, info);
>> }
>> 
>> /*
>> @@ -774,10 +816,15 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
>> {
>> 	int this_cpu = smp_processor_id();
>> 
>> +	if (static_branch_likely(&flush_tlb_multi_enabled)) {
>> +		flush_tlb_multi(cpumask, info);
>> +		return;
>> +	}
> 
> Probably needs a comment for posterity above the if()^^:
> 
> 	/* Use the optimized flush_tlb_multi() where we can. */

Right.
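
The static-key dispatch under review can be sketched as a userspace model. A plain bool stands in for the flush_tlb_multi_enabled static key, counters stand in for the actual flushes, and the real flush_tlb_on_cpus() also consults the cpumask before flushing locally; all names here are illustrative, not the kernel functions themselves.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Stand-in for the flush_tlb_multi_enabled static key: true natively,
 * cleared by PV setups (KVM/Xen/Hyper-V) that only provide
 * flush_tlb_others().
 */
static bool flush_tlb_multi_enabled = true;

static int multi_flushes, local_flushes, remote_flushes;

static void flush_tlb_multi(void)  { multi_flushes++; }
static void flush_tlb_local(void)  { local_flushes++; }
static void flush_tlb_others(void) { remote_flushes++; }

/*
 * Models the dispatch in flush_tlb_on_cpus(): one concurrent
 * local+remote flush when supported, otherwise the legacy
 * local-then-remote sequence.
 */
static void flush_tlb_on_cpus(void)
{
	if (flush_tlb_multi_enabled) {
		flush_tlb_multi();
		return;
	}
	flush_tlb_local();
	flush_tlb_others();
}
```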

> 
>> --- a/arch/x86/xen/mmu_pv.c
>> +++ b/arch/x86/xen/mmu_pv.c
>> @@ -2474,6 +2474,8 @@ void __init xen_init_mmu_ops(void)
>> 
>> 	pv_ops.mmu = xen_mmu_ops;
>> 
>> +	static_key_disable(&flush_tlb_multi_enabled.key);
>> +
>> 	memset(dummy_mapping, 0xff, PAGE_SIZE);
>> }
> 
> More comments, please.  Perhaps:
> 
> 	Existing paravirt TLB flushes are incompatible with
> 	flush_tlb_multi() because....  Disable it when they are
> 	in use.

There is no inherent reason for them to be incompatible. Someone needs to
adapt them. I will use my affiliation as an excuse for the question “why
don’t you do it?” ;-)

Anyhow, I will add a comment.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-25 21:40   ` Dave Hansen
@ 2019-06-26  2:39     ` Nadav Amit
  2019-06-26  3:35       ` Andy Lutomirski
  0 siblings, 1 reply; 18+ messages in thread
From: Nadav Amit @ 2019-06-26  2:39 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> Support the new interface of flush_tlb_multi, which also flushes the
>> local CPU's TLB, instead of flush_tlb_others that does not. This
>> interface is more performant since it parallelizes remote and local TLB
>> flushes.
>> 
>> The actual implementation of flush_tlb_multi() is almost identical to
>> that of flush_tlb_others().
> 
> This confused me a bit.  I thought we didn't support paravirtualized
> flush_tlb_multi() from reading earlier in the series.
> 
> But, it seems like that might be Xen-only and doesn't apply to KVM and
> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> that right?  It might be good to include some of that background in the
> changelog to set the context.

I’ll try to improve the change-logs a bit. There is no inherent reason for
PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
for future work, and here are some reasons:

1. Hyper-V/Xen TLB-flushing code is not very simple
2. I don’t have a proper setup
3. I am lazy




* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-26  2:35     ` Nadav Amit
@ 2019-06-26  3:00       ` Dave Hansen
  2019-06-26  3:32         ` Nadav Amit
  0 siblings, 1 reply; 18+ messages in thread
From: Dave Hansen @ 2019-06-26  3:00 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Sasha Levin, Juergen Gross, Paolo Bonzini, Boris Ostrovsky,
	linux-hyperv, virtualization, kvm, xen-devel

On 6/25/19 7:35 PM, Nadav Amit wrote:
>>> 	const struct flush_tlb_info *f = info;
>>> +	enum tlb_flush_reason reason;
>>> +
>>> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
>>
>> Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
>> it like this, but seems like it would be nicer and easier to track down
>> the origins of these things if we did this at the caller.
> 
> I prefer not to. I want later to inline flush_tlb_info into the same
> cacheline that holds call_function_data. Increasing the size of
> flush_tlb_info for no good reason will not help…

Well, flush_tlb_info is at 6/8ths of a cacheline at the moment.
call_function_data is 3/8ths.  To me, that means we have some slack in
the size.



* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-26  3:00       ` Dave Hansen
@ 2019-06-26  3:32         ` Nadav Amit
  0 siblings, 0 replies; 18+ messages in thread
From: Nadav Amit @ 2019-06-26  3:32 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Sasha Levin, Juergen Gross, Paolo Bonzini, Boris Ostrovsky,
	linux-hyperv, virtualization, kvm, xen-devel

> On Jun 25, 2019, at 8:00 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/25/19 7:35 PM, Nadav Amit wrote:
>>>> const struct flush_tlb_info *f = info;
>>>> +	enum tlb_flush_reason reason;
>>>> +
>>>> +	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
>>> 
>>> Should we just add the "reason" to flush_tlb_info?  It's OK-ish to imply
>>> it like this, but seems like it would be nicer and easier to track down
>>> the origins of these things if we did this at the caller.
>> 
>> I prefer not to. I want later to inline flush_tlb_info into the same
>> cacheline that holds call_function_data. Increasing the size of
>> flush_tlb_info for no good reason will not help…
> 
> Well, flush_tlb_info is at 6/8ths of a cacheline at the moment.
> call_function_data is 3/8ths.  To me, that means we have some slack in
> the size.

I do not understand your math.. :(

6 + 3 > 8, so putting both flush_tlb_info and call_function_data on the
same cacheline does not leave us any slack (we can save one qword, so we
can actually fit them on the same cacheline).

You can see my current implementation here:

https://lore.kernel.org/lkml/20190531063645.4697-4-namit@vmware.com/T/#m0ab5fe0799ba9ff0d41197f1095679fe26aebd57
https://lore.kernel.org/lkml/20190531063645.4697-4-namit@vmware.com/T/#m7b35a93dffd23fbb7ca813c795a0777d4cdcb51b



* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  2:39     ` Nadav Amit
@ 2019-06-26  3:35       ` Andy Lutomirski
  2019-06-26  3:41         ` Nadav Amit
  0 siblings, 1 reply; 18+ messages in thread
From: Andy Lutomirski @ 2019-06-26  3:35 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Dave Hansen, Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> >
> > On 6/12/19 11:48 PM, Nadav Amit wrote:
> >> Support the new interface of flush_tlb_multi, which also flushes the
> >> local CPU's TLB, instead of flush_tlb_others that does not. This
> >> interface is more performant since it parallelize remote and local TLB
> >> flushes.
> >>
> >> The actual implementation of flush_tlb_multi() is almost identical to
> >> that of flush_tlb_others().
> >
> > This confused me a bit.  I thought we didn't support paravirtualized
> > flush_tlb_multi() from reading earlier in the series.
> >
> > But, it seems like that might be Xen-only and doesn't apply to KVM and
> > paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> > that right?  It might be good to include some of that background in the
> > changelog to set the context.
>
> I’ll try to improve the change-logs a bit. There is no inherent reason for
> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
> for future work, and here are some reasons:
>
> 1. Hyper-V/Xen TLB-flushing code is not very simple
> 2. I don’t have a proper setup
> 3. I am lazy
>

In the long run, I think that we're going to want a way for one CPU to
do a remote flush and then, with appropriate locking, update the
tlb_gen fields for the remote CPU.  Getting this right may be a bit
nontrivial.


* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-13  6:48 ` [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently Nadav Amit
  2019-06-25 21:29   ` Dave Hansen
@ 2019-06-26  3:36   ` Andy Lutomirski
  2019-06-26  3:48     ` Nadav Amit
  1 sibling, 1 reply; 18+ messages in thread
From: Andy Lutomirski @ 2019-06-26  3:36 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Peter Zijlstra, Andy Lutomirski, LKML, Ingo Molnar,
	Borislav Petkov, X86 ML, Thomas Gleixner, Dave Hansen,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Sasha Levin,
	Juergen Gross, Paolo Bonzini, Boris Ostrovsky, linux-hyperv,
	Linux Virtualization, kvm list, xen-devel

On Wed, Jun 12, 2019 at 11:49 PM Nadav Amit <namit@vmware.com> wrote:
>
> To improve TLB shootdown performance, flush the remote and local TLBs
> concurrently. Introduce flush_tlb_multi() that does so. The current
> flush_tlb_others() interface is kept, since paravirtual interfaces need
> to be adapted first before it can be removed. This is left for future
> work. In such PV environments, TLB flushes are not performed, at this
> time, concurrently.

Would it be straightforward to have a default PV flush_tlb_multi()
that uses flush_tlb_others() under the hood?


* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  3:35       ` Andy Lutomirski
@ 2019-06-26  3:41         ` Nadav Amit
  2019-06-26  3:56           ` Andy Lutomirski
  0 siblings, 1 reply; 18+ messages in thread
From: Nadav Amit @ 2019-06-26  3:41 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar, Borislav Petkov,
	the arch/x86 maintainers, Thomas Gleixner, Dave Hansen,
	Paolo Bonzini, kvm

> On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
>>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
>>> 
>>> On 6/12/19 11:48 PM, Nadav Amit wrote:
>>>> Support the new interface of flush_tlb_multi, which also flushes the
>>>> local CPU's TLB, instead of flush_tlb_others that does not. This
>>>> interface is more performant since it parallelize remote and local TLB
>>>> flushes.
>>>> 
>>>> The actual implementation of flush_tlb_multi() is almost identical to
>>>> that of flush_tlb_others().
>>> 
>>> This confused me a bit.  I thought we didn't support paravirtualized
>>> flush_tlb_multi() from reading earlier in the series.
>>> 
>>> But, it seems like that might be Xen-only and doesn't apply to KVM and
>>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
>>> that right?  It might be good to include some of that background in the
>>> changelog to set the context.
>> 
>> I’ll try to improve the change-logs a bit. There is no inherent reason for
>> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
>> for future work, and here are some reasons:
>> 
>> 1. Hyper-V/Xen TLB-flushing code is not very simple
>> 2. I don’t have a proper setup
>> 3. I am lazy
> 
> In the long run, I think that we're going to want a way for one CPU to
> do a remote flush and then, with appropriate locking, update the
> tlb_gen fields for the remote CPU.  Getting this right may be a bit
> nontrivial.

What do you mean by “do a remote flush”?



* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-26  3:36   ` Andy Lutomirski
@ 2019-06-26  3:48     ` Nadav Amit
  2019-06-26  3:51       ` Andy Lutomirski
  0 siblings, 1 reply; 18+ messages in thread
From: Nadav Amit @ 2019-06-26  3:48 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Peter Zijlstra, LKML, Ingo Molnar, Borislav Petkov, X86 ML,
	Thomas Gleixner, Dave Hansen, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Sasha Levin, Juergen Gross, Paolo Bonzini,
	Boris Ostrovsky, linux-hyperv, Linux Virtualization, kvm list,
	xen-devel

> On Jun 25, 2019, at 8:36 PM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Wed, Jun 12, 2019 at 11:49 PM Nadav Amit <namit@vmware.com> wrote:
>> To improve TLB shootdown performance, flush the remote and local TLBs
>> concurrently. Introduce flush_tlb_multi() that does so. The current
>> flush_tlb_others() interface is kept, since paravirtual interfaces need
>> to be adapted first before it can be removed. This is left for future
>> work. In such PV environments, TLB flushes are not performed, at this
>> time, concurrently.
> 
> Would it be straightforward to have a default PV flush_tlb_multi()
> that uses flush_tlb_others() under the hood?

I prefer not to have a default PV implementation that should anyhow go away.

I can create unoptimized untested versions for Xen and Hyper-V, if you want.



* Re: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
  2019-06-26  3:48     ` Nadav Amit
@ 2019-06-26  3:51       ` Andy Lutomirski
  0 siblings, 0 replies; 18+ messages in thread
From: Andy Lutomirski @ 2019-06-26  3:51 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Andy Lutomirski, Peter Zijlstra, LKML, Ingo Molnar,
	Borislav Petkov, X86 ML, Thomas Gleixner, Dave Hansen,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Sasha Levin,
	Juergen Gross, Paolo Bonzini, Boris Ostrovsky, linux-hyperv,
	Linux Virtualization, kvm list, xen-devel

On Tue, Jun 25, 2019 at 8:48 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 8:36 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Wed, Jun 12, 2019 at 11:49 PM Nadav Amit <namit@vmware.com> wrote:
> >> To improve TLB shootdown performance, flush the remote and local TLBs
> >> concurrently. Introduce flush_tlb_multi() that does so. The current
> >> flush_tlb_others() interface is kept, since paravirtual interfaces need
> >> to be adapted first before it can be removed. This is left for future
> >> work. In such PV environments, TLB flushes are not performed, at this
> >> time, concurrently.
> >
> > Would it be straightforward to have a default PV flush_tlb_multi()
> > that uses flush_tlb_others() under the hood?
>
> I prefer not to have a default PV implementation that should anyhow go away.
>
> I can create unoptimized untested versions for Xen and Hyper-V, if you want.
>

I think I prefer that approach.  We should be able to get the
maintainers to test it.  I don't love having legacy paths in there,
ahem, UV.


* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  3:41         ` Nadav Amit
@ 2019-06-26  3:56           ` Andy Lutomirski
  2019-06-26  6:30             ` Nadav Amit
  0 siblings, 1 reply; 18+ messages in thread
From: Andy Lutomirski @ 2019-06-26  3:56 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Andy Lutomirski, Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

On Tue, Jun 25, 2019 at 8:41 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
> >>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> >>>
> >>> On 6/12/19 11:48 PM, Nadav Amit wrote:
> >>>> Support the new interface of flush_tlb_multi, which also flushes the
> >>>> local CPU's TLB, instead of flush_tlb_others that does not. This
> >>>> interface is more performant since it parallelize remote and local TLB
> >>>> flushes.
> >>>>
> >>>> The actual implementation of flush_tlb_multi() is almost identical to
> >>>> that of flush_tlb_others().
> >>>
> >>> This confused me a bit.  I thought we didn't support paravirtualized
> >>> flush_tlb_multi() from reading earlier in the series.
> >>>
> >>> But, it seems like that might be Xen-only and doesn't apply to KVM and
> >>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> >>> that right?  It might be good to include some of that background in the
> >>> changelog to set the context.
> >>
> >> I’ll try to improve the change-logs a bit. There is no inherent reason for
> >> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
> >> for future work, and here are some reasons:
> >>
> >> 1. Hyper-V/Xen TLB-flushing code is not very simple
> >> 2. I don’t have a proper setup
> >> 3. I am lazy
> >
> > In the long run, I think that we're going to want a way for one CPU to
> > do a remote flush and then, with appropriate locking, update the
> > tlb_gen fields for the remote CPU.  Getting this right may be a bit
> > nontrivial.
>
> What do you mean by “do a remote flush”?
>

I mean a PV-assisted flush on a CPU other than the CPU that started
it.  If you look at flush_tlb_func_common(), it's doing some work that
is rather fancier than just flushing the TLB.  By replacing it with
just a pure flush on Xen or Hyper-V, we're losing the potential CR3
switch and this bit:

        /* Both paths above update our state to mm_tlb_gen. */
        this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);

Skipping the former can hurt idle performance, although we should
consider just disabling all the lazy optimizations on systems with PV
flush.  (And I've asked Intel to help us out here in future hardware.
I have no idea what the result of asking will be.)  Skipping the
cpu_tlbstate write means that we will do unnecessary flushes in the
future, and that's not doing us any favors.

In principle, we should be able to do something like:

flush_tlb_multi(...);
for(each CPU that got flushed) {
  spin_lock(something appropriate?);
  per_cpu_write(cpu, cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, f->new_tlb_gen);
  spin_unlock(...);
}

with the caveat that it's more complicated than this if the flush is a
partial flush, and that we'll want to check that the ctx_id still
matches, etc.

Does this make sense?
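
Andy's loop above can be sketched in userspace C. All names here (tlb_ctx, publish_tlb_gen) are hypothetical, the per-CPU lock is elided, and a real implementation would also need to handle partial flushes, as he notes.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for cpu_tlbstate.ctxs[loaded_mm_asid] on each CPU. */
struct tlb_ctx {
	uint64_t ctx_id;
	uint64_t tlb_gen;
};

/*
 * After a PV-assisted remote flush, walk the flushed CPUs and publish
 * the new tlb_gen, but only where the context still matches and the
 * update would move the generation forward.
 */
static void publish_tlb_gen(struct tlb_ctx *ctxs, int nr_cpus,
			    uint64_t ctx_id, uint64_t new_tlb_gen)
{
	int cpu;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		/* Skip CPUs that switched to a different mm meanwhile. */
		if (ctxs[cpu].ctx_id != ctx_id)
			continue;
		if (ctxs[cpu].tlb_gen < new_tlb_gen)
			ctxs[cpu].tlb_gen = new_tlb_gen;
	}
}
```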


* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  3:56           ` Andy Lutomirski
@ 2019-06-26  6:30             ` Nadav Amit
  2019-06-26 16:37               ` Andy Lutomirski
  0 siblings, 1 reply; 18+ messages in thread
From: Nadav Amit @ 2019-06-26  6:30 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar, Borislav Petkov,
	the arch/x86 maintainers, Thomas Gleixner, Dave Hansen,
	Paolo Bonzini, kvm

> On Jun 25, 2019, at 8:56 PM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Tue, Jun 25, 2019 at 8:41 PM Nadav Amit <namit@vmware.com> wrote:
>>> On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
>>> 
>>> On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
>>>>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
>>>>> 
>>>>> On 6/12/19 11:48 PM, Nadav Amit wrote:
>>>>>> Support the new interface of flush_tlb_multi, which also flushes the
>>>>>> local CPU's TLB, instead of flush_tlb_others that does not. This
>>>>>> interface is more performant since it parallelize remote and local TLB
>>>>>> flushes.
>>>>>> 
>>>>>> The actual implementation of flush_tlb_multi() is almost identical to
>>>>>> that of flush_tlb_others().
>>>>> 
>>>>> This confused me a bit.  I thought we didn't support paravirtualized
>>>>> flush_tlb_multi() from reading earlier in the series.
>>>>> 
>>>>> But, it seems like that might be Xen-only and doesn't apply to KVM and
>>>>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
>>>>> that right?  It might be good to include some of that background in the
>>>>> changelog to set the context.
>>>> 
>>>> I’ll try to improve the change-logs a bit. There is no inherent reason for
>>>> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
>>>> for future work, and here are some reasons:
>>>> 
>>>> 1. Hyper-V/Xen TLB-flushing code is not very simple
>>>> 2. I don’t have a proper setup
>>>> 3. I am lazy
>>> 
>>> In the long run, I think that we're going to want a way for one CPU to
>>> do a remote flush and then, with appropriate locking, update the
>>> tlb_gen fields for the remote CPU.  Getting this right may be a bit
>>> nontrivial.
>> 
>> What do you mean by “do a remote flush”?
> 
> I mean a PV-assisted flush on a CPU other than the CPU that started
> it.  If you look at flush_tlb_func_common(), it's doing some work that
> is rather fancier than just flushing the TLB.  By replacing it with
> just a pure flush on Xen or Hyper-V, we're losing the potential CR3
> switch and this bit:
> 
>        /* Both paths above update our state to mm_tlb_gen. */
>        this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
> 
> Skipping the former can hurt idle performance, although we should
> consider just disabling all the lazy optimizations on systems with PV
> flush.  (And I've asked Intel to help us out here in future hardware.
> I have no idea what the result of asking will be.)  Skipping the
> cpu_tlbstate write means that we will do unnecessary flushes in the
> future, and that's not doing us any favors.
> 
> In principle, we should be able to do something like:
> 
> flush_tlb_multi(...);
> for(each CPU that got flushed) {
>  spin_lock(something appropriate?);
>  per_cpu_write(cpu, cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, f->new_tlb_gen);
>  spin_unlock(...);
> }
> 
> with the caveat that it's more complicated than this if the flush is a
> partial flush, and that we'll want to check that the ctx_id still
> matches, etc.
> 
> Does this make sense?

Thanks for the detailed explanation. Let me check that I got it right. 

You want to optimize cases in which:

1. A virtual machine

2. Which issues multiple (remote) TLB shootdowns

3. To a remote vCPU which is preempted by the hypervisor

4. And unlike KVM, the hypervisor does not provide facilities for the VM to
know which vCPU is preempted, and atomically request TLB flush when the vCPU
is scheduled.

Right?



* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26  6:30             ` Nadav Amit
@ 2019-06-26 16:37               ` Andy Lutomirski
  2019-06-26 17:41                 ` Vitaly Kuznetsov
  0 siblings, 1 reply; 18+ messages in thread
From: Andy Lutomirski @ 2019-06-26 16:37 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Andy Lutomirski, Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar,
	Borislav Petkov, the arch/x86 maintainers, Thomas Gleixner,
	Dave Hansen, Paolo Bonzini, kvm

On Tue, Jun 25, 2019 at 11:30 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 8:56 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Tue, Jun 25, 2019 at 8:41 PM Nadav Amit <namit@vmware.com> wrote:
> >>> On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >>>
> >>> On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
> >>>>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> >>>>>
> >>>>> On 6/12/19 11:48 PM, Nadav Amit wrote:
> >>>>>> Support the new interface of flush_tlb_multi, which also flushes the
> >>>>>> local CPU's TLB, instead of flush_tlb_others that does not. This
> >>>>>> interface is more performant since it parallelizes remote and local TLB
> >>>>>> flushes.
> >>>>>>
> >>>>>> The actual implementation of flush_tlb_multi() is almost identical to
> >>>>>> that of flush_tlb_others().
> >>>>>
> >>>>> This confused me a bit.  I thought we didn't support paravirtualized
> >>>>> flush_tlb_multi() from reading earlier in the series.
> >>>>>
> >>>>> But, it seems like that might be Xen-only and doesn't apply to KVM and
> >>>>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> >>>>> that right?  It might be good to include some of that background in the
> >>>>> changelog to set the context.
> >>>>
> >>>> I’ll try to improve the change-logs a bit. There is no inherent reason for
> >>>> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
> >>>> for future work, and here are some reasons:
> >>>>
> >>>> 1. Hyper-V/Xen TLB-flushing code is not very simple
> >>>> 2. I don’t have a proper setup
> >>>> 3. I am lazy
> >>>
> >>> In the long run, I think that we're going to want a way for one CPU to
> >>> do a remote flush and then, with appropriate locking, update the
> >>> tlb_gen fields for the remote CPU.  Getting this right may be a bit
> >>> nontrivial.
> >>
> >> What do you mean by “do a remote flush”?
> >
> > I mean a PV-assisted flush on a CPU other than the CPU that started
> > it.  If you look at flush_tlb_func_common(), it's doing some work that
> > is rather fancier than just flushing the TLB.  By replacing it with
> > just a pure flush on Xen or Hyper-V, we're losing the potential CR3
> > switch and this bit:
> >
> >        /* Both paths above update our state to mm_tlb_gen. */
> >        this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
> >
> > Skipping the former can hurt idle performance, although we should
> > consider just disabling all the lazy optimizations on systems with PV
> > flush.  (And I've asked Intel to help us out here in future hardware.
> > I have no idea what the result of asking will be.)  Skipping the
> > cpu_tlbstate write means that we will do unnecessary flushes in the
> > future, and that's not doing us any favors.
> >
> > In principle, we should be able to do something like:
> >
> > flush_tlb_multi(...);
> > for(each CPU that got flushed) {
> >  spin_lock(something appropriate?);
> >  per_cpu_write(cpu, cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, f->new_tlb_gen);
> >  spin_unlock(...);
> > }
> >
> > with the caveat that it's more complicated than this if the flush is a
> > partial flush, and that we'll want to check that the ctx_id still
> > matches, etc.
> >
> > Does this make sense?
>
> Thanks for the detailed explanation. Let me check that I got it right.
>
> You want to optimize cases in which:
>
> 1. A virtual machine

Yes.

>
> 2. Which issues multiple (remote) TLB shootdowns

Yes.  Or just one followed by a context switch.  Right now it's
suboptimal with just two vCPUs and a single remote flush.  If CPU 0
does a remote PV flush of CPU1 and then CPU1 context switches away
from the running mm and back, it will do an unnecessary flush on the
way back because the tlb_gen won't match.

>
> 3. To a remote vCPU which is preempted by the hypervisor

Yes, or even one that isn't preempted.

>
> 4. And unlike KVM, the hypervisor does not provide facilities for the VM to
> know which vCPU is preempted, and to atomically request a TLB flush when the
> vCPU is scheduled.
>

I'm not sure this makes much difference to the case I'm thinking of.

All this being said, do we currently have any system that supports
PCID *and* remote flushes?  I guess KVM has some mechanism, but I'm
not that familiar with its exact capabilities.  If I remember right,
Hyper-V doesn't expose PCID yet.


> Right?
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26 16:37               ` Andy Lutomirski
@ 2019-06-26 17:41                 ` Vitaly Kuznetsov
  2019-06-26 18:21                   ` Andy Lutomirski
  0 siblings, 1 reply; 18+ messages in thread
From: Vitaly Kuznetsov @ 2019-06-26 17:41 UTC (permalink / raw)
  To: Andy Lutomirski, Nadav Amit
  Cc: Dave Hansen, Peter Zijlstra, LKML, Ingo Molnar, Borislav Petkov,
	the arch/x86 maintainers, Thomas Gleixner, Dave Hansen,
	Paolo Bonzini, kvm

Andy Lutomirski <luto@kernel.org> writes:

> All this being said, do we currently have any system that supports
> PCID *and* remote flushes?  I guess KVM has some mechanism, but I'm
> not that familiar with its exact capabilities.  If I remember right,
> Hyper-V doesn't expose PCID yet.
>

It already does (and supports it to a certain extent), see

commit 617ab45c9a8900e64a78b43696c02598b8cad68b
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date:   Wed Jan 24 11:36:29 2018 +0100

    x86/hyperv: Stop suppressing X86_FEATURE_PCID

-- 
Vitaly

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()
  2019-06-26 17:41                 ` Vitaly Kuznetsov
@ 2019-06-26 18:21                   ` Andy Lutomirski
  0 siblings, 0 replies; 18+ messages in thread
From: Andy Lutomirski @ 2019-06-26 18:21 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Andy Lutomirski, Nadav Amit, Dave Hansen, Peter Zijlstra, LKML,
	Ingo Molnar, Borislav Petkov, the arch/x86 maintainers,
	Thomas Gleixner, Dave Hansen, Paolo Bonzini, kvm

On Wed, Jun 26, 2019 at 10:41 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
> Andy Lutomirski <luto@kernel.org> writes:
>
> > All this being said, do we currently have any system that supports
> > PCID *and* remote flushes?  I guess KVM has some mechanism, but I'm
> > not that familiar with its exact capabilities.  If I remember right,
> > Hyper-V doesn't expose PCID yet.
> >
>
> It already does (and supports it to a certain extent), see
>
> commit 617ab45c9a8900e64a78b43696c02598b8cad68b
> Author: Vitaly Kuznetsov <vkuznets@redhat.com>
> Date:   Wed Jan 24 11:36:29 2018 +0100
>
>     x86/hyperv: Stop suppressing X86_FEATURE_PCID
>

Hmm.  Once the dust settles from Nadav's patches, I think we should
see about supporting it better :)

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2019-06-26 18:22 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20190613064813.8102-1-namit@vmware.com>
2019-06-13  6:48 ` [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently Nadav Amit
2019-06-25 21:29   ` Dave Hansen
2019-06-26  2:35     ` Nadav Amit
2019-06-26  3:00       ` Dave Hansen
2019-06-26  3:32         ` Nadav Amit
2019-06-26  3:36   ` Andy Lutomirski
2019-06-26  3:48     ` Nadav Amit
2019-06-26  3:51       ` Andy Lutomirski
2019-06-13  6:48 ` [PATCH 6/9] KVM: x86: Provide paravirtualized flush_tlb_multi() Nadav Amit
2019-06-25 21:40   ` Dave Hansen
2019-06-26  2:39     ` Nadav Amit
2019-06-26  3:35       ` Andy Lutomirski
2019-06-26  3:41         ` Nadav Amit
2019-06-26  3:56           ` Andy Lutomirski
2019-06-26  6:30             ` Nadav Amit
2019-06-26 16:37               ` Andy Lutomirski
2019-06-26 17:41                 ` Vitaly Kuznetsov
2019-06-26 18:21                   ` Andy Lutomirski
