* [PATCH v6 0/8] x86/split_lock: Fix and virtualization of split lock detection
From: Xiaoyao Li @ 2020-03-24 15:18 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

Sorry for the noise; I forgot to CC the mailing list.

This series adds virtualization of split lock detection for guests and
also contains some fixes for the native kernel split lock handling.

Note, this series is based on the kernel patch [1]. Patches 1-3 are x86
kernel patches based on the linux/master branch. Patches 4-8 are KVM
patches based on the kvm/queue branch.

Patch 1 is a fix and enhancement for kernel split lock detection. It
ensures the X86_FEATURE_SPLIT_LOCK_DETECT flag is set only when the
feature actually exists and is not disabled via kernel parameters, and it
explicitly turns split lock detection off for sld_off instead of assuming
BIOS/firmware leaves it cleared.
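
For reference, the detection mode is selected with the split_lock_detect=
kernel parameter; the option names below are assumed to mirror the
sld_off/sld_warn/sld_fatal states used in Patch 1:

	split_lock_detect=off|warn|fatal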

Patch 2 optimizes the runtime MSR accesses.

Patch 3 is preparation for enabling split lock detection virtualization
in KVM.

Patch 4 fixes the issue that a malicious guest may exploit the KVM
emulator to attack the host kernel.

Patch 5 handles the guest's split lock #AC when the host turns split
lock detection on.

Patches 6-8 implement the virtualization of split lock detection in KVM.

[1]: https://lore.kernel.org/lkml/158031147976.396.8941798847364718785.tip-bot2@tip-bot2/ 

Changes in v6:
 - Drop the sld_not_exist flag and use X86_FEATURE_SPLIT_LOCK_DETECT to
   check whether split lock detection needs to be initialized. [tglx]
 - Use tglx's method to verify the existence of split lock detection.
 - Small optimization of sld_update_msr(): the default value of
   msr_test_ctrl_cache has the split_lock_detect bit cleared.
 - Drop the patch 3 in v5 that introduced the kvm_only option. [tglx]
 - Rebase patches 4-8 to kvm/queue.
 - Use the new kvm-cpu-cap to expose X86_FEATURE_CORE_CAPABILITIES in
   Patch 6.

Changes in v5:
 - Use X86_FEATURE_SPLIT_LOCK_DETECT flag in kvm to ensure split lock
   detection is really supported.
 - Add and export SLD-related helper functions in the KVM patches that
   use them.

Changes in v4:
 - Add patch 1 to rework the initialization flow of split lock
   detection.
 - Drop percpu MSR_TEST_CTRL cache, just use a static variable to cache
   the reserved/unused bit of MSR_TEST_CTRL. [Sean]
 - Add new option for split_lock_detect kernel param.
 - Changelog refinement. [Sean]
 - Add a new patch to enable MSR_TEST_CTRL for intel guest. [Sean]

Xiaoyao Li (8):
  x86/split_lock: Rework the initialization flow of split lock detection
  x86/split_lock: Avoid runtime reads of the TEST_CTRL MSR
  x86/split_lock: Export handle_user_split_lock()
  kvm: x86: Emulate split-lock access as a write in emulator
  kvm: vmx: Extend VMX's #AC interceptor to handle split lock #AC
    happens in guest
  kvm: x86: Emulate MSR IA32_CORE_CAPABILITIES
  kvm: vmx: Enable MSR_TEST_CTRL for intel guest
  kvm: vmx: virtualize split lock detection

 arch/x86/include/asm/cpu.h      |  21 +++++-
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kernel/cpu/intel.c     | 114 +++++++++++++++++++-------------
 arch/x86/kernel/traps.c         |   2 +-
 arch/x86/kvm/cpuid.c            |   1 +
 arch/x86/kvm/vmx/vmx.c          |  75 ++++++++++++++++++++-
 arch/x86/kvm/vmx/vmx.h          |   1 +
 arch/x86/kvm/x86.c              |  42 +++++++++++-
 8 files changed, 203 insertions(+), 54 deletions(-)

-- 
2.20.1



* [PATCH v6 1/8] x86/split_lock: Rework the initialization flow of split lock detection
From: Xiaoyao Li @ 2020-03-24 15:18 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

The current initialization flow of split lock detection has the
following issues:

1. It assumes the initial value of MSR_TEST_CTRL.SPLIT_LOCK_DETECT is
   zero. However, it's possible that BIOS/firmware has already set it.

2. The X86_FEATURE_SPLIT_LOCK_DETECT flag is set unconditionally, even
   when a virtualization flaw makes FMS indicate the feature exists
   while it's actually not supported.

3. Because of #2, with nested virtualization, L1 KVM cannot rely on the
   X86_FEATURE_SPLIT_LOCK_DETECT flag to check for the existence of the
   feature.

Rework the initialization flow to solve the above issues. In detail,
explicitly set and clear the split_lock_detect bit to verify that
MSR_TEST_CTRL can be accessed, and read the MSR back after writing it to
ensure the bit was actually set.

The X86_FEATURE_SPLIT_LOCK_DETECT flag is set only when the feature does
exist and is not disabled with the kernel parameter
"split_lock_detect=off".

Originally-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kernel/cpu/intel.c | 79 +++++++++++++++++++++----------------
 1 file changed, 46 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index db3e745e5d47..a0a7d0ec170a 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -44,7 +44,7 @@ enum split_lock_detect_state {
  * split_lock_setup() will switch this to sld_warn on systems that support
  * split lock detect, unless there is a command line override.
  */
-static enum split_lock_detect_state sld_state = sld_off;
+static enum split_lock_detect_state sld_state __ro_after_init = sld_off;
 
 /*
  * Processors which have self-snooping capability can handle conflicting
@@ -984,78 +984,91 @@ static inline bool match_option(const char *arg, int arglen, const char *opt)
 	return len == arglen && !strncmp(arg, opt, len);
 }
 
+static bool __init split_lock_verify_msr(bool on)
+{
+	u64 ctrl, tmp;
+
+	if (rdmsrl_safe(MSR_TEST_CTRL, &ctrl))
+		return false;
+
+	if (on)
+		ctrl |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+	else
+		ctrl &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+
+	if (wrmsrl_safe(MSR_TEST_CTRL, ctrl))
+		return false;
+
+	rdmsrl(MSR_TEST_CTRL, tmp);
+	return ctrl == tmp;
+}
+
 static void __init split_lock_setup(void)
 {
+	enum split_lock_detect_state state = sld_warn;
 	char arg[20];
 	int i, ret;
 
-	setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
-	sld_state = sld_warn;
+	if (!split_lock_verify_msr(false)) {
+		pr_info("MSR access failed: Disabled\n");
+		return;
+	}
 
 	ret = cmdline_find_option(boot_command_line, "split_lock_detect",
 				  arg, sizeof(arg));
 	if (ret >= 0) {
 		for (i = 0; i < ARRAY_SIZE(sld_options); i++) {
 			if (match_option(arg, ret, sld_options[i].option)) {
-				sld_state = sld_options[i].state;
+				state = sld_options[i].state;
 				break;
 			}
 		}
 	}
 
-	switch (sld_state) {
+	switch (state) {
 	case sld_off:
 		pr_info("disabled\n");
-		break;
-
+		return;
 	case sld_warn:
 		pr_info("warning about user-space split_locks\n");
 		break;
-
 	case sld_fatal:
 		pr_info("sending SIGBUS on user-space split_locks\n");
 		break;
 	}
+
+	if (!split_lock_verify_msr(true)) {
+		pr_info("MSR access failed: Disabled\n");
+		return;
+	}
+
+	sld_state = state;
+	setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
 }
 
 /*
- * Locking is not required at the moment because only bit 29 of this
- * MSR is implemented and locking would not prevent that the operation
- * of one thread is immediately undone by the sibling thread.
- * Use the "safe" versions of rdmsr/wrmsr here because although code
- * checks CPUID and MSR bits to make sure the TEST_CTRL MSR should
- * exist, there may be glitches in virtualization that leave a guest
- * with an incorrect view of real h/w capabilities.
+ * MSR_TEST_CTRL is per core, but we treat it like a per CPU MSR. Locking
+ * is not implemented as one thread could undo the setting of the other
+ * thread immediately after dropping the lock anyway.
  */
-static bool __sld_msr_set(bool on)
+static void sld_update_msr(bool on)
 {
 	u64 test_ctrl_val;
 
-	if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val))
-		return false;
+	rdmsrl(MSR_TEST_CTRL, test_ctrl_val);
 
 	if (on)
 		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
 	else
 		test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
 
-	return !wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val);
+	wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
 }
 
 static void split_lock_init(void)
 {
-	if (sld_state == sld_off)
-		return;
-
-	if (__sld_msr_set(true))
-		return;
-
-	/*
-	 * If this is anything other than the boot-cpu, you've done
-	 * funny things and you get to keep whatever pieces.
-	 */
-	pr_warn("MSR fail -- disabled\n");
-	sld_state = sld_off;
+	if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
+		sld_update_msr(sld_state != sld_off);
 }
 
 bool handle_user_split_lock(struct pt_regs *regs, long error_code)
@@ -1071,7 +1084,7 @@ bool handle_user_split_lock(struct pt_regs *regs, long error_code)
 	 * progress and set TIF_SLD so the detection is re-enabled via
 	 * switch_to_sld() when the task is scheduled out.
 	 */
-	__sld_msr_set(false);
+	sld_update_msr(false);
 	set_tsk_thread_flag(current, TIF_SLD);
 	return true;
 }
@@ -1085,7 +1098,7 @@ bool handle_user_split_lock(struct pt_regs *regs, long error_code)
  */
 void switch_to_sld(unsigned long tifn)
 {
-	__sld_msr_set(!(tifn & _TIF_SLD));
+	sld_update_msr(!(tifn & _TIF_SLD));
 }
 
 #define SPLIT_LOCK_CPU(model) {X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY}
-- 
2.20.1



* [PATCH v6 2/8] x86/split_lock: Avoid runtime reads of the TEST_CTRL MSR
From: Xiaoyao Li @ 2020-03-24 15:18 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

In a context switch from a task that is detecting split locks to one
that is not (or vice versa), we need to update the TEST_CTRL MSR.
Currently this is done with the common sequence:
	read the MSR
	flip the bit
	write the MSR
in order to avoid changing the value of any reserved bits in the MSR.

Cache the unused and reserved bits of the TEST_CTRL MSR, with the
SPLIT_LOCK_DETECT bit cleared, during initialization, so that the
expensive RDMSR instruction can be avoided during context switches.
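
For illustration, a rough sketch of the change to the update path (names
as introduced by this patch; the cached value is captured while the
SPLIT_LOCK_DETECT bit is known to be clear):

	/* old: read-modify-write on every context switch */
	rdmsrl(MSR_TEST_CTRL, val);		/* expensive RDMSR */
	if (on)
		val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	else
		val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	wrmsrl(MSR_TEST_CTRL, val);

	/* new: start from the cached value, SLD bit already clear */
	val = msr_test_ctrl_cache;
	if (on)
		val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	wrmsrl(MSR_TEST_CTRL, val);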

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Originally-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kernel/cpu/intel.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index a0a7d0ec170a..553b5855c32b 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -45,6 +45,7 @@ enum split_lock_detect_state {
  * split lock detect, unless there is a command line override.
  */
 static enum split_lock_detect_state sld_state __ro_after_init = sld_off;
+static u64 msr_test_ctrl_cache __ro_after_init;
 
 /*
  * Processors which have self-snooping capability can handle conflicting
@@ -1037,6 +1038,8 @@ static void __init split_lock_setup(void)
 		break;
 	}
 
+	rdmsrl(MSR_TEST_CTRL, msr_test_ctrl_cache);
+
 	if (!split_lock_verify_msr(true)) {
 		pr_info("MSR access failed: Disabled\n");
 		return;
@@ -1053,14 +1056,10 @@ static void __init split_lock_setup(void)
  */
 static void sld_update_msr(bool on)
 {
-	u64 test_ctrl_val;
-
-	rdmsrl(MSR_TEST_CTRL, test_ctrl_val);
+	u64 test_ctrl_val = msr_test_ctrl_cache;
 
 	if (on)
 		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
-	else
-		test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
 
 	wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
 }
-- 
2.20.1



* [PATCH v6 3/8] x86/split_lock: Export handle_user_split_lock()
From: Xiaoyao Li @ 2020-03-24 15:18 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

In the future, KVM will use handle_user_split_lock() to handle #AC
caused by a split lock in the guest. Since KVM doesn't have a @regs
context and will pre-check EFLAGS.AC itself, move the EFLAGS.AC check
into do_alignment_check().

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/include/asm/cpu.h  | 4 ++--
 arch/x86/kernel/cpu/intel.c | 7 ++++---
 arch/x86/kernel/traps.c     | 2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
index ff6f3ca649b3..ff567afa6ee1 100644
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -43,11 +43,11 @@ unsigned int x86_stepping(unsigned int sig);
 #ifdef CONFIG_CPU_SUP_INTEL
 extern void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c);
 extern void switch_to_sld(unsigned long tifn);
-extern bool handle_user_split_lock(struct pt_regs *regs, long error_code);
+extern bool handle_user_split_lock(unsigned long ip);
 #else
 static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {}
 static inline void switch_to_sld(unsigned long tifn) {}
-static inline bool handle_user_split_lock(struct pt_regs *regs, long error_code)
+static inline bool handle_user_split_lock(unsigned long ip)
 {
 	return false;
 }
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 553b5855c32b..aed2b477e2ad 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -1070,13 +1070,13 @@ static void split_lock_init(void)
 		sld_update_msr(sld_state != sld_off);
 }
 
-bool handle_user_split_lock(struct pt_regs *regs, long error_code)
+bool handle_user_split_lock(unsigned long ip)
 {
-	if ((regs->flags & X86_EFLAGS_AC) || sld_state == sld_fatal)
+	if (sld_state == sld_fatal)
 		return false;
 
 	pr_warn_ratelimited("#AC: %s/%d took a split_lock trap at address: 0x%lx\n",
-			    current->comm, current->pid, regs->ip);
+			    current->comm, current->pid, ip);
 
 	/*
 	 * Disable the split lock detection for this task so it can make
@@ -1087,6 +1087,7 @@ bool handle_user_split_lock(struct pt_regs *regs, long error_code)
 	set_tsk_thread_flag(current, TIF_SLD);
 	return true;
 }
+EXPORT_SYMBOL_GPL(handle_user_split_lock);
 
 /*
  * This function is called only when switching between tasks with
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 0ef5befaed7d..407ff9be610f 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -304,7 +304,7 @@ dotraplinkage void do_alignment_check(struct pt_regs *regs, long error_code)
 
 	local_irq_enable();
 
-	if (handle_user_split_lock(regs, error_code))
+	if (!(regs->flags & X86_EFLAGS_AC) && handle_user_split_lock(regs->ip))
 		return;
 
 	do_trap(X86_TRAP_AC, SIGBUS, "alignment check", regs,
-- 
2.20.1



* [PATCH v6 4/8] kvm: x86: Emulate split-lock access as a write in emulator
From: Xiaoyao Li @ 2020-03-24 15:18 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

If split lock detection is on (warn/fatal), the #AC handler calls die()
when a split lock happens in the kernel.

A malicious guest can exploit the KVM emulator to trigger a split lock
#AC in the kernel [1]. So just emulate the access as a write if it's a
split-lock access (the same way an access that spans a page is handled),
to keep a malicious guest from attacking the kernel.

More discussion can be found in [2][3].

[1] https://lore.kernel.org/lkml/8c5b11c9-58df-38e7-a514-dc12d687b198@redhat.com/
[2] https://lkml.kernel.org/r/20200131200134.GD18946@linux.intel.com
[3] https://lkml.kernel.org/r/20200227001117.GX9940@linux.intel.com
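
For illustration, a split-lock access here is a locked access that
crosses a cache-line boundary; a minimal sketch of such a check
(hypothetical helper, not part of this patch):

	static bool is_split_lock_access(gpa_t gpa, unsigned int bytes)
	{
		u64 mask = ~(u64)(cache_line_size() - 1);

		return ((gpa + bytes - 1) & mask) != (gpa & mask);
	}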

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/include/asm/cpu.h  | 2 ++
 arch/x86/kernel/cpu/intel.c | 6 ++++++
 arch/x86/kvm/x86.c          | 7 ++++++-
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
index ff567afa6ee1..d2071f6a35ac 100644
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -44,6 +44,7 @@ unsigned int x86_stepping(unsigned int sig);
 extern void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c);
 extern void switch_to_sld(unsigned long tifn);
 extern bool handle_user_split_lock(unsigned long ip);
+extern bool split_lock_detect_on(void);
 #else
 static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {}
 static inline void switch_to_sld(unsigned long tifn) {}
@@ -51,5 +52,6 @@ static inline bool handle_user_split_lock(unsigned long ip)
 {
 	return false;
 }
+static inline bool split_lock_detect_on(void) { return false; }
 #endif
 #endif /* _ASM_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index aed2b477e2ad..fd67be719284 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -1070,6 +1070,12 @@ static void split_lock_init(void)
 		sld_update_msr(sld_state != sld_off);
 }
 
+bool split_lock_detect_on(void)
+{
+	return sld_state != sld_off;
+}
+EXPORT_SYMBOL_GPL(split_lock_detect_on);
+
 bool handle_user_split_lock(unsigned long ip)
 {
 	if (sld_state == sld_fatal)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ebd56aa10d9f..5ef57e3a315f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5831,6 +5831,7 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 {
 	struct kvm_host_map map;
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+	u64 page_line_mask = PAGE_MASK;
 	gpa_t gpa;
 	char *kaddr;
 	bool exchanged;
@@ -5845,7 +5846,11 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 	    (gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
 		goto emul_write;
 
-	if (((gpa + bytes - 1) & PAGE_MASK) != (gpa & PAGE_MASK))
+	if (split_lock_detect_on())
+		page_line_mask = ~(cache_line_size() - 1);
+
+	/* when write spans page or spans cache when SLD enabled */
+	if (((gpa + bytes - 1) & page_line_mask) != (gpa & page_line_mask))
 		goto emul_write;
 
 	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
-- 
2.20.1



* [PATCH v6 5/8] kvm: vmx: Extend VMX's #AC interceptor to handle split lock #AC happens in guest
From: Xiaoyao Li @ 2020-03-24 15:18 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

There are two types of #AC that can be generated on Intel CPUs:
 1. legacy alignment check #AC
 2. split lock #AC

A legacy alignment check #AC can be injected into the guest if the guest
has enabled alignment checking.

When the host enables split lock detection, i.e., sld_warn or sld_fatal,
there will be an unexpected #AC in the guest, intercepted by KVM, because
KVM doesn't virtualize this feature for the guest and the hardware value
of the MSR_TEST_CTRL.SLD bit stays unchanged while the vcpu is running.

To handle this unexpected #AC, treat the guest just like host user mode
by calling handle_user_split_lock():
 - If the host is sld_warn, warn and set TIF_SLD so that
   __switch_to_xtra() does the MSR_TEST_CTRL.SLD bit switching when
   control transfers to/from this vcpu.
 - If the host is sld_fatal, forward the #AC to userspace, similar to
   sending SIGBUS.
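
Roughly, the #AC interceptor then ends up with the following shape (a
sketch using the helpers from this series; see the patch below for the
exact code):

	if (!split_lock_detect_on() ||
	    guest_cpu_alignment_check_enabled(vcpu)) {
		/* legacy #AC, or host SLD off: reflect the #AC into the guest */
		kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
	} else if (handle_user_split_lock(kvm_rip_read(vcpu))) {
		/* sld_warn: warned and TIF_SLD set, resume the guest */
	} else {
		/* sld_fatal: exit to userspace, similar to sending SIGBUS */
	}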

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 094dbe375f01..300e1e149372 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4613,6 +4613,12 @@ static int handle_machine_check(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static inline bool guest_cpu_alignment_check_enabled(struct kvm_vcpu *vcpu)
+{
+	return vmx_get_cpl(vcpu) == 3 && kvm_read_cr0_bits(vcpu, X86_CR0_AM) &&
+	       (kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
+}
+
 static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -4678,9 +4684,6 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 		return handle_rmode_exception(vcpu, ex_no, error_code);
 
 	switch (ex_no) {
-	case AC_VECTOR:
-		kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
-		return 1;
 	case DB_VECTOR:
 		dr6 = vmcs_readl(EXIT_QUALIFICATION);
 		if (!(vcpu->guest_debug &
@@ -4709,6 +4712,27 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 		kvm_run->debug.arch.pc = vmcs_readl(GUEST_CS_BASE) + rip;
 		kvm_run->debug.arch.exception = ex_no;
 		break;
+	case AC_VECTOR:
+		/*
+		 * Reflect #AC to the guest if it's expecting the #AC, i.e. has
+		 * legacy alignment check enabled.  Pre-check host split lock
+		 * support to avoid the VMREADs needed to check legacy #AC,
+		 * i.e. reflect the #AC if the only possible source is legacy
+		 * alignment checks.
+		 */
+		if (!split_lock_detect_on() ||
+		    guest_cpu_alignment_check_enabled(vcpu)) {
+			kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
+			return 1;
+		}
+
+		/*
+		 * Forward the #AC to userspace if kernel policy does not allow
+		 * temporarily disabling split lock detection.
+		 */
+		if (handle_user_split_lock(kvm_rip_read(vcpu)))
+			return 1;
+		fallthrough;
 	default:
 		kvm_run->exit_reason = KVM_EXIT_EXCEPTION;
 		kvm_run->ex.exception = ex_no;
-- 
2.20.1



* [PATCH v6 6/8] kvm: x86: Emulate MSR IA32_CORE_CAPABILITIES
From: Xiaoyao Li @ 2020-03-24 15:18 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

Emulate MSR_IA32_CORE_CAPABILITIES in software and unconditionally
advertise its support to userspace. Like MSR_IA32_ARCH_CAPABILITIES, it
is a feature-enumerating MSR and can be fully emulated regardless of
hardware support. Existence of CORE_CAPABILITIES is enumerated via
CPUID.(EAX=7H,ECX=0):EDX[30].

Note, support for individual features enumerated via CORE_CAPABILITIES,
e.g., split lock detection, will be added in future patches.
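
For reference, a guest kernel would consume this enumeration roughly as
follows (a sketch; standard CPUID/MSR helpers are assumed, and the SLD
bit layout is the one used later in this series):

	u64 caps = 0;

	/* CPUID.(EAX=7,ECX=0):EDX[30] enumerates MSR_IA32_CORE_CAPS */
	if (cpuid_edx(7) & BIT(30))
		rdmsrl(MSR_IA32_CORE_CAPS, caps);

	if (caps & MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT)
		pr_info("split lock detection enumerated\n");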

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/cpuid.c            |  1 +
 arch/x86/kvm/x86.c              | 22 ++++++++++++++++++++++
 3 files changed, 24 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9a183e9d4cb1..7e842ccb0608 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -597,6 +597,7 @@ struct kvm_vcpu_arch {
 	u64 ia32_xss;
 	u64 microcode_version;
 	u64 arch_capabilities;
+	u64 core_capabilities;
 
 	/*
 	 * Paging state of the vcpu
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 435a7da07d5f..1cacc776b329 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -344,6 +344,7 @@ void kvm_set_cpu_caps(void)
 	/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
 	kvm_cpu_cap_set(X86_FEATURE_TSC_ADJUST);
 	kvm_cpu_cap_set(X86_FEATURE_ARCH_CAPABILITIES);
+	kvm_cpu_cap_set(X86_FEATURE_CORE_CAPABILITIES);
 
 	if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
 		kvm_cpu_cap_set(X86_FEATURE_SPEC_CTRL);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5ef57e3a315f..fc1a4e9e5659 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1248,6 +1248,7 @@ static const u32 emulated_msrs_all[] = {
 	MSR_IA32_TSC_ADJUST,
 	MSR_IA32_TSCDEADLINE,
 	MSR_IA32_ARCH_CAPABILITIES,
+	MSR_IA32_CORE_CAPS,
 	MSR_IA32_MISC_ENABLE,
 	MSR_IA32_MCG_STATUS,
 	MSR_IA32_MCG_CTL,
@@ -1314,6 +1315,7 @@ static const u32 msr_based_features_all[] = {
 	MSR_F10H_DECFG,
 	MSR_IA32_UCODE_REV,
 	MSR_IA32_ARCH_CAPABILITIES,
+	MSR_IA32_CORE_CAPS,
 };
 
 static u32 msr_based_features[ARRAY_SIZE(msr_based_features_all)];
@@ -1367,12 +1369,20 @@ static u64 kvm_get_arch_capabilities(void)
 	return data;
 }
 
+static u64 kvm_get_core_capabilities(void)
+{
+	return 0;
+}
+
 static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
 {
 	switch (msr->index) {
 	case MSR_IA32_ARCH_CAPABILITIES:
 		msr->data = kvm_get_arch_capabilities();
 		break;
+	case MSR_IA32_CORE_CAPS:
+		msr->data = kvm_get_core_capabilities();
+		break;
 	case MSR_IA32_UCODE_REV:
 		rdmsrl_safe(msr->index, &msr->data);
 		break;
@@ -2745,6 +2755,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		vcpu->arch.arch_capabilities = data;
 		break;
+	case MSR_IA32_CORE_CAPS:
+		if (!msr_info->host_initiated)
+			return 1;
+		vcpu->arch.core_capabilities = data;
+		break;
 	case MSR_EFER:
 		return set_efer(vcpu, msr_info);
 	case MSR_K7_HWCR:
@@ -3072,6 +3087,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		msr_info->data = vcpu->arch.arch_capabilities;
 		break;
+	case MSR_IA32_CORE_CAPS:
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_CORE_CAPABILITIES))
+			return 1;
+		msr_info->data = vcpu->arch.core_capabilities;
+		break;
 	case MSR_IA32_POWER_CTL:
 		msr_info->data = vcpu->arch.msr_ia32_power_ctl;
 		break;
@@ -9367,6 +9388,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 		goto free_guest_fpu;
 
 	vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
+	vcpu->arch.core_capabilities = kvm_get_core_capabilities();
 	vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
 	kvm_vcpu_mtrr_init(vcpu);
 	vcpu_load(vcpu);
-- 
2.20.1



* [PATCH v6 7/8] kvm: vmx: Enable MSR_TEST_CTRL for intel guest
From: Xiaoyao Li @ 2020-03-24 15:18 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

Only enable reads of MSR_TEST_CTRL and writes of zero to it. This makes
MSR_TEST_CTRL always available for Intel guests, but the guest cannot
write any value to it except zero.

This matches the fact that most Intel CPUs support MSR_TEST_CTRL, and it
also reduces the effort of handling wrmsr/rdmsr when exposing split lock
detection to the guest in the following patch.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 10 ++++++++++
 arch/x86/kvm/vmx/vmx.h |  1 +
 2 files changed, 11 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 300e1e149372..a302027f7e56 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1820,6 +1820,9 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	u32 index;
 
 	switch (msr_info->index) {
+	case MSR_TEST_CTRL:
+		msr_info->data = vmx->msr_test_ctrl;
+		break;
 #ifdef CONFIG_X86_64
 	case MSR_FS_BASE:
 		msr_info->data = vmcs_readl(GUEST_FS_BASE);
@@ -1973,6 +1976,12 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	u32 index;
 
 	switch (msr_index) {
+	case MSR_TEST_CTRL:
+		if (data)
+			return 1;
+
+		vmx->msr_test_ctrl = data;
+		break;
 	case MSR_EFER:
 		ret = kvm_set_msr_common(vcpu, msr_info);
 		break;
@@ -4283,6 +4292,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 
 	vmx->rmode.vm86_active = 0;
 	vmx->spec_ctrl = 0;
+	vmx->msr_test_ctrl = 0;
 
 	vmx->msr_ia32_umwait_control = 0;
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index be93d597306c..7ef9cc085188 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -224,6 +224,7 @@ struct vcpu_vmx {
 #endif
 
 	u64		      spec_ctrl;
+	u64		      msr_test_ctrl;
 	u32		      msr_ia32_umwait_control;
 
 	u32 secondary_exec_control;
-- 
2.20.1



* [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
From: Xiaoyao Li @ 2020-03-24 15:18 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

MSR_TEST_CTRL has per-core scope, i.e., the sibling threads in the same
physical CPU core share the same MSR. For simplicity, only advertise the
split lock detection feature to the guest when SMT is disabled or
unsupported.

Below is a summary of how the guest behaves under each host
configuration:

  sld_fatal - hardware MSR_TEST_CTRL.SLD is always on while the vcpu is
	      running, even though the guest thinks it sets/clears the
	      MSR_TEST_CTRL.SLD bit successfully, i.e., SLD is forced on
	      for the guest.

  sld_warn - hardware MSR_TEST_CTRL.SLD is left on until an #AC is
	     intercepted with MSR_TEST_CTRL.SLD=0 in the guest, at which
	     point normal sld_warn rules apply, i.e., clear the
	     MSR_TEST_CTRL.SLD bit and set TIF_SLD.
	     If a vCPU associated with the task does VM-Enter with
	     virtual MSR_TEST_CTRL.SLD=1, TIF_SLD is cleared, hardware
	     MSR_TEST_CTRL.SLD is set again, and the cycle begins anew.

  sld_off - the guest cannot see the split lock detection feature.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/include/asm/cpu.h  | 17 +++++++++++++-
 arch/x86/kernel/cpu/intel.c | 29 +++++++++++++-----------
 arch/x86/kvm/vmx/vmx.c      | 45 ++++++++++++++++++++++++++++++++-----
 arch/x86/kvm/x86.c          | 17 +++++++++++---
 4 files changed, 86 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
index d2071f6a35ac..519dd0c4c1bd 100644
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -41,10 +41,23 @@ unsigned int x86_family(unsigned int sig);
 unsigned int x86_model(unsigned int sig);
 unsigned int x86_stepping(unsigned int sig);
 #ifdef CONFIG_CPU_SUP_INTEL
+enum split_lock_detect_state {
+	sld_off = 0,
+	sld_warn,
+	sld_fatal,
+};
+extern enum split_lock_detect_state sld_state __ro_after_init;
+
+static inline bool split_lock_detect_on(void)
+{
+	return sld_state != sld_off;
+}
+
 extern void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c);
 extern void switch_to_sld(unsigned long tifn);
 extern bool handle_user_split_lock(unsigned long ip);
-extern bool split_lock_detect_on(void);
+extern void sld_msr_set(bool on);
+extern void sld_turn_back_on(void);
 #else
 static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {}
 static inline void switch_to_sld(unsigned long tifn) {}
@@ -53,5 +66,7 @@ static inline bool handle_user_split_lock(unsigned long ip)
 	return false;
 }
 static inline bool split_lock_detect_on(void) { return false; }
+static inline void sld_msr_set(bool on) {}
+static inline void sld_turn_back_on(void) {}
 #endif
 #endif /* _ASM_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index fd67be719284..8c186e8d4536 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -33,18 +33,14 @@
 #include <asm/apic.h>
 #endif
 
-enum split_lock_detect_state {
-	sld_off = 0,
-	sld_warn,
-	sld_fatal,
-};
-
 /*
  * Default to sld_off because most systems do not support split lock detection
  * split_lock_setup() will switch this to sld_warn on systems that support
  * split lock detect, unless there is a command line override.
  */
-static enum split_lock_detect_state sld_state __ro_after_init = sld_off;
+enum split_lock_detect_state sld_state __ro_after_init = sld_off;
+EXPORT_SYMBOL_GPL(sld_state);
+
 static u64 msr_test_ctrl_cache __ro_after_init;
 
 /*
@@ -1070,12 +1066,6 @@ static void split_lock_init(void)
 		sld_update_msr(sld_state != sld_off);
 }
 
-bool split_lock_detect_on(void)
-{
-	return sld_state != sld_off;
-}
-EXPORT_SYMBOL_GPL(split_lock_detect_on);
-
 bool handle_user_split_lock(unsigned long ip)
 {
 	if (sld_state == sld_fatal)
@@ -1095,6 +1085,19 @@ bool handle_user_split_lock(unsigned long ip)
 }
 EXPORT_SYMBOL_GPL(handle_user_split_lock);
 
+void sld_msr_set(bool on)
+{
+	sld_update_msr(on);
+}
+EXPORT_SYMBOL_GPL(sld_msr_set);
+
+void sld_turn_back_on(void)
+{
+	sld_update_msr(true);
+	clear_tsk_thread_flag(current, TIF_SLD);
+}
+EXPORT_SYMBOL_GPL(sld_turn_back_on);
+
 /*
  * This function is called only when switching between tasks with
  * different split-lock detection modes. It sets the MSR for the
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a302027f7e56..2adf326d433f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1808,6 +1808,22 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
 	}
 }
 
+static inline u64 vmx_msr_test_ctrl_valid_bits(struct kvm_vcpu *vcpu)
+{
+	u64 valid_bits = 0;
+
+	/*
+	 * Note: for guest, feature split lock detection can only be enumerated
+	 * through MSR_IA32_CORE_CAPABILITIES bit.
+	 * The FMS enumeration is invalid.
+	 */
+	if (vcpu->arch.core_capabilities &
+	    MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT)
+		valid_bits |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+
+	return valid_bits;
+}
+
 /*
  * Reads an msr value (of 'msr_index') into 'pdata'.
  * Returns 0 on success, non-0 otherwise.
@@ -1977,7 +1993,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 	switch (msr_index) {
 	case MSR_TEST_CTRL:
-		if (data)
+		if (data & ~vmx_msr_test_ctrl_valid_bits(vcpu))
 			return 1;
 
 		vmx->msr_test_ctrl = data;
@@ -4629,6 +4645,11 @@ static inline bool guest_cpu_alignment_check_enabled(struct kvm_vcpu *vcpu)
 	       (kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
 }
 
+static inline bool guest_cpu_split_lock_detect_on(struct vcpu_vmx *vmx)
+{
+	return vmx->msr_test_ctrl & MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+}
+
 static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -4725,12 +4746,13 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 	case AC_VECTOR:
 		/*
 		 * Reflect #AC to the guest if it's expecting the #AC, i.e. has
-		 * legacy alignment check enabled.  Pre-check host split lock
-		 * support to avoid the VMREADs needed to check legacy #AC,
-		 * i.e. reflect the #AC if the only possible source is legacy
-		 * alignment checks.
+		 * legacy alignment check enabled or split lock detect enabled.
+		 * Pre-check host split lock support to avoid further check of
+		 * guest, i.e. reflect the #AC if host doesn't enable split lock
+		 * detection.
 		 */
 		if (!split_lock_detect_on() ||
+		    guest_cpu_split_lock_detect_on(vmx) ||
 		    guest_cpu_alignment_check_enabled(vcpu)) {
 			kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
 			return 1;
@@ -6631,6 +6653,14 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	 */
 	x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0);
 
+	if (static_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
+	    guest_cpu_split_lock_detect_on(vmx)) {
+		if (test_thread_flag(TIF_SLD))
+			sld_turn_back_on();
+		else if (!split_lock_detect_on())
+			sld_msr_set(true);
+	}
+
 	/* L1D Flush includes CPU buffer clear to mitigate MDS */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
@@ -6665,6 +6695,11 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0);
 
+	if (static_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
+	    guest_cpu_split_lock_detect_on(vmx) &&
+	    !split_lock_detect_on())
+		sld_msr_set(false);
+
 	/* All fields are clean at this point */
 	if (static_branch_unlikely(&enable_evmcs))
 		current_evmcs->hv_clean_fields |=
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fc1a4e9e5659..58abfdf67b60 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1189,7 +1189,7 @@ static const u32 msrs_to_save_all[] = {
 #endif
 	MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA,
 	MSR_IA32_FEAT_CTL, MSR_IA32_BNDCFGS, MSR_TSC_AUX,
-	MSR_IA32_SPEC_CTRL,
+	MSR_IA32_SPEC_CTRL, MSR_TEST_CTRL,
 	MSR_IA32_RTIT_CTL, MSR_IA32_RTIT_STATUS, MSR_IA32_RTIT_CR3_MATCH,
 	MSR_IA32_RTIT_OUTPUT_BASE, MSR_IA32_RTIT_OUTPUT_MASK,
 	MSR_IA32_RTIT_ADDR0_A, MSR_IA32_RTIT_ADDR0_B,
@@ -1371,7 +1371,12 @@ static u64 kvm_get_arch_capabilities(void)
 
 static u64 kvm_get_core_capabilities(void)
 {
-	return 0;
+	u64 data = 0;
+
+	if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) && !cpu_smt_possible())
+		data |= MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT;
+
+	return data;
 }
 
 static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
@@ -2756,7 +2761,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		vcpu->arch.arch_capabilities = data;
 		break;
 	case MSR_IA32_CORE_CAPS:
-		if (!msr_info->host_initiated)
+		if (!msr_info->host_initiated ||
+		    data & ~kvm_get_core_capabilities())
 			return 1;
 		vcpu->arch.core_capabilities = data;
 		break;
@@ -5235,6 +5241,11 @@ static void kvm_init_msr_list(void)
 		 * to the guests in some cases.
 		 */
 		switch (msrs_to_save_all[i]) {
+		case MSR_TEST_CTRL:
+			if (!(kvm_get_core_capabilities() &
+			      MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT))
+				continue;
+			break;
 		case MSR_IA32_BNDCFGS:
 			if (!kvm_mpx_supported())
 				continue;
-- 
2.20.1



* Re: [PATCH v6 0/8] x86/split_lock: Fix and virtualization of split lock detection
From: Sean Christopherson @ 2020-03-24 17:47 UTC (permalink / raw)
  To: Xiaoyao Li
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, x86, kvm, linux-kernel, Andy Lutomirski,
	Peter Zijlstra, Arvind Sankar, Fenghua Yu, Tony Luck

On Tue, Mar 24, 2020 at 11:18:51PM +0800, Xiaoyao Li wrote:
> Sorry for the noise; I forgot to CC the mailing list.

You also forgot to add RESEND :-)  Tagging the patches like so

  [PATCH RESEND v6 0/8] x86/split_lock: Fix and virtualization of split lock detection

lets folks that received the originals identify and respond to the "new"
thread.


* Re: [PATCH v6 4/8] kvm: x86: Emulate split-lock access as a write in emulator
From: Thomas Gleixner @ 2020-03-25  0:00 UTC (permalink / raw)
  To: Xiaoyao Li, Ingo Molnar, Borislav Petkov, hpa, Paolo Bonzini,
	Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

Xiaoyao Li <xiaoyao.li@intel.com> writes:
>  
> +bool split_lock_detect_on(void)
> +{
> +	return sld_state != sld_off;
> +}
> +EXPORT_SYMBOL_GPL(split_lock_detect_on);

1) You export this function here

2) You change that in one of the next patches to something else

3) According to patch 1/8 X86_FEATURE_SPLIT_LOCK_DETECT is not set when
   sld_state == sld_off. FYI, I did that on purpose.

AFAICT #1 and #2 are just historical leftovers of your previous patch
series and the extra step was just adding more changed lines per patch
for no value.

#3 changed the detection mechanism and at the same time the semantics of
the feature flag.

So what's the point of this exercise? 

Thanks,

        tglx


* Re: [PATCH v6 7/8] kvm: vmx: Enable MSR_TEST_CTRL for intel guest
From: Thomas Gleixner @ 2020-03-25  0:07 UTC (permalink / raw)
  To: Xiaoyao Li, Ingo Molnar, Borislav Petkov, hpa, Paolo Bonzini,
	Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

Xiaoyao Li <xiaoyao.li@intel.com> writes:

> Subject: Re: [PATCH v6 7/8] kvm: vmx: Enable MSR_TEST_CTRL for intel guest

What the heck is a "intel guest"?

Can you Intel folks please stop to slap Intel (and in your subject line
it's even spelled wrong) at everything whether it makes sense or not?

Thanks,

        tglx


* Re: [PATCH v6 4/8] kvm: x86: Emulate split-lock access as a write in emulator
From: Xiaoyao Li @ 2020-03-25  0:31 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

On 3/25/2020 8:00 AM, Thomas Gleixner wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>   
>> +bool split_lock_detect_on(void)
>> +{
>> +	return sld_state != sld_off;
>> +}
>> +EXPORT_SYMBOL_GPL(split_lock_detect_on);
> 
> 1) You export this function here
> 
> 2) You change that in one of the next patches to something else
> 
> 3) According to patch 1/8 X86_FEATURE_SPLIT_LOCK_DETECT is not set when
>     sld_state == sld_off. FYI, I did that on purpose.
> 
> AFAICT #1 and #2 are just historical leftovers of your previous patch
> series and the extra step was just adding more changed lines per patch
> for no value.
> 
> #3 changed the detection mechanism and at the same time the semantics of
> the feature flag.
> 
> So what's the point of this exercise?

Right. In this series, setting the X86_FEATURE_SPLIT_LOCK_DETECT flag
means SLD is turned on, so split_lock_detect_on() needs to be removed.
Thanks for pointing this out.

> Thanks,
> 
>          tglx
> 



* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
From: Thomas Gleixner @ 2020-03-25  0:40 UTC (permalink / raw)
  To: Xiaoyao Li, Ingo Molnar, Borislav Petkov, hpa, Paolo Bonzini,
	Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck, Xiaoyao Li

Xiaoyao Li <xiaoyao.li@intel.com> writes:
>  #ifdef CONFIG_CPU_SUP_INTEL
> +enum split_lock_detect_state {
> +	sld_off = 0,
> +	sld_warn,
> +	sld_fatal,
> +};
> +extern enum split_lock_detect_state sld_state __ro_after_init;
> +
> +static inline bool split_lock_detect_on(void)
> +{
> +	return sld_state != sld_off;
> +}

See previous reply.

> +void sld_msr_set(bool on)
> +{
> +	sld_update_msr(on);
> +}
> +EXPORT_SYMBOL_GPL(sld_msr_set);
> +
> +void sld_turn_back_on(void)
> +{
> +	sld_update_msr(true);
> +	clear_tsk_thread_flag(current, TIF_SLD);
> +}
> +EXPORT_SYMBOL_GPL(sld_turn_back_on);

First of all these functions want to be in a separate patch, but aside
of that they do not make any sense at all.

> +static inline bool guest_cpu_split_lock_detect_on(struct vcpu_vmx *vmx)
> +{
> +	return vmx->msr_test_ctrl & MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
> +}
> +
>  static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> @@ -4725,12 +4746,13 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>  	case AC_VECTOR:
>  		/*
>  		 * Reflect #AC to the guest if it's expecting the #AC, i.e. has
> -		 * legacy alignment check enabled.  Pre-check host split lock
> -		 * support to avoid the VMREADs needed to check legacy #AC,
> -		 * i.e. reflect the #AC if the only possible source is legacy
> -		 * alignment checks.
> +		 * legacy alignment check enabled or split lock detect enabled.
> +		 * Pre-check host split lock support to avoid further check of
> +		 * guest, i.e. reflect the #AC if host doesn't enable split lock
> +		 * detection.
>  		 */
>  		if (!split_lock_detect_on() ||
> +		    guest_cpu_split_lock_detect_on(vmx) ||
>  		    guest_cpu_alignment_check_enabled(vcpu)) {

If the host has split lock detection disabled then how is the guest
supposed to have it enabled in the first place?

> @@ -6631,6 +6653,14 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  	 */
>  	x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0);
>  
> +	if (static_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
> +	    guest_cpu_split_lock_detect_on(vmx)) {
> +		if (test_thread_flag(TIF_SLD))
> +			sld_turn_back_on();

This is completely inconsistent behaviour. The only way that TIF_SLD is
set is when the host has sld_state == sld_warn and the guest triggered
a split lock #AC.

'warn' means that the split lock event is registered and a printk
emitted and after that the task runs with split lock detection disabled.

It does not matter at all if the task triggered the #AC while in guest
or in host user space mode. Stop claiming that virt is special. The only
special thing about virt is, that it is using a different mechanism to
exit kernel mode. Aside of that from the kernel POV it is completely
irrelevant whether the task triggered the split lock in host user space
or in guest mode.

If the SLD mode is fatal, then the task is killed no matter what.

Please sit down and go through your patches and rethink every single
line instead of sending out yet another half baked and hastily cobbled
together pile.

To be clear, Patch 1 and 2 make sense on their own, so I'm tempted to
pick them up right now, but the rest is going to be 5.8 material no
matter what.

Thanks,

        tglx



* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
From: Xiaoyao Li @ 2020-03-25  1:11 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

On 3/25/2020 8:40 AM, Thomas Gleixner wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>   #ifdef CONFIG_CPU_SUP_INTEL
>> +enum split_lock_detect_state {
>> +	sld_off = 0,
>> +	sld_warn,
>> +	sld_fatal,
>> +};
>> +extern enum split_lock_detect_state sld_state __ro_after_init;
>> +
>> +static inline bool split_lock_detect_on(void)
>> +{
>> +	return sld_state != sld_off;
>> +}
> 
> See previous reply.
> 
>> +void sld_msr_set(bool on)
>> +{
>> +	sld_update_msr(on);
>> +}
>> +EXPORT_SYMBOL_GPL(sld_msr_set);
>> +
>> +void sld_turn_back_on(void)
>> +{
>> +	sld_update_msr(true);
>> +	clear_tsk_thread_flag(current, TIF_SLD);
>> +}
>> +EXPORT_SYMBOL_GPL(sld_turn_back_on);
> 
> First of all these functions want to be in a separate patch, but aside
> of that they do not make any sense at all.
> 
>> +static inline bool guest_cpu_split_lock_detect_on(struct vcpu_vmx *vmx)
>> +{
>> +	return vmx->msr_test_ctrl & MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
>> +}
>> +
>>   static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>>   {
>>   	struct vcpu_vmx *vmx = to_vmx(vcpu);
>> @@ -4725,12 +4746,13 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>>   	case AC_VECTOR:
>>   		/*
>>   		 * Reflect #AC to the guest if it's expecting the #AC, i.e. has
>> -		 * legacy alignment check enabled.  Pre-check host split lock
>> -		 * support to avoid the VMREADs needed to check legacy #AC,
>> -		 * i.e. reflect the #AC if the only possible source is legacy
>> -		 * alignment checks.
>> +		 * legacy alignment check enabled or split lock detect enabled.
>> +		 * Pre-check host split lock support to avoid further check of
>> +		 * guest, i.e. reflect the #AC if host doesn't enable split lock
>> +		 * detection.
>>   		 */
>>   		if (!split_lock_detect_on() ||
>> +		    guest_cpu_split_lock_detect_on(vmx) ||
>>   		    guest_cpu_alignment_check_enabled(vcpu)) {
> 
> If the host has split lock detection disabled then how is the guest
> supposed to have it enabled in the first place?

So we need to reach an agreement on whether we need a state where the
host turns it off but the feature is still available to be exposed to
the guest.

>> @@ -6631,6 +6653,14 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>>   	 */
>>   	x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0);
>>   
>> +	if (static_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
>> +	    guest_cpu_split_lock_detect_on(vmx)) {
>> +		if (test_thread_flag(TIF_SLD))
>> +			sld_turn_back_on();
> 
> This is completely inconsistent behaviour. The only way that TIF_SLD is
> set is when the host has sld_state == sld_warn and the guest triggered
> a split lock #AC.

Can you imagine the case where both host and guest set sld_state == sld_warn?

1. A guest userspace thread causes a split lock.
2. The guest sets TIF_SLD for that thread, and clears the SLD bit to
re-execute the instruction in the guest.
3. It still causes an #AC since the hardware SLD bit is not cleared. In
host KVM, we call handle_user_split_lock(), which sets TIF_SLD for this
VMM thread and clears the hardware SLD bit. Then it enters the guest and
re-executes the instruction.
4. In the guest, it schedules to another thread without TIF_SLD set, and
sets the SLD bit to detect split locks for that thread. For this purpose,
we need to turn SLD back on for the VMM thread, otherwise this guest vcpu
cannot catch split locks any more.

> 'warn' means that the split lock event is registered and a printk
> emitted and after that the task runs with split lock detection disabled.
> 
> It does not matter at all if the task triggered the #AC while in guest
> or in host user space mode. Stop claiming that virt is special. The only
> special thing about virt is, that it is using a different mechanism to
> exit kernel mode. Aside of that from the kernel POV it is completely
> irrelevant whether the task triggered the split lock in host user space
> or in guest mode.
> 
> If the SLD mode is fatal, then the task is killed no matter what.
> 
> Please sit down and go through your patches and rethink every single
> line instead of sending out yet another half baken and hastily cobbled
> together pile.
> 
> To be clear, Patch 1 and 2 make sense on their own, so I'm tempted to
> pick them up right now, but the rest is going to be 5.8 material no
> matter what.

Alright.

Do you need me to spin a new version of patch 1 to clear SLD bit on APs 
if SLD_OFF?



* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
From: Thomas Gleixner @ 2020-03-25  1:41 UTC (permalink / raw)
  To: Xiaoyao Li, Ingo Molnar, Borislav Petkov, hpa, Paolo Bonzini,
	Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

Xiaoyao Li <xiaoyao.li@intel.com> writes:
> On 3/25/2020 8:40 AM, Thomas Gleixner wrote:
>>>   		if (!split_lock_detect_on() ||
>>> +		    guest_cpu_split_lock_detect_on(vmx) ||
>>>   		    guest_cpu_alignment_check_enabled(vcpu)) {
>> 
>> If the host has split lock detection disabled then how is the guest
>> supposed to have it enabled in the first place?
>
> So we need to reach an agreement on whether we need a state that host 
> turns it off but feature is available to be exposed to guest.

There is a very simple agreement:

  If the host turns it off, then it is not available at all

  If the host sets 'warn' then this applies to everything

  If the host sets 'fatal' then this applies to everything

Make it simple and consistent.

>>> +	if (static_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
>>> +	    guest_cpu_split_lock_detect_on(vmx)) {
>>> +		if (test_thread_flag(TIF_SLD))
>>> +			sld_turn_back_on();
>> 
>> This is completely inconsistent behaviour. The only way that TIF_SLD is
>> set is when the host has sld_state == sld_warn and the guest triggered
>> a split lock #AC.
>
> Can you imagine the case where both host and guest set sld_state == sld_warn?
>
> 1. A guest userspace thread causes a split lock.
> 2. The guest sets TIF_SLD for that thread and clears the SLD bit so it
> can re-execute the instruction.
> 3. The instruction still raises #AC, because the hardware SLD bit has
> not actually been cleared. In host KVM we call handle_user_split_lock(),
> which sets TIF_SLD for this VMM thread and clears the hardware SLD bit.
> Then we enter the guest and re-execute the instruction.
> 4. Later the guest schedules to another thread that does not have
> TIF_SLD set, so it sets the SLD bit again to detect split locks for
> that thread. For this to work we need to turn SLD back on for the VMM
> thread, otherwise this guest vcpu can no longer catch split locks.

If you really want to address that scenario, then why are you needing
any of those completely backwards interfaces at all?

Just because your KVM exception trap uses the host handling function
which sets TIF_SLD?
 
Please sit down, go back to the drawing board and figure it out how to
solve that without creating inconsistent state and duct tape functions
to deal with that.

I'm not rewriting the patches for you this time.

> Do you need me to spin a new version of patch 1 that clears the SLD bit
> on APs when SLD_OFF?

I assume you can answer that question yourself.

Thanks,

        tglx




^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
  2020-03-25  1:41       ` Thomas Gleixner
@ 2020-03-26  1:38         ` Xiaoyao Li
  2020-03-26 11:08           ` Thomas Gleixner
  0 siblings, 1 reply; 25+ messages in thread
From: Xiaoyao Li @ 2020-03-26  1:38 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

On 3/25/2020 9:41 AM, Thomas Gleixner wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>> On 3/25/2020 8:40 AM, Thomas Gleixner wrote:
>>>>    		if (!split_lock_detect_on() ||
>>>> +		    guest_cpu_split_lock_detect_on(vmx) ||
>>>>    		    guest_cpu_alignment_check_enabled(vcpu)) {
>>>
>>> If the host has split lock detection disabled then how is the guest
>>> supposed to have it enabled in the first place?
>>
>> So we need to reach an agreement on whether we need a state where the
>> host turns it off but the feature is still available to be exposed to
>> the guest.
> 
> There is a very simple agreement:
> 
>    If the host turns it off, then it is not available at all
> 
>    If the host sets 'warn' then this applies to everything
> 
>    If the host sets 'fatal' then this applies to everything
> 
> Make it simple and consistent.

OK. You are the boss.

>>>> +	if (static_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
>>>> +	    guest_cpu_split_lock_detect_on(vmx)) {
>>>> +		if (test_thread_flag(TIF_SLD))
>>>> +			sld_turn_back_on();
>>>
>>> This is completely inconsistent behaviour. The only way that TIF_SLD is
>>> set is when the host has sld_state == sld_warn and the guest triggered
>>> a split lock #AC.
>>
>> Can you imagine the case where both host and guest set sld_state == sld_warn?
>
>> 1. A guest userspace thread causes a split lock.
>> 2. The guest sets TIF_SLD for that thread and clears the SLD bit so it
>> can re-execute the instruction.
>> 3. The instruction still raises #AC, because the hardware SLD bit has
>> not actually been cleared. In host KVM we call handle_user_split_lock(),
>> which sets TIF_SLD for this VMM thread and clears the hardware SLD bit.
>> Then we enter the guest and re-execute the instruction.
>> 4. Later the guest schedules to another thread that does not have
>> TIF_SLD set, so it sets the SLD bit again to detect split locks for
>> that thread. For this to work we need to turn SLD back on for the VMM
>> thread, otherwise this guest vcpu can no longer catch split locks.
> 
> If you really want to address that scenario, then why are you needing
> any of those completely backwards interfaces at all?
> 
> Just because your KVM exception trap uses the host handling function
> which sets TIF_SLD?
>   

Yes, just because KVM uses the host handling function.

If you don't allow me to touch code outside of KVM, it can be achieved
with something like v2:
https://lore.kernel.org/kvm/20200203151608.28053-1-xiaoyao.li@intel.com/

Obviously, re-using the TIF_SLD flag to automatically switch the
MSR_TEST_CTRL.SLD bit when switching to/from the vcpu thread is better.

And to virtualize the SLD feature as fully as possible for the guest, we
have to implement the backwards interface. If you really don't want that
interface, we have to write code directly in KVM that modifies the
TIF_SLD flag and the MSR_TEST_CTRL.SLD bit.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
  2020-03-25  0:40   ` Thomas Gleixner
  2020-03-25  1:11     ` Xiaoyao Li
@ 2020-03-26  6:41     ` Xiaoyao Li
  2020-03-26 11:10       ` Thomas Gleixner
  1 sibling, 1 reply; 25+ messages in thread
From: Xiaoyao Li @ 2020-03-26  6:41 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

On 3/25/2020 8:40 AM, Thomas Gleixner wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>> +static inline bool guest_cpu_split_lock_detect_on(struct vcpu_vmx *vmx)
>> +{
>> +	return vmx->msr_test_ctrl & MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
>> +}
>> +
>>   static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>>   {
>>   	struct vcpu_vmx *vmx = to_vmx(vcpu);
>> @@ -4725,12 +4746,13 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>>   	case AC_VECTOR:
>>   		/*
>>   		 * Reflect #AC to the guest if it's expecting the #AC, i.e. has
>> -		 * legacy alignment check enabled.  Pre-check host split lock
>> -		 * support to avoid the VMREADs needed to check legacy #AC,
>> -		 * i.e. reflect the #AC if the only possible source is legacy
>> -		 * alignment checks.
>> +		 * legacy alignment check enabled or split lock detect enabled.
>> +		 * Pre-check host split lock support to avoid further check of
>> +		 * guest, i.e. reflect the #AC if host doesn't enable split lock
>> +		 * detection.
>>   		 */
>>   		if (!split_lock_detect_on() ||
>> +		    guest_cpu_split_lock_detect_on(vmx) ||
>>   		    guest_cpu_alignment_check_enabled(vcpu)) {
> 
> If the host has split lock detection disabled then how is the guest
> supposed to have it enabled in the first place?
> 

It is ||, so if the host has it disabled the guest checks are never reached.

Thanks,
-Xiaoyao


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
  2020-03-26  1:38         ` Xiaoyao Li
@ 2020-03-26 11:08           ` Thomas Gleixner
  2020-03-26 12:31             ` Xiaoyao Li
  0 siblings, 1 reply; 25+ messages in thread
From: Thomas Gleixner @ 2020-03-26 11:08 UTC (permalink / raw)
  To: Xiaoyao Li, Ingo Molnar, Borislav Petkov, hpa, Paolo Bonzini,
	Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

Xiaoyao Li <xiaoyao.li@intel.com> writes:
> On 3/25/2020 9:41 AM, Thomas Gleixner wrote:
>> If you really want to address that scenario, then why are you needing
>> any of those completely backwards interfaces at all?
>> 
>> Just because your KVM exception trap uses the host handling function
>> which sets TIF_SLD?
>>   
> Yes, just because KVM uses the host handling function.

> If you don't allow me to touch code outside of KVM, it can be achieved with

Who said you cannot touch code outside of KVM? 

> Obviously, re-using the TIF_SLD flag to automatically switch the
> MSR_TEST_CTRL.SLD bit when switching to/from the vcpu thread is better.

What's better about that?

TIF_SLD has very well defined semantics. It's used to denote that the
SLD bit needs to be cleared for the task when it's scheduled in.

So now you overload it by clearing it magically and claim that this is
better.

vCPU-thread

   user space (qemu)
     triggers #AC
       -> exception
           set TIF_SLD

   ioctl()
     vcpu_run()
       -> clear TIF_SLD

It's not better, it's simply wrong and inconsistent.

> And to virtualize the SLD feature as fully as possible for the guest, we
> have to implement the backwards interface. If you really don't want that
> interface, we have to write code directly in KVM that modifies the
> TIF_SLD flag and the MSR_TEST_CTRL.SLD bit.

Wrong again. KVM has absolutely no business in fiddling with TIF_SLD and
the function to flip the SLD bit is simply sld_update_msr(bool on) which
does not need any KVMism at all.
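
For reference, with the cached-MSR approach this series already uses, such
a helper could look roughly like the sketch below (an illustration only,
not a claim about the exact code):

/*
 * Sketch: flip only the SLD bit on top of the cached MSR_TEST_CTRL
 * value (SLD bit cleared), so no runtime read of the MSR is needed.
 */
static void sld_update_msr(bool on)
{
	u64 test_ctrl_val = msr_test_ctrl_cache;

	if (on)
		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;

	wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
}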

There are two options to handle SLD for KVM:

   1) Follow strictly the host rules

      If user space or guest triggered #AC then TIF_SLD is set and that
      task is excluded from ever setting SLD again.

   2) Track KVM guest state separately

      vcpu_run()
        if (current_has(TIF_SLD) && guest_sld_on())
          sld_update_msr(true);
        else if (!current_has(TIF_SLD) && !guest_sld_on())
          sld_update_msr(false);
        vmenter()
          ....
        vmexit()
        if (current_has(TIF_SLD) && guest_sld_on())
          sld_update_msr(false);
        else if (!current_has(TIF_SLD) && !guest_sld_on())
          sld_update_msr(true);

      If the guest triggers #AC then this solely affects guest state
      and does not fiddle with TIF_SLD.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
  2020-03-26  6:41     ` Xiaoyao Li
@ 2020-03-26 11:10       ` Thomas Gleixner
  2020-03-26 12:43         ` Xiaoyao Li
  0 siblings, 1 reply; 25+ messages in thread
From: Thomas Gleixner @ 2020-03-26 11:10 UTC (permalink / raw)
  To: Xiaoyao Li, Ingo Molnar, Borislav Petkov, hpa, Paolo Bonzini,
	Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

Xiaoyao Li <xiaoyao.li@intel.com> writes:
> On 3/25/2020 8:40 AM, Thomas Gleixner wrote:
>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>>   static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>>>   {
>>>   	struct vcpu_vmx *vmx = to_vmx(vcpu);
>>> @@ -4725,12 +4746,13 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>>>   	case AC_VECTOR:
>>>   		/*
>>>   		 * Reflect #AC to the guest if it's expecting the #AC, i.e. has
>>> -		 * legacy alignment check enabled.  Pre-check host split lock
>>> -		 * support to avoid the VMREADs needed to check legacy #AC,
>>> -		 * i.e. reflect the #AC if the only possible source is legacy
>>> -		 * alignment checks.
>>> +		 * legacy alignment check enabled or split lock detect enabled.
>>> +		 * Pre-check host split lock support to avoid further check of
>>> +		 * guest, i.e. reflect the #AC if host doesn't enable split lock
>>> +		 * detection.
>>>   		 */
>>>   		if (!split_lock_detect_on() ||
>>> +		    guest_cpu_split_lock_detect_on(vmx) ||
>>>   		    guest_cpu_alignment_check_enabled(vcpu)) {
>> 
>> If the host has split lock detection disabled then how is the guest
>> supposed to have it enabled in the first place?
>> 
> It is ||

Again. If the host has it disabled, then the feature flag is OFF. So
how is the hypervisor exposing it in the first place?

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
  2020-03-26 11:08           ` Thomas Gleixner
@ 2020-03-26 12:31             ` Xiaoyao Li
  0 siblings, 0 replies; 25+ messages in thread
From: Xiaoyao Li @ 2020-03-26 12:31 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

On 3/26/2020 7:08 PM, Thomas Gleixner wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>> On 3/25/2020 9:41 AM, Thomas Gleixner wrote:
>>> If you really want to address that scenario, then why are you needing
>>> any of those completely backwards interfaces at all?
>>>
>>> Just because your KVM exception trap uses the host handling function
>>> which sets TIF_SLD?
>>>    
>> Yes. just because KVM use the host handling function.
> 
>> If you disallow me to touch codes out of kvm. It can be achieved with
> 
> Who said you cannot touch code outside of KVM?
> 
>> Obviously re-use TIF_SLD flag to automatically switch MSR_TEST_CTRL.SLD
>> bit when switch to/from vcpu thread is better.
> 
> What's better about that?
> 
> TIF_SLD has very well defined semantics. It's used to denote that the
> SLD bit needs to be cleared for the task when its scheduled in.
> 
> So now you overload it by clearing it magically and claim that this is
> better.
> 
> vCPU-thread
> 
>     user space (qemu)
>       triggers #AC
>         -> exception
>             set TIF_SLD
> 
>     iotctl()
>       vcpu_run()
>         -> clear TIF_SLD
> 
> It's not better, it's simply wrong and inconsistent.
> 
>> And to virtualize SLD feature as full as possible for guest, we have to
>> implement the backwards interface. If you really don't want that
>> interface, we have to write code directly in kvm to modify TIF_SLD flag
>> and MSR_TEST_CTRL.SLD bit.
> 
> Wrong again. KVM has absolutely no business in fiddling with TIF_SLD and
> the function to flip the SLD bit is simply sld_update_msr(bool on) which
> does not need any KVMism at all.
> 
> There are two options to handle SLD for KVM:
> 
>     1) Follow strictly the host rules
> 
>        If user space or guest triggered #AC then TIF_SLD is set and that
>        task is excluded from ever setting SLD again.

Obviously, this option does not virtualize SLD for the guest at all, so
we don't pick this one.

>     2) Track KVM guest state separately
> 
>        vcpu_run()
>          if (current_has(TIF_SLD) && guest_sld_on())
>            sld_update_msr(true);
>          else if (!current_has(TIF_SLD) && !guest_sld_on())
>            sld_update_msr(false);
>          vmenter()
>            ....
>          vmexit()
>          if (current_has(TIF_SLD) && guest_sld_on())
>            sld_update_msr(false);
>          else if (!current_has(TIF_SLD) && !guest_sld_on())
>            sld_update_msr(true);
> 
>        If the guest triggers #AC then this solely affects guest state
>        and does not fiddle with TIF_SLD.
> 

OK. So when the host is sld_warn, the guest's SLD value can be loaded
into the hardware MSR at vmenter.

I'll go with this option, roughly as in the sketch below.
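
A minimal sketch of option 2, assuming a hypothetical wrapper name
vmx_update_sld() and reading the guest's virtual SLD bit directly from
vmx->msr_test_ctrl (as guest_cpu_split_lock_detect_on() does in patch 8):

/*
 * Switch the hardware SLD bit across VM-entry/VM-exit whenever the
 * guest's view differs from the host's view for this task.  TIF_SLD
 * itself is never touched here.
 */
static void vmx_update_sld(struct vcpu_vmx *vmx, bool entering_guest)
{
	bool host_on  = !test_thread_flag(TIF_SLD);
	bool guest_on = vmx->msr_test_ctrl & MSR_TEST_CTRL_SPLIT_LOCK_DETECT;

	if (host_on != guest_on)
		sld_update_msr(entering_guest ? guest_on : host_on);
}

It would be called with entering_guest=true right before vmenter() and
with entering_guest=false right after vmexit(), matching the pseudocode
quoted above.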





^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
  2020-03-26 11:10       ` Thomas Gleixner
@ 2020-03-26 12:43         ` Xiaoyao Li
  2020-03-26 14:55           ` Thomas Gleixner
  0 siblings, 1 reply; 25+ messages in thread
From: Xiaoyao Li @ 2020-03-26 12:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

On 3/26/2020 7:10 PM, Thomas Gleixner wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>> On 3/25/2020 8:40 AM, Thomas Gleixner wrote:
>>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>>>    static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>>>>    {
>>>>    	struct vcpu_vmx *vmx = to_vmx(vcpu);
>>>> @@ -4725,12 +4746,13 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>>>>    	case AC_VECTOR:
>>>>    		/*
>>>>    		 * Reflect #AC to the guest if it's expecting the #AC, i.e. has
>>>> -		 * legacy alignment check enabled.  Pre-check host split lock
>>>> -		 * support to avoid the VMREADs needed to check legacy #AC,
>>>> -		 * i.e. reflect the #AC if the only possible source is legacy
>>>> -		 * alignment checks.
>>>> +		 * legacy alignment check enabled or split lock detect enabled.
>>>> +		 * Pre-check host split lock support to avoid further check of
>>>> +		 * guest, i.e. reflect the #AC if host doesn't enable split lock
>>>> +		 * detection.
>>>>    		 */
>>>>    		if (!split_lock_detect_on() ||
>>>> +		    guest_cpu_split_lock_detect_on(vmx) ||
>>>>    		    guest_cpu_alignment_check_enabled(vcpu)) {
>>>
>>> If the host has split lock detection disabled then how is the guest
>>> supposed to have it enabled in the first place?
>>>
>> It is ||
> 
> Again. If the host has it disabled, then the feature flag is OFF. So
> how is the hypervisor exposing it in the first place?
> 

So what's wrong with the above code?

If the host has it disabled, !split_lock_detect_on() is true, so the
following checks are skipped due to the ||.

I guess you want something like the following?

if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK)) {
	inject #AC back to guest
} else {
	if (guest_alignment_check_enabled() || guest_sld_on())
		inject #AC back to guest
}

BTW, there is an issue in my original patch: guest_sld_on() should be
checked last.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
  2020-03-26 12:43         ` Xiaoyao Li
@ 2020-03-26 14:55           ` Thomas Gleixner
  2020-03-26 15:09             ` Xiaoyao Li
  0 siblings, 1 reply; 25+ messages in thread
From: Thomas Gleixner @ 2020-03-26 14:55 UTC (permalink / raw)
  To: Xiaoyao Li, Ingo Molnar, Borislav Petkov, hpa, Paolo Bonzini,
	Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

Xiaoyao Li <xiaoyao.li@intel.com> writes:
> On 3/26/2020 7:10 PM, Thomas Gleixner wrote:
> If the host has it disabled, !split_lock_detect_on() is true, so the
> following checks are skipped due to the ||.
>
> if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK)) {
> 	inject #AC back to guest

That'd be a regular #AC, right?

> } else {
> 	if (guest_alignment_check_enabled() || guest_sld_on())
> 		inject #AC back to guest

There is clearly an else path missing here.

> }

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
  2020-03-26 14:55           ` Thomas Gleixner
@ 2020-03-26 15:09             ` Xiaoyao Li
  2020-03-26 18:51               ` Thomas Gleixner
  0 siblings, 1 reply; 25+ messages in thread
From: Xiaoyao Li @ 2020-03-26 15:09 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, hpa,
	Paolo Bonzini, Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

On 3/26/2020 10:55 PM, Thomas Gleixner wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>> On 3/26/2020 7:10 PM, Thomas Gleixner wrote:
>> If the host has it disabled, !split_lock_detect_on() is true, it skips
>> following check due to ||
>>
>> if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK)) {
>> 	inject #AC back to guest
and 	return 1;

> 
> That'd be a regular #AC, right?

Yes.

>> } else {
>> 	if (guest_alignment_check_enabled() || guest_sld_on())
>> 		inject #AC back to guest
and 		return 1;

> Here is clearly an else path missing.

The else path is the fall-through, i.e. it calls handle_user_split_lock().

If that cannot handle it, it falls through further and the #AC is
reported to user space (QEMU).

>> }
> 

If there is no problem with the above, then what's the problem with the
original? A full sketch of the structure under discussion is below.
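
Putting this subthread together, the restructured #AC handling would look
roughly like the sketch below, factored into a hypothetical handle_ac()
helper for readability. The guest_cpu_*() helpers follow patch 8; the
exact signature of the exported handle_user_split_lock() is an assumption
here, not a quote of patch 3:

static int handle_ac(struct kvm_vcpu *vcpu, u32 error_code)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	/*
	 * Without host SLD support the #AC can only be a legacy alignment
	 * check; likewise reflect it if the guest is expecting it (legacy
	 * #AC enabled, or the guest has SLD on -- checked last).
	 */
	if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) ||
	    guest_cpu_alignment_check_enabled(vcpu) ||
	    guest_cpu_split_lock_detect_on(vmx)) {
		kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
		return 1;
	}

	/* Host-side warn handling; resume the guest if that succeeded. */
	if (handle_user_split_lock(kvm_rip_read(vcpu)))
		return 1;

	/* Otherwise fall through: report the #AC to user space (QEMU). */
	return 0;
}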


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v6 8/8] kvm: vmx: virtualize split lock detection
  2020-03-26 15:09             ` Xiaoyao Li
@ 2020-03-26 18:51               ` Thomas Gleixner
  0 siblings, 0 replies; 25+ messages in thread
From: Thomas Gleixner @ 2020-03-26 18:51 UTC (permalink / raw)
  To: Xiaoyao Li, Ingo Molnar, Borislav Petkov, hpa, Paolo Bonzini,
	Sean Christopherson
  Cc: x86, kvm, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Arvind Sankar, Fenghua Yu, Tony Luck

Xiaoyao Li <xiaoyao.li@intel.com> writes:
> On 3/26/2020 10:55 PM, Thomas Gleixner wrote:
>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>> On 3/26/2020 7:10 PM, Thomas Gleixner wrote:
>>> If the host has it disabled, !split_lock_detect_on() is true, it skips
>>> following check due to ||
>>>
>>> if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK)) {
>>> 	inject #AC back to guest
> and 	return 1;
>
>> 
>> That'd be a regular #AC, right?
>
> Yes.
>
>>> } else {
>>> 	if (guest_alignment_check_enabled() || guest_sld_on())
>>> 		inject #AC back to guest
> and 		return 1;
>
>> Here is clearly an else path missing.
>
> the else path is fall through.
>
> i.e. calling handle_user_split_lock().
>
> If cannot handle, it falls through to report #AC to user space (QEMU)
>
>>> }
>> 
>
> If there is no problem with the above. So what's the problem of the 
> original?

Probably my inability to decipher the convoluted condition.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2020-03-26 18:51 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-24 15:18 [PATCH v6 0/8] x86/split_lock: Fix and virtualization of split lock detection Xiaoyao Li
2020-03-24 15:18 ` [PATCH v6 1/8] x86/split_lock: Rework the initialization flow " Xiaoyao Li
2020-03-24 15:18 ` [PATCH v6 2/8] x86/split_lock: Avoid runtime reads of the TEST_CTRL MSR Xiaoyao Li
2020-03-24 15:18 ` [PATCH v6 3/8] x86/split_lock: Export handle_user_split_lock() Xiaoyao Li
2020-03-24 15:18 ` [PATCH v6 4/8] kvm: x86: Emulate split-lock access as a write in emulator Xiaoyao Li
2020-03-25  0:00   ` Thomas Gleixner
2020-03-25  0:31     ` Xiaoyao Li
2020-03-24 15:18 ` [PATCH v6 5/8] kvm: vmx: Extend VMX's #AC interceptor to handle split lock #AC happens in guest Xiaoyao Li
2020-03-24 15:18 ` [PATCH v6 6/8] kvm: x86: Emulate MSR IA32_CORE_CAPABILITIES Xiaoyao Li
2020-03-24 15:18 ` [PATCH v6 7/8] kvm: vmx: Enable MSR_TEST_CTRL for intel guest Xiaoyao Li
2020-03-25  0:07   ` Thomas Gleixner
2020-03-24 15:18 ` [PATCH v6 8/8] kvm: vmx: virtualize split lock detection Xiaoyao Li
2020-03-25  0:40   ` Thomas Gleixner
2020-03-25  1:11     ` Xiaoyao Li
2020-03-25  1:41       ` Thomas Gleixner
2020-03-26  1:38         ` Xiaoyao Li
2020-03-26 11:08           ` Thomas Gleixner
2020-03-26 12:31             ` Xiaoyao Li
2020-03-26  6:41     ` Xiaoyao Li
2020-03-26 11:10       ` Thomas Gleixner
2020-03-26 12:43         ` Xiaoyao Li
2020-03-26 14:55           ` Thomas Gleixner
2020-03-26 15:09             ` Xiaoyao Li
2020-03-26 18:51               ` Thomas Gleixner
2020-03-24 17:47 ` [PATCH v6 0/8] x86/split_lock: Fix and virtualization of " Sean Christopherson
