* [patch 1/3] x86/split_lock: Provide handle_guest_split_lock()
2020-04-10 11:53 [patch 0/3] x86/kvm: Basic split lock #AC handling Thomas Gleixner
@ 2020-04-10 11:54 ` Thomas Gleixner
2020-04-11 16:04 ` [tip: x86/urgent] " tip-bot2 for Thomas Gleixner
2020-04-10 11:54 ` [patch 2/3] KVM: x86: Emulate split-lock access as a write in emulator Thomas Gleixner
` (3 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2020-04-10 11:54 UTC (permalink / raw)
To: LKML
Cc: x86, Kenneth R. Crudup, Paolo Bonzini, Fenghua Yu, Xiaoyao Li,
Nadav Amit, Thomas Hellstrom, Sean Christopherson, Tony Luck
From: Thomas Gleixner <tglx@linutronix.de>
Without at least minimal handling for split lock detection induced #AC, VMX
will just run into the same problem as the VMware hypervisor, which was
reported by Kenneth.
It will inject the #AC blindly into the guest whether the guest is prepared
or not.
Provide a function for guest mode which acts depending on the host SLD
mode. If mode == sld_warn, treat it like user space, i.e. emit a warning,
disable SLD and mark the task accordingly. Otherwise force SIGBUS.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "Kenneth R. Crudup" <kenny@panix.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Xiaoyao Li <xiaoyao.li@intel.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Tony Luck <tony.luck@intel.com>
---
arch/x86/include/asm/cpu.h | 1 +
arch/x86/kernel/cpu/intel.c | 33 ++++++++++++++++++++++++++++-----
2 files changed, 29 insertions(+), 5 deletions(-)
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -44,6 +44,7 @@ unsigned int x86_stepping(unsigned int s
extern void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c);
extern void switch_to_sld(unsigned long tifn);
extern bool handle_user_split_lock(struct pt_regs *regs, long error_code);
+extern bool handle_guest_split_lock(unsigned long ip);
#else
static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {}
static inline void switch_to_sld(unsigned long tifn) {}
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -21,6 +21,7 @@
#include <asm/elf.h>
#include <asm/cpu_device_id.h>
#include <asm/cmdline.h>
+#include <asm/traps.h>
#ifdef CONFIG_X86_64
#include <linux/topology.h>
@@ -1066,13 +1067,10 @@ static void split_lock_init(void)
split_lock_verify_msr(sld_state != sld_off);
}
-bool handle_user_split_lock(struct pt_regs *regs, long error_code)
+static void split_lock_warn(unsigned long ip)
{
- if ((regs->flags & X86_EFLAGS_AC) || sld_state == sld_fatal)
- return false;
-
pr_warn_ratelimited("#AC: %s/%d took a split_lock trap at address: 0x%lx\n",
- current->comm, current->pid, regs->ip);
+ current->comm, current->pid, ip);
/*
* Disable the split lock detection for this task so it can make
@@ -1081,6 +1079,31 @@ bool handle_user_split_lock(struct pt_re
*/
sld_update_msr(false);
set_tsk_thread_flag(current, TIF_SLD);
+}
+
+bool handle_guest_split_lock(unsigned long ip)
+{
+ if (sld_state == sld_warn) {
+ split_lock_warn(ip);
+ return true;
+ }
+
+ pr_warn_once("#AC: %s/%d %s split_lock trap at address: 0x%lx\n",
+ current->comm, current->pid,
+ sld_state == sld_fatal ? "fatal" : "bogus", ip);
+
+ current->thread.error_code = 0;
+ current->thread.trap_nr = X86_TRAP_AC;
+ force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
+ return false;
+}
+EXPORT_SYMBOL_GPL(handle_guest_split_lock);
+
+bool handle_user_split_lock(struct pt_regs *regs, long error_code)
+{
+ if ((regs->flags & X86_EFLAGS_AC) || sld_state == sld_fatal)
+ return false;
+ split_lock_warn(regs->ip);
return true;
}
^ permalink raw reply [flat|nested] 10+ messages in thread
* [tip: x86/urgent] x86/split_lock: Provide handle_guest_split_lock()
2020-04-10 11:54 ` [patch 1/3] x86/split_lock: Provide handle_guest_split_lock() Thomas Gleixner
@ 2020-04-11 16:04 ` tip-bot2 for Thomas Gleixner
0 siblings, 0 replies; 10+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-04-11 16:04 UTC (permalink / raw)
To: linux-tip-commits
Cc: Thomas Gleixner, Borislav Petkov, Paolo Bonzini, x86, LKML
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: d7e94dbdac1a40924626b0efc7ff530c8baf5e4a
Gitweb: https://git.kernel.org/tip/d7e94dbdac1a40924626b0efc7ff530c8baf5e4a
Author: Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Fri, 10 Apr 2020 13:54:00 +02:00
Committer: Borislav Petkov <bp@suse.de>
CommitterDate: Sat, 11 Apr 2020 16:39:30 +02:00
x86/split_lock: Provide handle_guest_split_lock()
Without at least minimal handling for split lock detection induced #AC,
VMX will just run into the same problem as the VMware hypervisor, which
was reported by Kenneth.
It will inject the #AC blindly into the guest whether the guest is
prepared or not.
Provide a function for guest mode which acts depending on the host
SLD mode. If mode == sld_warn, treat it like user space, i.e. emit a
warning, disable SLD and mark the task accordingly. Otherwise force
SIGBUS.
[ bp: Add a !CPU_SUP_INTEL stub for handle_guest_split_lock(). ]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lkml.kernel.org/r/20200410115516.978037132@linutronix.de
Link: https://lkml.kernel.org/r/20200402123258.895628824@linutronix.de
---
arch/x86/include/asm/cpu.h | 6 ++++++
arch/x86/kernel/cpu/intel.c | 33 ++++++++++++++++++++++++++++-----
2 files changed, 34 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
index ff6f3ca..dd17c2d 100644
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -44,6 +44,7 @@ unsigned int x86_stepping(unsigned int sig);
extern void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c);
extern void switch_to_sld(unsigned long tifn);
extern bool handle_user_split_lock(struct pt_regs *regs, long error_code);
+extern bool handle_guest_split_lock(unsigned long ip);
#else
static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {}
static inline void switch_to_sld(unsigned long tifn) {}
@@ -51,5 +52,10 @@ static inline bool handle_user_split_lock(struct pt_regs *regs, long error_code)
{
return false;
}
+
+static inline bool handle_guest_split_lock(unsigned long ip)
+{
+ return false;
+}
#endif
#endif /* _ASM_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 9a26e97..bf08d45 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -21,6 +21,7 @@
#include <asm/elf.h>
#include <asm/cpu_device_id.h>
#include <asm/cmdline.h>
+#include <asm/traps.h>
#ifdef CONFIG_X86_64
#include <linux/topology.h>
@@ -1066,13 +1067,10 @@ static void split_lock_init(void)
split_lock_verify_msr(sld_state != sld_off);
}
-bool handle_user_split_lock(struct pt_regs *regs, long error_code)
+static void split_lock_warn(unsigned long ip)
{
- if ((regs->flags & X86_EFLAGS_AC) || sld_state == sld_fatal)
- return false;
-
pr_warn_ratelimited("#AC: %s/%d took a split_lock trap at address: 0x%lx\n",
- current->comm, current->pid, regs->ip);
+ current->comm, current->pid, ip);
/*
* Disable the split lock detection for this task so it can make
@@ -1081,6 +1079,31 @@ bool handle_user_split_lock(struct pt_regs *regs, long error_code)
*/
sld_update_msr(false);
set_tsk_thread_flag(current, TIF_SLD);
+}
+
+bool handle_guest_split_lock(unsigned long ip)
+{
+ if (sld_state == sld_warn) {
+ split_lock_warn(ip);
+ return true;
+ }
+
+ pr_warn_once("#AC: %s/%d %s split_lock trap at address: 0x%lx\n",
+ current->comm, current->pid,
+ sld_state == sld_fatal ? "fatal" : "bogus", ip);
+
+ current->thread.error_code = 0;
+ current->thread.trap_nr = X86_TRAP_AC;
+ force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
+ return false;
+}
+EXPORT_SYMBOL_GPL(handle_guest_split_lock);
+
+bool handle_user_split_lock(struct pt_regs *regs, long error_code)
+{
+ if ((regs->flags & X86_EFLAGS_AC) || sld_state == sld_fatal)
+ return false;
+ split_lock_warn(regs->ip);
return true;
}
* [patch 2/3] KVM: x86: Emulate split-lock access as a write in emulator
2020-04-10 11:53 [patch 0/3] x86/kvm: Basic split lock #AC handling Thomas Gleixner
2020-04-10 11:54 ` [patch 1/3] x86/split_lock: Provide handle_guest_split_lock() Thomas Gleixner
@ 2020-04-10 11:54 ` Thomas Gleixner
2020-04-11 16:04 ` [tip: x86/urgent] " tip-bot2 for Xiaoyao Li
2020-04-10 11:54 ` [patch 3/3] KVM: VMX: Extend VMXs #AC interceptor to handle split lock #AC in guest Thomas Gleixner
` (2 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2020-04-10 11:54 UTC (permalink / raw)
To: LKML
Cc: x86, Sean Christopherson, Xiaoyao Li, Kenneth R. Crudup,
Paolo Bonzini, Fenghua Yu, Nadav Amit, Thomas Hellstrom,
Tony Luck
From: Xiaoyao Li <xiaoyao.li@intel.com>
Emulate split-lock accesses as writes if split lock detection is on to
avoid #AC during emulation, which will result in a panic(). This should
never occur for a well-behaved guest, but a malicious guest can
manipulate the TLB to trigger emulation of a locked instruction[1].
More discussion can be found at [2][3].
[1] https://lkml.kernel.org/r/8c5b11c9-58df-38e7-a514-dc12d687b198@redhat.com
[2] https://lkml.kernel.org/r/20200131200134.GD18946@linux.intel.com
[3] https://lkml.kernel.org/r/20200227001117.GX9940@linux.intel.com
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/kvm/x86.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5838,6 +5838,7 @@ static int emulator_cmpxchg_emulated(str
{
struct kvm_host_map map;
struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+ u64 page_line_mask;
gpa_t gpa;
char *kaddr;
bool exchanged;
@@ -5852,7 +5853,16 @@ static int emulator_cmpxchg_emulated(str
(gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
goto emul_write;
- if (((gpa + bytes - 1) & PAGE_MASK) != (gpa & PAGE_MASK))
+ /*
+ * Emulate the atomic as a straight write to avoid #AC if SLD is
+ * enabled in the host and the access splits a cache line.
+ */
+ if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
+ page_line_mask = ~(cache_line_size() - 1);
+ else
+ page_line_mask = PAGE_MASK;
+
+ if (((gpa + bytes - 1) & page_line_mask) != (gpa & page_line_mask))
goto emul_write;
if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
* [tip: x86/urgent] KVM: x86: Emulate split-lock access as a write in emulator
2020-04-10 11:54 ` [patch 2/3] KVM: x86: Emulate split-lock access as a write in emulator Thomas Gleixner
@ 2020-04-11 16:04 ` tip-bot2 for Xiaoyao Li
0 siblings, 0 replies; 10+ messages in thread
From: tip-bot2 for Xiaoyao Li @ 2020-04-11 16:04 UTC (permalink / raw)
To: linux-tip-commits
Cc: Sean Christopherson, Xiaoyao Li, Thomas Gleixner,
Borislav Petkov, Paolo Bonzini, x86, LKML
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 9de6fe3c28d6d8feadfad907961f1f31b85c6985
Gitweb: https://git.kernel.org/tip/9de6fe3c28d6d8feadfad907961f1f31b85c6985
Author: Xiaoyao Li <xiaoyao.li@intel.com>
AuthorDate: Fri, 10 Apr 2020 13:54:01 +02:00
Committer: Borislav Petkov <bp@suse.de>
CommitterDate: Sat, 11 Apr 2020 16:40:55 +02:00
KVM: x86: Emulate split-lock access as a write in emulator
Emulate split-lock accesses as writes if split lock detection is on
to avoid #AC during emulation, which will result in a panic(). This
should never occur for a well-behaved guest, but a malicious guest can
manipulate the TLB to trigger emulation of a locked instruction[1].
More discussion can be found at [2][3].
[1] https://lkml.kernel.org/r/8c5b11c9-58df-38e7-a514-dc12d687b198@redhat.com
[2] https://lkml.kernel.org/r/20200131200134.GD18946@linux.intel.com
[3] https://lkml.kernel.org/r/20200227001117.GX9940@linux.intel.com
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lkml.kernel.org/r/20200410115517.084300242@linutronix.de
---
arch/x86/kvm/x86.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 027dfd2..3bf2eca 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5839,6 +5839,7 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
{
struct kvm_host_map map;
struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+ u64 page_line_mask;
gpa_t gpa;
char *kaddr;
bool exchanged;
@@ -5853,7 +5854,16 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
(gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
goto emul_write;
- if (((gpa + bytes - 1) & PAGE_MASK) != (gpa & PAGE_MASK))
+ /*
+ * Emulate the atomic as a straight write to avoid #AC if SLD is
+ * enabled in the host and the access splits a cache line.
+ */
+ if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
+ page_line_mask = ~(cache_line_size() - 1);
+ else
+ page_line_mask = PAGE_MASK;
+
+ if (((gpa + bytes - 1) & page_line_mask) != (gpa & page_line_mask))
goto emul_write;
if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
* [patch 3/3] KVM: VMX: Extend VMXs #AC interceptor to handle split lock #AC in guest
2020-04-10 11:53 [patch 0/3] x86/kvm: Basic split lock #AC handling Thomas Gleixner
2020-04-10 11:54 ` [patch 1/3] x86/split_lock: Provide handle_guest_split_lock() Thomas Gleixner
2020-04-10 11:54 ` [patch 2/3] KVM: x86: Emulate split-lock access as a write in emulator Thomas Gleixner
@ 2020-04-10 11:54 ` Thomas Gleixner
2020-04-11 16:04 ` [tip: x86/urgent] " tip-bot2 for Xiaoyao Li
2020-04-10 15:15 ` [patch 0/3] x86/kvm: Basic split lock #AC handling Paolo Bonzini
2020-04-10 19:02 ` Sean Christopherson
4 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2020-04-10 11:54 UTC (permalink / raw)
To: LKML
Cc: x86, Sean Christopherson, Xiaoyao Li, Kenneth R. Crudup,
Paolo Bonzini, Fenghua Yu, Nadav Amit, Thomas Hellstrom,
Tony Luck
From: Xiaoyao Li <xiaoyao.li@intel.com>
Two types of #AC can be generated in Intel CPUs:
1. legacy alignment check #AC
2. split lock #AC
Reflect #AC back into the guest if the guest has legacy alignment checks
enabled or if split lock detection is disabled.
If the #AC is not a legacy one and split lock detection is enabled, then
invoke handle_guest_split_lock() which will either warn and disable split
lock detection for this task or force SIGBUS on it.
[ tglx: Switch it to handle_guest_split_lock() and rename the misnamed
helper function. ]
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/kvm/vmx/vmx.c | 37 ++++++++++++++++++++++++++++++++++---
1 file changed, 34 insertions(+), 3 deletions(-)
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4592,6 +4592,26 @@ static int handle_machine_check(struct k
return 1;
}
+/*
+ * If the host has split lock detection disabled, then #AC is
+ * unconditionally injected into the guest, which is the pre split lock
+ * detection behaviour.
+ *
+ * If the host has split lock detection enabled then #AC is
+ * only injected into the guest when:
+ * - Guest CPL == 3 (user mode)
+ * - Guest has #AC detection enabled in CR0
+ * - Guest EFLAGS has AC bit set
+ */
+static inline bool guest_inject_ac(struct kvm_vcpu *vcpu)
+{
+ if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
+ return true;
+
+ return vmx_get_cpl(vcpu) == 3 && kvm_read_cr0_bits(vcpu, X86_CR0_AM) &&
+ (kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
+}
+
static int handle_exception_nmi(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -4657,9 +4677,6 @@ static int handle_exception_nmi(struct k
return handle_rmode_exception(vcpu, ex_no, error_code);
switch (ex_no) {
- case AC_VECTOR:
- kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
- return 1;
case DB_VECTOR:
dr6 = vmcs_readl(EXIT_QUALIFICATION);
if (!(vcpu->guest_debug &
@@ -4688,6 +4705,20 @@ static int handle_exception_nmi(struct k
kvm_run->debug.arch.pc = vmcs_readl(GUEST_CS_BASE) + rip;
kvm_run->debug.arch.exception = ex_no;
break;
+ case AC_VECTOR:
+ if (guest_inject_ac(vcpu)) {
+ kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
+ return 1;
+ }
+
+ /*
+ * Handle split lock. Depending on detection mode this will
+ * either warn and disable split lock detection for this
+ * task or force SIGBUS on it.
+ */
+ if (handle_guest_split_lock(kvm_rip_read(vcpu)))
+ return 1;
+ fallthrough;
default:
kvm_run->exit_reason = KVM_EXIT_EXCEPTION;
kvm_run->ex.exception = ex_no;
* [tip: x86/urgent] KVM: VMX: Extend VMXs #AC interceptor to handle split lock #AC in guest
2020-04-10 11:54 ` [patch 3/3] KVM: VMX: Extend VMXs #AC interceptor to handle split lock #AC in guest Thomas Gleixner
@ 2020-04-11 16:04 ` tip-bot2 for Xiaoyao Li
0 siblings, 0 replies; 10+ messages in thread
From: tip-bot2 for Xiaoyao Li @ 2020-04-11 16:04 UTC (permalink / raw)
To: linux-tip-commits
Cc: Sean Christopherson, Xiaoyao Li, Thomas Gleixner,
Borislav Petkov, Paolo Bonzini, x86, LKML
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: e6f8b6c12f03818baacc5f504fe83fa5e20771d6
Gitweb: https://git.kernel.org/tip/e6f8b6c12f03818baacc5f504fe83fa5e20771d6
Author: Xiaoyao Li <xiaoyao.li@intel.com>
AuthorDate: Fri, 10 Apr 2020 13:54:02 +02:00
Committer: Borislav Petkov <bp@suse.de>
CommitterDate: Sat, 11 Apr 2020 16:42:41 +02:00
KVM: VMX: Extend VMXs #AC interceptor to handle split lock #AC in guest
Two types of #AC can be generated in Intel CPUs:
1. legacy alignment check #AC
2. split lock #AC
Reflect #AC back into the guest if the guest has legacy alignment checks
enabled or if split lock detection is disabled.
If the #AC is not a legacy one and split lock detection is enabled, then
invoke handle_guest_split_lock() which will either warn and disable split
lock detection for this task or force SIGBUS on it.
[ tglx: Switch it to handle_guest_split_lock() and rename the misnamed
helper function. ]
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lkml.kernel.org/r/20200410115517.176308876@linutronix.de
---
arch/x86/kvm/vmx/vmx.c | 37 ++++++++++++++++++++++++++++++++++---
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8959514..8305097 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4588,6 +4588,26 @@ static int handle_machine_check(struct kvm_vcpu *vcpu)
return 1;
}
+/*
+ * If the host has split lock detection disabled, then #AC is
+ * unconditionally injected into the guest, which is the pre split lock
+ * detection behaviour.
+ *
+ * If the host has split lock detection enabled then #AC is
+ * only injected into the guest when:
+ * - Guest CPL == 3 (user mode)
+ * - Guest has #AC detection enabled in CR0
+ * - Guest EFLAGS has AC bit set
+ */
+static inline bool guest_inject_ac(struct kvm_vcpu *vcpu)
+{
+ if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
+ return true;
+
+ return vmx_get_cpl(vcpu) == 3 && kvm_read_cr0_bits(vcpu, X86_CR0_AM) &&
+ (kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
+}
+
static int handle_exception_nmi(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -4653,9 +4673,6 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
return handle_rmode_exception(vcpu, ex_no, error_code);
switch (ex_no) {
- case AC_VECTOR:
- kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
- return 1;
case DB_VECTOR:
dr6 = vmcs_readl(EXIT_QUALIFICATION);
if (!(vcpu->guest_debug &
@@ -4684,6 +4701,20 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
kvm_run->debug.arch.pc = vmcs_readl(GUEST_CS_BASE) + rip;
kvm_run->debug.arch.exception = ex_no;
break;
+ case AC_VECTOR:
+ if (guest_inject_ac(vcpu)) {
+ kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
+ return 1;
+ }
+
+ /*
+ * Handle split lock. Depending on detection mode this will
+ * either warn and disable split lock detection for this
+ * task or force SIGBUS on it.
+ */
+ if (handle_guest_split_lock(kvm_rip_read(vcpu)))
+ return 1;
+ fallthrough;
default:
kvm_run->exit_reason = KVM_EXIT_EXCEPTION;
kvm_run->ex.exception = ex_no;
* Re: [patch 0/3] x86/kvm: Basic split lock #AC handling
2020-04-10 11:53 [patch 0/3] x86/kvm: Basic split lock #AC handling Thomas Gleixner
` (2 preceding siblings ...)
2020-04-10 11:54 ` [patch 3/3] KVM: VMX: Extend VMXs #AC interceptor to handle split lock #AC in guest Thomas Gleixner
@ 2020-04-10 15:15 ` Paolo Bonzini
2020-04-10 19:02 ` Sean Christopherson
4 siblings, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2020-04-10 15:15 UTC (permalink / raw)
To: Thomas Gleixner, LKML
Cc: x86, Kenneth R. Crudup, Fenghua Yu, Xiaoyao Li, Nadav Amit,
Thomas Hellstrom, Sean Christopherson, Tony Luck
On 10/04/20 13:53, Thomas Gleixner wrote:
> This is a reworked version of the patches posted by Sean:
>
> https://lore.kernel.org/r/20200402155554.27705-1-sean.j.christopherson@intel.com
>
> The changes vs. this are:
>
> 1) Use a separate function for guest split lock handling
>
> 2) Force SIGBUS when SLD mode fatal
>
> 3) Rename the misnomed helper function which decides whether
> #AC is injected into the guest or not and move the feature
> check and the comments into that helper.
>
> Thanks,
>
> tglx
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Or just tell me if you want me to send it Linus's way.
Paolo
* Re: [patch 0/3] x86/kvm: Basic split lock #AC handling
2020-04-10 11:53 [patch 0/3] x86/kvm: Basic split lock #AC handling Thomas Gleixner
` (3 preceding siblings ...)
2020-04-10 15:15 ` [patch 0/3] x86/kvm: Basic split lock #AC handling Paolo Bonzini
@ 2020-04-10 19:02 ` Sean Christopherson
2020-04-14 7:38 ` Thomas Gleixner
4 siblings, 1 reply; 10+ messages in thread
From: Sean Christopherson @ 2020-04-10 19:02 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, x86, Kenneth R. Crudup, Paolo Bonzini, Fenghua Yu,
Xiaoyao Li, Nadav Amit, Thomas Hellstrom, Tony Luck
On Fri, Apr 10, 2020 at 01:53:59PM +0200, Thomas Gleixner wrote:
> This is a reworked version of the patches posted by Sean:
>
> https://lore.kernel.org/r/20200402155554.27705-1-sean.j.christopherson@intel.com
>
> The changes vs. this are:
>
> 1) Use a separate function for guest split lock handling
>
> 2) Force SIGBUS when SLD mode fatal
Not that it matters as the code is correct, but I think you meant
"when SLD mode off" here.
> 3) Rename the misnomed helper function which decides whether
> #AC is injected into the guest or not and move the feature
> check and the comments into that helper.
Thanks a bunch for helping push this along!
* Re: [patch 0/3] x86/kvm: Basic split lock #AC handling
2020-04-10 19:02 ` Sean Christopherson
@ 2020-04-14 7:38 ` Thomas Gleixner
0 siblings, 0 replies; 10+ messages in thread
From: Thomas Gleixner @ 2020-04-14 7:38 UTC (permalink / raw)
To: Sean Christopherson
Cc: LKML, x86, Kenneth R. Crudup, Paolo Bonzini, Fenghua Yu,
Xiaoyao Li, Nadav Amit, Thomas Hellstrom, Tony Luck
Sean,
Sean Christopherson <sean.j.christopherson@intel.com> writes:
> On Fri, Apr 10, 2020 at 01:53:59PM +0200, Thomas Gleixner wrote:
>> This is a reworked version of the patches posted by Sean:
>>
>> https://lore.kernel.org/r/20200402155554.27705-1-sean.j.christopherson@intel.com
>>
>> The changes vs. this are:
>>
>> 1) Use a separate function for guest split lock handling
>>
>> 2) Force SIGBUS when SLD mode fatal
>
> Not that it matters as the code is correct, but I think you meant
> "when SLD mode off" here.
Actually for both fatal and off. The latter should never happen :)
Thanks,
tglx