linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability
@ 2023-01-13 22:01 Kechen Lu
  2023-01-13 22:01 ` [RFC PATCH v5 1/6] KVM: x86: only allow exits disable before vCPUs created Kechen Lu
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: Kechen Lu @ 2023-01-13 22:01 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini
  Cc: chao.gao, shaoqin.huang, vkuznets, kechenl, linux-kernel

Summary
===========
Introduce support of vCPU-scoped ioctl with KVM_CAP_X86_DISABLE_EXITS
cap for disabling exits to enable finer-grained VM exits disabling
on per vCPU scales instead of whole guest. This patch series enabled
the vCPU-scoped exits control and toggling.

Motivation
============
In use cases like Windows guest running heavy CPU-bound
workloads, disabling HLT VM-exits could mitigate host sched ctx switch
overhead. Simply HLT disabling on all vCPUs could bring
performance benefits, but if no pCPUs reserved for host threads, could
happened to the forced preemption as host does not know the time to do
the schedule for other host threads want to run. With this patch, we
could only disable part of vCPUs HLT exits for one guest, this still
keeps performance benefits, and also shows resiliency to host stressing
workload running at the same time.

Performance and Testing
=========================
In the host stressing workload experiment with Windows guest heavy
CPU-bound workloads, it shows good resiliency and having the ~3%
performance improvement. E.g. Passmark running in a Windows guest
with this patch disabling HLT exits on only half of vCPUs still
showing 2.4% higher main score v/s baseline.

Tested everything on AMD machines.

v4->v5 :
- Drop the usage of KVM request, keep the VM-scoped exits disable
  as the existing design, and only allow per-vCPU settings to
  override the per-VM settings (Sean Christopherson)
- Refactor the disable exits selftest without introducing any
  new prerequisite patch, tests per-vCPU exits disable and overrides,
  and per-VM exits disable

v3->v4 (Chao Gao) :
- Use kvm vCPU request KVM_REQ_DISABLE_EXIT to perform the arch
  VMCS updating (patch 5)
- Fix selftests redundant arguments (patch 7)
- Merge overlapped fix bits from patch 4 to patch 3

v2->v3 (Sean Christopherson) :
- Reject KVM_CAP_X86_DISABLE_EXITS if userspace disable MWAIT exits
  when MWAIT is not allowed in guest (patch 3)
- Make userspace able to re-enable previously disabled exits (patch 4)
- Add mwait/pause/cstate exits flag toggling instead of only hlt
  exits (patch 5)
- Add selftests for KVM_CAP_X86_DISABLE_EXITS (patch 7)

v1->v2 (Sean Christopherson) :
- Add explicit restriction for VM-scoped exits disabling to be called
  before vCPUs creation (patch 1)
- Use vCPU ioctl instead of 64bit vCPU bitmask (patch 5), and make exits
  disable flags check purely for vCPU instead of VM (patch 2)

Best Regards,
Kechen

Kechen Lu (3):
  KVM: x86: Move *_in_guest power management flags to vCPU scope
  KVM: x86: add vCPU scoped toggling for disabled exits
  KVM: selftests: Add tests for VM and vCPU cap
    KVM_CAP_X86_DISABLE_EXITS

Sean Christopherson (3):
  KVM: x86: only allow exits disable before vCPUs created
  KVM: x86: Reject disabling of MWAIT interception when not allowed
  KVM: x86: Let userspace re-enable previously disabled exits

 Documentation/virt/kvm/api.rst                |   8 +-
 arch/x86/include/asm/kvm-x86-ops.h            |   1 +
 arch/x86/include/asm/kvm_host.h               |   7 +
 arch/x86/kvm/cpuid.c                          |   4 +-
 arch/x86/kvm/lapic.c                          |   7 +-
 arch/x86/kvm/svm/nested.c                     |   4 +-
 arch/x86/kvm/svm/svm.c                        |  42 +-
 arch/x86/kvm/vmx/vmx.c                        |  53 +-
 arch/x86/kvm/x86.c                            |  69 ++-
 arch/x86/kvm/x86.h                            |  16 +-
 include/uapi/linux/kvm.h                      |   4 +-
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/disable_exits_test.c | 457 ++++++++++++++++++
 13 files changed, 626 insertions(+), 47 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/disable_exits_test.c

-- 
2.34.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [RFC PATCH v5 1/6] KVM: x86: only allow exits disable before vCPUs created
  2023-01-13 22:01 [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Kechen Lu
@ 2023-01-13 22:01 ` Kechen Lu
  2023-01-13 22:01 ` [RFC PATCH v5 2/6] KVM: x86: Move *_in_guest power management flags to vCPU scope Kechen Lu
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Kechen Lu @ 2023-01-13 22:01 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini
  Cc: chao.gao, shaoqin.huang, vkuznets, kechenl, linux-kernel, stable

From: Sean Christopherson <seanjc@google.com>

Since VMX and SVM both would never update the control bits if exits
are disable after vCPUs are created, only allow setting exits
disable flag before vCPU creation.

Fixes: 4d5422cea3b6 ("KVM: X86: Provide a capability to disable MWAIT
intercepts")

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Kechen Lu <kechenl@nvidia.com>
Cc: stable@vger.kernel.org
---
 Documentation/virt/kvm/api.rst | 1 +
 arch/x86/kvm/x86.c             | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 9807b05a1b57..fb0fcc566d5a 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7087,6 +7087,7 @@ branch to guests' 0x200 interrupt vector.
 :Architectures: x86
 :Parameters: args[0] defines which exits are disabled
 :Returns: 0 on success, -EINVAL when args[0] contains invalid exits
+          or if any vCPU has already been created
 
 Valid bits in args[0] are::
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index da4bbd043a7b..c8ae9c4f9f08 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6227,6 +6227,10 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		if (cap->args[0] & ~KVM_X86_DISABLE_VALID_EXITS)
 			break;
 
+		mutex_lock(&kvm->lock);
+		if (kvm->created_vcpus)
+			goto disable_exits_unlock;
+
 		if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
 			kvm_can_mwait_in_guest())
 			kvm->arch.mwait_in_guest = true;
@@ -6237,6 +6241,8 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
 			kvm->arch.cstate_in_guest = true;
 		r = 0;
+disable_exits_unlock:
+		mutex_unlock(&kvm->lock);
 		break;
 	case KVM_CAP_MSR_PLATFORM_INFO:
 		kvm->arch.guest_can_read_msr_platform_info = cap->args[0];
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [RFC PATCH v5 2/6] KVM: x86: Move *_in_guest power management flags to vCPU scope
  2023-01-13 22:01 [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Kechen Lu
  2023-01-13 22:01 ` [RFC PATCH v5 1/6] KVM: x86: only allow exits disable before vCPUs created Kechen Lu
@ 2023-01-13 22:01 ` Kechen Lu
  2023-01-13 22:01 ` [RFC PATCH v5 3/6] KVM: x86: Reject disabling of MWAIT interception when not allowed Kechen Lu
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Kechen Lu @ 2023-01-13 22:01 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini
  Cc: chao.gao, shaoqin.huang, vkuznets, kechenl, linux-kernel

Make the runtime disabled mwait/hlt/pause/cstate exits flags vCPU scope
to allow finer-grained, per-vCPU control.  The VM-scoped control is only
allowed before vCPUs are created, thus preserving the existing behavior
is a simple matter of snapshotting the flags at vCPU creation.

Signed-off-by: Kechen Lu <kechenl@nvidia.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm_host.h |  5 +++++
 arch/x86/kvm/cpuid.c            |  4 ++--
 arch/x86/kvm/lapic.c            |  7 +++----
 arch/x86/kvm/svm/nested.c       |  4 ++--
 arch/x86/kvm/svm/svm.c          | 12 ++++++------
 arch/x86/kvm/vmx/vmx.c          | 16 ++++++++--------
 arch/x86/kvm/x86.c              |  6 +++++-
 arch/x86/kvm/x86.h              | 16 ++++++++--------
 8 files changed, 39 insertions(+), 31 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6aaae18f1854..41b998234a04 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1009,6 +1009,11 @@ struct kvm_vcpu_arch {
 #if IS_ENABLED(CONFIG_HYPERV)
 	hpa_t hv_root_tdp;
 #endif
+
+	bool mwait_in_guest;
+	bool hlt_in_guest;
+	bool pause_in_guest;
+	bool cstate_in_guest;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 596061c1610e..20e427dc608c 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -283,8 +283,8 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
 		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
 
 	best = __kvm_find_kvm_cpuid_features(vcpu, entries, nent);
-	if (kvm_hlt_in_guest(vcpu->kvm) && best &&
-		(best->eax & (1 << KVM_FEATURE_PV_UNHALT)))
+	if (kvm_hlt_in_guest(vcpu) &&
+	    best && (best->eax & (1 << KVM_FEATURE_PV_UNHALT)))
 		best->eax &= ~(1 << KVM_FEATURE_PV_UNHALT);
 
 	if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT)) {
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 4efdb4a4d72c..8f74f9a80aa5 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -151,14 +151,13 @@ static inline u32 kvm_x2apic_id(struct kvm_lapic *apic)
 static bool kvm_can_post_timer_interrupt(struct kvm_vcpu *vcpu)
 {
 	return pi_inject_timer && kvm_vcpu_apicv_active(vcpu) &&
-		(kvm_mwait_in_guest(vcpu->kvm) || kvm_hlt_in_guest(vcpu->kvm));
+		(kvm_mwait_in_guest(vcpu) || kvm_hlt_in_guest(vcpu));
 }
 
 bool kvm_can_use_hv_timer(struct kvm_vcpu *vcpu)
 {
-	return kvm_x86_ops.set_hv_timer
-	       && !(kvm_mwait_in_guest(vcpu->kvm) ||
-		    kvm_can_post_timer_interrupt(vcpu));
+	return kvm_x86_ops.set_hv_timer &&
+	        !(kvm_mwait_in_guest(vcpu) || kvm_can_post_timer_interrupt(vcpu));
 }
 
 static bool kvm_use_posted_timer_interrupt(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index add65dd59756..ed26b6de3007 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -721,7 +721,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 
 	pause_count12 = svm->pause_filter_enabled ? svm->nested.ctl.pause_filter_count : 0;
 	pause_thresh12 = svm->pause_threshold_enabled ? svm->nested.ctl.pause_filter_thresh : 0;
-	if (kvm_pause_in_guest(svm->vcpu.kvm)) {
+	if (kvm_pause_in_guest(&svm->vcpu)) {
 		/* use guest values since host doesn't intercept PAUSE */
 		vmcb02->control.pause_filter_count = pause_count12;
 		vmcb02->control.pause_filter_thresh = pause_thresh12;
@@ -1012,7 +1012,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	vmcb12->control.event_inj         = svm->nested.ctl.event_inj;
 	vmcb12->control.event_inj_err     = svm->nested.ctl.event_inj_err;
 
-	if (!kvm_pause_in_guest(vcpu->kvm)) {
+	if (!kvm_pause_in_guest(vcpu)) {
 		vmcb01->control.pause_filter_count = vmcb02->control.pause_filter_count;
 		vmcb_mark_dirty(vmcb01, VMCB_INTERCEPTS);
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9a194aa1a75a..dc7176605e01 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1014,7 +1014,7 @@ static void grow_ple_window(struct kvm_vcpu *vcpu)
 	struct vmcb_control_area *control = &svm->vmcb->control;
 	int old = control->pause_filter_count;
 
-	if (kvm_pause_in_guest(vcpu->kvm))
+	if (kvm_pause_in_guest(vcpu))
 		return;
 
 	control->pause_filter_count = __grow_ple_window(old,
@@ -1035,7 +1035,7 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu)
 	struct vmcb_control_area *control = &svm->vmcb->control;
 	int old = control->pause_filter_count;
 
-	if (kvm_pause_in_guest(vcpu->kvm))
+	if (kvm_pause_in_guest(vcpu))
 		return;
 
 	control->pause_filter_count =
@@ -1229,12 +1229,12 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
 	svm_set_intercept(svm, INTERCEPT_RDPRU);
 	svm_set_intercept(svm, INTERCEPT_RSM);
 
-	if (!kvm_mwait_in_guest(vcpu->kvm)) {
+	if (!kvm_mwait_in_guest(vcpu)) {
 		svm_set_intercept(svm, INTERCEPT_MONITOR);
 		svm_set_intercept(svm, INTERCEPT_MWAIT);
 	}
 
-	if (!kvm_hlt_in_guest(vcpu->kvm))
+	if (!kvm_hlt_in_guest(vcpu))
 		svm_set_intercept(svm, INTERCEPT_HLT);
 
 	control->iopm_base_pa = __sme_set(iopm_base);
@@ -1278,7 +1278,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
 	svm->nested.vmcb12_gpa = INVALID_GPA;
 	svm->nested.last_vmcb12_gpa = INVALID_GPA;
 
-	if (!kvm_pause_in_guest(vcpu->kvm)) {
+	if (!kvm_pause_in_guest(vcpu)) {
 		control->pause_filter_count = pause_filter_count;
 		if (pause_filter_thresh)
 			control->pause_filter_thresh = pause_filter_thresh;
@@ -4362,7 +4362,7 @@ static void svm_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 
 static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
 {
-	if (!kvm_pause_in_guest(vcpu->kvm))
+	if (!kvm_pause_in_guest(vcpu))
 		shrink_ple_window(vcpu);
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fc9008dbed33..019a20029878 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1689,7 +1689,7 @@ static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
 	 * then the instruction is already executing and RIP has already been
 	 * advanced.
 	 */
-	if (kvm_hlt_in_guest(vcpu->kvm) &&
+	if (kvm_hlt_in_guest(vcpu) &&
 			vmcs_read32(GUEST_ACTIVITY_STATE) == GUEST_ACTIVITY_HLT)
 		vmcs_write32(GUEST_ACTIVITY_STATE, GUEST_ACTIVITY_ACTIVE);
 }
@@ -4412,10 +4412,10 @@ static u32 vmx_exec_control(struct vcpu_vmx *vmx)
 		exec_control &= ~(CPU_BASED_CR3_LOAD_EXITING |
 				  CPU_BASED_CR3_STORE_EXITING |
 				  CPU_BASED_INVLPG_EXITING);
-	if (kvm_mwait_in_guest(vmx->vcpu.kvm))
+	if (kvm_mwait_in_guest(&vmx->vcpu))
 		exec_control &= ~(CPU_BASED_MWAIT_EXITING |
 				CPU_BASED_MONITOR_EXITING);
-	if (kvm_hlt_in_guest(vmx->vcpu.kvm))
+	if (kvm_hlt_in_guest(&vmx->vcpu))
 		exec_control &= ~CPU_BASED_HLT_EXITING;
 	return exec_control;
 }
@@ -4515,7 +4515,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 	}
 	if (!enable_unrestricted_guest)
 		exec_control &= ~SECONDARY_EXEC_UNRESTRICTED_GUEST;
-	if (kvm_pause_in_guest(vmx->vcpu.kvm))
+	if (kvm_pause_in_guest(&vmx->vcpu))
 		exec_control &= ~SECONDARY_EXEC_PAUSE_LOOP_EXITING;
 	if (!kvm_vcpu_apicv_active(vcpu))
 		exec_control &= ~(SECONDARY_EXEC_APIC_REGISTER_VIRT |
@@ -4661,7 +4661,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 		vmcs_write16(LAST_PID_POINTER_INDEX, kvm->arch.max_vcpu_ids - 1);
 	}
 
-	if (!kvm_pause_in_guest(kvm)) {
+	if (!kvm_pause_in_guest(&vmx->vcpu)) {
 		vmcs_write32(PLE_GAP, ple_gap);
 		vmx->ple_window = ple_window;
 		vmx->ple_window_dirty = true;
@@ -5833,7 +5833,7 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu)
  */
 static int handle_pause(struct kvm_vcpu *vcpu)
 {
-	if (!kvm_pause_in_guest(vcpu->kvm))
+	if (!kvm_pause_in_guest(vcpu))
 		grow_ple_window(vcpu);
 
 	/*
@@ -7379,7 +7379,7 @@ static int vmx_vcpu_create(struct kvm_vcpu *vcpu)
 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
-	if (kvm_cstate_in_guest(vcpu->kvm)) {
+	if (kvm_cstate_in_guest(vcpu)) {
 		vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C1_RES, MSR_TYPE_R);
 		vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
 		vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
@@ -7935,7 +7935,7 @@ static void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu)
 
 static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
 {
-	if (!kvm_pause_in_guest(vcpu->kvm))
+	if (!kvm_pause_in_guest(vcpu))
 		shrink_ple_window(vcpu);
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c8ae9c4f9f08..9a77b55142c6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11634,6 +11634,10 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 #if IS_ENABLED(CONFIG_HYPERV)
 	vcpu->arch.hv_root_tdp = INVALID_PAGE;
 #endif
+	vcpu->arch.mwait_in_guest = vcpu->kvm->arch.mwait_in_guest;
+	vcpu->arch.hlt_in_guest = vcpu->kvm->arch.hlt_in_guest;
+	vcpu->arch.pause_in_guest = vcpu->kvm->arch.pause_in_guest;
+	vcpu->arch.cstate_in_guest = vcpu->kvm->arch.cstate_in_guest;
 
 	r = static_call(kvm_x86_vcpu_create)(vcpu);
 	if (r)
@@ -12885,7 +12889,7 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu)
 		     kvm_is_exception_pending(vcpu)))
 		return false;
 
-	if (kvm_hlt_in_guest(vcpu->kvm) && !kvm_can_deliver_async_pf(vcpu))
+	if (kvm_hlt_in_guest(vcpu) && !kvm_can_deliver_async_pf(vcpu))
 		return false;
 
 	/*
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 9de72586f406..b8e49a9d353d 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -351,24 +351,24 @@ static inline u64 nsec_to_cycles(struct kvm_vcpu *vcpu, u64 nsec)
 	    __rem;						\
 	 })
 
-static inline bool kvm_mwait_in_guest(struct kvm *kvm)
+static inline bool kvm_mwait_in_guest(struct kvm_vcpu *vcpu)
 {
-	return kvm->arch.mwait_in_guest;
+	return vcpu->arch.mwait_in_guest;
 }
 
-static inline bool kvm_hlt_in_guest(struct kvm *kvm)
+static inline bool kvm_hlt_in_guest(struct kvm_vcpu *vcpu)
 {
-	return kvm->arch.hlt_in_guest;
+	return vcpu->arch.hlt_in_guest;
 }
 
-static inline bool kvm_pause_in_guest(struct kvm *kvm)
+static inline bool kvm_pause_in_guest(struct kvm_vcpu *vcpu)
 {
-	return kvm->arch.pause_in_guest;
+	return vcpu->arch.pause_in_guest;
 }
 
-static inline bool kvm_cstate_in_guest(struct kvm *kvm)
+static inline bool kvm_cstate_in_guest(struct kvm_vcpu *vcpu)
 {
-	return kvm->arch.cstate_in_guest;
+	return vcpu->arch.cstate_in_guest;
 }
 
 static inline bool kvm_notify_vmexit_enabled(struct kvm *kvm)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [RFC PATCH v5 3/6] KVM: x86: Reject disabling of MWAIT interception when not allowed
  2023-01-13 22:01 [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Kechen Lu
  2023-01-13 22:01 ` [RFC PATCH v5 1/6] KVM: x86: only allow exits disable before vCPUs created Kechen Lu
  2023-01-13 22:01 ` [RFC PATCH v5 2/6] KVM: x86: Move *_in_guest power management flags to vCPU scope Kechen Lu
@ 2023-01-13 22:01 ` Kechen Lu
  2023-01-13 22:01 ` [RFC PATCH v5 4/6] KVM: x86: Let userspace re-enable previously disabled exits Kechen Lu
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Kechen Lu @ 2023-01-13 22:01 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini
  Cc: chao.gao, shaoqin.huang, vkuznets, kechenl, linux-kernel

From: Sean Christopherson <seanjc@google.com>

Reject KVM_CAP_X86_DISABLE_EXITS if userspace attempts to disable MWAIT
exits and KVM previously reported (via KVM_CHECK_EXTENSION) that MWAIT is
not allowed in guest, e.g. because it's not supported or the CPU doesn't
have an aways-running APIC timer.

Fixes: 4d5422cea3b6 ("KVM: X86: Provide a capability to disable MWAIT
intercepts")

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Kechen Lu <kechenl@nvidia.com>
---
 arch/x86/kvm/x86.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9a77b55142c6..60caa3fd40e5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4326,6 +4326,16 @@ static inline bool kvm_can_mwait_in_guest(void)
 		boot_cpu_has(X86_FEATURE_ARAT);
 }
 
+static u64 kvm_get_allowed_disable_exits(void)
+{
+	u64 r = KVM_X86_DISABLE_VALID_EXITS;
+
+	if (!kvm_can_mwait_in_guest())
+		r &= ~KVM_X86_DISABLE_EXITS_MWAIT;
+
+	return r;
+}
+
 static int kvm_ioctl_get_supported_hv_cpuid(struct kvm_vcpu *vcpu,
 					    struct kvm_cpuid2 __user *cpuid_arg)
 {
@@ -4448,10 +4458,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = KVM_CLOCK_VALID_FLAGS;
 		break;
 	case KVM_CAP_X86_DISABLE_EXITS:
-		r |=  KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_PAUSE |
-		      KVM_X86_DISABLE_EXITS_CSTATE;
-		if(kvm_can_mwait_in_guest())
-			r |= KVM_X86_DISABLE_EXITS_MWAIT;
+		r |= kvm_get_allowed_disable_exits();
 		break;
 	case KVM_CAP_X86_SMM:
 		if (!IS_ENABLED(CONFIG_KVM_SMM))
@@ -6224,15 +6231,14 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		break;
 	case KVM_CAP_X86_DISABLE_EXITS:
 		r = -EINVAL;
-		if (cap->args[0] & ~KVM_X86_DISABLE_VALID_EXITS)
+		if (cap->args[0] & ~kvm_get_allowed_disable_exits())
 			break;
 
 		mutex_lock(&kvm->lock);
 		if (kvm->created_vcpus)
 			goto disable_exits_unlock;
 
-		if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
-			kvm_can_mwait_in_guest())
+		if (cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT)
 			kvm->arch.mwait_in_guest = true;
 		if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
 			kvm->arch.hlt_in_guest = true;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [RFC PATCH v5 4/6] KVM: x86: Let userspace re-enable previously disabled exits
  2023-01-13 22:01 [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Kechen Lu
                   ` (2 preceding siblings ...)
  2023-01-13 22:01 ` [RFC PATCH v5 3/6] KVM: x86: Reject disabling of MWAIT interception when not allowed Kechen Lu
@ 2023-01-13 22:01 ` Kechen Lu
  2023-01-13 22:01 ` [RFC PATCH v5 5/6] KVM: x86: add vCPU scoped toggling for " Kechen Lu
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Kechen Lu @ 2023-01-13 22:01 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini
  Cc: chao.gao, shaoqin.huang, vkuznets, kechenl, linux-kernel

From: Sean Christopherson <seanjc@google.com>

Add an OVERRIDE flag to KVM_CAP_X86_DISABLE_EXITS allow userspace to
re-enable exits and/or override previous settings.  There's no real use
case for the the per-VM ioctl, but a future per-vCPU variant wants to let
userspace toggle interception while the vCPU is running; add the OVERRIDE
functionality now to provide consistent between between the per-VM and
per-vCPU variants.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 Documentation/virt/kvm/api.rst |  5 +++++
 arch/x86/kvm/x86.c             | 32 ++++++++++++++++++++++++--------
 include/uapi/linux/kvm.h       |  4 +++-
 3 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index fb0fcc566d5a..3850202942d0 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7095,6 +7095,7 @@ Valid bits in args[0] are::
   #define KVM_X86_DISABLE_EXITS_HLT              (1 << 1)
   #define KVM_X86_DISABLE_EXITS_PAUSE            (1 << 2)
   #define KVM_X86_DISABLE_EXITS_CSTATE           (1 << 3)
+  #define KVM_X86_DISABLE_EXITS_OVERRIDE         (1ull << 63)
 
 Enabling this capability on a VM provides userspace with a way to no
 longer intercept some instructions for improved latency in some
@@ -7103,6 +7104,10 @@ physical CPUs.  More bits can be added in the future; userspace can
 just pass the KVM_CHECK_EXTENSION result to KVM_ENABLE_CAP to disable
 all such vmexits.
 
+By default, this capability only disables exits.  To re-enable an exit, or to
+override previous settings, userspace can set KVM_X86_DISABLE_EXITS_OVERRIDE,
+in which case KVM will enable/disable according to the mask (a '1' == disable).
+
 Do not enable KVM_FEATURE_PV_UNHALT if you disable HLT exits.
 
 7.14 KVM_CAP_S390_HPAGE_1M
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 60caa3fd40e5..3ea5f12536a0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5484,6 +5484,28 @@ static int kvm_vcpu_ioctl_device_attr(struct kvm_vcpu *vcpu,
 	return r;
 }
 
+
+#define kvm_ioctl_disable_exits(a, mask)				     \
+({									     \
+	if (!kvm_can_mwait_in_guest())                                       \
+		(mask) &= KVM_X86_DISABLE_EXITS_MWAIT;                       \
+	if ((mask) & KVM_X86_DISABLE_EXITS_OVERRIDE) {			     \
+		(a).mwait_in_guest = (mask) & KVM_X86_DISABLE_EXITS_MWAIT;   \
+		(a).hlt_in_guest = (mask) & KVM_X86_DISABLE_EXITS_HLT;	     \
+		(a).pause_in_guest = (mask) & KVM_X86_DISABLE_EXITS_PAUSE;   \
+		(a).cstate_in_guest = (mask) & KVM_X86_DISABLE_EXITS_CSTATE; \
+	} else {							     \
+		if ((mask) & KVM_X86_DISABLE_EXITS_MWAIT)		     \
+			(a).mwait_in_guest = true;			     \
+		if ((mask) & KVM_X86_DISABLE_EXITS_HLT)			     \
+			(a).hlt_in_guest = true;			     \
+		if ((mask) & KVM_X86_DISABLE_EXITS_PAUSE)		     \
+			(a).pause_in_guest = true;			     \
+		if ((mask) & KVM_X86_DISABLE_EXITS_CSTATE)		     \
+			(a).cstate_in_guest = true;			     \
+	}								     \
+})
+
 static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 				     struct kvm_enable_cap *cap)
 {
@@ -6238,14 +6260,8 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		if (kvm->created_vcpus)
 			goto disable_exits_unlock;
 
-		if (cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT)
-			kvm->arch.mwait_in_guest = true;
-		if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
-			kvm->arch.hlt_in_guest = true;
-		if (cap->args[0] & KVM_X86_DISABLE_EXITS_PAUSE)
-			kvm->arch.pause_in_guest = true;
-		if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
-			kvm->arch.cstate_in_guest = true;
+		kvm_ioctl_disable_exits(kvm->arch, cap->args[0]);
+
 		r = 0;
 disable_exits_unlock:
 		mutex_unlock(&kvm->lock);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 55155e262646..12ea7dd80471 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -823,10 +823,12 @@ struct kvm_ioeventfd {
 #define KVM_X86_DISABLE_EXITS_HLT            (1 << 1)
 #define KVM_X86_DISABLE_EXITS_PAUSE          (1 << 2)
 #define KVM_X86_DISABLE_EXITS_CSTATE         (1 << 3)
+#define KVM_X86_DISABLE_EXITS_OVERRIDE	     (1ull << 63)
 #define KVM_X86_DISABLE_VALID_EXITS          (KVM_X86_DISABLE_EXITS_MWAIT | \
                                               KVM_X86_DISABLE_EXITS_HLT | \
                                               KVM_X86_DISABLE_EXITS_PAUSE | \
-                                              KVM_X86_DISABLE_EXITS_CSTATE)
+                                              KVM_X86_DISABLE_EXITS_CSTATE | \
+					      KVM_X86_DISABLE_EXITS_OVERRIDE)
 
 /* for KVM_ENABLE_CAP */
 struct kvm_enable_cap {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [RFC PATCH v5 5/6] KVM: x86: add vCPU scoped toggling for disabled exits
  2023-01-13 22:01 [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Kechen Lu
                   ` (3 preceding siblings ...)
  2023-01-13 22:01 ` [RFC PATCH v5 4/6] KVM: x86: Let userspace re-enable previously disabled exits Kechen Lu
@ 2023-01-13 22:01 ` Kechen Lu
  2023-01-13 22:01 ` [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU cap KVM_CAP_X86_DISABLE_EXITS Kechen Lu
  2023-01-18  8:30 ` [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Zhi Wang
  6 siblings, 0 replies; 12+ messages in thread
From: Kechen Lu @ 2023-01-13 22:01 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini
  Cc: chao.gao, shaoqin.huang, vkuznets, kechenl, linux-kernel

Introduce support of vCPU-scoped ioctl with KVM_CAP_X86_DISABLE_EXITS
cap for disabling exits to enable finer-grained VM exits disabling
on per vCPU scales instead of whole guest. This patch enables
the vCPU-scoped exits control toggling, but keeps the VM-scoped
exits control behaviors restriction as before.

In use cases like Windows guest running heavy CPU-bound
workloads, disabling HLT VM-exits could mitigate host sched ctx switch
overhead. Simply HLT disabling on all vCPUs could bring
performance benefits, but if no pCPUs reserved for host threads, could
happened to the forced preemption as host does not know the time to do
the schedule for other host threads want to run. With this patch, we
could only disable part of vCPUs HLT exits for one guest, this still
keeps performance benefits, and also shows resiliency to host stressing
workload running at the same time.

In the host stressing workload experiment with Windows guest heavy
CPU-bound workloads, it shows good resiliency and having the ~3%
performance improvement. E.g. Passmark running in a Windows guest
with this patch disabling HLT exits on only half of vCPUs still
showing 2.4% higher main score v/s baseline.

Suggested-by: Sean Christopherson <seanjc@google.com>
Suggested-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Kechen Lu <kechenl@nvidia.com>
---
 Documentation/virt/kvm/api.rst     |  2 +-
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 ++
 arch/x86/kvm/svm/svm.c             | 30 ++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c             | 37 ++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c                 |  7 ++++++
 6 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 3850202942d0..698f476d36dd 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7102,7 +7102,7 @@ longer intercept some instructions for improved latency in some
 workloads, and is suggested when vCPUs are associated to dedicated
 physical CPUs.  More bits can be added in the future; userspace can
 just pass the KVM_CHECK_EXTENSION result to KVM_ENABLE_CAP to disable
-all such vmexits.
+all such vmexits. VM scoped and vCPU scoped capability are both supported.
 
 By default, this capability only disables exits.  To re-enable an exit, or to
 override previous settings, userspace can set KVM_X86_DISABLE_EXITS_OVERRIDE,
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index abccd51dcfca..534322c21168 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -131,6 +131,7 @@ KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
+KVM_X86_OP(update_disabled_exits)
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 41b998234a04..e21e5d452b5d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1711,6 +1711,8 @@ struct kvm_x86_ops {
 	 * Returns vCPU specific APICv inhibit reasons
 	 */
 	unsigned long (*vcpu_get_apicv_inhibit_reasons)(struct kvm_vcpu *vcpu);
+
+	void (*update_disabled_exits)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index dc7176605e01..81c387dfa46c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4680,6 +4680,33 @@ static void svm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
 	sev_vcpu_deliver_sipi_vector(vcpu, vector);
 }
 
+static void svm_update_disabled_exits(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct vmcb_control_area *control = &svm->vmcb->control;
+
+	if (kvm_hlt_in_guest(vcpu))
+		svm_clr_intercept(svm, INTERCEPT_HLT);
+	else
+		svm_set_intercept(svm, INTERCEPT_HLT);
+
+	if (kvm_mwait_in_guest(vcpu)) {
+		svm_clr_intercept(svm, INTERCEPT_MONITOR);
+		svm_clr_intercept(svm, INTERCEPT_MWAIT);
+	} else {
+		svm_set_intercept(svm, INTERCEPT_MONITOR);
+		svm_set_intercept(svm, INTERCEPT_MWAIT);
+	}
+
+	if (kvm_pause_in_guest(vcpu)) {
+		svm_clr_intercept(svm, INTERCEPT_PAUSE);
+	} else {
+		control->pause_filter_count = pause_filter_count;
+		if (pause_filter_thresh)
+			control->pause_filter_thresh = pause_filter_thresh;
+	}
+}
+
 static void svm_vm_destroy(struct kvm *kvm)
 {
 	avic_vm_destroy(kvm);
@@ -4825,7 +4852,10 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.complete_emulated_msr = svm_complete_emulated_msr,
 
 	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
+
 	.vcpu_get_apicv_inhibit_reasons = avic_vcpu_get_apicv_inhibit_reasons,
+
+	.update_disabled_exits = svm_update_disabled_exits,
 };
 
 /*
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 019a20029878..f5137afdd424 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8070,6 +8070,41 @@ static void vmx_vm_destroy(struct kvm *kvm)
 	free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
 }
 
+static void vmx_update_disabled_exits(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	if (kvm_hlt_in_guest(vcpu))
+		exec_controls_clearbit(vmx, CPU_BASED_HLT_EXITING);
+	else
+		exec_controls_setbit(vmx, CPU_BASED_HLT_EXITING);
+
+	if (kvm_mwait_in_guest(vcpu))
+		exec_controls_clearbit(vmx, CPU_BASED_MWAIT_EXITING |
+			CPU_BASED_MONITOR_EXITING);
+	else
+		exec_controls_setbit(vmx, CPU_BASED_MWAIT_EXITING |
+			CPU_BASED_MONITOR_EXITING);
+
+	if (!kvm_pause_in_guest(vcpu)) {
+		vmcs_write32(PLE_GAP, ple_gap);
+		vmx->ple_window = ple_window;
+		vmx->ple_window_dirty = true;
+	}
+
+	if (kvm_cstate_in_guest(vcpu)) {
+		vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C1_RES, MSR_TYPE_R);
+		vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
+		vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
+		vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
+	} else {
+		vmx_enable_intercept_for_msr(vcpu, MSR_CORE_C1_RES, MSR_TYPE_R);
+		vmx_enable_intercept_for_msr(vcpu, MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
+		vmx_enable_intercept_for_msr(vcpu, MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
+		vmx_enable_intercept_for_msr(vcpu, MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
+	}
+}
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.name = "kvm_intel",
 
@@ -8207,6 +8242,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.complete_emulated_msr = kvm_complete_insn_gp,
 
 	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
+
+	.update_disabled_exits = vmx_update_disabled_exits,
 };
 
 static unsigned int vmx_handle_intel_pt_intr(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3ea5f12536a0..8c15292c6886 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5552,6 +5552,13 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 		if (vcpu->arch.pv_cpuid.enforce)
 			kvm_update_pv_runtime(vcpu);
 
+		return 0;
+	case KVM_CAP_X86_DISABLE_EXITS:
+		if (cap->args[0] & ~kvm_get_allowed_disable_exits())
+			return -EINVAL;
+
+		kvm_ioctl_disable_exits(vcpu->arch, cap->args[0]);
+		static_call(kvm_x86_update_disabled_exits)(vcpu);
 		return 0;
 	default:
 		return -EINVAL;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU cap KVM_CAP_X86_DISABLE_EXITS
  2023-01-13 22:01 [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Kechen Lu
                   ` (4 preceding siblings ...)
  2023-01-13 22:01 ` [RFC PATCH v5 5/6] KVM: x86: add vCPU scoped toggling for " Kechen Lu
@ 2023-01-13 22:01 ` Kechen Lu
  2023-01-18 20:03   ` Zhi Wang
  2023-01-18  8:30 ` [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Zhi Wang
  6 siblings, 1 reply; 12+ messages in thread
From: Kechen Lu @ 2023-01-13 22:01 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini
  Cc: chao.gao, shaoqin.huang, vkuznets, kechenl, linux-kernel

Add selftests for KVM cap KVM_CAP_X86_DISABLE_EXITS overriding flags
in VM and vCPU scope both works as expected.

Suggested-by: Chao Gao <chao.gao@intel.com>
Suggested-by: Shaoqin Huang <shaoqin.huang@intel.com>
Signed-off-by: Kechen Lu <kechenl@nvidia.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/disable_exits_test.c | 457 ++++++++++++++++++
 2 files changed, 458 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/disable_exits_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1750f91dd936..eeeba35e2536 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -114,6 +114,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
 TEST_GEN_PROGS_x86_64 += x86_64/amx_test
 TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
 TEST_GEN_PROGS_x86_64 += x86_64/triple_fault_event_test
+TEST_GEN_PROGS_x86_64 += x86_64/disable_exits_test
 TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/x86_64/disable_exits_test.c b/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
new file mode 100644
index 000000000000..dceba3bcef5f
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
@@ -0,0 +1,457 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Test per-VM and per-vCPU disable exits cap
+ * 1) Per-VM scope 
+ * 2) Per-vCPU scope
+ *
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <pthread.h>
+#include <inttypes.h>
+#include <string.h>
+#include <time.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "svm_util.h"
+#include "vmx.h"
+#include "processor.h"
+#include "asm/kvm.h"
+#include "linux/kvm.h"
+
+/* Arbitary chosen IPI vector value from sender to halter vCPU */
+#define IPI_VECTOR	 0xa5
+/* Number of HLTs halter vCPU thread executes */
+#define COUNT_HLT_EXITS	 10
+
+struct guest_stats {
+	uint32_t halter_apic_id;
+	volatile uint64_t hlt_count;
+	volatile uint64_t wake_count;
+};
+
+static u64 read_vcpu_stats_halt_exits(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct kvm_stats_header header;
+	u64 *stats_data;
+	u64 ret = 0;
+	struct kvm_stats_desc *stats_desc;
+	struct kvm_stats_desc *pdesc;
+	int stats_fd = vcpu_get_stats_fd(vcpu);
+
+	read_stats_header(stats_fd, &header);
+	if (header.num_desc == 0) {
+		fprintf(stderr,
+			"Cannot read halt exits since no KVM stats defined\n");
+		return ret;
+	}
+
+	stats_desc = read_stats_descriptors(stats_fd, &header);
+	for (i = 0; i < header.num_desc; ++i) {
+		pdesc = get_stats_descriptor(stats_desc, i, &header);
+		if (!strncmp(pdesc->name, "halt_exits", 10)) {
+			stats_data = malloc(pdesc->size * sizeof(*stats_data));
+			read_stat_data(stats_fd, &header, pdesc, stats_data,
+				pdesc->size);
+			ret = *stats_data;
+			free(stats_data);
+			break;
+		}
+	}
+	free(stats_desc);
+	return ret;
+}
+
+/* HLT multiple times in one vCPU */
+static void halter_guest_code(struct guest_stats *data)
+{
+	xapic_enable();
+	data->halter_apic_id = GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID));
+
+	for (;;) {
+		data->hlt_count++;
+		asm volatile("sti; hlt; cli");
+		data->wake_count++;
+	}
+}
+
+static void halter_waiting_guest_code(struct guest_stats *data)
+{
+	uint64_t tsc_start = rdtsc();
+
+	xapic_enable();
+	data->halter_apic_id = GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID));
+
+	for (;;) {
+		data->hlt_count++;
+		asm volatile("sti; hlt; cli");
+		data->wake_count++;
+		/* Wait for ~0.5sec for each HLT execution */
+		tsc_start = rdtsc();
+		while (rdtsc() - tsc_start < 2000000000);
+	}
+}
+
+/* Runs on halter vCPU when IPI arrives */
+static void guest_ipi_handler(struct ex_regs *regs)
+{
+	xapic_write_reg(APIC_EOI, 11);
+}
+
+/* Sender vCPU waits for ~1sec to assume HLT executed */
+static void sender_wait_loop(struct guest_stats *data, uint64_t old_hlt_count,
+		uint64_t old_wake_count)
+{
+	uint64_t tsc_start = rdtsc();
+	while (rdtsc() - tsc_start < 4000000000);
+	GUEST_ASSERT((data->wake_count != old_wake_count) &&
+		       	(data->hlt_count != old_hlt_count));
+}
+
+/* Sender vCPU loops sending IPI to halter vCPU every ~1sec */
+static void sender_guest_code(struct guest_stats *data)
+{
+	uint32_t icr_val;
+	uint32_t icr2_val;
+	uint64_t old_hlt_count = 0;
+	uint64_t old_wake_count = 0;
+
+	xapic_enable();
+	/* Init interrupt command register for sending IPIs */
+	icr_val = (APIC_DEST_PHYSICAL | APIC_DM_FIXED | IPI_VECTOR);
+	icr2_val = SET_APIC_DEST_FIELD(data->halter_apic_id);
+
+	for (;;) {
+		/*
+		 * Send IPI to halted vCPU
+		 * First IPI sends here as already waited before sender vCPU
+		 * thread creation
+		 */
+		xapic_write_reg(APIC_ICR2, icr2_val);
+		xapic_write_reg(APIC_ICR, icr_val);
+		sender_wait_loop(data, old_hlt_count, old_wake_count);
+		GUEST_ASSERT((data->wake_count != old_wake_count) &&
+			(data->hlt_count != old_hlt_count));
+		old_wake_count = data->wake_count;
+		old_hlt_count = data->hlt_count;
+	}
+}
+
+static void *vcpu_thread(void *arg)
+{
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)arg;
+	int old;
+	int r;
+
+	r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &old);
+	TEST_ASSERT(r == 0,
+		"pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
+	       	vcpu->id, r);
+	fprintf(stderr, "vCPU thread running vCPU %u\n", vcpu->id);
+	vcpu_run(vcpu);
+	return NULL;
+}
+
+static void cancel_join_vcpu_thread(pthread_t thread, struct kvm_vcpu *vcpu)
+{
+	void *retval;
+	int r;
+
+	r = pthread_cancel(thread);
+	TEST_ASSERT(r == 0,
+		"pthread_cancel on vcpu_id=%d failed with errno=%d",
+		vcpu->id, r);
+
+	r = pthread_join(thread, &retval);
+	TEST_ASSERT(r == 0,
+		"pthread_join on vcpu_id=%d failed with errno=%d",
+		vcpu->id, r);
+}
+
+/*
+ * Test case 1:
+ * Normal VM running with one vCPU keeps executing HLTs,
+ * another vCPU sending IPIs to wake it up, should expect
+ * all HLTs exiting to host
+ */
+static void test_vm_without_disable_exits_cap(void)
+{
+	int r;
+	int wait_secs;
+	const int first_halter_wait = 10;
+	uint64_t kvm_halt_exits;
+	struct kvm_vm *vm;
+	struct kvm_vcpu *halter_vcpu;
+	struct kvm_vcpu *sender_vcpu;
+	struct guest_stats *data;
+	vm_vaddr_t guest_stats_page_vaddr;
+	pthread_t threads[2];
+
+	/* Create VM */
+	vm = vm_create(2);
+
+	/* Add vCPU with loops halting */
+	halter_vcpu = vm_vcpu_add(vm, 0, halter_guest_code);
+
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(halter_vcpu);
+	vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
+	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
+
+	/* Add vCPU with IPIs waking up halter vCPU */
+	sender_vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
+
+	guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
+	data = addr_gva2hva(vm, guest_stats_page_vaddr);
+	memset(data, 0, sizeof(*data));
+
+	vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
+	vcpu_args_set(sender_vcpu, 1, guest_stats_page_vaddr);
+
+	/* Start halter vCPU thread and wait for it to execute first HLT. */
+	r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
+	TEST_ASSERT(r == 0,
+		"pthread_create halter failed errno=%d", errno);
+	fprintf(stderr, "Halter vCPU thread started\n");
+
+	wait_secs = 0;
+	while ((wait_secs < first_halter_wait) && !data->hlt_count) {
+		sleep(1);
+		wait_secs++;
+	}
+	TEST_ASSERT(data->hlt_count,
+		"Halter vCPU did not execute first HLT within %d seconds",
+	       	first_halter_wait);
+	fprintf(stderr,
+		"Halter vCPU thread reported its first HLT executed "
+	       	"after %d seconds.\n",
+		wait_secs);
+
+	/* 
+	 * After guest halter vCPU executed first HLT, start the sender
+	 * vCPU thread to wakeup halter vCPU
+	 */
+	r = pthread_create(&threads[1], NULL, vcpu_thread, sender_vcpu);
+	TEST_ASSERT(r == 0, "pthread_create sender failed errno=%d", errno);
+
+	while (data->hlt_count < COUNT_HLT_EXITS);
+
+	cancel_join_vcpu_thread(threads[0], halter_vcpu);
+	cancel_join_vcpu_thread(threads[1], sender_vcpu);
+
+	kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
+	TEST_ASSERT(kvm_halt_exits == data->hlt_count,
+		"Halter vCPU had unmatched %lu halt exits - %lu HLTs "
+		"executed, when not disabling VM halt exits\n",
+	       	kvm_halt_exits, data->hlt_count);
+	fprintf(stderr, "Halter vCPU had %lu halt exits\n",
+		kvm_halt_exits);
+	fprintf(stderr, "Guest records %lu HLTs executed, "
+		"waked %lu times\n",
+		data->hlt_count, data->wake_count);
+
+	kvm_vm_free(vm);
+}
+
+/*
+ * Test case 2:
+ * VM scoped exits disabling, HLT instructions
+ * stay inside guest without exits
+ */
+static void test_vm_disable_exits_cap(void)
+{
+	int r;
+	uint64_t kvm_halt_exits;
+	struct kvm_vm *vm;
+	struct kvm_vcpu *halter_vcpu;
+	struct guest_stats *data;
+	vm_vaddr_t guest_stats_page_vaddr;
+	pthread_t halter_thread;
+
+	/* Create VM */
+	vm = vm_create(1);
+
+	/*
+	 * Before adding any vCPUs, enable the KVM_X86_DISABLE_EXITS cap
+	 * with flag KVM_X86_DISABLE_EXITS_HLT
+	 */
+	vm_enable_cap(vm, KVM_CAP_X86_DISABLE_EXITS,
+		KVM_X86_DISABLE_EXITS_HLT);
+
+	/* Add vCPU with loops halting */
+	halter_vcpu = vm_vcpu_add(vm, 0, halter_waiting_guest_code);
+
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(halter_vcpu);
+	vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
+	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
+
+	guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
+	data = addr_gva2hva(vm, guest_stats_page_vaddr);
+	memset(data, 0, sizeof(*data));
+	vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
+
+	/* Start halter vCPU thread and execute HLTs immediately */
+	r = pthread_create(&halter_thread, NULL, vcpu_thread, halter_vcpu);
+	TEST_ASSERT(r == 0,
+		"pthread_create halter failed errno=%d", errno);
+	fprintf(stderr, "Halter vCPU thread started\n");
+
+	while (data->hlt_count < COUNT_HLT_EXITS);
+
+	cancel_join_vcpu_thread(halter_thread, halter_vcpu);
+
+	kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
+	TEST_ASSERT(kvm_halt_exits == 0,
+		"Halter vCPU had unexpected halt exits occuring after "
+		"disabling VM-scoped halt exits cap\n");
+	fprintf(stderr, "Halter vCPU had %lu HLT exits\n",
+		kvm_halt_exits);
+	fprintf(stderr, "Guest records %lu HLTs executed\n",
+		data->hlt_count);
+
+	kvm_vm_free(vm);
+}
+
+/*
+ * Test case 3:
+ * VM overrides exits disable flags after vCPU created,
+ * which is not allowed
+ */
+static void test_vm_disable_exits_cap_with_vcpu_created(void)
+{
+	int r;
+	struct kvm_vm *vm;
+	struct kvm_enable_cap cap = {
+		.cap = KVM_CAP_X86_DISABLE_EXITS,
+		.args[0] = KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_OVERRIDE,
+	};
+
+	/* Create VM */
+	vm = vm_create(1);
+	/* Add vCPU with loops halting */
+	vm_vcpu_add(vm, 0, halter_waiting_guest_code);
+
+	/*
+	 * After creating vCPU, the current VM-scoped ABI should 
+	 * discard the cap enable of KVM_CAP_X86_DISABLE_EXITS
+	 * and return non-zero. Since vm_enabled_cap() not able
+	 * to assert the return value, so use the __vm_ioctl()
+	 */
+	r = __vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
+
+	TEST_ASSERT(r != 0,
+		"Setting VM-scoped KVM_CAP_X86_DISABLE_EXITS after "
+		"vCPUs created is not allowed, but it succeeds here\n");
+}
+
+/*
+ * Test case 4:
+ * vCPU scoped halt exits disabling and enabling tests,
+ * verify overides are working after vCPU created
+ */
+static void test_vcpu_toggling_disable_exits_cap(void)
+{
+	int r;
+	uint64_t kvm_halt_exits;
+	struct kvm_vm *vm;
+	struct kvm_vcpu *halter_vcpu;
+	struct kvm_vcpu *sender_vcpu;
+	struct guest_stats *data;
+	vm_vaddr_t guest_stats_page_vaddr;
+	pthread_t threads[2];
+
+	/* Create VM */
+	vm = vm_create(2);
+
+	/* Add vCPU with loops halting */
+	halter_vcpu = vm_vcpu_add(vm, 0, halter_waiting_guest_code);
+	/* Set KVM_CAP_X86_DISABLE_EXITS_HLT for halter vCPU */
+	vcpu_enable_cap(halter_vcpu, KVM_CAP_X86_DISABLE_EXITS,
+		KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_OVERRIDE);
+
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(halter_vcpu);
+	vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
+
+	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
+
+	/* Add vCPU with IPIs waking up halter vCPU */
+	sender_vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
+
+	guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
+	data = addr_gva2hva(vm, guest_stats_page_vaddr);
+	memset(data, 0, sizeof(*data));
+
+	vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
+	vcpu_args_set(sender_vcpu, 1, guest_stats_page_vaddr);
+
+	r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
+	TEST_ASSERT(r == 0,
+		"pthread_create halter failed errno=%d", errno);
+	fprintf(stderr, "Halter vCPU thread started with halt exits"
+		"disabled\n");
+
+	/* 
+	 * For the first phase of the running, halt exits
+	 * are disabled, halter vCPU executes HLT instruction
+	 * but never exits to host
+	 */
+	while (data->hlt_count < (COUNT_HLT_EXITS / 2));
+
+	cancel_join_vcpu_thread(threads[0], halter_vcpu);
+	/* 
+	 * Override and clean KVM_CAP_X86_DISABLE_EXITS flags 
+	 * for halter vCPU. Expect to see halt exits occurs then.
+	 */
+	vcpu_enable_cap(halter_vcpu, KVM_CAP_X86_DISABLE_EXITS,
+		KVM_X86_DISABLE_EXITS_OVERRIDE);
+
+	r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
+	TEST_ASSERT(r == 0,
+		"pthread_create halter failed errno=%d", errno);
+	fprintf(stderr, "Halter vCPU thread restarted and cleared "
+		"halt exits flag\n");
+
+	sleep(1);
+	/* 
+	 * Second phase of the test, after guest halter vCPU
+	 * reenabled halt exits, start the sender
+	 * vCPU thread to wakeup halter vCPU
+	 */
+	r = pthread_create(&threads[1], NULL, vcpu_thread, sender_vcpu);
+	TEST_ASSERT(r == 0, "pthread_create sender failed errno=%d", errno);
+
+	while (data->hlt_count < COUNT_HLT_EXITS);
+
+	cancel_join_vcpu_thread(threads[0], halter_vcpu);
+	cancel_join_vcpu_thread(threads[1], sender_vcpu);
+
+	kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
+	TEST_ASSERT(kvm_halt_exits == (COUNT_HLT_EXITS / 2),
+		"Halter vCPU had unexpected %lu halt exits, "
+		"there should be %d halt exits while "
+		"not disabling VM halt exits\n",
+	       	kvm_halt_exits, COUNT_HLT_EXITS / 2);
+	fprintf(stderr, "Halter vCPU had %lu halt exits\n",
+		kvm_halt_exits);
+	fprintf(stderr, "Guest records %lu HLTs executed, "
+		"waked %lu times\n",
+		data->hlt_count, data->wake_count);
+
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	fprintf(stderr, "VM-scoped tests start\n");
+	test_vm_without_disable_exits_cap();
+	test_vm_disable_exits_cap();
+	test_vm_disable_exits_cap_with_vcpu_created();
+	fprintf(stderr, "vCPU-scoped test starts\n");
+	test_vcpu_toggling_disable_exits_cap();
+	return 0;
+}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability
  2023-01-13 22:01 [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Kechen Lu
                   ` (5 preceding siblings ...)
  2023-01-13 22:01 ` [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU cap KVM_CAP_X86_DISABLE_EXITS Kechen Lu
@ 2023-01-18  8:30 ` Zhi Wang
  2023-01-18 13:32   ` Zhi Wang
  6 siblings, 1 reply; 12+ messages in thread
From: Zhi Wang @ 2023-01-18  8:30 UTC (permalink / raw)
  To: Kechen Lu
  Cc: kvm, seanjc, pbonzini, chao.gao, shaoqin.huang, vkuznets, linux-kernel

On Fri, 13 Jan 2023 22:01:08 +0000
Kechen Lu <kechenl@nvidia.com> wrote:

Hi:

checkpatch.pl throws a lot of warning and errors when I was trying
this series. Can you fix them?

total: 470 errors, 22 warnings, 464 lines checked

> Summary
> ===========
> Introduce support of vCPU-scoped ioctl with KVM_CAP_X86_DISABLE_EXITS
> cap for disabling exits to enable finer-grained VM exits disabling
> on per vCPU scales instead of whole guest. This patch series enabled
> the vCPU-scoped exits control and toggling.
> 
> Motivation
> ============
> In use cases like Windows guest running heavy CPU-bound
> workloads, disabling HLT VM-exits could mitigate host sched ctx switch
> overhead. Simply HLT disabling on all vCPUs could bring
> performance benefits, but if no pCPUs reserved for host threads, could
> happened to the forced preemption as host does not know the time to do
> the schedule for other host threads want to run. With this patch, we
> could only disable part of vCPUs HLT exits for one guest, this still
> keeps performance benefits, and also shows resiliency to host stressing
> workload running at the same time.
> 
> Performance and Testing
> =========================
> In the host stressing workload experiment with Windows guest heavy
> CPU-bound workloads, it shows good resiliency and having the ~3%
> performance improvement. E.g. Passmark running in a Windows guest
> with this patch disabling HLT exits on only half of vCPUs still
> showing 2.4% higher main score v/s baseline.
> 
> Tested everything on AMD machines.
> 
> v4->v5 :
> - Drop the usage of KVM request, keep the VM-scoped exits disable
>   as the existing design, and only allow per-vCPU settings to
>   override the per-VM settings (Sean Christopherson)
> - Refactor the disable exits selftest without introducing any
>   new prerequisite patch, tests per-vCPU exits disable and overrides,
>   and per-VM exits disable
> 
> v3->v4 (Chao Gao) :
> - Use kvm vCPU request KVM_REQ_DISABLE_EXIT to perform the arch
>   VMCS updating (patch 5)
> - Fix selftests redundant arguments (patch 7)
> - Merge overlapped fix bits from patch 4 to patch 3
> 
> v2->v3 (Sean Christopherson) :
> - Reject KVM_CAP_X86_DISABLE_EXITS if userspace disable MWAIT exits
>   when MWAIT is not allowed in guest (patch 3)
> - Make userspace able to re-enable previously disabled exits (patch 4)
> - Add mwait/pause/cstate exits flag toggling instead of only hlt
>   exits (patch 5)
> - Add selftests for KVM_CAP_X86_DISABLE_EXITS (patch 7)
> 
> v1->v2 (Sean Christopherson) :
> - Add explicit restriction for VM-scoped exits disabling to be called
>   before vCPUs creation (patch 1)
> - Use vCPU ioctl instead of 64bit vCPU bitmask (patch 5), and make exits
>   disable flags check purely for vCPU instead of VM (patch 2)
> 
> Best Regards,
> Kechen
> 
> Kechen Lu (3):
>   KVM: x86: Move *_in_guest power management flags to vCPU scope
>   KVM: x86: add vCPU scoped toggling for disabled exits
>   KVM: selftests: Add tests for VM and vCPU cap
>     KVM_CAP_X86_DISABLE_EXITS
> 
> Sean Christopherson (3):
>   KVM: x86: only allow exits disable before vCPUs created
>   KVM: x86: Reject disabling of MWAIT interception when not allowed
>   KVM: x86: Let userspace re-enable previously disabled exits
> 
>  Documentation/virt/kvm/api.rst                |   8 +-
>  arch/x86/include/asm/kvm-x86-ops.h            |   1 +
>  arch/x86/include/asm/kvm_host.h               |   7 +
>  arch/x86/kvm/cpuid.c                          |   4 +-
>  arch/x86/kvm/lapic.c                          |   7 +-
>  arch/x86/kvm/svm/nested.c                     |   4 +-
>  arch/x86/kvm/svm/svm.c                        |  42 +-
>  arch/x86/kvm/vmx/vmx.c                        |  53 +-
>  arch/x86/kvm/x86.c                            |  69 ++-
>  arch/x86/kvm/x86.h                            |  16 +-
>  include/uapi/linux/kvm.h                      |   4 +-
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../selftests/kvm/x86_64/disable_exits_test.c | 457 ++++++++++++++++++
>  13 files changed, 626 insertions(+), 47 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability
  2023-01-18  8:30 ` [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Zhi Wang
@ 2023-01-18 13:32   ` Zhi Wang
  0 siblings, 0 replies; 12+ messages in thread
From: Zhi Wang @ 2023-01-18 13:32 UTC (permalink / raw)
  To: Kechen Lu
  Cc: kvm, seanjc, pbonzini, chao.gao, shaoqin.huang, vkuznets, linux-kernel

On Wed, 18 Jan 2023 10:30:03 +0200
Zhi Wang <zhi.wang.linux@gmail.com> wrote:

Hi:

No sure why the test never finishes on my testing machine. Will take a look
later today.

My CPU: 
model name      : Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz

branch kvm.git/master top commit:
310bc39546a435c83cc27a0eba878afac0d74714

-----

[inno@inno-lk-server x86_64]$ time ./disable_exits_test
VM-scoped tests start
Halter vCPU thread started
vCPU thread running vCPU 0
Halter vCPU thread reported its first HLT executed after 1 seconds.
vCPU thread running vCPU 1
Halter vCPU had 10 halt exits
Guest records 10 HLTs executed, waked 9 times
Halter vCPU thread started
vCPU thread running vCPU 0



^C

real    19m0.923s
user    37m56.512s
sys     0m1.086s

-----

> On Fri, 13 Jan 2023 22:01:08 +0000
> Kechen Lu <kechenl@nvidia.com> wrote:
> 
> Hi:
> 
> checkpatch.pl throws a lot of warning and errors when I was trying
> this series. Can you fix them?
> 
> total: 470 errors, 22 warnings, 464 lines checked
> 
> > Summary
> > ===========
> > Introduce support of vCPU-scoped ioctl with KVM_CAP_X86_DISABLE_EXITS
> > cap for disabling exits to enable finer-grained VM exits disabling
> > on per vCPU scales instead of whole guest. This patch series enabled
> > the vCPU-scoped exits control and toggling.
> > 
> > Motivation
> > ============
> > In use cases like Windows guest running heavy CPU-bound
> > workloads, disabling HLT VM-exits could mitigate host sched ctx switch
> > overhead. Simply HLT disabling on all vCPUs could bring
> > performance benefits, but if no pCPUs reserved for host threads, could
> > happened to the forced preemption as host does not know the time to do
> > the schedule for other host threads want to run. With this patch, we
> > could only disable part of vCPUs HLT exits for one guest, this still
> > keeps performance benefits, and also shows resiliency to host stressing
> > workload running at the same time.
> > 
> > Performance and Testing
> > =========================
> > In the host stressing workload experiment with Windows guest heavy
> > CPU-bound workloads, it shows good resiliency and having the ~3%
> > performance improvement. E.g. Passmark running in a Windows guest
> > with this patch disabling HLT exits on only half of vCPUs still
> > showing 2.4% higher main score v/s baseline.
> > 
> > Tested everything on AMD machines.
> > 
> > v4->v5 :
> > - Drop the usage of KVM request, keep the VM-scoped exits disable
> >   as the existing design, and only allow per-vCPU settings to
> >   override the per-VM settings (Sean Christopherson)
> > - Refactor the disable exits selftest without introducing any
> >   new prerequisite patch, tests per-vCPU exits disable and overrides,
> >   and per-VM exits disable
> > 
> > v3->v4 (Chao Gao) :
> > - Use kvm vCPU request KVM_REQ_DISABLE_EXIT to perform the arch
> >   VMCS updating (patch 5)
> > - Fix selftests redundant arguments (patch 7)
> > - Merge overlapped fix bits from patch 4 to patch 3
> > 
> > v2->v3 (Sean Christopherson) :
> > - Reject KVM_CAP_X86_DISABLE_EXITS if userspace disable MWAIT exits
> >   when MWAIT is not allowed in guest (patch 3)
> > - Make userspace able to re-enable previously disabled exits (patch 4)
> > - Add mwait/pause/cstate exits flag toggling instead of only hlt
> >   exits (patch 5)
> > - Add selftests for KVM_CAP_X86_DISABLE_EXITS (patch 7)
> > 
> > v1->v2 (Sean Christopherson) :
> > - Add explicit restriction for VM-scoped exits disabling to be called
> >   before vCPUs creation (patch 1)
> > - Use vCPU ioctl instead of 64bit vCPU bitmask (patch 5), and make exits
> >   disable flags check purely for vCPU instead of VM (patch 2)
> > 
> > Best Regards,
> > Kechen
> > 
> > Kechen Lu (3):
> >   KVM: x86: Move *_in_guest power management flags to vCPU scope
> >   KVM: x86: add vCPU scoped toggling for disabled exits
> >   KVM: selftests: Add tests for VM and vCPU cap
> >     KVM_CAP_X86_DISABLE_EXITS
> > 
> > Sean Christopherson (3):
> >   KVM: x86: only allow exits disable before vCPUs created
> >   KVM: x86: Reject disabling of MWAIT interception when not allowed
> >   KVM: x86: Let userspace re-enable previously disabled exits
> > 
> >  Documentation/virt/kvm/api.rst                |   8 +-
> >  arch/x86/include/asm/kvm-x86-ops.h            |   1 +
> >  arch/x86/include/asm/kvm_host.h               |   7 +
> >  arch/x86/kvm/cpuid.c                          |   4 +-
> >  arch/x86/kvm/lapic.c                          |   7 +-
> >  arch/x86/kvm/svm/nested.c                     |   4 +-
> >  arch/x86/kvm/svm/svm.c                        |  42 +-
> >  arch/x86/kvm/vmx/vmx.c                        |  53 +-
> >  arch/x86/kvm/x86.c                            |  69 ++-
> >  arch/x86/kvm/x86.h                            |  16 +-
> >  include/uapi/linux/kvm.h                      |   4 +-
> >  tools/testing/selftests/kvm/Makefile          |   1 +
> >  .../selftests/kvm/x86_64/disable_exits_test.c | 457 ++++++++++++++++++
> >  13 files changed, 626 insertions(+), 47 deletions(-)
> >  create mode 100644 tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> > 
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU cap KVM_CAP_X86_DISABLE_EXITS
  2023-01-13 22:01 ` [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU cap KVM_CAP_X86_DISABLE_EXITS Kechen Lu
@ 2023-01-18 20:03   ` Zhi Wang
  2023-01-18 20:26     ` Kechen Lu
  0 siblings, 1 reply; 12+ messages in thread
From: Zhi Wang @ 2023-01-18 20:03 UTC (permalink / raw)
  To: Kechen Lu
  Cc: kvm, seanjc, pbonzini, chao.gao, shaoqin.huang, vkuznets, linux-kernel

On Fri, 13 Jan 2023 22:01:14 +0000
Kechen Lu <kechenl@nvidia.com> wrote:

I think I figure out why this test case doesn't work:

The 2nd case always hangs because:

1) Unlike the 1st case in which a halter and an IPI sender will be created,
there is only halter thread created in the 2nd case.
2) The halter enables KVM_X86_DISABLE_EXITS_HLT. Thus, HLT will not cause VMEXIT
3) The halter stuck in the halter_waiting_guest_code(). data->hlt_count is
always 1 and data->wake_count is always 0.
4) In the main thread, you have test_vm_disable_exits_cap() -> 
                         while (data->hlt_count < COUNT_HLT_EXITS);

As data->hlt_count will never increase in the vcpu_thread, the main thread
always stuck in the while loop.

Can you explain more about your thoughts of designing this test case?

> Add selftests for KVM cap KVM_CAP_X86_DISABLE_EXITS overriding flags
> in VM and vCPU scope both works as expected.
> 
> Suggested-by: Chao Gao <chao.gao@intel.com>
> Suggested-by: Shaoqin Huang <shaoqin.huang@intel.com>
> Signed-off-by: Kechen Lu <kechenl@nvidia.com>
> ---
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../selftests/kvm/x86_64/disable_exits_test.c | 457 ++++++++++++++++++
>  2 files changed, 458 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 1750f91dd936..eeeba35e2536 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -114,6 +114,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
>  TEST_GEN_PROGS_x86_64 += x86_64/amx_test
>  TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
>  TEST_GEN_PROGS_x86_64 += x86_64/triple_fault_event_test
> +TEST_GEN_PROGS_x86_64 += x86_64/disable_exits_test
>  TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
>  TEST_GEN_PROGS_x86_64 += demand_paging_test
>  TEST_GEN_PROGS_x86_64 += dirty_log_test
> diff --git a/tools/testing/selftests/kvm/x86_64/disable_exits_test.c b/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> new file mode 100644
> index 000000000000..dceba3bcef5f
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> @@ -0,0 +1,457 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Test per-VM and per-vCPU disable exits cap
> + * 1) Per-VM scope 
> + * 2) Per-vCPU scope
> + *
> + */
> +
> +#define _GNU_SOURCE /* for program_invocation_short_name */
> +#include <pthread.h>
> +#include <inttypes.h>
> +#include <string.h>
> +#include <time.h>
> +#include <sys/ioctl.h>
> +
> +#include "test_util.h"
> +#include "kvm_util.h"
> +#include "svm_util.h"
> +#include "vmx.h"
> +#include "processor.h"
> +#include "asm/kvm.h"
> +#include "linux/kvm.h"
> +
> +/* Arbitary chosen IPI vector value from sender to halter vCPU */
> +#define IPI_VECTOR	 0xa5
> +/* Number of HLTs halter vCPU thread executes */
> +#define COUNT_HLT_EXITS	 10
> +
> +struct guest_stats {
> +	uint32_t halter_apic_id;
> +	volatile uint64_t hlt_count;
> +	volatile uint64_t wake_count;
> +};
> +
> +static u64 read_vcpu_stats_halt_exits(struct kvm_vcpu *vcpu)
> +{
> +	int i;
> +	struct kvm_stats_header header;
> +	u64 *stats_data;
> +	u64 ret = 0;
> +	struct kvm_stats_desc *stats_desc;
> +	struct kvm_stats_desc *pdesc;
> +	int stats_fd = vcpu_get_stats_fd(vcpu);
> +
> +	read_stats_header(stats_fd, &header);
> +	if (header.num_desc == 0) {
> +		fprintf(stderr,
> +			"Cannot read halt exits since no KVM stats defined\n");
> +		return ret;
> +	}
> +
> +	stats_desc = read_stats_descriptors(stats_fd, &header);
> +	for (i = 0; i < header.num_desc; ++i) {
> +		pdesc = get_stats_descriptor(stats_desc, i, &header);
> +		if (!strncmp(pdesc->name, "halt_exits", 10)) {
> +			stats_data = malloc(pdesc->size * sizeof(*stats_data));
> +			read_stat_data(stats_fd, &header, pdesc, stats_data,
> +				pdesc->size);
> +			ret = *stats_data;
> +			free(stats_data);
> +			break;
> +		}
> +	}
> +	free(stats_desc);
> +	return ret;
> +}
> +
> +/* HLT multiple times in one vCPU */
> +static void halter_guest_code(struct guest_stats *data)
> +{
> +	xapic_enable();
> +	data->halter_apic_id = GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID));
> +
> +	for (;;) {
> +		data->hlt_count++;
> +		asm volatile("sti; hlt; cli");
> +		data->wake_count++;
> +	}
> +}
> +
> +static void halter_waiting_guest_code(struct guest_stats *data)
> +{
> +	uint64_t tsc_start = rdtsc();
> +
> +	xapic_enable();
> +	data->halter_apic_id = GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID));
> +
> +	for (;;) {
> +		data->hlt_count++;
> +		asm volatile("sti; hlt; cli");
> +		data->wake_count++;
> +		/* Wait for ~0.5sec for each HLT execution */
> +		tsc_start = rdtsc();
> +		while (rdtsc() - tsc_start < 2000000000);
> +	}
> +}
> +
> +/* Runs on halter vCPU when IPI arrives */
> +static void guest_ipi_handler(struct ex_regs *regs)
> +{
> +	xapic_write_reg(APIC_EOI, 11);
> +}
> +
> +/* Sender vCPU waits for ~1sec to assume HLT executed */
> +static void sender_wait_loop(struct guest_stats *data, uint64_t old_hlt_count,
> +		uint64_t old_wake_count)
> +{
> +	uint64_t tsc_start = rdtsc();
> +	while (rdtsc() - tsc_start < 4000000000);
> +	GUEST_ASSERT((data->wake_count != old_wake_count) &&
> +		       	(data->hlt_count != old_hlt_count));
> +}
> +
> +/* Sender vCPU loops sending IPI to halter vCPU every ~1sec */
> +static void sender_guest_code(struct guest_stats *data)
> +{
> +	uint32_t icr_val;
> +	uint32_t icr2_val;
> +	uint64_t old_hlt_count = 0;
> +	uint64_t old_wake_count = 0;
> +
> +	xapic_enable();
> +	/* Init interrupt command register for sending IPIs */
> +	icr_val = (APIC_DEST_PHYSICAL | APIC_DM_FIXED | IPI_VECTOR);
> +	icr2_val = SET_APIC_DEST_FIELD(data->halter_apic_id);
> +
> +	for (;;) {
> +		/*
> +		 * Send IPI to halted vCPU
> +		 * First IPI sends here as already waited before sender vCPU
> +		 * thread creation
> +		 */
> +		xapic_write_reg(APIC_ICR2, icr2_val);
> +		xapic_write_reg(APIC_ICR, icr_val);
> +		sender_wait_loop(data, old_hlt_count, old_wake_count);
> +		GUEST_ASSERT((data->wake_count != old_wake_count) &&
> +			(data->hlt_count != old_hlt_count));
> +		old_wake_count = data->wake_count;
> +		old_hlt_count = data->hlt_count;
> +	}
> +}
> +
> +static void *vcpu_thread(void *arg)
> +{
> +	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)arg;
> +	int old;
> +	int r;
> +
> +	r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &old);
> +	TEST_ASSERT(r == 0,
> +		"pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
> +	       	vcpu->id, r);
> +	fprintf(stderr, "vCPU thread running vCPU %u\n", vcpu->id);
> +	vcpu_run(vcpu);
> +	return NULL;
> +}
> +
> +static void cancel_join_vcpu_thread(pthread_t thread, struct kvm_vcpu *vcpu)
> +{
> +	void *retval;
> +	int r;
> +
> +	r = pthread_cancel(thread);
> +	TEST_ASSERT(r == 0,
> +		"pthread_cancel on vcpu_id=%d failed with errno=%d",
> +		vcpu->id, r);
> +
> +	r = pthread_join(thread, &retval);
> +	TEST_ASSERT(r == 0,
> +		"pthread_join on vcpu_id=%d failed with errno=%d",
> +		vcpu->id, r);
> +}
> +
> +/*
> + * Test case 1:
> + * Normal VM running with one vCPU keeps executing HLTs,
> + * another vCPU sending IPIs to wake it up, should expect
> + * all HLTs exiting to host
> + */
> +static void test_vm_without_disable_exits_cap(void)
> +{
> +	int r;
> +	int wait_secs;
> +	const int first_halter_wait = 10;
> +	uint64_t kvm_halt_exits;
> +	struct kvm_vm *vm;
> +	struct kvm_vcpu *halter_vcpu;
> +	struct kvm_vcpu *sender_vcpu;
> +	struct guest_stats *data;
> +	vm_vaddr_t guest_stats_page_vaddr;
> +	pthread_t threads[2];
> +
> +	/* Create VM */
> +	vm = vm_create(2);
> +
> +	/* Add vCPU with loops halting */
> +	halter_vcpu = vm_vcpu_add(vm, 0, halter_guest_code);
> +
> +	vm_init_descriptor_tables(vm);
> +	vcpu_init_descriptor_tables(halter_vcpu);
> +	vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
> +	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> +
> +	/* Add vCPU with IPIs waking up halter vCPU */
> +	sender_vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
> +
> +	guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
> +	data = addr_gva2hva(vm, guest_stats_page_vaddr);
> +	memset(data, 0, sizeof(*data));
> +
> +	vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
> +	vcpu_args_set(sender_vcpu, 1, guest_stats_page_vaddr);
> +
> +	/* Start halter vCPU thread and wait for it to execute first HLT. */
> +	r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
> +	TEST_ASSERT(r == 0,
> +		"pthread_create halter failed errno=%d", errno);
> +	fprintf(stderr, "Halter vCPU thread started\n");
> +
> +	wait_secs = 0;
> +	while ((wait_secs < first_halter_wait) && !data->hlt_count) {
> +		sleep(1);
> +		wait_secs++;
> +	}
> +	TEST_ASSERT(data->hlt_count,
> +		"Halter vCPU did not execute first HLT within %d seconds",
> +	       	first_halter_wait);
> +	fprintf(stderr,
> +		"Halter vCPU thread reported its first HLT executed "
> +	       	"after %d seconds.\n",
> +		wait_secs);
> +
> +	/* 
> +	 * After guest halter vCPU executed first HLT, start the sender
> +	 * vCPU thread to wakeup halter vCPU
> +	 */
> +	r = pthread_create(&threads[1], NULL, vcpu_thread, sender_vcpu);
> +	TEST_ASSERT(r == 0, "pthread_create sender failed errno=%d", errno);
> +
> +	while (data->hlt_count < COUNT_HLT_EXITS);
> +
> +	cancel_join_vcpu_thread(threads[0], halter_vcpu);
> +	cancel_join_vcpu_thread(threads[1], sender_vcpu);
> +
> +	kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
> +	TEST_ASSERT(kvm_halt_exits == data->hlt_count,
> +		"Halter vCPU had unmatched %lu halt exits - %lu HLTs "
> +		"executed, when not disabling VM halt exits\n",
> +	       	kvm_halt_exits, data->hlt_count);
> +	fprintf(stderr, "Halter vCPU had %lu halt exits\n",
> +		kvm_halt_exits);
> +	fprintf(stderr, "Guest records %lu HLTs executed, "
> +		"waked %lu times\n",
> +		data->hlt_count, data->wake_count);
> +
> +	kvm_vm_free(vm);
> +}
> +
> +/*
> + * Test case 2:
> + * VM scoped exits disabling, HLT instructions
> + * stay inside guest without exits
> + */
> +static void test_vm_disable_exits_cap(void)
> +{
> +	int r;
> +	uint64_t kvm_halt_exits;
> +	struct kvm_vm *vm;
> +	struct kvm_vcpu *halter_vcpu;
> +	struct guest_stats *data;
> +	vm_vaddr_t guest_stats_page_vaddr;
> +	pthread_t halter_thread;
> +
> +	/* Create VM */
> +	vm = vm_create(1);
> +
> +	/*
> +	 * Before adding any vCPUs, enable the KVM_X86_DISABLE_EXITS cap
> +	 * with flag KVM_X86_DISABLE_EXITS_HLT
> +	 */
> +	vm_enable_cap(vm, KVM_CAP_X86_DISABLE_EXITS,
> +		KVM_X86_DISABLE_EXITS_HLT);
> +
> +	/* Add vCPU with loops halting */
> +	halter_vcpu = vm_vcpu_add(vm, 0, halter_waiting_guest_code);
> +
> +	vm_init_descriptor_tables(vm);
> +	vcpu_init_descriptor_tables(halter_vcpu);
> +	vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
> +	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> +
> +	guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
> +	data = addr_gva2hva(vm, guest_stats_page_vaddr);
> +	memset(data, 0, sizeof(*data));
> +	vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
> +
> +	/* Start halter vCPU thread and execute HLTs immediately */
> +	r = pthread_create(&halter_thread, NULL, vcpu_thread, halter_vcpu);
> +	TEST_ASSERT(r == 0,
> +		"pthread_create halter failed errno=%d", errno);
> +	fprintf(stderr, "Halter vCPU thread started\n");
> +
> +	while (data->hlt_count < COUNT_HLT_EXITS);
> +
> +	cancel_join_vcpu_thread(halter_thread, halter_vcpu);
> +
> +	kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
> +	TEST_ASSERT(kvm_halt_exits == 0,
> +		"Halter vCPU had unexpected halt exits occuring after "
> +		"disabling VM-scoped halt exits cap\n");
> +	fprintf(stderr, "Halter vCPU had %lu HLT exits\n",
> +		kvm_halt_exits);
> +	fprintf(stderr, "Guest records %lu HLTs executed\n",
> +		data->hlt_count);
> +
> +	kvm_vm_free(vm);
> +}
> +
> +/*
> + * Test case 3:
> + * VM overrides exits disable flags after vCPU created,
> + * which is not allowed
> + */
> +static void test_vm_disable_exits_cap_with_vcpu_created(void)
> +{
> +	int r;
> +	struct kvm_vm *vm;
> +	struct kvm_enable_cap cap = {
> +		.cap = KVM_CAP_X86_DISABLE_EXITS,
> +		.args[0] = KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_OVERRIDE,
> +	};
> +
> +	/* Create VM */
> +	vm = vm_create(1);
> +	/* Add vCPU with loops halting */
> +	vm_vcpu_add(vm, 0, halter_waiting_guest_code);
> +
> +	/*
> +	 * After creating vCPU, the current VM-scoped ABI should 
> +	 * discard the cap enable of KVM_CAP_X86_DISABLE_EXITS
> +	 * and return non-zero. Since vm_enabled_cap() not able
> +	 * to assert the return value, so use the __vm_ioctl()
> +	 */
> +	r = __vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
> +
> +	TEST_ASSERT(r != 0,
> +		"Setting VM-scoped KVM_CAP_X86_DISABLE_EXITS after "
> +		"vCPUs created is not allowed, but it succeeds here\n");
> +}
> +
> +/*
> + * Test case 4:
> + * vCPU scoped halt exits disabling and enabling tests,
> + * verify overides are working after vCPU created
> + */
> +static void test_vcpu_toggling_disable_exits_cap(void)
> +{
> +	int r;
> +	uint64_t kvm_halt_exits;
> +	struct kvm_vm *vm;
> +	struct kvm_vcpu *halter_vcpu;
> +	struct kvm_vcpu *sender_vcpu;
> +	struct guest_stats *data;
> +	vm_vaddr_t guest_stats_page_vaddr;
> +	pthread_t threads[2];
> +
> +	/* Create VM */
> +	vm = vm_create(2);
> +
> +	/* Add vCPU with loops halting */
> +	halter_vcpu = vm_vcpu_add(vm, 0, halter_waiting_guest_code);
> +	/* Set KVM_CAP_X86_DISABLE_EXITS_HLT for halter vCPU */
> +	vcpu_enable_cap(halter_vcpu, KVM_CAP_X86_DISABLE_EXITS,
> +		KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_OVERRIDE);
> +
> +	vm_init_descriptor_tables(vm);
> +	vcpu_init_descriptor_tables(halter_vcpu);
> +	vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
> +
> +	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> +
> +	/* Add vCPU with IPIs waking up halter vCPU */
> +	sender_vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
> +
> +	guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
> +	data = addr_gva2hva(vm, guest_stats_page_vaddr);
> +	memset(data, 0, sizeof(*data));
> +
> +	vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
> +	vcpu_args_set(sender_vcpu, 1, guest_stats_page_vaddr);
> +
> +	r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
> +	TEST_ASSERT(r == 0,
> +		"pthread_create halter failed errno=%d", errno);
> +	fprintf(stderr, "Halter vCPU thread started with halt exits"
> +		"disabled\n");
> +
> +	/* 
> +	 * For the first phase of the running, halt exits
> +	 * are disabled, halter vCPU executes HLT instruction
> +	 * but never exits to host
> +	 */
> +	while (data->hlt_count < (COUNT_HLT_EXITS / 2));
> +
> +	cancel_join_vcpu_thread(threads[0], halter_vcpu);
> +	/* 
> +	 * Override and clean KVM_CAP_X86_DISABLE_EXITS flags 
> +	 * for halter vCPU. Expect to see halt exits occurs then.
> +	 */
> +	vcpu_enable_cap(halter_vcpu, KVM_CAP_X86_DISABLE_EXITS,
> +		KVM_X86_DISABLE_EXITS_OVERRIDE);
> +
> +	r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
> +	TEST_ASSERT(r == 0,
> +		"pthread_create halter failed errno=%d", errno);
> +	fprintf(stderr, "Halter vCPU thread restarted and cleared "
> +		"halt exits flag\n");
> +
> +	sleep(1);
> +	/* 
> +	 * Second phase of the test, after guest halter vCPU
> +	 * reenabled halt exits, start the sender
> +	 * vCPU thread to wakeup halter vCPU
> +	 */
> +	r = pthread_create(&threads[1], NULL, vcpu_thread, sender_vcpu);
> +	TEST_ASSERT(r == 0, "pthread_create sender failed errno=%d", errno);
> +
> +	while (data->hlt_count < COUNT_HLT_EXITS);
> +
> +	cancel_join_vcpu_thread(threads[0], halter_vcpu);
> +	cancel_join_vcpu_thread(threads[1], sender_vcpu);
> +
> +	kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
> +	TEST_ASSERT(kvm_halt_exits == (COUNT_HLT_EXITS / 2),
> +		"Halter vCPU had unexpected %lu halt exits, "
> +		"there should be %d halt exits while "
> +		"not disabling VM halt exits\n",
> +	       	kvm_halt_exits, COUNT_HLT_EXITS / 2);
> +	fprintf(stderr, "Halter vCPU had %lu halt exits\n",
> +		kvm_halt_exits);
> +	fprintf(stderr, "Guest records %lu HLTs executed, "
> +		"waked %lu times\n",
> +		data->hlt_count, data->wake_count);
> +
> +	kvm_vm_free(vm);
> +}
> +
> +int main(int argc, char *argv[])
> +{
> +	fprintf(stderr, "VM-scoped tests start\n");
> +	test_vm_without_disable_exits_cap();
> +	test_vm_disable_exits_cap();
> +	test_vm_disable_exits_cap_with_vcpu_created();
> +	fprintf(stderr, "vCPU-scoped test starts\n");
> +	test_vcpu_toggling_disable_exits_cap();
> +	return 0;
> +}


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU cap KVM_CAP_X86_DISABLE_EXITS
  2023-01-18 20:03   ` Zhi Wang
@ 2023-01-18 20:26     ` Kechen Lu
  2023-01-18 20:43       ` Kechen Lu
  0 siblings, 1 reply; 12+ messages in thread
From: Kechen Lu @ 2023-01-18 20:26 UTC (permalink / raw)
  To: Zhi Wang
  Cc: kvm, seanjc, pbonzini, chao.gao, shaoqin.huang, vkuznets, linux-kernel

Hi Zhi,

Thanks for testing the patch series. Comments below.

> -----Original Message-----
> From: Zhi Wang <zhi.wang.linux@gmail.com>
> Sent: Wednesday, January 18, 2023 12:03 PM
> To: Kechen Lu <kechenl@nvidia.com>
> Cc: kvm@vger.kernel.org; seanjc@google.com; pbonzini@redhat.com;
> chao.gao@intel.com; shaoqin.huang@intel.com; vkuznets@redhat.com;
> linux-kernel@vger.kernel.org
> Subject: Re: [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU
> cap KVM_CAP_X86_DISABLE_EXITS
> 
> External email: Use caution opening links or attachments
> 
> 
> On Fri, 13 Jan 2023 22:01:14 +0000
> Kechen Lu <kechenl@nvidia.com> wrote:
> 
> I think I figure out why this test case doesn't work:
> 
> The 2nd case always hangs because:
> 
> 1) Unlike the 1st case in which a halter and an IPI sender will be created,
> there is only halter thread created in the 2nd case.
> 2) The halter enables KVM_X86_DISABLE_EXITS_HLT. Thus, HLT will not cause
> VMEXIT
> 3) The halter stuck in the halter_waiting_guest_code(). data->hlt_count is
> always 1 and data->wake_count is always 0.
> 4) In the main thread, you have test_vm_disable_exits_cap() ->
>                          while (data->hlt_count < COUNT_HLT_EXITS);
> 
> As data->hlt_count will never increase in the vcpu_thread, the main thread
> always stuck in the while loop.
> 
> Can you explain more about your thoughts of designing this test case?

For this test case, we want to test for the VM-scoped KVM_CAP_X86_DISABLE_EXITS cap flags setting. 
So if we set KVM_X86_DISABLE_EXITS_HLT, there would be no halt vmexits, and what expect
is the HLT instructions looping executed within guest halter vCPU thread, and not stuck here, no IPIs
required to wake it up.

Here is what I got for this test case running in an AMD machine.
-------------------------------------
Halter vCPU thread started
vCPU thread running vCPU 0
Halter vCPU had 0 HLT exits
Guest records 10 HLTs executed
-------------------------------------

BR,
Kechen

> 
> > Add selftests for KVM cap KVM_CAP_X86_DISABLE_EXITS overriding flags
> > in VM and vCPU scope both works as expected.
> >
> > Suggested-by: Chao Gao <chao.gao@intel.com>
> > Suggested-by: Shaoqin Huang <shaoqin.huang@intel.com>
> > Signed-off-by: Kechen Lu <kechenl@nvidia.com>
> > ---
> >  tools/testing/selftests/kvm/Makefile          |   1 +
> >  .../selftests/kvm/x86_64/disable_exits_test.c | 457
> > ++++++++++++++++++
> >  2 files changed, 458 insertions(+)
> >  create mode 100644
> > tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> >
> > diff --git a/tools/testing/selftests/kvm/Makefile
> > b/tools/testing/selftests/kvm/Makefile
> > index 1750f91dd936..eeeba35e2536 100644
> > --- a/tools/testing/selftests/kvm/Makefile
> > +++ b/tools/testing/selftests/kvm/Makefile
> > @@ -114,6 +114,7 @@ TEST_GEN_PROGS_x86_64 +=
> x86_64/sev_migrate_tests
> >  TEST_GEN_PROGS_x86_64 += x86_64/amx_test
> >  TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
> >  TEST_GEN_PROGS_x86_64 += x86_64/triple_fault_event_test
> > +TEST_GEN_PROGS_x86_64 += x86_64/disable_exits_test
> >  TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> >  TEST_GEN_PROGS_x86_64 += demand_paging_test
> >  TEST_GEN_PROGS_x86_64 += dirty_log_test diff --git
> > a/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> > b/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> > new file mode 100644
> > index 000000000000..dceba3bcef5f
> > --- /dev/null
> > +++ b/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> > @@ -0,0 +1,457 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Test per-VM and per-vCPU disable exits cap
> > + * 1) Per-VM scope
> > + * 2) Per-vCPU scope
> > + *
> > + */
> > +
> > +#define _GNU_SOURCE /* for program_invocation_short_name */
> #include
> > +<pthread.h> #include <inttypes.h> #include <string.h> #include
> > +<time.h> #include <sys/ioctl.h>
> > +
> > +#include "test_util.h"
> > +#include "kvm_util.h"
> > +#include "svm_util.h"
> > +#include "vmx.h"
> > +#include "processor.h"
> > +#include "asm/kvm.h"
> > +#include "linux/kvm.h"
> > +
> > +/* Arbitary chosen IPI vector value from sender to halter vCPU */
> > +#define IPI_VECTOR    0xa5
> > +/* Number of HLTs halter vCPU thread executes */
> > +#define COUNT_HLT_EXITS       10
> > +
> > +struct guest_stats {
> > +     uint32_t halter_apic_id;
> > +     volatile uint64_t hlt_count;
> > +     volatile uint64_t wake_count;
> > +};
> > +
> > +static u64 read_vcpu_stats_halt_exits(struct kvm_vcpu *vcpu) {
> > +     int i;
> > +     struct kvm_stats_header header;
> > +     u64 *stats_data;
> > +     u64 ret = 0;
> > +     struct kvm_stats_desc *stats_desc;
> > +     struct kvm_stats_desc *pdesc;
> > +     int stats_fd = vcpu_get_stats_fd(vcpu);
> > +
> > +     read_stats_header(stats_fd, &header);
> > +     if (header.num_desc == 0) {
> > +             fprintf(stderr,
> > +                     "Cannot read halt exits since no KVM stats defined\n");
> > +             return ret;
> > +     }
> > +
> > +     stats_desc = read_stats_descriptors(stats_fd, &header);
> > +     for (i = 0; i < header.num_desc; ++i) {
> > +             pdesc = get_stats_descriptor(stats_desc, i, &header);
> > +             if (!strncmp(pdesc->name, "halt_exits", 10)) {
> > +                     stats_data = malloc(pdesc->size * sizeof(*stats_data));
> > +                     read_stat_data(stats_fd, &header, pdesc, stats_data,
> > +                             pdesc->size);
> > +                     ret = *stats_data;
> > +                     free(stats_data);
> > +                     break;
> > +             }
> > +     }
> > +     free(stats_desc);
> > +     return ret;
> > +}
> > +
> > +/* HLT multiple times in one vCPU */
> > +static void halter_guest_code(struct guest_stats *data) {
> > +     xapic_enable();
> > +     data->halter_apic_id =
> > +GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID));
> > +
> > +     for (;;) {
> > +             data->hlt_count++;
> > +             asm volatile("sti; hlt; cli");
> > +             data->wake_count++;
> > +     }
> > +}
> > +
> > +static void halter_waiting_guest_code(struct guest_stats *data) {
> > +     uint64_t tsc_start = rdtsc();
> > +
> > +     xapic_enable();
> > +     data->halter_apic_id =
> > + GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID));
> > +
> > +     for (;;) {
> > +             data->hlt_count++;
> > +             asm volatile("sti; hlt; cli");
> > +             data->wake_count++;
> > +             /* Wait for ~0.5sec for each HLT execution */
> > +             tsc_start = rdtsc();
> > +             while (rdtsc() - tsc_start < 2000000000);
> > +     }
> > +}
> > +
> > +/* Runs on halter vCPU when IPI arrives */ static void
> > +guest_ipi_handler(struct ex_regs *regs) {
> > +     xapic_write_reg(APIC_EOI, 11);
> > +}
> > +
> > +/* Sender vCPU waits for ~1sec to assume HLT executed */ static void
> > +sender_wait_loop(struct guest_stats *data, uint64_t old_hlt_count,
> > +             uint64_t old_wake_count) {
> > +     uint64_t tsc_start = rdtsc();
> > +     while (rdtsc() - tsc_start < 4000000000);
> > +     GUEST_ASSERT((data->wake_count != old_wake_count) &&
> > +                     (data->hlt_count != old_hlt_count)); }
> > +
> > +/* Sender vCPU loops sending IPI to halter vCPU every ~1sec */ static
> > +void sender_guest_code(struct guest_stats *data) {
> > +     uint32_t icr_val;
> > +     uint32_t icr2_val;
> > +     uint64_t old_hlt_count = 0;
> > +     uint64_t old_wake_count = 0;
> > +
> > +     xapic_enable();
> > +     /* Init interrupt command register for sending IPIs */
> > +     icr_val = (APIC_DEST_PHYSICAL | APIC_DM_FIXED | IPI_VECTOR);
> > +     icr2_val = SET_APIC_DEST_FIELD(data->halter_apic_id);
> > +
> > +     for (;;) {
> > +             /*
> > +              * Send IPI to halted vCPU
> > +              * First IPI sends here as already waited before sender vCPU
> > +              * thread creation
> > +              */
> > +             xapic_write_reg(APIC_ICR2, icr2_val);
> > +             xapic_write_reg(APIC_ICR, icr_val);
> > +             sender_wait_loop(data, old_hlt_count, old_wake_count);
> > +             GUEST_ASSERT((data->wake_count != old_wake_count) &&
> > +                     (data->hlt_count != old_hlt_count));
> > +             old_wake_count = data->wake_count;
> > +             old_hlt_count = data->hlt_count;
> > +     }
> > +}
> > +
> > +static void *vcpu_thread(void *arg)
> > +{
> > +     struct kvm_vcpu *vcpu = (struct kvm_vcpu *)arg;
> > +     int old;
> > +     int r;
> > +
> > +     r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS,
> &old);
> > +     TEST_ASSERT(r == 0,
> > +             "pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
> > +             vcpu->id, r);
> > +     fprintf(stderr, "vCPU thread running vCPU %u\n", vcpu->id);
> > +     vcpu_run(vcpu);
> > +     return NULL;
> > +}
> > +
> > +static void cancel_join_vcpu_thread(pthread_t thread, struct kvm_vcpu
> > +*vcpu) {
> > +     void *retval;
> > +     int r;
> > +
> > +     r = pthread_cancel(thread);
> > +     TEST_ASSERT(r == 0,
> > +             "pthread_cancel on vcpu_id=%d failed with errno=%d",
> > +             vcpu->id, r);
> > +
> > +     r = pthread_join(thread, &retval);
> > +     TEST_ASSERT(r == 0,
> > +             "pthread_join on vcpu_id=%d failed with errno=%d",
> > +             vcpu->id, r);
> > +}
> > +
> > +/*
> > + * Test case 1:
> > + * Normal VM running with one vCPU keeps executing HLTs,
> > + * another vCPU sending IPIs to wake it up, should expect
> > + * all HLTs exiting to host
> > + */
> > +static void test_vm_without_disable_exits_cap(void)
> > +{
> > +     int r;
> > +     int wait_secs;
> > +     const int first_halter_wait = 10;
> > +     uint64_t kvm_halt_exits;
> > +     struct kvm_vm *vm;
> > +     struct kvm_vcpu *halter_vcpu;
> > +     struct kvm_vcpu *sender_vcpu;
> > +     struct guest_stats *data;
> > +     vm_vaddr_t guest_stats_page_vaddr;
> > +     pthread_t threads[2];
> > +
> > +     /* Create VM */
> > +     vm = vm_create(2);
> > +
> > +     /* Add vCPU with loops halting */
> > +     halter_vcpu = vm_vcpu_add(vm, 0, halter_guest_code);
> > +
> > +     vm_init_descriptor_tables(vm);
> > +     vcpu_init_descriptor_tables(halter_vcpu);
> > +     vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
> > +     virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> > +
> > +     /* Add vCPU with IPIs waking up halter vCPU */
> > +     sender_vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
> > +
> > +     guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
> > +     data = addr_gva2hva(vm, guest_stats_page_vaddr);
> > +     memset(data, 0, sizeof(*data));
> > +
> > +     vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
> > +     vcpu_args_set(sender_vcpu, 1, guest_stats_page_vaddr);
> > +
> > +     /* Start halter vCPU thread and wait for it to execute first HLT. */
> > +     r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
> > +     TEST_ASSERT(r == 0,
> > +             "pthread_create halter failed errno=%d", errno);
> > +     fprintf(stderr, "Halter vCPU thread started\n");
> > +
> > +     wait_secs = 0;
> > +     while ((wait_secs < first_halter_wait) && !data->hlt_count) {
> > +             sleep(1);
> > +             wait_secs++;
> > +     }
> > +     TEST_ASSERT(data->hlt_count,
> > +             "Halter vCPU did not execute first HLT within %d seconds",
> > +             first_halter_wait);
> > +     fprintf(stderr,
> > +             "Halter vCPU thread reported its first HLT executed "
> > +             "after %d seconds.\n",
> > +             wait_secs);
> > +
> > +     /*
> > +      * After guest halter vCPU executed first HLT, start the sender
> > +      * vCPU thread to wakeup halter vCPU
> > +      */
> > +     r = pthread_create(&threads[1], NULL, vcpu_thread, sender_vcpu);
> > +     TEST_ASSERT(r == 0, "pthread_create sender failed errno=%d",
> > + errno);
> > +
> > +     while (data->hlt_count < COUNT_HLT_EXITS);
> > +
> > +     cancel_join_vcpu_thread(threads[0], halter_vcpu);
> > +     cancel_join_vcpu_thread(threads[1], sender_vcpu);
> > +
> > +     kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
> > +     TEST_ASSERT(kvm_halt_exits == data->hlt_count,
> > +             "Halter vCPU had unmatched %lu halt exits - %lu HLTs "
> > +             "executed, when not disabling VM halt exits\n",
> > +             kvm_halt_exits, data->hlt_count);
> > +     fprintf(stderr, "Halter vCPU had %lu halt exits\n",
> > +             kvm_halt_exits);
> > +     fprintf(stderr, "Guest records %lu HLTs executed, "
> > +             "waked %lu times\n",
> > +             data->hlt_count, data->wake_count);
> > +
> > +     kvm_vm_free(vm);
> > +}
> > +
> > +/*
> > + * Test case 2:
> > + * VM scoped exits disabling, HLT instructions
> > + * stay inside guest without exits
> > + */
> > +static void test_vm_disable_exits_cap(void) {
> > +     int r;
> > +     uint64_t kvm_halt_exits;
> > +     struct kvm_vm *vm;
> > +     struct kvm_vcpu *halter_vcpu;
> > +     struct guest_stats *data;
> > +     vm_vaddr_t guest_stats_page_vaddr;
> > +     pthread_t halter_thread;
> > +
> > +     /* Create VM */
> > +     vm = vm_create(1);
> > +
> > +     /*
> > +      * Before adding any vCPUs, enable the KVM_X86_DISABLE_EXITS cap
> > +      * with flag KVM_X86_DISABLE_EXITS_HLT
> > +      */
> > +     vm_enable_cap(vm, KVM_CAP_X86_DISABLE_EXITS,
> > +             KVM_X86_DISABLE_EXITS_HLT);
> > +
> > +     /* Add vCPU with loops halting */
> > +     halter_vcpu = vm_vcpu_add(vm, 0, halter_waiting_guest_code);
> > +
> > +     vm_init_descriptor_tables(vm);
> > +     vcpu_init_descriptor_tables(halter_vcpu);
> > +     vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
> > +     virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> > +
> > +     guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
> > +     data = addr_gva2hva(vm, guest_stats_page_vaddr);
> > +     memset(data, 0, sizeof(*data));
> > +     vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
> > +
> > +     /* Start halter vCPU thread and execute HLTs immediately */
> > +     r = pthread_create(&halter_thread, NULL, vcpu_thread, halter_vcpu);
> > +     TEST_ASSERT(r == 0,
> > +             "pthread_create halter failed errno=%d", errno);
> > +     fprintf(stderr, "Halter vCPU thread started\n");
> > +
> > +     while (data->hlt_count < COUNT_HLT_EXITS);
> > +
> > +     cancel_join_vcpu_thread(halter_thread, halter_vcpu);
> > +
> > +     kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
> > +     TEST_ASSERT(kvm_halt_exits == 0,
> > +             "Halter vCPU had unexpected halt exits occuring after "
> > +             "disabling VM-scoped halt exits cap\n");
> > +     fprintf(stderr, "Halter vCPU had %lu HLT exits\n",
> > +             kvm_halt_exits);
> > +     fprintf(stderr, "Guest records %lu HLTs executed\n",
> > +             data->hlt_count);
> > +
> > +     kvm_vm_free(vm);
> > +}
> > +
> > +/*
> > + * Test case 3:
> > + * VM overrides exits disable flags after vCPU created,
> > + * which is not allowed
> > + */
> > +static void test_vm_disable_exits_cap_with_vcpu_created(void)
> > +{
> > +     int r;
> > +     struct kvm_vm *vm;
> > +     struct kvm_enable_cap cap = {
> > +             .cap = KVM_CAP_X86_DISABLE_EXITS,
> > +             .args[0] = KVM_X86_DISABLE_EXITS_HLT |
> KVM_X86_DISABLE_EXITS_OVERRIDE,
> > +     };
> > +
> > +     /* Create VM */
> > +     vm = vm_create(1);
> > +     /* Add vCPU with loops halting */
> > +     vm_vcpu_add(vm, 0, halter_waiting_guest_code);
> > +
> > +     /*
> > +      * After creating vCPU, the current VM-scoped ABI should
> > +      * discard the cap enable of KVM_CAP_X86_DISABLE_EXITS
> > +      * and return non-zero. Since vm_enabled_cap() not able
> > +      * to assert the return value, so use the __vm_ioctl()
> > +      */
> > +     r = __vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
> > +
> > +     TEST_ASSERT(r != 0,
> > +             "Setting VM-scoped KVM_CAP_X86_DISABLE_EXITS after "
> > +             "vCPUs created is not allowed, but it succeeds here\n");
> > +}
> > +
> > +/*
> > + * Test case 4:
> > + * vCPU scoped halt exits disabling and enabling tests,
> > + * verify overides are working after vCPU created  */ static void
> > +test_vcpu_toggling_disable_exits_cap(void)
> > +{
> > +     int r;
> > +     uint64_t kvm_halt_exits;
> > +     struct kvm_vm *vm;
> > +     struct kvm_vcpu *halter_vcpu;
> > +     struct kvm_vcpu *sender_vcpu;
> > +     struct guest_stats *data;
> > +     vm_vaddr_t guest_stats_page_vaddr;
> > +     pthread_t threads[2];
> > +
> > +     /* Create VM */
> > +     vm = vm_create(2);
> > +
> > +     /* Add vCPU with loops halting */
> > +     halter_vcpu = vm_vcpu_add(vm, 0, halter_waiting_guest_code);
> > +     /* Set KVM_CAP_X86_DISABLE_EXITS_HLT for halter vCPU */
> > +     vcpu_enable_cap(halter_vcpu, KVM_CAP_X86_DISABLE_EXITS,
> > +             KVM_X86_DISABLE_EXITS_HLT |
> > + KVM_X86_DISABLE_EXITS_OVERRIDE);
> > +
> > +     vm_init_descriptor_tables(vm);
> > +     vcpu_init_descriptor_tables(halter_vcpu);
> > +     vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
> > +
> > +     virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> > +
> > +     /* Add vCPU with IPIs waking up halter vCPU */
> > +     sender_vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
> > +
> > +     guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
> > +     data = addr_gva2hva(vm, guest_stats_page_vaddr);
> > +     memset(data, 0, sizeof(*data));
> > +
> > +     vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
> > +     vcpu_args_set(sender_vcpu, 1, guest_stats_page_vaddr);
> > +
> > +     r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
> > +     TEST_ASSERT(r == 0,
> > +             "pthread_create halter failed errno=%d", errno);
> > +     fprintf(stderr, "Halter vCPU thread started with halt exits"
> > +             "disabled\n");
> > +
> > +     /*
> > +      * For the first phase of the running, halt exits
> > +      * are disabled, halter vCPU executes HLT instruction
> > +      * but never exits to host
> > +      */
> > +     while (data->hlt_count < (COUNT_HLT_EXITS / 2));
> > +
> > +     cancel_join_vcpu_thread(threads[0], halter_vcpu);
> > +     /*
> > +      * Override and clean KVM_CAP_X86_DISABLE_EXITS flags
> > +      * for halter vCPU. Expect to see halt exits occurs then.
> > +      */
> > +     vcpu_enable_cap(halter_vcpu, KVM_CAP_X86_DISABLE_EXITS,
> > +             KVM_X86_DISABLE_EXITS_OVERRIDE);
> > +
> > +     r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
> > +     TEST_ASSERT(r == 0,
> > +             "pthread_create halter failed errno=%d", errno);
> > +     fprintf(stderr, "Halter vCPU thread restarted and cleared "
> > +             "halt exits flag\n");
> > +
> > +     sleep(1);
> > +     /*
> > +      * Second phase of the test, after guest halter vCPU
> > +      * reenabled halt exits, start the sender
> > +      * vCPU thread to wakeup halter vCPU
> > +      */
> > +     r = pthread_create(&threads[1], NULL, vcpu_thread, sender_vcpu);
> > +     TEST_ASSERT(r == 0, "pthread_create sender failed errno=%d",
> > + errno);
> > +
> > +     while (data->hlt_count < COUNT_HLT_EXITS);
> > +
> > +     cancel_join_vcpu_thread(threads[0], halter_vcpu);
> > +     cancel_join_vcpu_thread(threads[1], sender_vcpu);
> > +
> > +     kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
> > +     TEST_ASSERT(kvm_halt_exits == (COUNT_HLT_EXITS / 2),
> > +             "Halter vCPU had unexpected %lu halt exits, "
> > +             "there should be %d halt exits while "
> > +             "not disabling VM halt exits\n",
> > +             kvm_halt_exits, COUNT_HLT_EXITS / 2);
> > +     fprintf(stderr, "Halter vCPU had %lu halt exits\n",
> > +             kvm_halt_exits);
> > +     fprintf(stderr, "Guest records %lu HLTs executed, "
> > +             "waked %lu times\n",
> > +             data->hlt_count, data->wake_count);
> > +
> > +     kvm_vm_free(vm);
> > +}
> > +
> > +int main(int argc, char *argv[])
> > +{
> > +     fprintf(stderr, "VM-scoped tests start\n");
> > +     test_vm_without_disable_exits_cap();
> > +     test_vm_disable_exits_cap();
> > +     test_vm_disable_exits_cap_with_vcpu_created();
> > +     fprintf(stderr, "vCPU-scoped test starts\n");
> > +     test_vcpu_toggling_disable_exits_cap();
> > +     return 0;
> > +}


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU cap KVM_CAP_X86_DISABLE_EXITS
  2023-01-18 20:26     ` Kechen Lu
@ 2023-01-18 20:43       ` Kechen Lu
  0 siblings, 0 replies; 12+ messages in thread
From: Kechen Lu @ 2023-01-18 20:43 UTC (permalink / raw)
  To: Zhi Wang
  Cc: kvm, seanjc, pbonzini, chao.gao, shaoqin.huang, vkuznets, linux-kernel

Hi Zhi,

My apologies, I think messed up on the test case 2 and 4 design here.
The test passed on my setup is probably because of my BIOS setting.

I will refactor this selftest.
Thanks!

BR,
Kechen

> -----Original Message-----
> From: Kechen Lu
> Sent: Wednesday, January 18, 2023 12:26 PM
> To: Zhi Wang <zhi.wang.linux@gmail.com>
> Cc: kvm@vger.kernel.org; seanjc@google.com; pbonzini@redhat.com;
> chao.gao@intel.com; shaoqin.huang@intel.com; vkuznets@redhat.com;
> linux-kernel@vger.kernel.org
> Subject: RE: [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU
> cap KVM_CAP_X86_DISABLE_EXITS
> 
> Hi Zhi,
> 
> Thanks for testing the patch series. Comments below.
> 
> > -----Original Message-----
> > From: Zhi Wang <zhi.wang.linux@gmail.com>
> > Sent: Wednesday, January 18, 2023 12:03 PM
> > To: Kechen Lu <kechenl@nvidia.com>
> > Cc: kvm@vger.kernel.org; seanjc@google.com; pbonzini@redhat.com;
> > chao.gao@intel.com; shaoqin.huang@intel.com; vkuznets@redhat.com;
> > linux-kernel@vger.kernel.org
> > Subject: Re: [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and
> > vCPU cap KVM_CAP_X86_DISABLE_EXITS
> >
> > External email: Use caution opening links or attachments
> >
> >
> > On Fri, 13 Jan 2023 22:01:14 +0000
> > Kechen Lu <kechenl@nvidia.com> wrote:
> >
> > I think I figure out why this test case doesn't work:
> >
> > The 2nd case always hangs because:
> >
> > 1) Unlike the 1st case in which a halter and an IPI sender will be
> > created, there is only halter thread created in the 2nd case.
> > 2) The halter enables KVM_X86_DISABLE_EXITS_HLT. Thus, HLT will not
> > cause VMEXIT
> > 3) The halter stuck in the halter_waiting_guest_code().
> > data->hlt_count is always 1 and data->wake_count is always 0.
> > 4) In the main thread, you have test_vm_disable_exits_cap() ->
> >                          while (data->hlt_count < COUNT_HLT_EXITS);
> >
> > As data->hlt_count will never increase in the vcpu_thread, the main
> > thread always stuck in the while loop.
> >
> > Can you explain more about your thoughts of designing this test case?
> 
> For this test case, we want to test for the VM-scoped
> KVM_CAP_X86_DISABLE_EXITS cap flags setting.
> So if we set KVM_X86_DISABLE_EXITS_HLT, there would be no halt vmexits,
> and what expect is the HLT instructions looping executed within guest halter
> vCPU thread, and not stuck here, no IPIs required to wake it up.
> 
> Here is what I got for this test case running in an AMD machine.
> -------------------------------------
> Halter vCPU thread started
> vCPU thread running vCPU 0
> Halter vCPU had 0 HLT exits
> Guest records 10 HLTs executed
> -------------------------------------
> 
> BR,
> Kechen
> 
> >
> > > Add selftests for KVM cap KVM_CAP_X86_DISABLE_EXITS overriding
> flags
> > > in VM and vCPU scope both works as expected.
> > >
> > > Suggested-by: Chao Gao <chao.gao@intel.com>
> > > Suggested-by: Shaoqin Huang <shaoqin.huang@intel.com>
> > > Signed-off-by: Kechen Lu <kechenl@nvidia.com>
> > > ---
> > >  tools/testing/selftests/kvm/Makefile          |   1 +
> > >  .../selftests/kvm/x86_64/disable_exits_test.c | 457
> > > ++++++++++++++++++
> > >  2 files changed, 458 insertions(+)
> > >  create mode 100644
> > > tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> > >
> > > diff --git a/tools/testing/selftests/kvm/Makefile
> > > b/tools/testing/selftests/kvm/Makefile
> > > index 1750f91dd936..eeeba35e2536 100644
> > > --- a/tools/testing/selftests/kvm/Makefile
> > > +++ b/tools/testing/selftests/kvm/Makefile
> > > @@ -114,6 +114,7 @@ TEST_GEN_PROGS_x86_64 +=
> > x86_64/sev_migrate_tests
> > >  TEST_GEN_PROGS_x86_64 += x86_64/amx_test
> > >  TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
> > >  TEST_GEN_PROGS_x86_64 += x86_64/triple_fault_event_test
> > > +TEST_GEN_PROGS_x86_64 += x86_64/disable_exits_test
> > >  TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> > >  TEST_GEN_PROGS_x86_64 += demand_paging_test
> > >  TEST_GEN_PROGS_x86_64 += dirty_log_test diff --git
> > > a/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> > > b/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> > > new file mode 100644
> > > index 000000000000..dceba3bcef5f
> > > --- /dev/null
> > > +++ b/tools/testing/selftests/kvm/x86_64/disable_exits_test.c
> > > @@ -0,0 +1,457 @@
> > > +// SPDX-License-Identifier: GPL-2.0-only
> > > +/*
> > > + * Test per-VM and per-vCPU disable exits cap
> > > + * 1) Per-VM scope
> > > + * 2) Per-vCPU scope
> > > + *
> > > + */
> > > +
> > > +#define _GNU_SOURCE /* for program_invocation_short_name */
> > #include
> > > +<pthread.h> #include <inttypes.h> #include <string.h> #include
> > > +<time.h> #include <sys/ioctl.h>
> > > +
> > > +#include "test_util.h"
> > > +#include "kvm_util.h"
> > > +#include "svm_util.h"
> > > +#include "vmx.h"
> > > +#include "processor.h"
> > > +#include "asm/kvm.h"
> > > +#include "linux/kvm.h"
> > > +
> > > +/* Arbitary chosen IPI vector value from sender to halter vCPU */
> > > +#define IPI_VECTOR    0xa5
> > > +/* Number of HLTs halter vCPU thread executes */
> > > +#define COUNT_HLT_EXITS       10
> > > +
> > > +struct guest_stats {
> > > +     uint32_t halter_apic_id;
> > > +     volatile uint64_t hlt_count;
> > > +     volatile uint64_t wake_count;
> > > +};
> > > +
> > > +static u64 read_vcpu_stats_halt_exits(struct kvm_vcpu *vcpu) {
> > > +     int i;
> > > +     struct kvm_stats_header header;
> > > +     u64 *stats_data;
> > > +     u64 ret = 0;
> > > +     struct kvm_stats_desc *stats_desc;
> > > +     struct kvm_stats_desc *pdesc;
> > > +     int stats_fd = vcpu_get_stats_fd(vcpu);
> > > +
> > > +     read_stats_header(stats_fd, &header);
> > > +     if (header.num_desc == 0) {
> > > +             fprintf(stderr,
> > > +                     "Cannot read halt exits since no KVM stats defined\n");
> > > +             return ret;
> > > +     }
> > > +
> > > +     stats_desc = read_stats_descriptors(stats_fd, &header);
> > > +     for (i = 0; i < header.num_desc; ++i) {
> > > +             pdesc = get_stats_descriptor(stats_desc, i, &header);
> > > +             if (!strncmp(pdesc->name, "halt_exits", 10)) {
> > > +                     stats_data = malloc(pdesc->size * sizeof(*stats_data));
> > > +                     read_stat_data(stats_fd, &header, pdesc, stats_data,
> > > +                             pdesc->size);
> > > +                     ret = *stats_data;
> > > +                     free(stats_data);
> > > +                     break;
> > > +             }
> > > +     }
> > > +     free(stats_desc);
> > > +     return ret;
> > > +}
> > > +
> > > +/* HLT multiple times in one vCPU */ static void
> > > +halter_guest_code(struct guest_stats *data) {
> > > +     xapic_enable();
> > > +     data->halter_apic_id =
> > > +GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID));
> > > +
> > > +     for (;;) {
> > > +             data->hlt_count++;
> > > +             asm volatile("sti; hlt; cli");
> > > +             data->wake_count++;
> > > +     }
> > > +}
> > > +
> > > +static void halter_waiting_guest_code(struct guest_stats *data) {
> > > +     uint64_t tsc_start = rdtsc();
> > > +
> > > +     xapic_enable();
> > > +     data->halter_apic_id =
> > > + GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID));
> > > +
> > > +     for (;;) {
> > > +             data->hlt_count++;
> > > +             asm volatile("sti; hlt; cli");
> > > +             data->wake_count++;
> > > +             /* Wait for ~0.5sec for each HLT execution */
> > > +             tsc_start = rdtsc();
> > > +             while (rdtsc() - tsc_start < 2000000000);
> > > +     }
> > > +}
> > > +
> > > +/* Runs on halter vCPU when IPI arrives */ static void
> > > +guest_ipi_handler(struct ex_regs *regs) {
> > > +     xapic_write_reg(APIC_EOI, 11); }
> > > +
> > > +/* Sender vCPU waits for ~1sec to assume HLT executed */ static
> > > +void sender_wait_loop(struct guest_stats *data, uint64_t
> old_hlt_count,
> > > +             uint64_t old_wake_count) {
> > > +     uint64_t tsc_start = rdtsc();
> > > +     while (rdtsc() - tsc_start < 4000000000);
> > > +     GUEST_ASSERT((data->wake_count != old_wake_count) &&
> > > +                     (data->hlt_count != old_hlt_count)); }
> > > +
> > > +/* Sender vCPU loops sending IPI to halter vCPU every ~1sec */
> > > +static void sender_guest_code(struct guest_stats *data) {
> > > +     uint32_t icr_val;
> > > +     uint32_t icr2_val;
> > > +     uint64_t old_hlt_count = 0;
> > > +     uint64_t old_wake_count = 0;
> > > +
> > > +     xapic_enable();
> > > +     /* Init interrupt command register for sending IPIs */
> > > +     icr_val = (APIC_DEST_PHYSICAL | APIC_DM_FIXED | IPI_VECTOR);
> > > +     icr2_val = SET_APIC_DEST_FIELD(data->halter_apic_id);
> > > +
> > > +     for (;;) {
> > > +             /*
> > > +              * Send IPI to halted vCPU
> > > +              * First IPI sends here as already waited before sender vCPU
> > > +              * thread creation
> > > +              */
> > > +             xapic_write_reg(APIC_ICR2, icr2_val);
> > > +             xapic_write_reg(APIC_ICR, icr_val);
> > > +             sender_wait_loop(data, old_hlt_count, old_wake_count);
> > > +             GUEST_ASSERT((data->wake_count != old_wake_count) &&
> > > +                     (data->hlt_count != old_hlt_count));
> > > +             old_wake_count = data->wake_count;
> > > +             old_hlt_count = data->hlt_count;
> > > +     }
> > > +}
> > > +
> > > +static void *vcpu_thread(void *arg) {
> > > +     struct kvm_vcpu *vcpu = (struct kvm_vcpu *)arg;
> > > +     int old;
> > > +     int r;
> > > +
> > > +     r = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS,
> > &old);
> > > +     TEST_ASSERT(r == 0,
> > > +             "pthread_setcanceltype failed on vcpu_id=%u with errno=%d",
> > > +             vcpu->id, r);
> > > +     fprintf(stderr, "vCPU thread running vCPU %u\n", vcpu->id);
> > > +     vcpu_run(vcpu);
> > > +     return NULL;
> > > +}
> > > +
> > > +static void cancel_join_vcpu_thread(pthread_t thread, struct
> > > +kvm_vcpu
> > > +*vcpu) {
> > > +     void *retval;
> > > +     int r;
> > > +
> > > +     r = pthread_cancel(thread);
> > > +     TEST_ASSERT(r == 0,
> > > +             "pthread_cancel on vcpu_id=%d failed with errno=%d",
> > > +             vcpu->id, r);
> > > +
> > > +     r = pthread_join(thread, &retval);
> > > +     TEST_ASSERT(r == 0,
> > > +             "pthread_join on vcpu_id=%d failed with errno=%d",
> > > +             vcpu->id, r);
> > > +}
> > > +
> > > +/*
> > > + * Test case 1:
> > > + * Normal VM running with one vCPU keeps executing HLTs,
> > > + * another vCPU sending IPIs to wake it up, should expect
> > > + * all HLTs exiting to host
> > > + */
> > > +static void test_vm_without_disable_exits_cap(void)
> > > +{
> > > +     int r;
> > > +     int wait_secs;
> > > +     const int first_halter_wait = 10;
> > > +     uint64_t kvm_halt_exits;
> > > +     struct kvm_vm *vm;
> > > +     struct kvm_vcpu *halter_vcpu;
> > > +     struct kvm_vcpu *sender_vcpu;
> > > +     struct guest_stats *data;
> > > +     vm_vaddr_t guest_stats_page_vaddr;
> > > +     pthread_t threads[2];
> > > +
> > > +     /* Create VM */
> > > +     vm = vm_create(2);
> > > +
> > > +     /* Add vCPU with loops halting */
> > > +     halter_vcpu = vm_vcpu_add(vm, 0, halter_guest_code);
> > > +
> > > +     vm_init_descriptor_tables(vm);
> > > +     vcpu_init_descriptor_tables(halter_vcpu);
> > > +     vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
> > > +     virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> > > +
> > > +     /* Add vCPU with IPIs waking up halter vCPU */
> > > +     sender_vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
> > > +
> > > +     guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
> > > +     data = addr_gva2hva(vm, guest_stats_page_vaddr);
> > > +     memset(data, 0, sizeof(*data));
> > > +
> > > +     vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
> > > +     vcpu_args_set(sender_vcpu, 1, guest_stats_page_vaddr);
> > > +
> > > +     /* Start halter vCPU thread and wait for it to execute first HLT. */
> > > +     r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
> > > +     TEST_ASSERT(r == 0,
> > > +             "pthread_create halter failed errno=%d", errno);
> > > +     fprintf(stderr, "Halter vCPU thread started\n");
> > > +
> > > +     wait_secs = 0;
> > > +     while ((wait_secs < first_halter_wait) && !data->hlt_count) {
> > > +             sleep(1);
> > > +             wait_secs++;
> > > +     }
> > > +     TEST_ASSERT(data->hlt_count,
> > > +             "Halter vCPU did not execute first HLT within %d seconds",
> > > +             first_halter_wait);
> > > +     fprintf(stderr,
> > > +             "Halter vCPU thread reported its first HLT executed "
> > > +             "after %d seconds.\n",
> > > +             wait_secs);
> > > +
> > > +     /*
> > > +      * After guest halter vCPU executed first HLT, start the sender
> > > +      * vCPU thread to wakeup halter vCPU
> > > +      */
> > > +     r = pthread_create(&threads[1], NULL, vcpu_thread, sender_vcpu);
> > > +     TEST_ASSERT(r == 0, "pthread_create sender failed errno=%d",
> > > + errno);
> > > +
> > > +     while (data->hlt_count < COUNT_HLT_EXITS);
> > > +
> > > +     cancel_join_vcpu_thread(threads[0], halter_vcpu);
> > > +     cancel_join_vcpu_thread(threads[1], sender_vcpu);
> > > +
> > > +     kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
> > > +     TEST_ASSERT(kvm_halt_exits == data->hlt_count,
> > > +             "Halter vCPU had unmatched %lu halt exits - %lu HLTs "
> > > +             "executed, when not disabling VM halt exits\n",
> > > +             kvm_halt_exits, data->hlt_count);
> > > +     fprintf(stderr, "Halter vCPU had %lu halt exits\n",
> > > +             kvm_halt_exits);
> > > +     fprintf(stderr, "Guest records %lu HLTs executed, "
> > > +             "waked %lu times\n",
> > > +             data->hlt_count, data->wake_count);
> > > +
> > > +     kvm_vm_free(vm);
> > > +}
> > > +
> > > +/*
> > > + * Test case 2:
> > > + * VM scoped exits disabling, HLT instructions
> > > + * stay inside guest without exits
> > > + */
> > > +static void test_vm_disable_exits_cap(void) {
> > > +     int r;
> > > +     uint64_t kvm_halt_exits;
> > > +     struct kvm_vm *vm;
> > > +     struct kvm_vcpu *halter_vcpu;
> > > +     struct guest_stats *data;
> > > +     vm_vaddr_t guest_stats_page_vaddr;
> > > +     pthread_t halter_thread;
> > > +
> > > +     /* Create VM */
> > > +     vm = vm_create(1);
> > > +
> > > +     /*
> > > +      * Before adding any vCPUs, enable the KVM_X86_DISABLE_EXITS
> cap
> > > +      * with flag KVM_X86_DISABLE_EXITS_HLT
> > > +      */
> > > +     vm_enable_cap(vm, KVM_CAP_X86_DISABLE_EXITS,
> > > +             KVM_X86_DISABLE_EXITS_HLT);
> > > +
> > > +     /* Add vCPU with loops halting */
> > > +     halter_vcpu = vm_vcpu_add(vm, 0, halter_waiting_guest_code);
> > > +
> > > +     vm_init_descriptor_tables(vm);
> > > +     vcpu_init_descriptor_tables(halter_vcpu);
> > > +     vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
> > > +     virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> > > +
> > > +     guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
> > > +     data = addr_gva2hva(vm, guest_stats_page_vaddr);
> > > +     memset(data, 0, sizeof(*data));
> > > +     vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
> > > +
> > > +     /* Start halter vCPU thread and execute HLTs immediately */
> > > +     r = pthread_create(&halter_thread, NULL, vcpu_thread,
> halter_vcpu);
> > > +     TEST_ASSERT(r == 0,
> > > +             "pthread_create halter failed errno=%d", errno);
> > > +     fprintf(stderr, "Halter vCPU thread started\n");
> > > +
> > > +     while (data->hlt_count < COUNT_HLT_EXITS);
> > > +
> > > +     cancel_join_vcpu_thread(halter_thread, halter_vcpu);
> > > +
> > > +     kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
> > > +     TEST_ASSERT(kvm_halt_exits == 0,
> > > +             "Halter vCPU had unexpected halt exits occuring after "
> > > +             "disabling VM-scoped halt exits cap\n");
> > > +     fprintf(stderr, "Halter vCPU had %lu HLT exits\n",
> > > +             kvm_halt_exits);
> > > +     fprintf(stderr, "Guest records %lu HLTs executed\n",
> > > +             data->hlt_count);
> > > +
> > > +     kvm_vm_free(vm);
> > > +}
> > > +
> > > +/*
> > > + * Test case 3:
> > > + * VM overrides exits disable flags after vCPU created,
> > > + * which is not allowed
> > > + */
> > > +static void test_vm_disable_exits_cap_with_vcpu_created(void)
> > > +{
> > > +     int r;
> > > +     struct kvm_vm *vm;
> > > +     struct kvm_enable_cap cap = {
> > > +             .cap = KVM_CAP_X86_DISABLE_EXITS,
> > > +             .args[0] = KVM_X86_DISABLE_EXITS_HLT |
> > KVM_X86_DISABLE_EXITS_OVERRIDE,
> > > +     };
> > > +
> > > +     /* Create VM */
> > > +     vm = vm_create(1);
> > > +     /* Add vCPU with loops halting */
> > > +     vm_vcpu_add(vm, 0, halter_waiting_guest_code);
> > > +
> > > +     /*
> > > +      * After creating vCPU, the current VM-scoped ABI should
> > > +      * discard the cap enable of KVM_CAP_X86_DISABLE_EXITS
> > > +      * and return non-zero. Since vm_enabled_cap() not able
> > > +      * to assert the return value, so use the __vm_ioctl()
> > > +      */
> > > +     r = __vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
> > > +
> > > +     TEST_ASSERT(r != 0,
> > > +             "Setting VM-scoped KVM_CAP_X86_DISABLE_EXITS after "
> > > +             "vCPUs created is not allowed, but it succeeds
> > > +here\n"); }
> > > +
> > > +/*
> > > + * Test case 4:
> > > + * vCPU scoped halt exits disabling and enabling tests,
> > > + * verify overides are working after vCPU created  */ static void
> > > +test_vcpu_toggling_disable_exits_cap(void)
> > > +{
> > > +     int r;
> > > +     uint64_t kvm_halt_exits;
> > > +     struct kvm_vm *vm;
> > > +     struct kvm_vcpu *halter_vcpu;
> > > +     struct kvm_vcpu *sender_vcpu;
> > > +     struct guest_stats *data;
> > > +     vm_vaddr_t guest_stats_page_vaddr;
> > > +     pthread_t threads[2];
> > > +
> > > +     /* Create VM */
> > > +     vm = vm_create(2);
> > > +
> > > +     /* Add vCPU with loops halting */
> > > +     halter_vcpu = vm_vcpu_add(vm, 0, halter_waiting_guest_code);
> > > +     /* Set KVM_CAP_X86_DISABLE_EXITS_HLT for halter vCPU */
> > > +     vcpu_enable_cap(halter_vcpu, KVM_CAP_X86_DISABLE_EXITS,
> > > +             KVM_X86_DISABLE_EXITS_HLT |
> > > + KVM_X86_DISABLE_EXITS_OVERRIDE);
> > > +
> > > +     vm_init_descriptor_tables(vm);
> > > +     vcpu_init_descriptor_tables(halter_vcpu);
> > > +     vm_install_exception_handler(vm, IPI_VECTOR,
> > > + guest_ipi_handler);
> > > +
> > > +     virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> > > +
> > > +     /* Add vCPU with IPIs waking up halter vCPU */
> > > +     sender_vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
> > > +
> > > +     guest_stats_page_vaddr = vm_vaddr_alloc_page(vm);
> > > +     data = addr_gva2hva(vm, guest_stats_page_vaddr);
> > > +     memset(data, 0, sizeof(*data));
> > > +
> > > +     vcpu_args_set(halter_vcpu, 1, guest_stats_page_vaddr);
> > > +     vcpu_args_set(sender_vcpu, 1, guest_stats_page_vaddr);
> > > +
> > > +     r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
> > > +     TEST_ASSERT(r == 0,
> > > +             "pthread_create halter failed errno=%d", errno);
> > > +     fprintf(stderr, "Halter vCPU thread started with halt exits"
> > > +             "disabled\n");
> > > +
> > > +     /*
> > > +      * For the first phase of the running, halt exits
> > > +      * are disabled, halter vCPU executes HLT instruction
> > > +      * but never exits to host
> > > +      */
> > > +     while (data->hlt_count < (COUNT_HLT_EXITS / 2));
> > > +
> > > +     cancel_join_vcpu_thread(threads[0], halter_vcpu);
> > > +     /*
> > > +      * Override and clean KVM_CAP_X86_DISABLE_EXITS flags
> > > +      * for halter vCPU. Expect to see halt exits occurs then.
> > > +      */
> > > +     vcpu_enable_cap(halter_vcpu, KVM_CAP_X86_DISABLE_EXITS,
> > > +             KVM_X86_DISABLE_EXITS_OVERRIDE);
> > > +
> > > +     r = pthread_create(&threads[0], NULL, vcpu_thread, halter_vcpu);
> > > +     TEST_ASSERT(r == 0,
> > > +             "pthread_create halter failed errno=%d", errno);
> > > +     fprintf(stderr, "Halter vCPU thread restarted and cleared "
> > > +             "halt exits flag\n");
> > > +
> > > +     sleep(1);
> > > +     /*
> > > +      * Second phase of the test, after guest halter vCPU
> > > +      * reenabled halt exits, start the sender
> > > +      * vCPU thread to wakeup halter vCPU
> > > +      */
> > > +     r = pthread_create(&threads[1], NULL, vcpu_thread, sender_vcpu);
> > > +     TEST_ASSERT(r == 0, "pthread_create sender failed errno=%d",
> > > + errno);
> > > +
> > > +     while (data->hlt_count < COUNT_HLT_EXITS);
> > > +
> > > +     cancel_join_vcpu_thread(threads[0], halter_vcpu);
> > > +     cancel_join_vcpu_thread(threads[1], sender_vcpu);
> > > +
> > > +     kvm_halt_exits = read_vcpu_stats_halt_exits(halter_vcpu);
> > > +     TEST_ASSERT(kvm_halt_exits == (COUNT_HLT_EXITS / 2),
> > > +             "Halter vCPU had unexpected %lu halt exits, "
> > > +             "there should be %d halt exits while "
> > > +             "not disabling VM halt exits\n",
> > > +             kvm_halt_exits, COUNT_HLT_EXITS / 2);
> > > +     fprintf(stderr, "Halter vCPU had %lu halt exits\n",
> > > +             kvm_halt_exits);
> > > +     fprintf(stderr, "Guest records %lu HLTs executed, "
> > > +             "waked %lu times\n",
> > > +             data->hlt_count, data->wake_count);
> > > +
> > > +     kvm_vm_free(vm);
> > > +}
> > > +
> > > +int main(int argc, char *argv[])
> > > +{
> > > +     fprintf(stderr, "VM-scoped tests start\n");
> > > +     test_vm_without_disable_exits_cap();
> > > +     test_vm_disable_exits_cap();
> > > +     test_vm_disable_exits_cap_with_vcpu_created();
> > > +     fprintf(stderr, "vCPU-scoped test starts\n");
> > > +     test_vcpu_toggling_disable_exits_cap();
> > > +     return 0;
> > > +}


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2023-01-18 20:43 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-13 22:01 [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Kechen Lu
2023-01-13 22:01 ` [RFC PATCH v5 1/6] KVM: x86: only allow exits disable before vCPUs created Kechen Lu
2023-01-13 22:01 ` [RFC PATCH v5 2/6] KVM: x86: Move *_in_guest power management flags to vCPU scope Kechen Lu
2023-01-13 22:01 ` [RFC PATCH v5 3/6] KVM: x86: Reject disabling of MWAIT interception when not allowed Kechen Lu
2023-01-13 22:01 ` [RFC PATCH v5 4/6] KVM: x86: Let userspace re-enable previously disabled exits Kechen Lu
2023-01-13 22:01 ` [RFC PATCH v5 5/6] KVM: x86: add vCPU scoped toggling for " Kechen Lu
2023-01-13 22:01 ` [RFC PATCH v5 6/6] KVM: selftests: Add tests for VM and vCPU cap KVM_CAP_X86_DISABLE_EXITS Kechen Lu
2023-01-18 20:03   ` Zhi Wang
2023-01-18 20:26     ` Kechen Lu
2023-01-18 20:43       ` Kechen Lu
2023-01-18  8:30 ` [RFC PATCH v5 0/6] KVM: x86: add per-vCPU exits disable capability Zhi Wang
2023-01-18 13:32   ` Zhi Wang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).