* [PATCH 0/8] SVM fixes + refactoring
@ 2022-03-22 17:24 Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called Maxim Levitsky
                   ` (7 more replies)
  0 siblings, 8 replies; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky

These are a few bug fixes from my patch queue,
rebased against the current kvm/queue.

Best regards,
	Maxim Levitsky

Maxim Levitsky (8):
  KVM: x86: avoid loading a vCPU after .vm_destroy was called
  KVM: x86: SVM: use vmcb01 in avic_init_vmcb and init_vmcb
  kvm: x86: SVM: use vmcb* instead of svm->vmcb where it makes sense
  KVM: x86: SVM: fix avic spec based definitions again
  KVM: x86: SVM: move tsc ratio definitions to svm.h
  kvm: x86: SVM: remove unused defines
  KVM: x86: SVM: fix tsc scaling when the host doesn't support it
  KVM: x86: SVM: remove vgif_enabled()

 arch/x86/include/asm/svm.h |  14 ++-
 arch/x86/kvm/svm/avic.c    |   2 +-
 arch/x86/kvm/svm/nested.c  | 179 +++++++++++++++++++------------------
 arch/x86/kvm/svm/svm.c     |  46 ++++------
 arch/x86/kvm/svm/svm.h     |  23 +----
 arch/x86/kvm/vmx/vmx.c     |   7 +-
 arch/x86/kvm/x86.c         |  14 +--
 7 files changed, 134 insertions(+), 151 deletions(-)

-- 
2.26.3




* [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called
  2022-03-22 17:24 [PATCH 0/8] SVM fixes + refactoring Maxim Levitsky
@ 2022-03-22 17:24 ` Maxim Levitsky
  2022-03-30  0:27   ` Sean Christopherson
  2022-03-22 17:24 ` [PATCH 2/8] KVM: x86: SVM: use vmcb01 in avic_init_vmcb and init_vmcb Maxim Levitsky
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky, stable

Loading a vCPU after .vm_destroy has been called can cause various
unexpected issues, since the VM is partially destroyed at that point.

For example, when AVIC is enabled, this allows avic_vcpu_load() to
access an entry in the AVIC physical ID table that was already freed
by .vm_destroy.
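
For reference, a condensed sketch of the reordering in kvm_arch_destroy_vm()
(function names as in this patch; unrelated teardown steps omitted):

	/* Before: .vm_destroy ran first, then each vCPU was loaded once more. */
	static_call_cond(kvm_x86_vm_destroy)(kvm); /* frees AVIC physical ID table */
	kvm_free_vcpus(kvm);   /* vcpu_load() -> avic_vcpu_load() -> use-after-free */

	/* After: vCPU MMUs are unloaded first, vCPUs are destroyed only at the end. */
	kvm_unload_vcpu_mmus(kvm);
	static_call_cond(kvm_x86_vm_destroy)(kvm);
	kvm_destroy_vcpus(kvm);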

Fixes: 8221c1370056 ("svm: Manage vcpu load/unload when enable AVIC")
Cc: stable@vger.kernel.org
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/x86.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d3a9ce07a565..ba920e537ddf 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11759,20 +11759,15 @@ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
 	vcpu_put(vcpu);
 }
 
-static void kvm_free_vcpus(struct kvm *kvm)
+static void kvm_unload_vcpu_mmus(struct kvm *kvm)
 {
 	unsigned long i;
 	struct kvm_vcpu *vcpu;
 
-	/*
-	 * Unpin any mmu pages first.
-	 */
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		kvm_clear_async_pf_completion_queue(vcpu);
 		kvm_unload_vcpu_mmu(vcpu);
 	}
-
-	kvm_destroy_vcpus(kvm);
 }
 
 void kvm_arch_sync_events(struct kvm *kvm)
@@ -11878,11 +11873,12 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
 		mutex_unlock(&kvm->slots_lock);
 	}
+	kvm_unload_vcpu_mmus(kvm);
 	static_call_cond(kvm_x86_vm_destroy)(kvm);
 	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
 	kvm_pic_destroy(kvm);
 	kvm_ioapic_destroy(kvm);
-	kvm_free_vcpus(kvm);
+	kvm_destroy_vcpus(kvm);
 	kvfree(rcu_dereference_check(kvm->arch.apic_map, 1));
 	kfree(srcu_dereference_check(kvm->arch.pmu_event_filter, &kvm->srcu, 1));
 	kvm_mmu_uninit_vm(kvm);
-- 
2.26.3



* [PATCH 2/8] KVM: x86: SVM: use vmcb01 in avic_init_vmcb and init_vmcb
  2022-03-22 17:24 [PATCH 0/8] SVM fixes + refactoring Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called Maxim Levitsky
@ 2022-03-22 17:24 ` Maxim Levitsky
  2022-03-24 18:12   ` Paolo Bonzini
  2022-03-22 17:24 ` [PATCH 3/8] kvm: x86: SVM: use vmcb* instead of svm->vmcb where it makes sense Maxim Levitsky
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky

No functional change intended.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/avic.c | 2 +-
 arch/x86/kvm/svm/svm.c  | 9 +++++----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index c5ef4715f3e0..b39fe614467a 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -167,7 +167,7 @@ int avic_vm_init(struct kvm *kvm)
 
 void avic_init_vmcb(struct vcpu_svm *svm)
 {
-	struct vmcb *vmcb = svm->vmcb;
+	struct vmcb *vmcb = svm->vmcb01.ptr;
 	struct kvm_svm *kvm_svm = to_kvm_svm(svm->vcpu.kvm);
 	phys_addr_t bpa = __sme_set(page_to_phys(svm->avic_backing_page));
 	phys_addr_t lpa = __sme_set(page_to_phys(kvm_svm->avic_logical_id_table_page));
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 6535adee3e9c..e9a5c1e80889 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -997,8 +997,9 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
 static void init_vmcb(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	struct vmcb_control_area *control = &svm->vmcb->control;
-	struct vmcb_save_area *save = &svm->vmcb->save;
+	struct vmcb *vmcb = svm->vmcb01.ptr;
+	struct vmcb_control_area *control = &vmcb->control;
+	struct vmcb_save_area *save = &vmcb->save;
 
 	svm_set_intercept(svm, INTERCEPT_CR0_READ);
 	svm_set_intercept(svm, INTERCEPT_CR3_READ);
@@ -1140,10 +1141,10 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
 		}
 	}
 
-	svm_hv_init_vmcb(svm->vmcb);
+	svm_hv_init_vmcb(vmcb);
 	init_vmcb_after_set_cpuid(vcpu);
 
-	vmcb_mark_all_dirty(svm->vmcb);
+	vmcb_mark_all_dirty(vmcb);
 
 	enable_gif(svm);
 }
-- 
2.26.3



* [PATCH 3/8] kvm: x86: SVM: use vmcb* instead of svm->vmcb where it makes sense
  2022-03-22 17:24 [PATCH 0/8] SVM fixes + refactoring Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 2/8] KVM: x86: SVM: use vmcb01 in avic_init_vmcb and init_vmcb Maxim Levitsky
@ 2022-03-22 17:24 ` Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 4/8] KVM: x86: SVM: fix avic spec based definitions again Maxim Levitsky
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky

This makes the code a bit shorter and cleaner.

No functional change intended.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 179 ++++++++++++++++++++------------------
 1 file changed, 94 insertions(+), 85 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index d736ec6514ca..1c381c6a7b51 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -36,40 +36,43 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
 				       struct x86_exception *fault)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+	struct vmcb *vmcb = svm->vmcb;
 
-	if (svm->vmcb->control.exit_code != SVM_EXIT_NPF) {
+	if (vmcb->control.exit_code != SVM_EXIT_NPF) {
 		/*
 		 * TODO: track the cause of the nested page fault, and
 		 * correctly fill in the high bits of exit_info_1.
 		 */
-		svm->vmcb->control.exit_code = SVM_EXIT_NPF;
-		svm->vmcb->control.exit_code_hi = 0;
-		svm->vmcb->control.exit_info_1 = (1ULL << 32);
-		svm->vmcb->control.exit_info_2 = fault->address;
+		vmcb->control.exit_code = SVM_EXIT_NPF;
+		vmcb->control.exit_code_hi = 0;
+		vmcb->control.exit_info_1 = (1ULL << 32);
+		vmcb->control.exit_info_2 = fault->address;
 	}
 
-	svm->vmcb->control.exit_info_1 &= ~0xffffffffULL;
-	svm->vmcb->control.exit_info_1 |= fault->error_code;
+	vmcb->control.exit_info_1 &= ~0xffffffffULL;
+	vmcb->control.exit_info_1 |= fault->error_code;
 
 	nested_svm_vmexit(svm);
 }
 
 static void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu, struct x86_exception *fault)
 {
-       struct vcpu_svm *svm = to_svm(vcpu);
-       WARN_ON(!is_guest_mode(vcpu));
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct vmcb *vmcb = svm->vmcb;
+
+	WARN_ON(!is_guest_mode(vcpu));
 
 	if (vmcb12_is_intercept(&svm->nested.ctl,
 				INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR) &&
-	    !svm->nested.nested_run_pending) {
-               svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + PF_VECTOR;
-               svm->vmcb->control.exit_code_hi = 0;
-               svm->vmcb->control.exit_info_1 = fault->error_code;
-               svm->vmcb->control.exit_info_2 = fault->address;
-               nested_svm_vmexit(svm);
-       } else {
-               kvm_inject_page_fault(vcpu, fault);
-       }
+				!svm->nested.nested_run_pending) {
+		vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + PF_VECTOR;
+		vmcb->control.exit_code_hi = 0;
+		vmcb->control.exit_info_1 = fault->error_code;
+		vmcb->control.exit_info_2 = fault->address;
+		nested_svm_vmexit(svm);
+	} else {
+		kvm_inject_page_fault(vcpu, fault);
+	}
 }
 
 static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
@@ -533,6 +536,7 @@ void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
 static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
 {
 	bool new_vmcb12 = false;
+	struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
 
 	nested_vmcb02_compute_g_pat(svm);
 
@@ -544,18 +548,18 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
 	}
 
 	if (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_SEG))) {
-		svm->vmcb->save.es = vmcb12->save.es;
-		svm->vmcb->save.cs = vmcb12->save.cs;
-		svm->vmcb->save.ss = vmcb12->save.ss;
-		svm->vmcb->save.ds = vmcb12->save.ds;
-		svm->vmcb->save.cpl = vmcb12->save.cpl;
-		vmcb_mark_dirty(svm->vmcb, VMCB_SEG);
+		vmcb02->save.es = vmcb12->save.es;
+		vmcb02->save.cs = vmcb12->save.cs;
+		vmcb02->save.ss = vmcb12->save.ss;
+		vmcb02->save.ds = vmcb12->save.ds;
+		vmcb02->save.cpl = vmcb12->save.cpl;
+		vmcb_mark_dirty(vmcb02, VMCB_SEG);
 	}
 
 	if (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_DT))) {
-		svm->vmcb->save.gdtr = vmcb12->save.gdtr;
-		svm->vmcb->save.idtr = vmcb12->save.idtr;
-		vmcb_mark_dirty(svm->vmcb, VMCB_DT);
+		vmcb02->save.gdtr = vmcb12->save.gdtr;
+		vmcb02->save.idtr = vmcb12->save.idtr;
+		vmcb_mark_dirty(vmcb02, VMCB_DT);
 	}
 
 	kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);
@@ -572,15 +576,15 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
 	kvm_rip_write(&svm->vcpu, vmcb12->save.rip);
 
 	/* In case we don't even reach vcpu_run, the fields are not updated */
-	svm->vmcb->save.rax = vmcb12->save.rax;
-	svm->vmcb->save.rsp = vmcb12->save.rsp;
-	svm->vmcb->save.rip = vmcb12->save.rip;
+	vmcb02->save.rax = vmcb12->save.rax;
+	vmcb02->save.rsp = vmcb12->save.rsp;
+	vmcb02->save.rip = vmcb12->save.rip;
 
 	/* These bits will be set properly on the first execution when new_vmc12 is true */
 	if (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_DR))) {
-		svm->vmcb->save.dr7 = svm->nested.save.dr7 | DR7_FIXED_1;
+		vmcb02->save.dr7 = svm->nested.save.dr7 | DR7_FIXED_1;
 		svm->vcpu.arch.dr6  = svm->nested.save.dr6 | DR6_ACTIVE_LOW;
-		vmcb_mark_dirty(svm->vmcb, VMCB_DR);
+		vmcb_mark_dirty(vmcb02, VMCB_DR);
 	}
 }
 
@@ -592,6 +596,8 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 	const u32 int_ctl_vmcb12_bits = V_TPR_MASK | V_IRQ_INJECTION_BITS_MASK;
 
 	struct kvm_vcpu *vcpu = &svm->vcpu;
+	struct vmcb *vmcb01 = svm->vmcb01.ptr;
+	struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
 
 	/*
 	 * Filled at exit: exit_code, exit_code_hi, exit_info_1, exit_info_2,
@@ -605,14 +611,14 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 	WARN_ON(kvm_apicv_activated(svm->vcpu.kvm));
 
 	/* Copied from vmcb01.  msrpm_base can be overwritten later.  */
-	svm->vmcb->control.nested_ctl = svm->vmcb01.ptr->control.nested_ctl;
-	svm->vmcb->control.iopm_base_pa = svm->vmcb01.ptr->control.iopm_base_pa;
-	svm->vmcb->control.msrpm_base_pa = svm->vmcb01.ptr->control.msrpm_base_pa;
+	vmcb02->control.nested_ctl = vmcb01->control.nested_ctl;
+	vmcb02->control.iopm_base_pa = vmcb01->control.iopm_base_pa;
+	vmcb02->control.msrpm_base_pa = vmcb01->control.msrpm_base_pa;
 
 	/* Done at vmrun: asid.  */
 
 	/* Also overwritten later if necessary.  */
-	svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
+	vmcb02->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
 
 	/* nested_cr3.  */
 	if (nested_npt_enabled(svm))
@@ -623,24 +629,24 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 			svm->nested.ctl.tsc_offset,
 			svm->tsc_ratio_msr);
 
-	svm->vmcb->control.tsc_offset = vcpu->arch.tsc_offset;
+	vmcb02->control.tsc_offset = vcpu->arch.tsc_offset;
 
 	if (svm->tsc_ratio_msr != kvm_default_tsc_scaling_ratio) {
 		WARN_ON(!svm->tsc_scaling_enabled);
 		nested_svm_update_tsc_ratio_msr(vcpu);
 	}
 
-	svm->vmcb->control.int_ctl             =
+	vmcb02->control.int_ctl             =
 		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
-		(svm->vmcb01.ptr->control.int_ctl & int_ctl_vmcb01_bits);
+		(vmcb01->control.int_ctl & int_ctl_vmcb01_bits);
 
-	svm->vmcb->control.int_vector          = svm->nested.ctl.int_vector;
-	svm->vmcb->control.int_state           = svm->nested.ctl.int_state;
-	svm->vmcb->control.event_inj           = svm->nested.ctl.event_inj;
-	svm->vmcb->control.event_inj_err       = svm->nested.ctl.event_inj_err;
+	vmcb02->control.int_vector          = svm->nested.ctl.int_vector;
+	vmcb02->control.int_state           = svm->nested.ctl.int_state;
+	vmcb02->control.event_inj           = svm->nested.ctl.event_inj;
+	vmcb02->control.event_inj_err       = svm->nested.ctl.event_inj_err;
 
 	if (!nested_vmcb_needs_vls_intercept(svm))
-		svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+		vmcb02->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
 
 	nested_svm_transition_tlb_flush(vcpu);
 
@@ -719,6 +725,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	struct vmcb *vmcb12;
 	struct kvm_host_map map;
 	u64 vmcb12_gpa;
+	struct vmcb *vmcb01 = svm->vmcb01.ptr;
 
 	if (!svm->nested.hsave_msr) {
 		kvm_inject_gp(vcpu, 0);
@@ -762,14 +769,14 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	 * Since vmcb01 is not in use, we can use it to store some of the L1
 	 * state.
 	 */
-	svm->vmcb01.ptr->save.efer   = vcpu->arch.efer;
-	svm->vmcb01.ptr->save.cr0    = kvm_read_cr0(vcpu);
-	svm->vmcb01.ptr->save.cr4    = vcpu->arch.cr4;
-	svm->vmcb01.ptr->save.rflags = kvm_get_rflags(vcpu);
-	svm->vmcb01.ptr->save.rip    = kvm_rip_read(vcpu);
+	vmcb01->save.efer   = vcpu->arch.efer;
+	vmcb01->save.cr0    = kvm_read_cr0(vcpu);
+	vmcb01->save.cr4    = vcpu->arch.cr4;
+	vmcb01->save.rflags = kvm_get_rflags(vcpu);
+	vmcb01->save.rip    = kvm_rip_read(vcpu);
 
 	if (!npt_enabled)
-		svm->vmcb01.ptr->save.cr3 = kvm_read_cr3(vcpu);
+		vmcb01->save.cr3 = kvm_read_cr3(vcpu);
 
 	svm->nested.nested_run_pending = 1;
 
@@ -835,8 +842,9 @@ void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
 int nested_svm_vmexit(struct vcpu_svm *svm)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
+	struct vmcb *vmcb01 = svm->vmcb01.ptr;
+	struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
 	struct vmcb *vmcb12;
-	struct vmcb *vmcb = svm->vmcb;
 	struct kvm_host_map map;
 	int rc;
 
@@ -864,36 +872,36 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	/* Give the current vmcb to the guest */
 
-	vmcb12->save.es     = vmcb->save.es;
-	vmcb12->save.cs     = vmcb->save.cs;
-	vmcb12->save.ss     = vmcb->save.ss;
-	vmcb12->save.ds     = vmcb->save.ds;
-	vmcb12->save.gdtr   = vmcb->save.gdtr;
-	vmcb12->save.idtr   = vmcb->save.idtr;
+	vmcb12->save.es     = vmcb02->save.es;
+	vmcb12->save.cs     = vmcb02->save.cs;
+	vmcb12->save.ss     = vmcb02->save.ss;
+	vmcb12->save.ds     = vmcb02->save.ds;
+	vmcb12->save.gdtr   = vmcb02->save.gdtr;
+	vmcb12->save.idtr   = vmcb02->save.idtr;
 	vmcb12->save.efer   = svm->vcpu.arch.efer;
 	vmcb12->save.cr0    = kvm_read_cr0(vcpu);
 	vmcb12->save.cr3    = kvm_read_cr3(vcpu);
-	vmcb12->save.cr2    = vmcb->save.cr2;
+	vmcb12->save.cr2    = vmcb02->save.cr2;
 	vmcb12->save.cr4    = svm->vcpu.arch.cr4;
 	vmcb12->save.rflags = kvm_get_rflags(vcpu);
 	vmcb12->save.rip    = kvm_rip_read(vcpu);
 	vmcb12->save.rsp    = kvm_rsp_read(vcpu);
 	vmcb12->save.rax    = kvm_rax_read(vcpu);
-	vmcb12->save.dr7    = vmcb->save.dr7;
+	vmcb12->save.dr7    = vmcb02->save.dr7;
 	vmcb12->save.dr6    = svm->vcpu.arch.dr6;
-	vmcb12->save.cpl    = vmcb->save.cpl;
+	vmcb12->save.cpl    = vmcb02->save.cpl;
 
-	vmcb12->control.int_state         = vmcb->control.int_state;
-	vmcb12->control.exit_code         = vmcb->control.exit_code;
-	vmcb12->control.exit_code_hi      = vmcb->control.exit_code_hi;
-	vmcb12->control.exit_info_1       = vmcb->control.exit_info_1;
-	vmcb12->control.exit_info_2       = vmcb->control.exit_info_2;
+	vmcb12->control.int_state         = vmcb02->control.int_state;
+	vmcb12->control.exit_code         = vmcb02->control.exit_code;
+	vmcb12->control.exit_code_hi      = vmcb02->control.exit_code_hi;
+	vmcb12->control.exit_info_1       = vmcb02->control.exit_info_1;
+	vmcb12->control.exit_info_2       = vmcb02->control.exit_info_2;
 
 	if (vmcb12->control.exit_code != SVM_EXIT_ERR)
 		nested_save_pending_event_to_vmcb12(svm, vmcb12);
 
 	if (svm->nrips_enabled)
-		vmcb12->control.next_rip  = vmcb->control.next_rip;
+		vmcb12->control.next_rip  = vmcb02->control.next_rip;
 
 	vmcb12->control.int_ctl           = svm->nested.ctl.int_ctl;
 	vmcb12->control.tlb_ctl           = svm->nested.ctl.tlb_ctl;
@@ -909,12 +917,12 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	 * no event can be injected in L1.
 	 */
 	svm_set_gif(svm, false);
-	svm->vmcb->control.exit_int_info = 0;
+	vmcb01->control.exit_int_info = 0;
 
 	svm->vcpu.arch.tsc_offset = svm->vcpu.arch.l1_tsc_offset;
-	if (svm->vmcb->control.tsc_offset != svm->vcpu.arch.tsc_offset) {
-		svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset;
-		vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
+	if (vmcb01->control.tsc_offset != svm->vcpu.arch.tsc_offset) {
+		vmcb01->control.tsc_offset = svm->vcpu.arch.tsc_offset;
+		vmcb_mark_dirty(vmcb01, VMCB_INTERCEPTS);
 	}
 
 	if (svm->tsc_ratio_msr != kvm_default_tsc_scaling_ratio) {
@@ -928,13 +936,13 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	/*
 	 * Restore processor state that had been saved in vmcb01
 	 */
-	kvm_set_rflags(vcpu, svm->vmcb->save.rflags);
-	svm_set_efer(vcpu, svm->vmcb->save.efer);
-	svm_set_cr0(vcpu, svm->vmcb->save.cr0 | X86_CR0_PE);
-	svm_set_cr4(vcpu, svm->vmcb->save.cr4);
-	kvm_rax_write(vcpu, svm->vmcb->save.rax);
-	kvm_rsp_write(vcpu, svm->vmcb->save.rsp);
-	kvm_rip_write(vcpu, svm->vmcb->save.rip);
+	kvm_set_rflags(vcpu, vmcb01->save.rflags);
+	svm_set_efer(vcpu, vmcb01->save.efer);
+	svm_set_cr0(vcpu, vmcb01->save.cr0 | X86_CR0_PE);
+	svm_set_cr4(vcpu, vmcb01->save.cr4);
+	kvm_rax_write(vcpu, vmcb01->save.rax);
+	kvm_rsp_write(vcpu, vmcb01->save.rsp);
+	kvm_rip_write(vcpu, vmcb01->save.rip);
 
 	svm->vcpu.arch.dr7 = DR7_FIXED_1;
 	kvm_update_dr7(&svm->vcpu);
@@ -952,7 +960,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	nested_svm_uninit_mmu_context(vcpu);
 
-	rc = nested_svm_load_cr3(vcpu, svm->vmcb->save.cr3, false, true);
+	rc = nested_svm_load_cr3(vcpu, vmcb01->save.cr3, false, true);
 	if (rc)
 		return 1;
 
@@ -970,7 +978,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	 * right now so that it an be accounted for before we execute
 	 * L1's next instruction.
 	 */
-	if (unlikely(svm->vmcb->save.rflags & X86_EFLAGS_TF))
+	if (unlikely(vmcb01->save.rflags & X86_EFLAGS_TF))
 		kvm_queue_exception(&(svm->vcpu), DB_VECTOR);
 
 	return 0;
@@ -1183,12 +1191,13 @@ static bool nested_exit_on_exception(struct vcpu_svm *svm)
 static void nested_svm_inject_exception_vmexit(struct vcpu_svm *svm)
 {
 	unsigned int nr = svm->vcpu.arch.exception.nr;
+	struct vmcb *vmcb = svm->vmcb;
 
-	svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + nr;
-	svm->vmcb->control.exit_code_hi = 0;
+	vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + nr;
+	vmcb->control.exit_code_hi = 0;
 
 	if (svm->vcpu.arch.exception.has_error_code)
-		svm->vmcb->control.exit_info_1 = svm->vcpu.arch.exception.error_code;
+		vmcb->control.exit_info_1 = svm->vcpu.arch.exception.error_code;
 
 	/*
 	 * EXITINFO2 is undefined for all exception intercepts other
@@ -1196,11 +1205,11 @@ static void nested_svm_inject_exception_vmexit(struct vcpu_svm *svm)
 	 */
 	if (nr == PF_VECTOR) {
 		if (svm->vcpu.arch.exception.nested_apf)
-			svm->vmcb->control.exit_info_2 = svm->vcpu.arch.apf.nested_apf_token;
+			vmcb->control.exit_info_2 = svm->vcpu.arch.apf.nested_apf_token;
 		else if (svm->vcpu.arch.exception.has_payload)
-			svm->vmcb->control.exit_info_2 = svm->vcpu.arch.exception.payload;
+			vmcb->control.exit_info_2 = svm->vcpu.arch.exception.payload;
 		else
-			svm->vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;
+			vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;
 	} else if (nr == DB_VECTOR) {
 		/* See inject_pending_event.  */
 		kvm_deliver_exception_payload(&svm->vcpu);
-- 
2.26.3



* [PATCH 4/8] KVM: x86: SVM: fix avic spec based definitions again
  2022-03-22 17:24 [PATCH 0/8] SVM fixes + refactoring Maxim Levitsky
                   ` (2 preceding siblings ...)
  2022-03-22 17:24 ` [PATCH 3/8] kvm: x86: SVM: use vmcb* instead of svm->vmcb where it makes sense Maxim Levitsky
@ 2022-03-22 17:24 ` Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 5/8] KVM: x86: SVM: move tsc ratio definitions to svm.h Maxim Levitsky
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky

Due to a wrong rebase, commit
4a204f7895878 ("KVM: SVM: Allow AVIC support on system w/ physical APIC ID > 255")
moved the AVIC spec-based #defines out of arch/x86/include/asm/svm.h
and into KVM's private svm.h.

Move them back, and while at it extend AVIC_DOORBELL_PHYSICAL_ID_MASK to 12
bits as well (it will be used by nested AVIC).
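
As a minimal illustration of the widened mask, a doorbell write would now
accept a 12-bit physical APIC ID (MSR and helper names as used by avic.c;
the explicit masking here is only for illustration):

	/* GENMASK_ULL(11, 0) == 0xFFFULL, i.e. physical APIC IDs up to
	 * 4095 instead of the previous 255. */
	wrmsrl(MSR_AMD64_SVM_AVIC_DOORBELL,
	       kvm_cpu_get_apicid(cpu) & AVIC_DOORBELL_PHYSICAL_ID_MASK);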

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/include/asm/svm.h |  8 +++++---
 arch/x86/kvm/svm/svm.h     | 11 -----------
 2 files changed, 5 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 7eb2df5417fb..ab572d8def2b 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -222,7 +222,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 
 
 /* AVIC */
-#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK	(0xFF)
+#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK	(0xFFULL)
 #define AVIC_LOGICAL_ID_ENTRY_VALID_BIT			31
 #define AVIC_LOGICAL_ID_ENTRY_VALID_MASK		(1 << 31)
 
@@ -230,9 +230,11 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK	(0xFFFFFFFFFFULL << 12)
 #define AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK		(1ULL << 62)
 #define AVIC_PHYSICAL_ID_ENTRY_VALID_MASK		(1ULL << 63)
-#define AVIC_PHYSICAL_ID_TABLE_SIZE_MASK		(0xFF)
+#define AVIC_PHYSICAL_ID_TABLE_SIZE_MASK		(0xFFULL)
 
-#define AVIC_DOORBELL_PHYSICAL_ID_MASK			(0xFF)
+#define AVIC_DOORBELL_PHYSICAL_ID_MASK			GENMASK_ULL(11, 0)
+
+#define VMCB_AVIC_APIC_BAR_MASK				0xFFFFFFFFFF000ULL
 
 #define AVIC_UNACCEL_ACCESS_WRITE_MASK		1
 #define AVIC_UNACCEL_ACCESS_OFFSET_MASK		0xFF0
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index d07a5b88ea96..468f149556dd 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -577,17 +577,6 @@ extern struct kvm_x86_nested_ops svm_nested_ops;
 
 /* avic.c */
 
-#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK	(0xFF)
-#define AVIC_LOGICAL_ID_ENTRY_VALID_BIT			31
-#define AVIC_LOGICAL_ID_ENTRY_VALID_MASK		(1 << 31)
-
-#define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK	GENMASK_ULL(11, 0)
-#define AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK	(0xFFFFFFFFFFULL << 12)
-#define AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK		(1ULL << 62)
-#define AVIC_PHYSICAL_ID_ENTRY_VALID_MASK		(1ULL << 63)
-
-#define VMCB_AVIC_APIC_BAR_MASK		0xFFFFFFFFFF000ULL
-
 int avic_ga_log_notifier(u32 ga_tag);
 void avic_vm_destroy(struct kvm *kvm);
 int avic_vm_init(struct kvm *kvm);
-- 
2.26.3



* [PATCH 5/8] KVM: x86: SVM: move tsc ratio definitions to svm.h
  2022-03-22 17:24 [PATCH 0/8] SVM fixes + refactoring Maxim Levitsky
                   ` (3 preceding siblings ...)
  2022-03-22 17:24 ` [PATCH 4/8] KVM: x86: SVM: fix avic spec based definitions again Maxim Levitsky
@ 2022-03-22 17:24 ` Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 6/8] kvm: x86: SVM: remove unused defines Maxim Levitsky
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky

These definitions are another piece of the SVM spec that belongs in the
architectural header file.
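
For context, MSR_AMD64_TSC_RATIO is an 8.32 fixed-point value (bits 39:32
integer part, bits 31:0 fraction), which is why SVM_TSC_RATIO_DEFAULT
(0x0100000000) encodes 1.0 and SVM_TSC_RATIO_RSVD covers bits 63:40.
A sketch of how such a ratio is derived (the helper name is made up):

	static u64 example_svm_tsc_ratio(u64 guest_tsc_khz, u64 host_tsc_khz)
	{
		/* e.g. a 2000000 kHz guest on a 1000000 kHz host
		 * yields 0x0200000000, i.e. a ratio of 2.0 */
		return (guest_tsc_khz << 32) / host_tsc_khz;
	}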

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/include/asm/svm.h |  6 ++++++
 arch/x86/kvm/svm/svm.c     | 15 +++++----------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index ab572d8def2b..f70a5108d464 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -221,6 +221,12 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_NESTED_CTL_SEV_ES_ENABLE	BIT(2)
 
 
+#define SVM_TSC_RATIO_RSVD	0xffffff0000000000ULL
+#define SVM_TSC_RATIO_MIN	0x0000000000000001ULL
+#define SVM_TSC_RATIO_MAX	0x000000ffffffffffULL
+#define SVM_TSC_RATIO_DEFAULT	0x0100000000ULL
+
+
 /* AVIC */
 #define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK	(0xFFULL)
 #define AVIC_LOGICAL_ID_ENTRY_VALID_BIT			31
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e9a5c1e80889..ea3f0b2605e5 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -72,10 +72,6 @@ MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);
 
 #define DEBUGCTL_RESERVED_BITS (~(0x3fULL))
 
-#define TSC_RATIO_RSVD          0xffffff0000000000ULL
-#define TSC_RATIO_MIN		0x0000000000000001ULL
-#define TSC_RATIO_MAX		0x000000ffffffffffULL
-
 static bool erratum_383_found __read_mostly;
 
 u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
@@ -87,7 +83,6 @@ u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
 static uint64_t osvw_len = 4, osvw_status;
 
 static DEFINE_PER_CPU(u64, current_tsc_ratio);
-#define TSC_RATIO_DEFAULT	0x0100000000ULL
 
 static const struct svm_direct_access_msrs {
 	u32 index;   /* Index of the MSR */
@@ -483,7 +478,7 @@ static void svm_hardware_disable(void)
 {
 	/* Make sure we clean up behind us */
 	if (tsc_scaling)
-		wrmsrl(MSR_AMD64_TSC_RATIO, TSC_RATIO_DEFAULT);
+		wrmsrl(MSR_AMD64_TSC_RATIO, SVM_TSC_RATIO_DEFAULT);
 
 	cpu_svm_disable();
 
@@ -529,8 +524,8 @@ static int svm_hardware_enable(void)
 		 * Set the default value, even if we don't use TSC scaling
 		 * to avoid having stale value in the msr
 		 */
-		wrmsrl(MSR_AMD64_TSC_RATIO, TSC_RATIO_DEFAULT);
-		__this_cpu_write(current_tsc_ratio, TSC_RATIO_DEFAULT);
+		wrmsrl(MSR_AMD64_TSC_RATIO, SVM_TSC_RATIO_DEFAULT);
+		__this_cpu_write(current_tsc_ratio, SVM_TSC_RATIO_DEFAULT);
 	}
 
 
@@ -2729,7 +2724,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 			break;
 		}
 
-		if (data & TSC_RATIO_RSVD)
+		if (data & SVM_TSC_RATIO_RSVD)
 			return 1;
 
 		svm->tsc_ratio_msr = data;
@@ -4776,7 +4771,7 @@ static __init int svm_hardware_setup(void)
 		} else {
 			pr_info("TSC scaling supported\n");
 			kvm_has_tsc_control = true;
-			kvm_max_tsc_scaling_ratio = TSC_RATIO_MAX;
+			kvm_max_tsc_scaling_ratio = SVM_TSC_RATIO_MAX;
 			kvm_tsc_scaling_ratio_frac_bits = 32;
 		}
 	}
-- 
2.26.3



* [PATCH 6/8] kvm: x86: SVM: remove unused defines
  2022-03-22 17:24 [PATCH 0/8] SVM fixes + refactoring Maxim Levitsky
                   ` (4 preceding siblings ...)
  2022-03-22 17:24 ` [PATCH 5/8] KVM: x86: SVM: move tsc ratio definitions to svm.h Maxim Levitsky
@ 2022-03-22 17:24 ` Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 7/8] KVM: x86: SVM: fix tsc scaling when the host doesn't support it Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 8/8] KVM: x86: SVM: remove vgif_enabled() Maxim Levitsky
  7 siblings, 0 replies; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky

Remove some unused #defines from svm.c.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ea3f0b2605e5..fb31ed01086c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -62,14 +62,6 @@ MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);
 #define SEG_TYPE_LDT 2
 #define SEG_TYPE_BUSY_TSS16 3
 
-#define SVM_FEATURE_LBRV           (1 <<  1)
-#define SVM_FEATURE_SVML           (1 <<  2)
-#define SVM_FEATURE_TSC_RATE       (1 <<  4)
-#define SVM_FEATURE_VMCB_CLEAN     (1 <<  5)
-#define SVM_FEATURE_FLUSH_ASID     (1 <<  6)
-#define SVM_FEATURE_DECODE_ASSIST  (1 <<  7)
-#define SVM_FEATURE_PAUSE_FILTER   (1 << 10)
-
 #define DEBUGCTL_RESERVED_BITS (~(0x3fULL))
 
 static bool erratum_383_found __read_mostly;
-- 
2.26.3



* [PATCH 7/8] KVM: x86: SVM: fix tsc scaling when the host doesn't support it
  2022-03-22 17:24 [PATCH 0/8] SVM fixes + refactoring Maxim Levitsky
                   ` (5 preceding siblings ...)
  2022-03-22 17:24 ` [PATCH 6/8] kvm: x86: SVM: remove unused defines Maxim Levitsky
@ 2022-03-22 17:24 ` Maxim Levitsky
  2022-03-22 17:24 ` [PATCH 8/8] KVM: x86: SVM: remove vgif_enabled() Maxim Levitsky
  7 siblings, 0 replies; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky

It was decided that when TSC scaling is not supported,
the virtual MSR_AMD64_TSC_RATIO should still have the default '1.0'
value.

However, in that case kvm_max_tsc_scaling_ratio is not set,
which breaks various assumptions.

Fix this by always calculating kvm_max_tsc_scaling_ratio
regardless of host support.
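
A condensed view of the resulting invariant (constants from this series;
SVM uses 32 fractional bits):

	kvm_tsc_scaling_ratio_frac_bits = 32;
	kvm_max_tsc_scaling_ratio = SVM_TSC_RATIO_MAX;
	/* Now always set, even without hardware TSC scaling support: */
	kvm_default_tsc_scaling_ratio = 1ULL << kvm_tsc_scaling_ratio_frac_bits; /* 1.0 */

Previously these globals stayed zero on hosts without TSC scaling, so
checks such as "svm->tsc_ratio_msr != kvm_default_tsc_scaling_ratio" in
nested.c compared against the wrong value.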

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 4 ++--
 arch/x86/kvm/vmx/vmx.c | 7 +++----
 arch/x86/kvm/x86.c     | 4 +---
 3 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index fb31ed01086c..acf04cf4ed2a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4763,10 +4763,10 @@ static __init int svm_hardware_setup(void)
 		} else {
 			pr_info("TSC scaling supported\n");
 			kvm_has_tsc_control = true;
-			kvm_max_tsc_scaling_ratio = SVM_TSC_RATIO_MAX;
-			kvm_tsc_scaling_ratio_frac_bits = 32;
 		}
 	}
+	kvm_max_tsc_scaling_ratio = SVM_TSC_RATIO_MAX;
+	kvm_tsc_scaling_ratio_frac_bits = 32;
 
 	tsc_aux_uret_slot = kvm_add_user_return_msr(MSR_TSC_AUX);
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 84a7500cd80c..e3a311b9ba34 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7980,12 +7980,11 @@ static __init int hardware_setup(void)
 	if (!enable_apicv)
 		vmx_x86_ops.sync_pir_to_irr = NULL;
 
-	if (cpu_has_vmx_tsc_scaling()) {
+	if (cpu_has_vmx_tsc_scaling())
 		kvm_has_tsc_control = true;
-		kvm_max_tsc_scaling_ratio = KVM_VMX_TSC_MULTIPLIER_MAX;
-		kvm_tsc_scaling_ratio_frac_bits = 48;
-	}
 
+	kvm_max_tsc_scaling_ratio = KVM_VMX_TSC_MULTIPLIER_MAX;
+	kvm_tsc_scaling_ratio_frac_bits = 48;
 	kvm_has_bus_lock_exit = cpu_has_vmx_bus_lock_detection();
 
 	set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ba920e537ddf..9c27239f987f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11631,10 +11631,8 @@ int kvm_arch_hardware_setup(void *opaque)
 		u64 max = min(0x7fffffffULL,
 			      __scale_tsc(kvm_max_tsc_scaling_ratio, tsc_khz));
 		kvm_max_guest_tsc_khz = max;
-
-		kvm_default_tsc_scaling_ratio = 1ULL << kvm_tsc_scaling_ratio_frac_bits;
 	}
-
+	kvm_default_tsc_scaling_ratio = 1ULL << kvm_tsc_scaling_ratio_frac_bits;
 	kvm_init_msr_list();
 	return 0;
 }
-- 
2.26.3



* [PATCH 8/8] KVM: x86: SVM: remove vgif_enabled()
  2022-03-22 17:24 [PATCH 0/8] SVM fixes + refactoring Maxim Levitsky
                   ` (6 preceding siblings ...)
  2022-03-22 17:24 ` [PATCH 7/8] KVM: x86: SVM: fix tsc scaling when the host doesn't support it Maxim Levitsky
@ 2022-03-22 17:24 ` Maxim Levitsky
  2022-03-30  0:20   ` Sean Christopherson
  7 siblings, 1 reply; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-22 17:24 UTC
  To: kvm
  Cc: Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Thomas Gleixner, Maxim Levitsky

KVM always uses vGIF when it is allowed, thus there is
no need to query the current VMCB for it.
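
For reference, a sketch of the init_vmcb() logic that makes this
equivalence hold (abridged; the first lines also appear in Paolo's reply
below):

	if (vgif) {
		svm_clr_intercept(svm, INTERCEPT_STGI);
		svm_clr_intercept(svm, INTERCEPT_CLGI);
		svm->vmcb->control.int_ctl |= V_GIF_ENABLE_MASK;
	}

Since V_GIF_ENABLE_MASK is set if and only if the vgif module parameter
is on, checking the parameter is equivalent to querying int_ctl.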

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 12 ++++++------
 arch/x86/kvm/svm/svm.h | 12 ++++--------
 2 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index acf04cf4ed2a..70fc5897f5f2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -172,7 +172,7 @@ static int vls = true;
 module_param(vls, int, 0444);
 
 /* enable/disable Virtual GIF */
-static int vgif = true;
+int vgif = true;
 module_param(vgif, int, 0444);
 
 /* enable/disable LBR virtualization */
@@ -2148,7 +2148,7 @@ void svm_set_gif(struct vcpu_svm *svm, bool value)
 		 * Likewise, clear the VINTR intercept, we will set it
 		 * again while processing KVM_REQ_EVENT if needed.
 		 */
-		if (vgif_enabled(svm))
+		if (vgif)
 			svm_clr_intercept(svm, INTERCEPT_STGI);
 		if (svm_is_intercept(svm, INTERCEPT_VINTR))
 			svm_clear_vintr(svm);
@@ -2166,7 +2166,7 @@ void svm_set_gif(struct vcpu_svm *svm, bool value)
 		 * in use, we still rely on the VINTR intercept (rather than
 		 * STGI) to detect an open interrupt window.
 		*/
-		if (!vgif_enabled(svm))
+		if (!vgif)
 			svm_clear_vintr(svm);
 	}
 }
@@ -3502,7 +3502,7 @@ static void svm_enable_irq_window(struct kvm_vcpu *vcpu)
 	 * enabled, the STGI interception will not occur. Enable the irq
 	 * window under the assumption that the hardware will set the GIF.
 	 */
-	if (vgif_enabled(svm) || gif_set(svm)) {
+	if (vgif || gif_set(svm)) {
 		/*
 		 * IRQ window is not needed when AVIC is enabled,
 		 * unless we have pending ExtINT since it cannot be injected
@@ -3522,7 +3522,7 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
 		return; /* IRET will cause a vm exit */
 
 	if (!gif_set(svm)) {
-		if (vgif_enabled(svm))
+		if (vgif)
 			svm_set_intercept(svm, INTERCEPT_STGI);
 		return; /* STGI will cause a vm exit */
 	}
@@ -4329,7 +4329,7 @@ static void svm_enable_smi_window(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	if (!gif_set(svm)) {
-		if (vgif_enabled(svm))
+		if (vgif)
 			svm_set_intercept(svm, INTERCEPT_STGI);
 		/* STGI will cause a vm exit */
 	} else {
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 468f149556dd..6a10cb4817e8 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -33,6 +33,7 @@
 #define MSRPM_OFFSETS	16
 extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
 extern bool npt_enabled;
+extern int vgif;
 extern bool intercept_smi;
 
 /*
@@ -453,14 +454,9 @@ static inline bool svm_is_intercept(struct vcpu_svm *svm, int bit)
 	return vmcb_is_intercept(&svm->vmcb->control, bit);
 }
 
-static inline bool vgif_enabled(struct vcpu_svm *svm)
-{
-	return !!(svm->vmcb->control.int_ctl & V_GIF_ENABLE_MASK);
-}
-
 static inline void enable_gif(struct vcpu_svm *svm)
 {
-	if (vgif_enabled(svm))
+	if (vgif)
 		svm->vmcb->control.int_ctl |= V_GIF_MASK;
 	else
 		svm->vcpu.arch.hflags |= HF_GIF_MASK;
@@ -468,7 +464,7 @@ static inline void enable_gif(struct vcpu_svm *svm)
 
 static inline void disable_gif(struct vcpu_svm *svm)
 {
-	if (vgif_enabled(svm))
+	if (vgif)
 		svm->vmcb->control.int_ctl &= ~V_GIF_MASK;
 	else
 		svm->vcpu.arch.hflags &= ~HF_GIF_MASK;
@@ -476,7 +472,7 @@ static inline void disable_gif(struct vcpu_svm *svm)
 
 static inline bool gif_set(struct vcpu_svm *svm)
 {
-	if (vgif_enabled(svm))
+	if (vgif)
 		return !!(svm->vmcb->control.int_ctl & V_GIF_MASK);
 	else
 		return !!(svm->vcpu.arch.hflags & HF_GIF_MASK);
-- 
2.26.3



* Re: [PATCH 2/8] KVM: x86: SVM: use vmcb01 in avic_init_vmcb and init_vmcb
  2022-03-22 17:24 ` [PATCH 2/8] KVM: x86: SVM: use vmcb01 in avic_init_vmcb and init_vmcb Maxim Levitsky
@ 2022-03-24 18:12   ` Paolo Bonzini
  2022-03-27 15:15     ` Maxim Levitsky
  0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2022-03-24 18:12 UTC
  To: Maxim Levitsky, kvm
  Cc: Jim Mattson, Wanpeng Li, Borislav Petkov, Joerg Roedel,
	Ingo Molnar, Suravee Suthikulpanit, x86, H. Peter Anvin,
	Dave Hansen, linux-kernel, Sean Christopherson, Vitaly Kuznetsov,
	Thomas Gleixner

On 3/22/22 18:24, Maxim Levitsky wrote:
>   
>   void avic_init_vmcb(struct vcpu_svm *svm)
>   {
> -	struct vmcb *vmcb = svm->vmcb;
> +	struct vmcb *vmcb = svm->vmcb01.ptr;
>   	struct kvm_svm *kvm_svm = to_kvm_svm(svm->vcpu.kvm);
>   	phys_addr_t bpa = __sme_set(page_to_phys(svm->avic_backing_page));
>   	phys_addr_t lpa = __sme_set(page_to_phys(kvm_svm->avic_logical_id_table_page));

Let's do this for consistency with e.g. svm_hv_init_vmcb:

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index b39fe614467a..ab202158137d 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -165,9 +165,8 @@ int avic_vm_init(struct kvm *kvm)
  	return err;
  }
  
-void avic_init_vmcb(struct vcpu_svm *svm)
+void avic_init_vmcb(struct vcpu_svm *svm, struct vmcb *vmcb)
  {
-	struct vmcb *vmcb = svm->vmcb01.ptr;
  	struct kvm_svm *kvm_svm = to_kvm_svm(svm->vcpu.kvm);
  	phys_addr_t bpa = __sme_set(page_to_phys(svm->avic_backing_page));
  	phys_addr_t lpa = __sme_set(page_to_phys(kvm_svm->avic_logical_id_table_page));
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cc02506b7a19..ced8edad0c87 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1123,7 +1123,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
  		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
  
  	if (kvm_vcpu_apicv_active(vcpu))
-		avic_init_vmcb(svm);
+		avic_init_vmcb(svm, vmcb);
  
  	if (vgif) {
  		svm_clr_intercept(svm, INTERCEPT_STGI);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index d07a5b88ea96..bbac6c24a8b8 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -591,7 +591,7 @@ extern struct kvm_x86_nested_ops svm_nested_ops;
  int avic_ga_log_notifier(u32 ga_tag);
  void avic_vm_destroy(struct kvm *kvm);
  int avic_vm_init(struct kvm *kvm);
-void avic_init_vmcb(struct vcpu_svm *svm);
+void avic_init_vmcb(struct vcpu_svm *svm, struct vmcb *vmcb);
  int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu);
  int avic_unaccelerated_access_interception(struct kvm_vcpu *vcpu);
  int avic_init_vcpu(struct vcpu_svm *svm);



* Re: [PATCH 2/8] KVM: x86: SVM: use vmcb01 in avic_init_vmcb and init_vmcb
  2022-03-24 18:12   ` Paolo Bonzini
@ 2022-03-27 15:15     ` Maxim Levitsky
  0 siblings, 0 replies; 16+ messages in thread
From: Maxim Levitsky @ 2022-03-27 15:15 UTC
  To: Paolo Bonzini, kvm
  Cc: Jim Mattson, Wanpeng Li, Borislav Petkov, Joerg Roedel,
	Ingo Molnar, Suravee Suthikulpanit, x86, H. Peter Anvin,
	Dave Hansen, linux-kernel, Sean Christopherson, Vitaly Kuznetsov,
	Thomas Gleixner

On Thu, 2022-03-24 at 19:12 +0100, Paolo Bonzini wrote:
> On 3/22/22 18:24, Maxim Levitsky wrote:
> >   
> >   void avic_init_vmcb(struct vcpu_svm *svm)
> >   {
> > -	struct vmcb *vmcb = svm->vmcb;
> > +	struct vmcb *vmcb = svm->vmcb01.ptr;
> >   	struct kvm_svm *kvm_svm = to_kvm_svm(svm->vcpu.kvm);
> >   	phys_addr_t bpa = __sme_set(page_to_phys(svm->avic_backing_page));
> >   	phys_addr_t lpa = __sme_set(page_to_phys(kvm_svm->avic_logical_id_table_page));
> 
> Let's do this for consistency with e.g. svm_hv_init_vmcb:
> 
> diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
> index b39fe614467a..ab202158137d 100644
> --- a/arch/x86/kvm/svm/avic.c
> +++ b/arch/x86/kvm/svm/avic.c
> @@ -165,9 +165,8 @@ int avic_vm_init(struct kvm *kvm)
>   	return err;
>   }
>   
> -void avic_init_vmcb(struct vcpu_svm *svm)
> +void avic_init_vmcb(struct vcpu_svm *svm, struct vmcb *vmcb)
>   {
> -	struct vmcb *vmcb = svm->vmcb01.ptr;
>   	struct kvm_svm *kvm_svm = to_kvm_svm(svm->vcpu.kvm);
>   	phys_addr_t bpa = __sme_set(page_to_phys(svm->avic_backing_page));
>   	phys_addr_t lpa = __sme_set(page_to_phys(kvm_svm->avic_logical_id_table_page));
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index cc02506b7a19..ced8edad0c87 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -1123,7 +1123,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
>   		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
>   
>   	if (kvm_vcpu_apicv_active(vcpu))
> -		avic_init_vmcb(svm);
> +		avic_init_vmcb(svm, vmcb);
>   
>   	if (vgif) {
>   		svm_clr_intercept(svm, INTERCEPT_STGI);
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index d07a5b88ea96..bbac6c24a8b8 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -591,7 +591,7 @@ extern struct kvm_x86_nested_ops svm_nested_ops;
>   int avic_ga_log_notifier(u32 ga_tag);
>   void avic_vm_destroy(struct kvm *kvm);
>   int avic_vm_init(struct kvm *kvm);
> -void avic_init_vmcb(struct vcpu_svm *svm);
> +void avic_init_vmcb(struct vcpu_svm *svm, struct vmcb *vmcb);
>   int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu);
>   int avic_unaccelerated_access_interception(struct kvm_vcpu *vcpu);
>   int avic_init_vcpu(struct vcpu_svm *svm);
> 

This is a very good idea; I will do this in the
next version of the patches.

Best regards,
	Maxim Levitsky



* Re: [PATCH 8/8] KVM: x86: SVM: remove vgif_enabled()
  2022-03-22 17:24 ` [PATCH 8/8] KVM: x86: SVM: remove vgif_enabled() Maxim Levitsky
@ 2022-03-30  0:20   ` Sean Christopherson
  2022-03-30 12:08     ` Paolo Bonzini
  0 siblings, 1 reply; 16+ messages in thread
From: Sean Christopherson @ 2022-03-30  0:20 UTC
  To: Maxim Levitsky
  Cc: kvm, Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Vitaly Kuznetsov,
	Thomas Gleixner

On Tue, Mar 22, 2022, Maxim Levitsky wrote:
> KVM always uses vgif when allowed, thus there is
> no need to query current vmcb for it

It'd be helpful to explicitly call out that KVM always takes V_GIF_ENABLE_MASK
from vmcb01, otherwise this looks like it does unintended things when KVM is
running vmcb02.

> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/svm/svm.c | 12 ++++++------
>  arch/x86/kvm/svm/svm.h | 12 ++++--------
>  2 files changed, 10 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index acf04cf4ed2a..70fc5897f5f2 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -172,7 +172,7 @@ static int vls = true;
>  module_param(vls, int, 0444);
>  
>  /* enable/disable Virtual GIF */
> -static int vgif = true;
> +int vgif = true;
>  module_param(vgif, int, 0444);

...

> @@ -453,14 +454,9 @@ static inline bool svm_is_intercept(struct vcpu_svm *svm, int bit)
>  	return vmcb_is_intercept(&svm->vmcb->control, bit);
>  }
>  
> -static inline bool vgif_enabled(struct vcpu_svm *svm)
> -{
> -	return !!(svm->vmcb->control.int_ctl & V_GIF_ENABLE_MASK);


* Re: [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called
  2022-03-22 17:24 ` [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called Maxim Levitsky
@ 2022-03-30  0:27   ` Sean Christopherson
  2022-03-30 12:07     ` Paolo Bonzini
  0 siblings, 1 reply; 16+ messages in thread
From: Sean Christopherson @ 2022-03-30  0:27 UTC
  To: Maxim Levitsky
  Cc: kvm, Jim Mattson, Paolo Bonzini, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Ingo Molnar, Suravee Suthikulpanit, x86,
	H. Peter Anvin, Dave Hansen, linux-kernel, Vitaly Kuznetsov,
	Thomas Gleixner, stable

On Tue, Mar 22, 2022, Maxim Levitsky wrote:
> This can cause various unexpected issues, since VM is partially
> destroyed at that point.
> 
> For example when AVIC is enabled, this causes avic_vcpu_load to
> access physical id page entry which is already freed by .vm_destroy.

Hmm, the SEV unbinding of ASIDs should be done after MMU teardown too (which your
patch also does).

> 
> Fixes: 8221c1370056 ("svm: Manage vcpu load/unload when enable AVIC")
> Cc: stable@vger.kernel.org
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/x86.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d3a9ce07a565..ba920e537ddf 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -11759,20 +11759,15 @@ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
>  	vcpu_put(vcpu);
>  }
>  
> -static void kvm_free_vcpus(struct kvm *kvm)
> +static void kvm_unload_vcpu_mmus(struct kvm *kvm)
>  {
>  	unsigned long i;
>  	struct kvm_vcpu *vcpu;
>  
> -	/*
> -	 * Unpin any mmu pages first.
> -	 */
>  	kvm_for_each_vcpu(i, vcpu, kvm) {
>  		kvm_clear_async_pf_completion_queue(vcpu);
>  		kvm_unload_vcpu_mmu(vcpu);
>  	}
> -
> -	kvm_destroy_vcpus(kvm);
>  }
>  
>  void kvm_arch_sync_events(struct kvm *kvm)
> @@ -11878,11 +11873,12 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>  		__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
>  		mutex_unlock(&kvm->slots_lock);
>  	}
> +	kvm_unload_vcpu_mmus(kvm);
>  	static_call_cond(kvm_x86_vm_destroy)(kvm);
>  	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
>  	kvm_pic_destroy(kvm);
>  	kvm_ioapic_destroy(kvm);
> -	kvm_free_vcpus(kvm);
> +	kvm_destroy_vcpus(kvm);

Rather than split kvm_free_vcpus(), can we instead move the call to svm_vm_destroy()
by adding a second hook, .vm_teardown(), which is needed for TDX?  I.e. keep VMX
where it is by using vm_teardown, but effectively move SVM?

https://lore.kernel.org/all/1fa2d0db387a99352d44247728c5b8ae5f5cab4d.1637799475.git.isaku.yamahata@intel.com

>  	kvfree(rcu_dereference_check(kvm->arch.apic_map, 1));
>  	kfree(srcu_dereference_check(kvm->arch.pmu_event_filter, &kvm->srcu, 1));
>  	kvm_mmu_uninit_vm(kvm);
> -- 
> 2.26.3
> 


* Re: [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called
  2022-03-30  0:27   ` Sean Christopherson
@ 2022-03-30 12:07     ` Paolo Bonzini
  2022-04-28  5:47       ` Maxim Levitsky
  0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2022-03-30 12:07 UTC
  To: Sean Christopherson, Maxim Levitsky
  Cc: kvm, Jim Mattson, Wanpeng Li, Borislav Petkov, Joerg Roedel,
	Ingo Molnar, Suravee Suthikulpanit, x86, H. Peter Anvin,
	Dave Hansen, linux-kernel, Vitaly Kuznetsov, Thomas Gleixner,
	stable

On 3/30/22 02:27, Sean Christopherson wrote:
> Rather than split kvm_free_vcpus(), can we instead move the call to svm_vm_destroy()
> by adding a second hook, .vm_teardown(), which is needed for TDX?  I.e. keep VMX
> where it is by using vm_teardown, but effectively move SVM?
> 
> https://lore.kernel.org/all/1fa2d0db387a99352d44247728c5b8ae5f5cab4d.1637799475.git.isaku.yamahata@intel.com

I'd rather do that only for the TDX patches.

Paolo



* Re: [PATCH 8/8] KVM: x86: SVM: remove vgif_enabled()
  2022-03-30  0:20   ` Sean Christopherson
@ 2022-03-30 12:08     ` Paolo Bonzini
  0 siblings, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2022-03-30 12:08 UTC
  To: Sean Christopherson, Maxim Levitsky
  Cc: kvm, Jim Mattson, Wanpeng Li, Borislav Petkov, Joerg Roedel,
	Ingo Molnar, Suravee Suthikulpanit, x86, H. Peter Anvin,
	Dave Hansen, linux-kernel, Vitaly Kuznetsov, Thomas Gleixner

On 3/30/22 02:20, Sean Christopherson wrote:
> It'd be helpful to explicitly call out that KVM always takes V_GIF_ENABLE_MASK
> from vmcs01, otherwise this looks like it does unintentend things when KVM is
> runing vmcb02.

I will add a note to the commit message.

More precisely, because KVM does not (as of this patch) support vGIF 
when L1 runs L2, vmcb02's V_GIF_MASK and V_GIF_ENABLE_MASK also 
control L1's GIF and are the same as vmcb01's.
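
Expressed as a sketch (bit names from asm/svm.h; not a literal quote of
nested.c):

	/* While L2 runs, L1's GIF lives in vmcb02, so vmcb02 inherits
	 * the vGIF bits from vmcb01: */
	vmcb02->control.int_ctl |= vmcb01->control.int_ctl &
				   (V_GIF_ENABLE_MASK | V_GIF_MASK);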

Paolo



* Re: [PATCH 1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called
  2022-03-30 12:07     ` Paolo Bonzini
@ 2022-04-28  5:47       ` Maxim Levitsky
  0 siblings, 0 replies; 16+ messages in thread
From: Maxim Levitsky @ 2022-04-28  5:47 UTC
  To: Paolo Bonzini, Sean Christopherson
  Cc: kvm, Jim Mattson, Wanpeng Li, Borislav Petkov, Joerg Roedel,
	Ingo Molnar, Suravee Suthikulpanit, x86, H. Peter Anvin,
	Dave Hansen, linux-kernel, Vitaly Kuznetsov, Thomas Gleixner,
	stable

On Wed, 2022-03-30 at 14:07 +0200, Paolo Bonzini wrote:
> On 3/30/22 02:27, Sean Christopherson wrote:
> > Rather than split kvm_free_vcpus(), can we instead move the call to svm_vm_destroy()
> > by adding a second hook, .vm_teardown(), which is needed for TDX?  I.e. keep VMX
> > where it is by using vm_teardown, but effectively move SVM?
> > 
> > https://lore.kernel.org/all/1fa2d0db387a99352d44247728c5b8ae5f5cab4d.1637799475.git.isaku.yamahata@intel.com
> 
> I'd rather do that only for the TDX patches.
> 
> Paolo
> 
Any update on this patch? It looks like it is neither upstream nor in kvm/queue.

Best regards,
	Maxim Levitsky



