* [PATCH 0/4] Refactor vcpu creation flow of x86 arch
@ 2019-10-15 16:40 Xiaoyao Li
  2019-10-15 16:40 ` [PATCH 1/4] KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions Xiaoyao Li
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Xiaoyao Li @ 2019-10-15 16:40 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: Xiaoyao Li, kvm, linux-kernel

When reading the vcpu creation flow of vmx, I found it hard to follow since it
mixes data structure allocation and initialization together.

This series tries to make the vcpu creation flow clearer: first allocate the
data structures, then initialize them. This split also makes it possible to
move the FPU allocation into generic x86 code (Patch 4).

This series intends no functional change. I have only tested it with
kvm-unit-tests on vmx since I have no AMD machine at hand.
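
For reference, kvm_arch_vcpu_create() after this series ends up looking
roughly like the sketch below (error unwinding trimmed; see patches 3 and 4
for the real code):

	struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
	{
		struct kvm_vcpu *vcpu;
		int err;

		/* step 1: vendor code only allocates (vmx/svm_create_vcpu) */
		vcpu = kvm_x86_ops->vcpu_create(kvm, id);
		if (IS_ERR(vcpu))
			return vcpu;

		/* step 2: common allocation of the user/guest FPU (patch 4) */
		err = kvm_vcpu_create_fpu(vcpu);
		if (err)
			return ERR_PTR(err);

		/*
		 * step 3: generic init; kvm_arch_vcpu_init() now calls the
		 * new ->vcpu_init() hook for vendor-specific initialization
		 */
		err = kvm_vcpu_init(vcpu, kvm, id);
		if (err) {
			kvm_x86_ops->vcpu_free(vcpu);
			return ERR_PTR(err);
		}

		return vcpu;
	}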

Xiaoyao Li (4):
  KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions
  KVM: VMX: Setup MSR bitmap only when hardware has msr_bitmap capability
  KVM: X86: Refactor kvm_arch_vcpu_create
  KVM: X86: Make vcpu's FPU allocation a common function

 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/svm.c              |  81 ++++++---------
 arch/x86/kvm/vmx/nested.c       |   2 +-
 arch/x86/kvm/vmx/nested.h       |   2 +-
 arch/x86/kvm/vmx/vmx.c          | 173 ++++++++++++++------------------
 arch/x86/kvm/x86.c              |  40 ++++++++
 6 files changed, 150 insertions(+), 149 deletions(-)

-- 
2.19.1


* [PATCH 1/4] KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions
  2019-10-15 16:40 [PATCH 0/4] Refactor vcpu creation flow of x86 arch Xiaoyao Li
@ 2019-10-15 16:40 ` Xiaoyao Li
  2019-10-15 22:05   ` Krish Sadhukhan
  2019-10-15 16:40 ` [PATCH 2/4] KVM: VMX: Setup MSR bitmap only when hardware has msr_bitmap capability Xiaoyao Li
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: Xiaoyao Li @ 2019-10-15 16:40 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: Xiaoyao Li, kvm, linux-kernel

Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
to match what they really do.

No functional change.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 2 +-
 arch/x86/kvm/vmx/nested.h | 2 +-
 arch/x86/kvm/vmx/vmx.c    | 9 +++------
 3 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 5e231da00310..7935422d311f 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5768,7 +5768,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
 	return ret;
 }
 
-void nested_vmx_vcpu_setup(void)
+void nested_vmx_vmcs_setup(void)
 {
 	if (enable_shadow_vmcs) {
 		vmcs_write64(VMREAD_BITMAP, __pa(vmx_vmread_bitmap));
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index 187d39bf0bf1..2be1ba7482c9 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -11,7 +11,7 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps,
 				bool apicv);
 void nested_vmx_hardware_unsetup(void);
 __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *));
-void nested_vmx_vcpu_setup(void);
+void nested_vmx_vmcs_setup(void);
 void nested_vmx_free_vcpu(struct kvm_vcpu *vcpu);
 int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry);
 bool nested_vmx_exit_reflected(struct kvm_vcpu *vcpu, u32 exit_reason);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e660e28e9ae0..58b77a882426 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4161,15 +4161,12 @@ static void ept_set_mmio_spte_mask(void)
 
 #define VMX_XSS_EXIT_BITMAP 0
 
-/*
- * Sets up the vmcs for emulated real mode.
- */
-static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+static void vmx_vmcs_setup(struct vcpu_vmx *vmx)
 {
 	int i;
 
 	if (nested)
-		nested_vmx_vcpu_setup();
+		nested_vmx_vmcs_setup();
 
 	if (cpu_has_vmx_msr_bitmap())
 		vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
@@ -6777,7 +6774,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	cpu = get_cpu();
 	vmx_vcpu_load(&vmx->vcpu, cpu);
 	vmx->vcpu.cpu = cpu;
-	vmx_vcpu_setup(vmx);
+	vmx_vmcs_setup(vmx);
 	vmx_vcpu_put(&vmx->vcpu);
 	put_cpu();
 	if (cpu_need_virtualize_apic_accesses(&vmx->vcpu)) {
-- 
2.19.1


* [PATCH 2/4] KVM: VMX: Setup MSR bitmap only when hardware has msr_bitmap capability
  2019-10-15 16:40 [PATCH 0/4] Refactor vcpu creation flow of x86 arch Xiaoyao Li
  2019-10-15 16:40 ` [PATCH 1/4] KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions Xiaoyao Li
@ 2019-10-15 16:40 ` Xiaoyao Li
  2019-10-16  0:40   ` Krish Sadhukhan
  2019-10-15 16:40 ` [PATCH 3/4] KVM: X86: Refactor kvm_arch_vcpu_create Xiaoyao Li
  2019-10-15 16:40 ` [PATCH 4/4] KVM: X86: Make vcpu's FPU allocation a common function Xiaoyao Li
  3 siblings, 1 reply; 12+ messages in thread
From: Xiaoyao Li @ 2019-10-15 16:40 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: Xiaoyao Li, kvm, linux-kernel

Move the MSR bitmap setup code to vmx_vmcs_setup() and only set up the
bitmap when the hardware has the msr_bitmap capability.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 39 ++++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 58b77a882426..7051511c27c2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4164,12 +4164,30 @@ static void ept_set_mmio_spte_mask(void)
 static void vmx_vmcs_setup(struct vcpu_vmx *vmx)
 {
 	int i;
+	unsigned long *msr_bitmap;
 
 	if (nested)
 		nested_vmx_vmcs_setup();
 
-	if (cpu_has_vmx_msr_bitmap())
-		vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
+	if (cpu_has_vmx_msr_bitmap()) {
+		msr_bitmap = vmx->vmcs01.msr_bitmap;
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_TSC, MSR_TYPE_R);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_FS_BASE, MSR_TYPE_RW);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_GS_BASE, MSR_TYPE_RW);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
+		if (kvm_cstate_in_guest(vmx->vcpu.kvm)) {
+			vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C1_RES, MSR_TYPE_R);
+			vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
+			vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
+			vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
+		}
+
+		vmcs_write64(MSR_BITMAP, __pa(msr_bitmap));
+	}
+	vmx->msr_bitmap_mode = 0;
 
 	vmcs_write64(VMCS_LINK_POINTER, -1ull); /* 22.3.1.5 */
 
@@ -6697,7 +6715,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 {
 	int err;
 	struct vcpu_vmx *vmx;
-	unsigned long *msr_bitmap;
 	int cpu;
 
 	BUILD_BUG_ON_MSG(offsetof(struct vcpu_vmx, vcpu) != 0,
@@ -6754,22 +6771,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (err < 0)
 		goto free_msrs;
 
-	msr_bitmap = vmx->vmcs01.msr_bitmap;
-	vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_TSC, MSR_TYPE_R);
-	vmx_disable_intercept_for_msr(msr_bitmap, MSR_FS_BASE, MSR_TYPE_RW);
-	vmx_disable_intercept_for_msr(msr_bitmap, MSR_GS_BASE, MSR_TYPE_RW);
-	vmx_disable_intercept_for_msr(msr_bitmap, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
-	vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
-	vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
-	vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
-	if (kvm_cstate_in_guest(kvm)) {
-		vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C1_RES, MSR_TYPE_R);
-		vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
-		vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
-		vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
-	}
-	vmx->msr_bitmap_mode = 0;
-
 	vmx->loaded_vmcs = &vmx->vmcs01;
 	cpu = get_cpu();
 	vmx_vcpu_load(&vmx->vcpu, cpu);
-- 
2.19.1


* [PATCH 3/4] KVM: X86: Refactor kvm_arch_vcpu_create
  2019-10-15 16:40 [PATCH 0/4] Refactor vcpu creation flow of x86 arch Xiaoyao Li
  2019-10-15 16:40 ` [PATCH 1/4] KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions Xiaoyao Li
  2019-10-15 16:40 ` [PATCH 2/4] KVM: VMX: Setup MSR bitmap only when hardware has msr_bitmap capability Xiaoyao Li
@ 2019-10-15 16:40 ` Xiaoyao Li
  2019-10-17 16:12   ` Sean Christopherson
  2019-10-15 16:40 ` [PATCH 4/4] KVM: X86: Make vcpu's FPU allocation a common function Xiaoyao Li
  3 siblings, 1 reply; 12+ messages in thread
From: Xiaoyao Li @ 2019-10-15 16:40 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: Xiaoyao Li, kvm, linux-kernel

The current x86 arch vcpu creation flow is a little messy. Specifically,
vcpu data structure allocation and vcpu initialization are mixed up,
which makes the code hard to follow.

Separate vcpu_create and vcpu_init just like ARM does: first call the
vcpu_create related functions to allocate the vcpu's data structures,
then call the vcpu_init related functions to initialize the vcpu.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/svm.c              |  63 +++++++++---------
 arch/x86/kvm/vmx/vmx.c          | 109 ++++++++++++++++----------------
 arch/x86/kvm/x86.c              |  16 +++++
 4 files changed, 102 insertions(+), 87 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5d8056ff7390..d574c2391145 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1015,6 +1015,7 @@ struct kvm_x86_ops {
 
 	/* Create, but do not attach this VCPU */
 	struct kvm_vcpu *(*vcpu_create)(struct kvm *kvm, unsigned id);
+	int (*vcpu_init)(struct kvm_vcpu *vcpu);
 	void (*vcpu_free)(struct kvm_vcpu *vcpu);
 	void (*vcpu_reset)(struct kvm_vcpu *vcpu, bool init_event);
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index e479ea9bc9da..d35ef5456462 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2138,6 +2138,32 @@ static int avic_init_vcpu(struct vcpu_svm *svm)
 	return ret;
 }
 
+static int svm_vcpu_init(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int err;
+
+	err = avic_init_vcpu(svm);
+	if (err)
+		return err;
+
+	/* We initialize this flag to true to make sure that the is_running
+	 * bit would be set the first time the vcpu is loaded.
+	 */
+	svm->avic_is_running = true;
+
+	svm_vcpu_init_msrpm(svm->msrpm);
+	svm_vcpu_init_msrpm(svm->nested.msrpm);
+
+	clear_page(svm->vmcb);
+	svm->asid_generation = 0;
+	init_vmcb(svm);
+
+	svm_init_osvw(vcpu);
+
+	return 0;
+}
+
 static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 {
 	struct vcpu_svm *svm;
@@ -2150,17 +2176,15 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	BUILD_BUG_ON_MSG(offsetof(struct vcpu_svm, vcpu) != 0,
 		"struct kvm_vcpu must be at offset 0 for arch usercopy region");
 
+	err = -ENOMEM;
 	svm = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL_ACCOUNT);
-	if (!svm) {
-		err = -ENOMEM;
+	if (!svm)
 		goto out;
-	}
 
 	svm->vcpu.arch.user_fpu = kmem_cache_zalloc(x86_fpu_cache,
 						     GFP_KERNEL_ACCOUNT);
 	if (!svm->vcpu.arch.user_fpu) {
 		printk(KERN_ERR "kvm: failed to allocate kvm userspace's fpu\n");
-		err = -ENOMEM;
 		goto free_partial_svm;
 	}
 
@@ -2168,18 +2192,12 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 						     GFP_KERNEL_ACCOUNT);
 	if (!svm->vcpu.arch.guest_fpu) {
 		printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
-		err = -ENOMEM;
 		goto free_user_fpu;
 	}
 
-	err = kvm_vcpu_init(&svm->vcpu, kvm, id);
-	if (err)
-		goto free_svm;
-
-	err = -ENOMEM;
 	page = alloc_page(GFP_KERNEL_ACCOUNT);
 	if (!page)
-		goto uninit;
+		goto free_svm;
 
 	msrpm_pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);
 	if (!msrpm_pages)
@@ -2193,43 +2211,22 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (!hsave_page)
 		goto free_page3;
 
-	err = avic_init_vcpu(svm);
-	if (err)
-		goto free_page4;
-
-	/* We initialize this flag to true to make sure that the is_running
-	 * bit would be set the first time the vcpu is loaded.
-	 */
-	svm->avic_is_running = true;
-
 	svm->nested.hsave = page_address(hsave_page);
 
 	svm->msrpm = page_address(msrpm_pages);
-	svm_vcpu_init_msrpm(svm->msrpm);
-
 	svm->nested.msrpm = page_address(nested_msrpm_pages);
-	svm_vcpu_init_msrpm(svm->nested.msrpm);
 
 	svm->vmcb = page_address(page);
-	clear_page(svm->vmcb);
 	svm->vmcb_pa = __sme_set(page_to_pfn(page) << PAGE_SHIFT);
-	svm->asid_generation = 0;
-	init_vmcb(svm);
-
-	svm_init_osvw(&svm->vcpu);
 
 	return &svm->vcpu;
 
-free_page4:
-	__free_page(hsave_page);
 free_page3:
 	__free_pages(nested_msrpm_pages, MSRPM_ALLOC_ORDER);
 free_page2:
 	__free_pages(msrpm_pages, MSRPM_ALLOC_ORDER);
 free_page1:
 	__free_page(page);
-uninit:
-	kvm_vcpu_uninit(&svm->vcpu);
 free_svm:
 	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.guest_fpu);
 free_user_fpu:
@@ -2263,7 +2260,6 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 	__free_page(virt_to_page(svm->nested.hsave));
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
-	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.user_fpu);
 	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.guest_fpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
@@ -7242,6 +7238,7 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.vcpu_create = svm_create_vcpu,
 	.vcpu_free = svm_free_vcpu,
+	.vcpu_init = svm_vcpu_init,
 	.vcpu_reset = svm_vcpu_reset,
 
 	.vm_alloc = svm_vm_alloc,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 7051511c27c2..2f54a3bcb6a5 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6705,30 +6705,78 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
 	nested_vmx_free_vcpu(vcpu);
 	free_loaded_vmcs(vmx->loaded_vmcs);
 	kfree(vmx->guest_msrs);
-	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.user_fpu);
 	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
 	kmem_cache_free(kvm_vcpu_cache, vmx);
 }
 
+static int vmx_vcpu_init(struct kvm_vcpu *vcpu)
+{
+	int cpu;
+	int err;
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	cpu = get_cpu();
+	vmx_vcpu_load(vcpu, cpu);
+	vmx->vcpu.cpu = cpu;
+	vmx_vmcs_setup(vmx);
+	vmx_vcpu_put(vcpu);
+	put_cpu();
+
+	if (cpu_need_virtualize_apic_accesses(&vmx->vcpu)) {
+		err = alloc_apic_access_page(vcpu->kvm);
+		if (err)
+			return -ENOMEM;
+	}
+
+	if (enable_ept && !enable_unrestricted_guest) {
+		err = init_rmode_identity_map(vcpu->kvm);
+		if (err)
+			return -ENOMEM;
+	}
+
+	if (nested)
+		nested_vmx_setup_ctls_msrs(&vmx->nested.msrs,
+					   vmx_capability.ept,
+					   kvm_vcpu_apicv_active(&vmx->vcpu));
+	else
+		memset(&vmx->nested.msrs, 0, sizeof(vmx->nested.msrs));
+
+
+	vmx->nested.posted_intr_nv = -1;
+	vmx->nested.current_vmptr = -1ull;
+
+	vmx->msr_ia32_feature_control_valid_bits = FEATURE_CONTROL_LOCKED;
+
+	/*
+	 * Enforce invariant: pi_desc.nv is always either POSTED_INTR_VECTOR
+	 * or POSTED_INTR_WAKEUP_VECTOR.
+	 */
+	vmx->pi_desc.nv = POSTED_INTR_VECTOR;
+	vmx->pi_desc.sn = 1;
+
+	vmx->ept_pointer = INVALID_PAGE;
+
+	return 0;
+}
+
 static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 {
 	int err;
 	struct vcpu_vmx *vmx;
-	int cpu;
 
 	BUILD_BUG_ON_MSG(offsetof(struct vcpu_vmx, vcpu) != 0,
 		"struct kvm_vcpu must be at offset 0 for arch usercopy region");
 
+	err = -ENOMEM;
 	vmx = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL_ACCOUNT);
 	if (!vmx)
-		return ERR_PTR(-ENOMEM);
+		goto out;
 
 	vmx->vcpu.arch.user_fpu = kmem_cache_zalloc(x86_fpu_cache,
 			GFP_KERNEL_ACCOUNT);
 	if (!vmx->vcpu.arch.user_fpu) {
 		printk(KERN_ERR "kvm: failed to allocate kvm userspace's fpu\n");
-		err = -ENOMEM;
 		goto free_partial_vcpu;
 	}
 
@@ -6736,18 +6784,11 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 			GFP_KERNEL_ACCOUNT);
 	if (!vmx->vcpu.arch.guest_fpu) {
 		printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
-		err = -ENOMEM;
 		goto free_user_fpu;
 	}
 
 	vmx->vpid = allocate_vpid();
 
-	err = kvm_vcpu_init(&vmx->vcpu, kvm, id);
-	if (err)
-		goto free_vcpu;
-
-	err = -ENOMEM;
-
 	/*
 	 * If PML is turned on, failure on enabling PML just results in failure
 	 * of creating the vcpu, therefore we can simplify PML logic (by
@@ -6757,7 +6798,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (enable_pml) {
 		vmx->pml_pg = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 		if (!vmx->pml_pg)
-			goto uninit_vcpu;
+			goto free_vcpu;
 	}
 
 	vmx->guest_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL_ACCOUNT);
@@ -6772,55 +6813,13 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 		goto free_msrs;
 
 	vmx->loaded_vmcs = &vmx->vmcs01;
-	cpu = get_cpu();
-	vmx_vcpu_load(&vmx->vcpu, cpu);
-	vmx->vcpu.cpu = cpu;
-	vmx_vmcs_setup(vmx);
-	vmx_vcpu_put(&vmx->vcpu);
-	put_cpu();
-	if (cpu_need_virtualize_apic_accesses(&vmx->vcpu)) {
-		err = alloc_apic_access_page(kvm);
-		if (err)
-			goto free_vmcs;
-	}
-
-	if (enable_ept && !enable_unrestricted_guest) {
-		err = init_rmode_identity_map(kvm);
-		if (err)
-			goto free_vmcs;
-	}
-
-	if (nested)
-		nested_vmx_setup_ctls_msrs(&vmx->nested.msrs,
-					   vmx_capability.ept,
-					   kvm_vcpu_apicv_active(&vmx->vcpu));
-	else
-		memset(&vmx->nested.msrs, 0, sizeof(vmx->nested.msrs));
-
-	vmx->nested.posted_intr_nv = -1;
-	vmx->nested.current_vmptr = -1ull;
-
-	vmx->msr_ia32_feature_control_valid_bits = FEATURE_CONTROL_LOCKED;
-
-	/*
-	 * Enforce invariant: pi_desc.nv is always either POSTED_INTR_VECTOR
-	 * or POSTED_INTR_WAKEUP_VECTOR.
-	 */
-	vmx->pi_desc.nv = POSTED_INTR_VECTOR;
-	vmx->pi_desc.sn = 1;
-
-	vmx->ept_pointer = INVALID_PAGE;
 
 	return &vmx->vcpu;
 
-free_vmcs:
-	free_loaded_vmcs(vmx->loaded_vmcs);
 free_msrs:
 	kfree(vmx->guest_msrs);
 free_pml:
 	vmx_destroy_pml_buffer(vmx);
-uninit_vcpu:
-	kvm_vcpu_uninit(&vmx->vcpu);
 free_vcpu:
 	free_vpid(vmx->vpid);
 	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
@@ -6828,6 +6827,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.user_fpu);
 free_partial_vcpu:
 	kmem_cache_free(kvm_vcpu_cache, vmx);
+out:
 	return ERR_PTR(err);
 }
 
@@ -7764,6 +7764,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 	.vcpu_create = vmx_create_vcpu,
 	.vcpu_free = vmx_free_vcpu,
+	.vcpu_init = vmx_vcpu_init,
 	.vcpu_reset = vmx_vcpu_reset,
 
 	.prepare_guest_switch = vmx_prepare_switch_to_guest,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f26f8be4e621..fd9f1a1f0f01 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9016,6 +9016,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 
 	kvmclock_reset(vcpu);
 
+	kvm_vcpu_uninit(vcpu);
 	kvm_x86_ops->vcpu_free(vcpu);
 	free_cpumask_var(wbinvd_dirty_mask);
 }
@@ -9024,6 +9025,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
 						unsigned int id)
 {
 	struct kvm_vcpu *vcpu;
+	int err;
 
 	if (kvm_check_tsc_unstable() && atomic_read(&kvm->online_vcpus) != 0)
 		printk_once(KERN_WARNING
@@ -9031,6 +9033,14 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
 		"guest TSC will not be reliable\n");
 
 	vcpu = kvm_x86_ops->vcpu_create(kvm, id);
+	if (IS_ERR(vcpu))
+		return vcpu;
+
+	err = kvm_vcpu_init(vcpu, kvm, id);
+	if (err) {
+		kvm_x86_ops->vcpu_free(vcpu);
+		return ERR_PTR(err);
+	}
 
 	return vcpu;
 }
@@ -9083,6 +9093,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	kvm_mmu_unload(vcpu);
 	vcpu_put(vcpu);
 
+	kvm_vcpu_uninit(vcpu);
 	kvm_x86_ops->vcpu_free(vcpu);
 }
 
@@ -9379,8 +9390,13 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 
 	kvm_hv_vcpu_init(vcpu);
 
+	if (kvm_x86_ops->vcpu_init(vcpu))
+		goto fail_free_wbinvd_dirty_mask;
+
 	return 0;
 
+fail_free_wbinvd_dirty_mask:
+	free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);
 fail_free_mce_banks:
 	kfree(vcpu->arch.mce_banks);
 fail_free_lapic:
-- 
2.19.1


* [PATCH 4/4] KVM: X86: Make vcpu's FPU allocation a common function
  2019-10-15 16:40 [PATCH 0/4] Refactor vcpu creation flow of x86 arch Xiaoyao Li
                   ` (2 preceding siblings ...)
  2019-10-15 16:40 ` [PATCH 3/4] KVM: X86: Refactor kvm_arch_vcpu_create Xiaoyao Li
@ 2019-10-15 16:40 ` Xiaoyao Li
  3 siblings, 0 replies; 12+ messages in thread
From: Xiaoyao Li @ 2019-10-15 16:40 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: Xiaoyao Li, kvm, linux-kernel

The code that creates vcpu.arch.{user,guest}_fpu is duplicated in VMX
and SVM. Make it a common function and do the allocation a little later,
after .vcpu_create.

No functional change intended.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kvm/svm.c     | 18 ------------------
 arch/x86/kvm/vmx/vmx.c | 18 ------------------
 arch/x86/kvm/x86.c     | 24 ++++++++++++++++++++++++
 3 files changed, 24 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index d35ef5456462..cbb5f5b9a9e7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2181,20 +2181,6 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (!svm)
 		goto out;
 
-	svm->vcpu.arch.user_fpu = kmem_cache_zalloc(x86_fpu_cache,
-						     GFP_KERNEL_ACCOUNT);
-	if (!svm->vcpu.arch.user_fpu) {
-		printk(KERN_ERR "kvm: failed to allocate kvm userspace's fpu\n");
-		goto free_partial_svm;
-	}
-
-	svm->vcpu.arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache,
-						     GFP_KERNEL_ACCOUNT);
-	if (!svm->vcpu.arch.guest_fpu) {
-		printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
-		goto free_user_fpu;
-	}
-
 	page = alloc_page(GFP_KERNEL_ACCOUNT);
 	if (!page)
 		goto free_svm;
@@ -2228,10 +2214,6 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 free_page1:
 	__free_page(page);
 free_svm:
-	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.guest_fpu);
-free_user_fpu:
-	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.user_fpu);
-free_partial_svm:
 	kmem_cache_free(kvm_vcpu_cache, svm);
 out:
 	return ERR_PTR(err);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2f54a3bcb6a5..183112604f04 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6773,20 +6773,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (!vmx)
 		goto out;
 
-	vmx->vcpu.arch.user_fpu = kmem_cache_zalloc(x86_fpu_cache,
-			GFP_KERNEL_ACCOUNT);
-	if (!vmx->vcpu.arch.user_fpu) {
-		printk(KERN_ERR "kvm: failed to allocate kvm userspace's fpu\n");
-		goto free_partial_vcpu;
-	}
-
-	vmx->vcpu.arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache,
-			GFP_KERNEL_ACCOUNT);
-	if (!vmx->vcpu.arch.guest_fpu) {
-		printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
-		goto free_user_fpu;
-	}
-
 	vmx->vpid = allocate_vpid();
 
 	/*
@@ -6822,10 +6808,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	vmx_destroy_pml_buffer(vmx);
 free_vcpu:
 	free_vpid(vmx->vpid);
-	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
-free_user_fpu:
-	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.user_fpu);
-free_partial_vcpu:
 	kmem_cache_free(kvm_vcpu_cache, vmx);
 out:
 	return ERR_PTR(err);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd9f1a1f0f01..c4b477a9c7f6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9021,6 +9021,26 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 	free_cpumask_var(wbinvd_dirty_mask);
 }
 
+static int kvm_vcpu_create_fpu(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.user_fpu = kmem_cache_zalloc(x86_fpu_cache,
+			GFP_KERNEL_ACCOUNT);
+	if (!vcpu->arch.user_fpu) {
+		printk(KERN_ERR "kvm: failed to allocate kvm userspace's fpu\n");
+		return -ENOMEM;
+	}
+
+	vcpu->arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache,
+			GFP_KERNEL_ACCOUNT);
+	if (!vcpu->arch.guest_fpu) {
+		printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
+		kmem_cache_free(x86_fpu_cache, vcpu->arch.user_fpu);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
 struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
 						unsigned int id)
 {
@@ -9036,6 +9056,10 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
 	if (IS_ERR(vcpu))
 		return vcpu;
 
+	err = kvm_vcpu_create_fpu(vcpu);
+	if (err)
+		return ERR_PTR(err);
+
 	err = kvm_vcpu_init(vcpu, kvm, id);
 	if (err) {
 		kvm_x86_ops->vcpu_free(vcpu);
-- 
2.19.1


* Re: [PATCH 1/4] KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions
  2019-10-15 16:40 ` [PATCH 1/4] KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions Xiaoyao Li
@ 2019-10-15 22:05   ` Krish Sadhukhan
  2019-10-16  1:27     ` Xiaoyao Li
  0 siblings, 1 reply; 12+ messages in thread
From: Krish Sadhukhan @ 2019-10-15 22:05 UTC (permalink / raw)
  To: Xiaoyao Li, Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: kvm, linux-kernel



On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
> Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
> to match what they really do.
>
> No functional change.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> ---
>   arch/x86/kvm/vmx/nested.c | 2 +-
>   arch/x86/kvm/vmx/nested.h | 2 +-
>   arch/x86/kvm/vmx/vmx.c    | 9 +++------
>   3 files changed, 5 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 5e231da00310..7935422d311f 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -5768,7 +5768,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
>   	return ret;
>   }
>   
> -void nested_vmx_vcpu_setup(void)
> +void nested_vmx_vmcs_setup(void)
>   {
>   	if (enable_shadow_vmcs) {
>   		vmcs_write64(VMREAD_BITMAP, __pa(vmx_vmread_bitmap));
> diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
> index 187d39bf0bf1..2be1ba7482c9 100644
> --- a/arch/x86/kvm/vmx/nested.h
> +++ b/arch/x86/kvm/vmx/nested.h
> @@ -11,7 +11,7 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps,
>   				bool apicv);
>   void nested_vmx_hardware_unsetup(void);
>   __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *));
> -void nested_vmx_vcpu_setup(void);
> +void nested_vmx_vmcs_setup(void);
>   void nested_vmx_free_vcpu(struct kvm_vcpu *vcpu);
>   int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry);
>   bool nested_vmx_exit_reflected(struct kvm_vcpu *vcpu, u32 exit_reason);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e660e28e9ae0..58b77a882426 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -4161,15 +4161,12 @@ static void ept_set_mmio_spte_mask(void)
>   
>   #define VMX_XSS_EXIT_BITMAP 0
>   
> -/*
> - * Sets up the vmcs for emulated real mode.
> - */
> -static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
> +static void vmx_vmcs_setup(struct vcpu_vmx *vmx)
>   {
>   	int i;
>   
>   	if (nested)
> -		nested_vmx_vcpu_setup();
> +		nested_vmx_vmcs_setup();
>   
>   	if (cpu_has_vmx_msr_bitmap())
>   		vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
> @@ -6777,7 +6774,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
>   	cpu = get_cpu();
>   	vmx_vcpu_load(&vmx->vcpu, cpu);
>   	vmx->vcpu.cpu = cpu;
> -	vmx_vcpu_setup(vmx);
> +	vmx_vmcs_setup(vmx);
>   	vmx_vcpu_put(&vmx->vcpu);
>   	put_cpu();
>   	if (cpu_need_virtualize_apic_accesses(&vmx->vcpu)) {

Maybe we should rename vmx_vcpu_reset() to vmx_vmcs_reset() as well?

* Re: [PATCH 2/4] KVM: VMX: Setup MSR bitmap only when hardware has msr_bitmap capability
  2019-10-15 16:40 ` [PATCH 2/4] KVM: VMX: Setup MSR bitmap only when hardware has msr_bitmap capability Xiaoyao Li
@ 2019-10-16  0:40   ` Krish Sadhukhan
  2019-10-16  1:29     ` Xiaoyao Li
  0 siblings, 1 reply; 12+ messages in thread
From: Krish Sadhukhan @ 2019-10-16  0:40 UTC (permalink / raw)
  To: Xiaoyao Li, Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: kvm, linux-kernel



On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
> Move the MSR bitmap setup code to vmx_vmcs_setup() and only set up the
> bitmap when the hardware has the msr_bitmap capability.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> ---
>   arch/x86/kvm/vmx/vmx.c | 39 ++++++++++++++++++++-------------------
>   1 file changed, 20 insertions(+), 19 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 58b77a882426..7051511c27c2 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -4164,12 +4164,30 @@ static void ept_set_mmio_spte_mask(void)
>   static void vmx_vmcs_setup(struct vcpu_vmx *vmx)
>   {
>   	int i;
> +	unsigned long *msr_bitmap;
>   
>   	if (nested)
>   		nested_vmx_vmcs_setup();
>   
> -	if (cpu_has_vmx_msr_bitmap())
> -		vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
> +	if (cpu_has_vmx_msr_bitmap()) {
> +		msr_bitmap = vmx->vmcs01.msr_bitmap;
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_TSC, MSR_TYPE_R);

vmx_disable_intercept_for_msr() also calls cpu_has_vmx_msr_bitmap(),
which means we are repeating the check. A cleaner approach is to remove
the call to cpu_has_vmx_msr_bitmap() from vmx_disable_intercept_for_msr()
and let its callers do the check just like you are doing here.
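
Something like this, perhaps (untested sketch; assumes the check in
vmx_disable_intercept_for_msr() is a simple early return, and that all
existing callers get updated to check first):

	static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
								  u32 msr, int type)
	{
		/*
		 * No cpu_has_vmx_msr_bitmap() check here anymore; callers
		 * such as vmx_vmcs_setup() must check it before touching
		 * the bitmap, as the hunk above already does.
		 */

		/* ... clear the read/write intercept bits as before ... */
	}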

> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_FS_BASE, MSR_TYPE_RW);
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_GS_BASE, MSR_TYPE_RW);
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
> +		if (kvm_cstate_in_guest(vmx->vcpu.kvm)) {
> +			vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C1_RES, MSR_TYPE_R);
> +			vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
> +			vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
> +			vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
> +		}
> +
> +		vmcs_write64(MSR_BITMAP, __pa(msr_bitmap));
> +	}
> +	vmx->msr_bitmap_mode = 0;
>   
>   	vmcs_write64(VMCS_LINK_POINTER, -1ull); /* 22.3.1.5 */
>   
> @@ -6697,7 +6715,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
>   {
>   	int err;
>   	struct vcpu_vmx *vmx;
> -	unsigned long *msr_bitmap;
>   	int cpu;
>   
>   	BUILD_BUG_ON_MSG(offsetof(struct vcpu_vmx, vcpu) != 0,
> @@ -6754,22 +6771,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
>   	if (err < 0)
>   		goto free_msrs;
>   
> -	msr_bitmap = vmx->vmcs01.msr_bitmap;
> -	vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_TSC, MSR_TYPE_R);
> -	vmx_disable_intercept_for_msr(msr_bitmap, MSR_FS_BASE, MSR_TYPE_RW);
> -	vmx_disable_intercept_for_msr(msr_bitmap, MSR_GS_BASE, MSR_TYPE_RW);
> -	vmx_disable_intercept_for_msr(msr_bitmap, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
> -	vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
> -	vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
> -	vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
> -	if (kvm_cstate_in_guest(kvm)) {
> -		vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C1_RES, MSR_TYPE_R);
> -		vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
> -		vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
> -		vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
> -	}
> -	vmx->msr_bitmap_mode = 0;
> -
>   	vmx->loaded_vmcs = &vmx->vmcs01;
>   	cpu = get_cpu();
>   	vmx_vcpu_load(&vmx->vcpu, cpu);


* Re: [PATCH 1/4] KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions
  2019-10-15 22:05   ` Krish Sadhukhan
@ 2019-10-16  1:27     ` Xiaoyao Li
  2019-10-16 18:09       ` Krish Sadhukhan
  0 siblings, 1 reply; 12+ messages in thread
From: Xiaoyao Li @ 2019-10-16  1:27 UTC (permalink / raw)
  To: Krish Sadhukhan, Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: kvm, linux-kernel

On 10/16/2019 6:05 AM, Krish Sadhukhan wrote:
> 
> 
> On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
>> Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
>> to match what they really do.
>>
>> No functional change.
>>
>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
>> ---
>>   arch/x86/kvm/vmx/nested.c | 2 +-
>>   arch/x86/kvm/vmx/nested.h | 2 +-
>>   arch/x86/kvm/vmx/vmx.c    | 9 +++------
>>   3 files changed, 5 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>> index 5e231da00310..7935422d311f 100644
>> --- a/arch/x86/kvm/vmx/nested.c
>> +++ b/arch/x86/kvm/vmx/nested.c
>> @@ -5768,7 +5768,7 @@ static int vmx_set_nested_state(struct kvm_vcpu 
>> *vcpu,
>>       return ret;
>>   }
>> -void nested_vmx_vcpu_setup(void)
>> +void nested_vmx_vmcs_setup(void)
>>   {
>>       if (enable_shadow_vmcs) {
>>           vmcs_write64(VMREAD_BITMAP, __pa(vmx_vmread_bitmap));
>> diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
>> index 187d39bf0bf1..2be1ba7482c9 100644
>> --- a/arch/x86/kvm/vmx/nested.h
>> +++ b/arch/x86/kvm/vmx/nested.h
>> @@ -11,7 +11,7 @@ void nested_vmx_setup_ctls_msrs(struct 
>> nested_vmx_msrs *msrs, u32 ept_caps,
>>                   bool apicv);
>>   void nested_vmx_hardware_unsetup(void);
>>   __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct 
>> kvm_vcpu *));
>> -void nested_vmx_vcpu_setup(void);
>> +void nested_vmx_vmcs_setup(void);
>>   void nested_vmx_free_vcpu(struct kvm_vcpu *vcpu);
>>   int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu, bool 
>> from_vmentry);
>>   bool nested_vmx_exit_reflected(struct kvm_vcpu *vcpu, u32 exit_reason);
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index e660e28e9ae0..58b77a882426 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -4161,15 +4161,12 @@ static void ept_set_mmio_spte_mask(void)
>>   #define VMX_XSS_EXIT_BITMAP 0
>> -/*
>> - * Sets up the vmcs for emulated real mode.
>> - */
>> -static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
>> +static void vmx_vmcs_setup(struct vcpu_vmx *vmx)
>>   {
>>       int i;
>>       if (nested)
>> -        nested_vmx_vcpu_setup();
>> +        nested_vmx_vmcs_setup();
>>       if (cpu_has_vmx_msr_bitmap())
>>           vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
>> @@ -6777,7 +6774,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct 
>> kvm *kvm, unsigned int id)
>>       cpu = get_cpu();
>>       vmx_vcpu_load(&vmx->vcpu, cpu);
>>       vmx->vcpu.cpu = cpu;
>> -    vmx_vcpu_setup(vmx);
>> +    vmx_vmcs_setup(vmx);
>>       vmx_vcpu_put(&vmx->vcpu);
>>       put_cpu();
>>       if (cpu_need_virtualize_apic_accesses(&vmx->vcpu)) {
> 
> Maybe we should rename vmx_vcpu_reset() to vmx_vmcs_reset() as well?

Not really. vmx_vcpu_reset() resets not only the vmcs but also the
emulated fields of the vmx vcpu.
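
E.g., a non-exhaustive sketch of the non-VMCS state it touches (see
vmx_vcpu_reset() for the full list):

	static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		/* emulated vcpu state, not VMCS fields: */
		vmx->rmode.vm86_active = 0;
		vmx->hv_deadline_tsc = -1;

		/* ... plus the vmcs_write*() calls that reset the VMCS ... */
	}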

* Re: [PATCH 2/4] KVM: VMX: Setup MSR bitmap only when hardware has msr_bitmap capability
  2019-10-16  0:40   ` Krish Sadhukhan
@ 2019-10-16  1:29     ` Xiaoyao Li
  0 siblings, 0 replies; 12+ messages in thread
From: Xiaoyao Li @ 2019-10-16  1:29 UTC (permalink / raw)
  To: Krish Sadhukhan, Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: kvm, linux-kernel

On 10/16/2019 8:40 AM, Krish Sadhukhan wrote:
> 
> 
> On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
>> Move the MSR bitmap setup code to vmx_vmcs_setup() and only set up the
>> bitmap when the hardware has the msr_bitmap capability.
>>
>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
>> ---
>>   arch/x86/kvm/vmx/vmx.c | 39 ++++++++++++++++++++-------------------
>>   1 file changed, 20 insertions(+), 19 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index 58b77a882426..7051511c27c2 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -4164,12 +4164,30 @@ static void ept_set_mmio_spte_mask(void)
>>   static void vmx_vmcs_setup(struct vcpu_vmx *vmx)
>>   {
>>       int i;
>> +    unsigned long *msr_bitmap;
>>       if (nested)
>>           nested_vmx_vmcs_setup();
>> -    if (cpu_has_vmx_msr_bitmap())
>> -        vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
>> +    if (cpu_has_vmx_msr_bitmap()) {
>> +        msr_bitmap = vmx->vmcs01.msr_bitmap;
>> +        vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_TSC, 
>> MSR_TYPE_R);
> 
> vmx_disable_intercept_for_msr() also calls cpu_has_vmx_msr_bitmap(),
> which means we are repeating the check. A cleaner approach is to remove
> the call to cpu_has_vmx_msr_bitmap() from vmx_disable_intercept_for_msr()
> and let its callers do the check just like you are doing here.
> 

Right.
I'll improve it. Thanks!

>> +        vmx_disable_intercept_for_msr(msr_bitmap, MSR_FS_BASE, 
>> MSR_TYPE_RW);
>> +        vmx_disable_intercept_for_msr(msr_bitmap, MSR_GS_BASE, 
>> MSR_TYPE_RW);
>> +        vmx_disable_intercept_for_msr(msr_bitmap, MSR_KERNEL_GS_BASE, 
>> MSR_TYPE_RW);
>> +        vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
>> +        vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
>> +        vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
>> +        if (kvm_cstate_in_guest(vmx->vcpu.kvm)) {
>> +            vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_CORE_C1_RES, MSR_TYPE_R);
>> +            vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
>> +            vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
>> +            vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
>> +        }
>> +
>> +        vmcs_write64(MSR_BITMAP, __pa(msr_bitmap));
>> +    }
>> +    vmx->msr_bitmap_mode = 0;
>>       vmcs_write64(VMCS_LINK_POINTER, -1ull); /* 22.3.1.5 */
>> @@ -6697,7 +6715,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct 
>> kvm *kvm, unsigned int id)
>>   {
>>       int err;
>>       struct vcpu_vmx *vmx;
>> -    unsigned long *msr_bitmap;
>>       int cpu;
>>       BUILD_BUG_ON_MSG(offsetof(struct vcpu_vmx, vcpu) != 0,
>> @@ -6754,22 +6771,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct 
>> kvm *kvm, unsigned int id)
>>       if (err < 0)
>>           goto free_msrs;
>> -    msr_bitmap = vmx->vmcs01.msr_bitmap;
>> -    vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_TSC, MSR_TYPE_R);
>> -    vmx_disable_intercept_for_msr(msr_bitmap, MSR_FS_BASE, MSR_TYPE_RW);
>> -    vmx_disable_intercept_for_msr(msr_bitmap, MSR_GS_BASE, MSR_TYPE_RW);
>> -    vmx_disable_intercept_for_msr(msr_bitmap, MSR_KERNEL_GS_BASE, 
>> MSR_TYPE_RW);
>> -    vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_CS, 
>> MSR_TYPE_RW);
>> -    vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_ESP, 
>> MSR_TYPE_RW);
>> -    vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_EIP, 
>> MSR_TYPE_RW);
>> -    if (kvm_cstate_in_guest(kvm)) {
>> -        vmx_disable_intercept_for_msr(msr_bitmap, MSR_CORE_C1_RES, 
>> MSR_TYPE_R);
>> -        vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
>> -        vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
>> -        vmx_disable_intercept_for_msr(msr_bitmap, 
>> MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
>> -    }
>> -    vmx->msr_bitmap_mode = 0;
>> -
>>       vmx->loaded_vmcs = &vmx->vmcs01;
>>       cpu = get_cpu();
>>       vmx_vcpu_load(&vmx->vcpu, cpu);
> 

* Re: [PATCH 1/4] KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions
  2019-10-16  1:27     ` Xiaoyao Li
@ 2019-10-16 18:09       ` Krish Sadhukhan
  2019-10-17  1:25         ` Xiaoyao Li
  0 siblings, 1 reply; 12+ messages in thread
From: Krish Sadhukhan @ 2019-10-16 18:09 UTC (permalink / raw)
  To: Xiaoyao Li, Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: kvm, linux-kernel


On 10/15/19 6:27 PM, Xiaoyao Li wrote:
> On 10/16/2019 6:05 AM, Krish Sadhukhan wrote:
>>
>>
>> On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
>>> Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
>>> to match what they really do.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
>>> ---
>>>   arch/x86/kvm/vmx/nested.c | 2 +-
>>>   arch/x86/kvm/vmx/nested.h | 2 +-
>>>   arch/x86/kvm/vmx/vmx.c    | 9 +++------
>>>   3 files changed, 5 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>>> index 5e231da00310..7935422d311f 100644
>>> --- a/arch/x86/kvm/vmx/nested.c
>>> +++ b/arch/x86/kvm/vmx/nested.c
>>> @@ -5768,7 +5768,7 @@ static int vmx_set_nested_state(struct 
>>> kvm_vcpu *vcpu,
>>>       return ret;
>>>   }
>>> -void nested_vmx_vcpu_setup(void)
>>> +void nested_vmx_vmcs_setup(void)
>>>   {
>>>       if (enable_shadow_vmcs) {
>>>           vmcs_write64(VMREAD_BITMAP, __pa(vmx_vmread_bitmap));
>>> diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
>>> index 187d39bf0bf1..2be1ba7482c9 100644
>>> --- a/arch/x86/kvm/vmx/nested.h
>>> +++ b/arch/x86/kvm/vmx/nested.h
>>> @@ -11,7 +11,7 @@ void nested_vmx_setup_ctls_msrs(struct 
>>> nested_vmx_msrs *msrs, u32 ept_caps,
>>>                   bool apicv);
>>>   void nested_vmx_hardware_unsetup(void);
>>>   __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct 
>>> kvm_vcpu *));
>>> -void nested_vmx_vcpu_setup(void);
>>> +void nested_vmx_vmcs_setup(void);
>>>   void nested_vmx_free_vcpu(struct kvm_vcpu *vcpu);
>>>   int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu, bool 
>>> from_vmentry);
>>>   bool nested_vmx_exit_reflected(struct kvm_vcpu *vcpu, u32 
>>> exit_reason);
>>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>>> index e660e28e9ae0..58b77a882426 100644
>>> --- a/arch/x86/kvm/vmx/vmx.c
>>> +++ b/arch/x86/kvm/vmx/vmx.c
>>> @@ -4161,15 +4161,12 @@ static void ept_set_mmio_spte_mask(void)
>>>   #define VMX_XSS_EXIT_BITMAP 0
>>> -/*
>>> - * Sets up the vmcs for emulated real mode.
>>> - */
>>> -static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
>>> +static void vmx_vmcs_setup(struct vcpu_vmx *vmx)
>>>   {
>>>       int i;
>>>       if (nested)
>>> -        nested_vmx_vcpu_setup();
>>> +        nested_vmx_vmcs_setup();
>>>       if (cpu_has_vmx_msr_bitmap())
>>>           vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
>>> @@ -6777,7 +6774,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct 
>>> kvm *kvm, unsigned int id)
>>>       cpu = get_cpu();
>>>       vmx_vcpu_load(&vmx->vcpu, cpu);
>>>       vmx->vcpu.cpu = cpu;
>>> -    vmx_vcpu_setup(vmx);
>>> +    vmx_vmcs_setup(vmx);
>>>       vmx_vcpu_put(&vmx->vcpu);
>>>       put_cpu();
>>>       if (cpu_need_virtualize_apic_accesses(&vmx->vcpu)) {
>>
>> Maybe we should rename vmx_vcpu_reset() to vmx_vmcs_reset() as well?
>
> Not really. vmx_vcpu_reset() resets not only the vmcs but also the
> emulated fields of the vmx vcpu.

It would be a better organization of the code if the resetting of the 
VMCS fields were placed in a separate function.
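
Something along these lines, maybe (hypothetical split; only a few of the
VMCS writes shown):

	static void vmx_vmcs_reset(struct vcpu_vmx *vmx)
	{
		/* pure VMCS field writes pulled out of vmx_vcpu_reset() */
		vmcs_write32(GUEST_SYSENTER_CS, 0);
		vmcs_writel(GUEST_SYSENTER_ESP, 0);
		vmcs_writel(GUEST_SYSENTER_EIP, 0);
		/* ... */
	}

	static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		/* reset the emulated vcpu state first ... */

		vmx_vmcs_reset(vmx);
	}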


Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>


* Re: [PATCH 1/4] KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions
  2019-10-16 18:09       ` Krish Sadhukhan
@ 2019-10-17  1:25         ` Xiaoyao Li
  0 siblings, 0 replies; 12+ messages in thread
From: Xiaoyao Li @ 2019-10-17  1:25 UTC (permalink / raw)
  To: Krish Sadhukhan, Paolo Bonzini, Radim Krčmář,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Joerg Roedel
  Cc: kvm, linux-kernel

On 10/17/2019 2:09 AM, Krish Sadhukhan wrote:
> 
> On 10/15/19 6:27 PM, Xiaoyao Li wrote:
>> On 10/16/2019 6:05 AM, Krish Sadhukhan wrote:
>>>
>>>
>>> On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
>>>> Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
>>>> to match what they really do.
>>>>
>>>> No functional change.
>>>>
>>>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
>>>> ---
>>>>   arch/x86/kvm/vmx/nested.c | 2 +-
>>>>   arch/x86/kvm/vmx/nested.h | 2 +-
>>>>   arch/x86/kvm/vmx/vmx.c    | 9 +++------
>>>>   3 files changed, 5 insertions(+), 8 deletions(-)
>>>>
>>>> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>>>> index 5e231da00310..7935422d311f 100644
>>>> --- a/arch/x86/kvm/vmx/nested.c
>>>> +++ b/arch/x86/kvm/vmx/nested.c
>>>> @@ -5768,7 +5768,7 @@ static int vmx_set_nested_state(struct 
>>>> kvm_vcpu *vcpu,
>>>>       return ret;
>>>>   }
>>>> -void nested_vmx_vcpu_setup(void)
>>>> +void nested_vmx_vmcs_setup(void)
>>>>   {
>>>>       if (enable_shadow_vmcs) {
>>>>           vmcs_write64(VMREAD_BITMAP, __pa(vmx_vmread_bitmap));
>>>> diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
>>>> index 187d39bf0bf1..2be1ba7482c9 100644
>>>> --- a/arch/x86/kvm/vmx/nested.h
>>>> +++ b/arch/x86/kvm/vmx/nested.h
>>>> @@ -11,7 +11,7 @@ void nested_vmx_setup_ctls_msrs(struct 
>>>> nested_vmx_msrs *msrs, u32 ept_caps,
>>>>                   bool apicv);
>>>>   void nested_vmx_hardware_unsetup(void);
>>>>   __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct 
>>>> kvm_vcpu *));
>>>> -void nested_vmx_vcpu_setup(void);
>>>> +void nested_vmx_vmcs_setup(void);
>>>>   void nested_vmx_free_vcpu(struct kvm_vcpu *vcpu);
>>>>   int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu, bool 
>>>> from_vmentry);
>>>>   bool nested_vmx_exit_reflected(struct kvm_vcpu *vcpu, u32 
>>>> exit_reason);
>>>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>>>> index e660e28e9ae0..58b77a882426 100644
>>>> --- a/arch/x86/kvm/vmx/vmx.c
>>>> +++ b/arch/x86/kvm/vmx/vmx.c
>>>> @@ -4161,15 +4161,12 @@ static void ept_set_mmio_spte_mask(void)
>>>>   #define VMX_XSS_EXIT_BITMAP 0
>>>> -/*
>>>> - * Sets up the vmcs for emulated real mode.
>>>> - */
>>>> -static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
>>>> +static void vmx_vmcs_setup(struct vcpu_vmx *vmx)
>>>>   {
>>>>       int i;
>>>>       if (nested)
>>>> -        nested_vmx_vcpu_setup();
>>>> +        nested_vmx_vmcs_setup();
>>>>       if (cpu_has_vmx_msr_bitmap())
>>>>           vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
>>>> @@ -6777,7 +6774,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct 
>>>> kvm *kvm, unsigned int id)
>>>>       cpu = get_cpu();
>>>>       vmx_vcpu_load(&vmx->vcpu, cpu);
>>>>       vmx->vcpu.cpu = cpu;
>>>> -    vmx_vcpu_setup(vmx);
>>>> +    vmx_vmcs_setup(vmx);
>>>>       vmx_vcpu_put(&vmx->vcpu);
>>>>       put_cpu();
>>>>       if (cpu_need_virtualize_apic_accesses(&vmx->vcpu)) {
>>>
>>> Maybe we should rename vmx_vcpu_reset() to vmx_vmcs_reset() as well?
>>
>> Not really. vmx_vcpu_reset() resets not only the vmcs but also the
>> emulated fields of the vmx vcpu.
> 
> It would be a better organization of the code if the resetting of the 
> VMCS fields were placed in a separate function.
> 

OK, will do it in next version.

> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> 

* Re: [PATCH 3/4] KVM: X86: Refactor kvm_arch_vcpu_create
  2019-10-15 16:40 ` [PATCH 3/4] KVM: X86: Refactor kvm_arch_vcpu_create Xiaoyao Li
@ 2019-10-17 16:12   ` Sean Christopherson
  0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2019-10-17 16:12 UTC (permalink / raw)
  To: Xiaoyao Li
  Cc: Paolo Bonzini, Radim Krčmář,
	Vitaly Kuznetsov, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Wed, Oct 16, 2019 at 12:40:32AM +0800, Xiaoyao Li wrote:
> The current x86 arch vcpu creation flow is a little messy. Specifically,
> vcpu data structure allocation and vcpu initialization are mixed up,
> which makes the code hard to follow.
> 
> Separate vcpu_create and vcpu_init just like ARM does: first call the
> vcpu_create related functions to allocate the vcpu's data structures,
> then call the vcpu_init related functions to initialize the vcpu.

My vote is to take advantage of the requirement that @vcpu must reside at
offset 0 in vcpu_vmx and vcpu_svm, and allocate the vcpu in x86 code.
That would allow kvm_arch_vcpu_create() to invoke kvm_vcpu_init() directly
instead of bouncing through the vendor code.
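
A rough sketch of how that could look (hypothetical; relies on
kvm_vcpu_cache already being sized for the vendor struct via the vcpu_size
argument to kvm_init(), and on a changed ->vcpu_create() signature):

	struct kvm_vcpu *kvm_arch_vcpu_alloc(struct kvm *kvm, unsigned int id)
	{
		struct kvm_vcpu *vcpu;

		/*
		 * One allocation covers vcpu_vmx/vcpu_svm as well, since
		 * struct kvm_vcpu sits at offset 0 of both.
		 */
		vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL_ACCOUNT);
		if (!vcpu)
			return ERR_PTR(-ENOMEM);

		/* hypothetical: vendor hook now only fills in its own part */
		if (kvm_x86_ops->vcpu_create(vcpu, kvm, id)) {
			kmem_cache_free(kvm_vcpu_cache, vcpu);
			return ERR_PTR(-ENOMEM);
		}

		return vcpu;
	}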

And if we're extra lucky and the other architectures can use a similar
pattern, kvm_vm_ioctl_create_vcpu() could be refactored to something like:

	vcpu = kvm_arch_vcpu_alloc(kvm, id);
	if (IS_ERR(vcpu)) {
		r = PTR_ERR(vcpu);
		goto vcpu_decrement;
	}

	r = kvm_arch_vcpu_init(vcpu);
	if (r)
		goto vcpu_destroy;
