* [PATCH v1 00/13] KVM x86/VMX cleanups
@ 2017-08-04 13:14 David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 01/13] KVM: x86: mmu: returning void in a void function is strange David Hildenbrand
                   ` (13 more replies)
  0 siblings, 14 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Some cleanups discovered while digging through the code.

David Hildenbrand (13):
  KVM: x86: mmu: returning void in a void function is strange
  KVM: x86: mmu: use for_each_shadow_entry_lockless()
  KVM: x86: mmu: free_page can handle NULL
  KVM: x86: drop BUG_ON(vcpu->kvm)
  KVM: VMX: vmx_vcpu_setup() cannot fail
  KVM: x86: no need to initialize vcpu members to 0
  KVM: VMX: drop enable_ept check from ept_sync_context()
  KVM: VMX: call ept_sync_global() with enable_ept only
  KVM: VMX: drop unnecessary function declarations
  KVM: nVMX: no need to set vcpu->cpu when switching vmcs
  KVM: VMX: cleanup init_rmode_identity_map()
  KVM: x86: document special identity map address value
  KVM: x86: allow setting identity map addr with no vcpus only

 Documentation/virtual/kvm/api.txt |  4 ++++
 arch/x86/kvm/mmu.c                | 26 ++++++++------------
 arch/x86/kvm/vmx.c                | 50 ++++++++++-----------------------------
 arch/x86/kvm/x86.c                | 21 +++++++---------
 4 files changed, 35 insertions(+), 66 deletions(-)

-- 
2.9.4

* [PATCH v1 01/13] KVM: x86: mmu: returning void in a void function is strange
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 02/13] KVM: x86: mmu: use for_each_shadow_entry_lockless() David Hildenbrand
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Let's just drop the return.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9b1dd11..9ed26cc 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2422,7 +2422,7 @@ static void __shadow_walk_next(struct kvm_shadow_walk_iterator *iterator,
 
 static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator)
 {
-	return __shadow_walk_next(iterator, *iterator->sptep);
+	__shadow_walk_next(iterator, *iterator->sptep);
 }
 
 static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
-- 
2.9.4

* [PATCH v1 02/13] KVM: x86: mmu: use for_each_shadow_entry_lockless()
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 01/13] KVM: x86: mmu: returning void in a void function is strange David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-14 10:21   ` Paolo Bonzini
  2017-08-04 13:14 ` [PATCH v1 03/13] KVM: x86: mmu: free_page can handle NULL David Hildenbrand
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Certainly easier to read.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/mmu.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)
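
For reference, the macro used here, as defined in arch/x86/kvm/mmu.c at
this point (quoted for convenience; please double-check against the
tree). It expands to essentially the same walk as the open-coded loop
being removed:

	#define for_each_shadow_entry_lockless(_vcpu, _addr, _walker, spte)	\
		for (shadow_walk_init(&(_walker), _vcpu, _addr);		\
		     shadow_walk_okay(&(_walker)) &&				\
			({ spte = mmu_spte_get_lockless(_walker.sptep); 1; });	\
		     __shadow_walk_next(&(_walker), spte))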

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9ed26cc..3769613 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3596,8 +3596,8 @@ static bool
 walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 {
 	struct kvm_shadow_walk_iterator iterator;
-	u64 sptes[PT64_ROOT_LEVEL], spte = 0ull;
-	int root, leaf;
+	u64 sptes[PT64_ROOT_LEVEL] = { 0 }, spte = 0ull;
+	int level;
 	bool reserved = false;
 
 	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
@@ -3605,14 +3605,8 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 
 	walk_shadow_page_lockless_begin(vcpu);
 
-	for (shadow_walk_init(&iterator, vcpu, addr),
-		 leaf = root = iterator.level;
-	     shadow_walk_okay(&iterator);
-	     __shadow_walk_next(&iterator, spte)) {
-		spte = mmu_spte_get_lockless(iterator.sptep);
-
-		sptes[leaf - 1] = spte;
-		leaf--;
+	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
+		sptes[iterator.level - 1] = spte;
 
 		if (!is_shadow_present_pte(spte))
 			break;
@@ -3626,10 +3620,11 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	if (reserved) {
 		pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
 		       __func__, addr);
-		while (root > leaf) {
+		for (level = PT64_ROOT_LEVEL; level > 0; level--) {
+			if (!sptes[level - 1])
+				continue;
 			pr_err("------ spte 0x%llx level %d.\n",
-			       sptes[root - 1], root);
-			root--;
+			       sptes[level - 1], level);
 		}
 	}
 exit:
-- 
2.9.4

* [PATCH v1 03/13] KVM: x86: mmu: free_page can handle NULL
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 01/13] KVM: x86: mmu: returning void in a void function is strange David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 02/13] KVM: x86: mmu: use for_each_shadow_entry_lockless() David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 04/13] KVM: x86: drop BUG_ON(vcpu->kvm) David Hildenbrand
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/mmu.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
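
For reference, free_pages() already treats a zero address as a no-op,
which is what makes the NULL check redundant. A sketch of the relevant
check in mm/page_alloc.c (the exact form may differ between versions):

	void free_pages(unsigned long addr, unsigned int order)
	{
		if (addr != 0) {
			VM_BUG_ON(!virt_addr_valid((void *)addr));
			__free_pages(virt_to_page((void *)addr), order);
		}
	}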

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3769613..98e72ee 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4881,8 +4881,7 @@ EXPORT_SYMBOL_GPL(kvm_disable_tdp);
 static void free_mmu_pages(struct kvm_vcpu *vcpu)
 {
 	free_page((unsigned long)vcpu->arch.mmu.pae_root);
-	if (vcpu->arch.mmu.lm_root != NULL)
-		free_page((unsigned long)vcpu->arch.mmu.lm_root);
+	free_page((unsigned long)vcpu->arch.mmu.lm_root);
 }
 
 static int alloc_mmu_pages(struct kvm_vcpu *vcpu)
-- 
2.9.4

* [PATCH v1 04/13] KVM: x86: drop BUG_ON(vcpu->kvm)
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (2 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 03/13] KVM: x86: mmu: free_page can handle NULL David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 05/13] KVM: VMX: vmx_vcpu_setup() cannot fail David Hildenbrand
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Also get rid of the superfluous local variable "kvm".

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/x86.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 72d82ab..3b5455b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7930,16 +7930,12 @@ EXPORT_SYMBOL_GPL(kvm_no_apic_vcpu);
 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
 	struct page *page;
-	struct kvm *kvm;
 	int r;
 
-	BUG_ON(vcpu->kvm == NULL);
-	kvm = vcpu->kvm;
-
 	vcpu->arch.apicv_active = kvm_x86_ops->get_enable_apicv();
 	vcpu->arch.pv.pv_unhalted = false;
 	vcpu->arch.emulate_ctxt.ops = &emulate_ops;
-	if (!irqchip_in_kernel(kvm) || kvm_vcpu_is_reset_bsp(vcpu))
+	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 	else
 		vcpu->arch.mp_state = KVM_MP_STATE_UNINITIALIZED;
@@ -7957,7 +7953,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 	if (r < 0)
 		goto fail_free_pio_data;
 
-	if (irqchip_in_kernel(kvm)) {
+	if (irqchip_in_kernel(vcpu->kvm)) {
 		r = kvm_create_lapic(vcpu);
 		if (r < 0)
 			goto fail_mmu_destroy;
-- 
2.9.4

* [PATCH v1 05/13] KVM: VMX: vmx_vcpu_setup() cannot fail
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (3 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 04/13] KVM: x86: drop BUG_ON(vcpu->kvm) David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 06/13] KVM: x86: no need to initialize vcpu members to 0 David Hildenbrand
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Make it void and drop the error handling code.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/vmx.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b22af24..96e2ef4 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5279,7 +5279,7 @@ static void ept_set_mmio_spte_mask(void)
 /*
  * Sets up the vmcs for emulated real mode.
  */
-static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
+static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
 {
 #ifdef CONFIG_X86_64
 	unsigned long a;
@@ -5388,8 +5388,6 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
 		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
 		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
 	}
-
-	return 0;
 }
 
 static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@@ -9272,11 +9270,9 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	cpu = get_cpu();
 	vmx_vcpu_load(&vmx->vcpu, cpu);
 	vmx->vcpu.cpu = cpu;
-	err = vmx_vcpu_setup(vmx);
+	vmx_vcpu_setup(vmx);
 	vmx_vcpu_put(&vmx->vcpu);
 	put_cpu();
-	if (err)
-		goto free_vmcs;
 	if (cpu_need_virtualize_apic_accesses(&vmx->vcpu)) {
 		err = alloc_apic_access_page(kvm);
 		if (err)
-- 
2.9.4

* [PATCH v1 06/13] KVM: x86: no need to initialize vcpu members to 0
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (4 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 05/13] KVM: VMX: vmx_vcpu_setup() cannot fail David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 07/13] KVM: VMX: drop enable_ept check from ept_sync_context() David Hildenbrand
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Both vmx and svm allocate the vcpu structure with kmem_cache_zalloc(),
so zero-initializing these members is not necessary.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/x86.c | 5 -----
 1 file changed, 5 deletions(-)
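
For reference, the zeroing allocation in vmx_create_vcpu() (svm uses the
same pattern), which guarantees these members already start out as
0/false:

	struct vcpu_vmx *vmx = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);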

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3b5455b..c5dfea9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7933,7 +7933,6 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 	int r;
 
 	vcpu->arch.apicv_active = kvm_x86_ops->get_enable_apicv();
-	vcpu->arch.pv.pv_unhalted = false;
 	vcpu->arch.emulate_ctxt.ops = &emulate_ops;
 	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
@@ -7975,10 +7974,6 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 
 	fx_init(vcpu);
 
-	vcpu->arch.ia32_tsc_adjust_msr = 0x0;
-	vcpu->arch.pv_time_enabled = false;
-
-	vcpu->arch.guest_supported_xcr0 = 0;
 	vcpu->arch.guest_xstate_size = XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET;
 
 	vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
-- 
2.9.4

* [PATCH v1 07/13] KVM: VMX: drop enable_ept check from ept_sync_context()
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (5 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 06/13] KVM: x86: no need to initialize vcpu members to 0 David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 08/13] KVM: VMX: call ept_sync_global() with enable_ept only David Hildenbrand
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

This function is only called with enable_ept set.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/vmx.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 96e2ef4..4709032 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1572,12 +1572,10 @@ static inline void ept_sync_global(void)
 
 static inline void ept_sync_context(u64 eptp)
 {
-	if (enable_ept) {
-		if (cpu_has_vmx_invept_context())
-			__invept(VMX_EPT_EXTENT_CONTEXT, eptp, 0);
-		else
-			ept_sync_global();
-	}
+	if (cpu_has_vmx_invept_context())
+		__invept(VMX_EPT_EXTENT_CONTEXT, eptp, 0);
+	else
+		ept_sync_global();
 }
 
 static __always_inline void vmcs_check16(unsigned long field)
-- 
2.9.4

* [PATCH v1 08/13] KVM: VMX: call ept_sync_global() with enable_ept only
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (6 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 07/13] KVM: VMX: drop enable_ept check from ept_sync_context() David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-14 10:25   ` Paolo Bonzini
  2017-08-04 13:14 ` [PATCH v1 09/13] KVM: VMX: drop unnecessary function declarations David Hildenbrand
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

ept_* functions should only be called with enable_ept set.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/vmx.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4709032..11dfdca 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3512,7 +3512,8 @@ static int hardware_enable(void)
 		wrmsrl(MSR_IA32_FEATURE_CONTROL, old | test_bits);
 	}
 	kvm_cpu_vmxon(phys_addr);
-	ept_sync_global();
+	if (enable_ept)
+		ept_sync_global();
 
 	return 0;
 }
-- 
2.9.4

* [PATCH v1 09/13] KVM: VMX: drop unnecessary function declarations
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (7 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 08/13] KVM: VMX: call ept_sync_global() with enable_ept only David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 10/13] KVM: nVMX: no need to set vcpu->cpu when switching vmcs David Hildenbrand
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/vmx.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 11dfdca..fe5c407 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -895,14 +895,12 @@ static bool nested_ept_ad_enabled(struct kvm_vcpu *vcpu);
 static unsigned long nested_ept_get_cr3(struct kvm_vcpu *vcpu);
 static u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa);
 static bool vmx_xsaves_supported(void);
-static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr);
 static void vmx_set_segment(struct kvm_vcpu *vcpu,
 			    struct kvm_segment *var, int seg);
 static void vmx_get_segment(struct kvm_vcpu *vcpu,
 			    struct kvm_segment *var, int seg);
 static bool guest_state_valid(struct kvm_vcpu *vcpu);
 static u32 vmx_segment_access_rights(struct kvm_segment *var);
-static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx);
 static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx);
 static int alloc_identity_pagetable(struct kvm *kvm);
 static bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
-- 
2.9.4

* [PATCH v1 10/13] KVM: nVMX: no need to set vcpu->cpu when switching vmcs
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (8 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 09/13] KVM: VMX: drop unnecessary function declarations David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 11/13] KVM: VMX: cleanup init_rmode_identity_map() David Hildenbrand
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

vcpu->cpu is not cleared across a vmx_vcpu_put()/vmx_vcpu_load() pair,
so setting it again here can be dropped.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/vmx.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index fe5c407..71f0023 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9186,7 +9186,6 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
 	vmx->loaded_vmcs = vmcs;
 	vmx_vcpu_put(vcpu);
 	vmx_vcpu_load(vcpu, cpu);
-	vcpu->cpu = cpu;
 	put_cpu();
 }
 
-- 
2.9.4

* [PATCH v1 11/13] KVM: VMX: cleanup init_rmode_identity_map()
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (9 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 10/13] KVM: nVMX: no need to set vcpu->cpu when switching vmcs David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 12/13] KVM: x86: document special identity map address value David Hildenbrand
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

No need for another enable_ept check. kvm->arch.ept_identity_map_addr
only has to be initialized once. Keeping alloc_identity_pagetable() as a
separate function is overkill, and dropping BUG_ONs is always nice.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/vmx.c | 26 ++++----------------------
 1 file changed, 4 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 71f0023..78c66a7 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -902,7 +902,6 @@ static void vmx_get_segment(struct kvm_vcpu *vcpu,
 static bool guest_state_valid(struct kvm_vcpu *vcpu);
 static u32 vmx_segment_access_rights(struct kvm_segment *var);
 static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx);
-static int alloc_identity_pagetable(struct kvm *kvm);
 static bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
 static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
 static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
@@ -4739,18 +4738,18 @@ static int init_rmode_identity_map(struct kvm *kvm)
 	kvm_pfn_t identity_map_pfn;
 	u32 tmp;
 
-	if (!enable_ept)
-		return 0;
-
 	/* Protect kvm->arch.ept_identity_pagetable_done. */
 	mutex_lock(&kvm->slots_lock);
 
 	if (likely(kvm->arch.ept_identity_pagetable_done))
 		goto out2;
 
+	if (!kvm->arch.ept_identity_map_addr)
+		kvm->arch.ept_identity_map_addr = VMX_EPT_IDENTITY_PAGETABLE_ADDR;
 	identity_map_pfn = kvm->arch.ept_identity_map_addr >> PAGE_SHIFT;
 
-	r = alloc_identity_pagetable(kvm);
+	r = __x86_set_memory_region(kvm, IDENTITY_PAGETABLE_PRIVATE_MEMSLOT,
+				    kvm->arch.ept_identity_map_addr, PAGE_SIZE);
 	if (r < 0)
 		goto out2;
 
@@ -4822,20 +4821,6 @@ static int alloc_apic_access_page(struct kvm *kvm)
 	return r;
 }
 
-static int alloc_identity_pagetable(struct kvm *kvm)
-{
-	/* Called with kvm->slots_lock held. */
-
-	int r = 0;
-
-	BUG_ON(kvm->arch.ept_identity_pagetable_done);
-
-	r = __x86_set_memory_region(kvm, IDENTITY_PAGETABLE_PRIVATE_MEMSLOT,
-				    kvm->arch.ept_identity_map_addr, PAGE_SIZE);
-
-	return r;
-}
-
 static int allocate_vpid(void)
 {
 	int vpid;
@@ -9276,9 +9261,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	}
 
 	if (enable_ept) {
-		if (!kvm->arch.ept_identity_map_addr)
-			kvm->arch.ept_identity_map_addr =
-				VMX_EPT_IDENTITY_PAGETABLE_ADDR;
 		err = init_rmode_identity_map(kvm);
 		if (err)
 			goto free_vmcs;
-- 
2.9.4

* [PATCH v1 12/13] KVM: x86: document special identity map address value
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (10 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 11/13] KVM: VMX: cleanup init_rmode_identity_map() David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-04 13:14 ` [PATCH v1 13/13] KVM: x86: allow setting identity map addr with no vcpus only David Hildenbrand
  2017-08-10 17:58 ` [PATCH v1 00/13] KVM x86/VMX cleanups Radim Krčmář
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Setting it to 0 resets it to the default value; let's document this.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 Documentation/virtual/kvm/api.txt | 3 +++
 1 file changed, 3 insertions(+)
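
For illustration, an untested userspace sketch (vm_fd is a hypothetical
KVM VM file descriptor; needs <sys/ioctl.h> and <linux/kvm.h>):

	__u64 ident_addr = 0;	/* 0: reset to the default 0xfffbc000 */

	if (ioctl(vm_fd, KVM_SET_IDENTITY_MAP_ADDR, &ident_addr) < 0)
		perror("KVM_SET_IDENTITY_MAP_ADDR");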

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index e63a35f..22bc5a0 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1124,6 +1124,9 @@ guest physical address space and must not conflict with any memory slot
 or any mmio address.  The guest may malfunction if it accesses this memory
 region.
 
+Setting the address to 0 will result in resetting the address to its default
+(0xfffbc000).
+
 This ioctl is required on Intel-based hosts.  This is needed on Intel hardware
 because of a quirk in the virtualization implementation (see the internals
 documentation when it pops into existence).
-- 
2.9.4

* [PATCH v1 13/13] KVM: x86: allow setting identity map addr with no vcpus only
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (11 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 12/13] KVM: x86: document special identity map address value David Hildenbrand
@ 2017-08-04 13:14 ` David Hildenbrand
  2017-08-10 17:58 ` [PATCH v1 00/13] KVM x86/VMX cleanups Radim Krčmář
  13 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-04 13:14 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Changing it after vCPUs have been created doesn't make much sense and
would only result in inconsistencies.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 Documentation/virtual/kvm/api.txt | 1 +
 arch/x86/kvm/x86.c                | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 22bc5a0..dd2dd96 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1131,6 +1131,7 @@ This ioctl is required on Intel-based hosts.  This is needed on Intel hardware
 because of a quirk in the virtualization implementation (see the internals
 documentation when it pops into existence).
 
+Fails if any VCPU has already been created.
 
 4.41 KVM_SET_BOOT_CPU_ID
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c5dfea9..86242ee 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4001,10 +4001,16 @@ long kvm_arch_vm_ioctl(struct file *filp,
 	case KVM_SET_IDENTITY_MAP_ADDR: {
 		u64 ident_addr;
 
+		mutex_lock(&kvm->lock);
+		r = -EINVAL;
+		if (kvm->created_vcpus)
+			goto set_identity_unlock;
 		r = -EFAULT;
 		if (copy_from_user(&ident_addr, argp, sizeof ident_addr))
-			goto out;
+			goto set_identity_unlock;
 		r = kvm_vm_ioctl_set_identity_map_addr(kvm, ident_addr);
+set_identity_unlock:
+		mutex_unlock(&kvm->lock);
 		break;
 	}
 	case KVM_SET_NR_MMU_PAGES:
-- 
2.9.4

* Re: [PATCH v1 00/13] KVM x86/VMX cleanups
  2017-08-04 13:14 [PATCH v1 00/13] KVM x86/VMX cleanups David Hildenbrand
                   ` (12 preceding siblings ...)
  2017-08-04 13:14 ` [PATCH v1 13/13] KVM: x86: allow setting identity map addr with no vcpus only David Hildenbrand
@ 2017-08-10 17:58 ` Radim Krčmář
  2017-08-14 10:30   ` Paolo Bonzini
  13 siblings, 1 reply; 22+ messages in thread
From: Radim Krčmář @ 2017-08-10 17:58 UTC (permalink / raw)
  To: David Hildenbrand; +Cc: kvm, Paolo Bonzini

2017-08-04 15:14+0200, David Hildenbrand:
> Some cleanups discovered while digging through the code.
> 
> David Hildenbrand (13):
>   KVM: x86: mmu: returning void in a void function is strange
>   KVM: x86: mmu: use for_each_shadow_entry_lockless()
>   KVM: x86: mmu: free_page can handle NULL
>   KVM: x86: drop BUG_ON(vcpu->kvm)
>   KVM: VMX: vmx_vcpu_setup() cannot fail
>   KVM: x86: no need to initialize vcpu members to 0
>   KVM: VMX: drop enable_ept check from ept_sync_context()
>   KVM: VMX: call ept_sync_global() with enable_ept only
>   KVM: VMX: drop unnecessary function declarations
>   KVM: nVMX: no need to set vcpu->cpu when switching vmcs
>   KVM: VMX: cleanup init_rmode_identity_map()
>   KVM: x86: document special identity map address value
>   KVM: x86: allow setting identity map addr with no vcpus only
> 
>  Documentation/virtual/kvm/api.txt |  4 ++++
>  arch/x86/kvm/mmu.c                | 26 ++++++++------------
>  arch/x86/kvm/vmx.c                | 50 ++++++++++-----------------------------
>  arch/x86/kvm/x86.c                | 21 +++++++---------
>  4 files changed, 35 insertions(+), 66 deletions(-)

Nice cleanup,

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>

* Re: [PATCH v1 02/13] KVM: x86: mmu: use for_each_shadow_entry_lockless()
  2017-08-04 13:14 ` [PATCH v1 02/13] KVM: x86: mmu: use for_each_shadow_entry_lockless() David Hildenbrand
@ 2017-08-14 10:21   ` Paolo Bonzini
  2017-08-17 10:12     ` David Hildenbrand
  0 siblings, 1 reply; 22+ messages in thread
From: Paolo Bonzini @ 2017-08-14 10:21 UTC (permalink / raw)
  To: David Hildenbrand, kvm; +Cc: Radim Krčmář

On 04/08/2017 15:14, David Hildenbrand wrote:
> Certainly easier to read.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  arch/x86/kvm/mmu.c | 21 ++++++++-------------
>  1 file changed, 8 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 9ed26cc..3769613 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3596,8 +3596,8 @@ static bool
>  walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>  {
>  	struct kvm_shadow_walk_iterator iterator;
> -	u64 sptes[PT64_ROOT_LEVEL], spte = 0ull;
> -	int root, leaf;
> +	u64 sptes[PT64_ROOT_LEVEL] = { 0 }, spte = 0ull;
> +	int level;
>  	bool reserved = false;
>  
>  	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
> @@ -3605,14 +3605,8 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>  
>  	walk_shadow_page_lockless_begin(vcpu);
>  
> -	for (shadow_walk_init(&iterator, vcpu, addr),
> -		 leaf = root = iterator.level;
> -	     shadow_walk_okay(&iterator);
> -	     __shadow_walk_next(&iterator, spte)) {
> -		spte = mmu_spte_get_lockless(iterator.sptep);
> -
> -		sptes[leaf - 1] = spte;
> -		leaf--;
> +	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
> +		sptes[iterator.level - 1] = spte;

If you leave a

	leaf = iterator.level;

(note I'm not subtracting 1 here; maybe s/leaf/last/ could be a good
idea, too)

>  		if (!is_shadow_present_pte(spte))
>  			break;
> @@ -3626,10 +3620,11 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>  	if (reserved) {
>  		pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
>  		       __func__, addr);
> -		while (root > leaf) {
> +		for (level = PT64_ROOT_LEVEL; level > 0; level--) {
> +			if (!sptes[level - 1])
> +				continue;

then here you might use

	for (level = vcpu->arch.mmu.shadow_root_level;
	     level >= leaf; level--)

instead?

Paolo

* Re: [PATCH v1 08/13] KVM: VMX: call ept_sync_global() with enable_ept only
  2017-08-04 13:14 ` [PATCH v1 08/13] KVM: VMX: call ept_sync_global() with enable_ept only David Hildenbrand
@ 2017-08-14 10:25   ` Paolo Bonzini
  2017-08-17 10:06     ` David Hildenbrand
  0 siblings, 1 reply; 22+ messages in thread
From: Paolo Bonzini @ 2017-08-14 10:25 UTC (permalink / raw)
  To: David Hildenbrand, kvm; +Cc: Radim Krčmář

On 04/08/2017 15:14, David Hildenbrand wrote:
> ept_* functions should only be called with enable_ept set.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  arch/x86/kvm/vmx.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 4709032..11dfdca 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -3512,7 +3512,8 @@ static int hardware_enable(void)
>  		wrmsrl(MSR_IA32_FEATURE_CONTROL, old | test_bits);
>  	}
>  	kvm_cpu_vmxon(phys_addr);
> -	ept_sync_global();
> +	if (enable_ept)
> +		ept_sync_global();
>  
>  	return 0;
>  }
> 

We probably should also disable EPT if !cpu_has_vmx_invept_global() in
hardware_setup, and remove the "if" from ept_sync_global().
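
Something like this, I guess (untested sketch):

	/* in hardware_setup(), next to the other enable_ept checks: */
	if (enable_ept && !cpu_has_vmx_invept_global())
		enable_ept = 0;

with ept_sync_global() then simply becoming:

	static inline void ept_sync_global(void)
	{
		__invept(VMX_EPT_EXTENT_GLOBAL, 0, 0);
	}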

Paolo

* Re: [PATCH v1 00/13] KVM x86/VMX cleanups
  2017-08-10 17:58 ` [PATCH v1 00/13] KVM x86/VMX cleanups Radim Krčmář
@ 2017-08-14 10:30   ` Paolo Bonzini
  0 siblings, 0 replies; 22+ messages in thread
From: Paolo Bonzini @ 2017-08-14 10:30 UTC (permalink / raw)
  To: Radim Krčmář, David Hildenbrand; +Cc: kvm

On 10/08/2017 19:58, Radim Krčmář wrote:
> 2017-08-04 15:14+0200, David Hildenbrand:
>> Some cleanups discovered while digging through the code.
>>
>> David Hildenbrand (13):
>>   KVM: x86: mmu: returning void in a void function is strange
>>   KVM: x86: mmu: use for_each_shadow_entry_lockless()
>>   KVM: x86: mmu: free_page can handle NULL
>>   KVM: x86: drop BUG_ON(vcpu->kvm)
>>   KVM: VMX: vmx_vcpu_setup() cannot fail
>>   KVM: x86: no need to initialize vcpu members to 0
>>   KVM: VMX: drop enable_ept check from ept_sync_context()
>>   KVM: VMX: call ept_sync_global() with enable_ept only
>>   KVM: VMX: drop unnecessary function declarations
>>   KVM: nVMX: no need to set vcpu->cpu when switching vmcs
>>   KVM: VMX: cleanup init_rmode_identity_map()
>>   KVM: x86: document special identity map address value
>>   KVM: x86: allow setting identity map addr with no vcpus only
>>
>>  Documentation/virtual/kvm/api.txt |  4 ++++
>>  arch/x86/kvm/mmu.c                | 26 ++++++++------------
>>  arch/x86/kvm/vmx.c                | 50 ++++++++++-----------------------------
>>  arch/x86/kvm/x86.c                | 21 +++++++---------
>>  4 files changed, 35 insertions(+), 66 deletions(-)
> 
> Nice cleanup,
> 
> Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>

I only had a remark on patch 2, the others look good!

Paolo

* Re: [PATCH v1 08/13] KVM: VMX: call ept_sync_global() with enable_ept only
  2017-08-14 10:25   ` Paolo Bonzini
@ 2017-08-17 10:06     ` David Hildenbrand
  0 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-17 10:06 UTC (permalink / raw)
  To: Paolo Bonzini, kvm; +Cc: Radim Krčmář

On 14.08.2017 12:25, Paolo Bonzini wrote:
> On 04/08/2017 15:14, David Hildenbrand wrote:
>> ept_* functions should only be called with enable_ept set.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  arch/x86/kvm/vmx.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 4709032..11dfdca 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -3512,7 +3512,8 @@ static int hardware_enable(void)
>>  		wrmsrl(MSR_IA32_FEATURE_CONTROL, old | test_bits);
>>  	}
>>  	kvm_cpu_vmxon(phys_addr);
>> -	ept_sync_global();
>> +	if (enable_ept)
>> +		ept_sync_global();
>>  
>>  	return 0;
>>  }
>>
> 
> We probably should also disable EPT if !cpu_has_vmx_invept_global() in
> hardware_setup, and remove the "if" from ept_sync_global().
> 
> Paolo
> 

Sounds good, I'll add a patch to the next version of this series.

-- 

Thanks,

David

* Re: [PATCH v1 02/13] KVM: x86: mmu: use for_each_shadow_entry_lockless()
  2017-08-14 10:21   ` Paolo Bonzini
@ 2017-08-17 10:12     ` David Hildenbrand
  2017-08-17 10:23       ` Paolo Bonzini
  0 siblings, 1 reply; 22+ messages in thread
From: David Hildenbrand @ 2017-08-17 10:12 UTC (permalink / raw)
  To: Paolo Bonzini, kvm; +Cc: Radim Krčmář

On 14.08.2017 12:21, Paolo Bonzini wrote:
> On 04/08/2017 15:14, David Hildenbrand wrote:
>> Certainly easier to read.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  arch/x86/kvm/mmu.c | 21 ++++++++-------------
>>  1 file changed, 8 insertions(+), 13 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 9ed26cc..3769613 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -3596,8 +3596,8 @@ static bool
>>  walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>>  {
>>  	struct kvm_shadow_walk_iterator iterator;
>> -	u64 sptes[PT64_ROOT_LEVEL], spte = 0ull;
>> -	int root, leaf;
>> +	u64 sptes[PT64_ROOT_LEVEL] = { 0 }, spte = 0ull;
>> +	int level;
>>  	bool reserved = false;
>>  
>>  	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
>> @@ -3605,14 +3605,8 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>>  
>>  	walk_shadow_page_lockless_begin(vcpu);
>>  
>> -	for (shadow_walk_init(&iterator, vcpu, addr),
>> -		 leaf = root = iterator.level;
>> -	     shadow_walk_okay(&iterator);
>> -	     __shadow_walk_next(&iterator, spte)) {
>> -		spte = mmu_spte_get_lockless(iterator.sptep);
>> -
>> -		sptes[leaf - 1] = spte;
>> -		leaf--;
>> +	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
>> +		sptes[iterator.level - 1] = spte;
> 
> If you leave a
> 
> 	leaf = iterator.level;
> 
> (note I'm note subtracting 1 here; maybe s/leaf/last/ could be a good
> idea, too)
> 
>>  		if (!is_shadow_present_pte(spte))
>>  			break;
>> @@ -3626,10 +3620,11 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>>  	if (reserved) {
>>  		pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
>>  		       __func__, addr);
>> -		while (root > leaf) {
>> +		for (level = PT64_ROOT_LEVEL; level > 0; level--) {
>> +			if (!sptes[level - 1])
>> +				continue;
> 
> then here you might use
> 
> 	for (level = vcpu->arch.mmu.shadow_root_level;
> 	     level >= leaf; level--)

I also had that in mind, but shadow_walk_init() tells me that
using vcpu->arch.mmu.shadow_root_level might not be correct?

Alternative 1: get rid of this debug output completely
Alternative 2: add dump_shadow_entries(struct kvm_vcpu *vcpu, u64 addr)
(we might dump entries that are now different on the second walk, but I
highly doubt that this is relevant)
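
Rough, untested sketch of alternative 2, reusing the existing lockless
walk helpers:

	static void dump_shadow_entries(struct kvm_vcpu *vcpu, u64 addr)
	{
		struct kvm_shadow_walk_iterator iterator;
		u64 spte;

		walk_shadow_page_lockless_begin(vcpu);
		for_each_shadow_entry_lockless(vcpu, addr, iterator, spte)
			pr_err("------ spte 0x%llx level %d.\n",
			       spte, iterator.level);
		walk_shadow_page_lockless_end(vcpu);
	}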

> 
> instead?
> 
> Paolo
> 


-- 

Thanks,

David

* Re: [PATCH v1 02/13] KVM: x86: mmu: use for_each_shadow_entry_lockless()
  2017-08-17 10:12     ` David Hildenbrand
@ 2017-08-17 10:23       ` Paolo Bonzini
  2017-08-17 10:39         ` David Hildenbrand
  0 siblings, 1 reply; 22+ messages in thread
From: Paolo Bonzini @ 2017-08-17 10:23 UTC (permalink / raw)
  To: David Hildenbrand, kvm; +Cc: Radim Krčmář

On 17/08/2017 12:12, David Hildenbrand wrote:
> On 14.08.2017 12:21, Paolo Bonzini wrote:
>> On 04/08/2017 15:14, David Hildenbrand wrote:
>>> Certainly easier to read.
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>>  arch/x86/kvm/mmu.c | 21 ++++++++-------------
>>>  1 file changed, 8 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 9ed26cc..3769613 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -3596,8 +3596,8 @@ static bool
>>>  walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>>>  {
>>>  	struct kvm_shadow_walk_iterator iterator;
>>> -	u64 sptes[PT64_ROOT_LEVEL], spte = 0ull;
>>> -	int root, leaf;
>>> +	u64 sptes[PT64_ROOT_LEVEL] = { 0 }, spte = 0ull;
>>> +	int level;
>>>  	bool reserved = false;
>>>  
>>>  	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
>>> @@ -3605,14 +3605,8 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>>>  
>>>  	walk_shadow_page_lockless_begin(vcpu);
>>>  
>>> -	for (shadow_walk_init(&iterator, vcpu, addr),
>>> -		 leaf = root = iterator.level;
>>> -	     shadow_walk_okay(&iterator);
>>> -	     __shadow_walk_next(&iterator, spte)) {
>>> -		spte = mmu_spte_get_lockless(iterator.sptep);
>>> -
>>> -		sptes[leaf - 1] = spte;
>>> -		leaf--;
>>> +	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
>>> +		sptes[iterator.level - 1] = spte;
>>
>> If you leave a
>>
>> 	leaf = iterator.level;
>>
>> (note I'm not subtracting 1 here; maybe s/leaf/last/ could be a good
>> idea, too)
>>
>>>  		if (!is_shadow_present_pte(spte))
>>>  			break;
>>> @@ -3626,10 +3620,11 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>>>  	if (reserved) {
>>>  		pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
>>>  		       __func__, addr);
>>> -		while (root > leaf) {
>>> +		for (level = PT64_ROOT_LEVEL; level > 0; level--) {
>>> +			if (!sptes[level - 1])
>>> +				continue;
>>
>> then here you might use
>>
>> 	for (level = vcpu->arch.mmu.shadow_root_level;
>> 	     level >= leaf; level--)
> 
> I also had that in mind, but shadow_walk_init() tells me that
> using vcpu->arch.mmu.shadow_root_level might not be correct?
> 
> Alternative 1: get rid of this debug output completely
> Alternative 2: add dump_shadow_entries(struct kvm_vcpu *vcpu, u64 addr)
> (we might dump entries that are now different on the second walk, but I
> highly doubt that this is relevant)

Hmm, I might just ask you to drop this patch, after all.

Paolo

* Re: [PATCH v1 02/13] KVM: x86: mmu: use for_each_shadow_entry_lockless()
  2017-08-17 10:23       ` Paolo Bonzini
@ 2017-08-17 10:39         ` David Hildenbrand
  0 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2017-08-17 10:39 UTC (permalink / raw)
  To: Paolo Bonzini, kvm; +Cc: Radim Krčmář


>>> then here you might use
>>>
>>> 	for (level = vcpu->arch.mmu.shadow_root_level;
>>> 	     level >= leaf; level--)
>>
>> I also had that in mind, but shadow_walk_init() tells me that
>> using vcpu->arch.mmu.shadow_root_level might not be correct?
>>
>> Alternative 1: get rid of this debug output completely
>> Alternative 2: add dump_shadow_entries(struct kvm_vcpu *vcpu, u64 addr)
>> (we might dump entries that are now different on the second walk, but I
>> highly doubt that this is relevant)
> 
> Hmm, I might just ask you to drop this patch, after all.

Sure, I just found for_each_shadow_entry_lockless() to be a better fit
here. I'll drop it.

> 
> Paolo
> 


-- 

Thanks,

David
