linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] KVM: MMU: pending MMU and nEPT patches
@ 2017-08-11 16:52 Paolo Bonzini
  2017-08-11 16:52 ` [PATCH 1/3] KVM: x86: simplify ept_misconfig Paolo Bonzini
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Paolo Bonzini @ 2017-08-11 16:52 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar, david

This is a cleaned up combination of the patch I just sent, plus
Brijesh's changes that I've taken out of kvm/next.  Patch 3 is only
lightly tested (it does pass kvm-unit-tests though).

Paolo

Brijesh Singh (1):
  KVM: x86: Avoid guest page table walk when gpa_available is set

Paolo Bonzini (2):
  KVM: x86: simplify ept_misconfig
  KVM: x86: fix use of L1 MMIO areas in nested guests

 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/mmu.c              | 19 ++++++++++++++++++-
 arch/x86/kvm/paging_tmpl.h      |  3 +--
 arch/x86/kvm/svm.c              |  3 +--
 arch/x86/kvm/vmx.c              | 28 ++++++++++++----------------
 arch/x86/kvm/x86.c              | 19 ++++++-------------
 arch/x86/kvm/x86.h              |  6 +++++-
 7 files changed, 45 insertions(+), 36 deletions(-)

-- 
1.8.3.1

* [PATCH 1/3] KVM: x86: simplify ept_misconfig
  2017-08-11 16:52 [PATCH 0/3] KVM: MMU: pending MMU and nEPT patches Paolo Bonzini
@ 2017-08-11 16:52 ` Paolo Bonzini
  2017-08-12 23:31   ` Wanpeng Li
                     ` (2 more replies)
  2017-08-11 16:52 ` [PATCH 2/3] KVM: x86: Avoid guest page table walk when gpa_available is set Paolo Bonzini
  2017-08-11 16:52 ` [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests Paolo Bonzini
  2 siblings, 3 replies; 16+ messages in thread
From: Paolo Bonzini @ 2017-08-11 16:52 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar, david

Calling handle_mmio_page_fault() has been unnecessary since commit
e9ee956e311d ("KVM: x86: MMU: Move handle_mmio_page_fault() call to
kvm_mmu_page_fault()", 2016-02-22)
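
As a side note for readers, the dispatch this patch collapses to can be sketched outside the kernel. The stub names and outcome encoding below are illustrative, not KVM's, but the convention is real: kvm_mmu_page_fault() returns a negative value only on error, so the caller can forward any non-negative result and treat a negative one as a genuine EPT misconfiguration.

```c
#include <assert.h>

/* Illustrative stand-in for kvm_mmu_page_fault(): a non-negative return
 * means the fault was handled (0 = exit to user space, 1 = re-enter the
 * guest); a negative value means an error. */
static int mmu_page_fault_stub(int outcome)
{
	return outcome;
}

/* Sketch of the simplified handle_ept_misconfig() tail after this patch. */
static int ept_misconfig_sketch(int outcome)
{
	int ret = mmu_page_fault_stub(outcome);

	if (ret >= 0)
		return ret;	/* MMIO (or spurious) fault fully handled */

	/* Only a real EPT misconfig reaches this point; KVM warns here. */
	return -1;
}
```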

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index df8d2f127508..45fb0ea78ee8 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6410,17 +6410,10 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
 		return kvm_skip_emulated_instruction(vcpu);
 	}
 
-	ret = handle_mmio_page_fault(vcpu, gpa, true);
 	vcpu->arch.gpa_available = true;
-	if (likely(ret == RET_MMIO_PF_EMULATE))
-		return x86_emulate_instruction(vcpu, gpa, 0, NULL, 0) ==
-					      EMULATE_DONE;
-
-	if (unlikely(ret == RET_MMIO_PF_INVALID))
-		return kvm_mmu_page_fault(vcpu, gpa, 0, NULL, 0);
-
-	if (unlikely(ret == RET_MMIO_PF_RETRY))
-		return 1;
+	ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
+	if (ret >= 0)
+		return ret;
 
 	/* It is the real ept misconfig */
 	WARN_ON(1);
-- 
1.8.3.1

* [PATCH 2/3] KVM: x86: Avoid guest page table walk when gpa_available is set
  2017-08-11 16:52 [PATCH 0/3] KVM: MMU: pending MMU and nEPT patches Paolo Bonzini
  2017-08-11 16:52 ` [PATCH 1/3] KVM: x86: simplify ept_misconfig Paolo Bonzini
@ 2017-08-11 16:52 ` Paolo Bonzini
  2017-08-12 23:32   ` Wanpeng Li
  2017-08-17  7:58   ` David Hildenbrand
  2017-08-11 16:52 ` [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests Paolo Bonzini
  2 siblings, 2 replies; 16+ messages in thread
From: Paolo Bonzini @ 2017-08-11 16:52 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar, david, Brijesh Singh

From: Brijesh Singh <brijesh.singh@amd.com>

When a guest causes a page fault which requires emulation, the
vcpu->arch.gpa_available flag is set to indicate that cr2 contains a
valid GPA.

Currently, emulator_read_write_onepage() makes use of the gpa_available
flag to avoid a guest page walk for known MMIO regions. Let's not limit
the gpa_available optimization to just MMIO regions. This patch extends
the check to avoid the page walk whenever the gpa_available flag is set.
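
As an illustration of the new reuse condition (a standalone sketch, not kernel code; the helper name and fixed 4 KiB page size are assumptions of the example), the cached GPA can stand in for a guest page walk only when the emulated access has the same offset within the page as the faulting address:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define PAGE_MASK (~(PAGE_SIZE - 1))

typedef uint64_t gva_t;
typedef uint64_t gpa_t;

/* Sketch of the check emulator_read_write_onepage() gains in this patch:
 * the GPA recorded at fault time can replace a page walk only if the
 * page offsets of the emulated address and the cached GPA agree. */
static bool can_reuse_cached_gpa(bool gpa_available, gva_t addr, gpa_t gpa_val)
{
	return gpa_available &&
	       (addr & ~PAGE_MASK) == (gpa_val & ~PAGE_MASK);
}
```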

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
[Fix EPT=0 according to Wanpeng Li's fix, plus ensure VMX also uses the
 new code. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/mmu.c              |  9 +++++++++
 arch/x86/kvm/svm.c              |  3 +--
 arch/x86/kvm/vmx.c              |  3 ---
 arch/x86/kvm/x86.c              | 19 ++++++-------------
 5 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9e4862e0e978..6db0ed9cf59e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -685,8 +685,9 @@ struct kvm_vcpu_arch {
 	int pending_ioapic_eoi;
 	int pending_external_vector;
 
-	/* GPA available (AMD only) */
+	/* GPA available */
 	bool gpa_available;
+	gpa_t gpa_val;
 
 	/* be preempted when it's in kernel-mode(cpl=0) */
 	bool preempted_in_kernel;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5339d83916bf..f5c3f8e7d29f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4829,6 +4829,15 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 	enum emulation_result er;
 	bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
 
+	/*
+	 * With shadow page tables, fault_address contains a GVA
+	 * or nested GPA.
+	 */
+	if (vcpu->arch.mmu.direct_map) {
+		vcpu->arch.gpa_available = true;
+		vcpu->arch.gpa_val = cr2;
+	}
+
 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
 		r = handle_mmio_page_fault(vcpu, cr2, direct);
 		if (r == RET_MMIO_PF_EMULATE) {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1fa9ee5660f4..c5c6b182cddf 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4235,8 +4235,7 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 	u32 exit_code = svm->vmcb->control.exit_code;
 
 	trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
-
-	vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
+	vcpu->arch.gpa_available = false;
 
 	if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
 		vcpu->arch.cr0 = svm->vmcb->save.cr0;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 45fb0ea78ee8..79efb00dd70d 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6393,9 +6393,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	error_code |= (exit_qualification & 0x100) != 0 ?
 	       PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
 
-	vcpu->arch.gpa_available = true;
 	vcpu->arch.exit_qualification = exit_qualification;
-
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }
 
@@ -6410,7 +6408,6 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
 		return kvm_skip_emulated_instruction(vcpu);
 	}
 
-	vcpu->arch.gpa_available = true;
 	ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
 	if (ret >= 0)
 		return ret;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e40a779711a9..bb05b705c295 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4657,25 +4657,18 @@ static int emulator_read_write_onepage(unsigned long addr, void *val,
 	 */
 	if (vcpu->arch.gpa_available &&
 	    emulator_can_use_gpa(ctxt) &&
-	    vcpu_is_mmio_gpa(vcpu, addr, exception->address, write) &&
-	    (addr & ~PAGE_MASK) == (exception->address & ~PAGE_MASK)) {
-		gpa = exception->address;
-		goto mmio;
+	    (addr & ~PAGE_MASK) == (vcpu->arch.gpa_val & ~PAGE_MASK)) {
+		gpa = vcpu->arch.gpa_val;
+		ret = vcpu_is_mmio_gpa(vcpu, addr, gpa, write);
+	} else {
+		ret = vcpu_mmio_gva_to_gpa(vcpu, addr, &gpa, exception, write);
 	}
 
-	ret = vcpu_mmio_gva_to_gpa(vcpu, addr, &gpa, exception, write);
-
 	if (ret < 0)
 		return X86EMUL_PROPAGATE_FAULT;
-
-	/* For APIC access vmexit */
-	if (ret)
-		goto mmio;
-
-	if (ops->read_write_emulate(vcpu, gpa, val, bytes))
+	if (!ret && ops->read_write_emulate(vcpu, gpa, val, bytes))
 		return X86EMUL_CONTINUE;
 
-mmio:
 	/*
 	 * Is this MMIO handled locally?
 	 */
-- 
1.8.3.1

* [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests
  2017-08-11 16:52 [PATCH 0/3] KVM: MMU: pending MMU and nEPT patches Paolo Bonzini
  2017-08-11 16:52 ` [PATCH 1/3] KVM: x86: simplify ept_misconfig Paolo Bonzini
  2017-08-11 16:52 ` [PATCH 2/3] KVM: x86: Avoid guest page table walk when gpa_available is set Paolo Bonzini
@ 2017-08-11 16:52 ` Paolo Bonzini
  2017-08-13  0:11   ` Wanpeng Li
  2017-08-17  8:11   ` David Hildenbrand
  2 siblings, 2 replies; 16+ messages in thread
From: Paolo Bonzini @ 2017-08-11 16:52 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar, david

There is currently some confusion between nested and L1 GPAs.  The
assignment to "direct" in kvm_mmu_page_fault tries to fix that, but
it is not enough.  What this patch does is fence off the MMIO cache
completely when using shadow nested page tables, since we have neither
a GVA nor an L1 GPA to put in the cache.  This also allows some
simplifications in kvm_mmu_page_fault and FNAME(page_fault).

The EPT misconfig likewise does not have an L1 GPA to pass to
kvm_io_bus_write, so that must be skipped for guest mode.
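
The fencing in vcpu_cache_mmio_info() can be sketched as standalone C (simplified names modeled on the diff, not the kernel's actual helpers): with shadow nested page tables the incoming "GVA" is really an nGPA, so a zero is cached instead, and because the match helper rejects a zero cached value, the GVA side of the MMIO cache simply never hits.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* What vcpu_cache_mmio_info() stores: when the MMU is nested, cache 0
 * rather than the nGPA, fencing the cache off for this vCPU state. */
static uint64_t cached_mmio_gva(bool nested, uint64_t gva)
{
	return nested ? 0 : (gva & PAGE_MASK);
}

/* Simplified match helper: a zero cached value can never match, since
 * real cached entries are page-aligned nonzero addresses. */
static bool match_mmio_gva(uint64_t cached, uint64_t gva)
{
	return cached != 0 && cached == (gva & PAGE_MASK);
}
```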

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu.c         | 10 +++++++++-
 arch/x86/kvm/paging_tmpl.h |  3 +--
 arch/x86/kvm/vmx.c         | 12 +++++++++---
 arch/x86/kvm/x86.h         |  6 +++++-
 4 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f5c3f8e7d29f..f3665947bcc5 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3598,6 +3598,14 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
 
 static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 {
+	/*
+	 * A nested guest cannot use the MMIO cache if it is using nested
+	 * page tables, because cr2 is a nGPA while the cache stores L1's
+	 * physical addresses.
+	 */
+	if (mmu_is_nested(vcpu))
+		return false;
+
 	if (direct)
 		return vcpu_match_mmio_gpa(vcpu, addr);
 
@@ -4827,7 +4835,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 {
 	int r, emulation_type = EMULTYPE_RETRY;
 	enum emulation_result er;
-	bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
+	bool direct = vcpu->arch.mmu.direct_map;
 
 	/*
 	 * With shadow page tables, fault_address contains a GVA
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 3bb90ceeb52d..86b68dc5a649 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -790,8 +790,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 			 &map_writable))
 		return 0;
 
-	if (handle_abnormal_pfn(vcpu, mmu_is_nested(vcpu) ? 0 : addr,
-				walker.gfn, pfn, walker.pte_access, &r))
+	if (handle_abnormal_pfn(vcpu, addr, walker.gfn, pfn, walker.pte_access, &r))
 		return r;
 
 	/*
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 79efb00dd70d..e3989461f938 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6402,10 +6402,16 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
 	int ret;
 	gpa_t gpa;
 
+	/*
+	 * A nested guest cannot optimize MMIO vmexits, because we have an
+	 * nGPA here instead of the required GPA.
+	 */
 	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
-	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
-		trace_kvm_fast_mmio(gpa);
-		return kvm_skip_emulated_instruction(vcpu);
+	if (!is_guest_mode(vcpu)) {
+		if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
+			trace_kvm_fast_mmio(gpa);
+			return kvm_skip_emulated_instruction(vcpu);
+		}
 	}
 
 	ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 612067074905..2383d2ce0a84 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -90,7 +90,11 @@ static inline u32 bit(int bitno)
 static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
 					gva_t gva, gfn_t gfn, unsigned access)
 {
-	vcpu->arch.mmio_gva = gva & PAGE_MASK;
+	/*
+	 * If this is a shadow nested page table, the "GVA" is
+	 * actually a nested GPA.
+	 */
+	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
 	vcpu->arch.access = access;
 	vcpu->arch.mmio_gfn = gfn;
 	vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
-- 
1.8.3.1

* Re: [PATCH 1/3] KVM: x86: simplify ept_misconfig
  2017-08-11 16:52 ` [PATCH 1/3] KVM: x86: simplify ept_misconfig Paolo Bonzini
@ 2017-08-12 23:31   ` Wanpeng Li
  2017-08-17  7:43   ` David Hildenbrand
  2017-08-17  8:06   ` David Hildenbrand
  2 siblings, 0 replies; 16+ messages in thread
From: Wanpeng Li @ 2017-08-12 23:31 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, Wanpeng Li, Radim Krcmar, David Hildenbrand

2017-08-12 0:52 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> Calling handle_mmio_page_fault() has been unnecessary since commit
> e9ee956e311d ("KVM: x86: MMU: Move handle_mmio_page_fault() call to
> kvm_mmu_page_fault()", 2016-02-22)
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>

> ---
>  arch/x86/kvm/vmx.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index df8d2f127508..45fb0ea78ee8 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6410,17 +6410,10 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>                 return kvm_skip_emulated_instruction(vcpu);
>         }
>
> -       ret = handle_mmio_page_fault(vcpu, gpa, true);
>         vcpu->arch.gpa_available = true;
> -       if (likely(ret == RET_MMIO_PF_EMULATE))
> -               return x86_emulate_instruction(vcpu, gpa, 0, NULL, 0) ==
> -                                             EMULATE_DONE;
> -
> -       if (unlikely(ret == RET_MMIO_PF_INVALID))
> -               return kvm_mmu_page_fault(vcpu, gpa, 0, NULL, 0);
> -
> -       if (unlikely(ret == RET_MMIO_PF_RETRY))
> -               return 1;
> +       ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
> +       if (ret >= 0)
> +               return ret;
>
>         /* It is the real ept misconfig */
>         WARN_ON(1);
> --
> 1.8.3.1
>
>

* Re: [PATCH 2/3] KVM: x86: Avoid guest page table walk when gpa_available is set
  2017-08-11 16:52 ` [PATCH 2/3] KVM: x86: Avoid guest page table walk when gpa_available is set Paolo Bonzini
@ 2017-08-12 23:32   ` Wanpeng Li
  2017-08-17  7:58   ` David Hildenbrand
  1 sibling, 0 replies; 16+ messages in thread
From: Wanpeng Li @ 2017-08-12 23:32 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, Wanpeng Li, Radim Krcmar, David Hildenbrand,
	Brijesh Singh

2017-08-12 0:52 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> From: Brijesh Singh <brijesh.singh@amd.com>
>
> When a guest causes a page fault which requires emulation, the
> vcpu->arch.gpa_available flag is set to indicate that cr2 contains a
> valid GPA.
>
> Currently, emulator_read_write_onepage() makes use of the gpa_available
> flag to avoid a guest page walk for known MMIO regions. Let's not limit
> the gpa_available optimization to just MMIO regions. This patch extends
> the check to avoid the page walk whenever the gpa_available flag is set.
>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> [Fix EPT=0 according to Wanpeng Li's fix, plus ensure VMX also uses the
>  new code. - Paolo]
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>

> ---
>  arch/x86/include/asm/kvm_host.h |  3 ++-
>  arch/x86/kvm/mmu.c              |  9 +++++++++
>  arch/x86/kvm/svm.c              |  3 +--
>  arch/x86/kvm/vmx.c              |  3 ---
>  arch/x86/kvm/x86.c              | 19 ++++++-------------
>  5 files changed, 18 insertions(+), 19 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 9e4862e0e978..6db0ed9cf59e 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -685,8 +685,9 @@ struct kvm_vcpu_arch {
>         int pending_ioapic_eoi;
>         int pending_external_vector;
>
> -       /* GPA available (AMD only) */
> +       /* GPA available */
>         bool gpa_available;
> +       gpa_t gpa_val;
>
>         /* be preempted when it's in kernel-mode(cpl=0) */
>         bool preempted_in_kernel;
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 5339d83916bf..f5c3f8e7d29f 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4829,6 +4829,15 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
>         enum emulation_result er;
>         bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
>
> +       /*
> +        * With shadow page tables, fault_address contains a GVA
> +        * or nested GPA.
> +        */
> +       if (vcpu->arch.mmu.direct_map) {
> +               vcpu->arch.gpa_available = true;
> +               vcpu->arch.gpa_val = cr2;
> +       }
> +
>         if (unlikely(error_code & PFERR_RSVD_MASK)) {
>                 r = handle_mmio_page_fault(vcpu, cr2, direct);
>                 if (r == RET_MMIO_PF_EMULATE) {
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 1fa9ee5660f4..c5c6b182cddf 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -4235,8 +4235,7 @@ static int handle_exit(struct kvm_vcpu *vcpu)
>         u32 exit_code = svm->vmcb->control.exit_code;
>
>         trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
> -
> -       vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
> +       vcpu->arch.gpa_available = false;
>
>         if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
>                 vcpu->arch.cr0 = svm->vmcb->save.cr0;
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 45fb0ea78ee8..79efb00dd70d 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6393,9 +6393,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
>         error_code |= (exit_qualification & 0x100) != 0 ?
>                PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
>
> -       vcpu->arch.gpa_available = true;
>         vcpu->arch.exit_qualification = exit_qualification;
> -
>         return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
>  }
>
> @@ -6410,7 +6408,6 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>                 return kvm_skip_emulated_instruction(vcpu);
>         }
>
> -       vcpu->arch.gpa_available = true;
>         ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
>         if (ret >= 0)
>                 return ret;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e40a779711a9..bb05b705c295 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4657,25 +4657,18 @@ static int emulator_read_write_onepage(unsigned long addr, void *val,
>          */
>         if (vcpu->arch.gpa_available &&
>             emulator_can_use_gpa(ctxt) &&
> -           vcpu_is_mmio_gpa(vcpu, addr, exception->address, write) &&
> -           (addr & ~PAGE_MASK) == (exception->address & ~PAGE_MASK)) {
> -               gpa = exception->address;
> -               goto mmio;
> +           (addr & ~PAGE_MASK) == (vcpu->arch.gpa_val & ~PAGE_MASK)) {
> +               gpa = vcpu->arch.gpa_val;
> +               ret = vcpu_is_mmio_gpa(vcpu, addr, gpa, write);
> +       } else {
> +               ret = vcpu_mmio_gva_to_gpa(vcpu, addr, &gpa, exception, write);
>         }
>
> -       ret = vcpu_mmio_gva_to_gpa(vcpu, addr, &gpa, exception, write);
> -
>         if (ret < 0)
>                 return X86EMUL_PROPAGATE_FAULT;
> -
> -       /* For APIC access vmexit */
> -       if (ret)
> -               goto mmio;
> -
> -       if (ops->read_write_emulate(vcpu, gpa, val, bytes))
> +       if (!ret && ops->read_write_emulate(vcpu, gpa, val, bytes))
>                 return X86EMUL_CONTINUE;
>
> -mmio:
>         /*
>          * Is this MMIO handled locally?
>          */
> --
> 1.8.3.1
>
>

* Re: [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests
  2017-08-11 16:52 ` [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests Paolo Bonzini
@ 2017-08-13  0:11   ` Wanpeng Li
  2017-08-17  8:11   ` David Hildenbrand
  1 sibling, 0 replies; 16+ messages in thread
From: Wanpeng Li @ 2017-08-13  0:11 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, Wanpeng Li, Radim Krcmar, David Hildenbrand

2017-08-12 0:52 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> There is currently some confusion between nested and L1 GPAs.  The
> assignment to "direct" in kvm_mmu_page_fault tries to fix that, but
> it is not enough.  What this patch does is fence off the MMIO cache
> completely when using shadow nested page tables, since we have neither
> a GVA nor an L1 GPA to put in the cache.  This also allows some
> simplifications in kvm_mmu_page_fault and FNAME(page_fault).
>
> The EPT misconfig likewise does not have an L1 GPA to pass to
> kvm_io_bus_write, so that must be skipped for guest mode.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>

> ---
>  arch/x86/kvm/mmu.c         | 10 +++++++++-
>  arch/x86/kvm/paging_tmpl.h |  3 +--
>  arch/x86/kvm/vmx.c         | 12 +++++++++---
>  arch/x86/kvm/x86.h         |  6 +++++-
>  4 files changed, 24 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index f5c3f8e7d29f..f3665947bcc5 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3598,6 +3598,14 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
>
>  static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
>  {
> +       /*
> +        * A nested guest cannot use the MMIO cache if it is using nested
> +        * page tables, because cr2 is a nGPA while the cache stores L1's
> +        * physical addresses.
> +        */
> +       if (mmu_is_nested(vcpu))
> +               return false;
> +
>         if (direct)
>                 return vcpu_match_mmio_gpa(vcpu, addr);
>
> @@ -4827,7 +4835,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
>  {
>         int r, emulation_type = EMULTYPE_RETRY;
>         enum emulation_result er;
> -       bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
> +       bool direct = vcpu->arch.mmu.direct_map;
>
>         /*
>          * With shadow page tables, fault_address contains a GVA
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 3bb90ceeb52d..86b68dc5a649 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -790,8 +790,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
>                          &map_writable))
>                 return 0;
>
> -       if (handle_abnormal_pfn(vcpu, mmu_is_nested(vcpu) ? 0 : addr,
> -                               walker.gfn, pfn, walker.pte_access, &r))
> +       if (handle_abnormal_pfn(vcpu, addr, walker.gfn, pfn, walker.pte_access, &r))
>                 return r;
>
>         /*
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 79efb00dd70d..e3989461f938 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6402,10 +6402,16 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>         int ret;
>         gpa_t gpa;
>
> +       /*
> +        * A nested guest cannot optimize MMIO vmexits, because we have an
> +        * nGPA here instead of the required GPA.
> +        */
>         gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> -       if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> -               trace_kvm_fast_mmio(gpa);
> -               return kvm_skip_emulated_instruction(vcpu);
> +       if (!is_guest_mode(vcpu)) {
> +               if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> +                       trace_kvm_fast_mmio(gpa);
> +                       return kvm_skip_emulated_instruction(vcpu);
> +               }
>         }
>
>         ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 612067074905..2383d2ce0a84 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -90,7 +90,11 @@ static inline u32 bit(int bitno)
>  static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
>                                         gva_t gva, gfn_t gfn, unsigned access)
>  {
> -       vcpu->arch.mmio_gva = gva & PAGE_MASK;
> +       /*
> +        * If this is a shadow nested page table, the "GVA" is
> +        * actually a nested GPA.
> +        */
> +       vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
>         vcpu->arch.access = access;
>         vcpu->arch.mmio_gfn = gfn;
>         vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
> --
> 1.8.3.1
>

* Re: [PATCH 1/3] KVM: x86: simplify ept_misconfig
  2017-08-11 16:52 ` [PATCH 1/3] KVM: x86: simplify ept_misconfig Paolo Bonzini
  2017-08-12 23:31   ` Wanpeng Li
@ 2017-08-17  7:43   ` David Hildenbrand
  2017-08-17  8:06   ` David Hildenbrand
  2 siblings, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2017-08-17  7:43 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar

On 11.08.2017 18:52, Paolo Bonzini wrote:
> Calling handle_mmio_page_fault() has been unnecessary since commit
> e9ee956e311d ("KVM: x86: MMU: Move handle_mmio_page_fault() call to
> kvm_mmu_page_fault()", 2016-02-22)
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/vmx.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index df8d2f127508..45fb0ea78ee8 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6410,17 +6410,10 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>  		return kvm_skip_emulated_instruction(vcpu);
>  	}
>  
> -	ret = handle_mmio_page_fault(vcpu, gpa, true);
>  	vcpu->arch.gpa_available = true;
> -	if (likely(ret == RET_MMIO_PF_EMULATE))
> -		return x86_emulate_instruction(vcpu, gpa, 0, NULL, 0) ==
> -					      EMULATE_DONE;
> -
> -	if (unlikely(ret == RET_MMIO_PF_INVALID))
> -		return kvm_mmu_page_fault(vcpu, gpa, 0, NULL, 0);
> -
> -	if (unlikely(ret == RET_MMIO_PF_RETRY))
> -		return 1;
> +	ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
> +	if (ret >= 0)
> +		return ret;
>  
>  	/* It is the real ept misconfig */
>  	WARN_ON(1);
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 

Thanks,

David

* Re: [PATCH 2/3] KVM: x86: Avoid guest page table walk when gpa_available is set
  2017-08-11 16:52 ` [PATCH 2/3] KVM: x86: Avoid guest page table walk when gpa_available is set Paolo Bonzini
  2017-08-12 23:32   ` Wanpeng Li
@ 2017-08-17  7:58   ` David Hildenbrand
  1 sibling, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2017-08-17  7:58 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar, Brijesh Singh

On 11.08.2017 18:52, Paolo Bonzini wrote:
> From: Brijesh Singh <brijesh.singh@amd.com>
> 
> When a guest causes a page fault which requires emulation, the
> vcpu->arch.gpa_available flag is set to indicate that cr2 contains a
> valid GPA.
> 
> Currently, emulator_read_write_onepage() makes use of the gpa_available
> flag to avoid a guest page walk for known MMIO regions. Let's not limit
> the gpa_available optimization to just MMIO regions. This patch extends
> the check to avoid the page walk whenever the gpa_available flag is set.

Can we move that to a separate patch?

> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> [Fix EPT=0 according to Wanpeng Li's fix, plus ensure VMX also uses the
>  new code. - Paolo]
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

[...]
> +++ b/arch/x86/kvm/svm.c
> @@ -4235,8 +4235,7 @@ static int handle_exit(struct kvm_vcpu *vcpu)
>  	u32 exit_code = svm->vmcb->control.exit_code;
>  
>  	trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
> -
> -	vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
> +	vcpu->arch.gpa_available = false;

Can we move resetting to false to vcpu_enter_guest()? It should be reset
before handle_exit() is called for both cases.

(maybe an additional patch)

>  
>  	if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
>  		vcpu->arch.cr0 = svm->vmcb->save.cr0;
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 45fb0ea78ee8..79efb00dd70d 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6393,9 +6393,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
>  	error_code |= (exit_qualification & 0x100) != 0 ?
>  	       PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
>  
> -	vcpu->arch.gpa_available = true;
>  	vcpu->arch.exit_qualification = exit_qualification;
> -
>  	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
>  }
>  
> @@ -6410,7 +6408,6 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>  		return kvm_skip_emulated_instruction(vcpu);
>  	}
>  
> -	vcpu->arch.gpa_available = true;
>  	ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
>  	if (ret >= 0)
>  		return ret;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e40a779711a9..bb05b705c295 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4657,25 +4657,18 @@ static int emulator_read_write_onepage(unsigned long addr, void *val,
>  	 */
>  	if (vcpu->arch.gpa_available &&
>  	    emulator_can_use_gpa(ctxt) &&
> -	    vcpu_is_mmio_gpa(vcpu, addr, exception->address, write) &&
> -	    (addr & ~PAGE_MASK) == (exception->address & ~PAGE_MASK)) {
> -		gpa = exception->address;
> -		goto mmio;
> +	    (addr & ~PAGE_MASK) == (vcpu->arch.gpa_val & ~PAGE_MASK)) {
> +		gpa = vcpu->arch.gpa_val;
> +		ret = vcpu_is_mmio_gpa(vcpu, addr, gpa, write);
> +	} else {
> +		ret = vcpu_mmio_gva_to_gpa(vcpu, addr, &gpa, exception, write);
>  	}
>  
> -	ret = vcpu_mmio_gva_to_gpa(vcpu, addr, &gpa, exception, write);
> -
>  	if (ret < 0)
>  		return X86EMUL_PROPAGATE_FAULT;
> -
> -	/* For APIC access vmexit */
> -	if (ret)
> -		goto mmio;
> -
> -	if (ops->read_write_emulate(vcpu, gpa, val, bytes))
> +	if (!ret && ops->read_write_emulate(vcpu, gpa, val, bytes))
>  		return X86EMUL_CONTINUE;
>  
> -mmio:
>  	/*
>  	 * Is this MMIO handled locally?
>  	 */
> 

Looks good to me.

-- 

Thanks,

David

* Re: [PATCH 1/3] KVM: x86: simplify ept_misconfig
  2017-08-11 16:52 ` [PATCH 1/3] KVM: x86: simplify ept_misconfig Paolo Bonzini
  2017-08-12 23:31   ` Wanpeng Li
  2017-08-17  7:43   ` David Hildenbrand
@ 2017-08-17  8:06   ` David Hildenbrand
  2 siblings, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2017-08-17  8:06 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar

On 11.08.2017 18:52, Paolo Bonzini wrote:
> Calling handle_mmio_page_fault() has been unnecessary since commit
> e9ee956e311d ("KVM: x86: MMU: Move handle_mmio_page_fault() call to
> kvm_mmu_page_fault()", 2016-02-22)
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/vmx.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index df8d2f127508..45fb0ea78ee8 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6410,17 +6410,10 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>  		return kvm_skip_emulated_instruction(vcpu);
>  	}
>  
> -	ret = handle_mmio_page_fault(vcpu, gpa, true);
>  	vcpu->arch.gpa_available = true;
> -	if (likely(ret == RET_MMIO_PF_EMULATE))
> -		return x86_emulate_instruction(vcpu, gpa, 0, NULL, 0) ==
> -					      EMULATE_DONE;
> -
> -	if (unlikely(ret == RET_MMIO_PF_INVALID))
> -		return kvm_mmu_page_fault(vcpu, gpa, 0, NULL, 0);
> -
> -	if (unlikely(ret == RET_MMIO_PF_RETRY))
> -		return 1;
> +	ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
> +	if (ret >= 0)
> +		return ret;
>  
>  	/* It is the real ept misconfig */
>  	WARN_ON(1);
> 

I think we can now un-export handle_mmio_page_fault(), as it is only
used in arch/x86/kvm/mmu.c.

-- 

Thanks,

David


* Re: [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests
  2017-08-11 16:52 ` [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests Paolo Bonzini
  2017-08-13  0:11   ` Wanpeng Li
@ 2017-08-17  8:11   ` David Hildenbrand
  1 sibling, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2017-08-17  8:11 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar

On 11.08.2017 18:52, Paolo Bonzini wrote:
> There is currently some confusion between nested and L1 GPAs.  The
> assignment to "direct" in kvm_mmu_page_fault tries to fix that, but
> it is not enough.  What this patch does is fence off the MMIO cache
> completely when using shadow nested page tables, since we have neither
> a GVA nor an L1 GPA to put in the cache.  This also allows some
> simplifications in kvm_mmu_page_fault and FNAME(page_fault).
> 
> The EPT misconfig likewise does not have an L1 GPA to pass to
> kvm_io_bus_write, so that must be skipped for guest mode.

The complexity of the MMU and non-trivial corner cases like this scare me
every time :)

> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/mmu.c         | 10 +++++++++-
>  arch/x86/kvm/paging_tmpl.h |  3 +--
>  arch/x86/kvm/vmx.c         | 12 +++++++++---
>  arch/x86/kvm/x86.h         |  6 +++++-
>  4 files changed, 24 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index f5c3f8e7d29f..f3665947bcc5 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3598,6 +3598,14 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
>  
>  static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
>  {
> +	/*
> +	 * A nested guest cannot use the MMIO cache if it is using nested
> +	 * page tables, because cr2 is a nGPA while the cache stores L1's
> +	 * physical addresses.
> +	 */
> +	if (mmu_is_nested(vcpu))
> +		return false;
> +
>  	if (direct)
>  		return vcpu_match_mmio_gpa(vcpu, addr);
>  
> @@ -4827,7 +4835,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
>  {
>  	int r, emulation_type = EMULTYPE_RETRY;
>  	enum emulation_result er;
> -	bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
> +	bool direct = vcpu->arch.mmu.direct_map;
>  
>  	/*
>  	 * With shadow page tables, fault_address contains a GVA
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 3bb90ceeb52d..86b68dc5a649 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -790,8 +790,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
>  			 &map_writable))
>  		return 0;
>  
> -	if (handle_abnormal_pfn(vcpu, mmu_is_nested(vcpu) ? 0 : addr,
> -				walker.gfn, pfn, walker.pte_access, &r))
> +	if (handle_abnormal_pfn(vcpu, addr, walker.gfn, pfn, walker.pte_access, &r))
>  		return r;
>  
>  	/*
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 79efb00dd70d..e3989461f938 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6402,10 +6402,16 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>  	int ret;
>  	gpa_t gpa;
>  
> +	/*
> +	 * A nested guest cannot optimize MMIO vmexits, because we have an
> +	 * nGPA here instead of the required GPA.
> +	 */
>  	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> -	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> -		trace_kvm_fast_mmio(gpa);
> -		return kvm_skip_emulated_instruction(vcpu);
> +	if (!is_guest_mode(vcpu)) {
> +		if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {

if (!is_guest_mode(vcpu) &&
    !kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL) ...


could be done, so the following code wouldn't need to change.

> +			trace_kvm_fast_mmio(gpa);
> +			return kvm_skip_emulated_instruction(vcpu);
> +		}
>  	}
>  
>  	ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 612067074905..2383d2ce0a84 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -90,7 +90,11 @@ static inline u32 bit(int bitno)
>  static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
>  					gva_t gva, gfn_t gfn, unsigned access)
>  {
> -	vcpu->arch.mmio_gva = gva & PAGE_MASK;
> +	/*
> +	 * If this is a shadow nested page table, the "GVA" is
> +	 * actually a nested GPA.

nGPA ? (to stick to terminology)

> +	 */
> +	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
>  	vcpu->arch.access = access;
>  	vcpu->arch.mmio_gfn = gfn;
>  	vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
> 


-- 

Thanks,

David


* Re: [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests
  2017-08-17 16:36 ` [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests Paolo Bonzini
  2017-08-18  7:59   ` David Hildenbrand
@ 2019-02-05 19:54   ` Jim Mattson
  1 sibling, 0 replies; 16+ messages in thread
From: Jim Mattson @ 2019-02-05 19:54 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: LKML, kvm list, Wanpeng Li, Radim Krčmář,
	David Hildenbrand

On Thu, Aug 17, 2017 at 9:37 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> There is currently some confusion between nested and L1 GPAs.  The
> assignment to "direct" in kvm_mmu_page_fault tries to fix that, but
> it is not enough.  What this patch does is fence off the MMIO cache
> completely when using shadow nested page tables, since we have neither
> a GVA nor an L1 GPA to put in the cache.  This also allows some
> simplifications in kvm_mmu_page_fault and FNAME(page_fault).
>
> The EPT misconfig likewise does not have an L1 GPA to pass to
> kvm_io_bus_write, so that must be skipped for guest mode.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>         v1->v2: standardize on "nGPA" moniker, replace nested ifs with &&
>
>  arch/x86/kvm/mmu.c         | 10 +++++++++-
>  arch/x86/kvm/paging_tmpl.h |  3 +--
>  arch/x86/kvm/vmx.c         |  7 ++++++-
>  arch/x86/kvm/x86.h         |  6 +++++-
>  4 files changed, 21 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index a2c592b14617..02f8c507b160 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3596,6 +3596,14 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
>
>  static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
>  {
> +       /*
> +        * A nested guest cannot use the MMIO cache if it is using nested
> +        * page tables, because cr2 is a nGPA while the cache stores L1's
> +        * physical addresses.
> +        */
> +       if (mmu_is_nested(vcpu))
> +               return false;
> +
>         if (direct)
>                 return vcpu_match_mmio_gpa(vcpu, addr);
>
> @@ -4841,7 +4849,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
>  {
>         int r, emulation_type = EMULTYPE_RETRY;
>         enum emulation_result er;
> -       bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
> +       bool direct = vcpu->arch.mmu.direct_map;
>
>         /* With shadow page tables, fault_address contains a GVA or nGPA.  */
>         if (vcpu->arch.mmu.direct_map) {
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 3bb90ceeb52d..86b68dc5a649 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -790,8 +790,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
>                          &map_writable))
>                 return 0;
>
> -       if (handle_abnormal_pfn(vcpu, mmu_is_nested(vcpu) ? 0 : addr,
> -                               walker.gfn, pfn, walker.pte_access, &r))
> +       if (handle_abnormal_pfn(vcpu, addr, walker.gfn, pfn, walker.pte_access, &r))
>                 return r;
>
>         /*
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index e2c8b33c35d1..61389ad784e4 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6402,8 +6402,13 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>         int ret;
>         gpa_t gpa;
>
> +       /*
> +        * A nested guest cannot optimize MMIO vmexits, because we have an
> +        * nGPA here instead of the required GPA.
> +        */
>         gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> -       if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> +       if (!is_guest_mode(vcpu) &&
> +           !kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
>                 trace_kvm_fast_mmio(gpa);
>                 return kvm_skip_emulated_instruction(vcpu);
>         }
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 612067074905..113460370a7f 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -90,7 +90,11 @@ static inline u32 bit(int bitno)
>  static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
>                                         gva_t gva, gfn_t gfn, unsigned access)
>  {
> -       vcpu->arch.mmio_gva = gva & PAGE_MASK;
> +       /*
> +        * If this is a shadow nested page table, the "GVA" is
> +        * actually a nGPA.
> +        */
> +       vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
>         vcpu->arch.access = access;
>         vcpu->arch.mmio_gfn = gfn;
>         vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
> --
> 1.8.3.1

Should this patch be considered for the stable branches?


* Re: [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests
  2017-08-18 12:35     ` Radim Krčmář
@ 2017-08-18 12:38       ` Paolo Bonzini
  0 siblings, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2017-08-18 12:38 UTC (permalink / raw)
  To: Radim Krčmář, David Hildenbrand
  Cc: linux-kernel, kvm, wanpeng.li

On 18/08/2017 14:35, Radim Krčmář wrote:
> 
>>> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
>>> @@ -90,7 +90,11 @@ static inline u32 bit(int bitno)
>>>  static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
>>>  					gva_t gva, gfn_t gfn, unsigned access)
>>>  {
>>> -	vcpu->arch.mmio_gva = gva & PAGE_MASK;
>>> +	/*
>>> +	 * If this is a shadow nested page table, the "GVA" is
>> s/"GVA"/GVA/ ?
> I prefer the former; we're talking about a "gva_t gva" that isn't a GVA. :)

Exactly. :)

Paolo


* Re: [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests
  2017-08-18  7:59   ` David Hildenbrand
@ 2017-08-18 12:35     ` Radim Krčmář
  2017-08-18 12:38       ` Paolo Bonzini
  0 siblings, 1 reply; 16+ messages in thread
From: Radim Krčmář @ 2017-08-18 12:35 UTC (permalink / raw)
  To: David Hildenbrand; +Cc: Paolo Bonzini, linux-kernel, kvm, wanpeng.li

2017-08-18 09:59+0200, David Hildenbrand:
> On 17.08.2017 18:36, Paolo Bonzini wrote:
> > There is currently some confusion between nested and L1 GPAs.  The
> > assignment to "direct" in kvm_mmu_page_fault tries to fix that, but
> > it is not enough.  What this patch does is fence off the MMIO cache
> > completely when using shadow nested page tables, since we have neither
> > a GVA nor an L1 GPA to put in the cache.  This also allows some
> > simplifications in kvm_mmu_page_fault and FNAME(page_fault).
> > 
> > The EPT misconfig likewise does not have an L1 GPA to pass to
> > kvm_io_bus_write, so that must be skipped for guest mode.
> > 
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> > 	v1->v2: standardize on "nGPA" moniker, replace nested ifs with &&
> > 
> >  arch/x86/kvm/mmu.c         | 10 +++++++++-
> >  arch/x86/kvm/paging_tmpl.h |  3 +--
> >  arch/x86/kvm/vmx.c         |  7 ++++++-
> >  arch/x86/kvm/x86.h         |  6 +++++-
> >  4 files changed, 21 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index a2c592b14617..02f8c507b160 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -3596,6 +3596,14 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
> >  
> >  static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
> >  {
> > +	/*
> > +	 * A nested guest cannot use the MMIO cache if it is using nested
> > +	 * page tables, because cr2 is a nGPA while the cache stores L1's
> > +	 * physical addresses.
> 
> ... "while the cache stores GPAs" ?

Makes sense, changed while applying.

> > diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> > @@ -90,7 +90,11 @@ static inline u32 bit(int bitno)
> >  static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
> >  					gva_t gva, gfn_t gfn, unsigned access)
> >  {
> > -	vcpu->arch.mmio_gva = gva & PAGE_MASK;
> > +	/*
> > +	 * If this is a shadow nested page table, the "GVA" is
> 
> s/"GVA"/GVA/ ?

I prefer the former; we're talking about a "gva_t gva" that isn't a GVA. :)


* Re: [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests
  2017-08-17 16:36 ` [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests Paolo Bonzini
@ 2017-08-18  7:59   ` David Hildenbrand
  2017-08-18 12:35     ` Radim Krčmář
  2019-02-05 19:54   ` Jim Mattson
  1 sibling, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2017-08-18  7:59 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar

On 17.08.2017 18:36, Paolo Bonzini wrote:
> There is currently some confusion between nested and L1 GPAs.  The
> assignment to "direct" in kvm_mmu_page_fault tries to fix that, but
> it is not enough.  What this patch does is fence off the MMIO cache
> completely when using shadow nested page tables, since we have neither
> a GVA nor an L1 GPA to put in the cache.  This also allows some
> simplifications in kvm_mmu_page_fault and FNAME(page_fault).
> 
> The EPT misconfig likewise does not have an L1 GPA to pass to
> kvm_io_bus_write, so that must be skipped for guest mode.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> 	v1->v2: standardize on "nGPA" moniker, replace nested ifs with &&
> 
>  arch/x86/kvm/mmu.c         | 10 +++++++++-
>  arch/x86/kvm/paging_tmpl.h |  3 +--
>  arch/x86/kvm/vmx.c         |  7 ++++++-
>  arch/x86/kvm/x86.h         |  6 +++++-
>  4 files changed, 21 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index a2c592b14617..02f8c507b160 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3596,6 +3596,14 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
>  
>  static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
>  {
> +	/*
> +	 * A nested guest cannot use the MMIO cache if it is using nested
> +	 * page tables, because cr2 is a nGPA while the cache stores L1's
> +	 * physical addresses.

... "while the cache stores GPAs" ?

> +	 */
> +	if (mmu_is_nested(vcpu))
> +		return false;
> +
>  	if (direct)
>  		return vcpu_match_mmio_gpa(vcpu, addr);
>  
> @@ -4841,7 +4849,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
>  {
>  	int r, emulation_type = EMULTYPE_RETRY;
>  	enum emulation_result er;
> -	bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
> +	bool direct = vcpu->arch.mmu.direct_map;
>  
>  	/* With shadow page tables, fault_address contains a GVA or nGPA.  */
>  	if (vcpu->arch.mmu.direct_map) {
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 3bb90ceeb52d..86b68dc5a649 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -790,8 +790,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
>  			 &map_writable))
>  		return 0;
>  
> -	if (handle_abnormal_pfn(vcpu, mmu_is_nested(vcpu) ? 0 : addr,
> -				walker.gfn, pfn, walker.pte_access, &r))
> +	if (handle_abnormal_pfn(vcpu, addr, walker.gfn, pfn, walker.pte_access, &r))
>  		return r;
>  
>  	/*
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index e2c8b33c35d1..61389ad784e4 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6402,8 +6402,13 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>  	int ret;
>  	gpa_t gpa;
>  
> +	/*
> +	 * A nested guest cannot optimize MMIO vmexits, because we have an
> +	 * nGPA here instead of the required GPA.
> +	 */
>  	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> -	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> +	if (!is_guest_mode(vcpu) &&
> +	    !kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
>  		trace_kvm_fast_mmio(gpa);
>  		return kvm_skip_emulated_instruction(vcpu);
>  	}
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 612067074905..113460370a7f 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -90,7 +90,11 @@ static inline u32 bit(int bitno)
>  static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
>  					gva_t gva, gfn_t gfn, unsigned access)
>  {
> -	vcpu->arch.mmio_gva = gva & PAGE_MASK;
> +	/*
> +	 * If this is a shadow nested page table, the "GVA" is

s/"GVA"/GVA/ ?

> +	 * actually a nGPA.
> +	 */
> +	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
>  	vcpu->arch.access = access;
>  	vcpu->arch.mmio_gfn = gfn;
>  	vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 

Thanks,

David


* [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests
  2017-08-17 16:36 [PATCH v2 0/3] KVM: MMU: pending MMU and nEPT patches Paolo Bonzini
@ 2017-08-17 16:36 ` Paolo Bonzini
  2017-08-18  7:59   ` David Hildenbrand
  2019-02-05 19:54   ` Jim Mattson
  0 siblings, 2 replies; 16+ messages in thread
From: Paolo Bonzini @ 2017-08-17 16:36 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: wanpeng.li, rkrcmar, david

There is currently some confusion between nested and L1 GPAs.  The
assignment to "direct" in kvm_mmu_page_fault tries to fix that, but
it is not enough.  What this patch does is fence off the MMIO cache
completely when using shadow nested page tables, since we have neither
a GVA nor an L1 GPA to put in the cache.  This also allows some
simplifications in kvm_mmu_page_fault and FNAME(page_fault).

The EPT misconfig likewise does not have an L1 GPA to pass to
kvm_io_bus_write, so that must be skipped for guest mode.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
	v1->v2: standardize on "nGPA" moniker, replace nested ifs with &&

 arch/x86/kvm/mmu.c         | 10 +++++++++-
 arch/x86/kvm/paging_tmpl.h |  3 +--
 arch/x86/kvm/vmx.c         |  7 ++++++-
 arch/x86/kvm/x86.h         |  6 +++++-
 4 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a2c592b14617..02f8c507b160 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3596,6 +3596,14 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
 
 static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 {
+	/*
+	 * A nested guest cannot use the MMIO cache if it is using nested
+	 * page tables, because cr2 is a nGPA while the cache stores L1's
+	 * physical addresses.
+	 */
+	if (mmu_is_nested(vcpu))
+		return false;
+
 	if (direct)
 		return vcpu_match_mmio_gpa(vcpu, addr);
 
@@ -4841,7 +4849,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 {
 	int r, emulation_type = EMULTYPE_RETRY;
 	enum emulation_result er;
-	bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
+	bool direct = vcpu->arch.mmu.direct_map;
 
 	/* With shadow page tables, fault_address contains a GVA or nGPA.  */
 	if (vcpu->arch.mmu.direct_map) {
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 3bb90ceeb52d..86b68dc5a649 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -790,8 +790,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 			 &map_writable))
 		return 0;
 
-	if (handle_abnormal_pfn(vcpu, mmu_is_nested(vcpu) ? 0 : addr,
-				walker.gfn, pfn, walker.pte_access, &r))
+	if (handle_abnormal_pfn(vcpu, addr, walker.gfn, pfn, walker.pte_access, &r))
 		return r;
 
 	/*
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e2c8b33c35d1..61389ad784e4 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6402,8 +6402,13 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
 	int ret;
 	gpa_t gpa;
 
+	/*
+	 * A nested guest cannot optimize MMIO vmexits, because we have an
+	 * nGPA here instead of the required GPA.
+	 */
 	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
-	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
+	if (!is_guest_mode(vcpu) &&
+	    !kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
 		trace_kvm_fast_mmio(gpa);
 		return kvm_skip_emulated_instruction(vcpu);
 	}
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 612067074905..113460370a7f 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -90,7 +90,11 @@ static inline u32 bit(int bitno)
 static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
 					gva_t gva, gfn_t gfn, unsigned access)
 {
-	vcpu->arch.mmio_gva = gva & PAGE_MASK;
+	/*
+	 * If this is a shadow nested page table, the "GVA" is
+	 * actually a nGPA.
+	 */
+	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
 	vcpu->arch.access = access;
 	vcpu->arch.mmio_gfn = gfn;
 	vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
-- 
1.8.3.1


end of thread, other threads:[~2019-02-05 19:54 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
2017-08-11 16:52 [PATCH 0/3] KVM: MMU: pending MMU and nEPT patches Paolo Bonzini
2017-08-11 16:52 ` [PATCH 1/3] KVM: x86: simplify ept_misconfig Paolo Bonzini
2017-08-12 23:31   ` Wanpeng Li
2017-08-17  7:43   ` David Hildenbrand
2017-08-17  8:06   ` David Hildenbrand
2017-08-11 16:52 ` [PATCH 2/3] KVM: x86: Avoid guest page table walk when gpa_available is set Paolo Bonzini
2017-08-12 23:32   ` Wanpeng Li
2017-08-17  7:58   ` David Hildenbrand
2017-08-11 16:52 ` [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests Paolo Bonzini
2017-08-13  0:11   ` Wanpeng Li
2017-08-17  8:11   ` David Hildenbrand
2017-08-17 16:36 [PATCH v2 0/3] KVM: MMU: pending MMU and nEPT patches Paolo Bonzini
2017-08-17 16:36 ` [PATCH 3/3] KVM: x86: fix use of L1 MMIO areas in nested guests Paolo Bonzini
2017-08-18  7:59   ` David Hildenbrand
2017-08-18 12:35     ` Radim Krčmář
2017-08-18 12:38       ` Paolo Bonzini
2019-02-05 19:54   ` Jim Mattson
