* [PATCH 0/3] KVM: x86/mmu: minor MMIO SPTE cleanup
@ 2019-08-01 20:35 Sean Christopherson
From: Sean Christopherson @ 2019-08-01 20:35 UTC
  To: Paolo Bonzini, Radim Krčmář; +Cc: kvm, linux-kernel

A few loosely related MMIO SPTE patches to get rid of a bit of cruft that
has been a source of annoyance when mucking around in the MMIO code.

No functional changes intended.

Sean Christopherson (3):
  KVM: x86: Rename access permissions cache member in struct
    kvm_vcpu_arch
  KVM: x86/mmu: Add explicit access mask for MMIO SPTEs
  KVM: x86/mmu: Consolidate "is MMIO SPTE" code

 Documentation/virtual/kvm/mmu.txt |  4 ++--
 arch/x86/include/asm/kvm_host.h   |  2 +-
 arch/x86/kvm/mmu.c                | 31 +++++++++++++++++--------------
 arch/x86/kvm/mmu.h                |  2 +-
 arch/x86/kvm/vmx/vmx.c            |  2 +-
 arch/x86/kvm/x86.c                |  2 +-
 arch/x86/kvm/x86.h                |  2 +-
 7 files changed, 24 insertions(+), 21 deletions(-)

-- 
2.22.0



* [PATCH 1/3] KVM: x86: Rename access permissions cache member in struct kvm_vcpu_arch
From: Sean Christopherson @ 2019-08-01 20:35 UTC
  To: Paolo Bonzini, Radim Krčmář; +Cc: kvm, linux-kernel

Rename "access" to "mmio_access" to match the other MMIO cache members
and to make it more obvious that it's tracking the access permissions
for the MMIO cache.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
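For context, here's the consumer of the renamed field on the fast MMIO
lookup path, condensed from the vcpu_mmio_gva_to_gpa() hunk below (the
surrounding checks and the tracepoint are elided, so read this as a
sketch rather than the exact code):

	/* Reuse the cached MMIO translation iff the cached permissions
	 * (now vcpu->arch.mmio_access) still allow this access, which
	 * avoids walking the guest page tables on every MMIO emulation.
	 */
	if (vcpu_match_mmio_gva(vcpu, gva) &&
	    !permission_fault(vcpu, vcpu->arch.walk_mmu,
			      vcpu->arch.mmio_access, 0, access)) {
		*gpa = vcpu->arch.mmio_gfn << PAGE_SHIFT |
		       (gva & (PAGE_SIZE - 1));
		return 1;	/* emulate as MMIO */
	}
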
 Documentation/virtual/kvm/mmu.txt | 4 ++--
 arch/x86/include/asm/kvm_host.h   | 2 +-
 arch/x86/kvm/x86.c                | 2 +-
 arch/x86/kvm/x86.h                | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
index 2efe0efc516e..4ade0df93552 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -294,7 +294,7 @@ Handling a page fault is performed as follows:
    - walk shadow page table
    - check for valid generation number in the spte (see "Fast invalidation of
      MMIO sptes" below)
-   - cache the information to vcpu->arch.mmio_gva, vcpu->arch.access and
+   - cache the information to vcpu->arch.mmio_gva, vcpu->arch.mmio_access and
      vcpu->arch.mmio_gfn, and call the emulator
  - If both P bit and R/W bit of error code are set, this could possibly
    be handled as a "fast page fault" (fixed without taking the MMU lock).  See
@@ -304,7 +304,7 @@ Handling a page fault is performed as follows:
    - if permissions are insufficient, reflect the fault back to the guest
  - determine the host page
    - if this is an mmio request, there is no host page; cache the info to
-     vcpu->arch.mmio_gva, vcpu->arch.access and vcpu->arch.mmio_gfn
+     vcpu->arch.mmio_gva, vcpu->arch.mmio_access and vcpu->arch.mmio_gfn
  - walk the shadow page table to find the spte for the translation,
    instantiating missing intermediate page tables as necessary
    - If this is an mmio request, cache the mmio info to the spte and set some
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e74f0711eaaf..2d931d208f29 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -715,7 +715,7 @@ struct kvm_vcpu_arch {
 
 	/* Cache MMIO info */
 	u64 mmio_gva;
-	unsigned access;
+	unsigned mmio_access;
 	gfn_t mmio_gfn;
 	u64 mmio_gen;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 01e18caac825..85c374e4d622 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5370,7 +5370,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 	 */
 	if (vcpu_match_mmio_gva(vcpu, gva)
 	    && !permission_fault(vcpu, vcpu->arch.walk_mmu,
-				 vcpu->arch.access, 0, access)) {
+				 vcpu->arch.mmio_access, 0, access)) {
 		*gpa = vcpu->arch.mmio_gfn << PAGE_SHIFT |
 					(gva & (PAGE_SIZE - 1));
 		trace_vcpu_match_mmio(gva, *gpa, write, false);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 6594020c0691..b5274e2a53cf 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -196,7 +196,7 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
 	 * actually a nGPA.
 	 */
 	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
-	vcpu->arch.access = access;
+	vcpu->arch.mmio_access = access;
 	vcpu->arch.mmio_gfn = gfn;
 	vcpu->arch.mmio_gen = gen;
 }
-- 
2.22.0



* [PATCH 2/3] KVM: x86/mmu: Add explicit access mask for MMIO SPTEs
From: Sean Christopherson @ 2019-08-01 20:35 UTC
  To: Paolo Bonzini, Radim Krčmář; +Cc: kvm, linux-kernel

When shadow paging is enabled, KVM tracks the allowed access type for
MMIO SPTEs so that it can do a permission check on an MMIO GVA cache hit
without having to walk the guest's page tables.  The tracking is done
by retaining the WRITE and USER bits of the access when inserting the
MMIO SPTE (read access is implicitly allowed), which allows the MMIO
page fault handler to retrieve and cache the WRITE/USER bits from the
SPTE.

Unfortunately for EPT, the mask used to retain the WRITE/USER bits is
hardcoded using the x86 paging versions of the bits.  This funkiness
happens to work because KVM uses a completely different mask/value for
MMIO SPTEs when EPT is enabled, and the EPT mask/value just happens to
overlap exactly with the x86 WRITE/USER bits[*].

Explicitly define the access mask for MMIO SPTEs to accurately reflect
that EPT does not want to incorporate any access bits into the SPTE, and
so that KVM isn't subtly relying on EPT's WX bits always being set in
MMIO SPTEs, e.g. attempting to use other bits for experimentation breaks
horribly.

Note, vcpu_match_mmio_gva() explicitly prevents matching GVA==0, and all
TDP flows explicitly set mmio_gva to 0, i.e. zeroing vcpu->arch.mmio_access
for EPT has no (known) functional impact.

[*] Using WX to generate EPT misconfigurations (equivalent to a reserved
    bit page fault) ensures KVM can employ its MMIO page fault tricks even
    on platforms without reserved address bits.

Fixes: ce88decffd17 ("KVM: MMU: mmio page fault support")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---

Even though KVM tracks the access permissions with an unsigned, I went
with a u64 for shadow_mmio_access_mask to match the existing masks and
because I really dislike "unsigned" :-)
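
To illustrate the round trip with the new mask (a simplified sketch of
the mark_mmio_spte()/get_mmio_spte_access() pairing in the diff below;
generation and GPA encoding are omitted):

	/* Insertion: only bits allowed by the per-mode mask are stashed
	 * in the SPTE.  Shadow paging passes ACC_WRITE_MASK |
	 * ACC_USER_MASK, EPT passes 0 since its WX bits must stay set
	 * for the misconfig trick to work.
	 */
	access &= shadow_mmio_access_mask;
	spte = shadow_mmio_value | access;

	/* Retrieval: the cached bits come straight back out, with no
	 * hardcoded knowledge of x86 vs. EPT bit layouts.
	 */
	access = spte & shadow_mmio_access_mask;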

 arch/x86/kvm/mmu.c     | 15 +++++++++------
 arch/x86/kvm/mmu.h     |  2 +-
 arch/x86/kvm/vmx/vmx.c |  2 +-
 3 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8f72526e2f68..9ab6ff9e491b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -214,6 +214,7 @@ static u64 __read_mostly shadow_accessed_mask;
 static u64 __read_mostly shadow_dirty_mask;
 static u64 __read_mostly shadow_mmio_mask;
 static u64 __read_mostly shadow_mmio_value;
+static u64 __read_mostly shadow_mmio_access_mask;
 static u64 __read_mostly shadow_present_mask;
 static u64 __read_mostly shadow_me_mask;
 
@@ -299,11 +300,13 @@ static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
 	kvm_flush_remote_tlbs_with_range(kvm, &range);
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
+void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask)
 {
+	BUG_ON((u64)(unsigned)access_mask != access_mask);
 	BUG_ON((mmio_mask & mmio_value) != mmio_value);
 	shadow_mmio_value = mmio_value | SPTE_SPECIAL_MASK;
 	shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK;
+	shadow_mmio_access_mask = access_mask;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
 
@@ -389,7 +392,7 @@ static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
 	u64 mask = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
-	access &= ACC_WRITE_MASK | ACC_USER_MASK;
+	access &= shadow_mmio_access_mask;
 	mask |= shadow_mmio_value | access;
 	mask |= gpa | shadow_nonpresent_or_rsvd_mask;
 	mask |= (gpa & shadow_nonpresent_or_rsvd_mask)
@@ -418,8 +421,7 @@ static gfn_t get_mmio_spte_gfn(u64 spte)
 
 static unsigned get_mmio_spte_access(u64 spte)
 {
-	u64 mask = generation_mmio_spte_mask(MMIO_SPTE_GEN_MASK) | shadow_mmio_mask;
-	return (spte & ~mask) & ~PAGE_MASK;
+	return spte & shadow_mmio_access_mask;
 }
 
 static bool set_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
@@ -3290,7 +3292,8 @@ static bool handle_abnormal_pfn(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
 	}
 
 	if (unlikely(is_noslot_pfn(pfn)))
-		vcpu_cache_mmio_info(vcpu, gva, gfn, access);
+		vcpu_cache_mmio_info(vcpu, gva, gfn,
+				     access & shadow_mmio_access_mask);
 
 	return false;
 }
@@ -6028,7 +6031,7 @@ static void kvm_set_mmio_spte_mask(void)
 	if (IS_ENABLED(CONFIG_X86_64) && shadow_phys_bits == 52)
 		mask &= ~1ull;
 
-	kvm_mmu_set_mmio_spte_mask(mask, mask);
+	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
 }
 
 int kvm_mmu_module_init(void)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 54c2a377795b..11f8ec89433b 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -51,7 +51,7 @@ static inline u64 rsvd_bits(int s, int e)
 	return ((1ULL << (e - s + 1)) - 1) << s;
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value);
+void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask);
 
 void
 reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 074385c86c09..10faf5c91f4e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4026,7 +4026,7 @@ static void ept_set_mmio_spte_mask(void)
 	 * of an EPT paging-structure entry is 110b (write/execute).
 	 */
 	kvm_mmu_set_mmio_spte_mask(VMX_EPT_RWX_MASK,
-				   VMX_EPT_MISCONFIG_WX_VALUE);
+				   VMX_EPT_MISCONFIG_WX_VALUE, 0);
 }
 
 #define VMX_XSS_EXIT_BITMAP 0
-- 
2.22.0



* [PATCH 3/3] KVM: x86/mmu: Consolidate "is MMIO SPTE" code
From: Sean Christopherson @ 2019-08-01 20:35 UTC
  To: Paolo Bonzini, Radim Krčmář; +Cc: kvm, linux-kernel

Replace the open-coded "is MMIO SPTE" checks in the MMU warnings related
to software-based access/dirty tracking with the is_mmio_spte() helper,
moving the helper earlier in the file so that it is defined before its
new users.  This makes the code slightly more self-documenting.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
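The helper is simply the masked compare that was previously open-coded
at each warning site, e.g. (taken from the hunks below):

	static bool is_mmio_spte(u64 spte)
	{
		return (spte & shadow_mmio_mask) == shadow_mmio_value;
	}

	/* before */
	MMU_WARN_ON((spte & shadow_mmio_mask) == shadow_mmio_value);
	/* after */
	MMU_WARN_ON(is_mmio_spte(spte));
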
 arch/x86/kvm/mmu.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9ab6ff9e491b..745cbf45b32a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -310,6 +310,11 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
 
+static bool is_mmio_spte(u64 spte)
+{
+	return (spte & shadow_mmio_mask) == shadow_mmio_value;
+}
+
 static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
 {
 	return sp->role.ad_disabled;
@@ -317,19 +322,19 @@ static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
 
 static inline bool spte_ad_enabled(u64 spte)
 {
-	MMU_WARN_ON((spte & shadow_mmio_mask) == shadow_mmio_value);
+	MMU_WARN_ON(is_mmio_spte(spte));
 	return !(spte & shadow_acc_track_value);
 }
 
 static inline u64 spte_shadow_accessed_mask(u64 spte)
 {
-	MMU_WARN_ON((spte & shadow_mmio_mask) == shadow_mmio_value);
+	MMU_WARN_ON(is_mmio_spte(spte));
 	return spte_ad_enabled(spte) ? shadow_accessed_mask : 0;
 }
 
 static inline u64 spte_shadow_dirty_mask(u64 spte)
 {
-	MMU_WARN_ON((spte & shadow_mmio_mask) == shadow_mmio_value);
+	MMU_WARN_ON(is_mmio_spte(spte));
 	return spte_ad_enabled(spte) ? shadow_dirty_mask : 0;
 }
 
@@ -404,11 +409,6 @@ static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
 	mmu_spte_set(sptep, mask);
 }
 
-static bool is_mmio_spte(u64 spte)
-{
-	return (spte & shadow_mmio_mask) == shadow_mmio_value;
-}
-
 static gfn_t get_mmio_spte_gfn(u64 spte)
 {
 	u64 gpa = spte & shadow_nonpresent_or_rsvd_lower_gfn_mask;
-- 
2.22.0



* Re: [PATCH 0/3] KVM: x86/mmu: minor MMIO SPTE cleanup
From: Paolo Bonzini @ 2019-08-03  6:30 UTC
  To: Sean Christopherson, Radim Krčmář; +Cc: kvm, linux-kernel

On 01/08/19 22:35, Sean Christopherson wrote:
> A few loosely related MMIO SPTE patches to get rid of a bit of cruft that
> has been a source of annoyance when mucking around in the MMIO code.
> 
> No functional changes intended.
> 
> Sean Christopherson (3):
>   KVM: x86: Rename access permissions cache member in struct
>     kvm_vcpu_arch
>   KVM: x86/mmu: Add explicit access mask for MMIO SPTEs
>   KVM: x86/mmu: Consolidate "is MMIO SPTE" code
> 
>  Documentation/virtual/kvm/mmu.txt |  4 ++--
>  arch/x86/include/asm/kvm_host.h   |  2 +-
>  arch/x86/kvm/mmu.c                | 31 +++++++++++++++++--------------
>  arch/x86/kvm/mmu.h                |  2 +-
>  arch/x86/kvm/vmx/vmx.c            |  2 +-
>  arch/x86/kvm/x86.c                |  2 +-
>  arch/x86/kvm/x86.h                |  2 +-
>  7 files changed, 24 insertions(+), 21 deletions(-)
> 

Queued, thanks.

Paolo
