* [PATCH 0/2] KVM: x86/mmu: Minor cleanup in try_async_pf()
@ 2020-04-15 21:44 Sean Christopherson
  2020-04-15 21:44 ` [PATCH 1/2] KVM: x86/mmu: Set @writable to false for non-visible accesses by L2 Sean Christopherson
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Sean Christopherson @ 2020-04-15 21:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Rick Edgecombe

Two cleanups with no functional changes.

I'm not 100% sure whether open coding the private memslot check in
patch 2 is a good idea.  Avoiding the extra memslot lookup is nice, but
that could be done by providing e.g. kvm_is_memslot_visible().  On one
hand, I like deferring the nonexistent and invalid checks to common code,
but on the other hand it creates the possibility of missing some future
case where kvm_is_visible_gfn() adds a check that's not also incorporated
into __gfn_to_hva_many(), though that seems like a rather unlikely
scenario.

Sean Christopherson (2):
  KVM: x86/mmu: Set @writable to false for non-visible accesses by L2
  KVM: x86/mmu: Avoid an extra memslot lookup in try_async_pf() for L2

 arch/x86/kvm/mmu/mmu.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

-- 
2.26.0


^ permalink raw reply	[flat|nested] 5+ messages in thread

* [PATCH 1/2] KVM: x86/mmu: Set @writable to false for non-visible accesses by L2
  2020-04-15 21:44 [PATCH 0/2] KVM: x86/mmu: Minor cleanup in try_async_pf() Sean Christopherson
@ 2020-04-15 21:44 ` Sean Christopherson
  2020-04-16 21:33   ` Jim Mattson
  2020-04-15 21:44 ` [PATCH 2/2] KVM: x86/mmu: Avoid an extra memslot lookup in try_async_pf() for L2 Sean Christopherson
  2020-04-16 13:52 ` [PATCH 0/2] KVM: x86/mmu: Minor cleanup in try_async_pf() Paolo Bonzini
  2 siblings, 1 reply; 5+ messages in thread
From: Sean Christopherson @ 2020-04-15 21:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Rick Edgecombe

Explicitly set @writable to false in try_async_pf() if the GFN->PFN
translation is short-circuited due to the requested GFN not being
visible to L2.

Leaving @writable ('map_writable' in the callers) uninitialized is ok
in that it's never actually consumed, but one has to track it all the
way through set_spte() being short-circuited by set_mmio_spte() to
understand that the uninitialized variable is benign, and relying on
@writable being ignored is an unnecessary risk.  Explicitly setting
@writable also aligns try_async_pf() with __gfn_to_pfn_memslot().

Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c6ea6032c222..6d6cb9416179 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4090,6 +4090,7 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 	 */
 	if (is_guest_mode(vcpu) && !kvm_is_visible_gfn(vcpu->kvm, gfn)) {
 		*pfn = KVM_PFN_NOSLOT;
+		*writable = false;
 		return false;
 	}
 
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 5+ messages in thread
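The benign-but-risky pattern the patch addresses can be modeled with a minimal standalone sketch (the types, constants, and function here are simplified stand-ins for illustration, not KVM's real definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in; the real KVM_PFN_NOSLOT is a kvm_pfn_t error bit. */
#define KVM_PFN_NOSLOT (-1L)

/* Models try_async_pf()'s early exit for GFNs not visible to L2.
 * Before the patch, *writable was left untouched on this path and
 * callers relied on set_mmio_spte() short-circuiting set_spte();
 * after the patch, the out-param is always defined. */
static bool try_async_pf_model(bool gfn_visible_to_l2,
			       long *pfn, bool *writable)
{
	if (!gfn_visible_to_l2) {
		*pfn = KVM_PFN_NOSLOT;
		*writable = false;	/* the fix: define the out-param */
		return false;		/* no async page fault needed */
	}
	*pfn = 42;			/* pretend the translation succeeded */
	*writable = true;
	return false;
}
```

With the explicit store, a caller's 'map_writable' is never indeterminate on the non-visible path, regardless of whether the MMIO SPTE path ends up consuming it.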

* [PATCH 2/2] KVM: x86/mmu: Avoid an extra memslot lookup in try_async_pf() for L2
  2020-04-15 21:44 [PATCH 0/2] KVM: x86/mmu: Minor cleanup in try_async_pf() Sean Christopherson
  2020-04-15 21:44 ` [PATCH 1/2] KVM: x86/mmu: Set @writable to false for non-visible accesses by L2 Sean Christopherson
@ 2020-04-15 21:44 ` Sean Christopherson
  2020-04-16 13:52 ` [PATCH 0/2] KVM: x86/mmu: Minor cleanup in try_async_pf() Paolo Bonzini
  2 siblings, 0 replies; 5+ messages in thread
From: Sean Christopherson @ 2020-04-15 21:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Rick Edgecombe

Tweak the L2 vs. private memslot handling in try_async_pf() to avoid an
extra memslot lookup and to more precisely single out private memslots,
i.e. defer handling of nonexistent and invalid memslots to common code,
making it clear that L2 doesn't require special handling for those cases.

Opportunistically squish a multi-line comment into a single-line comment.

Note, the end result, KVM_PFN_NOSLOT, is unchanged.

Cc: Jim Mattson <jmattson@google.com>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d6cb9416179..06d0150ce53b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4082,19 +4082,16 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 			 gpa_t cr2_or_gpa, kvm_pfn_t *pfn, bool write,
 			 bool *writable)
 {
-	struct kvm_memory_slot *slot;
+	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	bool async;
 
-	/*
-	 * Don't expose private memslots to L2.
-	 */
-	if (is_guest_mode(vcpu) && !kvm_is_visible_gfn(vcpu->kvm, gfn)) {
+	/* Don't expose private memslots to L2. */
+	if (is_guest_mode(vcpu) && slot && slot->id >= KVM_USER_MEM_SLOTS) {
 		*pfn = KVM_PFN_NOSLOT;
 		*writable = false;
 		return false;
 	}
 
-	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	async = false;
 	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable);
 	if (!async)
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 5+ messages in thread
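The behavioral nuance in patch 2 is worth spelling out: the open-coded check fires only for a slot that exists and is private, whereas the old kvm_is_visible_gfn()-based check also fired for nonexistent and invalid slots; those cases now fall through to __gfn_to_pfn_memslot(), which likewise yields KVM_PFN_NOSLOT. A standalone model (simplified struct, illustrative constant values):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative values; the real ones live in KVM's headers. */
#define KVM_USER_MEM_SLOTS	512
#define KVM_MEMSLOT_INVALID	(1u << 16)

struct kvm_memory_slot {
	int id;
	unsigned int flags;
};

/* Pre-patch: early exit for any GFN not visible to L2, i.e. for a
 * nonexistent, invalid, or private slot. */
static bool old_l2_early_exit(const struct kvm_memory_slot *slot)
{
	return !(slot && slot->id < KVM_USER_MEM_SLOTS &&
		 !(slot->flags & KVM_MEMSLOT_INVALID));
}

/* Post-patch: early exit only for a slot that exists and is private;
 * nonexistent/invalid slots defer to common code. */
static bool new_l2_early_exit(const struct kvm_memory_slot *slot)
{
	return slot && slot->id >= KVM_USER_MEM_SLOTS;
}
```

The two conditions agree on private slots and on ordinary visible slots; they differ only on the nonexistent/invalid cases, where the common code produces the same KVM_PFN_NOSLOT result anyway.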

* Re: [PATCH 0/2] KVM: x86/mmu: Minor cleanup in try_async_pf()
  2020-04-15 21:44 [PATCH 0/2] KVM: x86/mmu: Minor cleanup in try_async_pf() Sean Christopherson
  2020-04-15 21:44 ` [PATCH 1/2] KVM: x86/mmu: Set @writable to false for non-visible accesses by L2 Sean Christopherson
  2020-04-15 21:44 ` [PATCH 2/2] KVM: x86/mmu: Avoid an extra memslot lookup in try_async_pf() for L2 Sean Christopherson
@ 2020-04-16 13:52 ` Paolo Bonzini
  2 siblings, 0 replies; 5+ messages in thread
From: Paolo Bonzini @ 2020-04-16 13:52 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Rick Edgecombe

On 15/04/20 23:44, Sean Christopherson wrote:
> 
> I'm not 100% on whether or not open coding the private memslot check in
> patch 2 is a good idea.  Avoiding the extra memslot lookup is nice, but
> that could be done by providing e.g. kvm_is_memslot_visible(). 

Yeah, that's better.  The patch is so small that it's even pointless to
split it in two:

From: Paolo Bonzini <pbonzini@redhat.com>
Subject: [PATCH] KVM: x86/mmu: Avoid an extra memslot lookup in try_async_pf() for L2

Create a new function kvm_is_visible_memslot() and use it from
kvm_is_visible_gfn(); use the new function in try_async_pf() too,
to avoid an extra memslot lookup.

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d6cb9416179..fe04ce843a57 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4082,19 +4082,18 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 			 gpa_t cr2_or_gpa, kvm_pfn_t *pfn, bool write,
 			 bool *writable)
 {
-	struct kvm_memory_slot *slot;
+	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	bool async;
 
 	/*
 	 * Don't expose private memslots to L2.
 	 */
-	if (is_guest_mode(vcpu) && !kvm_is_visible_gfn(vcpu->kvm, gfn)) {
+	if (is_guest_mode(vcpu) && !kvm_is_visible_memslot(slot)) {
 		*pfn = KVM_PFN_NOSLOT;
 		*writable = false;
 		return false;
 	}
 
-	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	async = false;
 	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable);
 	if (!async)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 658215f6102c..7d4f1eb70274 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1357,6 +1357,12 @@ static inline void kvm_vcpu_set_dy_eligible(struct kvm_vcpu *vcpu, bool val)
 }
 #endif /* CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT */
 
+static inline bool kvm_is_visible_memslot(struct kvm_memory_slot *memslot)
+{
+	return (memslot && memslot->id < KVM_USER_MEM_SLOTS &&
+		!(memslot->flags & KVM_MEMSLOT_INVALID));
+}
+
 struct kvm_vcpu *kvm_get_running_vcpu(void);
 struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index da8fd45e0e3e..8aa577db131e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1607,11 +1607,7 @@ bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
 {
 	struct kvm_memory_slot *memslot = gfn_to_memslot(kvm, gfn);
 
-	if (!memslot || memslot->id >= KVM_USER_MEM_SLOTS ||
-	      memslot->flags & KVM_MEMSLOT_INVALID)
-		return false;
-
-	return true;
+	return kvm_is_visible_memslot(memslot);
 }
 EXPORT_SYMBOL_GPL(kvm_is_visible_gfn);
 


^ permalink raw reply related	[flat|nested] 5+ messages in thread
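Paolo's refactor of kvm_is_visible_gfn() is a pure De Morgan rewrite: the new helper returns true exactly where the old sequence of early returns fell through to 'return true'. A quick standalone check of the equivalence (again with a simplified struct and illustrative constant values):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative values; the real ones live in KVM's headers. */
#define KVM_USER_MEM_SLOTS	512
#define KVM_MEMSLOT_INVALID	(1u << 16)

struct kvm_memory_slot {
	int id;
	unsigned int flags;
};

/* Old body of kvm_is_visible_gfn(), minus the gfn_to_memslot() lookup. */
static bool old_visible(const struct kvm_memory_slot *memslot)
{
	if (!memslot || memslot->id >= KVM_USER_MEM_SLOTS ||
	    (memslot->flags & KVM_MEMSLOT_INVALID))
		return false;
	return true;
}

/* New helper introduced by the patch above. */
static bool kvm_is_visible_memslot(const struct kvm_memory_slot *memslot)
{
	return (memslot && memslot->id < KVM_USER_MEM_SLOTS &&
		!(memslot->flags & KVM_MEMSLOT_INVALID));
}
```

Checking all four interesting cases (NULL, private id, invalid flags, ordinary slot) confirms the two predicates agree everywhere, so the refactor is behavior-preserving.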

* Re: [PATCH 1/2] KVM: x86/mmu: Set @writable to false for non-visible accesses by L2
  2020-04-15 21:44 ` [PATCH 1/2] KVM: x86/mmu: Set @writable to false for non-visible accesses by L2 Sean Christopherson
@ 2020-04-16 21:33   ` Jim Mattson
  0 siblings, 0 replies; 5+ messages in thread
From: Jim Mattson @ 2020-04-16 21:33 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel,
	kvm list, LKML, Rick Edgecombe

On Wed, Apr 15, 2020 at 2:44 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Explicitly set @writable to false in try_async_pf() if the GFN->PFN
> translation is short-circuited due to the requested GFN not being
> visible to L2.
>
> Leaving @writable ('map_writable' in the callers) uninitialized is ok
> in that it's never actually consumed, but one has to track it all the
> way through set_spte() being short-circuited by set_mmio_spte() to
> understand that the uninitialized variable is benign, and relying on
> @writable being ignored is an unnecessary risk.  Explicitly setting
> @writable also aligns try_async_pf() with __gfn_to_pfn_memslot().
>
> Jim Mattson <jmattson@google.com>
Go ahead and preface the above with Reviewed-by:
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c6ea6032c222..6d6cb9416179 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4090,6 +4090,7 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
>          */
>         if (is_guest_mode(vcpu) && !kvm_is_visible_gfn(vcpu->kvm, gfn)) {
>                 *pfn = KVM_PFN_NOSLOT;
> +               *writable = false;
>                 return false;
>         }
>
> --
> 2.26.0
>

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2020-04-16 21:34 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-15 21:44 [PATCH 0/2] KVM: x86/mmu: Minor cleanup in try_async_pf() Sean Christopherson
2020-04-15 21:44 ` [PATCH 1/2] KVM: x86/mmu: Set @writable to false for non-visible accesses by L2 Sean Christopherson
2020-04-16 21:33   ` Jim Mattson
2020-04-15 21:44 ` [PATCH 2/2] KVM: x86/mmu: Avoid an extra memslot lookup in try_async_pf() for L2 Sean Christopherson
2020-04-16 13:52 ` [PATCH 0/2] KVM: x86/mmu: Minor cleanup in try_async_pf() Paolo Bonzini
