* [PATCH] KVM: x86/mmu: Replace tdp_mmu_page with a bit in the role
@ 2023-01-26 21:04 David Matlack
  2023-01-26 22:38 ` Sean Christopherson
  0 siblings, 1 reply; 5+ messages in thread
From: David Matlack @ 2023-01-26 21:04 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Sean Christopherson, kvm, David Matlack

Replace sp->tdp_mmu_page with a bit in the page role. This reduces the
size of struct kvm_mmu_page by a byte.

Note that in tdp_mmu_init_sp() there is no need to explicitly set
sp->role.tdp_mmu=1 for every SP since the role is already copied to all
child SPs.
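
For reference, the child-SP init path already propagates the role from the
parent, roughly like this (a simplified sketch of the existing
tdp_mmu_init_child_sp(); details may differ from the upstream code):

  static void tdp_mmu_init_child_sp(struct kvm_mmu_page *child_sp,
                                    struct tdp_iter *iter)
  {
          struct kvm_mmu_page *parent_sp = sptep_to_sp(rcu_dereference(iter->sptep));
          union kvm_mmu_page_role role = parent_sp->role;

          role.level--;   /* child sits one level below the parent */

          /* role.tdp_mmu is inherited here, no explicit assignment needed */
          tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role);
  }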

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/
Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/include/asm/kvm_host.h | 11 ++++++++---
 arch/x86/kvm/mmu/mmu.c          |  1 +
 arch/x86/kvm/mmu/mmu_internal.h |  1 -
 arch/x86/kvm/mmu/tdp_mmu.c      |  1 -
 arch/x86/kvm/mmu/tdp_mmu.h      |  2 +-
 5 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4d2bc08794e4..33e5b1c56ff4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -304,10 +304,10 @@ struct kvm_kernel_irq_routing_entry;
  * the number of unique SPs that can theoretically be created is 2^n, where n
  * is the number of bits that are used to compute the role.
  *
- * But, even though there are 19 bits in the mask below, not all combinations
+ * But, even though there are 20 bits in the mask below, not all combinations
  * of modes and flags are possible:
  *
- *   - invalid shadow pages are not accounted, so the bits are effectively 18
+ *   - invalid shadow pages are not accounted, so the bits are effectively 19
  *
  *   - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging);
  *     execonly and ad_disabled are only used for nested EPT which has
@@ -321,6 +321,10 @@ struct kvm_kernel_irq_routing_entry;
  *   - on top of this, smep_andnot_wp and smap_andnot_wp are only set if
  *     cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
  *
+ *   - When tdp_mmu=1, only a subset of the remaining bits can vary across
+ *     shadow pages: level, invalid, and smm. The rest are fixed, e.g. direct=1,
+ *     access=ACC_ALL, guest_mode=0, etc.
+ *
  * Therefore, the maximum number of possible upper-level shadow pages for a
  * single gfn is a bit less than 2^13.
  */
@@ -340,7 +344,8 @@ union kvm_mmu_page_role {
 		unsigned ad_disabled:1;
 		unsigned guest_mode:1;
 		unsigned passthrough:1;
-		unsigned :5;
+		unsigned tdp_mmu:1;
+		unsigned:4;
 
 		/*
 		 * This is left at the top of the word so that
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index aeb240b339f5..458a7c398f76 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5128,6 +5128,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
 	role.level = kvm_mmu_get_tdp_level(vcpu);
 	role.direct = true;
 	role.has_4_byte_gpte = false;
+	role.tdp_mmu = tdp_mmu_enabled;
 
 	return role;
 }
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ac00bfbf32f6..49591f3bd9f9 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -54,7 +54,6 @@ struct kvm_mmu_page {
 	struct list_head link;
 	struct hlist_node hash_link;
 
-	bool tdp_mmu_page;
 	bool unsync;
 	u8 mmu_valid_gen;
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bba33aea0fb0..e0eda6332755 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -280,7 +280,6 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
 	sp->role = role;
 	sp->gfn = gfn;
 	sp->ptep = sptep;
-	sp->tdp_mmu_page = true;
 
 	trace_kvm_mmu_get_page(sp, true);
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 0a63b1afabd3..cf7321635ff1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -71,7 +71,7 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 					u64 *spte);
 
 #ifdef CONFIG_X86_64
-static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
+static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->role.tdp_mmu; }
 #else
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
 #endif

base-commit: de60733246ff4545a0483140c1f21426b8d7cb7f
prerequisite-patch-id: 42a76ce7cec240776c21f674e99e893a3a6bee58
-- 
2.39.1.456.gfc5497dd1b-goog



* Re: [PATCH] KVM: x86/mmu: Replace tdp_mmu_page with a bit in the role
  2023-01-26 21:04 [PATCH] KVM: x86/mmu: Replace tdp_mmu_page with a bit in the role David Matlack
@ 2023-01-26 22:38 ` Sean Christopherson
  2023-01-27  0:14   ` David Matlack
  0 siblings, 1 reply; 5+ messages in thread
From: Sean Christopherson @ 2023-01-26 22:38 UTC (permalink / raw)
  To: David Matlack; +Cc: Paolo Bonzini, kvm

On Thu, Jan 26, 2023, David Matlack wrote:
> Replace sp->tdp_mmu_page with a bit in the page role. This reduces the
> size of struct kvm_mmu_page by a byte.

No it doesn't.  I purposely squeezed the flag into an open byte in commit

  ca41c34cab1f ("KVM: x86/mmu: Relocate kvm_mmu_page.tdp_mmu_page for better cache locality")

I double checked just to make sure: the size is 184 bytes before and after.
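
E.g. a throwaway compile-time assert dropped into any KVM function during a
test build is enough to double check (a sketch only, not part of the patch;
the 184-byte figure depends on kernel config and compiler padding):

  /* The flag lived in an existing padding byte, so removing it should
   * not change sizeof(). */
  BUILD_BUG_ON(sizeof(struct kvm_mmu_page) != 184);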

I'm not opposed to this change, but I also don't see the point.  The common code
ends up with an arch hook in the appropriate place anyways[*], and I think we'll
want to pay careful attention to the cache locality of the struct as a whole, e.g.
naively dumping the arch crud at the end of the common kvm_mmu_page structure may
impact performance, especially for shadow paging.

And just drop the WARN_ON() sanity check in kvm_tdp_mmu_put_root().

Hmm, actually, if we invert the order for the shadow MMU, e.g. embed "struct
kvm_mmu_page" in a "struct kvm_shadow_mmu_page" or whatever, then the size of
TDP MMU pages should shrink substantially.
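
Something along these lines (purely illustrative; the split and the field
placement are hypothetical, not existing code):

  /* Common part, all the TDP MMU would need to allocate. */
  struct kvm_mmu_page {
          union kvm_mmu_page_role role;
          gfn_t gfn;
          u64 *spt;
          /* ... */
  };

  /* Shadow-MMU wrapper embedding the common part. */
  struct kvm_shadow_mmu_page {
          struct kvm_mmu_page common;
          struct hlist_node hash_link;
          unsigned int unsync_children;
          bool unsync;
          /* ... */
  };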

So my vote is to hold off for now and take a closer look at this in the common
MMU series proper.

[*] https://lore.kernel.org/all/20221208193857.4090582-20-dmatlack@google.com

> Note that in tdp_mmu_init_sp() there is no need to explicitly set
> sp->role.tdp_mmu=1 for every SP since the role is already copied to all
> child SPs.
> 
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Link: https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/

Drop the trailing slash, otherwise directly clicking the link goes sideways.

> Signed-off-by: David Matlack <dmatlack@google.com>
> ---


* Re: [PATCH] KVM: x86/mmu: Replace tdp_mmu_page with a bit in the role
  2023-01-26 22:38 ` Sean Christopherson
@ 2023-01-27  0:14   ` David Matlack
  2023-01-27  1:53     ` Sean Christopherson
  0 siblings, 1 reply; 5+ messages in thread
From: David Matlack @ 2023-01-27  0:14 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: Paolo Bonzini, kvm

On Thu, Jan 26, 2023 at 2:38 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Jan 26, 2023, David Matlack wrote:
> > Replace sp->tdp_mmu_page with a bit in the page role. This reduces the
> > size of struct kvm_mmu_page by a byte.
>
> No it doesn't.  I purposely squeezed the flag into an open byte in commit
>
>   ca41c34cab1f ("KVM: x86/mmu: Relocate kvm_mmu_page.tdp_mmu_page for better cache locality")
>
> I double checked just to make sure: the size is 184 bytes before and after.

My mistake, thanks for pointing it out.

>
> I'm not opposed to this change, but I also don't see the point.  The common code
> ends up with an arch hook in the appropriate place anyways[*], and I think we'll
> want to pay careful attention to the cache locality of the struct as whole, e.g.
> naively dumping the arch crud at the end of the common kvm_mmu_page structure may
> impact performance, especially for shadow paging.
>
> And just drop the WARN_ON() sanity check in kvm_tdp_mmu_put_root() .
>
> Hmm, actually, if we invert the order for the shadow MMU, e.g. embed "struct
> kvm_mmu_page" in a "struct kvm_shadow_mmu_page" or whatever, then the size of
> TDP MMU pages should shrink substantially.
>
> So my vote is to hold off for now and take a closer look at this in the common
> MMU series proper.

Sounds good to me.

>
> [*] https://lore.kernel.org/all/20221208193857.4090582-20-dmatlack@google.com
>
> > Note that in tdp_mmu_init_sp() there is no need to explicitly set
> > sp->role.tdp_mmu=1 for every SP since the role is already copied to all
> > child SPs.
> >
> > Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > Link: https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/
>
> Drop the trailing slash, otherwise directly clicking the link goes sideways.

I don't have any problem clicking this link, but I can try to omit the
trailing slash in the future.


* Re: [PATCH] KVM: x86/mmu: Replace tdp_mmu_page with a bit in the role
  2023-01-27  0:14   ` David Matlack
@ 2023-01-27  1:53     ` Sean Christopherson
  2023-01-27  3:49       ` David Matlack
  0 siblings, 1 reply; 5+ messages in thread
From: Sean Christopherson @ 2023-01-27  1:53 UTC (permalink / raw)
  To: David Matlack; +Cc: Paolo Bonzini, kvm

On Thu, Jan 26, 2023, David Matlack wrote:
> On Thu, Jan 26, 2023 at 2:38 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Thu, Jan 26, 2023, David Matlack wrote:
> > > Link: https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/
> >
> > Drop the trailing slash, otherwise directly clicking the link goes sideways.
> 
> I don't have any problem clicking this link, but I can try to omit the
> trailing slash in the future.

Ha!  Figured out what happens.  Gmail gets confused and thinks the forward slash
indicates a continuation, e.g. the original mail sends me to:

  https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/Signed-off-by

I bet the above now works because there's whitespace. For giggles....

https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/
will_it_blend


* Re: [PATCH] KVM: x86/mmu: Replace tdp_mmu_page with a bit in the role
  2023-01-27  1:53     ` Sean Christopherson
@ 2023-01-27  3:49       ` David Matlack
  0 siblings, 0 replies; 5+ messages in thread
From: David Matlack @ 2023-01-27  3:49 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: Paolo Bonzini, kvm

On Thu, Jan 26, 2023 at 5:53 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Jan 26, 2023, David Matlack wrote:
> > On Thu, Jan 26, 2023 at 2:38 PM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > On Thu, Jan 26, 2023, David Matlack wrote:
> > > > Link: https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/
> > >
> > > Drop the trailing slash, otherwise directly clicking the link goes sideways.
> >
> > I don't have any problem clicking this link, but I can try to omit the
> > trailing slash in the future.
>
> Ha!  Figured out what happens.  Gmail gets confused and thinks the forward slash
> indiciates a continuation, e.g. the original mail sends me to:
>
>   https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/Signed-off-by

Ohhh I do see the same behavior when clicking the link in my original
email. The copy of the link in your reply had nothing on the line
after it so clicking it there worked just fine :)

>
> I bet the above now works because there's whitespace. For giggles....
>
> https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/
> will_it_blend

