From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Peter Xu <peterx@redhat.com>
Subject: [PATCH 0/4] KVM: x86/mmu: pte_list_desc fix and cleanups
Date: Fri, 24 Jun 2022 23:27:31 +0000
Message-ID: <20220624232735.3090056-1-seanjc@google.com>

Reviewing the eager page splitting code made me realize that burning 14
rmap entries per pte_list_desc for nested TDP MMUs is extremely wasteful,
as the per-vCPU caches allocate 40 descriptors by default. For nested TDP,
aliasing L2 gfns to L1 gfns is quite rare and is not performance critical
(it's exclusively pre-boot behavior for sane setups).

Patch 1 fixes a bug where pte_list_desc is neither correctly sized nor
aligned on 32-bit kernels. The primary motivation for the fix is to be
able to add a compile-time assertion that the size is a multiple of the
cache line size; I doubt anyone cares about the performance/memory impact.

Patch 2 tweaks MMU setup to support a dynamic pte_list_desc size.

Patch 3 reduces the number of sptes per pte_list_desc to 2 for nested TDP
MMUs, i.e. allocates the bare minimum to prioritize the memory footprint
over performance for sane setups.

Patch 4 fills the pte_list_desc cache if and only if rmaps are in use,
i.e. doesn't allocate pte_list_descs when using the TDP MMU until nested
TDP is used.

Sean Christopherson (4):
  KVM: x86/mmu: Track the number entries in a pte_list_desc with a ulong
  KVM: x86/mmu: Defer "full" MMU setup until after vendor hardware_setup()
  KVM: x86/mmu: Shrink pte_list_desc size when KVM is using TDP
  KVM: x86/mmu: Topup pte_list_desc cache iff VM is using rmaps

 arch/x86/include/asm/kvm_host.h |  5 ++-
 arch/x86/kvm/mmu/mmu.c          | 78 +++++++++++++++++++++++----------
 arch/x86/kvm/x86.c              | 17 ++++---
 3 files changed, 70 insertions(+), 30 deletions(-)

base-commit: 4b88b1a518b337de1252b8180519ca4c00015c9e
--
2.37.0.rc0.161.g10f37bed90-goog
Thread overview: 14+ messages
2022-06-24 23:27 Sean Christopherson [this message]
2022-06-24 23:27 ` [PATCH 1/4] KVM: x86/mmu: Track the number entries in a pte_list_desc with a ulong Sean Christopherson
2022-06-24 23:27 ` [PATCH 2/4] KVM: x86/mmu: Defer "full" MMU setup until after vendor hardware_setup() Sean Christopherson
2022-06-25 0:16 ` David Matlack
2022-06-27 15:40 ` Sean Christopherson
2022-06-27 22:50 ` David Matlack
2022-07-12 21:56 ` Peter Xu
2022-07-14 18:23 ` Sean Christopherson
2022-06-24 23:27 ` [PATCH 3/4] KVM: x86/mmu: Shrink pte_list_desc size when KVM is using TDP Sean Christopherson
2022-07-12 22:35 ` Peter Xu
2022-07-12 22:53 ` Sean Christopherson
2022-07-13 0:24 ` Peter Xu
2022-07-14 18:43 ` Sean Christopherson
2022-06-24 23:27 ` [PATCH 4/4] KVM: x86/mmu: Topup pte_list_desc cache iff VM is using rmaps Sean Christopherson