From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Marc Zyngier <maz@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>, Arnd Bergmann <arnd@arndb.de>
Cc: linux-arch@vger.kernel.org, Junaid Shahid <junaids@google.com>, Wanpeng Li <wanpengli@tencent.com>, kvm@vger.kernel.org, Joerg Roedel <joro@8bytes.org>, Peter Shier <pshier@google.com>, linux-mips@vger.kernel.org, Sean Christopherson <sean.j.christopherson@intel.com>, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Ben Gardon <bgardon@google.com>, Vitaly Kuznetsov <vkuznets@redhat.com>, Peter Feiner <pfeiner@google.com>, kvmarm@lists.cs.columbia.edu, Jim Mattson <jmattson@google.com>
Subject: [PATCH v3 08/21] KVM: x86/mmu: Clean up the gorilla math in mmu_topup_memory_caches()
Date: Thu, 2 Jul 2020 19:35:32 -0700
Message-ID: <20200703023545.8771-9-sean.j.christopherson@intel.com>
In-Reply-To: <20200703023545.8771-1-sean.j.christopherson@intel.com>

Clean up the minimums in mmu_topup_memory_caches() to document the
driving mechanisms behind the minimums.  Now that encountering an empty
cache is unlikely to trigger BUG_ON(), it is less dangerous to be more
precise when defining the minimums.

For rmaps, the logic is 1 parent PTE per level, plus a single rmap, and
prefetched rmaps.  The extra objects in the current '8 + PREFETCH'
minimum came about due to an abundance of paranoia in commit
c41ef344de212 ("KVM: MMU: increase per-vcpu rmap cache alloc size"),
i.e. it could have increased the minimum to 2 rmaps.  Furthermore, the
unexpected extra rmap case was killed off entirely by commits
f759e2b4c728c ("KVM: MMU: avoid pte_list_desc running out in
kvm_mmu_pte_write") and f5a1e9f89504f ("KVM: MMU: remove call to
kvm_mmu_pte_write from walk_addr").

For the so called page cache, replace '8' with 2*PT64_ROOT_MAX_LEVEL.
The 2x multiplier is needed because the cache is used for both shadow
pages and gfn arrays for indirect MMUs.

And finally, for page headers, replace '4' with PT64_ROOT_MAX_LEVEL.

Note, KVM now supports 5-level paging, i.e. the old minimums that used
a baseline derived from 4-level paging were technically wrong.  But,
KVM always allocates roots in a separate flow, e.g. it's impossible in
the current implementation to actually need 5 new shadow pages in a
single flow.  Use PT64_ROOT_MAX_LEVEL unmodified instead of subtracting
1, as the direct usage is likely more intuitive to uninformed readers,
and the inflated minimum is unlikely to affect functionality in
practice.

Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3d0768e16463..cf02ad93c249 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1103,14 +1103,17 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 {
 	int r;
 
+	/* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
 	r = mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
-				   8 + PTE_PREFETCH_NUM);
+				   1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache, 8);
+	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache,
+				   2 * PT64_ROOT_MAX_LEVEL);
 	if (r)
 		return r;
-	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache, 4);
+	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
+				      PT64_ROOT_MAX_LEVEL);
 }
 
 static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
-- 
2.26.0
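For a concrete sense of the change, here is a rough sketch (not part of the
patch) of what the new minimums work out to, assuming PT64_ROOT_MAX_LEVEL is
5 and PTE_PREFETCH_NUM is 8, which are the values the kernel appears to use
around this time; the exact constants may differ in other versions:

	/*
	 * Illustrative only, not part of the patch above.  Assumes
	 * PT64_ROOT_MAX_LEVEL == 5 and PTE_PREFETCH_NUM == 8.
	 */
	#define PT64_ROOT_MAX_LEVEL	5	/* assumed value */
	#define PTE_PREFETCH_NUM	8	/* assumed value */

	/* 1 rmap + 1 parent PTE per level + prefetched rmaps */
	int pte_list_desc_min = 1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM;	/* 14, was 8 + 8 = 16 */

	/* shadow pages plus gfn arrays for indirect MMUs */
	int page_cache_min = 2 * PT64_ROOT_MAX_LEVEL;				/* 10, was 8 */

	/* one page header per level */
	int page_header_min = PT64_ROOT_MAX_LEVEL;				/* 5, was 4 */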