From: Peter Xu <peterx@redhat.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Maxim Levitsky <mlevitsk@redhat.com>,
	peterx@redhat.com, Sean Christopherson <seanjc@google.com>
Subject: [PATCH 0/9] KVM: X86: Some light optimizations on rmap logic
Date: Thu, 24 Jun 2021 14:13:47 -0400	[thread overview]
Message-ID: <20210624181356.10235-1-peterx@redhat.com> (raw)

(This is still based on a random 5.13-rc3-ish branch, but I can rebase if
needed.)

Everything started from patch 1, which introduced a new statistic to keep the
"max rmap entry count per vm".  At that time I was just curious how many rmap
entries there normally are for a guest, and the answer surprised me a bit.
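
For reference, a minimal sketch of how such a max-tracking stat can be
updated on the rmap-add path (the stat field name max_mmu_rmap_size and the
exact function shape are assumptions here, not necessarily the final code):

static void rmap_add(struct kvm_vcpu *vcpu, u64 *spte,
		     struct kvm_rmap_head *rmap_head)
{
	/* pte_list_add() returns the new number of rmap entries */
	int rmap_count = pte_list_add(vcpu, spte, rmap_head);

	/* Only ever grows; a racy update is acceptable for a stat */
	if (rmap_count > vcpu->kvm->stat.max_mmu_rmap_size)
		vcpu->kvm->stat.max_mmu_rmap_size = rmap_count;
}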

For TDP mappings it's all fine, as the rmap count of a page is mostly either
0 or 1 depending on whether it has been faulted in.  It turns out that with
EPT=N there can be a huge number of pages with tens or hundreds of rmap
entries even for an idle guest.  Then I continued with the rest.

To better understand "how many of those pages" there are, I wrote patches
2-6, which introduce the idea of per-arch per-vm debugfs nodes and add a
debug file to collect rmap statistics; it is similar to
kvm_arch_create_vcpu_debugfs(), but for the vm rather than the vcpu.

I believe this is the cleaner approach, as other archs already create per-vm
debugfs nodes in an ad-hoc way:
*** arch/arm64/kvm/vgic/vgic-debug.c:
vgic_debug_init[274]           debugfs_create_file("vgic-state", 0444, kvm->debugfs_dentry, kvm,
*** arch/powerpc/kvm/book3s_64_mmu_hv.c:
kvmppc_mmu_debugfs_init[2115]  debugfs_create_file("htab", 0400, kvm->arch.debugfs_dir, kvm,
*** arch/powerpc/kvm/book3s_64_mmu_radix.c:
kvmhv_radix_debugfs_init[1434] debugfs_create_file("radix", 0400, kvm->arch.debugfs_dir, kvm,
*** arch/powerpc/kvm/book3s_hv.c:
debugfs_vcpu_init[2395]        debugfs_create_file("timings", 0444, vcpu->arch.debugfs_dir, vcpu,
*** arch/powerpc/kvm/book3s_xics.c:
xics_debugfs_init[1027]        xics->dentry = debugfs_create_file(name, 0444, powerpc_debugfs_root,
*** arch/powerpc/kvm/book3s_xive.c:
xive_debugfs_init[2236]        xive->dentry = debugfs_create_file(name, S_IRUGO, powerpc_debugfs_root,
*** arch/powerpc/kvm/timing.c:
kvmppc_create_vcpu_debugfs[214] debugfs_file = debugfs_create_file(dbg_fname, 0666, kvm_debugfs_dir,

PPC even has its own per-vm directory for that.  I think if patches 2-6 can
be accepted, the next thing to consider is to merge all these usages under
the same existing per-vm dentry using the per-arch hooks.
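
As a concrete illustration of what patch 3 enables, here is a minimal sketch
of such an arch hook; the hook name follows the series, while the
"mmu_rmaps_stat" file name and mmu_rmaps_stat_fops are placeholders borrowed
from what patch 6 would add:

/* Called once per VM after the generic kvm->debugfs_dentry directory
 * exists; a weak default implementation can simply return 0. */
int kvm_arch_create_vm_debugfs(struct kvm *kvm)
{
	/* mmu_rmaps_stat_fops is assumed to be a file_operations that
	 * dumps this VM's rmap histogram (as in patch 6) */
	debugfs_create_file("mmu_rmaps_stat", 0444, kvm->debugfs_dentry,
			    kvm, &mmu_rmaps_stat_fops);
	return 0;
}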

The last 3 patches (patches 7-9) are a few optimizations of the existing rmap
logic.  The major test case I used is rmap_fork [1], though it's not really
ideal for showing their effect, since the test covers both rmap_add and
rmap_remove, and I don't have a good idea for optimizing rmap_remove without
changing the array structure or adding much overhead (e.g. sorting the array,
or replacing the array list with some tree-like structure).  Still, it
already shows some benefit with these changes, so I'm posting them.
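
For context, such a test can be as simple as the following sketch (run inside
an EPT=N guest; the real rmap_fork at [1] differs in details like buffer
sizes and timing measurement):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUF_SIZE	(16UL << 20)	/* 16MB of anonymous memory */

int main(int argc, char **argv)
{
	int i, n = argc > 1 ? atoi(argv[1]) : 500;
	char *buf = malloc(BUF_SIZE);

	memset(buf, 1, BUF_SIZE);	/* fault pages in the parent */

	for (i = 0; i < n; i++) {
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			return 1;
		}
		if (pid == 0) {
			volatile char c = 0;
			unsigned long j;

			/*
			 * Read-only touches: every child maps the same
			 * pages, so each gfn's rmap list keeps growing
			 * (rmap_add); the child exiting shrinks it back
			 * (rmap_remove).
			 */
			for (j = 0; j < BUF_SIZE; j += 4096)
				c += buf[j];
			_exit(0);
		}
	}
	while (wait(NULL) > 0)		/* reap all children */
		;
	free(buf);
	return 0;
}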

Applying patches 7-8 brings an overall 38% perf boost when I fork 500
children with that test.  I didn't run a perf test on patch 9; more details
are in the commit logs.
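
To give a feel for the direction patches 7-8 take, here is a rough sketch of
the descriptor change (the slot count and field names are illustrative; the
exact numbers are justified in the patches themselves):

/*
 * Sketch: grow each rmap descriptor from the current 3 slots to a more
 * cache-friendly size, and keep a per-descriptor counter of used slots
 * so that appending an spte no longer scans sptes[] for a free entry.
 */
#define PTE_LIST_EXT	15	/* illustrative; see patch 7 */

struct pte_list_desc {
	struct pte_list_desc *more;	/* chains to the next descriptor */
	u32 spte_count;			/* used entries in sptes[] below */
	u64 *sptes[PTE_LIST_EXT];
};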

Please review, thanks.

[1] https://github.com/xzpeter/clibs/commit/825436f825453de2ea5aaee4bdb1c92281efe5b3

Peter Xu (9):
  KVM: X86: Add per-vm stat for max rmap list size
  KVM: Introduce kvm_get_kvm_safe()
  KVM: Allow to have arch-specific per-vm debugfs files
  KVM: X86: Introduce pte_list_count() helper
  KVM: X86: Introduce kvm_mmu_slot_lpages() helpers
  KVM: X86: Introduce mmu_rmaps_stat per-vm debugfs file
  KVM: X86: MMU: Tune PTE_LIST_EXT to be bigger
  KVM: X86: Optimize pte_list_desc with per-array counter
  KVM: X86: Optimize zapping rmap

 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/mmu/mmu.c          |  90 +++++++++++++++++-----
 arch/x86/kvm/mmu/mmu_internal.h |   1 +
 arch/x86/kvm/x86.c              | 131 +++++++++++++++++++++++++++++++-
 include/linux/kvm_host.h        |   2 +
 virt/kvm/kvm_main.c             |  36 +++++++--
 6 files changed, 233 insertions(+), 28 deletions(-)

-- 

Thread overview: 12+ messages
2021-06-24 18:13 Peter Xu [this message]
2021-06-24 18:13 ` [PATCH 1/9] KVM: X86: Add per-vm stat for max rmap list size Peter Xu
2021-06-24 18:13 ` [PATCH 2/9] KVM: Introduce kvm_get_kvm_safe() Peter Xu
2021-06-24 18:13 ` [PATCH 3/9] KVM: Allow to have arch-specific per-vm debugfs files Peter Xu
2021-06-24 18:13 ` [PATCH 4/9] KVM: X86: Introduce pte_list_count() helper Peter Xu
2021-06-24 18:13 ` [PATCH 5/9] KVM: X86: Introduce kvm_mmu_slot_lpages() helpers Peter Xu
2021-06-24 18:13 ` [PATCH 6/9] KVM: X86: Introduce mmu_rmaps_stat per-vm debugfs file Peter Xu
2021-06-24 18:22   ` Peter Xu
2021-06-24 18:13 ` [PATCH 7/9] KVM: X86: MMU: Tune PTE_LIST_EXT to be bigger Peter Xu
2021-06-24 18:15 ` [PATCH 8/9] KVM: X86: Optimize pte_list_desc with per-array counter Peter Xu
2021-06-24 22:53   ` Peter Xu
2021-06-24 18:15 ` [PATCH 9/9] KVM: X86: Optimize zapping rmap Peter Xu
