* [PATCH v6 0/4] KVM: arm64: Improve efficiency of stage2 page table
@ 2021-06-16  9:51 Yanan Wang
From: Yanan Wang @ 2021-06-16  9:51 UTC (permalink / raw)
  To: Marc Zyngier, Will Deacon, Quentin Perret, Alexandru Elisei,
	kvmarm, linux-arm-kernel, kvm, linux-kernel
  Cc: Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose,
	Gavin Shan, wanghaibin.wang, zhukeqian1, yuzenghui, Yanan Wang

Hello,
This series makes some efficiency improvements to the guest stage-2 page
table code, and includes test results to quantify the benefit.

Description of this series:
We currently uniformly perform CMOs on the D-cache and I-cache in
user_mem_abort() before calling the fault handlers. If we get concurrent
guest faults (e.g. translation faults, permission faults) or some really
unnecessary guest faults caused by BBM (break-before-make), the CMOs
performed for the first vCPU are necessary, while the later ones are not.

By moving the CMOs into the fault handlers, we can easily identify the
conditions under which they are really needed and avoid the unnecessary
ones. Since performing CMOs is a time-consuming process, especially when
flushing a block range, this solution reduces the load on KVM and
improves the efficiency of the stage-2 page table code.

We can imagine two specific scenarios that benefit the most:
1) During a normal VM startup, this solution improves the efficiency of
handling the guest page faults incurred by vCPUs when initially
populating the stage-2 page tables.
2) After live migration, the heavy workload is resumed on the
destination VM, but all the stage-2 page tables need to be rebuilt at
that point. This solution eases the performance drop during the resume
stage.

The following test results, originally from v3 [1], show how much
benefit moving the CMOs introduces. We use a KVM selftest to simulate
concurrent guest memory accesses and measure the execution time KVM
takes to create new stage-2 mappings, update existing mappings, and
split/rebuild huge mappings during/after dirty logging. In the results
below, each percentage is the reduction in execution time relative to
"before".

hardware platform: HiSilicon Kunpeng920 Server
host kernel: Linux mainline v5.12-rc2
test tools: KVM selftest [2]
[1] https://lore.kernel.org/lkml/20210326031654.3716-1-wangyanan55@huawei.com/
[2] https://lore.kernel.org/lkml/20210302125751.19080-1-wangyanan55@huawei.com/

cmdline: ./kvm_page_table_test -m 4 -s anonymous -b 1G -v 80
           (80 vcpus, 1G memory, page mappings (normal 4K))
KVM_CREATE_MAPPINGS: before 104.35s -> after  90.42s  +13.35%
KVM_UPDATE_MAPPINGS: before  78.64s -> after  75.45s  + 4.06%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_thp -b 20G -v 40
           (40 vcpus, 20G memory, block mappings (THP 2M))
KVM_CREATE_MAPPINGS: before  15.66s -> after   6.92s  +55.80%
KVM_UPDATE_MAPPINGS: before 178.80s -> after 123.35s  +31.00%
KVM_REBUILD_BLOCKS:  before 187.34s -> after 131.76s  +30.65%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_hugetlb_1gb -b 20G -v 40
           (40 vcpus, 20G memory, block mappings (HUGETLB 1G))
KVM_CREATE_MAPPINGS: before 104.54s -> after   3.70s  +96.46%
KVM_UPDATE_MAPPINGS: before 174.20s -> after 115.94s  +33.44%
KVM_REBUILD_BLOCKS:  before 103.95s -> after   2.96s  +97.15%

---

Changelogs:

v5->v6:
- convert the guest CMO functions into callbacks in kvm_pgtable_mm_ops
  (Marc); see the sketch after this list
- drop patch #6 in v5 since we are stuffing topup into mmu_lock section (Quentin)
- rebased on latest kvmarm/tree
- v5: https://lore.kernel.org/lkml/20210415115032.35760-1-wangyanan55@huawei.com/
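
As a rough sketch, the callback conversion above extends
kvm_pgtable_mm_ops along these lines (illustrative; see patch 1 for the
exact definition):

  struct kvm_pgtable_mm_ops {
          ...
          /*
           * Guest stage-2 CMOs become callbacks, so that the page-table
           * code can invoke them only at the points where a walker
           * actually installs or updates a mapping.
           */
          void (*clean_invalidate_dcache)(void *addr, size_t size);
          void (*invalidate_icache)(void *addr, size_t size);
  };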

v4->v5:
- rebased on the latest kvmarm/tree to adapt to the new stage-2 page-table code
- v4: https://lore.kernel.org/lkml/20210409033652.28316-1-wangyanan55@huawei.com

---

Yanan Wang (4):
  KVM: arm64: Introduce cache maintenance callbacks for guest stage-2
  KVM: arm64: Introduce mm_ops member for structure stage2_attr_data
  KVM: arm64: Tweak parameters of guest cache maintenance functions
  KVM: arm64: Move guest CMOs to the fault handlers

 arch/arm64/include/asm/kvm_mmu.h     |  9 ++----
 arch/arm64/include/asm/kvm_pgtable.h |  7 +++++
 arch/arm64/kvm/hyp/pgtable.c         | 47 +++++++++++++++++++++-------
 arch/arm64/kvm/mmu.c                 | 39 ++++++++++-------------
 4 files changed, 62 insertions(+), 40 deletions(-)

-- 
2.23.0

