* [PATCH] KVM: x86: enable dirty log gradually in small chunks
@ 2020-02-18 11:00 Jay Zhou
  2020-02-18 11:39 ` Paolo Bonzini
  2020-02-18 21:23 ` Sean Christopherson
  0 siblings, 2 replies; 8+ messages in thread
From: Jay Zhou @ 2020-02-18 11:00 UTC
  To: kvm
  Cc: pbonzini, peterx, wangxinxin.wang, linfeng23, weidong.huang,
	jianjay.zhou

Enabling the dirty log for the first time can hold kvm->mmu_lock for
an extended period of time; the main cost is clearing the D-bits of
all last-level SPTEs. This case can also benefit from manual dirty
log protection, which reduces the time spent holding mmu_lock. The
sequence is as follows:

1. Set all the bits of the first dirty bitmap to 1 when enabling
   dirty log for the first time
2. Write-protect only the huge pages
3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
4. KVM_CLEAR_DIRTY_LOG clears the D-bit of each leaf-level SPTE
   gradually, in small chunks (as sketched below)
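
A minimal userspace sketch of steps 3 and 4, using the existing
KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 / KVM_GET_DIRTY_LOG /
KVM_CLEAR_DIRTY_LOG API (not part of this patch; vm_fd, the slot id,
the page count and CHUNK_PAGES are placeholders for illustration):

#include <linux/kvm.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>

#define CHUNK_PAGES	4096u	/* example chunk size, a multiple of 64 */

/*
 * Hand write protection / D-bit clearing over to userspace.  This has
 * to happen before dirty logging is enabled on the slot, so that KVM
 * takes the manual-protect path.
 */
static int enable_manual_protect(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
		.args[0] = 1,	/* enable manual protection */
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

static int sync_and_clear_slot(int vm_fd, uint32_t slot, uint64_t npages)
{
	uint64_t first, words = (npages + 63) / 64;
	uint64_t *bitmap = calloc(words, sizeof(*bitmap));
	struct kvm_dirty_log log = { .slot = slot, .dirty_bitmap = bitmap };
	int ret;

	if (!bitmap)
		return -1;

	/* Step 3: fetch the dirty bitmap.  With this patch applied, the
	 * first call after enabling dirty logging reports all pages dirty. */
	ret = ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

	/* Step 4: clear dirty state (the D-bits) CHUNK_PAGES at a time;
	 * first_page must stay a multiple of 64. */
	for (first = 0; first < npages && !ret; first += CHUNK_PAGES) {
		struct kvm_clear_dirty_log clear = {
			.slot = slot,
			.first_page = first,
			.num_pages = npages - first < CHUNK_PAGES ?
				     npages - first : CHUNK_PAGES,
			.dirty_bitmap = bitmap + first / 64,
		};

		ret = ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
	}

	free(bitmap);
	return ret;
}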

On an Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz, I ran some tests
with a 128G Windows VM and measured the time taken by
memory_global_dirty_log_start; here are the numbers:

VM Size        Before optimization    After optimization
128G           460ms                   10ms

Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
---
 arch/x86/kvm/vmx/vmx.c   |  5 +++++
 include/linux/kvm_host.h |  5 +++++
 virt/kvm/kvm_main.c      | 10 ++++++++--
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3be25ec..a8d64f6 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7201,7 +7201,12 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
 static void vmx_slot_enable_log_dirty(struct kvm *kvm,
 				     struct kvm_memory_slot *slot)
 {
+#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
+	if (!kvm->manual_dirty_log_protect)
+		kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
+#else
 	kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
+#endif
 	kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e89eb67..fd149b0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -360,6 +360,11 @@ static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
 	return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
 }
 
+static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot *memslot)
+{
+	bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
+}
+
 struct kvm_s390_adapter_int {
 	u64 ind_addr;
 	u64 summary_addr;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 70f03ce..08565ed 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -862,7 +862,8 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
  * Allocation size is twice as large as the actual dirty bitmap size.
  * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
  */
-static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
+static int kvm_create_dirty_bitmap(struct kvm *kvm,
+				struct kvm_memory_slot *memslot)
 {
 	unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
 
@@ -870,6 +871,11 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
 	if (!memslot->dirty_bitmap)
 		return -ENOMEM;
 
+#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
+	if (kvm->manual_dirty_log_protect)
+		kvm_set_first_dirty_bitmap(memslot);
+#endif
+
 	return 0;
 }
 
@@ -1094,7 +1100,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 
 	/* Allocate page dirty bitmap if needed */
 	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
-		if (kvm_create_dirty_bitmap(&new) < 0)
+		if (kvm_create_dirty_bitmap(kvm, &new) < 0)
 			goto out_free;
 	}
 
-- 
1.8.3.1


