From: "Zhoujian (jay)" <jianjay.zhou@huawei.com>
To: Peter Xu <peterx@redhat.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>,
	"sean.j.christopherson@intel.com"
	<sean.j.christopherson@intel.com>,
	"wangxin (U)" <wangxinxin.wang@huawei.com>,
	"Huangweidong (C)" <weidong.huang@huawei.com>,
	"Liujinsong (Paul)" <liu.jinsong@huawei.com>
Subject: RE: [PATCH v3] KVM: x86: enable dirty log gradually in small chunks
Date: Tue, 25 Feb 2020 03:15:37 +0000
Message-ID: <B2D15215269B544CADD246097EACE7474BB1B778@dggemm508-mbx.china.huawei.com>
In-Reply-To: <20200224170538.GH37727@xz-x1>



> -----Original Message-----
> From: Peter Xu [mailto:peterx@redhat.com]
> Sent: Tuesday, February 25, 2020 1:06 AM
> To: Zhoujian (jay) <jianjay.zhou@huawei.com>
> Cc: kvm@vger.kernel.org; pbonzini@redhat.com;
> sean.j.christopherson@intel.com; wangxin (U) <wangxinxin.wang@huawei.com>;
> Huangweidong (C) <weidong.huang@huawei.com>; Liujinsong (Paul)
> <liu.jinsong@huawei.com>
> Subject: Re: [PATCH v3] KVM: x86: enable dirty log gradually in small chunks
> 
> On Mon, Feb 24, 2020 at 11:25:58AM +0800, Jay Zhou wrote:
> > It could take kvm->mmu_lock for an extended period of time when
> > enabling dirty log for the first time. The main cost is to clear all
> > the D-bits of last level SPTEs. This situation can benefit from manual
> > dirty log protect as well, which can reduce the mmu_lock time taken.
> > The sequence is like this:
> >
> > 1. Initialize all the bits of the dirty bitmap to 1 when enabling
> >    dirty log for the first time
> > 2. Only write protect the huge pages
> > 3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
> > 4. KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> >    SPTEs gradually in small chunks
> >
> > Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment, I did
> > some tests with a 128G Windows VM and counted the time taken by
> > memory_global_dirty_log_start; here are the numbers:
> >
> > VM Size        Before    After optimization
> > 128G           460ms     10ms
> >
> > Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> > ---
> > v3:
> >   * add kvm_manual_dirty_log_init_set helper, add testcase on top and
> >     keep old behavior for KVM_MEM_READONLY [Peter]
> >   * tweak logic at enabling KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 [Sean, Peter]
> >
> > v2:
> >   * add new bit to KVM_ENABLE_CAP for KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 [Paolo]
> >   * support non-PML path [Peter]
> >   * delete the unnecessary ifdef and make the initialization of bitmap
> >     more clear [Sean]
> >   * document the new bits and tweak the testcase
> >
> >  Documentation/virt/kvm/api.rst  | 16 +++++++++++++---
> >  arch/x86/include/asm/kvm_host.h |  3 ++-
> >  arch/x86/kvm/mmu/mmu.c          |  7 ++++---
> >  arch/x86/kvm/vmx/vmx.c          |  3 ++-
> >  arch/x86/kvm/x86.c              | 18 +++++++++++++++---
> >  include/linux/kvm_host.h        |  9 ++++++++-
> >  virt/kvm/kvm_main.c             | 30 +++++++++++++++++++++++-------
> >  7 files changed, 67 insertions(+), 19 deletions(-)
> >
> > diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> > index 97a72a5..807fcd7 100644
> > --- a/Documentation/virt/kvm/api.rst
> > +++ b/Documentation/virt/kvm/api.rst
> > @@ -5704,10 +5704,20 @@ and injected exceptions.
> >  :Architectures: x86, arm, arm64, mips
> >  :Parameters: args[0] whether feature should be enabled or not
> >
> > -With this capability enabled, KVM_GET_DIRTY_LOG will not automatically
> > -clear and write-protect all pages that are returned as dirty.
> > +Valid flags are::
> > +
> > +  #define KVM_DIRTY_LOG_MANUAL_PROTECT2 (1 << 0)
> > +  #define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1)
> 
> I think I mis-read previously on the old version so my comment was misleading.
> If this is the sub-capability within KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
> then I don't think we need to have the ending "2" any more.  How about:

It's OK, :-)

> 
>   KVM_MANUAL_PROTECT_ENABLE
>   KVM_MANUAL_PROTECT_INIT_ALL_SET

I think this naming emphasizes "manual protect", whereas the original
naming emphasizes "dirty log". Since the object of both manual protect
and initial-all-set is the dirty log, it seems the original names are a
little closer to what we actually do.

> > +
> > +With KVM_DIRTY_LOG_MANUAL_PROTECT2 set, KVM_GET_DIRTY_LOG will not
> > +automatically clear and write-protect all pages that are returned as dirty.
> >  Rather, userspace will have to do this operation separately using
> >  KVM_CLEAR_DIRTY_LOG.
> > +With KVM_DIRTY_LOG_INITIALLY_SET set, all the bits of the dirty
> > +bitmap will be initialized to 1 when created, dirty logging will be
> > +enabled gradually in small chunks using KVM_CLEAR_DIRTY_LOG.
> > +However, the KVM_DIRTY_LOG_INITIALLY_SET depends on
> > +KVM_DIRTY_LOG_MANUAL_PROTECT2, it can not be set individually and
> > +supports x86 only for now.
> >
> >  At the cost of a slightly more complicated operation, this provides better
> >  scalability and responsiveness for two reasons.  First,
> > @@ -5716,7 +5726,7 @@ than requiring to sync a full memslot; this ensures that KVM does not
> >  take spinlocks for an extended period of time.  Second, in some cases a
> >  large amount of time can pass between a call to KVM_GET_DIRTY_LOG and
> >  userspace actually using the data in the page.  Pages can be modified
> > -during this time, which is inefficint for both the guest and userspace:
> > +during this time, which is inefficient for both the guest and userspace:
> >  the guest will incur a higher penalty due to write protection faults,
> >  while userspace can see false reports of dirty pages.  Manual reprotection
> >  helps reducing this time, improving guest performance and reducing the
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 40a0c0f..a90630c 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1312,7 +1312,8 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
> >
> >  void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
> >  void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
> > -				      struct kvm_memory_slot *memslot);
> > +				      struct kvm_memory_slot *memslot,
> > +				      int start_level);
> >  void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
> >  				   const struct kvm_memory_slot *memslot);
> >  void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 87e9ba2..a4e70eb 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -5860,13 +5860,14 @@ static bool slot_rmap_write_protect(struct kvm *kvm,
> >  }
> >
> >  void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
> > -				      struct kvm_memory_slot *memslot)
> > +				      struct kvm_memory_slot *memslot,
> > +				      int start_level)
> >  {
> >  	bool flush;
> >
> >  	spin_lock(&kvm->mmu_lock);
> > -	flush = slot_handle_all_level(kvm, memslot, slot_rmap_write_protect,
> > -				      false);
> > +	flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
> > +				start_level, PT_MAX_HUGEPAGE_LEVEL, false);
> >  	spin_unlock(&kvm->mmu_lock);
> >
> >  	/*
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 3be25ec..0deb8c3 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -7201,7 +7201,8 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
> >  static void vmx_slot_enable_log_dirty(struct kvm *kvm,
> >  				     struct kvm_memory_slot *slot)
> >  {
> > -	kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> > +	if (!kvm_manual_dirty_log_init_set(kvm))
> > +		kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> >  	kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
> >  }
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index fb5d64e..f816940 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -9956,7 +9956,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
> >  {
> >  	/* Still write protect RO slot */
> >  	if (new->flags & KVM_MEM_READONLY) {
> > -		kvm_mmu_slot_remove_write_access(kvm, new);
> > +		kvm_mmu_slot_remove_write_access(kvm, new, PT_PAGE_TABLE_LEVEL);
> >  		return;
> >  	}
> >
> > @@ -9993,8 +9993,20 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
> >  	if (new->flags & KVM_MEM_LOG_DIRTY_PAGES) {
> >  		if (kvm_x86_ops->slot_enable_log_dirty)
> >  			kvm_x86_ops->slot_enable_log_dirty(kvm, new);
> > -		else
> > -			kvm_mmu_slot_remove_write_access(kvm, new);
> > +		else {
> > +			int level = kvm_manual_dirty_log_init_set(kvm) ?
> > +				PT_DIRECTORY_LEVEL : PT_PAGE_TABLE_LEVEL;
> > +
> > +			/*
> > +			 * If we're with initial-all-set, we don't need
> > +			 * to write protect any small page because
> > +			 * they're reported as dirty already.  However
> > +			 * we still need to write-protect huge pages
> > +			 * so that the page split can happen lazily on
> > +			 * the first write to the huge page.
> > +			 */
> > +			kvm_mmu_slot_remove_write_access(kvm, new, level);
> > +		}
> >  	} else {
> >  		if (kvm_x86_ops->slot_disable_log_dirty)
> >  			kvm_x86_ops->slot_disable_log_dirty(kvm, new);
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index e89eb67..80ada94 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -360,6 +360,13 @@ static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
> >  	return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
> >  }
> >
> > +#define KVM_DIRTY_LOG_MANUAL_PROTECT2 (1 << 0)
> > +#define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1)
> > +#define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT2 | \
> > +				KVM_DIRTY_LOG_INITIALLY_SET)
> > +
> > +bool kvm_manual_dirty_log_init_set(struct kvm *kvm);
> > +
> >  struct kvm_s390_adapter_int {
> >  	u64 ind_addr;
> >  	u64 summary_addr;
> > @@ -493,7 +500,7 @@ struct kvm {
> >  #endif
> >  	long tlbs_dirty;
> >  	struct list_head devices;
> > -	bool manual_dirty_log_protect;
> > +	u64 manual_dirty_log_protect;
> >  	struct dentry *debugfs_dentry;
> >  	struct kvm_stat_data **debugfs_stat_data;
> >  	struct srcu_struct srcu;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 70f03ce..0ffb804 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -858,11 +858,17 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
> >  	return 0;
> >  }
> >
> > +bool kvm_manual_dirty_log_init_set(struct kvm *kvm)
> > +{
> > +	return kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET;
> > +}
> 
> Nit: this can be put into kvm_host.h as inlined.

I'm afraid not. I tried that, but it doesn't compile: since this function is
shared between the kvm and kvm_intel (VMX) modules, it has to be exported.
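
For context, this is roughly how it ends up in the patch: the helper lives in
virt/kvm/kvm_main.c (built into kvm.ko) and is exported so that vmx.c, which is
built into kvm_intel.ko, can call it (a sketch, comments mine):

bool kvm_manual_dirty_log_init_set(struct kvm *kvm)
{
	return kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET;
}
/*
 * vmx_slot_enable_log_dirty() is built into kvm_intel.ko, so the symbol
 * must be exported for that module to resolve it at load time.
 */
EXPORT_SYMBOL_GPL(kvm_manual_dirty_log_init_set);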

BTW: how about using kvm_dirty_log_manual_protect_and_init_set instead of
kvm_manual_dirty_log_init_set? (Just an idea, your opinions are welcome.)

> 
> > +EXPORT_SYMBOL_GPL(kvm_manual_dirty_log_init_set);
> > +
> >  /*
> >   * Allocation size is twice as large as the actual dirty bitmap size.
> >   * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
> >   */
> > -static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> > +static int kvm_alloc_dirty_bitmap(struct kvm_memory_slot *memslot)
> >  {
> >  	unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
> >
> > @@ -1094,8 +1100,11 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >
> >  	/* Allocate page dirty bitmap if needed */
> >  	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> > -		if (kvm_create_dirty_bitmap(&new) < 0)
> > +		if (kvm_alloc_dirty_bitmap(&new))
> >  			goto out_free;
> > +
> > +		if (kvm_manual_dirty_log_init_set(kvm))
> > +			bitmap_set(new.dirty_bitmap, 0, new.npages);
> >  	}
> >
> >  	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
> > @@ -3310,9 +3319,6 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
> >  	case KVM_CAP_IOEVENTFD_ANY_LENGTH:
> >  	case KVM_CAP_CHECK_EXTENSION_VM:
> >  	case KVM_CAP_ENABLE_CAP_VM:
> > -#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > -	case KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2:
> > -#endif
> >  		return 1;
> >  #ifdef CONFIG_KVM_MMIO
> >  	case KVM_CAP_COALESCED_MMIO:
> > @@ -3320,6 +3326,10 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
> >  	case KVM_CAP_COALESCED_PIO:
> >  		return 1;
> >  #endif
> > +#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > +	case KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2:
> > +		return KVM_DIRTY_LOG_MANUAL_CAPS;
> 
> We should probably only return the new feature bit when CONFIG_X86 is set?

How about defining different values for different architectures (see
KVM_USER_MEM_SLOTS as an example), like this? A sketch of how the generic
code could then consume these caps follows the diff.

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index c3314b2..383a8ae 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -23,6 +23,10 @@
 #define KVM_HAVE_ONE_REG
 #define KVM_HALT_POLL_NS_DEFAULT 500000

+#define KVM_DIRTY_LOG_MANUAL_PROTECT (1 << 0)
+#define KVM_DIRTY_LOG_INITIALLY_SET 0
+#define KVM_DIRTY_LOG_MANUAL_CAPS KVM_DIRTY_LOG_MANUAL_PROTECT
+
 #define KVM_VCPU_MAX_FEATURES 2

 #include <kvm/arm_vgic.h>
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 41204a4..503ee17 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -85,6 +85,10 @@

 #define KVM_HALT_POLL_NS_DEFAULT 500000

+#define KVM_DIRTY_LOG_MANUAL_PROTECT (1 << 0)
+#define KVM_DIRTY_LOG_INITIALLY_SET 0
+#define KVM_DIRTY_LOG_MANUAL_CAPS KVM_DIRTY_LOG_MANUAL_PROTECT
+
 #ifdef CONFIG_KVM_MIPS_VZ
 extern unsigned long GUESTID_MASK;
 extern unsigned long GUESTID_FIRST_VERSION;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 40a0c0f..ac05172 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -49,6 +49,11 @@

 #define KVM_IRQCHIP_NUM_PINS  KVM_IOAPIC_NUM_PINS

+#define KVM_DIRTY_LOG_MANUAL_PROTECT (1 << 0)
+#define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1)
+#define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT | \
+               KVM_DIRTY_LOG_INITIALLY_SET)
+
 /* x86-specific vcpu->requests bit members */
 #define KVM_REQ_MIGRATE_TIMER          KVM_ARCH_REQ(0)
 #define KVM_REQ_REPORT_TPR_ACCESS      KVM_ARCH_REQ(1)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e89eb67..ebd3e55 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -360,6 +360,18 @@ static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
        return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
 }

+#ifndef KVM_DIRTY_LOG_MANUAL_PROTECT
+#define KVM_DIRTY_LOG_MANUAL_PROTECT 0
+#endif
+#ifndef KVM_DIRTY_LOG_INITIALLY_SET
+#define KVM_DIRTY_LOG_INITIALLY_SET 0
+#endif
+#ifndef KVM_DIRTY_LOG_MANUAL_CAPS
+#define KVM_DIRTY_LOG_MANUAL_CAPS 0
+#endif
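
The intent (a sketch only, using the names from the proposal above) is that
the generic check-extension case quoted earlier would not need any CONFIG_X86
guard, because each architecture advertises exactly the bits it defines:

	case KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2:
		/*
		 * x86:      KVM_DIRTY_LOG_MANUAL_PROTECT | KVM_DIRTY_LOG_INITIALLY_SET
		 * arm/mips: KVM_DIRTY_LOG_MANUAL_PROTECT only
		 * others:   0 via the #ifndef fallbacks, i.e. reported as unsupported
		 */
		return KVM_DIRTY_LOG_MANUAL_CAPS;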

> 
> > +#endif
> >  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
> >  	case KVM_CAP_IRQ_ROUTING:
> >  		return KVM_MAX_IRQ_ROUTES;
> > @@ -3347,11 +3357,17 @@ static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
> >  {
> >  	switch (cap->cap) {
> >  #ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > -	case KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2:
> > -		if (cap->flags || (cap->args[0] & ~1))
> > +	case KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2: {
> > +		u64 allowed_options = KVM_DIRTY_LOG_MANUAL_PROTECT2;
> > +
> > +		if (cap->args[0] & KVM_DIRTY_LOG_MANUAL_PROTECT2)
> > +			allowed_options |= KVM_DIRTY_LOG_INITIALLY_SET;
> 
> Same here to check against CONFIG_X86?

Same as the above.
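
Concretely, something along these lines (a rough sketch reusing the per-arch
names proposed above, not the exact v3 hunk): the enable path validates against
KVM_DIRTY_LOG_MANUAL_CAPS, so on architectures where KVM_DIRTY_LOG_INITIALLY_SET
is 0 that bit is rejected automatically, and initially-set can only be enabled
together with manual protect:

	case KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2: {
		u64 allowed_options = KVM_DIRTY_LOG_MANUAL_PROTECT;

		if (cap->flags)
			return -EINVAL;
		/* INITIALLY_SET is only allowed together with MANUAL_PROTECT */
		if (cap->args[0] & KVM_DIRTY_LOG_MANUAL_PROTECT)
			allowed_options = KVM_DIRTY_LOG_MANUAL_CAPS;
		if (cap->args[0] & ~allowed_options)
			return -EINVAL;
		kvm->manual_dirty_log_protect = cap->args[0];
		return 0;
	}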



Regards,
Jay Zhou
