From: Paolo Bonzini <pbonzini@redhat.com>
To: Jay Zhou <jianjay.zhou@huawei.com>, kvm@vger.kernel.org
Cc: peterx@redhat.com, wangxinxin.wang@huawei.com,
linfeng23@huawei.com, weidong.huang@huawei.com
Subject: Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
Date: Tue, 18 Feb 2020 12:39:48 +0100 [thread overview]
Message-ID: <24b21aee-e038-bc55-a85e-0f64912e7b89@redhat.com> (raw)
In-Reply-To: <20200218110013.15640-1-jianjay.zhou@huawei.com>
On 18/02/20 12:00, Jay Zhou wrote:
> It could take kvm->mmu_lock for an extended period of time when
> enabling dirty log for the first time. The main cost is to clear
> all the D-bits of last level SPTEs. This situation can benefit from
> manual dirty log protect as well, which can reduce the mmu_lock
> time taken. The sequence is like this:
>
> 1. Set all the bits of the first dirty bitmap to 1 when enabling
> dirty log for the first time
> 2. Only write protect the huge pages
> 3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
> 4. KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> SPTEs gradually in small chunks
>
> On an Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz host, I ran some
> tests with a 128G Windows VM and measured the time taken by
> memory_global_dirty_log_start; here are the numbers:
>
> VM size    Before    After optimization
> 128G       460ms     10ms
This is a good idea, but userspace could reasonably expect the bitmap to be
0 for pages that haven't been touched, so the new behavior has to be opt-in.
I think it should be added as a new bit to the KVM_ENABLE_CAP arguments for
KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2. That is:
- in kvm_vm_ioctl_check_extension_generic, return 3 for
KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 (better: define two constants
KVM_DIRTY_LOG_MANUAL_PROTECT as 1 and KVM_DIRTY_LOG_INITIALLY_SET as 2).
- in kvm_vm_ioctl_enable_cap_generic, allow bit 0 and bit 1 for cap->args[0]
- in kvm_vm_ioctl_enable_cap_generic, check "if
(!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET))".
Thanks,
Paolo
> Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 5 +++++
> include/linux/kvm_host.h | 5 +++++
> virt/kvm/kvm_main.c | 10 ++++++++--
> 3 files changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3be25ec..a8d64f6 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7201,7 +7201,12 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
> static void vmx_slot_enable_log_dirty(struct kvm *kvm,
> struct kvm_memory_slot *slot)
> {
> +#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> + if (!kvm->manual_dirty_log_protect)
> + kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> +#else
> kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> +#endif
> kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
> }
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index e89eb67..fd149b0 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -360,6 +360,11 @@ static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
> return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
> }
>
> +static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot *memslot)
> +{
> + bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
> +}
> +
> struct kvm_s390_adapter_int {
> u64 ind_addr;
> u64 summary_addr;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 70f03ce..08565ed 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -862,7 +862,8 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
> * Allocation size is twice as large as the actual dirty bitmap size.
> * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
> */
> -static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> +static int kvm_create_dirty_bitmap(struct kvm *kvm,
> + struct kvm_memory_slot *memslot)
> {
> unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
>
> @@ -870,6 +871,11 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> if (!memslot->dirty_bitmap)
> return -ENOMEM;
>
> +#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> + if (kvm->manual_dirty_log_protect)
> + kvm_set_first_dirty_bitmap(memslot);
> +#endif
> +
> return 0;
> }
>
> @@ -1094,7 +1100,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
>
> /* Allocate page dirty bitmap if needed */
> if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> - if (kvm_create_dirty_bitmap(&new) < 0)
> + if (kvm_create_dirty_bitmap(kvm, &new) < 0)
> goto out_free;
> }
>
>
Thread overview: 8+ messages
2020-02-18 11:00 [PATCH] KVM: x86: enable dirty log gradually in small chunks Jay Zhou
2020-02-18 11:39 ` Paolo Bonzini [this message]
2020-02-18 13:39 ` Zhoujian (jay)
2020-02-18 17:26 ` Peter Xu
2020-02-19 4:11 ` Zhoujian (jay)
2020-02-18 21:23 ` Sean Christopherson
2020-02-19 6:58 ` Zhoujian (jay)
2020-02-19 15:08 ` Sean Christopherson