* [PATCH] KVM: x86: enable dirty log gradually in small chunks
@ 2020-02-18 11:00 Jay Zhou
2020-02-18 11:39 ` Paolo Bonzini
2020-02-18 21:23 ` Sean Christopherson
0 siblings, 2 replies; 8+ messages in thread
From: Jay Zhou @ 2020-02-18 11:00 UTC (permalink / raw)
To: kvm
Cc: pbonzini, peterx, wangxinxin.wang, linfeng23, weidong.huang,
jianjay.zhou
It could take kvm->mmu_lock for an extended period of time when
enabling dirty log for the first time. The main cost is to clear
all the D-bits of last level SPTEs. This situation can benefit from
manual dirty log protect as well, which can reduce the mmu_lock
time taken. The sequence is like this:
1. Set all the bits of the first dirty bitmap to 1 when enabling
dirty log for the first time
2. Only write protect the huge pages
3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
4. KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
SPTEs gradually in small chunks
On an Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz host, I did some
tests with a 128G Windows VM and measured the time taken by
memory_global_dirty_log_start; here are the numbers:
VM Size    Before    After optimization
128G       460ms     10ms
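For reference, the expected userspace flow for steps 3 and 4 would look
roughly like the sketch below. This is illustrative only and not part of
this patch; the chunk size and helper name are made up, while the ioctls
and struct fields are the existing dirty log ABI.

#include <linux/kvm.h>
#include <sys/ioctl.h>

#define CHUNK_PAGES 4096ULL  /* pages cleared per KVM_CLEAR_DIRTY_LOG call */

/* Fetch the (initially all-ones) bitmap, then clear it in small chunks. */
static void sync_and_clear_slot(int vm_fd, __u32 slot,
                                __u64 *bitmap, __u64 npages)
{
        struct kvm_dirty_log log = { .slot = slot };
        struct kvm_clear_dirty_log clear = { .slot = slot };
        __u64 first, n;

        log.dirty_bitmap = bitmap;
        ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);                  /* step 3 */

        for (first = 0; first < npages; first += CHUNK_PAGES) { /* step 4 */
                n = npages - first < CHUNK_PAGES ? npages - first : CHUNK_PAGES;
                clear.first_page = first;
                clear.num_pages = n;
                /* bits covering pages [first, first + n) */
                clear.dirty_bitmap = bitmap + first / 64;
                ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);      /* error handling omitted */
        }
}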
Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
---
arch/x86/kvm/vmx/vmx.c | 5 +++++
include/linux/kvm_host.h | 5 +++++
virt/kvm/kvm_main.c | 10 ++++++++--
3 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3be25ec..a8d64f6 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7201,7 +7201,12 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
static void vmx_slot_enable_log_dirty(struct kvm *kvm,
struct kvm_memory_slot *slot)
{
+#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
+ if (!kvm->manual_dirty_log_protect)
+ kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
+#else
kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
+#endif
kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e89eb67..fd149b0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -360,6 +360,11 @@ static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
}
+static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot *memslot)
+{
+ bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
+}
+
struct kvm_s390_adapter_int {
u64 ind_addr;
u64 summary_addr;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 70f03ce..08565ed 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -862,7 +862,8 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
* Allocation size is twice as large as the actual dirty bitmap size.
* See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
*/
-static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
+static int kvm_create_dirty_bitmap(struct kvm *kvm,
+ struct kvm_memory_slot *memslot)
{
unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
@@ -870,6 +871,11 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
if (!memslot->dirty_bitmap)
return -ENOMEM;
+#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
+ if (kvm->manual_dirty_log_protect)
+ kvm_set_first_dirty_bitmap(memslot);
+#endif
+
return 0;
}
@@ -1094,7 +1100,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
/* Allocate page dirty bitmap if needed */
if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
- if (kvm_create_dirty_bitmap(&new) < 0)
+ if (kvm_create_dirty_bitmap(kvm, &new) < 0)
goto out_free;
}
--
1.8.3.1
* Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
2020-02-18 11:00 [PATCH] KVM: x86: enable dirty log gradually in small chunks Jay Zhou
@ 2020-02-18 11:39 ` Paolo Bonzini
2020-02-18 13:39 ` Zhoujian (jay)
2020-02-18 21:23 ` Sean Christopherson
1 sibling, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2020-02-18 11:39 UTC (permalink / raw)
To: Jay Zhou, kvm; +Cc: peterx, wangxinxin.wang, linfeng23, weidong.huang
On 18/02/20 12:00, Jay Zhou wrote:
> It could take kvm->mmu_lock for an extended period of time when
> enabling dirty log for the first time. The main cost is to clear
> all the D-bits of last level SPTEs. This situation can benefit from
> manual dirty log protect as well, which can reduce the mmu_lock
> time taken. The sequence is like this:
>
> 1. Set all the bits of the first dirty bitmap to 1 when enabling
> dirty log for the first time
> 2. Only write protect the huge pages
> 3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
> 4. KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> SPTEs gradually in small chunks
>
> Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment,
> I did some tests with a 128G windows VM and counted the time taken
> of memory_global_dirty_log_start, here is the numbers:
>
> VM Size Before After optimization
> 128G 460ms 10ms
This is a good idea, but could userspace expect the bitmap to be 0 for
pages that haven't been touched? I think this should be added as a new
bit to the KVM_ENABLE_CAP for KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2. That is:
- in kvm_vm_ioctl_check_extension_generic, return 3 for
KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 (better: define two constants
KVM_DIRTY_LOG_MANUAL_PROTECT as 1 and KVM_DIRTY_LOG_INITIALLY_SET as 2).
- in kvm_vm_ioctl_enable_cap_generic, allow bit 0 and bit 1 for cap->args[0]
- in kvm_vm_ioctl_enable_cap_generic, check "if
(!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET))".
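Roughly, as a completely untested sketch (the enable_cap fragment just
extends the existing KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 case):

/* include/uapi/linux/kvm.h */
#define KVM_DIRTY_LOG_MANUAL_PROTECT    (1 << 0)
#define KVM_DIRTY_LOG_INITIALLY_SET     (1 << 1)

/* kvm_vm_ioctl_check_extension_generic() */
case KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2:
        return KVM_DIRTY_LOG_MANUAL_PROTECT | KVM_DIRTY_LOG_INITIALLY_SET;

/* kvm_vm_ioctl_enable_cap_generic() */
case KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2:
        if (cap->flags ||
            (cap->args[0] & ~(KVM_DIRTY_LOG_MANUAL_PROTECT |
                              KVM_DIRTY_LOG_INITIALLY_SET)))
                return -EINVAL;
        kvm->manual_dirty_log_protect = cap->args[0];
        return 0;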
Thanks,
Paolo
> Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 5 +++++
> include/linux/kvm_host.h | 5 +++++
> virt/kvm/kvm_main.c | 10 ++++++++--
> 3 files changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3be25ec..a8d64f6 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7201,7 +7201,12 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
> static void vmx_slot_enable_log_dirty(struct kvm *kvm,
> struct kvm_memory_slot *slot)
> {
> +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> + if (!kvm->manual_dirty_log_protect)
> + kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> +#else
> kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> +#endif
> kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
> }
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index e89eb67..fd149b0 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -360,6 +360,11 @@ static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
> return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
> }
>
> +static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot *memslot)
> +{
> + bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
> +}
> +
> struct kvm_s390_adapter_int {
> u64 ind_addr;
> u64 summary_addr;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 70f03ce..08565ed 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -862,7 +862,8 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
> * Allocation size is twice as large as the actual dirty bitmap size.
> * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
> */
> -static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> +static int kvm_create_dirty_bitmap(struct kvm *kvm,
> + struct kvm_memory_slot *memslot)
> {
> unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
>
> @@ -870,6 +871,11 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> if (!memslot->dirty_bitmap)
> return -ENOMEM;
>
> +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> + if (kvm->manual_dirty_log_protect)
> + kvm_set_first_dirty_bitmap(memslot);
> +#endif
> +
> return 0;
> }
>
> @@ -1094,7 +1100,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
>
> /* Allocate page dirty bitmap if needed */
> if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> - if (kvm_create_dirty_bitmap(&new) < 0)
> + if (kvm_create_dirty_bitmap(kvm, &new) < 0)
> goto out_free;
> }
>
>
* RE: [PATCH] KVM: x86: enable dirty log gradually in small chunks
2020-02-18 11:39 ` Paolo Bonzini
@ 2020-02-18 13:39 ` Zhoujian (jay)
2020-02-18 17:26 ` Peter Xu
0 siblings, 1 reply; 8+ messages in thread
From: Zhoujian (jay) @ 2020-02-18 13:39 UTC (permalink / raw)
To: Paolo Bonzini, kvm
Cc: peterx, wangxin (U), linfeng (M), Huangweidong (C), Liujinsong (Paul)
Hi Paolo,
> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> Sent: Tuesday, February 18, 2020 7:40 PM
> To: Zhoujian (jay) <jianjay.zhou@huawei.com>; kvm@vger.kernel.org
> Cc: peterx@redhat.com; wangxin (U) <wangxinxin.wang@huawei.com>;
> linfeng (M) <linfeng23@huawei.com>; Huangweidong (C)
> <weidong.huang@huawei.com>
> Subject: Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
>
> On 18/02/20 12:00, Jay Zhou wrote:
> > It could take kvm->mmu_lock for an extended period of time when
> > enabling dirty log for the first time. The main cost is to clear all
> > the D-bits of last level SPTEs. This situation can benefit from manual
> > dirty log protect as well, which can reduce the mmu_lock time taken.
> > The sequence is like this:
> >
> > 1. Set all the bits of the first dirty bitmap to 1 when enabling
> > dirty log for the first time
> > 2. Only write protect the huge pages
> > 3. KVM_GET_DIRTY_LOG returns the dirty bitmap info 4.
> > KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> > SPTEs gradually in small chunks
> >
> > Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment, I did
> > some tests with a 128G windows VM and counted the time taken of
> > memory_global_dirty_log_start, here is the numbers:
> >
> > VM Size Before After optimization
> > 128G 460ms 10ms
>
> This is a good idea, but could userspace expect the bitmap to be 0 for pages
> that haven't been touched?
Userspace gets the bitmap information only from the kernel side.
I think it is up to the kernel to distinguish whether the pages have been
touched, which is done by traversing the rmap for now; I don't have other
ideas yet, :-(
But even if userspace gets 1 for pages that haven't been touched, those
pages will still be filtered out in the kernel's KVM_CLEAR_DIRTY_LOG ioctl
path, since I think the rmap does not exist for them.
> I think this should be added as a new bit to the
> KVM_ENABLE_CAP for KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2. That is:
>
> - in kvm_vm_ioctl_check_extension_generic, return 3 for
> KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 (better: define two constants
> KVM_DIRTY_LOG_MANUAL_PROTECT as 1 and
> KVM_DIRTY_LOG_INITIALLY_SET as 2).
>
> - in kvm_vm_ioctl_enable_cap_generic, allow bit 0 and bit 1 for cap->args[0]
>
> - in kvm_vm_ioctl_enable_cap_generic, check "if
> (!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET))".
Thanks for the details! I'll add them in the next version.
Regards,
Jay Zhou
>
> Thanks,
>
> Paolo
>
>
> > Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> > ---
> > arch/x86/kvm/vmx/vmx.c | 5 +++++
> > include/linux/kvm_host.h | 5 +++++
> > virt/kvm/kvm_main.c | 10 ++++++++--
> > 3 files changed, 18 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index
> > 3be25ec..a8d64f6 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -7201,7 +7201,12 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu,
> > int cpu) static void vmx_slot_enable_log_dirty(struct kvm *kvm,
> > struct kvm_memory_slot *slot) {
> > +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > + if (!kvm->manual_dirty_log_protect)
> > + kvm_mmu_slot_leaf_clear_dirty(kvm, slot); #else
> > kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> > +#endif
> > kvm_mmu_slot_largepage_remove_write_access(kvm, slot); }
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index
> > e89eb67..fd149b0 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -360,6 +360,11 @@ static inline unsigned long
> *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
> > return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
> > }
> >
> > +static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot
> > +*memslot) {
> > + bitmap_set(memslot->dirty_bitmap, 0, memslot->npages); }
> > +
> > struct kvm_s390_adapter_int {
> > u64 ind_addr;
> > u64 summary_addr;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index
> > 70f03ce..08565ed 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -862,7 +862,8 @@ static int kvm_vm_release(struct inode *inode,
> struct file *filp)
> > * Allocation size is twice as large as the actual dirty bitmap size.
> > * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
> > */
> > -static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> > +static int kvm_create_dirty_bitmap(struct kvm *kvm,
> > + struct kvm_memory_slot *memslot)
> > {
> > unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
> >
> > @@ -870,6 +871,11 @@ static int kvm_create_dirty_bitmap(struct
> kvm_memory_slot *memslot)
> > if (!memslot->dirty_bitmap)
> > return -ENOMEM;
> >
> > +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > + if (kvm->manual_dirty_log_protect)
> > + kvm_set_first_dirty_bitmap(memslot);
> > +#endif
> > +
> > return 0;
> > }
> >
> > @@ -1094,7 +1100,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >
> > /* Allocate page dirty bitmap if needed */
> > if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> > - if (kvm_create_dirty_bitmap(&new) < 0)
> > + if (kvm_create_dirty_bitmap(kvm, &new) < 0)
> > goto out_free;
> > }
> >
> >
* Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
2020-02-18 13:39 ` Zhoujian (jay)
@ 2020-02-18 17:26 ` Peter Xu
2020-02-19 4:11 ` Zhoujian (jay)
0 siblings, 1 reply; 8+ messages in thread
From: Peter Xu @ 2020-02-18 17:26 UTC (permalink / raw)
To: Zhoujian (jay)
Cc: Paolo Bonzini, kvm, wangxin (U), linfeng (M), Huangweidong (C),
Liujinsong (Paul)
On Tue, Feb 18, 2020 at 01:39:36PM +0000, Zhoujian (jay) wrote:
> Hi Paolo,
>
> > -----Original Message-----
> > From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> > Sent: Tuesday, February 18, 2020 7:40 PM
> > To: Zhoujian (jay) <jianjay.zhou@huawei.com>; kvm@vger.kernel.org
> > Cc: peterx@redhat.com; wangxin (U) <wangxinxin.wang@huawei.com>;
> > linfeng (M) <linfeng23@huawei.com>; Huangweidong (C)
> > <weidong.huang@huawei.com>
> > Subject: Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
> >
> > On 18/02/20 12:00, Jay Zhou wrote:
> > > It could take kvm->mmu_lock for an extended period of time when
> > > enabling dirty log for the first time. The main cost is to clear all
> > > the D-bits of last level SPTEs. This situation can benefit from manual
> > > dirty log protect as well, which can reduce the mmu_lock time taken.
> > > The sequence is like this:
> > >
> > > 1. Set all the bits of the first dirty bitmap to 1 when enabling
> > > dirty log for the first time
> > > 2. Only write protect the huge pages
> > > 3. KVM_GET_DIRTY_LOG returns the dirty bitmap info 4.
> > > KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> > > SPTEs gradually in small chunks
> > >
> > > Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment, I did
> > > some tests with a 128G windows VM and counted the time taken of
> > > memory_global_dirty_log_start, here is the numbers:
> > >
> > > VM Size Before After optimization
> > > 128G 460ms 10ms
> >
> > This is a good idea, but could userspace expect the bitmap to be 0 for pages
> > that haven't been touched?
>
> The userspace gets the bitmap information only from the kernel side.
> It depends on the kernel side to distinguish whether the pages have been touched
> I think, which using the rmap to traverse for now. I haven't the other ideas yet, :-(
>
> But even though the userspace gets 1 for pages that haven't been touched, these
> pages will be filtered out too in the kernel space KVM_CLEAR_DIRTY_LOG ioctl
> path, since the rmap does not exist I think.
>
> > I think this should be added as a new bit to the
> > KVM_ENABLE_CAP for KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2. That is:
> >
> > - in kvm_vm_ioctl_check_extension_generic, return 3 for
> > KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 (better: define two constants
> > KVM_DIRTY_LOG_MANUAL_PROTECT as 1 and
> > KVM_DIRTY_LOG_INITIALLY_SET as 2).
> >
> > - in kvm_vm_ioctl_enable_cap_generic, allow bit 0 and bit 1 for cap->args[0]
> >
> > - in kvm_vm_ioctl_enable_cap_generic, check "if
> > (!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET))".
>
> Thanks for the details! I'll add them in the next version.
I agree with Paolo that we'd better introduce a new bit for the
change, because we don't know whether userspace assumes a zeroed
dirty bitmap as the initial state (which is still part of the
kernel ABI IIUC; actually that could be a good thing for some
userspace).
Another question is that I see you only modified the PML path. Could
this also benefit the rest (say, SPTE write protects)?
Thanks,
--
Peter Xu
* Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
2020-02-18 11:00 [PATCH] KVM: x86: enable dirty log gradually in small chunks Jay Zhou
2020-02-18 11:39 ` Paolo Bonzini
@ 2020-02-18 21:23 ` Sean Christopherson
2020-02-19 6:58 ` Zhoujian (jay)
1 sibling, 1 reply; 8+ messages in thread
From: Sean Christopherson @ 2020-02-18 21:23 UTC (permalink / raw)
To: Jay Zhou; +Cc: kvm, pbonzini, peterx, wangxinxin.wang, linfeng23, weidong.huang
On Tue, Feb 18, 2020 at 07:00:13PM +0800, Jay Zhou wrote:
> It could take kvm->mmu_lock for an extended period of time when
> enabling dirty log for the first time. The main cost is to clear
> all the D-bits of last level SPTEs. This situation can benefit from
> manual dirty log protect as well, which can reduce the mmu_lock
> time taken. The sequence is like this:
>
> 1. Set all the bits of the first dirty bitmap to 1 when enabling
> dirty log for the first time
> 2. Only write protect the huge pages
> 3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
> 4. KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> SPTEs gradually in small chunks
>
> Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment,
> I did some tests with a 128G windows VM and counted the time taken
> of memory_global_dirty_log_start, here is the numbers:
>
> VM Size Before After optimization
> 128G 460ms 10ms
>
> Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 5 +++++
> include/linux/kvm_host.h | 5 +++++
> virt/kvm/kvm_main.c | 10 ++++++++--
> 3 files changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3be25ec..a8d64f6 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7201,7 +7201,12 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
> static void vmx_slot_enable_log_dirty(struct kvm *kvm,
> struct kvm_memory_slot *slot)
> {
> +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> + if (!kvm->manual_dirty_log_protect)
> + kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> +#else
> kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> +#endif
The ifdef is unnecessary, this is in VMX (x86) code, i.e.
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT is guaranteed to be defined.
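I.e. something like this (untested):

static void vmx_slot_enable_log_dirty(struct kvm *kvm,
                                      struct kvm_memory_slot *slot)
{
        if (!kvm->manual_dirty_log_protect)
                kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
        kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
}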
> kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
> }
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index e89eb67..fd149b0 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -360,6 +360,11 @@ static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
> return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
> }
>
> +static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot *memslot)
> +{
> + bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
> +}
I'd prefer this be open coded with a comment, e.g. "first" is misleading
because it's really "initial dirty bitmap for this memslot after enabling
dirty logging".
> +
> struct kvm_s390_adapter_int {
> u64 ind_addr;
> u64 summary_addr;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 70f03ce..08565ed 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -862,7 +862,8 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
> * Allocation size is twice as large as the actual dirty bitmap size.
> * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
> */
> -static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> +static int kvm_create_dirty_bitmap(struct kvm *kvm,
> + struct kvm_memory_slot *memslot)
> {
> unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
>
> @@ -870,6 +871,11 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> if (!memslot->dirty_bitmap)
> return -ENOMEM;
>
> +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
The ifdef is unnecessary, manual_dirty_log_protect always exists and is
guaranteed to be false if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=n. This
isn't exactly a hot path so saving the uop isn't worth the #ifdef.
> + if (kvm->manual_dirty_log_protect)
> + kvm_set_first_dirty_bitmap(memslot);
> +#endif
> +
> return 0;
> }
>
> @@ -1094,7 +1100,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
>
> /* Allocate page dirty bitmap if needed */
> if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> - if (kvm_create_dirty_bitmap(&new) < 0)
> + if (kvm_create_dirty_bitmap(kvm, &new) < 0)
Rather than pass @kvm, what about doing bitmap_set() in __kvm_set_memory_region()
and s/kvm_create_dirty_bitmap/kvm_alloc_dirty_bitmap to make it clear that
the helper is only responsible for allocation? And opportunistically drop
the superfluous "< 0", e.g.
if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
        if (kvm_alloc_dirty_bitmap(&new))
                goto out_free;

        /*
         * WORDS!
         */
        if (kvm->manual_dirty_log_protect)
                bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
}
> goto out_free;
> }
>
> --
> 1.8.3.1
>
>
* RE: [PATCH] KVM: x86: enable dirty log gradually in small chunks
2020-02-18 17:26 ` Peter Xu
@ 2020-02-19 4:11 ` Zhoujian (jay)
0 siblings, 0 replies; 8+ messages in thread
From: Zhoujian (jay) @ 2020-02-19 4:11 UTC (permalink / raw)
To: Peter Xu
Cc: Paolo Bonzini, kvm, wangxin (U), linfeng (M), Huangweidong (C),
Liujinsong (Paul)
Hi Peter,
> -----Original Message-----
> From: Peter Xu [mailto:peterx@redhat.com]
> Sent: Wednesday, February 19, 2020 1:26 AM
> To: Zhoujian (jay) <jianjay.zhou@huawei.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>; kvm@vger.kernel.org; wangxin (U)
> <wangxinxin.wang@huawei.com>; linfeng (M) <linfeng23@huawei.com>;
> Huangweidong (C) <weidong.huang@huawei.com>; Liujinsong (Paul)
> <liu.jinsong@huawei.com>
> Subject: Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
>
> On Tue, Feb 18, 2020 at 01:39:36PM +0000, Zhoujian (jay) wrote:
> > Hi Paolo,
> >
> > > -----Original Message-----
> > > From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> > > Sent: Tuesday, February 18, 2020 7:40 PM
> > > To: Zhoujian (jay) <jianjay.zhou@huawei.com>; kvm@vger.kernel.org
> > > Cc: peterx@redhat.com; wangxin (U) <wangxinxin.wang@huawei.com>;
> > > linfeng (M) <linfeng23@huawei.com>; Huangweidong (C)
> > > <weidong.huang@huawei.com>
> > > Subject: Re: [PATCH] KVM: x86: enable dirty log gradually in small
> > > chunks
> > >
> > > On 18/02/20 12:00, Jay Zhou wrote:
> > > > It could take kvm->mmu_lock for an extended period of time when
> > > > enabling dirty log for the first time. The main cost is to clear
> > > > all the D-bits of last level SPTEs. This situation can benefit
> > > > from manual dirty log protect as well, which can reduce the mmu_lock
> time taken.
> > > > The sequence is like this:
> > > >
> > > > 1. Set all the bits of the first dirty bitmap to 1 when enabling
> > > > dirty log for the first time
> > > > 2. Only write protect the huge pages 3. KVM_GET_DIRTY_LOG returns
> > > > the dirty bitmap info 4.
> > > > KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> > > > SPTEs gradually in small chunks
> > > >
> > > > Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment, I
> > > > did some tests with a 128G windows VM and counted the time taken
> > > > of memory_global_dirty_log_start, here is the numbers:
> > > >
> > > > VM Size Before After optimization
> > > > 128G 460ms 10ms
> > >
> > > This is a good idea, but could userspace expect the bitmap to be 0
> > > for pages that haven't been touched?
> >
> > The userspace gets the bitmap information only from the kernel side.
> > It depends on the kernel side to distinguish whether the pages have
> > been touched I think, which using the rmap to traverse for now. I
> > haven't the other ideas yet, :-(
> >
> > But even though the userspace gets 1 for pages that haven't been
> > touched, these pages will be filtered out too in the kernel space
> > KVM_CLEAR_DIRTY_LOG ioctl path, since the rmap does not exist I think.
> >
> > > I think this should be added as a new bit to the KVM_ENABLE_CAP for
> > > KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2. That is:
> > >
> > > - in kvm_vm_ioctl_check_extension_generic, return 3 for
> > > KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 (better: define two constants
> > > KVM_DIRTY_LOG_MANUAL_PROTECT as 1 and
> KVM_DIRTY_LOG_INITIALLY_SET as
> > > 2).
> > >
> > > - in kvm_vm_ioctl_enable_cap_generic, allow bit 0 and bit 1 for
> > > cap->args[0]
> > >
> > > - in kvm_vm_ioctl_enable_cap_generic, check "if
> > > (!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET))".
> >
> > Thanks for the details! I'll add them in the next version.
>
> I agree with Paolo that we'd better introduce a new bit for the change, because
> we don't know whether userspace has the assumption with a zeroed dirty
> bitmap as initial state (which is still part of the kernel ABI IIUC, actually that
> could be a good thing for some userspace).
>
> Another question is that I see you only modified the PML path. Could this also
> benefit the rest (say, SPTE write protects)?
Oh, I missed the other path, thanks for the reminder. I'll add it in V2.
Regards,
Jay Zhou
>
> Thanks,
>
> --
> Peter Xu
* RE: [PATCH] KVM: x86: enable dirty log gradually in small chunks
2020-02-18 21:23 ` Sean Christopherson
@ 2020-02-19 6:58 ` Zhoujian (jay)
2020-02-19 15:08 ` Sean Christopherson
0 siblings, 1 reply; 8+ messages in thread
From: Zhoujian (jay) @ 2020-02-19 6:58 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, pbonzini, peterx, wangxin (U), linfeng (M), Huangweidong (C),
Liujinsong (Paul)
Hi Sean,
> -----Original Message-----
> From: Sean Christopherson [mailto:sean.j.christopherson@intel.com]
> Sent: Wednesday, February 19, 2020 5:23 AM
> To: Zhoujian (jay) <jianjay.zhou@huawei.com>
> Cc: kvm@vger.kernel.org; pbonzini@redhat.com; peterx@redhat.com;
> wangxin (U) <wangxinxin.wang@huawei.com>; linfeng (M)
> <linfeng23@huawei.com>; Huangweidong (C) <weidong.huang@huawei.com>
> Subject: Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
>
> On Tue, Feb 18, 2020 at 07:00:13PM +0800, Jay Zhou wrote:
> > It could take kvm->mmu_lock for an extended period of time when
> > enabling dirty log for the first time. The main cost is to clear all
> > the D-bits of last level SPTEs. This situation can benefit from manual
> > dirty log protect as well, which can reduce the mmu_lock time taken.
> > The sequence is like this:
> >
> > 1. Set all the bits of the first dirty bitmap to 1 when enabling
> > dirty log for the first time
> > 2. Only write protect the huge pages
> > 3. KVM_GET_DIRTY_LOG returns the dirty bitmap info 4.
> > KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> > SPTEs gradually in small chunks
> >
> > Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment, I did
> > some tests with a 128G windows VM and counted the time taken of
> > memory_global_dirty_log_start, here is the numbers:
> >
> > VM Size Before After optimization
> > 128G 460ms 10ms
> >
> > Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> > ---
> > arch/x86/kvm/vmx/vmx.c | 5 +++++
> > include/linux/kvm_host.h | 5 +++++
> > virt/kvm/kvm_main.c | 10 ++++++++--
> > 3 files changed, 18 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index
> > 3be25ec..a8d64f6 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -7201,7 +7201,12 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu,
> > int cpu) static void vmx_slot_enable_log_dirty(struct kvm *kvm,
> > struct kvm_memory_slot *slot) {
> > +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > + if (!kvm->manual_dirty_log_protect)
> > + kvm_mmu_slot_leaf_clear_dirty(kvm, slot); #else
> > kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> > +#endif
>
> The ifdef is unnecessary, this is in VMX (x86) code, i.e.
> CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT is guaranteed to be
> defined.
I agree.
>
> > kvm_mmu_slot_largepage_remove_write_access(kvm, slot); }
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index
> > e89eb67..fd149b0 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -360,6 +360,11 @@ static inline unsigned long
> *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
> > return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
> > }
> >
> > +static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot
> > +*memslot) {
> > + bitmap_set(memslot->dirty_bitmap, 0, memslot->npages); }
>
> I'd prefer this be open coded with a comment, e.g. "first" is misleading because
> it's really "initial dirty bitmap for this memslot after enabling dirty logging".
kvm_create_dirty_bitmap allocates twice the size of the actual dirty bitmap,
and kvm_second_dirty_bitmap returns the second part of the map; that is why
I used first_dirty_bitmap here, meaning the first part (not the first time)
of the dirty bitmap.
I'll try to make it clearer if this is misleading...
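For context, the existing helper whose naming I followed is roughly:

static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *memslot)
{
        unsigned long len = kvm_dirty_bitmap_bytes(memslot);

        /* the second half of the double-sized allocation */
        return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
}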
> > +
> > struct kvm_s390_adapter_int {
> > u64 ind_addr;
> > u64 summary_addr;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index
> > 70f03ce..08565ed 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -862,7 +862,8 @@ static int kvm_vm_release(struct inode *inode,
> struct file *filp)
> > * Allocation size is twice as large as the actual dirty bitmap size.
> > * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
> > */
> > -static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> > +static int kvm_create_dirty_bitmap(struct kvm *kvm,
> > + struct kvm_memory_slot *memslot)
> > {
> > unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
> >
> > @@ -870,6 +871,11 @@ static int kvm_create_dirty_bitmap(struct
> kvm_memory_slot *memslot)
> > if (!memslot->dirty_bitmap)
> > return -ENOMEM;
> >
> > +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
>
> The ifdef is unnecessary, manual_dirty_log_protect always exists and is
> guaranteed to be false if
> CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=n. This isn't exactly a
> hot path so saving the uop isn't worth the #ifdef.
After rereading the code, I think you're right.
>
> > + if (kvm->manual_dirty_log_protect)
> > + kvm_set_first_dirty_bitmap(memslot);
> > +#endif
> > +
> > return 0;
> > }
> >
> > @@ -1094,7 +1100,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >
> > /* Allocate page dirty bitmap if needed */
> > if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> > - if (kvm_create_dirty_bitmap(&new) < 0)
> > + if (kvm_create_dirty_bitmap(kvm, &new) < 0)
>
> Rather than pass @kvm, what about doing bitmap_set() in
> __kvm_set_memory_region() and
> s/kvm_create_dirty_bitmap/kvm_alloc_dirty_bitmap to make it clear that the
> helper is only responsible for allocation? And opportunistically drop the
> superfluous "< 0", e.g.
>
> if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> if (kvm_alloc_dirty_bitmap(&new))
> goto out_free;
>
> /*
> * WORDS!
> */
> if (kvm->manual_dirty_log_protect)
> bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
> }
Seems to be more clear, thanks for the suggestion.
Regards,
Jay Zhou
> > goto out_free;
> > }
> >
> > --
> > 1.8.3.1
> >
> >
* Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
2020-02-19 6:58 ` Zhoujian (jay)
@ 2020-02-19 15:08 ` Sean Christopherson
0 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2020-02-19 15:08 UTC (permalink / raw)
To: Zhoujian (jay)
Cc: kvm, pbonzini, peterx, wangxin (U), linfeng (M), Huangweidong (C),
Liujinsong (Paul)
On Wed, Feb 19, 2020 at 06:58:33AM +0000, Zhoujian (jay) wrote:
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -360,6 +360,11 @@ static inline unsigned long
> > *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
> > > return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
> > > }
> > >
> > > +static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot
> > > +*memslot) {
> > > + bitmap_set(memslot->dirty_bitmap, 0, memslot->npages); }
> >
> > I'd prefer this be open coded with a comment, e.g. "first" is misleading because
> > it's really "initial dirty bitmap for this memslot after enabling dirty logging".
>
> kvm_create_dirty_bitmap allocates twice size as large as the actual dirty bitmap
> size, and there is kvm_second_dirty_bitmap to get the second part of the map,
> this is the reason why I use first_dirty_bitmap here, which means the first part
> (not first time) of the dirty bitmap.
Ha, I didn't consider that usage of "first", obviously :-)
> I'll try to be more clear if this is misleading...
End of thread (newest message: 2020-02-19 15:08 UTC)
Thread overview: 8+ messages
2020-02-18 11:00 [PATCH] KVM: x86: enable dirty log gradually in small chunks Jay Zhou
2020-02-18 11:39 ` Paolo Bonzini
2020-02-18 13:39 ` Zhoujian (jay)
2020-02-18 17:26 ` Peter Xu
2020-02-19 4:11 ` Zhoujian (jay)
2020-02-18 21:23 ` Sean Christopherson
2020-02-19 6:58 ` Zhoujian (jay)
2020-02-19 15:08 ` Sean Christopherson