From: Keqian Zhu <zhukeqian1@huawei.com>
To: Peter Xu <peterx@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Hyman <huangy81@chinatelecom.cn>,
	qemu-devel@nongnu.org,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH v5 00/10] KVM: Dirty ring support (QEMU part)
Date: Wed, 24 Mar 2021 10:56:22 +0800	[thread overview]
Message-ID: <5da1dd71-58e9-6579-c7c1-6cb60baf7ac1@huawei.com> (raw)
In-Reply-To: <20210323143429.GB6486@xz-x1>

Hi Peter,

On 2021/3/23 22:34, Peter Xu wrote:
> Keqian,
> 
> On Tue, Mar 23, 2021 at 02:40:43PM +0800, Keqian Zhu wrote:
>>>> The second question is that you observed a longer migration time (55s->73s) when the guest
>>>> has 24G ram and the dirty rate is 800M/s. I am not clear about the reason. With dirty
>>>> ring enabled, QEMU can get dirty info faster, which means it handles dirty pages more
>>>> quickly, and the guest can be throttled, which means dirty pages are generated more slowly.
>>>> What's the rationale for the longer migration time?
>>>
>>> Because the dirty ring is more sensitive to the dirty rate, while the dirty bitmap is more
>> Emm... Sorry, I'm not quite clear about this... I think a higher dirty rate doesn't cause
>> slower dirty_log_sync compared to legacy bitmap mode. Besides, a higher dirty rate
>> means we may have more full-exits, which can properly limit the dirty rate. So it seems
>> that the dirty ring "prefers" a higher dirty rate.
> 
> When I measured the 800MB/s, that was in the guest, after throttling.
> 
> Imagine another example: a VM has 1G memory and keeps dirtying it at 10GB/s.  Dirty
> logging will need to collect even less for each iteration because the memory size
> shrank, and collect even less frequently due to the high dirty rate; however, the dirty
> ring will use 100% cpu power to collect dirty pages because the ring keeps filling up.
Looks good.
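Putting rough numbers on that example (my own back-of-the-envelope arithmetic,
and the 4096-entry ring size is only an assumed value): 1G of memory is
1G/4K = 262144 pages, so the legacy bitmap is a fixed 32KB per sync no matter
how fast the guest dirties it, while at 10GB/s a per-vCPU ring of, say, 4096
entries can fill many times per second, so the collector effectively never
stops running.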

We have several places that collect dirty pages: the background reaper, the vCPU exit handler,
and the migration thread. I think the migration time is most closely related to the migration thread.

The migration thread calls kvm_dirty_ring_flush():
1. kvm_cpu_synchronize_kick_all() waits for each vCPU to handle its full-exit.
2. kvm_dirty_ring_reap() collects and resets the dirty pages.
Both operations take more time as the dirty rate grows.
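Roughly, the path I have in mind looks like the following simplified sketch
(the function names are the ones mentioned above, but the exact signatures in
the series may differ):

    /* Simplified sketch of the migration-thread flush path; not the exact
     * code from the series, just its shape. */
    static void kvm_dirty_ring_flush(void)
    {
        /*
         * 1) Kick every vCPU out of guest mode synchronously, so that any
         *    dirty GFNs still buffered in hardware (e.g. PML) are pushed
         *    into the per-vCPU dirty rings.  With a higher dirty rate the
         *    vCPUs are more likely to be busy with ring-full exits, so
         *    this wait grows.
         */
        kvm_cpu_synchronize_kick_all();

        /*
         * 2) Walk every vCPU's ring, propagate the dirty GFNs into the
         *    KVMSlot dirty bitmaps, and reset the harvested entries so
         *    that KVM regains free ring space.  More dirty pages means
         *    more entries to walk.
         */
        kvm_dirty_ring_reap(kvm_state);
    }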

But I suddenly realize that the key problem may not be here. Though we have a separate
"reset" operation for the dirty ring, it is actually performed right after we collect the
dirty ring into the kvmslot. So dirty ring mode behaves like legacy bitmap mode without
manual_dirty_clear.

If we could "reset" the dirty ring just before we really handle the dirty pages, we could
get a shorter migration time. But the design of the dirty ring doesn't allow this, because
we must perform the reset to make free space...
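To make the coupling concrete, here is a minimal userspace-side sketch of
harvesting one ring, based on my understanding of the KVM dirty ring ABI
(struct kvm_dirty_gfn, the flag bits and KVM_RESET_DIRTY_RINGS come from
linux/kvm.h; mark_page_dirty_in_slot() is a made-up helper, and real code
also needs proper acquire/release ordering on the flags):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Harvest one vCPU's dirty ring.  Freeing ring space is only possible
     * by flagging the harvested entries for reset and then issuing
     * KVM_RESET_DIRTY_RINGS, so collection and reset are tied together. */
    static uint32_t reap_one_ring(struct kvm_dirty_gfn *ring, uint32_t size,
                                  uint32_t *fetch_index)
    {
        uint32_t count = 0;

        for (;;) {
            struct kvm_dirty_gfn *e = &ring[*fetch_index % size];

            if (!(e->flags & KVM_DIRTY_GFN_F_DIRTY)) {
                break;                                   /* ring drained */
            }
            mark_page_dirty_in_slot(e->slot, e->offset); /* made-up helper */
            e->flags = KVM_DIRTY_GFN_F_RESET;            /* mark harvested */
            (*fetch_index)++;
            count++;
        }
        return count;
    }

    /* Only after the entries are flagged can the space really be freed:
     *     ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
     */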

> 
>>
>>> sensitive to the memory footprint.  In the above 24G mem + 800MB/s dirty rate
>>> condition, the dirty bitmap seems to be more efficient; say, collecting the dirty
>>> bitmap of 24G mem (24G/4K/8=0.75MB) for each migration cycle is fast enough.
>>>
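(Spelling out the arithmetic: 24G/4K = 6291456 pages, and one bit per page
gives 6291456/8 = 786432 bytes, i.e. 0.75MB per sync, independent of the
dirty rate.)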
>>> Not to mention that the current implementation of dirty ring in QEMU is not
>>> complete - we still have two more layers of dirty bitmap, so it's actually a
>>> mixture of dirty bitmap and dirty ring.  This series is more like a POC on the
>>> dirty ring interface, so as to let QEMU be able to run on KVM dirty ring.
>>> E.g., we won't have the hang issue when getting dirty pages since it's totally
>>> async; however, we'll still have some legacy dirty bitmap issues, e.g. the memory
>>> consumption of userspace dirty bitmaps is still linear in the memory footprint.
>> The plan looks good and coordinated, but I have a concern. Our dirty ring actually depends
>> on the structure of the hardware logging buffer (the PML buffer). We can't say it can be
>> properly adapted to all kinds of hardware designs in the future.
> 
> Sorry I don't get it - dirty ring can work with pure page wr-protect too?
Sure, it can. I just wanted to discuss the many possible kinds of hardware logging buffers.
However, I'd like to stop here; at least the dirty ring works well with PML. :)

> 
>>
>>>
>>> Moreover, IMHO another important feature that the dirty ring provides is actually
>>> the full-exit, where we can pause a vcpu when it dirties too fast, while other
>> I think a proper pause time is hard to decide. A short pause may have little throttling
>> effect, but a long pause may have a heavy effect on the guest. Do you have a good algorithm?
> 
> That's the next thing we can discuss.  IMHO the dirty ring is nice
> already because we can measure the dirty rate per-vcpu, and we can throttle at
> vcpu granularity.  That's something required for a good algorithm: say, we shouldn't
> block a vcpu when it has a small dirty rate, and in many cases that's the case for
> e.g. UI threads.  Any algorithm should be based on these facts.
OK.
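For what it's worth, the shape I have in mind is something like the sketch
below. It is purely hypothetical - none of these names exist in the series -
and only illustrates "don't block a vCPU whose dirty rate is small":

    /* Hypothetical per-vCPU throttle decision, evaluated periodically.
     * dirty_rate_pps(), vcpu_pause_us() and all the constants are invented
     * purely for illustration. */
    static void maybe_throttle_vcpu(CPUState *cpu)
    {
        uint64_t rate = dirty_rate_pps(cpu);    /* measured pages/second */

        if (rate <= DIRTY_RATE_TARGET_PPS) {
            return;   /* small dirty rate (e.g. UI threads): never block */
        }

        /* Pause longer the further this vCPU is above the target, but cap
         * it so a single period can never stall the vCPU for too long. */
        uint64_t sleep_us = (rate - DIRTY_RATE_TARGET_PPS) * US_PER_EXCESS_PAGE;
        vcpu_pause_us(cpu, MIN(sleep_us, MAX_PAUSE_US));
    }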

Thanks,
Keqian



Thread overview: 29+ messages
2021-03-10 20:32 [PATCH v5 00/10] KVM: Dirty ring support (QEMU part) Peter Xu
2021-03-10 20:32 ` [PATCH v5 01/10] memory: Introduce log_sync_global() to memory listener Peter Xu
2021-03-10 20:32 ` [PATCH v5 02/10] KVM: Use a big lock to replace per-kml slots_lock Peter Xu
2021-03-22 10:47   ` Keqian Zhu
2021-03-22 13:54     ` Paolo Bonzini
2021-03-22 16:27       ` Peter Xu
2021-03-24 18:08         ` Peter Xu
2021-03-10 20:32 ` [PATCH v5 03/10] KVM: Create the KVMSlot dirty bitmap on flag changes Peter Xu
2021-03-10 20:32 ` [PATCH v5 04/10] KVM: Provide helper to get kvm dirty log Peter Xu
2021-03-10 20:32 ` [PATCH v5 05/10] KVM: Provide helper to sync dirty bitmap from slot to ramblock Peter Xu
2021-03-10 20:32 ` [PATCH v5 06/10] KVM: Simplify dirty log sync in kvm_set_phys_mem Peter Xu
2021-03-10 20:32 ` [PATCH v5 07/10] KVM: Cache kvm slot dirty bitmap size Peter Xu
2021-03-10 20:32 ` [PATCH v5 08/10] KVM: Add dirty-gfn-count property Peter Xu
2021-03-10 20:33 ` [PATCH v5 09/10] KVM: Disable manual dirty log when dirty ring enabled Peter Xu
2021-03-22  9:17   ` Keqian Zhu
2021-03-22 13:55     ` Paolo Bonzini
2021-03-22 16:21       ` Peter Xu
2021-03-10 20:33 ` [PATCH v5 10/10] KVM: Dirty ring support Peter Xu
2021-03-22 13:37   ` Keqian Zhu
2021-03-22 18:52     ` Peter Xu
2021-03-23  1:25       ` Keqian Zhu
2021-03-19 18:12 ` [PATCH v5 00/10] KVM: Dirty ring support (QEMU part) Peter Xu
2021-03-22 14:02 ` Keqian Zhu
2021-03-22 19:45   ` Peter Xu
2021-03-23  6:40     ` Keqian Zhu
2021-03-23 14:34       ` Peter Xu
2021-03-24  2:56         ` Keqian Zhu [this message]
2021-03-24 15:09           ` Peter Xu
2021-03-25  1:21             ` Keqian Zhu
