From: "Zhang, Chen" <chen.zhang@intel.com>
To: Zhanghailiang <zhang.zhanghailiang@huawei.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
Daniel Cho <danielcho@qnap.com>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: RE: The issues about architecture of the COLO checkpoint
Date: Wed, 12 Feb 2020 05:45:03 +0000
Message-ID: <2b09c8650b944c908c0c95fefe6d759f@intel.com>
In-Reply-To: <8737854e2826400fa4d14dc408cfd947@huawei.com>
> -----Original Message-----
> From: Zhanghailiang <zhang.zhanghailiang@huawei.com>
> Sent: Wednesday, February 12, 2020 11:18 AM
> To: Dr. David Alan Gilbert <dgilbert@redhat.com>; Daniel Cho
> <danielcho@qnap.com>; Zhang, Chen <chen.zhang@intel.com>
> Cc: qemu-devel@nongnu.org
> Subject: RE: The issues about architecture of the COLO checkpoint
>
> Hi,
>
> Thank you Dave,
>
> I'll reply here directly.
>
> -----Original Message-----
> From: Dr. David Alan Gilbert [mailto:dgilbert@redhat.com]
> Sent: Wednesday, February 12, 2020 1:48 AM
> To: Daniel Cho <danielcho@qnap.com>; chen.zhang@intel.com;
> Zhanghailiang <zhang.zhanghailiang@huawei.com>
> Cc: qemu-devel@nongnu.org
> Subject: Re: The issues about architecture of the COLO checkpoint
>
>
> cc'ing in COLO people:
>
>
> * Daniel Cho (danielcho@qnap.com) wrote:
> > Hi everyone,
> > We have some issues about setting COLO feature. Hope somebody
> > could give us some advice.
> >
> > Issue 1:
> > We set the COLO feature dynamically on a running PVM (2 cores, 16 GB
> > memory), but the Primary VM pauses for a long time (proportional to
> > memory size) while waiting for the SVM to start. Is there any way to
> > reduce this pause time?
> >
>
> Yes, we do have some ideas to optimize this downtime.
>
> The main problem in the current version is that, for each checkpoint, we
> have to send all of the PVM's pages to the SVM, and then copy the whole
> VM state from the ram cache into the SVM; both VMs must stay paused
> during this process. Just as you said, the downtime scales with memory
> size.
>
> So first, we need to reduce the amount of data sent during a checkpoint.
> We can migrate part of the PVM's dirty pages in the background while
> both VMs are running, loading those pages temporarily into a ram cache
> (backup memory) on the SVM side. At checkpoint time, we then only send
> the PVM's remaining dirty pages to the slave side and copy the ram cache
> into the SVM. Going further, we don't have to send all of the PVM's
> dirty pages: we only need the pages dirtied by the PVM or the SVM
> between two checkpoints. (If a page was dirtied by neither the PVM nor
> the SVM, its data is already identical in the PVM, the SVM, and the
> backup memory.) This reduces the time spent sending data.
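As a rough illustration of the idea above (at checkpoint time, only the union of the PVM's and SVM's dirty pages needs to be transferred), here is a minimal C sketch. The function name and flat bitmap layout are illustrative assumptions, not QEMU's actual migration API:

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Illustrative sketch only: a page needs handling at a checkpoint only
 * if the PVM or the SVM dirtied it since the last checkpoint; an
 * untouched page is already identical in the PVM, the SVM, and the
 * ram cache.  Names are assumptions, not QEMU's real code.
 */

/* Count the pages that must be transferred at this checkpoint: the
 * union of the PVM and SVM dirty bitmaps (64 pages per bitmap word). */
static size_t pages_to_transfer(const uint64_t *pvm_dirty,
                                const uint64_t *svm_dirty,
                                size_t words)
{
    size_t count = 0;
    for (size_t i = 0; i < words; i++) {
        count += (size_t)__builtin_popcountll(pvm_dirty[i] | svm_dirty[i]);
    }
    return count;
}
```

(`__builtin_popcountll` is a GCC/Clang built-in, in line with QEMU's supported compilers.)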
>
> For the second problem, we can reduce the memory copy in two ways.
> First, we don't have to copy every page in the ram cache; we can copy
> only the pages dirtied by the PVM and SVM since the last checkpoint.
> Second, we can use userfaultfd's missing-page mechanism to reduce the
> time spent on the memory copy. (With the second approach, in theory, we
> can bring the memory-copy time down to the millisecond level.)
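The first method (copying only the dirtied pages from the ram cache into SVM memory, instead of the whole guest RAM) could be sketched like this; the function name, parameters, and flat dirty bitmap are hypothetical, not QEMU's real ram-cache implementation:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/*
 * Illustrative sketch only: flush the ram cache into SVM memory by
 * copying just the pages marked dirty in a (hypothetical) union
 * bitmap, instead of memcpy'ing all of guest RAM.
 */
static void colo_flush_ram_cache_sketch(uint8_t *svm_ram,
                                        const uint8_t *ram_cache,
                                        const uint64_t *dirty,
                                        size_t npages, size_t page_size)
{
    for (size_t p = 0; p < npages; p++) {
        if (dirty[p / 64] & (1ULL << (p % 64))) {
            /* Only a dirtied page can differ between cache and SVM RAM. */
            memcpy(svm_ram + p * page_size,
                   ram_cache + p * page_size, page_size);
        }
    }
}
```

The userfaultfd variant would go further, leaving SVM pages unmapped and populating them from the cache lazily on first access rather than copying eagerly.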
>
> You can find the first optimization in the attachment. It is based on an
> old QEMU version (qemu-2.6), but it should not be difficult to rebase it
> onto master or your version. And please feel free to send a new version
> to the community if you want. ;)
>
>
Thanks Hailiang!
By the way, do you have time to push the patches upstream?
I think that would be a better and faster option.
Thanks
Zhang Chen
> >
> > Issue 2:
> > In
> > https://github.com/qemu/qemu/blob/master/migration/colo.c#L503,
> > could we move start_vm() before line 488? At the first checkpoint
> > the PVM waits for the SVM's reply, which causes the PVM to stop for
> > a while.
> >
>
> No, that makes no sense: even if the PVM runs first, it still needs to
> wait for the network packets from the SVM to compare against before
> sending them to the client side.
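For context, the gating logic Hailiang describes can be sketched as follows: an outgoing PVM packet is held until the corresponding SVM packet arrives, released if the two compare equal, and a checkpoint is forced on a mismatch. This enum and function are illustrative assumptions, not QEMU's actual colo-compare code:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Illustrative sketch only of COLO's output-packet gating. */
enum colo_action {
    COLO_HOLD,       /* SVM packet not here yet: keep the PVM packet queued */
    COLO_RELEASE,    /* packets identical: safe to forward to the client */
    COLO_CHECKPOINT, /* divergence detected: trigger a checkpoint */
};

static enum colo_action colo_compare_sketch(const uint8_t *pvm_pkt,
                                            size_t pvm_len,
                                            const uint8_t *svm_pkt,
                                            size_t svm_len)
{
    if (!svm_pkt) {
        return COLO_HOLD;
    }
    if (pvm_len != svm_len || memcmp(pvm_pkt, svm_pkt, pvm_len) != 0) {
        return COLO_CHECKPOINT;
    }
    return COLO_RELEASE;
}
```

This is why starting the PVM earlier does not help the client-visible latency: packets still cannot leave until the SVM side has produced its copy.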
>
>
> Thanks,
> Hailiang
>
> > We enable the COLO feature on a running VM, so we hope the running VM
> > can provide continuous service to users.
> > Do you have any suggestions for these issues?
> >
> > Best regards,
> > Daniel Cho
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Thread overview: 26+ messages
2020-02-11 6:30 The issues about architecture of the COLO checkpoint Daniel Cho
2020-02-11 17:47 ` Dr. David Alan Gilbert
2020-02-12 3:18 ` Zhanghailiang
2020-02-12 5:45 ` Zhang, Chen [this message]
2020-02-12 8:37 ` Daniel Cho
2020-02-13 1:45 ` Zhanghailiang
2020-02-13 2:10 ` Zhang, Chen
2020-02-13 2:17 ` Zhang, Chen
2020-02-13 3:02 ` Daniel Cho
2020-02-13 10:37 ` Dr. David Alan Gilbert
2020-02-15 3:35 ` Daniel Cho
2020-02-17 1:25 ` Zhanghailiang
2020-02-17 5:36 ` Zhang, Chen
2020-02-18 9:22 ` Daniel Cho
2020-02-20 3:07 ` Zhang, Chen
2020-02-20 3:49 ` Daniel Cho
2020-02-20 3:51 ` Daniel Cho
2020-02-20 19:43 ` Dr. David Alan Gilbert
2020-02-24 6:57 ` Zhanghailiang
2020-02-23 18:43 ` Zhang, Chen
2020-02-24 7:14 ` Daniel Cho
2020-03-04 7:44 ` Zhang, Chen
2020-03-06 15:22 ` Lukas Straub
2020-03-12 16:39 ` Dr. David Alan Gilbert
2020-03-17 8:32 ` Zhang, Chen
2020-02-13 0:57 ` Zhanghailiang