Subject: Re: The issues about architecture of the COLO checkpoint
From: "Zhang, Chen" <chen.zhang@intel.com>
To: Daniel Cho <danielcho@qnap.com>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Jason Wang, Zhanghailiang <zhang.zhanghailiang@huawei.com>, qemu-devel@nongnu.org
Date: Mon, 17 Feb 2020 13:36:47 +0800


On 2/15/2020 11:35 AM, Daniel Cho wrote:
Hi Dave, 

Yes, I agree with you, it does need a timeout.


Hi Daniel and Dave,

The current colo-compare already has a timeout mechanism.

It is named packet_check_timer; it periodically scans the primary packet queue to make sure no primary packet stays queued for too long.

If colo-compare has held a primary packet without its related secondary packet for a certain time, it automatically triggers a checkpoint.

https://github.com/qemu/qemu/blob/master/net/colo-compare.c#L847
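
To make the timing behavior concrete, here is a minimal C sketch of this kind of periodic queue check. It is illustrative only, not the actual colo-compare code: the constant, the Packet struct, and the helpers now_ms() and trigger_checkpoint() are all assumed names.

    /* Illustrative sketch of a packet-age check; not the real QEMU code. */
    #include <stdint.h>
    #include <stddef.h>

    #define PKT_MAX_AGE_MS 3000              /* assumed maximum queueing time */

    typedef struct Packet {
        int64_t enqueue_time_ms;             /* when it entered the primary queue */
        struct Packet *next;
    } Packet;

    extern int64_t now_ms(void);             /* assumed monotonic-clock helper */
    extern void trigger_checkpoint(void);    /* assumed COLO checkpoint hook */

    /* Timer callback: if any primary packet has waited too long for its
     * secondary counterpart, force a checkpoint and stop scanning; one
     * checkpoint resynchronizes the whole queue. */
    static void packet_check_timer_cb(Packet *primary_queue)
    {
        for (Packet *p = primary_queue; p != NULL; p = p->next) {
            if (now_ms() - p->enqueue_time_ms > PKT_MAX_AGE_MS) {
                trigger_checkpoint();
                break;
            }
        }
    }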


Thanks

Zhang Chen



Hi Hailiang, 

We are using qemu-4.1.0 for the COLO feature; comparing it with your patches, we found a lot of differences between your version and ours.
Could you give us the latest release version that is close to your development code?

Thanks. 

Regards
Daniel Cho

Dr. David Alan Gilbert <dgilbert@redhat.com> wrote on Thursday, 13 February 2020 at 18:38:
* Daniel Cho (danielcho@qnap.com) wrote:
> Hi Hailiang,
>
> 1.
>     OK, we will try the patch
> “0001-COLO-Optimize-memory-back-up-process.patch”,
> and thanks for your help.
>
> 2.
>     We understand the reason to compare the PVM's and SVM's packets. However,
> the SVM's packet queue might be empty while the COLO feature is being set up,
> or when the SVM is broken.
>
> In situation 1 (setting the COLO feature):
>     We could force a checkpoint after setting the COLO feature finishes; that
> will protect the state of the PVM and SVM, as Zhang Chen said.
>
> In situation 2 (SVM broken):
>     COLO will fail over to the PVM, so it should not cause anything wrong on the PVM.
>
> However, those situations are only our views, so there might be a big difference
> between reality and our views.
> If any of our views and opinions are wrong, please let us know and correct us.

It does need a timeout; the SVM being broken, or being in a state where
it never sends the corresponding packet (because of a state difference),
can happen, and COLO needs to time out when the packet hasn't arrived
after a while, and trigger the checkpoint.

Dave

> Thanks.
>
> Best regards,
> Daniel Cho
>
> Zhang, Chen <chen.zhang@intel.com> wrote on Thursday, 13 February 2020 at 10:17:
>
> > Adding Jason Wang to cc; he is a network expert.
> >
> > Just in case something network-related goes wrong.
> >
> >
> >
> > Thanks
> >
> > Zhang Chen
> >
> >
> >
> > *From:* Zhang, Chen
> > *Sent:* Thursday, February 13, 2020 10:10 AM
> > *To:* 'Zhanghailiang' <zhang.zhanghailiang@huawei.com>; Daniel Cho <
> > danielcho@qnap.com>
> > *Cc:* Dr. David Alan Gilbert <dgilbert@redhat.com>; qemu-devel@nongnu.org
> > *Subject:* RE: The issues about architecture of the COLO checkpoint
> >
> >
> >
> > For the issue 2:
> >
> >
> >
> > COLO needs to use the network packets to confirm that the PVM and SVM are in the same state.
> >
> > Generally speaking, we can't send PVM packets without comparing them with SVM
> > packets.
> >
> > But to prevent jamming, I think COLO can force a checkpoint and send the
> > PVM packets in this case.
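
A minimal sketch of this force-checkpoint fallback follows; every name in it is an assumption for illustration, not a QEMU API.

    /* Sketch of the "force checkpoint to prevent jamming" idea; all names
     * are assumptions, not QEMU functions. */
    #include <stdbool.h>

    typedef struct Packet Packet;

    extern bool secondary_queue_empty(void);      /* assumed */
    extern bool waited_too_long(const Packet *p); /* assumed */
    extern void trigger_checkpoint(void);         /* assumed: resyncs SVM to PVM */
    extern void forward_to_client(Packet *p);     /* assumed: releases the packet */

    /* After a checkpoint the PVM and SVM are in the same state, so a stalled
     * primary packet can be released even without a matching secondary one. */
    static void handle_stalled_primary_packet(Packet *p)
    {
        if (secondary_queue_empty() && waited_too_long(p)) {
            trigger_checkpoint();
            forward_to_client(p);
        }
    }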
> >
> >
> >
> > Thanks
> >
> > Zhang Chen
> >
> >
> >
> > *From:* Zhanghailiang <zhang.zhanghailiang@huawei.com>
> > *Sent:* Thursday, February 13, 2020 9:45 AM
> > *To:* Daniel Cho <danielcho@qnap.com>
> > *Cc:* Dr. David Alan Gilbert <dgilbert@redhat.com>; qemu-devel@nongnu.org;
> > Zhang, Chen <chen.zhang@intel.com>
> > *Subject:* RE: The issues about architecture of the COLO checkpoint
> >
> >
> >
> > Hi,
> >
> >
> >
> > 1. After re-walking through the code: yes, you are right. Actually,
> > after the first migration, we keep dirty logging on in the primary side,
> >
> > and only send the dirty pages of the PVM to the SVM. The ram cache on the secondary
> > side is always a backup of the PVM, so we don't have to
> >
> > re-send the non-dirtied pages.
> >
> > The reason why the first checkpoint takes longer is that we have to back up
> > the whole VM's ram into the ram cache; that is colo_init_ram_cache().
> >
> > It is time-consuming, but I have optimized it in the second patch,
> > "0001-COLO-Optimize-memory-back-up-process.patch", which you can find in my
> > previous reply.
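
As a rough sketch of the two-phase behavior described above (one full copy, then dirty pages only), with all helper names other than colo_init_ram_cache assumed:

    /* Sketch only: the first checkpoint copies every page into the secondary
     * ram cache (colo_init_ram_cache() does the real full backup in QEMU);
     * later checkpoints send just the dirty-logged pages. */
    #include <stdint.h>
    #include <stdbool.h>

    extern bool page_is_dirty(uint64_t page_idx);          /* assumed */
    extern void copy_page_to_ram_cache(uint64_t page_idx); /* assumed */

    static void colo_send_ram(uint64_t nr_pages, bool first_checkpoint)
    {
        for (uint64_t i = 0; i < nr_pages; i++) {
            if (first_checkpoint || page_is_dirty(i)) {
                copy_page_to_ram_cache(i);
            }
        }
        /* Dirty logging stays enabled on the primary side between
         * checkpoints, so the next round sees only the new dirty pages. */
    }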
> >
> >
> >
> > Besides, I found that, regarding my previous reply "We can only copy the pages
> > that were dirtied by PVM and SVM in the last checkpoint",
> >
> > we have already done this optimization in the current upstream code.
> >
> >
> >
> > 2. I don't quite understand this question. For COLO, we always need both
> > the PVM's and the SVM's network packets to compare before sending the packets to
> > the client.
> >
> > COLO depends on this comparison to decide whether or not the PVM and SVM are in the same state.
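
The compare-before-release rule can be sketched as below. The names are illustrative only; the real logic lives in net/colo-compare.c and compares per-protocol rather than with a plain memcmp.

    /* Sketch of the compare-before-release rule; not the actual QEMU code. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        const uint8_t *data;
        size_t len;
    } Pkt;                                       /* assumed packet view */

    extern void forward_to_client(const Pkt *p); /* assumed */
    extern void trigger_checkpoint(void);        /* assumed */

    static void colo_compare_pair(const Pkt *pvm, const Pkt *svm)
    {
        if (pvm->len == svm->len &&
            memcmp(pvm->data, svm->data, pvm->len) == 0) {
            forward_to_client(pvm);  /* same output: VMs are in the same state */
        } else {
            trigger_checkpoint();    /* divergence: resynchronize the SVM */
        }
    }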
> >
> >
> >
> > Thanks,
> >
> > hailiang
> >
> >
> >
> > *From:* Daniel Cho [mailto:danielcho@qnap.com]
> > *Sent:* Wednesday, February 12, 2020 4:37 PM
> > *To:* Zhang, Chen <chen.zhang@intel.com>
> > *Cc:* Zhanghailiang <zhang.zhanghailiang@huawei.com>; Dr. David Alan
> > Gilbert <dgilbert@redhat.com>; qemu-devel@nongnu.org
> > *Subject:* Re: The issues about architecture of the COLO checkpoint
> >
> >
> >
> > Hi Hailiang,
> >
> >
> >
> > Thanks for your reply and the detailed explanation.
> >
> > We will try to use the attachments to improve the memory copy.
> >
> >
> >
> > However, we have some questions about your reply.
> >
> >
> >
> > 1.  As you said, "for each checkpoint, we have to send the whole PVM's
> > pages to SVM", why does only the first checkpoint take a longer pause time?
> >
> > In our observation, the first checkpoint takes more time for pausing,
> > while the other checkpoints pause only briefly. Does that mean that
> > only the first checkpoint sends the whole set of pages to the SVM, and the other
> > checkpoints send just the dirty pages to the SVM for reloading?
> >
> >
> >
> > 2. We notice that the COLO-COMPARE component holds a packet until it
> > receives packets from both the PVM and the SVM. Under this rule, when we add
> > COLO-COMPARE to the PVM, its network stalls until the SVM starts. So it is
> > another issue that the PVM stalls while the COLO feature is being set up. Given this,
> > could we let colo-compare pass the PVM's packet through when the SVM's packet
> > queue is empty? Then the PVM's network wouldn't stall, and "if the PVM runs
> > first, it still needs to wait for the network packets from the SVM to
> > compare before sending them to the client side" wouldn't happen either.
> >
> >
> >
> > Best regards,
> >
> > Daniel Cho
> >
> >
> >
> > Zhang, Chen <chen.zhang@intel.com> wrote on Wednesday, 12 February 2020 at 13:45:
> >
> >
> >
> > > -----Original Message-----
> > > From: Zhanghailiang <zhang.zhanghailiang@huawei.com>
> > > Sent: Wednesday, February 12, 2020 11:18 AM
> > > To: Dr. David Alan Gilbert <dgilbert@redhat.com>; Daniel Cho
> > > <danielcho@qnap.com>; Zhang, Chen <chen.zhang@intel.com>
> > > Cc: qemu-devel@nongnu.org
> > > Subject: RE: The issues about architecture of the COLO checkpoint
> > >
> > > Hi,
> > >
> > > Thank you Dave,
> > >
> > > I'll reply here directly.
> > >
> > > -----Original Message-----
> > > From: Dr. David Alan Gilbert [mailto:dgilbert@redhat.com]
> > > Sent: Wednesday, February 12, 2020 1:48 AM
> > > To: Daniel Cho <danielcho@qnap.com>; chen.zhang@intel.com;
> > > Zhanghailiang <zhang.zhanghailiang@huawei.com>
> > > Cc: qemu-devel@nongnu.org
> > > Subject: Re: The issues about architecture of the COLO checkpoint
> > >
> > >
> > > cc'ing in COLO people:
> > >
> > >
> > > * Daniel Cho (danielcho@qnap.com) wrote:
> > > > Hi everyone,
> > > >      We have some issues with setting up the COLO feature. We hope somebody
> > > > could give us some advice.
> > > >
> > > > Issue 1:
> > > >      We dynamically set the COLO feature for a PVM (2 cores, 16G memory), but
> > > > the Primary VM pauses for a long time (depending on memory size) while
> > > > waiting for the SVM to start. Is there any way to reduce the pause time?
> > > >
> > >
> > > Yes, we do have some ideas to optimize this downtime.
> > >
> > > The main problem with the current version is that, for each checkpoint, we have
> > > to send all of the PVM's pages
> > > to the SVM, and then copy the whole VM's state into the SVM from the ram cache;
> > > during this process, we need both of them to be paused.
> > > Just as you said, the downtime depends on the memory size.
> > >
> > > So firstly, we need to reduce the data sent while doing a checkpoint. Actually,
> > > we can migrate parts of the PVM's dirty pages in the background
> > > while both VMs are running, and then load these pages into the ram
> > > cache (backup memory) in the SVM temporarily. When doing a checkpoint,
> > > we just send the last dirty pages of the PVM to the secondary side and then copy
> > > the ram cache into the SVM. Furthermore, we don't have
> > > to send all of the PVM's dirty pages; we can send only the pages
> > > dirtied by the PVM or SVM between two checkpoints. (Because,
> > > if a page is not dirtied by either the PVM or the SVM, its data
> > > stays the same in the SVM, the PVM, and the backup memory.) This method can reduce
> > > the time consumed in sending data.
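
A sketch of that proposed two-phase flow, with every helper assumed (the real implementation is the patch mentioned in this thread):

    /* Sketch of background dirty-page streaming plus a short stop-and-copy
     * phase; every function here is an assumed placeholder. */
    #include <stdint.h>
    #include <stdbool.h>

    #define NO_PAGE UINT64_MAX

    extern uint64_t next_dirty_page(void);             /* NO_PAGE when none */
    extern void send_page_to_ram_cache(uint64_t idx);  /* stage on the SVM side */
    extern bool checkpoint_due(void);
    extern void pause_both_vms(void);
    extern void commit_ram_cache_to_svm(void);
    extern void resume_both_vms(void);

    static void colo_checkpoint_with_background_copy(void)
    {
        /* Phase 1: both VMs keep running; trickle dirty pages across. */
        while (!checkpoint_due()) {
            uint64_t idx = next_dirty_page();
            if (idx != NO_PAGE) {
                send_page_to_ram_cache(idx);
            }
        }
        /* Phase 2: short pause; only the remaining delta is sent. */
        pause_both_vms();
        for (uint64_t idx = next_dirty_page(); idx != NO_PAGE;
             idx = next_dirty_page()) {
            send_page_to_ram_cache(idx);
        }
        commit_ram_cache_to_svm();
        resume_both_vms();
    }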
> > >
> > > For the second problem, we can reduce the memory copy by two methods:
> > > first, we don't have to copy all the pages in the ram cache;
> > > we can copy only the pages dirtied by the PVM and SVM since the last
> > > checkpoint.
> > > Second, we can use the userfault missing-page mechanism to reduce the
> > > time consumed in the memory copy. (With the second method, in theory, we can
> > > reduce the time consumed in the memory copy to the ms level.)
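
For reference, the "userfault missing" mechanism refers to the Linux userfaultfd interface. A condensed sketch of serving a missing-page fault from a backup copy follows; error handling is omitted and the COLO integration around it is assumed.

    /* Register a region for MISSING faults and resolve each fault by copying
     * the faulting page from a backup buffer; only touched pages cost time. */
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void serve_missing_pages(char *region, size_t len,
                                    char *backup, unsigned long page_size)
    {
        int uffd = (int)syscall(__NR_userfaultfd, O_CLOEXEC);

        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);

        struct uffdio_register reg = {
            .range = { .start = (unsigned long)region, .len = len },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        struct uffd_msg msg;
        while (read(uffd, &msg, sizeof(msg)) == sizeof(msg)) {
            if (msg.event != UFFD_EVENT_PAGEFAULT) {
                continue;
            }
            /* Fill the faulting page from the backup instead of copying the
             * whole region up front. */
            unsigned long addr =
                (unsigned long)msg.arg.pagefault.address & ~(page_size - 1);
            struct uffdio_copy copy = {
                .dst  = addr,
                .src  = (unsigned long)backup + (addr - (unsigned long)region),
                .len  = page_size,
                .mode = 0,
            };
            ioctl(uffd, UFFDIO_COPY, &copy);
        }
    }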
> > >
> > > You can find the first optimization in the attachment. It is based on an old
> > > qemu version (qemu-2.6), but it should not be difficult to rebase it
> > > onto master or your version. And please feel free to send the new
> > > version to the community if you want ;)
> > >
> > >
> >
> > Thanks Hailiang!
> > By the way, do you have time to push the patches upstream?
> > I think this is a better and faster option.
> >
> > Thanks
> > Zhang Chen
> >
> > > >
> > > > Issue 2:
> > > >      In
> > > > https://github.com/qemu/qemu/blob/master/migration/colo.c#L503,
> > > > could we move start_vm() before Line 488? Because at the first checkpoint
> > > > the PVM will wait for the SVM's reply, which causes the PVM to stop for a while.
> > > >
> > >
> > > No, that makes no sense, because if the PVM runs first, it still needs to
> > > wait for the network packets from the SVM to compare before sending them
> > > to the client side.
> > >
> > >
> > > Thanks,
> > > Hailiang
> > >
> > > >      We set the COLO feature on a running VM, so we hope the running VM
> > > > can provide continuous service to users.
> > > > Do you have any suggestions for those issues?
> > > >
> > > > Best regards,
> > > > Daniel Cho
> > > --
> > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
> >
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
