From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Li, Liang Z"
Subject: RE: [RFC Design Doc] Speed up live migration by skipping free pages
Date: Thu, 24 Mar 2016 10:16:47 +0000
Message-ID:
References: <1458632629-4649-1-git-send-email-liang.z.li@intel.com> <20160322101116.GA9532@redhat.com> <20160323155325-mutt-send-email-mst@redhat.com> <20160324094846.GA17006@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 8BIT
Cc: "qemu-devel@nongnu.org", "kvm@vger.kernel.org", "linux-kernel@vger.kernel.org", "pbonzini@redhat.com", "rth@twiddle.net", "ehabkost@redhat.com", "amit.shah@redhat.com", "quintela@redhat.com", "dgilbert@redhat.com", "mohan_parthasarathy@hpe.com", "jitendra.kolhe@hpe.com", "simhan@hpe.com", "rkagan@virtuozzo.com", "riel@redhat.com"
To: "Michael S. Tsirkin"
Return-path: Received: from mga01.intel.com ([192.55.52.88]:22920 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1754495AbcCXKQ5 convert rfc822-to-8bit (ORCPT); Thu, 24 Mar 2016 06:16:57 -0400
In-Reply-To: <20160324094846.GA17006@redhat.com>
Content-Language: en-US
Sender: kvm-owner@vger.kernel.org
List-ID:

> On Thu, Mar 24, 2016 at 01:19:40AM +0000, Li, Liang Z wrote:
> > > > > > 2. Why not use virtio-balloon
> > > > > > Actually, virtio-balloon can do a similar thing by inflating the
> > > > > > balloon before live migration, but its performance is not good:
> > > > > > for an 8GB idle guest that has just booted, it takes about 5.7s
> > > > > > to inflate the balloon to 7GB, but only about 25ms to get a valid
> > > > > > free page bitmap from the guest. There are several reasons for
> > > > > > the bad performance of virtio-balloon:
> > > > > > a. allocating pages (5%, 304ms)
> > > > >
> > > > > Interesting. This is definitely worth improving in the guest kernel.
> > > > > Also, will it be faster if we allocate and pass to the guest huge
> > > > > pages instead? Might speed up madvise as well.
> > > >
> > > > Maybe.
> > > >
> > > > > > b. sending PFNs to host (71%, 4194ms)
> > > > >
> > > > > OK, so we probably should teach the balloon to pass huge lists in
> > > > > bitmaps. Will be beneficial for regular balloon operation as well.
> > > >
> > > > Agree. The current balloon only sends 256 PFNs at a time; that's too
> > > > few and leads to too many virtio transmissions, which is the main
> > > > reason for the bad performance. Changing VIRTIO_BALLOON_ARRAY_PFNS_MAX
> > > > to a larger value can improve the performance significantly. Maybe we
> > > > should increase it before doing the further optimization, what do you
> > > > think?
> > >
> > > We could push it up a bit higher: 256 entries are 1kbyte in size, so we
> > > can make it 3x bigger and still fit struct virtio_balloon in a single
> > > page. But if we are going to add the bitmap variant anyway, we probably
> > > shouldn't bother.
> > >
> > > > > > c. address translation and madvise() operation (24%, 1423ms)
> > > > >
> > > > > How is this split between translation and madvise? I suspect it's
> > > > > mostly madvise, since you need translation when using a bitmap as
> > > > > well. Correct? Could you measure this please? Also, what if we use
> > > > > the new MADV_FREE instead? By how much would this help?
> > > >
> > > > For the current balloon, address translation is needed.
> > > > But for live migration, there is no need to do address translation.
> > >
> > > Well, you need the ram address in order to clear the dirty bit.
> > > How would you get it without translation?
> >
> > If you mean that kind of address translation, yes, it is needed.
> > What I want to say is that filtering out the free pages can be done by a
> > bitmap operation.
> >
> > Liang
>
> OK, so I see that your patches use block->offset in struct RAMBlock to
> look up bits in the guest-supplied bitmap.
> I don't think that's guaranteed to work.

It's part of the bitmap operation, because of the latest change to
ram_list.dirty_memory.
Why do you think so? Could you tell me the reason?

Liang