From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Michael S. Tsirkin"
Subject: Re: [RFC Design Doc] Speed up live migration by skipping free pages
Date: Thu, 24 Mar 2016 12:29:07 +0200
Message-ID: <20160324122627-mutt-send-email-mst@redhat.com>
References: <1458632629-4649-1-git-send-email-liang.z.li@intel.com>
 <20160322101116.GA9532@redhat.com>
 <20160323155325-mutt-send-email-mst@redhat.com>
 <20160324094846.GA17006@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
To: "Li, Liang Z"
Cc: "qemu-devel@nongnu.org", "kvm@vger.kernel.org",
 "linux-kernel@vger.kernel.org", "pbonzini@redhat.com",
 "rth@twiddle.net", "ehabkost@redhat.com", "amit.shah@redhat.com",
 "quintela@redhat.com", "dgilbert@redhat.com",
 "mohan_parthasarathy@hpe.com", "jitendra.kolhe@hpe.com",
 "simhan@hpe.com", "rkagan@virtuozzo.com", "riel@redhat.com"

On Thu, Mar 24, 2016 at 10:16:47AM +0000, Li, Liang Z wrote:
> > On Thu, Mar 24, 2016 at 01:19:40AM +0000, Li, Liang Z wrote:
> > > > > > > 2. Why not use virtio-balloon?
> > > > > > > Actually, virtio-balloon can do a similar thing by
> > > > > > > inflating the balloon before live migration, but its
> > > > > > > performance is poor: for an idle 8GB guest that has just
> > > > > > > booted, it takes about 5.7 seconds to inflate the balloon
> > > > > > > to 7GB, while it takes only 25ms to get a valid free page
> > > > > > > bitmap from the guest. There are several reasons for the
> > > > > > > bad performance of virtio-balloon:
> > > > > > > a. allocating pages (5%, 304ms)
> > > > > >
> > > > > > Interesting. This is definitely worth improving in the guest
> > > > > > kernel. Also, would it be faster if we allocated and passed
> > > > > > huge pages instead? Might speed up madvise as well.
> > > > >
> > > > > Maybe.
> > > > >
> > > > > > > b. sending PFNs to host (71%, 4194ms)
> > > > > >
> > > > > > OK, so we probably should teach the balloon to pass huge
> > > > > > lists in bitmaps. That would be beneficial for regular
> > > > > > balloon operation as well.
> > > > >
> > > > > Agreed. The current balloon sends just 256 PFNs at a time;
> > > > > that's too few and leads to too many rounds of virtio
> > > > > transmission, which is the main reason for the bad
> > > > > performance. Changing VIRTIO_BALLOON_ARRAY_PFNS_MAX to a
> > > > > larger value can improve the performance significantly. Maybe
> > > > > we should increase it before doing the further optimization,
> > > > > what do you think?
> > > >
> > > > We could push it up a bit higher: 256 PFNs take 1 kbyte, so we
> > > > could make the array 3x bigger and still fit struct
> > > > virtio_balloon in a single page. But if we are going to add the
> > > > bitmap variant anyway, we probably shouldn't bother.
> > > >
> > > > > > > c. address translation and madvise() operation (24%, 1423ms)
> > > > > >
> > > > > > How is this split between translation and madvise? I suspect
> > > > > > it's mostly madvise, since you need translation when using a
> > > > > > bitmap as well. Correct? Could you measure this, please?
> > > > > > Also, what if we used the new MADV_FREE instead? By how much
> > > > > > would that help?
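
To be concrete, the difference on the host side would be something
like the untested sketch below; the mmap'ed range stands in for the
guest RAM mapping, and the range being freed is made up for
illustration.

/* Untested sketch: reclaim a range of guest-free pages on the host.
 * MADV_FREE (new in Linux 4.5) only marks the pages lazily
 * reclaimable, so there is no refault cost if the guest touches them
 * again before the host comes under memory pressure; MADV_DONTNEED
 * drops them immediately. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static int free_range(void *hva, size_t len)
{
#ifdef MADV_FREE
    if (madvise(hva, len, MADV_FREE) == 0) {
        return 0;
    }
    /* Old kernel: fall back to the eager variant. */
#endif
    return madvise(hva, len, MADV_DONTNEED);
}

int main(void)
{
    size_t len = 16 * 4096;
    void *hva = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (hva == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(hva, 0x5a, len);          /* fault the pages in */
    if (free_range(hva, len) != 0) { /* give them back to the host */
        perror("madvise");
        return 1;
    }
    munmap(hva, len);
    return 0;
}

With MADV_FREE most of the cost should move out of the madvise() call
itself, but that is exactly the kind of thing worth measuring.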
> > > > >
> > > > > For the current balloon, address translation is needed, but
> > > > > for live migration there is no need to do address translation.
> > > >
> > > > Well, you need the ram address in order to clear the dirty bit.
> > > > How would you get it without translation?
> > >
> > > If you mean that kind of address translation, yes, it is needed.
> > > What I want to say is that filtering out the free pages can be
> > > done with a bitmap operation.
> > >
> > > Liang
> >
> > OK, so I see that your patches use block->offset in struct RAMBlock
> > to look up bits in the guest-supplied bitmap.
> > I don't think that's guaranteed to work.
>
> It's part of the bitmap operation, because of the latest change to
> ram_list.dirty_memory.
> Why do you think so? Could you tell me the reason?
>
> Liang

Sorry, why do I think what? That ram_addr_t is not guaranteed to equal
the GPA of the block? E.g. HACKING says:

    Use hwaddr for guest physical addresses except pcibus_t for PCI
    addresses. In addition, ram_addr_t is a QEMU internal address space
    that maps guest RAM physical addresses into an intermediate address
    space that can map to host virtual address spaces.
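
In other words, a guest PFN indexes the GPA space, while the migration
bitmap is indexed by ram_addr_t, and the two only line up within a
block, not globally. Here is a toy model of the lookup (not QEMU code;
the block layout is made up, loosely mimicking the below-4G/above-4G
RAM split on the PC machine):

/* Toy model, not QEMU code: why a guest PFN (GPA space) cannot be
 * used directly as an index into a bitmap laid out in ram_addr_t
 * space. Each block has a base in both spaces and the bases need not
 * match: here the RAM above 4G sits at GPA 0x100000000 but at
 * ram_addr 0xc0000000. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *name;
    uint64_t gpa_base;  /* where the guest sees this RAM */
    uint64_t ram_addr;  /* block->offset: base in ram_addr_t space */
    uint64_t size;
} Block;

static const Block blocks[] = {
    { "pc.ram-below-4g", 0x000000000ull, 0x00000000ull, 0xc0000000ull },
    { "pc.ram-above-4g", 0x100000000ull, 0xc0000000ull, 0x40000000ull },
};

/* Translate a GPA into ram_addr_t space; returns -1 if unmapped. */
static int64_t gpa_to_ram_addr(uint64_t gpa)
{
    for (unsigned i = 0; i < sizeof(blocks) / sizeof(blocks[0]); i++) {
        const Block *b = &blocks[i];
        if (gpa >= b->gpa_base && gpa - b->gpa_base < b->size) {
            return (int64_t)(b->ram_addr + (gpa - b->gpa_base));
        }
    }
    return -1;
}

int main(void)
{
    uint64_t gpa = 0x100000000ull;  /* first page above 4G */
    printf("GPA 0x%llx -> ram_addr 0x%llx\n",
           (unsigned long long)gpa,
           (unsigned long long)gpa_to_ram_addr(gpa));
    return 0;
}

So the bit lookup has to go through per-block translation; it can't
assume block->offset equals the block's guest physical base.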
-- 
MST