From: "Li, Liang Z"
Subject: RE: [RFC Design Doc] Speed up live migration by skipping free pages
Date: Thu, 24 Mar 2016 14:33:15 +0000
To: "Michael S. Tsirkin"
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    pbonzini@redhat.com, rth@twiddle.net, ehabkost@redhat.com,
    amit.shah@redhat.com, quintela@redhat.com, dgilbert@redhat.com,
    mohan_parthasarathy@hpe.com, jitendra.kolhe@hpe.com, simhan@hpe.com,
    rkagan@virtuozzo.com, riel@redhat.com

> > > > > > Agree. The current balloon just sends 256 PFNs at a time; that's
> > > > > > too few and leads to too many virtio transmissions, and that's
> > > > > > the main reason for the bad performance.
> > > > > > Changing VIRTIO_BALLOON_ARRAY_PFNS_MAX to a larger value can
> > > > > > improve the performance significantly. Maybe we should increase
> > > > > > it before doing the further optimization, do you think so?
> > > > >
> > > > > We could push it up a bit higher: 256 is 1 kbyte in size, so we
> > > > > can make it 3x bigger and still fit struct virtio_balloon in a
> > > > > single page. But if we are going to add the bitmap variant anyway,
> > > > > we probably shouldn't bother.
> > > > >
> > > > > > > > c. address translation and madvise() operation (24%, 1423ms)
> > > > > > >
> > > > > > > How is this split between translation and madvise? I suspect
> > > > > > > it's mostly madvise, since you need translation when using a
> > > > > > > bitmap as well. Correct? Could you measure this please? Also,
> > > > > > > what if we use the new MADV_FREE instead? By how much would
> > > > > > > this help?
> > > > > >
> > > > > > For the current balloon, address translation is needed.
> > > > > > But for live migration, there is no need to do address translation.
> > > > >
> > > > > Well, you need the ram address in order to clear the dirty bit.
> > > > > How would you get it without translation?
> > > >
> > > > If you mean that kind of address translation, yes, it's needed.
> > > > What I want to say is that filtering out the free pages can be done
> > > > by a bitmap operation.
> > > >
> > > > Liang
> > >
> > > OK, so I see that your patches use block->offset in struct RAMBlock
> > > to look up bits in the guest-supplied bitmap.
> > > I don't think that's guaranteed to work.
> >
> > It's part of the bitmap operation, because of the latest change to
> > ram_list.dirty_memory. Why do you think so? Could you tell me the
> > reason?
> >
> > Liang
>
> Sorry, why do I think what? That ram_addr_t is not guaranteed to equal
> the GPA of the block?

I mean, why do you think it can't be guaranteed to work? Yes, ram_addr_t
is not guaranteed to equal the GPA of the block, but I didn't use it as
a GPA.

The code in filter_out_guest_free_pages() in my patch just follows the
style of the latest change to ram_list.dirty_memory[]. The free page
bitmap obtained from the guest in my RFC patch already has the 'holes'
filtered out, so bit N of the free page bitmap and bit N in
ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]->blocks correspond to the
same guest page. Right? If that's true, I think I am doing the right
thing.
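To make the idea concrete, here is a minimal sketch of the bitmap
operation I mean (illustrative only, not the actual patch code; the
function and parameter names are made up):

/* Clear the migration dirty bit for every page the guest reports
 * free: dirty &= ~free, word by word. This assumes the guest bitmap
 * has already been filtered so that bit N matches bit N of the
 * DIRTY_MEMORY_MIGRATION bitmap. */
#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static void filter_out_free_pages(unsigned long *dirty,
                                  const unsigned long *free_bmap,
                                  size_t nbits)
{
    size_t nwords = (nbits + BITS_PER_LONG - 1) / BITS_PER_LONG;
    size_t i;

    for (i = 0; i < nwords; i++) {
        dirty[i] &= ~free_bmap[i];
    }
}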
Liang

> E.g. HACKING says:
>     Use hwaddr for guest physical addresses except pcibus_t
>     for PCI addresses. In addition, ram_addr_t is a QEMU internal
>     address space that maps guest RAM physical addresses into an
>     intermediate address space that can map to host virtual address
>     spaces.
>
> --
> MST
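As an aside, here is a small self-contained example of the
hwaddr/ram_addr_t distinction the HACKING quote describes. The layout
below (a 1 GiB hole under 4 GiB) is invented for illustration and is
not taken from QEMU:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Hypothetical layout: RAM occupies GPA 0..3 GiB, then a 1 GiB PCI
 * hole, then RAM resumes at 4 GiB. ram_addr_t packs the two RAM
 * chunks back to back, so addresses above the hole shift down. */
static uint64_t gpa_to_ram_addr(uint64_t gpa)
{
    const uint64_t hole_start = 3ULL << 30;
    const uint64_t hole_size  = 1ULL << 30;

    return gpa < hole_start ? gpa : gpa - hole_size;
}

int main(void)
{
    uint64_t gpa = 4ULL << 30; /* first RAM page above the hole */

    /* Bit (gpa >> PAGE_SHIFT) in a GPA-indexed bitmap and bit
     * (ram_addr >> PAGE_SHIFT) in a ram_addr_t-indexed bitmap differ
     * unless the hole has been filtered out of one of them. */
    printf("GPA page %llu -> ram_addr page %llu\n",
           (unsigned long long)(gpa >> PAGE_SHIFT),
           (unsigned long long)(gpa_to_ram_addr(gpa) >> PAGE_SHIFT));
    return 0;
}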