From: "Li, Liang Z"
Subject: RE: [RFC Design Doc]Speed up live migration by skipping free pages
Date: Thu, 24 Mar 2016 16:05:16 +0000
References: <1458632629-4649-1-git-send-email-liang.z.li@intel.com>
 <20160322190530.GI2216@work-vm>
 <20160324012424.GB14956@linux-gk3p>
 <20160324090004.GA2230@work-vm>
 <20160324102354.GB2230@work-vm>
 <20160324165530-mutt-send-email-mst@redhat.com>
 <20160324175503-mutt-send-email-mst@redhat.com>
In-Reply-To: <20160324175503-mutt-send-email-mst@redhat.com>
To: "Michael S. Tsirkin"
Cc: "Dr. David Alan Gilbert", Wei Yang, "qemu-devel@nongnu.org",
 "kvm@vger.kernel.org", "linux-kernel@vger.kenel.org", "pbonzini@redhat.com",
 "rth@twiddle.net", "ehabkost@redhat.com", "amit.shah@redhat.com",
 "quintela@redhat.com", "mohan_parthasarathy@hpe.com", "jitendra.kolhe@hpe.com",
 "simhan@hpe.com", "rkagan@virtuozzo.com", "riel@redhat.com"

> -----Original Message-----
> From: Michael S. Tsirkin [mailto:mst@redhat.com]
> Sent: Thursday, March 24, 2016 11:57 PM
> To: Li, Liang Z
> Cc: Dr. David Alan Gilbert; Wei Yang; qemu-devel@nongnu.org;
> kvm@vger.kernel.org; linux-kernel@vger.kenel.org; pbonzini@redhat.com;
> rth@twiddle.net; ehabkost@redhat.com; amit.shah@redhat.com;
> quintela@redhat.com; mohan_parthasarathy@hpe.com;
> jitendra.kolhe@hpe.com; simhan@hpe.com; rkagan@virtuozzo.com;
> riel@redhat.com
> Subject: Re: [RFC Design Doc]Speed up live migration by skipping free pages
>
> On Thu, Mar 24, 2016 at 03:53:25PM +0000, Li, Liang Z wrote:
> > > > > > Not very complex, we can implement it like this:
> > > > > >
> > > > > > 1. Set all the bits in the migration_bitmap_rcu->bmap to 1
> > > > > > 2. Clear all the bits in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
> > > > > > 3. Send the get_free_page_bitmap request
> > > > > > 4. Start to send pages to the destination and check if the free_page_bitmap is ready
> > > > > >    if (is_ready) {
> > > > > >        filter out the free pages from migration_bitmap_rcu->bmap;
> > > > > >        migration_bitmap_sync();
> > > > > >    }
> > > > > >    continue until live migration completes.
> > > > > >
> > > > > > Is that right?
> > > > >
> > > > > The order I'm trying to understand is something like:
> > > > >
> > > > >   a) Send the get_free_page_bitmap request
> > > > >   b) Start sending pages
> > > > >   c) Reach the end of memory
> > > > >      [ is_ready is false - guest hasn't made the free map yet ]
> > > > >   d) normal migration_bitmap_sync() at the end of the first pass
> > > > >   e) Carry on sending dirty pages
> > > > >   f) is_ready is true
> > > > >      f.1) filter out free pages?
> > > > >      f.2) migration_bitmap_sync()
> > > > >
> > > > > It's f.1 I'm worried about.  If the guest started generating the
> > > > > free bitmap before (d), then a page marked as 'free' in f.1
> > > > > might have become dirty before (d), so (f.2) doesn't set the
> > > > > dirty bit again, and so we can't filter out pages in f.1.
> > > > >
> > > >
> > > > As you described, the order is incorrect.
> > > >
> > > > Liang
> > >
> > > So to make it safe, what is required is to make sure no free list is
> > > outstanding before calling migration_bitmap_sync.
> > >
> > > If one is outstanding, filter out pages before calling
> > > migration_bitmap_sync.
> > >
> > > Of course, if we just do it like we normally do with migration, then
> > > by the time we call migration_bitmap_sync the dirty bitmap is
> > > completely empty, so there won't be anything to filter out.
> > >
> > > One way to address this is to call migration_bitmap_sync in the IO
> > > handler, while the VCPU is stopped, then make sure to filter out
> > > pages before the next migration_bitmap_sync.
> > >
> > > Another is to start filtering out pages from the IO handler, but make
> > > sure to flush the queue before calling migration_bitmap_sync.
> > >
> >
> > It's really complex; maybe we should start with something simple: just
> > skip the free pages in the ram bulk stage and make it asynchronous?
> >
> > Liang
>
> You mean like your patches do? No, blocking bulk migration until the guest
> responds is basically a non-starter.
>

No, it doesn't wait anymore. It works like below (copied from the previous thread):
--------------------------------------------------------------
1. Set all the bits in the migration_bitmap_rcu->bmap to 1
2. Clear all the bits in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
3. Send the get_free_page_bitmap request
4. Start to send pages to the destination and check if the free_page_bitmap is ready
   if (is_ready) {
       filter out the free pages from migration_bitmap_rcu->bmap;
       migration_bitmap_sync();
   }
   continue until live migration completes.
--------------------------------------------------------------
Can this work?

Liang

> --
> MST
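
Below is a minimal standalone C sketch of the flow proposed above, just to make the ordering concrete. It is not QEMU code: migration_bitmap and free_page_bitmap stand in for migration_bitmap_rcu->bmap and the guest-provided bitmap, and request_free_page_bitmap(), send_dirty_pages(), migration_bitmap_sync() and migration_complete() are empty placeholders for the real migration routines and the virtio request discussed in the thread. The one point it tries to capture is that, while a free-page request is outstanding, the filter must run before the next migration_bitmap_sync().

/*
 * Standalone sketch -- NOT QEMU code.  All names below are stand-ins
 * for the real migration structures and routines discussed above.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define NR_PAGES      1024
#define BITS_PER_LONG (8 * sizeof(unsigned long))
#define BITMAP_LONGS  ((NR_PAGES + BITS_PER_LONG - 1) / BITS_PER_LONG)

static unsigned long migration_bitmap[BITMAP_LONGS];  /* 1 = page still to send   */
static unsigned long free_page_bitmap[BITMAP_LONGS];  /* 1 = guest says page free */

static bool free_bitmap_ready;      /* would be set by the virtio IO handler */
static bool request_outstanding;    /* a free-page request is in flight      */

/* Placeholders for the real migration routines. */
static void request_free_page_bitmap(void) { request_outstanding = true; }
static void migration_bitmap_sync(void)    { /* re-reads the dirty log */ }
static void send_dirty_pages(void)         { /* normal RAM save pass   */ }
static bool migration_complete(void)       { return true; }

/* Drop the pages the guest reported as free from the migration bitmap. */
static void filter_out_free_pages(void)
{
    for (size_t i = 0; i < BITMAP_LONGS; i++) {
        migration_bitmap[i] &= ~free_page_bitmap[i];
    }
}

static void migration_loop_sketch(void)
{
    /* 1. Mark every page as needing to be sent. */
    memset(migration_bitmap, 0xff, sizeof(migration_bitmap));

    /*
     * 2. Clearing ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] happens
     *    here in the proposal; omitted in this sketch.
     */

    /* 3. Ask the guest for its free-page bitmap -- do not wait for it. */
    request_free_page_bitmap();

    /* 4. Keep sending pages; apply the free-page bitmap whenever it arrives. */
    do {
        send_dirty_pages();

        if (request_outstanding && free_bitmap_ready) {
            /*
             * Order matters: filter first, then sync, with no sync in
             * between the request and this point.  A page that was free
             * when the guest built the bitmap but was dirtied afterwards
             * is cleared by the filter, and the sync that follows re-marks
             * it dirty from the still-unconsumed dirty log.  If a sync had
             * already run (Dave's step (d)), that dirty information would
             * be gone and the page would never be sent.
             */
            filter_out_free_pages();
            migration_bitmap_sync();
            request_outstanding = false;
        }
    } while (!migration_complete());
}

int main(void)
{
    migration_loop_sketch();
    printf("first word of migration bitmap: 0x%lx\n", migration_bitmap[0]);
    return 0;
}

In Dave's scenario, a sync at (d) before the filter would already have consumed the dirty-log entry of a page that was dirtied after the guest built the bitmap, so filtering at f.1 would drop that page for good; filtering first and syncing immediately afterwards avoids that.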