From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Li, Liang Z"
Subject: RE: [RFC Design Doc]Speed up live migration by skipping free pages
Date: Fri, 25 Mar 2016 01:59:21 +0000
Message-ID:
References: <20160324090004.GA2230@work-vm> <20160324102354.GB2230@work-vm> <20160324165530-mutt-send-email-mst@redhat.com> <20160324175503-mutt-send-email-mst@redhat.com> <20160324181031-mutt-send-email-mst@redhat.com> <20160324174933.GA11662@work-vm> <20160324202738-mutt-send-email-mst@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 8BIT
Cc: Wei Yang , "qemu-devel@nongnu.org" , "kvm@vger.kernel.org" , "linux-kernel@vger.kenel.org" , "pbonzini@redhat.com" , "rth@twiddle.net" , "ehabkost@redhat.com" , "amit.shah@redhat.com" , "quintela@redhat.com" , "mohan_parthasarathy@hpe.com" , "jitendra.kolhe@hpe.com" , "simhan@hpe.com" , "rkagan@virtuozzo.com" , "riel@redhat.com"
To: "Michael S. Tsirkin" , "Dr. David Alan Gilbert"
Return-path:
Received: from mga09.intel.com ([134.134.136.24]:10061 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751526AbcCYB7p convert rfc822-to-8bit (ORCPT ); Thu, 24 Mar 2016 21:59:45 -0400
In-Reply-To: <20160324202738-mutt-send-email-mst@redhat.com>
Content-Language: en-US
Sender: kvm-owner@vger.kernel.org
List-ID:

> > > > > > > > > The order I'm trying to understand is something like:
> > > > > > > > >
> > > > > > > > > a) Send the get_free_page_bitmap request
> > > > > > > > > b) Start sending pages
> > > > > > > > > c) Reach the end of memory
> > > > > > > > >    [ is_ready is false - guest hasn't made free map yet ]
> > > > > > > > > d) normal migration_bitmap_sync() at end of first pass
> > > > > > > > > e) Carry on sending dirty pages
> > > > > > > > > f) is_ready is true
> > > > > > > > >    f.1) filter out free pages?
> > > > > > > > >    f.2) migration_bitmap_sync()
> > > > > > > > >
> > > > > > > > > It's f.1 I'm worried about.
> > > > > > > > > If the guest started generating the free bitmap before (d), then a page
> > > > > > > > > marked as 'free' in f.1 might have become dirty before
> > > > > > > > > (d) and so (f.2) doesn't set the dirty again, and so we can't
> filter out pages in f.1.
> > > > > > > > >
> > > > > > > >
> > > > > > > > As you described, the order is incorrect.
> > > > > > > >
> > > > > > > > Liang
> > > > > > >
> > > > > > > So to make it safe, what is required is to make sure no free
> > > > > > > list is outstanding before calling migration_bitmap_sync.
> > > > > > >
> > > > > > > If one is outstanding, filter out pages before calling
> > > > > migration_bitmap_sync.
> > > > > > >
> > > > > > > Of course, if we just do it like we normally do with
> > > > > > > migration, then by the time we call migration_bitmap_sync
> > > > > > > dirty bitmap is completely empty, so there won't be anything to
> filter out.
> > > > > > >
> > > > > > > One way to address this is to call migration_bitmap_sync in the
> > > > > > > IO handler, while VCPU is stopped, then make sure to filter
> > > > > > > out pages before the next migration_bitmap_sync.
> > > > > > >
> > > > > > > Another is to start filtering out pages in the IO handler, but
> > > > > > > make sure to flush the queue before calling
> migration_bitmap_sync.
> > > > > > >
> > > > > > It's really complex, maybe we should switch to a simple start,
> > > > > > just skip the free page in the ram bulk stage and make it
> asynchronous?
> > > > > >
> > > > > > Liang
> > > > > >
> > > > > You mean like your patches do? No, blocking bulk migration until
> > > > > guest response is basically a non-starter.
> > > > >
> > > > No, don't wait anymore. Like below (copy from previous thread)
> > > > --------------------------------------------------------------
> > > > 1. Set all the bits in the migration_bitmap_rcu->bmap to 1
> > > > 2. Clear all the bits in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
> > > > 3. Send the get_free_page_bitmap request
> > > > 4. Start to send pages to destination and check if the free_page_bitmap is ready
> > > >    if (is_ready) {
> > > >        filter out the free pages from migration_bitmap_rcu->bmap;
> > > >        migration_bitmap_sync();
> > > >    }
> > > >    continue until live migration complete.
> > > > ---------------------------------------------------------------
> > > > Can this work?
> > > >
> > > > Liang
> > >
> > > Not if you get the ready bit asynchronously like you wrote here
> > > since is_ready can get set while you called migration_bitmap_sync.
> > >
> > > As I said previously,
> > > to make this work you need to filter out synchronously while VCPU is
> > > stopped and while free pages from list are not being used.
> > >
> > > Alternatively prevent getting free page list and filtering them out
> > > from guest from racing with migration_bitmap_sync.
> > >
> > > For example, flush the VQ after migration_bitmap_sync.
> > > So:
> > >
> > > lock
> > > migration_bitmap_sync();
> > > while (elem = virtqueue_pop) {
> > >     virtqueue_push(elem)
> > >     g_free(elem)
> > > }
> > > unlock
> > >
> > >
> > > while in handle_output
> > >
> > > lock
> > > while (elem = virtqueue_pop) {
> > >     list = get_free_list(elem)
> > >     filter_out_free(list)
> > >     virtqueue_push(elem)
> > >     free(elem)
> > > }
> > > unlock
> > >
> > >
> > > lock prevents migration_bitmap_sync from racing against
> > > handle_output
> >
> > I think the easier way is just to ignore the guest's free list response
> > if it comes back after the first pass.
> >
> > Dave
>
> That's a subset of course - after the first pass == after
> migration_bitmap_sync.
>
> But it's really nasty - for example, how do you know it's the response from
> this migration round and not a previous one?

It's easy, adding a request and response ID can solve this issue.
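[Editor's note: a minimal sketch of the request/response ID idea above, in Python with hypothetical names; this is an illustrative model, not QEMU code.]

```python
# Hypothetical model: the host tags each free-page-bitmap request with a
# fresh ID, and drops any guest response whose ID is stale, e.g. one left
# over from an earlier request or an earlier migration round.

current_req_id = 0

def send_free_page_request():
    """Issue a new request; returns the ID the guest must echo back."""
    global current_req_id
    current_req_id += 1
    return current_req_id

def handle_free_page_response(resp_id, free_pages, migration_bitmap):
    """Apply a guest response only if it answers the latest request."""
    if resp_id != current_req_id:
        return False                    # stale response: ignore it
    migration_bitmap.difference_update(free_pages)
    return True

bitmap = set(range(8))                  # pages 0..7 pending migration
old_id = send_free_page_request()
new_id = send_free_page_request()       # supersedes the first request
handle_free_page_response(old_id, {1, 2}, bitmap)   # ignored (stale ID)
handle_free_page_response(new_id, {4}, bitmap)      # applied
```

The point is only that a monotonically increasing ID lets the host distinguish "answer to the request I just sent" from "answer to some older request" without blocking.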
> It is really better to just keep things orthogonal and not introduce arbitrary
> limitations.
>
> For example, with post-copy there's no first pass, and it can still benefit from
> this optimization.
>

Leave this to Dave ...

Liang

>
> > > This way you can actually use ioeventfd for this VQ so VCPU won't be
> > > blocked.
> > >
> > > I do not think this is so complex, and this way you can add requests
> > > for guest free bitmap at an arbitrary interval either in host or in
> > > guest.
> > >
> > > For example, add a value that says how often should guest update the
> > > bitmap, set it to 0 to disable updates after migration done.
> > >
> > > Or, make guest resubmit a new one when we consume the old one, run
> > > handle_output through a periodic timer on host.
> > >
> > > > > --
> > > > > MST
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
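[Editor's note: the lock-based flush scheme MST sketches in the thread above can be modelled in a few lines of Python. Hypothetical names throughout; this is an illustration of the race-avoidance idea, not QEMU code.]

```python
# Model of the quoted scheme: migration_bitmap_sync() flushes the free-page
# virtqueue under the same lock the IO handler takes, so a free-page list
# can never be applied to the bitmap after a sync it raced with.

import threading
from queue import Queue, Empty

lock = threading.Lock()
free_list_vq = Queue()                # stands in for the virtqueue
migration_bitmap = set(range(16))     # pages 0..15 pending migration

def migration_bitmap_sync():
    with lock:
        # Flush: pop and discard any responses that raced with this sync,
        # mirroring the virtqueue_pop/virtqueue_push/g_free loop above.
        while True:
            try:
                free_list_vq.get_nowait()
            except Empty:
                break
        # ... a real implementation would re-read the dirty log here ...

def handle_output():
    with lock:
        # Filter free pages out of the bitmap (filter_out_free above).
        while True:
            try:
                free_pages = free_list_vq.get_nowait()
            except Empty:
                break
            migration_bitmap.difference_update(free_pages)

free_list_vq.put({3, 7, 11})          # guest reports pages 3, 7, 11 free
handle_output()                       # pages 3, 7, 11 dropped from bitmap
free_list_vq.put({5})                 # this response arrives late...
migration_bitmap_sync()               # ...and is discarded by the flush
```

Because both functions take the same lock, a sync observes either a fully applied free list or a fully discarded one, which is exactly the property the thread is arguing about.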