Date: Wed, 5 Apr 2017 11:34:00 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
Message-ID: <20170405103400.GB4066@work-vm>
References: <20170323204544.12015-1-quintela@redhat.com> <20170323204544.12015-31-quintela@redhat.com> <20170331165227.GP4514@work-vm> <87r3183w89.fsf@secure.mitica>
In-Reply-To: <87r3183w89.fsf@secure.mitica>

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
>
> Hi
>
> >> @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
> >>  *
> >>  * It should be empty at the end anyway, but in error cases there may
> >>  * be some left.
> >> - *
> >> - * @ms: current migration state
> >>  */
> >> -void flush_page_queue(MigrationState *ms)
> >> +void flush_page_queue(void)
> >
> > I'm not sure this is safe; it's called from migrate_fd_cleanup right at
> > the end; if you do any finalisation/cleanup of the RAMState in
> > ram_save_complete, then when is it safe to run this?
>
> But, looking into it, I think that we should be able to move this into
> ram_save_cleanup(), no?
>
> We don't need it after that?
> As an added bonus, we can remove the export of it.

As discussed on irc, the thing I'm cautious about is getting the order
of cleanup right.  If you look at migration_completion, you see we call
qemu_savevm_state_complete_postcopy() (which calls ram_save_complete)
before we call await_return_path_close_on_source, which ensures that
the thread that's handling requests from the destination and queuing
them has finished.
It seems right to make sure that thread has finished (and thus nothing
is trying to add anything to that queue) before trying to clean it up.

Dave
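To make the ordering concrete, here is a minimal sketch of the concern.
It is not QEMU code: plain pthreads and invented names stand in for
QEMU's thread, QemuMutex and QSIMPLEQ primitives. The point is only
that the producer thread must be joined before the queue it feeds is
torn down.

#include <pthread.h>
#include <stdlib.h>

struct page_req {
    struct page_req *next;
    long offset;                    /* stand-in for (rb, offset, len) */
};

static pthread_mutex_t req_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct page_req *req_head;   /* outstanding page requests */

/* Stand-in for the return-path thread queuing destination requests */
static void *return_path_thread(void *opaque)
{
    (void)opaque;
    for (long i = 0; i < 1000; i++) {
        struct page_req *r = calloc(1, sizeof(*r));
        r->offset = i;
        pthread_mutex_lock(&req_mutex);
        r->next = req_head;
        req_head = r;
        pthread_mutex_unlock(&req_mutex);
    }
    return NULL;
}

/* Stand-in for flush_page_queue(): only safe once no producer is left */
static void flush_page_queue(void)
{
    pthread_mutex_lock(&req_mutex);
    while (req_head) {
        struct page_req *r = req_head;
        req_head = r->next;
        free(r);
    }
    pthread_mutex_unlock(&req_mutex);
}

int main(void)
{
    pthread_t rp;
    pthread_create(&rp, NULL, return_path_thread, NULL);

    /* qemu_savevm_state_complete_postcopy() would run around here */

    pthread_join(rp, NULL);   /* await_return_path_close_on_source() */
    flush_page_queue();       /* ...and only now is cleanup safe     */
    return 0;
}

With the join done first, flush_page_queue() can never race with a
late insert from the return-path thread; reversing the two calls is
exactly the hazard described above.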
> >> @@ -1260,16 +1272,16 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
> >>          goto err;
> >>      }
> >>
> >> -    struct MigrationSrcPageRequest *new_entry =
> >> -        g_malloc0(sizeof(struct MigrationSrcPageRequest));
> >> +    struct RAMSrcPageRequest *new_entry =
> >> +        g_malloc0(sizeof(struct RAMSrcPageRequest));
> >>      new_entry->rb = ramblock;
> >>      new_entry->offset = start;
> >>      new_entry->len = len;
> >>
> >>      memory_region_ref(ramblock->mr);
> >> -    qemu_mutex_lock(&ms->src_page_req_mutex);
> >> -    QSIMPLEQ_INSERT_TAIL(&ms->src_page_requests, new_entry, next_req);
> >> -    qemu_mutex_unlock(&ms->src_page_req_mutex);
> >> +    qemu_mutex_lock(&rs->src_page_req_mutex);
> >> +    QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
> >> +    qemu_mutex_unlock(&rs->src_page_req_mutex);
> >
> > Hmm, OK, where did it get its rs from?
> > Anyway, the thing I needed to convince myself of was that there was
> > any guarantee that RAMState would exist by the time the first request
> > came in, something that we now need to be careful of.
> > I think we're mostly OK; we call qemu_savevm_state_begin() at the top
> > of migration_thread, so the ram_save_setup should be done and allocate
> > the RAMState before we get into the main loop, and thus before we ever
> > look at the 'start_postcopy' flag, and thus before we ever ask the
> > destination to send us stuff.
>
> >>      rcu_read_unlock();
> >>
> >>      return 0;
> >> @@ -1408,7 +1420,7 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
> >>
> >>      do {
> >>          again = true;
> >> -        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
> >> +        found = get_queued_page(rs, &pss, &dirty_ram_abs);
> >>
> >>          if (!found) {
> >>              /* priority queue empty, so just search for something dirty */
> >> @@ -1968,6 +1980,8 @@ static int ram_state_init(RAMState *rs)
> >>
> >>      memset(rs, 0, sizeof(*rs));
> >>      qemu_mutex_init(&rs->bitmap_mutex);
> >> +    qemu_mutex_init(&rs->src_page_req_mutex);
> >> +    QSIMPLEQ_INIT(&rs->src_page_requests);
> >
> > Similar question to above; that mutex is going to get reinit'd
> > on a new migration, and it shouldn't be without being destroyed first.
> > Maybe make it a once.
>
> Good catch.  I think that the easiest way is to just move it into proper
> RAMState, init it here, and destroy it at cleanup?
>
> Later, Juan.

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
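To contrast the two options on the table for the mutex, a small sketch,
again with plain pthreads and hypothetical names rather than QEMU's
QemuMutex API: Dave's "make it a once" guard versus Juan's pairing of
init in setup with destroy in cleanup.

#include <pthread.h>

typedef struct RAMStateSketch {
    pthread_mutex_t src_page_req_mutex;
} RAMStateSketch;

static pthread_once_t mutex_once = PTHREAD_ONCE_INIT;
static RAMStateSketch rs_sketch;

static void init_req_mutex(void)
{
    pthread_mutex_init(&rs_sketch.src_page_req_mutex, NULL);
}

/* Option 1 ("make it a once"): the init runs exactly once, so a second
 * migration calling ram_state_init again cannot re-init a live mutex. */
static void ram_state_init_once(void)
{
    pthread_once(&mutex_once, init_req_mutex);
}

/* Option 2 (init/destroy pairing): init in setup, destroy in cleanup,
 * so each migration starts from a freshly destroyed mutex and the
 * plain init in ram_state_init stays correct. */
static void ram_state_cleanup(void)
{
    pthread_mutex_destroy(&rs_sketch.src_page_req_mutex);
}

Either choice removes the re-init hazard; the pairing keeps the mutex's
lifetime aligned with RAMState, which is what the thread converges on.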