From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 2 Feb 2017 12:03:20 +0000
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: qemu-devel@nongnu.org, amit.shah@redhat.com
Subject: Re: [Qemu-devel] [PATCH 11/17] migration: Create thread infrastructure for multifd send side
Message-ID: <20170202120319.GB2549@work-vm>
References: <1485207141-1941-1-git-send-email-quintela@redhat.com>
 <1485207141-1941-12-git-send-email-quintela@redhat.com>
In-Reply-To: <1485207141-1941-12-git-send-email-quintela@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

* Juan Quintela (quintela@redhat.com) wrote:
> We make the locking and the transfer of information specific, even if we
> are still transmitting things through the main thread.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 52 insertions(+), 1 deletion(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index c71929e..9d7bc64 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -392,17 +392,25 @@ void migrate_compress_threads_create(void)
>  /* Multiple fd's */
>
>  struct MultiFDSendParams {
> +    /* not changed */
>      QemuThread thread;
>      QIOChannel *c;
>      QemuCond cond;
>      QemuMutex mutex;
> +    /* protected by param mutex */
>      bool quit;
>      bool started;
> +    uint8_t *address;
> +    /* protected by multifd mutex */
> +    bool done;
>  };
>  typedef struct MultiFDSendParams MultiFDSendParams;
>
>  static MultiFDSendParams *multifd_send;
>
> +QemuMutex multifd_send_mutex;
> +QemuCond multifd_send_cond;
> +
>  static void *multifd_send_thread(void *opaque)
>  {
>      MultiFDSendParams *params = opaque;
> @@ -416,7 +424,17 @@ static void *multifd_send_thread(void *opaque)
>
>      qemu_mutex_lock(&params->mutex);
>      while (!params->quit){
> -        qemu_cond_wait(&params->cond, &params->mutex);
> +        if (params->address) {
> +            params->address = 0;

This confused me (I wondered what happens to the 1st block), but I see
in the next patch this gets replaced by something more complex; so I
suggest just using params->dummy and commenting that it's about to get
replaced.
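i.e. something like the following (untested, and 'dummy' is just a
placeholder name I'm making up here; the real bookkeeping only arrives
in the next patch):

    qemu_mutex_lock(&params->mutex);
    while (!params->quit) {
        if (params->dummy) {
            /* 'dummy' only records "there is work"; the next patch
             * replaces it with the real page tracking */
            params->dummy = false;
            /* drop our own lock before taking the global one */
            qemu_mutex_unlock(&params->mutex);
            qemu_mutex_lock(&multifd_send_mutex);
            params->done = true;
            qemu_cond_signal(&multifd_send_cond);
            qemu_mutex_unlock(&multifd_send_mutex);
            qemu_mutex_lock(&params->mutex);
        } else {
            qemu_cond_wait(&params->cond, &params->mutex);
        }
    }
    qemu_mutex_unlock(&params->mutex);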
> + qemu_mutex_unlock(¶ms->mutex); > + qemu_mutex_lock(&multifd_send_mutex); > + params->done = true; > + qemu_cond_signal(&multifd_send_cond); > + qemu_mutex_unlock(&multifd_send_mutex); > + qemu_mutex_lock(¶ms->mutex); > + } else { > + qemu_cond_wait(¶ms->cond, ¶ms->mutex); > + } > } > qemu_mutex_unlock(¶ms->mutex); > > @@ -464,12 +482,16 @@ void migrate_multifd_send_threads_create(void) > } > thread_count = migrate_multifd_threads(); > multifd_send = g_new0(MultiFDSendParams, thread_count); > + qemu_mutex_init(&multifd_send_mutex); > + qemu_cond_init(&multifd_send_cond); > for (i = 0; i < thread_count; i++) { > char thread_name[15]; > qemu_mutex_init(&multifd_send[i].mutex); > qemu_cond_init(&multifd_send[i].cond); > multifd_send[i].quit = false; > multifd_send[i].started = false; > + multifd_send[i].done = true; > + multifd_send[i].address = 0; > multifd_send[i].c = socket_send_channel_create(); > if(!multifd_send[i].c) { > error_report("Error creating a send channel"); > @@ -487,6 +509,34 @@ void migrate_multifd_send_threads_create(void) > } > } > > +static int multifd_send_page(uint8_t *address) > +{ > + int i, thread_count; > + bool found = false; > + > + thread_count = migrate_multifd_threads(); > + qemu_mutex_lock(&multifd_send_mutex); > + while (!found) { > + for (i = 0; i < thread_count; i++) { > + if (multifd_send[i].done) { > + multifd_send[i].done = false; > + found = true; > + break; > + } > + } > + if (!found) { > + qemu_cond_wait(&multifd_send_cond, &multifd_send_mutex); > + } > + } > + qemu_mutex_unlock(&multifd_send_mutex); > + qemu_mutex_lock(&multifd_send[i].mutex); Having a 'multifd_send_mutex' and a 'multifd_send[i].mutex' is pretty confusing! > + multifd_send[i].address = address; > + qemu_cond_signal(&multifd_send[i].cond); > + qemu_mutex_unlock(&multifd_send[i].mutex); > + > + return 0; > +} > + > struct MultiFDRecvParams { > QemuThread thread; > QIOChannel *c; > @@ -1015,6 +1065,7 @@ static int ram_multifd_page(QEMUFile *f, PageSearchStatus *pss, > *bytes_transferred += > save_page_header(f, block, offset | RAM_SAVE_FLAG_MULTIFD_PAGE); > qemu_put_buffer(f, p, TARGET_PAGE_SIZE); > + multifd_send_page(p); > *bytes_transferred += TARGET_PAGE_SIZE; > pages = 1; > acct_info.norm_pages++; > -- > 2.9.3 I think I'm pretty OK with this; but we'll see what it looks like after you think about Paolo's suggestion; it does feel like it should be possible to do the locking etc simpler; I just don't know how. Dave -- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK