From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 2/6] migration: Make global sem_sync semaphore by channel
Date: Wed, 14 Aug 2019 15:34:56 +0100
Message-ID: <20190814143456.GK2920@work-vm>
In-Reply-To: <20190814020218.1868-3-quintela@redhat.com>
* Juan Quintela (quintela@redhat.com) wrote:
> This makes it easy to debug things because when you wait for all threads
> to arrive at that semaphore, you know which one you are waiting for.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
and queued.
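
For anyone following along: the change just moves the sync semaphore from
the shared multifd_send_state into each MultiFDSendParams, so the main
thread waits on one semaphore per channel instead of counting posts on a
single global one. A minimal standalone sketch of that pattern, using plain
POSIX semaphores and pthreads rather than QEMU's QemuSemaphore wrappers
(illustrative only, not the actual multifd code):

/* Per-channel sync semaphores: each worker posts its own semaphore,
 * the main thread waits on them one by one. Build with -pthread. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NUM_CHANNELS 4

typedef struct {
    int id;
    sem_t sem_sync;             /* each channel owns its own sync semaphore */
} Channel;

static Channel channels[NUM_CHANNELS];

static void *channel_thread(void *opaque)
{
    Channel *c = opaque;
    /* ... send pages, notice the sync request (MULTIFD_FLAG_SYNC) ... */
    sem_post(&c->sem_sync);     /* report that this channel reached the sync point */
    return NULL;
}

int main(void)
{
    pthread_t th[NUM_CHANNELS];

    for (int i = 0; i < NUM_CHANNELS; i++) {
        channels[i].id = i;
        sem_init(&channels[i].sem_sync, 0, 0);
        pthread_create(&th[i], NULL, channel_thread, &channels[i]);
    }

    /* Wait on each channel's own semaphore; if this hangs, the loop index
     * tells you exactly which channel never reached the sync point. */
    for (int i = 0; i < NUM_CHANNELS; i++) {
        sem_wait(&channels[i].sem_sync);
        printf("channel %d synced\n", i);
    }

    for (int i = 0; i < NUM_CHANNELS; i++) {
        pthread_join(th[i], NULL);
        sem_destroy(&channels[i].sem_sync);
    }
    return 0;
}

With one semaphore per channel, a hang in the sync loop points at a specific
channel id rather than an anonymous missing post on a shared semaphore,
which is exactly the debuggability win the commit message describes.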
> ---
> migration/ram.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index ca11d43e30..4bdd201a4e 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -661,6 +661,8 @@ typedef struct {
> uint64_t num_packets;
> /* pages sent through this channel */
> uint64_t num_pages;
> + /* syncs main thread and channels */
> + QemuSemaphore sem_sync;
> } MultiFDSendParams;
>
> typedef struct {
> @@ -896,8 +898,6 @@ struct {
> MultiFDSendParams *params;
> /* array of pages to sent */
> MultiFDPages_t *pages;
> - /* syncs main thread and channels */
> - QemuSemaphore sem_sync;
> /* global number of generated multifd packets */
> uint64_t packet_num;
> /* send channels ready */
> @@ -1038,6 +1038,7 @@ void multifd_save_cleanup(void)
> p->c = NULL;
> qemu_mutex_destroy(&p->mutex);
> qemu_sem_destroy(&p->sem);
> + qemu_sem_destroy(&p->sem_sync);
> g_free(p->name);
> p->name = NULL;
> multifd_pages_clear(p->pages);
> @@ -1047,7 +1048,6 @@ void multifd_save_cleanup(void)
> p->packet = NULL;
> }
> qemu_sem_destroy(&multifd_send_state->channels_ready);
> - qemu_sem_destroy(&multifd_send_state->sem_sync);
> g_free(multifd_send_state->params);
> multifd_send_state->params = NULL;
> multifd_pages_clear(multifd_send_state->pages);
> @@ -1092,7 +1092,7 @@ static void multifd_send_sync_main(void)
> MultiFDSendParams *p = &multifd_send_state->params[i];
>
> trace_multifd_send_sync_main_wait(p->id);
> - qemu_sem_wait(&multifd_send_state->sem_sync);
> + qemu_sem_wait(&p->sem_sync);
> }
> trace_multifd_send_sync_main(multifd_send_state->packet_num);
> }
> @@ -1152,7 +1152,7 @@ static void *multifd_send_thread(void *opaque)
> qemu_mutex_unlock(&p->mutex);
>
> if (flags & MULTIFD_FLAG_SYNC) {
> - qemu_sem_post(&multifd_send_state->sem_sync);
> + qemu_sem_post(&p->sem_sync);
> }
> qemu_sem_post(&multifd_send_state->channels_ready);
> } else if (p->quit) {
> @@ -1175,7 +1175,7 @@ out:
> */
> if (ret != 0) {
> if (flags & MULTIFD_FLAG_SYNC) {
> - qemu_sem_post(&multifd_send_state->sem_sync);
> + qemu_sem_post(&p->sem_sync);
> }
> qemu_sem_post(&multifd_send_state->channels_ready);
> }
> @@ -1221,7 +1221,6 @@ int multifd_save_setup(void)
> multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
> multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
> multifd_send_state->pages = multifd_pages_init(page_count);
> - qemu_sem_init(&multifd_send_state->sem_sync, 0);
> qemu_sem_init(&multifd_send_state->channels_ready, 0);
>
> for (i = 0; i < thread_count; i++) {
> @@ -1229,6 +1228,7 @@ int multifd_save_setup(void)
>
> qemu_mutex_init(&p->mutex);
> qemu_sem_init(&p->sem, 0);
> + qemu_sem_init(&p->sem_sync, 0);
> p->quit = false;
> p->pending_job = 0;
> p->id = i;
> --
> 2.21.0
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK