From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: Laurent Vivier <lvivier@redhat.com>, Richard Henderson <rth@twiddle.net>, Paolo Bonzini <pbonzini@redhat.com>, kvm@vger.kernel.org, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Juan Quintela <quintela@redhat.com>, Thomas Huth <thuth@redhat.com>, Wei Yang <richardw.yang@linux.intel.com>
Subject: [PULL 04/19] migration/multifd: call multifd_send_sync_main when sending RAM_SAVE_FLAG_EOS
Date: Fri, 12 Jul 2019 16:31:52 +0200
Message-ID: <20190712143207.4214-5-quintela@redhat.com> (raw)
In-Reply-To: <20190712143207.4214-1-quintela@redhat.com>

From: Wei Yang <richardw.yang@linux.intel.com>

On receiving RAM_SAVE_FLAG_EOS, multifd_recv_sync_main() is called to
synchronize the receive threads. The current synchronization mechanism
waits on each channel's sem_sync semaphore, which is posted when that
channel receives a packet carrying the MULTIFD_FLAG_SYNC flag.

However, the current implementation skips multifd_send_sync_main(), and
therefore never sends such a packet, when blk_mig_bulk_active() is
true. As a result the receive threads never post sem_sync, and
multifd_recv_sync_main() waits forever.

Fix this by moving multifd_send_sync_main() after the out: label, so
the sync packet is sent whenever RAM_SAVE_FLAG_EOS is sent.

[Note]: the normal migration test passes, but the blk_mig_bulk_active()
case was not tested, since it is unclear how to reproduce that
situation.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190612014337.11255-1-richardw.yang@linux.intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 908517fc2b..74c9306c78 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3466,8 +3466,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
      */
     ram_control_after_iterate(f, RAM_CONTROL_ROUND);

-    multifd_send_sync_main();
 out:
+    multifd_send_sync_main();
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
     ram_counters.transferred += 8;
--
2.21.0