From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Juan Quintela <quintela@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>, kvm@vger.kernel.org,
	Thomas Huth <thuth@redhat.com>, Richard Henderson <rth@twiddle.net>,
	Paolo Bonzini <pbonzini@redhat.com>, Wei Yang <richardw.yang@linux.intel.com>
Subject: [PULL 04/19] migration/multifd: call multifd_send_sync_main when sending RAM_SAVE_FLAG_EOS
Date: Thu, 11 Jul 2019 12:43:57 +0200
Message-ID: <20190711104412.31233-5-quintela@redhat.com>
In-Reply-To: <20190711104412.31233-1-quintela@redhat.com>

From: Wei Yang <richardw.yang@linux.intel.com>

On receiving RAM_SAVE_FLAG_EOS, multifd_recv_sync_main() is called to
synchronize the receive threads. The current synchronization mechanism
is to wait on each channel's sem_sync semaphore, which is posted when a
packet carrying the MULTIFD_FLAG_SYNC flag arrives.

In the current implementation, however, we do not call
multifd_send_sync_main() to send such a packet when
blk_mig_bulk_active() is true. As a result, the receive threads never
post sem_sync, and multifd_recv_sync_main() waits forever.

[Note]: the normal migration test passes; the blk_mig_bulk_active()
case was not tested, since it is not clear how to reproduce that
situation.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190612014337.11255-1-richardw.yang@linux.intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 908517fc2b..74c9306c78 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3466,8 +3466,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
      */
     ram_control_after_iterate(f, RAM_CONTROL_ROUND);
 
-    multifd_send_sync_main();
 out:
+    multifd_send_sync_main();
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
     ram_counters.transferred += 8;
-- 
2.21.0
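
[Editor's illustration] For readers unfamiliar with the handshake the
commit message describes, below is a minimal, self-contained sketch of
the pattern: each receive thread posts a per-channel semaphore when it
sees a SYNC-flagged packet, and the main receive path blocks on every
channel's semaphore. This is not QEMU's actual code; RecvChannel,
recv_thread(), recv_sync_main(), and the channel count are invented for
illustration. Build with "gcc -pthread" on Linux.

/*
 * Minimal sketch of the multifd sync handshake, under the assumptions
 * stated above. Each receive thread posts sem_sync on seeing a packet
 * carrying MULTIFD_FLAG_SYNC; recv_sync_main() waits on every channel.
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define MULTIFD_FLAG_SYNC (1 << 0)
#define N_CHANNELS 2

typedef struct {
    sem_t sem_sync;            /* posted by the receive thread on SYNC */
    unsigned int packet_flags; /* flags of the last "received" packet  */
} RecvChannel;

static RecvChannel channels[N_CHANNELS];

/* Per-channel receive thread: post sem_sync when a SYNC packet arrives. */
static void *recv_thread(void *opaque)
{
    RecvChannel *c = opaque;

    if (c->packet_flags & MULTIFD_FLAG_SYNC) {
        sem_post(&c->sem_sync);    /* wakes the main receive path */
    }
    return NULL;
}

/* Analogue of multifd_recv_sync_main(): wait for every channel. */
static void recv_sync_main(void)
{
    for (int i = 0; i < N_CHANNELS; i++) {
        sem_wait(&channels[i].sem_sync);  /* blocks forever if no SYNC */
    }
    printf("all channels synced\n");
}

int main(void)
{
    pthread_t threads[N_CHANNELS];

    for (int i = 0; i < N_CHANNELS; i++) {
        sem_init(&channels[i].sem_sync, 0, 0);
        /* Set this to 0 to reproduce the hang the patch fixes. */
        channels[i].packet_flags = MULTIFD_FLAG_SYNC;
        pthread_create(&threads[i], NULL, recv_thread, &channels[i]);
    }
    recv_sync_main();
    for (int i = 0; i < N_CHANNELS; i++) {
        pthread_join(threads[i], NULL);
    }
    return 0;
}

If MULTIFD_FLAG_SYNC is cleared on any channel, recv_sync_main() blocks
in sem_wait() indefinitely. That is the failure mode the patch fixes:
before the change, the blk_mig_bulk_active() path skipped
multifd_send_sync_main(), so no SYNC packet was ever sent.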