* [PATCH v4 00/19] migration: Postcopy Preemption
@ 2022-03-31 15:08 Peter Xu
  2022-03-31 15:08 ` [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname Peter Xu
                   ` (19 more replies)
  0 siblings, 20 replies; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

This is v4 of postcopy preempt series.  It can also be found here:

  https://github.com/xzpeter/qemu/tree/postcopy-preempt

RFC: https://lore.kernel.org/qemu-devel/20220119080929.39485-1-peterx@redhat.com
V1:  https://lore.kernel.org/qemu-devel/20220216062809.57179-1-peterx@redhat.com
V2:  https://lore.kernel.org/qemu-devel/20220301083925.33483-1-peterx@redhat.com
V3:  https://lore.kernel.org/qemu-devel/20220330213908.26608-1-peterx@redhat.com

v4:
- Fix a double-free on params.tls-creds when quitting qemu
- Reorder patches to satisfy per-commit builds

v3:
- Rebased to master since many patches landed
- Fixed one bug in postcopy recovery when preempt is enabled; it was only
  found when testing with TLS+recovery, because TLS changed the timing.
- Dropped patch:
  "migration: Fail postcopy preempt with TLS for now"
- Added patches for TLS:
  - "migration: Postpone releasing MigrationState.hostname"
  - "migration: Drop multifd tls_hostname cache"
  - "migration: Enable TLS for preempt channel"
  - "migration: Export tls-[creds|hostname|authz] params to cmdline too"
  - "tests: Add postcopy tls migration test"
  - "tests: Add postcopy tls recovery migration test"
- Added two more tests to the preempt test patch (tls, tls+recovery)

Abstract
========

This series adds a new migration capability called "postcopy-preempt".  It can
be enabled when postcopy is enabled, and it simply (but greatly) speeds up the
handling of postcopy page requests.

Below are some initial postcopy page request latency measurements taken with
the new series applied (measured with the uffd latency bpf script [1]).

For each page size, I measured page request latency for three cases:

  (a) Vanilla:                the old postcopy
  (b) Preempt no-break-huge:  preempt enabled, x-postcopy-preempt-break-huge=off
  (c) Preempt full:           preempt enabled, x-postcopy-preempt-break-huge=on
                              (this is the default option when preempt enabled)

The x-postcopy-preempt-break-huge parameter was added in v2 to conditionally
disable the behavior of breaking the send of a precopy huge page, for
debugging purposes.  When it's off, postcopy will not preempt precopy in the
middle of sending a huge page, but postcopy will still use its own channel.
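
For reference, here is roughly how the three cases above can be configured.
The capability is set via the QEMU monitor on both src and dst; the -global
spelling of the break-huge knob below is an assumption based on the parameter
patch (patch 13), so please double check it there:

  # (a) Vanilla postcopy: only the existing capability
  (qemu) migrate_set_capability postcopy-ram on

  # (b)/(c) Preempt: additionally enable the new capability (both sides)
  (qemu) migrate_set_capability postcopy-preempt on

  # (b) only: keep huge pages unbroken for debugging, e.g. start QEMU with
  #   -global migration.x-postcopy-preempt-break-huge=off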

I tested the two parts separately to get a rough idea of how much each part
of the change helps.  The overall benefit is the comparison between cases
(a) and (c).

  |-----------+---------+-----------------------+--------------|
  | Page size | Vanilla | Preempt no-break-huge | Preempt full |
  |-----------+---------+-----------------------+--------------|
  | 4K        |   10.68 |               N/A [*] |         0.57 |
  | 2M        |   10.58 |                  5.49 |         5.02 |
  | 1G        | 2046.65 |               933.185 |      649.445 |
  |-----------+---------+-----------------------+--------------|
  [*]: This case is N/A because with 4K pages there is no huge page to break at all

[1] https://github.com/xzpeter/small-stuffs/blob/master/tools/huge_vm/uffd-latency.bpf

TODO List
=========

Avoid precopy write() blocks postcopy
-------------------------------------

I haven't proven this, but I've always suspected that write() syscalls
blocking on precopy pages can affect postcopy servicing.  If we can solve
this problem, my wild guess is that we can further reduce the average page
latency.

At least two solutions come to mind: (1) make the write side of the
migration channel non-blocking too, or (2) use multiple threads on the send
side, just like multifd, but with a lock protecting which page to send
(the core idea being that we should _never_ rely on the main thread for
anything; multifd has that dependency because pages are queued only on the
main thread).

That can definitely be done and thought about later.
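
Below is a minimal standalone sketch of idea (1), using plain POSIX calls;
the real change would live in QEMU's QIOChannel layer rather than raw
fcntl()/send(), so treat it only as an illustration of the non-blocking idea:

  #include <errno.h>
  #include <fcntl.h>
  #include <sys/socket.h>
  #include <sys/types.h>

  /* Put the migration send socket into non-blocking mode. */
  static int sock_set_nonblock(int fd)
  {
      int flags = fcntl(fd, F_GETFL, 0);
      return flags < 0 ? -1 : fcntl(fd, F_SETFL, flags | O_NONBLOCK);
  }

  /*
   * Try to push precopy data; a return of -1 with errno == EAGAIN means
   * "would block", so the caller can go service postcopy requests and
   * retry this page later instead of sitting in write().
   */
  static ssize_t precopy_try_send(int fd, const void *buf, size_t len)
  {
      ssize_t ret;
      do {
          ret = send(fd, buf, len, 0);
      } while (ret < 0 && errno == EINTR);
      return ret;
  }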

Multi-channel for preemption threads
------------------------------------

Currently the postcopy preempt feature uses only one extra channel and one
extra thread on the dest (no new thread on the src QEMU).  That should be
good enough for the major use cases, but when the postcopy queue is long
enough (e.g. hundreds of vCPUs faulted on different pages) we could still
observe higher average delays.  Whether growing the number of
threads/channels would solve that is debatable, but it sounds worth a try.
That's yet another thing to think about after this patchset lands.

Logically the design leaves room for that - the receiving postcopy preempt
thread understands the whole ram-layer migration protocol, so for multiple
channels and threads we could simply grow that into multiple threads handling
the same protocol (with multiple PostcopyTmpPage instances).  The source side
needs more thought on synchronization, but it shouldn't affect the protocol
layer, so it should be easy to keep compatible.
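
A very rough sketch of that direction on the dest side; the per-thread
argument struct and the idea of one QEMUFile per preempt thread are
hypothetical (this series has a single postcopy_qemufile_dst), so it only
illustrates the shape of the change:

  /* Hypothetical: one channel index, one PostcopyTmpPage, one thread each */
  typedef struct {
      MigrationIncomingState *mis;
      QEMUFile *file;     /* this thread's postcopy channel (hypothetical) */
      int channel;        /* index into mis->postcopy_tmp_pages[] */
  } PreemptThreadArgs;

  static void *postcopy_preempt_thread_n(void *opaque)
  {
      PreemptThreadArgs *args = opaque;

      /* Same ram-layer protocol as the single-thread case */
      int ret = ram_load_postcopy(args->file, args->channel);

      return ret == 0 ? NULL : (void *)-1;
  }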

Please review, thanks.

Peter Xu (19):
  migration: Postpone releasing MigrationState.hostname
  migration: Drop multifd tls_hostname cache
  migration: Add pss.postcopy_requested status
  migration: Move migrate_allow_multifd and helpers into migration.c
  migration: Export ram_load_postcopy()
  migration: Move channel setup out of postcopy_try_recover()
  migration: Allow migrate-recover to run multiple times
  migration: Add postcopy-preempt capability
  migration: Postcopy preemption preparation on channel creation
  migration: Postcopy preemption enablement
  migration: Postcopy recover with preempt enabled
  migration: Create the postcopy preempt channel asynchronously
  migration: Parameter x-postcopy-preempt-break-huge
  migration: Add helpers to detect TLS capability
  migration: Export tls-[creds|hostname|authz] params to cmdline too
  migration: Enable TLS for preempt channel
  tests: Add postcopy tls migration test
  tests: Add postcopy tls recovery migration test
  tests: Add postcopy preempt tests

 migration/channel.c          |  11 +-
 migration/migration.c        | 218 ++++++++++++++++++++------
 migration/migration.h        |  52 ++++++-
 migration/multifd.c          |  36 +----
 migration/multifd.h          |   4 -
 migration/postcopy-ram.c     | 190 ++++++++++++++++++++++-
 migration/postcopy-ram.h     |  11 ++
 migration/qemu-file.c        |  27 ++++
 migration/qemu-file.h        |   1 +
 migration/ram.c              | 288 +++++++++++++++++++++++++++++++++--
 migration/ram.h              |   3 +
 migration/savevm.c           |  49 ++++--
 migration/socket.c           |  22 ++-
 migration/socket.h           |   1 +
 migration/trace-events       |  15 +-
 qapi/migration.json          |   8 +-
 tests/qtest/migration-test.c | 113 ++++++++++++--
 17 files changed, 918 insertions(+), 131 deletions(-)

-- 
2.32.0




* [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-07 17:21   ` Dr. David Alan Gilbert
  2022-04-20 10:34   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 02/19] migration: Drop multifd tls_hostname cache Peter Xu
                   ` (18 subsequent siblings)
  19 siblings, 2 replies; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

We used to release it right after migrate_fd_connect().  That's not good
enough when more than one socket pair is required, because the hostname will
still be needed to establish the TLS connections for the rest of the
channels.

One example is multifd, where we copied over the hostname for each channel,
which is actually not needed.

Keep the hostname until the cleanup phase of migration.

Cc: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/channel.c   | 1 -
 migration/migration.c | 5 +++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/migration/channel.c b/migration/channel.c
index c4fc000a1a..c6a8dcf1d7 100644
--- a/migration/channel.c
+++ b/migration/channel.c
@@ -96,6 +96,5 @@ void migration_channel_connect(MigrationState *s,
         }
     }
     migrate_fd_connect(s, error);
-    g_free(s->hostname);
     error_free(error);
 }
diff --git a/migration/migration.c b/migration/migration.c
index 695f0f2900..281d33326b 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1809,6 +1809,11 @@ static void migrate_fd_cleanup(MigrationState *s)
     qemu_bh_delete(s->cleanup_bh);
     s->cleanup_bh = NULL;
 
+    if (s->hostname) {
+        g_free(s->hostname);
+        s->hostname = NULL;
+    }
+
     qemu_savevm_state_cleanup();
 
     if (s->to_dst_file) {
-- 
2.32.0




* [PATCH v4 02/19] migration: Drop multifd tls_hostname cache
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
  2022-03-31 15:08 ` [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-07 17:42   ` Dr. David Alan Gilbert
  2022-04-20 10:35   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 03/19] migration: Add pss.postcopy_requested status Peter Xu
                   ` (17 subsequent siblings)
  19 siblings, 2 replies; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

The hostname is cached N times, where N equals the number of multifd
channels.

Drop that cache, because after the previous patch s->hostname stays alive
for the whole lifecycle of the migration procedure.

Cc: Juan Quintela <quintela@redhat.com>
Cc: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/multifd.c | 10 +++-------
 migration/multifd.h |  2 --
 2 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 76b57a7177..1be4ab5d17 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -542,8 +542,6 @@ void multifd_save_cleanup(void)
         qemu_sem_destroy(&p->sem_sync);
         g_free(p->name);
         p->name = NULL;
-        g_free(p->tls_hostname);
-        p->tls_hostname = NULL;
         multifd_pages_clear(p->pages);
         p->pages = NULL;
         p->packet_len = 0;
@@ -763,7 +761,7 @@ static void multifd_tls_channel_connect(MultiFDSendParams *p,
                                         Error **errp)
 {
     MigrationState *s = migrate_get_current();
-    const char *hostname = p->tls_hostname;
+    const char *hostname = s->hostname;
     QIOChannelTLS *tioc;
 
     tioc = migration_tls_client_create(s, ioc, hostname, errp);
@@ -787,7 +785,8 @@ static bool multifd_channel_connect(MultiFDSendParams *p,
     MigrationState *s = migrate_get_current();
 
     trace_multifd_set_outgoing_channel(
-        ioc, object_get_typename(OBJECT(ioc)), p->tls_hostname, error);
+        ioc, object_get_typename(OBJECT(ioc)),
+        migrate_get_current()->hostname, error);
 
     if (!error) {
         if (s->parameters.tls_creds &&
@@ -874,7 +873,6 @@ int multifd_save_setup(Error **errp)
     int thread_count;
     uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     uint8_t i;
-    MigrationState *s;
 
     if (!migrate_use_multifd()) {
         return 0;
@@ -884,7 +882,6 @@ int multifd_save_setup(Error **errp)
         return -1;
     }
 
-    s = migrate_get_current();
     thread_count = migrate_multifd_channels();
     multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
     multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
@@ -909,7 +906,6 @@ int multifd_save_setup(Error **errp)
         p->packet->magic = cpu_to_be32(MULTIFD_MAGIC);
         p->packet->version = cpu_to_be32(MULTIFD_VERSION);
         p->name = g_strdup_printf("multifdsend_%d", i);
-        p->tls_hostname = g_strdup(s->hostname);
         /* We need one extra place for the packet header */
         p->iov = g_new0(struct iovec, page_count + 1);
         p->normal = g_new0(ram_addr_t, page_count);
diff --git a/migration/multifd.h b/migration/multifd.h
index 4dda900a0b..3d577b98b7 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -72,8 +72,6 @@ typedef struct {
     uint8_t id;
     /* channel thread name */
     char *name;
-    /* tls hostname */
-    char *tls_hostname;
     /* channel thread id */
     QemuThread thread;
     /* communication channel */
-- 
2.32.0




* [PATCH v4 03/19] migration: Add pss.postcopy_requested status
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
  2022-03-31 15:08 ` [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname Peter Xu
  2022-03-31 15:08 ` [PATCH v4 02/19] migration: Drop multifd tls_hostname cache Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 10:36   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 04/19] migration: Move migrate_allow_multifd and helpers into migration.c Peter Xu
                   ` (16 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

This boolean flag shows whether the current page being migrated was triggered
by a postcopy request or not.  Then in ram_save_host_page() and deeper in the
stack we'll be able to tell the priority of this page.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index 3532f64ecb..bfcd45a36e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -414,6 +414,8 @@ struct PageSearchStatus {
     unsigned long page;
     /* Set once we wrap around */
     bool         complete_round;
+    /* Whether current page is explicitly requested by postcopy */
+    bool         postcopy_requested;
 };
 typedef struct PageSearchStatus PageSearchStatus;
 
@@ -1487,6 +1489,9 @@ retry:
  */
 static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
 {
+    /* This is not a postcopy requested page */
+    pss->postcopy_requested = false;
+
     pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
     if (pss->complete_round && pss->block == rs->last_seen_block &&
         pss->page >= rs->last_page) {
@@ -1981,6 +1986,7 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
          * really rare.
          */
         pss->complete_round = false;
+        pss->postcopy_requested = true;
     }
 
     return !!block;
-- 
2.32.0




* [PATCH v4 04/19] migration: Move migrate_allow_multifd and helpers into migration.c
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (2 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 03/19] migration: Add pss.postcopy_requested status Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 10:41   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 05/19] migration: Export ram_load_postcopy() Peter Xu
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

This variable, along with its helpers, is used to detect whether multiple
channels will be supported for migration.  In follow-up patches, there'll be
another capability that requires multiple channels.  Hence move it out of the
multifd-specific code and make it public.  Meanwhile rename it from "multifd"
to "multi_channels" to reflect its real meaning.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 22 +++++++++++++++++-----
 migration/migration.h |  3 +++
 migration/multifd.c   | 19 ++++---------------
 migration/multifd.h   |  2 --
 4 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 281d33326b..596d3d30b4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -180,6 +180,18 @@ static int migration_maybe_pause(MigrationState *s,
                                  int new_state);
 static void migrate_fd_cancel(MigrationState *s);
 
+static bool migrate_allow_multi_channels = true;
+
+void migrate_protocol_allow_multi_channels(bool allow)
+{
+    migrate_allow_multi_channels = allow;
+}
+
+bool migrate_multi_channels_is_allowed(void)
+{
+    return migrate_allow_multi_channels;
+}
+
 static gint page_request_addr_cmp(gconstpointer ap, gconstpointer bp)
 {
     uintptr_t a = (uintptr_t) ap, b = (uintptr_t) bp;
@@ -469,12 +481,12 @@ static void qemu_start_incoming_migration(const char *uri, Error **errp)
 {
     const char *p = NULL;
 
-    migrate_protocol_allow_multifd(false); /* reset it anyway */
+    migrate_protocol_allow_multi_channels(false); /* reset it anyway */
     qapi_event_send_migration(MIGRATION_STATUS_SETUP);
     if (strstart(uri, "tcp:", &p) ||
         strstart(uri, "unix:", NULL) ||
         strstart(uri, "vsock:", NULL)) {
-        migrate_protocol_allow_multifd(true);
+        migrate_protocol_allow_multi_channels(true);
         socket_start_incoming_migration(p ? p : uri, errp);
 #ifdef CONFIG_RDMA
     } else if (strstart(uri, "rdma:", &p)) {
@@ -1261,7 +1273,7 @@ static bool migrate_caps_check(bool *cap_list,
 
     /* incoming side only */
     if (runstate_check(RUN_STATE_INMIGRATE) &&
-        !migrate_multifd_is_allowed() &&
+        !migrate_multi_channels_is_allowed() &&
         cap_list[MIGRATION_CAPABILITY_MULTIFD]) {
         error_setg(errp, "multifd is not supported by current protocol");
         return false;
@@ -2324,11 +2336,11 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
         }
     }
 
-    migrate_protocol_allow_multifd(false);
+    migrate_protocol_allow_multi_channels(false);
     if (strstart(uri, "tcp:", &p) ||
         strstart(uri, "unix:", NULL) ||
         strstart(uri, "vsock:", NULL)) {
-        migrate_protocol_allow_multifd(true);
+        migrate_protocol_allow_multi_channels(true);
         socket_start_outgoing_migration(s, p ? p : uri, &local_err);
 #ifdef CONFIG_RDMA
     } else if (strstart(uri, "rdma:", &p)) {
diff --git a/migration/migration.h b/migration/migration.h
index 2de861df01..f17ccc657c 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -430,4 +430,7 @@ void migration_cancel(const Error *error);
 void populate_vfio_info(MigrationInfo *info);
 void postcopy_temp_page_reset(PostcopyTmpPage *tmp_page);
 
+bool migrate_multi_channels_is_allowed(void);
+void migrate_protocol_allow_multi_channels(bool allow);
+
 #endif
diff --git a/migration/multifd.c b/migration/multifd.c
index 1be4ab5d17..9ea4f581e2 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -517,7 +517,7 @@ void multifd_save_cleanup(void)
 {
     int i;
 
-    if (!migrate_use_multifd() || !migrate_multifd_is_allowed()) {
+    if (!migrate_use_multifd() || !migrate_multi_channels_is_allowed()) {
         return;
     }
     multifd_send_terminate_threads(NULL);
@@ -857,17 +857,6 @@ cleanup:
     multifd_new_send_channel_cleanup(p, sioc, local_err);
 }
 
-static bool migrate_allow_multifd = true;
-void migrate_protocol_allow_multifd(bool allow)
-{
-    migrate_allow_multifd = allow;
-}
-
-bool migrate_multifd_is_allowed(void)
-{
-    return migrate_allow_multifd;
-}
-
 int multifd_save_setup(Error **errp)
 {
     int thread_count;
@@ -877,7 +866,7 @@ int multifd_save_setup(Error **errp)
     if (!migrate_use_multifd()) {
         return 0;
     }
-    if (!migrate_multifd_is_allowed()) {
+    if (!migrate_multi_channels_is_allowed()) {
         error_setg(errp, "multifd is not supported by current protocol");
         return -1;
     }
@@ -976,7 +965,7 @@ int multifd_load_cleanup(Error **errp)
 {
     int i;
 
-    if (!migrate_use_multifd() || !migrate_multifd_is_allowed()) {
+    if (!migrate_use_multifd() || !migrate_multi_channels_is_allowed()) {
         return 0;
     }
     multifd_recv_terminate_threads(NULL);
@@ -1125,7 +1114,7 @@ int multifd_load_setup(Error **errp)
     if (!migrate_use_multifd()) {
         return 0;
     }
-    if (!migrate_multifd_is_allowed()) {
+    if (!migrate_multi_channels_is_allowed()) {
         error_setg(errp, "multifd is not supported by current protocol");
         return -1;
     }
diff --git a/migration/multifd.h b/migration/multifd.h
index 3d577b98b7..7d0effcb03 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -13,8 +13,6 @@
 #ifndef QEMU_MIGRATION_MULTIFD_H
 #define QEMU_MIGRATION_MULTIFD_H
 
-bool migrate_multifd_is_allowed(void);
-void migrate_protocol_allow_multifd(bool allow);
 int multifd_save_setup(Error **errp);
 void multifd_save_cleanup(void);
 int multifd_load_setup(Error **errp);
-- 
2.32.0




* [PATCH v4 05/19] migration: Export ram_load_postcopy()
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (3 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 04/19] migration: Move migrate_allow_multifd and helpers into migration.c Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 10:42   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 06/19] migration: Move channel setup out of postcopy_try_recover() Peter Xu
                   ` (14 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

This will be reused by the postcopy fast load thread.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.c | 2 +-
 migration/ram.h | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index bfcd45a36e..253fe4b756 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3645,7 +3645,7 @@ int ram_postcopy_incoming_init(MigrationIncomingState *mis)
  *
  * @f: QEMUFile where to send the data
  */
-static int ram_load_postcopy(QEMUFile *f)
+int ram_load_postcopy(QEMUFile *f)
 {
     int flags = 0, ret = 0;
     bool place_needed = false;
diff --git a/migration/ram.h b/migration/ram.h
index 2c6dc3675d..ded0a3a086 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -61,6 +61,7 @@ void ram_postcopy_send_discard_bitmap(MigrationState *ms);
 /* For incoming postcopy discard */
 int ram_discard_range(const char *block_name, uint64_t start, size_t length);
 int ram_postcopy_incoming_init(MigrationIncomingState *mis);
+int ram_load_postcopy(QEMUFile *f);
 
 void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
 
-- 
2.32.0




* [PATCH v4 06/19] migration: Move channel setup out of postcopy_try_recover()
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (4 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 05/19] migration: Export ram_load_postcopy() Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 10:43   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 07/19] migration: Allow migrate-recover to run multiple times Peter Xu
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

We used to use postcopy_try_recover() instead of migration_incoming_setup()
to set up the incoming channels.  That's fine for the old world, but in the
new world there can be more than one channel that needs setup.  Better move
the channel setup out of it, so that postcopy_try_recover() only handles the
last phase of switching to the recovery phase.

To do that, in migration_fd_process_incoming() move the postcopy_try_recover()
call to after migration_incoming_setup(), which will set up the channels.
Meanwhile in migration_ioc_process_incoming(), postpone the recover() routine
until right before we jump into migration_incoming_process().

A side benefit is that we don't need to pass a QEMUFile* to
postcopy_try_recover() anymore.  Remove it.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 596d3d30b4..8ecf78f2c7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -671,19 +671,20 @@ void migration_incoming_process(void)
 }
 
 /* Returns true if recovered from a paused migration, otherwise false */
-static bool postcopy_try_recover(QEMUFile *f)
+static bool postcopy_try_recover(void)
 {
     MigrationIncomingState *mis = migration_incoming_get_current();
 
     if (mis->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
         /* Resumed from a paused postcopy migration */
 
-        mis->from_src_file = f;
+        /* This should be set already in migration_incoming_setup() */
+        assert(mis->from_src_file);
         /* Postcopy has standalone thread to do vm load */
-        qemu_file_set_blocking(f, true);
+        qemu_file_set_blocking(mis->from_src_file, true);
 
         /* Re-configure the return path */
-        mis->to_src_file = qemu_file_get_return_path(f);
+        mis->to_src_file = qemu_file_get_return_path(mis->from_src_file);
 
         migrate_set_state(&mis->state, MIGRATION_STATUS_POSTCOPY_PAUSED,
                           MIGRATION_STATUS_POSTCOPY_RECOVER);
@@ -704,11 +705,10 @@ static bool postcopy_try_recover(QEMUFile *f)
 
 void migration_fd_process_incoming(QEMUFile *f, Error **errp)
 {
-    if (postcopy_try_recover(f)) {
+    if (!migration_incoming_setup(f, errp)) {
         return;
     }
-
-    if (!migration_incoming_setup(f, errp)) {
+    if (postcopy_try_recover()) {
         return;
     }
     migration_incoming_process();
@@ -724,11 +724,6 @@ void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp)
         /* The first connection (multifd may have multiple) */
         QEMUFile *f = qemu_fopen_channel_input(ioc);
 
-        /* If it's a recovery, we're done */
-        if (postcopy_try_recover(f)) {
-            return;
-        }
-
         if (!migration_incoming_setup(f, errp)) {
             return;
         }
@@ -749,6 +744,10 @@ void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp)
     }
 
     if (start_migration) {
+        /* If it's a recovery, we're done */
+        if (postcopy_try_recover()) {
+            return;
+        }
         migration_incoming_process();
     }
 }
-- 
2.32.0




* [PATCH v4 07/19] migration: Allow migrate-recover to run multiple times
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (5 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 06/19] migration: Move channel setup out of postcopy_try_recover() Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 10:44   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 08/19] migration: Add postcopy-preempt capability Peter Xu
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

Previously migration didn't have an easy way to clean up the listening
transport, so migrate recovery was only allowed to execute once.  That was
done with a trick flag, postcopy_recover_triggered.

Now the facility is already there.

Drop postcopy_recover_triggered and instead allow a new migrate-recover
command to release the previous listener transport.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 13 ++-----------
 migration/migration.h |  1 -
 migration/savevm.c    |  3 ---
 3 files changed, 2 insertions(+), 15 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 8ecf78f2c7..21fcf5102f 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2164,11 +2164,8 @@ void qmp_migrate_recover(const char *uri, Error **errp)
         return;
     }
 
-    if (qatomic_cmpxchg(&mis->postcopy_recover_triggered,
-                       false, true) == true) {
-        error_setg(errp, "Migrate recovery is triggered already");
-        return;
-    }
+    /* If there's an existing transport, release it */
+    migration_incoming_transport_cleanup(mis);
 
     /*
      * Note that this call will never start a real migration; it will
@@ -2176,12 +2173,6 @@ void qmp_migrate_recover(const char *uri, Error **errp)
      * to continue using that newly established channel.
      */
     qemu_start_incoming_migration(uri, errp);
-
-    /* Safe to dereference with the assert above */
-    if (*errp) {
-        /* Reset the flag so user could still retry */
-        qatomic_set(&mis->postcopy_recover_triggered, false);
-    }
 }
 
 void qmp_migrate_pause(Error **errp)
diff --git a/migration/migration.h b/migration/migration.h
index f17ccc657c..a863032b71 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -139,7 +139,6 @@ struct MigrationIncomingState {
     struct PostcopyBlocktimeContext *blocktime_ctx;
 
     /* notify PAUSED postcopy incoming migrations to try to continue */
-    bool postcopy_recover_triggered;
     QemuSemaphore postcopy_pause_sem_dst;
     QemuSemaphore postcopy_pause_sem_fault;
 
diff --git a/migration/savevm.c b/migration/savevm.c
index 02ed94c180..d9076897b8 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2589,9 +2589,6 @@ static bool postcopy_pause_incoming(MigrationIncomingState *mis)
 
     assert(migrate_postcopy_ram());
 
-    /* Clear the triggered bit to allow one recovery */
-    mis->postcopy_recover_triggered = false;
-
     /*
      * Unregister yank with either from/to src would work, since ioc behind it
      * is the same
-- 
2.32.0




* [PATCH v4 08/19] migration: Add postcopy-preempt capability
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (6 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 07/19] migration: Allow migrate-recover to run multiple times Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 10:51   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 09/19] migration: Postcopy preemption preparation on channel creation Peter Xu
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

Firstly, postcopy already preempts precopy to some degree, due to the fact
that we do unqueue_page() before looking into the dirty bitmap.

However that's not enough, e.g., when host huge pages are enabled: while a
precopy huge page is being sent, a postcopy request needs to wait until the
whole huge page finishes sending.  That can introduce quite some delay; the
bigger the huge page is, the larger the delay it'll bring.

This patch adds a new capability to allow postcopy requests to preempt an
in-flight precopy huge page, so that postcopy requests can be serviced even
faster.

Meanwhile, to send them even faster, bypass the precopy stream by providing a
standalone postcopy socket for sending requested pages.

Since the new behavior will not be compatible with the old behavior, it will
not be the default; it's enabled only when the new capability is set on both
src/dst QEMUs.

This patch only adds the capability itself; the logic will be added in
follow-up patches.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 23 +++++++++++++++++++++++
 migration/migration.h |  1 +
 qapi/migration.json   |  8 +++++++-
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/migration/migration.c b/migration/migration.c
index 21fcf5102f..76e6ada524 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1235,6 +1235,11 @@ static bool migrate_caps_check(bool *cap_list,
             error_setg(errp, "Postcopy is not compatible with ignore-shared");
             return false;
         }
+
+        if (cap_list[MIGRATION_CAPABILITY_MULTIFD]) {
+            error_setg(errp, "Multifd is not supported in postcopy");
+            return false;
+        }
     }
 
     if (cap_list[MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT]) {
@@ -1278,6 +1283,13 @@ static bool migrate_caps_check(bool *cap_list,
         return false;
     }
 
+    if (cap_list[MIGRATION_CAPABILITY_POSTCOPY_PREEMPT]) {
+        if (!cap_list[MIGRATION_CAPABILITY_POSTCOPY_RAM]) {
+            error_setg(errp, "Postcopy preempt requires postcopy-ram");
+            return false;
+        }
+    }
+
     return true;
 }
 
@@ -2627,6 +2639,15 @@ bool migrate_background_snapshot(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT];
 }
 
+bool migrate_postcopy_preempt(void)
+{
+    MigrationState *s;
+
+    s = migrate_get_current();
+
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_PREEMPT];
+}
+
 /* migration thread support */
 /*
  * Something bad happened to the RP stream, mark an error
@@ -4237,6 +4258,8 @@ static Property migration_properties[] = {
     DEFINE_PROP_MIG_CAP("x-compress", MIGRATION_CAPABILITY_COMPRESS),
     DEFINE_PROP_MIG_CAP("x-events", MIGRATION_CAPABILITY_EVENTS),
     DEFINE_PROP_MIG_CAP("x-postcopy-ram", MIGRATION_CAPABILITY_POSTCOPY_RAM),
+    DEFINE_PROP_MIG_CAP("x-postcopy-preempt",
+                        MIGRATION_CAPABILITY_POSTCOPY_PREEMPT),
     DEFINE_PROP_MIG_CAP("x-colo", MIGRATION_CAPABILITY_X_COLO),
     DEFINE_PROP_MIG_CAP("x-release-ram", MIGRATION_CAPABILITY_RELEASE_RAM),
     DEFINE_PROP_MIG_CAP("x-block", MIGRATION_CAPABILITY_BLOCK),
diff --git a/migration/migration.h b/migration/migration.h
index a863032b71..af4bcb19c2 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -394,6 +394,7 @@ int migrate_decompress_threads(void);
 bool migrate_use_events(void);
 bool migrate_postcopy_blocktime(void);
 bool migrate_background_snapshot(void);
+bool migrate_postcopy_preempt(void);
 
 /* Sending on the return path - generic and then for each message type */
 void migrate_send_rp_shut(MigrationIncomingState *mis,
diff --git a/qapi/migration.json b/qapi/migration.json
index 18e2610e88..3523f23386 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -463,6 +463,12 @@
 #                       procedure starts. The VM RAM is saved with running VM.
 #                       (since 6.0)
 #
+# @postcopy-preempt: If enabled, the migration process will allow postcopy
+#                    requests to preempt precopy stream, so postcopy requests
+#                    will be handled faster.  This is a performance feature and
+#                    should not affect the correctness of postcopy migration.
+#                    (since 7.0)
+#
 # Features:
 # @unstable: Members @x-colo and @x-ignore-shared are experimental.
 #
@@ -476,7 +482,7 @@
            'block', 'return-path', 'pause-before-switchover', 'multifd',
            'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
            { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
-           'validate-uuid', 'background-snapshot'] }
+           'validate-uuid', 'background-snapshot', 'postcopy-preempt'] }
 
 ##
 # @MigrationCapabilityStatus:
-- 
2.32.0




* [PATCH v4 09/19] migration: Postcopy preemption preparation on channel creation
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (7 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 08/19] migration: Add postcopy-preempt capability Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 10:59   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 10/19] migration: Postcopy preemption enablement Peter Xu
                   ` (10 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

Create a new socket for postcopy, prepared to send postcopy-requested pages
via this specific channel so that they don't get blocked by precopy pages.

A new thread is also created on the dest qemu to receive data from this new
channel, based on the ram_load_postcopy() routine.

The ram_load_postcopy(POSTCOPY) branch and the new thread have not started to
function yet; that'll be done in follow-up patches.

Clean up the new sockets on both src/dst QEMUs, and meanwhile look after the
new thread too, to make sure it'll be recycled properly.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c    | 62 +++++++++++++++++++++++----
 migration/migration.h    |  8 ++++
 migration/postcopy-ram.c | 92 ++++++++++++++++++++++++++++++++++++++--
 migration/postcopy-ram.h | 10 +++++
 migration/ram.c          | 25 ++++++++---
 migration/ram.h          |  4 +-
 migration/savevm.c       | 20 ++++-----
 migration/socket.c       | 22 +++++++++-
 migration/socket.h       |  1 +
 migration/trace-events   |  5 ++-
 10 files changed, 218 insertions(+), 31 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 76e6ada524..01b882494d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -321,6 +321,12 @@ void migration_incoming_state_destroy(void)
         mis->page_requested = NULL;
     }
 
+    if (mis->postcopy_qemufile_dst) {
+        migration_ioc_unregister_yank_from_file(mis->postcopy_qemufile_dst);
+        qemu_fclose(mis->postcopy_qemufile_dst);
+        mis->postcopy_qemufile_dst = NULL;
+    }
+
     yank_unregister_instance(MIGRATION_YANK_INSTANCE);
 }
 
@@ -714,15 +720,21 @@ void migration_fd_process_incoming(QEMUFile *f, Error **errp)
     migration_incoming_process();
 }
 
+static bool migration_needs_multiple_sockets(void)
+{
+    return migrate_use_multifd() || migrate_postcopy_preempt();
+}
+
 void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp)
 {
     MigrationIncomingState *mis = migration_incoming_get_current();
     Error *local_err = NULL;
     bool start_migration;
+    QEMUFile *f;
 
     if (!mis->from_src_file) {
         /* The first connection (multifd may have multiple) */
-        QEMUFile *f = qemu_fopen_channel_input(ioc);
+        f = qemu_fopen_channel_input(ioc);
 
         if (!migration_incoming_setup(f, errp)) {
             return;
@@ -730,13 +742,18 @@ void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp)
 
         /*
          * Common migration only needs one channel, so we can start
-         * right now.  Multifd needs more than one channel, we wait.
+         * right now.  Some features need more than one channel, we wait.
          */
-        start_migration = !migrate_use_multifd();
+        start_migration = !migration_needs_multiple_sockets();
     } else {
         /* Multiple connections */
-        assert(migrate_use_multifd());
-        start_migration = multifd_recv_new_channel(ioc, &local_err);
+        assert(migration_needs_multiple_sockets());
+        if (migrate_use_multifd()) {
+            start_migration = multifd_recv_new_channel(ioc, &local_err);
+        } else if (migrate_postcopy_preempt()) {
+            f = qemu_fopen_channel_input(ioc);
+            start_migration = postcopy_preempt_new_channel(mis, f);
+        }
         if (local_err) {
             error_propagate(errp, local_err);
             return;
@@ -761,11 +778,20 @@ void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp)
 bool migration_has_all_channels(void)
 {
     MigrationIncomingState *mis = migration_incoming_get_current();
-    bool all_channels;
 
-    all_channels = multifd_recv_all_channels_created();
+    if (!mis->from_src_file) {
+        return false;
+    }
+
+    if (migrate_use_multifd()) {
+        return multifd_recv_all_channels_created();
+    }
+
+    if (migrate_postcopy_preempt()) {
+        return mis->postcopy_qemufile_dst != NULL;
+    }
 
-    return all_channels && mis->from_src_file != NULL;
+    return true;
 }
 
 /*
@@ -1863,6 +1889,12 @@ static void migrate_fd_cleanup(MigrationState *s)
         qemu_fclose(tmp);
     }
 
+    if (s->postcopy_qemufile_src) {
+        migration_ioc_unregister_yank_from_file(s->postcopy_qemufile_src);
+        qemu_fclose(s->postcopy_qemufile_src);
+        s->postcopy_qemufile_src = NULL;
+    }
+
     assert(!migration_is_active(s));
 
     if (s->state == MIGRATION_STATUS_CANCELLING) {
@@ -3238,6 +3270,11 @@ static void migration_completion(MigrationState *s)
         qemu_savevm_state_complete_postcopy(s->to_dst_file);
         qemu_mutex_unlock_iothread();
 
+        /* Shutdown the postcopy fast path thread */
+        if (migrate_postcopy_preempt()) {
+            postcopy_preempt_shutdown_file(s);
+        }
+
         trace_migration_completion_postcopy_end_after_complete();
     } else {
         goto fail;
@@ -4125,6 +4162,15 @@ void migrate_fd_connect(MigrationState *s, Error *error_in)
         }
     }
 
+    /* This needs to be done before resuming a postcopy */
+    if (postcopy_preempt_setup(s, &local_err)) {
+        error_report_err(local_err);
+        migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
+                          MIGRATION_STATUS_FAILED);
+        migrate_fd_cleanup(s);
+        return;
+    }
+
     if (resume) {
         /* Wakeup the main migration thread to do the recovery */
         migrate_set_state(&s->state, MIGRATION_STATUS_POSTCOPY_PAUSED,
diff --git a/migration/migration.h b/migration/migration.h
index af4bcb19c2..caa910d956 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -23,6 +23,7 @@
 #include "io/channel-buffer.h"
 #include "net/announce.h"
 #include "qom/object.h"
+#include "postcopy-ram.h"
 
 struct PostcopyBlocktimeContext;
 
@@ -112,6 +113,11 @@ struct MigrationIncomingState {
      * enabled.
      */
     unsigned int postcopy_channels;
+    /* QEMUFile for postcopy only; it'll be handled by a separate thread */
+    QEMUFile *postcopy_qemufile_dst;
+    /* Postcopy priority thread is used to receive postcopy requested pages */
+    QemuThread postcopy_prio_thread;
+    bool postcopy_prio_thread_created;
     /*
      * An array of temp host huge pages to be used, one for each postcopy
      * channel.
@@ -192,6 +198,8 @@ struct MigrationState {
     QEMUBH *cleanup_bh;
     /* Protected by qemu_file_lock */
     QEMUFile *to_dst_file;
+    /* Postcopy specific transfer channel */
+    QEMUFile *postcopy_qemufile_src;
     QIOChannelBuffer *bioc;
     /*
      * Protects to_dst_file/from_dst_file pointers.  We need to make sure we
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 32c52f4b1d..df0c02f729 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -33,6 +33,9 @@
 #include "trace.h"
 #include "hw/boards.h"
 #include "exec/ramblock.h"
+#include "socket.h"
+#include "qemu-file-channel.h"
+#include "yank_functions.h"
 
 /* Arbitrary limit on size of each discard command,
  * keeps them around ~200 bytes
@@ -567,6 +570,11 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
 {
     trace_postcopy_ram_incoming_cleanup_entry();
 
+    if (mis->postcopy_prio_thread_created) {
+        qemu_thread_join(&mis->postcopy_prio_thread);
+        mis->postcopy_prio_thread_created = false;
+    }
+
     if (mis->have_fault_thread) {
         Error *local_err = NULL;
 
@@ -1102,8 +1110,13 @@ static int postcopy_temp_pages_setup(MigrationIncomingState *mis)
     int err, i, channels;
     void *temp_page;
 
-    /* TODO: will be boosted when enable postcopy preemption */
-    mis->postcopy_channels = 1;
+    if (migrate_postcopy_preempt()) {
+        /* If preemption enabled, need extra channel for urgent requests */
+        mis->postcopy_channels = RAM_CHANNEL_MAX;
+    } else {
+        /* Both precopy/postcopy on the same channel */
+        mis->postcopy_channels = 1;
+    }
 
     channels = mis->postcopy_channels;
     mis->postcopy_tmp_pages = g_malloc0_n(sizeof(PostcopyTmpPage), channels);
@@ -1170,7 +1183,7 @@ int postcopy_ram_incoming_setup(MigrationIncomingState *mis)
         return -1;
     }
 
-    postcopy_thread_create(mis, &mis->fault_thread, "postcopy/fault",
+    postcopy_thread_create(mis, &mis->fault_thread, "fault-default",
                            postcopy_ram_fault_thread, QEMU_THREAD_JOINABLE);
     mis->have_fault_thread = true;
 
@@ -1185,6 +1198,16 @@ int postcopy_ram_incoming_setup(MigrationIncomingState *mis)
         return -1;
     }
 
+    if (migrate_postcopy_preempt()) {
+        /*
+         * This thread needs to be created after the temp pages because it'll fetch
+         * RAM_CHANNEL_POSTCOPY PostcopyTmpPage immediately.
+         */
+        postcopy_thread_create(mis, &mis->postcopy_prio_thread, "fault-fast",
+                               postcopy_preempt_thread, QEMU_THREAD_JOINABLE);
+        mis->postcopy_prio_thread_created = true;
+    }
+
     trace_postcopy_ram_enable_notify();
 
     return 0;
@@ -1514,3 +1537,66 @@ void postcopy_unregister_shared_ufd(struct PostCopyFD *pcfd)
         }
     }
 }
+
+bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file)
+{
+    /*
+     * The new loading channel has its own threads, so it needs to be
+     * blocked too.  It's by default true, just be explicit.
+     */
+    qemu_file_set_blocking(file, true);
+    mis->postcopy_qemufile_dst = file;
+    trace_postcopy_preempt_new_channel();
+
+    /* Start the migration immediately */
+    return true;
+}
+
+int postcopy_preempt_setup(MigrationState *s, Error **errp)
+{
+    QIOChannel *ioc;
+
+    if (!migrate_postcopy_preempt()) {
+        return 0;
+    }
+
+    if (!migrate_multi_channels_is_allowed()) {
+        error_setg(errp, "Postcopy preempt is not supported as current "
+                   "migration stream does not support multi-channels.");
+        return -1;
+    }
+
+    ioc = socket_send_channel_create_sync(errp);
+
+    if (ioc == NULL) {
+        return -1;
+    }
+
+    migration_ioc_register_yank(ioc);
+    s->postcopy_qemufile_src = qemu_fopen_channel_output(ioc);
+
+    trace_postcopy_preempt_new_channel();
+
+    return 0;
+}
+
+void *postcopy_preempt_thread(void *opaque)
+{
+    MigrationIncomingState *mis = opaque;
+    int ret;
+
+    trace_postcopy_preempt_thread_entry();
+
+    rcu_register_thread();
+
+    qemu_sem_post(&mis->thread_sync_sem);
+
+    /* Sending RAM_SAVE_FLAG_EOS to terminate this thread */
+    ret = ram_load_postcopy(mis->postcopy_qemufile_dst, RAM_CHANNEL_POSTCOPY);
+
+    rcu_unregister_thread();
+
+    trace_postcopy_preempt_thread_exit();
+
+    return ret == 0 ? NULL : (void *)-1;
+}
diff --git a/migration/postcopy-ram.h b/migration/postcopy-ram.h
index 07684c0e1d..34b1080cde 100644
--- a/migration/postcopy-ram.h
+++ b/migration/postcopy-ram.h
@@ -183,4 +183,14 @@ int postcopy_wake_shared(struct PostCopyFD *pcfd, uint64_t client_addr,
 int postcopy_request_shared_page(struct PostCopyFD *pcfd, RAMBlock *rb,
                                  uint64_t client_addr, uint64_t offset);
 
+/* Hard-code channels for now for postcopy preemption */
+enum PostcopyChannels {
+    RAM_CHANNEL_PRECOPY = 0,
+    RAM_CHANNEL_POSTCOPY = 1,
+    RAM_CHANNEL_MAX,
+};
+
+bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file);
+int postcopy_preempt_setup(MigrationState *s, Error **errp);
+
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index 253fe4b756..c7ea1d9215 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3644,15 +3644,15 @@ int ram_postcopy_incoming_init(MigrationIncomingState *mis)
  * rcu_read_lock is taken prior to this being called.
  *
  * @f: QEMUFile where to send the data
+ * @channel: the channel to use for loading
  */
-int ram_load_postcopy(QEMUFile *f)
+int ram_load_postcopy(QEMUFile *f, int channel)
 {
     int flags = 0, ret = 0;
     bool place_needed = false;
     bool matches_target_page_size = false;
     MigrationIncomingState *mis = migration_incoming_get_current();
-    /* Currently we only use channel 0.  TODO: use all the channels */
-    PostcopyTmpPage *tmp_page = &mis->postcopy_tmp_pages[0];
+    PostcopyTmpPage *tmp_page = &mis->postcopy_tmp_pages[channel];
 
     while (!ret && !(flags & RAM_SAVE_FLAG_EOS)) {
         ram_addr_t addr;
@@ -3676,7 +3676,7 @@ int ram_load_postcopy(QEMUFile *f)
         flags = addr & ~TARGET_PAGE_MASK;
         addr &= TARGET_PAGE_MASK;
 
-        trace_ram_load_postcopy_loop((uint64_t)addr, flags);
+        trace_ram_load_postcopy_loop(channel, (uint64_t)addr, flags);
         if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
                      RAM_SAVE_FLAG_COMPRESS_PAGE)) {
             block = ram_block_from_stream(mis, f, flags);
@@ -3717,10 +3717,10 @@ int ram_load_postcopy(QEMUFile *f)
             } else if (tmp_page->host_addr !=
                        host_page_from_ram_block_offset(block, addr)) {
                 /* not the 1st TP within the HP */
-                error_report("Non-same host page detected.  "
+                error_report("Non-same host page detected on channel %d: "
                              "Target host page %p, received host page %p "
                              "(rb %s offset 0x"RAM_ADDR_FMT" target_pages %d)",
-                             tmp_page->host_addr,
+                             channel, tmp_page->host_addr,
                              host_page_from_ram_block_offset(block, addr),
                              block->idstr, addr, tmp_page->target_pages);
                 ret = -EINVAL;
@@ -4107,7 +4107,12 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
      */
     WITH_RCU_READ_LOCK_GUARD() {
         if (postcopy_running) {
-            ret = ram_load_postcopy(f);
+            /*
+             * Note!  Here RAM_CHANNEL_PRECOPY is the precopy channel of
+             * postcopy migration, we have another RAM_CHANNEL_POSTCOPY to
+             * service fast page faults.
+             */
+            ret = ram_load_postcopy(f, RAM_CHANNEL_PRECOPY);
         } else {
             ret = ram_load_precopy(f);
         }
@@ -4269,6 +4274,12 @@ static int ram_resume_prepare(MigrationState *s, void *opaque)
     return 0;
 }
 
+void postcopy_preempt_shutdown_file(MigrationState *s)
+{
+    qemu_put_be64(s->postcopy_qemufile_src, RAM_SAVE_FLAG_EOS);
+    qemu_fflush(s->postcopy_qemufile_src);
+}
+
 static SaveVMHandlers savevm_ram_handlers = {
     .save_setup = ram_save_setup,
     .save_live_iterate = ram_save_iterate,
diff --git a/migration/ram.h b/migration/ram.h
index ded0a3a086..5d90945a6e 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -61,7 +61,7 @@ void ram_postcopy_send_discard_bitmap(MigrationState *ms);
 /* For incoming postcopy discard */
 int ram_discard_range(const char *block_name, uint64_t start, size_t length);
 int ram_postcopy_incoming_init(MigrationIncomingState *mis);
-int ram_load_postcopy(QEMUFile *f);
+int ram_load_postcopy(QEMUFile *f, int channel);
 
 void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
 
@@ -73,6 +73,8 @@ int64_t ramblock_recv_bitmap_send(QEMUFile *file,
                                   const char *block_name);
 int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *rb);
 bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t start);
+void postcopy_preempt_shutdown_file(MigrationState *s);
+void *postcopy_preempt_thread(void *opaque);
 
 /* ram cache */
 int colo_init_ram_cache(void);
diff --git a/migration/savevm.c b/migration/savevm.c
index d9076897b8..ecee05e631 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2575,16 +2575,6 @@ static bool postcopy_pause_incoming(MigrationIncomingState *mis)
 {
     int i;
 
-    /*
-     * If network is interrupted, any temp page we received will be useless
-     * because we didn't mark them as "received" in receivedmap.  After a
-     * proper recovery later (which will sync src dirty bitmap with receivedmap
-     * on dest) these cached small pages will be resent again.
-     */
-    for (i = 0; i < mis->postcopy_channels; i++) {
-        postcopy_temp_page_reset(&mis->postcopy_tmp_pages[i]);
-    }
-
     trace_postcopy_pause_incoming();
 
     assert(migrate_postcopy_ram());
@@ -2613,6 +2603,16 @@ static bool postcopy_pause_incoming(MigrationIncomingState *mis)
     /* Notify the fault thread for the invalidated file handle */
     postcopy_fault_thread_notify(mis);
 
+    /*
+     * If network is interrupted, any temp page we received will be useless
+     * because we didn't mark them as "received" in receivedmap.  After a
+     * proper recovery later (which will sync src dirty bitmap with receivedmap
+     * on dest) these cached small pages will be resent again.
+     */
+    for (i = 0; i < mis->postcopy_channels; i++) {
+        postcopy_temp_page_reset(&mis->postcopy_tmp_pages[i]);
+    }
+
     error_report("Detected IO failure for postcopy. "
                  "Migration paused.");
 
diff --git a/migration/socket.c b/migration/socket.c
index 05705a32d8..a7f345b353 100644
--- a/migration/socket.c
+++ b/migration/socket.c
@@ -26,7 +26,7 @@
 #include "io/channel-socket.h"
 #include "io/net-listener.h"
 #include "trace.h"
-
+#include "postcopy-ram.h"
 
 struct SocketOutgoingArgs {
     SocketAddress *saddr;
@@ -39,6 +39,24 @@ void socket_send_channel_create(QIOTaskFunc f, void *data)
                                      f, data, NULL, NULL);
 }
 
+QIOChannel *socket_send_channel_create_sync(Error **errp)
+{
+    QIOChannelSocket *sioc = qio_channel_socket_new();
+
+    if (!outgoing_args.saddr) {
+        object_unref(OBJECT(sioc));
+        error_setg(errp, "Initial sock address not set!");
+        return NULL;
+    }
+
+    if (qio_channel_socket_connect_sync(sioc, outgoing_args.saddr, errp) < 0) {
+        object_unref(OBJECT(sioc));
+        return NULL;
+    }
+
+    return QIO_CHANNEL(sioc);
+}
+
 int socket_send_channel_destroy(QIOChannel *send)
 {
     /* Remove channel */
@@ -158,6 +176,8 @@ socket_start_incoming_migration_internal(SocketAddress *saddr,
 
     if (migrate_use_multifd()) {
         num = migrate_multifd_channels();
+    } else if (migrate_postcopy_preempt()) {
+        num = RAM_CHANNEL_MAX;
     }
 
     if (qio_net_listener_open_sync(listener, saddr, num, errp) < 0) {
diff --git a/migration/socket.h b/migration/socket.h
index 891dbccceb..dc54df4e6c 100644
--- a/migration/socket.h
+++ b/migration/socket.h
@@ -21,6 +21,7 @@
 #include "io/task.h"
 
 void socket_send_channel_create(QIOTaskFunc f, void *data);
+QIOChannel *socket_send_channel_create_sync(Error **errp);
 int socket_send_channel_destroy(QIOChannel *send);
 
 void socket_start_incoming_migration(const char *str, Error **errp);
diff --git a/migration/trace-events b/migration/trace-events
index 1aec580e92..1f932782d9 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -91,7 +91,7 @@ migration_bitmap_clear_dirty(char *str, uint64_t start, uint64_t size, unsigned
 migration_throttle(void) ""
 ram_discard_range(const char *rbname, uint64_t start, size_t len) "%s: start: %" PRIx64 " %zx"
 ram_load_loop(const char *rbname, uint64_t addr, int flags, void *host) "%s: addr: 0x%" PRIx64 " flags: 0x%x host: %p"
-ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
+ram_load_postcopy_loop(int channel, uint64_t addr, int flags) "chan=%d addr=%" PRIx64 " flags=%x"
 ram_postcopy_send_discard_bitmap(void) ""
 ram_save_page(const char *rbname, uint64_t offset, void *host) "%s: offset: 0x%" PRIx64 " host: %p"
 ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: 0x%zx len: 0x%zx"
@@ -278,6 +278,9 @@ postcopy_request_shared_page(const char *sharer, const char *rb, uint64_t rb_off
 postcopy_request_shared_page_present(const char *sharer, const char *rb, uint64_t rb_offset) "%s already %s offset 0x%"PRIx64
 postcopy_wake_shared(uint64_t client_addr, const char *rb) "at 0x%"PRIx64" in %s"
 postcopy_page_req_del(void *addr, int count) "resolved page req %p total %d"
+postcopy_preempt_new_channel(void) ""
+postcopy_preempt_thread_entry(void) ""
+postcopy_preempt_thread_exit(void) ""
 
 get_mem_fault_cpu_index(int cpu, uint32_t pid) "cpu: %d, pid: %u"
 
-- 
2.32.0




* [PATCH v4 10/19] migration: Postcopy preemption enablement
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (8 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 09/19] migration: Postcopy preemption preparation on channel creation Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 11:05   ` Daniel P. Berrangé
  2022-05-11 15:54   ` manish.mishra
  2022-03-31 15:08 ` [PATCH v4 11/19] migration: Postcopy recover with preempt enabled Peter Xu
                   ` (9 subsequent siblings)
  19 siblings, 2 replies; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

This patch enables the postcopy-preempt feature.

It contains two major changes to the migration logic:

(1) Postcopy requests are now sent via a different socket from the precopy
    background migration stream, so that they are isolated from very high
    page request delays.

(2) For hosts with huge pages enabled: when there are postcopy requests, they
    can now interrupt a partially sent huge host page on the src QEMU.

After this patch, we'll live migrate a VM with two channels for postcopy: (1)
PRECOPY channel, which is the default channel that transfers background pages;
and (2) POSTCOPY channel, which only transfers requested pages.

There's no strict rule on which channel to use; e.g., if a requested page is
already being transferred on the precopy channel, then we keep using that same
precopy channel to transfer the page even though it's explicitly requested.  In
99% of the cases we prioritize the channels so that requested pages are sent
via the postcopy channel whenever possible.

On the source QEMU, when we find a postcopy request, we interrupt the PRECOPY
channel's sending process and quickly switch to the POSTCOPY channel.  After
all the high priority postcopy pages have been serviced, we switch back to the
PRECOPY channel and continue sending the interrupted huge page.  No new thread
is introduced on the src QEMU.
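
For orientation before diving into the diff, the send-side logic added to
ram_save_host_page() boils down to the following (this is only a condensed
sketch of the hunks below, not separate new code):

    if (migrate_postcopy_preempt() && migration_in_postcopy()) {
        /* Point rs->f at the PRECOPY or POSTCOPY QEMUFile for this page */
        postcopy_preempt_choose_channel(rs, pss);
    }

    do {
        if (postcopy_needs_preempt(rs, pss)) {
            /* Cache pss->block/pss->page, then bail out to serve the request */
            postcopy_do_preempt(rs, pss);
            break;
        }
        /* ...otherwise keep sending target pages of this host page... */
    } while (0 /* more target pages left within the host page */);

    /* Requested pages are flushed immediately on their own channel */
    if (migrate_postcopy_preempt() && pss->postcopy_requested) {
        qemu_fflush(rs->f);
    }

The counterpart postcopy_preempt_restore() in ram_find_and_save_block() then
resumes the interrupted precopy huge page once the request queue drains.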

On the destination QEMU, one new thread is introduced to receive page data from
the postcopy specific socket (done in the preparation patch).

This patch has a side effect: previously, after sending a postcopy page we
assumed the guest would access the following pages, so we kept sending from
there.  Now that has changed: instead of carrying on from a postcopy requested
page, we go back and resume sending the precopy huge page (which may have been
sent only partially before, having been interrupted by a postcopy request).

Whether that's a problem is debatable, because "assuming the guest will
continue to access the next page" may not really hold when huge pages are
used, especially if the huge page is large (e.g. 1GB pages).  That locality
hint is largely meaningless when huge pages are in use.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c  |   2 +
 migration/migration.h  |   2 +-
 migration/ram.c        | 250 +++++++++++++++++++++++++++++++++++++++--
 migration/trace-events |   7 ++
 4 files changed, 252 insertions(+), 9 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 01b882494d..56d54c186b 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3158,6 +3158,8 @@ static int postcopy_start(MigrationState *ms)
                               MIGRATION_STATUS_FAILED);
     }
 
+    trace_postcopy_preempt_enabled(migrate_postcopy_preempt());
+
     return ret;
 
 fail_closefb:
diff --git a/migration/migration.h b/migration/migration.h
index caa910d956..b8aacfe3af 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -68,7 +68,7 @@ typedef struct {
 struct MigrationIncomingState {
     QEMUFile *from_src_file;
     /* Previously received RAM's RAMBlock pointer */
-    RAMBlock *last_recv_block;
+    RAMBlock *last_recv_block[RAM_CHANNEL_MAX];
     /* A hook to allow cleanup at the end of incoming migration */
     void *transport_data;
     void (*transport_cleanup)(void *data);
diff --git a/migration/ram.c b/migration/ram.c
index c7ea1d9215..518d511874 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -295,6 +295,20 @@ struct RAMSrcPageRequest {
     QSIMPLEQ_ENTRY(RAMSrcPageRequest) next_req;
 };
 
+typedef struct {
+    /*
+     * Cached ramblock/offset values if preempted.  They're only meaningful if
+     * preempted==true below.
+     */
+    RAMBlock *ram_block;
+    unsigned long ram_page;
+    /*
+     * Whether a postcopy preemption just happened.  Will be reset after
+     * precopy recovered to background migration.
+     */
+    bool preempted;
+} PostcopyPreemptState;
+
 /* State of RAM for migration */
 struct RAMState {
     /* QEMUFile used for this migration */
@@ -349,6 +363,14 @@ struct RAMState {
     /* Queue of outstanding page requests from the destination */
     QemuMutex src_page_req_mutex;
     QSIMPLEQ_HEAD(, RAMSrcPageRequest) src_page_requests;
+
+    /* Postcopy preemption information */
+    PostcopyPreemptState postcopy_preempt_state;
+    /*
+     * Current channel we're using on src VM.  Only valid if postcopy-preempt
+     * is enabled.
+     */
+    unsigned int postcopy_channel;
 };
 typedef struct RAMState RAMState;
 
@@ -356,6 +378,11 @@ static RAMState *ram_state;
 
 static NotifierWithReturnList precopy_notifier_list;
 
+static void postcopy_preempt_reset(RAMState *rs)
+{
+    memset(&rs->postcopy_preempt_state, 0, sizeof(PostcopyPreemptState));
+}
+
 /* Whether postcopy has queued requests? */
 static bool postcopy_has_request(RAMState *rs)
 {
@@ -1947,6 +1974,55 @@ void ram_write_tracking_stop(void)
 }
 #endif /* defined(__linux__) */
 
+/*
+ * Check whether two addresses/offsets of the ramblock fall onto the same
+ * host huge page.  Returns true if so, false otherwise.
+ */
+static bool offset_on_same_huge_page(RAMBlock *rb, uint64_t addr1,
+                                     uint64_t addr2)
+{
+    size_t page_size = qemu_ram_pagesize(rb);
+
+    addr1 = ROUND_DOWN(addr1, page_size);
+    addr2 = ROUND_DOWN(addr2, page_size);
+
+    return addr1 == addr2;
+}
+
+/*
+ * Check whether a previously preempted precopy huge page contains the
+ * currently requested page.  Returns true if so, false otherwise.
+ *
+ * This should happen very rarely, because it means that during background
+ * migration we were sending exactly the page that some vcpu on the dest node
+ * faulted on.  When it happens, we probably don't need to do much but drop
+ * the request, because we know that right after we restore the precopy
+ * stream it'll be serviced.  It'll slightly affect the order in which
+ * postcopy requests are serviced (e.g. it's the same as moving the current
+ * request to the end of the queue), but that shouldn't be a big deal.  The
+ * most important thing is that we can _never_ try to send a partially sent
+ * huge page on the POSTCOPY channel again, otherwise that huge page would
+ * get "split brain" across the two channels (PRECOPY, POSTCOPY).
+ */
+static bool postcopy_preempted_contains(RAMState *rs, RAMBlock *block,
+                                        ram_addr_t offset)
+{
+    PostcopyPreemptState *state = &rs->postcopy_preempt_state;
+
+    /* No preemption at all? */
+    if (!state->preempted) {
+        return false;
+    }
+
+    /* Not even the same ramblock? */
+    if (state->ram_block != block) {
+        return false;
+    }
+
+    return offset_on_same_huge_page(block, offset,
+                                    state->ram_page << TARGET_PAGE_BITS);
+}
+
 /**
  * get_queued_page: unqueue a page from the postcopy requests
  *
@@ -1962,9 +2038,17 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
     RAMBlock  *block;
     ram_addr_t offset;
 
+again:
     block = unqueue_page(rs, &offset);
 
-    if (!block) {
+    if (block) {
+        /* See comment above postcopy_preempted_contains() */
+        if (postcopy_preempted_contains(rs, block, offset)) {
+            trace_postcopy_preempt_hit(block->idstr, offset);
+            /* This request is dropped */
+            goto again;
+        }
+    } else {
         /*
          * Poll write faults too if background snapshot is enabled; that's
          * when we have vcpus got blocked by the write protected pages.
@@ -2180,6 +2264,117 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
     return ram_save_page(rs, pss);
 }
 
+static bool postcopy_needs_preempt(RAMState *rs, PageSearchStatus *pss)
+{
+    /* Not enabled eager preempt?  Then never do that. */
+    if (!migrate_postcopy_preempt()) {
+        return false;
+    }
+
+    /* If the ramblock we're sending is a small page?  Never bother. */
+    if (qemu_ram_pagesize(pss->block) == TARGET_PAGE_SIZE) {
+        return false;
+    }
+
+    /* Not in postcopy at all? */
+    if (!migration_in_postcopy()) {
+        return false;
+    }
+
+    /*
+     * If we're already handling a postcopy request, don't preempt as this
+     * page has the same high priority.
+     */
+    if (pss->postcopy_requested) {
+        return false;
+    }
+
+    /* If there are postcopy requests, then check them! */
+    return postcopy_has_request(rs);
+}
+
+/* Preempt precopy, and cache the current state so it can be restored later */
+static void postcopy_do_preempt(RAMState *rs, PageSearchStatus *pss)
+{
+    PostcopyPreemptState *p_state = &rs->postcopy_preempt_state;
+
+    trace_postcopy_preempt_triggered(pss->block->idstr, pss->page);
+
+    /*
+     * Time to preempt precopy.  Cache the current PSS into the preempt state
+     * so that we can return to it after handling the postcopy pages.  We need
+     * to do so because the dest VM keeps the partially received precopy huge
+     * page in its tmp huge page cache; better to finish it as soon as we can.
+     */
+    p_state->ram_block = pss->block;
+    p_state->ram_page = pss->page;
+    p_state->preempted = true;
+}
+
+/* Whether we were preempted by a postcopy request while sending a huge page */
+static bool postcopy_preempt_triggered(RAMState *rs)
+{
+    return rs->postcopy_preempt_state.preempted;
+}
+
+static void postcopy_preempt_restore(RAMState *rs, PageSearchStatus *pss)
+{
+    PostcopyPreemptState *state = &rs->postcopy_preempt_state;
+
+    assert(state->preempted);
+
+    pss->block = state->ram_block;
+    pss->page = state->ram_page;
+    /* This is not a postcopy request but restoring previous precopy */
+    pss->postcopy_requested = false;
+
+    trace_postcopy_preempt_restored(pss->block->idstr, pss->page);
+
+    /* Reset preempt state, most importantly, set preempted==false */
+    postcopy_preempt_reset(rs);
+}
+
+static void postcopy_preempt_choose_channel(RAMState *rs, PageSearchStatus *pss)
+{
+    MigrationState *s = migrate_get_current();
+    unsigned int channel;
+    QEMUFile *next;
+
+    channel = pss->postcopy_requested ?
+        RAM_CHANNEL_POSTCOPY : RAM_CHANNEL_PRECOPY;
+
+    if (channel != rs->postcopy_channel) {
+        if (channel == RAM_CHANNEL_PRECOPY) {
+            next = s->to_dst_file;
+        } else {
+            next = s->postcopy_qemufile_src;
+        }
+        /* Update and cache the current channel */
+        rs->f = next;
+        rs->postcopy_channel = channel;
+
+        /*
+         * If channel switched, reset last_sent_block since the old sent block
+         * may not be on the same channel.
+         */
+        rs->last_sent_block = NULL;
+
+        trace_postcopy_preempt_switch_channel(channel);
+    }
+
+    trace_postcopy_preempt_send_host_page(pss->block->idstr, pss->page);
+}
+
+/* We need to make sure rs->f always points to the default channel elsewhere */
+static void postcopy_preempt_reset_channel(RAMState *rs)
+{
+    if (migrate_postcopy_preempt() && migration_in_postcopy()) {
+        rs->postcopy_channel = RAM_CHANNEL_PRECOPY;
+        rs->f = migrate_get_current()->to_dst_file;
+        trace_postcopy_preempt_reset_channel();
+    }
+}
+
 /**
  * ram_save_host_page: save a whole host page
  *
@@ -2211,7 +2406,16 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
         return 0;
     }
 
+    if (migrate_postcopy_preempt() && migration_in_postcopy()) {
+        postcopy_preempt_choose_channel(rs, pss);
+    }
+
     do {
+        if (postcopy_needs_preempt(rs, pss)) {
+            postcopy_do_preempt(rs, pss);
+            break;
+        }
+
         /* Check the pages is dirty and if it is send it */
         if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
             tmppages = ram_save_target_page(rs, pss);
@@ -2235,6 +2439,19 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
     /* The offset we leave with is the min boundary of host page and block */
     pss->page = MIN(pss->page, hostpage_boundary);
 
+    /*
+     * In postcopy preempt mode, flush the data as soon as possible for
+     * postcopy requests: at this point we have sent the whole huge page, so
+     * the dst node should already have enough resources to atomically fill
+     * in the currently missing page.
+     *
+     * More importantly, when using the separate postcopy channel, we must do
+     * an explicit flush or the data won't be flushed until the buffer is full.
+     */
+    if (migrate_postcopy_preempt() && pss->postcopy_requested) {
+        qemu_fflush(rs->f);
+    }
+
     res = ram_save_release_protection(rs, pss, start_page);
     return (res < 0 ? res : pages);
 }
@@ -2276,8 +2493,17 @@ static int ram_find_and_save_block(RAMState *rs)
         found = get_queued_page(rs, &pss);
 
         if (!found) {
-            /* priority queue empty, so just search for something dirty */
-            found = find_dirty_block(rs, &pss, &again);
+            /*
+             * Recover previous precopy ramblock/offset if postcopy has
+             * preempted precopy.  Otherwise find the next dirty bit.
+             */
+            if (postcopy_preempt_triggered(rs)) {
+                postcopy_preempt_restore(rs, &pss);
+                found = true;
+            } else {
+                /* priority queue empty, so just search for something dirty */
+                found = find_dirty_block(rs, &pss, &again);
+            }
         }
 
         if (found) {
@@ -2405,6 +2631,8 @@ static void ram_state_reset(RAMState *rs)
     rs->last_page = 0;
     rs->last_version = ram_list.version;
     rs->xbzrle_enabled = false;
+    postcopy_preempt_reset(rs);
+    rs->postcopy_channel = RAM_CHANNEL_PRECOPY;
 }
 
 #define MAX_WAIT 50 /* ms, half buffered_file limit */
@@ -3043,6 +3271,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     }
     qemu_mutex_unlock(&rs->bitmap_mutex);
 
+    postcopy_preempt_reset_channel(rs);
+
     /*
      * Must occur before EOS (or any QEMUFile operation)
      * because of RDMA protocol.
@@ -3112,6 +3342,8 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         ram_control_after_iterate(f, RAM_CONTROL_FINISH);
     }
 
+    postcopy_preempt_reset_channel(rs);
+
     if (ret >= 0) {
         multifd_send_sync_main(rs->f);
         qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
@@ -3194,11 +3426,13 @@ static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
  * @mis: the migration incoming state pointer
  * @f: QEMUFile where to read the data from
  * @flags: Page flags (mostly to see if it's a continuation of previous block)
+ * @channel: the channel we're using
  */
 static inline RAMBlock *ram_block_from_stream(MigrationIncomingState *mis,
-                                              QEMUFile *f, int flags)
+                                              QEMUFile *f, int flags,
+                                              int channel)
 {
-    RAMBlock *block = mis->last_recv_block;
+    RAMBlock *block = mis->last_recv_block[channel];
     char id[256];
     uint8_t len;
 
@@ -3225,7 +3459,7 @@ static inline RAMBlock *ram_block_from_stream(MigrationIncomingState *mis,
         return NULL;
     }
 
-    mis->last_recv_block = block;
+    mis->last_recv_block[channel] = block;
 
     return block;
 }
@@ -3679,7 +3913,7 @@ int ram_load_postcopy(QEMUFile *f, int channel)
         trace_ram_load_postcopy_loop(channel, (uint64_t)addr, flags);
         if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
                      RAM_SAVE_FLAG_COMPRESS_PAGE)) {
-            block = ram_block_from_stream(mis, f, flags);
+            block = ram_block_from_stream(mis, f, flags, channel);
             if (!block) {
                 ret = -EINVAL;
                 break;
@@ -3930,7 +4164,7 @@ static int ram_load_precopy(QEMUFile *f)
 
         if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
                      RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
-            RAMBlock *block = ram_block_from_stream(mis, f, flags);
+            RAMBlock *block = ram_block_from_stream(mis, f, flags, RAM_CHANNEL_PRECOPY);
 
             host = host_from_ram_block_offset(block, addr);
             /*
diff --git a/migration/trace-events b/migration/trace-events
index 1f932782d9..f92793b5f5 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -111,6 +111,12 @@ ram_load_complete(int ret, uint64_t seq_iter) "exit_code %d seq iteration %" PRI
 ram_write_tracking_ramblock_start(const char *block_id, size_t page_size, void *addr, size_t length) "%s: page_size: %zu addr: %p length: %zu"
 ram_write_tracking_ramblock_stop(const char *block_id, size_t page_size, void *addr, size_t length) "%s: page_size: %zu addr: %p length: %zu"
 unqueue_page(char *block, uint64_t offset, bool dirty) "ramblock '%s' offset 0x%"PRIx64" dirty %d"
+postcopy_preempt_triggered(char *str, unsigned long page) "during sending ramblock %s offset 0x%lx"
+postcopy_preempt_restored(char *str, unsigned long page) "ramblock %s offset 0x%lx"
+postcopy_preempt_hit(char *str, uint64_t offset) "ramblock %s offset 0x%"PRIx64
+postcopy_preempt_send_host_page(char *str, uint64_t offset) "ramblock %s offset 0x%"PRIx64
+postcopy_preempt_switch_channel(int channel) "%d"
+postcopy_preempt_reset_channel(void) ""
 
 # multifd.c
 multifd_new_send_channel_async(uint8_t id) "channel %u"
@@ -176,6 +182,7 @@ migration_thread_low_pending(uint64_t pending) "%" PRIu64
 migrate_transferred(uint64_t tranferred, uint64_t time_spent, uint64_t bandwidth, uint64_t size) "transferred %" PRIu64 " time_spent %" PRIu64 " bandwidth %" PRIu64 " max_size %" PRId64
 process_incoming_migration_co_end(int ret, int ps) "ret=%d postcopy-state=%d"
 process_incoming_migration_co_postcopy_end_main(void) ""
+postcopy_preempt_enabled(bool value) "%d"
 
 # channel.c
 migration_set_incoming_channel(void *ioc, const char *ioctype) "ioc=%p ioctype=%s"
-- 
2.32.0




* [PATCH v4 11/19] migration: Postcopy recover with preempt enabled
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (9 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 10/19] migration: Postcopy preemption enablement Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-03-31 15:08 ` [PATCH v4 12/19] migration: Create the postcopy preempt channel asynchronously Peter Xu
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

To allow postcopy recovery, the ram fast load (preempt-only) dest QEMU thread
needs similar fault tolerance handling.  When ram_load_postcopy() fails,
instead of quitting, the thread halts on a semaphore, waiting to be kicked
again when recovery is detected.

A mutex is introduced to make sure there's no concurrent operation on the
socket.  To keep it simple, the fast ram load thread holds the mutex for its
whole lifetime and only releases it while paused.  When a network failure
happens during postcopy, the fast-path socket is safely released by the main
loading thread with that mutex held.
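
In other words, the preempt thread's main loop becomes the following (a
condensed version of the postcopy-ram.c hunk below, shown here only for
readability):

    qemu_mutex_lock(&mis->postcopy_prio_thread_mutex);
    while (1) {
        ret = ram_load_postcopy(mis->postcopy_qemufile_dst, RAM_CHANNEL_POSTCOPY);
        if (!ret) {
            break;      /* clean end of stream; we're done */
        }
        /*
         * Error: drop the mutex so the main thread can reclaim the broken
         * channel, then sleep until recovery kicks the semaphore.
         */
        qemu_mutex_unlock(&mis->postcopy_prio_thread_mutex);
        qemu_sem_wait(&mis->postcopy_pause_sem_fast_load);
        qemu_mutex_lock(&mis->postcopy_prio_thread_mutex);
    }
    qemu_mutex_unlock(&mis->postcopy_prio_thread_mutex);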

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c    | 27 +++++++++++++++++++++++----
 migration/migration.h    | 19 +++++++++++++++++++
 migration/postcopy-ram.c | 24 ++++++++++++++++++++++--
 migration/qemu-file.c    | 27 +++++++++++++++++++++++++++
 migration/qemu-file.h    |  1 +
 migration/savevm.c       | 26 ++++++++++++++++++++++++--
 migration/trace-events   |  2 ++
 7 files changed, 118 insertions(+), 8 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 56d54c186b..157a34c844 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -215,9 +215,11 @@ void migration_object_init(void)
     current_incoming->postcopy_remote_fds =
         g_array_new(FALSE, TRUE, sizeof(struct PostCopyFD));
     qemu_mutex_init(&current_incoming->rp_mutex);
+    qemu_mutex_init(&current_incoming->postcopy_prio_thread_mutex);
     qemu_event_init(&current_incoming->main_thread_load_event, false);
     qemu_sem_init(&current_incoming->postcopy_pause_sem_dst, 0);
     qemu_sem_init(&current_incoming->postcopy_pause_sem_fault, 0);
+    qemu_sem_init(&current_incoming->postcopy_pause_sem_fast_load, 0);
     qemu_mutex_init(&current_incoming->page_request_mutex);
     current_incoming->page_requested = g_tree_new(page_request_addr_cmp);
 
@@ -697,9 +699,9 @@ static bool postcopy_try_recover(void)
 
         /*
          * Here, we only wake up the main loading thread (while the
-         * fault thread will still be waiting), so that we can receive
+         * remaining threads will still be waiting), so that we can receive
          * commands from source now, and answer it if needed. The
-         * fault thread will be woken up afterwards until we are sure
+         * remaining threads will be woken up afterwards once we are sure
          * that source is ready to reply to page requests.
          */
         qemu_sem_post(&mis->postcopy_pause_sem_dst);
@@ -3471,6 +3473,18 @@ static MigThrError postcopy_pause(MigrationState *s)
         qemu_file_shutdown(file);
         qemu_fclose(file);
 
+        /*
+         * Do the same to the postcopy fast-path socket too, if there is one.
+         * No locking is needed because there can be no racer as long as we
+         * do this before setting the status to paused.
+         */
+        if (s->postcopy_qemufile_src) {
+            migration_ioc_unregister_yank_from_file(s->postcopy_qemufile_src);
+            qemu_file_shutdown(s->postcopy_qemufile_src);
+            qemu_fclose(s->postcopy_qemufile_src);
+            s->postcopy_qemufile_src = NULL;
+        }
+
         migrate_set_state(&s->state, s->state,
                           MIGRATION_STATUS_POSTCOPY_PAUSED);
 
@@ -3526,8 +3540,13 @@ static MigThrError migration_detect_error(MigrationState *s)
         return MIG_THR_ERR_FATAL;
     }
 
-    /* Try to detect any file errors */
-    ret = qemu_file_get_error_obj(s->to_dst_file, &local_error);
+    /*
+     * Try to detect any file errors.  Note that postcopy_qemufile_src will
+     * be NULL when postcopy preempt is not enabled.
+     */
+    ret = qemu_file_get_error_obj_any(s->to_dst_file,
+                                      s->postcopy_qemufile_src,
+                                      &local_error);
     if (!ret) {
         /* Everything is fine */
         assert(!local_error);
diff --git a/migration/migration.h b/migration/migration.h
index b8aacfe3af..91f845e9e4 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -118,6 +118,18 @@ struct MigrationIncomingState {
     /* Postcopy priority thread is used to receive postcopy requested pages */
     QemuThread postcopy_prio_thread;
     bool postcopy_prio_thread_created;
+    /*
+     * Used to sync between the ram load main thread and the fast ram load
+     * thread.  It protects postcopy_qemufile_dst, which is the postcopy
+     * fast channel.
+     *
+     * The ram fast load thread holds it for most of its lifecycle, because
+     * it needs to continuously read data from the channel; it only releases
+     * this mutex when postcopy is interrupted, so that the ram load main
+     * thread can take the mutex over and properly release the broken
+     * channel.
+     */
+    QemuMutex postcopy_prio_thread_mutex;
     /*
      * An array of temp host huge pages to be used, one for each postcopy
      * channel.
@@ -147,6 +159,13 @@ struct MigrationIncomingState {
     /* notify PAUSED postcopy incoming migrations to try to continue */
     QemuSemaphore postcopy_pause_sem_dst;
     QemuSemaphore postcopy_pause_sem_fault;
+    /*
+     * This semaphore is used to let the ram fast load thread (only present
+     * when postcopy preempt is enabled) go to sleep when a network
+     * interruption is detected.  When the recovery is done, the main load
+     * thread kicks the fast ram load thread using this semaphore.
+     */
+    QemuSemaphore postcopy_pause_sem_fast_load;
 
     /* List of listening socket addresses  */
     SocketAddressList *socket_address_list;
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index df0c02f729..e20305a9e2 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -1580,6 +1580,15 @@ int postcopy_preempt_setup(MigrationState *s, Error **errp)
     return 0;
 }
 
+static void postcopy_pause_ram_fast_load(MigrationIncomingState *mis)
+{
+    trace_postcopy_pause_fast_load();
+    qemu_mutex_unlock(&mis->postcopy_prio_thread_mutex);
+    qemu_sem_wait(&mis->postcopy_pause_sem_fast_load);
+    qemu_mutex_lock(&mis->postcopy_prio_thread_mutex);
+    trace_postcopy_pause_fast_load_continued();
+}
+
 void *postcopy_preempt_thread(void *opaque)
 {
     MigrationIncomingState *mis = opaque;
@@ -1592,11 +1601,22 @@ void *postcopy_preempt_thread(void *opaque)
     qemu_sem_post(&mis->thread_sync_sem);
 
     /* Sending RAM_SAVE_FLAG_EOS to terminate this thread */
-    ret = ram_load_postcopy(mis->postcopy_qemufile_dst, RAM_CHANNEL_POSTCOPY);
+    qemu_mutex_lock(&mis->postcopy_prio_thread_mutex);
+    while (1) {
+        ret = ram_load_postcopy(mis->postcopy_qemufile_dst, RAM_CHANNEL_POSTCOPY);
+        /* If error happened, go into recovery routine */
+        if (ret) {
+            postcopy_pause_ram_fast_load(mis);
+        } else {
+            /* We're done */
+            break;
+        }
+    }
+    qemu_mutex_unlock(&mis->postcopy_prio_thread_mutex);
 
     rcu_unregister_thread();
 
     trace_postcopy_preempt_thread_exit();
 
-    return ret == 0 ? NULL : (void *)-1;
+    return NULL;
 }
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 1479cddad9..397652f0ba 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -139,6 +139,33 @@ int qemu_file_get_error_obj(QEMUFile *f, Error **errp)
     return f->last_error;
 }
 
+/*
+ * Get the last error for either stream f1 or f2, with an optional Error*.
+ * The (non-zero) error returned can be from either f1 or f2.
+ *
+ * If either QEMUFile is NULL, the check on that file is skipped.
+ *
+ * When there is no error on either QEMUFile, zero is returned.
+ */
+int qemu_file_get_error_obj_any(QEMUFile *f1, QEMUFile *f2, Error **errp)
+{
+    int ret = 0;
+
+    if (f1) {
+        ret = qemu_file_get_error_obj(f1, errp);
+        /* If there's already error detected, return */
+        if (ret) {
+            return ret;
+        }
+    }
+
+    if (f2) {
+        ret = qemu_file_get_error_obj(f2, errp);
+    }
+
+    return ret;
+}
+
 /*
  * Set the last error for stream f with optional Error*
  */
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index 3f36d4dc8c..2564e5e1c7 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -156,6 +156,7 @@ void qemu_file_update_transfer(QEMUFile *f, int64_t len);
 void qemu_file_set_rate_limit(QEMUFile *f, int64_t new_rate);
 int64_t qemu_file_get_rate_limit(QEMUFile *f);
 int qemu_file_get_error_obj(QEMUFile *f, Error **errp);
+int qemu_file_get_error_obj_any(QEMUFile *f1, QEMUFile *f2, Error **errp);
 void qemu_file_set_error_obj(QEMUFile *f, int ret, Error *err);
 void qemu_file_set_error(QEMUFile *f, int ret);
 int qemu_file_shutdown(QEMUFile *f);
diff --git a/migration/savevm.c b/migration/savevm.c
index ecee05e631..050874650a 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2152,6 +2152,13 @@ static int loadvm_postcopy_handle_resume(MigrationIncomingState *mis)
      */
     qemu_sem_post(&mis->postcopy_pause_sem_fault);
 
+    if (migrate_postcopy_preempt()) {
+        /* The channel should already be set up again; make sure of it */
+        assert(mis->postcopy_qemufile_dst);
+        /* Kick the fast ram load thread too */
+        qemu_sem_post(&mis->postcopy_pause_sem_fast_load);
+    }
+
     return 0;
 }
 
@@ -2597,6 +2604,21 @@ static bool postcopy_pause_incoming(MigrationIncomingState *mis)
     mis->to_src_file = NULL;
     qemu_mutex_unlock(&mis->rp_mutex);
 
+    /*
+     * NOTE: this must happen before resetting the PostcopyTmpPages below;
+     * otherwise it would be racy to reset those fields while the fast load
+     * thread may still be accessing them in parallel.
+     */
+    if (mis->postcopy_qemufile_dst) {
+        qemu_file_shutdown(mis->postcopy_qemufile_dst);
+        /* Take the mutex to make sure the fast ram load thread has halted */
+        qemu_mutex_lock(&mis->postcopy_prio_thread_mutex);
+        migration_ioc_unregister_yank_from_file(mis->postcopy_qemufile_dst);
+        qemu_fclose(mis->postcopy_qemufile_dst);
+        mis->postcopy_qemufile_dst = NULL;
+        qemu_mutex_unlock(&mis->postcopy_prio_thread_mutex);
+    }
+
     migrate_set_state(&mis->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
                       MIGRATION_STATUS_POSTCOPY_PAUSED);
 
@@ -2634,8 +2656,8 @@ retry:
     while (true) {
         section_type = qemu_get_byte(f);
 
-        if (qemu_file_get_error(f)) {
-            ret = qemu_file_get_error(f);
+        ret = qemu_file_get_error_obj_any(f, mis->postcopy_qemufile_dst, NULL);
+        if (ret) {
             break;
         }
 
diff --git a/migration/trace-events b/migration/trace-events
index f92793b5f5..b21d5f371f 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -270,6 +270,8 @@ mark_postcopy_blocktime_begin(uint64_t addr, void *dd, uint32_t time, int cpu, i
 mark_postcopy_blocktime_end(uint64_t addr, void *dd, uint32_t time, int affected_cpu) "addr: 0x%" PRIx64 ", dd: %p, time: %u, affected_cpu: %d"
 postcopy_pause_fault_thread(void) ""
 postcopy_pause_fault_thread_continued(void) ""
+postcopy_pause_fast_load(void) ""
+postcopy_pause_fast_load_continued(void) ""
 postcopy_ram_fault_thread_entry(void) ""
 postcopy_ram_fault_thread_exit(void) ""
 postcopy_ram_fault_thread_fds_core(int baseufd, int quitfd) "ufd: %d quitfd: %d"
-- 
2.32.0




* [PATCH v4 12/19] migration: Create the postcopy preempt channel asynchronously
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (10 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 11/19] migration: Postcopy recover with preempt enabled Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-03-31 15:08 ` [PATCH v4 13/19] migration: Parameter x-postcopy-preempt-break-huge Peter Xu
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

This patch allows the postcopy preempt channel to be created
asynchronously.  The benefit is that when the connection is slow, we won't
hold the BQL (and potentially block things like QMP) for a long time
without releasing it.

A function postcopy_preempt_wait_channel() is introduced, allowing the
migration thread to wait for the channel creation.  The channel is always
created by the main thread, which posts a semaphore to tell the migration
thread that the channel has been created.

We'll need to wait for the new channel in two places: (1) when there's a
new postcopy migration that is starting, or (2) when there's a postcopy
migration to resume.

For the start of migration, we don't need to wait for this channel until we
want to start postcopy, i.e. in postcopy_start().  We fail the migration if
we find that the channel creation failed (which should probably not happen
at all in 99% of the cases, because the main channel uses the same network
topology).

For a postcopy recovery, we need to wait in postcopy_pause().  In that case,
if the channel creation failed, we can't fail the migration or we'd crash the
VM; instead we stay in the PAUSED state, waiting for yet another recovery.
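
The resulting post()/wait() pairing looks like this (condensed from the hunks
below, with error propagation and refcounting trimmed):

    /* Main thread, in the async connect completion callback */
    if (qio_task_propagate_error(task, &local_err)) {
        migrate_set_error(s, local_err);  /* postcopy_qemufile_src stays NULL */
        error_free(local_err);
    } else {
        s->postcopy_qemufile_src = qemu_fopen_channel_output(ioc);
    }
    qemu_sem_post(&s->postcopy_qemufile_src_sem);  /* wake the waiter either way */

    /* Migration thread, before starting or resuming postcopy */
    qemu_sem_wait(&s->postcopy_qemufile_src_sem);
    if (!s->postcopy_qemufile_src) {
        /* start: fail the migration; recovery: loop back and stay PAUSED */
    }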

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c    | 16 ++++++++++++
 migration/migration.h    |  7 +++++
 migration/postcopy-ram.c | 56 +++++++++++++++++++++++++++++++---------
 migration/postcopy-ram.h |  1 +
 4 files changed, 68 insertions(+), 12 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 157a34c844..33faa0ff6e 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3021,6 +3021,12 @@ static int postcopy_start(MigrationState *ms)
     int64_t bandwidth = migrate_max_postcopy_bandwidth();
     bool restart_block = false;
     int cur_state = MIGRATION_STATUS_ACTIVE;
+
+    if (postcopy_preempt_wait_channel(ms)) {
+        migrate_set_state(&ms->state, ms->state, MIGRATION_STATUS_FAILED);
+        return -1;
+    }
+
     if (!migrate_pause_before_switchover()) {
         migrate_set_state(&ms->state, MIGRATION_STATUS_ACTIVE,
                           MIGRATION_STATUS_POSTCOPY_ACTIVE);
@@ -3502,6 +3508,14 @@ static MigThrError postcopy_pause(MigrationState *s)
         if (s->state == MIGRATION_STATUS_POSTCOPY_RECOVER) {
             /* Woken up by a recover procedure. Give it a shot */
 
+            if (postcopy_preempt_wait_channel(s)) {
+                /*
+                 * Preempt enabled, and new channel create failed; loop
+                 * back to wait for another recovery.
+                 */
+                continue;
+            }
+
             /*
              * Firstly, let's wake up the return path now, with a new
              * return path channel.
@@ -4361,6 +4375,7 @@ static void migration_instance_finalize(Object *obj)
     qemu_sem_destroy(&ms->postcopy_pause_sem);
     qemu_sem_destroy(&ms->postcopy_pause_rp_sem);
     qemu_sem_destroy(&ms->rp_state.rp_sem);
+    qemu_sem_destroy(&ms->postcopy_qemufile_src_sem);
     error_free(ms->error);
 }
 
@@ -4407,6 +4422,7 @@ static void migration_instance_init(Object *obj)
     qemu_sem_init(&ms->rp_state.rp_sem, 0);
     qemu_sem_init(&ms->rate_limit_sem, 0);
     qemu_sem_init(&ms->wait_unplug_sem, 0);
+    qemu_sem_init(&ms->postcopy_qemufile_src_sem, 0);
     qemu_mutex_init(&ms->qemu_file_lock);
 }
 
diff --git a/migration/migration.h b/migration/migration.h
index 91f845e9e4..f898b8547a 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -219,6 +219,13 @@ struct MigrationState {
     QEMUFile *to_dst_file;
     /* Postcopy specific transfer channel */
     QEMUFile *postcopy_qemufile_src;
+    /*
+     * It is posted when the preempt channel is established.  Note: this is
+     * used for both the start and the recovery of a postcopy migration.  We
+     * post to this sem every time a new preempt channel is created in the
+     * main thread, and post() and wait() are kept in pairs.
+     */
+    QemuSemaphore postcopy_qemufile_src_sem;
     QIOChannelBuffer *bioc;
     /*
      * Protects to_dst_file/from_dst_file pointers.  We need to make sure we
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index e20305a9e2..ab2a50cf45 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -1552,10 +1552,50 @@ bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file)
     return true;
 }
 
-int postcopy_preempt_setup(MigrationState *s, Error **errp)
+static void
+postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
 {
-    QIOChannel *ioc;
+    MigrationState *s = opaque;
+    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
+    Error *local_err = NULL;
+
+    if (qio_task_propagate_error(task, &local_err)) {
+        /* Something wrong happened.. */
+        migrate_set_error(s, local_err);
+        error_free(local_err);
+    } else {
+        migration_ioc_register_yank(ioc);
+        s->postcopy_qemufile_src = qemu_fopen_channel_output(ioc);
+        trace_postcopy_preempt_new_channel();
+    }
+
+    /*
+     * Kick the waiter in all cases.  The waiter should check upon
+     * postcopy_qemufile_src to know whether it failed or not.
+     */
+    qemu_sem_post(&s->postcopy_qemufile_src_sem);
+    object_unref(OBJECT(ioc));
+}
 
+/* Returns 0 if channel established, -1 for error. */
+int postcopy_preempt_wait_channel(MigrationState *s)
+{
+    /* If preempt not enabled, no need to wait */
+    if (!migrate_postcopy_preempt()) {
+        return 0;
+    }
+
+    /*
+     * We need the postcopy preempt channel to be established before
+     * we start doing anything.
+     */
+    qemu_sem_wait(&s->postcopy_qemufile_src_sem);
+
+    return s->postcopy_qemufile_src ? 0 : -1;
+}
+
+int postcopy_preempt_setup(MigrationState *s, Error **errp)
+{
     if (!migrate_postcopy_preempt()) {
         return 0;
     }
@@ -1566,16 +1606,8 @@ int postcopy_preempt_setup(MigrationState *s, Error **errp)
         return -1;
     }
 
-    ioc = socket_send_channel_create_sync(errp);
-
-    if (ioc == NULL) {
-        return -1;
-    }
-
-    migration_ioc_register_yank(ioc);
-    s->postcopy_qemufile_src = qemu_fopen_channel_output(ioc);
-
-    trace_postcopy_preempt_new_channel();
+    /* Kick an async task to connect */
+    socket_send_channel_create(postcopy_preempt_send_channel_new, s);
 
     return 0;
 }
diff --git a/migration/postcopy-ram.h b/migration/postcopy-ram.h
index 34b1080cde..6147bf7d1d 100644
--- a/migration/postcopy-ram.h
+++ b/migration/postcopy-ram.h
@@ -192,5 +192,6 @@ enum PostcopyChannels {
 
 bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file);
 int postcopy_preempt_setup(MigrationState *s, Error **errp);
+int postcopy_preempt_wait_channel(MigrationState *s);
 
 #endif
-- 
2.32.0




* [PATCH v4 13/19] migration: Parameter x-postcopy-preempt-break-huge
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (11 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 12/19] migration: Create the postcopy preempt channel asynchronously Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-03-31 15:08 ` [PATCH v4 14/19] migration: Add helpers to detect TLS capability Peter Xu
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

Add a parameter that can conditionally disable the "break sending huge
page" behavior in postcopy preemption.  By default it's enabled.

It should only be used for debugging purposes, and we should never remove
the "x-" prefix.
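
For reference, assuming the usual -global syntax for migration object
properties applies here as well, the knob can be flipped on the command line
with something like:

    -global migration.x-postcopy-preempt-break-huge=off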

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 2 ++
 migration/migration.h | 7 +++++++
 migration/ram.c       | 7 +++++++
 3 files changed, 16 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index 33faa0ff6e..ee3df9e229 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -4330,6 +4330,8 @@ static Property migration_properties[] = {
     DEFINE_PROP_SIZE("announce-step", MigrationState,
                       parameters.announce_step,
                       DEFAULT_MIGRATE_ANNOUNCE_STEP),
+    DEFINE_PROP_BOOL("x-postcopy-preempt-break-huge", MigrationState,
+                      postcopy_preempt_break_huge, true),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
diff --git a/migration/migration.h b/migration/migration.h
index f898b8547a..6ee520642f 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -340,6 +340,13 @@ struct MigrationState {
     bool send_configuration;
     /* Whether we send section footer during migration */
     bool send_section_footer;
+    /*
+     * Whether we allow breaking a huge page send when postcopy preempt is
+     * enabled.  When disabled, we won't interrupt precopy in the middle of
+     * sending a host huge page, which is the old behavior of vanilla postcopy.
+     * NOTE: this parameter is ignored if postcopy preempt is not enabled.
+     */
+    bool postcopy_preempt_break_huge;
 
     /* Needed by postcopy-pause state */
     QemuSemaphore postcopy_pause_sem;
diff --git a/migration/ram.c b/migration/ram.c
index 518d511874..3400cde6e9 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2266,11 +2266,18 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
 
 static bool postcopy_needs_preempt(RAMState *rs, PageSearchStatus *pss)
 {
+    MigrationState *ms = migrate_get_current();
+
     /* Not enabled eager preempt?  Then never do that. */
     if (!migrate_postcopy_preempt()) {
         return false;
     }
 
+    /* If the user explicitly disabled breaking of huge pages, skip */
+    if (!ms->postcopy_preempt_break_huge) {
+        return false;
+    }
+
     /* If the ramblock we're sending is a small page?  Never bother. */
     if (qemu_ram_pagesize(pss->block) == TARGET_PAGE_SIZE) {
         return false;
-- 
2.32.0




* [PATCH v4 14/19] migration: Add helpers to detect TLS capability
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (12 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 13/19] migration: Parameter x-postcopy-preempt-break-huge Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 11:10   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 15/19] migration: Export tls-[creds|hostname|authz] params to cmdline too Peter Xu
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

Add migrate_tls_enabled() to detect whether TLS is configured.

Add migrate_channel_requires_tls() to detect whether the specific channel
requires TLS.

No functional change intended.
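
The intended caller pattern (this mirrors the channel.c incoming path in the
diff below) is simply:

    if (migrate_channel_requires_tls(ioc)) {
        /* Plain channel, but TLS is configured: run the TLS handshake first */
        migration_tls_channel_process_incoming(s, ioc, &local_err);
    } else {
        /* Either TLS is off, or this already is the TLS channel */
        migration_ioc_register_yank(ioc);
    }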

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/channel.c   | 10 ++--------
 migration/migration.c | 17 +++++++++++++++++
 migration/migration.h |  4 ++++
 migration/multifd.c   |  7 +------
 4 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/migration/channel.c b/migration/channel.c
index c6a8dcf1d7..36e59eaeec 100644
--- a/migration/channel.c
+++ b/migration/channel.c
@@ -38,10 +38,7 @@ void migration_channel_process_incoming(QIOChannel *ioc)
     trace_migration_set_incoming_channel(
         ioc, object_get_typename(OBJECT(ioc)));
 
-    if (s->parameters.tls_creds &&
-        *s->parameters.tls_creds &&
-        !object_dynamic_cast(OBJECT(ioc),
-                             TYPE_QIO_CHANNEL_TLS)) {
+    if (migrate_channel_requires_tls(ioc)) {
         migration_tls_channel_process_incoming(s, ioc, &local_err);
     } else {
         migration_ioc_register_yank(ioc);
@@ -71,10 +68,7 @@ void migration_channel_connect(MigrationState *s,
         ioc, object_get_typename(OBJECT(ioc)), hostname, error);
 
     if (!error) {
-        if (s->parameters.tls_creds &&
-            *s->parameters.tls_creds &&
-            !object_dynamic_cast(OBJECT(ioc),
-                                 TYPE_QIO_CHANNEL_TLS)) {
+        if (migrate_channel_requires_tls(ioc)) {
             migration_tls_channel_connect(s, ioc, hostname, &error);
 
             if (!error) {
diff --git a/migration/migration.c b/migration/migration.c
index ee3df9e229..899084f993 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -49,6 +49,7 @@
 #include "trace.h"
 #include "exec/target_page.h"
 #include "io/channel-buffer.h"
+#include "io/channel-tls.h"
 #include "migration/colo.h"
 #include "hw/boards.h"
 #include "hw/qdev-properties.h"
@@ -4251,6 +4252,22 @@ void migration_global_dump(Monitor *mon)
                    ms->clear_bitmap_shift);
 }
 
+bool migrate_tls_enabled(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return s->parameters.tls_creds && *s->parameters.tls_creds;
+}
+
+bool migrate_channel_requires_tls(QIOChannel *ioc)
+{
+    if (!migrate_tls_enabled()) {
+        return false;
+    }
+
+    return !object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_TLS);
+}
+
 #define DEFINE_PROP_MIG_CAP(name, x)             \
     DEFINE_PROP_BOOL(name, MigrationState, enabled_capabilities[x], false)
 
diff --git a/migration/migration.h b/migration/migration.h
index 6ee520642f..8b9ad7fe31 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -436,6 +436,10 @@ bool migrate_use_events(void);
 bool migrate_postcopy_blocktime(void);
 bool migrate_background_snapshot(void);
 bool migrate_postcopy_preempt(void);
+/* Whether TLS is enabled for migration? */
+bool migrate_tls_enabled(void);
+/* Whether the QIO channel requires further TLS handshake? */
+bool migrate_channel_requires_tls(QIOChannel *ioc);
 
 /* Sending on the return path - generic and then for each message type */
 void migrate_send_rp_shut(MigrationIncomingState *mis,
diff --git a/migration/multifd.c b/migration/multifd.c
index 9ea4f581e2..19e3c44491 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -782,17 +782,12 @@ static bool multifd_channel_connect(MultiFDSendParams *p,
                                     QIOChannel *ioc,
                                     Error *error)
 {
-    MigrationState *s = migrate_get_current();
-
     trace_multifd_set_outgoing_channel(
         ioc, object_get_typename(OBJECT(ioc)),
         migrate_get_current()->hostname, error);
 
     if (!error) {
-        if (s->parameters.tls_creds &&
-            *s->parameters.tls_creds &&
-            !object_dynamic_cast(OBJECT(ioc),
-                                 TYPE_QIO_CHANNEL_TLS)) {
+        if (migrate_channel_requires_tls(ioc)) {
             multifd_tls_channel_connect(p, ioc, &error);
             if (!error) {
                 /*
-- 
2.32.0




* [PATCH v4 15/19] migration: Export tls-[creds|hostname|authz] params to cmdline too
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (13 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 14/19] migration: Add helpers to detect TLS capability Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 11:13   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 16/19] migration: Enable TLS for preempt channel Peter Xu
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

It's useful to be able to specify the TLS credentials entirely on the cmdline
(along with the -object tls-creds-*), especially for debugging purposes.
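
For example (the object id, key directory and hostname below are only
illustrative), a source QEMU could be started with:

    -object tls-creds-psk,id=mig-tls0,endpoint=client,dir=/tmp/qemu-tls \
    -global migration.tls-creds=mig-tls0 \
    -global migration.tls-hostname=dst.example.com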

The trick here is that we must remember not to free these fields again in the
finalize() function of the migration object, otherwise it'll cause a
double-free.

The thing is that when destroying an object, we first destroy the properties
bound to the object, then the object itself.  To be explicit, when destroying
the object in object_finalize() we have the following sequence of operations:

    object_property_del_all(obj);
    object_deinit(obj, ti);

So after this change the two fields are already properly released in
object_property_del_all(), even before the finalize() function is reached;
hence we must not free them again in finalize(), or it would be a double-free.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 899084f993..1dc80be1f4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -4349,6 +4349,9 @@ static Property migration_properties[] = {
                       DEFAULT_MIGRATE_ANNOUNCE_STEP),
     DEFINE_PROP_BOOL("x-postcopy-preempt-break-huge", MigrationState,
                       postcopy_preempt_break_huge, true),
+    DEFINE_PROP_STRING("tls-creds", MigrationState, parameters.tls_creds),
+    DEFINE_PROP_STRING("tls-hostname", MigrationState, parameters.tls_hostname),
+    DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -4382,12 +4385,9 @@ static void migration_class_init(ObjectClass *klass, void *data)
 static void migration_instance_finalize(Object *obj)
 {
     MigrationState *ms = MIGRATION_OBJ(obj);
-    MigrationParameters *params = &ms->parameters;
 
     qemu_mutex_destroy(&ms->error_mutex);
     qemu_mutex_destroy(&ms->qemu_file_lock);
-    g_free(params->tls_hostname);
-    g_free(params->tls_creds);
     qemu_sem_destroy(&ms->wait_unplug_sem);
     qemu_sem_destroy(&ms->rate_limit_sem);
     qemu_sem_destroy(&ms->pause_sem);
-- 
2.32.0




* [PATCH v4 16/19] migration: Enable TLS for preempt channel
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (14 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 15/19] migration: Export tls-[creds|hostname|authz] params to cmdline too Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 11:35   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 17/19] tests: Add postcopy tls migration test Peter Xu
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

This patch builds on the async preempt channel creation.  It continues by
wiring up the new channel with a TLS handshake to the destination when TLS
is enabled.

Note that only the src QEMU needs this; the dest QEMU does not need any
change for TLS support, because all channels are established synchronously
there, so all the TLS magic is already properly handled by
migration_tls_channel_process_incoming().
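
The resulting source-side flow, condensed from the postcopy-ram.c hunk below
(error handling and refcounting trimmed): once the raw socket connects, the
completion callback checks whether TLS is required and, if so, defers the
final channel setup to the handshake callback:

    if (migrate_channel_requires_tls(ioc)) {
        tioc = migration_tls_client_create(s, ioc, s->hostname, &local_err);
        qio_channel_tls_handshake(tioc, postcopy_preempt_tls_handshake,
                                  s, NULL, NULL);
        /* postcopy_preempt_send_channel_done() runs after the handshake */
        return;
    }
    /* No TLS needed: finish the channel setup right away */
    postcopy_preempt_send_channel_done(s, ioc, local_err);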

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/postcopy-ram.c | 60 +++++++++++++++++++++++++++++++++++-----
 migration/trace-events   |  1 +
 2 files changed, 54 insertions(+), 7 deletions(-)

diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index ab2a50cf45..f5ba176862 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -36,6 +36,7 @@
 #include "socket.h"
 #include "qemu-file-channel.h"
 #include "yank_functions.h"
+#include "tls.h"
 
 /* Arbitrary limit on size of each discard command,
  * keeps them around ~200 bytes
@@ -1552,15 +1553,15 @@ bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file)
     return true;
 }
 
+/*
+ * Set up the postcopy preempt channel with the IOC.  If ERROR is specified,
+ * set the error instead.  This helper will free the ERROR if specified.
+ */
 static void
-postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
+postcopy_preempt_send_channel_done(MigrationState *s,
+                                   QIOChannel *ioc, Error *local_err)
 {
-    MigrationState *s = opaque;
-    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
-    Error *local_err = NULL;
-
-    if (qio_task_propagate_error(task, &local_err)) {
-        /* Something wrong happened.. */
+    if (local_err) {
         migrate_set_error(s, local_err);
         error_free(local_err);
     } else {
@@ -1574,6 +1575,51 @@ postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
      * postcopy_qemufile_src to know whether it failed or not.
      */
     qemu_sem_post(&s->postcopy_qemufile_src_sem);
+}
+
+static void
+postcopy_preempt_tls_handshake(QIOTask *task, gpointer opaque)
+{
+    MigrationState *s = opaque;
+    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
+    Error *err = NULL;
+
+    qio_task_propagate_error(task, &err);
+    postcopy_preempt_send_channel_done(s, ioc, err);
+    object_unref(OBJECT(ioc));
+}
+
+static void
+postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
+{
+    MigrationState *s = opaque;
+    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
+    QIOChannelTLS *tioc;
+    Error *local_err = NULL;
+
+    if (qio_task_propagate_error(task, &local_err)) {
+        assert(local_err);
+        goto out;
+    }
+
+    if (migrate_channel_requires_tls(ioc)) {
+        tioc = migration_tls_client_create(s, ioc, s->hostname, &local_err);
+        if (!tioc) {
+            assert(local_err);
+            goto out;
+        }
+        trace_postcopy_preempt_tls_handshake();
+        qio_channel_set_name(QIO_CHANNEL(tioc), "migration-tls-preempt");
+        qio_channel_tls_handshake(tioc, postcopy_preempt_tls_handshake,
+                                  s, NULL, NULL);
+        /* The channel setup will be completed after the TLS handshake */
+        object_unref(OBJECT(ioc));
+        return;
+    }
+
+out:
+    /* This handles both good and error cases */
+    postcopy_preempt_send_channel_done(s, ioc, local_err);
     object_unref(OBJECT(ioc));
 }
 
diff --git a/migration/trace-events b/migration/trace-events
index b21d5f371f..00ab2e1b96 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -287,6 +287,7 @@ postcopy_request_shared_page(const char *sharer, const char *rb, uint64_t rb_off
 postcopy_request_shared_page_present(const char *sharer, const char *rb, uint64_t rb_offset) "%s already %s offset 0x%"PRIx64
 postcopy_wake_shared(uint64_t client_addr, const char *rb) "at 0x%"PRIx64" in %s"
 postcopy_page_req_del(void *addr, int count) "resolved page req %p total %d"
+postcopy_preempt_tls_handshake(void) ""
 postcopy_preempt_new_channel(void) ""
 postcopy_preempt_thread_entry(void) ""
 postcopy_preempt_thread_exit(void) ""
-- 
2.32.0




* [PATCH v4 17/19] tests: Add postcopy tls migration test
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (15 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 16/19] migration: Enable TLS for preempt channel Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 11:39   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 18/19] tests: Add postcopy tls recovery " Peter Xu
                   ` (2 subsequent siblings)
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

We just added TLS tests for precopy but not postcopy.  Add the
corresponding test for vanilla postcopy.
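
Assuming a gnutls-enabled build, the new case can be run on its own with
something like:

    QTEST_QEMU_BINARY=./qemu-system-x86_64 \
        ./tests/qtest/migration-test -p /migration/postcopy/tls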

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tests/qtest/migration-test.c | 43 +++++++++++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 6 deletions(-)

diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index d9f444ea14..80c4244871 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -481,6 +481,10 @@ typedef struct {
     bool only_target;
     /* Use dirty ring if true; dirty logging otherwise */
     bool use_dirty_ring;
+    /* Whether use TLS channels for postcopy test? */
+    bool postcopy_tls;
+    /* Used only if postcopy_tls==true, to cache the data object */
+    void *postcopy_tls_data;
     const char *opts_source;
     const char *opts_target;
 } MigrateStart;
@@ -980,6 +984,10 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
         return -1;
     }
 
+    if (args->postcopy_tls) {
+        args->postcopy_tls_data = test_migrate_tls_psk_start_match(from, to);
+    }
+
     migrate_set_capability(from, "postcopy-ram", true);
     migrate_set_capability(to, "postcopy-ram", true);
     migrate_set_capability(to, "postcopy-blocktime", true);
@@ -1004,7 +1012,8 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
     return 0;
 }
 
-static void migrate_postcopy_complete(QTestState *from, QTestState *to)
+static void migrate_postcopy_complete(QTestState *from, QTestState *to,
+                                      MigrateStart *args)
 {
     wait_for_migration_complete(from);
 
@@ -1015,19 +1024,38 @@ static void migrate_postcopy_complete(QTestState *from, QTestState *to)
         read_blocktime(to);
     }
 
+    if (args->postcopy_tls) {
+        assert(args->postcopy_tls_data);
+        test_migrate_tls_psk_finish(from, to, args->postcopy_tls_data);
+        args->postcopy_tls_data = NULL;
+    }
+
     test_migrate_end(from, to, true);
 }
 
-static void test_postcopy(void)
+static void test_postcopy_common(MigrateStart *args)
 {
-    MigrateStart args = {};
     QTestState *from, *to;
 
-    if (migrate_postcopy_prepare(&from, &to, &args)) {
+    if (migrate_postcopy_prepare(&from, &to, args)) {
         return;
     }
     migrate_postcopy_start(from, to);
-    migrate_postcopy_complete(from, to);
+    migrate_postcopy_complete(from, to, args);
+}
+
+static void test_postcopy(void)
+{
+    MigrateStart args = { };
+
+    test_postcopy_common(&args);
+}
+
+static void test_postcopy_tls(void)
+{
+    MigrateStart args = { .postcopy_tls = true };
+
+    test_postcopy_common(&args);
 }
 
 static void test_postcopy_recovery(void)
@@ -1089,7 +1117,7 @@ static void test_postcopy_recovery(void)
     /* Restore the postcopy bandwidth to unlimited */
     migrate_set_parameter_int(from, "max-postcopy-bandwidth", 0);
 
-    migrate_postcopy_complete(from, to);
+    migrate_postcopy_complete(from, to, &args);
 }
 
 static void test_baddest(void)
@@ -2134,6 +2162,9 @@ int main(int argc, char **argv)
 
     qtest_add_func("/migration/postcopy/unix", test_postcopy);
     qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
+#ifdef CONFIG_GNUTLS
+    qtest_add_func("/migration/postcopy/tls", test_postcopy_tls);
+#endif /* CONFIG_GNUTLS */
     qtest_add_func("/migration/bad_dest", test_baddest);
     qtest_add_func("/migration/precopy/unix/plain", test_precopy_unix_plain);
     qtest_add_func("/migration/precopy/unix/xbzrle", test_precopy_unix_xbzrle);
-- 
2.32.0




* [PATCH v4 18/19] tests: Add postcopy tls recovery migration test
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (16 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 17/19] tests: Add postcopy tls migration test Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-04-20 11:42   ` Daniel P. Berrangé
  2022-03-31 15:08 ` [PATCH v4 19/19] tests: Add postcopy preempt tests Peter Xu
  2022-04-21 13:57 ` [PATCH v4 00/19] migration: Postcopy Preemption Dr. David Alan Gilbert
  19 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

It's easy to build this upon the postcopy tls test.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tests/qtest/migration-test.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 80c4244871..7288c64e97 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -1058,15 +1058,15 @@ static void test_postcopy_tls(void)
     test_postcopy_common(&args);
 }
 
-static void test_postcopy_recovery(void)
+static void test_postcopy_recovery_common(MigrateStart *args)
 {
-    MigrateStart args = {
-        .hide_stderr = true,
-    };
     QTestState *from, *to;
     g_autofree char *uri = NULL;
 
-    if (migrate_postcopy_prepare(&from, &to, &args)) {
+    /* Always hide errors for postcopy recover tests since they're expected */
+    args->hide_stderr = true;
+
+    if (migrate_postcopy_prepare(&from, &to, args)) {
         return;
     }
 
@@ -1117,7 +1117,21 @@ static void test_postcopy_recovery(void)
     /* Restore the postcopy bandwidth to unlimited */
     migrate_set_parameter_int(from, "max-postcopy-bandwidth", 0);
 
-    migrate_postcopy_complete(from, to, &args);
+    migrate_postcopy_complete(from, to, args);
+}
+
+static void test_postcopy_recovery(void)
+{
+    MigrateStart args = { };
+
+    test_postcopy_recovery_common(&args);
+}
+
+static void test_postcopy_recovery_tls(void)
+{
+    MigrateStart args = { .postcopy_tls = true };
+
+    test_postcopy_recovery_common(&args);
 }
 
 static void test_baddest(void)
@@ -2164,6 +2178,7 @@ int main(int argc, char **argv)
     qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
 #ifdef CONFIG_GNUTLS
     qtest_add_func("/migration/postcopy/tls", test_postcopy_tls);
+    qtest_add_func("/migration/postcopy/tls/recovery", test_postcopy_recovery_tls);
 #endif /* CONFIG_GNUTLS */
     qtest_add_func("/migration/bad_dest", test_baddest);
     qtest_add_func("/migration/precopy/unix/plain", test_precopy_unix_plain);
-- 
2.32.0




* [PATCH v4 19/19] tests: Add postcopy preempt tests
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (17 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 18/19] tests: Add postcopy tls recovery " Peter Xu
@ 2022-03-31 15:08 ` Peter Xu
  2022-03-31 15:25   ` Peter Xu
  2022-04-20 11:43   ` Daniel P. Berrangé
  2022-04-21 13:57 ` [PATCH v4 00/19] migration: Postcopy Preemption Dr. David Alan Gilbert
  19 siblings, 2 replies; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, peterx, Juan Quintela

Four tests are added for preempt mode:

  - Postcopy default
  - Postcopy tls
  - Postcopy recovery
  - Postcopy tls+recovery

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tests/qtest/migration-test.c | 49 ++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 7288c64e97..7188503ae1 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -477,6 +477,7 @@ typedef struct {
      */
     bool hide_stderr;
     bool use_shmem;
+    bool postcopy_preempt;
     /* only launch the target process */
     bool only_target;
     /* Use dirty ring if true; dirty logging otherwise */
@@ -992,6 +993,11 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
     migrate_set_capability(to, "postcopy-ram", true);
     migrate_set_capability(to, "postcopy-blocktime", true);
 
+    if (args->postcopy_preempt) {
+        migrate_set_capability(from, "postcopy-preempt", true);
+        migrate_set_capability(to, "postcopy-preempt", true);
+    }
+
     /* We want to pick a speed slow enough that the test completes
      * quickly, but that it doesn't complete precopy even on a slow
      * machine, so also set the downtime.
@@ -1058,6 +1064,25 @@ static void test_postcopy_tls(void)
     test_postcopy_common(&args);
 }
 
+static void test_postcopy_preempt(void)
+{
+    MigrateStart args = {
+        .postcopy_preempt = true,
+    };
+
+    test_postcopy_common(&args);
+}
+
+static void test_postcopy_preempt_tls(void)
+{
+    MigrateStart args = {
+        .postcopy_preempt = true,
+        .postcopy_tls = true,
+    };
+
+    test_postcopy_common(&args);
+}
+
 static void test_postcopy_recovery_common(MigrateStart *args)
 {
     QTestState *from, *to;
@@ -1134,6 +1159,24 @@ static void test_postcopy_recovery_tls(void)
     test_postcopy_recovery_common(&args);
 }
 
+static void test_postcopy_preempt_recovery(void)
+{
+    MigrateStart args = { .postcopy_preempt = true };
+
+    test_postcopy_recovery_common(&args);
+}
+
+/* This contains preempt+recovery+tls test altogether */
+static void test_postcopy_preempt_all(void)
+{
+    MigrateStart args = {
+        .postcopy_preempt = true,
+        .postcopy_tls = true,
+    };
+
+    test_postcopy_recovery_common(&args);
+}
+
 static void test_baddest(void)
 {
     MigrateStart args = {
@@ -2176,6 +2219,12 @@ int main(int argc, char **argv)
 
     qtest_add_func("/migration/postcopy/unix", test_postcopy);
     qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
+    qtest_add_func("/migration/postcopy/preempt/unix", test_postcopy_preempt);
+    qtest_add_func("/migration/postcopy/preempt/recovery",
+                   test_postcopy_preempt_recovery);
+    qtest_add_func("/migration/postcopy/preempt/tls", test_postcopy_preempt_tls);
+    qtest_add_func("/migration/postcopy/preempt/tls+recovery",
+                   test_postcopy_preempt_all);
 #ifdef CONFIG_GNUTLS
     qtest_add_func("/migration/postcopy/tls", test_postcopy_tls);
     qtest_add_func("/migration/postcopy/tls/recovery", test_postcopy_recovery_tls);
-- 
2.32.0




* Re: [PATCH v4 19/19] tests: Add postcopy preempt tests
  2022-03-31 15:08 ` [PATCH v4 19/19] tests: Add postcopy preempt tests Peter Xu
@ 2022-03-31 15:25   ` Peter Xu
  2022-04-20 11:43   ` Daniel P. Berrangé
  1 sibling, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-03-31 15:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, Juan Quintela

On Thu, Mar 31, 2022 at 11:08:57AM -0400, Peter Xu wrote:
> Four tests are added for preempt mode:
> 
>   - Postcopy default
>   - Postcopy tls
>   - Postcopy recovery
>   - Postcopy tls+recovery
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  tests/qtest/migration-test.c | 49 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 49 insertions(+)
> 
> diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
> index 7288c64e97..7188503ae1 100644
> --- a/tests/qtest/migration-test.c
> +++ b/tests/qtest/migration-test.c
> @@ -477,6 +477,7 @@ typedef struct {
>       */
>      bool hide_stderr;
>      bool use_shmem;
> +    bool postcopy_preempt;
>      /* only launch the target process */
>      bool only_target;
>      /* Use dirty ring if true; dirty logging otherwise */
> @@ -992,6 +993,11 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
>      migrate_set_capability(to, "postcopy-ram", true);
>      migrate_set_capability(to, "postcopy-blocktime", true);
>  
> +    if (args->postcopy_preempt) {
> +        migrate_set_capability(from, "postcopy-preempt", true);
> +        migrate_set_capability(to, "postcopy-preempt", true);
> +    }
> +
>      /* We want to pick a speed slow enough that the test completes
>       * quickly, but that it doesn't complete precopy even on a slow
>       * machine, so also set the downtime.
> @@ -1058,6 +1064,25 @@ static void test_postcopy_tls(void)
>      test_postcopy_common(&args);
>  }
>  
> +static void test_postcopy_preempt(void)
> +{
> +    MigrateStart args = {
> +        .postcopy_preempt = true,
> +    };
> +
> +    test_postcopy_common(&args);
> +}
> +
> +static void test_postcopy_preempt_tls(void)
> +{
> +    MigrateStart args = {
> +        .postcopy_preempt = true,
> +        .postcopy_tls = true,
> +    };
> +
> +    test_postcopy_common(&args);
> +}
> +
>  static void test_postcopy_recovery_common(MigrateStart *args)
>  {
>      QTestState *from, *to;
> @@ -1134,6 +1159,24 @@ static void test_postcopy_recovery_tls(void)
>      test_postcopy_recovery_common(&args);
>  }
>  
> +static void test_postcopy_preempt_recovery(void)
> +{
> +    MigrateStart args = { .postcopy_preempt = true };
> +
> +    test_postcopy_recovery_common(&args);
> +}
> +
> +/* This contains preempt+recovery+tls test altogether */
> +static void test_postcopy_preempt_all(void)
> +{
> +    MigrateStart args = {
> +        .postcopy_preempt = true,
> +        .postcopy_tls = true,
> +    };
> +
> +    test_postcopy_recovery_common(&args);
> +}
> +
>  static void test_baddest(void)
>  {
>      MigrateStart args = {
> @@ -2176,6 +2219,12 @@ int main(int argc, char **argv)
>  
>      qtest_add_func("/migration/postcopy/unix", test_postcopy);
>      qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
> +    qtest_add_func("/migration/postcopy/preempt/unix", test_postcopy_preempt);
> +    qtest_add_func("/migration/postcopy/preempt/recovery",
> +                   test_postcopy_preempt_recovery);
> +    qtest_add_func("/migration/postcopy/preempt/tls", test_postcopy_preempt_tls);
> +    qtest_add_func("/migration/postcopy/preempt/tls+recovery",
> +                   test_postcopy_preempt_all);
>  #ifdef CONFIG_GNUTLS
>      qtest_add_func("/migration/postcopy/tls", test_postcopy_tls);
>      qtest_add_func("/migration/postcopy/tls/recovery", test_postcopy_recovery_tls);

Ehh, the latter two need to be put into the CONFIG_GNUTLS block..

---8<---
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 7188503ae1..3d4fe89f52 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -2222,12 +2222,12 @@ int main(int argc, char **argv)
     qtest_add_func("/migration/postcopy/preempt/unix", test_postcopy_preempt);
     qtest_add_func("/migration/postcopy/preempt/recovery",
                    test_postcopy_preempt_recovery);
-    qtest_add_func("/migration/postcopy/preempt/tls", test_postcopy_preempt_tls);
-    qtest_add_func("/migration/postcopy/preempt/tls+recovery",
-                   test_postcopy_preempt_all);
 #ifdef CONFIG_GNUTLS
     qtest_add_func("/migration/postcopy/tls", test_postcopy_tls);
     qtest_add_func("/migration/postcopy/tls/recovery", test_postcopy_recovery_tls);
+    qtest_add_func("/migration/postcopy/preempt/tls", test_postcopy_preempt_tls);
+    qtest_add_func("/migration/postcopy/preempt/tls+recovery",
+                   test_postcopy_preempt_all);
 #endif /* CONFIG_GNUTLS */
     qtest_add_func("/migration/bad_dest", test_baddest);
     qtest_add_func("/migration/precopy/unix/plain", test_precopy_unix_plain);
---8<---

Sorry for the noise.

-- 
Peter Xu




* Re: [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname
  2022-03-31 15:08 ` [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname Peter Xu
@ 2022-04-07 17:21   ` Dr. David Alan Gilbert
  2022-04-20 10:34   ` Daniel P. Berrangé
  1 sibling, 0 replies; 54+ messages in thread
From: Dr. David Alan Gilbert @ 2022-04-07 17:21 UTC (permalink / raw)
  To: Peter Xu
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange, qemu-devel,
	Juan Quintela

* Peter Xu (peterx@redhat.com) wrote:
> We used to release it right after migrate_fd_connect().  That's not good
> enough when more than one socket pair is required, because the hostname will
> still be needed to establish TLS connections for the remaining channels.
> 
> One example is multifd, where we copied over the hostname for each channel
> but that's actually not needed.
> 
> Keep the hostname around until the cleanup phase of migration.
> 
> Cc: Daniel P. Berrange <berrange@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/channel.c   | 1 -
>  migration/migration.c | 5 +++++
>  2 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/migration/channel.c b/migration/channel.c
> index c4fc000a1a..c6a8dcf1d7 100644
> --- a/migration/channel.c
> +++ b/migration/channel.c
> @@ -96,6 +96,5 @@ void migration_channel_connect(MigrationState *s,
>          }
>      }
>      migrate_fd_connect(s, error);
> -    g_free(s->hostname);
>      error_free(error);
>  }
> diff --git a/migration/migration.c b/migration/migration.c
> index 695f0f2900..281d33326b 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1809,6 +1809,11 @@ static void migrate_fd_cleanup(MigrationState *s)
>      qemu_bh_delete(s->cleanup_bh);
>      s->cleanup_bh = NULL;
>  
> +    if (s->hostname) {
> +        g_free(s->hostname);
> +        s->hostname = NULL;
> +    }
> +
>      qemu_savevm_state_cleanup();
>  
>      if (s->to_dst_file) {
> -- 
> 2.32.0
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH v4 02/19] migration: Drop multifd tls_hostname cache
  2022-03-31 15:08 ` [PATCH v4 02/19] migration: Drop multifd tls_hostname cache Peter Xu
@ 2022-04-07 17:42   ` Dr. David Alan Gilbert
  2022-04-20 10:35   ` Daniel P. Berrangé
  1 sibling, 0 replies; 54+ messages in thread
From: Dr. David Alan Gilbert @ 2022-04-07 17:42 UTC (permalink / raw)
  To: Peter Xu
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange, qemu-devel,
	Juan Quintela

* Peter Xu (peterx@redhat.com) wrote:
> The hostname is cached N times, where N equals the number of multifd channels.
> 
> Drop that cache: after the previous patch, s->hostname stays alive for the
> whole lifecycle of the migration procedure.
> 
> Cc: Juan Quintela <quintela@redhat.com>
> Cc: Daniel P. Berrange <berrange@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd.c | 10 +++-------
>  migration/multifd.h |  2 --
>  2 files changed, 3 insertions(+), 9 deletions(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 76b57a7177..1be4ab5d17 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -542,8 +542,6 @@ void multifd_save_cleanup(void)
>          qemu_sem_destroy(&p->sem_sync);
>          g_free(p->name);
>          p->name = NULL;
> -        g_free(p->tls_hostname);
> -        p->tls_hostname = NULL;
>          multifd_pages_clear(p->pages);
>          p->pages = NULL;
>          p->packet_len = 0;
> @@ -763,7 +761,7 @@ static void multifd_tls_channel_connect(MultiFDSendParams *p,
>                                          Error **errp)
>  {
>      MigrationState *s = migrate_get_current();
> -    const char *hostname = p->tls_hostname;
> +    const char *hostname = s->hostname;
>      QIOChannelTLS *tioc;
>  
>      tioc = migration_tls_client_create(s, ioc, hostname, errp);
> @@ -787,7 +785,8 @@ static bool multifd_channel_connect(MultiFDSendParams *p,
>      MigrationState *s = migrate_get_current();
>  
>      trace_multifd_set_outgoing_channel(
> -        ioc, object_get_typename(OBJECT(ioc)), p->tls_hostname, error);
> +        ioc, object_get_typename(OBJECT(ioc)),
> +        migrate_get_current()->hostname, error);
>  
>      if (!error) {
>          if (s->parameters.tls_creds &&
> @@ -874,7 +873,6 @@ int multifd_save_setup(Error **errp)
>      int thread_count;
>      uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      uint8_t i;
> -    MigrationState *s;
>  
>      if (!migrate_use_multifd()) {
>          return 0;
> @@ -884,7 +882,6 @@ int multifd_save_setup(Error **errp)
>          return -1;
>      }
>  
> -    s = migrate_get_current();
>      thread_count = migrate_multifd_channels();
>      multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
>      multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
> @@ -909,7 +906,6 @@ int multifd_save_setup(Error **errp)
>          p->packet->magic = cpu_to_be32(MULTIFD_MAGIC);
>          p->packet->version = cpu_to_be32(MULTIFD_VERSION);
>          p->name = g_strdup_printf("multifdsend_%d", i);
> -        p->tls_hostname = g_strdup(s->hostname);
>          /* We need one extra place for the packet header */
>          p->iov = g_new0(struct iovec, page_count + 1);
>          p->normal = g_new0(ram_addr_t, page_count);
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 4dda900a0b..3d577b98b7 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -72,8 +72,6 @@ typedef struct {
>      uint8_t id;
>      /* channel thread name */
>      char *name;
> -    /* tls hostname */
> -    char *tls_hostname;
>      /* channel thread id */
>      QemuThread thread;
>      /* communication channel */
> -- 
> 2.32.0
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname
  2022-03-31 15:08 ` [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname Peter Xu
  2022-04-07 17:21   ` Dr. David Alan Gilbert
@ 2022-04-20 10:34   ` Daniel P. Berrangé
  2022-04-20 18:19     ` Peter Xu
  1 sibling, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 10:34 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:39AM -0400, Peter Xu wrote:
> We used to release it right after migrate_fd_connect().  That's not good
> enough when more than one socket pair is required, because the hostname will
> still be needed to establish TLS connections for the remaining channels.
> 
> One example is multifd, where we copied over the hostname for each channel
> but that's actually not needed.
> 
> Keep the hostname around until the cleanup phase of migration.
> 
> Cc: Daniel P. Berrange <berrange@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/channel.c   | 1 -
>  migration/migration.c | 5 +++++
>  2 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/migration/channel.c b/migration/channel.c
> index c4fc000a1a..c6a8dcf1d7 100644
> --- a/migration/channel.c
> +++ b/migration/channel.c
> @@ -96,6 +96,5 @@ void migration_channel_connect(MigrationState *s,
>          }
>      }
>      migrate_fd_connect(s, error);
> -    g_free(s->hostname);
>      error_free(error);
>  }
> diff --git a/migration/migration.c b/migration/migration.c
> index 695f0f2900..281d33326b 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1809,6 +1809,11 @@ static void migrate_fd_cleanup(MigrationState *s)
>      qemu_bh_delete(s->cleanup_bh);
>      s->cleanup_bh = NULL;
>  
> +    if (s->hostname) {
> +        g_free(s->hostname);
> +        s->hostname = NULL;
> +    }

FWIW there's a marginally more concise pattern:

  g_clear_pointer(&s->hostname, g_free)
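
For comparison, the explicit form in the patch and the GLib helper end up
equivalent (g_free(NULL) is a no-op, so the NULL check is redundant anyway);
a minimal side-by-side sketch using the field from the patch:

    /* explicit free-and-clear, as in the patch */
    g_free(s->hostname);
    s->hostname = NULL;

    /* same effect with the GLib helper */
    g_clear_pointer(&s->hostname, g_free);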


Either way

   Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 02/19] migration: Drop multifd tls_hostname cache
  2022-03-31 15:08 ` [PATCH v4 02/19] migration: Drop multifd tls_hostname cache Peter Xu
  2022-04-07 17:42   ` Dr. David Alan Gilbert
@ 2022-04-20 10:35   ` Daniel P. Berrangé
  1 sibling, 0 replies; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 10:35 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:40AM -0400, Peter Xu wrote:
> The hostname is cached N times, where N equals the number of multifd channels.
> 
> Drop that cache: after the previous patch, s->hostname stays alive for the
> whole lifecycle of the migration procedure.
> 
> Cc: Juan Quintela <quintela@redhat.com>
> Cc: Daniel P. Berrange <berrange@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/multifd.c | 10 +++-------
>  migration/multifd.h |  2 --
>  2 files changed, 3 insertions(+), 9 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 03/19] migration: Add pss.postcopy_requested status
  2022-03-31 15:08 ` [PATCH v4 03/19] migration: Add pss.postcopy_requested status Peter Xu
@ 2022-04-20 10:36   ` Daniel P. Berrangé
  0 siblings, 0 replies; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 10:36 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:41AM -0400, Peter Xu wrote:
> This boolean flag shows whether the current page being migrated was triggered
> by a postcopy request or not.  Then ram_save_host_page() and the deeper stack
> will be able to tell the priority of this page.
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/ram.c | 6 ++++++
>  1 file changed, 6 insertions(+)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 04/19] migration: Move migrate_allow_multifd and helpers into migration.c
  2022-03-31 15:08 ` [PATCH v4 04/19] migration: Move migrate_allow_multifd and helpers into migration.c Peter Xu
@ 2022-04-20 10:41   ` Daniel P. Berrangé
  2022-04-20 19:30     ` Peter Xu
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 10:41 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:42AM -0400, Peter Xu wrote:
> This variable, along with its helpers, is used to detect whether multiple
> channels will be supported for migration.  In follow-up patches, there'll be
> another capability that requires multiple channels.  Hence move it out of the
> multifd-specific code and make it public.  Meanwhile rename it from "multifd"
> to "multi_channels" to show its real meaning.

FWIW, I would generally suggest separating the rename from the code
movement into distinct patches.

> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration.c | 22 +++++++++++++++++-----
>  migration/migration.h |  3 +++
>  migration/multifd.c   | 19 ++++---------------
>  migration/multifd.h   |  2 --
>  4 files changed, 24 insertions(+), 22 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 281d33326b..596d3d30b4 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -180,6 +180,18 @@ static int migration_maybe_pause(MigrationState *s,
>                                   int new_state);
>  static void migrate_fd_cancel(MigrationState *s);
>  
> +static bool migrate_allow_multi_channels = true;

This is a pre-existing thing, but I'm curious why we default this to
'true', when the first thing qemu_start_incoming_migration() and
qmp_migrate() do is set it to 'false' and then selectively
put it back to 'true'.


>  static gint page_request_addr_cmp(gconstpointer ap, gconstpointer bp)
>  {
>      uintptr_t a = (uintptr_t) ap, b = (uintptr_t) bp;
> @@ -469,12 +481,12 @@ static void qemu_start_incoming_migration(const char *uri, Error **errp)
>  {
>      const char *p = NULL;
>  
> -    migrate_protocol_allow_multifd(false); /* reset it anyway */
> +    migrate_protocol_allow_multi_channels(false); /* reset it anyway */
>      qapi_event_send_migration(MIGRATION_STATUS_SETUP);
>      if (strstart(uri, "tcp:", &p) ||
>          strstart(uri, "unix:", NULL) ||
>          strstart(uri, "vsock:", NULL)) {
> -        migrate_protocol_allow_multifd(true);
> +        migrate_protocol_allow_multi_channels(true);
>          socket_start_incoming_migration(p ? p : uri, errp);



> @@ -2324,11 +2336,11 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
>          }
>      }
>  
> -    migrate_protocol_allow_multifd(false);
> +    migrate_protocol_allow_multi_channels(false);
>      if (strstart(uri, "tcp:", &p) ||
>          strstart(uri, "unix:", NULL) ||
>          strstart(uri, "vsock:", NULL)) {
> -        migrate_protocol_allow_multifd(true);
> +        migrate_protocol_allow_multi_channels(true);
>          socket_start_outgoing_migration(s, p ? p : uri, &local_err);
>  #ifdef CONFIG_RDMA
>      } else if (strstart(uri, "rdma:", &p)) {

Regardless of comments above

  Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 05/19] migration: Export ram_load_postcopy()
  2022-03-31 15:08 ` [PATCH v4 05/19] migration: Export ram_load_postcopy() Peter Xu
@ 2022-04-20 10:42   ` Daniel P. Berrangé
  0 siblings, 0 replies; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 10:42 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:43AM -0400, Peter Xu wrote:
> Will be reused in postcopy fast load thread.
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/ram.c | 2 +-
>  migration/ram.h | 1 +
>  2 files changed, 2 insertions(+), 1 deletion(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 06/19] migration: Move channel setup out of postcopy_try_recover()
  2022-03-31 15:08 ` [PATCH v4 06/19] migration: Move channel setup out of postcopy_try_recover() Peter Xu
@ 2022-04-20 10:43   ` Daniel P. Berrangé
  0 siblings, 0 replies; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 10:43 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:44AM -0400, Peter Xu wrote:
> We used to use postcopy_try_recover() in place of migration_incoming_setup() to
> set up incoming channels.  That's fine for the old world, but in the new world
> there can be more than one channel that needs setup.  Better to move the channel
> setup out of it so that postcopy_try_recover() only handles the last phase of
> switching to the recovery phase.
> 
> To do that, in migration_fd_process_incoming() move the postcopy_try_recover()
> call to after migration_incoming_setup(), which will set up the channels.
> While in migration_ioc_process_incoming(), postpone the recover() routine to
> right before we jump into migration_incoming_process().
> 
> A side benefit is we don't need to pass in QEMUFile* to postcopy_try_recover()
> anymore.  Remove it.
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 07/19] migration: Allow migrate-recover to run multiple times
  2022-03-31 15:08 ` [PATCH v4 07/19] migration: Allow migrate-recover to run multiple times Peter Xu
@ 2022-04-20 10:44   ` Daniel P. Berrangé
  0 siblings, 0 replies; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 10:44 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:45AM -0400, Peter Xu wrote:
> Previously migration didn't have an easy way to clean up the listening
> transport, so migrate recovery was only allowed to execute once.  That was done
> with a trick flag in postcopy_recover_triggered.
> 
> Now the facility is already there.
> 
> Drop postcopy_recover_triggered and instead allow a new migrate-recover to
> release the previous listener transport.
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration.c | 13 ++-----------
>  migration/migration.h |  1 -
>  migration/savevm.c    |  3 ---
>  3 files changed, 2 insertions(+), 15 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 08/19] migration: Add postcopy-preempt capability
  2022-03-31 15:08 ` [PATCH v4 08/19] migration: Add postcopy-preempt capability Peter Xu
@ 2022-04-20 10:51   ` Daniel P. Berrangé
  2022-04-20 19:31     ` Peter Xu
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 10:51 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:46AM -0400, Peter Xu wrote:
> Firstly, postcopy already preempts precopy due to the fact that we do
> unqueue_page() first before looking into dirty bits.
> 
> However that's not enough.  E.g., when host huge pages are enabled and a
> precopy huge page is being sent, a postcopy request needs to wait until the
> whole huge page finishes sending.  That can introduce quite some delay: the
> bigger the huge page, the larger the delay it'll bring.
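
(For a rough sense of scale, assuming a 10 Gbps link rather than a number from
this series: a 1 GiB huge page is about 8.6 Gbit of payload, i.e. roughly 0.86s
of transmit time, so a postcopy request that arrives mid-page can wait close to
a second on the shared channel.)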
> 
> This patch adds a new capability to allow postcopy requests to preempt the
> precopy stream in the middle of sending a huge page, so that postcopy requests
> can be serviced even faster.
> 
> Meanwhile to send it even faster, bypass the precopy stream by providing a
> standalone postcopy socket for sending requested pages.
> 
> Since the new behavior will not be compatible with the old behavior, it will
> not be the default; it's enabled only when the new capability is set on both
> src/dst QEMUs.
> 
> This patch only adds the capability itself; the logic will be added in
> follow-up patches.
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration.c | 23 +++++++++++++++++++++++
>  migration/migration.h |  1 +
>  qapi/migration.json   |  8 +++++++-
>  3 files changed, 31 insertions(+), 1 deletion(-)


> diff --git a/qapi/migration.json b/qapi/migration.json
> index 18e2610e88..3523f23386 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -463,6 +463,12 @@
>  #                       procedure starts. The VM RAM is saved with running VM.
>  #                       (since 6.0)
>  #
> +# @postcopy-preempt: If enabled, the migration process will allow postcopy
> +#                    requests to preempt precopy stream, so postcopy requests
> +#                    will be handled faster.  This is a performance feature and
> +#                    should not affect the correctness of postcopy migration.
> +#                    (since 7.0)

Now 7.1

  Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 09/19] migration: Postcopy preemption preparation on channel creation
  2022-03-31 15:08 ` [PATCH v4 09/19] migration: Postcopy preemption preparation on channel creation Peter Xu
@ 2022-04-20 10:59   ` Daniel P. Berrangé
  0 siblings, 0 replies; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 10:59 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:47AM -0400, Peter Xu wrote:
> Create a new socket for postcopy so that postcopy-requested pages can be sent
> via this dedicated channel without getting blocked behind precopy pages.
> 
> A new thread is also created on dest qemu to receive data from this new channel
> based on the ram_load_postcopy() routine.
> 
> The ram_load_postcopy(POSTCOPY) branch and the thread have not started to
> function yet; that'll be done in follow-up patches.
> 
> Clean up the new sockets on both src/dst QEMUs, and look after the new thread
> too to make sure it'll be recycled properly.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration.c    | 62 +++++++++++++++++++++++----
>  migration/migration.h    |  8 ++++
>  migration/postcopy-ram.c | 92 ++++++++++++++++++++++++++++++++++++++--
>  migration/postcopy-ram.h | 10 +++++
>  migration/ram.c          | 25 ++++++++---
>  migration/ram.h          |  4 +-
>  migration/savevm.c       | 20 ++++-----
>  migration/socket.c       | 22 +++++++++-
>  migration/socket.h       |  1 +
>  migration/trace-events   |  5 ++-
>  10 files changed, 218 insertions(+), 31 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 10/19] migration: Postcopy preemption enablement
  2022-03-31 15:08 ` [PATCH v4 10/19] migration: Postcopy preemption enablement Peter Xu
@ 2022-04-20 11:05   ` Daniel P. Berrangé
  2022-04-20 19:39     ` Peter Xu
  2022-05-11 15:54   ` manish.mishra
  1 sibling, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 11:05 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:48AM -0400, Peter Xu wrote:
> This patch enables the postcopy-preempt feature.
> 
> It contains two major changes to the migration logic:
> 
> (1) Postcopy requests are now sent via a different socket from the precopy
>     background migration stream, so as to be isolated from very high page
>     request delays.
> 
> (2) For huge-page-enabled hosts: when there are postcopy requests, they can now
>     intercept a partial send of huge host pages on the src QEMU.
> 
> After this patch, we'll live migrate a VM with two channels for postcopy: (1)
> PRECOPY channel, which is the default channel that transfers background pages;
> and (2) POSTCOPY channel, which only transfers requested pages.
> 
> There's no strict rule on which channel to use, e.g., if a requested page is
> already being transferred on the precopy channel, then we will keep using the
> same precopy channel to transfer the page even if it's explicitly requested.  In
> 99% of the cases we'll prioritize the channels so that requested pages are sent
> via the postcopy channel whenever possible.
> 
> On the source QEMU, when we find a postcopy request, we'll interrupt the
> PRECOPY channel sending process and quickly switch to the POSTCOPY channel.
> After we've serviced all the high-priority postcopy pages, we'll switch back to
> the PRECOPY channel so that we continue sending the interrupted huge page.
> There's no new thread introduced on the src QEMU.

Implicit in this approach is that the delay in sending postcopy
OOB pages is from the pending socket buffers the kernel already
has, and not any delay caused by the QEMU sending thread being
busy doing other stuff.

Is there any scenario in which the QEMU sending thread is stalled
in sendmsg() with a 1GB huge page waiting for the kernel to
get space in the socket outgoing buffer ?

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 14/19] migration: Add helpers to detect TLS capability
  2022-03-31 15:08 ` [PATCH v4 14/19] migration: Add helpers to detect TLS capability Peter Xu
@ 2022-04-20 11:10   ` Daniel P. Berrangé
  2022-04-20 19:52     ` Peter Xu
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 11:10 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:52AM -0400, Peter Xu wrote:
> Add migrate_tls_enabled() to detect whether TLS is configured.
> 
> Add migrate_channel_requires_tls() to detect whether the specific channel
> requires TLS.
> 
> No functional change intended.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/channel.c   | 10 ++--------
>  migration/migration.c | 17 +++++++++++++++++
>  migration/migration.h |  4 ++++
>  migration/multifd.c   |  7 +------
>  4 files changed, 24 insertions(+), 14 deletions(-)
> 
> diff --git a/migration/channel.c b/migration/channel.c
> index c6a8dcf1d7..36e59eaeec 100644
> --- a/migration/channel.c
> +++ b/migration/channel.c
> @@ -38,10 +38,7 @@ void migration_channel_process_incoming(QIOChannel *ioc)
>      trace_migration_set_incoming_channel(
>          ioc, object_get_typename(OBJECT(ioc)));
>  
> -    if (s->parameters.tls_creds &&
> -        *s->parameters.tls_creds &&
> -        !object_dynamic_cast(OBJECT(ioc),
> -                             TYPE_QIO_CHANNEL_TLS)) {
> +    if (migrate_channel_requires_tls(ioc)) {
>          migration_tls_channel_process_incoming(s, ioc, &local_err);
>      } else {
>          migration_ioc_register_yank(ioc);
> @@ -71,10 +68,7 @@ void migration_channel_connect(MigrationState *s,
>          ioc, object_get_typename(OBJECT(ioc)), hostname, error);
>  
>      if (!error) {
> -        if (s->parameters.tls_creds &&
> -            *s->parameters.tls_creds &&
> -            !object_dynamic_cast(OBJECT(ioc),
> -                                 TYPE_QIO_CHANNEL_TLS)) {
> +        if (migrate_channel_requires_tls(ioc)) {
>              migration_tls_channel_connect(s, ioc, hostname, &error);
>  
>              if (!error) {
> diff --git a/migration/migration.c b/migration/migration.c
> index ee3df9e229..899084f993 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -49,6 +49,7 @@
>  #include "trace.h"
>  #include "exec/target_page.h"
>  #include "io/channel-buffer.h"
> +#include "io/channel-tls.h"
>  #include "migration/colo.h"
>  #include "hw/boards.h"
>  #include "hw/qdev-properties.h"
> @@ -4251,6 +4252,22 @@ void migration_global_dump(Monitor *mon)
>                     ms->clear_bitmap_shift);
>  }
>  
> +bool migrate_tls_enabled(void)
> +{
> +    MigrationState *s = migrate_get_current();
> +
> +    return s->parameters.tls_creds && *s->parameters.tls_creds;
> +}
> +
> +bool migrate_channel_requires_tls(QIOChannel *ioc)
> +{
> +    if (!migrate_tls_enabled()) {

This is the only place migrate_tls_enabled is called. Does it
really need to exist as an exported method, as opposed to
inlining it here ?

> +        return false;
> +    }
> +
> +    return !object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_TLS);
> +}
> +
>  #define DEFINE_PROP_MIG_CAP(name, x)             \
>      DEFINE_PROP_BOOL(name, MigrationState, enabled_capabilities[x], false)
>  
> diff --git a/migration/migration.h b/migration/migration.h
> index 6ee520642f..8b9ad7fe31 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -436,6 +436,10 @@ bool migrate_use_events(void);
>  bool migrate_postcopy_blocktime(void);
>  bool migrate_background_snapshot(void);
>  bool migrate_postcopy_preempt(void);
> +/* Whether TLS is enabled for migration? */
> +bool migrate_tls_enabled(void);
> +/* Whether the QIO channel requires further TLS handshake? */
> +bool migrate_channel_requires_tls(QIOChannel *ioc);

How about having it in tls.{c,h} as  'migration_tls_channel_enabled()' ?
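
As a sketch of that suggestion (an illustration only, not code from this
series; it simply merges the two helpers quoted above into one function):

    bool migration_tls_channel_enabled(QIOChannel *ioc)
    {
        MigrationState *s = migrate_get_current();

        /* TLS not configured for this migration at all */
        if (!s->parameters.tls_creds || !*s->parameters.tls_creds) {
            return false;
        }

        /* A further handshake is only needed if ioc isn't already a TLS channel */
        return !object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_TLS);
    }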

>  
>  /* Sending on the return path - generic and then for each message type */
>  void migrate_send_rp_shut(MigrationIncomingState *mis,
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 9ea4f581e2..19e3c44491 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -782,17 +782,12 @@ static bool multifd_channel_connect(MultiFDSendParams *p,
>                                      QIOChannel *ioc,
>                                      Error *error)
>  {
> -    MigrationState *s = migrate_get_current();
> -
>      trace_multifd_set_outgoing_channel(
>          ioc, object_get_typename(OBJECT(ioc)),
>          migrate_get_current()->hostname, error);
>  
>      if (!error) {
> -        if (s->parameters.tls_creds &&
> -            *s->parameters.tls_creds &&
> -            !object_dynamic_cast(OBJECT(ioc),
> -                                 TYPE_QIO_CHANNEL_TLS)) {
> +        if (migrate_channel_requires_tls(ioc)) {
>              multifd_tls_channel_connect(p, ioc, &error);
>              if (!error) {
>                  /*
> -- 
> 2.32.0
> 

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 15/19] migration: Export tls-[creds|hostname|authz] params to cmdline too
  2022-03-31 15:08 ` [PATCH v4 15/19] migration: Export tls-[creds|hostname|authz] params to cmdline too Peter Xu
@ 2022-04-20 11:13   ` Daniel P. Berrangé
  2022-04-20 20:01     ` Peter Xu
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 11:13 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:53AM -0400, Peter Xu wrote:
> It's useful to be able to specify TLS credentials entirely on the cmdline (along
> with the -object tls-creds-*), especially for debugging purposes.
> 
> The trick here is we must remember not to free these fields again in the
> finalize() function of the migration object, otherwise it'll cause a double-free.
> 
> The thing is, when destroying an object we'll first destroy the properties
> bound to the object, then the object itself.  To be explicit, when destroying
> the object in object_finalize() we have this sequence of
> operations:
> 
>     object_property_del_all(obj);
>     object_deinit(obj, ti);
> 
> So after this change the two fields are already properly released in
> object_property_del_all(), even before reaching the finalize() function;
> hence we must not free them again in finalize(), or it becomes a double-free.


I believe this is also fixing a small memory leak

> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 899084f993..1dc80be1f4 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -4349,6 +4349,9 @@ static Property migration_properties[] = {
>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
>      DEFINE_PROP_BOOL("x-postcopy-preempt-break-huge", MigrationState,
>                        postcopy_preempt_break_huge, true),
> +    DEFINE_PROP_STRING("tls-creds", MigrationState, parameters.tls_creds),
> +    DEFINE_PROP_STRING("tls-hostname", MigrationState, parameters.tls_hostname),
> +    DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
>  
>      /* Migration capabilities */
>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> @@ -4382,12 +4385,9 @@ static void migration_class_init(ObjectClass *klass, void *data)
>  static void migration_instance_finalize(Object *obj)
>  {
>      MigrationState *ms = MIGRATION_OBJ(obj);
> -    MigrationParameters *params = &ms->parameters;
>  
>      qemu_mutex_destroy(&ms->error_mutex);
>      qemu_mutex_destroy(&ms->qemu_file_lock);
> -    g_free(params->tls_hostname);
> -    g_free(params->tls_creds);

tls_authz wasn't previously freed here, and now it will be

>      qemu_sem_destroy(&ms->wait_unplug_sem);
>      qemu_sem_destroy(&ms->rate_limit_sem);
>      qemu_sem_destroy(&ms->pause_sem);

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 16/19] migration: Enable TLS for preempt channel
  2022-03-31 15:08 ` [PATCH v4 16/19] migration: Enable TLS for preempt channel Peter Xu
@ 2022-04-20 11:35   ` Daniel P. Berrangé
  2022-04-20 20:10     ` Peter Xu
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 11:35 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:54AM -0400, Peter Xu wrote:
> This patch is based on the async preempt channel creation.  It continues by
> wiring up the new channel with a TLS handshake to the destination when enabled.
> 
> Note that only the src QEMU needs such an operation; the dest QEMU does not
> need any change for TLS support, because all channels are established
> synchronously there, so all the TLS magic is already properly
> handled by migration_tls_channel_process_incoming().
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/postcopy-ram.c | 60 +++++++++++++++++++++++++++++++++++-----
>  migration/trace-events   |  1 +
>  2 files changed, 54 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> index ab2a50cf45..f5ba176862 100644
> --- a/migration/postcopy-ram.c
> +++ b/migration/postcopy-ram.c
> @@ -36,6 +36,7 @@
>  #include "socket.h"
>  #include "qemu-file-channel.h"
>  #include "yank_functions.h"
> +#include "tls.h"
>  
>  /* Arbitrary limit on size of each discard command,
>   * keeps them around ~200 bytes
> @@ -1552,15 +1553,15 @@ bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file)
>      return true;
>  }
>  
> +/*
> + * Setup the postcopy preempt channel with the IOC.  If ERROR is specified,
> + * setup the error instead.  This helper will free the ERROR if specified.
> + */
>  static void
> -postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
> +postcopy_preempt_send_channel_done(MigrationState *s,
> +                                   QIOChannel *ioc, Error *local_err)
>  {
> -    MigrationState *s = opaque;
> -    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
> -    Error *local_err = NULL;
> -
> -    if (qio_task_propagate_error(task, &local_err)) {
> -        /* Something wrong happened.. */
> +    if (local_err) {
>          migrate_set_error(s, local_err);
>          error_free(local_err);
>      } else {
> @@ -1574,6 +1575,51 @@ postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
>       * postcopy_qemufile_src to know whether it failed or not.
>       */
>      qemu_sem_post(&s->postcopy_qemufile_src_sem);
> +}
> +
> +static void
> +postcopy_preempt_tls_handshake(QIOTask *task, gpointer opaque)
> +{
> +    MigrationState *s = opaque;
> +    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));

If using g_autoptr(QIOChannel) ioc = ...

> +    Error *err = NULL;

local_err is normal naming 

> +
> +    qio_task_propagate_error(task, &err);
> +    postcopy_preempt_send_channel_done(s, ioc, err);
> +    object_unref(OBJECT(ioc));

...not needed with g_autoptr

> +}
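
For concreteness, a minimal sketch of the g_autoptr form being suggested for
this callback (an illustration, not the posted patch; it assumes QIOChannel has
autoptr cleanup registered, as QOM types declared via OBJECT_DECLARE_TYPE do):

    static void
    postcopy_preempt_tls_handshake(QIOTask *task, gpointer opaque)
    {
        /* Reference is dropped automatically when ioc goes out of scope */
        g_autoptr(QIOChannel) ioc = QIO_CHANNEL(qio_task_get_source(task));
        MigrationState *s = opaque;
        Error *local_err = NULL;

        qio_task_propagate_error(task, &local_err);
        postcopy_preempt_send_channel_done(s, ioc, local_err);
    }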
> +
> +static void
> +postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
> +{
> +    MigrationState *s = opaque;
> +    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));

If you use g_autoptr(QIOChannel)

> +    QIOChannelTLS *tioc;
> +    Error *local_err = NULL;
> +
> +    if (qio_task_propagate_error(task, &local_err)) {
> +        assert(local_err);

I don't think we really need to add these asserts everywhere we
handle a failure path, do we ?

> +        goto out;
> +    }
> +
> +    if (migrate_channel_requires_tls(ioc)) {
> +        tioc = migration_tls_client_create(s, ioc, s->hostname, &local_err);
> +        if (!tioc) {
> +            assert(local_err);
> +            goto out;
> +        }
> +        trace_postcopy_preempt_tls_handshake();
> +        qio_channel_set_name(QIO_CHANNEL(tioc), "migration-tls-preempt");
> +        qio_channel_tls_handshake(tioc, postcopy_preempt_tls_handshake,
> +                                  s, NULL, NULL);
> +        /* Setup the channel until TLS handshake finished */
> +        object_unref(OBJECT(ioc));

...not needed with g_autoptr

> +        return;
> +    }
> +
> +out:
> +    /* This handles both good and error cases */
> +    postcopy_preempt_send_channel_done(s, ioc, local_err);
>      object_unref(OBJECT(ioc));

...also not needed with g_autoptr

>  }
>  
> diff --git a/migration/trace-events b/migration/trace-events
> index b21d5f371f..00ab2e1b96 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -287,6 +287,7 @@ postcopy_request_shared_page(const char *sharer, const char *rb, uint64_t rb_off
>  postcopy_request_shared_page_present(const char *sharer, const char *rb, uint64_t rb_offset) "%s already %s offset 0x%"PRIx64
>  postcopy_wake_shared(uint64_t client_addr, const char *rb) "at 0x%"PRIx64" in %s"
>  postcopy_page_req_del(void *addr, int count) "resolved page req %p total %d"
> +postcopy_preempt_tls_handshake(void) ""
>  postcopy_preempt_new_channel(void) ""
>  postcopy_preempt_thread_entry(void) ""
>  postcopy_preempt_thread_exit(void) ""
> -- 
> 2.32.0
> 

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v4 17/19] tests: Add postcopy tls migration test
  2022-03-31 15:08 ` [PATCH v4 17/19] tests: Add postcopy tls migration test Peter Xu
@ 2022-04-20 11:39   ` Daniel P. Berrangé
  2022-04-20 20:15     ` Peter Xu
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 11:39 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:55AM -0400, Peter Xu wrote:
> We just added TLS tests for precopy but not postcopy.  Add the
> corresponding test for vanilla postcopy.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  tests/qtest/migration-test.c | 43 +++++++++++++++++++++++++++++++-----
>  1 file changed, 37 insertions(+), 6 deletions(-)
> 
> diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
> index d9f444ea14..80c4244871 100644
> --- a/tests/qtest/migration-test.c
> +++ b/tests/qtest/migration-test.c
> @@ -481,6 +481,10 @@ typedef struct {
>      bool only_target;
>      /* Use dirty ring if true; dirty logging otherwise */
>      bool use_dirty_ring;
> +    /* Whether use TLS channels for postcopy test? */
> +    bool postcopy_tls;
> +    /* Used only if postcopy_tls==true, to cache the data object */
> +    void *postcopy_tls_data;
>      const char *opts_source;
>      const char *opts_target;
>  } MigrateStart;
> @@ -980,6 +984,10 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
>          return -1;
>      }
>  
> +    if (args->postcopy_tls) {
> +        args->postcopy_tls_data = test_migrate_tls_psk_start_match(from, to);
> +    }
> +
>      migrate_set_capability(from, "postcopy-ram", true);
>      migrate_set_capability(to, "postcopy-ram", true);
>      migrate_set_capability(to, "postcopy-blocktime", true);
> @@ -1004,7 +1012,8 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
>      return 0;
>  }
>  
> -static void migrate_postcopy_complete(QTestState *from, QTestState *to)
> +static void migrate_postcopy_complete(QTestState *from, QTestState *to,
> +                                      MigrateStart *args)
>  {
>      wait_for_migration_complete(from);
>  
> @@ -1015,19 +1024,38 @@ static void migrate_postcopy_complete(QTestState *from, QTestState *to)
>          read_blocktime(to);
>      }
>  
> +    if (args->postcopy_tls) {
> +        assert(args->postcopy_tls_data);
> +        test_migrate_tls_psk_finish(from, to, args->postcopy_tls_data);
> +        args->postcopy_tls_data = NULL;
> +    }
> +
>      test_migrate_end(from, to, true);
>  }
>  
> -static void test_postcopy(void)
> +static void test_postcopy_common(MigrateStart *args)
>  {
> -    MigrateStart args = {};
>      QTestState *from, *to;
>  
> -    if (migrate_postcopy_prepare(&from, &to, &args)) {
> +    if (migrate_postcopy_prepare(&from, &to, args)) {
>          return;
>      }
>      migrate_postcopy_start(from, to);
> -    migrate_postcopy_complete(from, to);
> +    migrate_postcopy_complete(from, to, args);
> +}
> +
> +static void test_postcopy(void)
> +{
> +    MigrateStart args = { };
> +
> +    test_postcopy_common(&args);
> +}
> +
> +static void test_postcopy_tls(void)

test_postcopy_tls_psk() 

> +{
> +    MigrateStart args = { .postcopy_tls = true };
> +
> +    test_postcopy_common(&args);
>  }
>  
>  static void test_postcopy_recovery(void)
> @@ -1089,7 +1117,7 @@ static void test_postcopy_recovery(void)
>      /* Restore the postcopy bandwidth to unlimited */
>      migrate_set_parameter_int(from, "max-postcopy-bandwidth", 0);
>  
> -    migrate_postcopy_complete(from, to);
> +    migrate_postcopy_complete(from, to, &args);
>  }
>  
>  static void test_baddest(void)
> @@ -2134,6 +2162,9 @@ int main(int argc, char **argv)
>  
>      qtest_add_func("/migration/postcopy/unix", test_postcopy);

Rename this to /migration/postcopy/unix/plain

>      qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
> +#ifdef CONFIG_GNUTLS
> +    qtest_add_func("/migration/postcopy/tls", test_postcopy_tls);

And this to /migration/postcopy/unix/tls/psk  so we match the precopy test
naming convention I started
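
So the registrations would end up looking something like this (sketch only,
together with the test_postcopy_tls_psk() rename suggested above):

  qtest_add_func("/migration/postcopy/unix/plain", test_postcopy);
  #ifdef CONFIG_GNUTLS
  qtest_add_func("/migration/postcopy/unix/tls/psk", test_postcopy_tls_psk);
  #endif /* CONFIG_GNUTLS */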

> +#endif /* CONFIG_GNUTLS */
>      qtest_add_func("/migration/bad_dest", test_baddest);
>      qtest_add_func("/migration/precopy/unix/plain", test_precopy_unix_plain);
>      qtest_add_func("/migration/precopy/unix/xbzrle", test_precopy_unix_xbzrle);
> -- 
> 2.32.0
> 

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 18/19] tests: Add postcopy tls recovery migration test
  2022-03-31 15:08 ` [PATCH v4 18/19] tests: Add postcopy tls recovery " Peter Xu
@ 2022-04-20 11:42   ` Daniel P. Berrangé
  2022-04-20 20:38     ` Peter Xu
  0 siblings, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 11:42 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:56AM -0400, Peter Xu wrote:
> It's easy to build this upon the postcopy tls test.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  tests/qtest/migration-test.c | 27 +++++++++++++++++++++------
>  1 file changed, 21 insertions(+), 6 deletions(-)
> 
> diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
> index 80c4244871..7288c64e97 100644
> --- a/tests/qtest/migration-test.c
> +++ b/tests/qtest/migration-test.c
> @@ -1058,15 +1058,15 @@ static void test_postcopy_tls(void)
>      test_postcopy_common(&args);
>  }
>  
> -static void test_postcopy_recovery(void)
> +static void test_postcopy_recovery_common(MigrateStart *args)
>  {
> -    MigrateStart args = {
> -        .hide_stderr = true,
> -    };
>      QTestState *from, *to;
>      g_autofree char *uri = NULL;
>  
> -    if (migrate_postcopy_prepare(&from, &to, &args)) {
> +    /* Always hide errors for postcopy recover tests since they're expected */
> +    args->hide_stderr = true;
> +
> +    if (migrate_postcopy_prepare(&from, &to, args)) {
>          return;
>      }
>  
> @@ -1117,7 +1117,21 @@ static void test_postcopy_recovery(void)
>      /* Restore the postcopy bandwidth to unlimited */
>      migrate_set_parameter_int(from, "max-postcopy-bandwidth", 0);
>  
> -    migrate_postcopy_complete(from, to, &args);
> +    migrate_postcopy_complete(from, to, args);
> +}
> +
> +static void test_postcopy_recovery(void)
> +{
> +    MigrateStart args = { };
> +
> +    test_postcopy_recovery_common(&args);
> +}
> +
> +static void test_postcopy_recovery_tls(void)
> +{
> +    MigrateStart args = { .postcopy_tls = true };
> +
> +    test_postcopy_recovery_common(&args);
>  }
>  
>  static void test_baddest(void)
> @@ -2164,6 +2178,7 @@ int main(int argc, char **argv)
>      qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
>  #ifdef CONFIG_GNUTLS
>      qtest_add_func("/migration/postcopy/tls", test_postcopy_tls);
> +    qtest_add_func("/migration/postcopy/tls/recovery", test_postcopy_recovery_tls);

It is important that a test name is *NOT* a prefix for another
test name, as that makes it harder to selectively run individual
tests with '-p' as it does a pattern match.

Bearing in mind my comments on the previous patch, I think we want

    /migration/postcopy/recovery/plain
    /migration/postcopy/recovery/tls/psk

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 19/19] tests: Add postcopy preempt tests
  2022-03-31 15:08 ` [PATCH v4 19/19] tests: Add postcopy preempt tests Peter Xu
  2022-03-31 15:25   ` Peter Xu
@ 2022-04-20 11:43   ` Daniel P. Berrangé
  2022-04-20 20:51     ` Peter Xu
  1 sibling, 1 reply; 54+ messages in thread
From: Daniel P. Berrangé @ 2022-04-20 11:43 UTC (permalink / raw)
  To: Peter Xu
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Thu, Mar 31, 2022 at 11:08:57AM -0400, Peter Xu wrote:
> Four tests are added for preempt mode:
> 
>   - Postcopy default
>   - Postcopy tls
>   - Postcopy recovery
>   - Postcopy tls+recovery
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  tests/qtest/migration-test.c | 49 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 49 insertions(+)
> 
> diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
> index 7288c64e97..7188503ae1 100644
> --- a/tests/qtest/migration-test.c
> +++ b/tests/qtest/migration-test.c
> @@ -477,6 +477,7 @@ typedef struct {
>       */
>      bool hide_stderr;
>      bool use_shmem;
> +    bool postcopy_preempt;
>      /* only launch the target process */
>      bool only_target;
>      /* Use dirty ring if true; dirty logging otherwise */
> @@ -992,6 +993,11 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
>      migrate_set_capability(to, "postcopy-ram", true);
>      migrate_set_capability(to, "postcopy-blocktime", true);
>  
> +    if (args->postcopy_preempt) {
> +        migrate_set_capability(from, "postcopy-preempt", true);
> +        migrate_set_capability(to, "postcopy-preempt", true);
> +    }
> +
>      /* We want to pick a speed slow enough that the test completes
>       * quickly, but that it doesn't complete precopy even on a slow
>       * machine, so also set the downtime.
> @@ -1058,6 +1064,25 @@ static void test_postcopy_tls(void)
>      test_postcopy_common(&args);
>  }
>  
> +static void test_postcopy_preempt(void)
> +{
> +    MigrateStart args = {
> +        .postcopy_preempt = true,
> +    };
> +
> +    test_postcopy_common(&args);
> +}
> +
> +static void test_postcopy_preempt_tls(void)
> +{
> +    MigrateStart args = {
> +        .postcopy_preempt = true,
> +        .postcopy_tls = true,
> +    };
> +
> +    test_postcopy_common(&args);
> +}
> +
>  static void test_postcopy_recovery_common(MigrateStart *args)
>  {
>      QTestState *from, *to;
> @@ -1134,6 +1159,24 @@ static void test_postcopy_recovery_tls(void)
>      test_postcopy_recovery_common(&args);
>  }
>  
> +static void test_postcopy_preempt_recovery(void)
> +{
> +    MigrateStart args = { .postcopy_preempt = true };
> +
> +    test_postcopy_recovery_common(&args);
> +}
> +
> +/* This contains preempt+recovery+tls test altogether */
> +static void test_postcopy_preempt_all(void)
> +{
> +    MigrateStart args = {
> +        .postcopy_preempt = true,
> +        .postcopy_tls = true,
> +    };
> +
> +    test_postcopy_recovery_common(&args);
> +}
> +
>  static void test_baddest(void)
>  {
>      MigrateStart args = {
> @@ -2176,6 +2219,12 @@ int main(int argc, char **argv)
>  
>      qtest_add_func("/migration/postcopy/unix", test_postcopy);
>      qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
> +    qtest_add_func("/migration/postcopy/preempt/unix", test_postcopy_preempt);
> +    qtest_add_func("/migration/postcopy/preempt/recovery",
> +                   test_postcopy_preempt_recovery);
> +    qtest_add_func("/migration/postcopy/preempt/tls", test_postcopy_preempt_tls);
> +    qtest_add_func("/migration/postcopy/preempt/tls+recovery",
> +                   test_postcopy_preempt_all);

On test naming again I think we want these four tests to have names

    /migration/postcopy/preempt/plain
    /migration/postcopy/preempt/tls/psk
    /migration/postcopy/preempt/recovery/plain
    /migration/postcopy/preempt/recovery/tls/psk
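
i.e. roughly, as an untested sketch reusing the helper names from this patch
(and assuming the TLS ones stay under CONFIG_GNUTLS like the existing tests):

  qtest_add_func("/migration/postcopy/preempt/plain", test_postcopy_preempt);
  qtest_add_func("/migration/postcopy/preempt/recovery/plain",
                 test_postcopy_preempt_recovery);
  #ifdef CONFIG_GNUTLS
  qtest_add_func("/migration/postcopy/preempt/tls/psk",
                 test_postcopy_preempt_tls);
  qtest_add_func("/migration/postcopy/preempt/recovery/tls/psk",
                 test_postcopy_preempt_all);
  #endif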


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname
  2022-04-20 10:34   ` Daniel P. Berrangé
@ 2022-04-20 18:19     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 18:19 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 11:34:16AM +0100, Daniel P. Berrangé wrote:
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 695f0f2900..281d33326b 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -1809,6 +1809,11 @@ static void migrate_fd_cleanup(MigrationState *s)
> >      qemu_bh_delete(s->cleanup_bh);
> >      s->cleanup_bh = NULL;
> >  
> > +    if (s->hostname) {
> > +        g_free(s->hostname);
> > +        s->hostname = NULL;
> > +    }
> 
> FWIW there's a marginally more concise pattern:
> 
>   g_clear_pointer(&s->hostname, g_free)

Sounds good.

> 
> 
> Either way
> 
>    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 04/19] migration: Move migrate_allow_multifd and helpers into migration.c
  2022-04-20 10:41   ` Daniel P. Berrangé
@ 2022-04-20 19:30     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 19:30 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 11:41:30AM +0100, Daniel P. Berrangé wrote:
> On Thu, Mar 31, 2022 at 11:08:42AM -0400, Peter Xu wrote:
> > This variable, along with its helpers, is used to detect whether multiple
> > channel will be supported for migration.  In follow up patches, there'll be
> > other capability that requires multi-channels.  Hence move it outside multifd
> > specific code and make it public.  Meanwhile rename it from "multifd" to
> > "multi_channels" to show its real meaning.
> 
> FWIW, I would generally suggest separating the rename from the code
> movement into distinct patches.

Okay.  To keep Dave's R-b intact, I'll leave this one as-is for now, but
I'll remember it next time.

> 
> > 
> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  migration/migration.c | 22 +++++++++++++++++-----
> >  migration/migration.h |  3 +++
> >  migration/multifd.c   | 19 ++++---------------
> >  migration/multifd.h   |  2 --
> >  4 files changed, 24 insertions(+), 22 deletions(-)
> > 
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 281d33326b..596d3d30b4 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -180,6 +180,18 @@ static int migration_maybe_pause(MigrationState *s,
> >                                   int new_state);
> >  static void migrate_fd_cancel(MigrationState *s);
> >  
> > +static bool migrate_allow_multi_channels = true;
> 
This is a pre-existing thing, but I'm curious why we default this to
'true', when the first thing qemu_start_incoming_migration() and
qmp_migrate() do is set it to 'false' and then selectively
put it back to 'true'.

Agreed, FWICT it's not needed, it just doesn't hurt either.

> 
> 
> >  static gint page_request_addr_cmp(gconstpointer ap, gconstpointer bp)
> >  {
> >      uintptr_t a = (uintptr_t) ap, b = (uintptr_t) bp;
> > @@ -469,12 +481,12 @@ static void qemu_start_incoming_migration(const char *uri, Error **errp)
> >  {
> >      const char *p = NULL;
> >  
> > -    migrate_protocol_allow_multifd(false); /* reset it anyway */
> > +    migrate_protocol_allow_multi_channels(false); /* reset it anyway */
> >      qapi_event_send_migration(MIGRATION_STATUS_SETUP);
> >      if (strstart(uri, "tcp:", &p) ||
> >          strstart(uri, "unix:", NULL) ||
> >          strstart(uri, "vsock:", NULL)) {
> > -        migrate_protocol_allow_multifd(true);
> > +        migrate_protocol_allow_multi_channels(true);
> >          socket_start_incoming_migration(p ? p : uri, errp);
> 
> 
> 
> > @@ -2324,11 +2336,11 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
> >          }
> >      }
> >  
> > -    migrate_protocol_allow_multifd(false);
> > +    migrate_protocol_allow_multi_channels(false);
> >      if (strstart(uri, "tcp:", &p) ||
> >          strstart(uri, "unix:", NULL) ||
> >          strstart(uri, "vsock:", NULL)) {
> > -        migrate_protocol_allow_multifd(true);
> > +        migrate_protocol_allow_multi_channels(true);
> >          socket_start_outgoing_migration(s, p ? p : uri, &local_err);
> >  #ifdef CONFIG_RDMA
> >      } else if (strstart(uri, "rdma:", &p)) {
> 
> Regardless of comments above
> 
>   Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 08/19] migration: Add postcopy-preempt capability
  2022-04-20 10:51   ` Daniel P. Berrangé
@ 2022-04-20 19:31     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 19:31 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 11:51:28AM +0100, Daniel P. Berrangé wrote:
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index 18e2610e88..3523f23386 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -463,6 +463,12 @@
> >  #                       procedure starts. The VM RAM is saved with running VM.
> >  #                       (since 6.0)
> >  #
> > +# @postcopy-preempt: If enabled, the migration process will allow postcopy
> > +#                    requests to preempt precopy stream, so postcopy requests
> > +#                    will be handled faster.  This is a performance feature and
> > +#                    should not affect the correctness of postcopy migration.
> > +#                    (since 7.0)
> 
> Now 7.1

Fixed.

> 
>   Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 10/19] migration: Postcopy preemption enablement
  2022-04-20 11:05   ` Daniel P. Berrangé
@ 2022-04-20 19:39     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 19:39 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 12:05:24PM +0100, Daniel P. Berrangé wrote:
> On Thu, Mar 31, 2022 at 11:08:48AM -0400, Peter Xu wrote:
> > This patch enables postcopy-preempt feature.
> > 
> > It contains two major changes to the migration logic:
> > 
> > (1) Postcopy requests are now sent via a different socket from precopy
> >     background migration stream, so as to be isolated from very high page
> >     request delays.
> > 
> > (2) For huge page enabled hosts: when there's postcopy requests, they can now
> >     intercept a partial sending of huge host pages on src QEMU.
> > 
> > After this patch, we'll live migrate a VM with two channels for postcopy: (1)
> > PRECOPY channel, which is the default channel that transfers background pages;
> > and (2) POSTCOPY channel, which only transfers requested pages.
> > 
> > There's no strict rule of which channel to use, e.g., if a requested page is
> > already being transferred on precopy channel, then we will keep using the same
> > precopy channel to transfer the page even if it's explicitly requested.  In 99%
> > of the cases we'll prioritize the channels so we send requested page via the
> > postcopy channel as long as possible.
> > 
> > On the source QEMU, when we found a postcopy request, we'll interrupt the
> > PRECOPY channel sending process and quickly switch to the POSTCOPY channel.
> > After we serviced all the high priority postcopy pages, we'll switch back to
> > PRECOPY channel so that we'll continue to send the interrupted huge page again.
> > There's no new thread introduced on src QEMU.
> 
> Implicit in this approach is that the delay in sending postcopy
> OOB pages is from the pending socket buffers the kernel already
> has, and not any delay caused by the QEMU sending thread being
> busy doing other stuff.

Yes.

> 
> Is there any scenario in which the QEMU sending thread is stalled
> in sendmsg() with a 1GB huge page waiting for the kernel to
> get space in the socket outgoing buffer ?

Another yes..

It doesn't necessarily have to be while sending a 1GB huge page; the guest
can be using small pages and IMHO we could still get stuck in sendmsg() for
a precopy small page while there are postcopy requests in the queue.

We can't solve this as long as we keep using a single thread for sending
pages.

This patchset doesn't solve this issue yet.  It's actually the chunk
discussed and mentioned in the cover letter too, in the section "Avoid
precopy write() blocks postcopy", as a TODO item.

Logically in the future we could try to make two or more sender threads so
postcopy pages can use a separate sender thread.

Note that this change will _not_ require an interface change, either on the
qemu cmdline or in the migration protocol, because this patchset should
already handle all of the migration protocol even for that.  If it works
well, we could get a pure speedup with further reduced latency when preempt
mode is enabled, compared to before.

The other thing is that I never measured such an effect, so I can't tell how
it would perform in the end.  We need more work on top if we'd like to pursue
it, mostly on doing proper synchronization on the senders.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 14/19] migration: Add helpers to detect TLS capability
  2022-04-20 11:10   ` Daniel P. Berrangé
@ 2022-04-20 19:52     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 19:52 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 12:10:14PM +0100, Daniel P. Berrangé wrote:
> On Thu, Mar 31, 2022 at 11:08:52AM -0400, Peter Xu wrote:
> > Add migrate_tls_enabled() to detect whether TLS is configured.
> > 
> > Add migrate_channel_requires_tls() to detect whether the specific channel
> > requires TLS.
> > 
> > No functional change intended.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  migration/channel.c   | 10 ++--------
> >  migration/migration.c | 17 +++++++++++++++++
> >  migration/migration.h |  4 ++++
> >  migration/multifd.c   |  7 +------
> >  4 files changed, 24 insertions(+), 14 deletions(-)
> > 
> > diff --git a/migration/channel.c b/migration/channel.c
> > index c6a8dcf1d7..36e59eaeec 100644
> > --- a/migration/channel.c
> > +++ b/migration/channel.c
> > @@ -38,10 +38,7 @@ void migration_channel_process_incoming(QIOChannel *ioc)
> >      trace_migration_set_incoming_channel(
> >          ioc, object_get_typename(OBJECT(ioc)));
> >  
> > -    if (s->parameters.tls_creds &&
> > -        *s->parameters.tls_creds &&
> > -        !object_dynamic_cast(OBJECT(ioc),
> > -                             TYPE_QIO_CHANNEL_TLS)) {
> > +    if (migrate_channel_requires_tls(ioc)) {
> >          migration_tls_channel_process_incoming(s, ioc, &local_err);
> >      } else {
> >          migration_ioc_register_yank(ioc);
> > @@ -71,10 +68,7 @@ void migration_channel_connect(MigrationState *s,
> >          ioc, object_get_typename(OBJECT(ioc)), hostname, error);
> >  
> >      if (!error) {
> > -        if (s->parameters.tls_creds &&
> > -            *s->parameters.tls_creds &&
> > -            !object_dynamic_cast(OBJECT(ioc),
> > -                                 TYPE_QIO_CHANNEL_TLS)) {
> > +        if (migrate_channel_requires_tls(ioc)) {
> >              migration_tls_channel_connect(s, ioc, hostname, &error);
> >  
> >              if (!error) {
> > diff --git a/migration/migration.c b/migration/migration.c
> > index ee3df9e229..899084f993 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -49,6 +49,7 @@
> >  #include "trace.h"
> >  #include "exec/target_page.h"
> >  #include "io/channel-buffer.h"
> > +#include "io/channel-tls.h"
> >  #include "migration/colo.h"
> >  #include "hw/boards.h"
> >  #include "hw/qdev-properties.h"
> > @@ -4251,6 +4252,22 @@ void migration_global_dump(Monitor *mon)
> >                     ms->clear_bitmap_shift);
> >  }
> >  
> > +bool migrate_tls_enabled(void)
> > +{
> > +    MigrationState *s = migrate_get_current();
> > +
> > +    return s->parameters.tls_creds && *s->parameters.tls_creds;
> > +}
> > +
> > +bool migrate_channel_requires_tls(QIOChannel *ioc)
> > +{
> > +    if (!migrate_tls_enabled()) {
> 
> This is the only place migrate_tls_enabled is called. Does it
> really need to exist as an exported method, as opposed to
> inlining it here ?

IMHO the helper could help code readers more easily understand when TLS is
enabled, and it's not super obvious since TLS doesn't have a capability bit
bound to it.  No strong opinion, though.

> 
> > +        return false;
> > +    }
> > +
> > +    return !object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_TLS);
> > +}
> > +
> >  #define DEFINE_PROP_MIG_CAP(name, x)             \
> >      DEFINE_PROP_BOOL(name, MigrationState, enabled_capabilities[x], false)
> >  
> > diff --git a/migration/migration.h b/migration/migration.h
> > index 6ee520642f..8b9ad7fe31 100644
> > --- a/migration/migration.h
> > +++ b/migration/migration.h
> > @@ -436,6 +436,10 @@ bool migrate_use_events(void);
> >  bool migrate_postcopy_blocktime(void);
> >  bool migrate_background_snapshot(void);
> >  bool migrate_postcopy_preempt(void);
> > +/* Whether TLS is enabled for migration? */
> > +bool migrate_tls_enabled(void);
> > +/* Whether the QIO channel requires further TLS handshake? */
> > +bool migrate_channel_requires_tls(QIOChannel *ioc);
> 
> How about having it in tls.{c,h} as  'migration_tls_channel_enabled()' ?

I can do the move, but the new name can be confusing when we read it in
the code; it'll look like:

  if (migration_tls_channel_enabled(ioc)) {
    /* create the tls channel */
    ...
  }

The thing is that migration_tls_channel_enabled() on a TLS channel will
return false.. which seems counter-intuitive.

migrate_channel_requires_tls() feels better, but maybe not by much..
Would migrate_channel_requires_tls_wrapper() be better (but longer..)?

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 15/19] migration: Export tls-[creds|hostname|authz] params to cmdline too
  2022-04-20 11:13   ` Daniel P. Berrangé
@ 2022-04-20 20:01     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 20:01 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 12:13:07PM +0100, Daniel P. Berrangé wrote:
> On Thu, Mar 31, 2022 at 11:08:53AM -0400, Peter Xu wrote:
> > It's useful for specifying tls credentials all in the cmdline (along with
> > the -object tls-creds-*), especially for debugging purpose.
> > 
> > The trick here is we must remember to not free these fields again in the
> > finalize() function of migration object, otherwise it'll cause double-free.
> > 
> > The thing is when destroying an object, we'll first destroy the properties
> > that bound to the object, then the object itself.  To be explicit, when
> > destroy the object in object_finalize() we have such sequence of
> > operations:
> > 
> >     object_property_del_all(obj);
> >     object_deinit(obj, ti);
> > 
> > So after this change the two fields are properly released already even
> > before reaching the finalize() function but in object_property_del_all(),
> > hence we don't need to free them anymore in finalize() or it's double-free.
> 
> 
> I believe this is also fixing a small memory leak

Yes I think so.

I didn't even mention it since it's one tiny global variable and IIUC QEMU
does have other similar cases of keeping vars around.  As long as it won't
grow dynamically, it doesn't sound like a huge problem.

But yeah, doing a proper free is still ideal.  So I'll add one more sentence
to the commit message in the next version.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 16/19] migration: Enable TLS for preempt channel
  2022-04-20 11:35   ` Daniel P. Berrangé
@ 2022-04-20 20:10     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 20:10 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 12:35:21PM +0100, Daniel P. Berrangé wrote:
> On Thu, Mar 31, 2022 at 11:08:54AM -0400, Peter Xu wrote:
> > This patch is based on the async preempt channel creation.  It continues
> > wiring up the new channel with TLS handshake to destination when enabled.
> > 
> > Note that only the src QEMU needs such operation; the dest QEMU does not
> > need any change for TLS support due to the fact that all channels are
> > established synchronously there, so all the TLS magic is already properly
> > handled by migration_tls_channel_process_incoming().
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  migration/postcopy-ram.c | 60 +++++++++++++++++++++++++++++++++++-----
> >  migration/trace-events   |  1 +
> >  2 files changed, 54 insertions(+), 7 deletions(-)
> > 
> > diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> > index ab2a50cf45..f5ba176862 100644
> > --- a/migration/postcopy-ram.c
> > +++ b/migration/postcopy-ram.c
> > @@ -36,6 +36,7 @@
> >  #include "socket.h"
> >  #include "qemu-file-channel.h"
> >  #include "yank_functions.h"
> > +#include "tls.h"
> >  
> >  /* Arbitrary limit on size of each discard command,
> >   * keeps them around ~200 bytes
> > @@ -1552,15 +1553,15 @@ bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file)
> >      return true;
> >  }
> >  
> > +/*
> > + * Setup the postcopy preempt channel with the IOC.  If ERROR is specified,
> > + * setup the error instead.  This helper will free the ERROR if specified.
> > + */
> >  static void
> > -postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
> > +postcopy_preempt_send_channel_done(MigrationState *s,
> > +                                   QIOChannel *ioc, Error *local_err)
> >  {
> > -    MigrationState *s = opaque;
> > -    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
> > -    Error *local_err = NULL;
> > -
> > -    if (qio_task_propagate_error(task, &local_err)) {
> > -        /* Something wrong happened.. */
> > +    if (local_err) {
> >          migrate_set_error(s, local_err);
> >          error_free(local_err);
> >      } else {
> > @@ -1574,6 +1575,51 @@ postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
> >       * postcopy_qemufile_src to know whether it failed or not.
> >       */
> >      qemu_sem_post(&s->postcopy_qemufile_src_sem);
> > +}
> > +
> > +static void
> > +postcopy_preempt_tls_handshake(QIOTask *task, gpointer opaque)
> > +{
> > +    MigrationState *s = opaque;
> > +    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
> 
> If using g_autoptr(QIOChannel) ioc = ...

New magic learned..

> 
> > +    Error *err = NULL;
> 
> local_err is normal naming 

OK.

> 
> > +
> > +    qio_task_propagate_error(task, &err);
> > +    postcopy_preempt_send_channel_done(s, ioc, err);
> > +    object_unref(OBJECT(ioc));
> 
> ...not needed with g_autoptr
> 
> > +}
> > +
> > +static void
> > +postcopy_preempt_send_channel_new(QIOTask *task, gpointer opaque)
> > +{
> > +    MigrationState *s = opaque;
> > +    QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
> 
> If you use g_autoptr(QIOChannel)

Will use it here too.

> 
> > +    QIOChannelTLS *tioc;
> > +    Error *local_err = NULL;
> > +
> > +    if (qio_task_propagate_error(task, &local_err)) {
> > +        assert(local_err);
> 
> I don't think we really need to add these asserts everywhere we
> handle a failure path do we ?

Maybe I'm just over-cautious, yeah let me drop those.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 17/19] tests: Add postcopy tls migration test
  2022-04-20 11:39   ` Daniel P. Berrangé
@ 2022-04-20 20:15     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 20:15 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 12:39:07PM +0100, Daniel P. Berrangé wrote:
> On Thu, Mar 31, 2022 at 11:08:55AM -0400, Peter Xu wrote:
> > We just added TLS tests for precopy but not postcopy.  Add the
> > corresponding test for vanilla postcopy.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  tests/qtest/migration-test.c | 43 +++++++++++++++++++++++++++++++-----
> >  1 file changed, 37 insertions(+), 6 deletions(-)
> > 
> > diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
> > index d9f444ea14..80c4244871 100644
> > --- a/tests/qtest/migration-test.c
> > +++ b/tests/qtest/migration-test.c
> > @@ -481,6 +481,10 @@ typedef struct {
> >      bool only_target;
> >      /* Use dirty ring if true; dirty logging otherwise */
> >      bool use_dirty_ring;
> > +    /* Whether use TLS channels for postcopy test? */
> > +    bool postcopy_tls;
> > +    /* Used only if postcopy_tls==true, to cache the data object */
> > +    void *postcopy_tls_data;
> >      const char *opts_source;
> >      const char *opts_target;
> >  } MigrateStart;
> > @@ -980,6 +984,10 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
> >          return -1;
> >      }
> >  
> > +    if (args->postcopy_tls) {
> > +        args->postcopy_tls_data = test_migrate_tls_psk_start_match(from, to);
> > +    }
> > +
> >      migrate_set_capability(from, "postcopy-ram", true);
> >      migrate_set_capability(to, "postcopy-ram", true);
> >      migrate_set_capability(to, "postcopy-blocktime", true);
> > @@ -1004,7 +1012,8 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
> >      return 0;
> >  }
> >  
> > -static void migrate_postcopy_complete(QTestState *from, QTestState *to)
> > +static void migrate_postcopy_complete(QTestState *from, QTestState *to,
> > +                                      MigrateStart *args)
> >  {
> >      wait_for_migration_complete(from);
> >  
> > @@ -1015,19 +1024,38 @@ static void migrate_postcopy_complete(QTestState *from, QTestState *to)
> >          read_blocktime(to);
> >      }
> >  
> > +    if (args->postcopy_tls) {
> > +        assert(args->postcopy_tls_data);
> > +        test_migrate_tls_psk_finish(from, to, args->postcopy_tls_data);
> > +        args->postcopy_tls_data = NULL;
> > +    }
> > +
> >      test_migrate_end(from, to, true);
> >  }
> >  
> > -static void test_postcopy(void)
> > +static void test_postcopy_common(MigrateStart *args)
> >  {
> > -    MigrateStart args = {};
> >      QTestState *from, *to;
> >  
> > -    if (migrate_postcopy_prepare(&from, &to, &args)) {
> > +    if (migrate_postcopy_prepare(&from, &to, args)) {
> >          return;
> >      }
> >      migrate_postcopy_start(from, to);
> > -    migrate_postcopy_complete(from, to);
> > +    migrate_postcopy_complete(from, to, args);
> > +}
> > +
> > +static void test_postcopy(void)
> > +{
> > +    MigrateStart args = { };
> > +
> > +    test_postcopy_common(&args);
> > +}
> > +
> > +static void test_postcopy_tls(void)
> 
> test_postcopy_tls_psk() 
> 
> > +{
> > +    MigrateStart args = { .postcopy_tls = true };
> > +
> > +    test_postcopy_common(&args);
> >  }
> >  
> >  static void test_postcopy_recovery(void)
> > @@ -1089,7 +1117,7 @@ static void test_postcopy_recovery(void)
> >      /* Restore the postcopy bandwidth to unlimited */
> >      migrate_set_parameter_int(from, "max-postcopy-bandwidth", 0);
> >  
> > -    migrate_postcopy_complete(from, to);
> > +    migrate_postcopy_complete(from, to, &args);
> >  }
> >  
> >  static void test_baddest(void)
> > @@ -2134,6 +2162,9 @@ int main(int argc, char **argv)
> >  
> >      qtest_add_func("/migration/postcopy/unix", test_postcopy);
> 
> Rename this to /migration/postcopy/unix/plain
> 
> >      qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
> > +#ifdef CONFIG_GNUTLS
> > +    qtest_add_func("/migration/postcopy/tls", test_postcopy_tls);
> 
> And this to /migration/postcopy/unix/tls/psk  so we match the precopy test
> naming convention I started

I can do all the renamings.

But note that I explicitly didn't add "psk" because for postcopy it's the
same whether we use psk or any other way to do the encryption - we're
testing the tls channel paths, not any specific type of TLS channel.

I wanted to use that trick to make sure people are aware that we don't
really need other types of tls tests for postcopy, because the
tls-type-specific code paths should already be covered by the tls-specific
precopy tests.

I guess I'll add a comment explaining that instead of relying on a vague
name.
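
Something along these lines, perhaps (only a rough sketch of the comment I
have in mind):

  /*
   * Only one TLS flavour (PSK) is used for the postcopy tests on purpose:
   * what we exercise here is the postcopy path over a TLS channel, while
   * the TLS-type-specific code paths are already covered by the precopy
   * TLS tests.
   */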

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 18/19] tests: Add postcopy tls recovery migration test
  2022-04-20 11:42   ` Daniel P. Berrangé
@ 2022-04-20 20:38     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 20:38 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 12:42:15PM +0100, Daniel P. Berrangé wrote:
> On Thu, Mar 31, 2022 at 11:08:56AM -0400, Peter Xu wrote:
> > It's easy to build this upon the postcopy tls test.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  tests/qtest/migration-test.c | 27 +++++++++++++++++++++------
> >  1 file changed, 21 insertions(+), 6 deletions(-)
> > 
> > diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
> > index 80c4244871..7288c64e97 100644
> > --- a/tests/qtest/migration-test.c
> > +++ b/tests/qtest/migration-test.c
> > @@ -1058,15 +1058,15 @@ static void test_postcopy_tls(void)
> >      test_postcopy_common(&args);
> >  }
> >  
> > -static void test_postcopy_recovery(void)
> > +static void test_postcopy_recovery_common(MigrateStart *args)
> >  {
> > -    MigrateStart args = {
> > -        .hide_stderr = true,
> > -    };
> >      QTestState *from, *to;
> >      g_autofree char *uri = NULL;
> >  
> > -    if (migrate_postcopy_prepare(&from, &to, &args)) {
> > +    /* Always hide errors for postcopy recover tests since they're expected */
> > +    args->hide_stderr = true;
> > +
> > +    if (migrate_postcopy_prepare(&from, &to, args)) {
> >          return;
> >      }
> >  
> > @@ -1117,7 +1117,21 @@ static void test_postcopy_recovery(void)
> >      /* Restore the postcopy bandwidth to unlimited */
> >      migrate_set_parameter_int(from, "max-postcopy-bandwidth", 0);
> >  
> > -    migrate_postcopy_complete(from, to, &args);
> > +    migrate_postcopy_complete(from, to, args);
> > +}
> > +
> > +static void test_postcopy_recovery(void)
> > +{
> > +    MigrateStart args = { };
> > +
> > +    test_postcopy_recovery_common(&args);
> > +}
> > +
> > +static void test_postcopy_recovery_tls(void)
> > +{
> > +    MigrateStart args = { .postcopy_tls = true };
> > +
> > +    test_postcopy_recovery_common(&args);
> >  }
> >  
> >  static void test_baddest(void)
> > @@ -2164,6 +2178,7 @@ int main(int argc, char **argv)
> >      qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
> >  #ifdef CONFIG_GNUTLS
> >      qtest_add_func("/migration/postcopy/tls", test_postcopy_tls);
> > +    qtest_add_func("/migration/postcopy/tls/recovery", test_postcopy_recovery_tls);
> 
> It is important that a test name is *NOT* a prefix for another
> test name, as that makes it harder to selectively run individual
> tests with '-p' as it does a pattern match.
> 
> Bearing in mind my comments on the previous patch, I think we want
> 
>     /migration/postcopy/recovery/plain
>     /migration/postcopy/recovery/tls/psk

Again, I can try to take all the suggestions in the next version, but note
that there's no obvious rule for how we name them..  It's:

  /XXX/Feature1
  /XXX/Feature2
  ...

Now what we're saying is: /XXX/Feature1/Feature2 is better than
/XXX/Feature2/Feature1.

And IMHO that really does not matter..

To be strict, for features that are compatible with each other, the only
sane way to write them is:

  /XXX/Feature1
  /XXX/Feature2
  /XXX/Feature1+Feature2

And we make sure there's an ordered list of features.  But then we still
lose the ultimate goal of allowing us to specify one "-p something" to run
any test that has FeatureX enabled.  Sometimes we simply run a superset or
subset, and that's good enough, at least to me.

IOW, we may need something better than the path-form (-p) of qtest to
achieve what you wanted, IMHO.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 19/19] tests: Add postcopy preempt tests
  2022-04-20 11:43   ` Daniel P. Berrangé
@ 2022-04-20 20:51     ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-04-20 20:51 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras Soares Passos,
	Dr . David Alan Gilbert

On Wed, Apr 20, 2022 at 12:43:39PM +0100, Daniel P. Berrangé wrote:
> >  static void test_baddest(void)
> >  {
> >      MigrateStart args = {
> > @@ -2176,6 +2219,12 @@ int main(int argc, char **argv)
> >  
> >      qtest_add_func("/migration/postcopy/unix", test_postcopy);
> >      qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
> > +    qtest_add_func("/migration/postcopy/preempt/unix", test_postcopy_preempt);
> > +    qtest_add_func("/migration/postcopy/preempt/recovery",
> > +                   test_postcopy_preempt_recovery);
> > +    qtest_add_func("/migration/postcopy/preempt/tls", test_postcopy_preempt_tls);
> > +    qtest_add_func("/migration/postcopy/preempt/tls+recovery",
> > +                   test_postcopy_preempt_all);
> 
> On test naming again I think we want these four tests to have names
> 
>     /migration/postcopy/preempt/plain
>     /migration/postcopy/preempt/tls/psk
>     /migration/postcopy/preempt/recovery/plain
>     /migration/postcopy/preempt/recovery/tls/psk

Well, thinking about it again: logically, if we prefer to spell out tls/psk,
then we may also want to spell out preempt/unix for the same reason..

Similarly for all the vanilla postcopy/* tests: if we keep tls/psk, then we
should keep postcopy/unix rather than postcopy/plain.

But let's not bother much with it.. I'll apply all the changes above in the
new version.

Thanks a lot for reviewing the series,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 00/19] migration: Postcopy Preemption
  2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
                   ` (18 preceding siblings ...)
  2022-03-31 15:08 ` [PATCH v4 19/19] tests: Add postcopy preempt tests Peter Xu
@ 2022-04-21 13:57 ` Dr. David Alan Gilbert
  19 siblings, 0 replies; 54+ messages in thread
From: Dr. David Alan Gilbert @ 2022-04-21 13:57 UTC (permalink / raw)
  To: Peter Xu
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange, qemu-devel,
	Juan Quintela

* Peter Xu (peterx@redhat.com) wrote:
> This is v4 of postcopy preempt series.  It can also be found here:
> 
>   https://github.com/xzpeter/qemu/tree/postcopy-preempt
> 
> RFC: https://lore.kernel.org/qemu-devel/20220119080929.39485-1-peterx@redhat.com
> V1:  https://lore.kernel.org/qemu-devel/20220216062809.57179-1-peterx@redhat.com
> V2:  https://lore.kernel.org/qemu-devel/20220301083925.33483-1-peterx@redhat.com
> V3:  https://lore.kernel.org/qemu-devel/20220330213908.26608-1-peterx@redhat.com

I've queued:
migration: Allow migrate-recover to run multiple times
migration: Move channel setup out of postcopy_try_recover()
migration: Export ram_load_postcopy()
migration: Move migrate_allow_multifd and helpers into migration.c
migration: Add pss.postcopy_requested status
migration: Drop multifd tls_hostname cache
migration: Postpone releasing MigrationState.hostname

> v4:
> - Fix a double-free on params.tls-creds when quitting qemu
> - Reorder patches to satisfy per-commit builds
> 
> v3:
> - Rebased to master since many patches landed
> - Fixed one bug on postcopy recovery when preempt enabled, this is only
>   found when I test with TLS+recovery, because TLS changed the timing.
> - Dropped patch:
>   "migration: Fail postcopy preempt with TLS for now"
> - Added patches for TLS:
>   - "migration: Postpone releasing MigrationState.hostname"
>   - "migration: Drop multifd tls_hostname cache"
>   - "migration: Enable TLS for preempt channel"
>   - "migration: Export tls-[creds|hostname|authz] params to cmdline too"
>   - "tests: Add postcopy tls migration test"
>   - "tests: Add postcopy tls recovery migration test"
> - Added two more tests to the preempt test patch (tls, tls+recovery)
> 
> Abstract
> ========
> 
> This series added a new migration capability called "postcopy-preempt".  It can
> be enabled when postcopy is enabled, and it'll simply (but greatly) speed up
> postcopy page requests handling process.
> 
> Below are some initial postcopy page request latency measurements after the
> new series applied.
> 
> For each page size, I measured page request latency for three cases:
> 
>   (a) Vanilla:                the old postcopy
>   (b) Preempt no-break-huge:  preempt enabled, x-postcopy-preempt-break-huge=off
>   (c) Preempt full:           preempt enabled, x-postcopy-preempt-break-huge=on
>                               (this is the default option when preempt enabled)
> 
> Here x-postcopy-preempt-break-huge parameter is just added in v2 so as to
> conditionally disable the behavior to break sending a precopy huge page for
> debugging purpose.  So when it's off, postcopy will not preempt precopy
> sending a huge page, but still postcopy will use its own channel.
> 
> I tested it separately to give a rough idea on which part of the change
> helped how much of it.  The overall benefit should be the comparison
> between case (a) and (c).
> 
>   |-----------+---------+-----------------------+--------------|
>   | Page size | Vanilla | Preempt no-break-huge | Preempt full |
>   |-----------+---------+-----------------------+--------------|
>   | 4K        |   10.68 |               N/A [*] |         0.57 |
>   | 2M        |   10.58 |                  5.49 |         5.02 |
>   | 1G        | 2046.65 |               933.185 |      649.445 |
>   |-----------+---------+-----------------------+--------------|
>   [*]: This case is N/A because 4K page does not contain huge page at all
> 
> [1] https://github.com/xzpeter/small-stuffs/blob/master/tools/huge_vm/uffd-latency.bpf
> 
> TODO List
> =========
> 
> Avoid precopy write() blocks postcopy
> -------------------------------------
> 
> I didn't prove this, but I always think the write() syscalls being blocked
> for precopy pages can affect postcopy services.  If we can solve this
> problem then my wild guess is we can further reduce the average page
> latency.
> 
> Two solutions at least in mind: (1) we could have made the write side of
> the migration channel NON_BLOCK too, or (2) multi-threads on send side,
> just like multifd, but we may use lock to protect which page to send too
> (e.g., the core idea is we should _never_ rely anything on the main thread,
> multifd has that dependency on queuing pages only on main thread).
> 
> That can definitely be done and thought about later.
> 
> Multi-channel for preemption threads
> ------------------------------------
> 
> Currently the postcopy preempt feature use only one extra channel and one
> extra thread on dest (no new thread on src QEMU).  It should be mostly good
> enough for major use cases, but when the postcopy queue is long enough
> (e.g. hundreds of vCPUs faulted on different pages) logically we could
> still observe more delays in average.  Whether growing threads/channels can
> solve it is debatable, but sounds worthwhile a try.  That's yet another
> thing we can think about after this patchset lands.
> 
> Logically the design provides space for that - the receiving postcopy
> preempt thread can understand all ram-layer migration protocol, and for
> multi channel and multi threads we could simply grow that into multile
> threads handling the same protocol (with multiple PostcopyTmpPage).  The
> source needs more thoughts on synchronizations, though, but it shouldn't
> affect the whole protocol layer, so should be easy to keep compatible.
> 
> Please review, thanks.
> 
> Peter Xu (19):
>   migration: Postpone releasing MigrationState.hostname
>   migration: Drop multifd tls_hostname cache
>   migration: Add pss.postcopy_requested status
>   migration: Move migrate_allow_multifd and helpers into migration.c
>   migration: Export ram_load_postcopy()
>   migration: Move channel setup out of postcopy_try_recover()
>   migration: Allow migrate-recover to run multiple times
>   migration: Add postcopy-preempt capability
>   migration: Postcopy preemption preparation on channel creation
>   migration: Postcopy preemption enablement
>   migration: Postcopy recover with preempt enabled
>   migration: Create the postcopy preempt channel asynchronously
>   migration: Parameter x-postcopy-preempt-break-huge
>   migration: Add helpers to detect TLS capability
>   migration: Export tls-[creds|hostname|authz] params to cmdline too
>   migration: Enable TLS for preempt channel
>   tests: Add postcopy tls migration test
>   tests: Add postcopy tls recovery migration test
>   tests: Add postcopy preempt tests
> 
>  migration/channel.c          |  11 +-
>  migration/migration.c        | 218 ++++++++++++++++++++------
>  migration/migration.h        |  52 ++++++-
>  migration/multifd.c          |  36 +----
>  migration/multifd.h          |   4 -
>  migration/postcopy-ram.c     | 190 ++++++++++++++++++++++-
>  migration/postcopy-ram.h     |  11 ++
>  migration/qemu-file.c        |  27 ++++
>  migration/qemu-file.h        |   1 +
>  migration/ram.c              | 288 +++++++++++++++++++++++++++++++++--
>  migration/ram.h              |   3 +
>  migration/savevm.c           |  49 ++++--
>  migration/socket.c           |  22 ++-
>  migration/socket.h           |   1 +
>  migration/trace-events       |  15 +-
>  qapi/migration.json          |   8 +-
>  tests/qtest/migration-test.c | 113 ++++++++++++--
>  17 files changed, 918 insertions(+), 131 deletions(-)
> 
> -- 
> 2.32.0
> 
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 10/19] migration: Postcopy preemption enablement
  2022-03-31 15:08 ` [PATCH v4 10/19] migration: Postcopy preemption enablement Peter Xu
  2022-04-20 11:05   ` Daniel P. Berrangé
@ 2022-05-11 15:54   ` manish.mishra
  2022-05-12 16:22     ` Peter Xu
  1 sibling, 1 reply; 54+ messages in thread
From: manish.mishra @ 2022-05-11 15:54 UTC (permalink / raw)
  To: Peter Xu, qemu-devel
  Cc: Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, Juan Quintela


On 31/03/22 8:38 pm, Peter Xu wrote:

LGTM

> This patch enables postcopy-preempt feature.
>
> It contains two major changes to the migration logic:
>
> (1) Postcopy requests are now sent via a different socket from precopy
>      background migration stream, so as to be isolated from very high page
>      request delays.
>
> (2) For huge page enabled hosts: when there's postcopy requests, they can now
>      intercept a partial sending of huge host pages on src QEMU.
>
> After this patch, we'll live migrate a VM with two channels for postcopy: (1)
> PRECOPY channel, which is the default channel that transfers background pages;
> and (2) POSTCOPY channel, which only transfers requested pages.
>
> There's no strict rule of which channel to use, e.g., if a requested page is
> already being transferred on precopy channel, then we will keep using the same
> precopy channel to transfer the page even if it's explicitly requested.  In 99%
> of the cases we'll prioritize the channels so we send requested page via the
> postcopy channel as long as possible.
>
> On the source QEMU, when we found a postcopy request, we'll interrupt the
> PRECOPY channel sending process and quickly switch to the POSTCOPY channel.
> After we serviced all the high priority postcopy pages, we'll switch back to
> PRECOPY channel so that we'll continue to send the interrupted huge page again.
> There's no new thread introduced on src QEMU.
>
> On the destination QEMU, one new thread is introduced to receive page data from
> the postcopy specific socket (done in the preparation patch).
>
> This patch has a side effect: after sending postcopy pages, previously we'll
> assume the guest will access follow up pages so we'll keep sending from there.
> Now it's changed.  Instead of going on with a postcopy requested page, we'll go
> back and continue sending the precopy huge page (which can be intercepted by a
> postcopy request so the huge page can be sent partially before).
>
> Whether that's a problem is debatable, because "assuming the guest will
> continue to access the next page" may not really suit when huge pages are
> used, especially if the huge page is large (e.g. 1GB pages).  So that locality
> hint is largely meaningless if huge pages are used.
>
> Reviewed-by: Dr. David Alan Gilbert<dgilbert@redhat.com>
> Signed-off-by: Peter Xu<peterx@redhat.com>
> ---
>   migration/migration.c  |   2 +
>   migration/migration.h  |   2 +-
>   migration/ram.c        | 250 +++++++++++++++++++++++++++++++++++++++--
>   migration/trace-events |   7 ++
>   4 files changed, 252 insertions(+), 9 deletions(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index 01b882494d..56d54c186b 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -3158,6 +3158,8 @@ static int postcopy_start(MigrationState *ms)
>                                 MIGRATION_STATUS_FAILED);
>       }
>   
> +    trace_postcopy_preempt_enabled(migrate_postcopy_preempt());
> +
>       return ret;
>   
>   fail_closefb:
> diff --git a/migration/migration.h b/migration/migration.h
> index caa910d956..b8aacfe3af 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -68,7 +68,7 @@ typedef struct {
>   struct MigrationIncomingState {
>       QEMUFile *from_src_file;
>       /* Previously received RAM's RAMBlock pointer */
> -    RAMBlock *last_recv_block;
> +    RAMBlock *last_recv_block[RAM_CHANNEL_MAX];
>       /* A hook to allow cleanup at the end of incoming migration */
>       void *transport_data;
>       void (*transport_cleanup)(void *data);
> diff --git a/migration/ram.c b/migration/ram.c
> index c7ea1d9215..518d511874 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -295,6 +295,20 @@ struct RAMSrcPageRequest {
>       QSIMPLEQ_ENTRY(RAMSrcPageRequest) next_req;
>   };
>   
> +typedef struct {
> +    /*
> +     * Cached ramblock/offset values if preempted.  They're only meaningful if
> +     * preempted==true below.
> +     */
> +    RAMBlock *ram_block;
> +    unsigned long ram_page;
> +    /*
> +     * Whether a postcopy preemption just happened.  Will be reset after
> +     * precopy recovered to background migration.
> +     */
> +    bool preempted;
> +} PostcopyPreemptState;
> +
>   /* State of RAM for migration */
>   struct RAMState {
>       /* QEMUFile used for this migration */
> @@ -349,6 +363,14 @@ struct RAMState {
>       /* Queue of outstanding page requests from the destination */
>       QemuMutex src_page_req_mutex;
>       QSIMPLEQ_HEAD(, RAMSrcPageRequest) src_page_requests;
> +
> > +    /* Postcopy preemption information */
> +    PostcopyPreemptState postcopy_preempt_state;
> +    /*
> +     * Current channel we're using on src VM.  Only valid if postcopy-preempt
> +     * is enabled.
> +     */
> +    unsigned int postcopy_channel;
>   };
>   typedef struct RAMState RAMState;
>   
> @@ -356,6 +378,11 @@ static RAMState *ram_state;
>   
>   static NotifierWithReturnList precopy_notifier_list;
>   
> +static void postcopy_preempt_reset(RAMState *rs)
> +{
> +    memset(&rs->postcopy_preempt_state, 0, sizeof(PostcopyPreemptState));
> +}
> +
>   /* Whether postcopy has queued requests? */
>   static bool postcopy_has_request(RAMState *rs)
>   {
> @@ -1947,6 +1974,55 @@ void ram_write_tracking_stop(void)
>   }
>   #endif /* defined(__linux__) */
>   
> +/*
> + * Check whether two addr/offset of the ramblock falls onto the same host huge
> + * page.  Returns true if so, false otherwise.
> + */
> +static bool offset_on_same_huge_page(RAMBlock *rb, uint64_t addr1,
> +                                     uint64_t addr2)
> +{
> +    size_t page_size = qemu_ram_pagesize(rb);
> +
> +    addr1 = ROUND_DOWN(addr1, page_size);
> +    addr2 = ROUND_DOWN(addr2, page_size);
> +
> +    return addr1 == addr2;
> +}
> +
> +/*
> + * Whether a previous preempted precopy huge page contains current requested
> + * page?  Returns true if so, false otherwise.
> + *
> + * This should really happen very rarely, because it means when we were sending
> + * during background migration for postcopy we're sending exactly the page that
> + * some vcpu got faulted on on dest node.  When it happens, we probably don't
> + * need to do much but drop the request, because we know right after we restore
> + * the precopy stream it'll be serviced.  It'll slightly affect the order of
> + * postcopy requests to be serviced (e.g. it'll be the same as we move current
> + * request to the end of the queue) but it shouldn't be a big deal.  The most
> + * imporant thing is we can _never_ try to send a partial-sent huge page on the
> + * POSTCOPY channel again, otherwise that huge page will got "split brain" on
> + * two channels (PRECOPY, POSTCOPY).
> + */
> +static bool postcopy_preempted_contains(RAMState *rs, RAMBlock *block,
> +                                        ram_addr_t offset)
> +{
> +    PostcopyPreemptState *state = &rs->postcopy_preempt_state;
> +
> +    /* No preemption at all? */
> +    if (!state->preempted) {
> +        return false;
> +    }
> +
> +    /* Not even the same ramblock? */
> +    if (state->ram_block != block) {
> +        return false;
> +    }
> +
> +    return offset_on_same_huge_page(block, offset,
> +                                    state->ram_page << TARGET_PAGE_BITS);
> +}
> +
>   /**
>    * get_queued_page: unqueue a page from the postcopy requests
>    *
> @@ -1962,9 +2038,17 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
>       RAMBlock  *block;
>       ram_addr_t offset;
>   
> +again:
>       block = unqueue_page(rs, &offset);
>   
> -    if (!block) {
> +    if (block) {
> +        /* See comment above postcopy_preempted_contains() */
> +        if (postcopy_preempted_contains(rs, block, offset)) {
> +            trace_postcopy_preempt_hit(block->idstr, offset);
> +            /* This request is dropped */
> +            goto again;
> +        }
If we continuously keep getting new postcopy requests, is it possible
this case can starve a postcopy request which is caught in the precopy
preemption?
> +    } else {
>           /*
>            * Poll write faults too if background snapshot is enabled; that's
>            * when we have vcpus got blocked by the write protected pages.
> @@ -2180,6 +2264,117 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
>       return ram_save_page(rs, pss);
>   }
>   
> +static bool postcopy_needs_preempt(RAMState *rs, PageSearchStatus *pss)
> +{
> +    /* Not enabled eager preempt?  Then never do that. */
> +    if (!migrate_postcopy_preempt()) {
> +        return false;
> +    }
> +
> +    /* If the ramblock we're sending is a small page?  Never bother. */
> +    if (qemu_ram_pagesize(pss->block) == TARGET_PAGE_SIZE) {
> +        return false;
> +    }
> +
> +    /* Not in postcopy at all? */
> +    if (!migration_in_postcopy()) {
> +        return false;
> +    }
> +
> +    /*
> +     * If we're already handling a postcopy request, don't preempt as this page
> +     * has got the same high priority.
> +     */
> +    if (pss->postcopy_requested) {
> +        return false;
> +    }
> +
> +    /* If there's postcopy requests, then check it up! */
> +    return postcopy_has_request(rs);
> +}
> +
> +/* Returns true if we preempted precopy, false otherwise */
> +static void postcopy_do_preempt(RAMState *rs, PageSearchStatus *pss)
> +{
> +    PostcopyPreemptState *p_state = &rs->postcopy_preempt_state;
> +
> +    trace_postcopy_preempt_triggered(pss->block->idstr, pss->page);
> +
> +    /*
> +     * Time to preempt precopy. Cache current PSS into preempt state, so that
> +     * after handling the postcopy pages we can recover to it.  We need to do
> +     * so because the dest VM will have partial of the precopy huge page kept
> +     * over in its tmp huge page caches; better move on with it when we can.
> +     */
> +    p_state->ram_block = pss->block;
> +    p_state->ram_page = pss->page;
> +    p_state->preempted = true;
> +}
> +
> +/* Whether we're preempted by a postcopy request during sending a huge page */
> +static bool postcopy_preempt_triggered(RAMState *rs)
> +{
> +    return rs->postcopy_preempt_state.preempted;
> +}
> +
> +static void postcopy_preempt_restore(RAMState *rs, PageSearchStatus *pss)
> +{
> +    PostcopyPreemptState *state = &rs->postcopy_preempt_state;
> +
> +    assert(state->preempted);
> +
> +    pss->block = state->ram_block;
> +    pss->page = state->ram_page;
> +    /* This is not a postcopy request but restoring previous precopy */
> +    pss->postcopy_requested = false;
> +
> +    trace_postcopy_preempt_restored(pss->block->idstr, pss->page);
> +
> +    /* Reset preempt state, most importantly, set preempted==false */
> +    postcopy_preempt_reset(rs);
> +}
> +
> +static void postcopy_preempt_choose_channel(RAMState *rs, PageSearchStatus *pss)
> +{
> +    MigrationState *s = migrate_get_current();
> +    unsigned int channel;
> +    QEMUFile *next;
> +
> +    channel = pss->postcopy_requested ?
> +        RAM_CHANNEL_POSTCOPY : RAM_CHANNEL_PRECOPY;
> +
> +    if (channel != rs->postcopy_channel) {
> +        if (channel == RAM_CHANNEL_PRECOPY) {
> +            next = s->to_dst_file;
> +        } else {
> +            next = s->postcopy_qemufile_src;
> +        }
> +        /* Update and cache the current channel */
> +        rs->f = next;
> +        rs->postcopy_channel = channel;
> +
> +        /*
> +         * If channel switched, reset last_sent_block since the old sent block
> +         * may not be on the same channel.
> +         */
> +        rs->last_sent_block = NULL;
> +
> +        trace_postcopy_preempt_switch_channel(channel);
> +    }
> +
> +    trace_postcopy_preempt_send_host_page(pss->block->idstr, pss->page);
> +}
> +
> +/* We need to make sure rs->f always points to the default channel elsewhere */
> +static void postcopy_preempt_reset_channel(RAMState *rs)
> +{
> +    if (migrate_postcopy_preempt() && migration_in_postcopy()) {
> +        rs->postcopy_channel = RAM_CHANNEL_PRECOPY;
> +        rs->f = migrate_get_current()->to_dst_file;
> +        trace_postcopy_preempt_reset_channel();
> +    }
> +}
> +
>   /**
>    * ram_save_host_page: save a whole host page
>    *
> @@ -2211,7 +2406,16 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
>           return 0;
>       }
>   
> +    if (migrate_postcopy_preempt() && migration_in_postcopy()) {

I see there is only one extra channel because multiFD is not supported
for postcopy.  Peter, any particular reason for that?

It must be quite bad without multiFD - we have seen we cannot utilise a
NIC faster than 10 Gbps without it.  If it is something on the TODO list,
can we help with that?

> +        postcopy_preempt_choose_channel(rs, pss);
> +    }
> +
>       do {
> +        if (postcopy_needs_preempt(rs, pss)) {
> +            postcopy_do_preempt(rs, pss);
> +            break;
> +        }
> +
>           /* Check the pages is dirty and if it is send it */
>           if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
>               tmppages = ram_save_target_page(rs, pss);
> @@ -2235,6 +2439,19 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
>       /* The offset we leave with is the min boundary of host page and block */
>       pss->page = MIN(pss->page, hostpage_boundary);
>   
> +    /*
> +     * When with postcopy preempt mode, flush the data as soon as possible for
> +     * postcopy requests, because we've already sent a whole huge page, so the
> +     * dst node should already have enough resource to atomically filling in
> +     * the current missing page.
> +     *
> +     * More importantly, when using separate postcopy channel, we must do
> +     * explicit flush or it won't flush until the buffer is full.
> +     */
> +    if (migrate_postcopy_preempt() && pss->postcopy_requested) {
> +        qemu_fflush(rs->f);
> +    }
> +
>       res = ram_save_release_protection(rs, pss, start_page);
>       return (res < 0 ? res : pages);
>   }
> @@ -2276,8 +2493,17 @@ static int ram_find_and_save_block(RAMState *rs)
>           found = get_queued_page(rs, &pss);
>   
>           if (!found) {
> -            /* priority queue empty, so just search for something dirty */
> -            found = find_dirty_block(rs, &pss, &again);
> +            /*
> +             * Recover previous precopy ramblock/offset if postcopy has
> +             * preempted precopy.  Otherwise find the next dirty bit.
> +             */
> +            if (postcopy_preempt_triggered(rs)) {
> +                postcopy_preempt_restore(rs, &pss);
> +                found = true;
> +            } else {
> +                /* priority queue empty, so just search for something dirty */
> +                found = find_dirty_block(rs, &pss, &again);
> +            }
>           }
>   
>           if (found) {
> @@ -2405,6 +2631,8 @@ static void ram_state_reset(RAMState *rs)
>       rs->last_page = 0;
>       rs->last_version = ram_list.version;
>       rs->xbzrle_enabled = false;
> +    postcopy_preempt_reset(rs);
> +    rs->postcopy_channel = RAM_CHANNEL_PRECOPY;
>   }
>   
>   #define MAX_WAIT 50 /* ms, half buffered_file limit */
> @@ -3043,6 +3271,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>       }
>       qemu_mutex_unlock(&rs->bitmap_mutex);
>   
> +    postcopy_preempt_reset_channel(rs);
> +
>       /*
>        * Must occur before EOS (or any QEMUFile operation)
>        * because of RDMA protocol.
> @@ -3112,6 +3342,8 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>           ram_control_after_iterate(f, RAM_CONTROL_FINISH);
>       }
>   
> +    postcopy_preempt_reset_channel(rs);
> +
>       if (ret >= 0) {
>           multifd_send_sync_main(rs->f);
>           qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> @@ -3194,11 +3426,13 @@ static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
>    * @mis: the migration incoming state pointer
>    * @f: QEMUFile where to read the data from
>    * @flags: Page flags (mostly to see if it's a continuation of previous block)
> + * @channel: the channel we're using
>    */
>   static inline RAMBlock *ram_block_from_stream(MigrationIncomingState *mis,
> -                                              QEMUFile *f, int flags)
> +                                              QEMUFile *f, int flags,
> +                                              int channel)
>   {
> -    RAMBlock *block = mis->last_recv_block;
> +    RAMBlock *block = mis->last_recv_block[channel];
>       char id[256];
>       uint8_t len;
>   
> @@ -3225,7 +3459,7 @@ static inline RAMBlock *ram_block_from_stream(MigrationIncomingState *mis,
>           return NULL;
>       }
>   
> -    mis->last_recv_block = block;
> +    mis->last_recv_block[channel] = block;
>   
>       return block;
>   }
> @@ -3679,7 +3913,7 @@ int ram_load_postcopy(QEMUFile *f, int channel)
>           trace_ram_load_postcopy_loop(channel, (uint64_t)addr, flags);
>           if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
>                        RAM_SAVE_FLAG_COMPRESS_PAGE)) {
> -            block = ram_block_from_stream(mis, f, flags);
> +            block = ram_block_from_stream(mis, f, flags, channel);
>               if (!block) {
>                   ret = -EINVAL;
>                   break;
> @@ -3930,7 +4164,7 @@ static int ram_load_precopy(QEMUFile *f)
>   
>           if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
>                        RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
> -            RAMBlock *block = ram_block_from_stream(mis, f, flags);
> +            RAMBlock *block = ram_block_from_stream(mis, f, flags, RAM_CHANNEL_PRECOPY);
>   
>               host = host_from_ram_block_offset(block, addr);
>               /*
> diff --git a/migration/trace-events b/migration/trace-events
> index 1f932782d9..f92793b5f5 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -111,6 +111,12 @@ ram_load_complete(int ret, uint64_t seq_iter) "exit_code %d seq iteration %" PRI
>   ram_write_tracking_ramblock_start(const char *block_id, size_t page_size, void *addr, size_t length) "%s: page_size: %zu addr: %p length: %zu"
>   ram_write_tracking_ramblock_stop(const char *block_id, size_t page_size, void *addr, size_t length) "%s: page_size: %zu addr: %p length: %zu"
>   unqueue_page(char *block, uint64_t offset, bool dirty) "ramblock '%s' offset 0x%"PRIx64" dirty %d"
> +postcopy_preempt_triggered(char *str, unsigned long page) "during sending ramblock %s offset 0x%lx"
> +postcopy_preempt_restored(char *str, unsigned long page) "ramblock %s offset 0x%lx"
> +postcopy_preempt_hit(char *str, uint64_t offset) "ramblock %s offset 0x%"PRIx64
> +postcopy_preempt_send_host_page(char *str, uint64_t offset) "ramblock %s offset 0x%"PRIx64
> +postcopy_preempt_switch_channel(int channel) "%d"
> +postcopy_preempt_reset_channel(void) ""
>   
>   # multifd.c
>   multifd_new_send_channel_async(uint8_t id) "channel %u"
> @@ -176,6 +182,7 @@ migration_thread_low_pending(uint64_t pending) "%" PRIu64
>   migrate_transferred(uint64_t tranferred, uint64_t time_spent, uint64_t bandwidth, uint64_t size) "transferred %" PRIu64 " time_spent %" PRIu64 " bandwidth %" PRIu64 " max_size %" PRId64
>   process_incoming_migration_co_end(int ret, int ps) "ret=%d postcopy-state=%d"
>   process_incoming_migration_co_postcopy_end_main(void) ""
> +postcopy_preempt_enabled(bool value) "%d"
>   
>   # channel.c
>   migration_set_incoming_channel(void *ioc, const char *ioctype) "ioc=%p ioctype=%s"


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 10/19] migration: Postcopy preemption enablement
  2022-05-11 15:54   ` manish.mishra
@ 2022-05-12 16:22     ` Peter Xu
  2022-05-13 18:53       ` manish.mishra
  0 siblings, 1 reply; 54+ messages in thread
From: Peter Xu @ 2022-05-12 16:22 UTC (permalink / raw)
  To: manish.mishra
  Cc: qemu-devel, Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, Juan Quintela

Hi, Manish,

On Wed, May 11, 2022 at 09:24:28PM +0530, manish.mishra wrote:
> > @@ -1962,9 +2038,17 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
> >       RAMBlock  *block;
> >       ram_addr_t offset;
> > +again:
> >       block = unqueue_page(rs, &offset);
> > -    if (!block) {
> > +    if (block) {
> > +        /* See comment above postcopy_preempted_contains() */
> > +        if (postcopy_preempted_contains(rs, block, offset)) {
> > +            trace_postcopy_preempt_hit(block->idstr, offset);
> > +            /* This request is dropped */
> > +            goto again;
> > +        }
> If we continuosly keep on getting new post-copy request. Is it possible this
> case can starve post-copy request which is in precopy preemtion?

I didn't fully get your thoughts, could you elaborate?

Here we're checking for the case where the postcopy requested page is
exactly the one that we preempted in a previous precopy session.  If so,
we drop this postcopy request and continue with the rest.

When there are no postcopy requests pending, we'll continue with the
precopy page, which is exactly the request we've dropped.

Why we did this is explained in the comment above
postcopy_preempted_contains(); quoting from there:

/*
 * This should really happen very rarely, because it means when we were sending
 * during background migration for postcopy we're sending exactly the page that
 * some vcpu got faulted on on dest node.  When it happens, we probably don't
 * need to do much but drop the request, because we know right after we restore
 * the precopy stream it'll be serviced.  It'll slightly affect the order of
 * postcopy requests to be serviced (e.g. it'll be the same as we move current
 * request to the end of the queue) but it shouldn't be a big deal.  The most
 * imporant thing is we can _never_ try to send a partial-sent huge page on the
 * POSTCOPY channel again, otherwise that huge page will got "split brain" on
 * two channels (PRECOPY, POSTCOPY).
 */
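
To make it concrete with a hypothetical scenario (just for illustration):
say precopy is halfway through a 1G huge page when a postcopy request
preempts it, so postcopy_do_preempt() caches that block/page.  If one of
the queued postcopy requests then happens to land inside that same
half-sent 1G page, get_queued_page() simply drops it (that's the
trace_postcopy_preempt_hit path above), because once the queue drains
postcopy_preempt_restore() resumes exactly that 1G page on the precopy
channel, which services the dropped request anyway - and we never re-send
the half-sent huge page on the postcopy channel.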

[...]

> > @@ -2211,7 +2406,16 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> >           return 0;
> >       }
> > +    if (migrate_postcopy_preempt() && migration_in_postcopy()) {
> 
> I see why there is only one extra channel, multiFD is not supported for
> postcopy. Peter, Any particular reason for that.

We used one channel not because multifd isn't enabled - if you read into
the series, the channels are managed separately because they're servicing
different goals.  It's because I don't really know whether multiple
channels would be necessary, because postcopy requests should not be the
major channel that pages will be sent over - it's kind of a fast-path.

One of the major goals of this series is to avoid interruptions to urgent
postcopy pages due to the sending of precopy pages.  One extra channel
already services that well, so I just stopped there for the initial
version.  I actually raised that question myself in the todo section of
the cover letter; I think we can always evaluate the possibility of that
in the future without major reworks (but we may need another parameter to
specify the number of threads, just like multifd).

> 
> As it must be very bad without multiFD, we have seen we can not utilise NIC
> higher than 10 Gbps without multiFD. If it
> 
> is something in TODO can we help with that?

Yes, that should be on Juan's todo list (in the cc list as well), and
AFAICT he'll be happy if anyone would like to take items out of the list.
We can further discuss it somewhere.

One thing to mention is that I suspect the thread models will still need
to be separate even if multifd joins the equation.  IMHO multifd threads
take chunks of pages and send them in bulk, while if you read into this
series the postcopy preempt threads send pages one by one, asap.  The
former cares about throughput, the latter about latency.  When we design
the mix of postcopy+multifd it'll be great to keep this in mind, so
hopefully it'll make postcopy+multifd+preempt easier in the end.
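
For a rough feel of the gap (back-of-envelope, 10 Gbps assumed, numbers
only for illustration): a 4K urgent page stuck behind an in-flight 1G
huge page on a shared channel waits ~0.85s of pure transmit time; behind
a single bulk packet of a few hundred KB it waits a few hundred
microseconds; sent alone on a dedicated channel it costs only its own
~3us plus the round trip.  That latency gap is what the preempt channel
is meant to close.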

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 10/19] migration: Postcopy preemption enablement
  2022-05-12 16:22     ` Peter Xu
@ 2022-05-13 18:53       ` manish.mishra
  2022-05-13 19:31         ` Peter Xu
  0 siblings, 1 reply; 54+ messages in thread
From: manish.mishra @ 2022-05-13 18:53 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, Juan Quintela


On 12/05/22 9:52 pm, Peter Xu wrote:
> Hi, Manish,
>
> On Wed, May 11, 2022 at 09:24:28PM +0530, manish.mishra wrote:
>>> @@ -1962,9 +2038,17 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
>>>        RAMBlock  *block;
>>>        ram_addr_t offset;
>>> +again:
>>>        block = unqueue_page(rs, &offset);
>>> -    if (!block) {
>>> +    if (block) {
>>> +        /* See comment above postcopy_preempted_contains() */
>>> +        if (postcopy_preempted_contains(rs, block, offset)) {
>>> +            trace_postcopy_preempt_hit(block->idstr, offset);
>>> +            /* This request is dropped */
>>> +            goto again;
>>> +        }
>> If we continuosly keep on getting new post-copy request. Is it possible this
>> case can starve post-copy request which is in precopy preemtion?
> I didn't fully get your thoughts, could you elaborate?
>
> Here we're checking against the case where the postcopy requested page is
> exactly the one that we have preempted in previous precopy sessions.  If
> true, we drop this postcopy page and continue with the rest.
>
> When there'll be no postcopy requests pending then we'll continue with the
> precopy page, which is exactly the request we've dropped.
>
> Why we did this is actually in comment above postcopy_preempted_contains(),
> and quotting from there:
>
> /*
>   * This should really happen very rarely, because it means when we were sending
>   * during background migration for postcopy we're sending exactly the page that
>   * some vcpu got faulted on on dest node.  When it happens, we probably don't
>   * need to do much but drop the request, because we know right after we restore
>   * the precopy stream it'll be serviced.  It'll slightly affect the order of
>   * postcopy requests to be serviced (e.g. it'll be the same as we move current
>   * request to the end of the queue) but it shouldn't be a big deal.  The most
>   * imporant thing is we can _never_ try to send a partial-sent huge page on the
>   * POSTCOPY channel again, otherwise that huge page will got "split brain" on
>   * two channels (PRECOPY, POSTCOPY).
>   */
>
> [...]

Hi Peter, what I meant here is that we resume precopy sending only when
there are no postcopy requests left, so if there is some workload which
is continuously generating new postcopy fault requests, it may take very
long before we resume on the precopy channel.

So basically the precopy channel may have a postcopy request pending for
a very long time in this case?  Earlier, as it was FCFS, there was a
guarantee that a postcopy request would be served within a bounded amount
of time.

>>> @@ -2211,7 +2406,16 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
>>>            return 0;
>>>        }
>>> +    if (migrate_postcopy_preempt() && migration_in_postcopy()) {
>> I see why there is only one extra channel, multiFD is not supported for
>> postcopy. Peter, Any particular reason for that.
> We used one channel not because multifd is not enabled - if you read into
> the series the channels are separately managed because they're servicing
> different goals.  It's because I don't really know whether multiple
> channels would be necessary, because postcopy requests should not be the
> major channel that pages will be sent, kind of a fast-path.
>
> One of the major goal of this series is to avoid interruptions made to
> postcopy urgent pages due to sending of precopy pages.  One extra channel
> already serviced it well, so I just stopped there as the initial version.
> I actually raised that question myself too in the cover letter in the todo
> section, I think we can always evaluate the possibility of that in the
> future without major reworks (but we may need another parameter to specify
> the num of threads just like multifd).

> because postcopy requests should not be the major channel that pages
> will be sent over, kind of a fast-path.

Yes, agree Peter, but in the worst case it is possible we may have to
transfer the full memory of the VM via postcopy requests?  So in that
case we may require a higher number of threads.  But agreed, there cannot
be a binding with the number of multiFD channels, as multiFD uses a 256KB
buffer size while here we may have to send 4K in the small-page case, so
there can be a big difference in the throughput limits.  Also, the
smaller the buffer size, the higher the cpu usage, so it needs to be
decided carefully.
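
Just as a back-of-envelope (my numbers, assuming a 10 Gbps line rate and
ignoring protocol overhead):

    10 Gbps            ~= 1.25 GB/s
    1.25 GB/s / 256 KB ~=   ~4,800 sends/sec to fill the link
    1.25 GB/s / 4 KB   ~= ~305,000 sends/sec to fill the link

so a per-4K-page channel needs roughly 64x more send operations (and
wakeups) than multiFD to reach the same bandwidth, which is where the
cpu cost and the thread-count question come from.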

>> As it must be very bad without multiFD, we have seen we can not utilise NIC
>> higher than 10 Gbps without multiFD. If it
>>
>> is something in TODO can we help with that?
> Yes, that should be on Juan's todo list (in the cc list as well), and
> AFAICT he'll be happy if anyone would like to take items out of the list.
> We can further discuss it somewhere.
>
> One thing to mention is that I suspect the thread models will still need to
> be separate even if multifd joins the equation.  I mean, IMHO multifd
> threads take chunks of pages and send things in bulk, while if you read
> into this series postcopy preempt threads send page one by one and asap.
> The former cares on throughput and latter cares latency.  When we design
> the mix of postcopy+multifd it'll be great we also keep this in mind so
> hopefully it'll make postcopy+multifd+preempt easier at last.
yes, got it, thanks
>
> Thanks,
>


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v4 10/19] migration: Postcopy preemption enablement
  2022-05-13 18:53       ` manish.mishra
@ 2022-05-13 19:31         ` Peter Xu
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Xu @ 2022-05-13 19:31 UTC (permalink / raw)
  To: manish.mishra
  Cc: qemu-devel, Leonardo Bras Soares Passos, Daniel P . Berrange,
	Dr . David Alan Gilbert, Juan Quintela

On Sat, May 14, 2022 at 12:23:44AM +0530, manish.mishra wrote:
> 
> On 12/05/22 9:52 pm, Peter Xu wrote:
> > Hi, Manish,
> > 
> > On Wed, May 11, 2022 at 09:24:28PM +0530, manish.mishra wrote:
> > > > @@ -1962,9 +2038,17 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
> > > >        RAMBlock  *block;
> > > >        ram_addr_t offset;
> > > > +again:
> > > >        block = unqueue_page(rs, &offset);
> > > > -    if (!block) {
> > > > +    if (block) {
> > > > +        /* See comment above postcopy_preempted_contains() */
> > > > +        if (postcopy_preempted_contains(rs, block, offset)) {
> > > > +            trace_postcopy_preempt_hit(block->idstr, offset);
> > > > +            /* This request is dropped */
> > > > +            goto again;
> > > > +        }
> > > If we continuosly keep on getting new post-copy request. Is it possible this
> > > case can starve post-copy request which is in precopy preemtion?
> > I didn't fully get your thoughts, could you elaborate?
> > 
> > Here we're checking against the case where the postcopy requested page is
> > exactly the one that we have preempted in previous precopy sessions.  If
> > true, we drop this postcopy page and continue with the rest.
> > 
> > When there'll be no postcopy requests pending then we'll continue with the
> > precopy page, which is exactly the request we've dropped.
> > 
> > Why we did this is actually in comment above postcopy_preempted_contains(),
> > and quotting from there:
> > 
> > /*
> >   * This should really happen very rarely, because it means when we were sending
> >   * during background migration for postcopy we're sending exactly the page that
> >   * some vcpu got faulted on on dest node.  When it happens, we probably don't
> >   * need to do much but drop the request, because we know right after we restore
> >   * the precopy stream it'll be serviced.  It'll slightly affect the order of
> >   * postcopy requests to be serviced (e.g. it'll be the same as we move current
> >   * request to the end of the queue) but it shouldn't be a big deal.  The most
> >   * imporant thing is we can _never_ try to send a partial-sent huge page on the
> >   * POSTCOPY channel again, otherwise that huge page will got "split brain" on
> >   * two channels (PRECOPY, POSTCOPY).
> >   */
> > 
> > [...]
> 
> Hi Peter, what i meant here is that as we go to precopy sending only when
> there is
> 
> no post-copy request left so if there is some workload which is continuosly
> generating
> 
> new post-copy fault request, It may take very long before we resume on
> precopy channel.
> 
> So basically precopy channel may have a post-copy request pending for very
> long in
> 
> this case? Earlier as it was FCFS there was a guarantee a post-copy request
> will be
> 
> served after a fixed amount of time.

Ah that's a good point.

In that case maybe what we want to do is restore this preemption
immediately using postcopy_preempt_restore(); however, we may also want
to make sure this huge page won't get preempted by any other postcopy
pages.

One way to do this that comes to mind is to add one more field to the pss
structure; we could call it pss->urgent.

Previously we only had pss->postcopy_requested, showing whether a request
comes from postcopy and whether we should send the page via the postcopy
preempt channel.  We also use that as a hint so we will never preempt a
huge page when postcopy_requested is set.  Now we probably want to
separate that role out into pss->urgent, so postcopy_requested stays
mostly like before (with the new pss->urgent set to 1 for all postcopy
pages), except that we may want to also set pss->urgent to 1 for this
very special case even for a precopy page, so that we won't preempt this
page either.
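
Roughly, the shape could be something like below (untested sketch, just
to illustrate the idea; the real patch may end up different):

    /* PageSearchStatus additions (sketch) */
    bool postcopy_requested;  /* as before: request came from postcopy */
    bool urgent;              /* new: never preempt this page */

    /* when dequeuing a postcopy request: */
    pss->postcopy_requested = true;
    pss->urgent = true;

    /* postcopy_needs_preempt(): check urgency, not the channel hint */
    if (pss->urgent) {
        return false;
    }

    /* postcopy_preempt_restore(): restored page stays on precopy... */
    pss->postcopy_requested = false;
    /* ...but is marked urgent so it can't be preempted again */
    pss->urgent = true;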

I'm thinking maybe it's not wise to directly change the patch when I
repost.  My current plan is to add one more patch at the end, so I won't
easily give away Dave's R-b, and hopefully that will make review easier.
We could consider squashing that patch in if we commit the whole thing,
or we can even keep them separate as a further optimization.

> 
> > > > @@ -2211,7 +2406,16 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > > >            return 0;
> > > >        }
> > > > +    if (migrate_postcopy_preempt() && migration_in_postcopy()) {
> > > I see why there is only one extra channel, multiFD is not supported for
> > > postcopy. Peter, Any particular reason for that.
> > We used one channel not because multifd is not enabled - if you read into
> > the series the channels are separately managed because they're servicing
> > different goals.  It's because I don't really know whether multiple
> > channels would be necessary, because postcopy requests should not be the
> > major channel that pages will be sent, kind of a fast-path.
> > 
> > One of the major goal of this series is to avoid interruptions made to
> > postcopy urgent pages due to sending of precopy pages.  One extra channel
> > already serviced it well, so I just stopped there as the initial version.
> > I actually raised that question myself too in the cover letter in the todo
> > section, I think we can always evaluate the possibility of that in the
> > future without major reworks (but we may need another parameter to specify
> > the num of threads just like multifd).
> 
> >because postcopy requests should not be the major channel that pages will
> be sent, kind of a fast-path.
> 
> Yes, agree Peter, but in worst case scenario it is possible we may have to
> transfer full memory of VM
> 
> by post-copy requests? So in that case we may require higher number of
> threads. But agree there can not be
> 
> be binding with number of mutliFD channels as multiFD uses 256KB buffer size
> but here we may have to 4k
> 
> in small page case so there can be big diff in throughput limits. Also
> smaller the buffer size much higher will
> 
> be cpu usage so it needs to be decided carefully.

Right, and I see your point here.

It's just non-trivial to get both high throughput and low latency, imho.
But maybe you have a good point in that it also means, with preemption
mode on and an extremely busy VM, we could end up making multifd mostly
moot even if we support both multifd+preempt in the future.

But anyway - let's think more about it and solve the problems one by one.

The worst case is we'll have low bandwidth for such a migration, but for
now it still keeps relatively good responsiveness on dest page faults.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 54+ messages in thread

end of thread, other threads:[~2022-05-13 19:32 UTC | newest]

Thread overview: 54+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-31 15:08 [PATCH v4 00/19] migration: Postcopy Preemption Peter Xu
2022-03-31 15:08 ` [PATCH v4 01/19] migration: Postpone releasing MigrationState.hostname Peter Xu
2022-04-07 17:21   ` Dr. David Alan Gilbert
2022-04-20 10:34   ` Daniel P. Berrangé
2022-04-20 18:19     ` Peter Xu
2022-03-31 15:08 ` [PATCH v4 02/19] migration: Drop multifd tls_hostname cache Peter Xu
2022-04-07 17:42   ` Dr. David Alan Gilbert
2022-04-20 10:35   ` Daniel P. Berrangé
2022-03-31 15:08 ` [PATCH v4 03/19] migration: Add pss.postcopy_requested status Peter Xu
2022-04-20 10:36   ` Daniel P. Berrangé
2022-03-31 15:08 ` [PATCH v4 04/19] migration: Move migrate_allow_multifd and helpers into migration.c Peter Xu
2022-04-20 10:41   ` Daniel P. Berrangé
2022-04-20 19:30     ` Peter Xu
2022-03-31 15:08 ` [PATCH v4 05/19] migration: Export ram_load_postcopy() Peter Xu
2022-04-20 10:42   ` Daniel P. Berrangé
2022-03-31 15:08 ` [PATCH v4 06/19] migration: Move channel setup out of postcopy_try_recover() Peter Xu
2022-04-20 10:43   ` Daniel P. Berrangé
2022-03-31 15:08 ` [PATCH v4 07/19] migration: Allow migrate-recover to run multiple times Peter Xu
2022-04-20 10:44   ` Daniel P. Berrangé
2022-03-31 15:08 ` [PATCH v4 08/19] migration: Add postcopy-preempt capability Peter Xu
2022-04-20 10:51   ` Daniel P. Berrangé
2022-04-20 19:31     ` Peter Xu
2022-03-31 15:08 ` [PATCH v4 09/19] migration: Postcopy preemption preparation on channel creation Peter Xu
2022-04-20 10:59   ` Daniel P. Berrangé
2022-03-31 15:08 ` [PATCH v4 10/19] migration: Postcopy preemption enablement Peter Xu
2022-04-20 11:05   ` Daniel P. Berrangé
2022-04-20 19:39     ` Peter Xu
2022-05-11 15:54   ` manish.mishra
2022-05-12 16:22     ` Peter Xu
2022-05-13 18:53       ` manish.mishra
2022-05-13 19:31         ` Peter Xu
2022-03-31 15:08 ` [PATCH v4 11/19] migration: Postcopy recover with preempt enabled Peter Xu
2022-03-31 15:08 ` [PATCH v4 12/19] migration: Create the postcopy preempt channel asynchronously Peter Xu
2022-03-31 15:08 ` [PATCH v4 13/19] migration: Parameter x-postcopy-preempt-break-huge Peter Xu
2022-03-31 15:08 ` [PATCH v4 14/19] migration: Add helpers to detect TLS capability Peter Xu
2022-04-20 11:10   ` Daniel P. Berrangé
2022-04-20 19:52     ` Peter Xu
2022-03-31 15:08 ` [PATCH v4 15/19] migration: Export tls-[creds|hostname|authz] params to cmdline too Peter Xu
2022-04-20 11:13   ` Daniel P. Berrangé
2022-04-20 20:01     ` Peter Xu
2022-03-31 15:08 ` [PATCH v4 16/19] migration: Enable TLS for preempt channel Peter Xu
2022-04-20 11:35   ` Daniel P. Berrangé
2022-04-20 20:10     ` Peter Xu
2022-03-31 15:08 ` [PATCH v4 17/19] tests: Add postcopy tls migration test Peter Xu
2022-04-20 11:39   ` Daniel P. Berrangé
2022-04-20 20:15     ` Peter Xu
2022-03-31 15:08 ` [PATCH v4 18/19] tests: Add postcopy tls recovery " Peter Xu
2022-04-20 11:42   ` Daniel P. Berrangé
2022-04-20 20:38     ` Peter Xu
2022-03-31 15:08 ` [PATCH v4 19/19] tests: Add postcopy preempt tests Peter Xu
2022-03-31 15:25   ` Peter Xu
2022-04-20 11:43   ` Daniel P. Berrangé
2022-04-20 20:51     ` Peter Xu
2022-04-21 13:57 ` [PATCH v4 00/19] migration: Postcopy Preemption Dr. David Alan Gilbert
