* [PATCH v3 0/6] Eliminate multifd flush
@ 2023-02-08 13:30 Juan Quintela
  2023-02-08 13:30 ` [PATCH v3 1/6] multifd: Create property multifd-sync-after-each-section Juan Quintela
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Juan Quintela @ 2023-02-08 13:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Dr. David Alan Gilbert, Markus Armbruster, Juan Quintela,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost, Eric Blake

Hi

In this v3:
- update to latest upstream.
- fix checkpatch errors.

Please review.

In this v2:
- update to latest upstream
- change 0, 1, 2 values to defines
- Add documentation for SAVE_VM_FLAGS
- Add a missing qemu_fflush(); its absence caused random hangs in the
  migration test (only with TLS, no clue why).

Please review.

[v1]
The upstream multifd code synchronizes all threads after each RAM
section, which is suboptimal.  Change it to flush only after we go
through all of RAM.
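
In rough terms (a sketch only, not the actual patch code), the send
side moves from syncing the multifd channels at every RAM section
boundary to syncing once per full pass over guest memory:

    /* before: at the end of every RAM section */
    ram_save_iterate()
        ... send dirty pages ...
        multifd_send_sync_main(...);            /* sync all channels */
        qemu_put_be64(f, RAM_SAVE_FLAG_EOS);

    /* after: only when the dirty-page scan wraps past the last RAMBlock */
    find_dirty_block()
        if (the scan wrapped back to the first block) {
            multifd_send_sync_main(...);
            qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);   /* new flag */
            qemu_fflush(f);
        }

The destination side then does the matching multifd_recv_sync_main()
when it sees the new flag instead of on every RAM_SAVE_FLAG_EOS.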

Preserve all semantics for old machine types.
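
Old machine types keep the old behaviour through a hw_compat property.
The capability can also be set by hand; for instance, to force the old
per-section sync on a new machine type, an illustrative QMP call (just
an example, nothing the series requires) would be:

    { "execute": "migrate-set-capabilities",
      "arguments": { "capabilities": [
        { "capability": "multifd-sync-after-each-section",
          "state": true } ] } }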

Juan Quintela (6):
  multifd: Create property multifd-sync-after-each-section
  multifd: Protect multifd_send_sync_main() calls
  migration: Simplify ram_find_and_save_block()
  migration: Make find_dirty_block() return a single parameter
  multifd: Only sync once each full round of memory
  ram: Document migration ram flags

 qapi/migration.json   |   9 +++-
 migration/migration.h |   1 +
 hw/core/machine.c     |   1 +
 migration/migration.c |  13 +++++-
 migration/ram.c       | 100 +++++++++++++++++++++++++++++-------------
 5 files changed, 91 insertions(+), 33 deletions(-)

-- 
2.39.1




* [PATCH v3 1/6] multifd: Create property multifd-sync-after-each-section
  2023-02-08 13:30 [PATCH v3 0/6] Eliminate multifd flush Juan Quintela
@ 2023-02-08 13:30 ` Juan Quintela
       [not found]   ` <20230208202526.p2jmikndw5lx2ong@redhat.com>
  2023-02-08 13:30 ` [PATCH v3 2/6] multifd: Protect multifd_send_sync_main() calls Juan Quintela
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Juan Quintela @ 2023-02-08 13:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Dr. David Alan Gilbert, Markus Armbruster, Juan Quintela,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost, Eric Blake

We used to synchronize all channels at the end of each RAM section
sent.  That is not needed, so we are preparing to synchronize only once
every full round in later patches.

Notice that we initialize the property as true.  We will change the
default when we introduce the new mechanism.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

---

Rename each-iteration to after-each-section

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 qapi/migration.json   |  9 ++++++++-
 migration/migration.h |  1 +
 hw/core/machine.c     |  1 +
 migration/migration.c | 15 +++++++++++++--
 4 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/qapi/migration.json b/qapi/migration.json
index c84fa10e86..d712b082c8 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -478,6 +478,12 @@
 #                    should not affect the correctness of postcopy migration.
 #                    (since 7.1)
 #
+# @multifd-sync-after-each-section: Synchronize channels after each
+#                                   section is sent.  We used to do
+#                                   that on the past, but it is
+#                                   suboptimal.
+#                                   (since 7.1)
+#
 # Features:
 # @unstable: Members @x-colo and @x-ignore-shared are experimental.
 #
@@ -492,7 +498,8 @@
            'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
            { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
            'validate-uuid', 'background-snapshot',
-           'zero-copy-send', 'postcopy-preempt'] }
+           'zero-copy-send', 'postcopy-preempt',
+           'multifd-sync-after-each-section'] }
 
 ##
 # @MigrationCapabilityStatus:
diff --git a/migration/migration.h b/migration/migration.h
index 66511ce532..4dde1faa68 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -412,6 +412,7 @@ int migrate_multifd_channels(void);
 MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
+bool migrate_multifd_sync_after_each_section(void);
 
 #ifdef CONFIG_LINUX
 bool migrate_use_zero_copy_send(void);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index b5cd42cd8c..224469cee4 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -55,6 +55,7 @@ const size_t hw_compat_7_1_len = G_N_ELEMENTS(hw_compat_7_1);
 GlobalProperty hw_compat_7_0[] = {
     { "arm-gicv3-common", "force-8-bit-prio", "on" },
     { "nvme-ns", "eui64-default", "on"},
+    { "migration", "multifd-sync-after-each-section", "on"},
 };
 const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
 
diff --git a/migration/migration.c b/migration/migration.c
index 7a14aa98d8..cf02295765 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -167,7 +167,8 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
     MIGRATION_CAPABILITY_XBZRLE,
     MIGRATION_CAPABILITY_X_COLO,
     MIGRATION_CAPABILITY_VALIDATE_UUID,
-    MIGRATION_CAPABILITY_ZERO_COPY_SEND);
+    MIGRATION_CAPABILITY_ZERO_COPY_SEND,
+    MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION);
 
 /* When we add fault tolerance, we could have several
    migrations at once.  For now we don't need to add
@@ -2675,6 +2676,15 @@ bool migrate_use_multifd(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
+bool migrate_multifd_sync_after_each_section(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return true;
+    // We will change this when code gets in.
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION];
+}
+
 bool migrate_pause_before_switchover(void)
 {
     MigrationState *s;
@@ -4501,7 +4511,8 @@ static Property migration_properties[] = {
     DEFINE_PROP_MIG_CAP("x-zero-copy-send",
             MIGRATION_CAPABILITY_ZERO_COPY_SEND),
 #endif
-
+    DEFINE_PROP_MIG_CAP("multifd-sync-after-each-section",
+                        MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION),
     DEFINE_PROP_END_OF_LIST(),
 };
 
-- 
2.39.1




* [PATCH v3 2/6] multifd: Protect multifd_send_sync_main() calls
  2023-02-08 13:30 [PATCH v3 0/6] Eliminate multifd flush Juan Quintela
  2023-02-08 13:30 ` [PATCH v3 1/6] multifd: Create property multifd-sync-after-each-section Juan Quintela
@ 2023-02-08 13:30 ` Juan Quintela
  2023-02-08 13:30 ` [PATCH v3 3/6] migration: Simplify ram_find_and_save_block() Juan Quintela
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Juan Quintela @ 2023-02-08 13:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Dr. David Alan Gilbert, Markus Armbruster, Juan Quintela,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost, Eric Blake

We only need to do that in the ram_save_iterate() call on the sending
side, and on the destination when we get a RAM_SAVE_FLAG_EOS.

In setup() and complete() we need to sync in both the new and old
cases, so don't add a check there.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

---

Remove the wrappers that we take out in patch 5.
---
 migration/ram.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index b966e148c2..0f0fd5c36a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3354,9 +3354,12 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 out:
     if (ret >= 0
         && migration_is_setup_or_active(migrate_get_current()->state)) {
-        ret = multifd_send_sync_main(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
-        if (ret < 0) {
-            return ret;
+        if (migrate_multifd_sync_after_each_section()) {
+            ret = multifd_send_sync_main(
+                rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
+            if (ret < 0) {
+                return ret;
+            }
         }
 
         qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
@@ -4116,7 +4119,9 @@ int ram_load_postcopy(QEMUFile *f, int channel)
 
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
-            multifd_recv_sync_main();
+            if (migrate_multifd_sync_after_each_section()) {
+                multifd_recv_sync_main();
+            }
             break;
         default:
             error_report("Unknown combination of migration flags: 0x%x"
@@ -4387,7 +4392,9 @@ static int ram_load_precopy(QEMUFile *f)
             break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
-            multifd_recv_sync_main();
+            if (migrate_multifd_sync_after_each_section()) {
+                multifd_recv_sync_main();
+            }
             break;
         default:
             if (flags & RAM_SAVE_FLAG_HOOK) {
-- 
2.39.1




* [PATCH v3 3/6] migration: Simplify ram_find_and_save_block()
  2023-02-08 13:30 [PATCH v3 0/6] Eliminate multifd flush Juan Quintela
  2023-02-08 13:30 ` [PATCH v3 1/6] multifd: Create property multifd-sync-after-each-section Juan Quintela
  2023-02-08 13:30 ` [PATCH v3 2/6] multifd: Protect multifd_send_sync_main() calls Juan Quintela
@ 2023-02-08 13:30 ` Juan Quintela
  2023-02-08 13:30 ` [PATCH v3 4/6] migration: Make find_dirty_block() return a single parameter Juan Quintela
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Juan Quintela @ 2023-02-08 13:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Dr. David Alan Gilbert, Markus Armbruster, Juan Quintela,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost, Eric Blake

We will later need find_dirty_block() to return errors, so simplify
the loop.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 0f0fd5c36a..5c406f2c1d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2542,7 +2542,6 @@ static int ram_find_and_save_block(RAMState *rs)
 {
     PageSearchStatus *pss = &rs->pss[RAM_CHANNEL_PRECOPY];
     int pages = 0;
-    bool again, found;
 
     /* No dirty page as there is zero RAM */
     if (!ram_bytes_total()) {
@@ -2564,18 +2563,17 @@ static int ram_find_and_save_block(RAMState *rs)
     pss_init(pss, rs->last_seen_block, rs->last_page);
 
     do {
-        again = true;
-        found = get_queued_page(rs, pss);
-
-        if (!found) {
+        if (!get_queued_page(rs, pss)) {
             /* priority queue empty, so just search for something dirty */
-            found = find_dirty_block(rs, pss, &again);
-        }
-
-        if (found) {
+            bool again = true;
+            if (!find_dirty_block(rs, pss, &again)) {
+                if (!again) {
+                    break;
+                }
+            }
             pages = ram_save_host_page(rs, pss);
         }
-    } while (!pages && again);
+    } while (!pages);
 
     rs->last_seen_block = pss->block;
     rs->last_page = pss->page;
-- 
2.39.1




* [PATCH v3 4/6] migration: Make find_dirty_block() return a single parameter
  2023-02-08 13:30 [PATCH v3 0/6] Eliminate multifd flush Juan Quintela
                   ` (2 preceding siblings ...)
  2023-02-08 13:30 ` [PATCH v3 3/6] migration: Simplify ram_find_and_save_block() Juan Quintela
@ 2023-02-08 13:30 ` Juan Quintela
  2023-02-08 13:30 ` [PATCH v3 5/6] multifd: Only sync once each full round of memory Juan Quintela
  2023-02-08 13:30 ` [PATCH v3 6/6] ram: Document migration ram flags Juan Quintela
  5 siblings, 0 replies; 10+ messages in thread
From: Juan Quintela @ 2023-02-08 13:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Dr. David Alan Gilbert, Markus Armbruster, Juan Quintela,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost, Eric Blake

We used to return two bools; return a single int instead, with the
following meaning:

old return / again / new return
false        false   PAGE_ALL_CLEAN
false        true    PAGE_TRY_AGAIN
true         true    PAGE_DIRTY_FOUND  /* We don't care about again at all */

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 40 +++++++++++++++++++++++-----------------
 1 file changed, 23 insertions(+), 17 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 5c406f2c1d..e05503f825 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1546,17 +1546,23 @@ retry:
     return pages;
 }
 
+#define PAGE_ALL_CLEAN 0
+#define PAGE_TRY_AGAIN 1
+#define PAGE_DIRTY_FOUND 2
 /**
  * find_dirty_block: find the next dirty page and update any state
  * associated with the search process.
  *
- * Returns true if a page is found
+ * Returns:
+ *         PAGE_ALL_CLEAN: no dirty page found, give up
+ *         PAGE_TRY_AGAIN: no dirty page found, retry for next block
+ *         PAGE_DIRTY_FOUND: dirty page found
  *
  * @rs: current RAM state
  * @pss: data about the state of the current dirty page scan
  * @again: set to false if the search has scanned the whole of RAM
  */
-static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
+static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
 {
     /* Update pss->page for the next dirty bit in ramblock */
     pss_find_next_dirty(pss);
@@ -1567,8 +1573,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
          * We've been once around the RAM and haven't found anything.
          * Give up.
          */
-        *again = false;
-        return false;
+        return PAGE_ALL_CLEAN;
     }
     if (!offset_in_ramblock(pss->block,
                             ((ram_addr_t)pss->page) << TARGET_PAGE_BITS)) {
@@ -1597,13 +1602,10 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
             }
         }
         /* Didn't find anything this time, but try again on the new block */
-        *again = true;
-        return false;
+        return PAGE_TRY_AGAIN;
     } else {
-        /* Can go around again, but... */
-        *again = true;
-        /* We've found something so probably don't need to */
-        return true;
+        /* We've found something */
+        return PAGE_DIRTY_FOUND;
     }
 }
 
@@ -2562,19 +2564,23 @@ static int ram_find_and_save_block(RAMState *rs)
 
     pss_init(pss, rs->last_seen_block, rs->last_page);
 
-    do {
+    while (true) {
         if (!get_queued_page(rs, pss)) {
             /* priority queue empty, so just search for something dirty */
-            bool again = true;
-            if (!find_dirty_block(rs, pss, &again)) {
-                if (!again) {
+            int res = find_dirty_block(rs, pss);
+            if (res != PAGE_DIRTY_FOUND) {
+                if (res == PAGE_ALL_CLEAN) {
                     break;
+                } else if (res == PAGE_TRY_AGAIN) {
+                    continue;
                 }
             }
-            pages = ram_save_host_page(rs, pss);
         }
-    } while (!pages);
-
+        pages = ram_save_host_page(rs, pss);
+        if (pages) {
+            break;
+        }
+    }
     rs->last_seen_block = pss->block;
     rs->last_page = pss->page;
 
-- 
2.39.1




* [PATCH v3 5/6] multifd: Only sync once each full round of memory
  2023-02-08 13:30 [PATCH v3 0/6] Eliminate multifd flush Juan Quintela
                   ` (3 preceding siblings ...)
  2023-02-08 13:30 ` [PATCH v3 4/6] migration: Make find_dirty_block() return a single parameter Juan Quintela
@ 2023-02-08 13:30 ` Juan Quintela
  2023-02-08 13:30 ` [PATCH v3 6/6] ram: Document migration ram flags Juan Quintela
  5 siblings, 0 replies; 10+ messages in thread
From: Juan Quintela @ 2023-02-08 13:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Dr. David Alan Gilbert, Markus Armbruster, Juan Quintela,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost, Eric Blake

We need to add a new flag that tells the destination to sync at that
point in the stream.
Notice that we still synchronize at the end of the setup and complete
stages.

Signed-off-by: Juan Quintela <quintela@redhat.com>

---

Add the missing qemu_fflush(); now it always passes all tests.
---
 migration/migration.c |  2 --
 migration/ram.c       | 28 +++++++++++++++++++++++++++-
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index cf02295765..cf4dfb0072 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2680,8 +2680,6 @@ bool migrate_multifd_sync_after_each_section(void)
 {
     MigrationState *s = migrate_get_current();
 
-    return true;
-    // We will change this when code gets in.
     return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION];
 }
 
diff --git a/migration/ram.c b/migration/ram.c
index e05503f825..7952d5f01c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -82,6 +82,7 @@
 #define RAM_SAVE_FLAG_XBZRLE   0x40
 /* 0x80 is reserved in migration.h start with 0x100 next */
 #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
+#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
 
 XBZRLECacheStats xbzrle_counters;
 
@@ -1554,6 +1555,7 @@ retry:
  * associated with the search process.
  *
  * Returns:
+ *         <0: An error happened
  *         PAGE_ALL_CLEAN: no dirty page found, give up
  *         PAGE_TRY_AGAIN: no dirty page found, retry for next block
  *         PAGE_DIRTY_FOUND: dirty page found
@@ -1581,6 +1583,15 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
         pss->page = 0;
         pss->block = QLIST_NEXT_RCU(pss->block, next);
         if (!pss->block) {
+            if (!migrate_multifd_sync_after_each_section()) {
+                QEMUFile *f = rs->pss[RAM_CHANNEL_PRECOPY].pss_channel;
+                int ret = multifd_send_sync_main(f);
+                if (ret < 0) {
+                    return ret;
+                }
+                qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+                qemu_fflush(f);
+            }
             /*
              * If memory migration starts over, we will meet a dirtied page
              * which may still exists in compression threads's ring, so we
@@ -2573,6 +2584,9 @@ static int ram_find_and_save_block(RAMState *rs)
                     break;
                 } else if (res == PAGE_TRY_AGAIN) {
                     continue;
+                } else if (res < 0) {
+                    pages = res;
+                    break;
                 }
             }
         }
@@ -3250,6 +3264,10 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
         return ret;
     }
 
+    if (!migrate_multifd_sync_after_each_section()) {
+        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+    }
+
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
 
@@ -3436,6 +3454,9 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         return ret;
     }
 
+    if (!migrate_multifd_sync_after_each_section()) {
+        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+    }
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
 
@@ -4120,7 +4141,9 @@ int ram_load_postcopy(QEMUFile *f, int channel)
             }
             decompress_data_with_multi_threads(f, page_buffer, len);
             break;
-
+        case RAM_SAVE_FLAG_MULTIFD_SYNC:
+            multifd_recv_sync_main();
+            break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
             if (migrate_multifd_sync_after_each_section()) {
@@ -4394,6 +4417,9 @@ static int ram_load_precopy(QEMUFile *f)
                 break;
             }
             break;
+        case RAM_SAVE_FLAG_MULTIFD_SYNC:
+            multifd_recv_sync_main();
+            break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
             if (migrate_multifd_sync_after_each_section()) {
-- 
2.39.1




* [PATCH v3 6/6] ram: Document migration ram flags
  2023-02-08 13:30 [PATCH v3 0/6] Eliminate multifd flush Juan Quintela
                   ` (4 preceding siblings ...)
  2023-02-08 13:30 ` [PATCH v3 5/6] multifd: Only sync once each full round of memory Juan Quintela
@ 2023-02-08 13:30 ` Juan Quintela
  2023-02-08 20:29   ` Eric Blake
  5 siblings, 1 reply; 10+ messages in thread
From: Juan Quintela @ 2023-02-08 13:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Dr. David Alan Gilbert, Markus Armbruster, Juan Quintela,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost, Eric Blake

0x80 is RAM_SAVE_FLAG_HOOK; it lives in qemu-file now.
Note that the biggest usable flag is 0x200.
We can reuse RAM_SAVE_FLAG_FULL.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 7952d5f01c..d95e26c03c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -73,16 +73,19 @@
  * RAM_SSAVE_FLAG_COMPRESS_PAGE just rename it.
  */
 
-#define RAM_SAVE_FLAG_FULL     0x01 /* Obsolete, not used anymore */
+/* RAM_SAVE_FLAG_FULL has been obsoleted since at least 2009, we can
+ * reuse it */
+#define RAM_SAVE_FLAG_FULL     0x01
 #define RAM_SAVE_FLAG_ZERO     0x02
 #define RAM_SAVE_FLAG_MEM_SIZE 0x04
 #define RAM_SAVE_FLAG_PAGE     0x08
 #define RAM_SAVE_FLAG_EOS      0x10
 #define RAM_SAVE_FLAG_CONTINUE 0x20
 #define RAM_SAVE_FLAG_XBZRLE   0x40
-/* 0x80 is reserved in migration.h start with 0x100 next */
+/* 0x80 is reserved in qemu-file.h for RAM_SAVE_FLAG_HOOK */
 #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
 #define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
+/* We can't use any flag that is bigger that 0x200 */
 
 XBZRLECacheStats xbzrle_counters;
 
-- 
2.39.1




* Re: [PATCH v3 6/6] ram: Document migration ram flags
  2023-02-08 13:30 ` [PATCH v3 6/6] ram: Document migration ram flags Juan Quintela
@ 2023-02-08 20:29   ` Eric Blake
  2023-02-09 13:27     ` Juan Quintela
  0 siblings, 1 reply; 10+ messages in thread
From: Eric Blake @ 2023-02-08 20:29 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Dr. David Alan Gilbert, Markus Armbruster,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost

On Wed, Feb 08, 2023 at 02:30:10PM +0100, Juan Quintela wrote:
> 0x80 is RAM_SAVE_FLAG_HOOK; it lives in qemu-file now.
> Note that the biggest usable flag is 0x200.
> We can reuse RAM_SAVE_FLAG_FULL.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 7952d5f01c..d95e26c03c 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -73,16 +73,19 @@
>   * RAM_SSAVE_FLAG_COMPRESS_PAGE just rename it.
>   */
>  
> -#define RAM_SAVE_FLAG_FULL     0x01 /* Obsolete, not used anymore */
> +/* RAM_SAVE_FLAG_FULL has been obsoleted since at least 2009, we can
> + * reuse it */

How about:

/* RAM_SAVE_FLAG_FULL was obsoleted in 2009, it can be reused now */

> +#define RAM_SAVE_FLAG_FULL     0x01
>  #define RAM_SAVE_FLAG_ZERO     0x02
>  #define RAM_SAVE_FLAG_MEM_SIZE 0x04
>  #define RAM_SAVE_FLAG_PAGE     0x08
>  #define RAM_SAVE_FLAG_EOS      0x10
>  #define RAM_SAVE_FLAG_CONTINUE 0x20
>  #define RAM_SAVE_FLAG_XBZRLE   0x40
> -/* 0x80 is reserved in migration.h start with 0x100 next */
> +/* 0x80 is reserved in qemu-file.h for RAM_SAVE_FLAG_HOOK */
>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
>  #define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
> +/* We can't use any flag that is bigger that 0x200 */

s/that is bigger that/bigger than/

>  
>  XBZRLECacheStats xbzrle_counters;
>  
> -- 
> 2.39.1
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




* Re: [PATCH v3 1/6] multifd: Create property multifd-sync-after-each-section
       [not found]   ` <20230208202526.p2jmikndw5lx2ong@redhat.com>
@ 2023-02-09 13:22     ` Juan Quintela
  0 siblings, 0 replies; 10+ messages in thread
From: Juan Quintela @ 2023-02-09 13:22 UTC (permalink / raw)
  To: Eric Blake
  Cc: qemu-devel, Dr. David Alan Gilbert, Markus Armbruster,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost

Eric Blake <eblake@redhat.com> wrote:
> On Wed, Feb 08, 2023 at 02:30:05PM +0100, Juan Quintela wrote:
>> We used to synchronize all channels at the end of each RAM section
>> sent.  That is not needed, so we are preparing to synchronize only once
>> every full round in later patches.
>> 
>> Notice that we initialize the property as true.  We will change the
>> default when we introduce the new mechanism.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> 
>> ---
>> 
>> Rename each-iteration to after-each-section
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  qapi/migration.json   |  9 ++++++++-
>>  migration/migration.h |  1 +
>>  hw/core/machine.c     |  1 +
>>  migration/migration.c | 15 +++++++++++++--
>>  4 files changed, 23 insertions(+), 3 deletions(-)
>> 
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index c84fa10e86..d712b082c8 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -478,6 +478,12 @@
>>  #                    should not affect the correctness of postcopy migration.
>>  #                    (since 7.1)
>>  #
>> +# @multifd-sync-after-each-section: Synchronize channels after each
>> +#                                   section is sent.  We used to do
>> +#                                   that on the past, but it is
>
> s/on/in/

good catch.

>> +#                                   suboptimal.
>> +#                                   (since 7.1)
>
> Shouldn't this be 8.0 now?

You are right (as always).

Changing it.  Thanks.




* Re: [PATCH v3 6/6] ram: Document migration ram flags
  2023-02-08 20:29   ` Eric Blake
@ 2023-02-09 13:27     ` Juan Quintela
  0 siblings, 0 replies; 10+ messages in thread
From: Juan Quintela @ 2023-02-09 13:27 UTC (permalink / raw)
  To: Eric Blake
  Cc: qemu-devel, Dr. David Alan Gilbert, Markus Armbruster,
	Marcel Apfelbaum, Yanan Wang, Philippe Mathieu-Daudé,
	Eduardo Habkost

Eric Blake <eblake@redhat.com> wrote:
> On Wed, Feb 08, 2023 at 02:30:10PM +0100, Juan Quintela wrote:
>> 0x80 is RAM_SAVE_FLAG_HOOK; it lives in qemu-file now.
>> Note that the biggest usable flag is 0x200.
>> We can reuse RAM_SAVE_FLAG_FULL.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 7952d5f01c..d95e26c03c 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -73,16 +73,19 @@
>>   * RAM_SSAVE_FLAG_COMPRESS_PAGE just rename it.
>>   */
>>  
>> -#define RAM_SAVE_FLAG_FULL     0x01 /* Obsolete, not used anymore */
>> +/* RAM_SAVE_FLAG_FULL has been obsoleted since at least 2009, we can
>> + * reuse it */
>
> How about:
>
> /* RAM_SAVE_FLAG_FULL was obsoleted in 2009, it can be reused now */

You win.

>
>> +#define RAM_SAVE_FLAG_FULL     0x01
>>  #define RAM_SAVE_FLAG_ZERO     0x02
>>  #define RAM_SAVE_FLAG_MEM_SIZE 0x04
>>  #define RAM_SAVE_FLAG_PAGE     0x08
>>  #define RAM_SAVE_FLAG_EOS      0x10
>>  #define RAM_SAVE_FLAG_CONTINUE 0x20
>>  #define RAM_SAVE_FLAG_XBZRLE   0x40
>> -/* 0x80 is reserved in migration.h start with 0x100 next */
>> +/* 0x80 is reserved in qemu-file.h for RAM_SAVE_FLAG_HOOK */
>>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
>>  #define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
>> +/* We can't use any flag that is bigger that 0x200 */
>
> s/that is bigger that/bigger than/

Another strike.

Thanks, Juan.



