* [PATCH v2 0/6] Eliminate multifd flush
@ 2022-07-28 11:59 Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 1/6] multifd: Create property multifd-sync-after-each-section Juan Quintela
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Juan Quintela @ 2022-07-28 11:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé,
	Juan Quintela, Dr. David Alan Gilbert, Eduardo Habkost,
	Yanan Wang

Hi

In this v2:
- Update to latest upstream
- Change the 0, 1, 2 values to defines
- Add documentation for SAVE_VM_FLAGS
- Add a missing qemu_fflush(); without it the migration test hung
  randomly (only with TLS, no clue why).

Please review.

[v1]
The upstream multifd code synchronizes all threads after each RAM
section, which is suboptimal.  Change it to flush only after we go
through all of RAM.

Preserve all semantics for old machine types.
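
As a rough sketch of the shape of the change (not actual code), the
sync point moves from the per-section path to the per-round path,
using the RAM_SAVE_FLAG_MULTIFD_SYNC flag that patch 5 introduces:

    /* before: one sync per RAM section sent */
    ram_save_iterate()
        ... send pages ...
        multifd_send_sync_main();             /* every section */
        qemu_put_be64(f, RAM_SAVE_FLAG_EOS);

    /* after: one sync per full pass over guest RAM */
    find_dirty_block()
        if (we just wrapped past the last RAMBlock)
            multifd_send_sync_main();         /* once per round */
            qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);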

Juan Quintela (6):
  multifd: Create property multifd-sync-after-each-section
  multifd: Protect multifd_send_sync_main() calls
  migration: Simplify ram_find_and_save_block()
  migration: Make find_dirty_block() return a single parameter
  multifd: Only sync once each full round of memory
  ram: Document migration ram flags

 migration/migration.h |  6 +++
 hw/core/machine.c     |  1 +
 migration/migration.c | 11 ++++-
 migration/ram.c       | 98 ++++++++++++++++++++++++++++++-------------
 4 files changed, 85 insertions(+), 31 deletions(-)

-- 
2.37.1




* [PATCH v2 1/6] multifd: Create property multifd-sync-after-each-section
  2022-07-28 11:59 [PATCH v2 0/6] Eliminate multifd flush Juan Quintela
@ 2022-07-28 11:59 ` Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 2/6] multifd: Protect multifd_send_sync_main() calls Juan Quintela
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Juan Quintela @ 2022-07-28 11:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé,
	Juan Quintela, Dr. David Alan Gilbert, Eduardo Habkost,
	Yanan Wang

We used to synchronize all channels at the end of each RAM section
sent.  That is not needed, so this prepares us to synchronize only once
every full round through RAM in later patches.

Notice that we initialize the property as true.  We will change the
default when we introduce the new mechanism.
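
For testing, the property should also be settable from the command
line like any other migration global (a sketch, untested; this assumes
the usual -global syntax works for the "migration" pseudo-device):

    # force the old behaviour on a newer machine type (illustrative)
    qemu-system-x86_64 -M q35 \
        -global migration.multifd-sync-after-each-section=on ...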

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

---

Rename each-iteration to after-each-section

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.h |  6 ++++++
 hw/core/machine.c     |  1 +
 migration/migration.c | 11 ++++++++++-
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/migration/migration.h b/migration/migration.h
index cdad8aceaa..6abd2a51f5 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -373,6 +373,11 @@ struct MigrationState {
      * This save hostname when out-going migration starts
      */
     char *hostname;
+    /*
+     * Synchronize channels after each section is sent.
+     * We used to do that in the past, but it is suboptimal.
+     */
+    bool multifd_sync_after_each_section;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
@@ -415,6 +420,7 @@ int migrate_multifd_channels(void);
 MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
+bool migrate_multifd_sync_after_each_section(void);
 
 #ifdef CONFIG_LINUX
 bool migrate_use_zero_copy_send(void);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index a673302cce..9645a25f8f 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -43,6 +43,7 @@
 GlobalProperty hw_compat_7_0[] = {
     { "arm-gicv3-common", "force-8-bit-prio", "on" },
     { "nvme-ns", "eui64-default", "on"},
+    { "migration", "multifd-sync-after-each-section", "on"},
 };
 const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
 
diff --git a/migration/migration.c b/migration/migration.c
index e03f698a3c..ebca4f2d8a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2592,6 +2592,13 @@ bool migrate_use_multifd(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
+bool migrate_multifd_sync_after_each_section(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return s->multifd_sync_after_each_section;
+}
+
 bool migrate_pause_before_switchover(void)
 {
     MigrationState *s;
@@ -4384,7 +4391,9 @@ static Property migration_properties[] = {
     DEFINE_PROP_STRING("tls-creds", MigrationState, parameters.tls_creds),
     DEFINE_PROP_STRING("tls-hostname", MigrationState, parameters.tls_hostname),
     DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
-
+    /* We will change to false when we introduce the new mechanism */
+    DEFINE_PROP_BOOL("multifd-sync-after-each-section", MigrationState,
+                      multifd_sync_after_each_section, true),
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
     DEFINE_PROP_MIG_CAP("x-rdma-pin-all", MIGRATION_CAPABILITY_RDMA_PIN_ALL),
-- 
2.37.1




* [PATCH v2 2/6] multifd: Protect multifd_send_sync_main() calls
  2022-07-28 11:59 [PATCH v2 0/6] Eliminate multifd flush Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 1/6] multifd: Create property multifd-sync-after-each-section Juan Quintela
@ 2022-07-28 11:59 ` Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 3/6] migration: Simplify ram_find_and_save_block() Juan Quintela
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Juan Quintela @ 2022-07-28 11:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé,
	Juan Quintela, Dr. David Alan Gilbert, Eduardo Habkost,
	Yanan Wang

We only need to do that in the ram_save_iterate() call on the sending
side, and on the destination when we get a RAM_SAVE_FLAG_EOS.

In setup() and complete() we need to sync in both the new and the old
case, so don't add a check there.
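
In other words, the resulting call sites look like this (a sketch,
names as used in this series):

    ram_save_setup():     multifd_send_sync_main();     /* unconditional */
    ram_save_iterate():   if (migrate_multifd_sync_after_each_section())
                              multifd_send_sync_main(); /* old behaviour */
    ram_save_complete():  multifd_send_sync_main();     /* unconditional */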

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

---

Remove the wrappers that we take out in patch 5.
---
 migration/ram.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index b94669ba5d..6b71ce74f6 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3338,9 +3338,11 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 out:
     if (ret >= 0
         && migration_is_setup_or_active(migrate_get_current()->state)) {
-        ret = multifd_send_sync_main(rs->f);
-        if (ret < 0) {
-            return ret;
+        if (migrate_multifd_sync_after_each_section()) {
+            ret = multifd_send_sync_main(rs->f);
+            if (ret < 0) {
+                return ret;
+            }
         }
 
         qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
@@ -4084,7 +4086,9 @@ int ram_load_postcopy(QEMUFile *f, int channel)
 
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
-            multifd_recv_sync_main();
+            if (migrate_multifd_sync_after_each_section()) {
+                multifd_recv_sync_main();
+            }
             break;
         default:
             error_report("Unknown combination of migration flags: 0x%x"
@@ -4361,7 +4365,9 @@ static int ram_load_precopy(QEMUFile *f)
             break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
-            multifd_recv_sync_main();
+            if (migrate_multifd_sync_after_each_section()) {
+                multifd_recv_sync_main();
+            }
             break;
         default:
             if (flags & RAM_SAVE_FLAG_HOOK) {
-- 
2.37.1




* [PATCH v2 3/6] migration: Simplify ram_find_and_save_block()
  2022-07-28 11:59 [PATCH v2 0/6] Eliminate multifd flush Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 1/6] multifd: Create property multifd-sync-after-each-section Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 2/6] multifd: Protect multifd_send_sync_main() calls Juan Quintela
@ 2022-07-28 11:59 ` Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 4/6] migration: Make find_dirty_block() return a single parameter Juan Quintela
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Juan Quintela @ 2022-07-28 11:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé,
	Juan Quintela, Dr. David Alan Gilbert, Eduardo Habkost,
	Yanan Wang

Later patches will need find_dirty_block() to return errors, so
simplify the loop first.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 6b71ce74f6..c2c939ee03 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2524,7 +2524,6 @@ static int ram_find_and_save_block(RAMState *rs)
 {
     PageSearchStatus pss;
     int pages = 0;
-    bool again, found;
 
     /* No dirty page as there is zero RAM */
     if (!ram_bytes_total()) {
@@ -2540,27 +2539,25 @@ static int ram_find_and_save_block(RAMState *rs)
     }
 
     do {
-        again = true;
-        found = get_queued_page(rs, &pss);
-
-        if (!found) {
+        if (!get_queued_page(rs, &pss)) {
             /*
              * Recover previous precopy ramblock/offset if postcopy has
              * preempted precopy.  Otherwise find the next dirty bit.
              */
             if (postcopy_preempt_triggered(rs)) {
                 postcopy_preempt_restore(rs, &pss, false);
-                found = true;
             } else {
                 /* priority queue empty, so just search for something dirty */
-                found = find_dirty_block(rs, &pss, &again);
+                bool again = true;
+                if (!find_dirty_block(rs, &pss, &again)) {
+                    if (!again) {
+                        break;
+                    }
+                }
             }
         }
-
-        if (found) {
-            pages = ram_save_host_page(rs, &pss);
-        }
-    } while (!pages && again);
+        pages = ram_save_host_page(rs, &pss);
+    } while (!pages);
 
     rs->last_seen_block = pss.block;
     rs->last_page = pss.page;
-- 
2.37.1




* [PATCH v2 4/6] migration: Make find_dirty_block() return a single parameter
  2022-07-28 11:59 [PATCH v2 0/6] Eliminate multifd flush Juan Quintela
                   ` (2 preceding siblings ...)
  2022-07-28 11:59 ` [PATCH v2 3/6] migration: Simplify ram_find_and_save_block() Juan Quintela
@ 2022-07-28 11:59 ` Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 5/6] multifd: Only sync once each full round of memory Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 6/6] ram: Document migration ram flags Juan Quintela
  5 siblings, 0 replies; 7+ messages in thread
From: Juan Quintela @ 2022-07-28 11:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé,
	Juan Quintela, Dr. David Alan Gilbert, Eduardo Habkost,
	Yanan Wang

We used to return two bools; now return a single int with the
following meaning:

old return / again / new return
false        false   PAGE_ALL_CLEAN
false        true    PAGE_TRY_AGAIN
true         true    PAGE_DIRTY_FOUND  /* We don't care about again at all */
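
Read as a small helper (purely illustrative, not part of the patch),
the mapping is:

    /* illustration only: the old (found, again) pair expressed as the
     * new single return value */
    static int old_to_new(bool found, bool again)
    {
        if (found) {
            return PAGE_DIRTY_FOUND;  /* 'again' no longer matters */
        }
        return again ? PAGE_TRY_AGAIN : PAGE_ALL_CLEAN;
    }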

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index c2c939ee03..1507ba1991 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1532,17 +1532,23 @@ retry:
     return pages;
 }
 
+#define PAGE_ALL_CLEAN 0
+#define PAGE_TRY_AGAIN 1
+#define PAGE_DIRTY_FOUND 2
 /**
  * find_dirty_block: find the next dirty page and update any state
  * associated with the search process.
  *
- * Returns true if a page is found
+ * Returns:
+ *         PAGE_ALL_CLEAN: no dirty page found, give up
+ *         PAGE_TRY_AGAIN: no dirty page found, retry for next block
+ *         PAGE_DIRTY_FOUND: dirty page found
  *
  * @rs: current RAM state
  * @pss: data about the state of the current dirty page scan
  * @again: set to false if the search has scanned the whole of RAM
  */
-static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
+static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
 {
     /*
      * This is not a postcopy requested page, mark it "not urgent", and use
@@ -1558,8 +1564,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
          * We've been once around the RAM and haven't found anything.
          * Give up.
          */
-        *again = false;
-        return false;
+        return PAGE_ALL_CLEAN;
     }
     if (!offset_in_ramblock(pss->block,
                             ((ram_addr_t)pss->page) << TARGET_PAGE_BITS)) {
@@ -1588,13 +1593,10 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
             }
         }
         /* Didn't find anything this time, but try again on the new block */
-        *again = true;
-        return false;
+        return PAGE_TRY_AGAIN;
     } else {
-        /* Can go around again, but... */
-        *again = true;
-        /* We've found something so probably don't need to */
-        return true;
+        /* We've found something */
+        return PAGE_DIRTY_FOUND;
     }
 }
 
@@ -2538,7 +2540,7 @@ static int ram_find_and_save_block(RAMState *rs)
         pss.block = QLIST_FIRST_RCU(&ram_list.blocks);
     }
 
-    do {
+    while (true) {
         if (!get_queued_page(rs, &pss)) {
             /*
              * Recover previous precopy ramblock/offset if postcopy has
@@ -2548,16 +2550,21 @@ static int ram_find_and_save_block(RAMState *rs)
                 postcopy_preempt_restore(rs, &pss, false);
             } else {
                 /* priority queue empty, so just search for something dirty */
-                bool again = true;
-                if (!find_dirty_block(rs, &pss, &again)) {
-                    if (!again) {
+                int res = find_dirty_block(rs, &pss);
+                if (res != PAGE_DIRTY_FOUND) {
+                    if (res == PAGE_ALL_CLEAN) {
                         break;
+                    } else if (res == PAGE_TRY_AGAIN) {
+                        continue;
                     }
                 }
             }
         }
         pages = ram_save_host_page(rs, &pss);
-    } while (!pages);
+        if (pages) {
+            break;
+        }
+    }
 
     rs->last_seen_block = pss.block;
     rs->last_page = pss.page;
-- 
2.37.1




* [PATCH v2 5/6] multifd: Only sync once each full round of memory
  2022-07-28 11:59 [PATCH v2 0/6] Eliminate multifd flush Juan Quintela
                   ` (3 preceding siblings ...)
  2022-07-28 11:59 ` [PATCH v2 4/6] migration: Make find_dirty_block() return a single parameter Juan Quintela
@ 2022-07-28 11:59 ` Juan Quintela
  2022-07-28 11:59 ` [PATCH v2 6/6] ram: Document migration ram flags Juan Quintela
  5 siblings, 0 replies; 7+ messages in thread
From: Juan Quintela @ 2022-07-28 11:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé,
	Juan Quintela, Dr. David Alan Gilbert, Eduardo Habkost,
	Yanan Wang

We need to add a new flag that means "sync at this point".
Notice that we still synchronize at the end of the setup and complete
stages.
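
The resulting stream layout looks roughly like this (a sketch derived
from the hunks below, not a formal description):

    setup:       ... RAM_SAVE_FLAG_MULTIFD_SYNC  RAM_SAVE_FLAG_EOS
    iterate:     pages ... RAM_SAVE_FLAG_EOS      (no per-section sync)
    full round:  ... RAM_SAVE_FLAG_MULTIFD_SYNC   (from find_dirty_block())
    complete:    ... RAM_SAVE_FLAG_MULTIFD_SYNC  RAM_SAVE_FLAG_EOS

Presumably this is also why the missing qemu_fflush() caused hangs: the
sync flag could sit in the QEMUFile buffer while the destination waited
for it before syncing its channels.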

Signed-off-by: Juan Quintela <quintela@redhat.com>

---

Add missing qemu_fflush(); now it always passes all tests.
---
 migration/migration.c |  2 +-
 migration/ram.c       | 27 ++++++++++++++++++++++++++-
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index ebca4f2d8a..7905145d7d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -4393,7 +4393,7 @@ static Property migration_properties[] = {
     DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
     /* We will change to false when we introduce the new mechanism */
     DEFINE_PROP_BOOL("multifd-sync-after-each-section", MigrationState,
-                      multifd_sync_after_each_section, true),
+                      multifd_sync_after_each_section, false),
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
     DEFINE_PROP_MIG_CAP("x-rdma-pin-all", MIGRATION_CAPABILITY_RDMA_PIN_ALL),
diff --git a/migration/ram.c b/migration/ram.c
index 1507ba1991..234603ee4f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -82,6 +82,7 @@
 #define RAM_SAVE_FLAG_XBZRLE   0x40
 /* 0x80 is reserved in migration.h start with 0x100 next */
 #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
+#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
 
 XBZRLECacheStats xbzrle_counters;
 
@@ -1540,6 +1541,7 @@ retry:
  * associated with the search process.
  *
  * Returns:
+ *         <0: An error happened
  *         PAGE_ALL_CLEAN: no dirty page found, give up
  *         PAGE_TRY_AGAIN: no dirty page found, retry for next block
  *         PAGE_DIRTY_FOUND: dirty page found
@@ -1572,6 +1574,14 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
         pss->page = 0;
         pss->block = QLIST_NEXT_RCU(pss->block, next);
         if (!pss->block) {
+            if (!migrate_multifd_sync_after_each_section()) {
+                int ret = multifd_send_sync_main(rs->f);
+                if (ret < 0) {
+                    return ret;
+                }
+                qemu_put_be64(rs->f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+                qemu_fflush(rs->f);
+            }
             /*
              * If memory migration starts over, we will meet a dirtied page
              * which may still exists in compression threads's ring, so we
@@ -2556,6 +2566,9 @@ static int ram_find_and_save_block(RAMState *rs)
                         break;
                     } else if (res == PAGE_TRY_AGAIN) {
                         continue;
+                    } else if (res < 0) {
+                        pages = res;
+                        break;
                     }
                 }
             }
@@ -3232,6 +3245,10 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
         return ret;
     }
 
+    if (!migrate_multifd_sync_after_each_section()) {
+        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+    }
+
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
 
@@ -3419,6 +3436,9 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         return ret;
     }
 
+    if (!migrate_multifd_sync_after_each_section()) {
+        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+    }
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
 
@@ -4087,7 +4107,9 @@ int ram_load_postcopy(QEMUFile *f, int channel)
             }
             decompress_data_with_multi_threads(f, page_buffer, len);
             break;
-
+        case RAM_SAVE_FLAG_MULTIFD_SYNC:
+            multifd_recv_sync_main();
+            break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
             if (migrate_multifd_sync_after_each_section()) {
@@ -4367,6 +4389,9 @@ static int ram_load_precopy(QEMUFile *f)
                 break;
             }
             break;
+        case RAM_SAVE_FLAG_MULTIFD_SYNC:
+            multifd_recv_sync_main();
+            break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
             if (migrate_multifd_sync_after_each_section()) {
-- 
2.37.1




* [PATCH v2 6/6] ram: Document migration ram flags
  2022-07-28 11:59 [PATCH v2 0/6] Eliminate multifd flush Juan Quintela
                   ` (4 preceding siblings ...)
  2022-07-28 11:59 ` [PATCH v2 5/6] multifd: Only sync once each full round of memory Juan Quintela
@ 2022-07-28 11:59 ` Juan Quintela
  5 siblings, 0 replies; 7+ messages in thread
From: Juan Quintela @ 2022-07-28 11:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé,
	Juan Quintela, Dr. David Alan Gilbert, Eduardo Habkost,
	Yanan Wang

0x80 is RAM_SAVE_FLAG_HOOK; it lives in qemu-file now.
The biggest usable flag is 0x200; document that.
We can reuse RAM_SAVE_FLAG_FULL.
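
For context, the flag space is tight because each flag travels OR-ed
into the low bits of a page-aligned address, so only bits that page
alignment guarantees to be zero are usable.  A simplified sketch of the
receive-side split (from memory, not part of this patch):

    /* sketch: how ram_load_precopy() separates flags from the address */
    uint64_t addr = qemu_get_be64(f);
    int flags = addr & ~TARGET_PAGE_MASK;  /* low bits: RAM_SAVE_FLAG_* */
    addr &= TARGET_PAGE_MASK;              /* high bits: page address */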

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 234603ee4f..83a48e3889 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -73,16 +73,19 @@
  * RAM_SSAVE_FLAG_COMPRESS_PAGE just rename it.
  */
 
-#define RAM_SAVE_FLAG_FULL     0x01 /* Obsolete, not used anymore */
+/* RAM_SAVE_FLAG_FULL has been obsolete since at least 2009; we can
+ * reuse it */
+#define RAM_SAVE_FLAG_FULL     0x01
 #define RAM_SAVE_FLAG_ZERO     0x02
 #define RAM_SAVE_FLAG_MEM_SIZE 0x04
 #define RAM_SAVE_FLAG_PAGE     0x08
 #define RAM_SAVE_FLAG_EOS      0x10
 #define RAM_SAVE_FLAG_CONTINUE 0x20
 #define RAM_SAVE_FLAG_XBZRLE   0x40
-/* 0x80 is reserved in migration.h start with 0x100 next */
+/* 0x80 is reserved in qemu-file.h for RAM_SAVE_FLAG_HOOK */
 #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
 #define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
+/* We can't use any flag bigger than 0x200 */
 
 XBZRLECacheStats xbzrle_counters;
 
-- 
2.37.1



