* [PATCH 0/5] Eliminate multifd flush
@ 2022-06-21 14:05 Juan Quintela
  2022-06-21 14:05 ` [PATCH 1/5] multifd: Create property multifd-sync-each-iteration Juan Quintela
                   ` (4 more replies)
  0 siblings, 5 replies; 25+ messages in thread
From: Juan Quintela @ 2022-06-21 14:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Juan Quintela, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Dr. David Alan Gilbert,
	Yanan Wang

Upstream multifd code synchronizes all threads after each RAM section.  This is suboptimal.
Change it to only flush after we go through all RAM.

Preserve all semantics for old machine types.
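
Schematically, the sync points move like this (a sketch of the intent,
not the exact code):

    before: setup, every iterate section and complete all end with
                multifd_send_sync_main();
                qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
    after:  iterate sections end with a plain RAM_SAVE_FLAG_EOS; we only
            sync (and emit the new RAM_SAVE_FLAG_MULTIFD_SYNC on the
            wire) at setup, at complete, and each time the dirty page
            scan wraps past the last RAMBlock.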

Juan Quintela (5):
  multifd: Create property multifd-sync-each-iteration
  multifd: Put around all sync calls tests for each iteration
  migration: Simplify ram_find_and_save_block()
  migration: Make find_dirty_block() return a single parameter
  multifd: Only sync once each full round of memory

 migration/migration.h |  6 ++++
 hw/core/machine.c     |  1 +
 migration/migration.c | 10 ++++++
 migration/ram.c       | 80 ++++++++++++++++++++++++++++---------------
 4 files changed, 70 insertions(+), 27 deletions(-)

-- 
2.34.1





* [PATCH 1/5] multifd: Create property multifd-sync-each-iteration
  2022-06-21 14:05 [PATCH 0/5] Eliminate multifd flush Juan Quintela
@ 2022-06-21 14:05 ` Juan Quintela
  2022-06-30 14:34   ` Dr. David Alan Gilbert
  2022-07-05 12:19   ` Dr. David Alan Gilbert
  2022-06-21 14:05 ` [PATCH 2/5] multifd: Put around all sync calls tests for each iteration Juan Quintela
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 25+ messages in thread
From: Juan Quintela @ 2022-06-21 14:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Juan Quintela, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Dr. David Alan Gilbert,
	Yanan Wang

We used to synchronize all channels at the end of each RAM section
sent.  That is not needed, so prepare to only synchronize once every
full round in later patches.

Notice that we initialize the property as true.  We will change the
default when we introduce the new mechanism.
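
For testing, the old behaviour can be forced by hand with a global
property, e.g.:

    -global migration.multifd-sync-each-iteration=on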

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.h |  6 ++++++
 hw/core/machine.c     |  1 +
 migration/migration.c | 10 ++++++++++
 3 files changed, 17 insertions(+)

diff --git a/migration/migration.h b/migration/migration.h
index 485d58b95f..70dae24516 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -332,6 +332,11 @@ struct MigrationState {
      * This save hostname when out-going migration starts
      */
     char *hostname;
+    /*
+     * Synchronize channels after each iteration.
+     * We used to do that in the past, but it is suboptimal.
+     */
+    bool multifd_sync_each_iteration;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
@@ -374,6 +379,7 @@ int migrate_multifd_channels(void);
 MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
+bool migrate_multifd_sync_each_iteration(void);
 
 #ifdef CONFIG_LINUX
 bool migrate_use_zero_copy_send(void);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index a673302cce..c8a60917e0 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -43,6 +43,7 @@
 GlobalProperty hw_compat_7_0[] = {
     { "arm-gicv3-common", "force-8-bit-prio", "on" },
     { "nvme-ns", "eui64-default", "on"},
+    { "migration", "multifd-sync-each-iteration", "on"},
 };
 const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
 
diff --git a/migration/migration.c b/migration/migration.c
index 31739b2af9..3f79df0b70 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2540,6 +2540,13 @@ bool migrate_use_multifd(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
+bool migrate_multifd_sync_each_iteration(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return s->multifd_sync_each_iteration;
+}
+
 bool migrate_pause_before_switchover(void)
 {
     MigrationState *s;
@@ -4274,6 +4281,9 @@ static Property migration_properties[] = {
     DEFINE_PROP_SIZE("announce-step", MigrationState,
                       parameters.announce_step,
                       DEFAULT_MIGRATE_ANNOUNCE_STEP),
+    /* We will change to false when we introduce the new mechanism */
+    DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
+                      multifd_sync_each_iteration, true),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
-- 
2.34.1




* [PATCH 2/5] multifd: Put around all sync calls tests for each iteration
  2022-06-21 14:05 [PATCH 0/5] Eliminate multifd flush Juan Quintela
  2022-06-21 14:05 ` [PATCH 1/5] multifd: Create property multifd-sync-each-iteration Juan Quintela
@ 2022-06-21 14:05 ` Juan Quintela
  2022-07-05 12:20   ` Dr. David Alan Gilbert
  2022-06-21 14:05 ` [PATCH 3/5] migration: Simplify ram_find_and_save_block() Juan Quintela
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Juan Quintela @ 2022-06-21 14:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Juan Quintela, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Dr. David Alan Gilbert,
	Yanan Wang

We will sync later in different places.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 33 +++++++++++++++++++++------------
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 5f5e37f64d..35816a3a0a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2944,11 +2944,12 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
 
-    ret =  multifd_send_sync_main(f);
-    if (ret < 0) {
-        return ret;
+    if (migrate_multifd_sync_each_iteration()) {
+        ret =  multifd_send_sync_main(f);
+        if (ret < 0) {
+            return ret;
+        }
     }
-
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
 
@@ -3057,9 +3058,11 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 out:
     if (ret >= 0
         && migration_is_setup_or_active(migrate_get_current()->state)) {
-        ret = multifd_send_sync_main(rs->f);
-        if (ret < 0) {
-            return ret;
+        if (migrate_multifd_sync_each_iteration()) {
+            ret = multifd_send_sync_main(rs->f);
+            if (ret < 0) {
+                return ret;
+            }
         }
 
         qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
@@ -3125,9 +3128,11 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         return ret;
     }
 
-    ret = multifd_send_sync_main(rs->f);
-    if (ret < 0) {
-        return ret;
+    if (migrate_multifd_sync_each_iteration()) {
+        ret = multifd_send_sync_main(rs->f);
+        if (ret < 0) {
+            return ret;
+        }
     }
 
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
@@ -3799,7 +3804,9 @@ int ram_load_postcopy(QEMUFile *f)
 
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
-            multifd_recv_sync_main();
+            if (migrate_multifd_sync_each_iteration()) {
+                multifd_recv_sync_main();
+            }
             break;
         default:
             error_report("Unknown combination of migration flags: 0x%x"
@@ -4075,7 +4082,9 @@ static int ram_load_precopy(QEMUFile *f)
             break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
-            multifd_recv_sync_main();
+            if (migrate_multifd_sync_each_iteration()) {
+                multifd_recv_sync_main();
+            }
             break;
         default:
             if (flags & RAM_SAVE_FLAG_HOOK) {
-- 
2.34.1




* [PATCH 3/5] migration: Simplify ram_find_and_save_block()
  2022-06-21 14:05 [PATCH 0/5] Eliminate multifd flush Juan Quintela
  2022-06-21 14:05 ` [PATCH 1/5] multifd: Create property multifd-sync-each-iteration Juan Quintela
  2022-06-21 14:05 ` [PATCH 2/5] multifd: Put around all sync calls tests for each iteration Juan Quintela
@ 2022-06-21 14:05 ` Juan Quintela
  2022-07-05 12:51   ` Dr. David Alan Gilbert
  2022-06-21 14:05 ` [PATCH 4/5] migration: Make find_dirty_block() return a single parameter Juan Quintela
  2022-06-21 14:05 ` [PATCH 5/5] multifd: Only sync once each full round of memory Juan Quintela
  4 siblings, 1 reply; 25+ messages in thread
From: Juan Quintela @ 2022-06-21 14:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Juan Quintela, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Dr. David Alan Gilbert,
	Yanan Wang

Later we will need find_dirty_block() to return errors, so
simplify the loop.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 35816a3a0a..1d4ff3185b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2256,7 +2256,6 @@ static int ram_find_and_save_block(RAMState *rs)
 {
     PageSearchStatus pss;
     int pages = 0;
-    bool again, found;
 
     /* No dirty page as there is zero RAM */
     if (!ram_bytes_total()) {
@@ -2272,18 +2271,17 @@ static int ram_find_and_save_block(RAMState *rs)
     }
 
     do {
-        again = true;
-        found = get_queued_page(rs, &pss);
-
-        if (!found) {
+        if (!get_queued_page(rs, &pss)) {
             /* priority queue empty, so just search for something dirty */
-            found = find_dirty_block(rs, &pss, &again);
+            bool again = true;
+            if (!find_dirty_block(rs, &pss, &again)) {
+                if (!again) {
+                    break;
+                }
+            }
         }
-
-        if (found) {
-            pages = ram_save_host_page(rs, &pss);
-        }
-    } while (!pages && again);
+        pages = ram_save_host_page(rs, &pss);
+    } while (!pages);
 
     rs->last_seen_block = pss.block;
     rs->last_page = pss.page;
-- 
2.34.1




* [PATCH 4/5] migration: Make find_dirty_block() return a single parameter
  2022-06-21 14:05 [PATCH 0/5] Eliminate multifd flush Juan Quintela
                   ` (2 preceding siblings ...)
  2022-06-21 14:05 ` [PATCH 3/5] migration: Simplify ram_find_and_save_block() Juan Quintela
@ 2022-06-21 14:05 ` Juan Quintela
  2022-07-05 12:54   ` Dr. David Alan Gilbert
  2022-06-21 14:05 ` [PATCH 5/5] multifd: Only sync once each full round of memory Juan Quintela
  4 siblings, 1 reply; 25+ messages in thread
From: Juan Quintela @ 2022-06-21 14:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Juan Quintela, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Dr. David Alan Gilbert,
	Yanan Wang

We used to return two bools; now return a single int with the
following meaning:

old return / again / new return
false        false   0
false        true    1
true         true    2  /* We don't care about again at all */

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 1d4ff3185b..2c7289edad 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1481,13 +1481,16 @@ retry:
  * find_dirty_block: find the next dirty page and update any state
  * associated with the search process.
  *
- * Returns true if a page is found
+ * Returns:
+ *         0: no page found, give up
+ *         1: no page found, retry
+ *         2: page found
  *
  * @rs: current RAM state
  * @pss: data about the state of the current dirty page scan
  * @again: set to false if the search has scanned the whole of RAM
  */
-static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
+static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
 {
     /* This is not a postcopy requested page */
     pss->postcopy_requested = false;
@@ -1499,8 +1502,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
          * We've been once around the RAM and haven't found anything.
          * Give up.
          */
-        *again = false;
-        return false;
+        return 0;
     }
     if (!offset_in_ramblock(pss->block,
                             ((ram_addr_t)pss->page) << TARGET_PAGE_BITS)) {
@@ -1529,13 +1531,10 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
             }
         }
         /* Didn't find anything this time, but try again on the new block */
-        *again = true;
-        return false;
+        return 1;
     } else {
-        /* Can go around again, but... */
-        *again = true;
-        /* We've found something so probably don't need to */
-        return true;
+        /* We've found something */
+        return 2;
     }
 }
 
@@ -2270,18 +2269,20 @@ static int ram_find_and_save_block(RAMState *rs)
         pss.block = QLIST_FIRST_RCU(&ram_list.blocks);
     }
 
-    do {
+    while (true) {
         if (!get_queued_page(rs, &pss)) {
             /* priority queue empty, so just search for something dirty */
-            bool again = true;
-            if (!find_dirty_block(rs, &pss, &again)) {
-                if (!again) {
-                    break;
-                }
-            }
+            int res = find_dirty_block(rs, &pss);
+            if (res == 0) {
+                break;
+            } else if (res == 1)
+                continue;
         }
         pages = ram_save_host_page(rs, &pss);
-    } while (!pages);
+        if (pages) {
+            break;
+        }
+    }
 
     rs->last_seen_block = pss.block;
     rs->last_page = pss.page;
-- 
2.34.1




* [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-06-21 14:05 [PATCH 0/5] Eliminate multifd flush Juan Quintela
                   ` (3 preceding siblings ...)
  2022-06-21 14:05 ` [PATCH 4/5] migration: Make find_dirty_block() return a single parameter Juan Quintela
@ 2022-06-21 14:05 ` Juan Quintela
  2022-07-01  2:29   ` Leonardo Brás
  2022-07-05 13:56   ` Dr. David Alan Gilbert
  4 siblings, 2 replies; 25+ messages in thread
From: Juan Quintela @ 2022-06-21 14:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: Juan Quintela, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Dr. David Alan Gilbert,
	Yanan Wang

We need to add a new flag that means to sync at that point.
Notice that we still synchronize at the end of setup and at the end of
complete stages.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.c |  2 +-
 migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
 2 files changed, 31 insertions(+), 13 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 3f79df0b70..6627787fc2 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
                       DEFAULT_MIGRATE_ANNOUNCE_STEP),
     /* We will change to false when we introduce the new mechanism */
     DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
-                      multifd_sync_each_iteration, true),
+                      multifd_sync_each_iteration, false),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
diff --git a/migration/ram.c b/migration/ram.c
index 2c7289edad..6792986565 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -81,6 +81,7 @@
 #define RAM_SAVE_FLAG_XBZRLE   0x40
 /* 0x80 is reserved in migration.h start with 0x100 next */
 #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
+#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
 
 XBZRLECacheStats xbzrle_counters;
 
@@ -1482,6 +1483,7 @@ retry:
  * associated with the search process.
  *
  * Returns:
+ *        <0: An error happened
  *         0: no page found, give up
  *         1: no page found, retry
  *         2: page found
@@ -1510,6 +1512,13 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
         pss->page = 0;
         pss->block = QLIST_NEXT_RCU(pss->block, next);
         if (!pss->block) {
+            if (!migrate_multifd_sync_each_iteration()) {
+                int ret = multifd_send_sync_main(rs->f);
+                if (ret < 0) {
+                    return ret;
+                }
+                qemu_put_be64(rs->f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+            }
             /*
              * If memory migration starts over, we will meet a dirtied page
              * which may still exists in compression threads's ring, so we
@@ -2273,7 +2282,8 @@ static int ram_find_and_save_block(RAMState *rs)
         if (!get_queued_page(rs, &pss)) {
             /* priority queue empty, so just search for something dirty */
             int res = find_dirty_block(rs, &pss);
-            if (res == 0) {
+            if (res <= 0) {
+                pages = res;
                 break;
             } else if (res == 1)
                 continue;
@@ -2943,11 +2953,13 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
 
-    if (migrate_multifd_sync_each_iteration()) {
-        ret =  multifd_send_sync_main(f);
-        if (ret < 0) {
-            return ret;
-        }
+    ret =  multifd_send_sync_main(f);
+    if (ret < 0) {
+        return ret;
+    }
+
+    if (!migrate_multifd_sync_each_iteration()) {
+        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
     }
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
@@ -3127,13 +3139,14 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         return ret;
     }
 
-    if (migrate_multifd_sync_each_iteration()) {
-        ret = multifd_send_sync_main(rs->f);
-        if (ret < 0) {
-            return ret;
-        }
+    ret = multifd_send_sync_main(rs->f);
+    if (ret < 0) {
+        return ret;
     }
 
+    if (migrate_multifd_sync_each_iteration()) {
+        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+    }
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
 
@@ -3800,7 +3813,9 @@ int ram_load_postcopy(QEMUFile *f)
             }
             decompress_data_with_multi_threads(f, page_buffer, len);
             break;
-
+        case RAM_SAVE_FLAG_MULTIFD_SYNC:
+            multifd_recv_sync_main();
+            break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
             if (migrate_multifd_sync_each_iteration()) {
@@ -4079,6 +4094,9 @@ static int ram_load_precopy(QEMUFile *f)
                 break;
             }
             break;
+        case RAM_SAVE_FLAG_MULTIFD_SYNC:
+            multifd_recv_sync_main();
+            break;
         case RAM_SAVE_FLAG_EOS:
             /* normal exit */
             if (migrate_multifd_sync_each_iteration()) {
-- 
2.34.1




* Re: [PATCH 1/5] multifd: Create property multifd-sync-each-iteration
  2022-06-21 14:05 ` [PATCH 1/5] multifd: Create property multifd-sync-each-iteration Juan Quintela
@ 2022-06-30 14:34   ` Dr. David Alan Gilbert
  2022-07-04 16:07     ` Juan Quintela
  2022-07-05 12:19   ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 25+ messages in thread
From: Dr. David Alan Gilbert @ 2022-06-30 14:34 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

* Juan Quintela (quintela@redhat.com) wrote:
> We used to synchronize all channels at the end of each RAM section
> sent.  That is not needed, so prepare to only synchronize once every
> full round in later patches.
> 
> Notice that we initialize the property as true.  We will change the
> default when we introduce the new mechanism.

I don't understand why this is a property - does it break the actual
stream format?

Dave

> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/migration.h |  6 ++++++
>  hw/core/machine.c     |  1 +
>  migration/migration.c | 10 ++++++++++
>  3 files changed, 17 insertions(+)
> 
> diff --git a/migration/migration.h b/migration/migration.h
> index 485d58b95f..70dae24516 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -332,6 +332,11 @@ struct MigrationState {
>       * This save hostname when out-going migration starts
>       */
>      char *hostname;
> +    /*
> +     * Synchronize channels after each iteration.
> +     * We used to do that in the past, but it is suboptimal.
> +     */
> +    bool multifd_sync_each_iteration;
>  };
>  
>  void migrate_set_state(int *state, int old_state, int new_state);
> @@ -374,6 +379,7 @@ int migrate_multifd_channels(void);
>  MultiFDCompression migrate_multifd_compression(void);
>  int migrate_multifd_zlib_level(void);
>  int migrate_multifd_zstd_level(void);
> +bool migrate_multifd_sync_each_iteration(void);
>  
>  #ifdef CONFIG_LINUX
>  bool migrate_use_zero_copy_send(void);
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index a673302cce..c8a60917e0 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -43,6 +43,7 @@
>  GlobalProperty hw_compat_7_0[] = {
>      { "arm-gicv3-common", "force-8-bit-prio", "on" },
>      { "nvme-ns", "eui64-default", "on"},
> +    { "migration", "multifd-sync-each-iteration", "on"},
>  };
>  const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
>  
> diff --git a/migration/migration.c b/migration/migration.c
> index 31739b2af9..3f79df0b70 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -2540,6 +2540,13 @@ bool migrate_use_multifd(void)
>      return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
>  }
>  
> +bool migrate_multifd_sync_each_iteration(void)
> +{
> +    MigrationState *s = migrate_get_current();
> +
> +    return s->multifd_sync_each_iteration;
> +}
> +
>  bool migrate_pause_before_switchover(void)
>  {
>      MigrationState *s;
> @@ -4274,6 +4281,9 @@ static Property migration_properties[] = {
>      DEFINE_PROP_SIZE("announce-step", MigrationState,
>                        parameters.announce_step,
>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
> +    /* We will change to false when we introduce the new mechanism */
> +    DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> +                      multifd_sync_each_iteration, true),
>  
>      /* Migration capabilities */
>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> -- 
> 2.34.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-06-21 14:05 ` [PATCH 5/5] multifd: Only sync once each full round of memory Juan Quintela
@ 2022-07-01  2:29   ` Leonardo Brás
  2022-07-04 16:18     ` Juan Quintela
  2022-07-05 13:56   ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 25+ messages in thread
From: Leonardo Brás @ 2022-07-01  2:29 UTC (permalink / raw)
  To: Juan Quintela, qemu-devel
  Cc: Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Dr. David Alan Gilbert,
	Yanan Wang

Hello Juan,

On Tue, 2022-06-21 at 16:05 +0200, Juan Quintela wrote:
> We need to add a new flag that means to sync at that point.
> Notice that we still synchronize at the end of setup and at the end of
> complete stages.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/migration.c |  2 +-
>  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
>  2 files changed, 31 insertions(+), 13 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 3f79df0b70..6627787fc2 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
>      /* We will change to false when we introduce the new mechanism */
>      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> -                      multifd_sync_each_iteration, true),
> +                      multifd_sync_each_iteration, false),
>  
>      /* Migration capabilities */
>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> diff --git a/migration/ram.c b/migration/ram.c
> index 2c7289edad..6792986565 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -81,6 +81,7 @@
>  #define RAM_SAVE_FLAG_XBZRLE   0x40
>  /* 0x80 is reserved in migration.h start with 0x100 next */
>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
> +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
>  
>  XBZRLECacheStats xbzrle_counters;
>  
> @@ -1482,6 +1483,7 @@ retry:
>   * associated with the search process.
>   *
>   * Returns:
> + *        <0: An error happened
>   *         0: no page found, give up
>   *         1: no page found, retry
>   *         2: page found
> @@ -1510,6 +1512,13 @@ static int find_dirty_block(RAMState *rs,
> PageSearchStatus *pss)
>          pss->page = 0;
>          pss->block = QLIST_NEXT_RCU(pss->block, next);
>          if (!pss->block) {
> +            if (!migrate_multifd_sync_each_iteration()) {
> +                int ret = multifd_send_sync_main(rs->f);
> +                if (ret < 0) {
> +                    return ret;
> +                }
> +                qemu_put_be64(rs->f, RAM_SAVE_FLAG_MULTIFD_SYNC);
> +            }
>              /*
>               * If memory migration starts over, we will meet a dirtied page
>               * which may still exists in compression threads's ring, so we
> @@ -2273,7 +2282,8 @@ static int ram_find_and_save_block(RAMState *rs)
>          if (!get_queued_page(rs, &pss)) {
>              /* priority queue empty, so just search for something dirty */
>              int res = find_dirty_block(rs, &pss);
> -            if (res == 0) {
> +            if (res <= 0) {
> +                pages = res;
>                  break;
>              } else if (res == 1)
>                  continue;
> @@ -2943,11 +2953,13 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>      ram_control_before_iterate(f, RAM_CONTROL_SETUP);
>      ram_control_after_iterate(f, RAM_CONTROL_SETUP);
>  
> -    if (migrate_multifd_sync_each_iteration()) {
> -        ret =  multifd_send_sync_main(f);
> -        if (ret < 0) {
> -            return ret;
> -        }

(1) IIUC, the above lines were changed in 2/5 only to be reverted now.
Is that correct? Was it expected?

> +    ret =  multifd_send_sync_main(f);
> +    if (ret < 0) {
> +        return ret;
> +    }
> +
> +    if (!migrate_multifd_sync_each_iteration()) {
> +        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);

(2) I have done some testing with this patchset (because of MSG_ZEROCOPY) and it
seems this part here is breaking migration from this build to 'older' builds
(same commits except for this patchset):

qemu-system-x86_64: Unknown combination of migration flags: 0x200
qemu-system-x86_64: error while loading state section id 2(ram)
qemu-system-x86_64: load of migration failed: Invalid argument

Which makes sense, since there is no RAM_SAVE_FLAG_MULTIFD_SYNC in older
versions. Is this expected/desired?

Strangely enough, it seems to break even with this set on the sending side: 
--global migration.multifd-sync-each-iteration=on

Was the idea of this config to allow migration to older qemu builds?


>      }
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>      qemu_fflush(f);
> @@ -3127,13 +3139,14 @@ static int ram_save_complete(QEMUFile *f, void
> *opaque)
>          return ret;
>      }
>  
> -    if (migrate_multifd_sync_each_iteration()) {
> -        ret = multifd_send_sync_main(rs->f);
> -        if (ret < 0) {
> -            return ret;
> -        }
> +    ret = multifd_send_sync_main(rs->f);
> +    if (ret < 0) {
> +        return ret;
>      }

(3) Same as (1)

>  
> +    if (migrate_multifd_sync_each_iteration()) {
> +        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
> +    }
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>      qemu_fflush(f);
>  
> @@ -3800,7 +3813,9 @@ int ram_load_postcopy(QEMUFile *f)
>              }
>              decompress_data_with_multi_threads(f, page_buffer, len);
>              break;
> -
> +        case RAM_SAVE_FLAG_MULTIFD_SYNC:
> +            multifd_recv_sync_main();
> +            break;
>          case RAM_SAVE_FLAG_EOS:
>              /* normal exit */
>              if (migrate_multifd_sync_each_iteration()) {
> @@ -4079,6 +4094,9 @@ static int ram_load_precopy(QEMUFile *f)
>                  break;
>              }
>              break;
> +        case RAM_SAVE_FLAG_MULTIFD_SYNC:
> +            multifd_recv_sync_main();
> +            break;
>          case RAM_SAVE_FLAG_EOS:
>              /* normal exit */
>              if (migrate_multifd_sync_each_iteration()) {



* Re: [PATCH 1/5] multifd: Create property multifd-sync-each-iteration
  2022-06-30 14:34   ` Dr. David Alan Gilbert
@ 2022-07-04 16:07     ` Juan Quintela
  0 siblings, 0 replies; 25+ messages in thread
From: Juan Quintela @ 2022-07-04 16:07 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We used to synchronize all channels at the end of each RAM section
>> sent.  That is not needed, so prepare to only synchronize once every
>> full round in later patches.
>> 
>> Notice that we initialize the property as true.  We will change the
>> default when we introduce the new mechanism.
>
> I don't understand why this is a property - does it break the actual
> stream format?

Yeap.

You can see in the following patches.  The problem is that we synchronize
each time that we send/receive a

RAM_SAVE_FLAG_EOS

And that is way too much.  As Leo showed, it can be as much as 20 times
a second.

Later, Juan.




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-07-01  2:29   ` Leonardo Brás
@ 2022-07-04 16:18     ` Juan Quintela
  0 siblings, 0 replies; 25+ messages in thread
From: Juan Quintela @ 2022-07-04 16:18 UTC (permalink / raw)
  To: Leonardo Brás
  Cc: qemu-devel, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Dr. David Alan Gilbert,
	Yanan Wang

Leonardo Brás <leobras@redhat.com> wrote:
> Hello Juan,
>
> On Tue, 2022-06-21 at 16:05 +0200, Juan Quintela wrote:
>> We need to add a new flag that means to sync at that point.
>> Notice that we still synchronize at the end of setup and at the end of
>> complete stages.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/migration.c |  2 +-
>>  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
>>  2 files changed, 31 insertions(+), 13 deletions(-)
>> 




>> @@ -2943,11 +2953,13 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>>      ram_control_before_iterate(f, RAM_CONTROL_SETUP);
>>      ram_control_after_iterate(f, RAM_CONTROL_SETUP);
>>  
>> -    if (migrate_multifd_sync_each_iteration()) {
>> -        ret =  multifd_send_sync_main(f);
>> -        if (ret < 0) {
>> -            return ret;
>> -        }
>
> (1) IIUC, the above lines were changed in 2/5 only to be reverted now.
> Is that correct? Was it expected?

Yeap.  The problem here is that (without this patchset) we synchronize
in three places:

- ram_save_setup()
- ram_save_iterate()
- ram_save_complete()

And we want to change it to:

- ram_save_setup()
- ram_save_complete()
- And each time that we pass through the end of memory (many fewer times
  than calls to ram_save_iterate).

In the three places we send:

   RAM_SAVE_FLAG_EOS

And that is what causes the synchronization.

As we can't piggyback on RAM_SAVE_FLAG_EOS anymore, I added a new flag
to synchronize it.

The problem is that now (on setup and complete) we need to synchronize
regardless of whether we do it the older way or the new one.  On iterate
we only synchronize in the old code, and in the new code only when we
reach the end of memory.
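
On the wire the intent is roughly this (a sketch, not the exact byte
stream):

    old:  setup: EOS(+sync) | iterate: EOS(+sync) ... | complete: EOS(+sync)
    new:  setup: MULTIFD_SYNC EOS | iterate: EOS ... wrap: MULTIFD_SYNC ...
          | complete: MULTIFD_SYNC EOS

i.e. RAM_SAVE_FLAG_EOS stops implying a sync, and the explicit
RAM_SAVE_FLAG_MULTIFD_SYNC only goes out at setup, at complete, and when
we wrap around the end of memory.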

I *thought* it was clear this way, but I can do without the change if
people think it is easier.


>> +    ret =  multifd_send_sync_main(f);
>> +    if (ret < 0) {
>> +        return ret;
>> +    }
>> +
>> +    if (!migrate_multifd_sync_each_iteration()) {
>> +        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
>
> (2) I have done some testing with this patchset (because of MSG_ZEROCOPY) and it
> seems this part here is breaking migration from this build to 'older' builds
> (same commits except for this patchset):
>
> qemu-system-x86_64: Unknown combination of migration flags: 0x200
> qemu-system-x86_64: error while loading state section id 2(ram)
> qemu-system-x86_64: load of migration failed: Invalid argument

You can't do that O:-) (TM); that is the whole point of adding the
multifd-sync-each-iteration property.  It is true for old machine types,
and false for new machine types.  If you play with that property, you
had better know what you are doing (TM).

> Which makes sense, since there is no RAM_SAVE_FLAG_MULTIFD_SYNC in older
> versions. Is this expected/desired?

See previous paragraph.

> Strangely enough, it seems to break even with this set on the sending side: 
> --global migration.multifd-sync-each-iteration=on
>
> Was the idea of this config to allow migration to older qemu builds?

If you set it to on, it should work against an old qemu.  By default it
is set to on for old machine types, and to off for new machine types.
So you should have it right if you use new machine types.  (If you use
the "pc" or "q35" machine types, you should know what you are doing when
migrating machines.  We do this kind of change all the time).

>>      }
>>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>>      qemu_fflush(f);
>> @@ -3127,13 +3139,14 @@ static int ram_save_complete(QEMUFile *f, void
>> *opaque)
>>          return ret;
>>      }
>>  
>> -    if (migrate_multifd_sync_each_iteration()) {
>> -        ret = multifd_send_sync_main(rs->f);
>> -        if (ret < 0) {
>> -            return ret;
>> -        }
>> +    ret = multifd_send_sync_main(rs->f);
>> +    if (ret < 0) {
>> +        return ret;
>>      }
>
> (3) Same as (1)

Yeap, this is on purpose.  If you feel that it is clearer the other way,
I can change the patchset.

Later, Juan.




* Re: [PATCH 1/5] multifd: Create property multifd-sync-each-iteration
  2022-06-21 14:05 ` [PATCH 1/5] multifd: Create property multifd-sync-each-iteration Juan Quintela
  2022-06-30 14:34   ` Dr. David Alan Gilbert
@ 2022-07-05 12:19   ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 25+ messages in thread
From: Dr. David Alan Gilbert @ 2022-07-05 12:19 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

* Juan Quintela (quintela@redhat.com) wrote:
> We used to synchronize all channels at the end of each RAM section
> sent.  That is not needed, so prepare to only synchronize once every
> full round in later patches.
> 
> Notice that we initialize the property as true.  We will change the
> default when we introduce the new mechanism.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/migration.h |  6 ++++++
>  hw/core/machine.c     |  1 +
>  migration/migration.c | 10 ++++++++++
>  3 files changed, 17 insertions(+)
> 
> diff --git a/migration/migration.h b/migration/migration.h
> index 485d58b95f..70dae24516 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -332,6 +332,11 @@ struct MigrationState {
>       * This save hostname when out-going migration starts
>       */
>      char *hostname;
> +    /*
> +     * Synchronize channels after each iteration.
> +     * We used to do that in the past, but it is suboptimal.
> +     */
> +    bool multifd_sync_each_iteration;
>  };
>  
>  void migrate_set_state(int *state, int old_state, int new_state);
> @@ -374,6 +379,7 @@ int migrate_multifd_channels(void);
>  MultiFDCompression migrate_multifd_compression(void);
>  int migrate_multifd_zlib_level(void);
>  int migrate_multifd_zstd_level(void);
> +bool migrate_multifd_sync_each_iteration(void);
>  
>  #ifdef CONFIG_LINUX
>  bool migrate_use_zero_copy_send(void);
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index a673302cce..c8a60917e0 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -43,6 +43,7 @@
>  GlobalProperty hw_compat_7_0[] = {
>      { "arm-gicv3-common", "force-8-bit-prio", "on" },
>      { "nvme-ns", "eui64-default", "on"},
> +    { "migration", "multifd-sync-each-iteration", "on"},
>  };
>  const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
>  
> diff --git a/migration/migration.c b/migration/migration.c
> index 31739b2af9..3f79df0b70 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -2540,6 +2540,13 @@ bool migrate_use_multifd(void)
>      return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
>  }
>  
> +bool migrate_multifd_sync_each_iteration(void)
> +{
> +    MigrationState *s = migrate_get_current();
> +
> +    return s->multifd_sync_each_iteration;
> +}
> +
>  bool migrate_pause_before_switchover(void)
>  {
>      MigrationState *s;
> @@ -4274,6 +4281,9 @@ static Property migration_properties[] = {
>      DEFINE_PROP_SIZE("announce-step", MigrationState,
>                        parameters.announce_step,
>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
> +    /* We will change to false when we introduce the new mechanism */
> +    DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> +                      multifd_sync_each_iteration, true),
>  
>      /* Migration capabilities */
>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> -- 
> 2.34.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 2/5] multifd: Put around all sync calls tests for each iteration
  2022-06-21 14:05 ` [PATCH 2/5] multifd: Put around all sync calls tests for each iteration Juan Quintela
@ 2022-07-05 12:20   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 25+ messages in thread
From: Dr. David Alan Gilbert @ 2022-07-05 12:20 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

* Juan Quintela (quintela@redhat.com) wrote:
> We will sync later in different places.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 33 +++++++++++++++++++++------------
>  1 file changed, 21 insertions(+), 12 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 5f5e37f64d..35816a3a0a 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2944,11 +2944,12 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>      ram_control_before_iterate(f, RAM_CONTROL_SETUP);
>      ram_control_after_iterate(f, RAM_CONTROL_SETUP);
>  
> -    ret =  multifd_send_sync_main(f);
> -    if (ret < 0) {
> -        return ret;
> +    if (migrate_multifd_sync_each_iteration()) {
> +        ret =  multifd_send_sync_main(f);
> +        if (ret < 0) {
> +            return ret;
> +        }
>      }
> -
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>      qemu_fflush(f);
>  
> @@ -3057,9 +3058,11 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>  out:
>      if (ret >= 0
>          && migration_is_setup_or_active(migrate_get_current()->state)) {
> -        ret = multifd_send_sync_main(rs->f);
> -        if (ret < 0) {
> -            return ret;
> +        if (migrate_multifd_sync_each_iteration()) {
> +            ret = multifd_send_sync_main(rs->f);
> +            if (ret < 0) {
> +                return ret;
> +            }
>          }
>  
>          qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> @@ -3125,9 +3128,11 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>          return ret;
>      }
>  
> -    ret = multifd_send_sync_main(rs->f);
> -    if (ret < 0) {
> -        return ret;
> +    if (migrate_multifd_sync_each_iteration()) {
> +        ret = multifd_send_sync_main(rs->f);
> +        if (ret < 0) {
> +            return ret;
> +        }
>      }
>  
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> @@ -3799,7 +3804,9 @@ int ram_load_postcopy(QEMUFile *f)
>  
>          case RAM_SAVE_FLAG_EOS:
>              /* normal exit */
> -            multifd_recv_sync_main();
> +            if (migrate_multifd_sync_each_iteration()) {
> +                multifd_recv_sync_main();
> +            }
>              break;
>          default:
>              error_report("Unknown combination of migration flags: 0x%x"
> @@ -4075,7 +4082,9 @@ static int ram_load_precopy(QEMUFile *f)
>              break;
>          case RAM_SAVE_FLAG_EOS:
>              /* normal exit */
> -            multifd_recv_sync_main();
> +            if (migrate_multifd_sync_each_iteration()) {
> +                multifd_recv_sync_main();
> +            }
>              break;
>          default:
>              if (flags & RAM_SAVE_FLAG_HOOK) {
> -- 
> 2.34.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 3/5] migration: Simplify ram_find_and_save_block()
  2022-06-21 14:05 ` [PATCH 3/5] migration: Simplify ram_find_and_save_block() Juan Quintela
@ 2022-07-05 12:51   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 25+ messages in thread
From: Dr. David Alan Gilbert @ 2022-07-05 12:51 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

* Juan Quintela (quintela@redhat.com) wrote:
> Later we will need find_dirty_block() to return errors, so
> simplify the loop.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 20 +++++++++-----------
>  1 file changed, 9 insertions(+), 11 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 35816a3a0a..1d4ff3185b 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2256,7 +2256,6 @@ static int ram_find_and_save_block(RAMState *rs)
>  {
>      PageSearchStatus pss;
>      int pages = 0;
> -    bool again, found;
>  
>      /* No dirty page as there is zero RAM */
>      if (!ram_bytes_total()) {
> @@ -2272,18 +2271,17 @@ static int ram_find_and_save_block(RAMState *rs)
>      }
>  
>      do {
> -        again = true;
> -        found = get_queued_page(rs, &pss);
> -
> -        if (!found) {
> +        if (!get_queued_page(rs, &pss)) {
>              /* priority queue empty, so just search for something dirty */
> -            found = find_dirty_block(rs, &pss, &again);
> +            bool again = true;
> +            if (!find_dirty_block(rs, &pss, &again)) {
> +                if (!again) {
> +                    break;
> +                }
> +            }
>          }
> -
> -        if (found) {
> -            pages = ram_save_host_page(rs, &pss);
> -        }
> -    } while (!pages && again);
> +        pages = ram_save_host_page(rs, &pss);
> +    } while (!pages);
>  
>      rs->last_seen_block = pss.block;
>      rs->last_page = pss.page;
> -- 
> 2.34.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 4/5] migration: Make find_dirty_block() return a single parameter
  2022-06-21 14:05 ` [PATCH 4/5] migration: Make find_dirty_block() return a single parameter Juan Quintela
@ 2022-07-05 12:54   ` Dr. David Alan Gilbert
  2022-07-26 16:23     ` Juan Quintela
  0 siblings, 1 reply; 25+ messages in thread
From: Dr. David Alan Gilbert @ 2022-07-05 12:54 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

* Juan Quintela (quintela@redhat.com) wrote:
> We used to return two bools; now return a single int with the
> following meaning:
> 
> old return / again / new return
> false        false   0
> false        true    1
> true         true    2  /* We don't care about again at all */

We shouldn't use magic numbers; if you want to return it in a single
value then it should be an enum so it is clear.
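
Something like this, maybe (an untested sketch, names invented here):

    typedef enum {
        FIND_DIRTY_GIVE_UP,   /* scanned the whole of RAM, nothing left */
        FIND_DIRTY_AGAIN,     /* nothing in this block, retry the next one */
        FIND_DIRTY_FOUND,     /* pss now points at a dirty page */
    } FindDirtyResult;

    static FindDirtyResult find_dirty_block(RAMState *rs, PageSearchStatus *pss);

then the callers can switch on named values instead of comparing against
0/1/2.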

Dave

> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 37 +++++++++++++++++++------------------
>  1 file changed, 19 insertions(+), 18 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 1d4ff3185b..2c7289edad 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1481,13 +1481,16 @@ retry:
>   * find_dirty_block: find the next dirty page and update any state
>   * associated with the search process.
>   *
> - * Returns true if a page is found
> + * Returns:
> + *         0: no page found, give up
> + *         1: no page found, retry
> + *         2: page found
>   *
>   * @rs: current RAM state
>   * @pss: data about the state of the current dirty page scan
>   * @again: set to false if the search has scanned the whole of RAM
>   */
> -static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
> +static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
>  {
>      /* This is not a postcopy requested page */
>      pss->postcopy_requested = false;
> @@ -1499,8 +1502,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
>           * We've been once around the RAM and haven't found anything.
>           * Give up.
>           */
> -        *again = false;
> -        return false;
> +        return 0;
>      }
>      if (!offset_in_ramblock(pss->block,
>                              ((ram_addr_t)pss->page) << TARGET_PAGE_BITS)) {
> @@ -1529,13 +1531,10 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
>              }
>          }
>          /* Didn't find anything this time, but try again on the new block */
> -        *again = true;
> -        return false;
> +        return 1;
>      } else {
> -        /* Can go around again, but... */
> -        *again = true;
> -        /* We've found something so probably don't need to */
> -        return true;
> +        /* We've found something */
> +        return 2;
>      }
>  }
>  
> @@ -2270,18 +2269,20 @@ static int ram_find_and_save_block(RAMState *rs)
>          pss.block = QLIST_FIRST_RCU(&ram_list.blocks);
>      }
>  
> -    do {
> +    while (true) {
>          if (!get_queued_page(rs, &pss)) {
>              /* priority queue empty, so just search for something dirty */
> -            bool again = true;
> -            if (!find_dirty_block(rs, &pss, &again)) {
> -                if (!again) {
> -                    break;
> -                }
> -            }
> +            int res = find_dirty_block(rs, &pss);
> +            if (res == 0) {
> +                break;
> +            } else if (res == 1)
> +                continue;
>          }
>          pages = ram_save_host_page(rs, &pss);
> -    } while (!pages);
> +        if (pages) {
> +            break;
> +        }
> +    }
>  
>      rs->last_seen_block = pss.block;
>      rs->last_page = pss.page;
> -- 
> 2.34.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-06-21 14:05 ` [PATCH 5/5] multifd: Only sync once each full round of memory Juan Quintela
  2022-07-01  2:29   ` Leonardo Brás
@ 2022-07-05 13:56   ` Dr. David Alan Gilbert
  2022-07-05 14:34     ` Daniel P. Berrangé
  2022-07-05 15:11     ` Juan Quintela
  1 sibling, 2 replies; 25+ messages in thread
From: Dr. David Alan Gilbert @ 2022-07-05 13:56 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

* Juan Quintela (quintela@redhat.com) wrote:
> We need to add a new flag that means to sync at that point.
> Notice that we still synchronize at the end of setup and at the end of
> complete stages.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/migration.c |  2 +-
>  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
>  2 files changed, 31 insertions(+), 13 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 3f79df0b70..6627787fc2 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
>      /* We will change to false when we introduce the new mechanism */
>      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> -                      multifd_sync_each_iteration, true),
> +                      multifd_sync_each_iteration, false),
>  
>      /* Migration capabilities */
>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> diff --git a/migration/ram.c b/migration/ram.c
> index 2c7289edad..6792986565 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -81,6 +81,7 @@
>  #define RAM_SAVE_FLAG_XBZRLE   0x40
>  /* 0x80 is reserved in migration.h start with 0x100 next */
>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
> +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200

Note this is the very last usable flag!
We could do with avoiding using them as flags where we don't need to.

>  XBZRLECacheStats xbzrle_counters;
>  
> @@ -1482,6 +1483,7 @@ retry:
>   * associated with the search process.
>   *
>   * Returns:
> + *        <0: An error happened
>   *         0: no page found, give up
>   *         1: no page found, retry
>   *         2: page found
> @@ -1510,6 +1512,13 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
>          pss->page = 0;
>          pss->block = QLIST_NEXT_RCU(pss->block, next);
>          if (!pss->block) {
> +            if (!migrate_multifd_sync_each_iteration()) {
> +                int ret = multifd_send_sync_main(rs->f);
> +                if (ret < 0) {
> +                    return ret;
> +                }
> +                qemu_put_be64(rs->f, RAM_SAVE_FLAG_MULTIFD_SYNC);
> +            }
>              /*
>               * If memory migration starts over, we will meet a dirtied page
>               * which may still exists in compression threads's ring, so we
> @@ -2273,7 +2282,8 @@ static int ram_find_and_save_block(RAMState *rs)
>          if (!get_queued_page(rs, &pss)) {
>              /* priority queue empty, so just search for something dirty */
>              int res = find_dirty_block(rs, &pss);
> -            if (res == 0) {
> +            if (res <= 0) {
> +                pages = res;
>                  break;
>              } else if (res == 1)
>                  continue;
> @@ -2943,11 +2953,13 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>      ram_control_before_iterate(f, RAM_CONTROL_SETUP);
>      ram_control_after_iterate(f, RAM_CONTROL_SETUP);
>  
> -    if (migrate_multifd_sync_each_iteration()) {
> -        ret =  multifd_send_sync_main(f);
> -        if (ret < 0) {
> -            return ret;
> -        }
> +    ret =  multifd_send_sync_main(f);
> +    if (ret < 0) {
> +        return ret;
> +    }
> +
> +    if (!migrate_multifd_sync_each_iteration()) {
> +        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
>      }
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>      qemu_fflush(f);
> @@ -3127,13 +3139,14 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>          return ret;
>      }
>  
> -    if (migrate_multifd_sync_each_iteration()) {
> -        ret = multifd_send_sync_main(rs->f);
> -        if (ret < 0) {
> -            return ret;
> -        }

It feels like you could have done that in the previous patch.
Anyway,


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> +    ret = multifd_send_sync_main(rs->f);
> +    if (ret < 0) {
> +        return ret;
>      }
>  
> +    if (migrate_multifd_sync_each_iteration()) {
> +        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
> +    }
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>      qemu_fflush(f);
>  
> @@ -3800,7 +3813,9 @@ int ram_load_postcopy(QEMUFile *f)
>              }
>              decompress_data_with_multi_threads(f, page_buffer, len);
>              break;
> -
> +        case RAM_SAVE_FLAG_MULTIFD_SYNC:
> +            multifd_recv_sync_main();
> +            break;
>          case RAM_SAVE_FLAG_EOS:
>              /* normal exit */
>              if (migrate_multifd_sync_each_iteration()) {
> @@ -4079,6 +4094,9 @@ static int ram_load_precopy(QEMUFile *f)
>                  break;
>              }
>              break;
> +        case RAM_SAVE_FLAG_MULTIFD_SYNC:
> +            multifd_recv_sync_main();
> +            break;
>          case RAM_SAVE_FLAG_EOS:
>              /* normal exit */
>              if (migrate_multifd_sync_each_iteration()) {
> -- 
> 2.34.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-07-05 13:56   ` Dr. David Alan Gilbert
@ 2022-07-05 14:34     ` Daniel P. Berrangé
  2022-07-05 15:13       ` Juan Quintela
  2022-07-05 15:11     ` Juan Quintela
  1 sibling, 1 reply; 25+ messages in thread
From: Daniel P. Berrangé @ 2022-07-05 14:34 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Juan Quintela, qemu-devel, Leonardo Bras,
	Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

On Tue, Jul 05, 2022 at 02:56:35PM +0100, Dr. David Alan Gilbert wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
> > We need to add a new flag that means to sync at that point.
> > Notice that we still synchronize at the end of setup and at the end of
> > complete stages.
> > 
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > ---
> >  migration/migration.c |  2 +-
> >  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
> >  2 files changed, 31 insertions(+), 13 deletions(-)
> > 
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 3f79df0b70..6627787fc2 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
> >                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
> >      /* We will change to false when we introduce the new mechanism */
> >      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> > -                      multifd_sync_each_iteration, true),
> > +                      multifd_sync_each_iteration, false),
> >  
> >      /* Migration capabilities */
> >      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 2c7289edad..6792986565 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -81,6 +81,7 @@
> >  #define RAM_SAVE_FLAG_XBZRLE   0x40
> >  /* 0x80 is reserved in migration.h start with 0x100 next */
> >  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
> > +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
> 
> Note this is the very last usable flag!
> We could do with avoiding using them as flags where we don't need to.

Before it is too late, shouldn't we do

   #define RAM_SAVE_FLAG_BIGGER_FLAGS 0x200  

to indicate that this will be followed by another uint64 value
giving us another 64 flags to play with?
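
On the load side that could look something like this (a sketch, not
even compile tested):

    uint64_t addr = qemu_get_be64(f);
    uint64_t flags = addr & ~TARGET_PAGE_MASK;

    if (flags & RAM_SAVE_FLAG_BIGGER_FLAGS) {
        uint64_t flags2 = qemu_get_be64(f);
        /* ... dispatch on the second flag word ... */
    }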


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-07-05 13:56   ` Dr. David Alan Gilbert
  2022-07-05 14:34     ` Daniel P. Berrangé
@ 2022-07-05 15:11     ` Juan Quintela
  2022-07-05 16:52       ` Daniel P. Berrangé
  1 sibling, 1 reply; 25+ messages in thread
From: Juan Quintela @ 2022-07-05 15:11 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We need to add a new flag to mean to sync at that point.
>> Notice that we still synchronize at the end of setup and at the end of
>> complete stages.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/migration.c |  2 +-
>>  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
>>  2 files changed, 31 insertions(+), 13 deletions(-)
>> 
>> diff --git a/migration/migration.c b/migration/migration.c
>> index 3f79df0b70..6627787fc2 100644
>> --- a/migration/migration.c
>> +++ b/migration/migration.c
>> @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
>>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
>>      /* We will change to false when we introduce the new mechanism */
>>      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
>> -                      multifd_sync_each_iteration, true),
>> +                      multifd_sync_each_iteration, false),
>>  
>>      /* Migration capabilities */
>>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 2c7289edad..6792986565 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -81,6 +81,7 @@
>>  #define RAM_SAVE_FLAG_XBZRLE   0x40
>>  /* 0x80 is reserved in migration.h start with 0x100 next */
>>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
>> +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
>
> Note this is the very last usable flag!

We can recover two flags right now:

RAM_SAVE_FLAG_FULL is not used anymore.
0x80 has been free for years.

Once multifd is the default, there are some others that could go.

Later, Juan.

> We could do with avoiding using them as flags where we dont need to.

I can't really think of another way to do it.  The other thing that I
can do is just reuse one of the flags that doesn't make sense for multifd
(RAM_SAVE_FLAG_ZERO after the zero-pages patch, or
RAM_SAVE_FLAG_XBZRLE/COMPRESS_PAGE).

It looks worse to me.

Later, Juan.

> It feels like you could have done that in the previous patch.
> Anyway,
>
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Thanks, Juan.




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-07-05 14:34     ` Daniel P. Berrangé
@ 2022-07-05 15:13       ` Juan Quintela
  0 siblings, 0 replies; 25+ messages in thread
From: Juan Quintela @ 2022-07-05 15:13 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Dr. David Alan Gilbert, qemu-devel, Leonardo Bras,
	Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

Daniel P. Berrangé <berrange@redhat.com> wrote:
> On Tue, Jul 05, 2022 at 02:56:35PM +0100, Dr. David Alan Gilbert wrote:
>> * Juan Quintela (quintela@redhat.com) wrote:
>> > We need to add a new flag to mean to sync at that point.
>> > Notice that we still synchronize at the end of setup and at the end of
>> > complete stages.
>> > 
>> > Signed-off-by: Juan Quintela <quintela@redhat.com>
>> > ---
>> >  migration/migration.c |  2 +-
>> >  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
>> >  2 files changed, 31 insertions(+), 13 deletions(-)
>> > 
>> > diff --git a/migration/migration.c b/migration/migration.c
>> > index 3f79df0b70..6627787fc2 100644
>> > --- a/migration/migration.c
>> > +++ b/migration/migration.c
>> > @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
>> >                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
>> >      /* We will change to false when we introduce the new mechanism */
>> >      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
>> > -                      multifd_sync_each_iteration, true),
>> > +                      multifd_sync_each_iteration, false),
>> >  
>> >      /* Migration capabilities */
>> >      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
>> > diff --git a/migration/ram.c b/migration/ram.c
>> > index 2c7289edad..6792986565 100644
>> > --- a/migration/ram.c
>> > +++ b/migration/ram.c
>> > @@ -81,6 +81,7 @@
>> >  #define RAM_SAVE_FLAG_XBZRLE   0x40
>> >  /* 0x80 is reserved in migration.h start with 0x100 next */
>> >  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
>> > +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
>> 
>> Note this is the very last usable flag!
>> We could do with avoiding using them as flags where we dont need to.
>
> Before it is too late, shouldn't we do
>
>    #define RAM_SAVE_FLAG_BIGGER_FLAGS 0x200  
>
> to indicate that this will be followed by another uint64 value
> giving us another 64 flags to play with ?

Dunno.  We can recover 2 bits already, as I said in my previous answer.
And another two or three once we move to multifd, so we should be ok
(famous last words).

That said, putting a comment saying what the biggest possible value is
looks like a good idea.
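
Something along these lines, assuming the usual constraint that the
flags live in the low bits of the page address (a sketch of the
wording, nothing more):

    /*
     * The flags share the low bits of the page address, so they must
     * stay below the smallest target page size: 0x200 is the last
     * usable bit.
     */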

Later, Juan.




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-07-05 15:11     ` Juan Quintela
@ 2022-07-05 16:52       ` Daniel P. Berrangé
  2022-07-05 17:13         ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 25+ messages in thread
From: Daniel P. Berrangé @ 2022-07-05 16:52 UTC (permalink / raw)
  To: Juan Quintela
  Cc: Dr. David Alan Gilbert, qemu-devel, Leonardo Bras,
	Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

On Tue, Jul 05, 2022 at 05:11:46PM +0200, Juan Quintela wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> We need to add a new flag to mean to sync at that point.
> >> Notice that we still synchronize at the end of setup and at the end of
> >> complete stages.
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> ---
> >>  migration/migration.c |  2 +-
> >>  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
> >>  2 files changed, 31 insertions(+), 13 deletions(-)
> >> 
> >> diff --git a/migration/migration.c b/migration/migration.c
> >> index 3f79df0b70..6627787fc2 100644
> >> --- a/migration/migration.c
> >> +++ b/migration/migration.c
> >> @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
> >>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
> >>      /* We will change to false when we introduce the new mechanism */
> >>      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> >> -                      multifd_sync_each_iteration, true),
> >> +                      multifd_sync_each_iteration, false),
> >>  
> >>      /* Migration capabilities */
> >>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> >> diff --git a/migration/ram.c b/migration/ram.c
> >> index 2c7289edad..6792986565 100644
> >> --- a/migration/ram.c
> >> +++ b/migration/ram.c
> >> @@ -81,6 +81,7 @@
> >>  #define RAM_SAVE_FLAG_XBZRLE   0x40
> >>  /* 0x80 is reserved in migration.h start with 0x100 next */
> >>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
> >> +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
> >
> > Note this is the very last usable flag!
> 
> We can recover two flags right now:
> 
> RAM_SAVE_FLAG_FULL is not used anymore.
> 0x80 is free since years ago.
> 
> Once multifd is default, there are some other that could go.

Non-multifd migration isn't likely to go away any time soon, given
distros' desire to support migration between QEMUs with quite
significantly different versions. So it feels like quite a long time
before we might reclaim more flags.

> > We could do with avoiding using them as flags where we dont need to.
> 
> I can't really think on another way to do it.  The other thing that I
> can do is just reuse one of the flags that don't make sense for multifd
> (RAM_SAVE_FLAG_ZERO after zero pages patch,
> RAM_SAVE_FLAG_XBZRLE/COMPRESS_PAGE).

Re-using flags based on differences in usage context feels like a
recipe for confusing people.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-07-05 16:52       ` Daniel P. Berrangé
@ 2022-07-05 17:13         ` Dr. David Alan Gilbert
  2022-07-05 17:16           ` Daniel P. Berrangé
  0 siblings, 1 reply; 25+ messages in thread
From: Dr. David Alan Gilbert @ 2022-07-05 17:13 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras,
	Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Tue, Jul 05, 2022 at 05:11:46PM +0200, Juan Quintela wrote:
> > "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > > * Juan Quintela (quintela@redhat.com) wrote:
> > >> We need to add a new flag to mean to sync at that point.
> > >> Notice that we still synchronize at the end of setup and at the end of
> > >> complete stages.
> > >> 
> > >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> > >> ---
> > >>  migration/migration.c |  2 +-
> > >>  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
> > >>  2 files changed, 31 insertions(+), 13 deletions(-)
> > >> 
> > >> diff --git a/migration/migration.c b/migration/migration.c
> > >> index 3f79df0b70..6627787fc2 100644
> > >> --- a/migration/migration.c
> > >> +++ b/migration/migration.c
> > >> @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
> > >>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
> > >>      /* We will change to false when we introduce the new mechanism */
> > >>      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> > >> -                      multifd_sync_each_iteration, true),
> > >> +                      multifd_sync_each_iteration, false),
> > >>  
> > >>      /* Migration capabilities */
> > >>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> > >> diff --git a/migration/ram.c b/migration/ram.c
> > >> index 2c7289edad..6792986565 100644
> > >> --- a/migration/ram.c
> > >> +++ b/migration/ram.c
> > >> @@ -81,6 +81,7 @@
> > >>  #define RAM_SAVE_FLAG_XBZRLE   0x40
> > >>  /* 0x80 is reserved in migration.h start with 0x100 next */
> > >>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
> > >> +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
> > >
> > > Note this is the very last usable flag!
> > 
> > We can recover two flags right now:
> > 
> > RAM_SAVE_FLAG_FULL is not used anymore.
> > 0x80 is free since years ago.
> > 
> > Once multifd is default, there are some other that could go.

I have suggested that a few times in the past.

> Non-multifd migration isn't likely to go away any time soon, given
> distros desire to support migration between QEMU's with quite
> significantly different versions. So feels like quite a long time
> before we might reclaim more flags.
> 
> > > We could do with avoiding using them as flags where we dont need to.
> > 
> > I can't really think on another way to do it.  The other thing that I
> > can do is just reuse one of the flags that don't make sense for multifd
> > (RAM_SAVE_FLAG_ZERO after zero pages patch,
> > RAM_SAVE_FLAG_XBZRLE/COMPRESS_PAGE).
> 
> Re-using flags based on use context differences feels like a recipe
> to confuse people.

Note that most of these things aren't really 'flags', in the sense that
only a few of them are actually combinable; so we should start using
combinations to mean new things.

Dave

> With regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-07-05 17:13         ` Dr. David Alan Gilbert
@ 2022-07-05 17:16           ` Daniel P. Berrangé
  2022-07-05 17:20             ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 25+ messages in thread
From: Daniel P. Berrangé @ 2022-07-05 17:16 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Juan Quintela, qemu-devel, Leonardo Bras,
	Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

On Tue, Jul 05, 2022 at 06:13:40PM +0100, Dr. David Alan Gilbert wrote:
> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > On Tue, Jul 05, 2022 at 05:11:46PM +0200, Juan Quintela wrote:
> > > "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > > > * Juan Quintela (quintela@redhat.com) wrote:
> > > >> We need to add a new flag to mean to sync at that point.
> > > >> Notice that we still synchronize at the end of setup and at the end of
> > > >> complete stages.
> > > >> 
> > > >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> > > >> ---
> > > >>  migration/migration.c |  2 +-
> > > >>  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
> > > >>  2 files changed, 31 insertions(+), 13 deletions(-)
> > > >> 
> > > >> diff --git a/migration/migration.c b/migration/migration.c
> > > >> index 3f79df0b70..6627787fc2 100644
> > > >> --- a/migration/migration.c
> > > >> +++ b/migration/migration.c
> > > >> @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
> > > >>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
> > > >>      /* We will change to false when we introduce the new mechanism */
> > > >>      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> > > >> -                      multifd_sync_each_iteration, true),
> > > >> +                      multifd_sync_each_iteration, false),
> > > >>  
> > > >>      /* Migration capabilities */
> > > >>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> > > >> diff --git a/migration/ram.c b/migration/ram.c
> > > >> index 2c7289edad..6792986565 100644
> > > >> --- a/migration/ram.c
> > > >> +++ b/migration/ram.c
> > > >> @@ -81,6 +81,7 @@
> > > >>  #define RAM_SAVE_FLAG_XBZRLE   0x40
> > > >>  /* 0x80 is reserved in migration.h start with 0x100 next */
> > > >>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
> > > >> +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
> > > >
> > > > Note this is the very last usable flag!
> > > 
> > > We can recover two flags right now:
> > > 
> > > RAM_SAVE_FLAG_FULL is not used anymore.
> > > 0x80 is free since years ago.
> > > 
> > > Once multifd is default, there are some other that could go.
> 
> I have suggested that a few times in the past.
> 
> > Non-multifd migration isn't likely to go away any time soon, given
> > distros desire to support migration between QEMU's with quite
> > significantly different versions. So feels like quite a long time
> > before we might reclaim more flags.
> > 
> > > > We could do with avoiding using them as flags where we dont need to.
> > > 
> > > I can't really think on another way to do it.  The other thing that I
> > > can do is just reuse one of the flags that don't make sense for multifd
> > > (RAM_SAVE_FLAG_ZERO after zero pages patch,
> > > RAM_SAVE_FLAG_XBZRLE/COMPRESS_PAGE).
> > 
> > Re-using flags based on use context differences feels like a recipe
> > to confuse people.
> 
> Note that most of these things aren't really 'flags'; in the sense that
> only a few of them are actually combinable; so we should start using
> combinations to mean things new.

IOW, treat the field as an enum of valid values instead, and just
define enum entries for the few valid combinations, giving us many
more values to play with?
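
A minimal sketch of that idea, with every RAM_REC_* name made up on the
spot (only the existing RAM_SAVE_FLAG_* values are real):

    typedef enum {
        RAM_REC_ZERO         = RAM_SAVE_FLAG_ZERO,
        RAM_REC_PAGE         = RAM_SAVE_FLAG_PAGE,
        RAM_REC_XBZRLE       = RAM_SAVE_FLAG_XBZRLE,
        /* a combination no sender ever produced becomes a new record */
        RAM_REC_MULTIFD_SYNC = RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE,
    } RamSaveRecord;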


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-07-05 17:16           ` Daniel P. Berrangé
@ 2022-07-05 17:20             ` Dr. David Alan Gilbert
  2022-07-28  8:25               ` Juan Quintela
  0 siblings, 1 reply; 25+ messages in thread
From: Dr. David Alan Gilbert @ 2022-07-05 17:20 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Juan Quintela, qemu-devel, Leonardo Bras,
	Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Tue, Jul 05, 2022 at 06:13:40PM +0100, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > On Tue, Jul 05, 2022 at 05:11:46PM +0200, Juan Quintela wrote:
> > > > "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > > > > * Juan Quintela (quintela@redhat.com) wrote:
> > > > >> We need to add a new flag to mean to sync at that point.
> > > > >> Notice that we still synchronize at the end of setup and at the end of
> > > > >> complete stages.
> > > > >> 
> > > > >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> > > > >> ---
> > > > >>  migration/migration.c |  2 +-
> > > > >>  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
> > > > >>  2 files changed, 31 insertions(+), 13 deletions(-)
> > > > >> 
> > > > >> diff --git a/migration/migration.c b/migration/migration.c
> > > > >> index 3f79df0b70..6627787fc2 100644
> > > > >> --- a/migration/migration.c
> > > > >> +++ b/migration/migration.c
> > > > >> @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
> > > > >>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
> > > > >>      /* We will change to false when we introduce the new mechanism */
> > > > >>      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> > > > >> -                      multifd_sync_each_iteration, true),
> > > > >> +                      multifd_sync_each_iteration, false),
> > > > >>  
> > > > >>      /* Migration capabilities */
> > > > >>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> > > > >> diff --git a/migration/ram.c b/migration/ram.c
> > > > >> index 2c7289edad..6792986565 100644
> > > > >> --- a/migration/ram.c
> > > > >> +++ b/migration/ram.c
> > > > >> @@ -81,6 +81,7 @@
> > > > >>  #define RAM_SAVE_FLAG_XBZRLE   0x40
> > > > >>  /* 0x80 is reserved in migration.h start with 0x100 next */
> > > > >>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
> > > > >> +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
> > > > >
> > > > > Note this is the very last usable flag!
> > > > 
> > > > We can recover two flags right now:
> > > > 
> > > > RAM_SAVE_FLAG_FULL is not used anymore.
> > > > 0x80 is free since years ago.
> > > > 
> > > > Once multifd is default, there are some other that could go.
> > 
> > I have suggested that a few times in the past.
> > 
> > > Non-multifd migration isn't likely to go away any time soon, given
> > > distros desire to support migration between QEMU's with quite
> > > significantly different versions. So feels like quite a long time
> > > before we might reclaim more flags.
> > > 
> > > > > We could do with avoiding using them as flags where we dont need to.
> > > > 
> > > > I can't really think on another way to do it.  The other thing that I
> > > > can do is just reuse one of the flags that don't make sense for multifd
> > > > (RAM_SAVE_FLAG_ZERO after zero pages patch,
> > > > RAM_SAVE_FLAG_XBZRLE/COMPRESS_PAGE).
> > > 
> > > Re-using flags based on use context differences feels like a recipe
> > > to confuse people.
> > 
> > Note that most of these things aren't really 'flags'; in the sense that
> > only a few of them are actually combinable; so we should start using
> > combinations to mean things new.
> 
> IOW, treat the field as an enum of valid values instead, and just
> define enum entries for the few valid combinations, giving us many
> more values to play with ?

Right; some care needs to be taken with the ones that were interpreted
as flags; but since you're not going to send the new values to an old
qemu, you've got quite a bit of flexibility.

Dave

> 
> With regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH 4/5] migration: Make find_dirty_block() return a single parameter
  2022-07-05 12:54   ` Dr. David Alan Gilbert
@ 2022-07-26 16:23     ` Juan Quintela
  2022-07-28  9:07       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 25+ messages in thread
From: Juan Quintela @ 2022-07-26 16:23 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We used to return two bools, just return a single int with the
>> following meaning:
>> 
>> old return / again / new return
>> false        false   0
>> false        true    1
>> true         true    2  /* We don't care about again at all */
>
> We shouldn't use magic numbers; if you want to return it in a single
> value then it should be an enum so it is clear.

I also need to return an error in the following patches.
I am not sure if it is clearer to change to an enum.
Will try and see.

Later, Juan.




* Re: [PATCH 5/5] multifd: Only sync once each full round of memory
  2022-07-05 17:20             ` Dr. David Alan Gilbert
@ 2022-07-28  8:25               ` Juan Quintela
  0 siblings, 0 replies; 25+ messages in thread
From: Juan Quintela @ 2022-07-28  8:25 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Daniel P. Berrangé,
	qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Daniel P. Berrangé (berrange@redhat.com) wrote:
>> On Tue, Jul 05, 2022 at 06:13:40PM +0100, Dr. David Alan Gilbert wrote:
>> > * Daniel P. Berrangé (berrange@redhat.com) wrote:
>> > > On Tue, Jul 05, 2022 at 05:11:46PM +0200, Juan Quintela wrote:
>> > > > "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
>> > > > > * Juan Quintela (quintela@redhat.com) wrote:
>> > > > >> We need to add a new flag to mean to sync at that point.
>> > > > >> Notice that we still synchronize at the end of setup and at the end of
>> > > > >> complete stages.
>> > > > >> 
>> > > > >> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> > > > >> ---
>> > > > >>  migration/migration.c |  2 +-
>> > > > >>  migration/ram.c       | 42 ++++++++++++++++++++++++++++++------------
>> > > > >>  2 files changed, 31 insertions(+), 13 deletions(-)
>> > > > >> 
>> > > > >> diff --git a/migration/migration.c b/migration/migration.c
>> > > > >> index 3f79df0b70..6627787fc2 100644
>> > > > >> --- a/migration/migration.c
>> > > > >> +++ b/migration/migration.c
>> > > > >> @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
>> > > > >>                        DEFAULT_MIGRATE_ANNOUNCE_STEP),
>> > > > >>      /* We will change to false when we introduce the new mechanism */
>> > > > >>      DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
>> > > > >> -                      multifd_sync_each_iteration, true),
>> > > > >> +                      multifd_sync_each_iteration, false),
>> > > > >>  
>> > > > >>      /* Migration capabilities */
>> > > > >>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
>> > > > >> diff --git a/migration/ram.c b/migration/ram.c
>> > > > >> index 2c7289edad..6792986565 100644
>> > > > >> --- a/migration/ram.c
>> > > > >> +++ b/migration/ram.c
>> > > > >> @@ -81,6 +81,7 @@
>> > > > >>  #define RAM_SAVE_FLAG_XBZRLE   0x40
>> > > > >>  /* 0x80 is reserved in migration.h start with 0x100 next */
>> > > > >>  #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
>> > > > >> +#define RAM_SAVE_FLAG_MULTIFD_SYNC     0x200
>> > > > >
>> > > > > Note this is the very last usable flag!
>> > > > 
>> > > > We can recover two flags right now:
>> > > > 
>> > > > RAM_SAVE_FLAG_FULL is not used anymore.
>> > > > 0x80 is free since years ago.
>> > > > 
>> > > > Once multifd is default, there are some other that could go.
>> > 
>> > I have suggested that a few times in the past.
>> > 
>> > > Non-multifd migration isn't likely to go away any time soon, given
>> > > distros desire to support migration between QEMU's with quite
>> > > significantly different versions. So feels like quite a long time
>> > > before we might reclaim more flags.
>> > > 
>> > > > > We could do with avoiding using them as flags where we dont need to.
>> > > > 
>> > > > I can't really think on another way to do it.  The other thing that I
>> > > > can do is just reuse one of the flags that don't make sense for multifd
>> > > > (RAM_SAVE_FLAG_ZERO after zero pages patch,
>> > > > RAM_SAVE_FLAG_XBZRLE/COMPRESS_PAGE).
>> > > 
>> > > Re-using flags based on use context differences feels like a recipe
>> > > to confuse people.
>> > 
>> > Note that most of these things aren't really 'flags'; in the sense that
>> > only a few of them are actually combinable; so we should start using
>> > combinations to mean things new.
>> 
>> IOW, treat the field as an enum of valid values instead, and just
>> define enum entries for the few valid combinations, giving us many
>> more values to play with ?
>
> Right; some care needs to be taken with the ones that were interpreted
> as flags; but since you're not going to send the new values to an old
> qemu, you've got quite a bit of flexibility.

Right now no combinations are allowed, so we are free to play with that
combination idea.  The reception-side code is:

        /* CONTINUE is the only bit that gets combined with the
         * others, so mask it out and switch on what is left */
        switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
        case RAM_SAVE_FLAG_MEM_SIZE:        /* setup: ram size and block list */
            ....
            break;

        case RAM_SAVE_FLAG_ZERO:            /* an all-zeroes page */
            ...
            break;

        case RAM_SAVE_FLAG_PAGE:            /* a full copy of one page */
            ....
            break;

        case RAM_SAVE_FLAG_COMPRESS_PAGE:   /* a compressed page */
            ....
            break;

        case RAM_SAVE_FLAG_XBZRLE:          /* an xbzrle-encoded page */
            ....
            break;
        case RAM_SAVE_FLAG_MULTIFD_SYNC:    /* the new sync point */
            ...
            break;
        case RAM_SAVE_FLAG_EOS:             /* end of section */
            ....
            break;
        default:
            if (flags & RAM_SAVE_FLAG_HOOK) {  /* RDMA hook */
                ....
            }
        }

So the only value that really acts as a flag is CONTINUE; there are no
other combinations of flags.

Yes, RAM_SAVE_FLAG_HOOK is as weird as it can be.

Later, Juan.




* Re: [PATCH 4/5] migration: Make find_dirty_block() return a single parameter
  2022-07-26 16:23     ` Juan Quintela
@ 2022-07-28  9:07       ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 25+ messages in thread
From: Dr. David Alan Gilbert @ 2022-07-28  9:07 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Leonardo Bras, Philippe Mathieu-Daudé,
	Marcel Apfelbaum, Eduardo Habkost, Yanan Wang

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> We used to return two bools, just return a single int with the
> >> following meaning:
> >> 
> >> old return / again / new return
> >> false        false   0
> >> false        true    1
> >> true         true    2  /* We don't care about again at all */
> >
> > We shouldn't use magic numbers; if you want to return it in a single
> > value then it should be an enum so it is clear.
> 
> I need to also return an error in the following patches.
> I am not sure if it clearer to try to change to an enum.
> Will try and see.

Well, even if you used a const or #define, it would be better than having
0/1/2 all over.
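
Even something this small would help readability (hypothetical names,
just sketching the idea; the mapping follows the table in the commit
message):

    /* find_dirty_block() results */
    #define FDB_ALL_CLEAN    0  /* no dirty page left this round */
    #define FDB_TRY_AGAIN    1  /* wrapped around, caller retries */
    #define FDB_DIRTY_FOUND  2  /* pss now points at a dirty page */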

Dave

> Later, Juan.
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




end of thread

Thread overview: 25+ messages
2022-06-21 14:05 [PATCH 0/5] Eliminate multifd flush Juan Quintela
2022-06-21 14:05 ` [PATCH 1/5] multifd: Create property multifd-sync-each-iteration Juan Quintela
2022-06-30 14:34   ` Dr. David Alan Gilbert
2022-07-04 16:07     ` Juan Quintela
2022-07-05 12:19   ` Dr. David Alan Gilbert
2022-06-21 14:05 ` [PATCH 2/5] multifd: Put around all sync calls tests for each iteration Juan Quintela
2022-07-05 12:20   ` Dr. David Alan Gilbert
2022-06-21 14:05 ` [PATCH 3/5] migration: Simplify ram_find_and_save_block() Juan Quintela
2022-07-05 12:51   ` Dr. David Alan Gilbert
2022-06-21 14:05 ` [PATCH 4/5] migration: Make find_dirty_block() return a single parameter Juan Quintela
2022-07-05 12:54   ` Dr. David Alan Gilbert
2022-07-26 16:23     ` Juan Quintela
2022-07-28  9:07       ` Dr. David Alan Gilbert
2022-06-21 14:05 ` [PATCH 5/5] multifd: Only sync once each full round of memory Juan Quintela
2022-07-01  2:29   ` Leonardo Brás
2022-07-04 16:18     ` Juan Quintela
2022-07-05 13:56   ` Dr. David Alan Gilbert
2022-07-05 14:34     ` Daniel P. Berrangé
2022-07-05 15:13       ` Juan Quintela
2022-07-05 15:11     ` Juan Quintela
2022-07-05 16:52       ` Daniel P. Berrangé
2022-07-05 17:13         ` Dr. David Alan Gilbert
2022-07-05 17:16           ` Daniel P. Berrangé
2022-07-05 17:20             ` Dr. David Alan Gilbert
2022-07-28  8:25               ` Juan Quintela
