* [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental
@ 2019-02-20 11:56 Juan Quintela
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 1/8] multifd: Only send pages when packet are not empty Juan Quintela
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Juan Quintela @ 2019-02-20 11:56 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Paolo Bonzini, Eric Blake, Laurent Vivier,
	Juan Quintela, Dr. David Alan Gilbert, Markus Armbruster

v2:
- Measure packet size in bytes, not pages
- Change the default value from 64KB to 512KB
- Rename the "used" field to pages_used
- Rename the "size" field to pages_alloc
- Create the next_packet_size field
  It will be used in the compression series later.
- Be flexible about what packet sizes we allow.

Please review,

In v1:

- Change page_count default to 128.

  16 was really small, and bigger packets cause much less contention
  on the mutexes.

- Drop multifd-page_count parameter

  This parameter was useful for testing, but in all my testing 128 is
  good enough; there is no need for an extra knob.  Libvirt doesn't
  want to expose this parameter because it is difficult to explain.

- Drop experimental "x-" from multifd

  Code is stable, nothing big is happening here.

- Multifd test

  And a test for multifd.  This test has already been posted in other
  patch series, but now it uses the names without the "x-" prefix.

Please review.

Juan Quintela (8):
  multifd: Only send pages when packet are not empty
  multifd: Rename "size" member to pages_alloc
  multifd: Create new next_packet_size field
  multifd: Drop x-multifd-page-count parameter
  multifd: Be flexible about packet size
  multifd: Change default packet size
  multifd: Drop x-
  tests: Add migration multifd test

 hmp.c                  | 17 +++------
 migration/migration.c  | 56 +++++++-----------------------
 migration/migration.h  |  1 -
 migration/ram.c        | 79 ++++++++++++++++++++++++++++++------------
 migration/trace-events |  4 +--
 qapi/migration.json    | 45 +++++++++---------------
 tests/migration-test.c | 48 +++++++++++++++++++++++++
 7 files changed, 142 insertions(+), 108 deletions(-)

-- 
2.20.1

* [Qemu-devel] [PATCH v2 1/8] multifd: Only send pages when packet are not empty
  2019-02-20 11:56 [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental Juan Quintela
@ 2019-02-20 11:56 ` Juan Quintela
  2019-02-21 17:43   ` Dr. David Alan Gilbert
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 2/8] multifd: Rename "size" member to pages_alloc Juan Quintela
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Juan Quintela @ 2019-02-20 11:56 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Paolo Bonzini, Eric Blake, Laurent Vivier,
	Juan Quintela, Dr. David Alan Gilbert, Markus Armbruster

We sometimes send packets without pages for synchronization.  The
iov functions do the right thing, but we will be changing this code in
future patches.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 59191c1ed2..8b5fd67d66 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1025,9 +1025,12 @@ static void *multifd_send_thread(void *opaque)
                 break;
             }
 
-            ret = qio_channel_writev_all(p->c, p->pages->iov, used, &local_err);
-            if (ret != 0) {
-                break;
+            if (used) {
+                ret = qio_channel_writev_all(p->c, p->pages->iov,
+                                             used, &local_err);
+                if (ret != 0) {
+                    break;
+                }
             }
 
             qemu_mutex_lock(&p->mutex);
@@ -1254,9 +1257,12 @@ static void *multifd_recv_thread(void *opaque)
         p->num_pages += used;
         qemu_mutex_unlock(&p->mutex);
 
-        ret = qio_channel_readv_all(p->c, p->pages->iov, used, &local_err);
-        if (ret != 0) {
-            break;
+        if (used) {
+            ret = qio_channel_readv_all(p->c, p->pages->iov,
+                                        used, &local_err);
+            if (ret != 0) {
+                break;
+            }
         }
 
         if (flags & MULTIFD_FLAG_SYNC) {
-- 
2.20.1

* [Qemu-devel] [PATCH v2 2/8] multifd: Rename "size" member to pages_alloc
  2019-02-20 11:56 [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental Juan Quintela
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 1/8] multifd: Only send pages when packet are not empty Juan Quintela
@ 2019-02-20 11:56 ` Juan Quintela
  2019-02-21 17:48   ` Dr. David Alan Gilbert
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 3/8] multifd: Create new next_packet_size field Juan Quintela
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Juan Quintela @ 2019-02-20 11:56 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Paolo Bonzini, Eric Blake, Laurent Vivier,
	Juan Quintela, Dr. David Alan Gilbert, Markus Armbruster

It indicates the number of pages allocated for one packet.  While at
it, rename "used" to "pages_used".

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 8b5fd67d66..29f0d431a8 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -531,8 +531,9 @@ typedef struct {
     uint32_t magic;
     uint32_t version;
     uint32_t flags;
-    uint32_t size;
-    uint32_t used;
+    /* maximum number of allocated pages */
+    uint32_t pages_alloc;
+    uint32_t pages_used;
     uint64_t packet_num;
     char ramblock[256];
     uint64_t offset[];
@@ -718,8 +719,8 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
     packet->magic = cpu_to_be32(MULTIFD_MAGIC);
     packet->version = cpu_to_be32(MULTIFD_VERSION);
     packet->flags = cpu_to_be32(p->flags);
-    packet->size = cpu_to_be32(migrate_multifd_page_count());
-    packet->used = cpu_to_be32(p->pages->used);
+    packet->pages_alloc = cpu_to_be32(migrate_multifd_page_count());
+    packet->pages_used = cpu_to_be32(p->pages->used);
     packet->packet_num = cpu_to_be64(p->packet_num);
 
     if (p->pages->block) {
@@ -755,19 +756,19 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
 
     p->flags = be32_to_cpu(packet->flags);
 
-    packet->size = be32_to_cpu(packet->size);
-    if (packet->size > migrate_multifd_page_count()) {
+    packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
+    if (packet->pages_alloc > migrate_multifd_page_count()) {
         error_setg(errp, "multifd: received packet "
                    "with size %d and expected maximum size %d",
-                   packet->size, migrate_multifd_page_count()) ;
+                   packet->pages_alloc, migrate_multifd_page_count()) ;
         return -1;
     }
 
-    p->pages->used = be32_to_cpu(packet->used);
-    if (p->pages->used > packet->size) {
+    p->pages->used = be32_to_cpu(packet->pages_used);
+    if (p->pages->used > packet->pages_alloc) {
         error_setg(errp, "multifd: received packet "
-                   "with size %d and expected maximum size %d",
-                   p->pages->used, packet->size) ;
+                   "with %d pages and expected maximum pages are %d",
+                   p->pages->used, packet->pages_alloc) ;
         return -1;
     }
 
-- 
2.20.1

* [Qemu-devel] [PATCH v2 3/8] multifd: Create new next_packet_size field
  2019-02-20 11:56 [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental Juan Quintela
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 1/8] multifd: Only send pages when packet are not empty Juan Quintela
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 2/8] multifd: Rename "size" member to pages_alloc Juan Quintela
@ 2019-02-20 11:56 ` Juan Quintela
  2019-02-21 18:45   ` Dr. David Alan Gilbert
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 4/8] multifd: Drop x-multifd-page-count parameter Juan Quintela
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Juan Quintela @ 2019-02-20 11:56 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Paolo Bonzini, Eric Blake, Laurent Vivier,
	Juan Quintela, Dr. David Alan Gilbert, Markus Armbruster

We will need to send this field when we add compression support.  As
we are still in the experimental x- stage, we can make this kind of
change.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c        | 15 +++++++++++++--
 migration/trace-events |  4 ++--
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 29f0d431a8..26ed26fc2d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -534,6 +534,8 @@ typedef struct {
     /* maximum number of allocated pages */
     uint32_t pages_alloc;
     uint32_t pages_used;
+    /* size of the next packet that contains pages */
+    uint32_t next_packet_size;
     uint64_t packet_num;
     char ramblock[256];
     uint64_t offset[];
@@ -581,6 +583,8 @@ typedef struct {
     MultiFDPacket_t *packet;
     /* multifd flags for each packet */
     uint32_t flags;
+    /* size of the next packet that contains pages */
+    uint32_t next_packet_size;
     /* global number of generated multifd packets */
     uint64_t packet_num;
     /* thread local variables */
@@ -617,6 +621,8 @@ typedef struct {
     /* global number of generated multifd packets */
     uint64_t packet_num;
     /* thread local variables */
+    /* size of the next packet that contains pages */
+    uint32_t next_packet_size;
     /* packets sent through this channel */
     uint64_t num_packets;
     /* pages sent through this channel */
@@ -721,6 +727,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
     packet->flags = cpu_to_be32(p->flags);
     packet->pages_alloc = cpu_to_be32(migrate_multifd_page_count());
     packet->pages_used = cpu_to_be32(p->pages->used);
+    packet->next_packet_size = cpu_to_be32(p->next_packet_size);
     packet->packet_num = cpu_to_be64(p->packet_num);
 
     if (p->pages->block) {
@@ -772,6 +779,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
         return -1;
     }
 
+    p->next_packet_size = be32_to_cpu(packet->next_packet_size);
     p->packet_num = be64_to_cpu(packet->packet_num);
 
     if (p->pages->used) {
@@ -1011,6 +1019,7 @@ static void *multifd_send_thread(void *opaque)
             uint64_t packet_num = p->packet_num;
             uint32_t flags = p->flags;
 
+            p->next_packet_size = used * qemu_target_page_size();
             multifd_send_fill_packet(p);
             p->flags = 0;
             p->num_packets++;
@@ -1018,7 +1027,8 @@ static void *multifd_send_thread(void *opaque)
             p->pages->used = 0;
             qemu_mutex_unlock(&p->mutex);
 
-            trace_multifd_send(p->id, packet_num, used, flags);
+            trace_multifd_send(p->id, packet_num, used, flags,
+                               p->next_packet_size);
 
             ret = qio_channel_write_all(p->c, (void *)p->packet,
                                         p->packet_len, &local_err);
@@ -1253,7 +1263,8 @@ static void *multifd_recv_thread(void *opaque)
 
         used = p->pages->used;
         flags = p->flags;
-        trace_multifd_recv(p->id, p->packet_num, used, flags);
+        trace_multifd_recv(p->id, p->packet_num, used, flags,
+                           p->next_packet_size);
         p->num_packets++;
         p->num_pages += used;
         qemu_mutex_unlock(&p->mutex);
diff --git a/migration/trace-events b/migration/trace-events
index bd2d0cd25a..a11e66e1d9 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -77,13 +77,13 @@ get_queued_page_not_dirty(const char *block_name, uint64_t tmp_offset, unsigned
 migration_bitmap_sync_start(void) ""
 migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
 migration_throttle(void) ""
-multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags) "channel %d packet number %" PRIu64 " pages %d flags 0x%x"
+multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %d packet number %" PRIu64 " pages %d flags 0x%x next packet size %d"
 multifd_recv_sync_main(long packet_num) "packet num %ld"
 multifd_recv_sync_main_signal(uint8_t id) "channel %d"
 multifd_recv_sync_main_wait(uint8_t id) "channel %d"
 multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %d packets %" PRIu64 " pages %" PRIu64
 multifd_recv_thread_start(uint8_t id) "%d"
-multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags) "channel %d packet_num %" PRIu64 " pages %d flags 0x%x"
+multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " pages %d flags 0x%x next packet size %d"
 multifd_send_sync_main(long packet_num) "packet num %ld"
 multifd_send_sync_main_signal(uint8_t id) "channel %d"
 multifd_send_sync_main_wait(uint8_t id) "channel %d"
-- 
2.20.1

* [Qemu-devel] [PATCH v2 4/8] multifd: Drop x-multifd-page-count parameter
  2019-02-20 11:56 [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental Juan Quintela
                   ` (2 preceding siblings ...)
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 3/8] multifd: Create new next_packet_size field Juan Quintela
@ 2019-02-20 11:56 ` Juan Quintela
  2019-02-21 17:51   ` Dr. David Alan Gilbert
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 5/8] multifd: Be flexible about packet size Juan Quintela
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Juan Quintela @ 2019-02-20 11:56 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Paolo Bonzini, Eric Blake, Laurent Vivier,
	Juan Quintela, Dr. David Alan Gilbert, Markus Armbruster

Libvirt doesn't want to expose (and explain) this parameter.  From
now on we measure the packet size in bytes instead of pages, so it is
the same independently of the architecture.  We choose the x86 page
size.  Notice that the following patch makes this size flexible.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 hmp.c                 |  7 -------
 migration/migration.c | 30 ------------------------------
 migration/migration.h |  1 -
 migration/ram.c       | 15 ++++++++++-----
 qapi/migration.json   | 13 +------------
 5 files changed, 11 insertions(+), 55 deletions(-)

diff --git a/hmp.c b/hmp.c
index 8e283153b5..8bd5e48005 100644
--- a/hmp.c
+++ b/hmp.c
@@ -422,9 +422,6 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "%s: %u\n",
             MigrationParameter_str(MIGRATION_PARAMETER_X_MULTIFD_CHANNELS),
             params->x_multifd_channels);
-        monitor_printf(mon, "%s: %u\n",
-            MigrationParameter_str(MIGRATION_PARAMETER_X_MULTIFD_PAGE_COUNT),
-            params->x_multifd_page_count);
         monitor_printf(mon, "%s: %" PRIu64 "\n",
             MigrationParameter_str(MIGRATION_PARAMETER_XBZRLE_CACHE_SIZE),
             params->xbzrle_cache_size);
@@ -1772,10 +1769,6 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         p->has_x_multifd_channels = true;
         visit_type_int(v, param, &p->x_multifd_channels, &err);
         break;
-    case MIGRATION_PARAMETER_X_MULTIFD_PAGE_COUNT:
-        p->has_x_multifd_page_count = true;
-        visit_type_int(v, param, &p->x_multifd_page_count, &err);
-        break;
     case MIGRATION_PARAMETER_XBZRLE_CACHE_SIZE:
         p->has_xbzrle_cache_size = true;
         visit_type_size(v, param, &cache_size, &err);
diff --git a/migration/migration.c b/migration/migration.c
index 5ecf0978ac..f1b34bfe29 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -81,7 +81,6 @@
 /* The delay time (in ms) between two COLO checkpoints */
 #define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
 #define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
-#define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
 
 /* Background transfer rate for postcopy, 0 means unlimited, note
  * that page requests can still exceed this limit.
@@ -749,8 +748,6 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
     params->block_incremental = s->parameters.block_incremental;
     params->has_x_multifd_channels = true;
     params->x_multifd_channels = s->parameters.x_multifd_channels;
-    params->has_x_multifd_page_count = true;
-    params->x_multifd_page_count = s->parameters.x_multifd_page_count;
     params->has_xbzrle_cache_size = true;
     params->xbzrle_cache_size = s->parameters.xbzrle_cache_size;
     params->has_max_postcopy_bandwidth = true;
@@ -1112,14 +1109,6 @@ static bool migrate_params_check(MigrationParameters *params, Error **errp)
                    "is invalid, it should be in the range of 1 to 255");
         return false;
     }
-    if (params->has_x_multifd_page_count &&
-        (params->x_multifd_page_count < 1 ||
-         params->x_multifd_page_count > 10000)) {
-        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-                   "multifd_page_count",
-                   "is invalid, it should be in the range of 1 to 10000");
-        return false;
-    }
 
     if (params->has_xbzrle_cache_size &&
         (params->xbzrle_cache_size < qemu_target_page_size() ||
@@ -1202,9 +1191,6 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
     if (params->has_x_multifd_channels) {
         dest->x_multifd_channels = params->x_multifd_channels;
     }
-    if (params->has_x_multifd_page_count) {
-        dest->x_multifd_page_count = params->x_multifd_page_count;
-    }
     if (params->has_xbzrle_cache_size) {
         dest->xbzrle_cache_size = params->xbzrle_cache_size;
     }
@@ -1283,9 +1269,6 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
     if (params->has_x_multifd_channels) {
         s->parameters.x_multifd_channels = params->x_multifd_channels;
     }
-    if (params->has_x_multifd_page_count) {
-        s->parameters.x_multifd_page_count = params->x_multifd_page_count;
-    }
     if (params->has_xbzrle_cache_size) {
         s->parameters.xbzrle_cache_size = params->xbzrle_cache_size;
         xbzrle_cache_resize(params->xbzrle_cache_size, errp);
@@ -2044,15 +2027,6 @@ int migrate_multifd_channels(void)
     return s->parameters.x_multifd_channels;
 }
 
-int migrate_multifd_page_count(void)
-{
-    MigrationState *s;
-
-    s = migrate_get_current();
-
-    return s->parameters.x_multifd_page_count;
-}
-
 int migrate_use_xbzrle(void)
 {
     MigrationState *s;
@@ -3286,9 +3260,6 @@ static Property migration_properties[] = {
     DEFINE_PROP_UINT8("x-multifd-channels", MigrationState,
                       parameters.x_multifd_channels,
                       DEFAULT_MIGRATE_MULTIFD_CHANNELS),
-    DEFINE_PROP_UINT32("x-multifd-page-count", MigrationState,
-                      parameters.x_multifd_page_count,
-                      DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT),
     DEFINE_PROP_SIZE("xbzrle-cache-size", MigrationState,
                       parameters.xbzrle_cache_size,
                       DEFAULT_MIGRATE_XBZRLE_CACHE_SIZE),
@@ -3366,7 +3337,6 @@ static void migration_instance_init(Object *obj)
     params->has_x_checkpoint_delay = true;
     params->has_block_incremental = true;
     params->has_x_multifd_channels = true;
-    params->has_x_multifd_page_count = true;
     params->has_xbzrle_cache_size = true;
     params->has_max_postcopy_bandwidth = true;
     params->has_max_cpu_throttle = true;
diff --git a/migration/migration.h b/migration/migration.h
index 837709d8a1..7e03643683 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -269,7 +269,6 @@ bool migrate_auto_converge(void);
 bool migrate_use_multifd(void);
 bool migrate_pause_before_switchover(void);
 int migrate_multifd_channels(void);
-int migrate_multifd_page_count(void);
 
 int migrate_use_xbzrle(void);
 int64_t migrate_xbzrle_cache_size(void);
diff --git a/migration/ram.c b/migration/ram.c
index 26ed26fc2d..e22d02760b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -520,6 +520,9 @@ exit:
 
 #define MULTIFD_FLAG_SYNC (1 << 0)
 
+/* This value needs to be a multiple of qemu_target_page_size() */
+#define MULTIFD_PACKET_SIZE (64 * 1024)
+
 typedef struct {
     uint32_t magic;
     uint32_t version;
@@ -720,12 +723,13 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
 static void multifd_send_fill_packet(MultiFDSendParams *p)
 {
     MultiFDPacket_t *packet = p->packet;
+    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     int i;
 
     packet->magic = cpu_to_be32(MULTIFD_MAGIC);
     packet->version = cpu_to_be32(MULTIFD_VERSION);
     packet->flags = cpu_to_be32(p->flags);
-    packet->pages_alloc = cpu_to_be32(migrate_multifd_page_count());
+    packet->pages_alloc = cpu_to_be32(page_count);
     packet->pages_used = cpu_to_be32(p->pages->used);
     packet->next_packet_size = cpu_to_be32(p->next_packet_size);
     packet->packet_num = cpu_to_be64(p->packet_num);
@@ -742,6 +746,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
 static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
 {
     MultiFDPacket_t *packet = p->packet;
+    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     RAMBlock *block;
     int i;
 
@@ -764,10 +769,10 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     p->flags = be32_to_cpu(packet->flags);
 
     packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
-    if (packet->pages_alloc > migrate_multifd_page_count()) {
+    if (packet->pages_alloc > page_count) {
         error_setg(errp, "multifd: received packet "
                    "with size %d and expected maximum size %d",
-                   packet->pages_alloc, migrate_multifd_page_count()) ;
+                   packet->pages_alloc, page_count) ;
         return -1;
     }
 
@@ -1099,7 +1104,7 @@ static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
 int multifd_save_setup(void)
 {
     int thread_count;
-    uint32_t page_count = migrate_multifd_page_count();
+    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     uint8_t i;
 
     if (!migrate_use_multifd()) {
@@ -1299,7 +1304,7 @@ static void *multifd_recv_thread(void *opaque)
 int multifd_load_setup(void)
 {
     int thread_count;
-    uint32_t page_count = migrate_multifd_page_count();
+    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     uint8_t i;
 
     if (!migrate_use_multifd()) {
diff --git a/qapi/migration.json b/qapi/migration.json
index b62947791f..8c5db60406 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -547,9 +547,6 @@
 #                     number of sockets used for migration.  The
 #                     default value is 2 (since 2.11)
 #
-# @x-multifd-page-count: Number of pages sent together to a thread.
-#                        The default value is 16 (since 2.11)
-#
 # @xbzrle-cache-size: cache size to be used by XBZRLE migration.  It
 #                     needs to be a multiple of the target page size
 #                     and a power of 2
@@ -569,7 +566,7 @@
            'cpu-throttle-initial', 'cpu-throttle-increment',
            'tls-creds', 'tls-hostname', 'max-bandwidth',
            'downtime-limit', 'x-checkpoint-delay', 'block-incremental',
-           'x-multifd-channels', 'x-multifd-page-count',
+           'x-multifd-channels',
            'xbzrle-cache-size', 'max-postcopy-bandwidth',
            'max-cpu-throttle' ] }
 
@@ -637,9 +634,6 @@
 #                     number of sockets used for migration.  The
 #                     default value is 2 (since 2.11)
 #
-# @x-multifd-page-count: Number of pages sent together to a thread.
-#                        The default value is 16 (since 2.11)
-#
 # @xbzrle-cache-size: cache size to be used by XBZRLE migration.  It
 #                     needs to be a multiple of the target page size
 #                     and a power of 2
@@ -670,7 +664,6 @@
             '*x-checkpoint-delay': 'int',
             '*block-incremental': 'bool',
             '*x-multifd-channels': 'int',
-            '*x-multifd-page-count': 'int',
             '*xbzrle-cache-size': 'size',
             '*max-postcopy-bandwidth': 'size',
 	    '*max-cpu-throttle': 'int' } }
@@ -754,9 +747,6 @@
 #                     number of sockets used for migration.
 #                     The default value is 2 (since 2.11)
 #
-# @x-multifd-page-count: Number of pages sent together to a thread.
-#                        The default value is 16 (since 2.11)
-#
 # @xbzrle-cache-size: cache size to be used by XBZRLE migration.  It
 #                     needs to be a multiple of the target page size
 #                     and a power of 2
@@ -786,7 +776,6 @@
             '*x-checkpoint-delay': 'uint32',
             '*block-incremental': 'bool' ,
             '*x-multifd-channels': 'uint8',
-            '*x-multifd-page-count': 'uint32',
             '*xbzrle-cache-size': 'size',
 	    '*max-postcopy-bandwidth': 'size',
             '*max-cpu-throttle':'uint8'} }
-- 
2.20.1

* [Qemu-devel] [PATCH v2 5/8] multifd: Be flexible about packet size
  2019-02-20 11:56 [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental Juan Quintela
                   ` (3 preceding siblings ...)
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 4/8] multifd: Drop x-multifd-page-count parameter Juan Quintela
@ 2019-02-20 11:56 ` Juan Quintela
  2019-02-21 18:30   ` Dr. David Alan Gilbert
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 6/8] multifd: Change default " Juan Quintela
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Juan Quintela @ 2019-02-20 11:56 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Paolo Bonzini, Eric Blake, Laurent Vivier,
	Juan Quintela, Dr. David Alan Gilbert, Markus Armbruster

This way we can change the packet size in the future and everything
will still work.  We choose an arbitrary big number (100 times the
configured size) as the limit on how large a reallocation we will
accept.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index e22d02760b..75a8fc21f8 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -723,13 +723,13 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
 static void multifd_send_fill_packet(MultiFDSendParams *p)
 {
     MultiFDPacket_t *packet = p->packet;
-    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+    uint32_t page_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     int i;
 
     packet->magic = cpu_to_be32(MULTIFD_MAGIC);
     packet->version = cpu_to_be32(MULTIFD_VERSION);
     packet->flags = cpu_to_be32(p->flags);
-    packet->pages_alloc = cpu_to_be32(page_count);
+    packet->pages_alloc = cpu_to_be32(page_max);
     packet->pages_used = cpu_to_be32(p->pages->used);
     packet->next_packet_size = cpu_to_be32(p->next_packet_size);
     packet->packet_num = cpu_to_be64(p->packet_num);
@@ -746,7 +746,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
 static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
 {
     MultiFDPacket_t *packet = p->packet;
-    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+    uint32_t pages_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     RAMBlock *block;
     int i;
 
@@ -769,12 +769,24 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     p->flags = be32_to_cpu(packet->flags);
 
     packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
-    if (packet->pages_alloc > page_count) {
+    /*
+     * If we received a packet that is 100 times bigger than expected,
+     * just stop migration.  It is a magic number.
+     */
+    if (packet->pages_alloc > pages_max * 100) {
         error_setg(errp, "multifd: received packet "
-                   "with size %d and expected maximum size %d",
-                   packet->pages_alloc, page_count) ;
+                   "with size %d and expected size %d",
+                   packet->pages_alloc, pages_max) ;
         return -1;
     }
+    /*
+     * We received a packet that is bigger than expected but inside
+     * reasonable limits (see previous comment).  Just reallocate.
+     */
+    if (packet->pages_alloc > p->pages->allocated) {
+        multifd_pages_clear(p->pages);
+        p->pages = multifd_pages_init(packet->pages_alloc);
+    }
 
     p->pages->used = be32_to_cpu(packet->pages_used);
     if (p->pages->used > packet->pages_alloc) {
-- 
2.20.1

* [Qemu-devel] [PATCH v2 6/8] multifd: Change default packet size
  2019-02-20 11:56 [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental Juan Quintela
                   ` (4 preceding siblings ...)
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 5/8] multifd: Be flexible about packet size Juan Quintela
@ 2019-02-20 11:56 ` Juan Quintela
  2019-02-21 18:40   ` Dr. David Alan Gilbert
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 7/8] multifd: Drop x- Juan Quintela
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 8/8] tests: Add migration multifd test Juan Quintela
  7 siblings, 1 reply; 17+ messages in thread
From: Juan Quintela @ 2019-02-20 11:56 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Paolo Bonzini, Eric Blake, Laurent Vivier,
	Juan Quintela, Dr. David Alan Gilbert, Markus Armbruster

We move from 64KB to 512KB, as it reduces locking contention without
any downside in testing.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 75a8fc21f8..d57db00ce4 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -521,7 +521,7 @@ exit:
 #define MULTIFD_FLAG_SYNC (1 << 0)
 
 /* This value needs to be a multiple of qemu_target_page_size() */
-#define MULTIFD_PACKET_SIZE (64 * 1024)
+#define MULTIFD_PACKET_SIZE (512 * 1024)
 
 typedef struct {
     uint32_t magic;
-- 
2.20.1

* [Qemu-devel] [PATCH v2 7/8] multifd: Drop x-
  2019-02-20 11:56 [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental Juan Quintela
                   ` (5 preceding siblings ...)
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 6/8] multifd: Change default " Juan Quintela
@ 2019-02-20 11:56 ` Juan Quintela
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 8/8] tests: Add migration multifd test Juan Quintela
  7 siblings, 0 replies; 17+ messages in thread
From: Juan Quintela @ 2019-02-20 11:56 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Paolo Bonzini, Eric Blake, Laurent Vivier,
	Juan Quintela, Dr. David Alan Gilbert, Markus Armbruster

We make multifd a supported (non-experimental) feature from now on.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 hmp.c                 | 10 +++++-----
 migration/migration.c | 26 +++++++++++++-------------
 qapi/migration.json   | 34 +++++++++++++++++-----------------
 3 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/hmp.c b/hmp.c
index 8bd5e48005..34edad6ea3 100644
--- a/hmp.c
+++ b/hmp.c
@@ -420,8 +420,8 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
             MigrationParameter_str(MIGRATION_PARAMETER_BLOCK_INCREMENTAL),
             params->block_incremental ? "on" : "off");
         monitor_printf(mon, "%s: %u\n",
-            MigrationParameter_str(MIGRATION_PARAMETER_X_MULTIFD_CHANNELS),
-            params->x_multifd_channels);
+            MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_CHANNELS),
+            params->multifd_channels);
         monitor_printf(mon, "%s: %" PRIu64 "\n",
             MigrationParameter_str(MIGRATION_PARAMETER_XBZRLE_CACHE_SIZE),
             params->xbzrle_cache_size);
@@ -1765,9 +1765,9 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         p->has_block_incremental = true;
         visit_type_bool(v, param, &p->block_incremental, &err);
         break;
-    case MIGRATION_PARAMETER_X_MULTIFD_CHANNELS:
-        p->has_x_multifd_channels = true;
-        visit_type_int(v, param, &p->x_multifd_channels, &err);
+    case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
+        p->has_multifd_channels = true;
+        visit_type_int(v, param, &p->multifd_channels, &err);
         break;
     case MIGRATION_PARAMETER_XBZRLE_CACHE_SIZE:
         p->has_xbzrle_cache_size = true;
diff --git a/migration/migration.c b/migration/migration.c
index f1b34bfe29..f246519ec8 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -746,8 +746,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
     params->x_checkpoint_delay = s->parameters.x_checkpoint_delay;
     params->has_block_incremental = true;
     params->block_incremental = s->parameters.block_incremental;
-    params->has_x_multifd_channels = true;
-    params->x_multifd_channels = s->parameters.x_multifd_channels;
+    params->has_multifd_channels = true;
+    params->multifd_channels = s->parameters.multifd_channels;
     params->has_xbzrle_cache_size = true;
     params->xbzrle_cache_size = s->parameters.xbzrle_cache_size;
     params->has_max_postcopy_bandwidth = true;
@@ -1103,7 +1103,7 @@ static bool migrate_params_check(MigrationParameters *params, Error **errp)
 
     /* x_checkpoint_delay is now always positive */
 
-    if (params->has_x_multifd_channels && (params->x_multifd_channels < 1)) {
+    if (params->has_multifd_channels && (params->multifd_channels < 1)) {
         error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
                    "multifd_channels",
                    "is invalid, it should be in the range of 1 to 255");
@@ -1188,8 +1188,8 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
     if (params->has_block_incremental) {
         dest->block_incremental = params->block_incremental;
     }
-    if (params->has_x_multifd_channels) {
-        dest->x_multifd_channels = params->x_multifd_channels;
+    if (params->has_multifd_channels) {
+        dest->multifd_channels = params->multifd_channels;
     }
     if (params->has_xbzrle_cache_size) {
         dest->xbzrle_cache_size = params->xbzrle_cache_size;
@@ -1266,8 +1266,8 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
     if (params->has_block_incremental) {
         s->parameters.block_incremental = params->block_incremental;
     }
-    if (params->has_x_multifd_channels) {
-        s->parameters.x_multifd_channels = params->x_multifd_channels;
+    if (params->has_multifd_channels) {
+        s->parameters.multifd_channels = params->multifd_channels;
     }
     if (params->has_xbzrle_cache_size) {
         s->parameters.xbzrle_cache_size = params->xbzrle_cache_size;
@@ -2005,7 +2005,7 @@ bool migrate_use_multifd(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_X_MULTIFD];
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
 bool migrate_pause_before_switchover(void)
@@ -2024,7 +2024,7 @@ int migrate_multifd_channels(void)
 
     s = migrate_get_current();
 
-    return s->parameters.x_multifd_channels;
+    return s->parameters.multifd_channels;
 }
 
 int migrate_use_xbzrle(void)
@@ -3257,8 +3257,8 @@ static Property migration_properties[] = {
     DEFINE_PROP_UINT32("x-checkpoint-delay", MigrationState,
                       parameters.x_checkpoint_delay,
                       DEFAULT_MIGRATE_X_CHECKPOINT_DELAY),
-    DEFINE_PROP_UINT8("x-multifd-channels", MigrationState,
-                      parameters.x_multifd_channels,
+    DEFINE_PROP_UINT8("multifd-channels", MigrationState,
+                      parameters.multifd_channels,
                       DEFAULT_MIGRATE_MULTIFD_CHANNELS),
     DEFINE_PROP_SIZE("xbzrle-cache-size", MigrationState,
                       parameters.xbzrle_cache_size,
@@ -3282,7 +3282,7 @@ static Property migration_properties[] = {
     DEFINE_PROP_MIG_CAP("x-release-ram", MIGRATION_CAPABILITY_RELEASE_RAM),
     DEFINE_PROP_MIG_CAP("x-block", MIGRATION_CAPABILITY_BLOCK),
     DEFINE_PROP_MIG_CAP("x-return-path", MIGRATION_CAPABILITY_RETURN_PATH),
-    DEFINE_PROP_MIG_CAP("x-multifd", MIGRATION_CAPABILITY_X_MULTIFD),
+    DEFINE_PROP_MIG_CAP("x-multifd", MIGRATION_CAPABILITY_MULTIFD),
 
     DEFINE_PROP_END_OF_LIST(),
 };
@@ -3336,7 +3336,7 @@ static void migration_instance_init(Object *obj)
     params->has_downtime_limit = true;
     params->has_x_checkpoint_delay = true;
     params->has_block_incremental = true;
-    params->has_x_multifd_channels = true;
+    params->has_multifd_channels = true;
     params->has_xbzrle_cache_size = true;
     params->has_max_postcopy_bandwidth = true;
     params->has_max_cpu_throttle = true;
diff --git a/qapi/migration.json b/qapi/migration.json
index 8c5db60406..c202703889 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -401,7 +401,7 @@
 # @pause-before-switchover: Pause outgoing migration before serialising device
 #          state and before disabling block IO (since 2.11)
 #
-# @x-multifd: Use more than one fd for migration (since 2.11)
+# @multifd: Use more than one fd for migration (since 4.0)
 #
 # @dirty-bitmaps: If enabled, QEMU will migrate named dirty bitmaps.
 #                 (since 2.12)
@@ -418,7 +418,7 @@
 { 'enum': 'MigrationCapability',
   'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks',
            'compress', 'events', 'postcopy-ram', 'x-colo', 'release-ram',
-           'block', 'return-path', 'pause-before-switchover', 'x-multifd',
+           'block', 'return-path', 'pause-before-switchover', 'multifd',
            'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate' ] }
 
 ##
@@ -542,10 +542,10 @@
 # 	migrated and the destination must already have access to the
 # 	same backing chain as was used on the source.  (since 2.10)
 #
-# @x-multifd-channels: Number of channels used to migrate data in
-#                     parallel. This is the same number that the
-#                     number of sockets used for migration.  The
-#                     default value is 2 (since 2.11)
+# @multifd-channels: Number of channels used to migrate data in
+#                    parallel. This is the same number that the
+#                    number of sockets used for migration.  The
+#                    default value is 2 (since 4.0)
 #
 # @xbzrle-cache-size: cache size to be used by XBZRLE migration.  It
 #                     needs to be a multiple of the target page size
@@ -566,7 +566,7 @@
            'cpu-throttle-initial', 'cpu-throttle-increment',
            'tls-creds', 'tls-hostname', 'max-bandwidth',
            'downtime-limit', 'x-checkpoint-delay', 'block-incremental',
-           'x-multifd-channels',
+           'multifd-channels',
            'xbzrle-cache-size', 'max-postcopy-bandwidth',
            'max-cpu-throttle' ] }
 
@@ -629,10 +629,10 @@
 # 	migrated and the destination must already have access to the
 # 	same backing chain as was used on the source.  (since 2.10)
 #
-# @x-multifd-channels: Number of channels used to migrate data in
-#                     parallel. This is the same number that the
-#                     number of sockets used for migration.  The
-#                     default value is 2 (since 2.11)
+# @multifd-channels: Number of channels used to migrate data in
+#                    parallel. This is the same number that the
+#                    number of sockets used for migration.  The
+#                    default value is 2 (since 4.0)
 #
 # @xbzrle-cache-size: cache size to be used by XBZRLE migration.  It
 #                     needs to be a multiple of the target page size
@@ -663,7 +663,7 @@
             '*downtime-limit': 'int',
             '*x-checkpoint-delay': 'int',
             '*block-incremental': 'bool',
-            '*x-multifd-channels': 'int',
+            '*multifd-channels': 'int',
             '*xbzrle-cache-size': 'size',
             '*max-postcopy-bandwidth': 'size',
 	    '*max-cpu-throttle': 'int' } }
@@ -742,10 +742,10 @@
 # 	migrated and the destination must already have access to the
 # 	same backing chain as was used on the source.  (since 2.10)
 #
-# @x-multifd-channels: Number of channels used to migrate data in
-#                     parallel. This is the same number that the
-#                     number of sockets used for migration.
-#                     The default value is 2 (since 2.11)
+# @multifd-channels: Number of channels used to migrate data in
+#                    parallel. This is the same number that the
+#                    number of sockets used for migration.
+#                    The default value is 2 (since 4.0)
 #
 # @xbzrle-cache-size: cache size to be used by XBZRLE migration.  It
 #                     needs to be a multiple of the target page size
@@ -775,7 +775,7 @@
             '*downtime-limit': 'uint64',
             '*x-checkpoint-delay': 'uint32',
             '*block-incremental': 'bool' ,
-            '*x-multifd-channels': 'uint8',
+            '*multifd-channels': 'uint8',
             '*xbzrle-cache-size': 'size',
 	    '*max-postcopy-bandwidth': 'size',
             '*max-cpu-throttle':'uint8'} }
-- 
2.20.1


* [Qemu-devel] [PATCH v2 8/8] tests: Add migration multifd test
  2019-02-20 11:56 [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental Juan Quintela
                   ` (6 preceding siblings ...)
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 7/8] multifd: Drop x- Juan Quintela
@ 2019-02-20 11:56 ` Juan Quintela
  7 siblings, 0 replies; 17+ messages in thread
From: Juan Quintela @ 2019-02-20 11:56 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Paolo Bonzini, Eric Blake, Laurent Vivier,
	Juan Quintela, Dr. David Alan Gilbert, Markus Armbruster

We enable the multifd capability and set multifd-channels on both the
source and the destination.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 tests/migration-test.c | 48 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/tests/migration-test.c b/tests/migration-test.c
index a27f095b28..3e0e447f95 100644
--- a/tests/migration-test.c
+++ b/tests/migration-test.c
@@ -942,6 +942,53 @@ static void test_precopy_tcp(void)
     g_free(uri);
 }
 
+static void test_multifd_tcp(void)
+{
+    char *uri;
+    QTestState *from, *to;
+
+    if (test_migrate_start(&from, &to, "tcp:127.0.0.1:0", false)) {
+        return;
+    }
+
+    /*
+     * We want to pick a speed slow enough that the test completes
+     * quickly, but that it doesn't complete precopy even on a slow
+     * machine, so also set the downtime.
+     */
+    /* 1 ms should make it not converge */
+    migrate_set_parameter(from, "downtime-limit", 1);
+    /* 1GB/s */
+    migrate_set_parameter(from, "max-bandwidth", 1000000000);
+
+    migrate_set_parameter(from, "multifd-channels", 2);
+    migrate_set_parameter(to, "multifd-channels", 2);
+
+    migrate_set_capability(from, "multifd", "true");
+    migrate_set_capability(to, "multifd", "true");
+    /* Wait for the first serial output from the source */
+    wait_for_serial("src_serial");
+
+    uri = migrate_get_socket_address(to, "socket-address");
+
+    migrate(from, uri, "{}");
+
+    wait_for_migration_pass(from);
+
+    /* 300 ms should be enough for it to converge */
+    migrate_set_parameter(from, "downtime-limit", 300);
+
+    if (!got_stop) {
+        qtest_qmp_eventwait(from, "STOP");
+    }
+    qtest_qmp_eventwait(to, "RESUME");
+
+    wait_for_serial("dest_serial");
+    wait_for_migration_complete(from);
+
+    test_migrate_end(from, to, true);
+}
+
 int main(int argc, char **argv)
 {
     char template[] = "/tmp/migration-test-XXXXXX";
@@ -995,6 +1042,7 @@ int main(int argc, char **argv)
     qtest_add_func("/migration/precopy/unix", test_precopy_unix);
     qtest_add_func("/migration/precopy/tcp", test_precopy_tcp);
     qtest_add_func("/migration/xbzrle/unix", test_xbzrle_unix);
+    qtest_add_func("/migration/multifd/tcp", test_multifd_tcp);
 
     ret = g_test_run();
 
-- 
2.20.1


* Re: [Qemu-devel] [PATCH v2 1/8] multifd: Only send pages when packet are not empty
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 1/8] multifd: Only send pages when packet are not empty Juan Quintela
@ 2019-02-21 17:43   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 17+ messages in thread
From: Dr. David Alan Gilbert @ 2019-02-21 17:43 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Thomas Huth, Paolo Bonzini, Eric Blake,
	Laurent Vivier, Markus Armbruster

* Juan Quintela (quintela@redhat.com) wrote:
> We sometimes send packets without any pages, for synchronization.  The
> iov functions do the right thing with zero iovs, but we will be
> changing this code in future patches.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 18 ++++++++++++------
>  1 file changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 59191c1ed2..8b5fd67d66 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1025,9 +1025,12 @@ static void *multifd_send_thread(void *opaque)
>                  break;
>              }
>  
> -            ret = qio_channel_writev_all(p->c, p->pages->iov, used, &local_err);
> -            if (ret != 0) {
> -                break;
> +            if (used) {
> +                ret = qio_channel_writev_all(p->c, p->pages->iov,
> +                                             used, &local_err);
> +                if (ret != 0) {
> +                    break;
> +                }
>              }
>  
>              qemu_mutex_lock(&p->mutex);
> @@ -1254,9 +1257,12 @@ static void *multifd_recv_thread(void *opaque)
>          p->num_pages += used;
>          qemu_mutex_unlock(&p->mutex);
>  
> -        ret = qio_channel_readv_all(p->c, p->pages->iov, used, &local_err);
> -        if (ret != 0) {
> -            break;
> +        if (used) {
> +            ret = qio_channel_readv_all(p->c, p->pages->iov,
> +                                        used, &local_err);
> +            if (ret != 0) {
> +                break;
> +            }
>          }
>  
>          if (flags & MULTIFD_FLAG_SYNC) {
> -- 
> 2.20.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v2 2/8] multifd: Rename "size" member to pages_alloc
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 2/8] multifd: Rename "size" member to pages_alloc Juan Quintela
@ 2019-02-21 17:48   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 17+ messages in thread
From: Dr. David Alan Gilbert @ 2019-02-21 17:48 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Thomas Huth, Paolo Bonzini, Eric Blake,
	Laurent Vivier, Markus Armbruster

* Juan Quintela (quintela@redhat.com) wrote:
> It really indicates the number of pages allocated for one packet.
> While at it, rename "used" to "pages_used".
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 8b5fd67d66..29f0d431a8 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -531,8 +531,9 @@ typedef struct {
>      uint32_t magic;
>      uint32_t version;
>      uint32_t flags;
> -    uint32_t size;
> -    uint32_t used;
> +    /* maximum number of allocated pages */
> +    uint32_t pages_alloc;
> +    uint32_t pages_used;
>      uint64_t packet_num;
>      char ramblock[256];
>      uint64_t offset[];
> @@ -718,8 +719,8 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>      packet->magic = cpu_to_be32(MULTIFD_MAGIC);
>      packet->version = cpu_to_be32(MULTIFD_VERSION);
>      packet->flags = cpu_to_be32(p->flags);
> -    packet->size = cpu_to_be32(migrate_multifd_page_count());
> -    packet->used = cpu_to_be32(p->pages->used);
> +    packet->pages_alloc = cpu_to_be32(migrate_multifd_page_count());
> +    packet->pages_used = cpu_to_be32(p->pages->used);
>      packet->packet_num = cpu_to_be64(p->packet_num);
>  
>      if (p->pages->block) {
> @@ -755,19 +756,19 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>  
>      p->flags = be32_to_cpu(packet->flags);
>  
> -    packet->size = be32_to_cpu(packet->size);
> -    if (packet->size > migrate_multifd_page_count()) {
> +    packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
> +    if (packet->pages_alloc > migrate_multifd_page_count()) {
>          error_setg(errp, "multifd: received packet "
>                     "with size %d and expected maximum size %d",
> -                   packet->size, migrate_multifd_page_count()) ;
> +                   packet->pages_alloc, migrate_multifd_page_count()) ;
>          return -1;
>      }
>  
> -    p->pages->used = be32_to_cpu(packet->used);
> -    if (p->pages->used > packet->size) {
> +    p->pages->used = be32_to_cpu(packet->pages_used);
> +    if (p->pages->used > packet->pages_alloc) {
>          error_setg(errp, "multifd: received packet "
> -                   "with size %d and expected maximum size %d",
> -                   p->pages->used, packet->size) ;
> +                   "with %d pages and expected maximum pages are %d",
> +                   p->pages->used, packet->pages_alloc) ;
>          return -1;
>      }
>  
> -- 
> 2.20.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v2 4/8] multifd: Drop x-multifd-page-count parameter
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 4/8] multifd: Drop x-multifd-page-count parameter Juan Quintela
@ 2019-02-21 17:51   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 17+ messages in thread
From: Dr. David Alan Gilbert @ 2019-02-21 17:51 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Thomas Huth, Paolo Bonzini, Eric Blake,
	Laurent Vivier, Markus Armbruster

* Juan Quintela (quintela@redhat.com) wrote:
> Libvirt doesn't want to expose (and have to explain) this parameter.
> From now on we measure the packet size in bytes instead of pages, so it
> is the same independently of the architecture.  We choose the page size
> of x86.  Notice that the following patch makes this flexible.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Fortunately it's so easy to remove and add parameters....


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  hmp.c                 |  7 -------
>  migration/migration.c | 30 ------------------------------
>  migration/migration.h |  1 -
>  migration/ram.c       | 15 ++++++++++-----
>  qapi/migration.json   | 13 +------------
>  5 files changed, 11 insertions(+), 55 deletions(-)
> 
> diff --git a/hmp.c b/hmp.c
> index 8e283153b5..8bd5e48005 100644
> --- a/hmp.c
> +++ b/hmp.c
> @@ -422,9 +422,6 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
>          monitor_printf(mon, "%s: %u\n",
>              MigrationParameter_str(MIGRATION_PARAMETER_X_MULTIFD_CHANNELS),
>              params->x_multifd_channels);
> -        monitor_printf(mon, "%s: %u\n",
> -            MigrationParameter_str(MIGRATION_PARAMETER_X_MULTIFD_PAGE_COUNT),
> -            params->x_multifd_page_count);
>          monitor_printf(mon, "%s: %" PRIu64 "\n",
>              MigrationParameter_str(MIGRATION_PARAMETER_XBZRLE_CACHE_SIZE),
>              params->xbzrle_cache_size);
> @@ -1772,10 +1769,6 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
>          p->has_x_multifd_channels = true;
>          visit_type_int(v, param, &p->x_multifd_channels, &err);
>          break;
> -    case MIGRATION_PARAMETER_X_MULTIFD_PAGE_COUNT:
> -        p->has_x_multifd_page_count = true;
> -        visit_type_int(v, param, &p->x_multifd_page_count, &err);
> -        break;
>      case MIGRATION_PARAMETER_XBZRLE_CACHE_SIZE:
>          p->has_xbzrle_cache_size = true;
>          visit_type_size(v, param, &cache_size, &err);
> diff --git a/migration/migration.c b/migration/migration.c
> index 5ecf0978ac..f1b34bfe29 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -81,7 +81,6 @@
>  /* The delay time (in ms) between two COLO checkpoints */
>  #define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
>  #define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
> -#define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
>  
>  /* Background transfer rate for postcopy, 0 means unlimited, note
>   * that page requests can still exceed this limit.
> @@ -749,8 +748,6 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
>      params->block_incremental = s->parameters.block_incremental;
>      params->has_x_multifd_channels = true;
>      params->x_multifd_channels = s->parameters.x_multifd_channels;
> -    params->has_x_multifd_page_count = true;
> -    params->x_multifd_page_count = s->parameters.x_multifd_page_count;
>      params->has_xbzrle_cache_size = true;
>      params->xbzrle_cache_size = s->parameters.xbzrle_cache_size;
>      params->has_max_postcopy_bandwidth = true;
> @@ -1112,14 +1109,6 @@ static bool migrate_params_check(MigrationParameters *params, Error **errp)
>                     "is invalid, it should be in the range of 1 to 255");
>          return false;
>      }
> -    if (params->has_x_multifd_page_count &&
> -        (params->x_multifd_page_count < 1 ||
> -         params->x_multifd_page_count > 10000)) {
> -        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
> -                   "multifd_page_count",
> -                   "is invalid, it should be in the range of 1 to 10000");
> -        return false;
> -    }
>  
>      if (params->has_xbzrle_cache_size &&
>          (params->xbzrle_cache_size < qemu_target_page_size() ||
> @@ -1202,9 +1191,6 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
>      if (params->has_x_multifd_channels) {
>          dest->x_multifd_channels = params->x_multifd_channels;
>      }
> -    if (params->has_x_multifd_page_count) {
> -        dest->x_multifd_page_count = params->x_multifd_page_count;
> -    }
>      if (params->has_xbzrle_cache_size) {
>          dest->xbzrle_cache_size = params->xbzrle_cache_size;
>      }
> @@ -1283,9 +1269,6 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
>      if (params->has_x_multifd_channels) {
>          s->parameters.x_multifd_channels = params->x_multifd_channels;
>      }
> -    if (params->has_x_multifd_page_count) {
> -        s->parameters.x_multifd_page_count = params->x_multifd_page_count;
> -    }
>      if (params->has_xbzrle_cache_size) {
>          s->parameters.xbzrle_cache_size = params->xbzrle_cache_size;
>          xbzrle_cache_resize(params->xbzrle_cache_size, errp);
> @@ -2044,15 +2027,6 @@ int migrate_multifd_channels(void)
>      return s->parameters.x_multifd_channels;
>  }
>  
> -int migrate_multifd_page_count(void)
> -{
> -    MigrationState *s;
> -
> -    s = migrate_get_current();
> -
> -    return s->parameters.x_multifd_page_count;
> -}
> -
>  int migrate_use_xbzrle(void)
>  {
>      MigrationState *s;
> @@ -3286,9 +3260,6 @@ static Property migration_properties[] = {
>      DEFINE_PROP_UINT8("x-multifd-channels", MigrationState,
>                        parameters.x_multifd_channels,
>                        DEFAULT_MIGRATE_MULTIFD_CHANNELS),
> -    DEFINE_PROP_UINT32("x-multifd-page-count", MigrationState,
> -                      parameters.x_multifd_page_count,
> -                      DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT),
>      DEFINE_PROP_SIZE("xbzrle-cache-size", MigrationState,
>                        parameters.xbzrle_cache_size,
>                        DEFAULT_MIGRATE_XBZRLE_CACHE_SIZE),
> @@ -3366,7 +3337,6 @@ static void migration_instance_init(Object *obj)
>      params->has_x_checkpoint_delay = true;
>      params->has_block_incremental = true;
>      params->has_x_multifd_channels = true;
> -    params->has_x_multifd_page_count = true;
>      params->has_xbzrle_cache_size = true;
>      params->has_max_postcopy_bandwidth = true;
>      params->has_max_cpu_throttle = true;
> diff --git a/migration/migration.h b/migration/migration.h
> index 837709d8a1..7e03643683 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -269,7 +269,6 @@ bool migrate_auto_converge(void);
>  bool migrate_use_multifd(void);
>  bool migrate_pause_before_switchover(void);
>  int migrate_multifd_channels(void);
> -int migrate_multifd_page_count(void);
>  
>  int migrate_use_xbzrle(void);
>  int64_t migrate_xbzrle_cache_size(void);
> diff --git a/migration/ram.c b/migration/ram.c
> index 26ed26fc2d..e22d02760b 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -520,6 +520,9 @@ exit:
>  
>  #define MULTIFD_FLAG_SYNC (1 << 0)
>  
> +/* This value needs to be a multiple of qemu_target_page_size() */
> +#define MULTIFD_PACKET_SIZE (64 * 1024)
> +
>  typedef struct {
>      uint32_t magic;
>      uint32_t version;
> @@ -720,12 +723,13 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
>  static void multifd_send_fill_packet(MultiFDSendParams *p)
>  {
>      MultiFDPacket_t *packet = p->packet;
> +    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      int i;
>  
>      packet->magic = cpu_to_be32(MULTIFD_MAGIC);
>      packet->version = cpu_to_be32(MULTIFD_VERSION);
>      packet->flags = cpu_to_be32(p->flags);
> -    packet->pages_alloc = cpu_to_be32(migrate_multifd_page_count());
> +    packet->pages_alloc = cpu_to_be32(page_count);
>      packet->pages_used = cpu_to_be32(p->pages->used);
>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>      packet->packet_num = cpu_to_be64(p->packet_num);
> @@ -742,6 +746,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>  {
>      MultiFDPacket_t *packet = p->packet;
> +    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      RAMBlock *block;
>      int i;
>  
> @@ -764,10 +769,10 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>      p->flags = be32_to_cpu(packet->flags);
>  
>      packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
> -    if (packet->pages_alloc > migrate_multifd_page_count()) {
> +    if (packet->pages_alloc > page_count) {
>          error_setg(errp, "multifd: received packet "
>                     "with size %d and expected maximum size %d",
> -                   packet->pages_alloc, migrate_multifd_page_count()) ;
> +                   packet->pages_alloc, page_count) ;
>          return -1;
>      }
>  
> @@ -1099,7 +1104,7 @@ static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
>  int multifd_save_setup(void)
>  {
>      int thread_count;
> -    uint32_t page_count = migrate_multifd_page_count();
> +    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      uint8_t i;
>  
>      if (!migrate_use_multifd()) {
> @@ -1299,7 +1304,7 @@ static void *multifd_recv_thread(void *opaque)
>  int multifd_load_setup(void)
>  {
>      int thread_count;
> -    uint32_t page_count = migrate_multifd_page_count();
> +    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      uint8_t i;
>  
>      if (!migrate_use_multifd()) {
> diff --git a/qapi/migration.json b/qapi/migration.json
> index b62947791f..8c5db60406 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -547,9 +547,6 @@
>  #                     number of sockets used for migration.  The
>  #                     default value is 2 (since 2.11)
>  #
> -# @x-multifd-page-count: Number of pages sent together to a thread.
> -#                        The default value is 16 (since 2.11)
> -#
>  # @xbzrle-cache-size: cache size to be used by XBZRLE migration.  It
>  #                     needs to be a multiple of the target page size
>  #                     and a power of 2
> @@ -569,7 +566,7 @@
>             'cpu-throttle-initial', 'cpu-throttle-increment',
>             'tls-creds', 'tls-hostname', 'max-bandwidth',
>             'downtime-limit', 'x-checkpoint-delay', 'block-incremental',
> -           'x-multifd-channels', 'x-multifd-page-count',
> +           'x-multifd-channels',
>             'xbzrle-cache-size', 'max-postcopy-bandwidth',
>             'max-cpu-throttle' ] }
>  
> @@ -637,9 +634,6 @@
>  #                     number of sockets used for migration.  The
>  #                     default value is 2 (since 2.11)
>  #
> -# @x-multifd-page-count: Number of pages sent together to a thread.
> -#                        The default value is 16 (since 2.11)
> -#
>  # @xbzrle-cache-size: cache size to be used by XBZRLE migration.  It
>  #                     needs to be a multiple of the target page size
>  #                     and a power of 2
> @@ -670,7 +664,6 @@
>              '*x-checkpoint-delay': 'int',
>              '*block-incremental': 'bool',
>              '*x-multifd-channels': 'int',
> -            '*x-multifd-page-count': 'int',
>              '*xbzrle-cache-size': 'size',
>              '*max-postcopy-bandwidth': 'size',
>  	    '*max-cpu-throttle': 'int' } }
> @@ -754,9 +747,6 @@
>  #                     number of sockets used for migration.
>  #                     The default value is 2 (since 2.11)
>  #
> -# @x-multifd-page-count: Number of pages sent together to a thread.
> -#                        The default value is 16 (since 2.11)
> -#
>  # @xbzrle-cache-size: cache size to be used by XBZRLE migration.  It
>  #                     needs to be a multiple of the target page size
>  #                     and a power of 2
> @@ -786,7 +776,6 @@
>              '*x-checkpoint-delay': 'uint32',
>              '*block-incremental': 'bool' ,
>              '*x-multifd-channels': 'uint8',
> -            '*x-multifd-page-count': 'uint32',
>              '*xbzrle-cache-size': 'size',
>  	    '*max-postcopy-bandwidth': 'size',
>              '*max-cpu-throttle':'uint8'} }
> -- 
> 2.20.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v2 5/8] multifd: Be flexible about packet size
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 5/8] multifd: Be flexible about packet size Juan Quintela
@ 2019-02-21 18:30   ` Dr. David Alan Gilbert
  2019-02-27 11:06     ` Juan Quintela
  0 siblings, 1 reply; 17+ messages in thread
From: Dr. David Alan Gilbert @ 2019-02-21 18:30 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Thomas Huth, Paolo Bonzini, Eric Blake,
	Laurent Vivier, Markus Armbruster

* Juan Quintela (quintela@redhat.com) wrote:
> This way we can change the packet size in the future and everything
> will work.  We choose an arbitrarily big number (100 times the
> configured size) as a limit on how big a packet we will reallocate for.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 24 ++++++++++++++++++------
>  1 file changed, 18 insertions(+), 6 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index e22d02760b..75a8fc21f8 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -723,13 +723,13 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
>  static void multifd_send_fill_packet(MultiFDSendParams *p)
>  {
>      MultiFDPacket_t *packet = p->packet;
> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
> +    uint32_t page_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      int i;
>  
>      packet->magic = cpu_to_be32(MULTIFD_MAGIC);
>      packet->version = cpu_to_be32(MULTIFD_VERSION);
>      packet->flags = cpu_to_be32(p->flags);
> -    packet->pages_alloc = cpu_to_be32(page_count);
> +    packet->pages_alloc = cpu_to_be32(page_max);
>      packet->pages_used = cpu_to_be32(p->pages->used);
>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>      packet->packet_num = cpu_to_be64(p->packet_num);
> @@ -746,7 +746,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>  {
>      MultiFDPacket_t *packet = p->packet;
> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
> +    uint32_t pages_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      RAMBlock *block;
>      int i;
>  
> @@ -769,12 +769,24 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>      p->flags = be32_to_cpu(packet->flags);
>  
>      packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
> -    if (packet->pages_alloc > page_count) {
> +    /*
> +     * If we received a packet that is 100 times bigger than expected
> +     * just stop migration.  It is a magic number.
> +     */
> +    if (packet->pages_alloc > pages_max * 100) {
>          error_setg(errp, "multifd: received packet "
> -                   "with size %d and expected maximum size %d",
> -                   packet->pages_alloc, page_count) ;
> +                   "with size %d and expected size %d",
> +                   packet->pages_alloc, pages_max) ;

Should that end with pages_max * 100 ?

>          return -1;
>      }
> +    /*
> +     * We received a packet that is bigger than expected but inside
> +     * reasonable limits (see previous comment).  Just reallocate.
> +     */
> +    if (packet->pages_alloc > p->pages->allocated) {
> +        multifd_pages_clear(p->pages);
> +        multifd_pages_init(packet->pages_alloc);
> +    }
>  
>      p->pages->used = be32_to_cpu(packet->pages_used);
>      if (p->pages->used > packet->pages_alloc) {

Other than that error message, I think it's OK, although the names get
very confusing (max, alloc, allocated)


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> -- 
> 2.20.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

* Re: [Qemu-devel] [PATCH v2 6/8] multifd: Change default packet size
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 6/8] multifd: Change default " Juan Quintela
@ 2019-02-21 18:40   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 17+ messages in thread
From: Dr. David Alan Gilbert @ 2019-02-21 18:40 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Thomas Huth, Paolo Bonzini, Eric Blake,
	Laurent Vivier, Markus Armbruster

* Juan Quintela (quintela@redhat.com) wrote:
> We moved from 64KB to 512KB, as it reduces locking contention
> without any downside in testing.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

That arbitrary number seems better than the previous arbitrary number,
so:

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
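As an editorial aside: the new 512KB default lines up with the 128-page default mentioned in the cover letter, since 512KB divided by a 4KB target page size is 128. A minimal standalone sketch of that arithmetic (the helper is illustrative, not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/* Editorial sketch: with the new 512KB MULTIFD_PACKET_SIZE, a 4KB
 * target page size yields the 128 pages-per-packet default from the
 * cover letter.  MULTIFD_PACKET_SIZE must stay a multiple of the
 * target page size, as the comment in migration/ram.c requires. */

#define MULTIFD_PACKET_SIZE (512 * 1024)

static uint32_t pages_per_packet(uint32_t target_page_size)
{
    return MULTIFD_PACKET_SIZE / target_page_size;
}
```

For a 64KB target page (e.g. some ppc64 configurations) the same packet size holds 8 pages.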

> ---
>  migration/ram.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 75a8fc21f8..d57db00ce4 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -521,7 +521,7 @@ exit:
>  #define MULTIFD_FLAG_SYNC (1 << 0)
>  
>  /* This value needs to be a multiple of qemu_target_page_size() */
> -#define MULTIFD_PACKET_SIZE (64 * 1024)
> +#define MULTIFD_PACKET_SIZE (512 * 1024)
>  
>  typedef struct {
>      uint32_t magic;
> -- 
> 2.20.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

* Re: [Qemu-devel] [PATCH v2 3/8] multifd: Create new next_packet_size field
  2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 3/8] multifd: Create new next_packet_size field Juan Quintela
@ 2019-02-21 18:45   ` Dr. David Alan Gilbert
  2019-02-27 11:02     ` Juan Quintela
  0 siblings, 1 reply; 17+ messages in thread
From: Dr. David Alan Gilbert @ 2019-02-21 18:45 UTC (permalink / raw)
  To: Juan Quintela
  Cc: qemu-devel, Thomas Huth, Paolo Bonzini, Eric Blake,
	Laurent Vivier, Markus Armbruster

* Juan Quintela (quintela@redhat.com) wrote:
> We need to send this field when we add compression support.  As we are
> still in the x- stage, we can make this kind of change.

Can you explain this a bit more; I'm confused how you can know what the
next size is going to be until you've got there.

Dave

> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c        | 15 +++++++++++++--
>  migration/trace-events |  4 ++--
>  2 files changed, 15 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 29f0d431a8..26ed26fc2d 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -534,6 +534,8 @@ typedef struct {
>      /* maximum number of allocated pages */
>      uint32_t pages_alloc;
>      uint32_t pages_used;
> +    /* size of the next packet that contains pages */
> +    uint32_t next_packet_size;
>      uint64_t packet_num;
>      char ramblock[256];
>      uint64_t offset[];
> @@ -581,6 +583,8 @@ typedef struct {
>      MultiFDPacket_t *packet;
>      /* multifd flags for each packet */
>      uint32_t flags;
> +    /* size of the next packet that contains pages */
> +    uint32_t next_packet_size;
>      /* global number of generated multifd packets */
>      uint64_t packet_num;
>      /* thread local variables */
> @@ -617,6 +621,8 @@ typedef struct {
>      /* global number of generated multifd packets */
>      uint64_t packet_num;
>      /* thread local variables */
> +    /* size of the next packet that contains pages */
> +    uint32_t next_packet_size;
>      /* packets sent through this channel */
>      uint64_t num_packets;
>      /* pages sent through this channel */
> @@ -721,6 +727,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>      packet->flags = cpu_to_be32(p->flags);
>      packet->pages_alloc = cpu_to_be32(migrate_multifd_page_count());
>      packet->pages_used = cpu_to_be32(p->pages->used);
> +    packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>      packet->packet_num = cpu_to_be64(p->packet_num);
>  
>      if (p->pages->block) {
> @@ -772,6 +779,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>          return -1;
>      }
>  
> +    p->next_packet_size = be32_to_cpu(packet->next_packet_size);
>      p->packet_num = be64_to_cpu(packet->packet_num);
>  
>      if (p->pages->used) {
> @@ -1011,6 +1019,7 @@ static void *multifd_send_thread(void *opaque)
>              uint64_t packet_num = p->packet_num;
>              uint32_t flags = p->flags;
>  
> +            p->next_packet_size = used * qemu_target_page_size();
>              multifd_send_fill_packet(p);
>              p->flags = 0;
>              p->num_packets++;
> @@ -1018,7 +1027,8 @@ static void *multifd_send_thread(void *opaque)
>              p->pages->used = 0;
>              qemu_mutex_unlock(&p->mutex);
>  
> -            trace_multifd_send(p->id, packet_num, used, flags);
> +            trace_multifd_send(p->id, packet_num, used, flags,
> +                               p->next_packet_size);
>  
>              ret = qio_channel_write_all(p->c, (void *)p->packet,
>                                          p->packet_len, &local_err);
> @@ -1253,7 +1263,8 @@ static void *multifd_recv_thread(void *opaque)
>  
>          used = p->pages->used;
>          flags = p->flags;
> -        trace_multifd_recv(p->id, p->packet_num, used, flags);
> +        trace_multifd_recv(p->id, p->packet_num, used, flags,
> +                           p->next_packet_size);
>          p->num_packets++;
>          p->num_pages += used;
>          qemu_mutex_unlock(&p->mutex);
> diff --git a/migration/trace-events b/migration/trace-events
> index bd2d0cd25a..a11e66e1d9 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -77,13 +77,13 @@ get_queued_page_not_dirty(const char *block_name, uint64_t tmp_offset, unsigned
>  migration_bitmap_sync_start(void) ""
>  migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
>  migration_throttle(void) ""
> -multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags) "channel %d packet number %" PRIu64 " pages %d flags 0x%x"
> +multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %d packet number %" PRIu64 " pages %d flags 0x%x next packet size %d"
>  multifd_recv_sync_main(long packet_num) "packet num %ld"
>  multifd_recv_sync_main_signal(uint8_t id) "channel %d"
>  multifd_recv_sync_main_wait(uint8_t id) "channel %d"
>  multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %d packets %" PRIu64 " pages %" PRIu64
>  multifd_recv_thread_start(uint8_t id) "%d"
> -multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags) "channel %d packet_num %" PRIu64 " pages %d flags 0x%x"
> +multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " pages %d flags 0x%x next packet size %d"
>  multifd_send_sync_main(long packet_num) "packet num %ld"
>  multifd_send_sync_main_signal(uint8_t id) "channel %d"
>  multifd_send_sync_main_wait(uint8_t id) "channel %d"
> -- 
> 2.20.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

* Re: [Qemu-devel] [PATCH v2 3/8] multifd: Create new next_packet_size field
  2019-02-21 18:45   ` Dr. David Alan Gilbert
@ 2019-02-27 11:02     ` Juan Quintela
  0 siblings, 0 replies; 17+ messages in thread
From: Juan Quintela @ 2019-02-27 11:02 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Thomas Huth, Paolo Bonzini, Eric Blake,
	Laurent Vivier, Markus Armbruster

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We need to send this field when we add compression support.  As we are
>> still in the x- stage, we can make this kind of change.
>
> Can you explain this a bit more; I'm confused how you can know what the
> next size is going to be until you've got there.

This is needed for compression.  Without compression, we always know the
size of a packet: number of pages * page size + some headers (which
don't matter here).

With compression that changes, and it is *much* easier for zlib if we
read the full packet in advance into a buffer and then tell zlib to
uncompress it.  It is possible to do it otherwise, but it is a mess (not
that zlib is not *always* a mess).

So with this change, I can do:
- read header (it is not compressed and has the size of the following
part of the packet)
- read rest of the packet in one go
- uncompress it like a champ, without worrying that we still need to
read something else.
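[Editor's note: the two-step read described above can be sketched as a standalone illustration that parses from a memory buffer rather than a QIOChannel; the helper, its name, and the minimal header layout are hypothetical, not the QEMU wire format.]

```c
#include <arpa/inet.h>  /* ntohl: wire format is big-endian */
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: a header carrying next_packet_size as a
 * big-endian uint32_t, followed by the (possibly compressed) payload.
 * Reading the size first lets us pull the whole payload into one
 * buffer before handing it to zlib. */

/* Returns payload length, or -1 if the buffer is truncated or the
 * payload would not fit in payload_cap. */
static int read_packet(const uint8_t *buf, size_t buf_len,
                       uint8_t *payload, size_t payload_cap)
{
    uint32_t be_size, size;

    if (buf_len < sizeof(be_size)) {
        return -1;                      /* header is truncated */
    }
    memcpy(&be_size, buf, sizeof(be_size));
    size = ntohl(be_size);              /* step 1: read the size */

    if (size > payload_cap || buf_len < sizeof(be_size) + size) {
        return -1;                      /* body truncated or too big */
    }
    memcpy(payload, buf + sizeof(be_size), size);  /* step 2: one go */
    return (int)size;
}
```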

Later, Juan.


> Dave
>
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c        | 15 +++++++++++++--
>>  migration/trace-events |  4 ++--
>>  2 files changed, 15 insertions(+), 4 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 29f0d431a8..26ed26fc2d 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -534,6 +534,8 @@ typedef struct {
>>      /* maximum number of allocated pages */
>>      uint32_t pages_alloc;
>>      uint32_t pages_used;
>> +    /* size of the next packet that contains pages */
>> +    uint32_t next_packet_size;
>>      uint64_t packet_num;
>>      char ramblock[256];
>>      uint64_t offset[];
>> @@ -581,6 +583,8 @@ typedef struct {
>>      MultiFDPacket_t *packet;
>>      /* multifd flags for each packet */
>>      uint32_t flags;
>> +    /* size of the next packet that contains pages */
>> +    uint32_t next_packet_size;
>>      /* global number of generated multifd packets */
>>      uint64_t packet_num;
>>      /* thread local variables */
>> @@ -617,6 +621,8 @@ typedef struct {
>>      /* global number of generated multifd packets */
>>      uint64_t packet_num;
>>      /* thread local variables */
>> +    /* size of the next packet that contains pages */
>> +    uint32_t next_packet_size;
>>      /* packets sent through this channel */
>>      uint64_t num_packets;
>>      /* pages sent through this channel */
>> @@ -721,6 +727,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>>      packet->flags = cpu_to_be32(p->flags);
>>      packet->pages_alloc = cpu_to_be32(migrate_multifd_page_count());
>>      packet->pages_used = cpu_to_be32(p->pages->used);
>> +    packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>>      packet->packet_num = cpu_to_be64(p->packet_num);
>>  
>>      if (p->pages->block) {
>> @@ -772,6 +779,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>>          return -1;
>>      }
>>  
>> +    p->next_packet_size = be32_to_cpu(packet->next_packet_size);
>>      p->packet_num = be64_to_cpu(packet->packet_num);
>>  
>>      if (p->pages->used) {
>> @@ -1011,6 +1019,7 @@ static void *multifd_send_thread(void *opaque)
>>              uint64_t packet_num = p->packet_num;
>>              uint32_t flags = p->flags;
>>  
>> +            p->next_packet_size = used * qemu_target_page_size();
>>              multifd_send_fill_packet(p);
>>              p->flags = 0;
>>              p->num_packets++;
>> @@ -1018,7 +1027,8 @@ static void *multifd_send_thread(void *opaque)
>>              p->pages->used = 0;
>>              qemu_mutex_unlock(&p->mutex);
>>  
>> -            trace_multifd_send(p->id, packet_num, used, flags);
>> +            trace_multifd_send(p->id, packet_num, used, flags,
>> +                               p->next_packet_size);
>>  
>>              ret = qio_channel_write_all(p->c, (void *)p->packet,
>>                                          p->packet_len, &local_err);
>> @@ -1253,7 +1263,8 @@ static void *multifd_recv_thread(void *opaque)
>>  
>>          used = p->pages->used;
>>          flags = p->flags;
>> -        trace_multifd_recv(p->id, p->packet_num, used, flags);
>> +        trace_multifd_recv(p->id, p->packet_num, used, flags,
>> +                           p->next_packet_size);
>>          p->num_packets++;
>>          p->num_pages += used;
>>          qemu_mutex_unlock(&p->mutex);
>> diff --git a/migration/trace-events b/migration/trace-events
>> index bd2d0cd25a..a11e66e1d9 100644
>> --- a/migration/trace-events
>> +++ b/migration/trace-events
>> @@ -77,13 +77,13 @@ get_queued_page_not_dirty(const char *block_name, uint64_t tmp_offset, unsigned
>>  migration_bitmap_sync_start(void) ""
>>  migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
>>  migration_throttle(void) ""
>> -multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags) "channel %d packet number %" PRIu64 " pages %d flags 0x%x"
>> +multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %d packet number %" PRIu64 " pages %d flags 0x%x next packet size %d"
>>  multifd_recv_sync_main(long packet_num) "packet num %ld"
>>  multifd_recv_sync_main_signal(uint8_t id) "channel %d"
>>  multifd_recv_sync_main_wait(uint8_t id) "channel %d"
>>  multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %d packets %" PRIu64 " pages %" PRIu64
>>  multifd_recv_thread_start(uint8_t id) "%d"
>> -multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags) "channel %d packet_num %" PRIu64 " pages %d flags 0x%x"
>> +multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " pages %d flags 0x%x next packet size %d"
>>  multifd_send_sync_main(long packet_num) "packet num %ld"
>>  multifd_send_sync_main_signal(uint8_t id) "channel %d"
>>  multifd_send_sync_main_wait(uint8_t id) "channel %d"
>> -- 
>> 2.20.1
>> 
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

* Re: [Qemu-devel] [PATCH v2 5/8] multifd: Be flexible about packet size
  2019-02-21 18:30   ` Dr. David Alan Gilbert
@ 2019-02-27 11:06     ` Juan Quintela
  0 siblings, 0 replies; 17+ messages in thread
From: Juan Quintela @ 2019-02-27 11:06 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Thomas Huth, Paolo Bonzini, Eric Blake,
	Laurent Vivier, Markus Armbruster

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> This way we can change the packet size in the future and everything
>> will work.  We choose an arbitrarily big number (100 times the
>> configured size) as a limit on how big a packet we will reallocate for.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 24 ++++++++++++++++++------
>>  1 file changed, 18 insertions(+), 6 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index e22d02760b..75a8fc21f8 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -723,13 +723,13 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
>>  static void multifd_send_fill_packet(MultiFDSendParams *p)
>>  {
>>      MultiFDPacket_t *packet = p->packet;
>> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>> +    uint32_t page_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>>      int i;
>>  
>>      packet->magic = cpu_to_be32(MULTIFD_MAGIC);
>>      packet->version = cpu_to_be32(MULTIFD_VERSION);
>>      packet->flags = cpu_to_be32(p->flags);
>> -    packet->pages_alloc = cpu_to_be32(page_count);
>> +    packet->pages_alloc = cpu_to_be32(page_max);
>>      packet->pages_used = cpu_to_be32(p->pages->used);
>>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>>      packet->packet_num = cpu_to_be64(p->packet_num);
>> @@ -746,7 +746,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>>  static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>>  {
>>      MultiFDPacket_t *packet = p->packet;
>> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>> +    uint32_t pages_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>>      RAMBlock *block;
>>      int i;
>>  
>> @@ -769,12 +769,24 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>>      p->flags = be32_to_cpu(packet->flags);
>>  
>>      packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
>> -    if (packet->pages_alloc > page_count) {
>> +    /*
>> +     * If we received a packet that is 100 times bigger than expected
>> +     * just stop migration.  It is a magic number.
>> +     */
>> +    if (packet->pages_alloc > pages_max * 100) {
>>          error_setg(errp, "multifd: received packet "
>> -                   "with size %d and expected maximum size %d",
>> -                   packet->pages_alloc, page_count) ;
>> +                   "with size %d and expected size %d",
>> +                   packet->pages_alloc, pages_max) ;
>
> Should that end with pages_max * 100 ?

Not sure.

The *allocated* size is pages_max by default.  If we receive
bigger packets, we update it, but only up to a limit (arbitrary, I am
open to other limits).

So, what multifd is expecting here is pages_max.  But it will cope with
anything that is smaller than pages_max * 100.  So, what should I put in
the error message?  100 * pages_max or pages_max?

It appears that for you it is simpler to understand pages_max * 100, and
as I don't care, I am just changing it.
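[Editor's note: the three-way decision discussed here, accept, reallocate, or abort, can be written out as a small standalone sketch; the enum, the helper, and the names are illustrative, not the QEMU code.]

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the receive-side size check from patch 5/8:
 * - within the current allocation: accept as-is;
 * - bigger, but under the arbitrary cap (100x expected): reallocate;
 * - over the cap: refuse the packet and stop migration. */

#define PACKET_SIZE_LIMIT_FACTOR 100  /* the patch's magic number */

typedef enum {
    PACKET_OK,       /* fits in the current allocation */
    PACKET_REALLOC,  /* bigger than allocated but within the cap: grow */
    PACKET_TOO_BIG   /* more than 100x the expected size: error out */
} packet_verdict_t;

static packet_verdict_t check_pages_alloc(uint32_t pages_alloc,
                                          uint32_t pages_max,
                                          uint32_t pages_allocated)
{
    /* 64-bit math so pages_max * 100 cannot wrap around */
    if (pages_alloc > (uint64_t)pages_max * PACKET_SIZE_LIMIT_FACTOR) {
        return PACKET_TOO_BIG;
    }
    if (pages_alloc > pages_allocated) {
        return PACKET_REALLOC;
    }
    return PACKET_OK;
}
```

With pages_max = 128, anything up to 12800 pages is tolerated (reallocating if needed) and 12801 or more is rejected, which is why the error message arguably should quote pages_max * 100 as the limit.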

>>          return -1;
>>      }
>> +    /*
>> +     * We received a packet that is bigger than expected but inside
>> +     * reasonable limits (see previous comment).  Just reallocate.
>> +     */
>> +    if (packet->pages_alloc > p->pages->allocated) {
>> +        multifd_pages_clear(p->pages);
>> +        multifd_pages_init(packet->pages_alloc);
>> +    }
>>  
>>      p->pages->used = be32_to_cpu(packet->pages_used);
>>      if (p->pages->used > packet->pages_alloc) {
>
> Other than that error message, I think it's OK, although the names get
> very confusing (max, alloc, allocated)

I am open to suggestions.  I just ran out of names :-(

>
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Thanks.

end of thread, other threads:[~2019-02-27 11:03 UTC | newest]

Thread overview: 17+ messages
2019-02-20 11:56 [Qemu-devel] [PATCH v2 0/8] migration: Make multifd not experimental Juan Quintela
2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 1/8] multifd: Only send pages when packet are not empty Juan Quintela
2019-02-21 17:43   ` Dr. David Alan Gilbert
2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 2/8] multifd: Rename "size" member to pages_alloc Juan Quintela
2019-02-21 17:48   ` Dr. David Alan Gilbert
2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 3/8] multifd: Create new next_packet_size field Juan Quintela
2019-02-21 18:45   ` Dr. David Alan Gilbert
2019-02-27 11:02     ` Juan Quintela
2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 4/8] multifd: Drop x-multifd-page-count parameter Juan Quintela
2019-02-21 17:51   ` Dr. David Alan Gilbert
2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 5/8] multifd: Be flexible about packet size Juan Quintela
2019-02-21 18:30   ` Dr. David Alan Gilbert
2019-02-27 11:06     ` Juan Quintela
2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 6/8] multifd: Change default " Juan Quintela
2019-02-21 18:40   ` Dr. David Alan Gilbert
2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 7/8] multifd: Drop x- Juan Quintela
2019-02-20 11:56 ` [Qemu-devel] [PATCH v2 8/8] tests: Add migration multifd test Juan Quintela
