* [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads
@ 2021-11-24 10:05 Juan Quintela
  2021-11-24 10:05 ` [PATCH v3 01/23] multifd: Delete useless operation Juan Quintela
                   ` (23 more replies)
  0 siblings, 24 replies; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Hi

Trying with a different server.
As has happened before, when I sent everything only to myself, everything worked.

Sorry folks.

[v2]
This is a rebase against the latest master.

The reason for the resend is to configure git-publish properly and
hope that this time git-publish sends all the patches.

Please, review.

[v1]
Changes since Friday's version:
- More cleanups of the code
- Remove repeated calls to qemu_target_page_size()
- Establish normal pages and zero pages
- Detect zero pages on the multifd threads
- Send zero pages through the multifd channels
- Address Richard's review comments

It passes migration-test, so it should be perfect O:+)

ToDo for the next version:
- Check the version changes.
  I need 6.2 to be out before I can check against 7.0.
  That code doesn't exist at all for that reason.
- Send measurements of the differences

Please, review.

[

Friday version that just created a single writev instead of
write+writev.

]

Right now, multifd does a write() for the header and a writev() for
each group of pages.  Simplify it so we send the header as another
member of the IOV.
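
As a rough illustration of the intended layout (a hand-written sketch
with made-up names, not the actual QEMU code):

    /*
     * Sketch only: simplified view of the send path after the change.
     * "header", "pages" and the helper are illustrative, not QEMU's own.
     */
    #include <stdlib.h>
    #include <sys/uio.h>

    static ssize_t send_packet(int fd, void *header, size_t header_len,
                               void **pages, size_t npages, size_t page_size)
    {
        struct iovec *iov = calloc(npages + 1, sizeof(*iov));
        ssize_t ret;

        if (!iov) {
            return -1;
        }
        /* Before: write(fd, header, header_len) followed by a writev() for pages */
        iov[0].iov_base = header;          /* header is just another IOV member */
        iov[0].iov_len = header_len;
        for (size_t i = 0; i < npages; i++) {
            iov[i + 1].iov_base = pages[i];
            iov[i + 1].iov_len = page_size;
        }
        ret = writev(fd, iov, npages + 1); /* one system call instead of two */
        free(iov);
        return ret;
    }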

Once there, I got several simplifications:
* is_zero_range() was used only once; just use its body.
* Same for is_zero_page().
* Be consistent and use the offset inside the ramblock everywhere.
* Now that we have the offsets of the ramblock, we can drop the iov.
* Now that nothing uses iovs except the NOCOMP method, move the iovs
  from pages to methods.
* Now we can use iovs with a single field for zlib/zstd.
* The send_write() method is the same in all the implementations, so
  do the write directly.
* Now we can use a single writev() to write everything.

ToDo: Move zero page detection to the multifd threads.

With RAM sizes in the terabytes, zero page detection takes too much
time on the main thread.

The last patch of the series removes the detection of zero pages in
the main thread for multifd.  In the next posting of the series, I
will add the detection of zero pages and their transmission over the
multifd channels.
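
A minimal sketch of that direction (illustrative names only; the scan
itself in QEMU would use buffer_is_zero()):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Sketch: is this page all zeroes? */
    static bool page_is_zero(const uint8_t *page, size_t page_size)
    {
        for (size_t i = 0; i < page_size; i++) {
            if (page[i]) {
                return false;
            }
        }
        return true;
    }

    /*
     * Split the queued offsets into "normal" and "zero" pages inside the
     * multifd send thread, so the main thread never touches the page data.
     */
    static void classify_pages(const uint8_t *host, const size_t *offset,
                               size_t num, size_t page_size,
                               size_t *normal, size_t *normal_num,
                               size_t *zero, size_t *zero_num)
    {
        *normal_num = 0;
        *zero_num = 0;
        for (size_t i = 0; i < num; i++) {
            if (page_is_zero(host + offset[i], page_size)) {
                zero[(*zero_num)++] = offset[i];     /* only the offset is sent */
            } else {
                normal[(*normal_num)++] = offset[i]; /* page data goes in the iov */
            }
        }
    }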

Please review.

Later, Juan.

Juan Quintela (23):
  multifd: Delete useless operation
  migration: Never call twice qemu_target_page_size()
  multifd: Rename used field to num
  multifd: Add missing documentation
  multifd: The variable is only used inside the loop
  multifd: remove used parameter from send_prepare() method
  multifd: remove used parameter from send_recv_pages() method
  multifd: Fill offset and block for reception
  multifd: Make zstd compression method not use iovs
  multifd: Make zlib compression method not use iovs
  multifd: Move iov from pages to params
  multifd: Make zlib use iov's
  multifd: Make zstd use iov's
  multifd: Remove send_write() method
  multifd: Use a single writev on the send side
  multifd: Unfold "used" variable by its value
  multifd: Use normal pages array on the send side
  multifd: Use normal pages array on the recv side
  multifd: recv side only needs the RAMBlock host address
  multifd: Rename pages_used to normal_pages
  multifd: Support for zero pages transmission
  multifd: Zero pages transmission
  migration: Use multifd before we check for the zero page

 migration/multifd.h      |  52 +++++++---
 migration/migration.c    |   7 +-
 migration/multifd-zlib.c |  71 +++++--------
 migration/multifd-zstd.c |  70 +++++--------
 migration/multifd.c      | 214 +++++++++++++++++++++++----------------
 migration/ram.c          |  22 ++--
 migration/savevm.c       |   5 +-
 migration/trace-events   |   4 +-
 8 files changed, 231 insertions(+), 214 deletions(-)

-- 
2.33.1





* [PATCH v3 01/23] multifd: Delete useless operation
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
@ 2021-11-24 10:05 ` Juan Quintela
  2021-11-24 18:48   ` Dr. David Alan Gilbert
  2021-11-24 10:05 ` [PATCH v3 02/23] migration: Never call twice qemu_target_page_size() Juan Quintela
                   ` (22 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

We are dividing by page_size only to multiply by it again at the only
use.  Once there, improve the comments.
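
Spelled out, using the definition that this patch removes (the
division is exact for power-of-two target page sizes):

    page_count = MULTIFD_PACKET_SIZE / page_size
    zbuff_len  = page_count * page_size * 2
               = (MULTIFD_PACKET_SIZE / page_size) * page_size * 2
               = MULTIFD_PACKET_SIZE * 2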

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd-zlib.c | 13 ++++---------
 migration/multifd-zstd.c | 13 ++++---------
 2 files changed, 8 insertions(+), 18 deletions(-)

diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index ab4ba75d75..3fc7813b44 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -42,7 +42,6 @@ struct zlib_data {
  */
 static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
 {
-    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     struct zlib_data *z = g_malloc0(sizeof(struct zlib_data));
     z_stream *zs = &z->zs;
 
@@ -54,9 +53,8 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
         error_setg(errp, "multifd %d: deflate init failed", p->id);
         return -1;
     }
-    /* We will never have more than page_count pages */
-    z->zbuff_len = page_count * qemu_target_page_size();
-    z->zbuff_len *= 2;
+    /* To be safe, we reserve twice the size of the packet */
+    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
     z->zbuff = g_try_malloc(z->zbuff_len);
     if (!z->zbuff) {
         deflateEnd(&z->zs);
@@ -180,7 +178,6 @@ static int zlib_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
  */
 static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
 {
-    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     struct zlib_data *z = g_malloc0(sizeof(struct zlib_data));
     z_stream *zs = &z->zs;
 
@@ -194,10 +191,8 @@ static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
         error_setg(errp, "multifd %d: inflate init failed", p->id);
         return -1;
     }
-    /* We will never have more than page_count pages */
-    z->zbuff_len = page_count * qemu_target_page_size();
-    /* We know compression "could" use more space */
-    z->zbuff_len *= 2;
+    /* To be safe, we reserve twice the size of the packet */
+    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
     z->zbuff = g_try_malloc(z->zbuff_len);
     if (!z->zbuff) {
         inflateEnd(zs);
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 693bddf8c9..cc3b8869c0 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -47,7 +47,6 @@ struct zstd_data {
  */
 static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
 {
-    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     struct zstd_data *z = g_new0(struct zstd_data, 1);
     int res;
 
@@ -67,9 +66,8 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
                    p->id, ZSTD_getErrorName(res));
         return -1;
     }
-    /* We will never have more than page_count pages */
-    z->zbuff_len = page_count * qemu_target_page_size();
-    z->zbuff_len *= 2;
+    /* To be safe, we reserve twice the size of the packet */
+    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
     z->zbuff = g_try_malloc(z->zbuff_len);
     if (!z->zbuff) {
         ZSTD_freeCStream(z->zcs);
@@ -191,7 +189,6 @@ static int zstd_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
  */
 static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
 {
-    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     struct zstd_data *z = g_new0(struct zstd_data, 1);
     int ret;
 
@@ -212,10 +209,8 @@ static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
         return -1;
     }
 
-    /* We will never have more than page_count pages */
-    z->zbuff_len = page_count * qemu_target_page_size();
-    /* We know compression "could" use more space */
-    z->zbuff_len *= 2;
+    /* To be safe, we reserve twice the size of the packet */
+    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
     z->zbuff = g_try_malloc(z->zbuff_len);
     if (!z->zbuff) {
         ZSTD_freeDStream(z->zds);
-- 
2.33.1




* [PATCH v3 02/23] migration: Never call twice qemu_target_page_size()
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
  2021-11-24 10:05 ` [PATCH v3 01/23] multifd: Delete useless operation Juan Quintela
@ 2021-11-24 10:05 ` Juan Quintela
  2021-11-24 18:52   ` Dr. David Alan Gilbert
  2021-11-24 10:05 ` [PATCH v3 03/23] multifd: Rename used field to num Juan Quintela
                   ` (21 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.c | 7 ++++---
 migration/multifd.c   | 7 ++++---
 migration/savevm.c    | 5 +++--
 3 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 2c1edb2cb9..3de11ae921 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -996,6 +996,8 @@ static void populate_time_info(MigrationInfo *info, MigrationState *s)
 
 static void populate_ram_info(MigrationInfo *info, MigrationState *s)
 {
+    size_t page_size = qemu_target_page_size();
+
     info->has_ram = true;
     info->ram = g_malloc0(sizeof(*info->ram));
     info->ram->transferred = ram_counters.transferred;
@@ -1004,12 +1006,11 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     /* legacy value.  It is not used anymore */
     info->ram->skipped = 0;
     info->ram->normal = ram_counters.normal;
-    info->ram->normal_bytes = ram_counters.normal *
-        qemu_target_page_size();
+    info->ram->normal_bytes = ram_counters.normal * page_size;
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
     info->ram->postcopy_requests = ram_counters.postcopy_requests;
-    info->ram->page_size = qemu_target_page_size();
+    info->ram->page_size = page_size;
     info->ram->multifd_bytes = ram_counters.multifd_bytes;
     info->ram->pages_per_second = s->pages_per_second;
 
diff --git a/migration/multifd.c b/migration/multifd.c
index 7c9deb1921..8125d0015c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -289,7 +289,8 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
 static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
 {
     MultiFDPacket_t *packet = p->packet;
-    uint32_t pages_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+    size_t page_size = qemu_target_page_size();
+    uint32_t pages_max = MULTIFD_PACKET_SIZE / page_size;
     RAMBlock *block;
     int i;
 
@@ -358,14 +359,14 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     for (i = 0; i < p->pages->used; i++) {
         uint64_t offset = be64_to_cpu(packet->offset[i]);
 
-        if (offset > (block->used_length - qemu_target_page_size())) {
+        if (offset > (block->used_length - page_size)) {
             error_setg(errp, "multifd: offset too long %" PRIu64
                        " (max " RAM_ADDR_FMT ")",
                        offset, block->used_length);
             return -1;
         }
         p->pages->iov[i].iov_base = block->host + offset;
-        p->pages->iov[i].iov_len = qemu_target_page_size();
+        p->pages->iov[i].iov_len = page_size;
     }
 
     return 0;
diff --git a/migration/savevm.c b/migration/savevm.c
index d59e976d50..0bef031acb 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1685,6 +1685,7 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis,
 {
     PostcopyState ps = postcopy_state_set(POSTCOPY_INCOMING_ADVISE);
     uint64_t remote_pagesize_summary, local_pagesize_summary, remote_tps;
+    size_t page_size = qemu_target_page_size();
     Error *local_err = NULL;
 
     trace_loadvm_postcopy_handle_advise();
@@ -1741,13 +1742,13 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis,
     }
 
     remote_tps = qemu_get_be64(mis->from_src_file);
-    if (remote_tps != qemu_target_page_size()) {
+    if (remote_tps != page_size) {
         /*
          * Again, some differences could be dealt with, but for now keep it
          * simple.
          */
         error_report("Postcopy needs matching target page sizes (s=%d d=%zd)",
-                     (int)remote_tps, qemu_target_page_size());
+                     (int)remote_tps, page_size);
         return -1;
     }
 
-- 
2.33.1




* [PATCH v3 03/23] multifd: Rename used field to num
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
  2021-11-24 10:05 ` [PATCH v3 01/23] multifd: Delete useless operation Juan Quintela
  2021-11-24 10:05 ` [PATCH v3 02/23] migration: Never call twice qemu_target_page_size() Juan Quintela
@ 2021-11-24 10:05 ` Juan Quintela
  2021-11-24 19:37   ` Dr. David Alan Gilbert
  2021-12-13  9:34   ` Zheng Chuan via
  2021-11-24 10:05 ` [PATCH v3 04/23] multifd: Add missing documentation Juan Quintela
                   ` (20 subsequent siblings)
  23 siblings, 2 replies; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

We will need to split it later into zero_num (number of zero pages) and
normal_num (number of normal pages).  This name fits better.
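
Roughly, the direction is (a sketch based on this description, not the
final code):

    /*
     * pages->num          today: number of pages queued in the packet
     * pages->normal_num   later: pages whose data is actually transmitted
     * pages->zero_num     later: pages detected as all-zero
     *
     * with num == normal_num + zero_num once the split happens
     */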

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h |  2 +-
 migration/multifd.c | 38 +++++++++++++++++++-------------------
 2 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 15c50ca0b2..86820dd028 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -55,7 +55,7 @@ typedef struct {
 
 typedef struct {
     /* number of used pages */
-    uint32_t used;
+    uint32_t num;
     /* number of allocated pages */
     uint32_t allocated;
     /* global number of generated multifd packets */
diff --git a/migration/multifd.c b/migration/multifd.c
index 8125d0015c..8ea86d81dc 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -252,7 +252,7 @@ static MultiFDPages_t *multifd_pages_init(size_t size)
 
 static void multifd_pages_clear(MultiFDPages_t *pages)
 {
-    pages->used = 0;
+    pages->num = 0;
     pages->allocated = 0;
     pages->packet_num = 0;
     pages->block = NULL;
@@ -270,7 +270,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
 
     packet->flags = cpu_to_be32(p->flags);
     packet->pages_alloc = cpu_to_be32(p->pages->allocated);
-    packet->pages_used = cpu_to_be32(p->pages->used);
+    packet->pages_used = cpu_to_be32(p->pages->num);
     packet->next_packet_size = cpu_to_be32(p->next_packet_size);
     packet->packet_num = cpu_to_be64(p->packet_num);
 
@@ -278,7 +278,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
         strncpy(packet->ramblock, p->pages->block->idstr, 256);
     }
 
-    for (i = 0; i < p->pages->used; i++) {
+    for (i = 0; i < p->pages->num; i++) {
         /* there are architectures where ram_addr_t is 32 bit */
         uint64_t temp = p->pages->offset[i];
 
@@ -332,18 +332,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
         p->pages = multifd_pages_init(packet->pages_alloc);
     }
 
-    p->pages->used = be32_to_cpu(packet->pages_used);
-    if (p->pages->used > packet->pages_alloc) {
+    p->pages->num = be32_to_cpu(packet->pages_used);
+    if (p->pages->num > packet->pages_alloc) {
         error_setg(errp, "multifd: received packet "
                    "with %d pages and expected maximum pages are %d",
-                   p->pages->used, packet->pages_alloc) ;
+                   p->pages->num, packet->pages_alloc) ;
         return -1;
     }
 
     p->next_packet_size = be32_to_cpu(packet->next_packet_size);
     p->packet_num = be64_to_cpu(packet->packet_num);
 
-    if (p->pages->used == 0) {
+    if (p->pages->num == 0) {
         return 0;
     }
 
@@ -356,7 +356,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
         return -1;
     }
 
-    for (i = 0; i < p->pages->used; i++) {
+    for (i = 0; i < p->pages->num; i++) {
         uint64_t offset = be64_to_cpu(packet->offset[i]);
 
         if (offset > (block->used_length - page_size)) {
@@ -443,13 +443,13 @@ static int multifd_send_pages(QEMUFile *f)
         }
         qemu_mutex_unlock(&p->mutex);
     }
-    assert(!p->pages->used);
+    assert(!p->pages->num);
     assert(!p->pages->block);
 
     p->packet_num = multifd_send_state->packet_num++;
     multifd_send_state->pages = p->pages;
     p->pages = pages;
-    transferred = ((uint64_t) pages->used) * qemu_target_page_size()
+    transferred = ((uint64_t) pages->num) * qemu_target_page_size()
                 + p->packet_len;
     qemu_file_update_transfer(f, transferred);
     ram_counters.multifd_bytes += transferred;
@@ -469,12 +469,12 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
     }
 
     if (pages->block == block) {
-        pages->offset[pages->used] = offset;
-        pages->iov[pages->used].iov_base = block->host + offset;
-        pages->iov[pages->used].iov_len = qemu_target_page_size();
-        pages->used++;
+        pages->offset[pages->num] = offset;
+        pages->iov[pages->num].iov_base = block->host + offset;
+        pages->iov[pages->num].iov_len = qemu_target_page_size();
+        pages->num++;
 
-        if (pages->used < pages->allocated) {
+        if (pages->num < pages->allocated) {
             return 1;
         }
     }
@@ -586,7 +586,7 @@ void multifd_send_sync_main(QEMUFile *f)
     if (!migrate_use_multifd()) {
         return;
     }
-    if (multifd_send_state->pages->used) {
+    if (multifd_send_state->pages->num) {
         if (multifd_send_pages(f) < 0) {
             error_report("%s: multifd_send_pages fail", __func__);
             return;
@@ -649,7 +649,7 @@ static void *multifd_send_thread(void *opaque)
         qemu_mutex_lock(&p->mutex);
 
         if (p->pending_job) {
-            uint32_t used = p->pages->used;
+            uint32_t used = p->pages->num;
             uint64_t packet_num = p->packet_num;
             flags = p->flags;
 
@@ -665,7 +665,7 @@ static void *multifd_send_thread(void *opaque)
             p->flags = 0;
             p->num_packets++;
             p->num_pages += used;
-            p->pages->used = 0;
+            p->pages->num = 0;
             p->pages->block = NULL;
             qemu_mutex_unlock(&p->mutex);
 
@@ -1091,7 +1091,7 @@ static void *multifd_recv_thread(void *opaque)
             break;
         }
 
-        used = p->pages->used;
+        used = p->pages->num;
         flags = p->flags;
         /* recv methods don't know how to handle the SYNC flag */
         p->flags &= ~MULTIFD_FLAG_SYNC;
-- 
2.33.1




* [PATCH v3 04/23] multifd: Add missing documentation
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (2 preceding siblings ...)
  2021-11-24 10:05 ` [PATCH v3 03/23] multifd: Rename used field to num Juan Quintela
@ 2021-11-24 10:05 ` Juan Quintela
  2021-11-25 18:38   ` Dr. David Alan Gilbert
  2021-11-24 10:05 ` [PATCH v3 05/23] multifd: The variable is only used inside the loop Juan Quintela
                   ` (19 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd-zlib.c | 2 ++
 migration/multifd-zstd.c | 2 ++
 migration/multifd.c      | 1 +
 3 files changed, 5 insertions(+)

diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 3fc7813b44..d0437cce2a 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -72,6 +72,7 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
  * Close the channel and return memory.
  *
  * @p: Params for the channel that we are using
+ * @errp: pointer to an error
  */
 static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
 {
@@ -94,6 +95,7 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
  *
  * @p: Params for the channel that we are using
  * @used: number of pages used
+ * @errp: pointer to an error
  */
 static int zlib_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
 {
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index cc3b8869c0..09ae1cf91a 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -84,6 +84,7 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
  * Close the channel and return memory.
  *
  * @p: Params for the channel that we are using
+ * @errp: pointer to an error
  */
 static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
 {
@@ -107,6 +108,7 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
  *
  * @p: Params for the channel that we are using
  * @used: number of pages used
+ * @errp: pointer to an error
  */
 static int zstd_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
 {
diff --git a/migration/multifd.c b/migration/multifd.c
index 8ea86d81dc..cdeffdc4c5 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -66,6 +66,7 @@ static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
  * For no compression this function does nothing.
  *
  * @p: Params for the channel that we are using
+ * @errp: pointer to an error
  */
 static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
 {
-- 
2.33.1




* [PATCH v3 05/23] multifd: The variable is only used inside the loop
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (3 preceding siblings ...)
  2021-11-24 10:05 ` [PATCH v3 04/23] multifd: Add missing documentation Juan Quintela
@ 2021-11-24 10:05 ` Juan Quintela
  2021-11-25 18:40   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 06/23] multifd: remove used parameter from send_prepare() method Juan Quintela
                   ` (18 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:05 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index cdeffdc4c5..ce7101cf9d 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -629,7 +629,6 @@ static void *multifd_send_thread(void *opaque)
     MultiFDSendParams *p = opaque;
     Error *local_err = NULL;
     int ret = 0;
-    uint32_t flags = 0;
 
     trace_multifd_send_thread_start(p->id);
     rcu_register_thread();
@@ -652,7 +651,7 @@ static void *multifd_send_thread(void *opaque)
         if (p->pending_job) {
             uint32_t used = p->pages->num;
             uint64_t packet_num = p->packet_num;
-            flags = p->flags;
+            uint32_t flags = p->flags;
 
             if (used) {
                 ret = multifd_send_state->ops->send_prepare(p, used,
-- 
2.33.1




* [PATCH v3 06/23] multifd: remove used parameter from send_prepare() method
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (4 preceding siblings ...)
  2021-11-24 10:05 ` [PATCH v3 05/23] multifd: The variable is only used inside the loop Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-25 18:51   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 07/23] multifd: remove used parameter from send_recv_pages() method Juan Quintela
                   ` (17 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

It is already there as p->pages->num.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h      | 2 +-
 migration/multifd-zlib.c | 7 +++----
 migration/multifd-zstd.c | 7 +++----
 migration/multifd.c      | 9 +++------
 4 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 86820dd028..7968cc5c20 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -159,7 +159,7 @@ typedef struct {
     /* Cleanup for sending side */
     void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
     /* Prepare the send packet */
-    int (*send_prepare)(MultiFDSendParams *p, uint32_t used, Error **errp);
+    int (*send_prepare)(MultiFDSendParams *p, Error **errp);
     /* Write the send packet */
     int (*send_write)(MultiFDSendParams *p, uint32_t used, Error **errp);
     /* Setup for receiving side */
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index d0437cce2a..28f0ed933b 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -94,10 +94,9 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
  * Returns 0 for success or -1 for error
  *
  * @p: Params for the channel that we are using
- * @used: number of pages used
  * @errp: pointer to an error
  */
-static int zlib_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
+static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
 {
     struct iovec *iov = p->pages->iov;
     struct zlib_data *z = p->data;
@@ -106,11 +105,11 @@ static int zlib_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
     int ret;
     uint32_t i;
 
-    for (i = 0; i < used; i++) {
+    for (i = 0; i < p->pages->num; i++) {
         uint32_t available = z->zbuff_len - out_size;
         int flush = Z_NO_FLUSH;
 
-        if (i == used - 1) {
+        if (i == p->pages->num - 1) {
             flush = Z_SYNC_FLUSH;
         }
 
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 09ae1cf91a..4a71e96e06 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -107,10 +107,9 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
  * Returns 0 for success or -1 for error
  *
  * @p: Params for the channel that we are using
- * @used: number of pages used
  * @errp: pointer to an error
  */
-static int zstd_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
+static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
 {
     struct iovec *iov = p->pages->iov;
     struct zstd_data *z = p->data;
@@ -121,10 +120,10 @@ static int zstd_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
     z->out.size = z->zbuff_len;
     z->out.pos = 0;
 
-    for (i = 0; i < used; i++) {
+    for (i = 0; i < p->pages->num; i++) {
         ZSTD_EndDirective flush = ZSTD_e_continue;
 
-        if (i == used - 1) {
+        if (i == p->pages->num - 1) {
             flush = ZSTD_e_flush;
         }
         z->in.src = iov[i].iov_base;
diff --git a/migration/multifd.c b/migration/multifd.c
index ce7101cf9d..098ef8842c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -82,13 +82,11 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
  * Returns 0 for success or -1 for error
  *
  * @p: Params for the channel that we are using
- * @used: number of pages used
  * @errp: pointer to an error
  */
-static int nocomp_send_prepare(MultiFDSendParams *p, uint32_t used,
-                               Error **errp)
+static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
 {
-    p->next_packet_size = used * qemu_target_page_size();
+    p->next_packet_size = p->pages->num * qemu_target_page_size();
     p->flags |= MULTIFD_FLAG_NOCOMP;
     return 0;
 }
@@ -654,8 +652,7 @@ static void *multifd_send_thread(void *opaque)
             uint32_t flags = p->flags;
 
             if (used) {
-                ret = multifd_send_state->ops->send_prepare(p, used,
-                                                            &local_err);
+                ret = multifd_send_state->ops->send_prepare(p, &local_err);
                 if (ret != 0) {
                     qemu_mutex_unlock(&p->mutex);
                     break;
-- 
2.33.1




* [PATCH v3 07/23] multifd: remove used parameter from send_recv_pages() method
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (5 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 06/23] multifd: remove used parameter from send_prepare() method Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-25 18:53   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 08/23] multifd: Fill offset and block for reception Juan Quintela
                   ` (16 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

It is already there as p->pages->num.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h      | 2 +-
 migration/multifd-zlib.c | 9 ++++-----
 migration/multifd-zstd.c | 7 +++----
 migration/multifd.c      | 7 +++----
 4 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 7968cc5c20..e57adc783b 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -167,7 +167,7 @@ typedef struct {
     /* Cleanup for receiving side */
     void (*recv_cleanup)(MultiFDRecvParams *p);
     /* Read all pages */
-    int (*recv_pages)(MultiFDRecvParams *p, uint32_t used, Error **errp);
+    int (*recv_pages)(MultiFDRecvParams *p, Error **errp);
 } MultiFDMethods;
 
 void multifd_register_ops(int method, MultiFDMethods *ops);
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 28f0ed933b..e85ef8824d 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -230,17 +230,16 @@ static void zlib_recv_cleanup(MultiFDRecvParams *p)
  * Returns 0 for success or -1 for error
  *
  * @p: Params for the channel that we are using
- * @used: number of pages used
  * @errp: pointer to an error
  */
-static int zlib_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
+static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
 {
     struct zlib_data *z = p->data;
     z_stream *zs = &z->zs;
     uint32_t in_size = p->next_packet_size;
     /* we measure the change of total_out */
     uint32_t out_size = zs->total_out;
-    uint32_t expected_size = used * qemu_target_page_size();
+    uint32_t expected_size = p->pages->num * qemu_target_page_size();
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
     int ret;
     int i;
@@ -259,12 +258,12 @@ static int zlib_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
     zs->avail_in = in_size;
     zs->next_in = z->zbuff;
 
-    for (i = 0; i < used; i++) {
+    for (i = 0; i < p->pages->num; i++) {
         struct iovec *iov = &p->pages->iov[i];
         int flush = Z_NO_FLUSH;
         unsigned long start = zs->total_out;
 
-        if (i == used - 1) {
+        if (i == p->pages->num - 1) {
             flush = Z_SYNC_FLUSH;
         }
 
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 4a71e96e06..a8b104f4ee 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -250,14 +250,13 @@ static void zstd_recv_cleanup(MultiFDRecvParams *p)
  * Returns 0 for success or -1 for error
  *
  * @p: Params for the channel that we are using
- * @used: number of pages used
  * @errp: pointer to an error
  */
-static int zstd_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
+static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
 {
     uint32_t in_size = p->next_packet_size;
     uint32_t out_size = 0;
-    uint32_t expected_size = used * qemu_target_page_size();
+    uint32_t expected_size = p->pages->num * qemu_target_page_size();
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
     struct zstd_data *z = p->data;
     int ret;
@@ -278,7 +277,7 @@ static int zstd_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
     z->in.size = in_size;
     z->in.pos = 0;
 
-    for (i = 0; i < used; i++) {
+    for (i = 0; i < p->pages->num; i++) {
         struct iovec *iov = &p->pages->iov[i];
 
         z->out.dst = iov->iov_base;
diff --git a/migration/multifd.c b/migration/multifd.c
index 098ef8842c..55d99a8232 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -141,10 +141,9 @@ static void nocomp_recv_cleanup(MultiFDRecvParams *p)
  * Returns 0 for success or -1 for error
  *
  * @p: Params for the channel that we are using
- * @used: number of pages used
  * @errp: pointer to an error
  */
-static int nocomp_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
+static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
 {
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
 
@@ -153,7 +152,7 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
                    p->id, flags, MULTIFD_FLAG_NOCOMP);
         return -1;
     }
-    return qio_channel_readv_all(p->c, p->pages->iov, used, errp);
+    return qio_channel_readv_all(p->c, p->pages->iov, p->pages->num, errp);
 }
 
 static MultiFDMethods multifd_nocomp_ops = {
@@ -1099,7 +1098,7 @@ static void *multifd_recv_thread(void *opaque)
         qemu_mutex_unlock(&p->mutex);
 
         if (used) {
-            ret = multifd_recv_state->ops->recv_pages(p, used, &local_err);
+            ret = multifd_recv_state->ops->recv_pages(p, &local_err);
             if (ret != 0) {
                 break;
             }
-- 
2.33.1




* [PATCH v3 08/23] multifd: Fill offset and block for reception
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (6 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 07/23] multifd: remove used parameter from send_recv_pages() method Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-25 19:41   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 09/23] multifd: Make zstd compression method not use iovs Juan Quintela
                   ` (15 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

We were using the iov directly, but we will need this info in the
following patch.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/migration/multifd.c b/migration/multifd.c
index 55d99a8232..0533da154a 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -354,6 +354,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
         return -1;
     }
 
+    p->pages->block = block;
     for (i = 0; i < p->pages->num; i++) {
         uint64_t offset = be64_to_cpu(packet->offset[i]);
 
@@ -363,6 +364,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
                        offset, block->used_length);
             return -1;
         }
+        p->pages->offset[i] = offset;
         p->pages->iov[i].iov_base = block->host + offset;
         p->pages->iov[i].iov_len = page_size;
     }
-- 
2.33.1




* [PATCH v3 09/23] multifd: Make zstd compression method not use iovs
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (7 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 08/23] multifd: Fill offset and block for reception Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-29 17:16   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 10/23] multifd: Make zlib " Juan Quintela
                   ` (14 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd-zstd.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index a8b104f4ee..2d5b61106c 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -13,6 +13,7 @@
 #include "qemu/osdep.h"
 #include <zstd.h>
 #include "qemu/rcu.h"
+#include "exec/ramblock.h"
 #include "exec/target_page.h"
 #include "qapi/error.h"
 #include "migration.h"
@@ -111,8 +112,8 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
  */
 static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
 {
-    struct iovec *iov = p->pages->iov;
     struct zstd_data *z = p->data;
+    size_t page_size = qemu_target_page_size();
     int ret;
     uint32_t i;
 
@@ -126,8 +127,8 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
         if (i == p->pages->num - 1) {
             flush = ZSTD_e_flush;
         }
-        z->in.src = iov[i].iov_base;
-        z->in.size = iov[i].iov_len;
+        z->in.src = p->pages->block->host + p->pages->offset[i];
+        z->in.size = page_size;
         z->in.pos = 0;
 
         /*
@@ -256,7 +257,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
 {
     uint32_t in_size = p->next_packet_size;
     uint32_t out_size = 0;
-    uint32_t expected_size = p->pages->num * qemu_target_page_size();
+    size_t page_size = qemu_target_page_size();
+    uint32_t expected_size = p->pages->num * page_size;
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
     struct zstd_data *z = p->data;
     int ret;
@@ -278,10 +280,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
     z->in.pos = 0;
 
     for (i = 0; i < p->pages->num; i++) {
-        struct iovec *iov = &p->pages->iov[i];
-
-        z->out.dst = iov->iov_base;
-        z->out.size = iov->iov_len;
+        z->out.dst = p->pages->block->host + p->pages->offset[i];
+        z->out.size = page_size;
         z->out.pos = 0;
 
         /*
@@ -295,8 +295,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
         do {
             ret = ZSTD_decompressStream(z->zds, &z->out, &z->in);
         } while (ret > 0 && (z->in.size - z->in.pos > 0)
-                         && (z->out.pos < iov->iov_len));
-        if (ret > 0 && (z->out.pos < iov->iov_len)) {
+                         && (z->out.pos < page_size));
+        if (ret > 0 && (z->out.pos < page_size)) {
             error_setg(errp, "multifd %d: decompressStream buffer too small",
                        p->id);
             return -1;
-- 
2.33.1




* [PATCH v3 10/23] multifd: Make zlib compression method not use iovs
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (8 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 09/23] multifd: Make zstd compression method not use iovs Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-29 17:30   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 11/23] multifd: Move iov from pages to params Juan Quintela
                   ` (13 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd-zlib.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index e85ef8824d..da6201704c 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -13,6 +13,7 @@
 #include "qemu/osdep.h"
 #include <zlib.h>
 #include "qemu/rcu.h"
+#include "exec/ramblock.h"
 #include "exec/target_page.h"
 #include "qapi/error.h"
 #include "migration.h"
@@ -98,8 +99,8 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
  */
 static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
 {
-    struct iovec *iov = p->pages->iov;
     struct zlib_data *z = p->data;
+    size_t page_size = qemu_target_page_size();
     z_stream *zs = &z->zs;
     uint32_t out_size = 0;
     int ret;
@@ -113,8 +114,8 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
             flush = Z_SYNC_FLUSH;
         }
 
-        zs->avail_in = iov[i].iov_len;
-        zs->next_in = iov[i].iov_base;
+        zs->avail_in = page_size;
+        zs->next_in = p->pages->block->host + p->pages->offset[i];
 
         zs->avail_out = available;
         zs->next_out = z->zbuff + out_size;
@@ -235,6 +236,7 @@ static void zlib_recv_cleanup(MultiFDRecvParams *p)
 static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
 {
     struct zlib_data *z = p->data;
+    size_t page_size = qemu_target_page_size();
     z_stream *zs = &z->zs;
     uint32_t in_size = p->next_packet_size;
     /* we measure the change of total_out */
@@ -259,7 +261,6 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
     zs->next_in = z->zbuff;
 
     for (i = 0; i < p->pages->num; i++) {
-        struct iovec *iov = &p->pages->iov[i];
         int flush = Z_NO_FLUSH;
         unsigned long start = zs->total_out;
 
@@ -267,8 +268,8 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
             flush = Z_SYNC_FLUSH;
         }
 
-        zs->avail_out = iov->iov_len;
-        zs->next_out = iov->iov_base;
+        zs->avail_out = page_size;
+        zs->next_out = p->pages->block->host + p->pages->offset[i];
 
         /*
          * Welcome to inflate semantics
@@ -281,8 +282,8 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
         do {
             ret = inflate(zs, flush);
         } while (ret == Z_OK && zs->avail_in
-                             && (zs->total_out - start) < iov->iov_len);
-        if (ret == Z_OK && (zs->total_out - start) < iov->iov_len) {
+                             && (zs->total_out - start) < page_size);
+        if (ret == Z_OK && (zs->total_out - start) < page_size) {
             error_setg(errp, "multifd %d: inflate generated too few output",
                        p->id);
             return -1;
-- 
2.33.1




* [PATCH v3 11/23] multifd: Move iov from pages to params
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (9 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 10/23] multifd: Make zlib " Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-29 17:52   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 12/23] multifd: Make zlib use iov's Juan Quintela
                   ` (12 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

This will allow us to reduce the number of system calls in the next patch.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h |  8 ++++++--
 migration/multifd.c | 34 ++++++++++++++++++++++++----------
 2 files changed, 30 insertions(+), 12 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index e57adc783b..c3f18af364 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -62,8 +62,6 @@ typedef struct {
     uint64_t packet_num;
     /* offset of each page */
     ram_addr_t *offset;
-    /* pointer to each page */
-    struct iovec *iov;
     RAMBlock *block;
 } MultiFDPages_t;
 
@@ -110,6 +108,10 @@ typedef struct {
     uint64_t num_pages;
     /* syncs main thread and channels */
     QemuSemaphore sem_sync;
+    /* buffers to send */
+    struct iovec *iov;
+    /* number of iovs used */
+    uint32_t iovs_num;
     /* used for compression methods */
     void *data;
 }  MultiFDSendParams;
@@ -149,6 +151,8 @@ typedef struct {
     uint64_t num_pages;
     /* syncs main thread and channels */
     QemuSemaphore sem_sync;
+    /* buffers to recv */
+    struct iovec *iov;
     /* used for de-compression methods */
     void *data;
 } MultiFDRecvParams;
diff --git a/migration/multifd.c b/migration/multifd.c
index 0533da154a..37487fd01c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -86,7 +86,16 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
  */
 static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
 {
-    p->next_packet_size = p->pages->num * qemu_target_page_size();
+    MultiFDPages_t *pages = p->pages;
+    size_t page_size = qemu_target_page_size();
+
+    for (int i = 0; i < p->pages->num; i++) {
+        p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
+        p->iov[p->iovs_num].iov_len = page_size;
+        p->iovs_num++;
+    }
+
+    p->next_packet_size = p->pages->num * page_size;
     p->flags |= MULTIFD_FLAG_NOCOMP;
     return 0;
 }
@@ -104,7 +113,7 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
  */
 static int nocomp_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
 {
-    return qio_channel_writev_all(p->c, p->pages->iov, used, errp);
+    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
 }
 
 /**
@@ -146,13 +155,18 @@ static void nocomp_recv_cleanup(MultiFDRecvParams *p)
 static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
 {
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
+    size_t page_size = qemu_target_page_size();
 
     if (flags != MULTIFD_FLAG_NOCOMP) {
         error_setg(errp, "multifd %d: flags received %x flags expected %x",
                    p->id, flags, MULTIFD_FLAG_NOCOMP);
         return -1;
     }
-    return qio_channel_readv_all(p->c, p->pages->iov, p->pages->num, errp);
+    for (int i = 0; i < p->pages->num; i++) {
+        p->iov[i].iov_base = p->pages->block->host + p->pages->offset[i];
+        p->iov[i].iov_len = page_size;
+    }
+    return qio_channel_readv_all(p->c, p->iov, p->pages->num, errp);
 }
 
 static MultiFDMethods multifd_nocomp_ops = {
@@ -242,7 +256,6 @@ static MultiFDPages_t *multifd_pages_init(size_t size)
     MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
 
     pages->allocated = size;
-    pages->iov = g_new0(struct iovec, size);
     pages->offset = g_new0(ram_addr_t, size);
 
     return pages;
@@ -254,8 +267,6 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
     pages->allocated = 0;
     pages->packet_num = 0;
     pages->block = NULL;
-    g_free(pages->iov);
-    pages->iov = NULL;
     g_free(pages->offset);
     pages->offset = NULL;
     g_free(pages);
@@ -365,8 +376,6 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
             return -1;
         }
         p->pages->offset[i] = offset;
-        p->pages->iov[i].iov_base = block->host + offset;
-        p->pages->iov[i].iov_len = page_size;
     }
 
     return 0;
@@ -470,8 +479,6 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
 
     if (pages->block == block) {
         pages->offset[pages->num] = offset;
-        pages->iov[pages->num].iov_base = block->host + offset;
-        pages->iov[pages->num].iov_len = qemu_target_page_size();
         pages->num++;
 
         if (pages->num < pages->allocated) {
@@ -564,6 +571,8 @@ void multifd_save_cleanup(void)
         p->packet_len = 0;
         g_free(p->packet);
         p->packet = NULL;
+        g_free(p->iov);
+        p->iov = NULL;
         multifd_send_state->ops->send_cleanup(p, &local_err);
         if (local_err) {
             migrate_set_error(migrate_get_current(), local_err);
@@ -651,6 +660,7 @@ static void *multifd_send_thread(void *opaque)
             uint32_t used = p->pages->num;
             uint64_t packet_num = p->packet_num;
             uint32_t flags = p->flags;
+            p->iovs_num = 0;
 
             if (used) {
                 ret = multifd_send_state->ops->send_prepare(p, &local_err);
@@ -919,6 +929,7 @@ int multifd_save_setup(Error **errp)
         p->packet->version = cpu_to_be32(MULTIFD_VERSION);
         p->name = g_strdup_printf("multifdsend_%d", i);
         p->tls_hostname = g_strdup(s->hostname);
+        p->iov = g_new0(struct iovec, page_count);
         socket_send_channel_create(multifd_new_send_channel_async, p);
     }
 
@@ -1018,6 +1029,8 @@ int multifd_load_cleanup(Error **errp)
         p->packet_len = 0;
         g_free(p->packet);
         p->packet = NULL;
+        g_free(p->iov);
+        p->iov = NULL;
         multifd_recv_state->ops->recv_cleanup(p);
     }
     qemu_sem_destroy(&multifd_recv_state->sem_sync);
@@ -1158,6 +1171,7 @@ int multifd_load_setup(Error **errp)
                       + sizeof(uint64_t) * page_count;
         p->packet = g_malloc0(p->packet_len);
         p->name = g_strdup_printf("multifdrecv_%d", i);
+        p->iov = g_new0(struct iovec, page_count);
     }
 
     for (i = 0; i < thread_count; i++) {
-- 
2.33.1




* [PATCH v3 12/23] multifd: Make zlib use iov's
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (10 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 11/23] multifd: Move iov from pages to params Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-29 18:01   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 13/23] multifd: Make zstd " Juan Quintela
                   ` (11 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd-zlib.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index da6201704c..478a4af115 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -143,6 +143,9 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
         }
         out_size += available - zs->avail_out;
     }
+    p->iov[p->iovs_num].iov_base = z->zbuff;
+    p->iov[p->iovs_num].iov_len = out_size;
+    p->iovs_num++;
     p->next_packet_size = out_size;
     p->flags |= MULTIFD_FLAG_ZLIB;
 
@@ -162,10 +165,7 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
  */
 static int zlib_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
 {
-    struct zlib_data *z = p->data;
-
-    return qio_channel_write_all(p->c, (void *)z->zbuff, p->next_packet_size,
-                                 errp);
+    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
 }
 
 /**
-- 
2.33.1




* [PATCH v3 13/23] multifd: Make zstd use iov's
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (11 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 12/23] multifd: Make zlib use iov's Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-29 18:03   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 14/23] multifd: Remove send_write() method Juan Quintela
                   ` (10 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd-zstd.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 2d5b61106c..259277dc42 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -154,6 +154,9 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
             return -1;
         }
     }
+    p->iov[p->iovs_num].iov_base = z->zbuff;
+    p->iov[p->iovs_num].iov_len = z->out.pos;
+    p->iovs_num++;
     p->next_packet_size = z->out.pos;
     p->flags |= MULTIFD_FLAG_ZSTD;
 
@@ -173,10 +176,7 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
  */
 static int zstd_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
 {
-    struct zstd_data *z = p->data;
-
-    return qio_channel_write_all(p->c, (void *)z->zbuff, p->next_packet_size,
-                                 errp);
+    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
 }
 
 /**
-- 
2.33.1




* [PATCH v3 14/23] multifd: Remove send_write() method
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (12 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 13/23] multifd: Make zstd " Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-29 18:19   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 15/23] multifd: Use a single writev on the send side Juan Quintela
                   ` (9 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Everything now uses iovs.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h      |  2 --
 migration/multifd-zlib.c | 17 -----------------
 migration/multifd-zstd.c | 17 -----------------
 migration/multifd.c      | 20 ++------------------
 4 files changed, 2 insertions(+), 54 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index c3f18af364..7496f951a7 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -164,8 +164,6 @@ typedef struct {
     void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
     /* Prepare the send packet */
     int (*send_prepare)(MultiFDSendParams *p, Error **errp);
-    /* Write the send packet */
-    int (*send_write)(MultiFDSendParams *p, uint32_t used, Error **errp);
     /* Setup for receiving side */
     int (*recv_setup)(MultiFDRecvParams *p, Error **errp);
     /* Cleanup for receiving side */
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 478a4af115..f65159392a 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -152,22 +152,6 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
     return 0;
 }
 
-/**
- * zlib_send_write: do the actual write of the data
- *
- * Do the actual write of the comprresed buffer.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @used: number of pages used
- * @errp: pointer to an error
- */
-static int zlib_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
-{
-    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
-}
-
 /**
  * zlib_recv_setup: setup receive side
  *
@@ -307,7 +291,6 @@ static MultiFDMethods multifd_zlib_ops = {
     .send_setup = zlib_send_setup,
     .send_cleanup = zlib_send_cleanup,
     .send_prepare = zlib_send_prepare,
-    .send_write = zlib_send_write,
     .recv_setup = zlib_recv_setup,
     .recv_cleanup = zlib_recv_cleanup,
     .recv_pages = zlib_recv_pages
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 259277dc42..6933ba622a 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -163,22 +163,6 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
     return 0;
 }
 
-/**
- * zstd_send_write: do the actual write of the data
- *
- * Do the actual write of the comprresed buffer.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @used: number of pages used
- * @errp: pointer to an error
- */
-static int zstd_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
-{
-    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
-}
-
 /**
  * zstd_recv_setup: setup receive side
  *
@@ -320,7 +304,6 @@ static MultiFDMethods multifd_zstd_ops = {
     .send_setup = zstd_send_setup,
     .send_cleanup = zstd_send_cleanup,
     .send_prepare = zstd_send_prepare,
-    .send_write = zstd_send_write,
     .recv_setup = zstd_recv_setup,
     .recv_cleanup = zstd_recv_cleanup,
     .recv_pages = zstd_recv_pages
diff --git a/migration/multifd.c b/migration/multifd.c
index 37487fd01c..71bdef068e 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -100,22 +100,6 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
     return 0;
 }
 
-/**
- * nocomp_send_write: do the actual write of the data
- *
- * For no compression we just have to write the data.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @used: number of pages used
- * @errp: pointer to an error
- */
-static int nocomp_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
-{
-    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
-}
-
 /**
  * nocomp_recv_setup: setup receive side
  *
@@ -173,7 +157,6 @@ static MultiFDMethods multifd_nocomp_ops = {
     .send_setup = nocomp_send_setup,
     .send_cleanup = nocomp_send_cleanup,
     .send_prepare = nocomp_send_prepare,
-    .send_write = nocomp_send_write,
     .recv_setup = nocomp_recv_setup,
     .recv_cleanup = nocomp_recv_cleanup,
     .recv_pages = nocomp_recv_pages
@@ -687,7 +670,8 @@ static void *multifd_send_thread(void *opaque)
             }
 
             if (used) {
-                ret = multifd_send_state->ops->send_write(p, used, &local_err);
+                ret = qio_channel_writev_all(p->c, p->iov, p->iovs_num,
+                                             &local_err);
                 if (ret != 0) {
                     break;
                 }
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v3 15/23] multifd: Use a single writev on the send side
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (13 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 14/23] multifd: Remove send_write() method Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-29 18:35   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 16/23] multifd: Unfold "used" variable by its value Juan Quintela
                   ` (8 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Until now, we wrote the packet header with write() and the pages with
writev().  Just increase the size of the iovec by one and do a single
writev() for everything.
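
A minimal sketch of the resulting send path (names as in this series;
locking and error handling elided):

    p->iovs_num = 1;                 /* slot 0 is kept for the header */
    if (used) {
        /* the compression method fills p->iov[1..] and bumps p->iovs_num */
        ret = multifd_send_state->ops->send_prepare(p, &local_err);
    }
    p->iov[0].iov_base = p->packet;
    p->iov[0].iov_len = p->packet_len;
    /* header and pages go out in one writev() instead of write()+writev() */
    ret = qio_channel_writev_all(p->c, p->iov, p->iovs_num, &local_err);

p->iov is allocated with page_count + 1 entries to make room for the
header.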

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 71bdef068e..65676d56fd 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -643,7 +643,7 @@ static void *multifd_send_thread(void *opaque)
             uint32_t used = p->pages->num;
             uint64_t packet_num = p->packet_num;
             uint32_t flags = p->flags;
-            p->iovs_num = 0;
+            p->iovs_num = 1;
 
             if (used) {
                 ret = multifd_send_state->ops->send_prepare(p, &local_err);
@@ -663,20 +663,15 @@ static void *multifd_send_thread(void *opaque)
             trace_multifd_send(p->id, packet_num, used, flags,
                                p->next_packet_size);
 
-            ret = qio_channel_write_all(p->c, (void *)p->packet,
-                                        p->packet_len, &local_err);
+            p->iov[0].iov_len = p->packet_len;
+            p->iov[0].iov_base = p->packet;
+
+            ret = qio_channel_writev_all(p->c, p->iov, p->iovs_num,
+                                         &local_err);
             if (ret != 0) {
                 break;
             }
 
-            if (used) {
-                ret = qio_channel_writev_all(p->c, p->iov, p->iovs_num,
-                                             &local_err);
-                if (ret != 0) {
-                    break;
-                }
-            }
-
             qemu_mutex_lock(&p->mutex);
             p->pending_job--;
             qemu_mutex_unlock(&p->mutex);
@@ -913,7 +908,8 @@ int multifd_save_setup(Error **errp)
         p->packet->version = cpu_to_be32(MULTIFD_VERSION);
         p->name = g_strdup_printf("multifdsend_%d", i);
         p->tls_hostname = g_strdup(s->hostname);
-        p->iov = g_new0(struct iovec, page_count);
+        /* We need one extra place for the packet header */
+        p->iov = g_new0(struct iovec, page_count + 1);
         socket_send_channel_create(multifd_new_send_channel_async, p);
     }
 
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v3 16/23] multifd: Unfold "used" variable by its value
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (14 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 15/23] multifd: Use a single writev on the send side Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-30 10:45   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 17/23] multifd: Use normal pages array on the send side Juan Quintela
                   ` (7 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 65676d56fd..6983ba3e7c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -1059,7 +1059,6 @@ static void *multifd_recv_thread(void *opaque)
     rcu_register_thread();
 
     while (true) {
-        uint32_t used;
         uint32_t flags;
 
         if (p->quit) {
@@ -1082,17 +1081,16 @@ static void *multifd_recv_thread(void *opaque)
             break;
         }
 
-        used = p->pages->num;
         flags = p->flags;
         /* recv methods don't know how to handle the SYNC flag */
         p->flags &= ~MULTIFD_FLAG_SYNC;
-        trace_multifd_recv(p->id, p->packet_num, used, flags,
+        trace_multifd_recv(p->id, p->packet_num, p->pages->num, flags,
                            p->next_packet_size);
         p->num_packets++;
-        p->num_pages += used;
+        p->num_pages += p->pages->num;
         qemu_mutex_unlock(&p->mutex);
 
-        if (used) {
+        if (p->pages->num) {
             ret = multifd_recv_state->ops->recv_pages(p, &local_err);
             if (ret != 0) {
                 break;
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v3 17/23] multifd: Use normal pages array on the send side
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (15 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 16/23] multifd: Unfold "used" variable by its value Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-11-30 10:50   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 18/23] multifd: Use normal pages array on the recv side Juan Quintela
                   ` (6 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h      |  8 ++++++--
 migration/multifd-zlib.c |  6 +++---
 migration/multifd-zstd.c |  6 +++---
 migration/multifd.c      | 30 +++++++++++++++++++-----------
 migration/trace-events   |  4 ++--
 5 files changed, 33 insertions(+), 21 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 7496f951a7..78e73df3ec 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -104,14 +104,18 @@ typedef struct {
     /* thread local variables */
     /* packets sent through this channel */
     uint64_t num_packets;
-    /* pages sent through this channel */
-    uint64_t num_pages;
+    /* non zero pages sent through this channel */
+    uint64_t num_normal_pages;
     /* syncs main thread and channels */
     QemuSemaphore sem_sync;
     /* buffers to send */
     struct iovec *iov;
     /* number of iovs used */
     uint32_t iovs_num;
+    /* Pages that are not zero */
+    ram_addr_t *normal;
+    /* num of non zero pages */
+    uint32_t normal_num;
     /* used for compression methods */
     void *data;
 }  MultiFDSendParams;
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index f65159392a..25ef68a548 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -106,16 +106,16 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
     int ret;
     uint32_t i;
 
-    for (i = 0; i < p->pages->num; i++) {
+    for (i = 0; i < p->normal_num; i++) {
         uint32_t available = z->zbuff_len - out_size;
         int flush = Z_NO_FLUSH;
 
-        if (i == p->pages->num - 1) {
+        if (i == p->normal_num - 1) {
             flush = Z_SYNC_FLUSH;
         }
 
         zs->avail_in = page_size;
-        zs->next_in = p->pages->block->host + p->pages->offset[i];
+        zs->next_in = p->pages->block->host + p->normal[i];
 
         zs->avail_out = available;
         zs->next_out = z->zbuff + out_size;
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 6933ba622a..61842d713e 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -121,13 +121,13 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
     z->out.size = z->zbuff_len;
     z->out.pos = 0;
 
-    for (i = 0; i < p->pages->num; i++) {
+    for (i = 0; i < p->normal_num; i++) {
         ZSTD_EndDirective flush = ZSTD_e_continue;
 
-        if (i == p->pages->num - 1) {
+        if (i == p->normal_num - 1) {
             flush = ZSTD_e_flush;
         }
-        z->in.src = p->pages->block->host + p->pages->offset[i];
+        z->in.src = p->pages->block->host + p->normal[i];
         z->in.size = page_size;
         z->in.pos = 0;
 
diff --git a/migration/multifd.c b/migration/multifd.c
index 6983ba3e7c..dbe919b764 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -89,13 +89,13 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
     MultiFDPages_t *pages = p->pages;
     size_t page_size = qemu_target_page_size();
 
-    for (int i = 0; i < p->pages->num; i++) {
-        p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
+    for (int i = 0; i < p->normal_num; i++) {
+        p->iov[p->iovs_num].iov_base = pages->block->host + p->normal[i];
         p->iov[p->iovs_num].iov_len = page_size;
         p->iovs_num++;
     }
 
-    p->next_packet_size = p->pages->num * page_size;
+    p->next_packet_size = p->normal_num * page_size;
     p->flags |= MULTIFD_FLAG_NOCOMP;
     return 0;
 }
@@ -262,7 +262,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
 
     packet->flags = cpu_to_be32(p->flags);
     packet->pages_alloc = cpu_to_be32(p->pages->allocated);
-    packet->pages_used = cpu_to_be32(p->pages->num);
+    packet->pages_used = cpu_to_be32(p->normal_num);
     packet->next_packet_size = cpu_to_be32(p->next_packet_size);
     packet->packet_num = cpu_to_be64(p->packet_num);
 
@@ -270,9 +270,9 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
         strncpy(packet->ramblock, p->pages->block->idstr, 256);
     }
 
-    for (i = 0; i < p->pages->num; i++) {
+    for (i = 0; i < p->normal_num; i++) {
         /* there are architectures where ram_addr_t is 32 bit */
-        uint64_t temp = p->pages->offset[i];
+        uint64_t temp = p->normal[i];
 
         packet->offset[i] = cpu_to_be64(temp);
     }
@@ -556,6 +556,8 @@ void multifd_save_cleanup(void)
         p->packet = NULL;
         g_free(p->iov);
         p->iov = NULL;
+        g_free(p->normal);
+        p->normal = NULL;
         multifd_send_state->ops->send_cleanup(p, &local_err);
         if (local_err) {
             migrate_set_error(migrate_get_current(), local_err);
@@ -640,12 +642,17 @@ static void *multifd_send_thread(void *opaque)
         qemu_mutex_lock(&p->mutex);
 
         if (p->pending_job) {
-            uint32_t used = p->pages->num;
             uint64_t packet_num = p->packet_num;
             uint32_t flags = p->flags;
             p->iovs_num = 1;
+            p->normal_num = 0;
 
-            if (used) {
+            for (int i = 0; i < p->pages->num; i++) {
+                p->normal[p->normal_num] = p->pages->offset[i];
+                p->normal_num++;
+            }
+
+            if (p->normal_num) {
                 ret = multifd_send_state->ops->send_prepare(p, &local_err);
                 if (ret != 0) {
                     qemu_mutex_unlock(&p->mutex);
@@ -655,12 +662,12 @@ static void *multifd_send_thread(void *opaque)
             multifd_send_fill_packet(p);
             p->flags = 0;
             p->num_packets++;
-            p->num_pages += used;
+            p->num_normal_pages += p->normal_num;
             p->pages->num = 0;
             p->pages->block = NULL;
             qemu_mutex_unlock(&p->mutex);
 
-            trace_multifd_send(p->id, packet_num, used, flags,
+            trace_multifd_send(p->id, packet_num, p->normal_num, flags,
                                p->next_packet_size);
 
             p->iov[0].iov_len = p->packet_len;
@@ -710,7 +717,7 @@ out:
     qemu_mutex_unlock(&p->mutex);
 
     rcu_unregister_thread();
-    trace_multifd_send_thread_end(p->id, p->num_packets, p->num_pages);
+    trace_multifd_send_thread_end(p->id, p->num_packets, p->num_normal_pages);
 
     return NULL;
 }
@@ -910,6 +917,7 @@ int multifd_save_setup(Error **errp)
         p->tls_hostname = g_strdup(s->hostname);
         /* We need one extra place for the packet header */
         p->iov = g_new0(struct iovec, page_count + 1);
+        p->normal = g_new0(ram_addr_t, page_count);
         socket_send_channel_create(multifd_new_send_channel_async, p);
     }
 
diff --git a/migration/trace-events b/migration/trace-events
index b48d873b8a..af8dee9af0 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -124,13 +124,13 @@ multifd_recv_sync_main_wait(uint8_t id) "channel %d"
 multifd_recv_terminate_threads(bool error) "error %d"
 multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %d packets %" PRIu64 " pages %" PRIu64
 multifd_recv_thread_start(uint8_t id) "%d"
-multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " pages %d flags 0x%x next packet size %d"
+multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " normal pages %d flags 0x%x next packet size %d"
 multifd_send_error(uint8_t id) "channel %d"
 multifd_send_sync_main(long packet_num) "packet num %ld"
 multifd_send_sync_main_signal(uint8_t id) "channel %d"
 multifd_send_sync_main_wait(uint8_t id) "channel %d"
 multifd_send_terminate_threads(bool error) "error %d"
-multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %d packets %" PRIu64 " pages %"  PRIu64
+multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages) "channel %d packets %" PRIu64 " normal pages %"  PRIu64
 multifd_send_thread_start(uint8_t id) "%d"
 multifd_tls_outgoing_handshake_start(void *ioc, void *tioc, const char *hostname) "ioc=%p tioc=%p hostname=%s"
 multifd_tls_outgoing_handshake_error(void *ioc, const char *err) "ioc=%p err=%s"
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v3 18/23] multifd: Use normal pages array on the recv side
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (16 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 17/23] multifd: Use normal pages array on the send side Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-12-07  7:11   ` Peter Xu
  2021-11-24 10:06 ` [PATCH v3 19/23] multifd: recv side only needs the RAMBlock host address Juan Quintela
                   ` (5 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h      |  8 +++++--
 migration/multifd-zlib.c |  8 +++----
 migration/multifd-zstd.c |  6 +++---
 migration/multifd.c      | 45 ++++++++++++++++++----------------------
 4 files changed, 33 insertions(+), 34 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 78e73df3ec..9fbcb7bb9a 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -151,12 +151,16 @@ typedef struct {
     uint32_t next_packet_size;
     /* packets sent through this channel */
     uint64_t num_packets;
-    /* pages sent through this channel */
-    uint64_t num_pages;
+    /* non zero pages sent through this channel */
+    uint64_t num_normal_pages;
     /* syncs main thread and channels */
     QemuSemaphore sem_sync;
     /* buffers to recv */
     struct iovec *iov;
+    /* Pages that are not zero */
+    ram_addr_t *normal;
+    /* num of non zero pages */
+    uint32_t normal_num;
     /* used for de-compression methods */
     void *data;
 } MultiFDRecvParams;
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 25ef68a548..cc143b829d 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -225,7 +225,7 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
     uint32_t in_size = p->next_packet_size;
     /* we measure the change of total_out */
     uint32_t out_size = zs->total_out;
-    uint32_t expected_size = p->pages->num * qemu_target_page_size();
+    uint32_t expected_size = p->normal_num * page_size;
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
     int ret;
     int i;
@@ -244,16 +244,16 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
     zs->avail_in = in_size;
     zs->next_in = z->zbuff;
 
-    for (i = 0; i < p->pages->num; i++) {
+    for (i = 0; i < p->normal_num; i++) {
         int flush = Z_NO_FLUSH;
         unsigned long start = zs->total_out;
 
-        if (i == p->pages->num - 1) {
+        if (i == p->normal_num - 1) {
             flush = Z_SYNC_FLUSH;
         }
 
         zs->avail_out = page_size;
-        zs->next_out = p->pages->block->host + p->pages->offset[i];
+        zs->next_out = p->pages->block->host + p->normal[i];
 
         /*
          * Welcome to inflate semantics
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 61842d713e..93d504ce0f 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -242,7 +242,7 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
     uint32_t in_size = p->next_packet_size;
     uint32_t out_size = 0;
     size_t page_size = qemu_target_page_size();
-    uint32_t expected_size = p->pages->num * page_size;
+    uint32_t expected_size = p->normal_num * page_size;
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
     struct zstd_data *z = p->data;
     int ret;
@@ -263,8 +263,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
     z->in.size = in_size;
     z->in.pos = 0;
 
-    for (i = 0; i < p->pages->num; i++) {
-        z->out.dst = p->pages->block->host + p->pages->offset[i];
+    for (i = 0; i < p->normal_num; i++) {
+        z->out.dst = p->pages->block->host + p->normal[i];
         z->out.size = page_size;
         z->out.pos = 0;
 
diff --git a/migration/multifd.c b/migration/multifd.c
index dbe919b764..3ffb1aba64 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -146,11 +146,11 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
                    p->id, flags, MULTIFD_FLAG_NOCOMP);
         return -1;
     }
-    for (int i = 0; i < p->pages->num; i++) {
-        p->iov[i].iov_base = p->pages->block->host + p->pages->offset[i];
+    for (int i = 0; i < p->normal_num; i++) {
+        p->iov[i].iov_base = p->pages->block->host + p->normal[i];
         p->iov[i].iov_len = page_size;
     }
-    return qio_channel_readv_all(p->c, p->iov, p->pages->num, errp);
+    return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
 }
 
 static MultiFDMethods multifd_nocomp_ops = {
@@ -282,7 +282,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
 {
     MultiFDPacket_t *packet = p->packet;
     size_t page_size = qemu_target_page_size();
-    uint32_t pages_max = MULTIFD_PACKET_SIZE / page_size;
+    uint32_t page_count = MULTIFD_PACKET_SIZE / page_size;
     RAMBlock *block;
     int i;
 
@@ -309,33 +309,25 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
      * If we received a packet that is 100 times bigger than expected
      * just stop migration.  It is a magic number.
      */
-    if (packet->pages_alloc > pages_max * 100) {
+    if (packet->pages_alloc > page_count) {
         error_setg(errp, "multifd: received packet "
-                   "with size %d and expected a maximum size of %d",
-                   packet->pages_alloc, pages_max * 100) ;
+                   "with size %d and expected a size of %d",
+                   packet->pages_alloc, page_count) ;
         return -1;
     }
-    /*
-     * We received a packet that is bigger than expected but inside
-     * reasonable limits (see previous comment).  Just reallocate.
-     */
-    if (packet->pages_alloc > p->pages->allocated) {
-        multifd_pages_clear(p->pages);
-        p->pages = multifd_pages_init(packet->pages_alloc);
-    }
 
-    p->pages->num = be32_to_cpu(packet->pages_used);
-    if (p->pages->num > packet->pages_alloc) {
+    p->normal_num = be32_to_cpu(packet->pages_used);
+    if (p->normal_num > packet->pages_alloc) {
         error_setg(errp, "multifd: received packet "
                    "with %d pages and expected maximum pages are %d",
-                   p->pages->num, packet->pages_alloc) ;
+                   p->normal_num, packet->pages_alloc) ;
         return -1;
     }
 
     p->next_packet_size = be32_to_cpu(packet->next_packet_size);
     p->packet_num = be64_to_cpu(packet->packet_num);
 
-    if (p->pages->num == 0) {
+    if (p->normal_num == 0) {
         return 0;
     }
 
@@ -349,7 +341,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     }
 
     p->pages->block = block;
-    for (i = 0; i < p->pages->num; i++) {
+    for (i = 0; i < p->normal_num; i++) {
         uint64_t offset = be64_to_cpu(packet->offset[i]);
 
         if (offset > (block->used_length - page_size)) {
@@ -358,7 +350,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
                        offset, block->used_length);
             return -1;
         }
-        p->pages->offset[i] = offset;
+        p->normal[i] = offset;
     }
 
     return 0;
@@ -1019,6 +1011,8 @@ int multifd_load_cleanup(Error **errp)
         p->packet = NULL;
         g_free(p->iov);
         p->iov = NULL;
+        g_free(p->normal);
+        p->normal = NULL;
         multifd_recv_state->ops->recv_cleanup(p);
     }
     qemu_sem_destroy(&multifd_recv_state->sem_sync);
@@ -1092,13 +1086,13 @@ static void *multifd_recv_thread(void *opaque)
         flags = p->flags;
         /* recv methods don't know how to handle the SYNC flag */
         p->flags &= ~MULTIFD_FLAG_SYNC;
-        trace_multifd_recv(p->id, p->packet_num, p->pages->num, flags,
+        trace_multifd_recv(p->id, p->packet_num, p->normal_num, flags,
                            p->next_packet_size);
         p->num_packets++;
-        p->num_pages += p->pages->num;
+        p->num_normal_pages += p->normal_num;
         qemu_mutex_unlock(&p->mutex);
 
-        if (p->pages->num) {
+        if (p->normal_num) {
             ret = multifd_recv_state->ops->recv_pages(p, &local_err);
             if (ret != 0) {
                 break;
@@ -1120,7 +1114,7 @@ static void *multifd_recv_thread(void *opaque)
     qemu_mutex_unlock(&p->mutex);
 
     rcu_unregister_thread();
-    trace_multifd_recv_thread_end(p->id, p->num_packets, p->num_pages);
+    trace_multifd_recv_thread_end(p->id, p->num_packets, p->num_normal_pages);
 
     return NULL;
 }
@@ -1158,6 +1152,7 @@ int multifd_load_setup(Error **errp)
         p->packet = g_malloc0(p->packet_len);
         p->name = g_strdup_printf("multifdrecv_%d", i);
         p->iov = g_new0(struct iovec, page_count);
+        p->normal = g_new0(ram_addr_t, page_count);
     }
 
     for (i = 0; i < thread_count; i++) {
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v3 19/23] multifd: recv side only needs the RAMBlock host address
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (17 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 18/23] multifd: Use normal pages array on the recv side Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-12-01 18:56   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 20/23] multifd: Rename pages_used to normal_pages Juan Quintela
                   ` (4 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

This way we can remove the MultiFDPages_t from the receive side.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h      | 4 ++--
 migration/multifd-zlib.c | 2 +-
 migration/multifd-zstd.c | 2 +-
 migration/multifd.c      | 7 ++-----
 4 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 9fbcb7bb9a..ab32baebd7 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -136,8 +136,8 @@ typedef struct {
     bool running;
     /* should this thread finish */
     bool quit;
-    /* array of pages to receive */
-    MultiFDPages_t *pages;
+    /* ramblock host address */
+    uint8_t *host;
     /* packet allocated len */
     uint32_t packet_len;
     /* pointer to the packet */
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index cc143b829d..bf4d87fa16 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -253,7 +253,7 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
         }
 
         zs->avail_out = page_size;
-        zs->next_out = p->pages->block->host + p->normal[i];
+        zs->next_out = p->host + p->normal[i];
 
         /*
          * Welcome to inflate semantics
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 93d504ce0f..dd64ac3227 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -264,7 +264,7 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
     z->in.pos = 0;
 
     for (i = 0; i < p->normal_num; i++) {
-        z->out.dst = p->pages->block->host + p->normal[i];
+        z->out.dst = p->host + p->normal[i];
         z->out.size = page_size;
         z->out.pos = 0;
 
diff --git a/migration/multifd.c b/migration/multifd.c
index 3ffb1aba64..dc76322137 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -147,7 +147,7 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
         return -1;
     }
     for (int i = 0; i < p->normal_num; i++) {
-        p->iov[i].iov_base = p->pages->block->host + p->normal[i];
+        p->iov[i].iov_base = p->host + p->normal[i];
         p->iov[i].iov_len = page_size;
     }
     return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
@@ -340,7 +340,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
         return -1;
     }
 
-    p->pages->block = block;
+    p->host = block->host;
     for (i = 0; i < p->normal_num; i++) {
         uint64_t offset = be64_to_cpu(packet->offset[i]);
 
@@ -1004,8 +1004,6 @@ int multifd_load_cleanup(Error **errp)
         qemu_sem_destroy(&p->sem_sync);
         g_free(p->name);
         p->name = NULL;
-        multifd_pages_clear(p->pages);
-        p->pages = NULL;
         p->packet_len = 0;
         g_free(p->packet);
         p->packet = NULL;
@@ -1146,7 +1144,6 @@ int multifd_load_setup(Error **errp)
         qemu_sem_init(&p->sem_sync, 0);
         p->quit = false;
         p->id = i;
-        p->pages = multifd_pages_init(page_count);
         p->packet_len = sizeof(MultiFDPacket_t)
                       + sizeof(uint64_t) * page_count;
         p->packet = g_malloc0(p->packet_len);
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v3 20/23] multifd: Rename pages_used to normal_pages
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (18 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 19/23] multifd: recv side only needs the RAMBlock host address Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-12-01 19:00   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 21/23] multifd: Support for zero pages transmission Juan Quintela
                   ` (3 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h | 3 ++-
 migration/multifd.c | 4 ++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index ab32baebd7..39e55d7f05 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -44,7 +44,8 @@ typedef struct {
     uint32_t flags;
     /* maximum number of allocated pages */
     uint32_t pages_alloc;
-    uint32_t pages_used;
+    /* non zero pages */
+    uint32_t normal_pages;
     /* size of the next packet that contains pages */
     uint32_t next_packet_size;
     uint64_t packet_num;
diff --git a/migration/multifd.c b/migration/multifd.c
index dc76322137..d1ab823f98 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -262,7 +262,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
 
     packet->flags = cpu_to_be32(p->flags);
     packet->pages_alloc = cpu_to_be32(p->pages->allocated);
-    packet->pages_used = cpu_to_be32(p->normal_num);
+    packet->normal_pages = cpu_to_be32(p->normal_num);
     packet->next_packet_size = cpu_to_be32(p->next_packet_size);
     packet->packet_num = cpu_to_be64(p->packet_num);
 
@@ -316,7 +316,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
         return -1;
     }
 
-    p->normal_num = be32_to_cpu(packet->pages_used);
+    p->normal_num = be32_to_cpu(packet->normal_pages);
     if (p->normal_num > packet->pages_alloc) {
         error_setg(errp, "multifd: received packet "
                    "with %d pages and expected maximum pages are %d",
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v3 21/23] multifd: Support for zero pages transmission
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (19 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 20/23] multifd: Rename pages_used to normal_pages Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-12-02 11:36   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 22/23] multifd: Zero " Juan Quintela
                   ` (2 subsequent siblings)
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

This patch adds the counters, the packet field and the arrays needed
to track zero pages.  The detection logic will be added in the
following patch.
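
For reference, a sketch of the on-wire packet header after this patch
(names as in the series):

    typedef struct {
        /* (magic and version fields elided in this sketch) */
        uint32_t flags;
        /* maximum number of allocated pages */
        uint32_t pages_alloc;
        /* non zero pages */
        uint32_t normal_pages;
        /* size of the next packet that contains pages */
        uint32_t next_packet_size;
        uint64_t packet_num;
        /* zero pages */
        uint32_t zero_pages;
        uint32_t unused32[1];    /* Reserved for future use */
        uint64_t unused64[3];    /* Reserved for future use */
        char ramblock[256];
        /* normal page offsets first; zero page offsets get appended
         * after them in the following patch */
        uint64_t offset[];
    } __attribute__((packed)) MultiFDPacket_t;

The packet size is unchanged: one uint64_t of the old unused[4] is
split into zero_pages plus a 32-bit reserved slot.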

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h    | 13 ++++++++++++-
 migration/multifd.c    | 22 +++++++++++++++++++---
 migration/trace-events |  2 +-
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 39e55d7f05..973315b545 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -49,7 +49,10 @@ typedef struct {
     /* size of the next packet that contains pages */
     uint32_t next_packet_size;
     uint64_t packet_num;
-    uint64_t unused[4];    /* Reserved for future use */
+    /* zero pages */
+    uint32_t zero_pages;
+    uint32_t unused32[1];    /* Reserved for future use */
+    uint64_t unused64[3];    /* Reserved for future use */
     char ramblock[256];
     uint64_t offset[];
 } __attribute__((packed)) MultiFDPacket_t;
@@ -117,6 +120,10 @@ typedef struct {
     ram_addr_t *normal;
     /* num of non zero pages */
     uint32_t normal_num;
+    /* Pages that are  zero */
+    ram_addr_t *zero;
+    /* num of zero pages */
+    uint32_t zero_num;
     /* used for compression methods */
     void *data;
 }  MultiFDSendParams;
@@ -162,6 +169,10 @@ typedef struct {
     ram_addr_t *normal;
     /* num of non zero pages */
     uint32_t normal_num;
+    /* Pages that are  zero */
+    ram_addr_t *zero;
+    /* num of zero pages */
+    uint32_t zero_num;
     /* used for de-compression methods */
     void *data;
 } MultiFDRecvParams;
diff --git a/migration/multifd.c b/migration/multifd.c
index d1ab823f98..2e4dffd6c6 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -265,6 +265,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
     packet->normal_pages = cpu_to_be32(p->normal_num);
     packet->next_packet_size = cpu_to_be32(p->next_packet_size);
     packet->packet_num = cpu_to_be64(p->packet_num);
+    packet->zero_pages = cpu_to_be32(p->zero_num);
 
     if (p->pages->block) {
         strncpy(packet->ramblock, p->pages->block->idstr, 256);
@@ -327,7 +328,15 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     p->next_packet_size = be32_to_cpu(packet->next_packet_size);
     p->packet_num = be64_to_cpu(packet->packet_num);
 
-    if (p->normal_num == 0) {
+    p->zero_num = be32_to_cpu(packet->zero_pages);
+    if (p->zero_num > packet->pages_alloc - p->normal_num) {
+        error_setg(errp, "multifd: received packet "
+                   "with %d zero pages and expected maximum pages are %d",
+                   p->normal_num, packet->pages_alloc - p->zero_num) ;
+        return -1;
+    }
+
+    if (p->normal_num == 0 && p->zero_num == 0) {
         return 0;
     }
 
@@ -550,6 +559,8 @@ void multifd_save_cleanup(void)
         p->iov = NULL;
         g_free(p->normal);
         p->normal = NULL;
+        g_free(p->zero);
+        p->zero = NULL;
         multifd_send_state->ops->send_cleanup(p, &local_err);
         if (local_err) {
             migrate_set_error(migrate_get_current(), local_err);
@@ -638,6 +649,7 @@ static void *multifd_send_thread(void *opaque)
             uint32_t flags = p->flags;
             p->iovs_num = 1;
             p->normal_num = 0;
+            p->zero_num = 0;
 
             for (int i = 0; i < p->pages->num; i++) {
                 p->normal[p->normal_num] = p->pages->offset[i];
@@ -659,8 +671,8 @@ static void *multifd_send_thread(void *opaque)
             p->pages->block = NULL;
             qemu_mutex_unlock(&p->mutex);
 
-            trace_multifd_send(p->id, packet_num, p->normal_num, flags,
-                               p->next_packet_size);
+            trace_multifd_send(p->id, packet_num, p->normal_num, p->zero_num,
+                               flags, p->next_packet_size);
 
             p->iov[0].iov_len = p->packet_len;
             p->iov[0].iov_base = p->packet;
@@ -910,6 +922,7 @@ int multifd_save_setup(Error **errp)
         /* We need one extra place for the packet header */
         p->iov = g_new0(struct iovec, page_count + 1);
         p->normal = g_new0(ram_addr_t, page_count);
+        p->zero = g_new0(ram_addr_t, page_count);
         socket_send_channel_create(multifd_new_send_channel_async, p);
     }
 
@@ -1011,6 +1024,8 @@ int multifd_load_cleanup(Error **errp)
         p->iov = NULL;
         g_free(p->normal);
         p->normal = NULL;
+        g_free(p->zero);
+        p->zero = NULL;
         multifd_recv_state->ops->recv_cleanup(p);
     }
     qemu_sem_destroy(&multifd_recv_state->sem_sync);
@@ -1150,6 +1165,7 @@ int multifd_load_setup(Error **errp)
         p->name = g_strdup_printf("multifdrecv_%d", i);
         p->iov = g_new0(struct iovec, page_count);
         p->normal = g_new0(ram_addr_t, page_count);
+        p->zero = g_new0(ram_addr_t, page_count);
     }
 
     for (i = 0; i < thread_count; i++) {
diff --git a/migration/trace-events b/migration/trace-events
index af8dee9af0..608decbdcc 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -124,7 +124,7 @@ multifd_recv_sync_main_wait(uint8_t id) "channel %d"
 multifd_recv_terminate_threads(bool error) "error %d"
 multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %d packets %" PRIu64 " pages %" PRIu64
 multifd_recv_thread_start(uint8_t id) "%d"
-multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " normal pages %d flags 0x%x next packet size %d"
+multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t zero, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " normal pages %d zero pages %d flags 0x%x next packet size %d"
 multifd_send_error(uint8_t id) "channel %d"
 multifd_send_sync_main(long packet_num) "packet num %ld"
 multifd_send_sync_main_signal(uint8_t id) "channel %d"
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v3 22/23] multifd: Zero pages transmission
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (20 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 21/23] multifd: Support for zero pages transmission Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-12-02 16:42   ` Dr. David Alan Gilbert
  2021-11-24 10:06 ` [PATCH v3 23/23] migration: Use multifd before we check for the zero page Juan Quintela
  2021-11-24 10:24 ` [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Peter Xu
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

This implements the zero page detection and handling.
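
In short, a sketch of both sides using the names from this series (the
actual hunks follow):

    size_t page_size = qemu_target_page_size();

    /* send side: split the queued offsets into zero and normal pages */
    for (int i = 0; i < p->pages->num; i++) {
        ram_addr_t offset = p->pages->offset[i];

        if (buffer_is_zero(p->pages->block->host + offset, page_size)) {
            p->zero[p->zero_num++] = offset;
        } else {
            p->normal[p->normal_num++] = offset;
        }
    }

    /* recv side: zero pages are never read from the channel, they are
     * just cleared in place */
    for (int i = 0; i < p->zero_num; i++) {
        memset(p->host + p->zero[i], 0, page_size);
    }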

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 2e4dffd6c6..5c1fc70ce3 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -11,6 +11,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/cutils.h"
 #include "qemu/rcu.h"
 #include "exec/target_page.h"
 #include "sysemu/sysemu.h"
@@ -277,6 +278,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
 
         packet->offset[i] = cpu_to_be64(temp);
     }
+    for (i = 0; i < p->zero_num; i++) {
+        /* there are architectures where ram_addr_t is 32 bit */
+        uint64_t temp = p->zero[i];
+
+        packet->offset[p->normal_num + i] = cpu_to_be64(temp);
+    }
 }
 
 static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
@@ -362,6 +369,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
         p->normal[i] = offset;
     }
 
+    for (i = 0; i < p->zero_num; i++) {
+        uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
+
+        if (offset > (block->used_length - page_size)) {
+            error_setg(errp, "multifd: offset too long %" PRIu64
+                       " (max " RAM_ADDR_FMT ")",
+                       offset, block->used_length);
+            return -1;
+        }
+        p->zero[i] = offset;
+    }
+
     return 0;
 }
 
@@ -652,8 +671,14 @@ static void *multifd_send_thread(void *opaque)
             p->zero_num = 0;
 
             for (int i = 0; i < p->pages->num; i++) {
-                p->normal[p->normal_num] = p->pages->offset[i];
-                p->normal_num++;
+                if (buffer_is_zero(p->pages->block->host + p->pages->offset[i],
+                                   qemu_target_page_size())) {
+                    p->zero[p->zero_num] = p->pages->offset[i];
+                    p->zero_num++;
+                } else {
+                    p->normal[p->normal_num] = p->pages->offset[i];
+                    p->normal_num++;
+                }
             }
 
             if (p->normal_num) {
@@ -1112,6 +1137,10 @@ static void *multifd_recv_thread(void *opaque)
             }
         }
 
+        for (int i = 0; i < p->zero_num; i++) {
+            memset(p->host + p->zero[i], 0, qemu_target_page_size());
+        }
+
         if (flags & MULTIFD_FLAG_SYNC) {
             qemu_sem_post(&multifd_recv_state->sem_sync);
             qemu_sem_wait(&p->sem_sync);
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v3 23/23] migration: Use multifd before we check for the zero page
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (21 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 22/23] multifd: Zero " Juan Quintela
@ 2021-11-24 10:06 ` Juan Quintela
  2021-12-02 17:11   ` Dr. David Alan Gilbert
  2021-11-24 10:24 ` [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Peter Xu
  23 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-24 10:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Juan Quintela

So we use multifd to transmit zero pages.
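
A sketch of the resulting order in ram_save_target_page() (as in the
hunks below):

    /* With multifd enabled (and neither compression nor postcopy), hand
     * the page to the multifd threads before the main-thread zero page
     * check runs; the threads now detect and send zero pages themselves. */
    if (!save_page_use_compression(rs) && migrate_use_multifd()
        && !migration_in_postcopy()) {
        return ram_save_multifd_page(rs, block, offset);
    }

    /* the non-multifd paths keep the old order: control_save_page(),
     * zero page detection, then ram_save_page() */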

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 57efa67f20..3ae094f653 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2138,6 +2138,17 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
     ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
     int res;
 
+    /*
+     * Do not use multifd for:
+     * 1. Compression as the first page in the new block should be posted out
+     *    before sending the compressed page
+     * 2. In postcopy as one whole host page should be placed
+     */
+    if (!save_page_use_compression(rs) && migrate_use_multifd()
+        && !migration_in_postcopy()) {
+        return ram_save_multifd_page(rs, block, offset);
+    }
+
     if (control_save_page(rs, block, offset, &res)) {
         return res;
     }
@@ -2160,17 +2171,6 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
         return res;
     }
 
-    /*
-     * Do not use multifd for:
-     * 1. Compression as the first page in the new block should be posted out
-     *    before sending the compressed page
-     * 2. In postcopy as one whole host page should be placed
-     */
-    if (!save_page_use_compression(rs) && migrate_use_multifd()
-        && !migration_in_postcopy()) {
-        return ram_save_multifd_page(rs, block, offset);
-    }
-
     return ram_save_page(rs, pss, last_stage);
 }
 
-- 
2.33.1



^ permalink raw reply related	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads
  2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (22 preceding siblings ...)
  2021-11-24 10:06 ` [PATCH v3 23/23] migration: Use multifd before we check for the zero page Juan Quintela
@ 2021-11-24 10:24 ` Peter Xu
  23 siblings, 0 replies; 72+ messages in thread
From: Peter Xu @ 2021-11-24 10:24 UTC (permalink / raw)
  To: Juan Quintela
  Cc: Paolo Bonzini, Leonardo Bras, qemu-devel, Dr. David Alan Gilbert

On Wed, Nov 24, 2021 at 11:05:54AM +0100, Juan Quintela wrote:
> Hi
> 
> Trying with a different server.
> As it used to happen, when I sent everything only to me, everything worked.
> 
> Sorry folks.
> 
> [v2]
> This is a rebase against last master.
> 
> And the reason for resend is to configure properly git-publish and
> hope this time that git-publish send all the patches.

I do suffer from this too.  I normally use the git-publish parameters
"-S" plus "-R" together when it happens, and then in the interactive
console selectively send the leftover patches to complete the previous
attempt.

I think it's not a bug in git-publish, but git-send-email will fail
with an SMTP error.  I'm no expert on that, but IIRC the last time we
discussed it Paolo mentioned it could be a git-send-email bug.  Copying
Paolo in case there's any further clue out of it.

> 
> Please, review.
> 
> [v1]
> Since Friday version:
> - More cleanups on the code
> - Remove repeated calls to qemu_target_page_size()
> - Establish normal pages and zero pages
> - detect zero pages on the multifd threads
> - send zero pages through the multifd channels.
> - reviews by Richard addressed.
> 
> It pases migration-test, so it should be perfect O:+)

OK I "agree". :-D

Besides, shall we try measuring some real workloads?  E.g. the total
migration time of an idle guest doing an 8-channel multifd migration
with/without the patchset?  I'd expect a huge speedup there even with a
low-speed NIC, and it would be great to verify it.

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 01/23] multifd: Delete useless operation
  2021-11-24 10:05 ` [PATCH v3 01/23] multifd: Delete useless operation Juan Quintela
@ 2021-11-24 18:48   ` Dr. David Alan Gilbert
  2021-11-25  7:24     ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-24 18:48 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> We are divining by page_size to multiply again in the only use.
             ^--- typo
> Once there, impreve the comments.
                  ^--- typo
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

OK, with the typo's fixed:

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

but, could you also explain the  x 2 (that's no worse than the current
code); is this defined somewhere in zlib?  I thought there was a routine
that told you the worst case?

Dave
> ---
>  migration/multifd-zlib.c | 13 ++++---------
>  migration/multifd-zstd.c | 13 ++++---------
>  2 files changed, 8 insertions(+), 18 deletions(-)
> 
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index ab4ba75d75..3fc7813b44 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -42,7 +42,6 @@ struct zlib_data {
>   */
>  static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
>  {
> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      struct zlib_data *z = g_malloc0(sizeof(struct zlib_data));
>      z_stream *zs = &z->zs;
>  
> @@ -54,9 +53,8 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
>          error_setg(errp, "multifd %d: deflate init failed", p->id);
>          return -1;
>      }
> -    /* We will never have more than page_count pages */
> -    z->zbuff_len = page_count * qemu_target_page_size();
> -    z->zbuff_len *= 2;
> +    /* To be safe, we reserve twice the size of the packet */
> +    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
>      z->zbuff = g_try_malloc(z->zbuff_len);
>      if (!z->zbuff) {
>          deflateEnd(&z->zs);
> @@ -180,7 +178,6 @@ static int zlib_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
>   */
>  static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
>  {
> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      struct zlib_data *z = g_malloc0(sizeof(struct zlib_data));
>      z_stream *zs = &z->zs;
>  
> @@ -194,10 +191,8 @@ static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
>          error_setg(errp, "multifd %d: inflate init failed", p->id);
>          return -1;
>      }
> -    /* We will never have more than page_count pages */
> -    z->zbuff_len = page_count * qemu_target_page_size();
> -    /* We know compression "could" use more space */
> -    z->zbuff_len *= 2;
> +    /* To be safe, we reserve twice the size of the packet */
> +    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
>      z->zbuff = g_try_malloc(z->zbuff_len);
>      if (!z->zbuff) {
>          inflateEnd(zs);
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index 693bddf8c9..cc3b8869c0 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -47,7 +47,6 @@ struct zstd_data {
>   */
>  static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
>  {
> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      struct zstd_data *z = g_new0(struct zstd_data, 1);
>      int res;
>  
> @@ -67,9 +66,8 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
>                     p->id, ZSTD_getErrorName(res));
>          return -1;
>      }
> -    /* We will never have more than page_count pages */
> -    z->zbuff_len = page_count * qemu_target_page_size();
> -    z->zbuff_len *= 2;
> +    /* To be safe, we reserve twice the size of the packet */
> +    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
>      z->zbuff = g_try_malloc(z->zbuff_len);
>      if (!z->zbuff) {
>          ZSTD_freeCStream(z->zcs);
> @@ -191,7 +189,6 @@ static int zstd_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
>   */
>  static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
>  {
> -    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      struct zstd_data *z = g_new0(struct zstd_data, 1);
>      int ret;
>  
> @@ -212,10 +209,8 @@ static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
>          return -1;
>      }
>  
> -    /* We will never have more than page_count pages */
> -    z->zbuff_len = page_count * qemu_target_page_size();
> -    /* We know compression "could" use more space */
> -    z->zbuff_len *= 2;
> +    /* To be safe, we reserve twice the size of the packet */
> +    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
>      z->zbuff = g_try_malloc(z->zbuff_len);
>      if (!z->zbuff) {
>          ZSTD_freeDStream(z->zds);
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 02/23] migration: Never call twice qemu_target_page_size()
  2021-11-24 10:05 ` [PATCH v3 02/23] migration: Never call twice qemu_target_page_size() Juan Quintela
@ 2021-11-24 18:52   ` Dr. David Alan Gilbert
  2021-11-25  7:26     ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-24 18:52 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

OK, not much difference

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/migration.c | 7 ++++---
>  migration/multifd.c   | 7 ++++---
>  migration/savevm.c    | 5 +++--
>  3 files changed, 11 insertions(+), 8 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 2c1edb2cb9..3de11ae921 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -996,6 +996,8 @@ static void populate_time_info(MigrationInfo *info, MigrationState *s)
>  
>  static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>  {
> +    size_t page_size = qemu_target_page_size();
> +
>      info->has_ram = true;
>      info->ram = g_malloc0(sizeof(*info->ram));
>      info->ram->transferred = ram_counters.transferred;
> @@ -1004,12 +1006,11 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>      /* legacy value.  It is not used anymore */
>      info->ram->skipped = 0;
>      info->ram->normal = ram_counters.normal;
> -    info->ram->normal_bytes = ram_counters.normal *
> -        qemu_target_page_size();
> +    info->ram->normal_bytes = ram_counters.normal * page_size;
>      info->ram->mbps = s->mbps;
>      info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
>      info->ram->postcopy_requests = ram_counters.postcopy_requests;
> -    info->ram->page_size = qemu_target_page_size();
> +    info->ram->page_size = page_size;
>      info->ram->multifd_bytes = ram_counters.multifd_bytes;
>      info->ram->pages_per_second = s->pages_per_second;
>  
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 7c9deb1921..8125d0015c 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -289,7 +289,8 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>  {
>      MultiFDPacket_t *packet = p->packet;
> -    uint32_t pages_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
> +    size_t page_size = qemu_target_page_size();
> +    uint32_t pages_max = MULTIFD_PACKET_SIZE / page_size;
>      RAMBlock *block;
>      int i;
>  
> @@ -358,14 +359,14 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>      for (i = 0; i < p->pages->used; i++) {
>          uint64_t offset = be64_to_cpu(packet->offset[i]);
>  
> -        if (offset > (block->used_length - qemu_target_page_size())) {
> +        if (offset > (block->used_length - page_size)) {
>              error_setg(errp, "multifd: offset too long %" PRIu64
>                         " (max " RAM_ADDR_FMT ")",
>                         offset, block->used_length);
>              return -1;
>          }
>          p->pages->iov[i].iov_base = block->host + offset;
> -        p->pages->iov[i].iov_len = qemu_target_page_size();
> +        p->pages->iov[i].iov_len = page_size;
>      }
>  
>      return 0;
> diff --git a/migration/savevm.c b/migration/savevm.c
> index d59e976d50..0bef031acb 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1685,6 +1685,7 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis,
>  {
>      PostcopyState ps = postcopy_state_set(POSTCOPY_INCOMING_ADVISE);
>      uint64_t remote_pagesize_summary, local_pagesize_summary, remote_tps;
> +    size_t page_size = qemu_target_page_size();
>      Error *local_err = NULL;
>  
>      trace_loadvm_postcopy_handle_advise();
> @@ -1741,13 +1742,13 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis,
>      }
>  
>      remote_tps = qemu_get_be64(mis->from_src_file);
> -    if (remote_tps != qemu_target_page_size()) {
> +    if (remote_tps != page_size) {
>          /*
>           * Again, some differences could be dealt with, but for now keep it
>           * simple.
>           */
>          error_report("Postcopy needs matching target page sizes (s=%d d=%zd)",
> -                     (int)remote_tps, qemu_target_page_size());
> +                     (int)remote_tps, page_size);
>          return -1;
>      }
>  
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 03/23] multifd: Rename used field to num
  2021-11-24 10:05 ` [PATCH v3 03/23] multifd: Rename used field to num Juan Quintela
@ 2021-11-24 19:37   ` Dr. David Alan Gilbert
  2021-11-25  7:28     ` Juan Quintela
  2021-12-13  9:34   ` Zheng Chuan via
  1 sibling, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-24 19:37 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> We will need to split it later in zero_num (number of zero pages) and
> normal_num (number of normal pages).  This name is better.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/multifd.h |  2 +-
>  migration/multifd.c | 38 +++++++++++++++++++-------------------
>  2 files changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 15c50ca0b2..86820dd028 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -55,7 +55,7 @@ typedef struct {
>  
>  typedef struct {
>      /* number of used pages */
> -    uint32_t used;
> +    uint32_t num;

What does 'used' actually mean here?

Dave

>      /* number of allocated pages */
>      uint32_t allocated;
>      /* global number of generated multifd packets */
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 8125d0015c..8ea86d81dc 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -252,7 +252,7 @@ static MultiFDPages_t *multifd_pages_init(size_t size)
>  
>  static void multifd_pages_clear(MultiFDPages_t *pages)
>  {
> -    pages->used = 0;
> +    pages->num = 0;
>      pages->allocated = 0;
>      pages->packet_num = 0;
>      pages->block = NULL;
> @@ -270,7 +270,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  
>      packet->flags = cpu_to_be32(p->flags);
>      packet->pages_alloc = cpu_to_be32(p->pages->allocated);
> -    packet->pages_used = cpu_to_be32(p->pages->used);
> +    packet->pages_used = cpu_to_be32(p->pages->num);
>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>      packet->packet_num = cpu_to_be64(p->packet_num);
>  
> @@ -278,7 +278,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>          strncpy(packet->ramblock, p->pages->block->idstr, 256);
>      }
>  
> -    for (i = 0; i < p->pages->used; i++) {
> +    for (i = 0; i < p->pages->num; i++) {
>          /* there are architectures where ram_addr_t is 32 bit */
>          uint64_t temp = p->pages->offset[i];
>  
> @@ -332,18 +332,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>          p->pages = multifd_pages_init(packet->pages_alloc);
>      }
>  
> -    p->pages->used = be32_to_cpu(packet->pages_used);
> -    if (p->pages->used > packet->pages_alloc) {
> +    p->pages->num = be32_to_cpu(packet->pages_used);
> +    if (p->pages->num > packet->pages_alloc) {
>          error_setg(errp, "multifd: received packet "
>                     "with %d pages and expected maximum pages are %d",
> -                   p->pages->used, packet->pages_alloc) ;
> +                   p->pages->num, packet->pages_alloc) ;
>          return -1;
>      }
>  
>      p->next_packet_size = be32_to_cpu(packet->next_packet_size);
>      p->packet_num = be64_to_cpu(packet->packet_num);
>  
> -    if (p->pages->used == 0) {
> +    if (p->pages->num == 0) {
>          return 0;
>      }
>  
> @@ -356,7 +356,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>          return -1;
>      }
>  
> -    for (i = 0; i < p->pages->used; i++) {
> +    for (i = 0; i < p->pages->num; i++) {
>          uint64_t offset = be64_to_cpu(packet->offset[i]);
>  
>          if (offset > (block->used_length - page_size)) {
> @@ -443,13 +443,13 @@ static int multifd_send_pages(QEMUFile *f)
>          }
>          qemu_mutex_unlock(&p->mutex);
>      }
> -    assert(!p->pages->used);
> +    assert(!p->pages->num);
>      assert(!p->pages->block);
>  
>      p->packet_num = multifd_send_state->packet_num++;
>      multifd_send_state->pages = p->pages;
>      p->pages = pages;
> -    transferred = ((uint64_t) pages->used) * qemu_target_page_size()
> +    transferred = ((uint64_t) pages->num) * qemu_target_page_size()
>                  + p->packet_len;
>      qemu_file_update_transfer(f, transferred);
>      ram_counters.multifd_bytes += transferred;
> @@ -469,12 +469,12 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
>      }
>  
>      if (pages->block == block) {
> -        pages->offset[pages->used] = offset;
> -        pages->iov[pages->used].iov_base = block->host + offset;
> -        pages->iov[pages->used].iov_len = qemu_target_page_size();
> -        pages->used++;
> +        pages->offset[pages->num] = offset;
> +        pages->iov[pages->num].iov_base = block->host + offset;
> +        pages->iov[pages->num].iov_len = qemu_target_page_size();
> +        pages->num++;
>  
> -        if (pages->used < pages->allocated) {
> +        if (pages->num < pages->allocated) {
>              return 1;
>          }
>      }
> @@ -586,7 +586,7 @@ void multifd_send_sync_main(QEMUFile *f)
>      if (!migrate_use_multifd()) {
>          return;
>      }
> -    if (multifd_send_state->pages->used) {
> +    if (multifd_send_state->pages->num) {
>          if (multifd_send_pages(f) < 0) {
>              error_report("%s: multifd_send_pages fail", __func__);
>              return;
> @@ -649,7 +649,7 @@ static void *multifd_send_thread(void *opaque)
>          qemu_mutex_lock(&p->mutex);
>  
>          if (p->pending_job) {
> -            uint32_t used = p->pages->used;
> +            uint32_t used = p->pages->num;
>              uint64_t packet_num = p->packet_num;
>              flags = p->flags;
>  
> @@ -665,7 +665,7 @@ static void *multifd_send_thread(void *opaque)
>              p->flags = 0;
>              p->num_packets++;
>              p->num_pages += used;
> -            p->pages->used = 0;
> +            p->pages->num = 0;
>              p->pages->block = NULL;
>              qemu_mutex_unlock(&p->mutex);
>  
> @@ -1091,7 +1091,7 @@ static void *multifd_recv_thread(void *opaque)
>              break;
>          }
>  
> -        used = p->pages->used;
> +        used = p->pages->num;
>          flags = p->flags;
>          /* recv methods don't know how to handle the SYNC flag */
>          p->flags &= ~MULTIFD_FLAG_SYNC;
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 01/23] multifd: Delete useless operation
  2021-11-24 18:48   ` Dr. David Alan Gilbert
@ 2021-11-25  7:24     ` Juan Quintela
  2021-11-25 19:46       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-25  7:24 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We are divining by page_size to multiply again in the only use.
>              ^--- typo
>> Once there, impreve the comments.
>                   ^--- typo
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> OK, with the typo's fixed:

Thanks.

> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
> but, could you also explain the  x 2 (that's no worse than the current
> code); is this defined somewhere in zlib?  I thought there was a routine
> that told you the worst case?

Nowhere.

There are pathological cases where it can be worse.  It is not clear at
all by how much (OK, for zlib it appears to be on the order of a dozen
bytes, because it marks the data as uncompressed in the worst possible
case).  For zstd, there is no clear/fast answer when you google.

As this buffer is held for the whole migration, one per thread, this
looked safe to me.  Notice that we are compressing 128 pages at a time,
so for it not to compress anything at all looks very pathological.

But as they say, better safe than sorry.

If anyone who knows more about zlib/zstd gives me different values, I
will change that in an additional patch.

Later, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 02/23] migration: Never call twice qemu_target_page_size()
  2021-11-24 18:52   ` Dr. David Alan Gilbert
@ 2021-11-25  7:26     ` Juan Quintela
  0 siblings, 0 replies; 72+ messages in thread
From: Juan Quintela @ 2021-11-25  7:26 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> OK, not much difference

It was after "finishing" the series that I realised I was calling that
function around 30 times or so in those three files.  And as Richard
complained when I put it inside a loop, I just decided to optimize them
all at once.

O:-)

> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Thanks, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 03/23] multifd: Rename used field to num
  2021-11-24 19:37   ` Dr. David Alan Gilbert
@ 2021-11-25  7:28     ` Juan Quintela
  2021-11-25 18:30       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-25  7:28 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We will need to split it later in zero_num (number of zero pages) and
>> normal_num (number of normal pages).  This name is better.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/multifd.h |  2 +-
>>  migration/multifd.c | 38 +++++++++++++++++++-------------------
>>  2 files changed, 20 insertions(+), 20 deletions(-)
>> 
>> diff --git a/migration/multifd.h b/migration/multifd.h
>> index 15c50ca0b2..86820dd028 100644
>> --- a/migration/multifd.h
>> +++ b/migration/multifd.h
>> @@ -55,7 +55,7 @@ typedef struct {
>>  
>>  typedef struct {
>>      /* number of used pages */
>> -    uint32_t used;
>> +    uint32_t num;
>
> What does 'used' actually mean here?

We allocate 128 pages for each "packet".
But we can end up handling fewer than that (we are at the end of one
iteration, the end of a ramblock, ...).
That is what used means.

But later in the series, we end up with normal pages and zero pages, and
the naming gets really confusing.  So I moved to use *_num for everything.

Even after the whole series, I didn't rename everything in multifd, only
the fields that I have to use sooner or later.
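
As a toy illustration of that invariant (the names mirror
MultiFDPages_t, but this is only a sketch, not the QEMU code):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t num;        /* pages queued so far in this packet */
        uint32_t allocated;  /* capacity: 128 pages per packet */
    } ToyPages;

    /* returns true while there is still room; once it returns false the
     * packet has to be flushed and num reset to 0 */
    static bool toy_queue_page(ToyPages *pages)
    {
        assert(pages->num < pages->allocated);
        pages->num++;
        return pages->num < pages->allocated;
    }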

Later, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 03/23] multifd: Rename used field to num
  2021-11-25  7:28     ` Juan Quintela
@ 2021-11-25 18:30       ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 18:30 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> We will need to split it later in zero_num (number of zero pages) and
> >> normal_num (number of normal pages).  This name is better.
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> ---
> >>  migration/multifd.h |  2 +-
> >>  migration/multifd.c | 38 +++++++++++++++++++-------------------
> >>  2 files changed, 20 insertions(+), 20 deletions(-)
> >> 
> >> diff --git a/migration/multifd.h b/migration/multifd.h
> >> index 15c50ca0b2..86820dd028 100644
> >> --- a/migration/multifd.h
> >> +++ b/migration/multifd.h
> >> @@ -55,7 +55,7 @@ typedef struct {
> >>  
> >>  typedef struct {
> >>      /* number of used pages */
> >> -    uint32_t used;
> >> +    uint32_t num;
> >
> > What does 'used' actually mean here?
> 
> We allocate 128 pages for each "packet".
> But we can end up handling fewer than that (we are at the end of one
> iteration, the end of a ramblock, ...).
> That is what used means.
> 
> But later in the series, we end up with normal pages and zero pages, and
> the naming gets really confusing.  So I moved to use *_num for everything.
> 
> Even after the whole series, I didn't rename everything in multifd, only
> the fields that I have to use sooner or later.

Hmm, OK, I'm not sure 'num' is much better than 'used', but OK.


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> Later, Juan.
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 04/23] multifd: Add missing documention
  2021-11-24 10:05 ` [PATCH v3 04/23] multifd: Add missing documention Juan Quintela
@ 2021-11-25 18:38   ` Dr. David Alan Gilbert
  2021-11-26  9:34     ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 18:38 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Pretty obvious, but I guess to have the complete set of comments:


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd-zlib.c | 2 ++
>  migration/multifd-zstd.c | 2 ++
>  migration/multifd.c      | 1 +
>  3 files changed, 5 insertions(+)
> 
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index 3fc7813b44..d0437cce2a 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -72,6 +72,7 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
>   * Close the channel and return memory.
>   *
>   * @p: Params for the channel that we are using
> + * @errp: pointer to an error
>   */
>  static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
>  {
> @@ -94,6 +95,7 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
>   *
>   * @p: Params for the channel that we are using
>   * @used: number of pages used
> + * @errp: pointer to an error
>   */
>  static int zlib_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
>  {
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index cc3b8869c0..09ae1cf91a 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -84,6 +84,7 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
>   * Close the channel and return memory.
>   *
>   * @p: Params for the channel that we are using
> + * @errp: pointer to an error
>   */
>  static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
>  {
> @@ -107,6 +108,7 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
>   *
>   * @p: Params for the channel that we are using
>   * @used: number of pages used
> + * @errp: pointer to an error
>   */
>  static int zstd_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
>  {
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 8ea86d81dc..cdeffdc4c5 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -66,6 +66,7 @@ static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
>   * For no compression this function does nothing.
>   *
>   * @p: Params for the channel that we are using
> + * @errp: pointer to an error
>   */
>  static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
>  {
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 05/23] multifd: The variable is only used inside the loop
  2021-11-24 10:05 ` [PATCH v3 05/23] multifd: The variable is only used inside the loop Juan Quintela
@ 2021-11-25 18:40   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 18:40 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index cdeffdc4c5..ce7101cf9d 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -629,7 +629,6 @@ static void *multifd_send_thread(void *opaque)
>      MultiFDSendParams *p = opaque;
>      Error *local_err = NULL;
>      int ret = 0;
> -    uint32_t flags = 0;
>  
>      trace_multifd_send_thread_start(p->id);
>      rcu_register_thread();
> @@ -652,7 +651,7 @@ static void *multifd_send_thread(void *opaque)
>          if (p->pending_job) {
>              uint32_t used = p->pages->num;
>              uint64_t packet_num = p->packet_num;
> -            flags = p->flags;
> +            uint32_t flags = p->flags;
>  
>              if (used) {
>                  ret = multifd_send_state->ops->send_prepare(p, used,
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 06/23] multifd: remove used parameter from send_prepare() method
  2021-11-24 10:06 ` [PATCH v3 06/23] multifd: remove used parameter from send_prepare() method Juan Quintela
@ 2021-11-25 18:51   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 18:51 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> It is already there as p->pages->num.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd.h      | 2 +-
>  migration/multifd-zlib.c | 7 +++----
>  migration/multifd-zstd.c | 7 +++----
>  migration/multifd.c      | 9 +++------
>  4 files changed, 10 insertions(+), 15 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 86820dd028..7968cc5c20 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -159,7 +159,7 @@ typedef struct {
>      /* Cleanup for sending side */
>      void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
>      /* Prepare the send packet */
> -    int (*send_prepare)(MultiFDSendParams *p, uint32_t used, Error **errp);
> +    int (*send_prepare)(MultiFDSendParams *p, Error **errp);
>      /* Write the send packet */
>      int (*send_write)(MultiFDSendParams *p, uint32_t used, Error **errp);
>      /* Setup for receiving side */
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index d0437cce2a..28f0ed933b 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -94,10 +94,9 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
>   * Returns 0 for success or -1 for error
>   *
>   * @p: Params for the channel that we are using
> - * @used: number of pages used
>   * @errp: pointer to an error
>   */
> -static int zlib_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
> +static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>  {
>      struct iovec *iov = p->pages->iov;
>      struct zlib_data *z = p->data;
> @@ -106,11 +105,11 @@ static int zlib_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
>      int ret;
>      uint32_t i;
>  
> -    for (i = 0; i < used; i++) {
> +    for (i = 0; i < p->pages->num; i++) {
>          uint32_t available = z->zbuff_len - out_size;
>          int flush = Z_NO_FLUSH;
>  
> -        if (i == used - 1) {
> +        if (i == p->pages->num - 1) {
>              flush = Z_SYNC_FLUSH;
>          }
>  
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index 09ae1cf91a..4a71e96e06 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -107,10 +107,9 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
>   * Returns 0 for success or -1 for error
>   *
>   * @p: Params for the channel that we are using
> - * @used: number of pages used
>   * @errp: pointer to an error
>   */
> -static int zstd_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
> +static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
>  {
>      struct iovec *iov = p->pages->iov;
>      struct zstd_data *z = p->data;
> @@ -121,10 +120,10 @@ static int zstd_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
>      z->out.size = z->zbuff_len;
>      z->out.pos = 0;
>  
> -    for (i = 0; i < used; i++) {
> +    for (i = 0; i < p->pages->num; i++) {
>          ZSTD_EndDirective flush = ZSTD_e_continue;
>  
> -        if (i == used - 1) {
> +        if (i == p->pages->num - 1) {
>              flush = ZSTD_e_flush;
>          }
>          z->in.src = iov[i].iov_base;
> diff --git a/migration/multifd.c b/migration/multifd.c
> index ce7101cf9d..098ef8842c 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -82,13 +82,11 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
>   * Returns 0 for success or -1 for error
>   *
>   * @p: Params for the channel that we are using
> - * @used: number of pages used
>   * @errp: pointer to an error
>   */
> -static int nocomp_send_prepare(MultiFDSendParams *p, uint32_t used,
> -                               Error **errp)
> +static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
>  {
> -    p->next_packet_size = used * qemu_target_page_size();
> +    p->next_packet_size = p->pages->num * qemu_target_page_size();
>      p->flags |= MULTIFD_FLAG_NOCOMP;
>      return 0;
>  }
> @@ -654,8 +652,7 @@ static void *multifd_send_thread(void *opaque)
>              uint32_t flags = p->flags;
>  
>              if (used) {
> -                ret = multifd_send_state->ops->send_prepare(p, used,
> -                                                            &local_err);
> +                ret = multifd_send_state->ops->send_prepare(p, &local_err);
>                  if (ret != 0) {
>                      qemu_mutex_unlock(&p->mutex);
>                      break;
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 07/23] multifd: remove used parameter from send_recv_pages() method
  2021-11-24 10:06 ` [PATCH v3 07/23] multifd: remove used parameter from send_recv_pages() method Juan Quintela
@ 2021-11-25 18:53   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 18:53 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> It is already there as p->pages->num.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd.h      | 2 +-
>  migration/multifd-zlib.c | 9 ++++-----
>  migration/multifd-zstd.c | 7 +++----
>  migration/multifd.c      | 7 +++----
>  4 files changed, 11 insertions(+), 14 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 7968cc5c20..e57adc783b 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -167,7 +167,7 @@ typedef struct {
>      /* Cleanup for receiving side */
>      void (*recv_cleanup)(MultiFDRecvParams *p);
>      /* Read all pages */
> -    int (*recv_pages)(MultiFDRecvParams *p, uint32_t used, Error **errp);
> +    int (*recv_pages)(MultiFDRecvParams *p, Error **errp);
>  } MultiFDMethods;
>  
>  void multifd_register_ops(int method, MultiFDMethods *ops);
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index 28f0ed933b..e85ef8824d 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -230,17 +230,16 @@ static void zlib_recv_cleanup(MultiFDRecvParams *p)
>   * Returns 0 for success or -1 for error
>   *
>   * @p: Params for the channel that we are using
> - * @used: number of pages used
>   * @errp: pointer to an error
>   */
> -static int zlib_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
> +static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
>  {
>      struct zlib_data *z = p->data;
>      z_stream *zs = &z->zs;
>      uint32_t in_size = p->next_packet_size;
>      /* we measure the change of total_out */
>      uint32_t out_size = zs->total_out;
> -    uint32_t expected_size = used * qemu_target_page_size();
> +    uint32_t expected_size = p->pages->num * qemu_target_page_size();
>      uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
>      int ret;
>      int i;
> @@ -259,12 +258,12 @@ static int zlib_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
>      zs->avail_in = in_size;
>      zs->next_in = z->zbuff;
>  
> -    for (i = 0; i < used; i++) {
> +    for (i = 0; i < p->pages->num; i++) {
>          struct iovec *iov = &p->pages->iov[i];
>          int flush = Z_NO_FLUSH;
>          unsigned long start = zs->total_out;
>  
> -        if (i == used - 1) {
> +        if (i == p->pages->num - 1) {
>              flush = Z_SYNC_FLUSH;
>          }
>  
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index 4a71e96e06..a8b104f4ee 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -250,14 +250,13 @@ static void zstd_recv_cleanup(MultiFDRecvParams *p)
>   * Returns 0 for success or -1 for error
>   *
>   * @p: Params for the channel that we are using
> - * @used: number of pages used
>   * @errp: pointer to an error
>   */
> -static int zstd_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
> +static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
>  {
>      uint32_t in_size = p->next_packet_size;
>      uint32_t out_size = 0;
> -    uint32_t expected_size = used * qemu_target_page_size();
> +    uint32_t expected_size = p->pages->num * qemu_target_page_size();
>      uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
>      struct zstd_data *z = p->data;
>      int ret;
> @@ -278,7 +277,7 @@ static int zstd_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
>      z->in.size = in_size;
>      z->in.pos = 0;
>  
> -    for (i = 0; i < used; i++) {
> +    for (i = 0; i < p->pages->num; i++) {
>          struct iovec *iov = &p->pages->iov[i];
>  
>          z->out.dst = iov->iov_base;
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 098ef8842c..55d99a8232 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -141,10 +141,9 @@ static void nocomp_recv_cleanup(MultiFDRecvParams *p)
>   * Returns 0 for success or -1 for error
>   *
>   * @p: Params for the channel that we are using
> - * @used: number of pages used
>   * @errp: pointer to an error
>   */
> -static int nocomp_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
> +static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
>  {
>      uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
>  
> @@ -153,7 +152,7 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
>                     p->id, flags, MULTIFD_FLAG_NOCOMP);
>          return -1;
>      }
> -    return qio_channel_readv_all(p->c, p->pages->iov, used, errp);
> +    return qio_channel_readv_all(p->c, p->pages->iov, p->pages->num, errp);
>  }
>  
>  static MultiFDMethods multifd_nocomp_ops = {
> @@ -1099,7 +1098,7 @@ static void *multifd_recv_thread(void *opaque)
>          qemu_mutex_unlock(&p->mutex);
>  
>          if (used) {
> -            ret = multifd_recv_state->ops->recv_pages(p, used, &local_err);
> +            ret = multifd_recv_state->ops->recv_pages(p, &local_err);
>              if (ret != 0) {
>                  break;
>              }
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 08/23] multifd: Fill offset and block for reception
  2021-11-24 10:06 ` [PATCH v3 08/23] multifd: Fill offset and block for reception Juan Quintela
@ 2021-11-25 19:41   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 19:41 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> We were using the iov directly, but we will need this info on the
> following patch.

Yes, I think so; have you considered that you really need to check the
fields of MultiFD*Params to see which ones you're actually using?



Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/multifd.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 55d99a8232..0533da154a 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -354,6 +354,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>          return -1;
>      }
>  
> +    p->pages->block = block;
>      for (i = 0; i < p->pages->num; i++) {
>          uint64_t offset = be64_to_cpu(packet->offset[i]);
>  
> @@ -363,6 +364,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>                         offset, block->used_length);
>              return -1;
>          }
> +        p->pages->offset[i] = offset;
>          p->pages->iov[i].iov_base = block->host + offset;
>          p->pages->iov[i].iov_len = page_size;
>      }
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 01/23] multifd: Delete useless operation
  2021-11-25  7:24     ` Juan Quintela
@ 2021-11-25 19:46       ` Dr. David Alan Gilbert
  2021-11-26  9:39         ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-25 19:46 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> We are divining by page_size to multiply again in the only use.
> >              ^--- typo
> >> Once there, impreve the comments.
> >                   ^--- typo
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >
> > OK, with the typo's fixed:
> 
> Thanks.
> 
> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >
> > but, could you also explain the  x 2 (that's no worse than the current
> > code); is this defined somewhere in zlib?  I thought there was a routine
> > that told you the worst case?
> 
> Nowhere.
> 
> There are pathological cases where it can be worse.  It is not clear at
> all by how much (OK, for zlib it appears to be on the order of a dozen
> bytes, because it marks the data as uncompressed in the worst possible
> case).  For zstd, there is no clear/fast answer when you google.

For zlib:

ZEXTERN uLong ZEXPORT compressBound OF((uLong sourceLen));
/*
     compressBound() returns an upper bound on the compressed size after
   compress() or compress2() on sourceLen bytes.  It would be used before a
   compress() or compress2() call to allocate the destination buffer.
*/

> As this buffer is held for the whole migration, one per thread, this
> looked safe to me.  Notice that we are compressing 128 pages at a time,
> so for it not to compress anything at all looks very pathological.
> 
> But as they say, better safe than sorry.

Yeh.

Dave

> If anyone who knows more about zlib/zstd gives me different values, I
> will change that in an additional patch.
> 
> Later, Juan.
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 04/23] multifd: Add missing documention
  2021-11-25 18:38   ` Dr. David Alan Gilbert
@ 2021-11-26  9:34     ` Juan Quintela
  0 siblings, 0 replies; 72+ messages in thread
From: Juan Quintela @ 2021-11-26  9:34 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> Pretty obvious, but I guess to have the complete set of comments:

Yeap.  When I was removing the used parameter, I found that we have this
function without the comment.  If we have the comments, just make them
right.

> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Thanks, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 01/23] multifd: Delete useless operation
  2021-11-25 19:46       ` Dr. David Alan Gilbert
@ 2021-11-26  9:39         ` Juan Quintela
  0 siblings, 0 replies; 72+ messages in thread
From: Juan Quintela @ 2021-11-26  9:39 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
>> > * Juan Quintela (quintela@redhat.com) wrote:
>> >> We are divining by page_size to multiply again in the only use.
>> >              ^--- typo
>> >> Once there, impreve the comments.
>> >                   ^--- typo
>> >> 
>> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> >
>> > OK, with the typo's fixed:
>> 
>> Thanks.
>> 
>> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> >
>> > but, could you also explain the  x 2 (that's no worse than the current
>> > code); is this defined somewhere in zlib?  I thought there was a routine
>> > that told you the worst case?
>> 
>> Nowhere.
>> 
>> There are pathological cases where it can be worse.  It is not clear at
>> all by how much (OK, for zlib it appears to be on the order of a dozen
>> bytes, because it marks the data as uncompressed in the worst possible
>> case).  For zstd, there is no clear/fast answer when you google.
>
> For zlib:
>
> ZEXTERN uLong ZEXPORT compressBound OF((uLong sourceLen));
> /*
>      compressBound() returns an upper bound on the compressed size after
>    compress() or compress2() on sourceLen bytes.  It would be used before a
>    compress() or compress2() call to allocate the destination buffer.
> */

Aha, exactly what I needed.

thanks.

zstd one is called:

ZSTD_compressBound()

Added to the series.
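
As a rough, standalone sketch of what that sizing could look like (not
the actual multifd code; also note compressBound() is documented for
compress()/compress2(), so whether the streaming deflate path needs a
few extra bytes on top is an open assumption here):

    #include <stdlib.h>
    #include <zlib.h>
    #include <zstd.h>

    /* worst-case output buffer for compressing npages pages of page_size
     * bytes in a single packet, instead of the "input size x 2" guess */
    static void *alloc_zlib_zbuff(size_t npages, size_t page_size,
                                  size_t *zbuff_len)
    {
        *zbuff_len = compressBound(npages * page_size);
        return malloc(*zbuff_len);
    }

    static void *alloc_zstd_zbuff(size_t npages, size_t page_size,
                                  size_t *zbuff_len)
    {
        *zbuff_len = ZSTD_compressBound(npages * page_size);
        return malloc(*zbuff_len);
    }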

Thanks, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 09/23] multifd: Make zstd compression method not use iovs
  2021-11-24 10:06 ` [PATCH v3 09/23] multifd: Make zstd compression method not use iovs Juan Quintela
@ 2021-11-29 17:16   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-29 17:16 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd-zstd.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index a8b104f4ee..2d5b61106c 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -13,6 +13,7 @@
>  #include "qemu/osdep.h"
>  #include <zstd.h>
>  #include "qemu/rcu.h"
> +#include "exec/ramblock.h"
>  #include "exec/target_page.h"
>  #include "qapi/error.h"
>  #include "migration.h"
> @@ -111,8 +112,8 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
>   */
>  static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
>  {
> -    struct iovec *iov = p->pages->iov;
>      struct zstd_data *z = p->data;
> +    size_t page_size = qemu_target_page_size();
>      int ret;
>      uint32_t i;
>  
> @@ -126,8 +127,8 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
>          if (i == p->pages->num - 1) {
>              flush = ZSTD_e_flush;
>          }
> -        z->in.src = iov[i].iov_base;
> -        z->in.size = iov[i].iov_len;
> +        z->in.src = p->pages->block->host + p->pages->offset[i];
> +        z->in.size = page_size;
>          z->in.pos = 0;
>  
>          /*
> @@ -256,7 +257,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
>  {
>      uint32_t in_size = p->next_packet_size;
>      uint32_t out_size = 0;
> -    uint32_t expected_size = p->pages->num * qemu_target_page_size();
> +    size_t page_size = qemu_target_page_size();
> +    uint32_t expected_size = p->pages->num * page_size;
>      uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
>      struct zstd_data *z = p->data;
>      int ret;
> @@ -278,10 +280,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
>      z->in.pos = 0;
>  
>      for (i = 0; i < p->pages->num; i++) {
> -        struct iovec *iov = &p->pages->iov[i];
> -
> -        z->out.dst = iov->iov_base;
> -        z->out.size = iov->iov_len;
> +        z->out.dst = p->pages->block->host + p->pages->offset[i];
> +        z->out.size = page_size;
>          z->out.pos = 0;
>  
>          /*
> @@ -295,8 +295,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
>          do {
>              ret = ZSTD_decompressStream(z->zds, &z->out, &z->in);
>          } while (ret > 0 && (z->in.size - z->in.pos > 0)
> -                         && (z->out.pos < iov->iov_len));
> -        if (ret > 0 && (z->out.pos < iov->iov_len)) {
> +                         && (z->out.pos < page_size));
> +        if (ret > 0 && (z->out.pos < page_size)) {
>              error_setg(errp, "multifd %d: decompressStream buffer too small",
>                         p->id);
>              return -1;
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 10/23] multifd: Make zlib compression method not use iovs
  2021-11-24 10:06 ` [PATCH v3 10/23] multifd: Make zlib " Juan Quintela
@ 2021-11-29 17:30   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-29 17:30 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd-zlib.c | 17 +++++++++--------
>  1 file changed, 9 insertions(+), 8 deletions(-)
> 
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index e85ef8824d..da6201704c 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -13,6 +13,7 @@
>  #include "qemu/osdep.h"
>  #include <zlib.h>
>  #include "qemu/rcu.h"
> +#include "exec/ramblock.h"
>  #include "exec/target_page.h"
>  #include "qapi/error.h"
>  #include "migration.h"
> @@ -98,8 +99,8 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
>   */
>  static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>  {
> -    struct iovec *iov = p->pages->iov;
>      struct zlib_data *z = p->data;
> +    size_t page_size = qemu_target_page_size();
>      z_stream *zs = &z->zs;
>      uint32_t out_size = 0;
>      int ret;
> @@ -113,8 +114,8 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>              flush = Z_SYNC_FLUSH;
>          }
>  
> -        zs->avail_in = iov[i].iov_len;
> -        zs->next_in = iov[i].iov_base;
> +        zs->avail_in = page_size;
> +        zs->next_in = p->pages->block->host + p->pages->offset[i];
>  
>          zs->avail_out = available;
>          zs->next_out = z->zbuff + out_size;
> @@ -235,6 +236,7 @@ static void zlib_recv_cleanup(MultiFDRecvParams *p)
>  static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
>  {
>      struct zlib_data *z = p->data;
> +    size_t page_size = qemu_target_page_size();
>      z_stream *zs = &z->zs;
>      uint32_t in_size = p->next_packet_size;
>      /* we measure the change of total_out */
> @@ -259,7 +261,6 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
>      zs->next_in = z->zbuff;
>  
>      for (i = 0; i < p->pages->num; i++) {
> -        struct iovec *iov = &p->pages->iov[i];
>          int flush = Z_NO_FLUSH;
>          unsigned long start = zs->total_out;
>  
> @@ -267,8 +268,8 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
>              flush = Z_SYNC_FLUSH;
>          }
>  
> -        zs->avail_out = iov->iov_len;
> -        zs->next_out = iov->iov_base;
> +        zs->avail_out = page_size;
> +        zs->next_out = p->pages->block->host + p->pages->offset[i];
>  
>          /*
>           * Welcome to inflate semantics
> @@ -281,8 +282,8 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
>          do {
>              ret = inflate(zs, flush);
>          } while (ret == Z_OK && zs->avail_in
> -                             && (zs->total_out - start) < iov->iov_len);
> -        if (ret == Z_OK && (zs->total_out - start) < iov->iov_len) {
> +                             && (zs->total_out - start) < page_size);
> +        if (ret == Z_OK && (zs->total_out - start) < page_size) {
>              error_setg(errp, "multifd %d: inflate generated too few output",
>                         p->id);
>              return -1;
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 11/23] multifd: Move iov from pages to params
  2021-11-24 10:06 ` [PATCH v3 11/23] multifd: Move iov from pages to params Juan Quintela
@ 2021-11-29 17:52   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-29 17:52 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> This will allow us to reduce the number of system calls on the next patch.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Leo: Does this make your zerocopy any harder?

Dave
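
The system call saving the commit message refers to is the classic
writev() consolidation; a self-contained sketch of the idea follows,
with a made-up header/page layout rather than the multifd one:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /* send a small header plus npages data pages with one syscall,
     * instead of write(header) followed by writev(pages) */
    static ssize_t send_header_and_pages(int fd, void *hdr, size_t hdr_len,
                                         uint8_t *pages, size_t page_size,
                                         unsigned npages)
    {
        struct iovec iov[1 + npages];   /* C99 VLA: header + pages */
        unsigned i, iovs = 0;

        iov[iovs].iov_base = hdr;
        iov[iovs].iov_len = hdr_len;
        iovs++;
        for (i = 0; i < npages; i++) {
            iov[iovs].iov_base = pages + (size_t)i * page_size;
            iov[iovs].iov_len = page_size;
            iovs++;
        }
        /* a real caller would loop on short writes and mind IOV_MAX */
        return writev(fd, iov, iovs);
    }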

> ---
>  migration/multifd.h |  8 ++++++--
>  migration/multifd.c | 34 ++++++++++++++++++++++++----------
>  2 files changed, 30 insertions(+), 12 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index e57adc783b..c3f18af364 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -62,8 +62,6 @@ typedef struct {
>      uint64_t packet_num;
>      /* offset of each page */
>      ram_addr_t *offset;
> -    /* pointer to each page */
> -    struct iovec *iov;
>      RAMBlock *block;
>  } MultiFDPages_t;
>  
> @@ -110,6 +108,10 @@ typedef struct {
>      uint64_t num_pages;
>      /* syncs main thread and channels */
>      QemuSemaphore sem_sync;
> +    /* buffers to send */
> +    struct iovec *iov;
> +    /* number of iovs used */
> +    uint32_t iovs_num;
>      /* used for compression methods */
>      void *data;
>  }  MultiFDSendParams;
> @@ -149,6 +151,8 @@ typedef struct {
>      uint64_t num_pages;
>      /* syncs main thread and channels */
>      QemuSemaphore sem_sync;
> +    /* buffers to recv */
> +    struct iovec *iov;
>      /* used for de-compression methods */
>      void *data;
>  } MultiFDRecvParams;
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 0533da154a..37487fd01c 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -86,7 +86,16 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
>   */
>  static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
>  {
> -    p->next_packet_size = p->pages->num * qemu_target_page_size();
> +    MultiFDPages_t *pages = p->pages;
> +    size_t page_size = qemu_target_page_size();
> +
> +    for (int i = 0; i < p->pages->num; i++) {
> +        p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
> +        p->iov[p->iovs_num].iov_len = page_size;
> +        p->iovs_num++;
> +    }
> +
> +    p->next_packet_size = p->pages->num * page_size;
>      p->flags |= MULTIFD_FLAG_NOCOMP;
>      return 0;
>  }
> @@ -104,7 +113,7 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
>   */
>  static int nocomp_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
>  {
> -    return qio_channel_writev_all(p->c, p->pages->iov, used, errp);
> +    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
>  }
>  
>  /**
> @@ -146,13 +155,18 @@ static void nocomp_recv_cleanup(MultiFDRecvParams *p)
>  static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
>  {
>      uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
> +    size_t page_size = qemu_target_page_size();
>  
>      if (flags != MULTIFD_FLAG_NOCOMP) {
>          error_setg(errp, "multifd %d: flags received %x flags expected %x",
>                     p->id, flags, MULTIFD_FLAG_NOCOMP);
>          return -1;
>      }
> -    return qio_channel_readv_all(p->c, p->pages->iov, p->pages->num, errp);
> +    for (int i = 0; i < p->pages->num; i++) {
> +        p->iov[i].iov_base = p->pages->block->host + p->pages->offset[i];
> +        p->iov[i].iov_len = page_size;
> +    }
> +    return qio_channel_readv_all(p->c, p->iov, p->pages->num, errp);
>  }
>  
>  static MultiFDMethods multifd_nocomp_ops = {
> @@ -242,7 +256,6 @@ static MultiFDPages_t *multifd_pages_init(size_t size)
>      MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
>  
>      pages->allocated = size;
> -    pages->iov = g_new0(struct iovec, size);
>      pages->offset = g_new0(ram_addr_t, size);
>  
>      return pages;
> @@ -254,8 +267,6 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
>      pages->allocated = 0;
>      pages->packet_num = 0;
>      pages->block = NULL;
> -    g_free(pages->iov);
> -    pages->iov = NULL;
>      g_free(pages->offset);
>      pages->offset = NULL;
>      g_free(pages);
> @@ -365,8 +376,6 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>              return -1;
>          }
>          p->pages->offset[i] = offset;
> -        p->pages->iov[i].iov_base = block->host + offset;
> -        p->pages->iov[i].iov_len = page_size;
>      }
>  
>      return 0;
> @@ -470,8 +479,6 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
>  
>      if (pages->block == block) {
>          pages->offset[pages->num] = offset;
> -        pages->iov[pages->num].iov_base = block->host + offset;
> -        pages->iov[pages->num].iov_len = qemu_target_page_size();
>          pages->num++;
>  
>          if (pages->num < pages->allocated) {
> @@ -564,6 +571,8 @@ void multifd_save_cleanup(void)
>          p->packet_len = 0;
>          g_free(p->packet);
>          p->packet = NULL;
> +        g_free(p->iov);
> +        p->iov = NULL;
>          multifd_send_state->ops->send_cleanup(p, &local_err);
>          if (local_err) {
>              migrate_set_error(migrate_get_current(), local_err);
> @@ -651,6 +660,7 @@ static void *multifd_send_thread(void *opaque)
>              uint32_t used = p->pages->num;
>              uint64_t packet_num = p->packet_num;
>              uint32_t flags = p->flags;
> +            p->iovs_num = 0;
>  
>              if (used) {
>                  ret = multifd_send_state->ops->send_prepare(p, &local_err);
> @@ -919,6 +929,7 @@ int multifd_save_setup(Error **errp)
>          p->packet->version = cpu_to_be32(MULTIFD_VERSION);
>          p->name = g_strdup_printf("multifdsend_%d", i);
>          p->tls_hostname = g_strdup(s->hostname);
> +        p->iov = g_new0(struct iovec, page_count);
>          socket_send_channel_create(multifd_new_send_channel_async, p);
>      }
>  
> @@ -1018,6 +1029,8 @@ int multifd_load_cleanup(Error **errp)
>          p->packet_len = 0;
>          g_free(p->packet);
>          p->packet = NULL;
> +        g_free(p->iov);
> +        p->iov = NULL;
>          multifd_recv_state->ops->recv_cleanup(p);
>      }
>      qemu_sem_destroy(&multifd_recv_state->sem_sync);
> @@ -1158,6 +1171,7 @@ int multifd_load_setup(Error **errp)
>                        + sizeof(uint64_t) * page_count;
>          p->packet = g_malloc0(p->packet_len);
>          p->name = g_strdup_printf("multifdrecv_%d", i);
> +        p->iov = g_new0(struct iovec, page_count);
>      }
>  
>      for (i = 0; i < thread_count; i++) {
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 12/23] multifd: Make zlib use iov's
  2021-11-24 10:06 ` [PATCH v3 12/23] multifd: Make zlib use iov's Juan Quintela
@ 2021-11-29 18:01   ` Dr. David Alan Gilbert
  2021-11-29 18:21     ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-29 18:01 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/multifd-zlib.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index da6201704c..478a4af115 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -143,6 +143,9 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>          }
>          out_size += available - zs->avail_out;
>      }
> +    p->iov[p->iovs_num].iov_base = z->zbuff;
> +    p->iov[p->iovs_num].iov_len = out_size;
> +    p->iovs_num++;
>      p->next_packet_size = out_size;

Do you still need next_packet_size?

but:


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

>      p->flags |= MULTIFD_FLAG_ZLIB;
>  
> @@ -162,10 +165,7 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>   */
>  static int zlib_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
>  {
> -    struct zlib_data *z = p->data;
> -
> -    return qio_channel_write_all(p->c, (void *)z->zbuff, p->next_packet_size,
> -                                 errp);
> +    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
>  }
>  
>  /**
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 13/23] multifd: Make zstd use iov's
  2021-11-24 10:06 ` [PATCH v3 13/23] multifd: Make zstd " Juan Quintela
@ 2021-11-29 18:03   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-29 18:03 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd-zstd.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index 2d5b61106c..259277dc42 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -154,6 +154,9 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
>              return -1;
>          }
>      }
> +    p->iov[p->iovs_num].iov_base = z->zbuff;
> +    p->iov[p->iovs_num].iov_len = z->out.pos;
> +    p->iovs_num++;
>      p->next_packet_size = z->out.pos;
>      p->flags |= MULTIFD_FLAG_ZSTD;
>  
> @@ -173,10 +176,7 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
>   */
>  static int zstd_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
>  {
> -    struct zstd_data *z = p->data;
> -
> -    return qio_channel_write_all(p->c, (void *)z->zbuff, p->next_packet_size,
> -                                 errp);
> +    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
>  }
>  
>  /**
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 14/23] multifd: Remove send_write() method
  2021-11-24 10:06 ` [PATCH v3 14/23] multifd: Remove send_write() method Juan Quintela
@ 2021-11-29 18:19   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-29 18:19 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Everything use now iov's.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd.h      |  2 --
>  migration/multifd-zlib.c | 17 -----------------
>  migration/multifd-zstd.c | 17 -----------------
>  migration/multifd.c      | 20 ++------------------
>  4 files changed, 2 insertions(+), 54 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index c3f18af364..7496f951a7 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -164,8 +164,6 @@ typedef struct {
>      void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
>      /* Prepare the send packet */
>      int (*send_prepare)(MultiFDSendParams *p, Error **errp);
> -    /* Write the send packet */
> -    int (*send_write)(MultiFDSendParams *p, uint32_t used, Error **errp);
>      /* Setup for receiving side */
>      int (*recv_setup)(MultiFDRecvParams *p, Error **errp);
>      /* Cleanup for receiving side */
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index 478a4af115..f65159392a 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -152,22 +152,6 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>      return 0;
>  }
>  
> -/**
> - * zlib_send_write: do the actual write of the data
> - *
> - * Do the actual write of the comprresed buffer.
> - *
> - * Returns 0 for success or -1 for error
> - *
> - * @p: Params for the channel that we are using
> - * @used: number of pages used
> - * @errp: pointer to an error
> - */
> -static int zlib_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
> -{
> -    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
> -}
> -
>  /**
>   * zlib_recv_setup: setup receive side
>   *
> @@ -307,7 +291,6 @@ static MultiFDMethods multifd_zlib_ops = {
>      .send_setup = zlib_send_setup,
>      .send_cleanup = zlib_send_cleanup,
>      .send_prepare = zlib_send_prepare,
> -    .send_write = zlib_send_write,
>      .recv_setup = zlib_recv_setup,
>      .recv_cleanup = zlib_recv_cleanup,
>      .recv_pages = zlib_recv_pages
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index 259277dc42..6933ba622a 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -163,22 +163,6 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
>      return 0;
>  }
>  
> -/**
> - * zstd_send_write: do the actual write of the data
> - *
> - * Do the actual write of the comprresed buffer.
> - *
> - * Returns 0 for success or -1 for error
> - *
> - * @p: Params for the channel that we are using
> - * @used: number of pages used
> - * @errp: pointer to an error
> - */
> -static int zstd_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
> -{
> -    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
> -}
> -
>  /**
>   * zstd_recv_setup: setup receive side
>   *
> @@ -320,7 +304,6 @@ static MultiFDMethods multifd_zstd_ops = {
>      .send_setup = zstd_send_setup,
>      .send_cleanup = zstd_send_cleanup,
>      .send_prepare = zstd_send_prepare,
> -    .send_write = zstd_send_write,
>      .recv_setup = zstd_recv_setup,
>      .recv_cleanup = zstd_recv_cleanup,
>      .recv_pages = zstd_recv_pages
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 37487fd01c..71bdef068e 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -100,22 +100,6 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
>      return 0;
>  }
>  
> -/**
> - * nocomp_send_write: do the actual write of the data
> - *
> - * For no compression we just have to write the data.
> - *
> - * Returns 0 for success or -1 for error
> - *
> - * @p: Params for the channel that we are using
> - * @used: number of pages used
> - * @errp: pointer to an error
> - */
> -static int nocomp_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
> -{
> -    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
> -}
> -
>  /**
>   * nocomp_recv_setup: setup receive side
>   *
> @@ -173,7 +157,6 @@ static MultiFDMethods multifd_nocomp_ops = {
>      .send_setup = nocomp_send_setup,
>      .send_cleanup = nocomp_send_cleanup,
>      .send_prepare = nocomp_send_prepare,
> -    .send_write = nocomp_send_write,
>      .recv_setup = nocomp_recv_setup,
>      .recv_cleanup = nocomp_recv_cleanup,
>      .recv_pages = nocomp_recv_pages
> @@ -687,7 +670,8 @@ static void *multifd_send_thread(void *opaque)
>              }
>  
>              if (used) {
> -                ret = multifd_send_state->ops->send_write(p, used, &local_err);
> +                ret = qio_channel_writev_all(p->c, p->iov, p->iovs_num,
> +                                             &local_err);
>                  if (ret != 0) {
>                      break;
>                  }
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 12/23] multifd: Make zlib use iov's
  2021-11-29 18:01   ` Dr. David Alan Gilbert
@ 2021-11-29 18:21     ` Juan Quintela
  0 siblings, 0 replies; 72+ messages in thread
From: Juan Quintela @ 2021-11-29 18:21 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/multifd-zlib.c | 8 ++++----
>>  1 file changed, 4 insertions(+), 4 deletions(-)
>> 
>> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
>> index da6201704c..478a4af115 100644
>> --- a/migration/multifd-zlib.c
>> +++ b/migration/multifd-zlib.c
>> @@ -143,6 +143,9 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>>          }
>>          out_size += available - zs->avail_out;
>>      }
>> +    p->iov[p->iovs_num].iov_base = z->zbuff;
>> +    p->iov[p->iovs_num].iov_len = out_size;
>> +    p->iovs_num++;
>>      p->next_packet_size = out_size;
>
> Do you still need next_packet_size?

As my crystal ball didn't work so well, I ended up putting
next_packet_size on the wire.  So yes, I still need it.

Yes, I also wanted to remove it.
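
(To be concrete about what "on the wire" means here: next_packet_size is a
field of the packed MultiFDPacket_t header, so the receiver learns how many
bytes of (possibly compressed) payload follow before reading them.  As a
rough sketch, using the field names from the patches quoted later in the
thread:

    /* send side: record the size of the payload that follows the header */
    packet->next_packet_size = cpu_to_be32(p->next_packet_size);

    /* recv side: read it back before reading the payload itself */
    p->next_packet_size = be32_to_cpu(packet->next_packet_size);

so the field is now part of the protocol, not just local bookkeeping.)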


Later, Juan.

>
> but:
>
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
>>      p->flags |= MULTIFD_FLAG_ZLIB;
>>  
>> @@ -162,10 +165,7 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>>   */
>>  static int zlib_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
>>  {
>> -    struct zlib_data *z = p->data;
>> -
>> -    return qio_channel_write_all(p->c, (void *)z->zbuff, p->next_packet_size,
>> -                                 errp);
>> +    return qio_channel_writev_all(p->c, p->iov, p->iovs_num, errp);
>>  }
>>  
>>  /**
>> -- 
>> 2.33.1
>> 



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 15/23] multifd: Use a single writev on the send side
  2021-11-24 10:06 ` [PATCH v3 15/23] multifd: Use a single writev on the send side Juan Quintela
@ 2021-11-29 18:35   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-29 18:35 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Until now, we wrote the packet header with write(), and the rest of the
> pages with writev().  Just increase the size of the iovec and do a
> single writev().
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
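
(Summarising the mechanics: the packet header simply becomes iov[0], and the
pages follow from iov[1] onwards.  A sketch of the key lines from the hunk
below:

    p->iovs_num = 1;                    /* slot 0 is reserved for the header */
    ...
    p->iov[0].iov_len  = p->packet_len;
    p->iov[0].iov_base = p->packet;

with one extra iovec allocated in multifd_save_setup() to hold that header.)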

> ---
>  migration/multifd.c | 20 ++++++++------------
>  1 file changed, 8 insertions(+), 12 deletions(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 71bdef068e..65676d56fd 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -643,7 +643,7 @@ static void *multifd_send_thread(void *opaque)
>              uint32_t used = p->pages->num;
>              uint64_t packet_num = p->packet_num;
>              uint32_t flags = p->flags;
> -            p->iovs_num = 0;
> +            p->iovs_num = 1;
>  
>              if (used) {
>                  ret = multifd_send_state->ops->send_prepare(p, &local_err);
> @@ -663,20 +663,15 @@ static void *multifd_send_thread(void *opaque)
>              trace_multifd_send(p->id, packet_num, used, flags,
>                                 p->next_packet_size);
>  
> -            ret = qio_channel_write_all(p->c, (void *)p->packet,
> -                                        p->packet_len, &local_err);
> +            p->iov[0].iov_len = p->packet_len;
> +            p->iov[0].iov_base = p->packet;
> +
> +            ret = qio_channel_writev_all(p->c, p->iov, p->iovs_num,
> +                                         &local_err);
>              if (ret != 0) {
>                  break;
>              }
>  
> -            if (used) {
> -                ret = qio_channel_writev_all(p->c, p->iov, p->iovs_num,
> -                                             &local_err);
> -                if (ret != 0) {
> -                    break;
> -                }
> -            }
> -
>              qemu_mutex_lock(&p->mutex);
>              p->pending_job--;
>              qemu_mutex_unlock(&p->mutex);
> @@ -913,7 +908,8 @@ int multifd_save_setup(Error **errp)
>          p->packet->version = cpu_to_be32(MULTIFD_VERSION);
>          p->name = g_strdup_printf("multifdsend_%d", i);
>          p->tls_hostname = g_strdup(s->hostname);
> -        p->iov = g_new0(struct iovec, page_count);
> +        /* We need one extra place for the packet header */
> +        p->iov = g_new0(struct iovec, page_count + 1);
>          socket_send_channel_create(multifd_new_send_channel_async, p);
>      }
>  
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 16/23] multifd: Unfold "used" variable by its value
  2021-11-24 10:06 ` [PATCH v3 16/23] multifd: Unfold "used" variable by its value Juan Quintela
@ 2021-11-30 10:45   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-30 10:45 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 65676d56fd..6983ba3e7c 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -1059,7 +1059,6 @@ static void *multifd_recv_thread(void *opaque)
>      rcu_register_thread();
>  
>      while (true) {
> -        uint32_t used;
>          uint32_t flags;
>  
>          if (p->quit) {
> @@ -1082,17 +1081,16 @@ static void *multifd_recv_thread(void *opaque)
>              break;
>          }
>  
> -        used = p->pages->num;
>          flags = p->flags;
>          /* recv methods don't know how to handle the SYNC flag */
>          p->flags &= ~MULTIFD_FLAG_SYNC;
> -        trace_multifd_recv(p->id, p->packet_num, used, flags,
> +        trace_multifd_recv(p->id, p->packet_num, p->pages->num, flags,
>                             p->next_packet_size);
>          p->num_packets++;
> -        p->num_pages += used;
> +        p->num_pages += p->pages->num;
>          qemu_mutex_unlock(&p->mutex);
>  
> -        if (used) {
> +        if (p->pages->num) {
>              ret = multifd_recv_state->ops->recv_pages(p, &local_err);
>              if (ret != 0) {
>                  break;
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 17/23] multifd: Use normal pages array on the send side
  2021-11-24 10:06 ` [PATCH v3 17/23] multifd: Use normal pages array on the send side Juan Quintela
@ 2021-11-30 10:50   ` Dr. David Alan Gilbert
  2021-11-30 12:01     ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-11-30 10:50 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Can you explain a bit more what's going on here?

Dave

> ---
>  migration/multifd.h      |  8 ++++++--
>  migration/multifd-zlib.c |  6 +++---
>  migration/multifd-zstd.c |  6 +++---
>  migration/multifd.c      | 30 +++++++++++++++++++-----------
>  migration/trace-events   |  4 ++--
>  5 files changed, 33 insertions(+), 21 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 7496f951a7..78e73df3ec 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -104,14 +104,18 @@ typedef struct {
>      /* thread local variables */
>      /* packets sent through this channel */
>      uint64_t num_packets;
> -    /* pages sent through this channel */
> -    uint64_t num_pages;
> +    /* non zero pages sent through this channel */
> +    uint64_t num_normal_pages;
>      /* syncs main thread and channels */
>      QemuSemaphore sem_sync;
>      /* buffers to send */
>      struct iovec *iov;
>      /* number of iovs used */
>      uint32_t iovs_num;
> +    /* Pages that are not zero */
> +    ram_addr_t *normal;
> +    /* num of non zero pages */
> +    uint32_t normal_num;
>      /* used for compression methods */
>      void *data;
>  }  MultiFDSendParams;
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index f65159392a..25ef68a548 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -106,16 +106,16 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>      int ret;
>      uint32_t i;
>  
> -    for (i = 0; i < p->pages->num; i++) {
> +    for (i = 0; i < p->normal_num; i++) {
>          uint32_t available = z->zbuff_len - out_size;
>          int flush = Z_NO_FLUSH;
>  
> -        if (i == p->pages->num - 1) {
> +        if (i == p->normal_num - 1) {
>              flush = Z_SYNC_FLUSH;
>          }
>  
>          zs->avail_in = page_size;
> -        zs->next_in = p->pages->block->host + p->pages->offset[i];
> +        zs->next_in = p->pages->block->host + p->normal[i];
>  
>          zs->avail_out = available;
>          zs->next_out = z->zbuff + out_size;
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index 6933ba622a..61842d713e 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -121,13 +121,13 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
>      z->out.size = z->zbuff_len;
>      z->out.pos = 0;
>  
> -    for (i = 0; i < p->pages->num; i++) {
> +    for (i = 0; i < p->normal_num; i++) {
>          ZSTD_EndDirective flush = ZSTD_e_continue;
>  
> -        if (i == p->pages->num - 1) {
> +        if (i == p->normal_num - 1) {
>              flush = ZSTD_e_flush;
>          }
> -        z->in.src = p->pages->block->host + p->pages->offset[i];
> +        z->in.src = p->pages->block->host + p->normal[i];
>          z->in.size = page_size;
>          z->in.pos = 0;
>  
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 6983ba3e7c..dbe919b764 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -89,13 +89,13 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
>      MultiFDPages_t *pages = p->pages;
>      size_t page_size = qemu_target_page_size();
>  
> -    for (int i = 0; i < p->pages->num; i++) {
> -        p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
> +    for (int i = 0; i < p->normal_num; i++) {
> +        p->iov[p->iovs_num].iov_base = pages->block->host + p->normal[i];
>          p->iov[p->iovs_num].iov_len = page_size;
>          p->iovs_num++;
>      }
>  
> -    p->next_packet_size = p->pages->num * page_size;
> +    p->next_packet_size = p->normal_num * page_size;
>      p->flags |= MULTIFD_FLAG_NOCOMP;
>      return 0;
>  }
> @@ -262,7 +262,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  
>      packet->flags = cpu_to_be32(p->flags);
>      packet->pages_alloc = cpu_to_be32(p->pages->allocated);
> -    packet->pages_used = cpu_to_be32(p->pages->num);
> +    packet->pages_used = cpu_to_be32(p->normal_num);
>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>      packet->packet_num = cpu_to_be64(p->packet_num);
>  
> @@ -270,9 +270,9 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>          strncpy(packet->ramblock, p->pages->block->idstr, 256);
>      }
>  
> -    for (i = 0; i < p->pages->num; i++) {
> +    for (i = 0; i < p->normal_num; i++) {
>          /* there are architectures where ram_addr_t is 32 bit */
> -        uint64_t temp = p->pages->offset[i];
> +        uint64_t temp = p->normal[i];
>  
>          packet->offset[i] = cpu_to_be64(temp);
>      }
> @@ -556,6 +556,8 @@ void multifd_save_cleanup(void)
>          p->packet = NULL;
>          g_free(p->iov);
>          p->iov = NULL;
> +        g_free(p->normal);
> +        p->normal = NULL;
>          multifd_send_state->ops->send_cleanup(p, &local_err);
>          if (local_err) {
>              migrate_set_error(migrate_get_current(), local_err);
> @@ -640,12 +642,17 @@ static void *multifd_send_thread(void *opaque)
>          qemu_mutex_lock(&p->mutex);
>  
>          if (p->pending_job) {
> -            uint32_t used = p->pages->num;
>              uint64_t packet_num = p->packet_num;
>              uint32_t flags = p->flags;
>              p->iovs_num = 1;
> +            p->normal_num = 0;
>  
> -            if (used) {
> +            for (int i = 0; i < p->pages->num; i++) {
> +                p->normal[p->normal_num] = p->pages->offset[i];
> +                p->normal_num++;
> +            }
> +
> +            if (p->normal_num) {
>                  ret = multifd_send_state->ops->send_prepare(p, &local_err);
>                  if (ret != 0) {
>                      qemu_mutex_unlock(&p->mutex);
> @@ -655,12 +662,12 @@ static void *multifd_send_thread(void *opaque)
>              multifd_send_fill_packet(p);
>              p->flags = 0;
>              p->num_packets++;
> -            p->num_pages += used;
> +            p->num_normal_pages += p->normal_num;
>              p->pages->num = 0;
>              p->pages->block = NULL;
>              qemu_mutex_unlock(&p->mutex);
>  
> -            trace_multifd_send(p->id, packet_num, used, flags,
> +            trace_multifd_send(p->id, packet_num, p->normal_num, flags,
>                                 p->next_packet_size);
>  
>              p->iov[0].iov_len = p->packet_len;
> @@ -710,7 +717,7 @@ out:
>      qemu_mutex_unlock(&p->mutex);
>  
>      rcu_unregister_thread();
> -    trace_multifd_send_thread_end(p->id, p->num_packets, p->num_pages);
> +    trace_multifd_send_thread_end(p->id, p->num_packets, p->num_normal_pages);
>  
>      return NULL;
>  }
> @@ -910,6 +917,7 @@ int multifd_save_setup(Error **errp)
>          p->tls_hostname = g_strdup(s->hostname);
>          /* We need one extra place for the packet header */
>          p->iov = g_new0(struct iovec, page_count + 1);
> +        p->normal = g_new0(ram_addr_t, page_count);
>          socket_send_channel_create(multifd_new_send_channel_async, p);
>      }
>  
> diff --git a/migration/trace-events b/migration/trace-events
> index b48d873b8a..af8dee9af0 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -124,13 +124,13 @@ multifd_recv_sync_main_wait(uint8_t id) "channel %d"
>  multifd_recv_terminate_threads(bool error) "error %d"
>  multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %d packets %" PRIu64 " pages %" PRIu64
>  multifd_recv_thread_start(uint8_t id) "%d"
> -multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " pages %d flags 0x%x next packet size %d"
> +multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " normal pages %d flags 0x%x next packet size %d"
>  multifd_send_error(uint8_t id) "channel %d"
>  multifd_send_sync_main(long packet_num) "packet num %ld"
>  multifd_send_sync_main_signal(uint8_t id) "channel %d"
>  multifd_send_sync_main_wait(uint8_t id) "channel %d"
>  multifd_send_terminate_threads(bool error) "error %d"
> -multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %d packets %" PRIu64 " pages %"  PRIu64
> +multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages) "channel %d packets %" PRIu64 " normal pages %"  PRIu64
>  multifd_send_thread_start(uint8_t id) "%d"
>  multifd_tls_outgoing_handshake_start(void *ioc, void *tioc, const char *hostname) "ioc=%p tioc=%p hostname=%s"
>  multifd_tls_outgoing_handshake_error(void *ioc, const char *err) "ioc=%p err=%s"
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 17/23] multifd: Use normal pages array on the send side
  2021-11-30 10:50   ` Dr. David Alan Gilbert
@ 2021-11-30 12:01     ` Juan Quintela
  2021-12-01 10:59       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-11-30 12:01 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> Can you explain a bit more what's going on here?

Sorry.

Until patch 20, we have what we have always had:

pages that are sent through multifd (non-zero pages).  We are going to
call them normal pages.  So right now, we use the array of pages that we
are passed in directly in the multifd send methods.

But when we introduce zero page handling around patch 20, we end up having
two types of pages sent through multifd:
- normal pages (a.k.a. non-zero pages)
- zero pages

So the options are:
- we rename the fields before we introduce the zero page code, and then
  we introduce the zero page code.
- we rename at the same time that we introduce the zero page code.

I decided to go with the 1st option.

The other thing that we do here is introduce the normal pages array,
so right now we do:

for (i = 0; i < pages->num; i++) {
    p->normal[p->normal_num] = pages->offset[i];
    p->normal_num++;
}


Why?

Because then patch 20 becomes:

for (i = 0; i < pages->num; i++) {
    if (buffer_is_zero(pages->offset[i])) {
        p->zero[p->zero_num] = pages->offset[i];
        p->zero_num++;
    } else {
        p->normal[p->normal_num] = pages->offset[i];
        p->normal_num++;
    }
}

i.e. we don't have to touch the handling of normal pages at all, only
this for loop.
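
Spelled out with the arguments that buffer_is_zero() actually takes (host
address plus length), that loop ends up looking roughly like this in the
zero-page patch later in the series:

for (int i = 0; i < p->pages->num; i++) {
    if (buffer_is_zero(p->pages->block->host + p->pages->offset[i],
                       qemu_target_page_size())) {
        /* all-zero page: only its offset travels in the packet */
        p->zero[p->zero_num] = p->pages->offset[i];
        p->zero_num++;
    } else {
        /* non-zero page: goes through the normal prepare/iov path */
        p->normal[p->normal_num] = p->pages->offset[i];
        p->normal_num++;
    }
}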

As an added benefit, after this patch, multifd methods don't need to
know about the pages array, only about the params array (that will allow
me to drop the locking earlier).

I hope this helps.

Later, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 17/23] multifd: Use normal pages array on the send side
  2021-11-30 12:01     ` Juan Quintela
@ 2021-12-01 10:59       ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-01 10:59 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >
> > Can you explain a bit more what's going on here?
> 
> Sorry.
> 
> Until patch 20, we have what we have always had:
> 
> pages that are sent through multifd (non-zero pages).  We are going to
> call them normal pages.  So right now, we use the array of pages that we
> are passed in directly in the multifd send methods.
> 
> But when we introduce zero page handling around patch 20, we end up having
> two types of pages sent through multifd:
> - normal pages (a.k.a. non-zero pages)
> - zero pages
> 
> So the options are:
> - we rename the fields before we introduce the zero page code, and then
>   we introduce the zero page code.
> - we rename at the same time that we introduce the zero page code.
> 
> I decided to go with the 1st option.
> 
> The other thing that we do here is introduce the normal pages array,
> so right now we do:
> 
> for (i = 0; i < pages->num; i++) {
>     p->normal[p->normal_num] = pages->offset[i];
>     p->normal_num++;
> }
> 
> 
> Why?
> 
> Because then patch 20 becomes:
> 
> for (i = 0; i < pages->num; i++) {
>     if (buffer_is_zero(pages->offset[i])) {
>         p->zero[p->zero_num] = pages->offset[i];
>         p->zero_num++;
>     } else {
>         p->normal[p->normal_num] = pages->offset[i];
>         p->normal_num++;
>     }
> }
> 
> i.e. we don't have to touch the handling of normal pages at all, only
> this for loop.
> 
> As an added benefit, after this patch, multifd methods don't need to
> know about the pages array, only about the params array (that will allow
> me to drop the locking earlier).
> 
> I hope this helps.

OK, so the code is OK, but it needs a commit message that explains all
that a bit more concisely.

Dave

> Later, Juan.
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 19/23] multifd: recv side only needs the RAMBlock host address
  2021-11-24 10:06 ` [PATCH v3 19/23] multifd: recv side only needs the RAMBlock host address Juan Quintela
@ 2021-12-01 18:56   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-01 18:56 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> So we can remove the MultiFDPages.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/multifd.h      | 4 ++--
>  migration/multifd-zlib.c | 2 +-
>  migration/multifd-zstd.c | 2 +-
>  migration/multifd.c      | 7 ++-----
>  4 files changed, 6 insertions(+), 9 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 9fbcb7bb9a..ab32baebd7 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -136,8 +136,8 @@ typedef struct {
>      bool running;
>      /* should this thread finish */
>      bool quit;
> -    /* array of pages to receive */
> -    MultiFDPages_t *pages;
> +    /* ramblock host address */
> +    uint8_t *host;
>      /* packet allocated len */
>      uint32_t packet_len;
>      /* pointer to the packet */
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index cc143b829d..bf4d87fa16 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -253,7 +253,7 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
>          }
>  
>          zs->avail_out = page_size;
> -        zs->next_out = p->pages->block->host + p->normal[i];
> +        zs->next_out = p->host + p->normal[i];
>  
>          /*
>           * Welcome to inflate semantics
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index 93d504ce0f..dd64ac3227 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -264,7 +264,7 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
>      z->in.pos = 0;
>  
>      for (i = 0; i < p->normal_num; i++) {
> -        z->out.dst = p->pages->block->host + p->normal[i];
> +        z->out.dst = p->host + p->normal[i];
>          z->out.size = page_size;
>          z->out.pos = 0;
>  
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 3ffb1aba64..dc76322137 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -147,7 +147,7 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
>          return -1;
>      }
>      for (int i = 0; i < p->normal_num; i++) {
> -        p->iov[i].iov_base = p->pages->block->host + p->normal[i];
> +        p->iov[i].iov_base = p->host + p->normal[i];
>          p->iov[i].iov_len = page_size;
>      }
>      return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
> @@ -340,7 +340,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>          return -1;
>      }
>  
> -    p->pages->block = block;
> +    p->host = block->host;
>      for (i = 0; i < p->normal_num; i++) {
>          uint64_t offset = be64_to_cpu(packet->offset[i]);
>  
> @@ -1004,8 +1004,6 @@ int multifd_load_cleanup(Error **errp)
>          qemu_sem_destroy(&p->sem_sync);
>          g_free(p->name);
>          p->name = NULL;
> -        multifd_pages_clear(p->pages);
> -        p->pages = NULL;
>          p->packet_len = 0;
>          g_free(p->packet);
>          p->packet = NULL;
> @@ -1146,7 +1144,6 @@ int multifd_load_setup(Error **errp)
>          qemu_sem_init(&p->sem_sync, 0);
>          p->quit = false;
>          p->id = i;
> -        p->pages = multifd_pages_init(page_count);
>          p->packet_len = sizeof(MultiFDPacket_t)
>                        + sizeof(uint64_t) * page_count;
>          p->packet = g_malloc0(p->packet_len);
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 20/23] multifd: Rename pages_used to normal_pages
  2021-11-24 10:06 ` [PATCH v3 20/23] multifd: Rename pages_used to normal_pages Juan Quintela
@ 2021-12-01 19:00   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-01 19:00 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

(This series has a painful lot of small renamy patches)


> ---
>  migration/multifd.h | 3 ++-
>  migration/multifd.c | 4 ++--
>  2 files changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index ab32baebd7..39e55d7f05 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -44,7 +44,8 @@ typedef struct {
>      uint32_t flags;
>      /* maximum number of allocated pages */
>      uint32_t pages_alloc;
> -    uint32_t pages_used;
> +    /* non zero pages */
> +    uint32_t normal_pages;
>      /* size of the next packet that contains pages */
>      uint32_t next_packet_size;
>      uint64_t packet_num;
> diff --git a/migration/multifd.c b/migration/multifd.c
> index dc76322137..d1ab823f98 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -262,7 +262,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  
>      packet->flags = cpu_to_be32(p->flags);
>      packet->pages_alloc = cpu_to_be32(p->pages->allocated);
> -    packet->pages_used = cpu_to_be32(p->normal_num);
> +    packet->normal_pages = cpu_to_be32(p->normal_num);
>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>      packet->packet_num = cpu_to_be64(p->packet_num);
>  
> @@ -316,7 +316,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>          return -1;
>      }
>  
> -    p->normal_num = be32_to_cpu(packet->pages_used);
> +    p->normal_num = be32_to_cpu(packet->normal_pages);
>      if (p->normal_num > packet->pages_alloc) {
>          error_setg(errp, "multifd: received packet "
>                     "with %d pages and expected maximum pages are %d",
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 21/23] multifd: Support for zero pages transmission
  2021-11-24 10:06 ` [PATCH v3 21/23] multifd: Support for zero pages transmission Juan Quintela
@ 2021-12-02 11:36   ` Dr. David Alan Gilbert
  2021-12-02 12:08     ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-02 11:36 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> This patch adds counters and similar.  Logic will be added on the
> following patch.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/multifd.h    | 13 ++++++++++++-
>  migration/multifd.c    | 22 +++++++++++++++++++---
>  migration/trace-events |  2 +-
>  3 files changed, 32 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 39e55d7f05..973315b545 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -49,7 +49,10 @@ typedef struct {
>      /* size of the next packet that contains pages */
>      uint32_t next_packet_size;
>      uint64_t packet_num;
> -    uint64_t unused[4];    /* Reserved for future use */
> +    /* zero pages */
> +    uint32_t zero_pages;

Had you considered just adding a flag, MULTIFD_FLAG_ZERO to the packet?

> +    uint32_t unused32[1];    /* Reserved for future use */
> +    uint64_t unused64[3];    /* Reserved for future use */
>      char ramblock[256];
>      uint64_t offset[];
>  } __attribute__((packed)) MultiFDPacket_t;
> @@ -117,6 +120,10 @@ typedef struct {
>      ram_addr_t *normal;
>      /* num of non zero pages */
>      uint32_t normal_num;
> +    /* Pages that are  zero */
> +    ram_addr_t *zero;
> +    /* num of zero pages */
> +    uint32_t zero_num;
>      /* used for compression methods */
>      void *data;
>  }  MultiFDSendParams;
> @@ -162,6 +169,10 @@ typedef struct {
>      ram_addr_t *normal;
>      /* num of non zero pages */
>      uint32_t normal_num;
> +    /* Pages that are  zero */
> +    ram_addr_t *zero;
> +    /* num of zero pages */
> +    uint32_t zero_num;
>      /* used for de-compression methods */
>      void *data;
>  } MultiFDRecvParams;
> diff --git a/migration/multifd.c b/migration/multifd.c
> index d1ab823f98..2e4dffd6c6 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -265,6 +265,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>      packet->normal_pages = cpu_to_be32(p->normal_num);
>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>      packet->packet_num = cpu_to_be64(p->packet_num);
> +    packet->zero_pages = cpu_to_be32(p->zero_num);
>  
>      if (p->pages->block) {
>          strncpy(packet->ramblock, p->pages->block->idstr, 256);
> @@ -327,7 +328,15 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>      p->next_packet_size = be32_to_cpu(packet->next_packet_size);
>      p->packet_num = be64_to_cpu(packet->packet_num);
>  
> -    if (p->normal_num == 0) {
> +    p->zero_num = be32_to_cpu(packet->zero_pages);
> +    if (p->zero_num > packet->pages_alloc - p->normal_num) {
> +        error_setg(errp, "multifd: received packet "
> +                   "with %d zero pages and expected maximum pages are %d",
> +                   p->normal_num, packet->pages_alloc - p->zero_num) ;

should that be p->zero_num, packet->pages_alloc - p->normal_num ?
(and be %u)

Dave

> +        return -1;
> +    }
> +
> +    if (p->normal_num == 0 && p->zero_num == 0) {
>          return 0;
>      }
>  
> @@ -550,6 +559,8 @@ void multifd_save_cleanup(void)
>          p->iov = NULL;
>          g_free(p->normal);
>          p->normal = NULL;
> +        g_free(p->zero);
> +        p->zero = NULL;
>          multifd_send_state->ops->send_cleanup(p, &local_err);
>          if (local_err) {
>              migrate_set_error(migrate_get_current(), local_err);
> @@ -638,6 +649,7 @@ static void *multifd_send_thread(void *opaque)
>              uint32_t flags = p->flags;
>              p->iovs_num = 1;
>              p->normal_num = 0;
> +            p->zero_num = 0;
>  
>              for (int i = 0; i < p->pages->num; i++) {
>                  p->normal[p->normal_num] = p->pages->offset[i];
> @@ -659,8 +671,8 @@ static void *multifd_send_thread(void *opaque)
>              p->pages->block = NULL;
>              qemu_mutex_unlock(&p->mutex);
>  
> -            trace_multifd_send(p->id, packet_num, p->normal_num, flags,
> -                               p->next_packet_size);
> +            trace_multifd_send(p->id, packet_num, p->normal_num, p->zero_num,
> +                               flags, p->next_packet_size);
>  
>              p->iov[0].iov_len = p->packet_len;
>              p->iov[0].iov_base = p->packet;
> @@ -910,6 +922,7 @@ int multifd_save_setup(Error **errp)
>          /* We need one extra place for the packet header */
>          p->iov = g_new0(struct iovec, page_count + 1);
>          p->normal = g_new0(ram_addr_t, page_count);
> +        p->zero = g_new0(ram_addr_t, page_count);
>          socket_send_channel_create(multifd_new_send_channel_async, p);
>      }
>  
> @@ -1011,6 +1024,8 @@ int multifd_load_cleanup(Error **errp)
>          p->iov = NULL;
>          g_free(p->normal);
>          p->normal = NULL;
> +        g_free(p->zero);
> +        p->zero = NULL;
>          multifd_recv_state->ops->recv_cleanup(p);
>      }
>      qemu_sem_destroy(&multifd_recv_state->sem_sync);
> @@ -1150,6 +1165,7 @@ int multifd_load_setup(Error **errp)
>          p->name = g_strdup_printf("multifdrecv_%d", i);
>          p->iov = g_new0(struct iovec, page_count);
>          p->normal = g_new0(ram_addr_t, page_count);
> +        p->zero = g_new0(ram_addr_t, page_count);
>      }
>  
>      for (i = 0; i < thread_count; i++) {
> diff --git a/migration/trace-events b/migration/trace-events
> index af8dee9af0..608decbdcc 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -124,7 +124,7 @@ multifd_recv_sync_main_wait(uint8_t id) "channel %d"
>  multifd_recv_terminate_threads(bool error) "error %d"
>  multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %d packets %" PRIu64 " pages %" PRIu64
>  multifd_recv_thread_start(uint8_t id) "%d"
> -multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " normal pages %d flags 0x%x next packet size %d"
> +multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t zero, uint32_t flags, uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " normal pages %d zero pages %d flags 0x%x next packet size %d"
>  multifd_send_error(uint8_t id) "channel %d"
>  multifd_send_sync_main(long packet_num) "packet num %ld"
>  multifd_send_sync_main_signal(uint8_t id) "channel %d"
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 21/23] multifd: Support for zero pages transmission
  2021-12-02 11:36   ` Dr. David Alan Gilbert
@ 2021-12-02 12:08     ` Juan Quintela
  2021-12-02 16:16       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-12-02 12:08 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> This patch adds counters and similar.  Logic will be added on the
>> following patch.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/multifd.h    | 13 ++++++++++++-
>>  migration/multifd.c    | 22 +++++++++++++++++++---
>>  migration/trace-events |  2 +-
>>  3 files changed, 32 insertions(+), 5 deletions(-)
>> 
>> diff --git a/migration/multifd.h b/migration/multifd.h
>> index 39e55d7f05..973315b545 100644
>> --- a/migration/multifd.h
>> +++ b/migration/multifd.h
>> @@ -49,7 +49,10 @@ typedef struct {
>>      /* size of the next packet that contains pages */
>>      uint32_t next_packet_size;
>>      uint64_t packet_num;
>> -    uint64_t unused[4];    /* Reserved for future use */
>> +    /* zero pages */
>> +    uint32_t zero_pages;
>
> Had you considered just adding a flag, MULTIFD_FLAG_ZERO to the packet?

I *have* to also add the flag.

I was waiting for 7.0 to get out, because I still have to do the
compatibility bits.  Otherwise you can't migrate to an old multifd version.

>
>> +    uint32_t unused32[1];    /* Reserved for future use */
>> +    uint64_t unused64[3];    /* Reserved for future use */
>>      char ramblock[256];
>>      uint64_t offset[];
>>  } __attribute__((packed)) MultiFDPacket_t;
>> @@ -117,6 +120,10 @@ typedef struct {
>>      ram_addr_t *normal;
>>      /* num of non zero pages */
>>      uint32_t normal_num;
>> +    /* Pages that are  zero */
>> +    ram_addr_t *zero;
>> +    /* num of zero pages */
>> +    uint32_t zero_num;
>>      /* used for compression methods */
>>      void *data;
>>  }  MultiFDSendParams;
>> @@ -162,6 +169,10 @@ typedef struct {
>>      ram_addr_t *normal;
>>      /* num of non zero pages */
>>      uint32_t normal_num;
>> +    /* Pages that are  zero */
>> +    ram_addr_t *zero;
>> +    /* num of zero pages */
>> +    uint32_t zero_num;
>>      /* used for de-compression methods */
>>      void *data;
>>  } MultiFDRecvParams;
>> diff --git a/migration/multifd.c b/migration/multifd.c
>> index d1ab823f98..2e4dffd6c6 100644
>> --- a/migration/multifd.c
>> +++ b/migration/multifd.c
>> @@ -265,6 +265,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>>      packet->normal_pages = cpu_to_be32(p->normal_num);
>>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>>      packet->packet_num = cpu_to_be64(p->packet_num);
>> +    packet->zero_pages = cpu_to_be32(p->zero_num);
>>  
>>      if (p->pages->block) {
>>          strncpy(packet->ramblock, p->pages->block->idstr, 256);
>> @@ -327,7 +328,15 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>>      p->next_packet_size = be32_to_cpu(packet->next_packet_size);
>>      p->packet_num = be64_to_cpu(packet->packet_num);
>>  
>> -    if (p->normal_num == 0) {
>> +    p->zero_num = be32_to_cpu(packet->zero_pages);
>> +    if (p->zero_num > packet->pages_alloc - p->normal_num) {
>> +        error_setg(errp, "multifd: received packet "
>> +                   "with %d zero pages and expected maximum pages are %d",
>> +                   p->normal_num, packet->pages_alloc - p->zero_num) ;
>
> should that be p->zero_num, packet->pages_alloc - p->normal_num ?
> (and be %u)

Copy and paste error.  You are right on both cases.

Thanks.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 21/23] multifd: Support for zero pages transmission
  2021-12-02 12:08     ` Juan Quintela
@ 2021-12-02 16:16       ` Dr. David Alan Gilbert
  2021-12-02 16:19         ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-02 16:16 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> This patch adds counters and similar.  Logic will be added on the
> >> following patch.
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> ---
> >>  migration/multifd.h    | 13 ++++++++++++-
> >>  migration/multifd.c    | 22 +++++++++++++++++++---
> >>  migration/trace-events |  2 +-
> >>  3 files changed, 32 insertions(+), 5 deletions(-)
> >> 
> >> diff --git a/migration/multifd.h b/migration/multifd.h
> >> index 39e55d7f05..973315b545 100644
> >> --- a/migration/multifd.h
> >> +++ b/migration/multifd.h
> >> @@ -49,7 +49,10 @@ typedef struct {
> >>      /* size of the next packet that contains pages */
> >>      uint32_t next_packet_size;
> >>      uint64_t packet_num;
> >> -    uint64_t unused[4];    /* Reserved for future use */
> >> +    /* zero pages */
> >> +    uint32_t zero_pages;
> >
> > Had you considered just adding a flag, MULTIFD_FLAG_ZERO to the packet?
> 
> I *have* to also add the flag.

I meant can't you add a flag to say that this whole packet is zero pages
and then you only need one counter.

Dave

> I was waiting for 7.0 to get out, because I still have to do the
> compatibility bits.  Otherwise you can't migrate to an old multifd version.
> 
> >
> >> +    uint32_t unused32[1];    /* Reserved for future use */
> >> +    uint64_t unused64[3];    /* Reserved for future use */
> >>      char ramblock[256];
> >>      uint64_t offset[];
> >>  } __attribute__((packed)) MultiFDPacket_t;
> >> @@ -117,6 +120,10 @@ typedef struct {
> >>      ram_addr_t *normal;
> >>      /* num of non zero pages */
> >>      uint32_t normal_num;
> >> +    /* Pages that are  zero */
> >> +    ram_addr_t *zero;
> >> +    /* num of zero pages */
> >> +    uint32_t zero_num;
> >>      /* used for compression methods */
> >>      void *data;
> >>  }  MultiFDSendParams;
> >> @@ -162,6 +169,10 @@ typedef struct {
> >>      ram_addr_t *normal;
> >>      /* num of non zero pages */
> >>      uint32_t normal_num;
> >> +    /* Pages that are  zero */
> >> +    ram_addr_t *zero;
> >> +    /* num of zero pages */
> >> +    uint32_t zero_num;
> >>      /* used for de-compression methods */
> >>      void *data;
> >>  } MultiFDRecvParams;
> >> diff --git a/migration/multifd.c b/migration/multifd.c
> >> index d1ab823f98..2e4dffd6c6 100644
> >> --- a/migration/multifd.c
> >> +++ b/migration/multifd.c
> >> @@ -265,6 +265,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
> >>      packet->normal_pages = cpu_to_be32(p->normal_num);
> >>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
> >>      packet->packet_num = cpu_to_be64(p->packet_num);
> >> +    packet->zero_pages = cpu_to_be32(p->zero_num);
> >>  
> >>      if (p->pages->block) {
> >>          strncpy(packet->ramblock, p->pages->block->idstr, 256);
> >> @@ -327,7 +328,15 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> >>      p->next_packet_size = be32_to_cpu(packet->next_packet_size);
> >>      p->packet_num = be64_to_cpu(packet->packet_num);
> >>  
> >> -    if (p->normal_num == 0) {
> >> +    p->zero_num = be32_to_cpu(packet->zero_pages);
> >> +    if (p->zero_num > packet->pages_alloc - p->normal_num) {
> >> +        error_setg(errp, "multifd: received packet "
> >> +                   "with %d zero pages and expected maximum pages are %d",
> >> +                   p->normal_num, packet->pages_alloc - p->zero_num) ;
> >
> > should that be p->zero_num, packet->pages_alloc - p->normal_num ?
> > (and be %u)
> 
> Copy and paste error.  You are right on both cases.
> 
> Thanks.
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 21/23] multifd: Support for zero pages transmission
  2021-12-02 16:16       ` Dr. David Alan Gilbert
@ 2021-12-02 16:19         ` Juan Quintela
  2021-12-02 16:46           ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-12-02 16:19 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
>> > * Juan Quintela (quintela@redhat.com) wrote:
>> >> This patch adds counters and similar.  Logic will be added on the
>> >> following patch.
>> >> 
>> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> >> ---
>> >>  migration/multifd.h    | 13 ++++++++++++-
>> >>  migration/multifd.c    | 22 +++++++++++++++++++---
>> >>  migration/trace-events |  2 +-
>> >>  3 files changed, 32 insertions(+), 5 deletions(-)
>> >> 
>> >> diff --git a/migration/multifd.h b/migration/multifd.h
>> >> index 39e55d7f05..973315b545 100644
>> >> --- a/migration/multifd.h
>> >> +++ b/migration/multifd.h
>> >> @@ -49,7 +49,10 @@ typedef struct {
>> >>      /* size of the next packet that contains pages */
>> >>      uint32_t next_packet_size;
>> >>      uint64_t packet_num;
>> >> -    uint64_t unused[4];    /* Reserved for future use */
>> >> +    /* zero pages */
>> >> +    uint32_t zero_pages;
>> >
>> > Had you considered just adding a flag, MULTIFD_FLAG_ZERO to the packet?
>> 
>> I *have* to also add the flag.
>
> I meant can't you add a flag to say that this whole packet is zero pages
> and then you only need one counter.

No, in general packets are going to transmit *both*, zero pages and
normal pages.  It depends on the content that one receives.
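
(Schematically, with the fields the series adds to MultiFDPacket_t, every
packet header carries both counters side by side:

    uint32_t normal_pages;   /* non-zero pages described by this packet */
    uint32_t zero_pages;     /* zero pages described by this packet     */
    ...
    uint64_t offset[];       /* offsets for both kinds of pages         */

so a single packet can describe any mix of the two.)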

Later, Juan.

> Dave
>
>> I was waiting for 7.0 to get out, because I still have to do the
>> compatibility bits.  Otherwise you can't migrate to an old multifd version.
>> 
>> >
>> >> +    uint32_t unused32[1];    /* Reserved for future use */
>> >> +    uint64_t unused64[3];    /* Reserved for future use */
>> >>      char ramblock[256];
>> >>      uint64_t offset[];
>> >>  } __attribute__((packed)) MultiFDPacket_t;
>> >> @@ -117,6 +120,10 @@ typedef struct {
>> >>      ram_addr_t *normal;
>> >>      /* num of non zero pages */
>> >>      uint32_t normal_num;
>> >> +    /* Pages that are  zero */
>> >> +    ram_addr_t *zero;
>> >> +    /* num of zero pages */
>> >> +    uint32_t zero_num;
>> >>      /* used for compression methods */
>> >>      void *data;
>> >>  }  MultiFDSendParams;
>> >> @@ -162,6 +169,10 @@ typedef struct {
>> >>      ram_addr_t *normal;
>> >>      /* num of non zero pages */
>> >>      uint32_t normal_num;
>> >> +    /* Pages that are  zero */
>> >> +    ram_addr_t *zero;
>> >> +    /* num of zero pages */
>> >> +    uint32_t zero_num;
>> >>      /* used for de-compression methods */
>> >>      void *data;
>> >>  } MultiFDRecvParams;
>> >> diff --git a/migration/multifd.c b/migration/multifd.c
>> >> index d1ab823f98..2e4dffd6c6 100644
>> >> --- a/migration/multifd.c
>> >> +++ b/migration/multifd.c
>> >> @@ -265,6 +265,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>> >>      packet->normal_pages = cpu_to_be32(p->normal_num);
>> >>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>> >>      packet->packet_num = cpu_to_be64(p->packet_num);
>> >> +    packet->zero_pages = cpu_to_be32(p->zero_num);
>> >>  
>> >>      if (p->pages->block) {
>> >>          strncpy(packet->ramblock, p->pages->block->idstr, 256);
>> >> @@ -327,7 +328,15 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>> >>      p->next_packet_size = be32_to_cpu(packet->next_packet_size);
>> >>      p->packet_num = be64_to_cpu(packet->packet_num);
>> >>  
>> >> -    if (p->normal_num == 0) {
>> >> +    p->zero_num = be32_to_cpu(packet->zero_pages);
>> >> +    if (p->zero_num > packet->pages_alloc - p->normal_num) {
>> >> +        error_setg(errp, "multifd: received packet "
>> >> +                   "with %d zero pages and expected maximum pages are %d",
>> >> +                   p->normal_num, packet->pages_alloc - p->zero_num) ;
>> >
>> > should that be p->zero_num, packet->pages_alloc - p->normal_num ?
>> > (and be %u)
>> 
>> Copy and paste error.  You are right on both cases.
>> 
>> Thanks.
>> 



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 22/23] multifd: Zero pages transmission
  2021-11-24 10:06 ` [PATCH v3 22/23] multifd: Zero " Juan Quintela
@ 2021-12-02 16:42   ` Dr. David Alan Gilbert
  2021-12-02 16:49     ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-02 16:42 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> This implements the zero page detection and handling.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/multifd.c | 33 +++++++++++++++++++++++++++++++--
>  1 file changed, 31 insertions(+), 2 deletions(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 2e4dffd6c6..5c1fc70ce3 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -11,6 +11,7 @@
>   */
>  
>  #include "qemu/osdep.h"
> +#include "qemu/cutils.h"
>  #include "qemu/rcu.h"
>  #include "exec/target_page.h"
>  #include "sysemu/sysemu.h"
> @@ -277,6 +278,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  
>          packet->offset[i] = cpu_to_be64(temp);
>      }
> +    for (i = 0; i < p->zero_num; i++) {
> +        /* there are architectures where ram_addr_t is 32 bit */
> +        uint64_t temp = p->zero[i];
> +
> +        packet->offset[p->normal_num + i] = cpu_to_be64(temp);

OK, so if I'm understanding correctly here, the packet->offset array
starts with the 'normals' and then the zeros?
If so, that probably needs a comment somewhere.
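
(For reference, the layout that multifd_send_fill_packet() produces in the
hunk above would then be, assuming that reading is right:

    /* offset[0]          .. offset[normal_num - 1]             : normal pages */
    /* offset[normal_num] .. offset[normal_num + zero_num - 1]  : zero pages   */

i.e. the normal offsets first, with the zero offsets appended after them.)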

Other than that,


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> +    }
>  }
>  
>  static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> @@ -362,6 +369,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>          p->normal[i] = offset;
>      }
>  
> +    for (i = 0; i < p->zero_num; i++) {
> +        uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
> +
> +        if (offset > (block->used_length - page_size)) {
> +            error_setg(errp, "multifd: offset too long %" PRIu64
> +                       " (max " RAM_ADDR_FMT ")",
> +                       offset, block->used_length);
> +            return -1;
> +        }
> +        p->zero[i] = offset;
> +    }
> +
>      return 0;
>  }
>  
> @@ -652,8 +671,14 @@ static void *multifd_send_thread(void *opaque)
>              p->zero_num = 0;
>  
>              for (int i = 0; i < p->pages->num; i++) {
> -                p->normal[p->normal_num] = p->pages->offset[i];
> -                p->normal_num++;
> +                if (buffer_is_zero(p->pages->block->host + p->pages->offset[i],
> +                                   qemu_target_page_size())) {
> +                    p->zero[p->zero_num] = p->pages->offset[i];
> +                    p->zero_num++;
> +                } else {
> +                    p->normal[p->normal_num] = p->pages->offset[i];
> +                    p->normal_num++;
> +                }
>              }
>  
>              if (p->normal_num) {
> @@ -1112,6 +1137,10 @@ static void *multifd_recv_thread(void *opaque)
>              }
>          }
>  
> +        for (int i = 0; i < p->zero_num; i++) {
> +            memset(p->host + p->zero[i], 0, qemu_target_page_size());
> +        }
> +
>          if (flags & MULTIFD_FLAG_SYNC) {
>              qemu_sem_post(&multifd_recv_state->sem_sync);
>              qemu_sem_wait(&p->sem_sync);
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 21/23] multifd: Support for zero pages transmission
  2021-12-02 16:19         ` Juan Quintela
@ 2021-12-02 16:46           ` Dr. David Alan Gilbert
  2021-12-02 16:52             ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-02 16:46 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> >> > * Juan Quintela (quintela@redhat.com) wrote:
> >> >> This patch adds counters and similar.  Logic will be added on the
> >> >> following patch.
> >> >> 
> >> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> >> ---
> >> >>  migration/multifd.h    | 13 ++++++++++++-
> >> >>  migration/multifd.c    | 22 +++++++++++++++++++---
> >> >>  migration/trace-events |  2 +-
> >> >>  3 files changed, 32 insertions(+), 5 deletions(-)
> >> >> 
> >> >> diff --git a/migration/multifd.h b/migration/multifd.h
> >> >> index 39e55d7f05..973315b545 100644
> >> >> --- a/migration/multifd.h
> >> >> +++ b/migration/multifd.h
> >> >> @@ -49,7 +49,10 @@ typedef struct {
> >> >>      /* size of the next packet that contains pages */
> >> >>      uint32_t next_packet_size;
> >> >>      uint64_t packet_num;
> >> >> -    uint64_t unused[4];    /* Reserved for future use */
> >> >> +    /* zero pages */
> >> >> +    uint32_t zero_pages;
> >> >
> >> > Had you considered just adding a flag, MULTIFD_FLAG_ZERO to the packet?
> >> 
> >> I *have* to also add the flag.
> >
> > I meant can't you add a flag to say that this whole packet is zero pages
> > and then you only need one counter.
> 
> No, in general packets are going to transmit *both*, zero pages and
> normal pages.  It depends on the content that one receives.

OK, I'd wondered if it was just easier to send two packets; but fine.

Dave

> Later, Juan.
> 
> > Dave
> >
> >> I was waiting for 7.0 to get out, because I still have to do the
> >> compatibility bits.  Otherwise you can't migrate to an old multifd version.
> >> 
> >> >
> >> >> +    uint32_t unused32[1];    /* Reserved for future use */
> >> >> +    uint64_t unused64[3];    /* Reserved for future use */
> >> >>      char ramblock[256];
> >> >>      uint64_t offset[];
> >> >>  } __attribute__((packed)) MultiFDPacket_t;
> >> >> @@ -117,6 +120,10 @@ typedef struct {
> >> >>      ram_addr_t *normal;
> >> >>      /* num of non zero pages */
> >> >>      uint32_t normal_num;
> >> >> +    /* Pages that are  zero */
> >> >> +    ram_addr_t *zero;
> >> >> +    /* num of zero pages */
> >> >> +    uint32_t zero_num;
> >> >>      /* used for compression methods */
> >> >>      void *data;
> >> >>  }  MultiFDSendParams;
> >> >> @@ -162,6 +169,10 @@ typedef struct {
> >> >>      ram_addr_t *normal;
> >> >>      /* num of non zero pages */
> >> >>      uint32_t normal_num;
> >> >> +    /* Pages that are  zero */
> >> >> +    ram_addr_t *zero;
> >> >> +    /* num of zero pages */
> >> >> +    uint32_t zero_num;
> >> >>      /* used for de-compression methods */
> >> >>      void *data;
> >> >>  } MultiFDRecvParams;
> >> >> diff --git a/migration/multifd.c b/migration/multifd.c
> >> >> index d1ab823f98..2e4dffd6c6 100644
> >> >> --- a/migration/multifd.c
> >> >> +++ b/migration/multifd.c
> >> >> @@ -265,6 +265,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
> >> >>      packet->normal_pages = cpu_to_be32(p->normal_num);
> >> >>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
> >> >>      packet->packet_num = cpu_to_be64(p->packet_num);
> >> >> +    packet->zero_pages = cpu_to_be32(p->zero_num);
> >> >>  
> >> >>      if (p->pages->block) {
> >> >>          strncpy(packet->ramblock, p->pages->block->idstr, 256);
> >> >> @@ -327,7 +328,15 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> >> >>      p->next_packet_size = be32_to_cpu(packet->next_packet_size);
> >> >>      p->packet_num = be64_to_cpu(packet->packet_num);
> >> >>  
> >> >> -    if (p->normal_num == 0) {
> >> >> +    p->zero_num = be32_to_cpu(packet->zero_pages);
> >> >> +    if (p->zero_num > packet->pages_alloc - p->normal_num) {
> >> >> +        error_setg(errp, "multifd: received packet "
> >> >> +                   "with %d zero pages and expected maximum pages are %d",
> >> >> +                   p->normal_num, packet->pages_alloc - p->zero_num) ;
> >> >
> >> > should that be p->zero_num, packet->pages_alloc - p->normal_num ?
> >> > (and be %u)
> >> 
> >> Copy and paste error.  You are right on both cases.
> >> 
> >> Thanks.
> >> 
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 22/23] multifd: Zero pages transmission
  2021-12-02 16:42   ` Dr. David Alan Gilbert
@ 2021-12-02 16:49     ` Juan Quintela
  0 siblings, 0 replies; 72+ messages in thread
From: Juan Quintela @ 2021-12-02 16:49 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> This implements the zero page detection and handling.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/multifd.c | 33 +++++++++++++++++++++++++++++++--
>>  1 file changed, 31 insertions(+), 2 deletions(-)
>> 
>> diff --git a/migration/multifd.c b/migration/multifd.c
>> index 2e4dffd6c6..5c1fc70ce3 100644
>> --- a/migration/multifd.c
>> +++ b/migration/multifd.c
>> @@ -11,6 +11,7 @@
>>   */
>>  
>>  #include "qemu/osdep.h"
>> +#include "qemu/cutils.h"
>>  #include "qemu/rcu.h"
>>  #include "exec/target_page.h"
>>  #include "sysemu/sysemu.h"
>> @@ -277,6 +278,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>>  
>>          packet->offset[i] = cpu_to_be64(temp);
>>      }
>> +    for (i = 0; i < p->zero_num; i++) {
>> +        /* there are architectures where ram_addr_t is 32 bit */
>> +        uint64_t temp = p->zero[i];
>> +
>> +        packet->offset[p->normal_num + i] = cpu_to_be64(temp);
>
> OK, so if I'm understanding correctly here, the packet->offset array
> starts with the 'normals' and then the zeros?
> If so that probably needs a comment somewhere.

Yeap.
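
For instance, the send-side fill loop could carry that comment directly
(just a sketch of where it could live):

    /*
     * packet->offset[] layout: the first normal_num entries are the
     * offsets of the normal pages, followed by zero_num entries with
     * the offsets of the zero pages.
     */
    for (i = 0; i < p->zero_num; i++) {
        /* there are architectures where ram_addr_t is 32 bit */
        uint64_t temp = p->zero[i];

        packet->offset[p->normal_num + i] = cpu_to_be64(temp);
    }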

> Other than that,

Thanks, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 21/23] multifd: Support for zero pages transmission
  2021-12-02 16:46           ` Dr. David Alan Gilbert
@ 2021-12-02 16:52             ` Juan Quintela
  0 siblings, 0 replies; 72+ messages in thread
From: Juan Quintela @ 2021-12-02 16:52 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
>> > * Juan Quintela (quintela@redhat.com) wrote:
>> >> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
>> >> > * Juan Quintela (quintela@redhat.com) wrote:
> >> >> >> This patch adds counters and similar.  Logic will be added in the
> >> >> >> following patch.
>> >> >> 
>> >> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> >> >> ---
>> >> >>  migration/multifd.h    | 13 ++++++++++++-
>> >> >>  migration/multifd.c    | 22 +++++++++++++++++++---
>> >> >>  migration/trace-events |  2 +-
>> >> >>  3 files changed, 32 insertions(+), 5 deletions(-)
>> >> >> 
>> >> >> diff --git a/migration/multifd.h b/migration/multifd.h
>> >> >> index 39e55d7f05..973315b545 100644
>> >> >> --- a/migration/multifd.h
>> >> >> +++ b/migration/multifd.h
>> >> >> @@ -49,7 +49,10 @@ typedef struct {
>> >> >>      /* size of the next packet that contains pages */
>> >> >>      uint32_t next_packet_size;
>> >> >>      uint64_t packet_num;
>> >> >> -    uint64_t unused[4];    /* Reserved for future use */
>> >> >> +    /* zero pages */
>> >> >> +    uint32_t zero_pages;
>> >> >
>> >> > Had you considered just adding a flag, MULTIFD_FLAG_ZERO to the packet?
>> >> 
>> >> I *have* to also add the flag.
>> >
>> > I meant can't you add a flag to say that this whole packet is zero pages
>> > and then you only need one counter.
>> 
>> No, in general packets are going to transmit *both*, zero pages and
>> normal pages.  It depends on the content that one receives.
>
> OK, I'd wondered if it was just easier to send two packets; but fine.

Zero pages travel for free.

To keep the packets the same size, we always send an array of 128
offsets in the packet (I am speaking about x86_64 here).

And since the receiver always gets an array of 128 offsets, there is room
in it for the zero pages, so there is no need for a different packet at all.
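
On the receive side that means the zero pages can be handled with nothing
more than the offsets already carried in the packet, roughly (hypothetical
sketch, not code from this series; p->host and page_size as used on the
recv side elsewhere in the series):

    for (i = 0; i < p->zero_num; i++) {
        /* zero-page offsets follow the normal-page offsets in offset[] */
        void *page = p->host + p->zero[i];

        memset(page, 0, page_size);   /* or skip it if it is already zero */
    }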

Later, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 23/23] migration: Use multifd before we check for the zero page
  2021-11-24 10:06 ` [PATCH v3 23/23] migration: Use multifd before we check for the zero page Juan Quintela
@ 2021-12-02 17:11   ` Dr. David Alan Gilbert
  2021-12-02 17:38     ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-02 17:11 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> So we use multifd to transmit zero pages.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 57efa67f20..3ae094f653 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2138,6 +2138,17 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>      ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
>      int res;
>  
> +    /*
> +     * Do not use multifd for:
> +     * 1. Compression as the first page in the new block should be posted out
> +     *    before sending the compressed page
> +     * 2. In postcopy as one whole host page should be placed
> +     */
> +    if (!save_page_use_compression(rs) && migrate_use_multifd()
> +        && !migration_in_postcopy()) {
> +        return ram_save_multifd_page(rs, block, offset);
> +    }
> +
>      if (control_save_page(rs, block, offset, &res)) {
>          return res;
>      }

Although I don't think it currently matters, I think that should be
after the control_save_page.
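i.e. roughly this ordering (the same two hunks as above, just swapped):

    if (control_save_page(rs, block, offset, &res)) {
        return res;
    }

    /*
     * Do not use multifd for:
     * 1. Compression as the first page in the new block should be posted out
     *    before sending the compressed page
     * 2. In postcopy as one whole host page should be placed
     */
    if (!save_page_use_compression(rs) && migrate_use_multifd()
        && !migration_in_postcopy()) {
        return ram_save_multifd_page(rs, block, offset);
    }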

Dave

> @@ -2160,17 +2171,6 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>          return res;
>      }
>  
> -    /*
> -     * Do not use multifd for:
> -     * 1. Compression as the first page in the new block should be posted out
> -     *    before sending the compressed page
> -     * 2. In postcopy as one whole host page should be placed
> -     */
> -    if (!save_page_use_compression(rs) && migrate_use_multifd()
> -        && !migration_in_postcopy()) {
> -        return ram_save_multifd_page(rs, block, offset);
> -    }
> -
>      return ram_save_page(rs, pss, last_stage);
>  }
>  
> -- 
> 2.33.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 23/23] migration: Use multifd before we check for the zero page
  2021-12-02 17:11   ` Dr. David Alan Gilbert
@ 2021-12-02 17:38     ` Juan Quintela
  2021-12-02 17:49       ` Dr. David Alan Gilbert
  2021-12-07  7:30       ` Peter Xu
  0 siblings, 2 replies; 72+ messages in thread
From: Juan Quintela @ 2021-12-02 17:38 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Leonardo Bras, qemu-devel, Peter Xu

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> So we use multifd to transmit zero pages.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 22 +++++++++++-----------
>>  1 file changed, 11 insertions(+), 11 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 57efa67f20..3ae094f653 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -2138,6 +2138,17 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>>      ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
>>      int res;
>>  
>> +    /*
>> +     * Do not use multifd for:
>> +     * 1. Compression as the first page in the new block should be posted out
>> +     *    before sending the compressed page
>> +     * 2. In postcopy as one whole host page should be placed
>> +     */
>> +    if (!save_page_use_compression(rs) && migrate_use_multifd()
>> +        && !migration_in_postcopy()) {
>> +        return ram_save_multifd_page(rs, block, offset);
>> +    }
>> +
>>      if (control_save_page(rs, block, offset, &res)) {
>>          return res;
>>      }
>
> Although I don't think it currently matters, I think that should be
> after the control_save_page.

This needs to be improved to be compatible with old versions.

But .... if we don't care about RDMA, why do we care about
control_save_page()?

Later, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 23/23] migration: Use multifd before we check for the zero page
  2021-12-02 17:38     ` Juan Quintela
@ 2021-12-02 17:49       ` Dr. David Alan Gilbert
  2021-12-07  7:30       ` Peter Xu
  1 sibling, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-02 17:49 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Peter Xu

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> So we use multifd to transmit zero pages.
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> ---
> >>  migration/ram.c | 22 +++++++++++-----------
> >>  1 file changed, 11 insertions(+), 11 deletions(-)
> >> 
> >> diff --git a/migration/ram.c b/migration/ram.c
> >> index 57efa67f20..3ae094f653 100644
> >> --- a/migration/ram.c
> >> +++ b/migration/ram.c
> >> @@ -2138,6 +2138,17 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
> >>      ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
> >>      int res;
> >>  
> >> +    /*
> >> +     * Do not use multifd for:
> >> +     * 1. Compression as the first page in the new block should be posted out
> >> +     *    before sending the compressed page
> >> +     * 2. In postcopy as one whole host page should be placed
> >> +     */
> >> +    if (!save_page_use_compression(rs) && migrate_use_multifd()
> >> +        && !migration_in_postcopy()) {
> >> +        return ram_save_multifd_page(rs, block, offset);
> >> +    }
> >> +
> >>      if (control_save_page(rs, block, offset, &res)) {
> >>          return res;
> >>      }
> >
> > Although I don't think it currently matters, I think that should be
> > after the control_save_page.
> 
> This needs to be improved to be compatible with old versions.
> 
> But .... if we don't care about RDMA, why do we care about
> control_save_page()?

That's why I said I don't think it currently matters; but the patch
seemed a little odd, since it changes this ordering, which isn't what
the commit message describes.

Dave

> Later, Juan.
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 18/23] multifd: Use normal pages array on the recv side
  2021-11-24 10:06 ` [PATCH v3 18/23] multifd: Use normal pages array on the recv side Juan Quintela
@ 2021-12-07  7:11   ` Peter Xu
  2021-12-10 10:41     ` Juan Quintela
  0 siblings, 1 reply; 72+ messages in thread
From: Peter Xu @ 2021-12-07  7:11 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, qemu-devel, Dr. David Alan Gilbert

On Wed, Nov 24, 2021 at 11:06:12AM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/multifd.h      |  8 +++++--
>  migration/multifd-zlib.c |  8 +++----
>  migration/multifd-zstd.c |  6 +++---
>  migration/multifd.c      | 45 ++++++++++++++++++----------------------
>  4 files changed, 33 insertions(+), 34 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 78e73df3ec..9fbcb7bb9a 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -151,12 +151,16 @@ typedef struct {
>      uint32_t next_packet_size;
>      /* packets sent through this channel */
>      uint64_t num_packets;
> -    /* pages sent through this channel */
> -    uint64_t num_pages;
> +    /* non zero pages sent through this channel */

s/send/recv/

> +    uint64_t num_normal_pages;

How about renaming it to "total_normal_pages"?  It's nearly impossible to
tell it apart from normal_num below just from their names.

I'd have the same comment for the previous patch.

Thanks,

>      /* syncs main thread and channels */
>      QemuSemaphore sem_sync;
>      /* buffers to recv */
>      struct iovec *iov;
> +    /* Pages that are not zero */
> +    ram_addr_t *normal;
> +    /* num of non zero pages */
> +    uint32_t normal_num;
>      /* used for de-compression methods */
>      void *data;
>  } MultiFDRecvParams;

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 23/23] migration: Use multifd before we check for the zero page
  2021-12-02 17:38     ` Juan Quintela
  2021-12-02 17:49       ` Dr. David Alan Gilbert
@ 2021-12-07  7:30       ` Peter Xu
  2021-12-13  9:03         ` Juan Quintela
  1 sibling, 1 reply; 72+ messages in thread
From: Peter Xu @ 2021-12-07  7:30 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, Dr. David Alan Gilbert, qemu-devel

On Thu, Dec 02, 2021 at 06:38:27PM +0100, Juan Quintela wrote:
> This needs to be improved to be compatible with old versions.

Any plan to let new binary work with old binary?

Maybe bump the version field of the multifd packet (along with a
multifd_version=2 parameter, set only on new machine types)?

PS: We should really have some handshake mechanism between src/dst, I dreamt it
for a long time..  So that we only need to specify the capability/parameters on
src someday and we'll never see incompatible migration failing randomly because
the handshake should guarantee no stupid mistake..  Sad.

> 
> But .... if we don't care about RDMA, why do we care about
> control_save_page()?

Could anyone help to explain why we don't care?  I still see bugfixes coming..

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 18/23] multifd: Use normal pages array on the recv side
  2021-12-07  7:11   ` Peter Xu
@ 2021-12-10 10:41     ` Juan Quintela
  0 siblings, 0 replies; 72+ messages in thread
From: Juan Quintela @ 2021-12-10 10:41 UTC (permalink / raw)
  To: Peter Xu; +Cc: Leonardo Bras, qemu-devel, Dr. David Alan Gilbert

Peter Xu <peterx@redhat.com> wrote:
> On Wed, Nov 24, 2021 at 11:06:12AM +0100, Juan Quintela wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/multifd.h      |  8 +++++--
>>  migration/multifd-zlib.c |  8 +++----
>>  migration/multifd-zstd.c |  6 +++---
>>  migration/multifd.c      | 45 ++++++++++++++++++----------------------
>>  4 files changed, 33 insertions(+), 34 deletions(-)
>> 
>> diff --git a/migration/multifd.h b/migration/multifd.h
>> index 78e73df3ec..9fbcb7bb9a 100644
>> --- a/migration/multifd.h
>> +++ b/migration/multifd.h
>> @@ -151,12 +151,16 @@ typedef struct {
>>      uint32_t next_packet_size;
>>      /* packets sent through this channel */
>>      uint64_t num_packets;
>> -    /* pages sent through this channel */
>> -    uint64_t num_pages;
>> +    /* non zero pages sent through this channel */
>
> s/send/recv/

Thanks.

>> +    uint64_t num_normal_pages;
>
> How about renaming it to "total_normal_pages"?  It's nearly impossible to
> tell it apart from normal_num below just from their names.

I can change it.  It makes some lines a bit longer, but that is
the price of better names.

> I'd have the same comment for the previous patch.

Ok.

Thanks, Juan.

>
> Thanks,
>
>>      /* syncs main thread and channels */
>>      QemuSemaphore sem_sync;
>>      /* buffers to recv */
>>      struct iovec *iov;
>> +    /* Pages that are not zero */
>> +    ram_addr_t *normal;
>> +    /* num of non zero pages */
>> +    uint32_t normal_num;
>>      /* used for de-compression methods */
>>      void *data;
>>  } MultiFDRecvParams;



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 23/23] migration: Use multifd before we check for the zero page
  2021-12-07  7:30       ` Peter Xu
@ 2021-12-13  9:03         ` Juan Quintela
  2021-12-15  1:39           ` Peter Xu
  0 siblings, 1 reply; 72+ messages in thread
From: Juan Quintela @ 2021-12-13  9:03 UTC (permalink / raw)
  To: Peter Xu; +Cc: Leonardo Bras, Dr. David Alan Gilbert, qemu-devel

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Dec 02, 2021 at 06:38:27PM +0100, Juan Quintela wrote:
>> This needs to be improved to be compatible with old versions.
>
> Any plan to let new binary work with old binary?

Yes, but I was waiting for 7.0 to get out.  Basically I need to do:

if (old)
    run the old code
else
    new code

This needs to be done in only a couple of places, but I need the
7.0 machine type to exist first so I can put the property there.

> Maybe boost the version field for multifd packet (along with a
> multifd_version=2 parameter and only set on new machine types)?

For now, we only need to add a flag for the ZERO_PAGE functionality.  If
we are on an older QEMU, just don't test for zero pages.  On reception, we
can just accept everything, i.e. if there are no zero pages, everything
is OK.
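
On the send side that could look roughly like this (hypothetical sketch;
multifd_zero_page_enabled() is an illustrative name, not something that
exists in the series yet):

    /* inside the per-page loop of the multifd send thread */
    if (multifd_zero_page_enabled()) {
        if (buffer_is_zero(block->host + offset, page_size)) {
            p->zero[p->zero_num++] = offset;
            continue;
        }
    }
    p->normal[p->normal_num++] = offset;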

> PS: We should really have some handshake mechanism between src/dst, I dreamt it
> for a long time..  So that we only need to specify the capability/parameters on
> src someday and we'll never see incompatible migration failing randomly because
> the handshake should guarantee no stupid mistake..  Sad.

That has been on my ToDo list for too long, just need the time to do
it.  It would make everything much, much easier.

>> But .... if we don't care about RDMA, why do we care about
>> control_save_page()?
>
> Could anyone help to explain why we don't care?  I still see bugfixes coming..

That sentence was inside a context: we don't care about RDMA while we are
using multifd.  If multifd ever supports RDMA, it would be a new
implementation that doesn't use such hooks.

IMVHO, the RDMA implementation in QEMU is quite bad.  For historical
reasons it had to go through the qemu_file abstraction for communication,
so it gives up the ability to do direct copies of pages.  If we already
require mlocking all the guest memory on both sides to use RDMA, the
*right* thing to do, from my point of view, is to just read the pages
"remotely" without any overhead.

Yes, that requires quite a few changes; I was not suggesting it is a
trivial task.

Later, Juan.



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 03/23] multifd: Rename used field to num
  2021-11-24 10:05 ` [PATCH v3 03/23] multifd: Rename used field to num Juan Quintela
  2021-11-24 19:37   ` Dr. David Alan Gilbert
@ 2021-12-13  9:34   ` Zheng Chuan via
  2021-12-13 15:17     ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 72+ messages in thread
From: Zheng Chuan via @ 2021-12-13  9:34 UTC (permalink / raw)
  To: Juan Quintela, qemu-devel
  Cc: Leonardo Bras, Dr. David Alan Gilbert, Peter Xu, Xiexiangyou

Hi, Juan,

Sorry, I forgot to send this to qemu-devel; resending it.

On 2021/11/24 18:05, Juan Quintela wrote:
> We will need to split it later into zero_num (number of zero pages) and
> normal_num (number of normal pages).  This name is better.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/multifd.h |  2 +-
>  migration/multifd.c | 38 +++++++++++++++++++-------------------
>  2 files changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 15c50ca0b2..86820dd028 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -55,7 +55,7 @@ typedef struct {
>  
>  typedef struct {
>      /* number of used pages */
> -    uint32_t used;
> +    uint32_t num;
>      /* number of allocated pages */
>      uint32_t allocated;
>      /* global number of generated multifd packets */
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 8125d0015c..8ea86d81dc 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -252,7 +252,7 @@ static MultiFDPages_t *multifd_pages_init(size_t size)
>  
>  static void multifd_pages_clear(MultiFDPages_t *pages)
>  {
> -    pages->used = 0;
> +    pages->num = 0;
>      pages->allocated = 0;
>      pages->packet_num = 0;
>      pages->block = NULL;
> @@ -270,7 +270,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  
>      packet->flags = cpu_to_be32(p->flags);
>      packet->pages_alloc = cpu_to_be32(p->pages->allocated);
> -    packet->pages_used = cpu_to_be32(p->pages->used);
> +    packet->pages_used = cpu_to_be32(p->pages->num);
>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>      packet->packet_num = cpu_to_be64(p->packet_num);
>  
> @@ -278,7 +278,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>          strncpy(packet->ramblock, p->pages->block->idstr, 256);
>      }
>  
> -    for (i = 0; i < p->pages->used; i++) {
> +    for (i = 0; i < p->pages->num; i++) {
>          /* there are architectures where ram_addr_t is 32 bit */
>          uint64_t temp = p->pages->offset[i];
>  
> @@ -332,18 +332,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>          p->pages = multifd_pages_init(packet->pages_alloc);
>      }
>  
> -    p->pages->used = be32_to_cpu(packet->pages_used);
> -    if (p->pages->used > packet->pages_alloc) {
> +    p->pages->num = be32_to_cpu(packet->pages_used);
> +    if (p->pages->num > packet->pages_alloc) {
>          error_setg(errp, "multifd: received packet "
>                     "with %d pages and expected maximum pages are %d",
> -                   p->pages->used, packet->pages_alloc) ;
> +                   p->pages->num, packet->pages_alloc) ;
>          return -1;
>      }
>  
>      p->next_packet_size = be32_to_cpu(packet->next_packet_size);
>      p->packet_num = be64_to_cpu(packet->packet_num);
>  
> -    if (p->pages->used == 0) {
> +    if (p->pages->num == 0) {
>          return 0;
>      }
>  
> @@ -356,7 +356,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>          return -1;
>      }
>  
> -    for (i = 0; i < p->pages->used; i++) {
> +    for (i = 0; i < p->pages->num; i++) {
>          uint64_t offset = be64_to_cpu(packet->offset[i]);
>  
>          if (offset > (block->used_length - page_size)) {
> @@ -443,13 +443,13 @@ static int multifd_send_pages(QEMUFile *f)
>          }
>          qemu_mutex_unlock(&p->mutex);
>      }
> -    assert(!p->pages->used);
> +    assert(!p->pages->num);
>      assert(!p->pages->block);
>  
>      p->packet_num = multifd_send_state->packet_num++;
>      multifd_send_state->pages = p->pages;
>      p->pages = pages;
> -    transferred = ((uint64_t) pages->used) * qemu_target_page_size()
> +    transferred = ((uint64_t) pages->num) * qemu_target_page_size()
>                  + p->packet_len;
The size of a zero page should not be accounted as the whole page size.
I think 'transferred' should be updated after you introduce zero_num in the following patches, for example:
+    transferred = ((uint64_t) p->normal_num) * qemu_target_page_size()
+               + ((uint64_t) p->zero_num) * sizeof(uint64_t);
Otherwise, migration time will get worse when a low bandwidth limit parameter is set.

I tested it with bandwidth limit of 100MB/s and it works fine:)
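
To put rough numbers on it (illustrative arithmetic only, not QEMU code):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* e.g. a 128-page packet where 120 of the pages happen to be zero */
        uint64_t page = 4096, normal = 8, zero = 120;

        uint64_t charged_today    = (normal + zero) * page;
        uint64_t charged_proposed = normal * page + zero * sizeof(uint64_t);

        /* prints 524288 vs 33728: counting zero pages as full pages eats
         * most of a low bandwidth limit without actually sending the bytes */
        printf("today: %" PRIu64 ", proposed: %" PRIu64 "\n",
               charged_today, charged_proposed);
        return 0;
    }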

>      qemu_file_update_transfer(f, transferred);
>      ram_counters.multifd_bytes += transferred;
> @@ -469,12 +469,12 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
>      }
>  
>      if (pages->block == block) {
> -        pages->offset[pages->used] = offset;
> -        pages->iov[pages->used].iov_base = block->host + offset;
> -        pages->iov[pages->used].iov_len = qemu_target_page_size();
> -        pages->used++;
> +        pages->offset[pages->num] = offset;
> +        pages->iov[pages->num].iov_base = block->host + offset;
> +        pages->iov[pages->num].iov_len = qemu_target_page_size();
> +        pages->num++;
>  
> -        if (pages->used < pages->allocated) {
> +        if (pages->num < pages->allocated) {
>              return 1;
>          }
>      }
> @@ -586,7 +586,7 @@ void multifd_send_sync_main(QEMUFile *f)
>      if (!migrate_use_multifd()) {
>          return;
>      }
> -    if (multifd_send_state->pages->used) {
> +    if (multifd_send_state->pages->num) {
>          if (multifd_send_pages(f) < 0) {
>              error_report("%s: multifd_send_pages fail", __func__);
>              return;
> @@ -649,7 +649,7 @@ static void *multifd_send_thread(void *opaque)
>          qemu_mutex_lock(&p->mutex);
>  
>          if (p->pending_job) {
> -            uint32_t used = p->pages->used;
> +            uint32_t used = p->pages->num;
>              uint64_t packet_num = p->packet_num;
>              flags = p->flags;
>  
> @@ -665,7 +665,7 @@ static void *multifd_send_thread(void *opaque)
>              p->flags = 0;
>              p->num_packets++;
>              p->num_pages += used;
> -            p->pages->used = 0;
> +            p->pages->num = 0;
>              p->pages->block = NULL;
>              qemu_mutex_unlock(&p->mutex);
>  
> @@ -1091,7 +1091,7 @@ static void *multifd_recv_thread(void *opaque)
>              break;
>          }
>  
> -        used = p->pages->used;
> +        used = p->pages->num;
>          flags = p->flags;
>          /* recv methods don't know how to handle the SYNC flag */
>          p->flags &= ~MULTIFD_FLAG_SYNC;
> 

-- 
Regards.
Chuan


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 03/23] multifd: Rename used field to num
  2021-12-13  9:34   ` Zheng Chuan via
@ 2021-12-13 15:17     ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 72+ messages in thread
From: Dr. David Alan Gilbert @ 2021-12-13 15:17 UTC (permalink / raw)
  To: Zheng Chuan
  Cc: Xiexiangyou, Leonardo Bras, qemu-devel, Peter Xu, Juan Quintela

* Zheng Chuan (zhengchuan@huawei.com) wrote:
> Hi, Juan,
> 
> Sorry, I forgot to send this to qemu-devel; resending it.
> 
> On 2021/11/24 18:05, Juan Quintela wrote:
> > We will need to split it later into zero_num (number of zero pages) and
> > normal_num (number of normal pages).  This name is better.
> > 
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > ---
> >  migration/multifd.h |  2 +-
> >  migration/multifd.c | 38 +++++++++++++++++++-------------------
> >  2 files changed, 20 insertions(+), 20 deletions(-)
> > 
> > diff --git a/migration/multifd.h b/migration/multifd.h
> > index 15c50ca0b2..86820dd028 100644
> > --- a/migration/multifd.h
> > +++ b/migration/multifd.h
> > @@ -55,7 +55,7 @@ typedef struct {
> >  
> >  typedef struct {
> >      /* number of used pages */
> > -    uint32_t used;
> > +    uint32_t num;
> >      /* number of allocated pages */
> >      uint32_t allocated;
> >      /* global number of generated multifd packets */
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index 8125d0015c..8ea86d81dc 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -252,7 +252,7 @@ static MultiFDPages_t *multifd_pages_init(size_t size)
> >  
> >  static void multifd_pages_clear(MultiFDPages_t *pages)
> >  {
> > -    pages->used = 0;
> > +    pages->num = 0;
> >      pages->allocated = 0;
> >      pages->packet_num = 0;
> >      pages->block = NULL;
> > @@ -270,7 +270,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
> >  
> >      packet->flags = cpu_to_be32(p->flags);
> >      packet->pages_alloc = cpu_to_be32(p->pages->allocated);
> > -    packet->pages_used = cpu_to_be32(p->pages->used);
> > +    packet->pages_used = cpu_to_be32(p->pages->num);
> >      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
> >      packet->packet_num = cpu_to_be64(p->packet_num);
> >  
> > @@ -278,7 +278,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
> >          strncpy(packet->ramblock, p->pages->block->idstr, 256);
> >      }
> >  
> > -    for (i = 0; i < p->pages->used; i++) {
> > +    for (i = 0; i < p->pages->num; i++) {
> >          /* there are architectures where ram_addr_t is 32 bit */
> >          uint64_t temp = p->pages->offset[i];
> >  
> > @@ -332,18 +332,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> >          p->pages = multifd_pages_init(packet->pages_alloc);
> >      }
> >  
> > -    p->pages->used = be32_to_cpu(packet->pages_used);
> > -    if (p->pages->used > packet->pages_alloc) {
> > +    p->pages->num = be32_to_cpu(packet->pages_used);
> > +    if (p->pages->num > packet->pages_alloc) {
> >          error_setg(errp, "multifd: received packet "
> >                     "with %d pages and expected maximum pages are %d",
> > -                   p->pages->used, packet->pages_alloc) ;
> > +                   p->pages->num, packet->pages_alloc) ;
> >          return -1;
> >      }
> >  
> >      p->next_packet_size = be32_to_cpu(packet->next_packet_size);
> >      p->packet_num = be64_to_cpu(packet->packet_num);
> >  
> > -    if (p->pages->used == 0) {
> > +    if (p->pages->num == 0) {
> >          return 0;
> >      }
> >  
> > @@ -356,7 +356,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> >          return -1;
> >      }
> >  
> > -    for (i = 0; i < p->pages->used; i++) {
> > +    for (i = 0; i < p->pages->num; i++) {
> >          uint64_t offset = be64_to_cpu(packet->offset[i]);
> >  
> >          if (offset > (block->used_length - page_size)) {
> > @@ -443,13 +443,13 @@ static int multifd_send_pages(QEMUFile *f)
> >          }
> >          qemu_mutex_unlock(&p->mutex);
> >      }
> > -    assert(!p->pages->used);
> > +    assert(!p->pages->num);
> >      assert(!p->pages->block);
> >  
> >      p->packet_num = multifd_send_state->packet_num++;
> >      multifd_send_state->pages = p->pages;
> >      p->pages = pages;
> > -    transferred = ((uint64_t) pages->used) * qemu_target_page_size()
> > +    transferred = ((uint64_t) pages->num) * qemu_target_page_size()
> >                  + p->packet_len;
> The size of a zero page should not be accounted as the whole page size.
> I think 'transferred' should be updated after you introduce zero_num in the following patches, for example:
> +    transferred = ((uint64_t) p->normal_num) * qemu_target_page_size()
> +               + ((uint64_t) p->zero_num) * sizeof(uint64_t);
> Otherwise, migration time will get worse when a low bandwidth limit parameter is set.
> 
> I tested it with bandwidth limit of 100MB/s and it works fine:)

Yes I think you're right; 'transferred' is normally a measure of used
network bandwidth.

Dave

> >      qemu_file_update_transfer(f, transferred);
> >      ram_counters.multifd_bytes += transferred;
> > @@ -469,12 +469,12 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
> >      }
> >  
> >      if (pages->block == block) {
> > -        pages->offset[pages->used] = offset;
> > -        pages->iov[pages->used].iov_base = block->host + offset;
> > -        pages->iov[pages->used].iov_len = qemu_target_page_size();
> > -        pages->used++;
> > +        pages->offset[pages->num] = offset;
> > +        pages->iov[pages->num].iov_base = block->host + offset;
> > +        pages->iov[pages->num].iov_len = qemu_target_page_size();
> > +        pages->num++;
> >  
> > -        if (pages->used < pages->allocated) {
> > +        if (pages->num < pages->allocated) {
> >              return 1;
> >          }
> >      }
> > @@ -586,7 +586,7 @@ void multifd_send_sync_main(QEMUFile *f)
> >      if (!migrate_use_multifd()) {
> >          return;
> >      }
> > -    if (multifd_send_state->pages->used) {
> > +    if (multifd_send_state->pages->num) {
> >          if (multifd_send_pages(f) < 0) {
> >              error_report("%s: multifd_send_pages fail", __func__);
> >              return;
> > @@ -649,7 +649,7 @@ static void *multifd_send_thread(void *opaque)
> >          qemu_mutex_lock(&p->mutex);
> >  
> >          if (p->pending_job) {
> > -            uint32_t used = p->pages->used;
> > +            uint32_t used = p->pages->num;
> >              uint64_t packet_num = p->packet_num;
> >              flags = p->flags;
> >  
> > @@ -665,7 +665,7 @@ static void *multifd_send_thread(void *opaque)
> >              p->flags = 0;
> >              p->num_packets++;
> >              p->num_pages += used;
> > -            p->pages->used = 0;
> > +            p->pages->num = 0;
> >              p->pages->block = NULL;
> >              qemu_mutex_unlock(&p->mutex);
> >  
> > @@ -1091,7 +1091,7 @@ static void *multifd_recv_thread(void *opaque)
> >              break;
> >          }
> >  
> > -        used = p->pages->used;
> > +        used = p->pages->num;
> >          flags = p->flags;
> >          /* recv methods don't know how to handle the SYNC flag */
> >          p->flags &= ~MULTIFD_FLAG_SYNC;
> > 
> 
> -- 
> Regards.
> Chuan
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v3 23/23] migration: Use multifd before we check for the zero page
  2021-12-13  9:03         ` Juan Quintela
@ 2021-12-15  1:39           ` Peter Xu
  0 siblings, 0 replies; 72+ messages in thread
From: Peter Xu @ 2021-12-15  1:39 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Leonardo Bras, Dr. David Alan Gilbert, qemu-devel

On Mon, Dec 13, 2021 at 10:03:53AM +0100, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > On Thu, Dec 02, 2021 at 06:38:27PM +0100, Juan Quintela wrote:
> >> This needs to be improved to be compatible with old versions.
> >
> > Any plan to let new binary work with old binary?
> 
> Yes, but I was waiting for 7.0 to get out.  Basically I need to do:
> 
> if (old)
>     run the old code
> else
>     new code
> 
> This needs to be done in only a couple of places, but I need the
> 7.0 machine type to exist first so I can put the property there.

OK.  We can also have the tunable default to false until the new machine
type arrives; then the series won't be blocked by the machine type
patch, and only the final patch will need adjusting there.

> 
> > Maybe bump the version field of the multifd packet (along with a
> > multifd_version=2 parameter, set only on new machine types)?
> 
> For now, we only need to add a flag for the ZERO_PAGE functionality.  If
> we are on an older QEMU, just don't test for zero pages.  On reception, we
> can just accept everything, i.e. if there are no zero pages, everything
> is OK.

Do you mean zero-page detection for multifd=on only?  Otherwise it could regress
old machine types in some very common scenarios, IIUC, e.g. idle guests.

> 
> > PS: We should really have some handshake mechanism between src/dst, I dreamt it
> > for a long time..  So that we only need to specify the capability/parameters on
> > src someday and we'll never see incompatible migration failing randomly because
> > the handshake should guarantee no stupid mistake..  Sad.
> 
> That has been on my ToDo list for too long, just need the time to do
> it.  It would make everything much, much easier.
> 
> >> But .... if we don't care about RDMA, why do we care about
> >> control_save_page()?
> >
> > Could anyone help to explain why we don't care?  I still see bugfixes coming..
> 
> That sentence was inside a context: we don't care about RDMA while we are
> using multifd.  If multifd ever supports RDMA, it would be a new
> implementation that doesn't use such hooks.
> 
> IMVHO, the RDMA implementation in QEMU is quite bad.  For historical
> reasons it had to go through the qemu_file abstraction for communication,
> so it gives up the ability to do direct copies of pages.  If we already
> require mlocking all the guest memory on both sides to use RDMA, the
> *right* thing to do, from my point of view, is to just read the pages
> "remotely" without any overhead.
> 
> Yes, that requires quite a few changes; I was not suggesting it is a
> trivial task.

I see!

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 72+ messages in thread

end of thread, other threads:[~2021-12-15  1:42 UTC | newest]

Thread overview: 72+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
2021-11-24 10:05 ` [PATCH v3 01/23] multifd: Delete useless operation Juan Quintela
2021-11-24 18:48   ` Dr. David Alan Gilbert
2021-11-25  7:24     ` Juan Quintela
2021-11-25 19:46       ` Dr. David Alan Gilbert
2021-11-26  9:39         ` Juan Quintela
2021-11-24 10:05 ` [PATCH v3 02/23] migration: Never call twice qemu_target_page_size() Juan Quintela
2021-11-24 18:52   ` Dr. David Alan Gilbert
2021-11-25  7:26     ` Juan Quintela
2021-11-24 10:05 ` [PATCH v3 03/23] multifd: Rename used field to num Juan Quintela
2021-11-24 19:37   ` Dr. David Alan Gilbert
2021-11-25  7:28     ` Juan Quintela
2021-11-25 18:30       ` Dr. David Alan Gilbert
2021-12-13  9:34   ` Zheng Chuan via
2021-12-13 15:17     ` Dr. David Alan Gilbert
2021-11-24 10:05 ` [PATCH v3 04/23] multifd: Add missing documention Juan Quintela
2021-11-25 18:38   ` Dr. David Alan Gilbert
2021-11-26  9:34     ` Juan Quintela
2021-11-24 10:05 ` [PATCH v3 05/23] multifd: The variable is only used inside the loop Juan Quintela
2021-11-25 18:40   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 06/23] multifd: remove used parameter from send_prepare() method Juan Quintela
2021-11-25 18:51   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 07/23] multifd: remove used parameter from send_recv_pages() method Juan Quintela
2021-11-25 18:53   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 08/23] multifd: Fill offset and block for reception Juan Quintela
2021-11-25 19:41   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 09/23] multifd: Make zstd compression method not use iovs Juan Quintela
2021-11-29 17:16   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 10/23] multifd: Make zlib " Juan Quintela
2021-11-29 17:30   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 11/23] multifd: Move iov from pages to params Juan Quintela
2021-11-29 17:52   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 12/23] multifd: Make zlib use iov's Juan Quintela
2021-11-29 18:01   ` Dr. David Alan Gilbert
2021-11-29 18:21     ` Juan Quintela
2021-11-24 10:06 ` [PATCH v3 13/23] multifd: Make zstd " Juan Quintela
2021-11-29 18:03   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 14/23] multifd: Remove send_write() method Juan Quintela
2021-11-29 18:19   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 15/23] multifd: Use a single writev on the send side Juan Quintela
2021-11-29 18:35   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 16/23] multifd: Unfold "used" variable by its value Juan Quintela
2021-11-30 10:45   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 17/23] multifd: Use normal pages array on the send side Juan Quintela
2021-11-30 10:50   ` Dr. David Alan Gilbert
2021-11-30 12:01     ` Juan Quintela
2021-12-01 10:59       ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 18/23] multifd: Use normal pages array on the recv side Juan Quintela
2021-12-07  7:11   ` Peter Xu
2021-12-10 10:41     ` Juan Quintela
2021-11-24 10:06 ` [PATCH v3 19/23] multifd: recv side only needs the RAMBlock host address Juan Quintela
2021-12-01 18:56   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 20/23] multifd: Rename pages_used to normal_pages Juan Quintela
2021-12-01 19:00   ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 21/23] multifd: Support for zero pages transmission Juan Quintela
2021-12-02 11:36   ` Dr. David Alan Gilbert
2021-12-02 12:08     ` Juan Quintela
2021-12-02 16:16       ` Dr. David Alan Gilbert
2021-12-02 16:19         ` Juan Quintela
2021-12-02 16:46           ` Dr. David Alan Gilbert
2021-12-02 16:52             ` Juan Quintela
2021-11-24 10:06 ` [PATCH v3 22/23] multifd: Zero " Juan Quintela
2021-12-02 16:42   ` Dr. David Alan Gilbert
2021-12-02 16:49     ` Juan Quintela
2021-11-24 10:06 ` [PATCH v3 23/23] migration: Use multifd before we check for the zero page Juan Quintela
2021-12-02 17:11   ` Dr. David Alan Gilbert
2021-12-02 17:38     ` Juan Quintela
2021-12-02 17:49       ` Dr. David Alan Gilbert
2021-12-07  7:30       ` Peter Xu
2021-12-13  9:03         ` Juan Quintela
2021-12-15  1:39           ` Peter Xu
2021-11-24 10:24 ` [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Peter Xu
