* [PATCH v3 00/18] Support Multifd for RDMA migration
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

I am continuing the work to support multifd for RDMA migration, based on my
colleague Zhimin's earlier series. :)

The previous RFC patches are listed below:
v1:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg669455.html
v2:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg679188.html

As described in the previous RFC, RDMA bandwidth is not fully utilized on
25+ Gigabit NICs because RDMA migration uses a single channel. This patch
series adds multifd support for RDMA migration on top of the existing
multifd framework.

The comparison between the original and multifd RDMA migration was re-tested
for v3. The VM configuration for the migration is as follows:
- the VM uses 4K pages;
- the number of vCPUs is 4;
- the total memory is 16 GB;
- the 'mempress' tool is used to stress the VM (mempress 8000 500);
- a 25 Gigabit network card is used for the migration.

For the original RDMA and multifd RDMA migration, the total migration times
of the VM are as follows:
+---------------+------------------+--------------+
|               | NOT rdma-pin-all | rdma-pin-all |
+---------------+------------------+--------------+
| original RDMA |       26 s       |     29 s     |
+---------------+------------------+--------------+
| multifd RDMA  |       16 s       |     17 s     |
+---------------+------------------+--------------+

Test the multifd RDMA migration like this:

virsh migrate --live --multiFd --migrateuri rdma://192.168.1.100 [VM] \
    --listen-address 0.0.0.0 qemu+tcp://192.168.1.100/system --verbose
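
For reference, when driving QEMU directly rather than through libvirt, a
roughly equivalent HMP sequence would be the following (the port number is
illustrative, and the destination is assumed to have been started with
-incoming rdma:0.0.0.0:4444):

    (qemu) migrate_set_capability multifd on
    (qemu) migrate_set_parameter multifd-channels 4
    (qemu) migrate -d rdma:192.168.1.100:4444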

v2 -> v3:
    create multifd ops for both tcp and rdma
    do not export rdma structs, to keep the multifd code clean
    fix build issue for non-rdma builds
    fix some codestyle issues and buggy code

Chuan Zheng (18):
  migration/rdma: add the 'migrate_use_rdma_pin_all' function
  migration/rdma: judge whether or not the RDMA is used for migration
  migration/rdma: create multifd_setup_ops for Tx/Rx thread
  migration/rdma: add multifd_setup_ops for rdma
  migration/rdma: do not need sync main for rdma
  migration/rdma: export MultiFDSendParams/MultiFDRecvParams
  migration/rdma: add rdma field into multifd send/recv param
  migration/rdma: export getQIOChannel to get QIOchannel in rdma
  migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
  migration/rdma: Create the multifd recv channels for RDMA
  migration/rdma: record host_port for multifd RDMA
  migration/rdma: Create the multifd send channels for RDMA
  migration/rdma: Add the function for dynamic page registration
  migration/rdma: register memory for multifd RDMA channels
  migration/rdma: only register the memory for multifd channels
  migration/rdma: add rdma_channel into MigrationState field
  migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all
    mode
  migration/rdma: RDMA cleanup for multifd migration

 migration/migration.c |  24 +++
 migration/migration.h |  11 ++
 migration/multifd.c   |  97 +++++++++-
 migration/multifd.h   |  24 +++
 migration/qemu-file.c |   5 +
 migration/qemu-file.h |   1 +
 migration/rdma.c      | 503 +++++++++++++++++++++++++++++++++++++++++++++++++-
 7 files changed, 653 insertions(+), 12 deletions(-)

-- 
1.8.3.1




* [PATCH v3 01/18] migration/rdma: add the 'migrate_use_rdma_pin_all' function
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/migration.c | 9 +++++++++
 migration/migration.h | 1 +
 2 files changed, 10 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index 0575ecb..64ae417 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2329,6 +2329,15 @@ bool migrate_use_events(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_EVENTS];
 }
 
+bool migrate_use_rdma_pin_all(void)
+{
+    MigrationState *s;
+
+    s = migrate_get_current();
+
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL];
+}
+
 bool migrate_use_multifd(void)
 {
     MigrationState *s;
diff --git a/migration/migration.h b/migration/migration.h
index deb411a..74fd790 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -300,6 +300,7 @@ bool migrate_ignore_shared(void);
 bool migrate_validate_uuid(void);
 
 bool migrate_auto_converge(void);
+bool migrate_use_rdma_pin_all(void);
 bool migrate_use_multifd(void);
 bool migrate_pause_before_switchover(void);
 int migrate_multifd_channels(void);
-- 
1.8.3.1
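
For context, patch 12/18 in this series passes this helper's result into the
per-channel RDMA setup; simplified from that patch:

    /* per-channel source init, honouring the rdma-pin-all capability */
    ret = qemu_rdma_source_init(p->rdma,
                                migrate_use_rdma_pin_all(),
                                &local_err);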




* [PATCH v3 02/18] migration/rdma: judge whether or not the RDMA is used for migration
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Add enabled_rdma_migration to MigrationState to indicate whether
RDMA is used for migration.

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/migration.c | 13 +++++++++++++
 migration/migration.h |  6 ++++++
 2 files changed, 19 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index 64ae417..be6166a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -389,7 +389,9 @@ void migrate_add_address(SocketAddress *address)
 void qemu_start_incoming_migration(const char *uri, Error **errp)
 {
     const char *p = NULL;
+    MigrationState *s = migrate_get_current();
 
+    s->enabled_rdma_migration = false;
     qapi_event_send_migration(MIGRATION_STATUS_SETUP);
     if (!strcmp(uri, "defer")) {
         deferred_incoming_migration(errp);
@@ -399,6 +401,7 @@ void qemu_start_incoming_migration(const char *uri, Error **errp)
         socket_start_incoming_migration(p ? p : uri, errp);
 #ifdef CONFIG_RDMA
     } else if (strstart(uri, "rdma:", &p)) {
+        s->enabled_rdma_migration = true;
         rdma_start_incoming_migration(p, errp);
 #endif
     } else if (strstart(uri, "exec:", &p)) {
@@ -1887,6 +1890,7 @@ void migrate_init(MigrationState *s)
     s->start_postcopy = false;
     s->postcopy_after_devices = false;
     s->migration_thread_running = false;
+    s->enabled_rdma_migration = false;
     error_free(s->error);
     s->error = NULL;
     s->hostname = NULL;
@@ -2115,6 +2119,7 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
         socket_start_outgoing_migration(s, p ? p : uri, &local_err);
 #ifdef CONFIG_RDMA
     } else if (strstart(uri, "rdma:", &p)) {
+        s->enabled_rdma_migration = true;
         rdma_start_outgoing_migration(s, p, &local_err);
 #endif
     } else if (strstart(uri, "exec:", &p)) {
@@ -2329,6 +2334,14 @@ bool migrate_use_events(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_EVENTS];
 }
 
+bool migrate_use_rdma(void)
+{
+    MigrationState *s;
+    s = migrate_get_current();
+
+    return s->enabled_rdma_migration;
+}
+
 bool migrate_use_rdma_pin_all(void)
 {
     MigrationState *s;
diff --git a/migration/migration.h b/migration/migration.h
index 74fd790..e92eb29 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -264,6 +264,11 @@ struct MigrationState
      * This save hostname when out-going migration starts
      */
     char *hostname;
+
+    /*
+     * Enable RDMA migration
+     */
+    bool enabled_rdma_migration;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
@@ -300,6 +305,7 @@ bool migrate_ignore_shared(void);
 bool migrate_validate_uuid(void);
 
 bool migrate_auto_converge(void);
+bool migrate_use_rdma(void);
 bool migrate_use_rdma_pin_all(void);
 bool migrate_use_multifd(void);
 bool migrate_pause_before_switchover(void);
-- 
1.8.3.1
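
For context, later patches branch on the new accessor; for example, patch
05/18 uses it to skip the multifd sync packets when RDMA is in use:

    /* Do not need sync for rdma */
    if (migrate_use_rdma()) {
        return;
    }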




* [PATCH v3 03/18] migration/rdma: create multifd_setup_ops for Tx/Rx thread
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Create multifd_setup_ops for the Tx/Rx threads; no logic change.

Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/multifd.c | 44 +++++++++++++++++++++++++++++++++++++++-----
 migration/multifd.h |  7 +++++++
 2 files changed, 46 insertions(+), 5 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 68b171f..1f82307 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -383,6 +383,8 @@ struct {
     int exiting;
     /* multifd ops */
     MultiFDMethods *ops;
+    /* multifd setup ops */
+    MultiFDSetup *setup_ops;
 } *multifd_send_state;
 
 /*
@@ -790,8 +792,9 @@ static bool multifd_channel_connect(MultiFDSendParams *p,
         } else {
             /* update for tls qio channel */
             p->c = ioc;
-            qemu_thread_create(&p->thread, p->name, multifd_send_thread, p,
-                                   QEMU_THREAD_JOINABLE);
+            qemu_thread_create(&p->thread, p->name,
+                               multifd_send_state->setup_ops->send_thread_setup,
+                               p, QEMU_THREAD_JOINABLE);
        }
        return false;
     }
@@ -839,6 +842,11 @@ cleanup:
     multifd_new_send_channel_cleanup(p, sioc, local_err);
 }
 
+static void multifd_send_channel_setup(MultiFDSendParams *p)
+{
+    socket_send_channel_create(multifd_new_send_channel_async, p);
+}
+
 int multifd_save_setup(Error **errp)
 {
     int thread_count;
@@ -856,6 +864,7 @@ int multifd_save_setup(Error **errp)
     multifd_send_state->pages = multifd_pages_init(page_count);
     qemu_sem_init(&multifd_send_state->channels_ready, 0);
     qatomic_set(&multifd_send_state->exiting, 0);
+    multifd_send_state->setup_ops = multifd_setup_ops_init();
     multifd_send_state->ops = multifd_ops[migrate_multifd_compression()];
 
     for (i = 0; i < thread_count; i++) {
@@ -875,7 +884,7 @@ int multifd_save_setup(Error **errp)
         p->packet->version = cpu_to_be32(MULTIFD_VERSION);
         p->name = g_strdup_printf("multifdsend_%d", i);
         p->tls_hostname = g_strdup(s->hostname);
-        socket_send_channel_create(multifd_new_send_channel_async, p);
+        multifd_send_state->setup_ops->send_channel_setup(p);
     }
 
     for (i = 0; i < thread_count; i++) {
@@ -902,6 +911,8 @@ struct {
     uint64_t packet_num;
     /* multifd ops */
     MultiFDMethods *ops;
+    /* multifd setup ops */
+    MultiFDSetup *setup_ops;
 } *multifd_recv_state;
 
 static void multifd_recv_terminate_threads(Error *err)
@@ -1095,6 +1106,7 @@ int multifd_load_setup(Error **errp)
     multifd_recv_state->params = g_new0(MultiFDRecvParams, thread_count);
     qatomic_set(&multifd_recv_state->count, 0);
     qemu_sem_init(&multifd_recv_state->sem_sync, 0);
+    multifd_recv_state->setup_ops = multifd_setup_ops_init();
     multifd_recv_state->ops = multifd_ops[migrate_multifd_compression()];
 
     for (i = 0; i < thread_count; i++) {
@@ -1173,9 +1185,31 @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
     p->num_packets = 1;
 
     p->running = true;
-    qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,
-                       QEMU_THREAD_JOINABLE);
+    multifd_recv_state->setup_ops->recv_channel_setup(ioc, p);
+    qemu_thread_create(&p->thread, p->name,
+                       multifd_recv_state->setup_ops->recv_thread_setup,
+                       p, QEMU_THREAD_JOINABLE);
     qatomic_inc(&multifd_recv_state->count);
     return qatomic_read(&multifd_recv_state->count) ==
            migrate_multifd_channels();
 }
+
+static void multifd_recv_channel_setup(QIOChannel *ioc, MultiFDRecvParams *p)
+{
+    return;
+}
+
+static MultiFDSetup multifd_socket_ops = {
+    .send_thread_setup = multifd_send_thread,
+    .recv_thread_setup = multifd_recv_thread,
+    .send_channel_setup = multifd_send_channel_setup,
+    .recv_channel_setup = multifd_recv_channel_setup
+};
+
+MultiFDSetup *multifd_setup_ops_init(void)
+{
+    MultiFDSetup *ops;
+
+    ops = &multifd_socket_ops;
+    return ops;
+}
diff --git a/migration/multifd.h b/migration/multifd.h
index 8d6751f..446315b 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -166,6 +166,13 @@ typedef struct {
     int (*recv_pages)(MultiFDRecvParams *p, uint32_t used, Error **errp);
 } MultiFDMethods;
 
+typedef struct {
+    void *(*send_thread_setup)(void *opaque);
+    void *(*recv_thread_setup)(void *opaque);
+    void (*send_channel_setup)(MultiFDSendParams *p);
+    void (*recv_channel_setup)(QIOChannel *ioc, MultiFDRecvParams *p);
+} MultiFDSetup;
+
 void multifd_register_ops(int method, MultiFDMethods *ops);
 
 #endif
-- 
1.8.3.1
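
The MultiFDSetup table decouples the thread entry points and the channel
setup from the multifd core. As a sketch (the names here are purely
illustrative), an alternative transport would provide its own table and have
multifd_setup_ops_init() return it:

    static MultiFDSetup multifd_example_ops = {
        .send_thread_setup = example_send_thread,
        .recv_thread_setup = example_recv_thread,
        .send_channel_setup = example_send_channel_setup,
        .recv_channel_setup = example_recv_channel_setup
    };

Patch 04/18 does exactly this for RDMA.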




* [PATCH v3 04/18] migration/rdma: add multifd_setup_ops for rdma
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/multifd.c |  6 ++++
 migration/multifd.h |  4 +++
 migration/rdma.c    | 82 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 92 insertions(+)

diff --git a/migration/multifd.c b/migration/multifd.c
index 1f82307..0d494df 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -1210,6 +1210,12 @@ MultiFDSetup *multifd_setup_ops_init(void)
 {
     MultiFDSetup *ops;
 
+#ifdef CONFIG_RDMA
+    if (migrate_use_rdma()) {
+        ops = multifd_rdma_setup();
+        return ops;
+    }
+#endif
     ops = &multifd_socket_ops;
     return ops;
 }
diff --git a/migration/multifd.h b/migration/multifd.h
index 446315b..62a0b2a 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -173,6 +173,10 @@ typedef struct {
     void (*recv_channel_setup)(QIOChannel *ioc, MultiFDRecvParams *p);
 } MultiFDSetup;
 
+#ifdef CONFIG_RDMA
+MultiFDSetup *multifd_rdma_setup(void);
+#endif
+MultiFDSetup *multifd_setup_ops_init(void);
 void multifd_register_ops(int method, MultiFDMethods *ops);
 
 #endif
diff --git a/migration/rdma.c b/migration/rdma.c
index 0340841..ad4e4ba 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -19,6 +19,7 @@
 #include "qemu/cutils.h"
 #include "rdma.h"
 #include "migration.h"
+#include "multifd.h"
 #include "qemu-file.h"
 #include "ram.h"
 #include "qemu-file-channel.h"
@@ -4138,3 +4139,84 @@ err:
     g_free(rdma);
     g_free(rdma_return_path);
 }
+
+static void *multifd_rdma_send_thread(void *opaque)
+{
+    MultiFDSendParams *p = opaque;
+
+    while (true) {
+        qemu_mutex_lock(&p->mutex);
+        if (p->quit) {
+            qemu_mutex_unlock(&p->mutex);
+            break;
+        }
+        qemu_mutex_unlock(&p->mutex);
+        qemu_sem_wait(&p->sem);
+    }
+
+    qemu_mutex_lock(&p->mutex);
+    p->running = false;
+    qemu_mutex_unlock(&p->mutex);
+
+    return NULL;
+}
+
+static void multifd_rdma_send_channel_setup(MultiFDSendParams *p)
+{
+    Error *local_err = NULL;
+
+    if (p->quit) {
+        error_setg(&local_err, "multifd: send id %d already quit", p->id);
+        return ;
+    }
+    p->running = true;
+
+    qemu_thread_create(&p->thread, p->name, multifd_rdma_send_thread, p,
+                       QEMU_THREAD_JOINABLE);
+}
+
+static void *multifd_rdma_recv_thread(void *opaque)
+{
+    MultiFDRecvParams *p = opaque;
+
+    while (true) {
+        qemu_mutex_lock(&p->mutex);
+        if (p->quit) {
+            qemu_mutex_unlock(&p->mutex);
+            break;
+        }
+        qemu_mutex_unlock(&p->mutex);
+        qemu_sem_wait(&p->sem_sync);
+    }
+
+    qemu_mutex_lock(&p->mutex);
+    p->running = false;
+    qemu_mutex_unlock(&p->mutex);
+
+    return NULL;
+}
+
+static void multifd_rdma_recv_channel_setup(QIOChannel *ioc,
+                                            MultiFDRecvParams *p)
+{
+    QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
+
+    p->file = rioc->file;
+    return;
+}
+
+static MultiFDSetup multifd_rdma_ops = {
+    .send_thread_setup = multifd_rdma_send_thread,
+    .recv_thread_setup = multifd_rdma_recv_thread,
+    .send_channel_setup = multifd_rdma_send_channel_setup,
+    .recv_channel_setup = multifd_rdma_recv_channel_setup
+};
+
+MultiFDSetup *multifd_rdma_setup(void)
+{
+    MultiFDSetup *rdma_ops;
+
+    rdma_ops = &multifd_rdma_ops;
+
+    return rdma_ops;
+}
-- 
1.8.3.1




* [PATCH v3 05/18] migration/rdma: do not need sync main for rdma
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/multifd.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/migration/multifd.c b/migration/multifd.c
index 0d494df..8ccfd46 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -580,6 +580,10 @@ void multifd_send_sync_main(QEMUFile *f)
     if (!migrate_use_multifd()) {
         return;
     }
+     /* Do not need sync for rdma */
+    if (migrate_use_rdma()) {
+        return;
+    }
     if (multifd_send_state->pages->used) {
         if (multifd_send_pages(f) < 0) {
             error_report("%s: multifd_send_pages fail", __func__);
@@ -1002,6 +1006,10 @@ void multifd_recv_sync_main(void)
     if (!migrate_use_multifd()) {
         return;
     }
+    /* Do not need sync for rdma */
+    if (migrate_use_rdma()) {
+        return;
+    }
     for (i = 0; i < migrate_multifd_channels(); i++) {
         MultiFDRecvParams *p = &multifd_recv_state->params[i];
 
-- 
1.8.3.1




* [PATCH v3 06/18] migration/rdma: export MultiFDSendParams/MultiFDRecvParams
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

MultiFDSendParams and MultiFDRecvParams are needed by the RDMA code,
so export them via accessor functions.

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/multifd.c | 26 ++++++++++++++++++++++++++
 migration/multifd.h |  2 ++
 2 files changed, 28 insertions(+)

diff --git a/migration/multifd.c b/migration/multifd.c
index 8ccfd46..03f3a1e 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -387,6 +387,19 @@ struct {
     MultiFDSetup *setup_ops;
 } *multifd_send_state;
 
+int get_multifd_send_param(int id, MultiFDSendParams **param)
+{
+    int ret = 0;
+
+    if (id < 0 || id >= migrate_multifd_channels()) {
+        ret = -1;
+    } else {
+        *param = &(multifd_send_state->params[id]);
+    }
+
+    return ret;
+}
+
 /*
  * How we use multifd_send_state->pages and channel->pages?
  *
@@ -919,6 +932,19 @@ struct {
     MultiFDSetup *setup_ops;
 } *multifd_recv_state;
 
+int get_multifd_recv_param(int id, MultiFDRecvParams **param)
+{
+    int ret = 0;
+
+    if (id < 0 || id >= migrate_multifd_channels()) {
+        ret = -1;
+    } else {
+        *param = &(multifd_recv_state->params[id]);
+    }
+
+    return ret;
+}
+
 static void multifd_recv_terminate_threads(Error *err)
 {
     int i;
diff --git a/migration/multifd.h b/migration/multifd.h
index 62a0b2a..2f4e585 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -176,6 +176,8 @@ typedef struct {
 #ifdef CONFIG_RDMA
 MultiFDSetup *multifd_rdma_setup(void);
 #endif
+int get_multifd_send_param(int id, MultiFDSendParams **param);
+int get_multifd_recv_param(int id, MultiFDRecvParams **param);
 MultiFDSetup *multifd_setup_ops_init(void);
 void multifd_register_ops(int method, MultiFDMethods *ops);
 
-- 
1.8.3.1
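
For context, later patches use these accessors to walk all channels with
bounds checking; condensed from patch 14/18:

    int i, ret;
    int thread_count = migrate_multifd_channels();
    MultiFDSendParams *multifd_send_param = NULL;

    for (i = 0; i < thread_count; i++) {
        ret = get_multifd_send_param(i, &multifd_send_param);
        if (ret) {
            error_report("rdma: error getting multifd(%d)", i);
            return ret;
        }
        qemu_sem_post(&multifd_send_param->sem_sync);
    }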




* [PATCH v3 07/18] migration/rdma: add rdma field into multifd send/recv param
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Note: we do not want to export any RDMA structs, so take a void * instead.

Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/multifd.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/migration/multifd.h b/migration/multifd.h
index 2f4e585..ff80bd5 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -108,6 +108,10 @@ typedef struct {
     QemuSemaphore sem_sync;
     /* used for compression methods */
     void *data;
+    /* used for multifd rdma */
+    void *rdma;
+    /* communication channel */
+    QEMUFile *file;
 }  MultiFDSendParams;
 
 typedef struct {
@@ -147,6 +151,10 @@ typedef struct {
     QemuSemaphore sem_sync;
     /* used for de-compression methods */
     void *data;
+    /* used for multifd rdma */
+    void *rdma;
+    /* communication channel */
+    QEMUFile *file;
 } MultiFDRecvParams;
 
 typedef struct {
-- 
1.8.3.1
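
Since the field is a bare void *, the RDMA code casts it back explicitly; for
example, from patch 17/18:

    rdma = (RDMAContext *)(multifd_send_param->rdma);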




* [PATCH v3 08/18] migration/rdma: export getQIOChannel to get QIOchannel in rdma
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/qemu-file.c | 5 +++++
 migration/qemu-file.h | 1 +
 2 files changed, 6 insertions(+)

diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index be21518..37f6201 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -260,6 +260,11 @@ void ram_control_before_iterate(QEMUFile *f, uint64_t flags)
     }
 }
 
+void *getQIOChannel(QEMUFile *f)
+{
+    return f->opaque;
+}
+
 void ram_control_after_iterate(QEMUFile *f, uint64_t flags)
 {
     int ret = 0;
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index a9b6d6c..4cef043 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -165,6 +165,7 @@ void qemu_file_set_blocking(QEMUFile *f, bool block);
 void ram_control_before_iterate(QEMUFile *f, uint64_t flags);
 void ram_control_after_iterate(QEMUFile *f, uint64_t flags);
 void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data);
+void *getQIOChannel(QEMUFile *f);
 
 /* Whenever this is found in the data stream, the flags
  * will be passed to ram_control_load_hook in the incoming-migration
-- 
1.8.3.1
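
Later patches use this to recover the QIOChannel backing an RDMA QEMUFile;
for example, from patches 10/18 and 12/18:

    QIOChannel *ioc = QIO_CHANNEL(getQIOChannel(f));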




* [PATCH v3 09/18] migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/rdma.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/migration/rdma.c b/migration/rdma.c
index ad4e4ba..2baa933 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -4010,6 +4010,48 @@ static void rdma_accept_incoming_migration(void *opaque)
     }
 }
 
+static bool multifd_rdma_load_setup(const char *host_port,
+                                    RDMAContext *rdma, Error **errp)
+{
+    int thread_count;
+    int i;
+    int idx;
+    MultiFDRecvParams *multifd_recv_param;
+    RDMAContext *multifd_rdma;
+
+    if (!migrate_use_multifd()) {
+        return true;
+    }
+
+    if (multifd_load_setup(errp) != 0) {
+        /*
+         * We haven't been able to create multifd threads
+         * nothing better to do
+         */
+        return false;
+    }
+
+    thread_count = migrate_multifd_channels();
+    for (i = 0; i < thread_count; i++) {
+        if (get_multifd_recv_param(i, &multifd_recv_param) < 0) {
+            ERROR(errp, "rdma: error getting multifd_recv_param(%d)", i);
+            return false;
+        }
+
+        multifd_rdma = qemu_rdma_data_init(host_port, errp);
+        for (idx = 0; idx < RDMA_WRID_MAX; idx++) {
+            multifd_rdma->wr_data[idx].control_len = 0;
+            multifd_rdma->wr_data[idx].control_curr = NULL;
+        }
+        /* the CM channel and CM id is shared */
+        multifd_rdma->channel = rdma->channel;
+        multifd_rdma->listen_id = rdma->listen_id;
+        multifd_recv_param->rdma = (void *)multifd_rdma;
+    }
+
+    return true;
+}
+
 void rdma_start_incoming_migration(const char *host_port, Error **errp)
 {
     int ret;
@@ -4057,6 +4099,16 @@ void rdma_start_incoming_migration(const char *host_port, Error **errp)
         qemu_rdma_return_path_dest_init(rdma_return_path, rdma);
     }
 
+    /* multifd rdma setup */
+    if (!multifd_rdma_load_setup(host_port, rdma, &local_err)) {
+        /*
+         * We haven't been able to create multifd threads
+         * nothing better to do
+         */
+        error_report_err(local_err);
+        goto err;
+    }
+
     qemu_set_fd_handler(rdma->channel->fd, rdma_accept_incoming_migration,
                         NULL, (void *)(intptr_t)rdma);
     return;
-- 
1.8.3.1




* [PATCH v3 10/18] migration/rdma: Create the multifd recv channels for RDMA
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Nothing is transmitted through them yet; this patch only builds
the RDMA connections.

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/rdma.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 68 insertions(+), 2 deletions(-)

diff --git a/migration/rdma.c b/migration/rdma.c
index 2baa933..63559f1 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3266,6 +3266,40 @@ static void rdma_cm_poll_handler(void *opaque)
     }
 }
 
+static bool qemu_rdma_accept_setup(RDMAContext *rdma)
+{
+    RDMAContext *multifd_rdma = NULL;
+    int thread_count;
+    int i;
+    MultiFDRecvParams *multifd_recv_param;
+    thread_count = migrate_multifd_channels();
+    /* create the multifd channels for RDMA */
+    for (i = 0; i < thread_count; i++) {
+        if (get_multifd_recv_param(i, &multifd_recv_param) < 0) {
+            error_report("rdma: error getting multifd_recv_param(%d)", i);
+            return false;
+        }
+
+        multifd_rdma = (RDMAContext *) multifd_recv_param->rdma;
+        if (multifd_rdma->cm_id == NULL) {
+            break;
+        } else {
+            multifd_rdma = NULL;
+        }
+    }
+
+    if (multifd_rdma) {
+        qemu_set_fd_handler(rdma->channel->fd,
+                            rdma_accept_incoming_migration,
+                            NULL, (void *)(intptr_t)multifd_rdma);
+    } else {
+        qemu_set_fd_handler(rdma->channel->fd, rdma_cm_poll_handler,
+                            NULL, rdma);
+    }
+
+    return true;
+}
+
 static int qemu_rdma_accept(RDMAContext *rdma)
 {
     RDMACapabilities cap;
@@ -3365,6 +3399,10 @@ static int qemu_rdma_accept(RDMAContext *rdma)
         qemu_set_fd_handler(rdma->channel->fd, rdma_accept_incoming_migration,
                             NULL,
                             (void *)(intptr_t)rdma->return_path);
+    } else if (migrate_use_multifd()) {
+        if (!qemu_rdma_accept_setup(rdma)) {
+            goto err_rdma_dest_wait;
+        }
     } else {
         qemu_set_fd_handler(rdma->channel->fd, rdma_cm_poll_handler,
                             NULL, rdma);
@@ -3975,6 +4013,35 @@ static QEMUFile *qemu_fopen_rdma(RDMAContext *rdma, const char *mode)
     return rioc->file;
 }
 
+static void migration_rdma_process_incoming(QEMUFile *f,
+                                            RDMAContext *rdma, Error **errp)
+{
+    MigrationIncomingState *mis = migration_incoming_get_current();
+    QIOChannel *ioc = NULL;
+    bool start_migration = false;
+
+    /* FIXME: Need refactor */
+    if (!migrate_use_multifd()) {
+        rdma->migration_started_on_destination = 1;
+        migration_fd_process_incoming(f, errp);
+        return;
+    }
+
+    if (!mis->from_src_file) {
+        mis->from_src_file = f;
+        qemu_file_set_blocking(f, false);
+    } else {
+        ioc = QIO_CHANNEL(getQIOChannel(f));
+        /* Multiple connections */
+        assert(migrate_use_multifd());
+        start_migration = multifd_recv_new_channel(ioc, errp);
+    }
+
+    if (start_migration) {
+        migration_incoming_process();
+    }
+}
+
 static void rdma_accept_incoming_migration(void *opaque)
 {
     RDMAContext *rdma = opaque;
@@ -4003,8 +4070,7 @@ static void rdma_accept_incoming_migration(void *opaque)
         return;
     }
 
-    rdma->migration_started_on_destination = 1;
-    migration_fd_process_incoming(f, &local_err);
+    migration_rdma_process_incoming(f, rdma, &local_err);
     if (local_err) {
         error_reportf_err(local_err, "RDMA ERROR:");
     }
-- 
1.8.3.1




* [PATCH v3 11/18] migration/rdma: record host_port for multifd RDMA
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/migration.c | 1 +
 migration/migration.h | 3 +++
 migration/rdma.c      | 3 +++
 3 files changed, 7 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index be6166a..7061410 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1891,6 +1891,7 @@ void migrate_init(MigrationState *s)
     s->postcopy_after_devices = false;
     s->migration_thread_running = false;
     s->enabled_rdma_migration = false;
+    s->host_port = NULL;
     error_free(s->error);
     s->error = NULL;
     s->hostname = NULL;
diff --git a/migration/migration.h b/migration/migration.h
index e92eb29..fea63de 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -269,6 +269,9 @@ struct MigrationState
      * Enable RDMA migration
      */
     bool enabled_rdma_migration;
+
+    /* Need by Multi-RDMA */
+    char *host_port;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
diff --git a/migration/rdma.c b/migration/rdma.c
index 63559f1..dd9f705 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -4206,6 +4206,8 @@ void rdma_start_outgoing_migration(void *opaque,
         goto err;
     }
 
+    s->host_port = g_strdup(host_port);
+
     ret = qemu_rdma_source_init(rdma,
         s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL], errp);
 
@@ -4250,6 +4252,7 @@ void rdma_start_outgoing_migration(void *opaque,
 
     s->to_dst_file = qemu_fopen_rdma(rdma, "wb");
     migrate_fd_connect(s, NULL);
+    g_free(s->host_port);
     return;
 return_path_err:
     qemu_rdma_cleanup(rdma);
-- 
1.8.3.1




* [PATCH v3 12/18] migration/rdma: Create the multifd send channels for RDMA
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/multifd.c |  4 ++--
 migration/multifd.h |  2 ++
 migration/rdma.c    | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+), 2 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 03f3a1e..9439b3c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -173,7 +173,7 @@ void multifd_register_ops(int method, MultiFDMethods *ops)
     multifd_ops[method] = ops;
 }
 
-static int multifd_send_initial_packet(MultiFDSendParams *p, Error **errp)
+int multifd_send_initial_packet(MultiFDSendParams *p, Error **errp)
 {
     MultiFDInit_t msg = {};
     int ret;
@@ -500,7 +500,7 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
     return 1;
 }
 
-static void multifd_send_terminate_threads(Error *err)
+void multifd_send_terminate_threads(Error *err)
 {
     int i;
 
diff --git a/migration/multifd.h b/migration/multifd.h
index ff80bd5..ec9e897 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -184,6 +184,8 @@ typedef struct {
 #ifdef CONFIG_RDMA
 MultiFDSetup *multifd_rdma_setup(void);
 #endif
+void multifd_send_terminate_threads(Error *err);
+int multifd_send_initial_packet(MultiFDSendParams *p, Error **errp);
 int get_multifd_send_param(int id, MultiFDSendParams **param);
 int get_multifd_recv_param(int id, MultiFDRecvParams **param);
 MultiFDSetup *multifd_setup_ops_init(void);
diff --git a/migration/rdma.c b/migration/rdma.c
index dd9f705..1af81f5 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -4261,9 +4261,54 @@ err:
     g_free(rdma_return_path);
 }
 
+static int multifd_channel_rdma_connect(void *opaque)
+{
+    MultiFDSendParams *p = opaque;
+    Error *local_err = NULL;
+    int ret = 0;
+    MigrationState *s = migrate_get_current();
+
+    p->rdma = qemu_rdma_data_init(s->host_port, &local_err);
+    if (p->rdma == NULL) {
+        goto out;
+    }
+
+    ret = qemu_rdma_source_init(p->rdma,
+                                migrate_use_rdma_pin_all(),
+                                &local_err);
+    if (ret) {
+        goto out;
+    }
+
+    ret = qemu_rdma_connect(p->rdma, &local_err);
+    if (ret) {
+        goto out;
+    }
+
+    p->file = qemu_fopen_rdma(p->rdma, "wb");
+    if (p->file == NULL) {
+        goto out;
+    }
+
+    p->c = QIO_CHANNEL(getQIOChannel(p->file));
+
+out:
+    if (local_err) {
+        trace_multifd_send_error(p->id);
+    }
+
+    return ret;
+}
+
 static void *multifd_rdma_send_thread(void *opaque)
 {
     MultiFDSendParams *p = opaque;
+    Error *local_err = NULL;
+
+    trace_multifd_send_thread_start(p->id);
+    if (multifd_send_initial_packet(p, &local_err) < 0) {
+        goto out;
+    }
 
     while (true) {
         qemu_mutex_lock(&p->mutex);
@@ -4275,6 +4320,11 @@ static void *multifd_rdma_send_thread(void *opaque)
         qemu_sem_wait(&p->sem);
     }
 
+out:
+    if (local_err) {
+        trace_multifd_send_error(p->id);
+        multifd_send_terminate_threads(local_err);
+    }
     qemu_mutex_lock(&p->mutex);
     p->running = false;
     qemu_mutex_unlock(&p->mutex);
@@ -4286,6 +4336,12 @@ static void multifd_rdma_send_channel_setup(MultiFDSendParams *p)
 {
     Error *local_err = NULL;
 
+    if (multifd_channel_rdma_connect(p)) {
+        error_setg(&local_err, "multifd: rdma channel %d not established",
+                   p->id);
+        return ;
+    }
+
     if (p->quit) {
         error_setg(&local_err, "multifd: send id %d already quit", p->id);
         return ;
-- 
1.8.3.1




* [PATCH v3 13/18] migration/rdma: Add the function for dynamic page registration
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Add the 'qemu_rdma_registration' function; the multifd send threads
call it to register memory.

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/rdma.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/migration/rdma.c b/migration/rdma.c
index 1af81f5..a366849 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3738,6 +3738,57 @@ out:
     return ret;
 }
 
+/*
+ * Dynamic page registrations for multifd RDMA threads.
+ */
+static int qemu_rdma_registration(void *opaque)
+{
+    RDMAContext *rdma = opaque;
+    RDMAControlHeader resp = {.type = RDMA_CONTROL_RAM_BLOCKS_RESULT };
+    RDMALocalBlocks *local = &rdma->local_ram_blocks;
+    int reg_result_idx, i, nb_dest_blocks;
+    RDMAControlHeader head = { .len = 0, .repeat = 1 };
+    int ret = 0;
+
+    head.type = RDMA_CONTROL_RAM_BLOCKS_REQUEST;
+
+    ret = qemu_rdma_exchange_send(rdma, &head, NULL, &resp,
+            &reg_result_idx, rdma->pin_all ?
+            qemu_rdma_reg_whole_ram_blocks : NULL);
+    if (ret < 0) {
+        goto out;
+    }
+
+    nb_dest_blocks = resp.len / sizeof(RDMADestBlock);
+
+    if (local->nb_blocks != nb_dest_blocks) {
+        rdma->error_state = -EINVAL;
+        ret = -1;
+        goto out;
+    }
+
+    qemu_rdma_move_header(rdma, reg_result_idx, &resp);
+    memcpy(rdma->dest_blocks,
+           rdma->wr_data[reg_result_idx].control_curr, resp.len);
+
+    for (i = 0; i < nb_dest_blocks; i++) {
+        network_to_dest_block(&rdma->dest_blocks[i]);
+
+        /* We require that the blocks are in the same order */
+        if (rdma->dest_blocks[i].length != local->block[i].length) {
+            rdma->error_state = -EINVAL;
+            ret = -1;
+            goto out;
+        }
+        local->block[i].remote_host_addr =
+            rdma->dest_blocks[i].remote_host_addr;
+        local->block[i].remote_rkey = rdma->dest_blocks[i].remote_rkey;
+    }
+
+out:
+    return ret;
+}
+
 /* Destination:
  * Called via a ram_control_load_hook during the initial RAM load section which
  * lists the RAMBlocks by name.  This lets us know the order of the RAMBlocks
-- 
1.8.3.1




* [PATCH v3 14/18] migration/rdma: register memory for multifd RDMA channels
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/multifd.c |  3 ++
 migration/rdma.c    | 94 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 95 insertions(+), 2 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 9439b3c..c4d90ef 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -534,6 +534,9 @@ void multifd_send_terminate_threads(Error *err)
         qemu_mutex_lock(&p->mutex);
         p->quit = true;
         qemu_sem_post(&p->sem);
+        if (migrate_use_rdma()) {
+            qemu_sem_post(&p->sem_sync);
+        }
         qemu_mutex_unlock(&p->mutex);
     }
 }
diff --git a/migration/rdma.c b/migration/rdma.c
index a366849..3210e6e 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3837,6 +3837,19 @@ static int rdma_load_hook(QEMUFile *f, void *opaque, uint64_t flags, void *data)
         return rdma_block_notification_handle(opaque, data);
 
     case RAM_CONTROL_HOOK:
+        if (migrate_use_multifd()) {
+            int i;
+            MultiFDRecvParams *multifd_recv_param = NULL;
+            int thread_count = migrate_multifd_channels();
+            /* Inform dest recv_thread to poll */
+            for (i = 0; i < thread_count; i++) {
+                if (get_multifd_recv_param(i, &multifd_recv_param)) {
+                    return -1;
+                }
+                qemu_sem_post(&multifd_recv_param->sem_sync);
+            }
+        }
+
         return qemu_rdma_registration_handle(f, opaque);
 
     default:
@@ -3909,6 +3922,24 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
         head.type = RDMA_CONTROL_RAM_BLOCKS_REQUEST;
         trace_qemu_rdma_registration_stop_ram();
 
+        if (migrate_use_multifd()) {
+            /*
+             * Inform the multifd channels to register memory
+             */
+            int i;
+            int thread_count = migrate_multifd_channels();
+            MultiFDSendParams *multifd_send_param = NULL;
+            for (i = 0; i < thread_count; i++) {
+                ret = get_multifd_send_param(i, &multifd_send_param);
+                if (ret) {
+                    error_report("rdma: error getting multifd(%d)", i);
+                    return ret;
+                }
+
+                qemu_sem_post(&multifd_send_param->sem_sync);
+            }
+        }
+
         /*
          * Make sure that we parallelize the pinning on both sides.
          * For very large guests, doing this serially takes a really
@@ -3967,6 +3998,21 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
                     rdma->dest_blocks[i].remote_host_addr;
             local->block[i].remote_rkey = rdma->dest_blocks[i].remote_rkey;
         }
+        /* Wait for all multifd channels to complete registration */
+        if (migrate_use_multifd()) {
+            int i;
+            int thread_count = migrate_multifd_channels();
+            MultiFDSendParams *multifd_send_param = NULL;
+            for (i = 0; i < thread_count; i++) {
+                ret = get_multifd_send_param(i, &multifd_send_param);
+                if (ret) {
+                    error_report("rdma: error getting multifd(%d)", i);
+                    return ret;
+                }
+
+                qemu_sem_wait(&multifd_send_param->sem);
+            }
+        }
     }
 
     trace_qemu_rdma_registration_stop(flags);
@@ -3978,6 +4024,24 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
         goto err;
     }
 
+    if (migrate_use_multifd()) {
+        /*
+         * Inform src send_thread to send FINISHED signal.
+         * Wait for multifd RDMA send threads to poll the CQE.
+         */
+        int i;
+        int thread_count = migrate_multifd_channels();
+        MultiFDSendParams *multifd_send_param = NULL;
+        for (i = 0; i < thread_count; i++) {
+            ret = get_multifd_send_param(i, &multifd_send_param);
+            if (ret < 0) {
+                goto err;
+            }
+
+            qemu_sem_post(&multifd_send_param->sem_sync);
+        }
+    }
+
     return 0;
 err:
     rdma->error_state = ret;
@@ -4355,20 +4419,39 @@ static void *multifd_rdma_send_thread(void *opaque)
 {
     MultiFDSendParams *p = opaque;
     Error *local_err = NULL;
+    int ret = 0;
+    RDMAControlHeader head = { .len = 0, .repeat = 1 };
 
     trace_multifd_send_thread_start(p->id);
     if (multifd_send_initial_packet(p, &local_err) < 0) {
         goto out;
     }
 
+    /* wait for semaphore notification to register memory */
+    qemu_sem_wait(&p->sem_sync);
+    if (qemu_rdma_registration(p->rdma) < 0) {
+        goto out;
+    }
+    /*
+     * Inform the main RDMA thread to run when multifd
+     * RDMA thread have completed registration.
+     */
+    qemu_sem_post(&p->sem);
     while (true) {
+        qemu_sem_wait(&p->sem_sync);
         qemu_mutex_lock(&p->mutex);
         if (p->quit) {
             qemu_mutex_unlock(&p->mutex);
             break;
         }
         qemu_mutex_unlock(&p->mutex);
-        qemu_sem_wait(&p->sem);
+
+        /* Send FINISHED to the destination */
+        head.type = RDMA_CONTROL_REGISTER_FINISHED;
+        ret = qemu_rdma_exchange_send(p->rdma, &head, NULL, NULL, NULL, NULL);
+        if (ret < 0) {
+            return NULL;
+        }
     }
 
 out:
@@ -4406,15 +4489,22 @@ static void multifd_rdma_send_channel_setup(MultiFDSendParams *p)
 static void *multifd_rdma_recv_thread(void *opaque)
 {
     MultiFDRecvParams *p = opaque;
+    int ret = 0;
 
     while (true) {
+        qemu_sem_wait(&p->sem_sync);
+
         qemu_mutex_lock(&p->mutex);
         if (p->quit) {
             qemu_mutex_unlock(&p->mutex);
             break;
         }
         qemu_mutex_unlock(&p->mutex);
-        qemu_sem_wait(&p->sem_sync);
+        ret = qemu_rdma_registration_handle(p->file, p->c);
+        if (ret < 0) {
+            qemu_file_set_error(p->file, ret);
+            break;
+        }
     }
 
     qemu_mutex_lock(&p->mutex);
-- 
1.8.3.1
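
Condensed from the hunks above, the source-side registration handshake per
channel is:

    /* main thread, in qemu_rdma_registration_stop(): */
    qemu_sem_post(&multifd_send_param->sem_sync);   /* "register now" */
    qemu_sem_wait(&multifd_send_param->sem);        /* wait until done */

    /* each send thread, in multifd_rdma_send_thread(): */
    qemu_sem_wait(&p->sem_sync);                    /* wait for the cue */
    qemu_rdma_registration(p->rdma);                /* pin/exchange blocks */
    qemu_sem_post(&p->sem);                         /* report completion */

On the destination, the RAM_CONTROL_HOOK posts sem_sync to each recv thread,
which then services the registration via qemu_rdma_registration_handle().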




* [PATCH v3 15/18] migration/rdma: only register the memory for multifd channels
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

All data is sent over the multifd channels, so memory is registered
only for the multifd channels; the main channel does not register it.

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/rdma.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/migration/rdma.c b/migration/rdma.c
index 3210e6e..d5d6364 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3938,6 +3938,12 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
 
                 qemu_sem_post(&multifd_send_param->sem_sync);
             }
+
+            /*
+             * Use multifd to migrate, we only register memory for
+             * multifd RDMA channel and main channel don't register it.
+             */
+            goto wait_reg_complete;
         }
 
         /*
@@ -3998,6 +4004,8 @@ static int qemu_rdma_registration_stop(QEMUFile *f, void *opaque,
                     rdma->dest_blocks[i].remote_host_addr;
             local->block[i].remote_rkey = rdma->dest_blocks[i].remote_rkey;
         }
+
+wait_reg_complete:
         /* Wait for all multifd channels to complete registration */
         if (migrate_use_multifd()) {
             int i;
-- 
1.8.3.1




* [PATCH v3 16/18] migration/rdma: add rdma_channel into MigrationState field
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Multifd RDMA needs to poll the channel used to send data, so record
it in MigrationState.

Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/migration.c |  1 +
 migration/migration.h |  1 +
 migration/rdma.c      | 14 ++++++++++++++
 3 files changed, 16 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index 7061410..1ec1dc9 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1892,6 +1892,7 @@ void migrate_init(MigrationState *s)
     s->migration_thread_running = false;
     s->enabled_rdma_migration = false;
     s->host_port = NULL;
+    s->rdma_channel = 0;
     error_free(s->error);
     s->error = NULL;
     s->hostname = NULL;
diff --git a/migration/migration.h b/migration/migration.h
index fea63de..5676b23 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -272,6 +272,7 @@ struct MigrationState
 
     /* Need by Multi-RDMA */
     char *host_port;
+    int rdma_channel;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
diff --git a/migration/rdma.c b/migration/rdma.c
index d5d6364..327f80f 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -183,6 +183,20 @@ typedef struct {
 } RDMAWorkRequestData;
 
 /*
+ * Get the multifd RDMA channel used to send data.
+ */
+static int get_multifd_RDMA_channel(void)
+{
+    int thread_count = migrate_multifd_channels();
+    MigrationState *s = migrate_get_current();
+
+    s->rdma_channel++;
+    s->rdma_channel %= thread_count;
+
+    return s->rdma_channel;
+}
+
+/*
  * Negotiate RDMA capabilities during connection-setup time.
  */
 typedef struct {
-- 
1.8.3.1
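
The selection is a plain round robin over the counter in MigrationState; as
a worked example, with multifd-channels set to 4 and s->rdma_channel starting
at 0, successive calls return:

    get_multifd_RDMA_channel();  /* -> 1 */
    get_multifd_RDMA_channel();  /* -> 2 */
    get_multifd_RDMA_channel();  /* -> 3 */
    get_multifd_RDMA_channel();  /* -> 0 */
    get_multifd_RDMA_channel();  /* -> 1, and so on */

Note the increment happens before the modulo, so the first call picks
channel 1 rather than channel 0.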




* [PATCH v3 17/18] migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all mode
From: Chuan Zheng @ 2020-10-17  4:25 UTC
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
 migration/rdma.c | 67 +++++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 62 insertions(+), 5 deletions(-)

diff --git a/migration/rdma.c b/migration/rdma.c
index 327f80f..519fa7a 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -2001,6 +2001,20 @@ static int qemu_rdma_write_one(QEMUFile *f, RDMAContext *rdma,
                                .repeat = 1,
                              };
 
+    /* use multifd to send data */
+    if (migrate_use_multifd()) {
+        int channel = get_multifd_RDMA_channel();
+        int ret = 0;
+        MultiFDSendParams *multifd_send_param = NULL;
+        ret = get_multifd_send_param(channel, &multifd_send_param);
+        if (ret) {
+            error_report("rdma: error getting multifd_send_param(%d)", channel);
+            return -EINVAL;
+        }
+        rdma = (RDMAContext *)multifd_send_param->rdma;
+        block = &(rdma->local_ram_blocks.block[current_index]);
+    }
+
 retry:
     sge.addr = (uintptr_t)(block->local_host_addr +
                             (current_addr - block->offset));
@@ -2196,6 +2210,27 @@ retry:
     return 0;
 }
 
+static int multifd_rdma_write_flush(void)
+{
+    /* The multifd RDMA threads send data */
+    MultiFDSendParams *multifd_send_param = NULL;
+    RDMAContext *rdma = NULL;
+    MigrationState *s = migrate_get_current();
+    int ret = 0;
+
+    ret = get_multifd_send_param(s->rdma_channel,
+                                 &multifd_send_param);
+    if (ret) {
+        error_report("rdma: error getting multifd_send_param(%d)",
+                     s->rdma_channel);
+        return ret;
+    }
+    rdma = (RDMAContext *)(multifd_send_param->rdma);
+    rdma->nb_sent++;
+
+    return ret;
+}
+
 /*
  * Push out any unwritten RDMA operations.
  *
@@ -2218,8 +2253,15 @@ static int qemu_rdma_write_flush(QEMUFile *f, RDMAContext *rdma)
     }
 
     if (ret == 0) {
-        rdma->nb_sent++;
-        trace_qemu_rdma_write_flush(rdma->nb_sent);
+        if (migrate_use_multifd()) {
+            ret = multifd_rdma_write_flush();
+            if (ret) {
+                return ret;
+            }
+        } else {
+            rdma->nb_sent++;
+            trace_qemu_rdma_write_flush(rdma->nb_sent);
+        }
     }
 
     rdma->current_length = 0;
@@ -4061,6 +4103,7 @@ wait_reg_complete:
             }
 
             qemu_sem_post(&multifd_send_param->sem_sync);
+            qemu_sem_wait(&multifd_send_param->sem);
         }
     }
 
@@ -4443,6 +4486,7 @@ static void *multifd_rdma_send_thread(void *opaque)
     Error *local_err = NULL;
     int ret = 0;
     RDMAControlHeader head = { .len = 0, .repeat = 1 };
+    RDMAContext *rdma = p->rdma;
 
     trace_multifd_send_thread_start(p->id);
     if (multifd_send_initial_packet(p, &local_err) < 0) {
@@ -4451,7 +4495,7 @@ static void *multifd_rdma_send_thread(void *opaque)
 
     /* wait for semaphore notification to register memory */
     qemu_sem_wait(&p->sem_sync);
-    if (qemu_rdma_registration(p->rdma) < 0) {
+    if (qemu_rdma_registration(rdma) < 0) {
         goto out;
     }
     /*
@@ -4467,13 +4511,26 @@ static void *multifd_rdma_send_thread(void *opaque)
             break;
         }
         qemu_mutex_unlock(&p->mutex);
-
+        /* To complete polling(CQE) */
+        while (rdma->nb_sent) {
+            ret = qemu_rdma_block_for_wrid(rdma, RDMA_WRID_RDMA_WRITE, NULL);
+            if (ret < 0) {
+                error_report("multifd RDMA migration: "
+                             "complete polling error!");
+                return NULL;
+            }
+        }
         /* Send FINISHED to the destination */
         head.type = RDMA_CONTROL_REGISTER_FINISHED;
-        ret = qemu_rdma_exchange_send(p->rdma, &head, NULL, NULL, NULL, NULL);
+        ret = qemu_rdma_exchange_send(rdma, &head, NULL, NULL, NULL, NULL);
         if (ret < 0) {
+            error_report("multifd RDMA migration: "
+                         "sending remote error!");
             return NULL;
         }
+
+        /* sync main thread */
+        qemu_sem_post(&p->sem);
     }
 
 out:
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v3 18/18] migration/rdma: RDMA cleanup for multifd migration
  2020-10-17  4:25 [PATCH v3 00/18] Support Multifd for RDMA migration Chuan Zheng
                   ` (16 preceding siblings ...)
  2020-10-17  4:25 ` [PATCH v3 17/18] migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all mode Chuan Zheng
@ 2020-10-17  4:25 ` Chuan Zheng
  2020-10-21  9:25 ` [PATCH v3 00/18] Support Multifd for RDMA migration Zhanghailiang
  18 siblings, 0 replies; 31+ messages in thread
From: Chuan Zheng @ 2020-10-17  4:25 UTC (permalink / raw)
  To: quintela, dgilbert
  Cc: yubihong, zhang.zhanghailiang, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
---
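
Note: each multifd RDMAContext shares its CM event channel and
listen_id with the main connection (they are copied in
multifd_rdma_load_setup()), so multifd_rdma_cleanup() clears them
before calling qemu_rdma_cleanup() to avoid tearing the shared
resources down twice. Shape of the per-channel cleanup (sketch
mirroring the hunk below):

    rdma->listen_id = NULL;   /* shared with the main channel; not ours */
    rdma->channel = NULL;     /* shared CM event channel; not ours */
    qemu_rdma_cleanup(rdma);  /* now skips the shared resources */
    g_free(rdma);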
 migration/multifd.c |  6 ++++++
 migration/multifd.h |  1 +
 migration/rdma.c    | 16 +++++++++++++++-
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index c4d90ef..f548122 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -574,6 +574,9 @@ void multifd_save_cleanup(void)
         p->packet_len = 0;
         g_free(p->packet);
         p->packet = NULL;
+#ifdef CONFIG_RDMA
+        multifd_rdma_cleanup(p->rdma);
+#endif
         multifd_send_state->ops->send_cleanup(p, &local_err);
         if (local_err) {
             migrate_set_error(migrate_get_current(), local_err);
@@ -1017,6 +1020,9 @@ int multifd_load_cleanup(Error **errp)
         p->packet_len = 0;
         g_free(p->packet);
         p->packet = NULL;
+#ifdef CONFIG_RDMA
+        multifd_rdma_cleanup(p->rdma);
+#endif
         multifd_recv_state->ops->recv_cleanup(p);
     }
     qemu_sem_destroy(&multifd_recv_state->sem_sync);
diff --git a/migration/multifd.h b/migration/multifd.h
index ec9e897..6fddd4e 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -183,6 +183,7 @@ typedef struct {
 
 #ifdef CONFIG_RDMA
 MultiFDSetup *multifd_rdma_setup(void);
+void multifd_rdma_cleanup(void *opaque);
 #endif
 void multifd_send_terminate_threads(Error *err);
 int multifd_send_initial_packet(MultiFDSendParams *p, Error **errp);
diff --git a/migration/rdma.c b/migration/rdma.c
index 519fa7a..89bf54b 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -2368,7 +2368,7 @@ static void qemu_rdma_cleanup(RDMAContext *rdma)
 {
     int idx;
 
-    if (rdma->cm_id && rdma->connected) {
+    if (rdma->channel && rdma->cm_id && rdma->connected) {
         if ((rdma->error_state ||
              migrate_get_current()->state == MIGRATION_STATUS_CANCELLING) &&
             !rdma->received_error) {
@@ -4609,6 +4609,20 @@ static MultiFDSetup multifd_rdma_ops = {
     .recv_channel_setup = multifd_rdma_recv_channel_setup
 };
 
+void multifd_rdma_cleanup(void *opaque)
+{
+    RDMAContext *rdma = (RDMAContext *)opaque;
+
+    if (!migrate_use_rdma()) {
+        return;
+    }
+
+    rdma->listen_id = NULL;
+    rdma->channel = NULL;
+    qemu_rdma_cleanup(rdma);
+    g_free(rdma);
+}
+
 MultiFDSetup *multifd_rdma_setup(void)
 {
     MultiFDSetup *rdma_ops;
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* RE: [PATCH v3 00/18] Support Multifd for RDMA migration
  2020-10-17  4:25 [PATCH v3 00/18] Support Multifd for RDMA migration Chuan Zheng
                   ` (17 preceding siblings ...)
  2020-10-17  4:25 ` [PATCH v3 18/18] migration/rdma: RDMA cleanup for multifd migration Chuan Zheng
@ 2020-10-21  9:25 ` Zhanghailiang
  2020-10-21  9:33   ` Zheng Chuan
  18 siblings, 1 reply; 31+ messages in thread
From: Zhanghailiang @ 2020-10-21  9:25 UTC (permalink / raw)
  To: zhengchuan, quintela, dgilbert
  Cc: Chenzhendong (alex), yubihong, wanghao (O), qemu-devel, Xiexiangyou

Hi zhengchuan,

> -----Original Message-----
> From: zhengchuan
> Sent: Saturday, October 17, 2020 12:26 PM
> To: quintela@redhat.com; dgilbert@redhat.com
> Cc: Zhanghailiang <zhang.zhanghailiang@huawei.com>; Chenzhendong (alex)
> <alex.chen@huawei.com>; Xiexiangyou <xiexiangyou@huawei.com>; wanghao
> (O) <wanghao232@huawei.com>; yubihong <yubihong@huawei.com>;
> fengzhimin1@huawei.com; qemu-devel@nongnu.org
> Subject: [PATCH v3 00/18] Support Multifd for RDMA migration
> 
> Now I continue to support multifd for RDMA migration based on my colleague
> zhiming's work:)
> 
> The previous RFC patches is listed below:
> v1:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg669455.html
> v2:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg679188.html
> 
> As descried in previous RFC, the RDMA bandwidth is not fully utilized for over
> 25Gigabit NIC because of single channel for RDMA migration.
> This patch series is going to support multifd for RDMA migration based on multifd
> framework.
> 
> Comparsion is between origion and multifd RDMA migration is re-tested for v3.
> The VM specifications for migration are as follows:
> - VM use 4k page;
> - the number of VCPU is 4;
> - the total memory is 16Gigabit;
> - use 'mempress' tool to pressurize VM(mempress 8000 500);
> - use 25Gigabit network card to migrate;
> 
> For origin RDMA and MultiRDMA migration, the total migration times of VM are
> as follows:
> +++++++++++++++++++++++++++++++++++++++++++++++++
> |             | NOT rdma-pin-all | rdma-pin-all |
> +++++++++++++++++++++++++++++++++++++++++++++++++
> | origin RDMA |       26 s       |     29 s     |
> -------------------------------------------------
> |  MultiRDMA  |       16 s       |     17 s     |
> +++++++++++++++++++++++++++++++++++++++++++++++++
> 
> Test the multifd RDMA migration like this:
> virsh migrate --live --multiFd --migrateuri

There is no '--multiFd' option for virsh commands; it seems we added this private option for internal usage.
It would be better to provide the testing method using QEMU commands, for example as sketched below.
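
Something along these lines, assuming the HMP monitor (addresses, port
and channel count are illustrative):

    # destination QEMU
    qemu-system-x86_64 ... -incoming rdma:0.0.0.0:4444

    # source HMP monitor
    (qemu) migrate_set_capability multifd on
    (qemu) migrate_set_parameter multifd-channels 4
    (qemu) migrate -d rdma:192.168.1.100:4444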


Thanks.

> rdma://192.168.1.100 [VM] --listen-address 0.0.0.0
> qemu+tcp://192.168.1.100/system --verbose
> 
> v2 -> v3:
>     create multifd ops for both tcp and rdma
>     do not export rdma to avoid multifd code in mess
>     fix build issue for non-rdma
>     fix some codestyle and buggy code
> 
> Chuan Zheng (18):
>   migration/rdma: add the 'migrate_use_rdma_pin_all' function
>   migration/rdma: judge whether or not the RDMA is used for migration
>   migration/rdma: create multifd_setup_ops for Tx/Rx thread
>   migration/rdma: add multifd_setup_ops for rdma
>   migration/rdma: do not need sync main for rdma
>   migration/rdma: export MultiFDSendParams/MultiFDRecvParams
>   migration/rdma: add rdma field into multifd send/recv param
>   migration/rdma: export getQIOChannel to get QIOchannel in rdma
>   migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
>   migration/rdma: Create the multifd recv channels for RDMA
>   migration/rdma: record host_port for multifd RDMA
>   migration/rdma: Create the multifd send channels for RDMA
>   migration/rdma: Add the function for dynamic page registration
>   migration/rdma: register memory for multifd RDMA channels
>   migration/rdma: only register the memory for multifd channels
>   migration/rdma: add rdma_channel into Migrationstate field
>   migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all
>     mode
>   migration/rdma: RDMA cleanup for multifd migration
> 
>  migration/migration.c |  24 +++
>  migration/migration.h |  11 ++
>  migration/multifd.c   |  97 +++++++++-
>  migration/multifd.h   |  24 +++
>  migration/qemu-file.c |   5 +
>  migration/qemu-file.h |   1 +
>  migration/rdma.c      | 503
> +++++++++++++++++++++++++++++++++++++++++++++++++-
>  7 files changed, 653 insertions(+), 12 deletions(-)
> 
> --
> 1.8.3.1



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 00/18] Support Multifd for RDMA migration
  2020-10-21  9:25 ` [PATCH v3 00/18] Support Multifd for RDMA migration Zhanghailiang
@ 2020-10-21  9:33   ` Zheng Chuan
  2020-10-23 19:02     ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 31+ messages in thread
From: Zheng Chuan @ 2020-10-21  9:33 UTC (permalink / raw)
  To: Zhanghailiang, quintela, dgilbert
  Cc: Chenzhendong (alex), yubihong, wanghao (O), qemu-devel, Xiexiangyou



On 2020/10/21 17:25, Zhanghailiang wrote:
> Hi zhengchuan,
> 
>> -----Original Message-----
>> From: zhengchuan
>> Sent: Saturday, October 17, 2020 12:26 PM
>> To: quintela@redhat.com; dgilbert@redhat.com
>> Cc: Zhanghailiang <zhang.zhanghailiang@huawei.com>; Chenzhendong (alex)
>> <alex.chen@huawei.com>; Xiexiangyou <xiexiangyou@huawei.com>; wanghao
>> (O) <wanghao232@huawei.com>; yubihong <yubihong@huawei.com>;
>> fengzhimin1@huawei.com; qemu-devel@nongnu.org
>> Subject: [PATCH v3 00/18] Support Multifd for RDMA migration
>>
>> Now I continue to support multifd for RDMA migration based on my colleague
>> zhiming's work:)
>>
>> The previous RFC patches is listed below:
>> v1:
>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg669455.html
>> v2:
>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg679188.html
>>
>> As descried in previous RFC, the RDMA bandwidth is not fully utilized for over
>> 25Gigabit NIC because of single channel for RDMA migration.
>> This patch series is going to support multifd for RDMA migration based on multifd
>> framework.
>>
>> Comparsion is between origion and multifd RDMA migration is re-tested for v3.
>> The VM specifications for migration are as follows:
>> - VM use 4k page;
>> - the number of VCPU is 4;
>> - the total memory is 16Gigabit;
>> - use 'mempress' tool to pressurize VM(mempress 8000 500);
>> - use 25Gigabit network card to migrate;
>>
>> For origin RDMA and MultiRDMA migration, the total migration times of VM are
>> as follows:
>> +++++++++++++++++++++++++++++++++++++++++++++++++
>> |             | NOT rdma-pin-all | rdma-pin-all |
>> +++++++++++++++++++++++++++++++++++++++++++++++++
>> | origin RDMA |       26 s       |     29 s     |
>> -------------------------------------------------
>> |  MultiRDMA  |       16 s       |     17 s     |
>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>
>> Test the multifd RDMA migration like this:
>> virsh migrate --live --multiFd --migrateuri
> 
> There is no option '--multiFd' for virsh commands, It seems that, we added this private option for internal usage.
> It's better to provide testing method by using qemu commands.
> 
> 
Hi, Hailiang
Yes, it should be; I will update it in V4.

Also, Ping.

Dave, Juan.

Any suggestions or comments about this series? I hope this feature can catch up with QEMU 5.2.

> Thanks.
> 
>> rdma://192.168.1.100 [VM] --listen-address 0.0.0.0
>> qemu+tcp://192.168.1.100/system --verbose
>>
>> v2 -> v3:
>>     create multifd ops for both tcp and rdma
>>     do not export rdma to avoid multifd code in mess
>>     fix build issue for non-rdma
>>     fix some codestyle and buggy code
>>
>> Chuan Zheng (18):
>>   migration/rdma: add the 'migrate_use_rdma_pin_all' function
>>   migration/rdma: judge whether or not the RDMA is used for migration
>>   migration/rdma: create multifd_setup_ops for Tx/Rx thread
>>   migration/rdma: add multifd_setup_ops for rdma
>>   migration/rdma: do not need sync main for rdma
>>   migration/rdma: export MultiFDSendParams/MultiFDRecvParams
>>   migration/rdma: add rdma field into multifd send/recv param
>>   migration/rdma: export getQIOChannel to get QIOchannel in rdma
>>   migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
>>   migration/rdma: Create the multifd recv channels for RDMA
>>   migration/rdma: record host_port for multifd RDMA
>>   migration/rdma: Create the multifd send channels for RDMA
>>   migration/rdma: Add the function for dynamic page registration
>>   migration/rdma: register memory for multifd RDMA channels
>>   migration/rdma: only register the memory for multifd channels
>>   migration/rdma: add rdma_channel into Migrationstate field
>>   migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all
>>     mode
>>   migration/rdma: RDMA cleanup for multifd migration
>>
>>  migration/migration.c |  24 +++
>>  migration/migration.h |  11 ++
>>  migration/multifd.c   |  97 +++++++++-
>>  migration/multifd.h   |  24 +++
>>  migration/qemu-file.c |   5 +
>>  migration/qemu-file.h |   1 +
>>  migration/rdma.c      | 503
>> +++++++++++++++++++++++++++++++++++++++++++++++++-
>>  7 files changed, 653 insertions(+), 12 deletions(-)
>>
>> --
>> 1.8.3.1
> 
> .
> 

-- 
Regards.
Chuan


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 00/18] Support Multifd for RDMA migration
  2020-10-21  9:33   ` Zheng Chuan
@ 2020-10-23 19:02     ` Dr. David Alan Gilbert
  2020-10-25  2:29       ` Zheng Chuan
  0 siblings, 1 reply; 31+ messages in thread
From: Dr. David Alan Gilbert @ 2020-10-23 19:02 UTC (permalink / raw)
  To: Zheng Chuan
  Cc: yubihong, Zhanghailiang, quintela, qemu-devel, Xiexiangyou,
	Chenzhendong (alex), wanghao (O)

* Zheng Chuan (zhengchuan@huawei.com) wrote:
> 
> 
> On 2020/10/21 17:25, Zhanghailiang wrote:
> > Hi zhengchuan,
> > 
> >> -----Original Message-----
> >> From: zhengchuan
> >> Sent: Saturday, October 17, 2020 12:26 PM
> >> To: quintela@redhat.com; dgilbert@redhat.com
> >> Cc: Zhanghailiang <zhang.zhanghailiang@huawei.com>; Chenzhendong (alex)
> >> <alex.chen@huawei.com>; Xiexiangyou <xiexiangyou@huawei.com>; wanghao
> >> (O) <wanghao232@huawei.com>; yubihong <yubihong@huawei.com>;
> >> fengzhimin1@huawei.com; qemu-devel@nongnu.org
> >> Subject: [PATCH v3 00/18] Support Multifd for RDMA migration
> >>
> >> Now I continue to support multifd for RDMA migration based on my colleague
> >> zhiming's work:)
> >>
> >> The previous RFC patches is listed below:
> >> v1:
> >> https://www.mail-archive.com/qemu-devel@nongnu.org/msg669455.html
> >> v2:
> >> https://www.mail-archive.com/qemu-devel@nongnu.org/msg679188.html
> >>
> >> As descried in previous RFC, the RDMA bandwidth is not fully utilized for over
> >> 25Gigabit NIC because of single channel for RDMA migration.
> >> This patch series is going to support multifd for RDMA migration based on multifd
> >> framework.
> >>
> >> Comparsion is between origion and multifd RDMA migration is re-tested for v3.
> >> The VM specifications for migration are as follows:
> >> - VM use 4k page;
> >> - the number of VCPU is 4;
> >> - the total memory is 16Gigabit;
> >> - use 'mempress' tool to pressurize VM(mempress 8000 500);
> >> - use 25Gigabit network card to migrate;
> >>
> >> For origin RDMA and MultiRDMA migration, the total migration times of VM are
> >> as follows:
> >> +++++++++++++++++++++++++++++++++++++++++++++++++
> >> |             | NOT rdma-pin-all | rdma-pin-all |
> >> +++++++++++++++++++++++++++++++++++++++++++++++++
> >> | origin RDMA |       26 s       |     29 s     |
> >> -------------------------------------------------
> >> |  MultiRDMA  |       16 s       |     17 s     |
> >> +++++++++++++++++++++++++++++++++++++++++++++++++
> >>
> >> Test the multifd RDMA migration like this:
> >> virsh migrate --live --multiFd --migrateuri
> > 
> > There is no option '--multiFd' for virsh commands, It seems that, we added this private option for internal usage.
> > It's better to provide testing method by using qemu commands.
> > 
> > 
> Hi, Hailiang
> Yes, it should be, will update in V4.
> 
> Also, Ping.
> 
> Dave, Juan.
> 
> Any suggestion and comment about this series? Hope this feature could catch up with qemu 5.2.

It's a bit close; I'm not sure if I'll have time to review it on Monday
before the pull.

Dave

> > Thanks.
> > 
> >> rdma://192.168.1.100 [VM] --listen-address 0.0.0.0
> >> qemu+tcp://192.168.1.100/system --verbose
> >>
> >> v2 -> v3:
> >>     create multifd ops for both tcp and rdma
> >>     do not export rdma to avoid multifd code in mess
> >>     fix build issue for non-rdma
> >>     fix some codestyle and buggy code
> >>
> >> Chuan Zheng (18):
> >>   migration/rdma: add the 'migrate_use_rdma_pin_all' function
> >>   migration/rdma: judge whether or not the RDMA is used for migration
> >>   migration/rdma: create multifd_setup_ops for Tx/Rx thread
> >>   migration/rdma: add multifd_setup_ops for rdma
> >>   migration/rdma: do not need sync main for rdma
> >>   migration/rdma: export MultiFDSendParams/MultiFDRecvParams
> >>   migration/rdma: add rdma field into multifd send/recv param
> >>   migration/rdma: export getQIOChannel to get QIOchannel in rdma
> >>   migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
> >>   migration/rdma: Create the multifd recv channels for RDMA
> >>   migration/rdma: record host_port for multifd RDMA
> >>   migration/rdma: Create the multifd send channels for RDMA
> >>   migration/rdma: Add the function for dynamic page registration
> >>   migration/rdma: register memory for multifd RDMA channels
> >>   migration/rdma: only register the memory for multifd channels
> >>   migration/rdma: add rdma_channel into Migrationstate field
> >>   migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all
> >>     mode
> >>   migration/rdma: RDMA cleanup for multifd migration
> >>
> >>  migration/migration.c |  24 +++
> >>  migration/migration.h |  11 ++
> >>  migration/multifd.c   |  97 +++++++++-
> >>  migration/multifd.h   |  24 +++
> >>  migration/qemu-file.c |   5 +
> >>  migration/qemu-file.h |   1 +
> >>  migration/rdma.c      | 503
> >> +++++++++++++++++++++++++++++++++++++++++++++++++-
> >>  7 files changed, 653 insertions(+), 12 deletions(-)
> >>
> >> --
> >> 1.8.3.1
> > 
> > .
> > 
> 
> -- 
> Regards.
> Chuan
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 00/18] Support Multifd for RDMA migration
  2020-10-23 19:02     ` Dr. David Alan Gilbert
@ 2020-10-25  2:29       ` Zheng Chuan
  2020-12-15  7:28         ` Zheng Chuan
  0 siblings, 1 reply; 31+ messages in thread
From: Zheng Chuan @ 2020-10-25  2:29 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: yubihong, Zhanghailiang, quintela, qemu-devel, Xiexiangyou,
	Chenzhendong (alex), wanghao (O)



On 2020/10/24 3:02, Dr. David Alan Gilbert wrote:
> * Zheng Chuan (zhengchuan@huawei.com) wrote:
>>
>>
>> On 2020/10/21 17:25, Zhanghailiang wrote:
>>> Hi zhengchuan,
>>>
>>>> -----Original Message-----
>>>> From: zhengchuan
>>>> Sent: Saturday, October 17, 2020 12:26 PM
>>>> To: quintela@redhat.com; dgilbert@redhat.com
>>>> Cc: Zhanghailiang <zhang.zhanghailiang@huawei.com>; Chenzhendong (alex)
>>>> <alex.chen@huawei.com>; Xiexiangyou <xiexiangyou@huawei.com>; wanghao
>>>> (O) <wanghao232@huawei.com>; yubihong <yubihong@huawei.com>;
>>>> fengzhimin1@huawei.com; qemu-devel@nongnu.org
>>>> Subject: [PATCH v3 00/18] Support Multifd for RDMA migration
>>>>
>>>> Now I continue to support multifd for RDMA migration based on my colleague
>>>> zhiming's work:)
>>>>
>>>> The previous RFC patches is listed below:
>>>> v1:
>>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg669455.html
>>>> v2:
>>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg679188.html
>>>>
>>>> As descried in previous RFC, the RDMA bandwidth is not fully utilized for over
>>>> 25Gigabit NIC because of single channel for RDMA migration.
>>>> This patch series is going to support multifd for RDMA migration based on multifd
>>>> framework.
>>>>
>>>> Comparsion is between origion and multifd RDMA migration is re-tested for v3.
>>>> The VM specifications for migration are as follows:
>>>> - VM use 4k page;
>>>> - the number of VCPU is 4;
>>>> - the total memory is 16Gigabit;
>>>> - use 'mempress' tool to pressurize VM(mempress 8000 500);
>>>> - use 25Gigabit network card to migrate;
>>>>
>>>> For origin RDMA and MultiRDMA migration, the total migration times of VM are
>>>> as follows:
>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>> |             | NOT rdma-pin-all | rdma-pin-all |
>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>> | origin RDMA |       26 s       |     29 s     |
>>>> -------------------------------------------------
>>>> |  MultiRDMA  |       16 s       |     17 s     |
>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>
>>>> Test the multifd RDMA migration like this:
>>>> virsh migrate --live --multiFd --migrateuri
>>>
>>> There is no option '--multiFd' for virsh commands, It seems that, we added this private option for internal usage.
>>> It's better to provide testing method by using qemu commands.
>>>
>>>
>> Hi, Hailiang
>> Yes, it should be, will update in V4.
>>
>> Also, Ping.
>>
>> Dave, Juan.
>>
>> Any suggestion and comment about this series? Hope this feature could catch up with qemu 5.2.
> 
> It's a bit close; I'm not sure if I'll have time to review it on Monday
> before the pull.
> 
> Dave
> 
Yes, it is.
Then we may wait for the next merge window after a full review:)

>>> Thanks.
>>>
>>>> rdma://192.168.1.100 [VM] --listen-address 0.0.0.0
>>>> qemu+tcp://192.168.1.100/system --verbose
>>>>
>>>> v2 -> v3:
>>>>     create multifd ops for both tcp and rdma
>>>>     do not export rdma to avoid multifd code in mess
>>>>     fix build issue for non-rdma
>>>>     fix some codestyle and buggy code
>>>>
>>>> Chuan Zheng (18):
>>>>   migration/rdma: add the 'migrate_use_rdma_pin_all' function
>>>>   migration/rdma: judge whether or not the RDMA is used for migration
>>>>   migration/rdma: create multifd_setup_ops for Tx/Rx thread
>>>>   migration/rdma: add multifd_setup_ops for rdma
>>>>   migration/rdma: do not need sync main for rdma
>>>>   migration/rdma: export MultiFDSendParams/MultiFDRecvParams
>>>>   migration/rdma: add rdma field into multifd send/recv param
>>>>   migration/rdma: export getQIOChannel to get QIOchannel in rdma
>>>>   migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
>>>>   migration/rdma: Create the multifd recv channels for RDMA
>>>>   migration/rdma: record host_port for multifd RDMA
>>>>   migration/rdma: Create the multifd send channels for RDMA
>>>>   migration/rdma: Add the function for dynamic page registration
>>>>   migration/rdma: register memory for multifd RDMA channels
>>>>   migration/rdma: only register the memory for multifd channels
>>>>   migration/rdma: add rdma_channel into Migrationstate field
>>>>   migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all
>>>>     mode
>>>>   migration/rdma: RDMA cleanup for multifd migration
>>>>
>>>>  migration/migration.c |  24 +++
>>>>  migration/migration.h |  11 ++
>>>>  migration/multifd.c   |  97 +++++++++-
>>>>  migration/multifd.h   |  24 +++
>>>>  migration/qemu-file.c |   5 +
>>>>  migration/qemu-file.h |   1 +
>>>>  migration/rdma.c      | 503
>>>> +++++++++++++++++++++++++++++++++++++++++++++++++-
>>>>  7 files changed, 653 insertions(+), 12 deletions(-)
>>>>
>>>> --
>>>> 1.8.3.1
>>>
>>> .
>>>
>>
>> -- 
>> Regards.
>> Chuan
>>

-- 
Regards.
Chuan


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 01/18] migration/rdma: add the 'migrate_use_rdma_pin_all' function
  2020-10-17  4:25 ` [PATCH v3 01/18] migration/rdma: add the 'migrate_use_rdma_pin_all' function Chuan Zheng
@ 2020-11-10 11:52   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 31+ messages in thread
From: Dr. David Alan Gilbert @ 2020-11-10 11:52 UTC (permalink / raw)
  To: Chuan Zheng
  Cc: yubihong, zhang.zhanghailiang, quintela, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

* Chuan Zheng (zhengchuan@huawei.com) wrote:
> Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
> Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
> ---
>  migration/migration.c | 9 +++++++++
>  migration/migration.h | 1 +
>  2 files changed, 10 insertions(+)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 0575ecb..64ae417 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -2329,6 +2329,15 @@ bool migrate_use_events(void)
>      return s->enabled_capabilities[MIGRATION_CAPABILITY_EVENTS];
>  }
>  
> +bool migrate_use_rdma_pin_all(void)
> +{
> +    MigrationState *s;
> +
> +    s = migrate_get_current();
> +
> +    return s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL];
> +}
> +

I'd omit the 'use_' if you need to respin; but that's fine:


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

>  bool migrate_use_multifd(void)
>  {
>      MigrationState *s;
> diff --git a/migration/migration.h b/migration/migration.h
> index deb411a..74fd790 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -300,6 +300,7 @@ bool migrate_ignore_shared(void);
>  bool migrate_validate_uuid(void);
>  
>  bool migrate_auto_converge(void);
> +bool migrate_use_rdma_pin_all(void);
>  bool migrate_use_multifd(void);
>  bool migrate_pause_before_switchover(void);
>  int migrate_multifd_channels(void);
> -- 
> 1.8.3.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 03/18] migration/rdma: create multifd_setup_ops for Tx/Rx thread
  2020-10-17  4:25 ` [PATCH v3 03/18] migration/rdma: create multifd_setup_ops for Tx/Rx thread Chuan Zheng
@ 2020-11-10 12:11   ` Dr. David Alan Gilbert
  2020-11-11  7:51     ` Zheng Chuan
  0 siblings, 1 reply; 31+ messages in thread
From: Dr. David Alan Gilbert @ 2020-11-10 12:11 UTC (permalink / raw)
  To: Chuan Zheng
  Cc: yubihong, zhang.zhanghailiang, quintela, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

* Chuan Zheng (zhengchuan@huawei.com) wrote:
> Create multifd_setup_ops for TxRx thread, no logic change.
> 
> Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
> ---
>  migration/multifd.c | 44 +++++++++++++++++++++++++++++++++++++++-----
>  migration/multifd.h |  7 +++++++
>  2 files changed, 46 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 68b171f..1f82307 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -383,6 +383,8 @@ struct {
>      int exiting;
>      /* multifd ops */
>      MultiFDMethods *ops;
> +    /* multifd setup ops */
> +    MultiFDSetup *setup_ops;
>  } *multifd_send_state;
>  
>  /*
> @@ -790,8 +792,9 @@ static bool multifd_channel_connect(MultiFDSendParams *p,
>          } else {
>              /* update for tls qio channel */
>              p->c = ioc;
> -            qemu_thread_create(&p->thread, p->name, multifd_send_thread, p,
> -                                   QEMU_THREAD_JOINABLE);
> +            qemu_thread_create(&p->thread, p->name,
> +                               multifd_send_state->setup_ops->send_thread_setup,
> +                               p, QEMU_THREAD_JOINABLE);
>         }
>         return false;
>      }
> @@ -839,6 +842,11 @@ cleanup:
>      multifd_new_send_channel_cleanup(p, sioc, local_err);
>  }
>  
> +static void multifd_send_channel_setup(MultiFDSendParams *p)
> +{
> +    socket_send_channel_create(multifd_new_send_channel_async, p);
> +}
> +
>  int multifd_save_setup(Error **errp)
>  {
>      int thread_count;
> @@ -856,6 +864,7 @@ int multifd_save_setup(Error **errp)
>      multifd_send_state->pages = multifd_pages_init(page_count);
>      qemu_sem_init(&multifd_send_state->channels_ready, 0);
>      qatomic_set(&multifd_send_state->exiting, 0);
> +    multifd_send_state->setup_ops = multifd_setup_ops_init();
>      multifd_send_state->ops = multifd_ops[migrate_multifd_compression()];
>  
>      for (i = 0; i < thread_count; i++) {
> @@ -875,7 +884,7 @@ int multifd_save_setup(Error **errp)
>          p->packet->version = cpu_to_be32(MULTIFD_VERSION);
>          p->name = g_strdup_printf("multifdsend_%d", i);
>          p->tls_hostname = g_strdup(s->hostname);
> -        socket_send_channel_create(multifd_new_send_channel_async, p);
> +        multifd_send_state->setup_ops->send_channel_setup(p);
>      }
>  
>      for (i = 0; i < thread_count; i++) {
> @@ -902,6 +911,8 @@ struct {
>      uint64_t packet_num;
>      /* multifd ops */
>      MultiFDMethods *ops;
> +    /* multifd setup ops */
> +    MultiFDSetup *setup_ops;
>  } *multifd_recv_state;
>  
>  static void multifd_recv_terminate_threads(Error *err)
> @@ -1095,6 +1106,7 @@ int multifd_load_setup(Error **errp)
>      multifd_recv_state->params = g_new0(MultiFDRecvParams, thread_count);
>      qatomic_set(&multifd_recv_state->count, 0);
>      qemu_sem_init(&multifd_recv_state->sem_sync, 0);
> +    multifd_recv_state->setup_ops = multifd_setup_ops_init();
>      multifd_recv_state->ops = multifd_ops[migrate_multifd_compression()];
>  
>      for (i = 0; i < thread_count; i++) {
> @@ -1173,9 +1185,31 @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
>      p->num_packets = 1;
>  
>      p->running = true;
> -    qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,
> -                       QEMU_THREAD_JOINABLE);
> +    multifd_recv_state->setup_ops->recv_channel_setup(ioc, p);
> +    qemu_thread_create(&p->thread, p->name,
> +                       multifd_recv_state->setup_ops->recv_thread_setup,
> +                       p, QEMU_THREAD_JOINABLE);
>      qatomic_inc(&multifd_recv_state->count);
>      return qatomic_read(&multifd_recv_state->count) ==
>             migrate_multifd_channels();
>  }
> +
> +static void multifd_recv_channel_setup(QIOChannel *ioc, MultiFDRecvParams *p)
> +{
> +    return;
> +}
> +
> +static MultiFDSetup multifd_socket_ops = {
> +    .send_thread_setup = multifd_send_thread,
> +    .recv_thread_setup = multifd_recv_thread,
> +    .send_channel_setup = multifd_send_channel_setup,
> +    .recv_channel_setup = multifd_recv_channel_setup
> +};

I don't think you need '_setup' on the thread function names here.

Dave

> +MultiFDSetup *multifd_setup_ops_init(void)
> +{
> +    MultiFDSetup *ops;
> +
> +    ops = &multifd_socket_ops;
> +    return ops;
> +}
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 8d6751f..446315b 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -166,6 +166,13 @@ typedef struct {
>      int (*recv_pages)(MultiFDRecvParams *p, uint32_t used, Error **errp);
>  } MultiFDMethods;
>  
> +typedef struct {
> +    void *(*send_thread_setup)(void *opaque);
> +    void *(*recv_thread_setup)(void *opaque);
> +    void (*send_channel_setup)(MultiFDSendParams *p);
> +    void (*recv_channel_setup)(QIOChannel *ioc, MultiFDRecvParams *p);
> +} MultiFDSetup;
> +
>  void multifd_register_ops(int method, MultiFDMethods *ops);
>  
>  #endif
> -- 
> 1.8.3.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 04/18] migration/rdma: add multifd_setup_ops for rdma
  2020-10-17  4:25 ` [PATCH v3 04/18] migration/rdma: add multifd_setup_ops for rdma Chuan Zheng
@ 2020-11-10 12:30   ` Dr. David Alan Gilbert
  2020-11-11  7:56     ` Zheng Chuan
  0 siblings, 1 reply; 31+ messages in thread
From: Dr. David Alan Gilbert @ 2020-11-10 12:30 UTC (permalink / raw)
  To: Chuan Zheng
  Cc: yubihong, zhang.zhanghailiang, quintela, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

* Chuan Zheng (zhengchuan@huawei.com) wrote:
> Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
> ---
>  migration/multifd.c |  6 ++++
>  migration/multifd.h |  4 +++
>  migration/rdma.c    | 82 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 92 insertions(+)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 1f82307..0d494df 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -1210,6 +1210,12 @@ MultiFDSetup *multifd_setup_ops_init(void)
>  {
>      MultiFDSetup *ops;
>  
> +#ifdef CONFIG_RDMA
> +    if (migrate_use_rdma()) {
> +        ops = multifd_rdma_setup();
> +        return ops;
> +    }
> +#endif
>      ops = &multifd_socket_ops;
>      return ops;
>  }
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 446315b..62a0b2a 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -173,6 +173,10 @@ typedef struct {
>      void (*recv_channel_setup)(QIOChannel *ioc, MultiFDRecvParams *p);
>  } MultiFDSetup;
>  
> +#ifdef CONFIG_RDMA
> +MultiFDSetup *multifd_rdma_setup(void);
> +#endif
> +MultiFDSetup *multifd_setup_ops_init(void);
>  void multifd_register_ops(int method, MultiFDMethods *ops);
>  
>  #endif
> diff --git a/migration/rdma.c b/migration/rdma.c
> index 0340841..ad4e4ba 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -19,6 +19,7 @@
>  #include "qemu/cutils.h"
>  #include "rdma.h"
>  #include "migration.h"
> +#include "multifd.h"
>  #include "qemu-file.h"
>  #include "ram.h"
>  #include "qemu-file-channel.h"
> @@ -4138,3 +4139,84 @@ err:
>      g_free(rdma);
>      g_free(rdma_return_path);
>  }
> +
> +static void *multifd_rdma_send_thread(void *opaque)
> +{
> +    MultiFDSendParams *p = opaque;
> +
> +    while (true) {
> +        qemu_mutex_lock(&p->mutex);
> +        if (p->quit) {
> +            qemu_mutex_unlock(&p->mutex);
> +            break;
> +        }
> +        qemu_mutex_unlock(&p->mutex);
> +        qemu_sem_wait(&p->sem);
> +    }
> +
> +    qemu_mutex_lock(&p->mutex);
> +    p->running = false;
> +    qemu_mutex_unlock(&p->mutex);
> +
> +    return NULL;
> +}

You might like to consider using WITH_QEMU_LOCK_GUARD, I think that
would become:

  while (true) {
      WITH_QEMU_LOCK_GUARD(&p->mutex) {
          if (p->quit) {
              break;
          }
      }
      qemu_sem_wait(&p->sem);
  }
  WITH_QEMU_LOCK_GUARD(&p->mutex) {
      p->running = false;
  }

> +
> +static void multifd_rdma_send_channel_setup(MultiFDSendParams *p)
> +{
> +    Error *local_err = NULL;
> +
> +    if (p->quit) {
> +        error_setg(&local_err, "multifd: send id %d already quit", p->id);
> +        return ;
> +    }
> +    p->running = true;
> +
> +    qemu_thread_create(&p->thread, p->name, multifd_rdma_send_thread, p,
> +                       QEMU_THREAD_JOINABLE);
> +}
> +
> +static void *multifd_rdma_recv_thread(void *opaque)
> +{
> +    MultiFDRecvParams *p = opaque;
> +
> +    while (true) {
> +        qemu_mutex_lock(&p->mutex);
> +        if (p->quit) {
> +            qemu_mutex_unlock(&p->mutex);
> +            break;
> +        }
> +        qemu_mutex_unlock(&p->mutex);
> +        qemu_sem_wait(&p->sem_sync);
> +    }
> +
> +    qemu_mutex_lock(&p->mutex);
> +    p->running = false;
> +    qemu_mutex_unlock(&p->mutex);
> +
> +    return NULL;
> +}
> +
> +static void multifd_rdma_recv_channel_setup(QIOChannel *ioc,
> +                                            MultiFDRecvParams *p)
> +{
> +    QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
> +
> +    p->file = rioc->file;
> +    return;
> +}
> +
> +static MultiFDSetup multifd_rdma_ops = {
> +    .send_thread_setup = multifd_rdma_send_thread,
> +    .recv_thread_setup = multifd_rdma_recv_thread,
> +    .send_channel_setup = multifd_rdma_send_channel_setup,
> +    .recv_channel_setup = multifd_rdma_recv_channel_setup
> +};
> +
> +MultiFDSetup *multifd_rdma_setup(void)
> +{
> +    MultiFDSetup *rdma_ops;
> +
> +    rdma_ops = &multifd_rdma_ops;
> +
> +    return rdma_ops;

Why bother making this a function - just export multifd_rdma_ops ?
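
i.e. a sketch of what I mean:

  /* multifd.h */
  extern MultiFDSetup multifd_rdma_ops;

and then return &multifd_rdma_ops directly in multifd_setup_ops_init().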

Dave

> +}
> -- 
> 1.8.3.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 09/18] migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
  2020-10-17  4:25 ` [PATCH v3 09/18] migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma Chuan Zheng
@ 2020-11-10 16:51   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 31+ messages in thread
From: Dr. David Alan Gilbert @ 2020-11-10 16:51 UTC (permalink / raw)
  To: Chuan Zheng
  Cc: yubihong, zhang.zhanghailiang, quintela, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232

* Chuan Zheng (zhengchuan@huawei.com) wrote:
> Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/rdma.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 52 insertions(+)
> 
> diff --git a/migration/rdma.c b/migration/rdma.c
> index ad4e4ba..2baa933 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -4010,6 +4010,48 @@ static void rdma_accept_incoming_migration(void *opaque)
>      }
>  }
>  
> +static bool multifd_rdma_load_setup(const char *host_port,
> +                                    RDMAContext *rdma, Error **errp)
> +{
> +    int thread_count;
> +    int i;
> +    int idx;
> +    MultiFDRecvParams *multifd_recv_param;
> +    RDMAContext *multifd_rdma;
> +
> +    if (!migrate_use_multifd()) {
> +        return true;
> +    }
> +
> +    if (multifd_load_setup(errp) != 0) {
> +        /*
> +         * We haven't been able to create multifd threads
> +         * nothing better to do
> +         */
> +        return false;
> +    }
> +
> +    thread_count = migrate_multifd_channels();
> +    for (i = 0; i < thread_count; i++) {
> +        if (get_multifd_recv_param(i, &multifd_recv_param) < 0) {
> +            ERROR(errp, "rdma: error getting multifd_recv_param(%d)", i);
> +            return false;
> +        }
> +
> +        multifd_rdma = qemu_rdma_data_init(host_port, errp);
> +        for (idx = 0; idx < RDMA_WRID_MAX; idx++) {
> +            multifd_rdma->wr_data[idx].control_len = 0;
> +            multifd_rdma->wr_data[idx].control_curr = NULL;
> +        }
> +        /* the CM channel and CM id is shared */
> +        multifd_rdma->channel = rdma->channel;
> +        multifd_rdma->listen_id = rdma->listen_id;
> +        multifd_recv_param->rdma = (void *)multifd_rdma;
> +    }
> +
> +    return true;
> +}
> +
>  void rdma_start_incoming_migration(const char *host_port, Error **errp)
>  {
>      int ret;
> @@ -4057,6 +4099,16 @@ void rdma_start_incoming_migration(const char *host_port, Error **errp)
>          qemu_rdma_return_path_dest_init(rdma_return_path, rdma);
>      }
>  
> +    /* multifd rdma setup */
> +    if (!multifd_rdma_load_setup(host_port, rdma, &local_err)) {
> +        /*
> +         * We haven't been able to create multifd threads
> +         * nothing better to do
> +         */
> +        error_report_err(local_err);
> +        goto err;
> +    }
> +
>      qemu_set_fd_handler(rdma->channel->fd, rdma_accept_incoming_migration,
>                          NULL, (void *)(intptr_t)rdma);
>      return;
> -- 
> 1.8.3.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 03/18] migration/rdma: create multifd_setup_ops for Tx/Rx thread
  2020-11-10 12:11   ` Dr. David Alan Gilbert
@ 2020-11-11  7:51     ` Zheng Chuan
  0 siblings, 0 replies; 31+ messages in thread
From: Zheng Chuan @ 2020-11-11  7:51 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: yubihong, zhang.zhanghailiang, quintela, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232



On 2020/11/10 20:11, Dr. David Alan Gilbert wrote:
> * Chuan Zheng (zhengchuan@huawei.com) wrote:
>> Create multifd_setup_ops for TxRx thread, no logic change.
>>
>> Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
>> ---
>>  migration/multifd.c | 44 +++++++++++++++++++++++++++++++++++++++-----
>>  migration/multifd.h |  7 +++++++
>>  2 files changed, 46 insertions(+), 5 deletions(-)
>>
>> diff --git a/migration/multifd.c b/migration/multifd.c
>> index 68b171f..1f82307 100644
>> --- a/migration/multifd.c
>> +++ b/migration/multifd.c
>> @@ -383,6 +383,8 @@ struct {
>>      int exiting;
>>      /* multifd ops */
>>      MultiFDMethods *ops;
>> +    /* multifd setup ops */
>> +    MultiFDSetup *setup_ops;
>>  } *multifd_send_state;
>>  
>>  /*
>> @@ -790,8 +792,9 @@ static bool multifd_channel_connect(MultiFDSendParams *p,
>>          } else {
>>              /* update for tls qio channel */
>>              p->c = ioc;
>> -            qemu_thread_create(&p->thread, p->name, multifd_send_thread, p,
>> -                                   QEMU_THREAD_JOINABLE);
>> +            qemu_thread_create(&p->thread, p->name,
>> +                               multifd_send_state->setup_ops->send_thread_setup,
>> +                               p, QEMU_THREAD_JOINABLE);
>>         }
>>         return false;
>>      }
>> @@ -839,6 +842,11 @@ cleanup:
>>      multifd_new_send_channel_cleanup(p, sioc, local_err);
>>  }
>>  
>> +static void multifd_send_channel_setup(MultiFDSendParams *p)
>> +{
>> +    socket_send_channel_create(multifd_new_send_channel_async, p);
>> +}
>> +
>>  int multifd_save_setup(Error **errp)
>>  {
>>      int thread_count;
>> @@ -856,6 +864,7 @@ int multifd_save_setup(Error **errp)
>>      multifd_send_state->pages = multifd_pages_init(page_count);
>>      qemu_sem_init(&multifd_send_state->channels_ready, 0);
>>      qatomic_set(&multifd_send_state->exiting, 0);
>> +    multifd_send_state->setup_ops = multifd_setup_ops_init();
>>      multifd_send_state->ops = multifd_ops[migrate_multifd_compression()];
>>  
>>      for (i = 0; i < thread_count; i++) {
>> @@ -875,7 +884,7 @@ int multifd_save_setup(Error **errp)
>>          p->packet->version = cpu_to_be32(MULTIFD_VERSION);
>>          p->name = g_strdup_printf("multifdsend_%d", i);
>>          p->tls_hostname = g_strdup(s->hostname);
>> -        socket_send_channel_create(multifd_new_send_channel_async, p);
>> +        multifd_send_state->setup_ops->send_channel_setup(p);
>>      }
>>  
>>      for (i = 0; i < thread_count; i++) {
>> @@ -902,6 +911,8 @@ struct {
>>      uint64_t packet_num;
>>      /* multifd ops */
>>      MultiFDMethods *ops;
>> +    /* multifd setup ops */
>> +    MultiFDSetup *setup_ops;
>>  } *multifd_recv_state;
>>  
>>  static void multifd_recv_terminate_threads(Error *err)
>> @@ -1095,6 +1106,7 @@ int multifd_load_setup(Error **errp)
>>      multifd_recv_state->params = g_new0(MultiFDRecvParams, thread_count);
>>      qatomic_set(&multifd_recv_state->count, 0);
>>      qemu_sem_init(&multifd_recv_state->sem_sync, 0);
>> +    multifd_recv_state->setup_ops = multifd_setup_ops_init();
>>      multifd_recv_state->ops = multifd_ops[migrate_multifd_compression()];
>>  
>>      for (i = 0; i < thread_count; i++) {
>> @@ -1173,9 +1185,31 @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
>>      p->num_packets = 1;
>>  
>>      p->running = true;
>> -    qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,
>> -                       QEMU_THREAD_JOINABLE);
>> +    multifd_recv_state->setup_ops->recv_channel_setup(ioc, p);
>> +    qemu_thread_create(&p->thread, p->name,
>> +                       multifd_recv_state->setup_ops->recv_thread_setup,
>> +                       p, QEMU_THREAD_JOINABLE);
>>      qatomic_inc(&multifd_recv_state->count);
>>      return qatomic_read(&multifd_recv_state->count) ==
>>             migrate_multifd_channels();
>>  }
>> +
>> +static void multifd_recv_channel_setup(QIOChannel *ioc, MultiFDRecvParams *p)
>> +{
>> +    return;
>> +}
>> +
>> +static MultiFDSetup multifd_socket_ops = {
>> +    .send_thread_setup = multifd_send_thread,
>> +    .recv_thread_setup = multifd_recv_thread,
>> +    .send_channel_setup = multifd_send_channel_setup,
>> +    .recv_channel_setup = multifd_recv_channel_setup
>> +};
> 
> I don't think you need '_setup' on the thread function names here.
> 
> Dave
> 
OK, done in my local tree.
>> +MultiFDSetup *multifd_setup_ops_init(void)
>> +{
>> +    MultiFDSetup *ops;
>> +
>> +    ops = &multifd_socket_ops;
>> +    return ops;
>> +}
>> diff --git a/migration/multifd.h b/migration/multifd.h
>> index 8d6751f..446315b 100644
>> --- a/migration/multifd.h
>> +++ b/migration/multifd.h
>> @@ -166,6 +166,13 @@ typedef struct {
>>      int (*recv_pages)(MultiFDRecvParams *p, uint32_t used, Error **errp);
>>  } MultiFDMethods;
>>  
>> +typedef struct {
>> +    void *(*send_thread_setup)(void *opaque);
>> +    void *(*recv_thread_setup)(void *opaque);
>> +    void (*send_channel_setup)(MultiFDSendParams *p);
>> +    void (*recv_channel_setup)(QIOChannel *ioc, MultiFDRecvParams *p);
>> +} MultiFDSetup;
>> +
>>  void multifd_register_ops(int method, MultiFDMethods *ops);
>>  
>>  #endif
>> -- 
>> 1.8.3.1
>>

-- 
Regards.
Chuan


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 04/18] migration/rdma: add multifd_setup_ops for rdma
  2020-11-10 12:30   ` Dr. David Alan Gilbert
@ 2020-11-11  7:56     ` Zheng Chuan
  0 siblings, 0 replies; 31+ messages in thread
From: Zheng Chuan @ 2020-11-11  7:56 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: yubihong, zhang.zhanghailiang, quintela, fengzhimin1, qemu-devel,
	xiexiangyou, alex.chen, wanghao232



On 2020/11/10 20:30, Dr. David Alan Gilbert wrote:
> * Chuan Zheng (zhengchuan@huawei.com) wrote:
>> Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
>> ---
>>  migration/multifd.c |  6 ++++
>>  migration/multifd.h |  4 +++
>>  migration/rdma.c    | 82 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 92 insertions(+)
>>
>> diff --git a/migration/multifd.c b/migration/multifd.c
>> index 1f82307..0d494df 100644
>> --- a/migration/multifd.c
>> +++ b/migration/multifd.c
>> @@ -1210,6 +1210,12 @@ MultiFDSetup *multifd_setup_ops_init(void)
>>  {
>>      MultiFDSetup *ops;
>>  
>> +#ifdef CONFIG_RDMA
>> +    if (migrate_use_rdma()) {
>> +        ops = multifd_rdma_setup();
>> +        return ops;
>> +    }
>> +#endif
>>      ops = &multifd_socket_ops;
>>      return ops;
>>  }
>> diff --git a/migration/multifd.h b/migration/multifd.h
>> index 446315b..62a0b2a 100644
>> --- a/migration/multifd.h
>> +++ b/migration/multifd.h
>> @@ -173,6 +173,10 @@ typedef struct {
>>      void (*recv_channel_setup)(QIOChannel *ioc, MultiFDRecvParams *p);
>>  } MultiFDSetup;
>>  
>> +#ifdef CONFIG_RDMA
>> +MultiFDSetup *multifd_rdma_setup(void);
>> +#endif
>> +MultiFDSetup *multifd_setup_ops_init(void);
>>  void multifd_register_ops(int method, MultiFDMethods *ops);
>>  
>>  #endif
>> diff --git a/migration/rdma.c b/migration/rdma.c
>> index 0340841..ad4e4ba 100644
>> --- a/migration/rdma.c
>> +++ b/migration/rdma.c
>> @@ -19,6 +19,7 @@
>>  #include "qemu/cutils.h"
>>  #include "rdma.h"
>>  #include "migration.h"
>> +#include "multifd.h"
>>  #include "qemu-file.h"
>>  #include "ram.h"
>>  #include "qemu-file-channel.h"
>> @@ -4138,3 +4139,84 @@ err:
>>      g_free(rdma);
>>      g_free(rdma_return_path);
>>  }
>> +
>> +static void *multifd_rdma_send_thread(void *opaque)
>> +{
>> +    MultiFDSendParams *p = opaque;
>> +
>> +    while (true) {
>> +        qemu_mutex_lock(&p->mutex);
>> +        if (p->quit) {
>> +            qemu_mutex_unlock(&p->mutex);
>> +            break;
>> +        }
>> +        qemu_mutex_unlock(&p->mutex);
>> +        qemu_sem_wait(&p->sem);
>> +    }
>> +
>> +    qemu_mutex_lock(&p->mutex);
>> +    p->running = false;
>> +    qemu_mutex_unlock(&p->mutex);
>> +
>> +    return NULL;
>> +}
> 
> You might like to consider using WITH_QEMU_LOCK_GUARD, I think that
> would become:
> 
>   while (true) {
>       WITH_QEMU_LOCK_GUARD(&p->mutex) {
>           if (p->quit) {
>               break;
>           }
>       }
>       qemu_sem_wait(&p->sem);
>   }
>   WITH_QEMU_LOCK_GUARD(&p->mutex) {
>       p->running = false;
>   }
> 
OK. And this reminds me that we still take qemu_mutex_lock(&p->mutex) directly in our multifd code; should that also be optimized?
>> +
>> +static void multifd_rdma_send_channel_setup(MultiFDSendParams *p)
>> +{
>> +    Error *local_err = NULL;
>> +
>> +    if (p->quit) {
>> +        error_setg(&local_err, "multifd: send id %d already quit", p->id);
>> +        return ;
>> +    }
>> +    p->running = true;
>> +
>> +    qemu_thread_create(&p->thread, p->name, multifd_rdma_send_thread, p,
>> +                       QEMU_THREAD_JOINABLE);
>> +}
>> +
>> +static void *multifd_rdma_recv_thread(void *opaque)
>> +{
>> +    MultiFDRecvParams *p = opaque;
>> +
>> +    while (true) {
>> +        qemu_mutex_lock(&p->mutex);
>> +        if (p->quit) {
>> +            qemu_mutex_unlock(&p->mutex);
>> +            break;
>> +        }
>> +        qemu_mutex_unlock(&p->mutex);
>> +        qemu_sem_wait(&p->sem_sync);
>> +    }
>> +
>> +    qemu_mutex_lock(&p->mutex);
>> +    p->running = false;
>> +    qemu_mutex_unlock(&p->mutex);
>> +
>> +    return NULL;
>> +}
>> +
>> +static void multifd_rdma_recv_channel_setup(QIOChannel *ioc,
>> +                                            MultiFDRecvParams *p)
>> +{
>> +    QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
>> +
>> +    p->file = rioc->file;
>> +    return;
>> +}
>> +
>> +static MultiFDSetup multifd_rdma_ops = {
>> +    .send_thread_setup = multifd_rdma_send_thread,
>> +    .recv_thread_setup = multifd_rdma_recv_thread,
>> +    .send_channel_setup = multifd_rdma_send_channel_setup,
>> +    .recv_channel_setup = multifd_rdma_recv_channel_setup
>> +};
>> +
>> +MultiFDSetup *multifd_rdma_setup(void)
>> +{
>> +    MultiFDSetup *rdma_ops;
>> +
>> +    rdma_ops = &multifd_rdma_ops;
>> +
>> +    return rdma_ops;
> 
> Why bother making this a function - just export multifd_rdma_ops ?
> 
> Dave
> 
OK, I will consider doing it that way.

>> +}
>> -- 
>> 1.8.3.1
>>

-- 
Regards.
Chuan


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 00/18] Support Multifd for RDMA migration
  2020-10-25  2:29       ` Zheng Chuan
@ 2020-12-15  7:28         ` Zheng Chuan
  2020-12-18 20:01           ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 31+ messages in thread
From: Zheng Chuan @ 2020-12-15  7:28 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: yubihong, Zhanghailiang, quintela, qemu-devel, Xiexiangyou,
	Chenzhendong (alex), wanghao (O)

Hi, Dave.

Since the QEMU 6.0 window is open and some patches of this series have been reviewed, might you have time to continue reviewing the rest of them?

On 2020/10/25 10:29, Zheng Chuan wrote:
> 
> 
> On 2020/10/24 3:02, Dr. David Alan Gilbert wrote:
>> * Zheng Chuan (zhengchuan@huawei.com) wrote:
>>>
>>>
>>> On 2020/10/21 17:25, Zhanghailiang wrote:
>>>> Hi zhengchuan,
>>>>
>>>>> -----Original Message-----
>>>>> From: zhengchuan
>>>>> Sent: Saturday, October 17, 2020 12:26 PM
>>>>> To: quintela@redhat.com; dgilbert@redhat.com
>>>>> Cc: Zhanghailiang <zhang.zhanghailiang@huawei.com>; Chenzhendong (alex)
>>>>> <alex.chen@huawei.com>; Xiexiangyou <xiexiangyou@huawei.com>; wanghao
>>>>> (O) <wanghao232@huawei.com>; yubihong <yubihong@huawei.com>;
>>>>> fengzhimin1@huawei.com; qemu-devel@nongnu.org
>>>>> Subject: [PATCH v3 00/18] Support Multifd for RDMA migration
>>>>>
>>>>> Now I continue to support multifd for RDMA migration based on my colleague
>>>>> zhiming's work:)
>>>>>
>>>>> The previous RFC patches is listed below:
>>>>> v1:
>>>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg669455.html
>>>>> v2:
>>>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg679188.html
>>>>>
>>>>> As descried in previous RFC, the RDMA bandwidth is not fully utilized for over
>>>>> 25Gigabit NIC because of single channel for RDMA migration.
>>>>> This patch series is going to support multifd for RDMA migration based on multifd
>>>>> framework.
>>>>>
>>>>> Comparsion is between origion and multifd RDMA migration is re-tested for v3.
>>>>> The VM specifications for migration are as follows:
>>>>> - VM use 4k page;
>>>>> - the number of VCPU is 4;
>>>>> - the total memory is 16Gigabit;
>>>>> - use 'mempress' tool to pressurize VM(mempress 8000 500);
>>>>> - use 25Gigabit network card to migrate;
>>>>>
>>>>> For origin RDMA and MultiRDMA migration, the total migration times of VM are
>>>>> as follows:
>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>> |             | NOT rdma-pin-all | rdma-pin-all |
>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>> | origin RDMA |       26 s       |     29 s     |
>>>>> -------------------------------------------------
>>>>> |  MultiRDMA  |       16 s       |     17 s     |
>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>
>>>>> Test the multifd RDMA migration like this:
>>>>> virsh migrate --live --multiFd --migrateuri
>>>>
>>>> There is no option '--multiFd' for virsh commands, It seems that, we added this private option for internal usage.
>>>> It's better to provide testing method by using qemu commands.
>>>>
>>>>
>>> Hi, Hailiang
>>> Yes, it should be, will update in V4.
>>>
>>> Also, Ping.
>>>
>>> Dave, Juan.
>>>
>>> Any suggestion and comment about this series? Hope this feature could catch up with qemu 5.2.
>>
>> It's a bit close; I'm not sure if I'll have time to review it on Monday
>> before the pull.
>>
>> Dave
>>
> Yes, it is.
> Then we may wait for next merge window after fully review:)
> 
>>>> Thanks.
>>>>
>>>>> rdma://192.168.1.100 [VM] --listen-address 0.0.0.0
>>>>> qemu+tcp://192.168.1.100/system --verbose
>>>>>
>>>>> v2 -> v3:
>>>>>     create multifd ops for both tcp and rdma
>>>>>     do not export rdma internals, to keep the multifd code clean
>>>>>     fix a build issue for non-rdma builds
>>>>>     fix some codestyle issues and buggy code
>>>>>
>>>>> Chuan Zheng (18):
>>>>>   migration/rdma: add the 'migrate_use_rdma_pin_all' function
>>>>>   migration/rdma: judge whether or not the RDMA is used for migration
>>>>>   migration/rdma: create multifd_setup_ops for Tx/Rx thread
>>>>>   migration/rdma: add multifd_setup_ops for rdma
>>>>>   migration/rdma: do not need sync main for rdma
>>>>>   migration/rdma: export MultiFDSendParams/MultiFDRecvParams
>>>>>   migration/rdma: add rdma field into multifd send/recv param
>>>>>   migration/rdma: export getQIOChannel to get QIOchannel in rdma
>>>>>   migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
>>>>>   migration/rdma: Create the multifd recv channels for RDMA
>>>>>   migration/rdma: record host_port for multifd RDMA
>>>>>   migration/rdma: Create the multifd send channels for RDMA
>>>>>   migration/rdma: Add the function for dynamic page registration
>>>>>   migration/rdma: register memory for multifd RDMA channels
>>>>>   migration/rdma: only register the memory for multifd channels
>>>>>   migration/rdma: add rdma_channel into Migrationstate field
>>>>>   migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all
>>>>>     mode
>>>>>   migration/rdma: RDMA cleanup for multifd migration
>>>>>
>>>>>  migration/migration.c |  24 +++
>>>>>  migration/migration.h |  11 ++
>>>>>  migration/multifd.c   |  97 +++++++++-
>>>>>  migration/multifd.h   |  24 +++
>>>>>  migration/qemu-file.c |   5 +
>>>>>  migration/qemu-file.h |   1 +
>>>>>  migration/rdma.c      | 503
>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++-
>>>>>  7 files changed, 653 insertions(+), 12 deletions(-)
>>>>>
>>>>> --
>>>>> 1.8.3.1
>>>>
>>>> .
>>>>
>>>
>>> -- 
>>> Regards.
>>> Chuan
>>>
> 

-- 
Regards.
Chuan


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 00/18] Support Multifd for RDMA migration
  2020-12-15  7:28         ` Zheng Chuan
@ 2020-12-18 20:01           ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 31+ messages in thread
From: Dr. David Alan Gilbert @ 2020-12-18 20:01 UTC (permalink / raw)
  To: Zheng Chuan
  Cc: yubihong, Zhanghailiang, quintela, qemu-devel, Xiexiangyou,
	Chenzhendong (alex), wanghao (O)

* Zheng Chuan (zhengchuan@huawei.com) wrote:
> Hi, Dave.
> 
> Since the qemu 6.0 window is open and some patches of this series have already been reviewed, might you have time to continue reviewing the rest of them?

Yes, apologies for not getting further; I'll need to attack it again in
the new year. It's quite hard: I know the RDMA code but not the multifd
code that well, and Juan knows the multifd code but not the RDMA code
that well; and it's quite a large series.

Dave

> On 2020/10/25 10:29, Zheng Chuan wrote:
> > 
> > 
> > On 2020/10/24 3:02, Dr. David Alan Gilbert wrote:
> >> * Zheng Chuan (zhengchuan@huawei.com) wrote:
> >>>
> >>>
> >>> On 2020/10/21 17:25, Zhanghailiang wrote:
> >>>> Hi zhengchuan,
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: zhengchuan
> >>>>> Sent: Saturday, October 17, 2020 12:26 PM
> >>>>> To: quintela@redhat.com; dgilbert@redhat.com
> >>>>> Cc: Zhanghailiang <zhang.zhanghailiang@huawei.com>; Chenzhendong (alex)
> >>>>> <alex.chen@huawei.com>; Xiexiangyou <xiexiangyou@huawei.com>; wanghao
> >>>>> (O) <wanghao232@huawei.com>; yubihong <yubihong@huawei.com>;
> >>>>> fengzhimin1@huawei.com; qemu-devel@nongnu.org
> >>>>> Subject: [PATCH v3 00/18] Support Multifd for RDMA migration
> >>>>>
> >>>>> I am continuing the work to support multifd for RDMA migration, based on my
> >>>>> colleague Zhiming's work :)
> >>>>>
> >>>>> The previous RFC patches are listed below:
> >>>>> v1:
> >>>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg669455.html
> >>>>> v2:
> >>>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg679188.html
> >>>>>
> >>>>> As described in the previous RFC, RDMA bandwidth is not fully utilized on
> >>>>> 25+ Gigabit NICs because a single channel is used for RDMA migration.
> >>>>> This patch series adds multifd support for RDMA migration on top of the
> >>>>> multifd framework.
> >>>>>
> >>>>> The comparison between original and multifd RDMA migration was re-tested for v3.
> >>>>> The VM specifications for migration are as follows:
> >>>>> - the VM uses 4k pages;
> >>>>> - the number of VCPUs is 4;
> >>>>> - the total memory is 16 GB;
> >>>>> - the 'mempress' tool is used to pressurize the VM (mempress 8000 500);
> >>>>> - a 25 Gigabit network card is used for migration;
> >>>>>
> >>>>> For original RDMA and MultiRDMA migration, the total VM migration times are
> >>>>> as follows:
> >>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
> >>>>> |             | NOT rdma-pin-all | rdma-pin-all |
> >>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
> >>>>> | origin RDMA |       26 s       |     29 s     |
> >>>>> -------------------------------------------------
> >>>>> |  MultiRDMA  |       16 s       |     17 s     |
> >>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
> >>>>>
> >>>>> Test the multifd RDMA migration like this:
> >>>>> virsh migrate --live --multiFd --migrateuri
> >>>>
> >>>> There is no '--multiFd' option in upstream virsh; it seems we added this private option for internal use.
> >>>> It would be better to describe the testing method using plain QEMU commands.
> >>>>
> >>>>
> >>> Hi, Hailiang
> >>> Yes, it should be; I will update this in v4.
> >>>
> >>> Also, Ping.
> >>>
> >>> Dave, Juan.
> >>>
> >>> Any suggestions or comments about this series? I hope this feature can still make qemu 5.2.
> >>
> >> It's a bit close; I'm not sure if I'll have time to review it on Monday
> >> before the pull.
> >>
> >> Dave
> >>
> > Yes, it is.
> > Then we may wait for the next merge window after a full review :)
> > 
> >>>> Thanks.
> >>>>
> >>>>> rdma://192.168.1.100 [VM] --listen-address 0.0.0.0
> >>>>> qemu+tcp://192.168.1.100/system --verbose
> >>>>>
> >>>>> v2 -> v3:
> >>>>>     create multifd ops for both tcp and rdma
> >>>>>     do not export rdma internals, to keep the multifd code clean
> >>>>>     fix a build issue for non-rdma builds
> >>>>>     fix some codestyle issues and buggy code
> >>>>>
> >>>>> Chuan Zheng (18):
> >>>>>   migration/rdma: add the 'migrate_use_rdma_pin_all' function
> >>>>>   migration/rdma: judge whether or not the RDMA is used for migration
> >>>>>   migration/rdma: create multifd_setup_ops for Tx/Rx thread
> >>>>>   migration/rdma: add multifd_setup_ops for rdma
> >>>>>   migration/rdma: do not need sync main for rdma
> >>>>>   migration/rdma: export MultiFDSendParams/MultiFDRecvParams
> >>>>>   migration/rdma: add rdma field into multifd send/recv param
> >>>>>   migration/rdma: export getQIOChannel to get QIOchannel in rdma
> >>>>>   migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
> >>>>>   migration/rdma: Create the multifd recv channels for RDMA
> >>>>>   migration/rdma: record host_port for multifd RDMA
> >>>>>   migration/rdma: Create the multifd send channels for RDMA
> >>>>>   migration/rdma: Add the function for dynamic page registration
> >>>>>   migration/rdma: register memory for multifd RDMA channels
> >>>>>   migration/rdma: only register the memory for multifd channels
> >>>>>   migration/rdma: add rdma_channel into Migrationstate field
> >>>>>   migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all
> >>>>>     mode
> >>>>>   migration/rdma: RDMA cleanup for multifd migration
> >>>>>
> >>>>>  migration/migration.c |  24 +++
> >>>>>  migration/migration.h |  11 ++
> >>>>>  migration/multifd.c   |  97 +++++++++-
> >>>>>  migration/multifd.h   |  24 +++
> >>>>>  migration/qemu-file.c |   5 +
> >>>>>  migration/qemu-file.h |   1 +
> >>>>>  migration/rdma.c      | 503
> >>>>> +++++++++++++++++++++++++++++++++++++++++++++++++-
> >>>>>  7 files changed, 653 insertions(+), 12 deletions(-)
> >>>>>
> >>>>> --
> >>>>> 1.8.3.1
> >>>>
> >>>> .
> >>>>
> >>>
> >>> -- 
> >>> Regards.
> >>> Chuan
> >>>
> > 
> 
> -- 
> Regards.
> Chuan
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2020-12-18 20:03 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-17  4:25 [PATCH v3 00/18] Support Multifd for RDMA migration Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 01/18] migration/rdma: add the 'migrate_use_rdma_pin_all' function Chuan Zheng
2020-11-10 11:52   ` Dr. David Alan Gilbert
2020-10-17  4:25 ` [PATCH v3 02/18] migration/rdma: judge whether or not the RDMA is used for migration Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 03/18] migration/rdma: create multifd_setup_ops for Tx/Rx thread Chuan Zheng
2020-11-10 12:11   ` Dr. David Alan Gilbert
2020-11-11  7:51     ` Zheng Chuan
2020-10-17  4:25 ` [PATCH v3 04/18] migration/rdma: add multifd_setup_ops for rdma Chuan Zheng
2020-11-10 12:30   ` Dr. David Alan Gilbert
2020-11-11  7:56     ` Zheng Chuan
2020-10-17  4:25 ` [PATCH v3 05/18] migration/rdma: do not need sync main " Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 06/18] migration/rdma: export MultiFDSendParams/MultiFDRecvParams Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 07/18] migration/rdma: add rdma field into multifd send/recv param Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 08/18] migration/rdma: export getQIOChannel to get QIOchannel in rdma Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 09/18] migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma Chuan Zheng
2020-11-10 16:51   ` Dr. David Alan Gilbert
2020-10-17  4:25 ` [PATCH v3 10/18] migration/rdma: Create the multifd recv channels for RDMA Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 11/18] migration/rdma: record host_port for multifd RDMA Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 12/18] migration/rdma: Create the multifd send channels for RDMA Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 13/18] migration/rdma: Add the function for dynamic page registration Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 14/18] migration/rdma: register memory for multifd RDMA channels Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 15/18] migration/rdma: only register the memory for multifd channels Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 16/18] migration/rdma: add rdma_channel into Migrationstate field Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 17/18] migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all mode Chuan Zheng
2020-10-17  4:25 ` [PATCH v3 18/18] migration/rdma: RDMA cleanup for multifd migration Chuan Zheng
2020-10-21  9:25 ` [PATCH v3 00/18] Support Multifd for RDMA migration Zhanghailiang
2020-10-21  9:33   ` Zheng Chuan
2020-10-23 19:02     ` Dr. David Alan Gilbert
2020-10-25  2:29       ` Zheng Chuan
2020-12-15  7:28         ` Zheng Chuan
2020-12-18 20:01           ` Dr. David Alan Gilbert

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.