* [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy
@ 2018-05-14 16:54 Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 01/17] filter-rewriter: fix memory leak for connection in connection_track_table Zhang Chen
                   ` (17 more replies)
  0 siblings, 18 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang

This RESEND version just fixes the code style in patch 11/17.

Hi~ All~

The COLO frame, block replication and the COLO proxy (colo-compare, filter-mirror,
filter-redirector, filter-rewriter) have existed in QEMU for a long time;
it's time to integrate these three parts to make COLO really work.

This series includes some optimizations for the COLO frame, such as separating the
process of saving RAM and device state, and using a COLO_EXIT event to notify users
that a VM exits COLO. Most of these parts were reviewed long ago in older versions,
but since this series has just been rebased on upstream, which merged a new migration
series, some of the patches deserve review again.

We use a notifier/callback method for colo-compare to notify the COLO frame about
inconsistent network packets, and add a handle_event method to NetFilterClass to
help the COLO frame notify filters and colo-compare about checkpoint/failover
events. This is flexible.
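
To make the shape of this concrete, here is a rough standalone sketch of the
notifier/callback pattern (plain C with simplified, invented definitions, not
the QEMU code in this series; the actual patches use QEMU's
Notifier/NotifierList helpers):

    #include <stdio.h>

    /* Simplified, standalone illustration of the notifier pattern. */
    typedef struct Notifier Notifier;
    struct Notifier {
        void (*notify)(Notifier *n, void *data);
        Notifier *next;
    };

    typedef struct { Notifier *head; } NotifierList;

    static void notifier_list_add(NotifierList *list, Notifier *n)
    {
        n->next = list->head;
        list->head = n;
    }

    static void notifier_list_notify(NotifierList *list, void *data)
    {
        for (Notifier *n = list->head; n; n = n->next) {
            n->notify(n, data);
        }
    }

    /* The COLO frame's callback: request a checkpoint on a miscompare. */
    static void checkpoint_notify(Notifier *n, void *data)
    {
        printf("packets inconsistent, requesting checkpoint\n");
    }

    int main(void)
    {
        NotifierList compare_notifiers = { NULL };
        Notifier frame_notifier = { .notify = checkpoint_notify };

        notifier_list_add(&compare_notifiers, &frame_notifier);
        /* colo-compare would call this when primary/secondary packets differ */
        notifier_list_notify(&compare_notifiers, NULL);
        return 0;
    }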

For the newest version, please refer to:
https://github.com/zhangckid/qemu/tree/qemu-colo-18may1-rebase

Please review, thanks.

V7:
 - Addressed Markus's comments in 11/17.
 - Rebased on upstream.

V6:
 - Addressed Eric Blake's comments, use the enum to feedback in patch 11/17.
 - Fixed QAPI command separator problem in patch 11/17.


Zhang Chen (12):
  filter-rewriter: fix memory leak for connection in
    connection_track_table
  colo-compare: implement the process of checkpoint
  colo-compare: use notifier to notify packets comparing result
  COLO: integrate colo compare with colo frame
  COLO: Add block replication into colo process
  COLO: Remove colo_state migration struct
  COLO: Load dirty pages into SVM's RAM cache firstly
  ram/COLO: Record the dirty pages that SVM received
  COLO: Flush memory data from ram cache
  qapi: Add new command to query colo status
  filter: Add handle_event method for NetFilterClass
  filter-rewriter: handle checkpoint and failover event

zhanghailiang (5):
  qmp event: Add COLO_EXIT event to notify users while exited COLO
  savevm: split the process of different stages for loadvm/savevm
  COLO: flush host dirty ram from cache
  COLO: notify net filters about checkpoint/failover event
  COLO: quick failover process by kick COLO thread

 include/exec/ram_addr.h  |   1 +
 include/migration/colo.h |  11 +-
 include/net/filter.h     |   5 +
 migration/Makefile.objs  |   2 +-
 migration/colo-comm.c    |  76 --------------
 migration/colo.c         | 219 +++++++++++++++++++++++++++++++++++++--
 migration/migration.c    |  38 ++++++-
 migration/ram.c          | 183 +++++++++++++++++++++++++++++++-
 migration/ram.h          |   4 +
 migration/savevm.c       |  55 ++++++++--
 migration/savevm.h       |   5 +
 migration/trace-events   |   3 +
 net/colo-compare.c       | 108 +++++++++++++++++--
 net/colo-compare.h       |  24 +++++
 net/colo.h               |   4 +
 net/filter-rewriter.c    | 109 +++++++++++++++++--
 net/filter.c             |  17 +++
 net/net.c                |  28 +++++
 qapi/migration.json      |  70 +++++++++++++
 vl.c                     |   2 -
 20 files changed, 846 insertions(+), 118 deletions(-)
 delete mode 100644 migration/colo-comm.c
 create mode 100644 net/colo-compare.h

-- 
2.17.0


* [Qemu-devel] [PATCH V7 RESEND 01/17] filter-rewriter: fix memory leak for connection in connection_track_table
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 02/17] colo-compare: implement the process of checkpoint Zhang Chen
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

After a network connection is closed, we didn't clear its related resources
in connection_track_table, which leads to a memory leak.

Let's track the state of the network connection: once it is closed, its related
resources are cleaned up.
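
Roughly, the tracking added below boils down to the following check (a
standalone, simplified sketch with invented names; the real code keeps this
state in the Connection struct and uses the slirp TCP definitions):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { CLOSING_NONE, CLOSING_LAST_ACK };

    struct conn_state {
        int tcp_state;
        uint32_t fin_ack_seq;       /* seq of the peer's fin=1,ack=1 segment */
    };

    /* Returns true once the connection can be dropped from the track table. */
    static bool track_teardown(struct conn_state *c, bool fin, bool ack,
                               uint32_t seq, uint32_t ack_seq)
    {
        if (fin && ack) {
            c->fin_ack_seq = seq;   /* remember where the peer's FIN was */
            c->tcp_state = CLOSING_LAST_ACK;
            return false;
        }
        /* the final ACK acknowledges fin_ack_seq + 1: teardown is complete */
        return c->tcp_state == CLOSING_LAST_ACK && ack &&
               ack_seq == c->fin_ack_seq + 1;
    }

    int main(void)
    {
        struct conn_state c = { CLOSING_NONE, 0 };

        track_teardown(&c, true, true, 1000, 0);        /* fin=1,ack=1 seen */
        if (track_teardown(&c, false, true, 0, 1001)) { /* final ack of FIN */
            printf("connection closed, remove it from the table\n");
        }
        return 0;
    }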

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 net/colo.h            |  4 +++
 net/filter-rewriter.c | 69 ++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 66 insertions(+), 7 deletions(-)

diff --git a/net/colo.h b/net/colo.h
index da6c36dcf7..cd118510c5 100644
--- a/net/colo.h
+++ b/net/colo.h
@@ -18,6 +18,7 @@
 #include "slirp/slirp.h"
 #include "qemu/jhash.h"
 #include "qemu/timer.h"
+#include "slirp/tcp.h"
 
 #define HASHTABLE_MAX_SIZE 16384
 
@@ -86,6 +87,9 @@ typedef struct Connection {
      * run once in independent tcp connection
      */
     int syn_flag;
+
+    int tcp_state; /* TCP FSM state */
+    tcp_seq fin_ack_seq; /* the seq of 'fin=1,ack=1' */
 } Connection;
 
 uint32_t connection_key_hash(const void *opaque);
diff --git a/net/filter-rewriter.c b/net/filter-rewriter.c
index 62dad2d773..0909a9a8af 100644
--- a/net/filter-rewriter.c
+++ b/net/filter-rewriter.c
@@ -59,9 +59,9 @@ static int is_tcp_packet(Packet *pkt)
 }
 
 /* handle tcp packet from primary guest */
-static int handle_primary_tcp_pkt(NetFilterState *nf,
+static int handle_primary_tcp_pkt(RewriterState *rf,
                                   Connection *conn,
-                                  Packet *pkt)
+                                  Packet *pkt, ConnectionKey *key)
 {
     struct tcphdr *tcp_pkt;
 
@@ -99,15 +99,44 @@ static int handle_primary_tcp_pkt(NetFilterState *nf,
             net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
                                    pkt->size - pkt->vnet_hdr_len);
         }
+        /*
+         * Case 1:
+         * The *server* side of this connection is the VM; the *client* tries to close
+         * the connection.
+         *
+         * We got 'ack=1' packets from client side, it acks 'fin=1, ack=1'
+         * packet from server side. From this point, we can ensure that there
+         * will be no more packets in the connection, except that some errors
+         * may happen on the path between the filter object and the vNIC. If this
+         * rare case really happens, we can still create a new connection,
+         * so it is safe to remove the connection from connection_track_table.
+         *
+         */
+        if ((conn->tcp_state == TCPS_LAST_ACK) &&
+            (ntohl(tcp_pkt->th_ack) == (conn->fin_ack_seq + 1))) {
+            g_hash_table_remove(rf->connection_track_table, key);
+        }
+    }
+    /*
+     * Case 2:
+     * The *server* side of this connection is the VM; the *server* tries to close
+     * the connection.
+     *
+     * We got 'fin=1, ack=1' packet from client side, we need to
+     * record the seq of 'fin=1, ack=1' packet.
+     */
+    if ((tcp_pkt->th_flags & (TH_ACK | TH_FIN)) == (TH_ACK | TH_FIN)) {
+        conn->fin_ack_seq = htonl(tcp_pkt->th_seq);
+        conn->tcp_state = TCPS_LAST_ACK;
     }
 
     return 0;
 }
 
 /* handle tcp packet from secondary guest */
-static int handle_secondary_tcp_pkt(NetFilterState *nf,
+static int handle_secondary_tcp_pkt(RewriterState *rf,
                                     Connection *conn,
-                                    Packet *pkt)
+                                    Packet *pkt, ConnectionKey *key)
 {
     struct tcphdr *tcp_pkt;
 
@@ -139,8 +168,34 @@ static int handle_secondary_tcp_pkt(NetFilterState *nf,
             net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
                                    pkt->size - pkt->vnet_hdr_len);
         }
+        /*
+         * Case 2:
+         * The *server* side of this connection is the VM; the *server* tries to close
+         * the connection.
+         *
+         * We got 'ack=1' packets from server side, it acks 'fin=1, ack=1'
+         * packet from client side. Like Case 1, there should be no packets
+         * in the connection from now on. But the difference here is that
+         * if the packet is lost, we will get the resent 'fin=1,ack=1' packet.
+         * TODO: Fix above case.
+         */
+        if ((conn->tcp_state == TCPS_LAST_ACK) &&
+            (ntohl(tcp_pkt->th_ack) == (conn->fin_ack_seq + 1))) {
+            g_hash_table_remove(rf->connection_track_table, key);
+        }
+    }
+    /*
+     * Case 1:
+     * The *server* side of this connection is the VM; the *client* tries to close
+     * the connection.
+     *
+     * We got 'fin=1, ack=1' packet from server side, we need to
+     * record the seq of 'fin=1, ack=1' packet.
+     */
+    if ((tcp_pkt->th_flags & (TH_ACK | TH_FIN)) == (TH_ACK | TH_FIN)) {
+        conn->fin_ack_seq = ntohl(tcp_pkt->th_seq);
+        conn->tcp_state = TCPS_LAST_ACK;
     }
-
     return 0;
 }
 
@@ -190,7 +245,7 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
 
         if (sender == nf->netdev) {
             /* NET_FILTER_DIRECTION_TX */
-            if (!handle_primary_tcp_pkt(nf, conn, pkt)) {
+            if (!handle_primary_tcp_pkt(s, conn, pkt, &key)) {
                 qemu_net_queue_send(s->incoming_queue, sender, 0,
                 (const uint8_t *)pkt->data, pkt->size, NULL);
                 packet_destroy(pkt, NULL);
@@ -203,7 +258,7 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
             }
         } else {
             /* NET_FILTER_DIRECTION_RX */
-            if (!handle_secondary_tcp_pkt(nf, conn, pkt)) {
+            if (!handle_secondary_tcp_pkt(s, conn, pkt, &key)) {
                 qemu_net_queue_send(s->incoming_queue, sender, 0,
                 (const uint8_t *)pkt->data, pkt->size, NULL);
                 packet_destroy(pkt, NULL);
-- 
2.17.0


* [Qemu-devel] [PATCH V7 RESEND 02/17] colo-compare: implement the process of checkpoint
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 01/17] filter-rewriter: fix memory leak for connection in connection_track_table Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 03/17] colo-compare: use notifier to notify packets comparing result Zhang Chen
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

While doing a checkpoint, we need to flush all the unhandled packets.
By using the filter notifier mechanism, we can easily notify
every compare object to do this, and the handling runs inside
the compare threads as a coroutine.
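
The synchronization works as follows: the COLO frame raises one event per
compare object and then blocks until a counter of unhandled events drops back
to zero, with each compare thread decrementing the counter once it has flushed
its queues. A standalone sketch of that wait/notify shape (plain pthreads with
invented names; the patch itself uses QemuMutex/QemuCond and a bottom half):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t event_mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  event_complete_cond = PTHREAD_COND_INITIALIZER;
    static int event_unhandled_count;

    static void *compare_thread(void *opaque)
    {
        /* ... flush the queued packets of this compare object ... */
        pthread_mutex_lock(&event_mtx);
        event_unhandled_count--;
        pthread_cond_broadcast(&event_complete_cond);
        pthread_mutex_unlock(&event_mtx);
        return NULL;
    }

    int main(void)
    {
        pthread_t workers[2];

        pthread_mutex_lock(&event_mtx);
        for (int i = 0; i < 2; i++) {
            event_unhandled_count++;          /* one event per compare object */
            pthread_create(&workers[i], NULL, compare_thread, NULL);
        }
        while (event_unhandled_count > 0) {   /* wait until all have handled it */
            pthread_cond_wait(&event_complete_cond, &event_mtx);
        }
        pthread_mutex_unlock(&event_mtx);

        for (int i = 0; i < 2; i++) {
            pthread_join(workers[i], NULL);
        }
        printf("checkpoint event handled by all compare objects\n");
        return 0;
    }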

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 include/migration/colo.h |  6 ++++
 net/colo-compare.c       | 76 ++++++++++++++++++++++++++++++++++++++++
 net/colo-compare.h       | 22 ++++++++++++
 3 files changed, 104 insertions(+)
 create mode 100644 net/colo-compare.h

diff --git a/include/migration/colo.h b/include/migration/colo.h
index 2fe48ad353..fefb2fcf4c 100644
--- a/include/migration/colo.h
+++ b/include/migration/colo.h
@@ -16,6 +16,12 @@
 #include "qemu-common.h"
 #include "qapi/qapi-types-migration.h"
 
+enum colo_event {
+    COLO_EVENT_NONE,
+    COLO_EVENT_CHECKPOINT,
+    COLO_EVENT_FAILOVER,
+};
+
 void colo_info_init(void);
 
 void migrate_start_colo_process(MigrationState *s);
diff --git a/net/colo-compare.c b/net/colo-compare.c
index 23b2d2c4cc..7ff3ae8904 100644
--- a/net/colo-compare.c
+++ b/net/colo-compare.c
@@ -27,11 +27,16 @@
 #include "qemu/sockets.h"
 #include "net/colo.h"
 #include "sysemu/iothread.h"
+#include "net/colo-compare.h"
+#include "migration/colo.h"
 
 #define TYPE_COLO_COMPARE "colo-compare"
 #define COLO_COMPARE(obj) \
     OBJECT_CHECK(CompareState, (obj), TYPE_COLO_COMPARE)
 
+static QTAILQ_HEAD(, CompareState) net_compares =
+       QTAILQ_HEAD_INITIALIZER(net_compares);
+
 #define COMPARE_READ_LEN_MAX NET_BUFSIZE
 #define MAX_QUEUE_SIZE 1024
 
@@ -41,6 +46,10 @@
 /* TODO: Should be configurable */
 #define REGULAR_PACKET_CHECK_MS 3000
 
+static QemuMutex event_mtx;
+static QemuCond event_complete_cond;
+static int event_unhandled_count;
+
 /*
  *  + CompareState ++
  *  |               |
@@ -87,6 +96,11 @@ typedef struct CompareState {
     IOThread *iothread;
     GMainContext *worker_context;
     QEMUTimer *packet_check_timer;
+
+    QEMUBH *event_bh;
+    enum colo_event event;
+
+    QTAILQ_ENTRY(CompareState) next;
 } CompareState;
 
 typedef struct CompareClass {
@@ -736,6 +750,25 @@ static void check_old_packet_regular(void *opaque)
                 REGULAR_PACKET_CHECK_MS);
 }
 
+/* Public API, Used for COLO frame to notify compare event */
+void colo_notify_compares_event(void *opaque, int event, Error **errp)
+{
+    CompareState *s;
+
+    qemu_mutex_lock(&event_mtx);
+    QTAILQ_FOREACH(s, &net_compares, next) {
+        s->event = event;
+        qemu_bh_schedule(s->event_bh);
+        event_unhandled_count++;
+    }
+    /* Wait for all compare threads to finish handling this event */
+    while (event_unhandled_count > 0) {
+        qemu_cond_wait(&event_complete_cond, &event_mtx);
+    }
+
+    qemu_mutex_unlock(&event_mtx);
+}
+
 static void colo_compare_timer_init(CompareState *s)
 {
     AioContext *ctx = iothread_get_aio_context(s->iothread);
@@ -756,6 +789,28 @@ static void colo_compare_timer_del(CompareState *s)
     }
  }
 
+static void colo_flush_packets(void *opaque, void *user_data);
+
+static void colo_compare_handle_event(void *opaque)
+{
+    CompareState *s = opaque;
+
+    switch (s->event) {
+    case COLO_EVENT_CHECKPOINT:
+        g_queue_foreach(&s->conn_list, colo_flush_packets, s);
+        break;
+    case COLO_EVENT_FAILOVER:
+        break;
+    default:
+        break;
+    }
+    qemu_mutex_lock(&event_mtx);
+    assert(event_unhandled_count > 0);
+    event_unhandled_count--;
+    qemu_cond_broadcast(&event_complete_cond);
+    qemu_mutex_unlock(&event_mtx);
+}
+
 static void colo_compare_iothread(CompareState *s)
 {
     object_ref(OBJECT(s->iothread));
@@ -769,6 +824,7 @@ static void colo_compare_iothread(CompareState *s)
                              s, s->worker_context, true);
 
     colo_compare_timer_init(s);
+    s->event_bh = qemu_bh_new(colo_compare_handle_event, s);
 }
 
 static char *compare_get_pri_indev(Object *obj, Error **errp)
@@ -926,8 +982,13 @@ static void colo_compare_complete(UserCreatable *uc, Error **errp)
     net_socket_rs_init(&s->pri_rs, compare_pri_rs_finalize, s->vnet_hdr);
     net_socket_rs_init(&s->sec_rs, compare_sec_rs_finalize, s->vnet_hdr);
 
+    QTAILQ_INSERT_TAIL(&net_compares, s, next);
+
     g_queue_init(&s->conn_list);
 
+    qemu_mutex_init(&event_mtx);
+    qemu_cond_init(&event_complete_cond);
+
     s->connection_track_table = g_hash_table_new_full(connection_key_hash,
                                                       connection_key_equal,
                                                       g_free,
@@ -990,6 +1051,7 @@ static void colo_compare_init(Object *obj)
 static void colo_compare_finalize(Object *obj)
 {
     CompareState *s = COLO_COMPARE(obj);
+    CompareState *tmp = NULL;
 
     qemu_chr_fe_deinit(&s->chr_pri_in, false);
     qemu_chr_fe_deinit(&s->chr_sec_in, false);
@@ -997,6 +1059,16 @@ static void colo_compare_finalize(Object *obj)
     if (s->iothread) {
         colo_compare_timer_del(s);
     }
+
+    qemu_bh_delete(s->event_bh);
+
+    QTAILQ_FOREACH(tmp, &net_compares, next) {
+        if (!strcmp(tmp->outdev, s->outdev)) {
+            QTAILQ_REMOVE(&net_compares, s, next);
+            break;
+        }
+    }
+
     /* Release all unhandled packets after compare thead exited */
     g_queue_foreach(&s->conn_list, colo_flush_packets, s);
 
@@ -1009,6 +1081,10 @@ static void colo_compare_finalize(Object *obj)
     if (s->iothread) {
         object_unref(OBJECT(s->iothread));
     }
+
+    qemu_mutex_destroy(&event_mtx);
+    qemu_cond_destroy(&event_complete_cond);
+
     g_free(s->pri_indev);
     g_free(s->sec_indev);
     g_free(s->outdev);
diff --git a/net/colo-compare.h b/net/colo-compare.h
new file mode 100644
index 0000000000..1b1ce76aea
--- /dev/null
+++ b/net/colo-compare.h
@@ -0,0 +1,22 @@
+/*
+ * COarse-grain LOck-stepping Virtual Machines for Non-stop Service (COLO)
+ * (a.k.a. Fault Tolerance or Continuous Replication)
+ *
+ * Copyright (c) 2017 HUAWEI TECHNOLOGIES CO., LTD.
+ * Copyright (c) 2017 FUJITSU LIMITED
+ * Copyright (c) 2017 Intel Corporation
+ *
+ * Authors:
+ *    zhanghailiang <zhang.zhanghailiang@huawei.com>
+ *    Zhang Chen <zhangckid@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+
+#ifndef QEMU_COLO_COMPARE_H
+#define QEMU_COLO_COMPARE_H
+
+void colo_notify_compares_event(void *opaque, int event, Error **errp);
+
+#endif /* QEMU_COLO_COMPARE_H */
-- 
2.17.0


* [Qemu-devel] [PATCH V7 RESEND 03/17] colo-compare: use notifier to notify packets comparing result
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 01/17] filter-rewriter: fix memory leak for connection in connection_track_table Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 02/17] colo-compare: implement the process of checkpoint Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 04/17] COLO: integrate colo compare with colo frame Zhang Chen
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

It's a good idea to use a notifier to notify the COLO frame of an
inconsistent packet comparison result.

Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 net/colo-compare.c | 32 +++++++++++++++++++++++++-------
 net/colo-compare.h |  2 ++
 2 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/net/colo-compare.c b/net/colo-compare.c
index 7ff3ae8904..05061cd1c4 100644
--- a/net/colo-compare.c
+++ b/net/colo-compare.c
@@ -29,6 +29,7 @@
 #include "sysemu/iothread.h"
 #include "net/colo-compare.h"
 #include "migration/colo.h"
+#include "migration/migration.h"
 
 #define TYPE_COLO_COMPARE "colo-compare"
 #define COLO_COMPARE(obj) \
@@ -37,6 +38,9 @@
 static QTAILQ_HEAD(, CompareState) net_compares =
        QTAILQ_HEAD_INITIALIZER(net_compares);
 
+static NotifierList colo_compare_notifiers =
+    NOTIFIER_LIST_INITIALIZER(colo_compare_notifiers);
+
 #define COMPARE_READ_LEN_MAX NET_BUFSIZE
 #define MAX_QUEUE_SIZE 1024
 
@@ -561,8 +565,24 @@ static int colo_old_packet_check_one(Packet *pkt, int64_t *check_time)
     }
 }
 
+static void colo_compare_inconsistent_notify(void)
+{
+    notifier_list_notify(&colo_compare_notifiers,
+                migrate_get_current());
+}
+
+void colo_compare_register_notifier(Notifier *notify)
+{
+    notifier_list_add(&colo_compare_notifiers, notify);
+}
+
+void colo_compare_unregister_notifier(Notifier *notify)
+{
+    notifier_remove(notify);
+}
+
 static int colo_old_packet_check_one_conn(Connection *conn,
-                                          void *user_data)
+                                           void *user_data)
 {
     GList *result = NULL;
     int64_t check_time = REGULAR_PACKET_CHECK_MS;
@@ -573,10 +593,7 @@ static int colo_old_packet_check_one_conn(Connection *conn,
 
     if (result) {
         /* Do checkpoint will flush old packet */
-        /*
-         * TODO: Notify colo frame to do checkpoint.
-         * colo_compare_inconsistent_notify();
-         */
+        colo_compare_inconsistent_notify();
         return 0;
     }
 
@@ -620,11 +637,12 @@ static void colo_compare_packet(CompareState *s, Connection *conn,
             /*
              * If one packet arrive late, the secondary_list or
              * primary_list will be empty, so we can't compare it
-             * until next comparison.
+             * until next comparison. If the packets in the list are
+             * timeout, it will trigger a checkpoint request.
              */
             trace_colo_compare_main("packet different");
             g_queue_push_head(&conn->primary_list, pkt);
-            /* TODO: colo_notify_checkpoint();*/
+            colo_compare_inconsistent_notify();
             break;
         }
     }
diff --git a/net/colo-compare.h b/net/colo-compare.h
index 1b1ce76aea..22ddd512e2 100644
--- a/net/colo-compare.h
+++ b/net/colo-compare.h
@@ -18,5 +18,7 @@
 #define QEMU_COLO_COMPARE_H
 
 void colo_notify_compares_event(void *opaque, int event, Error **errp);
+void colo_compare_register_notifier(Notifier *notify);
+void colo_compare_unregister_notifier(Notifier *notify);
 
 #endif /* QEMU_COLO_COMPARE_H */
-- 
2.17.0


* [Qemu-devel] [PATCH V7 RESEND 04/17] COLO: integrate colo compare with colo frame
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (2 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 03/17] colo-compare: use notifier to notify packets comparing result Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-16 11:12   ` Dr. David Alan Gilbert
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 05/17] COLO: Add block replication into colo process Zhang Chen
                   ` (13 subsequent siblings)
  17 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

For COLO FT, both the PVM and SVM run at the same time,
and we only sync the state when needed.

So here, let the SVM run while not doing a checkpoint, and change
DEFAULT_MIGRATE_X_CHECKPOINT_DELAY to 200*100 (20000 ms).

Besides, we forgot to release colo_checkpoint_sem and
colo_delay_timer, so fix that here.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/colo.c      | 42 ++++++++++++++++++++++++++++++++++++++++--
 migration/migration.c |  4 ++--
 2 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/migration/colo.c b/migration/colo.c
index 4381067ed4..081df1835f 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -25,8 +25,11 @@
 #include "qemu/error-report.h"
 #include "migration/failover.h"
 #include "replication.h"
+#include "net/colo-compare.h"
+#include "net/colo.h"
 
 static bool vmstate_loading;
+static Notifier packets_compare_notifier;
 
 #define COLO_BUFFER_BASE_SIZE (4 * 1024 * 1024)
 
@@ -343,6 +346,11 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
         goto out;
     }
 
+    colo_notify_compares_event(NULL, COLO_EVENT_CHECKPOINT, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
     /* Disable block migration */
     migrate_set_block_enabled(false, &local_err);
     qemu_savevm_state_header(fb);
@@ -400,6 +408,11 @@ out:
     return ret;
 }
 
+static void colo_compare_notify_checkpoint(Notifier *notifier, void *data)
+{
+    colo_checkpoint_notify(data);
+}
+
 static void colo_process_checkpoint(MigrationState *s)
 {
     QIOChannelBuffer *bioc;
@@ -416,6 +429,9 @@ static void colo_process_checkpoint(MigrationState *s)
         goto out;
     }
 
+    packets_compare_notifier.notify = colo_compare_notify_checkpoint;
+    colo_compare_register_notifier(&packets_compare_notifier);
+
     /*
      * Wait for Secondary finish loading VM states and enter COLO
      * restore.
@@ -461,11 +477,21 @@ out:
         qemu_fclose(fb);
     }
 
-    timer_del(s->colo_delay_timer);
-
     /* Hope this not to be too long to wait here */
     qemu_sem_wait(&s->colo_exit_sem);
     qemu_sem_destroy(&s->colo_exit_sem);
+
+    /*
+     * It is safe to unregister the notifier after failover has finished.
+     * Besides, colo_delay_timer and colo_checkpoint_sem can't be
+     * released before unregistering the notifier, or there will be a
+     * use-after-free error.
+     */
+    colo_compare_unregister_notifier(&packets_compare_notifier);
+    timer_del(s->colo_delay_timer);
+    timer_free(s->colo_delay_timer);
+    qemu_sem_destroy(&s->colo_checkpoint_sem);
+
     /*
      * Must be called after failover BH is completed,
      * Or the failover BH may shutdown the wrong fd that
@@ -558,6 +584,11 @@ void *colo_process_incoming_thread(void *opaque)
     fb = qemu_fopen_channel_input(QIO_CHANNEL(bioc));
     object_unref(OBJECT(bioc));
 
+    qemu_mutex_lock_iothread();
+    vm_start();
+    trace_colo_vm_state_change("stop", "run");
+    qemu_mutex_unlock_iothread();
+
     colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_READY,
                       &local_err);
     if (local_err) {
@@ -577,6 +608,11 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        qemu_mutex_lock_iothread();
+        vm_stop_force_state(RUN_STATE_COLO);
+        trace_colo_vm_state_change("run", "stop");
+        qemu_mutex_unlock_iothread();
+
         /* FIXME: This is unnecessary for periodic checkpoint mode */
         colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_REPLY,
                      &local_err);
@@ -630,6 +666,8 @@ void *colo_process_incoming_thread(void *opaque)
         }
 
         vmstate_loading = false;
+        vm_start();
+        trace_colo_vm_state_change("stop", "run");
         qemu_mutex_unlock_iothread();
 
         if (failover_get_state() == FAILOVER_STATUS_RELAUNCH) {
diff --git a/migration/migration.c b/migration/migration.c
index 35f2781b03..bca187275a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -76,9 +76,9 @@
 #define DEFAULT_MIGRATE_XBZRLE_CACHE_SIZE (64 * 1024 * 1024)
 
 /* The delay time (in ms) between two COLO checkpoints
- * Note: Please change this default value to 10000 when we support hybrid mode.
+ * Note: Please change this default value to 20000 when we support hybrid mode.
  */
-#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY 200
+#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
 #define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
 #define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
 
-- 
2.17.0


* [Qemu-devel] [PATCH V7 RESEND 05/17] COLO: Add block replication into colo process
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (3 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 04/17] COLO: integrate colo compare with colo frame Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-16 15:54   ` Dr. David Alan Gilbert
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 06/17] COLO: Remove colo_state migration struct Zhang Chen
                   ` (12 subsequent siblings)
  17 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang, Li Zhijian

Make sure the master starts block replication after the slave's block
replication has started.

Besides, we need to activate the VM's block devices before it goes into
COLO state.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 migration/colo.c      | 43 +++++++++++++++++++++++++++++++++++++++++++
 migration/migration.c |  9 +++++++++
 2 files changed, 52 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 081df1835f..e06640c3d6 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -27,6 +27,7 @@
 #include "replication.h"
 #include "net/colo-compare.h"
 #include "net/colo.h"
+#include "block/block.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -56,6 +57,7 @@ static void secondary_vm_do_failover(void)
 {
     int old_state;
     MigrationIncomingState *mis = migration_incoming_get_current();
+    Error *local_err = NULL;
 
     /* Can not do failover during the process of VM's loading VMstate, Or
      * it will break the secondary VM.
@@ -73,6 +75,11 @@ static void secondary_vm_do_failover(void)
     migrate_set_state(&mis->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
 
+    replication_stop_all(true, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
+
     if (!autostart) {
         error_report("\"-S\" qemu option will be ignored in secondary side");
         /* recover runstate to normal migration finish state */
@@ -110,6 +117,7 @@ static void primary_vm_do_failover(void)
 {
     MigrationState *s = migrate_get_current();
     int old_state;
+    Error *local_err = NULL;
 
     migrate_set_state(&s->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
@@ -133,6 +141,13 @@ static void primary_vm_do_failover(void)
                      FailoverStatus_str(old_state));
         return;
     }
+
+    replication_stop_all(true, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+        local_err = NULL;
+    }
+
     /* Notify COLO thread that failover work is finished */
     qemu_sem_post(&s->colo_exit_sem);
 }
@@ -356,6 +371,11 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
     qemu_savevm_state_header(fb);
     qemu_savevm_state_setup(fb);
     qemu_mutex_lock_iothread();
+    replication_do_checkpoint_all(&local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
     qemu_savevm_state_complete_precopy(fb, false, false);
     qemu_mutex_unlock_iothread();
 
@@ -446,6 +466,12 @@ static void colo_process_checkpoint(MigrationState *s)
     object_unref(OBJECT(bioc));
 
     qemu_mutex_lock_iothread();
+    replication_start_all(REPLICATION_MODE_PRIMARY, &local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
+
     vm_start();
     qemu_mutex_unlock_iothread();
     trace_colo_vm_state_change("stop", "run");
@@ -585,6 +611,11 @@ void *colo_process_incoming_thread(void *opaque)
     object_unref(OBJECT(bioc));
 
     qemu_mutex_lock_iothread();
+    replication_start_all(REPLICATION_MODE_SECONDARY, &local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
     vm_start();
     trace_colo_vm_state_change("stop", "run");
     qemu_mutex_unlock_iothread();
@@ -665,6 +696,18 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        replication_get_error_all(&local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+        /* discard colo disk buffer */
+        replication_do_checkpoint_all(&local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+
         vmstate_loading = false;
         vm_start();
         trace_colo_vm_state_change("stop", "run");
diff --git a/migration/migration.c b/migration/migration.c
index bca187275a..ddd0c4b988 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -357,6 +357,7 @@ static void process_incoming_migration_co(void *opaque)
     MigrationIncomingState *mis = migration_incoming_get_current();
     PostcopyState ps;
     int ret;
+    Error *local_err = NULL;
 
     assert(mis->from_src_file);
     mis->largest_page_size = qemu_ram_pagesize_largest();
@@ -388,6 +389,14 @@ static void process_incoming_migration_co(void *opaque)
 
     /* we get COLO info, and know if we are in COLO mode */
     if (!ret && migration_incoming_enable_colo()) {
+        /* Make sure all file formats flush their mutable metadata */
+        bdrv_invalidate_cache_all(&local_err);
+        if (local_err) {
+            migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
+                    MIGRATION_STATUS_FAILED);
+            error_report_err(local_err);
+            exit(EXIT_FAILURE);
+        }
         mis->migration_incoming_co = qemu_coroutine_self();
         qemu_thread_create(&mis->colo_incoming_thread, "COLO incoming",
              colo_process_incoming_thread, mis, QEMU_THREAD_JOINABLE);
-- 
2.17.0


* [Qemu-devel] [PATCH V7 RESEND 06/17] COLO: Remove colo_state migration struct
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (4 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 05/17] COLO: Add block replication into colo process Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-15 16:02   ` Dr. David Alan Gilbert
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 07/17] COLO: Load dirty pages into SVM's RAM cache firstly Zhang Chen
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

The incoming side needs to know whether migration is going into
COLO state before normal migration starts.

Instead of using a VMStateDescription to send colo_state
from the source side to the destination side, we use MIG_CMD_ENABLE_COLO
to indicate whether COLO is enabled or not.
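
The flow is simple: the source emits one extra command in the migration
stream before the device state, and the destination sets a flag when it sees
it. A minimal standalone illustration of that command-dispatch shape (invented
names and values, not QEMU's QEMUFile/VM command encoding):

    #include <stdbool.h>
    #include <stdio.h>

    enum { CMD_PING = 1, CMD_ENABLE_COLO = 2 };

    static bool colo_enabled;

    static void handle_command(int cmd)
    {
        switch (cmd) {
        case CMD_ENABLE_COLO:
            colo_enabled = true;    /* destination now knows COLO is requested */
            break;
        default:
            break;
        }
    }

    int main(void)
    {
        int stream[] = { CMD_PING, CMD_ENABLE_COLO };  /* pretend stream */

        for (unsigned i = 0; i < sizeof(stream) / sizeof(stream[0]); i++) {
            handle_command(stream[i]);
        }
        printf("colo enabled: %d\n", colo_enabled);
        return 0;
    }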

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 include/migration/colo.h |  5 +--
 migration/Makefile.objs  |  2 +-
 migration/colo-comm.c    | 76 ----------------------------------------
 migration/colo.c         | 13 ++++++-
 migration/migration.c    | 23 +++++++++++-
 migration/savevm.c       | 20 +++++++++++
 migration/savevm.h       |  1 +
 migration/trace-events   |  1 +
 vl.c                     |  2 --
 9 files changed, 60 insertions(+), 83 deletions(-)
 delete mode 100644 migration/colo-comm.c

diff --git a/include/migration/colo.h b/include/migration/colo.h
index fefb2fcf4c..99ce17aca7 100644
--- a/include/migration/colo.h
+++ b/include/migration/colo.h
@@ -28,8 +28,9 @@ void migrate_start_colo_process(MigrationState *s);
 bool migration_in_colo_state(void);
 
 /* loadvm */
-bool migration_incoming_enable_colo(void);
-void migration_incoming_exit_colo(void);
+void migration_incoming_enable_colo(void);
+void migration_incoming_disable_colo(void);
+bool migration_incoming_colo_enabled(void);
 void *colo_process_incoming_thread(void *opaque);
 bool migration_incoming_in_colo_state(void);
 
diff --git a/migration/Makefile.objs b/migration/Makefile.objs
index c83ec47ba8..a4f3bafd86 100644
--- a/migration/Makefile.objs
+++ b/migration/Makefile.objs
@@ -1,6 +1,6 @@
 common-obj-y += migration.o socket.o fd.o exec.o
 common-obj-y += tls.o channel.o savevm.o
-common-obj-y += colo-comm.o colo.o colo-failover.o
+common-obj-y += colo.o colo-failover.o
 common-obj-y += vmstate.o vmstate-types.o page_cache.o
 common-obj-y += qemu-file.o global_state.o
 common-obj-y += qemu-file-channel.o
diff --git a/migration/colo-comm.c b/migration/colo-comm.c
deleted file mode 100644
index df26e4dfe7..0000000000
--- a/migration/colo-comm.c
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * COarse-grain LOck-stepping Virtual Machines for Non-stop Service (COLO)
- * (a.k.a. Fault Tolerance or Continuous Replication)
- *
- * Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
- * Copyright (c) 2016 FUJITSU LIMITED
- * Copyright (c) 2016 Intel Corporation
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or
- * later. See the COPYING file in the top-level directory.
- *
- */
-
-#include "qemu/osdep.h"
-#include "migration.h"
-#include "migration/colo.h"
-#include "migration/vmstate.h"
-#include "trace.h"
-
-typedef struct {
-     bool colo_requested;
-} COLOInfo;
-
-static COLOInfo colo_info;
-
-COLOMode get_colo_mode(void)
-{
-    if (migration_in_colo_state()) {
-        return COLO_MODE_PRIMARY;
-    } else if (migration_incoming_in_colo_state()) {
-        return COLO_MODE_SECONDARY;
-    } else {
-        return COLO_MODE_UNKNOWN;
-    }
-}
-
-static int colo_info_pre_save(void *opaque)
-{
-    COLOInfo *s = opaque;
-
-    s->colo_requested = migrate_colo_enabled();
-
-    return 0;
-}
-
-static bool colo_info_need(void *opaque)
-{
-   return migrate_colo_enabled();
-}
-
-static const VMStateDescription colo_state = {
-    .name = "COLOState",
-    .version_id = 1,
-    .minimum_version_id = 1,
-    .pre_save = colo_info_pre_save,
-    .needed = colo_info_need,
-    .fields = (VMStateField[]) {
-        VMSTATE_BOOL(colo_requested, COLOInfo),
-        VMSTATE_END_OF_LIST()
-    },
-};
-
-void colo_info_init(void)
-{
-    vmstate_register(NULL, 0, &colo_state, &colo_info);
-}
-
-bool migration_incoming_enable_colo(void)
-{
-    return colo_info.colo_requested;
-}
-
-void migration_incoming_exit_colo(void)
-{
-    colo_info.colo_requested = false;
-}
diff --git a/migration/colo.c b/migration/colo.c
index e06640c3d6..c083d3696f 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -152,6 +152,17 @@ static void primary_vm_do_failover(void)
     qemu_sem_post(&s->colo_exit_sem);
 }
 
+COLOMode get_colo_mode(void)
+{
+    if (migration_in_colo_state()) {
+        return COLO_MODE_PRIMARY;
+    } else if (migration_incoming_in_colo_state()) {
+        return COLO_MODE_SECONDARY;
+    } else {
+        return COLO_MODE_UNKNOWN;
+    }
+}
+
 void colo_do_failover(MigrationState *s)
 {
     /* Make sure VM stopped while failover happened. */
@@ -745,7 +756,7 @@ out:
     if (mis->to_src_file) {
         qemu_fclose(mis->to_src_file);
     }
-    migration_incoming_exit_colo();
+    migration_incoming_disable_colo();
 
     return NULL;
 }
diff --git a/migration/migration.c b/migration/migration.c
index ddd0c4b988..8dee7dd309 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -277,6 +277,22 @@ int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
     return migrate_send_rp_message(mis, msg_type, msglen, bufc);
 }
 
+static bool migration_colo_enabled;
+bool migration_incoming_colo_enabled(void)
+{
+    return migration_colo_enabled;
+}
+
+void migration_incoming_disable_colo(void)
+{
+    migration_colo_enabled = false;
+}
+
+void migration_incoming_enable_colo(void)
+{
+    migration_colo_enabled = true;
+}
+
 void qemu_start_incoming_migration(const char *uri, Error **errp)
 {
     const char *p;
@@ -388,7 +404,7 @@ static void process_incoming_migration_co(void *opaque)
     }
 
     /* we get COLO info, and know if we are in COLO mode */
-    if (!ret && migration_incoming_enable_colo()) {
+    if (!ret && migration_incoming_colo_enabled()) {
         /* Make sure all file formats flush their mutable metadata */
         bdrv_invalidate_cache_all(&local_err);
         if (local_err) {
@@ -2431,6 +2447,11 @@ static void *migration_thread(void *opaque)
         qemu_savevm_send_postcopy_advise(s->to_dst_file);
     }
 
+    if (migrate_colo_enabled()) {
+        /* Notify migration destination that we enable COLO */
+        qemu_savevm_send_colo_enable(s->to_dst_file);
+    }
+
     qemu_savevm_state_setup(s->to_dst_file);
 
     s->setup_time = qemu_clock_get_ms(QEMU_CLOCK_HOST) - setup_start;
diff --git a/migration/savevm.c b/migration/savevm.c
index e2be02afe4..c43d220220 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -55,6 +55,8 @@
 #include "io/channel-buffer.h"
 #include "io/channel-file.h"
 #include "sysemu/replay.h"
+#include "migration/colo.h"
+
 
 #ifndef ETH_P_RARP
 #define ETH_P_RARP 0x8035
@@ -81,6 +83,9 @@ enum qemu_vm_cmd {
                                       were previously sent during
                                       precopy but are dirty. */
     MIG_CMD_PACKAGED,          /* Send a wrapped stream within this stream */
+
+    MIG_CMD_ENABLE_COLO, /* Enable COLO */
+
     MIG_CMD_MAX
 };
 
@@ -836,6 +841,12 @@ static void qemu_savevm_command_send(QEMUFile *f,
     qemu_fflush(f);
 }
 
+void qemu_savevm_send_colo_enable(QEMUFile *f)
+{
+    trace_savevm_send_colo_enable();
+    qemu_savevm_command_send(f, MIG_CMD_ENABLE_COLO, 0, NULL);
+}
+
 void qemu_savevm_send_ping(QEMUFile *f, uint32_t value)
 {
     uint32_t buf;
@@ -1793,6 +1804,12 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState *mis)
     return ret;
 }
 
+static int loadvm_process_enable_colo(MigrationIncomingState *mis)
+{
+    migration_incoming_enable_colo();
+    return 0;
+}
+
 /*
  * Process an incoming 'QEMU_VM_COMMAND'
  * 0           just a normal return
@@ -1866,6 +1883,9 @@ static int loadvm_process_command(QEMUFile *f)
 
     case MIG_CMD_POSTCOPY_RAM_DISCARD:
         return loadvm_postcopy_ram_handle_discard(mis, len);
+
+    case MIG_CMD_ENABLE_COLO:
+        return loadvm_process_enable_colo(mis);
     }
 
     return 0;
diff --git a/migration/savevm.h b/migration/savevm.h
index cf4f0d37ca..c6d46b37a2 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -52,6 +52,7 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
                                            uint16_t len,
                                            uint64_t *start_list,
                                            uint64_t *length_list);
+void qemu_savevm_send_colo_enable(QEMUFile *f);
 
 int qemu_loadvm_state(QEMUFile *f);
 void qemu_loadvm_state_cleanup(void);
diff --git a/migration/trace-events b/migration/trace-events
index d6be74b7a7..9295b4cf40 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -34,6 +34,7 @@ savevm_send_open_return_path(void) ""
 savevm_send_ping(uint32_t val) "0x%x"
 savevm_send_postcopy_listen(void) ""
 savevm_send_postcopy_run(void) ""
+savevm_send_colo_enable(void) ""
 savevm_state_setup(void) ""
 savevm_state_header(void) ""
 savevm_state_iterate(void) ""
diff --git a/vl.c b/vl.c
index 12e31d1aa9..a1576d2045 100644
--- a/vl.c
+++ b/vl.c
@@ -4437,8 +4437,6 @@ int main(int argc, char **argv, char **envp)
 #endif
     }
 
-    colo_info_init();
-
     if (net_init_clients(&err) < 0) {
         error_report_err(err);
         exit(1);
-- 
2.17.0


* [Qemu-devel] [PATCH V7 RESEND 07/17] COLO: Load dirty pages into SVM's RAM cache firstly
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (5 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 06/17] COLO: Remove colo_state migration struct Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-15 16:55   ` Dr. David Alan Gilbert
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 08/17] ram/COLO: Record the dirty pages that SVM received Zhang Chen
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang, Li Zhijian

We should not load the PVM's state directly into the SVM, because some
errors may happen while the SVM is receiving data, which would break the SVM.

We need to ensure that all data has been received before loading the state
into the SVM, so we use extra memory to cache this data (the PVM's RAM). The
RAM cache on the secondary side is initially the same as the SVM's/PVM's
memory. During a checkpoint, we first cache the PVM's dirty pages into this
RAM cache, so the RAM cache is always the same as the PVM's memory at every
checkpoint; then we flush this cached RAM into the SVM after we receive all
of the PVM's state.
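
Stripped of the migration plumbing, the idea is: incoming pages are written
into a per-RAMBlock side buffer (the colo_cache) instead of directly into
guest RAM, and only once a complete checkpoint has arrived is the cache
applied. A toy standalone sketch (invented names, whole-buffer granularity
instead of per-page dirty tracking):

    #include <stdio.h>
    #include <string.h>

    #define RAM_SIZE 16

    static unsigned char svm_ram[RAM_SIZE];     /* the SVM's real memory */
    static unsigned char colo_cache[RAM_SIZE];  /* staging copy of PVM memory */

    /* Incoming dirty data goes into the cache, not directly into SVM RAM. */
    static void receive_page(int offset, unsigned char value)
    {
        colo_cache[offset] = value;
    }

    /* Only after the whole checkpoint has arrived is the cache applied. */
    static void flush_cache(void)
    {
        memcpy(svm_ram, colo_cache, RAM_SIZE);
    }

    int main(void)
    {
        memcpy(colo_cache, svm_ram, RAM_SIZE);  /* cache starts equal to RAM */
        receive_page(3, 0xab);                  /* PVM dirtied a page */
        flush_cache();                          /* checkpoint complete */
        printf("svm_ram[3] = 0x%x\n", svm_ram[3]);
        return 0;
    }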

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 include/exec/ram_addr.h |  1 +
 migration/migration.c   |  2 +
 migration/ram.c         | 99 +++++++++++++++++++++++++++++++++++++++--
 migration/ram.h         |  4 ++
 migration/savevm.c      |  2 +-
 5 files changed, 104 insertions(+), 4 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index cf2446a176..51ec153a57 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -27,6 +27,7 @@ struct RAMBlock {
     struct rcu_head rcu;
     struct MemoryRegion *mr;
     uint8_t *host;
+    uint8_t *colo_cache; /* For colo, VM's ram cache */
     ram_addr_t offset;
     ram_addr_t used_length;
     ram_addr_t max_length;
diff --git a/migration/migration.c b/migration/migration.c
index 8dee7dd309..cfc1b958b9 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -421,6 +421,8 @@ static void process_incoming_migration_co(void *opaque)
 
         /* Wait checkpoint incoming thread exit before free resource */
         qemu_thread_join(&mis->colo_incoming_thread);
+        /* We hold the global iothread lock, so it is safe here */
+        colo_release_ram_cache();
     }
 
     if (ret < 0) {
diff --git a/migration/ram.c b/migration/ram.c
index 912810c18e..7ca845f8a9 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2520,6 +2520,20 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
     return block->host + offset;
 }
 
+static inline void *colo_cache_from_block_offset(RAMBlock *block,
+                                                 ram_addr_t offset)
+{
+    if (!offset_in_ramblock(block, offset)) {
+        return NULL;
+    }
+    if (!block->colo_cache) {
+        error_report("%s: colo_cache is NULL in block :%s",
+                     __func__, block->idstr);
+        return NULL;
+    }
+    return block->colo_cache + offset;
+}
+
 /**
  * ram_handle_compressed: handle the zero page case
  *
@@ -2724,6 +2738,57 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
     qemu_mutex_unlock(&decomp_done_lock);
 }
 
+/*
+ * colo cache: this is for the secondary VM. We cache the whole
+ * memory of the secondary VM; the global lock must be held when
+ * calling this helper.
+ */
+int colo_init_ram_cache(void)
+{
+    RAMBlock *block;
+
+    rcu_read_lock();
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        block->colo_cache = qemu_anon_ram_alloc(block->used_length,
+                                                NULL,
+                                                false);
+        if (!block->colo_cache) {
+            error_report("%s: Can't alloc memory for COLO cache of block %s,"
+                         "size 0x" RAM_ADDR_FMT, __func__, block->idstr,
+                         block->used_length);
+            goto out_locked;
+        }
+    }
+    rcu_read_unlock();
+    return 0;
+
+out_locked:
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        if (block->colo_cache) {
+            qemu_anon_ram_free(block->colo_cache, block->used_length);
+            block->colo_cache = NULL;
+        }
+    }
+
+    rcu_read_unlock();
+    return -errno;
+}
+
+/* The global lock must be held when calling this helper */
+void colo_release_ram_cache(void)
+{
+    RAMBlock *block;
+
+    rcu_read_lock();
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        if (block->colo_cache) {
+            qemu_anon_ram_free(block->colo_cache, block->used_length);
+            block->colo_cache = NULL;
+        }
+    }
+    rcu_read_unlock();
+}
+
 /**
  * ram_load_setup: Setup RAM for migration incoming side
  *
@@ -2740,6 +2805,7 @@ static int ram_load_setup(QEMUFile *f, void *opaque)
 
     xbzrle_load_setup();
     ramblock_recv_map_init();
+
     return 0;
 }
 
@@ -2753,6 +2819,7 @@ static int ram_load_cleanup(void *opaque)
         g_free(rb->receivedmap);
         rb->receivedmap = NULL;
     }
+
     return 0;
 }
 
@@ -2966,7 +3033,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
 
     while (!postcopy_running && !ret && !(flags & RAM_SAVE_FLAG_EOS)) {
         ram_addr_t addr, total_ram_bytes;
-        void *host = NULL;
+        void *host = NULL, *host_bak = NULL;
         uint8_t ch;
 
         addr = qemu_get_be64(f);
@@ -2986,13 +3053,36 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
                      RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
             RAMBlock *block = ram_block_from_stream(f, flags);
 
-            host = host_from_ram_block_offset(block, addr);
+             /*
+             * After going into COLO, we should load the page into colo_cache.
+             * NOTE: We need to keep a copy of SVM's ram in colo_cache.
+             * Previously, we copied all this memory in the preparing stage of
+             * COLO while the VM was stopped, which is a time-consuming process.
+             * Here we optimize it by a trick: back up every page during the
+             * migration process while COLO is enabled; though it affects the
+             * speed of the migration, it obviously reduces the downtime of
+             * backing up all the SVM's memory in the COLO preparing stage.
+             */
+            if (migration_incoming_colo_enabled()) {
+                host = colo_cache_from_block_offset(block, addr);
+                /* After going into COLO state, don't back it up any more */
+                if (!migration_incoming_in_colo_state()) {
+                    host_bak = host;
+                }
+            }
+            if (!migration_incoming_in_colo_state()) {
+                host = host_from_ram_block_offset(block, addr);
+            }
             if (!host) {
                 error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
                 ret = -EINVAL;
                 break;
             }
-            ramblock_recv_bitmap_set(block, host);
+
+            if (!migration_incoming_in_colo_state()) {
+                ramblock_recv_bitmap_set(block, host);
+            }
+
             trace_ram_load_loop(block->idstr, (uint64_t)addr, flags, host);
         }
 
@@ -3087,6 +3177,9 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
         if (!ret) {
             ret = qemu_file_get_error(f);
         }
+        if (!ret && host_bak && host) {
+            memcpy(host_bak, host, TARGET_PAGE_SIZE);
+        }
     }
 
     ret |= wait_for_decompress_done();
diff --git a/migration/ram.h b/migration/ram.h
index 5030be110a..66e9b86ff0 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -64,4 +64,8 @@ bool ramblock_recv_bitmap_test_byte_offset(RAMBlock *rb, uint64_t byte_offset);
 void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
 void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr, size_t nr);
 
+/* ram cache */
+int colo_init_ram_cache(void);
+void colo_release_ram_cache(void);
+
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index c43d220220..ec0bff09ce 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1807,7 +1807,7 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState *mis)
 static int loadvm_process_enable_colo(MigrationIncomingState *mis)
 {
     migration_incoming_enable_colo();
-    return 0;
+    return colo_init_ram_cache();
 }
 
 /*
-- 
2.17.0


* [Qemu-devel] [PATCH V7 RESEND 08/17] ram/COLO: Record the dirty pages that SVM received
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (6 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 07/17] COLO: Load dirty pages into SVM's RAM cache firstly Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 09/17] COLO: Flush memory data from ram cache Zhang Chen
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

We record the addresses of the dirty pages that are received; this helps
with flushing the pages cached into the SVM.

The trick here is that we record dirty pages by re-using the migration
dirty bitmap. In a later patch, we will start dirty logging for the SVM,
just like migration; in this way, we can record the dirty pages caused by
both the PVM and the SVM, and only flush those dirty pages from the RAM
cache while doing a checkpoint.
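
Conceptually this is just a bitmap indexed by page number: the receive path
does a test-and-set for every page written into the cache, and the flush step
later walks only the set bits. A rough standalone sketch (invented names, not
the migration bitmap helpers):

    #include <limits.h>
    #include <stdio.h>

    #define NR_PAGES 64
    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    static unsigned long dirty_bitmap[NR_PAGES / BITS_PER_LONG + 1];
    static unsigned long dirty_pages;

    static void mark_dirty(unsigned long page)
    {
        unsigned long idx = page / BITS_PER_LONG;
        unsigned long bit = 1UL << (page % BITS_PER_LONG);

        if (!(dirty_bitmap[idx] & bit)) {   /* count each page only once */
            dirty_bitmap[idx] |= bit;
            dirty_pages++;
        }
    }

    int main(void)
    {
        mark_dirty(5);
        mark_dirty(5);   /* second write to the same page doesn't recount it */
        mark_dirty(9);
        printf("pages to flush at the next checkpoint: %lu\n", dirty_pages);
        return 0;
    }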

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index 7ca845f8a9..e35dfee06e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2531,6 +2531,15 @@ static inline void *colo_cache_from_block_offset(RAMBlock *block,
                      __func__, block->idstr);
         return NULL;
     }
+
+    /*
+    * During a colo checkpoint, we need a bitmap of these migrated pages.
+    * It helps us decide which pages in the ram cache should be flushed
+    * into the VM's RAM later.
+    */
+    if (!test_and_set_bit(offset >> TARGET_PAGE_BITS, block->bmap)) {
+        ram_state->migration_dirty_pages++;
+    }
     return block->colo_cache + offset;
 }
 
@@ -2760,6 +2769,24 @@ int colo_init_ram_cache(void)
         }
     }
     rcu_read_unlock();
+    /*
+    * Record the dirty pages that were sent by the PVM; we use this dirty bitmap
+    * to decide which pages in the cache should be flushed into the SVM's RAM.
+    * Here we use the same name 'ram_bitmap' as for migration.
+    */
+    if (ram_bytes_total()) {
+        RAMBlock *block;
+
+        QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+            unsigned long pages = block->max_length >> TARGET_PAGE_BITS;
+
+            block->bmap = bitmap_new(pages);
+            bitmap_set(block->bmap, 0, pages);
+         }
+    }
+    ram_state = g_new0(RAMState, 1);
+    ram_state->migration_dirty_pages = 0;
+
     return 0;
 
 out_locked:
@@ -2779,6 +2806,10 @@ void colo_release_ram_cache(void)
 {
     RAMBlock *block;
 
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        g_free(block->bmap);
+        block->bmap = NULL;
+    }
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         if (block->colo_cache) {
@@ -2787,6 +2818,8 @@ void colo_release_ram_cache(void)
         }
     }
     rcu_read_unlock();
+    g_free(ram_state);
+    ram_state = NULL;
 }
 
 /**
-- 
2.17.0


* [Qemu-devel] [PATCH V7 RESEND 09/17] COLO: Flush memory data from ram cache
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (7 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 08/17] ram/COLO: Record the dirty pages that SVM received Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-15 14:44   ` Dr. David Alan Gilbert
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 10/17] qmp event: Add COLO_EXIT event to notify users while exited COLO Zhang Chen
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang, Li Zhijian

While the VM is running, the PVM may dirty some pages; we will transfer
the PVM's dirty pages to the SVM and store them into the SVM's RAM cache
at the next checkpoint time. So the content of the SVM's RAM cache will
always be the same as the PVM's memory after a checkpoint.

Instead of flushing the whole content of the PVM's RAM cache into the
SVM's memory, we do this in a more efficient way:
only flush the pages that were dirtied by the PVM since the last checkpoint.
In this way, we can ensure that the SVM's memory is the same as the PVM's.

Besides, we must ensure that the RAM cache is flushed before loading the
device state.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c        | 39 +++++++++++++++++++++++++++++++++++++++
 migration/trace-events |  2 ++
 2 files changed, 41 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index e35dfee06e..4235a8f24d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3031,6 +3031,40 @@ static bool postcopy_is_running(void)
     return ps >= POSTCOPY_INCOMING_LISTENING && ps < POSTCOPY_INCOMING_END;
 }
 
+/*
+ * Flush content of RAM cache into SVM's memory.
+ * Only flush the pages that be dirtied by PVM or SVM or both.
+ */
+static void colo_flush_ram_cache(void)
+{
+    RAMBlock *block = NULL;
+    void *dst_host;
+    void *src_host;
+    unsigned long offset = 0;
+
+    trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
+    rcu_read_lock();
+    block = QLIST_FIRST_RCU(&ram_list.blocks);
+
+    while (block) {
+        offset = migration_bitmap_find_dirty(ram_state, block, offset);
+        migration_bitmap_clear_dirty(ram_state, block, offset);
+
+        if (offset << TARGET_PAGE_BITS >= block->used_length) {
+            offset = 0;
+            block = QLIST_NEXT_RCU(block, next);
+        } else {
+            dst_host = block->host + (offset << TARGET_PAGE_BITS);
+            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
+            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+        }
+    }
+
+    rcu_read_unlock();
+    trace_colo_flush_ram_cache_end();
+    assert(ram_state->migration_dirty_pages == 0);
+}
+
 static int ram_load(QEMUFile *f, void *opaque, int version_id)
 {
     int flags = 0, ret = 0, invalid_flags = 0;
@@ -3043,6 +3077,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     bool postcopy_running = postcopy_is_running();
     /* ADVISE is earlier, it shows the source has the postcopy capability on */
     bool postcopy_advised = postcopy_is_advised();
+    bool need_flush = false;
 
     seq_iter++;
 
@@ -3218,6 +3253,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     ret |= wait_for_decompress_done();
     rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
+
+    if (!ret  && migration_incoming_in_colo_state() && need_flush) {
+        colo_flush_ram_cache();
+    }
     return ret;
 }
 
diff --git a/migration/trace-events b/migration/trace-events
index 9295b4cf40..8e2f9749e0 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -78,6 +78,8 @@ ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
 ram_postcopy_send_discard_bitmap(void) ""
 ram_save_page(const char *rbname, uint64_t offset, void *host) "%s: offset: 0x%" PRIx64 " host: %p"
 ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: 0x%zx len: 0x%zx"
+colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
+colo_flush_ram_cache_end(void) ""
 
 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [Qemu-devel] [PATCH V7 RESEND 10/17] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (8 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 09/17] COLO: Flush memory data from ram cache Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 11/17] qapi: Add new command to query colo status Zhang Chen
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang, Li Zhijian

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

If an error happens during the VM's COLO FT stage, it's important to
notify the user of this event. Together with 'x-colo-lost-heartbeat',
the user can then intervene in COLO's failover work immediately.
Even if the user doesn't want to get involved in the failover decision,
it is still necessary to notify them that the VM has exited COLO mode.
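
For example, a management application watching QMP might react to this event
and trigger failover itself; a rough sketch only, the exact policy and the
data values depend on the actual situation:

  <- { "event": "COLO_EXIT",
       "data": { "mode": "secondary", "reason": "error" } }
  -> { "execute": "x-colo-lost-heartbeat" }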

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 migration/colo.c    | 20 ++++++++++++++++++++
 qapi/migration.json | 37 +++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index c083d3696f..8ca63813c2 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -28,6 +28,7 @@
 #include "net/colo-compare.h"
 #include "net/colo.h"
 #include "block/block.h"
+#include "qapi/qapi-events-migration.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -514,6 +515,18 @@ out:
         qemu_fclose(fb);
     }
 
+    /*
+     * There are only two reasons we can get here: either some error
+     * happened, or the user triggered failover.
+     */
+    if (failover_get_state() == FAILOVER_STATUS_NONE) {
+        qapi_event_send_colo_exit(COLO_MODE_PRIMARY,
+                                  COLO_EXIT_REASON_ERROR, NULL);
+    } else {
+        qapi_event_send_colo_exit(COLO_MODE_PRIMARY,
+                                  COLO_EXIT_REASON_REQUEST, NULL);
+    }
+
     /* Hope this not to be too long to wait here */
     qemu_sem_wait(&s->colo_exit_sem);
     qemu_sem_destroy(&s->colo_exit_sem);
@@ -744,6 +757,13 @@ out:
     if (local_err) {
         error_report_err(local_err);
     }
+    if (failover_get_state() == FAILOVER_STATUS_NONE) {
+        qapi_event_send_colo_exit(COLO_MODE_SECONDARY,
+                                  COLO_EXIT_REASON_ERROR, NULL);
+    } else {
+        qapi_event_send_colo_exit(COLO_MODE_SECONDARY,
+                                  COLO_EXIT_REASON_REQUEST, NULL);
+    }
 
     if (fb) {
         qemu_fclose(fb);
diff --git a/qapi/migration.json b/qapi/migration.json
index f3974c6807..55dae48089 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -874,6 +874,43 @@
 { 'enum': 'FailoverStatus',
   'data': [ 'none', 'require', 'active', 'completed', 'relaunch' ] }
 
+##
+# @COLO_EXIT:
+#
+# Emitted when VM finishes COLO mode due to some errors happening or
+# at the request of users.
+#
+# @mode: report COLO mode when COLO exited.
+#
+# @reason: describes the reason for the COLO exit.
+#
+# Since: 2.13
+#
+# Example:
+#
+# <- { "timestamp": {"seconds": 2032141960, "microseconds": 417172},
+#      "event": "COLO_EXIT", "data": {"mode": "primary", "reason": "request" } }
+#
+##
+{ 'event': 'COLO_EXIT',
+  'data': {'mode': 'COLOMode', 'reason': 'COLOExitReason' } }
+
+##
+# @COLOExitReason:
+#
+# The reason for a COLO exit
+#
+# @none: no failover has ever happened.
+#
+# @request: COLO exit is due to an external request
+#
+# @error: COLO exit is due to an internal error
+#
+# Since: 2.13
+##
+{ 'enum': 'COLOExitReason',
+  'data': [ 'none', 'request', 'error' ] }
+
 ##
 # @x-colo-lost-heartbeat:
 #
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [Qemu-devel] [PATCH V7 RESEND 11/17] qapi: Add new command to query colo status
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (9 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 10/17] qmp event: Add COLO_EXIT event to notify users while exited COLO Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm Zhang Chen
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang

Libvirt or other high-level software can use this command to query the COLO
status. You can test the command like this:
{'execute':'query-colo-status'}
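
On the primary side, while COLO is running, the reply will look something
like this (the exact values depend on the current COLO and failover state):

{'return': {'mode': 'primary', 'colo-running': true, 'reason': 'none'}}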

Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 migration/colo.c    | 34 ++++++++++++++++++++++++++++++++++
 qapi/migration.json | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 8ca63813c2..cdff0a2490 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -29,6 +29,7 @@
 #include "net/colo.h"
 #include "block/block.h"
 #include "qapi/qapi-events-migration.h"
+#include "qapi/qmp/qerror.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -237,6 +238,39 @@ void qmp_xen_colo_do_checkpoint(Error **errp)
 #endif
 }
 
+COLOStatus *qmp_query_colo_status(Error **errp)
+{
+    int state;
+    COLOStatus *s = g_new0(COLOStatus, 1);
+
+    if (get_colo_mode() == COLO_MODE_UNKNOWN) {
+        error_setg(errp, QERR_FEATURE_DISABLED, "colo");
+        s->colo_running = false;
+        goto out;
+    } else if (get_colo_mode() == COLO_MODE_PRIMARY) {
+        state = migrate_get_current()->state;
+    } else {
+        state = migration_incoming_get_current()->state;
+    }
+    s->colo_running = state == MIGRATION_STATUS_COLO;
+
+out:
+    s->mode = get_colo_mode();
+
+    switch (failover_get_state()) {
+    case FAILOVER_STATUS_NONE:
+        s->reason = COLO_EXIT_REASON_NONE;
+        break;
+    case FAILOVER_STATUS_REQUIRE:
+        s->reason = COLO_EXIT_REASON_REQUEST;
+        break;
+    default:
+        s->reason = COLO_EXIT_REASON_ERROR;
+    }
+
+    return s;
+}
+
 static void colo_send_message(QEMUFile *f, COLOMessage msg,
                               Error **errp)
 {
diff --git a/qapi/migration.json b/qapi/migration.json
index 55dae48089..13589ba948 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1220,3 +1220,36 @@
 # Since: 2.9
 ##
 { 'command': 'xen-colo-do-checkpoint' }
+
+##
+# @COLOStatus:
+#
+# The result format for 'query-colo-status'.
+#
+# @mode: which COLO mode the VM was in when it exited.
+#
+# @colo-running: true if COLO is running.
+#
+# @reason: describes the reason for the COLO exit.
+#
+# Since: 2.13
+##
+{ 'struct': 'COLOStatus',
+  'data': { 'mode': 'COLOMode', 'colo-running': 'bool', 'reason': 'COLOExitReason' } }
+
+##
+# @query-colo-status:
+#
+# Query COLO status while the vm is running.
+#
+# Returns: A @COLOStatus object showing the status.
+#
+# Example:
+#
+# -> { "execute": "query-colo-status" }
+# <- { "return": { "mode": "primary", "colo-running": true, "reason": "request" } }
+#
+# Since: 2.13
+##
+{ 'command': 'query-colo-status',
+  'returns': 'COLOStatus' }
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (10 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 11/17] qapi: Add new command to query colo status Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-15 18:56   ` Dr. David Alan Gilbert
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 13/17] COLO: flush host dirty ram from cache Zhang Chen
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang, Li Zhijian

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

There are several stages in the loadvm/savevm process, and in each stage
incoming migration processes different types of sections. We want to control
these stages more precisely, which will benefit COLO performance: we don't
have to save QEMU_VM_SECTION_START type sections every time we do a
checkpoint, and we want to separate the process of saving/loading the
memory from that of the device state.

So we add two new helper functions, qemu_savevm_live_state() and
qemu_load_device_state(), to handle the different stages during migration.

Besides, we make qemu_loadvm_state_main() and qemu_save_device_state()
public, and simplify the code of qemu_save_device_state() by calling the
wrapper qemu_savevm_state_header().
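
With this split, one round of COLO checkpointing roughly looks like this
(a summary of the calls wired up below, not a new interface):

  /* primary side, in colo_do_checkpoint_transaction() */
  qemu_savevm_live_state(s->to_dst_file);   /* live state (RAM) to the SVM */
  ret = qemu_save_device_state(fb);         /* device state into a buffer */

  /* secondary side, in colo_process_incoming_thread() */
  ret = qemu_loadvm_state_main(mis->from_src_file, mis); /* live state (RAM) */
  ret = qemu_load_device_state(fb);                      /* device state */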

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/colo.c   | 36 ++++++++++++++++++++++++++++--------
 migration/savevm.c | 35 ++++++++++++++++++++++++++++-------
 migration/savevm.h |  4 ++++
 3 files changed, 60 insertions(+), 15 deletions(-)

diff --git a/migration/colo.c b/migration/colo.c
index cdff0a2490..5b055f79f1 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -30,6 +30,7 @@
 #include "block/block.h"
 #include "qapi/qapi-events-migration.h"
 #include "qapi/qmp/qerror.h"
+#include "sysemu/cpus.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -414,23 +415,30 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
 
     /* Disable block migration */
     migrate_set_block_enabled(false, &local_err);
-    qemu_savevm_state_header(fb);
-    qemu_savevm_state_setup(fb);
     qemu_mutex_lock_iothread();
     replication_do_checkpoint_all(&local_err);
     if (local_err) {
         qemu_mutex_unlock_iothread();
         goto out;
     }
-    qemu_savevm_state_complete_precopy(fb, false, false);
-    qemu_mutex_unlock_iothread();
-
-    qemu_fflush(fb);
 
     colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND, &local_err);
     if (local_err) {
         goto out;
     }
+    /*
+     * Only save the VM's live state, which does not include the device state.
+     * TODO: We may need a timeout mechanism to prevent the COLO process
+     * from being blocked here.
+     */
+    qemu_savevm_live_state(s->to_dst_file);
+    /* Note: device state is saved into buffer */
+    ret = qemu_save_device_state(fb);
+
+    qemu_mutex_unlock_iothread();
+
+    qemu_fflush(fb);
+
     /*
      * We need the size of the VMstate data in Secondary side,
      * With which we can decide how much data should be read.
@@ -643,6 +651,7 @@ void *colo_process_incoming_thread(void *opaque)
     uint64_t total_size;
     uint64_t value;
     Error *local_err = NULL;
+    int ret;
 
     qemu_sem_init(&mis->colo_incoming_sem, 0);
 
@@ -715,6 +724,16 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        qemu_mutex_lock_iothread();
+        cpu_synchronize_all_pre_loadvm();
+        ret = qemu_loadvm_state_main(mis->from_src_file, mis);
+        qemu_mutex_unlock_iothread();
+
+        if (ret < 0) {
+            error_report("Load VM's live state (ram) error");
+            goto out;
+        }
+
         value = colo_receive_message_value(mis->from_src_file,
                                  COLO_MESSAGE_VMSTATE_SIZE, &local_err);
         if (local_err) {
@@ -748,8 +767,9 @@ void *colo_process_incoming_thread(void *opaque)
         qemu_mutex_lock_iothread();
         qemu_system_reset(SHUTDOWN_CAUSE_NONE);
         vmstate_loading = true;
-        if (qemu_loadvm_state(fb) < 0) {
-            error_report("COLO: loadvm failed");
+        ret = qemu_load_device_state(fb);
+        if (ret < 0) {
+            error_report("COLO: load device state failed");
             qemu_mutex_unlock_iothread();
             goto out;
         }
diff --git a/migration/savevm.c b/migration/savevm.c
index ec0bff09ce..0f61239429 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1332,13 +1332,20 @@ done:
     return ret;
 }
 
-static int qemu_save_device_state(QEMUFile *f)
+void qemu_savevm_live_state(QEMUFile *f)
 {
-    SaveStateEntry *se;
+    /* save QEMU_VM_SECTION_END section */
+    qemu_savevm_state_complete_precopy(f, true, false);
+    qemu_put_byte(f, QEMU_VM_EOF);
+}
 
-    qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
-    qemu_put_be32(f, QEMU_VM_FILE_VERSION);
+int qemu_save_device_state(QEMUFile *f)
+{
+    SaveStateEntry *se;
 
+    if (!migration_in_colo_state()) {
+        qemu_savevm_state_header(f);
+    }
     cpu_synchronize_all_states();
 
     QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
@@ -1394,8 +1401,6 @@ enum LoadVMExitCodes {
     LOADVM_QUIT     =  1,
 };
 
-static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
-
 /* ------ incoming postcopy messages ------ */
 /* 'advise' arrives before any transfers just to tell us that a postcopy
  * *might* happen - it might be skipped if precopy transferred everything
@@ -2075,7 +2080,7 @@ void qemu_loadvm_state_cleanup(void)
     }
 }
 
-static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
+int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
 {
     uint8_t section_type;
     int ret = 0;
@@ -2229,6 +2234,22 @@ int qemu_loadvm_state(QEMUFile *f)
     return ret;
 }
 
+int qemu_load_device_state(QEMUFile *f)
+{
+    MigrationIncomingState *mis = migration_incoming_get_current();
+    int ret;
+
+    /* Load QEMU_VM_SECTION_FULL section */
+    ret = qemu_loadvm_state_main(f, mis);
+    if (ret < 0) {
+        error_report("Failed to load device state: %d", ret);
+        return ret;
+    }
+
+    cpu_synchronize_all_post_init();
+    return 0;
+}
+
 int save_snapshot(const char *name, Error **errp)
 {
     BlockDriverState *bs, *bs1;
diff --git a/migration/savevm.h b/migration/savevm.h
index c6d46b37a2..cf7935dd68 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -53,8 +53,12 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
                                            uint64_t *start_list,
                                            uint64_t *length_list);
 void qemu_savevm_send_colo_enable(QEMUFile *f);
+void qemu_savevm_live_state(QEMUFile *f);
+int qemu_save_device_state(QEMUFile *f);
 
 int qemu_loadvm_state(QEMUFile *f);
 void qemu_loadvm_state_cleanup(void);
+int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
+int qemu_load_device_state(QEMUFile *f);
 
 #endif
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [Qemu-devel] [PATCH V7 RESEND 13/17] COLO: flush host dirty ram from cache
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (11 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-15 15:32   ` Dr. David Alan Gilbert
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 14/17] filter: Add handle_event method for NetFilterClass Zhang Chen
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang, Li Zhijian

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

There is no need to flush all of the VM's RAM from the cache; only flush
the pages that have been dirtied since the last checkpoint.

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 migration/ram.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index 4235a8f24d..21027c5b4d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2786,6 +2786,7 @@ int colo_init_ram_cache(void)
     }
     ram_state = g_new0(RAMState, 1);
     ram_state->migration_dirty_pages = 0;
+    memory_global_dirty_log_start();
 
     return 0;
 
@@ -2806,10 +2807,12 @@ void colo_release_ram_cache(void)
 {
     RAMBlock *block;
 
+    memory_global_dirty_log_stop();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         g_free(block->bmap);
         block->bmap = NULL;
     }
+
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         if (block->colo_cache) {
@@ -3042,6 +3045,15 @@ static void colo_flush_ram_cache(void)
     void *src_host;
     unsigned long offset = 0;
 
+    memory_global_dirty_log_sync();
+    qemu_mutex_lock(&ram_state->bitmap_mutex);
+    rcu_read_lock();
+    RAMBLOCK_FOREACH(block) {
+        migration_bitmap_sync_range(ram_state, block, 0, block->used_length);
+    }
+    rcu_read_unlock();
+    qemu_mutex_unlock(&ram_state->bitmap_mutex);
+
     trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
     rcu_read_lock();
     block = QLIST_FIRST_RCU(&ram_list.blocks);
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [Qemu-devel] [PATCH V7 RESEND 14/17] filter: Add handle_event method for NetFilterClass
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (12 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 13/17] COLO: flush host dirty ram from cache Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 15/17] filter-rewriter: handle checkpoint and failover event Zhang Chen
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

A filter needs to process the checkpoint/failover events (and possibly
other events) passed to it by the COLO frame.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 include/net/filter.h |  5 +++++
 net/filter.c         | 17 +++++++++++++++++
 net/net.c            | 28 ++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+)

diff --git a/include/net/filter.h b/include/net/filter.h
index 435acd6f82..49da666ac0 100644
--- a/include/net/filter.h
+++ b/include/net/filter.h
@@ -38,6 +38,8 @@ typedef ssize_t (FilterReceiveIOV)(NetFilterState *nc,
 
 typedef void (FilterStatusChanged) (NetFilterState *nf, Error **errp);
 
+typedef void (FilterHandleEvent) (NetFilterState *nf, int event, Error **errp);
+
 typedef struct NetFilterClass {
     ObjectClass parent_class;
 
@@ -45,6 +47,7 @@ typedef struct NetFilterClass {
     FilterSetup *setup;
     FilterCleanup *cleanup;
     FilterStatusChanged *status_changed;
+    FilterHandleEvent *handle_event;
     /* mandatory */
     FilterReceiveIOV *receive_iov;
 } NetFilterClass;
@@ -77,4 +80,6 @@ ssize_t qemu_netfilter_pass_to_next(NetClientState *sender,
                                     int iovcnt,
                                     void *opaque);
 
+void colo_notify_filters_event(int event, Error **errp);
+
 #endif /* QEMU_NET_FILTER_H */
diff --git a/net/filter.c b/net/filter.c
index 2fd7d7d663..0f17eba143 100644
--- a/net/filter.c
+++ b/net/filter.c
@@ -17,6 +17,8 @@
 #include "net/vhost_net.h"
 #include "qom/object_interfaces.h"
 #include "qemu/iov.h"
+#include "net/colo.h"
+#include "migration/colo.h"
 
 static inline bool qemu_can_skip_netfilter(NetFilterState *nf)
 {
@@ -245,11 +247,26 @@ static void netfilter_finalize(Object *obj)
     g_free(nf->netdev_id);
 }
 
+static void dummy_handle_event(NetFilterState *nf, int event, Error **errp)
+{
+    switch (event) {
+    case COLO_EVENT_CHECKPOINT:
+        break;
+    case COLO_EVENT_FAILOVER:
+        object_property_set_str(OBJECT(nf), "off", "status", errp);
+        break;
+    default:
+        break;
+    }
+}
+
 static void netfilter_class_init(ObjectClass *oc, void *data)
 {
     UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
+    NetFilterClass *nfc = NETFILTER_CLASS(oc);
 
     ucc->complete = netfilter_complete;
+    nfc->handle_event = dummy_handle_event;
 }
 
 static const TypeInfo netfilter_info = {
diff --git a/net/net.c b/net/net.c
index 29f83983e5..d58691bb8e 100644
--- a/net/net.c
+++ b/net/net.c
@@ -1335,6 +1335,34 @@ void hmp_info_network(Monitor *mon, const QDict *qdict)
     }
 }
 
+void colo_notify_filters_event(int event, Error **errp)
+{
+    NetClientState *nc, *peer;
+    NetClientDriver type;
+    NetFilterState *nf;
+    NetFilterClass *nfc = NULL;
+    Error *local_err = NULL;
+
+    QTAILQ_FOREACH(nc, &net_clients, next) {
+        peer = nc->peer;
+        type = nc->info->type;
+        if (!peer || type != NET_CLIENT_DRIVER_TAP) {
+            continue;
+        }
+        QTAILQ_FOREACH(nf, &nc->filters, next) {
+            nfc =  NETFILTER_GET_CLASS(OBJECT(nf));
+            if (!nfc->handle_event) {
+                continue;
+            }
+            nfc->handle_event(nf, event, &local_err);
+            if (local_err) {
+                error_propagate(errp, local_err);
+                return;
+            }
+        }
+    }
+}
+
 void qmp_set_link(const char *name, bool up, Error **errp)
 {
     NetClientState *ncs[MAX_QUEUE_NUM];
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [Qemu-devel] [PATCH V7 RESEND 15/17] filter-rewriter: handle checkpoint and failover event
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (13 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 14/17] filter: Add handle_event method for NetFilterClass Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 16/17] COLO: notify net filters about checkpoint/failover event Zhang Chen
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

After one round of checkpointing, the states of the PVM and SVM become
consistent, so it is unnecessary to keep adjusting the sequence numbers of
network packets for old connections. Besides, when failover happens,
filter-rewriter needs to check whether it still needs to adjust the
sequence numbers of network packets.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 migration/colo.c      | 13 +++++++++++++
 net/filter-rewriter.c | 40 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 5b055f79f1..3dfd84d897 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -31,6 +31,7 @@
 #include "qapi/qapi-events-migration.h"
 #include "qapi/qmp/qerror.h"
 #include "sysemu/cpus.h"
+#include "net/filter.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -82,6 +83,11 @@ static void secondary_vm_do_failover(void)
     if (local_err) {
         error_report_err(local_err);
     }
+    /* Notify all filters of all NICs about the failover event */
+    colo_notify_filters_event(COLO_EVENT_FAILOVER, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
 
     if (!autostart) {
         error_report("\"-S\" qemu option will be ignored in secondary side");
@@ -786,6 +792,13 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        /* Notify all filters of all NIC to do checkpoint */
+        colo_notify_filters_event(COLO_EVENT_CHECKPOINT, &local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+
         vmstate_loading = false;
         vm_start();
         trace_colo_vm_state_change("stop", "run");
diff --git a/net/filter-rewriter.c b/net/filter-rewriter.c
index 0909a9a8af..f3c306cc89 100644
--- a/net/filter-rewriter.c
+++ b/net/filter-rewriter.c
@@ -20,6 +20,8 @@
 #include "qemu/main-loop.h"
 #include "qemu/iov.h"
 #include "net/checksum.h"
+#include "net/colo.h"
+#include "migration/colo.h"
 
 #define FILTER_COLO_REWRITER(obj) \
     OBJECT_CHECK(RewriterState, (obj), TYPE_FILTER_REWRITER)
@@ -277,6 +279,43 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
     return 0;
 }
 
+static void reset_seq_offset(gpointer key, gpointer value, gpointer user_data)
+{
+    Connection *conn = (Connection *)value;
+
+    conn->offset = 0;
+}
+
+static gboolean offset_is_nonzero(gpointer key,
+                                  gpointer value,
+                                  gpointer user_data)
+{
+    Connection *conn = (Connection *)value;
+
+    return conn->offset ? true : false;
+}
+
+static void colo_rewriter_handle_event(NetFilterState *nf, int event,
+                                       Error **errp)
+{
+    RewriterState *rs = FILTER_COLO_REWRITER(nf);
+
+    switch (event) {
+    case COLO_EVENT_CHECKPOINT:
+        g_hash_table_foreach(rs->connection_track_table,
+                            reset_seq_offset, NULL);
+        break;
+    case COLO_EVENT_FAILOVER:
+        if (!g_hash_table_find(rs->connection_track_table,
+                              offset_is_nonzero, NULL)) {
+            object_property_set_str(OBJECT(nf), "off", "status", errp);
+        }
+        break;
+    default:
+        break;
+    }
+}
+
 static void colo_rewriter_cleanup(NetFilterState *nf)
 {
     RewriterState *s = FILTER_COLO_REWRITER(nf);
@@ -332,6 +371,7 @@ static void colo_rewriter_class_init(ObjectClass *oc, void *data)
     nfc->setup = colo_rewriter_setup;
     nfc->cleanup = colo_rewriter_cleanup;
     nfc->receive_iov = colo_rewriter_receive_iov;
+    nfc->handle_event = colo_rewriter_handle_event;
 }
 
 static const TypeInfo colo_rewriter_info = {
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [Qemu-devel] [PATCH V7 RESEND 16/17] COLO: notify net filters about checkpoint/failover event
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (14 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 15/17] filter-rewriter: handle checkpoint and failover event Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-17  9:48   ` Dr. David Alan Gilbert
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 17/17] COLO: quick failover process by kick COLO thread Zhang Chen
  2018-05-16 11:18 ` [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Dr. David Alan Gilbert
  17 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

Notify all net filters about the checkpoint and failover events.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 migration/colo.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 3dfd84d897..15463e2823 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -88,6 +88,11 @@ static void secondary_vm_do_failover(void)
     if (local_err) {
         error_report_err(local_err);
     }
+    /* Notify all filters of all NICs about the failover event */
+    colo_notify_filters_event(COLO_EVENT_FAILOVER, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
 
     if (!autostart) {
         error_report("\"-S\" qemu option will be ignored in secondary side");
@@ -799,6 +804,13 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        /* Notify all filters of all NIC to do checkpoint */
+        colo_notify_filters_event(COLO_EVENT_CHECKPOINT, &local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+
         vmstate_loading = false;
         vm_start();
         trace_colo_vm_state_change("stop", "run");
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [Qemu-devel] [PATCH V7 RESEND 17/17] COLO: quick failover process by kick COLO thread
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (15 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 16/17] COLO: notify net filters about checkpoint/failover event Zhang Chen
@ 2018-05-14 16:54 ` Zhang Chen
  2018-05-17  9:53   ` Dr. David Alan Gilbert
  2018-05-16 11:18 ` [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Dr. David Alan Gilbert
  17 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-05-14 16:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen,
	Dr . David Alan Gilbert, Jason Wang, zhanghailiang

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

The COLO thread may be sleeping at qemu_sem_wait(&s->colo_checkpoint_sem)
when failover work begins; it's better to wake it up to speed up the
failover process.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 migration/colo.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 15463e2823..16def4865c 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -135,6 +135,11 @@ static void primary_vm_do_failover(void)
 
     migrate_set_state(&s->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
+    /*
+     * kick COLO thread which might wait at
+     * qemu_sem_wait(&s->colo_checkpoint_sem).
+     */
+    colo_checkpoint_notify(migrate_get_current());
 
     /*
      * Wake up COLO thread which may blocked in recv() or send(),
@@ -552,6 +557,9 @@ static void colo_process_checkpoint(MigrationState *s)
 
         qemu_sem_wait(&s->colo_checkpoint_sem);
 
+        if (s->state != MIGRATION_STATUS_COLO) {
+            goto out;
+        }
         ret = colo_do_checkpoint_transaction(s, bioc, fb);
         if (ret < 0) {
             goto out;
-- 
2.17.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 09/17] COLO: Flush memory data from ram cache
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 09/17] COLO: Flush memory data from ram cache Zhang Chen
@ 2018-05-15 14:44   ` Dr. David Alan Gilbert
  2018-05-20 16:09     ` Zhang Chen
  0 siblings, 1 reply; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-15 14:44 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

* Zhang Chen (zhangckid@gmail.com) wrote:
> During the time of VM's running, PVM may dirty some pages, we will transfer
> PVM's dirty pages to SVM and store them into SVM's RAM cache at next checkpoint
> time. So, the content of SVM's RAM cache will always be same with PVM's memory
> after checkpoint.
> 
> Instead of flushing all content of PVM's RAM cache into SVM's MEMORY,
> we do this in a more efficient way:
> Only flush any page that dirtied by PVM since last checkpoint.
> In this way, we can ensure SVM's memory same with PVM's.
> 
> Besides, we must ensure flush RAM cache before load device state.
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  migration/ram.c        | 39 +++++++++++++++++++++++++++++++++++++++
>  migration/trace-events |  2 ++
>  2 files changed, 41 insertions(+)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index e35dfee06e..4235a8f24d 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3031,6 +3031,40 @@ static bool postcopy_is_running(void)
>      return ps >= POSTCOPY_INCOMING_LISTENING && ps < POSTCOPY_INCOMING_END;
>  }
>  
> +/*
> + * Flush content of RAM cache into SVM's memory.
> + * Only flush the pages that be dirtied by PVM or SVM or both.
> + */
> +static void colo_flush_ram_cache(void)
> +{
> +    RAMBlock *block = NULL;
> +    void *dst_host;
> +    void *src_host;
> +    unsigned long offset = 0;
> +
> +    trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
> +    rcu_read_lock();
> +    block = QLIST_FIRST_RCU(&ram_list.blocks);
> +
> +    while (block) {
> +        offset = migration_bitmap_find_dirty(ram_state, block, offset);
> +        migration_bitmap_clear_dirty(ram_state, block, offset);

That looks suspicious to me; shouldn't that be inside the else block
below?   If the find_dirty returns a bit after or equal to used_length
(i.e. there's nothing dirty in this block), then you don't want to clear
that bit yet because it really means there's a dirty page at the start
of the next block?
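
i.e. something like this (untested):

    while (block) {
        offset = migration_bitmap_find_dirty(ram_state, block, offset);

        if (offset << TARGET_PAGE_BITS >= block->used_length) {
            offset = 0;
            block = QLIST_NEXT_RCU(block, next);
        } else {
            migration_bitmap_clear_dirty(ram_state, block, offset);
            dst_host = block->host + (offset << TARGET_PAGE_BITS);
            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
        }
    }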

Dave

> +        if (offset << TARGET_PAGE_BITS >= block->used_length) {
> +            offset = 0;
> +            block = QLIST_NEXT_RCU(block, next);
> +        } else {
> +            dst_host = block->host + (offset << TARGET_PAGE_BITS);
> +            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
> +            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
> +        }
> +    }
> +
> +    rcu_read_unlock();
> +    trace_colo_flush_ram_cache_end();
> +    assert(ram_state->migration_dirty_pages == 0);
> +}
> +
>  static int ram_load(QEMUFile *f, void *opaque, int version_id)
>  {
>      int flags = 0, ret = 0, invalid_flags = 0;
> @@ -3043,6 +3077,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>      bool postcopy_running = postcopy_is_running();
>      /* ADVISE is earlier, it shows the source has the postcopy capability on */
>      bool postcopy_advised = postcopy_is_advised();
> +    bool need_flush = false;
>  
>      seq_iter++;
>  
> @@ -3218,6 +3253,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>      ret |= wait_for_decompress_done();
>      rcu_read_unlock();
>      trace_ram_load_complete(ret, seq_iter);
> +
> +    if (!ret  && migration_incoming_in_colo_state() && need_flush) {
> +        colo_flush_ram_cache();
> +    }
>      return ret;
>  }
>  
> diff --git a/migration/trace-events b/migration/trace-events
> index 9295b4cf40..8e2f9749e0 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -78,6 +78,8 @@ ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
>  ram_postcopy_send_discard_bitmap(void) ""
>  ram_save_page(const char *rbname, uint64_t offset, void *host) "%s: offset: 0x%" PRIx64 " host: %p"
>  ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: 0x%zx len: 0x%zx"
> +colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
> +colo_flush_ram_cache_end(void) ""
>  
>  # migration/migration.c
>  await_return_path_close_on_source_close(void) ""
> -- 
> 2.17.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 13/17] COLO: flush host dirty ram from cache
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 13/17] COLO: flush host dirty ram from cache Zhang Chen
@ 2018-05-15 15:32   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-15 15:32 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

* Zhang Chen (zhangckid@gmail.com) wrote:
> From: zhanghailiang <zhang.zhanghailiang@huawei.com>
> 
> Don't need to flush all VM's ram from cache, only
> flush the dirty pages since last checkpoint
> 
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>

Yes, I think that's right (although I wonder if it can actually be
merged in with the loop directly below it).



Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 4235a8f24d..21027c5b4d 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2786,6 +2786,7 @@ int colo_init_ram_cache(void)
>      }
>      ram_state = g_new0(RAMState, 1);
>      ram_state->migration_dirty_pages = 0;
> +    memory_global_dirty_log_start();
>  
>      return 0;
>  
> @@ -2806,10 +2807,12 @@ void colo_release_ram_cache(void)
>  {
>      RAMBlock *block;
>  
> +    memory_global_dirty_log_stop();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>          g_free(block->bmap);
>          block->bmap = NULL;
>      }
> +
>      rcu_read_lock();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>          if (block->colo_cache) {
> @@ -3042,6 +3045,15 @@ static void colo_flush_ram_cache(void)
>      void *src_host;
>      unsigned long offset = 0;
>  
> +    memory_global_dirty_log_sync();
> +    qemu_mutex_lock(&ram_state->bitmap_mutex);
> +    rcu_read_lock();
> +    RAMBLOCK_FOREACH(block) {
> +        migration_bitmap_sync_range(ram_state, block, 0, block->used_length);
> +    }
> +    rcu_read_unlock();
> +    qemu_mutex_unlock(&ram_state->bitmap_mutex);
> +
>      trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
>      rcu_read_lock();
>      block = QLIST_FIRST_RCU(&ram_list.blocks);
> -- 
> 2.17.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 06/17] COLO: Remove colo_state migration struct
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 06/17] COLO: Remove colo_state migration struct Zhang Chen
@ 2018-05-15 16:02   ` Dr. David Alan Gilbert
  2018-05-16 13:58     ` Zhang Chen
  0 siblings, 1 reply; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-15 16:02 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang

* Zhang Chen (zhangckid@gmail.com) wrote:
> We need to know if migration is going into COLO state for
> incoming side before start normal migration.
> 
> Instead by using the VMStateDescription to send colo_state
> from source side to destination side, we use MIG_CMD_ENABLE_COLO
> to indicate whether COLO is enabled or not.
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Zhang Chen <zhangckid@gmail.com>

Yes, that simplifies it a bit.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  include/migration/colo.h |  5 +--
>  migration/Makefile.objs  |  2 +-
>  migration/colo-comm.c    | 76 ----------------------------------------
>  migration/colo.c         | 13 ++++++-
>  migration/migration.c    | 23 +++++++++++-
>  migration/savevm.c       | 20 +++++++++++
>  migration/savevm.h       |  1 +
>  migration/trace-events   |  1 +
>  vl.c                     |  2 --
>  9 files changed, 60 insertions(+), 83 deletions(-)
>  delete mode 100644 migration/colo-comm.c
> 
> diff --git a/include/migration/colo.h b/include/migration/colo.h
> index fefb2fcf4c..99ce17aca7 100644
> --- a/include/migration/colo.h
> +++ b/include/migration/colo.h
> @@ -28,8 +28,9 @@ void migrate_start_colo_process(MigrationState *s);
>  bool migration_in_colo_state(void);
>  
>  /* loadvm */
> -bool migration_incoming_enable_colo(void);
> -void migration_incoming_exit_colo(void);
> +void migration_incoming_enable_colo(void);
> +void migration_incoming_disable_colo(void);
> +bool migration_incoming_colo_enabled(void);
>  void *colo_process_incoming_thread(void *opaque);
>  bool migration_incoming_in_colo_state(void);
>  
> diff --git a/migration/Makefile.objs b/migration/Makefile.objs
> index c83ec47ba8..a4f3bafd86 100644
> --- a/migration/Makefile.objs
> +++ b/migration/Makefile.objs
> @@ -1,6 +1,6 @@
>  common-obj-y += migration.o socket.o fd.o exec.o
>  common-obj-y += tls.o channel.o savevm.o
> -common-obj-y += colo-comm.o colo.o colo-failover.o
> +common-obj-y += colo.o colo-failover.o
>  common-obj-y += vmstate.o vmstate-types.o page_cache.o
>  common-obj-y += qemu-file.o global_state.o
>  common-obj-y += qemu-file-channel.o
> diff --git a/migration/colo-comm.c b/migration/colo-comm.c
> deleted file mode 100644
> index df26e4dfe7..0000000000
> --- a/migration/colo-comm.c
> +++ /dev/null
> @@ -1,76 +0,0 @@
> -/*
> - * COarse-grain LOck-stepping Virtual Machines for Non-stop Service (COLO)
> - * (a.k.a. Fault Tolerance or Continuous Replication)
> - *
> - * Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
> - * Copyright (c) 2016 FUJITSU LIMITED
> - * Copyright (c) 2016 Intel Corporation
> - *
> - * This work is licensed under the terms of the GNU GPL, version 2 or
> - * later. See the COPYING file in the top-level directory.
> - *
> - */
> -
> -#include "qemu/osdep.h"
> -#include "migration.h"
> -#include "migration/colo.h"
> -#include "migration/vmstate.h"
> -#include "trace.h"
> -
> -typedef struct {
> -     bool colo_requested;
> -} COLOInfo;
> -
> -static COLOInfo colo_info;
> -
> -COLOMode get_colo_mode(void)
> -{
> -    if (migration_in_colo_state()) {
> -        return COLO_MODE_PRIMARY;
> -    } else if (migration_incoming_in_colo_state()) {
> -        return COLO_MODE_SECONDARY;
> -    } else {
> -        return COLO_MODE_UNKNOWN;
> -    }
> -}
> -
> -static int colo_info_pre_save(void *opaque)
> -{
> -    COLOInfo *s = opaque;
> -
> -    s->colo_requested = migrate_colo_enabled();
> -
> -    return 0;
> -}
> -
> -static bool colo_info_need(void *opaque)
> -{
> -   return migrate_colo_enabled();
> -}
> -
> -static const VMStateDescription colo_state = {
> -    .name = "COLOState",
> -    .version_id = 1,
> -    .minimum_version_id = 1,
> -    .pre_save = colo_info_pre_save,
> -    .needed = colo_info_need,
> -    .fields = (VMStateField[]) {
> -        VMSTATE_BOOL(colo_requested, COLOInfo),
> -        VMSTATE_END_OF_LIST()
> -    },
> -};
> -
> -void colo_info_init(void)
> -{
> -    vmstate_register(NULL, 0, &colo_state, &colo_info);
> -}
> -
> -bool migration_incoming_enable_colo(void)
> -{
> -    return colo_info.colo_requested;
> -}
> -
> -void migration_incoming_exit_colo(void)
> -{
> -    colo_info.colo_requested = false;
> -}
> diff --git a/migration/colo.c b/migration/colo.c
> index e06640c3d6..c083d3696f 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -152,6 +152,17 @@ static void primary_vm_do_failover(void)
>      qemu_sem_post(&s->colo_exit_sem);
>  }
>  
> +COLOMode get_colo_mode(void)
> +{
> +    if (migration_in_colo_state()) {
> +        return COLO_MODE_PRIMARY;
> +    } else if (migration_incoming_in_colo_state()) {
> +        return COLO_MODE_SECONDARY;
> +    } else {
> +        return COLO_MODE_UNKNOWN;
> +    }
> +}
> +
>  void colo_do_failover(MigrationState *s)
>  {
>      /* Make sure VM stopped while failover happened. */
> @@ -745,7 +756,7 @@ out:
>      if (mis->to_src_file) {
>          qemu_fclose(mis->to_src_file);
>      }
> -    migration_incoming_exit_colo();
> +    migration_incoming_disable_colo();
>  
>      return NULL;
>  }
> diff --git a/migration/migration.c b/migration/migration.c
> index ddd0c4b988..8dee7dd309 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -277,6 +277,22 @@ int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
>      return migrate_send_rp_message(mis, msg_type, msglen, bufc);
>  }
>  
> +static bool migration_colo_enabled;
> +bool migration_incoming_colo_enabled(void)
> +{
> +    return migration_colo_enabled;
> +}
> +
> +void migration_incoming_disable_colo(void)
> +{
> +    migration_colo_enabled = false;
> +}
> +
> +void migration_incoming_enable_colo(void)
> +{
> +    migration_colo_enabled = true;
> +}
> +
>  void qemu_start_incoming_migration(const char *uri, Error **errp)
>  {
>      const char *p;
> @@ -388,7 +404,7 @@ static void process_incoming_migration_co(void *opaque)
>      }
>  
>      /* we get COLO info, and know if we are in COLO mode */
> -    if (!ret && migration_incoming_enable_colo()) {
> +    if (!ret && migration_incoming_colo_enabled()) {
>          /* Make sure all file formats flush their mutable metadata */
>          bdrv_invalidate_cache_all(&local_err);
>          if (local_err) {
> @@ -2431,6 +2447,11 @@ static void *migration_thread(void *opaque)
>          qemu_savevm_send_postcopy_advise(s->to_dst_file);
>      }
>  
> +    if (migrate_colo_enabled()) {
> +        /* Notify migration destination that we enable COLO */
> +        qemu_savevm_send_colo_enable(s->to_dst_file);
> +    }
> +
>      qemu_savevm_state_setup(s->to_dst_file);
>  
>      s->setup_time = qemu_clock_get_ms(QEMU_CLOCK_HOST) - setup_start;
> diff --git a/migration/savevm.c b/migration/savevm.c
> index e2be02afe4..c43d220220 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -55,6 +55,8 @@
>  #include "io/channel-buffer.h"
>  #include "io/channel-file.h"
>  #include "sysemu/replay.h"
> +#include "migration/colo.h"
> +
>  
>  #ifndef ETH_P_RARP
>  #define ETH_P_RARP 0x8035
> @@ -81,6 +83,9 @@ enum qemu_vm_cmd {
>                                        were previously sent during
>                                        precopy but are dirty. */
>      MIG_CMD_PACKAGED,          /* Send a wrapped stream within this stream */
> +
> +    MIG_CMD_ENABLE_COLO, /* Enable COLO */
> +
>      MIG_CMD_MAX
>  };
>  
> @@ -836,6 +841,12 @@ static void qemu_savevm_command_send(QEMUFile *f,
>      qemu_fflush(f);
>  }
>  
> +void qemu_savevm_send_colo_enable(QEMUFile *f)
> +{
> +    trace_savevm_send_colo_enable();
> +    qemu_savevm_command_send(f, MIG_CMD_ENABLE_COLO, 0, NULL);
> +}
> +
>  void qemu_savevm_send_ping(QEMUFile *f, uint32_t value)
>  {
>      uint32_t buf;
> @@ -1793,6 +1804,12 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState *mis)
>      return ret;
>  }
>  
> +static int loadvm_process_enable_colo(MigrationIncomingState *mis)
> +{
> +    migration_incoming_enable_colo();
> +    return 0;
> +}
> +
>  /*
>   * Process an incoming 'QEMU_VM_COMMAND'
>   * 0           just a normal return
> @@ -1866,6 +1883,9 @@ static int loadvm_process_command(QEMUFile *f)
>  
>      case MIG_CMD_POSTCOPY_RAM_DISCARD:
>          return loadvm_postcopy_ram_handle_discard(mis, len);
> +
> +    case MIG_CMD_ENABLE_COLO:
> +        return loadvm_process_enable_colo(mis);
>      }
>  
>      return 0;
> diff --git a/migration/savevm.h b/migration/savevm.h
> index cf4f0d37ca..c6d46b37a2 100644
> --- a/migration/savevm.h
> +++ b/migration/savevm.h
> @@ -52,6 +52,7 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
>                                             uint16_t len,
>                                             uint64_t *start_list,
>                                             uint64_t *length_list);
> +void qemu_savevm_send_colo_enable(QEMUFile *f);
>  
>  int qemu_loadvm_state(QEMUFile *f);
>  void qemu_loadvm_state_cleanup(void);
> diff --git a/migration/trace-events b/migration/trace-events
> index d6be74b7a7..9295b4cf40 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -34,6 +34,7 @@ savevm_send_open_return_path(void) ""
>  savevm_send_ping(uint32_t val) "0x%x"
>  savevm_send_postcopy_listen(void) ""
>  savevm_send_postcopy_run(void) ""
> +savevm_send_colo_enable(void) ""
>  savevm_state_setup(void) ""
>  savevm_state_header(void) ""
>  savevm_state_iterate(void) ""
> diff --git a/vl.c b/vl.c
> index 12e31d1aa9..a1576d2045 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -4437,8 +4437,6 @@ int main(int argc, char **argv, char **envp)
>  #endif
>      }
>  
> -    colo_info_init();
> -
>      if (net_init_clients(&err) < 0) {
>          error_report_err(err);
>          exit(1);
> -- 
> 2.17.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 07/17] COLO: Load dirty pages into SVM's RAM cache firstly
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 07/17] COLO: Load dirty pages into SVM's RAM cache firstly Zhang Chen
@ 2018-05-15 16:55   ` Dr. David Alan Gilbert
  2018-05-20 18:30     ` Zhang Chen
  0 siblings, 1 reply; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-15 16:55 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

* Zhang Chen (zhangckid@gmail.com) wrote:
> We should not load PVM's state directly into SVM, because there maybe some
> errors happen when SVM is receving data, which will break SVM.
> 
> We need to ensure receving all data before load the state into SVM. We use
> an extra memory to cache these data (PVM's ram). The ram cache in secondary side
> is initially the same as SVM/PVM's memory. And in the process of checkpoint,
> we cache the dirty pages of PVM into this ram cache firstly, so this ram cache
> always the same as PVM's memory at every checkpoint, then we flush this cached ram
> to SVM after we receive all PVM's state.
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> ---
>  include/exec/ram_addr.h |  1 +
>  migration/migration.c   |  2 +
>  migration/ram.c         | 99 +++++++++++++++++++++++++++++++++++++++--
>  migration/ram.h         |  4 ++
>  migration/savevm.c      |  2 +-
>  5 files changed, 104 insertions(+), 4 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index cf2446a176..51ec153a57 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -27,6 +27,7 @@ struct RAMBlock {
>      struct rcu_head rcu;
>      struct MemoryRegion *mr;
>      uint8_t *host;
> +    uint8_t *colo_cache; /* For colo, VM's ram cache */
>      ram_addr_t offset;
>      ram_addr_t used_length;
>      ram_addr_t max_length;
> diff --git a/migration/migration.c b/migration/migration.c
> index 8dee7dd309..cfc1b958b9 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -421,6 +421,8 @@ static void process_incoming_migration_co(void *opaque)
>  
>          /* Wait checkpoint incoming thread exit before free resource */
>          qemu_thread_join(&mis->colo_incoming_thread);
> +        /* We hold the global iothread lock, so it is safe here */
> +        colo_release_ram_cache();
>      }
>  
>      if (ret < 0) {
> diff --git a/migration/ram.c b/migration/ram.c
> index 912810c18e..7ca845f8a9 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2520,6 +2520,20 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
>      return block->host + offset;
>  }
>  
> +static inline void *colo_cache_from_block_offset(RAMBlock *block,
> +                                                 ram_addr_t offset)
> +{
> +    if (!offset_in_ramblock(block, offset)) {
> +        return NULL;
> +    }
> +    if (!block->colo_cache) {
> +        error_report("%s: colo_cache is NULL in block :%s",
> +                     __func__, block->idstr);
> +        return NULL;
> +    }
> +    return block->colo_cache + offset;
> +}
> +
>  /**
>   * ram_handle_compressed: handle the zero page case
>   *
> @@ -2724,6 +2738,57 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
>      qemu_mutex_unlock(&decomp_done_lock);
>  }
>  
> +/*
> + * colo cache: this is for secondary VM, we cache the whole
> + * memory of the secondary VM, it is need to hold the global lock
> + * to call this helper.
> + */
> +int colo_init_ram_cache(void)
> +{
> +    RAMBlock *block;
> +
> +    rcu_read_lock();
> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> +        block->colo_cache = qemu_anon_ram_alloc(block->used_length,
> +                                                NULL,
> +                                                false);
> +        if (!block->colo_cache) {
> +            error_report("%s: Can't alloc memory for COLO cache of block %s,"
> +                         "size 0x" RAM_ADDR_FMT, __func__, block->idstr,
> +                         block->used_length);
> +            goto out_locked;
> +        }
> +    }
> +    rcu_read_unlock();
> +    return 0;
> +
> +out_locked:
> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> +        if (block->colo_cache) {
> +            qemu_anon_ram_free(block->colo_cache, block->used_length);
> +            block->colo_cache = NULL;
> +        }
> +    }
> +
> +    rcu_read_unlock();
> +    return -errno;
> +}
> +
> +/* It is need to hold the global lock to call this helper */
> +void colo_release_ram_cache(void)
> +{
> +    RAMBlock *block;
> +
> +    rcu_read_lock();
> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> +        if (block->colo_cache) {
> +            qemu_anon_ram_free(block->colo_cache, block->used_length);
> +            block->colo_cache = NULL;
> +        }
> +    }
> +    rcu_read_unlock();
> +}
> +
>  /**
>   * ram_load_setup: Setup RAM for migration incoming side
>   *
> @@ -2740,6 +2805,7 @@ static int ram_load_setup(QEMUFile *f, void *opaque)
>  
>      xbzrle_load_setup();
>      ramblock_recv_map_init();
> +
>      return 0;
>  }
>  
> @@ -2753,6 +2819,7 @@ static int ram_load_cleanup(void *opaque)
>          g_free(rb->receivedmap);
>          rb->receivedmap = NULL;
>      }
> +
>      return 0;
>  }
>  
> @@ -2966,7 +3033,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>  
>      while (!postcopy_running && !ret && !(flags & RAM_SAVE_FLAG_EOS)) {
>          ram_addr_t addr, total_ram_bytes;
> -        void *host = NULL;
> +        void *host = NULL, *host_bak = NULL;
>          uint8_t ch;
>  
>          addr = qemu_get_be64(f);
> @@ -2986,13 +3053,36 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>                       RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
>              RAMBlock *block = ram_block_from_stream(f, flags);
>  
> -            host = host_from_ram_block_offset(block, addr);
> +             /*
> +             * After going into COLO, we should load the Page into colo_cache
> +             * NOTE: We need to keep a copy of SVM's ram in colo_cache.
> +             * Privously, we copied all these memory in preparing stage of COLO
> +             * while we need to stop VM, which is a time-consuming process.
> +             * Here we optimize it by a trick, back-up every page while in
> +             * migration process while COLO is enabled, though it affects the
> +             * speed of the migration, but it obviously reduce the downtime of
> +             * back-up all SVM'S memory in COLO preparing stage.
> +             */
> +            if (migration_incoming_in_colo_state()) {
> +                host = colo_cache_from_block_offset(block, addr);
> +                /* After goes into COLO state, don't backup it any more */
> +                if (!migration_incoming_in_colo_state()) {

I don't understand how we can reach this nested 'if';
colo_cache_from_block_offset is short and simple; so how can
migration_incoming_in_colo_state() be both true and false?

I think this is trying to do it for when COLO is enabled but when
receiving the first checkpoint you want to take a copy; but I don't
think that's what the 'if' is doing.
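As an illustration only, a minimal sketch of what that intent might look
like (hypothetical, reusing the migration_incoming_colo_enabled() helper
added earlier in this series):

    if (migration_incoming_colo_enabled()) {
        if (migration_incoming_in_colo_state()) {
            /* In COLO state: load the page into the ram cache only */
            host = colo_cache_from_block_offset(block, addr);
        } else {
            /*
             * COLO enabled, but still in the initial migration:
             * load into guest RAM and keep a backup copy in the cache.
             */
            host = host_from_ram_block_offset(block, addr);
            host_bak = colo_cache_from_block_offset(block, addr);
        }
    } else {
        host = host_from_ram_block_offset(block, addr);
    }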

Dave

> +                    host_bak = host;
> +                }
> +            }
> +            if (!migration_incoming_in_colo_state()) {
> +                host = host_from_ram_block_offset(block, addr);
> +            }
>              if (!host) {
>                  error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
>                  ret = -EINVAL;
>                  break;
>              }
> -            ramblock_recv_bitmap_set(block, host);
> +
> +            if (!migration_incoming_in_colo_state()) {
> +                ramblock_recv_bitmap_set(block, host);
> +            }
> +
>              trace_ram_load_loop(block->idstr, (uint64_t)addr, flags, host);
>          }
>  
> @@ -3087,6 +3177,9 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>          if (!ret) {
>              ret = qemu_file_get_error(f);
>          }
> +        if (!ret && host_bak && host) {
> +            memcpy(host_bak, host, TARGET_PAGE_SIZE);
> +        }
>      }
>  
>      ret |= wait_for_decompress_done();
> diff --git a/migration/ram.h b/migration/ram.h
> index 5030be110a..66e9b86ff0 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -64,4 +64,8 @@ bool ramblock_recv_bitmap_test_byte_offset(RAMBlock *rb, uint64_t byte_offset);
>  void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
>  void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr, size_t nr);
>  
> +/* ram cache */
> +int colo_init_ram_cache(void);
> +void colo_release_ram_cache(void);
> +
>  #endif
> diff --git a/migration/savevm.c b/migration/savevm.c
> index c43d220220..ec0bff09ce 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1807,7 +1807,7 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState *mis)
>  static int loadvm_process_enable_colo(MigrationIncomingState *mis)
>  {
>      migration_incoming_enable_colo();
> -    return 0;
> +    return colo_init_ram_cache();
>  }
>  
>  /*
> -- 
> 2.17.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm Zhang Chen
@ 2018-05-15 18:56   ` Dr. David Alan Gilbert
  2018-06-03  5:10     ` Zhang Chen
  0 siblings, 1 reply; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-15 18:56 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

* Zhang Chen (zhangckid@gmail.com) wrote:
> From: zhanghailiang <zhang.zhanghailiang@huawei.com>
> 
> There are several stages during the loadvm/savevm process, and in each stage
> the migration incoming side processes different types of sections.
> We want to control these stages more precisely; this benefits COLO
> performance, since we don't have to save QEMU_VM_SECTION_START type
> sections every time we do a checkpoint. Besides, we want to separate
> the process of saving/loading memory and device state.
> 
> So we add two new helper functions, qemu_load_device_state() and
> qemu_savevm_live_state(), to handle the different stages during migration.
> 
> Besides, we make qemu_loadvm_state_main() and qemu_save_device_state()
> public, and simplify the code of qemu_save_device_state() by calling the
> wrapper qemu_savevm_state_header().
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  migration/colo.c   | 36 ++++++++++++++++++++++++++++--------
>  migration/savevm.c | 35 ++++++++++++++++++++++++++++-------
>  migration/savevm.h |  4 ++++
>  3 files changed, 60 insertions(+), 15 deletions(-)
> 
> diff --git a/migration/colo.c b/migration/colo.c
> index cdff0a2490..5b055f79f1 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -30,6 +30,7 @@
>  #include "block/block.h"
>  #include "qapi/qapi-events-migration.h"
>  #include "qapi/qmp/qerror.h"
> +#include "sysemu/cpus.h"
>  
>  static bool vmstate_loading;
>  static Notifier packets_compare_notifier;
> @@ -414,23 +415,30 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
>  
>      /* Disable block migration */
>      migrate_set_block_enabled(false, &local_err);
> -    qemu_savevm_state_header(fb);
> -    qemu_savevm_state_setup(fb);
>      qemu_mutex_lock_iothread();
>      replication_do_checkpoint_all(&local_err);
>      if (local_err) {
>          qemu_mutex_unlock_iothread();
>          goto out;
>      }
> -    qemu_savevm_state_complete_precopy(fb, false, false);
> -    qemu_mutex_unlock_iothread();
> -
> -    qemu_fflush(fb);
>  
>      colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND, &local_err);
>      if (local_err) {
>          goto out;
>      }
> +    /*
> +     * Only save VM's live state, which not including device state.
> +     * TODO: We may need a timeout mechanism to prevent COLO process
> +     * to be blocked here.
> +     */

I guess that's the downside of transmitting it directly rather than into the buffer;
Peter Xu's OOB command system would let you kill the connection - and
that's something I think COLO should use.
Still the change saves you having that huge outgoing buffer on the
source side and lets you start sending the checkpoint sooner, which
means the pause time should be smaller.

> +    qemu_savevm_live_state(s->to_dst_file);

Does this actually need to be inside of the qemu_mutex_lock_iothread?
I'm pretty sure the device_state needs to be, but I'm not sure the
live_state needs to.
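If it doesn't, then a purely hypothetical reshuffle (assuming the RAM-only
live state is safe to stream once the guest is stopped, without holding the
BQL) could look roughly like:

    qemu_mutex_unlock_iothread();
    /* RAM / live state streamed directly to the destination, no lock held */
    qemu_savevm_live_state(s->to_dst_file);

    qemu_mutex_lock_iothread();
    /* device state still saved into the buffer under the lock */
    ret = qemu_save_device_state(fb);
    qemu_mutex_unlock_iothread();

    qemu_fflush(fb);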

> +    /* Note: device state is saved into buffer */
> +    ret = qemu_save_device_state(fb);
> +
> +    qemu_mutex_unlock_iothread();
> +
> +    qemu_fflush(fb);
> +
>      /*
>       * We need the size of the VMstate data in Secondary side,
>       * With which we can decide how much data should be read.
> @@ -643,6 +651,7 @@ void *colo_process_incoming_thread(void *opaque)
>      uint64_t total_size;
>      uint64_t value;
>      Error *local_err = NULL;
> +    int ret;
>  
>      qemu_sem_init(&mis->colo_incoming_sem, 0);
>  
> @@ -715,6 +724,16 @@ void *colo_process_incoming_thread(void *opaque)
>              goto out;
>          }
>  
> +        qemu_mutex_lock_iothread();
> +        cpu_synchronize_all_pre_loadvm();
> +        ret = qemu_loadvm_state_main(mis->from_src_file, mis);
> +        qemu_mutex_unlock_iothread();
> +
> +        if (ret < 0) {
> +            error_report("Load VM's live state (ram) error");
> +            goto out;
> +        }
> +
>          value = colo_receive_message_value(mis->from_src_file,
>                                   COLO_MESSAGE_VMSTATE_SIZE, &local_err);
>          if (local_err) {
> @@ -748,8 +767,9 @@ void *colo_process_incoming_thread(void *opaque)
>          qemu_mutex_lock_iothread();
>          qemu_system_reset(SHUTDOWN_CAUSE_NONE);

Is the reset safe? Are you sure it doesn't change the ram you've just
loaded?

>          vmstate_loading = true;
> -        if (qemu_loadvm_state(fb) < 0) {
> -            error_report("COLO: loadvm failed");
> +        ret = qemu_load_device_state(fb);
> +        if (ret < 0) {
> +            error_report("COLO: load device state failed");
>              qemu_mutex_unlock_iothread();
>              goto out;
>          }
> diff --git a/migration/savevm.c b/migration/savevm.c
> index ec0bff09ce..0f61239429 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1332,13 +1332,20 @@ done:
>      return ret;
>  }
>  
> -static int qemu_save_device_state(QEMUFile *f)
> +void qemu_savevm_live_state(QEMUFile *f)
>  {
> -    SaveStateEntry *se;
> +    /* save QEMU_VM_SECTION_END section */
> +    qemu_savevm_state_complete_precopy(f, true, false);
> +    qemu_put_byte(f, QEMU_VM_EOF);
> +}
>  
> -    qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
> -    qemu_put_be32(f, QEMU_VM_FILE_VERSION);
> +int qemu_save_device_state(QEMUFile *f)
> +{
> +    SaveStateEntry *se;
>  
> +    if (!migration_in_colo_state()) {
> +        qemu_savevm_state_header(f);
> +    }
>      cpu_synchronize_all_states();

So this changes qemu_save_device_state to use savevm_state_header
which feels reasonable, but that includes the 'configuration'
section;  do we want that?  Is that OK for Xen's use in
qmp_xen_save_devices_state?
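If it isn't, one hypothetical way to keep the old behaviour for the Xen
caller while still letting COLO skip the header entirely would be:

    if (!migration_in_colo_state()) {
        /* previous minimal header: magic + version, no configuration section */
        qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
        qemu_put_be32(f, QEMU_VM_FILE_VERSION);
    }
    cpu_synchronize_all_states();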

>      QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
> @@ -1394,8 +1401,6 @@ enum LoadVMExitCodes {
>      LOADVM_QUIT     =  1,
>  };
>  
> -static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
> -
>  /* ------ incoming postcopy messages ------ */
>  /* 'advise' arrives before any transfers just to tell us that a postcopy
>   * *might* happen - it might be skipped if precopy transferred everything
> @@ -2075,7 +2080,7 @@ void qemu_loadvm_state_cleanup(void)
>      }
>  }
>  
> -static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
> +int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
>  {
>      uint8_t section_type;
>      int ret = 0;
> @@ -2229,6 +2234,22 @@ int qemu_loadvm_state(QEMUFile *f)
>      return ret;
>  }
>  
> +int qemu_load_device_state(QEMUFile *f)
> +{
> +    MigrationIncomingState *mis = migration_incoming_get_current();
> +    int ret;
> +
> +    /* Load QEMU_VM_SECTION_FULL section */
> +    ret = qemu_loadvm_state_main(f, mis);
> +    if (ret < 0) {
> +        error_report("Failed to load device state: %d", ret);
> +        return ret;
> +    }
> +
> +    cpu_synchronize_all_post_init();
> +    return 0;
> +}
> +
>  int save_snapshot(const char *name, Error **errp)
>  {
>      BlockDriverState *bs, *bs1;
> diff --git a/migration/savevm.h b/migration/savevm.h
> index c6d46b37a2..cf7935dd68 100644
> --- a/migration/savevm.h
> +++ b/migration/savevm.h
> @@ -53,8 +53,12 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
>                                             uint64_t *start_list,
>                                             uint64_t *length_list);
>  void qemu_savevm_send_colo_enable(QEMUFile *f);
> +void qemu_savevm_live_state(QEMUFile *f);
> +int qemu_save_device_state(QEMUFile *f);
>  
>  int qemu_loadvm_state(QEMUFile *f);
>  void qemu_loadvm_state_cleanup(void);
> +int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
> +int qemu_load_device_state(QEMUFile *f);
>  
>  #endif
> -- 
> 2.17.0

Dave

> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 04/17] COLO: integrate colo compare with colo frame
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 04/17] COLO: integrate colo compare with colo frame Zhang Chen
@ 2018-05-16 11:12   ` Dr. David Alan Gilbert
  2018-05-16 13:55     ` Zhang Chen
  0 siblings, 1 reply; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-16 11:12 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang

* Zhang Chen (zhangckid@gmail.com) wrote:
> For COLO FT, both the PVM and SVM run at the same time and
> only sync their state when needed.
> 
> So here, let the SVM run while not doing a checkpoint, and change
> DEFAULT_MIGRATE_X_CHECKPOINT_DELAY to 200*100.
> 
> Besides, we forgot to release colo_checkpoint_sem and
> colo_delay_timer, fix them here.
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  migration/colo.c      | 42 ++++++++++++++++++++++++++++++++++++++++--
>  migration/migration.c |  4 ++--
>  2 files changed, 42 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/colo.c b/migration/colo.c
> index 4381067ed4..081df1835f 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -25,8 +25,11 @@
>  #include "qemu/error-report.h"
>  #include "migration/failover.h"
>  #include "replication.h"
> +#include "net/colo-compare.h"
> +#include "net/colo.h"
>  
>  static bool vmstate_loading;
> +static Notifier packets_compare_notifier;
>  
>  #define COLO_BUFFER_BASE_SIZE (4 * 1024 * 1024)
>  
> @@ -343,6 +346,11 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
>          goto out;
>      }
>  
> +    colo_notify_compares_event(NULL, COLO_EVENT_CHECKPOINT, &local_err);
> +    if (local_err) {
> +        goto out;
> +    }
> +
>      /* Disable block migration */
>      migrate_set_block_enabled(false, &local_err);
>      qemu_savevm_state_header(fb);
> @@ -400,6 +408,11 @@ out:
>      return ret;
>  }
>  
> +static void colo_compare_notify_checkpoint(Notifier *notifier, void *data)
> +{
> +    colo_checkpoint_notify(data);
> +}
> +
>  static void colo_process_checkpoint(MigrationState *s)
>  {
>      QIOChannelBuffer *bioc;
> @@ -416,6 +429,9 @@ static void colo_process_checkpoint(MigrationState *s)
>          goto out;
>      }
>  
> +    packets_compare_notifier.notify = colo_compare_notify_checkpoint;
> +    colo_compare_register_notifier(&packets_compare_notifier);
> +
>      /*
>       * Wait for Secondary finish loading VM states and enter COLO
>       * restore.
> @@ -461,11 +477,21 @@ out:
>          qemu_fclose(fb);
>      }
>  
> -    timer_del(s->colo_delay_timer);
> -
>      /* Hope this not to be too long to wait here */
>      qemu_sem_wait(&s->colo_exit_sem);
>      qemu_sem_destroy(&s->colo_exit_sem);
> +
> +    /*
> +     * It is safe to unregister notifier after failover finished.
> +     * Besides, colo_delay_timer and colo_checkpoint_sem can't be
> +     * released befor unregister notifier, or there will be use-after-free
> +     * error.
> +     */
> +    colo_compare_unregister_notifier(&packets_compare_notifier);
> +    timer_del(s->colo_delay_timer);
> +    timer_free(s->colo_delay_timer);
> +    qemu_sem_destroy(&s->colo_checkpoint_sem);
> +
>      /*
>       * Must be called after failover BH is completed,
>       * Or the failover BH may shutdown the wrong fd that
> @@ -558,6 +584,11 @@ void *colo_process_incoming_thread(void *opaque)
>      fb = qemu_fopen_channel_input(QIO_CHANNEL(bioc));
>      object_unref(OBJECT(bioc));
>  
> +    qemu_mutex_lock_iothread();
> +    vm_start();
> +    trace_colo_vm_state_change("stop", "run");
> +    qemu_mutex_unlock_iothread();
> +
>      colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_READY,
>                        &local_err);
>      if (local_err) {
> @@ -577,6 +608,11 @@ void *colo_process_incoming_thread(void *opaque)
>              goto out;
>          }
>  
> +        qemu_mutex_lock_iothread();
> +        vm_stop_force_state(RUN_STATE_COLO);
> +        trace_colo_vm_state_change("run", "stop");
> +        qemu_mutex_unlock_iothread();
> +
>          /* FIXME: This is unnecessary for periodic checkpoint mode */
>          colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_REPLY,
>                       &local_err);
> @@ -630,6 +666,8 @@ void *colo_process_incoming_thread(void *opaque)
>          }
>  
>          vmstate_loading = false;
> +        vm_start();
> +        trace_colo_vm_state_change("stop", "run");
>          qemu_mutex_unlock_iothread();
>  
>          if (failover_get_state() == FAILOVER_STATUS_RELAUNCH) {
> diff --git a/migration/migration.c b/migration/migration.c
> index 35f2781b03..bca187275a 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -76,9 +76,9 @@
>  #define DEFAULT_MIGRATE_XBZRLE_CACHE_SIZE (64 * 1024 * 1024)
>  
>  /* The delay time (in ms) between two COLO checkpoints
> - * Note: Please change this default value to 10000 when we support hybrid mode.
> + * Note: Please change this default value to 20000 when we support hybrid mode.

You can remove that comment now?

Dave

>   */
> -#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY 200
> +#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
>  #define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
>  #define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
>  
> -- 
> 2.17.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy
  2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (16 preceding siblings ...)
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 17/17] COLO: quick failover process by kick COLO thread Zhang Chen
@ 2018-05-16 11:18 ` Dr. David Alan Gilbert
  2018-05-16 12:21   ` Jason Wang
  17 siblings, 1 reply; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-16 11:18 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini, Zhang Chen

Hi Jason,
  Patches 1,2,3,14,15 seem mostly networky to me; can you have a look?

Dave

* Zhang Chen (zhangckid@gmail.com) wrote:
> The RESEND version just fix code style in patch 11/17.
> 
> Hi~ All~
> 
> COLO Frame, block replication and COLO proxy(colo-compare,filter-mirror,
> filter-redirector,filter-rewriter) have been exist in qemu
> for long time, it's time to integrate these three parts to make COLO really works.
> 
> In this series, we have some optimizations for COLO frame, including separating the
> process of saving ram and device state, using an COLO_EXIT event to notify users that
> VM exits COLO, for these parts, most of them have been reviewed long time ago in old version,
> but since this series have just rebased on upstream which had merged a new series of migration,
> parts of pathes in this series deserve review again.
> 
> We use notifier/callback method for COLO compare to notify COLO frame about
> net packets inconsistent event, and add a handle_event method for NetFilterClass to
> help COLO frame to notify filters and colo-compare about checkpoint/failover event,
> it is flexible.
> 
> For the neweset version, please refer to:
> https://github.com/zhangckid/qemu/tree/qemu-colo-18may1-rebase
> 
> Please review, thanks.
> 
> V7:
>  - Addressed Markus's comments in 11/17.
>  - Rebased on upstream.
> 
> V6:
>  - Addressed Eric Blake's comments, use the enum to feedback in patch 11/17.
>  - Fixed QAPI command separator problem in patch 11/17.
> 
> 
> Zhang Chen (12):
>   filter-rewriter: fix memory leak for connection in
>     connection_track_table
>   colo-compare: implement the process of checkpoint
>   colo-compare: use notifier to notify packets comparing result
>   COLO: integrate colo compare with colo frame
>   COLO: Add block replication into colo process
>   COLO: Remove colo_state migration struct
>   COLO: Load dirty pages into SVM's RAM cache firstly
>   ram/COLO: Record the dirty pages that SVM received
>   COLO: Flush memory data from ram cache
>   qapi: Add new command to query colo status
>   filter: Add handle_event method for NetFilterClass
>   filter-rewriter: handle checkpoint and failover event
> 
> zhanghailiang (5):
>   qmp event: Add COLO_EXIT event to notify users while exited COLO
>   savevm: split the process of different stages for loadvm/savevm
>   COLO: flush host dirty ram from cache
>   COLO: notify net filters about checkpoint/failover event
>   COLO: quick failover process by kick COLO thread
> 
>  include/exec/ram_addr.h  |   1 +
>  include/migration/colo.h |  11 +-
>  include/net/filter.h     |   5 +
>  migration/Makefile.objs  |   2 +-
>  migration/colo-comm.c    |  76 --------------
>  migration/colo.c         | 219 +++++++++++++++++++++++++++++++++++++--
>  migration/migration.c    |  38 ++++++-
>  migration/ram.c          | 183 +++++++++++++++++++++++++++++++-
>  migration/ram.h          |   4 +
>  migration/savevm.c       |  55 ++++++++--
>  migration/savevm.h       |   5 +
>  migration/trace-events   |   3 +
>  net/colo-compare.c       | 108 +++++++++++++++++--
>  net/colo-compare.h       |  24 +++++
>  net/colo.h               |   4 +
>  net/filter-rewriter.c    | 109 +++++++++++++++++--
>  net/filter.c             |  17 +++
>  net/net.c                |  28 +++++
>  qapi/migration.json      |  70 +++++++++++++
>  vl.c                     |   2 -
>  20 files changed, 846 insertions(+), 118 deletions(-)
>  delete mode 100644 migration/colo-comm.c
>  create mode 100644 net/colo-compare.h
> 
> -- 
> 2.17.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy
  2018-05-16 11:18 ` [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Dr. David Alan Gilbert
@ 2018-05-16 12:21   ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2018-05-16 12:21 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Paolo Bonzini, Zhang Chen, qemu-devel, Markus Armbruster



On 2018-05-16 19:18, Dr. David Alan Gilbert wrote:
> Hi Jason,
>    Patches 1,2,3,14,15 seem mostly networky to me; can you have a look?
>
> Dave

Sure, will review.

Thanks

>
> * Zhang Chen (zhangckid@gmail.com) wrote:
>> The RESEND version just fix code style in patch 11/17.
>>
>> Hi~ All~
>>
>> COLO Frame, block replication and COLO proxy(colo-compare,filter-mirror,
>> filter-redirector,filter-rewriter) have been exist in qemu
>> for long time, it's time to integrate these three parts to make COLO really works.
>>
>> In this series, we have some optimizations for COLO frame, including separating the
>> process of saving ram and device state, using an COLO_EXIT event to notify users that
>> VM exits COLO, for these parts, most of them have been reviewed long time ago in old version,
>> but since this series have just rebased on upstream which had merged a new series of migration,
>> parts of pathes in this series deserve review again.
>>
>> We use notifier/callback method for COLO compare to notify COLO frame about
>> net packets inconsistent event, and add a handle_event method for NetFilterClass to
>> help COLO frame to notify filters and colo-compare about checkpoint/failover event,
>> it is flexible.
>>
>> For the neweset version, please refer to:
>> https://github.com/zhangckid/qemu/tree/qemu-colo-18may1-rebase
>>
>> Please review, thanks.
>>
>> V7:
>>   - Addressed Markus's comments in 11/17.
>>   - Rebased on upstream.
>>
>> V6:
>>   - Addressed Eric Blake's comments, use the enum to feedback in patch 11/17.
>>   - Fixed QAPI command separator problem in patch 11/17.
>>
>>
>> Zhang Chen (12):
>>    filter-rewriter: fix memory leak for connection in
>>      connection_track_table
>>    colo-compare: implement the process of checkpoint
>>    colo-compare: use notifier to notify packets comparing result
>>    COLO: integrate colo compare with colo frame
>>    COLO: Add block replication into colo process
>>    COLO: Remove colo_state migration struct
>>    COLO: Load dirty pages into SVM's RAM cache firstly
>>    ram/COLO: Record the dirty pages that SVM received
>>    COLO: Flush memory data from ram cache
>>    qapi: Add new command to query colo status
>>    filter: Add handle_event method for NetFilterClass
>>    filter-rewriter: handle checkpoint and failover event
>>
>> zhanghailiang (5):
>>    qmp event: Add COLO_EXIT event to notify users while exited COLO
>>    savevm: split the process of different stages for loadvm/savevm
>>    COLO: flush host dirty ram from cache
>>    COLO: notify net filters about checkpoint/failover event
>>    COLO: quick failover process by kick COLO thread
>>
>>   include/exec/ram_addr.h  |   1 +
>>   include/migration/colo.h |  11 +-
>>   include/net/filter.h     |   5 +
>>   migration/Makefile.objs  |   2 +-
>>   migration/colo-comm.c    |  76 --------------
>>   migration/colo.c         | 219 +++++++++++++++++++++++++++++++++++++--
>>   migration/migration.c    |  38 ++++++-
>>   migration/ram.c          | 183 +++++++++++++++++++++++++++++++-
>>   migration/ram.h          |   4 +
>>   migration/savevm.c       |  55 ++++++++--
>>   migration/savevm.h       |   5 +
>>   migration/trace-events   |   3 +
>>   net/colo-compare.c       | 108 +++++++++++++++++--
>>   net/colo-compare.h       |  24 +++++
>>   net/colo.h               |   4 +
>>   net/filter-rewriter.c    | 109 +++++++++++++++++--
>>   net/filter.c             |  17 +++
>>   net/net.c                |  28 +++++
>>   qapi/migration.json      |  70 +++++++++++++
>>   vl.c                     |   2 -
>>   20 files changed, 846 insertions(+), 118 deletions(-)
>>   delete mode 100644 migration/colo-comm.c
>>   create mode 100644 net/colo-compare.h
>>
>> -- 
>> 2.17.0
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 04/17] COLO: integrate colo compare with colo frame
  2018-05-16 11:12   ` Dr. David Alan Gilbert
@ 2018-05-16 13:55     ` Zhang Chen
  0 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-16 13:55 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang

On Wed, May 16, 2018 at 7:12 PM, Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:

> * Zhang Chen (zhangckid@gmail.com) wrote:
> > For COLO FT, both the PVM and SVM run at the same time,
> > only sync the state while it needs.
> >
> > So here, let SVM runs while not doing checkpoint, change
> > DEFAULT_MIGRATE_X_CHECKPOINT_DELAY to 200*100.
> >
> > Besides, we forgot to release colo_checkpoint_semd and
> > colo_delay_timer, fix them here.
> >
> > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > ---
> >  migration/colo.c      | 42 ++++++++++++++++++++++++++++++++++++++++--
> >  migration/migration.c |  4 ++--
> >  2 files changed, 42 insertions(+), 4 deletions(-)
> >
> > diff --git a/migration/colo.c b/migration/colo.c
> > index 4381067ed4..081df1835f 100644
> > --- a/migration/colo.c
> > +++ b/migration/colo.c
> > @@ -25,8 +25,11 @@
> >  #include "qemu/error-report.h"
> >  #include "migration/failover.h"
> >  #include "replication.h"
> > +#include "net/colo-compare.h"
> > +#include "net/colo.h"
> >
> >  static bool vmstate_loading;
> > +static Notifier packets_compare_notifier;
> >
> >  #define COLO_BUFFER_BASE_SIZE (4 * 1024 * 1024)
> >
> > @@ -343,6 +346,11 @@ static int colo_do_checkpoint_transaction(MigrationState
> *s,
> >          goto out;
> >      }
> >
> > +    colo_notify_compares_event(NULL, COLO_EVENT_CHECKPOINT,
> &local_err);
> > +    if (local_err) {
> > +        goto out;
> > +    }
> > +
> >      /* Disable block migration */
> >      migrate_set_block_enabled(false, &local_err);
> >      qemu_savevm_state_header(fb);
> > @@ -400,6 +408,11 @@ out:
> >      return ret;
> >  }
> >
> > +static void colo_compare_notify_checkpoint(Notifier *notifier, void
> *data)
> > +{
> > +    colo_checkpoint_notify(data);
> > +}
> > +
> >  static void colo_process_checkpoint(MigrationState *s)
> >  {
> >      QIOChannelBuffer *bioc;
> > @@ -416,6 +429,9 @@ static void colo_process_checkpoint(MigrationState
> *s)
> >          goto out;
> >      }
> >
> > +    packets_compare_notifier.notify = colo_compare_notify_checkpoint;
> > +    colo_compare_register_notifier(&packets_compare_notifier);
> > +
> >      /*
> >       * Wait for Secondary finish loading VM states and enter COLO
> >       * restore.
> > @@ -461,11 +477,21 @@ out:
> >          qemu_fclose(fb);
> >      }
> >
> > -    timer_del(s->colo_delay_timer);
> > -
> >      /* Hope this not to be too long to wait here */
> >      qemu_sem_wait(&s->colo_exit_sem);
> >      qemu_sem_destroy(&s->colo_exit_sem);
> > +
> > +    /*
> > +     * It is safe to unregister notifier after failover finished.
> > +     * Besides, colo_delay_timer and colo_checkpoint_sem can't be
> > +     * released befor unregister notifier, or there will be
> use-after-free
> > +     * error.
> > +     */
> > +    colo_compare_unregister_notifier(&packets_compare_notifier);
> > +    timer_del(s->colo_delay_timer);
> > +    timer_free(s->colo_delay_timer);
> > +    qemu_sem_destroy(&s->colo_checkpoint_sem);
> > +
> >      /*
> >       * Must be called after failover BH is completed,
> >       * Or the failover BH may shutdown the wrong fd that
> > @@ -558,6 +584,11 @@ void *colo_process_incoming_thread(void *opaque)
> >      fb = qemu_fopen_channel_input(QIO_CHANNEL(bioc));
> >      object_unref(OBJECT(bioc));
> >
> > +    qemu_mutex_lock_iothread();
> > +    vm_start();
> > +    trace_colo_vm_state_change("stop", "run");
> > +    qemu_mutex_unlock_iothread();
> > +
> >      colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_READY,
> >                        &local_err);
> >      if (local_err) {
> > @@ -577,6 +608,11 @@ void *colo_process_incoming_thread(void *opaque)
> >              goto out;
> >          }
> >
> > +        qemu_mutex_lock_iothread();
> > +        vm_stop_force_state(RUN_STATE_COLO);
> > +        trace_colo_vm_state_change("run", "stop");
> > +        qemu_mutex_unlock_iothread();
> > +
> >          /* FIXME: This is unnecessary for periodic checkpoint mode */
> >          colo_send_message(mis->to_src_file,
> COLO_MESSAGE_CHECKPOINT_REPLY,
> >                       &local_err);
> > @@ -630,6 +666,8 @@ void *colo_process_incoming_thread(void *opaque)
> >          }
> >
> >          vmstate_loading = false;
> > +        vm_start();
> > +        trace_colo_vm_state_change("stop", "run");
> >          qemu_mutex_unlock_iothread();
> >
> >          if (failover_get_state() == FAILOVER_STATUS_RELAUNCH) {
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 35f2781b03..bca187275a 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -76,9 +76,9 @@
> >  #define DEFAULT_MIGRATE_XBZRLE_CACHE_SIZE (64 * 1024 * 1024)
> >
> >  /* The delay time (in ms) between two COLO checkpoints
> > - * Note: Please change this default value to 10000 when we support
> hybrid mode.
> > + * Note: Please change this default value to 20000 when we support
> hybrid mode.
>
> You can remove that comment now?
>

Yes, I will remove it in the next version.

Thanks
Zhang Chen



>
> Dave
>
> >   */
> > -#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY 200
> > +#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
> >  #define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
> >  #define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
> >
> > --
> > 2.17.0
> >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 06/17] COLO: Remove colo_state migration struct
  2018-05-15 16:02   ` Dr. David Alan Gilbert
@ 2018-05-16 13:58     ` Zhang Chen
  0 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-16 13:58 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang

On Wed, May 16, 2018 at 12:02 AM, Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:

> * Zhang Chen (zhangckid@gmail.com) wrote:
> > We need to know if migration is going into COLO state for
> > incoming side before start normal migration.
> >
> > Instead by using the VMStateDescription to send colo_state
> > from source side to destination side, we use MIG_CMD_ENABLE_COLO
> > to indicate whether COLO is enabled or not.
> >
> > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > Signed-off-by: Zhang Chen <zhangckid@gmail.com>
>
> Yes, that simplifies it a bit.
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>

Thanks for your review.
Zhang Chen


>
> > ---
> >  include/migration/colo.h |  5 +--
> >  migration/Makefile.objs  |  2 +-
> >  migration/colo-comm.c    | 76 ----------------------------------------
> >  migration/colo.c         | 13 ++++++-
> >  migration/migration.c    | 23 +++++++++++-
> >  migration/savevm.c       | 20 +++++++++++
> >  migration/savevm.h       |  1 +
> >  migration/trace-events   |  1 +
> >  vl.c                     |  2 --
> >  9 files changed, 60 insertions(+), 83 deletions(-)
> >  delete mode 100644 migration/colo-comm.c
> >
> > diff --git a/include/migration/colo.h b/include/migration/colo.h
> > index fefb2fcf4c..99ce17aca7 100644
> > --- a/include/migration/colo.h
> > +++ b/include/migration/colo.h
> > @@ -28,8 +28,9 @@ void migrate_start_colo_process(MigrationState *s);
> >  bool migration_in_colo_state(void);
> >
> >  /* loadvm */
> > -bool migration_incoming_enable_colo(void);
> > -void migration_incoming_exit_colo(void);
> > +void migration_incoming_enable_colo(void);
> > +void migration_incoming_disable_colo(void);
> > +bool migration_incoming_colo_enabled(void);
> >  void *colo_process_incoming_thread(void *opaque);
> >  bool migration_incoming_in_colo_state(void);
> >
> > diff --git a/migration/Makefile.objs b/migration/Makefile.objs
> > index c83ec47ba8..a4f3bafd86 100644
> > --- a/migration/Makefile.objs
> > +++ b/migration/Makefile.objs
> > @@ -1,6 +1,6 @@
> >  common-obj-y += migration.o socket.o fd.o exec.o
> >  common-obj-y += tls.o channel.o savevm.o
> > -common-obj-y += colo-comm.o colo.o colo-failover.o
> > +common-obj-y += colo.o colo-failover.o
> >  common-obj-y += vmstate.o vmstate-types.o page_cache.o
> >  common-obj-y += qemu-file.o global_state.o
> >  common-obj-y += qemu-file-channel.o
> > diff --git a/migration/colo-comm.c b/migration/colo-comm.c
> > deleted file mode 100644
> > index df26e4dfe7..0000000000
> > --- a/migration/colo-comm.c
> > +++ /dev/null
> > @@ -1,76 +0,0 @@
> > -/*
> > - * COarse-grain LOck-stepping Virtual Machines for Non-stop Service
> (COLO)
> > - * (a.k.a. Fault Tolerance or Continuous Replication)
> > - *
> > - * Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
> > - * Copyright (c) 2016 FUJITSU LIMITED
> > - * Copyright (c) 2016 Intel Corporation
> > - *
> > - * This work is licensed under the terms of the GNU GPL, version 2 or
> > - * later. See the COPYING file in the top-level directory.
> > - *
> > - */
> > -
> > -#include "qemu/osdep.h"
> > -#include "migration.h"
> > -#include "migration/colo.h"
> > -#include "migration/vmstate.h"
> > -#include "trace.h"
> > -
> > -typedef struct {
> > -     bool colo_requested;
> > -} COLOInfo;
> > -
> > -static COLOInfo colo_info;
> > -
> > -COLOMode get_colo_mode(void)
> > -{
> > -    if (migration_in_colo_state()) {
> > -        return COLO_MODE_PRIMARY;
> > -    } else if (migration_incoming_in_colo_state()) {
> > -        return COLO_MODE_SECONDARY;
> > -    } else {
> > -        return COLO_MODE_UNKNOWN;
> > -    }
> > -}
> > -
> > -static int colo_info_pre_save(void *opaque)
> > -{
> > -    COLOInfo *s = opaque;
> > -
> > -    s->colo_requested = migrate_colo_enabled();
> > -
> > -    return 0;
> > -}
> > -
> > -static bool colo_info_need(void *opaque)
> > -{
> > -   return migrate_colo_enabled();
> > -}
> > -
> > -static const VMStateDescription colo_state = {
> > -    .name = "COLOState",
> > -    .version_id = 1,
> > -    .minimum_version_id = 1,
> > -    .pre_save = colo_info_pre_save,
> > -    .needed = colo_info_need,
> > -    .fields = (VMStateField[]) {
> > -        VMSTATE_BOOL(colo_requested, COLOInfo),
> > -        VMSTATE_END_OF_LIST()
> > -    },
> > -};
> > -
> > -void colo_info_init(void)
> > -{
> > -    vmstate_register(NULL, 0, &colo_state, &colo_info);
> > -}
> > -
> > -bool migration_incoming_enable_colo(void)
> > -{
> > -    return colo_info.colo_requested;
> > -}
> > -
> > -void migration_incoming_exit_colo(void)
> > -{
> > -    colo_info.colo_requested = false;
> > -}
> > diff --git a/migration/colo.c b/migration/colo.c
> > index e06640c3d6..c083d3696f 100644
> > --- a/migration/colo.c
> > +++ b/migration/colo.c
> > @@ -152,6 +152,17 @@ static void primary_vm_do_failover(void)
> >      qemu_sem_post(&s->colo_exit_sem);
> >  }
> >
> > +COLOMode get_colo_mode(void)
> > +{
> > +    if (migration_in_colo_state()) {
> > +        return COLO_MODE_PRIMARY;
> > +    } else if (migration_incoming_in_colo_state()) {
> > +        return COLO_MODE_SECONDARY;
> > +    } else {
> > +        return COLO_MODE_UNKNOWN;
> > +    }
> > +}
> > +
> >  void colo_do_failover(MigrationState *s)
> >  {
> >      /* Make sure VM stopped while failover happened. */
> > @@ -745,7 +756,7 @@ out:
> >      if (mis->to_src_file) {
> >          qemu_fclose(mis->to_src_file);
> >      }
> > -    migration_incoming_exit_colo();
> > +    migration_incoming_disable_colo();
> >
> >      return NULL;
> >  }
> > diff --git a/migration/migration.c b/migration/migration.c
> > index ddd0c4b988..8dee7dd309 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -277,6 +277,22 @@ int migrate_send_rp_req_pages(MigrationIncomingState
> *mis, const char *rbname,
> >      return migrate_send_rp_message(mis, msg_type, msglen, bufc);
> >  }
> >
> > +static bool migration_colo_enabled;
> > +bool migration_incoming_colo_enabled(void)
> > +{
> > +    return migration_colo_enabled;
> > +}
> > +
> > +void migration_incoming_disable_colo(void)
> > +{
> > +    migration_colo_enabled = false;
> > +}
> > +
> > +void migration_incoming_enable_colo(void)
> > +{
> > +    migration_colo_enabled = true;
> > +}
> > +
> >  void qemu_start_incoming_migration(const char *uri, Error **errp)
> >  {
> >      const char *p;
> > @@ -388,7 +404,7 @@ static void process_incoming_migration_co(void
> *opaque)
> >      }
> >
> >      /* we get COLO info, and know if we are in COLO mode */
> > -    if (!ret && migration_incoming_enable_colo()) {
> > +    if (!ret && migration_incoming_colo_enabled()) {
> >          /* Make sure all file formats flush their mutable metadata */
> >          bdrv_invalidate_cache_all(&local_err);
> >          if (local_err) {
> > @@ -2431,6 +2447,11 @@ static void *migration_thread(void *opaque)
> >          qemu_savevm_send_postcopy_advise(s->to_dst_file);
> >      }
> >
> > +    if (migrate_colo_enabled()) {
> > +        /* Notify migration destination that we enable COLO */
> > +        qemu_savevm_send_colo_enable(s->to_dst_file);
> > +    }
> > +
> >      qemu_savevm_state_setup(s->to_dst_file);
> >
> >      s->setup_time = qemu_clock_get_ms(QEMU_CLOCK_HOST) - setup_start;
> > diff --git a/migration/savevm.c b/migration/savevm.c
> > index e2be02afe4..c43d220220 100644
> > --- a/migration/savevm.c
> > +++ b/migration/savevm.c
> > @@ -55,6 +55,8 @@
> >  #include "io/channel-buffer.h"
> >  #include "io/channel-file.h"
> >  #include "sysemu/replay.h"
> > +#include "migration/colo.h"
> > +
> >
> >  #ifndef ETH_P_RARP
> >  #define ETH_P_RARP 0x8035
> > @@ -81,6 +83,9 @@ enum qemu_vm_cmd {
> >                                        were previously sent during
> >                                        precopy but are dirty. */
> >      MIG_CMD_PACKAGED,          /* Send a wrapped stream within this
> stream */
> > +
> > +    MIG_CMD_ENABLE_COLO, /* Enable COLO */
> > +
> >      MIG_CMD_MAX
> >  };
> >
> > @@ -836,6 +841,12 @@ static void qemu_savevm_command_send(QEMUFile *f,
> >      qemu_fflush(f);
> >  }
> >
> > +void qemu_savevm_send_colo_enable(QEMUFile *f)
> > +{
> > +    trace_savevm_send_colo_enable();
> > +    qemu_savevm_command_send(f, MIG_CMD_ENABLE_COLO, 0, NULL);
> > +}
> > +
> >  void qemu_savevm_send_ping(QEMUFile *f, uint32_t value)
> >  {
> >      uint32_t buf;
> > @@ -1793,6 +1804,12 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState
> *mis)
> >      return ret;
> >  }
> >
> > +static int loadvm_process_enable_colo(MigrationIncomingState *mis)
> > +{
> > +    migration_incoming_enable_colo();
> > +    return 0;
> > +}
> > +
> >  /*
> >   * Process an incoming 'QEMU_VM_COMMAND'
> >   * 0           just a normal return
> > @@ -1866,6 +1883,9 @@ static int loadvm_process_command(QEMUFile *f)
> >
> >      case MIG_CMD_POSTCOPY_RAM_DISCARD:
> >          return loadvm_postcopy_ram_handle_discard(mis, len);
> > +
> > +    case MIG_CMD_ENABLE_COLO:
> > +        return loadvm_process_enable_colo(mis);
> >      }
> >
> >      return 0;
> > diff --git a/migration/savevm.h b/migration/savevm.h
> > index cf4f0d37ca..c6d46b37a2 100644
> > --- a/migration/savevm.h
> > +++ b/migration/savevm.h
> > @@ -52,6 +52,7 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile
> *f, const char *name,
> >                                             uint16_t len,
> >                                             uint64_t *start_list,
> >                                             uint64_t *length_list);
> > +void qemu_savevm_send_colo_enable(QEMUFile *f);
> >
> >  int qemu_loadvm_state(QEMUFile *f);
> >  void qemu_loadvm_state_cleanup(void);
> > diff --git a/migration/trace-events b/migration/trace-events
> > index d6be74b7a7..9295b4cf40 100644
> > --- a/migration/trace-events
> > +++ b/migration/trace-events
> > @@ -34,6 +34,7 @@ savevm_send_open_return_path(void) ""
> >  savevm_send_ping(uint32_t val) "0x%x"
> >  savevm_send_postcopy_listen(void) ""
> >  savevm_send_postcopy_run(void) ""
> > +savevm_send_colo_enable(void) ""
> >  savevm_state_setup(void) ""
> >  savevm_state_header(void) ""
> >  savevm_state_iterate(void) ""
> > diff --git a/vl.c b/vl.c
> > index 12e31d1aa9..a1576d2045 100644
> > --- a/vl.c
> > +++ b/vl.c
> > @@ -4437,8 +4437,6 @@ int main(int argc, char **argv, char **envp)
> >  #endif
> >      }
> >
> > -    colo_info_init();
> > -
> >      if (net_init_clients(&err) < 0) {
> >          error_report_err(err);
> >          exit(1);
> > --
> > 2.17.0
> >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 05/17] COLO: Add block replication into colo process
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 05/17] COLO: Add block replication into colo process Zhang Chen
@ 2018-05-16 15:54   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-16 15:54 UTC (permalink / raw)
  To: Zhang Chen, stefanha
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

* Zhang Chen (zhangckid@gmail.com) wrote:
> Make sure the master starts block replication only after the slave's block
> replication has started.
> 
> Besides, we need to activate the VM's blocks before it goes into
> COLO state.

Stefan: This looks mostly OK to me, how does it look from the block
side?

The only thing I'd like to be convinced of is that
the replication_do_checkpoint_all() is synchronous enough
to know that the destination has received all disk IO
for one checkpoint before the primary starts running the next one.

Also, in the 'colo_do_checkpoint_transaction' the replication is called
near the start; is that the right point or should it be after any of the
device saves (could they spit out one last write?)

Dave

> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> ---
>  migration/colo.c      | 43 +++++++++++++++++++++++++++++++++++++++++++
>  migration/migration.c |  9 +++++++++
>  2 files changed, 52 insertions(+)
> 
> diff --git a/migration/colo.c b/migration/colo.c
> index 081df1835f..e06640c3d6 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -27,6 +27,7 @@
>  #include "replication.h"
>  #include "net/colo-compare.h"
>  #include "net/colo.h"
> +#include "block/block.h"
>  
>  static bool vmstate_loading;
>  static Notifier packets_compare_notifier;
> @@ -56,6 +57,7 @@ static void secondary_vm_do_failover(void)
>  {
>      int old_state;
>      MigrationIncomingState *mis = migration_incoming_get_current();
> +    Error *local_err = NULL;
>  
>      /* Can not do failover during the process of VM's loading VMstate, Or
>       * it will break the secondary VM.
> @@ -73,6 +75,11 @@ static void secondary_vm_do_failover(void)
>      migrate_set_state(&mis->state, MIGRATION_STATUS_COLO,
>                        MIGRATION_STATUS_COMPLETED);
>  
> +    replication_stop_all(true, &local_err);
> +    if (local_err) {
> +        error_report_err(local_err);
> +    }
> +
>      if (!autostart) {
>          error_report("\"-S\" qemu option will be ignored in secondary side");
>          /* recover runstate to normal migration finish state */
> @@ -110,6 +117,7 @@ static void primary_vm_do_failover(void)
>  {
>      MigrationState *s = migrate_get_current();
>      int old_state;
> +    Error *local_err = NULL;
>  
>      migrate_set_state(&s->state, MIGRATION_STATUS_COLO,
>                        MIGRATION_STATUS_COMPLETED);
> @@ -133,6 +141,13 @@ static void primary_vm_do_failover(void)
>                       FailoverStatus_str(old_state));
>          return;
>      }
> +
> +    replication_stop_all(true, &local_err);
> +    if (local_err) {
> +        error_report_err(local_err);
> +        local_err = NULL;
> +    }
> +
>      /* Notify COLO thread that failover work is finished */
>      qemu_sem_post(&s->colo_exit_sem);
>  }
> @@ -356,6 +371,11 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
>      qemu_savevm_state_header(fb);
>      qemu_savevm_state_setup(fb);
>      qemu_mutex_lock_iothread();
> +    replication_do_checkpoint_all(&local_err);
> +    if (local_err) {
> +        qemu_mutex_unlock_iothread();
> +        goto out;
> +    }
>      qemu_savevm_state_complete_precopy(fb, false, false);
>      qemu_mutex_unlock_iothread();
>  
> @@ -446,6 +466,12 @@ static void colo_process_checkpoint(MigrationState *s)
>      object_unref(OBJECT(bioc));
>  
>      qemu_mutex_lock_iothread();
> +    replication_start_all(REPLICATION_MODE_PRIMARY, &local_err);
> +    if (local_err) {
> +        qemu_mutex_unlock_iothread();
> +        goto out;
> +    }
> +
>      vm_start();
>      qemu_mutex_unlock_iothread();
>      trace_colo_vm_state_change("stop", "run");
> @@ -585,6 +611,11 @@ void *colo_process_incoming_thread(void *opaque)
>      object_unref(OBJECT(bioc));
>  
>      qemu_mutex_lock_iothread();
> +    replication_start_all(REPLICATION_MODE_SECONDARY, &local_err);
> +    if (local_err) {
> +        qemu_mutex_unlock_iothread();
> +        goto out;
> +    }
>      vm_start();
>      trace_colo_vm_state_change("stop", "run");
>      qemu_mutex_unlock_iothread();
> @@ -665,6 +696,18 @@ void *colo_process_incoming_thread(void *opaque)
>              goto out;
>          }
>  
> +        replication_get_error_all(&local_err);
> +        if (local_err) {
> +            qemu_mutex_unlock_iothread();
> +            goto out;
> +        }
> +        /* discard colo disk buffer */
> +        replication_do_checkpoint_all(&local_err);
> +        if (local_err) {
> +            qemu_mutex_unlock_iothread();
> +            goto out;
> +        }
> +
>          vmstate_loading = false;
>          vm_start();
>          trace_colo_vm_state_change("stop", "run");
> diff --git a/migration/migration.c b/migration/migration.c
> index bca187275a..ddd0c4b988 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -357,6 +357,7 @@ static void process_incoming_migration_co(void *opaque)
>      MigrationIncomingState *mis = migration_incoming_get_current();
>      PostcopyState ps;
>      int ret;
> +    Error *local_err = NULL;
>  
>      assert(mis->from_src_file);
>      mis->largest_page_size = qemu_ram_pagesize_largest();
> @@ -388,6 +389,14 @@ static void process_incoming_migration_co(void *opaque)
>  
>      /* we get COLO info, and know if we are in COLO mode */
>      if (!ret && migration_incoming_enable_colo()) {
> +        /* Make sure all file formats flush their mutable metadata */
> +        bdrv_invalidate_cache_all(&local_err);
> +        if (local_err) {
> +            migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
> +                    MIGRATION_STATUS_FAILED);
> +            error_report_err(local_err);
> +            exit(EXIT_FAILURE);
> +        }
>          mis->migration_incoming_co = qemu_coroutine_self();
>          qemu_thread_create(&mis->colo_incoming_thread, "COLO incoming",
>               colo_process_incoming_thread, mis, QEMU_THREAD_JOINABLE);
> -- 
> 2.17.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 16/17] COLO: notify net filters about checkpoint/failover event
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 16/17] COLO: notify net filters about checkpoint/failover event Zhang Chen
@ 2018-05-17  9:48   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-17  9:48 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang

* Zhang Chen (zhangckid@gmail.com) wrote:
> From: zhanghailiang <zhang.zhanghailiang@huawei.com>
> 
> Notify all net filters about the checkpoint and failover event.
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/colo.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/migration/colo.c b/migration/colo.c
> index 3dfd84d897..15463e2823 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -88,6 +88,11 @@ static void secondary_vm_do_failover(void)
>      if (local_err) {
>          error_report_err(local_err);
>      }
> +    /* Notify all filters of all NIC to do checkpoint */
> +    colo_notify_filters_event(COLO_EVENT_FAILOVER, &local_err);
> +    if (local_err) {
> +        error_report_err(local_err);
> +    }
>  
>      if (!autostart) {
>          error_report("\"-S\" qemu option will be ignored in secondary side");
> @@ -799,6 +804,13 @@ void *colo_process_incoming_thread(void *opaque)
>              goto out;
>          }
>  
> +        /* Notify all filters of all NIC to do checkpoint */
> +        colo_notify_filters_event(COLO_EVENT_CHECKPOINT, &local_err);
> +        if (local_err) {
> +            qemu_mutex_unlock_iothread();
> +            goto out;
> +        }
> +
>          vmstate_loading = false;
>          vm_start();
>          trace_colo_vm_state_change("stop", "run");
> -- 
> 2.17.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 17/17] COLO: quick failover process by kick COLO thread
  2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 17/17] COLO: quick failover process by kick COLO thread Zhang Chen
@ 2018-05-17  9:53   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-05-17  9:53 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang

* Zhang Chen (zhangckid@gmail.com) wrote:
> From: zhanghailiang <zhang.zhanghailiang@huawei.com>
> 
> The COLO thread may be sleeping at qemu_sem_wait(&s->colo_checkpoint_sem)
> when failover begins; it's better to wake it up to speed up
> the process.
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/colo.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/migration/colo.c b/migration/colo.c
> index 15463e2823..16def4865c 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -135,6 +135,11 @@ static void primary_vm_do_failover(void)
>  
>      migrate_set_state(&s->state, MIGRATION_STATUS_COLO,
>                        MIGRATION_STATUS_COMPLETED);
> +    /*
> +     * kick COLO thread which might wait at
> +     * qemu_sem_wait(&s->colo_checkpoint_sem).
> +     */
> +    colo_checkpoint_notify(migrate_get_current());
>  
>      /*
>       * Wake up COLO thread which may blocked in recv() or send(),
> @@ -552,6 +557,9 @@ static void colo_process_checkpoint(MigrationState *s)
>  
>          qemu_sem_wait(&s->colo_checkpoint_sem);
>  
> +        if (s->state != MIGRATION_STATUS_COLO) {
> +            goto out;
> +        }
>          ret = colo_do_checkpoint_transaction(s, bioc, fb);
>          if (ret < 0) {
>              goto out;
> -- 
> 2.17.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [PATCH V7 RESEND 09/17] COLO: Flush memory data from ram cache
  2018-05-15 14:44   ` Dr. David Alan Gilbert
@ 2018-05-20 16:09     ` Zhang Chen
  0 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-20 16:09 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

On Tue, May 15, 2018 at 10:44 PM, Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:

> * Zhang Chen (zhangckid@gmail.com) wrote:
> > During the time of VM's running, PVM may dirty some pages, we will
> transfer
> > PVM's dirty pages to SVM and store them into SVM's RAM cache at next
> checkpoint
> > time. So, the content of SVM's RAM cache will always be same with PVM's
> memory
> > after checkpoint.
> >
> > Instead of flushing all content of PVM's RAM cache into SVM's MEMORY,
> > we do this in a more efficient way:
> > Only flush any page that dirtied by PVM since last checkpoint.
> > In this way, we can ensure SVM's memory same with PVM's.
> >
> > Besides, we must ensure flush RAM cache before load device state.
> >
> > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > ---
> >  migration/ram.c        | 39 +++++++++++++++++++++++++++++++++++++++
> >  migration/trace-events |  2 ++
> >  2 files changed, 41 insertions(+)
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index e35dfee06e..4235a8f24d 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -3031,6 +3031,40 @@ static bool postcopy_is_running(void)
> >      return ps >= POSTCOPY_INCOMING_LISTENING && ps <
> POSTCOPY_INCOMING_END;
> >  }
> >
> > +/*
> > + * Flush content of RAM cache into SVM's memory.
> > + * Only flush the pages that be dirtied by PVM or SVM or both.
> > + */
> > +static void colo_flush_ram_cache(void)
> > +{
> > +    RAMBlock *block = NULL;
> > +    void *dst_host;
> > +    void *src_host;
> > +    unsigned long offset = 0;
> > +
> > +    trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
> > +    rcu_read_lock();
> > +    block = QLIST_FIRST_RCU(&ram_list.blocks);
> > +
> > +    while (block) {
> > +        offset = migration_bitmap_find_dirty(ram_state, block, offset);
> > +        migration_bitmap_clear_dirty(ram_state, block, offset);
>
> That looks suspicious to me; shouldn't that be inside the else block
> below?   If the find_dirty returns a bit after or equal to used_length
> (i.e. there's nothing dirty in this block), then you don't want to clear
> that bit yet because it really means there's a dirty page at the start
> of the next block?
>
>
Makes sense. I will move it into the 'else' block below.
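
For reference, a rough sketch of the reworked loop I have in mind (untested,
the only change is that the clear moves into the else branch):

    while (block) {
        offset = migration_bitmap_find_dirty(ram_state, block, offset);

        if (offset << TARGET_PAGE_BITS >= block->used_length) {
            /* Nothing dirty left in this block; don't clear the bit we
             * found, it may belong to the start of the next block. */
            offset = 0;
            block = QLIST_NEXT_RCU(block, next);
        } else {
            /* The bit belongs to this block: clear it and copy the page
             * from the colo_cache into SVM's memory. */
            migration_bitmap_clear_dirty(ram_state, block, offset);
            dst_host = block->host + (offset << TARGET_PAGE_BITS);
            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
        }
    }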

Thanks
Zhang Chen



> Dave
>
> > +        if (offset << TARGET_PAGE_BITS >= block->used_length) {
> > +            offset = 0;
> > +            block = QLIST_NEXT_RCU(block, next);
> > +        } else {
> > +            dst_host = block->host + (offset << TARGET_PAGE_BITS);
> > +            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
> > +            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
> > +        }
> > +    }
> > +
> > +    rcu_read_unlock();
> > +    trace_colo_flush_ram_cache_end();
> > +    assert(ram_state->migration_dirty_pages == 0);
> > +}
> > +
> >  static int ram_load(QEMUFile *f, void *opaque, int version_id)
> >  {
> >      int flags = 0, ret = 0, invalid_flags = 0;
> > @@ -3043,6 +3077,7 @@ static int ram_load(QEMUFile *f, void *opaque, int
> version_id)
> >      bool postcopy_running = postcopy_is_running();
> >      /* ADVISE is earlier, it shows the source has the postcopy
> capability on */
> >      bool postcopy_advised = postcopy_is_advised();
> > +    bool need_flush = false;
> >
> >      seq_iter++;
> >
> > @@ -3218,6 +3253,10 @@ static int ram_load(QEMUFile *f, void *opaque,
> int version_id)
> >      ret |= wait_for_decompress_done();
> >      rcu_read_unlock();
> >      trace_ram_load_complete(ret, seq_iter);
> > +
> > +    if (!ret  && migration_incoming_in_colo_state() && need_flush) {
> > +        colo_flush_ram_cache();
> > +    }
> >      return ret;
> >  }
> >
> > diff --git a/migration/trace-events b/migration/trace-events
> > index 9295b4cf40..8e2f9749e0 100644
> > --- a/migration/trace-events
> > +++ b/migration/trace-events
> > @@ -78,6 +78,8 @@ ram_load_postcopy_loop(uint64_t addr, int flags) "@%"
> PRIx64 " %x"
> >  ram_postcopy_send_discard_bitmap(void) ""
> >  ram_save_page(const char *rbname, uint64_t offset, void *host) "%s:
> offset: 0x%" PRIx64 " host: %p"
> >  ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s:
> start: 0x%zx len: 0x%zx"
> > +colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
> > +colo_flush_ram_cache_end(void) ""
> >
> >  # migration/migration.c
> >  await_return_path_close_on_source_close(void) ""
> > --
> > 2.17.0
> >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>


* Re: [Qemu-devel] [PATCH V7 RESEND 07/17] COLO: Load dirty pages into SVM's RAM cache firstly
  2018-05-15 16:55   ` Dr. David Alan Gilbert
@ 2018-05-20 18:30     ` Zhang Chen
  0 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-05-20 18:30 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

On Wed, May 16, 2018 at 12:55 AM, Dr. David Alan Gilbert <
dgilbert@redhat.com> wrote:

> * Zhang Chen (zhangckid@gmail.com) wrote:
> > We should not load PVM's state directly into SVM, because there maybe
> some
> > errors happen when SVM is receving data, which will break SVM.
> >
> > We need to ensure receving all data before load the state into SVM. We
> use
> > an extra memory to cache these data (PVM's ram). The ram cache in
> secondary side
> > is initially the same as SVM/PVM's memory. And in the process of
> checkpoint,
> > we cache the dirty pages of PVM into this ram cache firstly, so this ram
> cache
> > always the same as PVM's memory at every checkpoint, then we flush this
> cached ram
> > to SVM after we receive all PVM's state.
> >
> > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> > Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> > ---
> >  include/exec/ram_addr.h |  1 +
> >  migration/migration.c   |  2 +
> >  migration/ram.c         | 99 +++++++++++++++++++++++++++++++++++++++--
> >  migration/ram.h         |  4 ++
> >  migration/savevm.c      |  2 +-
> >  5 files changed, 104 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> > index cf2446a176..51ec153a57 100644
> > --- a/include/exec/ram_addr.h
> > +++ b/include/exec/ram_addr.h
> > @@ -27,6 +27,7 @@ struct RAMBlock {
> >      struct rcu_head rcu;
> >      struct MemoryRegion *mr;
> >      uint8_t *host;
> > +    uint8_t *colo_cache; /* For colo, VM's ram cache */
> >      ram_addr_t offset;
> >      ram_addr_t used_length;
> >      ram_addr_t max_length;
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 8dee7dd309..cfc1b958b9 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -421,6 +421,8 @@ static void process_incoming_migration_co(void
> *opaque)
> >
> >          /* Wait checkpoint incoming thread exit before free resource */
> >          qemu_thread_join(&mis->colo_incoming_thread);
> > +        /* We hold the global iothread lock, so it is safe here */
> > +        colo_release_ram_cache();
> >      }
> >
> >      if (ret < 0) {
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 912810c18e..7ca845f8a9 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -2520,6 +2520,20 @@ static inline void *host_from_ram_block_offset(RAMBlock
> *block,
> >      return block->host + offset;
> >  }
> >
> > +static inline void *colo_cache_from_block_offset(RAMBlock *block,
> > +                                                 ram_addr_t offset)
> > +{
> > +    if (!offset_in_ramblock(block, offset)) {
> > +        return NULL;
> > +    }
> > +    if (!block->colo_cache) {
> > +        error_report("%s: colo_cache is NULL in block :%s",
> > +                     __func__, block->idstr);
> > +        return NULL;
> > +    }
> > +    return block->colo_cache + offset;
> > +}
> > +
> >  /**
> >   * ram_handle_compressed: handle the zero page case
> >   *
> > @@ -2724,6 +2738,57 @@ static void decompress_data_with_multi_threads(QEMUFile
> *f,
> >      qemu_mutex_unlock(&decomp_done_lock);
> >  }
> >
> > +/*
> > + * colo cache: this is for secondary VM, we cache the whole
> > + * memory of the secondary VM, it is need to hold the global lock
> > + * to call this helper.
> > + */
> > +int colo_init_ram_cache(void)
> > +{
> > +    RAMBlock *block;
> > +
> > +    rcu_read_lock();
> > +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> > +        block->colo_cache = qemu_anon_ram_alloc(block->used_length,
> > +                                                NULL,
> > +                                                false);
> > +        if (!block->colo_cache) {
> > +            error_report("%s: Can't alloc memory for COLO cache of
> block %s,"
> > +                         "size 0x" RAM_ADDR_FMT, __func__, block->idstr,
> > +                         block->used_length);
> > +            goto out_locked;
> > +        }
> > +    }
> > +    rcu_read_unlock();
> > +    return 0;
> > +
> > +out_locked:
> > +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> > +        if (block->colo_cache) {
> > +            qemu_anon_ram_free(block->colo_cache, block->used_length);
> > +            block->colo_cache = NULL;
> > +        }
> > +    }
> > +
> > +    rcu_read_unlock();
> > +    return -errno;
> > +}
> > +
> > +/* It is need to hold the global lock to call this helper */
> > +void colo_release_ram_cache(void)
> > +{
> > +    RAMBlock *block;
> > +
> > +    rcu_read_lock();
> > +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> > +        if (block->colo_cache) {
> > +            qemu_anon_ram_free(block->colo_cache, block->used_length);
> > +            block->colo_cache = NULL;
> > +        }
> > +    }
> > +    rcu_read_unlock();
> > +}
> > +
> >  /**
> >   * ram_load_setup: Setup RAM for migration incoming side
> >   *
> > @@ -2740,6 +2805,7 @@ static int ram_load_setup(QEMUFile *f, void
> *opaque)
> >
> >      xbzrle_load_setup();
> >      ramblock_recv_map_init();
> > +
> >      return 0;
> >  }
> >
> > @@ -2753,6 +2819,7 @@ static int ram_load_cleanup(void *opaque)
> >          g_free(rb->receivedmap);
> >          rb->receivedmap = NULL;
> >      }
> > +
> >      return 0;
> >  }
> >
> > @@ -2966,7 +3033,7 @@ static int ram_load(QEMUFile *f, void *opaque, int
> version_id)
> >
> >      while (!postcopy_running && !ret && !(flags & RAM_SAVE_FLAG_EOS)) {
> >          ram_addr_t addr, total_ram_bytes;
> > -        void *host = NULL;
> > +        void *host = NULL, *host_bak = NULL;
> >          uint8_t ch;
> >
> >          addr = qemu_get_be64(f);
> > @@ -2986,13 +3053,36 @@ static int ram_load(QEMUFile *f, void *opaque,
> int version_id)
> >                       RAM_SAVE_FLAG_COMPRESS_PAGE |
> RAM_SAVE_FLAG_XBZRLE)) {
> >              RAMBlock *block = ram_block_from_stream(f, flags);
> >
> > -            host = host_from_ram_block_offset(block, addr);
> > +             /*
> > +             * After going into COLO, we should load the Page into
> colo_cache
> > +             * NOTE: We need to keep a copy of SVM's ram in colo_cache.
> > +             * Privously, we copied all these memory in preparing stage
> of COLO
> > +             * while we need to stop VM, which is a time-consuming
> process.
> > +             * Here we optimize it by a trick, back-up every page while
> in
> > +             * migration process while COLO is enabled, though it
> affects the
> > +             * speed of the migration, but it obviously reduce the
> downtime of
> > +             * back-up all SVM'S memory in COLO preparing stage.
> > +             */
> > +            if (migration_incoming_in_colo_state()) {
> > +                host = colo_cache_from_block_offset(block, addr);
> > +                /* After goes into COLO state, don't backup it any more
> */
> > +                if (!migration_incoming_in_colo_state()) {
>
> I don't understand how we can reach this nested 'if';
> colo_cache_from_block_offset is short and simple; so how can
> migration_incoming_in_colo_state() be both true and false?
>
> I think this is trying to do it for when COLO is enabled but when
> receiving the first checkpoint you want to take a copy; but I don't
> think that's what the 'if' is doing.
>

I'm also confused about this part.
I checked the older versions of this code, but it looks the same.
How do you think this should be done?

Or maybe the original authors know some detail?
CC'ing Lizhijian and Zhanghailiang.
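
Just to make the discussion concrete, is something like the following closer
to what was intended? (Only a rough sketch, untested; here
migration_incoming_colo_enabled() stands for "COLO was enabled for this
incoming migration" and may need to be added if no such predicate exists yet.)

    host = host_from_ram_block_offset(block, addr);
    /*
     * While COLO is enabled but we are still in the normal migration
     * stage, load the page into SVM's memory as usual and additionally
     * back it up into the colo_cache.  Once in COLO state, put incoming
     * pages into the colo_cache only; they get flushed into SVM's
     * memory at checkpoint time.
     */
    if (migration_incoming_colo_enabled()) {
        if (migration_incoming_in_colo_state()) {
            /* In COLO state: write only into the cache. */
            host = colo_cache_from_block_offset(block, addr);
        } else {
            /* Before COLO state: also remember where to back up this page. */
            host_bak = colo_cache_from_block_offset(block, addr);
        }
    }
    if (!host) {
        error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
        ret = -EINVAL;
        break;
    }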

Thanks
Zhang Chen




>
> Dave
>
> > +                    host_bak = host;
> > +                }
> > +            }
> > +            if (!migration_incoming_in_colo_state()) {
> > +                host = host_from_ram_block_offset(block, addr);
> > +            }
> >              if (!host) {
> >                  error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
> >                  ret = -EINVAL;
> >                  break;
> >              }
> > -            ramblock_recv_bitmap_set(block, host);
> > +
> > +            if (!migration_incoming_in_colo_state()) {
> > +                ramblock_recv_bitmap_set(block, host);
> > +            }
> > +
> >              trace_ram_load_loop(block->idstr, (uint64_t)addr, flags,
> host);
> >          }
> >
> > @@ -3087,6 +3177,9 @@ static int ram_load(QEMUFile *f, void *opaque, int
> version_id)
> >          if (!ret) {
> >              ret = qemu_file_get_error(f);
> >          }
> > +        if (!ret && host_bak && host) {
> > +            memcpy(host_bak, host, TARGET_PAGE_SIZE);
> > +        }
> >      }
> >
> >      ret |= wait_for_decompress_done();
> > diff --git a/migration/ram.h b/migration/ram.h
> > index 5030be110a..66e9b86ff0 100644
> > --- a/migration/ram.h
> > +++ b/migration/ram.h
> > @@ -64,4 +64,8 @@ bool ramblock_recv_bitmap_test_byte_offset(RAMBlock
> *rb, uint64_t byte_offset);
> >  void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
> >  void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr,
> size_t nr);
> >
> > +/* ram cache */
> > +int colo_init_ram_cache(void);
> > +void colo_release_ram_cache(void);
> > +
> >  #endif
> > diff --git a/migration/savevm.c b/migration/savevm.c
> > index c43d220220..ec0bff09ce 100644
> > --- a/migration/savevm.c
> > +++ b/migration/savevm.c
> > @@ -1807,7 +1807,7 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState
> *mis)
> >  static int loadvm_process_enable_colo(MigrationIncomingState *mis)
> >  {
> >      migration_incoming_enable_colo();
> > -    return 0;
> > +    return colo_init_ram_cache();
> >  }
> >
> >  /*
> > --
> > 2.17.0
> >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>


* Re: [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm
  2018-05-15 18:56   ` Dr. David Alan Gilbert
@ 2018-06-03  5:10     ` Zhang Chen
  2018-06-19 19:00       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 36+ messages in thread
From: Zhang Chen @ 2018-06-03  5:10 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

On Wed, May 16, 2018 at 2:56 AM, Dr. David Alan Gilbert <dgilbert@redhat.com
> wrote:

> * Zhang Chen (zhangckid@gmail.com) wrote:
> > From: zhanghailiang <zhang.zhanghailiang@huawei.com>
> >
> > There are several stages during loadvm/savevm process. In different
> stage,
> > migration incoming processes different types of sections.
> > We want to control these stages more accuracy, it will benefit COLO
> > performance, we don't have to save type of QEMU_VM_SECTION_START
> > sections everytime while do checkpoint, besides, we want to separate
> > the process of saving/loading memory and devices state.
> >
> > So we add three new helper functions: qemu_load_device_state() and
> > qemu_savevm_live_state() to achieve different process during migration.
> >
> > Besides, we make qemu_loadvm_state_main() and qemu_save_device_state()
> > public, and simplify the codes of qemu_save_device_state() by calling the
> > wrapper qemu_savevm_state_header().
> >
> > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> > Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > ---
> >  migration/colo.c   | 36 ++++++++++++++++++++++++++++--------
> >  migration/savevm.c | 35 ++++++++++++++++++++++++++++-------
> >  migration/savevm.h |  4 ++++
> >  3 files changed, 60 insertions(+), 15 deletions(-)
> >
> > diff --git a/migration/colo.c b/migration/colo.c
> > index cdff0a2490..5b055f79f1 100644
> > --- a/migration/colo.c
> > +++ b/migration/colo.c
> > @@ -30,6 +30,7 @@
> >  #include "block/block.h"
> >  #include "qapi/qapi-events-migration.h"
> >  #include "qapi/qmp/qerror.h"
> > +#include "sysemu/cpus.h"
> >
> >  static bool vmstate_loading;
> >  static Notifier packets_compare_notifier;
> > @@ -414,23 +415,30 @@ static int colo_do_checkpoint_transaction(MigrationState
> *s,
> >
> >      /* Disable block migration */
> >      migrate_set_block_enabled(false, &local_err);
> > -    qemu_savevm_state_header(fb);
> > -    qemu_savevm_state_setup(fb);
> >      qemu_mutex_lock_iothread();
> >      replication_do_checkpoint_all(&local_err);
> >      if (local_err) {
> >          qemu_mutex_unlock_iothread();
> >          goto out;
> >      }
> > -    qemu_savevm_state_complete_precopy(fb, false, false);
> > -    qemu_mutex_unlock_iothread();
> > -
> > -    qemu_fflush(fb);
> >
> >      colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND,
> &local_err);
> >      if (local_err) {
> >          goto out;
> >      }
> > +    /*
> > +     * Only save VM's live state, which not including device state.
> > +     * TODO: We may need a timeout mechanism to prevent COLO process
> > +     * to be blocked here.
> > +     */
>
> I guess that's the downside to transmitting it directly than into the
> buffer;
> Peter Xu's OOB command system would let you kill the connection - and
> that's something I think COLO should use.
> Still the change saves you having that huge outgoing buffer on the
> source side and lets you start sending the checkpoint sooner, which
> means the pause time should be smaller.
>

Yes, you are right.
But I think this is a performance optimization, and this series focuses on
enabling COLO.
I will do this work in the future.


>
> > +    qemu_savevm_live_state(s->to_dst_file);
>
> Does this actually need to be inside of the qemu_mutex_lock_iothread?
> I'm pretty sure the device_state needs to be, but I'm not sure the
> live_state needs to.
>

I have checked the code; qemu_savevm_live_state() does not need to be inside
the qemu_mutex_lock_iothread() section.
I will move it out of the locked region in the next version.
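
Roughly, I am thinking of something like this (only a sketch, untested; it
assumes buffering the device state before sending the RAM sections is fine,
since the buffer is only flushed to the destination later anyway):

    qemu_mutex_lock_iothread();
    /* Only the device state really needs the iothread lock;
     * save it into the buffer while we hold it. */
    ret = qemu_save_device_state(fb);
    qemu_mutex_unlock_iothread();
    if (ret < 0) {
        goto out;
    }

    /*
     * Save the VM's live state (RAM) outside the lock. It is written
     * directly to s->to_dst_file; the buffered device state is only
     * sent later, together with COLO_MESSAGE_VMSTATE_SIZE, so the
     * on-the-wire order seen by the secondary does not change.
     */
    qemu_savevm_live_state(s->to_dst_file);

    qemu_fflush(fb);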



>
> > +    /* Note: device state is saved into buffer */
> > +    ret = qemu_save_device_state(fb);
> > +
> > +    qemu_mutex_unlock_iothread();
> > +
> > +    qemu_fflush(fb);
> > +
> >      /*
> >       * We need the size of the VMstate data in Secondary side,
> >       * With which we can decide how much data should be read.
> > @@ -643,6 +651,7 @@ void *colo_process_incoming_thread(void *opaque)
> >      uint64_t total_size;
> >      uint64_t value;
> >      Error *local_err = NULL;
> > +    int ret;
> >
> >      qemu_sem_init(&mis->colo_incoming_sem, 0);
> >
> > @@ -715,6 +724,16 @@ void *colo_process_incoming_thread(void *opaque)
> >              goto out;
> >          }
> >
> > +        qemu_mutex_lock_iothread();
> > +        cpu_synchronize_all_pre_loadvm();
> > +        ret = qemu_loadvm_state_main(mis->from_src_file, mis);
> > +        qemu_mutex_unlock_iothread();
> > +
> > +        if (ret < 0) {
> > +            error_report("Load VM's live state (ram) error");
> > +            goto out;
> > +        }
> > +
> >          value = colo_receive_message_value(mis->from_src_file,
> >                                   COLO_MESSAGE_VMSTATE_SIZE, &local_err);
> >          if (local_err) {
> > @@ -748,8 +767,9 @@ void *colo_process_incoming_thread(void *opaque)
> >          qemu_mutex_lock_iothread();
> >          qemu_system_reset(SHUTDOWN_CAUSE_NONE);
>
> Is the reset safe? Are you sure it doesn't change the ram you've just
> loaded?
>

Yes, it is safe. In my tests the secondary node runs well.


>
> >          vmstate_loading = true;
> > -        if (qemu_loadvm_state(fb) < 0) {
> > -            error_report("COLO: loadvm failed");
> > +        ret = qemu_load_device_state(fb);
> > +        if (ret < 0) {
> > +            error_report("COLO: load device state failed");
> >              qemu_mutex_unlock_iothread();
> >              goto out;
> >          }
> > diff --git a/migration/savevm.c b/migration/savevm.c
> > index ec0bff09ce..0f61239429 100644
> > --- a/migration/savevm.c
> > +++ b/migration/savevm.c
> > @@ -1332,13 +1332,20 @@ done:
> >      return ret;
> >  }
> >
> > -static int qemu_save_device_state(QEMUFile *f)
> > +void qemu_savevm_live_state(QEMUFile *f)
> >  {
> > -    SaveStateEntry *se;
> > +    /* save QEMU_VM_SECTION_END section */
> > +    qemu_savevm_state_complete_precopy(f, true, false);
> > +    qemu_put_byte(f, QEMU_VM_EOF);
> > +}
> >
> > -    qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
> > -    qemu_put_be32(f, QEMU_VM_FILE_VERSION);
> > +int qemu_save_device_state(QEMUFile *f)
> > +{
> > +    SaveStateEntry *se;
> >
> > +    if (!migration_in_colo_state()) {
> > +        qemu_savevm_state_header(f);
> > +    }
> >      cpu_synchronize_all_states();
>
> So this changes qemu_save_device_state to use savevm_state_header
> which feels reasonable, but that includes the 'configuration'
> section;  do we want that?  Is that OK for Xen's use in
> qmp_xen_save_devices_state?
>

If I understand correctly, COLO on Xen does not currently use the QEMU
migration code for this job;
COLO on Xen has an independent COLO framework that does the same job (using
the Xen migration code),
so I think it is OK for Xen.


Thanks for your review and continued support.
Zhang Chen


>
> >      QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
> > @@ -1394,8 +1401,6 @@ enum LoadVMExitCodes {
> >      LOADVM_QUIT     =  1,
> >  };
> >
> > -static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState
> *mis);
> > -
> >  /* ------ incoming postcopy messages ------ */
> >  /* 'advise' arrives before any transfers just to tell us that a postcopy
> >   * *might* happen - it might be skipped if precopy transferred
> everything
> > @@ -2075,7 +2080,7 @@ void qemu_loadvm_state_cleanup(void)
> >      }
> >  }
> >
> > -static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState
> *mis)
> > +int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
> >  {
> >      uint8_t section_type;
> >      int ret = 0;
> > @@ -2229,6 +2234,22 @@ int qemu_loadvm_state(QEMUFile *f)
> >      return ret;
> >  }
> >
> > +int qemu_load_device_state(QEMUFile *f)
> > +{
> > +    MigrationIncomingState *mis = migration_incoming_get_current();
> > +    int ret;
> > +
> > +    /* Load QEMU_VM_SECTION_FULL section */
> > +    ret = qemu_loadvm_state_main(f, mis);
> > +    if (ret < 0) {
> > +        error_report("Failed to load device state: %d", ret);
> > +        return ret;
> > +    }
> > +
> > +    cpu_synchronize_all_post_init();
> > +    return 0;
> > +}
> > +
> >  int save_snapshot(const char *name, Error **errp)
> >  {
> >      BlockDriverState *bs, *bs1;
> > diff --git a/migration/savevm.h b/migration/savevm.h
> > index c6d46b37a2..cf7935dd68 100644
> > --- a/migration/savevm.h
> > +++ b/migration/savevm.h
> > @@ -53,8 +53,12 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile
> *f, const char *name,
> >                                             uint64_t *start_list,
> >                                             uint64_t *length_list);
> >  void qemu_savevm_send_colo_enable(QEMUFile *f);
> > +void qemu_savevm_live_state(QEMUFile *f);
> > +int qemu_save_device_state(QEMUFile *f);
> >
> >  int qemu_loadvm_state(QEMUFile *f);
> >  void qemu_loadvm_state_cleanup(void);
> > +int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
> > +int qemu_load_device_state(QEMUFile *f);
> >
> >  #endif
> > --
> > 2.17.0
>
> Dave
>
> >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>


* Re: [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm
  2018-06-03  5:10     ` Zhang Chen
@ 2018-06-19 19:00       ` Dr. David Alan Gilbert
  2018-06-22  3:45         ` Zhang Chen
  0 siblings, 1 reply; 36+ messages in thread
From: Dr. David Alan Gilbert @ 2018-06-19 19:00 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

* Zhang Chen (zhangckid@gmail.com) wrote:
> On Wed, May 16, 2018 at 2:56 AM, Dr. David Alan Gilbert <dgilbert@redhat.com
> > wrote:
> 
> > * Zhang Chen (zhangckid@gmail.com) wrote:
> > > From: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > >
> > > There are several stages during loadvm/savevm process. In different
> > stage,
> > > migration incoming processes different types of sections.
> > > We want to control these stages more accuracy, it will benefit COLO
> > > performance, we don't have to save type of QEMU_VM_SECTION_START
> > > sections everytime while do checkpoint, besides, we want to separate
> > > the process of saving/loading memory and devices state.
> > >
> > > So we add three new helper functions: qemu_load_device_state() and
> > > qemu_savevm_live_state() to achieve different process during migration.
> > >
> > > Besides, we make qemu_loadvm_state_main() and qemu_save_device_state()
> > > public, and simplify the codes of qemu_save_device_state() by calling the
> > > wrapper qemu_savevm_state_header().
> > >
> > > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > > Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> > > Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> > > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > > ---
> > >  migration/colo.c   | 36 ++++++++++++++++++++++++++++--------
> > >  migration/savevm.c | 35 ++++++++++++++++++++++++++++-------
> > >  migration/savevm.h |  4 ++++
> > >  3 files changed, 60 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/migration/colo.c b/migration/colo.c
> > > index cdff0a2490..5b055f79f1 100644
> > > --- a/migration/colo.c
> > > +++ b/migration/colo.c
> > > @@ -30,6 +30,7 @@
> > >  #include "block/block.h"
> > >  #include "qapi/qapi-events-migration.h"
> > >  #include "qapi/qmp/qerror.h"
> > > +#include "sysemu/cpus.h"
> > >
> > >  static bool vmstate_loading;
> > >  static Notifier packets_compare_notifier;
> > > @@ -414,23 +415,30 @@ static int colo_do_checkpoint_transaction(MigrationState
> > *s,
> > >
> > >      /* Disable block migration */
> > >      migrate_set_block_enabled(false, &local_err);
> > > -    qemu_savevm_state_header(fb);
> > > -    qemu_savevm_state_setup(fb);
> > >      qemu_mutex_lock_iothread();
> > >      replication_do_checkpoint_all(&local_err);
> > >      if (local_err) {
> > >          qemu_mutex_unlock_iothread();
> > >          goto out;
> > >      }
> > > -    qemu_savevm_state_complete_precopy(fb, false, false);
> > > -    qemu_mutex_unlock_iothread();
> > > -
> > > -    qemu_fflush(fb);
> > >
> > >      colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND,
> > &local_err);
> > >      if (local_err) {
> > >          goto out;
> > >      }
> > > +    /*
> > > +     * Only save VM's live state, which not including device state.
> > > +     * TODO: We may need a timeout mechanism to prevent COLO process
> > > +     * to be blocked here.
> > > +     */
> >
> > I guess that's the downside to transmitting it directly than into the
> > buffer;
> > Peter Xu's OOB command system would let you kill the connection - and
> > that's something I think COLO should use.
> > Still the change saves you having that huge outgoing buffer on the
> > source side and lets you start sending the checkpoint sooner, which
> > means the pause time should be smaller.
> >
> 
> Yes, you are right.
> But I think this is a performance optimization, and this series focuses on
> enabling COLO.
> I will do this work in the future.
> 
> 
> >
> > > +    qemu_savevm_live_state(s->to_dst_file);
> >
> > Does this actually need to be inside of the qemu_mutex_lock_iothread?
> > I'm pretty sure the device_state needs to be, but I'm not sure the
> > live_state needs to.
> >
> 
> I have checked the code; qemu_savevm_live_state() does not need to be inside
> the qemu_mutex_lock_iothread() section.
> I will move it out of the locked region in the next version.
> 
> 
> 
> >
> > > +    /* Note: device state is saved into buffer */
> > > +    ret = qemu_save_device_state(fb);
> > > +
> > > +    qemu_mutex_unlock_iothread();
> > > +
> > > +    qemu_fflush(fb);
> > > +
> > >      /*
> > >       * We need the size of the VMstate data in Secondary side,
> > >       * With which we can decide how much data should be read.
> > > @@ -643,6 +651,7 @@ void *colo_process_incoming_thread(void *opaque)
> > >      uint64_t total_size;
> > >      uint64_t value;
> > >      Error *local_err = NULL;
> > > +    int ret;
> > >
> > >      qemu_sem_init(&mis->colo_incoming_sem, 0);
> > >
> > > @@ -715,6 +724,16 @@ void *colo_process_incoming_thread(void *opaque)
> > >              goto out;
> > >          }
> > >
> > > +        qemu_mutex_lock_iothread();
> > > +        cpu_synchronize_all_pre_loadvm();
> > > +        ret = qemu_loadvm_state_main(mis->from_src_file, mis);
> > > +        qemu_mutex_unlock_iothread();
> > > +
> > > +        if (ret < 0) {
> > > +            error_report("Load VM's live state (ram) error");
> > > +            goto out;
> > > +        }
> > > +
> > >          value = colo_receive_message_value(mis->from_src_file,
> > >                                   COLO_MESSAGE_VMSTATE_SIZE, &local_err);
> > >          if (local_err) {
> > > @@ -748,8 +767,9 @@ void *colo_process_incoming_thread(void *opaque)
> > >          qemu_mutex_lock_iothread();
> > >          qemu_system_reset(SHUTDOWN_CAUSE_NONE);
> >
> > Is the reset safe? Are you sure it doesn't change the ram you've just
> > loaded?
> >
> 
> Yes, it is safe. In my tests the secondary node runs well.

The fact it's working doesn't mean it's safe; it just means you've
not hit a problem!  A qemu_system_reset calls a 'reset' on all of the
devices, and calls cpu_synchronize_all_states() - I'd worry that any of
those might touch RAM.

> >
> > >          vmstate_loading = true;
> > > -        if (qemu_loadvm_state(fb) < 0) {
> > > -            error_report("COLO: loadvm failed");
> > > +        ret = qemu_load_device_state(fb);
> > > +        if (ret < 0) {
> > > +            error_report("COLO: load device state failed");
> > >              qemu_mutex_unlock_iothread();
> > >              goto out;
> > >          }
> > > diff --git a/migration/savevm.c b/migration/savevm.c
> > > index ec0bff09ce..0f61239429 100644
> > > --- a/migration/savevm.c
> > > +++ b/migration/savevm.c
> > > @@ -1332,13 +1332,20 @@ done:
> > >      return ret;
> > >  }
> > >
> > > -static int qemu_save_device_state(QEMUFile *f)
> > > +void qemu_savevm_live_state(QEMUFile *f)
> > >  {
> > > -    SaveStateEntry *se;
> > > +    /* save QEMU_VM_SECTION_END section */
> > > +    qemu_savevm_state_complete_precopy(f, true, false);
> > > +    qemu_put_byte(f, QEMU_VM_EOF);
> > > +}
> > >
> > > -    qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
> > > -    qemu_put_be32(f, QEMU_VM_FILE_VERSION);
> > > +int qemu_save_device_state(QEMUFile *f)
> > > +{
> > > +    SaveStateEntry *se;
> > >
> > > +    if (!migration_in_colo_state()) {
> > > +        qemu_savevm_state_header(f);
> > > +    }
> > >      cpu_synchronize_all_states();
> >
> > So this changes qemu_save_device_state to use savevm_state_header
> > which feels reasonable, but that includes the 'configuration'
> > section;  do we want that?  Is that OK for Xen's use in
> > qmp_xen_save_devices_state?
> >
> 
> If I understand correctly, COLO on Xen does not currently use the QEMU
> migration code for this job;
> COLO on Xen has an independent COLO framework that does the same job (using
> the Xen migration code),
> so I think it is OK for Xen.

It's important not to break non-COLO Xen though; so you need to check
with the Xen people that a change that impacts
qmp_xen_save_devices_state is OK.

Dave

> 
> 
> Thanks for your review and continued support.
> Zhang Chen
> 
> 
> >
> > >      QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
> > > @@ -1394,8 +1401,6 @@ enum LoadVMExitCodes {
> > >      LOADVM_QUIT     =  1,
> > >  };
> > >
> > > -static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState
> > *mis);
> > > -
> > >  /* ------ incoming postcopy messages ------ */
> > >  /* 'advise' arrives before any transfers just to tell us that a postcopy
> > >   * *might* happen - it might be skipped if precopy transferred
> > everything
> > > @@ -2075,7 +2080,7 @@ void qemu_loadvm_state_cleanup(void)
> > >      }
> > >  }
> > >
> > > -static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState
> > *mis)
> > > +int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
> > >  {
> > >      uint8_t section_type;
> > >      int ret = 0;
> > > @@ -2229,6 +2234,22 @@ int qemu_loadvm_state(QEMUFile *f)
> > >      return ret;
> > >  }
> > >
> > > +int qemu_load_device_state(QEMUFile *f)
> > > +{
> > > +    MigrationIncomingState *mis = migration_incoming_get_current();
> > > +    int ret;
> > > +
> > > +    /* Load QEMU_VM_SECTION_FULL section */
> > > +    ret = qemu_loadvm_state_main(f, mis);
> > > +    if (ret < 0) {
> > > +        error_report("Failed to load device state: %d", ret);
> > > +        return ret;
> > > +    }
> > > +
> > > +    cpu_synchronize_all_post_init();
> > > +    return 0;
> > > +}
> > > +
> > >  int save_snapshot(const char *name, Error **errp)
> > >  {
> > >      BlockDriverState *bs, *bs1;
> > > diff --git a/migration/savevm.h b/migration/savevm.h
> > > index c6d46b37a2..cf7935dd68 100644
> > > --- a/migration/savevm.h
> > > +++ b/migration/savevm.h
> > > @@ -53,8 +53,12 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile
> > *f, const char *name,
> > >                                             uint64_t *start_list,
> > >                                             uint64_t *length_list);
> > >  void qemu_savevm_send_colo_enable(QEMUFile *f);
> > > +void qemu_savevm_live_state(QEMUFile *f);
> > > +int qemu_save_device_state(QEMUFile *f);
> > >
> > >  int qemu_loadvm_state(QEMUFile *f);
> > >  void qemu_loadvm_state_cleanup(void);
> > > +int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
> > > +int qemu_load_device_state(QEMUFile *f);
> > >
> > >  #endif
> > > --
> > > 2.17.0
> >
> > Dave
> >
> > >
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm
  2018-06-19 19:00       ` Dr. David Alan Gilbert
@ 2018-06-22  3:45         ` Zhang Chen
  0 siblings, 0 replies; 36+ messages in thread
From: Zhang Chen @ 2018-06-22  3:45 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Eric Blake, Markus Armbruster, Paolo Bonzini,
	Jason Wang, zhanghailiang, Li Zhijian

On Wed, Jun 20, 2018 at 3:00 AM, Dr. David Alan Gilbert <dgilbert@redhat.com
> wrote:

> * Zhang Chen (zhangckid@gmail.com) wrote:
> > On Wed, May 16, 2018 at 2:56 AM, Dr. David Alan Gilbert <
> dgilbert@redhat.com
> > > wrote:
> >
> > > * Zhang Chen (zhangckid@gmail.com) wrote:
> > > > From: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > > >
> > > > There are several stages during loadvm/savevm process. In different
> > > stage,
> > > > migration incoming processes different types of sections.
> > > > We want to control these stages more accuracy, it will benefit COLO
> > > > performance, we don't have to save type of QEMU_VM_SECTION_START
> > > > sections everytime while do checkpoint, besides, we want to separate
> > > > the process of saving/loading memory and devices state.
> > > >
> > > > So we add three new helper functions: qemu_load_device_state() and
> > > > qemu_savevm_live_state() to achieve different process during
> migration.
> > > >
> > > > Besides, we make qemu_loadvm_state_main() and
> qemu_save_device_state()
> > > > public, and simplify the codes of qemu_save_device_state() by
> calling the
> > > > wrapper qemu_savevm_state_header().
> > > >
> > > > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > > > Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> > > > Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> > > > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > > > ---
> > > >  migration/colo.c   | 36 ++++++++++++++++++++++++++++--------
> > > >  migration/savevm.c | 35 ++++++++++++++++++++++++++++-------
> > > >  migration/savevm.h |  4 ++++
> > > >  3 files changed, 60 insertions(+), 15 deletions(-)
> > > >
> > > > diff --git a/migration/colo.c b/migration/colo.c
> > > > index cdff0a2490..5b055f79f1 100644
> > > > --- a/migration/colo.c
> > > > +++ b/migration/colo.c
> > > > @@ -30,6 +30,7 @@
> > > >  #include "block/block.h"
> > > >  #include "qapi/qapi-events-migration.h"
> > > >  #include "qapi/qmp/qerror.h"
> > > > +#include "sysemu/cpus.h"
> > > >
> > > >  static bool vmstate_loading;
> > > >  static Notifier packets_compare_notifier;
> > > > @@ -414,23 +415,30 @@ static int colo_do_checkpoint_transaction
> (MigrationState
> > > *s,
> > > >
> > > >      /* Disable block migration */
> > > >      migrate_set_block_enabled(false, &local_err);
> > > > -    qemu_savevm_state_header(fb);
> > > > -    qemu_savevm_state_setup(fb);
> > > >      qemu_mutex_lock_iothread();
> > > >      replication_do_checkpoint_all(&local_err);
> > > >      if (local_err) {
> > > >          qemu_mutex_unlock_iothread();
> > > >          goto out;
> > > >      }
> > > > -    qemu_savevm_state_complete_precopy(fb, false, false);
> > > > -    qemu_mutex_unlock_iothread();
> > > > -
> > > > -    qemu_fflush(fb);
> > > >
> > > >      colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND,
> > > &local_err);
> > > >      if (local_err) {
> > > >          goto out;
> > > >      }
> > > > +    /*
> > > > +     * Only save VM's live state, which not including device state.
> > > > +     * TODO: We may need a timeout mechanism to prevent COLO process
> > > > +     * to be blocked here.
> > > > +     */
> > >
> > > I guess that's the downside to transmitting it directly than into the
> > > buffer;
> > > Peter Xu's OOB command system would let you kill the connection - and
> > > that's something I think COLO should use.
> > > Still the change saves you having that huge outgoing buffer on the
> > > source side and lets you start sending the checkpoint sooner, which
> > > means the pause time should be smaller.
> > >
> >
> > Yes, you are right.
> > But I think this is a performance optimization, and this series focuses on
> > enabling COLO.
> > I will do this work in the future.
> >
> >
> > >
> > > > +    qemu_savevm_live_state(s->to_dst_file);
> > >
> > > Does this actually need to be inside of the qemu_mutex_lock_iothread?
> > > I'm pretty sure the device_state needs to be, but I'm not sure the
> > > live_state needs to.
> > >
> >
> > I have checked the code; qemu_savevm_live_state() does not need to be
> > inside the qemu_mutex_lock_iothread() section.
> > I will move it out of the locked region in the next version.
> >
> >
> >
> > >
> > > > +    /* Note: device state is saved into buffer */
> > > > +    ret = qemu_save_device_state(fb);
> > > > +
> > > > +    qemu_mutex_unlock_iothread();
> > > > +
> > > > +    qemu_fflush(fb);
> > > > +
> > > >      /*
> > > >       * We need the size of the VMstate data in Secondary side,
> > > >       * With which we can decide how much data should be read.
> > > > @@ -643,6 +651,7 @@ void *colo_process_incoming_thread(void *opaque)
> > > >      uint64_t total_size;
> > > >      uint64_t value;
> > > >      Error *local_err = NULL;
> > > > +    int ret;
> > > >
> > > >      qemu_sem_init(&mis->colo_incoming_sem, 0);
> > > >
> > > > @@ -715,6 +724,16 @@ void *colo_process_incoming_thread(void
> *opaque)
> > > >              goto out;
> > > >          }
> > > >
> > > > +        qemu_mutex_lock_iothread();
> > > > +        cpu_synchronize_all_pre_loadvm();
> > > > +        ret = qemu_loadvm_state_main(mis->from_src_file, mis);
> > > > +        qemu_mutex_unlock_iothread();
> > > > +
> > > > +        if (ret < 0) {
> > > > +            error_report("Load VM's live state (ram) error");
> > > > +            goto out;
> > > > +        }
> > > > +
> > > >          value = colo_receive_message_value(mis->from_src_file,
> > > >                                   COLO_MESSAGE_VMSTATE_SIZE,
> &local_err);
> > > >          if (local_err) {
> > > > @@ -748,8 +767,9 @@ void *colo_process_incoming_thread(void *opaque)
> > > >          qemu_mutex_lock_iothread();
> > > >          qemu_system_reset(SHUTDOWN_CAUSE_NONE);
> > >
> > > Is the reset safe? Are you sure it doesn't change the ram you've just
> > > loaded?
> > >
> >
> > Yes, it is safe. In my tests the secondary node runs well.
>
> The fact it's working doesn't mean it's safe; it just means you've
> not hit a problem!  A qemu_system_reset calls a 'reset' on all of the
> devices, and calls cpu_synchronize_all_states() - I'd worry that any of
> those might touch RAM.
>

In migration/savevm.c, load_snapshot() does the same job as here; is there
any difference?
And I have tested running COLO without the reset; it looks like it works well
too, at least in a short test.


>
> > >
> > > >          vmstate_loading = true;
> > > > -        if (qemu_loadvm_state(fb) < 0) {
> > > > -            error_report("COLO: loadvm failed");
> > > > +        ret = qemu_load_device_state(fb);
> > > > +        if (ret < 0) {
> > > > +            error_report("COLO: load device state failed");
> > > >              qemu_mutex_unlock_iothread();
> > > >              goto out;
> > > >          }
> > > > diff --git a/migration/savevm.c b/migration/savevm.c
> > > > index ec0bff09ce..0f61239429 100644
> > > > --- a/migration/savevm.c
> > > > +++ b/migration/savevm.c
> > > > @@ -1332,13 +1332,20 @@ done:
> > > >      return ret;
> > > >  }
> > > >
> > > > -static int qemu_save_device_state(QEMUFile *f)
> > > > +void qemu_savevm_live_state(QEMUFile *f)
> > > >  {
> > > > -    SaveStateEntry *se;
> > > > +    /* save QEMU_VM_SECTION_END section */
> > > > +    qemu_savevm_state_complete_precopy(f, true, false);
> > > > +    qemu_put_byte(f, QEMU_VM_EOF);
> > > > +}
> > > >
> > > > -    qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
> > > > -    qemu_put_be32(f, QEMU_VM_FILE_VERSION);
> > > > +int qemu_save_device_state(QEMUFile *f)
> > > > +{
> > > > +    SaveStateEntry *se;
> > > >
> > > > +    if (!migration_in_colo_state()) {
> > > > +        qemu_savevm_state_header(f);
> > > > +    }
> > > >      cpu_synchronize_all_states();
> > >
> > > So this changes qemu_save_device_state to use savevm_state_header
> > > which feels reasonable, but that includes the 'configuration'
> > > section;  do we want that?  Is that OK for Xen's use in
> > > qmp_xen_save_devices_state?
> > >
> >
> > If I understand correctly, COLO on Xen does not currently use the QEMU
> > migration code for this job;
> > COLO on Xen has an independent COLO framework that does the same job
> > (using the Xen migration code),
> > so I think it is OK for Xen.
>
> It's important not to break non-COLO Xen though; so you need to check
> with the Xen people that a change that impacts
> qmp_xen_save_devices_state is OK.
>

Yes, I think the quick way is to drop the changes that affect the
"qmp_xen_save_devices_state" path, so that the interface for Xen is kept
unchanged.
I will do this in the next version.
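
Concretely, something like this on top of this series is what I have in mind
(only a sketch, untested): keep the stream produced for
qmp_xen_save_devices_state() byte-identical to today's, instead of switching
it to the full savevm header:

    --- a/migration/savevm.c
    +++ b/migration/savevm.c
    @@ ... @@ int qemu_save_device_state(QEMUFile *f)
     {
         SaveStateEntry *se;
     
         if (!migration_in_colo_state()) {
    -        qemu_savevm_state_header(f);
    +        /* Keep the pre-existing format for qmp_xen_save_devices_state():
    +         * magic + version only, no 'configuration' section. */
    +        qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
    +        qemu_put_be32(f, QEMU_VM_FILE_VERSION);
         }
         cpu_synchronize_all_states();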

Thanks
Zhang Chen


>
> Dave
>
> >
> >
> > Thanks for your review and continued support.
> > Zhang Chen
> >
> >
> > >
> > > >      QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
> > > > @@ -1394,8 +1401,6 @@ enum LoadVMExitCodes {
> > > >      LOADVM_QUIT     =  1,
> > > >  };
> > > >
> > > > -static int qemu_loadvm_state_main(QEMUFile *f,
> MigrationIncomingState
> > > *mis);
> > > > -
> > > >  /* ------ incoming postcopy messages ------ */
> > > >  /* 'advise' arrives before any transfers just to tell us that a
> postcopy
> > > >   * *might* happen - it might be skipped if precopy transferred
> > > everything
> > > > @@ -2075,7 +2080,7 @@ void qemu_loadvm_state_cleanup(void)
> > > >      }
> > > >  }
> > > >
> > > > -static int qemu_loadvm_state_main(QEMUFile *f,
> MigrationIncomingState
> > > *mis)
> > > > +int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState
> *mis)
> > > >  {
> > > >      uint8_t section_type;
> > > >      int ret = 0;
> > > > @@ -2229,6 +2234,22 @@ int qemu_loadvm_state(QEMUFile *f)
> > > >      return ret;
> > > >  }
> > > >
> > > > +int qemu_load_device_state(QEMUFile *f)
> > > > +{
> > > > +    MigrationIncomingState *mis = migration_incoming_get_current();
> > > > +    int ret;
> > > > +
> > > > +    /* Load QEMU_VM_SECTION_FULL section */
> > > > +    ret = qemu_loadvm_state_main(f, mis);
> > > > +    if (ret < 0) {
> > > > +        error_report("Failed to load device state: %d", ret);
> > > > +        return ret;
> > > > +    }
> > > > +
> > > > +    cpu_synchronize_all_post_init();
> > > > +    return 0;
> > > > +}
> > > > +
> > > >  int save_snapshot(const char *name, Error **errp)
> > > >  {
> > > >      BlockDriverState *bs, *bs1;
> > > > diff --git a/migration/savevm.h b/migration/savevm.h
> > > > index c6d46b37a2..cf7935dd68 100644
> > > > --- a/migration/savevm.h
> > > > +++ b/migration/savevm.h
> > > > @@ -53,8 +53,12 @@ void qemu_savevm_send_postcopy_ram_
> discard(QEMUFile
> > > *f, const char *name,
> > > >                                             uint64_t *start_list,
> > > >                                             uint64_t *length_list);
> > > >  void qemu_savevm_send_colo_enable(QEMUFile *f);
> > > > +void qemu_savevm_live_state(QEMUFile *f);
> > > > +int qemu_save_device_state(QEMUFile *f);
> > > >
> > > >  int qemu_loadvm_state(QEMUFile *f);
> > > >  void qemu_loadvm_state_cleanup(void);
> > > > +int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState
> *mis);
> > > > +int qemu_load_device_state(QEMUFile *f);
> > > >
> > > >  #endif
> > > > --
> > > > 2.17.0
> > >
> > > Dave
> > >
> > > >
> > > --
> > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>


end of thread, other threads:[~2018-06-22  3:45 UTC | newest]

Thread overview: 36+ messages
2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 01/17] filter-rewriter: fix memory leak for connection in connection_track_table Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 02/17] colo-compare: implement the process of checkpoint Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 03/17] colo-compare: use notifier to notify packets comparing result Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 04/17] COLO: integrate colo compare with colo frame Zhang Chen
2018-05-16 11:12   ` Dr. David Alan Gilbert
2018-05-16 13:55     ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 05/17] COLO: Add block replication into colo process Zhang Chen
2018-05-16 15:54   ` Dr. David Alan Gilbert
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 06/17] COLO: Remove colo_state migration struct Zhang Chen
2018-05-15 16:02   ` Dr. David Alan Gilbert
2018-05-16 13:58     ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 07/17] COLO: Load dirty pages into SVM's RAM cache firstly Zhang Chen
2018-05-15 16:55   ` Dr. David Alan Gilbert
2018-05-20 18:30     ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 08/17] ram/COLO: Record the dirty pages that SVM received Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 09/17] COLO: Flush memory data from ram cache Zhang Chen
2018-05-15 14:44   ` Dr. David Alan Gilbert
2018-05-20 16:09     ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 10/17] qmp event: Add COLO_EXIT event to notify users while exited COLO Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 11/17] qapi: Add new command to query colo status Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm Zhang Chen
2018-05-15 18:56   ` Dr. David Alan Gilbert
2018-06-03  5:10     ` Zhang Chen
2018-06-19 19:00       ` Dr. David Alan Gilbert
2018-06-22  3:45         ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 13/17] COLO: flush host dirty ram from cache Zhang Chen
2018-05-15 15:32   ` Dr. David Alan Gilbert
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 14/17] filter: Add handle_event method for NetFilterClass Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 15/17] filter-rewriter: handle checkpoint and failover event Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 16/17] COLO: notify net filters about checkpoint/failover event Zhang Chen
2018-05-17  9:48   ` Dr. David Alan Gilbert
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 17/17] COLO: quick failover process by kick COLO thread Zhang Chen
2018-05-17  9:53   ` Dr. David Alan Gilbert
2018-05-16 11:18 ` [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Dr. David Alan Gilbert
2018-05-16 12:21   ` Jason Wang
