* [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen, Zhang Chen

From: Zhang Chen <chen.zhang@intel.com>

Hi~

The COLO frame, block replication, and the COLO proxy (colo-compare, filter-mirror,
filter-redirector, filter-rewriter) have existed in QEMU for a long time;
it is time to integrate these three parts to make COLO really work.

In this series, we have some optimizations for the COLO frame, including
separating the process of saving RAM and device state, and using a COLO_EXIT
event to notify users when a VM exits COLO. Most of these parts were reviewed
long ago in earlier versions, but since this series has just been rebased on
an upstream tree that merged a new migration series, some of the patches
deserve review again.

We use a notifier/callback method for colo-compare to notify the COLO frame
about inconsistent network packets, and add a handle_event method to
NetFilterClass to help the COLO frame notify filters and colo-compare about
checkpoint/failover events; this keeps the design flexible.
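
As a reminder, the notifier mechanism here is QEMU's generic
Notifier/NotifierList API (include/qemu/notify.h); a minimal sketch of
the pattern, with illustrative names:

    static NotifierList example_notifiers =
        NOTIFIER_LIST_INITIALIZER(example_notifiers);

    static void example_cb(Notifier *notifier, void *data)
    {
        /* react to the event, e.g. request a checkpoint */
    }

    static Notifier example_notifier = { .notify = example_cb };

    /* subscribe, fire, unsubscribe: */
    notifier_list_add(&example_notifiers, &example_notifier);
    notifier_list_notify(&example_notifiers, NULL);
    notifier_remove(&example_notifier);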

For the newest version, please refer to:
https://github.com/zhangckid/qemu/tree/qemu-colo-18jan19

Please review, thanks.

V4:
 - Address Eric's comments in patch 10/16.
 - Rebase on upstream codes.
 - Fix mingw compile error in patch 02/16.
 - Fix some comments.
 - Fix conflict with the patch "migration: remove "enable_colo" var" in patch 06/16.

V3:
 - Address community comments from V2.
 - Rebase on upstream codes.
 - Fix several bugs.
 - Split the shared-disk part into an independent patch set.
 - Optimize codes.


Zhang Chen (8):
  filter-rewriter: fix memory leak for connection in
    connection_track_table
  colo-compare: implement the process of checkpoint
  colo-compare: use notifier to notify packets comparing result
  COLO: integrate colo compare with colo frame
  COLO: Add block replication into colo process
  ram/COLO: Record the dirty pages that SVM received
  filter: Add handle_event method for NetFilterClass
  filter-rewriter: handle checkpoint and failover event

zhanghailiang (8):
  COLO: Remove colo_state migration struct
  COLO: Load dirty pages into SVM's RAM cache firstly
  COLO: Flush memory data from ram cache
  qmp event: Add COLO_EXIT event to notify users while exited COLO
  savevm: split the process of different stages for loadvm/savevm
  COLO: flush host dirty ram from cache
  COLO: notify net filters about checkpoint/failover event
  COLO: quick failover process by kick COLO thread

 include/exec/ram_addr.h  |   1 +
 include/migration/colo.h |  11 ++-
 include/net/filter.h     |   5 ++
 migration/Makefile.objs  |   2 +-
 migration/colo-comm.c    |  76 -------------------
 migration/colo.c         | 188 ++++++++++++++++++++++++++++++++++++++++++++---
 migration/migration.c    |  38 +++++++++-
 migration/ram.c          | 181 ++++++++++++++++++++++++++++++++++++++++++++-
 migration/ram.h          |   4 +
 migration/savevm.c       |  54 ++++++++++++--
 migration/savevm.h       |   5 ++
 migration/trace-events   |   3 +
 net/colo-compare.c       | 108 +++++++++++++++++++++++++--
 net/colo-compare.h       |  24 ++++++
 net/colo.h               |   4 +
 net/filter-rewriter.c    | 109 +++++++++++++++++++++++++--
 net/filter.c             |  17 +++++
 net/net.c                |  28 +++++++
 qapi/migration.json      |  35 +++++++++
 vl.c                     |   2 -
 20 files changed, 777 insertions(+), 118 deletions(-)
 delete mode 100644 migration/colo-comm.c
 create mode 100644 net/colo-compare.h

-- 
2.7.4


* [Qemu-devel] [PATCH V4 01/16] filter-rewriter: fix memory leak for connection in connection_track_table
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

After a net connection is closed, we did not clear its related resources
in connection_track_table, which leads to a memory leak.

Let's track the state of each net connection: once it is closed, its
related resources are cleared up.
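
For orientation, the state being tracked is the tail of the TCP close
handshake; a condensed sketch of the logic added below (field names as
in this patch):

    /* one peer sends 'fin=1, ack=1' with sequence number S: record it */
    conn->fin_ack_seq = ntohl(tcp_pkt->th_seq);      /* == S */
    conn->tcp_state = TCPS_LAST_ACK;

    /* the other peer later acks it with th_ack == S + 1:
     * the close is complete, drop the tracked entry */
    if ((conn->tcp_state == TCPS_LAST_ACK) &&
        (ntohl(tcp_pkt->th_ack) == (conn->fin_ack_seq + 1))) {
        g_hash_table_remove(rf->connection_track_table, key);
    }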

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 net/colo.h            |  4 +++
 net/filter-rewriter.c | 69 +++++++++++++++++++++++++++++++++++++++++++++------
 2 files changed, 66 insertions(+), 7 deletions(-)

diff --git a/net/colo.h b/net/colo.h
index 0658e86..0193935 100644
--- a/net/colo.h
+++ b/net/colo.h
@@ -18,6 +18,7 @@
 #include "slirp/slirp.h"
 #include "qemu/jhash.h"
 #include "qemu/timer.h"
+#include "slirp/tcp.h"
 
 #define HASHTABLE_MAX_SIZE 16384
 
@@ -71,6 +72,9 @@ typedef struct Connection {
      * run once in independent tcp connection
      */
     int syn_flag;
+
+    int tcp_state; /* TCP FSM state */
+    tcp_seq fin_ack_seq; /* the seq of 'fin=1,ack=1' */
 } Connection;
 
 uint32_t connection_key_hash(const void *opaque);
diff --git a/net/filter-rewriter.c b/net/filter-rewriter.c
index 2be388f..a58310a 100644
--- a/net/filter-rewriter.c
+++ b/net/filter-rewriter.c
@@ -62,9 +62,9 @@ static int is_tcp_packet(Packet *pkt)
 }
 
 /* handle tcp packet from primary guest */
-static int handle_primary_tcp_pkt(NetFilterState *nf,
+static int handle_primary_tcp_pkt(RewriterState *rf,
                                   Connection *conn,
-                                  Packet *pkt)
+                                  Packet *pkt, ConnectionKey *key)
 {
     struct tcphdr *tcp_pkt;
 
@@ -102,15 +102,44 @@ static int handle_primary_tcp_pkt(NetFilterState *nf,
             net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
                                    pkt->size - pkt->vnet_hdr_len);
         }
+        /*
+         * Case 1:
+         * The *server* side of this connection is the VM, and the
+         * *client* tries to close the connection.
+         *
+         * We got an 'ack=1' packet from the client side, which acks the
+         * 'fin=1, ack=1' packet from the server side. From this point on,
+         * there will be no more packets in this connection, unless some
+         * error happens on the path between the 'filter object' and the
+         * vNIC. If that rare case really happens, we can still create a
+         * new connection, so it is safe to remove the connection from
+         * connection_track_table.
+         *
+         */
+        if ((conn->tcp_state == TCPS_LAST_ACK) &&
+            (ntohl(tcp_pkt->th_ack) == (conn->fin_ack_seq + 1))) {
+            g_hash_table_remove(rf->connection_track_table, key);
+        }
+    }
+    /*
+     * Case 2:
+     * The *server* side of this connection is the VM, and the *server*
+     * tries to close the connection.
+     *
+     * We got a 'fin=1, ack=1' packet from the client side; we need to
+     * record the seq of this 'fin=1, ack=1' packet.
+     */
+    if ((tcp_pkt->th_flags & (TH_ACK | TH_FIN)) == (TH_ACK | TH_FIN)) {
+        conn->fin_ack_seq = ntohl(tcp_pkt->th_seq);
+        conn->tcp_state = TCPS_LAST_ACK;
     }
 
     return 0;
 }
 
 /* handle tcp packet from secondary guest */
-static int handle_secondary_tcp_pkt(NetFilterState *nf,
+static int handle_secondary_tcp_pkt(RewriterState *rf,
                                     Connection *conn,
-                                    Packet *pkt)
+                                    Packet *pkt, ConnectionKey *key)
 {
     struct tcphdr *tcp_pkt;
 
@@ -142,8 +171,34 @@ static int handle_secondary_tcp_pkt(NetFilterState *nf,
             net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
                                    pkt->size - pkt->vnet_hdr_len);
         }
+        /*
+         * Case 2:
+         * The *server* side of this connection is the VM, and the
+         * *server* tries to close the connection.
+         *
+         * We got an 'ack=1' packet from the server side, which acks the
+         * 'fin=1, ack=1' packet from the client side. Like Case 1, there
+         * should be no more packets in the connection from now on. The
+         * difference here is that if the ack packet is lost, we will see
+         * the retransmitted 'fin=1,ack=1' packet again.
+         * TODO: Fix the above case.
+         */
+        if ((conn->tcp_state == TCPS_LAST_ACK) &&
+            (ntohl(tcp_pkt->th_ack) == (conn->fin_ack_seq + 1))) {
+            g_hash_table_remove(rf->connection_track_table, key);
+        }
+    }
+    /*
+     * Case 1:
+     * The *server* side of this connection is the VM, and the *client*
+     * tries to close the connection.
+     *
+     * We got a 'fin=1, ack=1' packet from the server side; we need to
+     * record the seq of this 'fin=1, ack=1' packet.
+     */
+    if ((tcp_pkt->th_flags & (TH_ACK | TH_FIN)) == (TH_ACK | TH_FIN)) {
+        conn->fin_ack_seq = ntohl(tcp_pkt->th_seq);
+        conn->tcp_state = TCPS_LAST_ACK;
     }
-
     return 0;
 }
 
@@ -193,7 +248,7 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
 
         if (sender == nf->netdev) {
             /* NET_FILTER_DIRECTION_TX */
-            if (!handle_primary_tcp_pkt(nf, conn, pkt)) {
+            if (!handle_primary_tcp_pkt(s, conn, pkt, &key)) {
                 qemu_net_queue_send(s->incoming_queue, sender, 0,
                 (const uint8_t *)pkt->data, pkt->size, NULL);
                 packet_destroy(pkt, NULL);
@@ -206,7 +261,7 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
             }
         } else {
             /* NET_FILTER_DIRECTION_RX */
-            if (!handle_secondary_tcp_pkt(nf, conn, pkt)) {
+            if (!handle_secondary_tcp_pkt(s, conn, pkt, &key)) {
                 qemu_net_queue_send(s->incoming_queue, sender, 0,
                 (const uint8_t *)pkt->data, pkt->size, NULL);
                 packet_destroy(pkt, NULL);
-- 
2.7.4


* [Qemu-devel] [PATCH V4 02/16] colo-compare: implement the process of checkpoint
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

While doing a checkpoint, we need to flush all the unhandled packets.
By using the filter notifier mechanism, we can easily notify every
compare object to do this; the flush runs inside the compare thread,
scheduled as a bottom half.
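
From the COLO frame's point of view, the new API added here is used
like this (a sketch; the real caller lands later in this series):

    Error *local_err = NULL;

    colo_notify_compares_event(NULL, COLO_EVENT_CHECKPOINT, &local_err);
    if (local_err) {
        /* the checkpoint has to be aborted */
    }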

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 include/migration/colo.h |  6 ++++
 net/colo-compare.c       | 76 ++++++++++++++++++++++++++++++++++++++++++++++++
 net/colo-compare.h       | 22 ++++++++++++++
 3 files changed, 104 insertions(+)
 create mode 100644 net/colo-compare.h

diff --git a/include/migration/colo.h b/include/migration/colo.h
index ff9874e..6adf3a5 100644
--- a/include/migration/colo.h
+++ b/include/migration/colo.h
@@ -15,6 +15,12 @@
 
 #include "qemu-common.h"
 
+enum colo_event {
+    COLO_EVENT_NONE,
+    COLO_EVENT_CHECKPOINT,
+    COLO_EVENT_FAILOVER,
+};
+
 void colo_info_init(void);
 
 void migrate_start_colo_process(MigrationState *s);
diff --git a/net/colo-compare.c b/net/colo-compare.c
index 0ebdec9..4bceca8 100644
--- a/net/colo-compare.c
+++ b/net/colo-compare.c
@@ -29,17 +29,26 @@
 #include "qapi-visit.h"
 #include "net/colo.h"
 #include "sysemu/iothread.h"
+#include "net/colo-compare.h"
+#include "migration/colo.h"
 
 #define TYPE_COLO_COMPARE "colo-compare"
 #define COLO_COMPARE(obj) \
     OBJECT_CHECK(CompareState, (obj), TYPE_COLO_COMPARE)
 
+static QTAILQ_HEAD(, CompareState) net_compares =
+       QTAILQ_HEAD_INITIALIZER(net_compares);
+
 #define COMPARE_READ_LEN_MAX NET_BUFSIZE
 #define MAX_QUEUE_SIZE 1024
 
 /* TODO: Should be configurable */
 #define REGULAR_PACKET_CHECK_MS 3000
 
+static QemuMutex event_mtx;
+static QemuCond event_complete_cond;
+static int event_unhandled_count;
+
 /*
  *  + CompareState ++
  *  |               |
@@ -86,6 +95,11 @@ typedef struct CompareState {
     IOThread *iothread;
     GMainContext *worker_context;
     QEMUTimer *packet_check_timer;
+
+    QEMUBH *event_bh;
+    enum colo_event event;
+
+    QTAILQ_ENTRY(CompareState) next;
 } CompareState;
 
 typedef struct CompareClass {
@@ -631,6 +645,25 @@ static void check_old_packet_regular(void *opaque)
                 REGULAR_PACKET_CHECK_MS);
 }
 
+/* Public API, used by the COLO frame to notify colo-compare of events */
+void colo_notify_compares_event(void *opaque, int event, Error **errp)
+{
+    CompareState *s;
+
+    qemu_mutex_lock(&event_mtx);
+    QTAILQ_FOREACH(s, &net_compares, next) {
+        s->event = event;
+        qemu_bh_schedule(s->event_bh);
+        event_unhandled_count++;
+    }
+    /* Wait for all compare threads to finish handling this event */
+    while (event_unhandled_count > 0) {
+        qemu_cond_wait(&event_complete_cond, &event_mtx);
+    }
+
+    qemu_mutex_unlock(&event_mtx);
+}
+
 static void colo_compare_timer_init(CompareState *s)
 {
     AioContext *ctx = iothread_get_aio_context(s->iothread);
@@ -651,6 +684,28 @@ static void colo_compare_timer_del(CompareState *s)
     }
  }
 
+static void colo_flush_packets(void *opaque, void *user_data);
+
+static void colo_compare_handle_event(void *opaque)
+{
+    CompareState *s = opaque;
+
+    switch (s->event) {
+    case COLO_EVENT_CHECKPOINT:
+        g_queue_foreach(&s->conn_list, colo_flush_packets, s);
+        break;
+    case COLO_EVENT_FAILOVER:
+        break;
+    default:
+        break;
+    }
+    qemu_mutex_lock(&event_mtx);
+    assert(event_unhandled_count > 0);
+    event_unhandled_count--;
+    qemu_cond_broadcast(&event_complete_cond);
+    qemu_mutex_unlock(&event_mtx);
+}
+
 static void colo_compare_iothread(CompareState *s)
 {
     object_ref(OBJECT(s->iothread));
@@ -664,6 +719,7 @@ static void colo_compare_iothread(CompareState *s)
                              s, s->worker_context, true);
 
     colo_compare_timer_init(s);
+    s->event_bh = qemu_bh_new(colo_compare_handle_event, s);
 }
 
 static char *compare_get_pri_indev(Object *obj, Error **errp)
@@ -821,8 +877,13 @@ static void colo_compare_complete(UserCreatable *uc, Error **errp)
     net_socket_rs_init(&s->pri_rs, compare_pri_rs_finalize, s->vnet_hdr);
     net_socket_rs_init(&s->sec_rs, compare_sec_rs_finalize, s->vnet_hdr);
 
+    QTAILQ_INSERT_TAIL(&net_compares, s, next);
+
     g_queue_init(&s->conn_list);
 
+    qemu_mutex_init(&event_mtx);
+    qemu_cond_init(&event_complete_cond);
+
     s->connection_track_table = g_hash_table_new_full(connection_key_hash,
                                                       connection_key_equal,
                                                       g_free,
@@ -885,6 +946,7 @@ static void colo_compare_init(Object *obj)
 static void colo_compare_finalize(Object *obj)
 {
     CompareState *s = COLO_COMPARE(obj);
+    CompareState *tmp = NULL;
 
     qemu_chr_fe_deinit(&s->chr_pri_in, false);
     qemu_chr_fe_deinit(&s->chr_sec_in, false);
@@ -892,6 +954,16 @@ static void colo_compare_finalize(Object *obj)
     if (s->iothread) {
         colo_compare_timer_del(s);
     }
+
+    qemu_bh_delete(s->event_bh);
+
+    QTAILQ_FOREACH(tmp, &net_compares, next) {
+        if (!strcmp(tmp->outdev, s->outdev)) {
+            QTAILQ_REMOVE(&net_compares, s, next);
+            break;
+        }
+    }
+
     /* Release all unhandled packets after compare thead exited */
     g_queue_foreach(&s->conn_list, colo_flush_packets, s);
 
@@ -904,6 +976,10 @@ static void colo_compare_finalize(Object *obj)
     if (s->iothread) {
         object_unref(OBJECT(s->iothread));
     }
+
+    qemu_mutex_destroy(&event_mtx);
+    qemu_cond_destroy(&event_complete_cond);
+
     g_free(s->pri_indev);
     g_free(s->sec_indev);
     g_free(s->outdev);
diff --git a/net/colo-compare.h b/net/colo-compare.h
new file mode 100644
index 0000000..1b1ce76
--- /dev/null
+++ b/net/colo-compare.h
@@ -0,0 +1,22 @@
+/*
+ * COarse-grain LOck-stepping Virtual Machines for Non-stop Service (COLO)
+ * (a.k.a. Fault Tolerance or Continuous Replication)
+ *
+ * Copyright (c) 2017 HUAWEI TECHNOLOGIES CO., LTD.
+ * Copyright (c) 2017 FUJITSU LIMITED
+ * Copyright (c) 2017 Intel Corporation
+ *
+ * Authors:
+ *    zhanghailiang <zhang.zhanghailiang@huawei.com>
+ *    Zhang Chen <zhangckid@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+
+#ifndef QEMU_COLO_COMPARE_H
+#define QEMU_COLO_COMPARE_H
+
+void colo_notify_compares_event(void *opaque, int event, Error **errp);
+
+#endif /* QEMU_COLO_COMPARE_H */
-- 
2.7.4


* [Qemu-devel] [PATCH V4 03/16] colo-compare: use notifier to notify packets comparing result
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

Use a notifier to notify the COLO frame about the result of packet
comparison, so that inconsistent packets can trigger a checkpoint.
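
A sketch of the intended consumer (the COLO frame registers a notifier
whose callback requests a checkpoint; see the next patch):

    static Notifier packets_compare_notifier;

    static void colo_compare_notify_checkpoint(Notifier *notifier,
                                               void *data)
    {
        colo_checkpoint_notify(data);
    }

    packets_compare_notifier.notify = colo_compare_notify_checkpoint;
    colo_compare_register_notifier(&packets_compare_notifier);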

Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 net/colo-compare.c | 32 +++++++++++++++++++++++++-------
 net/colo-compare.h |  2 ++
 2 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/net/colo-compare.c b/net/colo-compare.c
index 4bceca8..ba9bc71 100644
--- a/net/colo-compare.c
+++ b/net/colo-compare.c
@@ -31,6 +31,7 @@
 #include "sysemu/iothread.h"
 #include "net/colo-compare.h"
 #include "migration/colo.h"
+#include "migration/migration.h"
 
 #define TYPE_COLO_COMPARE "colo-compare"
 #define COLO_COMPARE(obj) \
@@ -39,6 +40,9 @@
 static QTAILQ_HEAD(, CompareState) net_compares =
        QTAILQ_HEAD_INITIALIZER(net_compares);
 
+static NotifierList colo_compare_notifiers =
+    NOTIFIER_LIST_INITIALIZER(colo_compare_notifiers);
+
 #define COMPARE_READ_LEN_MAX NET_BUFSIZE
 #define MAX_QUEUE_SIZE 1024
 
@@ -452,8 +456,24 @@ static int colo_old_packet_check_one(Packet *pkt, int64_t *check_time)
     }
 }
 
+static void colo_compare_inconsistent_notify(void)
+{
+    notifier_list_notify(&colo_compare_notifiers,
+                migrate_get_current());
+}
+
+void colo_compare_register_notifier(Notifier *notify)
+{
+    notifier_list_add(&colo_compare_notifiers, notify);
+}
+
+void colo_compare_unregister_notifier(Notifier *notify)
+{
+    notifier_remove(notify);
+}
+
 static int colo_old_packet_check_one_conn(Connection *conn,
-                                          void *user_data)
+                                           void *user_data)
 {
     GList *result = NULL;
     int64_t check_time = REGULAR_PACKET_CHECK_MS;
@@ -464,10 +484,7 @@ static int colo_old_packet_check_one_conn(Connection *conn,
 
     if (result) {
         /* Do checkpoint will flush old packet */
-        /*
-         * TODO: Notify colo frame to do checkpoint.
-         * colo_compare_inconsistent_notify();
-         */
+        colo_compare_inconsistent_notify();
         return 0;
     }
 
@@ -542,11 +559,12 @@ static void colo_compare_connection(void *opaque, void *user_data)
             /*
              * If one packet arrive late, the secondary_list or
              * primary_list will be empty, so we can't compare it
-             * until next comparison.
+             * until the next comparison. If the packets in the list have
+             * timed out, a checkpoint request will be triggered.
              */
             trace_colo_compare_main("packet different");
             g_queue_push_head(&conn->primary_list, pkt);
-            /* TODO: colo_notify_checkpoint();*/
+            colo_compare_inconsistent_notify();
             break;
         }
     }
diff --git a/net/colo-compare.h b/net/colo-compare.h
index 1b1ce76..22ddd51 100644
--- a/net/colo-compare.h
+++ b/net/colo-compare.h
@@ -18,5 +18,7 @@
 #define QEMU_COLO_COMPARE_H
 
 void colo_notify_compares_event(void *opaque, int event, Error **errp);
+void colo_compare_register_notifier(Notifier *notify);
+void colo_compare_unregister_notifier(Notifier *notify);
 
 #endif /* QEMU_COLO_COMPARE_H */
-- 
2.7.4


* [Qemu-devel] [PATCH V4 04/16] COLO: integrate colo compare with colo frame
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

For COLO FT, both the PVM and the SVM run at the same time, and we
only sync their state when needed.

So let the SVM keep running while we are not doing a checkpoint, and
change DEFAULT_MIGRATE_X_CHECKPOINT_DELAY to 200 * 100 ms.

Besides, we forgot to release colo_checkpoint_sem and
colo_delay_timer; fix that here.
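
In other words, the periodic checkpoint interval becomes
200 * 100 = 20000 ms (20 seconds) instead of 200 ms; the idea is that
with the proxy comparing packets, checkpoints are normally triggered
on demand by an inconsistent-packet notification rather than by this
timer.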

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/colo.c      | 42 ++++++++++++++++++++++++++++++++++++++++--
 migration/migration.c |  4 ++--
 2 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/migration/colo.c b/migration/colo.c
index dee3aa8..c513805 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -24,8 +24,11 @@
 #include "migration/failover.h"
 #include "replication.h"
 #include "qmp-commands.h"
+#include "net/colo-compare.h"
+#include "net/colo.h"
 
 static bool vmstate_loading;
+static Notifier packets_compare_notifier;
 
 #define COLO_BUFFER_BASE_SIZE (4 * 1024 * 1024)
 
@@ -342,6 +345,11 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
         goto out;
     }
 
+    colo_notify_compares_event(NULL, COLO_EVENT_CHECKPOINT, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
     /* Disable block migration */
     migrate_set_block_enabled(false, &local_err);
     qemu_savevm_state_header(fb);
@@ -399,6 +407,11 @@ out:
     return ret;
 }
 
+static void colo_compare_notify_checkpoint(Notifier *notifier, void *data)
+{
+    colo_checkpoint_notify(data);
+}
+
 static void colo_process_checkpoint(MigrationState *s)
 {
     QIOChannelBuffer *bioc;
@@ -415,6 +428,9 @@ static void colo_process_checkpoint(MigrationState *s)
         goto out;
     }
 
+    packets_compare_notifier.notify = colo_compare_notify_checkpoint;
+    colo_compare_register_notifier(&packets_compare_notifier);
+
     /*
      * Wait for Secondary finish loading VM states and enter COLO
      * restore.
@@ -460,11 +476,21 @@ out:
         qemu_fclose(fb);
     }
 
-    timer_del(s->colo_delay_timer);
-
     /* Hope this not to be too long to wait here */
     qemu_sem_wait(&s->colo_exit_sem);
     qemu_sem_destroy(&s->colo_exit_sem);
+
+    /*
+     * It is safe to unregister the notifier after failover has finished.
+     * Besides, colo_delay_timer and colo_checkpoint_sem can't be
+     * released before the notifier is unregistered, or there will be a
+     * use-after-free error.
+     */
+    colo_compare_unregister_notifier(&packets_compare_notifier);
+    timer_del(s->colo_delay_timer);
+    timer_free(s->colo_delay_timer);
+    qemu_sem_destroy(&s->colo_checkpoint_sem);
+
     /*
      * Must be called after failover BH is completed,
      * Or the failover BH may shutdown the wrong fd that
@@ -557,6 +583,11 @@ void *colo_process_incoming_thread(void *opaque)
     fb = qemu_fopen_channel_input(QIO_CHANNEL(bioc));
     object_unref(OBJECT(bioc));
 
+    qemu_mutex_lock_iothread();
+    vm_start();
+    trace_colo_vm_state_change("stop", "run");
+    qemu_mutex_unlock_iothread();
+
     colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_READY,
                       &local_err);
     if (local_err) {
@@ -576,6 +607,11 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        qemu_mutex_lock_iothread();
+        vm_stop_force_state(RUN_STATE_COLO);
+        trace_colo_vm_state_change("run", "stop");
+        qemu_mutex_unlock_iothread();
+
         /* FIXME: This is unnecessary for periodic checkpoint mode */
         colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_REPLY,
                      &local_err);
@@ -629,6 +665,8 @@ void *colo_process_incoming_thread(void *opaque)
         }
 
         vmstate_loading = false;
+        vm_start();
+        trace_colo_vm_state_change("stop", "run");
         qemu_mutex_unlock_iothread();
 
         if (failover_get_state() == FAILOVER_STATUS_RELAUNCH) {
diff --git a/migration/migration.c b/migration/migration.c
index d3a1c49..5f8c2de 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -74,9 +74,9 @@
 #define DEFAULT_MIGRATE_XBZRLE_CACHE_SIZE (64 * 1024 * 1024)
 
 /* The delay time (in ms) between two COLO checkpoints
- * Note: Please change this default value to 10000 when we support hybrid mode.
+ * Note: Please change this default value to 20000 when we support hybrid mode.
  */
-#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY 200
+#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
 #define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
 #define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
 
-- 
2.7.4


* [Qemu-devel] [PATCH V4 05/16] COLO: Add block replication into colo process
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen, Li Zhijian

Make sure the master starts block replication only after the slave's
block replication has started.

Besides, we need to activate the VM's block drivers before going into
COLO state.
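
The resulting startup ordering, condensed from the diff below:

    /* secondary side, colo_process_incoming_thread(): */
    replication_start_all(REPLICATION_MODE_SECONDARY, &local_err);
    ...
    colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_READY,
                      &local_err);

    /* primary side, colo_process_checkpoint(), reached only after
     * CHECKPOINT_READY has been received from the secondary: */
    replication_start_all(REPLICATION_MODE_PRIMARY, &local_err);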

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 migration/colo.c      | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 migration/migration.c |  9 +++++++++
 2 files changed, 55 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index c513805..0e689df 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -26,6 +26,9 @@
 #include "qmp-commands.h"
 #include "net/colo-compare.h"
 #include "net/colo.h"
+#include "qapi-event.h"
+#include "block/block.h"
+#include "replication.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -55,6 +58,7 @@ static void secondary_vm_do_failover(void)
 {
     int old_state;
     MigrationIncomingState *mis = migration_incoming_get_current();
+    Error *local_err = NULL;
 
     /* Can not do failover during the process of VM's loading VMstate, Or
      * it will break the secondary VM.
@@ -72,6 +76,11 @@ static void secondary_vm_do_failover(void)
     migrate_set_state(&mis->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
 
+    replication_stop_all(true, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
+
     if (!autostart) {
         error_report("\"-S\" qemu option will be ignored in secondary side");
         /* recover runstate to normal migration finish state */
@@ -109,6 +118,7 @@ static void primary_vm_do_failover(void)
 {
     MigrationState *s = migrate_get_current();
     int old_state;
+    Error *local_err = NULL;
 
     migrate_set_state(&s->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
@@ -132,6 +142,13 @@ static void primary_vm_do_failover(void)
                      FailoverStatus_str(old_state));
         return;
     }
+
+    replication_stop_all(true, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+        local_err = NULL;
+    }
+
     /* Notify COLO thread that failover work is finished */
     qemu_sem_post(&s->colo_exit_sem);
 }
@@ -355,6 +372,11 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
     qemu_savevm_state_header(fb);
     qemu_savevm_state_setup(fb);
     qemu_mutex_lock_iothread();
+    replication_do_checkpoint_all(&local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
     qemu_savevm_state_complete_precopy(fb, false, false);
     qemu_mutex_unlock_iothread();
 
@@ -396,6 +418,7 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
     ret = 0;
 
     qemu_mutex_lock_iothread();
+
     vm_start();
     qemu_mutex_unlock_iothread();
     trace_colo_vm_state_change("stop", "run");
@@ -445,6 +468,12 @@ static void colo_process_checkpoint(MigrationState *s)
     object_unref(OBJECT(bioc));
 
     qemu_mutex_lock_iothread();
+    replication_start_all(REPLICATION_MODE_PRIMARY, &local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
+
     vm_start();
     qemu_mutex_unlock_iothread();
     trace_colo_vm_state_change("stop", "run");
@@ -584,6 +613,11 @@ void *colo_process_incoming_thread(void *opaque)
     object_unref(OBJECT(bioc));
 
     qemu_mutex_lock_iothread();
+    replication_start_all(REPLICATION_MODE_SECONDARY, &local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
     vm_start();
     trace_colo_vm_state_change("stop", "run");
     qemu_mutex_unlock_iothread();
@@ -664,6 +698,18 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        replication_get_error_all(&local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+        /* discard colo disk buffer */
+        replication_do_checkpoint_all(&local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+
         vmstate_loading = false;
         vm_start();
         trace_colo_vm_state_change("stop", "run");
diff --git a/migration/migration.c b/migration/migration.c
index 5f8c2de..23b3cff 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -323,6 +323,7 @@ static void process_incoming_migration_co(void *opaque)
     MigrationIncomingState *mis = migration_incoming_get_current();
     PostcopyState ps;
     int ret;
+    Error *local_err = NULL;
 
     assert(mis->from_src_file);
     mis->largest_page_size = qemu_ram_pagesize_largest();
@@ -354,6 +355,14 @@ static void process_incoming_migration_co(void *opaque)
 
     /* we get COLO info, and know if we are in COLO mode */
     if (!ret && migration_incoming_enable_colo()) {
+        /* Make sure all file formats flush their mutable metadata */
+        bdrv_invalidate_cache_all(&local_err);
+        if (local_err) {
+            migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
+                    MIGRATION_STATUS_FAILED);
+            error_report_err(local_err);
+            exit(EXIT_FAILURE);
+        }
         mis->migration_incoming_co = qemu_coroutine_self();
         qemu_thread_create(&mis->colo_incoming_thread, "COLO incoming",
              colo_process_incoming_thread, mis, QEMU_THREAD_JOINABLE);
-- 
2.7.4


* [Qemu-devel] [PATCH V4 06/16] COLO: Remove colo_state migration struct
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

The incoming side needs to know whether migration is going into COLO
state before normal migration starts.

Instead of using a VMStateDescription to send colo_state from the
source side to the destination side, we use the MIG_CMD_ENABLE_COLO
migration command to indicate whether COLO is enabled or not.
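
The resulting flow, condensed from the diff below:

    /* source side, early in migration_thread(): */
    if (migrate_colo_enabled()) {
        /* Notify migration destination that we enable COLO */
        qemu_savevm_send_colo_enable(s->to_dst_file);
    }

    /* destination side, in loadvm_process_command(): */
    case MIG_CMD_ENABLE_COLO:
        return loadvm_process_enable_colo(mis);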

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 include/migration/colo.h |  5 ++--
 migration/Makefile.objs  |  2 +-
 migration/colo-comm.c    | 76 ------------------------------------------------
 migration/colo.c         | 13 ++++++++-
 migration/migration.c    | 23 ++++++++++++++-
 migration/savevm.c       | 19 ++++++++++++
 migration/savevm.h       |  1 +
 migration/trace-events   |  1 +
 vl.c                     |  2 --
 9 files changed, 59 insertions(+), 83 deletions(-)
 delete mode 100644 migration/colo-comm.c

diff --git a/include/migration/colo.h b/include/migration/colo.h
index 6adf3a5..546cb9a 100644
--- a/include/migration/colo.h
+++ b/include/migration/colo.h
@@ -27,8 +27,9 @@ void migrate_start_colo_process(MigrationState *s);
 bool migration_in_colo_state(void);
 
 /* loadvm */
-bool migration_incoming_enable_colo(void);
-void migration_incoming_exit_colo(void);
+void migration_incoming_enable_colo(void);
+void migration_incoming_disable_colo(void);
+bool migration_incoming_colo_enabled(void);
 void *colo_process_incoming_thread(void *opaque);
 bool migration_incoming_in_colo_state(void);
 
diff --git a/migration/Makefile.objs b/migration/Makefile.objs
index 99e0380..3099eec 100644
--- a/migration/Makefile.objs
+++ b/migration/Makefile.objs
@@ -1,6 +1,6 @@
 common-obj-y += migration.o socket.o fd.o exec.o
 common-obj-y += tls.o channel.o savevm.o
-common-obj-y += colo-comm.o colo.o colo-failover.o
+common-obj-y += colo.o colo-failover.o
 common-obj-y += vmstate.o vmstate-types.o page_cache.o
 common-obj-y += qemu-file.o global_state.o
 common-obj-y += qemu-file-channel.o
diff --git a/migration/colo-comm.c b/migration/colo-comm.c
deleted file mode 100644
index df26e4d..0000000
--- a/migration/colo-comm.c
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * COarse-grain LOck-stepping Virtual Machines for Non-stop Service (COLO)
- * (a.k.a. Fault Tolerance or Continuous Replication)
- *
- * Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
- * Copyright (c) 2016 FUJITSU LIMITED
- * Copyright (c) 2016 Intel Corporation
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or
- * later. See the COPYING file in the top-level directory.
- *
- */
-
-#include "qemu/osdep.h"
-#include "migration.h"
-#include "migration/colo.h"
-#include "migration/vmstate.h"
-#include "trace.h"
-
-typedef struct {
-     bool colo_requested;
-} COLOInfo;
-
-static COLOInfo colo_info;
-
-COLOMode get_colo_mode(void)
-{
-    if (migration_in_colo_state()) {
-        return COLO_MODE_PRIMARY;
-    } else if (migration_incoming_in_colo_state()) {
-        return COLO_MODE_SECONDARY;
-    } else {
-        return COLO_MODE_UNKNOWN;
-    }
-}
-
-static int colo_info_pre_save(void *opaque)
-{
-    COLOInfo *s = opaque;
-
-    s->colo_requested = migrate_colo_enabled();
-
-    return 0;
-}
-
-static bool colo_info_need(void *opaque)
-{
-   return migrate_colo_enabled();
-}
-
-static const VMStateDescription colo_state = {
-    .name = "COLOState",
-    .version_id = 1,
-    .minimum_version_id = 1,
-    .pre_save = colo_info_pre_save,
-    .needed = colo_info_need,
-    .fields = (VMStateField[]) {
-        VMSTATE_BOOL(colo_requested, COLOInfo),
-        VMSTATE_END_OF_LIST()
-    },
-};
-
-void colo_info_init(void)
-{
-    vmstate_register(NULL, 0, &colo_state, &colo_info);
-}
-
-bool migration_incoming_enable_colo(void)
-{
-    return colo_info.colo_requested;
-}
-
-void migration_incoming_exit_colo(void)
-{
-    colo_info.colo_requested = false;
-}
diff --git a/migration/colo.c b/migration/colo.c
index 0e689df..8d2e3f8 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -153,6 +153,17 @@ static void primary_vm_do_failover(void)
     qemu_sem_post(&s->colo_exit_sem);
 }
 
+COLOMode get_colo_mode(void)
+{
+    if (migration_in_colo_state()) {
+        return COLO_MODE_PRIMARY;
+    } else if (migration_incoming_in_colo_state()) {
+        return COLO_MODE_SECONDARY;
+    } else {
+        return COLO_MODE_UNKNOWN;
+    }
+}
+
 void colo_do_failover(MigrationState *s)
 {
     /* Make sure VM stopped while failover happened. */
@@ -747,7 +758,7 @@ out:
     if (mis->to_src_file) {
         qemu_fclose(mis->to_src_file);
     }
-    migration_incoming_exit_colo();
+    migration_incoming_disable_colo();
 
     return NULL;
 }
diff --git a/migration/migration.c b/migration/migration.c
index 23b3cff..6042ee3 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -245,6 +245,22 @@ void migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
     }
 }
 
+static bool migration_colo_enabled;
+bool migration_incoming_colo_enabled(void)
+{
+    return migration_colo_enabled;
+}
+
+void migration_incoming_disable_colo(void)
+{
+    migration_colo_enabled = false;
+}
+
+void migration_incoming_enable_colo(void)
+{
+    migration_colo_enabled = true;
+}
+
 void qemu_start_incoming_migration(const char *uri, Error **errp)
 {
     const char *p;
@@ -354,7 +370,7 @@ static void process_incoming_migration_co(void *opaque)
     }
 
     /* we get COLO info, and know if we are in COLO mode */
-    if (!ret && migration_incoming_enable_colo()) {
+    if (!ret && migration_incoming_colo_enabled()) {
         /* Make sure all file formats flush their mutable metadata */
         bdrv_invalidate_cache_all(&local_err);
         if (local_err) {
@@ -2383,6 +2399,11 @@ static void *migration_thread(void *opaque)
         qemu_savevm_send_postcopy_advise(s->to_dst_file);
     }
 
+    if (migrate_colo_enabled()) {
+        /* Notify migration destination that we enable COLO */
+        qemu_savevm_send_colo_enable(s->to_dst_file);
+    }
+
     qemu_savevm_state_setup(s->to_dst_file);
 
     s->setup_time = qemu_clock_get_ms(QEMU_CLOCK_HOST) - setup_start;
diff --git a/migration/savevm.c b/migration/savevm.c
index b7908f6..cd753c4 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -52,6 +52,7 @@
 #include "qemu/cutils.h"
 #include "io/channel-buffer.h"
 #include "io/channel-file.h"
+#include "migration/colo.h"
 
 #ifndef ETH_P_RARP
 #define ETH_P_RARP 0x8035
@@ -78,6 +79,9 @@ enum qemu_vm_cmd {
                                       were previously sent during
                                       precopy but are dirty. */
     MIG_CMD_PACKAGED,          /* Send a wrapped stream within this stream */
+
+    MIG_CMD_ENABLE_COLO, /* Enable COLO */
+
     MIG_CMD_MAX
 };
 
@@ -833,6 +837,12 @@ static void qemu_savevm_command_send(QEMUFile *f,
     qemu_fflush(f);
 }
 
+void qemu_savevm_send_colo_enable(QEMUFile *f)
+{
+    trace_savevm_send_colo_enable();
+    qemu_savevm_command_send(f, MIG_CMD_ENABLE_COLO, 0, NULL);
+}
+
 void qemu_savevm_send_ping(QEMUFile *f, uint32_t value)
 {
     uint32_t buf;
@@ -1749,6 +1759,12 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState *mis)
     return ret;
 }
 
+static int loadvm_process_enable_colo(MigrationIncomingState *mis)
+{
+    migration_incoming_enable_colo();
+    return 0;
+}
+
 /*
  * Process an incoming 'QEMU_VM_COMMAND'
  * 0           just a normal return
@@ -1817,6 +1833,9 @@ static int loadvm_process_command(QEMUFile *f)
 
     case MIG_CMD_POSTCOPY_RAM_DISCARD:
         return loadvm_postcopy_ram_handle_discard(mis, len);
+
+    case MIG_CMD_ENABLE_COLO:
+        return loadvm_process_enable_colo(mis);
     }
 
     return 0;
diff --git a/migration/savevm.h b/migration/savevm.h
index 295c4a1..041d23c 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -51,6 +51,7 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
                                            uint16_t len,
                                            uint64_t *start_list,
                                            uint64_t *length_list);
+void qemu_savevm_send_colo_enable(QEMUFile *f);
 
 int qemu_loadvm_state(QEMUFile *f);
 void qemu_loadvm_state_cleanup(void);
diff --git a/migration/trace-events b/migration/trace-events
index 141e773..59c7e3e 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -34,6 +34,7 @@ savevm_send_open_return_path(void) ""
 savevm_send_ping(uint32_t val) "0x%x"
 savevm_send_postcopy_listen(void) ""
 savevm_send_postcopy_run(void) ""
+savevm_send_colo_enable(void) ""
 savevm_state_setup(void) ""
 savevm_state_header(void) ""
 savevm_state_iterate(void) ""
diff --git a/vl.c b/vl.c
index 2586f25..0184c7d 100644
--- a/vl.c
+++ b/vl.c
@@ -4499,8 +4499,6 @@ int main(int argc, char **argv, char **envp)
 #endif
     }
 
-    colo_info_init();
-
     if (net_init_clients() < 0) {
         exit(1);
     }
-- 
2.7.4


* [Qemu-devel] [PATCH V4 07/16] COLO: Load dirty pages into SVM's RAM cache firstly
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen, Li Zhijian

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

We should not load the PVM's state directly into the SVM, because errors
may happen while the SVM is receiving data, which would break the SVM.

We need to ensure that all data has been received before we load the
state into the SVM, so we use extra memory to cache the data (the PVM's
RAM). The RAM cache on the secondary side is initially the same as the
SVM/PVM's memory. During checkpointing, we first cache the PVM's dirty
pages in this RAM cache, so the RAM cache is always the same as the PVM's
memory at every checkpoint; we then flush this cached RAM into the SVM
after we have received all of the PVM's state.
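
Condensed, the per-page destination while loading a stream becomes (a
sketch of the ram_load() logic in the diff below):

    if (migration_incoming_colo_enabled()) {
        /* back up the page into the cache; remember the cache slot in
         * host_bak only while we are still in the migration stage */
        host = colo_cache_from_block_offset(block, addr);
        if (!migration_incoming_in_colo_state()) {
            host_bak = host;
        }
    }
    if (!migration_incoming_in_colo_state()) {
        /* still migrating: the page itself goes into the SVM's RAM */
        host = host_from_ram_block_offset(block, addr);
    }

    /* ... and once the page has been loaded into 'host': */
    if (!ret && host_bak && host) {
        memcpy(host_bak, host, TARGET_PAGE_SIZE);
    }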

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 include/exec/ram_addr.h |  1 +
 migration/migration.c   |  2 +
 migration/ram.c         | 97 +++++++++++++++++++++++++++++++++++++++++++++++--
 migration/ram.h         |  4 ++
 migration/savevm.c      |  2 +-
 5 files changed, 102 insertions(+), 4 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 7633ef6..15e2474 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -27,6 +27,7 @@ struct RAMBlock {
     struct rcu_head rcu;
     struct MemoryRegion *mr;
     uint8_t *host;
+    uint8_t *colo_cache; /* For colo, VM's ram cache */
     ram_addr_t offset;
     ram_addr_t used_length;
     ram_addr_t max_length;
diff --git a/migration/migration.c b/migration/migration.c
index 6042ee3..be8defd 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -387,6 +387,8 @@ static void process_incoming_migration_co(void *opaque)
 
         /* Wait checkpoint incoming thread exit before free resource */
         qemu_thread_join(&mis->colo_incoming_thread);
+        /* We hold the global iothread lock, so it is safe here */
+        colo_release_ram_cache();
     }
 
     if (ret < 0) {
diff --git a/migration/ram.c b/migration/ram.c
index cb1950f..6460777 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2467,6 +2467,20 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
     return block->host + offset;
 }
 
+static inline void *colo_cache_from_block_offset(RAMBlock *block,
+                                                 ram_addr_t offset)
+{
+    if (!offset_in_ramblock(block, offset)) {
+        return NULL;
+    }
+    if (!block->colo_cache) {
+        error_report("%s: colo_cache is NULL in block :%s",
+                     __func__, block->idstr);
+        return NULL;
+    }
+    return block->colo_cache + offset;
+}
+
 /**
  * ram_handle_compressed: handle the zero page case
  *
@@ -2620,6 +2634,55 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
     qemu_mutex_unlock(&decomp_done_lock);
 }
 
+/*
+ * colo cache: this is for the secondary VM; we cache the whole
+ * memory of the secondary VM. The global lock must be held when
+ * calling this helper.
+ */
+int colo_init_ram_cache(void)
+{
+    RAMBlock *block;
+
+    rcu_read_lock();
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        block->colo_cache = qemu_anon_ram_alloc(block->used_length, NULL);
+        if (!block->colo_cache) {
+            error_report("%s: Can't alloc memory for COLO cache of block %s,"
+                         "size 0x" RAM_ADDR_FMT, __func__, block->idstr,
+                         block->used_length);
+            goto out_locked;
+        }
+    }
+    rcu_read_unlock();
+    return 0;
+
+out_locked:
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        if (block->colo_cache) {
+            qemu_anon_ram_free(block->colo_cache, block->used_length);
+            block->colo_cache = NULL;
+        }
+    }
+
+    rcu_read_unlock();
+    return -errno;
+}
+
+/* The global lock must be held when calling this helper */
+void colo_release_ram_cache(void)
+{
+    RAMBlock *block;
+
+    rcu_read_lock();
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        if (block->colo_cache) {
+            qemu_anon_ram_free(block->colo_cache, block->used_length);
+            block->colo_cache = NULL;
+        }
+    }
+    rcu_read_unlock();
+}
+
 /**
  * ram_load_setup: Setup RAM for migration incoming side
  *
@@ -2633,6 +2696,7 @@ static int ram_load_setup(QEMUFile *f, void *opaque)
     xbzrle_load_setup();
     compress_threads_load_setup();
     ramblock_recv_map_init();
+
     return 0;
 }
 
@@ -2646,6 +2710,7 @@ static int ram_load_cleanup(void *opaque)
         g_free(rb->receivedmap);
         rb->receivedmap = NULL;
     }
+
     return 0;
 }
 
@@ -2846,7 +2911,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
 
     while (!postcopy_running && !ret && !(flags & RAM_SAVE_FLAG_EOS)) {
         ram_addr_t addr, total_ram_bytes;
-        void *host = NULL;
+        void *host = NULL, *host_bak = NULL;
         uint8_t ch;
 
         addr = qemu_get_be64(f);
@@ -2866,13 +2931,36 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
                      RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
             RAMBlock *block = ram_block_from_stream(f, flags);
 
-            host = host_from_ram_block_offset(block, addr);
+            /*
+             * After going into COLO, we should load the page into
+             * colo_cache.
+             * NOTE: We need to keep a copy of the SVM's RAM in colo_cache.
+             * Previously, we copied all this memory in the preparing stage
+             * of COLO, for which the VM had to be stopped, a time-consuming
+             * process. Here we optimize it by a trick: back up every page
+             * during the migration process while COLO is enabled. Though
+             * this affects the speed of the migration, it obviously reduces
+             * the downtime of backing up all of the SVM's memory in the
+             * COLO preparing stage.
+             */
+            if (migration_incoming_colo_enabled()) {
+                host = colo_cache_from_block_offset(block, addr);
+                /* After going into COLO state, don't back it up any more */
+                if (!migration_incoming_in_colo_state()) {
+                    host_bak = host;
+                }
+            }
+            if (!migration_incoming_in_colo_state()) {
+                host = host_from_ram_block_offset(block, addr);
+            }
             if (!host) {
                 error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
                 ret = -EINVAL;
                 break;
             }
-            ramblock_recv_bitmap_set(block, host);
+
+            if (!migration_incoming_in_colo_state()) {
+                ramblock_recv_bitmap_set(block, host);
+            }
+
             trace_ram_load_loop(block->idstr, (uint64_t)addr, flags, host);
         }
 
@@ -2967,6 +3055,9 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
         if (!ret) {
             ret = qemu_file_get_error(f);
         }
+        if (!ret && host_bak && host) {
+            memcpy(host_bak, host, TARGET_PAGE_SIZE);
+        }
     }
 
     wait_for_decompress_done();
diff --git a/migration/ram.h b/migration/ram.h
index 64d81e9..07abf71 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -62,4 +62,8 @@ int ramblock_recv_bitmap_test(RAMBlock *rb, void *host_addr);
 void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
 void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr, size_t nr);
 
+/* ram cache */
+int colo_init_ram_cache(void);
+void colo_release_ram_cache(void);
+
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index cd753c4..c582716 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1762,7 +1762,7 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState *mis)
 static int loadvm_process_enable_colo(MigrationIncomingState *mis)
 {
     migration_incoming_enable_colo();
-    return 0;
+    return colo_init_ram_cache();
 }
 
 /*
-- 
2.7.4


* [Qemu-devel] [PATCH V4 08/16] ram/COLO: Record the dirty pages that SVM received
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

We record the addresses of the dirty pages that are received; this helps
flush the pages that are cached for the SVM.

The trick here is that we record the dirty pages by reusing the migration
dirty bitmap. In a later patch, we will start dirty logging for the SVM,
just like migration; in this way we can record the dirty pages caused by
both the PVM and the SVM, and we only flush those dirty pages from the
RAM cache while doing a checkpoint.
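
As a worked example of the bookkeeping: with 4 KiB target pages
(TARGET_PAGE_BITS == 12, e.g. on x86), a page received at offset
0x5000 within a RAMBlock maps to bit 0x5000 >> 12 == 5 of block->bmap;
test_and_set_bit() returns the old bit value, so
migration_dirty_pages is incremented only the first time a given page
is marked.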

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index 6460777..d916da0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2478,6 +2478,15 @@ static inline void *colo_cache_from_block_offset(RAMBlock *block,
                      __func__, block->idstr);
         return NULL;
     }
+
+    /*
+    * During a COLO checkpoint, we need a bitmap of these migrated pages.
+    * It helps us decide which pages in the RAM cache should be flushed
+    * into the VM's RAM later.
+    */
+    if (!test_and_set_bit(offset >> TARGET_PAGE_BITS, block->bmap)) {
+        ram_state->migration_dirty_pages++;
+    }
     return block->colo_cache + offset;
 }
 
@@ -2654,6 +2663,24 @@ int colo_init_ram_cache(void)
         }
     }
     rcu_read_unlock();
+    /*
+    * Record the dirty pages sent by the PVM; we use this dirty bitmap to
+    * decide which pages in the cache should be flushed into the SVM's RAM.
+    * Here we use the same name 'ram_bitmap' as for migration.
+    */
+    if (ram_bytes_total()) {
+        RAMBlock *block;
+
+        QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+            unsigned long pages = block->max_length >> TARGET_PAGE_BITS;
+
+            block->bmap = bitmap_new(pages);
+            bitmap_set(block->bmap, 0, pages);
+        }
+    }
+    ram_state = g_new0(RAMState, 1);
+    ram_state->migration_dirty_pages = 0;
+
     return 0;
 
 out_locked:
@@ -2673,6 +2700,10 @@ void colo_release_ram_cache(void)
 {
     RAMBlock *block;
 
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        g_free(block->bmap);
+        block->bmap = NULL;
+    }
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         if (block->colo_cache) {
@@ -2681,6 +2712,8 @@ void colo_release_ram_cache(void)
         }
     }
     rcu_read_unlock();
+    g_free(ram_state);
+    ram_state = NULL;
 }
 
 /**
-- 
2.7.4


* [Qemu-devel] [PATCH V4 09/16] COLO: Flush memory data from ram cache
From: Zhang Chen @ 2018-01-19 13:44 UTC
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen, Li Zhijian

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

While the VM is running, the PVM may dirty some pages; we transfer the
PVM's dirty pages to the SVM and store them in the SVM's RAM cache at the
next checkpoint. So the content of the SVM's RAM cache is always the same
as the PVM's memory right after a checkpoint.

Instead of flushing the whole content of the RAM cache into the SVM's
memory, we do this in a more efficient way: only flush the pages dirtied
by the PVM since the last checkpoint. In this way, we can ensure that the
SVM's memory is the same as the PVM's.

Besides, we must ensure that the RAM cache is flushed before the device
state is loaded.

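A rough standalone model of that flush (plain C; the fixed-size arrays
and names below are stand-ins for QEMU's migration bitmap and RAMBlock
fields, not its real API): walk the set bits, copy each dirtied page
from the cache into RAM, clear the bit, and expect the dirty counter
to reach zero afterwards.

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    #define NPAGES    8
    #define PAGE_SIZE 4096

    static unsigned char ram[NPAGES][PAGE_SIZE];    /* SVM's memory */
    static unsigned char cache[NPAGES][PAGE_SIZE];  /* RAM cache from PVM */
    static unsigned long bmap;                      /* one bit per page */
    static uint64_t dirty_pages;

    static void flush_cache_to_ram(void)
    {
        for (unsigned p = 0; p < NPAGES; p++) {
            if (bmap & (1UL << p)) {
                memcpy(ram[p], cache[p], PAGE_SIZE); /* dirtied pages only */
                bmap &= ~(1UL << p);
                dirty_pages--;
            }
        }
        assert(dirty_pages == 0);  /* every recorded page was flushed */
    }
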
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c        | 39 +++++++++++++++++++++++++++++++++++++++
 migration/trace-events |  2 ++
 2 files changed, 41 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index d916da0..faee086 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2909,6 +2909,40 @@ static bool postcopy_is_running(void)
     return ps >= POSTCOPY_INCOMING_LISTENING && ps < POSTCOPY_INCOMING_END;
 }
 
+/*
+ * Flush the content of the RAM cache into the SVM's memory.
+ * Only flush the pages dirtied by the PVM, the SVM, or both.
+ */
+static void colo_flush_ram_cache(void)
+{
+    RAMBlock *block = NULL;
+    void *dst_host;
+    void *src_host;
+    unsigned long offset = 0;
+
+    trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
+    rcu_read_lock();
+    block = QLIST_FIRST_RCU(&ram_list.blocks);
+
+    while (block) {
+        offset = migration_bitmap_find_dirty(ram_state, block, offset);
+        migration_bitmap_clear_dirty(ram_state, block, offset);
+
+        if (offset << TARGET_PAGE_BITS >= block->used_length) {
+            offset = 0;
+            block = QLIST_NEXT_RCU(block, next);
+        } else {
+            dst_host = block->host + (offset << TARGET_PAGE_BITS);
+            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
+            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+        }
+    }
+
+    rcu_read_unlock();
+    trace_colo_flush_ram_cache_end();
+    assert(ram_state->migration_dirty_pages == 0);
+}
+
 static int ram_load(QEMUFile *f, void *opaque, int version_id)
 {
     int flags = 0, ret = 0, invalid_flags = 0;
@@ -2921,6 +2955,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     bool postcopy_running = postcopy_is_running();
     /* ADVISE is earlier, it shows the source has the postcopy capability on */
     bool postcopy_advised = postcopy_is_advised();
+    bool need_flush = false;
 
     seq_iter++;
 
@@ -3096,6 +3131,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     wait_for_decompress_done();
     rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
+
+    if (!ret && migration_incoming_in_colo_state() && need_flush) {
+        colo_flush_ram_cache();
+    }
     return ret;
 }
 
diff --git a/migration/trace-events b/migration/trace-events
index 59c7e3e..eb56cc6 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -78,6 +78,8 @@ ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
 ram_postcopy_send_discard_bitmap(void) ""
 ram_save_page(const char *rbname, uint64_t offset, void *host) "%s: offset: 0x%" PRIx64 " host: %p"
 ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: 0x%zx len: 0x%zx"
+colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
+colo_flush_ram_cache_end(void) ""
 
 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-01-19 13:44 [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (8 preceding siblings ...)
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 09/16] COLO: Flush memory data from ram cache Zhang Chen
@ 2018-01-19 13:44 ` Zhang Chen
  2018-02-03 15:49   ` Markus Armbruster
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 11/16] savevm: split the process of different stages for loadvm/savevm Zhang Chen
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 25+ messages in thread
From: Zhang Chen @ 2018-01-19 13:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen, Li Zhijian

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

If some error happens during the VM's COLO FT stage, it's important to
notify the users of this event. Together with 'x-colo-lost-heartbeat',
users can intervene in COLO's failover work immediately.
Even if users don't want to get involved in COLO's failover verdict,
it is still necessary to notify them that we have exited COLO mode.

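A tiny standalone model of how the exit reason is derived from the
failover state (plain C; the enums are stand-ins for QEMU's
FailoverStatus and COLOExitReason): if no failover was requested,
leaving COLO can only mean an internal error.

    typedef enum { FAILOVER_NONE, FAILOVER_REQUESTED } FailoverStatus;
    typedef enum { EXIT_REASON_ERROR, EXIT_REASON_REQUEST } ExitReason;

    /* No failover request on record means the exit was an error. */
    static ExitReason colo_exit_reason(FailoverStatus s)
    {
        return s == FAILOVER_NONE ? EXIT_REASON_ERROR
                                  : EXIT_REASON_REQUEST;
    }
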
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 migration/colo.c    | 19 +++++++++++++++++++
 qapi/migration.json | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 8d2e3f8..790b122 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -516,6 +516,18 @@ out:
         qemu_fclose(fb);
     }
 
+    /*
+     * There are only two reasons we can get here: some error happened,
+     * or the user triggered failover.
+     */
+    if (failover_get_state() == FAILOVER_STATUS_NONE) {
+        qapi_event_send_colo_exit(COLO_MODE_PRIMARY,
+                                  COLO_EXIT_REASON_ERROR, NULL);
+    } else {
+        qapi_event_send_colo_exit(COLO_MODE_PRIMARY,
+                                  COLO_EXIT_REASON_REQUEST, NULL);
+    }
+
     /* Hope this not to be too long to wait here */
     qemu_sem_wait(&s->colo_exit_sem);
     qemu_sem_destroy(&s->colo_exit_sem);
@@ -746,6 +758,13 @@ out:
     if (local_err) {
         error_report_err(local_err);
     }
+    if (failover_get_state() == FAILOVER_STATUS_NONE) {
+        qapi_event_send_colo_exit(COLO_MODE_SECONDARY,
+                                  COLO_EXIT_REASON_ERROR, NULL);
+    } else {
+        qapi_event_send_colo_exit(COLO_MODE_SECONDARY,
+                                  COLO_EXIT_REASON_REQUEST, NULL);
+    }
 
     if (fb) {
         qemu_fclose(fb);
diff --git a/qapi/migration.json b/qapi/migration.json
index 70e7b67..6fc95b7 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -869,6 +869,41 @@
   'data': [ 'none', 'require', 'active', 'completed', 'relaunch' ] }
 
 ##
+# @COLO_EXIT:
+#
+# Emitted when the VM finishes COLO mode, due to some error happening
+# or at the request of users.
+#
+# @mode: which COLO mode the VM was in when it exited.
+#
+# @reason: describes the reason for the COLO exit.
+#
+# Since: 2.12
+#
+# Example:
+#
+# <- { "timestamp": {"seconds": 2032141960, "microseconds": 417172},
+#      "event": "COLO_EXIT", "data": {"mode": "primary", "reason": "request" } }
+#
+##
+{ 'event': 'COLO_EXIT',
+  'data': {'mode': 'COLOMode', 'reason': 'COLOExitReason' } }
+
+##
+# @COLOExitReason:
+#
+# The reason for a COLO exit
+#
+# @request: COLO exit is due to an external request
+#
+# @error: COLO exit is due to an internal error
+#
+# Since: 2.12
+##
+{ 'enum': 'COLOExitReason',
+  'data': [ 'request', 'error' ] }
+
+##
 # @x-colo-lost-heartbeat:
 #
 # Tell qemu that heartbeat is lost, request it to do takeover procedures.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [Qemu-devel] [PATCH V4 11/16] savevm: split the process of different stages for loadvm/savevm
  2018-01-19 13:44 [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (9 preceding siblings ...)
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO Zhang Chen
@ 2018-01-19 13:44 ` Zhang Chen
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 12/16] COLO: flush host dirty ram from cache Zhang Chen
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Zhang Chen @ 2018-01-19 13:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen, Li Zhijian

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

There are several stages in the loadvm/savevm process. In different
stages, incoming migration processes different types of sections.
We want to control these stages more precisely; it will benefit COLO
performance, since we don't have to save QEMU_VM_SECTION_START
sections every time we do a checkpoint. Besides, we want to separate
the process of saving/loading memory and device state.

So we add two new helper functions, qemu_savevm_live_state() and
qemu_load_device_state(), to handle these different stages during
migration.

Besides, we make qemu_loadvm_state_main() and qemu_save_device_state()
public, and simplify qemu_save_device_state() by calling the wrapper
qemu_savevm_state_header().

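A toy model of the resulting two-stage save (plain C; 'channel',
'send_bytes' and the string payloads are hypothetical stand-ins for
QEMUFile and the real section data): live RAM is streamed directly,
device state is staged into a buffer, and the buffer's size travels
ahead of it so the receiver knows exactly how much to read.

    #include <stdio.h>
    #include <string.h>

    static char channel[4096];   /* stands in for the migration socket */
    static size_t channel_len;

    static void send_bytes(const void *buf, size_t len)
    {
        memcpy(channel + channel_len, buf, len);
        channel_len += len;
    }

    int main(void)
    {
        char fb[256];                       /* device-state buffer */
        const char ram[] = "ram-pages";     /* stage 1: live state */
        const char dev[] = "device-state";  /* stage 2: device state */
        size_t vmstate_size = sizeof(dev);

        send_bytes(ram, sizeof(ram));       /* streamed directly */
        memcpy(fb, dev, vmstate_size);      /* buffered first... */
        send_bytes(&vmstate_size, sizeof(vmstate_size)); /* size ahead */
        send_bytes(fb, vmstate_size);       /* ...then the buffer */

        printf("sent %zu bytes in total\n", channel_len);
        return 0;
    }
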
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/colo.c   | 37 +++++++++++++++++++++++++++++--------
 migration/savevm.c | 35 ++++++++++++++++++++++++++++-------
 migration/savevm.h |  4 ++++
 3 files changed, 61 insertions(+), 15 deletions(-)

diff --git a/migration/colo.c b/migration/colo.c
index 790b122..a931ff2 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -29,6 +29,7 @@
 #include "qapi-event.h"
 #include "block/block.h"
 #include "replication.h"
+#include "sysemu/cpus.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -380,24 +381,31 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
 
     /* Disable block migration */
     migrate_set_block_enabled(false, &local_err);
-    qemu_savevm_state_header(fb);
-    qemu_savevm_state_setup(fb);
     qemu_mutex_lock_iothread();
     replication_do_checkpoint_all(&local_err);
     if (local_err) {
         qemu_mutex_unlock_iothread();
         goto out;
     }
-    qemu_savevm_state_complete_precopy(fb, false, false);
-    qemu_mutex_unlock_iothread();
-
-    qemu_fflush(fb);
 
     colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND, &local_err);
     if (local_err) {
         goto out;
     }
     /*
+     * Only save the VM's live state, which does not include device state.
+     * TODO: We may need a timeout mechanism to prevent the COLO process
+     * from being blocked here.
+     */
+    qemu_savevm_live_state(s->to_dst_file);
+    /* Note: device state is saved into buffer */
+    ret = qemu_save_device_state(fb);
+
+    qemu_mutex_unlock_iothread();
+
+    qemu_fflush(fb);
+
+    /*
      * We need the size of the VMstate data in Secondary side,
      * With which we can decide how much data should be read.
      */
@@ -610,6 +618,7 @@ void *colo_process_incoming_thread(void *opaque)
     uint64_t total_size;
     uint64_t value;
     Error *local_err = NULL;
+    int ret;
 
     qemu_sem_init(&mis->colo_incoming_sem, 0);
 
@@ -682,6 +691,16 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        qemu_mutex_lock_iothread();
+        cpu_synchronize_all_pre_loadvm();
+        ret = qemu_loadvm_state_main(mis->from_src_file, mis);
+        qemu_mutex_unlock_iothread();
+
+        if (ret < 0) {
+            error_report("Load VM's live state (ram) error");
+            goto out;
+        }
+
         value = colo_receive_message_value(mis->from_src_file,
                                  COLO_MESSAGE_VMSTATE_SIZE, &local_err);
         if (local_err) {
@@ -715,8 +734,9 @@ void *colo_process_incoming_thread(void *opaque)
         qemu_mutex_lock_iothread();
         qemu_system_reset(SHUTDOWN_CAUSE_NONE);
         vmstate_loading = true;
-        if (qemu_loadvm_state(fb) < 0) {
-            error_report("COLO: loadvm failed");
+        ret = qemu_load_device_state(fb);
+        if (ret < 0) {
+            error_report("COLO: load device state failed");
             qemu_mutex_unlock_iothread();
             goto out;
         }
@@ -777,6 +797,7 @@ out:
     if (mis->to_src_file) {
         qemu_fclose(mis->to_src_file);
     }
+    qemu_loadvm_state_cleanup();
     migration_incoming_disable_colo();
 
     return NULL;
diff --git a/migration/savevm.c b/migration/savevm.c
index c582716..30a3c77 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1317,13 +1317,20 @@ done:
     return ret;
 }
 
-static int qemu_save_device_state(QEMUFile *f)
+void qemu_savevm_live_state(QEMUFile *f)
 {
-    SaveStateEntry *se;
+    /* save QEMU_VM_SECTION_END section */
+    qemu_savevm_state_complete_precopy(f, true, false);
+    qemu_put_byte(f, QEMU_VM_EOF);
+}
 
-    qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
-    qemu_put_be32(f, QEMU_VM_FILE_VERSION);
+int qemu_save_device_state(QEMUFile *f)
+{
+    SaveStateEntry *se;
 
+    if (!migration_in_colo_state()) {
+        qemu_savevm_state_header(f);
+    }
     cpu_synchronize_all_states();
 
     QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
@@ -1379,8 +1386,6 @@ enum LoadVMExitCodes {
     LOADVM_QUIT     =  1,
 };
 
-static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
-
 /* ------ incoming postcopy messages ------ */
 /* 'advise' arrives before any transfers just to tell us that a postcopy
  * *might* happen - it might be skipped if precopy transferred everything
@@ -2003,7 +2008,7 @@ void qemu_loadvm_state_cleanup(void)
     }
 }
 
-static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
+int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
 {
     uint8_t section_type;
     int ret = 0;
@@ -2148,6 +2153,22 @@ int qemu_loadvm_state(QEMUFile *f)
     return ret;
 }
 
+int qemu_load_device_state(QEMUFile *f)
+{
+    MigrationIncomingState *mis = migration_incoming_get_current();
+    int ret;
+
+    /* Load QEMU_VM_SECTION_FULL section */
+    ret = qemu_loadvm_state_main(f, mis);
+    if (ret < 0) {
+        error_report("Failed to load device state: %d", ret);
+        return ret;
+    }
+
+    cpu_synchronize_all_post_init();
+    return 0;
+}
+
 int save_snapshot(const char *name, Error **errp)
 {
     BlockDriverState *bs, *bs1;
diff --git a/migration/savevm.h b/migration/savevm.h
index 041d23c..8d463fd 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -52,8 +52,12 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
                                            uint64_t *start_list,
                                            uint64_t *length_list);
 void qemu_savevm_send_colo_enable(QEMUFile *f);
+void qemu_savevm_live_state(QEMUFile *f);
+int qemu_save_device_state(QEMUFile *f);
 
 int qemu_loadvm_state(QEMUFile *f);
 void qemu_loadvm_state_cleanup(void);
+int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
+int qemu_load_device_state(QEMUFile *f);
 
 #endif
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [Qemu-devel] [PATCH V4 12/16] COLO: flush host dirty ram from cache
  2018-01-19 13:44 [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (10 preceding siblings ...)
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 11/16] savevm: split the process of different stages for loadvm/savevm Zhang Chen
@ 2018-01-19 13:44 ` Zhang Chen
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 13/16] filter: Add handle_event method for NetFilterClass Zhang Chen
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Zhang Chen @ 2018-01-19 13:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen, Li Zhijian

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

We don't need to flush all of the VM's RAM from the cache; only flush
the pages dirtied since the last checkpoint.

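A minimal sketch of the merge step (plain C using GCC's
__builtin_popcountl; QEMU does this with memory_global_dirty_log_sync()
plus migration_bitmap_sync_range()): OR the SVM-side dirty log into the
cache bitmap and count only the newly set bits, so the flush loop's
counter stays accurate.

    #include <stddef.h>
    #include <stdint.h>

    static void sync_svm_dirty_log(unsigned long *cache_bmap,
                                   const unsigned long *svm_log,
                                   size_t nwords, uint64_t *dirty_pages)
    {
        for (size_t i = 0; i < nwords; i++) {
            /* bits dirtied by the SVM that the bitmap missed so far */
            unsigned long newbits = svm_log[i] & ~cache_bmap[i];

            cache_bmap[i] |= newbits;
            *dirty_pages += (uint64_t)__builtin_popcountl(newbits);
        }
    }
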
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 migration/ram.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index faee086..7f9ce60 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2680,6 +2680,7 @@ int colo_init_ram_cache(void)
     }
     ram_state = g_new0(RAMState, 1);
     ram_state->migration_dirty_pages = 0;
+    memory_global_dirty_log_start();
 
     return 0;
 
@@ -2700,10 +2701,12 @@ void colo_release_ram_cache(void)
 {
     RAMBlock *block;
 
+    memory_global_dirty_log_stop();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         g_free(block->bmap);
         block->bmap = NULL;
     }
+
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         if (block->colo_cache) {
@@ -2920,6 +2923,15 @@ static void colo_flush_ram_cache(void)
     void *src_host;
     unsigned long offset = 0;
 
+    memory_global_dirty_log_sync();
+    qemu_mutex_lock(&ram_state->bitmap_mutex);
+    rcu_read_lock();
+    RAMBLOCK_FOREACH(block) {
+        migration_bitmap_sync_range(ram_state, block, 0, block->used_length);
+    }
+    rcu_read_unlock();
+    qemu_mutex_unlock(&ram_state->bitmap_mutex);
+
     trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
     rcu_read_lock();
     block = QLIST_FIRST_RCU(&ram_list.blocks);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [Qemu-devel] [PATCH V4 13/16] filter: Add handle_event method for NetFilterClass
  2018-01-19 13:44 [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (11 preceding siblings ...)
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 12/16] COLO: flush host dirty ram from cache Zhang Chen
@ 2018-01-19 13:44 ` Zhang Chen
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 14/16] filter-rewriter: handle checkpoint and failover event Zhang Chen
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Zhang Chen @ 2018-01-19 13:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

Filters need to process checkpoint/failover and other events passed
by the COLO frame.

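A self-contained model of the class-method dispatch this adds (plain C
with hypothetical types; QEMU's real mechanism is the QOM
NetFilterClass plus the colo_notify_filters_event() in the diff below):
the hook is optional, so the notifier skips filters that leave it
unset.

    #include <stdio.h>

    enum { COLO_EVENT_CHECKPOINT, COLO_EVENT_FAILOVER };

    typedef struct Filter Filter;

    typedef struct {
        void (*handle_event)(Filter *f, int event);  /* optional hook */
    } FilterClass;

    struct Filter {
        const FilterClass *klass;
        const char *name;
    };

    static void rewriter_handle_event(Filter *f, int event)
    {
        printf("%s: handling event %d\n", f->name, event);
    }

    static void notify_filters(Filter **filters, int n, int event)
    {
        for (int i = 0; i < n; i++) {
            const FilterClass *fc = filters[i]->klass;
            if (fc->handle_event) {  /* skip filters without the hook */
                fc->handle_event(filters[i], event);
            }
        }
    }

    int main(void)
    {
        static const FilterClass rewriter_class = { rewriter_handle_event };
        Filter rw = { &rewriter_class, "filter-rewriter" };
        Filter *all[] = { &rw };

        notify_filters(all, 1, COLO_EVENT_CHECKPOINT);
        return 0;
    }
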
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 include/net/filter.h |  5 +++++
 net/filter.c         | 17 +++++++++++++++++
 net/net.c            | 28 ++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+)

diff --git a/include/net/filter.h b/include/net/filter.h
index 0c4a2ea..df4510d 100644
--- a/include/net/filter.h
+++ b/include/net/filter.h
@@ -37,6 +37,8 @@ typedef ssize_t (FilterReceiveIOV)(NetFilterState *nc,
 
 typedef void (FilterStatusChanged) (NetFilterState *nf, Error **errp);
 
+typedef void (FilterHandleEvent) (NetFilterState *nf, int event, Error **errp);
+
 typedef struct NetFilterClass {
     ObjectClass parent_class;
 
@@ -44,6 +46,7 @@ typedef struct NetFilterClass {
     FilterSetup *setup;
     FilterCleanup *cleanup;
     FilterStatusChanged *status_changed;
+    FilterHandleEvent *handle_event;
     /* mandatory */
     FilterReceiveIOV *receive_iov;
 } NetFilterClass;
@@ -76,4 +79,6 @@ ssize_t qemu_netfilter_pass_to_next(NetClientState *sender,
                                     int iovcnt,
                                     void *opaque);
 
+void colo_notify_filters_event(int event, Error **errp);
+
 #endif /* QEMU_NET_FILTER_H */
diff --git a/net/filter.c b/net/filter.c
index 2fd7d7d..0f17eba 100644
--- a/net/filter.c
+++ b/net/filter.c
@@ -17,6 +17,8 @@
 #include "net/vhost_net.h"
 #include "qom/object_interfaces.h"
 #include "qemu/iov.h"
+#include "net/colo.h"
+#include "migration/colo.h"
 
 static inline bool qemu_can_skip_netfilter(NetFilterState *nf)
 {
@@ -245,11 +247,26 @@ static void netfilter_finalize(Object *obj)
     g_free(nf->netdev_id);
 }
 
+static void dummy_handle_event(NetFilterState *nf, int event, Error **errp)
+{
+    switch (event) {
+    case COLO_EVENT_CHECKPOINT:
+        break;
+    case COLO_EVENT_FAILOVER:
+        object_property_set_str(OBJECT(nf), "off", "status", errp);
+        break;
+    default:
+        break;
+    }
+}
+
 static void netfilter_class_init(ObjectClass *oc, void *data)
 {
     UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
+    NetFilterClass *nfc = NETFILTER_CLASS(oc);
 
     ucc->complete = netfilter_complete;
+    nfc->handle_event = dummy_handle_event;
 }
 
 static const TypeInfo netfilter_info = {
diff --git a/net/net.c b/net/net.c
index 2b81c93..56a54e7 100644
--- a/net/net.c
+++ b/net/net.c
@@ -1399,6 +1399,34 @@ void hmp_info_network(Monitor *mon, const QDict *qdict)
     }
 }
 
+void colo_notify_filters_event(int event, Error **errp)
+{
+    NetClientState *nc, *peer;
+    NetClientDriver type;
+    NetFilterState *nf;
+    NetFilterClass *nfc = NULL;
+    Error *local_err = NULL;
+
+    QTAILQ_FOREACH(nc, &net_clients, next) {
+        peer = nc->peer;
+        type = nc->info->type;
+        if (!peer || type != NET_CLIENT_DRIVER_TAP) {
+            continue;
+        }
+        QTAILQ_FOREACH(nf, &nc->filters, next) {
+            nfc = NETFILTER_GET_CLASS(OBJECT(nf));
+            if (!nfc->handle_event) {
+                continue;
+            }
+            nfc->handle_event(nf, event, &local_err);
+            if (local_err) {
+                error_propagate(errp, local_err);
+                return;
+            }
+        }
+    }
+}
+
 void qmp_set_link(const char *name, bool up, Error **errp)
 {
     NetClientState *ncs[MAX_QUEUE_NUM];
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [Qemu-devel] [PATCH V4 14/16] filter-rewriter: handle checkpoint and failover event
  2018-01-19 13:44 [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (12 preceding siblings ...)
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 13/16] filter: Add handle_event method for NetFilterClass Zhang Chen
@ 2018-01-19 13:44 ` Zhang Chen
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 15/16] COLO: notify net filters about checkpoint/failover event Zhang Chen
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 25+ messages in thread
From: Zhang Chen @ 2018-01-19 13:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

After one round of checkpointing, the states of the PVM and SVM
become consistent, so it is unnecessary to adjust the sequence
numbers of net packets for old connections. Besides, when failover
happens, filter-rewriter needs to check whether it still needs to
adjust the sequence numbers of net packets.

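The policy in isolation (plain C; the array of Conn structs is a
stand-in for the GHashTable of tracked connections): checkpoint resets
every per-connection sequence offset, and failover may switch the
rewriter off only when no offset is still pending.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        long seq_offset;  /* stands in for Connection->offset */
    } Conn;

    /* Checkpoint: PVM and SVM are consistent again, so stale
     * offsets no longer apply to any tracked connection. */
    static void on_checkpoint(Conn *conns, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            conns[i].seq_offset = 0;
        }
    }

    /* Failover: disabling the rewriter is safe only when no
     * connection still needs its sequence numbers adjusted. */
    static bool can_disable_on_failover(const Conn *conns, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (conns[i].seq_offset != 0) {
                return false;
            }
        }
        return true;
    }
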
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
---
 migration/colo.c      | 13 +++++++++++++
 net/filter-rewriter.c | 40 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index a931ff2..9eab4a3 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -30,6 +30,7 @@
 #include "block/block.h"
 #include "replication.h"
 #include "sysemu/cpus.h"
+#include "net/filter.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -81,6 +82,11 @@ static void secondary_vm_do_failover(void)
     if (local_err) {
         error_report_err(local_err);
     }
+    /* Notify all filters of all NICs about the failover event */
+    colo_notify_filters_event(COLO_EVENT_FAILOVER, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
 
     if (!autostart) {
         error_report("\"-S\" qemu option will be ignored in secondary side");
@@ -753,6 +759,13 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        /* Notify all filters of all NICs to do a checkpoint */
+        colo_notify_filters_event(COLO_EVENT_CHECKPOINT, &local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+
         vmstate_loading = false;
         vm_start();
         trace_colo_vm_state_change("stop", "run");
diff --git a/net/filter-rewriter.c b/net/filter-rewriter.c
index a58310a..bd4b6cf 100644
--- a/net/filter-rewriter.c
+++ b/net/filter-rewriter.c
@@ -23,6 +23,8 @@
 #include "qemu/main-loop.h"
 #include "qemu/iov.h"
 #include "net/checksum.h"
+#include "net/colo.h"
+#include "migration/colo.h"
 
 #define FILTER_COLO_REWRITER(obj) \
     OBJECT_CHECK(RewriterState, (obj), TYPE_FILTER_REWRITER)
@@ -280,6 +282,43 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
     return 0;
 }
 
+static void reset_seq_offset(gpointer key, gpointer value, gpointer user_data)
+{
+    Connection *conn = (Connection *)value;
+
+    conn->offset = 0;
+}
+
+static gboolean offset_is_nonzero(gpointer key,
+                                  gpointer value,
+                                  gpointer user_data)
+{
+    Connection *conn = (Connection *)value;
+
+    return conn->offset ? true : false;
+}
+
+static void colo_rewriter_handle_event(NetFilterState *nf, int event,
+                                       Error **errp)
+{
+    RewriterState *rs = FILTER_COLO_REWRITER(nf);
+
+    switch (event) {
+    case COLO_EVENT_CHECKPOINT:
+        g_hash_table_foreach(rs->connection_track_table,
+                            reset_seq_offset, NULL);
+        break;
+    case COLO_EVENT_FAILOVER:
+        if (!g_hash_table_find(rs->connection_track_table,
+                              offset_is_nonzero, NULL)) {
+            object_property_set_str(OBJECT(nf), "off", "status", errp);
+        }
+        break;
+    default:
+        break;
+    }
+}
+
 static void colo_rewriter_cleanup(NetFilterState *nf)
 {
     RewriterState *s = FILTER_COLO_REWRITER(nf);
@@ -335,6 +374,7 @@ static void colo_rewriter_class_init(ObjectClass *oc, void *data)
     nfc->setup = colo_rewriter_setup;
     nfc->cleanup = colo_rewriter_cleanup;
     nfc->receive_iov = colo_rewriter_receive_iov;
+    nfc->handle_event = colo_rewriter_handle_event;
 }
 
 static const TypeInfo colo_rewriter_info = {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [Qemu-devel] [PATCH V4 15/16] COLO: notify net filters about checkpoint/failover event
  2018-01-19 13:44 [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (13 preceding siblings ...)
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 14/16] filter-rewriter: handle checkpoint and failover event Zhang Chen
@ 2018-01-19 13:44 ` Zhang Chen
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 16/16] COLO: quick failover process by kick COLO thread Zhang Chen
  2018-01-30  5:42 ` [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
  16 siblings, 0 replies; 25+ messages in thread
From: Zhang Chen @ 2018-01-19 13:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

Notify all net filters about the checkpoint and failover events.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 migration/colo.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 9eab4a3..10bc80c 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -87,6 +87,11 @@ static void secondary_vm_do_failover(void)
     if (local_err) {
         error_report_err(local_err);
     }
+    /* Notify all filters of all NICs about the failover event */
+    colo_notify_filters_event(COLO_EVENT_FAILOVER, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
 
     if (!autostart) {
         error_report("\"-S\" qemu option will be ignored in secondary side");
@@ -766,6 +771,13 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        /* Notify all filters of all NICs to do a checkpoint */
+        colo_notify_filters_event(COLO_EVENT_CHECKPOINT, &local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+
         vmstate_loading = false;
         vm_start();
         trace_colo_vm_state_change("stop", "run");
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [Qemu-devel] [PATCH V4 16/16] COLO: quick failover process by kick COLO thread
  2018-01-19 13:44 [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (14 preceding siblings ...)
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 15/16] COLO: notify net filters about checkpoint/failover event Zhang Chen
@ 2018-01-19 13:44 ` Zhang Chen
  2018-01-30  5:42 ` [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
  16 siblings, 0 replies; 25+ messages in thread
From: Zhang Chen @ 2018-01-19 13:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, zhanghailiang, Juan Quintela,
	Dr . David Alan Gilbert, Jason Wang, Eric Blake,
	Markus Armbruster, Zhang Chen

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

The COLO thread may sleep at qemu_sem_wait(&s->colo_checkpoint_sem);
when failover begins, it's better to wake it up to speed up the
process.

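The kick pattern in isolation (plain C with POSIX semaphores standing
in for QEMU's qemu_sem_*): the loop re-checks its state after every
wakeup, so posting the semaphore is all failover needs to do to make
the thread notice promptly instead of waiting out its timeout.

    #include <semaphore.h>
    #include <stdbool.h>

    static sem_t checkpoint_sem;
    static volatile bool in_colo = true;

    static void checkpoint_loop(void)
    {
        for (;;) {
            sem_wait(&checkpoint_sem);  /* normally one post per checkpoint */
            if (!in_colo) {             /* kicked by failover: leave now */
                break;
            }
            /* ... one checkpoint transaction would run here ... */
        }
    }

    static void do_failover(void)
    {
        in_colo = false;            /* change state first... */
        sem_post(&checkpoint_sem);  /* ...then wake the sleeper */
    }

    int main(void)
    {
        sem_init(&checkpoint_sem, 0, 0);
        do_failover();      /* failover arrives while the loop sleeps */
        checkpoint_loop();  /* exits on the very first wakeup */
        return 0;
    }
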
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 migration/colo.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 10bc80c..cc616d9 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -134,6 +134,11 @@ static void primary_vm_do_failover(void)
 
     migrate_set_state(&s->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
+    /*
+     * Kick the COLO thread, which might be waiting at
+     * qemu_sem_wait(&s->colo_checkpoint_sem).
+     */
+    colo_checkpoint_notify(migrate_get_current());
 
     /*
      * Wake up COLO thread which may blocked in recv() or send(),
@@ -519,6 +524,9 @@ static void colo_process_checkpoint(MigrationState *s)
 
         qemu_sem_wait(&s->colo_checkpoint_sem);
 
+        if (s->state != MIGRATION_STATUS_COLO) {
+            goto out;
+        }
         ret = colo_do_checkpoint_transaction(s, bioc, fb);
         if (ret < 0) {
             goto out;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy
  2018-01-19 13:44 [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
                   ` (15 preceding siblings ...)
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 16/16] COLO: quick failover process by kick COLO thread Zhang Chen
@ 2018-01-30  5:42 ` Zhang Chen
  16 siblings, 0 replies; 25+ messages in thread
From: Zhang Chen @ 2018-01-30  5:42 UTC (permalink / raw)
  To: qemu-devel

Hi~ All~

No news for a long time; this is the last part needed to make COLO
work upstream.
Any comments on this series?

Thanks
Zhang Chen


On Fri, Jan 19, 2018 at 1:44 PM, Zhang Chen <zhangckid@gmail.com> wrote:

> [...]

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO Zhang Chen
@ 2018-02-03 15:49   ` Markus Armbruster
  2018-02-06  3:13     ` Zhang Chen
  0 siblings, 1 reply; 25+ messages in thread
From: Markus Armbruster @ 2018-02-03 15:49 UTC (permalink / raw)
  To: Zhang Chen
  Cc: qemu-devel, zhanghailiang, Li Zhijian, Juan Quintela, Jason Wang,
	Dr . David Alan Gilbert, Paolo Bonzini

Zhang Chen <zhangckid@gmail.com> writes:

> From: zhanghailiang <zhang.zhanghailiang@huawei.com>
>
> If some errors happen during VM's COLO FT stage, it's important to
> notify the users of this event. Together with 'x-colo-lost-heartbeat',
> Users can intervene in COLO's failover work immediately.
> If users don't want to get involved in COLO's failover verdict,
> it is still necessary to notify users that we exited COLO mode.
>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> Reviewed-by: Eric Blake <eblake@redhat.com>
[...]
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 70e7b67..6fc95b7 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -869,6 +869,41 @@
>    'data': [ 'none', 'require', 'active', 'completed', 'relaunch' ] }
>  
>  ##
> +# @COLO_EXIT:
> +#
> +# Emitted when VM finishes COLO mode due to some errors happening or
> +# at the request of users.
> +#
> +# @mode: which COLO mode the VM was in when it exited.
> +#
> +# @reason: describes the reason for the COLO exit.
> +#
> +# Since: 2.12
> +#
> +# Example:
> +#
> +# <- { "timestamp": {"seconds": 2032141960, "microseconds": 417172},
> +#      "event": "COLO_EXIT", "data": {"mode": "primary", "reason": "request" } }
> +#
> +##
> +{ 'event': 'COLO_EXIT',
> +  'data': {'mode': 'COLOMode', 'reason': 'COLOExitReason' } }

Standard question when I see a new event: is there a way to poll for the
event's information?  If not, why don't we need one?

Remember, management applications might miss events when they lose the
connection and have to reconnect, say because the management application
needs to be restarted.

> +
> +##
> +# @COLOExitReason:
> +#
> +# The reason for a COLO exit
> +#
> +# @request: COLO exit is due to an external request
> +#
> +# @error: COLO exit is due to an internal error
> +#
> +# Since: 2.12
> +##
> +{ 'enum': 'COLOExitReason',
> +  'data': [ 'request', 'error' ] }
> +
> +##
>  # @x-colo-lost-heartbeat:
>  #
>  # Tell qemu that heartbeat is lost, request it to do takeover procedures.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-02-03 15:49   ` Markus Armbruster
@ 2018-02-06  3:13     ` Zhang Chen
  2018-02-06  7:27       ` Markus Armbruster
  2018-02-06 15:20       ` Eric Blake
  0 siblings, 2 replies; 25+ messages in thread
From: Zhang Chen @ 2018-02-06  3:13 UTC (permalink / raw)
  To: Markus Armbruster, Eric Blake
  Cc: qemu-devel, zhanghailiang, Li Zhijian, Juan Quintela, Jason Wang,
	Dr . David Alan Gilbert, Paolo Bonzini

On Sat, Feb 3, 2018 at 3:49 PM, Markus Armbruster <armbru@redhat.com> wrote:

> Zhang Chen <zhangckid@gmail.com> writes:
>
> > From: zhanghailiang <zhang.zhanghailiang@huawei.com>
> >
> > If some errors happen during VM's COLO FT stage, it's important to
> > notify the users of this event. Together with 'x-colo-lost-heartbeat',
> > Users can intervene in COLO's failover work immediately.
> > If users don't want to get involved in COLO's failover verdict,
> > it is still necessary to notify users that we exited COLO mode.
> >
> > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> > Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> > Reviewed-by: Eric Blake <eblake@redhat.com>
> [...]
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index 70e7b67..6fc95b7 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -869,6 +869,41 @@
> >    'data': [ 'none', 'require', 'active', 'completed', 'relaunch' ] }
> >
> >  ##
> > +# @COLO_EXIT:
> > +#
> > +# Emitted when VM finishes COLO mode due to some errors happening or
> > +# at the request of users.
> > +#
> > +# @mode: which COLO mode the VM was in when it exited.
> > +#
> > +# @reason: describes the reason for the COLO exit.
> > +#
> > +# Since: 2.12
> > +#
> > +# Example:
> > +#
> > +# <- { "timestamp": {"seconds": 2032141960, "microseconds": 417172},
> > +#      "event": "COLO_EXIT", "data": {"mode": "primary", "reason":
> "request" } }
> > +#
> > +##
> > +{ 'event': 'COLO_EXIT',
> > +  'data': {'mode': 'COLOMode', 'reason': 'COLOExitReason' } }
>
> Standard question when I see a new event: is there a way to poll for the
> event's information?  If not, why don't we need one?
>
>
Do you mean we'd better print the information to a log file or
something like that for all QEMU events?
CC Eric Blake <eblake@redhat.com>
Any ideas about this?

Thanks
Zhang Chen


> Remember, management applications might miss events when they lose the
> connection and have to reconnect, say because the management application
> needs to be restarted.
>
> > +
> > +##
> > +# @COLOExitReason:
> > +#
> > +# The reason for a COLO exit
> > +#
> > +# @request: COLO exit is due to an external request
> > +#
> > +# @error: COLO exit is due to an internal error
> > +#
> > +# Since: 2.12
> > +##
> > +{ 'enum': 'COLOExitReason',
> > +  'data': [ 'request', 'error' ] }
> > +
> > +##
> >  # @x-colo-lost-heartbeat:
> >  #
> >  # Tell qemu that heartbeat is lost, request it to do takeover
> procedures.
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-02-06  3:13     ` Zhang Chen
@ 2018-02-06  7:27       ` Markus Armbruster
  2018-02-06  8:01         ` Zhang Chen
  2018-02-06 15:20       ` Eric Blake
  1 sibling, 1 reply; 25+ messages in thread
From: Markus Armbruster @ 2018-02-06  7:27 UTC (permalink / raw)
  To: Zhang Chen
  Cc: Eric Blake, zhanghailiang, Li Zhijian, Juan Quintela, Jason Wang,
	Dr . David Alan Gilbert, qemu-devel, Paolo Bonzini

Zhang Chen <zhangckid@gmail.com> writes:

> On Sat, Feb 3, 2018 at 3:49 PM, Markus Armbruster <armbru@redhat.com> wrote:
>
>> Zhang Chen <zhangckid@gmail.com> writes:
>>
>> > From: zhanghailiang <zhang.zhanghailiang@huawei.com>
>> >
>> > If some errors happen during VM's COLO FT stage, it's important to
>> > notify the users of this event. Together with 'x-colo-lost-heartbeat',
>> > Users can intervene in COLO's failover work immediately.
>> > If users don't want to get involved in COLO's failover verdict,
>> > it is still necessary to notify users that we exited COLO mode.
>> >
>> > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
>> > Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
>> > Signed-off-by: Zhang Chen <zhangckid@gmail.com>
>> > Reviewed-by: Eric Blake <eblake@redhat.com>
>> [...]
>> > diff --git a/qapi/migration.json b/qapi/migration.json
>> > index 70e7b67..6fc95b7 100644
>> > --- a/qapi/migration.json
>> > +++ b/qapi/migration.json
>> > @@ -869,6 +869,41 @@
>> >    'data': [ 'none', 'require', 'active', 'completed', 'relaunch' ] }
>> >
>> >  ##
>> > +# @COLO_EXIT:
>> > +#
>> > +# Emitted when VM finishes COLO mode due to some errors happening or
>> > +# at the request of users.
>> > +#
>> > +# @mode: which COLO mode the VM was in when it exited.
>> > +#
>> > +# @reason: describes the reason for the COLO exit.
>> > +#
>> > +# Since: 2.12
>> > +#
>> > +# Example:
>> > +#
>> > +# <- { "timestamp": {"seconds": 2032141960, "microseconds": 417172},
>> > +#      "event": "COLO_EXIT", "data": {"mode": "primary", "reason": "request" } }
>> > +#
>> > +##
>> > +{ 'event': 'COLO_EXIT',
>> > +  'data': {'mode': 'COLOMode', 'reason': 'COLOExitReason' } }
>>
>> Standard question when I see a new event: is there a way to poll for the
>> event's information?  If not, why don't we need one?
>>
>>
> Your means is we'd better print the information to a log file or something
> like that for all qemu events?
> CC  Eric Blake <eblake@redhat.com>
> any idea about this?

Events carrying state change information management applications want to
track are generally paired with a query- command.  While the management
application is connected, it can track by passively listening for state
change events.  After (re)connect, it has to actively query the current
state.

Questions?

>> Remember, management applications might miss events when they lose the
>> connection and have to reconnect, say because the management application
>> needs to be restarted.
>>
>> > +
>> > +##
>> > +# @COLOExitReason:
>> > +#
>> > +# The reason for a COLO exit
>> > +#
>> > +# @request: COLO exit is due to an external request
>> > +#
>> > +# @error: COLO exit is due to an internal error
>> > +#
>> > +# Since: 2.12
>> > +##
>> > +{ 'enum': 'COLOExitReason',
>> > +  'data': [ 'request', 'error' ] }
>> > +
>> > +##
>> >  # @x-colo-lost-heartbeat:
>> >  #
>> >  # Tell qemu that heartbeat is lost, request it to do takeover procedures.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-02-06  7:27       ` Markus Armbruster
@ 2018-02-06  8:01         ` Zhang Chen
  2018-02-06  9:53           ` Markus Armbruster
  0 siblings, 1 reply; 25+ messages in thread
From: Zhang Chen @ 2018-02-06  8:01 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Eric Blake, zhanghailiang, Li Zhijian, Juan Quintela, Jason Wang,
	Dr . David Alan Gilbert, qemu-devel, Paolo Bonzini

On Tue, Feb 6, 2018 at 3:27 PM, Markus Armbruster <armbru@redhat.com> wrote:

> Zhang Chen <zhangckid@gmail.com> writes:
>
> > On Sat, Feb 3, 2018 at 3:49 PM, Markus Armbruster <armbru@redhat.com>
> wrote:
> >
> >> Zhang Chen <zhangckid@gmail.com> writes:
> >>
> >> > From: zhanghailiang <zhang.zhanghailiang@huawei.com>
> >> >
> >> > If some errors happen during VM's COLO FT stage, it's important to
> >> > notify the users of this event. Together with 'x-colo-lost-heartbeat',
> >> > Users can intervene in COLO's failover work immediately.
> >> > If users don't want to get involved in COLO's failover verdict,
> >> > it is still necessary to notify users that we exited COLO mode.
> >> >
> >> > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> >> > Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> >> > Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> >> > Reviewed-by: Eric Blake <eblake@redhat.com>
> >> [...]
> >> > diff --git a/qapi/migration.json b/qapi/migration.json
> >> > index 70e7b67..6fc95b7 100644
> >> > --- a/qapi/migration.json
> >> > +++ b/qapi/migration.json
> >> > @@ -869,6 +869,41 @@
> >> >    'data': [ 'none', 'require', 'active', 'completed', 'relaunch' ] }
> >> >
> >> >  ##
> >> > +# @COLO_EXIT:
> >> > +#
> >> > +# Emitted when VM finishes COLO mode due to some errors happening or
> >> > +# at the request of users.
> >> > +#
> >> > +# @mode: which COLO mode the VM was in when it exited.
> >> > +#
> >> > +# @reason: describes the reason for the COLO exit.
> >> > +#
> >> > +# Since: 2.12
> >> > +#
> >> > +# Example:
> >> > +#
> >> > +# <- { "timestamp": {"seconds": 2032141960, "microseconds": 417172},
> >> > +#      "event": "COLO_EXIT", "data": {"mode": "primary", "reason":
> "request" } }
> >> > +#
> >> > +##
> >> > +{ 'event': 'COLO_EXIT',
> >> > +  'data': {'mode': 'COLOMode', 'reason': 'COLOExitReason' } }
> >>
> >> Standard question when I see a new event: is there a way to poll for the
> >> event's information?  If not, why don't we need one?
> >>
> >>
> > Your means is we'd better print the information to a log file or
> something
> > like that for all qemu events?
> > CC  Eric Blake <eblake@redhat.com>
> > any idea about this?
>
> Events carrying state change information management applications want to
> track are generally paired with a query- command.  While the management
> application is connected, it can track by passively listening for state
> change events.  After (re)connect, it has to actively query the current
> state.
>
> Questions?
>


If I understand correctly, maybe we need a general history mechanism
for QEMU events to solve this problem, because lots of QEMU events
can't resend the current state. Yes, when the "management application"
(like libvirt) loses the connection to QEMU, it can't get the
information after it reconnects.

Thanks
Zhang Chen


>
> >> Remember, management applications might miss events when they lose the
> >> connection and have to reconnect, say because the management application
> >> needs to be restarted.
> >>
> >> > +
> >> > +##
> >> > +# @COLOExitReason:
> >> > +#
> >> > +# The reason for a COLO exit
> >> > +#
> >> > +# @request: COLO exit is due to an external request
> >> > +#
> >> > +# @error: COLO exit is due to an internal error
> >> > +#
> >> > +# Since: 2.12
> >> > +##
> >> > +{ 'enum': 'COLOExitReason',
> >> > +  'data': [ 'request', 'error' ] }
> >> > +
> >> > +##
> >> >  # @x-colo-lost-heartbeat:
> >> >  #
> >> >  # Tell qemu that heartbeat is lost, request it to do takeover
> procedures.
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-02-06  8:01         ` Zhang Chen
@ 2018-02-06  9:53           ` Markus Armbruster
  2018-02-06 12:44             ` Zhang Chen
  0 siblings, 1 reply; 25+ messages in thread
From: Markus Armbruster @ 2018-02-06  9:53 UTC (permalink / raw)
  To: Zhang Chen
  Cc: zhanghailiang, Li Zhijian, Juan Quintela, Jason Wang,
	Dr . David Alan Gilbert, qemu-devel, Paolo Bonzini

Zhang Chen <zhangckid@gmail.com> writes:

> On Tue, Feb 6, 2018 at 3:27 PM, Markus Armbruster <armbru@redhat.com> wrote:
>
>> Zhang Chen <zhangckid@gmail.com> writes:
>>
>> > On Sat, Feb 3, 2018 at 3:49 PM, Markus Armbruster <armbru@redhat.com> wrote:
>> >> Standard question when I see a new event: is there a way to poll for the
>> >> event's information?  If not, why don't we need one?
>> >>
>> >>
>> > Your means is we'd better print the information to a log file or something
>> > like that for all qemu events?
>> > CC  Eric Blake <eblake@redhat.com>
>> > any idea about this?
>>
>> Events carrying state change information management applications want to
>> track are generally paired with a query- command.  While the management
>> application is connected, it can track by passively listening for state
>> change events.  After (re)connect, it has to actively query the current
>> state.
>>
>> Questions?
>>
>
>
> If I understand correctly, maybe we need a qemu events general history
> mechanism
> to solve this problem,
> because lots of qemu events can't resend the current state. Yes, when the
> "management application"(like libvirt)
> lose the connection to qemu,  management application can't get the
> information after reconnect.

Events can't resend the current state, but query commands can.

Designing of an "events general history mechanism" could well be
non-trivial.  Its implementation might not be simple, either.  Query
commands, on the other hand, are well understood and easy to implement.

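A standalone model of this event/query pairing (plain C, hypothetical
names; the query command that later landed in QEMU for this purpose is
query-colo-status, taken here as an assumption): the emitter records
the state it announces, so a client that missed the event can poll it
back after reconnecting.

    #include <stdio.h>

    typedef enum { COLO_RUNNING, COLO_EXITED } ColoState;

    static struct {
        ColoState state;
        const char *exit_reason;  /* valid once state == COLO_EXITED */
    } colo_status = { COLO_RUNNING, NULL };

    static void emit_colo_exit(const char *reason)
    {
        colo_status.state = COLO_EXITED;  /* record state first... */
        colo_status.exit_reason = reason;
        printf("event COLO_EXIT reason=%s\n", reason); /* ...may be missed */
    }

    /* A reconnecting client polls this instead of relying on events. */
    static void query_colo_status(void)
    {
        printf("state=%s reason=%s\n",
               colo_status.state == COLO_EXITED ? "exited" : "running",
               colo_status.exit_reason ? colo_status.exit_reason : "none");
    }

    int main(void)
    {
        emit_colo_exit("request");  /* suppose no client was connected */
        query_colo_status();        /* the state is still recoverable */
        return 0;
    }
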
^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-02-06  9:53           ` Markus Armbruster
@ 2018-02-06 12:44             ` Zhang Chen
  0 siblings, 0 replies; 25+ messages in thread
From: Zhang Chen @ 2018-02-06 12:44 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: zhanghailiang, Li Zhijian, Juan Quintela, Jason Wang,
	Dr . David Alan Gilbert, qemu-devel, Paolo Bonzini

On Tue, Feb 6, 2018 at 5:53 PM, Markus Armbruster <armbru@redhat.com> wrote:

> Zhang Chen <zhangckid@gmail.com> writes:
>
> > On Tue, Feb 6, 2018 at 3:27 PM, Markus Armbruster <armbru@redhat.com>
> wrote:
> >
> >> Zhang Chen <zhangckid@gmail.com> writes:
> >>
> >> > On Sat, Feb 3, 2018 at 3:49 PM, Markus Armbruster <armbru@redhat.com>
> wrote:
> >> >> Standard question when I see a new event: is there a way to poll for
> the
> >> >> event's information?  If not, why don't we need one?
> >> >>
> >> >>
> >> > Your means is we'd better print the information to a log file or
> something
> >> > like that for all qemu events?
> >> > CC  Eric Blake <eblake@redhat.com>
> >> > any idea about this?
> >>
> >> Events carrying state change information management applications want to
> >> track are generally paired with a query- command.  While the management
> >> application is connected, it can track by passively listening for state
> >> change events.  After (re)connect, it has to actively query the current
> >> state.
> >>
> >> Questions?
> >>
> >
> >
> > If I understand correctly, maybe we need a qemu events general history
> > mechanism
> > to solve this problem,
> > because lots of qemu events can't resend the current state. Yes, when the
> > "management application"(like libvirt)
> > lose the connection to qemu,  management application can't get the
> > information after reconnect.
>
> Events can't resend the current state, but query commands can.
>
> Designing of an "events general history mechanism" could well be
> non-trivial.  Its implementation might not be simple, either.  Query
> commands, on the other hand, are well understood and easy to implement.
>

OK, I got it.
I will add a new query command for the COLO state in the next version.
Thanks for your comments.

Zhang Chen

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-02-06  3:13     ` Zhang Chen
  2018-02-06  7:27       ` Markus Armbruster
@ 2018-02-06 15:20       ` Eric Blake
  1 sibling, 0 replies; 25+ messages in thread
From: Eric Blake @ 2018-02-06 15:20 UTC (permalink / raw)
  To: Zhang Chen, Markus Armbruster
  Cc: qemu-devel, zhanghailiang, Li Zhijian, Juan Quintela, Jason Wang,
	Dr . David Alan Gilbert, Paolo Bonzini

On 02/05/2018 09:13 PM, Zhang Chen wrote:

>>> +##
>>> +{ 'event': 'COLO_EXIT',
>>> +  'data': {'mode': 'COLOMode', 'reason': 'COLOExitReason' } }
>>
>> Standard question when I see a new event: is there a way to poll for the
>> event's information?  If not, why don't we need one?
>>
>>
> Your means is we'd better print the information to a log file or something
> like that for all qemu events?
> CC  Eric Blake <eblake@redhat.com>
> any idea about this?

Nothing to add, Markus is right - implementing a new mechanism that logs 
all events as they are issued, and teaching libvirt to parse that log at 
startup, is more work than just implementing a query-foo command that 
libvirt already knows how to use to query current state on first connect 
(and based on that query, make an intelligent decision on whether at 
least one event was missed during downtime).  So far, no one has come up 
with an event that is so important it must be logged, when compared to 
the working alternative of just having events be ways to optimize 
performance so that the query- command doesn't have to be polled all the 
time, but no severe loss if the event is missed because the query- can 
be used in its place.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

^ permalink raw reply	[flat|nested] 25+ messages in thread

Thread overview: 25+ messages
2018-01-19 13:44 [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 01/16] filter-rewriter: fix memory leak for connection in connection_track_table Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 02/16] colo-compare: implement the process of checkpoint Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 03/16] colo-compare: use notifier to notify packets comparing result Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 04/16] COLO: integrate colo compare with colo frame Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 05/16] COLO: Add block replication into colo process Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 06/16] COLO: Remove colo_state migration struct Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 07/16] COLO: Load dirty pages into SVM's RAM cache firstly Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 08/16] ram/COLO: Record the dirty pages that SVM received Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 09/16] COLO: Flush memory data from ram cache Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 10/16] qmp event: Add COLO_EXIT event to notify users while exited COLO Zhang Chen
2018-02-03 15:49   ` Markus Armbruster
2018-02-06  3:13     ` Zhang Chen
2018-02-06  7:27       ` Markus Armbruster
2018-02-06  8:01         ` Zhang Chen
2018-02-06  9:53           ` Markus Armbruster
2018-02-06 12:44             ` Zhang Chen
2018-02-06 15:20       ` Eric Blake
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 11/16] savevm: split the process of different stages for loadvm/savevm Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 12/16] COLO: flush host dirty ram from cache Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 13/16] filter: Add handle_event method for NetFilterClass Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 14/16] filter-rewriter: handle checkpoint and failover event Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 15/16] COLO: notify net filters about checkpoint/failover event Zhang Chen
2018-01-19 13:44 ` [Qemu-devel] [PATCH V4 16/16] COLO: quick failover process by kick COLO thread Zhang Chen
2018-01-30  5:42 ` [Qemu-devel] [PATCH V4 00/16] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
