* [Qemu-devel] [PULL V2 00/26] Net patches
@ 2018-10-19  3:21 Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 01/26] filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table Jason Wang
                   ` (26 more replies)
  0 siblings, 27 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:21 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Jason Wang

The following changes since commit 77f7c747193662edfadeeb3118d63eed0eac51a6:

  Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2018-10-17' into staging (2018-10-18 13:40:19 +0100)

are available in the git repository at:

  https://github.com/jasowang/qemu.git tags/net-pull-request

for you to fetch changes up to 37a4442a76d010f5d957e3ee09dfb23364281b37:

  qemu-options: Fix bad "macaddr" property in the documentation (2018-10-19 11:15:04 +0800)

----------------------------------------------------------------

----------------------------------------------------------------
Jason Wang (5):
      ne2000: fix possible out of bound access in ne2000_receive
      rtl8139: fix possible out of bound access
      pcnet: fix possible buffer overflow
      net: ignore packet size greater than INT_MAX
      e1000: indicate dropped packets in HW counters

Thomas Huth (1):
      qemu-options: Fix bad "macaddr" property in the documentation

Zhang Chen (15):
      filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table
      colo-compare: implement the process of checkpoint
      colo-compare: use notifier to notify packets comparing result
      COLO: integrate colo compare with colo frame
      COLO: Add block replication into colo process
      COLO: Remove colo_state migration struct
      COLO: Load dirty pages into SVM's RAM cache firstly
      ram/COLO: Record the dirty pages that SVM received
      COLO: Flush memory data from ram cache
      qapi/migration.json: Rename COLO unknown mode to none mode.
      qapi: Add new command to query colo status
      savevm: split the process of different stages for loadvm/savevm
      filter: Add handle_event method for NetFilterClass
      filter-rewriter: handle checkpoint and failover event
      docs: Add COLO status diagram to COLO-FT.txt

liujunjie (1):
      clean up callback when del virtqueue

zhanghailiang (4):
      qmp event: Add COLO_EXIT event to notify users while exited COLO
      COLO: flush host dirty ram from cache
      COLO: notify net filters about checkpoint/failover event
      COLO: quick failover process by kick COLO thread

 docs/COLO-FT.txt          |  34 ++++++++
 hw/net/e1000.c            |  16 +++-
 hw/net/ne2000.c           |   4 +-
 hw/net/pcnet.c            |   4 +-
 hw/net/rtl8139.c          |   8 +-
 hw/net/trace-events       |   3 +
 hw/virtio/virtio.c        |   2 +
 include/exec/ram_addr.h   |   1 +
 include/migration/colo.h  |  11 ++-
 include/net/filter.h      |   5 ++
 migration/Makefile.objs   |   2 +-
 migration/colo-comm.c     |  76 -----------------
 migration/colo-failover.c |   2 +-
 migration/colo.c          | 212 +++++++++++++++++++++++++++++++++++++++++++---
 migration/migration.c     |  46 ++++++++--
 migration/ram.c           | 166 +++++++++++++++++++++++++++++++++++-
 migration/ram.h           |   4 +
 migration/savevm.c        |  53 ++++++++++--
 migration/savevm.h        |   5 ++
 migration/trace-events    |   3 +
 net/colo-compare.c        | 115 ++++++++++++++++++++++---
 net/colo-compare.h        |  24 ++++++
 net/colo.c                |  10 ++-
 net/colo.h                |  11 +--
 net/filter-rewriter.c     | 166 +++++++++++++++++++++++++++++++++---
 net/filter.c              |  17 ++++
 net/net.c                 |  26 +++++-
 qapi/migration.json       |  80 +++++++++++++++--
 qemu-options.hx           |   2 +-
 vl.c                      |   2 -
 30 files changed, 958 insertions(+), 152 deletions(-)
 delete mode 100644 migration/colo-comm.c
 create mode 100644 net/colo-compare.h


* [Qemu-devel] [PULL V2 01/26] filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-29 11:01   ` Peter Maydell
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 02/26] colo-compare: implement the process of checkpoint Jason Wang
                   ` (25 subsequent siblings)
  26 siblings, 1 reply; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

We add an almost complete TCP state machine to filter-rewriter; only
TCPS_LISTEN is omitted, and the FIN states of a VM-side active close are
simplified. This simplification is safe because the guest kernel already
tracks the TCP state and waits out the 2MSL time itself: if the client
resends the FIN packet, the guest will resend the last ACK, so
filter-rewriter need not wait the 2MSL time.

Previously, after a network connection was closed, we did not clear its
related resources in connection_track_table, which led to a memory leak.

Track the state of each network connection, and clean up its related
resources once the connection is closed.
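
As a rough illustration (not QEMU code), the transitions described above can be modeled with a small, direction-agnostic state function. The state names mirror the TCPS_* constants from slirp/tcp.h, but the flag macros, function name, and merged handling of both traffic directions are simplifications invented for this sketch:

```c
#include <assert.h>

/* Simplified stand-ins for the TCPS_* states tracked by filter-rewriter. */
enum { CLOSED, SYN_SENT, SYN_RECEIVED, ESTABLISHED,
       FIN_WAIT_1, CLOSE_WAIT, LAST_ACK };

#define F_SYN 0x1
#define F_ACK 0x2
#define F_FIN 0x4

/* One merged transition function; the real code splits this across the
 * primary and secondary packet handlers. */
static int rewriter_next_state(int state, int flags)
{
    if ((flags & (F_SYN | F_ACK)) == (F_SYN | F_ACK) && state == SYN_SENT)
        return ESTABLISHED;            /* VM's active connect completed */
    if ((flags & (F_SYN | F_ACK)) == F_SYN)
        return SYN_RECEIVED;           /* peer opened a connection */
    if ((flags & (F_SYN | F_ACK)) == F_ACK && state == SYN_RECEIVED)
        return ESTABLISHED;            /* handshake done, offset fixed */
    if ((flags & F_FIN) && state == ESTABLISHED)
        return CLOSE_WAIT;             /* passive close, step 1 */
    if ((flags & F_FIN) && state == FIN_WAIT_1)
        return CLOSED;                 /* active close; 2MSL wait skipped */
    if ((flags & F_ACK) && state == LAST_ACK)
        return CLOSED;                 /* passive close, step 3: entry freed */
    return state;
}
```

Reaching CLOSED is the point at which the real code removes the connection from connection_track_table, which is what plugs the leak.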

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/colo.c            |   2 +-
 net/colo.h            |   9 ++---
 net/filter-rewriter.c | 109 +++++++++++++++++++++++++++++++++++++++++++++-----
 3 files changed, 104 insertions(+), 16 deletions(-)

diff --git a/net/colo.c b/net/colo.c
index 6dda4ed..97c8fc9 100644
--- a/net/colo.c
+++ b/net/colo.c
@@ -137,7 +137,7 @@ Connection *connection_new(ConnectionKey *key)
     conn->ip_proto = key->ip_proto;
     conn->processing = false;
     conn->offset = 0;
-    conn->syn_flag = 0;
+    conn->tcp_state = TCPS_CLOSED;
     conn->pack = 0;
     conn->sack = 0;
     g_queue_init(&conn->primary_list);
diff --git a/net/colo.h b/net/colo.h
index da6c36d..0277e0e 100644
--- a/net/colo.h
+++ b/net/colo.h
@@ -18,6 +18,7 @@
 #include "slirp/slirp.h"
 #include "qemu/jhash.h"
 #include "qemu/timer.h"
+#include "slirp/tcp.h"
 
 #define HASHTABLE_MAX_SIZE 16384
 
@@ -81,11 +82,9 @@ typedef struct Connection {
     uint32_t sack;
     /* offset = secondary_seq - primary_seq */
     tcp_seq  offset;
-    /*
-     * we use this flag update offset func
-     * run once in independent tcp connection
-     */
-    int syn_flag;
+
+    int tcp_state; /* TCP FSM state */
+    tcp_seq fin_ack_seq; /* the seq of 'fin=1,ack=1' */
 } Connection;
 
 uint32_t connection_key_hash(const void *opaque);
diff --git a/net/filter-rewriter.c b/net/filter-rewriter.c
index f584e4e..dd323fa 100644
--- a/net/filter-rewriter.c
+++ b/net/filter-rewriter.c
@@ -59,9 +59,9 @@ static int is_tcp_packet(Packet *pkt)
 }
 
 /* handle tcp packet from primary guest */
-static int handle_primary_tcp_pkt(NetFilterState *nf,
+static int handle_primary_tcp_pkt(RewriterState *rf,
                                   Connection *conn,
-                                  Packet *pkt)
+                                  Packet *pkt, ConnectionKey *key)
 {
     struct tcphdr *tcp_pkt;
 
@@ -74,23 +74,28 @@ static int handle_primary_tcp_pkt(NetFilterState *nf,
         trace_colo_filter_rewriter_conn_offset(conn->offset);
     }
 
+    if (((tcp_pkt->th_flags & (TH_ACK | TH_SYN)) == (TH_ACK | TH_SYN)) &&
+        conn->tcp_state == TCPS_SYN_SENT) {
+        conn->tcp_state = TCPS_ESTABLISHED;
+    }
+
     if (((tcp_pkt->th_flags & (TH_ACK | TH_SYN)) == TH_SYN)) {
         /*
          * we use this flag update offset func
          * run once in independent tcp connection
          */
-        conn->syn_flag = 1;
+        conn->tcp_state = TCPS_SYN_RECEIVED;
     }
 
     if (((tcp_pkt->th_flags & (TH_ACK | TH_SYN)) == TH_ACK)) {
-        if (conn->syn_flag) {
+        if (conn->tcp_state == TCPS_SYN_RECEIVED) {
             /*
              * offset = secondary_seq - primary seq
              * ack packet sent by guest from primary node,
              * so we use th_ack - 1 get primary_seq
              */
             conn->offset -= (ntohl(tcp_pkt->th_ack) - 1);
-            conn->syn_flag = 0;
+            conn->tcp_state = TCPS_ESTABLISHED;
         }
         if (conn->offset) {
             /* handle packets to the secondary from the primary */
@@ -99,15 +104,66 @@ static int handle_primary_tcp_pkt(NetFilterState *nf,
             net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
                                    pkt->size - pkt->vnet_hdr_len);
         }
+
+        /*
+         * Passive close step 3
+         */
+        if ((conn->tcp_state == TCPS_LAST_ACK) &&
+            (ntohl(tcp_pkt->th_ack) == (conn->fin_ack_seq + 1))) {
+            conn->tcp_state = TCPS_CLOSED;
+            g_hash_table_remove(rf->connection_track_table, key);
+        }
+    }
+
+    if ((tcp_pkt->th_flags & TH_FIN) == TH_FIN) {
+        /*
+         * Passive close.
+         * Step 1:
+         * The *server* side of this connect is VM, *client* tries to close
+         * the connection. We will into CLOSE_WAIT status.
+         *
+         * Step 2:
+         * In this step we will into LAST_ACK status.
+         *
+         * We got 'fin=1, ack=1' packet from server side, we need to
+         * record the seq of 'fin=1, ack=1' packet.
+         *
+         * Step 3:
+         * We got 'ack=1' packets from client side, it acks 'fin=1, ack=1'
+         * packet from server side. From this point, we can ensure that there
+         * will be no packets in the connection, except that, some errors
+         * happen between the path of 'filter object' and vNIC, if this rare
+         * case really happen, we can still create a new connection,
+         * So it is safe to remove the connection from connection_track_table.
+         *
+         */
+        if (conn->tcp_state == TCPS_ESTABLISHED) {
+            conn->tcp_state = TCPS_CLOSE_WAIT;
+        }
+
+        /*
+         * Active close step 2.
+         */
+        if (conn->tcp_state == TCPS_FIN_WAIT_1) {
+            conn->tcp_state = TCPS_TIME_WAIT;
+            /*
+             * For simplify implementation, we needn't wait 2MSL time
+             * in filter rewriter. Because guest kernel will track the
+             * TCP status and wait 2MSL time, if client resend the FIN
+             * packet, guest will apply the last ACK too.
+             */
+            conn->tcp_state = TCPS_CLOSED;
+            g_hash_table_remove(rf->connection_track_table, key);
+        }
     }
 
     return 0;
 }
 
 /* handle tcp packet from secondary guest */
-static int handle_secondary_tcp_pkt(NetFilterState *nf,
+static int handle_secondary_tcp_pkt(RewriterState *rf,
                                     Connection *conn,
-                                    Packet *pkt)
+                                    Packet *pkt, ConnectionKey *key)
 {
     struct tcphdr *tcp_pkt;
 
@@ -121,7 +177,8 @@ static int handle_secondary_tcp_pkt(NetFilterState *nf,
         trace_colo_filter_rewriter_conn_offset(conn->offset);
     }
 
-    if (((tcp_pkt->th_flags & (TH_ACK | TH_SYN)) == (TH_ACK | TH_SYN))) {
+    if (conn->tcp_state == TCPS_SYN_RECEIVED &&
+        ((tcp_pkt->th_flags & (TH_ACK | TH_SYN)) == (TH_ACK | TH_SYN))) {
         /*
          * save offset = secondary_seq and then
          * in handle_primary_tcp_pkt make offset
@@ -130,6 +187,12 @@ static int handle_secondary_tcp_pkt(NetFilterState *nf,
         conn->offset = ntohl(tcp_pkt->th_seq);
     }
 
+    /* VM active connect */
+    if (conn->tcp_state == TCPS_CLOSED &&
+        ((tcp_pkt->th_flags & (TH_ACK | TH_SYN)) == TH_SYN)) {
+        conn->tcp_state = TCPS_SYN_SENT;
+    }
+
     if ((tcp_pkt->th_flags & (TH_ACK | TH_SYN)) == TH_ACK) {
         /* Only need to adjust seq while offset is Non-zero */
         if (conn->offset) {
@@ -141,6 +204,32 @@ static int handle_secondary_tcp_pkt(NetFilterState *nf,
         }
     }
 
+    /*
+     * Passive close step 2:
+     */
+    if (conn->tcp_state == TCPS_CLOSE_WAIT &&
+        (tcp_pkt->th_flags & (TH_ACK | TH_FIN)) == (TH_ACK | TH_FIN)) {
+        conn->fin_ack_seq = ntohl(tcp_pkt->th_seq);
+        conn->tcp_state = TCPS_LAST_ACK;
+    }
+
+    /*
+     * Active close
+     *
+     * Step 1:
+     * The *server* side of this connect is VM, *server* tries to close
+     * the connection.
+     *
+     * Step 2:
+     * We will into CLOSE_WAIT status.
+     * We simplify the TCPS_FIN_WAIT_2, TCPS_TIME_WAIT and
+     * CLOSING status.
+     */
+    if (conn->tcp_state == TCPS_ESTABLISHED &&
+        (tcp_pkt->th_flags & (TH_ACK | TH_FIN)) == TH_FIN) {
+        conn->tcp_state = TCPS_FIN_WAIT_1;
+    }
+
     return 0;
 }
 
@@ -190,7 +279,7 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
 
         if (sender == nf->netdev) {
             /* NET_FILTER_DIRECTION_TX */
-            if (!handle_primary_tcp_pkt(nf, conn, pkt)) {
+            if (!handle_primary_tcp_pkt(s, conn, pkt, &key)) {
                 qemu_net_queue_send(s->incoming_queue, sender, 0,
                 (const uint8_t *)pkt->data, pkt->size, NULL);
                 packet_destroy(pkt, NULL);
@@ -203,7 +292,7 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
             }
         } else {
             /* NET_FILTER_DIRECTION_RX */
-            if (!handle_secondary_tcp_pkt(nf, conn, pkt)) {
+            if (!handle_secondary_tcp_pkt(s, conn, pkt, &key)) {
                 qemu_net_queue_send(s->incoming_queue, sender, 0,
                 (const uint8_t *)pkt->data, pkt->size, NULL);
                 packet_destroy(pkt, NULL);
-- 
2.5.0


* [Qemu-devel] [PULL V2 02/26] colo-compare: implement the process of checkpoint
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 01/26] filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 03/26] colo-compare: use notifier to notify packets comparing result Jason Wang
                   ` (24 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

While doing a checkpoint, we need to flush all unhandled packets. By
using the filter notifier mechanism, we can easily notify every compare
object to do this; the flush runs inside the compare threads as a
coroutine.
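
The synchronization pattern the patch relies on can be sketched as follows. This is a hedged stand-in using POSIX threads instead of QEMU's QemuMutex/QemuCond/BH machinery, with invented names; the real code schedules a bottom half per compare object rather than spawning threads:

```c
#include <pthread.h>

static pthread_mutex_t event_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  event_complete_cond = PTHREAD_COND_INITIALIZER;
static int event_unhandled_count;
static int events_handled;

/* Worker: handle one event (stand-in for flushing conn_list), then
 * decrement the outstanding count and wake the notifier. */
static void *compare_worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&event_mtx);
    events_handled++;
    event_unhandled_count--;
    pthread_cond_broadcast(&event_complete_cond);
    pthread_mutex_unlock(&event_mtx);
    return NULL;
}

/* Notifier: dispatch the event to n workers and block until every
 * worker has finished handling it, as colo_notify_compares_event does. */
static int notify_and_wait(int n)
{
    pthread_t tids[16];
    pthread_mutex_lock(&event_mtx);
    event_unhandled_count = n;
    events_handled = 0;
    for (int i = 0; i < n; i++)
        pthread_create(&tids[i], NULL, compare_worker, NULL);
    while (event_unhandled_count > 0)
        pthread_cond_wait(&event_complete_cond, &event_mtx);
    pthread_mutex_unlock(&event_mtx);
    for (int i = 0; i < n; i++)
        pthread_join(tids[i], NULL);
    return events_handled;
}
```

The workers block on the mutex until the notifier enters pthread_cond_wait (which releases it), so the count is never observed mid-update.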

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/migration/colo.h |  6 ++++
 net/colo-compare.c       | 78 ++++++++++++++++++++++++++++++++++++++++++++++++
 net/colo-compare.h       | 22 ++++++++++++++
 3 files changed, 106 insertions(+)
 create mode 100644 net/colo-compare.h

diff --git a/include/migration/colo.h b/include/migration/colo.h
index 2fe48ad..fefb2fc 100644
--- a/include/migration/colo.h
+++ b/include/migration/colo.h
@@ -16,6 +16,12 @@
 #include "qemu-common.h"
 #include "qapi/qapi-types-migration.h"
 
+enum colo_event {
+    COLO_EVENT_NONE,
+    COLO_EVENT_CHECKPOINT,
+    COLO_EVENT_FAILOVER,
+};
+
 void colo_info_init(void);
 
 void migrate_start_colo_process(MigrationState *s);
diff --git a/net/colo-compare.c b/net/colo-compare.c
index dd745a4..80e6532 100644
--- a/net/colo-compare.c
+++ b/net/colo-compare.c
@@ -27,11 +27,16 @@
 #include "qemu/sockets.h"
 #include "colo.h"
 #include "sysemu/iothread.h"
+#include "net/colo-compare.h"
+#include "migration/colo.h"
 
 #define TYPE_COLO_COMPARE "colo-compare"
 #define COLO_COMPARE(obj) \
     OBJECT_CHECK(CompareState, (obj), TYPE_COLO_COMPARE)
 
+static QTAILQ_HEAD(, CompareState) net_compares =
+       QTAILQ_HEAD_INITIALIZER(net_compares);
+
 #define COMPARE_READ_LEN_MAX NET_BUFSIZE
 #define MAX_QUEUE_SIZE 1024
 
@@ -41,6 +46,10 @@
 /* TODO: Should be configurable */
 #define REGULAR_PACKET_CHECK_MS 3000
 
+static QemuMutex event_mtx;
+static QemuCond event_complete_cond;
+static int event_unhandled_count;
+
 /*
  *  + CompareState ++
  *  |               |
@@ -87,6 +96,11 @@ typedef struct CompareState {
     IOThread *iothread;
     GMainContext *worker_context;
     QEMUTimer *packet_check_timer;
+
+    QEMUBH *event_bh;
+    enum colo_event event;
+
+    QTAILQ_ENTRY(CompareState) next;
 } CompareState;
 
 typedef struct CompareClass {
@@ -736,6 +750,25 @@ static void check_old_packet_regular(void *opaque)
                 REGULAR_PACKET_CHECK_MS);
 }
 
+/* Public API, Used for COLO frame to notify compare event */
+void colo_notify_compares_event(void *opaque, int event, Error **errp)
+{
+    CompareState *s;
+
+    qemu_mutex_lock(&event_mtx);
+    QTAILQ_FOREACH(s, &net_compares, next) {
+        s->event = event;
+        qemu_bh_schedule(s->event_bh);
+        event_unhandled_count++;
+    }
+    /* Wait all compare threads to finish handling this event */
+    while (event_unhandled_count > 0) {
+        qemu_cond_wait(&event_complete_cond, &event_mtx);
+    }
+
+    qemu_mutex_unlock(&event_mtx);
+}
+
 static void colo_compare_timer_init(CompareState *s)
 {
     AioContext *ctx = iothread_get_aio_context(s->iothread);
@@ -756,6 +789,30 @@ static void colo_compare_timer_del(CompareState *s)
     }
  }
 
+static void colo_flush_packets(void *opaque, void *user_data);
+
+static void colo_compare_handle_event(void *opaque)
+{
+    CompareState *s = opaque;
+
+    switch (s->event) {
+    case COLO_EVENT_CHECKPOINT:
+        g_queue_foreach(&s->conn_list, colo_flush_packets, s);
+        break;
+    case COLO_EVENT_FAILOVER:
+        break;
+    default:
+        break;
+    }
+
+    assert(event_unhandled_count > 0);
+
+    qemu_mutex_lock(&event_mtx);
+    event_unhandled_count--;
+    qemu_cond_broadcast(&event_complete_cond);
+    qemu_mutex_unlock(&event_mtx);
+}
+
 static void colo_compare_iothread(CompareState *s)
 {
     object_ref(OBJECT(s->iothread));
@@ -769,6 +826,7 @@ static void colo_compare_iothread(CompareState *s)
                              s, s->worker_context, true);
 
     colo_compare_timer_init(s);
+    s->event_bh = qemu_bh_new(colo_compare_handle_event, s);
 }
 
 static char *compare_get_pri_indev(Object *obj, Error **errp)
@@ -926,8 +984,13 @@ static void colo_compare_complete(UserCreatable *uc, Error **errp)
     net_socket_rs_init(&s->pri_rs, compare_pri_rs_finalize, s->vnet_hdr);
     net_socket_rs_init(&s->sec_rs, compare_sec_rs_finalize, s->vnet_hdr);
 
+    QTAILQ_INSERT_TAIL(&net_compares, s, next);
+
     g_queue_init(&s->conn_list);
 
+    qemu_mutex_init(&event_mtx);
+    qemu_cond_init(&event_complete_cond);
+
     s->connection_track_table = g_hash_table_new_full(connection_key_hash,
                                                       connection_key_equal,
                                                       g_free,
@@ -990,6 +1053,7 @@ static void colo_compare_init(Object *obj)
 static void colo_compare_finalize(Object *obj)
 {
     CompareState *s = COLO_COMPARE(obj);
+    CompareState *tmp = NULL;
 
     qemu_chr_fe_deinit(&s->chr_pri_in, false);
     qemu_chr_fe_deinit(&s->chr_sec_in, false);
@@ -997,6 +1061,16 @@ static void colo_compare_finalize(Object *obj)
     if (s->iothread) {
         colo_compare_timer_del(s);
     }
+
+    qemu_bh_delete(s->event_bh);
+
+    QTAILQ_FOREACH(tmp, &net_compares, next) {
+        if (tmp == s) {
+            QTAILQ_REMOVE(&net_compares, s, next);
+            break;
+        }
+    }
+
     /* Release all unhandled packets after compare thead exited */
     g_queue_foreach(&s->conn_list, colo_flush_packets, s);
 
@@ -1009,6 +1083,10 @@ static void colo_compare_finalize(Object *obj)
     if (s->iothread) {
         object_unref(OBJECT(s->iothread));
     }
+
+    qemu_mutex_destroy(&event_mtx);
+    qemu_cond_destroy(&event_complete_cond);
+
     g_free(s->pri_indev);
     g_free(s->sec_indev);
     g_free(s->outdev);
diff --git a/net/colo-compare.h b/net/colo-compare.h
new file mode 100644
index 0000000..1b1ce76
--- /dev/null
+++ b/net/colo-compare.h
@@ -0,0 +1,22 @@
+/*
+ * COarse-grain LOck-stepping Virtual Machines for Non-stop Service (COLO)
+ * (a.k.a. Fault Tolerance or Continuous Replication)
+ *
+ * Copyright (c) 2017 HUAWEI TECHNOLOGIES CO., LTD.
+ * Copyright (c) 2017 FUJITSU LIMITED
+ * Copyright (c) 2017 Intel Corporation
+ *
+ * Authors:
+ *    zhanghailiang <zhang.zhanghailiang@huawei.com>
+ *    Zhang Chen <zhangckid@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+
+#ifndef QEMU_COLO_COMPARE_H
+#define QEMU_COLO_COMPARE_H
+
+void colo_notify_compares_event(void *opaque, int event, Error **errp);
+
+#endif /* QEMU_COLO_COMPARE_H */
-- 
2.5.0


* [Qemu-devel] [PULL V2 03/26] colo-compare: use notifier to notify packets comparing result
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 01/26] filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 02/26] colo-compare: implement the process of checkpoint Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 04/26] COLO: integrate colo compare with colo frame Jason Wang
                   ` (23 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, Zhang Chen, zhanghailiang, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

Use a notifier to notify the COLO frame when packet comparison finds an
inconsistency.
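
For context, here is a hedged miniature of the Notifier/NotifierList idea this patch builds on. It is a simplified sketch, not QEMU's actual util/notify.c implementation (which uses QLIST macros); the subscriber callback and counter are invented for illustration:

```c
#include <stddef.h>

typedef struct Notifier Notifier;
struct Notifier {
    void (*notify)(Notifier *n, void *data);
    Notifier *next;                /* singly linked, for simplicity */
};

typedef struct { Notifier *head; } NotifierList;

/* Register a callback, as colo_compare_register_notifier does. */
static void notifier_list_add(NotifierList *list, Notifier *n)
{
    n->next = list->head;
    list->head = n;
}

/* Walk the list and invoke every callback, as
 * colo_compare_inconsistency_notify does via notifier_list_notify. */
static void notifier_list_notify(NotifierList *list, void *data)
{
    for (Notifier *n = list->head; n; n = n->next)
        n->notify(n, data);
}

/* Example subscriber: count inconsistency reports. In the real code the
 * COLO frame's callback triggers a checkpoint instead. */
static int inconsistencies;
static void on_inconsistency(Notifier *n, void *data)
{
    (void)n; (void)data;
    inconsistencies++;
}
```

Decoupling the compare thread from the migration code this way means colo-compare only knows "someone wants to hear about mismatches", not that a checkpoint results.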

Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/colo-compare.c | 37 ++++++++++++++++++++++++++-----------
 net/colo-compare.h |  2 ++
 2 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/net/colo-compare.c b/net/colo-compare.c
index 80e6532..3f7e240 100644
--- a/net/colo-compare.c
+++ b/net/colo-compare.c
@@ -29,6 +29,7 @@
 #include "sysemu/iothread.h"
 #include "net/colo-compare.h"
 #include "migration/colo.h"
+#include "migration/migration.h"
 
 #define TYPE_COLO_COMPARE "colo-compare"
 #define COLO_COMPARE(obj) \
@@ -37,6 +38,9 @@
 static QTAILQ_HEAD(, CompareState) net_compares =
        QTAILQ_HEAD_INITIALIZER(net_compares);
 
+static NotifierList colo_compare_notifiers =
+    NOTIFIER_LIST_INITIALIZER(colo_compare_notifiers);
+
 #define COMPARE_READ_LEN_MAX NET_BUFSIZE
 #define MAX_QUEUE_SIZE 1024
 
@@ -326,6 +330,12 @@ static bool colo_mark_tcp_pkt(Packet *ppkt, Packet *spkt,
     return false;
 }
 
+static void colo_compare_inconsistency_notify(void)
+{
+    notifier_list_notify(&colo_compare_notifiers,
+                migrate_get_current());
+}
+
 static void colo_compare_tcp(CompareState *s, Connection *conn)
 {
     Packet *ppkt = NULL, *spkt = NULL;
@@ -427,10 +437,7 @@ sec:
         qemu_hexdump((char *)spkt->data, stderr,
                      "colo-compare spkt", spkt->size);
 
-        /*
-         * colo_compare_inconsistent_notify();
-         * TODO: notice to checkpoint();
-         */
+        colo_compare_inconsistency_notify();
     }
 }
 
@@ -561,8 +568,18 @@ static int colo_old_packet_check_one(Packet *pkt, int64_t *check_time)
     }
 }
 
+void colo_compare_register_notifier(Notifier *notify)
+{
+    notifier_list_add(&colo_compare_notifiers, notify);
+}
+
+void colo_compare_unregister_notifier(Notifier *notify)
+{
+    notifier_remove(notify);
+}
+
 static int colo_old_packet_check_one_conn(Connection *conn,
-                                          void *user_data)
+                                           void *user_data)
 {
     GList *result = NULL;
     int64_t check_time = REGULAR_PACKET_CHECK_MS;
@@ -573,10 +590,7 @@ static int colo_old_packet_check_one_conn(Connection *conn,
 
     if (result) {
         /* Do checkpoint will flush old packet */
-        /*
-         * TODO: Notify colo frame to do checkpoint.
-         * colo_compare_inconsistent_notify();
-         */
+        colo_compare_inconsistency_notify();
         return 0;
     }
 
@@ -620,11 +634,12 @@ static void colo_compare_packet(CompareState *s, Connection *conn,
             /*
              * If one packet arrive late, the secondary_list or
              * primary_list will be empty, so we can't compare it
-             * until next comparison.
+             * until next comparison. If the packets in the list are
+             * timeout, it will trigger a checkpoint request.
              */
             trace_colo_compare_main("packet different");
             g_queue_push_head(&conn->primary_list, pkt);
-            /* TODO: colo_notify_checkpoint();*/
+            colo_compare_inconsistency_notify();
             break;
         }
     }
diff --git a/net/colo-compare.h b/net/colo-compare.h
index 1b1ce76..22ddd51 100644
--- a/net/colo-compare.h
+++ b/net/colo-compare.h
@@ -18,5 +18,7 @@
 #define QEMU_COLO_COMPARE_H
 
 void colo_notify_compares_event(void *opaque, int event, Error **errp);
+void colo_compare_register_notifier(Notifier *notify);
+void colo_compare_unregister_notifier(Notifier *notify);
 
 #endif /* QEMU_COLO_COMPARE_H */
-- 
2.5.0


* [Qemu-devel] [PULL V2 04/26] COLO: integrate colo compare with colo frame
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (2 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 03/26] colo-compare: use notifier to notify packets comparing result Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 05/26] COLO: Add block replication into colo process Jason Wang
                   ` (22 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

For COLO FT, both the PVM and the SVM run at the same time and only
synchronize state when necessary.

So let the SVM keep running while no checkpoint is in progress, and
change DEFAULT_MIGRATE_X_CHECKPOINT_DELAY to 200*100.

Besides, we forgot to release colo_checkpoint_sem and
colo_delay_timer; fix that here.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/colo.c      | 42 ++++++++++++++++++++++++++++++++++++++++--
 migration/migration.c |  6 ++----
 2 files changed, 42 insertions(+), 6 deletions(-)

diff --git a/migration/colo.c b/migration/colo.c
index 88936f5..f4bdfde 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -25,8 +25,11 @@
 #include "qemu/error-report.h"
 #include "migration/failover.h"
 #include "replication.h"
+#include "net/colo-compare.h"
+#include "net/colo.h"
 
 static bool vmstate_loading;
+static Notifier packets_compare_notifier;
 
 #define COLO_BUFFER_BASE_SIZE (4 * 1024 * 1024)
 
@@ -343,6 +346,11 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
         goto out;
     }
 
+    colo_notify_compares_event(NULL, COLO_EVENT_CHECKPOINT, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
     /* Disable block migration */
     migrate_set_block_enabled(false, &local_err);
     qemu_savevm_state_header(fb);
@@ -400,6 +408,11 @@ out:
     return ret;
 }
 
+static void colo_compare_notify_checkpoint(Notifier *notifier, void *data)
+{
+    colo_checkpoint_notify(data);
+}
+
 static void colo_process_checkpoint(MigrationState *s)
 {
     QIOChannelBuffer *bioc;
@@ -416,6 +429,9 @@ static void colo_process_checkpoint(MigrationState *s)
         goto out;
     }
 
+    packets_compare_notifier.notify = colo_compare_notify_checkpoint;
+    colo_compare_register_notifier(&packets_compare_notifier);
+
     /*
      * Wait for Secondary finish loading VM states and enter COLO
      * restore.
@@ -461,11 +477,21 @@ out:
         qemu_fclose(fb);
     }
 
-    timer_del(s->colo_delay_timer);
-
     /* Hope this not to be too long to wait here */
     qemu_sem_wait(&s->colo_exit_sem);
     qemu_sem_destroy(&s->colo_exit_sem);
+
+    /*
+     * It is safe to unregister notifier after failover finished.
+     * Besides, colo_delay_timer and colo_checkpoint_sem can't be
+     * released befor unregister notifier, or there will be use-after-free
+     * error.
+     */
+    colo_compare_unregister_notifier(&packets_compare_notifier);
+    timer_del(s->colo_delay_timer);
+    timer_free(s->colo_delay_timer);
+    qemu_sem_destroy(&s->colo_checkpoint_sem);
+
     /*
      * Must be called after failover BH is completed,
      * Or the failover BH may shutdown the wrong fd that
@@ -559,6 +585,11 @@ void *colo_process_incoming_thread(void *opaque)
     fb = qemu_fopen_channel_input(QIO_CHANNEL(bioc));
     object_unref(OBJECT(bioc));
 
+    qemu_mutex_lock_iothread();
+    vm_start();
+    trace_colo_vm_state_change("stop", "run");
+    qemu_mutex_unlock_iothread();
+
     colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_READY,
                       &local_err);
     if (local_err) {
@@ -578,6 +609,11 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        qemu_mutex_lock_iothread();
+        vm_stop_force_state(RUN_STATE_COLO);
+        trace_colo_vm_state_change("run", "stop");
+        qemu_mutex_unlock_iothread();
+
         /* FIXME: This is unnecessary for periodic checkpoint mode */
         colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_REPLY,
                      &local_err);
@@ -631,6 +667,8 @@ void *colo_process_incoming_thread(void *opaque)
         }
 
         vmstate_loading = false;
+        vm_start();
+        trace_colo_vm_state_change("stop", "run");
         qemu_mutex_unlock_iothread();
 
         if (failover_get_state() == FAILOVER_STATUS_RELAUNCH) {
diff --git a/migration/migration.c b/migration/migration.c
index d6ae879..32ce058 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -76,10 +76,8 @@
 /* Migration XBZRLE default cache size */
 #define DEFAULT_MIGRATE_XBZRLE_CACHE_SIZE (64 * 1024 * 1024)
 
-/* The delay time (in ms) between two COLO checkpoints
- * Note: Please change this default value to 10000 when we support hybrid mode.
- */
-#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY 200
+/* The delay time (in ms) between two COLO checkpoints */
+#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
 #define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
 #define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
 
-- 
2.5.0


* [Qemu-devel] [PULL V2 05/26] COLO: Add block replication into colo process
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (3 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 04/26] COLO: integrate colo compare with colo frame Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 06/26] COLO: Remove colo_state migration struct Jason Wang
                   ` (21 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Li Zhijian, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

Make sure the master starts block replication only after the slave's
block replication has started.

Besides, we need to activate the VM's block drives before going into
COLO state.
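
The startup ordering described above can be sketched as a toy handshake; this is an illustrative model only, not QEMU code, and all `toy_*` names are hypothetical:

```c
#include <stdbool.h>

/* Toy model of the startup ordering: the secondary must have started
 * block replication before the primary is allowed to start its own. */
typedef struct {
    bool secondary_started;
    bool primary_started;
} ToyRepl;

/* Secondary side: start replication and report readiness. */
static void toy_secondary_start(ToyRepl *r)
{
    r->secondary_started = true;
}

/* Primary side: refuse to start until the secondary is ready.
 * Returns 0 on success, -1 if the ordering would be violated. */
static int toy_primary_start(ToyRepl *r)
{
    if (!r->secondary_started) {
        return -1;
    }
    r->primary_started = true;
    return 0;
}
```

In the real patch this ordering falls out of the checkpoint protocol: the secondary calls replication_start_all(REPLICATION_MODE_SECONDARY) before replying to the primary, which only then starts REPLICATION_MODE_PRIMARY.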

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/colo.c      | 43 +++++++++++++++++++++++++++++++++++++++++++
 migration/migration.c | 10 ++++++++++
 2 files changed, 53 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index f4bdfde..af04010 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -27,6 +27,7 @@
 #include "replication.h"
 #include "net/colo-compare.h"
 #include "net/colo.h"
+#include "block/block.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -56,6 +57,7 @@ static void secondary_vm_do_failover(void)
 {
     int old_state;
     MigrationIncomingState *mis = migration_incoming_get_current();
+    Error *local_err = NULL;
 
     /* Can not do failover during the process of VM's loading VMstate, Or
      * it will break the secondary VM.
@@ -73,6 +75,11 @@ static void secondary_vm_do_failover(void)
     migrate_set_state(&mis->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
 
+    replication_stop_all(true, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
+
     if (!autostart) {
         error_report("\"-S\" qemu option will be ignored in secondary side");
         /* recover runstate to normal migration finish state */
@@ -110,6 +117,7 @@ static void primary_vm_do_failover(void)
 {
     MigrationState *s = migrate_get_current();
     int old_state;
+    Error *local_err = NULL;
 
     migrate_set_state(&s->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
@@ -133,6 +141,13 @@ static void primary_vm_do_failover(void)
                      FailoverStatus_str(old_state));
         return;
     }
+
+    replication_stop_all(true, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+        local_err = NULL;
+    }
+
     /* Notify COLO thread that failover work is finished */
     qemu_sem_post(&s->colo_exit_sem);
 }
@@ -356,6 +371,11 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
     qemu_savevm_state_header(fb);
     qemu_savevm_state_setup(fb);
     qemu_mutex_lock_iothread();
+    replication_do_checkpoint_all(&local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
     qemu_savevm_state_complete_precopy(fb, false, false);
     qemu_mutex_unlock_iothread();
 
@@ -446,6 +466,12 @@ static void colo_process_checkpoint(MigrationState *s)
     object_unref(OBJECT(bioc));
 
     qemu_mutex_lock_iothread();
+    replication_start_all(REPLICATION_MODE_PRIMARY, &local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
+
     vm_start();
     qemu_mutex_unlock_iothread();
     trace_colo_vm_state_change("stop", "run");
@@ -586,6 +612,11 @@ void *colo_process_incoming_thread(void *opaque)
     object_unref(OBJECT(bioc));
 
     qemu_mutex_lock_iothread();
+    replication_start_all(REPLICATION_MODE_SECONDARY, &local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
     vm_start();
     trace_colo_vm_state_change("stop", "run");
     qemu_mutex_unlock_iothread();
@@ -666,6 +697,18 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        replication_get_error_all(&local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+        /* discard colo disk buffer */
+        replication_do_checkpoint_all(&local_err);
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+
         vmstate_loading = false;
         vm_start();
         trace_colo_vm_state_change("stop", "run");
diff --git a/migration/migration.c b/migration/migration.c
index 32ce058..bf5fcd1 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -386,6 +386,7 @@ static void process_incoming_migration_co(void *opaque)
     MigrationIncomingState *mis = migration_incoming_get_current();
     PostcopyState ps;
     int ret;
+    Error *local_err = NULL;
 
     assert(mis->from_src_file);
     mis->migration_incoming_co = qemu_coroutine_self();
@@ -418,6 +419,15 @@ static void process_incoming_migration_co(void *opaque)
 
     /* we get COLO info, and know if we are in COLO mode */
     if (!ret && migration_incoming_enable_colo()) {
+        /* Make sure all file formats flush their mutable metadata */
+        bdrv_invalidate_cache_all(&local_err);
+        if (local_err) {
+            migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
+                    MIGRATION_STATUS_FAILED);
+            error_report_err(local_err);
+            exit(EXIT_FAILURE);
+        }
+
         qemu_thread_create(&mis->colo_incoming_thread, "COLO incoming",
              colo_process_incoming_thread, mis, QEMU_THREAD_JOINABLE);
         mis->have_colo_incoming_thread = true;
-- 
2.5.0


* [Qemu-devel] [PULL V2 06/26] COLO: Remove colo_state migration struct
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (4 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 05/26] COLO: Add block replication into colo process Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 07/26] COLO: Load dirty pages into SVM's RAM cache firstly Jason Wang
                   ` (20 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

On the incoming side, we need to know whether migration is going into
COLO state before normal migration starts.

Instead of using a VMStateDescription to send colo_state from the
source side to the destination side, we now use MIG_CMD_ENABLE_COLO
to indicate whether COLO is enabled or not.
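
The idea of replacing a migrated state struct with an in-stream command can be sketched as follows; this is a toy model under assumed names (`TOY_CMD_ENABLE_COLO` and friends are hypothetical), not the actual savevm protocol:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical command values mirroring the idea of MIG_CMD_ENABLE_COLO:
 * the source emits a command on the stream instead of migrating a
 * dedicated COLOState device section. */
enum toy_vm_cmd {
    TOY_CMD_INVALID = 0,
    TOY_CMD_ENABLE_COLO = 1,
};

/* Receiver-side flag, analogous to migration_colo_enabled. */
static bool toy_colo_enabled;

static void toy_incoming_enable_colo(void)
{
    toy_colo_enabled = true;
}

static bool toy_incoming_colo_enabled(void)
{
    return toy_colo_enabled;
}

/* Process one command from the stream; returns 0 on success,
 * -1 on an unknown command. */
static int toy_process_command(uint8_t cmd)
{
    switch (cmd) {
    case TOY_CMD_ENABLE_COLO:
        toy_incoming_enable_colo();
        return 0;
    default:
        return -1;
    }
}
```

The benefit is the same as in the patch: the destination learns it is entering COLO mode as soon as the command arrives, before any device state is loaded, rather than after parsing a migrated COLOState section.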

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/migration/colo.h |  5 ++--
 migration/Makefile.objs  |  2 +-
 migration/colo-comm.c    | 76 ------------------------------------------------
 migration/colo.c         | 13 ++++++++-
 migration/migration.c    | 23 ++++++++++++++-
 migration/savevm.c       | 17 +++++++++++
 migration/savevm.h       |  1 +
 migration/trace-events   |  1 +
 vl.c                     |  2 --
 9 files changed, 57 insertions(+), 83 deletions(-)
 delete mode 100644 migration/colo-comm.c

diff --git a/include/migration/colo.h b/include/migration/colo.h
index fefb2fc..99ce17a 100644
--- a/include/migration/colo.h
+++ b/include/migration/colo.h
@@ -28,8 +28,9 @@ void migrate_start_colo_process(MigrationState *s);
 bool migration_in_colo_state(void);
 
 /* loadvm */
-bool migration_incoming_enable_colo(void);
-void migration_incoming_exit_colo(void);
+void migration_incoming_enable_colo(void);
+void migration_incoming_disable_colo(void);
+bool migration_incoming_colo_enabled(void);
 void *colo_process_incoming_thread(void *opaque);
 bool migration_incoming_in_colo_state(void);
 
diff --git a/migration/Makefile.objs b/migration/Makefile.objs
index c83ec47..a4f3baf 100644
--- a/migration/Makefile.objs
+++ b/migration/Makefile.objs
@@ -1,6 +1,6 @@
 common-obj-y += migration.o socket.o fd.o exec.o
 common-obj-y += tls.o channel.o savevm.o
-common-obj-y += colo-comm.o colo.o colo-failover.o
+common-obj-y += colo.o colo-failover.o
 common-obj-y += vmstate.o vmstate-types.o page_cache.o
 common-obj-y += qemu-file.o global_state.o
 common-obj-y += qemu-file-channel.o
diff --git a/migration/colo-comm.c b/migration/colo-comm.c
deleted file mode 100644
index df26e4d..0000000
--- a/migration/colo-comm.c
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * COarse-grain LOck-stepping Virtual Machines for Non-stop Service (COLO)
- * (a.k.a. Fault Tolerance or Continuous Replication)
- *
- * Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
- * Copyright (c) 2016 FUJITSU LIMITED
- * Copyright (c) 2016 Intel Corporation
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or
- * later. See the COPYING file in the top-level directory.
- *
- */
-
-#include "qemu/osdep.h"
-#include "migration.h"
-#include "migration/colo.h"
-#include "migration/vmstate.h"
-#include "trace.h"
-
-typedef struct {
-     bool colo_requested;
-} COLOInfo;
-
-static COLOInfo colo_info;
-
-COLOMode get_colo_mode(void)
-{
-    if (migration_in_colo_state()) {
-        return COLO_MODE_PRIMARY;
-    } else if (migration_incoming_in_colo_state()) {
-        return COLO_MODE_SECONDARY;
-    } else {
-        return COLO_MODE_UNKNOWN;
-    }
-}
-
-static int colo_info_pre_save(void *opaque)
-{
-    COLOInfo *s = opaque;
-
-    s->colo_requested = migrate_colo_enabled();
-
-    return 0;
-}
-
-static bool colo_info_need(void *opaque)
-{
-   return migrate_colo_enabled();
-}
-
-static const VMStateDescription colo_state = {
-    .name = "COLOState",
-    .version_id = 1,
-    .minimum_version_id = 1,
-    .pre_save = colo_info_pre_save,
-    .needed = colo_info_need,
-    .fields = (VMStateField[]) {
-        VMSTATE_BOOL(colo_requested, COLOInfo),
-        VMSTATE_END_OF_LIST()
-    },
-};
-
-void colo_info_init(void)
-{
-    vmstate_register(NULL, 0, &colo_state, &colo_info);
-}
-
-bool migration_incoming_enable_colo(void)
-{
-    return colo_info.colo_requested;
-}
-
-void migration_incoming_exit_colo(void)
-{
-    colo_info.colo_requested = false;
-}
diff --git a/migration/colo.c b/migration/colo.c
index af04010..d3163b5 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -152,6 +152,17 @@ static void primary_vm_do_failover(void)
     qemu_sem_post(&s->colo_exit_sem);
 }
 
+COLOMode get_colo_mode(void)
+{
+    if (migration_in_colo_state()) {
+        return COLO_MODE_PRIMARY;
+    } else if (migration_incoming_in_colo_state()) {
+        return COLO_MODE_SECONDARY;
+    } else {
+        return COLO_MODE_UNKNOWN;
+    }
+}
+
 void colo_do_failover(MigrationState *s)
 {
     /* Make sure VM stopped while failover happened. */
@@ -746,7 +757,7 @@ out:
     if (mis->to_src_file) {
         qemu_fclose(mis->to_src_file);
     }
-    migration_incoming_exit_colo();
+    migration_incoming_disable_colo();
 
     rcu_unregister_thread();
     return NULL;
diff --git a/migration/migration.c b/migration/migration.c
index bf5fcd1..215e81a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -296,6 +296,22 @@ int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
     return migrate_send_rp_message(mis, msg_type, msglen, bufc);
 }
 
+static bool migration_colo_enabled;
+bool migration_incoming_colo_enabled(void)
+{
+    return migration_colo_enabled;
+}
+
+void migration_incoming_disable_colo(void)
+{
+    migration_colo_enabled = false;
+}
+
+void migration_incoming_enable_colo(void)
+{
+    migration_colo_enabled = true;
+}
+
 void qemu_start_incoming_migration(const char *uri, Error **errp)
 {
     const char *p;
@@ -418,7 +434,7 @@ static void process_incoming_migration_co(void *opaque)
     }
 
     /* we get COLO info, and know if we are in COLO mode */
-    if (!ret && migration_incoming_enable_colo()) {
+    if (!ret && migration_incoming_colo_enabled()) {
         /* Make sure all file formats flush their mutable metadata */
         bdrv_invalidate_cache_all(&local_err);
         if (local_err) {
@@ -3025,6 +3041,11 @@ static void *migration_thread(void *opaque)
         qemu_savevm_send_postcopy_advise(s->to_dst_file);
     }
 
+    if (migrate_colo_enabled()) {
+        /* Notify migration destination that we enable COLO */
+        qemu_savevm_send_colo_enable(s->to_dst_file);
+    }
+
     qemu_savevm_state_setup(s->to_dst_file);
 
     s->setup_time = qemu_clock_get_ms(QEMU_CLOCK_HOST) - setup_start;
diff --git a/migration/savevm.c b/migration/savevm.c
index 2d10e45..09ad962 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -56,6 +56,7 @@
 #include "io/channel-file.h"
 #include "sysemu/replay.h"
 #include "qjson.h"
+#include "migration/colo.h"
 
 #ifndef ETH_P_RARP
 #define ETH_P_RARP 0x8035
@@ -82,6 +83,7 @@ enum qemu_vm_cmd {
                                       were previously sent during
                                       precopy but are dirty. */
     MIG_CMD_PACKAGED,          /* Send a wrapped stream within this stream */
+    MIG_CMD_ENABLE_COLO,       /* Enable COLO */
     MIG_CMD_POSTCOPY_RESUME,   /* resume postcopy on dest */
     MIG_CMD_RECV_BITMAP,       /* Request for recved bitmap on dst */
     MIG_CMD_MAX
@@ -841,6 +843,12 @@ static void qemu_savevm_command_send(QEMUFile *f,
     qemu_fflush(f);
 }
 
+void qemu_savevm_send_colo_enable(QEMUFile *f)
+{
+    trace_savevm_send_colo_enable();
+    qemu_savevm_command_send(f, MIG_CMD_ENABLE_COLO, 0, NULL);
+}
+
 void qemu_savevm_send_ping(QEMUFile *f, uint32_t value)
 {
     uint32_t buf;
@@ -1922,6 +1930,12 @@ static int loadvm_handle_recv_bitmap(MigrationIncomingState *mis,
     return 0;
 }
 
+static int loadvm_process_enable_colo(MigrationIncomingState *mis)
+{
+    migration_incoming_enable_colo();
+    return 0;
+}
+
 /*
  * Process an incoming 'QEMU_VM_COMMAND'
  * 0           just a normal return
@@ -2001,6 +2015,9 @@ static int loadvm_process_command(QEMUFile *f)
 
     case MIG_CMD_RECV_BITMAP:
         return loadvm_handle_recv_bitmap(mis, len);
+
+    case MIG_CMD_ENABLE_COLO:
+        return loadvm_process_enable_colo(mis);
     }
 
     return 0;
diff --git a/migration/savevm.h b/migration/savevm.h
index a5e65b8..8373c2f 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -55,6 +55,7 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
                                            uint16_t len,
                                            uint64_t *start_list,
                                            uint64_t *length_list);
+void qemu_savevm_send_colo_enable(QEMUFile *f);
 
 int qemu_loadvm_state(QEMUFile *f);
 void qemu_loadvm_state_cleanup(void);
diff --git a/migration/trace-events b/migration/trace-events
index 9430f3c..fa0ff3f 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -37,6 +37,7 @@ savevm_send_ping(uint32_t val) "0x%x"
 savevm_send_postcopy_listen(void) ""
 savevm_send_postcopy_run(void) ""
 savevm_send_postcopy_resume(void) ""
+savevm_send_colo_enable(void) ""
 savevm_send_recv_bitmap(char *name) "%s"
 savevm_state_setup(void) ""
 savevm_state_resume_prepare(void) ""
diff --git a/vl.c b/vl.c
index 4e25c78..ac3ed17 100644
--- a/vl.c
+++ b/vl.c
@@ -4365,8 +4365,6 @@ int main(int argc, char **argv, char **envp)
 #endif
     }
 
-    colo_info_init();
-
     if (net_init_clients(&err) < 0) {
         error_report_err(err);
         exit(1);
-- 
2.5.0


* [Qemu-devel] [PULL V2 07/26] COLO: Load dirty pages into SVM's RAM cache firstly
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (5 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 06/26] COLO: Remove colo_state migration struct Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 08/26] ram/COLO: Record the dirty pages that SVM received Jason Wang
                   ` (19 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Li Zhijian, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

We should not load the PVM's state directly into the SVM, because
errors may happen while the SVM is receiving data, which would break
the SVM.

We need to ensure that all data has been received before loading the
state into the SVM, so we use extra memory to cache that data (the
PVM's RAM). The RAM cache on the secondary side is initially identical
to the SVM's/PVM's memory. During each checkpoint, we first write the
PVM's dirty pages into this RAM cache, so the cache always matches the
PVM's memory at every checkpoint; we then flush the cached RAM into
the SVM once all of the PVM's state has been received.
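
The receive-into-cache, flush-at-checkpoint scheme can be sketched with a toy guest; the `Toy*` names and page sizes are illustrative assumptions, not the QEMU data structures:

```c
#include <string.h>

#define TOY_PAGE  16
#define TOY_PAGES 4

/* Secondary's live RAM plus its cache of the primary's RAM. */
typedef struct {
    unsigned char ram[TOY_PAGE * TOY_PAGES];
    unsigned char cache[TOY_PAGE * TOY_PAGES];
} ToyGuest;

static void toy_init(ToyGuest *g)
{
    memset(g->ram, 0xAA, sizeof(g->ram));
    /* The cache starts out identical to RAM, like colo_init_ram_cache(). */
    memcpy(g->cache, g->ram, sizeof(g->ram));
}

/* An incoming dirty page from the primary goes into the cache,
 * never directly into live RAM. */
static void toy_receive_page(ToyGuest *g, int page, unsigned char fill)
{
    memset(g->cache + page * TOY_PAGE, fill, TOY_PAGE);
}

/* At checkpoint time, flush the cached page into the live RAM. */
static void toy_flush_page(ToyGuest *g, int page)
{
    memcpy(g->ram + page * TOY_PAGE, g->cache + page * TOY_PAGE, TOY_PAGE);
}
```

The key property the sketch shows is that a half-received checkpoint can never corrupt live RAM: until the flush, only the cache has been modified.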

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/exec/ram_addr.h |  1 +
 migration/migration.c   |  7 +++++
 migration/ram.c         | 83 +++++++++++++++++++++++++++++++++++++++++++++++--
 migration/ram.h         |  4 +++
 migration/savevm.c      |  2 +-
 5 files changed, 94 insertions(+), 3 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 3abb639..9ecd911 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -27,6 +27,7 @@ struct RAMBlock {
     struct rcu_head rcu;
     struct MemoryRegion *mr;
     uint8_t *host;
+    uint8_t *colo_cache; /* For colo, VM's ram cache */
     ram_addr_t offset;
     ram_addr_t used_length;
     ram_addr_t max_length;
diff --git a/migration/migration.c b/migration/migration.c
index 215e81a..7696729 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -444,6 +444,11 @@ static void process_incoming_migration_co(void *opaque)
             exit(EXIT_FAILURE);
         }
 
+        if (colo_init_ram_cache() < 0) {
+            error_report("Init ram cache failed");
+            exit(EXIT_FAILURE);
+        }
+
         qemu_thread_create(&mis->colo_incoming_thread, "COLO incoming",
              colo_process_incoming_thread, mis, QEMU_THREAD_JOINABLE);
         mis->have_colo_incoming_thread = true;
@@ -451,6 +456,8 @@ static void process_incoming_migration_co(void *opaque)
 
         /* Wait checkpoint incoming thread exit before free resource */
         qemu_thread_join(&mis->colo_incoming_thread);
+        /* We hold the global iothread lock, so it is safe here */
+        colo_release_ram_cache();
     }
 
     if (ret < 0) {
diff --git a/migration/ram.c b/migration/ram.c
index bc38d98..cd7a446 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3447,6 +3447,20 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
     return block->host + offset;
 }
 
+static inline void *colo_cache_from_block_offset(RAMBlock *block,
+                                                 ram_addr_t offset)
+{
+    if (!offset_in_ramblock(block, offset)) {
+        return NULL;
+    }
+    if (!block->colo_cache) {
+        error_report("%s: colo_cache is NULL in block :%s",
+                     __func__, block->idstr);
+        return NULL;
+    }
+    return block->colo_cache + offset;
+}
+
 /**
  * ram_handle_compressed: handle the zero page case
  *
@@ -3651,6 +3665,58 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
     qemu_mutex_unlock(&decomp_done_lock);
 }
 
+/*
+ * colo cache: this is for the secondary VM, we cache the whole
+ * memory of the secondary VM; the global lock must be held
+ * when calling this helper.
+ */
+int colo_init_ram_cache(void)
+{
+    RAMBlock *block;
+
+    rcu_read_lock();
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        block->colo_cache = qemu_anon_ram_alloc(block->used_length,
+                                                NULL,
+                                                false);
+        if (!block->colo_cache) {
+            error_report("%s: Can't alloc memory for COLO cache of block %s,"
+                         "size 0x" RAM_ADDR_FMT, __func__, block->idstr,
+                         block->used_length);
+            goto out_locked;
+        }
+        memcpy(block->colo_cache, block->host, block->used_length);
+    }
+    rcu_read_unlock();
+    return 0;
+
+out_locked:
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        if (block->colo_cache) {
+            qemu_anon_ram_free(block->colo_cache, block->used_length);
+            block->colo_cache = NULL;
+        }
+    }
+
+    rcu_read_unlock();
+    return -errno;
+}
+
+/* The global lock must be held when calling this helper */
+void colo_release_ram_cache(void)
+{
+    RAMBlock *block;
+
+    rcu_read_lock();
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        if (block->colo_cache) {
+            qemu_anon_ram_free(block->colo_cache, block->used_length);
+            block->colo_cache = NULL;
+        }
+    }
+    rcu_read_unlock();
+}
+
 /**
  * ram_load_setup: Setup RAM for migration incoming side
  *
@@ -3667,6 +3733,7 @@ static int ram_load_setup(QEMUFile *f, void *opaque)
 
     xbzrle_load_setup();
     ramblock_recv_map_init();
+
     return 0;
 }
 
@@ -3687,6 +3754,7 @@ static int ram_load_cleanup(void *opaque)
         g_free(rb->receivedmap);
         rb->receivedmap = NULL;
     }
+
     return 0;
 }
 
@@ -3924,13 +3992,24 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
                      RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
             RAMBlock *block = ram_block_from_stream(f, flags);
 
-            host = host_from_ram_block_offset(block, addr);
+            /*
+             * After going into COLO, we should load the Page into colo_cache.
+             */
+            if (migration_incoming_in_colo_state()) {
+                host = colo_cache_from_block_offset(block, addr);
+            } else {
+                host = host_from_ram_block_offset(block, addr);
+            }
             if (!host) {
                 error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
                 ret = -EINVAL;
                 break;
             }
-            ramblock_recv_bitmap_set(block, host);
+
+            if (!migration_incoming_in_colo_state()) {
+                ramblock_recv_bitmap_set(block, host);
+            }
+
             trace_ram_load_loop(block->idstr, (uint64_t)addr, flags, host);
         }
 
diff --git a/migration/ram.h b/migration/ram.h
index a139066..83ff1bc 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -71,4 +71,8 @@ int64_t ramblock_recv_bitmap_send(QEMUFile *file,
                                   const char *block_name);
 int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *rb);
 
+/* ram cache */
+int colo_init_ram_cache(void);
+void colo_release_ram_cache(void);
+
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 09ad962..288b807 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1933,7 +1933,7 @@ static int loadvm_handle_recv_bitmap(MigrationIncomingState *mis,
 static int loadvm_process_enable_colo(MigrationIncomingState *mis)
 {
     migration_incoming_enable_colo();
-    return 0;
+    return colo_init_ram_cache();
 }
 
 /*
-- 
2.5.0


* [Qemu-devel] [PULL V2 08/26] ram/COLO: Record the dirty pages that SVM received
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (6 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 07/26] COLO: Load dirty pages into SVM's RAM cache firstly Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 09/26] COLO: Flush memory data from ram cache Jason Wang
                   ` (18 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

We record the addresses of the received dirty pages; this helps us
flush the pages cached for the SVM.

The trick here is that we record dirty pages by reusing the migration
dirty bitmap. In a later patch, we will start dirty logging for the
SVM, just as migration does. This way we can record the dirty pages
caused by both the PVM and the SVM, and only flush those dirty pages
from the RAM cache when doing a checkpoint.
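
The bitmap bookkeeping can be sketched as below; this is a simplified, non-atomic stand-in for the kernel-style test_and_set_bit(), and the `toy_*` names are hypothetical:

```c
#include <limits.h>

#define TOY_BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Set the bit and report whether it was already set, loosely in the
 * spirit of test_and_set_bit(). */
static int toy_test_and_set_bit(unsigned long nr, unsigned long *map)
{
    unsigned long mask = 1UL << (nr % TOY_BITS_PER_LONG);
    unsigned long *p = map + nr / TOY_BITS_PER_LONG;
    int old = (*p & mask) != 0;
    *p |= mask;
    return old;
}

/* Record one received page, bumping the dirty-page counter only on
 * the 0 -> 1 transition, as the patch does for migration_dirty_pages.
 * Returns the updated counter for convenience. */
static unsigned long toy_record_page(unsigned long page,
                                     unsigned long *bmap,
                                     unsigned long *dirty_pages)
{
    if (!toy_test_and_set_bit(page, bmap)) {
        (*dirty_pages)++;
    }
    return *dirty_pages;
}
```

Counting only on the first transition means a page resent within one checkpoint window is not double-counted, which keeps the dirty-page statistic meaningful for the later flush.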

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/ram.c | 43 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index cd7a446..404c8f0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3458,6 +3458,15 @@ static inline void *colo_cache_from_block_offset(RAMBlock *block,
                      __func__, block->idstr);
         return NULL;
     }
+
+    /*
+    * During a colo checkpoint, we need a bitmap of these migrated pages.
+    * It helps us decide which pages in the ram cache should be flushed
+    * into the VM's RAM later.
+    */
+    if (!test_and_set_bit(offset >> TARGET_PAGE_BITS, block->bmap)) {
+        ram_state->migration_dirty_pages++;
+    }
     return block->colo_cache + offset;
 }
 
@@ -3675,7 +3684,7 @@ int colo_init_ram_cache(void)
     RAMBlock *block;
 
     rcu_read_lock();
-    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+    RAMBLOCK_FOREACH_MIGRATABLE(block) {
         block->colo_cache = qemu_anon_ram_alloc(block->used_length,
                                                 NULL,
                                                 false);
@@ -3688,10 +3697,29 @@ int colo_init_ram_cache(void)
         memcpy(block->colo_cache, block->host, block->used_length);
     }
     rcu_read_unlock();
+    /*
+    * Record the dirty pages sent by the PVM; we use this dirty bitmap to
+    * decide which pages in the cache should be flushed into the SVM's RAM.
+    * Here we use the same name 'ram_bitmap' as for migration.
+    */
+    if (ram_bytes_total()) {
+        RAMBlock *block;
+
+        RAMBLOCK_FOREACH_MIGRATABLE(block) {
+            unsigned long pages = block->max_length >> TARGET_PAGE_BITS;
+
+            block->bmap = bitmap_new(pages);
+            bitmap_set(block->bmap, 0, pages);
+        }
+    }
+    ram_state = g_new0(RAMState, 1);
+    ram_state->migration_dirty_pages = 0;
+
     return 0;
 
 out_locked:
-    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+
+    RAMBLOCK_FOREACH_MIGRATABLE(block) {
         if (block->colo_cache) {
             qemu_anon_ram_free(block->colo_cache, block->used_length);
             block->colo_cache = NULL;
@@ -3707,14 +3735,23 @@ void colo_release_ram_cache(void)
 {
     RAMBlock *block;
 
+    RAMBLOCK_FOREACH_MIGRATABLE(block) {
+        g_free(block->bmap);
+        block->bmap = NULL;
+    }
+
     rcu_read_lock();
-    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+
+    RAMBLOCK_FOREACH_MIGRATABLE(block) {
         if (block->colo_cache) {
             qemu_anon_ram_free(block->colo_cache, block->used_length);
             block->colo_cache = NULL;
         }
     }
+
     rcu_read_unlock();
+    g_free(ram_state);
+    ram_state = NULL;
 }
 
 /**
-- 
2.5.0


* [Qemu-devel] [PULL V2 09/26] COLO: Flush memory data from ram cache
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (7 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 08/26] ram/COLO: Record the dirty pages that SVM received Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 10/26] qmp event: Add COLO_EXIT event to notify users while exited COLO Jason Wang
                   ` (17 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Li Zhijian, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

While the VM is running, the PVM may dirty some pages; we transfer
the PVM's dirty pages to the SVM and store them in the SVM's RAM cache
at the next checkpoint. So after each checkpoint, the content of the
SVM's RAM cache always matches the PVM's memory.

Instead of flushing the entire content of the RAM cache into the SVM's
memory, we do this more efficiently: we only flush pages dirtied by
the PVM since the last checkpoint. In this way, we keep the SVM's
memory identical to the PVM's.

Besides, we must flush the RAM cache before loading the device state.
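
The selective flush can be sketched with a single-word toy bitmap; this mirrors the shape of colo_flush_ram_cache() (find dirty bit, clear it, copy the page) but the `toy_*`/`FP_*` names and sizes are illustrative assumptions:

```c
#include <string.h>

#define FP_PAGE  8
#define FP_PAGES 4

/* Walk the dirty bitmap; for each set bit, clear it and copy the
 * cached page into live RAM. Returns the number of pages flushed. */
static int toy_flush_dirty(unsigned long *bmap,
                           unsigned char *ram,
                           const unsigned char *cache)
{
    int flushed = 0;

    for (int page = 0; page < FP_PAGES; page++) {
        if (*bmap & (1UL << page)) {
            *bmap &= ~(1UL << page);
            memcpy(ram + page * FP_PAGE, cache + page * FP_PAGE, FP_PAGE);
            flushed++;
        }
    }
    return flushed;
}
```

Clean pages are skipped entirely, which is the efficiency gain the commit message describes: only pages dirtied since the last checkpoint are copied.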

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/ram.c        | 37 +++++++++++++++++++++++++++++++++++++
 migration/trace-events |  2 ++
 2 files changed, 39 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index 404c8f0..477853d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3974,6 +3974,39 @@ static bool postcopy_is_running(void)
     return ps >= POSTCOPY_INCOMING_LISTENING && ps < POSTCOPY_INCOMING_END;
 }
 
+/*
+ * Flush content of RAM cache into SVM's memory.
+ * Only flush the pages that have been dirtied by the PVM, the SVM, or both.
+ */
+static void colo_flush_ram_cache(void)
+{
+    RAMBlock *block = NULL;
+    void *dst_host;
+    void *src_host;
+    unsigned long offset = 0;
+
+    trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
+    rcu_read_lock();
+    block = QLIST_FIRST_RCU(&ram_list.blocks);
+
+    while (block) {
+        offset = migration_bitmap_find_dirty(ram_state, block, offset);
+
+        if (offset << TARGET_PAGE_BITS >= block->used_length) {
+            offset = 0;
+            block = QLIST_NEXT_RCU(block, next);
+        } else {
+            migration_bitmap_clear_dirty(ram_state, block, offset);
+            dst_host = block->host + (offset << TARGET_PAGE_BITS);
+            src_host = block->colo_cache + (offset << TARGET_PAGE_BITS);
+            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+        }
+    }
+
+    rcu_read_unlock();
+    trace_colo_flush_ram_cache_end();
+}
+
 static int ram_load(QEMUFile *f, void *opaque, int version_id)
 {
     int flags = 0, ret = 0, invalid_flags = 0;
@@ -4150,6 +4183,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     ret |= wait_for_decompress_done();
     rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
+
+    if (!ret  && migration_incoming_in_colo_state()) {
+        colo_flush_ram_cache();
+    }
     return ret;
 }
 
diff --git a/migration/trace-events b/migration/trace-events
index fa0ff3f..bd2d0cd 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -102,6 +102,8 @@ ram_dirty_bitmap_sync_start(void) ""
 ram_dirty_bitmap_sync_wait(void) ""
 ram_dirty_bitmap_sync_complete(void) ""
 ram_state_resume_prepare(uint64_t v) "%" PRId64
+colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
+colo_flush_ram_cache_end(void) ""
 
 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
-- 
2.5.0


* [Qemu-devel] [PULL V2 10/26] qmp event: Add COLO_EXIT event to notify users while exited COLO
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (8 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 09/26] COLO: Flush memory data from ram cache Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 11/26] qapi/migration.json: Rename COLO unknown mode to none mode Jason Wang
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: zhanghailiang, Li Zhijian, Zhang Chen, Zhang Chen, Jason Wang

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

If some error happens during the VM's COLO FT stage, it's important to
notify the user of this event. Together with 'x-colo-lost-heartbeat',
the user can then intervene in COLO's failover work immediately.
Even if the user doesn't want to get involved in COLO's failover verdict,
it is still necessary to notify them that we exited COLO mode.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/colo.c    | 31 +++++++++++++++++++++++++++++++
 qapi/migration.json | 38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index d3163b5..bd7390d 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -28,6 +28,7 @@
 #include "net/colo-compare.h"
 #include "net/colo.h"
 #include "block/block.h"
+#include "qapi/qapi-events-migration.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -514,6 +515,23 @@ out:
         qemu_fclose(fb);
     }
 
+    /*
+     * There are only two reasons we can get here, some error happened
+     * or the user triggered failover.
+     */
+    switch (failover_get_state()) {
+    case FAILOVER_STATUS_NONE:
+        qapi_event_send_colo_exit(COLO_MODE_PRIMARY,
+                                  COLO_EXIT_REASON_ERROR);
+        break;
+    case FAILOVER_STATUS_REQUIRE:
+        qapi_event_send_colo_exit(COLO_MODE_PRIMARY,
+                                  COLO_EXIT_REASON_REQUEST);
+        break;
+    default:
+        abort();
+    }
+
     /* Hope this not to be too long to wait here */
     qemu_sem_wait(&s->colo_exit_sem);
     qemu_sem_destroy(&s->colo_exit_sem);
@@ -746,6 +764,19 @@ out:
         error_report_err(local_err);
     }
 
+    switch (failover_get_state()) {
+    case FAILOVER_STATUS_NONE:
+        qapi_event_send_colo_exit(COLO_MODE_SECONDARY,
+                                  COLO_EXIT_REASON_ERROR);
+        break;
+    case FAILOVER_STATUS_REQUIRE:
+        qapi_event_send_colo_exit(COLO_MODE_SECONDARY,
+                                  COLO_EXIT_REASON_REQUEST);
+        break;
+    default:
+        abort();
+    }
+
     if (fb) {
         qemu_fclose(fb);
     }
diff --git a/qapi/migration.json b/qapi/migration.json
index 6e8c212..4a18209 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -957,6 +957,44 @@
   'data': [ 'none', 'require', 'active', 'completed', 'relaunch' ] }
 
 ##
+# @COLO_EXIT:
+#
+# Emitted when the VM finishes COLO mode, due to some error happening
+# or at the request of users.
+#
+# @mode: which COLO mode the VM was in when it exited.
+#
+# @reason: describes the reason for the COLO exit.
+#
+# Since: 3.1
+#
+# Example:
+#
+# <- { "timestamp": {"seconds": 2032141960, "microseconds": 417172},
+#      "event": "COLO_EXIT", "data": {"mode": "primary", "reason": "request" } }
+#
+##
+{ 'event': 'COLO_EXIT',
+  'data': {'mode': 'COLOMode', 'reason': 'COLOExitReason' } }
+
+##
+# @COLOExitReason:
+#
+# The reason for a COLO exit
+#
+# @none: no failover has ever happened. This can't occur in the
+# COLO_EXIT event, only in the result of query-colo-status.
+#
+# @request: COLO exit is due to an external request
+#
+# @error: COLO exit is due to an internal error
+#
+# Since: 3.1
+##
+{ 'enum': 'COLOExitReason',
+  'data': [ 'none', 'request', 'error' ] }
+
+##
 # @x-colo-lost-heartbeat:
 #
 # Tell qemu that heartbeat is lost, request it to do takeover procedures.
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [Qemu-devel] [PULL V2 11/26] qapi/migration.json: Rename COLO unknown mode to none mode.
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (9 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 10/26] qmp event: Add COLO_EXIT event to notify users while exited COLO Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 12/26] qapi: Add new command to query colo status Jason Wang
                   ` (15 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Zhang Chen, Zhang Chen, Jason Wang

From: Zhang Chen <chen.zhang@intel.com>

As suggested by Markus Armbruster, rename COLO's unknown mode to none mode.

Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/colo-failover.c |  2 +-
 migration/colo.c          |  2 +-
 qapi/migration.json       | 10 +++++-----
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/migration/colo-failover.c b/migration/colo-failover.c
index 0ae0c41..4854a96 100644
--- a/migration/colo-failover.c
+++ b/migration/colo-failover.c
@@ -77,7 +77,7 @@ FailoverStatus failover_get_state(void)
 
 void qmp_x_colo_lost_heartbeat(Error **errp)
 {
-    if (get_colo_mode() == COLO_MODE_UNKNOWN) {
+    if (get_colo_mode() == COLO_MODE_NONE) {
         error_setg(errp, QERR_FEATURE_DISABLED, "colo");
         return;
     }
diff --git a/migration/colo.c b/migration/colo.c
index bd7390d..2cdd366 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -160,7 +160,7 @@ COLOMode get_colo_mode(void)
     } else if (migration_incoming_in_colo_state()) {
         return COLO_MODE_SECONDARY;
     } else {
-        return COLO_MODE_UNKNOWN;
+        return COLO_MODE_NONE;
     }
 }
 
diff --git a/qapi/migration.json b/qapi/migration.json
index 4a18209..0776e0f 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -923,18 +923,18 @@
 ##
 # @COLOMode:
 #
-# The colo mode
+# The current COLO mode.
 #
-# @unknown: unknown mode
+# @none: COLO is disabled.
 #
-# @primary: master side
+# @primary: COLO node on the primary side.
 #
-# @secondary: slave side
+# @secondary: COLO node on the secondary side.
 #
 # Since: 2.8
 ##
 { 'enum': 'COLOMode',
-  'data': [ 'unknown', 'primary', 'secondary'] }
+  'data': [ 'none', 'primary', 'secondary'] }
 
 ##
 # @FailoverStatus:
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [Qemu-devel] [PULL V2 12/26] qapi: Add new command to query colo status
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (10 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 11/26] qapi/migration.json: Rename COLO unknown mode to none mode Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19 15:30   ` Eric Blake
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 13/26] savevm: split the process of different stages for loadvm/savevm Jason Wang
                   ` (14 subsequent siblings)
  26 siblings, 1 reply; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Zhang Chen, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

Libvirt or other high-level software can use this command to query
COLO status. You can test this command like this:
{'execute':'query-colo-status'}

Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/colo.c    | 21 +++++++++++++++++++++
 qapi/migration.json | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 2cdd366..94c4e09 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -29,6 +29,7 @@
 #include "net/colo.h"
 #include "block/block.h"
 #include "qapi/qapi-events-migration.h"
+#include "qapi/qmp/qerror.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -237,6 +238,26 @@ void qmp_xen_colo_do_checkpoint(Error **errp)
 #endif
 }
 
+COLOStatus *qmp_query_colo_status(Error **errp)
+{
+    COLOStatus *s = g_new0(COLOStatus, 1);
+
+    s->mode = get_colo_mode();
+
+    switch (failover_get_state()) {
+    case FAILOVER_STATUS_NONE:
+        s->reason = COLO_EXIT_REASON_NONE;
+        break;
+    case FAILOVER_STATUS_REQUIRE:
+        s->reason = COLO_EXIT_REASON_REQUEST;
+        break;
+    default:
+        s->reason = COLO_EXIT_REASON_ERROR;
+    }
+
+    return s;
+}
+
 static void colo_send_message(QEMUFile *f, COLOMessage msg,
                               Error **errp)
 {
diff --git a/qapi/migration.json b/qapi/migration.json
index 0776e0f..0928f4b 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1308,6 +1308,38 @@
 { 'command': 'xen-colo-do-checkpoint' }
 
 ##
+# @COLOStatus:
+#
+# The result format for 'query-colo-status'.
+#
+# @mode: COLO running mode. If COLO is running, this field will return
+#        'primary' or 'secondary'.
+#
+# @reason: describes the reason for the COLO exit.
+#
+# Since: 3.1
+##
+{ 'struct': 'COLOStatus',
+  'data': { 'mode': 'COLOMode', 'reason': 'COLOExitReason' } }
+
+##
+# @query-colo-status:
+#
+# Query COLO status while the vm is running.
+#
+# Returns: A @COLOStatus object showing the status.
+#
+# Example:
+#
+# -> { "execute": "query-colo-status" }
+# <- { "return": { "mode": "primary", "reason": "request" } }
+#
+# Since: 3.1
+##
+{ 'command': 'query-colo-status',
+  'returns': 'COLOStatus' }
+
+##
 # @migrate-recover:
 #
 # Provide a recovery migration stream URI.
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [Qemu-devel] [PULL V2 13/26] savevm: split the process of different stages for loadvm/savevm
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (11 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 12/26] qapi: Add new command to query colo status Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 14/26] COLO: flush host dirty ram from cache Jason Wang
                   ` (13 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Li Zhijian, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

There are several stages in the loadvm/savevm process. In each stage,
the incoming migration side processes different types of sections.
We want to control these stages more precisely: it will benefit COLO
performance, since we don't have to save QEMU_VM_SECTION_START
sections every time we do a checkpoint; besides, we want to separate
the process of saving/loading memory from that of device state.

So we add two new helper functions, qemu_load_device_state() and
qemu_savevm_live_state(), to handle the different stages of migration.

Besides, we make qemu_loadvm_state_main() and qemu_save_device_state()
public, and simplify the code of qemu_save_device_state() by calling the
wrapper qemu_savevm_state_header().

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/colo.c   | 41 ++++++++++++++++++++++++++++++++---------
 migration/savevm.c | 36 +++++++++++++++++++++++++++++-------
 migration/savevm.h |  4 ++++
 3 files changed, 65 insertions(+), 16 deletions(-)

diff --git a/migration/colo.c b/migration/colo.c
index 94c4e09..59bb507 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -30,6 +30,7 @@
 #include "block/block.h"
 #include "qapi/qapi-events-migration.h"
 #include "qapi/qmp/qerror.h"
+#include "sysemu/cpus.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -401,24 +402,35 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
 
     /* Disable block migration */
     migrate_set_block_enabled(false, &local_err);
-    qemu_savevm_state_header(fb);
-    qemu_savevm_state_setup(fb);
     qemu_mutex_lock_iothread();
     replication_do_checkpoint_all(&local_err);
     if (local_err) {
         qemu_mutex_unlock_iothread();
         goto out;
     }
-    qemu_savevm_state_complete_precopy(fb, false, false);
-    qemu_mutex_unlock_iothread();
-
-    qemu_fflush(fb);
 
     colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND, &local_err);
     if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
+    /* Note: device state is saved into buffer */
+    ret = qemu_save_device_state(fb);
+
+    qemu_mutex_unlock_iothread();
+    if (ret < 0) {
         goto out;
     }
     /*
+     * Only save the VM's live state, not including device state.
+     * TODO: We may need a timeout mechanism to prevent the COLO
+     * process from being blocked here.
+     */
+    qemu_savevm_live_state(s->to_dst_file);
+
+    qemu_fflush(fb);
+
+    /*
      * We need the size of the VMstate data in Secondary side,
      * With which we can decide how much data should be read.
      */
@@ -635,6 +647,7 @@ void *colo_process_incoming_thread(void *opaque)
     uint64_t total_size;
     uint64_t value;
     Error *local_err = NULL;
+    int ret;
 
     rcu_register_thread();
     qemu_sem_init(&mis->colo_incoming_sem, 0);
@@ -708,6 +721,16 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        qemu_mutex_lock_iothread();
+        cpu_synchronize_all_pre_loadvm();
+        ret = qemu_loadvm_state_main(mis->from_src_file, mis);
+        qemu_mutex_unlock_iothread();
+
+        if (ret < 0) {
+            error_report("Load VM's live state (ram) error");
+            goto out;
+        }
+
         value = colo_receive_message_value(mis->from_src_file,
                                  COLO_MESSAGE_VMSTATE_SIZE, &local_err);
         if (local_err) {
@@ -739,10 +762,10 @@ void *colo_process_incoming_thread(void *opaque)
         }
 
         qemu_mutex_lock_iothread();
-        qemu_system_reset(SHUTDOWN_CAUSE_NONE);
         vmstate_loading = true;
-        if (qemu_loadvm_state(fb) < 0) {
-            error_report("COLO: loadvm failed");
+        ret = qemu_load_device_state(fb);
+        if (ret < 0) {
+            error_report("COLO: load device state failed");
             qemu_mutex_unlock_iothread();
             goto out;
         }
diff --git a/migration/savevm.c b/migration/savevm.c
index 288b807..e4caff9 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1378,13 +1378,21 @@ done:
     return ret;
 }
 
-static int qemu_save_device_state(QEMUFile *f)
+void qemu_savevm_live_state(QEMUFile *f)
 {
-    SaveStateEntry *se;
+    /* save QEMU_VM_SECTION_END section */
+    qemu_savevm_state_complete_precopy(f, true, false);
+    qemu_put_byte(f, QEMU_VM_EOF);
+}
 
-    qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
-    qemu_put_be32(f, QEMU_VM_FILE_VERSION);
+int qemu_save_device_state(QEMUFile *f)
+{
+    SaveStateEntry *se;
 
+    if (!migration_in_colo_state()) {
+        qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
+        qemu_put_be32(f, QEMU_VM_FILE_VERSION);
+    }
     cpu_synchronize_all_states();
 
     QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
@@ -1440,8 +1448,6 @@ enum LoadVMExitCodes {
     LOADVM_QUIT     =  1,
 };
 
-static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
-
 /* ------ incoming postcopy messages ------ */
 /* 'advise' arrives before any transfers just to tell us that a postcopy
  * *might* happen - it might be skipped if precopy transferred everything
@@ -2247,7 +2253,7 @@ static bool postcopy_pause_incoming(MigrationIncomingState *mis)
     return true;
 }
 
-static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
+int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
 {
     uint8_t section_type;
     int ret = 0;
@@ -2418,6 +2424,22 @@ int qemu_loadvm_state(QEMUFile *f)
     return ret;
 }
 
+int qemu_load_device_state(QEMUFile *f)
+{
+    MigrationIncomingState *mis = migration_incoming_get_current();
+    int ret;
+
+    /* Load QEMU_VM_SECTION_FULL section */
+    ret = qemu_loadvm_state_main(f, mis);
+    if (ret < 0) {
+        error_report("Failed to load device state: %d", ret);
+        return ret;
+    }
+
+    cpu_synchronize_all_post_init();
+    return 0;
+}
+
 int save_snapshot(const char *name, Error **errp)
 {
     BlockDriverState *bs, *bs1;
diff --git a/migration/savevm.h b/migration/savevm.h
index 8373c2f..51a4b9c 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -56,8 +56,12 @@ void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
                                            uint64_t *start_list,
                                            uint64_t *length_list);
 void qemu_savevm_send_colo_enable(QEMUFile *f);
+void qemu_savevm_live_state(QEMUFile *f);
+int qemu_save_device_state(QEMUFile *f);
 
 int qemu_loadvm_state(QEMUFile *f);
 void qemu_loadvm_state_cleanup(void);
+int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
+int qemu_load_device_state(QEMUFile *f);
 
 #endif
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [Qemu-devel] [PULL V2 14/26] COLO: flush host dirty ram from cache
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (12 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 13/26] savevm: split the process of different stages for loadvm/savevm Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 15/26] filter: Add handle_event method for NetFilterClass Jason Wang
                   ` (12 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: zhanghailiang, Li Zhijian, Zhang Chen, Zhang Chen, Jason Wang

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

There is no need to flush all of the VM's RAM from the cache; only
flush the pages that have been dirtied since the last checkpoint.

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/ram.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index 477853d..7e7deec 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3714,6 +3714,7 @@ int colo_init_ram_cache(void)
     }
     ram_state = g_new0(RAMState, 1);
     ram_state->migration_dirty_pages = 0;
+    memory_global_dirty_log_start();
 
     return 0;
 
@@ -3735,6 +3736,7 @@ void colo_release_ram_cache(void)
 {
     RAMBlock *block;
 
+    memory_global_dirty_log_stop();
     RAMBLOCK_FOREACH_MIGRATABLE(block) {
         g_free(block->bmap);
         block->bmap = NULL;
@@ -3985,6 +3987,13 @@ static void colo_flush_ram_cache(void)
     void *src_host;
     unsigned long offset = 0;
 
+    memory_global_dirty_log_sync();
+    rcu_read_lock();
+    RAMBLOCK_FOREACH_MIGRATABLE(block) {
+        migration_bitmap_sync_range(ram_state, block, 0, block->used_length);
+    }
+    rcu_read_unlock();
+
     trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
     rcu_read_lock();
     block = QLIST_FIRST_RCU(&ram_list.blocks);
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [Qemu-devel] [PULL V2 15/26] filter: Add handle_event method for NetFilterClass
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (13 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 14/26] COLO: flush host dirty ram from cache Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 16/26] filter-rewriter: handle checkpoint and failover event Jason Wang
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

Filters need to process checkpoint/failover and other events passed
by the COLO frame.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/net/filter.h |  5 +++++
 net/filter.c         | 17 +++++++++++++++++
 net/net.c            | 19 +++++++++++++++++++
 3 files changed, 41 insertions(+)

diff --git a/include/net/filter.h b/include/net/filter.h
index 435acd6..49da666 100644
--- a/include/net/filter.h
+++ b/include/net/filter.h
@@ -38,6 +38,8 @@ typedef ssize_t (FilterReceiveIOV)(NetFilterState *nc,
 
 typedef void (FilterStatusChanged) (NetFilterState *nf, Error **errp);
 
+typedef void (FilterHandleEvent) (NetFilterState *nf, int event, Error **errp);
+
 typedef struct NetFilterClass {
     ObjectClass parent_class;
 
@@ -45,6 +47,7 @@ typedef struct NetFilterClass {
     FilterSetup *setup;
     FilterCleanup *cleanup;
     FilterStatusChanged *status_changed;
+    FilterHandleEvent *handle_event;
     /* mandatory */
     FilterReceiveIOV *receive_iov;
 } NetFilterClass;
@@ -77,4 +80,6 @@ ssize_t qemu_netfilter_pass_to_next(NetClientState *sender,
                                     int iovcnt,
                                     void *opaque);
 
+void colo_notify_filters_event(int event, Error **errp);
+
 #endif /* QEMU_NET_FILTER_H */
diff --git a/net/filter.c b/net/filter.c
index 2fd7d7d..c9f9e5f 100644
--- a/net/filter.c
+++ b/net/filter.c
@@ -17,6 +17,8 @@
 #include "net/vhost_net.h"
 #include "qom/object_interfaces.h"
 #include "qemu/iov.h"
+#include "net/colo.h"
+#include "migration/colo.h"
 
 static inline bool qemu_can_skip_netfilter(NetFilterState *nf)
 {
@@ -245,11 +247,26 @@ static void netfilter_finalize(Object *obj)
     g_free(nf->netdev_id);
 }
 
+static void default_handle_event(NetFilterState *nf, int event, Error **errp)
+{
+    switch (event) {
+    case COLO_EVENT_CHECKPOINT:
+        break;
+    case COLO_EVENT_FAILOVER:
+        object_property_set_str(OBJECT(nf), "off", "status", errp);
+        break;
+    default:
+        break;
+    }
+}
+
 static void netfilter_class_init(ObjectClass *oc, void *data)
 {
     UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
+    NetFilterClass *nfc = NETFILTER_CLASS(oc);
 
     ucc->complete = netfilter_complete;
+    nfc->handle_event = default_handle_event;
 }
 
 static const TypeInfo netfilter_info = {
diff --git a/net/net.c b/net/net.c
index cdcd5cf..c66847e 100644
--- a/net/net.c
+++ b/net/net.c
@@ -1335,6 +1335,25 @@ void hmp_info_network(Monitor *mon, const QDict *qdict)
     }
 }
 
+void colo_notify_filters_event(int event, Error **errp)
+{
+    NetClientState *nc;
+    NetFilterState *nf;
+    NetFilterClass *nfc = NULL;
+    Error *local_err = NULL;
+
+    QTAILQ_FOREACH(nc, &net_clients, next) {
+        QTAILQ_FOREACH(nf, &nc->filters, next) {
+            nfc = NETFILTER_GET_CLASS(OBJECT(nf));
+            nfc->handle_event(nf, event, &local_err);
+            if (local_err) {
+                error_propagate(errp, local_err);
+                return;
+            }
+        }
+    }
+}
+
 void qmp_set_link(const char *name, bool up, Error **errp)
 {
     NetClientState *ncs[MAX_QUEUE_NUM];
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [Qemu-devel] [PULL V2 16/26] filter-rewriter: handle checkpoint and failover event
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (14 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 15/26] filter: Add handle_event method for NetFilterClass Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 17/26] COLO: notify net filters about checkpoint/failover event Jason Wang
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel
  Cc: Zhang Chen, zhanghailiang, Zhang Chen, Jason Wang

From: Zhang Chen <zhangckid@gmail.com>

After one round of checkpointing, the states of the PVM and SVM become
consistent, so it is unnecessary to adjust the sequence numbers of net
packets for old connections. Besides, when failover happens,
filter-rewriter enters failover mode, in which it doesn't need to
handle new TCP connections.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/colo-compare.c    | 12 +++++------
 net/colo.c            |  8 ++++++++
 net/colo.h            |  2 ++
 net/filter-rewriter.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 73 insertions(+), 6 deletions(-)

diff --git a/net/colo-compare.c b/net/colo-compare.c
index 3f7e240..a39191d 100644
--- a/net/colo-compare.c
+++ b/net/colo-compare.c
@@ -116,6 +116,12 @@ enum {
     SECONDARY_IN,
 };
 
+static void colo_compare_inconsistency_notify(void)
+{
+    notifier_list_notify(&colo_compare_notifiers,
+                migrate_get_current());
+}
+
 static int compare_chr_send(CompareState *s,
                             const uint8_t *buf,
                             uint32_t size,
@@ -330,12 +336,6 @@ static bool colo_mark_tcp_pkt(Packet *ppkt, Packet *spkt,
     return false;
 }
 
-static void colo_compare_inconsistency_notify(void)
-{
-    notifier_list_notify(&colo_compare_notifiers,
-                migrate_get_current());
-}
-
 static void colo_compare_tcp(CompareState *s, Connection *conn)
 {
     Packet *ppkt = NULL, *spkt = NULL;
diff --git a/net/colo.c b/net/colo.c
index 97c8fc9..49176bf 100644
--- a/net/colo.c
+++ b/net/colo.c
@@ -221,3 +221,11 @@ Connection *connection_get(GHashTable *connection_track_table,
 
     return conn;
 }
+
+bool connection_has_tracked(GHashTable *connection_track_table,
+                            ConnectionKey *key)
+{
+    Connection *conn = g_hash_table_lookup(connection_track_table, key);
+
+    return conn ? true : false;
+}
diff --git a/net/colo.h b/net/colo.h
index 0277e0e..11c5226 100644
--- a/net/colo.h
+++ b/net/colo.h
@@ -98,6 +98,8 @@ void connection_destroy(void *opaque);
 Connection *connection_get(GHashTable *connection_track_table,
                            ConnectionKey *key,
                            GQueue *conn_list);
+bool connection_has_tracked(GHashTable *connection_track_table,
+                            ConnectionKey *key);
 void connection_hashtable_reset(GHashTable *connection_track_table);
 Packet *packet_new(const void *data, int size, int vnet_hdr_len);
 void packet_destroy(void *opaque, void *user_data);
diff --git a/net/filter-rewriter.c b/net/filter-rewriter.c
index dd323fa..bb8f4d9 100644
--- a/net/filter-rewriter.c
+++ b/net/filter-rewriter.c
@@ -20,11 +20,15 @@
 #include "qemu/main-loop.h"
 #include "qemu/iov.h"
 #include "net/checksum.h"
+#include "net/colo.h"
+#include "migration/colo.h"
 
 #define FILTER_COLO_REWRITER(obj) \
     OBJECT_CHECK(RewriterState, (obj), TYPE_FILTER_REWRITER)
 
 #define TYPE_FILTER_REWRITER "filter-rewriter"
+#define FAILOVER_MODE_ON  true
+#define FAILOVER_MODE_OFF false
 
 typedef struct RewriterState {
     NetFilterState parent_obj;
@@ -32,8 +36,14 @@ typedef struct RewriterState {
     /* hashtable to save connection */
     GHashTable *connection_track_table;
     bool vnet_hdr;
+    bool failover_mode;
 } RewriterState;
 
+static void filter_rewriter_failover_mode(RewriterState *s)
+{
+    s->failover_mode = FAILOVER_MODE_ON;
+}
+
 static void filter_rewriter_flush(NetFilterState *nf)
 {
     RewriterState *s = FILTER_COLO_REWRITER(nf);
@@ -273,6 +283,13 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
              */
             reverse_connection_key(&key);
         }
+
+        /* After failover we don't need to rewrite new TCP packets */
+        if (s->failover_mode &&
+            !connection_has_tracked(s->connection_track_table, &key)) {
+            goto out;
+        }
+
         conn = connection_get(s->connection_track_table,
                               &key,
                               NULL);
@@ -306,11 +323,49 @@ static ssize_t colo_rewriter_receive_iov(NetFilterState *nf,
         }
     }
 
+out:
     packet_destroy(pkt, NULL);
     pkt = NULL;
     return 0;
 }
 
+static void reset_seq_offset(gpointer key, gpointer value, gpointer user_data)
+{
+    Connection *conn = (Connection *)value;
+
+    conn->offset = 0;
+}
+
+static gboolean offset_is_nonzero(gpointer key,
+                                  gpointer value,
+                                  gpointer user_data)
+{
+    Connection *conn = (Connection *)value;
+
+    return conn->offset ? true : false;
+}
+
+static void colo_rewriter_handle_event(NetFilterState *nf, int event,
+                                       Error **errp)
+{
+    RewriterState *rs = FILTER_COLO_REWRITER(nf);
+
+    switch (event) {
+    case COLO_EVENT_CHECKPOINT:
+        g_hash_table_foreach(rs->connection_track_table,
+                            reset_seq_offset, NULL);
+        break;
+    case COLO_EVENT_FAILOVER:
+        if (!g_hash_table_find(rs->connection_track_table,
+                              offset_is_nonzero, NULL)) {
+            filter_rewriter_failover_mode(rs);
+        }
+        break;
+    default:
+        break;
+    }
+}
+
 static void colo_rewriter_cleanup(NetFilterState *nf)
 {
     RewriterState *s = FILTER_COLO_REWRITER(nf);
@@ -354,6 +409,7 @@ static void filter_rewriter_init(Object *obj)
     RewriterState *s = FILTER_COLO_REWRITER(obj);
 
     s->vnet_hdr = false;
+    s->failover_mode = FAILOVER_MODE_OFF;
     object_property_add_bool(obj, "vnet_hdr_support",
                              filter_rewriter_get_vnet_hdr,
                              filter_rewriter_set_vnet_hdr, NULL);
@@ -366,6 +422,7 @@ static void colo_rewriter_class_init(ObjectClass *oc, void *data)
     nfc->setup = colo_rewriter_setup;
     nfc->cleanup = colo_rewriter_cleanup;
     nfc->receive_iov = colo_rewriter_receive_iov;
+    nfc->handle_event = colo_rewriter_handle_event;
 }
 
 static const TypeInfo colo_rewriter_info = {
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [Qemu-devel] [PULL V2 17/26] COLO: notify net filters about checkpoint/failover event
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (15 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 16/26] filter-rewriter: handle checkpoint and failover event Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 18/26] COLO: quick failover process by kick COLO thread Jason Wang
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: zhanghailiang, Jason Wang

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

Notify all net filters about the checkpoint and failover event.
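The fan-out can be sketched as a walk over the registered filters that stops at the first error; the `Filter` struct and `notify_filters_event()` below are illustrative names, not QEMU's actual `colo_notify_filters_event()` implementation (which reports failures through `Error **errp`):

```c
#include <stddef.h>

/* Illustrative filter object: each filter exposes an event hook. */
typedef struct Filter {
    int (*handle_event)(struct Filter *f, int event); /* 0 on success */
    struct Filter *next;
} Filter;

static int calls;
static int count_handler(Filter *f, int event) { (void)f; (void)event; calls++; return 0; }
static int fail_handler(Filter *f, int event) { (void)f; (void)event; return -1; }

/*
 * Deliver the event to every filter in the list, stopping and
 * propagating the error on the first failure.
 */
static int notify_filters_event(Filter *head, int event)
{
    for (Filter *f = head; f != NULL; f = f->next) {
        int ret = f->handle_event(f, event);
        if (ret != 0) {
            return ret; /* propagate, as via Error **errp in QEMU */
        }
    }
    return 0;
}
```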

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/colo.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 59bb507..57a8542 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -31,6 +31,7 @@
 #include "qapi/qapi-events-migration.h"
 #include "qapi/qmp/qerror.h"
 #include "sysemu/cpus.h"
+#include "net/filter.h"
 
 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -83,6 +84,12 @@ static void secondary_vm_do_failover(void)
         error_report_err(local_err);
     }
 
+    /* Notify all filters of all NICs about the failover event */
+    colo_notify_filters_event(COLO_EVENT_FAILOVER, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
+
     if (!autostart) {
         error_report("\"-S\" qemu option will be ignored in secondary side");
         /* recover runstate to normal migration finish state */
@@ -782,6 +789,14 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        /* Notify all filters of all NICs to do a checkpoint */
+        colo_notify_filters_event(COLO_EVENT_CHECKPOINT, &local_err);
+
+        if (local_err) {
+            qemu_mutex_unlock_iothread();
+            goto out;
+        }
+
         vmstate_loading = false;
         vm_start();
         trace_colo_vm_state_change("stop", "run");
-- 
2.5.0


* [Qemu-devel] [PULL V2 18/26] COLO: quick failover process by kick COLO thread
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (16 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 17/26] COLO: notify net filters about checkpoint/failover event Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 19/26] docs: Add COLO status diagram to COLO-FT.txt Jason Wang
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: zhanghailiang, Jason Wang

From: zhanghailiang <zhang.zhanghailiang@huawei.com>

The COLO thread may sleep at qemu_sem_wait(&s->colo_checkpoint_sem)
while failover begins. It's better to wake it up to speed up
the process.
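The wake-up can be modelled without threads: failover flips the migration state first and then posts the semaphore, so when the checkpoint loop returns from its wait it re-checks the state and bails out instead of starting another checkpoint. A single-threaded sketch with a toy semaphore standing in for qemu_sem (not QEMU's actual code):

```c
#include <stdbool.h>

enum { STATUS_COLO, STATUS_COMPLETED };

/* Toy single-threaded semaphore standing in for qemu_sem. */
typedef struct { int count; } ToySem;
static void sem_post_toy(ToySem *s) { s->count++; }
static bool sem_wait_toy(ToySem *s) /* returns false if it would block */
{
    if (s->count > 0) {
        s->count--;
        return true;
    }
    return false;
}

typedef struct { ToySem checkpoint_sem; int state; } MigState;

/* Failover: flip the state first, then kick the checkpoint loop. */
static void do_failover(MigState *s)
{
    s->state = STATUS_COMPLETED;
    sem_post_toy(&s->checkpoint_sem);
}

/* One loop iteration: after waking, re-check the state before working. */
static bool checkpoint_iteration(MigState *s)
{
    if (!sem_wait_toy(&s->checkpoint_sem)) {
        return true; /* would still be sleeping; keep looping */
    }
    if (s->state != STATUS_COLO) {
        return false; /* the patch's "goto out" */
    }
    /* ... colo_do_checkpoint_transaction() would run here ... */
    return true;
}
```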

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 migration/colo.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index 57a8542..956ac23 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -131,6 +131,11 @@ static void primary_vm_do_failover(void)
 
     migrate_set_state(&s->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
+    /*
+     * kick COLO thread which might wait at
+     * qemu_sem_wait(&s->colo_checkpoint_sem).
+     */
+    colo_checkpoint_notify(migrate_get_current());
 
     /*
      * Wake up COLO thread which may blocked in recv() or send(),
@@ -539,6 +544,9 @@ static void colo_process_checkpoint(MigrationState *s)
 
         qemu_sem_wait(&s->colo_checkpoint_sem);
 
+        if (s->state != MIGRATION_STATUS_COLO) {
+            goto out;
+        }
         ret = colo_do_checkpoint_transaction(s, bioc, fb);
         if (ret < 0) {
             goto out;
-- 
2.5.0


* [Qemu-devel] [PULL V2 19/26] docs: Add COLO status diagram to COLO-FT.txt
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (17 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 18/26] COLO: quick failover process by kick COLO thread Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 20/26] clean up callback when del virtqueue Jason Wang
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Zhang Chen, Zhang Chen, Jason Wang

From: Zhang Chen <chen.zhang@intel.com>

This diagram helps users better understand COLO.
Suggested by Markus Armbruster.

Signed-off-by: Zhang Chen <zhangckid@gmail.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 docs/COLO-FT.txt | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/docs/COLO-FT.txt b/docs/COLO-FT.txt
index 70cfb9c..6302469 100644
--- a/docs/COLO-FT.txt
+++ b/docs/COLO-FT.txt
@@ -110,6 +110,40 @@ Note:
 HeartBeat has not been implemented yet, so you need to trigger failover process
 by using 'x-colo-lost-heartbeat' command.
 
+== COLO operation status ==
+
++-----------------+
+|                 |
+|    Start COLO   |
+|                 |
++--------+--------+
+         |
+         |  Main qmp command:
+         |  migrate-set-capabilities with x-colo
+         |  migrate
+         |
+         v
++--------+--------+
+|                 |
+|  COLO running   |
+|                 |
++--------+--------+
+         |
+         |  Main qmp command:
+         |  x-colo-lost-heartbeat
+         |  or
+         |  some error happened
+         v
++--------+--------+
+|                 |  send qmp event:
+|  COLO failover  |  COLO_EXIT
+|                 |
++-----------------+
+
+COLO uses QMP commands to switch and report its operation status.
+The diagram shows only the main QMP commands; the details can be
+found in the test procedure below.
+
 == Test procedure ==
 1. Startup qemu
 Primary:
-- 
2.5.0


* [Qemu-devel] [PULL V2 20/26] clean up callback when del virtqueue
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (18 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 19/26] docs: Add COLO status diagram to COLO-FT.txt Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 21/26] ne2000: fix possible out of bound access in ne2000_receive Jason Wang
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: liujunjie, qemu-stable, Jason Wang

From: liujunjie <liujunjie23@huawei.com>

Before, we did not clear callbacks like handle_output when deleting
a virtqueue, which may result in a segmentation fault.
The scenario is as follows:
1. Start a VM with multiqueue vhost-net.
2. Write VIRTIO_PCI_GUEST_FEATURES in the PCI configuration to
trigger multiqueue disabling in the VM, which deletes the virtqueue.
In this step, the tx_bh is deleted but the callback
virtio_net_handle_tx_bh still exists.
3. Finally, write VIRTIO_PCI_QUEUE_NOTIFY in the PCI configuration to
notify the deleted virtqueue. This way, virtio_net_handle_tx_bh
will be called and QEMU will crash.

Although the scenario described above is uncommon, we had better
guard against it.
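The crash pattern and the fix can be sketched with a reduced virtqueue model: deleting a queue clears its handler pointer along with the ring geometry, and an (assumed) NULL check on the notify path then turns a stale guest notification into a no-op. The names below are illustrative, not QEMU's exact structures:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct VQ {
    int num;
    void (*handle_output)(struct VQ *vq);
} VQ;

static int tx_handled;
static void handle_tx(VQ *vq) { (void)vq; tx_handled++; }

/* The patch's fix: clear the callback along with the ring geometry. */
static void del_queue(VQ *vq)
{
    vq->num = 0;
    vq->handle_output = NULL;
}

/* Notify path: only dispatch if a handler is still installed. */
static bool notify_queue(VQ *vq)
{
    if (vq->handle_output == NULL) {
        return false; /* stale notify for a deleted queue: ignored */
    }
    vq->handle_output(vq);
    return true;
}
```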

CC: qemu-stable@nongnu.org
Signed-off-by: liujunjie <liujunjie23@huawei.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/virtio.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 4e61944..4136d23 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1611,6 +1611,8 @@ void virtio_del_queue(VirtIODevice *vdev, int n)
 
     vdev->vq[n].vring.num = 0;
     vdev->vq[n].vring.num_default = 0;
+    vdev->vq[n].handle_output = NULL;
+    vdev->vq[n].handle_aio_output = NULL;
 }
 
 static void virtio_set_isr(VirtIODevice *vdev, int value)
-- 
2.5.0


* [Qemu-devel] [PULL V2 21/26] ne2000: fix possible out of bound access in ne2000_receive
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (19 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 20/26] clean up callback when del virtqueue Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 22/26] rtl8139: fix possible out of bound access Jason Wang
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Jason Wang, qemu-stable

In ne2000_receive(), we assign size_ to size, which converts from
size_t to int. This causes trouble when size_ is greater than
INT_MAX: size becomes negative and can then pass the
size < MIN_BUF_SIZE check, which may lead to out-of-bounds access
for both buf and buf1.

Fix this by converting the type of size to size_t.
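The bug class is easy to reproduce in isolation: converting a size_t above INT_MAX to int yields (on common two's-complement ABIs, where the out-of-range conversion is implementation-defined) a negative value, which sails past a minimum-size check that a size_t comparison would reject. A minimal sketch with a stand-in MIN_BUF_SIZE:

```c
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

#define MIN_BUF_SIZE 60 /* stand-in for the driver's padding threshold */

/* Buggy check: int truncation lets huge sizes through as "too small". */
static bool passes_min_check_int(size_t size_)
{
    int size = (int)size_; /* implementation-defined above INT_MAX */
    return size < MIN_BUF_SIZE; /* true for huge size_ on typical ABIs */
}

/* Fixed check: keep the width of size_t throughout. */
static bool passes_min_check_sizet(size_t size_)
{
    size_t size = size_;
    return size < MIN_BUF_SIZE;
}
```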

CC: qemu-stable@nongnu.org
Reported-by: Daniel Shapira <daniel@twistlock.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/ne2000.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/net/ne2000.c b/hw/net/ne2000.c
index 07d79e3..869518e 100644
--- a/hw/net/ne2000.c
+++ b/hw/net/ne2000.c
@@ -174,7 +174,7 @@ static int ne2000_buffer_full(NE2000State *s)
 ssize_t ne2000_receive(NetClientState *nc, const uint8_t *buf, size_t size_)
 {
     NE2000State *s = qemu_get_nic_opaque(nc);
-    int size = size_;
+    size_t size = size_;
     uint8_t *p;
     unsigned int total_len, next, avail, len, index, mcast_idx;
     uint8_t buf1[60];
@@ -182,7 +182,7 @@ ssize_t ne2000_receive(NetClientState *nc, const uint8_t *buf, size_t size_)
         { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
 
 #if defined(DEBUG_NE2000)
-    printf("NE2000: received len=%d\n", size);
+    printf("NE2000: received len=%zu\n", size);
 #endif
 
     if (s->cmd & E8390_STOP || ne2000_buffer_full(s))
-- 
2.5.0


* [Qemu-devel] [PULL V2 22/26] rtl8139: fix possible out of bound access
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (20 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 21/26] ne2000: fix possible out of bound access in ne2000_receive Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 23/26] pcnet: fix possible buffer overflow Jason Wang
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Jason Wang, qemu-stable

In rtl8139_do_receive(), we assign size_ to size, which converts from
size_t to int. This causes trouble when size_ is greater than
INT_MAX: size becomes negative and can then pass the
size < MIN_BUF_SIZE check, which may lead to out-of-bounds access
for both buf and buf1.

Fix this by converting the type of size to size_t.

CC: qemu-stable@nongnu.org
Reported-by: Daniel Shapira <daniel@twistlock.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/rtl8139.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/net/rtl8139.c b/hw/net/rtl8139.c
index 46daa16..2342a09 100644
--- a/hw/net/rtl8139.c
+++ b/hw/net/rtl8139.c
@@ -817,7 +817,7 @@ static ssize_t rtl8139_do_receive(NetClientState *nc, const uint8_t *buf, size_t
     RTL8139State *s = qemu_get_nic_opaque(nc);
     PCIDevice *d = PCI_DEVICE(s);
     /* size is the length of the buffer passed to the driver */
-    int size = size_;
+    size_t size = size_;
     const uint8_t *dot1q_buf = NULL;
 
     uint32_t packet_header = 0;
@@ -826,7 +826,7 @@ static ssize_t rtl8139_do_receive(NetClientState *nc, const uint8_t *buf, size_t
     static const uint8_t broadcast_macaddr[6] =
         { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
 
-    DPRINTF(">>> received len=%d\n", size);
+    DPRINTF(">>> received len=%zu\n", size);
 
     /* test if board clock is stopped */
     if (!s->clock_enabled)
@@ -1035,7 +1035,7 @@ static ssize_t rtl8139_do_receive(NetClientState *nc, const uint8_t *buf, size_t
 
         if (size+4 > rx_space)
         {
-            DPRINTF("C+ Rx mode : descriptor %d size %d received %d + 4\n",
+            DPRINTF("C+ Rx mode : descriptor %d size %d received %zu + 4\n",
                 descriptor, rx_space, size);
 
             s->IntrStatus |= RxOverflow;
@@ -1148,7 +1148,7 @@ static ssize_t rtl8139_do_receive(NetClientState *nc, const uint8_t *buf, size_t
         if (avail != 0 && RX_ALIGN(size + 8) >= avail)
         {
             DPRINTF("rx overflow: rx buffer length %d head 0x%04x "
-                "read 0x%04x === available 0x%04x need 0x%04x\n",
+                "read 0x%04x === available 0x%04x need 0x%04zx\n",
                 s->RxBufferSize, s->RxBufAddr, s->RxBufPtr, avail, size + 8);
 
             s->IntrStatus |= RxOverflow;
-- 
2.5.0


* [Qemu-devel] [PULL V2 23/26] pcnet: fix possible buffer overflow
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (21 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 22/26] rtl8139: fix possible out of bound access Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX Jason Wang
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Jason Wang, qemu-stable

In pcnet_receive(), we assign size_ to size, which converts from
size_t to int. This causes trouble when size_ is greater than
INT_MAX: size becomes negative and can then pass the
size < MIN_BUF_SIZE check, which may lead to out-of-bounds access
for both buf and buf1.

Fix this by converting the type of size to size_t.

CC: qemu-stable@nongnu.org
Reported-by: Daniel Shapira <daniel@twistlock.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/pcnet.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/net/pcnet.c b/hw/net/pcnet.c
index 0c44554..d9ba04b 100644
--- a/hw/net/pcnet.c
+++ b/hw/net/pcnet.c
@@ -988,14 +988,14 @@ ssize_t pcnet_receive(NetClientState *nc, const uint8_t *buf, size_t size_)
     uint8_t buf1[60];
     int remaining;
     int crc_err = 0;
-    int size = size_;
+    size_t size = size_;
 
     if (CSR_DRX(s) || CSR_STOP(s) || CSR_SPND(s) || !size ||
         (CSR_LOOP(s) && !s->looptest)) {
         return -1;
     }
 #ifdef PCNET_DEBUG
-    printf("pcnet_receive size=%d\n", size);
+    printf("pcnet_receive size=%zu\n", size);
 #endif
 
     /* if too small buffer, then expand it */
-- 
2.5.0


* [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (22 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 23/26] pcnet: fix possible buffer overflow Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-11-13 15:41   ` Dima Stepanov
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 25/26] e1000: indicate dropped packets in HW counters Jason Wang
                   ` (2 subsequent siblings)
  26 siblings, 1 reply; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Jason Wang, qemu-stable

There should not be a legitimate reason for passing a packet size
greater than INT_MAX. It's usually a hint of a bug somewhere, so
ignore packet sizes greater than INT_MAX in qemu_deliver_packet_iov().
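The guard can be sketched with a simplified iov_size() (QEMU's real helper lives in util/iov.c): sum the segment lengths once, then drop the oversized packet while pretending it was consumed, just as the link-down path does. `deliver_sketch()` below is an illustrative reduction, not the real qemu_deliver_packet_iov():

```c
#include <limits.h>
#include <stddef.h>
#include <sys/uio.h>

/* Simplified version of QEMU's iov_size(): total bytes in the vector. */
static size_t iov_size_sketch(const struct iovec *iov, int iovcnt)
{
    size_t total = 0;
    for (int i = 0; i < iovcnt; i++) {
        total += iov[i].iov_len;
    }
    return total;
}

/*
 * Returning the full size without touching the packet makes the
 * sender believe it was consumed, silently dropping oversized
 * (or link-down) traffic.
 */
static ssize_t deliver_sketch(const struct iovec *iov, int iovcnt,
                              int link_down)
{
    size_t size = iov_size_sketch(iov, iovcnt);

    if (size > INT_MAX) {
        return size; /* ignore: no real NIC accepts such a packet */
    }
    if (link_down) {
        return size;
    }
    return -1; /* stand-in for the real receive path */
}
```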

CC: qemu-stable@nongnu.org
Reported-by: Daniel Shapira <daniel@twistlock.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/net.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/net/net.c b/net/net.c
index c66847e..07c194a 100644
--- a/net/net.c
+++ b/net/net.c
@@ -712,10 +712,15 @@ ssize_t qemu_deliver_packet_iov(NetClientState *sender,
                                 void *opaque)
 {
     NetClientState *nc = opaque;
+    size_t size = iov_size(iov, iovcnt);
     int ret;
 
+    if (size > INT_MAX) {
+        return size;
+    }
+
     if (nc->link_down) {
-        return iov_size(iov, iovcnt);
+        return size;
     }
 
     if (nc->receive_disabled) {
-- 
2.5.0


* [Qemu-devel] [PULL V2 25/26] e1000: indicate dropped packets in HW counters
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (23 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 26/26] qemu-options: Fix bad "macaddr" property in the documentation Jason Wang
  2018-10-19 15:17 ` [Qemu-devel] [PULL V2 00/26] Net patches Peter Maydell
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Jason Wang, Martin Wilck

The e1000 emulation silently discards RX packets if there's
insufficient space in the ring buffer. This leads to errors
on higher-level protocols in the guest, with no indication
about the error cause.

This patch increments the "Missed Packets Count" (MPC) and
"Receive No Buffers Count" (RNBC) HW counters in this case.
As the emulation has no FIFO for buffering packets that can't
immediately be pushed to the guest, these two registers are
practically equivalent (see 10.2.7.4, 10.2.7.33 in
https://www.intel.com/content/www/us/en/embedded/products/networking/82574l-gbe-controller-datasheet.html).

On a Linux guest, the register contents will be reflected in
the "rx_missed_errors" and "rx_no_buffer_count" stats from
"ethtool -S", and in the "missed" stat from "ip -s -s link show",
giving at least some hint about the error cause inside the guest.

If the cause is known, problems like this can often be avoided
easily, by increasing the number of RX descriptors in the guest
e1000 driver (e.g under Linux, "e1000.RxDescriptors=1024").

The patch also adds a qemu trace message for this condition.
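The counter update follows the usual saturating pattern for HW statistics registers; the sketch below shows what e1000x_inc_reg_if_not_full() is assumed to do (stop at the register's maximum instead of wrapping), with illustrative register indices rather than the real e1000 register map:

```c
#include <stdint.h>

/* Assumed behaviour: increment a stats register, saturating at max. */
static void inc_reg_if_not_full(uint32_t *mac_reg, int index)
{
    if (mac_reg[index] != UINT32_MAX) {
        mac_reg[index]++;
    }
}

enum { RNBC = 0, MPC = 1 }; /* illustrative register indices */

/* The overrun path bumps both counters, as in the patch. */
static void receiver_overrun(uint32_t *mac_reg)
{
    inc_reg_if_not_full(mac_reg, RNBC);
    inc_reg_if_not_full(mac_reg, MPC);
}
```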

Signed-off-by: Martin Wilck <mwilck@suse.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/e1000.c      | 16 +++++++++++++---
 hw/net/trace-events |  3 +++
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/hw/net/e1000.c b/hw/net/e1000.c
index 13a9494..5e144cb 100644
--- a/hw/net/e1000.c
+++ b/hw/net/e1000.c
@@ -36,6 +36,7 @@
 #include "qemu/range.h"
 
 #include "e1000x_common.h"
+#include "trace.h"
 
 static const uint8_t bcast[] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
 
@@ -847,6 +848,15 @@ static uint64_t rx_desc_base(E1000State *s)
     return (bah << 32) + bal;
 }
 
+static void
+e1000_receiver_overrun(E1000State *s, size_t size)
+{
+    trace_e1000_receiver_overrun(size, s->mac_reg[RDH], s->mac_reg[RDT]);
+    e1000x_inc_reg_if_not_full(s->mac_reg, RNBC);
+    e1000x_inc_reg_if_not_full(s->mac_reg, MPC);
+    set_ics(s, 0, E1000_ICS_RXO);
+}
+
 static ssize_t
 e1000_receive_iov(NetClientState *nc, const struct iovec *iov, int iovcnt)
 {
@@ -916,8 +926,8 @@ e1000_receive_iov(NetClientState *nc, const struct iovec *iov, int iovcnt)
     desc_offset = 0;
     total_size = size + e1000x_fcs_len(s->mac_reg);
     if (!e1000_has_rxbufs(s, total_size)) {
-            set_ics(s, 0, E1000_ICS_RXO);
-            return -1;
+        e1000_receiver_overrun(s, total_size);
+        return -1;
     }
     do {
         desc_size = total_size - desc_offset;
@@ -969,7 +979,7 @@ e1000_receive_iov(NetClientState *nc, const struct iovec *iov, int iovcnt)
             rdh_start >= s->mac_reg[RDLEN] / sizeof(desc)) {
             DBGOUT(RXERR, "RDH wraparound @%x, RDT %x, RDLEN %x\n",
                    rdh_start, s->mac_reg[RDT], s->mac_reg[RDLEN]);
-            set_ics(s, 0, E1000_ICS_RXO);
+            e1000_receiver_overrun(s, total_size);
             return -1;
         }
     } while (desc_offset < total_size);
diff --git a/hw/net/trace-events b/hw/net/trace-events
index c1dea4b..9d49f62 100644
--- a/hw/net/trace-events
+++ b/hw/net/trace-events
@@ -98,6 +98,9 @@ net_rx_pkt_rss_ip6_ex(void) "Calculating IPv6/EX RSS  hash"
 net_rx_pkt_rss_hash(size_t rss_length, uint32_t rss_hash) "RSS hash for %zu bytes: 0x%X"
 net_rx_pkt_rss_add_chunk(void* ptr, size_t size, size_t input_offset) "Add RSS chunk %p, %zu bytes, RSS input offset %zu bytes"
 
+# hw/net/e1000.c
+e1000_receiver_overrun(size_t s, uint32_t rdh, uint32_t rdt) "Receiver overrun: dropped packet of %zu bytes, RDH=%u, RDT=%u"
+
 # hw/net/e1000x_common.c
 e1000x_rx_can_recv_disabled(bool link_up, bool rx_enabled, bool pci_master) "link_up: %d, rx_enabled %d, pci_master %d"
 e1000x_vlan_is_vlan_pkt(bool is_vlan_pkt, uint16_t eth_proto, uint16_t vet) "Is VLAN packet: %d, ETH proto: 0x%X, VET: 0x%X"
-- 
2.5.0


* [Qemu-devel] [PULL V2 26/26] qemu-options: Fix bad "macaddr" property in the documentation
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (24 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 25/26] e1000: indicate dropped packets in HW counters Jason Wang
@ 2018-10-19  3:22 ` Jason Wang
  2018-10-19 15:17 ` [Qemu-devel] [PULL V2 00/26] Net patches Peter Maydell
  26 siblings, 0 replies; 37+ messages in thread
From: Jason Wang @ 2018-10-19  3:22 UTC (permalink / raw)
  To: peter.maydell, qemu-devel; +Cc: Thomas Huth, Jason Wang

From: Thomas Huth <thuth@redhat.com>

When using the "-device" option, the property is called "mac".
"macaddr" is only used for the legacy "-net nic" option.

Reported-by: Harald Hoyer <harald@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 qemu-options.hx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index f139459..0f2f65e 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2256,7 +2256,7 @@ qemu-system-i386 linux.img \
                  -netdev socket,id=n2,mcast=230.0.0.1:1234
 # launch yet another QEMU instance on same "bus"
 qemu-system-i386 linux.img \
-                 -device e1000,netdev=n3,macaddr=52:54:00:12:34:58 \
+                 -device e1000,netdev=n3,mac=52:54:00:12:34:58 \
                  -netdev socket,id=n3,mcast=230.0.0.1:1234
 @end example
 
-- 
2.5.0


* Re: [Qemu-devel] [PULL V2 00/26] Net patches
  2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
                   ` (25 preceding siblings ...)
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 26/26] qemu-options: Fix bad "macaddr" property in the documentation Jason Wang
@ 2018-10-19 15:17 ` Peter Maydell
  26 siblings, 0 replies; 37+ messages in thread
From: Peter Maydell @ 2018-10-19 15:17 UTC (permalink / raw)
  To: Jason Wang; +Cc: QEMU Developers

On 19 October 2018 at 04:21, Jason Wang <jasowang@redhat.com> wrote:
> The following changes since commit 77f7c747193662edfadeeb3118d63eed0eac51a6:
>
>   Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2018-10-17' into staging (2018-10-18 13:40:19 +0100)
>
> are available in the git repository at:
>
>   https://github.com/jasowang/qemu.git tags/net-pull-request
>
> for you to fetch changes up to 37a4442a76d010f5d957e3ee09dfb23364281b37:
>
>   qemu-options: Fix bad "macaddr" property in the documentation (2018-10-19 11:15:04 +0800)
>
> ----------------------------------------------------------------
>
> ----------------------------------------------------------------
> Jason Wang (5):
>       ne2000: fix possible out of bound access in ne2000_receive
>       rtl8139: fix possible out of bound access
>       pcnet: fix possible buffer overflow
>       net: ignore packet size greater than INT_MAX
>       e1000: indicate dropped packets in HW counters
>
> Thomas Huth (1):
>       qemu-options: Fix bad "macaddr" property in the documentation
>
> Zhang Chen (15):
>       filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table
>       colo-compare: implement the process of checkpoint
>       colo-compare: use notifier to notify packets comparing result
>       COLO: integrate colo compare with colo frame
>       COLO: Add block replication into colo process
>       COLO: Remove colo_state migration struct
>       COLO: Load dirty pages into SVM's RAM cache firstly
>       ram/COLO: Record the dirty pages that SVM received
>       COLO: Flush memory data from ram cache
>       qapi/migration.json: Rename COLO unknown mode to none mode.
>       qapi: Add new command to query colo status
>       savevm: split the process of different stages for loadvm/savevm
>       filter: Add handle_event method for NetFilterClass
>       filter-rewriter: handle checkpoint and failover event
>       docs: Add COLO status diagram to COLO-FT.txt
>
> liujunjie (1):
>       clean up callback when del virtqueue
>
> zhanghailiang (4):
>       qmp event: Add COLO_EXIT event to notify users while exited COLO
>       COLO: flush host dirty ram from cache
>       COLO: notify net filters about checkpoint/failover event
>       COLO: quick failover process by kick COLO thread
>

Applied, thanks.

-- PMM


* Re: [Qemu-devel] [PULL V2 12/26] qapi: Add new command to query colo status
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 12/26] qapi: Add new command to query colo status Jason Wang
@ 2018-10-19 15:30   ` Eric Blake
  0 siblings, 0 replies; 37+ messages in thread
From: Eric Blake @ 2018-10-19 15:30 UTC (permalink / raw)
  To: Jason Wang, peter.maydell, qemu-devel; +Cc: Zhang Chen, Zhang Chen

On 10/18/18 10:22 PM, Jason Wang wrote:
> From: Zhang Chen <zhangckid@gmail.com>
> 
> Libvirt or other high level software can use this command query colo status.
> You can test this command like that:
> {'execute':'query-colo-status'}
> 
> Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> Signed-off-by: Zhang Chen <chen.zhang@intel.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---

> +++ b/qapi/migration.json
> @@ -1308,6 +1308,38 @@
>   { 'command': 'xen-colo-do-checkpoint' }
>   
>   ##
> +# @COLOStatus:
> +#
> +# The result format for 'query-colo-status'.
> +#
> +# @mode: COLO running mode. If COLO is running, this field will return
> +#        'primary' or 'secondary'.
> +#
> +# @reason: describes the reason for the COLO exit.
> +#
> +# Since: 3.0

Now that the pull request has landed, please submit a followup patch 
that fixes s/3.0/3.1/

> +##
> +{ 'struct': 'COLOStatus',
> +  'data': { 'mode': 'COLOMode', 'reason': 'COLOExitReason' } }
> +
> +##
> +# @query-colo-status:
> +#
> +# Query COLO status while the vm is running.
> +#
> +# Returns: A @COLOStatus object showing the status.
> +#
> +# Example:
> +#
> +# -> { "execute": "query-colo-status" }
> +# <- { "return": { "mode": "primary", "active": true, "reason": "request" } }
> +#
> +# Since: 3.0

at both locations


-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


* Re: [Qemu-devel] [PULL V2 01/26] filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 01/26] filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table Jason Wang
@ 2018-10-29 11:01   ` Peter Maydell
  2018-10-30  2:02     ` Jason Wang
  0 siblings, 1 reply; 37+ messages in thread
From: Peter Maydell @ 2018-10-29 11:01 UTC (permalink / raw)
  To: Jason Wang; +Cc: QEMU Developers, Zhang Chen, zhanghailiang, Zhang Chen

On 19 October 2018 at 04:22, Jason Wang <jasowang@redhat.com> wrote:
> From: Zhang Chen <zhangckid@gmail.com>
>
> We add almost full TCP state machine in filter-rewriter, except
> TCPS_LISTEN and some simplify in VM active close FIN states.
> The reason for this simplify job is because guest kernel will track
> the TCP status and wait 2MSL time too, if client resend the FIN packet,
> guest will resend the last ACK, so we needn't wait 2MSL time in filter-rewriter.
>
> After a net connection is closed, we didn't clear its related resources
> in connection_track_table, which will lead to memory leak.
>
> Let's track the state of net connection, if it is closed, its related
> resources will be cleared up.

Hi. Coverity (CID 1396477) points out that here:

> +        /*
> +         * Active close step 2.
> +         */
> +        if (conn->tcp_state == TCPS_FIN_WAIT_1) {
> +            conn->tcp_state = TCPS_TIME_WAIT;

...this assignment to conn->tcp_state has no effect, because...

> +            /*
> +             * For simplify implementation, we needn't wait 2MSL time
> +             * in filter rewriter. Because guest kernel will track the
> +             * TCP status and wait 2MSL time, if client resend the FIN
> +             * packet, guest will apply the last ACK too.
> +             */
> +            conn->tcp_state = TCPS_CLOSED;

...we immediately overwrite it with a different value.

> +            g_hash_table_remove(rf->connection_track_table, key);
> +        }
>      }

What was the intention of the code here?

thanks
-- PMM


* Re: [Qemu-devel] [PULL V2 01/26] filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table
  2018-10-29 11:01   ` Peter Maydell
@ 2018-10-30  2:02     ` Jason Wang
  2018-10-30  2:29       ` Zhang Chen
  0 siblings, 1 reply; 37+ messages in thread
From: Jason Wang @ 2018-10-30  2:02 UTC (permalink / raw)
  To: Peter Maydell, Zhang Chen, Zhang Chen; +Cc: zhanghailiang, QEMU Developers


On 2018/10/29 下午7:01, Peter Maydell wrote:
> On 19 October 2018 at 04:22, Jason Wang <jasowang@redhat.com> wrote:
>> From: Zhang Chen <zhangckid@gmail.com>
>>
>> We add almost full TCP state machine in filter-rewriter, except
>> TCPS_LISTEN and some simplify in VM active close FIN states.
>> The reason for this simplify job is because guest kernel will track
>> the TCP status and wait 2MSL time too, if client resend the FIN packet,
>> guest will resend the last ACK, so we needn't wait 2MSL time in filter-rewriter.
>>
>> After a net connection is closed, we didn't clear its related resources
>> in connection_track_table, which will lead to memory leak.
>>
>> Let's track the state of net connection, if it is closed, its related
>> resources will be cleared up.
> Hi. Coverity (CID 1396477) points out that here:
>
>> +        /*
>> +         * Active close step 2.
>> +         */
>> +        if (conn->tcp_state == TCPS_FIN_WAIT_1) {
>> +            conn->tcp_state = TCPS_TIME_WAIT;
> ...this assignment to conn->tcp_state has no effect, because...
>
>> +            /*
>> +             * For simplify implementation, we needn't wait 2MSL time
>> +             * in filter rewriter. Because guest kernel will track the
>> +             * TCP status and wait 2MSL time, if client resend the FIN
>> +             * packet, guest will apply the last ACK too.
>> +             */
>> +            conn->tcp_state = TCPS_CLOSED;
> ...we immediately overwrite it with a different value.
>
>> +            g_hash_table_remove(rf->connection_track_table, key);
>> +        }
>>       }
> What was the intention of the code here?
>
> thanks
> -- PMM


It looks like there was no real intention here; the TCPS_TIME_WAIT assignment appears to be a leftover.

Chen, can you please send a patch to fix this?

Thanks


* Re: [Qemu-devel] [PULL V2 01/26] filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table
  2018-10-30  2:02     ` Jason Wang
@ 2018-10-30  2:29       ` Zhang Chen
  0 siblings, 0 replies; 37+ messages in thread
From: Zhang Chen @ 2018-10-30  2:29 UTC (permalink / raw)
  To: jasowang; +Cc: Peter Maydell, Zhang Chen, zhanghailiang, qemu-devel

On Tue, Oct 30, 2018 at 10:02 AM Jason Wang <jasowang@redhat.com> wrote:

>
> On 2018/10/29 7:01 PM, Peter Maydell wrote:
> > On 19 October 2018 at 04:22, Jason Wang <jasowang@redhat.com> wrote:
> >> From: Zhang Chen <zhangckid@gmail.com>
> >>
> >> We add an almost full TCP state machine in filter-rewriter; only
> >> TCPS_LISTEN is missing, and the VM active-close FIN states are somewhat
> >> simplified. The reason for this simplification is that the guest kernel
> >> will also track the TCP status and wait the 2MSL time: if the client
> >> resends the FIN packet, the guest will resend the last ACK, so we
> >> needn't wait the 2MSL time in filter-rewriter.
> >>
> >> After a net connection is closed, we don't clear its related resources
> >> in connection_track_table, which leads to a memory leak.
> >>
> >> Let's track the state of each net connection; once it is closed, its
> >> related resources will be cleared up.
> > Hi. Coverity (CID 1396477) points out that here:
> >
> >> +        /*
> >> +         * Active close step 2.
> >> +         */
> >> +        if (conn->tcp_state == TCPS_FIN_WAIT_1) {
> >> +            conn->tcp_state = TCPS_TIME_WAIT;
> > ...this assignment to conn->tcp_state has no effect, because...
> >
> >> +            /*
> >> +             * For simplify implementation, we needn't wait 2MSL time
> >> +             * in filter rewriter. Because guest kernel will track the
> >> +             * TCP status and wait 2MSL time, if client resend the FIN
> >> +             * packet, guest will apply the last ACK too.
> >> +             */
> >> +            conn->tcp_state = TCPS_CLOSED;
> > ...we immediately overwrite it with a different value.
> >
> >> +            g_hash_table_remove(rf->connection_track_table, key);
> >> +        }
> >>       }
> > What was the intention of the code here?
> >
> > thanks
> > -- PMM
>
>
> Looks not.
>
> Chen, can you please send a patch to fix this?
>
>
Sure, I will send a patch for this issue.

Thanks
Zhang Chen


> Thanks
>
>


* Re: [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX
  2018-10-19  3:22 ` [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX Jason Wang
@ 2018-11-13 15:41   ` Dima Stepanov
  2018-11-14  2:59     ` Jason Wang
  0 siblings, 1 reply; 37+ messages in thread
From: Dima Stepanov @ 2018-11-13 15:41 UTC (permalink / raw)
  To: Jason Wang; +Cc: peter.maydell, qemu-devel

Hi Jason,

I know that this patch has already been merged to stable, but I have a
question:

On Fri, Oct 19, 2018 at 11:22:23AM +0800, Jason Wang wrote:
> There should not be a reason for passing a packet size greater than
> INT_MAX. It's usually a hint of a bug somewhere, so ignore packet sizes
> greater than INT_MAX in qemu_deliver_packet_iov().
> 
> CC: qemu-stable@nongnu.org
> Reported-by: Daniel Shapira <daniel@twistlock.com>
> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  net/net.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/net/net.c b/net/net.c
> index c66847e..07c194a 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -712,10 +712,15 @@ ssize_t qemu_deliver_packet_iov(NetClientState *sender,
>                                  void *opaque)
>  {
>      NetClientState *nc = opaque;
> +    size_t size = iov_size(iov, iovcnt);
>      int ret;
>  
> +    if (size > INT_MAX) {
> +        return size;
Is it okay that the function returns ssize_t (signed) while the type of
the size variable is size_t (unsigned)? For now the top-level routine
only checks the return value against 0, but this way we can end up
returning a negative value here instead of a positive one. What do you
think?

Regards, Dima.

> +    }
> +
>      if (nc->link_down) {
> -        return iov_size(iov, iovcnt);
> +        return size;
>      }
>  
>      if (nc->receive_disabled) {
> -- 
> 2.5.0
> 
> 


* Re: [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX
  2018-11-13 15:41   ` Dima Stepanov
@ 2018-11-14  2:59     ` Jason Wang
  2018-11-14 16:23       ` Dima Stepanov
  0 siblings, 1 reply; 37+ messages in thread
From: Jason Wang @ 2018-11-14  2:59 UTC (permalink / raw)
  To: Dima Stepanov; +Cc: peter.maydell, qemu-devel


On 2018/11/13 11:41 PM, Dima Stepanov wrote:
> Hi Jason,
>
> I know that this patch has been already merged to stable, but i have a
> question:
>
> On Fri, Oct 19, 2018 at 11:22:23AM +0800, Jason Wang wrote:
>> There should not be a reason for passing a packet size greater than
>> INT_MAX. It's usually a hint of bug somewhere, so ignore packet size
>> greater than INT_MAX in qemu_deliver_packet_iov()
>>
>> CC:qemu-stable@nongnu.org
>> Reported-by: Daniel Shapira<daniel@twistlock.com>
>> Reviewed-by: Michael S. Tsirkin<mst@redhat.com>
>> Signed-off-by: Jason Wang<jasowang@redhat.com>
>> ---
>>   net/net.c | 7 ++++++-
>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/net.c b/net/net.c
>> index c66847e..07c194a 100644
>> --- a/net/net.c
>> +++ b/net/net.c
>> @@ -712,10 +712,15 @@ ssize_t qemu_deliver_packet_iov(NetClientState *sender,
>>                                   void *opaque)
>>   {
>>       NetClientState *nc = opaque;
>> +    size_t size = iov_size(iov, iovcnt);
>>       int ret;
>>   
>> +    if (size > INT_MAX) {
>> +        return size;
> Is it okay that the function returns ssize_t (signed), but the type of the
> size variable is size_t (unsigned)? For now the top level routine checks
> the return value only for 0, but anyway we can return negative value
> here instead of positive. What do you think?
>
> Regards, Dima.
>

Any non-zero value should be OK here. Actually, I think that because of
the conversion from size_t to ssize_t, the caller may actually see a
negative value?

Thanks


* Re: [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX
  2018-11-14  2:59     ` Jason Wang
@ 2018-11-14 16:23       ` Dima Stepanov
  2018-11-15  2:47         ` Jason Wang
  0 siblings, 1 reply; 37+ messages in thread
From: Dima Stepanov @ 2018-11-14 16:23 UTC (permalink / raw)
  To: Jason Wang; +Cc: peter.maydell, qemu-devel

On Wed, Nov 14, 2018 at 10:59:32AM +0800, Jason Wang wrote:
> 
> On 2018/11/13 11:41 PM, Dima Stepanov wrote:
> >Hi Jason,
> >
> >I know that this patch has been already merged to stable, but i have a
> >question:
> >
> >On Fri, Oct 19, 2018 at 11:22:23AM +0800, Jason Wang wrote:
> >>There should not be a reason for passing a packet size greater than
> >>INT_MAX. It's usually a hint of bug somewhere, so ignore packet size
> >>greater than INT_MAX in qemu_deliver_packet_iov()
> >>
> >>CC:qemu-stable@nongnu.org
> >>Reported-by: Daniel Shapira<daniel@twistlock.com>
> >>Reviewed-by: Michael S. Tsirkin<mst@redhat.com>
> >>Signed-off-by: Jason Wang<jasowang@redhat.com>
> >>---
> >>  net/net.c | 7 ++++++-
> >>  1 file changed, 6 insertions(+), 1 deletion(-)
> >>
> >>diff --git a/net/net.c b/net/net.c
> >>index c66847e..07c194a 100644
> >>--- a/net/net.c
> >>+++ b/net/net.c
> >>@@ -712,10 +712,15 @@ ssize_t qemu_deliver_packet_iov(NetClientState *sender,
> >>                                  void *opaque)
> >>  {
> >>      NetClientState *nc = opaque;
> >>+    size_t size = iov_size(iov, iovcnt);
> >>      int ret;
> >>+    if (size > INT_MAX) {
> >>+        return size;
> >Is it okay that the function returns ssize_t (signed), but the type of the
> >size variable is size_t (unsigned)? For now the top level routine checks
> >the return value only for 0, but anyway we can return negative value
> >here instead of positive. What do you think?
> >
> >Regards, Dima.
> >
> 
> Any non zero value should be ok here. Actually I think because of the
> conversion from size_t to ssize_t, caller actually see negative value?
I believe it depends. If long (the underlying type of both ssize_t and
size_t here) is 8 bytes, then the routine can sometimes return positive
values and sometimes negative ones. I fully agree that in the current
case any non-zero value should be okay; I just wanted to point out the
inconsistency in the types and, as a result, in the return value.

Dima.
> 
> Thanks
> 


* Re: [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX
  2018-11-14 16:23       ` Dima Stepanov
@ 2018-11-15  2:47         ` Jason Wang
  2018-11-16  7:48           ` Dima Stepanov
  0 siblings, 1 reply; 37+ messages in thread
From: Jason Wang @ 2018-11-15  2:47 UTC (permalink / raw)
  To: Dima Stepanov; +Cc: peter.maydell, qemu-devel


On 2018/11/15 12:23 AM, Dima Stepanov wrote:
> On Wed, Nov 14, 2018 at 10:59:32AM +0800, Jason Wang wrote:
>> On 2018/11/13 11:41 PM, Dima Stepanov wrote:
>>> Hi Jason,
>>>
>>> I know that this patch has been already merged to stable, but i have a
>>> question:
>>>
>>> On Fri, Oct 19, 2018 at 11:22:23AM +0800, Jason Wang wrote:
>>>> There should not be a reason for passing a packet size greater than
>>>> INT_MAX. It's usually a hint of bug somewhere, so ignore packet size
>>>> greater than INT_MAX in qemu_deliver_packet_iov()
>>>>
>>>> CC:qemu-stable@nongnu.org
>>>> Reported-by: Daniel Shapira<daniel@twistlock.com>
>>>> Reviewed-by: Michael S. Tsirkin<mst@redhat.com>
>>>> Signed-off-by: Jason Wang<jasowang@redhat.com>
>>>> ---
>>>>   net/net.c | 7 ++++++-
>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/net/net.c b/net/net.c
>>>> index c66847e..07c194a 100644
>>>> --- a/net/net.c
>>>> +++ b/net/net.c
>>>> @@ -712,10 +712,15 @@ ssize_t qemu_deliver_packet_iov(NetClientState *sender,
>>>>                                   void *opaque)
>>>>   {
>>>>       NetClientState *nc = opaque;
>>>> +    size_t size = iov_size(iov, iovcnt);
>>>>       int ret;
>>>> +    if (size > INT_MAX) {
>>>> +        return size;
>>> Is it okay that the function returns ssize_t (signed), but the type of the
>>> size variable is size_t (unsigned)? For now the top level routine checks
>>> the return value only for 0, but anyway we can return negative value
>>> here instead of positive. What do you think?
>>>
>>> Regards, Dima.
>>>
>> Any non zero value should be ok here. Actually I think because of the
>> conversion from size_t to ssize_t, caller actually see negative value?
> I believe it depends. If long (ssize_t and size_t type) is 8 bytes, then
> the routine can sometimes return positive values and sometimes negative.
> I fully agree that in the current case any non zero value should be
> okay. I just wanted to point on the inconsistency in types and as a
> result a return value.


I see. Do you want to post a patch for this?

Thanks


> Dima.
>> Thanks
>>


* Re: [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX
  2018-11-15  2:47         ` Jason Wang
@ 2018-11-16  7:48           ` Dima Stepanov
  0 siblings, 0 replies; 37+ messages in thread
From: Dima Stepanov @ 2018-11-16  7:48 UTC (permalink / raw)
  To: Jason Wang; +Cc: peter.maydell, qemu-devel

On Thu, Nov 15, 2018 at 10:47:04AM +0800, Jason Wang wrote:
> 
> On 2018/11/15 12:23 AM, Dima Stepanov wrote:
> >On Wed, Nov 14, 2018 at 10:59:32AM +0800, Jason Wang wrote:
> >>On 2018/11/13 11:41 PM, Dima Stepanov wrote:
> >>>Hi Jason,
> >>>
> >>>I know that this patch has been already merged to stable, but i have a
> >>>question:
> >>>
> >>>On Fri, Oct 19, 2018 at 11:22:23AM +0800, Jason Wang wrote:
> >>>>There should not be a reason for passing a packet size greater than
> >>>>INT_MAX. It's usually a hint of bug somewhere, so ignore packet size
> >>>>greater than INT_MAX in qemu_deliver_packet_iov()
> >>>>
> >>>>CC:qemu-stable@nongnu.org
> >>>>Reported-by: Daniel Shapira<daniel@twistlock.com>
> >>>>Reviewed-by: Michael S. Tsirkin<mst@redhat.com>
> >>>>Signed-off-by: Jason Wang<jasowang@redhat.com>
> >>>>---
> >>>>  net/net.c | 7 ++++++-
> >>>>  1 file changed, 6 insertions(+), 1 deletion(-)
> >>>>
> >>>>diff --git a/net/net.c b/net/net.c
> >>>>index c66847e..07c194a 100644
> >>>>--- a/net/net.c
> >>>>+++ b/net/net.c
> >>>>@@ -712,10 +712,15 @@ ssize_t qemu_deliver_packet_iov(NetClientState *sender,
> >>>>                                  void *opaque)
> >>>>  {
> >>>>      NetClientState *nc = opaque;
> >>>>+    size_t size = iov_size(iov, iovcnt);
> >>>>      int ret;
> >>>>+    if (size > INT_MAX) {
> >>>>+        return size;
> >>>Is it okay that the function returns ssize_t (signed), but the type of the
> >>>size variable is size_t (unsigned)? For now the top level routine checks
> >>>the return value only for 0, but anyway we can return negative value
> >>>here instead of positive. What do you think?
> >>>
> >>>Regards, Dima.
> >>>
> >>Any non zero value should be ok here. Actually I think because of the
> >>conversion from size_t to ssize_t, caller actually see negative value?
> >I believe it depends. If long (ssize_t and size_t type) is 8 bytes, then
> >the routine can sometimes return positive values and sometimes negative.
> >I fully agree that in the current case any non zero value should be
> >okay. I just wanted to point on the inconsistency in types and as a
> >result a return value.
> 
> 
> I see, want to post a patch for this?
> 
> Thanks

Yes, I will take a look into it and prepare a patch.

Thanks, Dima.
> 
> 
> >Dima.
> >>Thanks
> >>


end of thread, other threads:[~2018-11-16  7:48 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-19  3:21 [Qemu-devel] [PULL V2 00/26] Net patches Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 01/26] filter-rewriter: Add TCP state machine and fix memory leak in connection_track_table Jason Wang
2018-10-29 11:01   ` Peter Maydell
2018-10-30  2:02     ` Jason Wang
2018-10-30  2:29       ` Zhang Chen
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 02/26] colo-compare: implement the process of checkpoint Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 03/26] colo-compare: use notifier to notify packets comparing result Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 04/26] COLO: integrate colo compare with colo frame Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 05/26] COLO: Add block replication into colo process Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 06/26] COLO: Remove colo_state migration struct Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 07/26] COLO: Load dirty pages into SVM's RAM cache firstly Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 08/26] ram/COLO: Record the dirty pages that SVM received Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 09/26] COLO: Flush memory data from ram cache Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 10/26] qmp event: Add COLO_EXIT event to notify users while exited COLO Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 11/26] qapi/migration.json: Rename COLO unknown mode to none mode Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 12/26] qapi: Add new command to query colo status Jason Wang
2018-10-19 15:30   ` Eric Blake
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 13/26] savevm: split the process of different stages for loadvm/savevm Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 14/26] COLO: flush host dirty ram from cache Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 15/26] filter: Add handle_event method for NetFilterClass Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 16/26] filter-rewriter: handle checkpoint and failover event Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 17/26] COLO: notify net filters about checkpoint/failover event Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 18/26] COLO: quick failover process by kick COLO thread Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 19/26] docs: Add COLO status diagram to COLO-FT.txt Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 20/26] clean up callback when del virtqueue Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 21/26] ne2000: fix possible out of bound access in ne2000_receive Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 22/26] rtl8139: fix possible out of bound access Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 23/26] pcnet: fix possible buffer overflow Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 24/26] net: ignore packet size greater than INT_MAX Jason Wang
2018-11-13 15:41   ` Dima Stepanov
2018-11-14  2:59     ` Jason Wang
2018-11-14 16:23       ` Dima Stepanov
2018-11-15  2:47         ` Jason Wang
2018-11-16  7:48           ` Dima Stepanov
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 25/26] e1000: indicate dropped packets in HW counters Jason Wang
2018-10-19  3:22 ` [Qemu-devel] [PULL V2 26/26] qemu-options: Fix bad "macaddr" property in the documentation Jason Wang
2018-10-19 15:17 ` [Qemu-devel] [PULL V2 00/26] Net patches Peter Maydell
