* [QEMU v2 0/9] Fast (de)inflating & fast live migration
@ 2016-07-15  2:47 ` Liang Li
  0 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

This patch set does two optimizations: one speeds up the (de)inflating
process of the virtio balloon, and the other speeds up the live
migration process. We put them together because both of them require
changes to the virtio balloon spec.

The main idea for speeding up the (de)inflating process is to use a
bitmap instead of PFNs to send the page information to the host, which
reduces the overhead of virtio data transmission, address translation
and madvise(). This improves the performance by about 85%.
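
As a rough feel for the difference in transferred bytes, here is a tiny
standalone sketch (illustrative only, not part of the patches; the
32-byte header size is just an assumption for this example) comparing a
32-bit PFN list with a header-plus-bitmap encoding for 7GB worth of
contiguous 4KB pages:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* 7GB of 4KB pages, as in the measurements later in this series. */
    uint64_t npages = (7ULL << 30) >> 12;
    uint64_t pfn_list_bytes = npages * 4;           /* one 32-bit PFN per page */
    uint64_t bitmap_bytes = 32 + (npages + 7) / 8;  /* header + one bit per page */

    printf("PFN list: %" PRIu64 " bytes, bitmap: %" PRIu64 " bytes\n",
           pfn_list_bytes, bitmap_bytes);
    return 0;
}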

The idea for speeding up live migration is to skip the guest's free
pages in the first round of data copy, which reduces needless data
processing and saves quite a lot of CPU cycles and network bandwidth.
We get the guest's free page information through a virtqueue of
virtio-balloon, and filter out these free pages during live migration.
For an idle 8GB guest, this shortens the total live migration time from
about 2s to about 500ms in a 10Gbps network environment.
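
The filtering step itself amounts to clearing the reported free pages
from the first-round dirty bitmap before sending. A minimal sketch,
assuming both bitmaps use one bit per guest page over the same range
(function and parameter names are made up for illustration, not the
ones used in the migration patch):

#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Drop pages the guest reported as free from the first-round migration
 * bitmap, so they are never copied over the wire. */
static void skip_free_pages(unsigned long *migration_bitmap,
                            const unsigned long *free_page_bitmap,
                            size_t nr_pages)
{
    size_t i, nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

    for (i = 0; i < nr_longs; i++) {
        migration_bitmap[i] &= ~free_page_bitmap[i];
    }
}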

Changes from v1 to v2:
    * Abandon the patch for dropping page cache.
    * Get a struct from the vq instead of separate variables.
    * Use two separate APIs to request free pages and query the status.
    * Change the virtio balloon interface.
    * Address some of the comments on v1.

Liang Li (9):
  virtio-balloon: Remove needless precompiled directive
  virtio-balloon: update linux head file
  virtio-balloon: speed up inflating & deflating process
  virtio-balloon: update linux head file for new feature
  balloon: get free page info from guest
  balloon: migrate vq elem to destination
  bitmap: Add a new bitmap_move function
  kvm: Add two new arch specific functions
  migration: skip free pages during live migration

 balloon.c                                       |  47 +++-
 hw/virtio/virtio-balloon.c                      | 304 ++++++++++++++++++++++--
 include/hw/virtio/virtio-balloon.h              |  18 +-
 include/qemu/bitmap.h                           |  13 +
 include/standard-headers/linux/virtio_balloon.h |  41 ++++
 include/sysemu/balloon.h                        |  18 +-
 include/sysemu/kvm.h                            |  18 ++
 migration/ram.c                                 |  86 +++++++
 target-arm/kvm.c                                |  14 ++
 target-i386/kvm.c                               |  37 +++
 target-mips/kvm.c                               |  14 ++
 target-ppc/kvm.c                                |  14 ++
 target-s390x/kvm.c                              |  14 ++
 13 files changed, 608 insertions(+), 30 deletions(-)

-- 
1.9.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [QEMU v2 1/9] virtio-balloon: Remove needless precompiled directive
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-15  2:47   ` Liang Li
  -1 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

Since there is a wrapper around madvise(), the virtio-balloon
code is able to work without the preprocessor directive, so the
directive can be removed.
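
For context, a wrapper of roughly the following shape is what lets the
caller drop the #if: on hosts without madvise() the hint is simply
refused. This is only a sketch of the idea, not QEMU's actual
qemu_madvise() implementation, and HAVE_MADVISE is an invented guard:

#include <errno.h>
#include <stddef.h>
#ifdef HAVE_MADVISE
#include <sys/mman.h>
#endif

static int example_madvise(void *addr, size_t len, int advice)
{
#ifdef HAVE_MADVISE
    return madvise(addr, len, advice);
#else
    (void)addr;
    (void)len;
    (void)advice;
    errno = EINVAL;    /* silently refuse the hint on such hosts */
    return -1;
#endif
}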

Signed-off-by: Liang Li <liang.z.li@intel.com>
Suggested-by: Thomas Huth <thuth@redhat.com>
---
 hw/virtio/virtio-balloon.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index 1a22e6d..62931b3 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -34,13 +34,11 @@
 
 static void balloon_page(void *addr, int deflate)
 {
-#if defined(__linux__)
     if (!qemu_balloon_is_inhibited() && (!kvm_enabled() ||
                                          kvm_has_sync_mmu())) {
         qemu_madvise(addr, BALLOON_PAGE_SIZE,
                 deflate ? QEMU_MADV_WILLNEED : QEMU_MADV_DONTNEED);
     }
-#endif
 }
 
 static const char *balloon_stat_names[] = {
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [QEMU v2 2/9] virtio-balloon: update linux head file
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-15  2:47   ` Liang Li
  -1 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

Update the new feature bit definition and the page bitmap header
struct to keep them consistent with the kernel side.
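
To make the semantics concrete, here is a small standalone sketch
(host-endian fields stand in for the __virtio types, and the values are
made up): bit i of the bitmap that follows the header describes PFN
start_pfn + i, and each PFN covers 1 << page_shift bytes:

#include <stdint.h>
#include <stdio.h>

/* Host-endian stand-in for struct balloon_bmap_hdr, illustration only. */
struct bmap_hdr_example {
    uint16_t cmd;
    uint16_t page_shift;
    uint16_t flag;
    uint16_t reserved;
    uint64_t req_id;
    uint64_t start_pfn;     /* PFN described by bit 0 of the bitmap */
    uint64_t bmap_len;      /* bytes of bitmap that follow the header */
};

int main(void)
{
    struct bmap_hdr_example hdr = {
        .page_shift = 12,        /* 4KB pages */
        .start_pfn  = 0x10000,
        .bmap_len   = 4096,      /* 4096 bytes -> 32768 pages covered */
    };
    uint64_t pages = hdr.bmap_len * 8;
    uint64_t first = hdr.start_pfn << hdr.page_shift;
    uint64_t last  = (hdr.start_pfn + pages) << hdr.page_shift;

    printf("covers guest physical range [0x%llx, 0x%llx)\n",
           (unsigned long long)first, (unsigned long long)last);
    return 0;
}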

Signed-off-by: Liang Li <liang.z.li@intel.com>
---
 include/standard-headers/linux/virtio_balloon.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/standard-headers/linux/virtio_balloon.h b/include/standard-headers/linux/virtio_balloon.h
index 9d06ccd..d577359 100644
--- a/include/standard-headers/linux/virtio_balloon.h
+++ b/include/standard-headers/linux/virtio_balloon.h
@@ -34,6 +34,7 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_PAGE_BITMAP	3 /* Send page info with bitmap */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -82,4 +83,22 @@ struct virtio_balloon_stat {
 	__virtio64 val;
 } QEMU_PACKED;
 
+/* Page bitmap header structure */
+struct balloon_bmap_hdr {
+	/* Used to distinguish different request */
+	__virtio16 cmd;
+	/* Shift width of page in the bitmap */
+	__virtio16 page_shift;
+	/* flag used to identify different status */
+	__virtio16 flag;
+	/* Reserved */
+	__virtio16 reserved;
+	/* ID of the request */
+	__virtio64 req_id;
+	/* The pfn of 0 bit in the bitmap */
+	__virtio64 start_pfn;
+	/* The length of the bitmap, in bytes */
+	__virtio64 bmap_len;
+};
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [QEMU v2 3/9] virtio-balloon: speed up inflating & deflating process
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-15  2:47   ` Liang Li
  -1 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

The current virtio-balloon implementation is not very efficient.
The time spent on the different stages of inflating the balloon
to 7GB of an 8GB idle guest breaks down as:

a. allocating pages (6.5%)
b. sending PFNs to host (68.3%)
c. address translation (6.1%)
d. madvise (19%)

It takes about 4126ms for the inflating process to complete.
Debugging shows that the bottlenecks are stage b and stage d.

Using a bitmap to send the page info instead of PFNs reduces the
overhead of stage b quite a lot. Furthermore, we can do the address
translation and call madvise() on a bulk of RAM pages instead of the
current page-by-page way, so the overhead of stage c and stage d can
also be reduced a lot.
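
The bulk approach boils down to walking runs of set bits and issuing
one madvise() per run instead of one call per page. A standalone sketch
of that loop, with a naive bit test instead of QEMU's find_next_bit()/
find_next_zero_bit() and a callback standing in for the madvise on each
run:

#include <stdbool.h>
#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static bool test_bit_example(const unsigned long *bmap, size_t i)
{
    return bmap[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG));
}

/* Walk runs of set bits in 'bmap' (nbits long) and hand each run to
 * 'fn' once; in the real code 'fn' would translate the run to a host
 * address and madvise() the whole range at once. */
static void for_each_run(const unsigned long *bmap, size_t nbits,
                         void (*fn)(size_t first, size_t npages, void *opaque),
                         void *opaque)
{
    size_t i = 0;

    while (i < nbits) {
        if (!test_bit_example(bmap, i)) {
            i++;
            continue;
        }
        size_t start = i;
        while (i < nbits && test_bit_example(bmap, i)) {
            i++;
        }
        fn(start, i - start, opaque);
    }
}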

This patch is the QEMU side implementation, which speeds up the
inflating & deflating process by adding a new feature to the
virtio-balloon device. With this new feature, inflating the balloon
to 7GB of an 8GB idle guest only takes 590ms; the performance
improvement is about 85%.

TODO: optimize stage a by allocating/freeing a chunk of pages
instead of a single page at a time.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
---
 hw/virtio/virtio-balloon.c | 144 ++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 123 insertions(+), 21 deletions(-)

diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index 62931b3..a7152c8 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -52,6 +52,77 @@ static const char *balloon_stat_names[] = {
    [VIRTIO_BALLOON_S_NR] = NULL
 };
 
+static void do_balloon_bulk_pages(ram_addr_t base_pfn, uint16_t page_shift,
+                                  unsigned long len, bool deflate)
+{
+    ram_addr_t size, processed, chunk, base;
+    MemoryRegionSection section = {.mr = NULL};
+
+    size = len << page_shift;
+    base = base_pfn << page_shift;
+
+    for (processed = 0; processed < size; processed += chunk) {
+        chunk = size - processed;
+        while (chunk >= TARGET_PAGE_SIZE) {
+            section = memory_region_find(get_system_memory(),
+                                         base + processed, chunk);
+            if (!section.mr) {
+                chunk = QEMU_ALIGN_DOWN(chunk / 2, TARGET_PAGE_SIZE);
+            } else {
+                break;
+            }
+        }
+
+        if (section.mr &&
+            (int128_nz(section.size) && memory_region_is_ram(section.mr))) {
+            void *addr = section.offset_within_region +
+                   memory_region_get_ram_ptr(section.mr);
+            qemu_madvise(addr, chunk,
+                         deflate ? QEMU_MADV_WILLNEED : QEMU_MADV_DONTNEED);
+        } else {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "Invalid guest RAM range [0x%lx, 0x%lx]\n",
+                          base + processed, chunk);
+            chunk = TARGET_PAGE_SIZE;
+        }
+    }
+}
+
+static void balloon_bulk_pages(struct balloon_bmap_hdr *hdr,
+                               unsigned long *bitmap, bool deflate)
+{
+    ram_addr_t base_pfn = hdr->start_pfn;
+    uint16_t page_shift = hdr->page_shift;
+    unsigned long len = hdr->bmap_len;
+    unsigned long current = 0, end = len * BITS_PER_BYTE;
+
+    if (!qemu_balloon_is_inhibited() && (!kvm_enabled() ||
+                                         kvm_has_sync_mmu())) {
+        while (current < end) {
+            unsigned long one = find_next_bit(bitmap, end, current);
+
+            if (one < end) {
+                unsigned long pages, zero;
+
+                zero = find_next_zero_bit(bitmap, end, one + 1);
+                if (zero >= end) {
+                    pages = end - one;
+                } else {
+                    pages = zero - one;
+                }
+
+                if (pages) {
+                    do_balloon_bulk_pages(base_pfn + one, page_shift,
+                                          pages, deflate);
+                }
+                current = one + pages;
+            } else {
+                current = one;
+            }
+        }
+    }
+}
+
 /*
  * reset_stats - Mark all items in the stats array as unset
  *
@@ -72,6 +143,13 @@ static bool balloon_stats_supported(const VirtIOBalloon *s)
     return virtio_vdev_has_feature(vdev, VIRTIO_BALLOON_F_STATS_VQ);
 }
 
+static bool balloon_page_bitmap_supported(const VirtIOBalloon *s)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+
+    return virtio_vdev_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
+}
+
 static bool balloon_stats_enabled(const VirtIOBalloon *s)
 {
     return s->stats_poll_interval > 0;
@@ -213,32 +291,54 @@ static void virtio_balloon_handle_output(VirtIODevice *vdev, VirtQueue *vq)
     for (;;) {
         size_t offset = 0;
         uint32_t pfn;
+
         elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
         if (!elem) {
             return;
         }
 
-        while (iov_to_buf(elem->out_sg, elem->out_num, offset, &pfn, 4) == 4) {
-            ram_addr_t pa;
-            ram_addr_t addr;
-            int p = virtio_ldl_p(vdev, &pfn);
-
-            pa = (ram_addr_t) p << VIRTIO_BALLOON_PFN_SHIFT;
-            offset += 4;
-
-            /* FIXME: remove get_system_memory(), but how? */
-            section = memory_region_find(get_system_memory(), pa, 1);
-            if (!int128_nz(section.size) || !memory_region_is_ram(section.mr))
-                continue;
-
-            trace_virtio_balloon_handle_output(memory_region_name(section.mr),
-                                               pa);
-            /* Using memory_region_get_ram_ptr is bending the rules a bit, but
-               should be OK because we only want a single page.  */
-            addr = section.offset_within_region;
-            balloon_page(memory_region_get_ram_ptr(section.mr) + addr,
-                         !!(vq == s->dvq));
-            memory_region_unref(section.mr);
+        if (balloon_page_bitmap_supported(s)) {
+            struct balloon_bmap_hdr hdr;
+            uint64_t bmap_len;
+
+            iov_to_buf(elem->out_sg, elem->out_num, offset, &hdr, sizeof(hdr));
+            offset += sizeof(hdr);
+
+            bmap_len = hdr.bmap_len;
+            if (bmap_len > 0) {
+                unsigned long *bitmap = bitmap_new(bmap_len * BITS_PER_BYTE);
+                iov_to_buf(elem->out_sg, elem->out_num, offset,
+                           bitmap, bmap_len);
+
+                balloon_bulk_pages(&hdr, bitmap, !!(vq == s->dvq));
+                g_free(bitmap);
+            }
+        } else {
+            while (iov_to_buf(elem->out_sg, elem->out_num, offset,
+                              &pfn, 4) == 4) {
+                ram_addr_t pa;
+                ram_addr_t addr;
+                int p = virtio_ldl_p(vdev, &pfn);
+
+                pa = (ram_addr_t) p << VIRTIO_BALLOON_PFN_SHIFT;
+                offset += 4;
+
+                /* FIXME: remove get_system_memory(), but how? */
+                section = memory_region_find(get_system_memory(), pa, 1);
+                if (!int128_nz(section.size) ||
+                    !memory_region_is_ram(section.mr)) {
+                    continue;
+                }
+
+                trace_virtio_balloon_handle_output(memory_region_name(
+                                                            section.mr), pa);
+                /* Using memory_region_get_ram_ptr is bending the rules a bit,
+                 * but should be OK because we only want a single page.  */
+                addr = section.offset_within_region;
+                balloon_page(memory_region_get_ram_ptr(section.mr) + addr,
+                             !!(vq == s->dvq));
+                memory_region_unref(section.mr);
+            }
         }
 
         virtqueue_push(vq, elem, offset);
@@ -494,6 +594,8 @@ static void virtio_balloon_instance_init(Object *obj)
 static Property virtio_balloon_properties[] = {
     DEFINE_PROP_BIT("deflate-on-oom", VirtIOBalloon, host_features,
                     VIRTIO_BALLOON_F_DEFLATE_ON_OOM, false),
+    DEFINE_PROP_BIT("page-bitmap", VirtIOBalloon, host_features,
+                    VIRTIO_BALLOON_F_PAGE_BITMAP, true),
     DEFINE_PROP_END_OF_LIST(),
 };
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [QEMU v2 4/9] virtio-balloon: update linux head file for new feature
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-15  2:47   ` Liang Li
  -1 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

Update the new feature bit definition for the new virtqueue and the
request header struct to keep them consistent with the kernel side.
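
For orientation, the protocol these definitions imply is: the host
writes a balloon_req_hdr naming the request, and the guest replies with
a stream of bitmap chunks, each flagged CONT until the final chunk
carries DONE for the same request ID. A toy, host-endian sketch (names
and values here only mirror the enums for illustration; this is not the
__virtio wire format):

#include <stdint.h>

enum { EXAMPLE_GET_FREE_PAGES = 0 };                 /* BALLOON_GET_FREE_PAGES */
enum { EXAMPLE_FLAG_CONT = 0, EXAMPLE_FLAG_DONE = 1 };

struct req_hdr_example {                             /* like balloon_req_hdr */
    uint16_t cmd;
    uint16_t reserved[3];
    uint64_t param;          /* request ID chosen by the host */
};

/* The host queues one of these on the misc virtqueue; the guest answers
 * with balloon_bmap_hdr chunks until one carries EXAMPLE_FLAG_DONE. */
static struct req_hdr_example make_free_page_request(uint64_t req_id)
{
    struct req_hdr_example req = {
        .cmd   = EXAMPLE_GET_FREE_PAGES,
        .param = req_id,
    };
    return req;
}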

Signed-off-by: Liang Li <liang.z.li@intel.com>
---
 include/standard-headers/linux/virtio_balloon.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/include/standard-headers/linux/virtio_balloon.h b/include/standard-headers/linux/virtio_balloon.h
index d577359..797a868 100644
--- a/include/standard-headers/linux/virtio_balloon.h
+++ b/include/standard-headers/linux/virtio_balloon.h
@@ -35,6 +35,7 @@
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
 #define VIRTIO_BALLOON_F_PAGE_BITMAP	3 /* Send page info with bitmap */
+#define VIRTIO_BALLOON_F_MISC_VQ	4 /* Misc info virtqueue */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -101,4 +102,25 @@ struct balloon_bmap_hdr {
 	__virtio64 bmap_len;
 };
 
+enum balloon_req_id {
+	/* Get free pages information */
+	BALLOON_GET_FREE_PAGES,
+};
+
+enum balloon_flag {
+	/* Have more data for a request */
+	BALLOON_FLAG_CONT,
+	/* No more data for a request */
+	BALLOON_FLAG_DONE,
+};
+
+struct balloon_req_hdr {
+	/* Used to distinguish different request */
+	__virtio16 cmd;
+	/* Reserved */
+	__virtio16 reserved[3];
+	/* Request parameter */
+	__virtio64 param;
+};
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [QEMU v2 5/9] balloon: get free page info from guest
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-15  2:47   ` Liang Li
  -1 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

Add a new feature to get the free page information from the guest;
the free page information is saved in a bitmap. Please note that
'free page' means the page is free at some point after the host sets
the request ID and before it receives the response with the same ID.
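
A rough sketch of how a host-side caller (for example the migration
code in a later patch) is expected to drive the two new APIs; the 'len'
argument is the bitmap length in bytes, matching how the device code
uses it, and the busy-wait here is a simplification: a real caller
would bound the wait and keep servicing the main loop:

#include "qemu/osdep.h"
#include "sysemu/balloon.h"

static bool request_guest_free_pages(unsigned long *bitmap,
                                     unsigned long bmap_len_bytes,
                                     unsigned long req_id)
{
    BalloonReqStatus status;
    unsigned long done_id = 0;

    if (!balloon_free_pages_support()) {
        return false;    /* no balloon device or no misc vq negotiated */
    }

    /* Kick off the request; the guest fills 'bitmap' asynchronously. */
    status = balloon_get_free_pages(bitmap, bmap_len_bytes, req_id);
    if (status != REQ_START && status != REQ_ON_GOING) {
        return false;
    }

    /* Simplified poll until the guest reports the request as done. */
    do {
        status = balloon_free_page_ready(&done_id);
    } while (status == REQ_START || status == REQ_ON_GOING);

    return status == REQ_DONE && done_id == req_id;
}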

Signed-off-by: Liang Li <liang.z.li@intel.com>
---
 balloon.c                          |  47 +++++++++++++-
 hw/virtio/virtio-balloon.c         | 122 ++++++++++++++++++++++++++++++++++++-
 include/hw/virtio/virtio-balloon.h |  18 +++++-
 include/sysemu/balloon.h           |  18 +++++-
 4 files changed, 200 insertions(+), 5 deletions(-)

diff --git a/balloon.c b/balloon.c
index f2ef50c..d6a3791 100644
--- a/balloon.c
+++ b/balloon.c
@@ -36,6 +36,8 @@
 
 static QEMUBalloonEvent *balloon_event_fn;
 static QEMUBalloonStatus *balloon_stat_fn;
+static QEMUBalloonGetFreePage *balloon_get_free_page_fn;
+static QEMUBalloonFreePageReady *balloon_free_page_ready_fn;
 static void *balloon_opaque;
 static bool balloon_inhibited;
 
@@ -65,9 +67,13 @@ static bool have_balloon(Error **errp)
 }
 
 int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
-                             QEMUBalloonStatus *stat_func, void *opaque)
+                             QEMUBalloonStatus *stat_func,
+                             QEMUBalloonGetFreePage *get_free_page_func,
+                             QEMUBalloonFreePageReady *free_page_ready_func,
+                             void *opaque)
 {
-    if (balloon_event_fn || balloon_stat_fn || balloon_opaque) {
+    if (balloon_event_fn || balloon_stat_fn || balloon_get_free_page_fn
+        || balloon_free_page_ready_fn || balloon_opaque) {
         /* We're already registered one balloon handler.  How many can
          * a guest really have?
          */
@@ -75,6 +81,8 @@ int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
     }
     balloon_event_fn = event_func;
     balloon_stat_fn = stat_func;
+    balloon_get_free_page_fn = get_free_page_func;
+    balloon_free_page_ready_fn = free_page_ready_func;
     balloon_opaque = opaque;
     return 0;
 }
@@ -86,6 +94,8 @@ void qemu_remove_balloon_handler(void *opaque)
     }
     balloon_event_fn = NULL;
     balloon_stat_fn = NULL;
+    balloon_get_free_page_fn = NULL;
+    balloon_free_page_ready_fn = NULL;
     balloon_opaque = NULL;
 }
 
@@ -116,3 +126,36 @@ void qmp_balloon(int64_t target, Error **errp)
     trace_balloon_event(balloon_opaque, target);
     balloon_event_fn(balloon_opaque, target);
 }
+
+bool balloon_free_pages_support(void)
+{
+    return balloon_get_free_page_fn ? true : false;
+}
+
+BalloonReqStatus balloon_get_free_pages(unsigned long *bitmap,
+                                        unsigned long len,
+                                        unsigned long req_id)
+{
+    if (!balloon_get_free_page_fn) {
+        return REQ_UNSUPPORT;
+    }
+
+    if (!bitmap) {
+        return REQ_INVALID_PARAM;
+    }
+
+    return balloon_get_free_page_fn(balloon_opaque, bitmap, len, req_id);
+}
+
+BalloonReqStatus balloon_free_page_ready(unsigned long *req_id)
+{
+    if (!balloon_free_page_ready_fn) {
+        return REQ_UNSUPPORT;
+    }
+
+    if (!req_id) {
+        return REQ_INVALID_PARAM;
+    }
+
+    return balloon_free_page_ready_fn(balloon_opaque, req_id);
+}
diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index a7152c8..b0c09a7 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -150,6 +150,13 @@ static bool balloon_page_bitmap_supported(const VirtIOBalloon *s)
     return virtio_vdev_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
 }
 
+static bool balloon_misc_vq_supported(const VirtIOBalloon *s)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+
+    return virtio_vdev_has_feature(vdev, VIRTIO_BALLOON_F_MISC_VQ);
+}
+
 static bool balloon_stats_enabled(const VirtIOBalloon *s)
 {
     return s->stats_poll_interval > 0;
@@ -399,6 +406,52 @@ out:
     }
 }
 
+static void virtio_balloon_handle_resp(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOBalloon *s = VIRTIO_BALLOON(vdev);
+    VirtQueueElement *elem;
+    size_t offset = 0;
+    struct balloon_bmap_hdr hdr;
+
+    elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
+    if (!elem) {
+        s->req_status = REQ_ERROR;
+        return;
+    }
+
+    s->misc_vq_elem = elem;
+    if (!elem->out_num) {
+        return;
+    }
+
+    iov_to_buf(elem->out_sg, elem->out_num, offset,
+               &hdr, sizeof(hdr));
+    offset += sizeof(hdr);
+
+    switch (hdr.cmd) {
+    case BALLOON_GET_FREE_PAGES:
+        if (hdr.req_id == s->misc_req.param) {
+            if (s->bmap_len < hdr.start_pfn / BITS_PER_BYTE + hdr.bmap_len) {
+                hdr.bmap_len = s->bmap_len - hdr.start_pfn / BITS_PER_BYTE;
+            }
+
+            iov_to_buf(elem->out_sg, elem->out_num, offset,
+                       s->free_page_bmap + hdr.start_pfn / BITS_PER_LONG,
+                       hdr.bmap_len);
+            if (hdr.flag == BALLOON_FLAG_DONE) {
+                s->req_id = hdr.req_id;
+                s->req_status = REQ_DONE;
+            } else {
+                s->req_status = REQ_ON_GOING;
+            }
+        }
+        break;
+    default:
+        break;
+    }
+
+}
+
 static void virtio_balloon_get_config(VirtIODevice *vdev, uint8_t *config_data)
 {
     VirtIOBalloon *dev = VIRTIO_BALLOON(vdev);
@@ -478,6 +531,61 @@ static void virtio_balloon_stat(void *opaque, BalloonInfo *info)
                                              VIRTIO_BALLOON_PFN_SHIFT);
 }
 
+static BalloonReqStatus virtio_balloon_free_pages(void *opaque,
+                                                  unsigned long *bitmap,
+                                                  unsigned long bmap_len,
+                                                  unsigned long req_id)
+{
+    VirtIOBalloon *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    VirtQueueElement *elem = s->misc_vq_elem;
+    int len;
+
+    if (!balloon_misc_vq_supported(s)) {
+        return REQ_UNSUPPORT;
+    }
+
+    if (s->req_status == REQ_INIT || s->req_status == REQ_DONE) {
+        s->free_page_bmap = bitmap;
+        if (elem == NULL || !elem->in_num) {
+            elem = virtqueue_pop(s->mvq, sizeof(VirtQueueElement));
+            if (!elem) {
+                return REQ_ERROR;
+            }
+            s->misc_vq_elem = elem;
+        }
+        s->misc_req.cmd = BALLOON_GET_FREE_PAGES;
+        s->misc_req.param = req_id;
+        s->bmap_len = bmap_len;
+        len = iov_from_buf(elem->in_sg, elem->in_num, 0, &s->misc_req,
+                           sizeof(s->misc_req));
+        virtqueue_push(s->mvq, elem, len);
+        virtio_notify(vdev, s->mvq);
+        g_free(s->misc_vq_elem);
+        s->misc_vq_elem = NULL;
+        s->req_status = REQ_ON_GOING;
+        return REQ_START;
+    }
+
+    return REQ_ON_GOING;
+}
+
+static BalloonReqStatus virtio_balloon_free_page_ready(void *opaque,
+                                                       unsigned long *req_id)
+{
+    VirtIOBalloon *s = opaque;
+
+    if (!balloon_misc_vq_supported(s)) {
+        return REQ_UNSUPPORT;
+    }
+
+    if (s->req_status == REQ_DONE) {
+        *req_id = s->req_id;
+    }
+
+    return s->req_status;
+}
+
 static void virtio_balloon_to_target(void *opaque, ram_addr_t target)
 {
     VirtIOBalloon *dev = VIRTIO_BALLOON(opaque);
@@ -539,7 +647,9 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
                 sizeof(struct virtio_balloon_config));
 
     ret = qemu_add_balloon_handler(virtio_balloon_to_target,
-                                   virtio_balloon_stat, s);
+                                   virtio_balloon_stat,
+                                   virtio_balloon_free_pages,
+                                   virtio_balloon_free_page_ready, s);
 
     if (ret < 0) {
         error_setg(errp, "Only one balloon device is supported");
@@ -550,8 +660,10 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
     s->ivq = virtio_add_queue(vdev, 128, virtio_balloon_handle_output);
     s->dvq = virtio_add_queue(vdev, 128, virtio_balloon_handle_output);
     s->svq = virtio_add_queue(vdev, 128, virtio_balloon_receive_stats);
+    s->mvq = virtio_add_queue(vdev, 128, virtio_balloon_handle_resp);
 
     reset_stats(s);
+    s->req_status = REQ_INIT;
 
     register_savevm(dev, "virtio-balloon", -1, 1,
                     virtio_balloon_save, virtio_balloon_load, s);
@@ -576,6 +688,12 @@ static void virtio_balloon_device_reset(VirtIODevice *vdev)
         g_free(s->stats_vq_elem);
         s->stats_vq_elem = NULL;
     }
+
+    if (s->misc_vq_elem != NULL) {
+        g_free(s->misc_vq_elem);
+        s->misc_vq_elem = NULL;
+    }
+    s->req_status = REQ_INIT;
 }
 
 static void virtio_balloon_instance_init(Object *obj)
@@ -596,6 +714,8 @@ static Property virtio_balloon_properties[] = {
                     VIRTIO_BALLOON_F_DEFLATE_ON_OOM, false),
     DEFINE_PROP_BIT("page-bitmap", VirtIOBalloon, host_features,
                     VIRTIO_BALLOON_F_PAGE_BITMAP, true),
+    DEFINE_PROP_BIT("misc-vq", VirtIOBalloon, host_features,
+                    VIRTIO_BALLOON_F_MISC_VQ, true),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/include/hw/virtio/virtio-balloon.h b/include/hw/virtio/virtio-balloon.h
index 1ea13bd..a390a84 100644
--- a/include/hw/virtio/virtio-balloon.h
+++ b/include/hw/virtio/virtio-balloon.h
@@ -23,6 +23,16 @@
 #define VIRTIO_BALLOON(obj) \
         OBJECT_CHECK(VirtIOBalloon, (obj), TYPE_VIRTIO_BALLOON)
 
+typedef enum {
+    REQ_INIT,
+    REQ_START,
+    REQ_ON_GOING,
+    REQ_DONE,
+    REQ_ERROR,
+    REQ_INVALID_PARAM,
+    REQ_UNSUPPORT,
+} BalloonReqStatus;
+
 typedef struct virtio_balloon_stat VirtIOBalloonStat;
 
 typedef struct virtio_balloon_stat_modern {
@@ -33,16 +43,22 @@ typedef struct virtio_balloon_stat_modern {
 
 typedef struct VirtIOBalloon {
     VirtIODevice parent_obj;
-    VirtQueue *ivq, *dvq, *svq;
+    VirtQueue *ivq, *dvq, *svq, *mvq;
     uint32_t num_pages;
     uint32_t actual;
     uint64_t stats[VIRTIO_BALLOON_S_NR];
     VirtQueueElement *stats_vq_elem;
+    VirtQueueElement *misc_vq_elem;
     size_t stats_vq_offset;
     QEMUTimer *stats_timer;
     int64_t stats_last_update;
     int64_t stats_poll_interval;
     uint32_t host_features;
+    struct balloon_req_hdr misc_req;
+    BalloonReqStatus req_status;
+    uint64_t *free_page_bmap;
+    uint64_t bmap_len;
+    uint64_t req_id;
 } VirtIOBalloon;
 
 #endif
diff --git a/include/sysemu/balloon.h b/include/sysemu/balloon.h
index af49e19..5f5960c 100644
--- a/include/sysemu/balloon.h
+++ b/include/sysemu/balloon.h
@@ -15,14 +15,30 @@
 #define QEMU_BALLOON_H
 
 #include "qapi-types.h"
+#include "hw/virtio/virtio-balloon.h"
 
 typedef void (QEMUBalloonEvent)(void *opaque, ram_addr_t target);
 typedef void (QEMUBalloonStatus)(void *opaque, BalloonInfo *info);
+typedef BalloonReqStatus (QEMUBalloonGetFreePage)(void *opaque,
+                                                  unsigned long *bitmap,
+                                                  unsigned long len,
+                                                  unsigned long req_id);
+
+typedef BalloonReqStatus (QEMUBalloonFreePageReady)(void *opaque,
+                                                    unsigned long *req_id);
 
 int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
-			     QEMUBalloonStatus *stat_func, void *opaque);
+                             QEMUBalloonStatus *stat_func,
+                             QEMUBalloonGetFreePage *get_free_page_func,
+                             QEMUBalloonFreePageReady *free_page_ready_func,
+                             void *opaque);
 void qemu_remove_balloon_handler(void *opaque);
 bool qemu_balloon_is_inhibited(void);
 void qemu_balloon_inhibit(bool state);
+bool balloon_free_pages_support(void);
+BalloonReqStatus balloon_get_free_pages(unsigned long *bitmap,
+                                        unsigned long len,
+                                        unsigned long req_id);
+BalloonReqStatus balloon_free_page_ready(unsigned long *req_id);
 
 #endif
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [QEMU v2 6/9] balloon: migrate vq elem to destination
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-15  2:47   ` Liang Li
  -1 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

After live migration, 'guest-stats' can't get the expected memory
status from the guest. This issue is caused by commit 4eae2a657d:
the value of 's->stats_vq_elem' is NULL after live migration, so the
check in 'balloon_stats_poll_cb()' prevents 'virtio_notify()' from
being called and the guest is never asked to update the memory
status.

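For reference, the check in question looks roughly like this
(paraphrased from hw/virtio/virtio-balloon.c for illustration; it is
not part of this patch and minor details may differ):

    static void balloon_stats_poll_cb(void *opaque)
    {
        VirtIOBalloon *s = opaque;
        VirtIODevice *vdev = VIRTIO_DEVICE(s);

        if (s->stats_vq_elem == NULL || !balloon_stats_supported(s)) {
            /* On the destination the element is NULL after migration,
             * so we only re-arm the timer and never reach
             * virtio_notify() below. */
            balloon_stats_change_timer(s, s->stats_poll_interval);
            return;
        }

        virtqueue_push(s->svq, s->stats_vq_elem, s->stats_vq_offset);
        virtio_notify(vdev, s->svq);
        g_free(s->stats_vq_elem);
        s->stats_vq_elem = NULL;
    }
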
Commit 4eae2a657d is doing the right thing, but 's->stats_vq_elem'
should be treated as part of the balloon device state and migrated
to the destination when it is not NULL, so that everything keeps
working after migration.

For the same reason, 's->misc_vq_elem' should be migrated to the
destination too.

Michael has another idea for solving this issue, but he is busy at
the moment; this patch can be used for testing until his patch is
ready.

Signed-off-by: Liang Li <liang.z.li@intel.com>
---
 hw/virtio/virtio-balloon.c | 36 ++++++++++++++++++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index b0c09a7..f9bf26d 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -31,6 +31,7 @@
 #include "hw/virtio/virtio-access.h"
 
 #define BALLOON_PAGE_SIZE  (1 << VIRTIO_BALLOON_PFN_SHIFT)
+#define BALLOON_VERSION 2
 
 static void balloon_page(void *addr, int deflate)
 {
@@ -610,15 +611,33 @@ static void virtio_balloon_save(QEMUFile *f, void *opaque)
 static void virtio_balloon_save_device(VirtIODevice *vdev, QEMUFile *f)
 {
     VirtIOBalloon *s = VIRTIO_BALLOON(vdev);
+    uint16_t elem_num = 0;
 
     qemu_put_be32(f, s->num_pages);
     qemu_put_be32(f, s->actual);
+    if (s->stats_vq_elem != NULL) {
+        elem_num = 1;
+    }
+    qemu_put_be16(f, elem_num);
+    if (elem_num) {
+        qemu_put_virtqueue_element(f, s->stats_vq_elem);
+    }
+
+    elem_num = 0;
+    if (s->misc_vq_elem != NULL) {
+        elem_num = 1;
+    }
+    qemu_put_be16(f, elem_num);
+    if (elem_num) {
+        qemu_put_virtqueue_element(f, s->misc_vq_elem);
+    }
 }
 
 static int virtio_balloon_load(QEMUFile *f, void *opaque, int version_id)
 {
-    if (version_id != 1)
+    if (version_id < 1 || version_id > BALLOON_VERSION) {
         return -EINVAL;
+    }
 
     return virtio_load(VIRTIO_DEVICE(opaque), f, version_id);
 }
@@ -627,9 +646,22 @@ static int virtio_balloon_load_device(VirtIODevice *vdev, QEMUFile *f,
                                       int version_id)
 {
     VirtIOBalloon *s = VIRTIO_BALLOON(vdev);
+    uint16_t elem_num = 0;
 
     s->num_pages = qemu_get_be32(f);
     s->actual = qemu_get_be32(f);
+    if (version_id == BALLOON_VERSION) {
+        elem_num = qemu_get_be16(f);
+        if (elem_num == 1) {
+            s->stats_vq_elem =
+                    qemu_get_virtqueue_element(f, sizeof(VirtQueueElement));
+        }
+        elem_num = qemu_get_be16(f);
+        if (elem_num == 1) {
+            s->misc_vq_elem =
+                    qemu_get_virtqueue_element(f, sizeof(VirtQueueElement));
+        }
+    }
 
     if (balloon_stats_enabled(s)) {
         balloon_stats_change_timer(s, s->stats_poll_interval);
@@ -665,7 +697,7 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
     reset_stats(s);
     s->req_status = REQ_INIT;
 
-    register_savevm(dev, "virtio-balloon", -1, 1,
+    register_savevm(dev, "virtio-balloon", -1, BALLOON_VERSION,
                     virtio_balloon_save, virtio_balloon_load, s);
 }
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [QEMU v2 7/9] bitmap: Add a new bitmap_move function
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-15  2:47   ` Liang Li
  -1 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

Sometimes it is necessary to move a portion of a bitmap to another
place within a large bitmap. If the source and destination overlap,
bitmap_copy() cannot work correctly, so a new function is needed to
do this.

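As an illustration of the intended use (a hypothetical helper, not
part of this patch), the typical caller shifts a word-aligned region
of a large bitmap downwards to squeeze out a hole, so the source and
destination ranges overlap:

    #include "qemu/osdep.h"
    #include "qemu/bitmap.h"

    /* Remove the hole [hole_start, hole_end) from a bitmap of
     * total_bits bits by shifting the bits above the hole down so
     * that they directly follow the bits below it.  hole_start and
     * hole_end are assumed to be multiples of BITS_PER_LONG.  The
     * overlap is exactly why memmove() semantics are needed here
     * instead of bitmap_copy()'s memcpy(). */
    static void squeeze_out_hole(unsigned long *bmap, long hole_start,
                                 long hole_end, long total_bits)
    {
        unsigned long *src = bmap + hole_end / BITS_PER_LONG;
        unsigned long *dst = bmap + hole_start / BITS_PER_LONG;
        long nbits = total_bits - hole_end;

        bitmap_move(dst, src, nbits);
        /* Clear the stale bits left behind at the end of the bitmap. */
        bitmap_clear(bmap, hole_start + nbits, hole_end - hole_start);
    }

Patch 8/9 of this series uses the same pattern to squeeze the x86
memory hole below 4GB out of the guest free page bitmap.
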
Signed-off-by: Liang Li <liang.z.li@intel.com>
---
 include/qemu/bitmap.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/include/qemu/bitmap.h b/include/qemu/bitmap.h
index ec5146f..6ac89ca 100644
--- a/include/qemu/bitmap.h
+++ b/include/qemu/bitmap.h
@@ -37,6 +37,7 @@
  * bitmap_set(dst, pos, nbits)			Set specified bit area
  * bitmap_set_atomic(dst, pos, nbits)   Set specified bit area with atomic ops
  * bitmap_clear(dst, pos, nbits)		Clear specified bit area
+ * bitmap_move(dst, src, nbits)                 Move *src to *dst
  * bitmap_test_and_clear_atomic(dst, pos, nbits)    Test and clear area
  * bitmap_find_next_zero_area(buf, len, pos, n, mask)	Find bit free area
  */
@@ -136,6 +137,18 @@ static inline void bitmap_copy(unsigned long *dst, const unsigned long *src,
     }
 }
 
+static inline void bitmap_move(unsigned long *dst, const unsigned long *src,
+                               long nbits)
+{
+    if (small_nbits(nbits)) {
+        unsigned long tmp = *src;
+        *dst = tmp;
+    } else {
+        long len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
+        memmove(dst, src, len);
+    }
+}
+
 static inline int bitmap_and(unsigned long *dst, const unsigned long *src1,
                              const unsigned long *src2, long nbits)
 {
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [QEMU v2 8/9] kvm: Add two new arch specific functions
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-15  2:47   ` Liang Li
  -1 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

Add a new function to get the VM's max PFN and a new function to
filter the holes out of the raw free page bitmap from the guest,
producing a tight free page bitmap. They are implemented for x86 and
should be implemented on the other arches to enable the live
migration optimization there.

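A worked example with hypothetical sizes (the numbers are purely
illustrative, not taken from this patch): for a guest configured
with 3GB below the 4GB hole and 5GB above it,

    /* below_4g_mem_size = 3GB, above_4g_mem_size = 5GB
     *
     *   guest-physical layout: [0, 3GB) RAM, [3GB, 4GB) hole,
     *                          [4GB, 9GB) RAM
     *   get_guest_max_pfn()   -> (4GB + 5GB) >> TARGET_PAGE_BITS
     *
     * tighten_guest_free_page_bmap() moves the bits describing
     * [4GB, 9GB) down so that they start at bit
     * 3GB >> TARGET_PAGE_BITS and clears the now-stale tail; bit n
     * of the result then refers to the same RAM page as bit n of
     * the migration bitmap, which has no hole. */
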
Signed-off-by: Liang Li <liang.z.li@intel.com>
---
 include/sysemu/kvm.h | 18 ++++++++++++++++++
 target-arm/kvm.c     | 14 ++++++++++++++
 target-i386/kvm.c    | 37 +++++++++++++++++++++++++++++++++++++
 target-mips/kvm.c    | 14 ++++++++++++++
 target-ppc/kvm.c     | 14 ++++++++++++++
 target-s390x/kvm.c   | 14 ++++++++++++++
 6 files changed, 111 insertions(+)

diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index ad6f837..fd0956f 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -230,6 +230,24 @@ int kvm_remove_breakpoint(CPUState *cpu, target_ulong addr,
                           target_ulong len, int type);
 void kvm_remove_all_breakpoints(CPUState *cpu);
 int kvm_update_guest_debug(CPUState *cpu, unsigned long reinject_trap);
+
+/**
+ * tighten_guest_free_page_bmap - process the free page bitmap from
+ *         the guest to get a tight page bitmap which does not
+ *         contain holes.
+ * @bmap: raw guest free page bitmap
+ * Returns: a tight guest free page bitmap; the n-th bit in the
+ *         returned bitmap and the n-th bit in the migration bitmap
+ *         should correspond to the same guest RAM page.
+ */
+unsigned long *tighten_guest_free_page_bmap(unsigned long *bmap);
+
+/**
+ * get_guest_max_pfn - get the max pfn of guest
+ * Returns: the max pfn of guest
+ */
+unsigned long get_guest_max_pfn(void);
+
 #ifndef _WIN32
 int kvm_set_signal_mask(CPUState *cpu, const sigset_t *sigset);
 #endif
diff --git a/target-arm/kvm.c b/target-arm/kvm.c
index 5c2bd7a..1cfccb3 100644
--- a/target-arm/kvm.c
+++ b/target-arm/kvm.c
@@ -626,3 +626,17 @@ int kvm_arch_msi_data_to_gsi(uint32_t data)
 {
     return (data - 32) & 0xffff;
 }
+
+unsigned long get_guest_max_pfn(void)
+{
+    /* To be done */
+
+    return 0;
+}
+
+unsigned long *tighten_guest_free_page_bmap(unsigned long *bmap)
+{
+    /* To be done */
+
+    return bmap;
+}
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 9327523..f0503dd 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -3378,3 +3378,40 @@ int kvm_arch_msi_data_to_gsi(uint32_t data)
 {
     abort();
 }
+
+#define _4G (1ULL << 32)
+
+unsigned long get_guest_max_pfn(void)
+{
+    PCMachineState *pcms = PC_MACHINE(current_machine);
+    ram_addr_t above_4g_mem = pcms->above_4g_mem_size;
+    unsigned long max_pfn;
+
+    if (above_4g_mem) {
+        max_pfn = (_4G + above_4g_mem) >> TARGET_PAGE_BITS;
+    } else {
+        max_pfn = pcms->below_4g_mem_size >> TARGET_PAGE_BITS;
+    }
+
+    return max_pfn;
+}
+
+unsigned long *tighten_guest_free_page_bmap(unsigned long *bmap)
+{
+    PCMachineState *pcms = PC_MACHINE(current_machine);
+    ram_addr_t above_4g_mem = pcms->above_4g_mem_size;
+
+    if (above_4g_mem) {
+        unsigned long *src, *dst, len, pos;
+        ram_addr_t below_4g_mem = pcms->below_4g_mem_size;
+        src = bmap + (_4G >> TARGET_PAGE_BITS) / BITS_PER_LONG;
+        dst = bmap + (below_4g_mem >> TARGET_PAGE_BITS) / BITS_PER_LONG;
+        bitmap_move(dst, src, above_4g_mem >> TARGET_PAGE_BITS);
+
+        pos = (above_4g_mem + below_4g_mem) >> TARGET_PAGE_BITS;
+        len = (_4G - below_4g_mem) >> TARGET_PAGE_BITS;
+        bitmap_clear(bmap, pos, len);
+    }
+
+    return bmap;
+}
diff --git a/target-mips/kvm.c b/target-mips/kvm.c
index f3f832d..ba39827 100644
--- a/target-mips/kvm.c
+++ b/target-mips/kvm.c
@@ -1047,3 +1047,17 @@ int kvm_arch_msi_data_to_gsi(uint32_t data)
 {
     abort();
 }
+
+unsigned long get_guest_max_pfn(void)
+{
+    /* To be done */
+
+    return 0;
+}
+
+unsigned long *tighten_guest_free_page_bmap(unsigned long *bmap)
+{
+    /* To be done */
+
+    return bmap;
+}
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 884d564..ff67b3e 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -2630,3 +2630,17 @@ int kvmppc_enable_hwrng(void)
 
     return kvmppc_enable_hcall(kvm_state, H_RANDOM);
 }
+
+unsigned long get_guest_max_pfn(void)
+{
+    /* To be done */
+
+    return 0;
+}
+
+unsigned long *tighten_guest_free_page_bmap(unsigned long *bmap)
+{
+    /* To be done */
+
+    return bmap;
+}
diff --git a/target-s390x/kvm.c b/target-s390x/kvm.c
index 2991bff..2e5c763 100644
--- a/target-s390x/kvm.c
+++ b/target-s390x/kvm.c
@@ -2271,3 +2271,17 @@ int kvm_arch_msi_data_to_gsi(uint32_t data)
 {
     abort();
 }
+
+unsigned long get_guest_max_pfn(void)
+{
+    /* To be done */
+
+    return 0;
+}
+
+unsigned long *tighten_guest_free_page_bmap(unsigned long *bmap)
+{
+    /* To be done */
+
+    return bmap;
+}
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [QEMU v2 9/9] migration: skip free pages during live migration
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-15  2:47   ` Liang Li
  -1 siblings, 0 replies; 28+ messages in thread
From: Liang Li @ 2016-07-15  2:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth, Liang Li

After sending out the request for free pages, the live migration
process starts without waiting for the free page bitmap to become
ready. If the free page bitmap is not ready by the time the first
migration_bitmap_sync() after ram_save_setup() is done, the free
page bitmap is ignored, which means the free pages are not filtered
out in this case.

The current implementation does not work with postcopy; if postcopy
is enabled, the free pages are simply ignored. This will be made to
work later.

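The intended flow, summarized as a rough sketch of the code added
below (timing is best effort):

    /* ram_save_setup():
     *     allocate free_page_bmap and call ram_request_free_page();
     *     ignore_freepage_rsp is cleared only if the balloon device
     *     accepts the request (REQ_START).
     *
     * ram_save_iterate():
     *     while !ignore_freepage_rsp, poll balloon_free_page_ready();
     *     on REQ_DONE, filter the free pages out of the migration
     *     bitmap, resync, and leave the bulk stage.
     *
     * migration_bitmap_sync():
     *     sets ignore_freepage_rsp = true, so a response that only
     *     arrives after the first sync is ignored instead of being
     *     applied to a bitmap that has already been rebuilt. */
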
Signed-off-by: Liang Li <liang.z.li@intel.com>
---
 migration/ram.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index 815bc0e..223d243 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -43,6 +43,8 @@
 #include "trace.h"
 #include "exec/ram_addr.h"
 #include "qemu/rcu_queue.h"
+#include "sysemu/balloon.h"
+#include "sysemu/kvm.h"
 
 #ifdef DEBUG_MIGRATION_RAM
 #define DPRINTF(fmt, ...) \
@@ -228,6 +230,8 @@ static QemuMutex migration_bitmap_mutex;
 static uint64_t migration_dirty_pages;
 static uint32_t last_version;
 static bool ram_bulk_stage;
+static bool ignore_freepage_rsp;
+static uint64_t free_page_req_id;
 
 /* used by the search for pages to send */
 struct PageSearchStatus {
@@ -244,6 +248,7 @@ static struct BitmapRcu {
     struct rcu_head rcu;
     /* Main migration bitmap */
     unsigned long *bmap;
+    unsigned long *free_page_bmap;
     /* bitmap of pages that haven't been sent even once
      * only maintained and used in postcopy at the moment
      * where it's used to send the dirtymap at the start
@@ -636,6 +641,7 @@ static void migration_bitmap_sync(void)
     rcu_read_unlock();
     qemu_mutex_unlock(&migration_bitmap_mutex);
 
+    ignore_freepage_rsp = true;
     trace_migration_bitmap_sync_end(migration_dirty_pages
                                     - num_dirty_pages_init);
     num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
@@ -1409,6 +1415,7 @@ static void migration_bitmap_free(struct BitmapRcu *bmap)
 {
     g_free(bmap->bmap);
     g_free(bmap->unsentmap);
+    g_free(bmap->free_page_bmap);
     g_free(bmap);
 }
 
@@ -1479,6 +1486,77 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
     }
 }
 
+static void filter_out_guest_free_page(unsigned long *free_page_bmap,
+                                       long nbits)
+{
+    long i, page_count = 0, len;
+    unsigned long *bitmap;
+
+    tighten_guest_free_page_bmap(free_page_bmap);
+    qemu_mutex_lock(&migration_bitmap_mutex);
+    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    slow_bitmap_complement(bitmap, free_page_bmap, nbits);
+
+    len = (last_ram_offset() >> TARGET_PAGE_BITS) / BITS_PER_LONG;
+    for (i = 0; i < len; i++) {
+        page_count += hweight_long(bitmap[i]);
+    }
+
+    migration_dirty_pages = page_count;
+    qemu_mutex_unlock(&migration_bitmap_mutex);
+}
+
+static void ram_request_free_page(unsigned long *bmap, unsigned long max_pfn)
+{
+    BalloonReqStatus status;
+
+    free_page_req_id++;
+    status = balloon_get_free_pages(bmap, max_pfn / BITS_PER_BYTE,
+                                    free_page_req_id);
+    if (status == REQ_START) {
+        ignore_freepage_rsp = false;
+    }
+}
+
+static void ram_handle_free_page(void)
+{
+    unsigned long nbits, req_id = 0;
+    RAMBlock *pc_ram_block;
+    BalloonReqStatus status;
+
+    status = balloon_free_page_ready(&req_id);
+    switch (status) {
+    case REQ_DONE:
+        if (req_id != free_page_req_id) {
+            return;
+        }
+        rcu_read_lock();
+        pc_ram_block = QLIST_FIRST_RCU(&ram_list.blocks);
+        nbits = pc_ram_block->used_length >> TARGET_PAGE_BITS;
+        filter_out_guest_free_page(migration_bitmap_rcu->free_page_bmap, nbits);
+        rcu_read_unlock();
+
+        qemu_mutex_lock_iothread();
+        migration_bitmap_sync();
+        qemu_mutex_unlock_iothread();
+        /*
+         * The bulk stage assumes (in migration_bitmap_find_and_reset_dirty)
+         * that every page is dirty; that is no longer true at this point.
+         */
+        ram_bulk_stage = false;
+        last_seen_block = NULL;
+        last_sent_block = NULL;
+        last_offset = 0;
+        break;
+    case REQ_ERROR:
+        ignore_freepage_rsp = true;
+        error_report("failed to get free page");
+        break;
+    default:
+        break;
+    }
+}
+
 /*
  * 'expected' is the value you expect the bitmap mostly to be full
  * of; it won't bother printing lines that are all this value.
@@ -1944,6 +2022,11 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     qemu_mutex_unlock_ramlist();
     qemu_mutex_unlock_iothread();
 
+    if (balloon_free_pages_support() && !migrate_postcopy_ram()) {
+        unsigned long max_pfn = get_guest_max_pfn();
+        migration_bitmap_rcu->free_page_bmap = bitmap_new(max_pfn);
+        ram_request_free_page(migration_bitmap_rcu->free_page_bmap, max_pfn);
+    }
     qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
 
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
@@ -1984,6 +2067,9 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     while ((ret = qemu_file_rate_limit(f)) == 0) {
         int pages;
 
+        if (!ignore_freepage_rsp) {
+            ram_handle_free_page();
+        }
         pages = ram_find_and_save_block(f, false, &bytes_transferred);
         /* no more pages to sent */
         if (pages == 0) {
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* RE: [QEMU v2 0/9] Fast (de)inflating & fast live migration
  2016-07-15  2:47 ` [Qemu-devel] " Liang Li
@ 2016-07-21  8:10   ` Li, Liang Z
  -1 siblings, 0 replies; 28+ messages in thread
From: Li, Liang Z @ 2016-07-21  8:10 UTC (permalink / raw)
  To: Li, Liang Z, qemu-devel
  Cc: mst, pbonzini, quintela, amit.shah, kvm, dgilbert, thuth

Ping ...

Liang

> -----Original Message-----
> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org]
> On Behalf Of Liang Li
> Sent: Friday, July 15, 2016 10:47 AM
> To: qemu-devel@nongnu.org
> Cc: mst@redhat.com; pbonzini@redhat.com; quintela@redhat.com;
> amit.shah@redhat.com; kvm@vger.kernel.org; dgilbert@redhat.com;
> thuth@redhat.com; Li, Liang Z
> Subject: [QEMU v2 0/9] Fast (de)inflating & fast live migration
> 
> This patch set intends to do two optimizations, one is to speed up the
> (de)inflating process of virtio balloon, and another one which is to speed up
> the live migration process. We put them together because both of them are
> required to change the virtio balloon spec.
> 
> The main idea of speeding up the (de)inflating process is to use bitmap to
> send the page information to host instead of the PFNs, to reduce the
> overhead of virtio data transmission, address translation and madvise(). This
> can help to improve the performance by about 85%.
> 
> The idea of speeding up live migration is to skip process guest's free pages in
> the first round of data copy, to reduce needless data processing, this can
> help to save quite a lot of CPU cycles and network bandwidth. We get guest's
> free page information through the virt queue of virtio-balloon, and filter out
> these free pages during live migration. For an idle 8GB guest, this can help to
> shorten the total live migration time from 2Sec to about 500ms in the 10Gbps
> network environment.
> 
> Changes from v1 to v2:
>     * Abandon the patch for dropping page cache.
>     * Get a struct from vq instead of separate variables.
>     * Use two separate APIs to request free pages and query the status.
>     * Changed the virtio balloon interface.
>     * Addressed some of the comments of v1.
> 
> Liang Li (9):
>   virtio-balloon: Remove needless precompiled directive
>   virtio-balloon: update linux head file
>   virtio-balloon: speed up inflating & deflating process
>   virtio-balloon: update linux head file for new feature
>   balloon: get free page info from guest
>   balloon: migrate vq elem to destination
>   bitmap: Add a new bitmap_move function
>   kvm: Add two new arch specific functions
>   migration: skip free pages during live migration
> 
>  balloon.c                                       |  47 +++-
>  hw/virtio/virtio-balloon.c                      | 304 ++++++++++++++++++++++--
>  include/hw/virtio/virtio-balloon.h              |  18 +-
>  include/qemu/bitmap.h                           |  13 +
>  include/standard-headers/linux/virtio_balloon.h |  41 ++++
>  include/sysemu/balloon.h                        |  18 +-
>  include/sysemu/kvm.h                            |  18 ++
>  migration/ram.c                                 |  86 +++++++
>  target-arm/kvm.c                                |  14 ++
>  target-i386/kvm.c                               |  37 +++
>  target-mips/kvm.c                               |  14 ++
>  target-ppc/kvm.c                                |  14 ++
>  target-s390x/kvm.c                              |  14 ++
>  13 files changed, 608 insertions(+), 30 deletions(-)
> 
> --
> 1.9.1
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in the body of
> a message to majordomo@vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [QEMU v2 1/9] virtio-balloon: Remove needless precompiled directive
  2016-07-15  2:47   ` [Qemu-devel] " Liang Li
@ 2016-08-01 10:42     ` Dr. David Alan Gilbert
  -1 siblings, 0 replies; 28+ messages in thread
From: Dr. David Alan Gilbert @ 2016-08-01 10:42 UTC (permalink / raw)
  To: Liang Li; +Cc: qemu-devel, mst, pbonzini, quintela, amit.shah, kvm, thuth

* Liang Li (liang.z.li@intel.com) wrote:
> Since there is a wrapper around madvise(), the virtio-balloon
> code is able to work without the precompiled directive, so the
> directive can be removed.
> 
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> Suggested-by: Thomas Huth <thuth@redhat.com>

This one could be posted separately.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  hw/virtio/virtio-balloon.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
> index 1a22e6d..62931b3 100644
> --- a/hw/virtio/virtio-balloon.c
> +++ b/hw/virtio/virtio-balloon.c
> @@ -34,13 +34,11 @@
>  
>  static void balloon_page(void *addr, int deflate)
>  {
> -#if defined(__linux__)
>      if (!qemu_balloon_is_inhibited() && (!kvm_enabled() ||
>                                           kvm_has_sync_mmu())) {
>          qemu_madvise(addr, BALLOON_PAGE_SIZE,
>                  deflate ? QEMU_MADV_WILLNEED : QEMU_MADV_DONTNEED);
>      }
> -#endif
>  }
>  
>  static const char *balloon_stat_names[] = {
> -- 
> 1.9.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [QEMU v2 7/9] bitmap: Add a new bitmap_move function
  2016-07-15  2:47   ` [Qemu-devel] " Liang Li
@ 2016-08-01 11:25     ` Dr. David Alan Gilbert
  -1 siblings, 0 replies; 28+ messages in thread
From: Dr. David Alan Gilbert @ 2016-08-01 11:25 UTC (permalink / raw)
  To: Liang Li; +Cc: qemu-devel, mst, pbonzini, quintela, amit.shah, kvm, thuth

* Liang Li (liang.z.li@intel.com) wrote:
> Sometimes it is necessary to move a portion of a bitmap to another
> place within a large bitmap. If the source and destination overlap,
> bitmap_copy() cannot work correctly, so a new function is needed to
> do this.
> 
> Signed-off-by: Liang Li <liang.z.li@intel.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  include/qemu/bitmap.h | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/include/qemu/bitmap.h b/include/qemu/bitmap.h
> index ec5146f..6ac89ca 100644
> --- a/include/qemu/bitmap.h
> +++ b/include/qemu/bitmap.h
> @@ -37,6 +37,7 @@
>   * bitmap_set(dst, pos, nbits)			Set specified bit area
>   * bitmap_set_atomic(dst, pos, nbits)   Set specified bit area with atomic ops
>   * bitmap_clear(dst, pos, nbits)		Clear specified bit area
> + * bitmap_move(dst, src, nbits)                 Move *src to *dst
>   * bitmap_test_and_clear_atomic(dst, pos, nbits)    Test and clear area
>   * bitmap_find_next_zero_area(buf, len, pos, n, mask)	Find bit free area
>   */
> @@ -136,6 +137,18 @@ static inline void bitmap_copy(unsigned long *dst, const unsigned long *src,
>      }
>  }
>  
> +static inline void bitmap_move(unsigned long *dst, const unsigned long *src,
> +                               long nbits)
> +{
> +    if (small_nbits(nbits)) {
> +        unsigned long tmp = *src;
> +        *dst = tmp;
> +    } else {
> +        long len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
> +        memmove(dst, src, len);
> +    }
> +}
> +
>  static inline int bitmap_and(unsigned long *dst, const unsigned long *src1,
>                               const unsigned long *src2, long nbits)
>  {
> -- 
> 1.9.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [QEMU v2 1/9] virtio-balloon: Remove needless precompiled directive
  2016-08-01 10:42     ` [Qemu-devel] " Dr. David Alan Gilbert
@ 2016-08-01 23:48       ` Li, Liang Z
  -1 siblings, 0 replies; 28+ messages in thread
From: Li, Liang Z @ 2016-08-01 23:48 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: thuth, kvm, quintela, mst, qemu-devel, amit.shah, pbonzini

> * Liang Li (liang.z.li@intel.com) wrote:
> > Since there in wrapper around madvise(), the virtio-balloon code is
> > able to work without the precompiled directive, the directive can be
> > removed.
> >
> > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > Suggested-by: Thomas Huth <thuth@redhat.com>
> 
> This one could be posted separately.
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> 

I will send a separate one. Thanks! David. 

Liang

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2016-08-01 23:48 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-15  2:47 [QEMU v2 0/9] Fast (de)inflating & fast live migration Liang Li
2016-07-15  2:47 ` [Qemu-devel] " Liang Li
2016-07-15  2:47 ` [QEMU v2 1/9] virtio-balloon: Remove needless precompiled directive Liang Li
2016-07-15  2:47   ` [Qemu-devel] " Liang Li
2016-08-01 10:42   ` Dr. David Alan Gilbert
2016-08-01 10:42     ` [Qemu-devel] " Dr. David Alan Gilbert
2016-08-01 23:48     ` Li, Liang Z
2016-08-01 23:48       ` [Qemu-devel] " Li, Liang Z
2016-07-15  2:47 ` [QEMU v2 2/9] virtio-balloon: update linux head file Liang Li
2016-07-15  2:47   ` [Qemu-devel] " Liang Li
2016-07-15  2:47 ` [QEMU v2 3/9] virtio-balloon: speed up inflating & deflating process Liang Li
2016-07-15  2:47   ` [Qemu-devel] " Liang Li
2016-07-15  2:47 ` [QEMU v2 4/9] virtio-balloon: update linux head file for new feature Liang Li
2016-07-15  2:47   ` [Qemu-devel] " Liang Li
2016-07-15  2:47 ` [QEMU v2 5/9] balloon: get free page info from guest Liang Li
2016-07-15  2:47   ` [Qemu-devel] " Liang Li
2016-07-15  2:47 ` [QEMU v2 6/9] balloon: migrate vq elem to destination Liang Li
2016-07-15  2:47   ` [Qemu-devel] " Liang Li
2016-07-15  2:47 ` [QEMU v2 7/9] bitmap: Add a new bitmap_move function Liang Li
2016-07-15  2:47   ` [Qemu-devel] " Liang Li
2016-08-01 11:25   ` Dr. David Alan Gilbert
2016-08-01 11:25     ` [Qemu-devel] " Dr. David Alan Gilbert
2016-07-15  2:47 ` [QEMU v2 8/9] kvm: Add two new arch specific functions Liang Li
2016-07-15  2:47   ` [Qemu-devel] " Liang Li
2016-07-15  2:47 ` [QEMU v2 9/9] migration: skip free pages during live migration Liang Li
2016-07-15  2:47   ` [Qemu-devel] " Liang Li
2016-07-21  8:10 ` [QEMU v2 0/9] Fast (de)inflating & fast " Li, Liang Z
2016-07-21  8:10   ` [Qemu-devel] " Li, Liang Z
