* [PATCH v12 0/6] support dirtyrate at the granularity of vcpu
@ 2021-06-29 16:01 huangy81
  2021-06-29 16:01 ` [PATCH v12 1/6] KVM: introduce dirty_pages and kvm_dirty_ring_enabled huangy81
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: huangy81 @ 2021-06-29 16:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Hyman, Dr. David Alan Gilbert,
	Peter Xu, Chuan Zheng, Paolo Bonzini

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

v12
- adjust the order of the dirty rate calculation:
  call memory_global_dirty_log_sync before doing the calculation,
  as described in the v11 changelog.

v11
- rebase on master

- call memory_global_dirty_log_sync before stopping the dirty log
  when calculating the dirty rate. this makes the result more
  accurate: kvm-reaper may be sleeping while there are only a few
  outstanding dirty pages, and those pages would otherwise be missed
  during the calculation. doing the dirty log sync manually before
  the calculation picks up pages dirtied as recently as possible.
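
  a minimal sketch of the intended ordering (illustration only; this
  is roughly what the dirty rate code in patch 6 does, with the flag
  taken from patch 2):

      /* take the BQL, flush the outstanding dirty info, then stop */
      qemu_mutex_lock_iothread();
      memory_global_dirty_log_sync();
      memory_global_dirty_log_stop(GLOBAL_DIRTY_DIRTY_RATE);
      qemu_mutex_unlock_iothread();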

v10
- rebase on master

- pre-check whether the dirty log has been started before calling
  memory_global_dirty_log_stop in ram_save_cleanup, since that
  function stops the dirty log unconditionally; the added if
  statement ensures memory_global_dirty_log_start/stop are used in
  pairs.

- adapt the memory_global_dirty_log_start/stop calls in the xen
  virtualization code so that qemu keeps working the old way there.

v9
- rebase on master

- make global_dirty_tracking a bitmask:
  pass the casted "flags" to the callback in the right way, and drop
  the malloc/free step.

- make bitmask comments more accurate

please review, thanks.

v8
- drop the "none" type of DirtyRateMeasureMode

- make global_dirty_tracking a bitmask:
  1. code cleanup: pass the cast "flags" as an input parameter to
     memory_global_dirty_log_stop, and drop the free step.
  2. squash commit "rename global_dirty_log to global_dirty_tracking"
     into commit "make global_dirty_tracking a bitmask"
  3. move the "GLOBAL_DIRTY_MASK" macro next to the bit definitions.

- drop the "cleanup_dirtyrate_stat" in commit
  "move init step of calculation to main thread" so that each commit
  keeps the old way working.

- add dirty rate unit "MB/s" in the output of hmp

please review; this may be the last version of this patchset :)
thanks for Peter's patient instructions and reviews.

Hyman Huang(黄勇)

v7
- fix the code style problem, sorry about that

v6:
- pick up commit "KVM: introduce dirty_pages and kvm_dirty_ring_enabled"
  which was dropped in version 5

v5:
- rename global_dirty_log to global_dirty_tracking on Peter's advice

- make global_dirty_tracking a bitmask:
  1. add an assert statement to ensure dirty tracking cannot be
     started repeatedly.
  2. add an assert statement to ensure dirty tracking cannot be
     stopped without having been started.

- protect the dirty rate stat info:
  1. drop the mutex protecting the dirty rate that was introduced in
     version 4
  2. rework the code block in query_dirty_rate_info so that the
     requirement of "safe racing" on the dirty rate stat is met

- make the helper function "record_dirtypages" inline and change
  the global variable dirty_pages to a local variable

- free the DirtyRateVcpuList to avoid a memory leak

please review, thanks a lot.

v4:
- make global_dirty_log a bitmask:
  1. add comments about the dirty log bitmask
  2. use assert statements to check the validity of flags
  3. add a trace event to log bitmask changes

- introduce a mode option to select which calculation method should
  be used, and export the mode option in the last commit

- split the cleanup and init of the dirty rate stat and move them
  into the main thread

- change the fields of DirtyPageRecord to uint64_t so that the
  increase in dirty pages can be calculated with the formula Peter
  suggested: dirty pages = end_pages - start_pages
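
  a worked example of that formula, assuming a 4KiB TARGET_PAGE_SIZE
  and a 1 second calculation window (the numbers are made up):

      increased dirty pages = end_pages - start_pages
                            = 139264 - 8192 = 131072 pages
      dirtied memory        = 131072 * 4KiB = 512 MB
      dirty rate            = 512 MB / 1 s  = 512 MB/s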

- introduce a mutex to protect the dirty rate stat info

- adjust the order of registering the thread

- drop the memory free callback

this version modifies some code based on Peter's advice, reference:
https://lore.kernel.org/qemu-devel/YL5nNYXmrqMlXF3v@t490s/

thanks again.

v3:
- pick up "migration/dirtyrate: make sample page count configurable" to
  make patchset apply master correctly

v2:
- rebase to "migration/dirtyrate: make sample page count configurable"

- rename "vcpu" to "per_vcpu" to show the per-vcpu method

- squash patches 5 and 6 into a single one, and squash patches 1 and 2
  as well

- pick up "hmp: Add "calc_dirty_rate" and "info dirty_rate" cmds"

- make global_dirty_log a bitmask to make sure migration and dirty
  rate measurement cannot interfere with each other

- add a memory free callback to prevent memory leaks

the biggest difference between v2 and v1 is that global_dirty_log is
made a bitmask. the reason is that dirty rate measurement may start
or stop dirty logging during its calculation, which conflicts with
migration: stopping the dirty log would make migration lose dirty
pages, and that would be a problem.

making global_dirty_log a bitmask lets both migration and dirty rate
measurement work fine. GLOBAL_DIRTY_MIGRATION and
GLOBAL_DIRTY_DIRTY_RATE are introduced to distinguish what the
current dirty log is used for, migration or dirty rate.

all existing references to global_dirty_log can stay untouched,
because any bit set there still means that global dirty logging is
enabled.
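
as an illustration only (using the start/stop helpers as they look
after this series), the bitmask lets the two users overlap without
stepping on each other:

    /* migration turns dirty tracking on */
    memory_global_dirty_log_start(GLOBAL_DIRTY_MIGRATION);

    /* a dirty rate measurement starts and finishes in the middle */
    memory_global_dirty_log_start(GLOBAL_DIRTY_DIRTY_RATE);
    memory_global_dirty_log_stop(GLOBAL_DIRTY_DIRTY_RATE);

    /*
     * the GLOBAL_DIRTY_MIGRATION bit is still set, so migration keeps
     * receiving dirty page information; with the old boolean, the
     * stop above would have switched logging off for both users.
     */
    memory_global_dirty_log_stop(GLOBAL_DIRTY_MIGRATION);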

Please review, thanks !

v1:

Since the Dirty Ring feature on the QEMU side has been merged
recently, how to use this feature is under consideration.

In the migration scenario, it is valuable to provide a more accurate
interface to track dirty memory than the existing one, so that the
upper layer application can make a wise decision, or whatever. More
importantly, dirty rate info at the granularity of a vcpu makes it
possible to help migration converge by imposing restrictions on
vcpus. With Dirty Ring, the dirty rate can be calculated efficiently
and cheaply.

The old interface is implemented by sampling pages; it consumes cpu
resources, and the larger the guest memory becomes, the more cpu it
consumes - in other words, it is hard to scale. The new interface has
no such drawback.

Please review, thanks !

Best Regards !

Hyman Huang(黄勇) (6):
  KVM: introduce dirty_pages and kvm_dirty_ring_enabled
  memory: make global_dirty_tracking a bitmask
  migration/dirtyrate: introduce struct and adjust DirtyRateStat
  migration/dirtyrate: adjust order of registering thread
  migration/dirtyrate: move init step of calculation to main thread
  migration/dirtyrate: implement dirty-ring dirtyrate calculation

 accel/kvm/kvm-all.c     |   7 ++
 hmp-commands.hx         |   7 +-
 hw/i386/xen/xen-hvm.c   |   4 +-
 include/exec/memory.h   |  20 +++-
 include/exec/ram_addr.h |   4 +-
 include/hw/core/cpu.h   |   1 +
 include/sysemu/kvm.h    |   1 +
 migration/dirtyrate.c   | 266 +++++++++++++++++++++++++++++++++++++++++-------
 migration/dirtyrate.h   |  19 +++-
 migration/ram.c         |  15 ++-
 migration/trace-events  |   2 +
 qapi/migration.json     |  46 ++++++++-
 softmmu/memory.c        |  32 ++++--
 softmmu/trace-events    |   1 +
 14 files changed, 361 insertions(+), 64 deletions(-)

-- 
1.8.3.1




* [PATCH v12 1/6] KVM: introduce dirty_pages and kvm_dirty_ring_enabled
  2021-06-29 16:01 [PATCH v12 0/6] support dirtyrate at the granularity of vcpu huangy81
@ 2021-06-29 16:01 ` huangy81
  2021-06-29 16:01 ` [PATCH v12 2/6] memory: make global_dirty_tracking a bitmask huangy81
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: huangy81 @ 2021-06-29 16:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Hyman, Dr. David Alan Gilbert,
	Peter Xu, Chuan Zheng, Paolo Bonzini

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

dirty_pages is used to calculate the dirty rate via the dirty ring:
when the ring is enabled, kvm-reaper increments a vcpu's dirty page
count after the gfns it dirtied have been collected.

kvm_dirty_ring_enabled tells whether kvm-reaper is working; the
dirty rate thread can use it to check whether the measurement can be
based on the dirty ring feature.
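
a minimal sketch of how a consumer is expected to use the two (this
mirrors what the dirty rate code later in the series does; it is not
part of this patch):

    if (kvm_dirty_ring_enabled()) {
        CPUState *cpu;
        uint64_t pages = 0;

        CPU_FOREACH(cpu) {
            /* per-vcpu counter bumped by kvm_dirty_ring_reap_one() */
            pages += cpu->dirty_pages;
        }
        /*
         * sampling this sum at the start and end of a time window
         * and subtracting gives the pages dirtied in between.
         */
    }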

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Message-Id: <fee5fb2ab17ec2159405fc54a3cff8e02322f816.1624040308.git.huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 accel/kvm/kvm-all.c   | 7 +++++++
 include/hw/core/cpu.h | 1 +
 include/sysemu/kvm.h  | 1 +
 3 files changed, 9 insertions(+)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index e5b10dd..e0e88a2 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -469,6 +469,7 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
     cpu->kvm_fd = ret;
     cpu->kvm_state = s;
     cpu->vcpu_dirty = true;
+    cpu->dirty_pages = 0;
 
     mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
     if (mmap_size < 0) {
@@ -743,6 +744,7 @@ static uint32_t kvm_dirty_ring_reap_one(KVMState *s, CPUState *cpu)
         count++;
     }
     cpu->kvm_fetch_index = fetch;
+    cpu->dirty_pages += count;
 
     return count;
 }
@@ -2293,6 +2295,11 @@ bool kvm_vcpu_id_is_valid(int vcpu_id)
     return vcpu_id >= 0 && vcpu_id < kvm_max_vcpu_id(s);
 }
 
+bool kvm_dirty_ring_enabled(void)
+{
+    return kvm_state->kvm_dirty_ring_size ? true : false;
+}
+
 static int kvm_init(MachineState *ms)
 {
     MachineClass *mc = MACHINE_GET_CLASS(ms);
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 4e0ea68..80fcb1d 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -374,6 +374,7 @@ struct CPUState {
     struct kvm_run *kvm_run;
     struct kvm_dirty_gfn *kvm_dirty_gfns;
     uint32_t kvm_fetch_index;
+    uint64_t dirty_pages;
 
     /* Used for events with 'vcpu' and *without* the 'disabled' properties */
     DECLARE_BITMAP(trace_dstate_delayed, CPU_TRACE_DSTATE_MAX_EVENTS);
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index a1ab1ee..7b22aeb 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -547,4 +547,5 @@ bool kvm_cpu_check_are_resettable(void);
 
 bool kvm_arch_cpu_check_are_resettable(void);
 
+bool kvm_dirty_ring_enabled(void);
 #endif
-- 
1.8.3.1




* [PATCH v12 2/6] memory: make global_dirty_tracking a bitmask
  2021-06-29 16:01 [PATCH v12 0/6] support dirtyrate at the granularity of vcpu huangy81
  2021-06-29 16:01 ` [PATCH v12 1/6] KVM: introduce dirty_pages and kvm_dirty_ring_enabled huangy81
@ 2021-06-29 16:01 ` huangy81
  2021-06-29 16:01 ` [PATCH v12 3/6] migration/dirtyrate: introduce struct and adjust DirtyRateStat huangy81
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: huangy81 @ 2021-06-29 16:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Hyman, Dr. David Alan Gilbert,
	Peter Xu, Chuan Zheng, Paolo Bonzini

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

since the dirty ring has been introduced, there are two methods to
track the dirty pages of a vm. "logging" hints at only one of them,
so renaming global_dirty_log to global_dirty_tracking makes the
description more accurate.

dirty rate measurement may start or stop dirty tracking during its
calculation. this conflicts with migration, because stopping dirty
tracking would make migration lose dirty pages, and that would be a
problem.

making global_dirty_tracking a bitmask lets both migration and dirty
rate measurement work fine. GLOBAL_DIRTY_MIGRATION and
GLOBAL_DIRTY_DIRTY_RATE are introduced to distinguish what the
current dirty tracking is used for, migration or dirty rate.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Message-Id: <9c9388657cfa0301bd2c1cfa36e7cf6da4aeca19.1624040308.git.huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 hw/i386/xen/xen-hvm.c   |  4 ++--
 include/exec/memory.h   | 20 +++++++++++++++++---
 include/exec/ram_addr.h |  4 ++--
 migration/ram.c         | 15 +++++++++++----
 softmmu/memory.c        | 32 +++++++++++++++++++++-----------
 softmmu/trace-events    |  1 +
 6 files changed, 54 insertions(+), 22 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 9b43277..d836d48 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -1611,8 +1611,8 @@ void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
 void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
 {
     if (enable) {
-        memory_global_dirty_log_start();
+        memory_global_dirty_log_start(GLOBAL_DIRTY_MIGRATION);
     } else {
-        memory_global_dirty_log_stop();
+        memory_global_dirty_log_stop(GLOBAL_DIRTY_MIGRATION);
     }
 }
diff --git a/include/exec/memory.h b/include/exec/memory.h
index b116f7c..0ad0fe0 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -55,7 +55,17 @@ static inline void fuzz_dma_read_cb(size_t addr,
 }
 #endif
 
-extern bool global_dirty_log;
+/* Possible bits for global_dirty_log_{start|stop} */
+
+/* Dirty tracking enabled because migration is running */
+#define GLOBAL_DIRTY_MIGRATION  (1U << 0)
+
+/* Dirty tracking enabled because measuring dirty rate */
+#define GLOBAL_DIRTY_DIRTY_RATE (1U << 1)
+
+#define GLOBAL_DIRTY_MASK  (0x3)
+
+extern unsigned int global_dirty_tracking;
 
 typedef struct MemoryRegionOps MemoryRegionOps;
 
@@ -2105,13 +2115,17 @@ void memory_listener_unregister(MemoryListener *listener);
 
 /**
  * memory_global_dirty_log_start: begin dirty logging for all regions
+ *
+ * @flags: purpose of starting dirty log, migration or dirty rate
  */
-void memory_global_dirty_log_start(void);
+void memory_global_dirty_log_start(unsigned int flags);
 
 /**
  * memory_global_dirty_log_stop: end dirty logging for all regions
+ *
+ * @flags: purpose of stopping dirty log, migration or dirty rate
  */
-void memory_global_dirty_log_stop(void);
+void memory_global_dirty_log_stop(unsigned int flags);
 
 void mtree_info(bool flatview, bool dispatch_tree, bool owner, bool disabled);
 
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 551876b..45c9132 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -369,7 +369,7 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
 
                     qatomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);
 
-                    if (global_dirty_log) {
+                    if (global_dirty_tracking) {
                         qatomic_or(
                                 &blocks[DIRTY_MEMORY_MIGRATION][idx][offset],
                                 temp);
@@ -392,7 +392,7 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
     } else {
         uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;
 
-        if (!global_dirty_log) {
+        if (!global_dirty_tracking) {
             clients &= ~(1 << DIRTY_MEMORY_MIGRATION);
         }
 
diff --git a/migration/ram.c b/migration/ram.c
index 723af67..33d201f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2190,7 +2190,14 @@ static void ram_save_cleanup(void *opaque)
         /* caller have hold iothread lock or is in a bh, so there is
          * no writing race against the migration bitmap
          */
-        memory_global_dirty_log_stop();
+        if (global_dirty_tracking & GLOBAL_DIRTY_MIGRATION) {
+            /*
+             * do not stop dirty log without starting it, since
+             * memory_global_dirty_log_stop will assert that
+             * memory_global_dirty_log_start/stop used in pairs
+             */
+            memory_global_dirty_log_stop(GLOBAL_DIRTY_MIGRATION);
+        }
     }
 
     RAMBLOCK_FOREACH_NOT_IGNORED(block) {
@@ -2652,7 +2659,7 @@ static void ram_init_bitmaps(RAMState *rs)
         ram_list_init_bitmaps();
         /* We don't use dirty log with background snapshots */
         if (!migrate_background_snapshot()) {
-            memory_global_dirty_log_start();
+            memory_global_dirty_log_start(GLOBAL_DIRTY_MIGRATION);
             migration_bitmap_sync_precopy(rs);
         }
     }
@@ -3392,7 +3399,7 @@ void colo_incoming_start_dirty_log(void)
             /* Discard this dirty bitmap record */
             bitmap_zero(block->bmap, block->max_length >> TARGET_PAGE_BITS);
         }
-        memory_global_dirty_log_start();
+        memory_global_dirty_log_start(GLOBAL_DIRTY_MIGRATION);
     }
     ram_state->migration_dirty_pages = 0;
     qemu_mutex_unlock_ramlist();
@@ -3404,7 +3411,7 @@ void colo_release_ram_cache(void)
 {
     RAMBlock *block;
 
-    memory_global_dirty_log_stop();
+    memory_global_dirty_log_stop(GLOBAL_DIRTY_MIGRATION);
     RAMBLOCK_FOREACH_NOT_IGNORED(block) {
         g_free(block->bmap);
         block->bmap = NULL;
diff --git a/softmmu/memory.c b/softmmu/memory.c
index f016151..9ddfb61 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -39,7 +39,7 @@
 static unsigned memory_region_transaction_depth;
 static bool memory_region_update_pending;
 static bool ioeventfd_update_pending;
-bool global_dirty_log;
+unsigned int global_dirty_tracking;
 
 static QTAILQ_HEAD(, MemoryListener) memory_listeners
     = QTAILQ_HEAD_INITIALIZER(memory_listeners);
@@ -1812,7 +1812,7 @@ uint8_t memory_region_get_dirty_log_mask(MemoryRegion *mr)
     uint8_t mask = mr->dirty_log_mask;
     RAMBlock *rb = mr->ram_block;
 
-    if (global_dirty_log && ((rb && qemu_ram_is_migratable(rb)) ||
+    if (global_dirty_tracking && ((rb && qemu_ram_is_migratable(rb)) ||
                              memory_region_is_iommu(mr))) {
         mask |= (1 << DIRTY_MEMORY_MIGRATION);
     }
@@ -2658,14 +2658,18 @@ void memory_global_after_dirty_log_sync(void)
 
 static VMChangeStateEntry *vmstate_change;
 
-void memory_global_dirty_log_start(void)
+void memory_global_dirty_log_start(unsigned int flags)
 {
     if (vmstate_change) {
         qemu_del_vm_change_state_handler(vmstate_change);
         vmstate_change = NULL;
     }
 
-    global_dirty_log = true;
+    assert(flags && !(flags & (~GLOBAL_DIRTY_MASK)));
+    assert(!(global_dirty_tracking & flags));
+    global_dirty_tracking |= flags;
+
+    trace_global_dirty_changed(global_dirty_tracking);
 
     MEMORY_LISTENER_CALL_GLOBAL(log_global_start, Forward);
 
@@ -2675,9 +2679,13 @@ void memory_global_dirty_log_start(void)
     memory_region_transaction_commit();
 }
 
-static void memory_global_dirty_log_do_stop(void)
+static void memory_global_dirty_log_do_stop(unsigned int flags)
 {
-    global_dirty_log = false;
+    assert(flags && !(flags & (~GLOBAL_DIRTY_MASK)));
+    assert((global_dirty_tracking & flags) == flags);
+    global_dirty_tracking &= ~flags;
+
+    trace_global_dirty_changed(global_dirty_tracking);
 
     /* Refresh DIRTY_MEMORY_MIGRATION bit.  */
     memory_region_transaction_begin();
@@ -2690,8 +2698,9 @@ static void memory_global_dirty_log_do_stop(void)
 static void memory_vm_change_state_handler(void *opaque, bool running,
                                            RunState state)
 {
+    unsigned int flags = (unsigned int)(uintptr_t)opaque;
     if (running) {
-        memory_global_dirty_log_do_stop();
+        memory_global_dirty_log_do_stop(flags);
 
         if (vmstate_change) {
             qemu_del_vm_change_state_handler(vmstate_change);
@@ -2700,18 +2709,19 @@ static void memory_vm_change_state_handler(void *opaque, bool running,
     }
 }
 
-void memory_global_dirty_log_stop(void)
+void memory_global_dirty_log_stop(unsigned int flags)
 {
     if (!runstate_is_running()) {
         if (vmstate_change) {
             return;
         }
         vmstate_change = qemu_add_vm_change_state_handler(
-                                memory_vm_change_state_handler, NULL);
+                                memory_vm_change_state_handler,
+                                (void *)(uintptr_t)flags);
         return;
     }
 
-    memory_global_dirty_log_do_stop();
+    memory_global_dirty_log_do_stop(flags);
 }
 
 static void listener_add_address_space(MemoryListener *listener,
@@ -2723,7 +2733,7 @@ static void listener_add_address_space(MemoryListener *listener,
     if (listener->begin) {
         listener->begin(listener);
     }
-    if (global_dirty_log) {
+    if (global_dirty_tracking) {
         if (listener->log_global_start) {
             listener->log_global_start(listener);
         }
diff --git a/softmmu/trace-events b/softmmu/trace-events
index d18ac41..f95ea3d 100644
--- a/softmmu/trace-events
+++ b/softmmu/trace-events
@@ -18,6 +18,7 @@ memory_region_ram_device_write(int cpu_index, void *mr, uint64_t addr, uint64_t
 flatview_new(void *view, void *root) "%p (root %p)"
 flatview_destroy(void *view, void *root) "%p (root %p)"
 flatview_destroy_rcu(void *view, void *root) "%p (root %p)"
+global_dirty_changed(unsigned int bitmask) "bitmask 0x%"PRIx32
 
 # softmmu.c
 vm_stop_flush_all(int ret) "ret %d"
-- 
1.8.3.1




* [PATCH v12 3/6] migration/dirtyrate: introduce struct and adjust DirtyRateStat
  2021-06-29 16:01 [PATCH v12 0/6] support dirtyrate at the granularity of vcpu huangy81
  2021-06-29 16:01 ` [PATCH v12 1/6] KVM: introduce dirty_pages and kvm_dirty_ring_enabled huangy81
  2021-06-29 16:01 ` [PATCH v12 2/6] memory: make global_dirty_tracking a bitmask huangy81
@ 2021-06-29 16:01 ` huangy81
  2021-06-29 16:01 ` [PATCH v12 4/6] migration/dirtyrate: adjust order of registering thread huangy81
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: huangy81 @ 2021-06-29 16:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Hyman, Dr. David Alan Gilbert,
	Peter Xu, Chuan Zheng, Paolo Bonzini

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

introduce "DirtyRateMeasureMode" to specify what method should be
used to calculate dirty rate, introduce "DirtyRateVcpu" to store
dirty rate fore each vcpu.

use union to store stat data of specific mode
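
a sketch of how the union is meant to be read back (illustration
only): exactly one member is filled per measurement, so readers check
the mode first and never touch the other member:

    if (config.mode == DIRTY_RATE_MEASURE_MODE_DIRTY_RING) {
        /* DirtyStat.dirty_ring.{nvcpu,rates} are valid */
        for (int i = 0; i < DirtyStat.dirty_ring.nvcpu; i++) {
            /* consume DirtyStat.dirty_ring.rates[i] here */
        }
    } else {
        /* DirtyStat.page_sampling.* is valid */
    }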

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Message-Id: <661c98c40f40e163aa58334337af8f3ddf41316a.1624040308.git.huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/dirtyrate.c | 48 ++++++++++++++++++++++++++++--------------------
 migration/dirtyrate.h | 19 ++++++++++++++++---
 qapi/migration.json   | 30 ++++++++++++++++++++++++++++++
 3 files changed, 74 insertions(+), 23 deletions(-)

diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
index 320c56b..e0a27a9 100644
--- a/migration/dirtyrate.c
+++ b/migration/dirtyrate.c
@@ -88,33 +88,44 @@ static struct DirtyRateInfo *query_dirty_rate_info(void)
     return info;
 }
 
-static void init_dirtyrate_stat(int64_t start_time, int64_t calc_time,
-                                uint64_t sample_pages)
+static void init_dirtyrate_stat(int64_t start_time,
+                                struct DirtyRateConfig config)
 {
-    DirtyStat.total_dirty_samples = 0;
-    DirtyStat.total_sample_count = 0;
-    DirtyStat.total_block_mem_MB = 0;
     DirtyStat.dirty_rate = -1;
     DirtyStat.start_time = start_time;
-    DirtyStat.calc_time = calc_time;
-    DirtyStat.sample_pages = sample_pages;
+    DirtyStat.calc_time = config.sample_period_seconds;
+    DirtyStat.sample_pages = config.sample_pages_per_gigabytes;
+
+    switch (config.mode) {
+    case DIRTY_RATE_MEASURE_MODE_PAGE_SAMPLING:
+        DirtyStat.page_sampling.total_dirty_samples = 0;
+        DirtyStat.page_sampling.total_sample_count = 0;
+        DirtyStat.page_sampling.total_block_mem_MB = 0;
+        break;
+    case DIRTY_RATE_MEASURE_MODE_DIRTY_RING:
+        DirtyStat.dirty_ring.nvcpu = -1;
+        DirtyStat.dirty_ring.rates = NULL;
+        break;
+    default:
+        break;
+    }
 }
 
 static void update_dirtyrate_stat(struct RamblockDirtyInfo *info)
 {
-    DirtyStat.total_dirty_samples += info->sample_dirty_count;
-    DirtyStat.total_sample_count += info->sample_pages_count;
+    DirtyStat.page_sampling.total_dirty_samples += info->sample_dirty_count;
+    DirtyStat.page_sampling.total_sample_count += info->sample_pages_count;
     /* size of total pages in MB */
-    DirtyStat.total_block_mem_MB += (info->ramblock_pages *
-                                     TARGET_PAGE_SIZE) >> 20;
+    DirtyStat.page_sampling.total_block_mem_MB += (info->ramblock_pages *
+                                                   TARGET_PAGE_SIZE) >> 20;
 }
 
 static void update_dirtyrate(uint64_t msec)
 {
     uint64_t dirtyrate;
-    uint64_t total_dirty_samples = DirtyStat.total_dirty_samples;
-    uint64_t total_sample_count = DirtyStat.total_sample_count;
-    uint64_t total_block_mem_MB = DirtyStat.total_block_mem_MB;
+    uint64_t total_dirty_samples = DirtyStat.page_sampling.total_dirty_samples;
+    uint64_t total_sample_count = DirtyStat.page_sampling.total_sample_count;
+    uint64_t total_block_mem_MB = DirtyStat.page_sampling.total_block_mem_MB;
 
     dirtyrate = total_dirty_samples * total_block_mem_MB *
                 1000 / (total_sample_count * msec);
@@ -327,7 +338,7 @@ static bool compare_page_hash_info(struct RamblockDirtyInfo *info,
         update_dirtyrate_stat(block_dinfo);
     }
 
-    if (DirtyStat.total_sample_count == 0) {
+    if (DirtyStat.page_sampling.total_sample_count == 0) {
         return false;
     }
 
@@ -372,8 +383,6 @@ void *get_dirtyrate_thread(void *arg)
     struct DirtyRateConfig config = *(struct DirtyRateConfig *)arg;
     int ret;
     int64_t start_time;
-    int64_t calc_time;
-    uint64_t sample_pages;
 
     ret = dirtyrate_set_state(&CalculatingState, DIRTY_RATE_STATUS_UNSTARTED,
                               DIRTY_RATE_STATUS_MEASURING);
@@ -383,9 +392,7 @@ void *get_dirtyrate_thread(void *arg)
     }
 
     start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME) / 1000;
-    calc_time = config.sample_period_seconds;
-    sample_pages = config.sample_pages_per_gigabytes;
-    init_dirtyrate_stat(start_time, calc_time, sample_pages);
+    init_dirtyrate_stat(start_time, config);
 
     calculate_dirtyrate(config);
 
@@ -442,6 +449,7 @@ void qmp_calc_dirty_rate(int64_t calc_time, bool has_sample_pages,
 
     config.sample_period_seconds = calc_time;
     config.sample_pages_per_gigabytes = sample_pages;
+    config.mode = DIRTY_RATE_MEASURE_MODE_PAGE_SAMPLING;
     qemu_thread_create(&thread, "get_dirtyrate", get_dirtyrate_thread,
                        (void *)&config, QEMU_THREAD_DETACHED);
 }
diff --git a/migration/dirtyrate.h b/migration/dirtyrate.h
index e1fd290..69d4c5b 100644
--- a/migration/dirtyrate.h
+++ b/migration/dirtyrate.h
@@ -43,6 +43,7 @@
 struct DirtyRateConfig {
     uint64_t sample_pages_per_gigabytes; /* sample pages per GB */
     int64_t sample_period_seconds; /* time duration between two sampling */
+    DirtyRateMeasureMode mode; /* mode of dirtyrate measurement */
 };
 
 /*
@@ -58,17 +59,29 @@ struct RamblockDirtyInfo {
     uint32_t *hash_result; /* array of hash result for sampled pages */
 };
 
+typedef struct SampleVMStat {
+    uint64_t total_dirty_samples; /* total dirty sampled page */
+    uint64_t total_sample_count; /* total sampled pages */
+    uint64_t total_block_mem_MB; /* size of total sampled pages in MB */
+} SampleVMStat;
+
+typedef struct VcpuStat {
+    int nvcpu; /* number of vcpu */
+    DirtyRateVcpu *rates; /* array of dirty rate for each vcpu */
+} VcpuStat;
+
 /*
  * Store calculation statistics for each measure.
  */
 struct DirtyRateStat {
-    uint64_t total_dirty_samples; /* total dirty sampled page */
-    uint64_t total_sample_count; /* total sampled pages */
-    uint64_t total_block_mem_MB; /* size of total sampled pages in MB */
     int64_t dirty_rate; /* dirty rate in MB/s */
     int64_t start_time; /* calculation start time in units of second */
     int64_t calc_time; /* time duration of two sampling in units of second */
     uint64_t sample_pages; /* sample pages per GB */
+    union {
+        SampleVMStat page_sampling;
+        VcpuStat dirty_ring;
+    };
 };
 
 void *get_dirtyrate_thread(void *arg);
diff --git a/qapi/migration.json b/qapi/migration.json
index 1124a2d..c464bef 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1709,6 +1709,21 @@
   'data': { 'device-id': 'str' } }
 
 ##
+# @DirtyRateVcpu:
+#
+# Dirty rate of vcpu.
+#
+# @id: vcpu index.
+#
+# @dirty-rate: dirty rate.
+#
+# Since: 6.1
+#
+##
+{ 'struct': 'DirtyRateVcpu',
+  'data': { 'id': 'int', 'dirty-rate': 'int64' } }
+
+##
 # @DirtyRateStatus:
 #
 # An enumeration of dirtyrate status.
@@ -1726,6 +1741,21 @@
   'data': [ 'unstarted', 'measuring', 'measured'] }
 
 ##
+# @DirtyRateMeasureMode:
+#
+# An enumeration of mode of measuring dirtyrate.
+#
+# @page-sampling: calculate dirtyrate by sampling pages.
+#
+# @dirty-ring: calculate dirtyrate via dirty ring.
+#
+# Since: 6.1
+#
+##
+{ 'enum': 'DirtyRateMeasureMode',
+  'data': ['page-sampling', 'dirty-ring'] }
+
+##
 # @DirtyRateInfo:
 #
 # Information about current dirty page rate of vm.
-- 
1.8.3.1




* [PATCH v12 4/6] migration/dirtyrate: adjust order of registering thread
  2021-06-29 16:01 [PATCH v12 0/6] support dirtyrate at the granularity of vcpu huangy81
                   ` (2 preceding siblings ...)
  2021-06-29 16:01 ` [PATCH v12 3/6] migration/dirtyrate: introduce struct and adjust DirtyRateStat huangy81
@ 2021-06-29 16:01 ` huangy81
  2021-06-29 16:01 ` [PATCH v12 5/6] migration/dirtyrate: move init step of calculation to main thread huangy81
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: huangy81 @ 2021-06-29 16:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Hyman, Dr. David Alan Gilbert,
	Peter Xu, Chuan Zheng, Paolo Bonzini

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

register the get_dirtyrate thread with RCU earlier, in
get_dirtyrate_thread itself, so that both the page-sampling and
dirty-ring code paths are covered.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Message-Id: <d7727581a8e86d4a42fc3eacf7f310419b9ebf7e.1624040308.git.huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/dirtyrate.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
index e0a27a9..a9bdd60 100644
--- a/migration/dirtyrate.c
+++ b/migration/dirtyrate.c
@@ -352,7 +352,6 @@ static void calculate_dirtyrate(struct DirtyRateConfig config)
     int64_t msec = 0;
     int64_t initial_time;
 
-    rcu_register_thread();
     rcu_read_lock();
     initial_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
     if (!record_ramblock_hash_info(&block_dinfo, config, &block_count)) {
@@ -375,7 +374,6 @@ static void calculate_dirtyrate(struct DirtyRateConfig config)
 out:
     rcu_read_unlock();
     free_ramblock_dirty_info(block_dinfo, block_count);
-    rcu_unregister_thread();
 }
 
 void *get_dirtyrate_thread(void *arg)
@@ -383,6 +381,7 @@ void *get_dirtyrate_thread(void *arg)
     struct DirtyRateConfig config = *(struct DirtyRateConfig *)arg;
     int ret;
     int64_t start_time;
+    rcu_register_thread();
 
     ret = dirtyrate_set_state(&CalculatingState, DIRTY_RATE_STATUS_UNSTARTED,
                               DIRTY_RATE_STATUS_MEASURING);
@@ -401,6 +400,8 @@ void *get_dirtyrate_thread(void *arg)
     if (ret == -1) {
         error_report("change dirtyrate state failed.");
     }
+
+    rcu_unregister_thread();
     return NULL;
 }
 
-- 
1.8.3.1




* [PATCH v12 5/6] migration/dirtyrate: move init step of calculation to main thread
  2021-06-29 16:01 [PATCH v12 0/6] support dirtyrate at the granularity of vcpu huangy81
                   ` (3 preceding siblings ...)
  2021-06-29 16:01 ` [PATCH v12 4/6] migration/dirtyrate: adjust order of registering thread huangy81
@ 2021-06-29 16:01 ` huangy81
  2021-06-29 16:01 ` [PATCH v12 6/6] migration/dirtyrate: implement dirty-ring dirtyrate calculation huangy81
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: huangy81 @ 2021-06-29 16:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Hyman, Dr. David Alan Gilbert,
	Peter Xu, Chuan Zheng, Paolo Bonzini

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

since the main thread may "query dirty rate" at any time, it is better
to move the init step into the main thread, so that the
synchronization overhead between "main" and "get_dirtyrate" can be
reduced.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Message-Id: <109f8077518ed2f13068e3bfb10e625e964780f1.1624040308.git.huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/dirtyrate.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
index a9bdd60..b8f61cc 100644
--- a/migration/dirtyrate.c
+++ b/migration/dirtyrate.c
@@ -380,7 +380,6 @@ void *get_dirtyrate_thread(void *arg)
 {
     struct DirtyRateConfig config = *(struct DirtyRateConfig *)arg;
     int ret;
-    int64_t start_time;
     rcu_register_thread();
 
     ret = dirtyrate_set_state(&CalculatingState, DIRTY_RATE_STATUS_UNSTARTED,
@@ -390,9 +389,6 @@ void *get_dirtyrate_thread(void *arg)
         return NULL;
     }
 
-    start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME) / 1000;
-    init_dirtyrate_stat(start_time, config);
-
     calculate_dirtyrate(config);
 
     ret = dirtyrate_set_state(&CalculatingState, DIRTY_RATE_STATUS_MEASURING,
@@ -411,6 +407,7 @@ void qmp_calc_dirty_rate(int64_t calc_time, bool has_sample_pages,
     static struct DirtyRateConfig config;
     QemuThread thread;
     int ret;
+    int64_t start_time;
 
     /*
      * If the dirty rate is already being measured, don't attempt to start.
@@ -451,6 +448,10 @@ void qmp_calc_dirty_rate(int64_t calc_time, bool has_sample_pages,
     config.sample_period_seconds = calc_time;
     config.sample_pages_per_gigabytes = sample_pages;
     config.mode = DIRTY_RATE_MEASURE_MODE_PAGE_SAMPLING;
+
+    start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME) / 1000;
+    init_dirtyrate_stat(start_time, config);
+
     qemu_thread_create(&thread, "get_dirtyrate", get_dirtyrate_thread,
                        (void *)&config, QEMU_THREAD_DETACHED);
 }
-- 
1.8.3.1




* [PATCH v12 6/6] migration/dirtyrate: implement dirty-ring dirtyrate calculation
  2021-06-29 16:01 [PATCH v12 0/6] support dirtyrate at the granularity of vcpu huangy81
                   ` (4 preceding siblings ...)
  2021-06-29 16:01 ` [PATCH v12 5/6] migration/dirtyrate: move init step of calculation to main thread huangy81
@ 2021-06-29 16:01 ` huangy81
  2021-08-29 23:11 ` [PATCH v12 0/6] support dirtyrate at the granularity of vcpu Hyman
  2021-10-15  1:32 ` Peter Xu
  7 siblings, 0 replies; 14+ messages in thread
From: huangy81 @ 2021-06-29 16:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Hyman, Dr. David Alan Gilbert,
	Peter Xu, Chuan Zheng, Paolo Bonzini

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

use the dirty ring feature to implement the dirty rate calculation.

introduce a mode option in the qmp command calc_dirty_rate to specify
which method should be used for the calculation; either page-sampling
or dirty-ring can be passed.

introduce a "dirty_ring:-r" option in the hmp command calc_dirty_rate
to indicate that the dirty ring method should be used.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Message-Id: <7db445109bd18125ce8ec86816d14f6ab5de6a7d.1624040308.git.huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 hmp-commands.hx        |   7 +-
 migration/dirtyrate.c  | 208 ++++++++++++++++++++++++++++++++++++++++++++++---
 migration/trace-events |   2 +
 qapi/migration.json    |  16 +++-
 4 files changed, 218 insertions(+), 15 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index 8e45bce..f7fc9d7 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1738,8 +1738,9 @@ ERST
 
     {
         .name       = "calc_dirty_rate",
-        .args_type  = "second:l,sample_pages_per_GB:l?",
-        .params     = "second [sample_pages_per_GB]",
-        .help       = "start a round of guest dirty rate measurement",
+        .args_type  = "dirty_ring:-r,second:l,sample_pages_per_GB:l?",
+        .params     = "[-r] second [sample_pages_per_GB]",
+        .help       = "start a round of guest dirty rate measurement (using -r to"
+                      "\n\t\t\t specify dirty ring as the method of calculation)",
         .cmd        = hmp_calc_dirty_rate,
     },
diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
index b8f61cc..f92c4b4 100644
--- a/migration/dirtyrate.c
+++ b/migration/dirtyrate.c
@@ -16,6 +16,7 @@
 #include "cpu.h"
 #include "exec/ramblock.h"
 #include "qemu/rcu_queue.h"
+#include "qemu/main-loop.h"
 #include "qapi/qapi-commands-migration.h"
 #include "ram.h"
 #include "trace.h"
@@ -23,9 +24,19 @@
 #include "monitor/hmp.h"
 #include "monitor/monitor.h"
 #include "qapi/qmp/qdict.h"
+#include "sysemu/kvm.h"
+#include "sysemu/runstate.h"
+#include "exec/memory.h"
+
+typedef struct DirtyPageRecord {
+    uint64_t start_pages;
+    uint64_t end_pages;
+} DirtyPageRecord;
 
 static int CalculatingState = DIRTY_RATE_STATUS_UNSTARTED;
 static struct DirtyRateStat DirtyStat;
+static DirtyRateMeasureMode dirtyrate_mode =
+                DIRTY_RATE_MEASURE_MODE_PAGE_SAMPLING;
 
 static int64_t set_sample_page_period(int64_t msec, int64_t initial_time)
 {
@@ -70,18 +81,37 @@ static int dirtyrate_set_state(int *state, int old_state, int new_state)
 
 static struct DirtyRateInfo *query_dirty_rate_info(void)
 {
+    int i;
     int64_t dirty_rate = DirtyStat.dirty_rate;
     struct DirtyRateInfo *info = g_malloc0(sizeof(DirtyRateInfo));
-
-    if (qatomic_read(&CalculatingState) == DIRTY_RATE_STATUS_MEASURED) {
-        info->has_dirty_rate = true;
-        info->dirty_rate = dirty_rate;
-    }
+    DirtyRateVcpuList *head = NULL, **tail = &head;
 
     info->status = CalculatingState;
     info->start_time = DirtyStat.start_time;
     info->calc_time = DirtyStat.calc_time;
     info->sample_pages = DirtyStat.sample_pages;
+    info->mode = dirtyrate_mode;
+
+    if (qatomic_read(&CalculatingState) == DIRTY_RATE_STATUS_MEASURED) {
+        info->has_dirty_rate = true;
+        info->dirty_rate = dirty_rate;
+
+        if (dirtyrate_mode == DIRTY_RATE_MEASURE_MODE_DIRTY_RING) {
+            /*
+             * set sample_pages with 0 to indicate page sampling
+             * isn't enabled
+             **/
+            info->sample_pages = 0;
+            info->has_vcpu_dirty_rate = true;
+            for (i = 0; i < DirtyStat.dirty_ring.nvcpu; i++) {
+                DirtyRateVcpu *rate = g_malloc0(sizeof(DirtyRateVcpu));
+                rate->id = DirtyStat.dirty_ring.rates[i].id;
+                rate->dirty_rate = DirtyStat.dirty_ring.rates[i].dirty_rate;
+                QAPI_LIST_APPEND(tail, rate);
+            }
+            info->vcpu_dirty_rate = head;
+        }
+    }
 
     trace_query_dirty_rate_info(DirtyRateStatus_str(CalculatingState));
 
@@ -111,6 +141,15 @@ static void init_dirtyrate_stat(int64_t start_time,
     }
 }
 
+static void cleanup_dirtyrate_stat(struct DirtyRateConfig config)
+{
+    /* last calc-dirty-rate qmp use dirty ring mode */
+    if (dirtyrate_mode == DIRTY_RATE_MEASURE_MODE_DIRTY_RING) {
+        free(DirtyStat.dirty_ring.rates);
+        DirtyStat.dirty_ring.rates = NULL;
+    }
+}
+
 static void update_dirtyrate_stat(struct RamblockDirtyInfo *info)
 {
     DirtyStat.page_sampling.total_dirty_samples += info->sample_dirty_count;
@@ -345,7 +384,97 @@ static bool compare_page_hash_info(struct RamblockDirtyInfo *info,
     return true;
 }
 
-static void calculate_dirtyrate(struct DirtyRateConfig config)
+static inline void record_dirtypages(DirtyPageRecord *dirty_pages,
+                                     CPUState *cpu, bool start)
+{
+    if (start) {
+        dirty_pages[cpu->cpu_index].start_pages = cpu->dirty_pages;
+    } else {
+        dirty_pages[cpu->cpu_index].end_pages = cpu->dirty_pages;
+    }
+}
+
+static void dirtyrate_global_dirty_log_start(void)
+{
+    qemu_mutex_lock_iothread();
+    memory_global_dirty_log_start(GLOBAL_DIRTY_DIRTY_RATE);
+    qemu_mutex_unlock_iothread();
+}
+
+static void dirtyrate_global_dirty_log_stop(void)
+{
+    qemu_mutex_lock_iothread();
+    memory_global_dirty_log_sync();
+    memory_global_dirty_log_stop(GLOBAL_DIRTY_DIRTY_RATE);
+    qemu_mutex_unlock_iothread();
+}
+
+static int64_t do_calculate_dirtyrate_vcpu(DirtyPageRecord dirty_pages)
+{
+    uint64_t memory_size_MB;
+    int64_t time_s;
+    uint64_t increased_dirty_pages =
+        dirty_pages.end_pages - dirty_pages.start_pages;
+
+    memory_size_MB = (increased_dirty_pages * TARGET_PAGE_SIZE) >> 20;
+    time_s = DirtyStat.calc_time;
+
+    return memory_size_MB / time_s;
+}
+
+static void calculate_dirtyrate_dirty_ring(struct DirtyRateConfig config)
+{
+    CPUState *cpu;
+    int64_t msec = 0;
+    int64_t start_time;
+    uint64_t dirtyrate = 0;
+    uint64_t dirtyrate_sum = 0;
+    DirtyPageRecord *dirty_pages;
+    int nvcpu = 0;
+    int i = 0;
+
+    CPU_FOREACH(cpu) {
+        nvcpu++;
+    }
+
+    dirty_pages = malloc(sizeof(*dirty_pages) * nvcpu);
+
+    DirtyStat.dirty_ring.nvcpu = nvcpu;
+    DirtyStat.dirty_ring.rates = malloc(sizeof(DirtyRateVcpu) * nvcpu);
+
+    dirtyrate_global_dirty_log_start();
+
+    CPU_FOREACH(cpu) {
+        record_dirtypages(dirty_pages, cpu, true);
+    }
+
+    start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+    DirtyStat.start_time = start_time / 1000;
+
+    msec = config.sample_period_seconds * 1000;
+    msec = set_sample_page_period(msec, start_time);
+    DirtyStat.calc_time = msec / 1000;
+
+    dirtyrate_global_dirty_log_stop();
+
+    CPU_FOREACH(cpu) {
+        record_dirtypages(dirty_pages, cpu, false);
+    }
+
+    for (i = 0; i < DirtyStat.dirty_ring.nvcpu; i++) {
+        dirtyrate = do_calculate_dirtyrate_vcpu(dirty_pages[i]);
+        trace_dirtyrate_do_calculate_vcpu(i, dirtyrate);
+
+        DirtyStat.dirty_ring.rates[i].id = i;
+        DirtyStat.dirty_ring.rates[i].dirty_rate = dirtyrate;
+        dirtyrate_sum += dirtyrate;
+    }
+
+    DirtyStat.dirty_rate = dirtyrate_sum;
+    free(dirty_pages);
+}
+
+static void calculate_dirtyrate_sample_vm(struct DirtyRateConfig config)
 {
     struct RamblockDirtyInfo *block_dinfo = NULL;
     int block_count = 0;
@@ -376,6 +505,17 @@ out:
     free_ramblock_dirty_info(block_dinfo, block_count);
 }
 
+static void calculate_dirtyrate(struct DirtyRateConfig config)
+{
+    if (config.mode == DIRTY_RATE_MEASURE_MODE_DIRTY_RING) {
+        calculate_dirtyrate_dirty_ring(config);
+    } else {
+        calculate_dirtyrate_sample_vm(config);
+    }
+
+    trace_dirtyrate_calculate(DirtyStat.dirty_rate);
+}
+
 void *get_dirtyrate_thread(void *arg)
 {
     struct DirtyRateConfig config = *(struct DirtyRateConfig *)arg;
@@ -401,8 +541,12 @@ void *get_dirtyrate_thread(void *arg)
     return NULL;
 }
 
-void qmp_calc_dirty_rate(int64_t calc_time, bool has_sample_pages,
-                         int64_t sample_pages, Error **errp)
+void qmp_calc_dirty_rate(int64_t calc_time,
+                         bool has_sample_pages,
+                         int64_t sample_pages,
+                         bool has_mode,
+                         DirtyRateMeasureMode mode,
+                         Error **errp)
 {
     static struct DirtyRateConfig config;
     QemuThread thread;
@@ -424,6 +568,15 @@ void qmp_calc_dirty_rate(int64_t calc_time, bool has_sample_pages,
         return;
     }
 
+    if (!has_mode) {
+        mode =  DIRTY_RATE_MEASURE_MODE_PAGE_SAMPLING;
+    }
+
+    if (has_sample_pages && mode == DIRTY_RATE_MEASURE_MODE_DIRTY_RING) {
+        error_setg(errp, "either sample-pages or dirty-ring can be specified.");
+        return;
+    }
+
     if (has_sample_pages) {
         if (!is_sample_pages_valid(sample_pages)) {
             error_setg(errp, "sample-pages is out of range[%d, %d].",
@@ -436,6 +589,16 @@ void qmp_calc_dirty_rate(int64_t calc_time, bool has_sample_pages,
     }
 
     /*
+     * dirty ring mode only works when kvm dirty ring is enabled.
+     */
+    if ((mode == DIRTY_RATE_MEASURE_MODE_DIRTY_RING) &&
+        !kvm_dirty_ring_enabled()) {
+        error_setg(errp, "dirty ring is disabled, use sample-pages method "
+                         "or remeasure later.");
+        return;
+    }
+
+    /*
      * Init calculation state as unstarted.
      */
     ret = dirtyrate_set_state(&CalculatingState, CalculatingState,
@@ -447,7 +610,15 @@ void qmp_calc_dirty_rate(int64_t calc_time, bool has_sample_pages,
 
     config.sample_period_seconds = calc_time;
     config.sample_pages_per_gigabytes = sample_pages;
-    config.mode = DIRTY_RATE_MEASURE_MODE_PAGE_SAMPLING;
+    config.mode = mode;
+
+    cleanup_dirtyrate_stat(config);
+
+    /*
+     * update dirty rate mode so that we can figure out what mode has
+     * been used in last calculation
+     **/
+    dirtyrate_mode = mode;
 
     start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME) / 1000;
     init_dirtyrate_stat(start_time, config);
@@ -473,12 +644,24 @@ void hmp_info_dirty_rate(Monitor *mon, const QDict *qdict)
                    info->sample_pages);
     monitor_printf(mon, "Period: %"PRIi64" (sec)\n",
                    info->calc_time);
+    monitor_printf(mon, "Mode: %s\n",
+                   DirtyRateMeasureMode_str(info->mode));
     monitor_printf(mon, "Dirty rate: ");
     if (info->has_dirty_rate) {
         monitor_printf(mon, "%"PRIi64" (MB/s)\n", info->dirty_rate);
+        if (info->has_vcpu_dirty_rate) {
+            DirtyRateVcpuList *rate, *head = info->vcpu_dirty_rate;
+            for (rate = head; rate != NULL; rate = rate->next) {
+                monitor_printf(mon, "vcpu[%"PRIi64"], Dirty rate: %"PRIi64
+                               " (MB/s)\n", rate->value->id,
+                               rate->value->dirty_rate);
+            }
+        }
     } else {
         monitor_printf(mon, "(not ready)\n");
     }
+
+    qapi_free_DirtyRateVcpuList(info->vcpu_dirty_rate);
     g_free(info);
 }
 
@@ -487,6 +670,10 @@ void hmp_calc_dirty_rate(Monitor *mon, const QDict *qdict)
     int64_t sec = qdict_get_try_int(qdict, "second", 0);
     int64_t sample_pages = qdict_get_try_int(qdict, "sample_pages_per_GB", -1);
     bool has_sample_pages = (sample_pages != -1);
+    bool dirty_ring = qdict_get_try_bool(qdict, "dirty_ring", false);
+    DirtyRateMeasureMode mode =
+        (dirty_ring ? DIRTY_RATE_MEASURE_MODE_DIRTY_RING :
+         DIRTY_RATE_MEASURE_MODE_PAGE_SAMPLING);
     Error *err = NULL;
 
     if (!sec) {
@@ -494,7 +681,8 @@ void hmp_calc_dirty_rate(Monitor *mon, const QDict *qdict)
         return;
     }
 
-    qmp_calc_dirty_rate(sec, has_sample_pages, sample_pages, &err);
+    qmp_calc_dirty_rate(sec, has_sample_pages, sample_pages, true,
+                        mode, &err);
     if (err) {
         hmp_handle_error(mon, err);
         return;
diff --git a/migration/trace-events b/migration/trace-events
index a1c0f03..ce0b5e6 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -331,6 +331,8 @@ get_ramblock_vfn_hash(const char *idstr, uint64_t vfn, uint32_t crc) "ramblock n
 calc_page_dirty_rate(const char *idstr, uint32_t new_crc, uint32_t old_crc) "ramblock name: %s, new crc: %" PRIu32 ", old crc: %" PRIu32
 skip_sample_ramblock(const char *idstr, uint64_t ramblock_size) "ramblock name: %s, ramblock size: %" PRIu64
 find_page_matched(const char *idstr) "ramblock %s addr or size changed"
+dirtyrate_calculate(int64_t dirtyrate) "dirty rate: %" PRIi64 " MB/s"
+dirtyrate_do_calculate_vcpu(int idx, uint64_t rate) "vcpu[%d]: %"PRIu64 " MB/s"
 
 # block.c
 migration_block_init_shared(const char *blk_device_name) "Start migration for %s with shared base image"
diff --git a/qapi/migration.json b/qapi/migration.json
index c464bef..de35528 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1773,6 +1773,12 @@
 # @sample-pages: page count per GB for sample dirty pages
 #                the default value is 512 (since 6.1)
 #
+# @mode: the method used to calculate the dirty rate; either
+#        'page-sampling' or 'dirty-ring' (Since 6.1)
+#
+# @vcpu-dirty-rate: dirty rate for each vcpu if dirty-ring
+#                   mode is specified (Since 6.1)
+#
 # Since: 5.2
 #
 ##
@@ -1781,7 +1787,9 @@
            'status': 'DirtyRateStatus',
            'start-time': 'int64',
            'calc-time': 'int64',
-           'sample-pages': 'uint64'} }
+           'sample-pages': 'uint64',
+           'mode': 'DirtyRateMeasureMode',
+           '*vcpu-dirty-rate': [ 'DirtyRateVcpu' ] } }
 
 ##
 # @calc-dirty-rate:
@@ -1793,6 +1801,9 @@
 # @sample-pages: page count per GB for sample dirty pages
 #                the default value is 512 (since 6.1)
 #
+# @mode: the mechanism for calculating the dirty rate; either
+#        'page-sampling' or 'dirty-ring' (Since 6.1)
+#
 # Since: 5.2
 #
 # Example:
@@ -1801,7 +1812,8 @@
 #
 ##
 { 'command': 'calc-dirty-rate', 'data': {'calc-time': 'int64',
-                                         '*sample-pages': 'int'} }
+                                         '*sample-pages': 'int',
+                                         '*mode': 'DirtyRateMeasureMode'} }
 
 ##
 # @query-dirty-rate:
-- 
1.8.3.1




* Re: [PATCH v12 0/6] support dirtyrate at the granularity of vcpu
  2021-06-29 16:01 [PATCH v12 0/6] support dirtyrate at the granularity of vcpu huangy81
                   ` (5 preceding siblings ...)
  2021-06-29 16:01 ` [PATCH v12 6/6] migration/dirtyrate: implement dirty-ring dirtyrate calculation huangy81
@ 2021-08-29 23:11 ` Hyman
  2021-09-20 15:45   ` Hyman
  2021-10-15  1:32 ` Peter Xu
  7 siblings, 1 reply; 14+ messages in thread
From: Hyman @ 2021-08-29 23:11 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Dr. David Alan Gilbert, Peter Xu,
	Chuan Zheng, Paolo Bonzini

ping - hi, what do you think about this patchset? :)

在 2021/6/30 0:01, huangy81@chinatelecom.cn 写道:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> v12
> - adjust the order of calculating dirty rate
>    let memory_global_dirty_log_sync before calculating as
>    v11 version description.
> 
> v11
> - rebase on master
> 
> - call memory_global_dirty_log_sync before stopping dirty log
>    when we calculate dirtyrate. this can make dirtyrate result
>    more accurate, since kvm-reaper may be sleeping when there are
>    dirty pages but not very much, this may lead to miss some pages
>    when we are calculating. do the dirty log sync manually before
>    we do the calculation, so that we can get the dirty page
>    produced as recently as possible.
> 
> v10
> - rebase on master
> 
> - pre-check if dirty log has started when calling
>    memory_global_dirty_log_stop in the ram_save_cleanup.
>    since it will stop dirty log unconditionally, so add if statement
>    to ensure that memory_global_dirty_log_start/stop used in pairs.
> 
> - modify the memory_global_dirty_log_start/log in xen vitualization
>    and make the qemu works in the old way.
> 
> v9
> - rebase on master
> 
> - make global_dirty_tracking a bitmask:
>    pass the casted "flags" to the callback in the right way, and drop
>    the malloc/free step.
> 
> - make bitmask comments more accurate
> 
> please review, thanks.
> 
> v8
> - drop the "none" type of DirtyRateMeasureMode
> 
> - make global_dirty_tracking a bitmask:
>    1. code clean: pass the casted "flags" as input parameter in
>       memory_global_dirty_log_stop, and then drop the free step.
>    2. squash commit "rename global_dirty_log to global_dirty_tracking"
>       into commit "make global_dirty_tracking a bitmask"
>    3. move "GLOBAL_DIRTY_MASK" macro to the bits's definations.
> 
> - drop the "cleanup_dirtyrate_stat" in commit
>    "move init step of calculation to main thread" so that each commit
>    keeps the old way working.
> 
> - add dirty rate unit "MB/s" in the output of hmp
> 
> please review, may be this is the last version of this patchset, :)
> thanks for Peter's patient instructions and reviews.
> 
> Hyman Huang(黄勇)
> 
> v7
> - fix the code style problem, sorry about that
> 
> v6:
> - pick up commit "KVM: introduce dirty_pages and kvm_dirty_ring_enabled"
>    which has been dropped in verison 5
> 
> v5:
> - rename global_dirty_log to global_dirty_tracking on Peter's advice
> 
> - make global_dirty_tracking a bitmask:
>    1. add assert statement to ensure starting dirty tracking repeatly
>       not allowed.
>    2. add assert statement to ensure dirty tracking cannot be stopped
>       without having been started.
> 
> - protecting dirty rate stat info:
>    1. drop the mutext for protecting dirty rate introduced in version 4
>    2. change the code block in query_dirty_rate_info so that requirements
>       of "safe racing" to the dirty rate stat can be meet
> 
> - make the helper function "record_dirtypages" inline and change
>    the global var dirty_pages  to local var
> 
> - free DirtyRateVcpuList in case of memory leak
> 
> please review, thanks a lot.
> 
> v4:
> - make global_dirty_log a bitmask:
>    1. add comments about dirty log bitmask
>    2. use assert statement to check validity of flags
>    3. add trace to log bitmask changes
> 
> - introduce mode option to show what method calculation should be used,
>    also, export mode option in the as last commmit
> 
> - split cleanup and init of dirty rate stat and move it in the main
>    thread
> 
> - change the fields of DirtyPageRecord to uint64_t type so that we
>    can calculation the increased dirty pages with the formula
>    as Peter's advice: dirty pages = end_pages - start_pages
> 
> - introduce mutex to protect dirty rate stat info
> 
> - adjust order of registering thread
> 
> - drop the memory free callback
> 
> this version modify some code on Peter's advice, reference to:
> https://lore.kernel.org/qemu-devel/YL5nNYXmrqMlXF3v@t490s/
> 
> thanks again.
> 
> v3:
> - pick up "migration/dirtyrate: make sample page count configurable" to
>    make patchset apply master correctly
> 
> v2:
> - rebase to "migration/dirtyrate: make sample page count configurable"
> 
> - rename "vcpu" to "per_vcpu" to show the per-vcpu method
> 
> - squash patch 5/6 into a single one, squash patch 1/2 also
> 
> - pick up "hmp: Add "calc_dirty_rate" and "info dirty_rate" cmds"
> 
> - make global_dirty_log a bitmask to make sure both migration and dirty
>    could not intefer with each other
> 
> - add memory free callback to prevent memory leaking
> 
> the most different of v2 fron v1 is that we make the global_dirty_log a
> bitmask. the reason is dirty rate measurement may start or stop dirty
> logging during calculation. this conflict with migration because stop
> dirty log make migration leave dirty pages out then that'll be a
> problem.
> 
> make global_dirty_log a bitmask can let both migration and dirty
> rate measurement work fine. introduce GLOBAL_DIRTY_MIGRATION and
> GLOBAL_DIRTY_DIRTY_RATE to distinguish what current dirty log aims
> for, migration or dirty rate.
> 
> all references to global_dirty_log should be untouched because any bit
> set there should justify that global dirty logging is enabled.
> 
> Please review, thanks !
> 
> v1:
> 
> Since the Dirty Ring on QEMU part has been merged recently, how to use
> this feature is under consideration.
> 
> In the scene of migration, it is valuable to provide a more accurante
> interface to track dirty memory than existing one, so that the upper
> layer application can make a wise decision, or whatever. More
> importantly,
> dirtyrate info at the granualrity of vcpu could provide a possibility to
> make migration convergent by imposing restriction on vcpu. With Dirty
> Ring, we can calculate dirtyrate efficiently and cheaply.
> 
> The old interface implemented by sampling pages, it consumes cpu
> resource, and the larger guest memory size become, the more cpu resource
> it consumes, namely, hard to scale. New interface has no such drawback.
> 
> Please review, thanks !
> 
> Best Regards !
> 
> Hyman Huang(黄勇) (6):
>    KVM: introduce dirty_pages and kvm_dirty_ring_enabled
>    memory: make global_dirty_tracking a bitmask
>    migration/dirtyrate: introduce struct and adjust DirtyRateStat
>    migration/dirtyrate: adjust order of registering thread
>    migration/dirtyrate: move init step of calculation to main thread
>    migration/dirtyrate: implement dirty-ring dirtyrate calculation
> 
>   accel/kvm/kvm-all.c     |   7 ++
>   hmp-commands.hx         |   7 +-
>   hw/i386/xen/xen-hvm.c   |   4 +-
>   include/exec/memory.h   |  20 +++-
>   include/exec/ram_addr.h |   4 +-
>   include/hw/core/cpu.h   |   1 +
>   include/sysemu/kvm.h    |   1 +
>   migration/dirtyrate.c   | 266 +++++++++++++++++++++++++++++++++++++++++-------
>   migration/dirtyrate.h   |  19 +++-
>   migration/ram.c         |  15 ++-
>   migration/trace-events  |   2 +
>   qapi/migration.json     |  46 ++++++++-
>   softmmu/memory.c        |  32 ++++--
>   softmmu/trace-events    |   1 +
>   14 files changed, 361 insertions(+), 64 deletions(-)
> 
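
To make two recurring design points from the cover letter above concrete (the
global dirty tracking bitmask and the per-vCPU "dirty pages = end_pages -
start_pages" formula), here is a minimal, self-contained C sketch. It is not
the code from this series: the bit values, helper functions and example
numbers are assumptions for illustration only.

/*
 * Sketch only: models the bitmask and per-vCPU dirty page accounting
 * described in the cover letter; not the actual QEMU implementation.
 */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Who currently needs global dirty tracking (bit layout is an assumption). */
#define GLOBAL_DIRTY_MIGRATION  (1U << 0)
#define GLOBAL_DIRTY_DIRTY_RATE (1U << 1)
#define GLOBAL_DIRTY_MASK       (GLOBAL_DIRTY_MIGRATION | GLOBAL_DIRTY_DIRTY_RATE)

static unsigned int global_dirty_tracking;

static void dirty_log_start(unsigned int flags)
{
    assert(flags && !(flags & ~GLOBAL_DIRTY_MASK));
    assert(!(global_dirty_tracking & flags)); /* no double start for one user */
    global_dirty_tracking |= flags;
}

static void dirty_log_stop(unsigned int flags)
{
    assert(flags && !(flags & ~GLOBAL_DIRTY_MASK));
    assert(global_dirty_tracking & flags);    /* must have been started */
    global_dirty_tracking &= ~flags;
}

/* Per-vCPU record; uint64_t so end - start never truncates. */
typedef struct DirtyPageRecord {
    uint64_t start_pages;
    uint64_t end_pages;
} DirtyPageRecord;

int main(void)
{
    DirtyPageRecord rec = { .start_pages = 1000, .end_pages = 261000 };
    const uint64_t page_size = 4096;   /* bytes, example value */
    const uint64_t calc_time = 1;      /* seconds, example value */

    dirty_log_start(GLOBAL_DIRTY_DIRTY_RATE);
    dirty_log_start(GLOBAL_DIRTY_MIGRATION);  /* both users may coexist */
    dirty_log_stop(GLOBAL_DIRTY_DIRTY_RATE);  /* migration keeps logging on */
    assert(global_dirty_tracking);            /* any bit set: logging enabled */

    uint64_t dirty_pages = rec.end_pages - rec.start_pages;
    printf("dirty rate: %" PRIu64 " MB/s\n",
           dirty_pages * page_size / calc_time / (1024 * 1024));
    return 0;
}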



* Re: [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu
  2021-08-29 23:11 ` [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu Hyman
@ 2021-09-20 15:45   ` Hyman
  2021-09-20 16:00     ` Peter Xu
  0 siblings, 1 reply; 14+ messages in thread
From: Hyman @ 2021-09-20 15:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Dr. David Alan Gilbert, Peter Xu,
	Chuan Zheng, Paolo Bonzini


On 2021/8/30 7:11, Hyman wrote:
> ping - hi, what would you think about this patchset ? :)
Pinging again, just to make sure this patchset is not ignored;
any advice or comments are welcome. :)



* Re: [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu
  2021-09-20 15:45   ` Hyman
@ 2021-09-20 16:00     ` Peter Xu
  0 siblings, 0 replies; 14+ messages in thread
From: Peter Xu @ 2021-09-20 16:00 UTC (permalink / raw)
  To: Hyman
  Cc: Eduardo Habkost, Juan Quintela, qemu-devel,
	Dr. David Alan Gilbert, Chuan Zheng, Paolo Bonzini

On Mon, Sep 20, 2021 at 11:45:21PM +0800, Hyman wrote:
> 
> On 2021/8/30 7:11, Hyman wrote:
> > ping - hi, what would you think about this patchset ? :)
> Pinging again, just to make sure this patchset is not ignored;
> any advice or comments are welcome. :)

Thanks for following this up, Yong.  Yes, I keep thinking this series (and
the other series that enables dirty rate measurement via dirty logging) is a
nice thing to have, and worth merging.

-- 
Peter Xu




* Re: [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu
  2021-06-29 16:01 [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu huangy81
                   ` (6 preceding siblings ...)
  2021-08-29 23:11 ` [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu Hyman
@ 2021-10-15  1:32 ` Peter Xu
  2021-10-15  2:07   ` Hyman Huang
  7 siblings, 1 reply; 14+ messages in thread
From: Peter Xu @ 2021-10-15  1:32 UTC (permalink / raw)
  To: huangy81
  Cc: Eduardo Habkost, Juan Quintela, qemu-devel,
	Dr. David Alan Gilbert, Chuan Zheng, Paolo Bonzini

On Wed, Jun 30, 2021 at 12:01:17AM +0800, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> v12
> - adjust the order of calculating dirty rate
>   let memory_global_dirty_log_sync before calculating as
>   v11 version description.

Ping for Yong.

Dave/Juan, any plan to review/merge this series (along with the other series of
dirty logging)?

I found it useful when I wanted to modify the program I used to generate a
constant dirty workload - this series can help me verify the change.

I still keep thinking this series is something good to have.  Thanks,

-- 
Peter Xu




* Re: [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu
  2021-10-15  1:32 ` Peter Xu
@ 2021-10-15  2:07   ` Hyman Huang
  2021-10-27  6:31     ` Zheng Chuan
  0 siblings, 1 reply; 14+ messages in thread
From: Hyman Huang @ 2021-10-15  2:07 UTC (permalink / raw)
  To: Peter Xu
  Cc: Eduardo Habkost, Juan Quintela, qemu-devel,
	Dr. David Alan Gilbert, Chuan Zheng, Paolo Bonzini



On 2021/10/15 9:32, Peter Xu wrote:
> On Wed, Jun 30, 2021 at 12:01:17AM +0800, huangy81@chinatelecom.cn wrote:
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> v12
>> - adjust the order of calculating dirty rate
>>    let memory_global_dirty_log_sync before calculating as
>>    v11 version description.
> 
> Ping for Yong. >
> Dave/Juan, any plan to review/merge this series (along with the other series of
> dirty logging)?
> 
> I found it useful when I wanted to modify the program I used to generate a
> constant dirty workload - this series can help me verify the change.
> 
> I still keep thinking this series is something good to have.  Thanks,
The dirty rate calculation has already been used to estimate live
migration time in the "e cloud" product of chinatelecom; it also predicts
the migration success ratio, which provides valuable information to the
cloud management plane when selecting which VM should be migrated.
> 

-- 
Best regards

Hyman Huang(黄勇)



* Re: [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu
  2021-10-15  2:07   ` Hyman Huang
@ 2021-10-27  6:31     ` Zheng Chuan
  2021-10-28  4:26       ` Hyman Huang
  0 siblings, 1 reply; 14+ messages in thread
From: Zheng Chuan @ 2021-10-27  6:31 UTC (permalink / raw)
  To: Hyman Huang, Peter Xu
  Cc: Eduardo Habkost, Juan Quintela, qemu-devel, Xiexiangyou,
	Dr. David Alan Gilbert, Paolo Bonzini

Hi.
I have no objection to the implementation code itself.
But we should know, and let the user know, the performance penalty and the
conflict with migration compared to the hash method, especially the
performance of VMs with hugepages.

On 2021/10/15 10:07, Hyman Huang wrote:
> 
> 
>> On 2021/10/15 9:32, Peter Xu wrote:
>> On Wed, Jun 30, 2021 at 12:01:17AM +0800, huangy81@chinatelecom.cn wrote:
>>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>>
>>> v12
>>> - adjust the order of calculating dirty rate
>>>    let memory_global_dirty_log_sync before calculating as
>>>    v11 version description.
>>
>> Ping for Yong. >
>> Dave/Juan, any plan to review/merge this series (along with the other series of
>> dirty logging)?
>>
>> I found it useful when I wanted to modify the program I used to generate
>> constant dirty workload - this series can help me to verify the change.
>>
>> I still keep thinking this series is something good to have.  Thanks,
> The dirty rate calculation has already been used to estimate live migration time in the "e cloud" product of chinatelecom; it also predicts the migration success ratio, which provides valuable information to the cloud management plane when selecting which VM should be migrated.
>>
> 

-- 
Regards.
Chuan



* Re: [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu
  2021-10-27  6:31     ` Zheng Chuan
@ 2021-10-28  4:26       ` Hyman Huang
  0 siblings, 0 replies; 14+ messages in thread
From: Hyman Huang @ 2021-10-28  4:26 UTC (permalink / raw)
  To: Zheng Chuan, Peter Xu
  Cc: Eduardo Habkost, Juan Quintela, qemu-devel, Xiexiangyou,
	Dr. David Alan Gilbert, Paolo Bonzini



On 2021/10/27 14:31, Zheng Chuan wrote:
> Hi.
> I have no objection to the implementation code itself.
> But we should know, and let the user know, the performance penalty and the conflict with migration compared to the hash method, especially the performance of VMs with hugepages.

I dirtied 1 GB of guest memory and did the measurement with the two methods.

The copy rate is roughly 1,665 MB/s in the VM.

The following output shows guest memory performance while measuring with
the dirty-ring method:
/init (00001): INFO: 1635392998977ms copied 1 GB in 00616ms
/init (00001): INFO: 1635392999593ms copied 1 GB in 00615ms
/init (00001): INFO: 1635393000211ms copied 1 GB in 00616ms
----------------- start measurement -----------------------
/init (00001): INFO: 1635393000884ms copied 1 GB in 00672ms		
/init (00001): INFO: 1635393001849ms copied 1 GB in 00963ms	
/init (00001): INFO: 1635393002578ms copied 1 GB in 00727ms
----------------- end measurement -----------------------
/init (00001): INFO: 1635393003195ms copied 1 GB in 00615ms
/init (00001): INFO: 1635393003811ms copied 1 GB in 00614ms
/init (00001): INFO: 1635393004427ms copied 1 GB in 00615ms

Guest memory performance is almost unchanged with the hash method.

The following are the test results (measurement interval = 1s):

method        measurement result    copy rate during measurement
hash          44 MB/s               1,665 MB/s
dirty ring    1,167 MB/s            1,523 MB/s, 1,063 MB/s, 1,408 MB/s

The max penalty is 36% during the test interval (1s), and the average
penalty is 20%.
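
For reference, a quick self-contained check of those two percentages, using
only the copy rates already quoted in the table above (plain arithmetic, not
code from the series):

#include <stdio.h>

int main(void)
{
    /* Copy rates in MB/s: baseline with no measurement running, and the
     * three runs that overlapped the dirty-ring measurement window. */
    const double baseline = 1665.0;
    const double during[] = { 1523.0, 1063.0, 1408.0 };
    double sum = 0.0, worst = baseline;

    for (int i = 0; i < 3; i++) {
        sum += during[i];
        if (during[i] < worst) {
            worst = during[i];
        }
    }
    printf("max penalty: %.0f%%\n", 100.0 * (baseline - worst) / baseline);
    printf("avg penalty: %.0f%%\n", 100.0 * (baseline - sum / 3) / baseline);
    return 0;
}

This prints 36% and 20%, matching the numbers above.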

If we trade this off against accuracy, the dirty-ring method may still be
an acceptable option; users can select an appropriate method as they need.
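
As background for the "copied 1 GB in ..." log lines above: they come from an
in-guest loop that repeatedly copies a fixed-size buffer and reports the
elapsed time, so the copy rate (and therefore the penalty) falls out directly.
The actual test program is not shown in the thread; the following is only a
hypothetical stand-in, with assumed buffer size and timing method:

#define _POSIX_C_SOURCE 199309L
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define GIB (1024ULL * 1024 * 1024)

static uint64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

int main(void)
{
    /* Two 1 GiB buffers; each memcpy dirties 1 GiB of guest memory. */
    char *src = malloc(GIB);
    char *dst = malloc(GIB);
    if (!src || !dst) {
        return 1;
    }
    memset(src, 0x5a, GIB);

    for (int i = 0; i < 10; i++) {
        uint64_t start = now_ms();
        memcpy(dst, src, GIB);
        uint64_t end = now_ms();
        printf("INFO: %" PRIu64 "ms copied 1 GB in %05" PRIu64 "ms\n",
               end, end - start);
    }
    return 0;
}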

> 
> On 2021/10/15 10:07, Hyman Huang wrote:
>>
>>
>>> On 2021/10/15 9:32, Peter Xu wrote:
>>> On Wed, Jun 30, 2021 at 12:01:17AM +0800, huangy81@chinatelecom.cn wrote:
>>>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>>>
>>>> v12
>>>> - adjust the order of calculating dirty rate
>>>>     let memory_global_dirty_log_sync before calculating as
>>>>     v11 version description.
>>>
>>> Ping for Yong. >
>>> Dave/Juan, any plan to review/merge this series (along with the other series of
>>> dirty logging)?
>>>
>>> I found it useful when I wanted to modify the program I used to generate a
>>> constant dirty workload - this series can help me verify the change.
>>>
>>> I still keep thinking this series is something good to have.  Thanks,
>> The dirty rate calculation has already been used to estimate live migration time in the "e cloud" product of chinatelecom; it also predicts the migration success ratio, which provides valuable information to the cloud management plane when selecting which VM should be migrated.
>>>
>>
> 

-- 
Best regards

Hyman Huang(黄勇)



end of thread, other threads:[~2021-10-28  4:28 UTC | newest]

Thread overview: 14+ messages
2021-06-29 16:01 [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu huangy81
2021-06-29 16:01 ` [PATCH v12 1/6] KVM: introduce dirty_pages and kvm_dirty_ring_enabled huangy81
2021-06-29 16:01 ` [PATCH v12 2/6] memory: make global_dirty_tracking a bitmask huangy81
2021-06-29 16:01 ` [PATCH v12 3/6] migration/dirtyrate: introduce struct and adjust DirtyRateStat huangy81
2021-06-29 16:01 ` [PATCH v12 4/6] migration/dirtyrate: adjust order of registering thread huangy81
2021-06-29 16:01 ` [PATCH v12 5/6] migration/dirtyrate: move init step of calculation to main thread huangy81
2021-06-29 16:01 ` [PATCH v12 6/6] migration/dirtyrate: implement dirty-ring dirtyrate calculation huangy81
2021-08-29 23:11 ` [PATCH v12 0/6] support dirtyrate at the granualrity of vcpu Hyman
2021-09-20 15:45   ` Hyman
2021-09-20 16:00     ` Peter Xu
2021-10-15  1:32 ` Peter Xu
2021-10-15  2:07   ` Hyman Huang
2021-10-27  6:31     ` Zheng Chuan
2021-10-28  4:26       ` Hyman Huang
