* [PATCH v2 00/11] migration: introduce dirtylimit capability
@ 2022-11-21 16:26 huangy81
  2022-11-21 16:26 ` [PATCH v2 01/11] dirtylimit: Fix overflow when computing MB huangy81
                   ` (10 more replies)
  0 siblings, 11 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

v2:
This version makes some modifications compared with version 1,
as follows:
1. Fix the overflow issue reported by Peter Maydell.
2. Add a parameter check for the hmp "set_vcpu_dirty_limit" command.
3. Fix the race between the dirty ring reaper thread and the
   QEMU main thread.
4. Add migrate parameter checks for x-vcpu-dirty-limit-period
   and vcpu-dirty-limit.
5. Forbid the hmp/qmp commands set_vcpu_dirty_limit and
   cancel_vcpu_dirty_limit during dirty-limit live migration when
   implementing the dirty-limit convergence algorithm.
6. Add a capability check to ensure auto-converge and dirty-limit
   are mutually exclusive.
7. Pre-check that the kvm dirty ring size is configured before setting
   the dirty-limit migrate parameter.
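
As a usage sketch (assuming this series is applied), a dirty-limit
migration would be driven via QMP roughly as follows. The capability
and parameter names are taken from this series, the numeric values are
illustrative, and the kvm dirty ring must already be enabled at VM
startup (e.g. with "-accel kvm,dirty-ring-size=2048", per item 7):

```json
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
      { "capability": "dirty-limit", "state": true } ] } }

{ "execute": "migrate-set-parameters",
  "arguments": { "x-vcpu-dirty-limit-period": 1000,
                 "vcpu-dirty-limit": 50 } }
```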

Compared with version 1, more comprehensive tests were done.

The test environment is as follows:
-------------------------------------------------------------
a. Host hardware info:

CPU:
Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz

CPU(s):                          64
On-line CPU(s) list:             0-63
Thread(s) per core:              2
Core(s) per socket:              16
Socket(s):                       2
NUMA node(s):                    2

NUMA node0 CPU(s):               0-15,32-47
NUMA node1 CPU(s):               16-31,48-63

Memory:
Hynix  503Gi

Interface:
Intel Corporation Ethernet Connection X722 for 1GbE (rev 09)
Speed: 1000Mb/s

b. Host software info:

OS: ctyunos release 2
Kernel: 4.19.90-2102.2.0.0066.ctl2.x86_64
Libvirt baseline version:  libvirt-6.9.0
Qemu baseline version: qemu-5.0

c. VM scale
CPU: 4
Memory: 4G
-------------------------------------------------------------

All the supplementary test data shown below are based on the
above test environment.

In version 1, we posted UnixBench test data as follows:

$ taskset -c 8-15 ./Run -i 2 -c 8 {unixbench test item}

host cpu: Intel(R) Xeon(R) Platinum 8378A
host interface speed: 1000Mb/s
  |---------------------+--------+------------+---------------|
  | UnixBench test item | Normal | Dirtylimit | Auto-converge |
  |---------------------+--------+------------+---------------|
  | dhry2reg            | 32800  | 32786      | 25292         |
  | whetstone-double    | 10326  | 10315      | 9847          |
  | pipe                | 15442  | 15271      | 14506         |
  | context1            | 7260   | 6235       | 4514          |
  | spawn               | 3663   | 3317       | 3249          |
  | syscall             | 4669   | 4667       | 3841          |
  |---------------------+--------+------------+---------------|

In version 2, we post supplementary test data that does not use
taskset, making the scenario more general:

$ ./Run

per-vcpu data:
  |---------------------+--------+------------+---------------|
  | UnixBench test item | Normal | Dirtylimit | Auto-converge |
  |---------------------+--------+------------+---------------|
  | dhry2reg            | 2991   | 2902       | 1722          |
  | whetstone-double    | 1018   | 1006       | 627           |
  | Execl Throughput    | 955    | 320        | 660           |
  | File Copy - 1       | 2362   | 805        | 1325          |
  | File Copy - 2       | 1500   | 1406       | 643           |  
  | File Copy - 3       | 4778   | 2160       | 1047          | 
  | Pipe Throughput     | 1181   | 1170       | 842           |
  | Context Switching   | 192    | 224        | 198           |
  | Process Creation    | 490    | 145        | 95            |
  | Shell Scripts - 1   | 1284   | 565        | 610           |
  | Shell Scripts - 2   | 2368   | 900        | 1040          |
  | System Call Overhead| 983    | 948        | 698           |
  | Index Score         | 1263   | 815        | 600           |
  |---------------------+--------+------------+---------------|
Note:
  File Copy - 1: File Copy 1024 bufsize 2000 maxblocks
  File Copy - 2: File Copy 256 bufsize 500 maxblocks 
  File Copy - 3: File Copy 4096 bufsize 8000 maxblocks 
  Shell Scripts - 1: Shell Scripts (1 concurrent)
  Shell Scripts - 2: Shell Scripts (8 concurrent)

Based on the above data, we can conclude that dirty-limit greatly
improves the benchmark results in almost every respect; the "System
Benchmarks Index Score" shows a 35% performance improvement over
auto-converge during live migration.

4-vcpu parallel data (the test VM has 4 vCPUs and 4G memory):
  |---------------------+--------+------------+---------------|
  | UnixBench test item | Normal | Dirtylimit | Auto-converge |
  |---------------------+--------+------------+---------------|
  | dhry2reg            | 7975   | 7146       | 5071          |
  | whetstone-double    | 3982   | 3561       | 2124          |
  | Execl Throughput    | 1882   | 1205       | 768           |
  | File Copy - 1       | 1061   | 865        | 498           |
  | File Copy - 2       | 676    | 491        | 519           |  
  | File Copy - 3       | 2260   | 923        | 1329          | 
  | Pipe Throughput     | 3026   | 3009       | 1616          |
  | Context Switching   | 1219   | 1093       | 695           |
  | Process Creation    | 947    | 307        | 446           |
  | Shell Scripts - 1   | 2469   | 977        | 989           |
  | Shell Scripts - 2   | 2667   | 1275       | 984           |
  | System Call Overhead| 1592   | 1459       | 692           |
  | Index Score         | 1976   | 1294       | 997           |
  |---------------------+--------+------------+---------------|

For the parallel data, the "System Benchmarks Index Score" shows
a 29% performance improvement as well.

In version 1, migration total time is shown as follows: 

host cpu: Intel(R) Xeon(R) Platinum 8378A
host interface speed: 1000Mb/s
  |-----------------------+----------------+-------------------|
  | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
  |-----------------------+----------------+-------------------|
  | 60                    | 2014           | 2131              |
  | 70                    | 5381           | 12590             |
  | 90                    | 6037           | 33545             |
  | 110                   | 7660           | [*]               |
  |-----------------------+----------------+-------------------|
  [*]: This case means migration does not converge.

In version 2, we post more comprehensive migration total time test data
as follows:

We update N MB of memory on 4 vCPUs, sleeping S us each time 1 MB of
data is updated. Each condition is tested twice; the data is shown
as follows:

  |-----------+--------+--------+----------------+-------------------|
  | ring size | N (MB) | S (us) | Dirtylimit(ms) | Auto-converge(ms) |
  |-----------+--------+--------+----------------+-------------------|
  | 1024      | 1024   | 1000   | 44951          | 191780            |
  | 1024      | 1024   | 1000   | 44546          | 185341            |
  | 1024      | 1024   | 500    | 46505          | 203545            |
  | 1024      | 1024   | 500    | 45469          | 909945            |
  | 1024      | 1024   | 0      | 61858          | [*]               |
  | 1024      | 1024   | 0      | 57922          | [*]               |
  | 1024      | 2048   | 0      | 91982          | [*]               |
  | 1024      | 2048   | 0      | 90388          | [*]               |
  | 2048      | 128    | 10000  | 14511          | 25971             |
  | 2048      | 128    | 10000  | 13472          | 26294             |
  | 2048      | 1024   | 10000  | 44244          | 26294             |
  | 2048      | 1024   | 10000  | 45099          | 157701            |
  | 2048      | 1024   | 500    | 51105          | [*]               |
  | 2048      | 1024   | 500    | 49648          | [*]               |
  | 2048      | 1024   | 0      | 229031         | [*]               |
  | 2048      | 1024   | 0      | 154282         | [*]               |
  |-----------+--------+--------+----------------+-------------------|
  [*]: This case means migration does not converge.

Note that the larger the ring size is, the less sensitively dirty-limit
responds, so an optimal ring size should be chosen based on test data
for VMs of different scales.

We also tested the effect of the "x-vcpu-dirty-limit-period" parameter
on migration total time. Each condition was tested twice; the data is
shown as follows:

  |-----------+--------+--------+-------------+----------------------|
  | ring size | N (MB) | S (us) | Period (ms) | total time (ms)      | 
  |-----------+--------+--------+-------------+----------------------|
  | 2048      | 1024   | 10000  | 100         | [*]                  |
  | 2048      | 1024   | 10000  | 100         | [*]                  |
  | 2048      | 1024   | 10000  | 300         | 156795               |
  | 2048      | 1024   | 10000  | 300         | 118179               |
  | 2048      | 1024   | 10000  | 500         | 44244                |
  | 2048      | 1024   | 10000  | 500         | 45099                |
  | 2048      | 1024   | 10000  | 700         | 41871                |
  | 2048      | 1024   | 10000  | 700         | 42582                |
  | 2048      | 1024   | 10000  | 1000        | 41430                |
  | 2048      | 1024   | 10000  | 1000        | 40383                |
  | 2048      | 1024   | 10000  | 1500        | 42030                |
  | 2048      | 1024   | 10000  | 1500        | 42598                |
  | 2048      | 1024   | 10000  | 2000        | 41694                |
  | 2048      | 1024   | 10000  | 2000        | 42403                |
  | 2048      | 1024   | 10000  | 3000        | 43538                |
  | 2048      | 1024   | 10000  | 3000        | 43010                |
  |-----------+--------+--------+-------------+----------------------|

This shows that x-vcpu-dirty-limit-period is best configured to 1000 ms
under the above conditions.

Please review; any comments and suggestions are much appreciated. Thanks.

Yong

Hyman Huang (11):
  dirtylimit: Fix overflow when computing MB
  softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit"
  kvm-all: Do not allow reap vcpu dirty ring buffer if not ready
  qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
  qapi/migration: Introduce vcpu-dirty-limit parameters
  migration: Introduce dirty-limit capability
  migration: Implement dirty-limit convergence algo
  migration: Export dirty-limit time info
  tests: Add migration dirty-limit capability test
  tests/migration: Introduce dirty-ring-size option into guestperf
  tests/migration: Introduce dirty-limit into guestperf

 accel/kvm/kvm-all.c                     |  36 ++++++++
 include/sysemu/dirtylimit.h             |   2 +
 migration/migration.c                   |  85 ++++++++++++++++++
 migration/migration.h                   |   1 +
 migration/ram.c                         |  62 ++++++++++---
 migration/trace-events                  |   1 +
 monitor/hmp-cmds.c                      |  26 ++++++
 qapi/migration.json                     |  60 +++++++++++--
 softmmu/dirtylimit.c                    |  75 +++++++++++++++-
 tests/migration/guestperf/comparison.py |  24 +++++
 tests/migration/guestperf/engine.py     |  24 ++++-
 tests/migration/guestperf/hardware.py   |   8 +-
 tests/migration/guestperf/progress.py   |  17 +++-
 tests/migration/guestperf/scenario.py   |  11 ++-
 tests/migration/guestperf/shell.py      |  25 +++++-
 tests/qtest/migration-test.c            | 154 ++++++++++++++++++++++++++++++++
 16 files changed, 577 insertions(+), 34 deletions(-)

-- 
1.8.3.1




* [PATCH v2 01/11] dirtylimit: Fix overflow when computing MB
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-29 23:17   ` Peter Xu
  2022-12-03  8:56   ` Markus Armbruster
  2022-11-21 16:26 ` [PATCH v2 02/11] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit" huangy81
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Coverity points out an overflow problem when computing MB:
dirty_ring_size and TARGET_PAGE_SIZE are both 32 bits, so the
multiplication is done as a 32-bit operation, which could
overflow. Simplify the formula.

Meanwhile, fix a spelling mistake in a variable name.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 softmmu/dirtylimit.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index 1266855..940d238 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -236,14 +236,14 @@ static inline int64_t dirtylimit_dirty_ring_full_time(uint64_t dirtyrate)
 {
     static uint64_t max_dirtyrate;
     uint32_t dirty_ring_size = kvm_dirty_ring_size();
-    uint64_t dirty_ring_size_meory_MB =
-        dirty_ring_size * TARGET_PAGE_SIZE >> 20;
+    uint32_t dirty_ring_size_memory_MB =
+        dirty_ring_size >> (20 - TARGET_PAGE_BITS);
 
     if (max_dirtyrate < dirtyrate) {
         max_dirtyrate = dirtyrate;
     }
 
-    return dirty_ring_size_meory_MB * 1000000 / max_dirtyrate;
+    return dirty_ring_size_memory_MB * 1000000ULL / max_dirtyrate;
 }
 
 static inline bool dirtylimit_done(uint64_t quota,
-- 
1.8.3.1




* [PATCH v2 02/11] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit"
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
  2022-11-21 16:26 ` [PATCH v2 01/11] dirtylimit: Fix overflow when computing MB huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-29 23:17   ` Peter Xu
  2022-12-03  9:01   ` Markus Armbruster
  2022-11-21 16:26 ` [PATCH v2 03/11] kvm-all: Do not allow reap vcpu dirty ring buffer if not ready huangy81
                   ` (8 subsequent siblings)
  10 siblings, 2 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

The dirty_rate parameter of the hmp command "set_vcpu_dirty_limit" is
invalid if less than 0, so add a parameter check for it.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 softmmu/dirtylimit.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index 940d238..c42eddd 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -515,6 +515,11 @@ void hmp_set_vcpu_dirty_limit(Monitor *mon, const QDict *qdict)
     int64_t cpu_index = qdict_get_try_int(qdict, "cpu_index", -1);
     Error *err = NULL;
 
+    if (dirty_rate < 0) {
+        monitor_printf(mon, "invalid dirty page limit %ld\n", dirty_rate);
+        return;
+    }
+
     qmp_set_vcpu_dirty_limit(!!(cpu_index != -1), cpu_index, dirty_rate, &err);
     if (err) {
         hmp_handle_error(mon, err);
-- 
1.8.3.1




* [PATCH v2 03/11] kvm-all: Do not allow reap vcpu dirty ring buffer if not ready
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
  2022-11-21 16:26 ` [PATCH v2 01/11] dirtylimit: Fix overflow when computing MB huangy81
  2022-11-21 16:26 ` [PATCH v2 02/11] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit" huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-29 22:42   ` Peter Xu
  2022-11-21 16:26 ` [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

When testing a VM with many vCPUs with the dirtylimit feature, QEMU
crashed due to the assertion in kvm_dirty_ring_reap_one, which asserts
that the vcpu's kvm_dirty_gfns has been allocated and is not NULL.

Because the dirty ring reaper thread races with the QEMU main thread,
the reaper may reap a vcpu's dirty ring buffer before the main thread
has completed vcpu instantiation. So add waiting logic in the reaper
thread so that it starts reaping only after vcpu instantiation is
completed.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 accel/kvm/kvm-all.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index f99b0be..9457715 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1401,6 +1401,35 @@ out:
     kvm_slots_unlock();
 }
 
+/*
+ * test if dirty ring has been initialized by checking if vcpu
+ * has been initialized and gfns was allocated correspondingly.
+ * return true if dirty ring has been initialized, false otherwise.
+ */
+static bool kvm_vcpu_dirty_ring_initialized(void)
+{
+    CPUState *cpu;
+    MachineState *ms = MACHINE(qdev_get_machine());
+    int ncpus = ms->smp.cpus;
+
+    /*
+     * assume vcpus have not been initialized if generation
+     * id is less than the number of vcpus
+     */
+    if (ncpus > cpu_list_generation_id_get()) {
+        return false;
+    }
+
+    CPU_FOREACH(cpu) {
+        if (!cpu->kvm_dirty_gfns) {
+            return false;
+        }
+    }
+
+    return true;
+}
+
+
 static void *kvm_dirty_ring_reaper_thread(void *data)
 {
     KVMState *s = data;
@@ -1410,6 +1439,13 @@ static void *kvm_dirty_ring_reaper_thread(void *data)
 
     trace_kvm_dirty_ring_reaper("init");
 
+retry:
+    /* don't allow reaping dirty ring if ring buffer hasn't been mapped */
+    if (!kvm_vcpu_dirty_ring_initialized()) {
+        sleep(1);
+        goto retry;
+    }
+
     while (true) {
         r->reaper_state = KVM_DIRTY_RING_REAPER_WAIT;
         trace_kvm_dirty_ring_reaper("wait");
-- 
1.8.3.1




* [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
                   ` (2 preceding siblings ...)
  2022-11-21 16:26 ` [PATCH v2 03/11] kvm-all: Do not allow reap vcpu dirty ring buffer if not ready huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-29 22:49   ` Peter Xu
                     ` (2 more replies)
  2022-11-21 16:26 ` [PATCH v2 05/11] qapi/migration: Introduce vcpu-dirty-limit parameters huangy81
                   ` (6 subsequent siblings)
  10 siblings, 3 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Introduce the "x-vcpu-dirty-limit-period" experimental migration
parameter, which is in the range of 1 to 1000 ms and is used to
make the dirty rate calculation period configurable.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 migration/migration.c | 26 ++++++++++++++++++++++++++
 monitor/hmp-cmds.c    |  8 ++++++++
 qapi/migration.json   | 34 +++++++++++++++++++++++++++-------
 3 files changed, 61 insertions(+), 7 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 739bb68..701267c 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -116,6 +116,8 @@
 #define DEFAULT_MIGRATE_ANNOUNCE_ROUNDS    5
 #define DEFAULT_MIGRATE_ANNOUNCE_STEP    100
 
+#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD     500     /* ms */
+
 static NotifierList migration_state_notifiers =
     NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
 
@@ -963,6 +965,9 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
                        s->parameters.block_bitmap_mapping);
     }
 
+    params->has_x_vcpu_dirty_limit_period = true;
+    params->x_vcpu_dirty_limit_period = s->parameters.x_vcpu_dirty_limit_period;
+
     return params;
 }
 
@@ -1564,6 +1569,15 @@ static bool migrate_params_check(MigrationParameters *params, Error **errp)
     }
 #endif
 
+    if (params->has_x_vcpu_dirty_limit_period &&
+        (params->x_vcpu_dirty_limit_period < 1 ||
+         params->x_vcpu_dirty_limit_period > 1000)) {
+        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
+                   "x_vcpu_dirty_limit_period",
+                   "is invalid, it must be in the range of 1 to 1000 ms");
+        return false;
+    }
+
     return true;
 }
 
@@ -1663,6 +1677,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
         dest->has_block_bitmap_mapping = true;
         dest->block_bitmap_mapping = params->block_bitmap_mapping;
     }
+
+    if (params->has_x_vcpu_dirty_limit_period) {
+        dest->x_vcpu_dirty_limit_period = params->x_vcpu_dirty_limit_period;
+    }
 }
 
 static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1785,6 +1803,10 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
             QAPI_CLONE(BitmapMigrationNodeAliasList,
                        params->block_bitmap_mapping);
     }
+    if (params->has_x_vcpu_dirty_limit_period) {
+        s->parameters.x_vcpu_dirty_limit_period =
+            params->x_vcpu_dirty_limit_period;
+    }
 }
 
 void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
@@ -4386,6 +4408,9 @@ static Property migration_properties[] = {
     DEFINE_PROP_STRING("tls-creds", MigrationState, parameters.tls_creds),
     DEFINE_PROP_STRING("tls-hostname", MigrationState, parameters.tls_hostname),
     DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
+    DEFINE_PROP_UINT64("x-vcpu-dirty-limit-period", MigrationState,
+                       parameters.x_vcpu_dirty_limit_period,
+                       DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -4477,6 +4502,7 @@ static void migration_instance_init(Object *obj)
     params->has_tls_creds = true;
     params->has_tls_hostname = true;
     params->has_tls_authz = true;
+    params->has_x_vcpu_dirty_limit_period = true;
 
     qemu_sem_init(&ms->postcopy_pause_sem, 0);
     qemu_sem_init(&ms->postcopy_pause_rp_sem, 0);
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 01b789a..a3170ca 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -513,6 +513,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
                 }
             }
         }
+
+        monitor_printf(mon, "%s: %" PRIu64 " ms\n",
+        MigrationParameter_str(MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD),
+        params->x_vcpu_dirty_limit_period);
     }
 
     qapi_free_MigrationParameters(params);
@@ -1332,6 +1336,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         error_setg(&err, "The block-bitmap-mapping parameter can only be set "
                    "through QMP");
         break;
+    case MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD:
+        p->has_x_vcpu_dirty_limit_period = true;
+        visit_type_size(v, param, &p->x_vcpu_dirty_limit_period, &err);
+        break;
     default:
         assert(0);
     }
diff --git a/qapi/migration.json b/qapi/migration.json
index 88ecf86..5175779 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -776,8 +776,13 @@
 #                        block device name if there is one, and to their node name
 #                        otherwise. (Since 5.2)
 #
+# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
+#                             Should be in the range 1 to 1000ms, defaults to 500ms.
+#                             (Since 7.1)
+#
 # Features:
-# @unstable: Member @x-checkpoint-delay is experimental.
+# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
+#            are experimental.
 #
 # Since: 2.4
 ##
@@ -795,8 +800,9 @@
            'multifd-channels',
            'xbzrle-cache-size', 'max-postcopy-bandwidth',
            'max-cpu-throttle', 'multifd-compression',
-           'multifd-zlib-level' ,'multifd-zstd-level',
-           'block-bitmap-mapping' ] }
+           'multifd-zlib-level', 'multifd-zstd-level',
+           'block-bitmap-mapping',
+           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
 
 ##
 # @MigrateSetParameters:
@@ -941,8 +947,13 @@
 #                        block device name if there is one, and to their node name
 #                        otherwise. (Since 5.2)
 #
+# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
+#                             Should be in the range 1 to 1000ms, defaults to 500ms.
+#                             (Since 7.1)
+#
 # Features:
-# @unstable: Member @x-checkpoint-delay is experimental.
+# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
+#            are experimental.
 #
 # Since: 2.4
 ##
@@ -976,7 +987,9 @@
             '*multifd-compression': 'MultiFDCompression',
             '*multifd-zlib-level': 'uint8',
             '*multifd-zstd-level': 'uint8',
-            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
+            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
+            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
+                                            'features': [ 'unstable' ] } } }
 
 ##
 # @migrate-set-parameters:
@@ -1141,8 +1154,13 @@
 #                        block device name if there is one, and to their node name
 #                        otherwise. (Since 5.2)
 #
+# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
+#                             Should be in the range 1 to 1000ms, defaults to 500ms.
+#                             (Since 7.1)
+#
 # Features:
-# @unstable: Member @x-checkpoint-delay is experimental.
+# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
+#            are experimental.
 #
 # Since: 2.4
 ##
@@ -1174,7 +1192,9 @@
             '*multifd-compression': 'MultiFDCompression',
             '*multifd-zlib-level': 'uint8',
             '*multifd-zstd-level': 'uint8',
-            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
+            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
+            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
+                                            'features': [ 'unstable' ] } } }
 
 ##
 # @query-migrate-parameters:
-- 
1.8.3.1




* [PATCH v2 05/11] qapi/migration: Introduce vcpu-dirty-limit parameters
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
                   ` (3 preceding siblings ...)
  2022-11-21 16:26 ` [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-29 23:58   ` Peter Xu
  2022-12-03  9:13   ` Markus Armbruster
  2022-11-21 16:26 ` [PATCH v2 06/11] migration: Introduce dirty-limit capability huangy81
                   ` (5 subsequent siblings)
  10 siblings, 2 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Introduce the "vcpu-dirty-limit" migration parameter, used
to limit the dirty page rate during live migration.

"vcpu-dirty-limit" and "x-vcpu-dirty-limit-period" are
two dirty-limit-related migration parameters, which can
be set before and during live migration by qmp
migrate-set-parameters.

These two parameters are used to help implement the dirty
page rate limit algorithm of migration.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 migration/migration.c | 23 +++++++++++++++++++++++
 monitor/hmp-cmds.c    |  8 ++++++++
 qapi/migration.json   | 18 +++++++++++++++---
 3 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 701267c..e2aada2 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -117,6 +117,7 @@
 #define DEFAULT_MIGRATE_ANNOUNCE_STEP    100
 
 #define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD     500     /* ms */
+#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT            1       /* MB/s */
 
 static NotifierList migration_state_notifiers =
     NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
@@ -968,6 +969,9 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
     params->has_x_vcpu_dirty_limit_period = true;
     params->x_vcpu_dirty_limit_period = s->parameters.x_vcpu_dirty_limit_period;
 
+    params->has_vcpu_dirty_limit = true;
+    params->vcpu_dirty_limit = s->parameters.vcpu_dirty_limit;
+
     return params;
 }
 
@@ -1578,6 +1582,14 @@ static bool migrate_params_check(MigrationParameters *params, Error **errp)
         return false;
     }
 
+    if (params->has_vcpu_dirty_limit &&
+        (params->vcpu_dirty_limit < 1)) {
+        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
+                   "vcpu_dirty_limit",
+                   "is invalid, it must be greater than 1 MB/s");
+        return false;
+    }
+
     return true;
 }
 
@@ -1681,6 +1693,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
     if (params->has_x_vcpu_dirty_limit_period) {
         dest->x_vcpu_dirty_limit_period = params->x_vcpu_dirty_limit_period;
     }
+
+    if (params->has_vcpu_dirty_limit) {
+        dest->vcpu_dirty_limit = params->vcpu_dirty_limit;
+    }
 }
 
 static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1807,6 +1823,9 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
         s->parameters.x_vcpu_dirty_limit_period =
             params->x_vcpu_dirty_limit_period;
     }
+    if (params->has_vcpu_dirty_limit) {
+        s->parameters.vcpu_dirty_limit = params->vcpu_dirty_limit;
+    }
 }
 
 void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
@@ -4411,6 +4430,9 @@ static Property migration_properties[] = {
     DEFINE_PROP_UINT64("x-vcpu-dirty-limit-period", MigrationState,
                        parameters.x_vcpu_dirty_limit_period,
                        DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD),
+    DEFINE_PROP_UINT64("vcpu-dirty-limit", MigrationState,
+                       parameters.vcpu_dirty_limit,
+                       DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -4503,6 +4525,7 @@ static void migration_instance_init(Object *obj)
     params->has_tls_hostname = true;
     params->has_tls_authz = true;
     params->has_x_vcpu_dirty_limit_period = true;
+    params->has_vcpu_dirty_limit = true;
 
     qemu_sem_init(&ms->postcopy_pause_sem, 0);
     qemu_sem_init(&ms->postcopy_pause_rp_sem, 0);
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index a3170ca..9ad6ee5 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -517,6 +517,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "%s: %" PRIu64 " ms\n",
         MigrationParameter_str(MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD),
         params->x_vcpu_dirty_limit_period);
+
+        monitor_printf(mon, "%s: %" PRIu64 " MB/s\n",
+            MigrationParameter_str(MIGRATION_PARAMETER_VCPU_DIRTY_LIMIT),
+            params->vcpu_dirty_limit);
     }
 
     qapi_free_MigrationParameters(params);
@@ -1340,6 +1344,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         p->has_x_vcpu_dirty_limit_period = true;
         visit_type_size(v, param, &p->x_vcpu_dirty_limit_period, &err);
         break;
+    case MIGRATION_PARAMETER_VCPU_DIRTY_LIMIT:
+        p->has_vcpu_dirty_limit = true;
+        visit_type_size(v, param, &p->vcpu_dirty_limit, &err);
+        break;
     default:
         assert(0);
     }
diff --git a/qapi/migration.json b/qapi/migration.json
index 5175779..dd667dd 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -780,6 +780,9 @@
 #                             Should be in the range 1 to 1000ms, defaults to 500ms.
 #                             (Since 7.1)
 #
+# @vcpu-dirty-limit: Dirty page rate limit (MB/s) during live migration.
+#                    Defaults to 1. (Since 7.1)
+#
 # Features:
 # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
 #            are experimental.
@@ -802,7 +805,8 @@
            'max-cpu-throttle', 'multifd-compression',
            'multifd-zlib-level', 'multifd-zstd-level',
            'block-bitmap-mapping',
-           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
+           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
+           'vcpu-dirty-limit'] }
 
 ##
 # @MigrateSetParameters:
@@ -951,6 +955,9 @@
 #                             Should be in the range 1 to 1000ms, defaults to 500ms.
 #                             (Since 7.1)
 #
+# @vcpu-dirty-limit: Dirty page rate limit (MB/s) during live migration.
+#                    Defaults to 1. (Since 7.1)
+#
 # Features:
 # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
 #            are experimental.
@@ -989,7 +996,8 @@
             '*multifd-zstd-level': 'uint8',
             '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
             '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
-                                            'features': [ 'unstable' ] } } }
+                                            'features': [ 'unstable' ] },
+            '*vcpu-dirty-limit': 'uint64'} }
 
 ##
 # @migrate-set-parameters:
@@ -1158,6 +1166,9 @@
 #                             Should be in the range 1 to 1000ms, defaults to 500ms.
 #                             (Since 7.1)
 #
+# @vcpu-dirty-limit: Dirty page rate limit (MB/s) during live migration.
+#                    Defaults to 1. (Since 7.1)
+#
 # Features:
 # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
 #            are experimental.
@@ -1194,7 +1205,8 @@
             '*multifd-zstd-level': 'uint8',
             '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
             '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
-                                            'features': [ 'unstable' ] } } }
+                                            'features': [ 'unstable' ] },
+            '*vcpu-dirty-limit': 'uint64'} }
 
 ##
 # @query-migrate-parameters:
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 33+ messages in thread
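
To make the new parameter check above concrete, here is a minimal Python model of the vcpu-dirty-limit validation (the helper name `check_vcpu_dirty_limit` and the dict-based parameter representation are inventions for this sketch, not QEMU code):

```python
def check_vcpu_dirty_limit(params):
    """Model of the migrate_params_check() hunk above: when
    vcpu-dirty-limit is present, it must be at least 1 MB/s."""
    limit = params.get("vcpu-dirty-limit")
    if limit is None:
        return True, None  # parameter not supplied, nothing to check
    if limit < 1:
        return False, "vcpu_dirty_limit is invalid, it must be >= 1 MB/s"
    return True, None
```

The same rule is what the HMP `migrate_set_parameter vcpu-dirty-limit` path ends up enforcing, since it funnels into the same check.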

* [PATCH v2 06/11] migration: Introduce dirty-limit capability
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
                   ` (4 preceding siblings ...)
  2022-11-21 16:26 ` [PATCH v2 05/11] qapi/migration: Introduce vcpu-dirty-limit parameters huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-29 23:58   ` Peter Xu
  2022-12-03  9:24   ` Markus Armbruster
  2022-11-21 16:26 ` [PATCH v2 07/11] migration: Implement dirty-limit convergence algo huangy81
                   ` (4 subsequent siblings)
  10 siblings, 2 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Introduce the migration dirty-limit capability, which can
be turned on before live migration to limit the dirty
page rate during live migration.

Introduce the migrate_dirty_limit function to check
whether the dirty-limit capability is enabled during
live migration.

Meanwhile, refactor vcpu_dirty_rate_stat_collect so that
the period can be configured instead of hardcoded.

The dirty-limit capability is similar to auto-converge,
but it uses the dirty page limit instead of the traditional
cpu-throttle to throttle the guest down. To enable this
feature, turn on the dirty-limit capability before live
migration using migrate-set-capabilities, and set the
parameters "x-vcpu-dirty-limit-period" and "vcpu-dirty-limit"
suitably to speed up convergence.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 migration/migration.c | 25 +++++++++++++++++++++++++
 migration/migration.h |  1 +
 qapi/migration.json   |  4 +++-
 softmmu/dirtylimit.c  | 11 ++++++++++-
 4 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index e2aada2..86950a1 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -61,6 +61,7 @@
 #include "sysemu/cpus.h"
 #include "yank_functions.h"
 #include "sysemu/qtest.h"
+#include "sysemu/kvm.h"
 
 #define MAX_THROTTLE  (128 << 20)      /* Migration transfer speed throttling */
 
@@ -1348,6 +1349,20 @@ static bool migrate_caps_check(bool *cap_list,
         }
     }
 
+    if (cap_list[MIGRATION_CAPABILITY_DIRTY_LIMIT]) {
+        if (cap_list[MIGRATION_CAPABILITY_AUTO_CONVERGE]) {
+            error_setg(errp, "dirty-limit conflicts with auto-converge,"
+                       " only one of them can be enabled at a time");
+            return false;
+        }
+
+        if (!kvm_enabled() || !kvm_dirty_ring_enabled()) {
+            error_setg(errp, "dirty-limit requires KVM with accelerator"
+                   " property 'dirty-ring-size' set");
+            return false;
+        }
+    }
+
     return true;
 }
 
@@ -2526,6 +2541,15 @@ bool migrate_auto_converge(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_AUTO_CONVERGE];
 }
 
+bool migrate_dirty_limit(void)
+{
+    MigrationState *s;
+
+    s = migrate_get_current();
+
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_DIRTY_LIMIT];
+}
+
 bool migrate_zero_blocks(void)
 {
     MigrationState *s;
@@ -4455,6 +4479,7 @@ static Property migration_properties[] = {
     DEFINE_PROP_MIG_CAP("x-zero-copy-send",
             MIGRATION_CAPABILITY_ZERO_COPY_SEND),
 #endif
+    DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
 
     DEFINE_PROP_END_OF_LIST(),
 };
diff --git a/migration/migration.h b/migration/migration.h
index cdad8ac..7fbb9f8 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -409,6 +409,7 @@ bool migrate_ignore_shared(void);
 bool migrate_validate_uuid(void);
 
 bool migrate_auto_converge(void);
+bool migrate_dirty_limit(void);
 bool migrate_use_multifd(void);
 bool migrate_pause_before_switchover(void);
 int migrate_multifd_channels(void);
diff --git a/qapi/migration.json b/qapi/migration.json
index dd667dd..af6b2da 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -477,6 +477,8 @@
 #                    will be handled faster.  This is a performance feature and
 #                    should not affect the correctness of postcopy migration.
 #                    (since 7.1)
+# @dirty-limit: If enabled, use the dirty page limit to throttle the
+#               guest during migration. (since 7.1)
 #
 # Features:
 # @unstable: Members @x-colo and @x-ignore-shared are experimental.
@@ -492,7 +494,7 @@
            'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
            { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
            'validate-uuid', 'background-snapshot',
-           'zero-copy-send', 'postcopy-preempt'] }
+           'zero-copy-send', 'postcopy-preempt', 'dirty-limit'] }
 
 ##
 # @MigrationCapabilityStatus:
diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index c42eddd..4537c51 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -23,6 +23,8 @@
 #include "exec/memory.h"
 #include "hw/boards.h"
 #include "sysemu/kvm.h"
+#include "migration/misc.h"
+#include "migration/migration.h"
 #include "trace.h"
 
 /*
@@ -75,11 +77,18 @@ static bool dirtylimit_quit;
 
 static void vcpu_dirty_rate_stat_collect(void)
 {
+    MigrationState *s = migrate_get_current();
     VcpuStat stat;
     int i = 0;
+    int64_t period = DIRTYLIMIT_CALC_TIME_MS;
+
+    if (migrate_dirty_limit() &&
+        migration_is_active(s)) {
+        period = s->parameters.x_vcpu_dirty_limit_period;
+    }
 
     /* calculate vcpu dirtyrate */
-    vcpu_calculate_dirtyrate(DIRTYLIMIT_CALC_TIME_MS,
+    vcpu_calculate_dirtyrate(period,
                              &stat,
                              GLOBAL_DIRTY_LIMIT,
                              false);
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 33+ messages in thread
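
The two capability checks added to migrate_caps_check() above can be modelled in a few lines of Python (a sketch assuming capabilities arrive as a dict; the function name is invented for illustration):

```python
def check_dirty_limit_cap(caps, kvm_enabled, kvm_dirty_ring_enabled):
    """Model of the migrate_caps_check() hunk above: dirty-limit is
    mutually exclusive with auto-converge and requires the KVM dirty
    ring (accelerator property 'dirty-ring-size')."""
    if not caps.get("dirty-limit"):
        return True, None
    if caps.get("auto-converge"):
        return False, "dirty-limit conflicts with auto-converge"
    if not (kvm_enabled and kvm_dirty_ring_enabled):
        return False, ("dirty-limit requires KVM with accelerator "
                       "property 'dirty-ring-size' set")
    return True, None
```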

* [PATCH v2 07/11] migration: Implement dirty-limit convergence algo
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
                   ` (5 preceding siblings ...)
  2022-11-21 16:26 ` [PATCH v2 06/11] migration: Introduce dirty-limit capability huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-29 23:17   ` Peter Xu
  2022-11-21 16:26 ` [PATCH v2 08/11] migration: Export dirty-limit time info huangy81
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Implement the dirty-limit convergence algorithm for live
migration, which is similar to the auto-converge algorithm
but uses the dirty page limit instead of cpu throttling to
make migration converge.

Enable the dirty page limit when dirty_rate_high_cnt
reaches 2 and the dirty-limit capability is enabled;
disable the dirty limit if the migration is cancelled.

Note that the "set_vcpu_dirty_limit" and "cancel_vcpu_dirty_limit"
commands are not allowed during a dirty-limit live migration.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 migration/migration.c  |  1 +
 migration/ram.c        | 62 +++++++++++++++++++++++++++++++++++++++-----------
 migration/trace-events |  1 +
 softmmu/dirtylimit.c   | 22 ++++++++++++++++++
 4 files changed, 73 insertions(+), 13 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 86950a1..096b61a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -240,6 +240,7 @@ void migration_cancel(const Error *error)
     if (error) {
         migrate_set_error(current_migration, error);
     }
+    qmp_cancel_vcpu_dirty_limit(false, -1, NULL);
     migrate_fd_cancel(current_migration);
 }
 
diff --git a/migration/ram.c b/migration/ram.c
index dc1de9d..94516b7 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -45,6 +45,7 @@
 #include "qapi/error.h"
 #include "qapi/qapi-types-migration.h"
 #include "qapi/qapi-events-migration.h"
+#include "qapi/qapi-commands-migration.h"
 #include "qapi/qmp/qerror.h"
 #include "trace.h"
 #include "exec/ram_addr.h"
@@ -57,6 +58,8 @@
 #include "qemu/iov.h"
 #include "multifd.h"
 #include "sysemu/runstate.h"
+#include "sysemu/dirtylimit.h"
+#include "sysemu/kvm.h"
 
 #include "hw/boards.h" /* for machine_dump_guest_core() */
 
@@ -1139,6 +1142,30 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
     }
 }
 
+/*
+ * Enable dirty-limit to throttle down the guest
+ */
+static void migration_dirty_limit_guest(void)
+{
+    static int64_t quota_dirtyrate;
+    MigrationState *s = migrate_get_current();
+
+    /*
+     * If the dirty limit is already enabled and the migration
+     * parameter vcpu-dirty-limit is unchanged, do nothing.
+     */
+    if (dirtylimit_in_service() &&
+        quota_dirtyrate == s->parameters.vcpu_dirty_limit) {
+        return;
+    }
+
+    quota_dirtyrate = s->parameters.vcpu_dirty_limit;
+
+    /* Set or update quota dirty limit */
+    qmp_set_vcpu_dirty_limit(false, -1, quota_dirtyrate, NULL);
+    trace_migration_dirty_limit_guest(quota_dirtyrate);
+}
+
 static void migration_trigger_throttle(RAMState *rs)
 {
     MigrationState *s = migrate_get_current();
@@ -1148,22 +1175,31 @@ static void migration_trigger_throttle(RAMState *rs)
     uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
     uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
 
-    /* During block migration the auto-converge logic incorrectly detects
-     * that ram migration makes no progress. Avoid this by disabling the
-     * throttling logic during the bulk phase of block migration. */
-    if (migrate_auto_converge() && !blk_mig_bulk_active()) {
-        /* The following detection logic can be refined later. For now:
-           Check to see if the ratio between dirtied bytes and the approx.
-           amount of bytes that just got transferred since the last time
-           we were in this routine reaches the threshold. If that happens
-           twice, start or increase throttling. */
-
-        if ((bytes_dirty_period > bytes_dirty_threshold) &&
-            (++rs->dirty_rate_high_cnt >= 2)) {
+    /*
+     * The following detection logic can be refined later. For now:
+     * Check to see if the ratio between dirtied bytes and the approx.
+     * amount of bytes that just got transferred since the last time
+     * we were in this routine reaches the threshold. If that happens
+     * twice, start or increase throttling.
+     */
+
+    if ((bytes_dirty_period > bytes_dirty_threshold) &&
+        (++rs->dirty_rate_high_cnt >= 2)) {
+        rs->dirty_rate_high_cnt = 0;
+        /*
+         * During block migration the auto-converge logic incorrectly detects
+         * that ram migration makes no progress. Avoid this by disabling the
+         * throttling logic during the bulk phase of block migration
+         */
+
+        if (migrate_auto_converge() && !blk_mig_bulk_active()) {
             trace_migration_throttle();
-            rs->dirty_rate_high_cnt = 0;
             mig_throttle_guest_down(bytes_dirty_period,
                                     bytes_dirty_threshold);
+        } else if (migrate_dirty_limit() &&
+                   kvm_dirty_ring_enabled() &&
+                   migration_is_active(s)) {
+            migration_dirty_limit_guest();
         }
     }
 }
diff --git a/migration/trace-events b/migration/trace-events
index 57003ed..33a2666 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -91,6 +91,7 @@ migration_bitmap_sync_start(void) ""
 migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
 migration_bitmap_clear_dirty(char *str, uint64_t start, uint64_t size, unsigned long page) "rb %s start 0x%"PRIx64" size 0x%"PRIx64" page 0x%lx"
 migration_throttle(void) ""
+migration_dirty_limit_guest(int64_t dirtyrate) "guest dirty page rate limit %" PRIi64 " MB/s"
 ram_discard_range(const char *rbname, uint64_t start, size_t len) "%s: start: %" PRIx64 " %zx"
 ram_load_loop(const char *rbname, uint64_t addr, int flags, void *host) "%s: addr: 0x%" PRIx64 " flags: 0x%x host: %p"
 ram_load_postcopy_loop(int channel, uint64_t addr, int flags) "chan=%d addr=0x%" PRIx64 " flags=0x%x"
diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index 4537c51..3f3c405 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -439,6 +439,8 @@ void qmp_cancel_vcpu_dirty_limit(bool has_cpu_index,
                                  int64_t cpu_index,
                                  Error **errp)
 {
+    MigrationState *ms = migrate_get_current();
+
     if (!kvm_enabled() || !kvm_dirty_ring_enabled()) {
         return;
     }
@@ -452,6 +454,15 @@ void qmp_cancel_vcpu_dirty_limit(bool has_cpu_index,
         return;
     }
 
+    if (migration_is_running(ms->state) &&
+        (!qemu_thread_is_self(&ms->thread)) &&
+        migrate_dirty_limit() &&
+        dirtylimit_in_service()) {
+        error_setg(errp, "dirty-limit live migration is running, the"
+                   " dirty page limit cannot be canceled manually");
+        return;
+    }
+
     dirtylimit_state_lock();
 
     if (has_cpu_index) {
@@ -487,6 +498,8 @@ void qmp_set_vcpu_dirty_limit(bool has_cpu_index,
                               uint64_t dirty_rate,
                               Error **errp)
 {
+    MigrationState *ms = migrate_get_current();
+
     if (!kvm_enabled() || !kvm_dirty_ring_enabled()) {
         error_setg(errp, "dirty page limit feature requires KVM with"
                    " accelerator property 'dirty-ring-size' set'");
@@ -503,6 +516,15 @@ void qmp_set_vcpu_dirty_limit(bool has_cpu_index,
         return;
     }
 
+    if (migration_is_running(ms->state) &&
+        (!qemu_thread_is_self(&ms->thread)) &&
+        migrate_dirty_limit() &&
+        dirtylimit_in_service()) {
+        error_setg(errp, "dirty-limit live migration is running, the"
+                   " dirty page limit cannot be configured manually");
+        return;
+    }
+
     dirtylimit_state_lock();
 
     if (!dirtylimit_in_service()) {
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 33+ messages in thread
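
The refactored migration_trigger_throttle() above fires once the dirtied bytes in a sync period exceed the transfer threshold twice in a row; whether it then throttles via auto-converge or via the dirty limit depends on the enabled capability. A small stateful Python model of just the trigger (illustrative only; `threshold_pct` plays the role of the throttle-trigger-threshold migration parameter):

```python
def make_throttle_trigger(threshold_pct=50):
    """Model of migration_trigger_throttle(): report True (start or
    update throttling) once dirtied bytes exceed
    transferred_bytes * threshold_pct / 100 twice in a row."""
    state = {"dirty_rate_high_cnt": 0}

    def update(bytes_dirty_period, bytes_xfer_period):
        bytes_dirty_threshold = bytes_xfer_period * threshold_pct // 100
        if bytes_dirty_period > bytes_dirty_threshold:
            state["dirty_rate_high_cnt"] += 1
            if state["dirty_rate_high_cnt"] >= 2:
                state["dirty_rate_high_cnt"] = 0  # reset, as in the patch
                return True
        return False

    return update
```

Note the counter is only reset when the trigger fires, matching the `++rs->dirty_rate_high_cnt >= 2` logic in the hunk.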

* [PATCH v2 08/11] migration: Export dirty-limit time info
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
                   ` (6 preceding siblings ...)
  2022-11-21 16:26 ` [PATCH v2 07/11] migration: Implement dirty-limit convergence algo huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-30  0:09   ` Peter Xu
                     ` (2 more replies)
  2022-11-21 16:26 ` [PATCH v2 09/11] tests: Add migration dirty-limit capability test huangy81
                   ` (2 subsequent siblings)
  10 siblings, 3 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Export the dirty limit throttle time and the estimated
ring full time, so that the dirty-limit behaviour can be
observed during live migration.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 include/sysemu/dirtylimit.h |  2 ++
 migration/migration.c       | 10 ++++++++++
 monitor/hmp-cmds.c          | 10 ++++++++++
 qapi/migration.json         | 10 +++++++++-
 softmmu/dirtylimit.c        | 31 +++++++++++++++++++++++++++++++
 5 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/include/sysemu/dirtylimit.h b/include/sysemu/dirtylimit.h
index 8d2c1f3..98cc4a6 100644
--- a/include/sysemu/dirtylimit.h
+++ b/include/sysemu/dirtylimit.h
@@ -34,4 +34,6 @@ void dirtylimit_set_vcpu(int cpu_index,
 void dirtylimit_set_all(uint64_t quota,
                         bool enable);
 void dirtylimit_vcpu_execute(CPUState *cpu);
+int64_t dirtylimit_throttle_us_per_full(void);
+int64_t dirtylimit_us_ring_full(void);
 #endif
diff --git a/migration/migration.c b/migration/migration.c
index 096b61a..886c25d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -62,6 +62,7 @@
 #include "yank_functions.h"
 #include "sysemu/qtest.h"
 #include "sysemu/kvm.h"
+#include "sysemu/dirtylimit.h"
 
 #define MAX_THROTTLE  (128 << 20)      /* Migration transfer speed throttling */
 
@@ -1112,6 +1113,15 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
         info->ram->remaining = ram_bytes_remaining();
         info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
     }
+
+    if (migrate_dirty_limit() && dirtylimit_in_service()) {
+        info->has_dirty_limit_throttle_us_per_full = true;
+        info->dirty_limit_throttle_us_per_full =
+                            dirtylimit_throttle_us_per_full();
+
+        info->has_dirty_limit_us_ring_full = true;
+        info->dirty_limit_us_ring_full = dirtylimit_us_ring_full();
+    }
 }
 
 static void populate_disk_info(MigrationInfo *info)
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 9ad6ee5..9d02baf 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -339,6 +339,16 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
                        info->cpu_throttle_percentage);
     }
 
+    if (info->has_dirty_limit_throttle_us_per_full) {
+        monitor_printf(mon, "dirty-limit throttle time: %" PRIi64 " us\n",
+                       info->dirty_limit_throttle_us_per_full);
+    }
+
+    if (info->has_dirty_limit_us_ring_full) {
+        monitor_printf(mon, "dirty-limit ring full time: %" PRIi64 " us\n",
+                       info->dirty_limit_us_ring_full);
+    }
+
     if (info->has_postcopy_blocktime) {
         monitor_printf(mon, "postcopy blocktime: %u\n",
                        info->postcopy_blocktime);
diff --git a/qapi/migration.json b/qapi/migration.json
index af6b2da..62db5cb 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -242,6 +242,12 @@
 #                   Present and non-empty when migration is blocked.
 #                   (since 6.0)
 #
+# @dirty-limit-throttle-us-per-full: Throttle time (us) per dirty ring full
+#                                    round (since 7.1)
+#
+# @dirty-limit-us-ring-full: Estimated time (us) for the dirty ring to
+#                            become full at the current dirty rate. (since 7.1)
+#
 # Since: 0.14
 ##
 { 'struct': 'MigrationInfo',
@@ -259,7 +265,9 @@
            '*postcopy-blocktime' : 'uint32',
            '*postcopy-vcpu-blocktime': ['uint32'],
            '*compression': 'CompressionStats',
-           '*socket-address': ['SocketAddress'] } }
+           '*socket-address': ['SocketAddress'],
+           '*dirty-limit-throttle-us-per-full': 'int64',
+           '*dirty-limit-us-ring-full': 'int64'} }
 
 ##
 # @query-migrate:
diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index 3f3c405..9d1df9b 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -573,6 +573,37 @@ static struct DirtyLimitInfo *dirtylimit_query_vcpu(int cpu_index)
     return info;
 }
 
+/* Pick up first vcpu throttle time by default */
+int64_t dirtylimit_throttle_us_per_full(void)
+{
+    CPUState *cpu = first_cpu;
+    return cpu->throttle_us_per_full;
+}
+
+/*
+ * Estimate dirty ring full time under current dirty page rate.
+ * Return -1 if guest doesn't dirty memory.
+ */
+int64_t dirtylimit_us_ring_full(void)
+{
+    CPUState *cpu;
+    uint64_t curr_rate = 0;
+    int nvcpus = 0;
+
+    CPU_FOREACH(cpu) {
+        if (cpu->running) {
+            nvcpus++;
+            curr_rate += vcpu_dirty_rate_get(cpu->cpu_index);
+        }
+    }
+
+    if (!curr_rate || !nvcpus) {
+        return -1;
+    }
+
+    return dirtylimit_dirty_ring_full_time(curr_rate / nvcpus);
+}
+
 static struct DirtyLimitInfoList *dirtylimit_query_all(void)
 {
     int i, index;
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 33+ messages in thread
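
The estimate in dirtylimit_us_ring_full() above averages the dirty page rate over the running vcpus and converts it into a ring-full time. A Python sketch of that calculation, under the assumption that the conversion is ring capacity divided by dirty rate (the exact formula of dirtylimit_dirty_ring_full_time() is not shown in this patch):

```python
def ring_full_time_us(running_vcpu_rates_mbps,
                      ring_size=4096, page_size=4096):
    """Model of dirtylimit_us_ring_full(): average the dirty page
    rate (MB/s) over running vcpus and estimate how long (in us)
    the dirty ring takes to fill. Returns -1 if the guest dirties
    no memory, as the C helper does."""
    nvcpus = len(running_vcpu_rates_mbps)
    curr_rate = sum(running_vcpu_rates_mbps)
    if not curr_rate or not nvcpus:
        return -1
    avg_rate = curr_rate / nvcpus            # MB/s
    ring_bytes = ring_size * page_size       # ring capacity in bytes
    bytes_per_us = avg_rate * 1024 * 1024 / 1e6
    return int(ring_bytes / bytes_per_us)
```

The `ring_size` and `page_size` defaults are illustrative; the real values come from the KVM `dirty-ring-size` property and the target page size.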

* [PATCH v2 09/11] tests: Add migration dirty-limit capability test
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
                   ` (7 preceding siblings ...)
  2022-11-21 16:26 ` [PATCH v2 08/11] migration: Export dirty-limit time info huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-21 16:26 ` [PATCH v2 10/11] tests/migration: Introduce dirty-ring-size option into guestperf huangy81
  2022-11-21 16:26 ` [PATCH v2 11/11] tests/migration: Introduce dirty-limit " huangy81
  10 siblings, 0 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Add a migration dirty-limit capability test, run when the
kernel supports the dirty ring.

The dirty-limit capability introduces two parameters,
x-vcpu-dirty-limit-period and vcpu-dirty-limit, to
implement live migration with a dirty page limit.

The test case does the following things:
1. start src and dst vms and enable the dirty-limit capability
2. start a migration, then cancel it and check that the
   dirty limit stops working
3. restart the dst vm
4. start a migration with the dirty-limit capability enabled
5. check that the migration satisfies the convergence
   condition during the pre-switchover phase

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 tests/qtest/migration-test.c | 154 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 154 insertions(+)

diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 442998d..baa614c 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -2422,6 +2422,158 @@ static void test_vcpu_dirty_limit(void)
     dirtylimit_stop_vm(vm);
 }
 
+static void migrate_dirty_limit_wait_showup(QTestState *from,
+                                            const int64_t period,
+                                            const int64_t value)
+{
+    /* Enable dirty limit capability */
+    migrate_set_capability(from, "dirty-limit", true);
+
+    /* Set dirty limit parameters */
+    migrate_set_parameter_int(from, "x-vcpu-dirty-limit-period", period);
+    migrate_set_parameter_int(from, "vcpu-dirty-limit", value);
+
+    /* Make sure migrate can't converge */
+    migrate_ensure_non_converge(from);
+
+    /* To check limit rate after precopy */
+    migrate_set_capability(from, "pause-before-switchover", true);
+
+    /* Wait for the serial output from the source */
+    wait_for_serial("src_serial");
+}
+
+/*
+ * This test does:
+ *  source               target
+ *                       migrate_incoming
+ *     migrate
+ *     migrate_cancel
+ *                       restart target
+ *     migrate
+ *
+ *  And see that if dirty limit works correctly
+ */
+static void test_migrate_dirty_limit(void)
+{
+    g_autofree char *uri = g_strdup_printf("unix:%s/migsocket", tmpfs);
+    QTestState *from, *to;
+    int64_t remaining, throttle_us_per_full;
+    /*
+     * We want the test to be stable and as fast as possible.
+     * E.g., with 1Gb/s bandwidth the migration may converge even
+     * without the dirty limit, so we need to decrease the bandwidth.
+     */
+    const int64_t dirtylimit_period = 1000, dirtylimit_value = 50;
+    const int64_t max_bandwidth = 400000000; /* ~400Mb/s */
+    const int64_t downtime_limit = 250; /* 250ms */
+    /*
+     * We migrate through unix-socket (> 500Mb/s).
+     * Thus, expected migration speed ~= bandwidth limit (< 500Mb/s).
+     * So, we can predict expected_threshold
+     */
+    const int64_t expected_threshold = max_bandwidth * downtime_limit / 1000;
+    int max_try_count = 10;
+    MigrateCommon args = {
+        .start = {
+            .hide_stderr = true,
+            .use_dirty_ring = true,
+        },
+        .listen_uri = uri,
+        .connect_uri = uri,
+    };
+
+    /* Start src, dst vm */
+    if (test_migrate_start(&from, &to, args.listen_uri, &args.start)) {
+        return;
+    }
+
+    /* Prepare for dirty limit migration and wait src vm show up */
+    migrate_dirty_limit_wait_showup(from, dirtylimit_period, dirtylimit_value);
+
+    /* Start migrate */
+    migrate_qmp(from, uri, "{}");
+
+    /* Wait for dirty limit throttle begin */
+    throttle_us_per_full = 0;
+    while (throttle_us_per_full == 0) {
+        throttle_us_per_full =
+            read_migrate_property_int(from, "dirty-limit-throttle-us-per-full");
+        usleep(100);
+        g_assert_false(got_stop);
+    }
+
+    /* Now cancel migrate and wait for dirty limit throttle switch off */
+    migrate_cancel(from);
+    wait_for_migration_status(from, "cancelled", NULL);
+
+    /* Check if dirty limit throttle switched off, set timeout 1ms */
+    do {
+        throttle_us_per_full =
+            read_migrate_property_int(from, "dirty-limit-throttle-us-per-full");
+        usleep(100);
+        g_assert_false(got_stop);
+    } while (throttle_us_per_full != 0 && --max_try_count);
+
+    /* Assert dirty limit is not in service */
+    g_assert_cmpint(throttle_us_per_full, ==, 0);
+
+    args = (MigrateCommon) {
+        .start = {
+            .only_target = true,
+            .use_dirty_ring = true,
+        },
+        .listen_uri = uri,
+        .connect_uri = uri,
+    };
+
+    /* Restart dst vm; src vm has already shown up, so no need to wait */
+    if (test_migrate_start(&from, &to, args.listen_uri, &args.start)) {
+        return;
+    }
+
+    /* Start migrate */
+    migrate_qmp(from, uri, "{}");
+
+    /* Wait for dirty limit throttle begin */
+    throttle_us_per_full = 0;
+    while (throttle_us_per_full == 0) {
+        throttle_us_per_full =
+            read_migrate_property_int(from, "dirty-limit-throttle-us-per-full");
+        usleep(100);
+        g_assert_false(got_stop);
+    }
+
+    /*
+     * The dirty limit rate should equal the return value of
+     * query-vcpu-dirty-limit when the dirty-limit capability is set
+     */
+    g_assert_cmpint(dirtylimit_value, ==, get_limit_rate(from));
+
+    /* Now, we have tested if dirty limit works, let it converge */
+    migrate_set_parameter_int(from, "downtime-limit", downtime_limit);
+    migrate_set_parameter_int(from, "max-bandwidth", max_bandwidth);
+
+    /*
+     * Wait for pre-switchover status to check if migration
+     * satisfy the convergence condition
+     */
+    wait_for_migration_status(from, "pre-switchover", NULL);
+
+    remaining = read_ram_property_int(from, "remaining");
+    g_assert_cmpint(remaining, <,
+                    (expected_threshold + expected_threshold / 100));
+
+    migrate_continue(from, "pre-switchover");
+
+    qtest_qmp_eventwait(to, "RESUME");
+
+    wait_for_serial("dest_serial");
+    wait_for_migration_complete(from);
+
+    test_migrate_end(from, to, true);
+}
+
 static bool kvm_dirty_ring_supported(void)
 {
 #if defined(__linux__) && defined(HOST_X86_64)
@@ -2592,6 +2744,8 @@ int main(int argc, char **argv)
                        test_precopy_unix_dirty_ring);
         qtest_add_func("/migration/vcpu_dirty_limit",
                        test_vcpu_dirty_limit);
+        qtest_add_func("/migration/dirty_limit",
+                       test_migrate_dirty_limit);
     }
 
     ret = g_test_run();
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 33+ messages in thread
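
The convergence check at the end of the test boils down to simple arithmetic: with max-bandwidth B bytes/s and downtime-limit D ms, migration can enter pre-switchover once the remaining RAM is below roughly B*D/1000, plus the 1% margin the test allows. A sketch of that check (function names are invented for this model):

```python
def expected_threshold(max_bandwidth, downtime_limit_ms):
    """Bytes that can still be transferred within the downtime limit."""
    return max_bandwidth * downtime_limit_ms // 1000

def converged(remaining_bytes, max_bandwidth, downtime_limit_ms):
    """Model of the test's final assertion: remaining RAM must be
    below the expected threshold plus a 1% margin."""
    t = expected_threshold(max_bandwidth, downtime_limit_ms)
    return remaining_bytes < t + t // 100
```

With the test's values (400 MB/s, 250 ms) the threshold is 100 MB, so the assertion passes only when less than about 101 MB of RAM remains at pre-switchover.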

* [PATCH v2 10/11] tests/migration: Introduce dirty-ring-size option into guestperf
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
                   ` (8 preceding siblings ...)
  2022-11-21 16:26 ` [PATCH v2 09/11] tests: Add migration dirty-limit capability test huangy81
@ 2022-11-21 16:26 ` huangy81
  2022-11-21 16:26 ` [PATCH v2 11/11] tests/migration: Introduce dirty-limit " huangy81
  10 siblings, 0 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

The guestperf tool does not enable the dirty ring feature by default
when testing migration.

To support dirty ring migration performance tests, introduce the
dirty-ring-size option into the guestperf tools; valid values are
in the range [1024, 65536].

To set the dirty ring size to 4096 during a migration test:
$ ./tests/migration/guestperf.py --dirty-ring-size 4096 xxx
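The mapping from the new option to the QEMU command line can be sketched
as follows; this is a simplified, standalone illustration of the engine.py
logic in the diff below (the helper name is hypothetical):

```python
def build_accel_args(dirty_ring_size):
    """Mirror of the -accel handling added to engine.py (simplified)."""
    argv = []
    if dirty_ring_size:
        # Non-zero size enables the KVM dirty ring on the accelerator.
        argv.extend(["-accel", "kvm,dirty-ring-size=%s" % dirty_ring_size])
    else:
        # Default: plain KVM, dirty ring disabled (bitmap-based tracking).
        argv.extend(["-accel", "kvm"])
    return argv

print(build_accel_args(4096))  # ['-accel', 'kvm,dirty-ring-size=4096']
print(build_accel_args(0))     # ['-accel', 'kvm']
```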

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 tests/migration/guestperf/engine.py   | 7 ++++++-
 tests/migration/guestperf/hardware.py | 8 ++++++--
 tests/migration/guestperf/shell.py    | 7 ++++++-
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/tests/migration/guestperf/engine.py b/tests/migration/guestperf/engine.py
index 59fca2c..d7b75b9 100644
--- a/tests/migration/guestperf/engine.py
+++ b/tests/migration/guestperf/engine.py
@@ -303,7 +303,6 @@ def _get_common_args(self, hardware, tunnelled=False):
             cmdline = "'" + cmdline + "'"
 
         argv = [
-            "-accel", "kvm",
             "-cpu", "host",
             "-kernel", self._kernel,
             "-initrd", self._initrd,
@@ -314,6 +313,12 @@ def _get_common_args(self, hardware, tunnelled=False):
             "-smp", str(hardware._cpus),
         ]
 
+        if hardware._dirty_ring_size:
+            argv.extend(["-accel", "kvm,dirty-ring-size=%s" %
+                         hardware._dirty_ring_size])
+        else:
+            argv.extend(["-accel", "kvm"])
+
         if self._debug:
             argv.extend(["-device", "sga"])
 
diff --git a/tests/migration/guestperf/hardware.py b/tests/migration/guestperf/hardware.py
index 3145785..f779cc0 100644
--- a/tests/migration/guestperf/hardware.py
+++ b/tests/migration/guestperf/hardware.py
@@ -23,7 +23,8 @@ def __init__(self, cpus=1, mem=1,
                  src_cpu_bind=None, src_mem_bind=None,
                  dst_cpu_bind=None, dst_mem_bind=None,
                  prealloc_pages = False,
-                 huge_pages=False, locked_pages=False):
+                 huge_pages=False, locked_pages=False,
+                 dirty_ring_size=0):
         self._cpus = cpus
         self._mem = mem # GiB
         self._src_mem_bind = src_mem_bind # List of NUMA nodes
@@ -33,6 +34,7 @@ def __init__(self, cpus=1, mem=1,
         self._prealloc_pages = prealloc_pages
         self._huge_pages = huge_pages
         self._locked_pages = locked_pages
+        self._dirty_ring_size = dirty_ring_size
 
 
     def serialize(self):
@@ -46,6 +48,7 @@ def serialize(self):
             "prealloc_pages": self._prealloc_pages,
             "huge_pages": self._huge_pages,
             "locked_pages": self._locked_pages,
+            "dirty_ring_size": self._dirty_ring_size,
         }
 
     @classmethod
@@ -59,4 +62,5 @@ def deserialize(cls, data):
             data["dst_mem_bind"],
             data["prealloc_pages"],
             data["huge_pages"],
-            data["locked_pages"])
+            data["locked_pages"],
+            data["dirty_ring_size"])
diff --git a/tests/migration/guestperf/shell.py b/tests/migration/guestperf/shell.py
index 8a809e3..559616f 100644
--- a/tests/migration/guestperf/shell.py
+++ b/tests/migration/guestperf/shell.py
@@ -60,6 +60,8 @@ def __init__(self):
         parser.add_argument("--prealloc-pages", dest="prealloc_pages", default=False)
         parser.add_argument("--huge-pages", dest="huge_pages", default=False)
         parser.add_argument("--locked-pages", dest="locked_pages", default=False)
+        parser.add_argument("--dirty-ring-size", dest="dirty_ring_size",
+                            default=0, type=int)
 
         self._parser = parser
 
@@ -89,7 +91,10 @@ def split_map(value):
 
                         locked_pages=args.locked_pages,
                         huge_pages=args.huge_pages,
-                        prealloc_pages=args.prealloc_pages)
+                        prealloc_pages=args.prealloc_pages,
+
+                        dirty_ring_size=args.dirty_ring_size)
+
 
 
 class Shell(BaseShell):
-- 
1.8.3.1




* [PATCH v2 11/11] tests/migration: Introduce dirty-limit into guestperf
  2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
                   ` (9 preceding siblings ...)
  2022-11-21 16:26 ` [PATCH v2 10/11] tests/migration: Introduce dirty-ring-size option into guestperf huangy81
@ 2022-11-21 16:26 ` huangy81
  10 siblings, 0 replies; 33+ messages in thread
From: huangy81 @ 2022-11-21 16:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson,
	Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

The guestperf tool does not currently cover dirty-limit migration;
add support for this feature.

To enable dirty-limit, set x-vcpu-dirty-limit-period
to 500ms and vcpu-dirty-limit to 10MB/s:
$ ./tests/migration/guestperf.py \
    --dirty-limit --x-vcpu-dirty-limit-period 500 \
    --vcpu-dirty-limit 10 --output output.json

To run the entire standardized set of dirty-limit-enabled
comparisons with unix migration:
$ ./tests/migration/guestperf-batch.py \
    --dst-host localhost --transport unix \
    --filter compr-dirty-limit* --output outputdir
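The QMP sequence the tool issues on the source when --dirty-limit is
enabled can be sketched as follows; the command and parameter names are
the real QMP ones used in the engine.py hunk below, but the helper
function itself is a hypothetical stand-in for the QMP client calls:

```python
def dirty_limit_qmp_commands(period_ms=500, limit_mb=1):
    """Sketch of the commands guestperf sends before 'migrate'."""
    return [
        # Enable the dirty-limit convergence algorithm.
        ("migrate-set-capabilities",
         {"capabilities": [{"capability": "dirty-limit", "state": True}]}),
        # Configure the dirty rate calculation period (ms).
        ("migrate-set-parameters", {"x-vcpu-dirty-limit-period": period_ms}),
        # Configure the per-vCPU dirty page rate limit (MB/s).
        ("migrate-set-parameters", {"vcpu-dirty-limit": limit_mb}),
    ]

for name, args in dirty_limit_qmp_commands():
    print(name, args)
```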

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 tests/migration/guestperf/comparison.py | 24 ++++++++++++++++++++++++
 tests/migration/guestperf/engine.py     | 17 +++++++++++++++++
 tests/migration/guestperf/progress.py   | 17 +++++++++++++++--
 tests/migration/guestperf/scenario.py   | 11 ++++++++++-
 tests/migration/guestperf/shell.py      | 18 +++++++++++++++++-
 5 files changed, 83 insertions(+), 4 deletions(-)

diff --git a/tests/migration/guestperf/comparison.py b/tests/migration/guestperf/comparison.py
index c03b3f6..ad403f9 100644
--- a/tests/migration/guestperf/comparison.py
+++ b/tests/migration/guestperf/comparison.py
@@ -135,4 +135,28 @@ def __init__(self, name, scenarios):
         Scenario("compr-multifd-channels-64",
                  multifd=True, multifd_channels=64),
     ]),
+
+
+    # Looking at effect of dirty-limit with
+    # varying x_vcpu_dirty_limit_period
+    Comparison("compr-dirty-limit-period", scenarios = [
+        Scenario("compr-dirty-limit-period-100",
+                 dirty_limit=True, x_vcpu_dirty_limit_period=100),
+        Scenario("compr-dirty-limit-period-500",
+                 dirty_limit=True, x_vcpu_dirty_limit_period=500),
+        Scenario("compr-dirty-limit-period-1000",
+                 dirty_limit=True, x_vcpu_dirty_limit_period=1000),
+    ]),
+
+
+    # Looking at effect of dirty-limit with
+    # varying vcpu_dirty_limit
+    Comparison("compr-dirty-limit", scenarios = [
+        Scenario("compr-dirty-limit-10MB",
+                 dirty_limit=True, vcpu_dirty_limit=10),
+        Scenario("compr-dirty-limit-20MB",
+                 dirty_limit=True, vcpu_dirty_limit=20),
+        Scenario("compr-dirty-limit-50MB",
+                 dirty_limit=True, vcpu_dirty_limit=50),
+    ]),
 ]
diff --git a/tests/migration/guestperf/engine.py b/tests/migration/guestperf/engine.py
index d7b75b9..e3940bf 100644
--- a/tests/migration/guestperf/engine.py
+++ b/tests/migration/guestperf/engine.py
@@ -102,6 +102,8 @@ def _migrate_progress(self, vm):
             info.get("expected-downtime", 0),
             info.get("setup-time", 0),
             info.get("cpu-throttle-percentage", 0),
+            info.get("dirty-limit-throttle-us-per-full", 0),
+            info.get("dirty-limit-us-ring-full", 0),
         )
 
     def _migrate(self, hardware, scenario, src, dst, connect_uri):
@@ -203,6 +205,21 @@ def _migrate(self, hardware, scenario, src, dst, connect_uri):
             resp = dst.command("migrate-set-parameters",
                                multifd_channels=scenario._multifd_channels)
 
+        if scenario._dirty_limit:
+            if not hardware._dirty_ring_size:
+                raise Exception("dirty ring size must be configured when "
+                                "testing dirty limit migration")
+
+            resp = src.command("migrate-set-capabilities",
+                               capabilities = [
+                                   { "capability": "dirty-limit",
+                                     "state": True }
+                               ])
+            resp = src.command("migrate-set-parameters",
+                x_vcpu_dirty_limit_period=scenario._x_vcpu_dirty_limit_period)
+            resp = src.command("migrate-set-parameters",
+                               vcpu_dirty_limit=scenario._vcpu_dirty_limit)
+
         resp = src.command("migrate", uri=connect_uri)
 
         post_copy = False
diff --git a/tests/migration/guestperf/progress.py b/tests/migration/guestperf/progress.py
index ab1ee57..dd5d86b 100644
--- a/tests/migration/guestperf/progress.py
+++ b/tests/migration/guestperf/progress.py
@@ -81,7 +81,9 @@ def __init__(self,
                  downtime,
                  downtime_expected,
                  setup_time,
-                 throttle_pcent):
+                 throttle_pcent,
+                 dirty_limit_throttle_us_per_full,
+                 dirty_limit_us_ring_full):
 
         self._status = status
         self._ram = ram
@@ -91,6 +93,11 @@ def __init__(self,
         self._downtime_expected = downtime_expected
         self._setup_time = setup_time
         self._throttle_pcent = throttle_pcent
+        self._dirty_limit_throttle_us_per_full = \
+            dirty_limit_throttle_us_per_full
+        self._dirty_limit_us_ring_full = \
+            dirty_limit_us_ring_full
+
 
     def serialize(self):
         return {
@@ -102,6 +109,10 @@ def serialize(self):
             "downtime_expected": self._downtime_expected,
             "setup_time": self._setup_time,
             "throttle_pcent": self._throttle_pcent,
+            "dirty_limit_throttle_time_per_full":
+                self._dirty_limit_throttle_us_per_full,
+            "dirty_limit_ring_full_time":
+                self._dirty_limit_us_ring_full,
         }
 
     @classmethod
@@ -114,4 +125,6 @@ def deserialize(cls, data):
             data["downtime"],
             data["downtime_expected"],
             data["setup_time"],
-            data["throttle_pcent"])
+            data["throttle_pcent"],
+            data["dirty_limit_throttle_time_per_full"],
+            data["dirty_limit_ring_full_time"])
diff --git a/tests/migration/guestperf/scenario.py b/tests/migration/guestperf/scenario.py
index de70d9b..154c4f5 100644
--- a/tests/migration/guestperf/scenario.py
+++ b/tests/migration/guestperf/scenario.py
@@ -30,7 +30,9 @@ def __init__(self, name,
                  auto_converge=False, auto_converge_step=10,
                  compression_mt=False, compression_mt_threads=1,
                  compression_xbzrle=False, compression_xbzrle_cache=10,
-                 multifd=False, multifd_channels=2):
+                 multifd=False, multifd_channels=2,
+                 dirty_limit=False, x_vcpu_dirty_limit_period=500,
+                 vcpu_dirty_limit=1):
 
         self._name = name
 
@@ -60,6 +62,10 @@ def __init__(self, name,
         self._multifd = multifd
         self._multifd_channels = multifd_channels
 
+        self._dirty_limit = dirty_limit
+        self._x_vcpu_dirty_limit_period = x_vcpu_dirty_limit_period
+        self._vcpu_dirty_limit = vcpu_dirty_limit
+
     def serialize(self):
         return {
             "name": self._name,
@@ -79,6 +85,9 @@ def serialize(self):
             "compression_xbzrle_cache": self._compression_xbzrle_cache,
             "multifd": self._multifd,
             "multifd_channels": self._multifd_channels,
+            "dirty_limit": self._dirty_limit,
+            "x_vcpu_dirty_limit_period": self._x_vcpu_dirty_limit_period,
+            "vcpu_dirty_limit": self._vcpu_dirty_limit,
         }
 
     @classmethod
diff --git a/tests/migration/guestperf/shell.py b/tests/migration/guestperf/shell.py
index 559616f..23fe895 100644
--- a/tests/migration/guestperf/shell.py
+++ b/tests/migration/guestperf/shell.py
@@ -132,6 +132,17 @@ def __init__(self):
         parser.add_argument("--multifd-channels", dest="multifd_channels",
                             default=2, type=int)
 
+        parser.add_argument("--dirty-limit", dest="dirty_limit", default=False,
+                            action="store_true")
+
+        parser.add_argument("--x-vcpu-dirty-limit-period",
+                            dest="x_vcpu_dirty_limit_period",
+                            default=500, type=int)
+
+        parser.add_argument("--vcpu-dirty-limit",
+                            dest="vcpu_dirty_limit",
+                            default=1, type=int)
+
     def get_scenario(self, args):
         return Scenario(name="perfreport",
                         downtime=args.downtime,
@@ -155,7 +166,12 @@ def get_scenario(self, args):
                         compression_xbzrle_cache=args.compression_xbzrle_cache,
 
                         multifd=args.multifd,
-                        multifd_channels=args.multifd_channels)
+                        multifd_channels=args.multifd_channels,
+
+                        dirty_limit=args.dirty_limit,
+                        x_vcpu_dirty_limit_period=
+                            args.x_vcpu_dirty_limit_period,
+                        vcpu_dirty_limit=args.vcpu_dirty_limit)
 
     def run(self, argv):
         args = self._parser.parse_args(argv)
-- 
1.8.3.1




* Re: [PATCH v2 03/11] kvm-all: Do not allow reap vcpu dirty ring buffer if not ready
  2022-11-21 16:26 ` [PATCH v2 03/11] kvm-all: Do not allow reap vcpu dirty ring buffer if not ready huangy81
@ 2022-11-29 22:42   ` Peter Xu
  2022-11-30  3:11     ` Hyman Huang
  0 siblings, 1 reply; 33+ messages in thread
From: Peter Xu @ 2022-11-29 22:42 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

Hi, Yong,

On Mon, Nov 21, 2022 at 11:26:35AM -0500, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> When testing a VM with a large number of vCPUs with the dirtylimit
> feature, QEMU crashed due to the assertion in kvm_dirty_ring_reap_one,
> which asserts that the vcpu's kvm_dirty_gfns has been allocated and is
> not NULL.
> 
> Because the dirty ring reaper thread races with the QEMU main thread,
> the reaper may reap a vcpu's dirty ring buffer before the main thread
> has completed vcpu instantiation. So add waiting logic in the reaper
> thread so it does not start reaping until vcpu instantiation is
> completed.
> 
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> ---
>  accel/kvm/kvm-all.c | 36 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 36 insertions(+)
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index f99b0be..9457715 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -1401,6 +1401,35 @@ out:
>      kvm_slots_unlock();
>  }
>  
> +/*
> + * Test if the dirty ring has been initialized by checking if the
> + * vcpus have been initialized and the gfns were allocated accordingly.
> + * Return true if the dirty ring has been initialized, false otherwise.
> + */
> +static bool kvm_vcpu_dirty_ring_initialized(void)
> +{
> +    CPUState *cpu;
> +    MachineState *ms = MACHINE(qdev_get_machine());
> +    int ncpus = ms->smp.cpus;
> +
> +    /*
> +     * Assume the vcpus have not been initialized if the generation
> +     * id is less than the number of vcpus.
> +     */
> +    if (ncpus > cpu_list_generation_id_get()) {
> +        return false;
> +    }
> +
> +    CPU_FOREACH(cpu) {
> +        if (!cpu->kvm_dirty_gfns) {
> +            return false;
> +        }
> +    }
> +
> +    return true;
> +}
> +
> +
>  static void *kvm_dirty_ring_reaper_thread(void *data)
>  {
>      KVMState *s = data;
> @@ -1410,6 +1439,13 @@ static void *kvm_dirty_ring_reaper_thread(void *data)
>  
>      trace_kvm_dirty_ring_reaper("init");
>  
> +retry:
> +    /* don't allow reaping dirty ring if ring buffer hasn't been mapped */
> +    if (!kvm_vcpu_dirty_ring_initialized()) {
> +        sleep(1);

The sleep here is probably not necessary.  Could you instead have a look at
the other much simpler patch?  Here:

https://lore.kernel.org/qemu-devel/20220927154653.77296-1-peterx@redhat.com/
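The event-based alternative to the sleep poll can be sketched with a
condition variable; this is an illustrative Python sketch of the
synchronization pattern only, not the actual QEMU code (which would
use its QemuCond/QemuMutex wrappers in C):

```python
import threading

ready = threading.Condition()
all_rings_mapped = False

def vcpu_setup():
    """Runs on the side that finishes mapping the last vcpu dirty ring."""
    global all_rings_mapped
    with ready:
        all_rings_mapped = True
        ready.notify_all()   # wake the reaper exactly once setup is done

def reaper_wait():
    """Reaper side: block until setup signals, instead of polling sleep(1)."""
    with ready:
        while not all_rings_mapped:
            ready.wait()

t = threading.Thread(target=vcpu_setup)
t.start()
reaper_wait()
t.join()
print("reaper proceeds:", all_rings_mapped)  # reaper proceeds: True
```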

> +        goto retry;
> +    }
> +
>      while (true) {
>          r->reaper_state = KVM_DIRTY_RING_REAPER_WAIT;
>          trace_kvm_dirty_ring_reaper("wait");
> -- 
> 1.8.3.1
> 
> 

-- 
Peter Xu




* Re: [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
  2022-11-21 16:26 ` [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
@ 2022-11-29 22:49   ` Peter Xu
  2022-12-03  9:06   ` Markus Armbruster
  2022-12-03  9:11   ` Markus Armbruster
  2 siblings, 0 replies; 33+ messages in thread
From: Peter Xu @ 2022-11-29 22:49 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

On Mon, Nov 21, 2022 at 11:26:36AM -0500, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> Introduce the "x-vcpu-dirty-limit-period" experimental migration
> parameter, which is in the range of 1 to 1000ms and is used to
> make the dirty rate calculation period configurable.
> 
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> ---
>  migration/migration.c | 26 ++++++++++++++++++++++++++
>  monitor/hmp-cmds.c    |  8 ++++++++
>  qapi/migration.json   | 34 +++++++++++++++++++++++++++-------
>  3 files changed, 61 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 739bb68..701267c 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -116,6 +116,8 @@
>  #define DEFAULT_MIGRATE_ANNOUNCE_ROUNDS    5
>  #define DEFAULT_MIGRATE_ANNOUNCE_STEP    100
>  
> +#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD     500     /* ms */
> +
>  static NotifierList migration_state_notifiers =
>      NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
>  
> @@ -963,6 +965,9 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
>                         s->parameters.block_bitmap_mapping);
>      }
>  
> +    params->has_x_vcpu_dirty_limit_period = true;
> +    params->x_vcpu_dirty_limit_period = s->parameters.x_vcpu_dirty_limit_period;
> +
>      return params;
>  }
>  
> @@ -1564,6 +1569,15 @@ static bool migrate_params_check(MigrationParameters *params, Error **errp)
>      }
>  #endif
>  
> +    if (params->has_x_vcpu_dirty_limit_period &&
> +        (params->x_vcpu_dirty_limit_period < 1 ||
> +         params->x_vcpu_dirty_limit_period > 1000)) {
> +        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
> +                   "x_vcpu_dirty_limit_period",
> +                   "is invalid, it must be in the range of 1 to 1000 ms");
> +        return false;
> +    }
> +
>      return true;
>  }
>  
> @@ -1663,6 +1677,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
>          dest->has_block_bitmap_mapping = true;
>          dest->block_bitmap_mapping = params->block_bitmap_mapping;
>      }
> +
> +    if (params->has_x_vcpu_dirty_limit_period) {
> +        dest->x_vcpu_dirty_limit_period = params->x_vcpu_dirty_limit_period;
> +    }
>  }
>  
>  static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
> @@ -1785,6 +1803,10 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
>              QAPI_CLONE(BitmapMigrationNodeAliasList,
>                         params->block_bitmap_mapping);
>      }
> +    if (params->has_x_vcpu_dirty_limit_period) {
> +        s->parameters.x_vcpu_dirty_limit_period =
> +            params->x_vcpu_dirty_limit_period;
> +    }
>  }
>  
>  void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
> @@ -4386,6 +4408,9 @@ static Property migration_properties[] = {
>      DEFINE_PROP_STRING("tls-creds", MigrationState, parameters.tls_creds),
>      DEFINE_PROP_STRING("tls-hostname", MigrationState, parameters.tls_hostname),
>      DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
> +    DEFINE_PROP_UINT64("x-vcpu-dirty-limit-period", MigrationState,
> +                       parameters.x_vcpu_dirty_limit_period,
> +                       DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD),
>  
>      /* Migration capabilities */
>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> @@ -4477,6 +4502,7 @@ static void migration_instance_init(Object *obj)
>      params->has_tls_creds = true;
>      params->has_tls_hostname = true;
>      params->has_tls_authz = true;
> +    params->has_x_vcpu_dirty_limit_period = true;
>  
>      qemu_sem_init(&ms->postcopy_pause_sem, 0);
>      qemu_sem_init(&ms->postcopy_pause_rp_sem, 0);
> diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
> index 01b789a..a3170ca 100644
> --- a/monitor/hmp-cmds.c
> +++ b/monitor/hmp-cmds.c
> @@ -513,6 +513,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
>                  }
>              }
>          }
> +
> +        monitor_printf(mon, "%s: %" PRIu64 " ms\n",
> +        MigrationParameter_str(MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD),
> +        params->x_vcpu_dirty_limit_period);
>      }
>  
>      qapi_free_MigrationParameters(params);
> @@ -1332,6 +1336,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
>          error_setg(&err, "The block-bitmap-mapping parameter can only be set "
>                     "through QMP");
>          break;
> +    case MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD:
> +        p->has_x_vcpu_dirty_limit_period = true;
> +        visit_type_size(v, param, &p->x_vcpu_dirty_limit_period, &err);
> +        break;
>      default:
>          assert(0);
>      }
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 88ecf86..5175779 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -776,8 +776,13 @@
>  #                        block device name if there is one, and to their node name
>  #                        otherwise. (Since 5.2)
>  #
> +# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
> +#                             Should be in the range 1 to 1000ms, defaults to 500ms.
> +#                             (Since 7.1)

Not 7.1 anymore but 7.3.  Yeah a bit sad.

> +#
>  # Features:
> -# @unstable: Member @x-checkpoint-delay is experimental.
> +# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
> +#            are experimental.
>  #
>  # Since: 2.4
>  ##
> @@ -795,8 +800,9 @@
>             'multifd-channels',
>             'xbzrle-cache-size', 'max-postcopy-bandwidth',
>             'max-cpu-throttle', 'multifd-compression',
> -           'multifd-zlib-level' ,'multifd-zstd-level',
> -           'block-bitmap-mapping' ] }
> +           'multifd-zlib-level', 'multifd-zstd-level',
> +           'block-bitmap-mapping',
> +           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
>  
>  ##
>  # @MigrateSetParameters:
> @@ -941,8 +947,13 @@
>  #                        block device name if there is one, and to their node name
>  #                        otherwise. (Since 5.2)
>  #
> +# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
> +#                             Should be in the range 1 to 1000ms, defaults to 500ms.
> +#                             (Since 7.1)

Same here.

> +#
>  # Features:
> -# @unstable: Member @x-checkpoint-delay is experimental.
> +# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
> +#            are experimental.
>  #
>  # Since: 2.4
>  ##
> @@ -976,7 +987,9 @@
>              '*multifd-compression': 'MultiFDCompression',
>              '*multifd-zlib-level': 'uint8',
>              '*multifd-zstd-level': 'uint8',
> -            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
> +            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
> +            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
> +                                            'features': [ 'unstable' ] } } }
>  
>  ##
>  # @migrate-set-parameters:
> @@ -1141,8 +1154,13 @@
>  #                        block device name if there is one, and to their node name
>  #                        otherwise. (Since 5.2)
>  #
> +# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
> +#                             Should be in the range 1 to 1000ms, defaults to 500ms.
> +#                             (Since 7.1)

Same here.

> +#
>  # Features:
> -# @unstable: Member @x-checkpoint-delay is experimental.
> +# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
> +#            are experimental.
>  #
>  # Since: 2.4
>  ##
> @@ -1174,7 +1192,9 @@
>              '*multifd-compression': 'MultiFDCompression',
>              '*multifd-zlib-level': 'uint8',
>              '*multifd-zstd-level': 'uint8',
> -            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
> +            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
> +            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
> +                                            'features': [ 'unstable' ] } } }
>  
>  ##
>  # @query-migrate-parameters:
> -- 
> 1.8.3.1
> 
> 

-- 
Peter Xu




* Re: [PATCH v2 07/11] migration: Implement dirty-limit convergence algo
  2022-11-21 16:26 ` [PATCH v2 07/11] migration: Implement dirty-limit convergence algo huangy81
@ 2022-11-29 23:17   ` Peter Xu
  2022-12-01  1:13     ` Hyman
  0 siblings, 1 reply; 33+ messages in thread
From: Peter Xu @ 2022-11-29 23:17 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

On Mon, Nov 21, 2022 at 11:26:39AM -0500, huangy81@chinatelecom.cn wrote:
> diff --git a/migration/migration.c b/migration/migration.c
> index 86950a1..096b61a 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -240,6 +240,7 @@ void migration_cancel(const Error *error)
>      if (error) {
>          migrate_set_error(current_migration, error);
>      }
> +    qmp_cancel_vcpu_dirty_limit(false, -1, NULL);

Disable it only if migrate_dirty_limit() is true?  It seems okay if the
admin wants to use dirtylimit separately from migration.

>      migrate_fd_cancel(current_migration);
>  }

[...]

> @@ -1148,22 +1175,31 @@ static void migration_trigger_throttle(RAMState *rs)
>      uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
>      uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
>  
> -    /* During block migration the auto-converge logic incorrectly detects
> -     * that ram migration makes no progress. Avoid this by disabling the
> -     * throttling logic during the bulk phase of block migration. */
> -    if (migrate_auto_converge() && !blk_mig_bulk_active()) {
> -        /* The following detection logic can be refined later. For now:
> -           Check to see if the ratio between dirtied bytes and the approx.
> -           amount of bytes that just got transferred since the last time
> -           we were in this routine reaches the threshold. If that happens
> -           twice, start or increase throttling. */
> -
> -        if ((bytes_dirty_period > bytes_dirty_threshold) &&
> -            (++rs->dirty_rate_high_cnt >= 2)) {
> +    /*
> +     * The following detection logic can be refined later. For now:
> +     * Check to see if the ratio between dirtied bytes and the approx.
> +     * amount of bytes that just got transferred since the last time
> +     * we were in this routine reaches the threshold. If that happens
> +     * twice, start or increase throttling.
> +     */
> +
> +    if ((bytes_dirty_period > bytes_dirty_threshold) &&
> +        (++rs->dirty_rate_high_cnt >= 2)) {
> +        rs->dirty_rate_high_cnt = 0;
> +        /*
> +         * During block migration the auto-converge logic incorrectly detects
> +         * that ram migration makes no progress. Avoid this by disabling the
> +         * throttling logic during the bulk phase of block migration
> +         */
> +
> +        if (migrate_auto_converge() && !blk_mig_bulk_active()) {

Does dirtylimit cap needs to check blk_mig_bulk_active() too?  I assume
that check was used to ignore the bulk block migration phase where major
bandwidth will be consumed by block migrations so the measured bandwidth is
not accurate.  IIUC it applies to dirtylimit too.

>              trace_migration_throttle();
> -            rs->dirty_rate_high_cnt = 0;
>              mig_throttle_guest_down(bytes_dirty_period,
>                                      bytes_dirty_threshold);
> +        } else if (migrate_dirty_limit() &&
> +                   kvm_dirty_ring_enabled() &&
> +                   migration_is_active(s)) {

Is "kvm_dirty_ring_enabled()" and "migration_is_active(s)" check helpful?
Can we only rely on migrate_dirty_limit() alone?

-- 
Peter Xu




* Re: [PATCH v2 01/11] dirtylimit: Fix overflow when computing MB
  2022-11-21 16:26 ` [PATCH v2 01/11] dirtylimit: Fix overflow when computing MB huangy81
@ 2022-11-29 23:17   ` Peter Xu
  2022-12-03  8:56   ` Markus Armbruster
  1 sibling, 0 replies; 33+ messages in thread
From: Peter Xu @ 2022-11-29 23:17 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

On Mon, Nov 21, 2022 at 11:26:33AM -0500, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> Coverity points out an overflow problem when computing MB:
> dirty_ring_size and TARGET_PAGE_SIZE are both 32 bits, so the
> multiplication will be done as a 32-bit operation, which could
> overflow. Simplify the formula.
> 
> Meanwhile, fix a spelling mistake in a variable name.
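The overflow described in the commit message can be illustrated with a
short sketch; Python integers do not overflow, so the 32-bit wraparound
is emulated with a mask, and the values below are hypothetical (chosen
only so the product exceeds 2**32), not QEMU's actual ones:

```python
ring_entries = 1 << 20   # stands in for a 32-bit dirty_ring_size
page_size = 1 << 16      # stands in for a 32-bit TARGET_PAGE_SIZE

# A 32-bit multiply keeps only the low 32 bits of the product, so
# 2**36 wraps around to 0.
wrapped = (ring_entries * page_size) & 0xFFFFFFFF

# Widening one operand before multiplying preserves the full product.
correct = ring_entries * page_size

print(wrapped)   # 0
print(correct)   # 68719476736
```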
> 
> Reported-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu




* Re: [PATCH v2 02/11] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit"
  2022-11-21 16:26 ` [PATCH v2 02/11] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit" huangy81
@ 2022-11-29 23:17   ` Peter Xu
  2022-12-03  9:01   ` Markus Armbruster
  1 sibling, 0 replies; 33+ messages in thread
From: Peter Xu @ 2022-11-29 23:17 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

On Mon, Nov 21, 2022 at 11:26:34AM -0500, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> dirty_rate paraemter of hmp command "set_vcpu_dirty_limit" is invalid
> if less than 0, so add parameter check for it.
> 
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 05/11] qapi/migration: Introduce vcpu-dirty-limit parameters
  2022-11-21 16:26 ` [PATCH v2 05/11] qapi/migration: Introduce vcpu-dirty-limit parameters huangy81
@ 2022-11-29 23:58   ` Peter Xu
  2022-12-03  9:13   ` Markus Armbruster
  1 sibling, 0 replies; 33+ messages in thread
From: Peter Xu @ 2022-11-29 23:58 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

On Mon, Nov 21, 2022 at 11:26:37AM -0500, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> Introduce "vcpu-dirty-limit" migration parameter used
> to limit dirty page rate during live migration.
> 
> "vcpu-dirty-limit" and "x-vcpu-dirty-limit-period" are
> two dirty-limit-related migration parameters, which can
> be set before and during live migration by qmp
> migrate-set-parameters.
> 
> This two parameters are used to help implement the dirty
> page rate limit algo of migration.
> 
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Acked-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 06/11] migration: Introduce dirty-limit capability
  2022-11-21 16:26 ` [PATCH v2 06/11] migration: Introduce dirty-limit capability huangy81
@ 2022-11-29 23:58   ` Peter Xu
  2022-12-03  9:24   ` Markus Armbruster
  1 sibling, 0 replies; 33+ messages in thread
From: Peter Xu @ 2022-11-29 23:58 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

On Mon, Nov 21, 2022 at 11:26:38AM -0500, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> Introduce migration dirty-limit capability, which can
> be turned on before live migration and limit dirty
> page rate durty live migration.
> 
> Introduce migrate_dirty_limit function to help check
> if dirty-limit capability enabled during live migration.
> 
> Meanwhile, refactor vcpu_dirty_rate_stat_collect
> so that period can be configured instead of hardcoded.
> 
> dirty-limit capability is kind of like auto-converge
> but using dirty limit instead of traditional cpu-throttle
> to throttle guest down. To enable this feature, turn on
> the dirty-limit capability before live migration using
> migratioin-set-capabilities, and set the parameters
> "x-vcpu-dirty-limit-period", "vcpu-dirty-limit" suitably
> to speed up convergence.
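The enabling flow described above could look roughly like this over QMP (the command is migrate-set-capabilities; capability and parameter names are as proposed in this series, the period is in ms, the limit in MB/s, and all values plus the destination URI are purely illustrative):

```json
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
      { "capability": "dirty-limit", "state": true } ] } }

{ "execute": "migrate-set-parameters",
  "arguments": { "x-vcpu-dirty-limit-period": 500,
                 "vcpu-dirty-limit": 64 } }

{ "execute": "migrate",
  "arguments": { "uri": "tcp:192.168.1.10:4444" } }
```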
> 
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Acked-by: Peter Xu <peterx@redhat.com>

PS: please replace 7.1 with 7.3 in this patch and the previous one.

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 08/11] migration: Export dirty-limit time info
  2022-11-21 16:26 ` [PATCH v2 08/11] migration: Export dirty-limit time info huangy81
@ 2022-11-30  0:09   ` Peter Xu
  2022-12-01  2:09     ` Hyman
  2022-12-03  9:14   ` Hyman
  2022-12-03  9:28   ` Markus Armbruster
  2 siblings, 1 reply; 33+ messages in thread
From: Peter Xu @ 2022-11-30  0:09 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

On Mon, Nov 21, 2022 at 11:26:40AM -0500, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> Export dirty limit throttle time and estimated ring full
> time, through which we can observe the process of dirty
> limit during live migration.
> 
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> ---
>  include/sysemu/dirtylimit.h |  2 ++
>  migration/migration.c       | 10 ++++++++++
>  monitor/hmp-cmds.c          | 10 ++++++++++
>  qapi/migration.json         | 10 +++++++++-
>  softmmu/dirtylimit.c        | 31 +++++++++++++++++++++++++++++++
>  5 files changed, 62 insertions(+), 1 deletion(-)
> 
> diff --git a/include/sysemu/dirtylimit.h b/include/sysemu/dirtylimit.h
> index 8d2c1f3..98cc4a6 100644
> --- a/include/sysemu/dirtylimit.h
> +++ b/include/sysemu/dirtylimit.h
> @@ -34,4 +34,6 @@ void dirtylimit_set_vcpu(int cpu_index,
>  void dirtylimit_set_all(uint64_t quota,
>                          bool enable);
>  void dirtylimit_vcpu_execute(CPUState *cpu);
> +int64_t dirtylimit_throttle_us_per_full(void);
> +int64_t dirtylimit_us_ring_full(void);
>  #endif
> diff --git a/migration/migration.c b/migration/migration.c
> index 096b61a..886c25d 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -62,6 +62,7 @@
>  #include "yank_functions.h"
>  #include "sysemu/qtest.h"
>  #include "sysemu/kvm.h"
> +#include "sysemu/dirtylimit.h"
>  
>  #define MAX_THROTTLE  (128 << 20)      /* Migration transfer speed throttling */
>  
> @@ -1112,6 +1113,15 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>          info->ram->remaining = ram_bytes_remaining();
>          info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
>      }
> +
> +    if (migrate_dirty_limit() && dirtylimit_in_service()) {
> +        info->has_dirty_limit_throttle_us_per_full = true;
> +        info->dirty_limit_throttle_us_per_full =
> +                            dirtylimit_throttle_us_per_full();
> +
> +        info->has_dirty_limit_us_ring_full = true;
> +        info->dirty_limit_us_ring_full = dirtylimit_us_ring_full();
> +    }
>  }
>  
>  static void populate_disk_info(MigrationInfo *info)
> diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
> index 9ad6ee5..9d02baf 100644
> --- a/monitor/hmp-cmds.c
> +++ b/monitor/hmp-cmds.c
> @@ -339,6 +339,16 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
>                         info->cpu_throttle_percentage);
>      }
>  
> +    if (info->has_dirty_limit_throttle_us_per_full) {
> +        monitor_printf(mon, "dirty-limit throttle time: %" PRIi64 " us\n",
> +                       info->dirty_limit_throttle_us_per_full);
> +    }
> +
> +    if (info->has_dirty_limit_us_ring_full) {
> +        monitor_printf(mon, "dirty-limit ring full time: %" PRIi64 " us\n",
> +                       info->dirty_limit_us_ring_full);
> +    }
> +
>      if (info->has_postcopy_blocktime) {
>          monitor_printf(mon, "postcopy blocktime: %u\n",
>                         info->postcopy_blocktime);
> diff --git a/qapi/migration.json b/qapi/migration.json
> index af6b2da..62db5cb 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -242,6 +242,12 @@
>  #                   Present and non-empty when migration is blocked.
>  #                   (since 6.0)
>  #
> +# @dirty-limit-throttle-us-per-full: Throttle time (us) during the period of
> +#                                    dirty ring full (since 7.1)
> +#
> +# @dirty-limit-us-ring-full: Estimated periodic time (us) of dirty ring full.
> +#                            (since 7.1)

s/7.1/7.3/

Could you enrich the document for the new fields?  For example, currently
you only report throttle time for vcpu0 on the 1st field, while for the
latter it's an average of all vcpus.  These need to be mentioned.

OTOH, how do you normally use these values?  Maybe that can also be added
into the documents too.

> +#
>  # Since: 0.14
>  ##
>  { 'struct': 'MigrationInfo',
> @@ -259,7 +265,9 @@
>             '*postcopy-blocktime' : 'uint32',
>             '*postcopy-vcpu-blocktime': ['uint32'],
>             '*compression': 'CompressionStats',
> -           '*socket-address': ['SocketAddress'] } }
> +           '*socket-address': ['SocketAddress'],
> +           '*dirty-limit-throttle-us-per-full': 'int64',
> +           '*dirty-limit-us-ring-full': 'int64'} }
>  
>  ##
>  # @query-migrate:
> diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
> index 3f3c405..9d1df9b 100644
> --- a/softmmu/dirtylimit.c
> +++ b/softmmu/dirtylimit.c
> @@ -573,6 +573,37 @@ static struct DirtyLimitInfo *dirtylimit_query_vcpu(int cpu_index)
>      return info;
>  }
>  
> +/* Pick up first vcpu throttle time by default */
> +int64_t dirtylimit_throttle_us_per_full(void)
> +{
> +    CPUState *cpu = first_cpu;
> +    return cpu->throttle_us_per_full;

Why would vcpu0 be the standard on this sampling?

I'm wondering whether it'll make more sense to collect the MAX() of all
vcpus here, because that'll be the maximum delay for a guest write to be
postponed due to dirtylimit.  It'll provide the admin some information on
the worst impact on guest workloads.

> +}
> +
> +/*
> + * Estimate dirty ring full time under current dirty page rate.
> + * Return -1 if guest doesn't dirty memory.
> + */
> +int64_t dirtylimit_us_ring_full(void)
> +{
> +    CPUState *cpu;
> +    uint64_t curr_rate = 0;
> +    int nvcpus = 0;
> +
> +    CPU_FOREACH(cpu) {
> +        if (cpu->running) {
> +            nvcpus++;
> +            curr_rate += vcpu_dirty_rate_get(cpu->cpu_index);
> +        }
> +    }
> +
> +    if (!curr_rate || !nvcpus) {
> +        return -1;
> +    }
> +
> +    return dirtylimit_dirty_ring_full_time(curr_rate / nvcpus);
> +}
> +
>  static struct DirtyLimitInfoList *dirtylimit_query_all(void)
>  {
>      int i, index;
> -- 
> 1.8.3.1
> 
> 

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 03/11] kvm-all: Do not allow reap vcpu dirty ring buffer if not ready
  2022-11-29 22:42   ` Peter Xu
@ 2022-11-30  3:11     ` Hyman Huang
  0 siblings, 0 replies; 33+ messages in thread
From: Hyman Huang @ 2022-11-30  3:11 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson



On 2022/11/30 6:42, Peter Xu wrote:
> Hi, Yong,
> 
> On Mon, Nov 21, 2022 at 11:26:35AM -0500, huangy81@chinatelecom.cn wrote:
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> When tested large vcpu size vm with dirtylimit feature, Qemu crashed
>> due to the assertion in kvm_dirty_ring_reap_one, which assert that
>> vcpu's kvm_dirty_gfns has been allocated and not NULL.
>>
>> Because dirty ring reaper thread races with Qemu main thread, reaper
>> may reap vcpu's dirty ring buffer when main thread doesn't complete
>> vcpu instantiation. So add the waiting logic in reaper thread and
>> start to reap until vcpu instantiation is completed.
>>
>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>> ---
>>   accel/kvm/kvm-all.c | 36 ++++++++++++++++++++++++++++++++++++
>>   1 file changed, 36 insertions(+)
>>
>> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
>> index f99b0be..9457715 100644
>> --- a/accel/kvm/kvm-all.c
>> +++ b/accel/kvm/kvm-all.c
>> @@ -1401,6 +1401,35 @@ out:
>>       kvm_slots_unlock();
>>   }
>>   
>> +/*
>> + * test if dirty ring has been initialized by checking if vcpu
>> + * has been initialized and gfns was allocated correspondlingly.
>> + * return true if dirty ring has been initialized, false otherwise.
>> + */
>> +static bool kvm_vcpu_dirty_ring_initialized(void)
>> +{
>> +    CPUState *cpu;
>> +    MachineState *ms = MACHINE(qdev_get_machine());
>> +    int ncpus = ms->smp.cpus;
>> +
>> +    /*
>> +     * assume vcpu has not been initilaized if generation
>> +     * id less than number of vcpu
>> +     */
>> +    if (ncpus > cpu_list_generation_id_get()) {
>> +        return false;
>> +    }
>> +
>> +    CPU_FOREACH(cpu) {
>> +        if (!cpu->kvm_dirty_gfns) {
>> +            return false;
>> +        }
>> +    }
>> +
>> +    return true;
>> +}
>> +
>> +
>>   static void *kvm_dirty_ring_reaper_thread(void *data)
>>   {
>>       KVMState *s = data;
>> @@ -1410,6 +1439,13 @@ static void *kvm_dirty_ring_reaper_thread(void *data)
>>   
>>       trace_kvm_dirty_ring_reaper("init");
>>   
>> +retry:
>> +    /* don't allow reaping dirty ring if ring buffer hasn't been mapped */
>> +    if (!kvm_vcpu_dirty_ring_initialized()) {
>> +        sleep(1);
> 
> The sleep here is probably not necessary.  Could you instead have a look at
> the other much simpler patch?  
Of course, yes; this patch is much more graceful. I'll cherry-pick it in the
next version.

> Here:
> 
> https://lore.kernel.org/qemu-devel/20220927154653.77296-1-peterx@redhat.com/
> 
>> +        goto retry;
>> +    }
>> +
>>       while (true) {
>>           r->reaper_state = KVM_DIRTY_RING_REAPER_WAIT;
>>           trace_kvm_dirty_ring_reaper("wait");
>> -- 
>> 1.8.3.1
>>
>>
> 

-- 
Best regard

Hyman Huang(黄勇)


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 07/11] migration: Implement dirty-limit convergence algo
  2022-11-29 23:17   ` Peter Xu
@ 2022-12-01  1:13     ` Hyman
  0 siblings, 0 replies; 33+ messages in thread
From: Hyman @ 2022-12-01  1:13 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson



On 2022/11/30 7:17, Peter Xu wrote:
> On Mon, Nov 21, 2022 at 11:26:39AM -0500, huangy81@chinatelecom.cn wrote:
>> diff --git a/migration/migration.c b/migration/migration.c
>> index 86950a1..096b61a 100644
>> --- a/migration/migration.c
>> +++ b/migration/migration.c
>> @@ -240,6 +240,7 @@ void migration_cancel(const Error *error)
>>       if (error) {
>>           migrate_set_error(current_migration, error);
>>       }
>> +    qmp_cancel_vcpu_dirty_limit(false, -1, NULL);
> 
> Disable it only if migrate_dirty_limit() is true?  It seems okay if the
> admin wants to use dirtylimit separately from migration.
Ok.
> 
>>       migrate_fd_cancel(current_migration);
>>   }
> 
> [...]
> 
>> @@ -1148,22 +1175,31 @@ static void migration_trigger_throttle(RAMState *rs)
>>       uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
>>       uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
>>   
>> -    /* During block migration the auto-converge logic incorrectly detects
>> -     * that ram migration makes no progress. Avoid this by disabling the
>> -     * throttling logic during the bulk phase of block migration. */
>> -    if (migrate_auto_converge() && !blk_mig_bulk_active()) {
>> -        /* The following detection logic can be refined later. For now:
>> -           Check to see if the ratio between dirtied bytes and the approx.
>> -           amount of bytes that just got transferred since the last time
>> -           we were in this routine reaches the threshold. If that happens
>> -           twice, start or increase throttling. */
>> -
>> -        if ((bytes_dirty_period > bytes_dirty_threshold) &&
>> -            (++rs->dirty_rate_high_cnt >= 2)) {
>> +    /*
>> +     * The following detection logic can be refined later. For now:
>> +     * Check to see if the ratio between dirtied bytes and the approx.
>> +     * amount of bytes that just got transferred since the last time
>> +     * we were in this routine reaches the threshold. If that happens
>> +     * twice, start or increase throttling.
>> +     */
>> +
>> +    if ((bytes_dirty_period > bytes_dirty_threshold) &&
>> +        (++rs->dirty_rate_high_cnt >= 2)) {
>> +        rs->dirty_rate_high_cnt = 0;
>> +        /*
>> +         * During block migration the auto-converge logic incorrectly detects
>> +         * that ram migration makes no progress. Avoid this by disabling the
>> +         * throttling logic during the bulk phase of block migration
>> +         */
>> +
>> +        if (migrate_auto_converge() && !blk_mig_bulk_active()) {
> 
> Does dirtylimit cap needs to check blk_mig_bulk_active() too?  I assume
> that check was used to ignore the bulk block migration phase where major
> bandwidth will be consumed by block migrations so the measured bandwidth is
> not accurate.  IIUC it applies to dirtylimit too.

Indeed, I'll add this in the next version.
> 
>>               trace_migration_throttle();
>> -            rs->dirty_rate_high_cnt = 0;
>>               mig_throttle_guest_down(bytes_dirty_period,
>>                                       bytes_dirty_threshold);
>> +        } else if (migrate_dirty_limit() &&
>> +                   kvm_dirty_ring_enabled() &&
>> +                   migration_is_active(s)) {
> 
> Is "kvm_dirty_ring_enabled()" and "migration_is_active(s)" check helpful?
> Can we only rely on migrate_dirty_limit() alone?
In qmp_set_vcpu_dirty_limit, it checks whether KVM is enabled and the dirty
ring size is set. When the "dirty-limit" capability is set, we also check
this in migrate_caps_check, so kvm_dirty_ring_enabled can indeed be dropped.

As for migration_is_active, dirty-limit can be set at any time, and migration
is already active on this path, so it can also be dropped.

I'll fix this in the next version.
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 08/11] migration: Export dirty-limit time info
  2022-11-30  0:09   ` Peter Xu
@ 2022-12-01  2:09     ` Hyman
  0 siblings, 0 replies; 33+ messages in thread
From: Hyman @ 2022-12-01  2:09 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson



On 2022/11/30 8:09, Peter Xu wrote:
> On Mon, Nov 21, 2022 at 11:26:40AM -0500, huangy81@chinatelecom.cn wrote:
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> Export dirty limit throttle time and estimated ring full
>> time, through which we can observe the process of dirty
>> limit during live migration.
>>
>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>> ---
>>   include/sysemu/dirtylimit.h |  2 ++
>>   migration/migration.c       | 10 ++++++++++
>>   monitor/hmp-cmds.c          | 10 ++++++++++
>>   qapi/migration.json         | 10 +++++++++-
>>   softmmu/dirtylimit.c        | 31 +++++++++++++++++++++++++++++++
>>   5 files changed, 62 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/sysemu/dirtylimit.h b/include/sysemu/dirtylimit.h
>> index 8d2c1f3..98cc4a6 100644
>> --- a/include/sysemu/dirtylimit.h
>> +++ b/include/sysemu/dirtylimit.h
>> @@ -34,4 +34,6 @@ void dirtylimit_set_vcpu(int cpu_index,
>>   void dirtylimit_set_all(uint64_t quota,
>>                           bool enable);
>>   void dirtylimit_vcpu_execute(CPUState *cpu);
>> +int64_t dirtylimit_throttle_us_per_full(void);
>> +int64_t dirtylimit_us_ring_full(void);
>>   #endif
>> diff --git a/migration/migration.c b/migration/migration.c
>> index 096b61a..886c25d 100644
>> --- a/migration/migration.c
>> +++ b/migration/migration.c
>> @@ -62,6 +62,7 @@
>>   #include "yank_functions.h"
>>   #include "sysemu/qtest.h"
>>   #include "sysemu/kvm.h"
>> +#include "sysemu/dirtylimit.h"
>>   
>>   #define MAX_THROTTLE  (128 << 20)      /* Migration transfer speed throttling */
>>   
>> @@ -1112,6 +1113,15 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>>           info->ram->remaining = ram_bytes_remaining();
>>           info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
>>       }
>> +
>> +    if (migrate_dirty_limit() && dirtylimit_in_service()) {
>> +        info->has_dirty_limit_throttle_us_per_full = true;
>> +        info->dirty_limit_throttle_us_per_full =
>> +                            dirtylimit_throttle_us_per_full();
>> +
>> +        info->has_dirty_limit_us_ring_full = true;
>> +        info->dirty_limit_us_ring_full = dirtylimit_us_ring_full();
>> +    }
>>   }
>>   
>>   static void populate_disk_info(MigrationInfo *info)
>> diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
>> index 9ad6ee5..9d02baf 100644
>> --- a/monitor/hmp-cmds.c
>> +++ b/monitor/hmp-cmds.c
>> @@ -339,6 +339,16 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
>>                          info->cpu_throttle_percentage);
>>       }
>>   
>> +    if (info->has_dirty_limit_throttle_us_per_full) {
>> +        monitor_printf(mon, "dirty-limit throttle time: %" PRIi64 " us\n",
>> +                       info->dirty_limit_throttle_us_per_full);
>> +    }
>> +
>> +    if (info->has_dirty_limit_us_ring_full) {
>> +        monitor_printf(mon, "dirty-limit ring full time: %" PRIi64 " us\n",
>> +                       info->dirty_limit_us_ring_full);
>> +    }
>> +
>>       if (info->has_postcopy_blocktime) {
>>           monitor_printf(mon, "postcopy blocktime: %u\n",
>>                          info->postcopy_blocktime);
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index af6b2da..62db5cb 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -242,6 +242,12 @@
>>   #                   Present and non-empty when migration is blocked.
>>   #                   (since 6.0)
>>   #
>> +# @dirty-limit-throttle-us-per-full: Throttle time (us) during the period of
>> +#                                    dirty ring full (since 7.1)
>> +#
>> +# @dirty-limit-us-ring-full: Estimated periodic time (us) of dirty ring full.
>> +#                            (since 7.1)
> 
> s/7.1/7.3/
> 
> Could you enrich the document for the new fields?  For example, currently
> you only report throttle time for vcpu0 on the 1st field, while for the
> latter it's an average of all vcpus.  These need to be mentioned.
> 
> OTOH, how do you normally use these values?  Maybe that can also be added
> into the documents too.
> 
Of course, yes. I'll do that in the next version.
>> +#
>>   # Since: 0.14
>>   ##
>>   { 'struct': 'MigrationInfo',
>> @@ -259,7 +265,9 @@
>>              '*postcopy-blocktime' : 'uint32',
>>              '*postcopy-vcpu-blocktime': ['uint32'],
>>              '*compression': 'CompressionStats',
>> -           '*socket-address': ['SocketAddress'] } }
>> +           '*socket-address': ['SocketAddress'],
>> +           '*dirty-limit-throttle-us-per-full': 'int64',
>> +           '*dirty-limit-us-ring-full': 'int64'} }
>>   
>>   ##
>>   # @query-migrate:
>> diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
>> index 3f3c405..9d1df9b 100644
>> --- a/softmmu/dirtylimit.c
>> +++ b/softmmu/dirtylimit.c
>> @@ -573,6 +573,37 @@ static struct DirtyLimitInfo *dirtylimit_query_vcpu(int cpu_index)
>>       return info;
>>   }
>>   
>> +/* Pick up first vcpu throttle time by default */
>> +int64_t dirtylimit_throttle_us_per_full(void)
>> +{
>> +    CPUState *cpu = first_cpu;
>> +    return cpu->throttle_us_per_full;
> 
> Why would vcpu0 be the standard on this sampling?
> 
> I'm wondering whether it'll make more sense to collect the MAX() of all
> vcpus here, because that'll be the maximum delay for a guest write to be
> postponed due to dirtylimit.  It'll provide the admin some information on
> the worst impact on guest workloads.
> 
Good idea.
>> +}
>> +
>> +/*
>> + * Estimate dirty ring full time under current dirty page rate.
>> + * Return -1 if guest doesn't dirty memory.
>> + */
>> +int64_t dirtylimit_us_ring_full(void)
>> +{
>> +    CPUState *cpu;
>> +    uint64_t curr_rate = 0;
>> +    int nvcpus = 0;
>> +
>> +    CPU_FOREACH(cpu) {
>> +        if (cpu->running) {
>> +            nvcpus++;
>> +            curr_rate += vcpu_dirty_rate_get(cpu->cpu_index);
>> +        }
>> +    }
>> +
>> +    if (!curr_rate || !nvcpus) {
>> +        return -1;
>> +    }
>> +
>> +    return dirtylimit_dirty_ring_full_time(curr_rate / nvcpus);
>> +}
>> +
>>   static struct DirtyLimitInfoList *dirtylimit_query_all(void)
>>   {
>>       int i, index;
>> -- 
>> 1.8.3.1
>>
>>
> 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 01/11] dirtylimit: Fix overflow when computing MB
  2022-11-21 16:26 ` [PATCH v2 01/11] dirtylimit: Fix overflow when computing MB huangy81
  2022-11-29 23:17   ` Peter Xu
@ 2022-12-03  8:56   ` Markus Armbruster
  1 sibling, 0 replies; 33+ messages in thread
From: Markus Armbruster @ 2022-12-03  8:56 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Dr. David Alan Gilbert, Paolo Bonzini,
	Laurent Vivier, Eric Blake, Juan Quintela, Thomas Huth,
	Peter Maydell, Richard Henderson

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> overity points out a overflow problem when computing MB,

Coverity

> dirty_ring_size and TARGET_PAGE_SIZE are both 32 bits,
> multiplication will be done as a 32-bit operation, which
> could overflow. Simplify the formula.
>
> Meanwhile, fix spelling mistake of variable name.
>
> Reported-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 02/11] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit"
  2022-11-21 16:26 ` [PATCH v2 02/11] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit" huangy81
  2022-11-29 23:17   ` Peter Xu
@ 2022-12-03  9:01   ` Markus Armbruster
  1 sibling, 0 replies; 33+ messages in thread
From: Markus Armbruster @ 2022-12-03  9:01 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> dirty_rate paraemter of hmp command "set_vcpu_dirty_limit" is invalid
> if less than 0, so add parameter check for it.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> ---
>  softmmu/dirtylimit.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
> index 940d238..c42eddd 100644
> --- a/softmmu/dirtylimit.c
> +++ b/softmmu/dirtylimit.c
> @@ -515,6 +515,11 @@ void hmp_set_vcpu_dirty_limit(Monitor *mon, const QDict *qdict)
   void hmp_set_vcpu_dirty_limit(Monitor *mon, const QDict *qdict)
   {
       int64_t dirty_rate = qdict_get_int(qdict, "dirty_rate");
>      int64_t cpu_index = qdict_get_try_int(qdict, "cpu_index", -1);
>      Error *err = NULL;
>  
> +    if (dirty_rate < 0) {
> +        monitor_printf(mon, "invalid dirty page limit %ld\n", dirty_rate);

Here, you use monitor_printf() to report an error, and ...

> +        return;
> +    }
> +
>      qmp_set_vcpu_dirty_limit(!!(cpu_index != -1), cpu_index, dirty_rate, &err);
>      if (err) {
>          hmp_handle_error(mon, err);

... here you use hmp_handle_error().  Suggest to use the latter
consistently.

           return;
       }

       monitor_printf(mon, "[Please use 'info vcpu_dirty_limit' to query "
                      "dirty limit for virtual CPU]\n");

This prints unsolicited help on how to read the setting every time you
change it.  We don't do that.  Please delete.

   }

Together, the function could look like this:

void hmp_set_vcpu_dirty_limit(Monitor *mon, const QDict *qdict)
{
    int64_t dirty_rate = qdict_get_int(qdict, "dirty_rate");
    int64_t cpu_index = qdict_get_try_int(qdict, "cpu_index", -1);
    Error *err = NULL;

    if (dirty_rate < 0) {
        error_setg(&err, "invalid dirty page limit %ld", dirty_rate);
        goto out;
    }

    qmp_set_vcpu_dirty_limit(!!(cpu_index != -1), cpu_index, dirty_rate, &err);

out:
    hmp_handle_error(mon, err);
}



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
  2022-11-21 16:26 ` [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
  2022-11-29 22:49   ` Peter Xu
@ 2022-12-03  9:06   ` Markus Armbruster
  2022-12-03  9:11   ` Markus Armbruster
  2 siblings, 0 replies; 33+ messages in thread
From: Markus Armbruster @ 2022-12-03  9:06 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Dr. David Alan Gilbert, Paolo Bonzini,
	Laurent Vivier, Eric Blake, Juan Quintela, Thomas Huth,
	Peter Maydell, Richard Henderson

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Introduce "x-vcpu-dirty-limit-period" migration experimental
> parameter, which is in the range of 1 to 1000ms and used to
> make dirtyrate calculation period configurable.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

[...]

> diff --git a/qapi/migration.json b/qapi/migration.json
> index 88ecf86..5175779 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -776,8 +776,13 @@
>  #                        block device name if there is one, and to their node name
>  #                        otherwise. (Since 5.2)
>  #
> +# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
> +#                             Should be in the range 1 to 1000ms, defaults to 500ms.
> +#                             (Since 7.1)

8.0

> +#
>  # Features:
> -# @unstable: Member @x-checkpoint-delay is experimental.
> +# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period

Members

> +#            are experimental.
>  #
>  # Since: 2.4
>  ##
> @@ -795,8 +800,9 @@
>             'multifd-channels',
>             'xbzrle-cache-size', 'max-postcopy-bandwidth',
>             'max-cpu-throttle', 'multifd-compression',
> -           'multifd-zlib-level' ,'multifd-zstd-level',
> -           'block-bitmap-mapping' ] }
> +           'multifd-zlib-level', 'multifd-zstd-level',
> +           'block-bitmap-mapping',
> +           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
>  
>  ##
>  # @MigrateSetParameters:
> @@ -941,8 +947,13 @@
>  #                        block device name if there is one, and to their node name
>  #                        otherwise. (Since 5.2)
>  #
> +# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
> +#                             Should be in the range 1 to 1000ms, defaults to 500ms.
> +#                             (Since 7.1)
> +#
>  # Features:
> -# @unstable: Member @x-checkpoint-delay is experimental.
> +# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period

Members

> +#            are experimental.
>  #
>  # Since: 2.4
>  ##
> @@ -976,7 +987,9 @@
>              '*multifd-compression': 'MultiFDCompression',
>              '*multifd-zlib-level': 'uint8',
>              '*multifd-zstd-level': 'uint8',
> -            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
> +            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
> +            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
> +                                            'features': [ 'unstable' ] } } }
>  
>  ##
>  # @migrate-set-parameters:
> @@ -1141,8 +1154,13 @@
>  #                        block device name if there is one, and to their node name
>  #                        otherwise. (Since 5.2)
>  #
> +# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
> +#                             Should be in the range 1 to 1000ms, defaults to 500ms.
> +#                             (Since 7.1)
> +#
>  # Features:
> -# @unstable: Member @x-checkpoint-delay is experimental.
> +# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period

Members

> +#            are experimental.
>  #
>  # Since: 2.4
>  ##
> @@ -1174,7 +1192,9 @@
>              '*multifd-compression': 'MultiFDCompression',
>              '*multifd-zlib-level': 'uint8',
>              '*multifd-zstd-level': 'uint8',
> -            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
> +            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
> +            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
> +                                            'features': [ 'unstable' ] } } }
>  
>  ##
>  # @query-migrate-parameters:




* Re: [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
  2022-11-21 16:26 ` [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
  2022-11-29 22:49   ` Peter Xu
  2022-12-03  9:06   ` Markus Armbruster
@ 2022-12-03  9:11   ` Markus Armbruster
  2 siblings, 0 replies; 33+ messages in thread
From: Markus Armbruster @ 2022-12-03  9:11 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Introduce "x-vcpu-dirty-limit-period" migration experimental
> parameter, which is in the range of 1 to 1000ms and used to
> make dirtyrate calculation period configurable.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> ---
>  migration/migration.c | 26 ++++++++++++++++++++++++++
>  monitor/hmp-cmds.c    |  8 ++++++++
>  qapi/migration.json   | 34 +++++++++++++++++++++++++++-------
>  3 files changed, 61 insertions(+), 7 deletions(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index 739bb68..701267c 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -116,6 +116,8 @@
>  #define DEFAULT_MIGRATE_ANNOUNCE_ROUNDS    5
>  #define DEFAULT_MIGRATE_ANNOUNCE_STEP    100
>  
> +#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD     500     /* ms */
> +
>  static NotifierList migration_state_notifiers =
>      NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
>  
> @@ -963,6 +965,9 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
>                         s->parameters.block_bitmap_mapping);
>      }
>  
> +    params->has_x_vcpu_dirty_limit_period = true;
> +    params->x_vcpu_dirty_limit_period = s->parameters.x_vcpu_dirty_limit_period;
> +
>      return params;
>  }
>  
> @@ -1564,6 +1569,15 @@ static bool migrate_params_check(MigrationParameters *params, Error **errp)
>      }
>  #endif
>  
> +    if (params->has_x_vcpu_dirty_limit_period &&
> +        (params->x_vcpu_dirty_limit_period < 1 ||
> +         params->x_vcpu_dirty_limit_period > 1000)) {
> +        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
> +                   "x_vcpu_dirty_limit_period",
> +                   "is invalid, it must be in the range of 1 to 1000 ms");

Two mistakes:

1. The parameter is called "x-vcpu-dirty-limit-period".

2. The error message is bad:

    (qemu) migrate_set_parameter x-vcpu-dirty-limit-period 100000
    Error: Parameter 'x_vcpu_dirty_limit_period' expects is invalid, it must be in the range of 1 to 1000 ms

   Use something like

           error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
                      "x-vcpu-dirty-limit-period",
                      "a value between 1 and 1000");

Always, always, always test your error paths!

> +        return false;
> +    }
> +
>      return true;
>  }
>  

[...]




* Re: [PATCH v2 05/11] qapi/migration: Introduce vcpu-dirty-limit parameters
  2022-11-21 16:26 ` [PATCH v2 05/11] qapi/migration: Introduce vcpu-dirty-limit parameters huangy81
  2022-11-29 23:58   ` Peter Xu
@ 2022-12-03  9:13   ` Markus Armbruster
  2022-12-03  9:21     ` Markus Armbruster
  1 sibling, 1 reply; 33+ messages in thread
From: Markus Armbruster @ 2022-12-03  9:13 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Dr. David Alan Gilbert, Paolo Bonzini,
	Laurent Vivier, Eric Blake, Juan Quintela, Thomas Huth,
	Peter Maydell, Richard Henderson

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Introduce "vcpu-dirty-limit" migration parameter used
> to limit dirty page rate during live migration.
>
> "vcpu-dirty-limit" and "x-vcpu-dirty-limit-period" are
> two dirty-limit-related migration parameters, which can
> be set before and during live migration by qmp
> migrate-set-parameters.
>
> These two parameters are used to help implement the dirty
> page rate limit algo of migration.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Could you explain briefly why x-vcpu-dirty-limit-period is experimental,
and vcpu-dirty-limit is not?




* Re: [PATCH v2 08/11] migration: Export dirty-limit time info
  2022-11-21 16:26 ` [PATCH v2 08/11] migration: Export dirty-limit time info huangy81
  2022-11-30  0:09   ` Peter Xu
@ 2022-12-03  9:14   ` Hyman
  2022-12-03  9:42     ` Markus Armbruster
  2022-12-03  9:28   ` Markus Armbruster
  2 siblings, 1 reply; 33+ messages in thread
From: Hyman @ 2022-12-03  9:14 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Markus Armbruster, Dr. David Alan Gilbert,
	Paolo Bonzini, Laurent Vivier, Eric Blake, Juan Quintela,
	Thomas Huth, Peter Maydell, Richard Henderson



On 2022/11/22 0:26, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> Export dirty limit throttle time and estimated ring full
> time, through which we can observe the process of dirty
> limit during live migration.
> 
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> ---
>   include/sysemu/dirtylimit.h |  2 ++
>   migration/migration.c       | 10 ++++++++++
>   monitor/hmp-cmds.c          | 10 ++++++++++
>   qapi/migration.json         | 10 +++++++++-
>   softmmu/dirtylimit.c        | 31 +++++++++++++++++++++++++++++++
>   5 files changed, 62 insertions(+), 1 deletion(-)
> 
> diff --git a/include/sysemu/dirtylimit.h b/include/sysemu/dirtylimit.h
> index 8d2c1f3..98cc4a6 100644
> --- a/include/sysemu/dirtylimit.h
> +++ b/include/sysemu/dirtylimit.h
> @@ -34,4 +34,6 @@ void dirtylimit_set_vcpu(int cpu_index,
>   void dirtylimit_set_all(uint64_t quota,
>                           bool enable);
>   void dirtylimit_vcpu_execute(CPUState *cpu);
> +int64_t dirtylimit_throttle_us_per_full(void);
> +int64_t dirtylimit_us_ring_full(void);
>   #endif
> diff --git a/migration/migration.c b/migration/migration.c
> index 096b61a..886c25d 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -62,6 +62,7 @@
>   #include "yank_functions.h"
>   #include "sysemu/qtest.h"
>   #include "sysemu/kvm.h"
> +#include "sysemu/dirtylimit.h"
>   
>   #define MAX_THROTTLE  (128 << 20)      /* Migration transfer speed throttling */
>   
> @@ -1112,6 +1113,15 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>           info->ram->remaining = ram_bytes_remaining();
>           info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
>       }
> +
> +    if (migrate_dirty_limit() && dirtylimit_in_service()) {
> +        info->has_dirty_limit_throttle_us_per_full = true;
> +        info->dirty_limit_throttle_us_per_full =
> +                            dirtylimit_throttle_us_per_full();
> +
> +        info->has_dirty_limit_us_ring_full = true;
> +        info->dirty_limit_us_ring_full = dirtylimit_us_ring_full();
> +    }
>   }
>   
>   static void populate_disk_info(MigrationInfo *info)
> diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
> index 9ad6ee5..9d02baf 100644
> --- a/monitor/hmp-cmds.c
> +++ b/monitor/hmp-cmds.c
> @@ -339,6 +339,16 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
>                          info->cpu_throttle_percentage);
>       }
>   
> +    if (info->has_dirty_limit_throttle_us_per_full) {
> +        monitor_printf(mon, "dirty-limit throttle time: %" PRIi64 " us\n",
> +                       info->dirty_limit_throttle_us_per_full);
> +    }
> +
> +    if (info->has_dirty_limit_us_ring_full) {
> +        monitor_printf(mon, "dirty-limit ring full time: %" PRIi64 " us\n",
> +                       info->dirty_limit_us_ring_full);
> +    }
> +
>       if (info->has_postcopy_blocktime) {
>           monitor_printf(mon, "postcopy blocktime: %u\n",
>                          info->postcopy_blocktime);
> diff --git a/qapi/migration.json b/qapi/migration.json
> index af6b2da..62db5cb 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -242,6 +242,12 @@
>   #                   Present and non-empty when migration is blocked.
>   #                   (since 6.0)
>   #
> +# @dirty-limit-throttle-us-per-full: Throttle time (us) during the period of
> +#                                    dirty ring full (since 7.1)
> +#
> +# @dirty-limit-us-ring-full: Estimated periodic time (us) of dirty ring full.
> +#                            (since 7.1)
How about the following documentation:

# @dirty-limit-throttletime-each-round: Max throttle time (us) of all virtual CPUs each dirty ring
#                                       full round, used to observe if dirty-limit take effect
#                                       during live migration. (since 7.3)
#
# @dirty-limit-ring-full-time: Estimated average dirty ring full time (us) each round, note that
#                              the value equals dirty ring memory size divided by average dirty
#                              page rate of virtual CPU, which can be used to observe the average
#                              memory load of virtual CPU indirectly. (since 7.3)

Is it easier to understand?
> +#
>   # Since: 0.14
>   ##
>   { 'struct': 'MigrationInfo',
> @@ -259,7 +265,9 @@
>              '*postcopy-blocktime' : 'uint32',
>              '*postcopy-vcpu-blocktime': ['uint32'],
>              '*compression': 'CompressionStats',
> -           '*socket-address': ['SocketAddress'] } }
> +           '*socket-address': ['SocketAddress'],
> +           '*dirty-limit-throttle-us-per-full': 'int64',
> +           '*dirty-limit-us-ring-full': 'int64'} }
>  
>   ##
>   # @query-migrate:
> diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
> index 3f3c405..9d1df9b 100644
> --- a/softmmu/dirtylimit.c
> +++ b/softmmu/dirtylimit.c
> @@ -573,6 +573,37 @@ static struct DirtyLimitInfo *dirtylimit_query_vcpu(int cpu_index)
>       return info;
>   }
>   
> +/* Pick up first vcpu throttle time by default */
> +int64_t dirtylimit_throttle_us_per_full(void)
> +{
> +    CPUState *cpu = first_cpu;
> +    return cpu->throttle_us_per_full;
> +}
> +
> +/*
> + * Estimate dirty ring full time under current dirty page rate.
> + * Return -1 if guest doesn't dirty memory.
> + */
> +int64_t dirtylimit_us_ring_full(void)
> +{
> +    CPUState *cpu;
> +    uint64_t curr_rate = 0;
> +    int nvcpus = 0;
> +
> +    CPU_FOREACH(cpu) {
> +        if (cpu->running) {
> +            nvcpus++;
> +            curr_rate += vcpu_dirty_rate_get(cpu->cpu_index);
> +        }
> +    }
> +
> +    if (!curr_rate || !nvcpus) {
> +        return -1;
> +    }
> +
> +    return dirtylimit_dirty_ring_full_time(curr_rate / nvcpus);
> +}
> +
>   static struct DirtyLimitInfoList *dirtylimit_query_all(void)
>   {
>       int i, index;



* Re: [PATCH v2 05/11] qapi/migration: Introduce vcpu-dirty-limit parameters
  2022-12-03  9:13   ` Markus Armbruster
@ 2022-12-03  9:21     ` Markus Armbruster
  0 siblings, 0 replies; 33+ messages in thread
From: Markus Armbruster @ 2022-12-03  9:21 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Dr. David Alan Gilbert, Paolo Bonzini,
	Laurent Vivier, Eric Blake, Juan Quintela, Thomas Huth,
	Peter Maydell, Richard Henderson

Markus Armbruster <armbru@redhat.com> writes:

> huangy81@chinatelecom.cn writes:
>
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> Introduce "vcpu-dirty-limit" migration parameter used
>> to limit dirty page rate during live migration.
>>
>> "vcpu-dirty-limit" and "x-vcpu-dirty-limit-period" are
>> two dirty-limit-related migration parameters, which can
>> be set before and during live migration by qmp
>> migrate-set-parameters.
>>
>> These two parameters are used to help implement the dirty
>> page rate limit algo of migration.
>>
>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Could you explain briefly why x-vcpu-dirty-limit-period is experimental,
> and vcpu-dirty-limit is not?

Prior discussion under "[PATCH v1 3/8] migration: Introduce dirty-limit
capability".  I'm not asking you to explain all over again, I'm asking
you to try to work rationale into the commit message.




* Re: [PATCH v2 06/11] migration: Introduce dirty-limit capability
  2022-11-21 16:26 ` [PATCH v2 06/11] migration: Introduce dirty-limit capability huangy81
  2022-11-29 23:58   ` Peter Xu
@ 2022-12-03  9:24   ` Markus Armbruster
  1 sibling, 0 replies; 33+ messages in thread
From: Markus Armbruster @ 2022-12-03  9:24 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Dr. David Alan Gilbert, Paolo Bonzini,
	Laurent Vivier, Eric Blake, Juan Quintela, Thomas Huth,
	Peter Maydell, Richard Henderson

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Introduce migration dirty-limit capability, which can
> be turned on before live migration and limit dirty
> page rate during live migration.
>
> Introduce migrate_dirty_limit function to help check
> if dirty-limit capability is enabled during live migration.
>
> Meanwhile, refactor vcpu_dirty_rate_stat_collect
> so that period can be configured instead of hardcoded.
>
> dirty-limit capability is kind of like auto-converge
> but using dirty limit instead of traditional cpu-throttle
> to throttle guest down. To enable this feature, turn on
> the dirty-limit capability before live migration using
> migratioin-set-capabilities, and set the parameters

migrate-set-capabilities

> "x-vcpu-dirty-limit-period", "vcpu-dirty-limit" suitably
> to speed up convergence.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> ---

[...]

> diff --git a/migration/migration.h b/migration/migration.h
> index cdad8ac..7fbb9f8 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -409,6 +409,7 @@ bool migrate_ignore_shared(void);
>  bool migrate_validate_uuid(void);
>  
>  bool migrate_auto_converge(void);
> +bool migrate_dirty_limit(void);
>  bool migrate_use_multifd(void);
>  bool migrate_pause_before_switchover(void);
>  int migrate_multifd_channels(void);
> diff --git a/qapi/migration.json b/qapi/migration.json
> index dd667dd..af6b2da 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -477,6 +477,8 @@
>  #                    will be handled faster.  This is a performance feature and
>  #                    should not affect the correctness of postcopy migration.
>  #                    (since 7.1)
> +# @dirty-limit: Use dirty-limit to throttle down guest if enabled.
> +#               (since 7.1)

8.0

>  #
>  # Features:
>  # @unstable: Members @x-colo and @x-ignore-shared are experimental.
> @@ -492,7 +494,7 @@
>             'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
>             { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
>             'validate-uuid', 'background-snapshot',
> -           'zero-copy-send', 'postcopy-preempt'] }
> +           'zero-copy-send', 'postcopy-preempt', 'dirty-limit'] }
>  
>  ##
>  # @MigrationCapabilityStatus:

[...]




* Re: [PATCH v2 08/11] migration: Export dirty-limit time info
  2022-11-21 16:26 ` [PATCH v2 08/11] migration: Export dirty-limit time info huangy81
  2022-11-30  0:09   ` Peter Xu
  2022-12-03  9:14   ` Hyman
@ 2022-12-03  9:28   ` Markus Armbruster
  2 siblings, 0 replies; 33+ messages in thread
From: Markus Armbruster @ 2022-12-03  9:28 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Dr. David Alan Gilbert, Paolo Bonzini,
	Laurent Vivier, Eric Blake, Juan Quintela, Thomas Huth,
	Peter Maydell, Richard Henderson

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Export dirty limit throttle time and estimated ring full
> time, through which we can observe the process of dirty
> limit during live migration.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

[...]

> diff --git a/qapi/migration.json b/qapi/migration.json
> index af6b2da..62db5cb 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -242,6 +242,12 @@
>  #                   Present and non-empty when migration is blocked.
>  #                   (since 6.0)
>  #
> +# @dirty-limit-throttle-us-per-full: Throttle time (us) during the period of
> +#                                    dirty ring full (since 7.1)
> +#
> +# @dirty-limit-us-ring-full: Estimated periodic time (us) of dirty ring full.
> +#                            (since 7.1)

8.0

In review of v1, I asked you to explain the two members' meaning,
i.e. what exactly is being measured.  You replied.  Would it make sense
to work your explanation into the doc comments?

> +#
>  # Since: 0.14
>  ##
>  { 'struct': 'MigrationInfo',
> @@ -259,7 +265,9 @@
>             '*postcopy-blocktime' : 'uint32',
>             '*postcopy-vcpu-blocktime': ['uint32'],
>             '*compression': 'CompressionStats',
> -           '*socket-address': ['SocketAddress'] } }
> +           '*socket-address': ['SocketAddress'],
> +           '*dirty-limit-throttle-us-per-full': 'int64',
> +           '*dirty-limit-us-ring-full': 'int64'} }
>  
>  ##
>  # @query-migrate:

[...]




* Re: [PATCH v2 08/11] migration: Export dirty-limit time info
  2022-12-03  9:14   ` Hyman
@ 2022-12-03  9:42     ` Markus Armbruster
  0 siblings, 0 replies; 33+ messages in thread
From: Markus Armbruster @ 2022-12-03  9:42 UTC (permalink / raw)
  To: Hyman
  Cc: qemu-devel, Peter Xu, Dr. David Alan Gilbert, Paolo Bonzini,
	Laurent Vivier, Eric Blake, Juan Quintela, Thomas Huth,
	Peter Maydell, Richard Henderson

Hyman <huangy81@chinatelecom.cn> writes:

> On 2022/11/22 0:26, huangy81@chinatelecom.cn wrote:
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>> Export dirty limit throttle time and estimated ring full
>> time, through which we can observe the process of dirty
>> limit during live migration.
>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>> ---
>>   include/sysemu/dirtylimit.h |  2 ++
>>   migration/migration.c       | 10 ++++++++++
>>   monitor/hmp-cmds.c          | 10 ++++++++++
>>   qapi/migration.json         | 10 +++++++++-
>>   softmmu/dirtylimit.c        | 31 +++++++++++++++++++++++++++++++
>>   5 files changed, 62 insertions(+), 1 deletion(-)

[...]

>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index af6b2da..62db5cb 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -242,6 +242,12 @@
>>  #                   Present and non-empty when migration is blocked.
>>  #                   (since 6.0)
>>  #
>> +# @dirty-limit-throttle-us-per-full: Throttle time (us) during the period of
>> +#                                    dirty ring full (since 7.1)
>> +#
>> +# @dirty-limit-us-ring-full: Estimated periodic time (us) of dirty ring full.
>> +#                            (since 7.1)
> How about the following documents:
>
> # @dirty-limit-throttletime-each-round: Max throttle time (us) of all virtual CPUs each dirty ring
> #                                       full round, used to observe if dirty-limit take effect
> #                                       during live migration. (since 7.3)
> #
> # @dirty-limit-ring-full-time: Estimated average dirty ring full time (us) each round, note that
> #                              the value equals dirty ring memory size divided by average dirty
> #                              page rate of virtual CPU, which can be used to observe the average
> #                              memory load of virtual CPU indirectly. (since 7.3)
>
> Is it more easy-understanding ?

dirty-limit-ring-full-time is better than dirty-limit-us-ring-full.

dirty-limit-throttletime-each-round is rather long.

We say "in microseconds" in doc comments.

Avoid abbreviations like "max" in doc comments, spell them out like
"maximum".

I need to give the text a closer read.  Out of time for today.  If you
don't see a reply from me early next week, feel free to remind me.

>> +#
>>  # Since: 0.14
>>  ##
>>  { 'struct': 'MigrationInfo',
>> @@ -259,7 +265,9 @@
>>             '*postcopy-blocktime' : 'uint32',
>>             '*postcopy-vcpu-blocktime': ['uint32'],
>>             '*compression': 'CompressionStats',
>> -           '*socket-address': ['SocketAddress'] } }
>> +           '*socket-address': ['SocketAddress'],
>> +           '*dirty-limit-throttle-us-per-full': 'int64',
>> +           '*dirty-limit-us-ring-full': 'int64'} }
>>  ##
>>  # @query-migrate:

[...]




end of thread, other threads:[~2022-12-03  9:43 UTC | newest]

Thread overview: 33+ messages
2022-11-21 16:26 [PATCH v2 00/11] migration: introduce dirtylimit capability huangy81
2022-11-21 16:26 ` [PATCH v2 01/11] dirtylimit: Fix overflow when computing MB huangy81
2022-11-29 23:17   ` Peter Xu
2022-12-03  8:56   ` Markus Armbruster
2022-11-21 16:26 ` [PATCH v2 02/11] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit" huangy81
2022-11-29 23:17   ` Peter Xu
2022-12-03  9:01   ` Markus Armbruster
2022-11-21 16:26 ` [PATCH v2 03/11] kvm-all: Do not allow reap vcpu dirty ring buffer if not ready huangy81
2022-11-29 22:42   ` Peter Xu
2022-11-30  3:11     ` Hyman Huang
2022-11-21 16:26 ` [PATCH v2 04/11] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
2022-11-29 22:49   ` Peter Xu
2022-12-03  9:06   ` Markus Armbruster
2022-12-03  9:11   ` Markus Armbruster
2022-11-21 16:26 ` [PATCH v2 05/11] qapi/migration: Introduce vcpu-dirty-limit parameters huangy81
2022-11-29 23:58   ` Peter Xu
2022-12-03  9:13   ` Markus Armbruster
2022-12-03  9:21     ` Markus Armbruster
2022-11-21 16:26 ` [PATCH v2 06/11] migration: Introduce dirty-limit capability huangy81
2022-11-29 23:58   ` Peter Xu
2022-12-03  9:24   ` Markus Armbruster
2022-11-21 16:26 ` [PATCH v2 07/11] migration: Implement dirty-limit convergence algo huangy81
2022-11-29 23:17   ` Peter Xu
2022-12-01  1:13     ` Hyman
2022-11-21 16:26 ` [PATCH v2 08/11] migration: Export dirty-limit time info huangy81
2022-11-30  0:09   ` Peter Xu
2022-12-01  2:09     ` Hyman
2022-12-03  9:14   ` Hyman
2022-12-03  9:42     ` Markus Armbruster
2022-12-03  9:28   ` Markus Armbruster
2022-11-21 16:26 ` [PATCH v2 09/11] tests: Add migration dirty-limit capability test huangy81
2022-11-21 16:26 ` [PATCH v2 10/11] tests/migration: Introduce dirty-ring-size option into guestperf huangy81
2022-11-21 16:26 ` [PATCH v2 11/11] tests/migration: Introduce dirty-limit " huangy81
