* [PATCH v1 0/8] migration: introduce dirtylimit capability
@ 2022-09-01 17:22 huangy81
  2022-09-01 17:22 ` [PATCH v1 1/8] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
                   ` (9 more replies)
  0 siblings, 10 replies; 29+ messages in thread
From: huangy81 @ 2022-09-01 17:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange, Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

v1:
- make parameter vcpu-dirty-limit experimental
- switch dirty limit off when migration is cancelled
- add cancellation logic to the migration test

Please review, thanks,

Yong 

Abstract
========

This series adds a new migration capability called "dirtylimit". It can
be enabled when the dirty ring is enabled, and it improves vCPU performance
during migration. It is based on the previous patchset:
https://lore.kernel.org/qemu-devel/cover.1656177590.git.huangy81@chinatelecom.cn/

As mentioned in the patchset "support dirty restraint on vCPU", the dirtylimit
approach to migration avoids penalizing read processes. This series wires up the
vCPU dirty limit and wraps it up as the dirtylimit capability of migration. I
introduce two parameters, x-vcpu-dirty-limit-period and x-vcpu-dirty-limit, to
control the setup of the dirty limit during live migration.
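With the series applied, the feature would be driven over QMP roughly as
follows. This is only a sketch using the capability and parameter names
introduced in patches 1-3; the parameter values and destination URI are
illustrative placeholders, and availability depends on the QEMU build
(dirty ring must be enabled):

```json
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
      { "capability": "dirty-limit", "state": true } ] } }

{ "execute": "migrate-set-parameters",
  "arguments": { "x-vcpu-dirty-limit-period": 1000,
                 "x-vcpu-dirty-limit": 10 } }

{ "execute": "migrate", "arguments": { "uri": "tcp:<dst-host>:4444" } }
```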

To validate the implementation, I tested live migration of a 32-vCPU VM with the
following model:
only vcpu0 and vcpu1 are dirtied with a heavy memory workload while the rest of
the vCPUs are left untouched; unixbench runs on vcpu8-vcpu15 with the CPU
affinity set by the following command:
taskset -c 8-15 ./Run -i 2 -c 8 {unixbench test item}

The following are results:

host cpu: Intel(R) Xeon(R) Platinum 8378A
host interface speed: 1000Mb/s
  |---------------------+--------+------------+---------------|
  | UnixBench test item | Normal | Dirtylimit | Auto-converge |
  |---------------------+--------+------------+---------------|
  | dhry2reg            | 32800  | 32786      | 25292         |
  | whetstone-double    | 10326  | 10315      | 9847          |
  | pipe                | 15442  | 15271      | 14506         |
  | context1            | 7260   | 6235       | 4514          |
  | spawn               | 3663   | 3317       | 3249          |
  | syscall             | 4669   | 4667       | 3841          |
  |---------------------+--------+------------+---------------|
From the data above we can conclude that vCPUs that do not dirty memory in the
VM are almost unaffected during dirtylimit migration, whereas auto-converge
slows them down noticeably.

I also measured the total time of dirtylimit migration with varying dirty
memory sizes in the VM.

Scenario 1:
host cpu: Intel(R) Xeon(R) Platinum 8378A
host interface speed: 1000Mb/s
  |-----------------------+----------------+-------------------|
  | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
  |-----------------------+----------------+-------------------|
  | 60                    | 2014           | 2131              |
  | 70                    | 5381           | 12590             |
  | 90                    | 6037           | 33545             |
  | 110                   | 7660           | [*]               |
  |-----------------------+----------------+-------------------|
  [*]: In this case the migration did not converge.

Scenario 2:
host cpu: Intel(R) Xeon(R) CPU E5-2650
host interface speed: 10000Mb/s
  |-----------------------+----------------+-------------------|
  | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
  |-----------------------+----------------+-------------------|
  | 1600                  | 15842          | 27548             |
  | 2000                  | 19026          | 38447             |
  | 2400                  | 19897          | 46381             |
  | 2800                  | 22338          | 57149             |
  |-----------------------+----------------+-------------------|
The data above shows that dirtylimit migration also reduces the total migration
time, and in some cases it achieves convergence more easily.

In addition to implementing the dirtylimit capability itself, this series
adds three migration tests so that developers can easily experiment with it:
 1. qtest for dirty limit migration
 2. support dirty ring way of migration for guestperf tool
 3. support dirty limit migration for guestperf tool

Please review, thanks!

Hyman Huang (8):
  qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
  qapi/migration: Introduce x-vcpu-dirty-limit parameters
  migration: Introduce dirty-limit capability
  migration: Implement dirty-limit convergence algo
  migration: Export dirty-limit time info
  tests: Add migration dirty-limit capability test
  tests/migration: Introduce dirty-ring-size option into guestperf
  tests/migration: Introduce dirty-limit into guestperf

 include/sysemu/dirtylimit.h             |   2 +
 migration/migration.c                   |  51 +++++++++++
 migration/migration.h                   |   1 +
 migration/ram.c                         |  53 ++++++++---
 migration/trace-events                  |   1 +
 monitor/hmp-cmds.c                      |  26 ++++++
 qapi/migration.json                     |  57 ++++++++++--
 softmmu/dirtylimit.c                    |  33 ++++++-
 tests/migration/guestperf/comparison.py |  24 +++++
 tests/migration/guestperf/engine.py     |  33 ++++++-
 tests/migration/guestperf/hardware.py   |   8 +-
 tests/migration/guestperf/progress.py   |  17 +++-
 tests/migration/guestperf/scenario.py   |  11 ++-
 tests/migration/guestperf/shell.py      |  25 +++++-
 tests/qtest/migration-test.c            | 154 ++++++++++++++++++++++++++++++++
 15 files changed, 465 insertions(+), 31 deletions(-)

-- 
1.8.3.1




* [PATCH v1 1/8] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
@ 2022-09-01 17:22 ` huangy81
  2022-09-02  8:02   ` Markus Armbruster
  2022-09-01 17:22 ` [PATCH v1 2/8] qapi/migration: Introduce x-vcpu-dirty-limit parameters huangy81
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: huangy81 @ 2022-09-01 17:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange, Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Introduce the "x-vcpu-dirty-limit-period" experimental migration
parameter, which makes the dirty rate calculation period
configurable.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 migration/migration.c | 16 ++++++++++++++++
 monitor/hmp-cmds.c    |  8 ++++++++
 qapi/migration.json   | 31 ++++++++++++++++++++++++-------
 3 files changed, 48 insertions(+), 7 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index bb8bbdd..a8a8065 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -116,6 +116,8 @@
 #define DEFAULT_MIGRATE_ANNOUNCE_ROUNDS    5
 #define DEFAULT_MIGRATE_ANNOUNCE_STEP    100
 
+#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD     500     /* ms */
+
 static NotifierList migration_state_notifiers =
     NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
 
@@ -962,6 +964,9 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
                        s->parameters.block_bitmap_mapping);
     }
 
+    params->has_x_vcpu_dirty_limit_period = true;
+    params->x_vcpu_dirty_limit_period = s->parameters.x_vcpu_dirty_limit_period;
+
     return params;
 }
 
@@ -1662,6 +1667,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
         dest->has_block_bitmap_mapping = true;
         dest->block_bitmap_mapping = params->block_bitmap_mapping;
     }
+
+    if (params->has_x_vcpu_dirty_limit_period) {
+        dest->x_vcpu_dirty_limit_period = params->x_vcpu_dirty_limit_period;
+    }
 }
 
 static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1784,6 +1793,10 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
             QAPI_CLONE(BitmapMigrationNodeAliasList,
                        params->block_bitmap_mapping);
     }
+    if (params->has_x_vcpu_dirty_limit_period) {
+        s->parameters.x_vcpu_dirty_limit_period =
+            params->x_vcpu_dirty_limit_period;
+    }
 }
 
 void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
@@ -4385,6 +4398,9 @@ static Property migration_properties[] = {
     DEFINE_PROP_STRING("tls-creds", MigrationState, parameters.tls_creds),
     DEFINE_PROP_STRING("tls-hostname", MigrationState, parameters.tls_hostname),
     DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
+    DEFINE_PROP_UINT64("x-vcpu-dirty-limit-period", MigrationState,
+                       parameters.x_vcpu_dirty_limit_period,
+                       DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index c6cd6f9..7569859 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -532,6 +532,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
                 }
             }
         }
+
+        monitor_printf(mon, "%s: %" PRIu64 " ms\n",
+        MigrationParameter_str(MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD),
+        params->x_vcpu_dirty_limit_period);
     }
 
     qapi_free_MigrationParameters(params);
@@ -1351,6 +1355,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         error_setg(&err, "The block-bitmap-mapping parameter can only be set "
                    "through QMP");
         break;
+    case MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD:
+        p->has_x_vcpu_dirty_limit_period = true;
+        visit_type_size(v, param, &p->x_vcpu_dirty_limit_period, &err);
+        break;
     default:
         assert(0);
     }
diff --git a/qapi/migration.json b/qapi/migration.json
index 81185d4..332c087 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -776,8 +776,12 @@
 #                        block device name if there is one, and to their node name
 #                        otherwise. (Since 5.2)
 #
+# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
+#                             Defaults to 500ms. (Since 7.1)
+#
 # Features:
-# @unstable: Member @x-checkpoint-delay is experimental.
+# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
+#            are experimental.
 #
 # Since: 2.4
 ##
@@ -795,8 +799,9 @@
            'multifd-channels',
            'xbzrle-cache-size', 'max-postcopy-bandwidth',
            'max-cpu-throttle', 'multifd-compression',
-           'multifd-zlib-level' ,'multifd-zstd-level',
-           'block-bitmap-mapping' ] }
+           'multifd-zlib-level', 'multifd-zstd-level',
+           'block-bitmap-mapping',
+           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
 
 ##
 # @MigrateSetParameters:
@@ -941,8 +946,12 @@
 #                        block device name if there is one, and to their node name
 #                        otherwise. (Since 5.2)
 #
+# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
+#                             Defaults to 500ms. (Since 7.1)
+#
 # Features:
-# @unstable: Member @x-checkpoint-delay is experimental.
+# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
+#            are experimental.
 #
 # Since: 2.4
 ##
@@ -976,7 +985,9 @@
             '*multifd-compression': 'MultiFDCompression',
             '*multifd-zlib-level': 'uint8',
             '*multifd-zstd-level': 'uint8',
-            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
+            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
+            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
+                                            'features': [ 'unstable' ] } } }
 
 ##
 # @migrate-set-parameters:
@@ -1141,8 +1152,12 @@
 #                        block device name if there is one, and to their node name
 #                        otherwise. (Since 5.2)
 #
+# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
+#                             Defaults to 500ms. (Since 7.1)
+#
 # Features:
-# @unstable: Member @x-checkpoint-delay is experimental.
+# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
+#            are experimental.
 #
 # Since: 2.4
 ##
@@ -1174,7 +1189,9 @@
             '*multifd-compression': 'MultiFDCompression',
             '*multifd-zlib-level': 'uint8',
             '*multifd-zstd-level': 'uint8',
-            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
+            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
+            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
+                                            'features': [ 'unstable' ] } } }
 
 ##
 # @query-migrate-parameters:
-- 
1.8.3.1




* [PATCH v1 2/8] qapi/migration: Introduce x-vcpu-dirty-limit parameters
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
  2022-09-01 17:22 ` [PATCH v1 1/8] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
@ 2022-09-01 17:22 ` huangy81
  2022-09-02  8:03   ` Markus Armbruster
  2022-09-01 17:22 ` [PATCH v1 3/8] migration: Introduce dirty-limit capability huangy81
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: huangy81 @ 2022-09-01 17:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange, Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Introduce the "x-vcpu-dirty-limit" migration parameter, used
to limit the dirty page rate during live migration.

"x-vcpu-dirty-limit" and "x-vcpu-dirty-limit-period" are the
two dirty-limit-related migration parameters; both can be set
before and during live migration via QMP migrate-set-parameters.

These two parameters help implement the dirty page rate limit
algorithm of migration.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 migration/migration.c | 14 ++++++++++++++
 monitor/hmp-cmds.c    |  8 ++++++++
 qapi/migration.json   | 18 +++++++++++++++---
 3 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index a8a8065..a748fe5 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -117,6 +117,7 @@
 #define DEFAULT_MIGRATE_ANNOUNCE_STEP    100
 
 #define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD     500     /* ms */
+#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT            1       /* MB/s */
 
 static NotifierList migration_state_notifiers =
     NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
@@ -967,6 +968,9 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
     params->has_x_vcpu_dirty_limit_period = true;
     params->x_vcpu_dirty_limit_period = s->parameters.x_vcpu_dirty_limit_period;
 
+    params->has_x_vcpu_dirty_limit = true;
+    params->x_vcpu_dirty_limit = s->parameters.x_vcpu_dirty_limit;
+
     return params;
 }
 
@@ -1671,6 +1675,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
     if (params->has_x_vcpu_dirty_limit_period) {
         dest->x_vcpu_dirty_limit_period = params->x_vcpu_dirty_limit_period;
     }
+
+    if (params->has_x_vcpu_dirty_limit) {
+        dest->x_vcpu_dirty_limit = params->x_vcpu_dirty_limit;
+    }
 }
 
 static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1797,6 +1805,9 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
         s->parameters.x_vcpu_dirty_limit_period =
             params->x_vcpu_dirty_limit_period;
     }
+    if (params->has_x_vcpu_dirty_limit) {
+        s->parameters.x_vcpu_dirty_limit = params->x_vcpu_dirty_limit;
+    }
 }
 
 void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
@@ -4401,6 +4412,9 @@ static Property migration_properties[] = {
     DEFINE_PROP_UINT64("x-vcpu-dirty-limit-period", MigrationState,
                        parameters.x_vcpu_dirty_limit_period,
                        DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD),
+    DEFINE_PROP_UINT64("x-vcpu-dirty-limit", MigrationState,
+                       parameters.x_vcpu_dirty_limit,
+                       DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 7569859..b362fae 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -536,6 +536,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "%s: %" PRIu64 " ms\n",
         MigrationParameter_str(MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD),
         params->x_vcpu_dirty_limit_period);
+
+        monitor_printf(mon, "%s: %" PRIu64 " MB/s\n",
+            MigrationParameter_str(MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT),
+            params->x_vcpu_dirty_limit);
     }
 
     qapi_free_MigrationParameters(params);
@@ -1359,6 +1363,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         p->has_x_vcpu_dirty_limit_period = true;
         visit_type_size(v, param, &p->x_vcpu_dirty_limit_period, &err);
         break;
+    case MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT:
+        p->has_x_vcpu_dirty_limit = true;
+        visit_type_size(v, param, &p->x_vcpu_dirty_limit, &err);
+        break;
     default:
         assert(0);
     }
diff --git a/qapi/migration.json b/qapi/migration.json
index 332c087..8554d33 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -779,6 +779,9 @@
 # @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
 #                             Defaults to 500ms. (Since 7.1)
 #
+# @x-vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
+#                      Defaults to 1. (Since 7.1)
+#
 # Features:
 # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
 #            are experimental.
@@ -801,7 +804,8 @@
            'max-cpu-throttle', 'multifd-compression',
            'multifd-zlib-level', 'multifd-zstd-level',
            'block-bitmap-mapping',
-           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
+           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
+           'x-vcpu-dirty-limit'] }
 
 ##
 # @MigrateSetParameters:
@@ -949,6 +953,9 @@
 # @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
 #                             Defaults to 500ms. (Since 7.1)
 #
+# @x-vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
+#                      Defaults to 1. (Since 7.1)
+#
 # Features:
 # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
 #            are experimental.
@@ -987,7 +994,8 @@
             '*multifd-zstd-level': 'uint8',
             '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
             '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
-                                            'features': [ 'unstable' ] } } }
+                                            'features': [ 'unstable' ] },
+            '*x-vcpu-dirty-limit': 'uint64'} }
 
 ##
 # @migrate-set-parameters:
@@ -1155,6 +1163,9 @@
 # @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
 #                             Defaults to 500ms. (Since 7.1)
 #
+# @x-vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
+#                      Defaults to 1. (Since 7.1)
+#
 # Features:
 # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
 #            are experimental.
@@ -1191,7 +1202,8 @@
             '*multifd-zstd-level': 'uint8',
             '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
             '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
-                                            'features': [ 'unstable' ] } } }
+                                            'features': [ 'unstable' ] },
+            '*x-vcpu-dirty-limit': 'uint64'} }
 
 ##
 # @query-migrate-parameters:
-- 
1.8.3.1




* [PATCH v1 3/8] migration: Introduce dirty-limit capability
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
  2022-09-01 17:22 ` [PATCH v1 1/8] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
  2022-09-01 17:22 ` [PATCH v1 2/8] qapi/migration: Introduce x-vcpu-dirty-limit parameters huangy81
@ 2022-09-01 17:22 ` huangy81
  2022-09-02  8:07   ` Markus Armbruster
  2022-09-01 17:22 ` [PATCH v1 4/8] migration: Implement dirty-limit convergence algo huangy81
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: huangy81 @ 2022-09-01 17:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange, Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Introduce the migration dirty-limit capability, which can
be turned on before live migration to limit the dirty
page rate during live migration.

Introduce the migrate_dirty_limit function to help check
whether the dirty-limit capability is enabled during live
migration.

Meanwhile, refactor vcpu_dirty_rate_stat_collect so that
the period can be configured instead of hardcoded.

The dirty-limit capability is similar to auto-converge,
but it uses the dirty limit instead of the traditional
cpu-throttle to throttle the guest down. To enable this
feature, turn on the dirty-limit capability before live
migration using migrate-set-capabilities, and set the
parameters "x-vcpu-dirty-limit-period" and
"x-vcpu-dirty-limit" suitably to speed up convergence.
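The period selection done by the refactored vcpu_dirty_rate_stat_collect
can be modeled in Python as follows. This is a simplified sketch, not QEMU
code; the function name and the default value are illustrative stand-ins:

```python
DIRTYLIMIT_CALC_TIME_MS = 1000  # illustrative stand-in for the built-in default


def collection_period_ms(dirty_limit_cap_enabled, migration_active,
                         configured_period_ms):
    """Use the configurable x-vcpu-dirty-limit-period only while a
    dirty-limit migration is active; otherwise keep the default."""
    if dirty_limit_cap_enabled and migration_active:
        return configured_period_ms
    return DIRTYLIMIT_CALC_TIME_MS
```

Outside of an active dirty-limit migration, dirty rate sampling keeps its
original cadence, so the parameter has no effect on normal operation.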

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 migration/migration.c | 10 ++++++++++
 migration/migration.h |  1 +
 qapi/migration.json   |  4 +++-
 softmmu/dirtylimit.c  | 11 ++++++++++-
 4 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index a748fe5..d117bb4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2508,6 +2508,15 @@ bool migrate_auto_converge(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_AUTO_CONVERGE];
 }
 
+bool migrate_dirty_limit(void)
+{
+    MigrationState *s;
+
+    s = migrate_get_current();
+
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_DIRTY_LIMIT];
+}
+
 bool migrate_zero_blocks(void)
 {
     MigrationState *s;
@@ -4437,6 +4446,7 @@ static Property migration_properties[] = {
     DEFINE_PROP_MIG_CAP("x-zero-copy-send",
             MIGRATION_CAPABILITY_ZERO_COPY_SEND),
 #endif
+    DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
 
     DEFINE_PROP_END_OF_LIST(),
 };
diff --git a/migration/migration.h b/migration/migration.h
index cdad8ac..7fbb9f8 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -409,6 +409,7 @@ bool migrate_ignore_shared(void);
 bool migrate_validate_uuid(void);
 
 bool migrate_auto_converge(void);
+bool migrate_dirty_limit(void);
 bool migrate_use_multifd(void);
 bool migrate_pause_before_switchover(void);
 int migrate_multifd_channels(void);
diff --git a/qapi/migration.json b/qapi/migration.json
index 8554d33..bc4bc96 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -477,6 +477,8 @@
 #                    will be handled faster.  This is a performance feature and
 #                    should not affect the correctness of postcopy migration.
 #                    (since 7.1)
+# @dirty-limit: Use dirty-limit to throttle down guest if enabled.
+#               (since 7.1)
 #
 # Features:
 # @unstable: Members @x-colo and @x-ignore-shared are experimental.
@@ -492,7 +494,7 @@
            'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
            { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
            'validate-uuid', 'background-snapshot',
-           'zero-copy-send', 'postcopy-preempt'] }
+           'zero-copy-send', 'postcopy-preempt', 'dirty-limit'] }
 
 ##
 # @MigrationCapabilityStatus:
diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index 8d98cb7..1fdd8c6 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -23,6 +23,8 @@
 #include "exec/memory.h"
 #include "hw/boards.h"
 #include "sysemu/kvm.h"
+#include "migration/misc.h"
+#include "migration/migration.h"
 #include "trace.h"
 
 /*
@@ -75,11 +77,18 @@ static bool dirtylimit_quit;
 
 static void vcpu_dirty_rate_stat_collect(void)
 {
+    MigrationState *s = migrate_get_current();
     VcpuStat stat;
     int i = 0;
+    int64_t period = DIRTYLIMIT_CALC_TIME_MS;
+
+    if (migrate_dirty_limit() &&
+        migration_is_active(s)) {
+        period = s->parameters.x_vcpu_dirty_limit_period;
+    }
 
     /* calculate vcpu dirtyrate */
-    vcpu_calculate_dirtyrate(DIRTYLIMIT_CALC_TIME_MS,
+    vcpu_calculate_dirtyrate(period,
                              &stat,
                              GLOBAL_DIRTY_LIMIT,
                              false);
-- 
1.8.3.1




* [PATCH v1 4/8] migration: Implement dirty-limit convergence algo
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
                   ` (2 preceding siblings ...)
  2022-09-01 17:22 ` [PATCH v1 3/8] migration: Introduce dirty-limit capability huangy81
@ 2022-09-01 17:22 ` huangy81
  2022-09-06 20:37   ` Peter Xu
  2022-09-01 17:22 ` [PATCH v1 5/8] migration: Export dirty-limit time info huangy81
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: huangy81 @ 2022-09-01 17:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange, Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Implement the dirty-limit convergence algorithm for live migration.
It is similar to the auto-converge algorithm, but it uses the dirty
limit instead of CPU throttling to make migration converge.
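The trigger condition in migration_trigger_throttle can be modeled in
Python as follows. This is a simplified sketch of the detection logic, not
QEMU code; the function name and the dict-based state are illustrative:

```python
def throttle_needed(bytes_dirty_period, bytes_xfer_period,
                    threshold_pct, state):
    """Return True when throttling should start or increase: the bytes
    dirtied in a period exceed threshold_pct% of the bytes transferred
    in that period, for the second time since the last trigger."""
    bytes_dirty_threshold = bytes_xfer_period * threshold_pct // 100
    if bytes_dirty_period > bytes_dirty_threshold:
        state["dirty_rate_high_cnt"] += 1
        if state["dirty_rate_high_cnt"] >= 2:
            state["dirty_rate_high_cnt"] = 0
            return True
    return False
```

When this fires, auto-converge calls mig_throttle_guest_down while the
dirty-limit path instead sets the configured per-vCPU dirty rate quota.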

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 migration/migration.c  |  1 +
 migration/ram.c        | 53 +++++++++++++++++++++++++++++++++++++-------------
 migration/trace-events |  1 +
 3 files changed, 42 insertions(+), 13 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index d117bb4..64696de 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -239,6 +239,7 @@ void migration_cancel(const Error *error)
     if (error) {
         migrate_set_error(current_migration, error);
     }
+    qmp_cancel_vcpu_dirty_limit(false, -1, NULL);
     migrate_fd_cancel(current_migration);
 }
 
diff --git a/migration/ram.c b/migration/ram.c
index dc1de9d..cc19c5e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -45,6 +45,7 @@
 #include "qapi/error.h"
 #include "qapi/qapi-types-migration.h"
 #include "qapi/qapi-events-migration.h"
+#include "qapi/qapi-commands-migration.h"
 #include "qapi/qmp/qerror.h"
 #include "trace.h"
 #include "exec/ram_addr.h"
@@ -57,6 +58,8 @@
 #include "qemu/iov.h"
 #include "multifd.h"
 #include "sysemu/runstate.h"
+#include "sysemu/dirtylimit.h"
+#include "sysemu/kvm.h"
 
 #include "hw/boards.h" /* for machine_dump_guest_core() */
 
@@ -1139,6 +1142,21 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
     }
 }
 
+/*
+ * Enable dirty-limit to throttle down the guest
+ */
+static void migration_dirty_limit_guest(void)
+{
+    if (!dirtylimit_in_service()) {
+        MigrationState *s = migrate_get_current();
+        int64_t quota_dirtyrate = s->parameters.x_vcpu_dirty_limit;
+
+        /* Set quota dirtyrate if dirty limit not in service */
+        qmp_set_vcpu_dirty_limit(false, -1, quota_dirtyrate, NULL);
+        trace_migration_dirty_limit_guest(quota_dirtyrate);
+    }
+}
+
 static void migration_trigger_throttle(RAMState *rs)
 {
     MigrationState *s = migrate_get_current();
@@ -1148,22 +1166,31 @@ static void migration_trigger_throttle(RAMState *rs)
     uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
     uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
 
-    /* During block migration the auto-converge logic incorrectly detects
-     * that ram migration makes no progress. Avoid this by disabling the
-     * throttling logic during the bulk phase of block migration. */
-    if (migrate_auto_converge() && !blk_mig_bulk_active()) {
-        /* The following detection logic can be refined later. For now:
-           Check to see if the ratio between dirtied bytes and the approx.
-           amount of bytes that just got transferred since the last time
-           we were in this routine reaches the threshold. If that happens
-           twice, start or increase throttling. */
-
-        if ((bytes_dirty_period > bytes_dirty_threshold) &&
-            (++rs->dirty_rate_high_cnt >= 2)) {
+    /*
+     * The following detection logic can be refined later. For now:
+     * Check to see if the ratio between dirtied bytes and the approx.
+     * amount of bytes that just got transferred since the last time
+     * we were in this routine reaches the threshold. If that happens
+     * twice, start or increase throttling.
+     */
+
+    if ((bytes_dirty_period > bytes_dirty_threshold) &&
+        (++rs->dirty_rate_high_cnt >= 2)) {
+        rs->dirty_rate_high_cnt = 0;
+        /*
+         * During block migration the auto-converge logic incorrectly detects
+         * that ram migration makes no progress. Avoid this by disabling the
+         * throttling logic during the bulk phase of block migration
+         */
+
+        if (migrate_auto_converge() && !blk_mig_bulk_active()) {
             trace_migration_throttle();
-            rs->dirty_rate_high_cnt = 0;
             mig_throttle_guest_down(bytes_dirty_period,
                                     bytes_dirty_threshold);
+        } else if (migrate_dirty_limit() &&
+                   kvm_dirty_ring_enabled() &&
+                   migration_is_active(s)) {
+            migration_dirty_limit_guest();
         }
     }
 }
diff --git a/migration/trace-events b/migration/trace-events
index 57003ed..33a2666 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -91,6 +91,7 @@ migration_bitmap_sync_start(void) ""
 migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
 migration_bitmap_clear_dirty(char *str, uint64_t start, uint64_t size, unsigned long page) "rb %s start 0x%"PRIx64" size 0x%"PRIx64" page 0x%lx"
 migration_throttle(void) ""
+migration_dirty_limit_guest(int64_t dirtyrate) "guest dirty page rate limit %" PRIi64 " MB/s"
 ram_discard_range(const char *rbname, uint64_t start, size_t len) "%s: start: %" PRIx64 " %zx"
 ram_load_loop(const char *rbname, uint64_t addr, int flags, void *host) "%s: addr: 0x%" PRIx64 " flags: 0x%x host: %p"
 ram_load_postcopy_loop(int channel, uint64_t addr, int flags) "chan=%d addr=0x%" PRIx64 " flags=0x%x"
-- 
1.8.3.1




* [PATCH v1 5/8] migration: Export dirty-limit time info
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
                   ` (3 preceding siblings ...)
  2022-09-01 17:22 ` [PATCH v1 4/8] migration: Implement dirty-limit convergence algo huangy81
@ 2022-09-01 17:22 ` huangy81
  2022-10-01 18:31   ` Markus Armbruster
  2022-09-01 17:22 ` [PATCH v1 6/8] tests: Add migration dirty-limit capability test huangy81
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: huangy81 @ 2022-09-01 17:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange, Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Export the dirty limit throttle time and the estimated ring
full time, through which we can observe how the dirty limit
behaves during live migration.
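Conceptually, the estimated ring full time is the number of dirty-ring slots divided by the current dirty page rate. A back-of-the-envelope sketch (the 4 KiB page size and the function name are assumptions; the real computation lives in softmmu/dirtylimit.c):

```python
PAGE_SIZE = 4096  # assumed; QEMU uses the target's page size

def ring_full_time_us(ring_entries, dirty_rate_mb_per_s):
    """Estimate microseconds until a dirty ring of ring_entries slots
    fills at the given dirty page rate (MB/s).

    Returns -1 when the vCPU dirties no memory, matching the
    convention of dirtylimit_us_ring_full() in the patch.
    """
    if dirty_rate_mb_per_s == 0:
        return -1
    pages_per_second = dirty_rate_mb_per_s * 1024 * 1024 // PAGE_SIZE
    return ring_entries * 1_000_000 // pages_per_second
```

For example, a 4096-entry ring at 16 MB/s (4096 pages/s) fills in roughly one second.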

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 include/sysemu/dirtylimit.h |  2 ++
 migration/migration.c       | 10 ++++++++++
 monitor/hmp-cmds.c          | 10 ++++++++++
 qapi/migration.json         | 10 +++++++++-
 softmmu/dirtylimit.c        | 22 ++++++++++++++++++++++
 5 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/include/sysemu/dirtylimit.h b/include/sysemu/dirtylimit.h
index 8d2c1f3..98cc4a6 100644
--- a/include/sysemu/dirtylimit.h
+++ b/include/sysemu/dirtylimit.h
@@ -34,4 +34,6 @@ void dirtylimit_set_vcpu(int cpu_index,
 void dirtylimit_set_all(uint64_t quota,
                         bool enable);
 void dirtylimit_vcpu_execute(CPUState *cpu);
+int64_t dirtylimit_throttle_us_per_full(void);
+int64_t dirtylimit_us_ring_full(void);
 #endif
diff --git a/migration/migration.c b/migration/migration.c
index 64696de..22ba197 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -61,6 +61,7 @@
 #include "sysemu/cpus.h"
 #include "yank_functions.h"
 #include "sysemu/qtest.h"
+#include "sysemu/dirtylimit.h"
 
 #define MAX_THROTTLE  (128 << 20)      /* Migration transfer speed throttling */
 
@@ -1110,6 +1111,15 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
         info->ram->remaining = ram_bytes_remaining();
         info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
     }
+
+    if (migrate_dirty_limit() && dirtylimit_in_service()) {
+        info->has_dirty_limit_throttle_us_per_full = true;
+        info->dirty_limit_throttle_us_per_full =
+                            dirtylimit_throttle_us_per_full();
+
+        info->has_dirty_limit_us_ring_full = true;
+        info->dirty_limit_us_ring_full = dirtylimit_us_ring_full();
+    }
 }
 
 static void populate_disk_info(MigrationInfo *info)
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index b362fae..23c3f48 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -358,6 +358,16 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
                        info->cpu_throttle_percentage);
     }
 
+    if (info->has_dirty_limit_throttle_us_per_full) {
+        monitor_printf(mon, "dirty-limit throttle time: %" PRIu64 " us\n",
+                       info->dirty_limit_throttle_us_per_full);
+    }
+
+    if (info->has_dirty_limit_us_ring_full) {
+        monitor_printf(mon, "dirty-limit ring full time: %" PRIu64 " us\n",
+                       info->dirty_limit_us_ring_full);
+    }
+
     if (info->has_postcopy_blocktime) {
         monitor_printf(mon, "postcopy blocktime: %u\n",
                        info->postcopy_blocktime);
diff --git a/qapi/migration.json b/qapi/migration.json
index bc4bc96..c263d54 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -242,6 +242,12 @@
 #                   Present and non-empty when migration is blocked.
 #                   (since 6.0)
 #
+# @dirty-limit-throttle-us-per-full: Throttle time (in microseconds) during
+#                                    the period of dirty ring full. (Since 7.1)
+#
+# @dirty-limit-us-ring-full: Estimated periodic time (in microseconds) of
+#                            dirty ring full. (Since 7.1)
+#
 # Since: 0.14
 ##
 { 'struct': 'MigrationInfo',
@@ -259,7 +265,9 @@
            '*postcopy-blocktime' : 'uint32',
            '*postcopy-vcpu-blocktime': ['uint32'],
            '*compression': 'CompressionStats',
-           '*socket-address': ['SocketAddress'] } }
+           '*socket-address': ['SocketAddress'],
+           '*dirty-limit-throttle-us-per-full': 'int64',
+           '*dirty-limit-us-ring-full': 'int64'} }
 
 ##
 # @query-migrate:
diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index 1fdd8c6..1251b27 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -546,6 +546,28 @@ static struct DirtyLimitInfo *dirtylimit_query_vcpu(int cpu_index)
     return info;
 }
 
+/* Pick up first vcpu throttle time by default */
+int64_t dirtylimit_throttle_us_per_full(void)
+{
+    CPUState *cpu = first_cpu;
+    return cpu->throttle_us_per_full;
+}
+
+/*
+ * Estimate dirty ring full time under current dirty page rate.
+ * Return -1 if guest doesn't dirty memory.
+ */
+int64_t dirtylimit_us_ring_full(void)
+{
+    uint64_t curr_rate = vcpu_dirty_rate_get(0);
+
+    if (!curr_rate) {
+        return -1;
+    }
+
+    return dirtylimit_dirty_ring_full_time(curr_rate);
+}
+
 static struct DirtyLimitInfoList *dirtylimit_query_all(void)
 {
     int i, index;
-- 
1.8.3.1




* [PATCH v1 6/8] tests: Add migration dirty-limit capability test
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
                   ` (4 preceding siblings ...)
  2022-09-01 17:22 ` [PATCH v1 5/8] migration: Export dirty-limit time info huangy81
@ 2022-09-01 17:22 ` huangy81
  2022-09-01 17:22 ` [PATCH v1 7/8] tests/migration: Introduce dirty-ring-size option into guestperf huangy81
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: huangy81 @ 2022-09-01 17:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange, Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Add a migration dirty-limit capability test, which runs when
the kernel supports the dirty ring.

The dirty-limit capability comes with two parameters,
x-vcpu-dirty-limit-period and x-vcpu-dirty-limit, which are
used to implement live migration with a dirty limit.

The test case does the following things:
1. start the src and dst VMs and enable the dirty-limit capability
2. start migration, then cancel it and check that the dirty
   limit stops working
3. restart the dst VM
4. start migration with the dirty-limit capability enabled
5. check that migration satisfies the convergence condition
   during the pre-switchover phase
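Steps 2 and 5 both boil down to polling a migration property until it reaches an expected state, bounded by a retry budget. A generic sketch of that pattern (`read_prop` stands in for the qtest read_migrate_property_int() helper; the real test also sleeps between reads):

```python
def poll_until(read_prop, predicate, max_tries=100):
    """Poll read_prop() until predicate(value) holds.

    Returns the first value satisfying the predicate, or raises
    TimeoutError once the retry budget is exhausted, so a stuck
    migration fails the test instead of hanging it.
    """
    value = None
    for _ in range(max_tries):
        value = read_prop()
        if predicate(value):
            return value
    raise TimeoutError("property stuck at %r" % (value,))
```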

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 tests/qtest/migration-test.c | 154 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 154 insertions(+)

diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 4728d52..f3bfd85 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -2409,6 +2409,158 @@ static void test_vcpu_dirty_limit(void)
     dirtylimit_stop_vm(vm);
 }
 
+static void migrate_dirty_limit_wait_showup(QTestState *from,
+                                            const int64_t period,
+                                            const int64_t value)
+{
+    /* Enable dirty limit capability */
+    migrate_set_capability(from, "dirty-limit", true);
+
+    /* Set dirty limit parameters */
+    migrate_set_parameter_int(from, "x-vcpu-dirty-limit-period", period);
+    migrate_set_parameter_int(from, "x-vcpu-dirty-limit", value);
+
+    /* Make sure migrate can't converge */
+    migrate_ensure_non_converge(from);
+
+    /* To check limit rate after precopy */
+    migrate_set_capability(from, "pause-before-switchover", true);
+
+    /* Wait for the serial output from the source */
+    wait_for_serial("src_serial");
+}
+
+/*
+ * This test does:
+ *  source               target
+ *                       migrate_incoming
+ *     migrate
+ *     migrate_cancel
+ *                       restart target
+ *     migrate
+ *
+ *  and check whether the dirty limit works correctly
+ */
+static void test_migrate_dirty_limit(void)
+{
+    g_autofree char *uri = g_strdup_printf("unix:%s/migsocket", tmpfs);
+    QTestState *from, *to;
+    int64_t remaining, throttle_us_per_full;
+    /*
+     * We want the test to be stable and as fast as possible.
+     * E.g., with 1Gb/s bandwidth migration may complete without the dirty
+     * limit ever engaging, so we need to decrease the bandwidth.
+     */
+    const int64_t dirtylimit_period = 1000, dirtylimit_value = 50;
+    const int64_t max_bandwidth = 400000000; /* ~400Mb/s */
+    const int64_t downtime_limit = 250; /* 250ms */
+    /*
+     * We migrate through unix-socket (> 500Mb/s).
+     * Thus, expected migration speed ~= bandwidth limit (< 500Mb/s).
+     * So, we can predict expected_threshold
+     */
+    const int64_t expected_threshold = max_bandwidth * downtime_limit / 1000;
+    int max_try_count = 10;
+    MigrateCommon args = {
+        .start = {
+            .hide_stderr = true,
+            .use_dirty_ring = true,
+        },
+        .listen_uri = uri,
+        .connect_uri = uri,
+    };
+
+    /* Start src, dst vm */
+    if (test_migrate_start(&from, &to, args.listen_uri, &args.start)) {
+        return;
+    }
+
+    /* Prepare for dirty limit migration and wait for the src VM to show up */
+    migrate_dirty_limit_wait_showup(from, dirtylimit_period, dirtylimit_value);
+
+    /* Start migrate */
+    migrate_qmp(from, uri, "{}");
+
+    /* Wait for dirty limit throttle begin */
+    throttle_us_per_full = 0;
+    while (throttle_us_per_full == 0) {
+        throttle_us_per_full =
+            read_migrate_property_int(from, "dirty-limit-throttle-us-per-full");
+        usleep(100);
+        g_assert_false(got_stop);
+    }
+
+    /* Now cancel migrate and wait for dirty limit throttle switch off */
+    migrate_cancel(from);
+    wait_for_migration_status(from, "cancelled", NULL);
+
+    /* Check that the dirty limit throttle has switched off (~1ms budget) */
+    do {
+        throttle_us_per_full =
+            read_migrate_property_int(from, "dirty-limit-throttle-us-per-full");
+        usleep(100);
+        g_assert_false(got_stop);
+    } while (throttle_us_per_full != 0 && --max_try_count);
+
+    /* Assert dirty limit is not in service */
+    g_assert_cmpint(throttle_us_per_full, ==, 0);
+
+    args = (MigrateCommon) {
+        .start = {
+            .only_target = true,
+            .use_dirty_ring = true,
+        },
+        .listen_uri = uri,
+        .connect_uri = uri,
+    };
+
+    /* Restart the dst VM; the src VM is already up, so no need to wait again */
+    if (test_migrate_start(&from, &to, args.listen_uri, &args.start)) {
+        return;
+    }
+
+    /* Start migrate */
+    migrate_qmp(from, uri, "{}");
+
+    /* Wait for dirty limit throttle begin */
+    throttle_us_per_full = 0;
+    while (throttle_us_per_full == 0) {
+        throttle_us_per_full =
+            read_migrate_property_int(from, "dirty-limit-throttle-us-per-full");
+        usleep(100);
+        g_assert_false(got_stop);
+    }
+
+    /*
+     * The dirty limit rate should equal the return value of
+     * query-vcpu-dirty-limit when the dirty-limit capability is set
+     */
+    g_assert_cmpint(dirtylimit_value, ==, get_limit_rate(from));
+
+    /* Now, we have tested if dirty limit works, let it converge */
+    migrate_set_parameter_int(from, "downtime-limit", downtime_limit);
+    migrate_set_parameter_int(from, "max-bandwidth", max_bandwidth);
+
+    /*
+     * Wait for pre-switchover status to check if migration
+     * satisfies the convergence condition
+     */
+    wait_for_migration_status(from, "pre-switchover", NULL);
+
+    remaining = read_ram_property_int(from, "remaining");
+    g_assert_cmpint(remaining, <,
+                    (expected_threshold + expected_threshold / 100));
+
+    migrate_continue(from, "pre-switchover");
+
+    qtest_qmp_eventwait(to, "RESUME");
+
+    wait_for_serial("dest_serial");
+    wait_for_migration_complete(from);
+
+    test_migrate_end(from, to, true);
+}
+
 static bool kvm_dirty_ring_supported(void)
 {
 #if defined(__linux__) && defined(HOST_X86_64)
@@ -2578,6 +2730,8 @@ int main(int argc, char **argv)
                        test_precopy_unix_dirty_ring);
         qtest_add_func("/migration/vcpu_dirty_limit",
                        test_vcpu_dirty_limit);
+        qtest_add_func("/migration/dirty_limit",
+                       test_migrate_dirty_limit);
     }
 
     ret = g_test_run();
-- 
1.8.3.1




* [PATCH v1 7/8] tests/migration: Introduce dirty-ring-size option into guestperf
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
                   ` (5 preceding siblings ...)
  2022-09-01 17:22 ` [PATCH v1 6/8] tests: Add migration dirty-limit capability test huangy81
@ 2022-09-01 17:22 ` huangy81
  2022-09-01 17:22 ` [PATCH v1 8/8] tests/migration: Introduce dirty-limit " huangy81
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: huangy81 @ 2022-09-01 17:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange, Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

The guestperf tool does not enable the dirty ring feature by
default when testing migration.

To support dirty ring migration performance tests, introduce
a dirty-ring-size option into the guestperf tools; valid
values range over [1024, 65536].

To set the dirty ring size to 4096 during a migration test:
$ ./tests/migration/guestperf.py --dirty-ring-size 4096 xxx
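A note on the accepted values: besides the [1024, 65536] range, KVM expects the dirty ring size to be a power of two (this constraint, and the validation helper below, are assumptions of this sketch; the guestperf option itself performs no checking):

```python
def validate_dirty_ring_size(size):
    """Check a --dirty-ring-size value before passing it to -accel kvm.

    The range comes from the commit message; the power-of-two
    constraint is an assumption based on the KVM dirty ring interface.
    """
    if not 1024 <= size <= 65536:
        raise ValueError("dirty-ring-size must be in [1024, 65536]")
    if size & (size - 1):
        raise ValueError("dirty-ring-size must be a power of two")
    return size
```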

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 tests/migration/guestperf/engine.py   | 7 ++++++-
 tests/migration/guestperf/hardware.py | 8 ++++++--
 tests/migration/guestperf/shell.py    | 7 ++++++-
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/tests/migration/guestperf/engine.py b/tests/migration/guestperf/engine.py
index 87a6ab2..2b98f00 100644
--- a/tests/migration/guestperf/engine.py
+++ b/tests/migration/guestperf/engine.py
@@ -304,7 +304,6 @@ def _get_common_args(self, hardware, tunnelled=False):
             cmdline = "'" + cmdline + "'"
 
         argv = [
-            "-accel", "kvm",
             "-cpu", "host",
             "-kernel", self._kernel,
             "-initrd", self._initrd,
@@ -315,6 +314,12 @@ def _get_common_args(self, hardware, tunnelled=False):
             "-smp", str(hardware._cpus),
         ]
 
+        if hardware._dirty_ring_size:
+            argv.extend(["-accel", "kvm,dirty-ring-size=%s" %
+                         hardware._dirty_ring_size])
+        else:
+            argv.extend(["-accel", "kvm"])
+
         if self._debug:
             argv.extend(["-device", "sga"])
 
diff --git a/tests/migration/guestperf/hardware.py b/tests/migration/guestperf/hardware.py
index 3145785..f779cc0 100644
--- a/tests/migration/guestperf/hardware.py
+++ b/tests/migration/guestperf/hardware.py
@@ -23,7 +23,8 @@ def __init__(self, cpus=1, mem=1,
                  src_cpu_bind=None, src_mem_bind=None,
                  dst_cpu_bind=None, dst_mem_bind=None,
                  prealloc_pages = False,
-                 huge_pages=False, locked_pages=False):
+                 huge_pages=False, locked_pages=False,
+                 dirty_ring_size=0):
         self._cpus = cpus
         self._mem = mem # GiB
         self._src_mem_bind = src_mem_bind # List of NUMA nodes
@@ -33,6 +34,7 @@ def __init__(self, cpus=1, mem=1,
         self._prealloc_pages = prealloc_pages
         self._huge_pages = huge_pages
         self._locked_pages = locked_pages
+        self._dirty_ring_size = dirty_ring_size
 
 
     def serialize(self):
@@ -46,6 +48,7 @@ def serialize(self):
             "prealloc_pages": self._prealloc_pages,
             "huge_pages": self._huge_pages,
             "locked_pages": self._locked_pages,
+            "dirty_ring_size": self._dirty_ring_size,
         }
 
     @classmethod
@@ -59,4 +62,5 @@ def deserialize(cls, data):
             data["dst_mem_bind"],
             data["prealloc_pages"],
             data["huge_pages"],
-            data["locked_pages"])
+            data["locked_pages"],
+            data["dirty_ring_size"])
diff --git a/tests/migration/guestperf/shell.py b/tests/migration/guestperf/shell.py
index 8a809e3..559616f 100644
--- a/tests/migration/guestperf/shell.py
+++ b/tests/migration/guestperf/shell.py
@@ -60,6 +60,8 @@ def __init__(self):
         parser.add_argument("--prealloc-pages", dest="prealloc_pages", default=False)
         parser.add_argument("--huge-pages", dest="huge_pages", default=False)
         parser.add_argument("--locked-pages", dest="locked_pages", default=False)
+        parser.add_argument("--dirty-ring-size", dest="dirty_ring_size",
+                            default=0, type=int)
 
         self._parser = parser
 
@@ -89,7 +91,10 @@ def split_map(value):
 
                         locked_pages=args.locked_pages,
                         huge_pages=args.huge_pages,
-                        prealloc_pages=args.prealloc_pages)
+                        prealloc_pages=args.prealloc_pages,
+
+                        dirty_ring_size=args.dirty_ring_size)
+
 
 
 class Shell(BaseShell):
-- 
1.8.3.1




* [PATCH v1 8/8] tests/migration: Introduce dirty-limit into guestperf
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
                   ` (6 preceding siblings ...)
  2022-09-01 17:22 ` [PATCH v1 7/8] tests/migration: Introduce dirty-ring-size option into guestperf huangy81
@ 2022-09-01 17:22 ` huangy81
  2022-09-06 20:46 ` [PATCH v1 0/8] migration: introduce dirtylimit capability Peter Xu
  2022-10-01 14:37 ` Markus Armbruster
  9 siblings, 0 replies; 29+ messages in thread
From: huangy81 @ 2022-09-01 17:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Xu, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange, Hyman Huang(黄勇)

From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

The guestperf tool does not currently cover dirty-limit
migration; add support for this feature.

To enable dirty-limit with x-vcpu-dirty-limit-period set to
500ms and x-vcpu-dirty-limit to 10MB/s:
$ ./tests/migration/guestperf.py \
    --dirty-limit --x-vcpu-dirty-limit-period 500 \
    --x-vcpu-dirty-limit 10 --output output.json

To run the entire standardized set of dirty-limit-enabled
comparisons, with unix migration:
$ ./tests/migration/guestperf-batch.py \
    --dst-host localhost --transport unix \
    --filter compr-dirty-limit* --output outputdir
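Under the hood, each dirty-limit scenario issues the same three QMP commands on both the source and destination VM. A condensed sketch of the sequence (the `qmp` callable and helper name stand in for guestperf's monitor command plumbing):

```python
def enable_dirty_limit(qmp, period_ms=500, limit_mb_s=10):
    """Send the QMP commands that turn on dirty-limit migration.

    Returns the (command, arguments) pairs that were sent so callers
    can inspect them; `qmp` is any callable taking that pair.
    """
    cmds = [
        ("migrate-set-capabilities",
         {"capabilities": [{"capability": "dirty-limit", "state": True}]}),
        ("migrate-set-parameters", {"x-vcpu-dirty-limit-period": period_ms}),
        ("migrate-set-parameters", {"x-vcpu-dirty-limit": limit_mb_s}),
    ]
    for name, args in cmds:
        qmp(name, args)
    return cmds
```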

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
---
 tests/migration/guestperf/comparison.py | 24 ++++++++++++++++++++++++
 tests/migration/guestperf/engine.py     | 26 ++++++++++++++++++++++++++
 tests/migration/guestperf/progress.py   | 17 +++++++++++++++--
 tests/migration/guestperf/scenario.py   | 11 ++++++++++-
 tests/migration/guestperf/shell.py      | 18 +++++++++++++++++-
 5 files changed, 92 insertions(+), 4 deletions(-)

diff --git a/tests/migration/guestperf/comparison.py b/tests/migration/guestperf/comparison.py
index c03b3f6..d082a54 100644
--- a/tests/migration/guestperf/comparison.py
+++ b/tests/migration/guestperf/comparison.py
@@ -135,4 +135,28 @@ def __init__(self, name, scenarios):
         Scenario("compr-multifd-channels-64",
                  multifd=True, multifd_channels=64),
     ]),
+
+
+    # Looking at effect of dirty-limit with
+    # varying x_vcpu_dirty_limit_period
+    Comparison("compr-dirty-limit-period", scenarios = [
+        Scenario("compr-dirty-limit-period-100",
+                 dirty_limit=True, x_vcpu_dirty_limit_period=100),
+        Scenario("compr-dirty-limit-period-500",
+                 dirty_limit=True, x_vcpu_dirty_limit_period=500),
+        Scenario("compr-dirty-limit-period-1000",
+                 dirty_limit=True, x_vcpu_dirty_limit_period=1000),
+    ]),
+
+
+    # Looking at effect of dirty-limit with
+    # varying x_vcpu_dirty_limit
+    Comparison("compr-dirty-limit", scenarios = [
+        Scenario("compr-dirty-limit-10MB",
+                 dirty_limit=True, x_vcpu_dirty_limit=10),
+        Scenario("compr-dirty-limit-20MB",
+                 dirty_limit=True, x_vcpu_dirty_limit=20),
+        Scenario("compr-dirty-limit-50MB",
+                 dirty_limit=True, x_vcpu_dirty_limit=50),
+    ]),
 ]
diff --git a/tests/migration/guestperf/engine.py b/tests/migration/guestperf/engine.py
index 2b98f00..c6b9bb1 100644
--- a/tests/migration/guestperf/engine.py
+++ b/tests/migration/guestperf/engine.py
@@ -103,6 +103,8 @@ def _migrate_progress(self, vm):
             info.get("expected-downtime", 0),
             info.get("setup-time", 0),
             info.get("cpu-throttle-percentage", 0),
+            info.get("dirty-limit-throttle-us-per-full", 0),
+            info.get("dirty-limit-us-ring-full", 0),
         )
 
     def _migrate(self, hardware, scenario, src, dst, connect_uri):
@@ -204,6 +206,30 @@ def _migrate(self, hardware, scenario, src, dst, connect_uri):
             resp = dst.command("migrate-set-parameters",
                                multifd_channels=scenario._multifd_channels)
 
+        if scenario._dirty_limit:
+            if not hardware._dirty_ring_size:
+                raise Exception("dirty ring size must be configured when "
+                                "testing dirty limit migration")
+
+            resp = src.command("migrate-set-capabilities",
+                               capabilities = [
+                                   { "capability": "dirty-limit",
+                                     "state": True }
+                               ])
+            resp = src.command("migrate-set-parameters",
+                x_vcpu_dirty_limit_period=scenario._x_vcpu_dirty_limit_period)
+            resp = src.command("migrate-set-parameters",
+                               x_vcpu_dirty_limit=scenario._x_vcpu_dirty_limit)
+            resp = dst.command("migrate-set-capabilities",
+                               capabilities = [
+                                   { "capability": "dirty-limit",
+                                     "state": True }
+                               ])
+            resp = dst.command("migrate-set-parameters",
+                x_vcpu_dirty_limit_period=scenario._x_vcpu_dirty_limit_period)
+            resp = dst.command("migrate-set-parameters",
+                               x_vcpu_dirty_limit=scenario._x_vcpu_dirty_limit)
+
         resp = src.command("migrate", uri=connect_uri)
 
         post_copy = False
diff --git a/tests/migration/guestperf/progress.py b/tests/migration/guestperf/progress.py
index ab1ee57..dd5d86b 100644
--- a/tests/migration/guestperf/progress.py
+++ b/tests/migration/guestperf/progress.py
@@ -81,7 +81,9 @@ def __init__(self,
                  downtime,
                  downtime_expected,
                  setup_time,
-                 throttle_pcent):
+                 throttle_pcent,
+                 dirty_limit_throttle_us_per_full,
+                 dirty_limit_us_ring_full):
 
         self._status = status
         self._ram = ram
@@ -91,6 +93,11 @@ def __init__(self,
         self._downtime_expected = downtime_expected
         self._setup_time = setup_time
         self._throttle_pcent = throttle_pcent
+        self._dirty_limit_throttle_us_per_full = \
+            dirty_limit_throttle_us_per_full
+        self._dirty_limit_us_ring_full = \
+            dirty_limit_us_ring_full
+
 
     def serialize(self):
         return {
@@ -102,6 +109,10 @@ def serialize(self):
             "downtime_expected": self._downtime_expected,
             "setup_time": self._setup_time,
             "throttle_pcent": self._throttle_pcent,
+            "dirty_limit_throttle_time_per_full":
+                self._dirty_limit_throttle_us_per_full,
+            "dirty_limit_ring_full_time":
+                self._dirty_limit_us_ring_full,
         }
 
     @classmethod
@@ -114,4 +125,6 @@ def deserialize(cls, data):
             data["downtime"],
             data["downtime_expected"],
             data["setup_time"],
-            data["throttle_pcent"])
+            data["throttle_pcent"],
+            data["dirty_limit_throttle_time_per_full"],
+            data["dirty_limit_ring_full_time"])
diff --git a/tests/migration/guestperf/scenario.py b/tests/migration/guestperf/scenario.py
index de70d9b..22aacb3 100644
--- a/tests/migration/guestperf/scenario.py
+++ b/tests/migration/guestperf/scenario.py
@@ -30,7 +30,9 @@ def __init__(self, name,
                  auto_converge=False, auto_converge_step=10,
                  compression_mt=False, compression_mt_threads=1,
                  compression_xbzrle=False, compression_xbzrle_cache=10,
-                 multifd=False, multifd_channels=2):
+                 multifd=False, multifd_channels=2,
+                 dirty_limit=False, x_vcpu_dirty_limit_period=500,
+                 x_vcpu_dirty_limit=1):
 
         self._name = name
 
@@ -60,6 +62,10 @@ def __init__(self, name,
         self._multifd = multifd
         self._multifd_channels = multifd_channels
 
+        self._dirty_limit = dirty_limit
+        self._x_vcpu_dirty_limit_period = x_vcpu_dirty_limit_period
+        self._x_vcpu_dirty_limit = x_vcpu_dirty_limit
+
     def serialize(self):
         return {
             "name": self._name,
@@ -79,6 +85,9 @@ def serialize(self):
             "compression_xbzrle_cache": self._compression_xbzrle_cache,
             "multifd": self._multifd,
             "multifd_channels": self._multifd_channels,
+            "dirty_limit": self._dirty_limit,
+            "x_vcpu_dirty_limit_period": self._x_vcpu_dirty_limit_period,
+            "x_vcpu_dirty_limit": self._x_vcpu_dirty_limit,
         }
 
     @classmethod
diff --git a/tests/migration/guestperf/shell.py b/tests/migration/guestperf/shell.py
index 559616f..c51e9a6 100644
--- a/tests/migration/guestperf/shell.py
+++ b/tests/migration/guestperf/shell.py
@@ -132,6 +132,17 @@ def __init__(self):
         parser.add_argument("--multifd-channels", dest="multifd_channels",
                             default=2, type=int)
 
+        parser.add_argument("--dirty-limit", dest="dirty_limit", default=False,
+                            action="store_true")
+
+        parser.add_argument("--x-vcpu-dirty-limit-period",
+                            dest="x_vcpu_dirty_limit_period",
+                            default=500, type=int)
+
+        parser.add_argument("--x-vcpu-dirty-limit",
+                            dest="x_vcpu_dirty_limit",
+                            default=1, type=int)
+
     def get_scenario(self, args):
         return Scenario(name="perfreport",
                         downtime=args.downtime,
@@ -155,7 +166,12 @@ def get_scenario(self, args):
                         compression_xbzrle_cache=args.compression_xbzrle_cache,
 
                         multifd=args.multifd,
-                        multifd_channels=args.multifd_channels)
+                        multifd_channels=args.multifd_channels,
+
+                        dirty_limit=args.dirty_limit,
+                        x_vcpu_dirty_limit_period=
+                            args.x_vcpu_dirty_limit_period,
+                        x_vcpu_dirty_limit=args.x_vcpu_dirty_limit)
 
     def run(self, argv):
         args = self._parser.parse_args(argv)
-- 
1.8.3.1




* Re: [PATCH v1 1/8] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
  2022-09-01 17:22 ` [PATCH v1 1/8] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
@ 2022-09-02  8:02   ` Markus Armbruster
  0 siblings, 0 replies; 29+ messages in thread
From: Markus Armbruster @ 2022-09-02  8:02 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Introduce "x-vcpu-dirty-limit-period" migration experimental
> parameter, which is used to make dirtyrate calculation period
> configurable.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

[...]

> diff --git a/qapi/migration.json b/qapi/migration.json
> index 81185d4..332c087 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -776,8 +776,12 @@
>  #                        block device name if there is one, and to their node name
>  #                        otherwise. (Since 5.2)
>  #
> +# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
> +#                             Defaults to 500ms. (Since 7.1)
> +#
>  # Features:
> -# @unstable: Member @x-checkpoint-delay is experimental.
> +# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period

Members

> +#            are experimental.
>  #
>  # Since: 2.4
>  ##
> @@ -795,8 +799,9 @@
>             'multifd-channels',
>             'xbzrle-cache-size', 'max-postcopy-bandwidth',
>             'max-cpu-throttle', 'multifd-compression',
> -           'multifd-zlib-level' ,'multifd-zstd-level',
> -           'block-bitmap-mapping' ] }
> +           'multifd-zlib-level', 'multifd-zstd-level',
> +           'block-bitmap-mapping',
> +           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
>  
>  ##
>  # @MigrateSetParameters:
> @@ -941,8 +946,12 @@
>  #                        block device name if there is one, and to their node name
>  #                        otherwise. (Since 5.2)
>  #
> +# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
> +#                             Defaults to 500ms. (Since 7.1)
> +#
>  # Features:
> -# @unstable: Member @x-checkpoint-delay is experimental.
> +# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period

Members

> +#            are experimental.
>  #
>  # Since: 2.4
>  ##
> @@ -976,7 +985,9 @@
>              '*multifd-compression': 'MultiFDCompression',
>              '*multifd-zlib-level': 'uint8',
>              '*multifd-zstd-level': 'uint8',
> -            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
> +            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
> +            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
> +                                            'features': [ 'unstable' ] } } }
>  
>  ##
>  # @migrate-set-parameters:
> @@ -1141,8 +1152,12 @@
>  #                        block device name if there is one, and to their node name
>  #                        otherwise. (Since 5.2)
>  #
> +# @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
> +#                             Defaults to 500ms. (Since 7.1)
> +#
>  # Features:
> -# @unstable: Member @x-checkpoint-delay is experimental.
> +# @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period

Members

> +#            are experimental.
>  #
>  # Since: 2.4
>  ##
> @@ -1174,7 +1189,9 @@
>              '*multifd-compression': 'MultiFDCompression',
>              '*multifd-zlib-level': 'uint8',
>              '*multifd-zstd-level': 'uint8',
> -            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
> +            '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
> +            '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
> +                                            'features': [ 'unstable' ] } } }
>  
>  ##
>  # @query-migrate-parameters:
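For readers following along, here is a sketch of how a management client could exercise this parameter over QMP. The session framing is an assumption; the member name "x-vcpu-dirty-limit-period" and its 500 ms default come from the schema hunk above, and the 1000 ms value is an arbitrary example.

```python
import json

# Hypothetical QMP payload; "x-vcpu-dirty-limit-period" and its 500 ms
# default come from the schema hunk above, the 1000 ms value is arbitrary.
cmd = {
    "execute": "migrate-set-parameters",
    "arguments": {"x-vcpu-dirty-limit-period": 1000},
}
print(json.dumps(cmd))
```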



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 2/8] qapi/migration: Introduce x-vcpu-dirty-limit parameters
  2022-09-01 17:22 ` [PATCH v1 2/8] qapi/migration: Introduce x-vcpu-dirty-limit parameters huangy81
@ 2022-09-02  8:03   ` Markus Armbruster
  2022-09-02 13:27     ` Hyman Huang
  0 siblings, 1 reply; 29+ messages in thread
From: Markus Armbruster @ 2022-09-02  8:03 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Introduce "x-vcpu-dirty-limit" migration parameter used
> to limit dirty page rate during live migration.
>
> "x-vcpu-dirty-limit" and "x-vcpu-dirty-limit-period" are
> two dirty-limit-related migration parameters, which can
> be set before and during live migration by qmp
> migrate-set-parameters.
>
> This two parameters are used to help implement the dirty
> page rate limit algo of migration.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
[...]
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 332c087..8554d33 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -779,6 +779,9 @@
>  # @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
>  #                             Defaults to 500ms. (Since 7.1)
>  #
> +# @x-vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
> +#                      Defaults to 1. (Since 7.1)
> +#
>  # Features:
>  # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
>  #            are experimental.
> @@ -801,7 +804,8 @@
>             'max-cpu-throttle', 'multifd-compression',
>             'multifd-zlib-level', 'multifd-zstd-level',
>             'block-bitmap-mapping',
> -           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
> +           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
> +           'x-vcpu-dirty-limit'] }

Shouldn't 'x-vcpu-dirty-limit' have feature 'unstable', too?

Same below.

>  
>  ##
>  # @MigrateSetParameters:
> @@ -949,6 +953,9 @@
>  # @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
>  #                             Defaults to 500ms. (Since 7.1)
>  #
> +# @x-vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
> +#                      Defaults to 1. (Since 7.1)
> +#
>  # Features:
>  # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
>  #            are experimental.
> @@ -987,7 +994,8 @@
>              '*multifd-zstd-level': 'uint8',
>              '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
>              '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
> -                                            'features': [ 'unstable' ] } } }
> +                                            'features': [ 'unstable' ] },
> +            '*x-vcpu-dirty-limit': 'uint64'} }
>  
>  ##
>  # @migrate-set-parameters:
> @@ -1155,6 +1163,9 @@
>  # @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
>  #                             Defaults to 500ms. (Since 7.1)
>  #
> +# @x-vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
> +#                      Defaults to 1. (Since 7.1)
> +#
>  # Features:
>  # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
>  #            are experimental.
> @@ -1191,7 +1202,8 @@
>              '*multifd-zstd-level': 'uint8',
>              '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
>              '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
> -                                            'features': [ 'unstable' ] } } }
> +                                            'features': [ 'unstable' ] },
> +            '*x-vcpu-dirty-limit': 'uint64'} }
>  
>  ##
>  # @query-migrate-parameters:




* Re: [PATCH v1 3/8] migration: Introduce dirty-limit capability
  2022-09-01 17:22 ` [PATCH v1 3/8] migration: Introduce dirty-limit capability huangy81
@ 2022-09-02  8:07   ` Markus Armbruster
  2022-09-02 14:15     ` Hyman Huang
  0 siblings, 1 reply; 29+ messages in thread
From: Markus Armbruster @ 2022-09-02  8:07 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Introduce migration dirty-limit capability, which can
> be turned on before live migration and limit dirty
> page rate durty live migration.
>
> Introduce migrate_dirty_limit function to help check
> if dirty-limit capability enabled during live migration.
>
> Meanwhile, refactor vcpu_dirty_rate_stat_collect
> so that period can be configured instead of hardcoded.
>
> dirty-limit capability is kind of like auto-converge
> but using dirty limit instead of traditional cpu-throttle
> to throttle guest down. To enable this feature, turn on
> the dirty-limit capability before live migration using
> migratioin-set-capabilities, and set the parameters

migrate-set-capabilities

> "x-vcpu-dirty-limit-period", "vcpu-dirty-limit" suitably

"x-vcpu-dirty-limit"

> to speed up convergence.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

Hmm.  You make dirty-limiting as a whole a stable interface (evidence:
capability "dirty-limit" is stable), but keep its two parameters
unstable.  Rationale behind that?




* Re: [PATCH v1 2/8] qapi/migration: Introduce x-vcpu-dirty-limit parameters
  2022-09-02  8:03   ` Markus Armbruster
@ 2022-09-02 13:27     ` Hyman Huang
  0 siblings, 0 replies; 29+ messages in thread
From: Hyman Huang @ 2022-09-02 13:27 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange



在 2022/9/2 16:03, Markus Armbruster 写道:
> huangy81@chinatelecom.cn writes:
> 
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> Introduce "x-vcpu-dirty-limit" migration parameter used
>> to limit dirty page rate during live migration.
>>
>> "x-vcpu-dirty-limit" and "x-vcpu-dirty-limit-period" are
>> two dirty-limit-related migration parameters, which can
>> be set before and during live migration by qmp
>> migrate-set-parameters.
>>
>> This two parameters are used to help implement the dirty
>> page rate limit algo of migration.
>>
>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> [...]
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index 332c087..8554d33 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -779,6 +779,9 @@
>>   # @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
>>   #                             Defaults to 500ms. (Since 7.1)
>>   #
>> +# @x-vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
>> +#                      Defaults to 1. (Since 7.1)
>> +#
>>   # Features:
>>   # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
>>   #            are experimental.
>> @@ -801,7 +804,8 @@
>>              'max-cpu-throttle', 'multifd-compression',
>>              'multifd-zlib-level', 'multifd-zstd-level',
>>              'block-bitmap-mapping',
>> -           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
>> +           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
>> +           'x-vcpu-dirty-limit'] }
> 
> Shouldn't 'x-vcpu-dirty-limit' have feature 'unstable', too?
> 
Yes, I missed that, thanks very much.
> Same below.
> 
>>   
>>   ##
>>   # @MigrateSetParameters:
>> @@ -949,6 +953,9 @@
>>   # @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
>>   #                             Defaults to 500ms. (Since 7.1)
>>   #
>> +# @x-vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
>> +#                      Defaults to 1. (Since 7.1)
>> +#
>>   # Features:
>>   # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
>>   #            are experimental.
>> @@ -987,7 +994,8 @@
>>               '*multifd-zstd-level': 'uint8',
>>               '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
>>               '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
>> -                                            'features': [ 'unstable' ] } } }
>> +                                            'features': [ 'unstable' ] },
>> +            '*x-vcpu-dirty-limit': 'uint64'} }
>>   
>>   ##
>>   # @migrate-set-parameters:
>> @@ -1155,6 +1163,9 @@
>>   # @x-vcpu-dirty-limit-period: Periodic time (ms) of dirty limit during live migration.
>>   #                             Defaults to 500ms. (Since 7.1)
>>   #
>> +# @x-vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
>> +#                      Defaults to 1. (Since 7.1)
>> +#
>>   # Features:
>>   # @unstable: Member @x-checkpoint-delay and @x-vcpu-dirty-limit-period
>>   #            are experimental.
>> @@ -1191,7 +1202,8 @@
>>               '*multifd-zstd-level': 'uint8',
>>               '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
>>               '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
>> -                                            'features': [ 'unstable' ] } } }
>> +                                            'features': [ 'unstable' ] },
>> +            '*x-vcpu-dirty-limit': 'uint64'} }
>>   
>>   ##
>>   # @query-migrate-parameters:
> 

-- 
Best regard

Hyman Huang(黄勇)



* Re: [PATCH v1 3/8] migration: Introduce dirty-limit capability
  2022-09-02  8:07   ` Markus Armbruster
@ 2022-09-02 14:15     ` Hyman Huang
  2022-09-05  9:32       ` Markus Armbruster
  0 siblings, 1 reply; 29+ messages in thread
From: Hyman Huang @ 2022-09-02 14:15 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange



在 2022/9/2 16:07, Markus Armbruster 写道:
> huangy81@chinatelecom.cn writes:
> 
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> Introduce migration dirty-limit capability, which can
>> be turned on before live migration and limit dirty
>> page rate durty live migration.
>>
>> Introduce migrate_dirty_limit function to help check
>> if dirty-limit capability enabled during live migration.
>>
>> Meanwhile, refactor vcpu_dirty_rate_stat_collect
>> so that period can be configured instead of hardcoded.
>>
>> dirty-limit capability is kind of like auto-converge
>> but using dirty limit instead of traditional cpu-throttle
>> to throttle guest down. To enable this feature, turn on
>> the dirty-limit capability before live migration using
>> migratioin-set-capabilities, and set the parameters
> 
> migrate-set-capabilities
> 
>> "x-vcpu-dirty-limit-period", "vcpu-dirty-limit" suitably
> 
> "x-vcpu-dirty-limit"
> 
>> to speed up convergence.
>>
>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> Hmm.  You make dirty-limiting as a whole a stable interface (evidence:
> capability "dirty-limit" is stable), but keep its two parameters
> unstable.  Rationale behind that?
> 
Thanks for the comments, Markus. :)

x-vcpu-dirty-limit-period is indeed an experimental parameter; as to
x-vcpu-dirty-limit, I think it's reasonable for it to be a stable
parameter. These two parameters are introduced for the first time and
neither has been heavily tested, so I also made x-vcpu-dirty-limit
experimental last version.

For the dirty-limit interface, it does improve the vCPU computational
performance during migration (see the test results in the cover
letter), so it sounds ok to be a stable interface.

The 'x-vcpu-dirty-limit-period' parameter can be dropped, IMHO, after
being proven insignificant for migration in the future, and meanwhile
x-vcpu-dirty-limit can be made stable.

Since I don't have much experience introducing fresh new interfaces,
any suggestions are welcome.

-- 
Best regard

Hyman Huang(黄勇)



* Re: [PATCH v1 3/8] migration: Introduce dirty-limit capability
  2022-09-02 14:15     ` Hyman Huang
@ 2022-09-05  9:32       ` Markus Armbruster
  2022-09-05 13:13         ` Hyman Huang
  0 siblings, 1 reply; 29+ messages in thread
From: Markus Armbruster @ 2022-09-05  9:32 UTC (permalink / raw)
  To: Hyman Huang
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

Hyman Huang <huangy81@chinatelecom.cn> writes:

> 在 2022/9/2 16:07, Markus Armbruster 写道:
>> huangy81@chinatelecom.cn writes:
>> 
>>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>>
>>> Introduce migration dirty-limit capability, which can
>>> be turned on before live migration and limit dirty
>>> page rate durty live migration.
>>>
>>> Introduce migrate_dirty_limit function to help check
>>> if dirty-limit capability enabled during live migration.
>>>
>>> Meanwhile, refactor vcpu_dirty_rate_stat_collect
>>> so that period can be configured instead of hardcoded.
>>>
>>> dirty-limit capability is kind of like auto-converge
>>> but using dirty limit instead of traditional cpu-throttle
>>> to throttle guest down. To enable this feature, turn on
>>> the dirty-limit capability before live migration using
>>> migratioin-set-capabilities, and set the parameters
>> 
>> migrate-set-capabilities
>> 
>>> "x-vcpu-dirty-limit-period", "vcpu-dirty-limit" suitably
>> 
>> "x-vcpu-dirty-limit"
>> 
>>> to speed up convergence.
>>>
>>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>> 
>> Hmm.  You make dirty-limiting as a whole a stable interface (evidence:
>> capability "dirty-limit" is stable), but keep its two parameters
>> unstable.  Rationale behind that?
>> 
> Thanks for the comments, Markus. :)
>
> x-vcpu-dirty-limit-period is indeed an experimental parameter; as to x-vcpu-dirty-limit, I think it's reasonable for it to be a stable parameter.
> These two parameters are introduced for the first time and neither has been heavily tested, so I also made x-vcpu-dirty-limit experimental last version.
>
> For the dirty-limit interface, it does improve the vCPU computational performance during migration (see the test results in the cover
> letter), so it sounds ok to be a stable interface.
>
> The 'x-vcpu-dirty-limit-period' parameter can be dropped, IMHO, after being proven insignificant for migration in the future, and meanwhile
> x-vcpu-dirty-limit can be made stable.
>
> Since I don't have much experience introducing fresh new interfaces,
> any suggestions are welcome.

Is the new interface fit for purpose without use of any experimental
parameter?

If the answer is something like "command dirty-limit improves things
even without use of experimental parameters, but using them may well
improve things more (but we need more testing to know for sure)", then
your current use of 'unstable' may make sense.




* Re: [PATCH v1 3/8] migration: Introduce dirty-limit capability
  2022-09-05  9:32       ` Markus Armbruster
@ 2022-09-05 13:13         ` Hyman Huang
  2022-09-06  8:02           ` Markus Armbruster
  0 siblings, 1 reply; 29+ messages in thread
From: Hyman Huang @ 2022-09-05 13:13 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange



在 2022/9/5 17:32, Markus Armbruster 写道:
> Hyman Huang <huangy81@chinatelecom.cn> writes:
> 
>> 在 2022/9/2 16:07, Markus Armbruster 写道:
>>> huangy81@chinatelecom.cn writes:
>>>
>>>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>>>
>>>> Introduce migration dirty-limit capability, which can
>>>> be turned on before live migration and limit dirty
>>>> page rate durty live migration.
>>>>
>>>> Introduce migrate_dirty_limit function to help check
>>>> if dirty-limit capability enabled during live migration.
>>>>
>>>> Meanwhile, refactor vcpu_dirty_rate_stat_collect
>>>> so that period can be configured instead of hardcoded.
>>>>
>>>> dirty-limit capability is kind of like auto-converge
>>>> but using dirty limit instead of traditional cpu-throttle
>>>> to throttle guest down. To enable this feature, turn on
>>>> the dirty-limit capability before live migration using
>>>> migratioin-set-capabilities, and set the parameters
>>>
>>> migrate-set-capabilities
>>>
>>>> "x-vcpu-dirty-limit-period", "vcpu-dirty-limit" suitably
>>>
>>> "x-vcpu-dirty-limit"
>>>
>>>> to speed up convergence.
>>>>
>>>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>>
>>> Hmm.  You make dirty-limiting as a whole a stable interface (evidence:
>>> capability "dirty-limit" is stable), but keep its two parameters
>>> unstable.  Rationale behind that?
>>>
>> Thanks for the comments, Markus. :)
>>
>> x-vcpu-dirty-limit-period is indeed an experimental parameter; as to x-vcpu-dirty-limit, I think it's reasonable for it to be a stable parameter.
>> These two parameters are introduced for the first time and neither has been heavily tested, so I also made x-vcpu-dirty-limit experimental last version.
>>
>> For the dirty-limit interface, it does improve the vCPU computational performance during migration (see the test results in the cover
>> letter), so it sounds ok to be a stable interface.
>>
>> The 'x-vcpu-dirty-limit-period' parameter can be dropped, IMHO, after being proven insignificant for migration in the future, and meanwhile
>> x-vcpu-dirty-limit can be made stable.
>>
>> Since I don't have much experience introducing fresh new interfaces,
>> any suggestions are welcome.
> 
> Is the new interface fit for purpose without use of any experimental
> parameter?
>  > If the answer is something like "command dirty-limit improves things
> even without use of experimental parameters, but using them may well
> improve things more (but we need more testing to know for sure)", then
> your current use of 'unstable' may make sense.
> 
Yes, with the default values of the parameters, the new interface works
ok and improves performance.

For x-vcpu-dirty-limit, we provide it because the user may not want the
virtual CPUs throttled heavily; x-vcpu-dirty-limit is kind of like
cpu-throttle-percentage, which is used to set up the threshold when
throttling the guest down.

For x-vcpu-dirty-limit-period, it is just as you said: "command
dirty-limit improves things even without use of experimental parameters,
but using them may well improve things more (but we need more testing to
know for sure)"

So, should I make x-vcpu-dirty-limit non-experimental next version?
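To illustrate the analogy with cpu-throttle-percentage, here is a toy model (not QEMU code, and not the series' actual algorithm) of how a per-vCPU dirty-rate quota translates into throttling pressure:

```python
# Toy model (not QEMU code): a vCPU dirtying faster than the quota is
# slowed proportionally; vCPUs under the quota are left alone.
def throttle_fraction(measured_mb_s: float, limit_mb_s: float) -> float:
    if measured_mb_s <= limit_mb_s:
        return 0.0
    return 1.0 - limit_mb_s / measured_mb_s

assert throttle_fraction(0.5, 1.0) == 0.0               # light writer: untouched
assert abs(throttle_fraction(4.0, 1.0) - 0.75) < 1e-9   # heavy writer: slowed
```

This captures why unixbench on the non-dirtying vCPUs is barely affected: only vCPUs over the quota pay a cost.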
-- 
Best regard

Hyman Huang(黄勇)



* Re: [PATCH v1 3/8] migration: Introduce dirty-limit capability
  2022-09-05 13:13         ` Hyman Huang
@ 2022-09-06  8:02           ` Markus Armbruster
  0 siblings, 0 replies; 29+ messages in thread
From: Markus Armbruster @ 2022-09-06  8:02 UTC (permalink / raw)
  To: Hyman Huang
  Cc: Markus Armbruster, qemu-devel, Peter Xu, Juan Quintela,
	Dr. David Alan Gilbert, Eric Blake, Thomas Huth, Laurent Vivier,
	Paolo Bonzini, Daniel P. Berrange

Hyman Huang <huangy81@chinatelecom.cn> writes:

> 在 2022/9/5 17:32, Markus Armbruster 写道:
>> Hyman Huang <huangy81@chinatelecom.cn> writes:
>> 
>>> 在 2022/9/2 16:07, Markus Armbruster 写道:
>>>> huangy81@chinatelecom.cn writes:
>>>>
>>>>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>>>>
>>>>> Introduce migration dirty-limit capability, which can
>>>>> be turned on before live migration and limit dirty
>>>>> page rate durty live migration.
>>>>>
>>>>> Introduce migrate_dirty_limit function to help check
>>>>> if dirty-limit capability enabled during live migration.
>>>>>
>>>>> Meanwhile, refactor vcpu_dirty_rate_stat_collect
>>>>> so that period can be configured instead of hardcoded.
>>>>>
>>>>> dirty-limit capability is kind of like auto-converge
>>>>> but using dirty limit instead of traditional cpu-throttle
>>>>> to throttle guest down. To enable this feature, turn on
>>>>> the dirty-limit capability before live migration using
>>>>> migratioin-set-capabilities, and set the parameters
>>>>
>>>> migrate-set-capabilities
>>>>
>>>>> "x-vcpu-dirty-limit-period", "vcpu-dirty-limit" suitably
>>>>
>>>> "x-vcpu-dirty-limit"
>>>>
>>>>> to speed up convergence.
>>>>>
>>>>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>>>
>>>> Hmm.  You make dirty-limiting as a whole a stable interface (evidence:
>>>> capability "dirty-limit" is stable), but keep its two parameters
>>>> unstable.  Rationale behind that?
>>>>
>>> Thanks for the comments, Markus. :)
>>>
>>> x-vcpu-dirty-limit-period is indeed an experimental parameter; as to x-vcpu-dirty-limit, I think it's reasonable for it to be a stable parameter.
>>> These two parameters are introduced for the first time and neither has been heavily tested, so I also made x-vcpu-dirty-limit experimental last version.
>>>
>>> For the dirty-limit interface, it does improve the vCPU computational performance during migration (see the test results in the cover
>>> letter), so it sounds ok to be a stable interface.
>>>
>>> The 'x-vcpu-dirty-limit-period' parameter can be dropped, IMHO, after being proven insignificant for migration in the future, and meanwhile
>>> x-vcpu-dirty-limit can be made stable.
>>>
>>> Since I don't have much experience introducing fresh new interfaces,
>>> any suggestions are welcome.
>> Is the new interface fit for purpose without use of any experimental
>> parameter?
>>  > If the answer is something like "command dirty-limit improves things
>> even without use of experimental parameters, but using them may well
>> improve things more (but we need more testing to know for sure)", then
>> your current use of 'unstable' may make sense.
>> 
> Yes, with the default values of the parameters, the new interface works ok and improves performance.
>
> For x-vcpu-dirty-limit, we provide it because the user may not want the virtual CPUs throttled heavily; x-vcpu-dirty-limit is kind of like
> cpu-throttle-percentage, which is used to set up the threshold when throttling the guest down.
>
> For x-vcpu-dirty-limit-period, it is just as you said: "command dirty-limit improves things even without use of experimental parameters,
> but using them may well improve things more (but we need more testing to know for sure)"
>
> So, should I make x-vcpu-dirty-limit non-experimental next version?

I think this depends on what exactly you want to signal to users.

Your current patch has command dirty-limit stable and the parameters
unstable.  This signals "go ahead and use dirty-limit, don't worry about
the parameters; even if they become stable later, using just dirty-limit
will remain okay."

If you keep the command unstable as well, you signal the entire
interface isn't quite baked, yet.  That's a much weaker proposition.
So weak in fact that you cannot go wrong :)

In short, it boils down to whether you want to encourage use of a part
of the evolving interface *now*.  Make that part stable.  Requires
confidence in that part, obviously.




* Re: [PATCH v1 4/8] migration: Implement dirty-limit convergence algo
  2022-09-01 17:22 ` [PATCH v1 4/8] migration: Implement dirty-limit convergence algo huangy81
@ 2022-09-06 20:37   ` Peter Xu
  2022-09-08 14:35     ` Hyman
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Xu @ 2022-09-06 20:37 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

On Fri, Sep 02, 2022 at 01:22:32AM +0800, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> Implement dirty-limit convergence algo for live migration,
> which is kind of like auto-converge algo but using dirty-limit
> instead of cpu throttle to make migration convergent.
> 
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> ---
>  migration/migration.c  |  1 +
>  migration/ram.c        | 53 +++++++++++++++++++++++++++++++++++++-------------
>  migration/trace-events |  1 +
>  3 files changed, 42 insertions(+), 13 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index d117bb4..64696de 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -239,6 +239,7 @@ void migration_cancel(const Error *error)
>      if (error) {
>          migrate_set_error(current_migration, error);
>      }
> +    qmp_cancel_vcpu_dirty_limit(false, -1, NULL);
>      migrate_fd_cancel(current_migration);
>  }
>  
> diff --git a/migration/ram.c b/migration/ram.c
> index dc1de9d..cc19c5e 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -45,6 +45,7 @@
>  #include "qapi/error.h"
>  #include "qapi/qapi-types-migration.h"
>  #include "qapi/qapi-events-migration.h"
> +#include "qapi/qapi-commands-migration.h"
>  #include "qapi/qmp/qerror.h"
>  #include "trace.h"
>  #include "exec/ram_addr.h"
> @@ -57,6 +58,8 @@
>  #include "qemu/iov.h"
>  #include "multifd.h"
>  #include "sysemu/runstate.h"
> +#include "sysemu/dirtylimit.h"
> +#include "sysemu/kvm.h"
>  
>  #include "hw/boards.h" /* for machine_dump_guest_core() */
>  
> @@ -1139,6 +1142,21 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
>      }
>  }
>  
> +/*
> + * Enable dirty-limit to throttle down the guest
> + */
> +static void migration_dirty_limit_guest(void)
> +{
> +    if (!dirtylimit_in_service()) {
> +        MigrationState *s = migrate_get_current();
> +        int64_t quota_dirtyrate = s->parameters.x_vcpu_dirty_limit;
> +
> +        /* Set quota dirtyrate if dirty limit not in service */
> +        qmp_set_vcpu_dirty_limit(false, -1, quota_dirtyrate, NULL);
> +        trace_migration_dirty_limit_guest(quota_dirtyrate);
> +    }
> +}
> +
>  static void migration_trigger_throttle(RAMState *rs)
>  {
>      MigrationState *s = migrate_get_current();
> @@ -1148,22 +1166,31 @@ static void migration_trigger_throttle(RAMState *rs)
>      uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
>      uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
>  
> -    /* During block migration the auto-converge logic incorrectly detects
> -     * that ram migration makes no progress. Avoid this by disabling the
> -     * throttling logic during the bulk phase of block migration. */
> -    if (migrate_auto_converge() && !blk_mig_bulk_active()) {
> -        /* The following detection logic can be refined later. For now:
> -           Check to see if the ratio between dirtied bytes and the approx.
> -           amount of bytes that just got transferred since the last time
> -           we were in this routine reaches the threshold. If that happens
> -           twice, start or increase throttling. */
> -
> -        if ((bytes_dirty_period > bytes_dirty_threshold) &&
> -            (++rs->dirty_rate_high_cnt >= 2)) {
> +    /*
> +     * The following detection logic can be refined later. For now:
> +     * Check to see if the ratio between dirtied bytes and the approx.
> +     * amount of bytes that just got transferred since the last time
> +     * we were in this routine reaches the threshold. If that happens
> +     * twice, start or increase throttling.
> +     */
> +
> +    if ((bytes_dirty_period > bytes_dirty_threshold) &&
> +        (++rs->dirty_rate_high_cnt >= 2)) {
> +        rs->dirty_rate_high_cnt = 0;
> +        /*
> +         * During block migration the auto-converge logic incorrectly detects
> +         * that ram migration makes no progress. Avoid this by disabling the
> +         * throttling logic during the bulk phase of block migration
> +         */
> +
> +        if (migrate_auto_converge() && !blk_mig_bulk_active()) {
>              trace_migration_throttle();
> -            rs->dirty_rate_high_cnt = 0;
>              mig_throttle_guest_down(bytes_dirty_period,
>                                      bytes_dirty_threshold);
> +        } else if (migrate_dirty_limit() &&
> +                   kvm_dirty_ring_enabled() &&
> +                   migration_is_active(s)) {
> +            migration_dirty_limit_guest();

We'll call this multiple times, but only the 1st call will make sense, right?

Can we call it once somewhere?  E.g. at the start of migration?
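A sketch of the "call it once" idea, as a Python toy model with hypothetical names mirroring the dirtylimit_in_service() guard in the quoted hunk:

```python
class DirtyLimiter:
    """Toy model: arm the vCPU dirty limit exactly once per migration."""

    def __init__(self, quota_mb_s: int):
        self.quota_mb_s = quota_mb_s
        self.in_service = False

    def arm(self) -> bool:
        # Mirrors the dirtylimit_in_service() check above: only the
        # first call would actually set the limit on the vCPUs.
        if self.in_service:
            return False
        self.in_service = True
        return True

lim = DirtyLimiter(quota_mb_s=1)
assert lim.arm() is True    # e.g. once, at the start of migration
assert lim.arm() is False   # later throttle triggers become no-ops
```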

Thanks,

-- 
Peter Xu




* Re: [PATCH v1 0/8] migration: introduce dirtylimit capability
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
                   ` (7 preceding siblings ...)
  2022-09-01 17:22 ` [PATCH v1 8/8] tests/migration: Introduce dirty-limit " huangy81
@ 2022-09-06 20:46 ` Peter Xu
  2022-09-07 14:52   ` Hyman
  2022-10-01 14:37 ` Markus Armbruster
  9 siblings, 1 reply; 29+ messages in thread
From: Peter Xu @ 2022-09-06 20:46 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

On Fri, Sep 02, 2022 at 01:22:28AM +0800, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> v1:
> - make parameter vcpu-dirty-limit experimental 
> - switch dirty limit off when cancel migrate
> - add cancel logic in migration test 
> 
> Please review, thanks,
> 
> Yong 
> 
> Abstract
> ========
> 
> This series added a new migration capability called "dirtylimit".  It can
> be enabled when dirty ring is enabled, and it'll improve the vCPU performance
> during the process of migration. It is based on the previous patchset:
> https://lore.kernel.org/qemu-devel/cover.1656177590.git.huangy81@chinatelecom.cn/
> 
> As mentioned in the patchset "support dirty restraint on vCPU", the dirtylimit
> way of migration can keep read processes from being penalized. This series wires
> up the vCPU dirty limit and wraps it up as the dirtylimit capability of migration.
> I introduce two parameters, vcpu-dirtylimit-period and vcpu-dirtylimit, to
> implement the setup of dirtylimit during live migration.
> 
> To validate the implementation, I tested a 32 vCPU VM live migration with the
> following model:
> Only dirty vcpu0 and vcpu1 with a heavy memory workload and leave the rest of
> the vcpus untouched, running unixbench on vcpu8-vcpu15 by setting up the CPU
> affinity with the following command:
> taskset -c 8-15 ./Run -i 2 -c 8 {unixbench test item}
> 
> The following are results:
> 
> host cpu: Intel(R) Xeon(R) Platinum 8378A
> host interface speed: 1000Mb/s
>   |---------------------+--------+------------+---------------|
>   | UnixBench test item | Normal | Dirtylimit | Auto-converge |
>   |---------------------+--------+------------+---------------|
>   | dhry2reg            | 32800  | 32786      | 25292         |
>   | whetstone-double    | 10326  | 10315      | 9847          |
>   | pipe                | 15442  | 15271      | 14506         |
>   | context1            | 7260   | 6235       | 4514          |
>   | spawn               | 3663   | 3317       | 3249          |
>   | syscall             | 4669   | 4667       | 3841          |
>   |---------------------+--------+------------+---------------|
> From the data above we can conclude that vcpus that do not dirty memory
> in the VM are almost unaffected during dirtylimit migration, whereas the
> auto-converge way affects them noticeably.
> 
> I also tested the total time of dirtylimit migration with variable dirty memory
> size in the VM.
> 
> scenario 1:
> host cpu: Intel(R) Xeon(R) Platinum 8378A
> host interface speed: 1000Mb/s
>   |-----------------------+----------------+-------------------|
>   | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
>   |-----------------------+----------------+-------------------|
>   | 60                    | 2014           | 2131              |
>   | 70                    | 5381           | 12590             |
>   | 90                    | 6037           | 33545             |
>   | 110                   | 7660           | [*]               |
>   |-----------------------+----------------+-------------------|
>   [*]: In this case migration does not converge.
> 
> scenario 2:
> host cpu: Intel(R) Xeon(R) CPU E5-2650
> host interface speed: 10000Mb/s
>   |-----------------------+----------------+-------------------|
>   | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
>   |-----------------------+----------------+-------------------|
>   | 1600                  | 15842          | 27548             |
>   | 2000                  | 19026          | 38447             |
>   | 2400                  | 19897          | 46381             |
>   | 2800                  | 22338          | 57149             |
>   |-----------------------+----------------+-------------------|
> The data above shows that the dirtylimit way of migration can also reduce
> the total migration time, and it achieves convergence more easily in some cases.
> 
> In addition to implementing the dirtylimit capability itself, this series
> adds 3 tests for migration so that developers can easily experiment with it:
>  1. qtest for dirty limit migration
>  2. support dirty ring way of migration for guestperf tool
>  3. support dirty limit migration for guestperf tool

Yong,

I should have asked even earlier - just curious whether you have started
using this in production systems?  It's definitely not required for any
patchset to be merged, but it'll be very useful (and supportive)
information to have if there's proper testing beds applied already.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 0/8] migration: introduce dirtylimit capability
  2022-09-06 20:46 ` [PATCH v1 0/8] migration: introduce dirtylimit capability Peter Xu
@ 2022-09-07 14:52   ` Hyman
  0 siblings, 0 replies; 29+ messages in thread
From: Hyman @ 2022-09-07 14:52 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange



在 2022/9/7 4:46, Peter Xu 写道:
> On Fri, Sep 02, 2022 at 01:22:28AM +0800, huangy81@chinatelecom.cn wrote:
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> v1:
>> - make parameter vcpu-dirty-limit experimental
>> - switch dirty limit off when cancel migrate
>> - add cancel logic in migration test
>>
>> Please review, thanks,
>>
>> Yong
>>
>> Abstract
>> ========
>>
>> This series added a new migration capability called "dirtylimit".  It can
>> be enabled when dirty ring is enabled, and it'll improve the vCPU performance
>> during the process of migration. It is based on the previous patchset:
>> https://lore.kernel.org/qemu-devel/cover.1656177590.git.huangy81@chinatelecom.cn/
>>
>> As mentioned in the patchset "support dirty restraint on vCPU", the dirtylimit
>> way of migration avoids penalizing the read processes. This series wires up
>> the vCPU dirty limit and wraps it up as the dirtylimit capability of
>> migration. I introduce two parameters, vcpu-dirtylimit-period and
>> vcpu-dirtylimit, to implement the setup of dirtylimit during live migration.
>>
>> To validate the implementation, I tested live migration of a 32-vCPU VM with
>> the following model:
>> Only vcpu0 and vcpu1 are dirtied with a heavy memory workload while the rest
>> of the vCPUs are left untouched; UnixBench runs on vcpu8-vcpu15 with the CPU
>> affinity set up by the following command:
>> taskset -c 8-15 ./Run -i 2 -c 8 {unixbench test item}
>>
>> The following are results:
>>
>> host cpu: Intel(R) Xeon(R) Platinum 8378A
>> host interface speed: 1000Mb/s
>>    |---------------------+--------+------------+---------------|
>>    | UnixBench test item | Normal | Dirtylimit | Auto-converge |
>>    |---------------------+--------+------------+---------------|
>>    | dhry2reg            | 32800  | 32786      | 25292         |
>>    | whetstone-double    | 10326  | 10315      | 9847          |
>>    | pipe                | 15442  | 15271      | 14506         |
>>    | context1            | 7260   | 6235       | 4514          |
>>    | spawn               | 3663   | 3317       | 3249          |
>>    | syscall             | 4669   | 4667       | 3841          |
>>    |---------------------+--------+------------+---------------|
>> From the data above we can conclude that vCPUs that do not dirty memory in
>> the VM are almost unaffected during dirtylimit migration, whereas they are
>> noticeably affected under auto-converge.
>>
>> I also tested the total time of dirtylimit migration with variable dirty memory
>> size in vm.
>>
>> scenario 1:
>> host cpu: Intel(R) Xeon(R) Platinum 8378A
>> host interface speed: 1000Mb/s
>>    |-----------------------+----------------+-------------------|
>>    | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
>>    |-----------------------+----------------+-------------------|
>>    | 60                    | 2014           | 2131              |
>>    | 70                    | 5381           | 12590             |
>>    | 90                    | 6037           | 33545             |
>>    | 110                   | 7660           | [*]               |
>>    |-----------------------+----------------+-------------------|
>>    [*]: This case means migration is not convergent.
>>
>> scenario 2:
>> host cpu: Intel(R) Xeon(R) CPU E5-2650
>> host interface speed: 10000Mb/s
>>    |-----------------------+----------------+-------------------|
>>    | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
>>    |-----------------------+----------------+-------------------|
>>    | 1600                  | 15842          | 27548             |
>>    | 2000                  | 19026          | 38447             |
>>    | 2400                  | 19897          | 46381             |
>>    | 2800                  | 22338          | 57149             |
>>    |-----------------------+----------------+-------------------|
>> The data above shows that the dirtylimit way of migration can also reduce
>> the total migration time, and it achieves convergence more easily in some cases.
>>
>> In addition to implementing the dirtylimit capability itself, this series
>> adds 3 tests for migration so that developers can easily experiment with it:
>>   1. qtest for dirty limit migration
>>   2. support dirty ring way of migration for guestperf tool
>>   3. support dirty limit migration for guestperf tool
> 
> Yong,
> 
> I should have asked even earlier - just curious whether you have started
> using this in production systems?  It's definitely not required for any
> patchset to be merged, but it'll be very useful (and supportive)
> information to have if there's proper testing beds applied already.
> 
Actually, no. When I posted the cover letter above, the QEMU version in
our production environment was much older than upstream, and the patchset
deployed there differs from this one; initially I built a test setup and
ran the tests on my own. But this feature is now being tested by a
dedicated test team, so once the report is ready, I'll post it. :)
> Thanks,
> 



* Re: [PATCH v1 4/8] migration: Implement dirty-limit convergence algo
  2022-09-06 20:37   ` Peter Xu
@ 2022-09-08 14:35     ` Hyman
  2022-09-08 14:47       ` Peter Xu
  0 siblings, 1 reply; 29+ messages in thread
From: Hyman @ 2022-09-08 14:35 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange



在 2022/9/7 4:37, Peter Xu 写道:
> On Fri, Sep 02, 2022 at 01:22:32AM +0800, huangy81@chinatelecom.cn wrote:
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> Implement dirty-limit convergence algo for live migration,
>> which is kind of like auto-converge algo but using dirty-limit
>> instead of cpu throttle to make migration convergent.
>>
>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>> ---
>>   migration/migration.c  |  1 +
>>   migration/ram.c        | 53 +++++++++++++++++++++++++++++++++++++-------------
>>   migration/trace-events |  1 +
>>   3 files changed, 42 insertions(+), 13 deletions(-)
>>
>> diff --git a/migration/migration.c b/migration/migration.c
>> index d117bb4..64696de 100644
>> --- a/migration/migration.c
>> +++ b/migration/migration.c
>> @@ -239,6 +239,7 @@ void migration_cancel(const Error *error)
>>       if (error) {
>>           migrate_set_error(current_migration, error);
>>       }
>> +    qmp_cancel_vcpu_dirty_limit(false, -1, NULL);
>>       migrate_fd_cancel(current_migration);
>>   }
>>   
>> diff --git a/migration/ram.c b/migration/ram.c
>> index dc1de9d..cc19c5e 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -45,6 +45,7 @@
>>   #include "qapi/error.h"
>>   #include "qapi/qapi-types-migration.h"
>>   #include "qapi/qapi-events-migration.h"
>> +#include "qapi/qapi-commands-migration.h"
>>   #include "qapi/qmp/qerror.h"
>>   #include "trace.h"
>>   #include "exec/ram_addr.h"
>> @@ -57,6 +58,8 @@
>>   #include "qemu/iov.h"
>>   #include "multifd.h"
>>   #include "sysemu/runstate.h"
>> +#include "sysemu/dirtylimit.h"
>> +#include "sysemu/kvm.h"
>>   
>>   #include "hw/boards.h" /* for machine_dump_guest_core() */
>>   
>> @@ -1139,6 +1142,21 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
>>       }
>>   }
>>   
>> +/*
>> + * Enable dirty-limit to throttle down the guest
>> + */
>> +static void migration_dirty_limit_guest(void)
>> +{
>> +    if (!dirtylimit_in_service()) {
>> +        MigrationState *s = migrate_get_current();
>> +        int64_t quota_dirtyrate = s->parameters.x_vcpu_dirty_limit;
>> +
>> +        /* Set quota dirtyrate if dirty limit not in service */
>> +        qmp_set_vcpu_dirty_limit(false, -1, quota_dirtyrate, NULL);
>> +        trace_migration_dirty_limit_guest(quota_dirtyrate);
>> +    }
>> +}
>> +
>>   static void migration_trigger_throttle(RAMState *rs)
>>   {
>>       MigrationState *s = migrate_get_current();
>> @@ -1148,22 +1166,31 @@ static void migration_trigger_throttle(RAMState *rs)
>>       uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
>>       uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
>>   
>> -    /* During block migration the auto-converge logic incorrectly detects
>> -     * that ram migration makes no progress. Avoid this by disabling the
>> -     * throttling logic during the bulk phase of block migration. */
>> -    if (migrate_auto_converge() && !blk_mig_bulk_active()) {
>> -        /* The following detection logic can be refined later. For now:
>> -           Check to see if the ratio between dirtied bytes and the approx.
>> -           amount of bytes that just got transferred since the last time
>> -           we were in this routine reaches the threshold. If that happens
>> -           twice, start or increase throttling. */
>> -
>> -        if ((bytes_dirty_period > bytes_dirty_threshold) &&
>> -            (++rs->dirty_rate_high_cnt >= 2)) {
>> +    /*
>> +     * The following detection logic can be refined later. For now:
>> +     * Check to see if the ratio between dirtied bytes and the approx.
>> +     * amount of bytes that just got transferred since the last time
>> +     * we were in this routine reaches the threshold. If that happens
>> +     * twice, start or increase throttling.
>> +     */
>> +
>> +    if ((bytes_dirty_period > bytes_dirty_threshold) &&
>> +        (++rs->dirty_rate_high_cnt >= 2)) {
>> +        rs->dirty_rate_high_cnt = 0;
>> +        /*
>> +         * During block migration the auto-converge logic incorrectly detects
>> +         * that ram migration makes no progress. Avoid this by disabling the
>> +         * throttling logic during the bulk phase of block migration
>> +         */
>> +
>> +        if (migrate_auto_converge() && !blk_mig_bulk_active()) {
>>               trace_migration_throttle();
>> -            rs->dirty_rate_high_cnt = 0;
>>               mig_throttle_guest_down(bytes_dirty_period,
>>                                       bytes_dirty_threshold);
>> +        } else if (migrate_dirty_limit() &&
>> +                   kvm_dirty_ring_enabled() &&
>> +                   migration_is_active(s)) {
>> +            migration_dirty_limit_guest();
> 
> We'll call this multiple time, but only the 1st call will make sense, right?
Yes.
> 
> Can we call it once somewhere?  E.g. at the start of migration?
It makes sense indeed. But if dirty limit ran right when migration starts,
the behavior of dirty-limit migration would differ somewhat from
auto-converge: dirty limit would slow down guest vCPU writes regardless of
whether dirty_rate_high_cnt has exceeded 2. VMs that dirty memory only
lightly can converge without any throttling, but under the new scheme, if
we set the dirty limit to a very low value, they may suffer needless
restriction. Can we accept that?
> 
> Thanks,
> 



* Re: [PATCH v1 4/8] migration: Implement dirty-limit convergence algo
  2022-09-08 14:35     ` Hyman
@ 2022-09-08 14:47       ` Peter Xu
  2022-09-08 14:59         ` Hyman Huang
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Xu @ 2022-09-08 14:47 UTC (permalink / raw)
  To: Hyman
  Cc: qemu-devel, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

Yong,

Your recent two posts both got wrongly cut off by your mail server for some
reason..

-- 
Peter Xu




* Re: [PATCH v1 4/8] migration: Implement dirty-limit convergence algo
  2022-09-08 14:47       ` Peter Xu
@ 2022-09-08 14:59         ` Hyman Huang
  0 siblings, 0 replies; 29+ messages in thread
From: Hyman Huang @ 2022-09-08 14:59 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Juan Quintela, Dr. David Alan Gilbert, Eric Blake,
	Markus Armbruster, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange



在 2022/9/8 22:47, Peter Xu 写道:
> Yong,
> 
> Your recent two posts all got wrongly cut-off by your mail server for some
> reason..
> 
Hm, I noticed that; I'll check it. Thanks for the reminder. :)

-- 
Best regard

Hyman Huang(黄勇)



* Re: [PATCH v1 0/8] migration: introduce dirtylimit capability
  2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
                   ` (8 preceding siblings ...)
  2022-09-06 20:46 ` [PATCH v1 0/8] migration: introduce dirtylimit capability Peter Xu
@ 2022-10-01 14:37 ` Markus Armbruster
  2022-10-01 15:01   ` Hyman Huang
  9 siblings, 1 reply; 29+ messages in thread
From: Markus Armbruster @ 2022-10-01 14:37 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> v1:
> - make parameter vcpu-dirty-limit experimental 
> - switch dirty limit off when cancel migrate
> - add cancel logic in migration test 
>
> Please review, thanks,
>
> Yong 

Are you still pursuing this feature?

> Abstract
> ========
>
> This series added a new migration capability called "dirtylimit".  It can
> be enabled when dirty ring is enabled, and it'll improve the vCPU performance
> during the process of migration. It is based on the previous patchset:
> https://lore.kernel.org/qemu-devel/cover.1656177590.git.huangy81@chinatelecom.cn/
>
> As mentioned in the patchset "support dirty restraint on vCPU", the dirtylimit
> way of migration avoids penalizing the read processes. This series wires up
> the vCPU dirty limit and wraps it up as the dirtylimit capability of
> migration. I introduce two parameters, vcpu-dirtylimit-period and
> vcpu-dirtylimit, to implement the setup of dirtylimit during live migration.
>
> To validate the implementation, I tested live migration of a 32-vCPU VM with
> the following model:
> Only vcpu0 and vcpu1 are dirtied with a heavy memory workload while the rest
> of the vCPUs are left untouched; UnixBench runs on vcpu8-vcpu15 with the CPU
> affinity set up by the following command:
> taskset -c 8-15 ./Run -i 2 -c 8 {unixbench test item}
>
> The following are results:
>
> host cpu: Intel(R) Xeon(R) Platinum 8378A
> host interface speed: 1000Mb/s
>   |---------------------+--------+------------+---------------|
>   | UnixBench test item | Normal | Dirtylimit | Auto-converge |
>   |---------------------+--------+------------+---------------|
>   | dhry2reg            | 32800  | 32786      | 25292         |
>   | whetstone-double    | 10326  | 10315      | 9847          |
>   | pipe                | 15442  | 15271      | 14506         |
>   | context1            | 7260   | 6235       | 4514          |
>   | spawn               | 3663   | 3317       | 3249          |
>   | syscall             | 4669   | 4667       | 3841          |
>   |---------------------+--------+------------+---------------|
> From the data above we can conclude that vCPUs that do not dirty memory in
> the VM are almost unaffected during dirtylimit migration, whereas they are
> noticeably affected under auto-converge.
>
> I also tested the total time of dirtylimit migration with variable dirty memory
> size in vm.
>
> scenario 1:
> host cpu: Intel(R) Xeon(R) Platinum 8378A
> host interface speed: 1000Mb/s
>   |-----------------------+----------------+-------------------|
>   | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
>   |-----------------------+----------------+-------------------|
>   | 60                    | 2014           | 2131              |
>   | 70                    | 5381           | 12590             |
>   | 90                    | 6037           | 33545             |
>   | 110                   | 7660           | [*]               |
>   |-----------------------+----------------+-------------------|
>   [*]: This case means migration is not convergent. 
>
> scenario 2:
> host cpu: Intel(R) Xeon(R) CPU E5-2650
> host interface speed: 10000Mb/s
>   |-----------------------+----------------+-------------------|
>   | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
>   |-----------------------+----------------+-------------------|
>   | 1600                  | 15842          | 27548             |
>   | 2000                  | 19026          | 38447             |
>   | 2400                  | 19897          | 46381             |
>   | 2800                  | 22338          | 57149             |
>   |-----------------------+----------------+-------------------|
> The data above shows that the dirtylimit way of migration can also reduce
> the total migration time, and it achieves convergence more easily in some cases.
>
> In addition to implementing the dirtylimit capability itself, this series
> adds 3 tests for migration so that developers can easily experiment with it:
>  1. qtest for dirty limit migration
>  2. support dirty ring way of migration for guestperf tool
>  3. support dirty limit migration for guestperf tool
>
> Please review, thanks !
>
> Hyman Huang (8):
>   qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
>   qapi/migration: Introduce x-vcpu-dirty-limit parameters
>   migration: Introduce dirty-limit capability
>   migration: Implement dirty-limit convergence algo
>   migration: Export dirty-limit time info
>   tests: Add migration dirty-limit capability test
>   tests/migration: Introduce dirty-ring-size option into guestperf
>   tests/migration: Introduce dirty-limit into guestperf
>
>  include/sysemu/dirtylimit.h             |   2 +
>  migration/migration.c                   |  51 +++++++++++
>  migration/migration.h                   |   1 +
>  migration/ram.c                         |  53 ++++++++---
>  migration/trace-events                  |   1 +
>  monitor/hmp-cmds.c                      |  26 ++++++
>  qapi/migration.json                     |  57 ++++++++++--
>  softmmu/dirtylimit.c                    |  33 ++++++-
>  tests/migration/guestperf/comparison.py |  24 +++++
>  tests/migration/guestperf/engine.py     |  33 ++++++-
>  tests/migration/guestperf/hardware.py   |   8 +-
>  tests/migration/guestperf/progress.py   |  17 +++-
>  tests/migration/guestperf/scenario.py   |  11 ++-
>  tests/migration/guestperf/shell.py      |  25 +++++-
>  tests/qtest/migration-test.c            | 154 ++++++++++++++++++++++++++++++++
>  15 files changed, 465 insertions(+), 31 deletions(-)




* Re: [PATCH v1 0/8] migration: introduce dirtylimit capability
  2022-10-01 14:37 ` Markus Armbruster
@ 2022-10-01 15:01   ` Hyman Huang
  0 siblings, 0 replies; 29+ messages in thread
From: Hyman Huang @ 2022-10-01 15:01 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange



在 2022/10/1 22:37, Markus Armbruster 写道:
> huangy81@chinatelecom.cn writes:
> 
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> v1:
>> - make parameter vcpu-dirty-limit experimental
>> - switch dirty limit off when cancel migrate
>> - add cancel logic in migration test
>>
>> Please review, thanks,
>>
>> Yong
> 
> Are you still pursuing this feature?
Yes, of course, but the detailed test report has not been prepared yet, and
the last 3 commits of this patchset have not been commented on; I'm waiting
for that, and then the next version can be improved greatly.

Ping to Daniel and David: what do you think of these 3 test patches?
I would be very pleased if you could help me with the review.  :)

Thanks

Yong
> 
>> Abstract
>> ========
>>
>> This series added a new migration capability called "dirtylimit".  It can
>> be enabled when dirty ring is enabled, and it'll improve the vCPU performance
>> during the process of migration. It is based on the previous patchset:
>> https://lore.kernel.org/qemu-devel/cover.1656177590.git.huangy81@chinatelecom.cn/
>>
>> As mentioned in the patchset "support dirty restraint on vCPU", the dirtylimit
>> way of migration avoids penalizing the read processes. This series wires up
>> the vCPU dirty limit and wraps it up as the dirtylimit capability of
>> migration. I introduce two parameters, vcpu-dirtylimit-period and
>> vcpu-dirtylimit, to implement the setup of dirtylimit during live migration.
>>
>> To validate the implementation, I tested live migration of a 32-vCPU VM with
>> the following model:
>> Only vcpu0 and vcpu1 are dirtied with a heavy memory workload while the rest
>> of the vCPUs are left untouched; UnixBench runs on vcpu8-vcpu15 with the CPU
>> affinity set up by the following command:
>> taskset -c 8-15 ./Run -i 2 -c 8 {unixbench test item}
>>
>> The following are results:
>>
>> host cpu: Intel(R) Xeon(R) Platinum 8378A
>> host interface speed: 1000Mb/s
>>    |---------------------+--------+------------+---------------|
>>    | UnixBench test item | Normal | Dirtylimit | Auto-converge |
>>    |---------------------+--------+------------+---------------|
>>    | dhry2reg            | 32800  | 32786      | 25292         |
>>    | whetstone-double    | 10326  | 10315      | 9847          |
>>    | pipe                | 15442  | 15271      | 14506         |
>>    | context1            | 7260   | 6235       | 4514          |
>>    | spawn               | 3663   | 3317       | 3249          |
>>    | syscall             | 4669   | 4667       | 3841          |
>>    |---------------------+--------+------------+---------------|
>> From the data above we can conclude that vCPUs that do not dirty memory in
>> the VM are almost unaffected during dirtylimit migration, whereas they are
>> noticeably affected under auto-converge.
>>
>> I also tested the total time of dirtylimit migration with variable dirty memory
>> size in vm.
>>
>> scenario 1:
>> host cpu: Intel(R) Xeon(R) Platinum 8378A
>> host interface speed: 1000Mb/s
>>    |-----------------------+----------------+-------------------|
>>    | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
>>    |-----------------------+----------------+-------------------|
>>    | 60                    | 2014           | 2131              |
>>    | 70                    | 5381           | 12590             |
>>    | 90                    | 6037           | 33545             |
>>    | 110                   | 7660           | [*]               |
>>    |-----------------------+----------------+-------------------|
>>    [*]: This case means migration is not convergent.
>>
>> scenario 2:
>> host cpu: Intel(R) Xeon(R) CPU E5-2650
>> host interface speed: 10000Mb/s
>>    |-----------------------+----------------+-------------------|
>>    | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
>>    |-----------------------+----------------+-------------------|
>>    | 1600                  | 15842          | 27548             |
>>    | 2000                  | 19026          | 38447             |
>>    | 2400                  | 19897          | 46381             |
>>    | 2800                  | 22338          | 57149             |
>>    |-----------------------+----------------+-------------------|
>> The data above shows that the dirtylimit way of migration can also reduce
>> the total migration time, and it achieves convergence more easily in some cases.
>>
>> In addition to implementing the dirtylimit capability itself, this series
>> adds 3 tests for migration so that developers can easily experiment with it:
>>   1. qtest for dirty limit migration
>>   2. support dirty ring way of migration for guestperf tool
>>   3. support dirty limit migration for guestperf tool
>>
>> Please review, thanks !
>>
>> Hyman Huang (8):
>>    qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
>>    qapi/migration: Introduce x-vcpu-dirty-limit parameters
>>    migration: Introduce dirty-limit capability
>>    migration: Implement dirty-limit convergence algo
>>    migration: Export dirty-limit time info
>>    tests: Add migration dirty-limit capability test
>>    tests/migration: Introduce dirty-ring-size option into guestperf
>>    tests/migration: Introduce dirty-limit into guestperf
>>
>>   include/sysemu/dirtylimit.h             |   2 +
>>   migration/migration.c                   |  51 +++++++++++
>>   migration/migration.h                   |   1 +
>>   migration/ram.c                         |  53 ++++++++---
>>   migration/trace-events                  |   1 +
>>   monitor/hmp-cmds.c                      |  26 ++++++
>>   qapi/migration.json                     |  57 ++++++++++--
>>   softmmu/dirtylimit.c                    |  33 ++++++-
>>   tests/migration/guestperf/comparison.py |  24 +++++
>>   tests/migration/guestperf/engine.py     |  33 ++++++-
>>   tests/migration/guestperf/hardware.py   |   8 +-
>>   tests/migration/guestperf/progress.py   |  17 +++-
>>   tests/migration/guestperf/scenario.py   |  11 ++-
>>   tests/migration/guestperf/shell.py      |  25 +++++-
>>   tests/qtest/migration-test.c            | 154 ++++++++++++++++++++++++++++++++
>>   15 files changed, 465 insertions(+), 31 deletions(-)
> 

-- 
Best regard

Hyman Huang(黄勇)



* Re: [PATCH v1 5/8] migration: Export dirty-limit time info
  2022-09-01 17:22 ` [PATCH v1 5/8] migration: Export dirty-limit time info huangy81
@ 2022-10-01 18:31   ` Markus Armbruster
  2022-10-02  1:13     ` Hyman Huang
  0 siblings, 1 reply; 29+ messages in thread
From: Markus Armbruster @ 2022-10-01 18:31 UTC (permalink / raw)
  To: huangy81
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

huangy81@chinatelecom.cn writes:

> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>
> Export dirty limit throttle time and estimated ring full
> time, through which we can observe the process of dirty
> limit during live migration.
>
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>

[...]

> diff --git a/qapi/migration.json b/qapi/migration.json
> index bc4bc96..c263d54 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -242,6 +242,12 @@
>  #                   Present and non-empty when migration is blocked.
>  #                   (since 6.0)
>  #
> +# @dirty-limit-throttle-us-per-full: Throttle time (us) during the period of
> +#                                    dirty ring full (since 7.0)
> +#
> +# @dirty-limit-us-ring-full: Estimated periodic time (us) of dirty ring full.
> +#                            (since 7.0)
> +#

Can you explain what is measured here a bit more verbosely?

>  # Since: 0.14
>  ##
>  { 'struct': 'MigrationInfo',
> @@ -259,7 +265,9 @@
>             '*postcopy-blocktime' : 'uint32',
>             '*postcopy-vcpu-blocktime': ['uint32'],
>             '*compression': 'CompressionStats',
> -           '*socket-address': ['SocketAddress'] } }
> +           '*socket-address': ['SocketAddress'],
> +           '*dirty-limit-throttle-us-per-full': 'int64',
> +           '*dirty-limit-us-ring-full': 'int64'} }
>  
>  ##
>  # @query-migrate:

[...]




* Re: [PATCH v1 5/8] migration: Export dirty-limit time info
  2022-10-01 18:31   ` Markus Armbruster
@ 2022-10-02  1:13     ` Hyman Huang
  2022-10-07 15:09       ` Markus Armbruster
  0 siblings, 1 reply; 29+ messages in thread
From: Hyman Huang @ 2022-10-02  1:13 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange



在 2022/10/2 2:31, Markus Armbruster 写道:
> huangy81@chinatelecom.cn writes:
> 
>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>
>> Export dirty limit throttle time and estimated ring full
>> time, through which we can observe the process of dirty
>> limit during live migration.
>>
>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> [...]
> 
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index bc4bc96..c263d54 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -242,6 +242,12 @@
>>   #                   Present and non-empty when migration is blocked.
>>   #                   (since 6.0)
>>   #
>> +# @dirty-limit-throttle-us-per-full: Throttle time (us) during the period of
>> +#                                    dirty ring full (since 7.0)
>> +#
>> +# @dirty-limit-us-ring-full: Estimated periodic time (us) of dirty ring full.
>> +#                            (since 7.0)
>> +#
> 
> Can you explain what is measured here a bit more verbosely?
The two fields in migration info aim to export the dirty-limit throttle
times so that upper-layer apps can track the progress of live migration,
much like 'cpu-throttle-percentage'.

The commit "tests: Add migration dirty-limit capability test" makes use
of 'dirty-limit-throttle-us-per-full' to check whether dirty limit has
started, and the commit "tests/migration: Introduce dirty-limit into
guestperf" introduces the two fields so the guestperf tool can also show
the progress of dirty-limit migration.

I also use qmp_query_migrate to observe the migration by checking these
two fields.

I'm not sure whether the above explanation is exactly what you want;
please feel free to start any discussion about this feature.

Thanks, Markus.

Yong
> 
>>   # Since: 0.14
>>   ##
>>   { 'struct': 'MigrationInfo',
>> @@ -259,7 +265,9 @@
>>              '*postcopy-blocktime' : 'uint32',
>>              '*postcopy-vcpu-blocktime': ['uint32'],
>>              '*compression': 'CompressionStats',
>> -           '*socket-address': ['SocketAddress'] } }
>> +           '*socket-address': ['SocketAddress'],
>> +           '*dirty-limit-throttle-us-per-full': 'int64',
>> +           '*dirty-limit-us-ring-full': 'int64'} }
>>   
>>   ##
>>   # @query-migrate:
> 
> [...]
> 

-- 
Best regards

Hyman Huang(黄勇)


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 5/8] migration: Export dirty-limit time info
  2022-10-02  1:13     ` Hyman Huang
@ 2022-10-07 15:09       ` Markus Armbruster
  2022-10-07 16:22         ` Hyman Huang
  0 siblings, 1 reply; 29+ messages in thread
From: Markus Armbruster @ 2022-10-07 15:09 UTC (permalink / raw)
  To: Hyman Huang
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange

Hyman Huang <huangy81@chinatelecom.cn> writes:

> On 2022/10/2 2:31, Markus Armbruster wrote:
>> huangy81@chinatelecom.cn writes:
>> 
>>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>>
>>> Export dirty limit throttle time and estimated ring full
>>> time, through which we can observe the process of dirty
>>> limit during live migration.
>>>
>>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>> [...]
>> 
>>> diff --git a/qapi/migration.json b/qapi/migration.json
>>> index bc4bc96..c263d54 100644
>>> --- a/qapi/migration.json
>>> +++ b/qapi/migration.json
>>> @@ -242,6 +242,12 @@
>>>   #                   Present and non-empty when migration is blocked.
>>>   #                   (since 6.0)
>>>   #
>>> +# @dirty-limit-throttle-us-per-full: Throttle time (us) during the period of
>>> +#                                    dirty ring full (since 7.0)
>>> +#
>>> +# @dirty-limit-us-ring-full: Estimated periodic time (us) of dirty ring full.
>>> +#                            (since 7.0)
>>> +#
>>
>> Can you explain what is measured here a bit more verbosely?
>
> The two fields of migration info aim to export the dirty-limit throttle
> time so that upper-layer apps can track the progress of live migration,
> like 'cpu-throttle-percentage'.
>
> The commit "tests: Add migration dirty-limit capability test" makes use
> of 'dirty-limit-throttle-us-per-full' to check whether dirty-limit has
> started, and the commit "tests/migration: Introduce dirty-limit into
> guestperf" introduces the two fields so the guestperf tool also shows
> the progress of dirty-limit migration.
>
> I also use qmp_query_migrate to observe the migration by checking these
> two fields.
>
> I'm not sure if the above explanation is exactly what you want; please
> feel free to start any discussion about this feature.

You explained use cases, which is always welcome.

I'm trying to understand the two new members' meaning, i.e. what exactly
is being measured.

For existing @cpu-throttle-percentage, the doc comment tells me:
"percentage of time guest cpus are being throttled during
auto-converge."

For your new members, the doc comment tries to tell me, but it doesn't
succeed.  If you explain what is being measured more verbosely, we may
be able to improve the doc comment.




* Re: [PATCH v1 5/8] migration: Export dirty-limit time info
  2022-10-07 15:09       ` Markus Armbruster
@ 2022-10-07 16:22         ` Hyman Huang
  0 siblings, 0 replies; 29+ messages in thread
From: Hyman Huang @ 2022-10-07 16:22 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: qemu-devel, Peter Xu, Juan Quintela, Dr. David Alan Gilbert,
	Eric Blake, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	Daniel P. Berrange



On 2022/10/7 23:09, Markus Armbruster wrote:
> Hyman Huang <huangy81@chinatelecom.cn> writes:
> 
>> On 2022/10/2 2:31, Markus Armbruster wrote:
>>> huangy81@chinatelecom.cn writes:
>>>
>>>> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>>>
>>>> Export dirty limit throttle time and estimated ring full
>>>> time, through which we can observe the process of dirty
>>>> limit during live migration.
>>>>
>>>> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
>>> [...]
>>>
>>>> diff --git a/qapi/migration.json b/qapi/migration.json
>>>> index bc4bc96..c263d54 100644
>>>> --- a/qapi/migration.json
>>>> +++ b/qapi/migration.json
>>>> @@ -242,6 +242,12 @@
>>>>    #                   Present and non-empty when migration is blocked.
>>>>    #                   (since 6.0)
>>>>    #
>>>> +# @dirty-limit-throttle-us-per-full: Throttle time (us) during the period of
>>>> +#                                    dirty ring full (since 7.0)
>>>> +#
>>>> +# @dirty-limit-us-ring-full: Estimated periodic time (us) of dirty ring full.
>>>> +#                            (since 7.0)
>>>> +#
>>>
>>> Can you explain what is measured here a bit more verbosely?
>>
>> The two fields of migration info aim to export the dirty-limit throttle
>> time so that upper-layer apps can track the progress of live migration,
>> like 'cpu-throttle-percentage'.
>>
>> The commit "tests: Add migration dirty-limit capability test" makes use
>> of 'dirty-limit-throttle-us-per-full' to check whether dirty-limit has
>> started, and the commit "tests/migration: Introduce dirty-limit into
>> guestperf" introduces the two fields so the guestperf tool also shows
>> the progress of dirty-limit migration.
>>
>> I also use qmp_query_migrate to observe the migration by checking these
>> two fields.
>>
>> I'm not sure if the above explanation is exactly what you want; please
>> feel free to start any discussion about this feature.
> 
> You explained use cases, which is always welcome.
> 
> I'm trying to understand the two new members' meaning, i.e. what exactly
> is being measured.

dirty-limit-throttle-us-per-full:
The time a vCPU should sleep once its dirty ring gets full. Since we 
set the limit on a vCPU every time it returns to QEMU for the 
KVM_EXIT_DIRTY_RING_FULL reason, the sleep time may also change every 
time the dirty ring gets full. 'dirty-limit-throttle-us-per-full' can 
be read as 'throttle time (us) every time a vCPU's dirty ring gets 
full'. The 'dirty-limit' part is just a prefix marking the parameter 
as dirty-limit-related.

dirty-limit-us-ring-full:
An estimated value of how long it takes a vCPU's dirty ring to get 
full. It depends on the vCPU's dirty page rate: the higher the rate, 
the smaller dirty-limit-us-ring-full is.

dirty-limit-throttle-us-per-full / dirty-limit-us-ring-full * 100 is 
roughly analogous to 'cpu-throttle-percentage'.
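To illustrate the analogy, a minimal sketch (the helper name is mine; it assumes both fields come from the same 'query-migrate' reply, where they are optional members):

```python
# Hypothetical helper: derive a cpu-throttle-percentage-like number
# from the two dirty-limit fields of a 'query-migrate' reply.  In each
# ring-full window of length dirty-limit-us-ring-full, the vCPU sleeps
# for dirty-limit-throttle-us-per-full, so their ratio is the share of
# time the vCPU spends throttled.

def dirty_limit_throttle_pct(info):
    throttle_us = info.get("dirty-limit-throttle-us-per-full", 0)
    ring_full_us = info.get("dirty-limit-us-ring-full", 0)
    if ring_full_us <= 0:
        return 0.0  # no estimate yet, or dirty-limit not active
    return throttle_us / ring_full_us * 100.0
```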

Thanks,

Yong

> 
> For existing @cpu-throttle-percentage, the doc comment tells me:
> "percentage of time guest cpus are being throttled during
> auto-converge."
> 
> For your new members, the doc comment tries to tell me, but it doesn't
> succeed.  If you explain what is being measured more verbosely, we may
> be able to improve the doc comment.
> 

-- 
Best regards

Hyman Huang(黄勇)



end of thread, other threads:[~2022-10-07 16:43 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-01 17:22 [PATCH v1 0/8] migration: introduce dirtylimit capability huangy81
2022-09-01 17:22 ` [PATCH v1 1/8] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter huangy81
2022-09-02  8:02   ` Markus Armbruster
2022-09-01 17:22 ` [PATCH v1 2/8] qapi/migration: Introduce x-vcpu-dirty-limit parameters huangy81
2022-09-02  8:03   ` Markus Armbruster
2022-09-02 13:27     ` Hyman Huang
2022-09-01 17:22 ` [PATCH v1 3/8] migration: Introduce dirty-limit capability huangy81
2022-09-02  8:07   ` Markus Armbruster
2022-09-02 14:15     ` Hyman Huang
2022-09-05  9:32       ` Markus Armbruster
2022-09-05 13:13         ` Hyman Huang
2022-09-06  8:02           ` Markus Armbruster
2022-09-01 17:22 ` [PATCH v1 4/8] migration: Implement dirty-limit convergence algo huangy81
2022-09-06 20:37   ` Peter Xu
2022-09-08 14:35     ` Hyman
2022-09-08 14:47       ` Peter Xu
2022-09-08 14:59         ` Hyman Huang
2022-09-01 17:22 ` [PATCH v1 5/8] migration: Export dirty-limit time info huangy81
2022-10-01 18:31   ` Markus Armbruster
2022-10-02  1:13     ` Hyman Huang
2022-10-07 15:09       ` Markus Armbruster
2022-10-07 16:22         ` Hyman Huang
2022-09-01 17:22 ` [PATCH v1 6/8] tests: Add migration dirty-limit capability test huangy81
2022-09-01 17:22 ` [PATCH v1 7/8] tests/migration: Introduce dirty-ring-size option into guestperf huangy81
2022-09-01 17:22 ` [PATCH v1 8/8] tests/migration: Introduce dirty-limit " huangy81
2022-09-06 20:46 ` [PATCH v1 0/8] migration: introduce dirtylimit capability Peter Xu
2022-09-07 14:52   ` Hyman
2022-10-01 14:37 ` Markus Armbruster
2022-10-01 15:01   ` Hyman Huang
