* [PATCH 0/2] Migration time prediction using calc-dirty-rate
From: Andrei Gudkov via @ 2023-02-28 13:16 UTC
  To: qemu-devel; +Cc: quintela, dgilbert, Andrei Gudkov

The overall goal of this patch series is to predict the time it would
take to migrate a VM in precopy mode, based on the max allowed downtime,
the network bandwidth, and metrics collected with "calc-dirty-rate".
The predictor itself is a simple Python script that closely follows the
iterations of the migration algorithm: compute how long it would take to
copy the dirty pages, estimate the number of pages dirtied by the VM
since the beginning of the iteration, and repeat until the estimated
iteration time fits within the max allowed downtime. However, to reach
reasonable accuracy, the predictor requires more metrics, which have
been implemented in "calc-dirty-rate".
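
For illustration, here is a minimal sketch of that loop in Python. It is
deliberately simplified, and all names in it are made up for this
example; the real implementation is scripts/predict_migration.py from
patch 2.

# Simplified sketch of the precopy prediction loop (illustrative only).
# dirtied_pages_after(t) returns the estimated number of pages the VM
# dirties within t seconds after a clean state (see the interpolation
# sketch below).
def predict_total_time(dirtied_pages_after, nonzero_pages, zero_pages,
                       page_size, bandwidth_bps, max_downtime):
    # The first iteration copies all of RAM: full pages for non-zero
    # pages, only an 8-byte header for each all-zero page.
    bytes_left = nonzero_pages * page_size + zero_pages * 8
    elapsed = 0.0
    while True:
        iter_time = bytes_left / bandwidth_bps
        if iter_time <= max_downtime:
            return elapsed + iter_time      # migration converges
        if elapsed > 3600:
            return None                     # give up: does not converge
        elapsed += iter_time
        # Pages dirtied while this iteration ran must be re-sent.
        bytes_left = dirtied_pages_after(iter_time) * page_size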

Summary of calc-dirty-rate changes:

1. The most important change is that calc-dirty-rate now produces
   a *vector* of dirty page measurements for progressively increasing time
   periods: 125ms, 250ms, 500ms, 750ms, 1000ms, 1500ms, ..., up to the
   specified calc-time. The motivation behind this change is that the
   number of dirtied pages, as a function of time starting from a "clean
   state" (a new migration iteration), is far from linear. The shape of
   this function depends on the workload type and intensity. Measuring
   the number of dirty pages at progressively increasing periods makes it
   possible to reconstruct this function with piece-wise interpolation
   (see the sketch after this list).

2. A new metric was added: the number of all-zero pages.
   The predictor needs to distinguish between zero and non-zero pages
   because during migration only an 8-byte header is placed on the wire
   for an all-zero page.

3. The hashing function was changed from CRC32 to xxHash.
   This reduces the overhead of sampling by roughly 10x, which is
   important now that some of the measurement periods are sub-second.

4. Other trivial metrics were added for convenience: the total number
   of VM pages, the number of sampled pages, and the page size.
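
As an illustration of the piece-wise interpolation mentioned in item 1,
the sketch below reconstructs the "pages dirtied after t seconds"
function from the "periods"/"n-dirty-pages" vectors (a shortened excerpt
of the output shown further below). This is only a sketch of the idea,
not code from the actual predictor; note that the counts refer to the
sampled pages and must be scaled by n-total-pages/n-sampled-pages to
estimate whole-VM behavior.

import bisect

periods_ms = [125, 250, 375, 500, 750, 1000]   # "periods"
n_dirty    = [33, 78, 119, 151, 217, 236]      # "n-dirty-pages"

def dirtied_pages_after(seconds):
    t = seconds * 1000.0
    if t >= periods_ms[-1]:
        # Extrapolate using the slope of the last measured segment.
        slope = ((n_dirty[-1] - n_dirty[-2])
                 / (periods_ms[-1] - periods_ms[-2]))
        return n_dirty[-1] + slope * (t - periods_ms[-1])
    i = bisect.bisect_left(periods_ms, t)
    if i == 0:
        # Below the first period: interpolate from (0, 0).
        return n_dirty[0] * t / periods_ms[0]
    frac = (t - periods_ms[i - 1]) / (periods_ms[i] - periods_ms[i - 1])
    return n_dirty[i - 1] + frac * (n_dirty[i] - n_dirty[i - 1])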


After these changes, the output of calc-dirty-rate looks like this:

{
  "page-size": 4096,
  "periods": [125, 250, 375, 500, 750, 1000, 1500,
              2000, 3000, 4001, 6000, 8000, 10000,
              15000, 20000, 25000, 30000, 35000,
              40000, 45000, 50000, 60000],
  "status": "measured",
  "sample-pages": 512,
  "dirty-rate": 98,
  "mode": "page-sampling",
  "n-dirty-pages": [33, 78, 119, 151, 217, 236, 293, 336,
                    425, 505, 620, 756, 898, 1204, 1457,
                    1723, 1934, 2141, 2328, 2522, 2675, 2958],
  "n-sampled-pages": 16392,
  "n-zero-pages": 10060,
  "n-total-pages": 8392704,
  "start-time": 2916750,
  "calc-time": 60
}

Passing this data to the prediction script, we get the following
estimates (rows are bandwidth limits, columns are max allowed downtimes;
a dash means the migration is predicted not to converge):

Downtime> |    125ms |    250ms |    500ms |   1000ms |   5000ms |    unlim
---------------------------------------------------------------------------
 100 Mbps |        - |        - |        - |        - |        - |   16m59s  
   1 Gbps |        - |        - |        - |        - |        - |    1m40s
   2 Gbps |        - |        - |        - |        - |    1m41s |      50s  
 2.5 Gbps |        - |        - |        - |        - |    1m07s |      40s
   5 Gbps |      48s |      46s |      31s |      28s |      25s |      20s
  10 Gbps |      13s |      12s |      12s |      12s |      12s |      10s
  25 Gbps |       5s |       5s |       5s |       5s |       4s |       4s
  40 Gbps |       3s |       3s |       3s |       3s |       3s |       3s
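
The "unlim" column can be roughly reproduced by hand from the sample
above, which is a useful sanity check. About 10060/16392 ~ 61% of the
sampled pages are all-zero, so only ~39% of the 8392704 total pages are
sent in full (the 8-byte zero-page headers are negligible here):

# Back-of-the-envelope check for the 2.5 Gbps / unlim cell.
nonzero_pages = 8392704 * (1 - 10060 / 16392)   # ~3.24M pages
payload_bytes = nonzero_pages * 4096            # ~13.3 GB
print(payload_bytes / (2.5e9 / 8))              # ~42 s; table says 40s

The small gap versus the table likely comes from rounding and details of
the script's accounting, but the magnitude matches.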


The quality of prediction was tested with the YCSB benchmark. A Memcached
instance was installed in a 32GiB VM, and a client generated a stream of
requests. Between experiments we varied the request size distribution,
the number of threads, and the location of the client (inside or outside
the VM). After a short warm-up phase, we measured the dirty page
statistics:
1. {"execute": "calc-dirty-rate", "arguments": {"calc-time": 60}}
2. Wait 60 seconds
3. Collect the results with {"execute": "query-dirty-rate"}
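
The same three steps can be scripted, for example with the qemu.qmp
Python package (the socket path below is hypothetical and depends on how
the VM was started):

import asyncio
from qemu.qmp import QMPClient

async def measure(sock='/tmp/qmp.sock'):    # hypothetical QMP socket
    client = QMPClient()
    await client.connect(sock)
    await client.execute('calc-dirty-rate', {'calc-time': 60})
    await asyncio.sleep(60)                 # let the measurement finish
    result = await client.execute('query-dirty-rate')
    await client.disconnect()
    return result

print(asyncio.run(measure()))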

Afterwards we attempted to migrate the VM with a randomly selected max
downtime and bandwidth limit. The typical prediction error is 6-7%, with
only 180 out of 5779 experiments failing badly, i.e. with a prediction
error >=25% or with migration success predicted when in fact the
migration did not converge.


Andrei Gudkov (2):
  migration/calc-dirty-rate: new metrics in sampling mode
  migration/calc-dirty-rate: tool to predict migration time

 MAINTAINERS                  |   1 +
 migration/dirtyrate.c        | 219 +++++++++++++++++++++------
 migration/dirtyrate.h        |  26 +++-
 qapi/migration.json          |  25 ++++
 scripts/predict_migration.py | 283 +++++++++++++++++++++++++++++++++++
 5 files changed, 502 insertions(+), 52 deletions(-)
 create mode 100644 scripts/predict_migration.py

-- 
2.30.2


