* [PATCH v2 00/20] backup performance: block_status + async
@ 2020-06-01 18:10 Vladimir Sementsov-Ogievskiy
  2020-06-01 18:10 ` [PATCH v2 01/20] block/block-copy: block_copy_dirty_clusters: fix failure check Vladimir Sementsov-Ogievskiy
                   ` (22 more replies)
  0 siblings, 23 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:10 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Hi all!

This is the last part of the original
"[RFC 00/24] backup performance: block_status + async"; the preparations
are already merged.

The series turns backup into a series of block_copy_async calls covering
the whole disk, so we get block-status based parallel async requests out
of the box, which gives a performance gain:

-----------------  ----------------  -------------  --------------------------  --------------------------  ----------------  -------------------------------
                   mirror(upstream)  backup(new)    backup(new, no-copy-range)  backup(new, copy-range-1w)  backup(upstream)  backup(upstream, no-copy-range)
hdd-ext4:hdd-ext4  18.86 +- 0.11     45.50 +- 2.35  19.22 +- 0.09               19.51 +- 0.09               22.85 +- 5.98     19.72 +- 0.35
hdd-ext4:ssd-ext4  8.99 +- 0.02      9.30 +- 0.01   8.97 +- 0.02                9.02 +- 0.02                9.68 +- 0.26      9.84 +- 0.12
ssd-ext4:hdd-ext4  9.09 +- 0.11      9.34 +- 0.10   9.34 +- 0.10                8.99 +- 0.01                11.37 +- 0.37     11.47 +- 0.30
ssd-ext4:ssd-ext4  4.07 +- 0.02      5.41 +- 0.05   4.05 +- 0.01                8.35 +- 0.58                9.83 +- 0.64      8.62 +- 0.35
hdd-xfs:hdd-xfs    18.90 +- 0.19     43.26 +- 2.47  19.62 +- 0.14               19.38 +- 0.16               19.55 +- 0.26     19.62 +- 0.12
hdd-xfs:ssd-xfs    8.93 +- 0.12      9.35 +- 0.03   8.93 +- 0.08                8.93 +- 0.05                9.79 +- 0.30      9.55 +- 0.15
ssd-xfs:hdd-xfs    9.15 +- 0.07      9.74 +- 0.28   9.29 +- 0.03                9.08 +- 0.05                10.85 +- 0.31     10.91 +- 0.30
ssd-xfs:ssd-xfs    4.06 +- 0.01      4.93 +- 0.02   4.04 +- 0.01                8.17 +- 0.42                9.52 +- 0.49      8.85 +- 0.46
ssd-ext4:nbd       9.96 +- 0.11      11.45 +- 0.15  11.45 +- 0.02               17.22 +- 0.06               34.45 +- 1.35     35.16 +- 0.37
nbd:ssd-ext4       9.84 +- 0.02      9.84 +- 0.04   9.80 +- 0.06                18.96 +- 0.06               30.89 +- 0.73     31.46 +- 0.21
-----------------  ----------------  -------------  --------------------------  --------------------------  ----------------  -------------------------------


The table shows that copy_range interacts badly with parallel async
requests. copy_range brings a real performance gain only on filesystems
that support it, like btrfs. But even on such filesystems, I'm not sure
that this is a good default behavior: if the copy is offloaded, so that
no real copying is done and the backup just links the same blocks as the
original, then further guest writes will lead to fragmentation of the
guest disk, whereas the aim of backup is to operate transparently for
the guest.

So, in addition to this series, I also suggest disabling copy_range by
default.

===

How to test:

prepare images:
In the directories where you want to place the source and target images,
prepare them with:

for img in test-source test-target; do
 ./qemu-img create -f raw $img 1000M;
 ./qemu-img bench -c 1000 -d 1 -f raw -s 1M -w --pattern=0xff $img
done

prepare a similar image for the NBD server, and start it somewhere with:

 qemu-nbd --persistent --nocache -f raw IMAGE

Then run the benchmark like this:
./bench-backup.py --qemu new:../../x86_64-softmmu/qemu-system-x86_64 upstream:/work/src/qemu/up-backup-block-copy-master/x86_64-softmmu/qemu-system-x86_64 --dir hdd-ext4:/test-a hdd-xfs:/test-b ssd-ext4:/ssd ssd-xfs:/ssd-b --test $(for fs in ext4 xfs; do echo hdd-$fs:hdd-$fs hdd-$fs:ssd-$fs ssd-$fs:hdd-$fs ssd-$fs:ssd-$fs; done) --nbd 192.168.100.2 --test ssd-ext4:nbd nbd:ssd-ext4

(you may simply reduce the number of directories/test cases; use --help
 for help)

===

Note that I included here
"[PATCH] block/block-copy: block_copy_dirty_clusters: fix failure check",
which was previously sent separately but is still untouched on the
mailing list. It may still be applied separately.

Vladimir Sementsov-Ogievskiy (20):
  block/block-copy: block_copy_dirty_clusters: fix failure check
  iotests: 129 don't check backup "busy"
  qapi: backup: add x-use-copy-range parameter
  block/block-copy: More explicit call_state
  block/block-copy: implement block_copy_async
  block/block-copy: add max_chunk and max_workers parameters
  block/block-copy: add ratelimit to block-copy
  block/block-copy: add block_copy_cancel
  blockjob: add set_speed to BlockJobDriver
  job: call job_enter from job_user_pause
  qapi: backup: add x-max-chunk and x-max-workers parameters
  iotests: 56: prepare for backup over block-copy
  iotests: 129: prepare for backup over block-copy
  iotests: 185: prepare for backup over block-copy
  iotests: 219: prepare for backup over block-copy
  iotests: 257: prepare for backup over block-copy
  backup: move to block-copy
  block/block-copy: drop unused argument of block_copy()
  simplebench: bench_block_job: add cmd_options argument
  simplebench: add bench-backup.py

 qapi/block-core.json                   |  11 +-
 block/backup-top.h                     |   1 +
 include/block/block-copy.h             |  45 +++-
 include/block/block_int.h              |   8 +
 include/block/blockjob_int.h           |   2 +
 block/backup-top.c                     |   6 +-
 block/backup.c                         | 170 ++++++++------
 block/block-copy.c                     | 183 ++++++++++++---
 block/replication.c                    |   1 +
 blockdev.c                             |  10 +
 blockjob.c                             |   6 +
 job.c                                  |   1 +
 scripts/simplebench/bench-backup.py    | 132 +++++++++++
 scripts/simplebench/bench-example.py   |   2 +-
 scripts/simplebench/bench_block_job.py |  13 +-
 tests/qemu-iotests/056                 |   8 +-
 tests/qemu-iotests/129                 |   3 +-
 tests/qemu-iotests/185                 |   3 +-
 tests/qemu-iotests/185.out             |   2 +-
 tests/qemu-iotests/219                 |  13 +-
 tests/qemu-iotests/257                 |   1 +
 tests/qemu-iotests/257.out             | 306 ++++++++++++-------------
 22 files changed, 640 insertions(+), 287 deletions(-)
 create mode 100755 scripts/simplebench/bench-backup.py

-- 
2.21.0




* [PATCH v2 01/20] block/block-copy: block_copy_dirty_clusters: fix failure check
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:10 ` Vladimir Sementsov-Ogievskiy
  2020-06-01 18:11 ` [PATCH v2 02/20] iotests: 129 don't check backup "busy" Vladimir Sementsov-Ogievskiy
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:10 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

ret may be > 0 on the success path at this point. Fix the assertion,
which may currently crash.

Fixes: 4ce5dd3e9b5ee0fac18625860eb3727399ee965e
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/block-copy.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index bb8d0569f2..f7428a7c08 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -622,8 +622,10 @@ out:
          * block_copy_task_run. If it fails, it means some task already failed
          * for real reason, let's return first failure.
          * Still, assert that we don't rewrite failure by success.
+         *
+         * Note: ret may be positive here because of block-status result.
          */
-        assert(ret == 0 || aio_task_pool_status(aio) < 0);
+        assert(ret >= 0 || aio_task_pool_status(aio) < 0);
         ret = aio_task_pool_status(aio);
 
         aio_task_pool_free(aio);
-- 
2.21.0




* [PATCH v2 02/20] iotests: 129 don't check backup "busy"
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
  2020-06-01 18:10 ` [PATCH v2 01/20] block/block-copy: block_copy_dirty_clusters: fix failure check Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-17 12:57   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 03/20] qapi: backup: add x-use-copy-range parameter Vladimir Sementsov-Ogievskiy
                   ` (20 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Busy is racy: the job has its "pause-points" when it's not busy. Drop
this check.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/129 | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index b0da4a5541..4db5eca441 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -66,7 +66,6 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
         result = self.vm.qmp("stop")
         self.assert_qmp(result, 'return', {})
         result = self.vm.qmp("query-block-jobs")
-        self.assert_qmp(result, 'return[0]/busy', True)
         self.assert_qmp(result, 'return[0]/ready', False)
 
     def test_drive_mirror(self):
-- 
2.21.0




* [PATCH v2 03/20] qapi: backup: add x-use-copy-range parameter
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
  2020-06-01 18:10 ` [PATCH v2 01/20] block/block-copy: block_copy_dirty_clusters: fix failure check Vladimir Sementsov-Ogievskiy
  2020-06-01 18:11 ` [PATCH v2 02/20] iotests: 129 don't check backup "busy" Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-17 13:15   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 04/20] block/block-copy: More explicit call_state Vladimir Sementsov-Ogievskiy
                   ` (19 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Add a parameter to enable/disable copy_range. Keep the current default
for now (enabled).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 qapi/block-core.json       | 4 +++-
 block/backup-top.h         | 1 +
 include/block/block-copy.h | 2 +-
 include/block/block_int.h  | 1 +
 block/backup-top.c         | 4 +++-
 block/backup.c             | 4 +++-
 block/block-copy.c         | 4 ++--
 block/replication.c        | 1 +
 blockdev.c                 | 5 +++++
 9 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index 6fbacddab2..0c7600e4ec 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1405,6 +1405,8 @@
 #                    above node specified by @drive. If this option is not given,
 #                    a node name is autogenerated. (Since: 4.2)
 #
+# @x-use-copy-range: use copy offloading if possible. Default true. (Since 5.1)
+#
 # Note: @on-source-error and @on-target-error only affect background
 #       I/O.  If an error occurs during a guest write request, the device's
 #       rerror/werror actions will be used.
@@ -1419,7 +1421,7 @@
             '*on-source-error': 'BlockdevOnError',
             '*on-target-error': 'BlockdevOnError',
             '*auto-finalize': 'bool', '*auto-dismiss': 'bool',
-            '*filter-node-name': 'str' } }
+            '*filter-node-name': 'str', '*x-use-copy-range': 'bool'  } }
 
 ##
 # @DriveBackup:
diff --git a/block/backup-top.h b/block/backup-top.h
index e5cabfa197..2d74a976d8 100644
--- a/block/backup-top.h
+++ b/block/backup-top.h
@@ -33,6 +33,7 @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
                                          BlockDriverState *target,
                                          const char *filter_node_name,
                                          uint64_t cluster_size,
+                                         bool use_copy_range,
                                          BdrvRequestFlags write_flags,
                                          BlockCopyState **bcs,
                                          Error **errp);
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index aac85e1488..6397505f30 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -22,7 +22,7 @@ typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
 typedef struct BlockCopyState BlockCopyState;
 
 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
-                                     int64_t cluster_size,
+                                     int64_t cluster_size, bool use_copy_range,
                                      BdrvRequestFlags write_flags,
                                      Error **errp);
 
diff --git a/include/block/block_int.h b/include/block/block_int.h
index 791de6a59c..93b9b3bdc0 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -1225,6 +1225,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                             BitmapSyncMode bitmap_mode,
                             bool compress,
                             const char *filter_node_name,
+                            bool use_copy_range,
                             BlockdevOnError on_source_error,
                             BlockdevOnError on_target_error,
                             int creation_flags,
diff --git a/block/backup-top.c b/block/backup-top.c
index af2f20f346..0a09544c76 100644
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -188,6 +188,7 @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
                                          BlockDriverState *target,
                                          const char *filter_node_name,
                                          uint64_t cluster_size,
+                                         bool use_copy_range,
                                          BdrvRequestFlags write_flags,
                                          BlockCopyState **bcs,
                                          Error **errp)
@@ -246,7 +247,8 @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
 
     state->cluster_size = cluster_size;
     state->bcs = block_copy_state_new(top->backing, state->target,
-                                      cluster_size, write_flags, &local_err);
+                                      cluster_size, use_copy_range,
+                                      write_flags, &local_err);
     if (local_err) {
         error_prepend(&local_err, "Cannot create block-copy-state: ");
         goto fail;
diff --git a/block/backup.c b/block/backup.c
index 4f13bb20a5..76847b4daf 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -334,6 +334,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                   BitmapSyncMode bitmap_mode,
                   bool compress,
                   const char *filter_node_name,
+                  bool use_copy_range,
                   BlockdevOnError on_source_error,
                   BlockdevOnError on_target_error,
                   int creation_flags,
@@ -440,7 +441,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                   (compress ? BDRV_REQ_WRITE_COMPRESSED : 0),
 
     backup_top = bdrv_backup_top_append(bs, target, filter_node_name,
-                                        cluster_size, write_flags, &bcs, errp);
+                                        cluster_size, use_copy_range,
+                                        write_flags, &bcs, errp);
     if (!backup_top) {
         goto error;
     }
diff --git a/block/block-copy.c b/block/block-copy.c
index f7428a7c08..43a018d190 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -215,7 +215,7 @@ static uint32_t block_copy_max_transfer(BdrvChild *source, BdrvChild *target)
 }
 
 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
-                                     int64_t cluster_size,
+                                     int64_t cluster_size, bool use_copy_range,
                                      BdrvRequestFlags write_flags, Error **errp)
 {
     BlockCopyState *s;
@@ -257,7 +257,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
          * We enable copy-range, but keep small copy_size, until first
          * successful copy_range (look at block_copy_do_copy).
          */
-        s->use_copy_range = true;
+        s->use_copy_range = use_copy_range;
         s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
     }
 
diff --git a/block/replication.c b/block/replication.c
index ccf7b78160..25987eab0f 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -563,6 +563,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
         s->backup_job = backup_job_create(
                                 NULL, s->secondary_disk->bs, s->hidden_disk->bs,
                                 0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
+                                true,
                                 BLOCKDEV_ON_ERROR_REPORT,
                                 BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL,
                                 backup_job_completed, bs, NULL, &local_err);
diff --git a/blockdev.c b/blockdev.c
index 72df193ca7..28145afe7d 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2813,10 +2813,15 @@ static BlockJob *do_backup_common(BackupCommon *backup,
         job_flags |= JOB_MANUAL_DISMISS;
     }
 
+    if (!backup->has_x_use_copy_range) {
+        backup->x_use_copy_range = true;
+    }
+
     job = backup_job_create(backup->job_id, bs, target_bs, backup->speed,
                             backup->sync, bmap, backup->bitmap_mode,
                             backup->compress,
                             backup->filter_node_name,
+                            backup->x_use_copy_range,
                             backup->on_source_error,
                             backup->on_target_error,
                             job_flags, NULL, NULL, txn, errp);
-- 
2.21.0




* [PATCH v2 04/20] block/block-copy: More explicit call_state
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 03/20] qapi: backup: add x-use-copy-range parameter Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-17 13:45   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 05/20] block/block-copy: implement block_copy_async Vladimir Sementsov-Ogievskiy
                   ` (18 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Refactor the common path to take a BlockCopyCallState pointer as a
parameter, to prepare it for use in asynchronous block-copy (at the very
least, we'll need to run block-copy in a coroutine, passing all the
parameters as one pointer).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/block-copy.c | 51 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 38 insertions(+), 13 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 43a018d190..75882a094c 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -30,7 +30,15 @@
 static coroutine_fn int block_copy_task_entry(AioTask *task);
 
 typedef struct BlockCopyCallState {
+    /* IN parameters */
+    BlockCopyState *s;
+    int64_t offset;
+    int64_t bytes;
+
+    /* State */
     bool failed;
+
+    /* OUT parameters */
     bool error_is_read;
 } BlockCopyCallState;
 
@@ -541,15 +549,17 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
  * Returns 1 if dirty clusters found and successfully copied, 0 if no dirty
  * clusters found and -errno on failure.
  */
-static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
-                                                  int64_t offset, int64_t bytes,
-                                                  bool *error_is_read)
+static int coroutine_fn
+block_copy_dirty_clusters(BlockCopyCallState *call_state)
 {
+    BlockCopyState *s = call_state->s;
+    int64_t offset = call_state->offset;
+    int64_t bytes = call_state->bytes;
+
     int ret = 0;
     bool found_dirty = false;
     int64_t end = offset + bytes;
     AioTaskPool *aio = NULL;
-    BlockCopyCallState call_state = {false, false};
 
     /*
      * block_copy() user is responsible for keeping source and target in same
@@ -565,7 +575,7 @@ static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
         BlockCopyTask *task;
         int64_t status_bytes;
 
-        task = block_copy_task_create(s, &call_state, offset, bytes);
+        task = block_copy_task_create(s, call_state, offset, bytes);
         if (!task) {
             /* No more dirty bits in the bitmap */
             trace_block_copy_skip_range(s, offset, bytes);
@@ -630,15 +640,12 @@ out:
 
         aio_task_pool_free(aio);
     }
-    if (error_is_read && ret < 0) {
-        *error_is_read = call_state.error_is_read;
-    }
 
     return ret < 0 ? ret : found_dirty;
 }
 
 /*
- * block_copy
+ * block_copy_common
  *
  * Copy requested region, accordingly to dirty bitmap.
  * Collaborate with parallel block_copy requests: if they succeed it will help
@@ -646,16 +653,16 @@ out:
  * it means that some I/O operation failed in context of _this_ block_copy call,
  * not some parallel operation.
  */
-int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
-                            bool *error_is_read)
+static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
 {
     int ret;
 
     do {
-        ret = block_copy_dirty_clusters(s, offset, bytes, error_is_read);
+        ret = block_copy_dirty_clusters(call_state);
 
         if (ret == 0) {
-            ret = block_copy_wait_one(s, offset, bytes);
+            ret = block_copy_wait_one(call_state->s, call_state->offset,
+                                      call_state->bytes);
         }
 
         /*
@@ -672,6 +679,24 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
     return ret;
 }
 
+int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
+                            bool *error_is_read)
+{
+    BlockCopyCallState call_state = {
+        .s = s,
+        .offset = start,
+        .bytes = bytes,
+    };
+
+    int ret = block_copy_common(&call_state);
+
+    if (error_is_read && ret < 0) {
+        *error_is_read = call_state.error_is_read;
+    }
+
+    return ret;
+}
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
 {
     return s->copy_bitmap;
-- 
2.21.0




* [PATCH v2 05/20] block/block-copy: implement block_copy_async
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 04/20] block/block-copy: More explicit call_state Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-17 14:00   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 06/20] block/block-copy: add max_chunk and max_workers parameters Vladimir Sementsov-Ogievskiy
                   ` (17 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

We'll need an async block-copy invocation to use directly in backup.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
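
[Not part of the patch, just an illustration for review: a minimal usage
sketch based only on the signatures added below. The caller-side names
(ExampleState, example_*) are invented; error handling and freeing of
the returned call state are omitted.]

#include "qemu/osdep.h"
#include "block/block-copy.h"

typedef struct ExampleState {
    BlockCopyCallState *bg_call;
    int bg_ret;
    bool bg_error_is_read;
    bool bg_done;
} ExampleState;

static void example_copy_done(int ret, bool error_is_read, void *opaque)
{
    /*
     * Note: @opaque here is currently the progress_opaque registered
     * via block_copy_set_progress_callback().
     */
    ExampleState *es = opaque;

    es->bg_ret = ret;
    es->bg_error_is_read = error_is_read;
    es->bg_done = true;
}

static void example_start(ExampleState *es, BlockCopyState *bcs,
                          int64_t total_bytes)
{
    /*
     * ratelimit, max_workers and max_chunk are still unused at this
     * point in the series; 64 and 0 match the defaults used later.
     */
    es->bg_call = block_copy_async(bcs, 0, total_bytes, false, 64, 0,
                                   example_copy_done);
    if (!es->bg_call) {
        /*
         * The copy finished (or failed) before block_copy_async()
         * returned; example_copy_done() has already run.
         */
    }
}
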
 include/block/block-copy.h | 13 +++++++++++++
 block/block-copy.c         | 40 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 6397505f30..ada0d99566 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -19,7 +19,10 @@
 #include "qemu/co-shared-resource.h"
 
 typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
+typedef void (*BlockCopyAsyncCallbackFunc)(int ret, bool error_is_read,
+                                           void *opaque);
 typedef struct BlockCopyState BlockCopyState;
+typedef struct BlockCopyCallState BlockCopyCallState;
 
 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
                                      int64_t cluster_size, bool use_copy_range,
@@ -41,6 +44,16 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
 int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
                             bool *error_is_read);
 
+/*
+ * Run block-copy in a coroutine, return state pointer. If finished early
+ * returns NULL (@cb is called anyway).
+ */
+BlockCopyCallState *block_copy_async(BlockCopyState *s,
+                                     int64_t offset, int64_t bytes,
+                                     bool ratelimit, int max_workers,
+                                     int64_t max_chunk,
+                                     BlockCopyAsyncCallbackFunc cb);
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
 void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
 
diff --git a/block/block-copy.c b/block/block-copy.c
index 75882a094c..a0477d90f3 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -34,9 +34,11 @@ typedef struct BlockCopyCallState {
     BlockCopyState *s;
     int64_t offset;
     int64_t bytes;
+    BlockCopyAsyncCallbackFunc cb;
 
     /* State */
     bool failed;
+    bool finished;
 
     /* OUT parameters */
     bool error_is_read;
@@ -676,6 +678,13 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
          */
     } while (ret > 0);
 
+    if (call_state->cb) {
+        call_state->cb(ret, call_state->error_is_read,
+                       call_state->s->progress_opaque);
+    }
+
+    call_state->finished = true;
+
     return ret;
 }
 
@@ -697,6 +706,37 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
     return ret;
 }
 
+static void coroutine_fn block_copy_async_co_entry(void *opaque)
+{
+    block_copy_common(opaque);
+}
+
+BlockCopyCallState *block_copy_async(BlockCopyState *s,
+                                     int64_t offset, int64_t bytes,
+                                     bool ratelimit, int max_workers,
+                                     int64_t max_chunk,
+                                     BlockCopyAsyncCallbackFunc cb)
+{
+    BlockCopyCallState *call_state = g_new(BlockCopyCallState, 1);
+    Coroutine *co = qemu_coroutine_create(block_copy_async_co_entry,
+                                          call_state);
+
+    *call_state = (BlockCopyCallState) {
+        .s = s,
+        .offset = offset,
+        .bytes = bytes,
+        .cb = cb,
+    };
+
+    qemu_coroutine_enter(co);
+
+    if (call_state->finished) {
+        g_free(call_state);
+        return NULL;
+    }
+
+    return call_state;
+}
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
 {
     return s->copy_bitmap;
-- 
2.21.0




* [PATCH v2 06/20] block/block-copy: add max_chunk and max_workers parameters
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (4 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 05/20] block/block-copy: implement block_copy_async Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-22  9:39   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 07/20] block/block-copy: add ratelimit to block-copy Vladimir Sementsov-Ogievskiy
                   ` (16 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

They will be used for backup.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/block-copy.h |  5 +++++
 block/block-copy.c         | 10 ++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index ada0d99566..600984c733 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -47,6 +47,11 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
 /*
  * Run block-copy in a coroutine, return state pointer. If finished early
  * returns NULL (@cb is called anyway).
+ *
+ * @max_workers means maximum of parallel coroutines to execute sub-requests,
+ * must be > 0.
+ *
+ * @max_chunk means maximum length for one IO operation. Zero means unlimited.
  */
 BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                      int64_t offset, int64_t bytes,
diff --git a/block/block-copy.c b/block/block-copy.c
index a0477d90f3..4114d1fd25 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -34,6 +34,8 @@ typedef struct BlockCopyCallState {
     BlockCopyState *s;
     int64_t offset;
     int64_t bytes;
+    int max_workers;
+    int64_t max_chunk;
     BlockCopyAsyncCallbackFunc cb;
 
     /* State */
@@ -144,10 +146,11 @@ static BlockCopyTask *block_copy_task_create(BlockCopyState *s,
                                              int64_t offset, int64_t bytes)
 {
     BlockCopyTask *task;
+    int64_t max_chunk = MIN_NON_ZERO(s->copy_size, call_state->max_chunk);
 
     if (!bdrv_dirty_bitmap_next_dirty_area(s->copy_bitmap,
                                            offset, offset + bytes,
-                                           s->copy_size, &offset, &bytes))
+                                           max_chunk, &offset, &bytes))
     {
         return NULL;
     }
@@ -616,7 +619,7 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
         bytes = end - offset;
 
         if (!aio && bytes) {
-            aio = aio_task_pool_new(BLOCK_COPY_MAX_WORKERS);
+            aio = aio_task_pool_new(call_state->max_workers);
         }
 
         ret = block_copy_task_run(aio, task);
@@ -695,6 +698,7 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
         .s = s,
         .offset = start,
         .bytes = bytes,
+        .max_workers = BLOCK_COPY_MAX_WORKERS,
     };
 
     int ret = block_copy_common(&call_state);
@@ -726,6 +730,8 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
         .offset = offset,
         .bytes = bytes,
         .cb = cb,
+        .max_workers = max_workers ?: BLOCK_COPY_MAX_WORKERS,
+        .max_chunk = max_chunk,
     };
 
     qemu_coroutine_enter(co);
-- 
2.21.0




* [PATCH v2 07/20] block/block-copy: add ratelimit to block-copy
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (5 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 06/20] block/block-copy: add max_chunk and max_workers parameters Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-22 11:05   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 08/20] block/block-copy: add block_copy_cancel Vladimir Sementsov-Ogievskiy
                   ` (15 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

We are going to use one async block-copy operation directly for the
backup job, so we need a rate limiter.

We want to maintain the current backup behavior: only background copying
is limited, and copy-before-write operations only participate in the
limit calculation. Therefore we need one rate limiter in the block-copy
state and a boolean flag in the block-copy call state for the actual
limitation.

Note that we can't just account each chunk in the limiter after a
successful copy: that would not save us from starting a lot of async
sub-requests which exceed the limit by far too much. Instead, let's use
the following scheme on sub-request creation:
1. If at the moment the limit is not exceeded, create the request and
account it immediately.
2. If at the moment the limit is already exceeded, don't create the
sub-request and handle the limit instead (by sleeping).
With this approach we'll never exceed the limit by more than one
sub-request (which pretty much matches the current backup behavior).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/block-copy.h |  8 +++++++
 block/block-copy.c         | 44 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 600984c733..d40e691123 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -59,6 +59,14 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                      int64_t max_chunk,
                                      BlockCopyAsyncCallbackFunc cb);
 
+/*
+ * Set speed limit for block-copy instance. All block-copy operations related to
+ * this BlockCopyState will participate in speed calculation, but only
+ * block_copy_async calls with @ratelimit=true will be actually limited.
+ */
+void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
+                          uint64_t speed);
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
 void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
 
diff --git a/block/block-copy.c b/block/block-copy.c
index 4114d1fd25..851d9c8aaf 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -26,6 +26,7 @@
 #define BLOCK_COPY_MAX_BUFFER (1 * MiB)
 #define BLOCK_COPY_MAX_MEM (128 * MiB)
 #define BLOCK_COPY_MAX_WORKERS 64
+#define BLOCK_COPY_SLICE_TIME 100000000ULL /* ns */
 
 static coroutine_fn int block_copy_task_entry(AioTask *task);
 
@@ -36,11 +37,13 @@ typedef struct BlockCopyCallState {
     int64_t bytes;
     int max_workers;
     int64_t max_chunk;
+    bool ratelimit;
     BlockCopyAsyncCallbackFunc cb;
 
     /* State */
     bool failed;
     bool finished;
+    QemuCoSleepState *sleep_state;
 
     /* OUT parameters */
     bool error_is_read;
@@ -103,6 +106,9 @@ typedef struct BlockCopyState {
     void *progress_opaque;
 
     SharedResource *mem;
+
+    uint64_t speed;
+    RateLimit rate_limit;
 } BlockCopyState;
 
 static BlockCopyTask *find_conflicting_task(BlockCopyState *s,
@@ -611,6 +617,21 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
         }
         task->zeroes = ret & BDRV_BLOCK_ZERO;
 
+        if (s->speed) {
+            if (call_state->ratelimit) {
+                uint64_t ns = ratelimit_calculate_delay(&s->rate_limit, 0);
+                if (ns > 0) {
+                    block_copy_task_end(task, -EAGAIN);
+                    g_free(task);
+                    qemu_co_sleep_ns_wakeable(QEMU_CLOCK_REALTIME, ns,
+                                              &call_state->sleep_state);
+                    continue;
+                }
+            }
+
+            ratelimit_calculate_delay(&s->rate_limit, task->bytes);
+        }
+
         trace_block_copy_process(s, task->offset);
 
         co_get_from_shres(s->mem, task->bytes);
@@ -649,6 +670,13 @@ out:
     return ret < 0 ? ret : found_dirty;
 }
 
+static void block_copy_kick(BlockCopyCallState *call_state)
+{
+    if (call_state->sleep_state) {
+        qemu_co_sleep_wake(call_state->sleep_state);
+    }
+}
+
 /*
  * block_copy_common
  *
@@ -729,6 +757,7 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
         .s = s,
         .offset = offset,
         .bytes = bytes,
+        .ratelimit = ratelimit,
         .cb = cb,
         .max_workers = max_workers ?: BLOCK_COPY_MAX_WORKERS,
         .max_chunk = max_chunk,
@@ -752,3 +781,18 @@ void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip)
 {
     s->skip_unallocated = skip;
 }
+
+void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
+                          uint64_t speed)
+{
+    uint64_t old_speed = s->speed;
+
+    s->speed = speed;
+    if (speed > 0) {
+        ratelimit_set_speed(&s->rate_limit, speed, BLOCK_COPY_SLICE_TIME);
+    }
+
+    if (call_state && old_speed && (speed > old_speed || speed == 0)) {
+        block_copy_kick(call_state);
+    }
+}
-- 
2.21.0




* [PATCH v2 08/20] block/block-copy: add block_copy_cancel
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (6 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 07/20] block/block-copy: add ratelimit to block-copy Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-22 11:28   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 09/20] blockjob: add set_speed to BlockJobDriver Vladimir Sementsov-Ogievskiy
                   ` (14 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Add a function to cancel a running async block-copy call. It will be
used in backup.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
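
[Not part of the patch, just an illustration for review: a sketch of the
pause/resume emulation mentioned in the new header comment. The bg_call
variable is hypothetical caller state; lifetime management of the call
state is omitted.]

/* The running background call, if any (hypothetical caller state). */
static BlockCopyCallState *bg_call;

/*
 * Must be called from coroutine context: block_copy_cancel() yields
 * until the cancelled call actually finishes.
 */
static void coroutine_fn example_pause(void)
{
    block_copy_cancel(bg_call);
    bg_call = NULL;
}

static void example_resume(BlockCopyState *bcs, int64_t offset,
                           int64_t bytes)
{
    /*
     * Dirty bits are left correct by the cancel, so re-running with the
     * same parameters continues from where the cancelled call stopped.
     * A NULL callback is allowed by block_copy_common().
     */
    bg_call = block_copy_async(bcs, offset, bytes, false, 64, 0, NULL);
}
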
 include/block/block-copy.h |  7 +++++++
 block/block-copy.c         | 22 +++++++++++++++++++---
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index d40e691123..370a194d3c 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -67,6 +67,13 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
 void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
                           uint64_t speed);
 
+/*
+ * Cancel running block-copy call.
+ * Cancel leaves block-copy state valid: dirty bits are correct and you may use
+ * cancel + <run block_copy with same parameters> to emulate pause/resume.
+ */
+void block_copy_cancel(BlockCopyCallState *call_state);
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
 void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
 
diff --git a/block/block-copy.c b/block/block-copy.c
index 851d9c8aaf..b551feb6c2 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -44,6 +44,8 @@ typedef struct BlockCopyCallState {
     bool failed;
     bool finished;
     QemuCoSleepState *sleep_state;
+    bool cancelled;
+    Coroutine *canceller;
 
     /* OUT parameters */
     bool error_is_read;
@@ -582,7 +584,7 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
     assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
     assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
 
-    while (bytes && aio_task_pool_status(aio) == 0) {
+    while (bytes && aio_task_pool_status(aio) == 0 && !call_state->cancelled) {
         BlockCopyTask *task;
         int64_t status_bytes;
 
@@ -693,7 +695,7 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
     do {
         ret = block_copy_dirty_clusters(call_state);
 
-        if (ret == 0) {
+        if (ret == 0 && !call_state->cancelled) {
             ret = block_copy_wait_one(call_state->s, call_state->offset,
                                       call_state->bytes);
         }
@@ -707,13 +709,18 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
          * 2. We have waited for some intersecting block-copy request
          *    It may have failed and produced new dirty bits.
          */
-    } while (ret > 0);
+    } while (ret > 0 && !call_state->cancelled);
 
     if (call_state->cb) {
         call_state->cb(ret, call_state->error_is_read,
                        call_state->s->progress_opaque);
     }
 
+    if (call_state->canceller) {
+        aio_co_wake(call_state->canceller);
+        call_state->canceller = NULL;
+    }
+
     call_state->finished = true;
 
     return ret;
@@ -772,6 +779,15 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
 
     return call_state;
 }
+
+void block_copy_cancel(BlockCopyCallState *call_state)
+{
+    call_state->cancelled = true;
+    call_state->canceller = qemu_coroutine_self();
+    block_copy_kick(call_state);
+    qemu_coroutine_yield();
+}
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
 {
     return s->copy_bitmap;
-- 
2.21.0




* [PATCH v2 09/20] blockjob: add set_speed to BlockJobDriver
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (7 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 08/20] block/block-copy: add block_copy_cancel Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-22 11:34   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 10/20] job: call job_enter from job_user_pause Vladimir Sementsov-Ogievskiy
                   ` (13 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

We are going to use an async block-copy call in backup, so we'll need
to pass the backup speed setting through to the block-copy call.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
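
[Not part of the patch, just an illustration for review: how a driver
might use the new hook once backup keeps a handle to its background
block-copy call. The BackupBlockJob layout and the bg_bcs_call field
are assumptions here, not code from this series.]

static void example_backup_set_speed(BlockJob *job, int64_t speed)
{
    BackupBlockJob *s = container_of(job, BackupBlockJob, common);

    /*
     * Forward the new limit to block-copy; only the background call
     * (started with ratelimit=true) is actually throttled.
     */
    block_copy_set_speed(s->bcs, s->bg_bcs_call, speed);
}

static const BlockJobDriver example_backup_driver = {
    /* .job_driver and the other callbacks omitted */
    .set_speed = example_backup_set_speed,
};
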
 include/block/blockjob_int.h | 2 ++
 blockjob.c                   | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index e2824a36a8..6633d83da2 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -52,6 +52,8 @@ struct BlockJobDriver {
      * besides job->blk to the new AioContext.
      */
     void (*attached_aio_context)(BlockJob *job, AioContext *new_context);
+
+    void (*set_speed)(BlockJob *job, int64_t speed);
 };
 
 /**
diff --git a/blockjob.c b/blockjob.c
index 470facfd47..6a0cd392e2 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -256,6 +256,7 @@ static bool job_timer_pending(Job *job)
 
 void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
 {
+    const BlockJobDriver *drv = block_job_driver(job);
     int64_t old_speed = job->speed;
 
     if (job_apply_verb(&job->job, JOB_VERB_SET_SPEED, errp)) {
@@ -270,6 +271,11 @@ void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
     ratelimit_set_speed(&job->limit, speed, BLOCK_JOB_SLICE_TIME);
 
     job->speed = speed;
+
+    if (drv->set_speed) {
+        drv->set_speed(job, speed);
+    }
+
     if (speed && speed <= old_speed) {
         return;
     }
-- 
2.21.0




* [PATCH v2 10/20] job: call job_enter from job_user_pause
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (8 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 09/20] blockjob: add set_speed to BlockJobDriver Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-22 11:49   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters Vladimir Sementsov-Ogievskiy
                   ` (12 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

If the main job coroutine called job_yield() (while some background
process is in progress), we should give it a chance to call
job_pause_point(). This will be used by backup when it is moved onto
async block-copy.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
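
[Not part of the patch, just an illustration for review: the kind of
main loop this change is meant for, once backup sits on top of an async
block-copy call. The finished-check helper is hypothetical; the real
backup conversion comes later in the series.]

static bool example_bg_call_finished(void);   /* hypothetical helper */

static int coroutine_fn example_backup_loop(Job *job)
{
    while (!example_bg_call_finished()) {
        job_yield(job);
        /*
         * Pause only takes effect at a pause point.  Without the
         * job_enter() added here, job_user_pause() would not wake the
         * coroutine out of job_yield(), so we would reach this point
         * only after the whole background call had completed.
         */
        job_pause_point(job);
    }

    return 0;
}
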
 job.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/job.c b/job.c
index 53be57a3a0..0a9510ece1 100644
--- a/job.c
+++ b/job.c
@@ -578,6 +578,7 @@ void job_user_pause(Job *job, Error **errp)
     }
     job->user_paused = true;
     job_pause(job);
+    job_enter(job);
 }
 
 bool job_user_paused(Job *job)
-- 
2.21.0




* [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (9 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 10/20] job: call job_enter from job_user_pause Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-06-02  8:19   ` Vladimir Sementsov-Ogievskiy
  2020-07-22 12:22   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 12/20] iotests: 56: prepare for backup over block-copy Vladimir Sementsov-Ogievskiy
                   ` (11 subsequent siblings)
  22 siblings, 2 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Add new parameters to configure future backup features. The patch
introduces neither aio backup requests (so we actually have only one
worker) nor requests larger than one cluster. Still, formally we satisfy
these maximums anyway, so add the parameters now, to facilitate the
further patch which will really change backup job behavior.

The options are added with x- prefixes, as their only use for now is in
some very conservative iotests which will be updated soon.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 qapi/block-core.json      |  9 ++++++++-
 include/block/block_int.h |  7 +++++++
 block/backup.c            | 21 +++++++++++++++++++++
 block/replication.c       |  2 +-
 blockdev.c                |  5 +++++
 5 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index 0c7600e4ec..f4abcde34e 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1407,6 +1407,12 @@
 #
 # @x-use-copy-range: use copy offloading if possible. Default true. (Since 5.1)
 #
+# @x-max-workers: maximum of parallel requests for static data backup. This
+#                 doesn't influence copy-before-write operations. (Since: 5.1)
+#
+# @x-max-chunk: maximum chunk length for static data backup. This doesn't
+#               influence copy-before-write operations. (Since: 5.1)
+#
 # Note: @on-source-error and @on-target-error only affect background
 #       I/O.  If an error occurs during a guest write request, the device's
 #       rerror/werror actions will be used.
@@ -1421,7 +1427,8 @@
             '*on-source-error': 'BlockdevOnError',
             '*on-target-error': 'BlockdevOnError',
             '*auto-finalize': 'bool', '*auto-dismiss': 'bool',
-            '*filter-node-name': 'str', '*x-use-copy-range': 'bool'  } }
+            '*filter-node-name': 'str', '*x-use-copy-range': 'bool',
+            '*x-max-workers': 'int', '*x-max-chunk': 'int64' } }
 
 ##
 # @DriveBackup:
diff --git a/include/block/block_int.h b/include/block/block_int.h
index 93b9b3bdc0..d93a170d37 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -1207,6 +1207,11 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
  * @sync_mode: What parts of the disk image should be copied to the destination.
  * @sync_bitmap: The dirty bitmap if sync_mode is 'bitmap' or 'incremental'
  * @bitmap_mode: The bitmap synchronization policy to use.
+ * @max_workers: The limit for parallel requests for main backup loop.
+ *               Must be >= 1.
+ * @max_chunk: The limit for one IO operation length in main backup loop.
+ *             Must be not less than job cluster size or zero. Zero means no
+ *             specific limit.
  * @on_source_error: The action to take upon error reading from the source.
  * @on_target_error: The action to take upon error writing to the target.
  * @creation_flags: Flags that control the behavior of the Job lifetime.
@@ -1226,6 +1231,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                             bool compress,
                             const char *filter_node_name,
                             bool use_copy_range,
+                            int max_workers,
+                            int64_t max_chunk,
                             BlockdevOnError on_source_error,
                             BlockdevOnError on_target_error,
                             int creation_flags,
diff --git a/block/backup.c b/block/backup.c
index 76847b4daf..ec2676abc2 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -46,6 +46,8 @@ typedef struct BackupBlockJob {
     uint64_t len;
     uint64_t bytes_read;
     int64_t cluster_size;
+    int max_workers;
+    int64_t max_chunk;
 
     BlockCopyState *bcs;
 } BackupBlockJob;
@@ -335,6 +337,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                   bool compress,
                   const char *filter_node_name,
                   bool use_copy_range,
+                  int max_workers,
+                  int64_t max_chunk,
                   BlockdevOnError on_source_error,
                   BlockdevOnError on_target_error,
                   int creation_flags,
@@ -355,6 +359,16 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     assert(sync_mode != MIRROR_SYNC_MODE_INCREMENTAL);
     assert(sync_bitmap || sync_mode != MIRROR_SYNC_MODE_BITMAP);
 
+    if (max_workers < 1) {
+        error_setg(errp, "At least one worker needed");
+        return NULL;
+    }
+
+    if (max_chunk < 0) {
+        error_setg(errp, "max-chunk is negative");
+        return NULL;
+    }
+
     if (bs == target) {
         error_setg(errp, "Source and target cannot be the same");
         return NULL;
@@ -422,6 +436,11 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     if (cluster_size < 0) {
         goto error;
     }
+    if (max_chunk && max_chunk < cluster_size) {
+        error_setg(errp, "Required max-chunk (%" PRIi64") is less than backup "
+                   "cluster size (%" PRIi64 ")", max_chunk, cluster_size);
+        return NULL;
+    }
 
     /*
      * If source is in backing chain of target assume that target is going to be
@@ -465,6 +484,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     job->bcs = bcs;
     job->cluster_size = cluster_size;
     job->len = len;
+    job->max_workers = max_workers;
+    job->max_chunk = max_chunk;
 
     block_copy_set_progress_callback(bcs, backup_progress_bytes_callback, job);
     block_copy_set_progress_meter(bcs, &job->common.job.progress);
diff --git a/block/replication.c b/block/replication.c
index 25987eab0f..a9ee82db80 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -563,7 +563,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
         s->backup_job = backup_job_create(
                                 NULL, s->secondary_disk->bs, s->hidden_disk->bs,
                                 0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
-                                true,
+                                true, 0, 0,
                                 BLOCKDEV_ON_ERROR_REPORT,
                                 BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL,
                                 backup_job_completed, bs, NULL, &local_err);
diff --git a/blockdev.c b/blockdev.c
index 28145afe7d..cf068d20fa 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2743,6 +2743,9 @@ static BlockJob *do_backup_common(BackupCommon *backup,
     if (!backup->has_compress) {
         backup->compress = false;
     }
+    if (!backup->has_x_max_workers) {
+        backup->x_max_workers = 64;
+    }
 
     if ((backup->sync == MIRROR_SYNC_MODE_BITMAP) ||
         (backup->sync == MIRROR_SYNC_MODE_INCREMENTAL)) {
@@ -2822,6 +2825,8 @@ static BlockJob *do_backup_common(BackupCommon *backup,
                             backup->compress,
                             backup->filter_node_name,
                             backup->x_use_copy_range,
+                            backup->x_max_workers,
+                            backup->x_max_chunk,
                             backup->on_source_error,
                             backup->on_target_error,
                             job_flags, NULL, NULL, txn, errp);
-- 
2.21.0




* [PATCH v2 12/20] iotests: 56: prepare for backup over block-copy
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (10 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-23  7:57   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 13/20] iotests: 129: " Vladimir Sementsov-Ogievskiy
                   ` (10 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

After introducing parallel async copy requests instead of the plain
cluster-by-cluster copying loop, we'll have to wait for the paused
status, as we need to wait for several parallel requests. So, let's
gently wait instead of just asserting that the job is already paused.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/056 | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
index f73fc74457..2ced356a43 100755
--- a/tests/qemu-iotests/056
+++ b/tests/qemu-iotests/056
@@ -306,8 +306,12 @@ class BackupTest(iotests.QMPTestCase):
         event = self.vm.event_wait(name="BLOCK_JOB_ERROR",
                                    match={'data': {'device': 'drive0'}})
         self.assertNotEqual(event, None)
-        # OK, job should be wedged
-        res = self.vm.qmp('query-block-jobs')
+        # OK, job should pause, but it can't do it immediately, as it can't
+        # cancel other parallel requests (which didn't fail)
+        while True:
+            res = self.vm.qmp('query-block-jobs')
+            if res['return'][0]['status'] == 'paused':
+                break
         self.assert_qmp(res, 'return[0]/status', 'paused')
         res = self.vm.qmp('block-job-dismiss', id='drive0')
         self.assert_qmp(res, 'error/desc',
-- 
2.21.0




* [PATCH v2 13/20] iotests: 129: prepare for backup over block-copy
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (11 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 12/20] iotests: 56: prepare for backup over block-copy Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-23  8:03   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 14/20] iotests: 185: " Vladimir Sementsov-Ogievskiy
                   ` (9 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

After introducing parallel async copy requests instead of the plain
cluster-by-cluster copying loop, the backup job may finish before the
final assertion in do_test_stop. Let's explicitly require a slow backup
by specifying the speed parameter: at a limit of 1024 bytes per second
the job cannot copy the whole image before do_test_stop runs its final
check.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/129 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index 4db5eca441..bca56b589d 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -76,7 +76,7 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
     def test_drive_backup(self):
         self.do_test_stop("drive-backup", device="drive0",
                           target=self.target_img,
-                          sync="full")
+                          sync="full", speed=1024)
 
     def test_block_commit(self):
         self.do_test_stop("block-commit", device="drive0")
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 14/20] iotests: 185: prepare for backup over block-copy
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (12 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 13/20] iotests: 129: " Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-23  8:19   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 15/20] iotests: 219: " Vladimir Sementsov-Ogievskiy
                   ` (8 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

The further change of moving backup to one block-copy call will make
the copy chunk size and the cluster size separate things. So, even with
a 64k-cluster qcow2 image, the default chunk would be 1M.
Test 185, however, assumes that with the speed limited to 64K, one
iteration results in offset=64K. That will change: the first iteration
would result in offset=1M regardless of speed.

So, let's explicitly specify what the test wants: set max-chunk to 64K,
so that one iteration is 64K. Note that we don't need to limit
max-workers, as the block-copy rate limiter handles the situation and
won't start new workers once the speed limit is reached.
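
A back-of-the-envelope sketch of why max-chunk must match the offset
the test expects (the numbers are the ones used by test 185; the
variable names are only for illustration):

    speed = 65536                 # rate limit in bytes per second
    default_chunk = 1024 * 1024   # block-copy default chunk after the rework
    max_chunk = 65536             # explicit x-max-chunk set here

    # Offset reported after the first iteration:
    #   with the default chunk: 1048576 (1M), independently of speed
    #   with x-max-chunk=64K:     65536, which is what 185 expects
    assert default_chunk // max_chunk == 16
    assert max_chunk // speed == 1   # one 64K chunk per second of budget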

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/185     | 3 ++-
 tests/qemu-iotests/185.out | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/185 b/tests/qemu-iotests/185
index fd5e6ebe11..6afb3fc82f 100755
--- a/tests/qemu-iotests/185
+++ b/tests/qemu-iotests/185
@@ -182,7 +182,8 @@ _send_qemu_cmd $h \
                       'target': '$TEST_IMG.copy',
                       'format': '$IMGFMT',
                       'sync': 'full',
-                      'speed': 65536 } }" \
+                      'speed': 65536,
+                      'x-max-chunk': 65536 } }" \
     "return"
 
 # If we don't sleep here 'quit' command races with disk I/O
diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
index ac5ab16bc8..5232647972 100644
--- a/tests/qemu-iotests/185.out
+++ b/tests/qemu-iotests/185.out
@@ -61,7 +61,7 @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 l
 
 { 'execute': 'qmp_capabilities' }
 {"return": {}}
-{ 'execute': 'drive-backup', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536 } }
+{ 'execute': 'drive-backup', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536, 'x-max-chunk': 65536 } }
 Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16 compression_type=zlib
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 15/20] iotests: 219: prepare for backup over block-copy
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (13 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 14/20] iotests: 185: " Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-23  8:35   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 16/20] iotests: 257: " Vladimir Sementsov-Ogievskiy
                   ` (7 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

The further change of moving backup to one block-copy call will make
the copy chunk size and the cluster size separate things. So, even with
a 64k-cluster qcow2 image, the default chunk would be 1M.
Test 219 depends on the chunk size, so specify it explicitly for backup
(via x-max-chunk), just as buf_size already does for mirror.
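
The speed/chunk ratio the test relies on, sketched with the values from
the hunk below (a plain arithmetic illustration, not part of the test):

    speed = 262144                 # bytes per second
    chunk = 65536                  # x-max-chunk (backup), buf_size (mirror)

    chunks_per_second = speed // chunk    # 4 pause points per second
    seconds_per_chunk = chunk / speed     # 0.25 s per iteration
    # With the 100 ms rate-limit slice time, 250 ms per iteration is slow
    # enough to hit pause points deterministically and fast enough to keep
    # the test quick.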

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/219 | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
index db272c5249..2bbed28f39 100755
--- a/tests/qemu-iotests/219
+++ b/tests/qemu-iotests/219
@@ -203,13 +203,13 @@ with iotests.FilePath('disk.img') as disk_path, \
     # but related to this also automatic state transitions like job
     # completion), but still get pause points often enough to avoid making this
     # test very slow, it's important to have the right ratio between speed and
-    # buf_size.
+    # copy-chunk-size.
     #
-    # For backup, buf_size is hard-coded to the source image cluster size (64k),
-    # so we'll pick the same for mirror. The slice time, i.e. the granularity
-    # of the rate limiting is 100ms. With a speed of 256k per second, we can
-    # get four pause points per second. This gives us 250ms per iteration,
-    # which should be enough to stay deterministic.
+    # Chose 64k copy-chunk-size both for mirror (by buf_size) and backup (by
+    # x-max-chunk). The slice time, i.e. the granularity of the rate limiting
+    # is 100ms. With a speed of 256k per second, we can get four pause points
+    # per second. This gives us 250ms per iteration, which should be enough to
+    # stay deterministic.
 
     test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
         'device': 'drive0-node',
@@ -226,6 +226,7 @@ with iotests.FilePath('disk.img') as disk_path, \
                 'target': copy_path,
                 'sync': 'full',
                 'speed': 262144,
+                'x-max-chunk': 65536,
                 'auto-finalize': auto_finalize,
                 'auto-dismiss': auto_dismiss,
             })
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 16/20] iotests: 257: prepare for backup over block-copy
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (14 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 15/20] iotests: 219: " Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-23  8:49   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 17/20] backup: move to block-copy Vladimir Sementsov-Ogievskiy
                   ` (6 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Iotest 257 dumps a lot of in-progress information about the backup job,
such as the offset and bitmap dirtiness. A further commit will move
backup to one block-copy call, which introduces async parallel requests
instead of plain cluster-by-cluster copying. To keep the output
deterministic, allow only one worker (only one copy request at a time)
for this test.
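
Roughly what the updated helper sends, as a sketch; it assumes the
iotests qmp() wrapper, which turns underscores in Python keyword
arguments into dashes on the wire (visible in the 257.out hunks below):

    def blockdev_backup_serialized(vm, device, target, sync, **kwargs):
        # x_max_workers=1 becomes "x-max-workers": 1 in the QMP command
        # and limits block-copy to a single in-flight request, so the
        # offsets and bitmap dirtiness dumped by the test progress in a
        # deterministic order.
        return vm.qmp('blockdev-backup',
                      device=device,
                      target=target,
                      sync=sync,
                      filter_node_name='backup-top',
                      x_max_workers=1,
                      **kwargs)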

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/257     |   1 +
 tests/qemu-iotests/257.out | 306 ++++++++++++++++++-------------------
 2 files changed, 154 insertions(+), 153 deletions(-)

diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
index 004a433b8b..732b2f0071 100755
--- a/tests/qemu-iotests/257
+++ b/tests/qemu-iotests/257
@@ -191,6 +191,7 @@ def blockdev_backup(vm, device, target, sync, **kwargs):
                         target=target,
                         sync=sync,
                         filter_node_name='backup-top',
+                        x_max_workers=1,
                         **kwargs)
     return result
 
diff --git a/tests/qemu-iotests/257.out b/tests/qemu-iotests/257.out
index 64dd460055..6997b56567 100644
--- a/tests/qemu-iotests/257.out
+++ b/tests/qemu-iotests/257.out
@@ -30,7 +30,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -78,7 +78,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -92,7 +92,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -205,7 +205,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -219,7 +219,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -290,7 +290,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -338,7 +338,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -354,7 +354,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -416,7 +416,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -430,7 +430,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -501,7 +501,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -549,7 +549,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -563,7 +563,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -676,7 +676,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -690,7 +690,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -761,7 +761,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -809,7 +809,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -823,7 +823,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -936,7 +936,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -950,7 +950,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1021,7 +1021,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1069,7 +1069,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1085,7 +1085,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -1147,7 +1147,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1161,7 +1161,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1232,7 +1232,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1280,7 +1280,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1294,7 +1294,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -1407,7 +1407,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1421,7 +1421,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1492,7 +1492,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1540,7 +1540,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1554,7 +1554,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -1667,7 +1667,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1681,7 +1681,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1752,7 +1752,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1800,7 +1800,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1816,7 +1816,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -1878,7 +1878,7 @@ expecting 13 dirty sectors; have 13. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1892,7 +1892,7 @@ expecting 13 dirty sectors; have 13. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1963,7 +1963,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2011,7 +2011,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2025,7 +2025,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -2138,7 +2138,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2152,7 +2152,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -2223,7 +2223,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2271,7 +2271,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2285,7 +2285,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -2398,7 +2398,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2412,7 +2412,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -2483,7 +2483,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2531,7 +2531,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2547,7 +2547,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -2609,7 +2609,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2623,7 +2623,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -2694,7 +2694,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2742,7 +2742,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2756,7 +2756,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -2869,7 +2869,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2883,7 +2883,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -2954,7 +2954,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3002,7 +3002,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3016,7 +3016,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -3129,7 +3129,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3143,7 +3143,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -3214,7 +3214,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3262,7 +3262,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3278,7 +3278,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -3340,7 +3340,7 @@ expecting 1014 dirty sectors; have 1014. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3354,7 +3354,7 @@ expecting 1014 dirty sectors; have 1014. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -3425,7 +3425,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3473,7 +3473,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3487,7 +3487,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -3600,7 +3600,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3614,7 +3614,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -3685,7 +3685,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3733,7 +3733,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3747,7 +3747,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -3860,7 +3860,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3874,7 +3874,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -3945,7 +3945,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3993,7 +3993,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4009,7 +4009,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -4071,7 +4071,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4085,7 +4085,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -4156,7 +4156,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4204,7 +4204,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4218,7 +4218,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -4331,7 +4331,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4345,7 +4345,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -4416,7 +4416,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4464,7 +4464,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4478,7 +4478,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -4591,7 +4591,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4605,7 +4605,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -4676,7 +4676,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4724,7 +4724,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4740,7 +4740,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -4802,7 +4802,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4816,7 +4816,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -4887,7 +4887,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4935,7 +4935,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4949,7 +4949,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 
 --- Write #2 ---
@@ -5062,7 +5062,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -5076,7 +5076,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-max-workers": 1}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -5139,155 +5139,155 @@ qemu_img compare "TEST_DIR/PID-img" "TEST_DIR/PID-fbackup2" ==> Identical, OK!
 
 -- Sync mode incremental tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
 
 -- Sync mode bitmap tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
 
 -- Sync mode full tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode 'never' has no meaningful effect when combined with sync mode 'full'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
 
 -- Sync mode top tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode 'never' has no meaningful effect when combined with sync mode 'top'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
 
 -- Sync mode none tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-max-workers": 1}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
 
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 17/20] backup: move to block-copy
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (15 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 16/20] iotests: 257: " Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-23  9:47   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 18/20] block/block-copy: drop unused argument of block_copy() Vladimir Sementsov-Ogievskiy
                   ` (5 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

This brings async request handling and block-status-driven chunk sizes
to the backup job out of the box, which improves backup performance.
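
The whole-device copy is driven through block_copy_async() plus a
completion callback, both declared in the block-copy.h hunk below. As an
illustrative sketch only (the callback body and the exact call site are
not shown in the hunks below, and the 0/len whole-range call, the stored
result fields and the coroutine wakeup are assumptions, not the patch
itself), the intended usage looks roughly like:

    /* Completion callback: record the result for the job coroutine. */
    static void coroutine_fn backup_block_copy_callback(int ret, bool error_is_read,
                                                        void *opaque)
    {
        BackupBlockJob *s = opaque;

        s->ret = ret;
        s->error_is_read = error_is_read;
        s->bcs_call = NULL;
        /* the real callback presumably also wakes the waiting job coroutine */
    }

    /* One async block-copy request covering the whole backup range. */
    s->bcs_call = block_copy_async(s->bcs, 0, s->len,
                                   true /* ratelimit */, s->max_workers,
                                   s->max_chunk,
                                   backup_block_copy_callback, s);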

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/block-copy.h |   9 +--
 block/backup.c             | 145 +++++++++++++++++++------------------
 block/block-copy.c         |  21 ++----
 3 files changed, 83 insertions(+), 92 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 370a194d3c..a3e11d3fa2 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -18,7 +18,6 @@
 #include "block/block.h"
 #include "qemu/co-shared-resource.h"
 
-typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
 typedef void (*BlockCopyAsyncCallbackFunc)(int ret, bool error_is_read,
                                            void *opaque);
 typedef struct BlockCopyState BlockCopyState;
@@ -29,11 +28,6 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
                                      BdrvRequestFlags write_flags,
                                      Error **errp);
 
-void block_copy_set_progress_callback(
-        BlockCopyState *s,
-        ProgressBytesCallbackFunc progress_bytes_callback,
-        void *progress_opaque);
-
 void block_copy_set_progress_meter(BlockCopyState *s, ProgressMeter *pm);
 
 void block_copy_state_free(BlockCopyState *s);
@@ -57,7 +51,8 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                      int64_t offset, int64_t bytes,
                                      bool ratelimit, int max_workers,
                                      int64_t max_chunk,
-                                     BlockCopyAsyncCallbackFunc cb);
+                                     BlockCopyAsyncCallbackFunc cb,
+                                     void *cb_opaque);
 
 /*
  * Set speed limit for block-copy instance. All block-copy operations related to
diff --git a/block/backup.c b/block/backup.c
index ec2676abc2..59c00f5293 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -44,42 +44,19 @@ typedef struct BackupBlockJob {
     BlockdevOnError on_source_error;
     BlockdevOnError on_target_error;
     uint64_t len;
-    uint64_t bytes_read;
     int64_t cluster_size;
     int max_workers;
     int64_t max_chunk;
 
     BlockCopyState *bcs;
+
+    BlockCopyCallState *bcs_call;
+    int ret;
+    bool error_is_read;
 } BackupBlockJob;
 
 static const BlockJobDriver backup_job_driver;
 
-static void backup_progress_bytes_callback(int64_t bytes, void *opaque)
-{
-    BackupBlockJob *s = opaque;
-
-    s->bytes_read += bytes;
-}
-
-static int coroutine_fn backup_do_cow(BackupBlockJob *job,
-                                      int64_t offset, uint64_t bytes,
-                                      bool *error_is_read)
-{
-    int ret = 0;
-    int64_t start, end; /* bytes */
-
-    start = QEMU_ALIGN_DOWN(offset, job->cluster_size);
-    end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
-
-    trace_backup_do_cow_enter(job, start, offset, bytes);
-
-    ret = block_copy(job->bcs, start, end - start, error_is_read);
-
-    trace_backup_do_cow_return(job, offset, bytes, ret);
-
-    return ret;
-}
-
 static void backup_cleanup_sync_bitmap(BackupBlockJob *job, int ret)
 {
     BdrvDirtyBitmap *bm;
@@ -159,54 +136,58 @@ static BlockErrorAction backup_error_action(BackupBlockJob *job,
     }
 }
 
-static bool coroutine_fn yield_and_check(BackupBlockJob *job)
+static void coroutine_fn backup_block_copy_callback(int ret, bool error_is_read,
+                                                    void *opaque)
 {
-    uint64_t delay_ns;
-
-    if (job_is_cancelled(&job->common.job)) {
-        return true;
-    }
-
-    /*
-     * We need to yield even for delay_ns = 0 so that bdrv_drain_all() can
-     * return. Without a yield, the VM would not reboot.
-     */
-    delay_ns = block_job_ratelimit_get_delay(&job->common, job->bytes_read);
-    job->bytes_read = 0;
-    job_sleep_ns(&job->common.job, delay_ns);
-
-    if (job_is_cancelled(&job->common.job)) {
-        return true;
-    }
+    BackupBlockJob *s = opaque;
 
-    return false;
+    s->bcs_call = NULL;
+    s->ret = ret;
+    s->error_is_read = error_is_read;
+    job_enter(&s->common.job);
 }
 
 static int coroutine_fn backup_loop(BackupBlockJob *job)
 {
-    bool error_is_read;
-    int64_t offset;
-    BdrvDirtyBitmapIter *bdbi;
-    int ret = 0;
+    while (true) { /* retry loop */
+        assert(!job->bcs_call);
+        job->bcs_call = block_copy_async(job->bcs, 0,
+                                         QEMU_ALIGN_UP(job->len,
+                                                       job->cluster_size),
+                                         true, job->max_workers, job->max_chunk,
+                                         backup_block_copy_callback, job);
 
-    bdbi = bdrv_dirty_iter_new(block_copy_dirty_bitmap(job->bcs));
-    while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
-        do {
-            if (yield_and_check(job)) {
-                goto out;
+        while (job->bcs_call && !job->common.job.cancelled) {
+            /* wait and handle pauses */
+
+            job_pause_point(&job->common.job);
+
+            if (job->bcs_call && !job->common.job.cancelled) {
+                job_yield(&job->common.job);
             }
-            ret = backup_do_cow(job, offset, job->cluster_size, &error_is_read);
-            if (ret < 0 && backup_error_action(job, error_is_read, -ret) ==
-                           BLOCK_ERROR_ACTION_REPORT)
-            {
-                goto out;
+        }
+
+        if (!job->bcs_call && job->ret == 0) {
+            /* Success */
+            return 0;
+        }
+
+        if (job->common.job.cancelled) {
+            if (job->bcs_call) {
+                block_copy_cancel(job->bcs_call);
             }
-        } while (ret < 0);
+            return 0;
+        }
+
+        if (!job->bcs_call && job->ret < 0 &&
+            (backup_error_action(job, job->error_is_read, -job->ret) ==
+             BLOCK_ERROR_ACTION_REPORT))
+        {
+            return job->ret;
+        }
     }
 
- out:
-    bdrv_dirty_iter_free(bdbi);
-    return ret;
+    g_assert_not_reached();
 }
 
 static void backup_init_bcs_bitmap(BackupBlockJob *job)
@@ -246,9 +227,14 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
         int64_t count;
 
         for (offset = 0; offset < s->len; ) {
-            if (yield_and_check(s)) {
-                ret = -ECANCELED;
-                goto out;
+            if (job_is_cancelled(job)) {
+                return -ECANCELED;
+            }
+
+            job_pause_point(job);
+
+            if (job_is_cancelled(job)) {
+                return -ECANCELED;
             }
 
             ret = block_copy_reset_unallocated(s->bcs, offset, &count);
@@ -281,6 +267,25 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
     return ret;
 }
 
+static void coroutine_fn backup_pause(Job *job)
+{
+    BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
+
+    if (s->bcs_call) {
+        block_copy_cancel(s->bcs_call);
+    }
+}
+
+static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
+{
+    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
+
+    if (s->bcs) {
+        /* In block_job_create we don't yet have bcs */
+        block_copy_set_speed(s->bcs, s->bcs_call, speed);
+    }
+}
+
 static const BlockJobDriver backup_job_driver = {
     .job_driver = {
         .instance_size          = sizeof(BackupBlockJob),
@@ -291,7 +296,9 @@ static const BlockJobDriver backup_job_driver = {
         .commit                 = backup_commit,
         .abort                  = backup_abort,
         .clean                  = backup_clean,
-    }
+        .pause                  = backup_pause,
+    },
+    .set_speed = backup_set_speed,
 };
 
 static int64_t backup_calculate_cluster_size(BlockDriverState *target,
@@ -487,8 +494,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     job->max_workers = max_workers;
     job->max_chunk = max_chunk;
 
-    block_copy_set_progress_callback(bcs, backup_progress_bytes_callback, job);
     block_copy_set_progress_meter(bcs, &job->common.job.progress);
+    block_copy_set_speed(bcs, NULL, speed);
 
     /* Required permissions are already taken by backup-top target */
     block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
diff --git a/block/block-copy.c b/block/block-copy.c
index b551feb6c2..6a9d891b63 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -39,6 +39,7 @@ typedef struct BlockCopyCallState {
     int64_t max_chunk;
     bool ratelimit;
     BlockCopyAsyncCallbackFunc cb;
+    void *cb_opaque;
 
     /* State */
     bool failed;
@@ -103,9 +104,6 @@ typedef struct BlockCopyState {
     bool skip_unallocated;
 
     ProgressMeter *progress;
-    /* progress_bytes_callback: called when some copying progress is done. */
-    ProgressBytesCallbackFunc progress_bytes_callback;
-    void *progress_opaque;
 
     SharedResource *mem;
 
@@ -287,15 +285,6 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
     return s;
 }
 
-void block_copy_set_progress_callback(
-        BlockCopyState *s,
-        ProgressBytesCallbackFunc progress_bytes_callback,
-        void *progress_opaque)
-{
-    s->progress_bytes_callback = progress_bytes_callback;
-    s->progress_opaque = progress_opaque;
-}
-
 void block_copy_set_progress_meter(BlockCopyState *s, ProgressMeter *pm)
 {
     s->progress = pm;
@@ -443,7 +432,6 @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
         t->call_state->error_is_read = error_is_read;
     } else {
         progress_work_done(t->s->progress, t->bytes);
-        t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
     }
     co_put_to_shres(t->s->mem, t->bytes);
     block_copy_task_end(t, ret);
@@ -712,8 +700,7 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
     } while (ret > 0 && !call_state->cancelled);
 
     if (call_state->cb) {
-        call_state->cb(ret, call_state->error_is_read,
-                       call_state->s->progress_opaque);
+        call_state->cb(ret, call_state->error_is_read, call_state->cb_opaque);
     }
 
     if (call_state->canceller) {
@@ -754,7 +741,8 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                      int64_t offset, int64_t bytes,
                                      bool ratelimit, int max_workers,
                                      int64_t max_chunk,
-                                     BlockCopyAsyncCallbackFunc cb)
+                                     BlockCopyAsyncCallbackFunc cb,
+                                     void *cb_opaque)
 {
     BlockCopyCallState *call_state = g_new(BlockCopyCallState, 1);
     Coroutine *co = qemu_coroutine_create(block_copy_async_co_entry,
@@ -766,6 +754,7 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
         .bytes = bytes,
         .ratelimit = ratelimit,
         .cb = cb,
+        .cb_opaque = cb_opaque,
         .max_workers = max_workers ?: BLOCK_COPY_MAX_WORKERS,
         .max_chunk = max_chunk,
     };
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 18/20] block/block-copy: drop unused argument of block_copy()
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (16 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 17/20] backup: move to block-copy Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-23 13:24   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 19/20] simplebench: bench_block_job: add cmd_options argument Vladimir Sementsov-Ogievskiy
                   ` (4 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/block-copy.h |  3 +--
 block/backup-top.c         |  2 +-
 block/block-copy.c         | 11 ++---------
 3 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index a3e11d3fa2..760dfc9eae 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -35,8 +35,7 @@ void block_copy_state_free(BlockCopyState *s);
 int64_t block_copy_reset_unallocated(BlockCopyState *s,
                                      int64_t offset, int64_t *count);
 
-int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
-                            bool *error_is_read);
+int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes);
 
 /*
  * Run block-copy in a coroutine, return state pointer. If finished early
diff --git a/block/backup-top.c b/block/backup-top.c
index 0a09544c76..f4230aded4 100644
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -61,7 +61,7 @@ static coroutine_fn int backup_top_cbw(BlockDriverState *bs, uint64_t offset,
     off = QEMU_ALIGN_DOWN(offset, s->cluster_size);
     end = QEMU_ALIGN_UP(offset + bytes, s->cluster_size);
 
-    return block_copy(s->bcs, off, end - off, NULL);
+    return block_copy(s->bcs, off, end - off);
 }
 
 static int coroutine_fn backup_top_co_pdiscard(BlockDriverState *bs,
diff --git a/block/block-copy.c b/block/block-copy.c
index 6a9d891b63..6cb721cc26 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -713,8 +713,7 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
     return ret;
 }
 
-int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
-                            bool *error_is_read)
+int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes)
 {
     BlockCopyCallState call_state = {
         .s = s,
@@ -723,13 +722,7 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
         .max_workers = BLOCK_COPY_MAX_WORKERS,
     };
 
-    int ret = block_copy_common(&call_state);
-
-    if (error_is_read && ret < 0) {
-        *error_is_read = call_state.error_is_read;
-    }
-
-    return ret;
+    return block_copy_common(&call_state);
 }
 
 static void coroutine_fn block_copy_async_co_entry(void *opaque)
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 19/20] simplebench: bench_block_job: add cmd_options argument
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (17 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 18/20] block/block-copy: drop unused argument of block_copy() Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-23 13:30   ` Max Reitz
  2020-06-01 18:11 ` [PATCH v2 20/20] simplebench: add bench-backup.py Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Add a cmd_options argument to allow passing additional block-job options.
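
As a usage sketch (not part of the patch; the QEMU binary and image paths
below are placeholders), extra blockdev-backup options can now be passed
through the new argument:

  from bench_block_job import bench_block_copy, drv_file

  # placeholder paths and option values, just to show the call shape
  result = bench_block_copy(
      '/path/to/qemu-system-x86_64', 'blockdev-backup',
      {'x-use-copy-range': False, 'x-max-workers': 1},
      drv_file('/test-dir/test-source'), drv_file('/test-dir/test-target'))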

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 scripts/simplebench/bench-example.py   |  2 +-
 scripts/simplebench/bench_block_job.py | 13 ++++++++-----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/scripts/simplebench/bench-example.py b/scripts/simplebench/bench-example.py
index c642a5b891..c3d17213e3 100644
--- a/scripts/simplebench/bench-example.py
+++ b/scripts/simplebench/bench-example.py
@@ -24,7 +24,7 @@ from bench_block_job import bench_block_copy, drv_file, drv_nbd
 
 def bench_func(env, case):
     """ Handle one "cell" of benchmarking table. """
-    return bench_block_copy(env['qemu_binary'], env['cmd'],
+    return bench_block_copy(env['qemu_binary'], env['cmd'], {},
                             case['source'], case['target'])
 
 
diff --git a/scripts/simplebench/bench_block_job.py b/scripts/simplebench/bench_block_job.py
index 9808d696cf..7332845c1c 100755
--- a/scripts/simplebench/bench_block_job.py
+++ b/scripts/simplebench/bench_block_job.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
 #
 # Benchmark block jobs
 #
@@ -78,16 +78,19 @@ def bench_block_job(cmd, cmd_args, qemu_args):
 
 
 # Bench backup or mirror
-def bench_block_copy(qemu_binary, cmd, source, target):
+def bench_block_copy(qemu_binary, cmd, cmd_options, source, target):
     """Helper to run bench_block_job() for mirror or backup"""
     assert cmd in ('blockdev-backup', 'blockdev-mirror')
 
     source['node-name'] = 'source'
     target['node-name'] = 'target'
 
-    return bench_block_job(cmd,
-                           {'job-id': 'job0', 'device': 'source',
-                            'target': 'target', 'sync': 'full'},
+    cmd_options['job-id'] = 'job0'
+    cmd_options['device'] = 'source'
+    cmd_options['target'] = 'target'
+    cmd_options['sync'] = 'full'
+
+    return bench_block_job(cmd, cmd_options,
                            [qemu_binary,
                             '-blockdev', json.dumps(source),
                             '-blockdev', json.dumps(target)])
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 20/20] simplebench: add bench-backup.py
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (18 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 19/20] simplebench: bench_block_job: add cmd_options argument Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:11 ` Vladimir Sementsov-Ogievskiy
  2020-07-23 13:47   ` Max Reitz
  2020-06-01 18:15 ` [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  22 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:11 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, qemu-devel, wencongyang2, xiechanglong.d,
	armbru, mreitz, den, jsnow

Add a script to benchmark the new backup architecture.
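
As a rough sketch of what the script builds (labels, paths and the binary
location are placeholders, not part of the patch), each benchmark cell is a
bench_func(env, case) call over structures like:

  from bench_block_job import drv_file

  # hypothetical env/case pair, mirroring what bench() constructs
  env = {'id': 'backup(new, no-copy-range)',
         'cmd': 'blockdev-backup',
         'cmd-options': {'x-use-copy-range': False},
         'qemu-binary': '/path/to/qemu-system-x86_64'}
  case = {'id': 'ssd-ext4:ssd-ext4',
          'source': drv_file('/ssd/test-source'),
          'target': drv_file('/ssd/test-target')}

simplebench.bench() then runs bench_func over all env/case combinations
(count=3 runs each) and the result is printed with simplebench.ascii().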

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 scripts/simplebench/bench-backup.py | 132 ++++++++++++++++++++++++++++
 1 file changed, 132 insertions(+)
 create mode 100755 scripts/simplebench/bench-backup.py

diff --git a/scripts/simplebench/bench-backup.py b/scripts/simplebench/bench-backup.py
new file mode 100755
index 0000000000..8930d23887
--- /dev/null
+++ b/scripts/simplebench/bench-backup.py
@@ -0,0 +1,132 @@
+#!/usr/bin/env python3
+#
+# Bench backup block-job
+#
+# Copyright (c) 2020 Virtuozzo International GmbH.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import argparse
+import simplebench
+from bench_block_job import bench_block_copy, drv_file, drv_nbd
+
+
+def bench_func(env, case):
+    """ Handle one "cell" of benchmarking table. """
+    cmd_options = env['cmd-options'] if 'cmd-options' in env else {}
+    return bench_block_copy(env['qemu-binary'], env['cmd'],
+                            cmd_options,
+                            case['source'], case['target'])
+
+
+def bench(args):
+    test_cases = []
+
+    sources = {}
+    targets = {}
+    for d in args.dir:
+        label, path = d.split(':')
+        sources[label] = drv_file(path + '/test-source')
+        targets[label] = drv_file(path + '/test-target')
+
+    if args.nbd:
+        nbd = args.nbd.split(':')
+        host = nbd[0]
+        port = '10809' if len(nbd) == 1 else nbd[1]
+        drv = drv_nbd(host, port)
+        sources['nbd'] = drv
+        targets['nbd'] = drv
+
+    for t in args.test:
+        src, dst = t.split(':')
+
+        test_cases.append({
+            'id': t,
+            'source': sources[src],
+            'target': targets[dst]
+        })
+
+    binaries = []
+    upstream = None
+    for i, q in enumerate(args.qemu):
+        name_path = q.split(':')
+        if len(name_path) == 1:
+            binaries.append((f'q{i}', name_path[0]))
+        else:
+            binaries.append((name_path[0], name_path[1]))
+            if name_path[0] == 'upstream' or name_path[0] == 'master':
+                upstream = binaries[-1]
+
+    test_envs = []
+    if upstream:
+        label, path = upstream
+        test_envs.append({
+                'id': f'mirror({label})',
+                'cmd': 'blockdev-mirror',
+                'qemu-binary': path
+            })
+
+    for label, path in binaries:
+        test_envs.append({
+            'id': f'backup({label})',
+            'cmd': 'blockdev-backup',
+            'qemu-binary': path
+        })
+        test_envs.append({
+            'id': f'backup({label}, no-copy-range)',
+            'cmd': 'blockdev-backup',
+            'cmd-options': {'x-use-copy-range': False},
+            'qemu-binary': path
+        })
+        if label == 'new':
+            test_envs.append({
+                'id': f'backup({label}, copy-range-1w)',
+                'cmd': 'blockdev-backup',
+                'cmd-options': {'x-use-copy-range': True,
+                                'x-max-workers': 1},
+                'qemu-binary': path
+            })
+
+    result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
+    print(simplebench.ascii(result))
+
+
+class ExtendAction(argparse.Action):
+    def __call__(self, parser, namespace, values, option_string=None):
+        items = getattr(namespace, self.dest) or []
+        items.extend(values)
+        setattr(namespace, self.dest, items)
+
+
+if __name__ == '__main__':
+    p = argparse.ArgumentParser('Backup benchmark')
+    p.add_argument('--qemu', nargs='+', help='Qemu binaries to compare, just '
+                   'file path with label, like label:/path/to/qemu. Qemu '
+                   'labeled "new" should support the x-max-workers argument '
+                   'for the backup job; the one labeled "upstream" is also '
+                   'used to run the mirror benchmark for comparison.',
+                   action=ExtendAction)
+    p.add_argument('--dir', nargs='+', help='Directories, each containing '
+                   '"test-source" and/or "test-target" files, raw images to '
+                   'used in benchmarking. File path with label, like '
+                   'be used in benchmarking. File path with label, like '
+    p.add_argument('--nbd', help='host:port for remote NBD '
+                   'image (or just host, for default port 10809). Use it in '
+                   'tests, label is "nbd" (but you cannot create test '
+                   'nbd:nbd).')
+    p.add_argument('--test', nargs='+', help='Tests, in form '
+                   'source-dir-label:target-dir-label', action=ExtendAction)
+
+    bench(p.parse_args())
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 00/20] backup performance: block_status + async
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (19 preceding siblings ...)
  2020-06-01 18:11 ` [PATCH v2 20/20] simplebench: add bench-backup.py Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:15 ` Vladimir Sementsov-Ogievskiy
  2020-06-01 18:59 ` no-reply
  2020-06-01 19:43 ` no-reply
  22 siblings, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-01 18:15 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, qemu-devel, wencongyang2, xiechanglong.d, armbru, mreitz,
	den, jsnow

01.06.2020 21:10, Vladimir Sementsov-Ogievskiy wrote:
> Hi all!
> 
> This a last part of original
> "[RFC 00/24] backup performance: block_status + async", prepartions are
> already merged.
> 
> The series turn backup into series of block_copy_async calls, covering
> the whole disk, so we get block-status based paralallel async requests
> out of the box, which gives performance gain:
> 
> -----------------  ----------------  -------------  --------------------------  --------------------------  ----------------  -------------------------------
>                     mirror(upstream)  backup(new)    backup(new, no-copy-range)  backup(new, copy-range-1w)  backup(upstream)  backup(upstream, no-copy-range)
> hdd-ext4:hdd-ext4  18.86 +- 0.11     45.50 +- 2.35  19.22 +- 0.09               19.51 +- 0.09               22.85 +- 5.98     19.72 +- 0.35
> hdd-ext4:ssd-ext4  8.99 +- 0.02      9.30 +- 0.01   8.97 +- 0.02                9.02 +- 0.02                9.68 +- 0.26      9.84 +- 0.12
> ssd-ext4:hdd-ext4  9.09 +- 0.11      9.34 +- 0.10   9.34 +- 0.10                8.99 +- 0.01                11.37 +- 0.37     11.47 +- 0.30
> ssd-ext4:ssd-ext4  4.07 +- 0.02      5.41 +- 0.05   4.05 +- 0.01                8.35 +- 0.58                9.83 +- 0.64      8.62 +- 0.35
> hdd-xfs:hdd-xfs    18.90 +- 0.19     43.26 +- 2.47  19.62 +- 0.14               19.38 +- 0.16               19.55 +- 0.26     19.62 +- 0.12
> hdd-xfs:ssd-xfs    8.93 +- 0.12      9.35 +- 0.03   8.93 +- 0.08                8.93 +- 0.05                9.79 +- 0.30      9.55 +- 0.15
> ssd-xfs:hdd-xfs    9.15 +- 0.07      9.74 +- 0.28   9.29 +- 0.03                9.08 +- 0.05                10.85 +- 0.31     10.91 +- 0.30
> ssd-xfs:ssd-xfs    4.06 +- 0.01      4.93 +- 0.02   4.04 +- 0.01                8.17 +- 0.42                9.52 +- 0.49      8.85 +- 0.46
> ssd-ext4:nbd       9.96 +- 0.11      11.45 +- 0.15  11.45 +- 0.02               17.22 +- 0.06               34.45 +- 1.35     35.16 +- 0.37
> nbd:ssd-ext4       9.84 +- 0.02      9.84 +- 0.04   9.80 +- 0.06                18.96 +- 0.06               30.89 +- 0.73     31.46 +- 0.21
> -----------------  ----------------  -------------  --------------------------  --------------------------  ----------------  -------------------------------

I should add that the nbd results may be skewed by the fact that the node running the nbd server is my desktop, which was used for other tasks in parallel. Still, I don't think it really hurt.

> 
> 
> The table shows, that copy_range is in bad relation with parallel async
> requests. copy_range brings real performance gain only on supporting fs,
> like btrfs. But even on such fs, I'm not sure that this is a good
> default behavior: if we do offload copy, so, that no real copy but just
> link block in backup the same blocks as in original, this means that
> further write from guest will lead to fragmentation of guest disk, when
> the aim of backup is to operate transparently for the guest.
> 
> So, in addition to these series I also suggest to disable copy_range by
> default.
> 
> ===
> 
> How to test:
> 
> prepare images:
> In a directories, where you want to place source and target images,
> prepare images by:
> 
> for img in test-source test-target; do
>   ./qemu-img create -f raw $img 1000M;
>   ./qemu-img bench -c 1000 -d 1 -f raw -s 1M -w --pattern=0xff $img
> done
> 
> prepare similar image for nbd server, and start it somewhere by
> 
>   qemu-nbd --persistent --nocache -f raw IMAGE
> 
> Then, run benchmark, like this:
> ./bench-backup.py --qemu new:../../x86_64-softmmu/qemu-system-x86_64 upstream:/work/src/qemu/up-backup-block-copy-master/x86_64-softmmu/qemu-system-x86_64 --dir hdd-ext4:/test-a hdd-xfs:/test-b ssd-ext4:/ssd ssd-xfs:/ssd-b --test $(for fs in ext4 xfs; do echo hdd-$fs:hdd-$fs hdd-$fs:ssd-$fs ssd-$fs:hdd-$fs ssd-$fs:ssd-$fs; done) --nbd 192.168.100.2 --test ssd-ext4:nbd nbd:ssd-ext4
> 
> (you may simply reduce number of directories/test-cases, use --help for
>   help)
> 
> ===
> 
> Note, that I included here
> "[PATCH] block/block-copy: block_copy_dirty_clusters: fix failure check"
> which was previously sent in separate, but still untouched in mailing
> list. It still may be applied separately.
> 
> Vladimir Sementsov-Ogievskiy (20):
>    block/block-copy: block_copy_dirty_clusters: fix failure check
>    iotests: 129 don't check backup "busy"
>    qapi: backup: add x-use-copy-range parameter
>    block/block-copy: More explicit call_state
>    block/block-copy: implement block_copy_async
>    block/block-copy: add max_chunk and max_workers parameters
>    block/block-copy: add ratelimit to block-copy
>    block/block-copy: add block_copy_cancel
>    blockjob: add set_speed to BlockJobDriver
>    job: call job_enter from job_user_pause
>    qapi: backup: add x-max-chunk and x-max-workers parameters
>    iotests: 56: prepare for backup over block-copy
>    iotests: 129: prepare for backup over block-copy
>    iotests: 185: prepare for backup over block-copy
>    iotests: 219: prepare for backup over block-copy
>    iotests: 257: prepare for backup over block-copy
>    backup: move to block-copy
>    block/block-copy: drop unused argument of block_copy()
>    simplebench: bench_block_job: add cmd_options argument
>    simplebench: add bench-backup.py
> 
>   qapi/block-core.json                   |  11 +-
>   block/backup-top.h                     |   1 +
>   include/block/block-copy.h             |  45 +++-
>   include/block/block_int.h              |   8 +
>   include/block/blockjob_int.h           |   2 +
>   block/backup-top.c                     |   6 +-
>   block/backup.c                         | 170 ++++++++------
>   block/block-copy.c                     | 183 ++++++++++++---
>   block/replication.c                    |   1 +
>   blockdev.c                             |  10 +
>   blockjob.c                             |   6 +
>   job.c                                  |   1 +
>   scripts/simplebench/bench-backup.py    | 132 +++++++++++
>   scripts/simplebench/bench-example.py   |   2 +-
>   scripts/simplebench/bench_block_job.py |  13 +-
>   tests/qemu-iotests/056                 |   8 +-
>   tests/qemu-iotests/129                 |   3 +-
>   tests/qemu-iotests/185                 |   3 +-
>   tests/qemu-iotests/185.out             |   2 +-
>   tests/qemu-iotests/219                 |  13 +-
>   tests/qemu-iotests/257                 |   1 +
>   tests/qemu-iotests/257.out             | 306 ++++++++++++-------------
>   22 files changed, 640 insertions(+), 287 deletions(-)
>   create mode 100755 scripts/simplebench/bench-backup.py
> 


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 00/20] backup performance: block_status + async
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (20 preceding siblings ...)
  2020-06-01 18:15 ` [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
@ 2020-06-01 18:59 ` no-reply
  2020-06-02  8:20   ` Vladimir Sementsov-Ogievskiy
  2020-06-01 19:43 ` no-reply
  22 siblings, 1 reply; 66+ messages in thread
From: no-reply @ 2020-06-01 18:59 UTC (permalink / raw)
  To: vsementsov
  Cc: kwolf, vsementsov, qemu-block, wencongyang2, xiechanglong.d,
	qemu-devel, armbru, den, mreitz, jsnow

Patchew URL: https://patchew.org/QEMU/20200601181118.579-1-vsementsov@virtuozzo.com/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

  TEST    check-unit: tests/test-logging
  TEST    check-unit: tests/test-replication
**
ERROR:/tmp/qemu-test/src/tests/test-replication.c:428:test_secondary_start: assertion failed: (!local_err)
ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/test-replication.c:428:test_secondary_start: assertion failed: (!local_err)
make: *** [check-unit] Error 1
make: *** Waiting for unfinished jobs....
  TEST    check-qtest-x86_64: tests/qtest/boot-order-test
  TEST    check-qtest-x86_64: tests/qtest/bios-tables-test
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=73ef29198bda41a1bce9aa3697266e4c', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-j0ydkht8/src/docker-src.2020-06-01-14.44.54.8388:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=73ef29198bda41a1bce9aa3697266e4c
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-j0ydkht8/src'
make: *** [docker-run-test-quick@centos7] Error 2

real    14m11.116s
user    0m9.306s


The full log is available at
http://patchew.org/logs/20200601181118.579-1-vsementsov@virtuozzo.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 00/20] backup performance: block_status + async
  2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (21 preceding siblings ...)
  2020-06-01 18:59 ` no-reply
@ 2020-06-01 19:43 ` no-reply
  22 siblings, 0 replies; 66+ messages in thread
From: no-reply @ 2020-06-01 19:43 UTC (permalink / raw)
  To: vsementsov
  Cc: kwolf, vsementsov, qemu-block, wencongyang2, xiechanglong.d,
	qemu-devel, armbru, den, mreitz, jsnow

Patchew URL: https://patchew.org/QEMU/20200601181118.579-1-vsementsov@virtuozzo.com/



Hi,

This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
export ARCH=x86_64
make docker-image-fedora V=1 NETWORK=1
time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
=== TEST SCRIPT END ===

PASS 1 fdc-test /x86_64/fdc/cmos
PASS 2 fdc-test /x86_64/fdc/no_media_on_start
PASS 3 fdc-test /x86_64/fdc/read_without_media
==8167==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 4 fdc-test /x86_64/fdc/media_change
PASS 5 fdc-test /x86_64/fdc/sense_interrupt
PASS 6 fdc-test /x86_64/fdc/relative_seek
---
PASS 32 test-opts-visitor /visitor/opts/range/beyond
PASS 33 test-opts-visitor /visitor/opts/dict/unvisited
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-coroutine -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-coroutine" 
==8242==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-coroutine /basic/no-dangling-access
PASS 2 test-coroutine /basic/lifecycle
PASS 3 test-coroutine /basic/yield
==8242==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffc47dee000; bottom 0x7f11b97d2000; size: 0x00ea8e61c000 (1007411118080)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 4 test-coroutine /basic/nesting
---
PASS 12 test-aio /aio/event/flush
PASS 13 test-aio /aio/event/wait/no-flush-cb
PASS 14 test-aio /aio/timer/schedule
==8257==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 15 test-aio /aio/coroutine/queue-chaining
PASS 16 test-aio /aio-gsource/flush
PASS 17 test-aio /aio-gsource/bh/schedule
---
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/ide-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="ide-test" 
PASS 28 test-aio /aio-gsource/timer/schedule
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-aio-multithread -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-aio-multithread" 
==8268==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-aio-multithread /aio/multi/lifecycle
==8265==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 ide-test /x86_64/ide/identify
PASS 2 test-aio-multithread /aio/multi/schedule
==8285==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 2 ide-test /x86_64/ide/flush
PASS 3 test-aio-multithread /aio/multi/mutex/contended
==8296==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 ide-test /x86_64/ide/bmdma/simple_rw
==8307==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 4 ide-test /x86_64/ide/bmdma/trim
==8313==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 4 test-aio-multithread /aio/multi/mutex/handoff
PASS 5 test-aio-multithread /aio/multi/mutex/mcs
PASS 6 test-aio-multithread /aio/multi/mutex/pthread
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-throttle -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-throttle" 
==8330==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-throttle /throttle/leak_bucket
PASS 2 test-throttle /throttle/compute_wait
PASS 3 test-throttle /throttle/init
---
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-thread-pool -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-thread-pool" 
PASS 1 test-thread-pool /thread-pool/submit
PASS 2 test-thread-pool /thread-pool/submit-aio
==8334==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 test-thread-pool /thread-pool/submit-co
PASS 4 test-thread-pool /thread-pool/submit-many
PASS 5 test-thread-pool /thread-pool/cancel
==8401==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 6 test-thread-pool /thread-pool/cancel-async
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-hbitmap -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-hbitmap" 
PASS 1 test-hbitmap /hbitmap/granularity
---
PASS 39 test-hbitmap /hbitmap/next_dirty_area/next_dirty_area_4
PASS 40 test-hbitmap /hbitmap/next_dirty_area/next_dirty_area_after_truncate
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-bdrv-drain -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-bdrv-drain" 
==8412==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-bdrv-drain /bdrv-drain/nested
PASS 2 test-bdrv-drain /bdrv-drain/multiparent
PASS 3 test-bdrv-drain /bdrv-drain/set_aio_context
---
PASS 41 test-bdrv-drain /bdrv-drain/bdrv_drop_intermediate/poll
PASS 42 test-bdrv-drain /bdrv-drain/replace_child/mid-drain
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-bdrv-graph-mod -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-bdrv-graph-mod" 
==8451==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-bdrv-graph-mod /bdrv-graph-mod/update-perm-tree
PASS 2 test-bdrv-graph-mod /bdrv-graph-mod/should-update-child
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-blockjob -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-blockjob" 
==8455==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-blockjob /blockjob/ids
PASS 2 test-blockjob /blockjob/cancel/created
PASS 3 test-blockjob /blockjob/cancel/running
---
PASS 7 test-blockjob /blockjob/cancel/pending
PASS 8 test-blockjob /blockjob/cancel/concluded
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-blockjob-txn -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-blockjob-txn" 
==8459==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-blockjob-txn /single/success
PASS 2 test-blockjob-txn /single/failure
PASS 3 test-blockjob-txn /single/cancel
---
PASS 6 test-blockjob-txn /pair/cancel
PASS 7 test-blockjob-txn /pair/fail-cancel-race
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-block-backend -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-block-backend" 
==8465==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-block-backend /block-backend/drain_aio_error
PASS 2 test-block-backend /block-backend/drain_all_aio_error
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-block-iothread -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-block-iothread" 
==8469==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-block-iothread /sync-op/pread
PASS 2 test-block-iothread /sync-op/pwrite
PASS 3 test-block-iothread /sync-op/load_vmstate
---
PASS 15 test-block-iothread /propagate/diamond
PASS 16 test-block-iothread /propagate/mirror
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-image-locking -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-image-locking" 
==8461==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==8492==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-image-locking /image-locking/basic
PASS 2 test-image-locking /image-locking/set-perm-abort
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-x86-cpuid -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-x86-cpuid" 
---
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-rcu-list -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-rcu-list" 
PASS 1 test-rcu-list /rcu/qlist/single-threaded
PASS 2 test-rcu-list /rcu/qlist/short-few
==8578==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 test-rcu-list /rcu/qlist/long-many
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-rcu-simpleq -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-rcu-simpleq" 
PASS 1 test-rcu-simpleq /rcu/qsimpleq/single-threaded
---
PASS 3 test-rcu-simpleq /rcu/qsimpleq/long-many
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-rcu-tailq -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-rcu-tailq" 
PASS 1 test-rcu-tailq /rcu/qtailq/single-threaded
==8623==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 2 test-rcu-tailq /rcu/qtailq/short-few
PASS 3 test-rcu-tailq /rcu/qtailq/long-many
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-rcu-slist -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-rcu-slist" 
PASS 1 test-rcu-slist /rcu/qslist/single-threaded
PASS 2 test-rcu-slist /rcu/qslist/short-few
==8689==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 test-rcu-slist /rcu/qslist/long-many
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-qdist -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-qdist" 
PASS 1 test-qdist /qdist/none
---
PASS 7 test-qdist /qdist/binning/expand
PASS 8 test-qdist /qdist/binning/shrink
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-qht -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-qht" 
==8702==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==8708==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-qht /qht/mode/default
PASS 2 test-qht /qht/mode/resize
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-qht-par -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-qht-par" 
---
PASS 15 test-crypto-secret /crypto/secret/crypt/missingiv
PASS 16 test-crypto-secret /crypto/secret/crypt/badiv
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-crypto-tlscredsx509 -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-crypto-tlscredsx509" 
==8781==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-crypto-tlscredsx509 /qcrypto/tlscredsx509/perfectserver
PASS 2 test-crypto-tlscredsx509 /qcrypto/tlscredsx509/perfectclient
PASS 3 test-crypto-tlscredsx509 /qcrypto/tlscredsx509/goodca1
---
PASS 33 test-crypto-tlscredsx509 /qcrypto/tlscredsx509/inactive2
PASS 34 test-crypto-tlscredsx509 /qcrypto/tlscredsx509/inactive3
PASS 5 ide-test /x86_64/ide/bmdma/various_prdts
==8795==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==8795==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7fffe8f24000; bottom 0x7fc54fbfe000; size: 0x003a99326000 (251678318592)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 6 ide-test /x86_64/ide/bmdma/no_busmaster
---
PASS 39 test-crypto-tlscredsx509 /qcrypto/tlscredsx509/missingclient
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-crypto-tlssession -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-crypto-tlssession" 
PASS 7 ide-test /x86_64/ide/flush/nodev
==8810==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 8 ide-test /x86_64/ide/flush/empty_drive
PASS 1 test-crypto-tlssession /qcrypto/tlssession/psk
==8815==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 2 test-crypto-tlssession /qcrypto/tlssession/basicca
PASS 3 test-crypto-tlssession /qcrypto/tlssession/differentca
PASS 9 ide-test /x86_64/ide/flush/retry_pci
PASS 4 test-crypto-tlssession /qcrypto/tlssession/altname1
==8821==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 10 ide-test /x86_64/ide/flush/retry_isa
PASS 5 test-crypto-tlssession /qcrypto/tlssession/altname2
==8827==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 6 test-crypto-tlssession /qcrypto/tlssession/altname3
PASS 11 ide-test /x86_64/ide/cdrom/pio
PASS 7 test-crypto-tlssession /qcrypto/tlssession/altname4
PASS 8 test-crypto-tlssession /qcrypto/tlssession/altname5
==8833==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 9 test-crypto-tlssession /qcrypto/tlssession/altname6
PASS 10 test-crypto-tlssession /qcrypto/tlssession/wildcard1
PASS 11 test-crypto-tlssession /qcrypto/tlssession/wildcard2
PASS 12 ide-test /x86_64/ide/cdrom/pio_large
PASS 12 test-crypto-tlssession /qcrypto/tlssession/wildcard3
==8839==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 13 ide-test /x86_64/ide/cdrom/dma
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/ahci-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="ahci-test" 
PASS 13 test-crypto-tlssession /qcrypto/tlssession/wildcard4
PASS 14 test-crypto-tlssession /qcrypto/tlssession/wildcard5
==8853==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 ahci-test /x86_64/ahci/sanity
==8859==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 2 ahci-test /x86_64/ahci/pci_spec
==8865==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 ahci-test /x86_64/ahci/pci_enable
PASS 15 test-crypto-tlssession /qcrypto/tlssession/wildcard6
==8871==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 16 test-crypto-tlssession /qcrypto/tlssession/cachain
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-qga -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-qga" 
PASS 4 ahci-test /x86_64/ahci/hba_spec
==8885==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-qga /qga/sync-delimited
PASS 2 test-qga /qga/sync
PASS 3 test-qga /qga/ping
---
PASS 16 test-qga /qga/invalid-args
PASS 17 test-qga /qga/fsfreeze-status
PASS 5 ahci-test /x86_64/ahci/hba_enable
==8894==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 6 ahci-test /x86_64/ahci/identify
PASS 18 test-qga /qga/blacklist
PASS 19 test-qga /qga/config
PASS 20 test-qga /qga/guest-exec
PASS 21 test-qga /qga/guest-exec-invalid
==8900==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 7 ahci-test /x86_64/ahci/max
==8918==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 22 test-qga /qga/guest-get-osinfo
PASS 23 test-qga /qga/guest-get-host-name
PASS 24 test-qga /qga/guest-get-timezone
---
PASS 1 test-util-sockets /util/socket/is-socket/bad
PASS 2 test-util-sockets /util/socket/is-socket/good
PASS 8 ahci-test /x86_64/ahci/reset
==8939==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==8939==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffcad268000; bottom 0x7fae285fe000; size: 0x004e84c6a000 (337235058688)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 9 ahci-test /x86_64/ahci/io/pio/lba28/simple/zero
==8947==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==8947==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffd27324000; bottom 0x7f631d1fe000; size: 0x009a0a126000 (661593939968)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 3 test-util-sockets /util/socket/unix-abstract/good
---
PASS 10 ahci-test /x86_64/ahci/io/pio/lba28/simple/low
PASS 1 test-authz-simple /authz/simple
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-authz-list -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-authz-list" 
==8958==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-authz-list /auth/list/complex
PASS 2 test-authz-list /auth/list/add-remove
PASS 3 test-authz-list /auth/list/default/deny
---
PASS 4 test-authz-listfile /auth/list/explicit/deny
PASS 5 test-authz-listfile /auth/list/explicit/allow
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-io-task -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-io-task" 
==8958==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffe1bf2e000; bottom 0x7f6d085fe000; size: 0x009113930000 (623098658816)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 1 test-io-task /crypto/task/complete
---
PASS 4 test-io-channel-file /io/channel/pipe/sync
PASS 5 test-io-channel-file /io/channel/pipe/async
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-io-channel-tls -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-io-channel-tls" 
==9010==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9010==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffc27a06000; bottom 0x7f1d9f5fe000; size: 0x00de88408000 (955768668160)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 12 ahci-test /x86_64/ahci/io/pio/lba28/double/zero
==9036==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9036==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7fff5fc1a000; bottom 0x7fe5227fe000; size: 0x001a3d41c000 (112696868864)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 13 ahci-test /x86_64/ahci/io/pio/lba28/double/low
---
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-io-channel-buffer -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-io-channel-buffer" 
PASS 1 test-io-channel-buffer /io/channel/buf
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-base64 -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-base64" 
==9042==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-base64 /util/base64/good
PASS 2 test-base64 /util/base64/embedded-nul
PASS 3 test-base64 /util/base64/not-nul-terminated
PASS 4 test-base64 /util/base64/invalid-chars
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-crypto-pbkdf -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-crypto-pbkdf" 
==9042==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffd21508000; bottom 0x7fbf6c7fe000; size: 0x003db4d0a000 (265026576384)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 1 test-crypto-pbkdf /crypto/pbkdf/rfc3962/sha1/iter1
---
PASS 3 test-crypto-afsplit /crypto/afsplit/sha256/big
PASS 4 test-crypto-afsplit /crypto/afsplit/sha1/1000
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-crypto-xts -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-crypto-xts" 
==9067==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 test-crypto-xts /crypto/xts/t-1-key-32-ptx-32/basic
PASS 2 test-crypto-xts /crypto/xts/t-1-key-32-ptx-32/split
PASS 3 test-crypto-xts /crypto/xts/t-1-key-32-ptx-32/unaligned
---
PASS 17 test-crypto-xts /crypto/xts/t-21-key-32-ptx-31/basic
PASS 18 test-crypto-xts /crypto/xts/t-21-key-32-ptx-31/unaligned
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-crypto-block -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-crypto-block" 
==9067==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffdbe6bb000; bottom 0x7f20b697c000; size: 0x00dd07d3f000 (949319102464)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 1 test-crypto-block /crypto/block/qcow
---
PASS 3 test-logging /logging/logfile_write_path
PASS 4 test-logging /logging/logfile_lock_path
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  tests/test-replication -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-replication" 
==9094==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 15 ahci-test /x86_64/ahci/io/pio/lba28/long/zero
PASS 1 test-replication /replication/primary/read
PASS 2 test-replication /replication/primary/write
==9098==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9098==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffd31d90000; bottom 0x7face31fe000; size: 0x00504eb92000 (344918138880)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 3 test-replication /replication/primary/start
---
PASS 5 test-replication /replication/primary/do_checkpoint
PASS 6 test-replication /replication/primary/get_error_all
PASS 16 ahci-test /x86_64/ahci/io/pio/lba28/long/low
==9104==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9104==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffe8ecdd000; bottom 0x7f499e37c000; size: 0x00b4f0961000 (777130479616)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 7 test-replication /replication/secondary/read
PASS 17 ahci-test /x86_64/ahci/io/pio/lba28/long/high
==9110==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 8 test-replication /replication/secondary/write
PASS 18 ahci-test /x86_64/ahci/io/pio/lba28/short/zero
**
ERROR:/tmp/qemu-test/src/tests/test-replication.c:428:test_secondary_start: assertion failed: (!local_err)
ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/test-replication.c:428:test_secondary_start: assertion failed: (!local_err)
==9116==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
make: *** [/tmp/qemu-test/src/tests/Makefile.include:642: check-unit] Error 1
make: *** Waiting for unfinished jobs....
PASS 19 ahci-test /x86_64/ahci/io/pio/lba28/short/low
==9122==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 20 ahci-test /x86_64/ahci/io/pio/lba28/short/high
==9128==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9128==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffc481e0000; bottom 0x7f7db47fe000; size: 0x007e939e2000 (543642492928)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 21 ahci-test /x86_64/ahci/io/pio/lba48/simple/zero
==9134==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9134==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffc5fea2000; bottom 0x7fa30d3fe000; size: 0x005952aa4000 (383638978560)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 22 ahci-test /x86_64/ahci/io/pio/lba48/simple/low
==9140==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9140==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffcc8774000; bottom 0x7fd05effe000; size: 0x002c69776000 (190747992064)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 23 ahci-test /x86_64/ahci/io/pio/lba48/simple/high
==9146==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9146==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7fffeb4ff000; bottom 0x7fdeabdfe000; size: 0x00213f701000 (142798229504)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 24 ahci-test /x86_64/ahci/io/pio/lba48/double/zero
==9152==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9152==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffee8121000; bottom 0x7f783bbfe000; size: 0x0086ac523000 (578416685056)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 25 ahci-test /x86_64/ahci/io/pio/lba48/double/low
==9158==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9158==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffd281e2000; bottom 0x7f9e0f1fe000; size: 0x005f18fe4000 (408441208832)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 26 ahci-test /x86_64/ahci/io/pio/lba48/double/high
==9164==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9164==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffe211b8000; bottom 0x7f52125fe000; size: 0x00ac0ebba000 (738981552128)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 27 ahci-test /x86_64/ahci/io/pio/lba48/long/zero
==9170==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9170==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffd10c20000; bottom 0x7fd20c9fe000; size: 0x002b04222000 (184752939008)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 28 ahci-test /x86_64/ahci/io/pio/lba48/long/low
==9176==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9176==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffc60c29000; bottom 0x7f1fbb3fe000; size: 0x00dca582b000 (947669610496)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 29 ahci-test /x86_64/ahci/io/pio/lba48/long/high
==9182==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 30 ahci-test /x86_64/ahci/io/pio/lba48/short/zero
==9188==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 31 ahci-test /x86_64/ahci/io/pio/lba48/short/low
==9194==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 32 ahci-test /x86_64/ahci/io/pio/lba48/short/high
==9200==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 33 ahci-test /x86_64/ahci/io/dma/lba28/fragmented
==9206==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 34 ahci-test /x86_64/ahci/io/dma/lba28/retry
==9212==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 35 ahci-test /x86_64/ahci/io/dma/lba28/simple/zero
==9218==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 36 ahci-test /x86_64/ahci/io/dma/lba28/simple/low
==9224==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 37 ahci-test /x86_64/ahci/io/dma/lba28/simple/high
==9230==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 38 ahci-test /x86_64/ahci/io/dma/lba28/double/zero
==9236==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 39 ahci-test /x86_64/ahci/io/dma/lba28/double/low
==9242==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 40 ahci-test /x86_64/ahci/io/dma/lba28/double/high
==9248==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9248==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7fffc74a2000; bottom 0x7f3efd1fd000; size: 0x00c0ca2a5000 (828025491456)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 41 ahci-test /x86_64/ahci/io/dma/lba28/long/zero
==9255==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9255==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffe05b4a000; bottom 0x7fa43bd23000; size: 0x0059c9e27000 (385639149568)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 42 ahci-test /x86_64/ahci/io/dma/lba28/long/low
==9262==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9262==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffe2266e000; bottom 0x7f81717fd000; size: 0x007cb0e71000 (535543877632)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 43 ahci-test /x86_64/ahci/io/dma/lba28/long/high
==9269==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 44 ahci-test /x86_64/ahci/io/dma/lba28/short/zero
==9275==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 45 ahci-test /x86_64/ahci/io/dma/lba28/short/low
==9281==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 46 ahci-test /x86_64/ahci/io/dma/lba28/short/high
==9287==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 47 ahci-test /x86_64/ahci/io/dma/lba48/simple/zero
==9293==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 48 ahci-test /x86_64/ahci/io/dma/lba48/simple/low
==9299==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 49 ahci-test /x86_64/ahci/io/dma/lba48/simple/high
==9305==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 50 ahci-test /x86_64/ahci/io/dma/lba48/double/zero
==9311==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 51 ahci-test /x86_64/ahci/io/dma/lba48/double/low
==9317==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 52 ahci-test /x86_64/ahci/io/dma/lba48/double/high
==9323==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9323==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffdf79c8000; bottom 0x7fa67edfd000; size: 0x005778bcb000 (375687786496)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 53 ahci-test /x86_64/ahci/io/dma/lba48/long/zero
==9330==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9330==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffdfe2ef000; bottom 0x7fa09e5fd000; size: 0x005d5fcf2000 (401039368192)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 54 ahci-test /x86_64/ahci/io/dma/lba48/long/low
==9337==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9337==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7ffc176af000; bottom 0x7f7dee5fd000; size: 0x007e290b2000 (541854474240)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 55 ahci-test /x86_64/ahci/io/dma/lba48/long/high
==9344==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 56 ahci-test /x86_64/ahci/io/dma/lba48/short/zero
==9350==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 57 ahci-test /x86_64/ahci/io/dma/lba48/short/low
==9356==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 58 ahci-test /x86_64/ahci/io/dma/lba48/short/high
==9362==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 59 ahci-test /x86_64/ahci/io/ncq/simple
==9368==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 60 ahci-test /x86_64/ahci/io/ncq/retry
==9374==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 61 ahci-test /x86_64/ahci/flush/simple
==9380==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 62 ahci-test /x86_64/ahci/flush/retry
==9386==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9392==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 63 ahci-test /x86_64/ahci/flush/migrate
==9400==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9406==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 64 ahci-test /x86_64/ahci/migrate/sanity
==9414==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9420==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 65 ahci-test /x86_64/ahci/migrate/dma/simple
==9428==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9434==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 66 ahci-test /x86_64/ahci/migrate/dma/halted
==9442==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9448==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 67 ahci-test /x86_64/ahci/migrate/ncq/simple
==9456==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9462==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 68 ahci-test /x86_64/ahci/migrate/ncq/halted
==9470==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 69 ahci-test /x86_64/ahci/cdrom/eject
==9475==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 70 ahci-test /x86_64/ahci/cdrom/dma/single
==9481==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 71 ahci-test /x86_64/ahci/cdrom/dma/multi
==9487==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 72 ahci-test /x86_64/ahci/cdrom/pio/single
==9493==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9493==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7fff53807000; bottom 0x7f4b91154000; size: 0x00b3c26b3000 (772060950528)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
PASS 73 ahci-test /x86_64/ahci/cdrom/pio/multi
==9499==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 74 ahci-test /x86_64/ahci/cdrom/pio/bcl
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/hd-geo-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="hd-geo-test" 
PASS 1 hd-geo-test /x86_64/hd-geo/ide/none
==9513==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 2 hd-geo-test /x86_64/hd-geo/ide/drive/cd_0
==9519==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 hd-geo-test /x86_64/hd-geo/ide/drive/mbr/blank
==9525==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 4 hd-geo-test /x86_64/hd-geo/ide/drive/mbr/lba
==9531==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 5 hd-geo-test /x86_64/hd-geo/ide/drive/mbr/chs
==9537==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 6 hd-geo-test /x86_64/hd-geo/ide/device/mbr/blank
==9543==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 7 hd-geo-test /x86_64/hd-geo/ide/device/mbr/lba
==9549==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 8 hd-geo-test /x86_64/hd-geo/ide/device/mbr/chs
==9555==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 9 hd-geo-test /x86_64/hd-geo/ide/device/user/chs
==9560==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 10 hd-geo-test /x86_64/hd-geo/ide/device/user/chst
==9566==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9570==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9574==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9578==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9582==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9586==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9590==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9594==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9597==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 11 hd-geo-test /x86_64/hd-geo/override/ide
==9604==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9608==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9612==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9616==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9620==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9624==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9628==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9632==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9635==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 12 hd-geo-test /x86_64/hd-geo/override/scsi
==9642==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9646==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9650==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9654==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9658==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9662==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9666==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9670==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9673==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 13 hd-geo-test /x86_64/hd-geo/override/scsi_2_controllers
==9680==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9684==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9688==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9692==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9695==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 14 hd-geo-test /x86_64/hd-geo/override/virtio_blk
==9702==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9706==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9709==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 15 hd-geo-test /x86_64/hd-geo/override/zero_chs
==9716==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9720==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9724==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9728==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9731==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 16 hd-geo-test /x86_64/hd-geo/override/scsi_hot_unplug
==9738==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9742==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9746==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9750==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
==9753==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 17 hd-geo-test /x86_64/hd-geo/override/virtio_hot_unplug
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/boot-order-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="boot-order-test" 
PASS 1 boot-order-test /x86_64/boot-order/pc
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9822==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/pc/FACP'
Using expected file 'tests/data/acpi/pc/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9828==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/q35/FACP'
Using expected file 'tests/data/acpi/q35/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9834==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/pc/FACP.bridge'
Looking for expected file 'tests/data/acpi/pc/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9840==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/pc/FACP.ipmikcs'
Looking for expected file 'tests/data/acpi/pc/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9846==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/pc/FACP.cphp'
Looking for expected file 'tests/data/acpi/pc/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9853==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/pc/FACP.memhp'
Looking for expected file 'tests/data/acpi/pc/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9859==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/pc/FACP.numamem'
Looking for expected file 'tests/data/acpi/pc/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9865==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/pc/FACP.dimmpxm'
Looking for expected file 'tests/data/acpi/pc/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9874==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/pc/FACP.acpihmat'
Looking for expected file 'tests/data/acpi/pc/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9881==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/q35/FACP.bridge'
Looking for expected file 'tests/data/acpi/q35/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9887==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/q35/FACP.mmio64'
Looking for expected file 'tests/data/acpi/q35/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9893==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/q35/FACP.ipmibt'
Looking for expected file 'tests/data/acpi/q35/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9899==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/q35/FACP.cphp'
Looking for expected file 'tests/data/acpi/q35/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9906==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/q35/FACP.memhp'
Looking for expected file 'tests/data/acpi/q35/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9912==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/q35/FACP.numamem'
Looking for expected file 'tests/data/acpi/q35/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9918==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/q35/FACP.dimmpxm'
Looking for expected file 'tests/data/acpi/q35/FACP'
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==9927==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!

Looking for expected file 'tests/data/acpi/q35/FACP.acpihmat'
Looking for expected file 'tests/data/acpi/q35/FACP'
---
PASS 1 i440fx-test /x86_64/i440fx/defaults
PASS 2 i440fx-test /x86_64/i440fx/pam
PASS 3 i440fx-test /x86_64/i440fx/firmware/bios
==10019==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 4 i440fx-test /x86_64/i440fx/firmware/pflash
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/fw_cfg-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="fw_cfg-test" 
PASS 1 fw_cfg-test /x86_64/fw_cfg/signature
---
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/drive_del-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="drive_del-test" 
PASS 1 drive_del-test /x86_64/drive_del/without-dev
PASS 2 drive_del-test /x86_64/drive_del/after_failed_device_add
==10112==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 drive_del-test /x86_64/blockdev/drive_del_device_del
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/wdt_ib700-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="wdt_ib700-test" 
PASS 1 wdt_ib700-test /x86_64/wdt_ib700/pause
---
PASS 1 usb-hcd-uhci-test /x86_64/uhci/pci/init
PASS 2 usb-hcd-uhci-test /x86_64/uhci/pci/port1
PASS 3 usb-hcd-uhci-test /x86_64/uhci/pci/hotplug
==10307==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 4 usb-hcd-uhci-test /x86_64/uhci/pci/hotplug/usb-storage
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/usb-hcd-ehci-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="usb-hcd-ehci-test" 
PASS 1 usb-hcd-ehci-test /x86_64/ehci/pci/uhci-port-1
---
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/usb-hcd-xhci-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="usb-hcd-xhci-test" 
PASS 1 usb-hcd-xhci-test /x86_64/xhci/pci/init
PASS 2 usb-hcd-xhci-test /x86_64/xhci/pci/hotplug
==10325==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 usb-hcd-xhci-test /x86_64/xhci/pci/hotplug/usb-uas
PASS 4 usb-hcd-xhci-test /x86_64/xhci/pci/hotplug/usb-ccid
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/cpu-plug-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="cpu-plug-test" 
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10461==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 1 vmgenid-test /x86_64/vmgenid/vmgenid/set-guid
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10467==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 2 vmgenid-test /x86_64/vmgenid/vmgenid/set-guid-auto
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10473==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 vmgenid-test /x86_64/vmgenid/vmgenid/query-monitor
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/tpm-crb-swtpm-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="tpm-crb-swtpm-test" 
SKIP 1 tpm-crb-swtpm-test /x86_64/tpm/crb-swtpm/test # SKIP swtpm not in PATH or missing --tpm2 support
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10572==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10578==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 3 migration-test /x86_64/migration/fd_proto
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10585==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10591==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 4 migration-test /x86_64/migration/validate_uuid
PASS 5 migration-test /x86_64/migration/validate_uuid_error
PASS 6 migration-test /x86_64/migration/validate_uuid_src_not_set
---
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10641==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10647==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 8 migration-test /x86_64/migration/auto_converge
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10655==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10661==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 9 migration-test /x86_64/migration/postcopy/unix
PASS 10 migration-test /x86_64/migration/postcopy/recovery
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10690==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10696==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 11 migration-test /x86_64/migration/precopy/unix
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10704==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10710==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 12 migration-test /x86_64/migration/precopy/tcp
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10718==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10724==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 13 migration-test /x86_64/migration/xbzrle/unix
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10732==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10738==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 14 migration-test /x86_64/migration/multifd/tcp/none
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10856==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 15 migration-test /x86_64/migration/multifd/tcp/cancel
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10912==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10918==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 16 migration-test /x86_64/migration/multifd/tcp/zlib
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10974==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory
qemu-system-x86_64: falling back to tcg
==10980==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 17 migration-test /x86_64/migration/multifd/tcp/zstd
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/test-x86-cpuid-compat -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="test-x86-cpuid-compat" 
PASS 1 test-x86-cpuid-compat /x86/cpuid/parsing-plus-minus
---
PASS 1 machine-none-test /x86_64/machine/none/cpu_option
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}  QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 QTEST_QEMU_IMG=qemu-img tests/qtest/qmp-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl --test-name="qmp-test" 
PASS 1 qmp-test /x86_64/qmp/protocol
==11419==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 2 qmp-test /x86_64/qmp/oob
PASS 3 qmp-test /x86_64/qmp/preconfig
PASS 4 qmp-test /x86_64/qmp/missing-any-arg
---
PASS 16 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/i82562/pci-device/pci-device-tests/nop
PASS 17 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/i82801/pci-device/pci-device-tests/nop
PASS 18 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/ES1370/pci-device/pci-device-tests/nop
==11837==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 19 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/megasas/pci-device/pci-device-tests/nop
PASS 20 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/megasas/megasas-tests/dcmd/pd-get-info/fuzz
PASS 21 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/ne2k_pci/pci-device/pci-device-tests/nop
PASS 22 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/tulip/pci-device/pci-device-tests/nop
PASS 23 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/tulip/tulip-tests/tulip_large_tx
PASS 24 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/nvme/pci-device/pci-device-tests/nop
==11852==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 25 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/nvme/nvme-tests/oob-cmb-access
==11857==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 26 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/pcnet/pci-device/pci-device-tests/nop
PASS 27 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/pci-ohci/pci-device/pci-device-tests/nop
PASS 28 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/pci-ohci/pci-ohci-tests/ohci_pci-test-hotplug
---
PASS 35 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/sdhci-pci/sdhci/sdhci-tests/registers
PASS 36 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/tpci200/ipack/ipoctal232/ipoctal232-tests/nop
PASS 37 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/tpci200/pci-device/pci-device-tests/nop
==11917==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 38 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-9p-pci/pci-device/pci-device-tests/nop
PASS 39 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-9p-pci/virtio/virtio-tests/nop
PASS 40 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-9p-pci/virtio-9p/virtio-9p-tests/config
---
PASS 50 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-9p-pci/virtio-9p/virtio-9p-tests/fs/readdir/basic
PASS 51 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-balloon-pci/pci-device/pci-device-tests/nop
PASS 52 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-balloon-pci/virtio/virtio-tests/nop
==11928==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 53 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-blk-pci/virtio-blk-pci-tests/msix
==11934==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 54 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-blk-pci/virtio-blk-pci-tests/idx
==11940==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 55 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-blk-pci/virtio-blk-pci-tests/nxvirtq
==11946==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 56 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-blk-pci/virtio-blk-pci-tests/hotplug
==11952==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 57 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-blk-pci/virtio-blk/virtio-blk-tests/indirect
==11958==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 58 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-blk-pci/virtio-blk/virtio-blk-tests/config
==11964==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 59 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-blk-pci/virtio-blk/virtio-blk-tests/basic
==11970==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 60 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-blk-pci/virtio-blk/virtio-blk-tests/resize
PASS 61 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-net/virtio-net-tests/basic
PASS 62 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-net/virtio-net-tests/rx_stop_cont
---
PASS 70 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-rng-pci/virtio-rng-pci-tests/hotplug
PASS 71 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-rng-pci/pci-device/pci-device-tests/nop
PASS 72 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-rng-pci/virtio/virtio-tests/nop
==12062==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 73 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-scsi-pci/virtio-scsi-pci-tests/iothread-attach-node
==12073==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 74 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-scsi-pci/pci-device/pci-device-tests/nop
==12078==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 75 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-scsi-pci/virtio-scsi/virtio-scsi-tests/hotplug
==12083==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
PASS 76 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-scsi-pci/virtio-scsi/virtio-scsi-tests/unaligned-write-same
PASS 77 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-serial-pci/pci-device/pci-device-tests/nop
PASS 78 qos-test /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-serial-pci/virtio/virtio-tests/nop
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=f7e7d249fa3d477f8f5f54bfb4705e3b', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=x86_64-softmmu', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-kde08xkh/src/docker-src.2020-06-01-15.00.26.19416:/var/tmp/qemu:z,ro', 'qemu:fedora', '/var/tmp/qemu/run', 'test-debug']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=f7e7d249fa3d477f8f5f54bfb4705e3b
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-kde08xkh/src'
make: *** [docker-run-test-debug@fedora] Error 2

real    43m19.363s
user    0m9.290s


The full log is available at
http://patchew.org/logs/20200601181118.579-1-vsementsov@virtuozzo.com/testing.asan/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


* Re: [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters
  2020-06-01 18:11 ` [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters Vladimir Sementsov-Ogievskiy
@ 2020-06-02  8:19   ` Vladimir Sementsov-Ogievskiy
  2020-07-22 12:22   ` Max Reitz
  1 sibling, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-02  8:19 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, qemu-devel, wencongyang2, xiechanglong.d, armbru, mreitz,
	den, jsnow

01.06.2020 21:11, Vladimir Sementsov-Ogievskiy wrote:
> Add new parameters to configure future backup features. The patch
> doesn't introduce aio backup requests (so we actually have only one
> worker) nor requests larger than one cluster. Still, we formally
> satisfy these maximums anyway, so add the parameters now to facilitate
> a further patch which will really change the backup job behavior.
> 
> Options are added with x- prefixes, as the only use for them is some
> very conservative iotests, which will be updated soon.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>   qapi/block-core.json      |  9 ++++++++-
>   include/block/block_int.h |  7 +++++++
>   block/backup.c            | 21 +++++++++++++++++++++
>   block/replication.c       |  2 +-
>   blockdev.c                |  5 +++++
>   5 files changed, 42 insertions(+), 2 deletions(-)
> 

[..]

> --- a/block/replication.c
> +++ b/block/replication.c
> @@ -563,7 +563,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
>           s->backup_job = backup_job_create(
>                                   NULL, s->secondary_disk->bs, s->hidden_disk->bs,
>                                   0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
> -                                true,
> +                                true, 0, 0,

This must be true, 1, 0, to set up one worker.

>                                   BLOCKDEV_ON_ERROR_REPORT,
>                                   BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL,
>                                   backup_job_completed, bs, NULL, &local_err);
> diff --git a/blockdev.c b/blockdev.c
> index 28145afe7d..cf068d20fa 100644
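
To make the fix concrete, here is a sketch of how the corrected call in
replication_start() would then look, based on the hunk quoted above (labelling
the two new arguments as max-workers and max-chunk, and reading 0 as "use the
default chunk size", are my assumptions):

    /* block/replication.c, replication_start(): request exactly one worker */
    s->backup_job = backup_job_create(
                            NULL, s->secondary_disk->bs, s->hidden_disk->bs,
                            0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
                            true, 1, 0, /* one worker; max-chunk 0 presumably means default */
                            BLOCKDEV_ON_ERROR_REPORT,
                            BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL,
                            backup_job_completed, bs, NULL, &local_err);

Only the max-workers value changes relative to the hunk above; with 0 workers
the job presumably either cannot make progress or rejects the value outright,
which would explain the test-replication assertion failure reported by patchew.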


-- 
Best regards,
Vladimir



* Re: [PATCH v2 00/20] backup performance: block_status + async
  2020-06-01 18:59 ` no-reply
@ 2020-06-02  8:20   ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-06-02  8:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, qemu-block, wencongyang2, xiechanglong.d, armbru, mreitz,
	den, jsnow

01.06.2020 21:59, no-reply@patchew.org wrote:
> Patchew URL: https://patchew.org/QEMU/20200601181118.579-1-vsementsov@virtuozzo.com/
> 
> 
> 
> Hi,
> 
> This series failed the docker-quick@centos7 build test. Please find the testing commands and
> their output below. If you have Docker installed, you can probably reproduce it
> locally.
> 
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> make docker-image-centos7 V=1 NETWORK=1
> time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
> === TEST SCRIPT END ===
> 
>    TEST    check-unit: tests/test-logging
>    TEST    check-unit: tests/test-replication
> **
> ERROR:/tmp/qemu-test/src/tests/test-replication.c:428:test_secondary_start: assertion failed: (!local_err)
> ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/test-replication.c:428:test_secondary_start: assertion failed: (!local_err)

This is a bug in patch 11: we need to set up one backup worker in block/replication, not zero.

> make: *** [check-unit] Error 1
> make: *** Waiting for unfinished jobs....
>    TEST    check-qtest-x86_64: tests/qtest/boot-order-test
>    TEST    check-qtest-x86_64: tests/qtest/bios-tables-test
> ---
>      raise CalledProcessError(retcode, cmd)
> subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=73ef29198bda41a1bce9aa3697266e4c', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-j0ydkht8/src/docker-src.2020-06-01-14.44.54.8388:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2.
> filter=--filter=label=com.qemu.instance.uuid=73ef29198bda41a1bce9aa3697266e4c
> make[1]: *** [docker-run] Error 1
> make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-j0ydkht8/src'
> make: *** [docker-run-test-quick@centos7] Error 2
> 
> real    14m11.116s
> user    0m9.306s
> 
> 
> The full log is available at
> http://patchew.org/logs/20200601181118.579-1-vsementsov@virtuozzo.com/testing.docker-quick@centos7/?type=message.
> ---
> Email generated automatically by Patchew [https://patchew.org/].
> Please send your feedback to patchew-devel@redhat.com
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v2 02/20] iotests: 129 don't check backup "busy"
  2020-06-01 18:11 ` [PATCH v2 02/20] iotests: 129 don't check backup "busy" Vladimir Sementsov-Ogievskiy
@ 2020-07-17 12:57   ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-17 12:57 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> Busy is racy, job has it's "pause-points" when it's not busy. Drop this
> check.

Possible, though I have to admit I don’t think I’ve ever seen 129 fail
because of that, but rather because of:

https://lists.nongnu.org/archive/html/qemu-block/2019-06/msg00499.html

I.e. the fact that BB throttling doesn’t do anything and the job is
basically just done before query-block-jobs.

But checking @busy is probably not the best thing to do, right.

> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  tests/qemu-iotests/129 | 1 -
>  1 file changed, 1 deletion(-)

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [PATCH v2 03/20] qapi: backup: add x-use-copy-range parameter
  2020-06-01 18:11 ` [PATCH v2 03/20] qapi: backup: add x-use-copy-range parameter Vladimir Sementsov-Ogievskiy
@ 2020-07-17 13:15   ` Max Reitz
  2020-07-17 15:18     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-07-17 13:15 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> Add parameter to enable/disable copy_range. Keep current default for
> now (enabled).

Why x-, though?  I can’t think of a reason why we would have to remove this.

> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  qapi/block-core.json       | 4 +++-
>  block/backup-top.h         | 1 +
>  include/block/block-copy.h | 2 +-
>  include/block/block_int.h  | 1 +
>  block/backup-top.c         | 4 +++-
>  block/backup.c             | 4 +++-
>  block/block-copy.c         | 4 ++--
>  block/replication.c        | 1 +
>  blockdev.c                 | 5 +++++
>  9 files changed, 20 insertions(+), 6 deletions(-)
> 
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 6fbacddab2..0c7600e4ec 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1405,6 +1405,8 @@
>  #                    above node specified by @drive. If this option is not given,
>  #                    a node name is autogenerated. (Since: 4.2)
>  #
> +# @x-use-copy-range: use copy offloading if possible. Default true. (Since 5.1)

Would it make more sense to invert it to disable-copy-range?  First,
this would make for a cleaner meaning, because it would allow dropping
the “if possible” part.  Setting use-copy-range=true would intuitively
imply to me that I get an error if copy-range cannot be used.  Sure,
there’s this little “if possible” in the documentation, but it goes
against my intuition.  disable-copy-range=true is intuitively clear.
Second, this would give us a default of false, which is marginally nicer.

Max



* Re: [PATCH v2 04/20] block/block-copy: More explicit call_state
  2020-06-01 18:11 ` [PATCH v2 04/20] block/block-copy: More explicit call_state Vladimir Sementsov-Ogievskiy
@ 2020-07-17 13:45   ` Max Reitz
  2020-09-18 20:11     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-07-17 13:45 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> Refactor common path to use BlockCopyCallState pointer as parameter, to
> prepare it for use in asynchronous block-copy (at least, we'll need to
> run block-copy in a coroutine, passing the whole parameters as one
> pointer).
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/block-copy.c | 51 ++++++++++++++++++++++++++++++++++------------
>  1 file changed, 38 insertions(+), 13 deletions(-)
> 
> diff --git a/block/block-copy.c b/block/block-copy.c
> index 43a018d190..75882a094c 100644
> --- a/block/block-copy.c
> +++ b/block/block-copy.c

[...]

> @@ -646,16 +653,16 @@ out:
>   * it means that some I/O operation failed in context of _this_ block_copy call,
>   * not some parallel operation.
>   */
> -int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
> -                            bool *error_is_read)
> +static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>  {
>      int ret;
>  
>      do {
> -        ret = block_copy_dirty_clusters(s, offset, bytes, error_is_read);
> +        ret = block_copy_dirty_clusters(call_state);

It’s possible that much of this code will change in a future patch of
this series, but as it is, it might be nice to make
block_copy_dirty_clusters’s argument a const pointer so it’s clear that
the call to block_copy_wait_one() below will use the original @offset
and @bytes values.

*shrug*

Reviewed-by: Max Reitz <mreitz@redhat.com>

>  
>          if (ret == 0) {
> -            ret = block_copy_wait_one(s, offset, bytes);
> +            ret = block_copy_wait_one(call_state->s, call_state->offset,
> +                                      call_state->bytes);
>          }
>  
>          /*



* Re: [PATCH v2 05/20] block/block-copy: implement block_copy_async
  2020-06-01 18:11 ` [PATCH v2 05/20] block/block-copy: implement block_copy_async Vladimir Sementsov-Ogievskiy
@ 2020-07-17 14:00   ` Max Reitz
  2020-07-17 15:24     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-07-17 14:00 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> We'll need async block-copy invocation to use in backup directly.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/block-copy.h | 13 +++++++++++++
>  block/block-copy.c         | 40 ++++++++++++++++++++++++++++++++++++++
>  2 files changed, 53 insertions(+)
> 
> diff --git a/include/block/block-copy.h b/include/block/block-copy.h
> index 6397505f30..ada0d99566 100644
> --- a/include/block/block-copy.h
> +++ b/include/block/block-copy.h
> @@ -19,7 +19,10 @@
>  #include "qemu/co-shared-resource.h"
>  
>  typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
> +typedef void (*BlockCopyAsyncCallbackFunc)(int ret, bool error_is_read,
> +                                           void *opaque);
>  typedef struct BlockCopyState BlockCopyState;
> +typedef struct BlockCopyCallState BlockCopyCallState;
>  
>  BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
>                                       int64_t cluster_size, bool use_copy_range,
> @@ -41,6 +44,16 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
>  int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
>                              bool *error_is_read);
>  
> +/*
> + * Run block-copy in a coroutine, return state pointer. If finished early
> + * returns NULL (@cb is called anyway).

Any special reason for doing so?  Seems like the code would be a tad
simpler if we just returned it either way.  (And off the top of my head
I’d guess it’d be easier for the caller if the returned value was always
non-NULL, too.)

> + */
> +BlockCopyCallState *block_copy_async(BlockCopyState *s,
> +                                     int64_t offset, int64_t bytes,
> +                                     bool ratelimit, int max_workers,
> +                                     int64_t max_chunk,
> +                                     BlockCopyAsyncCallbackFunc cb);
> +
>  BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
>  void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
>  
> diff --git a/block/block-copy.c b/block/block-copy.c
> index 75882a094c..a0477d90f3 100644
> --- a/block/block-copy.c
> +++ b/block/block-copy.c
> @@ -34,9 +34,11 @@ typedef struct BlockCopyCallState {
>      BlockCopyState *s;
>      int64_t offset;
>      int64_t bytes;
> +    BlockCopyAsyncCallbackFunc cb;
>  
>      /* State */
>      bool failed;
> +    bool finished;
>  
>      /* OUT parameters */
>      bool error_is_read;
> @@ -676,6 +678,13 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>           */
>      } while (ret > 0);
>  
> +    if (call_state->cb) {
> +        call_state->cb(ret, call_state->error_is_read,
> +                       call_state->s->progress_opaque);

I find it weird to pass progress_opaque here.  Shouldn’t we just have a
dedicated opaque object for this CB?

> +    }
> +
> +    call_state->finished = true;
> +
>      return ret;
>  }
>  
> @@ -697,6 +706,37 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
>      return ret;
>  }
>  
> +static void coroutine_fn block_copy_async_co_entry(void *opaque)
> +{
> +    block_copy_common(opaque);
> +}
> +
> +BlockCopyCallState *block_copy_async(BlockCopyState *s,
> +                                     int64_t offset, int64_t bytes,
> +                                     bool ratelimit, int max_workers,
> +                                     int64_t max_chunk,
> +                                     BlockCopyAsyncCallbackFunc cb)
> +{
> +    BlockCopyCallState *call_state = g_new(BlockCopyCallState, 1);
> +    Coroutine *co = qemu_coroutine_create(block_copy_async_co_entry,
> +                                          call_state);
> +
> +    *call_state = (BlockCopyCallState) {
> +        .s = s,
> +        .offset = offset,
> +        .bytes = bytes,
> +        .cb = cb,
> +    };
> +
> +    qemu_coroutine_enter(co);

Do we need/want any already-in-coroutine shenanigans here?

Max

> +
> +    if (call_state->finished) {
> +        g_free(call_state);
> +        return NULL;
> +    }
> +
> +    return call_state;
> +}
>  BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
>  {
>      return s->copy_bitmap;
> 




* Re: [PATCH v2 03/20] qapi: backup: add x-use-copy-range parameter
  2020-07-17 13:15   ` Max Reitz
@ 2020-07-17 15:18     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-07-17 15:18 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow

17.07.2020 16:15, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> Add parameter to enable/disable copy_range. Keep current default for
>> now (enabled).
> 
> Why x-, though?  I can’t think of a reason why we would have to remove this.

I add some x- arguments in this series for two reasons:

  1. I'm unsure about the default values (still, as I understand it, they can be changed later even without x- prefixes).
  2. Probably all of this should be wrapped into common "block-copy" options, so that we don't have to add the same arguments to every block job once they all (in a bright future, I believe :) move to block-copy.

So, this series is not about the API but about the new backup architecture; the experimental options are only needed to experiment with performance and to adjust some iotests.

> 
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   qapi/block-core.json       | 4 +++-
>>   block/backup-top.h         | 1 +
>>   include/block/block-copy.h | 2 +-
>>   include/block/block_int.h  | 1 +
>>   block/backup-top.c         | 4 +++-
>>   block/backup.c             | 4 +++-
>>   block/block-copy.c         | 4 ++--
>>   block/replication.c        | 1 +
>>   blockdev.c                 | 5 +++++
>>   9 files changed, 20 insertions(+), 6 deletions(-)
>>
>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>> index 6fbacddab2..0c7600e4ec 100644
>> --- a/qapi/block-core.json
>> +++ b/qapi/block-core.json
>> @@ -1405,6 +1405,8 @@
>>   #                    above node specified by @drive. If this option is not given,
>>   #                    a node name is autogenerated. (Since: 4.2)
>>   #
>> +# @x-use-copy-range: use copy offloading if possible. Default true. (Since 5.1)
> 
> Would it make more sense to invert it to disable-copy-range?  First,
> this would make for a cleaner meaning, because it would allow dropping
> the “if possible” part.  Setting use-copy-range=true would intuitively
> imply to me that I get an error if copy-range cannot be used.  Sure,
> there’s this little “if possible” in the documentation, but it goes
> against my intuition.  disable-copy-range=true is intuitively clear.
> Second, this would give us a default of false, which is marginally nicer.
> 

Reasonable. But we should also consider disabling copy_range by default, as I propose. Still, if we keep the x- prefix for now and don't change the copy_range default in this series, I can invert the option just for readability.

Thanks a lot for reviewing this!

-- 
Best regards,
Vladimir



* Re: [PATCH v2 05/20] block/block-copy: implement block_copy_async
  2020-07-17 14:00   ` Max Reitz
@ 2020-07-17 15:24     ` Vladimir Sementsov-Ogievskiy
  2020-07-21  8:43       ` Max Reitz
  0 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-07-17 15:24 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow

17.07.2020 17:00, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> We'll need async block-copy invocation to use in backup directly.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/block-copy.h | 13 +++++++++++++
>>   block/block-copy.c         | 40 ++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 53 insertions(+)
>>
>> diff --git a/include/block/block-copy.h b/include/block/block-copy.h
>> index 6397505f30..ada0d99566 100644
>> --- a/include/block/block-copy.h
>> +++ b/include/block/block-copy.h
>> @@ -19,7 +19,10 @@
>>   #include "qemu/co-shared-resource.h"
>>   
>>   typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
>> +typedef void (*BlockCopyAsyncCallbackFunc)(int ret, bool error_is_read,
>> +                                           void *opaque);
>>   typedef struct BlockCopyState BlockCopyState;
>> +typedef struct BlockCopyCallState BlockCopyCallState;
>>   
>>   BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
>>                                        int64_t cluster_size, bool use_copy_range,
>> @@ -41,6 +44,16 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
>>   int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
>>                               bool *error_is_read);
>>   
>> +/*
>> + * Run block-copy in a coroutine, return state pointer. If finished early
>> + * returns NULL (@cb is called anyway).
> 
> Any special reason for doing so?  Seems like the code would be a tad
> simpler if we just returned it either way.  (And off the top of my head
> I’d guess it’d be easier for the caller if the returned value was always
> non-NULL, too.)

Sounds reasonable, I will check.
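
For the record, a minimal sketch of that variant (not the patch itself): block_copy_async() always hands the state back, and the caller checks call_state->finished and frees it:

    BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                         int64_t offset, int64_t bytes,
                                         bool ratelimit, int max_workers,
                                         int64_t max_chunk,
                                         BlockCopyAsyncCallbackFunc cb)
    {
        BlockCopyCallState *call_state = g_new(BlockCopyCallState, 1);
        Coroutine *co = qemu_coroutine_create(block_copy_async_co_entry,
                                              call_state);

        *call_state = (BlockCopyCallState) {
            .s = s,
            .offset = offset,
            .bytes = bytes,
            .cb = cb,
        };

        qemu_coroutine_enter(co);

        /*
         * No early-finish special case: even if the coroutine has already
         * run to completion, return the state and let the caller free it.
         */
        return call_state;
    }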

> 
>> + */
>> +BlockCopyCallState *block_copy_async(BlockCopyState *s,
>> +                                     int64_t offset, int64_t bytes,
>> +                                     bool ratelimit, int max_workers,
>> +                                     int64_t max_chunk,
>> +                                     BlockCopyAsyncCallbackFunc cb);
>> +
>>   BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
>>   void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
>>   
>> diff --git a/block/block-copy.c b/block/block-copy.c
>> index 75882a094c..a0477d90f3 100644
>> --- a/block/block-copy.c
>> +++ b/block/block-copy.c
>> @@ -34,9 +34,11 @@ typedef struct BlockCopyCallState {
>>       BlockCopyState *s;
>>       int64_t offset;
>>       int64_t bytes;
>> +    BlockCopyAsyncCallbackFunc cb;
>>   
>>       /* State */
>>       bool failed;
>> +    bool finished;
>>   
>>       /* OUT parameters */
>>       bool error_is_read;
>> @@ -676,6 +678,13 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>>            */
>>       } while (ret > 0);
>>   
>> +    if (call_state->cb) {
>> +        call_state->cb(ret, call_state->error_is_read,
>> +                       call_state->s->progress_opaque);
> 
> I find it weird to pass progress_opaque here.  Shouldn’t we just have a
> dedicated opaque object for this CB?

I remember that it should be refactored later. But it does look strange here; better to change it now.

> 
>> +    }
>> +
>> +    call_state->finished = true;
>> +
>>       return ret;
>>   }
>>   
>> @@ -697,6 +706,37 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
>>       return ret;
>>   }
>>   
>> +static void coroutine_fn block_copy_async_co_entry(void *opaque)
>> +{
>> +    block_copy_common(opaque);
>> +}
>> +
>> +BlockCopyCallState *block_copy_async(BlockCopyState *s,
>> +                                     int64_t offset, int64_t bytes,
>> +                                     bool ratelimit, int max_workers,
>> +                                     int64_t max_chunk,
>> +                                     BlockCopyAsyncCallbackFunc cb)
>> +{
>> +    BlockCopyCallState *call_state = g_new(BlockCopyCallState, 1);
>> +    Coroutine *co = qemu_coroutine_create(block_copy_async_co_entry,
>> +                                          call_state);
>> +
>> +    *call_state = (BlockCopyCallState) {
>> +        .s = s,
>> +        .offset = offset,
>> +        .bytes = bytes,
>> +        .cb = cb,
>> +    };
>> +
>> +    qemu_coroutine_enter(co);
> 
> Do we need/want any already-in-coroutine shenanigans here?
> 

No: the aim of the function is to start a new coroutine in parallel, independently of whether we are already in some other coroutine or not.
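
Just to illustrate the intended call pattern (the real user is backup, added later in the series; the callback and helper below are made up for the example):

    /* Hypothetical caller, for illustration only. */
    static void copy_done_cb(int ret, bool error_is_read, void *opaque)
    {
        /*
         * Called from the block-copy coroutine once the whole range has been
         * processed.  In this revision @opaque is the BlockCopyState's
         * progress_opaque (see the hunk above).
         */
    }

    static void start_background_copy(BlockCopyState *bcs, int64_t len)
    {
        BlockCopyCallState *cs;

        /*
         * Starts a separate coroutine right away and returns; it does not
         * matter whether the caller itself runs in coroutine context.
         */
        cs = block_copy_async(bcs, 0, len,
                              true,  /* ratelimit */
                              64,    /* max_workers */
                              0,     /* max_chunk: no limit */
                              copy_done_cb);
        /*
         * In this revision cs is NULL if the copy finished before
         * block_copy_async() returned; the callback has run either way.
         */
        (void)cs;
    }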

> 
>> +
>> +    if (call_state->finished) {
>> +        g_free(call_state);
>> +        return NULL;
>> +    }
>> +
>> +    return call_state;
>> +}
>>   BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
>>   {
>>       return s->copy_bitmap;
>>
> 
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v2 05/20] block/block-copy: implement block_copy_async
  2020-07-17 15:24     ` Vladimir Sementsov-Ogievskiy
@ 2020-07-21  8:43       ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-21  8:43 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 17.07.20 17:24, Vladimir Sementsov-Ogievskiy wrote:
> 17.07.2020 17:00, Max Reitz wrote:
>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>> We'll need async block-copy invocation to use in backup directly.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   include/block/block-copy.h | 13 +++++++++++++
>>>   block/block-copy.c         | 40 ++++++++++++++++++++++++++++++++++++++
>>>   2 files changed, 53 insertions(+)

[...]

>>> +BlockCopyCallState *block_copy_async(BlockCopyState *s,
>>> +                                     int64_t offset, int64_t bytes,
>>> +                                     bool ratelimit, int max_workers,
>>> +                                     int64_t max_chunk,
>>> +                                     BlockCopyAsyncCallbackFunc cb)
>>> +{
>>> +    BlockCopyCallState *call_state = g_new(BlockCopyCallState, 1);
>>> +    Coroutine *co = qemu_coroutine_create(block_copy_async_co_entry,
>>> +                                          call_state);
>>> +
>>> +    *call_state = (BlockCopyCallState) {
>>> +        .s = s,
>>> +        .offset = offset,
>>> +        .bytes = bytes,
>>> +        .cb = cb,
>>> +    };
>>> +
>>> +    qemu_coroutine_enter(co);
>>
>> Do we need/want any already-in-coroutine shenanigans here?
>>
> 
> No: the aim of the function is to start a new coroutine in parallel,
> independently of whether we are already in some other coroutine or not.

OK, that makes sense.



* Re: [PATCH v2 06/20] block/block-copy: add max_chunk and max_workers parameters
  2020-06-01 18:11 ` [PATCH v2 06/20] block/block-copy: add max_chunk and max_workers parameters Vladimir Sementsov-Ogievskiy
@ 2020-07-22  9:39   ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-22  9:39 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> They will be used for backup.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/block-copy.h |  5 +++++
>  block/block-copy.c         | 10 ++++++++--
>  2 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/include/block/block-copy.h b/include/block/block-copy.h
> index ada0d99566..600984c733 100644
> --- a/include/block/block-copy.h
> +++ b/include/block/block-copy.h
> @@ -47,6 +47,11 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
>  /*
>   * Run block-copy in a coroutine, return state pointer. If finished early
>   * returns NULL (@cb is called anyway).
> + *
> + * @max_workers means maximum of parallel coroutines to execute sub-requests,
> + * must be > 0.
> + *
> + * @max_chunk means maximum length for one IO operation. Zero means unlimited.
>   */
>  BlockCopyCallState *block_copy_async(BlockCopyState *s,
>                                       int64_t offset, int64_t bytes,

I only just now notice that @max_workers and @max_chunk were already
added in the previous patch, even though they aren’t used there.  Should
we defer adding them until this patch?

> diff --git a/block/block-copy.c b/block/block-copy.c
> index a0477d90f3..4114d1fd25 100644
> --- a/block/block-copy.c
> +++ b/block/block-copy.c
> @@ -34,6 +34,8 @@ typedef struct BlockCopyCallState {
>      BlockCopyState *s;
>      int64_t offset;
>      int64_t bytes;
> +    int max_workers;
> +    int64_t max_chunk;
>      BlockCopyAsyncCallbackFunc cb;
>  
>      /* State */
> @@ -144,10 +146,11 @@ static BlockCopyTask *block_copy_task_create(BlockCopyState *s,
>                                               int64_t offset, int64_t bytes)
>  {
>      BlockCopyTask *task;
> +    int64_t max_chunk = MIN_NON_ZERO(s->copy_size, call_state->max_chunk);
>  
>      if (!bdrv_dirty_bitmap_next_dirty_area(s->copy_bitmap,
>                                             offset, offset + bytes,
> -                                           s->copy_size, &offset, &bytes))
> +                                           max_chunk, &offset, &bytes))
>      {
>          return NULL;
>      }
> @@ -616,7 +619,7 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
>          bytes = end - offset;
>  
>          if (!aio && bytes) {
> -            aio = aio_task_pool_new(BLOCK_COPY_MAX_WORKERS);
> +            aio = aio_task_pool_new(call_state->max_workers);
>          }
>  
>          ret = block_copy_task_run(aio, task);
> @@ -695,6 +698,7 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
>          .s = s,
>          .offset = start,
>          .bytes = bytes,
> +        .max_workers = BLOCK_COPY_MAX_WORKERS,
>      };
>  
>      int ret = block_copy_common(&call_state);
> @@ -726,6 +730,8 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
>          .offset = offset,
>          .bytes = bytes,
>          .cb = cb,
> +        .max_workers = max_workers ?: BLOCK_COPY_MAX_WORKERS,

I thought this must be > 0?

> +        .max_chunk = max_chunk,
>      };
>  
>      qemu_coroutine_enter(co);
> 

And I now notice that there’s no newline after block_copy_async().
(Doesn’t concern this patch, of course, but the previous one.)

Max



* Re: [PATCH v2 07/20] block/block-copy: add ratelimit to block-copy
  2020-06-01 18:11 ` [PATCH v2 07/20] block/block-copy: add ratelimit to block-copy Vladimir Sementsov-Ogievskiy
@ 2020-07-22 11:05   ` Max Reitz
  2020-09-25 18:19     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-07-22 11:05 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> We are going to directly use one async block-copy operation for backup
> job, so we need rate limitator.

%s/limitator/limiter/g, I think.

> We want to maintain current backup behavior: only background copying is
> limited and copy-before-write operations only participate in limit
> calculation. Therefore we need one rate limitator for block-copy state
> and boolean flag for block-copy call state for actual limitation.
> 
> Note, that we can't just calculate each chunk in limitator after
> successful copying: it will not save us from starting a lot of async
> sub-requests which will exceed limit too much. Instead let's use the
> following scheme on sub-request creation:
> 1. If at the moment limit is not exceeded, create the request and
> account it immediately.
> 2. If at the moment limit is already exceeded, drop create sub-request
> and handle limit instead (by sleep).
> With this approach we'll never exceed the limit more than by one
> sub-request (which pretty much matches current backup behavior).

Sounds reasonable.

> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/block-copy.h |  8 +++++++
>  block/block-copy.c         | 44 ++++++++++++++++++++++++++++++++++++++
>  2 files changed, 52 insertions(+)
> 
> diff --git a/include/block/block-copy.h b/include/block/block-copy.h
> index 600984c733..d40e691123 100644
> --- a/include/block/block-copy.h
> +++ b/include/block/block-copy.h
> @@ -59,6 +59,14 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
>                                       int64_t max_chunk,
>                                       BlockCopyAsyncCallbackFunc cb);
>  
> +/*
> + * Set speed limit for block-copy instance. All block-copy operations related to
> + * this BlockCopyState will participate in speed calculation, but only
> + * block_copy_async calls with @ratelimit=true will be actually limited.
> + */
> +void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
> +                          uint64_t speed);
> +
>  BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
>  void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
>  
> diff --git a/block/block-copy.c b/block/block-copy.c
> index 4114d1fd25..851d9c8aaf 100644
> --- a/block/block-copy.c
> +++ b/block/block-copy.c
> @@ -26,6 +26,7 @@
>  #define BLOCK_COPY_MAX_BUFFER (1 * MiB)
>  #define BLOCK_COPY_MAX_MEM (128 * MiB)
>  #define BLOCK_COPY_MAX_WORKERS 64
> +#define BLOCK_COPY_SLICE_TIME 100000000ULL /* ns */
>  
>  static coroutine_fn int block_copy_task_entry(AioTask *task);
>  
> @@ -36,11 +37,13 @@ typedef struct BlockCopyCallState {
>      int64_t bytes;
>      int max_workers;
>      int64_t max_chunk;
> +    bool ratelimit;
>      BlockCopyAsyncCallbackFunc cb;
>  
>      /* State */
>      bool failed;
>      bool finished;
> +    QemuCoSleepState *sleep_state;
>  
>      /* OUT parameters */
>      bool error_is_read;
> @@ -103,6 +106,9 @@ typedef struct BlockCopyState {
>      void *progress_opaque;
>  
>      SharedResource *mem;
> +
> +    uint64_t speed;
> +    RateLimit rate_limit;
>  } BlockCopyState;
>  
>  static BlockCopyTask *find_conflicting_task(BlockCopyState *s,
> @@ -611,6 +617,21 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
>          }
>          task->zeroes = ret & BDRV_BLOCK_ZERO;
>  
> +        if (s->speed) {
> +            if (call_state->ratelimit) {
> +                uint64_t ns = ratelimit_calculate_delay(&s->rate_limit, 0);
> +                if (ns > 0) {
> +                    block_copy_task_end(task, -EAGAIN);
> +                    g_free(task);
> +                    qemu_co_sleep_ns_wakeable(QEMU_CLOCK_REALTIME, ns,
> +                                              &call_state->sleep_state);
> +                    continue;
> +                }
> +            }
> +
> +            ratelimit_calculate_delay(&s->rate_limit, task->bytes);
> +        }
> +

Looks good.

>          trace_block_copy_process(s, task->offset);
>  
>          co_get_from_shres(s->mem, task->bytes);
> @@ -649,6 +670,13 @@ out:
>      return ret < 0 ? ret : found_dirty;
>  }
>  
> +static void block_copy_kick(BlockCopyCallState *call_state)
> +{
> +    if (call_state->sleep_state) {
> +        qemu_co_sleep_wake(call_state->sleep_state);
> +    }
> +}
> +
>  /*
>   * block_copy_common
>   *
> @@ -729,6 +757,7 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
>          .s = s,
>          .offset = offset,
>          .bytes = bytes,
> +        .ratelimit = ratelimit,

Hm, same problem/question as in patch 6: Should the @ratelimit parameter
really be added in patch 5 if it’s used only now?

>          .cb = cb,
>          .max_workers = max_workers ?: BLOCK_COPY_MAX_WORKERS,
>          .max_chunk = max_chunk,
> @@ -752,3 +781,18 @@ void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip)
>  {
>      s->skip_unallocated = skip;
>  }
> +
> +void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
> +                          uint64_t speed)
> +{
> +    uint64_t old_speed = s->speed;
> +
> +    s->speed = speed;
> +    if (speed > 0) {
> +        ratelimit_set_speed(&s->rate_limit, speed, BLOCK_COPY_SLICE_TIME);
> +    }
> +
> +    if (call_state && old_speed && (speed > old_speed || speed == 0)) {
> +        block_copy_kick(call_state);
> +    }
> +}

Hm.  I’m interested in seeing how this is going to be used, i.e. what
callers will pass for @call_state.  I suppose it’s going to be the
background operation for the whole device, but I wonder whether it
actually makes sense to pass it.  I mean, the caller could just call
block_copy_kick() itself (unconditionally, because it’ll never hurt, I
think).
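
In other words, roughly the following sketch (it assumes block_copy_kick() gets exported and that the job keeps a pointer to its background call state, here called bg_call_state; neither is in the patch as posted):

    void block_copy_set_speed(BlockCopyState *s, uint64_t speed)
    {
        s->speed = speed;
        if (speed > 0) {
            ratelimit_set_speed(&s->rate_limit, speed, BLOCK_COPY_SLICE_TIME);
        }
    }

    /* ... and in the job's set_speed handler: */
        block_copy_set_speed(job->bcs, speed);
        if (job->bg_call_state) {
            /* Waking a call that is not sleeping is harmless. */
            block_copy_kick(job->bg_call_state);
        }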



* Re: [PATCH v2 08/20] block/block-copy: add block_copy_cancel
  2020-06-01 18:11 ` [PATCH v2 08/20] block/block-copy: add block_copy_cancel Vladimir Sementsov-Ogievskiy
@ 2020-07-22 11:28   ` Max Reitz
  2020-10-22 20:50     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-07-22 11:28 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> Add function to cancel running async block-copy call. It will be used
> in backup.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/block-copy.h |  7 +++++++
>  block/block-copy.c         | 22 +++++++++++++++++++---
>  2 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/include/block/block-copy.h b/include/block/block-copy.h
> index d40e691123..370a194d3c 100644
> --- a/include/block/block-copy.h
> +++ b/include/block/block-copy.h
> @@ -67,6 +67,13 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
>  void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
>                            uint64_t speed);
>  
> +/*
> + * Cancel running block-copy call.
> + * Cancel leaves block-copy state valid: dirty bits are correct and you may use
> + * cancel + <run block_copy with same parameters> to emulate pause/resume.
> + */
> +void block_copy_cancel(BlockCopyCallState *call_state);
> +
>  BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
>  void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
>  
> diff --git a/block/block-copy.c b/block/block-copy.c
> index 851d9c8aaf..b551feb6c2 100644
> --- a/block/block-copy.c
> +++ b/block/block-copy.c
> @@ -44,6 +44,8 @@ typedef struct BlockCopyCallState {
>      bool failed;
>      bool finished;
>      QemuCoSleepState *sleep_state;
> +    bool cancelled;
> +    Coroutine *canceller;
>  
>      /* OUT parameters */
>      bool error_is_read;
> @@ -582,7 +584,7 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
>      assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
>      assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
>  
> -    while (bytes && aio_task_pool_status(aio) == 0) {
> +    while (bytes && aio_task_pool_status(aio) == 0 && !call_state->cancelled) {
>          BlockCopyTask *task;
>          int64_t status_bytes;
>  
> @@ -693,7 +695,7 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>      do {
>          ret = block_copy_dirty_clusters(call_state);
>  
> -        if (ret == 0) {
> +        if (ret == 0 && !call_state->cancelled) {
>              ret = block_copy_wait_one(call_state->s, call_state->offset,
>                                        call_state->bytes);
>          }
> @@ -707,13 +709,18 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>           * 2. We have waited for some intersecting block-copy request
>           *    It may have failed and produced new dirty bits.
>           */
> -    } while (ret > 0);
> +    } while (ret > 0 && !call_state->cancelled);

Would it be cleaner if block_copy_dirty_clusters() just returned
-ECANCELED?  Or would that pose a problem for its callers or the async
callback?
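
(Just to sketch the idea: the check would then live in block_copy_dirty_clusters()'s main loop and be reported through the normal error path, e.g.

    while (bytes && aio_task_pool_status(aio) == 0) {
        if (call_state->cancelled) {
            ret = -ECANCELED;
            break;
        }
        ...
    }

so that block_copy_common()'s "while (ret > 0)" loop stops on its own.)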

>      if (call_state->cb) {
>          call_state->cb(ret, call_state->error_is_read,
>                         call_state->s->progress_opaque);
>      }
>  
> +    if (call_state->canceller) {
> +        aio_co_wake(call_state->canceller);
> +        call_state->canceller = NULL;
> +    }
> +
>      call_state->finished = true;
>  
>      return ret;



* Re: [PATCH v2 09/20] blockjob: add set_speed to BlockJobDriver
  2020-06-01 18:11 ` [PATCH v2 09/20] blockjob: add set_speed to BlockJobDriver Vladimir Sementsov-Ogievskiy
@ 2020-07-22 11:34   ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-22 11:34 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> We are going to use async block-copy call in backup, so we'll need to
> passthrough setting backup speed to block-copy call.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/blockjob_int.h | 2 ++
>  blockjob.c                   | 6 ++++++
>  2 files changed, 8 insertions(+)

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [PATCH v2 10/20] job: call job_enter from job_user_pause
  2020-06-01 18:11 ` [PATCH v2 10/20] job: call job_enter from job_user_pause Vladimir Sementsov-Ogievskiy
@ 2020-07-22 11:49   ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-22 11:49 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> If main job coroutine called job_yield (while some background process
> is in progress), we should give it a chance to call job_pause_point().
> It will be used in backup, when moved on async block-copy.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  job.c | 1 +
>  1 file changed, 1 insertion(+)

Sounds reasonable to me, although I’d prefer an opinion from John.

So, a middle-weak:

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters
  2020-06-01 18:11 ` [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters Vladimir Sementsov-Ogievskiy
  2020-06-02  8:19   ` Vladimir Sementsov-Ogievskiy
@ 2020-07-22 12:22   ` Max Reitz
  2020-07-23  7:43     ` Max Reitz
  2020-10-22 20:35     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 2 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-22 12:22 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> Add new parameters to configure future backup features. The patch
> doesn't introduce aio backup requests (so we actually have only one
> worker) neither requests larger than one cluster. Still, formally we
> satisfy these maximums anyway, so add the parameters now, to facilitate
> further patch which will really change backup job behavior.
> 
> Options are added with x- prefixes, as the only use for them are some
> very conservative iotests which will be updated soon.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  qapi/block-core.json      |  9 ++++++++-
>  include/block/block_int.h |  7 +++++++
>  block/backup.c            | 21 +++++++++++++++++++++
>  block/replication.c       |  2 +-
>  blockdev.c                |  5 +++++
>  5 files changed, 42 insertions(+), 2 deletions(-)
> 
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 0c7600e4ec..f4abcde34e 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1407,6 +1407,12 @@
>  #
>  # @x-use-copy-range: use copy offloading if possible. Default true. (Since 5.1)
>  #
> +# @x-max-workers: maximum of parallel requests for static data backup. This
> +#                 doesn't influence copy-before-write operations. (Since: 5.1)

I think I’d prefer something with “background” rather than or in
addition to “static”, like “maximum number of parallel requests for the
sustained background backup operation”.

> +#
> +# @x-max-chunk: maximum chunk length for static data backup. This doesn't
> +#               influence copy-before-write operations. (Since: 5.1)
> +#
>  # Note: @on-source-error and @on-target-error only affect background
>  #       I/O.  If an error occurs during a guest write request, the device's
>  #       rerror/werror actions will be used.
> @@ -1421,7 +1427,8 @@
>              '*on-source-error': 'BlockdevOnError',
>              '*on-target-error': 'BlockdevOnError',
>              '*auto-finalize': 'bool', '*auto-dismiss': 'bool',
> -            '*filter-node-name': 'str', '*x-use-copy-range': 'bool'  } }
> +            '*filter-node-name': 'str', '*x-use-copy-range': 'bool',
> +            '*x-max-workers': 'int', '*x-max-chunk': 'int64' } }
>  
>  ##
>  # @DriveBackup:
> diff --git a/include/block/block_int.h b/include/block/block_int.h
> index 93b9b3bdc0..d93a170d37 100644
> --- a/include/block/block_int.h
> +++ b/include/block/block_int.h
> @@ -1207,6 +1207,11 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
>   * @sync_mode: What parts of the disk image should be copied to the destination.
>   * @sync_bitmap: The dirty bitmap if sync_mode is 'bitmap' or 'incremental'
>   * @bitmap_mode: The bitmap synchronization policy to use.
> + * @max_workers: The limit for parallel requests for main backup loop.
> + *               Must be >= 1.
> + * @max_chunk: The limit for one IO operation length in main backup loop.
> + *             Must be not less than job cluster size or zero. Zero means no
> + *             specific limit.
>   * @on_source_error: The action to take upon error reading from the source.
>   * @on_target_error: The action to take upon error writing to the target.
>   * @creation_flags: Flags that control the behavior of the Job lifetime.
> @@ -1226,6 +1231,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>                              bool compress,
>                              const char *filter_node_name,
>                              bool use_copy_range,
> +                            int max_workers,
> +                            int64_t max_chunk,
>                              BlockdevOnError on_source_error,
>                              BlockdevOnError on_target_error,
>                              int creation_flags,
> diff --git a/block/backup.c b/block/backup.c
> index 76847b4daf..ec2676abc2 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -46,6 +46,8 @@ typedef struct BackupBlockJob {
>      uint64_t len;
>      uint64_t bytes_read;
>      int64_t cluster_size;
> +    int max_workers;
> +    int64_t max_chunk;
>  
>      BlockCopyState *bcs;
>  } BackupBlockJob;
> @@ -335,6 +337,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>                    bool compress,
>                    const char *filter_node_name,
>                    bool use_copy_range,
> +                  int max_workers,
> +                  int64_t max_chunk,
>                    BlockdevOnError on_source_error,
>                    BlockdevOnError on_target_error,
>                    int creation_flags,
> @@ -355,6 +359,16 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>      assert(sync_mode != MIRROR_SYNC_MODE_INCREMENTAL);
>      assert(sync_bitmap || sync_mode != MIRROR_SYNC_MODE_BITMAP);
>  
> +    if (max_workers < 1) {
> +        error_setg(errp, "At least one worker needed");
> +        return NULL;
> +    }
> +
> +    if (max_chunk < 0) {
> +        error_setg(errp, "max-chunk is negative");

Perhaps “must be positive or 0” instead?  I think most error messages
try to specify what is allowed instead of what isn’t.
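
E.g., with the suggested wording:

    if (max_chunk < 0) {
        error_setg(errp, "max-chunk must be positive or zero");
        return NULL;
    }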

> +        return NULL;
> +    }
> +
>      if (bs == target) {
>          error_setg(errp, "Source and target cannot be the same");
>          return NULL;
> @@ -422,6 +436,11 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>      if (cluster_size < 0) {
>          goto error;
>      }
> +    if (max_chunk && max_chunk < cluster_size) {
> +        error_setg(errp, "Required max-chunk (%" PRIi64") is less than backup "

(missing a space after PRIi64)

> +                   "cluster size (%" PRIi64 ")", max_chunk, cluster_size);

Should this be noted in the QAPI documentation?  (And perhaps the fact
that without copy offloading, we’ll never copy anything bigger than one
cluster at a time anyway?)

> +        return NULL;
> +    }
>  
>      /*
>       * If source is in backing chain of target assume that target is going to be
> @@ -465,6 +484,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>      job->bcs = bcs;
>      job->cluster_size = cluster_size;
>      job->len = len;
> +    job->max_workers = max_workers;
> +    job->max_chunk = max_chunk;
>  
>      block_copy_set_progress_callback(bcs, backup_progress_bytes_callback, job);
>      block_copy_set_progress_meter(bcs, &job->common.job.progress);
> diff --git a/block/replication.c b/block/replication.c
> index 25987eab0f..a9ee82db80 100644
> --- a/block/replication.c
> +++ b/block/replication.c
> @@ -563,7 +563,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
>          s->backup_job = backup_job_create(
>                                  NULL, s->secondary_disk->bs, s->hidden_disk->bs,
>                                  0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
> -                                true,
> +                                true, 0, 0,

Passing 0 for max_workers will error out immediately, won’t it?

>                                  BLOCKDEV_ON_ERROR_REPORT,
>                                  BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL,
>                                  backup_job_completed, bs, NULL, &local_err);



* Re: [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters
  2020-07-22 12:22   ` Max Reitz
@ 2020-07-23  7:43     ` Max Reitz
  2020-10-22 20:35     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-23  7:43 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 22.07.20 14:22, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> Add new parameters to configure future backup features. The patch
>> doesn't introduce aio backup requests (so we actually have only one
>> worker) neither requests larger than one cluster. Still, formally we
>> satisfy these maximums anyway, so add the parameters now, to facilitate
>> further patch which will really change backup job behavior.
>>
>> Options are added with x- prefixes, as the only use for them are some
>> very conservative iotests which will be updated soon.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>  qapi/block-core.json      |  9 ++++++++-
>>  include/block/block_int.h |  7 +++++++
>>  block/backup.c            | 21 +++++++++++++++++++++
>>  block/replication.c       |  2 +-
>>  blockdev.c                |  5 +++++
>>  5 files changed, 42 insertions(+), 2 deletions(-)

[...]

>> diff --git a/block/replication.c b/block/replication.c
>> index 25987eab0f..a9ee82db80 100644
>> --- a/block/replication.c
>> +++ b/block/replication.c
>> @@ -563,7 +563,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
>>          s->backup_job = backup_job_create(
>>                                  NULL, s->secondary_disk->bs, s->hidden_disk->bs,
>>                                  0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
>> -                                true,
>> +                                true, 0, 0,
> 
> Passing 0 for max_workers will error out immediately, won’t it?

Ah, oops.  Saw your own reply only now.  Yep, 1 worker would be nice. :)



* Re: [PATCH v2 12/20] iotests: 56: prepare for backup over block-copy
  2020-06-01 18:11 ` [PATCH v2 12/20] iotests: 56: prepare for backup over block-copy Vladimir Sementsov-Ogievskiy
@ 2020-07-23  7:57   ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-23  7:57 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> After introducing parallel async copy requests instead of plain
> cluster-by-cluster copying loop, we'll have to wait for paused status,
> as we need to wait for several parallel request. So, let's gently wait
> instead of just asserting that job already paused.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  tests/qemu-iotests/056 | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
> index f73fc74457..2ced356a43 100755
> --- a/tests/qemu-iotests/056
> +++ b/tests/qemu-iotests/056
> @@ -306,8 +306,12 @@ class BackupTest(iotests.QMPTestCase):
>          event = self.vm.event_wait(name="BLOCK_JOB_ERROR",
>                                     match={'data': {'device': 'drive0'}})
>          self.assertNotEqual(event, None)
> -        # OK, job should be wedged
> -        res = self.vm.qmp('query-block-jobs')
> +        # OK, job should pause, but it can't do it immediately, as it can't
> +        # cancel other parallel requests (which didn't fail)
> +        while True:
> +            res = self.vm.qmp('query-block-jobs')
> +            if res['return'][0]['status'] == 'paused':
> +                break

A timeout around this would be nice, I think.

>          self.assert_qmp(res, 'return[0]/status', 'paused')
>          res = self.vm.qmp('block-job-dismiss', id='drive0')
>          self.assert_qmp(res, 'error/desc',
> 




* Re: [PATCH v2 13/20] iotests: 129: prepare for backup over block-copy
  2020-06-01 18:11 ` [PATCH v2 13/20] iotests: 129: " Vladimir Sementsov-Ogievskiy
@ 2020-07-23  8:03   ` Max Reitz
  2020-10-22 21:10     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-07-23  8:03 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> After introducing parallel async copy requests instead of plain
> cluster-by-cluster copying loop, backup job may finish earlier than
> final assertion in do_test_stop. Let's require slow backup explicitly
> by specifying speed parameter.

Isn’t the problem really that block_set_io_throttle does absolutely
nothing?  (Which is a long-standing problem with 129.  I personally just
never run it, honestly.)

> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  tests/qemu-iotests/129 | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
> index 4db5eca441..bca56b589d 100755
> --- a/tests/qemu-iotests/129
> +++ b/tests/qemu-iotests/129
> @@ -76,7 +76,7 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
>      def test_drive_backup(self):
>          self.do_test_stop("drive-backup", device="drive0",
>                            target=self.target_img,
> -                          sync="full")
> +                          sync="full", speed=1024)
>  
>      def test_block_commit(self):
>          self.do_test_stop("block-commit", device="drive0")
> 




* Re: [PATCH v2 14/20] iotests: 185: prepare for backup over block-copy
  2020-06-01 18:11 ` [PATCH v2 14/20] iotests: 185: " Vladimir Sementsov-Ogievskiy
@ 2020-07-23  8:19   ` Max Reitz
  2020-10-22 21:16     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-07-23  8:19 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow


On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> The further change of moving backup to be a on block-copy call will

-on?

> make copying chunk-size and cluster-size a separate things. So, even

s/a/two/

> with 64k cluster sized qcow2 image, default chunk would be 1M.
> 185 test however assumes, that with speed limited to 64K, one iteration
> would result in offset=64K. It will change, as first iteration would
> result in offset=1M independently of speed.
> 
> So, let's explicitly specify, what test wants: set max-chunk to 64K, so
> that one iteration is 64K. Note, that we don't need to limit
> max-workers, as block-copy rate limitator will handle the situation and

*limitator

> wouldn't start new workers when speed limit is obviously reached.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  tests/qemu-iotests/185     | 3 ++-
>  tests/qemu-iotests/185.out | 2 +-
>  2 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/tests/qemu-iotests/185 b/tests/qemu-iotests/185
> index fd5e6ebe11..6afb3fc82f 100755
> --- a/tests/qemu-iotests/185
> +++ b/tests/qemu-iotests/185
> @@ -182,7 +182,8 @@ _send_qemu_cmd $h \
>                        'target': '$TEST_IMG.copy',
>                        'format': '$IMGFMT',
>                        'sync': 'full',
> -                      'speed': 65536 } }" \
> +                      'speed': 65536,
> +                      'x-max-chunk': 65536 } }" \

Out of curiosity, would it also suffice to disable copy offloading?

But anyway:

Reviewed-by: Max Reitz <mreitz@redhat.com>

>      "return"
>  
>  # If we don't sleep here 'quit' command races with disk I/O
> diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
> index ac5ab16bc8..5232647972 100644
> --- a/tests/qemu-iotests/185.out
> +++ b/tests/qemu-iotests/185.out
> @@ -61,7 +61,7 @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 l
>  
>  { 'execute': 'qmp_capabilities' }
>  {"return": {}}
> -{ 'execute': 'drive-backup', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536 } }
> +{ 'execute': 'drive-backup', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536, 'x-max-chunk': 65536 } }
>  Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16 compression_type=zlib
>  {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
>  {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
> 




* Re: [PATCH v2 15/20] iotests: 219: prepare for backup over block-copy
  2020-06-01 18:11 ` [PATCH v2 15/20] iotests: 219: " Vladimir Sementsov-Ogievskiy
@ 2020-07-23  8:35   ` Max Reitz
  2020-10-22 21:20     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-07-23  8:35 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow



On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> The further change of moving backup to be a on block-copy call will

-on?

> make copying chunk-size and cluster-size a separate things. So, even

s/a/two/

> with 64k cluster sized qcow2 image, default chunk would be 1M.
> Test 219 depends on specified chunk-size. Update it for explicit
> chunk-size for backup as for mirror.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  tests/qemu-iotests/219 | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
> index db272c5249..2bbed28f39 100755
> --- a/tests/qemu-iotests/219
> +++ b/tests/qemu-iotests/219
> @@ -203,13 +203,13 @@ with iotests.FilePath('disk.img') as disk_path, \
>      # but related to this also automatic state transitions like job
>      # completion), but still get pause points often enough to avoid making this
>      # test very slow, it's important to have the right ratio between speed and
> -    # buf_size.
> +    # copy-chunk-size.
>      #
> -    # For backup, buf_size is hard-coded to the source image cluster size (64k),
> -    # so we'll pick the same for mirror. The slice time, i.e. the granularity
> -    # of the rate limiting is 100ms. With a speed of 256k per second, we can
> -    # get four pause points per second. This gives us 250ms per iteration,
> -    # which should be enough to stay deterministic.
> +    # Chose 64k copy-chunk-size both for mirror (by buf_size) and backup (by
> +    # x-max-chunk). The slice time, i.e. the granularity of the rate limiting
> +    # is 100ms. With a speed of 256k per second, we can get four pause points
> +    # per second. This gives us 250ms per iteration, which should be enough to
> +    # stay deterministic.

Don’t we also have to limit the number of workers to 1 so we actually
keep 250 ms per iteration instead of just finishing four requests
immediately, then pausing for a second?

>      test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
>          'device': 'drive0-node',
> @@ -226,6 +226,7 @@ with iotests.FilePath('disk.img') as disk_path, \
>                  'target': copy_path,
>                  'sync': 'full',
>                  'speed': 262144,
> +                'x-max-chunk': 65536,
>                  'auto-finalize': auto_finalize,
>                  'auto-dismiss': auto_dismiss,
>              })
> 




* Re: [PATCH v2 16/20] iotests: 257: prepare for backup over block-copy
  2020-06-01 18:11 ` [PATCH v2 16/20] iotests: 257: " Vladimir Sementsov-Ogievskiy
@ 2020-07-23  8:49   ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-23  8:49 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow



On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> Iotest 257 dumps a lot of in-progress information of backup job, such
> as offset and bitmap dirtiness. Further commit will move backup to be
> one block-copy call, which will introduce async parallel requests
> instead of plain cluster-by-cluster copying. To keep things
> deterministic, allow only one worker (only one copy request at a time)
> for this test.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  tests/qemu-iotests/257     |   1 +
>  tests/qemu-iotests/257.out | 306 ++++++++++++++++++-------------------
>  2 files changed, 154 insertions(+), 153 deletions(-)

It’s a shame that we don’t run this test with the default configuration
then, but I suppose that’s how it is.  For now, at least.

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [PATCH v2 17/20] backup: move to block-copy
  2020-06-01 18:11 ` [PATCH v2 17/20] backup: move to block-copy Vladimir Sementsov-Ogievskiy
@ 2020-07-23  9:47   ` Max Reitz
  2020-09-21 13:58     ` Vladimir Sementsov-Ogievskiy
  2020-10-26 15:18     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 2 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-23  9:47 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow



On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> This brings async request handling and block-status driven chunk sizes
> to backup out of the box, which improves backup performance.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/block-copy.h |   9 +--
>  block/backup.c             | 145 +++++++++++++++++++------------------
>  block/block-copy.c         |  21 ++----
>  3 files changed, 83 insertions(+), 92 deletions(-)

This patch feels like it should be multiple ones.  I don’t see why a
patch that lets backup use block-copy (more) should have to modify the
block-copy code.

More specifically: I think that block_copy_set_progress_callback()
should be removed in a separate patch.  Also, moving
@cb_opaque/@progress_opaque from BlockCopyState into BlockCopyCallState
seems like a separate patch to me, too.

(I presume if the cb had had its own opaque object from patch 5 on,
there wouldn’t be this problem now.  We’d just stop using the progress
callback in backup, and remove it in one separate patch.)

[...]

> diff --git a/block/backup.c b/block/backup.c
> index ec2676abc2..59c00f5293 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -44,42 +44,19 @@ typedef struct BackupBlockJob {
>      BlockdevOnError on_source_error;
>      BlockdevOnError on_target_error;
>      uint64_t len;
> -    uint64_t bytes_read;
>      int64_t cluster_size;
>      int max_workers;
>      int64_t max_chunk;
>  
>      BlockCopyState *bcs;
> +
> +    BlockCopyCallState *bcs_call;

Can this be more descriptive?  E.g. background_bcs?  bg_bcs_call?  bg_bcsc?

> +    int ret;
> +    bool error_is_read;
>  } BackupBlockJob;
>  
>  static const BlockJobDriver backup_job_driver;
>  

[...]

>  static int coroutine_fn backup_loop(BackupBlockJob *job)
>  {
> -    bool error_is_read;
> -    int64_t offset;
> -    BdrvDirtyBitmapIter *bdbi;
> -    int ret = 0;
> +    while (true) { /* retry loop */
> +        assert(!job->bcs_call);
> +        job->bcs_call = block_copy_async(job->bcs, 0,
> +                                         QEMU_ALIGN_UP(job->len,
> +                                                       job->cluster_size),
> +                                         true, job->max_workers, job->max_chunk,
> +                                         backup_block_copy_callback, job);
>  
> -    bdbi = bdrv_dirty_iter_new(block_copy_dirty_bitmap(job->bcs));
> -    while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
> -        do {
> -            if (yield_and_check(job)) {
> -                goto out;
> +        while (job->bcs_call && !job->common.job.cancelled) {
> +            /* wait and handle pauses */

Doesn’t someone need to reset BlockCopyCallState.cancelled when the job
is unpaused?  I can’t see anyone doing that.

Well, or even just reentering the block-copy operation after
backup_pause() has cancelled it.  Is there some magic I’m missing?

Does pausing (which leads to cancelling the block-copy operation) just
yield to the CB being invoked, so the copy operation is considered done,
and then we just enter the next iteration of the loop and try again?
But cancelling the block-copy operation leads to it returning 0, AFAIR,
so...

> +
> +            job_pause_point(&job->common.job);
> +
> +            if (job->bcs_call && !job->common.job.cancelled) {
> +                job_yield(&job->common.job);
>              }
> -            ret = backup_do_cow(job, offset, job->cluster_size, &error_is_read);
> -            if (ret < 0 && backup_error_action(job, error_is_read, -ret) ==
> -                           BLOCK_ERROR_ACTION_REPORT)
> -            {
> -                goto out;
> +        }
> +
> +        if (!job->bcs_call && job->ret == 0) {
> +            /* Success */
> +            return 0;

...I would assume we return here when the job is paused.

> +        }
> +
> +        if (job->common.job.cancelled) {
> +            if (job->bcs_call) {
> +                block_copy_cancel(job->bcs_call);
>              }
> -        } while (ret < 0);
> +            return 0;
> +        }
> +
> +        if (!job->bcs_call && job->ret < 0 &&

Is it even possible for bcs_call to be non-NULL here?

> +            (backup_error_action(job, job->error_is_read, -job->ret) ==
> +             BLOCK_ERROR_ACTION_REPORT))
> +        {
> +            return job->ret;
> +        }

So if we get an error, and the error action isn’t REPORT, we just try
the whole operation again?  That doesn’t sound very IGNORE-y to me.

>      }
>  
> - out:
> -    bdrv_dirty_iter_free(bdbi);
> -    return ret;
> +    g_assert_not_reached();
>  }
>  
>  static void backup_init_bcs_bitmap(BackupBlockJob *job)
> @@ -246,9 +227,14 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>          int64_t count;
>  
>          for (offset = 0; offset < s->len; ) {
> -            if (yield_and_check(s)) {
> -                ret = -ECANCELED;
> -                goto out;
> +            if (job_is_cancelled(job)) {
> +                return -ECANCELED;

I’d either drop the out label altogether, or use it here.  It’s a bit
weird to use it sometimes, but not all the time.

> +            }
> +
> +            job_pause_point(job);
> +
> +            if (job_is_cancelled(job)) {
> +                return -ECANCELED;
>              }
>  
>              ret = block_copy_reset_unallocated(s->bcs, offset, &count);
> @@ -281,6 +267,25 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>      return ret;
>  }
>  
> +static void coroutine_fn backup_pause(Job *job)
> +{
> +    BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
> +
> +    if (s->bcs_call) {
> +        block_copy_cancel(s->bcs_call);
> +    }
> +}
> +
> +static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
> +{
> +    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
> +
> +    if (s->bcs) {
> +        /* In block_job_create we yet don't have bcs */

Shouldn’t hurt to make it conditional, but how can we get here in
block_job_create()?

> +        block_copy_set_speed(s->bcs, s->bcs_call, speed);
> +    }
> +}
> +
>  static const BlockJobDriver backup_job_driver = {
>      .job_driver = {
>          .instance_size          = sizeof(BackupBlockJob),



* Re: [PATCH v2 18/20] block/block-copy: drop unused argument of block_copy()
  2020-06-01 18:11 ` [PATCH v2 18/20] block/block-copy: drop unused argument of block_copy() Vladimir Sementsov-Ogievskiy
@ 2020-07-23 13:24   ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-23 13:24 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow



On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/block-copy.h |  3 +--
>  block/backup-top.c         |  2 +-
>  block/block-copy.c         | 11 ++---------
>  3 files changed, 4 insertions(+), 12 deletions(-)

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [PATCH v2 19/20] simplebench: bench_block_job: add cmd_options argument
  2020-06-01 18:11 ` [PATCH v2 19/20] simplebench: bench_block_job: add cmd_options argument Vladimir Sementsov-Ogievskiy
@ 2020-07-23 13:30   ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-23 13:30 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow



On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> Add argument to allow additional block-job options.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  scripts/simplebench/bench-example.py   |  2 +-
>  scripts/simplebench/bench_block_job.py | 13 ++++++++-----
>  2 files changed, 9 insertions(+), 6 deletions(-)

[...]

> diff --git a/scripts/simplebench/bench_block_job.py b/scripts/simplebench/bench_block_job.py
> index 9808d696cf..7332845c1c 100755
> --- a/scripts/simplebench/bench_block_job.py
> +++ b/scripts/simplebench/bench_block_job.py
> @@ -1,4 +1,4 @@
> -#!/usr/bin/env python
> +#!/usr/bin/env python3

Looks a bit unrelated.  Apart from that:

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [PATCH v2 20/20] simplebench: add bench-backup.py
  2020-06-01 18:11 ` [PATCH v2 20/20] simplebench: add bench-backup.py Vladimir Sementsov-Ogievskiy
@ 2020-07-23 13:47   ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-07-23 13:47 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow



On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> Add script to benchmark new backup architecture.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  scripts/simplebench/bench-backup.py | 132 ++++++++++++++++++++++++++++
>  1 file changed, 132 insertions(+)
>  create mode 100755 scripts/simplebench/bench-backup.py

Looks reasonable.  Not sure whether I can give an R-b, given my
non-existing background on the simplebench scripts, and also that it’s
probably not really necessary.

The only thing that sprung to my eye is that you give x-use-copy-range
even when the “new” label isn’t given, which may be a problem.  (Because
AFAIU, the “new” label means that qemu is sufficiently new to support
x-max-workers; and that’s basically equivalent to supporting
x-use-copy-range.)

Max
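
A hypothetical sketch, in Python, of the gating suggested here (bench-backup.py itself is not quoted in this thread, so the function name and the default below are assumptions, not the script's actual code):

def backup_args(qemu_is_new, use_copy_range):
    # Only pass the x-* options to binaries known to support them;
    # 'qemu_is_new' stands in for the "new" label discussed above.
    args = {'sync': 'full'}
    if qemu_is_new:
        args['x-use-copy-range'] = use_copy_range
        args['x-max-workers'] = 64
    return args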



* Re: [PATCH v2 04/20] block/block-copy: More explicit call_state
  2020-07-17 13:45   ` Max Reitz
@ 2020-09-18 20:11     ` Vladimir Sementsov-Ogievskiy
  2020-09-21  8:54       ` Max Reitz
  0 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-09-18 20:11 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

17.07.2020 16:45, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> Refactor common path to use BlockCopyCallState pointer as parameter, to
>> prepare it for use in asynchronous block-copy (at least, we'll need to
>> run block-copy in a coroutine, passing the whole parameters as one
>> pointer).
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/block-copy.c | 51 ++++++++++++++++++++++++++++++++++------------
>>   1 file changed, 38 insertions(+), 13 deletions(-)
>>
>> diff --git a/block/block-copy.c b/block/block-copy.c
>> index 43a018d190..75882a094c 100644
>> --- a/block/block-copy.c
>> +++ b/block/block-copy.c
> 
> [...]
> 
>> @@ -646,16 +653,16 @@ out:
>>    * it means that some I/O operation failed in context of _this_ block_copy call,
>>    * not some parallel operation.
>>    */
>> -int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
>> -                            bool *error_is_read)
>> +static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>>   {
>>       int ret;
>>   
>>       do {
>> -        ret = block_copy_dirty_clusters(s, offset, bytes, error_is_read);
>> +        ret = block_copy_dirty_clusters(call_state);
> 
> It’s possible that much of this code will change in a future patch of
> this series, but as it is, it might be nice to make
> block_copy_dirty_clusters’s argument a const pointer so it’s clear that
> the call to block_copy_wait_one() below will use the original @offset
> and @bytes values.

Hm. I'm trying this, and it doesn't work:

block_copy_task_entry() wants to change call_state:

    t->call_state->failed = true;

> 
> *shrug*
> 
> Reviewed-by: Max Reitz <mreitz@redhat.com>
> 
>>   
>>           if (ret == 0) {
>> -            ret = block_copy_wait_one(s, offset, bytes);
>> +            ret = block_copy_wait_one(call_state->s, call_state->offset,
>> +                                      call_state->bytes);
>>           }
>>   
>>           /*
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v2 04/20] block/block-copy: More explicit call_state
  2020-09-18 20:11     ` Vladimir Sementsov-Ogievskiy
@ 2020-09-21  8:54       ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-09-21  8:54 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow



On 18.09.20 22:11, Vladimir Sementsov-Ogievskiy wrote:
> 17.07.2020 16:45, Max Reitz wrote:
>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>> Refactor common path to use BlockCopyCallState pointer as parameter, to
>>> prepare it for use in asynchronous block-copy (at least, we'll need to
>>> run block-copy in a coroutine, passing the whole parameters as one
>>> pointer).
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   block/block-copy.c | 51 ++++++++++++++++++++++++++++++++++------------
>>>   1 file changed, 38 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/block/block-copy.c b/block/block-copy.c
>>> index 43a018d190..75882a094c 100644
>>> --- a/block/block-copy.c
>>> +++ b/block/block-copy.c
>>
>> [...]
>>
>>> @@ -646,16 +653,16 @@ out:
>>>    * it means that some I/O operation failed in context of _this_
>>> block_copy call,
>>>    * not some parallel operation.
>>>    */
>>> -int coroutine_fn block_copy(BlockCopyState *s, int64_t offset,
>>> int64_t bytes,
>>> -                            bool *error_is_read)
>>> +static int coroutine_fn block_copy_common(BlockCopyCallState
>>> *call_state)
>>>   {
>>>       int ret;
>>>         do {
>>> -        ret = block_copy_dirty_clusters(s, offset, bytes,
>>> error_is_read);
>>> +        ret = block_copy_dirty_clusters(call_state);
>>
>> It’s possible that much of this code will change in a future patch of
>> this series, but as it is, it might be nice to make
>> block_copy_dirty_clusters’s argument a const pointer so it’s clear that
>> the call to block_copy_wait_one() below will use the original @offset
>> and @bytes values.
> 
> Hm. I'm trying this, and it doesn't work:
> 
> block_copy_task_entry() wants to change call_state:
> 
>    t->call_state->failed = true;

Too bad :)

>> *shrug*
>>
>> Reviewed-by: Max Reitz <mreitz@redhat.com>
>>
>>>             if (ret == 0) {
>>> -            ret = block_copy_wait_one(s, offset, bytes);
>>> +            ret = block_copy_wait_one(call_state->s,
>>> call_state->offset,
>>> +                                      call_state->bytes);
>>>           }
>>>             /*
>>
> 
> 




* Re: [PATCH v2 17/20] backup: move to block-copy
  2020-07-23  9:47   ` Max Reitz
@ 2020-09-21 13:58     ` Vladimir Sementsov-Ogievskiy
  2020-10-26 15:18     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-09-21 13:58 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

23.07.2020 12:47, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> This brings async request handling and block-status driven chunk sizes
>> to backup out of the box, which improves backup performance.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/block-copy.h |   9 +--
>>   block/backup.c             | 145 +++++++++++++++++++------------------
>>   block/block-copy.c         |  21 ++----
>>   3 files changed, 83 insertions(+), 92 deletions(-)
> 
> This patch feels like it should be multiple ones.  I don’t see why a
> patch that lets backup use block-copy (more) should have to modify the
> block-copy code.
> 
> More specifically: I think that block_copy_set_progress_callback()
> should be removed in a separate patch.  Also, moving
> @cb_opaque/@progress_opaque from BlockCopyState into BlockCopyCallState
> seems like a separate patch to me, too.
> 
> (I presume if the cb had had its own opaque object from patch 5 on,
> there wouldn’t be this problem now.  We’d just stop using the progress
> callback in backup, and remove it in one separate patch.)
> 
> [...]
> 
>> diff --git a/block/backup.c b/block/backup.c
>> index ec2676abc2..59c00f5293 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -44,42 +44,19 @@ typedef struct BackupBlockJob {
>>       BlockdevOnError on_source_error;
>>       BlockdevOnError on_target_error;
>>       uint64_t len;
>> -    uint64_t bytes_read;
>>       int64_t cluster_size;
>>       int max_workers;
>>       int64_t max_chunk;
>>   
>>       BlockCopyState *bcs;
>> +
>> +    BlockCopyCallState *bcs_call;
> 
> Can this be more descriptive?  E.g. background_bcs?  bg_bcs_call?  bg_bcsc?
> 
>> +    int ret;
>> +    bool error_is_read;
>>   } BackupBlockJob;
>>   
>>   static const BlockJobDriver backup_job_driver;
>>   
> 
> [...]
> 
>>   static int coroutine_fn backup_loop(BackupBlockJob *job)
>>   {
>> -    bool error_is_read;
>> -    int64_t offset;
>> -    BdrvDirtyBitmapIter *bdbi;
>> -    int ret = 0;
>> +    while (true) { /* retry loop */
>> +        assert(!job->bcs_call);
>> +        job->bcs_call = block_copy_async(job->bcs, 0,
>> +                                         QEMU_ALIGN_UP(job->len,
>> +                                                       job->cluster_size),
>> +                                         true, job->max_workers, job->max_chunk,
>> +                                         backup_block_copy_callback, job);
>>   
>> -    bdbi = bdrv_dirty_iter_new(block_copy_dirty_bitmap(job->bcs));
>> -    while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
>> -        do {
>> -            if (yield_and_check(job)) {
>> -                goto out;
>> +        while (job->bcs_call && !job->common.job.cancelled) {
>> +            /* wait and handle pauses */
> 
> Doesn’t someone need to reset BlockCopyCallState.cancelled when the job
> is unpaused?  I can’t see anyone doing that.
> 
> Well, or even just reentering the block-copy operation after
> backup_pause() has cancelled it.  Is there some magic I’m missing?
> 
> Does pausing (which leads to cancelling the block-copy operation) just
> yield to the CB being invoked, so the copy operation is considered done,
> and then we just enter the next iteration of the loop and try again?

Yes, that's my idea: we cancel on pause and just run a new block_copy operation
on resume.

> But cancelling the block-copy operation leads to it returning 0, AFAIR,
> so...

Looks like it should be fixed. By returning ECANCELED or by checking the bitmap.

> 
>> +
>> +            job_pause_point(&job->common.job);
>> +
>> +            if (job->bcs_call && !job->common.job.cancelled) {
>> +                job_yield(&job->common.job);
>>               }
>> -            ret = backup_do_cow(job, offset, job->cluster_size, &error_is_read);
>> -            if (ret < 0 && backup_error_action(job, error_is_read, -ret) ==
>> -                           BLOCK_ERROR_ACTION_REPORT)
>> -            {
>> -                goto out;
>> +        }
>> +
>> +        if (!job->bcs_call && job->ret == 0) {
>> +            /* Success */
>> +            return 0;
> 
> ...I would assume we return here when the job is paused.
> 
>> +        }
>> +
>> +        if (job->common.job.cancelled) {
>> +            if (job->bcs_call) {
>> +                block_copy_cancel(job->bcs_call);
>>               }
>> -        } while (ret < 0);
>> +            return 0;
>> +        }
>> +
>> +        if (!job->bcs_call && job->ret < 0 &&
> 
> Is it even possible for bcs_call to be non-NULL here?
> 
>> +            (backup_error_action(job, job->error_is_read, -job->ret) ==
>> +             BLOCK_ERROR_ACTION_REPORT))
>> +        {
>> +            return job->ret;
>> +        }
> 
> So if we get an error, and the error action isn’t REPORT, we just try
> the whole operation again?  That doesn’t sound very IGNORE-y to me.

Not the whole thing. We have the dirty bitmap: clusters that were already copied are not touched again.
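
To illustrate with a self-contained toy model (plain Python, not QEMU code): clusters are dropped from the dirty set as soon as they are copied, so each pass of the retry loop only revisits what is still dirty.

import random

def block_copy_once(dirty, fail_rate=0.3):
    # Copy every currently-dirty cluster; clusters that fail stay dirty.
    for cluster in sorted(dirty):
        if random.random() >= fail_rate:
            dirty.discard(cluster)   # copied successfully: bit cleared

def backup_with_retries(nr_clusters):
    dirty = set(range(nr_clusters))
    attempts = 0
    while dirty:                     # the retry loop never re-copies
        block_copy_once(dirty)       # clusters that already succeeded
        attempts += 1
    return attempts

print(backup_with_retries(16))       # typically finishes in a few attempts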



-- 
Best regards,
Vladimir



* Re: [PATCH v2 07/20] block/block-copy: add ratelimit to block-copy
  2020-07-22 11:05   ` Max Reitz
@ 2020-09-25 18:19     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-09-25 18:19 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

22.07.2020 14:05, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> We are going to directly use one async block-copy operation for backup
>> job, so we need rate limitator.
> 
> %s/limitator/limiter/g, I think.
> 
>> We want to maintain current backup behavior: only background copying is
>> limited and copy-before-write operations only participate in limit
>> calculation. Therefore we need one rate limitator for block-copy state
>> and boolean flag for block-copy call state for actual limitation.
>>
>> Note, that we can't just calculate each chunk in limitator after
>> successful copying: it will not save us from starting a lot of async
>> sub-requests which will exceed limit too much. Instead let's use the
>> following scheme on sub-request creation:
>> 1. If at the moment limit is not exceeded, create the request and
>> account it immediately.
>> 2. If at the moment limit is already exceeded, drop create sub-request
>> and handle limit instead (by sleep).
>> With this approach we'll never exceed the limit more than by one
>> sub-request (which pretty much matches current backup behavior).
> 
> Sounds reasonable.
> 
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/block-copy.h |  8 +++++++
>>   block/block-copy.c         | 44 ++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 52 insertions(+)
>>
>> diff --git a/include/block/block-copy.h b/include/block/block-copy.h
>> index 600984c733..d40e691123 100644
>> --- a/include/block/block-copy.h
>> +++ b/include/block/block-copy.h
>> @@ -59,6 +59,14 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
>>                                        int64_t max_chunk,
>>                                        BlockCopyAsyncCallbackFunc cb);
>>   
>> +/*
>> + * Set speed limit for block-copy instance. All block-copy operations related to
>> + * this BlockCopyState will participate in speed calculation, but only
>> + * block_copy_async calls with @ratelimit=true will be actually limited.
>> + */
>> +void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
>> +                          uint64_t speed);
>> +
>>   BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
>>   void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
>>   
>> diff --git a/block/block-copy.c b/block/block-copy.c
>> index 4114d1fd25..851d9c8aaf 100644
>> --- a/block/block-copy.c
>> +++ b/block/block-copy.c
>> @@ -26,6 +26,7 @@
>>   #define BLOCK_COPY_MAX_BUFFER (1 * MiB)
>>   #define BLOCK_COPY_MAX_MEM (128 * MiB)
>>   #define BLOCK_COPY_MAX_WORKERS 64
>> +#define BLOCK_COPY_SLICE_TIME 100000000ULL /* ns */
>>   
>>   static coroutine_fn int block_copy_task_entry(AioTask *task);
>>   
>> @@ -36,11 +37,13 @@ typedef struct BlockCopyCallState {
>>       int64_t bytes;
>>       int max_workers;
>>       int64_t max_chunk;
>> +    bool ratelimit;
>>       BlockCopyAsyncCallbackFunc cb;
>>   
>>       /* State */
>>       bool failed;
>>       bool finished;
>> +    QemuCoSleepState *sleep_state;
>>   
>>       /* OUT parameters */
>>       bool error_is_read;
>> @@ -103,6 +106,9 @@ typedef struct BlockCopyState {
>>       void *progress_opaque;
>>   
>>       SharedResource *mem;
>> +
>> +    uint64_t speed;
>> +    RateLimit rate_limit;
>>   } BlockCopyState;
>>   
>>   static BlockCopyTask *find_conflicting_task(BlockCopyState *s,
>> @@ -611,6 +617,21 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
>>           }
>>           task->zeroes = ret & BDRV_BLOCK_ZERO;
>>   
>> +        if (s->speed) {
>> +            if (call_state->ratelimit) {
>> +                uint64_t ns = ratelimit_calculate_delay(&s->rate_limit, 0);
>> +                if (ns > 0) {
>> +                    block_copy_task_end(task, -EAGAIN);
>> +                    g_free(task);
>> +                    qemu_co_sleep_ns_wakeable(QEMU_CLOCK_REALTIME, ns,
>> +                                              &call_state->sleep_state);
>> +                    continue;
>> +                }
>> +            }
>> +
>> +            ratelimit_calculate_delay(&s->rate_limit, task->bytes);
>> +        }
>> +
> 
> Looks good.
> 
>>           trace_block_copy_process(s, task->offset);
>>   
>>           co_get_from_shres(s->mem, task->bytes);
>> @@ -649,6 +670,13 @@ out:
>>       return ret < 0 ? ret : found_dirty;
>>   }
>>   
>> +static void block_copy_kick(BlockCopyCallState *call_state)
>> +{
>> +    if (call_state->sleep_state) {
>> +        qemu_co_sleep_wake(call_state->sleep_state);
>> +    }
>> +}
>> +
>>   /*
>>    * block_copy_common
>>    *
>> @@ -729,6 +757,7 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
>>           .s = s,
>>           .offset = offset,
>>           .bytes = bytes,
>> +        .ratelimit = ratelimit,
> 
> Hm, same problem/question as in patch 6: Should the @ratelimit parameter
> really be added in patch 5 if it’s used only now?
> 
>>           .cb = cb,
>>           .max_workers = max_workers ?: BLOCK_COPY_MAX_WORKERS,
>>           .max_chunk = max_chunk,
>> @@ -752,3 +781,18 @@ void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip)
>>   {
>>       s->skip_unallocated = skip;
>>   }
>> +
>> +void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
>> +                          uint64_t speed)
>> +{
>> +    uint64_t old_speed = s->speed;
>> +
>> +    s->speed = speed;
>> +    if (speed > 0) {
>> +        ratelimit_set_speed(&s->rate_limit, speed, BLOCK_COPY_SLICE_TIME);
>> +    }
>> +
>> +    if (call_state && old_speed && (speed > old_speed || speed == 0)) {
>> +        block_copy_kick(call_state);
>> +    }
>> +}
> 
> Hm.  I’m interested in seeing how this is going to be used, i.e. what
> callers will pass for @call_state.  I suppose it’s going to be the
> background operation for the whole device, but I wonder whether it
> actually makes sense to pass it.  I mean, the caller could just call
> block_copy_kick() itself (unconditionally, because it’ll never hurt, I
> think).
> 

Yes, that's weird. I think the correct approach is to have a list of call-states in the block-copy state and kick them all on speed update.
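
As an aside, the accounting scheme from the commit message quoted above can be shown with a small self-contained Python toy (an average-rate stand-in, not QEMU's util/ratelimit.h): probe first, sleep if the limit is already exceeded, otherwise account the whole chunk up front and dispatch it, so the limit is never exceeded by more than one in-flight chunk.

import time

class ToyRateLimit:
    def __init__(self, speed):
        self.speed = speed               # bytes per second
        self.start = time.monotonic()
        self.dispatched = 0              # bytes accounted so far

    def delay(self, n):
        # Account n bytes and return how long the caller should sleep to
        # keep the average rate at or below `speed` (0.0 means "go ahead").
        self.dispatched += n
        ahead = self.dispatched / self.speed - (time.monotonic() - self.start)
        return max(0.0, ahead)

def dispatch(limiter, chunk_sizes):
    for nbytes in chunk_sizes:
        while True:
            wait = limiter.delay(0)      # probe: limit already exceeded?
            if wait == 0:
                break
            time.sleep(wait)             # yes: don't start a new worker, sleep
        limiter.delay(nbytes)            # no: account the whole chunk now...
        # ... and start the async sub-request for it here.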


-- 
Best regards,
Vladimir



* Re: [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters
  2020-07-22 12:22   ` Max Reitz
  2020-07-23  7:43     ` Max Reitz
@ 2020-10-22 20:35     ` Vladimir Sementsov-Ogievskiy
  2020-11-04 17:19       ` Max Reitz
  1 sibling, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-10-22 20:35 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

22.07.2020 15:22, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> Add new parameters to configure future backup features. The patch
>> doesn't introduce aio backup requests (so we actually have only one
>> worker) neither requests larger than one cluster. Still, formally we
>> satisfy these maximums anyway, so add the parameters now, to facilitate
>> further patch which will really change backup job behavior.
>>
>> Options are added with x- prefixes, as the only use for them are some
>> very conservative iotests which will be updated soon.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   qapi/block-core.json      |  9 ++++++++-
>>   include/block/block_int.h |  7 +++++++
>>   block/backup.c            | 21 +++++++++++++++++++++
>>   block/replication.c       |  2 +-
>>   blockdev.c                |  5 +++++
>>   5 files changed, 42 insertions(+), 2 deletions(-)
>>

[..]

>> @@ -422,6 +436,11 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>>       if (cluster_size < 0) {
>>           goto error;
>>       }
>> +    if (max_chunk && max_chunk < cluster_size) {
>> +        error_setg(errp, "Required max-chunk (%" PRIi64") is less than backup "
> 
> (missing a space after PRIi64)
> 
>> +                   "cluster size (%" PRIi64 ")", max_chunk, cluster_size);
> 
> Should this be noted in the QAPI documentation?

Hmm.. It makes sense, but I don't know what to write: should be >= job cluster_size? But then I'll have to describe the logic of backup_calculate_cluster_size()...

>  (And perhaps the fact
> that without copy offloading, we’ll never copy anything bigger than one
> cluster at a time anyway?)

This is a parameter for background copying. Look at block_copy_task_create(): if the call_state has its own max_chunk, it's used instead of the common copy_size (derived from cluster_size). But as of this patch the background process through async block-copy is not yet implemented, so the parameter doesn't work yet, as described in the commit message.
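
Roughly, as a sketch of that selection logic (illustrative Python, not the literal code in block/block-copy.c):

def task_chunk_size(copy_size, call_max_chunk, remaining):
    # copy_size is the common chunk derived from the cluster size (1 MiB by
    # default for background copying); a per-call max_chunk, when set,
    # overrides it.  Copy-before-write calls leave max_chunk unset.
    limit = call_max_chunk if call_max_chunk else copy_size
    return min(limit, remaining)

# 64k-cluster qcow2: the default background chunk would be 1 MiB, but
# x-max-chunk=64k brings one iteration back down to 64 KiB.
print(task_chunk_size(1024 * 1024, 64 * 1024, 10 * 1024 * 1024))   # 65536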

> 
>> +        return NULL;
>> +    }
>>   
>>       /*

[..]


-- 
Best regards,
Vladimir



* Re: [PATCH v2 08/20] block/block-copy: add block_copy_cancel
  2020-07-22 11:28   ` Max Reitz
@ 2020-10-22 20:50     ` Vladimir Sementsov-Ogievskiy
  2020-10-23  9:49       ` Vladimir Sementsov-Ogievskiy
  2020-11-04 17:26       ` Max Reitz
  0 siblings, 2 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-10-22 20:50 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

22.07.2020 14:28, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> Add function to cancel running async block-copy call. It will be used
>> in backup.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/block-copy.h |  7 +++++++
>>   block/block-copy.c         | 22 +++++++++++++++++++---
>>   2 files changed, 26 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/block/block-copy.h b/include/block/block-copy.h
>> index d40e691123..370a194d3c 100644
>> --- a/include/block/block-copy.h
>> +++ b/include/block/block-copy.h
>> @@ -67,6 +67,13 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
>>   void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
>>                             uint64_t speed);
>>   
>> +/*
>> + * Cancel running block-copy call.
>> + * Cancel leaves block-copy state valid: dirty bits are correct and you may use
>> + * cancel + <run block_copy with same parameters> to emulate pause/resume.
>> + */
>> +void block_copy_cancel(BlockCopyCallState *call_state);
>> +
>>   BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
>>   void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
>>   
>> diff --git a/block/block-copy.c b/block/block-copy.c
>> index 851d9c8aaf..b551feb6c2 100644
>> --- a/block/block-copy.c
>> +++ b/block/block-copy.c
>> @@ -44,6 +44,8 @@ typedef struct BlockCopyCallState {
>>       bool failed;
>>       bool finished;
>>       QemuCoSleepState *sleep_state;
>> +    bool cancelled;
>> +    Coroutine *canceller;
>>   
>>       /* OUT parameters */
>>       bool error_is_read;
>> @@ -582,7 +584,7 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
>>       assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
>>       assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
>>   
>> -    while (bytes && aio_task_pool_status(aio) == 0) {
>> +    while (bytes && aio_task_pool_status(aio) == 0 && !call_state->cancelled) {
>>           BlockCopyTask *task;
>>           int64_t status_bytes;
>>   
>> @@ -693,7 +695,7 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>>       do {
>>           ret = block_copy_dirty_clusters(call_state);
>>   
>> -        if (ret == 0) {
>> +        if (ret == 0 && !call_state->cancelled) {
>>               ret = block_copy_wait_one(call_state->s, call_state->offset,
>>                                         call_state->bytes);
>>           }
>> @@ -707,13 +709,18 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>>            * 2. We have waited for some intersecting block-copy request
>>            *    It may have failed and produced new dirty bits.
>>            */
>> -    } while (ret > 0);
>> +    } while (ret > 0 && !call_state->cancelled);
> 
> Would it be cleaner if block_copy_dirty_cluster() just returned
> -ECANCELED?  Or would that pose a problem for its callers or the async
> callback?
> 

I'd prefer not to merge the I/O return code with block-copy logic: who knows what underlying operations may return.. Can't it be _another_ ECANCELED?
And it would be just sugar for the block_copy_dirty_clusters() call; I'll have to check ->cancelled after block_copy_wait_one() anyway.
Also, for the next version I'll try to make it more obvious that a finished block-copy call is in one of three states:
  - success
  - failed
  - cancelled

Hmm. Also, cancelled should be OK for copy-on-write operations in the filter, it just means that we don't need to care anymore.
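
Something like the following, as a Python sketch of the idea with made-up names (the real code is C, of course):

from enum import Enum, auto

class BlockCopyCallStatus(Enum):
    SUCCESS = auto()
    FAILED = auto()
    CANCELLED = auto()

def call_status(finished, cancelled, ret):
    # Only meaningful for a finished call; 'cancelled' wins over the return
    # value, so callers don't have to guess what a 0 from a cancelled call means.
    assert finished
    if cancelled:
        return BlockCopyCallStatus.CANCELLED
    return BlockCopyCallStatus.SUCCESS if ret == 0 else BlockCopyCallStatus.FAILED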

>>       if (call_state->cb) {
>>           call_state->cb(ret, call_state->error_is_read,
>>                          call_state->s->progress_opaque);
>>       }
>>   
>> +    if (call_state->canceller) {
>> +        aio_co_wake(call_state->canceller);
>> +        call_state->canceller = NULL;
>> +    }
>> +
>>       call_state->finished = true;
>>   
>>       return ret;
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v2 13/20] iotests: 129: prepare for backup over block-copy
  2020-07-23  8:03   ` Max Reitz
@ 2020-10-22 21:10     ` Vladimir Sementsov-Ogievskiy
  2020-11-04 17:30       ` Max Reitz
  0 siblings, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-10-22 21:10 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

23.07.2020 11:03, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> After introducing parallel async copy requests instead of plain
>> cluster-by-cluster copying loop, backup job may finish earlier than
>> final assertion in do_test_stop. Let's require slow backup explicitly
>> by specifying speed parameter.
> 
> Isn’t the problem really that block_set_io_throttle does absolutely
> nothing?  (Which is a long-standing problem with 129.  I personally just
> never run it, honestly.)

Hmm.. is it better to drop test_drive_backup() from here?

> 
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   tests/qemu-iotests/129 | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
>> index 4db5eca441..bca56b589d 100755
>> --- a/tests/qemu-iotests/129
>> +++ b/tests/qemu-iotests/129
>> @@ -76,7 +76,7 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
>>       def test_drive_backup(self):
>>           self.do_test_stop("drive-backup", device="drive0",
>>                             target=self.target_img,
>> -                          sync="full")
>> +                          sync="full", speed=1024)
>>   
>>       def test_block_commit(self):
>>           self.do_test_stop("block-commit", device="drive0")
>>
> 
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v2 14/20] iotests: 185: prepare for backup over block-copy
  2020-07-23  8:19   ` Max Reitz
@ 2020-10-22 21:16     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-10-22 21:16 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

23.07.2020 11:19, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> The further change of moving backup to be a on block-copy call will
> 
> -on?

one :)

> 
>> make copying chunk-size and cluster-size a separate things. So, even
> 
> s/a/two/
> 
>> with 64k cluster sized qcow2 image, default chunk would be 1M.
>> 185 test however assumes, that with speed limited to 64K, one iteration
>> would result in offset=64K. It will change, as first iteration would
>> result in offset=1M independently of speed.
>>
>> So, let's explicitly specify, what test wants: set max-chunk to 64K, so
>> that one iteration is 64K. Note, that we don't need to limit
>> max-workers, as block-copy rate limitator will handle the situation and
> 
> *limitator
> 
>> wouldn't start new workers when speed limit is obviously reached.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   tests/qemu-iotests/185     | 3 ++-
>>   tests/qemu-iotests/185.out | 2 +-
>>   2 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/tests/qemu-iotests/185 b/tests/qemu-iotests/185
>> index fd5e6ebe11..6afb3fc82f 100755
>> --- a/tests/qemu-iotests/185
>> +++ b/tests/qemu-iotests/185
>> @@ -182,7 +182,8 @@ _send_qemu_cmd $h \
>>                         'target': '$TEST_IMG.copy',
>>                         'format': '$IMGFMT',
>>                         'sync': 'full',
>> -                      'speed': 65536 } }" \
>> +                      'speed': 65536,
>> +                      'x-max-chunk': 65536 } }" \
> 
> Out of curiosity, would it also suffice to disable copy offloading?

Note that x-max-chunk works even with copy offloading enabled: it sets a maximum only for background copying, not for all operations.
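
Related to the commit message quoted above, the effect on the first iteration is pure arithmetic (numbers taken from the commit message, no QEMU involved):

speed = 64 * 1024            # job speed limit, bytes/s
default_chunk = 1024 * 1024  # background chunk without x-max-chunk
max_chunk = 64 * 1024        # with x-max-chunk=65536

# The first iteration copies one chunk regardless of speed, so the offset
# the test observes after it is simply the chunk size:
print(default_chunk)   # 1048576 -> would break 185's expectations
print(max_chunk)       # 65536   -> matches the expected offset=64K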

> 
> But anyway:
> 
> Reviewed-by: Max Reitz <mreitz@redhat.com>
> 
>>       "return"
>>   
>>   # If we don't sleep here 'quit' command races with disk I/O
>> diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
>> index ac5ab16bc8..5232647972 100644
>> --- a/tests/qemu-iotests/185.out
>> +++ b/tests/qemu-iotests/185.out
>> @@ -61,7 +61,7 @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 l
>>   
>>   { 'execute': 'qmp_capabilities' }
>>   {"return": {}}
>> -{ 'execute': 'drive-backup', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536 } }
>> +{ 'execute': 'drive-backup', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536, 'x-max-chunk': 65536 } }
>>   Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16 compression_type=zlib
>>   {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
>>   {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
>>
> 
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v2 15/20] iotests: 219: prepare for backup over block-copy
  2020-07-23  8:35   ` Max Reitz
@ 2020-10-22 21:20     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-10-22 21:20 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

23.07.2020 11:35, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> The further change of moving backup to be a on block-copy call will
> 
> -on?
> 
>> make copying chunk-size and cluster-size a separate things. So, even
> 
> s/a/two/
> 
>> with 64k cluster sized qcow2 image, default chunk would be 1M.
>> Test 219 depends on specified chunk-size. Update it for explicit
>> chunk-size for backup as for mirror.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   tests/qemu-iotests/219 | 13 +++++++------
>>   1 file changed, 7 insertions(+), 6 deletions(-)
>>
>> diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
>> index db272c5249..2bbed28f39 100755
>> --- a/tests/qemu-iotests/219
>> +++ b/tests/qemu-iotests/219
>> @@ -203,13 +203,13 @@ with iotests.FilePath('disk.img') as disk_path, \
>>       # but related to this also automatic state transitions like job
>>       # completion), but still get pause points often enough to avoid making this
>>       # test very slow, it's important to have the right ratio between speed and
>> -    # buf_size.
>> +    # copy-chunk-size.
>>       #
>> -    # For backup, buf_size is hard-coded to the source image cluster size (64k),
>> -    # so we'll pick the same for mirror. The slice time, i.e. the granularity
>> -    # of the rate limiting is 100ms. With a speed of 256k per second, we can
>> -    # get four pause points per second. This gives us 250ms per iteration,
>> -    # which should be enough to stay deterministic.
>> +    # Chose 64k copy-chunk-size both for mirror (by buf_size) and backup (by
>> +    # x-max-chunk). The slice time, i.e. the granularity of the rate limiting
>> +    # is 100ms. With a speed of 256k per second, we can get four pause points
>> +    # per second. This gives us 250ms per iteration, which should be enough to
>> +    # stay deterministic.
> 
> Don’t we also have to limit the number of workers to 1 so we actually
> keep 250 ms per iteration instead of just finishing four requests
> immediately, then pausing for a second?

The block-copy rate limiter works well enough: it will not start too many requests.
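
The arithmetic behind that, with the numbers from the test comment quoted above (plain Python, purely illustrative):

speed = 256 * 1024        # bytes per second (job speed limit)
chunk = 64 * 1024         # buf_size for mirror, x-max-chunk for backup
slice_time = 0.1          # rate-limit granularity, seconds

print(chunk / speed)                  # 0.25 -> 250 ms per 64k iteration,
                                      #         i.e. four pause points per second
print(speed * slice_time / chunk)     # 0.4  -> less than one chunk fits into a
                                      #         100 ms slice, so the limiter will
                                      #         not hand out work to extra workers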

> 
>>       test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
>>           'device': 'drive0-node',
>> @@ -226,6 +226,7 @@ with iotests.FilePath('disk.img') as disk_path, \
>>                   'target': copy_path,
>>                   'sync': 'full',
>>                   'speed': 262144,
>> +                'x-max-chunk': 65536,
>>                   'auto-finalize': auto_finalize,
>>                   'auto-dismiss': auto_dismiss,
>>               })
>>
> 
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v2 08/20] block/block-copy: add block_copy_cancel
  2020-10-22 20:50     ` Vladimir Sementsov-Ogievskiy
@ 2020-10-23  9:49       ` Vladimir Sementsov-Ogievskiy
  2020-11-04 17:26       ` Max Reitz
  1 sibling, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-10-23  9:49 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

22.10.2020 23:50, Vladimir Sementsov-Ogievskiy wrote:
> 22.07.2020 14:28, Max Reitz wrote:
>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>> Add function to cancel running async block-copy call. It will be used
>>> in backup.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   include/block/block-copy.h |  7 +++++++
>>>   block/block-copy.c         | 22 +++++++++++++++++++---
>>>   2 files changed, 26 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/include/block/block-copy.h b/include/block/block-copy.h
>>> index d40e691123..370a194d3c 100644
>>> --- a/include/block/block-copy.h
>>> +++ b/include/block/block-copy.h
>>> @@ -67,6 +67,13 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
>>>   void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
>>>                             uint64_t speed);
>>> +/*
>>> + * Cancel running block-copy call.
>>> + * Cancel leaves block-copy state valid: dirty bits are correct and you may use
>>> + * cancel + <run block_copy with same parameters> to emulate pause/resume.
>>> + */
>>> +void block_copy_cancel(BlockCopyCallState *call_state);
>>> +
>>>   BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
>>>   void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
>>> diff --git a/block/block-copy.c b/block/block-copy.c
>>> index 851d9c8aaf..b551feb6c2 100644
>>> --- a/block/block-copy.c
>>> +++ b/block/block-copy.c
>>> @@ -44,6 +44,8 @@ typedef struct BlockCopyCallState {
>>>       bool failed;
>>>       bool finished;
>>>       QemuCoSleepState *sleep_state;
>>> +    bool cancelled;
>>> +    Coroutine *canceller;
>>>       /* OUT parameters */
>>>       bool error_is_read;
>>> @@ -582,7 +584,7 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
>>>       assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
>>>       assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
>>> -    while (bytes && aio_task_pool_status(aio) == 0) {
>>> +    while (bytes && aio_task_pool_status(aio) == 0 && !call_state->cancelled) {
>>>           BlockCopyTask *task;
>>>           int64_t status_bytes;
>>> @@ -693,7 +695,7 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>>>       do {
>>>           ret = block_copy_dirty_clusters(call_state);
>>> -        if (ret == 0) {
>>> +        if (ret == 0 && !call_state->cancelled) {
>>>               ret = block_copy_wait_one(call_state->s, call_state->offset,
>>>                                         call_state->bytes);
>>>           }
>>> @@ -707,13 +709,18 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>>>            * 2. We have waited for some intersecting block-copy request
>>>            *    It may have failed and produced new dirty bits.
>>>            */
>>> -    } while (ret > 0);
>>> +    } while (ret > 0 && !call_state->cancelled);
>>
>> Would it be cleaner if block_copy_dirty_cluster() just returned
>> -ECANCELED?  Or would that pose a problem for its callers or the async
>> callback?
>>
> 
> I'd prefer not to merge io ret with block-copy logic: who knows what underlying operations may return.. Can't it be _another_ ECANCELED?
> And it would be just a sugar for block_copy_dirty_clusters() call, I'll have to check ->cancelled after block_copy_wait_one() anyway.
> Also, for the next version I try to make it more obvious that finished block-copy call is in one of thee states:
>   - success
>   - failed
>   - cancelled
> 
> Hmm. Also, cancelled should be OK for copy-on-write operations in filter, it just mean that we don't need to care anymore.

This is unrelated: actually, only an async block-copy call may be cancelled.

> 
>>>       if (call_state->cb) {
>>>           call_state->cb(ret, call_state->error_is_read,
>>>                          call_state->s->progress_opaque);
>>>       }
>>> +    if (call_state->canceller) {
>>> +        aio_co_wake(call_state->canceller);
>>> +        call_state->canceller = NULL;
>>> +    }
>>> +
>>>       call_state->finished = true;
>>>       return ret;
>>
> 
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v2 17/20] backup: move to block-copy
  2020-07-23  9:47   ` Max Reitz
  2020-09-21 13:58     ` Vladimir Sementsov-Ogievskiy
@ 2020-10-26 15:18     ` Vladimir Sementsov-Ogievskiy
  2020-11-04 17:45       ` Max Reitz
  1 sibling, 1 reply; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-10-26 15:18 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

23.07.2020 12:47, Max Reitz wrote:
>> +static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
>> +{
>> +    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
>> +
>> +    if (s->bcs) {
>> +        /* In block_job_create we yet don't have bcs */
> Shouldn’t hurt to make it conditional, but how can we get here in
> block_job_create()?
> 

block_job_set_speed is called from block_job_create.

-- 
Best regards,
Vladimir



* Re: [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters
  2020-10-22 20:35     ` Vladimir Sementsov-Ogievskiy
@ 2020-11-04 17:19       ` Max Reitz
  2020-11-09 11:11         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-11-04 17:19 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow

On 22.10.20 22:35, Vladimir Sementsov-Ogievskiy wrote:
> 22.07.2020 15:22, Max Reitz wrote:
>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>> Add new parameters to configure future backup features. The patch
>>> doesn't introduce aio backup requests (so we actually have only one
>>> worker) neither requests larger than one cluster. Still, formally we
>>> satisfy these maximums anyway, so add the parameters now, to facilitate
>>> further patch which will really change backup job behavior.
>>>
>>> Options are added with x- prefixes, as the only use for them are some
>>> very conservative iotests which will be updated soon.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   qapi/block-core.json      |  9 ++++++++-
>>>   include/block/block_int.h |  7 +++++++
>>>   block/backup.c            | 21 +++++++++++++++++++++
>>>   block/replication.c       |  2 +-
>>>   blockdev.c                |  5 +++++
>>>   5 files changed, 42 insertions(+), 2 deletions(-)
>>>
> 
> [..]
> 
>>> @@ -422,6 +436,11 @@ BlockJob *backup_job_create(const char *job_id,
>>> BlockDriverState *bs,
>>>       if (cluster_size < 0) {
>>>           goto error;
>>>       }
>>> +    if (max_chunk && max_chunk < cluster_size) {
>>> +        error_setg(errp, "Required max-chunk (%" PRIi64") is less
>>> than backup "
>>
>> (missing a space after PRIi64)
>>
>>> +                   "cluster size (%" PRIi64 ")", max_chunk,
>>> cluster_size);
>>
>> Should this be noted in the QAPI documentation?
> 
> Hmm.. It makes sense, but I don't know what to write: should be >= job
> cluster_size? But then I'll have to describe the logic of
> backup_calculate_cluster_size()...

Isn’t the logic basically just “cluster size of the target image file
(min 64k)”?  The other cases just cover error cases, and they always
return 64k, which would effectively be documented by stating that 64k is
the minimum.

But in the common cases (qcow2 or raw), we’ll never get an error anyway.

I think it’d be good to describe the cluster size somewhere, because
otherwise I find it a bit malicious to throw an error at the user that
they could not have anticipated because they have no idea what valid
values for the parameter are (until they try).
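
A minimal sketch of that rule as read here (64k floor, 64k fallback when the
target's cluster size cannot be determined; hypothetical helper name, not the
real backup_calculate_cluster_size()):

#include <stdint.h>

#define BACKUP_CLUSTER_SIZE_DEFAULT (64 * 1024)

static int64_t backup_cluster_size_sketch(int64_t target_cluster_size)
{
    if (target_cluster_size <= 0) {
        /* unknown / error case: fall back to the default */
        return BACKUP_CLUSTER_SIZE_DEFAULT;
    }
    return target_cluster_size > BACKUP_CLUSTER_SIZE_DEFAULT
           ? target_cluster_size
           : BACKUP_CLUSTER_SIZE_DEFAULT;
}

Documenting something like this would also make the max-chunk error above
predictable: max-chunk simply has to be at least this value.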

>>  (And perhaps the fact
>> that without copy offloading, we’ll never copy anything bigger than one
>> cluster at a time anyway?)
> 
> This is a parameter for background copying. Look at
> block_copy_task_create(): if call_state has its own max_chunk, it's used
> instead of the common copy_size (derived from cluster_size). But at the
> point of this patch, the background process through async block-copy is
> not yet implemented, so the parameter doesn't work, which is described
> in the commit message.
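
A minimal sketch of the selection described in the quote (hypothetical helper
name; per the explanation above, the real logic lives in
block_copy_task_create()):

#include <stdint.h>

/* If the call has its own max_chunk, it is used for background-copy
 * requests; otherwise the common copy_size derived from the cluster
 * size applies. */
static int64_t copy_chunk_size_sketch(int64_t common_copy_size,
                                      int64_t call_max_chunk)
{
    return call_max_chunk ? call_max_chunk : common_copy_size;
}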

Ah, OK.  Right.

Max




* Re: [PATCH v2 08/20] block/block-copy: add block_copy_cancel
  2020-10-22 20:50     ` Vladimir Sementsov-Ogievskiy
  2020-10-23  9:49       ` Vladimir Sementsov-Ogievskiy
@ 2020-11-04 17:26       ` Max Reitz
  1 sibling, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-11-04 17:26 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow

On 22.10.20 22:50, Vladimir Sementsov-Ogievskiy wrote:
> 22.07.2020 14:28, Max Reitz wrote:
>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>> Add function to cancel running async block-copy call. It will be used
>>> in backup.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   include/block/block-copy.h |  7 +++++++
>>>   block/block-copy.c         | 22 +++++++++++++++++++---
>>>   2 files changed, 26 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/include/block/block-copy.h b/include/block/block-copy.h
>>> index d40e691123..370a194d3c 100644
>>> --- a/include/block/block-copy.h
>>> +++ b/include/block/block-copy.h
>>> @@ -67,6 +67,13 @@ BlockCopyCallState
>>> *block_copy_async(BlockCopyState *s,
>>>   void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState
>>> *call_state,
>>>                             uint64_t speed);
>>>   +/*
>>> + * Cancel running block-copy call.
>>> + * Cancel leaves block-copy state valid: dirty bits are correct and
>>> you may use
>>> + * cancel + <run block_copy with same parameters> to emulate
>>> pause/resume.
>>> + */
>>> +void block_copy_cancel(BlockCopyCallState *call_state);
>>> +
>>>   BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
>>>   void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
>>>   diff --git a/block/block-copy.c b/block/block-copy.c
>>> index 851d9c8aaf..b551feb6c2 100644
>>> --- a/block/block-copy.c
>>> +++ b/block/block-copy.c
>>> @@ -44,6 +44,8 @@ typedef struct BlockCopyCallState {
>>>       bool failed;
>>>       bool finished;
>>>       QemuCoSleepState *sleep_state;
>>> +    bool cancelled;
>>> +    Coroutine *canceller;
>>>         /* OUT parameters */
>>>       bool error_is_read;
>>> @@ -582,7 +584,7 @@ block_copy_dirty_clusters(BlockCopyCallState
>>> *call_state)
>>>       assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
>>>       assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
>>>   -    while (bytes && aio_task_pool_status(aio) == 0) {
>>> +    while (bytes && aio_task_pool_status(aio) == 0 &&
>>> !call_state->cancelled) {
>>>           BlockCopyTask *task;
>>>           int64_t status_bytes;
>>>   @@ -693,7 +695,7 @@ static int coroutine_fn
>>> block_copy_common(BlockCopyCallState *call_state)
>>>       do {
>>>           ret = block_copy_dirty_clusters(call_state);
>>>   -        if (ret == 0) {
>>> +        if (ret == 0 && !call_state->cancelled) {
>>>               ret = block_copy_wait_one(call_state->s,
>>> call_state->offset,
>>>                                         call_state->bytes);
>>>           }
>>> @@ -707,13 +709,18 @@ static int coroutine_fn
>>> block_copy_common(BlockCopyCallState *call_state)
>>>            * 2. We have waited for some intersecting block-copy request
>>>            *    It may have failed and produced new dirty bits.
>>>            */
>>> -    } while (ret > 0);
>>> +    } while (ret > 0 && !call_state->cancelled);
>>
>> Would it be cleaner if block_copy_dirty_cluster() just returned
>> -ECANCELED?  Or would that pose a problem for its callers or the async
>> callback?
>>
> 
> I'd prefer not to mix the I/O return value with block-copy logic: who knows
> what the underlying operations may return.. Can't it be _another_ ECANCELED?
> And it would just be sugar for the block_copy_dirty_clusters() call; I'll
> have to check ->cancelled after block_copy_wait_one() anyway.
> Also, for the next version I'll try to make it more obvious that a finished
> block-copy call is in one of three states:
>  - success
>  - failed
>  - cancelled
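
A minimal sketch of those three terminal states (hypothetical struct and
helper names; the flags mirror the failed/finished/cancelled fields visible
in the quoted diff):

#include <stdbool.h>

typedef enum { COPY_SUCCESS, COPY_FAILED, COPY_CANCELLED } CopyOutcomeSketch;

typedef struct CallStateSketch {
    bool finished;
    bool failed;
    bool cancelled;
} CallStateSketch;

static CopyOutcomeSketch copy_outcome_sketch(const CallStateSketch *cs)
{
    /* only meaningful once cs->finished is true */
    if (cs->cancelled) {
        return COPY_CANCELLED;
    }
    return cs->failed ? COPY_FAILED : COPY_SUCCESS;
}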

OK.

Max




* Re: [PATCH v2 13/20] iotests: 129: prepare for backup over block-copy
  2020-10-22 21:10     ` Vladimir Sementsov-Ogievskiy
@ 2020-11-04 17:30       ` Max Reitz
  2020-11-09 12:16         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 66+ messages in thread
From: Max Reitz @ 2020-11-04 17:30 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow

On 22.10.20 23:10, Vladimir Sementsov-Ogievskiy wrote:
> 23.07.2020 11:03, Max Reitz wrote:
>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>> After introducing parallel async copy requests instead of the plain
>>> cluster-by-cluster copying loop, the backup job may finish earlier than
>>> the final assertion in do_test_stop. Let's require a slow backup
>>> explicitly by specifying the speed parameter.
>>
>> Isn’t the problem really that block_set_io_throttle does absolutely
>> nothing?  (Which is a long-standing problem with 129.  I personally just
>> never run it, honestly.)
> 
> Hmm.. is it better to drop test_drive_backup() from here?

I think the best would be to revisit this:

https://lists.nongnu.org/archive/html/qemu-block/2019-06/msg00499.html

Max




* Re: [PATCH v2 17/20] backup: move to block-copy
  2020-10-26 15:18     ` Vladimir Sementsov-Ogievskiy
@ 2020-11-04 17:45       ` Max Reitz
  0 siblings, 0 replies; 66+ messages in thread
From: Max Reitz @ 2020-11-04 17:45 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, armbru, qemu-devel, den, jsnow

On 26.10.20 16:18, Vladimir Sementsov-Ogievskiy wrote:
> 23.07.2020 12:47, Max Reitz wrote:
>>> +static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
>>> +{
>>> +    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
>>> +
>>> +    if (s->bcs) {
>>> +        /* In block_job_create we don't yet have bcs */
>> Shouldn’t hurt to make it conditional, but how can we get here in
>> block_job_create()?
>>
> 
> block_job_set_speed is called from block_job_create.

Ah, right.

Max




* Re: [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters
  2020-11-04 17:19       ` Max Reitz
@ 2020-11-09 11:11         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-11-09 11:11 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

04.11.2020 20:19, Max Reitz wrote:
> On 22.10.20 22:35, Vladimir Sementsov-Ogievskiy wrote:
>> 22.07.2020 15:22, Max Reitz wrote:
>>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>>> Add new parameters to configure future backup features. The patch
>>>> doesn't introduce aio backup requests (so we actually have only one
>>>> worker) nor requests larger than one cluster. Still, we formally
>>>> satisfy these maximums anyway, so add the parameters now, to facilitate
>>>> a further patch which will really change the backup job behavior.
>>>>
>>>> Options are added with x- prefixes, as their only use is some
>>>> very conservative iotests which will be updated soon.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>    qapi/block-core.json      |  9 ++++++++-
>>>>    include/block/block_int.h |  7 +++++++
>>>>    block/backup.c            | 21 +++++++++++++++++++++
>>>>    block/replication.c       |  2 +-
>>>>    blockdev.c                |  5 +++++
>>>>    5 files changed, 42 insertions(+), 2 deletions(-)
>>>>
>>
>> [..]
>>
>>>> @@ -422,6 +436,11 @@ BlockJob *backup_job_create(const char *job_id,
>>>> BlockDriverState *bs,
>>>>        if (cluster_size < 0) {
>>>>            goto error;
>>>>        }
>>>> +    if (max_chunk && max_chunk < cluster_size) {
>>>> +        error_setg(errp, "Required max-chunk (%" PRIi64") is less
>>>> than backup "
>>>
>>> (missing a space after PRIi64)
>>>
>>>> +                   "cluster size (%" PRIi64 ")", max_chunk,
>>>> cluster_size);
>>>
>>> Should this be noted in the QAPI documentation?
>>
>> Hmm.. It makes sense, but I don't know what to write: should be >= job
>> cluster_size? But then I'll have to describe the logic of
>> backup_calculate_cluster_size()...
> 
> Isn’t the logic basically just “cluster size of the target image file
> (min 64k)”?  The other cases just cover error cases, and they always
> return 64k, which would effectively be documented by stating that 64k is
> the minimum.
> 
> But in the common cases (qcow2 or raw), we’ll never get an error anyway.
> 
> I think it’d be good to describe the cluster size somewhere, because
> otherwise I find it a bit malicious to throw an error at the user that
> they could not have anticipated because they have no idea what valid
> values for the parameter are (until they try).

OK

> 
>>>   (And perhaps the fact
>>> that without copy offloading, we’ll never copy anything bigger than one
>>> cluster at a time anyway?)
>>
>> This is a parameter for background copying. Look at
>> block_copy_task_create(): if call_state has its own max_chunk, it's used
>> instead of the common copy_size (derived from cluster_size). But at the
>> point of this patch, the background process through async block-copy is
>> not yet implemented, so the parameter doesn't work, which is described
>> in the commit message.
> 
> Ah, OK.  Right.
> 
> Max
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v2 13/20] iotests: 129: prepare for backup over block-copy
  2020-11-04 17:30       ` Max Reitz
@ 2020-11-09 12:16         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 66+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-11-09 12:16 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, jsnow, wencongyang2, xiechanglong.d, armbru, eblake,
	qemu-devel, den

04.11.2020 20:30, Max Reitz wrote:
> On 22.10.20 23:10, Vladimir Sementsov-Ogievskiy wrote:
>> 23.07.2020 11:03, Max Reitz wrote:
>>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>>> After introducing parallel async copy requests instead of the plain
>>>> cluster-by-cluster copying loop, the backup job may finish earlier than
>>>> the final assertion in do_test_stop. Let's require a slow backup
>>>> explicitly by specifying the speed parameter.
>>>
>>> Isn’t the problem really that block_set_io_throttle does absolutely
>>> nothing?  (Which is a long-standing problem with 129.  I personally just
>>> never run it, honestly.)
>>
>> Hmm.. is it better to drop test_drive_backup() from here?
> 
> I think the best would be to revisit this:
> 
> https://lists.nongnu.org/archive/html/qemu-block/2019-06/msg00499.html
> 
> Max
> 

I've checked; no, that doesn't help. I suppose the throttling has the same defect as the
original block-job speed limit: it accounts for the throttling after the request, so if we
issue many requests simultaneously they will all be handled (and after that we'll have a
long throttling pause). The new solution for backup in this series works better with
parallel requests (see patch [PATCH v2 07/20] block/block-copy: add ratelimit to block-copy).

So, I'd keep this patch for now.
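
A minimal sketch of the difference being described (hypothetical names, not
the real QEMU throttle or ratelimit code): accounting only after a request
lets a burst of parallel requests all go through before any delay, while
charging a budget before each request keeps bursts bounded.

#include <stdbool.h>
#include <stdint.h>

typedef struct RateLimitSketch {
    int64_t budget_bytes;   /* bytes still allowed in the current time slice */
} RateLimitSketch;

/* Pre-request check: charge the budget before issuing the request and
 * tell the caller whether it must wait for the next slice instead. */
static bool ratelimit_admit_sketch(RateLimitSketch *rl, int64_t bytes)
{
    if (rl->budget_bytes < bytes) {
        return false;             /* caller should sleep until the slice resets */
    }
    rl->budget_bytes -= bytes;    /* charged up front, bounding the burst */
    return true;
}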

-- 
Best regards,
Vladimir



end of thread, other threads:[~2020-11-09 12:18 UTC | newest]

Thread overview: 66+ messages
2020-06-01 18:10 [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
2020-06-01 18:10 ` [PATCH v2 01/20] block/block-copy: block_copy_dirty_clusters: fix failure check Vladimir Sementsov-Ogievskiy
2020-06-01 18:11 ` [PATCH v2 02/20] iotests: 129 don't check backup "busy" Vladimir Sementsov-Ogievskiy
2020-07-17 12:57   ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 03/20] qapi: backup: add x-use-copy-range parameter Vladimir Sementsov-Ogievskiy
2020-07-17 13:15   ` Max Reitz
2020-07-17 15:18     ` Vladimir Sementsov-Ogievskiy
2020-06-01 18:11 ` [PATCH v2 04/20] block/block-copy: More explicit call_state Vladimir Sementsov-Ogievskiy
2020-07-17 13:45   ` Max Reitz
2020-09-18 20:11     ` Vladimir Sementsov-Ogievskiy
2020-09-21  8:54       ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 05/20] block/block-copy: implement block_copy_async Vladimir Sementsov-Ogievskiy
2020-07-17 14:00   ` Max Reitz
2020-07-17 15:24     ` Vladimir Sementsov-Ogievskiy
2020-07-21  8:43       ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 06/20] block/block-copy: add max_chunk and max_workers parameters Vladimir Sementsov-Ogievskiy
2020-07-22  9:39   ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 07/20] block/block-copy: add ratelimit to block-copy Vladimir Sementsov-Ogievskiy
2020-07-22 11:05   ` Max Reitz
2020-09-25 18:19     ` Vladimir Sementsov-Ogievskiy
2020-06-01 18:11 ` [PATCH v2 08/20] block/block-copy: add block_copy_cancel Vladimir Sementsov-Ogievskiy
2020-07-22 11:28   ` Max Reitz
2020-10-22 20:50     ` Vladimir Sementsov-Ogievskiy
2020-10-23  9:49       ` Vladimir Sementsov-Ogievskiy
2020-11-04 17:26       ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 09/20] blockjob: add set_speed to BlockJobDriver Vladimir Sementsov-Ogievskiy
2020-07-22 11:34   ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 10/20] job: call job_enter from job_user_pause Vladimir Sementsov-Ogievskiy
2020-07-22 11:49   ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 11/20] qapi: backup: add x-max-chunk and x-max-workers parameters Vladimir Sementsov-Ogievskiy
2020-06-02  8:19   ` Vladimir Sementsov-Ogievskiy
2020-07-22 12:22   ` Max Reitz
2020-07-23  7:43     ` Max Reitz
2020-10-22 20:35     ` Vladimir Sementsov-Ogievskiy
2020-11-04 17:19       ` Max Reitz
2020-11-09 11:11         ` Vladimir Sementsov-Ogievskiy
2020-06-01 18:11 ` [PATCH v2 12/20] iotests: 56: prepare for backup over block-copy Vladimir Sementsov-Ogievskiy
2020-07-23  7:57   ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 13/20] iotests: 129: " Vladimir Sementsov-Ogievskiy
2020-07-23  8:03   ` Max Reitz
2020-10-22 21:10     ` Vladimir Sementsov-Ogievskiy
2020-11-04 17:30       ` Max Reitz
2020-11-09 12:16         ` Vladimir Sementsov-Ogievskiy
2020-06-01 18:11 ` [PATCH v2 14/20] iotests: 185: " Vladimir Sementsov-Ogievskiy
2020-07-23  8:19   ` Max Reitz
2020-10-22 21:16     ` Vladimir Sementsov-Ogievskiy
2020-06-01 18:11 ` [PATCH v2 15/20] iotests: 219: " Vladimir Sementsov-Ogievskiy
2020-07-23  8:35   ` Max Reitz
2020-10-22 21:20     ` Vladimir Sementsov-Ogievskiy
2020-06-01 18:11 ` [PATCH v2 16/20] iotests: 257: " Vladimir Sementsov-Ogievskiy
2020-07-23  8:49   ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 17/20] backup: move to block-copy Vladimir Sementsov-Ogievskiy
2020-07-23  9:47   ` Max Reitz
2020-09-21 13:58     ` Vladimir Sementsov-Ogievskiy
2020-10-26 15:18     ` Vladimir Sementsov-Ogievskiy
2020-11-04 17:45       ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 18/20] block/block-copy: drop unused argument of block_copy() Vladimir Sementsov-Ogievskiy
2020-07-23 13:24   ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 19/20] simplebench: bench_block_job: add cmd_options argument Vladimir Sementsov-Ogievskiy
2020-07-23 13:30   ` Max Reitz
2020-06-01 18:11 ` [PATCH v2 20/20] simplebench: add bench-backup.py Vladimir Sementsov-Ogievskiy
2020-07-23 13:47   ` Max Reitz
2020-06-01 18:15 ` [PATCH v2 00/20] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
2020-06-01 18:59 ` no-reply
2020-06-02  8:20   ` Vladimir Sementsov-Ogievskiy
2020-06-01 19:43 ` no-reply
