* [PATCH v4 00/23] backup performance: block_status + async
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Hi Max!
I applied my series on top of your 129 fix and found that 129 still
fails for backup. Setting a small max-chunk, and even max-workers to 1,
doesn't help! (Setting the speed as in v3 still helps.)

I found that the problem is that the whole backup job really does run
during the drain, because in the new architecture we just do
job_yield() for the whole duration of the background block-copy.

This led to modifying the existing patch in the series that calls
job_enter() from job_user_pause(): we just need to call job_enter()
from job_pause() instead, to cover not only user pauses but also
drained_begin().

So now I don't need any additional fix for 129.
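
In short, job_pause() now kicks the job so that it can reach
job_pause_point() during drained_begin as well; this is exactly the
change in patch 09:

    void job_pause(Job *job)
    {
        job->pause_count++;
        if (!job->paused) {
            job_enter(job);
        }
    }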

Changes in v4:
- add a lot of Max's r-b's, thanks!

03: fix over-80 line (in comment), add r-b
09: was "[PATCH v3 10/25] job: call job_enter from job_user_pause",
    now changed to finally fix 129 iotest, drop r-b

10: squash-in additional wording on max-chunk, fix error message, keep r-b
17: drop extra include, assert job_is_cancelled() instead of check, add r-b
18: adjust commit message, add r-b
23: add comments and an assertion about the fact that the test
    doesn't support paths with a colon inside;
    fix s/disable-copy-range/use-copy-range/

Vladimir Sementsov-Ogievskiy (23):
  qapi: backup: add perf.use-copy-range parameter
  block/block-copy: More explicit call_state
  block/block-copy: implement block_copy_async
  block/block-copy: add max_chunk and max_workers parameters
  block/block-copy: add list of all call-states
  block/block-copy: add ratelimit to block-copy
  block/block-copy: add block_copy_cancel
  blockjob: add set_speed to BlockJobDriver
  job: call job_enter from job_pause
  qapi: backup: add max-chunk and max-workers to x-perf struct
  iotests: 56: prepare for backup over block-copy
  iotests: 185: prepare for backup over block-copy
  iotests: 219: prepare for backup over block-copy
  iotests: 257: prepare for backup over block-copy
  block/block-copy: make progress_bytes_callback optional
  block/backup: drop extra gotos from backup_run()
  backup: move to block-copy
  qapi: backup: disable copy_range by default
  block/block-copy: drop unused block_copy_set_progress_callback()
  block/block-copy: drop unused argument of block_copy()
  simplebench/bench_block_job: use correct shebang line with python3
  simplebench: bench_block_job: add cmd_options argument
  simplebench: add bench-backup.py

 qapi/block-core.json                   |  28 ++-
 block/backup-top.h                     |   1 +
 include/block/block-copy.h             |  61 ++++-
 include/block/block_int.h              |   3 +
 include/block/blockjob_int.h           |   2 +
 block/backup-top.c                     |   6 +-
 block/backup.c                         | 233 ++++++++++++-------
 block/block-copy.c                     | 227 +++++++++++++++---
 block/replication.c                    |   2 +
 blockdev.c                             |  14 ++
 blockjob.c                             |   6 +
 job.c                                  |   3 +
 scripts/simplebench/bench-backup.py    | 167 ++++++++++++++
 scripts/simplebench/bench-example.py   |   2 +-
 scripts/simplebench/bench_block_job.py |  13 +-
 tests/qemu-iotests/056                 |   9 +-
 tests/qemu-iotests/109.out             |  24 ++
 tests/qemu-iotests/185                 |   3 +-
 tests/qemu-iotests/185.out             |   3 +-
 tests/qemu-iotests/219                 |  13 +-
 tests/qemu-iotests/257                 |   1 +
 tests/qemu-iotests/257.out             | 306 ++++++++++++-------------
 22 files changed, 830 insertions(+), 297 deletions(-)
 create mode 100755 scripts/simplebench/bench-backup.py

-- 
2.29.2




* [PATCH v4 01/23] qapi: backup: add perf.use-copy-range parameter
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Experiments show that copy_range does not always make things faster.
So, to make experimentation simpler, let's add a parameter. Some more
performance parameters will be added soon, so introduce a new struct
for them.

For now, add the new backup QMP parameter with an x- prefix, for the
following reasons:

 - We are going to add more performance parameters, some related to
   the whole block-copy process and some only to background copying in
   backup (ignored for copy-before-write operations).
 - On the other hand, we are going to use the block-copy interface in
   other block jobs, which will need performance options as well. And
   it should be the same structure, or at least somehow related.

So, there is still too much that is unclear about the interface, and
for now we need the new options mostly for testing. Let's keep them
experimental for a while.

In do_backup_common(), the new x-perf parameter is handled in a way
that makes adding further options simpler.

We add use-copy-range with a default of true; the default will be
changed in a later patch, after moving backup to use block-copy.
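
For example, a drive-backup QMP command could then disable copy
offloading like this (the device and target values are made up):

    { "execute": "drive-backup",
      "arguments": { "device": "drive0", "target": "backup.qcow2",
                     "sync": "full",
                     "x-perf": { "use-copy-range": false } } }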

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 qapi/block-core.json       | 17 ++++++++++++++++-
 block/backup-top.h         |  1 +
 include/block/block-copy.h |  2 +-
 include/block/block_int.h  |  3 +++
 block/backup-top.c         |  4 +++-
 block/backup.c             |  6 +++++-
 block/block-copy.c         |  4 ++--
 block/replication.c        |  2 ++
 blockdev.c                 |  8 ++++++++
 9 files changed, 41 insertions(+), 6 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index 3484986d1c..d161b099db 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1371,6 +1371,19 @@
 { 'struct': 'BlockdevSnapshot',
   'data': { 'node': 'str', 'overlay': 'str' } }
 
+##
+# @BackupPerf:
+#
+# Optional parameters for backup. These parameters don't affect
+# functionality, but may significantly affect performance.
+#
+# @use-copy-range: Use copy offloading. Default true.
+#
+# Since: 5.2
+##
+{ 'struct': 'BackupPerf',
+  'data': { '*use-copy-range': 'bool' }}
+
 ##
 # @BackupCommon:
 #
@@ -1426,6 +1439,8 @@
 #                    above node specified by @drive. If this option is not given,
 #                    a node name is autogenerated. (Since: 4.2)
 #
+# @x-perf: Performance options. (Since 5.2)
+#
 # Note: @on-source-error and @on-target-error only affect background
 #       I/O.  If an error occurs during a guest write request, the device's
 #       rerror/werror actions will be used.
@@ -1440,7 +1455,7 @@
             '*on-source-error': 'BlockdevOnError',
             '*on-target-error': 'BlockdevOnError',
             '*auto-finalize': 'bool', '*auto-dismiss': 'bool',
-            '*filter-node-name': 'str' } }
+            '*filter-node-name': 'str', '*x-perf': 'BackupPerf'  } }
 
 ##
 # @DriveBackup:
diff --git a/block/backup-top.h b/block/backup-top.h
index e5cabfa197..b28b0031c4 100644
--- a/block/backup-top.h
+++ b/block/backup-top.h
@@ -33,6 +33,7 @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
                                          BlockDriverState *target,
                                          const char *filter_node_name,
                                          uint64_t cluster_size,
+                                         BackupPerf *perf,
                                          BdrvRequestFlags write_flags,
                                          BlockCopyState **bcs,
                                          Error **errp);
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index aac85e1488..6397505f30 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -22,7 +22,7 @@ typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
 typedef struct BlockCopyState BlockCopyState;
 
 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
-                                     int64_t cluster_size,
+                                     int64_t cluster_size, bool use_copy_range,
                                      BdrvRequestFlags write_flags,
                                      Error **errp);
 
diff --git a/include/block/block_int.h b/include/block/block_int.h
index b9ef61fe4d..1ddc3af244 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -1256,6 +1256,8 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
  * @sync_mode: What parts of the disk image should be copied to the destination.
  * @sync_bitmap: The dirty bitmap if sync_mode is 'bitmap' or 'incremental'
  * @bitmap_mode: The bitmap synchronization policy to use.
+ * @perf: Performance options. All actual fields assumed to be present,
+ *        all ".has_*" fields are ignored.
  * @on_source_error: The action to take upon error reading from the source.
  * @on_target_error: The action to take upon error writing to the target.
  * @creation_flags: Flags that control the behavior of the Job lifetime.
@@ -1274,6 +1276,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                             BitmapSyncMode bitmap_mode,
                             bool compress,
                             const char *filter_node_name,
+                            BackupPerf *perf,
                             BlockdevOnError on_source_error,
                             BlockdevOnError on_target_error,
                             int creation_flags,
diff --git a/block/backup-top.c b/block/backup-top.c
index fe6883cc97..789acf6965 100644
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -186,6 +186,7 @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
                                          BlockDriverState *target,
                                          const char *filter_node_name,
                                          uint64_t cluster_size,
+                                         BackupPerf *perf,
                                          BdrvRequestFlags write_flags,
                                          BlockCopyState **bcs,
                                          Error **errp)
@@ -244,7 +245,8 @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
 
     state->cluster_size = cluster_size;
     state->bcs = block_copy_state_new(top->backing, state->target,
-                                      cluster_size, write_flags, &local_err);
+                                      cluster_size, perf->use_copy_range,
+                                      write_flags, &local_err);
     if (local_err) {
         error_prepend(&local_err, "Cannot create block-copy-state: ");
         goto fail;
diff --git a/block/backup.c b/block/backup.c
index 9afa0bf3b4..4b07e9115d 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -46,6 +46,7 @@ typedef struct BackupBlockJob {
     uint64_t len;
     uint64_t bytes_read;
     int64_t cluster_size;
+    BackupPerf perf;
 
     BlockCopyState *bcs;
 } BackupBlockJob;
@@ -335,6 +336,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                   BitmapSyncMode bitmap_mode,
                   bool compress,
                   const char *filter_node_name,
+                  BackupPerf *perf,
                   BlockdevOnError on_source_error,
                   BlockdevOnError on_target_error,
                   int creation_flags,
@@ -441,7 +443,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
                   (compress ? BDRV_REQ_WRITE_COMPRESSED : 0),
 
     backup_top = bdrv_backup_top_append(bs, target, filter_node_name,
-                                        cluster_size, write_flags, &bcs, errp);
+                                        cluster_size, perf,
+                                        write_flags, &bcs, errp);
     if (!backup_top) {
         goto error;
     }
@@ -464,6 +467,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     job->bcs = bcs;
     job->cluster_size = cluster_size;
     job->len = len;
+    job->perf = *perf;
 
     block_copy_set_progress_callback(bcs, backup_progress_bytes_callback, job);
     block_copy_set_progress_meter(bcs, &job->common.job.progress);
diff --git a/block/block-copy.c b/block/block-copy.c
index cd9bc47c8f..63398a171c 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -218,7 +218,7 @@ static uint32_t block_copy_max_transfer(BdrvChild *source, BdrvChild *target)
 }
 
 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
-                                     int64_t cluster_size,
+                                     int64_t cluster_size, bool use_copy_range,
                                      BdrvRequestFlags write_flags, Error **errp)
 {
     BlockCopyState *s;
@@ -260,7 +260,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
          * We enable copy-range, but keep small copy_size, until first
          * successful copy_range (look at block_copy_do_copy).
          */
-        s->use_copy_range = true;
+        s->use_copy_range = use_copy_range;
         s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
     }
 
diff --git a/block/replication.c b/block/replication.c
index 0c70215784..22ffc811ee 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -454,6 +454,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
     int64_t active_length, hidden_length, disk_length;
     AioContext *aio_context;
     Error *local_err = NULL;
+    BackupPerf perf = { .use_copy_range = true };
 
     aio_context = bdrv_get_aio_context(bs);
     aio_context_acquire(aio_context);
@@ -558,6 +559,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
         s->backup_job = backup_job_create(
                                 NULL, s->secondary_disk->bs, s->hidden_disk->bs,
                                 0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
+                                &perf,
                                 BLOCKDEV_ON_ERROR_REPORT,
                                 BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL,
                                 backup_job_completed, bs, NULL, &local_err);
diff --git a/blockdev.c b/blockdev.c
index 2431448c5d..84685e057b 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2794,6 +2794,7 @@ static BlockJob *do_backup_common(BackupCommon *backup,
 {
     BlockJob *job = NULL;
     BdrvDirtyBitmap *bmap = NULL;
+    BackupPerf perf = { .use_copy_range = true };
     int job_flags = JOB_DEFAULT;
 
     if (!backup->has_speed) {
@@ -2818,6 +2819,12 @@ static BlockJob *do_backup_common(BackupCommon *backup,
         backup->compress = false;
     }
 
+    if (backup->x_perf) {
+        if (backup->x_perf->has_use_copy_range) {
+            perf.use_copy_range = backup->x_perf->use_copy_range;
+        }
+    }
+
     if ((backup->sync == MIRROR_SYNC_MODE_BITMAP) ||
         (backup->sync == MIRROR_SYNC_MODE_INCREMENTAL)) {
         /* done before desugaring 'incremental' to print the right message */
@@ -2891,6 +2898,7 @@ static BlockJob *do_backup_common(BackupCommon *backup,
                             backup->sync, bmap, backup->bitmap_mode,
                             backup->compress,
                             backup->filter_node_name,
+                            &perf,
                             backup->on_source_error,
                             backup->on_target_error,
                             job_flags, NULL, NULL, txn, errp);
-- 
2.29.2




* [PATCH v4 02/23] block/block-copy: More explicit call_state
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Refactor the common path to take a BlockCopyCallState pointer as a
parameter, to prepare it for use in asynchronous block-copy (at the
least, we'll need to run block-copy in a coroutine, passing all the
parameters as one pointer).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 block/block-copy.c | 51 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 38 insertions(+), 13 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 63398a171c..6ea55f1f9a 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -30,7 +30,15 @@
 static coroutine_fn int block_copy_task_entry(AioTask *task);
 
 typedef struct BlockCopyCallState {
+    /* IN parameters */
+    BlockCopyState *s;
+    int64_t offset;
+    int64_t bytes;
+
+    /* State */
     bool failed;
+
+    /* OUT parameters */
     bool error_is_read;
 } BlockCopyCallState;
 
@@ -544,15 +552,17 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
  * Returns 1 if dirty clusters found and successfully copied, 0 if no dirty
  * clusters found and -errno on failure.
  */
-static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
-                                                  int64_t offset, int64_t bytes,
-                                                  bool *error_is_read)
+static int coroutine_fn
+block_copy_dirty_clusters(BlockCopyCallState *call_state)
 {
+    BlockCopyState *s = call_state->s;
+    int64_t offset = call_state->offset;
+    int64_t bytes = call_state->bytes;
+
     int ret = 0;
     bool found_dirty = false;
     int64_t end = offset + bytes;
     AioTaskPool *aio = NULL;
-    BlockCopyCallState call_state = {false, false};
 
     /*
      * block_copy() user is responsible for keeping source and target in same
@@ -568,7 +578,7 @@ static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
         BlockCopyTask *task;
         int64_t status_bytes;
 
-        task = block_copy_task_create(s, &call_state, offset, bytes);
+        task = block_copy_task_create(s, call_state, offset, bytes);
         if (!task) {
             /* No more dirty bits in the bitmap */
             trace_block_copy_skip_range(s, offset, bytes);
@@ -633,15 +643,12 @@ out:
 
         aio_task_pool_free(aio);
     }
-    if (error_is_read && ret < 0) {
-        *error_is_read = call_state.error_is_read;
-    }
 
     return ret < 0 ? ret : found_dirty;
 }
 
 /*
- * block_copy
+ * block_copy_common
  *
  * Copy requested region, accordingly to dirty bitmap.
  * Collaborate with parallel block_copy requests: if they succeed it will help
@@ -649,16 +656,16 @@ out:
  * it means that some I/O operation failed in context of _this_ block_copy call,
  * not some parallel operation.
  */
-int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
-                            bool *error_is_read)
+static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
 {
     int ret;
 
     do {
-        ret = block_copy_dirty_clusters(s, offset, bytes, error_is_read);
+        ret = block_copy_dirty_clusters(call_state);
 
         if (ret == 0) {
-            ret = block_copy_wait_one(s, offset, bytes);
+            ret = block_copy_wait_one(call_state->s, call_state->offset,
+                                      call_state->bytes);
         }
 
         /*
@@ -675,6 +682,24 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
     return ret;
 }
 
+int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
+                            bool *error_is_read)
+{
+    BlockCopyCallState call_state = {
+        .s = s,
+        .offset = start,
+        .bytes = bytes,
+    };
+
+    int ret = block_copy_common(&call_state);
+
+    if (error_is_read && ret < 0) {
+        *error_is_read = call_state.error_is_read;
+    }
+
+    return ret;
+}
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
 {
     return s->copy_bitmap;
-- 
2.29.2




* [PATCH v4 03/23] block/block-copy: implement block_copy_async
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

We'll need an async block-copy invocation to use directly in backup.
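
A rough usage sketch (not part of this patch; the callback body and
the waiting logic are placeholders, and error handling is omitted):

    static void copy_done_cb(void *opaque)
    {
        /* e.g. wake up whoever waits for completion */
    }

    BlockCopyCallState *call_state =
        block_copy_async(bcs, offset, bytes, copy_done_cb, NULL);

    /* the call is marked finished before the callback runs */
    if (block_copy_call_finished(call_state)) {
        bool error_is_read;
        int ret = block_copy_call_status(call_state, &error_is_read);

        /* inspect ret / error_is_read here */
        block_copy_call_free(call_state);
    }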

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h | 29 ++++++++++++++
 block/block-copy.c         | 81 ++++++++++++++++++++++++++++++++++++--
 2 files changed, 106 insertions(+), 4 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 6397505f30..8c225ebf81 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -19,7 +19,9 @@
 #include "qemu/co-shared-resource.h"
 
 typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
+typedef void (*BlockCopyAsyncCallbackFunc)(void *opaque);
 typedef struct BlockCopyState BlockCopyState;
+typedef struct BlockCopyCallState BlockCopyCallState;
 
 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
                                      int64_t cluster_size, bool use_copy_range,
@@ -41,6 +43,33 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
 int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
                             bool *error_is_read);
 
+/*
+ * Run block-copy in a coroutine, create corresponding BlockCopyCallState
+ * object and return pointer to it. Never returns NULL.
+ *
+ * Caller is responsible to call block_copy_call_free() to free
+ * BlockCopyCallState object.
+ */
+BlockCopyCallState *block_copy_async(BlockCopyState *s,
+                                     int64_t offset, int64_t bytes,
+                                     BlockCopyAsyncCallbackFunc cb,
+                                     void *cb_opaque);
+
+/*
+ * Free finished BlockCopyCallState. Trying to free running
+ * block-copy will crash.
+ */
+void block_copy_call_free(BlockCopyCallState *call_state);
+
+/*
+ * Note, that block-copy call is marked finished prior to calling
+ * the callback.
+ */
+bool block_copy_call_finished(BlockCopyCallState *call_state);
+bool block_copy_call_succeeded(BlockCopyCallState *call_state);
+bool block_copy_call_failed(BlockCopyCallState *call_state);
+int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read);
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
 void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
 
diff --git a/block/block-copy.c b/block/block-copy.c
index 6ea55f1f9a..74655b86f8 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -30,13 +30,19 @@
 static coroutine_fn int block_copy_task_entry(AioTask *task);
 
 typedef struct BlockCopyCallState {
-    /* IN parameters */
+    /* IN parameters. Initialized in block_copy_async() and never changed. */
     BlockCopyState *s;
     int64_t offset;
     int64_t bytes;
+    BlockCopyAsyncCallbackFunc cb;
+    void *cb_opaque;
+
+    /* Coroutine where async block-copy is running */
+    Coroutine *co;
 
     /* State */
-    bool failed;
+    int ret;
+    bool finished;
 
     /* OUT parameters */
     bool error_is_read;
@@ -428,8 +434,8 @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
 
     ret = block_copy_do_copy(t->s, t->offset, t->bytes, t->zeroes,
                              &error_is_read);
-    if (ret < 0 && !t->call_state->failed) {
-        t->call_state->failed = true;
+    if (ret < 0 && !t->call_state->ret) {
+        t->call_state->ret = ret;
         t->call_state->error_is_read = error_is_read;
     } else {
         progress_work_done(t->s->progress, t->bytes);
@@ -679,6 +685,12 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
          */
     } while (ret > 0);
 
+    call_state->finished = true;
+
+    if (call_state->cb) {
+        call_state->cb(call_state->cb_opaque);
+    }
+
     return ret;
 }
 
@@ -700,6 +712,67 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
     return ret;
 }
 
+static void coroutine_fn block_copy_async_co_entry(void *opaque)
+{
+    block_copy_common(opaque);
+}
+
+BlockCopyCallState *block_copy_async(BlockCopyState *s,
+                                     int64_t offset, int64_t bytes,
+                                     BlockCopyAsyncCallbackFunc cb,
+                                     void *cb_opaque)
+{
+    BlockCopyCallState *call_state = g_new(BlockCopyCallState, 1);
+
+    *call_state = (BlockCopyCallState) {
+        .s = s,
+        .offset = offset,
+        .bytes = bytes,
+        .cb = cb,
+        .cb_opaque = cb_opaque,
+
+        .co = qemu_coroutine_create(block_copy_async_co_entry, call_state),
+    };
+
+    qemu_coroutine_enter(call_state->co);
+
+    return call_state;
+}
+
+void block_copy_call_free(BlockCopyCallState *call_state)
+{
+    if (!call_state) {
+        return;
+    }
+
+    assert(call_state->finished);
+    g_free(call_state);
+}
+
+bool block_copy_call_finished(BlockCopyCallState *call_state)
+{
+    return call_state->finished;
+}
+
+bool block_copy_call_succeeded(BlockCopyCallState *call_state)
+{
+    return call_state->finished && call_state->ret == 0;
+}
+
+bool block_copy_call_failed(BlockCopyCallState *call_state)
+{
+    return call_state->finished && call_state->ret < 0;
+}
+
+int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read)
+{
+    assert(call_state->finished);
+    if (error_is_read) {
+        *error_is_read = call_state->error_is_read;
+    }
+    return call_state->ret;
+}
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
 {
     return s->copy_bitmap;
-- 
2.29.2




* [PATCH v4 04/23] block/block-copy: add max_chunk and max_workers parameters
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

They will be used for backup.
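
For example, the synchronous block_copy() path keeps today's defaults,
while an async caller passes them explicitly (a sketch;
BLOCK_COPY_MAX_WORKERS is internal to block-copy.c, shown here only
for its default value of 64):

    /* full parallelism, no extra per-chunk limit */
    call_state = block_copy_async(s, offset, bytes,
                                  BLOCK_COPY_MAX_WORKERS, 0,
                                  cb, cb_opaque);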

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h |  6 ++++++
 block/block-copy.c         | 11 +++++++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 8c225ebf81..22372aa375 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -49,9 +49,15 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
  *
  * Caller is responsible to call block_copy_call_free() to free
  * BlockCopyCallState object.
+ *
+ * @max_workers means maximum of parallel coroutines to execute sub-requests,
+ * must be > 0.
+ *
+ * @max_chunk means maximum length for one IO operation. Zero means unlimited.
  */
 BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                      int64_t offset, int64_t bytes,
+                                     int max_workers, int64_t max_chunk,
                                      BlockCopyAsyncCallbackFunc cb,
                                      void *cb_opaque);
 
diff --git a/block/block-copy.c b/block/block-copy.c
index 74655b86f8..35213bd832 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -34,6 +34,8 @@ typedef struct BlockCopyCallState {
     BlockCopyState *s;
     int64_t offset;
     int64_t bytes;
+    int max_workers;
+    int64_t max_chunk;
     BlockCopyAsyncCallbackFunc cb;
     void *cb_opaque;
 
@@ -148,10 +150,11 @@ static BlockCopyTask *block_copy_task_create(BlockCopyState *s,
                                              int64_t offset, int64_t bytes)
 {
     BlockCopyTask *task;
+    int64_t max_chunk = MIN_NON_ZERO(s->copy_size, call_state->max_chunk);
 
     if (!bdrv_dirty_bitmap_next_dirty_area(s->copy_bitmap,
                                            offset, offset + bytes,
-                                           s->copy_size, &offset, &bytes))
+                                           max_chunk, &offset, &bytes))
     {
         return NULL;
     }
@@ -623,7 +626,7 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
         bytes = end - offset;
 
         if (!aio && bytes) {
-            aio = aio_task_pool_new(BLOCK_COPY_MAX_WORKERS);
+            aio = aio_task_pool_new(call_state->max_workers);
         }
 
         ret = block_copy_task_run(aio, task);
@@ -701,6 +704,7 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
         .s = s,
         .offset = start,
         .bytes = bytes,
+        .max_workers = BLOCK_COPY_MAX_WORKERS,
     };
 
     int ret = block_copy_common(&call_state);
@@ -719,6 +723,7 @@ static void coroutine_fn block_copy_async_co_entry(void *opaque)
 
 BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                      int64_t offset, int64_t bytes,
+                                     int max_workers, int64_t max_chunk,
                                      BlockCopyAsyncCallbackFunc cb,
                                      void *cb_opaque)
 {
@@ -728,6 +733,8 @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
         .s = s,
         .offset = offset,
         .bytes = bytes,
+        .max_workers = max_workers,
+        .max_chunk = max_chunk,
         .cb = cb,
         .cb_opaque = cb_opaque,
 
-- 
2.29.2




* [PATCH v4 05/23] block/block-copy: add list of all call-states
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

It simplifies debugging.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 block/block-copy.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 35213bd832..6bf1735b93 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -42,6 +42,9 @@ typedef struct BlockCopyCallState {
     /* Coroutine where async block-copy is running */
     Coroutine *co;
 
+    /* To reference all call states from BlockCopyState */
+    QLIST_ENTRY(BlockCopyCallState) list;
+
     /* State */
     int ret;
     bool finished;
@@ -81,7 +84,8 @@ typedef struct BlockCopyState {
     bool use_copy_range;
     int64_t copy_size;
     uint64_t len;
-    QLIST_HEAD(, BlockCopyTask) tasks;
+    QLIST_HEAD(, BlockCopyTask) tasks; /* All tasks from all block-copy calls */
+    QLIST_HEAD(, BlockCopyCallState) calls;
 
     BdrvRequestFlags write_flags;
 
@@ -282,6 +286,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
     }
 
     QLIST_INIT(&s->tasks);
+    QLIST_INIT(&s->calls);
 
     return s;
 }
@@ -669,6 +674,8 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
 {
     int ret;
 
+    QLIST_INSERT_HEAD(&call_state->s->calls, call_state, list);
+
     do {
         ret = block_copy_dirty_clusters(call_state);
 
@@ -694,6 +701,8 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
         call_state->cb(call_state->cb_opaque);
     }
 
+    QLIST_REMOVE(call_state, list);
+
     return ret;
 }
 
-- 
2.29.2




* [PATCH v4 06/23] block/block-copy: add ratelimit to block-copy
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

We are going to use one async block-copy operation directly for the
backup job, so we need a rate limiter.

We want to maintain the current backup behavior: only background
copying is limited, and copy-before-write operations only participate
in the limit calculation. Therefore we need one rate limiter in the
block-copy state and a boolean flag in the block-copy call state that
enables the actual limitation.

Note that we can't just account each chunk in the limiter after a
successful copy: that would not save us from starting a lot of async
sub-requests that exceed the limit by far too much. Instead, let's use
the following scheme on sub-request creation:
1. If the limit is not currently exceeded, create the sub-request and
   account it immediately.
2. If the limit is already exceeded, don't create the sub-request and
   handle the limit instead (by sleeping).
With this approach we'll never exceed the limit by more than one
sub-request (which pretty much matches the current backup behavior).

Note also that if there is an in-flight async block-copy call,
block_copy_kick() should be used after set-speed to apply the new
settings faster. For that, block_copy_kick() is made public in this
patch.
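
So the expected pattern on a speed change is roughly this (a sketch;
the backup side of it lands later in the series):

    block_copy_set_speed(s, speed);
    if (call_state) {
        /* wake a sleeping call so the new speed applies immediately */
        block_copy_kick(call_state);
    }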

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h |  5 ++++-
 block/backup-top.c         |  2 +-
 block/backup.c             |  2 +-
 block/block-copy.c         | 46 +++++++++++++++++++++++++++++++++++++-
 4 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 22372aa375..b5a53ad59e 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -41,7 +41,7 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
                                      int64_t offset, int64_t *count);
 
 int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
-                            bool *error_is_read);
+                            bool ignore_ratelimit, bool *error_is_read);
 
 /*
  * Run block-copy in a coroutine, create corresponding BlockCopyCallState
@@ -76,6 +76,9 @@ bool block_copy_call_succeeded(BlockCopyCallState *call_state);
 bool block_copy_call_failed(BlockCopyCallState *call_state);
 int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read);
 
+void block_copy_set_speed(BlockCopyState *s, uint64_t speed);
+void block_copy_kick(BlockCopyCallState *call_state);
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
 void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
 
diff --git a/block/backup-top.c b/block/backup-top.c
index 789acf6965..779956ddc2 100644
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -61,7 +61,7 @@ static coroutine_fn int backup_top_cbw(BlockDriverState *bs, uint64_t offset,
     off = QEMU_ALIGN_DOWN(offset, s->cluster_size);
     end = QEMU_ALIGN_UP(offset + bytes, s->cluster_size);
 
-    return block_copy(s->bcs, off, end - off, NULL);
+    return block_copy(s->bcs, off, end - off, true, NULL);
 }
 
 static int coroutine_fn backup_top_co_pdiscard(BlockDriverState *bs,
diff --git a/block/backup.c b/block/backup.c
index 4b07e9115d..09ff5a92ef 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -72,7 +72,7 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
 
     trace_backup_do_cow_enter(job, start, offset, bytes);
 
-    ret = block_copy(job->bcs, start, end - start, error_is_read);
+    ret = block_copy(job->bcs, start, end - start, true, error_is_read);
 
     trace_backup_do_cow_return(job, offset, bytes, ret);
 
diff --git a/block/block-copy.c b/block/block-copy.c
index 6bf1735b93..fa27450b14 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -26,6 +26,7 @@
 #define BLOCK_COPY_MAX_BUFFER (1 * MiB)
 #define BLOCK_COPY_MAX_MEM (128 * MiB)
 #define BLOCK_COPY_MAX_WORKERS 64
+#define BLOCK_COPY_SLICE_TIME 100000000ULL /* ns */
 
 static coroutine_fn int block_copy_task_entry(AioTask *task);
 
@@ -36,6 +37,7 @@ typedef struct BlockCopyCallState {
     int64_t bytes;
     int max_workers;
     int64_t max_chunk;
+    bool ignore_ratelimit;
     BlockCopyAsyncCallbackFunc cb;
     void *cb_opaque;
 
@@ -48,6 +50,7 @@ typedef struct BlockCopyCallState {
     /* State */
     int ret;
     bool finished;
+    QemuCoSleepState *sleep_state;
 
     /* OUT parameters */
     bool error_is_read;
@@ -111,6 +114,9 @@ typedef struct BlockCopyState {
     void *progress_opaque;
 
     SharedResource *mem;
+
+    uint64_t speed;
+    RateLimit rate_limit;
 } BlockCopyState;
 
 static BlockCopyTask *find_conflicting_task(BlockCopyState *s,
@@ -623,6 +629,21 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
         }
         task->zeroes = ret & BDRV_BLOCK_ZERO;
 
+        if (s->speed) {
+            if (!call_state->ignore_ratelimit) {
+                uint64_t ns = ratelimit_calculate_delay(&s->rate_limit, 0);
+                if (ns > 0) {
+                    block_copy_task_end(task, -EAGAIN);
+                    g_free(task);
+                    qemu_co_sleep_ns_wakeable(QEMU_CLOCK_REALTIME, ns,
+                                              &call_state->sleep_state);
+                    continue;
+                }
+            }
+
+            ratelimit_calculate_delay(&s->rate_limit, task->bytes);
+        }
+
         trace_block_copy_process(s, task->offset);
 
         co_get_from_shres(s->mem, task->bytes);
@@ -661,6 +682,13 @@ out:
     return ret < 0 ? ret : found_dirty;
 }
 
+void block_copy_kick(BlockCopyCallState *call_state)
+{
+    if (call_state->sleep_state) {
+        qemu_co_sleep_wake(call_state->sleep_state);
+    }
+}
+
 /*
  * block_copy_common
  *
@@ -707,12 +735,13 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
 }
 
 int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
-                            bool *error_is_read)
+                            bool ignore_ratelimit, bool *error_is_read)
 {
     BlockCopyCallState call_state = {
         .s = s,
         .offset = start,
         .bytes = bytes,
+        .ignore_ratelimit = ignore_ratelimit,
         .max_workers = BLOCK_COPY_MAX_WORKERS,
     };
 
@@ -798,3 +827,18 @@ void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip)
 {
     s->skip_unallocated = skip;
 }
+
+void block_copy_set_speed(BlockCopyState *s, uint64_t speed)
+{
+    s->speed = speed;
+    if (speed > 0) {
+        ratelimit_set_speed(&s->rate_limit, speed, BLOCK_COPY_SLICE_TIME);
+    }
+
+    /*
+     * Note: it's good to kick all call states from here, but it should be done
+     * only from a coroutine, to not crash if s->calls list changed while
+     * entering one call. So for now, the only user of this function kicks its
+     * only one call_state by hand.
+     */
+}
-- 
2.29.2




* [PATCH v4 07/23] block/block-copy: add block_copy_cancel
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Add a function to cancel a running async block-copy call. It will be
used in backup.
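
Pause/resume can then be emulated as described in the new header
comment below (a sketch; the wait for completion is elided):

    block_copy_call_cancel(call_state);
    /* ... wait until block_copy_call_finished(call_state) ... */
    block_copy_call_free(call_state);

    /* dirty bits are still correct, so re-running resumes the copy */
    call_state = block_copy_async(s, offset, bytes,
                                  max_workers, max_chunk,
                                  cb, cb_opaque);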

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h | 13 +++++++++++++
 block/block-copy.c         | 24 +++++++++++++++++++-----
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index b5a53ad59e..7821850f88 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -74,11 +74,24 @@ void block_copy_call_free(BlockCopyCallState *call_state);
 bool block_copy_call_finished(BlockCopyCallState *call_state);
 bool block_copy_call_succeeded(BlockCopyCallState *call_state);
 bool block_copy_call_failed(BlockCopyCallState *call_state);
+bool block_copy_call_cancelled(BlockCopyCallState *call_state);
 int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read);
 
 void block_copy_set_speed(BlockCopyState *s, uint64_t speed);
 void block_copy_kick(BlockCopyCallState *call_state);
 
+/*
+ * Cancel running block-copy call.
+ *
+ * Cancel leaves block-copy state valid: dirty bits are correct and you may use
+ * cancel + <run block_copy with same parameters> to emulate pause/resume.
+ *
+ * Note also, that the cancel is async: it only marks block-copy call to be
+ * cancelled. So, the call may be cancelled (block_copy_call_cancelled() reports
+ * true) but not yet finished (block_copy_call_finished() reports false).
+ */
+void block_copy_call_cancel(BlockCopyCallState *call_state);
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
 void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
 
diff --git a/block/block-copy.c b/block/block-copy.c
index fa27450b14..82cf945693 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -51,6 +51,7 @@ typedef struct BlockCopyCallState {
     int ret;
     bool finished;
     QemuCoSleepState *sleep_state;
+    bool cancelled;
 
     /* OUT parameters */
     bool error_is_read;
@@ -594,7 +595,7 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
     assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
     assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
 
-    while (bytes && aio_task_pool_status(aio) == 0) {
+    while (bytes && aio_task_pool_status(aio) == 0 && !call_state->cancelled) {
         BlockCopyTask *task;
         int64_t status_bytes;
 
@@ -707,7 +708,7 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
     do {
         ret = block_copy_dirty_clusters(call_state);
 
-        if (ret == 0) {
+        if (ret == 0 && !call_state->cancelled) {
             ret = block_copy_wait_one(call_state->s, call_state->offset,
                                       call_state->bytes);
         }
@@ -721,7 +722,7 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
          * 2. We have waited for some intersecting block-copy request
          *    It may have failed and produced new dirty bits.
          */
-    } while (ret > 0);
+    } while (ret > 0 && !call_state->cancelled);
 
     call_state->finished = true;
 
@@ -801,12 +802,19 @@ bool block_copy_call_finished(BlockCopyCallState *call_state)
 
 bool block_copy_call_succeeded(BlockCopyCallState *call_state)
 {
-    return call_state->finished && call_state->ret == 0;
+    return call_state->finished && !call_state->cancelled &&
+        call_state->ret == 0;
 }
 
 bool block_copy_call_failed(BlockCopyCallState *call_state)
 {
-    return call_state->finished && call_state->ret < 0;
+    return call_state->finished && !call_state->cancelled &&
+        call_state->ret < 0;
+}
+
+bool block_copy_call_cancelled(BlockCopyCallState *call_state)
+{
+    return call_state->cancelled;
 }
 
 int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read)
@@ -818,6 +826,12 @@ int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read)
     return call_state->ret;
 }
 
+void block_copy_call_cancel(BlockCopyCallState *call_state)
+{
+    call_state->cancelled = true;
+    block_copy_kick(call_state);
+}
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
 {
     return s->copy_bitmap;
-- 
2.29.2




* [PATCH v4 08/23] blockjob: add set_speed to BlockJobDriver
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

We are going to use the async block-copy call in backup, so we'll need
to pass the backup speed setting through to the block-copy call.
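
A backup implementation of the new hook could look roughly like this
(a sketch; the real hook is only added once backup moves to block-copy
later in this series):

    static void backup_set_speed(BlockJob *job, int64_t speed)
    {
        BackupBlockJob *s = container_of(job, BackupBlockJob, common);

        if (s->bcs) {
            block_copy_set_speed(s->bcs, speed);
        }
    }

    /* and in the driver definition: .set_speed = backup_set_speed */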

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 include/block/blockjob_int.h | 2 ++
 blockjob.c                   | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index e2824a36a8..6633d83da2 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -52,6 +52,8 @@ struct BlockJobDriver {
      * besides job->blk to the new AioContext.
      */
     void (*attached_aio_context)(BlockJob *job, AioContext *new_context);
+
+    void (*set_speed)(BlockJob *job, int64_t speed);
 };
 
 /**
diff --git a/blockjob.c b/blockjob.c
index 98ac8af982..db3a21699c 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -256,6 +256,7 @@ static bool job_timer_pending(Job *job)
 
 void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
 {
+    const BlockJobDriver *drv = block_job_driver(job);
     int64_t old_speed = job->speed;
 
     if (job_apply_verb(&job->job, JOB_VERB_SET_SPEED, errp)) {
@@ -270,6 +271,11 @@ void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
     ratelimit_set_speed(&job->limit, speed, BLOCK_JOB_SLICE_TIME);
 
     job->speed = speed;
+
+    if (drv->set_speed) {
+        drv->set_speed(job, speed);
+    }
+
     if (speed && speed <= old_speed) {
         return;
     }
-- 
2.29.2




* [PATCH v4 09/23] job: call job_enter from job_pause
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

If the main job coroutine called job_yield() (while some background
process is in progress), we should give it a chance to call
job_pause_point(). This will be used in backup, once it is moved onto
async block-copy.

Note that job_user_pause() is not enough: we also want to handle
child_job_drained_begin(), which calls job_pause().

Still, if the job is already in job_do_yield() inside
job_pause_point(), we should not enter it.

The iotest 109 output is modified: on stop we do bdrv_drain_all(),
which now triggers a job pause immediately (and a pause after "ready"
is "standby").

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 job.c                      |  3 +++
 tests/qemu-iotests/109.out | 24 ++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/job.c b/job.c
index 8fecf38960..3aaaebafe2 100644
--- a/job.c
+++ b/job.c
@@ -553,6 +553,9 @@ static bool job_timer_not_pending(Job *job)
 void job_pause(Job *job)
 {
     job->pause_count++;
+    if (!job->paused) {
+        job_enter(job);
+    }
 }
 
 void job_resume(Job *job)
diff --git a/tests/qemu-iotests/109.out b/tests/qemu-iotests/109.out
index 6e73406cdb..8f839b4b7f 100644
--- a/tests/qemu-iotests/109.out
+++ b/tests/qemu-iotests/109.out
@@ -42,6 +42,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 1024, "offset": 1024, "speed": 0, "type": "mirror"}}
@@ -91,6 +93,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 197120, "offset": 197120, "speed": 0, "type": "mirror"}}
@@ -140,6 +144,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 327680, "offset": 327680, "speed": 0, "type": "mirror"}}
@@ -189,6 +195,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 1024, "offset": 1024, "speed": 0, "type": "mirror"}}
@@ -238,6 +246,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 65536, "offset": 65536, "speed": 0, "type": "mirror"}}
@@ -287,6 +297,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 2560, "offset": 2560, "speed": 0, "type": "mirror"}}
@@ -335,6 +347,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 2560, "offset": 2560, "speed": 0, "type": "mirror"}}
@@ -383,6 +397,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 31457280, "offset": 31457280, "speed": 0, "type": "mirror"}}
@@ -431,6 +447,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 327680, "offset": 327680, "speed": 0, "type": "mirror"}}
@@ -479,6 +497,8 @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 2048, "offset": 2048, "speed": 0, "type": "mirror"}}
@@ -507,6 +527,8 @@ WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 512, "offset": 512, "speed": 0, "type": "mirror"}}
@@ -528,6 +550,8 @@ Images are identical.
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 512, "offset": 512, "speed": 0, "type": "mirror"}}
-- 
2.29.2




* [PATCH v4 10/23] qapi: backup: add max-chunk and max-workers to x-perf struct
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (8 preceding siblings ...)
  2021-01-16 21:46 ` [PATCH v4 09/23] job: call job_enter from job_pause Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:46 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:46 ` [PATCH v4 11/23] iotests: 56: prepare for backup over block-copy Vladimir Sementsov-Ogievskiy
                   ` (15 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Add new parameters to configure future backup features. This patch
introduces neither aio backup requests (so we actually still have only
one worker) nor requests larger than one cluster. Still, we formally
satisfy these maximums anyway, so add the parameters now to facilitate
a further patch that will really change the backup job behavior.
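
As an illustration only (this is not part of the patch): a sketch of how the
new knobs can be driven from an iotests-style Python test, assuming a running
iotests VM object `vm` with a source node 'drive0' and a target node
'target0' already set up (all names here are hypothetical). iotests' qmp()
maps the x_perf keyword to the "x-perf" argument seen on the wire:

    result = vm.qmp('blockdev-backup',
                    job_id='backup0',             # hypothetical job name
                    device='drive0',              # hypothetical source node
                    target='target0',             # hypothetical target node
                    sync='full',
                    x_perf={'max-workers': 4,     # up to 4 parallel requests
                            'max-chunk': 65536})  # cap each request at 64K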

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 qapi/block-core.json | 13 ++++++++++++-
 block/backup.c       | 28 +++++++++++++++++++++++-----
 block/replication.c  |  2 +-
 blockdev.c           |  8 +++++++-
 4 files changed, 43 insertions(+), 8 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index d161b099db..c0e9d119d2 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1379,10 +1379,21 @@
 #
 # @use-copy-range: Use copy offloading. Default true.
 #
+# @max-workers: Maximum number of parallel requests for the sustained background
+#               copying process. Doesn't influence copy-before-write operations.
+#               Default 64.
+#
+# @max-chunk: Maximum request length for the sustained background copying
+#             process. Doesn't influence copy-before-write operations.
+#             0 means unlimited. If max-chunk is non-zero, it should not be
+#             less than the job cluster size, which is calculated as the
+#             maximum of the target image cluster size and 64k. Default 0.
+#
 # Since: 5.2
 ##
 { 'struct': 'BackupPerf',
-  'data': { '*use-copy-range': 'bool' }}
+  'data': { '*use-copy-range': 'bool',
+            '*max-workers': 'int', '*max-chunk': 'int64' } }
 
 ##
 # @BackupCommon:
diff --git a/block/backup.c b/block/backup.c
index 09ff5a92ef..5522c0f3fe 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -388,6 +388,28 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
         return NULL;
     }
 
+    cluster_size = backup_calculate_cluster_size(target, errp);
+    if (cluster_size < 0) {
+        goto error;
+    }
+
+    if (perf->max_workers < 1) {
+        error_setg(errp, "max-workers must be greater than zero");
+        return NULL;
+    }
+
+    if (perf->max_chunk < 0) {
+        error_setg(errp, "max-chunk must be zero (which means no limit) or "
+                   "positive");
+        return NULL;
+    }
+
+    if (perf->max_chunk && perf->max_chunk < cluster_size) {
+        error_setg(errp, "Required max-chunk (%" PRIi64 ") is less than backup "
+                   "cluster size (%" PRIi64 ")", perf->max_chunk, cluster_size);
+        return NULL;
+    }
+
     if (sync_bitmap) {
         /* If we need to write to this bitmap, check that we can: */
         if (bitmap_mode != BITMAP_SYNC_MODE_NEVER &&
@@ -420,11 +442,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
         goto error;
     }
 
-    cluster_size = backup_calculate_cluster_size(target, errp);
-    if (cluster_size < 0) {
-        goto error;
-    }
-
     /*
      * If source is in backing chain of target assume that target is going to be
      * used for "image fleecing", i.e. it should represent a kind of snapshot of
diff --git a/block/replication.c b/block/replication.c
index 22ffc811ee..97be7ef4de 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -454,7 +454,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
     int64_t active_length, hidden_length, disk_length;
     AioContext *aio_context;
     Error *local_err = NULL;
-    BackupPerf perf = { .use_copy_range = true };
+    BackupPerf perf = { .use_copy_range = true, .max_workers = 1 };
 
     aio_context = bdrv_get_aio_context(bs);
     aio_context_acquire(aio_context);
diff --git a/blockdev.c b/blockdev.c
index 84685e057b..6db433cef8 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2794,7 +2794,7 @@ static BlockJob *do_backup_common(BackupCommon *backup,
 {
     BlockJob *job = NULL;
     BdrvDirtyBitmap *bmap = NULL;
-    BackupPerf perf = { .use_copy_range = true };
+    BackupPerf perf = { .use_copy_range = true, .max_workers = 64 };
     int job_flags = JOB_DEFAULT;
 
     if (!backup->has_speed) {
@@ -2823,6 +2823,12 @@ static BlockJob *do_backup_common(BackupCommon *backup,
         if (backup->x_perf->has_use_copy_range) {
             perf.use_copy_range = backup->x_perf->use_copy_range;
         }
+        if (backup->x_perf->has_max_workers) {
+            perf.max_workers = backup->x_perf->max_workers;
+        }
+        if (backup->x_perf->has_max_chunk) {
+            perf.max_chunk = backup->x_perf->max_chunk;
+        }
     }
 
     if ((backup->sync == MIRROR_SYNC_MODE_BITMAP) ||
-- 
2.29.2




* [PATCH v4 11/23] iotests: 56: prepare for backup over block-copy
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (9 preceding siblings ...)
  2021-01-16 21:46 ` [PATCH v4 10/23] qapi: backup: add max-chunk and max-workers to x-perf struct Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:46 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:46 ` [PATCH v4 12/23] iotests: 185: " Vladimir Sementsov-Ogievskiy
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

After introducing parallel async copy requests instead of the plain
cluster-by-cluster copying loop, we'll have to wait for the paused
status, as we need to wait for several parallel requests to settle. So,
let's gently wait instead of just asserting that the job is already
paused.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/056 | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
index 052456aa00..e2978ba019 100755
--- a/tests/qemu-iotests/056
+++ b/tests/qemu-iotests/056
@@ -307,8 +307,13 @@ class BackupTest(iotests.QMPTestCase):
         event = self.vm.event_wait(name="BLOCK_JOB_ERROR",
                                    match={'data': {'device': 'drive0'}})
         self.assertNotEqual(event, None)
-        # OK, job should be wedged
-        res = self.vm.qmp('query-block-jobs')
+        # OK, job should pause, but it can't do it immediately, as it can't
+        # cancel other parallel requests (which didn't fail)
+        with iotests.Timeout(60, "Timeout waiting for backup actually paused"):
+            while True:
+                res = self.vm.qmp('query-block-jobs')
+                if res['return'][0]['status'] == 'paused':
+                    break
         self.assert_qmp(res, 'return[0]/status', 'paused')
         res = self.vm.qmp('block-job-dismiss', id='drive0')
         self.assert_qmp(res, 'error/desc',
-- 
2.29.2




* [PATCH v4 12/23] iotests: 185: prepare for backup over block-copy
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (10 preceding siblings ...)
  2021-01-16 21:46 ` [PATCH v4 11/23] iotests: 56: prepare for backup over block-copy Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:46 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:46 ` [PATCH v4 13/23] iotests: 219: " Vladimir Sementsov-Ogievskiy
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

The further change of moving backup to be one block-copy call will make
the copying chunk size and the cluster size two separate things. So,
even with a 64k-cluster qcow2 image, the default chunk would be 1M.
Test 185, however, assumes that with the speed limited to 64K, one
iteration results in offset=64K. That will change, as the first
iteration would result in offset=1M independently of the speed.

So, let's explicitly specify what the test wants: set max-chunk to 64K,
so that one iteration is 64K. Note that we don't need to limit
max-workers, as the block-copy rate limiter will handle the situation
and won't start new workers when the speed limit is obviously reached.
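
(Worked numbers, for illustration: the job cluster size here is 64K and
the default copy chunk is 1M = 16 * 64K, so without this tweak the very
first iteration would already put offset at 1048576, regardless of the
64K speed limit; with max-chunk=65536, one iteration copies exactly
65536 bytes, so the test keeps seeing offset=64K per iteration as
before.)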

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/185     | 3 ++-
 tests/qemu-iotests/185.out | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/185 b/tests/qemu-iotests/185
index fd5e6ebe11..0efadcbf62 100755
--- a/tests/qemu-iotests/185
+++ b/tests/qemu-iotests/185
@@ -182,7 +182,8 @@ _send_qemu_cmd $h \
                       'target': '$TEST_IMG.copy',
                       'format': '$IMGFMT',
                       'sync': 'full',
-                      'speed': 65536 } }" \
+                      'speed': 65536,
+                      'x-perf': {'max-chunk': 65536} } }" \
     "return"
 
 # If we don't sleep here 'quit' command races with disk I/O
diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
index eab55d22bf..9dedc8eacb 100644
--- a/tests/qemu-iotests/185.out
+++ b/tests/qemu-iotests/185.out
@@ -88,7 +88,8 @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off
                       'target': 'TEST_DIR/t.IMGFMT.copy',
                       'format': 'IMGFMT',
                       'sync': 'full',
-                      'speed': 65536 } }
+                      'speed': 65536,
+                      'x-perf': { 'max-chunk': 65536 } } }
 Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 lazy_refcounts=off refcount_bits=16
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
-- 
2.29.2




* [PATCH v4 13/23] iotests: 219: prepare for backup over block-copy
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (11 preceding siblings ...)
  2021-01-16 21:46 ` [PATCH v4 12/23] iotests: 185: " Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:46 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:46 ` [PATCH v4 14/23] iotests: 257: " Vladimir Sementsov-Ogievskiy
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

The further change of moving backup to be one block-copy call will make
the copying chunk size and the cluster size two separate things. So,
even with a 64k-cluster qcow2 image, the default chunk would be 1M.
Test 219 depends on the specified chunk size, so update it to set the
chunk size explicitly for backup, as is already done for mirror.
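
(For the rate-limit arithmetic in the updated comment below: 262144
bytes per second divided by 65536 bytes per chunk gives 4 chunks per
second, i.e. one pause point roughly every 250ms, which is what keeps
the job lifecycle transitions deterministic.)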

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/219 | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
index db272c5249..d7b177bf09 100755
--- a/tests/qemu-iotests/219
+++ b/tests/qemu-iotests/219
@@ -203,13 +203,13 @@ with iotests.FilePath('disk.img') as disk_path, \
     # but related to this also automatic state transitions like job
     # completion), but still get pause points often enough to avoid making this
     # test very slow, it's important to have the right ratio between speed and
-    # buf_size.
+    # copy-chunk-size.
     #
-    # For backup, buf_size is hard-coded to the source image cluster size (64k),
-    # so we'll pick the same for mirror. The slice time, i.e. the granularity
-    # of the rate limiting is 100ms. With a speed of 256k per second, we can
-    # get four pause points per second. This gives us 250ms per iteration,
-    # which should be enough to stay deterministic.
+    # Choose a 64k copy-chunk-size both for mirror (via buf_size) and backup
+    # (via x-perf max-chunk). The slice time, i.e. the granularity of the
+    # rate limiting, is 100ms. With a speed of 256k per second, we can get
+    # four pause points per second. This gives us 250ms per iteration, which
+    # should be enough to stay deterministic.
 
     test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
         'device': 'drive0-node',
@@ -226,6 +226,7 @@ with iotests.FilePath('disk.img') as disk_path, \
                 'target': copy_path,
                 'sync': 'full',
                 'speed': 262144,
+                'x-perf': {'max-chunk': 65536},
                 'auto-finalize': auto_finalize,
                 'auto-dismiss': auto_dismiss,
             })
-- 
2.29.2




* [PATCH v4 14/23] iotests: 257: prepare for backup over block-copy
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (12 preceding siblings ...)
  2021-01-16 21:46 ` [PATCH v4 13/23] iotests: 219: " Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:46 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:46 ` [PATCH v4 15/23] block/block-copy: make progress_bytes_callback optional Vladimir Sementsov-Ogievskiy
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Iotest 257 dumps a lot of in-progress information about the backup job,
such as its offset and bitmap dirtiness. A further commit will move
backup to be one block-copy call, which will introduce async parallel
requests instead of plain cluster-by-cluster copying. To keep things
deterministic, allow only one worker (only one copy request at a time)
for this test.
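
For illustration only, a sketch of the resulting call, assuming the
test's own vm fixture and a pre-created target node (names taken from
the test output below); after this change, every backup started through
the helper is pinned to a single in-flight copy request:

    # hypothetical invocation mirroring the first reference backup in
    # the output; the helper itself now adds "x-perf": {"max-workers": 1}
    result = blockdev_backup(vm, 'drive0', 'ref_target_0', 'full',
                             job_id='ref_backup_0')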

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/257     |   1 +
 tests/qemu-iotests/257.out | 306 ++++++++++++++++++-------------------
 2 files changed, 154 insertions(+), 153 deletions(-)

diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
index c80e06ae28..0b37ed7708 100755
--- a/tests/qemu-iotests/257
+++ b/tests/qemu-iotests/257
@@ -191,6 +191,7 @@ def blockdev_backup(vm, device, target, sync, **kwargs):
                         target=target,
                         sync=sync,
                         filter_node_name='backup-top',
+                        x_perf={'max-workers': 1},
                         **kwargs)
     return result
 
diff --git a/tests/qemu-iotests/257.out b/tests/qemu-iotests/257.out
index 64dd460055..a7ba512f4c 100644
--- a/tests/qemu-iotests/257.out
+++ b/tests/qemu-iotests/257.out
@@ -30,7 +30,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -78,7 +78,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -92,7 +92,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -205,7 +205,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -219,7 +219,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -290,7 +290,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -338,7 +338,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -354,7 +354,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -416,7 +416,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -430,7 +430,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -501,7 +501,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -549,7 +549,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -563,7 +563,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -676,7 +676,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -690,7 +690,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -761,7 +761,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -809,7 +809,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -823,7 +823,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -936,7 +936,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -950,7 +950,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1021,7 +1021,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1069,7 +1069,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1085,7 +1085,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -1147,7 +1147,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1161,7 +1161,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1232,7 +1232,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1280,7 +1280,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1294,7 +1294,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -1407,7 +1407,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1421,7 +1421,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1492,7 +1492,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1540,7 +1540,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1554,7 +1554,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -1667,7 +1667,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1681,7 +1681,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1752,7 +1752,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1800,7 +1800,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1816,7 +1816,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -1878,7 +1878,7 @@ expecting 13 dirty sectors; have 13. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -1892,7 +1892,7 @@ expecting 13 dirty sectors; have 13. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -1963,7 +1963,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2011,7 +2011,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2025,7 +2025,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -2138,7 +2138,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2152,7 +2152,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -2223,7 +2223,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2271,7 +2271,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2285,7 +2285,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -2398,7 +2398,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2412,7 +2412,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -2483,7 +2483,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2531,7 +2531,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2547,7 +2547,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -2609,7 +2609,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2623,7 +2623,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -2694,7 +2694,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2742,7 +2742,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2756,7 +2756,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -2869,7 +2869,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -2883,7 +2883,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -2954,7 +2954,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3002,7 +3002,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3016,7 +3016,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -3129,7 +3129,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3143,7 +3143,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -3214,7 +3214,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3262,7 +3262,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3278,7 +3278,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -3340,7 +3340,7 @@ expecting 1014 dirty sectors; have 1014. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3354,7 +3354,7 @@ expecting 1014 dirty sectors; have 1014. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -3425,7 +3425,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3473,7 +3473,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3487,7 +3487,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -3600,7 +3600,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3614,7 +3614,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -3685,7 +3685,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3733,7 +3733,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3747,7 +3747,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -3860,7 +3860,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3874,7 +3874,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -3945,7 +3945,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -3993,7 +3993,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4009,7 +4009,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -4071,7 +4071,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4085,7 +4085,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -4156,7 +4156,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4204,7 +4204,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4218,7 +4218,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -4331,7 +4331,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4345,7 +4345,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -4416,7 +4416,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4464,7 +4464,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4478,7 +4478,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -4591,7 +4591,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4605,7 +4605,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -4676,7 +4676,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4724,7 +4724,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4740,7 +4740,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -4802,7 +4802,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4816,7 +4816,7 @@ expecting 14 dirty sectors; have 14. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -4887,7 +4887,7 @@ write -P0x76 0x3ff0000 0x10000
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4935,7 +4935,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -4949,7 +4949,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 
 --- Write #2 ---
@@ -5062,7 +5062,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
@@ -5076,7 +5076,7 @@ expecting 12 dirty sectors; have 12. OK!
 {"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
 {"return": {}}
 {}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
 {"return": {}}
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
@@ -5139,155 +5139,155 @@ qemu_img compare "TEST_DIR/PID-img" "TEST_DIR/PID-fbackup2" ==> Identical, OK!
 
 -- Sync mode incremental tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
 
 -- Sync mode bitmap tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
 
 -- Sync mode full tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode 'never' has no meaningful effect when combined with sync mode 'full'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
 
 -- Sync mode top tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode 'never' has no meaningful effect when combined with sync mode 'top'"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
 
 -- Sync mode none tests --
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
 
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
 {"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
 
-- 
2.29.2



* [PATCH v4 15/23] block/block-copy: make progress_bytes_callback optional
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (13 preceding siblings ...)
  2021-01-16 21:46 ` [PATCH v4 14/23] iotests: 257: " Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:46 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:46 ` [PATCH v4 16/23] block/backup: drop extra gotos from backup_run() Vladimir Sementsov-Ogievskiy
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

We are going to stop using this callback in the following commit, but
the callback handling code itself will only be dropped in a separate
commit later. So, for now, let's make it optional.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 block/block-copy.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index 82cf945693..61d82d9a1c 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -454,7 +454,9 @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
         t->call_state->error_is_read = error_is_read;
     } else {
         progress_work_done(t->s->progress, t->bytes);
-        t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
+        if (t->s->progress_bytes_callback) {
+            t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
+        }
     }
     co_put_to_shres(t->s->mem, t->bytes);
     block_copy_task_end(t, ret);
-- 
2.29.2



* [PATCH v4 16/23] block/backup: drop extra gotos from backup_run()
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (14 preceding siblings ...)
  2021-01-16 21:46 ` [PATCH v4 15/23] block/block-copy: make progress_bytes_callback optional Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:46 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:46 ` [PATCH v4 17/23] backup: move to block-copy Vladimir Sementsov-Ogievskiy
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 block/backup.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 5522c0f3fe..466608ee55 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -236,7 +236,7 @@ static void backup_init_bcs_bitmap(BackupBlockJob *job)
 static int coroutine_fn backup_run(Job *job, Error **errp)
 {
     BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
-    int ret = 0;
+    int ret;
 
     backup_init_bcs_bitmap(s);
 
@@ -246,13 +246,12 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
 
         for (offset = 0; offset < s->len; ) {
             if (yield_and_check(s)) {
-                ret = -ECANCELED;
-                goto out;
+                return -ECANCELED;
             }
 
             ret = block_copy_reset_unallocated(s->bcs, offset, &count);
             if (ret < 0) {
-                goto out;
+                return ret;
             }
 
             offset += count;
@@ -273,11 +272,10 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
             job_yield(job);
         }
     } else {
-        ret = backup_loop(s);
+        return backup_loop(s);
     }
 
- out:
-    return ret;
+    return 0;
 }
 
 static const BlockJobDriver backup_job_driver = {
-- 
2.29.2



* [PATCH v4 17/23] backup: move to block-copy
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (15 preceding siblings ...)
  2021-01-16 21:46 ` [PATCH v4 16/23] block/backup: drop extra gotos from backup_run() Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:46 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:47 ` [PATCH v4 18/23] qapi: backup: disable copy_range by default Vladimir Sementsov-Ogievskiy
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:46 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

This brings async request handling and block-status driven chunk sizes
to backup out of the box, which improves backup performance.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
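As an illustration only (not part of the patch): with backup driven by
block-copy, the background copy can be tuned through the x-perf options
added earlier in this series. A hypothetical QMP invocation, with
placeholder node names, could look like:

{"execute": "blockdev-backup",
 "arguments": {"job-id": "backup0", "device": "drive0",
               "target": "backup_target", "sync": "full",
               "x-perf": {"max-workers": 8, "max-chunk": 1048576}}}
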
 block/backup.c | 187 +++++++++++++++++++++++++++++++------------------
 1 file changed, 120 insertions(+), 67 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 466608ee55..cc525d5544 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -22,7 +22,6 @@
 #include "block/block-copy.h"
 #include "qapi/error.h"
 #include "qapi/qmp/qerror.h"
-#include "qemu/ratelimit.h"
 #include "qemu/cutils.h"
 #include "sysemu/block-backend.h"
 #include "qemu/bitmap.h"
@@ -44,41 +43,17 @@ typedef struct BackupBlockJob {
     BlockdevOnError on_source_error;
     BlockdevOnError on_target_error;
     uint64_t len;
-    uint64_t bytes_read;
     int64_t cluster_size;
     BackupPerf perf;
 
     BlockCopyState *bcs;
+
+    bool wait;
+    BlockCopyCallState *bg_bcs_call;
 } BackupBlockJob;
 
 static const BlockJobDriver backup_job_driver;
 
-static void backup_progress_bytes_callback(int64_t bytes, void *opaque)
-{
-    BackupBlockJob *s = opaque;
-
-    s->bytes_read += bytes;
-}
-
-static int coroutine_fn backup_do_cow(BackupBlockJob *job,
-                                      int64_t offset, uint64_t bytes,
-                                      bool *error_is_read)
-{
-    int ret = 0;
-    int64_t start, end; /* bytes */
-
-    start = QEMU_ALIGN_DOWN(offset, job->cluster_size);
-    end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
-
-    trace_backup_do_cow_enter(job, start, offset, bytes);
-
-    ret = block_copy(job->bcs, start, end - start, true, error_is_read);
-
-    trace_backup_do_cow_return(job, offset, bytes, ret);
-
-    return ret;
-}
-
 static void backup_cleanup_sync_bitmap(BackupBlockJob *job, int ret)
 {
     BdrvDirtyBitmap *bm;
@@ -158,53 +133,96 @@ static BlockErrorAction backup_error_action(BackupBlockJob *job,
     }
 }
 
-static bool coroutine_fn yield_and_check(BackupBlockJob *job)
+static void coroutine_fn backup_block_copy_callback(void *opaque)
 {
-    uint64_t delay_ns;
-
-    if (job_is_cancelled(&job->common.job)) {
-        return true;
-    }
-
-    /*
-     * We need to yield even for delay_ns = 0 so that bdrv_drain_all() can
-     * return. Without a yield, the VM would not reboot.
-     */
-    delay_ns = block_job_ratelimit_get_delay(&job->common, job->bytes_read);
-    job->bytes_read = 0;
-    job_sleep_ns(&job->common.job, delay_ns);
+    BackupBlockJob *s = opaque;
 
-    if (job_is_cancelled(&job->common.job)) {
-        return true;
+    if (s->wait) {
+        s->wait = false;
+        aio_co_wake(s->common.job.co);
+    } else {
+        job_enter(&s->common.job);
     }
-
-    return false;
 }
 
 static int coroutine_fn backup_loop(BackupBlockJob *job)
 {
-    bool error_is_read;
-    int64_t offset;
-    BdrvDirtyBitmapIter *bdbi;
+    BlockCopyCallState *s = NULL;
     int ret = 0;
+    bool error_is_read;
+    BlockErrorAction act;
+
+    while (true) { /* retry loop */
+        job->bg_bcs_call = s = block_copy_async(job->bcs, 0,
+                QEMU_ALIGN_UP(job->len, job->cluster_size),
+                job->perf.max_workers, job->perf.max_chunk,
+                backup_block_copy_callback, job);
+
+        while (!block_copy_call_finished(s) &&
+               !job_is_cancelled(&job->common.job))
+        {
+            job_yield(&job->common.job);
+        }
 
-    bdbi = bdrv_dirty_iter_new(block_copy_dirty_bitmap(job->bcs));
-    while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
-        do {
-            if (yield_and_check(job)) {
-                goto out;
-            }
-            ret = backup_do_cow(job, offset, job->cluster_size, &error_is_read);
-            if (ret < 0 && backup_error_action(job, error_is_read, -ret) ==
-                           BLOCK_ERROR_ACTION_REPORT)
-            {
-                goto out;
-            }
-        } while (ret < 0);
+        if (!block_copy_call_finished(s)) {
+            assert(job_is_cancelled(&job->common.job));
+            /*
+             * Note that we can't use job_yield() here, as it doesn't work for
+             * a cancelled job.
+             */
+            block_copy_call_cancel(s);
+            job->wait = true;
+            qemu_coroutine_yield();
+            assert(block_copy_call_finished(s));
+            ret = 0;
+            goto out;
+        }
+
+        if (job_is_cancelled(&job->common.job) ||
+            block_copy_call_succeeded(s))
+        {
+            ret = 0;
+            goto out;
+        }
+
+        if (block_copy_call_cancelled(s)) {
+            /*
+             * The job is not cancelled, only the block-copy call. This is
+             * possible after a job pause. Now that the pause is finished,
+             * start a new block-copy iteration.
+             */
+            block_copy_call_free(s);
+            continue;
+        }
+
+        /* The only remaining case is failed block-copy call. */
+        assert(block_copy_call_failed(s));
+
+        ret = block_copy_call_status(s, &error_is_read);
+        act = backup_error_action(job, error_is_read, -ret);
+        switch (act) {
+        case BLOCK_ERROR_ACTION_REPORT:
+            goto out;
+        case BLOCK_ERROR_ACTION_STOP:
+            /*
+             * Go to pause prior to starting a new block-copy call on the
+             * next iteration.
+             */
+            job_pause_point(&job->common.job);
+            break;
+        case BLOCK_ERROR_ACTION_IGNORE:
+            /* Proceed to a new block-copy call to retry. */
+            break;
+        default:
+            abort();
+        }
+
+        block_copy_call_free(s);
     }
 
- out:
-    bdrv_dirty_iter_free(bdbi);
+out:
+    block_copy_call_free(s);
+    job->bg_bcs_call = NULL;
     return ret;
 }
 
@@ -245,7 +263,13 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
         int64_t count;
 
         for (offset = 0; offset < s->len; ) {
-            if (yield_and_check(s)) {
+            if (job_is_cancelled(job)) {
+                return -ECANCELED;
+            }
+
+            job_pause_point(job);
+
+            if (job_is_cancelled(job)) {
                 return -ECANCELED;
             }
 
@@ -278,6 +302,33 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
     return 0;
 }
 
+static void coroutine_fn backup_pause(Job *job)
+{
+    BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
+
+    if (s->bg_bcs_call && !block_copy_call_finished(s->bg_bcs_call)) {
+        block_copy_call_cancel(s->bg_bcs_call);
+        s->wait = true;
+        qemu_coroutine_yield();
+    }
+}
+
+static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
+{
+    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
+
+    /*
+     * block_job_set_speed() is called first from block_job_create(), when we
+     * don't yet have s->bcs.
+     */
+    if (s->bcs) {
+        block_copy_set_speed(s->bcs, speed);
+        if (s->bg_bcs_call) {
+            block_copy_kick(s->bg_bcs_call);
+        }
+    }
+}
+
 static const BlockJobDriver backup_job_driver = {
     .job_driver = {
         .instance_size          = sizeof(BackupBlockJob),
@@ -288,7 +339,9 @@ static const BlockJobDriver backup_job_driver = {
         .commit                 = backup_commit,
         .abort                  = backup_abort,
         .clean                  = backup_clean,
-    }
+        .pause                  = backup_pause,
+    },
+    .set_speed = backup_set_speed,
 };
 
 static int64_t backup_calculate_cluster_size(BlockDriverState *target,
@@ -485,8 +538,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     job->len = len;
     job->perf = *perf;
 
-    block_copy_set_progress_callback(bcs, backup_progress_bytes_callback, job);
     block_copy_set_progress_meter(bcs, &job->common.job.progress);
+    block_copy_set_speed(bcs, speed);
 
     /* Required permissions are already taken by backup-top target */
     block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
-- 
2.29.2



* [PATCH v4 18/23] qapi: backup: disable copy_range by default
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (16 preceding siblings ...)
  2021-01-16 21:46 ` [PATCH v4 17/23] backup: move to block-copy Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:47 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:47 ` [PATCH v4 19/23] block/block-copy: drop unused block_copy_set_progress_callback() Vladimir Sementsov-Ogievskiy
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:47 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

A further commit will add a benchmark
(scripts/simplebench/bench-backup.py), which will show that backup
performs better with async parallel requests (previous commit) and
copy_range disabled. So, let's disable copy_range by default.

Note: the option was added several commits ago with a default of true,
to keep the old behavior (the feature used to be enabled
unconditionally); only now do we change the default.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
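For illustration: with the default flipped, a user who still wants copy
offloading must now opt back in explicitly, e.g. (placeholder node
names):

{"execute": "blockdev-backup",
 "arguments": {"job-id": "backup0", "device": "drive0",
               "target": "backup_target", "sync": "full",
               "x-perf": {"use-copy-range": true}}}
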
 qapi/block-core.json | 2 +-
 blockdev.c           | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index c0e9d119d2..933c2327c8 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1377,7 +1377,7 @@
 # Optional parameters for backup. These parameters don't affect
 # functionality, but may significantly affect performance.
 #
-# @use-copy-range: Use copy offloading. Default true.
+# @use-copy-range: Use copy offloading. Default false.
 #
 # @max-workers: Maximum number of parallel requests for the sustained background
 #               copying process. Doesn't influence copy-before-write operations.
diff --git a/blockdev.c b/blockdev.c
index 6db433cef8..41d1431210 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2794,7 +2794,7 @@ static BlockJob *do_backup_common(BackupCommon *backup,
 {
     BlockJob *job = NULL;
     BdrvDirtyBitmap *bmap = NULL;
-    BackupPerf perf = { .use_copy_range = true, .max_workers = 64 };
+    BackupPerf perf = { .max_workers = 64 };
     int job_flags = JOB_DEFAULT;
 
     if (!backup->has_speed) {
-- 
2.29.2



* [PATCH v4 19/23] block/block-copy: drop unused block_copy_set_progress_callback()
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (17 preceding siblings ...)
  2021-01-16 21:47 ` [PATCH v4 18/23] qapi: backup: disable copy_range by default Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:47 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:47 ` [PATCH v4 20/23] block/block-copy: drop unused argument of block_copy() Vladimir Sementsov-Ogievskiy
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:47 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Drop unused code.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h |  6 ------
 block/block-copy.c         | 15 ---------------
 2 files changed, 21 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 7821850f88..1cbea0b79b 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -18,7 +18,6 @@
 #include "block/block.h"
 #include "qemu/co-shared-resource.h"
 
-typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
 typedef void (*BlockCopyAsyncCallbackFunc)(void *opaque);
 typedef struct BlockCopyState BlockCopyState;
 typedef struct BlockCopyCallState BlockCopyCallState;
@@ -28,11 +27,6 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
                                      BdrvRequestFlags write_flags,
                                      Error **errp);
 
-void block_copy_set_progress_callback(
-        BlockCopyState *s,
-        ProgressBytesCallbackFunc progress_bytes_callback,
-        void *progress_opaque);
-
 void block_copy_set_progress_meter(BlockCopyState *s, ProgressMeter *pm);
 
 void block_copy_state_free(BlockCopyState *s);
diff --git a/block/block-copy.c b/block/block-copy.c
index 61d82d9a1c..2ea8b28684 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -110,9 +110,6 @@ typedef struct BlockCopyState {
     bool skip_unallocated;
 
     ProgressMeter *progress;
-    /* progress_bytes_callback: called when some copying progress is done. */
-    ProgressBytesCallbackFunc progress_bytes_callback;
-    void *progress_opaque;
 
     SharedResource *mem;
 
@@ -298,15 +295,6 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
     return s;
 }
 
-void block_copy_set_progress_callback(
-        BlockCopyState *s,
-        ProgressBytesCallbackFunc progress_bytes_callback,
-        void *progress_opaque)
-{
-    s->progress_bytes_callback = progress_bytes_callback;
-    s->progress_opaque = progress_opaque;
-}
-
 void block_copy_set_progress_meter(BlockCopyState *s, ProgressMeter *pm)
 {
     s->progress = pm;
@@ -454,9 +442,6 @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
         t->call_state->error_is_read = error_is_read;
     } else {
         progress_work_done(t->s->progress, t->bytes);
-        if (t->s->progress_bytes_callback) {
-            t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
-        }
     }
     co_put_to_shres(t->s->mem, t->bytes);
     block_copy_task_end(t, ret);
-- 
2.29.2



* [PATCH v4 20/23] block/block-copy: drop unused argument of block_copy()
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (18 preceding siblings ...)
  2021-01-16 21:47 ` [PATCH v4 19/23] block/block-copy: drop unused block_copy_set_progress_callback() Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:47 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:47 ` [PATCH v4 21/23] simplebench/bench_block_job: use correct shebang line with python3 Vladimir Sementsov-Ogievskiy
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:47 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h |  2 +-
 block/backup-top.c         |  2 +-
 block/block-copy.c         | 10 ++--------
 3 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index 1cbea0b79b..338f2ea7fd 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -35,7 +35,7 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
                                      int64_t offset, int64_t *count);
 
 int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
-                            bool ignore_ratelimit, bool *error_is_read);
+                            bool ignore_ratelimit);
 
 /*
  * Run block-copy in a coroutine, create corresponding BlockCopyCallState
diff --git a/block/backup-top.c b/block/backup-top.c
index 779956ddc2..6e7e7bf340 100644
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -61,7 +61,7 @@ static coroutine_fn int backup_top_cbw(BlockDriverState *bs, uint64_t offset,
     off = QEMU_ALIGN_DOWN(offset, s->cluster_size);
     end = QEMU_ALIGN_UP(offset + bytes, s->cluster_size);
 
-    return block_copy(s->bcs, off, end - off, true, NULL);
+    return block_copy(s->bcs, off, end - off, true);
 }
 
 static int coroutine_fn backup_top_co_pdiscard(BlockDriverState *bs,
diff --git a/block/block-copy.c b/block/block-copy.c
index 2ea8b28684..39ae481c8b 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -723,7 +723,7 @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
 }
 
 int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
-                            bool ignore_ratelimit, bool *error_is_read)
+                            bool ignore_ratelimit)
 {
     BlockCopyCallState call_state = {
         .s = s,
@@ -733,13 +733,7 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
         .max_workers = BLOCK_COPY_MAX_WORKERS,
     };
 
-    int ret = block_copy_common(&call_state);
-
-    if (error_is_read && ret < 0) {
-        *error_is_read = call_state.error_is_read;
-    }
-
-    return ret;
+    return block_copy_common(&call_state);
 }
 
 static void coroutine_fn block_copy_async_co_entry(void *opaque)
-- 
2.29.2



* [PATCH v4 21/23] simplebench/bench_block_job: use correct shebang line with python3
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (19 preceding siblings ...)
  2021-01-16 21:47 ` [PATCH v4 20/23] block/block-copy: drop unused argument of block_copy() Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:47 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:47 ` [PATCH v4 22/23] simplebench: bench_block_job: add cmd_options argument Vladimir Sementsov-Ogievskiy
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:47 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 scripts/simplebench/bench_block_job.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/simplebench/bench_block_job.py b/scripts/simplebench/bench_block_job.py
index 9808d696cf..a0dda1dc4e 100755
--- a/scripts/simplebench/bench_block_job.py
+++ b/scripts/simplebench/bench_block_job.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
 #
 # Benchmark block jobs
 #
-- 
2.29.2



* [PATCH v4 22/23] simplebench: bench_block_job: add cmd_options argument
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (20 preceding siblings ...)
  2021-01-16 21:47 ` [PATCH v4 21/23] simplebench/bench_block_job: use correct shebang line with python3 Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:47 ` Vladimir Sementsov-Ogievskiy
  2021-01-16 21:47 ` [PATCH v4 23/23] simplebench: add bench-backup.py Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:47 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Add argument to allow additional block-job options.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
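A minimal usage sketch of the new signature (the binary path, directory
paths and options below are made-up examples):

    from bench_block_job import bench_block_copy, drv_file

    # cmd_options is merged with the fixed job-id/device/target/sync keys
    result = bench_block_copy('/usr/bin/qemu-system-x86_64',
                              'blockdev-backup',
                              {'x-perf': {'max-workers': 1}},
                              drv_file('/scratch/test-source'),
                              drv_file('/scratch/test-target'))
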
 scripts/simplebench/bench-example.py   |  2 +-
 scripts/simplebench/bench_block_job.py | 11 +++++++----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/scripts/simplebench/bench-example.py b/scripts/simplebench/bench-example.py
index d9c7f7bc17..4864435f39 100644
--- a/scripts/simplebench/bench-example.py
+++ b/scripts/simplebench/bench-example.py
@@ -25,7 +25,7 @@ from bench_block_job import bench_block_copy, drv_file, drv_nbd
 
 def bench_func(env, case):
     """ Handle one "cell" of benchmarking table. """
-    return bench_block_copy(env['qemu_binary'], env['cmd'],
+    return bench_block_copy(env['qemu_binary'], env['cmd'], {},
                             case['source'], case['target'])
 
 
diff --git a/scripts/simplebench/bench_block_job.py b/scripts/simplebench/bench_block_job.py
index a0dda1dc4e..7332845c1c 100755
--- a/scripts/simplebench/bench_block_job.py
+++ b/scripts/simplebench/bench_block_job.py
@@ -78,16 +78,19 @@ def bench_block_job(cmd, cmd_args, qemu_args):
 
 
 # Bench backup or mirror
-def bench_block_copy(qemu_binary, cmd, source, target):
+def bench_block_copy(qemu_binary, cmd, cmd_options, source, target):
     """Helper to run bench_block_job() for mirror or backup"""
     assert cmd in ('blockdev-backup', 'blockdev-mirror')
 
     source['node-name'] = 'source'
     target['node-name'] = 'target'
 
-    return bench_block_job(cmd,
-                           {'job-id': 'job0', 'device': 'source',
-                            'target': 'target', 'sync': 'full'},
+    cmd_options['job-id'] = 'job0'
+    cmd_options['device'] = 'source'
+    cmd_options['target'] = 'target'
+    cmd_options['sync'] = 'full'
+
+    return bench_block_job(cmd, cmd_options,
                            [qemu_binary,
                             '-blockdev', json.dumps(source),
                             '-blockdev', json.dumps(target)])
-- 
2.29.2



* [PATCH v4 23/23] simplebench: add bench-backup.py
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (21 preceding siblings ...)
  2021-01-16 21:47 ` [PATCH v4 22/23] simplebench: bench_block_job: add cmd_options argument Vladimir Sementsov-Ogievskiy
@ 2021-01-16 21:47 ` Vladimir Sementsov-Ogievskiy
  2021-01-18 14:01   ` Max Reitz
  2021-01-18 15:07 ` [PATCH v4 00/23] backup performance: block_status + async Max Reitz
                   ` (2 subsequent siblings)
  25 siblings, 1 reply; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-16 21:47 UTC (permalink / raw)
  To: qemu-block
  Cc: kwolf, vsementsov, wencongyang2, xiechanglong.d, qemu-devel,
	armbru, jsnow, den, mreitz

Add script to benchmark new backup architecture.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
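A sketch of how the script is meant to be invoked (all labels and paths
below are arbitrary examples):

    # compare two binaries on SSD-to-SSD, SSD-to-HDD and HDD-to-SSD copies
    ./bench-backup.py \
        --env old:/usr/bin/qemu-old new:/usr/bin/qemu-new,max-workers=1 \
        --dir ssd:/mnt/ssd hdd:/mnt/hdd \
        --test ssd:ssd ssd:hdd hdd:ssd

Each --dir directory must contain the raw "test-source"/"test-target"
images; results are printed as a text table and also dumped to
results.json.
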
 scripts/simplebench/bench-backup.py | 167 ++++++++++++++++++++++++++++
 1 file changed, 167 insertions(+)
 create mode 100755 scripts/simplebench/bench-backup.py

diff --git a/scripts/simplebench/bench-backup.py b/scripts/simplebench/bench-backup.py
new file mode 100755
index 0000000000..2cf7a852e0
--- /dev/null
+++ b/scripts/simplebench/bench-backup.py
@@ -0,0 +1,167 @@
+#!/usr/bin/env python3
+#
+# Bench backup block-job
+#
+# Copyright (c) 2020 Virtuozzo International GmbH.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import argparse
+import json
+
+import simplebench
+from results_to_text import results_to_text
+from bench_block_job import bench_block_copy, drv_file, drv_nbd
+
+
+def bench_func(env, case):
+    """ Handle one "cell" of benchmarking table. """
+    cmd_options = env['cmd-options'] if 'cmd-options' in env else {}
+    return bench_block_copy(env['qemu-binary'], env['cmd'],
+                            cmd_options,
+                            case['source'], case['target'])
+
+
+def bench(args):
+    test_cases = []
+
+    sources = {}
+    targets = {}
+    for d in args.dir:
+        label, path = d.split(':')  # paths with colon not unsupported
+        sources[label] = drv_file(path + '/test-source')
+        targets[label] = drv_file(path + '/test-target')
+
+    if args.nbd:
+        nbd = args.nbd.split(':')
+        host = nbd[0]
+        port = '10809' if len(nbd) == 1 else nbd[1]
+        drv = drv_nbd(host, port)
+        sources['nbd'] = drv
+        targets['nbd'] = drv
+
+    for t in args.test:
+        src, dst = t.split(':')
+
+        test_cases.append({
+            'id': t,
+            'source': sources[src],
+            'target': targets[dst]
+        })
+
+    binaries = []  # list of (<label>, <path>, [<options>])
+    for i, q in enumerate(args.env):
+        name_path = q.split(':')
+        if len(name_path) == 1:
+            label = f'q{i}'
+            path_opts = name_path[0].split(',')
+        else:
+            assert len(name_path) == 2  # paths with colon not supported
+            label = name_path[0]
+            path_opts = name_path[1].split(',')
+
+        binaries.append((label, path_opts[0], path_opts[1:]))
+
+    test_envs = []
+
+    bin_paths = {}
+    for i, q in enumerate(args.env):
+        opts = q.split(',')
+        label_path = opts[0]
+        opts = opts[1:]
+
+        if ':' in label_path:
+            # path with colon inside is not supported
+            label, path = label_path.split(':')
+            bin_paths[label] = path
+        elif label_path in bin_paths:
+            label = label_path
+            path = bin_paths[label]
+        else:
+            path = label_path
+            label = f'q{i}'
+            bin_paths[label] = path
+
+        x_perf = {}
+        is_mirror = False
+        for opt in opts:
+            if opt == 'mirror':
+                is_mirror = True
+            elif opt == 'copy-range=on':
+                x_perf['use-copy-range'] = True
+            elif opt == 'copy-range=off':
+                x_perf['use-copy-range'] = False
+            elif opt.startswith('max-workers='):
+                x_perf['max-workers'] = int(opt.split('=')[1])
+
+        if is_mirror:
+            assert not x_perf
+            test_envs.append({
+                    'id': f'mirror({label})',
+                    'cmd': 'blockdev-mirror',
+                    'qemu-binary': path
+                })
+        else:
+            test_envs.append({
+                'id': f'backup({label})\n' + '\n'.join(opts),
+                'cmd': 'blockdev-backup',
+                'cmd-options': {'x-perf': x_perf} if x_perf else {},
+                'qemu-binary': path
+            })
+
+    result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
+    with open('results.json', 'w') as f:
+        json.dump(result, f, indent=4)
+    print(results_to_text(result))
+
+
+class ExtendAction(argparse.Action):
+    def __call__(self, parser, namespace, values, option_string=None):
+        items = getattr(namespace, self.dest) or []
+        items.extend(values)
+        setattr(namespace, self.dest, items)
+
+
+if __name__ == '__main__':
+    p = argparse.ArgumentParser('Backup benchmark', epilog='''
+ENV format
+
+    (LABEL:PATH|LABEL|PATH)[,max-workers=N][,use-copy-range=(on|off)][,mirror]
+
+    LABEL                short name for the binary
+    PATH                 path to the binary
+    max-workers          set x-perf.max-workers of backup job
+    use-copy-range       set x-perf.use-copy-range of backup job
+    mirror               use mirror job instead of backup''',
+                                formatter_class=argparse.RawTextHelpFormatter)
+    p.add_argument('--env', nargs='+', help='''\
+QEMU binaries with labels and options; see the
+"ENV format" section below''',
+                   action=ExtendAction)
+    p.add_argument('--dir', nargs='+', help='''\
+Directories, each containing "test-source" and/or
+"test-target" files, raw images to used in
+benchmarking. File path with label, like
+label:/path/to/directory''',
+                   action=ExtendAction)
+    p.add_argument('--nbd', help='''\
+host:port for a remote NBD image (or just host, for
+default port 10809). Use it in tests; its label is "nbd"
+(but you cannot create test nbd:nbd).''')
+    p.add_argument('--test', nargs='+', help='''\
+Tests, in the form source-dir-label:target-dir-label''',
+                   action=ExtendAction)
+
+    bench(p.parse_args())
-- 
2.29.2



* Re: [PATCH v4 09/23] job: call job_enter from job_pause
  2021-01-16 21:46 ` [PATCH v4 09/23] job: call job_enter from job_pause Vladimir Sementsov-Ogievskiy
@ 2021-01-18 13:45   ` Max Reitz
  2021-01-18 14:20     ` Max Reitz
  2021-04-07 11:19   ` Max Reitz
  1 sibling, 1 reply; 45+ messages in thread
From: Max Reitz @ 2021-01-18 13:45 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
> If main job coroutine called job_yield (while some background process
> is in progress), we should give it a chance to call job_pause_point().
> It will be used in backup, when moved on async block-copy.
> 
> Note, that job_user_pause is not enough: we want to handle
> child_job_drained_begin() as well, which call job_pause().

OK.

> Still, if job is already in job_do_yield() in job_pause_point() we
> should not enter it.

Agreed.

> iotest 109 output is modified: on stop we do bdrv_drain_all() which now
> triggers job pause immediately (and pause after ready is standby).

Sounds like a good thing.

> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>   job.c                      |  3 +++
>   tests/qemu-iotests/109.out | 24 ++++++++++++++++++++++++
>   2 files changed, 27 insertions(+)
> 
> diff --git a/job.c b/job.c
> index 8fecf38960..3aaaebafe2 100644
> --- a/job.c
> +++ b/job.c
> @@ -553,6 +553,9 @@ static bool job_timer_not_pending(Job *job)
>   void job_pause(Job *job)
>   {
>       job->pause_count++;
> +    if (!job->paused) {
> +        job_enter(job);
> +    }
>   }

I see job_pause is also called from block_job_error_action() – should we 
reenter the job there, too?

(It looks to me like e.g. mirror would basically just continue to run, 
then, until it needs to yield because of some other issue.  I don’t know 
whether that’s a problem, because I suppose we don’t guarantee to stop 
immediately on an error, though I suspect users would expect us to do 
that as early as possible (i.e., not to launch new requests).

[Quite some time later]

I’ve now tested a mirror job that stops due to a target error, and it 
actually does not make any progress; or at least it doesn’t report any. 
  So it looks like my concern is unjustified.  I don’t know why it’s 
unjustified, though, so perhaps you can explain it before I give my R-b O:))

Max



* Re: [PATCH v4 23/23] simplebench: add bench-backup.py
  2021-01-16 21:47 ` [PATCH v4 23/23] simplebench: add bench-backup.py Vladimir Sementsov-Ogievskiy
@ 2021-01-18 14:01   ` Max Reitz
  0 siblings, 0 replies; 45+ messages in thread
From: Max Reitz @ 2021-01-18 14:01 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 16.01.21 22:47, Vladimir Sementsov-Ogievskiy wrote:
> Add script to benchmark new backup architecture.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>   scripts/simplebench/bench-backup.py | 167 ++++++++++++++++++++++++++++
>   1 file changed, 167 insertions(+)
>   create mode 100755 scripts/simplebench/bench-backup.py
> 
> diff --git a/scripts/simplebench/bench-backup.py b/scripts/simplebench/bench-backup.py
> new file mode 100755
> index 0000000000..2cf7a852e0
> --- /dev/null
> +++ b/scripts/simplebench/bench-backup.py
> @@ -0,0 +1,167 @@
> +#!/usr/bin/env python3
> +#
> +# Bench backup block-job
> +#
> +# Copyright (c) 2020 Virtuozzo International GmbH.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 2 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +#
> +
> +import argparse
> +import json
> +
> +import simplebench
> +from results_to_text import results_to_text
> +from bench_block_job import bench_block_copy, drv_file, drv_nbd
> +
> +
> +def bench_func(env, case):
> +    """ Handle one "cell" of benchmarking table. """
> +    cmd_options = env['cmd-options'] if 'cmd-options' in env else {}
> +    return bench_block_copy(env['qemu-binary'], env['cmd'],
> +                            cmd_options,
> +                            case['source'], case['target'])
> +
> +
> +def bench(args):
> +    test_cases = []
> +
> +    sources = {}
> +    targets = {}
> +    for d in args.dir:
> +        label, path = d.split(':')  # paths with colon not unsupported

s/ un//, I think.

Adding these comments (and the assert below) instead of using splitn() 
is fine for me.

Max



* Re: [PATCH v4 09/23] job: call job_enter from job_pause
  2021-01-18 13:45   ` Max Reitz
@ 2021-01-18 14:20     ` Max Reitz
  0 siblings, 0 replies; 45+ messages in thread
From: Max Reitz @ 2021-01-18 14:20 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 18.01.21 14:45, Max Reitz wrote:
> On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
>> If main job coroutine called job_yield (while some background process
>> is in progress), we should give it a chance to call job_pause_point().
>> It will be used in backup, when moved on async block-copy.
>>
>> Note, that job_user_pause is not enough: we want to handle
>> child_job_drained_begin() as well, which call job_pause().
> 
> OK.
> 
>> Still, if job is already in job_do_yield() in job_pause_point() we
>> should not enter it.
> 
> Agreed.
> 
>> iotest 109 output is modified: on stop we do bdrv_drain_all() which now
>> triggers job pause immediately (and pause after ready is standby).
> 
> Sounds like a good thing.
> 
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   job.c                      |  3 +++
>>   tests/qemu-iotests/109.out | 24 ++++++++++++++++++++++++
>>   2 files changed, 27 insertions(+)
>>
>> diff --git a/job.c b/job.c
>> index 8fecf38960..3aaaebafe2 100644
>> --- a/job.c
>> +++ b/job.c
>> @@ -553,6 +553,9 @@ static bool job_timer_not_pending(Job *job)
>>   void job_pause(Job *job)
>>   {
>>       job->pause_count++;
>> +    if (!job->paused) {
>> +        job_enter(job);
>> +    }
>>   }
> 
> I see job_pause is also called from block_job_error_action() – should we 
> reenter the job there, too?
> 
> (It looks to me like e.g. mirror would basically just continue to run, 
> then, until it needs to yield because of some other issue.  I don’t know 
> whether that’s a problem, because I suppose we don’t guarantee to stop 
> immediately on an error, though I suspect users would expect us to do 
> that as early as possible (i.e., not to launch new requests).
> 
> [Quite some time later]
> 
> I’ve now tested a mirror job that stops due to a target error, and it 
> actually does not make any progress; or at least it doesn’t report any. 
>   So it looks like my concern is unjustified.  I don’t know why it’s 
> unjustified, though, so perhaps you can explain it before I give my R-b 
> O:))

Oh, I guess because job_enter_cond() doesn’t enter if the job is busy 
already.  That would make a lot of sense, so I’m going to assume that’s 
what’s preventing the job_enter() from doing anything if the job is already
running (which it has to be to invoke block_job_error_action()).
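
(For reference, the guard in question, abridged and quoted from memory
from job.c around this series' base, so details may differ:

    void job_enter_cond(Job *job, bool(*fn)(Job *job))
    {
        if (!job_started(job)) {
            return;
        }
        if (job->deferred_to_main_loop) {
            return;
        }

        job_lock();
        if (job->busy) {
            /* already running: do not enter it again */
            job_unlock();
            return;
        }
        ...
    }

so job_enter() on a busy job is indeed a no-op.)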

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (22 preceding siblings ...)
  2021-01-16 21:47 ` [PATCH v4 23/23] simplebench: add bench-backup.py Vladimir Sementsov-Ogievskiy
@ 2021-01-18 15:07 ` Max Reitz
  2021-01-18 17:04   ` Vladimir Sementsov-Ogievskiy
  2021-01-19 18:40 ` Max Reitz
  2021-01-20 10:20 ` [PATCH v4 11.5/23] iotests/129: Limit backup's max-chunk/max-workers Max Reitz
  25 siblings, 1 reply; 45+ messages in thread
From: Max Reitz @ 2021-01-18 15:07 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
> Hi Max!
> I applied my series onto yours 129-fixing and found, that 129 fails for backup.
> And setting small max-chunk and even max-workers to 1 doesn't help! (setting
> speed like in v3 still helps).
> 
> And I found, that the problem is that really, the whole backup job goes during
> drain, because in new architecture we do just job_yield() during the whole
> background block-copy.

OK, so as it was in v3, the job was drained, but since it was already 
yielding while block-copy was running in the background, nothing 
happened; the block-copy completed and only then was the job woken (and 
then there was no reason to pause, because it was done already).

So now the job is entered on drain, too (not only user pauses), which 
means that it gets a chance to pause background requests.

In backup’s case, that means invoking job_yield() again, which sets a 
job_pause_point(), which will cancel the block-copy.  Once the job is 
unpaused (re-entered by job_resume()), backup sees block-copy is 
cancelled (and finished), leaves the loop, and retries with a new 
block-copy call.
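
In pseudocode, the loop shape as I understand it (a hypothetical
simplification on my part, not the actual block/backup.c code;
block_copy_async() and block_copy_cancel() come from the patch titles in
this series, while block_copy_finished(), job_should_pause() and
all_copied are placeholders of mine):

    for (;;) {
        /* kick off background copy of the whole disk */
        call_state = block_copy_async(s, 0, len, ...,
                                      backup_block_copy_callback, job);

        while (!block_copy_finished(call_state)) {
            job_yield(job);            /* woken by job_enter() on drain */
            if (job_should_pause(job)) {
                block_copy_cancel(call_state); /* stop background copy */
                job_pause_point(job);          /* actually pause here  */
            }
        }

        if (all_copied || job_is_cancelled(job)) {
            break;
        }
        /* copy was cancelled by a pause: loop and restart block-copy */
    }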

I think I got it now.


So all that’s left is issuing a thanks to you – thanks! – and announcing 
that I’ve applied this series to my block branch (with s/not 
unsupported/not supported/ in patch 23):

https://git.xanclic.moe/XanClic/qemu/commits/branch/block

Max



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-18 15:07 ` [PATCH v4 00/23] backup performance: block_status + async Max Reitz
@ 2021-01-18 17:04   ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-18 17:04 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

18.01.2021 18:07, Max Reitz wrote:
> On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
>> Hi Max!
>> I applied my series onto yours 129-fixing and found, that 129 fails for backup.
>> And setting small max-chunk and even max-workers to 1 doesn't help! (setting
>> speed like in v3 still helps).
>>
>> And I found, that the problem is that really, the whole backup job goes during
>> drain, because in new architecture we do just job_yield() during the whole
>> background block-copy.
> 
> OK, so as it was in v3, the job was drained, but since it was already yielding while block-copy was running in the background, nothing happened; the block-copy completed and only then was the job woken (and then there was no reason to pause, because it was done already).
> 
> So now the job is entered on drain, too (not only user pauses), which means that it gets a chance to pause background requests.
> 
> In backup’s case, that means invoking job_yield() again, which sets a job_pause_point(), which will cancel the block-copy.  Once the job is unpaused (re-entered by job_resume()), backup sees block-copy is cancelled (and finished), leaves the loop, and retries with a new block-copy call.
> 
> I think I got it now.
> 
> 
> So all that’s left is issuing a thanks to you – thanks! – and announcing that I’ve applied this series to my block branch (with s/not unsupported/not supported/ in patch 23):
> 
> https://git.xanclic.moe/XanClic/qemu/commits/branch/block
> 

Great! Thanks!


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (23 preceding siblings ...)
  2021-01-18 15:07 ` [PATCH v4 00/23] backup performance: block_status + async Max Reitz
@ 2021-01-19 18:40 ` Max Reitz
  2021-01-19 19:22   ` Vladimir Sementsov-Ogievskiy
  2021-01-20 10:20 ` [PATCH v4 11.5/23] iotests/129: Limit backup's max-chunk/max-workers Max Reitz
  25 siblings, 1 reply; 45+ messages in thread
From: Max Reitz @ 2021-01-19 18:40 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
> Hi Max!
> I applied my series onto yours 129-fixing and found, that 129 fails for backup.
> And setting small max-chunk and even max-workers to 1 doesn't help! (setting
> speed like in v3 still helps).
> 
> And I found, that the problem is that really, the whole backup job goes during
> drain, because in new architecture we do just job_yield() during the whole
> background block-copy.
> 
> This leads to modifying the existing patch in the series, which does job_enter()
> from job_user_pause: we just need call job_enter() from job_pause() to cover
> not only user pauses but also drained_begin.
> 
> So, now I don't need any additional fixing of 129.
> 
> Changes in v4:
> - add a lot of Max's r-b's, thanks!
> 
> 03: fix over-80 line (in comment), add r-b
> 09: was "[PATCH v3 10/25] job: call job_enter from job_user_pause",
>      now changed to finally fix 129 iotest, drop r-b
> 
> 10: squash-in additional wording on max-chunk, fix error message, keep r-b
> 17: drop extra include, assert job_is_cancelled() instead of check, add r-b
> 18: adjust commit message, add r-b
> 23: add comments and assertion, about the fact that test doesn't support
>      paths with colon inside
>      fix s/disable-copy-range/use-copy-range/

Hmmm, for me, 129 sometimes fails still, because it completes too 
quickly...  (The error then is that 'return[0]' does not exist in 
query-block-jobs’s result, because the job is already gone.)

When I insert a print(result) after the query-block-jobs, I can see that 
the job has always progressed really far, even if it’s still running. 
(Like, generally the offset is just one MB shy of 1G.)

I suspect the problem is that block-copy just copies too much from the 
start (by default); i.e., it starts 64 workers with, hm, well, 1 MB of 
chunk size?  Shouldn’t fill the 128 MB immediately...

Anyway, limiting the number of workers (to 1) and the chunk size (to 
64k) with x-perf does ensure that the backup job’s progress is limited 
to 1 MB or so, which looks fine to me.
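
Back-of-the-envelope, assuming the defaults guessed above really are 64
workers with 1 MiB chunks:

    64 workers * 1 MiB chunks = 64 MiB in flight at any moment
     1 worker  * 64 KiB chunk = 64 KiB in flight at any moment

so with the x-perf limits only something on the order of a megabyte can
complete between starting the job and querying it.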

I suppose we should do that, then (in 129), before patch 17?

(PS: I can also see a MacOS failure in iotest 256.  I suspect it’s 
related to this series, because 256 is a backup test (with iothreads), 
but I’m not sure yet.  The log is here:

https://cirrus-ci.com/task/5276331753603072
)

Max



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-19 18:40 ` Max Reitz
@ 2021-01-19 19:22   ` Vladimir Sementsov-Ogievskiy
  2021-01-19 19:29     ` Vladimir Sementsov-Ogievskiy
  2021-01-20 10:39     ` Max Reitz
  0 siblings, 2 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-19 19:22 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

19.01.2021 21:40, Max Reitz wrote:
> On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
>> Hi Max!
>> I applied my series onto yours 129-fixing and found, that 129 fails for backup.
>> And setting small max-chunk and even max-workers to 1 doesn't help! (setting
>> speed like in v3 still helps).
>>
>> And I found, that the problem is that really, the whole backup job goes during
>> drain, because in new architecture we do just job_yield() during the whole
>> background block-copy.
>>
>> This leads to modifying the existing patch in the series, which does job_enter()
>> from job_user_pause: we just need call job_enter() from job_pause() to cover
>> not only user pauses but also drained_begin.
>>
>> So, now I don't need any additional fixing of 129.
>>
>> Changes in v4:
>> - add a lot of Max's r-b's, thanks!
>>
>> 03: fix over-80 line (in comment), add r-b
>> 09: was "[PATCH v3 10/25] job: call job_enter from job_user_pause",
>>      now changed to finally fix 129 iotest, drop r-b
>>
>> 10: squash-in additional wording on max-chunk, fix error message, keep r-b
>> 17: drop extra include, assert job_is_cancelled() instead of check, add r-b
>> 18: adjust commit message, add r-b
>> 23: add comments and assertion, about the fact that test doesn't support
>>      paths with colon inside
>>      fix s/disable-copy-range/use-copy-range/
> 
> Hmmm, for me, 129 sometimes fails still, because it completes too quickly...  (The error then is that 'return[0]' does not exist in query-block-jobs’s result, because the job is already gone.)
> 
> When I insert a print(result) after the query-block-jobs, I can see that the job has always progressed really far, even if its still running. (Like, generally the offset is just one MB shy of 1G.)
> 
> I suspect the problem is that block-copy just copies too much from the start (by default); i.e., it starts 64 workers with, hm, well, 1 MB of chunk size?  Shouldn’t fill the 128 MB immediately...
> 
> Anyway, limiting the number of workers (to 1) and the chunk size (to 64k) with x-perf does ensure that the backup job’s progress is limited to 1 MB or so, which looks fine to me.
> 
> I suppose we should do that, then (in 129), before patch 17?

Yes, that sounds reasonable

> 
> (PS: I can also see a MacOS failure in iotest 256.  I suspect it’s related to this series, because 256 is a backup test (with iothreads), but I’m not sure yet.  The log is here:
> 
> https://cirrus-ci.com/task/5276331753603072
> )
> 

qemu received signal 31?

googling for MacOS...

  31    SIGUSR2      terminate process    User defined signal 2


And what does it mean? Something interesting may be in the qemu log...


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-19 19:22   ` Vladimir Sementsov-Ogievskiy
@ 2021-01-19 19:29     ` Vladimir Sementsov-Ogievskiy
  2021-01-20 10:39     ` Max Reitz
  1 sibling, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-19 19:29 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

19.01.2021 22:22, Vladimir Sementsov-Ogievskiy wrote:
>> Hmmm, for me, 129 sometimes fails still, because it completes too quickly...  (The error then is that 'return[0]' does not exist in query-block-jobs’s result, because the job is already gone.)
>>
>> When I insert a print(result) after the query-block-jobs, I can see that the job has always progressed really far, even if its still running. (Like, generally the offset is just one MB shy of 1G.)
>>
>> I suspect the problem is that block-copy just copies too much from the start (by default); i.e., it starts 64 workers with, hm, well, 1 MB of chunk size?  Shouldn’t fill the 128 MB immediately...
>>
>> Anyway, limiting the number of workers (to 1) and the chunk size (to 64k) with x-perf does ensure that the backup job’s progress is limited to 1 MB or so, which looks fine to me.
>>
>> I suppose we should do that, then (in 129), before patch 17?
> 
> Yes, that sounds reasonable

Still, maybe keeping the number of workers >1 is good too, to test the new architecture... Just make it 4 or 8

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH v4 11.5/23] iotests/129: Limit backup's max-chunk/max-workers
  2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
                   ` (24 preceding siblings ...)
  2021-01-19 18:40 ` Max Reitz
@ 2021-01-20 10:20 ` Max Reitz
  2021-01-20 11:24   ` Vladimir Sementsov-Ogievskiy
  25 siblings, 1 reply; 45+ messages in thread
From: Max Reitz @ 2021-01-20 10:20 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, qemu-devel, Max Reitz

Right now, this does not change anything, because backup ignores
max-chunk and max-workers.  However, as soon as backup is switched over
to block-copy for the background copying process, we will need it to
keep 129 passing.

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
Hi Vladimir, would you be OK with this?
---
 tests/qemu-iotests/129 | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index 9a56217bf8..2ac7e7a24d 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -70,9 +70,14 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
                           sync="full", buf_size=65536)
 
     def test_drive_backup(self):
+        # Limit max-chunk and max-workers so that block-copy will not
+        # launch so many workers working on so much data each that
+        # stop's bdrv_drain_all() would finish the job
         self.do_test_stop("drive-backup", device="drive0",
                           target=self.target_img, format=iotests.imgfmt,
-                          sync="full")
+                          sync="full",
+                          x_perf={ 'max-chunk': 65536,
+                                   'max-workers': 8 })
 
     def test_block_commit(self):
         # Add overlay above the source node so that we actually use a
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-19 19:22   ` Vladimir Sementsov-Ogievskiy
  2021-01-19 19:29     ` Vladimir Sementsov-Ogievskiy
@ 2021-01-20 10:39     ` Max Reitz
  2021-01-20 13:50       ` Max Reitz
  1 sibling, 1 reply; 45+ messages in thread
From: Max Reitz @ 2021-01-20 10:39 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 19.01.21 20:22, Vladimir Sementsov-Ogievskiy wrote:
> 19.01.2021 21:40, Max Reitz wrote:
>> On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
>>> Hi Max!
>>> I applied my series onto yours 129-fixing and found, that 129 fails 
>>> for backup.
>>> And setting small max-chunk and even max-workers to 1 doesn't help! 
>>> (setting
>>> speed like in v3 still helps).
>>>
>>> And I found, that the problem is that really, the whole backup job 
>>> goes during
>>> drain, because in new architecture we do just job_yield() during the 
>>> whole
>>> background block-copy.
>>>
>>> This leads to modifying the existing patch in the series, which does 
>>> job_enter()
>>> from job_user_pause: we just need call job_enter() from job_pause() 
>>> to cover
>>> not only user pauses but also drained_begin.
>>>
>>> So, now I don't need any additional fixing of 129.
>>>
>>> Changes in v4:
>>> - add a lot of Max's r-b's, thanks!
>>>
>>> 03: fix over-80 line (in comment), add r-b
>>> 09: was "[PATCH v3 10/25] job: call job_enter from job_user_pause",
>>>      now changed to finally fix 129 iotest, drop r-b
>>>
>>> 10: squash-in additional wording on max-chunk, fix error message, 
>>> keep r-b
>>> 17: drop extra include, assert job_is_cancelled() instead of check, 
>>> add r-b
>>> 18: adjust commit message, add r-b
>>> 23: add comments and assertion, about the fact that test doesn't support
>>>      paths with colon inside
>>>      fix s/disable-copy-range/use-copy-range/
>>
>> Hmmm, for me, 129 sometimes fails still, because it completes too 
>> quickly...  (The error then is that 'return[0]' does not exist in 
>> query-block-jobs’s result, because the job is already gone.)
>>
>> When I insert a print(result) after the query-block-jobs, I can see 
>> that the job has always progressed really far, even if its still 
>> running. (Like, generally the offset is just one MB shy of 1G.)
>>
>> I suspect the problem is that block-copy just copies too much from the 
>> start (by default); i.e., it starts 64 workers with, hm, well, 1 MB of 
>> chunk size?  Shouldn’t fill the 128 MB immediately...
>>
>> Anyway, limiting the number of workers (to 1) and the chunk size (to 
>> 64k) with x-perf does ensure that the backup job’s progress is limited 
>> to 1 MB or so, which looks fine to me.
>>
>> I suppose we should do that, then (in 129), before patch 17?
> 
> Yes, that sounds reasonable
> 
>>
>> (PS: I can also see a MacOS failure in iotest 256.  I suspect it’s 
>> related to this series, because 256 is a backup test (with iothreads), 
>> but I’m not sure yet.  The log is here:
>>
>> https://cirrus-ci.com/task/5276331753603072
>> )
>>
> 
> qemu received signal 31 ?
> 
> googling for MacOS...
> 
>   31    SIGUSR2      terminate process    User defined signal 2

coroutine-sigaltstack uses SIGUSR2 to set up new coroutines.  Perhaps 
it’s unrelated to backup?  Guess I’ll just run the test one more time. O:)

Max



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 11.5/23] iotests/129: Limit backup's max-chunk/max-workers
  2021-01-20 10:20 ` [PATCH v4 11.5/23] iotests/129: Limit backup's max-chunk/max-workers Max Reitz
@ 2021-01-20 11:24   ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-01-20 11:24 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: Kevin Wolf, qemu-devel

20.01.2021 13:20, Max Reitz wrote:
> Right now, this does not change anything, because backup ignores
> max-chunk and max-workers.  However, as soon as backup is switched over
> to block-copy for the background copying process, we will need it to
> keep 129 passing.
> 
> Signed-off-by: Max Reitz <mreitz@redhat.com>
> ---
> Hi Vladimir, would you be OK with this?

Yes, thanks!

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Hmm, interesting, what's going on with the defaults:

we issue 64 requests of 1M each, and 63 are throttled. On stop we do drain; probably the throttling filter is deactivated somehow (otherwise the drain would hang), so the 63 requests should finish and backup should pause itself.

Then, after the drain, throttling should work again? Probably the job is resumed earlier than throttling is reactivated, so 64 new requests are issued before throttling kicks back in... Or maybe something is still wrong. I don't want to care too much :)

> ---
>   tests/qemu-iotests/129 | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
> index 9a56217bf8..2ac7e7a24d 100755
> --- a/tests/qemu-iotests/129
> +++ b/tests/qemu-iotests/129
> @@ -70,9 +70,14 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
>                             sync="full", buf_size=65536)
>   
>       def test_drive_backup(self):
> +        # Limit max-chunk and max-workers so that block-copy will not
> +        # launch so many workers working on so much data each that
> +        # stop's bdrv_drain_all() would finish the job
>           self.do_test_stop("drive-backup", device="drive0",
>                             target=self.target_img, format=iotests.imgfmt,
> -                          sync="full")
> +                          sync="full",
> +                          x_perf={ 'max-chunk': 65536,
> +                                   'max-workers': 8 })
>   
>       def test_block_commit(self):
>           # Add overlay above the source node so that we actually use a
> 


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-20 10:39     ` Max Reitz
@ 2021-01-20 13:50       ` Max Reitz
  2021-01-20 14:34         ` Max Reitz
  0 siblings, 1 reply; 45+ messages in thread
From: Max Reitz @ 2021-01-20 13:50 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 20.01.21 11:39, Max Reitz wrote:
> On 19.01.21 20:22, Vladimir Sementsov-Ogievskiy wrote:
>> 19.01.2021 21:40, Max Reitz wrote:
>>> On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
>>>> Hi Max!
>>>> I applied my series onto yours 129-fixing and found, that 129 fails 
>>>> for backup.
>>>> And setting small max-chunk and even max-workers to 1 doesn't help! 
>>>> (setting
>>>> speed like in v3 still helps).
>>>>
>>>> And I found, that the problem is that really, the whole backup job 
>>>> goes during
>>>> drain, because in new architecture we do just job_yield() during the 
>>>> whole
>>>> background block-copy.
>>>>
>>>> This leads to modifying the existing patch in the series, which does 
>>>> job_enter()
>>>> from job_user_pause: we just need call job_enter() from job_pause() 
>>>> to cover
>>>> not only user pauses but also drained_begin.
>>>>
>>>> So, now I don't need any additional fixing of 129.
>>>>
>>>> Changes in v4:
>>>> - add a lot of Max's r-b's, thanks!
>>>>
>>>> 03: fix over-80 line (in comment), add r-b
>>>> 09: was "[PATCH v3 10/25] job: call job_enter from job_user_pause",
>>>>      now changed to finally fix 129 iotest, drop r-b
>>>>
>>>> 10: squash-in additional wording on max-chunk, fix error message, 
>>>> keep r-b
>>>> 17: drop extra include, assert job_is_cancelled() instead of check, 
>>>> add r-b
>>>> 18: adjust commit message, add r-b
>>>> 23: add comments and assertion, about the fact that test doesn't 
>>>> support
>>>>      paths with colon inside
>>>>      fix s/disable-copy-range/use-copy-range/
>>>
>>> Hmmm, for me, 129 sometimes fails still, because it completes too 
>>> quickly...  (The error then is that 'return[0]' does not exist in 
>>> query-block-jobs’s result, because the job is already gone.)
>>>
>>> When I insert a print(result) after the query-block-jobs, I can see 
>>> that the job has always progressed really far, even if its still 
>>> running. (Like, generally the offset is just one MB shy of 1G.)
>>>
>>> I suspect the problem is that block-copy just copies too much from 
>>> the start (by default); i.e., it starts 64 workers with, hm, well, 1 
>>> MB of chunk size?  Shouldn’t fill the 128 MB immediately...
>>>
>>> Anyway, limiting the number of workers (to 1) and the chunk size (to 
>>> 64k) with x-perf does ensure that the backup job’s progress is 
>>> limited to 1 MB or so, which looks fine to me.
>>>
>>> I suppose we should do that, then (in 129), before patch 17?
>>
>> Yes, that sounds reasonable
>>
>>>
>>> (PS: I can also see a MacOS failure in iotest 256.  I suspect it’s 
>>> related to this series, because 256 is a backup test (with 
>>> iothreads), but I’m not sure yet.  The log is here:
>>>
>>> https://cirrus-ci.com/task/5276331753603072
>>> )
>>>
>>
>> qemu received signal 31 ?
>>
>> googling for MacOS...
>>
>>   31    SIGUSR2      terminate process    User defined signal 2
> 
> coroutine-sigaltstack uses SIGUSR2 to set up new coroutines.  Perhaps 
> it’s unrelated to backup?  Guess I’ll just run the test one more time. O:)

I ran it again, got the same error.  There is no error on master, or 
before backup uses block_copy.

I’m trying to run a test directly on the “move to block-copy” commit, 
but so far Cirrus doesn’t seem to want me to do another test run right now.

(Though I’m pretty sure if there is no error before the block-copy 
commit, then using block-copy must be the problem.  The remaining 
patches in my block branch are just disabling copy_range, some clean-up, 
the simplebench patches, the locking code error reporting change, and a 
new iotest.)

Max



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-20 13:50       ` Max Reitz
@ 2021-01-20 14:34         ` Max Reitz
  2021-01-20 14:44           ` Max Reitz
  0 siblings, 1 reply; 45+ messages in thread
From: Max Reitz @ 2021-01-20 14:34 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 20.01.21 14:50, Max Reitz wrote:
> On 20.01.21 11:39, Max Reitz wrote:
>> On 19.01.21 20:22, Vladimir Sementsov-Ogievskiy wrote:
>>> 19.01.2021 21:40, Max Reitz wrote:
>>>> On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
>>>>> Hi Max!
>>>>> I applied my series onto yours 129-fixing and found, that 129 fails 
>>>>> for backup.
>>>>> And setting small max-chunk and even max-workers to 1 doesn't help! 
>>>>> (setting
>>>>> speed like in v3 still helps).
>>>>>
>>>>> And I found, that the problem is that really, the whole backup job 
>>>>> goes during
>>>>> drain, because in new architecture we do just job_yield() during 
>>>>> the whole
>>>>> background block-copy.
>>>>>
>>>>> This leads to modifying the existing patch in the series, which 
>>>>> does job_enter()
>>>>> from job_user_pause: we just need call job_enter() from job_pause() 
>>>>> to cover
>>>>> not only user pauses but also drained_begin.
>>>>>
>>>>> So, now I don't need any additional fixing of 129.
>>>>>
>>>>> Changes in v4:
>>>>> - add a lot of Max's r-b's, thanks!
>>>>>
>>>>> 03: fix over-80 line (in comment), add r-b
>>>>> 09: was "[PATCH v3 10/25] job: call job_enter from job_user_pause",
>>>>>      now changed to finally fix 129 iotest, drop r-b
>>>>>
>>>>> 10: squash-in additional wording on max-chunk, fix error message, 
>>>>> keep r-b
>>>>> 17: drop extra include, assert job_is_cancelled() instead of check, 
>>>>> add r-b
>>>>> 18: adjust commit message, add r-b
>>>>> 23: add comments and assertion, about the fact that test doesn't 
>>>>> support
>>>>>      paths with colon inside
>>>>>      fix s/disable-copy-range/use-copy-range/
>>>>
>>>> Hmmm, for me, 129 sometimes fails still, because it completes too 
>>>> quickly...  (The error then is that 'return[0]' does not exist in 
>>>> query-block-jobs’s result, because the job is already gone.)
>>>>
>>>> When I insert a print(result) after the query-block-jobs, I can see 
>>>> that the job has always progressed really far, even if its still 
>>>> running. (Like, generally the offset is just one MB shy of 1G.)
>>>>
>>>> I suspect the problem is that block-copy just copies too much from 
>>>> the start (by default); i.e., it starts 64 workers with, hm, well, 1 
>>>> MB of chunk size?  Shouldn’t fill the 128 MB immediately...
>>>>
>>>> Anyway, limiting the number of workers (to 1) and the chunk size (to 
>>>> 64k) with x-perf does ensure that the backup job’s progress is 
>>>> limited to 1 MB or so, which looks fine to me.
>>>>
>>>> I suppose we should do that, then (in 129), before patch 17?
>>>
>>> Yes, that sounds reasonable
>>>
>>>>
>>>> (PS: I can also see a MacOS failure in iotest 256.  I suspect it’s 
>>>> related to this series, because 256 is a backup test (with 
>>>> iothreads), but I’m not sure yet.  The log is here:
>>>>
>>>> https://cirrus-ci.com/task/5276331753603072
>>>> )
>>>>
>>>
>>> qemu received signal 31 ?
>>>
>>> googling for MacOS...
>>>
>>>   31    SIGUSR2      terminate process    User defined signal 2
>>
>> coroutine-sigaltstack uses SIGUSR2 to set up new coroutines.  Perhaps 
>> it’s unrelated to backup?  Guess I’ll just run the test one more time. 
>> O:)
> 
> I ran it again, got the same error.  There is no error on master, or 
> before backup uses block_copy.
> 
> I’m trying to run a test directly on the “move to block-copy” commit, 
> but so far Cirrus doesn’t seem to want me to do another test run right now.
> 
> (Though I’m pretty sure if there is no error before the block-copy 
> commit, then using block-copy must be the problem.  The remaining 
> patches in my block branch are just disabling copy_range, some clean-up, 
> the simplebench patches, the locking code error reporting change, and a 
> new iotest.)

I was able to reproduce the signal on Linux by passing
--with-coroutine=sigaltstack to configure (sometimes takes like 10 runs 
to fail, though).  “move to block-copy” is indeed the first failing commit.
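
(Roughly what I ran; this assumes a standard iotests setup, so take it
as a sketch rather than an exact recipe:

    ./configure --with-coroutine=sigaltstack [...]
    make
    cd tests/qemu-iotests
    while ./check -qcow2 256; do :; done
)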

Letting qemu_coroutine_new() set up an aborting signal handler for 
SIGUSR2 instead of restoring the default action (i.e. exiting without 
core dump) yields this back trace:

Thread 1:

#0  0x00007f61870ba9d5 in raise () at /lib64/libc.so.6
#1  0x00007f61870a38a4 in abort () at /lib64/libc.so.6
#2  0x0000562cd2be7350 in sigusr2_handler (s=<optimized out>) at 
../util/coroutine-sigaltstack.c:151
#3  0x00007f6187cee1e0 in <signal handler called> () at 
/lib64/libpthread.so.0
#4  0x00007f61870bad8a in sigsuspend () at /lib64/libc.so.6
#5  0x0000562cd30d9e48 in qemu_coroutine_new () at 
../util/coroutine-sigaltstack.c:220
#6  0x0000562cd3106206 in qemu_coroutine_create
     (entry=entry@entry=0x562cd303c950 <block_copy_async_co_entry>, 
opaque=opaque@entry=0x7f6170002c00)
     at ../util/qemu-coroutine.c:75
#7  0x0000562cd303cd0e in block_copy_async
     (s=0x562cd4f94400, offset=offset@entry=0, bytes=67108864, 
max_workers=64, max_chunk=0, cb=cb@entry=0x562cd301dfe0 
<backup_block_copy_callback>, cb_opaque=0x562cd6256940) at 
../block/block-copy.c:752
#8  0x0000562cd301dd4e in backup_loop (job=<optimized out>) at 
../block/backup.c:156
#9  backup_run (job=0x562cd6256940, errp=<optimized out>) at 
../block/backup.c:299
#10 0x0000562cd2fc84d2 in job_co_entry (opaque=0x562cd6256940) at 
../job.c:911
#11 0x0000562cd30da04b in coroutine_bootstrap 
(self=self@entry=0x562cd6262ef0, co=co@entry=0x562cd6262ef0)
     at ../util/coroutine-sigaltstack.c:105
#12 0x0000562cd30da0e1 in coroutine_trampoline (signal=<optimized out>) 
at ../util/coroutine-sigaltstack.c:146
#13 0x00007f6187cee1e0 in <signal handler called> () at 
/lib64/libpthread.so.0
#14 0x00007f61870bad8a in sigsuspend () at /lib64/libc.so.6

Thread 2:

(gdb) bt
#0  0x00007f6187cee2b6 in __libc_sigaction () at /lib64/libpthread.so.0
#1  0x0000562cd30d9dc7 in qemu_coroutine_new () at 
../util/coroutine-sigaltstack.c:194
#2  0x0000562cd3106206 in qemu_coroutine_create
     (entry=entry@entry=0x562cd3016c20 <aio_task_co>, 
opaque=opaque@entry=0x7f616c0063d0)
     at ../util/qemu-coroutine.c:75
#3  0x0000562cd3016dd4 in aio_task_pool_start_task (pool=0x7f616c0067d0, 
task=0x7f616c0063d0)
     at ../block/aio_task.c:94
#4  0x0000562cd303c193 in block_copy_task_run (task=<optimized out>, 
pool=<optimized out>)
     at ../block/block-copy.c:330
#5  block_copy_dirty_clusters (call_state=0x7f616c002c00) at 
../block/block-copy.c:646
#6  block_copy_common (call_state=0x7f616c002c00) at 
../block/block-copy.c:696
#7  0x0000562cd30da04b in coroutine_bootstrap 
(self=self@entry=0x7f616c003660, co=co@entry=0x7f616c003660)
     at ../util/coroutine-sigaltstack.c:105
#8  0x0000562cd30da0e1 in coroutine_trampoline (signal=<optimized out>) 
at ../util/coroutine-sigaltstack.c:146
#9  0x00007f6187cee1e0 in <signal handler called> () at 
/lib64/libpthread.so.0
#10 0x00007f61870bad8a in sigsuspend () at /lib64/libc.so.6
#11 0x0000562cd30d9f61 in qemu_coroutine_new () at 
../util/coroutine-sigaltstack.c:254
#12 0x0000562cd61b40d0 in  ()
#13 0x00007f6163fff930 in  ()
#14 0x00007f6187cecc5f in annobin_pt_longjmp.c_end () at 
/lib64/libpthread.so.0
#15 0x0000562cd30da00d in qemu_coroutine_switch
     (from_=0x562cd5cc3400, to_=0x7f616c003108, 
action=action@entry=COROUTINE_YIELD)
     at ../util/coroutine-sigaltstack.c:284
#16 0x0000562cd31065f0 in qemu_coroutine_yield () at 
../util/qemu-coroutine.c:193
#17 0x0000562cd2fc751d in job_do_yield (job=0x562cd5cc3400, 
ns=18446744073709551615) at ../job.c:478
#18 0x0000562cd2fc855f in job_yield (job=0x562cd5cc3400) at ../job.c:525
#19 0x0000562cd301dd74 in backup_loop (job=<optimized out>) at 
../block/backup.c:164
#20 backup_run (job=0x562cd5cc3400, errp=<optimized out>) at 
../block/backup.c:299
#21 0x0000562cd2fc84d2 in job_co_entry (opaque=0x562cd5cc3400) at 
../job.c:911
#22 0x0000562cd30da04b in coroutine_bootstrap 
(self=self@entry=0x562cd61b40d0, co=co@entry=0x562cd61b40d0)
     at ../util/coroutine-sigaltstack.c:105
#23 0x0000562cd30da0e1 in coroutine_trampoline (signal=<optimized out>) 
at ../util/coroutine-sigaltstack.c:146
#24 0x00007f6187cee1e0 in <signal handler called> () at 
/lib64/libpthread.so.0
#25 0x00007f61870bad8a in sigsuspend () at /lib64/libc.so.6
#26 0x0000000000000001 in  ()
#27 0x0000000000000000 in  ()

 From a glance, it looks to me like two coroutines are created 
simultaneously in two threads, and so one thread sets up a special 
SIGUSR2 action, then another reverts SIGUSR2 to the default, and then 
the first one kills itself with SIGUSR2.

Not sure what this has to do with backup, though it is interesting that 
backup_loop() runs in two threads.  So perhaps some AioContext problem.

Max



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-20 14:34         ` Max Reitz
@ 2021-01-20 14:44           ` Max Reitz
  2021-01-20 15:53             ` Max Reitz
  0 siblings, 1 reply; 45+ messages in thread
From: Max Reitz @ 2021-01-20 14:44 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 20.01.21 15:34, Max Reitz wrote:
> On 20.01.21 14:50, Max Reitz wrote:
>> On 20.01.21 11:39, Max Reitz wrote:
>>> On 19.01.21 20:22, Vladimir Sementsov-Ogievskiy wrote:
>>>> 19.01.2021 21:40, Max Reitz wrote:
>>>>> On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> Hi Max!
>>>>>> I applied my series onto yours 129-fixing and found, that 129 
>>>>>> fails for backup.
>>>>>> And setting small max-chunk and even max-workers to 1 doesn't 
>>>>>> help! (setting
>>>>>> speed like in v3 still helps).
>>>>>>
>>>>>> And I found, that the problem is that really, the whole backup job 
>>>>>> goes during
>>>>>> drain, because in new architecture we do just job_yield() during 
>>>>>> the whole
>>>>>> background block-copy.
>>>>>>
>>>>>> This leads to modifying the existing patch in the series, which 
>>>>>> does job_enter()
>>>>>> from job_user_pause: we just need call job_enter() from 
>>>>>> job_pause() to cover
>>>>>> not only user pauses but also drained_begin.
>>>>>>
>>>>>> So, now I don't need any additional fixing of 129.
>>>>>>
>>>>>> Changes in v4:
>>>>>> - add a lot of Max's r-b's, thanks!
>>>>>>
>>>>>> 03: fix over-80 line (in comment), add r-b
>>>>>> 09: was "[PATCH v3 10/25] job: call job_enter from job_user_pause",
>>>>>>      now changed to finally fix 129 iotest, drop r-b
>>>>>>
>>>>>> 10: squash-in additional wording on max-chunk, fix error message, 
>>>>>> keep r-b
>>>>>> 17: drop extra include, assert job_is_cancelled() instead of 
>>>>>> check, add r-b
>>>>>> 18: adjust commit message, add r-b
>>>>>> 23: add comments and assertion, about the fact that test doesn't 
>>>>>> support
>>>>>>      paths with colon inside
>>>>>>      fix s/disable-copy-range/use-copy-range/
>>>>>
>>>>> Hmmm, for me, 129 sometimes fails still, because it completes too 
>>>>> quickly...  (The error then is that 'return[0]' does not exist in 
>>>>> query-block-jobs’s result, because the job is already gone.)
>>>>>
>>>>> When I insert a print(result) after the query-block-jobs, I can see 
>>>>> that the job has always progressed really far, even if its still 
>>>>> running. (Like, generally the offset is just one MB shy of 1G.)
>>>>>
>>>>> I suspect the problem is that block-copy just copies too much from 
>>>>> the start (by default); i.e., it starts 64 workers with, hm, well, 
>>>>> 1 MB of chunk size?  Shouldn’t fill the 128 MB immediately...
>>>>>
>>>>> Anyway, limiting the number of workers (to 1) and the chunk size 
>>>>> (to 64k) with x-perf does ensure that the backup job’s progress is 
>>>>> limited to 1 MB or so, which looks fine to me.
>>>>>
>>>>> I suppose we should do that, then (in 129), before patch 17?
>>>>
>>>> Yes, that sounds reasonable
>>>>
>>>>>
>>>>> (PS: I can also see a MacOS failure in iotest 256.  I suspect it’s 
>>>>> related to this series, because 256 is a backup test (with 
>>>>> iothreads), but I’m not sure yet.  The log is here:
>>>>>
>>>>> https://cirrus-ci.com/task/5276331753603072
>>>>> )
>>>>>
>>>>
>>>> qemu received signal 31 ?
>>>>
>>>> googling for MacOS...
>>>>
>>>>   31    SIGUSR2      terminate process    User defined signal 2
>>>
>>> coroutine-sigaltstack uses SIGUSR2 to set up new coroutines.  Perhaps 
>>> it’s unrelated to backup?  Guess I’ll just run the test one more 
>>> time. O:)
>>
>> I ran it again, got the same error.  There is no error on master, or 
>> before backup uses block_copy.
>>
>> I’m trying to run a test directly on the “move to block-copy” commit, 
>> but so far Cirrus doesn’t seem to want me to do another test run right 
>> now.
>>
>> (Though I’m pretty sure if there is no error before the block-copy 
>> commit, then using block-copy must be the problem.  The remaining 
>> patches in my block branch are just disabling copy_range, some 
>> clean-up, the simplebench patches, the locking code error reporting 
>> change, and a new iotest.)
> 
> I was able to reproduce the signal on Linux, by passing
> --with-coroutine=sigaltstack to configure (sometimes takes like 10 runs 
> to fail, though).  “move to block-copy” is indeed the first failing 
> commiit.
> 
> Letting qemu_coroutine_new() set up an aborting signal handler for 
> SIGUSR2 instead of restoring the default action (i.e. exiting without 
> core dump) yields this back trace:
> 
> Thread 1:
> 
> #0  0x00007f61870ba9d5 in raise () at /lib64/libc.so.6
> #1  0x00007f61870a38a4 in abort () at /lib64/libc.so.6
> #2  0x0000562cd2be7350 in sigusr2_handler (s=<optimized out>) at 
> ../util/coroutine-sigaltstack.c:151
> #3  0x00007f6187cee1e0 in <signal handler called> () at 
> /lib64/libpthread.so.0
> #4  0x00007f61870bad8a in sigsuspend () at /lib64/libc.so.6
> #5  0x0000562cd30d9e48 in qemu_coroutine_new () at 
> ../util/coroutine-sigaltstack.c:220
> #6  0x0000562cd3106206 in qemu_coroutine_create
>      (entry=entry@entry=0x562cd303c950 <block_copy_async_co_entry>, 
> opaque=opaque@entry=0x7f6170002c00)
>      at ../util/qemu-coroutine.c:75
> #7  0x0000562cd303cd0e in block_copy_async
>      (s=0x562cd4f94400, offset=offset@entry=0, bytes=67108864, 
> max_workers=64, max_chunk=0, cb=cb@entry=0x562cd301dfe0 
> <backup_block_copy_callback>, cb_opaque=0x562cd6256940) at 
> ../block/block-copy.c:752
> #8  0x0000562cd301dd4e in backup_loop (job=<optimized out>) at 
> ../block/backup.c:156
> #9  backup_run (job=0x562cd6256940, errp=<optimized out>) at 
> ../block/backup.c:299
> #10 0x0000562cd2fc84d2 in job_co_entry (opaque=0x562cd6256940) at 
> ../job.c:911
> #11 0x0000562cd30da04b in coroutine_bootstrap 
> (self=self@entry=0x562cd6262ef0, co=co@entry=0x562cd6262ef0)
>      at ../util/coroutine-sigaltstack.c:105
> #12 0x0000562cd30da0e1 in coroutine_trampoline (signal=<optimized out>) 
> at ../util/coroutine-sigaltstack.c:146
> #13 0x00007f6187cee1e0 in <signal handler called> () at 
> /lib64/libpthread.so.0
> #14 0x00007f61870bad8a in sigsuspend () at /lib64/libc.so.6
> 
> Thread 2:
> 
> (gdb) bt
> #0  0x00007f6187cee2b6 in __libc_sigaction () at /lib64/libpthread.so.0
> #1  0x0000562cd30d9dc7 in qemu_coroutine_new () at 
> ../util/coroutine-sigaltstack.c:194
> #2  0x0000562cd3106206 in qemu_coroutine_create
>      (entry=entry@entry=0x562cd3016c20 <aio_task_co>, 
> opaque=opaque@entry=0x7f616c0063d0)
>      at ../util/qemu-coroutine.c:75
> #3  0x0000562cd3016dd4 in aio_task_pool_start_task (pool=0x7f616c0067d0, 
> task=0x7f616c0063d0)
>      at ../block/aio_task.c:94
> #4  0x0000562cd303c193 in block_copy_task_run (task=<optimized out>, 
> pool=<optimized out>)
>      at ../block/block-copy.c:330
> #5  block_copy_dirty_clusters (call_state=0x7f616c002c00) at 
> ../block/block-copy.c:646
> #6  block_copy_common (call_state=0x7f616c002c00) at 
> ../block/block-copy.c:696
> #7  0x0000562cd30da04b in coroutine_bootstrap 
> (self=self@entry=0x7f616c003660, co=co@entry=0x7f616c003660)
>      at ../util/coroutine-sigaltstack.c:105
> #8  0x0000562cd30da0e1 in coroutine_trampoline (signal=<optimized out>) 
> at ../util/coroutine-sigaltstack.c:146
> #9  0x00007f6187cee1e0 in <signal handler called> () at 
> /lib64/libpthread.so.0
> #10 0x00007f61870bad8a in sigsuspend () at /lib64/libc.so.6
> #11 0x0000562cd30d9f61 in qemu_coroutine_new () at 
> ../util/coroutine-sigaltstack.c:254
> #12 0x0000562cd61b40d0 in  ()
> #13 0x00007f6163fff930 in  ()
> #14 0x00007f6187cecc5f in annobin_pt_longjmp.c_end () at 
> /lib64/libpthread.so.0
> #15 0x0000562cd30da00d in qemu_coroutine_switch
>      (from_=0x562cd5cc3400, to_=0x7f616c003108, 
> action=action@entry=COROUTINE_YIELD)
>      at ../util/coroutine-sigaltstack.c:284
> #16 0x0000562cd31065f0 in qemu_coroutine_yield () at 
> ../util/qemu-coroutine.c:193
> #17 0x0000562cd2fc751d in job_do_yield (job=0x562cd5cc3400, 
> ns=18446744073709551615) at ../job.c:478
> #18 0x0000562cd2fc855f in job_yield (job=0x562cd5cc3400) at ../job.c:525
> #19 0x0000562cd301dd74 in backup_loop (job=<optimized out>) at 
> ../block/backup.c:164
> #20 backup_run (job=0x562cd5cc3400, errp=<optimized out>) at 
> ../block/backup.c:299
> #21 0x0000562cd2fc84d2 in job_co_entry (opaque=0x562cd5cc3400) at 
> ../job.c:911
> #22 0x0000562cd30da04b in coroutine_bootstrap 
> (self=self@entry=0x562cd61b40d0, co=co@entry=0x562cd61b40d0)
>      at ../util/coroutine-sigaltstack.c:105
> #23 0x0000562cd30da0e1 in coroutine_trampoline (signal=<optimized out>) 
> at ../util/coroutine-sigaltstack.c:146
> #24 0x00007f6187cee1e0 in <signal handler called> () at 
> /lib64/libpthread.so.0
> #25 0x00007f61870bad8a in sigsuspend () at /lib64/libc.so.6
> #26 0x0000000000000001 in  ()
> #27 0x0000000000000000 in  ()
> 
>  From a glance, it looks to me like two coroutines are created 
> simultaneously in two threads, and so one thread sets up a special 
> SIGUSR2 action, then another reverts SIGUSR2 to the default, and then 
> the first one kills itself with SIGUSR2.
> 
> Not sure what this has to do with backup, though it is interesting that 
> backup_loop() runs in two threads.  So perhaps some AioContext problem.

Oh, 256 runs two backups concurrently.  So it isn’t that interesting, 
but perhaps part of the problem still.  (I have no idea, still looking.)

Max



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-20 14:44           ` Max Reitz
@ 2021-01-20 15:53             ` Max Reitz
  2021-01-20 16:00               ` Max Reitz
  2021-01-20 16:04               ` Daniel P. Berrangé
  0 siblings, 2 replies; 45+ messages in thread
From: Max Reitz @ 2021-01-20 15:53 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 20.01.21 15:44, Max Reitz wrote:
> On 20.01.21 15:34, Max Reitz wrote:

[...]

>>  From a glance, it looks to me like two coroutines are created 
>> simultaneously in two threads, and so one thread sets up a special 
>> SIGUSR2 action, then another reverts SIGUSR2 to the default, and then 
>> the first one kills itself with SIGUSR2.
>>
>> Not sure what this has to do with backup, though it is interesting 
>> that backup_loop() runs in two threads.  So perhaps some AioContext 
>> problem.
> 
> Oh, 256 runs two backups concurrently.  So it isn’t that interesting, 
> but perhaps part of the problem still.  (I have no idea, still looking.)

So this is what I found out:

coroutine-sigaltstack, when creating a new coroutine, sets up a signal 
handler for SIGUSR2, then kills itself with SIGUSR2, then uses the 
signal handler context (with a sigaltstack) for the new coroutine, and 
then (the signal handler returns after a sigsetjmp()) the old SIGUSR2 
behavior is restored.

What I fail to understand is how this is thread-safe.  Setting up signal 
handlers is a process-wide action.  When one thread changes what SIGUSR2 
does, this will affect all threads immediately, so when two threads run 
coroutine-sigaltstack’s qemu_coroutine_new() concurrently, and one 
thread reverts to the default action before the other has SIGUSR2’ed 
itself, that later SIGUSR2 will kill the whole process.

(I suppose it gets even more interesting when one thread has set up the 
sigaltstack, then the other sets up its own sigaltstack, and then both 
kill themselves with SIGUSR2, so both coroutines get the same stack...)
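
As a timeline, a hypothetical interleaving of the steps described above
(simplified; the real code is in util/coroutine-sigaltstack.c):

    Thread A                            Thread B
    --------                            --------
    sigaction(SIGUSR2, handler)
                                        sigaction(SIGUSR2, handler)
                                        pthread_kill(self, SIGUSR2)  /* ok */
                                        sigaction(SIGUSR2, old)      /* default restored */
    pthread_kill(self, SIGUSR2)         /* default action: process dies */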

I have no idea why this has never been hit before, but it makes sense 
why block-copy backup makes it apparent: It creates 64+x coroutines in a 
very short time span, and 256 makes it do so in two threads concurrently 
(thanks to launching two backups in two AioContexts in a transaction).

So...  Looks to me like a bug in coroutine-sigaltstack.  Not sure what 
to do now, though.  I don’t think we can use block-copy for backup 
before that backend is fixed.  (And that is assuming that it’s indeed 
coroutine-sigaltstack’s fault.)

I’ll try to add some locking, see what it does, and send a mail 
concerning coroutine-sigaltstack to qemu-devel.

Max



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-20 15:53             ` Max Reitz
@ 2021-01-20 16:00               ` Max Reitz
  2021-01-20 16:04               ` Daniel P. Berrangé
  1 sibling, 0 replies; 45+ messages in thread
From: Max Reitz @ 2021-01-20 16:00 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den, jsnow

On 20.01.21 16:53, Max Reitz wrote:
> On 20.01.21 15:44, Max Reitz wrote:
>> On 20.01.21 15:34, Max Reitz wrote:
> 
> [...]
> 
>>>  From a glance, it looks to me like two coroutines are created 
>>> simultaneously in two threads, and so one thread sets up a special 
>>> SIGUSR2 action, then another reverts SIGUSR2 to the default, and then 
>>> the first one kills itself with SIGUSR2.
>>>
>>> Not sure what this has to do with backup, though it is interesting 
>>> that backup_loop() runs in two threads.  So perhaps some AioContext 
>>> problem.
>>
>> Oh, 256 runs two backups concurrently.  So it isn’t that interesting, 
>> but perhaps part of the problem still.  (I have no idea, still looking.)
> 
> So this is what I found out:
> 
> coroutine-sigaltstack, when creating a new coroutine, sets up a signal 
> handler for SIGUSR2, then kills itself with SIGUSR2, then uses the 
> signal handler context (with a sigaltstack) for the new coroutine, and 
> then (the signal handler returns after a sigsetjmp()) the old SIGUSR2 
> behavior is restored.
> 
> What I fail to understand is how this is thread-safe.  Setting up signal 
> handlers is a process-wide action.  When one thread changes what SIGUSR2 
> does, this will affect all threads immediately, so when two threads run 
> coroutine-sigaltstack’s qemu_coroutine_new() concurrently, and one 
> thread reverts to the default action before the other has SIGUSR2’ed 
> itself, that later SIGUSR2 will kill the whole process.
> 
> (I suppose it gets even more interesting when one thread has set up the 
> sigaltstack, then the other sets up its own sigaltstack, and then both 
> kill themselves with SIGUSR2, so both coroutines get the same stack...)
> 
> I have no idea why this has never been hit before, but it makes sense 
> why block-copy backup makes it apparent: It creates 64+x coroutines in a 
> very short time span, and 256 makes it do so in two threads concurrently 
> (thanks to launching two backups in two AioContexts in a transaction).
> 
> So...  Looks to me like a bug in coroutine-sigaltstack.  Not sure what 
> to do now, though.  I don’t think we can use block-copy for backup 
> before that backend is fixed.  (And that is assuming that it’s indeed 
> coroutine-sigaltstack’s fault.)
> 
> I’ll try to add some locking, see what it does, and send a mail 
> concerning coroutine-sigaltstack to qemu-devel.

Yeah, adding a mutex around the section where SIGUSR2 handling and the 
sigaltstack are non-default makes the problem go away.
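
I.e., something like this, as a sketch of the idea (not the exact RFC
patch; sigusr2_mutex is a name I made up):

    static pthread_mutex_t sigusr2_mutex = PTHREAD_MUTEX_INITIALIZER;

    Coroutine *qemu_coroutine_new(void)
    {
        ...
        pthread_mutex_lock(&sigusr2_mutex);
        /* install SIGUSR2 handler, switch to the sigaltstack,
         * pthread_kill(pthread_self(), SIGUSR2), sigsetjmp() dance */
        ...
        /* restore the old SIGUSR2 action and the old stack */
        pthread_mutex_unlock(&sigusr2_mutex);
        ...
        return co;
    }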

Max



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-20 15:53             ` Max Reitz
  2021-01-20 16:00               ` Max Reitz
@ 2021-01-20 16:04               ` Daniel P. Berrangé
  2021-01-20 16:40                 ` Max Reitz
  1 sibling, 1 reply; 45+ messages in thread
From: Daniel P. Berrangé @ 2021-01-20 16:04 UTC (permalink / raw)
  To: Max Reitz
  Cc: kwolf, Vladimir Sementsov-Ogievskiy, qemu-block, wencongyang2,
	xiechanglong.d, qemu-devel, armbru, den, jsnow

On Wed, Jan 20, 2021 at 04:53:26PM +0100, Max Reitz wrote:
> On 20.01.21 15:44, Max Reitz wrote:
> > On 20.01.21 15:34, Max Reitz wrote:
> 
> [...]
> 
> > >  From a glance, it looks to me like two coroutines are created
> > > simultaneously in two threads, and so one thread sets up a special
> > > SIGUSR2 action, then another reverts SIGUSR2 to the default, and
> > > then the first one kills itself with SIGUSR2.
> > > 
> > > Not sure what this has to do with backup, though it is interesting
> > > that backup_loop() runs in two threads.  So perhaps some AioContext
> > > problem.
> > 
> > Oh, 256 runs two backups concurrently.  So it isn’t that interesting,
> > but perhaps part of the problem still.  (I have no idea, still looking.)
> 
> So this is what I found out:
> 
> coroutine-sigaltstack, when creating a new coroutine, sets up a signal
> handler for SIGUSR2, then kills itself with SIGUSR2, then uses the signal
> handler context (with a sigaltstack) for the new coroutine, and then (the
> signal handler returns after a sigsetjmp()) the old SIGUSR2 behavior is
> restored.
> 
> What I fail to understand is how this is thread-safe.  Setting up signal
> handlers is a process-wide action.  When one thread changes what SIGUSR2
> does, this will affect all threads immediately, so when two threads run
> coroutine-sigaltstack’s qemu_coroutine_new() concurrently, and one thread
> reverts to the default action before the other has SIGUSR2’ed itself, that
> later SIGUSR2 will kill the whole process.
> 
> (I suppose it gets even more interesting when one thread has set up the
> sigaltstack, then the other sets up its own sigaltstack, and then both kill
> themselves with SIGUSR2, so both coroutines get the same stack...)
> 
> I have no idea why this has never been hit before, but it makes sense why
> block-copy backup makes it apparent: It creates 64+x coroutines in a very
> short time span, and 256 makes it do so in two threads concurrently (thanks
> to launching two backups in two AioContexts in a transaction).
> 
> So...  Looks to me like a bug in coroutine-sigaltstack.  Not sure what to do
> now, though.  I don’t think we can use block-copy for backup before that
> backend is fixed.  (And that is assuming that it’s indeed
> coroutine-sigaltstack’s fault.)
> 
> I’ll try to add some locking, see what it does, and send a mail concerning
> coroutine-sigaltstack to qemu-devel.

I'm wondering if we should simply remove the sigaltstack impl and use
ucontext on MacOS too.

MacOS has ucontext marked as deprecated by default, seemingly because
this functionality was deprecated by POSIX. The functionality is still
available without deprecation warnings if you set _XOPEN_SOURCE.

IOW, it is trivial to make the ucontext impl work on MacOS simply by
adding

 #define _XOPEN_SOURCE 600

before including ucontext.h in coroutine-ucontext.c, and removing the
restrictions in configure



diff --git a/configure b/configure
index 881af4b6be..a58bdf70f3 100755
--- a/configure
+++ b/configure
@@ -4822,8 +4822,9 @@ fi
 # specific one.
 
 ucontext_works=no
-if test "$darwin" != "yes"; then
+
   cat > $TMPC << EOF
+#define _XOPEN_SOURCE 600
 #include <ucontext.h>
 #ifdef __stub_makecontext
 #error Ignoring glibc stub makecontext which will always fail
@@ -4833,7 +4834,6 @@ EOF
   if compile_prog "" "" ; then
     ucontext_works=yes
   fi
-fi
 
 if test "$coroutine" = ""; then
   if test "$mingw32" = "yes"; then
diff --git a/util/coroutine-ucontext.c b/util/coroutine-ucontext.c
index 904b375192..9c0a2cf85c 100644
--- a/util/coroutine-ucontext.c
+++ b/util/coroutine-ucontext.c
@@ -22,6 +22,7 @@
 #ifdef _FORTIFY_SOURCE
 #undef _FORTIFY_SOURCE
 #endif
+#define _XOPEN_SOURCE 600
 #include "qemu/osdep.h"
 #include <ucontext.h>
 #include "qemu/coroutine_int.h"



Furthermore, for iOS there was a proposal to add support for using
libucontext, which provides a clean impl of the ucontext APIs for x86
and aarch64 hosts.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [PATCH v4 00/23] backup performance: block_status + async
  2021-01-20 16:04               ` Daniel P. Berrangé
@ 2021-01-20 16:40                 ` Max Reitz
  0 siblings, 0 replies; 45+ messages in thread
From: Max Reitz @ 2021-01-20 16:40 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: kwolf, Vladimir Sementsov-Ogievskiy, qemu-block, wencongyang2,
	xiechanglong.d, qemu-devel, armbru, den, jsnow

On 20.01.21 17:04, Daniel P. Berrangé wrote:
> On Wed, Jan 20, 2021 at 04:53:26PM +0100, Max Reitz wrote:
>> On 20.01.21 15:44, Max Reitz wrote:
>>> On 20.01.21 15:34, Max Reitz wrote:
>>
>> [...]
>>
>>>>   From a glance, it looks to me like two coroutines are created
>>>> simultaneously in two threads, and so one thread sets up a special
>>>> SIGUSR2 action, then another reverts SIGUSR2 to the default, and
>>>> then the first one kills itself with SIGUSR2.
>>>>
>>>> Not sure what this has to do with backup, though it is interesting
>>>> that backup_loop() runs in two threads.  So perhaps some AioContext
>>>> problem.
>>>
>>> Oh, 256 runs two backups concurrently.  So it isn’t that interesting,
>>> but perhaps part of the problem still.  (I have no idea, still looking.)
>>
>> So this is what I found out:
>>
>> coroutine-sigaltstack, when creating a new coroutine, sets up a signal
>> handler for SIGUSR2, then kills itself with SIGUSR2, then uses the signal
>> handler context (with a sigaltstack) for the new coroutine, and then (the
>> signal handler returns after a sigsetjmp()) the old SIGUSR2 behavior is
>> restored.
>>
>> What I fail to understand is how this is thread-safe.  Setting up signal
>> handlers is a process-wide action.  When one thread changes what SIGUSR2
>> does, this will affect all threads immediately, so when two threads run
>> coroutine-sigaltstack’s qemu_coroutine_new() concurrently, and one thread
>> reverts to the default action before the other has SIGUSR2’ed itself, that
>> later SIGUSR2 will kill the whole process.
>>
>> (I suppose it gets even more interesting when one thread has set up the
>> sigaltstack, then the other sets up its own sigaltstack, and then both kill
>> themselves with SIGUSR2, so both coroutines get the same stack...)
>>
>> I have no idea why this has never been hit before, but it makes sense why
>> block-copy backup makes it apparent: It creates 64+x coroutines in a very
>> short time span, and 256 makes it do so in two threads concurrently (thanks
>> to launching two backups in two AioContexts in a transaction).
>>
>> So...  Looks to me like a bug in coroutine-sigaltstack.  Not sure what to do
>> now, though.  I don’t think we can use block-copy for backup before that
>> backend is fixed.  (And that is assuming that it’s indeed
>> coroutine-sigaltstack’s fault.)
>>
>> I’ll try to add some locking, see what it does, and send a mail concerning
>> coroutine-sigaltstack to qemu-devel.
> 
> I'm wondering if we should simply remove the sigaltstack impl and use
> ucontext on MacOS too.
> 
> MacOS has ucontext marked as deprecated by default, seemingly because
> this functionality was deprecated by POSIX. The functionality is still
> available without deprecation warnings if you set _XOPEN_SOURCE.

 From my outside point of view (on coroutine backends), everything you 
wrote below sounds like a very reasonable thing to do.  So perhaps we 
should (I’m not the right person to decide that, though).

However, for me, the immediate question is what I can do now.  I naively 
believe that dropping the sigaltstack implementation would require a 
deprecation cycle.  (If it doesn’t and we can get it out in, say, a 
week, great.)

I need something that helps right now, because I have Vladimir’s series 
in my block branch, the failure doesn’t seem to be its fault, but I 
can’t send a pull request as long as concurrent backups in two iothreads 
make qemu effectively crash when using a specific coroutine backend. 
(And I don’t see configure giving me a warning that choosing sigaltstack 
might be a bad idea.)

I suppose I hope that the prospect of wanting to drop sigaltstack 
altogether may lessen the resistance to just wrapping most of its 
qemu_coroutine_new() implementation in a lock until then...  (i.e., what 
the RFC does that I’ve attached to
https://lists.nongnu.org/archive/html/qemu-devel/2021-01/msg05164.html)

Max

> IOW, it is trivial to make the ucontext impl work on MacOS simply by
> adding
> 
>   #define _XOPEN_SOURCE 600
> 
> before including ucontext.h in coroutine-ucontext.c, and removing the
> restrictions in configure
> 
> 
> 
> diff --git a/configure b/configure
> index 881af4b6be..a58bdf70f3 100755
> --- a/configure
> +++ b/configure
> @@ -4822,8 +4822,9 @@ fi
>   # specific one.
>   
>   ucontext_works=no
> -if test "$darwin" != "yes"; then
> +
>     cat > $TMPC << EOF
> +#define _XOPEN_SOURCE 600
>   #include <ucontext.h>
>   #ifdef __stub_makecontext
>   #error Ignoring glibc stub makecontext which will always fail
> @@ -4833,7 +4834,6 @@ EOF
>     if compile_prog "" "" ; then
>       ucontext_works=yes
>     fi
> -fi
>   
>   if test "$coroutine" = ""; then
>     if test "$mingw32" = "yes"; then
> diff --git a/util/coroutine-ucontext.c b/util/coroutine-ucontext.c
> index 904b375192..9c0a2cf85c 100644
> --- a/util/coroutine-ucontext.c
> +++ b/util/coroutine-ucontext.c
> @@ -22,6 +22,7 @@
>   #ifdef _FORTIFY_SOURCE
>   #undef _FORTIFY_SOURCE
>   #endif
> +#define _XOPEN_SOURCE 600
>   #include "qemu/osdep.h"
>   #include <ucontext.h>
>   #include "qemu/coroutine_int.h"
> 
> 
> 
> Furthermore, for iOS there was a proposal to add support for using
> libucontext, which provides a clean impl of ucontext APIs for x86
> and aarch64 hosts.
> 
> Regards,
> Daniel
> 




* Re: [PATCH v4 09/23] job: call job_enter from job_pause
  2021-01-16 21:46 ` [PATCH v4 09/23] job: call job_enter from job_pause Vladimir Sementsov-Ogievskiy
  2021-01-18 13:45   ` Max Reitz
@ 2021-04-07 11:19   ` Max Reitz
  2021-04-07 11:38     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 45+ messages in thread
From: Max Reitz @ 2021-04-07 11:19 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den

On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
> If the main job coroutine called job_yield() (while some background
> process is in progress), we should give it a chance to call
> job_pause_point().  This will be used in backup, when it is moved to
> async block-copy.
>
> Note that job_user_pause() is not enough: we want to handle
> child_job_drained_begin() as well, which calls job_pause().
>
> Still, if the job is already in job_do_yield() in job_pause_point(),
> we should not enter it.
>
> iotest 109's output is modified: on stop we do bdrv_drain_all(), which
> now triggers a job pause immediately (and a pause after ready results
> in the standby state).
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>   job.c                      |  3 +++
>   tests/qemu-iotests/109.out | 24 ++++++++++++++++++++++++
>   2 files changed, 27 insertions(+)
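
(For context: if I read the patch correctly, the change boils down to 
something like this in job.c; my paraphrase, not the exact hunk:

    void job_pause(Job *job)
    {
        job->pause_count++;
        if (!job->paused) {
            /* Kick the job so it can reach job_pause_point() even
             * while sitting in job_yield() */
            job_enter(job);
        }
    }

i.e. job_pause() now enters the job unless it has already paused 
itself.)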

While looking into

https://lists.gnu.org/archive/html/qemu-block/2021-04/msg00035.html

I noticed this:

$ ./qemu-img create -f raw src.img 1G
$ ./qemu-img create -f raw dst.img 1G

$ (echo '
     {"execute":"qmp_capabilities"}
     {"execute":"blockdev-mirror",
      "arguments":{"job-id":"mirror",
                   "device":"source",
                   "target":"target",
                   "sync":"full",
                   "filter-node-name":"mirror-top"}}
'; sleep 3; echo '
     {"execute":"human-monitor-command",
      "arguments":{"command-line":
                   "qemu-io mirror-top \"write 0 1G\""}}') \
| x86_64-softmmu/qemu-system-x86_64 \
     -qmp stdio \
     -blockdev file,node-name=source,filename=src.img \
     -blockdev file,node-name=target,filename=dst.img \
     -object iothread,id=iothr0 \
     -device virtio-blk,drive=source,iothread=iothr0

Before this commit, qemu-io returned an error that there is a permission 
conflict with virtio-blk.  After this commit, there is an abort (“qemu: 
qemu_mutex_unlock_impl: Operation not permitted”):

#0  0x00007f8445a4eef5 in raise () at /usr/lib/libc.so.6
#1  0x00007f8445a38862 in abort () at /usr/lib/libc.so.6
#2  0x000055fbb14a36bf in error_exit
    (err=<optimized out>, msg=msg@entry=0x55fbb1634790 <__func__.27> "qemu_mutex_unlock_impl")
    at ../util/qemu-thread-posix.c:37
#3  0x000055fbb14a3bc3 in qemu_mutex_unlock_impl
    (mutex=mutex@entry=0x55fbb25ab6e0, file=file@entry=0x55fbb1636957 "../util/async.c", line=line@entry=650)
    at ../util/qemu-thread-posix.c:109
#4  0x000055fbb14b2e75 in aio_context_release (ctx=ctx@entry=0x55fbb25ab680) at ../util/async.c:650
#5  0x000055fbb13d2029 in bdrv_do_drained_begin
    (bs=bs@entry=0x55fbb3a87000, recursive=recursive@entry=false, parent=parent@entry=0x0, ignore_bds_parents=ignore_bds_parents@entry=false, poll=poll@entry=true) at ../block/io.c:441
#6  0x000055fbb13d2192 in bdrv_do_drained_begin
    (poll=true, ignore_bds_parents=false, parent=0x0, recursive=false, bs=0x55fbb3a87000) at ../block/io.c:448
#7  0x000055fbb13c71a7 in blk_drain (blk=0x55fbb26c5a00) at ../block/block-backend.c:1718
#8  0x000055fbb13c8bbd in blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:498
#9  blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:491
#10 0x000055fbb1024863 in hmp_qemu_io (mon=0x7fffaf3fc7d0, qdict=<optimized out>)
    at ../block/monitor/block-hmp-cmds.c:628

Can you make anything out of this?

Max




* Re: [PATCH v4 09/23] job: call job_enter from job_pause
  2021-04-07 11:19   ` Max Reitz
@ 2021-04-07 11:38     ` Vladimir Sementsov-Ogievskiy
  2021-04-21  8:31       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-07 11:38 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den

07.04.2021 14:19, Max Reitz wrote:
> On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
>> If the main job coroutine called job_yield() (while some background
>> process is in progress), we should give it a chance to call
>> job_pause_point().  This will be used in backup, when it is moved to
>> async block-copy.
>>
>> Note that job_user_pause() is not enough: we want to handle
>> child_job_drained_begin() as well, which calls job_pause().
>>
>> Still, if the job is already in job_do_yield() in job_pause_point(),
>> we should not enter it.
>>
>> iotest 109's output is modified: on stop we do bdrv_drain_all(), which
>> now triggers a job pause immediately (and a pause after ready results
>> in the standby state).
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   job.c                      |  3 +++
>>   tests/qemu-iotests/109.out | 24 ++++++++++++++++++++++++
>>   2 files changed, 27 insertions(+)
> 
> While looking into
> 
> https://lists.gnu.org/archive/html/qemu-block/2021-04/msg00035.html
> 
> I noticed this:
> 
> $ ./qemu-img create -f raw src.img 1G
> $ ./qemu-img create -f raw dst.img 1G
> 
> $ (echo '
>     {"execute":"qmp_capabilities"}
>     {"execute":"blockdev-mirror",
>      "arguments":{"job-id":"mirror",
>                   "device":"source",
>                   "target":"target",
>                   "sync":"full",
>                   "filter-node-name":"mirror-top"}}
> '; sleep 3; echo '
>     {"execute":"human-monitor-command",
>      "arguments":{"command-line":
>                   "qemu-io mirror-top \"write 0 1G\""}}') \
> | x86_64-softmmu/qemu-system-x86_64 \
>     -qmp stdio \
>     -blockdev file,node-name=source,filename=src.img \
>     -blockdev file,node-name=target,filename=dst.img \
>     -object iothread,id=iothr0 \
>     -device virtio-blk,drive=source,iothread=iothr0
> 
> Before this commit, qemu-io returned an error that there is a permission conflict with virtio-blk.  After this commit, there is an abort (“qemu: qemu_mutex_unlock_impl: Operation not permitted”):
> 
> #0  0x00007f8445a4eef5 in raise () at /usr/lib/libc.so.6
> #1  0x00007f8445a38862 in abort () at /usr/lib/libc.so.6
> #2  0x000055fbb14a36bf in error_exit
>     (err=<optimized out>, msg=msg@entry=0x55fbb1634790 <__func__.27> "qemu_mutex_unlock_impl")
>     at ../util/qemu-thread-posix.c:37
> #3  0x000055fbb14a3bc3 in qemu_mutex_unlock_impl
>     (mutex=mutex@entry=0x55fbb25ab6e0, file=file@entry=0x55fbb1636957 "../util/async.c", line=line@entry=650)
>     at ../util/qemu-thread-posix.c:109
> #4  0x000055fbb14b2e75 in aio_context_release (ctx=ctx@entry=0x55fbb25ab680) at ../util/async.c:650
> #5  0x000055fbb13d2029 in bdrv_do_drained_begin
>     (bs=bs@entry=0x55fbb3a87000, recursive=recursive@entry=false, parent=parent@entry=0x0, ignore_bds_parents=ignore_bds_parents@entry=false, poll=poll@entry=true) at ../block/io.c:441
> #6  0x000055fbb13d2192 in bdrv_do_drained_begin
>     (poll=true, ignore_bds_parents=false, parent=0x0, recursive=false, bs=0x55fbb3a87000) at ../block/io.c:448
> #7  0x000055fbb13c71a7 in blk_drain (blk=0x55fbb26c5a00) at ../block/block-backend.c:1718
> #8  0x000055fbb13c8bbd in blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:498
> #9  blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:491
> #10 0x000055fbb1024863 in hmp_qemu_io (mon=0x7fffaf3fc7d0, qdict=<optimized out>)
>     at ../block/monitor/block-hmp-cmds.c:628
> 
> Can you make anything out of this?
> 

Hmm.. Interesting.

man pthread_mutex_unlock
...
      EPERM  The  mutex type is PTHREAD_MUTEX_ERRORCHECK or PTHREAD_MUTEX_RECURSIVE, or the mutex is a
               robust mutex, and the current thread does not own the mutex.

So, the thread doesn't own the mutex.  We have an iothread here.

AIO_WAIT_WHILE() documents that ctx must be acquired exactly once by the
caller.  But I don't see where it is acquired in this call stack.

The other question is why the permission conflict is gone with this
commit.  Strange.  I see that hmp_qemu_io creates the blk with perm=0 and
shared=BLK_PERM_ALL.  How could it conflict even before this commit?
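
For reference, what I mean is that a caller from outside the iothread 
is expected to do something like this (a rough sketch of the contract, 
not actual code from the tree):

    AioContext *ctx = blk_get_aio_context(blk);

    aio_context_acquire(ctx);
    blk_unref(blk);  /* may drain -> AIO_WAIT_WHILE(ctx, ...), which
                      * releases ctx while polling and re-acquires it
                      * afterwards */
    aio_context_release(ctx);

If the caller never acquired ctx, that internal release is exactly the 
qemu_mutex_unlock() that fails with EPERM in frame #3.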

-- 
Best regards,
Vladimir



* Re: [PATCH v4 09/23] job: call job_enter from job_pause
  2021-04-07 11:38     ` Vladimir Sementsov-Ogievskiy
@ 2021-04-21  8:31       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 45+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-21  8:31 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: kwolf, wencongyang2, xiechanglong.d, qemu-devel, armbru, den

07.04.2021 14:38, Vladimir Sementsov-Ogievskiy wrote:
> 07.04.2021 14:19, Max Reitz wrote:
>> On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
>>> [...]
>>
>> [...]
>>
>>
>> Can you make anything out of this?
>>
> 
> Hmm.. Interesting.
> 
> man pthread_mutex_unlock
> ...
>      EPERM  The  mutex type is PTHREAD_MUTEX_ERRORCHECK or PTHREAD_MUTEX_RECURSIVE, or the mutex is a
>               robust mutex, and the current thread does not own the mutex.
> 
> So, the thread doesn't own the mutex.  We have an iothread here.
> 
> AIO_WAIT_WHILE() documents that ctx must be acquired exactly once by the caller.  But I don't see where it is acquired in this call stack.
> 
> The other question is why the permission conflict is gone with this commit.  Strange.  I see that hmp_qemu_io creates the blk with perm=0 and shared=BLK_PERM_ALL.  How could it conflict even before this commit?
> 


Sorry, I answered and then forgot about this thread.  Now, looking through my series, I have found this again.  It seems the problem really is the lack of AioContext locking around blk_unref().  I'll send a patch now.
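
Something like this around the blk_unref() in hmp_qemu_io() (a sketch 
of what I have in mind, untested):

    AioContext *ctx = blk_get_aio_context(blk);

    aio_context_acquire(ctx);
    blk_unref(blk);
    aio_context_release(ctx);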


-- 
Best regards,
Vladimir



Thread overview: 45+ messages
2021-01-16 21:46 [PATCH v4 00/23] backup performance: block_status + async Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 01/23] qapi: backup: add perf.use-copy-range parameter Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 02/23] block/block-copy: More explicit call_state Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 03/23] block/block-copy: implement block_copy_async Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 04/23] block/block-copy: add max_chunk and max_workers parameters Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 05/23] block/block-copy: add list of all call-states Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 06/23] block/block-copy: add ratelimit to block-copy Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 07/23] block/block-copy: add block_copy_cancel Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 08/23] blockjob: add set_speed to BlockJobDriver Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 09/23] job: call job_enter from job_pause Vladimir Sementsov-Ogievskiy
2021-01-18 13:45   ` Max Reitz
2021-01-18 14:20     ` Max Reitz
2021-04-07 11:19   ` Max Reitz
2021-04-07 11:38     ` Vladimir Sementsov-Ogievskiy
2021-04-21  8:31       ` Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 10/23] qapi: backup: add max-chunk and max-workers to x-perf struct Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 11/23] iotests: 56: prepare for backup over block-copy Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 12/23] iotests: 185: " Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 13/23] iotests: 219: " Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 14/23] iotests: 257: " Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 15/23] block/block-copy: make progress_bytes_callback optional Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 16/23] block/backup: drop extra gotos from backup_run() Vladimir Sementsov-Ogievskiy
2021-01-16 21:46 ` [PATCH v4 17/23] backup: move to block-copy Vladimir Sementsov-Ogievskiy
2021-01-16 21:47 ` [PATCH v4 18/23] qapi: backup: disable copy_range by default Vladimir Sementsov-Ogievskiy
2021-01-16 21:47 ` [PATCH v4 19/23] block/block-copy: drop unused block_copy_set_progress_callback() Vladimir Sementsov-Ogievskiy
2021-01-16 21:47 ` [PATCH v4 20/23] block/block-copy: drop unused argument of block_copy() Vladimir Sementsov-Ogievskiy
2021-01-16 21:47 ` [PATCH v4 21/23] simplebench/bench_block_job: use correct shebang line with python3 Vladimir Sementsov-Ogievskiy
2021-01-16 21:47 ` [PATCH v4 22/23] simplebench: bench_block_job: add cmd_options argument Vladimir Sementsov-Ogievskiy
2021-01-16 21:47 ` [PATCH v4 23/23] simplebench: add bench-backup.py Vladimir Sementsov-Ogievskiy
2021-01-18 14:01   ` Max Reitz
2021-01-18 15:07 ` [PATCH v4 00/23] backup performance: block_status + async Max Reitz
2021-01-18 17:04   ` Vladimir Sementsov-Ogievskiy
2021-01-19 18:40 ` Max Reitz
2021-01-19 19:22   ` Vladimir Sementsov-Ogievskiy
2021-01-19 19:29     ` Vladimir Sementsov-Ogievskiy
2021-01-20 10:39     ` Max Reitz
2021-01-20 13:50       ` Max Reitz
2021-01-20 14:34         ` Max Reitz
2021-01-20 14:44           ` Max Reitz
2021-01-20 15:53             ` Max Reitz
2021-01-20 16:00               ` Max Reitz
2021-01-20 16:04               ` Daniel P. Berrangé
2021-01-20 16:40                 ` Max Reitz
2021-01-20 10:20 ` [PATCH v4 11.5/23] iotests/129: Limit backup's max-chunk/max-workers Max Reitz
2021-01-20 11:24   ` Vladimir Sementsov-Ogievskiy
