* Avoid copying unallocated clusters during full backup
@ 2020-04-17 18:33 Leo Luan
  2020-04-17 20:11 ` John Snow
From: Leo Luan @ 2020-04-17 18:33 UTC (permalink / raw)
  To: qemu-devel


When doing a full backup from a single-layer qcow2 disk file to a new qcow2
file, the backup_run function does not unset unallocated parts in the copy
bitmap.  The subsequent backup_loop call then works through these unallocated
clusters unnecessarily.  When the target and source reside in different file
systems, an EXDEV error causes zeroes to actually be copied into the target,
ballooning the target file size to the full virtual disk size.
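To make the size-explosion effect concrete, here is a small standalone POSIX C sketch (illustrative only, not QEMU code): a hole created with ftruncate() occupies no disk blocks, while writing the same data as explicit zeroes forces the filesystem to allocate real space.

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Bytes actually allocated on disk (st_blocks is counted in 512-byte units). */
static long long allocated_bytes(int fd)
{
    struct stat st;
    assert(fstat(fd, &st) == 0);
    return (long long)st.st_blocks * 512;
}

/* Create a sparse file of the given virtual size: the hole reads as zeroes
 * but consumes (almost) no disk blocks. */
static int make_sparse(const char *path, off_t size)
{
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    assert(fd >= 0);
    assert(ftruncate(fd, size) == 0);
    return fd;
}

/* Overwrite the first 'len' bytes with explicit zeroes.  The data still
 * reads as zero, but the filesystem now allocates real blocks for it --
 * the "target file size explosion" described above. */
static void write_explicit_zeroes(int fd, size_t len)
{
    char buf[4096];
    size_t off;

    memset(buf, 0, sizeof(buf));
    for (off = 0; off < len; off += sizeof(buf)) {
        assert(pwrite(fd, buf, sizeof(buf), (off_t)off) == (ssize_t)sizeof(buf));
    }
    assert(fsync(fd) == 0);
}
```

On a typical Linux filesystem the sparse file reports a large st_size but near-zero allocation until the zeroes are written out explicitly.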

This patch aims to unset the unallocated parts in the copy bitmap when it
is safe to do so, thereby avoiding unallocated clusters in the backup loop
and preventing significant performance or storage-efficiency impacts when
running full backup jobs.
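For reference, the existing MODE_TOP path in backup_run() already trims the copy bitmap in a similar way; a simplified, approximate sketch of that idea (QEMU-internal APIs, names abbreviated from block/backup.c of that era, so not buildable as-is):

```c
/* Approximate sketch of copy-bitmap trimming; illustrative only. */
int64_t offset = 0;
int64_t count;

while (offset < s->len) {
    /* How many bytes starting at offset are allocated in the top layer? */
    int ret = bdrv_is_allocated(bs, offset, s->len - offset, &count);
    if (ret < 0) {
        break;
    }
    if (ret == 0) {
        /* Unallocated range: clear its bits so the copy loop skips it. */
        bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, count);
    }
    offset += count;
}
```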

Any insights or corrections?

diff --git a/block/backup.c b/block/backup.c
index cf62b1a38c..609d551b1e 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -139,6 +139,29 @@ static void backup_clean(Job *job)
     bdrv_backup_top_drop(s->backup_top);
 }

+static bool backup_ok_to_skip_unallocated(BackupBlockJob *s)
+{
+    /* Checks whether this backup job can avoid copying or dealing with
+       unallocated clusters in the backup loop and their associated
+       performance and storage efficiency impacts. Check for the condition
+       when it's safe to skip copying unallocated clusters that allows the
+       corresponding bits in the copy bitmap to be unset.  The assumption
+       here is that it is ok to do so when we are doing a full backup,
+       the target file is a qcow2, and the source is single layer.
+       Do we need to add additional checks (so that it does not break
+       something) or add additional conditions to optimize additional use
+       cases?
+     */
+
+    if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
+       s->bcs->target->bs->drv != NULL &&
+       strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0 &&
+       s->bcs->source->bs->backing_file[0] == '\0')
+       return true;
+    else
+        return false;
+}
+
 void backup_do_checkpoint(BlockJob *job, Error **errp)
 {
     BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
@@ -248,7 +271,7 @@ static int coroutine_fn backup_run(Job *job, Error **errp)

     backup_init_copy_bitmap(s);

-    if (s->sync_mode == MIRROR_SYNC_MODE_TOP) {
+    if (s->sync_mode == MIRROR_SYNC_MODE_TOP || backup_ok_to_skip_unallocated(s)) {
         int64_t offset = 0;
         int64_t count;



* Re: Avoid copying unallocated clusters during full backup
  2020-04-17 18:33 Avoid copying unallocated clusters during full backup Leo Luan
@ 2020-04-17 20:11 ` John Snow
  2020-04-17 20:24   ` Eric Blake
  2020-04-17 22:31   ` Leo Luan
From: John Snow @ 2020-04-17 20:11 UTC (permalink / raw)
  To: Leo Luan, qemu-devel, Qemu-block; +Cc: Vladimir Sementsov-Ogievskiy, Max Reitz



On 4/17/20 2:33 PM, Leo Luan wrote:
> When doing a full backup from a single layer qcow2 disk file to a new
> qcow2 file, the backup_run function does not unset unallocated parts in
> the copy bitmap.  The subsequent backup_loop call goes through these
> unallocated clusters unnecessarily.  In the case when the target and
> source reside in different file systems, an EXDEV error would cause
> zeroes to be actually copied into the target and that causes a target
> file size explosion to the full virtual disk size.
> 

I think the idea, generally, is to leave the detection of unallocated
portions to the format (qcow2) and the protocol (posix file) respectively.

As far as I know, it is incorrect to assume that unallocated data
can/will/should always be read as zeroes; so it may not be the case that
it is "safe" to skip this data, because the target may or may not need
explicit zeroing.

> This patch aims to unset the unallocated parts in the copy bitmap when
> it is safe to do so, thereby avoid dealing with unallocated clusters in
> the backup loop to prevent significant performance or storage efficiency
> impacts when running full backup jobs.
> 
> Any insights or corrections?
> 
> diff --git a/block/backup.c b/block/backup.c
> index cf62b1a38c..609d551b1e 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -139,6 +139,29 @@ static void backup_clean(Job *job)
>      bdrv_backup_top_drop(s->backup_top);
>  }
>  
> +static bool backup_ok_to_skip_unallocated(BackupBlockJob *s)
> +{
> +    /* Checks whether this backup job can avoid copying or dealing with
> +       unallocated clusters in the backup loop and their associated
> +       performance and storage efficiency impacts. Check for the condition
> +       when it's safe to skip copying unallocated clusters that allows the
> +       corresponding bits in the copy bitmap to be unset.  The assumption
> +       here is that it is ok to do so when we are doing a full backup,
> +       the target file is a qcow2, and the source is single layer.
> +       Do we need to add additional checks (so that it does not break
> +       something) or add additional conditions to optimize additional use
> +       cases?
> +     */
> +
> +    if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
> +       s->bcs->target->bs->drv != NULL &&
> +       strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0 &&
> +       s->bcs->source->bs->backing_file[0] == '\0')

This isn't going to suffice upstream; the backup job can't be performing
format introspection to determine behavior on the fly.

I think what you're really after is something like
bdrv_unallocated_blocks_are_zero().
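A sketch of how the check might look with that helper (illustrative only; not buildable outside the QEMU tree, field names are taken from the patch above, and backing_bs()/bdrv_unallocated_blocks_are_zero() are the QEMU-internal helpers of that era):

```c
static bool backup_ok_to_skip_unallocated(BackupBlockJob *s)
{
    BlockDriverState *src = s->bcs->source->bs;
    BlockDriverState *tgt = s->bcs->target->bs;

    /* Only plausible for a full backup of a single-layer source where both
     * sides guarantee that unallocated clusters read as zeroes. */
    return s->sync_mode == MIRROR_SYNC_MODE_FULL &&
           !backing_bs(src) &&
           bdrv_unallocated_blocks_are_zero(src) &&
           bdrv_unallocated_blocks_are_zero(tgt);
}
```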

> +       return true;
> +    else
> +        return false;
> +}
> +
>  void backup_do_checkpoint(BlockJob *job, Error **errp)
>  {
>      BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
> @@ -248,7 +271,7 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>  
>      backup_init_copy_bitmap(s);
>  
> -    if (s->sync_mode == MIRROR_SYNC_MODE_TOP) {
> +    if (s->sync_mode == MIRROR_SYNC_MODE_TOP ||

So the basic premise is that if you are copying a qcow2 file and the
unallocated portions as defined by the qcow2 metadata are zero, it's
safe to skip those, so you can treat it like SYNC_MODE_TOP.

I think you *also* have to know if the *source* needs those regions
explicitly zeroed, and it's not always safe to just skip them at the
manifest level.

I thought there was code that handled this to some extent already, but I
don't know. I think Vladimir has worked on it recently and can probably
let you know where I am mistaken :)

--js

> backup_ok_to_skip_unallocated(s)) {
>          int64_t offset = 0;
>          int64_t count;
>  




* Re: Avoid copying unallocated clusters during full backup
  2020-04-17 20:11 ` John Snow
@ 2020-04-17 20:24   ` Eric Blake
  2020-04-17 22:57     ` Leo Luan
  2020-04-17 22:31   ` Leo Luan
From: Eric Blake @ 2020-04-17 20:24 UTC (permalink / raw)
  To: John Snow, Leo Luan, qemu-devel, Qemu-block
  Cc: Vladimir Sementsov-Ogievskiy, Max Reitz

On 4/17/20 3:11 PM, John Snow wrote:

>> +
>> +    if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
>> +       s->bcs->target->bs->drv != NULL &&
>> +       strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0 &&
>> +       s->bcs->source->bs->backing_file[0] == '\0')
> 
> This isn't going to suffice upstream; the backup job can't be performing
> format introspection to determine behavior on the fly.

Agreed.  The idea is right (we NEED to make backup operations smarter 
based on knowledge about both source and destination block status), but 
the implementation is not (a check for strncmp("qcow2") is not ideal).

> 
> I think what you're really after is something like
> bdrv_unallocated_blocks_are_zero().

The fact that qemu-img already has a lot of optimizations makes me 
wonder what we can salvage from there into reusable code that both 
qemu-img and block backup can share, so that we're not reimplementing 
block status handling in multiple places.

> So the basic premise is that if you are copying a qcow2 file and the
> unallocated portions as defined by the qcow2 metadata are zero, it's
> safe to skip those, so you can treat it like SYNC_MODE_TOP.
> 
> I think you *also* have to know if the *source* needs those regions
> explicitly zeroed, and it's not always safe to just skip them at the
> manifest level.
> 
> I thought there was code that handled this to some extent already, but I
> don't know. I think Vladimir has worked on it recently and can probably
> let you know where I am mistaken :)

Yes, I'm hoping Vladimir (or his other buddies at Virtuozzo) can chime 
in.  Meanwhile, I've been working on v2 of some patches that will improve
qemu's ability to tell if a destination qcow2 file already reads as all 
zeroes, and we already have bdrv_block_status() for telling which 
portions of a source image already read as all zeroes (whether or not it 
is due to not being allocated, the goal here is that we should NOT have 
to copy anything that reads as zero on the source over to the 
destination if the destination already starts life as reading all zero).

And if nothing else, qemu 5.0 just added 'qemu-img convert 
--target-is-zero' as a last-ditch means of telling qemu to assume the 
destination reads as all zeroes, even if it cannot quickly prove it; we 
probably want to add a similar knob into the QMP commands for initiating 
block backup, for the same reasons.
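For reference, the qemu-img 5.0 invocation looks roughly like this (--target-is-zero requires -n, since the destination must already exist rather than be created by convert):

```sh
qemu-img create -f qcow2 dst.qcow2 100G
qemu-img convert -n --target-is-zero -f qcow2 -O qcow2 src.qcow2 dst.qcow2
```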

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




* Re: Avoid copying unallocated clusters during full backup
  2020-04-17 20:11 ` John Snow
  2020-04-17 20:24   ` Eric Blake
@ 2020-04-17 22:31   ` Leo Luan
From: Leo Luan @ 2020-04-17 22:31 UTC (permalink / raw)
  To: John Snow; +Cc: Vladimir Sementsov-Ogievskiy, qemu-devel, Qemu-block, Max Reitz


On Fri, Apr 17, 2020 at 1:11 PM John Snow <jsnow@redhat.com> wrote:

>
>
> On 4/17/20 2:33 PM, Leo Luan wrote:
> > When doing a full backup from a single layer qcow2 disk file to a new
> > qcow2 file, the backup_run function does not unset unallocated parts in
> > the copy bitmap.  The subsequent backup_loop call goes through these
> > unallocated clusters unnecessarily.  In the case when the target and
> > source reside in different file systems, an EXDEV error would cause
> > zeroes to be actually copied into the target and that causes a target
> > file size explosion to the full virtual disk size.
> >
>
> I think the idea, generally, is to leave the detection of unallocated
> portions to the format (qcow2) and the protocol (posix file) respectively.
>
> As far as I know, it is incorrect to assume that unallocated data
> can/will/should always be read as zeroes; so it may not be the case that
> it is "safe" to skip this data, because the target may or may not need
> explicit zeroing.
>

Thanks for pointing this out.  Would it be safe to skip unallocated
clusters if bdrv_unallocated_blocks_are_zero() returns true for both the
source and the target?

> This patch aims to unset the unallocated parts in the copy bitmap when
> > it is safe to do so, thereby avoid dealing with unallocated clusters in
> > the backup loop to prevent significant performance or storage efficiency
> > impacts when running full backup jobs.
> >
> > Any insights or corrections?
> >
> > diff --git a/block/backup.c b/block/backup.c
> > index cf62b1a38c..609d551b1e 100644
> > --- a/block/backup.c
> > +++ b/block/backup.c
> > @@ -139,6 +139,29 @@ static void backup_clean(Job *job)
> >      bdrv_backup_top_drop(s->backup_top);
> >  }
> >
> > +static bool backup_ok_to_skip_unallocated(BackupBlockJob *s)
> > +{
> > +    /* Checks whether this backup job can avoid copying or dealing with
> > +       unallocated clusters in the backup loop and their associated
> > +       performance and storage efficiency impacts. Check for the condition
> > +       when it's safe to skip copying unallocated clusters that allows the
> > +       corresponding bits in the copy bitmap to be unset.  The assumption
> > +       here is that it is ok to do so when we are doing a full backup,
> > +       the target file is a qcow2, and the source is single layer.
> > +       Do we need to add additional checks (so that it does not break
> > +       something) or add additional conditions to optimize additional use
> > +       cases?
> > +     */
> > +
> > +    if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
> > +       s->bcs->target->bs->drv != NULL &&
> > +       strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0 &&
> > +       s->bcs->source->bs->backing_file[0] == '\0')
>
> This isn't going to suffice upstream; the backup job can't be performing
> format introspection to determine behavior on the fly.
>
> I think what you're really after is something like
> bdrv_unallocated_blocks_are_zero().
>

Thanks for this pointer.


>
> > +       return true;
> > +    else
> > +        return false;
> > +}
> > +
> >  void backup_do_checkpoint(BlockJob *job, Error **errp)
> >  {
> >      BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
> > @@ -248,7 +271,7 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
> >
> >      backup_init_copy_bitmap(s);
> >
> > -    if (s->sync_mode == MIRROR_SYNC_MODE_TOP) {
> > +    if (s->sync_mode == MIRROR_SYNC_MODE_TOP ||
>
> So the basic premise is that if you are copying a qcow2 file and the
> unallocated portions as defined by the qcow2 metadata are zero, it's
> safe to skip those, so you can treat it like SYNC_MODE_TOP.
>

In the MIRROR_SYNC_MODE_TOP case, the check for unallocated clusters does
not go all the way down to the base level.  So it would be incorrect to treat
MIRROR_SYNC_MODE_FULL the same as MIRROR_SYNC_MODE_TOP unless the
source has no backing file and is itself the base.  That's why I added a
check for the backing_file field of the source.  I guess the code can be
potentially extended with a new flag to do the block status check all the
way into the base level for the case of the FULL mode?

I think you *also* have to know if the *source* needs those regions
> explicitly zeroed, and it's not always safe to just skip them at the
> manifest level.
>

Do you mean some operation that changes the target into a non-sparse file?

>
> I thought there was code that handled this to some extent already, but I
> don't know. I think Vladimir has worked on it recently and can probably
> let you know where I am mistaken :)
>

Thanks for the reply!


> --js
>
> > backup_ok_to_skip_unallocated(s)) {
> >          int64_t offset = 0;
> >          int64_t count;
> >
>
> John Snow



* Re: Avoid copying unallocated clusters during full backup
  2020-04-17 20:24   ` Eric Blake
@ 2020-04-17 22:57     ` Leo Luan
  2020-04-18  0:34       ` John Snow
From: Leo Luan @ 2020-04-17 22:57 UTC (permalink / raw)
  To: Eric Blake
  Cc: Vladimir Sementsov-Ogievskiy, John Snow, qemu-devel, Qemu-block,
	Max Reitz


On Fri, Apr 17, 2020 at 1:24 PM Eric Blake <eblake@redhat.com> wrote:

> On 4/17/20 3:11 PM, John Snow wrote:
>
> >> +
> >> +    if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
> >> +       s->bcs->target->bs->drv != NULL &&
> >> +       strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0 &&
> >> +       s->bcs->source->bs->backing_file[0] == '\0')
> >
> > This isn't going to suffice upstream; the backup job can't be performing
> > format introspection to determine behavior on the fly.
>
> Agreed.  The idea is right (we NEED to make backup operations smarter
> based on knowledge about both source and destination block status), but
> the implementation is not (a check for strncmp("qcow2") is not ideal).
>

I see/agree that using strncmp("qcow2") is not general enough for
upstream.  Would changing it to bdrv_unallocated_blocks_are_zero() suffice?


> >
> > I think what you're really after is something like
> > bdrv_unallocated_blocks_are_zero().
>
> The fact that qemu-img already has a lot of optimizations makes me
> wonder what we can salvage from there into reusable code that both
> qemu-img and block backup can share, so that we're not reimplementing
> block status handling in multiple places.
>

A general fix reusing some existing code would be great.  When will it
appear upstream?  We are hoping to avoid needing to maintain a private
branch if possible.

>
> > So the basic premise is that if you are copying a qcow2 file and the
> > unallocated portions as defined by the qcow2 metadata are zero, it's
> > safe to skip those, so you can treat it like SYNC_MODE_TOP.
> >
> > I think you *also* have to know if the *source* needs those regions
> > explicitly zeroed, and it's not always safe to just skip them at the
> > manifest level.
> >
> > I thought there was code that handled this to some extent already, but I
> > don't know. I think Vladimir has worked on it recently and can probably
> > let you know where I am mistaken :)
>
> Yes, I'm hoping Vladimir (or his other buddies at Virtuozzo) can chime
> in.  Meanwhile, I've been working on v2 of some patches that will improve
> qemu's ability to tell if a destination qcow2 file already reads as all
> zeroes, and we already have bdrv_block_status() for telling which
> portions of a source image already read as all zeroes (whether or not it
> is due to not being allocated, the goal here is that we should NOT have
> to copy anything that reads as zero on the source over to the
> destination if the destination already starts life as reading all zero).
>

Can the eventual/optimal solution allow unallocated clusters to be skipped
entirely in the backup loop and make the detection of allocated zeroes an
option, not forcing the backup thread to loop through a potentially huge
empty virtual disk?

>
> And if nothing else, qemu 5.0 just added 'qemu-img convert
> --target-is-zero' as a last-ditch means of telling qemu to assume the
> destination reads as all zeroes, even if it cannot quickly prove it; we
> probably want to add a similar knob into the QMP commands for initiating
> block backup, for the same reasons.
>

This seems a good way of assuring the status of the target file.

Thanks!

>
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.           +1-919-301-3226
> Virtualization:  qemu.org | libvirt.org
>
>



* Re: Avoid copying unallocated clusters during full backup
  2020-04-17 22:57     ` Leo Luan
@ 2020-04-18  0:34       ` John Snow
  2020-04-18  1:43         ` Leo Luan
From: John Snow @ 2020-04-18  0:34 UTC (permalink / raw)
  To: Leo Luan, Eric Blake
  Cc: Vladimir Sementsov-Ogievskiy, qemu-devel, Qemu-block, Max Reitz



On 4/17/20 6:57 PM, Leo Luan wrote:
> On Fri, Apr 17, 2020 at 1:24 PM Eric Blake <eblake@redhat.com
> <mailto:eblake@redhat.com>> wrote:
> 
>     On 4/17/20 3:11 PM, John Snow wrote:
> 
>     >> +
>     >> +    if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
>     >> +       s->bcs->target->bs->drv != NULL &&
>     >> +       strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0 &&
>     >> +       s->bcs->source->bs->backing_file[0] == '\0')
>     >
>     > This isn't going to suffice upstream; the backup job can't be
>     performing
>     > format introspection to determine behavior on the fly.
> 
>     Agreed.  The idea is right (we NEED to make backup operations smarter
>     based on knowledge about both source and destination block status), but
>     the implementation is not (a check for strncmp("qcow2") is not ideal).
> 
> 
> I see/agree that using strncmp("qcow2") is not general enough for the
> upstream.  Would changing it to bdrv_unallocated_blocks_are_zero() suffice?
> 

I don't know, to be really honest with you. Vladimir reworked the backup
code recently and Virtuozzo et al have shown a very aggressive interest
in optimizing the backup loop. I haven't really worked on that code
since their rewrite.

Dropping unallocated regions from the backup manifest is one strategy,
but I think there will be cases where we won't be able to treat it like
"TOP", but may still have unallocated regions we don't want to copy (We
have a backing file which is itself unallocated.)

I'm interested in a more general purpose mechanism for efficient
copying. I think that instead of the backup job itself doing this in
backup.c by populating the copy manifest, that it's also appropriate to
try to copy every last block and have the backup loop implementation
decide it doesn't actually need to copy that block.

That way, the copy optimizations can be shared by any implementation
that needs to do efficient copying, and we can avoid special format and
graph-inspection code in the backup job main interface code.

To be clear, I see these as identical amounts of work:

- backup job runs a loop to inspect every cluster to see if it is
allocated or not, and modifies its cluster backup manifest accordingly

- backup job loops through the entire block and calls a smart_copy()
function that might degrade into a no-op if the right conditions are met
(source is unallocated, explicit zeroes are not needed on the destination)

Either way, you're looping and interrogating the disk, but in one case
the efficiencies go deeper than *just* the backup code.
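The smart_copy() idea might be sketched like this (hypothetical: smart_copy(), target_already_reads_zero(), and do_plain_copy() are invented names for illustration; only bdrv_block_status() is a real QEMU helper):

```c
static int coroutine_fn smart_copy(BlockDriverState *src,
                                   BlockDriverState *tgt,
                                   int64_t offset, int64_t bytes)
{
    int64_t pnum;
    int ret = bdrv_block_status(src, offset, bytes, &pnum, NULL, NULL);
    if (ret < 0) {
        return ret;
    }
    if ((ret & BDRV_BLOCK_ZERO) && target_already_reads_zero(tgt)) {
        return pnum;   /* degrade to a no-op for this zero-reading span */
    }
    /* Otherwise do a real read/write copy of just this span. */
    return do_plain_copy(src, tgt, offset, pnum);
}
```

Either way the disk is interrogated span by span, but with this shape the optimization lives where every copying user can benefit from it.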

I think Vladimir has put a lot of work into making the backup code
highly optimized, so I would consult with him to find out where the best
place to put new optimizations are, if any -- he'll know!

--js


> 
>     >
>     > I think what you're really after is something like
>     > bdrv_unallocated_blocks_are_zero().
> 
>     The fact that qemu-img already has a lot of optimizations makes me
>     wonder what we can salvage from there into reusable code that both
>     qemu-img and block backup can share, so that we're not reimplementing
>     block status handling in multiple places.
> 
> 
> A general fix reusing some existing code would be great.  When will it
> appear in the upstream?  We are hoping to avoid needing to use a private
> branch if possible.  
> 
> 
>     > So the basic premise is that if you are copying a qcow2 file and the
>     > unallocated portions as defined by the qcow2 metadata are zero, it's
>     > safe to skip those, so you can treat it like SYNC_MODE_TOP.
>     >
>     > I think you *also* have to know if the *source* needs those regions
>     > explicitly zeroed, and it's not always safe to just skip them at the
>     > manifest level.
>     >
>     > I thought there was code that handled this to some extent already,
>     but I
>     > don't know. I think Vladimir has worked on it recently and can
>     probably
>     > let you know where I am mistaken :)
> 
>     Yes, I'm hoping Vladimir (or his other buddies at Virtuozzo) can chime
>     in.  Meanwhile, I've been working on v2 of some patches that will improve
>     qemu's ability to tell if a destination qcow2 file already reads as all
>     zeroes, and we already have bdrv_block_status() for telling which
>     portions of a source image already read as all zeroes (whether or
>     not it
>     is due to not being allocated, the goal here is that we should NOT have
>     to copy anything that reads as zero on the source over to the
>     destination if the destination already starts life as reading all zero).
> 
> 
> Can the eventual/optimal solution allow unallocated clusters to be
> skipped entirely in the backup loop and make the detection of allocated
> zeroes an option, not forcing the backup thread to loop through a
> potentially huge empty virtual disk?
> 

I mean, using the TOP code is doing the same thing, really: it's looking
at allocation status and marking those blocks as "already copied", more
or less.

> 
>     And if nothing else, qemu 5.0 just added 'qemu-img convert
>     --target-is-zero' as a last-ditch means of telling qemu to assume the
>     destination reads as all zeroes, even if it cannot quickly prove it; we
>     probably want to add a similar knob into the QMP commands for
>     initiating
>     block backup, for the same reasons.
> 
> 
> This seems a good way of assuring the status of the target file.
> 
> Thanks!
> 




* Re: Avoid copying unallocated clusters during full backup
  2020-04-18  0:34       ` John Snow
@ 2020-04-18  1:43         ` Leo Luan
  2020-04-20 10:56           ` Vladimir Sementsov-Ogievskiy
From: Leo Luan @ 2020-04-18  1:43 UTC (permalink / raw)
  To: John Snow; +Cc: Vladimir Sementsov-Ogievskiy, qemu-devel, Qemu-block, Max Reitz


On Fri, Apr 17, 2020 at 5:34 PM John Snow <jsnow@redhat.com> wrote:

>
>
> On 4/17/20 6:57 PM, Leo Luan wrote:
> > On Fri, Apr 17, 2020 at 1:24 PM Eric Blake <eblake@redhat.com
> > <mailto:eblake@redhat.com>> wrote:
> >
> >     On 4/17/20 3:11 PM, John Snow wrote:
> >
> >     >> +
> >     >> +    if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
> >     >> +       s->bcs->target->bs->drv != NULL &&
> >     >> +       strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0 &&
> >     >> +       s->bcs->source->bs->backing_file[0] == '\0')
> >     >
> >     > This isn't going to suffice upstream; the backup job can't be
> >     performing
> >     > format introspection to determine behavior on the fly.
> >
> >     Agreed.  The idea is right (we NEED to make backup operations smarter
> >     based on knowledge about both source and destination block status),
> but
> >     the implementation is not (a check for strncmp("qcow2") is not
> ideal).
> >
> >
> > I see/agree that using strncmp("qcow2") is not general enough for the
> > upstream.  Would changing it to bdrv_unallocated_blocks_are_zero()
> suffice?
> >
>
> I don't know, to be really honest with you. Vladimir reworked the backup
> code recently and Virtuozzo et al have shown a very aggressive interest
> in optimizing the backup loop. I haven't really worked on that code
> since their rewrite.
>
> Dropping unallocated regions from the backup manifest is one strategy,
> but I think there will be cases where we won't be able to treat it like
> "TOP", but may still have unallocated regions we don't want to copy (We
> have a backing file which is itself unallocated.)
>
> I'm interested in a more general purpose mechanism for efficient
> copying. I think that instead of the backup job itself doing this in
> backup.c by populating the copy manifest, that it's also appropriate to
> try to copy every last block and have the backup loop implementation
> decide it doesn't actually need to copy that block.
>
> That way, the copy optimizations can be shared by any implementation
> that needs to do efficient copying, and we can avoid special format and
> graph-inspection code in the backup job main interface code.
>
> To be clear, I see these as identical amounts of work:
>
> - backup job runs a loop to inspect every cluster to see if it is
> allocated or not, and modifies its cluster backup manifest accordingly
>

This inspection can detect more than 1GB of unallocated (64KB) clusters per
loop iteration, and it's a shallower call path.

>
> - backup job loops through the entire block and calls a smart_copy()
> function that might degrade into a no-op if the right conditions are met
> (source is unallocated, explicit zeroes are not needed on the destination)
>

If I am not mistaken, the copy loop does one cluster per iteration using a
call path roughly twice as deep (trying to copy and eventually finding
unallocated clusters).  So with a 64KB cluster size, it's about 2 * 1G/64K
~= 32 thousand times less efficient in CPU cycles for large sparse virtual disks.

>
> Either way, you're looping and interrogating the disk, but in one case
> the efficiencies go deeper than *just* the backup code.
>

I think stopping this inefficiency early can help minimize the CPU impact of
the backup job on the VM instance.


> I think Vladimir has put a lot of work into making the backup code
> highly optimized, so I would consult with him to find out where the best
> place to put new optimizations are, if any -- he'll know!
>

Yes, hope that he will chime in.

Thanks!

>
> --js
>
>
> >
> >     >
> >     > I think what you're really after is something like
> >     > bdrv_unallocated_blocks_are_zero().
> >
> >     The fact that qemu-img already has a lot of optimizations makes me
> >     wonder what we can salvage from there into reusable code that both
> >     qemu-img and block backup can share, so that we're not reimplementing
> >     block status handling in multiple places.
> >
> >
> > A general fix reusing some existing code would be great.  When will it
> > appear in the upstream?  We are hoping to avoid needing to use a private
> > branch if possible.
> >
> >
> >     > So the basic premise is that if you are copying a qcow2 file and
> the
> >     > unallocated portions as defined by the qcow2 metadata are zero,
> it's
> >     > safe to skip those, so you can treat it like SYNC_MODE_TOP.
> >     >
> >     > I think you *also* have to know if the *source* needs those regions
> >     > explicitly zeroed, and it's not always safe to just skip them at
> the
> >     > manifest level.
> >     >
> >     > I thought there was code that handled this to some extent already,
> >     but I
> >     > don't know. I think Vladimir has worked on it recently and can
> >     probably
> >     > let you know where I am mistaken :)
> >
> >     Yes, I'm hoping Vladimir (or his other buddies at Virtuozzo) can
> chime
> >     in.  Meanwhile, I've been working on v2 of some patches that will improve
> >     qemu's ability to tell if a destination qcow2 file already reads as
> all
> >     zeroes, and we already have bdrv_block_status() for telling which
> >     portions of a source image already read as all zeroes (whether or
> >     not it
> >     is due to not being allocated, the goal here is that we should NOT
> have
> >     to copy anything that reads as zero on the source over to the
> >     destination if the destination already starts life as reading all
> zero).
> >
> >
> > Can the eventual/optimal solution allow unallocated clusters to be
> > skipped entirely in the backup loop and make the detection of allocated
> > zeroes an option, not forcing the backup thread to loop through a
> > potentially huge empty virtual disk?
> >
>
> I mean, using the TOP code is doing the same thing, really: it's looking
> at allocation status and marking those blocks as "already copied", more
> or less.
>
> >
> >     And if nothing else, qemu 5.0 just added 'qemu-img convert
> >     --target-is-zero' as a last-ditch means of telling qemu to assume the
> >     destination reads as all zeroes, even if it cannot quickly prove it;
> we
> >     probably want to add a similar knob into the QMP commands for
> >     initiating
> >     block backup, for the same reasons.
> >
> >
> > This seems a good way of assuring the status of the target file.
> >
> > Thanks!
> >
>
>

[-- Attachment #2: Type: text/html, Size: 8568 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Avoid copying unallocated clusters during full backup
  2020-04-18  1:43         ` Leo Luan
@ 2020-04-20 10:56           ` Vladimir Sementsov-Ogievskiy
  2020-04-20 14:31             ` Bryan S Rosenburg
  0 siblings, 1 reply; 11+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-04-20 10:56 UTC (permalink / raw)
  To: Leo Luan, John Snow; +Cc: qemu-devel, Qemu-block, Max Reitz

[-- Attachment #1: Type: text/plain, Size: 1889 bytes --]

Hi all!

Yes, I have a big work in progress around backup, and I'm posting it part by part;
the current chunk is "[PATCH v2 0/6] block-copy: use aio-task-pool", at
https://lists.gnu.org/archive/html/qemu-devel/2020-03/msg07671.html

The final target is a backup job that does only one block_copy() call.
It already works this way in Virtuozzo-8.0, and there is an old, outdated series
that may give an idea of the full picture:
[RFC 00/24] backup performance: block_status + async
https://lists.gnu.org/archive/html/qemu-devel/2019-11/msg02335.html

After that, the next steps are to reuse block_copy() for other jobs and for qemu-img convert.

===

About skipping zeroes in FULL mode.

1. Honestly, we have had this skipping hardcoded in our downstream for a long time;
I'll attach the patch from vz-8.0.

To upstream it, we still lack one thing: knowledge of whether the target is already
zeroed. (For downstream, we are simply sure that in all our scenarios the backup
target is a new qcow2 image, which of course reads as all zeros.)
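As a toy illustration of the idea (plain Python, not qemu code; the function and
names are made up for this sketch), skipping unallocated clusters amounts to
initializing the copy bitmap from allocation status instead of dirtying the whole
virtual disk:

```python
def init_copy_bitmap(allocated, num_clusters):
    """Return a per-cluster 'needs copy' bitmap.

    allocated: set of cluster indices that are allocated in the source.
    A plain FULL backup would mark every cluster dirty; skipping the
    unallocated ones marks only the allocated subset, so the copy loop
    never visits the empty part of the disk.
    """
    return [i in allocated for i in range(num_clusters)]

# Only clusters 0 and 3 are allocated; 1, 2, 4, 5 are never copied.
bitmap = init_copy_bitmap({0, 3}, 6)
```

This is only sound when the target is known to already read as all zeros,
which is exactly the missing piece discussed above.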

I think we already have an unspoken agreement that some kind of target-is-zero
option is the appropriate way to achieve this, and qemu-img already has one.

On the other hand, I think the best way to be sure the target is zero is simply
to zero it. But to do that efficiently for our most interesting scenarios (qcow2, NBD),
we need the following steps:

1. 64bit commands in NBD
2. 64bit write-zeroes in Qemu generic block-layer
3. support 64bit write-zeroes in qcow2 and nbd driver in Qemu

For 1 and 2, I have already sent series.

===

Hmm. So, what to do now?

1. You can use our downstream patch, like we do in Virtuozzo, if it is appropriate for you.

2. Implement the feature upstream. The simplest way is to add a skip-zeroes
option for the backup job, and then, when the option is enabled, do something like
my patch does (or just port it). Do you want to make the patches? If not, I can handle it myself.
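A minimal sketch of what such a skip-zeroes option would decide, again as
illustrative Python rather than qemu code (the status strings here are a
simplification of per-cluster block-status results, and the option names are
hypothetical):

```python
def clusters_to_copy(source_status, skip_zeroes, target_is_zero):
    """Return the cluster indices the backup loop must still copy.

    source_status: per-cluster status, one of 'data', 'zero',
    'unallocated' (a simplified stand-in for block-status queries).
    A cluster may be skipped only when it reads as zero on the source
    AND the target is known to already read as zero.
    """
    to_copy = []
    for i, status in enumerate(source_status):
        reads_zero = status in ('zero', 'unallocated')
        if skip_zeroes and target_is_zero and reads_zero:
            continue  # safe to skip: zero on both sides
        to_copy.append(i)
    return to_copy

# With both conditions met, only the 'data' clusters are copied.
copied = clusters_to_copy(['data', 'zero', 'unallocated', 'data'],
                          skip_zeroes=True, target_is_zero=True)
```

Note that when target_is_zero cannot be established, every cluster must still
be copied, which is why the option (or actively zeroing the target) matters.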

-- 
Best regards,
Vladimir

[-- Attachment #2: 0001-backup-skip-copying-unallocated-for-full-mode.patch --]
[-- Type: text/x-patch, Size: 59191 bytes --]

From b96914dab04ee82d44cf4ebf712ba414fea36538 Mon Sep 17 00:00:00 2001
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: Thu, 23 Jan 2020 21:01:01 +0300
Subject: [PATCH] backup: skip copying unallocated for full mode

This improves full backup of (partly) empty qcow2 disk.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/block-copy.h     |   3 +-
 block/backup.c                 |  12 ++-
 block/block-copy.c             |  18 +++-
 tests/qemu-iotests/056         |   6 +-
 tests/qemu-iotests/1000001     |   1 +
 tests/qemu-iotests/1000001.out |   2 +-
 tests/qemu-iotests/185.out     |   2 +-
 tests/qemu-iotests/256.out     |   4 +-
 tests/qemu-iotests/257         |  10 +--
 tests/qemu-iotests/257.out     | 148 ++++++++++++++++-----------------
 10 files changed, 112 insertions(+), 94 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index fcbc06b977..c0bceb6f57 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -70,6 +70,7 @@ void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
 void block_copy_cancel(BlockCopyCallState *call_state);
 
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
-void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
+void block_copy_set_options(BlockCopyState *s, bool skip_unallocated,
+                            bool top_mode);
 
 #endif /* BLOCK_COPY_H */
diff --git a/block/backup.c b/block/backup.c
index ae191cb276..53791983fd 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -202,12 +202,15 @@ static void backup_init_bcs_bitmap(BackupBlockJob *job)
                                                NULL, true);
         assert(ret);
     } else {
-        if (job->sync_mode == MIRROR_SYNC_MODE_TOP) {
+        if (job->sync_mode == MIRROR_SYNC_MODE_TOP ||
+            job->sync_mode == MIRROR_SYNC_MODE_FULL)
+        {
             /*
              * We can't hog the coroutine to initialize this thoroughly.
              * Set a flag and resume work when we are able to yield safely.
              */
-            block_copy_set_skip_unallocated(job->bcs, true);
+            block_copy_set_options(job->bcs, true,
+                                   job->sync_mode == MIRROR_SYNC_MODE_TOP);
         }
         bdrv_set_dirty_bitmap(job->bcs_bitmap, 0, job->len);
     }
@@ -223,7 +226,9 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
 
     backup_init_bcs_bitmap(s);
 
-    if (s->sync_mode == MIRROR_SYNC_MODE_TOP) {
+    if (s->sync_mode == MIRROR_SYNC_MODE_TOP ||
+        s->sync_mode == MIRROR_SYNC_MODE_FULL)
+    {
         int64_t offset = 0;
         int64_t count;
 
@@ -245,7 +250,6 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
 
             offset += count;
         }
-        block_copy_set_skip_unallocated(s->bcs, false);
     }
 
     if (s->sync_mode == MIRROR_SYNC_MODE_NONE) {
diff --git a/block/block-copy.c b/block/block-copy.c
index ed51e6c0a6..f7fe6631eb 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -96,6 +96,7 @@ typedef struct BlockCopyState {
      * block_copy_reset_unallocated() every time it does.
      */
     bool skip_unallocated;
+    bool top_mode;
 
     ProgressMeter *progress;
 
@@ -438,7 +439,7 @@ static int block_copy_block_status(BlockCopyState *s, int64_t offset,
     BlockDriverState *base;
     int ret;
 
-    if (s->skip_unallocated && s->source->bs->backing) {
+    if (s->top_mode && s->source->bs->backing) {
         base = s->source->bs->backing->bs;
     } else {
         base = NULL;
@@ -471,14 +472,21 @@ static int block_copy_is_cluster_allocated(BlockCopyState *s, int64_t offset,
                                            int64_t *pnum)
 {
     BlockDriverState *bs = s->source->bs;
+    BlockDriverState *base;
     int64_t count, total_count = 0;
     int64_t bytes = s->len - offset;
     int ret;
 
+    if (s->top_mode && s->source->bs->backing) {
+        base = s->source->bs->backing->bs;
+    } else {
+        base = NULL;
+    }
+
     assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
 
     while (true) {
-        ret = bdrv_is_allocated(bs, offset, bytes, &count);
+        ret = bdrv_is_allocated_above(bs, base, false, offset, bytes, &count);
         if (ret < 0) {
             return ret;
         }
@@ -759,9 +767,11 @@ BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
     return s->copy_bitmap;
 }
 
-void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip)
+void block_copy_set_options(BlockCopyState *s, bool skip_unallocated,
+                            bool top_mode)
 {
-    s->skip_unallocated = skip;
+    s->skip_unallocated = skip_unallocated;
+    s->top_mode = top_mode;
 }
 
 void block_copy_set_speed(BlockCopyState *s, BlockCopyCallState *call_state,
diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
index ead0c0773f..6bd4ab887e 100755
--- a/tests/qemu-iotests/056
+++ b/tests/qemu-iotests/056
@@ -102,9 +102,11 @@ class TestSyncModesNoneAndTop(iotests.QMPTestCase):
         time.sleep(1)
         self.assertEqual(-1, qemu_io('-c', 'read -P0x41 0 512', target_img).find("verification failed"))
 
-class TestBeforeWriteNotifier(iotests.QMPTestCase):
+class TestCopyBeforeWrite(iotests.QMPTestCase):
     def setUp(self):
-        self.vm = iotests.VM().add_drive_raw("file=blkdebug::null-co://,id=drive0,align=65536,driver=blkdebug")
+        opts = "image.driver=null-co,image.read-zeroes=on," \
+            "id=drive0,align=65536,driver=blkdebug"
+        self.vm = iotests.VM().add_drive_raw(opts)
         self.vm.launch()
 
     def tearDown(self):
diff --git a/tests/qemu-iotests/1000001 b/tests/qemu-iotests/1000001
index d9e272f017..16b273cf91 100644
--- a/tests/qemu-iotests/1000001
+++ b/tests/qemu-iotests/1000001
@@ -22,6 +22,7 @@ import iotests
 from iotests import log, qemu_img, qemu_io, qemu_io_silent
 
 iotests.verify_platform(['linux'])
+iotests.verify_image_format(['qcow2'])
 
 patterns = [("0x5d", "0",         "64k"),
             ("0xd5", "1M",        "64k"),
diff --git a/tests/qemu-iotests/1000001.out b/tests/qemu-iotests/1000001.out
index a341220d07..5f5d456a4d 100644
--- a/tests/qemu-iotests/1000001.out
+++ b/tests/qemu-iotests/1000001.out
@@ -25,7 +25,7 @@ write -P0x1d 0x2008000 64k
 {"return": ""}
 write -P0xea 0x3fe0000 64k
 {"return": ""}
-{"data": {"device": "backup-job-2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup-job-2", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Cleanup ---
 
diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
index ddfbf3c765..f4c8c1c968 100644
--- a/tests/qemu-iotests/185.out
+++ b/tests/qemu-iotests/185.out
@@ -54,7 +54,7 @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 l
 {"return": {}}
 {"return": {}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
-{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "disk", "len": 67108864, "offset": 65536, "speed": 65536, "type": "backup"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "disk", "len": 4194304, "offset": 65536, "speed": 65536, "type": "backup"}}
 
 === Start streaming job and exit qemu ===
 
diff --git a/tests/qemu-iotests/256.out b/tests/qemu-iotests/256.out
index f18ecb0f91..4f7e39e32e 100644
--- a/tests/qemu-iotests/256.out
+++ b/tests/qemu-iotests/256.out
@@ -62,8 +62,8 @@
 {
   "return": {}
 }
-{"data": {"device": "j0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "j1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "j0", "len": 0, "offset": 0, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "j1", "len": 0, "offset": 0, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Create Targets & Incremental Backups ---
 
diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
index 908d728bf1..e1474675a8 100755
--- a/tests/qemu-iotests/257
+++ b/tests/qemu-iotests/257
@@ -383,17 +383,17 @@ def test_bitmap_sync(bsync_mode, msync_mode='bitmap', failure=None):
 
         if bsync_mode == 'always' and failure == 'intermediate':
             # TOP treats anything allocated as dirty, expect to see:
-            if msync_mode == 'top':
+            if msync_mode in ('top', 'full'):
                 ebitmap.dirty_group(0)
 
             # We manage to copy one sector (one bit) before the error.
             ebitmap.clear_bit(ebitmap.first_bit)
 
             # Full returns all bits set except what was copied/skipped
-            if msync_mode == 'full':
-                fail_bit = ebitmap.first_bit
-                ebitmap.clear()
-                ebitmap.dirty_bits(range(fail_bit, SIZE // GRANULARITY))
+            #if msync_mode == 'full':
+                #fail_bit = ebitmap.first_bit
+                #ebitmap.clear()
+                #ebitmap.dirty_bits(range(fail_bit, SIZE // GRANULARITY))
 
         ebitmap.compare(vm.get_bitmap(drive0.node, 'bitmap0', bitmaps=bitmaps))
 
diff --git a/tests/qemu-iotests/257.out b/tests/qemu-iotests/257.out
index 6997b56567..47bf67e57b 100644
--- a/tests/qemu-iotests/257.out
+++ b/tests/qemu-iotests/257.out
@@ -32,7 +32,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -80,7 +80,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -207,7 +207,7 @@ expecting 15 dirty sectors; have 15. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -292,7 +292,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -340,7 +340,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 {"return": ""}
 
@@ -418,7 +418,7 @@ expecting 14 dirty sectors; have 14. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -503,7 +503,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -551,7 +551,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -678,7 +678,7 @@ expecting 15 dirty sectors; have 15. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -763,7 +763,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -811,7 +811,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -938,7 +938,7 @@ expecting 15 dirty sectors; have 15. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -1023,7 +1023,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -1071,7 +1071,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 {"return": ""}
 
@@ -1149,7 +1149,7 @@ expecting 14 dirty sectors; have 14. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -1234,7 +1234,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -1282,7 +1282,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -1409,7 +1409,7 @@ expecting 12 dirty sectors; have 12. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -1494,7 +1494,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -1542,7 +1542,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -1669,7 +1669,7 @@ expecting 12 dirty sectors; have 12. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -1754,7 +1754,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -1802,7 +1802,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 {"return": ""}
 
@@ -1880,7 +1880,7 @@ expecting 13 dirty sectors; have 13. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -1965,7 +1965,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -2013,7 +2013,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -2140,7 +2140,7 @@ expecting 12 dirty sectors; have 12. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -2225,7 +2225,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -2273,7 +2273,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -2339,7 +2339,7 @@ expecting 7 dirty sectors; have 7. OK!
 {"execute": "job-cancel", "arguments": {"id": "backup_1"}}
 {"return": {}}
 {"data": {"id": "backup_1", "type": "backup"}, "event": "BLOCK_JOB_PENDING", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {
   "bitmaps": {
     "drive0": [
@@ -2400,7 +2400,7 @@ expecting 15 dirty sectors; have 15. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -2485,7 +2485,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -2533,7 +2533,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 {"return": ""}
 
@@ -2550,7 +2550,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {
   "bitmaps": {
     "drive0": [
@@ -2611,7 +2611,7 @@ expecting 14 dirty sectors; have 14. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -2696,7 +2696,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -2744,7 +2744,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -2810,7 +2810,7 @@ expecting 7 dirty sectors; have 7. OK!
 {"execute": "job-finalize", "arguments": {"id": "backup_1"}}
 {"return": {}}
 {"data": {"id": "backup_1", "type": "backup"}, "event": "BLOCK_JOB_PENDING", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {
   "bitmaps": {
     "drive0": [
@@ -2871,7 +2871,7 @@ expecting 12 dirty sectors; have 12. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -2956,7 +2956,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -3004,7 +3004,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -3070,7 +3070,7 @@ expecting 7 dirty sectors; have 7. OK!
 {"execute": "job-cancel", "arguments": {"id": "backup_1"}}
 {"return": {}}
 {"data": {"id": "backup_1", "type": "backup"}, "event": "BLOCK_JOB_PENDING", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {
   "bitmaps": {
     "drive0": [
@@ -3131,7 +3131,7 @@ expecting 12 dirty sectors; have 12. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -3216,7 +3216,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -3264,7 +3264,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 {"return": ""}
 
@@ -3281,13 +3281,13 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-max-workers": 1}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {
   "bitmaps": {
     "drive0": [
       {
         "busy": false,
-        "count": 66125824,
+        "count": 393216,
         "granularity": 65536,
         "name": "bitmap0",
         "persistent": false,
@@ -3299,7 +3299,7 @@ expecting 6 dirty sectors; have 6. OK!
 }
 
 = Checking Bitmap bitmap0 =
-expecting 1009 dirty sectors; have 1009. OK!
+expecting 6 dirty sectors; have 6. OK!
 
 --- Write #3 ---
 
@@ -3316,7 +3316,7 @@ write -P0xdd 0x3fc0000 0x10000
     "drive0": [
       {
         "busy": false,
-        "count": 66453504,
+        "count": 917504,
         "granularity": 65536,
         "name": "bitmap0",
         "persistent": false,
@@ -3328,7 +3328,7 @@ write -P0xdd 0x3fc0000 0x10000
 }
 
 = Checking Bitmap bitmap0 =
-expecting 1014 dirty sectors; have 1014. OK!
+expecting 14 dirty sectors; have 14. OK!
 
 --- Reference Backup #2 ---
 
@@ -3342,7 +3342,7 @@ expecting 1014 dirty sectors; have 1014. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -3359,7 +3359,7 @@ expecting 1014 dirty sectors; have 1014. OK!
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
 {"data": {"id": "backup_2", "type": "backup"}, "event": "BLOCK_JOB_PENDING", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "backup_2", "len": 66453504, "offset": 66453504, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup_2", "len": 917504, "offset": 917504, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {
   "bitmaps": {
     "drive0": [
@@ -3427,7 +3427,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -3475,7 +3475,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -3541,7 +3541,7 @@ expecting 7 dirty sectors; have 7. OK!
 {"execute": "job-finalize", "arguments": {"id": "backup_1"}}
 {"return": {}}
 {"data": {"id": "backup_1", "type": "backup"}, "event": "BLOCK_JOB_PENDING", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {
   "bitmaps": {
     "drive0": [
@@ -3602,7 +3602,7 @@ expecting 12 dirty sectors; have 12. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -3687,7 +3687,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -3735,7 +3735,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -3862,7 +3862,7 @@ expecting 15 dirty sectors; have 15. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -3947,7 +3947,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -3995,7 +3995,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 {"return": ""}
 
@@ -4073,7 +4073,7 @@ expecting 14 dirty sectors; have 14. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -4158,7 +4158,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -4206,7 +4206,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -4333,7 +4333,7 @@ expecting 12 dirty sectors; have 12. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -4418,7 +4418,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -4466,7 +4466,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -4593,7 +4593,7 @@ expecting 12 dirty sectors; have 12. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -4678,7 +4678,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -4726,7 +4726,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 {"return": ""}
 
@@ -4804,7 +4804,7 @@ expecting 14 dirty sectors; have 14. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
@@ -4889,7 +4889,7 @@ write -P0x76 0x3ff0000 0x10000
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_0", "len": 262144, "offset": 262144, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Add Bitmap ---
 
@@ -4937,7 +4937,7 @@ expecting 6 dirty sectors; have 6. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_1", "len": 458752, "offset": 458752, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #1 ---
 
@@ -5064,7 +5064,7 @@ expecting 12 dirty sectors; have 12. OK!
 {}
 {"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-max-workers": 1}}
 {"return": {}}
-{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "ref_backup_2", "len": 983040, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 
 --- Test Backup #2 ---
 
-- 
2.21.0



* RE: Avoid copying unallocated clusters during full backup
  2020-04-20 10:56           ` Vladimir Sementsov-Ogievskiy
@ 2020-04-20 14:31             ` Bryan S Rosenburg
  2020-04-20 15:04               ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 11+ messages in thread
From: Bryan S Rosenburg @ 2020-04-20 14:31 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: Qemu-block, qemu-devel, Max Reitz, Leo Luan, John Snow, Qemu-devel

[-- Attachment #1: Type: text/plain, Size: 2747 bytes --]

Vladimir, thank you for outlining the current state of affairs regarding 
efficient backup. I'd like to describe what we know about the 
image-expansion problem we're seeing with the current (qemu 4.2.0) code, 
just to be sure that your work addresses it.

In our use case, the image-expansion problem occurs only when the source 
disk file and the target backup file are in different file systems. Both 
files are qcow2 files, and as long as they both reside in the same file 
system, the target file winds up with roughly the same size as the source. 
But if the target is in another file system (we've tried a second ext4 
hard disk file system, a tmpfs file system, and FUSE-based file systems 
such as s3fs), the target ends up with a size comparable to the nominal 
size of the source disk.

I think the expansion is related to this comment in 
qemu/include/block/block.h:

/**
 * bdrv_co_copy_range:
 * ...
 * Note: block layer doesn't emulate or fallback to a bounce buffer approach
 * because usually the caller shouldn't attempt offloaded copy any more (e.g.
 * calling copy_file_range(2)) after the first error, thus it should fall back
 * to a read+write path in the caller level.

The bdrv_co_copy_range() service does the right things with respect to 
skipping unallocated ranges in the source disk and not writing zeroes to 
the target. But qemu gives up on using this service the first time an 
underlying copy_file_range() system call fails, and copy_file_range() 
always fails with EXDEV when the source and destination files are on 
different file systems. In this specific case (at least), I think that 
falling back to a bounce buffer approach would make sense so that we don't 
lose the rest of the logic in bdrv_co_copy_range(). As it is, qemu falls 
back on a very high-level loop reading from the source and writing to the 
target. At this high level, reading an unallocated range from the source 
simply returns a buffer full of zeroes, with no indication that the range 
was unallocated. The zeroes are then written to the target as if they were 
real data.

As a quick experiment, I tried a very localized fallback when 
copy_file_range returns EXDEV in handle_aiocb_copy_range() in 
qemu/block/file-posix.c. It's not a great fix because it has to allocate 
and free a buffer on the spot and it does not head off future calls to 
copy_file_range that will also fail, but it does fix the image-expansion 
problem when crossing file systems. I can provide the patch if anyone 
wants to see it.

I just wanted to get this aspect of the problem onto the table, to make 
sure it gets addressed in the current rework. Maybe it's a non-issue 
already.

- Bryan





* Re: Avoid copying unallocated clusters during full backup
  2020-04-20 14:31             ` Bryan S Rosenburg
@ 2020-04-20 15:04               ` Vladimir Sementsov-Ogievskiy
  2020-04-21 14:41                 ` Bryan S Rosenburg
  0 siblings, 1 reply; 11+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2020-04-20 15:04 UTC (permalink / raw)
  To: Bryan S Rosenburg
  Cc: Qemu-block, qemu-devel, Max Reitz, Leo Luan, John Snow, Qemu-devel

20.04.2020 17:31, Bryan S Rosenburg wrote:
> Vladimir, thank you for outlining the current state of affairs regarding efficient backup. I'd like to describe what we know about the image-expansion problem we're having using the current (qemu 4.2.0) code, just to be sure that your work is addressing it.
> [...]
> 
> The bdrv_co_copy_range() service does the right things with respect to skipping unallocated ranges in the source disk and not writing zeros to the target. But qemu gives up on using this service the first time an underlying copy_file_range() system call fails, and copy_file_range() always fails with EXDEV when the source and destination files are on different file systems. In this specific case (at least), I think that falling back to a bounce buffer approach would make sense so that we don't lose the rest of the logic in bdrv_co_copy_range. As it is, qemu falls back on a very high-level loop reading from the source and writing to the target. At this high level, reading an unallocated range from the source simply returns a buffer full of zeroes, with no indication that the range was unallocated. The zeroes are then written to the target as if they were real data.
> 
> As a quick experiment, I tried a very localized fallback when copy_file_range returns EXDEV in handle_aiocb_copy_range() in qemu/block/file-posix.c. It's not a great fix because it has to allocate and free a buffer on the spot and it does not head off future calls to copy_file_range that will also fail, but it does fix the image-expansion problem when crossing file systems. I can provide the patch if anyone wants to see it.
> 
> I just wanted to get this aspect of the problem onto the table, to make sure it gets addressed in the current rework. Maybe it's a non-issue already.
> 

Yes, the problem is that the copy_range subsystem handles block status, while the generic backup copying loop doesn't. I'm not sure that adding a fallback into copy_range is the correct thing to do; at the least it should be optional, enabled by a flag. But you don't need it for your problem,
as it is already fixed upstream:

You need to backport my commit 2d57511a88 "block/block-copy: use block_status" (together with the 3 preparatory patches before it, or with the whole series, including some refactoring after the 2d57511 commit).

Hope it will help :)

-- 
Best regards,
Vladimir



* RE: Avoid copying unallocated clusters during full backup
  2020-04-20 15:04               ` Vladimir Sementsov-Ogievskiy
@ 2020-04-21 14:41                 ` Bryan S Rosenburg
  0 siblings, 0 replies; 11+ messages in thread
From: Bryan S Rosenburg @ 2020-04-21 14:41 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: Qemu-block, qemu-devel, Max Reitz, Leo Luan, John Snow, Qemu-devel


Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote on 
04/20/2020 11:04:33 AM:

> Yes, the problem is that the copy_range subsystem handles block status,
> while the generic backup copying loop doesn't. I'm not sure that adding
> a fallback into copy_range is the correct thing to do; at the least it
> should be optional, enabled by a flag. But you don't need it for your
> problem, as it is already fixed upstream:
> 
> You need to backport my commit 2d57511a88 "block/block-copy: use
> block_status" (together with the 3 preparatory patches before it, or
> with the whole series, including some refactoring after the 2d57511
> commit).

Vladimir, thanks for the pointer to the "block/block-copy: use 
block_status" patch set. Those 4 patches do in fact solve the problem we 
were seeing.

- Bryan





Thread overview: 11+ messages
2020-04-17 18:33 Avoid copying unallocated clusters during full backup Leo Luan
2020-04-17 20:11 ` John Snow
2020-04-17 20:24   ` Eric Blake
2020-04-17 22:57     ` Leo Luan
2020-04-18  0:34       ` John Snow
2020-04-18  1:43         ` Leo Luan
2020-04-20 10:56           ` Vladimir Sementsov-Ogievskiy
2020-04-20 14:31             ` Bryan S Rosenburg
2020-04-20 15:04               ` Vladimir Sementsov-Ogievskiy
2020-04-21 14:41                 ` Bryan S Rosenburg
2020-04-17 22:31   ` Leo Luan
