* [Qemu-devel] [PATCH 0/8] backup improvements
@ 2019-08-07  8:07 Vladimir Sementsov-Ogievskiy
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 1/8] block/backup: deal with zero detection Vladimir Sementsov-Ogievskiy
                   ` (7 more replies)
  0 siblings, 8 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-07  8:07 UTC (permalink / raw)
  To: qemu-block
  Cc: fam, kwolf, vsementsov, armbru, qemu-devel, mreitz, stefanha, den, jsnow

Hi all!

Here are some fixes and refactorings I need on my way to resending
my backup-top series. It is clear by now that I need to share the
copying code between backup and backup-top, since the backup copying
code is becoming smarter and more complicated. So the goal of this
series is to make the copying code more shareable.

It's based on John's bitmaps branch, rebased on master.

Vladimir Sementsov-Ogievskiy (8):
  block/backup: deal with zero detection
  block/backup: refactor write_flags
  block/io: handle alignment and max_transfer for copy_range
  block/backup: improve unallocated clusters skipping
  block/backup: fix backup_cow_with_offload for last cluster
  block/backup: teach backup_cow_with_bounce_buffer to copy more at once
  block/backup: merge duplicated logic into backup_do_cow
  block/backup: backup_do_cow: use bdrv_dirty_bitmap_next_dirty_area

 block/backup.c | 191 ++++++++++++++++++++-----------------------------
 block/io.c     |  41 ++++++++---
 blockdev.c     |   8 +--
 3 files changed, 115 insertions(+), 125 deletions(-)

-- 
2.18.0




* [Qemu-devel] [PATCH 1/8] block/backup: deal with zero detection
  2019-08-07  8:07 [Qemu-devel] [PATCH 0/8] backup improvements Vladimir Sementsov-Ogievskiy
@ 2019-08-07  8:07 ` Vladimir Sementsov-Ogievskiy
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 2/8] block/backup: refactor write_flags Vladimir Sementsov-Ogievskiy
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-07  8:07 UTC (permalink / raw)
  To: qemu-block
  Cc: fam, kwolf, vsementsov, armbru, qemu-devel, mreitz, stefanha, den, jsnow

We have the detect_zeroes option, so at least for blockdev-backup the
user should set it if zero detection is needed. For drive-backup, leave
detection enabled by default, but implement it through the existing
option instead of open-coding it.
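
For reference, the zero detection that the block layer applies when
detect-zeroes is enabled looks roughly like this in the generic write
path (a simplified sketch in the spirit of bdrv_aligned_pwritev(), not
a verbatim quote):

    if (bs->detect_zeroes != BLOCKDEV_DETECT_ZEROES_OPTIONS_OFF &&
        !(flags & BDRV_REQ_ZERO_WRITE) &&
        qemu_iovec_is_zero(qiov)) {
        /* all-zero buffer: turn the request into a zero write */
        flags |= BDRV_REQ_ZERO_WRITE;
        if (bs->detect_zeroes == BLOCKDEV_DETECT_ZEROES_OPTIONS_UNMAP) {
            /* detect-zeroes=unmap additionally allows discarding */
            flags |= BDRV_REQ_MAY_UNMAP;
        }
    }

So setting detect-zeroes=unmap on the target gives the same behaviour
as the open-coded buffer_is_zero() branch removed below.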

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 block/backup.c | 15 ++++++---------
 blockdev.c     |  8 ++++----
 2 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 2073db279f..526897350d 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -113,7 +113,10 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
     BlockBackend *blk = job->common.blk;
     int nbytes;
     int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
-    int write_flags = job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0;
+    int write_flags =
+            (job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0) |
+            (job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
+
 
     assert(QEMU_IS_ALIGNED(start, job->cluster_size));
     bdrv_reset_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
@@ -131,14 +134,8 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
         goto fail;
     }
 
-    if (buffer_is_zero(*bounce_buffer, nbytes)) {
-        ret = blk_co_pwrite_zeroes(job->target, start,
-                                   nbytes, write_flags | BDRV_REQ_MAY_UNMAP);
-    } else {
-        ret = blk_co_pwrite(job->target, start,
-                            nbytes, *bounce_buffer, write_flags |
-                            (job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0));
-    }
+    ret = blk_co_pwrite(job->target, start, nbytes, *bounce_buffer,
+                        write_flags);
     if (ret < 0) {
         trace_backup_do_cow_write_fail(job, start, ret);
         if (error_is_read) {
diff --git a/blockdev.c b/blockdev.c
index 29c6c6044a..2d7e7be538 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3613,7 +3613,7 @@ static BlockJob *do_drive_backup(DriveBackup *backup, JobTxn *txn,
     BlockDriverState *source = NULL;
     BlockJob *job = NULL;
     AioContext *aio_context;
-    QDict *options = NULL;
+    QDict *options;
     Error *local_err = NULL;
     int flags;
     int64_t size;
@@ -3686,10 +3686,10 @@ static BlockJob *do_drive_backup(DriveBackup *backup, JobTxn *txn,
         goto out;
     }
 
+    options = qdict_new();
+    qdict_put_str(options, "discard", "unmap");
+    qdict_put_str(options, "detect-zeroes", "unmap");
     if (backup->format) {
-        if (!options) {
-            options = qdict_new();
-        }
         qdict_put_str(options, "driver", backup->format);
     }
 
-- 
2.18.0




* [Qemu-devel] [PATCH 2/8] block/backup: refactor write_flags
  2019-08-07  8:07 [Qemu-devel] [PATCH 0/8] backup improvements Vladimir Sementsov-Ogievskiy
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 1/8] block/backup: deal with zero detection Vladimir Sementsov-Ogievskiy
@ 2019-08-07  8:07 ` Vladimir Sementsov-Ogievskiy
  2019-08-07 16:53   ` Max Reitz
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 3/8] block/io: handle alignment and max_transfer for copy_range Vladimir Sementsov-Ogievskiy
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-07  8:07 UTC (permalink / raw)
  To: qemu-block
  Cc: fam, kwolf, vsementsov, armbru, qemu-devel, mreitz, stefanha, den, jsnow

The write flags are constant, so let's store them in BackupBlockJob
instead of recalculating them. This also leaves two boolean fields
unused, so drop them.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
---
 block/backup.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 526897350d..3cdbe973e6 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -50,14 +50,13 @@ typedef struct BackupBlockJob {
     uint64_t len;
     uint64_t bytes_read;
     int64_t cluster_size;
-    bool compress;
     NotifierWithReturn before_write;
     QLIST_HEAD(, CowRequest) inflight_reqs;
 
     bool use_copy_range;
     int64_t copy_range_size;
 
-    bool serialize_target_writes;
+    BdrvRequestFlags write_flags;
     bool initializing_bitmap;
 } BackupBlockJob;
 
@@ -113,10 +112,6 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
     BlockBackend *blk = job->common.blk;
     int nbytes;
     int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
-    int write_flags =
-            (job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0) |
-            (job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
-
 
     assert(QEMU_IS_ALIGNED(start, job->cluster_size));
     bdrv_reset_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
@@ -135,7 +130,7 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
     }
 
     ret = blk_co_pwrite(job->target, start, nbytes, *bounce_buffer,
-                        write_flags);
+                        job->write_flags);
     if (ret < 0) {
         trace_backup_do_cow_write_fail(job, start, ret);
         if (error_is_read) {
@@ -163,7 +158,6 @@ static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
     BlockBackend *blk = job->common.blk;
     int nbytes;
     int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
-    int write_flags = job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0;
 
     assert(QEMU_IS_ALIGNED(job->copy_range_size, job->cluster_size));
     assert(QEMU_IS_ALIGNED(start, job->cluster_size));
@@ -172,7 +166,7 @@ static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
     bdrv_reset_dirty_bitmap(job->copy_bitmap, start,
                             job->cluster_size * nr_clusters);
     ret = blk_co_copy_range(blk, start, job->target, start, nbytes,
-                            read_flags, write_flags);
+                            read_flags, job->write_flags);
     if (ret < 0) {
         trace_backup_do_cow_copy_range_fail(job, start, ret);
         bdrv_set_dirty_bitmap(job->copy_bitmap, start,
@@ -748,10 +742,16 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     job->sync_mode = sync_mode;
     job->sync_bitmap = sync_bitmap;
     job->bitmap_mode = bitmap_mode;
-    job->compress = compress;
 
-    /* Detect image-fleecing (and similar) schemes */
-    job->serialize_target_writes = bdrv_chain_contains(target, bs);
+    /*
+     * Set write flags:
+     *  1. Detect image-fleecing (and similar) schemes
+     *  2. Handle compression
+     */
+    job->write_flags =
+            (bdrv_chain_contains(target, bs) ? BDRV_REQ_SERIALISING : 0) |
+            (compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
+
     job->cluster_size = cluster_size;
     job->copy_bitmap = copy_bitmap;
     copy_bitmap = NULL;
-- 
2.18.0




* [Qemu-devel] [PATCH 3/8] block/io: handle alignment and max_transfer for copy_range
  2019-08-07  8:07 [Qemu-devel] [PATCH 0/8] backup improvements Vladimir Sementsov-Ogievskiy
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 1/8] block/backup: deal with zero detection Vladimir Sementsov-Ogievskiy
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 2/8] block/backup: refactor write_flags Vladimir Sementsov-Ogievskiy
@ 2019-08-07  8:07 ` Vladimir Sementsov-Ogievskiy
  2019-08-07 17:28   ` Max Reitz
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping Vladimir Sementsov-Ogievskiy
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-07  8:07 UTC (permalink / raw)
  To: qemu-block
  Cc: fam, kwolf, vsementsov, armbru, qemu-devel, mreitz, stefanha, den, jsnow

copy_range ignores these limitations; let's improve it. The
block/backup code handles max_transfer for copy_range by itself; that
is no longer needed, so drop it.
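
As a worked illustration (the limit values are made up): with
request_alignment of 512 on the source and 4096 on the destination, and
max_transfer limits of 1 MiB and 192 KiB, the code below computes
align = MAX(512, 4096) = 4096 and max_transfer =
QEMU_ALIGN_DOWN(MIN_NON_ZERO(1 MiB, 192 KiB), 4096) = 192 KiB, so a
1 MiB copy_range request is passed to the driver as five 192 KiB calls
followed by one 64 KiB call.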

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c | 11 ++---------
 block/io.c     | 41 +++++++++++++++++++++++++++++++++--------
 2 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 3cdbe973e6..11e27c844d 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -54,7 +54,6 @@ typedef struct BackupBlockJob {
     QLIST_HEAD(, CowRequest) inflight_reqs;
 
     bool use_copy_range;
-    int64_t copy_range_size;
 
     BdrvRequestFlags write_flags;
     bool initializing_bitmap;
@@ -156,12 +155,11 @@ static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
     int ret;
     int nr_clusters;
     BlockBackend *blk = job->common.blk;
-    int nbytes;
+    int nbytes = end - start;
     int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
 
-    assert(QEMU_IS_ALIGNED(job->copy_range_size, job->cluster_size));
+    assert(end - start < INT_MAX);
     assert(QEMU_IS_ALIGNED(start, job->cluster_size));
-    nbytes = MIN(job->copy_range_size, end - start);
     nr_clusters = DIV_ROUND_UP(nbytes, job->cluster_size);
     bdrv_reset_dirty_bitmap(job->copy_bitmap, start,
                             job->cluster_size * nr_clusters);
@@ -756,11 +754,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     job->copy_bitmap = copy_bitmap;
     copy_bitmap = NULL;
     job->use_copy_range = !compress; /* compression isn't supported for it */
-    job->copy_range_size = MIN_NON_ZERO(blk_get_max_transfer(job->common.blk),
-                                        blk_get_max_transfer(job->target));
-    job->copy_range_size = MAX(job->cluster_size,
-                               QEMU_ALIGN_UP(job->copy_range_size,
-                                             job->cluster_size));
 
     /* Required permissions are already taken with target's blk_new() */
     block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
diff --git a/block/io.c b/block/io.c
index 06305c6ea6..5abbd0fff2 100644
--- a/block/io.c
+++ b/block/io.c
@@ -3005,6 +3005,12 @@ static int coroutine_fn bdrv_co_copy_range_internal(
 {
     BdrvTrackedRequest req;
     int ret;
+    uint32_t align = MAX(src->bs->bl.request_alignment,
+                         dst->bs->bl.request_alignment);
+    uint32_t max_transfer =
+            QEMU_ALIGN_DOWN(MIN_NON_ZERO(MIN_NON_ZERO(src->bs->bl.max_transfer,
+                                                      dst->bs->bl.max_transfer),
+                                         INT_MAX), align);
 
     /* TODO We can support BDRV_REQ_NO_FALLBACK here */
     assert(!(read_flags & BDRV_REQ_NO_FALLBACK));
@@ -3031,7 +3037,10 @@ static int coroutine_fn bdrv_co_copy_range_internal(
 
     if (!src->bs->drv->bdrv_co_copy_range_from
         || !dst->bs->drv->bdrv_co_copy_range_to
-        || src->bs->encrypted || dst->bs->encrypted) {
+        || src->bs->encrypted || dst->bs->encrypted ||
+        !QEMU_IS_ALIGNED(src_offset, src->bs->bl.request_alignment) ||
+        !QEMU_IS_ALIGNED(dst_offset, dst->bs->bl.request_alignment) ||
+        !QEMU_IS_ALIGNED(bytes, align)) {
         return -ENOTSUP;
     }
 
@@ -3046,11 +3055,22 @@ static int coroutine_fn bdrv_co_copy_range_internal(
             wait_serialising_requests(&req);
         }
 
-        ret = src->bs->drv->bdrv_co_copy_range_from(src->bs,
-                                                    src, src_offset,
-                                                    dst, dst_offset,
-                                                    bytes,
-                                                    read_flags, write_flags);
+        while (bytes) {
+            int num = MIN(bytes, max_transfer);
+
+            ret = src->bs->drv->bdrv_co_copy_range_from(src->bs,
+                                                        src, src_offset,
+                                                        dst, dst_offset,
+                                                        num,
+                                                        read_flags,
+                                                        write_flags);
+            if (ret < 0) {
+                break;
+            }
+            bytes -= num;
+            src_offset += num;
+            dst_offset += num;
+        }
 
         tracked_request_end(&req);
         bdrv_dec_in_flight(src->bs);
@@ -3060,12 +3080,17 @@ static int coroutine_fn bdrv_co_copy_range_internal(
                               BDRV_TRACKED_WRITE);
         ret = bdrv_co_write_req_prepare(dst, dst_offset, bytes, &req,
                                         write_flags);
-        if (!ret) {
+        while (!ret && bytes) {
+            int num = MIN(bytes, max_transfer);
+
             ret = dst->bs->drv->bdrv_co_copy_range_to(dst->bs,
                                                       src, src_offset,
                                                       dst, dst_offset,
-                                                      bytes,
+                                                      num,
                                                       read_flags, write_flags);
+            bytes -= num;
+            src_offset += num;
+            dst_offset += num;
         }
         bdrv_co_write_req_finish(dst, dst_offset, bytes, &req, ret);
         tracked_request_end(&req);
-- 
2.18.0




* [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
  2019-08-07  8:07 [Qemu-devel] [PATCH 0/8] backup improvements Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 3/8] block/io: handle alignment and max_transfer for copy_range Vladimir Sementsov-Ogievskiy
@ 2019-08-07  8:07 ` Vladimir Sementsov-Ogievskiy
  2019-08-07 18:01   ` Max Reitz
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 5/8] block/backup: fix backup_cow_with_offload for last cluster Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-07  8:07 UTC (permalink / raw)
  To: qemu-block
  Cc: fam, kwolf, vsementsov, armbru, qemu-devel, mreitz, stefanha, den, jsnow

Limit block_status querying to the request bounds on the write notifier
path to avoid extra seeking.
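
In short, the change is the following (a sketch; request_end stands for
the end of the triggering request and is not an identifier from the
patch):

    /* before: query allocation status up to the end of the disk */
    bytes = s->len - offset;
    /* after: clamp the query to the caller's bounds */
    bytes = MIN(s->len - offset, request_end - offset);

so a copy-before-write triggered by a small guest write no longer walks
allocation metadata far beyond the written range.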

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 11e27c844d..a4d37d2d62 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -180,14 +180,14 @@ static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
  * return via pnum the number of contiguous clusters sharing this allocation.
  */
 static int backup_is_cluster_allocated(BackupBlockJob *s, int64_t offset,
-                                       int64_t *pnum)
+                                       int64_t bytes, int64_t *pnum)
 {
     BlockDriverState *bs = blk_bs(s->common.blk);
     int64_t count, total_count = 0;
-    int64_t bytes = s->len - offset;
     int ret;
 
     assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
+    bytes = MIN(s->len - offset, bytes);
 
     while (true) {
         ret = bdrv_is_allocated(bs, offset, bytes, &count);
@@ -224,12 +224,13 @@ static int backup_is_cluster_allocated(BackupBlockJob *s, int64_t offset,
  *         1 otherwise, and -ret on error.
  */
 static int64_t backup_bitmap_reset_unallocated(BackupBlockJob *s,
-                                               int64_t offset, int64_t *count)
+                                               int64_t offset, int64_t bytes,
+                                               int64_t *pnum)
 {
     int ret;
-    int64_t clusters, bytes, estimate;
+    int64_t clusters, estimate;
 
-    ret = backup_is_cluster_allocated(s, offset, &clusters);
+    ret = backup_is_cluster_allocated(s, offset, bytes, &clusters);
     if (ret < 0) {
         return ret;
     }
@@ -242,7 +243,7 @@ static int64_t backup_bitmap_reset_unallocated(BackupBlockJob *s,
         job_progress_set_remaining(&s->common.job, estimate);
     }
 
-    *count = bytes;
+    *pnum = bytes;
     return ret;
 }
 
@@ -255,7 +256,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
     int ret = 0;
     int64_t start, end; /* bytes */
     void *bounce_buffer = NULL;
-    int64_t skip_bytes;
 
     qemu_co_rwlock_rdlock(&job->flush_rwlock);
 
@@ -267,6 +267,18 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
     wait_for_overlapping_requests(job, start, end);
     cow_request_begin(&cow_request, job, start, end);
 
+    if (job->initializing_bitmap) {
+        int64_t off, chunk;
+
+        for (off = offset; off < end; off += chunk) {
+            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
+            if (ret < 0) {
+                chunk = job->cluster_size;
+            }
+        }
+    }
+    ret = 0;
+
     while (start < end) {
         int64_t dirty_end;
 
@@ -276,15 +288,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
             continue; /* already copied */
         }
 
-        if (job->initializing_bitmap) {
-            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
-            if (ret == 0) {
-                trace_backup_do_cow_skip_range(job, start, skip_bytes);
-                start += skip_bytes;
-                continue;
-            }
-        }
-
         dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
                                                 end - start);
         if (dirty_end < 0) {
@@ -546,7 +549,8 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
                 goto out;
             }
 
-            ret = backup_bitmap_reset_unallocated(s, offset, &count);
+            ret = backup_bitmap_reset_unallocated(s, offset, s->len - offset,
+                                                  &count);
             if (ret < 0) {
                 goto out;
             }
-- 
2.18.0




* [Qemu-devel] [PATCH 5/8] block/backup: fix backup_cow_with_offload for last cluster
  2019-08-07  8:07 [Qemu-devel] [PATCH 0/8] backup improvements Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping Vladimir Sementsov-Ogievskiy
@ 2019-08-07  8:07 ` Vladimir Sementsov-Ogievskiy
  2019-08-07 18:05   ` Max Reitz
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 6/8] block/backup: teach backup_cow_with_bounce_buffer to copy more at once Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-07  8:07 UTC (permalink / raw)
  To: qemu-block
  Cc: fam, kwolf, vsementsov, armbru, qemu-devel, mreitz, stefanha, den, jsnow

We shouldn't try to copy bytes beyond EOF. Fix it.
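
A concrete illustration (numbers assumed): with cluster_size = 64 KiB
and job->len = 130 KiB, a request touching the last cluster has
start = 128 KiB and, after alignment, end = 192 KiB. Then
end - start = 64 KiB, but only job->len - start = 2 KiB remain on disk,
hence the clamp to MIN(end - start, job->len - start).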

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/backup.c b/block/backup.c
index a4d37d2d62..eb41e4af4f 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -155,7 +155,7 @@ static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
     int ret;
     int nr_clusters;
     BlockBackend *blk = job->common.blk;
-    int nbytes = end - start;
+    int nbytes = MIN(end - start, job->len - start);
     int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
 
     assert(end - start < INT_MAX);
-- 
2.18.0




* [Qemu-devel] [PATCH 6/8] block/backup: teach backup_cow_with_bounce_buffer to copy more at once
  2019-08-07  8:07 [Qemu-devel] [PATCH 0/8] backup improvements Vladimir Sementsov-Ogievskiy
                   ` (4 preceding siblings ...)
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 5/8] block/backup: fix backup_cow_with_offload for last cluster Vladimir Sementsov-Ogievskiy
@ 2019-08-07  8:07 ` Vladimir Sementsov-Ogievskiy
  2019-08-07 18:25   ` Max Reitz
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 7/8] block/backup: merge duplicated logic into backup_do_cow Vladimir Sementsov-Ogievskiy
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 8/8] block/backup: backup_do_cow: use bdrv_dirty_bitmap_next_dirty_area Vladimir Sementsov-Ogievskiy
  7 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-07  8:07 UTC (permalink / raw)
  To: qemu-block
  Cc: fam, kwolf, vsementsov, armbru, qemu-devel, mreitz, stefanha, den, jsnow

backup_cow_with_offload can transfer more than one cluster. Let
backup_cow_with_bounce_buffer behave similarly. This reduces the number
of I/O requests, and there is no need to copy cluster by cluster.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index eb41e4af4f..c765c073ad 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -104,22 +104,24 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
                                                       int64_t start,
                                                       int64_t end,
                                                       bool is_write_notifier,
-                                                      bool *error_is_read,
-                                                      void **bounce_buffer)
+                                                      bool *error_is_read)
 {
     int ret;
     BlockBackend *blk = job->common.blk;
     int nbytes;
     int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
+    void *bounce_buffer = blk_try_blockalign(blk, end);
 
-    assert(QEMU_IS_ALIGNED(start, job->cluster_size));
-    bdrv_reset_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
-    nbytes = MIN(job->cluster_size, job->len - start);
-    if (!*bounce_buffer) {
-        *bounce_buffer = blk_blockalign(blk, job->cluster_size);
+    if (!bounce_buffer) {
+        return -ENOMEM;
     }
 
-    ret = blk_co_pread(blk, start, nbytes, *bounce_buffer, read_flags);
+    assert(QEMU_IS_ALIGNED(start, job->cluster_size));
+    bdrv_reset_dirty_bitmap(job->copy_bitmap, start, end - start);
+
+    nbytes = MIN(end - start, job->len - start);
+
+    ret = blk_co_pread(blk, start, nbytes, bounce_buffer, read_flags);
     if (ret < 0) {
         trace_backup_do_cow_read_fail(job, start, ret);
         if (error_is_read) {
@@ -128,7 +130,7 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
         goto fail;
     }
 
-    ret = blk_co_pwrite(job->target, start, nbytes, *bounce_buffer,
+    ret = blk_co_pwrite(job->target, start, nbytes, bounce_buffer,
                         job->write_flags);
     if (ret < 0) {
         trace_backup_do_cow_write_fail(job, start, ret);
@@ -138,9 +140,12 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
         goto fail;
     }
 
+    qemu_vfree(bounce_buffer);
     return nbytes;
+
 fail:
     bdrv_set_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
+    qemu_vfree(bounce_buffer);
     return ret;
 
 }
@@ -255,7 +260,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
     CowRequest cow_request;
     int ret = 0;
     int64_t start, end; /* bytes */
-    void *bounce_buffer = NULL;
 
     qemu_co_rwlock_rdlock(&job->flush_rwlock);
 
@@ -306,7 +310,7 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
         if (!job->use_copy_range) {
             ret = backup_cow_with_bounce_buffer(job, start, dirty_end,
                                                 is_write_notifier,
-                                                error_is_read, &bounce_buffer);
+                                                error_is_read);
         }
         if (ret < 0) {
             break;
@@ -321,10 +325,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
         ret = 0;
     }
 
-    if (bounce_buffer) {
-        qemu_vfree(bounce_buffer);
-    }
-
     cow_request_end(&cow_request);
 
     trace_backup_do_cow_return(job, offset, bytes, ret);
-- 
2.18.0




* [Qemu-devel] [PATCH 7/8] block/backup: merge duplicated logic into backup_do_cow
  2019-08-07  8:07 [Qemu-devel] [PATCH 0/8] backup improvements Vladimir Sementsov-Ogievskiy
                   ` (5 preceding siblings ...)
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 6/8] block/backup: teach backup_cow_with_bounce_buffer to copy more at once Vladimir Sementsov-Ogievskiy
@ 2019-08-07  8:07 ` Vladimir Sementsov-Ogievskiy
  2019-08-07 18:37   ` Max Reitz
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 8/8] block/backup: backup_do_cow: use bdrv_dirty_bitmap_next_dirty_area Vladimir Sementsov-Ogievskiy
  7 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-07  8:07 UTC (permalink / raw)
  To: qemu-block
  Cc: fam, kwolf, vsementsov, armbru, qemu-devel, mreitz, stefanha, den, jsnow

backup_cow_with_offload and backup_cow_with_bounce_buffer contain a
lot of duplicated logic. Move it into backup_do_cow.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c | 83 +++++++++++++++++++-------------------------------
 1 file changed, 31 insertions(+), 52 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index c765c073ad..f19c9195fe 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -100,84 +100,60 @@ static void cow_request_end(CowRequest *req)
 
 /* Copy range to target with a bounce buffer and return the bytes copied. If
  * error occurred, return a negative error number */
-static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
-                                                      int64_t start,
-                                                      int64_t end,
-                                                      bool is_write_notifier,
-                                                      bool *error_is_read)
+static int coroutine_fn backup_cow_with_bounce_buffer(
+        BackupBlockJob *job, int64_t offset, int64_t bytes,
+        BdrvRequestFlags read_flags, bool *error_is_read)
 {
-    int ret;
+    int ret = 0;
     BlockBackend *blk = job->common.blk;
-    int nbytes;
-    int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
-    void *bounce_buffer = blk_try_blockalign(blk, end);
+    void *bounce_buffer = blk_try_blockalign(blk, bytes);
 
     if (!bounce_buffer) {
         return -ENOMEM;
     }
 
-    assert(QEMU_IS_ALIGNED(start, job->cluster_size));
-    bdrv_reset_dirty_bitmap(job->copy_bitmap, start, end - start);
-
-    nbytes = MIN(end - start, job->len - start);
-
-    ret = blk_co_pread(blk, start, nbytes, bounce_buffer, read_flags);
+    ret = blk_co_pread(blk, offset, bytes, bounce_buffer, read_flags);
     if (ret < 0) {
-        trace_backup_do_cow_read_fail(job, start, ret);
+        trace_backup_do_cow_read_fail(job, offset, ret);
         if (error_is_read) {
             *error_is_read = true;
         }
-        goto fail;
+        goto out;
     }
 
-    ret = blk_co_pwrite(job->target, start, nbytes, bounce_buffer,
+    ret = blk_co_pwrite(job->target, offset, bytes, bounce_buffer,
                         job->write_flags);
     if (ret < 0) {
-        trace_backup_do_cow_write_fail(job, start, ret);
+        trace_backup_do_cow_write_fail(job, offset, ret);
         if (error_is_read) {
             *error_is_read = false;
         }
-        goto fail;
+        goto out;
     }
 
+out:
     qemu_vfree(bounce_buffer);
-    return nbytes;
 
-fail:
-    bdrv_set_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
-    qemu_vfree(bounce_buffer);
     return ret;
-
 }
 
 /* Copy range to target and return the bytes copied. If error occurred, return a
  * negative error number. */
 static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
-                                                int64_t start,
-                                                int64_t end,
-                                                bool is_write_notifier)
+                                                int64_t offset,
+                                                int64_t bytes,
+                                                BdrvRequestFlags read_flags)
 {
     int ret;
-    int nr_clusters;
     BlockBackend *blk = job->common.blk;
-    int nbytes = MIN(end - start, job->len - start);
-    int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
-
-    assert(end - start < INT_MAX);
-    assert(QEMU_IS_ALIGNED(start, job->cluster_size));
-    nr_clusters = DIV_ROUND_UP(nbytes, job->cluster_size);
-    bdrv_reset_dirty_bitmap(job->copy_bitmap, start,
-                            job->cluster_size * nr_clusters);
-    ret = blk_co_copy_range(blk, start, job->target, start, nbytes,
+
+    ret = blk_co_copy_range(blk, offset, job->target, offset, bytes,
                             read_flags, job->write_flags);
     if (ret < 0) {
-        trace_backup_do_cow_copy_range_fail(job, start, ret);
-        bdrv_set_dirty_bitmap(job->copy_bitmap, start,
-                              job->cluster_size * nr_clusters);
-        return ret;
+        trace_backup_do_cow_copy_range_fail(job, offset, ret);
     }
 
-    return nbytes;
+    return ret;
 }
 
 /*
@@ -260,6 +236,8 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
     CowRequest cow_request;
     int ret = 0;
     int64_t start, end; /* bytes */
+    BdrvRequestFlags read_flags =
+            is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
 
     qemu_co_rwlock_rdlock(&job->flush_rwlock);
 
@@ -285,6 +263,7 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
 
     while (start < end) {
         int64_t dirty_end;
+        int64_t cur_bytes;
 
         if (!bdrv_dirty_bitmap_get(job->copy_bitmap, start)) {
             trace_backup_do_cow_skip(job, start);
@@ -299,30 +278,30 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
         }
 
         trace_backup_do_cow_process(job, start);
+        cur_bytes = MIN(dirty_end - start, job->len - start);
+        bdrv_reset_dirty_bitmap(job->copy_bitmap, start, dirty_end - start);
 
         if (job->use_copy_range) {
-            ret = backup_cow_with_offload(job, start, dirty_end,
-                                          is_write_notifier);
+            ret = backup_cow_with_offload(job, start, cur_bytes, read_flags);
             if (ret < 0) {
                 job->use_copy_range = false;
             }
         }
         if (!job->use_copy_range) {
-            ret = backup_cow_with_bounce_buffer(job, start, dirty_end,
-                                                is_write_notifier,
-                                                error_is_read);
+            ret = backup_cow_with_bounce_buffer(job, start, cur_bytes,
+                                                read_flags, error_is_read);
         }
         if (ret < 0) {
+            bdrv_set_dirty_bitmap(job->copy_bitmap, start, dirty_end - start);
             break;
         }
 
         /* Publish progress, guest I/O counts as progress too.  Note that the
          * offset field is an opaque progress value, it is not a disk offset.
          */
-        start += ret;
-        job->bytes_read += ret;
-        job_progress_update(&job->common.job, ret);
-        ret = 0;
+        start += cur_bytes;
+        job->bytes_read += cur_bytes;
+        job_progress_update(&job->common.job, cur_bytes);
     }
 
     cow_request_end(&cow_request);
-- 
2.18.0




* [Qemu-devel] [PATCH 8/8] block/backup: backup_do_cow: use bdrv_dirty_bitmap_next_dirty_area
  2019-08-07  8:07 [Qemu-devel] [PATCH 0/8] backup improvements Vladimir Sementsov-Ogievskiy
                   ` (6 preceding siblings ...)
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 7/8] block/backup: merge duplicated logic into backup_do_cow Vladimir Sementsov-Ogievskiy
@ 2019-08-07  8:07 ` Vladimir Sementsov-Ogievskiy
  2019-08-07 18:46   ` Max Reitz
  7 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-07  8:07 UTC (permalink / raw)
  To: qemu-block
  Cc: fam, kwolf, vsementsov, armbru, qemu-devel, mreitz, stefanha, den, jsnow

Use the efficient bdrv_dirty_bitmap_next_dirty_area interface.
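
The in/out convention of that interface, as a minimal sketch (same
signature as in the diff below; bdrv_dirty_bitmap_next_dirty_area()
narrows *offset/*bytes in place to the next dirty area inside the given
window and returns false when none is left):

    uint64_t off = aligned_offset;
    uint64_t cur_bytes = aligned_bytes;

    while (bdrv_dirty_bitmap_next_dirty_area(bitmap, &off, &cur_bytes)) {
        /* [off, off + cur_bytes) is now the next dirty area: copy it */
        copy(off, cur_bytes);          /* placeholder for the offload /
                                          bounce-buffer paths */
        off += cur_bytes;              /* continue searching after it */
        cur_bytes = aligned_end - off; /* shrink the remaining window */
    }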

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c | 56 ++++++++++++++++++++++----------------------------
 1 file changed, 24 insertions(+), 32 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index f19c9195fe..5ede0c8290 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -235,25 +235,28 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
 {
     CowRequest cow_request;
     int ret = 0;
-    int64_t start, end; /* bytes */
+    uint64_t off, cur_bytes;
+    int64_t aligned_offset, aligned_bytes, aligned_end;
     BdrvRequestFlags read_flags =
             is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
 
     qemu_co_rwlock_rdlock(&job->flush_rwlock);
 
-    start = QEMU_ALIGN_DOWN(offset, job->cluster_size);
-    end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
+    aligned_offset = QEMU_ALIGN_DOWN(offset, job->cluster_size);
+    aligned_end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
+    aligned_bytes = aligned_end - aligned_offset;
 
-    trace_backup_do_cow_enter(job, start, offset, bytes);
+    trace_backup_do_cow_enter(job, aligned_offset, offset, bytes);
 
-    wait_for_overlapping_requests(job, start, end);
-    cow_request_begin(&cow_request, job, start, end);
+    wait_for_overlapping_requests(job, aligned_offset, aligned_end);
+    cow_request_begin(&cow_request, job, aligned_offset, aligned_end);
 
     if (job->initializing_bitmap) {
-        int64_t off, chunk;
+        int64_t chunk;
 
-        for (off = offset; off < end; off += chunk) {
-            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
+        for (off = aligned_offset; off < aligned_end; off += chunk) {
+            ret = backup_bitmap_reset_unallocated(job, off, aligned_end - off,
+                                                  &chunk);
             if (ret < 0) {
                 chunk = job->cluster_size;
             }
@@ -261,47 +264,36 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
     }
     ret = 0;
 
-    while (start < end) {
-        int64_t dirty_end;
-        int64_t cur_bytes;
-
-        if (!bdrv_dirty_bitmap_get(job->copy_bitmap, start)) {
-            trace_backup_do_cow_skip(job, start);
-            start += job->cluster_size;
-            continue; /* already copied */
-        }
-
-        dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
-                                                end - start);
-        if (dirty_end < 0) {
-            dirty_end = end;
-        }
-
-        trace_backup_do_cow_process(job, start);
-        cur_bytes = MIN(dirty_end - start, job->len - start);
-        bdrv_reset_dirty_bitmap(job->copy_bitmap, start, dirty_end - start);
+    off = aligned_offset;
+    cur_bytes = aligned_bytes;
+    while (bdrv_dirty_bitmap_next_dirty_area(job->copy_bitmap,
+                                             &off, &cur_bytes))
+    {
+        trace_backup_do_cow_process(job, off);
+        bdrv_reset_dirty_bitmap(job->copy_bitmap, off, cur_bytes);
 
         if (job->use_copy_range) {
-            ret = backup_cow_with_offload(job, start, cur_bytes, read_flags);
+            ret = backup_cow_with_offload(job, off, cur_bytes, read_flags);
             if (ret < 0) {
                 job->use_copy_range = false;
             }
         }
         if (!job->use_copy_range) {
-            ret = backup_cow_with_bounce_buffer(job, start, cur_bytes,
+            ret = backup_cow_with_bounce_buffer(job, off, cur_bytes,
                                                 read_flags, error_is_read);
         }
         if (ret < 0) {
-            bdrv_set_dirty_bitmap(job->copy_bitmap, start, dirty_end - start);
+            bdrv_set_dirty_bitmap(job->copy_bitmap, off, cur_bytes);
             break;
         }
 
         /* Publish progress, guest I/O counts as progress too.  Note that the
          * offset field is an opaque progress value, it is not a disk offset.
          */
-        start += cur_bytes;
+        off += cur_bytes;
         job->bytes_read += cur_bytes;
         job_progress_update(&job->common.job, cur_bytes);
+        cur_bytes = offset + bytes - off;
     }
 
     cow_request_end(&cow_request);
-- 
2.18.0




* Re: [Qemu-devel] [PATCH 2/8] block/backup: refactor write_flags
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 2/8] block/backup: refactor write_flags Vladimir Sementsov-Ogievskiy
@ 2019-08-07 16:53   ` Max Reitz
  0 siblings, 0 replies; 25+ messages in thread
From: Max Reitz @ 2019-08-07 16:53 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, qemu-devel, armbru, stefanha, den, jsnow



On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> The write flags are constant, so let's store them in BackupBlockJob
> instead of recalculating them. This also leaves two boolean fields
> unused, so drop them.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: John Snow <jsnow@redhat.com>
> ---
>  block/backup.c | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [Qemu-devel] [PATCH 3/8] block/io: handle alignment and max_transfer for copy_range
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 3/8] block/io: handle alignment and max_transfer for copy_range Vladimir Sementsov-Ogievskiy
@ 2019-08-07 17:28   ` Max Reitz
  2019-08-09  7:50     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-08-07 17:28 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, qemu-devel, armbru, stefanha, den, jsnow



On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> copy_range ignores these limitations; let's improve it. The
> block/backup code handles max_transfer for copy_range by itself; that
> is no longer needed, so drop it.

Shouldn’t this be two separate patches?

> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/backup.c | 11 ++---------
>  block/io.c     | 41 +++++++++++++++++++++++++++++++++--------
>  2 files changed, 35 insertions(+), 17 deletions(-)

[...]

> diff --git a/block/io.c b/block/io.c
> index 06305c6ea6..5abbd0fff2 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -3005,6 +3005,12 @@ static int coroutine_fn bdrv_co_copy_range_internal(
>  {
>      BdrvTrackedRequest req;
>      int ret;
> +    uint32_t align = MAX(src->bs->bl.request_alignment,
> +                         dst->bs->bl.request_alignment);
> +    uint32_t max_transfer =
> +            QEMU_ALIGN_DOWN(MIN_NON_ZERO(MIN_NON_ZERO(src->bs->bl.max_transfer,
> +                                                      dst->bs->bl.max_transfer),
> +                                         INT_MAX), align);

I suppose the outer QEMU_ALIGN_DOWN() may result in @max_transfer of 0
(in theory, if one’s max_transfer is smaller than the other’s alignment).

Not likely, but should still be handled.
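
A self-contained illustration of that corner case (the macros mirror
qemu/osdep.h; the limit values are made up):

    #include <stdio.h>
    #include <stdint.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #define MAX(a, b) ((a) > (b) ? (a) : (b))
    #define MIN_NON_ZERO(a, b) \
        ((a) == 0 ? (b) : ((b) == 0 ? (a) : MIN(a, b)))
    #define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))

    int main(void)
    {
        uint32_t src_align = 512, dst_align = 4096;
        uint32_t src_max_transfer = 512;   /* smaller than dst_align */
        uint32_t dst_max_transfer = 0;     /* unset */
        uint32_t align = MAX(src_align, dst_align);
        uint32_t max_transfer =
            QEMU_ALIGN_DOWN(MIN_NON_ZERO(src_max_transfer,
                                         dst_max_transfer), align);

        /* prints 0 -- MIN(bytes, max_transfer) would never advance */
        printf("max_transfer = %u\n", max_transfer);
        return 0;
    }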

The rest looks good to me.

Max

>      /* TODO We can support BDRV_REQ_NO_FALLBACK here */
>      assert(!(read_flags & BDRV_REQ_NO_FALLBACK));



* Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping Vladimir Sementsov-Ogievskiy
@ 2019-08-07 18:01   ` Max Reitz
  2019-08-09  7:50     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-08-07 18:01 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, qemu-devel, armbru, stefanha, den, jsnow



On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> Limit block_status querying to the request bounds on the write notifier
> path to avoid extra seeking.

I don’t understand this reasoning.  Checking whether something is
allocated for qcow2 should just mean an L2 cache lookup.  Which we have
to do anyway when we try to copy data off the source.

I could understand saying this makes the code easier, but I actually
don’t think it does.  If you implemented checking the allocation status
only for areas where the bitmap is reset (which I think this patch
should), then it’d just duplicate the existing loop.

> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/backup.c | 38 +++++++++++++++++++++-----------------
>  1 file changed, 21 insertions(+), 17 deletions(-)
> 
> diff --git a/block/backup.c b/block/backup.c
> index 11e27c844d..a4d37d2d62 100644
> --- a/block/backup.c
> +++ b/block/backup.c

[...]

> @@ -267,6 +267,18 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>      wait_for_overlapping_requests(job, start, end);
>      cow_request_begin(&cow_request, job, start, end);
>  
> +    if (job->initializing_bitmap) {
> +        int64_t off, chunk;
> +
> +        for (off = offset; off < end; off += chunk) {

This is what I’m referring to, I think this loop should skip areas that
are clean.

> +            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
> +            if (ret < 0) {
> +                chunk = job->cluster_size;
> +            }
> +        }
> +    }
> +    ret = 0;
> +
>      while (start < end) {
>          int64_t dirty_end;
>  
> @@ -276,15 +288,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>              continue; /* already copied */
>          }
>  
> -        if (job->initializing_bitmap) {
> -            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
> -            if (ret == 0) {
> -                trace_backup_do_cow_skip_range(job, start, skip_bytes);
> -                start += skip_bytes;
> -                continue;
> -            }
> -        }
> -
>          dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>                                                  end - start);
>          if (dirty_end < 0) {

Hm, you resolved that conflict differently from me.

I decided the bdrv_dirty_bitmap_next_zero() call should go before the
backup_bitmap_reset_unallocated() call so that we can then do a

  dirty_end = MIN(dirty_end, start + skip_bytes);

because we probably don’t want to copy anything past what
backup_bitmap_reset_unallocated() has inquired.


But then again this patch eliminates the difference anyway...

Max

> @@ -546,7 +549,8 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>                  goto out;
>              }
>  
> -            ret = backup_bitmap_reset_unallocated(s, offset, &count);
> +            ret = backup_bitmap_reset_unallocated(s, offset, s->len - offset,
> +                                                  &count);
>              if (ret < 0) {
>                  goto out;
>              }
> 




* Re: [Qemu-devel] [PATCH 5/8] block/backup: fix backup_cow_with_offload for last cluster
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 5/8] block/backup: fix backup_cow_with_offload for last cluster Vladimir Sementsov-Ogievskiy
@ 2019-08-07 18:05   ` Max Reitz
  0 siblings, 0 replies; 25+ messages in thread
From: Max Reitz @ 2019-08-07 18:05 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, qemu-devel, armbru, stefanha, den, jsnow



On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> We shouldn't try to copy bytes beyond EOF. Fix it.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/backup.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [Qemu-devel] [PATCH 6/8] block/backup: teach backup_cow_with_bounce_buffer to copy more at once
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 6/8] block/backup: teach backup_cow_with_bounce_buffer to copy more at once Vladimir Sementsov-Ogievskiy
@ 2019-08-07 18:25   ` Max Reitz
  0 siblings, 0 replies; 25+ messages in thread
From: Max Reitz @ 2019-08-07 18:25 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, qemu-devel, armbru, stefanha, den, jsnow



On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> backup_cow_with_offload can transfer more than one cluster. Let
> backup_cow_with_bounce_buffer behave similarly. This reduces the number
> of I/O requests, and there is no need to copy cluster by cluster.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/backup.c | 30 +++++++++++++++---------------
>  1 file changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/block/backup.c b/block/backup.c
> index eb41e4af4f..c765c073ad 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -104,22 +104,24 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
>                                                        int64_t start,
>                                                        int64_t end,
>                                                        bool is_write_notifier,
> -                                                      bool *error_is_read,
> -                                                      void **bounce_buffer)
> +                                                      bool *error_is_read)
>  {
>      int ret;
>      BlockBackend *blk = job->common.blk;
>      int nbytes;
>      int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
> +    void *bounce_buffer = blk_try_blockalign(blk, end);

s/end/end - start/ (or probably rather s/end/nbytes/ after that has been
set).

Rest looks good.

Max

>  
> -    assert(QEMU_IS_ALIGNED(start, job->cluster_size));
> -    bdrv_reset_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
> -    nbytes = MIN(job->cluster_size, job->len - start);
> -    if (!*bounce_buffer) {
> -        *bounce_buffer = blk_blockalign(blk, job->cluster_size);
> +    if (!bounce_buffer) {
> +        return -ENOMEM;
>      }



* Re: [Qemu-devel] [PATCH 7/8] block/backup: merge duplicated logic into backup_do_cow
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 7/8] block/backup: merge duplicated logic into backup_do_cow Vladimir Sementsov-Ogievskiy
@ 2019-08-07 18:37   ` Max Reitz
  0 siblings, 0 replies; 25+ messages in thread
From: Max Reitz @ 2019-08-07 18:37 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, qemu-devel, armbru, stefanha, den, jsnow



On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> backup_cow_with_offload and backup_cow_with_bounce_buffer contain a
> lot of duplicated logic. Move it into backup_do_cow.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/backup.c | 83 +++++++++++++++++++-------------------------------
>  1 file changed, 31 insertions(+), 52 deletions(-)

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [Qemu-devel] [PATCH 8/8] block/backup: backup_do_cow: use bdrv_dirty_bitmap_next_dirty_area
  2019-08-07  8:07 ` [Qemu-devel] [PATCH 8/8] block/backup: backup_do_cow: use bdrv_dirty_bitmap_next_dirty_area Vladimir Sementsov-Ogievskiy
@ 2019-08-07 18:46   ` Max Reitz
  2019-08-09  7:54     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-08-07 18:46 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, qemu-devel, armbru, stefanha, den, jsnow



On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> Use the efficient bdrv_dirty_bitmap_next_dirty_area interface.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/backup.c | 56 ++++++++++++++++++++++----------------------------
>  1 file changed, 24 insertions(+), 32 deletions(-)
> 
> diff --git a/block/backup.c b/block/backup.c
> index f19c9195fe..5ede0c8290 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -235,25 +235,28 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>  {
>      CowRequest cow_request;
>      int ret = 0;
> -    int64_t start, end; /* bytes */
> +    uint64_t off, cur_bytes;
> +    int64_t aligned_offset, aligned_bytes, aligned_end;
>      BdrvRequestFlags read_flags =
>              is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
>  
>      qemu_co_rwlock_rdlock(&job->flush_rwlock);
>  
> -    start = QEMU_ALIGN_DOWN(offset, job->cluster_size);
> -    end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
> +    aligned_offset = QEMU_ALIGN_DOWN(offset, job->cluster_size);
> +    aligned_end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
> +    aligned_bytes = aligned_end - aligned_offset;
>  
> -    trace_backup_do_cow_enter(job, start, offset, bytes);
> +    trace_backup_do_cow_enter(job, aligned_offset, offset, bytes);
>  
> -    wait_for_overlapping_requests(job, start, end);
> -    cow_request_begin(&cow_request, job, start, end);
> +    wait_for_overlapping_requests(job, aligned_offset, aligned_end);
> +    cow_request_begin(&cow_request, job, aligned_offset, aligned_end);
>  
>      if (job->initializing_bitmap) {
> -        int64_t off, chunk;
> +        int64_t chunk;
>  
> -        for (off = offset; offset < end; offset += chunk) {
> -            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
> +        for (off = aligned_offset; off < aligned_end; off += chunk) {
> +            ret = backup_bitmap_reset_unallocated(job, off, aligned_end - off,
> +                                                  &chunk);
>              if (ret < 0) {
>                  chunk = job->cluster_size;
>              }
> @@ -261,47 +264,36 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>      }
>      ret = 0;
>  
> -    while (start < end) {
> -        int64_t dirty_end;
> -        int64_t cur_bytes;
> -
> -        if (!bdrv_dirty_bitmap_get(job->copy_bitmap, start)) {
> -            trace_backup_do_cow_skip(job, start);
> -            start += job->cluster_size;
> -            continue; /* already copied */
> -        }
> -
> -        dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
> -                                                end - start);
> -        if (dirty_end < 0) {
> -            dirty_end = end;
> -        }
> -
> -        trace_backup_do_cow_process(job, start);
> -        cur_bytes = MIN(dirty_end - start, job->len - start);
> -        bdrv_reset_dirty_bitmap(job->copy_bitmap, start, dirty_end - start);
> +    off = aligned_offset;
> +    cur_bytes = aligned_bytes;
> +    while (bdrv_dirty_bitmap_next_dirty_area(job->copy_bitmap,
> +                                             &off, &cur_bytes))
> +    {
> +        trace_backup_do_cow_process(job, off);
> +        bdrv_reset_dirty_bitmap(job->copy_bitmap, off, cur_bytes);
>  
>          if (job->use_copy_range) {
> -            ret = backup_cow_with_offload(job, start, cur_bytes, read_flags);
> +            ret = backup_cow_with_offload(job, off, cur_bytes, read_flags);
>              if (ret < 0) {
>                  job->use_copy_range = false;
>              }
>          }
>          if (!job->use_copy_range) {
> -            ret = backup_cow_with_bounce_buffer(job, start, cur_bytes,
> +            ret = backup_cow_with_bounce_buffer(job, off, cur_bytes,
>                                                  read_flags, error_is_read);
>          }
>          if (ret < 0) {
> -            bdrv_set_dirty_bitmap(job->copy_bitmap, start, dirty_end - start);
> +            bdrv_set_dirty_bitmap(job->copy_bitmap, off, cur_bytes);
>              break;
>          }
>  
>          /* Publish progress, guest I/O counts as progress too.  Note that the
>           * offset field is an opaque progress value, it is not a disk offset.
>           */
> -        start += cur_bytes;
> +        off += cur_bytes;
>          job->bytes_read += cur_bytes;
>          job_progress_update(&job->common.job, cur_bytes);
> +        cur_bytes = offset + bytes - off;

Hm, why not aligned_end - off?

(You could also drop aligned_bytes altogether and always set cur_bytes
to aligned_end - off.)

Max

>      }
>  
>      cow_request_end(&cow_request);
> 




* Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
  2019-08-07 18:01   ` Max Reitz
@ 2019-08-09  7:50     ` Vladimir Sementsov-Ogievskiy
  2019-08-09  9:12       ` Vladimir Sementsov-Ogievskiy
                         ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-09  7:50 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, armbru, stefanha, jsnow

07.08.2019 21:01, Max Reitz wrote:
> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>> Limit block_status querying to request bounds on write notifier to
>> avoid extra seeking.
> 
> I don’t understand this reasoning.  Checking whether something is
> allocated for qcow2 should just mean an L2 cache lookup.  Which we have
> to do anyway when we try to copy data off the source.

But for raw it's seeking. I think we just shouldn't do any unnecessary operations
in copy-before-write handling. So instead of calling
block_status(request_start, disk_end), I think it's better to call
block_status(request_start, request_end). That also makes the code easier
to reuse.
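
To illustrate, a rough sketch (request_start/request_bytes are hypothetical
shorthand for the bounds of the triggering guest write; not the final code):

    /* Bound the block-status query by the copy-before-write request,
     * not by the end of the disk. */
    int64_t request_end = request_start + request_bytes;
    int64_t pnum;
    ret = bdrv_is_allocated(blk_bs(job->common.blk), request_start,
                            request_end - request_start, &pnum);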

> 
> I could understand saying this makes the code easier, but I actually
> don’t think it does.  If you implemented checking the allocation status
> only for areas where the bitmap is reset (which I think this patch
> should), then it’d just duplicate the existing loop.
> 
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/backup.c | 38 +++++++++++++++++++++-----------------
>>   1 file changed, 21 insertions(+), 17 deletions(-)
>>
>> diff --git a/block/backup.c b/block/backup.c
>> index 11e27c844d..a4d37d2d62 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
> 
> [...]
> 
>> @@ -267,6 +267,18 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>       wait_for_overlapping_requests(job, start, end);
>>       cow_request_begin(&cow_request, job, start, end);
>>   
>> +    if (job->initializing_bitmap) {
>> +        int64_t off, chunk;
>> +
>> +        for (off = offset; offset < end; offset += chunk) {
> 
> This is what I’m referring to, I think this loop should skip areas that
> are clean.

Agree, will do.

> 
>> +            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
>> +            if (ret < 0) {
>> +                chunk = job->cluster_size;
>> +            }
>> +        }
>> +    }
>> +    ret = 0;
>> +
>>       while (start < end) {
>>           int64_t dirty_end;
>>   
>> @@ -276,15 +288,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>               continue; /* already copied */
>>           }
>>   
>> -        if (job->initializing_bitmap) {
>> -            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
>> -            if (ret == 0) {
>> -                trace_backup_do_cow_skip_range(job, start, skip_bytes);
>> -                start += skip_bytes;
>> -                continue;
>> -            }
>> -        }
>> -
>>           dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>>                                                   end - start);
>>           if (dirty_end < 0) {
> 
> Hm, you resolved that conflict differently from me.
> 
> I decided the bdrv_dirty_bitmap_next_zero() call should go before the
> backup_bitmap_reset_unallocated() call so that we can then do a
> 
>    dirty_end = MIN(dirty_end, start + skip_bytes);
> 
> because we probably don’t want to copy anything past what
> backup_bitmap_reset_unallocated() has inquired.
> 
> 
> But then again this patch eliminates the difference anyway...
>  >
>> @@ -546,7 +549,8 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>>                   goto out;
>>               }
>>   
>> -            ret = backup_bitmap_reset_unallocated(s, offset, &count);
>> +            ret = backup_bitmap_reset_unallocated(s, offset, s->len - offset,
>> +                                                  &count);
>>               if (ret < 0) {
>>                   goto out;
>>               }
>>
> 
> 


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 3/8] block/io: handle alignment and max_transfer for copy_range
  2019-08-07 17:28   ` Max Reitz
@ 2019-08-09  7:50     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-09  7:50 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, armbru, stefanha, jsnow

07.08.2019 20:28, Max Reitz wrote:
> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>> copy_range ignores these limitations; let's improve it. block/backup
>> code handles max_transfer for copy_range by itself; that is no longer
>> needed, so drop it.
> 
> Shouldn’t this be two separate patches?

Not a problem, will do.

> 
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/backup.c | 11 ++---------
>>   block/io.c     | 41 +++++++++++++++++++++++++++++++++--------
>>   2 files changed, 35 insertions(+), 17 deletions(-)
> 
> [...]
> 
>> diff --git a/block/io.c b/block/io.c
>> index 06305c6ea6..5abbd0fff2 100644
>> --- a/block/io.c
>> +++ b/block/io.c
>> @@ -3005,6 +3005,12 @@ static int coroutine_fn bdrv_co_copy_range_internal(
>>   {
>>       BdrvTrackedRequest req;
>>       int ret;
>> +    uint32_t align = MAX(src->bs->bl.request_alignment,
>> +                         dst->bs->bl.request_alignment);
>> +    uint32_t max_transfer =
>> +            QEMU_ALIGN_DOWN(MIN_NON_ZERO(MIN_NON_ZERO(src->bs->bl.max_transfer,
>> +                                                      dst->bs->bl.max_transfer),
>> +                                         INT_MAX), align);
> 
> I suppose the outer QEMU_ALIGN_DOWN() may result in @max_transfer of 0
> (in theory, if one’s max_transfer is smaller than the other’s alignment).
> 
> Not likely, but should still be handled.
> 
> The rest looks good to me.
> 
> Max
> 
>>       /* TODO We can support BDRV_REQ_NO_FALLBACK here */
>>       assert(!(read_flags & BDRV_REQ_NO_FALLBACK));
> 
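
Right, will handle that. A minimal guard might be (just a sketch; it assumes
that exceeding a driver's soft max_transfer by less than one alignment unit
is acceptable):

    /* If aligning down left nothing (one side's max_transfer is smaller
     * than the combined alignment), fall back to one aligned chunk so
     * that the copy loop still makes progress. */
    if (max_transfer == 0) {
        max_transfer = align;
    }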


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 8/8] block/backup: backup_do_cow: use bdrv_dirty_bitmap_next_dirty_area
  2019-08-07 18:46   ` Max Reitz
@ 2019-08-09  7:54     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-09  7:54 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, armbru, stefanha, jsnow

07.08.2019 21:46, Max Reitz wrote:
> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>> Use the more efficient bdrv_dirty_bitmap_next_dirty_area interface.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/backup.c | 56 ++++++++++++++++++++++----------------------------
>>   1 file changed, 24 insertions(+), 32 deletions(-)
>>
>> diff --git a/block/backup.c b/block/backup.c
>> index f19c9195fe..5ede0c8290 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -235,25 +235,28 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>   {
>>       CowRequest cow_request;
>>       int ret = 0;
>> -    int64_t start, end; /* bytes */
>> +    uint64_t off, cur_bytes;
>> +    int64_t aligned_offset, aligned_bytes, aligned_end;
>>       BdrvRequestFlags read_flags =
>>               is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
>>   
>>       qemu_co_rwlock_rdlock(&job->flush_rwlock);
>>   
>> -    start = QEMU_ALIGN_DOWN(offset, job->cluster_size);
>> -    end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
>> +    aligned_offset = QEMU_ALIGN_DOWN(offset, job->cluster_size);
>> +    aligned_end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
>> +    aligned_bytes = aligned_end - aligned_offset;
>>   
>> -    trace_backup_do_cow_enter(job, start, offset, bytes);
>> +    trace_backup_do_cow_enter(job, aligned_offset, offset, bytes);
>>   
>> -    wait_for_overlapping_requests(job, start, end);
>> -    cow_request_begin(&cow_request, job, start, end);
>> +    wait_for_overlapping_requests(job, aligned_offset, aligned_end);
>> +    cow_request_begin(&cow_request, job, aligned_offset, aligned_end);
>>   
>>       if (job->initializing_bitmap) {
>> -        int64_t off, chunk;
>> +        int64_t chunk;
>>   
>> -        for (off = offset; offset < end; offset += chunk) {
>> -            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
>> +        for (off = aligned_offset; off < aligned_end; off += chunk) {
>> +            ret = backup_bitmap_reset_unallocated(job, off, aligned_end - off,
>> +                                                  &chunk);
>>               if (ret < 0) {
>>                   chunk = job->cluster_size;
>>               }
>> @@ -261,47 +264,36 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>       }
>>       ret = 0;
>>   
>> -    while (start < end) {
>> -        int64_t dirty_end;
>> -        int64_t cur_bytes;
>> -
>> -        if (!bdrv_dirty_bitmap_get(job->copy_bitmap, start)) {
>> -            trace_backup_do_cow_skip(job, start);
>> -            start += job->cluster_size;
>> -            continue; /* already copied */
>> -        }
>> -
>> -        dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>> -                                                end - start);
>> -        if (dirty_end < 0) {
>> -            dirty_end = end;
>> -        }
>> -
>> -        trace_backup_do_cow_process(job, start);
>> -        cur_bytes = MIN(dirty_end - start, job->len - start);
>> -        bdrv_reset_dirty_bitmap(job->copy_bitmap, start, dirty_end - start);
>> +    off = aligned_offset;
>> +    cur_bytes = aligned_bytes;
>> +    while (bdrv_dirty_bitmap_next_dirty_area(job->copy_bitmap,
>> +                                             &off, &cur_bytes))
>> +    {
>> +        trace_backup_do_cow_process(job, off);
>> +        bdrv_reset_dirty_bitmap(job->copy_bitmap, off, cur_bytes);
>>   
>>           if (job->use_copy_range) {
>> -            ret = backup_cow_with_offload(job, start, cur_bytes, read_flags);
>> +            ret = backup_cow_with_offload(job, off, cur_bytes, read_flags);
>>               if (ret < 0) {
>>                   job->use_copy_range = false;
>>               }
>>           }
>>           if (!job->use_copy_range) {
>> -            ret = backup_cow_with_bounce_buffer(job, start, cur_bytes,
>> +            ret = backup_cow_with_bounce_buffer(job, off, cur_bytes,
>>                                                   read_flags, error_is_read);
>>           }
>>           if (ret < 0) {
>> -            bdrv_set_dirty_bitmap(job->copy_bitmap, start, dirty_end - start);
>> +            bdrv_set_dirty_bitmap(job->copy_bitmap, off, cur_bytes);
>>               break;
>>           }
>>   
>>           /* Publish progress, guest I/O counts as progress too.  Note that the
>>            * offset field is an opaque progress value, it is not a disk offset.
>>            */
>> -        start += cur_bytes;
>> +        off += cur_bytes;
>>           job->bytes_read += cur_bytes;
>>           job_progress_update(&job->common.job, cur_bytes);
>> +        cur_bytes = offset + bytes - off;
> 
> Hm, why not aligned_end - off?
> 
> (You could also drop aligned_bytes altogether and always set cur_bytes
> to aligned_end - off.)
> 

Hmm, right.

Thank you for reviewing!

-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
  2019-08-09  7:50     ` Vladimir Sementsov-Ogievskiy
@ 2019-08-09  9:12       ` Vladimir Sementsov-Ogievskiy
  2019-08-09 10:25         ` Vladimir Sementsov-Ogievskiy
  2019-08-09 10:10       ` Vladimir Sementsov-Ogievskiy
  2019-08-09 12:25       ` Max Reitz
  2 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-09  9:12 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, armbru, stefanha, jsnow

09.08.2019 10:50, Vladimir Sementsov-Ogievskiy wrote:
> 07.08.2019 21:01, Max Reitz wrote:
>> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>>> Limit block_status querying to request bounds on write notifier to
>>> avoid extra seeking.
>>
>> I don’t understand this reasoning.  Checking whether something is
>> allocated for qcow2 should just mean an L2 cache lookup.  Which we have
>> to do anyway when we try to copy data off the source.
> 
> But for raw it's seeking. I think we just shouldn't do any unnecessary operations
> in copy-before-write handling. So instead of calling
> block_status(request_start, disk_end), I think it's better to call
> block_status(request_start, request_end). That also makes the code easier
> to reuse.
> 
>>
>> I could understand saying this makes the code easier, but I actually
>> don’t think it does.  If you implemented checking the allocation status
>> only for areas where the bitmap is reset (which I think this patch
>> should), then it’d just duplicate the existing loop.
>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   block/backup.c | 38 +++++++++++++++++++++-----------------
>>>   1 file changed, 21 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/block/backup.c b/block/backup.c
>>> index 11e27c844d..a4d37d2d62 100644
>>> --- a/block/backup.c
>>> +++ b/block/backup.c
>>
>> [...]
>>
>>> @@ -267,6 +267,18 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>>       wait_for_overlapping_requests(job, start, end);
>>>       cow_request_begin(&cow_request, job, start, end);
>>> +    if (job->initializing_bitmap) {
>>> +        int64_t off, chunk;
>>> +
>>> +        for (off = offset; offset < end; offset += chunk) {
>>
>> This is what I’m referring to, I think this loop should skip areas that
>> are clean.
> 
> Agree, will do.

Hmm, I remember that I thought of this, but decided that it may not be efficient if
bitmap fragmentation is higher than block-status fragmentation. So, as long as we
don't know which is better, let's keep the simpler code.

> 
>>
>>> +            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
>>> +            if (ret < 0) {
>>> +                chunk = job->cluster_size;
>>> +            }
>>> +        }
>>> +    }
>>> +    ret = 0;
>>> +
>>>       while (start < end) {
>>>           int64_t dirty_end;
>>> @@ -276,15 +288,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>>               continue; /* already copied */
>>>           }
>>> -        if (job->initializing_bitmap) {
>>> -            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
>>> -            if (ret == 0) {
>>> -                trace_backup_do_cow_skip_range(job, start, skip_bytes);
>>> -                start += skip_bytes;
>>> -                continue;
>>> -            }
>>> -        }
>>> -
>>>           dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>>>                                                   end - start);
>>>           if (dirty_end < 0) {
>>
>> Hm, you resolved that conflict differently from me.
>>
>> I decided the bdrv_dirty_bitmap_next_zero() call should go before the
>> backup_bitmap_reset_unallocated() call so that we can then do a
>>
>>    dirty_end = MIN(dirty_end, start + skip_bytes);
>>
>> because we probably don’t want to copy anything past what
>> backup_bitmap_reset_unallocated() has inquired.
>>
>>
>> But then again this patch eliminates the difference anyway...
>>  >
>>> @@ -546,7 +549,8 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>>>                   goto out;
>>>               }
>>> -            ret = backup_bitmap_reset_unallocated(s, offset, &count);
>>> +            ret = backup_bitmap_reset_unallocated(s, offset, s->len - offset,
>>> +                                                  &count);
>>>               if (ret < 0) {
>>>                   goto out;
>>>               }
>>>
>>
>>
> 
> 


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
  2019-08-09  7:50     ` Vladimir Sementsov-Ogievskiy
  2019-08-09  9:12       ` Vladimir Sementsov-Ogievskiy
@ 2019-08-09 10:10       ` Vladimir Sementsov-Ogievskiy
  2019-08-09 12:25       ` Max Reitz
  2 siblings, 0 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-09 10:10 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, armbru, stefanha, jsnow

09.08.2019 10:50, Vladimir Sementsov-Ogievskiy wrote:
> 07.08.2019 21:01, Max Reitz wrote:
>> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>>> Limit block_status querying to request bounds on write notifier to
>>> avoid extra seeking.
>>
>> I don’t understand this reasoning.  Checking whether something is
>> allocated for qcow2 should just mean an L2 cache lookup.  Which we have
>> to do anyway when we try to copy data off the source.
> 
> But for raw it's seeking. I think we just shouldn't do any unnecessary operations
> in copy-before-write handling. So instead of calling
> block_status(request_start, disk_end), I think it's better to call
> block_status(request_start, request_end). That also makes the code easier
> to reuse.

Oops, and seeking lacks a count limit anyway.

But anyway, we have an API with a count limit, and it seems natural to assume that
some driver may do more work for a bigger request. So a smaller request is better
for the copy-before-write operation, where we are in a hurry to unfreeze the guest
write as soon as possible.

Hmm, an example of such a driver may be NBD, which can concatenate block-status
results for the exported disk.


> 
>>
>> I could understand saying this makes the code easier, but I actually
>> don’t think it does.  If you implemented checking the allocation status
>> only for areas where the bitmap is reset (which I think this patch
>> should), then it’d just duplicate the existing loop.
>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   block/backup.c | 38 +++++++++++++++++++++-----------------
>>>   1 file changed, 21 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/block/backup.c b/block/backup.c
>>> index 11e27c844d..a4d37d2d62 100644
>>> --- a/block/backup.c
>>> +++ b/block/backup.c
>>
>> [...]
>>
>>> @@ -267,6 +267,18 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>>       wait_for_overlapping_requests(job, start, end);
>>>       cow_request_begin(&cow_request, job, start, end);
>>> +    if (job->initializing_bitmap) {
>>> +        int64_t off, chunk;
>>> +
>>> +        for (off = offset; offset < end; offset += chunk) {
>>
>> This is what I’m referring to, I think this loop should skip areas that
>> are clean.
> 
> Agree, will do.
> 
>>
>>> +            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
>>> +            if (ret < 0) {
>>> +                chunk = job->cluster_size;
>>> +            }
>>> +        }
>>> +    }
>>> +    ret = 0;
>>> +
>>>       while (start < end) {
>>>           int64_t dirty_end;
>>> @@ -276,15 +288,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>>               continue; /* already copied */
>>>           }
>>> -        if (job->initializing_bitmap) {
>>> -            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
>>> -            if (ret == 0) {
>>> -                trace_backup_do_cow_skip_range(job, start, skip_bytes);
>>> -                start += skip_bytes;
>>> -                continue;
>>> -            }
>>> -        }
>>> -
>>>           dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>>>                                                   end - start);
>>>           if (dirty_end < 0) {
>>
>> Hm, you resolved that conflict differently from me.
>>
>> I decided the bdrv_dirty_bitmap_next_zero() call should go before the
>> backup_bitmap_reset_unallocated() call so that we can then do a
>>
>>    dirty_end = MIN(dirty_end, start + skip_bytes);
>>
>> because we probably don’t want to copy anything past what
>> backup_bitmap_reset_unallocated() has inquired.
>>
>>
>> But then again this patch eliminates the difference anyway...
>>  >
>>> @@ -546,7 +549,8 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>>>                   goto out;
>>>               }
>>> -            ret = backup_bitmap_reset_unallocated(s, offset, &count);
>>> +            ret = backup_bitmap_reset_unallocated(s, offset, s->len - offset,
>>> +                                                  &count);
>>>               if (ret < 0) {
>>>                   goto out;
>>>               }
>>>
>>
>>
> 
> 


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
  2019-08-09  9:12       ` Vladimir Sementsov-Ogievskiy
@ 2019-08-09 10:25         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-09 10:25 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, armbru, stefanha, jsnow

09.08.2019 12:12, Vladimir Sementsov-Ogievskiy wrote:
> 09.08.2019 10:50, Vladimir Sementsov-Ogievskiy wrote:
>> 07.08.2019 21:01, Max Reitz wrote:
>>> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>>>> Limit block_status querying to request bounds on write notifier to
>>>> avoid extra seeking.
>>>
>>> I don’t understand this reasoning.  Checking whether something is
>>> allocated for qcow2 should just mean an L2 cache lookup.  Which we have
>>> to do anyway when we try to copy data off the source.
>>
>> But for raw it's seeking. I think we just shouldn't do any unnecessary operations
>> in copy-before-write handling. So instead of calling
>> block_status(request_start, disk_end), I think it's better to call
>> block_status(request_start, request_end). That also makes the code easier
>> to reuse.
>>
>>>
>>> I could understand saying this makes the code easier, but I actually
>>> don’t think it does.  If you implemented checking the allocation status
>>> only for areas where the bitmap is reset (which I think this patch
>>> should), then it’d just duplicate the existing loop.
>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>   block/backup.c | 38 +++++++++++++++++++++-----------------
>>>>   1 file changed, 21 insertions(+), 17 deletions(-)
>>>>
>>>> diff --git a/block/backup.c b/block/backup.c
>>>> index 11e27c844d..a4d37d2d62 100644
>>>> --- a/block/backup.c
>>>> +++ b/block/backup.c
>>>
>>> [...]
>>>
>>>> @@ -267,6 +267,18 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>>>       wait_for_overlapping_requests(job, start, end);
>>>>       cow_request_begin(&cow_request, job, start, end);
>>>> +    if (job->initializing_bitmap) {
>>>> +        int64_t off, chunk;
>>>> +
>>>> +        for (off = offset; offset < end; offset += chunk) {
>>>
>>> This is what I’m referring to, I think this loop should skip areas that
>>> are clean.
>>
>> Agree, will do.
> 
> Hmm, I remember that I thought of this, but decided that it may not be efficient if
> bitmap fragmentation is higher than block-status fragmentation. So, as long as we
> don't know which is better, let's keep the simpler code.

Hmm2, but I see a compromise (maybe you meant exactly this): don't use the next
zero bit as the limit (always use request_end instead), but skip ahead to the
next dirty bit before each iteration.
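
Roughly like this (a sketch on top of this patch, untested):

    for (off = offset; off < end; off += chunk) {
        uint64_t doff = off, dbytes = end - off;

        /* Skip clean areas first. */
        if (!bdrv_dirty_bitmap_next_dirty_area(job->copy_bitmap,
                                               &doff, &dbytes)) {
            break; /* nothing dirty left in the request */
        }

        /* But query allocation up to request end, not only up to the
         * next clean bit. */
        ret = backup_bitmap_reset_unallocated(job, doff, end - doff, &chunk);
        if (ret < 0) {
            chunk = job->cluster_size;
        }
        off = doff;
    }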

> 
>>
>>>
>>>> +            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
>>>> +            if (ret < 0) {
>>>> +                chunk = job->cluster_size;
>>>> +            }
>>>> +        }
>>>> +    }
>>>> +    ret = 0;
>>>> +
>>>>       while (start < end) {
>>>>           int64_t dirty_end;
>>>> @@ -276,15 +288,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>>>               continue; /* already copied */
>>>>           }
>>>> -        if (job->initializing_bitmap) {
>>>> -            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
>>>> -            if (ret == 0) {
>>>> -                trace_backup_do_cow_skip_range(job, start, skip_bytes);
>>>> -                start += skip_bytes;
>>>> -                continue;
>>>> -            }
>>>> -        }
>>>> -
>>>>           dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>>>>                                                   end - start);
>>>>           if (dirty_end < 0) {
>>>
>>> Hm, you resolved that conflict differently from me.
>>>
>>> I decided the bdrv_dirty_bitmap_next_zero() call should go before the
>>> backup_bitmap_reset_unallocated() call so that we can then do a
>>>
>>>    dirty_end = MIN(dirty_end, start + skip_bytes);
>>>
>>> because we probably don’t want to copy anything past what
>>> backup_bitmap_reset_unallocated() has inquired.
>>>
>>>
>>> But then again this patch eliminates the difference anyway...
>>>  >
>>>> @@ -546,7 +549,8 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>>>>                   goto out;
>>>>               }
>>>> -            ret = backup_bitmap_reset_unallocated(s, offset, &count);
>>>> +            ret = backup_bitmap_reset_unallocated(s, offset, s->len - offset,
>>>> +                                                  &count);
>>>>               if (ret < 0) {
>>>>                   goto out;
>>>>               }
>>>>
>>>
>>>
>>
>>
> 
> 


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
  2019-08-09  7:50     ` Vladimir Sementsov-Ogievskiy
  2019-08-09  9:12       ` Vladimir Sementsov-Ogievskiy
  2019-08-09 10:10       ` Vladimir Sementsov-Ogievskiy
@ 2019-08-09 12:25       ` Max Reitz
  2019-08-09 12:47         ` Vladimir Sementsov-Ogievskiy
  2 siblings, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-08-09 12:25 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, armbru, stefanha, jsnow



On 09.08.19 09:50, Vladimir Sementsov-Ogievskiy wrote:
> 07.08.2019 21:01, Max Reitz wrote:
>> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>>> Limit block_status querying to request bounds on write notifier to
>>> avoid extra seeking.
>>
>> I don’t understand this reasoning.  Checking whether something is
>> allocated for qcow2 should just mean an L2 cache lookup.  Which we have
>> to do anyway when we try to copy data off the source.
> 
> But for raw it's seeking.

(1) That’s a bug in block_status then, isn’t it?

file-posix cannot determine the allocation status, or rather, everything
is allocated.  bdrv_co_block_status() should probably pass @want_zero on
to the driver’s implementation, and file-posix should just
unconditionally return DATA if it’s false.
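
Something along these lines in file-posix, as a sketch (the early return is
the part that would be new; everything else exists):

    static int coroutine_fn raw_co_block_status(BlockDriverState *bs,
                                                bool want_zero, int64_t offset,
                                                int64_t bytes, int64_t *pnum,
                                                int64_t *map,
                                                BlockDriverState **file)
    {
        if (!want_zero) {
            /* Caller doesn't care about zeroes: report everything as
             * allocated data and skip the lseek() dance entirely. */
            *pnum = bytes;
            *map = offset;
            *file = bs;
            return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
        }

        /* ... existing SEEK_DATA/SEEK_HOLE logic ... */
    }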

(2) Why would you even use sync=top for raw nodes?

Max



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
  2019-08-09 12:25       ` Max Reitz
@ 2019-08-09 12:47         ` Vladimir Sementsov-Ogievskiy
  2019-08-09 12:53           ` Max Reitz
  0 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-09 12:47 UTC (permalink / raw)
  To: Max Reitz, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, armbru, stefanha, jsnow

09.08.2019 15:25, Max Reitz wrote:
> On 09.08.19 09:50, Vladimir Sementsov-Ogievskiy wrote:
>> 07.08.2019 21:01, Max Reitz wrote:
>>> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>>>> Limit block_status querying to request bounds on write notifier to
>>>> avoid extra seeking.
>>>
>>> I don’t understand this reasoning.  Checking whether something is
>>> allocated for qcow2 should just mean an L2 cache lookup.  Which we have
>>> to do anyway when we try to copy data off the source.
>>
>> But for raw it's seeking.
> 
> (1) That’s a bug in block_status then, isn’t it?
> 
> file-posix cannot determine the allocation status, or rather, everything
> is allocated.  bdrv_co_block_status() should probably pass @want_zero on
> to the driver’s implementation, and file-posix should just
> unconditionally return DATA if it’s false.
> 
> (2) Why would you even use sync=top for raw nodes?
> 

As I described in the parallel letters, raw was a bad example; NBD is a good one.
Anyway, I'm now refactoring the cluster skipping more deeply for v2.

About top mode: eventually block-status should be used to improve the other
modes too. In Virtuozzo we skip unallocated clusters for full mode too, for example.
Unfortunately, backup is the most long-term thing for me to upstream...

-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
  2019-08-09 12:47         ` Vladimir Sementsov-Ogievskiy
@ 2019-08-09 12:53           ` Max Reitz
  0 siblings, 0 replies; 25+ messages in thread
From: Max Reitz @ 2019-08-09 12:53 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, armbru, stefanha, jsnow



On 09.08.19 14:47, Vladimir Sementsov-Ogievskiy wrote:
> 09.08.2019 15:25, Max Reitz wrote:
>> On 09.08.19 09:50, Vladimir Sementsov-Ogievskiy wrote:
>>> 07.08.2019 21:01, Max Reitz wrote:
>>>> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>>>>> Limit block_status querying to request bounds on write notifier to
>>>>> avoid extra seeking.
>>>>
>>>> I don’t understand this reasoning.  Checking whether something is
>>>> allocated for qcow2 should just mean an L2 cache lookup.  Which we have
>>>> to do anyway when we try to copy data off the source.
>>>
>>> But for raw it's seeking.
>>
>> (1) That’s a bug in block_status then, isn’t it?
>>
>> file-posix cannot determine the allocation status, or rather, everything
>> is allocated.  bdrv_co_block_status() should probably pass @want_zero on
>> to the driver’s implementation, and file-posix should just
>> unconditionally return DATA if it’s false.
>>
>> (2) Why would you even use sync=top for raw nodes?
>>
> 
> As I described in the parallel letters, raw was a bad example; NBD is a good one.

Does NBD support backing files?

> Anyway, I'm now refactoring the cluster skipping more deeply for v2.
> 
> About top mode: eventually block-status should be used to improve the other
> modes too. In Virtuozzo we skip unallocated clusters for full mode too, for example.

But this patch here is about sync=top.

Skipping is an optimization; the block_status querying here happens
because copying anything that isn’t allocated in the top layer would be
wrong.

Max

> Unfortunately, backup is the most long-term thing for me to upstream...



^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2019-08-09 13:06 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-07  8:07 [Qemu-devel] [PATCH 0/8] backup improvements Vladimir Sementsov-Ogievskiy
2019-08-07  8:07 ` [Qemu-devel] [PATCH 1/8] block/backup: deal with zero detection Vladimir Sementsov-Ogievskiy
2019-08-07  8:07 ` [Qemu-devel] [PATCH 2/8] block/backup: refactor write_flags Vladimir Sementsov-Ogievskiy
2019-08-07 16:53   ` Max Reitz
2019-08-07  8:07 ` [Qemu-devel] [PATCH 3/8] block/io: handle alignment and max_transfer for copy_range Vladimir Sementsov-Ogievskiy
2019-08-07 17:28   ` Max Reitz
2019-08-09  7:50     ` Vladimir Sementsov-Ogievskiy
2019-08-07  8:07 ` [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping Vladimir Sementsov-Ogievskiy
2019-08-07 18:01   ` Max Reitz
2019-08-09  7:50     ` Vladimir Sementsov-Ogievskiy
2019-08-09  9:12       ` Vladimir Sementsov-Ogievskiy
2019-08-09 10:25         ` Vladimir Sementsov-Ogievskiy
2019-08-09 10:10       ` Vladimir Sementsov-Ogievskiy
2019-08-09 12:25       ` Max Reitz
2019-08-09 12:47         ` Vladimir Sementsov-Ogievskiy
2019-08-09 12:53           ` Max Reitz
2019-08-07  8:07 ` [Qemu-devel] [PATCH 5/8] block/backup: fix backup_cow_with_offload for last cluster Vladimir Sementsov-Ogievskiy
2019-08-07 18:05   ` Max Reitz
2019-08-07  8:07 ` [Qemu-devel] [PATCH 6/8] block/backup: teach backup_cow_with_bounce_buffer to copy more at once Vladimir Sementsov-Ogievskiy
2019-08-07 18:25   ` Max Reitz
2019-08-07  8:07 ` [Qemu-devel] [PATCH 7/8] block/backup: merge duplicated logic into backup_do_cow Vladimir Sementsov-Ogievskiy
2019-08-07 18:37   ` Max Reitz
2019-08-07  8:07 ` [Qemu-devel] [PATCH 8/8] block/backup: backup_do_cow: use bdrv_dirty_bitmap_next_dirty_area Vladimir Sementsov-Ogievskiy
2019-08-07 18:46   ` Max Reitz
2019-08-09  7:54     ` Vladimir Sementsov-Ogievskiy
