qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters
@ 2019-08-14 14:43 Vladimir Sementsov-Ogievskiy
  2019-08-14 14:43 ` [Qemu-devel] [PATCH 1/4] block/dirty-bitmap: switch _next_dirty_area and _next_zero to int64_t Vladimir Sementsov-Ogievskiy
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-14 14:43 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

Hi all!

There is a bug in the not-yet-merged patch
"block/backup: teach TOP to never copy unallocated regions"
in https://github.com/jnsnow/qemu bitmaps. Patch 04 fixes it. So I propose
to put 01-03 somewhere before
"block/backup: teach TOP to never copy unallocated regions"
and to squash 04 into "block/backup: teach TOP to never copy unallocated regions".

Based-on: https://github.com/jnsnow/qemu bitmaps

Vladimir Sementsov-Ogievskiy (4):
  block/dirty-bitmap: switch _next_dirty_area and _next_zero to int64_t
  block/dirty-bitmap: add _next_dirty API
  block/backup: use bdrv_dirty_bitmap_next_dirty
  block/backup: fix and improve skipping unallocated in backup_do_cow

 include/block/dirty-bitmap.h |  8 ++--
 include/qemu/hbitmap.h       | 18 +++++++--
 block/backup.c               | 33 ++++++++++-------
 block/dirty-bitmap.c         | 12 ++++--
 block/mirror.c               |  4 +-
 tests/test-hbitmap.c         | 32 ++++++++--------
 util/hbitmap.c               | 72 ++++++++++++++++++++----------------
 block/trace-events           |  1 -
 8 files changed, 107 insertions(+), 73 deletions(-)

-- 
2.18.0




* [Qemu-devel] [PATCH 1/4] block/dirty-bitmap: switch _next_dirty_area and _next_zero to int64_t
  2019-08-14 14:43 [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
@ 2019-08-14 14:43 ` Vladimir Sementsov-Ogievskiy
  2019-08-14 14:43 ` [Qemu-devel] [PATCH 2/4] block/dirty-bitmap: add _next_dirty API Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-14 14:43 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

It is inconvenient that we cannot use the same variable both as the result
of bdrv_dirty_bitmap_next_zero and as a parameter of
bdrv_dirty_bitmap_next_dirty_area.
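
For illustration only (this snippet is not part of the patch), the calling
pattern the signed types enable, assuming a BdrvDirtyBitmap *bitmap is at
hand: the result of one function can feed the other directly, with no cast
and no second variable.

    int64_t offset = 0, bytes = INT64_MAX;

    /* Find the first clear bit, then the dirty area that follows it. */
    offset = bdrv_dirty_bitmap_next_zero(bitmap, offset, bytes);
    if (offset >= 0) {
        bdrv_dirty_bitmap_next_dirty_area(bitmap, &offset, &bytes);
    }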

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/dirty-bitmap.h |  6 +++---
 include/qemu/hbitmap.h       |  5 ++---
 block/dirty-bitmap.c         |  6 +++---
 block/mirror.c               |  4 ++--
 tests/test-hbitmap.c         | 32 ++++++++++++++++----------------
 util/hbitmap.c               | 15 ++++++++++-----
 6 files changed, 36 insertions(+), 32 deletions(-)

diff --git a/include/block/dirty-bitmap.h b/include/block/dirty-bitmap.h
index 4b4b731b46..493c4566d3 100644
--- a/include/block/dirty-bitmap.h
+++ b/include/block/dirty-bitmap.h
@@ -108,10 +108,10 @@ bool bdrv_has_changed_persistent_bitmaps(BlockDriverState *bs);
 BdrvDirtyBitmap *bdrv_dirty_bitmap_next(BlockDriverState *bs,
                                         BdrvDirtyBitmap *bitmap);
 char *bdrv_dirty_bitmap_sha256(const BdrvDirtyBitmap *bitmap, Error **errp);
-int64_t bdrv_dirty_bitmap_next_zero(BdrvDirtyBitmap *bitmap, uint64_t offset,
-                                    uint64_t bytes);
+int64_t bdrv_dirty_bitmap_next_zero(BdrvDirtyBitmap *bitmap, int64_t offset,
+                                    int64_t bytes);
 bool bdrv_dirty_bitmap_next_dirty_area(BdrvDirtyBitmap *bitmap,
-                                       uint64_t *offset, uint64_t *bytes);
+                                       int64_t *offset, int64_t *bytes);
 BdrvDirtyBitmap *bdrv_reclaim_dirty_bitmap_locked(BlockDriverState *bs,
                                                   BdrvDirtyBitmap *bitmap,
                                                   Error **errp);
diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
index 4afbe6292e..5149ac7721 100644
--- a/include/qemu/hbitmap.h
+++ b/include/qemu/hbitmap.h
@@ -309,7 +309,7 @@ unsigned long hbitmap_iter_skip_words(HBitmapIter *hbi);
  * bitmap is looked through. You can use UINT64_MAX as @count to search up to
  * the bitmap end.
  */
-int64_t hbitmap_next_zero(const HBitmap *hb, uint64_t start, uint64_t count);
+int64_t hbitmap_next_zero(const HBitmap *hb, int64_t start, int64_t count);
 
 /* hbitmap_next_dirty_area:
  * @hb: The HBitmap to operate on
@@ -324,8 +324,7 @@ int64_t hbitmap_next_zero(const HBitmap *hb, uint64_t start, uint64_t count);
  * @offset and @bytes appropriately. Otherwise returns false and leaves @offset
  * and @bytes unchanged.
  */
-bool hbitmap_next_dirty_area(const HBitmap *hb, uint64_t *start,
-                             uint64_t *count);
+bool hbitmap_next_dirty_area(const HBitmap *hb, int64_t *start, int64_t *count);
 
 /* hbitmap_create_meta:
  * Create a "meta" hbitmap to track dirtiness of the bits in this HBitmap.
diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index 134e0c9a0c..eeccdb48f5 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -802,14 +802,14 @@ char *bdrv_dirty_bitmap_sha256(const BdrvDirtyBitmap *bitmap, Error **errp)
     return hbitmap_sha256(bitmap->bitmap, errp);
 }
 
-int64_t bdrv_dirty_bitmap_next_zero(BdrvDirtyBitmap *bitmap, uint64_t offset,
-                                    uint64_t bytes)
+int64_t bdrv_dirty_bitmap_next_zero(BdrvDirtyBitmap *bitmap, int64_t offset,
+                                    int64_t bytes)
 {
     return hbitmap_next_zero(bitmap->bitmap, offset, bytes);
 }
 
 bool bdrv_dirty_bitmap_next_dirty_area(BdrvDirtyBitmap *bitmap,
-                                       uint64_t *offset, uint64_t *bytes)
+                                       int64_t *offset, int64_t *bytes)
 {
     return hbitmap_next_dirty_area(bitmap->bitmap, offset, bytes);
 }
diff --git a/block/mirror.c b/block/mirror.c
index 56937ebe92..0d3b078f44 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1198,8 +1198,8 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
                      QEMUIOVector *qiov, int flags)
 {
     QEMUIOVector target_qiov;
-    uint64_t dirty_offset = offset;
-    uint64_t dirty_bytes;
+    int64_t dirty_offset = offset;
+    int64_t dirty_bytes;
 
     if (qiov) {
         qemu_iovec_init(&target_qiov, qiov->niov);
diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
index eed5d288cb..462a6c3f74 100644
--- a/tests/test-hbitmap.c
+++ b/tests/test-hbitmap.c
@@ -946,7 +946,7 @@ static void test_hbitmap_next_zero_check_range(TestHBitmapData *data,
 
 static void test_hbitmap_next_zero_check(TestHBitmapData *data, int64_t start)
 {
-    test_hbitmap_next_zero_check_range(data, start, UINT64_MAX);
+    test_hbitmap_next_zero_check_range(data, start, INT64_MAX);
 }
 
 static void test_hbitmap_next_zero_do(TestHBitmapData *data, int granularity)
@@ -1014,11 +1014,11 @@ static void test_hbitmap_next_zero_after_truncate(TestHBitmapData *data,
 }
 
 static void test_hbitmap_next_dirty_area_check(TestHBitmapData *data,
-                                               uint64_t offset,
-                                               uint64_t count)
+                                               int64_t offset,
+                                               int64_t count)
 {
-    uint64_t off1, off2;
-    uint64_t len1 = 0, len2;
+    int64_t off1, off2;
+    int64_t len1 = 0, len2;
     bool ret1, ret2;
     int64_t end;
 
@@ -1054,24 +1054,24 @@ static void test_hbitmap_next_dirty_area_do(TestHBitmapData *data,
                                             int granularity)
 {
     hbitmap_test_init(data, L3, granularity);
-    test_hbitmap_next_dirty_area_check(data, 0, UINT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, 0, INT64_MAX);
     test_hbitmap_next_dirty_area_check(data, 0, 1);
     test_hbitmap_next_dirty_area_check(data, L3 - 1, 1);
 
     hbitmap_set(data->hb, L2, 1);
     test_hbitmap_next_dirty_area_check(data, 0, 1);
     test_hbitmap_next_dirty_area_check(data, 0, L2);
-    test_hbitmap_next_dirty_area_check(data, 0, UINT64_MAX);
-    test_hbitmap_next_dirty_area_check(data, L2 - 1, UINT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, 0, INT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, L2 - 1, INT64_MAX);
     test_hbitmap_next_dirty_area_check(data, L2 - 1, 1);
     test_hbitmap_next_dirty_area_check(data, L2 - 1, 2);
     test_hbitmap_next_dirty_area_check(data, L2 - 1, 3);
-    test_hbitmap_next_dirty_area_check(data, L2, UINT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, L2, INT64_MAX);
     test_hbitmap_next_dirty_area_check(data, L2, 1);
     test_hbitmap_next_dirty_area_check(data, L2 + 1, 1);
 
     hbitmap_set(data->hb, L2 + 5, L1);
-    test_hbitmap_next_dirty_area_check(data, 0, UINT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, 0, INT64_MAX);
     test_hbitmap_next_dirty_area_check(data, L2 - 2, 8);
     test_hbitmap_next_dirty_area_check(data, L2 + 1, 5);
     test_hbitmap_next_dirty_area_check(data, L2 + 1, 3);
@@ -1083,16 +1083,16 @@ static void test_hbitmap_next_dirty_area_do(TestHBitmapData *data,
     test_hbitmap_next_dirty_area_check(data, L2 + 1, 0);
 
     hbitmap_set(data->hb, L2 * 2, L3 - L2 * 2);
-    test_hbitmap_next_dirty_area_check(data, 0, UINT64_MAX);
-    test_hbitmap_next_dirty_area_check(data, L2, UINT64_MAX);
-    test_hbitmap_next_dirty_area_check(data, L2 + 1, UINT64_MAX);
-    test_hbitmap_next_dirty_area_check(data, L2 + 5 + L1 - 1, UINT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, 0, INT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, L2, INT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, L2 + 1, INT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, L2 + 5 + L1 - 1, INT64_MAX);
     test_hbitmap_next_dirty_area_check(data, L2 + 5 + L1, 5);
     test_hbitmap_next_dirty_area_check(data, L2 * 2 - L1, L1 + 1);
     test_hbitmap_next_dirty_area_check(data, L2 * 2, L2);
 
     hbitmap_set(data->hb, 0, L3);
-    test_hbitmap_next_dirty_area_check(data, 0, UINT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, 0, INT64_MAX);
 }
 
 static void test_hbitmap_next_dirty_area_0(TestHBitmapData *data,
@@ -1119,7 +1119,7 @@ static void test_hbitmap_next_dirty_area_after_truncate(TestHBitmapData *data,
     hbitmap_test_init(data, L1, 0);
     hbitmap_test_truncate_impl(data, L1 * 2);
     hbitmap_set(data->hb, L1 + 1, 1);
-    test_hbitmap_next_dirty_area_check(data, 0, UINT64_MAX);
+    test_hbitmap_next_dirty_area_check(data, 0, INT64_MAX);
 }
 
 int main(int argc, char **argv)
diff --git a/util/hbitmap.c b/util/hbitmap.c
index fd44c897ab..c08751fb50 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -193,7 +193,7 @@ void hbitmap_iter_init(HBitmapIter *hbi, const HBitmap *hb, uint64_t first)
     }
 }
 
-int64_t hbitmap_next_zero(const HBitmap *hb, uint64_t start, uint64_t count)
+int64_t hbitmap_next_zero(const HBitmap *hb, int64_t start, int64_t count)
 {
     size_t pos = (start >> hb->granularity) >> BITS_PER_LEVEL;
     unsigned long *last_lev = hb->levels[HBITMAP_LEVELS - 1];
@@ -202,6 +202,8 @@ int64_t hbitmap_next_zero(const HBitmap *hb, uint64_t start, uint64_t count)
     uint64_t end_bit, sz;
     int64_t res;
 
+    assert(start >= 0 && count >= 0);
+
     if (start >= hb->orig_size || count == 0) {
         return -1;
     }
@@ -244,14 +246,15 @@ int64_t hbitmap_next_zero(const HBitmap *hb, uint64_t start, uint64_t count)
     return res;
 }
 
-bool hbitmap_next_dirty_area(const HBitmap *hb, uint64_t *start,
-                             uint64_t *count)
+bool hbitmap_next_dirty_area(const HBitmap *hb, int64_t *start, int64_t *count)
 {
     HBitmapIter hbi;
     int64_t firt_dirty_off, area_end;
     uint32_t granularity = 1UL << hb->granularity;
     uint64_t end;
 
+    assert(*start >= 0 && *count >= 0);
+
     if (*start >= hb->orig_size || *count == 0) {
         return false;
     }
@@ -704,6 +707,7 @@ HBitmap *hbitmap_alloc(uint64_t size, int granularity)
     HBitmap *hb = g_new0(struct HBitmap, 1);
     unsigned i;
 
+    assert(size <= INT64_MAX);
     hb->orig_size = size;
 
     assert(granularity >= 0 && granularity < 64);
@@ -734,6 +738,7 @@ void hbitmap_truncate(HBitmap *hb, uint64_t size)
     uint64_t num_elements = size;
     uint64_t old;
 
+    assert(size <= INT64_MAX);
     hb->orig_size = size;
 
     /* Size comes in as logical elements, adjust for granularity. */
@@ -791,8 +796,8 @@ bool hbitmap_can_merge(const HBitmap *a, const HBitmap *b)
  */
 static void hbitmap_sparse_merge(HBitmap *dst, const HBitmap *src)
 {
-    uint64_t offset = 0;
-    uint64_t count = src->orig_size;
+    int64_t offset = 0;
+    int64_t count = src->orig_size;
 
     while (hbitmap_next_dirty_area(src, &offset, &count)) {
         hbitmap_set(dst, offset, count);
-- 
2.18.0




* [Qemu-devel] [PATCH 2/4] block/dirty-bitmap: add _next_dirty API
  2019-08-14 14:43 [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
  2019-08-14 14:43 ` [Qemu-devel] [PATCH 1/4] block/dirty-bitmap: switch _next_dirty_area and _next_zero to int64_t Vladimir Sementsov-Ogievskiy
@ 2019-08-14 14:43 ` Vladimir Sementsov-Ogievskiy
  2019-08-14 14:43 ` [Qemu-devel] [PATCH 3/4] block/backup: use bdrv_dirty_bitmap_next_dirty Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-14 14:43 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

We have bdrv_dirty_bitmap_next_zero; let's add the corresponding
bdrv_dirty_bitmap_next_dirty, which in some cases is more convenient to
use than bitmap iterators. It will be used in the following commit.
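
As an illustration only (not part of the patch), consider finding the first
dirty bit in [start, start + count) for an HBitmap *hb; both APIs used below
are the ones touched by this series:

    HBitmapIter hbi;
    int64_t first;

    /* With the iterator, the caller must clamp against the range end itself
     * (this sketch assumes start + count does not overflow). */
    hbitmap_iter_init(&hbi, hb, start);
    first = hbitmap_iter_next(&hbi);
    if (first >= start + count) {
        first = -1;
    }

    /* With the new helper it is a single call with range handling built in. */
    first = hbitmap_next_dirty(hb, start, count);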

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/dirty-bitmap.h |  2 ++
 include/qemu/hbitmap.h       | 13 ++++++++
 block/dirty-bitmap.c         |  6 ++++
 util/hbitmap.c               | 61 +++++++++++++++++++-----------------
 4 files changed, 54 insertions(+), 28 deletions(-)

diff --git a/include/block/dirty-bitmap.h b/include/block/dirty-bitmap.h
index 493c4566d3..d1310ee76e 100644
--- a/include/block/dirty-bitmap.h
+++ b/include/block/dirty-bitmap.h
@@ -108,6 +108,8 @@ bool bdrv_has_changed_persistent_bitmaps(BlockDriverState *bs);
 BdrvDirtyBitmap *bdrv_dirty_bitmap_next(BlockDriverState *bs,
                                         BdrvDirtyBitmap *bitmap);
 char *bdrv_dirty_bitmap_sha256(const BdrvDirtyBitmap *bitmap, Error **errp);
+int64_t bdrv_dirty_bitmap_next_dirty(BdrvDirtyBitmap *bitmap, int64_t offset,
+                                     int64_t bytes);
 int64_t bdrv_dirty_bitmap_next_zero(BdrvDirtyBitmap *bitmap, int64_t offset,
                                     int64_t bytes);
 bool bdrv_dirty_bitmap_next_dirty_area(BdrvDirtyBitmap *bitmap,
diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
index 5149ac7721..80ecabe8e2 100644
--- a/include/qemu/hbitmap.h
+++ b/include/qemu/hbitmap.h
@@ -299,6 +299,19 @@ void hbitmap_iter_init(HBitmapIter *hbi, const HBitmap *hb, uint64_t first);
  */
 unsigned long hbitmap_iter_skip_words(HBitmapIter *hbi);
 
+/*
+ * hbitmap_next_dirty:
+ *
+ * Find next dirty bit within selected range. If not found, return -1.
+ *
+ * @hb: The HBitmap to operate on
+ * @start: The bit to start from.
+ * @count: Number of bits to proceed. If @start+@count > bitmap size, the whole
+ * bitmap is looked through. You can use INT64_MAX as @count to search up to
+ * the bitmap end.
+ */
+int64_t hbitmap_next_dirty(const HBitmap *hb, int64_t start, int64_t count);
+
 /* hbitmap_next_zero:
  *
  * Find next not dirty bit within selected range. If not found, return -1.
diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index eeccdb48f5..efe3a39d49 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -802,6 +802,12 @@ char *bdrv_dirty_bitmap_sha256(const BdrvDirtyBitmap *bitmap, Error **errp)
     return hbitmap_sha256(bitmap->bitmap, errp);
 }
 
+int64_t bdrv_dirty_bitmap_next_dirty(BdrvDirtyBitmap *bitmap, int64_t offset,
+                                     int64_t bytes)
+{
+    return hbitmap_next_dirty(bitmap->bitmap, offset, bytes);
+}
+
 int64_t bdrv_dirty_bitmap_next_zero(BdrvDirtyBitmap *bitmap, int64_t offset,
                                     int64_t bytes)
 {
diff --git a/util/hbitmap.c b/util/hbitmap.c
index c08751fb50..3f2d7451ce 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -193,6 +193,30 @@ void hbitmap_iter_init(HBitmapIter *hbi, const HBitmap *hb, uint64_t first)
     }
 }
 
+int64_t hbitmap_next_dirty(const HBitmap *hb, int64_t start, int64_t count)
+{
+    HBitmapIter hbi;
+    int64_t first_dirty_off;
+    uint64_t end;
+
+    assert(start >= 0 && count >= 0);
+
+    if (start >= hb->orig_size || count == 0) {
+        return -1;
+    }
+
+    end = count > hb->orig_size - start ? hb->orig_size : start + count;
+
+    hbitmap_iter_init(&hbi, hb, start);
+    first_dirty_off = hbitmap_iter_next(&hbi);
+
+    if (first_dirty_off < 0 || first_dirty_off >= end) {
+        return -1;
+    }
+
+    return MAX(start, first_dirty_off);
+}
+
 int64_t hbitmap_next_zero(const HBitmap *hb, int64_t start, int64_t count)
 {
     size_t pos = (start >> hb->granularity) >> BITS_PER_LEVEL;
@@ -248,40 +272,21 @@ int64_t hbitmap_next_zero(const HBitmap *hb, int64_t start, int64_t count)
 
 bool hbitmap_next_dirty_area(const HBitmap *hb, int64_t *start, int64_t *count)
 {
-    HBitmapIter hbi;
-    int64_t firt_dirty_off, area_end;
-    uint32_t granularity = 1UL << hb->granularity;
-    uint64_t end;
-
-    assert(*start >= 0 && *count >= 0);
-
-    if (*start >= hb->orig_size || *count == 0) {
-        return false;
-    }
-
-    end = *count > hb->orig_size - *start ? hb->orig_size : *start + *count;
-
-    hbitmap_iter_init(&hbi, hb, *start);
-    firt_dirty_off = hbitmap_iter_next(&hbi);
+    int64_t area_start, area_end;
 
-    if (firt_dirty_off < 0 || firt_dirty_off >= end) {
+    area_start = hbitmap_next_dirty(hb, *start, *count);
+    if (area_start < 0) {
         return false;
     }
 
-    if (firt_dirty_off + granularity >= end) {
-        area_end = end;
-    } else {
-        area_end = hbitmap_next_zero(hb, firt_dirty_off + granularity,
-                                     end - firt_dirty_off - granularity);
-        if (area_end < 0) {
-            area_end = end;
-        }
+    area_end = QEMU_ALIGN_UP(area_start + 1, 1UL << hb->granularity);
+    area_end = hbitmap_next_zero(hb, area_end, *start + *count - area_end);
+    if (area_end < 0) {
+        area_end = MIN(hb->orig_size, *start + *count);
     }
 
-    if (firt_dirty_off > *start) {
-        *start = firt_dirty_off;
-    }
-    *count = area_end - *start;
+    *start = area_start;
+    *count = area_end - area_start;
 
     return true;
 }
-- 
2.18.0




* [Qemu-devel] [PATCH 3/4] block/backup: use bdrv_dirty_bitmap_next_dirty
  2019-08-14 14:43 [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
  2019-08-14 14:43 ` [Qemu-devel] [PATCH 1/4] block/dirty-bitmap: switch _next_dirty_area and _next_zero to int64_t Vladimir Sementsov-Ogievskiy
  2019-08-14 14:43 ` [Qemu-devel] [PATCH 2/4] block/dirty-bitmap: add _next_dirty API Vladimir Sementsov-Ogievskiy
@ 2019-08-14 14:43 ` Vladimir Sementsov-Ogievskiy
  2019-08-14 14:43 ` [Qemu-devel] [PATCH 4/4] block/backup: fix and improve skipping unallocated in backup_do_cow Vladimir Sementsov-Ogievskiy
  2019-08-14 16:54 ` [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
  4 siblings, 0 replies; 9+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-14 14:43 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

Use a more efficient search for the next dirty byte. The trace point is
dropped so as not to introduce an additional variable and logic solely for
the sake of a trace point.
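
For illustration only (condensed from the diff below, not a verbatim copy of
it): instead of probing every cluster with bdrv_dirty_bitmap_get() and
stepping one cluster_size at a time, the loop now jumps straight to the next
dirty offset.

    /* Old shape: one bitmap lookup per cluster, even across long clean runs. */
    if (!bdrv_dirty_bitmap_get(job->copy_bitmap, start)) {
        start += job->cluster_size;
        continue;   /* already copied */
    }

    /* New shape: a single call skips the whole clean run at once. */
    start = bdrv_dirty_bitmap_next_dirty(job->copy_bitmap, start, end - start);
    if (start < 0) {
        break;      /* nothing dirty left in the requested range */
    }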

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c     | 7 +++----
 block/trace-events | 1 -
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 07d751aea4..9bddea1b59 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -272,10 +272,9 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
     while (start < end) {
         int64_t dirty_end;
 
-        if (!bdrv_dirty_bitmap_get(job->copy_bitmap, start)) {
-            trace_backup_do_cow_skip(job, start);
-            start += job->cluster_size;
-            continue; /* already copied */
+        start = bdrv_dirty_bitmap_next_dirty(job->copy_bitmap, start, end - start);
+        if (start < 0) {
+            break;
         }
 
         if (job->initializing_bitmap) {
diff --git a/block/trace-events b/block/trace-events
index 04209f058d..7e46f7e036 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -40,7 +40,6 @@ mirror_yield_in_flight(void *s, int64_t offset, int in_flight) "s %p offset %" P
 # backup.c
 backup_do_cow_enter(void *job, int64_t start, int64_t offset, uint64_t bytes) "job %p start %" PRId64 " offset %" PRId64 " bytes %" PRIu64
 backup_do_cow_return(void *job, int64_t offset, uint64_t bytes, int ret) "job %p offset %" PRId64 " bytes %" PRIu64 " ret %d"
-backup_do_cow_skip(void *job, int64_t start) "job %p start %"PRId64
 backup_do_cow_skip_range(void *job, int64_t start, uint64_t bytes) "job %p start %"PRId64" bytes %"PRId64
 backup_do_cow_process(void *job, int64_t start) "job %p start %"PRId64
 backup_do_cow_read_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
-- 
2.18.0




* [Qemu-devel] [PATCH 4/4] block/backup: fix and improve skipping unallocated in backup_do_cow
  2019-08-14 14:43 [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2019-08-14 14:43 ` [Qemu-devel] [PATCH 3/4] block/backup: use bdrv_dirty_bitmap_next_dirty Vladimir Sementsov-Ogievskiy
@ 2019-08-14 14:43 ` Vladimir Sementsov-Ogievskiy
  2019-08-14 16:54 ` [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
  4 siblings, 0 replies; 9+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-14 14:43 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

1. There is a bug: after detecting an allocated area of skip_bytes length
we ignore the skip_bytes variable and may end up copying more than
skip_bytes. For example, if only the first cluster of the request is
allocated, the copy may still extend over the unallocated (and therefore
skippable) clusters that follow it.

2. If the requested area is allocated, we call block_status for each
cluster on every loop iteration, even though after the first call we
already know that the whole requested area is allocated.

Solve both problems by clearing all unallocated bytes of the requested
area from the copy bitmap before the copying loop.
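
For illustration only (condensed from the diff below): the allocation check
becomes a single pass over the requested range that clears unallocated bytes
from the copy bitmap before the copy loop runs. The loop then relies on the
bitmap alone, so bdrv_dirty_bitmap_next_zero() naturally stops at regions the
pre-pass has cleared, and block_status is consulted only once per area.

    if (job->initializing_bitmap) {
        int64_t off = start, count;

        while (off < end) {
            /* Jump to the next dirty byte; clean runs need no allocation check. */
            off = bdrv_dirty_bitmap_next_dirty(job->copy_bitmap, off, end - off);
            if (off < 0) {
                break;
            }
            /* Clears the bitmap for any unallocated bytes starting at off and
             * reports how many bytes were examined. */
            backup_bitmap_reset_unallocated(job, off, &count);
            off += count;
        }
    }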

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/backup.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 9bddea1b59..d0815b21c8 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -257,7 +257,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
     int ret = 0;
     int64_t start, end; /* bytes */
     void *bounce_buffer = NULL;
-    int64_t skip_bytes;
 
     qemu_co_rwlock_rdlock(&job->flush_rwlock);
 
@@ -269,6 +268,22 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
     wait_for_overlapping_requests(job, start, end);
     cow_request_begin(&cow_request, job, start, end);
 
+    if (job->initializing_bitmap) {
+        int64_t off = start;
+        int64_t count;
+
+        while (off < end) {
+            off = bdrv_dirty_bitmap_next_dirty(job->copy_bitmap,
+                                               off, end - off);
+            if (off < 0) {
+                break;
+            }
+
+            backup_bitmap_reset_unallocated(job, off, &count);
+            off += count;
+        }
+    }
+
     while (start < end) {
         int64_t dirty_end;
 
@@ -277,15 +292,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
             break;
         }
 
-        if (job->initializing_bitmap) {
-            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
-            if (ret == 0) {
-                trace_backup_do_cow_skip_range(job, start, skip_bytes);
-                start += skip_bytes;
-                continue;
-            }
-        }
-
         dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
                                                 (end - start));
         if (dirty_end < 0) {
-- 
2.18.0




* Re: [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters
  2019-08-14 14:43 [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2019-08-14 14:43 ` [Qemu-devel] [PATCH 4/4] block/backup: fix and improve skipping unallocated in backup_do_cow Vladimir Sementsov-Ogievskiy
@ 2019-08-14 16:54 ` Vladimir Sementsov-Ogievskiy
  2019-08-16 19:11   ` John Snow
  4 siblings, 1 reply; 9+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-14 16:54 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, Denis Lunev, qemu-devel, mreitz, jsnow



On 14 Aug 2019 at 17:43, Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:

Hi all!

There is a bug in not yet merged patch
"block/backup: teach TOP to never copy unallocated regions"
in https://github.com/jnsnow/qemu bitmaps. 04 fixes it. So, I propose
to put 01-03 somewhere before
"block/backup: teach TOP to never copy unallocated regions"
and squash 04 into "block/backup: teach TOP to never copy unallocated regions"

Hmm, don't bother with it. It is simpler to fix the bug in your commit by just using the skip_bytes variable when initializing dirty_end.
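
A hypothetical sketch of that simpler approach (the actual fix is not part of
this thread), assuming the loop from "block/backup: teach TOP to never copy
unallocated regions": after backup_bitmap_reset_unallocated() has reported an
allocated run of skip_bytes, clamp the copy window to it when computing
dirty_end.

    dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start, end - start);
    if (dirty_end < 0) {
        dirty_end = end;
    }
    if (job->initializing_bitmap) {
        /* Hypothetical: never copy past the allocated run just probed. */
        dirty_end = MIN(dirty_end, start + skip_bytes);
    }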


Based-on: https://github.com/jnsnow/qemu bitmaps

Vladimir Sementsov-Ogievskiy (4):
  block/dirty-bitmap: switch _next_dirty_area and _next_zero to int64_t
  block/dirty-bitmap: add _next_dirty API
  block/backup: use bdrv_dirty_bitmap_next_dirty
  block/backup: fix and improve skipping unallocated in backup_do_cow

include/block/dirty-bitmap.h |  8 ++--
include/qemu/hbitmap.h       | 18 +++++++--
block/backup.c               | 33 ++++++++++-------
block/dirty-bitmap.c         | 12 ++++--
block/mirror.c               |  4 +-
tests/test-hbitmap.c         | 32 ++++++++--------
util/hbitmap.c               | 72 ++++++++++++++++++++----------------
block/trace-events           |  1 -
8 files changed, 107 insertions(+), 73 deletions(-)

--
2.18.0




* Re: [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters
  2019-08-14 16:54 ` [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
@ 2019-08-16 19:11   ` John Snow
  2019-08-20  8:28     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 9+ messages in thread
From: John Snow @ 2019-08-16 19:11 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, Denis Lunev, qemu-devel, mreitz



On 8/14/19 12:54 PM, Vladimir Sementsov-Ogievskiy wrote:
> 
> 
> On 14 Aug 2019 at 17:43, Vladimir Sementsov-Ogievskiy
> <vsementsov@virtuozzo.com> wrote:
> 
>     Hi all!
> 
>     There is a bug in not yet merged patch
>     "block/backup: teach TOP to never copy unallocated regions"
>     in https://github.com/jnsnow/qemu bitmaps. 04 fixes it. So, I propose
>     to put 01-03 somewhere before
>     "block/backup: teach TOP to never copy unallocated regions"
>     and squash 04 into "block/backup: teach TOP to never copy
>     unallocated regions" 
> 
> 
> Hmm, don't bother with it. Simpler is fix the bug in your commit by just
> use skip_bytes variable when initializing dirty_end.
> 

OK, just use Max's fix instead of this entire 4-patch series?

--js



* Re: [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters
  2019-08-16 19:11   ` John Snow
@ 2019-08-20  8:28     ` Vladimir Sementsov-Ogievskiy
  2019-08-20 19:30       ` John Snow
  0 siblings, 1 reply; 9+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-08-20  8:28 UTC (permalink / raw)
  To: John Snow, qemu-block; +Cc: fam, kwolf, Denis Lunev, qemu-devel, mreitz

16.08.2019 22:11, John Snow wrote:
> 
> 
> On 8/14/19 12:54 PM, Vladimir Sementsov-Ogievskiy wrote:
>>
>>
>> On 14 Aug 2019 at 17:43, Vladimir Sementsov-Ogievskiy
>> <vsementsov@virtuozzo.com> wrote:
>>
>>      Hi all!
>>
>>      There is a bug in not yet merged patch
>>      "block/backup: teach TOP to never copy unallocated regions"
>>      in https://github.com/jnsnow/qemu bitmaps. 04 fixes it. So, I propose
>>      to put 01-03 somewhere before
>>      "block/backup: teach TOP to never copy unallocated regions"
>>      and squash 04 into "block/backup: teach TOP to never copy
>>      unallocated regions"
>>
>>
>> Hmm, don't bother with it. Simpler is fix the bug in your commit by just
>> use skip_bytes variable when initializing dirty_end.
>>
> 
> OK, just use Max's fix instead of this entire 4 patch series?
> 
> --js
> 

Yes, I think it's OK

-- 
Best regards,
Vladimir


* Re: [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters
  2019-08-20  8:28     ` Vladimir Sementsov-Ogievskiy
@ 2019-08-20 19:30       ` John Snow
  0 siblings, 0 replies; 9+ messages in thread
From: John Snow @ 2019-08-20 19:30 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, mreitz, Denis Lunev, qemu-devel



On 8/20/19 4:28 AM, Vladimir Sementsov-Ogievskiy wrote:
> 16.08.2019 22:11, John Snow wrote:
>>
>>
>> On 8/14/19 12:54 PM, Vladimir Sementsov-Ogievskiy wrote:
>>>
>>>
>>> On 14 Aug 2019 at 17:43, Vladimir Sementsov-Ogievskiy
>>> <vsementsov@virtuozzo.com> wrote:
>>>
>>>      Hi all!
>>>
>>>      There is a bug in not yet merged patch
>>>      "block/backup: teach TOP to never copy unallocated regions"
>>>      in https://github.com/jnsnow/qemu bitmaps. 04 fixes it. So, I propose
>>>      to put 01-03 somewhere before
>>>      "block/backup: teach TOP to never copy unallocated regions"
>>>      and squash 04 into "block/backup: teach TOP to never copy
>>>      unallocated regions"
>>>
>>>
>>> Hmm, don't bother with it. Simpler is fix the bug in your commit by just
>>> use skip_bytes variable when initializing dirty_end.
>>>
>>
>> OK, just use Max's fix instead of this entire 4 patch series?
>>
>> --js
>>
> 
> Yes, I think it's OK
> 

ACK, thank you!



Thread overview: 9+ messages
2019-08-14 14:43 [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
2019-08-14 14:43 ` [Qemu-devel] [PATCH 1/4] block/dirty-bitmap: switch _next_dirty_area and _next_zero to int64_t Vladimir Sementsov-Ogievskiy
2019-08-14 14:43 ` [Qemu-devel] [PATCH 2/4] block/dirty-bitmap: add _next_dirty API Vladimir Sementsov-Ogievskiy
2019-08-14 14:43 ` [Qemu-devel] [PATCH 3/4] block/backup: use bdrv_dirty_bitmap_next_dirty Vladimir Sementsov-Ogievskiy
2019-08-14 14:43 ` [Qemu-devel] [PATCH 4/4] block/backup: fix and improve skipping unallocated in backup_do_cow Vladimir Sementsov-Ogievskiy
2019-08-14 16:54 ` [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters Vladimir Sementsov-Ogievskiy
2019-08-16 19:11   ` John Snow
2019-08-20  8:28     ` Vladimir Sementsov-Ogievskiy
2019-08-20 19:30       ` John Snow
