* [PATCH 00/19] parallels: Add full dirty bitmap support
@ 2023-10-02  8:57 Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 01/19] parallels: Move inactivation code to a separate function Alexander Ivanov
                   ` (18 more replies)
  0 siblings, 19 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

Parallels format driver:
* make some preparations
* add dirty bitmap saving
* make dirty bitmap RW
* fix broken checks
* refactor leak check
* add parallels format support to several tests

Alexander Ivanov (19):
  parallels: Move inactivation code to a separate function
  parallels: Add mark_unused() helper
  parallels: Move host clusters allocation to a separate function
  parallels: Set data_end value in parallels_check_leak()
  parallels: Recreate used bitmap in parallels_check_leak()
  parallels: Add a note about used bitmap in parallels_check_duplicate()
  parallels: Create used bitmap even if checks needed
  parallels: Make mark_used() and mark_unused() global functions
  parallels: Add dirty bitmaps saving
  parallels: Let image extensions work in RW mode
  parallels: Handle L1 entries equal to one
  parallels: Make a loaded dirty bitmap persistent
  parallels: Reverse a conditional in parallels_check_leak() to reduce
    indents
  parallels: Truncate images on the last used cluster
  parallels: Remove unnecessary data_end field
  parallels: Check unused clusters in parallels_check_leak()
  tests: Add parallels images support to test 165
  tests: Turned on 256, 299, 304 and block-status-cache for parallels
    format
  tests: Add parallels format support to image-fleecing

 block/parallels-ext.c                       | 182 +++++++++-
 block/parallels.c                           | 357 ++++++++++++--------
 block/parallels.h                           |  15 +-
 tests/qemu-iotests/165                      |  42 ++-
 tests/qemu-iotests/256                      |   2 +-
 tests/qemu-iotests/299                      |   2 +-
 tests/qemu-iotests/304                      |   2 +-
 tests/qemu-iotests/tests/block-status-cache |   2 +-
 tests/qemu-iotests/tests/image-fleecing     |  13 +-
 9 files changed, 443 insertions(+), 174 deletions(-)

-- 
2.34.1



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 01/19] parallels: Move inactivation code to a separate function
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 02/19] parallels: Add mark_unused() helper Alexander Ivanov
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

We are going to add storage of parallels image extensions and need a
separate function for the inactivation code.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/block/parallels.c b/block/parallels.c
index d026ce9e2f..d5b333d5a4 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -1426,18 +1426,27 @@ fail:
     return ret;
 }
 
+static int parallels_inactivate(BlockDriverState *bs)
+{
+    BDRVParallelsState *s = bs->opaque;
+    int ret;
+
+    s->header->inuse = 0;
+    parallels_update_header(bs);
+
+    /* errors are ignored, so we might as well pass exact=true */
+    ret = bdrv_truncate(bs->file, s->data_end << BDRV_SECTOR_BITS, true,
+                        PREALLOC_MODE_OFF, 0, NULL);
+
+    return ret;
+}
 
 static void parallels_close(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
 
     if ((bs->open_flags & BDRV_O_RDWR) && !(bs->open_flags & BDRV_O_INACTIVE)) {
-        s->header->inuse = 0;
-        parallels_update_header(bs);
-
-        /* errors are ignored, so we might as well pass exact=true */
-        bdrv_truncate(bs->file, s->data_end << BDRV_SECTOR_BITS, true,
-                      PREALLOC_MODE_OFF, 0, NULL);
+        parallels_inactivate(bs);
     }
 
     parallels_free_used_bitmap(bs);
@@ -1477,6 +1486,7 @@ static BlockDriver bdrv_parallels = {
     .bdrv_co_check              = parallels_co_check,
     .bdrv_co_pdiscard           = parallels_co_pdiscard,
     .bdrv_co_pwrite_zeroes      = parallels_co_pwrite_zeroes,
+    .bdrv_inactivate            = parallels_inactivate,
 };
 
 static void bdrv_parallels_init(void)
-- 
2.34.1




* [PATCH 02/19] parallels: Add mark_unused() helper
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 01/19] parallels: Move inactivation code to a separate function Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 03/19] parallels: Move host clusters allocation to a separate function Alexander Ivanov
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

Add a helper to mark areas as unused in the used bitmap (clearing the
corresponding bits).
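
The new helper's contract can be sketched in standalone C (a hypothetical
simplification: a tiny fixed-size bitmap instead of the QEMU/Linux bitmap
helpers, but the same error codes as the patch):

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical miniature used-bitmap: 256 clusters, one bit each. */
#define BMAP_WORDS 4
#define BMAP_BITS  (BMAP_WORDS * 64)

static uint64_t bmap[BMAP_WORDS];

static void mark_used(uint32_t idx, uint32_t count)
{
    for (uint32_t i = idx; i < idx + count; i++) {
        bmap[i / 64] |= 1ULL << (i % 64);
    }
}

/*
 * Sketch of mark_unused(): reject ranges past the end of the bitmap
 * (-E2BIG) and ranges containing an already-clear bit (-EINVAL),
 * otherwise clear the whole range.
 */
static int mark_unused(uint32_t idx, uint32_t count)
{
    if (idx + count > BMAP_BITS) {
        return -E2BIG;
    }
    for (uint32_t i = idx; i < idx + count; i++) {
        if (!(bmap[i / 64] & (1ULL << (i % 64)))) {
            return -EINVAL; /* a zero bit inside the range, as found by
                             * find_next_zero_bit() in the real helper */
        }
    }
    for (uint32_t i = idx; i < idx + count; i++) {
        bmap[i / 64] &= ~(1ULL << (i % 64));
    }
    return 0;
}
```

The real helper derives the cluster index from a host offset via
host_cluster_index(); the sketch takes the index directly.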

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/block/parallels.c b/block/parallels.c
index d5b333d5a4..b5e19ff921 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -195,6 +195,23 @@ static int mark_used(BlockDriverState *bs, unsigned long *bitmap,
     return 0;
 }
 
+static int mark_unused(BlockDriverState *bs, unsigned long *bitmap,
+                       uint32_t bitmap_size, int64_t off, uint32_t count)
+{
+    BDRVParallelsState *s = bs->opaque;
+    uint32_t cluster_index = host_cluster_index(s, off);
+    unsigned long next_unused;
+    if (cluster_index + count > bitmap_size) {
+        return -E2BIG;
+    }
+    next_unused = find_next_zero_bit(bitmap, bitmap_size, cluster_index);
+    if (next_unused < cluster_index + count) {
+        return -EINVAL;
+    }
+    bitmap_clear(bitmap, cluster_index, count);
+    return 0;
+}
+
 /*
  * Collect used bitmap. The image can contain errors, we should fill the
  * bitmap anyway, as much as we can. This information will be used for
-- 
2.34.1




* [PATCH 03/19] parallels: Move host clusters allocation to a separate function
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 01/19] parallels: Move inactivation code to a separate function Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 02/19] parallels: Add mark_unused() helper Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 04/19] parallels: Set data_end value in parallels_check_leak() Alexander Ivanov
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

For parallels image extensions we need to allocate host clusters
without any connection to the BAT. Move the host cluster allocation
code to parallels_allocate_host_clusters().
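
The strategy being factored out, a first-fit search in the used bitmap
that shrinks the request to the free run actually found, can be sketched
on a toy scale (hypothetical 16-cluster image, offsets counted in
clusters rather than bytes, no file growth or preallocation):

```c
#include <stdint.h>

#define N_CLUSTERS 16

static uint64_t used;                /* 1 bit per host cluster */
static const int64_t data_start = 4; /* first data cluster of the image */

/*
 * Sketch of parallels_allocate_host_clusters(): find the first free bit,
 * cap *clusters at the length of the contiguous free run, mark the run
 * used and return its host offset (in cluster units here).
 */
static int64_t allocate_host_clusters(int64_t *clusters)
{
    int first_free = 0;
    while (first_free < N_CLUSTERS && (used & (1ULL << first_free))) {
        first_free++;
    }
    if (first_free == N_CLUSTERS) {
        return -1; /* the real code grows the file and the bitmap instead */
    }

    int next_used = first_free;
    while (next_used < N_CLUSTERS && !(used & (1ULL << next_used))) {
        next_used++;
    }
    if (next_used - first_free < *clusters) {
        *clusters = next_used - first_free; /* shorter run than requested */
    }

    for (int64_t i = 0; i < *clusters; i++) {
        used |= 1ULL << (first_free + i);
    }
    return data_start + first_free;
}
```

The caller in the patch (allocate_clusters()) then recomputes *pnum from
the possibly reduced cluster count, which is why *clusters is an in/out
parameter.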

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 124 ++++++++++++++++++++++++----------------------
 block/parallels.h |   4 ++
 2 files changed, 69 insertions(+), 59 deletions(-)

diff --git a/block/parallels.c b/block/parallels.c
index b5e19ff921..3c69afa04b 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -264,58 +264,29 @@ static void parallels_free_used_bitmap(BlockDriverState *bs)
     g_free(s->used_bmap);
 }
 
-static int64_t coroutine_fn GRAPH_RDLOCK
-allocate_clusters(BlockDriverState *bs, int64_t sector_num,
-                  int nb_sectors, int *pnum)
+int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
+                                         int64_t *clusters)
 {
-    int ret = 0;
     BDRVParallelsState *s = bs->opaque;
-    int64_t i, pos, idx, to_allocate, first_free, host_off;
-
-    pos = block_status(s, sector_num, nb_sectors, pnum);
-    if (pos > 0) {
-        return pos;
-    }
-
-    idx = sector_num / s->tracks;
-    to_allocate = DIV_ROUND_UP(sector_num + *pnum, s->tracks) - idx;
-
-    /*
-     * This function is called only by parallels_co_writev(), which will never
-     * pass a sector_num at or beyond the end of the image (because the block
-     * layer never passes such a sector_num to that function). Therefore, idx
-     * is always below s->bat_size.
-     * block_status() will limit *pnum so that sector_num + *pnum will not
-     * exceed the image end. Therefore, idx + to_allocate cannot exceed
-     * s->bat_size.
-     * Note that s->bat_size is an unsigned int, therefore idx + to_allocate
-     * will always fit into a uint32_t.
-     */
-    assert(idx < s->bat_size && idx + to_allocate <= s->bat_size);
+    int64_t first_free, next_used, host_off, bytes, prealloc_clusters;
+    uint32_t new_usedsize;
+    int ret = 0;
 
     first_free = find_first_zero_bit(s->used_bmap, s->used_bmap_size);
     if (first_free == s->used_bmap_size) {
-        uint32_t new_usedsize;
-        int64_t bytes = to_allocate * s->cluster_size;
-        bytes += s->prealloc_size * BDRV_SECTOR_SIZE;
-
         host_off = s->data_end * BDRV_SECTOR_SIZE;
+        prealloc_clusters = *clusters + s->prealloc_size / s->tracks;
+        bytes = prealloc_clusters * s->cluster_size;
 
-        /*
-         * We require the expanded size to read back as zero. If the
-         * user permitted truncation, we try that; but if it fails, we
-         * force the safer-but-slower fallocate.
-         */
         if (s->prealloc_mode == PRL_PREALLOC_MODE_TRUNCATE) {
-            ret = bdrv_co_truncate(bs->file, host_off + bytes,
-                                   false, PREALLOC_MODE_OFF,
-                                   BDRV_REQ_ZERO_WRITE, NULL);
+            ret = bdrv_truncate(bs->file, host_off + bytes, false,
+                                PREALLOC_MODE_OFF, BDRV_REQ_ZERO_WRITE, NULL);
             if (ret == -ENOTSUP) {
                 s->prealloc_mode = PRL_PREALLOC_MODE_FALLOCATE;
             }
         }
         if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
-            ret = bdrv_co_pwrite_zeroes(bs->file, host_off, bytes, 0);
+            ret = bdrv_pwrite_zeroes(bs->file, host_off, bytes, 0);
         }
         if (ret < 0) {
             return ret;
@@ -325,15 +296,15 @@ allocate_clusters(BlockDriverState *bs, int64_t sector_num,
         s->used_bmap = bitmap_zero_extend(s->used_bmap, s->used_bmap_size,
                                           new_usedsize);
         s->used_bmap_size = new_usedsize;
+        if (host_off + bytes > s->data_end * BDRV_SECTOR_SIZE) {
+            s->data_end = (host_off + bytes) / BDRV_SECTOR_SIZE;
+        }
     } else {
-        int64_t next_used;
         next_used = find_next_bit(s->used_bmap, s->used_bmap_size, first_free);
 
         /* Not enough continuous clusters in the middle, adjust the size */
-        if (next_used - first_free < to_allocate) {
-            to_allocate = next_used - first_free;
-            *pnum = (idx + to_allocate) * s->tracks - sector_num;
-        }
+        *clusters = MIN(*clusters, next_used - first_free);
+        bytes = *clusters * s->cluster_size;
 
         host_off = s->data_start * BDRV_SECTOR_SIZE;
         host_off += first_free * s->cluster_size;
@@ -345,14 +316,58 @@ allocate_clusters(BlockDriverState *bs, int64_t sector_num,
          */
         if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE &&
                 host_off < s->data_end * BDRV_SECTOR_SIZE) {
-            ret = bdrv_co_pwrite_zeroes(bs->file, host_off,
-                                        s->cluster_size * to_allocate, 0);
+            ret = bdrv_pwrite_zeroes(bs->file, host_off, bytes, 0);
             if (ret < 0) {
                 return ret;
             }
         }
     }
 
+    ret = mark_used(bs, s->used_bmap, s->used_bmap_size, host_off, *clusters);
+    if (ret < 0) {
+        /* Image consistency is broken. Alarm! */
+        return ret;
+    }
+
+    return host_off;
+}
+
+static int64_t coroutine_fn GRAPH_RDLOCK
+allocate_clusters(BlockDriverState *bs, int64_t sector_num,
+                  int nb_sectors, int *pnum)
+{
+    int ret = 0;
+    BDRVParallelsState *s = bs->opaque;
+    int64_t i, pos, idx, to_allocate, host_off;
+
+    pos = block_status(s, sector_num, nb_sectors, pnum);
+    if (pos > 0) {
+        return pos;
+    }
+
+    idx = sector_num / s->tracks;
+    to_allocate = DIV_ROUND_UP(sector_num + *pnum, s->tracks) - idx;
+
+    /*
+     * This function is called only by parallels_co_writev(), which will never
+     * pass a sector_num at or beyond the end of the image (because the block
+     * layer never passes such a sector_num to that function). Therefore, idx
+     * is always below s->bat_size.
+     * block_status() will limit *pnum so that sector_num + *pnum will not
+     * exceed the image end. Therefore, idx + to_allocate cannot exceed
+     * s->bat_size.
+     * Note that s->bat_size is an unsigned int, therefore idx + to_allocate
+     * will always fit into a uint32_t.
+     */
+    assert(idx < s->bat_size && idx + to_allocate <= s->bat_size);
+
+    host_off = parallels_allocate_host_clusters(bs, &to_allocate);
+    if (host_off < 0) {
+        return host_off;
+    }
+
+    *pnum = MIN(*pnum, (idx + to_allocate) * s->tracks - sector_num);
+
     /*
      * Try to read from backing to fill empty clusters
      * FIXME: 1. previous write_zeroes may be redundant
@@ -369,32 +384,23 @@ allocate_clusters(BlockDriverState *bs, int64_t sector_num,
 
         ret = bdrv_co_pread(bs->backing, idx * s->tracks * BDRV_SECTOR_SIZE,
                             nb_cow_bytes, buf, 0);
-        if (ret < 0) {
-            qemu_vfree(buf);
-            return ret;
+        if (ret == 0) {
+            ret = bdrv_co_pwrite(bs->file, host_off, nb_cow_bytes, buf, 0);
         }
 
-        ret = bdrv_co_pwrite(bs->file, s->data_end * BDRV_SECTOR_SIZE,
-                             nb_cow_bytes, buf, 0);
         qemu_vfree(buf);
         if (ret < 0) {
+            mark_unused(bs, s->used_bmap, s->used_bmap_size,
+                        host_off, to_allocate);
             return ret;
         }
     }
 
-    ret = mark_used(bs, s->used_bmap, s->used_bmap_size, host_off, to_allocate);
-    if (ret < 0) {
-        /* Image consistency is broken. Alarm! */
-        return ret;
-    }
     for (i = 0; i < to_allocate; i++) {
         parallels_set_bat_entry(s, idx + i,
                 host_off / BDRV_SECTOR_SIZE / s->off_multiplier);
         host_off += s->cluster_size;
     }
-    if (host_off > s->data_end * BDRV_SECTOR_SIZE) {
-        s->data_end = host_off / BDRV_SECTOR_SIZE;
-    }
 
     return bat2sect(s, idx) + sector_num % s->tracks;
 }
diff --git a/block/parallels.h b/block/parallels.h
index 6b199443cf..3e4f397502 100644
--- a/block/parallels.h
+++ b/block/parallels.h
@@ -90,7 +90,11 @@ typedef struct BDRVParallelsState {
     Error *migration_blocker;
 } BDRVParallelsState;
 
+int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
+                                         int64_t *clusters);
+
 int parallels_read_format_extension(BlockDriverState *bs,
                                     int64_t ext_off, Error **errp);
 
+
 #endif
-- 
2.34.1




* [PATCH 04/19] parallels: Set data_end value in parallels_check_leak()
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (2 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 03/19] parallels: Move host clusters allocation to a separate function Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 05/19] parallels: Recreate used bitmap " Alexander Ivanov
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

In parallels_check_leak() we change the file size but do not update the
data_end field of the BDRVParallelsState structure. Fix it.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/parallels.c b/block/parallels.c
index 3c69afa04b..e8d6d30fa5 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -803,6 +803,7 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
                 res->check_errors++;
                 return ret;
             }
+            s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
             if (explicit) {
                 res->leaks_fixed += count;
             }
-- 
2.34.1




* [PATCH 05/19] parallels: Recreate used bitmap in parallels_check_leak()
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (3 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 04/19] parallels: Set data_end value in parallels_check_leak() Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 06/19] parallels: Add a note about used bitmap in parallels_check_duplicate() Alexander Ivanov
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

In parallels_check_leak() the file can be truncated. In this case the
used bitmap no longer matches the file. Recreate the bitmap after the
file truncation.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/block/parallels.c b/block/parallels.c
index e8d6d30fa5..1a4f8d1eb2 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -804,6 +804,14 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
                 return ret;
             }
             s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
+
+            parallels_free_used_bitmap(bs);
+            ret = parallels_fill_used_bitmap(bs);
+            if (ret == -ENOMEM) {
+                res->check_errors++;
+                return ret;
+            }
+
             if (explicit) {
                 res->leaks_fixed += count;
             }
-- 
2.34.1




* [PATCH 06/19] parallels: Add a note about used bitmap in parallels_check_duplicate()
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (4 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 05/19] parallels: Recreate used bitmap " Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 07/19] parallels: Create used bitmap even if checks needed Alexander Ivanov
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

In parallels_check_duplicate() we use a bitmap for duplicate detection.
This bitmap is not related to the used_bmap field in BDRVParallelsState.
Add a comment about it to avoid confusion.
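
The scheme the comment documents can be sketched separately (plain C; a
single 64-bit word plays the local bitmap, so cluster indices below 64
are assumed, and a zero BAT entry means "unallocated"):

```c
#include <stdint.h>

/*
 * Sketch of the duplicate scan in parallels_check_duplicate(): the local
 * bitmap is unrelated to BDRVParallelsState.used_bmap; a set bit only
 * means "some earlier BAT entry already points to this cluster".
 */
static int count_duplicates(const uint32_t *bat, int n_entries)
{
    uint64_t seen = 0;
    int dups = 0;

    for (int i = 0; i < n_entries; i++) {
        if (bat[i] == 0) {
            continue;                 /* entry not allocated */
        }
        if (seen & (1ULL << bat[i])) {
            dups++;                   /* second entry for the same cluster */
        } else {
            seen |= 1ULL << bat[i];
        }
    }
    return dups;
}
```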

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/block/parallels.c b/block/parallels.c
index 1a4f8d1eb2..b735ba8390 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -834,7 +834,10 @@ parallels_check_duplicate(BlockDriverState *bs, BdrvCheckResult *res,
     bool fixed = false;
 
     /*
-     * Create a bitmap of used clusters.
+     * Create a bitmap of used clusters. Please note that this bitmap is not
+     * related to used_bmap field in BDRVParallelsState and is created only for
+     * local usage.
+     *
      * If a bit is set, there is a BAT entry pointing to this cluster.
      * Loop through the BAT entries, check bits relevant to an entry offset.
      * If bit is set, this entry is duplicated. Otherwise set the bit.
-- 
2.34.1




* [PATCH 07/19] parallels: Create used bitmap even if checks needed
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (5 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 06/19] parallels: Add a note about used bitmap in parallels_check_duplicate() Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 08/19] parallels: Make mark_used() and mark_unused() global functions Alexander Ivanov
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

All the checks have been fixed to work with the used bitmap. Create the
used bitmap in parallels_open() even if need_check is true.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/block/parallels.c b/block/parallels.c
index b735ba8390..8208c0bc1a 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -1416,13 +1416,11 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
     }
     need_check = need_check || s->data_end > file_nb_sectors;
 
-    if (!need_check) {
-        ret = parallels_fill_used_bitmap(bs);
-        if (ret == -ENOMEM) {
-            goto fail;
-        }
-        need_check = need_check || ret < 0; /* These are correctable errors */
+    ret = parallels_fill_used_bitmap(bs);
+    if (ret == -ENOMEM) {
+        goto fail;
     }
+    need_check = need_check || ret < 0; /* These are correctable errors */
 
     /*
      * We don't repair the image here if it's opened for checks. Also we don't
-- 
2.34.1




* [PATCH 08/19] parallels: Make mark_used() and mark_unused() global functions
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (6 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 07/19] parallels: Create used bitmap even if checks needed Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 09/19] parallels: Add dirty bitmaps saving Alexander Ivanov
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

We will need these functions in parallels-ext.c too. Make them global
functions, parallels_mark_used() and parallels_mark_unused().

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 22 ++++++++++++----------
 block/parallels.h |  5 +++++
 2 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/block/parallels.c b/block/parallels.c
index 8208c0bc1a..adb43a7069 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -178,8 +178,8 @@ static void parallels_set_bat_entry(BDRVParallelsState *s,
     bitmap_set(s->bat_dirty_bmap, bat_entry_off(index) / s->bat_dirty_block, 1);
 }
 
-static int mark_used(BlockDriverState *bs, unsigned long *bitmap,
-                     uint32_t bitmap_size, int64_t off, uint32_t count)
+int parallels_mark_used(BlockDriverState *bs, unsigned long *bitmap,
+                        uint32_t bitmap_size, int64_t off, uint32_t count)
 {
     BDRVParallelsState *s = bs->opaque;
     uint32_t cluster_index = host_cluster_index(s, off);
@@ -195,8 +195,8 @@ static int mark_used(BlockDriverState *bs, unsigned long *bitmap,
     return 0;
 }
 
-static int mark_unused(BlockDriverState *bs, unsigned long *bitmap,
-                       uint32_t bitmap_size, int64_t off, uint32_t count)
+int parallels_mark_unused(BlockDriverState *bs, unsigned long *bitmap,
+                          uint32_t bitmap_size, int64_t off, uint32_t count)
 {
     BDRVParallelsState *s = bs->opaque;
     uint32_t cluster_index = host_cluster_index(s, off);
@@ -249,7 +249,8 @@ static int parallels_fill_used_bitmap(BlockDriverState *bs)
             continue;
         }
 
-        err2 = mark_used(bs, s->used_bmap, s->used_bmap_size, host_off, 1);
+        err2 = parallels_mark_used(bs, s->used_bmap, s->used_bmap_size,
+                                   host_off, 1);
         if (err2 < 0 && err == 0) {
             err = err2;
         }
@@ -323,7 +324,8 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
         }
     }
 
-    ret = mark_used(bs, s->used_bmap, s->used_bmap_size, host_off, *clusters);
+    ret = parallels_mark_used(bs, s->used_bmap, s->used_bmap_size,
+                              host_off, *clusters);
     if (ret < 0) {
         /* Image consistency is broken. Alarm! */
         return ret;
@@ -390,8 +392,8 @@ allocate_clusters(BlockDriverState *bs, int64_t sector_num,
 
         qemu_vfree(buf);
         if (ret < 0) {
-            mark_unused(bs, s->used_bmap, s->used_bmap_size,
-                        host_off, to_allocate);
+            parallels_mark_unused(bs, s->used_bmap, s->used_bmap_size,
+                                  host_off, to_allocate);
             return ret;
         }
     }
@@ -865,7 +867,7 @@ parallels_check_duplicate(BlockDriverState *bs, BdrvCheckResult *res,
             continue;
         }
 
-        ret = mark_used(bs, bitmap, bitmap_size, host_off, 1);
+        ret = parallels_mark_used(bs, bitmap, bitmap_size, host_off, 1);
         assert(ret != -E2BIG);
         if (ret == 0) {
             continue;
@@ -925,7 +927,7 @@ parallels_check_duplicate(BlockDriverState *bs, BdrvCheckResult *res,
          * considered, and the bitmap size doesn't change. This specifically
          * means that -E2BIG is OK.
          */
-        ret = mark_used(bs, bitmap, bitmap_size, host_off, 1);
+        ret = parallels_mark_used(bs, bitmap, bitmap_size, host_off, 1);
         if (ret == -EBUSY) {
             res->check_errors++;
             goto out_repair_bat;
diff --git a/block/parallels.h b/block/parallels.h
index 3e4f397502..4e7aa6b80f 100644
--- a/block/parallels.h
+++ b/block/parallels.h
@@ -90,6 +90,11 @@ typedef struct BDRVParallelsState {
     Error *migration_blocker;
 } BDRVParallelsState;
 
+int parallels_mark_used(BlockDriverState *bs, unsigned long *bitmap,
+                        uint32_t bitmap_size, int64_t off, uint32_t count);
+int parallels_mark_unused(BlockDriverState *bs, unsigned long *bitmap,
+                          uint32_t bitmap_size, int64_t off, uint32_t count);
+
 int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
                                          int64_t *clusters);
 
-- 
2.34.1




* [PATCH 09/19] parallels: Add dirty bitmaps saving
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (7 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 08/19] parallels: Make mark_used() and mark_unused() global functions Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 10/19] parallels: Let image extensions work in RW mode Alexander Ivanov
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

Dirty bitmaps can currently be loaded, but there is no support for
saving them. Add code to store dirty bitmaps.
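
Per saved bitmap, the extension written by this patch costs one feature
header, one dirty-bitmap header and an L1 table of 8-byte cluster
offsets. A rough model of that arithmetic (hypothetical fh_size/bh_size
stand in for sizeof(ParallelsFeatureHeader) and
sizeof(ParallelsDirtyBitmapFeature); the real serialized size comes from
bdrv_dirty_bitmap_serialization_size()):

```c
#include <stdint.h>

#define DIV_ROUND_UP(a, b) (((a) + (b) - 1) / (b))

/*
 * One serialized bit covers `granularity` bytes of the virtual disk;
 * the serialized bitmap is split into cluster-sized chunks addressed
 * by an L1 table of 8-byte entries.
 */
static int64_t bitmap_buf_used(int64_t bm_size, int64_t granularity,
                               int64_t cluster_size,
                               int64_t fh_size, int64_t bh_size)
{
    int64_t ser_bits = DIV_ROUND_UP(bm_size, granularity);
    int64_t ser_size = DIV_ROUND_UP(ser_bits, 8);            /* in bytes */
    int64_t l1_size  = DIV_ROUND_UP(ser_size, cluster_size); /* clusters */

    return l1_size * 8 + fh_size + bh_size;
}
```

parallels_save_bitmap() additionally keeps sizeof(*fh) bytes of the
extension cluster in reserve so that a terminating feature header always
fits.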

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels-ext.c | 167 ++++++++++++++++++++++++++++++++++++++++++
 block/parallels.c     |  16 +++-
 block/parallels.h     |   5 ++
 3 files changed, 186 insertions(+), 2 deletions(-)

diff --git a/block/parallels-ext.c b/block/parallels-ext.c
index 8a109f005a..0a632a2331 100644
--- a/block/parallels-ext.c
+++ b/block/parallels-ext.c
@@ -24,6 +24,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/error-report.h"
 #include "qapi/error.h"
 #include "block/block-io.h"
 #include "block/block_int.h"
@@ -301,3 +302,169 @@ out:
 
     return ret;
 }
+
+static void parallels_save_bitmap(BlockDriverState *bs, BdrvDirtyBitmap *bitmap,
+                                  uint8_t **buf, int *buf_size)
+{
+    BDRVParallelsState *s = bs->opaque;
+    ParallelsFeatureHeader *fh;
+    ParallelsDirtyBitmapFeature *bh;
+    uint64_t *l1_table, l1_size, granularity, limit;
+    int64_t bm_size, ser_size, offset, buf_used;
+    int64_t alloc_size = 1;
+    const char *name;
+    uint8_t *bm_buf;
+    QemuUUID uuid;
+    int ret = 0;
+
+    if (!bdrv_dirty_bitmap_get_persistence(bitmap) ||
+        bdrv_dirty_bitmap_inconsistent(bitmap)) {
+        return;
+    }
+
+    bm_size = bdrv_dirty_bitmap_size(bitmap);
+    granularity = bdrv_dirty_bitmap_granularity(bitmap);
+    limit = bdrv_dirty_bitmap_serialization_coverage(s->cluster_size, bitmap);
+    ser_size = bdrv_dirty_bitmap_serialization_size(bitmap, 0, bm_size);
+    l1_size = DIV_ROUND_UP(ser_size, s->cluster_size);
+
+    buf_used = l1_size * 8 + sizeof(*fh) + sizeof(*bh);
+    /* Check if there is enough space for the final section */
+    if (*buf_size - buf_used < sizeof(*fh)) {
+        return;
+    }
+
+    name = bdrv_dirty_bitmap_name(bitmap);
+    ret = qemu_uuid_parse(name, &uuid);
+    if (ret < 0) {
+        error_report("Can't save dirty bitmap: ID parsing error: '%s'", name);
+        return;
+    }
+
+    fh = (ParallelsFeatureHeader *)*buf;
+    bh = (ParallelsDirtyBitmapFeature *)(*buf + sizeof(*fh));
+    l1_table = (uint64_t *)((uint8_t *)bh + sizeof(*bh));
+
+    fh->magic = cpu_to_le64(PARALLELS_DIRTY_BITMAP_FEATURE_MAGIC);
+    fh->data_size = cpu_to_le32(l1_size * 8 + sizeof(*bh));
+
+    bh->l1_size = cpu_to_le32(l1_size);
+    bh->size = cpu_to_le64(bm_size >> BDRV_SECTOR_BITS);
+    bh->granularity = cpu_to_le32(granularity >> BDRV_SECTOR_BITS);
+    memcpy(bh->id, &uuid, sizeof(uuid));
+
+    bm_buf = qemu_blockalign(bs, s->cluster_size);
+
+    offset = 0;
+    while ((offset = bdrv_dirty_bitmap_next_dirty(bitmap, offset, bm_size)) >= 0) {
+        uint64_t idx = offset / limit;
+        int64_t cluster_off, end, write_size;
+
+        offset = QEMU_ALIGN_DOWN(offset, limit);
+        end = MIN(bm_size, offset + limit);
+        write_size = bdrv_dirty_bitmap_serialization_size(bitmap, offset,
+                                                          end - offset);
+        assert(write_size <= s->cluster_size);
+
+        bdrv_dirty_bitmap_serialize_part(bitmap, bm_buf, offset, end - offset);
+        if (write_size < s->cluster_size) {
+            memset(bm_buf + write_size, 0, s->cluster_size - write_size);
+        }
+
+        cluster_off = parallels_allocate_host_clusters(bs, &alloc_size);
+        if (cluster_off <= 0) {
+            goto end;
+        }
+
+        ret = bdrv_pwrite(bs->file, cluster_off, s->cluster_size, bm_buf, 0);
+        if (ret < 0) {
+            memset(&fh->magic, 0, sizeof(fh->magic));
+            parallels_mark_unused(bs, s->used_bmap, s->used_bmap_size,
+                                  cluster_off, 1);
+            goto end;
+        }
+
+        l1_table[idx] = cpu_to_le64(cluster_off >> BDRV_SECTOR_BITS);
+        offset = end;
+    }
+
+    *buf_size -= buf_used;
+    *buf += buf_used;
+
+end:
+    qemu_vfree(bm_buf);
+}
+
+void parallels_store_persistent_dirty_bitmaps(BlockDriverState *bs,
+                                              Error **errp)
+{
+    BDRVParallelsState *s = bs->opaque;
+    BdrvDirtyBitmap *bitmap;
+    ParallelsFormatExtensionHeader *eh;
+    int remaining = s->cluster_size;
+    uint8_t *buf, *pos;
+    int64_t header_off, alloc_size = 1;
+    g_autofree uint8_t *hash = NULL;
+    size_t hash_len = 0;
+    int ret;
+
+    s->header->ext_off = 0;
+
+    if (!bdrv_has_named_bitmaps(bs)) {
+        return;
+    }
+
+    buf = qemu_blockalign0(bs, s->cluster_size);
+
+    eh = (ParallelsFormatExtensionHeader *)buf;
+    pos = buf + sizeof(*eh);
+
+    eh->magic = cpu_to_le64(PARALLELS_FORMAT_EXTENSION_MAGIC);
+
+    FOR_EACH_DIRTY_BITMAP(bs, bitmap) {
+        parallels_save_bitmap(bs, bitmap, &pos, &remaining);
+    }
+
+    header_off = parallels_allocate_host_clusters(bs, &alloc_size);
+    if (header_off < 0) {
+        error_report("Can't save dirty bitmap: cluster allocation error");
+        ret = header_off;
+        goto end;
+    }
+
+    ret = qcrypto_hash_bytes(QCRYPTO_HASH_ALG_MD5,
+                             (const char *)(buf + sizeof(*eh)),
+                             s->cluster_size - sizeof(*eh),
+                             &hash, &hash_len, errp);
+    if (ret < 0 || hash_len != sizeof(eh->check_sum)) {
+        error_report("Can't save dirty bitmap: hash error");
+        ret = -EINVAL;
+        goto end;
+    }
+    memcpy(eh->check_sum, hash, hash_len);
+
+    ret = bdrv_pwrite(bs->file, header_off, s->cluster_size, buf, 0);
+    if (ret < 0) {
+        error_report("Can't save dirty bitmap: IO error");
+        parallels_mark_unused(bs, s->used_bmap, s->used_bmap_size,
+                              header_off, 1);
+        goto end;
+    }
+
+    s->header->ext_off = cpu_to_le64(header_off / BDRV_SECTOR_SIZE);
+end:
+    qemu_vfree(buf);
+}
+
+bool coroutine_fn parallels_co_can_store_new_dirty_bitmap(BlockDriverState *bs,
+                                                          const char *name,
+                                                          uint32_t granularity,
+                                                          Error **errp)
+{
+    if (bdrv_find_dirty_bitmap(bs, name)) {
+        error_setg(errp, "Bitmap already exists: %s", name);
+        return false;
+    }
+
+    return true;
+}
diff --git a/block/parallels.c b/block/parallels.c
index adb43a7069..d2a45e0c04 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -1464,14 +1464,25 @@ fail:
 static int parallels_inactivate(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
+    Error *err = NULL;
     int ret;
 
+    parallels_store_persistent_dirty_bitmaps(bs, &err);
+    if (err != NULL) {
+        error_reportf_err(err, "Lost persistent bitmaps during "
+                          "inactivation of node '%s': ",
+                          bdrv_get_device_or_node_name(bs));
+    }
+
     s->header->inuse = 0;
     parallels_update_header(bs);
 
     /* errors are ignored, so we might as well pass exact=true */
-    ret = bdrv_truncate(bs->file, s->data_end << BDRV_SECTOR_BITS, true,
-                        PREALLOC_MODE_OFF, 0, NULL);
+    ret = bdrv_truncate(bs->file, s->data_end << BDRV_SECTOR_BITS,
+                        true, PREALLOC_MODE_OFF, 0, NULL);
+    if (ret < 0) {
+        error_report("Failed to truncate image: %s", strerror(-ret));
+    }
 
     return ret;
 }
@@ -1522,6 +1533,7 @@ static BlockDriver bdrv_parallels = {
     .bdrv_co_pdiscard           = parallels_co_pdiscard,
     .bdrv_co_pwrite_zeroes      = parallels_co_pwrite_zeroes,
     .bdrv_inactivate            = parallels_inactivate,
+    .bdrv_co_can_store_new_dirty_bitmap = parallels_co_can_store_new_dirty_bitmap,
 };
 
 static void bdrv_parallels_init(void)
diff --git a/block/parallels.h b/block/parallels.h
index 4e7aa6b80f..18b4f8068e 100644
--- a/block/parallels.h
+++ b/block/parallels.h
@@ -100,6 +100,11 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
 
 int parallels_read_format_extension(BlockDriverState *bs,
                                     int64_t ext_off, Error **errp);
+void parallels_store_persistent_dirty_bitmaps(BlockDriverState *bs,
+                                              Error **errp);
+bool coroutine_fn
+parallels_co_can_store_new_dirty_bitmap(BlockDriverState *bs, const char *name,
+                                        uint32_t granularity, Error **errp);
 
 
 #endif
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 10/19] parallels: Let image extensions work in RW mode
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (8 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 09/19] parallels: Add dirty bitmaps saving Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 11/19] parallels: Handle L1 entries equal to one Alexander Ivanov
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

Now that we support saving extensions, we can allow working with them in
read-write mode.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels-ext.c |  4 ----
 block/parallels.c     | 17 ++++-------------
 2 files changed, 4 insertions(+), 17 deletions(-)

diff --git a/block/parallels-ext.c b/block/parallels-ext.c
index 0a632a2331..a0de4f1b07 100644
--- a/block/parallels-ext.c
+++ b/block/parallels-ext.c
@@ -177,10 +177,6 @@ static BdrvDirtyBitmap *parallels_load_bitmap(BlockDriverState *bs,
         return NULL;
     }
 
-    /* We support format extension only for RO parallels images. */
-    assert(!(bs->open_flags & BDRV_O_RDWR));
-    bdrv_dirty_bitmap_set_readonly(bitmap, true);
-
     return bitmap;
 }
 
diff --git a/block/parallels.c b/block/parallels.c
index d2a45e0c04..772a8ca1eb 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -1371,19 +1371,10 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
     }
 
     if (ph.ext_off) {
-        if (flags & BDRV_O_RDWR) {
-            /*
-             * It's unsafe to open image RW if there is an extension (as we
-             * don't support it). But parallels driver in QEMU historically
-             * ignores the extension, so print warning and don't care.
-             */
-            warn_report("Format Extension ignored in RW mode");
-        } else {
-            ret = parallels_read_format_extension(
-                    bs, le64_to_cpu(ph.ext_off) << BDRV_SECTOR_BITS, errp);
-            if (ret < 0) {
-                goto fail;
-            }
+        ret = parallels_read_format_extension(
+                bs, le64_to_cpu(ph.ext_off) << BDRV_SECTOR_BITS, errp);
+        if (ret < 0) {
+            goto fail;
         }
     }
 
-- 
2.34.1




* [PATCH 11/19] parallels: Handle L1 entries equal to one
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (9 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 10/19] parallels: Let image extensions work in RW mode Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 12/19] parallels: Make a loaded dirty bitmap persistent Alexander Ivanov
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

If all the bits in a dirty bitmap cluster are ones, the cluster shouldn't
be written. Instead, the corresponding L1 entry should be set to 1.

Check whether all the bits in a memory region are ones, and set to 1 the
L1 entries of the clusters filled with ones.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels-ext.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/block/parallels-ext.c b/block/parallels-ext.c
index a0de4f1b07..ebda6b0a01 100644
--- a/block/parallels-ext.c
+++ b/block/parallels-ext.c
@@ -354,7 +354,7 @@ static void parallels_save_bitmap(BlockDriverState *bs, BdrvDirtyBitmap *bitmap,
     offset = 0;
     while ((offset = bdrv_dirty_bitmap_next_dirty(bitmap, offset, bm_size)) >= 0) {
         uint64_t idx = offset / limit;
-        int64_t cluster_off, end, write_size;
+        int64_t cluster_off, end, write_size, first_zero;
 
         offset = QEMU_ALIGN_DOWN(offset, limit);
         end = MIN(bm_size, offset + limit);
@@ -367,6 +367,16 @@ static void parallels_save_bitmap(BlockDriverState *bs, BdrvDirtyBitmap *bitmap,
             memset(bm_buf + write_size, 0, s->cluster_size - write_size);
         }
 
+        first_zero = bdrv_dirty_bitmap_next_zero(bitmap, offset, bm_size);
+        if (first_zero < 0) {
+            goto end;
+        }
+        if (first_zero - offset >= s->cluster_size) {
+            l1_table[idx] = 1;
+            offset = end;
+            continue;
+        }
+
         cluster_off = parallels_allocate_host_clusters(bs, &alloc_size);
         if (cluster_off <= 0) {
             goto end;
-- 
2.34.1




* [PATCH 12/19] parallels: Make a loaded dirty bitmap persistent
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (10 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 11/19] parallels: Handle L1 entries equal to one Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 13/19] parallels: Reverse a conditional in parallels_check_leak() to reduce indents Alexander Ivanov
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

After loading, the bitmap is not persistent and is removed when the image
is saved. Set the bitmap's persistence to true.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels-ext.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/parallels-ext.c b/block/parallels-ext.c
index ebda6b0a01..bb4478c350 100644
--- a/block/parallels-ext.c
+++ b/block/parallels-ext.c
@@ -256,6 +256,7 @@ static int parallels_parse_format_extension(BlockDriverState *bs,
             if (!bitmap) {
                 goto fail;
             }
+            bdrv_dirty_bitmap_set_persistence(bitmap, true);
             bitmaps = g_slist_append(bitmaps, bitmap);
             break;
 
-- 
2.34.1




* [PATCH 13/19] parallels: Reverse a conditional in parallels_check_leak() to reduce indents
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (11 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 12/19] parallels: Make a loaded dirty bitmap persistent Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 14/19] parallels: Truncate images on the last used cluster Alexander Ivanov
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

Let the function return a success code if the file size is not bigger than
image_end_offset. This lets us reduce the indentation of the following
code block.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 72 +++++++++++++++++++++++------------------------
 1 file changed, 36 insertions(+), 36 deletions(-)

diff --git a/block/parallels.c b/block/parallels.c
index 772a8ca1eb..7b16ca2ab2 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -772,7 +772,7 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
                      BdrvCheckMode fix, bool explicit)
 {
     BDRVParallelsState *s = bs->opaque;
-    int64_t size;
+    int64_t size, count;
     int ret;
 
     size = bdrv_co_getlength(bs->file->bs);
@@ -780,43 +780,43 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
         res->check_errors++;
         return size;
     }
+    if (size <= res->image_end_offset) {
+        return 0;
+    }
+
+    count = DIV_ROUND_UP(size - res->image_end_offset, s->cluster_size);
+    if (explicit) {
+        fprintf(stderr,
+                "%s space leaked at the end of the image %" PRId64 "\n",
+                fix & BDRV_FIX_LEAKS ? "Repairing" : "ERROR",
+                size - res->image_end_offset);
+        res->leaks += count;
+    }
+    if (fix & BDRV_FIX_LEAKS) {
+        Error *local_err = NULL;
+
+        /*
+         * In order to really repair the image, we must shrink it.
+         * That means we have to pass exact=true.
+         */
+        ret = bdrv_co_truncate(bs->file, res->image_end_offset, true,
+                               PREALLOC_MODE_OFF, 0, &local_err);
+        if (ret < 0) {
+            error_report_err(local_err);
+            res->check_errors++;
+            return ret;
+        }
+        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
+
+        parallels_free_used_bitmap(bs);
+        ret = parallels_fill_used_bitmap(bs);
+        if (ret == -ENOMEM) {
+            res->check_errors++;
+            return ret;
+        }
 
-    if (size > res->image_end_offset) {
-        int64_t count;
-        count = DIV_ROUND_UP(size - res->image_end_offset, s->cluster_size);
         if (explicit) {
-            fprintf(stderr,
-                    "%s space leaked at the end of the image %" PRId64 "\n",
-                    fix & BDRV_FIX_LEAKS ? "Repairing" : "ERROR",
-                    size - res->image_end_offset);
-            res->leaks += count;
-        }
-        if (fix & BDRV_FIX_LEAKS) {
-            Error *local_err = NULL;
-
-            /*
-             * In order to really repair the image, we must shrink it.
-             * That means we have to pass exact=true.
-             */
-            ret = bdrv_co_truncate(bs->file, res->image_end_offset, true,
-                                   PREALLOC_MODE_OFF, 0, &local_err);
-            if (ret < 0) {
-                error_report_err(local_err);
-                res->check_errors++;
-                return ret;
-            }
-            s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
-
-            parallels_free_used_bitmap(bs);
-            ret = parallels_fill_used_bitmap(bs);
-            if (ret == -ENOMEM) {
-                res->check_errors++;
-                return ret;
-            }
-
-            if (explicit) {
-                res->leaks_fixed += count;
-            }
+            res->leaks_fixed += count;
         }
     }
 
-- 
2.34.1




* [PATCH 14/19] parallels: Truncate images on the last used cluster
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (12 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 13/19] parallels: Reverse a conditional in parallels_check_leak() to reduce indents Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 15/19] parallels: Remove unnecessary data_end field Alexander Ivanov
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

When an image is closed, there can be unused clusters at the end of the
image. Truncate these clusters and update the data_end field.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/block/parallels.c b/block/parallels.c
index 7b16ca2ab2..48ea5b3f03 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -1452,6 +1452,19 @@ fail:
     return ret;
 }
 
+static int parallels_truncate_unused_clusters(BlockDriverState *bs)
+{
+    BDRVParallelsState *s = bs->opaque;
+    uint64_t end_off = 0;
+    if (s->used_bmap_size > 0) {
+        end_off = find_last_bit(s->used_bmap, s->used_bmap_size);
+        end_off = (end_off + 1) * s->cluster_size;
+    }
+    end_off += s->data_start * BDRV_SECTOR_SIZE;
+    s->data_end = end_off / BDRV_SECTOR_SIZE;
+    return bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
+}
+
 static int parallels_inactivate(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
@@ -1469,8 +1482,7 @@ static int parallels_inactivate(BlockDriverState *bs)
     parallels_update_header(bs);
 
     /* errors are ignored, so we might as well pass exact=true */
-    ret = bdrv_truncate(bs->file, s->data_end << BDRV_SECTOR_BITS,
-                        true, PREALLOC_MODE_OFF, 0, NULL);
+    ret = parallels_truncate_unused_clusters(bs);
     if (ret < 0) {
         error_report("Failed to truncate image: %s", strerror(-ret));
     }
-- 
2.34.1




* [PATCH 15/19] parallels: Remove unnecessary data_end field
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (13 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 14/19] parallels: Truncate images on the last used cluster Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-06 19:43   ` Mike Maslenkin
  2023-10-02  8:57 ` [PATCH 16/19] parallels: Check unused clusters in parallels_check_leak() Alexander Ivanov
                   ` (3 subsequent siblings)
  18 siblings, 1 reply; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

Since we now have the used bitmap, the data_end field in BDRVParallelsState
is redundant and can be removed.

Add a parallels_data_end() helper and remove the data_end handling.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 33 +++++++++++++--------------------
 block/parallels.h |  1 -
 2 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/block/parallels.c b/block/parallels.c
index 48ea5b3f03..80a7171b84 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -265,6 +265,13 @@ static void parallels_free_used_bitmap(BlockDriverState *bs)
     g_free(s->used_bmap);
 }
 
+static int64_t parallels_data_end(BDRVParallelsState *s)
+{
+    int64_t data_end = s->data_start * BDRV_SECTOR_SIZE;
+    data_end += s->used_bmap_size * s->cluster_size;
+    return data_end;
+}
+
 int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
                                          int64_t *clusters)
 {
@@ -275,7 +282,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
 
     first_free = find_first_zero_bit(s->used_bmap, s->used_bmap_size);
     if (first_free == s->used_bmap_size) {
-        host_off = s->data_end * BDRV_SECTOR_SIZE;
+        host_off = parallels_data_end(s);
         prealloc_clusters = *clusters + s->prealloc_size / s->tracks;
         bytes = prealloc_clusters * s->cluster_size;
 
@@ -297,9 +304,6 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
         s->used_bmap = bitmap_zero_extend(s->used_bmap, s->used_bmap_size,
                                           new_usedsize);
         s->used_bmap_size = new_usedsize;
-        if (host_off + bytes > s->data_end * BDRV_SECTOR_SIZE) {
-            s->data_end = (host_off + bytes) / BDRV_SECTOR_SIZE;
-        }
     } else {
         next_used = find_next_bit(s->used_bmap, s->used_bmap_size, first_free);
 
@@ -315,8 +319,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
          * branch. In the other case we are likely re-using hole. Preallocate
          * the space if required by the prealloc_mode.
          */
-        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE &&
-                host_off < s->data_end * BDRV_SECTOR_SIZE) {
+        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
             ret = bdrv_pwrite_zeroes(bs->file, host_off, bytes, 0);
             if (ret < 0) {
                 return ret;
@@ -757,13 +760,7 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
         }
     }
 
-    if (high_off == 0) {
-        res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
-    } else {
-        res->image_end_offset = high_off + s->cluster_size;
-        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
-    }
-
+    res->image_end_offset = parallels_data_end(s);
     return 0;
 }
 
@@ -806,7 +803,6 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
             res->check_errors++;
             return ret;
         }
-        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
 
         parallels_free_used_bitmap(bs);
         ret = parallels_fill_used_bitmap(bs);
@@ -1361,8 +1357,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
     }
 
     s->data_start = data_start;
-    s->data_end = s->data_start;
-    if (s->data_end < (s->header_size >> BDRV_SECTOR_BITS)) {
+    if (s->data_start < (s->header_size >> BDRV_SECTOR_BITS)) {
         /*
          * There is not enough unused space to fit to block align between BAT
          * and actual data. We can't avoid read-modify-write...
@@ -1403,11 +1398,10 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
 
     for (i = 0; i < s->bat_size; i++) {
         sector = bat2sect(s, i);
-        if (sector + s->tracks > s->data_end) {
-            s->data_end = sector + s->tracks;
+        if (sector + s->tracks > file_nb_sectors) {
+            need_check = true;
         }
     }
-    need_check = need_check || s->data_end > file_nb_sectors;
 
     ret = parallels_fill_used_bitmap(bs);
     if (ret == -ENOMEM) {
@@ -1461,7 +1455,6 @@ static int parallels_truncate_unused_clusters(BlockDriverState *bs)
         end_off = (end_off + 1) * s->cluster_size;
     }
     end_off += s->data_start * BDRV_SECTOR_SIZE;
-    s->data_end = end_off / BDRV_SECTOR_SIZE;
     return bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
 }
 
diff --git a/block/parallels.h b/block/parallels.h
index 18b4f8068e..a6a048d890 100644
--- a/block/parallels.h
+++ b/block/parallels.h
@@ -79,7 +79,6 @@ typedef struct BDRVParallelsState {
     unsigned int bat_size;
 
     int64_t  data_start;
-    int64_t  data_end;
     uint64_t prealloc_size;
     ParallelsPreallocMode prealloc_mode;
 
-- 
2.34.1




* [PATCH 16/19] parallels: Check unused clusters in parallels_check_leak()
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (14 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 15/19] parallels: Remove unnecessary data_end field Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 17/19] tests: Add parallels images support to test 165 Alexander Ivanov
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

Since we now have the used bitmap, the old leak check is redundant.
Transform parallels_truncate_unused_clusters() into a
parallels_check_unused_clusters() helper and use it in the leak check.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 block/parallels.c | 112 ++++++++++++++++++++++++++--------------------
 1 file changed, 63 insertions(+), 49 deletions(-)

diff --git a/block/parallels.c b/block/parallels.c
index 80a7171b84..053ec0f8a1 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -764,56 +764,82 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
     return 0;
 }
 
+static int64_t parallels_check_unused_clusters(BlockDriverState *bs,
+                                               bool truncate)
+{
+    BDRVParallelsState *s = bs->opaque;
+    int64_t leak, file_size, end_off = 0;
+    int ret;
+
+    file_size = bdrv_getlength(bs->file->bs);
+    if (file_size < 0) {
+        return file_size;
+    }
+
+    if (s->used_bmap_size > 0) {
+        end_off = find_last_bit(s->used_bmap, s->used_bmap_size);
+        if (end_off == s->used_bmap_size) {
+            return 0;
+        }
+        end_off = (end_off + 1) * s->cluster_size;
+    }
+
+    end_off += s->data_start * BDRV_SECTOR_SIZE;
+    leak = file_size - end_off;
+    if (leak < 0) {
+        return -EINVAL;
+    }
+    if (!truncate || leak == 0) {
+        return leak;
+    }
+
+    ret = bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
+    if (ret) {
+        return ret;
+    }
+
+    parallels_free_used_bitmap(bs);
+    ret = parallels_fill_used_bitmap(bs);
+    if (ret < 0) {
+        return ret;
+    }
+
+    return leak;
+}
+
 static int coroutine_fn GRAPH_RDLOCK
 parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
                      BdrvCheckMode fix, bool explicit)
 {
     BDRVParallelsState *s = bs->opaque;
-    int64_t size, count;
-    int ret;
+    int64_t leak, count, size;
+
+    leak = parallels_check_unused_clusters(bs, fix & BDRV_FIX_LEAKS);
+    if (leak < 0) {
+        res->check_errors++;
+        return leak;
+    }
+    if (leak == 0) {
+        return 0;
+    }
 
     size = bdrv_co_getlength(bs->file->bs);
     if (size < 0) {
-        res->check_errors++;
         return size;
     }
-    if (size <= res->image_end_offset) {
+    res->image_end_offset = size;
+
+    if (!explicit) {
         return 0;
     }
 
-    count = DIV_ROUND_UP(size - res->image_end_offset, s->cluster_size);
-    if (explicit) {
-        fprintf(stderr,
-                "%s space leaked at the end of the image %" PRId64 "\n",
-                fix & BDRV_FIX_LEAKS ? "Repairing" : "ERROR",
-                size - res->image_end_offset);
-        res->leaks += count;
-    }
+    count = DIV_ROUND_UP(leak, s->cluster_size);
+    fprintf(stderr,
+            "%s space leaked at the end of the image %" PRId64 "\n",
+            fix & BDRV_FIX_LEAKS ? "Repairing" : "ERROR", leak);
+    res->leaks += count;
     if (fix & BDRV_FIX_LEAKS) {
-        Error *local_err = NULL;
-
-        /*
-         * In order to really repair the image, we must shrink it.
-         * That means we have to pass exact=true.
-         */
-        ret = bdrv_co_truncate(bs->file, res->image_end_offset, true,
-                               PREALLOC_MODE_OFF, 0, &local_err);
-        if (ret < 0) {
-            error_report_err(local_err);
-            res->check_errors++;
-            return ret;
-        }
-
-        parallels_free_used_bitmap(bs);
-        ret = parallels_fill_used_bitmap(bs);
-        if (ret == -ENOMEM) {
-            res->check_errors++;
-            return ret;
-        }
-
-        if (explicit) {
-            res->leaks_fixed += count;
-        }
+        res->leaks_fixed += count;
     }
 
     return 0;
@@ -1446,18 +1472,6 @@ fail:
     return ret;
 }
 
-static int parallels_truncate_unused_clusters(BlockDriverState *bs)
-{
-    BDRVParallelsState *s = bs->opaque;
-    uint64_t end_off = 0;
-    if (s->used_bmap_size > 0) {
-        end_off = find_last_bit(s->used_bmap, s->used_bmap_size);
-        end_off = (end_off + 1) * s->cluster_size;
-    }
-    end_off += s->data_start * BDRV_SECTOR_SIZE;
-    return bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
-}
-
 static int parallels_inactivate(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
@@ -1475,7 +1489,7 @@ static int parallels_inactivate(BlockDriverState *bs)
     parallels_update_header(bs);
 
     /* errors are ignored, so we might as well pass exact=true */
-    ret = parallels_truncate_unused_clusters(bs);
+    ret = parallels_check_unused_clusters(bs, true);
     if (ret < 0) {
         error_report("Failed to truncate image: %s", strerror(-ret));
     }
-- 
2.34.1




* [PATCH 17/19] tests: Add parallels images support to test 165
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (15 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 16/19] parallels: Check unused clusters in parallels_check_leak() Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 18/19] tests: Turned on 256, 299, 304 and block-status-cache for parallels format Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 19/19] tests: Add parallels format support to image-fleecing Alexander Ivanov
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

Use a different bitmap name for parallels images because they have their
own ID format and can't contain an arbitrary string.

Replace the image reopen with a VM shutdown/relaunch because parallels
images don't support reopen.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 tests/qemu-iotests/165 | 42 ++++++++++++++++++++++++++----------------
 1 file changed, 26 insertions(+), 16 deletions(-)

diff --git a/tests/qemu-iotests/165 b/tests/qemu-iotests/165
index e3ef28e2ee..3181cccb89 100755
--- a/tests/qemu-iotests/165
+++ b/tests/qemu-iotests/165
@@ -38,6 +38,10 @@ class TestPersistentDirtyBitmap(iotests.QMPTestCase):
 
     def setUp(self):
         qemu_img('create', '-f', iotests.imgfmt, disk, str(disk_size))
+        if iotests.imgfmt == 'parallels':
+            self.bitmap_name = '00000000-0000-0000-0000-000000000000'
+        else:
+            self.bitmap_name = 'bitmap0'
 
     def tearDown(self):
         os.remove(disk)
@@ -50,12 +54,12 @@ class TestPersistentDirtyBitmap(iotests.QMPTestCase):
 
     def getSha256(self):
         result = self.vm.qmp('x-debug-block-dirty-bitmap-sha256',
-                             node='drive0', name='bitmap0')
+                             node='drive0', name=self.bitmap_name)
         return result['return']['sha256']
 
     def checkBitmap(self, sha256):
         result = self.vm.qmp('x-debug-block-dirty-bitmap-sha256',
-                             node='drive0', name='bitmap0')
+                             node='drive0', name=self.bitmap_name)
         self.assert_qmp(result, 'return/sha256', sha256);
 
     def writeRegions(self, regions):
@@ -65,7 +69,7 @@ class TestPersistentDirtyBitmap(iotests.QMPTestCase):
 
     def qmpAddBitmap(self):
         self.vm.qmp('block-dirty-bitmap-add', node='drive0',
-                    name='bitmap0', persistent=True)
+                    name=self.bitmap_name, persistent=True)
 
     def test_persistent(self):
         self.vm = self.mkVm()
@@ -117,7 +121,7 @@ class TestPersistentDirtyBitmap(iotests.QMPTestCase):
         assert sha256_1 != sha256_2 # Otherwise, it's not very interesting.
 
         result = self.vm.qmp('block-dirty-bitmap-clear', node='drive0',
-                             name='bitmap0')
+                             name=self.bitmap_name)
         self.assert_qmp(result, 'return', {})
 
         # Start with regions1
@@ -136,17 +140,23 @@ class TestPersistentDirtyBitmap(iotests.QMPTestCase):
         self.writeRegions(regions2)
         assert sha256_1 == self.getSha256()
 
-        # Reopen to RW
-        result = self.vm.qmp('blockdev-reopen', options=[{
-            'node-name': 'node0',
-            'driver': iotests.imgfmt,
-            'file': {
-                'driver': 'file',
-                'filename': disk
-            },
-            'read-only': False
-        }])
-        self.assert_qmp(result, 'return', {})
+        if iotests.imgfmt == 'parallels':
+            # parallels doesn't support reopen
+            self.vm.shutdown()
+            self.vm = self.mkVm()
+            self.vm.launch()
+        else:
+            # Reopen to RW
+            result = self.vm.qmp('blockdev-reopen', options=[{
+                'node-name': 'node0',
+                'driver': iotests.imgfmt,
+                'file': {
+                    'driver': 'file',
+                    'filename': disk
+                },
+                'read-only': False
+            }])
+            self.assert_qmp(result, 'return', {})
 
         # Check that bitmap is reopened to RW and we can write to it.
         self.writeRegions(regions2)
@@ -156,6 +166,6 @@ class TestPersistentDirtyBitmap(iotests.QMPTestCase):
 
 
 if __name__ == '__main__':
-    iotests.main(supported_fmts=['qcow2'],
+    iotests.main(supported_fmts=['qcow2', 'parallels'],
                  supported_protocols=['file'],
                  unsupported_imgopts=['compat'])
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 18/19] tests: Turned on 256, 299, 304 and block-status-cache for parallels format
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (16 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 17/19] tests: Add parallels images support to test 165 Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  2023-10-02  8:57 ` [PATCH 19/19] tests: Add parallels format support to image-fleecing Alexander Ivanov
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

These tests pass with the parallels format. Add parallels to the supported
formats for these tests.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 tests/qemu-iotests/256                      | 2 +-
 tests/qemu-iotests/299                      | 2 +-
 tests/qemu-iotests/304                      | 2 +-
 tests/qemu-iotests/tests/block-status-cache | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/tests/qemu-iotests/256 b/tests/qemu-iotests/256
index d7e67f4a05..1930bb017e 100755
--- a/tests/qemu-iotests/256
+++ b/tests/qemu-iotests/256
@@ -26,7 +26,7 @@ from iotests import log
 
 iotests.verify_virtio_scsi_pci_or_ccw()
 
-iotests.script_initialize(supported_fmts=['qcow2'])
+iotests.script_initialize(supported_fmts=['qcow2', 'parallels'])
 size = 64 * 1024 * 1024
 
 with iotests.FilePath('img0') as img0_path, \
diff --git a/tests/qemu-iotests/299 b/tests/qemu-iotests/299
index a7122941fd..d8c4399446 100755
--- a/tests/qemu-iotests/299
+++ b/tests/qemu-iotests/299
@@ -23,7 +23,7 @@ import iotests
 
 # The test is unrelated to formats, restrict it to qcow2 to avoid extra runs
 iotests.script_initialize(
-    supported_fmts=['qcow2'],
+    supported_fmts=['qcow2', 'parallels'],
 )
 
 nbd_sock = iotests.file_path('nbd.sock', base_dir=iotests.sock_dir)
diff --git a/tests/qemu-iotests/304 b/tests/qemu-iotests/304
index 198f282087..1ebf999930 100755
--- a/tests/qemu-iotests/304
+++ b/tests/qemu-iotests/304
@@ -23,7 +23,7 @@
 import iotests
 from iotests import qemu_img_create, qemu_img_log, file_path
 
-iotests.script_initialize(supported_fmts=['qcow2'],
+iotests.script_initialize(supported_fmts=['qcow2', 'parallels'],
                           supported_protocols=['file'])
 
 test_img = file_path('test.qcow2')
diff --git a/tests/qemu-iotests/tests/block-status-cache b/tests/qemu-iotests/tests/block-status-cache
index 5a7bc2c149..ade3d5b169 100755
--- a/tests/qemu-iotests/tests/block-status-cache
+++ b/tests/qemu-iotests/tests/block-status-cache
@@ -131,5 +131,5 @@ class TestBscWithNbd(iotests.QMPTestCase):
 if __name__ == '__main__':
     # The block-status cache only works on the protocol layer, so to test it,
     # we can only use the raw format
-    iotests.main(supported_fmts=['raw'],
+    iotests.main(supported_fmts=['raw', 'parallels'],
                  supported_protocols=['file'])
-- 
2.34.1




* [PATCH 19/19] tests: Add parallels format support to image-fleecing
  2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
                   ` (17 preceding siblings ...)
  2023-10-02  8:57 ` [PATCH 18/19] tests: Turned on 256, 299, 304 and block-status-cache for parallels format Alexander Ivanov
@ 2023-10-02  8:57 ` Alexander Ivanov
  18 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-02  8:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, den, stefanha, vsementsov, kwolf, hreitz

Use a different bitmap name for parallels images because parallels bitmap
IDs have their own format and can't contain an arbitrary string.

Replace the hardcoded 'qcow2' format with iotests.imgfmt.

Add 'parallels' to supported formats.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
 tests/qemu-iotests/tests/image-fleecing | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/tests/qemu-iotests/tests/image-fleecing b/tests/qemu-iotests/tests/image-fleecing
index f6e449d071..00d7f37af4 100755
--- a/tests/qemu-iotests/tests/image-fleecing
+++ b/tests/qemu-iotests/tests/image-fleecing
@@ -28,7 +28,7 @@ import iotests
 from iotests import log, qemu_img, qemu_io
 
 iotests.script_initialize(
-    supported_fmts=['qcow2'],
+    supported_fmts=['qcow2', 'parallels'],
     supported_platforms=['linux'],
     required_fmts=['copy-before-write'],
     unsupported_imgopts=['compat']
@@ -61,12 +61,17 @@ def do_test(vm, use_cbw, use_snapshot_access_filter, base_img_path,
     if push_backup:
         assert use_cbw
 
+    if iotests.imgfmt == 'parallels':
+        bitmap_name = '00000000-0000-0000-0000-000000000000'
+    else:
+        bitmap_name = 'bitmap0'
+
     log('--- Setting up images ---')
     log('')
 
     qemu_img('create', '-f', iotests.imgfmt, base_img_path, '64M')
     if bitmap:
-        qemu_img('bitmap', '--add', base_img_path, 'bitmap0')
+        qemu_img('bitmap', '--add', base_img_path, bitmap_name)
 
     if use_snapshot_access_filter:
         assert use_cbw
@@ -75,7 +80,7 @@ def do_test(vm, use_cbw, use_snapshot_access_filter, base_img_path,
         qemu_img('create', '-f', 'qcow2', fleece_img_path, '64M')
 
     if push_backup:
-        qemu_img('create', '-f', 'qcow2', target_img_path, '64M')
+        qemu_img('create', '-f', iotests.imgfmt, target_img_path, '64M')
 
     for p in patterns:
         qemu_io('-f', iotests.imgfmt,
@@ -130,7 +135,7 @@ def do_test(vm, use_cbw, use_snapshot_access_filter, base_img_path,
         }
 
         if bitmap:
-            fl_cbw['bitmap'] = {'node': src_node, 'name': 'bitmap0'}
+            fl_cbw['bitmap'] = {'node': src_node, 'name': bitmap_name}
 
         log(vm.qmp('blockdev-add', fl_cbw))
 
-- 
2.34.1




* Re: [PATCH 15/19] parallels: Remove unnecessary data_end field
  2023-10-02  8:57 ` [PATCH 15/19] parallels: Remove unnecessary data_end field Alexander Ivanov
@ 2023-10-06 19:43   ` Mike Maslenkin
  2023-10-07 10:18     ` Alexander Ivanov
  0 siblings, 1 reply; 26+ messages in thread
From: Mike Maslenkin @ 2023-10-06 19:43 UTC (permalink / raw)
  To: Alexander Ivanov
  Cc: qemu-devel, qemu-block, den, stefanha, vsementsov, kwolf, hreitz

On Mon, Oct 2, 2023 at 12:01 PM Alexander Ivanov
<alexander.ivanov@virtuozzo.com> wrote:
>
> Since we have used bitmap, field data_end in BDRVParallelsState is
> redundant and can be removed.
>
> Add parallels_data_end() helper and remove data_end handling.
>
> Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
> ---
>  block/parallels.c | 33 +++++++++++++--------------------
>  block/parallels.h |  1 -
>  2 files changed, 13 insertions(+), 21 deletions(-)
>
> diff --git a/block/parallels.c b/block/parallels.c
> index 48ea5b3f03..80a7171b84 100644
> --- a/block/parallels.c
> +++ b/block/parallels.c
> @@ -265,6 +265,13 @@ static void parallels_free_used_bitmap(BlockDriverState *bs)
>      g_free(s->used_bmap);
>  }
>
> +static int64_t parallels_data_end(BDRVParallelsState *s)
> +{
> +    int64_t data_end = s->data_start * BDRV_SECTOR_SIZE;
> +    data_end += s->used_bmap_size * s->cluster_size;
> +    return data_end;
> +}
> +
>  int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>                                           int64_t *clusters)
>  {
> @@ -275,7 +282,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>
>      first_free = find_first_zero_bit(s->used_bmap, s->used_bmap_size);
>      if (first_free == s->used_bmap_size) {
> -        host_off = s->data_end * BDRV_SECTOR_SIZE;
> +        host_off = parallels_data_end(s);
>          prealloc_clusters = *clusters + s->prealloc_size / s->tracks;
>          bytes = prealloc_clusters * s->cluster_size;
>
> @@ -297,9 +304,6 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>          s->used_bmap = bitmap_zero_extend(s->used_bmap, s->used_bmap_size,
>                                            new_usedsize);
>          s->used_bmap_size = new_usedsize;
> -        if (host_off + bytes > s->data_end * BDRV_SECTOR_SIZE) {
> -            s->data_end = (host_off + bytes) / BDRV_SECTOR_SIZE;
> -        }
>      } else {
>          next_used = find_next_bit(s->used_bmap, s->used_bmap_size, first_free);
>
> @@ -315,8 +319,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>           * branch. In the other case we are likely re-using hole. Preallocate
>           * the space if required by the prealloc_mode.
>           */
> -        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE &&
> -                host_off < s->data_end * BDRV_SECTOR_SIZE) {
> +        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
>              ret = bdrv_pwrite_zeroes(bs->file, host_off, bytes, 0);
>              if (ret < 0) {
>                  return ret;
> @@ -757,13 +760,7 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
>          }
>      }
>
> -    if (high_off == 0) {
> -        res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
> -    } else {
> -        res->image_end_offset = high_off + s->cluster_size;
> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
> -    }
> -
> +    res->image_end_offset = parallels_data_end(s);
>      return 0;
>  }
>
> @@ -806,7 +803,6 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
>              res->check_errors++;
>              return ret;
>          }
> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
>
>          parallels_free_used_bitmap(bs);
>          ret = parallels_fill_used_bitmap(bs);
> @@ -1361,8 +1357,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
>      }
>
>      s->data_start = data_start;
> -    s->data_end = s->data_start;
> -    if (s->data_end < (s->header_size >> BDRV_SECTOR_BITS)) {
> +    if (s->data_start < (s->header_size >> BDRV_SECTOR_BITS)) {
>          /*
>           * There is not enough unused space to fit to block align between BAT
>           * and actual data. We can't avoid read-modify-write...
> @@ -1403,11 +1398,10 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
>
>      for (i = 0; i < s->bat_size; i++) {
>          sector = bat2sect(s, i);
> -        if (sector + s->tracks > s->data_end) {
> -            s->data_end = sector + s->tracks;
> +        if (sector + s->tracks > file_nb_sectors) {
> +            need_check = true;
>          }
>      }
> -    need_check = need_check || s->data_end > file_nb_sectors;
>
>      ret = parallels_fill_used_bitmap(bs);
>      if (ret == -ENOMEM) {
> @@ -1461,7 +1455,6 @@ static int parallels_truncate_unused_clusters(BlockDriverState *bs)
>          end_off = (end_off + 1) * s->cluster_size;
>      }
>      end_off += s->data_start * BDRV_SECTOR_SIZE;
> -    s->data_end = end_off / BDRV_SECTOR_SIZE;
>      return bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
>  }
>
> diff --git a/block/parallels.h b/block/parallels.h
> index 18b4f8068e..a6a048d890 100644
> --- a/block/parallels.h
> +++ b/block/parallels.h
> @@ -79,7 +79,6 @@ typedef struct BDRVParallelsState {
>      unsigned int bat_size;
>
>      int64_t  data_start;
> -    int64_t  data_end;
>      uint64_t prealloc_size;
>      ParallelsPreallocMode prealloc_mode;
>
> --
> 2.34.1
>

Is this the intended behavior?

Run:
1. ./qemu-img create -f parallels $TEST_IMG 1T
2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
3. ./qemu-img check  $TEST_IMG
       No errors were found on the image.
       Image end offset: 150994944

Without this patch `qemu-img check` reports:
       ERROR space leaked at the end of the image 145752064

      139 leaked clusters were found on the image.
      This means waste of disk space, but no harm to data.
      Image end offset: 5242880

Note: there is another issue, caused by previous commits:
g_free() asserts in parallels_free_used_bitmap() because
s->used_bmap is NULL.

To reproduce this crash at revision before or without patch 15/19, run commands:
1. ./qemu-img create -f parallels $TEST_IMG 1T
2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
3. ./qemu-img check -r leaks $TEST_IMG

Regards,
Mike.



* Re: [PATCH 15/19] parallels: Remove unnecessary data_end field
  2023-10-06 19:43   ` Mike Maslenkin
@ 2023-10-07 10:18     ` Alexander Ivanov
  2023-10-07 11:21       ` Mike Maslenkin
  0 siblings, 1 reply; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-07 10:18 UTC (permalink / raw)
  To: Mike Maslenkin
  Cc: qemu-devel, qemu-block, den, stefanha, vsementsov, kwolf, hreitz



On 10/6/23 21:43, Mike Maslenkin wrote:
> On Mon, Oct 2, 2023 at 12:01 PM Alexander Ivanov
> <alexander.ivanov@virtuozzo.com> wrote:
>> Since we have used bitmap, field data_end in BDRVParallelsState is
>> redundant and can be removed.
>>
>> Add parallels_data_end() helper and remove data_end handling.
>>
>> Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
>> ---
>>   block/parallels.c | 33 +++++++++++++--------------------
>>   block/parallels.h |  1 -
>>   2 files changed, 13 insertions(+), 21 deletions(-)
>>
>> diff --git a/block/parallels.c b/block/parallels.c
>> index 48ea5b3f03..80a7171b84 100644
>> --- a/block/parallels.c
>> +++ b/block/parallels.c
>> @@ -265,6 +265,13 @@ static void parallels_free_used_bitmap(BlockDriverState *bs)
>>       g_free(s->used_bmap);
>>   }
>>
>> +static int64_t parallels_data_end(BDRVParallelsState *s)
>> +{
>> +    int64_t data_end = s->data_start * BDRV_SECTOR_SIZE;
>> +    data_end += s->used_bmap_size * s->cluster_size;
>> +    return data_end;
>> +}
>> +
>>   int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>                                            int64_t *clusters)
>>   {
>> @@ -275,7 +282,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>
>>       first_free = find_first_zero_bit(s->used_bmap, s->used_bmap_size);
>>       if (first_free == s->used_bmap_size) {
>> -        host_off = s->data_end * BDRV_SECTOR_SIZE;
>> +        host_off = parallels_data_end(s);
>>           prealloc_clusters = *clusters + s->prealloc_size / s->tracks;
>>           bytes = prealloc_clusters * s->cluster_size;
>>
>> @@ -297,9 +304,6 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>           s->used_bmap = bitmap_zero_extend(s->used_bmap, s->used_bmap_size,
>>                                             new_usedsize);
>>           s->used_bmap_size = new_usedsize;
>> -        if (host_off + bytes > s->data_end * BDRV_SECTOR_SIZE) {
>> -            s->data_end = (host_off + bytes) / BDRV_SECTOR_SIZE;
>> -        }
>>       } else {
>>           next_used = find_next_bit(s->used_bmap, s->used_bmap_size, first_free);
>>
>> @@ -315,8 +319,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>            * branch. In the other case we are likely re-using hole. Preallocate
>>            * the space if required by the prealloc_mode.
>>            */
>> -        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE &&
>> -                host_off < s->data_end * BDRV_SECTOR_SIZE) {
>> +        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
>>               ret = bdrv_pwrite_zeroes(bs->file, host_off, bytes, 0);
>>               if (ret < 0) {
>>                   return ret;
>> @@ -757,13 +760,7 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
>>           }
>>       }
>>
>> -    if (high_off == 0) {
>> -        res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
>> -    } else {
>> -        res->image_end_offset = high_off + s->cluster_size;
>> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
>> -    }
>> -
>> +    res->image_end_offset = parallels_data_end(s);
>>       return 0;
>>   }
>>
>> @@ -806,7 +803,6 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
>>               res->check_errors++;
>>               return ret;
>>           }
>> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
>>
>>           parallels_free_used_bitmap(bs);
>>           ret = parallels_fill_used_bitmap(bs);
>> @@ -1361,8 +1357,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
>>       }
>>
>>       s->data_start = data_start;
>> -    s->data_end = s->data_start;
>> -    if (s->data_end < (s->header_size >> BDRV_SECTOR_BITS)) {
>> +    if (s->data_start < (s->header_size >> BDRV_SECTOR_BITS)) {
>>           /*
>>            * There is not enough unused space to fit to block align between BAT
>>            * and actual data. We can't avoid read-modify-write...
>> @@ -1403,11 +1398,10 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
>>
>>       for (i = 0; i < s->bat_size; i++) {
>>           sector = bat2sect(s, i);
>> -        if (sector + s->tracks > s->data_end) {
>> -            s->data_end = sector + s->tracks;
>> +        if (sector + s->tracks > file_nb_sectors) {
>> +            need_check = true;
>>           }
>>       }
>> -    need_check = need_check || s->data_end > file_nb_sectors;
>>
>>       ret = parallels_fill_used_bitmap(bs);
>>       if (ret == -ENOMEM) {
>> @@ -1461,7 +1455,6 @@ static int parallels_truncate_unused_clusters(BlockDriverState *bs)
>>           end_off = (end_off + 1) * s->cluster_size;
>>       }
>>       end_off += s->data_start * BDRV_SECTOR_SIZE;
>> -    s->data_end = end_off / BDRV_SECTOR_SIZE;
>>       return bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
>>   }
>>
>> diff --git a/block/parallels.h b/block/parallels.h
>> index 18b4f8068e..a6a048d890 100644
>> --- a/block/parallels.h
>> +++ b/block/parallels.h
>> @@ -79,7 +79,6 @@ typedef struct BDRVParallelsState {
>>       unsigned int bat_size;
>>
>>       int64_t  data_start;
>> -    int64_t  data_end;
>>       uint64_t prealloc_size;
>>       ParallelsPreallocMode prealloc_mode;
>>
>> --
>> 2.34.1
>>
> Is it intended behavior?
>
> Run:
> 1. ./qemu-img create -f parallels $TEST_IMG 1T
> 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
> 3. ./qemu-img check  $TEST_IMG
>         No errors were found on the image.
>         Image end offset: 150994944
>
> Without this patch `qemu-img check` reports:
>         ERROR space leaked at the end of the image 145752064
>
>        139 leaked clusters were found on the image.
>        This means waste of disk space, but no harm to data.
>        Image end offset: 5242880
The original intention is to do nothing at this point if an image is opened
RO. In the next patch parallels_check_leak() is rewritten to detect
unused clusters at the image end.

But there is a bug: the (end_off == s->used_bmap_size) case is handled
incorrectly. I will fix it, thank you.
>
> Note: there is another issue caused by previous commits exists.
> g_free asserts from parallels_free_used_bitmap() because of
> s->used_bmap is NULL.
Maybe I don't understand your point, but if you meant that g_free() could be
called with a NULL argument, it's not a problem. The GLib manual says:

    void g_free (gpointer mem);

    If mem is NULL it simply returns, so there is no need to check mem
    against NULL before calling this function.

> To reproduce this crash at revision before or without patch 15/19, run commands:
> 1. ./qemu-img create -f parallels $TEST_IMG 1T
> 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
> 3. ./qemu-img check -r leaks $TEST_IMG
Sorry, but I couldn't reproduce it. Reset to 14/19, made the three steps
and had such output:

    $ ./qemu-img create -f parallels $TEST_IMG 1T
    Formatting 'test.img', fmt=parallels size=1099511627776
    cluster_size=1048576

    $ dd if=/dev/zero of=$TEST_IMG seek=12  bs=1M count=128 conv=notrunc
    128+0 records in
    128+0 records out
    134217728 bytes (134 MB, 128 MiB) copied, 0.0797576 s, 1.7 GB/s

    $ ./qemu-img check -r leaks $TEST_IMG
    Repairing space leaked at the end of the image 141557760
    The following inconsistencies were found and repaired:

    135 leaked clusters
    0 corruptions

    Double checking the fixed image now...
    No errors were found on the image.
    Image end offset: 5242880

Checked with GCC and Clang.
> Regards,
> Mike.

-- 
Best regards,
Alexander Ivanov




* Re: [PATCH 15/19] parallels: Remove unnecessary data_end field
  2023-10-07 10:18     ` Alexander Ivanov
@ 2023-10-07 11:21       ` Mike Maslenkin
  2023-10-07 14:30         ` Alexander Ivanov
  0 siblings, 1 reply; 26+ messages in thread
From: Mike Maslenkin @ 2023-10-07 11:21 UTC (permalink / raw)
  To: Alexander Ivanov
  Cc: qemu-devel, qemu-block, den, stefanha, vsementsov, kwolf, hreitz

On Sat, Oct 7, 2023 at 1:18 PM Alexander Ivanov
<alexander.ivanov@virtuozzo.com> wrote:
>
>
>
> On 10/6/23 21:43, Mike Maslenkin wrote:
> > On Mon, Oct 2, 2023 at 12:01 PM Alexander Ivanov
> > <alexander.ivanov@virtuozzo.com> wrote:
> >> Since we have used bitmap, field data_end in BDRVParallelsState is
> >> redundant and can be removed.
> >>
> >> Add parallels_data_end() helper and remove data_end handling.
> >>
> >> Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
> >> ---
> >>   block/parallels.c | 33 +++++++++++++--------------------
> >>   block/parallels.h |  1 -
> >>   2 files changed, 13 insertions(+), 21 deletions(-)
> >>
> >> diff --git a/block/parallels.c b/block/parallels.c
> >> index 48ea5b3f03..80a7171b84 100644
> >> --- a/block/parallels.c
> >> +++ b/block/parallels.c
> >> @@ -265,6 +265,13 @@ static void parallels_free_used_bitmap(BlockDriverState *bs)
> >>       g_free(s->used_bmap);
> >>   }
> >>
> >> +static int64_t parallels_data_end(BDRVParallelsState *s)
> >> +{
> >> +    int64_t data_end = s->data_start * BDRV_SECTOR_SIZE;
> >> +    data_end += s->used_bmap_size * s->cluster_size;
> >> +    return data_end;
> >> +}
> >> +
> >>   int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
> >>                                            int64_t *clusters)
> >>   {
> >> @@ -275,7 +282,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
> >>
> >>       first_free = find_first_zero_bit(s->used_bmap, s->used_bmap_size);
> >>       if (first_free == s->used_bmap_size) {
> >> -        host_off = s->data_end * BDRV_SECTOR_SIZE;
> >> +        host_off = parallels_data_end(s);
> >>           prealloc_clusters = *clusters + s->prealloc_size / s->tracks;
> >>           bytes = prealloc_clusters * s->cluster_size;
> >>
> >> @@ -297,9 +304,6 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
> >>           s->used_bmap = bitmap_zero_extend(s->used_bmap, s->used_bmap_size,
> >>                                             new_usedsize);
> >>           s->used_bmap_size = new_usedsize;
> >> -        if (host_off + bytes > s->data_end * BDRV_SECTOR_SIZE) {
> >> -            s->data_end = (host_off + bytes) / BDRV_SECTOR_SIZE;
> >> -        }
> >>       } else {
> >>           next_used = find_next_bit(s->used_bmap, s->used_bmap_size, first_free);
> >>
> >> @@ -315,8 +319,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
> >>            * branch. In the other case we are likely re-using hole. Preallocate
> >>            * the space if required by the prealloc_mode.
> >>            */
> >> -        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE &&
> >> -                host_off < s->data_end * BDRV_SECTOR_SIZE) {
> >> +        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
> >>               ret = bdrv_pwrite_zeroes(bs->file, host_off, bytes, 0);
> >>               if (ret < 0) {
> >>                   return ret;
> >> @@ -757,13 +760,7 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
> >>           }
> >>       }
> >>
> >> -    if (high_off == 0) {
> >> -        res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
> >> -    } else {
> >> -        res->image_end_offset = high_off + s->cluster_size;
> >> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
> >> -    }
> >> -
> >> +    res->image_end_offset = parallels_data_end(s);
> >>       return 0;
> >>   }
> >>
> >> @@ -806,7 +803,6 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
> >>               res->check_errors++;
> >>               return ret;
> >>           }
> >> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
> >>
> >>           parallels_free_used_bitmap(bs);
> >>           ret = parallels_fill_used_bitmap(bs);
> >> @@ -1361,8 +1357,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
> >>       }
> >>
> >>       s->data_start = data_start;
> >> -    s->data_end = s->data_start;
> >> -    if (s->data_end < (s->header_size >> BDRV_SECTOR_BITS)) {
> >> +    if (s->data_start < (s->header_size >> BDRV_SECTOR_BITS)) {
> >>           /*
> >>            * There is not enough unused space to fit to block align between BAT
> >>            * and actual data. We can't avoid read-modify-write...
> >> @@ -1403,11 +1398,10 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
> >>
> >>       for (i = 0; i < s->bat_size; i++) {
> >>           sector = bat2sect(s, i);
> >> -        if (sector + s->tracks > s->data_end) {
> >> -            s->data_end = sector + s->tracks;
> >> +        if (sector + s->tracks > file_nb_sectors) {
> >> +            need_check = true;
> >>           }
> >>       }
> >> -    need_check = need_check || s->data_end > file_nb_sectors;
> >>
> >>       ret = parallels_fill_used_bitmap(bs);
> >>       if (ret == -ENOMEM) {
> >> @@ -1461,7 +1455,6 @@ static int parallels_truncate_unused_clusters(BlockDriverState *bs)
> >>           end_off = (end_off + 1) * s->cluster_size;
> >>       }
> >>       end_off += s->data_start * BDRV_SECTOR_SIZE;
> >> -    s->data_end = end_off / BDRV_SECTOR_SIZE;
> >>       return bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
> >>   }
> >>
> >> diff --git a/block/parallels.h b/block/parallels.h
> >> index 18b4f8068e..a6a048d890 100644
> >> --- a/block/parallels.h
> >> +++ b/block/parallels.h
> >> @@ -79,7 +79,6 @@ typedef struct BDRVParallelsState {
> >>       unsigned int bat_size;
> >>
> >>       int64_t  data_start;
> >> -    int64_t  data_end;
> >>       uint64_t prealloc_size;
> >>       ParallelsPreallocMode prealloc_mode;
> >>
> >> --
> >> 2.34.1
> >>
> > Is it intended behavior?
> >
> > Run:
> > 1. ./qemu-img create -f parallels $TEST_IMG 1T
> > 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
> > 3. ./qemu-img check  $TEST_IMG
> >         No errors were found on the image.
> >         Image end offset: 150994944
> >
> > Without this patch `qemu-img check` reports:
> >         ERROR space leaked at the end of the image 145752064
> >
> >        139 leaked clusters were found on the image.
> >        This means waste of disk space, but no harm to data.
> >        Image end offset: 5242880
> The original intention is do nothing at this point if an image is opened as
> RO. In the next patch parallels_check_leak() was rewritten to detect
> unused clusters at the image end.
>
> But there is a bug: (end_off == s->used_bmap_size) case is handled in an
> incorrect way. Will fix it, thank you.
> >
> > Note: there is another issue caused by previous commits exists.
> > g_free asserts from parallels_free_used_bitmap() because of
> > s->used_bmap is NULL.
> Maybe I don't understand your point, but if you meant that g_free() could be
> called with NULL argument, it's not a problem. GLib Manual says:
>
>     void g_free (/|gpointer
>     <https://www.manpagez.com/html/glib/glib-2.56.0/glib-Basic-Types.php#gpointer>
>     mem|/);
>
>     If /|mem|/ is |NULL|
>     <https://www.manpagez.com/html/glib/glib-2.56.0/glib-Standard-Macros.php#NULL:CAPS>
>     it simply returns, so there is no need to check /|mem|/ against
>     |NULL|
>     <https://www.manpagez.com/html/glib/glib-2.56.0/glib-Standard-Macros.php#NULL:CAPS>
>     before calling this function.
>
> > To reproduce this crash at revision before or without patch 15/19, run commands:
> > 1. ./qemu-img create -f parallels $TEST_IMG 1T
> > 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
> > 3. ./qemu-img check -r leaks $TEST_IMG
> Sorry, but I couldn't reproduce it. Reset to 14/19, made the three steps
> and had such output:
>
>     $ ./qemu-img create -f parallels $TEST_IMG 1T
>     Formatting 'test.img', fmt=parallels size=1099511627776
>     cluster_size=1048576
>
>     $ dd if=/dev/zero of=$TEST_IMG seek=12  bs=1M count=128 conv=notrunc
>     128+0 records in
>     128+0 records out
>     134217728 bytes (134 MB, 128 MiB) copied, 0.0797576 s, 1.7 GB/s
>
>     $ ./qemu-img check -r leaks $TEST_IMG
>     Repairing space leaked at the end of the image 141557760
>     The following inconsistencies were found and repaired:
>
>     135 leaked clusters
>     0 corruptions
>
>     Double checking the fixed image now...
>     No errors were found on the image.
>     Image end offset: 5242880

My comment regarding patch 15 is that the 'check' operation is not able
to detect leaked data anymore.
So, after this patch I see:

$ ./qemu-img info   leak.bin
image: leak.bin
file format: parallels
virtual size: 1 TiB (1099511627776 bytes)
disk size: 145 MiB
Child node '/file':
    filename: leak.bin
    protocol type: file
    file length: 146 MiB (153092096 bytes)
    disk size: 145 MiB

$ ./qemu-img check -r leaks leak.bin
No errors were found on the image.
Image end offset: 153092096

After reverting this patch, 'check' reports:
ERROR space leaked at the end of the image 148897792

So, after reverting patch 15, I tried to repair such an image
and got an abort trap.

I rechecked with the downloaded patches, rebuilt from scratch, and can
tell that there is no abort on the master branch, but it appears after
applying patches 1-9.
Obviously it can be debugged, and the reason is that
parallels_fill_used_bitmap() returns after

    s->used_bmap_size = DIV_ROUND_UP(payload_bytes, s->cluster_size);
    if (s->used_bmap_size == 0) {
        return 0;
    }

Because DIV_ROUND_UP(payload_bytes, s->cluster_size) gives 0.

So the subsequent parallels_free_used_bitmap() call from
parallels_close() causes an assert.

So, the first invocation of parallels_free_used_bitmap is:
  * frame #0: 0x0000000100091830 qemu-img`parallels_check_leak
[inlined] parallels_free_used_bitmap(bs=0x0000000101011600) at
parallels.c:263:33 [opt]
    frame #1: 0x0000000100091830
qemu-img`parallels_check_leak(bs=0x0000000101011600,
res=0x000000016fdff5d8, fix=BDRV_FIX_LEAKS, explicit=true) at
parallels.c:811:9 [opt]
    frame #2: 0x0000000100090d80
qemu-img`parallels_co_check(bs=0x0000000101011600,
res=0x000000016fdff5d8, fix=BDRV_FIX_LEAKS) at parallels.c:1014:15
[opt]
    frame #3: 0x0000000100014f6c
qemu-img`bdrv_co_check_entry(opaque=0x000000016fdff560) at
block-gen.c:556:14 [opt]

And the second generates abort from there:
  * frame #0: 0x000000010008fef8 qemu-img`parallels_close [inlined]
parallels_free_used_bitmap(bs=<unavailable>) at parallels.c:263:33
[opt]
    frame #1: 0x000000010008fef8
qemu-img`parallels_close(bs=0x0000000101011600) at parallels.c:1501:5
[opt]
    frame #2: 0x0000000100019d3c qemu-img`bdrv_unref [inlined]
bdrv_close(bs=0x0000000101011600) at block.c:5164:13 [opt]

After the first parallels_free_used_bitmap(), the actual image
truncation happens, so there is no payload at the moment of the next
parallels_fill_used_bitmap() call.

PS: there is a chance that some patches were not applied cleanly;
I'll recheck this later.
It would be nice if it were possible to fetch the changes from some
repo rather than extracting them from Gmail.

Regards,
Mike.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 15/19] parallels: Remove unnecessary data_end field
  2023-10-07 11:21       ` Mike Maslenkin
@ 2023-10-07 14:30         ` Alexander Ivanov
  2023-10-07 17:54           ` Mike Maslenkin
  0 siblings, 1 reply; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-07 14:30 UTC (permalink / raw)
  To: Mike Maslenkin
  Cc: qemu-devel, qemu-block, den, stefanha, vsementsov, kwolf, hreitz



On 10/7/23 13:21, Mike Maslenkin wrote:
> On Sat, Oct 7, 2023 at 1:18 PM Alexander Ivanov
> <alexander.ivanov@virtuozzo.com>  wrote:
>>
>> On 10/6/23 21:43, Mike Maslenkin wrote:
>>> On Mon, Oct 2, 2023 at 12:01 PM Alexander Ivanov
>>> <alexander.ivanov@virtuozzo.com>  wrote:
>>>> Since we have used bitmap, field data_end in BDRVParallelsState is
>>>> redundant and can be removed.
>>>>
>>>> Add parallels_data_end() helper and remove data_end handling.
>>>>
>>>> Signed-off-by: Alexander Ivanov<alexander.ivanov@virtuozzo.com>
>>>> ---
>>>>    block/parallels.c | 33 +++++++++++++--------------------
>>>>    block/parallels.h |  1 -
>>>>    2 files changed, 13 insertions(+), 21 deletions(-)
>>>>
>>>> diff --git a/block/parallels.c b/block/parallels.c
>>>> index 48ea5b3f03..80a7171b84 100644
>>>> --- a/block/parallels.c
>>>> +++ b/block/parallels.c
>>>> @@ -265,6 +265,13 @@ static void parallels_free_used_bitmap(BlockDriverState *bs)
>>>>        g_free(s->used_bmap);
>>>>    }
>>>>
>>>> +static int64_t parallels_data_end(BDRVParallelsState *s)
>>>> +{
>>>> +    int64_t data_end = s->data_start * BDRV_SECTOR_SIZE;
>>>> +    data_end += s->used_bmap_size * s->cluster_size;
>>>> +    return data_end;
>>>> +}
>>>> +
>>>>    int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>>>                                             int64_t *clusters)
>>>>    {
>>>> @@ -275,7 +282,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>>>
>>>>        first_free = find_first_zero_bit(s->used_bmap, s->used_bmap_size);
>>>>        if (first_free == s->used_bmap_size) {
>>>> -        host_off = s->data_end * BDRV_SECTOR_SIZE;
>>>> +        host_off = parallels_data_end(s);
>>>>            prealloc_clusters = *clusters + s->prealloc_size / s->tracks;
>>>>            bytes = prealloc_clusters * s->cluster_size;
>>>>
>>>> @@ -297,9 +304,6 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>>>            s->used_bmap = bitmap_zero_extend(s->used_bmap, s->used_bmap_size,
>>>>                                              new_usedsize);
>>>>            s->used_bmap_size = new_usedsize;
>>>> -        if (host_off + bytes > s->data_end * BDRV_SECTOR_SIZE) {
>>>> -            s->data_end = (host_off + bytes) / BDRV_SECTOR_SIZE;
>>>> -        }
>>>>        } else {
>>>>            next_used = find_next_bit(s->used_bmap, s->used_bmap_size, first_free);
>>>>
>>>> @@ -315,8 +319,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>>>             * branch. In the other case we are likely re-using hole. Preallocate
>>>>             * the space if required by the prealloc_mode.
>>>>             */
>>>> -        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE &&
>>>> -                host_off < s->data_end * BDRV_SECTOR_SIZE) {
>>>> +        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
>>>>                ret = bdrv_pwrite_zeroes(bs->file, host_off, bytes, 0);
>>>>                if (ret < 0) {
>>>>                    return ret;
>>>> @@ -757,13 +760,7 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
>>>>            }
>>>>        }
>>>>
>>>> -    if (high_off == 0) {
>>>> -        res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
>>>> -    } else {
>>>> -        res->image_end_offset = high_off + s->cluster_size;
>>>> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
>>>> -    }
>>>> -
>>>> +    res->image_end_offset = parallels_data_end(s);
>>>>        return 0;
>>>>    }
>>>>
>>>> @@ -806,7 +803,6 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
>>>>                res->check_errors++;
>>>>                return ret;
>>>>            }
>>>> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
>>>>
>>>>            parallels_free_used_bitmap(bs);
>>>>            ret = parallels_fill_used_bitmap(bs);
>>>> @@ -1361,8 +1357,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
>>>>        }
>>>>
>>>>        s->data_start = data_start;
>>>> -    s->data_end = s->data_start;
>>>> -    if (s->data_end < (s->header_size >> BDRV_SECTOR_BITS)) {
>>>> +    if (s->data_start < (s->header_size >> BDRV_SECTOR_BITS)) {
>>>>            /*
>>>>             * There is not enough unused space to fit to block align between BAT
>>>>             * and actual data. We can't avoid read-modify-write...
>>>> @@ -1403,11 +1398,10 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
>>>>
>>>>        for (i = 0; i < s->bat_size; i++) {
>>>>            sector = bat2sect(s, i);
>>>> -        if (sector + s->tracks > s->data_end) {
>>>> -            s->data_end = sector + s->tracks;
>>>> +        if (sector + s->tracks > file_nb_sectors) {
>>>> +            need_check = true;
>>>>            }
>>>>        }
>>>> -    need_check = need_check || s->data_end > file_nb_sectors;
>>>>
>>>>        ret = parallels_fill_used_bitmap(bs);
>>>>        if (ret == -ENOMEM) {
>>>> @@ -1461,7 +1455,6 @@ static int parallels_truncate_unused_clusters(BlockDriverState *bs)
>>>>            end_off = (end_off + 1) * s->cluster_size;
>>>>        }
>>>>        end_off += s->data_start * BDRV_SECTOR_SIZE;
>>>> -    s->data_end = end_off / BDRV_SECTOR_SIZE;
>>>>        return bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
>>>>    }
>>>>
>>>> diff --git a/block/parallels.h b/block/parallels.h
>>>> index 18b4f8068e..a6a048d890 100644
>>>> --- a/block/parallels.h
>>>> +++ b/block/parallels.h
>>>> @@ -79,7 +79,6 @@ typedef struct BDRVParallelsState {
>>>>        unsigned int bat_size;
>>>>
>>>>        int64_t  data_start;
>>>> -    int64_t  data_end;
>>>>        uint64_t prealloc_size;
>>>>        ParallelsPreallocMode prealloc_mode;
>>>>
>>>> --
>>>> 2.34.1
>>>>
>>> Is it intended behavior?
>>>
>>> Run:
>>> 1. ./qemu-img create -f parallels $TEST_IMG 1T
>>> 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
>>> 3. ./qemu-img check  $TEST_IMG
>>>          No errors were found on the image.
>>>          Image end offset: 150994944
>>>
>>> Without this patch `qemu-img check` reports:
>>>          ERROR space leaked at the end of the image 145752064
>>>
>>>         139 leaked clusters were found on the image.
>>>         This means waste of disk space, but no harm to data.
>>>         Image end offset: 5242880
>> The original intention is do nothing at this point if an image is opened as
>> RO. In the next patch parallels_check_leak() was rewritten to detect
>> unused clusters at the image end.
>>
>> But there is a bug: (end_off == s->used_bmap_size) case is handled in an
>> incorrect way. Will fix it, thank you.
>>> Note: there is another issue caused by previous commits exists.
>>> g_free asserts from parallels_free_used_bitmap() because of
>>> s->used_bmap is NULL.
>> Maybe I don't understand your point, but if you meant that g_free() could be
>> called with NULL argument, it's not a problem. GLib Manual says:
>>
>>      void g_free (gpointer mem);
>>
>>      If mem is NULL it simply returns, so there is no need to check
>>      mem against NULL before calling this function.
>>
>>> To reproduce this crash at revision before or without patch 15/19, run commands:
>>> 1. ./qemu-img create -f parallels $TEST_IMG 1T
>>> 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
>>> 3. ./qemu-img check -r leaks $TEST_IMG
>> Sorry, but I couldn't reproduce it. Reset to 14/19, made the three steps
>> and had such output:
>>
>>      $ ./qemu-img create -f parallels $TEST_IMG 1T
>>      Formatting 'test.img', fmt=parallels size=1099511627776
>>      cluster_size=1048576
>>
>>      $ dd if=/dev/zero of=$TEST_IMG seek=12  bs=1M count=128 conv=notrunc
>>      128+0 records in
>>      128+0 records out
>>      134217728 bytes (134 MB, 128 MiB) copied, 0.0797576 s, 1.7 GB/s
>>
>>      $ ./qemu-img check -r leaks $TEST_IMG
>>      Repairing space leaked at the end of the image 141557760
>>      The following inconsistencies were found and repaired:
>>
>>      135 leaked clusters
>>      0 corruptions
>>
>>      Double checking the fixed image now...
>>      No errors were found on the image.
>>      Image end offset: 5242880
> My comment regarding patch 15 is about 'check' operation is not able
> to detect leaked data anymore.
> So, after this patch I see:
>
> $ ./qemu-img info   leak.bin
> image: leak.bin
> file format: parallels
> virtual size: 1 TiB (1099511627776 bytes)
> disk size: 145 MiB
> Child node '/file':
>      filename: leak.bin
>      protocol type: file
>      file length: 146 MiB (153092096 bytes)
>      disk size: 145 MiB
>
> $ ./qemu-img check -r leaks leak.bin
> No errors were found on the image.
> Image end offset: 153092096
>
> After reverting this patch  'check' reports about:
> ERROR space leaked at the end of the image 148897792
>
> So, after reverting patch 15 I tried to repair such image
> and got abort trap.
Yes, I understand this part. OK, I think I could place patch 16 before
patch 15, and then leaks would be handled in the correct way at any
point of the patch sequence.
>
> I rechecked with downloaded patches, rebuild from scratch and can tell
> that there is no abort on master branch, but it appears after applying
> patches[1-9].
Maybe I'm doing something wrong, but I reset to the top of mainstream,
applied patches 1-9, rebuilt QEMU, and didn't see any abort.

> Obviously It can be debugged and the reason is that
> parallels_fill_used_bitmap() returns after
>
>   s->used_bmap_size = DIV_ROUND_UP(payload_bytes, s->cluster_size);
>      if (s->used_bmap_size == 0) {
>          return 0;
>      }
>
> Because DIV_ROUND_UP(payload_bytes, s->cluster_size); gives a 0;
>
> So subsequent parallels_free_used_bitmap() called from
> parallels_close() causes an assert.
>
> So, the first invocation of parallels_free_used_bitmap is:
>    * frame #0: 0x0000000100091830 qemu-img`parallels_check_leak
> [inlined] parallels_free_used_bitmap(bs=0x0000000101011600) at
> parallels.c:263:33 [opt]
>      frame #1: 0x0000000100091830
> qemu-img`parallels_check_leak(bs=0x0000000101011600,
> res=0x000000016fdff5d8, fix=BDRV_FIX_LEAKS, explicit=true) at
> parallels.c:811:9 [opt]
>      frame #2: 0x0000000100090d80
> qemu-img`parallels_co_check(bs=0x0000000101011600,
> res=0x000000016fdff5d8, fix=BDRV_FIX_LEAKS) at parallels.c:1014:15
> [opt]
>      frame #3: 0x0000000100014f6c
> qemu-img`bdrv_co_check_entry(opaque=0x000000016fdff560) at
> block-gen.c:556:14 [opt]
>
> And the second generates abort from there:
>    * frame #0: 0x000000010008fef8 qemu-img`parallels_close [inlined]
> parallels_free_used_bitmap(bs=<unavailable>) at parallels.c:263:33
In this line we have:

    BDRVParallelsState *s = bs->opaque;

So the only possibility for an abort here is an incorrect bs. I don't
know how that could be possible.

> [opt]
>      frame #1: 0x000000010008fef8
> qemu-img`parallels_close(bs=0x0000000101011600) at parallels.c:1501:5
> [opt]
>      frame #2: 0x0000000100019d3c qemu-img`bdrv_unref [inlined]
> bdrv_close(bs=0x0000000101011600) at block.c:5164:13 [opt]
>
> After the first parallels_free_used_bitmap(), there is an actual image
> truncation happens, so there is no payload at the moment of the next
> parallels_fill_used_bitmap(),
>
> PS: there are a chances that some patches were not applied clearly,
> I'll recheck this later.
I just reset to the mainstream top and applied patches 1-9:

    $ git reset --hard 2f3913f4b2ad74baeb5a6f1d36efbd9ecdf1057d
    HEAD is now at 2f3913f4b2 Merge tag 'for_upstream' of
    https://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging
    $ git am *.eml
    Applying: parallels: Move inactivation code to a separate function
    Applying: parallels: Add mark_unused() helper
    Applying: parallels: Move host clusters allocation to a separate
    function
    Applying: parallels: Set data_end value in parallels_check_leak()
    Applying: parallels: Recreate used bitmap in parallels_check_leak()
    Applying: parallels: Add a note about used bitmap in
    parallels_check_duplicate()
    Applying: parallels: Create used bitmap even if checks needed
    Applying: parallels: Make mark_used() and mark_unused() global functions
    Applying: parallels: Add dirty bitmaps saving

> It would be nice if it was possible to fetch changes from some repo,
> rather than extracting  it from gmail.
You can fetch it here (branch "parallels"):
https://github.com/AlexanderIvanov-Virtuozzo/qemu.git
>
> Regards,
> Mike.

-- 
Best regards,
Alexander Ivanov




* Re: [PATCH 15/19] parallels: Remove unnecessary data_end field
  2023-10-07 14:30         ` Alexander Ivanov
@ 2023-10-07 17:54           ` Mike Maslenkin
  2023-10-08 13:54             ` Alexander Ivanov
  0 siblings, 1 reply; 26+ messages in thread
From: Mike Maslenkin @ 2023-10-07 17:54 UTC (permalink / raw)
  To: Alexander Ivanov
  Cc: qemu-devel, qemu-block, den, stefanha, vsementsov, kwolf, hreitz

On Sat, Oct 7, 2023 at 5:30 PM Alexander Ivanov
<alexander.ivanov@virtuozzo.com> wrote:
>
>
>
> On 10/7/23 13:21, Mike Maslenkin wrote:
> > On Sat, Oct 7, 2023 at 1:18 PM Alexander Ivanov
> > <alexander.ivanov@virtuozzo.com>  wrote:
> >>
> >> On 10/6/23 21:43, Mike Maslenkin wrote:
> >>> On Mon, Oct 2, 2023 at 12:01 PM Alexander Ivanov
> >>> <alexander.ivanov@virtuozzo.com>  wrote:
> >>>> Since we have used bitmap, field data_end in BDRVParallelsState is
> >>>> redundant and can be removed.
> >>>>
> >>>> Add parallels_data_end() helper and remove data_end handling.
> >>>>
> >>>> Signed-off-by: Alexander Ivanov<alexander.ivanov@virtuozzo.com>
> >>>> ---
> >>>>    block/parallels.c | 33 +++++++++++++--------------------
> >>>>    block/parallels.h |  1 -
> >>>>    2 files changed, 13 insertions(+), 21 deletions(-)
> >>>>
> >>>> diff --git a/block/parallels.c b/block/parallels.c
> >>>> index 48ea5b3f03..80a7171b84 100644
> >>>> --- a/block/parallels.c
> >>>> +++ b/block/parallels.c
> >>>> @@ -265,6 +265,13 @@ static void parallels_free_used_bitmap(BlockDriverState *bs)
> >>>>        g_free(s->used_bmap);
> >>>>    }
> >>>>
> >>>> +static int64_t parallels_data_end(BDRVParallelsState *s)
> >>>> +{
> >>>> +    int64_t data_end = s->data_start * BDRV_SECTOR_SIZE;
> >>>> +    data_end += s->used_bmap_size * s->cluster_size;
> >>>> +    return data_end;
> >>>> +}
> >>>> +
> >>>>    int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
> >>>>                                             int64_t *clusters)
> >>>>    {
> >>>> @@ -275,7 +282,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
> >>>>
> >>>>        first_free = find_first_zero_bit(s->used_bmap, s->used_bmap_size);
> >>>>        if (first_free == s->used_bmap_size) {
> >>>> -        host_off = s->data_end * BDRV_SECTOR_SIZE;
> >>>> +        host_off = parallels_data_end(s);
> >>>>            prealloc_clusters = *clusters + s->prealloc_size / s->tracks;
> >>>>            bytes = prealloc_clusters * s->cluster_size;
> >>>>
> >>>> @@ -297,9 +304,6 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
> >>>>            s->used_bmap = bitmap_zero_extend(s->used_bmap, s->used_bmap_size,
> >>>>                                              new_usedsize);
> >>>>            s->used_bmap_size = new_usedsize;
> >>>> -        if (host_off + bytes > s->data_end * BDRV_SECTOR_SIZE) {
> >>>> -            s->data_end = (host_off + bytes) / BDRV_SECTOR_SIZE;
> >>>> -        }
> >>>>        } else {
> >>>>            next_used = find_next_bit(s->used_bmap, s->used_bmap_size, first_free);
> >>>>
> >>>> @@ -315,8 +319,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
> >>>>             * branch. In the other case we are likely re-using hole. Preallocate
> >>>>             * the space if required by the prealloc_mode.
> >>>>             */
> >>>> -        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE &&
> >>>> -                host_off < s->data_end * BDRV_SECTOR_SIZE) {
> >>>> +        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
> >>>>                ret = bdrv_pwrite_zeroes(bs->file, host_off, bytes, 0);
> >>>>                if (ret < 0) {
> >>>>                    return ret;
> >>>> @@ -757,13 +760,7 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
> >>>>            }
> >>>>        }
> >>>>
> >>>> -    if (high_off == 0) {
> >>>> -        res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
> >>>> -    } else {
> >>>> -        res->image_end_offset = high_off + s->cluster_size;
> >>>> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
> >>>> -    }
> >>>> -
> >>>> +    res->image_end_offset = parallels_data_end(s);
> >>>>        return 0;
> >>>>    }
> >>>>
> >>>> @@ -806,7 +803,6 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
> >>>>                res->check_errors++;
> >>>>                return ret;
> >>>>            }
> >>>> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
> >>>>
> >>>>            parallels_free_used_bitmap(bs);
> >>>>            ret = parallels_fill_used_bitmap(bs);
> >>>> @@ -1361,8 +1357,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
> >>>>        }
> >>>>
> >>>>        s->data_start = data_start;
> >>>> -    s->data_end = s->data_start;
> >>>> -    if (s->data_end < (s->header_size >> BDRV_SECTOR_BITS)) {
> >>>> +    if (s->data_start < (s->header_size >> BDRV_SECTOR_BITS)) {
> >>>>            /*
> >>>>             * There is not enough unused space to fit to block align between BAT
> >>>>             * and actual data. We can't avoid read-modify-write...
> >>>> @@ -1403,11 +1398,10 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
> >>>>
> >>>>        for (i = 0; i < s->bat_size; i++) {
> >>>>            sector = bat2sect(s, i);
> >>>> -        if (sector + s->tracks > s->data_end) {
> >>>> -            s->data_end = sector + s->tracks;
> >>>> +        if (sector + s->tracks > file_nb_sectors) {
> >>>> +            need_check = true;
> >>>>            }
> >>>>        }
> >>>> -    need_check = need_check || s->data_end > file_nb_sectors;
> >>>>
> >>>>        ret = parallels_fill_used_bitmap(bs);
> >>>>        if (ret == -ENOMEM) {
> >>>> @@ -1461,7 +1455,6 @@ static int parallels_truncate_unused_clusters(BlockDriverState *bs)
> >>>>            end_off = (end_off + 1) * s->cluster_size;
> >>>>        }
> >>>>        end_off += s->data_start * BDRV_SECTOR_SIZE;
> >>>> -    s->data_end = end_off / BDRV_SECTOR_SIZE;
> >>>>        return bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
> >>>>    }
> >>>>
> >>>> diff --git a/block/parallels.h b/block/parallels.h
> >>>> index 18b4f8068e..a6a048d890 100644
> >>>> --- a/block/parallels.h
> >>>> +++ b/block/parallels.h
> >>>> @@ -79,7 +79,6 @@ typedef struct BDRVParallelsState {
> >>>>        unsigned int bat_size;
> >>>>
> >>>>        int64_t  data_start;
> >>>> -    int64_t  data_end;
> >>>>        uint64_t prealloc_size;
> >>>>        ParallelsPreallocMode prealloc_mode;
> >>>>
> >>>> --
> >>>> 2.34.1
> >>>>
> >>> Is it intended behavior?
> >>>
> >>> Run:
> >>> 1. ./qemu-img create -f parallels $TEST_IMG 1T
> >>> 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
> >>> 3. ./qemu-img check  $TEST_IMG
> >>>          No errors were found on the image.
> >>>          Image end offset: 150994944
> >>>
> >>> Without this patch `qemu-img check` reports:
> >>>          ERROR space leaked at the end of the image 145752064
> >>>
> >>>         139 leaked clusters were found on the image.
> >>>         This means waste of disk space, but no harm to data.
> >>>         Image end offset: 5242880
> >> The original intention is do nothing at this point if an image is opened as
> >> RO. In the next patch parallels_check_leak() was rewritten to detect
> >> unused clusters at the image end.
> >>
> >> But there is a bug: (end_off == s->used_bmap_size) case is handled in an
> >> incorrect way. Will fix it, thank you.
> >>> Note: there is another issue caused by previous commits exists.
> >>> g_free asserts from parallels_free_used_bitmap() because of
> >>> s->used_bmap is NULL.
> >> Maybe I don't understand your point, but if you meant that g_free() could be
> >> called with NULL argument, it's not a problem. GLib Manual says:
> >>
> >>      void g_free (gpointer mem);
> >>
> >>      If mem is NULL it simply returns, so there is no need to check
> >>      mem against NULL before calling this function.
> >>
> >>> To reproduce this crash at revision before or without patch 15/19, run commands:
> >>> 1. ./qemu-img create -f parallels $TEST_IMG 1T
> >>> 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
> >>> 3. ./qemu-img check -r leaks $TEST_IMG
> >> Sorry, but I couldn't reproduce it. Reset to 14/19, made the three steps
> >> and had such output:
> >>
> >>      $ ./qemu-img create -f parallels $TEST_IMG 1T
> >>      Formatting 'test.img', fmt=parallels size=1099511627776
> >>      cluster_size=1048576
> >>
> >>      $ dd if=/dev/zero of=$TEST_IMG seek=12  bs=1M count=128 conv=notrunc
> >>      128+0 records in
> >>      128+0 records out
> >>      134217728 bytes (134 MB, 128 MiB) copied, 0.0797576 s, 1.7 GB/s
> >>
> >>      $ ./qemu-img check -r leaks $TEST_IMG
> >>      Repairing space leaked at the end of the image 141557760
> >>      The following inconsistencies were found and repaired:
> >>
> >>      135 leaked clusters
> >>      0 corruptions
> >>
> >>      Double checking the fixed image now...
> >>      No errors were found on the image.
> >>      Image end offset: 5242880
> > My comment regarding patch 15 is about 'check' operation is not able
> > to detect leaked data anymore.
> > So, after this patch I see:
> >
> > $ ./qemu-img info   leak.bin
> > image: leak.bin
> > file format: parallels
> > virtual size: 1 TiB (1099511627776 bytes)
> > disk size: 145 MiB
> > Child node '/file':
> >      filename: leak.bin
> >      protocol type: file
> >      file length: 146 MiB (153092096 bytes)
> >      disk size: 145 MiB
> >
> > $ ./qemu-img check -r leaks leak.bin
> > No errors were found on the image.
> > Image end offset: 153092096
> >
> > After reverting this patch  'check' reports about:
> > ERROR space leaked at the end of the image 148897792
> >
> > So, after reverting patch 15 I tried to repair such image
> > and got abort trap.
> Yes, I understand this part. OK, I think, I could place 16 patch before
> 15 and
> leaks would handle in the correct way at any point of the patch sequence.
> >
> > I rechecked with downloaded patches, rebuild from scratch and can tell
> > that there is no abort on master branch, but it appears after applying
> > patches[1-9].
> Maybe I do something wrong, but I reset to the top of mainstream, applied
> 1-9 patches, rebuilt QEMU and didn't see any abort.
>
> > Obviously It can be debugged and the reason is that
> > parallels_fill_used_bitmap() returns after
> >
> >   s->used_bmap_size = DIV_ROUND_UP(payload_bytes, s->cluster_size);
> >      if (s->used_bmap_size == 0) {
> >          return 0;
> >      }
> >
> > Because DIV_ROUND_UP(payload_bytes, s->cluster_size); gives a 0;
> >
> > So subsequent parallels_free_used_bitmap() called from
> > parallels_close() causes an assert.
> >
> > So, the first invocation of parallels_free_used_bitmap is:
> >    * frame #0: 0x0000000100091830 qemu-img`parallels_check_leak
> > [inlined] parallels_free_used_bitmap(bs=0x0000000101011600) at
> > parallels.c:263:33 [opt]
> >      frame #1: 0x0000000100091830
> > qemu-img`parallels_check_leak(bs=0x0000000101011600,
> > res=0x000000016fdff5d8, fix=BDRV_FIX_LEAKS, explicit=true) at
> > parallels.c:811:9 [opt]
> >      frame #2: 0x0000000100090d80
> > qemu-img`parallels_co_check(bs=0x0000000101011600,
> > res=0x000000016fdff5d8, fix=BDRV_FIX_LEAKS) at parallels.c:1014:15
> > [opt]
> >      frame #3: 0x0000000100014f6c
> > qemu-img`bdrv_co_check_entry(opaque=0x000000016fdff560) at
> > block-gen.c:556:14 [opt]
> >
> > And the second generates abort from there:
> >    * frame #0: 0x000000010008fef8 qemu-img`parallels_close [inlined]
> > parallels_free_used_bitmap(bs=<unavailable>) at parallels.c:263:33
> In this line we have:
>
>     BDRVParallelsState *s = bs->opaque;
>
> So there is only one possibility to abort - incorrect bs. I don't know
> how it
> could be possible.
>
> > [opt]
> >      frame #1: 0x000000010008fef8
> > qemu-img`parallels_close(bs=0x0000000101011600) at parallels.c:1501:5
> > [opt]
> >      frame #2: 0x0000000100019d3c qemu-img`bdrv_unref [inlined]
> > bdrv_close(bs=0x0000000101011600) at block.c:5164:13 [opt]
> >
> > After the first parallels_free_used_bitmap(), there is an actual image
> > truncation happens, so there is no payload at the moment of the next
> > parallels_fill_used_bitmap(),
> >
> > PS: there are a chances that some patches were not applied clearly,
> > I'll recheck this later.
> I just reset to the mainstream top and apply 1-9 patches:
>
>     $ git reset --hard 2f3913f4b2ad74baeb5a6f1d36efbd9ecdf1057d
>     HEAD is now at 2f3913f4b2 Merge tag 'for_upstream' of
>     https://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging
>     $ git am *.eml
>     Applying: parallels: Move inactivation code to a separate function
>     Applying: parallels: Add mark_unused() helper
>     Applying: parallels: Move host clusters allocation to a separate
>     function
>     Applying: parallels: Set data_end value in parallels_check_leak()
>     Applying: parallels: Recreate used bitmap in parallels_check_leak()
>     Applying: parallels: Add a note about used bitmap in
>     parallels_check_duplicate()
>     Applying: parallels: Create used bitmap even if checks needed
>     Applying: parallels: Make mark_used() and mark_unused() global functions
>     Applying: parallels: Add dirty bitmaps saving
>
> > It would be nice if it was possible to fetch changes from some repo,
> > rather than extracting  it from gmail.
> You can fetch it here (branch "parallels") -
> https://github.com/AlexanderIvanov-Virtuozzo/qemu.git
> >
> > Regards,
> > Mike.
>
> --
> Best regards,
> Alexander Ivanov
>
Thanks for the link. I've fetched your repo and reverted "parallels:
Remove unnecessary data_end field", since it hides the reproduction by
making 'check' blind to the case we are discussing.
So the situation is the same:
1. parallels_open() calls parallels_fill_used_bitmap(). At this time
the file size is 145M (i.e. the leaked clusters are there) and
s->used_bmap_size = 139.
2. Then parallels_co_check() -> parallels_check_leak() is invoked.
     First, parallels_check_leak() calls
bdrv_co_truncate(offset=5242880), which is correct, as we have only an
empty BAT in the image.
     At this step the image is truncated to 5M, i.e. it contains only
the empty BAT.
     So, on line 809, s->data_end = 10240, i.e. 5M (10240 << 9):
      809:         s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;

      811:        parallels_free_used_bitmap(bs);
      812:        ret = parallels_fill_used_bitmap(bs);

Line 811 invalidates used_bmap and sets used_bmap_size to 0.
parallels_fill_used_bitmap(), invoked on line 812, returns 0 because
payload_bytes = 0 (the current file size of 5M minus s->data_start *
BDRV_SECTOR_SIZE), and s->used_bmap is NOT reinitialized.

3. parallels_close() is invoked to finish the work and exit the process.
   parallels_close() calls parallels_free_used_bitmap() unconditionally.

static void parallels_free_used_bitmap(BlockDriverState *bs)
{
    BDRVParallelsState *s = bs->opaque;
    s->used_bmap_size = 0;
ASSERT IS HERE >>>>  g_free(s->used_bmap);
}

The fix is trivial...

if (s->used_bmap_size) {
    g_free(s->used_bmap);
    s->used_bmap_size = 0;
}

PS: I returned to your HEAD and killed gdb, thus leaving the image
marked as incorrectly closed.
But 'qemu-img check' only removed the "incorrectly closed" flag and
didn't remove the leaked clusters.

Regards,
Mike.



* Re: [PATCH 15/19] parallels: Remove unnecessary data_end field
  2023-10-07 17:54           ` Mike Maslenkin
@ 2023-10-08 13:54             ` Alexander Ivanov
  0 siblings, 0 replies; 26+ messages in thread
From: Alexander Ivanov @ 2023-10-08 13:54 UTC (permalink / raw)
  To: Mike Maslenkin
  Cc: qemu-devel, qemu-block, den, stefanha, vsementsov, kwolf, hreitz



On 10/7/23 19:54, Mike Maslenkin wrote:
> On Sat, Oct 7, 2023 at 5:30 PM Alexander Ivanov
> <alexander.ivanov@virtuozzo.com> wrote:
>>
>>
>> On 10/7/23 13:21, Mike Maslenkin wrote:
>>> On Sat, Oct 7, 2023 at 1:18 PM Alexander Ivanov
>>> <alexander.ivanov@virtuozzo.com>  wrote:
>>>> On 10/6/23 21:43, Mike Maslenkin wrote:
>>>>> On Mon, Oct 2, 2023 at 12:01 PM Alexander Ivanov
>>>>> <alexander.ivanov@virtuozzo.com>  wrote:
>>>>>> Since we have used bitmap, field data_end in BDRVParallelsState is
>>>>>> redundant and can be removed.
>>>>>>
>>>>>> Add parallels_data_end() helper and remove data_end handling.
>>>>>>
>>>>>> Signed-off-by: Alexander Ivanov<alexander.ivanov@virtuozzo.com>
>>>>>> ---
>>>>>>     block/parallels.c | 33 +++++++++++++--------------------
>>>>>>     block/parallels.h |  1 -
>>>>>>     2 files changed, 13 insertions(+), 21 deletions(-)
>>>>>>
>>>>>> diff --git a/block/parallels.c b/block/parallels.c
>>>>>> index 48ea5b3f03..80a7171b84 100644
>>>>>> --- a/block/parallels.c
>>>>>> +++ b/block/parallels.c
>>>>>> @@ -265,6 +265,13 @@ static void parallels_free_used_bitmap(BlockDriverState *bs)
>>>>>>         g_free(s->used_bmap);
>>>>>>     }
>>>>>>
>>>>>> +static int64_t parallels_data_end(BDRVParallelsState *s)
>>>>>> +{
>>>>>> +    int64_t data_end = s->data_start * BDRV_SECTOR_SIZE;
>>>>>> +    data_end += s->used_bmap_size * s->cluster_size;
>>>>>> +    return data_end;
>>>>>> +}
>>>>>> +
>>>>>>     int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>>>>>                                              int64_t *clusters)
>>>>>>     {
>>>>>> @@ -275,7 +282,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>>>>>
>>>>>>         first_free = find_first_zero_bit(s->used_bmap, s->used_bmap_size);
>>>>>>         if (first_free == s->used_bmap_size) {
>>>>>> -        host_off = s->data_end * BDRV_SECTOR_SIZE;
>>>>>> +        host_off = parallels_data_end(s);
>>>>>>             prealloc_clusters = *clusters + s->prealloc_size / s->tracks;
>>>>>>             bytes = prealloc_clusters * s->cluster_size;
>>>>>>
>>>>>> @@ -297,9 +304,6 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>>>>>             s->used_bmap = bitmap_zero_extend(s->used_bmap, s->used_bmap_size,
>>>>>>                                               new_usedsize);
>>>>>>             s->used_bmap_size = new_usedsize;
>>>>>> -        if (host_off + bytes > s->data_end * BDRV_SECTOR_SIZE) {
>>>>>> -            s->data_end = (host_off + bytes) / BDRV_SECTOR_SIZE;
>>>>>> -        }
>>>>>>         } else {
>>>>>>             next_used = find_next_bit(s->used_bmap, s->used_bmap_size, first_free);
>>>>>>
>>>>>> @@ -315,8 +319,7 @@ int64_t parallels_allocate_host_clusters(BlockDriverState *bs,
>>>>>>              * branch. In the other case we are likely re-using hole. Preallocate
>>>>>>              * the space if required by the prealloc_mode.
>>>>>>              */
>>>>>> -        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE &&
>>>>>> -                host_off < s->data_end * BDRV_SECTOR_SIZE) {
>>>>>> +        if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
>>>>>>                 ret = bdrv_pwrite_zeroes(bs->file, host_off, bytes, 0);
>>>>>>                 if (ret < 0) {
>>>>>>                     return ret;
>>>>>> @@ -757,13 +760,7 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
>>>>>>             }
>>>>>>         }
>>>>>>
>>>>>> -    if (high_off == 0) {
>>>>>> -        res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
>>>>>> -    } else {
>>>>>> -        res->image_end_offset = high_off + s->cluster_size;
>>>>>> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
>>>>>> -    }
>>>>>> -
>>>>>> +    res->image_end_offset = parallels_data_end(s);
>>>>>>         return 0;
>>>>>>     }
>>>>>>
>>>>>> @@ -806,7 +803,6 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
>>>>>>                 res->check_errors++;
>>>>>>                 return ret;
>>>>>>             }
>>>>>> -        s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
>>>>>>
>>>>>>             parallels_free_used_bitmap(bs);
>>>>>>             ret = parallels_fill_used_bitmap(bs);
>>>>>> @@ -1361,8 +1357,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
>>>>>>         }
>>>>>>
>>>>>>         s->data_start = data_start;
>>>>>> -    s->data_end = s->data_start;
>>>>>> -    if (s->data_end < (s->header_size >> BDRV_SECTOR_BITS)) {
>>>>>> +    if (s->data_start < (s->header_size >> BDRV_SECTOR_BITS)) {
>>>>>>             /*
>>>>>>              * There is not enough unused space to fit to block align between BAT
>>>>>>              * and actual data. We can't avoid read-modify-write...
>>>>>> @@ -1403,11 +1398,10 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
>>>>>>
>>>>>>         for (i = 0; i < s->bat_size; i++) {
>>>>>>             sector = bat2sect(s, i);
>>>>>> -        if (sector + s->tracks > s->data_end) {
>>>>>> -            s->data_end = sector + s->tracks;
>>>>>> +        if (sector + s->tracks > file_nb_sectors) {
>>>>>> +            need_check = true;
>>>>>>             }
>>>>>>         }
>>>>>> -    need_check = need_check || s->data_end > file_nb_sectors;
>>>>>>
>>>>>>         ret = parallels_fill_used_bitmap(bs);
>>>>>>         if (ret == -ENOMEM) {
>>>>>> @@ -1461,7 +1455,6 @@ static int parallels_truncate_unused_clusters(BlockDriverState *bs)
>>>>>>             end_off = (end_off + 1) * s->cluster_size;
>>>>>>         }
>>>>>>         end_off += s->data_start * BDRV_SECTOR_SIZE;
>>>>>> -    s->data_end = end_off / BDRV_SECTOR_SIZE;
>>>>>>         return bdrv_truncate(bs->file, end_off, true, PREALLOC_MODE_OFF, 0, NULL);
>>>>>>     }
>>>>>>
>>>>>> diff --git a/block/parallels.h b/block/parallels.h
>>>>>> index 18b4f8068e..a6a048d890 100644
>>>>>> --- a/block/parallels.h
>>>>>> +++ b/block/parallels.h
>>>>>> @@ -79,7 +79,6 @@ typedef struct BDRVParallelsState {
>>>>>>         unsigned int bat_size;
>>>>>>
>>>>>>         int64_t  data_start;
>>>>>> -    int64_t  data_end;
>>>>>>         uint64_t prealloc_size;
>>>>>>         ParallelsPreallocMode prealloc_mode;
>>>>>>
>>>>>> --
>>>>>> 2.34.1
>>>>>>
>>>>> Is it intended behavior?
>>>>>
>>>>> Run:
>>>>> 1. ./qemu-img create -f parallels $TEST_IMG 1T
>>>>> 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
>>>>> 3. ./qemu-img check  $TEST_IMG
>>>>>           No errors were found on the image.
>>>>>           Image end offset: 150994944
>>>>>
>>>>> Without this patch `qemu-img check` reports:
>>>>>           ERROR space leaked at the end of the image 145752064
>>>>>
>>>>>          139 leaked clusters were found on the image.
>>>>>          This means waste of disk space, but no harm to data.
>>>>>          Image end offset: 5242880
>>>> The original intention is do nothing at this point if an image is opened as
>>>> RO. In the next patch parallels_check_leak() was rewritten to detect
>>>> unused clusters at the image end.
>>>>
>>>> But there is a bug: (end_off == s->used_bmap_size) case is handled in an
>>>> incorrect way. Will fix it, thank you.
>>>>> Note: there is another issue caused by previous commits exists.
>>>>> g_free asserts from parallels_free_used_bitmap() because of
>>>>> s->used_bmap is NULL.
>>>> Maybe I don't understand your point, but if you meant that g_free() could be
>>>> called with NULL argument, it's not a problem. GLib Manual says:
>>>>
>>>>       void g_free (/|gpointer
>>>>       <https://www.manpagez.com/html/glib/glib-2.56.0/glib-Basic-Types.php#gpointer>
>>>>       mem|/);
>>>>
>>>>       If /|mem|/ is|NULL|
>>>>       <https://www.manpagez.com/html/glib/glib-2.56.0/glib-Standard-Macros.php#NULL:CAPS>
>>>>       it simply returns, so there is no need to check /|mem|/ against
>>>>       |NULL|
>>>>       <https://www.manpagez.com/html/glib/glib-2.56.0/glib-Standard-Macros.php#NULL:CAPS>
>>>>       before calling this function.
>>>>
>>>>> To reproduce this crash at revision before or without patch 15/19, run commands:
>>>>> 1. ./qemu-img create -f parallels $TEST_IMG 1T
>>>>> 2. dd if=/dev/zero of=$TEST_IMG oseek=12  bs=1M count=128 conv=notrunc
>>>>> 3. ./qemu-img check -r leaks $TEST_IMG
>>>> Sorry, but I couldn't reproduce it. Reset to 14/19, made the three steps
>>>> and had such output:
>>>>
>>>>       $ ./qemu-img create -f parallels $TEST_IMG 1T
>>>>       Formatting 'test.img', fmt=parallels size=1099511627776
>>>>       cluster_size=1048576
>>>>
>>>>       $ dd if=/dev/zero of=$TEST_IMG seek=12  bs=1M count=128 conv=notrunc
>>>>       128+0 records in
>>>>       128+0 records out
>>>>       134217728 bytes (134 MB, 128 MiB) copied, 0.0797576 s, 1.7 GB/s
>>>>
>>>>       $ ./qemu-img check -r leaks $TEST_IMG
>>>>       Repairing space leaked at the end of the image 141557760
>>>>       The following inconsistencies were found and repaired:
>>>>
>>>>       135 leaked clusters
>>>>       0 corruptions
>>>>
>>>>       Double checking the fixed image now...
>>>>       No errors were found on the image.
>>>>       Image end offset: 5242880
>>> My comment regarding patch 15 is about 'check' operation is not able
>>> to detect leaked data anymore.
>>> So, after this patch I see:
>>>
>>> $ ./qemu-img info   leak.bin
>>> image: leak.bin
>>> file format: parallels
>>> virtual size: 1 TiB (1099511627776 bytes)
>>> disk size: 145 MiB
>>> Child node '/file':
>>>       filename: leak.bin
>>>       protocol type: file
>>>       file length: 146 MiB (153092096 bytes)
>>>       disk size: 145 MiB
>>>
>>> $ ./qemu-img check -r leaks leak.bin
>>> No errors were found on the image.
>>> Image end offset: 153092096
>>>
>>> After reverting this patch  'check' reports about:
>>> ERROR space leaked at the end of the image 148897792
>>>
>>> So, after reverting patch 15 I tried to repair such image
>>> and got abort trap.
>> Yes, I understand this part. OK, I think, I could place 16 patch before
>> 15 and
>> leaks would handle in the correct way at any point of the patch sequence.
>>> I rechecked with downloaded patches, rebuild from scratch and can tell
>>> that there is no abort on master branch, but it appears after applying
>>> patches[1-9].
>> Maybe I do something wrong, but I reset to the top of mainstream, applied
>> 1-9 patches, rebuilt QEMU and didn't see any abort.
>>
>>> Obviously It can be debugged and the reason is that
>>> parallels_fill_used_bitmap() returns after
>>>
>>>    s->used_bmap_size = DIV_ROUND_UP(payload_bytes, s->cluster_size);
>>>       if (s->used_bmap_size == 0) {
>>>           return 0;
>>>       }
>>>
>>> Because DIV_ROUND_UP(payload_bytes, s->cluster_size); gives a 0;
>>>
>>> So subsequent parallels_free_used_bitmap() called from
>>> parallels_close() causes an assert.
>>>
>>> So, the first invocation of parallels_free_used_bitmap is:
>>>     * frame #0: 0x0000000100091830 qemu-img`parallels_check_leak
>>> [inlined] parallels_free_used_bitmap(bs=0x0000000101011600) at
>>> parallels.c:263:33 [opt]
>>>       frame #1: 0x0000000100091830
>>> qemu-img`parallels_check_leak(bs=0x0000000101011600,
>>> res=0x000000016fdff5d8, fix=BDRV_FIX_LEAKS, explicit=true) at
>>> parallels.c:811:9 [opt]
>>>       frame #2: 0x0000000100090d80
>>> qemu-img`parallels_co_check(bs=0x0000000101011600,
>>> res=0x000000016fdff5d8, fix=BDRV_FIX_LEAKS) at parallels.c:1014:15
>>> [opt]
>>>       frame #3: 0x0000000100014f6c
>>> qemu-img`bdrv_co_check_entry(opaque=0x000000016fdff560) at
>>> block-gen.c:556:14 [opt]
>>>
>>> And the second generates abort from there:
>>>     * frame #0: 0x000000010008fef8 qemu-img`parallels_close [inlined]
>>> parallels_free_used_bitmap(bs=<unavailable>) at parallels.c:263:33
>> In this line we have:
>>
>>      BDRVParallelsState *s = bs->opaque;
>>
>> So there is only one possibility to abort - incorrect bs. I don't know
>> how it
>> could be possible.
>>
>>> [opt]
>>>       frame #1: 0x000000010008fef8
>>> qemu-img`parallels_close(bs=0x0000000101011600) at parallels.c:1501:5
>>> [opt]
>>>       frame #2: 0x0000000100019d3c qemu-img`bdrv_unref [inlined]
>>> bdrv_close(bs=0x0000000101011600) at block.c:5164:13 [opt]
>>>
>>> After the first parallels_free_used_bitmap(), there is an actual image
>>> truncation happens, so there is no payload at the moment of the next
>>> parallels_fill_used_bitmap(),
>>>
>>> PS: there are a chances that some patches were not applied clearly,
>>> I'll recheck this later.
>> I just reset to the mainstream top and apply 1-9 patches:
>>
>>      $ git reset --hard 2f3913f4b2ad74baeb5a6f1d36efbd9ecdf1057d
>>      HEAD is now at 2f3913f4b2 Merge tag 'for_upstream' of
>>      https://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging
>>      $ git am *.eml
>>      Applying: parallels: Move inactivation code to a separate function
>>      Applying: parallels: Add mark_unused() helper
>>      Applying: parallels: Move host clusters allocation to a separate
>>      function
>>      Applying: parallels: Set data_end value in parallels_check_leak()
>>      Applying: parallels: Recreate used bitmap in parallels_check_leak()
>>      Applying: parallels: Add a note about used bitmap in
>>      parallels_check_duplicate()
>>      Applying: parallels: Create used bitmap even if checks needed
>>      Applying: parallels: Make mark_used() and mark_unused() global functions
>>      Applying: parallels: Add dirty bitmaps saving
>>
>>> It would be nice if it was possible to fetch changes from some repo,
>>> rather than extracting  it from gmail.
>> You can fetch it here (branch "parallels") -
>> https://github.com/AlexanderIvanov-Virtuozzo/qemu.git
>>> Regards,
>>> Mike.
>> --
>> Best regards,
>> Alexander Ivanov
>>
> Thanks for the link. I've fetched your repo and reverted "parallels:
> Remove unnecessary data_end field" as it hides reproduction,
> because it makes 'check' blind for the case we are discussing.
> So the situation is the same:
> 1. parallels_open calls parallels_fill_used_bitmap(). A this time file
> size is 145M (i.e leaked clusters are there) and s->used_bmap_size =
> 139.
> 2  Then parallels_co_check()->parallels_check_leak () is invoked.
>       At the first parallels_check_leak calls
> bdrv_co_truncate(offset=5242880), that is true as we have only empty
> BAT on the image.
>       At this step image truncated to 5M i.e. contains only empty BAT.
>       So, on line 809 s->data_end = 10240 i.e 5M (10240<<9)
>        809:         s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
>
>        811:        parallels_free_used_bitmap(bs);
>        812:        ret = parallels_fill_used_bitmap(bs);
>
> Line 811 invalidates used_bmap and sets used_bmap_size to 0.
> parallels_fill_used_bitmap Invoked on line 812  returns 0, because
> payload_bytes = 0 (current file size 5M - s->data_start *
> BDRV_SECTOR_SIZE ),
> and s->used_bmap_size is NOT initialized.
>
> 3. parallels_close() invoked to finish work and exit process.
>     parallels_close() calls  parallels_free_used_bitmap() unconditionally.
>
> static void parallels_free_used_bitmap(BlockDriverState *bs)
> {
>      BDRVParallelsState *s = bs->opaque;
>      s->used_bmap_size = 0;
> ASSERT IS HERE >>>>  g_free(s->used_bmap);
Oh, now I see: s->used_bmap is not reset to NULL in
parallels_free_used_bitmap().
That's why I could not reproduce the bug - on my system, by chance,
s->used_bmap happened to contain NULL.

Maybe it would be better to set s->used_bmap to NULL in
parallels_free_used_bitmap() and let g_free() handle the NULL
argument properly?
> }
>
> The fix is trivial...
>
> if (s->used_bmap_size) {
>     g_free(s->used_bmap);
>     s->used_bmap_size = 0;
> }
>
> PS: I retuned to your HEAD. Killed gdb thus made image marked is
> incorrectly closed.
> But 'qemu-img check' only removed  incorrectly closed flags and didn't
> remove leaked clusters.
Will work on it.
>
> Regards,
> Mike.

-- 
Best regards,
Alexander Ivanov




Thread overview: 26+ messages
2023-10-02  8:57 [PATCH 00/19] parallels: Add full dirty bitmap support Alexander Ivanov
2023-10-02  8:57 ` [PATCH 01/19] parallels: Move inactivation code to a separate function Alexander Ivanov
2023-10-02  8:57 ` [PATCH 02/19] parallels: Add mark_unused() helper Alexander Ivanov
2023-10-02  8:57 ` [PATCH 03/19] parallels: Move host clusters allocation to a separate function Alexander Ivanov
2023-10-02  8:57 ` [PATCH 04/19] parallels: Set data_end value in parallels_check_leak() Alexander Ivanov
2023-10-02  8:57 ` [PATCH 05/19] parallels: Recreate used bitmap " Alexander Ivanov
2023-10-02  8:57 ` [PATCH 06/19] parallels: Add a note about used bitmap in parallels_check_duplicate() Alexander Ivanov
2023-10-02  8:57 ` [PATCH 07/19] parallels: Create used bitmap even if checks needed Alexander Ivanov
2023-10-02  8:57 ` [PATCH 08/19] parallels: Make mark_used() and mark_unused() global functions Alexander Ivanov
2023-10-02  8:57 ` [PATCH 09/19] parallels: Add dirty bitmaps saving Alexander Ivanov
2023-10-02  8:57 ` [PATCH 10/19] parallels: Let image extensions work in RW mode Alexander Ivanov
2023-10-02  8:57 ` [PATCH 11/19] parallels: Handle L1 entries equal to one Alexander Ivanov
2023-10-02  8:57 ` [PATCH 12/19] parallels: Make a loaded dirty bitmap persistent Alexander Ivanov
2023-10-02  8:57 ` [PATCH 13/19] parallels: Reverse a conditional in parallels_check_leak() to reduce indents Alexander Ivanov
2023-10-02  8:57 ` [PATCH 14/19] parallels: Truncate images on the last used cluster Alexander Ivanov
2023-10-02  8:57 ` [PATCH 15/19] parallels: Remove unnecessary data_end field Alexander Ivanov
2023-10-06 19:43   ` Mike Maslenkin
2023-10-07 10:18     ` Alexander Ivanov
2023-10-07 11:21       ` Mike Maslenkin
2023-10-07 14:30         ` Alexander Ivanov
2023-10-07 17:54           ` Mike Maslenkin
2023-10-08 13:54             ` Alexander Ivanov
2023-10-02  8:57 ` [PATCH 16/19] parallels: Check unused clusters in parallels_check_leak() Alexander Ivanov
2023-10-02  8:57 ` [PATCH 17/19] tests: Add parallels images support to test 165 Alexander Ivanov
2023-10-02  8:57 ` [PATCH 18/19] tests: Turned on 256, 299, 304 and block-status-cache for parallels format Alexander Ivanov
2023-10-02  8:57 ` [PATCH 19/19] tests: Add parallels format support to image-fleecing Alexander Ivanov
