* [PATCH v6 0/3] qcow2: advanced compression options
@ 2019-11-11 16:04 Andrey Shinkevich
  2019-11-11 16:04 ` [PATCH v6 1/3] block: introduce compress filter driver Andrey Shinkevich
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Andrey Shinkevich @ 2019-11-11 16:04 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: kwolf, vsementsov, armbru, mreitz, andrey.shinkevich, den

The compression filter driver is introduced, as suggested by Max.
A sample usage of the filter can be found in iotest 214.
Multiple clusters can now be written compressed, which is useful
for a backup job.

v6: Applied the new approach to writing compressed data. Removed the
    patch v5 4/4 with the block-stream compress test case from the
    series.
    Discussed in the email thread with the message ID
    <1571603828-185910-1-git-send-email-andrey.shinkevich@virtuozzo.com>

Andrey Shinkevich (3):
  block: introduce compress filter driver
  qcow2: Allow writing compressed data of multiple clusters
  tests/qemu-iotests: add case to write compressed data of multiple
    clusters

 block/Makefile.objs        |   1 +
 block/filter-compress.c    | 212 +++++++++++++++++++++++++++++++++++++++++++++
 block/qcow2.c              | 102 ++++++++++++++++------
 qapi/block-core.json       |  10 ++-
 tests/qemu-iotests/214     |  43 +++++++++
 tests/qemu-iotests/214.out |  14 +++
 6 files changed, 351 insertions(+), 31 deletions(-)
 create mode 100644 block/filter-compress.c

-- 
1.8.3.1




* [PATCH v6 1/3] block: introduce compress filter driver
  2019-11-11 16:04 [PATCH v6 0/3] qcow2: advanced compression options Andrey Shinkevich
@ 2019-11-11 16:04 ` Andrey Shinkevich
  2019-11-11 20:47   ` Eric Blake
                     ` (2 more replies)
  2019-11-11 16:04 ` [PATCH v6 2/3] qcow2: Allow writing compressed data of multiple clusters Andrey Shinkevich
  2019-11-11 16:04 ` [PATCH v6 3/3] tests/qemu-iotests: add case to write " Andrey Shinkevich
  2 siblings, 3 replies; 10+ messages in thread
From: Andrey Shinkevich @ 2019-11-11 16:04 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: kwolf, vsementsov, armbru, mreitz, andrey.shinkevich, den

Allow writing all data compressed through the filter driver.
The written data are aligned to the cluster size.
With QEMU's current implementation, such data can be written to
unallocated clusters only. The filter may be used for a backup job.
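For reference, the filter is attached on top of a format node via nested
blockdev options, as test 214 below does with a 'json:' pseudo-filename.
A small Python sketch (illustrative only, not part of the patch;
'test.qcow2' is a placeholder filename) that builds such a filename:

```python
import json

# Nested blockdev options: compress filter -> qcow2 -> file.
# 'test.qcow2' is a placeholder, not taken from the patch.
opts = {
    'driver': 'compress',
    'file': {'driver': 'qcow2',
             'file': {'driver': 'file',
                      'filename': 'test.qcow2'}}}

# qemu-io and qemu-img accept this as a 'json:' pseudo-filename.
print('json:' + json.dumps(opts, sort_keys=True))
```

The same structure can be passed with --image-opts as dotted keys
(driver=compress,file.driver=qcow2,...), as the iotest shows.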

Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
---
 block/Makefile.objs     |   1 +
 block/filter-compress.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++
 qapi/block-core.json    |  10 ++-
 3 files changed, 219 insertions(+), 4 deletions(-)
 create mode 100644 block/filter-compress.c

diff --git a/block/Makefile.objs b/block/Makefile.objs
index e394fe0..330529b 100644
--- a/block/Makefile.objs
+++ b/block/Makefile.objs
@@ -43,6 +43,7 @@ block-obj-y += crypto.o
 
 block-obj-y += aio_task.o
 block-obj-y += backup-top.o
+block-obj-y += filter-compress.o
 
 common-obj-y += stream.o
 
diff --git a/block/filter-compress.c b/block/filter-compress.c
new file mode 100644
index 0000000..a7b0337
--- /dev/null
+++ b/block/filter-compress.c
@@ -0,0 +1,212 @@
+/*
+ * Compress filter block driver
+ *
+ * Copyright (c) 2019 Virtuozzo International GmbH
+ *
+ * Author:
+ *   Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
+ *   (based on block/copy-on-read.c by Max Reitz)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 or
+ * (at your option) any later version of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "block/block_int.h"
+#include "qemu/module.h"
+
+
+static int zip_open(BlockDriverState *bs, QDict *options, int flags,
+                     Error **errp)
+{
+    bs->backing = bdrv_open_child(NULL, options, "file", bs, &child_file, false,
+                                  errp);
+    if (!bs->backing) {
+        return -EINVAL;
+    }
+
+    bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
+        BDRV_REQ_WRITE_COMPRESSED |
+        (BDRV_REQ_FUA & bs->backing->bs->supported_write_flags);
+
+    bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
+        ((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
+            bs->backing->bs->supported_zero_flags);
+
+    return 0;
+}
+
+
+#define PERM_PASSTHROUGH (BLK_PERM_CONSISTENT_READ \
+                          | BLK_PERM_WRITE \
+                          | BLK_PERM_RESIZE)
+#define PERM_UNCHANGED (BLK_PERM_ALL & ~PERM_PASSTHROUGH)
+
+static void zip_child_perm(BlockDriverState *bs, BdrvChild *c,
+                            const BdrvChildRole *role,
+                            BlockReopenQueue *reopen_queue,
+                            uint64_t perm, uint64_t shared,
+                            uint64_t *nperm, uint64_t *nshared)
+{
+    *nperm = perm & PERM_PASSTHROUGH;
+    *nshared = (shared & PERM_PASSTHROUGH) | PERM_UNCHANGED;
+
+    /*
+     * We must not request write permissions for an inactive node, the child
+     * cannot provide it.
+     */
+    if (!(bs->open_flags & BDRV_O_INACTIVE)) {
+        *nperm |= BLK_PERM_WRITE_UNCHANGED;
+    }
+}
+
+
+static int64_t zip_getlength(BlockDriverState *bs)
+{
+    return bdrv_getlength(bs->backing->bs);
+}
+
+
+static int coroutine_fn zip_co_truncate(BlockDriverState *bs, int64_t offset,
+                                         bool exact, PreallocMode prealloc,
+                                         Error **errp)
+{
+    return bdrv_co_truncate(bs->backing, offset, exact, prealloc, errp);
+}
+
+
+static int coroutine_fn zip_co_preadv(BlockDriverState *bs,
+                                       uint64_t offset, uint64_t bytes,
+                                       QEMUIOVector *qiov, int flags)
+{
+    return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
+}
+
+
+static int coroutine_fn zip_co_preadv_part(BlockDriverState *bs,
+                                            uint64_t offset, uint64_t bytes,
+                                            QEMUIOVector *qiov,
+                                            size_t qiov_offset,
+                                            int flags)
+{
+    return bdrv_co_preadv_part(bs->backing, offset, bytes, qiov, qiov_offset,
+                               flags);
+}
+
+
+static int coroutine_fn zip_co_pwritev(BlockDriverState *bs,
+                                        uint64_t offset, uint64_t bytes,
+                                        QEMUIOVector *qiov, int flags)
+{
+    return bdrv_co_pwritev(bs->backing, offset, bytes, qiov,
+                           flags | BDRV_REQ_WRITE_COMPRESSED);
+}
+
+
+static int coroutine_fn zip_co_pwritev_part(BlockDriverState *bs,
+                                             uint64_t offset, uint64_t bytes,
+                                             QEMUIOVector *qiov,
+                                             size_t qiov_offset, int flags)
+{
+    return bdrv_co_pwritev_part(bs->backing, offset, bytes, qiov, qiov_offset,
+                                flags | BDRV_REQ_WRITE_COMPRESSED);
+}
+
+
+static int coroutine_fn zip_co_pwrite_zeroes(BlockDriverState *bs,
+                                              int64_t offset, int bytes,
+                                              BdrvRequestFlags flags)
+{
+    return bdrv_co_pwrite_zeroes(bs->backing, offset, bytes, flags);
+}
+
+
+static int coroutine_fn zip_co_pdiscard(BlockDriverState *bs,
+                                         int64_t offset, int bytes)
+{
+    return bdrv_co_pdiscard(bs->backing, offset, bytes);
+}
+
+
+static void zip_refresh_limits(BlockDriverState *bs, Error **errp)
+{
+    BlockDriverInfo bdi;
+    int ret;
+
+    if (!bs->backing) {
+        return;
+    }
+
+    ret = bdrv_get_info(bs->backing->bs, &bdi);
+    if (ret < 0 || bdi.cluster_size == 0) {
+        return;
+    }
+
+    bs->backing->bs->bl.request_alignment = bdi.cluster_size;
+    bs->backing->bs->bl.max_transfer = bdi.cluster_size;
+}
+
+
+static void zip_eject(BlockDriverState *bs, bool eject_flag)
+{
+    bdrv_eject(bs->backing->bs, eject_flag);
+}
+
+
+static void zip_lock_medium(BlockDriverState *bs, bool locked)
+{
+    bdrv_lock_medium(bs->backing->bs, locked);
+}
+
+
+static bool zip_recurse_is_first_non_filter(BlockDriverState *bs,
+                                             BlockDriverState *candidate)
+{
+    return bdrv_recurse_is_first_non_filter(bs->backing->bs, candidate);
+}
+
+
+static BlockDriver bdrv_compress = {
+    .format_name                        = "compress",
+
+    .bdrv_open                          = zip_open,
+    .bdrv_child_perm                    = zip_child_perm,
+
+    .bdrv_getlength                     = zip_getlength,
+    .bdrv_co_truncate                   = zip_co_truncate,
+
+    .bdrv_co_preadv                     = zip_co_preadv,
+    .bdrv_co_preadv_part                = zip_co_preadv_part,
+    .bdrv_co_pwritev                    = zip_co_pwritev,
+    .bdrv_co_pwritev_part               = zip_co_pwritev_part,
+    .bdrv_co_pwrite_zeroes              = zip_co_pwrite_zeroes,
+    .bdrv_co_pdiscard                   = zip_co_pdiscard,
+    .bdrv_refresh_limits                = zip_refresh_limits,
+
+    .bdrv_eject                         = zip_eject,
+    .bdrv_lock_medium                   = zip_lock_medium,
+
+    .bdrv_co_block_status               = bdrv_co_block_status_from_backing,
+
+    .bdrv_recurse_is_first_non_filter   = zip_recurse_is_first_non_filter,
+
+    .has_variable_length                = true,
+    .is_filter                          = true,
+};
+
+static void bdrv_compress_init(void)
+{
+    bdrv_register(&bdrv_compress);
+}
+
+block_init(bdrv_compress_init);
diff --git a/qapi/block-core.json b/qapi/block-core.json
index aa97ee2..33d8cd8 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -2884,15 +2884,16 @@
 # @copy-on-read: Since 3.0
 # @blklogwrites: Since 3.0
 # @blkreplay: Since 4.2
+# @compress: Since 4.2
 #
 # Since: 2.9
 ##
 { 'enum': 'BlockdevDriver',
   'data': [ 'blkdebug', 'blklogwrites', 'blkreplay', 'blkverify', 'bochs',
-            'cloop', 'copy-on-read', 'dmg', 'file', 'ftp', 'ftps', 'gluster',
-            'host_cdrom', 'host_device', 'http', 'https', 'iscsi', 'luks',
-            'nbd', 'nfs', 'null-aio', 'null-co', 'nvme', 'parallels', 'qcow',
-            'qcow2', 'qed', 'quorum', 'raw', 'rbd',
+            'cloop', 'copy-on-read', 'compress', 'dmg', 'file', 'ftp', 'ftps',
+            'gluster', 'host_cdrom', 'host_device', 'http', 'https', 'iscsi',
+            'luks', 'nbd', 'nfs', 'null-aio', 'null-co', 'nvme', 'parallels',
+            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'rbd',
             { 'name': 'replication', 'if': 'defined(CONFIG_REPLICATION)' },
             'sheepdog',
             'ssh', 'throttle', 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat', 'vxhs' ] }
@@ -4045,6 +4046,7 @@
       'bochs':      'BlockdevOptionsGenericFormat',
       'cloop':      'BlockdevOptionsGenericFormat',
       'copy-on-read':'BlockdevOptionsGenericFormat',
+      'compress':   'BlockdevOptionsGenericFormat',
       'dmg':        'BlockdevOptionsGenericFormat',
       'file':       'BlockdevOptionsFile',
       'ftp':        'BlockdevOptionsCurlFtp',
-- 
1.8.3.1




* [PATCH v6 2/3] qcow2: Allow writing compressed data of multiple clusters
  2019-11-11 16:04 [PATCH v6 0/3] qcow2: advanced compression options Andrey Shinkevich
  2019-11-11 16:04 ` [PATCH v6 1/3] block: introduce compress filter driver Andrey Shinkevich
@ 2019-11-11 16:04 ` Andrey Shinkevich
  2019-11-11 16:04 ` [PATCH v6 3/3] tests/qemu-iotests: add case to write " Andrey Shinkevich
  2 siblings, 0 replies; 10+ messages in thread
From: Andrey Shinkevich @ 2019-11-11 16:04 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: kwolf, vsementsov, armbru, mreitz, andrey.shinkevich, den

QEMU currently supports writing compressed data only in chunks equal
to one cluster in size. This patch allows writing QCOW2 compressed
data that exceeds one cluster. The buffered data is now split into
separate clusters, each written compressed using the existing
functionality.
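The splitting loop in qcow2_co_pwritev_compressed_part can be sketched
in Python (illustrative only; the real code dispatches each chunk as an
AioTask and handles the short final chunk at end of image):

```python
def compressed_chunks(offset, nbytes, cluster_size):
    """Split a compressed-write request into per-cluster chunks.

    Mirrors the loop in qcow2_co_pwritev_compressed_part: every chunk
    is one full cluster, except possibly the final one when the image
    size is not cluster aligned.
    """
    chunks = []
    while nbytes > 0:
        n = min(nbytes, cluster_size)
        chunks.append((offset, n))
        offset += n
        nbytes -= n
    return chunks

# 3.5 clusters of 64 KiB, as written by test 214:
print(compressed_chunks(0, 3 * 0x10000 + 0x8000, 0x10000))
```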

Suggested-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/qcow2.c | 102 ++++++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 75 insertions(+), 27 deletions(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index 7c18721..0e03a1a 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -4222,10 +4222,8 @@ fail:
     return ret;
 }
 
-/* XXX: put compressed sectors first, then all the cluster aligned
-   tables to avoid losing bytes in alignment */
 static coroutine_fn int
-qcow2_co_pwritev_compressed_part(BlockDriverState *bs,
+qcow2_co_pwritev_compressed_task(BlockDriverState *bs,
                                  uint64_t offset, uint64_t bytes,
                                  QEMUIOVector *qiov, size_t qiov_offset)
 {
@@ -4235,32 +4233,11 @@ qcow2_co_pwritev_compressed_part(BlockDriverState *bs,
     uint8_t *buf, *out_buf;
     uint64_t cluster_offset;
 
-    if (has_data_file(bs)) {
-        return -ENOTSUP;
-    }
-
-    if (bytes == 0) {
-        /* align end of file to a sector boundary to ease reading with
-           sector based I/Os */
-        int64_t len = bdrv_getlength(bs->file->bs);
-        if (len < 0) {
-            return len;
-        }
-        return bdrv_co_truncate(bs->file, len, false, PREALLOC_MODE_OFF, NULL);
-    }
-
-    if (offset_into_cluster(s, offset)) {
-        return -EINVAL;
-    }
+    assert(bytes == s->cluster_size || (bytes < s->cluster_size &&
+           (offset + bytes == bs->total_sectors << BDRV_SECTOR_BITS)));
 
     buf = qemu_blockalign(bs, s->cluster_size);
-    if (bytes != s->cluster_size) {
-        if (bytes > s->cluster_size ||
-            offset + bytes != bs->total_sectors << BDRV_SECTOR_BITS)
-        {
-            qemu_vfree(buf);
-            return -EINVAL;
-        }
+    if (bytes < s->cluster_size) {
         /* Zero-pad last write if image size is not cluster aligned */
         memset(buf + bytes, 0, s->cluster_size - bytes);
     }
@@ -4309,6 +4286,77 @@ fail:
     return ret;
 }
 
+static coroutine_fn int qcow2_co_pwritev_compressed_task_entry(AioTask *task)
+{
+    Qcow2AioTask *t = container_of(task, Qcow2AioTask, task);
+
+    assert(!t->cluster_type && !t->l2meta);
+
+    return qcow2_co_pwritev_compressed_task(t->bs, t->offset, t->bytes, t->qiov,
+                                            t->qiov_offset);
+}
+
+/*
+ * XXX: put compressed sectors first, then all the cluster aligned
+ * tables to avoid losing bytes in alignment
+ */
+static coroutine_fn int
+qcow2_co_pwritev_compressed_part(BlockDriverState *bs,
+                                 uint64_t offset, uint64_t bytes,
+                                 QEMUIOVector *qiov, size_t qiov_offset)
+{
+    BDRVQcow2State *s = bs->opaque;
+    AioTaskPool *aio = NULL;
+    int ret = 0;
+
+    if (has_data_file(bs)) {
+        return -ENOTSUP;
+    }
+
+    if (bytes == 0) {
+        /*
+         * align end of file to a sector boundary to ease reading with
+         * sector based I/Os
+         */
+        int64_t len = bdrv_getlength(bs->file->bs);
+        if (len < 0) {
+            return len;
+        }
+        return bdrv_co_truncate(bs->file, len, false, PREALLOC_MODE_OFF, NULL);
+    }
+
+    if (offset_into_cluster(s, offset)) {
+        return -EINVAL;
+    }
+
+    while (bytes && aio_task_pool_status(aio) == 0) {
+        uint64_t chunk_size = MIN(bytes, s->cluster_size);
+
+        if (!aio && chunk_size != bytes) {
+            aio = aio_task_pool_new(QCOW2_MAX_WORKERS);
+        }
+
+        ret = qcow2_add_task(bs, aio, qcow2_co_pwritev_compressed_task_entry,
+                             0, 0, offset, chunk_size, qiov, qiov_offset, NULL);
+        if (ret < 0) {
+            break;
+        }
+        qiov_offset += chunk_size;
+        offset += chunk_size;
+        bytes -= chunk_size;
+    }
+
+    if (aio) {
+        aio_task_pool_wait_all(aio);
+        if (ret == 0) {
+            ret = aio_task_pool_status(aio);
+        }
+        g_free(aio);
+    }
+
+    return ret;
+}
+
 static int coroutine_fn
 qcow2_co_preadv_compressed(BlockDriverState *bs,
                            uint64_t file_cluster_offset,
-- 
1.8.3.1




* [PATCH v6 3/3] tests/qemu-iotests: add case to write compressed data of multiple clusters
  2019-11-11 16:04 [PATCH v6 0/3] qcow2: advanced compression options Andrey Shinkevich
  2019-11-11 16:04 ` [PATCH v6 1/3] block: introduce compress filter driver Andrey Shinkevich
  2019-11-11 16:04 ` [PATCH v6 2/3] qcow2: Allow writing compressed data of multiple clusters Andrey Shinkevich
@ 2019-11-11 16:04 ` Andrey Shinkevich
  2 siblings, 0 replies; 10+ messages in thread
From: Andrey Shinkevich @ 2019-11-11 16:04 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: kwolf, vsementsov, armbru, mreitz, andrey.shinkevich, den

Add a case to iotest 214 that checks the possibility of writing
compressed data larger than one cluster. The test case involves the
compress filter driver, showing a sample usage of it.
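The byte counts the test expects can be checked with a quick sketch of
the same arithmetic the script uses:

```python
cluster_size = 0x10000                   # 64 KiB, set by _make_test_img
uncompressed = 8 * cluster_size          # first write: 8 full clusters
compressed = 3 * cluster_size + cluster_size // 2   # 3.5 clusters

# These match the "wrote .../..." lines in 214.out below.
print(uncompressed, compressed)  # 524288 229376
```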

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
---
 tests/qemu-iotests/214     | 43 +++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/214.out | 14 ++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/tests/qemu-iotests/214 b/tests/qemu-iotests/214
index 21ec8a2..5012112 100755
--- a/tests/qemu-iotests/214
+++ b/tests/qemu-iotests/214
@@ -89,6 +89,49 @@ _check_test_img -r all
 $QEMU_IO -c "read  -P 0x11  0 4M" "$TEST_IMG" 2>&1 | _filter_qemu_io | _filter_testdir
 $QEMU_IO -c "read  -P 0x22 4M 4M" "$TEST_IMG" 2>&1 | _filter_qemu_io | _filter_testdir
 
+echo
+echo "=== Write compressed data of multiple clusters ==="
+echo
+cluster_size=0x10000
+_make_test_img 2M -o cluster_size=$cluster_size
+
+echo "Write uncompressed data:"
+let data_size="8 * $cluster_size"
+$QEMU_IO -c "write -P 0xaa 0 $data_size" "$TEST_IMG" \
+         2>&1 | _filter_qemu_io | _filter_testdir
+sizeA=$($QEMU_IMG info --output=json "$TEST_IMG" |
+        sed -n '/"actual-size":/ s/[^0-9]//gp')
+
+_make_test_img 2M -o cluster_size=$cluster_size
+echo "Write compressed data:"
+let data_size="3 * $cluster_size + ($cluster_size / 2)"
+# Set compress=on. That will align the written data
+# by the cluster size and will write them compressed.
+QEMU_IO_OPTIONS=$QEMU_IO_OPTIONS_NO_FMT \
+$QEMU_IO -c "write -P 0xbb 0 $data_size" --image-opts \
+         "driver=compress,file.driver=$IMGFMT,file.file.driver=file,file.file.filename=$TEST_IMG" \
+         2>&1 | _filter_qemu_io | _filter_testdir
+
+let offset="4 * $cluster_size"
+QEMU_IO_OPTIONS=$QEMU_IO_OPTIONS_NO_FMT \
+$QEMU_IO -c "write -P 0xcc $offset $data_size" "json:{\
+    'driver': 'compress',
+    'file': {'driver': '$IMGFMT',
+             'file': {'driver': 'file',
+                      'filename': '$TEST_IMG'}}}" | \
+                          _filter_qemu_io | _filter_testdir
+
+sizeB=$($QEMU_IMG info --output=json "$TEST_IMG" |
+        sed -n '/"actual-size":/ s/[^0-9]//gp')
+
+if [ $sizeA -le $sizeB ]
+then
+    echo "Compression ERROR"
+fi
+
+$QEMU_IMG check --output=json "$TEST_IMG" |
+          sed -n 's/,$//; /"compressed-clusters":/ s/^ *//p'
+
 # success, all done
 echo '*** done'
 rm -f $seq.full
diff --git a/tests/qemu-iotests/214.out b/tests/qemu-iotests/214.out
index 0fcd8dc..4a2ec33 100644
--- a/tests/qemu-iotests/214.out
+++ b/tests/qemu-iotests/214.out
@@ -32,4 +32,18 @@ read 4194304/4194304 bytes at offset 0
 4 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 read 4194304/4194304 bytes at offset 4194304
 4 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+=== Write compressed data of multiple clusters ===
+
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2097152
+Write uncompressed data:
+wrote 524288/524288 bytes at offset 0
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2097152
+Write compressed data:
+wrote 229376/229376 bytes at offset 0
+224 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 229376/229376 bytes at offset 262144
+224 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+"compressed-clusters": 8
 *** done
-- 
1.8.3.1




* Re: [PATCH v6 1/3] block: introduce compress filter driver
  2019-11-11 16:04 ` [PATCH v6 1/3] block: introduce compress filter driver Andrey Shinkevich
@ 2019-11-11 20:47   ` Eric Blake
  2019-11-12  8:57     ` Vladimir Sementsov-Ogievskiy
  2019-11-12  9:39   ` Kevin Wolf
  2019-11-12 10:33   ` Vladimir Sementsov-Ogievskiy
  2 siblings, 1 reply; 10+ messages in thread
From: Eric Blake @ 2019-11-11 20:47 UTC (permalink / raw)
  To: Andrey Shinkevich, qemu-devel, qemu-block
  Cc: kwolf, den, vsementsov, armbru, mreitz

On 11/11/19 10:04 AM, Andrey Shinkevich wrote:
> Allow writing all the data compressed through the filter driver.
> The written data will be aligned by the cluster size.
> Based on the QEMU current implementation, that data can be written to
> unallocated clusters only. May be used for a backup job.
> 
> Suggested-by: Max Reitz <mreitz@redhat.com>
> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
> ---
>   block/Makefile.objs     |   1 +
>   block/filter-compress.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++
>   qapi/block-core.json    |  10 ++-
>   3 files changed, 219 insertions(+), 4 deletions(-)
>   create mode 100644 block/filter-compress.c

> +++ b/qapi/block-core.json
> @@ -2884,15 +2884,16 @@
>   # @copy-on-read: Since 3.0
>   # @blklogwrites: Since 3.0
>   # @blkreplay: Since 4.2
> +# @compress: Since 4.2

Are we still trying to get this in 4.2, even though soft freeze is
past? Or are we going to have to defer it to 5.0 as a new feature?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




* Re: [PATCH v6 1/3] block: introduce compress filter driver
  2019-11-11 20:47   ` Eric Blake
@ 2019-11-12  8:57     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-11-12  8:57 UTC (permalink / raw)
  To: Eric Blake, Andrey Shinkevich, qemu-devel, qemu-block
  Cc: kwolf, Denis Lunev, armbru, mreitz

11.11.2019 23:47, Eric Blake wrote:
> On 11/11/19 10:04 AM, Andrey Shinkevich wrote:
>> Allow writing all the data compressed through the filter driver.
>> The written data will be aligned by the cluster size.
>> Based on the QEMU current implementation, that data can be written to
>> unallocated clusters only. May be used for a backup job.
>>
>> Suggested-by: Max Reitz <mreitz@redhat.com>
>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>> ---
>>   block/Makefile.objs     |   1 +
>>   block/filter-compress.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++
>>   qapi/block-core.json    |  10 ++-
>>   3 files changed, 219 insertions(+), 4 deletions(-)
>>   create mode 100644 block/filter-compress.c
> 
>> +++ b/qapi/block-core.json
>> @@ -2884,15 +2884,16 @@
>>   # @copy-on-read: Since 3.0
>>   # @blklogwrites: Since 3.0
>>   # @blkreplay: Since 4.2
>> +# @compress: Since 4.2
> 
> Are we still trying to get this in 4.2, even though soft freeze is past?  Or are we going to have to defer it to 5.0 as a new feature?
> 

5.0 of course

-- 
Best regards,
Vladimir



* Re: [PATCH v6 1/3] block: introduce compress filter driver
  2019-11-11 16:04 ` [PATCH v6 1/3] block: introduce compress filter driver Andrey Shinkevich
  2019-11-11 20:47   ` Eric Blake
@ 2019-11-12  9:39   ` Kevin Wolf
  2019-11-12 10:07     ` Andrey Shinkevich
  2019-11-12 10:33   ` Vladimir Sementsov-Ogievskiy
  2 siblings, 1 reply; 10+ messages in thread
From: Kevin Wolf @ 2019-11-12  9:39 UTC (permalink / raw)
  To: Andrey Shinkevich; +Cc: vsementsov, qemu-block, armbru, qemu-devel, den, mreitz

Am 11.11.2019 um 17:04 hat Andrey Shinkevich geschrieben:
> Allow writing all the data compressed through the filter driver.
> The written data will be aligned by the cluster size.
> Based on the QEMU current implementation, that data can be written to
> unallocated clusters only. May be used for a backup job.
> 
> Suggested-by: Max Reitz <mreitz@redhat.com>
> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

> +static BlockDriver bdrv_compress = {
> +    .format_name                        = "compress",
> +
> +    .bdrv_open                          = zip_open,
> +    .bdrv_child_perm                    = zip_child_perm,

Why do you call the functions zip_* when the driver is called compress?
I think zip would be a driver for zip archives, which we don't use here.

> +    .bdrv_getlength                     = zip_getlength,
> +    .bdrv_co_truncate                   = zip_co_truncate,
> +
> +    .bdrv_co_preadv                     = zip_co_preadv,
> +    .bdrv_co_preadv_part                = zip_co_preadv_part,
> +    .bdrv_co_pwritev                    = zip_co_pwritev,
> +    .bdrv_co_pwritev_part               = zip_co_pwritev_part,

If you implement .bdrv_co_preadv/pwritev_part, isn't the implementation
of .bdrv_co_preadv/pwritev (without _part) dead code?

> +    .bdrv_co_pwrite_zeroes              = zip_co_pwrite_zeroes,
> +    .bdrv_co_pdiscard                   = zip_co_pdiscard,
> +    .bdrv_refresh_limits                = zip_refresh_limits,
> +
> +    .bdrv_eject                         = zip_eject,
> +    .bdrv_lock_medium                   = zip_lock_medium,
> +
> +    .bdrv_co_block_status               = bdrv_co_block_status_from_backing,

Why not use bs->file? (Well, apart from the still not merged filter
series by Max...)

> +    .bdrv_recurse_is_first_non_filter   = zip_recurse_is_first_non_filter,
> +
> +    .has_variable_length                = true,
> +    .is_filter                          = true,
> +};

Kevin




* Re: [PATCH v6 1/3] block: introduce compress filter driver
  2019-11-12  9:39   ` Kevin Wolf
@ 2019-11-12 10:07     ` Andrey Shinkevich
  2019-11-12 10:24       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 10+ messages in thread
From: Andrey Shinkevich @ 2019-11-12 10:07 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Vladimir Sementsov-Ogievskiy, Denis Lunev, qemu-block, armbru,
	qemu-devel, mreitz

On 12/11/2019 12:39, Kevin Wolf wrote:
> Am 11.11.2019 um 17:04 hat Andrey Shinkevich geschrieben:
>> Allow writing all the data compressed through the filter driver.
>> The written data will be aligned by the cluster size.
>> Based on the QEMU current implementation, that data can be written to
>> unallocated clusters only. May be used for a backup job.
>>
>> Suggested-by: Max Reitz <mreitz@redhat.com>
>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
> 
>> +static BlockDriver bdrv_compress = {
>> +    .format_name                        = "compress",
>> +
>> +    .bdrv_open                          = zip_open,
>> +    .bdrv_child_perm                    = zip_child_perm,
> 
> Why do you call the functions zip_* when the driver is called compress?
> I think zip would be a driver for zip archives, which we don't use here.
> 

Kevin,
Thanks for your response.
I was trying to make my mind up with a short form for 'compress'.
I will change the 'zip' for something like 'compr'.

>> +    .bdrv_getlength                     = zip_getlength,
>> +    .bdrv_co_truncate                   = zip_co_truncate,
>> +
>> +    .bdrv_co_preadv                     = zip_co_preadv,
>> +    .bdrv_co_preadv_part                = zip_co_preadv_part,
>> +    .bdrv_co_pwritev                    = zip_co_pwritev,
>> +    .bdrv_co_pwritev_part               = zip_co_pwritev_part,
> 
> If you implement .bdrv_co_preadv/pwritev_part, isn't the implementation
> of .bdrv_co_preadv/pwritev (without _part) dead code?
> 

Understood and will remove the dead path.

>> +    .bdrv_co_pwrite_zeroes              = zip_co_pwrite_zeroes,
>> +    .bdrv_co_pdiscard                   = zip_co_pdiscard,
>> +    .bdrv_refresh_limits                = zip_refresh_limits,
>> +
>> +    .bdrv_eject                         = zip_eject,
>> +    .bdrv_lock_medium                   = zip_lock_medium,
>> +
>> +    .bdrv_co_block_status               = bdrv_co_block_status_from_backing,
> 
> Why not use bs->file? (Well, apart from the still not merged filter
> series by Max...)
> 

We need to keep the backing chain unbroken with the filter inserted,
so the backing child must not be NULL. That is necessary for the
backup job, for instance. When I initialized both children pointing
to the same node, the code did not work properly. I will have to
reproduce it again to tell you exactly what happened, and I would
appreciate your advice about a proper design.

Andrey

>> +    .bdrv_recurse_is_first_non_filter   = zip_recurse_is_first_non_filter,
>> +
>> +    .has_variable_length                = true,
>> +    .is_filter                          = true,
>> +};
> 
> Kevin
> 

-- 
With the best regards,
Andrey Shinkevich



* Re: [PATCH v6 1/3] block: introduce compress filter driver
  2019-11-12 10:07     ` Andrey Shinkevich
@ 2019-11-12 10:24       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-11-12 10:24 UTC (permalink / raw)
  To: Andrey Shinkevich, Kevin Wolf
  Cc: Denis Lunev, qemu-block, armbru, qemu-devel, mreitz

12.11.2019 13:07, Andrey Shinkevich wrote:
> On 12/11/2019 12:39, Kevin Wolf wrote:
>> Am 11.11.2019 um 17:04 hat Andrey Shinkevich geschrieben:
>>> Allow writing all the data compressed through the filter driver.
>>> The written data will be aligned by the cluster size.
>>> Based on the QEMU current implementation, that data can be written to
>>> unallocated clusters only. May be used for a backup job.
>>>
>>> Suggested-by: Max Reitz <mreitz@redhat.com>
>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>
>>> +static BlockDriver bdrv_compress = {
>>> +    .format_name                        = "compress",
>>> +
>>> +    .bdrv_open                          = zip_open,
>>> +    .bdrv_child_perm                    = zip_child_perm,
>>
>> Why do you call the functions zip_* when the driver is called compress?
>> I think zip would be a driver for zip archives, which we don't use here.
>>
> 
> Kevin,
> Thanks for your response.
> I was trying to make my mind up with a short form for 'compress'.
> I will change the 'zip' for something like 'compr'.

I'd keep it compress, it sounds better.

> 
>>> +    .bdrv_getlength                     = zip_getlength,
>>> +    .bdrv_co_truncate                   = zip_co_truncate,
>>> +
>>> +    .bdrv_co_preadv                     = zip_co_preadv,
>>> +    .bdrv_co_preadv_part                = zip_co_preadv_part,
>>> +    .bdrv_co_pwritev                    = zip_co_pwritev,
>>> +    .bdrv_co_pwritev_part               = zip_co_pwritev_part,
>>
>> If you implement .bdrv_co_preadv/pwritev_part, isn't the implementation
>> of .bdrv_co_preadv/pwritev (without _part) dead code?
>>
> 
> Understood and will remove the dead path.
> 
>>> +    .bdrv_co_pwrite_zeroes              = zip_co_pwrite_zeroes,
>>> +    .bdrv_co_pdiscard                   = zip_co_pdiscard,
>>> +    .bdrv_refresh_limits                = zip_refresh_limits,
>>> +
>>> +    .bdrv_eject                         = zip_eject,
>>> +    .bdrv_lock_medium                   = zip_lock_medium,
>>> +
>>> +    .bdrv_co_block_status               = bdrv_co_block_status_from_backing,
>>
>> Why not use bs->file? (Well, apart from the still not merged filter
>> series by Max...)
>>
> 
> We need to keep a backing chain unbroken with the filter inserted. So,
> the backing child should not be zero. It is necessary for the backup
> job, for instance. When I initialized both children pointing to the same
> node, the code didn't work properly. I have to reproduce it again to
> tell you exactly what happened then and will appreciate your advice
> about a proper design.
> 
> Andrey


file-child based filters are a pain: they need Max's 42-patch series (which
seems postponed now :() to work correctly, or at least more correctly than now.
file-child based filters break backing chains, while backing-child based ones
work well. So, I don't know of any benefit of file-child based filters, and I
think there is no reason to create new ones. If in the future we need
file-child support in the filter for some reason, it is simple to add (the
filter will still have only one child, but it may be backing or file, as
requested by the user).

So, I propose to start with a backing child, which works better, and add
file-child support in the future if needed.
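
The "single child, backing or file" idea could be sketched roughly like this
(a toy model for illustration only, not the QEMU block-layer API; all names
here are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of a filter that keeps exactly one child, whose role
 * (backing or file) is chosen by the user at open time. */
typedef enum { CHILD_BACKING, CHILD_FILE } ChildRole;

typedef struct FilterState {
    void *child;    /* the single filtered node */
    ChildRole role; /* how the child is attached */
} FilterState;

/* Returns 0 on success, -1 if no child node was given. */
static int filter_open(FilterState *s, void *node, const char *role_opt)
{
    if (node == NULL) {
        return -1;
    }
    s->child = node;
    s->role = (strcmp(role_opt, "file") == 0) ? CHILD_FILE : CHILD_BACKING;
    return 0;
}
```

With a backing child the filter stays transparent to backing-chain walks,
which is the property the paragraph above relies on.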

Also, note that the backup-top filter uses a backing child, and there is a
reason to make it a public filter (to implement image fleecing without a
backup job), so we'll have a public backing-child based filter anyway.

Also, we have a pending series about using the COR filter in block-stream (it
breaks the backing chain, as it is file-child based), and now I think that the
simplest way to fix it is to support a backing child in block-stream (so the
block-stream job will create the COR filter with a backing child instead of a
file child).

> 
>>> +    .bdrv_recurse_is_first_non_filter   = zip_recurse_is_first_non_filter,
>>> +
>>> +    .has_variable_length                = true,
>>> +    .is_filter                          = true,
>>> +};
>>
>> Kevin
>>
> 


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v6 1/3] block: introduce compress filter driver
  2019-11-11 16:04 ` [PATCH v6 1/3] block: introduce compress filter driver Andrey Shinkevich
  2019-11-11 20:47   ` Eric Blake
  2019-11-12  9:39   ` Kevin Wolf
@ 2019-11-12 10:33   ` Vladimir Sementsov-Ogievskiy
  2 siblings, 0 replies; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-11-12 10:33 UTC (permalink / raw)
  To: Andrey Shinkevich, qemu-devel, qemu-block
  Cc: kwolf, Denis Lunev, armbru, mreitz

11.11.2019 19:04, Andrey Shinkevich wrote:
> Allow writing all the data compressed through the filter driver.
> The written data will be aligned by the cluster size.
> Based on the QEMU current implementation, that data can be written to
> unallocated clusters only. May be used for a backup job.
> 
> Suggested-by: Max Reitz <mreitz@redhat.com>
> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
> ---
>   block/Makefile.objs     |   1 +
>   block/filter-compress.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++
>   qapi/block-core.json    |  10 ++-
>   3 files changed, 219 insertions(+), 4 deletions(-)
>   create mode 100644 block/filter-compress.c
> 
> diff --git a/block/Makefile.objs b/block/Makefile.objs
> index e394fe0..330529b 100644
> --- a/block/Makefile.objs
> +++ b/block/Makefile.objs
> @@ -43,6 +43,7 @@ block-obj-y += crypto.o
>   
>   block-obj-y += aio_task.o
>   block-obj-y += backup-top.o
> +block-obj-y += filter-compress.o
>   
>   common-obj-y += stream.o
>   
> diff --git a/block/filter-compress.c b/block/filter-compress.c
> new file mode 100644
> index 0000000..a7b0337
> --- /dev/null
> +++ b/block/filter-compress.c
> @@ -0,0 +1,212 @@
> +/*
> + * Compress filter block driver
> + *
> + * Copyright (c) 2019 Virtuozzo International GmbH
> + *
> + * Author:
> + *   Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
> + *   (based on block/copy-on-read.c by Max Reitz)
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 or
> + * (at your option) any later version of the License.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "block/block_int.h"
> +#include "qemu/module.h"
> +
> +
> +static int zip_open(BlockDriverState *bs, QDict *options, int flags,
> +                     Error **errp)
> +{
> +    bs->backing = bdrv_open_child(NULL, options, "file", bs, &child_file, false,
> +                                  errp);
> +    if (!bs->backing) {
> +        return -EINVAL;
> +    }
> +
> +    bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
> +        BDRV_REQ_WRITE_COMPRESSED |
> +        (BDRV_REQ_FUA & bs->backing->bs->supported_write_flags);
> +
> +    bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
> +        ((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
> +            bs->backing->bs->supported_zero_flags);
> +
> +    return 0;
> +}
> +
> +
> +#define PERM_PASSTHROUGH (BLK_PERM_CONSISTENT_READ \
> +                          | BLK_PERM_WRITE \
> +                          | BLK_PERM_RESIZE)
> +#define PERM_UNCHANGED (BLK_PERM_ALL & ~PERM_PASSTHROUGH)
> +
> +static void zip_child_perm(BlockDriverState *bs, BdrvChild *c,
> +                            const BdrvChildRole *role,
> +                            BlockReopenQueue *reopen_queue,
> +                            uint64_t perm, uint64_t shared,
> +                            uint64_t *nperm, uint64_t *nshared)
> +{
> +    *nperm = perm & PERM_PASSTHROUGH;
> +    *nshared = (shared & PERM_PASSTHROUGH) | PERM_UNCHANGED;
> +
> +    /*
> +     * We must not request write permissions for an inactive node, the child
> +     * cannot provide it.
> +     */
> +    if (!(bs->open_flags & BDRV_O_INACTIVE)) {
> +        *nperm |= BLK_PERM_WRITE_UNCHANGED;
> +    }
> +}
> +
> +
> +static int64_t zip_getlength(BlockDriverState *bs)
> +{
> +    return bdrv_getlength(bs->backing->bs);
> +}
> +
> +
> +static int coroutine_fn zip_co_truncate(BlockDriverState *bs, int64_t offset,
> +                                         bool exact, PreallocMode prealloc,
> +                                         Error **errp)
> +{
> +    return bdrv_co_truncate(bs->backing, offset, exact, prealloc, errp);
> +}
> +
> +
> +static int coroutine_fn zip_co_preadv(BlockDriverState *bs,
> +                                       uint64_t offset, uint64_t bytes,
> +                                       QEMUIOVector *qiov, int flags)
> +{
> +    return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
> +}
> +
> +
> +static int coroutine_fn zip_co_preadv_part(BlockDriverState *bs,
> +                                            uint64_t offset, uint64_t bytes,
> +                                            QEMUIOVector *qiov,
> +                                            size_t qiov_offset,
> +                                            int flags)
> +{
> +    return bdrv_co_preadv_part(bs->backing, offset, bytes, qiov, qiov_offset,
> +                               flags);
> +}
> +
> +
> +static int coroutine_fn zip_co_pwritev(BlockDriverState *bs,
> +                                        uint64_t offset, uint64_t bytes,
> +                                        QEMUIOVector *qiov, int flags)
> +{
> +    return bdrv_co_pwritev(bs->backing, offset, bytes, qiov,
> +                           flags | BDRV_REQ_WRITE_COMPRESSED);
> +}
> +
> +
> +static int coroutine_fn zip_co_pwritev_part(BlockDriverState *bs,
> +                                             uint64_t offset, uint64_t bytes,
> +                                             QEMUIOVector *qiov,
> +                                             size_t qiov_offset, int flags)
> +{
> +    return bdrv_co_pwritev_part(bs->backing, offset, bytes, qiov, qiov_offset,
> +                                flags | BDRV_REQ_WRITE_COMPRESSED);
> +}
> +
> +
> +static int coroutine_fn zip_co_pwrite_zeroes(BlockDriverState *bs,
> +                                              int64_t offset, int bytes,
> +                                              BdrvRequestFlags flags)
> +{
> +    return bdrv_co_pwrite_zeroes(bs->backing, offset, bytes, flags);
> +}
> +
> +
> +static int coroutine_fn zip_co_pdiscard(BlockDriverState *bs,
> +                                         int64_t offset, int bytes)
> +{
> +    return bdrv_co_pdiscard(bs->backing, offset, bytes);
> +}
> +
> +
> +static void zip_refresh_limits(BlockDriverState *bs, Error **errp)
> +{
> +    BlockDriverInfo bdi;
> +    int ret;
> +
> +    if (!bs->backing) {
> +        return;
> +    }
> +
> +    ret = bdrv_get_info(bs->backing->bs, &bdi);
> +    if (ret < 0 || bdi.cluster_size == 0) {
> +        return;
> +    }
> +
> +    bs->backing->bs->bl.request_alignment = bdi.cluster_size;
> +    bs->backing->bs->bl.max_transfer = bdi.cluster_size;


I think you should not edit these fields of the child; we don't own them.
This handler should set our own bs->bl structure, the bs->bl of the filter
itself.

Also, after the next patch we need a way to unset max_transfer here, to allow
multi-cluster compressed writes, but only for qcow2.

This means (sorry, I sent you down the wrong path) that we need a separate
bs->bl.max_write_compressed, which defaults to cluster_size and may be set by
the driver. And in the following patch, which adds multi-cluster compressed
write support to qcow2, we should set bs->bl.max_write_compressed to INT_MAX.
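
As a rough illustration of why max_transfer matters here: clamping it to the
cluster size forces every compressed write to be issued cluster by cluster.
A minimal model of that splitting (a hypothetical helper for illustration,
not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/* With request_alignment = max_transfer = cluster_size, a guest write of
 * `bytes` starting at `offset` is split so that no sub-request crosses a
 * cluster boundary.  Returns the number of sub-requests issued. */
static int count_compressed_requests(uint64_t offset, uint64_t bytes,
                                     uint64_t cluster_size)
{
    int n = 0;
    uint64_t end = offset + bytes;

    while (offset < end) {
        /* End of the cluster that contains `offset` */
        uint64_t next = (offset / cluster_size + 1) * cluster_size;
        if (next > end) {
            next = end;
        }
        n++;
        offset = next;
    }
    return n;
}
```

Raising the compressed-write limit to INT_MAX (as proposed above for qcow2)
would let several of these clusters be written in one request instead.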

> +}
> +
> +
> +static void zip_eject(BlockDriverState *bs, bool eject_flag)
> +{
> +    bdrv_eject(bs->backing->bs, eject_flag);
> +}
> +
> +
> +static void zip_lock_medium(BlockDriverState *bs, bool locked)
> +{
> +    bdrv_lock_medium(bs->backing->bs, locked);
> +}
> +
> +
> +static bool zip_recurse_is_first_non_filter(BlockDriverState *bs,
> +                                             BlockDriverState *candidate)
> +{
> +    return bdrv_recurse_is_first_non_filter(bs->backing->bs, candidate);
> +}
> +
> +
> +static BlockDriver bdrv_compress = {
> +    .format_name                        = "compress",
> +
> +    .bdrv_open                          = zip_open,
> +    .bdrv_child_perm                    = zip_child_perm,
> +
> +    .bdrv_getlength                     = zip_getlength,
> +    .bdrv_co_truncate                   = zip_co_truncate,
> +
> +    .bdrv_co_preadv                     = zip_co_preadv,
> +    .bdrv_co_preadv_part                = zip_co_preadv_part,
> +    .bdrv_co_pwritev                    = zip_co_pwritev,
> +    .bdrv_co_pwritev_part               = zip_co_pwritev_part,
> +    .bdrv_co_pwrite_zeroes              = zip_co_pwrite_zeroes,
> +    .bdrv_co_pdiscard                   = zip_co_pdiscard,
> +    .bdrv_refresh_limits                = zip_refresh_limits,
> +
> +    .bdrv_eject                         = zip_eject,
> +    .bdrv_lock_medium                   = zip_lock_medium,
> +
> +    .bdrv_co_block_status               = bdrv_co_block_status_from_backing,
> +
> +    .bdrv_recurse_is_first_non_filter   = zip_recurse_is_first_non_filter,
> +
> +    .has_variable_length                = true,
> +    .is_filter                          = true,
> +};
> +
> +static void bdrv_compress_init(void)
> +{
> +    bdrv_register(&bdrv_compress);
> +}
> +
> +block_init(bdrv_compress_init);
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index aa97ee2..33d8cd8 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -2884,15 +2884,16 @@
>   # @copy-on-read: Since 3.0
>   # @blklogwrites: Since 3.0
>   # @blkreplay: Since 4.2
> +# @compress: Since 4.2
>   #
>   # Since: 2.9
>   ##
>   { 'enum': 'BlockdevDriver',
>     'data': [ 'blkdebug', 'blklogwrites', 'blkreplay', 'blkverify', 'bochs',
> -            'cloop', 'copy-on-read', 'dmg', 'file', 'ftp', 'ftps', 'gluster',
> -            'host_cdrom', 'host_device', 'http', 'https', 'iscsi', 'luks',
> -            'nbd', 'nfs', 'null-aio', 'null-co', 'nvme', 'parallels', 'qcow',
> -            'qcow2', 'qed', 'quorum', 'raw', 'rbd',
> +            'cloop', 'copy-on-read', 'compress', 'dmg', 'file', 'ftp', 'ftps',
> +            'gluster', 'host_cdrom', 'host_device', 'http', 'https', 'iscsi',
> +            'luks', 'nbd', 'nfs', 'null-aio', 'null-co', 'nvme', 'parallels',
> +            'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'rbd',
>               { 'name': 'replication', 'if': 'defined(CONFIG_REPLICATION)' },
>               'sheepdog',
>               'ssh', 'throttle', 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat', 'vxhs' ] }
> @@ -4045,6 +4046,7 @@
>         'bochs':      'BlockdevOptionsGenericFormat',
>         'cloop':      'BlockdevOptionsGenericFormat',
>         'copy-on-read':'BlockdevOptionsGenericFormat',
> +      'compress':   'BlockdevOptionsGenericFormat',
>         'dmg':        'BlockdevOptionsGenericFormat',
>         'file':       'BlockdevOptionsFile',
>         'ftp':        'BlockdevOptionsCurlFtp',
> 


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2019-11-12 10:34 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-11 16:04 [PATCH v6 0/3] qcow2: advanced compression options Andrey Shinkevich
2019-11-11 16:04 ` [PATCH v6 1/3] block: introduce compress filter driver Andrey Shinkevich
2019-11-11 20:47   ` Eric Blake
2019-11-12  8:57     ` Vladimir Sementsov-Ogievskiy
2019-11-12  9:39   ` Kevin Wolf
2019-11-12 10:07     ` Andrey Shinkevich
2019-11-12 10:24       ` Vladimir Sementsov-Ogievskiy
2019-11-12 10:33   ` Vladimir Sementsov-Ogievskiy
2019-11-11 16:04 ` [PATCH v6 2/3] qcow2: Allow writing compressed data of multiple clusters Andrey Shinkevich
2019-11-11 16:04 ` [PATCH v6 3/3] tests/qemu-iotests: add case to write " Andrey Shinkevich

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).