* [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard
@ 2021-02-25 11:52 Vladimir Sementsov-Ogievskiy
  2021-02-25 11:52 ` [PATCH v2 1/3] qemu-io: add aio_discard Vladimir Sementsov-Ogievskiy
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-02-25 11:52 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, mreitz, kwolf, vsementsov, den

Hi all! It turns out that nothing prevents a host cluster from being
discarded and reallocated while data is still being written to it. This
way the data write pollutes another, newly allocated cluster of data or
metadata.

OK, v2 is an attempt to solve the problem with a CoRwlock. It is marked
RFC because of a lot of iotest failures. Some of the problems with v2:

1. It's more complicated to make a test, as everything is blocking and
I can't just break a write and do a discard. I had to implement
aio_discard in qemu-io and rewrite the test into several batches of io
commands separated by "sleep 1". OK, that's not a big problem, and I've
solved it.

2. iotest 7 fails with several leaked clusters. It seems to depend on
the fact that the discard may run in parallel with writes. Iotest 7
takes snapshots, so I think the L1 table has already been updated by
the time the discard is finally unlocked. But I didn't dig into it;
these are all just my assumptions.

3. iotest 13 (and I think a lot more iotests) crashes on
"assert(!to->locks_held);". So with this assertion in place, we can't
keep the rwlock locked during data writing:

  #3  in __assert_fail () from /lib64/libc.so.6
  #4  in qemu_aio_coroutine_enter (ctx=0x55762120b700, co=0x55762121d700)
      at ../util/qemu-coroutine.c:158
  #5  in aio_co_enter (ctx=0x55762120b700, co=0x55762121d700) at ../util/async.c:628
  #6  in aio_co_wake (co=0x55762121d700) at ../util/async.c:612
  #7  in thread_pool_co_cb (opaque=0x7f17950daab0, ret=0) at ../util/thread-pool.c:279
  #8  in thread_pool_completion_bh (opaque=0x5576211e5070) at ../util/thread-pool.c:188
  #9  in aio_bh_call (bh=0x557621205df0) at ../util/async.c:136
  #10 in aio_bh_poll (ctx=0x55762120b700) at ../util/async.c:164
  #11 in aio_poll (ctx=0x55762120b700, blocking=true) at ../util/aio-posix.c:659
  #12 in blk_prw (blk=0x557621205790, offset=4303351808, 
      buf=0x55762123e000 '\364' <repeats 199 times>, <incomplete sequence \364>..., bytes=12288, 
      co_entry=0x557620d9dc97 <blk_write_entry>, flags=0) at ../block/block-backend.c:1335
  #13 in blk_pwrite (blk=0x557621205790, offset=4303351808, buf=0x55762123e000, 
      count=12288, flags=0) at ../block/block-backend.c:1501


So now I think that v1 is preferable. It's more complicated (though not
by much) in code, but it keeps discards and data writes from blocking
each other and avoids yields in critical sections.

Vladimir Sementsov-Ogievskiy (3):
  qemu-io: add aio_discard
  iotests: add qcow2-discard-during-rewrite
  block/qcow2: introduce inflight writes counters: fix discard

 block/qcow2.h                                 |   2 +
 block/qcow2-cluster.c                         |   4 +
 block/qcow2.c                                 |  18 ++-
 qemu-io-cmds.c                                | 117 ++++++++++++++++++
 .../tests/qcow2-discard-during-rewrite        |  99 +++++++++++++++
 .../tests/qcow2-discard-during-rewrite.out    |  17 +++
 6 files changed, 256 insertions(+), 1 deletion(-)
 create mode 100755 tests/qemu-iotests/tests/qcow2-discard-during-rewrite
 create mode 100644 tests/qemu-iotests/tests/qcow2-discard-during-rewrite.out

-- 
2.29.2




* [PATCH v2 1/3] qemu-io: add aio_discard
  2021-02-25 11:52 [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Vladimir Sementsov-Ogievskiy
@ 2021-02-25 11:52 ` Vladimir Sementsov-Ogievskiy
  2021-02-25 11:52 ` [PATCH v2 2/3] iotests: add qcow2-discard-during-rewrite Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-02-25 11:52 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, mreitz, kwolf, vsementsov, den

Add an aio_discard command, similar to the existing aio_write. It will
be used in the following test.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 qemu-io-cmds.c | 117 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 117 insertions(+)

diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
index 97611969cb..28b5c3c092 100644
--- a/qemu-io-cmds.c
+++ b/qemu-io-cmds.c
@@ -1332,6 +1332,7 @@ struct aio_ctx {
     BlockBackend *blk;
     QEMUIOVector qiov;
     int64_t offset;
+    int64_t discard_bytes;
     char *buf;
     bool qflag;
     bool vflag;
@@ -1343,6 +1344,34 @@ struct aio_ctx {
     struct timespec t1;
 };
 
+static void aio_discard_done(void *opaque, int ret)
+{
+    struct aio_ctx *ctx = opaque;
+    struct timespec t2;
+
+    clock_gettime(CLOCK_MONOTONIC, &t2);
+
+
+    if (ret < 0) {
+        printf("aio_discard failed: %s\n", strerror(-ret));
+        block_acct_failed(blk_get_stats(ctx->blk), &ctx->acct);
+        goto out;
+    }
+
+    block_acct_done(blk_get_stats(ctx->blk), &ctx->acct);
+
+    if (ctx->qflag) {
+        goto out;
+    }
+
+    /* Finally, report back -- -C gives a parsable format */
+    t2 = tsub(t2, ctx->t1);
+    print_report("discarded", &t2, ctx->offset, ctx->discard_bytes,
+                 ctx->discard_bytes, 1, ctx->Cflag);
+out:
+    g_free(ctx);
+}
+
 static void aio_write_done(void *opaque, int ret)
 {
     struct aio_ctx *ctx = opaque;
@@ -1671,6 +1700,93 @@ static int aio_write_f(BlockBackend *blk, int argc, char **argv)
     return 0;
 }
 
+static void aio_discard_help(void)
+{
+    printf(
+"\n"
+" asynchronously discards a range of bytes from the given offset\n"
+"\n"
+" Example:\n"
+" 'aio_discard 0 64k' - discards 64K at start of a disk\n"
+"\n"
+" Note that due to its asynchronous nature, this command will be\n"
+" considered successful once the request is submitted, independently\n"
+" of potential I/O errors.\n"
+" -C, -- report statistics in a machine parsable format\n"
+" -i, -- treat request as invalid, for exercising stats\n"
+" -q, -- quiet mode, do not show I/O statistics\n"
+"\n");
+}
+
+static int aio_discard_f(BlockBackend *blk, int argc, char **argv);
+
+static const cmdinfo_t aio_discard_cmd = {
+    .name       = "aio_discard",
+    .cfunc      = aio_discard_f,
+    .perm       = BLK_PERM_WRITE,
+    .argmin     = 2,
+    .argmax     = -1,
+    .args       = "[-Ciq] off len",
+    .oneline    = "asynchronously discards a number of bytes",
+    .help       = aio_discard_help,
+};
+
+static int aio_discard_f(BlockBackend *blk, int argc, char **argv)
+{
+    int ret;
+    int c;
+    struct aio_ctx *ctx = g_new0(struct aio_ctx, 1);
+
+    ctx->blk = blk;
+    while ((c = getopt(argc, argv, "Ciq")) != -1) {
+        switch (c) {
+        case 'C':
+            ctx->Cflag = true;
+            break;
+        case 'q':
+            ctx->qflag = true;
+            break;
+        case 'i':
+            printf("injecting invalid discard request\n");
+            block_acct_invalid(blk_get_stats(blk), BLOCK_ACCT_UNMAP);
+            g_free(ctx);
+            return 0;
+        default:
+            g_free(ctx);
+            qemuio_command_usage(&aio_discard_cmd);
+            return -EINVAL;
+        }
+    }
+
+    if (optind != argc - 2) {
+        g_free(ctx);
+        qemuio_command_usage(&aio_discard_cmd);
+        return -EINVAL;
+    }
+
+    ctx->offset = cvtnum(argv[optind]);
+    if (ctx->offset < 0) {
+        ret = ctx->offset;
+        print_cvtnum_err(ret, argv[optind]);
+        g_free(ctx);
+        return ret;
+    }
+    optind++;
+
+    ctx->discard_bytes = cvtnum(argv[optind]);
+    if (ctx->discard_bytes < 0) {
+        ret = ctx->discard_bytes;
+        print_cvtnum_err(ret, argv[optind]);
+        g_free(ctx);
+        return ret;
+    }
+
+    blk_aio_pdiscard(blk, ctx->offset, ctx->discard_bytes,
+                     aio_discard_done, ctx);
+
+    return 0;
+}
+
 static int aio_flush_f(BlockBackend *blk, int argc, char **argv)
 {
     BlockAcctCookie cookie;
@@ -2494,6 +2610,7 @@ static void __attribute((constructor)) init_qemuio_commands(void)
     qemuio_add_command(&readv_cmd);
     qemuio_add_command(&write_cmd);
     qemuio_add_command(&writev_cmd);
+    qemuio_add_command(&aio_discard_cmd);
     qemuio_add_command(&aio_read_cmd);
     qemuio_add_command(&aio_write_cmd);
     qemuio_add_command(&aio_flush_cmd);
-- 
2.29.2




* [PATCH v2 2/3] iotests: add qcow2-discard-during-rewrite
  2021-02-25 11:52 [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Vladimir Sementsov-Ogievskiy
  2021-02-25 11:52 ` [PATCH v2 1/3] qemu-io: add aio_discard Vladimir Sementsov-Ogievskiy
@ 2021-02-25 11:52 ` Vladimir Sementsov-Ogievskiy
  2021-02-25 11:52 ` [PATCH v2 3/3] block/qcow2: introduce inflight writes counters: fix discard Vladimir Sementsov-Ogievskiy
  2021-03-12 15:24 ` [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Kevin Wolf
  3 siblings, 0 replies; 8+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-02-25 11:52 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, mreitz, kwolf, vsementsov, den

Add a simple test:
 - start writing to allocated cluster A
 - discard this cluster
 - write to another, unallocated cluster B (it gets allocated in the
   same place where A was)
 - continue writing to A

Currently the last action pollutes cluster B, which is the bug fixed by
the following commit.

For now, add the test to the "disabled" group, so that it doesn't run
automatically.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 .../tests/qcow2-discard-during-rewrite        | 99 +++++++++++++++++++
 .../tests/qcow2-discard-during-rewrite.out    | 17 ++++
 2 files changed, 116 insertions(+)
 create mode 100755 tests/qemu-iotests/tests/qcow2-discard-during-rewrite
 create mode 100644 tests/qemu-iotests/tests/qcow2-discard-during-rewrite.out

diff --git a/tests/qemu-iotests/tests/qcow2-discard-during-rewrite b/tests/qemu-iotests/tests/qcow2-discard-during-rewrite
new file mode 100755
index 0000000000..dd9964ef3f
--- /dev/null
+++ b/tests/qemu-iotests/tests/qcow2-discard-during-rewrite
@@ -0,0 +1,99 @@
+#!/usr/bin/env bash
+# group: quick disabled
+#
+# Test discarding (and reusing) host cluster during writing data to it.
+#
+# Copyright (c) 2021 Virtuozzo International GmbH.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+# creator
+owner=vsementsov@virtuozzo.com
+
+seq=`basename $0`
+echo "QA output created by $seq"
+
+status=1	# failure is the default!
+
+_cleanup()
+{
+    _cleanup_test_img
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+. ./../common.rc
+. ./../common.filter
+
+_supported_fmt qcow2
+_supported_proto file fuse
+_supported_os Linux
+
+size=1M
+_make_test_img $size
+
+(
+cat <<EOF
+write -P 1 0 64K
+
+break pwritev A
+aio_write -P 2 0 64K
+wait_break A
+
+aio_discard 0 64K
+EOF
+
+# Now the discard must be blocked by discard_rw_lock.
+# Before discard_rw_lock was introduced, the discard
+# would run and discard the cluster we are writing
+# to. Give it a chance to do so.
+sleep 1
+
+# Write another cluster. The data write should be
+# blocked by discard_rw_lock (as we have a pending
+# writer). Before the lock was introduced, this
+# write would allocate the same cluster that was
+# discarded and successfully write to it.
+# Give it a chance.
+cat <<EOF
+aio_write -P 3 128K 64K
+EOF
+
+# give the write a chance to finish
+sleep 1
+
+cat <<EOF
+resume A
+EOF
+
+# In the good scenario, wait for:
+# 1. the first write to finish
+# 2. the discard to finish
+# 3. the second write to finish
+# In the bad scenario, the discard and the second write have
+# already finished, so we only wait for the first write, which
+# pollutes the data of the second write
+sleep 1
+
+cat <<EOF
+read -P 0 0 64K
+read -P 3 128K 64K
+EOF
+) | $QEMU_IO blkdebug::$TEST_IMG | _filter_qemu_io
+
+# success, all done
+echo "*** done"
+rm -f $seq.full
+status=0
diff --git a/tests/qemu-iotests/tests/qcow2-discard-during-rewrite.out b/tests/qemu-iotests/tests/qcow2-discard-during-rewrite.out
new file mode 100644
index 0000000000..dd54e928a6
--- /dev/null
+++ b/tests/qemu-iotests/tests/qcow2-discard-during-rewrite.out
@@ -0,0 +1,17 @@
+QA output created by qcow2-discard-during-rewrite
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576
+wrote 65536/65536 bytes at offset 0
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+blkdebug: Suspended request 'A'
+blkdebug: Resuming request 'A'
+discarded 65536/65536 bytes at offset 0
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 65536/65536 bytes at offset 0
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 65536/65536 bytes at offset 131072
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 65536/65536 bytes at offset 0
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 65536/65536 bytes at offset 131072
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+*** done
-- 
2.29.2




* [PATCH v2 3/3] block/qcow2: introduce inflight writes counters: fix discard
  2021-02-25 11:52 [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Vladimir Sementsov-Ogievskiy
  2021-02-25 11:52 ` [PATCH v2 1/3] qemu-io: add aio_discard Vladimir Sementsov-Ogievskiy
  2021-02-25 11:52 ` [PATCH v2 2/3] iotests: add qcow2-discard-during-rewrite Vladimir Sementsov-Ogievskiy
@ 2021-02-25 11:52 ` Vladimir Sementsov-Ogievskiy
  2021-03-12 15:24 ` [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Kevin Wolf
  3 siblings, 0 replies; 8+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-02-25 11:52 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, mreitz, kwolf, vsementsov, den

There is a bug in qcow2: a host cluster can be discarded (its refcount
drops to 0) and reused while a data write to it is still in flight. In
this case the data write may pollute another (recently allocated)
cluster, or even metadata.

To fix the issue, introduce an rwlock: take the read lock while writing
data and the write lock on discard.

Enable qcow2-discard-during-rewrite, as the bug is now fixed.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/qcow2.h                                  |  2 ++
 block/qcow2-cluster.c                          |  4 ++++
 block/qcow2.c                                  | 18 +++++++++++++++++-
 .../tests/qcow2-discard-during-rewrite         |  2 +-
 4 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/block/qcow2.h b/block/qcow2.h
index 0678073b74..7ebb6e2677 100644
--- a/block/qcow2.h
+++ b/block/qcow2.h
@@ -420,6 +420,8 @@ typedef struct BDRVQcow2State {
      * is to convert the image with the desired compression type set.
      */
     Qcow2CompressionType compression_type;
+
+    CoRwlock discard_rw_lock;
 } BDRVQcow2State;
 
 typedef struct Qcow2COWRegion {
diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index bd0597842f..e16775dd59 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -1938,6 +1938,8 @@ int qcow2_cluster_discard(BlockDriverState *bs, uint64_t offset,
     int64_t cleared;
     int ret;
 
+    qemu_co_rwlock_wrlock(&s->discard_rw_lock);
+
     /* Caller must pass aligned values, except at image end */
     assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
     assert(QEMU_IS_ALIGNED(end_offset, s->cluster_size) ||
@@ -1965,6 +1967,8 @@ fail:
     s->cache_discards = false;
     qcow2_process_discards(bs, ret);
 
+    qemu_co_rwlock_unlock(&s->discard_rw_lock);
+
     return ret;
 }
 
diff --git a/block/qcow2.c b/block/qcow2.c
index d9f49a52e7..e1a5d89aa1 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -1897,6 +1897,7 @@ static int qcow2_open(BlockDriverState *bs, QDict *options, int flags,
 
     /* Initialise locks */
     qemu_co_mutex_init(&s->lock);
+    qemu_co_rwlock_init(&s->discard_rw_lock);
 
     if (qemu_in_coroutine()) {
         /* From bdrv_co_create.  */
@@ -2536,12 +2537,14 @@ static coroutine_fn int qcow2_co_pwritev_task(BlockDriverState *bs,
         }
     }
 
+    qemu_co_rwlock_unlock(&s->discard_rw_lock);
     qemu_co_mutex_lock(&s->lock);
 
     ret = qcow2_handle_l2meta(bs, &l2meta, true);
     goto out_locked;
 
 out_unlocked:
+    qemu_co_rwlock_unlock(&s->discard_rw_lock);
     qemu_co_mutex_lock(&s->lock);
 
 out_locked:
@@ -2605,6 +2608,8 @@ static coroutine_fn int qcow2_co_pwritev_part(
             goto out_locked;
         }
 
+        qemu_co_rwlock_rdlock(&s->discard_rw_lock);
+
         qemu_co_mutex_unlock(&s->lock);
 
         if (!aio && cur_bytes != bytes) {
@@ -4097,10 +4102,15 @@ qcow2_co_copy_range_to(BlockDriverState *bs,
             goto fail;
         }
 
+        qemu_co_rwlock_rdlock(&s->discard_rw_lock);
         qemu_co_mutex_unlock(&s->lock);
+
         ret = bdrv_co_copy_range_to(src, src_offset, s->data_file, host_offset,
                                     cur_bytes, read_flags, write_flags);
+
+        qemu_co_rwlock_unlock(&s->discard_rw_lock);
         qemu_co_mutex_lock(&s->lock);
+
         if (ret < 0) {
             goto fail;
         }
@@ -4536,13 +4546,19 @@ qcow2_co_pwritev_compressed_task(BlockDriverState *bs,
     }
 
     ret = qcow2_pre_write_overlap_check(bs, 0, cluster_offset, out_len, true);
-    qemu_co_mutex_unlock(&s->lock);
     if (ret < 0) {
+        qemu_co_mutex_unlock(&s->lock);
         goto fail;
     }
 
+    qemu_co_rwlock_rdlock(&s->discard_rw_lock);
+    qemu_co_mutex_unlock(&s->lock);
+
     BLKDBG_EVENT(s->data_file, BLKDBG_WRITE_COMPRESSED);
     ret = bdrv_co_pwrite(s->data_file, cluster_offset, out_len, out_buf, 0);
+
+    qemu_co_rwlock_unlock(&s->discard_rw_lock);
+
     if (ret < 0) {
         goto fail;
     }
diff --git a/tests/qemu-iotests/tests/qcow2-discard-during-rewrite b/tests/qemu-iotests/tests/qcow2-discard-during-rewrite
index dd9964ef3f..5df0048757 100755
--- a/tests/qemu-iotests/tests/qcow2-discard-during-rewrite
+++ b/tests/qemu-iotests/tests/qcow2-discard-during-rewrite
@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-# group: quick disabled
+# group: quick
 #
 # Test discarding (and reusing) host cluster during writing data to it.
 #
-- 
2.29.2




* Re: [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard
  2021-02-25 11:52 [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2021-02-25 11:52 ` [PATCH v2 3/3] block/qcow2: introduce inflight writes counters: fix discard Vladimir Sementsov-Ogievskiy
@ 2021-03-12 15:24 ` Kevin Wolf
  2021-03-12 15:54   ` Vladimir Sementsov-Ogievskiy
  2021-03-18 15:37   ` Vladimir Sementsov-Ogievskiy
  3 siblings, 2 replies; 8+ messages in thread
From: Kevin Wolf @ 2021-03-12 15:24 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: den, qemu-devel, qemu-block, mreitz

On 25.02.2021 at 12:52, Vladimir Sementsov-Ogievskiy wrote:
> Hi all! It turns out that nothing prevents a host cluster from being
> discarded and reallocated while data is still being written to it. This
> way the data write pollutes another, newly allocated cluster of data or
> metadata.
> 
> OK, v2 is an attempt to solve the problem with a CoRwlock. It is marked
> RFC because of a lot of iotest failures. Some of the problems with v2:
> 
> 1. It's more complicated to make a test, as everything is blocking and
> I can't just break a write and do a discard. I had to implement
> aio_discard in qemu-io and rewrite the test into several batches of io
> commands separated by "sleep 1". OK, that's not a big problem, and I've
> solved it.

Right, this just demonstrates that it's doing what it's supposed to.

> 2. iotest 7 fails with several leaked clusters. It seems to depend on
> the fact that the discard may run in parallel with writes. Iotest 7
> takes snapshots, so I think the L1 table has already been updated by
> the time the discard is finally unlocked. But I didn't dig into it;
> these are all just my assumptions.

This one looks a bit odd, but it may be related to the bug in your code
that you forgot that qcow2_cluster_discard() is not a coroutine_fn.
Later tests fail during the unlock:

qemu-img: ../util/qemu-coroutine-lock.c:358: qemu_co_rwlock_unlock: Assertion `qemu_in_coroutine()' failed.

#0  0x00007ff33f7d89d5 in raise () from /lib64/libc.so.6
#1  0x00007ff33f7c18a4 in abort () from /lib64/libc.so.6
#2  0x00007ff33f7c1789 in __assert_fail_base.cold () from /lib64/libc.so.6
#3  0x00007ff33f7d1026 in __assert_fail () from /lib64/libc.so.6
#4  0x0000556f4ffd3c94 in qemu_co_rwlock_unlock (lock=0x556f51f63ca0) at ../util/qemu-coroutine-lock.c:358
#5  0x0000556f4fef5e09 in qcow2_cluster_discard (bs=0x556f51f56a80, offset=37748736, bytes=0, type=QCOW2_DISCARD_NEVER, full_discard=false) at ../block/qcow2-cluster.c:1970
#6  0x0000556f4ff46c23 in qcow2_snapshot_create (bs=0x556f51f56a80, sn_info=0x7fff89fb9a30) at ../block/qcow2-snapshot.c:736
#7  0x0000556f4ff0d7b6 in bdrv_snapshot_create (bs=0x556f51f56a80, sn_info=0x7fff89fb9a30) at ../block/snapshot.c:227
#8  0x0000556f4fe85526 in img_snapshot (argc=4, argv=0x7fff89fb9d30) at ../qemu-img.c:3337
#9  0x0000556f4fe8a227 in main (argc=4, argv=0x7fff89fb9d30) at ../qemu-img.c:5375

> 3. iotest 13 (and I think a lot more iotests) crashes on
> "assert(!to->locks_held);". So with this assertion in place, we can't
> keep the rwlock locked during data writing:
> 
>   #3  in __assert_fail () from /lib64/libc.so.6
>   #4  in qemu_aio_coroutine_enter (ctx=0x55762120b700, co=0x55762121d700)
>       at ../util/qemu-coroutine.c:158
>   #5  in aio_co_enter (ctx=0x55762120b700, co=0x55762121d700) at ../util/async.c:628
>   #6  in aio_co_wake (co=0x55762121d700) at ../util/async.c:612
>   #7  in thread_pool_co_cb (opaque=0x7f17950daab0, ret=0) at ../util/thread-pool.c:279
>   #8  in thread_pool_completion_bh (opaque=0x5576211e5070) at ../util/thread-pool.c:188
>   #9  in aio_bh_call (bh=0x557621205df0) at ../util/async.c:136
>   #10 in aio_bh_poll (ctx=0x55762120b700) at ../util/async.c:164
>   #11 in aio_poll (ctx=0x55762120b700, blocking=true) at ../util/aio-posix.c:659
>   #12 in blk_prw (blk=0x557621205790, offset=4303351808, 
>       buf=0x55762123e000 '\364' <repeats 199 times>, <incomplete sequence \364>..., bytes=12288, 
>       co_entry=0x557620d9dc97 <blk_write_entry>, flags=0) at ../block/block-backend.c:1335
>   #13 in blk_pwrite (blk=0x557621205790, offset=4303351808, buf=0x55762123e000, 
>       count=12288, flags=0) at ../block/block-backend.c:1501

This is another bug in your code: A taken lock belongs to its coroutine.
You can't lock in one coroutine and unlock in another.

The changes you made to the write code seem unnecessarily complicated
anyway: Why not just qemu_co_rwlock_rdlock() right at the start of
qcow2_co_pwritev_part() and unlock at its end, without taking and
dropping the lock repeatedly? It makes both the locking more obviously
correct and also fixes the bug (013 passes with this change).

> So now I think that v1 is preferable. It's more complicated (though not
> by much) in code, but it keeps discards and data writes from blocking
> each other and avoids yields in critical sections.

I think an approach with additional data structures is almost certainly
more complex and harder to maintain (and as the review from Max showed,
also to understand). I wouldn't give up yet on the CoRwlock based
approach, it's almost trivial code in comparison.

True, making qcow2_cluster_discard() a coroutine_fn requires a
preparational patch that is less trivial, but at least this can be seen
as something that we would want to do sooner or later anyway.

Kevin




* Re: [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard
  2021-03-12 15:24 ` [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Kevin Wolf
@ 2021-03-12 15:54   ` Vladimir Sementsov-Ogievskiy
  2021-03-18 15:37   ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 8+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-03-12 15:54 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, qemu-devel, mreitz, den

12.03.2021 18:24, Kevin Wolf wrote:
> On 25.02.2021 at 12:52, Vladimir Sementsov-Ogievskiy wrote:
>> Hi all! It turns out that nothing prevents a host cluster from being
>> discarded and reallocated while data is still being written to it. This
>> way the data write pollutes another, newly allocated cluster of data or
>> metadata.
>>
>> OK, v2 is an attempt to solve the problem with a CoRwlock. It is marked
>> RFC because of a lot of iotest failures. Some of the problems with v2:
>>
>> 1. It's more complicated to make a test, as everything is blocking and
>> I can't just break a write and do a discard. I had to implement
>> aio_discard in qemu-io and rewrite the test into several batches of io
>> commands separated by "sleep 1". OK, that's not a big problem, and I've
>> solved it.
> 
> Right, this just demonstrates that it's doing what it's supposed to.
> 
>> 2. iotest 7 fails with several leaked clusters. It seems to depend on
>> the fact that the discard may run in parallel with writes. Iotest 7
>> takes snapshots, so I think the L1 table has already been updated by
>> the time the discard is finally unlocked. But I didn't dig into it;
>> these are all just my assumptions.
> 
> This one looks a bit odd, but it may be related to the bug in your code
> that you forgot that qcow2_cluster_discard() is not a coroutine_fn.
> Later tests fail during the unlock:
> 
> qemu-img: ../util/qemu-coroutine-lock.c:358: qemu_co_rwlock_unlock: Assertion `qemu_in_coroutine()' failed.
> 
> #0  0x00007ff33f7d89d5 in raise () from /lib64/libc.so.6
> #1  0x00007ff33f7c18a4 in abort () from /lib64/libc.so.6
> #2  0x00007ff33f7c1789 in __assert_fail_base.cold () from /lib64/libc.so.6
> #3  0x00007ff33f7d1026 in __assert_fail () from /lib64/libc.so.6
> #4  0x0000556f4ffd3c94 in qemu_co_rwlock_unlock (lock=0x556f51f63ca0) at ../util/qemu-coroutine-lock.c:358
> #5  0x0000556f4fef5e09 in qcow2_cluster_discard (bs=0x556f51f56a80, offset=37748736, bytes=0, type=QCOW2_DISCARD_NEVER, full_discard=false) at ../block/qcow2-cluster.c:1970
> #6  0x0000556f4ff46c23 in qcow2_snapshot_create (bs=0x556f51f56a80, sn_info=0x7fff89fb9a30) at ../block/qcow2-snapshot.c:736
> #7  0x0000556f4ff0d7b6 in bdrv_snapshot_create (bs=0x556f51f56a80, sn_info=0x7fff89fb9a30) at ../block/snapshot.c:227
> #8  0x0000556f4fe85526 in img_snapshot (argc=4, argv=0x7fff89fb9d30) at ../qemu-img.c:3337
> #9  0x0000556f4fe8a227 in main (argc=4, argv=0x7fff89fb9d30) at ../qemu-img.c:5375
> 
>> 3. iotest 13 (and I think a lot more iotests) crashes on
>> "assert(!to->locks_held);". So with this assertion in place, we can't
>> keep the rwlock locked during data writing:
>>
>>    #3  in __assert_fail () from /lib64/libc.so.6
>>    #4  in qemu_aio_coroutine_enter (ctx=0x55762120b700, co=0x55762121d700)
>>        at ../util/qemu-coroutine.c:158
>>    #5  in aio_co_enter (ctx=0x55762120b700, co=0x55762121d700) at ../util/async.c:628
>>    #6  in aio_co_wake (co=0x55762121d700) at ../util/async.c:612
>>    #7  in thread_pool_co_cb (opaque=0x7f17950daab0, ret=0) at ../util/thread-pool.c:279
>>    #8  in thread_pool_completion_bh (opaque=0x5576211e5070) at ../util/thread-pool.c:188
>>    #9  in aio_bh_call (bh=0x557621205df0) at ../util/async.c:136
>>    #10 in aio_bh_poll (ctx=0x55762120b700) at ../util/async.c:164
>>    #11 in aio_poll (ctx=0x55762120b700, blocking=true) at ../util/aio-posix.c:659
>>    #12 in blk_prw (blk=0x557621205790, offset=4303351808,
>>        buf=0x55762123e000 '\364' <repeats 199 times>, <incomplete sequence \364>..., bytes=12288,
>>        co_entry=0x557620d9dc97 <blk_write_entry>, flags=0) at ../block/block-backend.c:1335
>>    #13 in blk_pwrite (blk=0x557621205790, offset=4303351808, buf=0x55762123e000,
>>        count=12288, flags=0) at ../block/block-backend.c:1501
> 
> This is another bug in your code: A taken lock belongs to its coroutine.
> You can't lock in one coroutine and unlock in another.
> 
> The changes you made to the write code seem unnecessarily complicated
> anyway: Why not just qemu_co_rwlock_rdlock() right at the start of
> qcow2_co_pwritev_part() and unlock at its end, without taking and
> dropping the lock repeatedly? It makes both the locking more obviously
> correct and also fixes the bug (013 passes with this change).
> 
>> So now I think that v1 is preferable. It's more complicated (though not
>> by much) in code, but it keeps discards and data writes from blocking
>> each other and avoids yields in critical sections.
> 
> I think an approach with additional data structures is almost certainly
> more complex and harder to maintain (and as the review from Max showed,
> also to understand). I wouldn't give up yet on the CoRwlock based
> approach, it's almost trivial code in comparison.

Sure

> 
> True, making qcow2_cluster_discard() a coroutine_fn requires a
> preparational patch that is less trivial, but at least this can be seen
> as something that we would want to do sooner or later anyway.
> 

Thanks for the help, I'll try your suggestions and resend.

-- 
Best regards,
Vladimir



* Re: [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard
  2021-03-12 15:24 ` [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Kevin Wolf
  2021-03-12 15:54   ` Vladimir Sementsov-Ogievskiy
@ 2021-03-18 15:37   ` Vladimir Sementsov-Ogievskiy
  2021-03-18 15:51     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 8+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-03-18 15:37 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, qemu-devel, mreitz, den

12.03.2021 18:24, Kevin Wolf wrote:
> On 25.02.2021 at 12:52, Vladimir Sementsov-Ogievskiy wrote:
>> Hi all! It turns out that nothing prevents a host cluster from being
>> discarded and reallocated while data is still being written to it. This
>> way the data write pollutes another, newly allocated cluster of data or
>> metadata.
>>
>> OK, v2 is an attempt to solve the problem with a CoRwlock. It is marked
>> RFC because of a lot of iotest failures. Some of the problems with v2:
>>
>> 1. It's more complicated to make a test, as everything is blocking and
>> I can't just break a write and do a discard. I had to implement
>> aio_discard in qemu-io and rewrite the test into several batches of io
>> commands separated by "sleep 1". OK, that's not a big problem, and I've
>> solved it.
> 
> Right, this just demonstrates that it's doing what it's supposed to.
> 
>> 2. iotest 7 fails with several leaked clusters. It seems to depend on
>> the fact that the discard may run in parallel with writes. Iotest 7
>> takes snapshots, so I think the L1 table has already been updated by
>> the time the discard is finally unlocked. But I didn't dig into it;
>> these are all just my assumptions.
> 
> This one looks a bit odd, but it may be related to the bug in your code
> that you forgot that qcow2_cluster_discard() is not a coroutine_fn.
> Later tests fail during the unlock:
> 
> qemu-img: ../util/qemu-coroutine-lock.c:358: qemu_co_rwlock_unlock: Assertion `qemu_in_coroutine()' failed.
> 
> #0  0x00007ff33f7d89d5 in raise () from /lib64/libc.so.6
> #1  0x00007ff33f7c18a4 in abort () from /lib64/libc.so.6
> #2  0x00007ff33f7c1789 in __assert_fail_base.cold () from /lib64/libc.so.6
> #3  0x00007ff33f7d1026 in __assert_fail () from /lib64/libc.so.6
> #4  0x0000556f4ffd3c94 in qemu_co_rwlock_unlock (lock=0x556f51f63ca0) at ../util/qemu-coroutine-lock.c:358
> #5  0x0000556f4fef5e09 in qcow2_cluster_discard (bs=0x556f51f56a80, offset=37748736, bytes=0, type=QCOW2_DISCARD_NEVER, full_discard=false) at ../block/qcow2-cluster.c:1970
> #6  0x0000556f4ff46c23 in qcow2_snapshot_create (bs=0x556f51f56a80, sn_info=0x7fff89fb9a30) at ../block/qcow2-snapshot.c:736
> #7  0x0000556f4ff0d7b6 in bdrv_snapshot_create (bs=0x556f51f56a80, sn_info=0x7fff89fb9a30) at ../block/snapshot.c:227
> #8  0x0000556f4fe85526 in img_snapshot (argc=4, argv=0x7fff89fb9d30) at ../qemu-img.c:3337
> #9  0x0000556f4fe8a227 in main (argc=4, argv=0x7fff89fb9d30) at ../qemu-img.c:5375
> 
>> 3. iotest 13 (and I think a lot more iotests) crashes on
>> assert(!to->locks_held); So with this assertion in place we can't keep
>> the rwlock locked during data writing...
>>
>>    #3  in __assert_fail () from /lib64/libc.so.6
>>    #4  in qemu_aio_coroutine_enter (ctx=0x55762120b700, co=0x55762121d700)
>>        at ../util/qemu-coroutine.c:158
>>    #5  in aio_co_enter (ctx=0x55762120b700, co=0x55762121d700) at ../util/async.c:628
>>    #6  in aio_co_wake (co=0x55762121d700) at ../util/async.c:612
>>    #7  in thread_pool_co_cb (opaque=0x7f17950daab0, ret=0) at ../util/thread-pool.c:279
>>    #8  in thread_pool_completion_bh (opaque=0x5576211e5070) at ../util/thread-pool.c:188
>>    #9  in aio_bh_call (bh=0x557621205df0) at ../util/async.c:136
>>    #10 in aio_bh_poll (ctx=0x55762120b700) at ../util/async.c:164
>>    #11 in aio_poll (ctx=0x55762120b700, blocking=true) at ../util/aio-posix.c:659
>>    #12 in blk_prw (blk=0x557621205790, offset=4303351808,
>>        buf=0x55762123e000 '\364' <repeats 199 times>, <incomplete sequence \364>..., bytes=12288,
>>        co_entry=0x557620d9dc97 <blk_write_entry>, flags=0) at ../block/block-backend.c:1335
>>    #13 in blk_pwrite (blk=0x557621205790, offset=4303351808, buf=0x55762123e000,
>>        count=12288, flags=0) at ../block/block-backend.c:1501
> 
> This is another bug in your code: A taken lock belongs to its coroutine.
> You can't lock in one coroutine and unlock in another.
> 
> The changes you made to the write code seem unnecessarily complicated
> anyway: Why not just qemu_co_rwlock_rdlock() right at the start of
> qcow2_co_pwritev_part() and unlock at its end, without taking and
> dropping the lock repeatedly? It makes both the locking more obviously
> correct and also fixes the bug (013 passes with this change).
> 
>> So now I think that v1 is preferable. Its code is more complicated
>> (but not by much), but it keeps discards and data writes from blocking
>> each other and avoids yields in critical sections.
> 
> I think an approach with additional data structures is almost certainly
> more complex and harder to maintain (and as the review from Max showed,
> also to understand). I wouldn't give up yet on the CoRwlock based
> approach, it's almost trivial code in comparison.
> 
> True, making qcow2_cluster_discard() a coroutine_fn requires a
> preparational patch that is less trivial, but at least this can be seen
> as something that we would want to do sooner or later anyway.


Actually, a cluster's refcount may drop to zero not only because of a guest discard. So the correct thing is to take the rwlock in update_refcount() when we want to decrease a refcount from 1 to 0. And for that, update_refcount() should be moved to a coroutine, and better yet the whole qcow2 driver.


-- 
Best regards,
Vladimir



* Re: [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard
  2021-03-18 15:37   ` Vladimir Sementsov-Ogievskiy
@ 2021-03-18 15:51     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 8+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-03-18 15:51 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, qemu-devel, mreitz, den

18.03.2021 18:37, Vladimir Sementsov-Ogievskiy wrote:
> 12.03.2021 18:24, Kevin Wolf wrote:
>> Am 25.02.2021 um 12:52 hat Vladimir Sementsov-Ogievskiy geschrieben:
>>> Hi all! It turns out that nothing prevents a host cluster from being
>>> discarded and reallocated while data is still being written to it. This
>>> way, an in-flight data write can pollute another newly allocated data
>>> or metadata cluster.
>>>
>>> OK, v2 is an attempt to solve the problem with a CoRwlock. It is
>>> marked RFC because of a lot of iotest failures. Some of the problems
>>> with v2:
>>>
>>> 1. It's more complicated to make a test, as everything is blocking
>>> and I can't just interrupt a write and do a discard. I had to implement
>>> aio_discard in qemu-io and rewrite the test as several batches of io
>>> commands separated by "sleep 1". OK, it's not a big problem, and I've
>>> solved it.
>>
>> Right, this just demonstrates that it's doing what it's supposed to.
>>
>>> 2. iotest 7 fails with several leaked clusters. It seems to depend on
>>> the fact that a discard may run in parallel with writes. Iotest 7 does
>>> snapshots, so I think the L1 table is already updated by the moment the
>>> discard is finally unblocked. But I didn't dig into it; these are all
>>> just my assumptions.
>>
>> This one looks a bit odd, but it may be related to the bug in your code
>> that you forgot that qcow2_cluster_discard() is not a coroutine_fn.
>> Later tests fail during the unlock:
>>
>> qemu-img: ../util/qemu-coroutine-lock.c:358: qemu_co_rwlock_unlock: Assertion `qemu_in_coroutine()' failed.
>>
>> #0  0x00007ff33f7d89d5 in raise () from /lib64/libc.so.6
>> #1  0x00007ff33f7c18a4 in abort () from /lib64/libc.so.6
>> #2  0x00007ff33f7c1789 in __assert_fail_base.cold () from /lib64/libc.so.6
>> #3  0x00007ff33f7d1026 in __assert_fail () from /lib64/libc.so.6
>> #4  0x0000556f4ffd3c94 in qemu_co_rwlock_unlock (lock=0x556f51f63ca0) at ../util/qemu-coroutine-lock.c:358
>> #5  0x0000556f4fef5e09 in qcow2_cluster_discard (bs=0x556f51f56a80, offset=37748736, bytes=0, type=QCOW2_DISCARD_NEVER, full_discard=false) at ../block/qcow2-cluster.c:1970
>> #6  0x0000556f4ff46c23 in qcow2_snapshot_create (bs=0x556f51f56a80, sn_info=0x7fff89fb9a30) at ../block/qcow2-snapshot.c:736
>> #7  0x0000556f4ff0d7b6 in bdrv_snapshot_create (bs=0x556f51f56a80, sn_info=0x7fff89fb9a30) at ../block/snapshot.c:227
>> #8  0x0000556f4fe85526 in img_snapshot (argc=4, argv=0x7fff89fb9d30) at ../qemu-img.c:3337
>> #9  0x0000556f4fe8a227 in main (argc=4, argv=0x7fff89fb9d30) at ../qemu-img.c:5375
>>
>>> 3. iotest 13 (and I think a lot more iotests) crashes on
>>> assert(!to->locks_held); So with this assertion in place we can't keep
>>> the rwlock locked during data writing...
>>>
>>>    #3  in __assert_fail () from /lib64/libc.so.6
>>>    #4  in qemu_aio_coroutine_enter (ctx=0x55762120b700, co=0x55762121d700)
>>>        at ../util/qemu-coroutine.c:158
>>>    #5  in aio_co_enter (ctx=0x55762120b700, co=0x55762121d700) at ../util/async.c:628
>>>    #6  in aio_co_wake (co=0x55762121d700) at ../util/async.c:612
>>>    #7  in thread_pool_co_cb (opaque=0x7f17950daab0, ret=0) at ../util/thread-pool.c:279
>>>    #8  in thread_pool_completion_bh (opaque=0x5576211e5070) at ../util/thread-pool.c:188
>>>    #9  in aio_bh_call (bh=0x557621205df0) at ../util/async.c:136
>>>    #10 in aio_bh_poll (ctx=0x55762120b700) at ../util/async.c:164
>>>    #11 in aio_poll (ctx=0x55762120b700, blocking=true) at ../util/aio-posix.c:659
>>>    #12 in blk_prw (blk=0x557621205790, offset=4303351808,
>>>        buf=0x55762123e000 '\364' <repeats 199 times>, <incomplete sequence \364>..., bytes=12288,
>>>        co_entry=0x557620d9dc97 <blk_write_entry>, flags=0) at ../block/block-backend.c:1335
>>>    #13 in blk_pwrite (blk=0x557621205790, offset=4303351808, buf=0x55762123e000,
>>>        count=12288, flags=0) at ../block/block-backend.c:1501
>>
>> This is another bug in your code: A taken lock belongs to its coroutine.
>> You can't lock in one coroutine and unlock in another.
>>
>> The changes you made to the write code seem unnecessarily complicated
>> anyway: Why not just qemu_co_rwlock_rdlock() right at the start of
>> qcow2_co_pwritev_part() and unlock at its end, without taking and
>> dropping the lock repeatedly? It makes both the locking more obviously
>> correct and also fixes the bug (013 passes with this change).
>>
>>> So now I think that v1 is preferable. Its code is more complicated
>>> (but not by much), but it keeps discards and data writes from blocking
>>> each other and avoids yields in critical sections.
>>
>> I think an approach with additional data structures is almost certainly
>> more complex and harder to maintain (and as the review from Max showed,
>> also to understand). I wouldn't give up yet on the CoRwlock based
>> approach, it's almost trivial code in comparison.
>>
>> True, making qcow2_cluster_discard() a coroutine_fn requires a
>> preparational patch that is less trivial, but at least this can be seen
>> as something that we would want to do sooner or later anyway.
> 
> 
> Actually, a cluster's refcount may drop to zero not only because of a guest discard. So the correct thing is to take the rwlock in update_refcount() when we want to decrease a refcount from 1 to 0. And for that, update_refcount() should be moved to a coroutine, and better yet the whole qcow2 driver.
> 
> 

Or I have a better idea: we can fix only the case where update_refcount() is called from a coroutine. The other cases are already broken anyway, since they modify qcow2 metadata without taking the qcow2 mutex.


-- 
Best regards,
Vladimir



end of thread, other threads:[~2021-03-18 15:55 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-25 11:52 [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Vladimir Sementsov-Ogievskiy
2021-02-25 11:52 ` [PATCH v2 1/3] qemu-io: add aio_discard Vladimir Sementsov-Ogievskiy
2021-02-25 11:52 ` [PATCH v2 2/3] iotests: add qcow2-discard-during-rewrite Vladimir Sementsov-Ogievskiy
2021-02-25 11:52 ` [PATCH v2 3/3] block/qcow2: introduce inflight writes counters: fix discard Vladimir Sementsov-Ogievskiy
2021-03-12 15:24 ` [PATCH v2(RFC) 0/3] qcow2: fix parallel rewrite and discard Kevin Wolf
2021-03-12 15:54   ` Vladimir Sementsov-Ogievskiy
2021-03-18 15:37   ` Vladimir Sementsov-Ogievskiy
2021-03-18 15:51     ` Vladimir Sementsov-Ogievskiy
