qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH 0/4] active-mirror: support unaligned guest operations
@ 2019-09-12 15:13 Vladimir Sementsov-Ogievskiy
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 1/4] block/mirror: simplify do_sync_target_write Vladimir Sementsov-Ogievskiy
                   ` (4 more replies)
  0 siblings, 5 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-09-12 15:13 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

Commit 9adc1cb49af8d fixed a bug with guest writes (and discards) that
are unaligned to the dirty bitmap granularity by simply requesting the
corresponding alignment on the mirror-top filter. However, forcing
large alignment obviously decreases the performance of unaligned
requests.

So it's time for a new solution, which comes in patch 03; patch 04
then reverts 9adc1cb49af8d.

Vladimir Sementsov-Ogievskiy (4):
  block/mirror: simplify do_sync_target_write
  block/block-backend: add blk_co_pwritev_part
  block/mirror: support unaligned write in active mirror
  Revert "mirror: Only mirror granularity-aligned chunks"

 include/sysemu/block-backend.h |   4 +
 block/block-backend.c          |  17 +++-
 block/mirror.c                 | 153 +++++++++++++--------------------
 3 files changed, 78 insertions(+), 96 deletions(-)

-- 
2.21.0




* [Qemu-devel] [PATCH 1/4] block/mirror: simplify do_sync_target_write
  2019-09-12 15:13 [Qemu-devel] [PATCH 0/4] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
@ 2019-09-12 15:13 ` Vladimir Sementsov-Ogievskiy
  2019-10-02 14:57   ` Max Reitz
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 2/4] block/block-backend: add blk_co_pwritev_part Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-09-12 15:13 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

do_sync_target_write is called from bdrv_mirror_top_do_write after the
write/discard operation, all within the active_write/active_write_settle
section that protects us from the mirror iteration. So the whole area
is dirty for sure; there is no reason to examine the dirty bitmap.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/mirror.c | 95 +++++++++++++++-----------------------------------
 1 file changed, 28 insertions(+), 67 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 853e2c7510..d176bf5920 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1203,84 +1203,45 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
                      uint64_t offset, uint64_t bytes,
                      QEMUIOVector *qiov, int flags)
 {
-    QEMUIOVector target_qiov;
-    uint64_t dirty_offset = offset;
-    uint64_t dirty_bytes;
-
-    if (qiov) {
-        qemu_iovec_init(&target_qiov, qiov->niov);
-    }
-
-    while (true) {
-        bool valid_area;
-        int ret;
-
-        bdrv_dirty_bitmap_lock(job->dirty_bitmap);
-        dirty_bytes = MIN(offset + bytes - dirty_offset, INT_MAX);
-        valid_area = bdrv_dirty_bitmap_next_dirty_area(job->dirty_bitmap,
-                                                       &dirty_offset,
-                                                       &dirty_bytes);
-        if (!valid_area) {
-            bdrv_dirty_bitmap_unlock(job->dirty_bitmap);
-            break;
-        }
+    int ret;
 
-        bdrv_reset_dirty_bitmap_locked(job->dirty_bitmap,
-                                       dirty_offset, dirty_bytes);
-        bdrv_dirty_bitmap_unlock(job->dirty_bitmap);
+    bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
 
-        job_progress_increase_remaining(&job->common.job, dirty_bytes);
+    job_progress_increase_remaining(&job->common.job, bytes);
 
-        assert(dirty_offset - offset <= SIZE_MAX);
-        if (qiov) {
-            qemu_iovec_reset(&target_qiov);
-            qemu_iovec_concat(&target_qiov, qiov,
-                              dirty_offset - offset, dirty_bytes);
-        }
-
-        switch (method) {
-        case MIRROR_METHOD_COPY:
-            ret = blk_co_pwritev(job->target, dirty_offset, dirty_bytes,
-                                 qiov ? &target_qiov : NULL, flags);
-            break;
+    switch (method) {
+    case MIRROR_METHOD_COPY:
+        ret = blk_co_pwritev(job->target, offset, bytes, qiov, flags);
+        break;
 
-        case MIRROR_METHOD_ZERO:
-            assert(!qiov);
-            ret = blk_co_pwrite_zeroes(job->target, dirty_offset, dirty_bytes,
-                                       flags);
-            break;
+    case MIRROR_METHOD_ZERO:
+        assert(!qiov);
+        ret = blk_co_pwrite_zeroes(job->target, offset, bytes, flags);
+        break;
 
-        case MIRROR_METHOD_DISCARD:
-            assert(!qiov);
-            ret = blk_co_pdiscard(job->target, dirty_offset, dirty_bytes);
-            break;
+    case MIRROR_METHOD_DISCARD:
+        assert(!qiov);
+        ret = blk_co_pdiscard(job->target, offset, bytes);
+        break;
 
-        default:
-            abort();
-        }
+    default:
+        abort();
+    }
 
-        if (ret >= 0) {
-            job_progress_update(&job->common.job, dirty_bytes);
-        } else {
-            BlockErrorAction action;
+    if (ret >= 0) {
+        job_progress_update(&job->common.job, bytes);
+    } else {
+        BlockErrorAction action;
 
-            bdrv_set_dirty_bitmap(job->dirty_bitmap, dirty_offset, dirty_bytes);
-            job->actively_synced = false;
+        bdrv_set_dirty_bitmap(job->dirty_bitmap, offset, bytes);
+        job->actively_synced = false;
 
-            action = mirror_error_action(job, false, -ret);
-            if (action == BLOCK_ERROR_ACTION_REPORT) {
-                if (!job->ret) {
-                    job->ret = ret;
-                }
-                break;
+        action = mirror_error_action(job, false, -ret);
+        if (action == BLOCK_ERROR_ACTION_REPORT) {
+            if (!job->ret) {
+                job->ret = ret;
             }
         }
-
-        dirty_offset += dirty_bytes;
-    }
-
-    if (qiov) {
-        qemu_iovec_destroy(&target_qiov);
     }
 }
 
-- 
2.21.0



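To see concretely why the removed loop collapses, here is a standalone
toy model (plain C, not QEMU's bitmap API; every name and value in it
is invented for illustration): when every bit of the written range is
already set, the next-dirty-area walk visits exactly one area, the
range itself, so a single reset over the whole range is equivalent.

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NBITS 16
static bool dirty[NBITS];           /* toy stand-in for the dirty bitmap */

/* Find the next run of dirty bits within [*off, end); false if none. */
static bool next_dirty_area(size_t *off, size_t *len, size_t end)
{
    size_t i = *off, j;

    while (i < end && !dirty[i]) {
        i++;
    }
    if (i == end) {
        return false;
    }
    j = i;
    while (j < end && dirty[j]) {
        j++;
    }
    *off = i;
    *len = j - i;
    return true;
}

int main(void)
{
    size_t off = 3, len = 9, doff = 3, dlen;

    /* The guest write just dirtied the whole range [off, off + len). */
    for (size_t i = off; i < off + len; i++) {
        dirty[i] = true;
    }

    /* The old loop finds exactly one dirty area: the range itself... */
    assert(next_dirty_area(&doff, &dlen, off + len));
    assert(doff == off && dlen == len);

    /* ...and nothing after it, so one reset + one write suffice. */
    doff += dlen;
    assert(!next_dirty_area(&doff, &dlen, off + len));
    return 0;
}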

* [Qemu-devel] [PATCH 2/4] block/block-backend: add blk_co_pwritev_part
  2019-09-12 15:13 [Qemu-devel] [PATCH 0/4] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 1/4] block/mirror: simplify do_sync_target_write Vladimir Sementsov-Ogievskiy
@ 2019-09-12 15:13 ` Vladimir Sementsov-Ogievskiy
  2019-10-02 14:57   ` Max Reitz
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 3/4] block/mirror: support unaligned write in active mirror Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-09-12 15:13 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

Add a blk write function that takes a qiov_offset parameter. It is
needed by the following commit.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/sysemu/block-backend.h |  4 ++++
 block/block-backend.c          | 17 +++++++++++++----
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
index 368d53af77..73f2cef7fe 100644
--- a/include/sysemu/block-backend.h
+++ b/include/sysemu/block-backend.h
@@ -121,6 +121,10 @@ void blk_set_dev_ops(BlockBackend *blk, const BlockDevOps *ops, void *opaque);
 int coroutine_fn blk_co_preadv(BlockBackend *blk, int64_t offset,
                                unsigned int bytes, QEMUIOVector *qiov,
                                BdrvRequestFlags flags);
+int coroutine_fn blk_co_pwritev_part(BlockBackend *blk, int64_t offset,
+                                     unsigned int bytes,
+                                     QEMUIOVector *qiov, size_t qiov_offset,
+                                     BdrvRequestFlags flags);
 int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
                                unsigned int bytes, QEMUIOVector *qiov,
                                BdrvRequestFlags flags);
diff --git a/block/block-backend.c b/block/block-backend.c
index 1c605d5444..b3d00478af 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1176,9 +1176,10 @@ int coroutine_fn blk_co_preadv(BlockBackend *blk, int64_t offset,
     return ret;
 }
 
-int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
-                                unsigned int bytes, QEMUIOVector *qiov,
-                                BdrvRequestFlags flags)
+int coroutine_fn blk_co_pwritev_part(BlockBackend *blk, int64_t offset,
+                                     unsigned int bytes,
+                                     QEMUIOVector *qiov, size_t qiov_offset,
+                                     BdrvRequestFlags flags)
 {
     int ret;
     BlockDriverState *bs;
@@ -1205,11 +1206,19 @@ int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
         flags |= BDRV_REQ_FUA;
     }
 
-    ret = bdrv_co_pwritev(blk->root, offset, bytes, qiov, flags);
+    ret = bdrv_co_pwritev_part(blk->root, offset, bytes, qiov, qiov_offset,
+                               flags);
     bdrv_dec_in_flight(bs);
     return ret;
 }
 
+int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
+                                unsigned int bytes, QEMUIOVector *qiov,
+                                BdrvRequestFlags flags)
+{
+    return blk_co_pwritev_part(blk, offset, bytes, qiov, 0, flags);
+}
+
 typedef struct BlkRwCo {
     BlockBackend *blk;
     int64_t offset;
-- 
2.21.0



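As a usage illustration of the new call (a sketch only: it assumes
QEMU's block-layer headers and only compiles inside the QEMU tree, and
copy_part is a hypothetical helper, not part of the patch), a caller
can now write a sub-range of an existing qiov without building a
temporary vector with qemu_iovec_concat():

#include "qemu/osdep.h"
#include "sysemu/block-backend.h"

/* Hypothetical helper: write 'bytes' bytes of 'qiov', starting 'skip'
 * bytes into it, to 'target' at 'offset'. */
static int coroutine_fn copy_part(BlockBackend *target, QEMUIOVector *qiov,
                                  int64_t offset, unsigned int bytes,
                                  size_t skip)
{
    /*
     * Without qiov_offset this needed a temporary vector:
     *     qemu_iovec_init(&part, qiov->niov);
     *     qemu_iovec_concat(&part, qiov, skip, bytes);
     *     ret = blk_co_pwritev(target, offset, bytes, &part, 0);
     *     qemu_iovec_destroy(&part);
     */
    return blk_co_pwritev_part(target, offset, bytes, qiov, skip, 0);
}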

* [Qemu-devel] [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-09-12 15:13 [Qemu-devel] [PATCH 0/4] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 1/4] block/mirror: simplify do_sync_target_write Vladimir Sementsov-Ogievskiy
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 2/4] block/block-backend: add blk_co_pwritev_part Vladimir Sementsov-Ogievskiy
@ 2019-09-12 15:13 ` Vladimir Sementsov-Ogievskiy
  2019-10-02 14:57   ` Max Reitz
  2019-10-04 16:31   ` Max Reitz
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 4/4] Revert "mirror: Only mirror granularity-aligned chunks" Vladimir Sementsov-Ogievskiy
  2019-10-02  9:53 ` [PATCH 0/4] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
  4 siblings, 2 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-09-12 15:13 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset the
aligned-up region in the dirty bitmap, which means that we may fail to
copy some bytes while assuming them copied, which actually leads to
producing a corrupted target.

So 9adc1cb49af8d forced the dirty bitmap granularity to be the
request_alignment of the mirror-top filter, so that we never work with
unaligned requests. However, forcing large alignment obviously
decreases the performance of unaligned requests.

This commit provides another solution for the problem: if the unaligned
padding is already dirty, we can safely ignore it, as
1. It's dirty, so it will be copied by mirror_iteration anyway.
2. It's dirty, so by skipping it now we don't increase the dirtiness of
   the bitmap and therefore don't damage the "synchronicity" of the
   write-blocking mirror.

If the unaligned padding is not dirty, we just write it; there is no
reason to touch the dirty bitmap if we succeed (on failure we'll set
the whole region of course, but we lose "synchronicity" on failure
anyway).

Note: we need to disable the dirty bitmap; otherwise we will not be
able to see, in do_sync_target_write, the bitmap state as it was before
the current operation. We could of course check the dirty bitmap before
the operation in bdrv_mirror_top_do_write and remember it, but we don't
need an active dirty bitmap for write-blocking mirror anyway.

The new code path is unused until the following commit reverts
9adc1cb49af8d.

Suggested-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/mirror.c | 39 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/block/mirror.c b/block/mirror.c
index d176bf5920..d192f6a96b 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1204,6 +1204,39 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
                      QEMUIOVector *qiov, int flags)
 {
     int ret;
+    size_t qiov_offset = 0;
+
+    if (!QEMU_IS_ALIGNED(offset, job->granularity) &&
+        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset)) {
+            /*
+             * Dirty unaligned padding
+             * 1. It's already dirty, no damage to "actively_synced" if we just
+             *    skip unaligned part.
+             * 2. If we copy it, we can't reset corresponding bit in
+             *    dirty_bitmap as there may be some "dirty" bytes still not
+             *    copied.
+             * So, just ignore it.
+             */
+            qiov_offset = QEMU_ALIGN_UP(offset, job->granularity) - offset;
+            if (bytes <= qiov_offset) {
+                /* nothing to do after shrink */
+                return;
+            }
+            offset += qiov_offset;
+            bytes -= qiov_offset;
+    }
+
+    if (!QEMU_IS_ALIGNED(offset + bytes, job->granularity) &&
+        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset + bytes - 1))
+    {
+        uint64_t tail = (offset + bytes) % job->granularity;
+
+        if (bytes <= tail) {
+            /* nothing to do after shrink */
+            return;
+        }
+        bytes -= tail;
+    }
 
     bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
 
@@ -1211,7 +1244,8 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
 
     switch (method) {
     case MIRROR_METHOD_COPY:
-        ret = blk_co_pwritev(job->target, offset, bytes, qiov, flags);
+        ret = blk_co_pwritev_part(job->target, offset, bytes,
+                                  qiov, qiov_offset, flags);
         break;
 
     case MIRROR_METHOD_ZERO:
@@ -1640,6 +1674,9 @@ static BlockJob *mirror_start_job(
     if (!s->dirty_bitmap) {
         goto fail;
     }
+    if (s->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING) {
+        bdrv_disable_dirty_bitmap(s->dirty_bitmap);
+    }
 
     ret = block_job_add_bdrv(&s->common, "source", bs, 0,
                              BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE |
-- 
2.21.0



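To make the head/tail clipping in this patch concrete, here is a
standalone arithmetic sketch (plain C; ALIGN_UP is a local stand-in for
QEMU's QEMU_ALIGN_UP, and the granularity and request values are
invented for the example, with both unaligned ends assumed already
dirty):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define GRANULARITY (64 * 1024)                      /* job->granularity */
#define ALIGN_UP(n, m) (((n) + (m) - 1) / (m) * (m)) /* cf. QEMU_ALIGN_UP */

int main(void)
{
    /* Example guest write, unaligned on both ends: [60 KiB, 132 KiB). */
    uint64_t offset = 60 * 1024;
    uint64_t bytes = 72 * 1024;

    /* Skip the dirty head up to the next granularity boundary. */
    uint64_t qiov_offset = ALIGN_UP(offset, GRANULARITY) - offset; /* 4 KiB */
    offset += qiov_offset;
    bytes -= qiov_offset;

    /* Skip the dirty tail hanging over the last aligned boundary. */
    uint64_t tail = (offset + bytes) % GRANULARITY;                /* 4 KiB */
    bytes -= tail;

    /* Only the aligned middle [64 KiB, 128 KiB) is copied and reset. */
    printf("copy %" PRIu64 " bytes at offset %" PRIu64 "\n", bytes, offset);
    return 0;
}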

* [Qemu-devel] [PATCH 4/4] Revert "mirror: Only mirror granularity-aligned chunks"
  2019-09-12 15:13 [Qemu-devel] [PATCH 0/4] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 3/4] block/mirror: support unaligned write in active mirror Vladimir Sementsov-Ogievskiy
@ 2019-09-12 15:13 ` Vladimir Sementsov-Ogievskiy
  2019-10-04 16:33   ` Max Reitz
  2019-10-02  9:53 ` [PATCH 0/4] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
  4 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-09-12 15:13 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

This reverts commit 9adc1cb49af8d4e54f57980b1eed5c0a4b2dafa6.
    "mirror: Only mirror granularity-aligned chunks"

Since the previous commit, unaligned chunks are supported by
do_sync_target_write.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/mirror.c | 29 -----------------------------
 1 file changed, 29 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index d192f6a96b..4626b91541 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1482,15 +1482,6 @@ static void bdrv_mirror_top_child_perm(BlockDriverState *bs, BdrvChild *c,
     *nshared = BLK_PERM_ALL;
 }
 
-static void bdrv_mirror_top_refresh_limits(BlockDriverState *bs, Error **errp)
-{
-    MirrorBDSOpaque *s = bs->opaque;
-
-    if (s && s->job && s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING) {
-        bs->bl.request_alignment = s->job->granularity;
-    }
-}
-
 /* Dummy node that provides consistent read to its users without requiring it
  * from its backing file and that allows writes on the backing file chain. */
 static BlockDriver bdrv_mirror_top = {
@@ -1503,7 +1494,6 @@ static BlockDriver bdrv_mirror_top = {
     .bdrv_co_block_status       = bdrv_co_block_status_from_backing,
     .bdrv_refresh_filename      = bdrv_mirror_top_refresh_filename,
     .bdrv_child_perm            = bdrv_mirror_top_child_perm,
-    .bdrv_refresh_limits        = bdrv_mirror_top_refresh_limits,
 };
 
 static BlockJob *mirror_start_job(
@@ -1651,25 +1641,6 @@ static BlockJob *mirror_start_job(
         s->should_complete = true;
     }
 
-    /*
-     * Must be called before we start tracking writes, but after
-     *
-     *     ((MirrorBlockJob *)
-     *         ((MirrorBDSOpaque *)
-     *             mirror_top_bs->opaque
-     *         )->job
-     *     )->copy_mode
-     *
-     * has the correct value.
-     * (We start tracking writes as of the following
-     * bdrv_create_dirty_bitmap() call.)
-     */
-    bdrv_refresh_limits(mirror_top_bs, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-        goto fail;
-    }
-
     s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
     if (!s->dirty_bitmap) {
         goto fail;
-- 
2.21.0




* Re: [PATCH 0/4] active-mirror: support unaligned guest operations
  2019-09-12 15:13 [Qemu-devel] [PATCH 0/4] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 4/4] Revert "mirror: Only mirror granularity-aligned chunks" Vladimir Sementsov-Ogievskiy
@ 2019-10-02  9:53 ` Vladimir Sementsov-Ogievskiy
  4 siblings, 0 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-02  9:53 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, Denis Lunev, jsnow, qemu-devel, mreitz

ping

12.09.2019 18:13, Vladimir Sementsov-Ogievskiy wrote:
> Commit 9adc1cb49af8d fixed a bug about unaligned (to dirty bitmap
> granularity) guest writes (and discards) by simply requesting
> corresponding alignment on mirror-top filter. However forcing large
> alignment obviously decreases performance of unaligned requests.
> 
> So it's time for a new solution which is in 03. And 04 reverts
> 9adc1cb49af8d.
> 
> Vladimir Sementsov-Ogievskiy (4):
>    block/mirror: simplify do_sync_target_write
>    block/block-backend: add blk_co_pwritev_part
>    block/mirror: support unaligned write in active mirror
>    Revert "mirror: Only mirror granularity-aligned chunks"
> 
>   include/sysemu/block-backend.h |   4 +
>   block/block-backend.c          |  17 +++-
>   block/mirror.c                 | 153 +++++++++++++--------------------
>   3 files changed, 78 insertions(+), 96 deletions(-)
> 


-- 
Best regards,
Vladimir


* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 3/4] block/mirror: support unaligned write in active mirror Vladimir Sementsov-Ogievskiy
@ 2019-10-02 14:57   ` Max Reitz
  2019-10-02 15:03     ` Vladimir Sementsov-Ogievskiy
  2019-10-04 16:31   ` Max Reitz
  1 sibling, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-10-02 14:57 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block; +Cc: kwolf, den, jsnow, qemu-devel


On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
> region in the dirty bitmap, which means that we may not copy some bytes
> and assume them copied, which actually leads to producing corrupted
> target.
> 
> So 9adc1cb49af8d forced dirty bitmap granularity to be
> request_alignment for mirror-top filter, so we are not working with
> unaligned requests. However forcing large alignment obviously decreases
> performance of unaligned requests.
> 
> This commit provides another solution for the problem: if unaligned
> padding is already dirty, we can safely ignore it, as
> 1. It's dirty, it will be copied by mirror_iteration anyway
> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>    bitmap and therefore don't damage "synchronicity" of the
>    write-blocking mirror.

But that’s not what active mirror is for.  The point of active mirror is
that it must converge because every guest write will contribute towards
that goal.

If you skip active mirroring for unaligned guest writes, they will not
contribute towards converging, but in fact lead to the opposite.

Max



* Re: [PATCH 1/4] block/mirror: simplify do_sync_target_write
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 1/4] block/mirror: simplify do_sync_target_write Vladimir Sementsov-Ogievskiy
@ 2019-10-02 14:57   ` Max Reitz
  0 siblings, 0 replies; 25+ messages in thread
From: Max Reitz @ 2019-10-02 14:57 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block; +Cc: kwolf, den, jsnow, qemu-devel


On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
> do_sync_target_write is called from bdrv_mirror_top_do_write after
> write/discard operation, all inside active_write/active_write_settle
> protecting us from mirror iteration. So the whole area is dirty for
> sure, no reason to examine dirty bitmap.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/mirror.c | 95 +++++++++++++++-----------------------------------
>  1 file changed, 28 insertions(+), 67 deletions(-)

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [PATCH 2/4] block/block-backend: add blk_co_pwritev_part
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 2/4] block/block-backend: add blk_co_pwritev_part Vladimir Sementsov-Ogievskiy
@ 2019-10-02 14:57   ` Max Reitz
  0 siblings, 0 replies; 25+ messages in thread
From: Max Reitz @ 2019-10-02 14:57 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block; +Cc: kwolf, den, jsnow, qemu-devel


On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
> Add blk write function with qiov_offset parameter. It's needed for the
> following commit.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/sysemu/block-backend.h |  4 ++++
>  block/block-backend.c          | 17 +++++++++++++----
>  2 files changed, 17 insertions(+), 4 deletions(-)

I don’t know about the next commit, but:

Reviewed-by: Max Reitz <mreitz@redhat.com>



* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-02 14:57   ` Max Reitz
@ 2019-10-02 15:03     ` Vladimir Sementsov-Ogievskiy
  2019-10-02 15:06       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-02 15:03 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: kwolf, jsnow, qemu-devel, Denis Lunev

02.10.2019 17:57, Max Reitz wrote:
> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
>> region in the dirty bitmap, which means that we may not copy some bytes
>> and assume them copied, which actually leads to producing corrupted
>> target.
>>
>> So 9adc1cb49af8d forced dirty bitmap granularity to be
>> request_alignment for mirror-top filter, so we are not working with
>> unaligned requests. However forcing large alignment obviously decreases
>> performance of unaligned requests.
>>
>> This commit provides another solution for the problem: if unaligned
>> padding is already dirty, we can safely ignore it, as
>> 1. It's dirty, it will be copied by mirror_iteration anyway
>> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>>     bitmap and therefore don't damage "synchronicity" of the
>>     write-blocking mirror.
> 
> But that’s not what active mirror is for.  The point of active mirror is
> that it must converge because every guest write will contribute towards
> that goal.
> 
> If you skip active mirroring for unaligned guest writes, they will not
> contribute towards converging, but in fact lead to the opposite.
> 

They will not contribute only if the region is already dirty. Actually, after
the first iteration of mirroring (copying the whole disk), all following writes
will contribute, so the whole process must converge. It is a bit similar to
running one mirror loop in normal mode and then enabling write-blocking.

-- 
Best regards,
Vladimir


* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-02 15:03     ` Vladimir Sementsov-Ogievskiy
@ 2019-10-02 15:06       ` Vladimir Sementsov-Ogievskiy
  2019-10-02 15:52         ` Max Reitz
  0 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-02 15:06 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: kwolf, jsnow, qemu-devel, Denis Lunev

02.10.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
> 02.10.2019 17:57, Max Reitz wrote:
>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
>>> region in the dirty bitmap, which means that we may not copy some bytes
>>> and assume them copied, which actually leads to producing corrupted
>>> target.
>>>
>>> So 9adc1cb49af8d forced dirty bitmap granularity to be
>>> request_alignment for mirror-top filter, so we are not working with
>>> unaligned requests. However forcing large alignment obviously decreases
>>> performance of unaligned requests.
>>>
>>> This commit provides another solution for the problem: if unaligned
>>> padding is already dirty, we can safely ignore it, as
>>> 1. It's dirty, it will be copied by mirror_iteration anyway
>>> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>>>     bitmap and therefore don't damage "synchronicity" of the
>>>     write-blocking mirror.
>>
>> But that’s not what active mirror is for.  The point of active mirror is
>> that it must converge because every guest write will contribute towards
>> that goal.
>>
>> If you skip active mirroring for unaligned guest writes, they will not
>> contribute towards converging, but in fact lead to the opposite.
>>
> 
> The will not contribute only if region is already dirty. Actually, after
> first iteration of mirroring (copying the whole disk), all following writes
> will contribute, so the whole process must converge. It is a bit similar with
> running one mirror loop in normal mode, and then enable write-blocking.
> 


In other words, we don't need "all guest writes contribute" to converge;
"all guest writes don't create new dirty bits" is enough, as we have the
parallel mirror iteration which continuously handles dirty bits.

-- 
Best regards,
Vladimir


* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-02 15:06       ` Vladimir Sementsov-Ogievskiy
@ 2019-10-02 15:52         ` Max Reitz
  2019-10-03  9:34           ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-10-02 15:52 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, jsnow, qemu-devel, Denis Lunev


On 02.10.19 17:06, Vladimir Sementsov-Ogievskiy wrote:
> 02.10.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
>> 02.10.2019 17:57, Max Reitz wrote:
>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
>>>> region in the dirty bitmap, which means that we may not copy some bytes
>>>> and assume them copied, which actually leads to producing corrupted
>>>> target.
>>>>
>>>> So 9adc1cb49af8d forced dirty bitmap granularity to be
>>>> request_alignment for mirror-top filter, so we are not working with
>>>> unaligned requests. However forcing large alignment obviously decreases
>>>> performance of unaligned requests.
>>>>
>>>> This commit provides another solution for the problem: if unaligned
>>>> padding is already dirty, we can safely ignore it, as
>>>> 1. It's dirty, it will be copied by mirror_iteration anyway
>>>> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>>>>     bitmap and therefore don't damage "synchronicity" of the
>>>>     write-blocking mirror.
>>>
>>> But that’s not what active mirror is for.  The point of active mirror is
>>> that it must converge because every guest write will contribute towards
>>> that goal.
>>>
>>> If you skip active mirroring for unaligned guest writes, they will not
>>> contribute towards converging, but in fact lead to the opposite.
>>>
>>
>> The will not contribute only if region is already dirty. Actually, after
>> first iteration of mirroring (copying the whole disk), all following writes
>> will contribute, so the whole process must converge. It is a bit similar with
>> running one mirror loop in normal mode, and then enable write-blocking.
>>
> 
> 
> In other words, we don't need "all guest writes contribute" to converge,
> "all guest writes don't create new dirty bits" is enough, as we have parallel
> mirror iteration which contiguously handles dirty bits.

Hm, in a sense.  But it does mean that guest writes will not contribute
to convergence.

And that’s against the current definition of write-blocking, which does
state that “when data is written to the source, write it (synchronously)
to the target as well”.

Max



* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-02 15:52         ` Max Reitz
@ 2019-10-03  9:34           ` Vladimir Sementsov-Ogievskiy
  2019-10-04 12:59             ` Max Reitz
  0 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-03  9:34 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: kwolf, jsnow, qemu-devel, Denis Lunev

02.10.2019 18:52, Max Reitz wrote:
> On 02.10.19 17:06, Vladimir Sementsov-Ogievskiy wrote:
>> 02.10.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
>>> 02.10.2019 17:57, Max Reitz wrote:
>>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>>> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
>>>>> region in the dirty bitmap, which means that we may not copy some bytes
>>>>> and assume them copied, which actually leads to producing corrupted
>>>>> target.
>>>>>
>>>>> So 9adc1cb49af8d forced dirty bitmap granularity to be
>>>>> request_alignment for mirror-top filter, so we are not working with
>>>>> unaligned requests. However forcing large alignment obviously decreases
>>>>> performance of unaligned requests.
>>>>>
>>>>> This commit provides another solution for the problem: if unaligned
>>>>> padding is already dirty, we can safely ignore it, as
>>>>> 1. It's dirty, it will be copied by mirror_iteration anyway
>>>>> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>>>>>      bitmap and therefore don't damage "synchronicity" of the
>>>>>      write-blocking mirror.
>>>>
>>>> But that’s not what active mirror is for.  The point of active mirror is
>>>> that it must converge because every guest write will contribute towards
>>>> that goal.
>>>>
>>>> If you skip active mirroring for unaligned guest writes, they will not
>>>> contribute towards converging, but in fact lead to the opposite.
>>>>
>>>
>>> The will not contribute only if region is already dirty. Actually, after
>>> first iteration of mirroring (copying the whole disk), all following writes
>>> will contribute, so the whole process must converge. It is a bit similar with
>>> running one mirror loop in normal mode, and then enable write-blocking.
>>>
>>
>>
>> In other words, we don't need "all guest writes contribute" to converge,
>> "all guest writes don't create new dirty bits" is enough, as we have parallel
>> mirror iteration which contiguously handles dirty bits.
> 
> Hm, in a sense.  But it does mean that guest writes will not contribute
> to convergence.
> 
> And that’s against the current definition of write-blocking, which does
> state that “when data is written to the source, write it (synchronously)
> to the target as well”.
> 

Hmm, understood. But IMHO our proposed behavior is better in general.
Do you think it's a problem to change the spec now?
If yes, I'll resend with an option.


-- 
Best regards,
Vladimir


* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-03  9:34           ` Vladimir Sementsov-Ogievskiy
@ 2019-10-04 12:59             ` Max Reitz
  2019-10-04 13:22               ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-10-04 12:59 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, jsnow, qemu-devel, Denis Lunev


On 03.10.19 11:34, Vladimir Sementsov-Ogievskiy wrote:
> 02.10.2019 18:52, Max Reitz wrote:
>> On 02.10.19 17:06, Vladimir Sementsov-Ogievskiy wrote:
>>> 02.10.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
>>>> 02.10.2019 17:57, Max Reitz wrote:
>>>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
>>>>>> region in the dirty bitmap, which means that we may not copy some bytes
>>>>>> and assume them copied, which actually leads to producing corrupted
>>>>>> target.
>>>>>>
>>>>>> So 9adc1cb49af8d forced dirty bitmap granularity to be
>>>>>> request_alignment for mirror-top filter, so we are not working with
>>>>>> unaligned requests. However forcing large alignment obviously decreases
>>>>>> performance of unaligned requests.
>>>>>>
>>>>>> This commit provides another solution for the problem: if unaligned
>>>>>> padding is already dirty, we can safely ignore it, as
>>>>>> 1. It's dirty, it will be copied by mirror_iteration anyway
>>>>>> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>>>>>>      bitmap and therefore don't damage "synchronicity" of the
>>>>>>      write-blocking mirror.
>>>>>
>>>>> But that’s not what active mirror is for.  The point of active mirror is
>>>>> that it must converge because every guest write will contribute towards
>>>>> that goal.
>>>>>
>>>>> If you skip active mirroring for unaligned guest writes, they will not
>>>>> contribute towards converging, but in fact lead to the opposite.
>>>>>
>>>>
>>>> The will not contribute only if region is already dirty. Actually, after
>>>> first iteration of mirroring (copying the whole disk), all following writes
>>>> will contribute, so the whole process must converge. It is a bit similar with
>>>> running one mirror loop in normal mode, and then enable write-blocking.
>>>>
>>>
>>>
>>> In other words, we don't need "all guest writes contribute" to converge,
>>> "all guest writes don't create new dirty bits" is enough, as we have parallel
>>> mirror iteration which contiguously handles dirty bits.
>>
>> Hm, in a sense.  But it does mean that guest writes will not contribute
>> to convergence.
>>
>> And that’s against the current definition of write-blocking, which does
>> state that “when data is written to the source, write it (synchronously)
>> to the target as well”.
>>
> 
> Hmm, understand. But IMHO our proposed behavior is better in general.
> Do you think it's a problem to change spec now?
> If yes, I'll resend with an option

Well, the thing is that I’d find it weird if write-blocking wasn’t
blocking in all cases.  And in my opinion, it makes more sense for
active mirror if all writes actively contributed to convergence.

Max



* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-04 12:59             ` Max Reitz
@ 2019-10-04 13:22               ` Vladimir Sementsov-Ogievskiy
  2019-10-04 14:48                 ` Max Reitz
  0 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-04 13:22 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: kwolf, jsnow, qemu-devel, Denis Lunev

04.10.2019 15:59, Max Reitz wrote:
> On 03.10.19 11:34, Vladimir Sementsov-Ogievskiy wrote:
>> 02.10.2019 18:52, Max Reitz wrote:
>>> On 02.10.19 17:06, Vladimir Sementsov-Ogievskiy wrote:
>>>> 02.10.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
>>>>> 02.10.2019 17:57, Max Reitz wrote:
>>>>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
>>>>>>> region in the dirty bitmap, which means that we may not copy some bytes
>>>>>>> and assume them copied, which actually leads to producing corrupted
>>>>>>> target.
>>>>>>>
>>>>>>> So 9adc1cb49af8d forced dirty bitmap granularity to be
>>>>>>> request_alignment for mirror-top filter, so we are not working with
>>>>>>> unaligned requests. However forcing large alignment obviously decreases
>>>>>>> performance of unaligned requests.
>>>>>>>
>>>>>>> This commit provides another solution for the problem: if unaligned
>>>>>>> padding is already dirty, we can safely ignore it, as
>>>>>>> 1. It's dirty, it will be copied by mirror_iteration anyway
>>>>>>> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>>>>>>>       bitmap and therefore don't damage "synchronicity" of the
>>>>>>>       write-blocking mirror.
>>>>>>
>>>>>> But that’s not what active mirror is for.  The point of active mirror is
>>>>>> that it must converge because every guest write will contribute towards
>>>>>> that goal.
>>>>>>
>>>>>> If you skip active mirroring for unaligned guest writes, they will not
>>>>>> contribute towards converging, but in fact lead to the opposite.
>>>>>>
>>>>>
>>>>> The will not contribute only if region is already dirty. Actually, after
>>>>> first iteration of mirroring (copying the whole disk), all following writes
>>>>> will contribute, so the whole process must converge. It is a bit similar with
>>>>> running one mirror loop in normal mode, and then enable write-blocking.
>>>>>
>>>>
>>>>
>>>> In other words, we don't need "all guest writes contribute" to converge,
>>>> "all guest writes don't create new dirty bits" is enough, as we have parallel
>>>> mirror iteration which contiguously handles dirty bits.
>>>
>>> Hm, in a sense.  But it does mean that guest writes will not contribute
>>> to convergence.
>>>
>>> And that’s against the current definition of write-blocking, which does
>>> state that “when data is written to the source, write it (synchronously)
>>> to the target as well”.
>>>
>>
>> Hmm, understand. But IMHO our proposed behavior is better in general.
>> Do you think it's a problem to change spec now?
>> If yes, I'll resend with an option
> 
> Well, the thing is that I’d find it weird if write-blocking wasn’t
> blocking in all cases.  And in my opinion, it makes more sense for
> active mirror if all writes actively contributed to convergence.
> 

Why? What is the benefit of it?
What does "all writes actively contribute to convergence" mean for the user?

I think for the user there may be the following criteria:

1. Guaranteed convergence, under any guest write load.
Both the current and my proposed variants are OK.

2. Less impact on the guest.
Obviously my proposed variant is better.

3. Total time of mirroring.
It seems the current variant may be a bit better, but I don't think that
unaligned tails have a significant impact.

===

So, assume I want [1]+[2]. And possibly
2.2: Even less impact on the guest: ignore not only unaligned tails if they are
already dirty, but the full synchronous mirror operation if the area is already
dirty.

What should I call this? Should it be a separate mode, or an option for
write-blocking?

-- 
Best regards,
Vladimir


* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-04 13:22               ` Vladimir Sementsov-Ogievskiy
@ 2019-10-04 14:48                 ` Max Reitz
  2019-10-04 15:04                   ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-10-04 14:48 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, jsnow, qemu-devel, Denis Lunev


On 04.10.19 15:22, Vladimir Sementsov-Ogievskiy wrote:
> 04.10.2019 15:59, Max Reitz wrote:
>> On 03.10.19 11:34, Vladimir Sementsov-Ogievskiy wrote:
>>> 02.10.2019 18:52, Max Reitz wrote:
>>>> On 02.10.19 17:06, Vladimir Sementsov-Ogievskiy wrote:
>>>>> 02.10.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> 02.10.2019 17:57, Max Reitz wrote:
>>>>>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>>> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
>>>>>>>> region in the dirty bitmap, which means that we may not copy some bytes
>>>>>>>> and assume them copied, which actually leads to producing corrupted
>>>>>>>> target.
>>>>>>>>
>>>>>>>> So 9adc1cb49af8d forced dirty bitmap granularity to be
>>>>>>>> request_alignment for mirror-top filter, so we are not working with
>>>>>>>> unaligned requests. However forcing large alignment obviously decreases
>>>>>>>> performance of unaligned requests.
>>>>>>>>
>>>>>>>> This commit provides another solution for the problem: if unaligned
>>>>>>>> padding is already dirty, we can safely ignore it, as
>>>>>>>> 1. It's dirty, it will be copied by mirror_iteration anyway
>>>>>>>> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>>>>>>>>       bitmap and therefore don't damage "synchronicity" of the
>>>>>>>>       write-blocking mirror.
>>>>>>>
>>>>>>> But that’s not what active mirror is for.  The point of active mirror is
>>>>>>> that it must converge because every guest write will contribute towards
>>>>>>> that goal.
>>>>>>>
>>>>>>> If you skip active mirroring for unaligned guest writes, they will not
>>>>>>> contribute towards converging, but in fact lead to the opposite.
>>>>>>>
>>>>>>
>>>>>> The will not contribute only if region is already dirty. Actually, after
>>>>>> first iteration of mirroring (copying the whole disk), all following writes
>>>>>> will contribute, so the whole process must converge. It is a bit similar with
>>>>>> running one mirror loop in normal mode, and then enable write-blocking.
>>>>>>
>>>>>
>>>>>
>>>>> In other words, we don't need "all guest writes contribute" to converge,
>>>>> "all guest writes don't create new dirty bits" is enough, as we have parallel
>>>>> mirror iteration which contiguously handles dirty bits.
>>>>
>>>> Hm, in a sense.  But it does mean that guest writes will not contribute
>>>> to convergence.
>>>>
>>>> And that’s against the current definition of write-blocking, which does
>>>> state that “when data is written to the source, write it (synchronously)
>>>> to the target as well”.
>>>>
>>>
>>> Hmm, understand. But IMHO our proposed behavior is better in general.
>>> Do you think it's a problem to change spec now?
>>> If yes, I'll resend with an option
>>
>> Well, the thing is that I’d find it weird if write-blocking wasn’t
>> blocking in all cases.  And in my opinion, it makes more sense for
>> active mirror if all writes actively contributed to convergence.
>>
> 
> Why? What is the benefit in it?
> What is "all writes actively contributed to convergence" for user?

One thing I wonder about is whether it’s really guaranteed that the
background job will run under full I/O load, and how often it runs.

I fear that with your model, the background job might starve and the
mirror may take a very long time.  It won’t diverge, but it also won’t
really converge.

The advantage of letting all writes block is that even under full I/O
load, the mirror job will progress at a steady pace.

> I think for user there may be the following criteria:
> 
> 1. guaranteed converge, with any guest write load.
> Both current and my proposed variants are OK.
> 
> 2. Less impact on guest.
> Obviously my proposed variant is better
> 
> 3. Total time of mirroring
> Seems, current may be a bit better, but I don't think that unaligned
> tails gives significant impact.
> 
> ===
> 
> So, assume I want [1]+[2]. And possibly
> 2.2: Even less impact on guest: ignore not only unaligned tails if they are
> already dirty, but full synchronous mirror operation if area is already dirty.
> 
> How should I call this? Should it be separate mode, or option for write-blocking?

I don’t know whether it makes sense to add a separate mode or a separate
option just for this difference.  I don’t think anyone would choose the
non-default option.

But I do think there’s quite a bit of difference in how the job behaves
still...  I don’t know. :-/

Max



* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-04 14:48                 ` Max Reitz
@ 2019-10-04 15:04                   ` Vladimir Sementsov-Ogievskiy
  2019-10-04 15:27                     ` Max Reitz
  0 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-04 15:04 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: kwolf, jsnow, qemu-devel, Denis Lunev

04.10.2019 17:48, Max Reitz wrote:
> On 04.10.19 15:22, Vladimir Sementsov-Ogievskiy wrote:
>> 04.10.2019 15:59, Max Reitz wrote:
>>> On 03.10.19 11:34, Vladimir Sementsov-Ogievskiy wrote:
>>>> 02.10.2019 18:52, Max Reitz wrote:
>>>>> On 02.10.19 17:06, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> 02.10.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>> 02.10.2019 17:57, Max Reitz wrote:
>>>>>>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>>>> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
>>>>>>>>> region in the dirty bitmap, which means that we may not copy some bytes
>>>>>>>>> and assume them copied, which actually leads to producing corrupted
>>>>>>>>> target.
>>>>>>>>>
>>>>>>>>> So 9adc1cb49af8d forced dirty bitmap granularity to be
>>>>>>>>> request_alignment for mirror-top filter, so we are not working with
>>>>>>>>> unaligned requests. However forcing large alignment obviously decreases
>>>>>>>>> performance of unaligned requests.
>>>>>>>>>
>>>>>>>>> This commit provides another solution for the problem: if unaligned
>>>>>>>>> padding is already dirty, we can safely ignore it, as
>>>>>>>>> 1. It's dirty, it will be copied by mirror_iteration anyway
>>>>>>>>> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>>>>>>>>>        bitmap and therefore don't damage "synchronicity" of the
>>>>>>>>>        write-blocking mirror.
>>>>>>>>
>>>>>>>> But that’s not what active mirror is for.  The point of active mirror is
>>>>>>>> that it must converge because every guest write will contribute towards
>>>>>>>> that goal.
>>>>>>>>
>>>>>>>> If you skip active mirroring for unaligned guest writes, they will not
>>>>>>>> contribute towards converging, but in fact lead to the opposite.
>>>>>>>>
>>>>>>>
>>>>>>> The will not contribute only if region is already dirty. Actually, after
>>>>>>> first iteration of mirroring (copying the whole disk), all following writes
>>>>>>> will contribute, so the whole process must converge. It is a bit similar with
>>>>>>> running one mirror loop in normal mode, and then enable write-blocking.
>>>>>>>
>>>>>>
>>>>>>
>>>>>> In other words, we don't need "all guest writes contribute" to converge,
>>>>>> "all guest writes don't create new dirty bits" is enough, as we have parallel
>>>>>> mirror iteration which contiguously handles dirty bits.
>>>>>
>>>>> Hm, in a sense.  But it does mean that guest writes will not contribute
>>>>> to convergence.
>>>>>
>>>>> And that’s against the current definition of write-blocking, which does
>>>>> state that “when data is written to the source, write it (synchronously)
>>>>> to the target as well”.
>>>>>
>>>>
>>>> Hmm, understand. But IMHO our proposed behavior is better in general.
>>>> Do you think it's a problem to change spec now?
>>>> If yes, I'll resend with an option
>>>
>>> Well, the thing is that I’d find it weird if write-blocking wasn’t
>>> blocking in all cases.  And in my opinion, it makes more sense for
>>> active mirror if all writes actively contributed to convergence.
>>>
>>
>> Why? What is the benefit in it?
>> What is "all writes actively contributed to convergence" for user?
> 
> One thing I wonder about is whether it’s really guaranteed that the
> background job will run under full I/O load, and how often it runs.
> 
> I fear that with your model, the background job might starve and the
> mirror may take a very long time.

Hmmm. I think the mirror job runs in the same context as guest writes, and
goes through the same I/O API. Why would it starve? (I understand that my
words are not evidence...)

>  It won’t diverge, but it also won’t
> really converge.

But the same will be true with the current behavior: the guest is not
guaranteed to write to all parts of the disk, and in most scenarios it
doesn't. So, if the mirror job starves because of a huge guest I/O load,
we will not converge anyway.

So, a background process is a necessary thing for convergence anyway.

> 
> The advantage of letting all writes block is that even under full I/O
> load, the mirror job will progress at a steady pace.
> 
>> I think for user there may be the following criteria:
>>
>> 1. guaranteed converge, with any guest write load.
>> Both current and my proposed variants are OK.
>>
>> 2. Less impact on guest.
>> Obviously my proposed variant is better
>>
>> 3. Total time of mirroring
>> Seems, current may be a bit better, but I don't think that unaligned
>> tails gives significant impact.
>>
>> ===
>>
>> So, assume I want [1]+[2]. And possibly
>> 2.2: Even less impact on guest: ignore not only unaligned tails if they are
>> already dirty, but full synchronous mirror operation if area is already dirty.
>>
>> How should I call this? Should it be separate mode, or option for write-blocking?
> 
> I don’t know whether it makes sense to add a separate mode or a separate
> option just for this difference.  I don’t think anyone would choose the
> non-default option.

At least Virtuozzo will choose :)

> 
> But I do think there’s quite a bit of difference in how the job behaves
> still...  I don’t know. :-/
> 
> Max
> 


-- 
Best regards,
Vladimir


* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-04 15:04                   ` Vladimir Sementsov-Ogievskiy
@ 2019-10-04 15:27                     ` Max Reitz
  2019-10-04 15:38                       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-10-04 15:27 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, jsnow, qemu-devel, Denis Lunev


On 04.10.19 17:04, Vladimir Sementsov-Ogievskiy wrote:
> 04.10.2019 17:48, Max Reitz wrote:
>> On 04.10.19 15:22, Vladimir Sementsov-Ogievskiy wrote:
>>> 04.10.2019 15:59, Max Reitz wrote:
>>>> On 03.10.19 11:34, Vladimir Sementsov-Ogievskiy wrote:
>>>>> 02.10.2019 18:52, Max Reitz wrote:
>>>>>> On 02.10.19 17:06, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>> 02.10.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>>> 02.10.2019 17:57, Max Reitz wrote:
>>>>>>>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>>>>> Prior 9adc1cb49af8d do_sync_target_write had a bug: it reset aligned-up
>>>>>>>>>> region in the dirty bitmap, which means that we may not copy some bytes
>>>>>>>>>> and assume them copied, which actually leads to producing corrupted
>>>>>>>>>> target.
>>>>>>>>>>
>>>>>>>>>> So 9adc1cb49af8d forced dirty bitmap granularity to be
>>>>>>>>>> request_alignment for mirror-top filter, so we are not working with
>>>>>>>>>> unaligned requests. However forcing large alignment obviously decreases
>>>>>>>>>> performance of unaligned requests.
>>>>>>>>>>
>>>>>>>>>> This commit provides another solution for the problem: if unaligned
>>>>>>>>>> padding is already dirty, we can safely ignore it, as
>>>>>>>>>> 1. It's dirty, it will be copied by mirror_iteration anyway
>>>>>>>>>> 2. It's dirty, so skipping it now we don't increase dirtiness of the
>>>>>>>>>>        bitmap and therefore don't damage "synchronicity" of the
>>>>>>>>>>        write-blocking mirror.
>>>>>>>>>
>>>>>>>>> But that’s not what active mirror is for.  The point of active mirror is
>>>>>>>>> that it must converge because every guest write will contribute towards
>>>>>>>>> that goal.
>>>>>>>>>
>>>>>>>>> If you skip active mirroring for unaligned guest writes, they will not
>>>>>>>>> contribute towards converging, but in fact lead to the opposite.
>>>>>>>>>
>>>>>>>>
>>>>>>>> The will not contribute only if region is already dirty. Actually, after
>>>>>>>> first iteration of mirroring (copying the whole disk), all following writes
>>>>>>>> will contribute, so the whole process must converge. It is a bit similar with
>>>>>>>> running one mirror loop in normal mode, and then enable write-blocking.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> In other words, we don't need "all guest writes contribute" to converge,
>>>>>>> "all guest writes don't create new dirty bits" is enough, as we have parallel
>>>>>>> mirror iteration which contiguously handles dirty bits.
>>>>>>
>>>>>> Hm, in a sense.  But it does mean that guest writes will not contribute
>>>>>> to convergence.
>>>>>>
>>>>>> And that’s against the current definition of write-blocking, which does
>>>>>> state that “when data is written to the source, write it (synchronously)
>>>>>> to the target as well”.
>>>>>>
>>>>>
>>>>> Hmm, understand. But IMHO our proposed behavior is better in general.
>>>>> Do you think it's a problem to change spec now?
>>>>> If yes, I'll resend with an option
>>>>
>>>> Well, the thing is that I’d find it weird if write-blocking wasn’t
>>>> blocking in all cases.  And in my opinion, it makes more sense for
>>>> active mirror if all writes actively contributed to convergence.
>>>>
>>>
>>> Why? What is the benefit in it?
>>> What is "all writes actively contributed to convergence" for user?
>>
>> One thing I wonder about is whether it’s really guaranteed that the
>> background job will run under full I/O load, and how often it runs.
>>
>> I fear that with your model, the background job might starve and the
>> mirror may take a very long time.
> 
> Hmmm. I think mirror job is in same context as guest writes, and goes
> through same IO api.. Why it will starve? (I understand that my words
> are not an evidence...).

I thought that maybe if the disk is being read and written all the
time, there’d be no time for the mirror coroutine.

>>  It won’t diverge, but it also won’t
>> really converge.
> 
> But the same holds for the current behavior: the guest is not guaranteed to
> write to all parts of the disk, and in most scenarios it doesn't. So, if the
> mirror job starves because of a huge guest I/O load, we will not converge
> anyway.
> 
> So, the background process is necessary for convergence in any case.

Good point.  That convinces me.

>> The advantage of letting all writes block is that even under full I/O
>> load, the mirror job will progress at a steady pace.
>>
>>> I think for the user there may be the following criteria:
>>>
>>> 1. Guaranteed convergence, under any guest write load.
>>> Both the current and my proposed variants are OK.
>>>
>>> 2. Less impact on the guest.
>>> Obviously my proposed variant is better.
>>>
>>> 3. Total time of mirroring.
>>> It seems the current one may be a bit better, but I don't think that
>>> unaligned tails have a significant impact.
>>>
>>> ===
>>>
>>> So, assume I want [1]+[2]. And possibly
>>> 2.2: Even less impact on the guest: skip not only unaligned tails when they
>>> are already dirty, but the full synchronous mirror operation when the area
>>> is already dirty.
>>>
>>> What should I call this? Should it be a separate mode, or an option for
>>> write-blocking?
>>
>> I don’t know whether it makes sense to add a separate mode or a separate
>> option just for this difference.  I don’t think anyone would choose the
>> non-default option.
> 
> At least Virtuozzo will choose :)

But if Virtuozzo knows exactly what to choose, then we can just
implement only that behavior.

Max



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-04 15:27                     ` Max Reitz
@ 2019-10-04 15:38                       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-04 15:38 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: kwolf, jsnow, qemu-devel, Denis Lunev

04.10.2019 18:27, Max Reitz wrote:
> On 04.10.19 17:04, Vladimir Sementsov-Ogievskiy wrote:
>> 04.10.2019 17:48, Max Reitz wrote:
>>> On 04.10.19 15:22, Vladimir Sementsov-Ogievskiy wrote:
>>>> 04.10.2019 15:59, Max Reitz wrote:
>>>>> On 03.10.19 11:34, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> 02.10.2019 18:52, Max Reitz wrote:
>>>>>>> On 02.10.19 17:06, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>>> 02.10.2019 18:03, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>>>> 02.10.2019 17:57, Max Reitz wrote:
>>>>>>>>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>>>>>> Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset the
>>>>>>>>>>> aligned-up region in the dirty bitmap, which means that we might not
>>>>>>>>>>> copy some bytes yet assume them copied, which actually leads to
>>>>>>>>>>> producing a corrupted target.
>>>>>>>>>>>
>>>>>>>>>>> So 9adc1cb49af8d forced the dirty bitmap granularity to be
>>>>>>>>>>> request_alignment for the mirror-top filter, so that we never work
>>>>>>>>>>> with unaligned requests. However, forcing large alignment obviously
>>>>>>>>>>> decreases the performance of unaligned requests.
>>>>>>>>>>>
>>>>>>>>>>> This commit provides another solution for the problem: if the
>>>>>>>>>>> unaligned padding is already dirty, we can safely ignore it, as
>>>>>>>>>>> 1. It's dirty, so it will be copied by mirror_iteration anyway.
>>>>>>>>>>> 2. It's dirty, so by skipping it now we don't increase the dirtiness
>>>>>>>>>>>    of the bitmap and therefore don't damage the "synchronicity" of
>>>>>>>>>>>    the write-blocking mirror.
>>>>>>>>>>
>>>>>>>>>> But that’s not what active mirror is for.  The point of active mirror is
>>>>>>>>>> that it must converge because every guest write will contribute towards
>>>>>>>>>> that goal.
>>>>>>>>>>
>>>>>>>>>> If you skip active mirroring for unaligned guest writes, they will not
>>>>>>>>>> contribute towards converging, but in fact lead to the opposite.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> They will not contribute only if the region is already dirty. Actually, after
>>>>>>>>> the first iteration of mirroring (copying the whole disk), all following writes
>>>>>>>>> will contribute, so the whole process must converge. It is a bit similar to
>>>>>>>>> running one mirror loop in normal mode and then enabling write-blocking.
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> In other words, we don't need "all guest writes contribute" to converge;
>>>>>>>> "all guest writes don't create new dirty bits" is enough, as we have a
>>>>>>>> parallel mirror iteration which continuously handles dirty bits.
>>>>>>>
>>>>>>> Hm, in a sense.  But it does mean that guest writes will not contribute
>>>>>>> to convergence.
>>>>>>>
>>>>>>> And that’s against the current definition of write-blocking, which does
>>>>>>> state that “when data is written to the source, write it (synchronously)
>>>>>>> to the target as well”.
>>>>>>>
>>>>>>
>>>>>> Hmm, I understand. But IMHO our proposed behavior is better in general.
>>>>>> Do you think it's a problem to change the spec now?
>>>>>> If yes, I'll resend with an option.
>>>>>
>>>>> Well, the thing is that I’d find it weird if write-blocking wasn’t
>>>>> blocking in all cases.  And in my opinion, it makes more sense for
>>>>> active mirror if all writes actively contributed to convergence.
>>>>>
>>>>
>>>> Why? What is the benefit of it?
>>>> What does "all writes actively contributed to convergence" mean for the user?
>>>
>>> One thing I wonder about is whether it’s really guaranteed that the
>>> background job will run under full I/O load, and how often it runs.
>>>
>>> I fear that with your model, the background job might starve and the
>>> mirror may take a very long time.
>>
>> Hmmm. I think the mirror job runs in the same context as guest writes, and
>> goes through the same I/O API... Why would it starve? (I understand that my
>> words are not evidence...)
> 
> I thought that maybe if the disk is being read and written to all the
> time, there’d be no time for the mirror coroutine.
> 
>>>   It won’t diverge, but it also won’t
>>> really converge.
>>
>> But the same holds for the current behavior: the guest is not guaranteed to
>> write to all parts of the disk, and in most scenarios it doesn't. So, if the
>> mirror job starves because of a huge guest I/O load, we will not converge
>> anyway.
>>
>> So, the background process is necessary for convergence in any case.
> 
> Good point.  That convinces me.
> 
>>> The advantage of letting all writes block is that even under full I/O
>>> load, the mirror job will progress at a steady pace.
>>>
>>>> I think for the user there may be the following criteria:
>>>>
>>>> 1. Guaranteed convergence, under any guest write load.
>>>> Both the current and my proposed variants are OK.
>>>>
>>>> 2. Less impact on the guest.
>>>> Obviously my proposed variant is better.
>>>>
>>>> 3. Total time of mirroring.
>>>> It seems the current one may be a bit better, but I don't think that
>>>> unaligned tails have a significant impact.
>>>>
>>>> ===
>>>>
>>>> So, assume I want [1]+[2]. And possibly
>>>> 2.2: Even less impact on the guest: skip not only unaligned tails when they
>>>> are already dirty, but the full synchronous mirror operation when the area
>>>> is already dirty.
>>>>
>>>> What should I call this? Should it be a separate mode, or an option for
>>>> write-blocking?
>>>
>>> I don’t know whether it makes sense to add a separate mode or a separate
>>> option just for this difference.  I don’t think anyone would choose the
>>> non-default option.
>>
>> At least Virtuozzo will choose :)
> 
> But if Virtuozzo knows exactly what to choose, then we can just
> implement only that behavior.
> 

Then, welcome to this series :)

Still, [2.2] is arguable: if we skip some dirty areas in the synchronous copy,
the guest will be a bit happier, but we miss a chance to spare the background
mirror the read part of the copy (on the synchronous path the data is already
in memory). But this may be postponed.
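
For illustration, [2.2] could look something like this at the top of
do_sync_target_write (just a sketch, not part of this series; it reuses
bdrv_dirty_bitmap_get() the same way patch 3 does):

    uint64_t i;
    bool all_dirty = true;

    /* Check every granularity chunk touched by the request. */
    for (i = QEMU_ALIGN_DOWN(offset, job->granularity);
         i < offset + bytes && all_dirty;
         i += job->granularity)
    {
        all_dirty = bdrv_dirty_bitmap_get(job->dirty_bitmap, i);
    }

    if (all_dirty) {
        /*
         * The whole area is already dirty: the background iteration
         * will copy it anyway, so skip the synchronous copy entirely.
         */
        return;
    }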


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 3/4] block/mirror: support unaligned write in active mirror Vladimir Sementsov-Ogievskiy
  2019-10-02 14:57   ` Max Reitz
@ 2019-10-04 16:31   ` Max Reitz
  2019-10-11  8:33     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 25+ messages in thread
From: Max Reitz @ 2019-10-04 16:31 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block; +Cc: kwolf, den, jsnow, qemu-devel



On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
> Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset the
> aligned-up region in the dirty bitmap, which means that we might not
> copy some bytes yet assume them copied, which actually leads to
> producing a corrupted target.
> 
> So 9adc1cb49af8d forced the dirty bitmap granularity to be
> request_alignment for the mirror-top filter, so that we never work with
> unaligned requests. However, forcing large alignment obviously decreases
> the performance of unaligned requests.
> 
> This commit provides another solution for the problem: if the unaligned
> padding is already dirty, we can safely ignore it, as
> 1. It's dirty, so it will be copied by mirror_iteration anyway.
> 2. It's dirty, so by skipping it now we don't increase the dirtiness of
>    the bitmap and therefore don't damage the "synchronicity" of the
>    write-blocking mirror.
> 
> If the unaligned padding is not dirty, we just write it; there is no
> reason to touch the dirty bitmap if we succeed (on failure we'll set the
> whole region of course, but we lose "synchronicity" on failure anyway).
> 
> Note: we need to disable the dirty_bitmap, otherwise we will not be able
> to see in do_sync_target_write the bitmap state as it was before the
> current operation. We may of course check the dirty bitmap before the
> operation in bdrv_mirror_top_do_write and remember it, but we don't need
> an active dirty bitmap for a write-blocking mirror anyway.
> 
> The new code path is unused until the following commit reverts
> 9adc1cb49af8d.
> 
> Suggested-by: Denis V. Lunev <den@openvz.org>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/mirror.c | 39 ++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/block/mirror.c b/block/mirror.c
> index d176bf5920..d192f6a96b 100644
> --- a/block/mirror.c
> +++ b/block/mirror.c
> @@ -1204,6 +1204,39 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
>                       QEMUIOVector *qiov, int flags)
>  {
>      int ret;
> +    size_t qiov_offset = 0;
> +
> +    if (!QEMU_IS_ALIGNED(offset, job->granularity) &&
> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset)) {
> +            /*
> +             * Dirty unaligned padding
> +             * 1. It's already dirty, no damage to "actively_synced" if we just
> +             *    skip unaligned part.
> +             * 2. If we copy it, we can't reset corresponding bit in
> +             *    dirty_bitmap as there may be some "dirty" bytes still not
> +             *    copied.
> +             * So, just ignore it.
> +             */
> +            qiov_offset = QEMU_ALIGN_UP(offset, job->granularity) - offset;
> +            if (bytes <= qiov_offset) {
> +                /* nothing to do after shrink */
> +                return;
> +            }
> +            offset += qiov_offset;
> +            bytes -= qiov_offset;
> +    }
> +
> +    if (!QEMU_IS_ALIGNED(offset + bytes, job->granularity) &&
> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset + bytes - 1))
> +    {
> +        uint64_t tail = (offset + bytes) % job->granularity;
> +
> +        if (bytes <= tail) {
> +            /* nothing to do after shrink */
> +            return;
> +        }
> +        bytes -= tail;
> +    }
>  
>      bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
>  

The bdrv_set_dirty_bitmap() in the error case below needs to use the
original offset/bytes, I suppose.
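
I mean something like this (just a sketch; orig_offset and orig_bytes are
hypothetical names for the values saved before the shrinking):

    uint64_t orig_offset = offset, orig_bytes = bytes; /* before shrinking */
    /* ... head/tail shrinking and the write, as in the patch ... */
    if (ret < 0) {
        /* Re-dirty the full original area, shrunk tails included. */
        bdrv_set_dirty_bitmap(job->dirty_bitmap, orig_offset, orig_bytes);
    }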

Apart from that, looks good to me.

Max



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 4/4] Revert "mirror: Only mirror granularity-aligned chunks"
  2019-09-12 15:13 ` [Qemu-devel] [PATCH 4/4] Revert "mirror: Only mirror granularity-aligned chunks" Vladimir Sementsov-Ogievskiy
@ 2019-10-04 16:33   ` Max Reitz
  0 siblings, 0 replies; 25+ messages in thread
From: Max Reitz @ 2019-10-04 16:33 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block; +Cc: kwolf, den, jsnow, qemu-devel



On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
> This reverts commit 9adc1cb49af8d4e54f57980b1eed5c0a4b2dafa6.
>     "mirror: Only mirror granularity-aligned chunks"
> 
> Since the previous commit, unaligned chunks are supported by
> do_sync_target_write.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/mirror.c | 29 -----------------------------
>  1 file changed, 29 deletions(-)

Reviewed-by: Max Reitz <mreitz@redhat.com>



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-04 16:31   ` Max Reitz
@ 2019-10-11  8:33     ` Vladimir Sementsov-Ogievskiy
  2019-10-11  8:58       ` Max Reitz
  0 siblings, 1 reply; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-11  8:33 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: kwolf, jsnow, qemu-devel, Denis Lunev

04.10.2019 19:31, Max Reitz wrote:
> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>> Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset the
>> aligned-up region in the dirty bitmap, which means that we might not
>> copy some bytes yet assume them copied, which actually leads to
>> producing a corrupted target.
>>
>> So 9adc1cb49af8d forced the dirty bitmap granularity to be
>> request_alignment for the mirror-top filter, so that we never work with
>> unaligned requests. However, forcing large alignment obviously decreases
>> the performance of unaligned requests.
>>
>> This commit provides another solution for the problem: if the unaligned
>> padding is already dirty, we can safely ignore it, as
>> 1. It's dirty, so it will be copied by mirror_iteration anyway.
>> 2. It's dirty, so by skipping it now we don't increase the dirtiness of
>>    the bitmap and therefore don't damage the "synchronicity" of the
>>    write-blocking mirror.
>>
>> If the unaligned padding is not dirty, we just write it; there is no
>> reason to touch the dirty bitmap if we succeed (on failure we'll set the
>> whole region of course, but we lose "synchronicity" on failure anyway).
>>
>> Note: we need to disable the dirty_bitmap, otherwise we will not be able
>> to see in do_sync_target_write the bitmap state as it was before the
>> current operation. We may of course check the dirty bitmap before the
>> operation in bdrv_mirror_top_do_write and remember it, but we don't need
>> an active dirty bitmap for a write-blocking mirror anyway.
>>
>> The new code path is unused until the following commit reverts
>> 9adc1cb49af8d.
>>
>> Suggested-by: Denis V. Lunev <den@openvz.org>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/mirror.c | 39 ++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 38 insertions(+), 1 deletion(-)
>>
>> diff --git a/block/mirror.c b/block/mirror.c
>> index d176bf5920..d192f6a96b 100644
>> --- a/block/mirror.c
>> +++ b/block/mirror.c
>> @@ -1204,6 +1204,39 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
>>                        QEMUIOVector *qiov, int flags)
>>   {
>>       int ret;
>> +    size_t qiov_offset = 0;
>> +
>> +    if (!QEMU_IS_ALIGNED(offset, job->granularity) &&
>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset)) {
>> +            /*
>> +             * Dirty unaligned padding
>> +             * 1. It's already dirty, no damage to "actively_synced" if we just
>> +             *    skip unaligned part.
>> +             * 2. If we copy it, we can't reset corresponding bit in
>> +             *    dirty_bitmap as there may be some "dirty" bytes still not
>> +             *    copied.
>> +             * So, just ignore it.
>> +             */
>> +            qiov_offset = QEMU_ALIGN_UP(offset, job->granularity) - offset;
>> +            if (bytes <= qiov_offset) {
>> +                /* nothing to do after shrink */
>> +                return;
>> +            }
>> +            offset += qiov_offset;
>> +            bytes -= qiov_offset;
>> +    }
>> +
>> +    if (!QEMU_IS_ALIGNED(offset + bytes, job->granularity) &&
>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset + bytes - 1))
>> +    {
>> +        uint64_t tail = (offset + bytes) % job->granularity;
>> +
>> +        if (bytes <= tail) {
>> +            /* nothing to do after shrink */
>> +            return;
>> +        }
>> +        bytes -= tail;
>> +    }
>>   
>>       bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
>>   
> 
> The bdrv_set_dirty_bitmap() in the error case below needs to use the
> original offset/bytes, I suppose.

No, because we shrink a tail only if it is already dirty. And we've locked the
region for the in-flight operation, so nobody can clear the bitmap in the
meantime.

But still, there is something left to do here:

for the non-shrunk tails, if any, we should:
1. align down for the reset
2. align up for the set on failure (sketch below)
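
Roughly like this (just a sketch of what I mean, untested; QEMU_ALIGN_UP and
QEMU_ALIGN_DOWN are the existing qemu macros):

    int64_t bitmap_offset, bitmap_end;

    /*
     * The remaining unaligned tails are clean (dirty ones were shrunk
     * away above), so shrinking the reset range to aligned boundaries
     * loses nothing.
     */
    bitmap_offset = QEMU_ALIGN_UP(offset, job->granularity);
    bitmap_end = QEMU_ALIGN_DOWN(offset + bytes, job->granularity);
    if (bitmap_offset < bitmap_end) {
        bdrv_reset_dirty_bitmap(job->dirty_bitmap, bitmap_offset,
                                bitmap_end - bitmap_offset);
    }

    /* ... the write itself, as in the patch ... */

    if (ret < 0) {
        /* On failure, re-dirty the aligned-up covering area. */
        bitmap_offset = QEMU_ALIGN_DOWN(offset, job->granularity);
        bitmap_end = QEMU_ALIGN_UP(offset + bytes, job->granularity);
        bdrv_set_dirty_bitmap(job->dirty_bitmap, bitmap_offset,
                              bitmap_end - bitmap_offset);
    }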


> 
> Apart from that, looks good to me.
> 
> Max
> 


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-11  8:33     ` Vladimir Sementsov-Ogievskiy
@ 2019-10-11  8:58       ` Max Reitz
  2019-10-11  9:09         ` Vladimir Sementsov-Ogievskiy
  2019-10-11  9:10         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 2 replies; 25+ messages in thread
From: Max Reitz @ 2019-10-11  8:58 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, jsnow, qemu-devel, Denis Lunev



On 11.10.19 10:33, Vladimir Sementsov-Ogievskiy wrote:
> 04.10.2019 19:31, Max Reitz wrote:
>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>> Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset the
>>> aligned-up region in the dirty bitmap, which means that we might not
>>> copy some bytes yet assume them copied, which actually leads to
>>> producing a corrupted target.
>>>
>>> So 9adc1cb49af8d forced the dirty bitmap granularity to be
>>> request_alignment for the mirror-top filter, so that we never work with
>>> unaligned requests. However, forcing large alignment obviously decreases
>>> the performance of unaligned requests.
>>>
>>> This commit provides another solution for the problem: if the unaligned
>>> padding is already dirty, we can safely ignore it, as
>>> 1. It's dirty, so it will be copied by mirror_iteration anyway.
>>> 2. It's dirty, so by skipping it now we don't increase the dirtiness of
>>>    the bitmap and therefore don't damage the "synchronicity" of the
>>>    write-blocking mirror.
>>>
>>> If the unaligned padding is not dirty, we just write it; there is no
>>> reason to touch the dirty bitmap if we succeed (on failure we'll set the
>>> whole region of course, but we lose "synchronicity" on failure anyway).
>>>
>>> Note: we need to disable the dirty_bitmap, otherwise we will not be able
>>> to see in do_sync_target_write the bitmap state as it was before the
>>> current operation. We may of course check the dirty bitmap before the
>>> operation in bdrv_mirror_top_do_write and remember it, but we don't need
>>> an active dirty bitmap for a write-blocking mirror anyway.
>>>
>>> The new code path is unused until the following commit reverts
>>> 9adc1cb49af8d.
>>>
>>> Suggested-by: Denis V. Lunev <den@openvz.org>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   block/mirror.c | 39 ++++++++++++++++++++++++++++++++++++++-
>>>   1 file changed, 38 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/block/mirror.c b/block/mirror.c
>>> index d176bf5920..d192f6a96b 100644
>>> --- a/block/mirror.c
>>> +++ b/block/mirror.c
>>> @@ -1204,6 +1204,39 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
>>>                        QEMUIOVector *qiov, int flags)
>>>   {
>>>       int ret;
>>> +    size_t qiov_offset = 0;
>>> +
>>> +    if (!QEMU_IS_ALIGNED(offset, job->granularity) &&
>>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset)) {
>>> +            /*
>>> +             * Dirty unaligned padding
>>> +             * 1. It's already dirty, no damage to "actively_synced" if we just
>>> +             *    skip unaligned part.
>>> +             * 2. If we copy it, we can't reset corresponding bit in
>>> +             *    dirty_bitmap as there may be some "dirty" bytes still not
>>> +             *    copied.
>>> +             * So, just ignore it.
>>> +             */
>>> +            qiov_offset = QEMU_ALIGN_UP(offset, job->granularity) - offset;
>>> +            if (bytes <= qiov_offset) {
>>> +                /* nothing to do after shrink */
>>> +                return;
>>> +            }
>>> +            offset += qiov_offset;
>>> +            bytes -= qiov_offset;
>>> +    }
>>> +
>>> +    if (!QEMU_IS_ALIGNED(offset + bytes, job->granularity) &&
>>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset + bytes - 1))
>>> +    {
>>> +        uint64_t tail = (offset + bytes) % job->granularity;
>>> +
>>> +        if (bytes <= tail) {
>>> +            /* nothing to do after shrink */
>>> +            return;
>>> +        }
>>> +        bytes -= tail;
>>> +    }
>>>   
>>>       bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
>>>   
>>
>> The bdrv_set_dirty_bitmap() in the error case below needs to use the
>> original offset/bytes, I suppose.
> 
> No, because we shrink a tail only if it is already dirty. And we've locked the
> region for the in-flight operation, so nobody can clear the bitmap in the
> meantime.

True.  But wouldn’t it be simpler to understand if we just used the original
offsets?

> But still, there is something left to do here:
> 
> for the non-shrunk tails, if any, we should:
> 1. align down for the reset
> 2. align up for the set on failure

Well, the align up is done automatically, and I think that’s pretty
self-explanatory.

Max

>>
>> Apart from that, looks good to me.
>>
>> Max
>>
> 
> 




^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-11  8:58       ` Max Reitz
@ 2019-10-11  9:09         ` Vladimir Sementsov-Ogievskiy
  2019-10-11  9:10         ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-11  9:09 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: kwolf, jsnow, qemu-devel, Denis Lunev

11.10.2019 11:58, Max Reitz wrote:
> On 11.10.19 10:33, Vladimir Sementsov-Ogievskiy wrote:
>> 04.10.2019 19:31, Max Reitz wrote:
>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>> Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset the
>>>> aligned-up region in the dirty bitmap, which means that we might not
>>>> copy some bytes yet assume them copied, which actually leads to
>>>> producing a corrupted target.
>>>>
>>>> So 9adc1cb49af8d forced the dirty bitmap granularity to be
>>>> request_alignment for the mirror-top filter, so that we never work with
>>>> unaligned requests. However, forcing large alignment obviously decreases
>>>> the performance of unaligned requests.
>>>>
>>>> This commit provides another solution for the problem: if the unaligned
>>>> padding is already dirty, we can safely ignore it, as
>>>> 1. It's dirty, so it will be copied by mirror_iteration anyway.
>>>> 2. It's dirty, so by skipping it now we don't increase the dirtiness of
>>>>    the bitmap and therefore don't damage the "synchronicity" of the
>>>>    write-blocking mirror.
>>>>
>>>> If the unaligned padding is not dirty, we just write it; there is no
>>>> reason to touch the dirty bitmap if we succeed (on failure we'll set the
>>>> whole region of course, but we lose "synchronicity" on failure anyway).
>>>>
>>>> Note: we need to disable the dirty_bitmap, otherwise we will not be able
>>>> to see in do_sync_target_write the bitmap state as it was before the
>>>> current operation. We may of course check the dirty bitmap before the
>>>> operation in bdrv_mirror_top_do_write and remember it, but we don't need
>>>> an active dirty bitmap for a write-blocking mirror anyway.
>>>>
>>>> The new code path is unused until the following commit reverts
>>>> 9adc1cb49af8d.
>>>>
>>>> Suggested-by: Denis V. Lunev <den@openvz.org>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>    block/mirror.c | 39 ++++++++++++++++++++++++++++++++++++++-
>>>>    1 file changed, 38 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/block/mirror.c b/block/mirror.c
>>>> index d176bf5920..d192f6a96b 100644
>>>> --- a/block/mirror.c
>>>> +++ b/block/mirror.c
>>>> @@ -1204,6 +1204,39 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
>>>>                         QEMUIOVector *qiov, int flags)
>>>>    {
>>>>        int ret;
>>>> +    size_t qiov_offset = 0;
>>>> +
>>>> +    if (!QEMU_IS_ALIGNED(offset, job->granularity) &&
>>>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset)) {
>>>> +            /*
>>>> +             * Dirty unaligned padding
>>>> +             * 1. It's already dirty, no damage to "actively_synced" if we just
>>>> +             *    skip unaligned part.
>>>> +             * 2. If we copy it, we can't reset corresponding bit in
>>>> +             *    dirty_bitmap as there may be some "dirty" bytes still not
>>>> +             *    copied.
>>>> +             * So, just ignore it.
>>>> +             */
>>>> +            qiov_offset = QEMU_ALIGN_UP(offset, job->granularity) - offset;
>>>> +            if (bytes <= qiov_offset) {
>>>> +                /* nothing to do after shrink */
>>>> +                return;
>>>> +            }
>>>> +            offset += qiov_offset;
>>>> +            bytes -= qiov_offset;
>>>> +    }
>>>> +
>>>> +    if (!QEMU_IS_ALIGNED(offset + bytes, job->granularity) &&
>>>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset + bytes - 1))
>>>> +    {
>>>> +        uint64_t tail = (offset + bytes) % job->granularity;
>>>> +
>>>> +        if (bytes <= tail) {
>>>> +            /* nothing to do after shrink */
>>>> +            return;
>>>> +        }
>>>> +        bytes -= tail;
>>>> +    }
>>>>    
>>>>        bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
>>>>    
>>>
>>> The bdrv_set_dirty_bitmap() in the error case below needs to use the
>>> original offset/bytes, I suppose.
>>
>> No, because we shrink a tail only if it is already dirty. And we've locked the
>> region for the in-flight operation, so nobody can clear the bitmap in the
>> meantime.
> 
> True.  But wouldn’t it be simpler to understand if we just used the original
> offsets?
> 
>> But still, there is something left to do here:
>>
>> for the non-shrunk tails, if any, we should:
>> 1. align down for the reset
>> 2. align up for the set on failure
> 
> Well, the align up is done automatically, and I think that’s pretty
> self-explanatory.

A patch to make hbitmap_reset very strict (assert on unaligned ranges, except
for the very end of the bitmap when orig_size is unaligned) has been queued by
John. So, I've made the alignment explicit in v2.
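
For reference, the strict check added there looks roughly like this (quoted
from memory, so treat it as a sketch):

    void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
    {
        uint64_t gran = 1ULL << hb->granularity;

        /* Unaligned ranges are rejected, except for a tail that runs up
         * to an unaligned end of the bitmap. */
        assert(QEMU_IS_ALIGNED(start, gran));
        assert(QEMU_IS_ALIGNED(count, gran) ||
               start + count == hb->orig_size);
        ...
    }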

> 
>>>
>>> Apart from that, looks good to me.
>>>
>>> Max
>>>
>>
>>
> 
> 


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
  2019-10-11  8:58       ` Max Reitz
  2019-10-11  9:09         ` Vladimir Sementsov-Ogievskiy
@ 2019-10-11  9:10         ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 25+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-11  9:10 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: kwolf, jsnow, qemu-devel, Denis Lunev

11.10.2019 11:58, Max Reitz wrote:
> On 11.10.19 10:33, Vladimir Sementsov-Ogievskiy wrote:
>> 04.10.2019 19:31, Max Reitz wrote:
>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>> Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset the
>>>> aligned-up region in the dirty bitmap, which means that we might not
>>>> copy some bytes yet assume them copied, which actually leads to
>>>> producing a corrupted target.
>>>>
>>>> So 9adc1cb49af8d forced the dirty bitmap granularity to be
>>>> request_alignment for the mirror-top filter, so that we never work with
>>>> unaligned requests. However, forcing large alignment obviously decreases
>>>> the performance of unaligned requests.
>>>>
>>>> This commit provides another solution for the problem: if the unaligned
>>>> padding is already dirty, we can safely ignore it, as
>>>> 1. It's dirty, so it will be copied by mirror_iteration anyway.
>>>> 2. It's dirty, so by skipping it now we don't increase the dirtiness of
>>>>    the bitmap and therefore don't damage the "synchronicity" of the
>>>>    write-blocking mirror.
>>>>
>>>> If the unaligned padding is not dirty, we just write it; there is no
>>>> reason to touch the dirty bitmap if we succeed (on failure we'll set the
>>>> whole region of course, but we lose "synchronicity" on failure anyway).
>>>>
>>>> Note: we need to disable the dirty_bitmap, otherwise we will not be able
>>>> to see in do_sync_target_write the bitmap state as it was before the
>>>> current operation. We may of course check the dirty bitmap before the
>>>> operation in bdrv_mirror_top_do_write and remember it, but we don't need
>>>> an active dirty bitmap for a write-blocking mirror anyway.
>>>>
>>>> The new code path is unused until the following commit reverts
>>>> 9adc1cb49af8d.
>>>>
>>>> Suggested-by: Denis V. Lunev <den@openvz.org>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>    block/mirror.c | 39 ++++++++++++++++++++++++++++++++++++++-
>>>>    1 file changed, 38 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/block/mirror.c b/block/mirror.c
>>>> index d176bf5920..d192f6a96b 100644
>>>> --- a/block/mirror.c
>>>> +++ b/block/mirror.c
>>>> @@ -1204,6 +1204,39 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
>>>>                         QEMUIOVector *qiov, int flags)
>>>>    {
>>>>        int ret;
>>>> +    size_t qiov_offset = 0;
>>>> +
>>>> +    if (!QEMU_IS_ALIGNED(offset, job->granularity) &&
>>>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset)) {
>>>> +            /*
>>>> +             * Dirty unaligned padding
>>>> +             * 1. It's already dirty, no damage to "actively_synced" if we just
>>>> +             *    skip unaligned part.
>>>> +             * 2. If we copy it, we can't reset corresponding bit in
>>>> +             *    dirty_bitmap as there may be some "dirty" bytes still not
>>>> +             *    copied.
>>>> +             * So, just ignore it.
>>>> +             */
>>>> +            qiov_offset = QEMU_ALIGN_UP(offset, job->granularity) - offset;
>>>> +            if (bytes <= qiov_offset) {
>>>> +                /* nothing to do after shrink */
>>>> +                return;
>>>> +            }
>>>> +            offset += qiov_offset;
>>>> +            bytes -= qiov_offset;
>>>> +    }
>>>> +
>>>> +    if (!QEMU_IS_ALIGNED(offset + bytes, job->granularity) &&
>>>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset + bytes - 1))
>>>> +    {
>>>> +        uint64_t tail = (offset + bytes) % job->granularity;
>>>> +
>>>> +        if (bytes <= tail) {
>>>> +            /* nothing to do after shrink */
>>>> +            return;
>>>> +        }
>>>> +        bytes -= tail;
>>>> +    }
>>>>    
>>>>        bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
>>>>    
>>>
>>> The bdrv_set_dirty_bitmap() in the error case below needs to use the
>>> original offset/bytes, I suppose.
>>
>> No, because we shrink a tail only if it is already dirty. And we've locked the
>> region for the in-flight operation, so nobody can clear the bitmap in the
>> meantime.
> 
> True.  But wouldn’t it be simpler to understand if we just used the original
> offsets?

Then we'd need to store them... And if we did, I'd prefer to add an assertion
that the shrunk tails are still dirty instead.
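
Something in this direction (just a sketch; orig_offset and orig_bytes are
hypothetical names for the stored pre-shrink values):

    if (ret < 0) {
        /*
         * The shrunk tails were dirty when we skipped them, and the
         * region is locked against the mirror iteration, so they must
         * still be dirty here.
         */
        assert(offset == orig_offset ||
               bdrv_dirty_bitmap_get(job->dirty_bitmap, orig_offset));
        assert(offset + bytes == orig_offset + orig_bytes ||
               bdrv_dirty_bitmap_get(job->dirty_bitmap,
                                     orig_offset + orig_bytes - 1));
        bdrv_set_dirty_bitmap(job->dirty_bitmap, offset, bytes);
    }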

> 
>> But still, there is something left to do here:
>>
>> for the non-shrunk tails, if any, we should:
>> 1. align down for the reset
>> 2. align up for the set on failure
> 
> Well, the align up is done automatically, and I think that’s pretty
> self-explanatory.
> 
> Max
> 
>>>
>>> Apart from that, looks good to me.
>>>
>>> Max
>>>
>>
>>
> 
> 


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2019-10-11  9:16 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-12 15:13 [Qemu-devel] [PATCH 0/4] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
2019-09-12 15:13 ` [Qemu-devel] [PATCH 1/4] block/mirror: simplify do_sync_target_write Vladimir Sementsov-Ogievskiy
2019-10-02 14:57   ` Max Reitz
2019-09-12 15:13 ` [Qemu-devel] [PATCH 2/4] block/block-backend: add blk_co_pwritev_part Vladimir Sementsov-Ogievskiy
2019-10-02 14:57   ` Max Reitz
2019-09-12 15:13 ` [Qemu-devel] [PATCH 3/4] block/mirror: support unaligned write in active mirror Vladimir Sementsov-Ogievskiy
2019-10-02 14:57   ` Max Reitz
2019-10-02 15:03     ` Vladimir Sementsov-Ogievskiy
2019-10-02 15:06       ` Vladimir Sementsov-Ogievskiy
2019-10-02 15:52         ` Max Reitz
2019-10-03  9:34           ` Vladimir Sementsov-Ogievskiy
2019-10-04 12:59             ` Max Reitz
2019-10-04 13:22               ` Vladimir Sementsov-Ogievskiy
2019-10-04 14:48                 ` Max Reitz
2019-10-04 15:04                   ` Vladimir Sementsov-Ogievskiy
2019-10-04 15:27                     ` Max Reitz
2019-10-04 15:38                       ` Vladimir Sementsov-Ogievskiy
2019-10-04 16:31   ` Max Reitz
2019-10-11  8:33     ` Vladimir Sementsov-Ogievskiy
2019-10-11  8:58       ` Max Reitz
2019-10-11  9:09         ` Vladimir Sementsov-Ogievskiy
2019-10-11  9:10         ` Vladimir Sementsov-Ogievskiy
2019-09-12 15:13 ` [Qemu-devel] [PATCH 4/4] Revert "mirror: Only mirror granularity-aligned chunks" Vladimir Sementsov-Ogievskiy
2019-10-04 16:33   ` Max Reitz
2019-10-02  9:53 ` [PATCH 0/4] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).