* [PATCH v2 0/5] active-mirror: support unaligned guest operations
@ 2019-10-11  9:07 Vladimir Sementsov-Ogievskiy
  2019-10-11  9:07 ` [PATCH v2 1/5] hbitmap: handle set/reset with zero length Vladimir Sementsov-Ogievskiy
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-11  9:07 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

Commit 9adc1cb49af8d fixed a bug with guest writes (and discards) that
are unaligned to the dirty bitmap granularity by simply requesting the
corresponding alignment on the mirror-top filter. However, forcing large
alignment obviously decreases the performance of unaligned requests.

So it's time for a new solution, which is in patch 04; patch 05 then
reverts 9adc1cb49af8d.

v2:
01: new fix (do we need it for stable?)
02,03,05: add Max's r-b
04: fix bitmap handling
    improve comments

Vladimir Sementsov-Ogievskiy (5):
  hbitmap: handle set/reset with zero length
  block/mirror: simplify do_sync_target_write
  block/block-backend: add blk_co_pwritev_part
  block/mirror: support unaligned write in active mirror
  Revert "mirror: Only mirror granularity-aligned chunks"

 include/sysemu/block-backend.h |   4 +
 block/block-backend.c          |  17 +++-
 block/mirror.c                 | 181 ++++++++++++++++-----------------
 util/hbitmap.c                 |   8 ++
 4 files changed, 114 insertions(+), 96 deletions(-)

-- 
2.21.0



* [PATCH v2 1/5] hbitmap: handle set/reset with zero length
  2019-10-11  9:07 [PATCH v2 0/5] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
@ 2019-10-11  9:07 ` Vladimir Sementsov-Ogievskiy
  2019-10-18 15:29   ` Max Reitz
  2019-10-11  9:07 ` [PATCH v2 2/5] block/mirror: simplify do_sync_target_write Vladimir Sementsov-Ogievskiy
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-11  9:07 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

Passing a zero length to these functions leads to unpredictable results.
A zero-length set/reset may occur in active-mirror on a zero-length
write (which is unlikely, but not guaranteed never to happen).

Let's just do nothing on a zero-length request.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 util/hbitmap.c | 8 ++++++++
 1 file changed, 8 insertions(+)
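
To illustrate the "unpredictable results": with count == 0, the
pre-existing expression last = start + count - 1 underflows. A minimal
standalone sketch of the problem (hypothetical illustration, not part of
the patch):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t start = 0, count = 0;
        /* same expression as at the top of hbitmap_set()/hbitmap_reset() */
        uint64_t last = start + count - 1;  /* wraps to UINT64_MAX */

        /* without the early return, the code would go on to operate on
         * the bogus range [start, start - 1] */
        printf("last = %llu\n", (unsigned long long)last);
        return 0;
    }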

diff --git a/util/hbitmap.c b/util/hbitmap.c
index fd44c897ab..86b0231046 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -387,6 +387,10 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count)
     uint64_t first, n;
     uint64_t last = start + count - 1;
 
+    if (count == 0) {
+        return;
+    }
+
     trace_hbitmap_set(hb, start, count,
                       start >> hb->granularity, last >> hb->granularity);
 
@@ -477,6 +481,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
     uint64_t first;
     uint64_t last = start + count - 1;
 
+    if (count == 0) {
+        return;
+    }
+
     trace_hbitmap_reset(hb, start, count,
                         start >> hb->granularity, last >> hb->granularity);
 
-- 
2.21.0



* [PATCH v2 2/5] block/mirror: simplify do_sync_target_write
  2019-10-11  9:07 [PATCH v2 0/5] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
  2019-10-11  9:07 ` [PATCH v2 1/5] hbitmap: handle set/reset with zero length Vladimir Sementsov-Ogievskiy
@ 2019-10-11  9:07 ` Vladimir Sementsov-Ogievskiy
  2019-10-11  9:07 ` [PATCH v2 3/5] block/block-backend: add blk_co_pwritev_part Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-11  9:07 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

do_sync_target_write is called from bdrv_mirror_top_do_write after a
write/discard operation, all inside active_write/active_write_settle,
which protects us from the mirror iteration. So the whole area is
certainly dirty; there is no reason to examine the dirty bitmap.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 block/mirror.c | 95 +++++++++++++++-----------------------------------
 1 file changed, 28 insertions(+), 67 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index fe984efb90..1ae57a5131 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1181,84 +1181,45 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
                      uint64_t offset, uint64_t bytes,
                      QEMUIOVector *qiov, int flags)
 {
-    QEMUIOVector target_qiov;
-    uint64_t dirty_offset = offset;
-    uint64_t dirty_bytes;
-
-    if (qiov) {
-        qemu_iovec_init(&target_qiov, qiov->niov);
-    }
-
-    while (true) {
-        bool valid_area;
-        int ret;
-
-        bdrv_dirty_bitmap_lock(job->dirty_bitmap);
-        dirty_bytes = MIN(offset + bytes - dirty_offset, INT_MAX);
-        valid_area = bdrv_dirty_bitmap_next_dirty_area(job->dirty_bitmap,
-                                                       &dirty_offset,
-                                                       &dirty_bytes);
-        if (!valid_area) {
-            bdrv_dirty_bitmap_unlock(job->dirty_bitmap);
-            break;
-        }
+    int ret;
 
-        bdrv_reset_dirty_bitmap_locked(job->dirty_bitmap,
-                                       dirty_offset, dirty_bytes);
-        bdrv_dirty_bitmap_unlock(job->dirty_bitmap);
+    bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
 
-        job_progress_increase_remaining(&job->common.job, dirty_bytes);
+    job_progress_increase_remaining(&job->common.job, bytes);
 
-        assert(dirty_offset - offset <= SIZE_MAX);
-        if (qiov) {
-            qemu_iovec_reset(&target_qiov);
-            qemu_iovec_concat(&target_qiov, qiov,
-                              dirty_offset - offset, dirty_bytes);
-        }
-
-        switch (method) {
-        case MIRROR_METHOD_COPY:
-            ret = blk_co_pwritev(job->target, dirty_offset, dirty_bytes,
-                                 qiov ? &target_qiov : NULL, flags);
-            break;
+    switch (method) {
+    case MIRROR_METHOD_COPY:
+        ret = blk_co_pwritev(job->target, offset, bytes, qiov, flags);
+        break;
 
-        case MIRROR_METHOD_ZERO:
-            assert(!qiov);
-            ret = blk_co_pwrite_zeroes(job->target, dirty_offset, dirty_bytes,
-                                       flags);
-            break;
+    case MIRROR_METHOD_ZERO:
+        assert(!qiov);
+        ret = blk_co_pwrite_zeroes(job->target, offset, bytes, flags);
+        break;
 
-        case MIRROR_METHOD_DISCARD:
-            assert(!qiov);
-            ret = blk_co_pdiscard(job->target, dirty_offset, dirty_bytes);
-            break;
+    case MIRROR_METHOD_DISCARD:
+        assert(!qiov);
+        ret = blk_co_pdiscard(job->target, offset, bytes);
+        break;
 
-        default:
-            abort();
-        }
+    default:
+        abort();
+    }
 
-        if (ret >= 0) {
-            job_progress_update(&job->common.job, dirty_bytes);
-        } else {
-            BlockErrorAction action;
+    if (ret >= 0) {
+        job_progress_update(&job->common.job, bytes);
+    } else {
+        BlockErrorAction action;
 
-            bdrv_set_dirty_bitmap(job->dirty_bitmap, dirty_offset, dirty_bytes);
-            job->actively_synced = false;
+        bdrv_set_dirty_bitmap(job->dirty_bitmap, offset, bytes);
+        job->actively_synced = false;
 
-            action = mirror_error_action(job, false, -ret);
-            if (action == BLOCK_ERROR_ACTION_REPORT) {
-                if (!job->ret) {
-                    job->ret = ret;
-                }
-                break;
+        action = mirror_error_action(job, false, -ret);
+        if (action == BLOCK_ERROR_ACTION_REPORT) {
+            if (!job->ret) {
+                job->ret = ret;
             }
         }
-
-        dirty_offset += dirty_bytes;
-    }
-
-    if (qiov) {
-        qemu_iovec_destroy(&target_qiov);
     }
 }
 
-- 
2.21.0



* [PATCH v2 3/5] block/block-backend: add blk_co_pwritev_part
  2019-10-11  9:07 [PATCH v2 0/5] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
  2019-10-11  9:07 ` [PATCH v2 1/5] hbitmap: handle set/reset with zero length Vladimir Sementsov-Ogievskiy
  2019-10-11  9:07 ` [PATCH v2 2/5] block/mirror: simplify do_sync_target_write Vladimir Sementsov-Ogievskiy
@ 2019-10-11  9:07 ` Vladimir Sementsov-Ogievskiy
  2019-10-11  9:07 ` [PATCH v2 4/5] block/mirror: support unaligned write in active mirror Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-11  9:07 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

Add a blk write function with a qiov_offset parameter. It's needed for
the following commit.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 include/sysemu/block-backend.h |  4 ++++
 block/block-backend.c          | 17 +++++++++++++----
 2 files changed, 17 insertions(+), 4 deletions(-)
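
The point of the qiov_offset parameter is that a caller can write a
slice of an existing qiov without building a temporary sub-qiov via
qemu_iovec_concat(). A minimal caller sketch (hypothetical helper name,
assuming QEMU's usual includes and coroutine context; not part of the
patch):

    /*
     * Write the portion of `qiov` that starts `skip` bytes in, to
     * offset + skip on the target, without allocating a sub-qiov.
     */
    static int coroutine_fn write_slice(BlockBackend *target, int64_t offset,
                                        unsigned int bytes, QEMUIOVector *qiov,
                                        size_t skip)
    {
        return blk_co_pwritev_part(target, offset + skip, bytes - skip,
                                   qiov, skip, 0);
    }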

diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
index 368d53af77..73f2cef7fe 100644
--- a/include/sysemu/block-backend.h
+++ b/include/sysemu/block-backend.h
@@ -121,6 +121,10 @@ void blk_set_dev_ops(BlockBackend *blk, const BlockDevOps *ops, void *opaque);
 int coroutine_fn blk_co_preadv(BlockBackend *blk, int64_t offset,
                                unsigned int bytes, QEMUIOVector *qiov,
                                BdrvRequestFlags flags);
+int coroutine_fn blk_co_pwritev_part(BlockBackend *blk, int64_t offset,
+                                     unsigned int bytes,
+                                     QEMUIOVector *qiov, size_t qiov_offset,
+                                     BdrvRequestFlags flags);
 int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
                                unsigned int bytes, QEMUIOVector *qiov,
                                BdrvRequestFlags flags);
diff --git a/block/block-backend.c b/block/block-backend.c
index 1c605d5444..b3d00478af 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1176,9 +1176,10 @@ int coroutine_fn blk_co_preadv(BlockBackend *blk, int64_t offset,
     return ret;
 }
 
-int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
-                                unsigned int bytes, QEMUIOVector *qiov,
-                                BdrvRequestFlags flags)
+int coroutine_fn blk_co_pwritev_part(BlockBackend *blk, int64_t offset,
+                                     unsigned int bytes,
+                                     QEMUIOVector *qiov, size_t qiov_offset,
+                                     BdrvRequestFlags flags)
 {
     int ret;
     BlockDriverState *bs;
@@ -1205,11 +1206,19 @@ int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
         flags |= BDRV_REQ_FUA;
     }
 
-    ret = bdrv_co_pwritev(blk->root, offset, bytes, qiov, flags);
+    ret = bdrv_co_pwritev_part(blk->root, offset, bytes, qiov, qiov_offset,
+                               flags);
     bdrv_dec_in_flight(bs);
     return ret;
 }
 
+int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
+                                unsigned int bytes, QEMUIOVector *qiov,
+                                BdrvRequestFlags flags)
+{
+    return blk_co_pwritev_part(blk, offset, bytes, qiov, 0, flags);
+}
+
 typedef struct BlkRwCo {
     BlockBackend *blk;
     int64_t offset;
-- 
2.21.0



* [PATCH v2 4/5] block/mirror: support unaligned write in active mirror
  2019-10-11  9:07 [PATCH v2 0/5] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2019-10-11  9:07 ` [PATCH v2 3/5] block/block-backend: add blk_co_pwritev_part Vladimir Sementsov-Ogievskiy
@ 2019-10-11  9:07 ` Vladimir Sementsov-Ogievskiy
  2019-10-18 15:51   ` Max Reitz
  2019-10-11  9:07 ` [PATCH v2 5/5] Revert "mirror: Only mirror granularity-aligned chunks" Vladimir Sementsov-Ogievskiy
  2019-10-18 15:54 ` [PATCH v2 0/5] active-mirror: support unaligned guest operations Max Reitz
  5 siblings, 1 reply; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-11  9:07 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset an
aligned-up region in the dirty bitmap, which means that we might not
copy some bytes yet assume them copied, which actually leads to
producing a corrupted target.

So 9adc1cb49af8d forced the mirror-top filter's request_alignment to be
the dirty bitmap granularity, so that we never work with unaligned
requests. However, forcing large alignment obviously decreases the
performance of unaligned requests.

This commit provides another solution to the problem: if the unaligned
padding is already dirty, we can safely ignore it, as
1. It's dirty, so it will be copied by mirror_iteration anyway.
2. It's dirty, so by skipping it now we don't increase the dirtiness of
   the bitmap and therefore don't damage the "synchronicity" of the
   write-blocking mirror.

If the unaligned padding is not dirty, we just write it; there is no
reason to touch the dirty bitmap if we succeed (on failure we'll set
the whole region, of course, but we lose "synchronicity" on failure
anyway).

Note: we need to disable the dirty_bitmap, otherwise in
do_sync_target_write we would not be able to see the bitmap state from
before the current operation. We could, of course, check the dirty
bitmap before the operation in bdrv_mirror_top_do_write and remember
it, but we don't need an active dirty bitmap for the write-blocking
mirror anyway.

The new code path is unused until the following commit reverts
9adc1cb49af8d.

Suggested-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/mirror.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 68 insertions(+), 3 deletions(-)
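
To make the head/tail clamping concrete, here is a worked standalone
example (hypothetical numbers; ALIGN_UP/ALIGN_DOWN are local stand-ins
for QEMU_ALIGN_UP/QEMU_ALIGN_DOWN, and we assume both paddings are
dirty, so both branches in the patch fire):

    #include <stdint.h>
    #include <stdio.h>

    #define ALIGN_DOWN(n, m) ((n) / (m) * (m))
    #define ALIGN_UP(n, m)   ALIGN_DOWN((n) + (m) - 1, (m))

    int main(void)
    {
        uint64_t granularity = 65536;            /* example job->granularity */
        uint64_t offset = 4096, bytes = 204800;  /* unaligned guest write */

        /* head padding is dirty: skip it (first branch in the patch) */
        uint64_t qiov_offset = ALIGN_UP(offset, granularity) - offset;
        offset += qiov_offset;                   /* 65536 */
        bytes -= qiov_offset;                    /* 143360 */

        /* tail padding is dirty: drop it (second branch) */
        uint64_t tail = (offset + bytes) % granularity;
        bytes -= tail;                           /* 131072 */

        /* only fully covered granules may be reset in the bitmap */
        uint64_t bitmap_offset = ALIGN_UP(offset, granularity);
        uint64_t bitmap_end = ALIGN_DOWN(offset + bytes, granularity);

        /* prints: write [65536, +131072), reset bitmap [65536, 196608) */
        printf("write [%llu, +%llu), reset bitmap [%llu, %llu)\n",
               (unsigned long long)offset, (unsigned long long)bytes,
               (unsigned long long)bitmap_offset,
               (unsigned long long)bitmap_end);
        return 0;
    }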

diff --git a/block/mirror.c b/block/mirror.c
index 1ae57a5131..1ed1d79ff8 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1182,14 +1182,67 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
                      QEMUIOVector *qiov, int flags)
 {
     int ret;
+    size_t qiov_offset = 0;
+    int64_t bitmap_offset, bitmap_end;
 
-    bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
+    if (!QEMU_IS_ALIGNED(offset, job->granularity) &&
+        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset))
+    {
+        /*
+         * Dirty unaligned padding: ignore it.
+         *
+         * Reasoning:
+         * 1. If we copy it, we can't reset the corresponding bit in the
+         *    dirty_bitmap, as there may be some "dirty" bytes still not
+         *    copied.
+         * 2. It's already dirty, so skipping it does not diverge the
+         *    mirror's progress.
+         *
+         * Note that, because of this, a guest write may contribute nothing
+         * to mirror convergence; that is not bad, as we have a background
+         * mirroring process. If, under some bad circumstances (high guest
+         * I/O load), the background process starves, we will not converge
+         * anyway, even if each write contributed, as the guest is not
+         * guaranteed to rewrite the whole disk.
+         */
+        qiov_offset = QEMU_ALIGN_UP(offset, job->granularity) - offset;
+        if (bytes <= qiov_offset) {
+            /* nothing to do after shrink */
+            return;
+        }
+        offset += qiov_offset;
+        bytes -= qiov_offset;
+    }
+
+    if (!QEMU_IS_ALIGNED(offset + bytes, job->granularity) &&
+        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset + bytes - 1))
+    {
+        uint64_t tail = (offset + bytes) % job->granularity;
+
+        if (bytes <= tail) {
+            /* nothing to do after shrink */
+            return;
+        }
+        bytes -= tail;
+    }
+
+    /*
+     * Tails are either clean or shrunk, so for bitmap resetting we can
+     * safely align the range down.
+     */
+    bitmap_offset = QEMU_ALIGN_UP(offset, job->granularity);
+    bitmap_end = QEMU_ALIGN_DOWN(offset + bytes, job->granularity);
+    if (bitmap_offset < bitmap_end) {
+        bdrv_reset_dirty_bitmap(job->dirty_bitmap, bitmap_offset,
+                                bitmap_end - bitmap_offset);
+    }
 
     job_progress_increase_remaining(&job->common.job, bytes);
 
     switch (method) {
     case MIRROR_METHOD_COPY:
-        ret = blk_co_pwritev(job->target, offset, bytes, qiov, flags);
+        ret = blk_co_pwritev_part(job->target, offset, bytes,
+                                  qiov, qiov_offset, flags);
         break;
 
     case MIRROR_METHOD_ZERO:
@@ -1211,7 +1264,16 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
     } else {
         BlockErrorAction action;
 
-        bdrv_set_dirty_bitmap(job->dirty_bitmap, offset, bytes);
+        /*
+         * We failed, so we should mark the whole area dirty, aligned up.
+         * Note that we don't care about shrunk tails, if any: they were
+         * dirty at function start, and they must still be dirty, as we've
+         * locked the region for the in-flight op.
+         */
+        bitmap_offset = QEMU_ALIGN_DOWN(offset, job->granularity);
+        bitmap_end = QEMU_ALIGN_UP(offset + bytes, job->granularity);
+        bdrv_set_dirty_bitmap(job->dirty_bitmap, bitmap_offset,
+                              bitmap_end - bitmap_offset);
         job->actively_synced = false;
 
         action = mirror_error_action(job, false, -ret);
@@ -1618,6 +1680,9 @@ static BlockJob *mirror_start_job(
     if (!s->dirty_bitmap) {
         goto fail;
     }
+    if (s->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING) {
+        bdrv_disable_dirty_bitmap(s->dirty_bitmap);
+    }
 
     ret = block_job_add_bdrv(&s->common, "source", bs, 0,
                              BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE |
-- 
2.21.0



* [PATCH v2 5/5] Revert "mirror: Only mirror granularity-aligned chunks"
  2019-10-11  9:07 [PATCH v2 0/5] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2019-10-11  9:07 ` [PATCH v2 4/5] block/mirror: support unaligned write in active mirror Vladimir Sementsov-Ogievskiy
@ 2019-10-11  9:07 ` Vladimir Sementsov-Ogievskiy
  2019-10-18 15:54 ` [PATCH v2 0/5] active-mirror: support unaligned guest operations Max Reitz
  5 siblings, 0 replies; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-11  9:07 UTC (permalink / raw)
  To: qemu-block; +Cc: fam, kwolf, vsementsov, qemu-devel, mreitz, den, jsnow

This reverts commit 9adc1cb49af8d4e54f57980b1eed5c0a4b2dafa6
    ("mirror: Only mirror granularity-aligned chunks").

Since the previous commit, unaligned chunks are supported by
do_sync_target_write.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
 block/mirror.c | 29 -----------------------------
 1 file changed, 29 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 1ed1d79ff8..eecb2d5793 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1488,15 +1488,6 @@ static void bdrv_mirror_top_child_perm(BlockDriverState *bs, BdrvChild *c,
     *nshared = BLK_PERM_ALL;
 }
 
-static void bdrv_mirror_top_refresh_limits(BlockDriverState *bs, Error **errp)
-{
-    MirrorBDSOpaque *s = bs->opaque;
-
-    if (s && s->job && s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING) {
-        bs->bl.request_alignment = s->job->granularity;
-    }
-}
-
 /* Dummy node that provides consistent read to its users without requiring it
  * from its backing file and that allows writes on the backing file chain. */
 static BlockDriver bdrv_mirror_top = {
@@ -1509,7 +1500,6 @@ static BlockDriver bdrv_mirror_top = {
     .bdrv_co_block_status       = bdrv_co_block_status_from_backing,
     .bdrv_refresh_filename      = bdrv_mirror_top_refresh_filename,
     .bdrv_child_perm            = bdrv_mirror_top_child_perm,
-    .bdrv_refresh_limits        = bdrv_mirror_top_refresh_limits,
 };
 
 static BlockJob *mirror_start_job(
@@ -1657,25 +1647,6 @@ static BlockJob *mirror_start_job(
         s->should_complete = true;
     }
 
-    /*
-     * Must be called before we start tracking writes, but after
-     *
-     *     ((MirrorBlockJob *)
-     *         ((MirrorBDSOpaque *)
-     *             mirror_top_bs->opaque
-     *         )->job
-     *     )->copy_mode
-     *
-     * has the correct value.
-     * (We start tracking writes as of the following
-     * bdrv_create_dirty_bitmap() call.)
-     */
-    bdrv_refresh_limits(mirror_top_bs, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-        goto fail;
-    }
-
     s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
     if (!s->dirty_bitmap) {
         goto fail;
-- 
2.21.0



* Re: [PATCH v2 1/5] hbitmap: handle set/reset with zero length
  2019-10-11  9:07 ` [PATCH v2 1/5] hbitmap: handle set/reset with zero length Vladimir Sementsov-Ogievskiy
@ 2019-10-18 15:29   ` Max Reitz
  0 siblings, 0 replies; 10+ messages in thread
From: Max Reitz @ 2019-10-18 15:29 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, jsnow, qemu-devel, den


On 11.10.19 11:07, Vladimir Sementsov-Ogievskiy wrote:
> Passing a zero length to these functions leads to unpredictable results.
> A zero-length set/reset may occur in active-mirror on a zero-length
> write (which is unlikely, but not guaranteed never to happen).
> 
> Let's just do nothing on a zero-length request.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  util/hbitmap.c | 8 ++++++++
>  1 file changed, 8 insertions(+)

Reviewed-by: Max Reitz <mreitz@redhat.com>


* Re: [PATCH v2 4/5] block/mirror: support unaligned write in active mirror
  2019-10-11  9:07 ` [PATCH v2 4/5] block/mirror: support unaligned write in active mirror Vladimir Sementsov-Ogievskiy
@ 2019-10-18 15:51   ` Max Reitz
  0 siblings, 0 replies; 10+ messages in thread
From: Max Reitz @ 2019-10-18 15:51 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, jsnow, qemu-devel, den


On 11.10.19 11:07, Vladimir Sementsov-Ogievskiy wrote:
> Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset an
> aligned-up region in the dirty bitmap, which means that we might not
> copy some bytes yet assume them copied, which actually leads to
> producing a corrupted target.
> 
> So 9adc1cb49af8d forced the mirror-top filter's request_alignment to be
> the dirty bitmap granularity, so that we never work with unaligned
> requests. However, forcing large alignment obviously decreases the
> performance of unaligned requests.
> 
> This commit provides another solution to the problem: if the unaligned
> padding is already dirty, we can safely ignore it, as
> 1. It's dirty, so it will be copied by mirror_iteration anyway.
> 2. It's dirty, so by skipping it now we don't increase the dirtiness of
>    the bitmap and therefore don't damage the "synchronicity" of the
>    write-blocking mirror.
> 
> If the unaligned padding is not dirty, we just write it; there is no
> reason to touch the dirty bitmap if we succeed (on failure we'll set
> the whole region, of course, but we lose "synchronicity" on failure
> anyway).
> 
> Note: we need to disable the dirty_bitmap, otherwise in
> do_sync_target_write we would not be able to see the bitmap state from
> before the current operation. We could, of course, check the dirty
> bitmap before the operation in bdrv_mirror_top_do_write and remember
> it, but we don't need an active dirty bitmap for the write-blocking
> mirror anyway.
> 
> The new code path is unused until the following commit reverts
> 9adc1cb49af8d.
> 
> Suggested-by: Denis V. Lunev <den@openvz.org>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/mirror.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 68 insertions(+), 3 deletions(-)

Reviewed-by: Max Reitz <mreitz@redhat.com>


* Re: [PATCH v2 0/5] active-mirror: support unaligned guest operations
  2019-10-11  9:07 [PATCH v2 0/5] active-mirror: support unaligned guest operations Vladimir Sementsov-Ogievskiy
                   ` (4 preceding siblings ...)
  2019-10-11  9:07 ` [PATCH v2 5/5] Revert "mirror: Only mirror granularity-aligned chunks" Vladimir Sementsov-Ogievskiy
@ 2019-10-18 15:54 ` Max Reitz
  2019-10-18 16:11   ` Vladimir Sementsov-Ogievskiy
  5 siblings, 1 reply; 10+ messages in thread
From: Max Reitz @ 2019-10-18 15:54 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: fam, kwolf, jsnow, qemu-devel, den


On 11.10.19 11:07, Vladimir Sementsov-Ogievskiy wrote:
> Commit 9adc1cb49af8d fixed a bug with guest writes (and discards) that
> are unaligned to the dirty bitmap granularity by simply requesting the
> corresponding alignment on the mirror-top filter. However, forcing large
> alignment obviously decreases the performance of unaligned requests.
> 
> So it's time for a new solution, which is in patch 04; patch 05 then
> reverts 9adc1cb49af8d.
> 
> v2:
> 01: new fix (do we need it for stable?)

I don’t know? :-)

I’ll just add the stable tag for good measure, I suppose it can’t hurt.

> 02,03,05: add Max's r-b
> 04: fix bitmap handling
>     improve comments
> 
> Vladimir Sementsov-Ogievskiy (5):
>   hbitmap: handle set/reset with zero length
>   block/mirror: simplify do_sync_target_write
>   block/block-backend: add blk_co_pwritev_part
>   block/mirror: support unaligned write in active mirror
>   Revert "mirror: Only mirror granularity-aligned chunks"
> 
>  include/sysemu/block-backend.h |   4 +
>  block/block-backend.c          |  17 +++-
>  block/mirror.c                 | 181 ++++++++++++++++-----------------
>  util/hbitmap.c                 |   8 ++
>  4 files changed, 114 insertions(+), 96 deletions(-)

Thanks, applied to my block branch:

https://git.xanclic.moe/XanClic/qemu/commits/branch/block

Max


* Re: [PATCH v2 0/5] active-mirror: support unaligned guest operations
  2019-10-18 15:54 ` [PATCH v2 0/5] active-mirror: support unaligned guest operations Max Reitz
@ 2019-10-18 16:11   ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 10+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2019-10-18 16:11 UTC (permalink / raw)
  To: Max Reitz, qemu-block; +Cc: fam, kwolf, jsnow, qemu-devel, Denis Lunev

18.10.2019 18:54, Max Reitz wrote:
> On 11.10.19 11:07, Vladimir Sementsov-Ogievskiy wrote:
>> Commit 9adc1cb49af8d fixed a bug with guest writes (and discards) that
>> are unaligned to the dirty bitmap granularity by simply requesting the
>> corresponding alignment on the mirror-top filter. However, forcing large
>> alignment obviously decreases the performance of unaligned requests.
>>
>> So it's time for a new solution, which is in patch 04; patch 05 then
>> reverts 9adc1cb49af8d.
>>
>> v2:
>> 01: new fix (do we need it for stable?)
> 
> I don’t know? :-)
> 
> I’ll just add the stable tag for good measure, I suppose it can’t hurt.
> 
>> 02,03,05: add Max's r-b
>> 04: fix bitmap handling
>>      improve comments
>>
>> Vladimir Sementsov-Ogievskiy (5):
>>    hbitmap: handle set/reset with zero length
>>    block/mirror: simplify do_sync_target_write
>>    block/block-backend: add blk_co_pwritev_part
>>    block/mirror: support unaligned write in active mirror
>>    Revert "mirror: Only mirror granularity-aligned chunks"
>>
>>   include/sysemu/block-backend.h |   4 +
>>   block/block-backend.c          |  17 +++-
>>   block/mirror.c                 | 181 ++++++++++++++++-----------------
>>   util/hbitmap.c                 |   8 ++
>>   4 files changed, 114 insertions(+), 96 deletions(-)
> 
> Thanks, applied to my block branch:
> 
> https://git.xanclic.moe/XanClic/qemu/commits/branch/block
> 
> Max
> 

Thank you!

-- 
Best regards,
Vladimir
