* [Qemu-devel] [PATCH 0/4] block: Mirror discarded sectors
@ 2015-05-05 12:46 Fam Zheng
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard Fam Zheng
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Fam Zheng @ 2015-05-05 12:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, qemu-block, Stefan Hajnoczi, pbonzini, jsnow, wangxiaolong

This fixes the mirror assert failure reported by wangxiaolong:

https://lists.gnu.org/archive/html/qemu-devel/2015-04/msg04458.html

The direct cause is that the hbitmap code couldn't handle bits being unset
*after* the iterator's current position. We could fix that, but the
bdrv_reset_dirty() call itself is more questionable:

Before this series, if the guest discarded some sectors during migration, it
could see different data after moving to the destination side, depending on the
block backends of the source and the destination. This is IMO worse than
mirroring what is actually read, as done in this series, because we don't know
what the guest is doing.

For example, if a guest first issues WRITE SAME to wipe out an area and then
issues UNMAP to discard it, just to get rid of some sensitive data completely,
we may miss both operations and leave stale data on the destination image.


Fam Zheng (4):
  block: Fix dirty bitmap in bdrv_co_discard
  block: Remove bdrv_reset_dirty
  qemu-iotests: Make block job methods common
  qemu-iotests: Add test case for mirror with unmap

 block.c                       | 12 --------
 block/io.c                    |  4 +--
 include/block/block_int.h     |  2 --
 tests/qemu-iotests/041        | 66 ++++++++++---------------------------------
 tests/qemu-iotests/131        | 59 ++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/131.out    |  5 ++++
 tests/qemu-iotests/group      |  1 +
 tests/qemu-iotests/iotests.py | 28 ++++++++++++++++++
 8 files changed, 110 insertions(+), 67 deletions(-)
 create mode 100644 tests/qemu-iotests/131
 create mode 100644 tests/qemu-iotests/131.out

-- 
1.9.3

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard
  2015-05-05 12:46 [Qemu-devel] [PATCH 0/4] block: Mirror discarded sectors Fam Zheng
@ 2015-05-05 12:46 ` Fam Zheng
  2015-05-05 13:06   ` Paolo Bonzini
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 2/4] block: Remove bdrv_reset_dirty Fam Zheng
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Fam Zheng @ 2015-05-05 12:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, qemu-block, Stefan Hajnoczi, pbonzini, jsnow, wangxiaolong

Globally unsetting dirty bits on discard is not really correct. The discard
may zero out sectors (depending on can_write_zeroes_with_unmap), so we should
replicate this change to the destination side to make sure that the guest sees
the same data.

Calling bdrv_reset_dirty also troubles the mirror job, because the hbitmap
iterator doesn't expect bits to be unset after its current position.

So let's do it the opposite way, which fixes both problems: set the dirty bits
if we are about to discard the sectors.

Reported-by: wangxiaolong@ucloud.cn
Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block/io.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/io.c b/block/io.c
index 1ce62c4..809688b 100644
--- a/block/io.c
+++ b/block/io.c
@@ -2343,8 +2343,6 @@ int coroutine_fn bdrv_co_discard(BlockDriverState *bs, int64_t sector_num,
         return -EROFS;
     }
 
-    bdrv_reset_dirty(bs, sector_num, nb_sectors);
-
     /* Do nothing if disabled.  */
     if (!(bs->open_flags & BDRV_O_UNMAP)) {
         return 0;
@@ -2354,6 +2352,8 @@ int coroutine_fn bdrv_co_discard(BlockDriverState *bs, int64_t sector_num,
         return 0;
     }
 
+    bdrv_set_dirty(bs, sector_num, nb_sectors);
+
     max_discard = MIN_NON_ZERO(bs->bl.max_discard, BDRV_REQUEST_MAX_SECTORS);
     while (nb_sectors > 0) {
         int ret;
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [Qemu-devel] [PATCH 2/4] block: Remove bdrv_reset_dirty
  2015-05-05 12:46 [Qemu-devel] [PATCH 0/4] block: Mirror discarded sectors Fam Zheng
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard Fam Zheng
@ 2015-05-05 12:46 ` Fam Zheng
  2015-05-05 21:57   ` Eric Blake
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 3/4] qemu-iotests: Make block job methods common Fam Zheng
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 4/4] qemu-iotests: Add test case for mirror with unmap Fam Zheng
  3 siblings, 1 reply; 15+ messages in thread
From: Fam Zheng @ 2015-05-05 12:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, qemu-block, Stefan Hajnoczi, pbonzini, jsnow, wangxiaolong

Using this function woule always be wrong because a dirty bitmap must
have a specific owner that consumes the dirty bits and calls
bdrv_reset_dirty_bitmap().

Remove the unused function to avoid future misuse.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block.c                   | 12 ------------
 include/block/block_int.h |  2 --
 2 files changed, 14 deletions(-)

diff --git a/block.c b/block.c
index 7904098..511e13c 100644
--- a/block.c
+++ b/block.c
@@ -3324,18 +3324,6 @@ void bdrv_set_dirty(BlockDriverState *bs, int64_t cur_sector,
     }
 }
 
-void bdrv_reset_dirty(BlockDriverState *bs, int64_t cur_sector,
-                      int nr_sectors)
-{
-    BdrvDirtyBitmap *bitmap;
-    QLIST_FOREACH(bitmap, &bs->dirty_bitmaps, list) {
-        if (!bdrv_dirty_bitmap_enabled(bitmap)) {
-            continue;
-        }
-        hbitmap_reset(bitmap->bitmap, cur_sector, nr_sectors);
-    }
-}
-
 /**
  * Advance an HBitmapIter to an arbitrary offset.
  */
diff --git a/include/block/block_int.h b/include/block/block_int.h
index db29b74..aaed2fa 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -635,7 +635,5 @@ bool blk_dev_is_medium_locked(BlockBackend *blk);
 void blk_dev_resize_cb(BlockBackend *blk);
 
 void bdrv_set_dirty(BlockDriverState *bs, int64_t cur_sector, int nr_sectors);
-void bdrv_reset_dirty(BlockDriverState *bs, int64_t cur_sector,
-                      int nr_sectors);
 
 #endif /* BLOCK_INT_H */
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [Qemu-devel] [PATCH 3/4] qemu-iotests: Make block job methods common
  2015-05-05 12:46 [Qemu-devel] [PATCH 0/4] block: Mirror discarded sectors Fam Zheng
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard Fam Zheng
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 2/4] block: Remove bdrv_reset_dirty Fam Zheng
@ 2015-05-05 12:46 ` Fam Zheng
  2015-05-05 22:17   ` John Snow
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 4/4] qemu-iotests: Add test case for mirror with unmap Fam Zheng
  3 siblings, 1 reply; 15+ messages in thread
From: Fam Zheng @ 2015-05-05 12:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, qemu-block, Stefan Hajnoczi, pbonzini, jsnow, wangxiaolong

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 tests/qemu-iotests/041        | 66 ++++++++++---------------------------------
 tests/qemu-iotests/iotests.py | 28 ++++++++++++++++++
 2 files changed, 43 insertions(+), 51 deletions(-)

diff --git a/tests/qemu-iotests/041 b/tests/qemu-iotests/041
index 59a8f73..3d46ed7 100755
--- a/tests/qemu-iotests/041
+++ b/tests/qemu-iotests/041
@@ -34,38 +34,8 @@ quorum_img3 = os.path.join(iotests.test_dir, 'quorum3.img')
 quorum_repair_img = os.path.join(iotests.test_dir, 'quorum_repair.img')
 quorum_snapshot_file = os.path.join(iotests.test_dir, 'quorum_snapshot.img')
 
-class ImageMirroringTestCase(iotests.QMPTestCase):
-    '''Abstract base class for image mirroring test cases'''
 
-    def wait_ready(self, drive='drive0'):
-        '''Wait until a block job BLOCK_JOB_READY event'''
-        ready = False
-        while not ready:
-            for event in self.vm.get_qmp_events(wait=True):
-                if event['event'] == 'BLOCK_JOB_READY':
-                    self.assert_qmp(event, 'data/type', 'mirror')
-                    self.assert_qmp(event, 'data/device', drive)
-                    ready = True
-
-    def wait_ready_and_cancel(self, drive='drive0'):
-        self.wait_ready(drive=drive)
-        event = self.cancel_and_wait(drive=drive)
-        self.assertEquals(event['event'], 'BLOCK_JOB_COMPLETED')
-        self.assert_qmp(event, 'data/type', 'mirror')
-        self.assert_qmp(event, 'data/offset', event['data']['len'])
-
-    def complete_and_wait(self, drive='drive0', wait_ready=True):
-        '''Complete a block job and wait for it to finish'''
-        if wait_ready:
-            self.wait_ready(drive=drive)
-
-        result = self.vm.qmp('block-job-complete', device=drive)
-        self.assert_qmp(result, 'return', {})
-
-        event = self.wait_until_completed(drive=drive)
-        self.assert_qmp(event, 'data/type', 'mirror')
-
-class TestSingleDrive(ImageMirroringTestCase):
+class TestSingleDrive(iotests.QMPTestCase):
     image_len = 1 * 1024 * 1024 # MB
 
     def setUp(self):
@@ -221,17 +191,9 @@ class TestSingleDriveUnalignedLength(TestSingleDrive):
     test_small_buffer2 = None
     test_large_cluster = None
 
-class TestMirrorNoBacking(ImageMirroringTestCase):
+class TestMirrorNoBacking(iotests.QMPTestCase):
     image_len = 2 * 1024 * 1024 # MB
 
-    def complete_and_wait(self, drive='drive0', wait_ready=True):
-        iotests.create_image(target_backing_img, TestMirrorNoBacking.image_len)
-        return ImageMirroringTestCase.complete_and_wait(self, drive, wait_ready)
-
-    def compare_images(self, img1, img2):
-        iotests.create_image(target_backing_img, TestMirrorNoBacking.image_len)
-        return iotests.compare_images(img1, img2)
-
     def setUp(self):
         iotests.create_image(backing_img, TestMirrorNoBacking.image_len)
         qemu_img('create', '-f', iotests.imgfmt, '-o', 'backing_file=%s' % backing_img, test_img)
@@ -242,7 +204,10 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
         self.vm.shutdown()
         os.remove(test_img)
         os.remove(backing_img)
-        os.remove(target_backing_img)
+        try:
+            os.remove(target_backing_img)
+        except:
+            pass
         os.remove(target_img)
 
     def test_complete(self):
@@ -257,7 +222,7 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
         result = self.vm.qmp('query-block')
         self.assert_qmp(result, 'return[0]/inserted/file', target_img)
         self.vm.shutdown()
-        self.assertTrue(self.compare_images(test_img, target_img),
+        self.assertTrue(iotests.compare_images(test_img, target_img),
                         'target image does not match source after mirroring')
 
     def test_cancel(self):
@@ -272,7 +237,7 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
         result = self.vm.qmp('query-block')
         self.assert_qmp(result, 'return[0]/inserted/file', test_img)
         self.vm.shutdown()
-        self.assertTrue(self.compare_images(test_img, target_img),
+        self.assertTrue(iotests.compare_images(test_img, target_img),
                         'target image does not match source after mirroring')
 
     def test_large_cluster(self):
@@ -283,7 +248,6 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
                         %(TestMirrorNoBacking.image_len), target_backing_img)
         qemu_img('create', '-f', iotests.imgfmt, '-o', 'cluster_size=%d,backing_file=%s'
                         % (TestMirrorNoBacking.image_len, target_backing_img), target_img)
-        os.remove(target_backing_img)
 
         result = self.vm.qmp('drive-mirror', device='drive0', sync='full',
                              mode='existing', target=target_img)
@@ -293,10 +257,10 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
         result = self.vm.qmp('query-block')
         self.assert_qmp(result, 'return[0]/inserted/file', target_img)
         self.vm.shutdown()
-        self.assertTrue(self.compare_images(test_img, target_img),
+        self.assertTrue(iotests.compare_images(test_img, target_img),
                         'target image does not match source after mirroring')
 
-class TestMirrorResized(ImageMirroringTestCase):
+class TestMirrorResized(iotests.QMPTestCase):
     backing_len = 1 * 1024 * 1024 # MB
     image_len = 2 * 1024 * 1024 # MB
 
@@ -344,7 +308,7 @@ class TestMirrorResized(ImageMirroringTestCase):
         self.assertTrue(iotests.compare_images(test_img, target_img),
                         'target image does not match source after mirroring')
 
-class TestReadErrors(ImageMirroringTestCase):
+class TestReadErrors(iotests.QMPTestCase):
     image_len = 2 * 1024 * 1024 # MB
 
     # this should be a multiple of twice the default granularity
@@ -498,7 +462,7 @@ new_state = "1"
         self.assert_no_active_block_jobs()
         self.vm.shutdown()
 
-class TestWriteErrors(ImageMirroringTestCase):
+class TestWriteErrors(iotests.QMPTestCase):
     image_len = 2 * 1024 * 1024 # MB
 
     # this should be a multiple of twice the default granularity
@@ -624,7 +588,7 @@ new_state = "1"
         self.assert_no_active_block_jobs()
         self.vm.shutdown()
 
-class TestSetSpeed(ImageMirroringTestCase):
+class TestSetSpeed(iotests.QMPTestCase):
     image_len = 80 * 1024 * 1024 # MB
 
     def setUp(self):
@@ -690,7 +654,7 @@ class TestSetSpeed(ImageMirroringTestCase):
 
         self.wait_ready_and_cancel()
 
-class TestUnbackedSource(ImageMirroringTestCase):
+class TestUnbackedSource(iotests.QMPTestCase):
     image_len = 2 * 1024 * 1024 # MB
 
     def setUp(self):
@@ -731,7 +695,7 @@ class TestUnbackedSource(ImageMirroringTestCase):
         self.complete_and_wait()
         self.assert_no_active_block_jobs()
 
-class TestRepairQuorum(ImageMirroringTestCase):
+class TestRepairQuorum(iotests.QMPTestCase):
     """ This class test quorum file repair using drive-mirror.
         It's mostly a fork of TestSingleDrive """
     image_len = 1 * 1024 * 1024 # MB
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index e93e623..2e07cc4 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -326,6 +326,34 @@ class QMPTestCase(unittest.TestCase):
         self.assert_no_active_block_jobs()
         return event
 
+    def wait_ready(self, drive='drive0'):
+        '''Wait until a block job BLOCK_JOB_READY event'''
+        ready = False
+        while not ready:
+            for event in self.vm.get_qmp_events(wait=True):
+                if event['event'] == 'BLOCK_JOB_READY':
+                    self.assert_qmp(event, 'data/type', 'mirror')
+                    self.assert_qmp(event, 'data/device', drive)
+                    ready = True
+
+    def wait_ready_and_cancel(self, drive='drive0'):
+        self.wait_ready(drive=drive)
+        event = self.cancel_and_wait(drive=drive)
+        self.assertEquals(event['event'], 'BLOCK_JOB_COMPLETED')
+        self.assert_qmp(event, 'data/type', 'mirror')
+        self.assert_qmp(event, 'data/offset', event['data']['len'])
+
+    def complete_and_wait(self, drive='drive0', wait_ready=True):
+        '''Complete a block job and wait for it to finish'''
+        if wait_ready:
+            self.wait_ready(drive=drive)
+
+        result = self.vm.qmp('block-job-complete', device=drive)
+        self.assert_qmp(result, 'return', {})
+
+        event = self.wait_until_completed(drive=drive)
+        self.assert_qmp(event, 'data/type', 'mirror')
+
 def notrun(reason):
     '''Skip this test suite'''
     # Each test in qemu-iotests has a number ("seq")
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [Qemu-devel] [PATCH 4/4] qemu-iotests: Add test case for mirror with unmap
  2015-05-05 12:46 [Qemu-devel] [PATCH 0/4] block: Mirror discarded sectors Fam Zheng
                   ` (2 preceding siblings ...)
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 3/4] qemu-iotests: Make block job methods common Fam Zheng
@ 2015-05-05 12:46 ` Fam Zheng
  3 siblings, 0 replies; 15+ messages in thread
From: Fam Zheng @ 2015-05-05 12:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, qemu-block, Stefan Hajnoczi, pbonzini, jsnow, wangxiaolong

This checks that a discard on the mirror source that effectively zeroes the
data is also reflected in the data of the target.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 tests/qemu-iotests/131     | 59 ++++++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/131.out |  5 ++++
 tests/qemu-iotests/group   |  1 +
 3 files changed, 65 insertions(+)
 create mode 100644 tests/qemu-iotests/131
 create mode 100644 tests/qemu-iotests/131.out

diff --git a/tests/qemu-iotests/131 b/tests/qemu-iotests/131
new file mode 100644
index 0000000..e33ca72
--- /dev/null
+++ b/tests/qemu-iotests/131
@@ -0,0 +1,59 @@
+#!/usr/bin/env python
+#
+# Test mirror with unmap
+#
+# Copyright (C) 2015 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import time
+import os
+import iotests
+from iotests import qemu_img, qemu_io
+
+test_img = os.path.join(iotests.test_dir, 'test.img')
+target_img = os.path.join(iotests.test_dir, 'target.img')
+
+class TestSingleDrive(iotests.QMPTestCase):
+    image_len = 2 * 1024 * 1024 # MB
+
+    def setUp(self):
+        # Write data to the image so we can compare later
+        qemu_img('create', '-f', iotests.imgfmt, test_img, str(TestSingleDrive.image_len))
+        qemu_io('-f', iotests.imgfmt, '-c', 'write -P0x5d 0 2M', test_img)
+
+        self.vm = iotests.VM().add_drive(test_img, 'discard=unmap')
+        self.vm.launch()
+
+    def tearDown(self):
+        self.vm.shutdown()
+        os.remove(test_img)
+        try:
+            os.remove(target_img)
+        except OSError:
+            pass
+
+    def test_mirror_discard(self):
+        result = self.vm.qmp('drive-mirror', device='drive0', sync='full',
+                             target=target_img)
+        self.assert_qmp(result, 'return', {})
+        self.vm.hmp_qemu_io('drive0', 'discard 0 64k')
+        self.complete_and_wait('drive0')
+        self.vm.shutdown()
+        self.assertTrue(iotests.compare_images(test_img, target_img),
+                        'target image does not match source after mirroring')
+
+if __name__ == '__main__':
+    iotests.main(supported_fmts=['raw'])
diff --git a/tests/qemu-iotests/131.out b/tests/qemu-iotests/131.out
new file mode 100644
index 0000000..ae1213e
--- /dev/null
+++ b/tests/qemu-iotests/131.out
@@ -0,0 +1,5 @@
+.
+----------------------------------------------------------------------
+Ran 1 tests
+
+OK
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index 6ca3466..34b16cb 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -128,3 +128,4 @@
 128 rw auto quick
 129 rw auto quick
 130 rw auto quick
+131 rw auto quick
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard Fam Zheng
@ 2015-05-05 13:06   ` Paolo Bonzini
  2015-05-06  1:45     ` Fam Zheng
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2015-05-05 13:06 UTC (permalink / raw)
  To: Fam Zheng, qemu-devel
  Cc: Kevin Wolf, jsnow, qemu-block, Stefan Hajnoczi, wangxiaolong



On 05/05/2015 14:46, Fam Zheng wrote:
> Globally unsetting dirty bits on discard is not really correct. The discard
> may zero out sectors (depending on can_write_zeroes_with_unmap), so we should
> replicate this change to the destination side to make sure that the guest sees
> the same data.
> 
> Calling bdrv_reset_dirty also troubles the mirror job, because the hbitmap
> iterator doesn't expect bits to be unset after its current position.
> 
> So let's do it the opposite way, which fixes both problems: set the dirty bits
> if we are about to discard the sectors.

This is not enough, you also have to do the discard in block/mirror.c,
otherwise the destination image could even become fully provisioned!

Paolo

> Reported-by: wangxiaolong@ucloud.cn
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  block/io.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/block/io.c b/block/io.c
> index 1ce62c4..809688b 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -2343,8 +2343,6 @@ int coroutine_fn bdrv_co_discard(BlockDriverState *bs, int64_t sector_num,
>          return -EROFS;
>      }
>  
> -    bdrv_reset_dirty(bs, sector_num, nb_sectors);
> -
>      /* Do nothing if disabled.  */
>      if (!(bs->open_flags & BDRV_O_UNMAP)) {
>          return 0;
> @@ -2354,6 +2352,8 @@ int coroutine_fn bdrv_co_discard(BlockDriverState *bs, int64_t sector_num,
>          return 0;
>      }
>  
> +    bdrv_set_dirty(bs, sector_num, nb_sectors);
> +
>      max_discard = MIN_NON_ZERO(bs->bl.max_discard, BDRV_REQUEST_MAX_SECTORS);
>      while (nb_sectors > 0) {
>          int ret;
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 2/4] block: Remove bdrv_reset_dirty
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 2/4] block: Remove bdrv_reset_dirty Fam Zheng
@ 2015-05-05 21:57   ` Eric Blake
  0 siblings, 0 replies; 15+ messages in thread
From: Eric Blake @ 2015-05-05 21:57 UTC (permalink / raw)
  To: Fam Zheng, qemu-devel
  Cc: Kevin Wolf, qemu-block, Stefan Hajnoczi, pbonzini, jsnow, wangxiaolong

On 05/05/2015 06:46 AM, Fam Zheng wrote:
> Using this function woule always be wrong because a dirty bitmap must

s/woule/would/

> have a specific owner that consumes the dirty bits and calls
> bdrv_reset_dirty_bitmap().
> 
> Remove the unused function to avoid future misuse.
> 
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  block.c                   | 12 ------------
>  include/block/block_int.h |  2 --
>  2 files changed, 14 deletions(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 3/4] qemu-iotests: Make block job methods common
  2015-05-05 12:46 ` [Qemu-devel] [PATCH 3/4] qemu-iotests: Make block job methods common Fam Zheng
@ 2015-05-05 22:17   ` John Snow
  2015-05-06  1:48     ` Fam Zheng
  0 siblings, 1 reply; 15+ messages in thread
From: John Snow @ 2015-05-05 22:17 UTC (permalink / raw)
  To: Fam Zheng, qemu-devel
  Cc: Kevin Wolf, pbonzini, qemu-block, Stefan Hajnoczi, wangxiaolong



On 05/05/2015 08:46 AM, Fam Zheng wrote:
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  tests/qemu-iotests/041        | 66 ++++++++++---------------------------------
>  tests/qemu-iotests/iotests.py | 28 ++++++++++++++++++
>  2 files changed, 43 insertions(+), 51 deletions(-)
> 
> diff --git a/tests/qemu-iotests/041 b/tests/qemu-iotests/041
> index 59a8f73..3d46ed7 100755
> --- a/tests/qemu-iotests/041
> +++ b/tests/qemu-iotests/041
> @@ -34,38 +34,8 @@ quorum_img3 = os.path.join(iotests.test_dir, 'quorum3.img')
>  quorum_repair_img = os.path.join(iotests.test_dir, 'quorum_repair.img')
>  quorum_snapshot_file = os.path.join(iotests.test_dir, 'quorum_snapshot.img')
>  
> -class ImageMirroringTestCase(iotests.QMPTestCase):
> -    '''Abstract base class for image mirroring test cases'''
>  
> -    def wait_ready(self, drive='drive0'):
> -        '''Wait until a block job BLOCK_JOB_READY event'''
> -        ready = False
> -        while not ready:
> -            for event in self.vm.get_qmp_events(wait=True):
> -                if event['event'] == 'BLOCK_JOB_READY':
> -                    self.assert_qmp(event, 'data/type', 'mirror')
> -                    self.assert_qmp(event, 'data/device', drive)
> -                    ready = True
> -
> -    def wait_ready_and_cancel(self, drive='drive0'):
> -        self.wait_ready(drive=drive)
> -        event = self.cancel_and_wait(drive=drive)
> -        self.assertEquals(event['event'], 'BLOCK_JOB_COMPLETED')
> -        self.assert_qmp(event, 'data/type', 'mirror')
> -        self.assert_qmp(event, 'data/offset', event['data']['len'])
> -
> -    def complete_and_wait(self, drive='drive0', wait_ready=True):
> -        '''Complete a block job and wait for it to finish'''
> -        if wait_ready:
> -            self.wait_ready(drive=drive)
> -
> -        result = self.vm.qmp('block-job-complete', device=drive)
> -        self.assert_qmp(result, 'return', {})
> -
> -        event = self.wait_until_completed(drive=drive)
> -        self.assert_qmp(event, 'data/type', 'mirror')
> -
> -class TestSingleDrive(ImageMirroringTestCase):
> +class TestSingleDrive(iotests.QMPTestCase):
>      image_len = 1 * 1024 * 1024 # MB
>  
>      def setUp(self):
> @@ -221,17 +191,9 @@ class TestSingleDriveUnalignedLength(TestSingleDrive):
>      test_small_buffer2 = None
>      test_large_cluster = None
>  
> -class TestMirrorNoBacking(ImageMirroringTestCase):
> +class TestMirrorNoBacking(iotests.QMPTestCase):
>      image_len = 2 * 1024 * 1024 # MB
>  
> -    def complete_and_wait(self, drive='drive0', wait_ready=True):
> -        iotests.create_image(target_backing_img, TestMirrorNoBacking.image_len)
> -        return ImageMirroringTestCase.complete_and_wait(self, drive, wait_ready)
> -
> -    def compare_images(self, img1, img2):
> -        iotests.create_image(target_backing_img, TestMirrorNoBacking.image_len)
> -        return iotests.compare_images(img1, img2)
> -
>      def setUp(self):
>          iotests.create_image(backing_img, TestMirrorNoBacking.image_len)
>          qemu_img('create', '-f', iotests.imgfmt, '-o', 'backing_file=%s' % backing_img, test_img)
> @@ -242,7 +204,10 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
>          self.vm.shutdown()
>          os.remove(test_img)
>          os.remove(backing_img)
> -        os.remove(target_backing_img)
> +        try:
> +            os.remove(target_backing_img)
> +        except:
> +            pass
>          os.remove(target_img)
>  
>      def test_complete(self):
> @@ -257,7 +222,7 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
>          result = self.vm.qmp('query-block')
>          self.assert_qmp(result, 'return[0]/inserted/file', target_img)
>          self.vm.shutdown()
> -        self.assertTrue(self.compare_images(test_img, target_img),
> +        self.assertTrue(iotests.compare_images(test_img, target_img),
>                          'target image does not match source after mirroring')
>  
>      def test_cancel(self):
> @@ -272,7 +237,7 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
>          result = self.vm.qmp('query-block')
>          self.assert_qmp(result, 'return[0]/inserted/file', test_img)
>          self.vm.shutdown()
> -        self.assertTrue(self.compare_images(test_img, target_img),
> +        self.assertTrue(iotests.compare_images(test_img, target_img),
>                          'target image does not match source after mirroring')
>  
>      def test_large_cluster(self):
> @@ -283,7 +248,6 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
>                          %(TestMirrorNoBacking.image_len), target_backing_img)
>          qemu_img('create', '-f', iotests.imgfmt, '-o', 'cluster_size=%d,backing_file=%s'
>                          % (TestMirrorNoBacking.image_len, target_backing_img), target_img)
> -        os.remove(target_backing_img)
>  
>          result = self.vm.qmp('drive-mirror', device='drive0', sync='full',
>                               mode='existing', target=target_img)
> @@ -293,10 +257,10 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
>          result = self.vm.qmp('query-block')
>          self.assert_qmp(result, 'return[0]/inserted/file', target_img)
>          self.vm.shutdown()
> -        self.assertTrue(self.compare_images(test_img, target_img),
> +        self.assertTrue(iotests.compare_images(test_img, target_img),
>                          'target image does not match source after mirroring')
>  
> -class TestMirrorResized(ImageMirroringTestCase):
> +class TestMirrorResized(iotests.QMPTestCase):
>      backing_len = 1 * 1024 * 1024 # MB
>      image_len = 2 * 1024 * 1024 # MB
>  
> @@ -344,7 +308,7 @@ class TestMirrorResized(ImageMirroringTestCase):
>          self.assertTrue(iotests.compare_images(test_img, target_img),
>                          'target image does not match source after mirroring')
>  
> -class TestReadErrors(ImageMirroringTestCase):
> +class TestReadErrors(iotests.QMPTestCase):
>      image_len = 2 * 1024 * 1024 # MB
>  
>      # this should be a multiple of twice the default granularity
> @@ -498,7 +462,7 @@ new_state = "1"
>          self.assert_no_active_block_jobs()
>          self.vm.shutdown()
>  
> -class TestWriteErrors(ImageMirroringTestCase):
> +class TestWriteErrors(iotests.QMPTestCase):
>      image_len = 2 * 1024 * 1024 # MB
>  
>      # this should be a multiple of twice the default granularity
> @@ -624,7 +588,7 @@ new_state = "1"
>          self.assert_no_active_block_jobs()
>          self.vm.shutdown()
>  
> -class TestSetSpeed(ImageMirroringTestCase):
> +class TestSetSpeed(iotests.QMPTestCase):
>      image_len = 80 * 1024 * 1024 # MB
>  
>      def setUp(self):
> @@ -690,7 +654,7 @@ class TestSetSpeed(ImageMirroringTestCase):
>  
>          self.wait_ready_and_cancel()
>  
> -class TestUnbackedSource(ImageMirroringTestCase):
> +class TestUnbackedSource(iotests.QMPTestCase):
>      image_len = 2 * 1024 * 1024 # MB
>  
>      def setUp(self):
> @@ -731,7 +695,7 @@ class TestUnbackedSource(ImageMirroringTestCase):
>          self.complete_and_wait()
>          self.assert_no_active_block_jobs()
>  
> -class TestRepairQuorum(ImageMirroringTestCase):
> +class TestRepairQuorum(iotests.QMPTestCase):
>      """ This class test quorum file repair using drive-mirror.
>          It's mostly a fork of TestSingleDrive """
>      image_len = 1 * 1024 * 1024 # MB
> diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
> index e93e623..2e07cc4 100644
> --- a/tests/qemu-iotests/iotests.py
> +++ b/tests/qemu-iotests/iotests.py
> @@ -326,6 +326,34 @@ class QMPTestCase(unittest.TestCase):
>          self.assert_no_active_block_jobs()
>          return event
>  
> +    def wait_ready(self, drive='drive0'):
> +        '''Wait until a block job BLOCK_JOB_READY event'''
> +        ready = False
> +        while not ready:
> +            for event in self.vm.get_qmp_events(wait=True):
> +                if event['event'] == 'BLOCK_JOB_READY':
> +                    self.assert_qmp(event, 'data/type', 'mirror')
> +                    self.assert_qmp(event, 'data/device', drive)
> +                    ready = True
> +

I know you only moved the code, but since it's going into the general
purpose iotests instead of inside of a specific test case...

get_qmp_events will completely empty the queue, but we're only
interested in one specific event. This may make writing tests harder
depending on what order events arrive in.

You may want to rely upon self.event_wait() instead, which allows you to
specify an event to match, and any events it finds that don't match this
are left alone to be pulled by future calls.

Something like this: (not tested...)

def wait_ready(self, drive='drive0'):
    filter = {'data': {'type': 'mirror', 'device': drive}}
    event = self.event_wait('BLOCK_JOB_READY', match=filter)

This way we can use wait_ready to test in asynchronous cases with other
stuff flying around.

(Hmm, looks like I never converted the existing users of get_qmp_events,
which can/would cause similar difficulties...)

> +    def wait_ready_and_cancel(self, drive='drive0'):
> +        self.wait_ready(drive=drive)
> +        event = self.cancel_and_wait(drive=drive)
> +        self.assertEquals(event['event'], 'BLOCK_JOB_COMPLETED')
> +        self.assert_qmp(event, 'data/type', 'mirror')
> +        self.assert_qmp(event, 'data/offset', event['data']['len'])
> +
> +    def complete_and_wait(self, drive='drive0', wait_ready=True):
> +        '''Complete a block job and wait for it to finish'''
> +        if wait_ready:
> +            self.wait_ready(drive=drive)
> +
> +        result = self.vm.qmp('block-job-complete', device=drive)
> +        self.assert_qmp(result, 'return', {})
> +
> +        event = self.wait_until_completed(drive=drive)
> +        self.assert_qmp(event, 'data/type', 'mirror')
> +
>  def notrun(reason):
>      '''Skip this test suite'''
>      # Each test in qemu-iotests has a number ("seq")
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard
  2015-05-05 13:06   ` Paolo Bonzini
@ 2015-05-06  1:45     ` Fam Zheng
  2015-05-06  8:59       ` Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Fam Zheng @ 2015-05-06  1:45 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, qemu-block, qemu-devel, Stefan Hajnoczi, jsnow, wangxiaolong

On Tue, 05/05 15:06, Paolo Bonzini wrote:
> 
> 
> On 05/05/2015 14:46, Fam Zheng wrote:
> > Globally unsetting dirty bits on discard is not really correct. The discard
> > may zero out sectors (depending on can_write_zeroes_with_unmap), so we should
> > replicate this change to the destination side to make sure that the guest sees
> > the same data.
> > 
> > Calling bdrv_reset_dirty also troubles the mirror job, because the hbitmap
> > iterator doesn't expect bits to be unset after its current position.
> > 
> > So let's do it the opposite way, which fixes both problems: set the dirty bits
> > if we are about to discard the sectors.
> 
> This is not enough, you also have to do the discard in block/mirror.c,
> otherwise the destination image could even become fully provisioned!

I wasn't sure what to do if src and dest have different
can_write_zeroes_with_unmap values, but your argument is stronger.

I will add discard in mirror.  Besides that, do we need to compare the
can_write_zeroes_with_unmap?

Fam

> 
> Paolo
> 
> > Reported-by: wangxiaolong@ucloud.cn
> > Signed-off-by: Fam Zheng <famz@redhat.com>
> > ---
> >  block/io.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/block/io.c b/block/io.c
> > index 1ce62c4..809688b 100644
> > --- a/block/io.c
> > +++ b/block/io.c
> > @@ -2343,8 +2343,6 @@ int coroutine_fn bdrv_co_discard(BlockDriverState *bs, int64_t sector_num,
> >          return -EROFS;
> >      }
> >  
> > -    bdrv_reset_dirty(bs, sector_num, nb_sectors);
> > -
> >      /* Do nothing if disabled.  */
> >      if (!(bs->open_flags & BDRV_O_UNMAP)) {
> >          return 0;
> > @@ -2354,6 +2352,8 @@ int coroutine_fn bdrv_co_discard(BlockDriverState *bs, int64_t sector_num,
> >          return 0;
> >      }
> >  
> > +    bdrv_set_dirty(bs, sector_num, nb_sectors);
> > +
> >      max_discard = MIN_NON_ZERO(bs->bl.max_discard, BDRV_REQUEST_MAX_SECTORS);
> >      while (nb_sectors > 0) {
> >          int ret;
> > 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 3/4] qemu-iotests: Make block job methods common
  2015-05-05 22:17   ` John Snow
@ 2015-05-06  1:48     ` Fam Zheng
  0 siblings, 0 replies; 15+ messages in thread
From: Fam Zheng @ 2015-05-06  1:48 UTC (permalink / raw)
  To: John Snow
  Cc: Kevin Wolf, qemu-block, qemu-devel, Stefan Hajnoczi, pbonzini,
	wangxiaolong

On Tue, 05/05 18:17, John Snow wrote:
> 
> 
> On 05/05/2015 08:46 AM, Fam Zheng wrote:
> > Signed-off-by: Fam Zheng <famz@redhat.com>
> > ---
> >  tests/qemu-iotests/041        | 66 ++++++++++---------------------------------
> >  tests/qemu-iotests/iotests.py | 28 ++++++++++++++++++
> >  2 files changed, 43 insertions(+), 51 deletions(-)
> > 
> > diff --git a/tests/qemu-iotests/041 b/tests/qemu-iotests/041
> > index 59a8f73..3d46ed7 100755
> > --- a/tests/qemu-iotests/041
> > +++ b/tests/qemu-iotests/041
> > @@ -34,38 +34,8 @@ quorum_img3 = os.path.join(iotests.test_dir, 'quorum3.img')
> >  quorum_repair_img = os.path.join(iotests.test_dir, 'quorum_repair.img')
> >  quorum_snapshot_file = os.path.join(iotests.test_dir, 'quorum_snapshot.img')
> >  
> > -class ImageMirroringTestCase(iotests.QMPTestCase):
> > -    '''Abstract base class for image mirroring test cases'''
> >  
> > -    def wait_ready(self, drive='drive0'):
> > -        '''Wait until a block job BLOCK_JOB_READY event'''
> > -        ready = False
> > -        while not ready:
> > -            for event in self.vm.get_qmp_events(wait=True):
> > -                if event['event'] == 'BLOCK_JOB_READY':
> > -                    self.assert_qmp(event, 'data/type', 'mirror')
> > -                    self.assert_qmp(event, 'data/device', drive)
> > -                    ready = True
> > -
> > -    def wait_ready_and_cancel(self, drive='drive0'):
> > -        self.wait_ready(drive=drive)
> > -        event = self.cancel_and_wait(drive=drive)
> > -        self.assertEquals(event['event'], 'BLOCK_JOB_COMPLETED')
> > -        self.assert_qmp(event, 'data/type', 'mirror')
> > -        self.assert_qmp(event, 'data/offset', event['data']['len'])
> > -
> > -    def complete_and_wait(self, drive='drive0', wait_ready=True):
> > -        '''Complete a block job and wait for it to finish'''
> > -        if wait_ready:
> > -            self.wait_ready(drive=drive)
> > -
> > -        result = self.vm.qmp('block-job-complete', device=drive)
> > -        self.assert_qmp(result, 'return', {})
> > -
> > -        event = self.wait_until_completed(drive=drive)
> > -        self.assert_qmp(event, 'data/type', 'mirror')
> > -
> > -class TestSingleDrive(ImageMirroringTestCase):
> > +class TestSingleDrive(iotests.QMPTestCase):
> >      image_len = 1 * 1024 * 1024 # MB
> >  
> >      def setUp(self):
> > @@ -221,17 +191,9 @@ class TestSingleDriveUnalignedLength(TestSingleDrive):
> >      test_small_buffer2 = None
> >      test_large_cluster = None
> >  
> > -class TestMirrorNoBacking(ImageMirroringTestCase):
> > +class TestMirrorNoBacking(iotests.QMPTestCase):
> >      image_len = 2 * 1024 * 1024 # MB
> >  
> > -    def complete_and_wait(self, drive='drive0', wait_ready=True):
> > -        iotests.create_image(target_backing_img, TestMirrorNoBacking.image_len)
> > -        return ImageMirroringTestCase.complete_and_wait(self, drive, wait_ready)
> > -
> > -    def compare_images(self, img1, img2):
> > -        iotests.create_image(target_backing_img, TestMirrorNoBacking.image_len)
> > -        return iotests.compare_images(img1, img2)
> > -
> >      def setUp(self):
> >          iotests.create_image(backing_img, TestMirrorNoBacking.image_len)
> >          qemu_img('create', '-f', iotests.imgfmt, '-o', 'backing_file=%s' % backing_img, test_img)
> > @@ -242,7 +204,10 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
> >          self.vm.shutdown()
> >          os.remove(test_img)
> >          os.remove(backing_img)
> > -        os.remove(target_backing_img)
> > +        try:
> > +            os.remove(target_backing_img)
> > +        except:
> > +            pass
> >          os.remove(target_img)
> >  
> >      def test_complete(self):
> > @@ -257,7 +222,7 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
> >          result = self.vm.qmp('query-block')
> >          self.assert_qmp(result, 'return[0]/inserted/file', target_img)
> >          self.vm.shutdown()
> > -        self.assertTrue(self.compare_images(test_img, target_img),
> > +        self.assertTrue(iotests.compare_images(test_img, target_img),
> >                          'target image does not match source after mirroring')
> >  
> >      def test_cancel(self):
> > @@ -272,7 +237,7 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
> >          result = self.vm.qmp('query-block')
> >          self.assert_qmp(result, 'return[0]/inserted/file', test_img)
> >          self.vm.shutdown()
> > -        self.assertTrue(self.compare_images(test_img, target_img),
> > +        self.assertTrue(iotests.compare_images(test_img, target_img),
> >                          'target image does not match source after mirroring')
> >  
> >      def test_large_cluster(self):
> > @@ -283,7 +248,6 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
> >                          %(TestMirrorNoBacking.image_len), target_backing_img)
> >          qemu_img('create', '-f', iotests.imgfmt, '-o', 'cluster_size=%d,backing_file=%s'
> >                          % (TestMirrorNoBacking.image_len, target_backing_img), target_img)
> > -        os.remove(target_backing_img)
> >  
> >          result = self.vm.qmp('drive-mirror', device='drive0', sync='full',
> >                               mode='existing', target=target_img)
> > @@ -293,10 +257,10 @@ class TestMirrorNoBacking(ImageMirroringTestCase):
> >          result = self.vm.qmp('query-block')
> >          self.assert_qmp(result, 'return[0]/inserted/file', target_img)
> >          self.vm.shutdown()
> > -        self.assertTrue(self.compare_images(test_img, target_img),
> > +        self.assertTrue(iotests.compare_images(test_img, target_img),
> >                          'target image does not match source after mirroring')
> >  
> > -class TestMirrorResized(ImageMirroringTestCase):
> > +class TestMirrorResized(iotests.QMPTestCase):
> >      backing_len = 1 * 1024 * 1024 # MB
> >      image_len = 2 * 1024 * 1024 # MB
> >  
> > @@ -344,7 +308,7 @@ class TestMirrorResized(ImageMirroringTestCase):
> >          self.assertTrue(iotests.compare_images(test_img, target_img),
> >                          'target image does not match source after mirroring')
> >  
> > -class TestReadErrors(ImageMirroringTestCase):
> > +class TestReadErrors(iotests.QMPTestCase):
> >      image_len = 2 * 1024 * 1024 # MB
> >  
> >      # this should be a multiple of twice the default granularity
> > @@ -498,7 +462,7 @@ new_state = "1"
> >          self.assert_no_active_block_jobs()
> >          self.vm.shutdown()
> >  
> > -class TestWriteErrors(ImageMirroringTestCase):
> > +class TestWriteErrors(iotests.QMPTestCase):
> >      image_len = 2 * 1024 * 1024 # MB
> >  
> >      # this should be a multiple of twice the default granularity
> > @@ -624,7 +588,7 @@ new_state = "1"
> >          self.assert_no_active_block_jobs()
> >          self.vm.shutdown()
> >  
> > -class TestSetSpeed(ImageMirroringTestCase):
> > +class TestSetSpeed(iotests.QMPTestCase):
> >      image_len = 80 * 1024 * 1024 # MB
> >  
> >      def setUp(self):
> > @@ -690,7 +654,7 @@ class TestSetSpeed(ImageMirroringTestCase):
> >  
> >          self.wait_ready_and_cancel()
> >  
> > -class TestUnbackedSource(ImageMirroringTestCase):
> > +class TestUnbackedSource(iotests.QMPTestCase):
> >      image_len = 2 * 1024 * 1024 # MB
> >  
> >      def setUp(self):
> > @@ -731,7 +695,7 @@ class TestUnbackedSource(ImageMirroringTestCase):
> >          self.complete_and_wait()
> >          self.assert_no_active_block_jobs()
> >  
> > -class TestRepairQuorum(ImageMirroringTestCase):
> > +class TestRepairQuorum(iotests.QMPTestCase):
> >      """ This class test quorum file repair using drive-mirror.
> >          It's mostly a fork of TestSingleDrive """
> >      image_len = 1 * 1024 * 1024 # MB
> > diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
> > index e93e623..2e07cc4 100644
> > --- a/tests/qemu-iotests/iotests.py
> > +++ b/tests/qemu-iotests/iotests.py
> > @@ -326,6 +326,34 @@ class QMPTestCase(unittest.TestCase):
> >          self.assert_no_active_block_jobs()
> >          return event
> >  
> > +    def wait_ready(self, drive='drive0'):
> > +        '''Wait until a block job BLOCK_JOB_READY event'''
> > +        ready = False
> > +        while not ready:
> > +            for event in self.vm.get_qmp_events(wait=True):
> > +                if event['event'] == 'BLOCK_JOB_READY':
> > +                    self.assert_qmp(event, 'data/type', 'mirror')
> > +                    self.assert_qmp(event, 'data/device', drive)
> > +                    ready = True
> > +
> 
> I know you only moved the code, but since it's going into the general
> purpose iotests instead of inside of a specific test case...
> 
> get_qmp_events will completely empty the queue, but we're only
> interested in one specific event. This may make writing tests harder
> depending on what order events arrive in.
> 
> You may want to rely upon self.event_wait() instead, which allows you to
> specify an event to match, and any events it finds that don't match this
> are left alone to be pulled by future calls.
> 
> Something like this: (not tested...)
> 
> def wait_ready(self, drive='drive0'):
>     filter = {'data': {'type': 'mirror', 'device': drive}}
>     event = self.event_wait('BLOCK_JOB_READY', match=filter)
> 
> This way we can use wait_ready to test in asynchronous cases with other
> stuff flying around.
> 
> (Hmm, looks like I never converted the existing users of get_qmp_events,
> which can/would cause similar difficulties...)

Indeed it's nicer. I'll add a patch to improve that (so as not to mess up this
code-moving patch).

Fam

> 
> > +    def wait_ready_and_cancel(self, drive='drive0'):
> > +        self.wait_ready(drive=drive)
> > +        event = self.cancel_and_wait(drive=drive)
> > +        self.assertEquals(event['event'], 'BLOCK_JOB_COMPLETED')
> > +        self.assert_qmp(event, 'data/type', 'mirror')
> > +        self.assert_qmp(event, 'data/offset', event['data']['len'])
> > +
> > +    def complete_and_wait(self, drive='drive0', wait_ready=True):
> > +        '''Complete a block job and wait for it to finish'''
> > +        if wait_ready:
> > +            self.wait_ready(drive=drive)
> > +
> > +        result = self.vm.qmp('block-job-complete', device=drive)
> > +        self.assert_qmp(result, 'return', {})
> > +
> > +        event = self.wait_until_completed(drive=drive)
> > +        self.assert_qmp(event, 'data/type', 'mirror')
> > +
> >  def notrun(reason):
> >      '''Skip this test suite'''
> >      # Each test in qemu-iotests has a number ("seq")
> > 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard
  2015-05-06  1:45     ` Fam Zheng
@ 2015-05-06  8:59       ` Paolo Bonzini
  2015-05-06  9:50         ` Fam Zheng
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2015-05-06  8:59 UTC (permalink / raw)
  To: Fam Zheng
  Cc: Kevin Wolf, qemu-block, qemu-devel, Stefan Hajnoczi, jsnow, wangxiaolong



On 06/05/2015 03:45, Fam Zheng wrote:
>> > This is not enough, you also have to do the discard in block/mirror.c,
>> > otherwise the destination image could even become fully provisioned!
> I wasn't sure what to do if src and dest have different
> can_write_zeroes_with_unmap values, but your argument is stronger.
> 
> I will add discard in mirror.  Besides that, do we need to compare the
> can_write_zeroes_with_unmap?

Hmm, if can_write_zeroes_with_unmap is set, it's probably better to
write zeroes instead of discarding, in case the guest is relying on it.

Paolo

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard
  2015-05-06  8:59       ` Paolo Bonzini
@ 2015-05-06  9:50         ` Fam Zheng
  2015-05-06 10:21           ` Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Fam Zheng @ 2015-05-06  9:50 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, qemu-block, qemu-devel, Stefan Hajnoczi, jsnow, wangxiaolong

On Wed, 05/06 10:59, Paolo Bonzini wrote:
> 
> 
> On 06/05/2015 03:45, Fam Zheng wrote:
> >> > This is not enough, you also have to do the discard in block/mirror.c,
> >> > otherwise the destination image could even become fully provisioned!
> > I wasn't sure what to do if src and dest have different
> > can_write_zeroes_with_unmap values, but your argument is stronger.
> > 
> > I will add discard in mirror.  Besides that, do we need to compare the
> > can_write_zeroes_with_unmap?
> 
> Hmm, if can_write_zeroes_with_unmap is set, it's probably better to
> write zeroes instead of discarding, in case the guest is relying on it.
> 

I think there are four cases:

 #   src can_write_zeroes_with_unmap       target can_write_zeroes_with_unmap
--------------------------------------------------------------------------------
 1              true                                   true
 2              true                                   false
 3              false                                  true
 4              false                                  false

I think replicating the discard only works for 1, because both sides would then
read 0.  For case 2 & 3 it's probably better to mirror the actual reading of source.

I'm not sure about 4.

What do you think?

Fam

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard
  2015-05-06  9:50         ` Fam Zheng
@ 2015-05-06 10:21           ` Paolo Bonzini
  2015-05-11  8:02             ` Fam Zheng
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2015-05-06 10:21 UTC (permalink / raw)
  To: Fam Zheng
  Cc: Kevin Wolf, qemu-block, qemu-devel, Stefan Hajnoczi, jsnow, wangxiaolong



On 06/05/2015 11:50, Fam Zheng wrote:
>  #   src can_write_zeroes_with_unmap       target can_write_zeroes_with_unmap
> --------------------------------------------------------------------------------
>  1              true                                   true
>  2              true                                   false
>  3              false                                  true
>  4              false                                  false

1 should replicate WRITE SAME, in case the unmap granularity of the
target is different from that of the source.  In that case, a discard on
the target might leave some sectors untouched.  Writing zeroes would
ensure matching data between the source and the target.

2 should _not_ discard: it should write zeroes even at the cost of
making the target fully provisioned.  Perhaps you can optimize it by
looking at bdrv_get_block_status for the target, and checking the answer
for BDRV_BLOCK_ZERO.

3 and 4 can use discard on the target.

So it looks like only the source setting matters.

We need to check the cost of bdrv_co_get_block_status for "raw", too.
If it's too expensive, that can be a problem.

Paolo

> For case 2 & 3 it's probably better to mirror the actual reading of source.
> 
> I'm not sure about 4.

Even in case 1, discard could be "UNMAP" and write zeroes could be
"WRITE SAME".  If the unmap granularity of the target is different,
UNMAP might leave some unaligned sectors aside while WRITE SAME will
write them with zeroes.
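
To make the discussion above concrete, here is a minimal sketch of how the
mirror side could replicate a guest discard under this scheme.  This is not
the actual block/mirror.c implementation: the helper name is invented, and the
block-layer calls (bdrv_can_write_zeroes_with_unmap, bdrv_co_write_zeroes,
bdrv_co_discard) are only used to illustrate the idea.

    /* Sketch only: replicate a guest discard onto the mirror target.
     * Only the source's can_write_zeroes_with_unmap setting matters:
     * cases 1 and 2 must write zeroes, cases 3 and 4 may discard. */
    static int coroutine_fn mirror_replicate_discard(BlockDriverState *source,
                                                     BlockDriverState *target,
                                                     int64_t sector_num,
                                                     int nb_sectors)
    {
        if (bdrv_can_write_zeroes_with_unmap(source)) {
            /* The guest may rely on reading zeroes back, so write zeroes;
             * this keeps source and target identical even if the target's
             * unmap granularity differs. */
            return bdrv_co_write_zeroes(target, sector_num, nb_sectors,
                                        BDRV_REQ_MAY_UNMAP);
        }
        /* The source gives no zero guarantee, so a plain discard on the
         * target is enough. */
        return bdrv_co_discard(target, sector_num, nb_sectors);
    }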

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard
  2015-05-06 10:21           ` Paolo Bonzini
@ 2015-05-11  8:02             ` Fam Zheng
  2015-05-11  8:33               ` Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Fam Zheng @ 2015-05-11  8:02 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, qemu-block, qemu-devel, Stefan Hajnoczi, jsnow, wangxiaolong

On Wed, 05/06 12:21, Paolo Bonzini wrote:
> 
> 
> On 06/05/2015 11:50, Fam Zheng wrote:
> >  #   src can_write_zeroes_with_unmap       target can_write_zeroes_with_unmap
> > --------------------------------------------------------------------------------
> >  1              true                                   true
> >  2              true                                   false
> >  3              false                                  true
> >  4              false                                  false
> 
> 1 should replicate WRITE SAME, in case the unmap granularity of the
> target is different from that of the source.  In that case, a discard on
> the target might leave some sectors untouched.  Writing zeroes would
> ensure matching data between the source and the target.
> 
> 2 should _not_ discard: it should write zeroes even at the cost of
> making the target fully provisioned.  Perhaps you can optimize it by
> looking at bdrv_get_block_status for the target, and checking the answer
> for BDRV_BLOCK_ZERO.
> 
> 3 and 4 can use discard on the target.
> 
> So it looks like only the source setting matters.
> 
> We need to check the cost of bdrv_co_get_block_status for "raw", too.
> If it's too expensive, that can be a problem.

In my test, bdrv_is_allocated_above() takes 10~20us on an ext4 raw image on my
laptop's SATA disk.  Also note that:

	/*
	 * ...
	 *
	 * 'pnum' is set to the number of sectors (including and immediately following
	 * the specified sector) that are known to be in the same
	 * allocated/unallocated state.
	 *
	 * 'nb_sectors' is the max value 'pnum' should be set to.  If nb_sectors goes
	 * beyond the end of the disk image it will be clamped.
	 */
	static int64_t coroutine_fn bdrv_co_get_block_status(BlockDriverState *bs,

So we currently don't coalesce the sequential dirty sectors.

Was that intentional?

Fam

> 
> Paolo
> 
> > For case 2 & 3 it's probably better to mirror the actual reading of source.
> > 
> > I'm not sure about 4.
> 
> Even in case 1, discard could be "UNMAP" and write zeroes could be
> "WRITE SAME".  If the unmap granularity of the target is different,
> UNMAP might leave some unaligned sectors aside while WRITE SAME will
> write them with zeroes.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard
  2015-05-11  8:02             ` Fam Zheng
@ 2015-05-11  8:33               ` Paolo Bonzini
  0 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2015-05-11  8:33 UTC (permalink / raw)
  To: Fam Zheng
  Cc: Kevin Wolf, qemu-block, qemu-devel, Stefan Hajnoczi, jsnow, wangxiaolong



On 11/05/2015 10:02, Fam Zheng wrote:
> 
> 	/*
> 	 * ...
> 	 *
> 	 * 'pnum' is set to the number of sectors (including and immediately following
> 	 * the specified sector) that are known to be in the same
> 	 * allocated/unallocated state.
> 	 *
> 	 * 'nb_sectors' is the max value 'pnum' should be set to.  If nb_sectors goes
> 	 * beyond the end of the disk image it will be clamped.
> 	 */
> 	static int64_t coroutine_fn bdrv_co_get_block_status(BlockDriverState *bs,
> 
> So we currently don't coalesce the sequential dirty sectors.
> 
> Was that intentional?

The idea, I think, is that for some drivers bdrv_co_get_block_status is
O(nb_sectors).

It's okay to let a driver return *pnum > nb_sectors.  You do need to
audit the callers, though.  It would also be nice to add TODOs wherever
the code is fine but could exploit *pnum > nb_sectors to get better
performance.

Paolo

^ permalink raw reply	[flat|nested] 15+ messages in thread

Thread overview: 15+ messages
2015-05-05 12:46 [Qemu-devel] [PATCH 0/4] block: Mirror discarded sectors Fam Zheng
2015-05-05 12:46 ` [Qemu-devel] [PATCH 1/4] block: Fix dirty bitmap in bdrv_co_discard Fam Zheng
2015-05-05 13:06   ` Paolo Bonzini
2015-05-06  1:45     ` Fam Zheng
2015-05-06  8:59       ` Paolo Bonzini
2015-05-06  9:50         ` Fam Zheng
2015-05-06 10:21           ` Paolo Bonzini
2015-05-11  8:02             ` Fam Zheng
2015-05-11  8:33               ` Paolo Bonzini
2015-05-05 12:46 ` [Qemu-devel] [PATCH 2/4] block: Remove bdrv_reset_dirty Fam Zheng
2015-05-05 21:57   ` Eric Blake
2015-05-05 12:46 ` [Qemu-devel] [PATCH 3/4] qemu-iotests: Make block job methods common Fam Zheng
2015-05-05 22:17   ` John Snow
2015-05-06  1:48     ` Fam Zheng
2015-05-05 12:46 ` [Qemu-devel] [PATCH 4/4] qemu-iotests: Add test case for mirror with unmap Fam Zheng
