qemu-devel.nongnu.org archive mirror
* [PULL 00/13] Block layer patches
From: Kevin Wolf @ 2020-01-27 17:55 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

The following changes since commit 105b07f1ba462ec48b27e5cb74ddf81c6a79364c:

  Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20200127' into staging (2020-01-27 13:02:36 +0000)

are available in the Git repository at:

  git://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to 5fbf1d56c24018772e900a40a0955175ff82f35c:

  iscsi: Don't access non-existent scsi_lba_status_descriptor (2020-01-27 17:19:53 +0100)

----------------------------------------------------------------
Block layer patches:

- iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711)
- AioContext fixes in QMP commands for backup and bitmaps
- iotests fixes

----------------------------------------------------------------
Eiichi Tsukata (1):
      block/backup: fix memory leak in bdrv_backup_top_append()

Felipe Franciosi (1):
      iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711)

Kevin Wolf (1):
      iscsi: Don't access non-existent scsi_lba_status_descriptor

Max Reitz (1):
      iotests.py: Let wait_migration wait even more

Sergio Lopez (8):
      blockdev: fix coding style issues in drive_backup_prepare
      blockdev: unify qmp_drive_backup and drive-backup transaction paths
      blockdev: unify qmp_blockdev_backup and blockdev-backup transaction paths
      blockdev: honor bdrv_try_set_aio_context() context requirements
      block/backup-top: Don't acquire context while dropping top
      blockdev: Acquire AioContext on dirty bitmap functions
      blockdev: Return bs to the proper context on snapshot abort
      iotests: Test handling of AioContexts with some blockdev actions

Thomas Huth (1):
      iotests: Add more "skip_if_unsupported" statements to the python tests

 block/backup-top.c            |   7 +-
 block/backup.c                |   3 +
 block/iscsi.c                 |   7 +-
 blockdev.c                    | 393 +++++++++++++++++++++++-------------------
 tests/qemu-iotests/iotests.py |   6 +-
 tests/qemu-iotests/030        |   4 +-
 tests/qemu-iotests/040        |   2 +
 tests/qemu-iotests/041        |  39 +----
 tests/qemu-iotests/141.out    |   2 +
 tests/qemu-iotests/185.out    |   2 +
 tests/qemu-iotests/219        |   7 +-
 tests/qemu-iotests/219.out    |   8 +
 tests/qemu-iotests/234        |   8 +-
 tests/qemu-iotests/245        |   2 +
 tests/qemu-iotests/262        |   4 +-
 tests/qemu-iotests/280        |   2 +-
 tests/qemu-iotests/281        | 247 ++++++++++++++++++++++++++
 tests/qemu-iotests/281.out    |   5 +
 tests/qemu-iotests/group      |   1 +
 19 files changed, 510 insertions(+), 239 deletions(-)
 create mode 100755 tests/qemu-iotests/281
 create mode 100644 tests/qemu-iotests/281.out




* [PULL 01/13] iotests.py: Let wait_migration wait even more
From: Kevin Wolf @ 2020-01-27 17:55 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Max Reitz <mreitz@redhat.com>

The "migration completed" event may be sent (on the source, to be
specific) before the migration is actually completed, so the VM runstate
will still be "finish-migrate" instead of "postmigrate".  Therefore, ask
the users of VM.wait_migration() to specify the final runstate they
desire and then poll the VM until it has reached that state.  (This
should be over very quickly, so busy polling is fine.)

Without this patch, I see intermittent failures in the new iotest 280
under high system load.  I have not yet seen such failures with other
iotests that use VM.wait_migration() and query-status afterwards; maybe
they just occur even more rarely, or maybe those tests are protected
because they also wait for the destination VM to be running.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/qemu-iotests/iotests.py | 6 +++++-
 tests/qemu-iotests/234        | 8 ++++----
 tests/qemu-iotests/262        | 4 ++--
 tests/qemu-iotests/280        | 2 +-
 4 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 13fd8b5cd2..0b62c42851 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -668,12 +668,16 @@ class VM(qtest.QEMUQtestMachine):
             }
         ]))
 
-    def wait_migration(self):
+    def wait_migration(self, expect_runstate):
         while True:
             event = self.event_wait('MIGRATION')
             log(event, filters=[filter_qmp_event])
             if event['data']['status'] == 'completed':
                 break
+        # The event may occur in finish-migrate, so wait for the expected
+        # post-migration runstate
+        while self.qmp('query-status')['return']['status'] != expect_runstate:
+            pass
 
     def node_info(self, node_name):
         nodes = self.qmp('query-named-block-nodes')
diff --git a/tests/qemu-iotests/234 b/tests/qemu-iotests/234
index 34c818c485..59a7f949ec 100755
--- a/tests/qemu-iotests/234
+++ b/tests/qemu-iotests/234
@@ -69,9 +69,9 @@ with iotests.FilePath('img') as img_path, \
     iotests.log(vm_a.qmp('migrate', uri='exec:cat >%s' % (fifo_a)))
     with iotests.Timeout(3, 'Migration does not complete'):
         # Wait for the source first (which includes setup=setup)
-        vm_a.wait_migration()
+        vm_a.wait_migration('postmigrate')
         # Wait for the destination second (which does not)
-        vm_b.wait_migration()
+        vm_b.wait_migration('running')
 
     iotests.log(vm_a.qmp('query-migrate')['return']['status'])
     iotests.log(vm_b.qmp('query-migrate')['return']['status'])
@@ -98,9 +98,9 @@ with iotests.FilePath('img') as img_path, \
     iotests.log(vm_b.qmp('migrate', uri='exec:cat >%s' % (fifo_b)))
     with iotests.Timeout(3, 'Migration does not complete'):
         # Wait for the source first (which includes setup=setup)
-        vm_b.wait_migration()
+        vm_b.wait_migration('postmigrate')
         # Wait for the destination second (which does not)
-        vm_a.wait_migration()
+        vm_a.wait_migration('running')
 
     iotests.log(vm_a.qmp('query-migrate')['return']['status'])
     iotests.log(vm_b.qmp('query-migrate')['return']['status'])
diff --git a/tests/qemu-iotests/262 b/tests/qemu-iotests/262
index 0963daa806..bbcb5260a6 100755
--- a/tests/qemu-iotests/262
+++ b/tests/qemu-iotests/262
@@ -71,9 +71,9 @@ with iotests.FilePath('img') as img_path, \
     iotests.log(vm_a.qmp('migrate', uri='exec:cat >%s' % (fifo)))
     with iotests.Timeout(3, 'Migration does not complete'):
         # Wait for the source first (which includes setup=setup)
-        vm_a.wait_migration()
+        vm_a.wait_migration('postmigrate')
         # Wait for the destination second (which does not)
-        vm_b.wait_migration()
+        vm_b.wait_migration('running')
 
     iotests.log(vm_a.qmp('query-migrate')['return']['status'])
     iotests.log(vm_b.qmp('query-migrate')['return']['status'])
diff --git a/tests/qemu-iotests/280 b/tests/qemu-iotests/280
index 0b1fa8e1d8..85e9114c5e 100755
--- a/tests/qemu-iotests/280
+++ b/tests/qemu-iotests/280
@@ -45,7 +45,7 @@ with iotests.FilePath('base') as base_path , \
     vm.qmp_log('migrate', uri='exec:cat > /dev/null')
 
     with iotests.Timeout(3, 'Migration does not complete'):
-        vm.wait_migration()
+        vm.wait_migration('postmigrate')
 
     iotests.log('\nVM is now stopped:')
     iotests.log(vm.qmp('query-migrate')['return']['status'])
-- 
2.20.1




* [PULL 02/13] iotests: Add more "skip_if_unsupported" statements to the python tests
From: Kevin Wolf @ 2020-01-27 17:55 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Thomas Huth <thuth@redhat.com>

The python code already provides a way to skip tests if the
corresponding driver is not available in the qemu binary - use it in
more spots so that the tests do not fail if the driver has been
disabled.

While we're at it, we can now also remove some of the old checks that
were using iotests.supports_quorum() - and which were apparently not
working as expected, since the tests aborted instead of being skipped
when "quorum" was missing from the QEMU binary.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/qemu-iotests/030 |  4 +---
 tests/qemu-iotests/040 |  2 ++
 tests/qemu-iotests/041 | 39 +++------------------------------------
 tests/qemu-iotests/245 |  2 ++
 4 files changed, 8 insertions(+), 39 deletions(-)

diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index be35bde06f..0990681c1e 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -530,6 +530,7 @@ class TestQuorum(iotests.QMPTestCase):
     children = []
     backing = []
 
+    @iotests.skip_if_unsupported(['quorum'])
     def setUp(self):
         opts = ['driver=quorum', 'vote-threshold=2']
 
@@ -560,9 +561,6 @@ class TestQuorum(iotests.QMPTestCase):
             os.remove(img)
 
     def test_stream_quorum(self):
-        if not iotests.supports_quorum():
-            return
-
         self.assertNotEqual(qemu_io('-f', iotests.imgfmt, '-rU', '-c', 'map', self.children[0]),
                             qemu_io('-f', iotests.imgfmt, '-rU', '-c', 'map', self.backing[0]),
                             'image file map matches backing file before streaming')
diff --git a/tests/qemu-iotests/040 b/tests/qemu-iotests/040
index 762ad1ebcb..74f62c3c4a 100755
--- a/tests/qemu-iotests/040
+++ b/tests/qemu-iotests/040
@@ -106,6 +106,7 @@ class TestSingleDrive(ImageCommitTestCase):
         self.assertEqual(-1, qemu_io('-f', 'raw', '-c', 'read -P 0xab 0 524288', backing_img).find("verification failed"))
         self.assertEqual(-1, qemu_io('-f', 'raw', '-c', 'read -P 0xef 524288 524288', backing_img).find("verification failed"))
 
+    @iotests.skip_if_unsupported(['throttle'])
     def test_commit_with_filter_and_quit(self):
         result = self.vm.qmp('object-add', qom_type='throttle-group', id='tg')
         self.assert_qmp(result, 'return', {})
@@ -125,6 +126,7 @@ class TestSingleDrive(ImageCommitTestCase):
         self.has_quit = True
 
     # Same as above, but this time we add the filter after starting the job
+    @iotests.skip_if_unsupported(['throttle'])
     def test_commit_plus_filter_and_quit(self):
         result = self.vm.qmp('object-add', qom_type='throttle-group', id='tg')
         self.assert_qmp(result, 'return', {})
diff --git a/tests/qemu-iotests/041 b/tests/qemu-iotests/041
index d7be30b62b..c07437fda1 100755
--- a/tests/qemu-iotests/041
+++ b/tests/qemu-iotests/041
@@ -871,6 +871,7 @@ class TestRepairQuorum(iotests.QMPTestCase):
     image_len = 1 * 1024 * 1024 # MB
     IMAGES = [ quorum_img1, quorum_img2, quorum_img3 ]
 
+    @iotests.skip_if_unsupported(['quorum'])
     def setUp(self):
         self.vm = iotests.VM()
 
@@ -891,9 +892,8 @@ class TestRepairQuorum(iotests.QMPTestCase):
         #assemble the quorum block device from the individual files
         args = { "driver": "quorum", "node-name": "quorum0",
                  "vote-threshold": 2, "children": [ "img0", "img1", "img2" ] }
-        if iotests.supports_quorum():
-            result = self.vm.qmp("blockdev-add", **args)
-            self.assert_qmp(result, 'return', {})
+        result = self.vm.qmp("blockdev-add", **args)
+        self.assert_qmp(result, 'return', {})
 
 
     def tearDown(self):
@@ -906,9 +906,6 @@ class TestRepairQuorum(iotests.QMPTestCase):
                 pass
 
     def test_complete(self):
-        if not iotests.supports_quorum():
-            return
-
         self.assert_no_active_block_jobs()
 
         result = self.vm.qmp('drive-mirror', job_id='job0', device='quorum0',
@@ -925,9 +922,6 @@ class TestRepairQuorum(iotests.QMPTestCase):
                         'target image does not match source after mirroring')
 
     def test_cancel(self):
-        if not iotests.supports_quorum():
-            return
-
         self.assert_no_active_block_jobs()
 
         result = self.vm.qmp('drive-mirror', job_id='job0', device='quorum0',
@@ -942,9 +936,6 @@ class TestRepairQuorum(iotests.QMPTestCase):
         self.vm.shutdown()
 
     def test_cancel_after_ready(self):
-        if not iotests.supports_quorum():
-            return
-
         self.assert_no_active_block_jobs()
 
         result = self.vm.qmp('drive-mirror', job_id='job0', device='quorum0',
@@ -961,9 +952,6 @@ class TestRepairQuorum(iotests.QMPTestCase):
                         'target image does not match source after mirroring')
 
     def test_pause(self):
-        if not iotests.supports_quorum():
-            return
-
         self.assert_no_active_block_jobs()
 
         result = self.vm.qmp('drive-mirror', job_id='job0', device='quorum0',
@@ -989,9 +977,6 @@ class TestRepairQuorum(iotests.QMPTestCase):
                         'target image does not match source after mirroring')
 
     def test_medium_not_found(self):
-        if not iotests.supports_quorum():
-            return
-
         if iotests.qemu_default_machine != 'pc':
             return
 
@@ -1003,9 +988,6 @@ class TestRepairQuorum(iotests.QMPTestCase):
         self.assert_qmp(result, 'error/class', 'GenericError')
 
     def test_image_not_found(self):
-        if not iotests.supports_quorum():
-            return
-
         result = self.vm.qmp('drive-mirror', job_id='job0', device='quorum0',
                              sync='full', node_name='repair0', replaces='img1',
                              mode='existing', target=quorum_repair_img,
@@ -1013,9 +995,6 @@ class TestRepairQuorum(iotests.QMPTestCase):
         self.assert_qmp(result, 'error/class', 'GenericError')
 
     def test_device_not_found(self):
-        if not iotests.supports_quorum():
-            return
-
         result = self.vm.qmp('drive-mirror', job_id='job0',
                              device='nonexistent', sync='full',
                              node_name='repair0',
@@ -1024,9 +1003,6 @@ class TestRepairQuorum(iotests.QMPTestCase):
         self.assert_qmp(result, 'error/class', 'GenericError')
 
     def test_wrong_sync_mode(self):
-        if not iotests.supports_quorum():
-            return
-
         result = self.vm.qmp('drive-mirror', device='quorum0', job_id='job0',
                              node_name='repair0',
                              replaces='img1',
@@ -1034,27 +1010,18 @@ class TestRepairQuorum(iotests.QMPTestCase):
         self.assert_qmp(result, 'error/class', 'GenericError')
 
     def test_no_node_name(self):
-        if not iotests.supports_quorum():
-            return
-
         result = self.vm.qmp('drive-mirror', job_id='job0', device='quorum0',
                              sync='full', replaces='img1',
                              target=quorum_repair_img, format=iotests.imgfmt)
         self.assert_qmp(result, 'error/class', 'GenericError')
 
     def test_nonexistent_replaces(self):
-        if not iotests.supports_quorum():
-            return
-
         result = self.vm.qmp('drive-mirror', job_id='job0', device='quorum0',
                              sync='full', node_name='repair0', replaces='img77',
                              target=quorum_repair_img, format=iotests.imgfmt)
         self.assert_qmp(result, 'error/class', 'GenericError')
 
     def test_after_a_quorum_snapshot(self):
-        if not iotests.supports_quorum():
-            return
-
         result = self.vm.qmp('blockdev-snapshot-sync', node_name='img1',
                              snapshot_file=quorum_snapshot_file,
                              snapshot_node_name="snap1");
diff --git a/tests/qemu-iotests/245 b/tests/qemu-iotests/245
index e66a23c5f0..d12b253065 100644
--- a/tests/qemu-iotests/245
+++ b/tests/qemu-iotests/245
@@ -478,6 +478,7 @@ class TestBlockdevReopen(iotests.QMPTestCase):
     # This test verifies that we can't change the children of a block
     # device during a reopen operation in a way that would create
     # cycles in the node graph
+    @iotests.skip_if_unsupported(['blkverify'])
     def test_graph_cycles(self):
         opts = []
 
@@ -534,6 +535,7 @@ class TestBlockdevReopen(iotests.QMPTestCase):
         self.assert_qmp(result, 'return', {})
 
     # Misc reopen tests with different block drivers
+    @iotests.skip_if_unsupported(['quorum', 'throttle'])
     def test_misc_drivers(self):
         ####################
         ###### quorum ######
-- 
2.20.1




* [PULL 03/13] blockdev: fix coding style issues in drive_backup_prepare
From: Kevin Wolf @ 2020-01-27 17:55 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Sergio Lopez <slp@redhat.com>

Fix a couple of minor coding style issues in drive_backup_prepare.

Signed-off-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 blockdev.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 8e029e9c01..553e315972 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3620,7 +3620,7 @@ static BlockJob *do_drive_backup(DriveBackup *backup, JobTxn *txn,
 
     if (!backup->has_format) {
         backup->format = backup->mode == NEW_IMAGE_MODE_EXISTING ?
-                         NULL : (char*) bs->drv->format_name;
+                         NULL : (char *) bs->drv->format_name;
     }
 
     /* Early check to avoid creating target */
@@ -3630,8 +3630,10 @@ static BlockJob *do_drive_backup(DriveBackup *backup, JobTxn *txn,
 
     flags = bs->open_flags | BDRV_O_RDWR;
 
-    /* See if we have a backing HD we can use to create our new image
-     * on top of. */
+    /*
+     * See if we have a backing HD we can use to create our new image
+     * on top of.
+     */
     if (backup->sync == MIRROR_SYNC_MODE_TOP) {
         source = backing_bs(bs);
         if (!source) {
-- 
2.20.1




* [PULL 04/13] blockdev: unify qmp_drive_backup and drive-backup transaction paths
From: Kevin Wolf @ 2020-01-27 17:55 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Sergio Lopez <slp@redhat.com>

Issuing a drive-backup from qmp_drive_backup takes a slightly
different path than when it's issued from a transaction. In the code,
this is manifested as some redundancy between do_drive_backup() and
drive_backup_prepare().

This change unifies both paths, merging do_drive_backup() and
drive_backup_prepare(), and changing qmp_drive_backup() to create a
transaction instead of calling do_backup_common() directly.

As a side effect, qmp_drive_backup() is now executed inside a drained
section, just as when creating a drive-backup transaction. This
change is visible from the user's perspective, as the job gets paused
and immediately resumed before starting the actual work.
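
As a rough illustration of where those extra events come from, consider
this sketch of the transaction path (the function name is hypothetical;
bdrv_drained_begin() and bdrv_drained_end() are the real APIs used by
the action's .prepare and .clean callbacks):

    /*
     * Hypothetical outline, not the actual blockdev.c code: draining a
     * node pauses the block jobs attached to it, which is what produces
     * the extra JOB_STATUS_CHANGE events in the test output below.
     */
    static void drive_backup_action_sketch(BlockDriverState *bs)
    {
        /* .prepare: paired with .clean */
        bdrv_drained_begin(bs);

        /*
         * The backup job is created and started while bs is still
         * drained, so it transitions created -> running -> paused.
         */

        /* .clean: runs once the whole transaction has been prepared */
        bdrv_drained_end(bs);   /* the job resumes: paused -> running */
    }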

Also fix tests 141, 185 and 219 to cope with the extra
JOB_STATUS_CHANGE lines.

Signed-off-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 blockdev.c                 | 224 +++++++++++++++++--------------------
 tests/qemu-iotests/141.out |   2 +
 tests/qemu-iotests/185.out |   2 +
 tests/qemu-iotests/219     |   7 +-
 tests/qemu-iotests/219.out |   8 ++
 5 files changed, 117 insertions(+), 126 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 553e315972..5e85fc042e 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1761,39 +1761,128 @@ typedef struct DriveBackupState {
     BlockJob *job;
 } DriveBackupState;
 
-static BlockJob *do_drive_backup(DriveBackup *backup, JobTxn *txn,
-                            Error **errp);
+static BlockJob *do_backup_common(BackupCommon *backup,
+                                  BlockDriverState *bs,
+                                  BlockDriverState *target_bs,
+                                  AioContext *aio_context,
+                                  JobTxn *txn, Error **errp);
 
 static void drive_backup_prepare(BlkActionState *common, Error **errp)
 {
     DriveBackupState *state = DO_UPCAST(DriveBackupState, common, common);
-    BlockDriverState *bs;
     DriveBackup *backup;
+    BlockDriverState *bs;
+    BlockDriverState *target_bs;
+    BlockDriverState *source = NULL;
     AioContext *aio_context;
+    QDict *options;
     Error *local_err = NULL;
+    int flags;
+    int64_t size;
+    bool set_backing_hd = false;
 
     assert(common->action->type == TRANSACTION_ACTION_KIND_DRIVE_BACKUP);
     backup = common->action->u.drive_backup.data;
 
+    if (!backup->has_mode) {
+        backup->mode = NEW_IMAGE_MODE_ABSOLUTE_PATHS;
+    }
+
     bs = bdrv_lookup_bs(backup->device, backup->device, errp);
     if (!bs) {
         return;
     }
 
+    if (!bs->drv) {
+        error_setg(errp, "Device has no medium");
+        return;
+    }
+
     aio_context = bdrv_get_aio_context(bs);
     aio_context_acquire(aio_context);
 
     /* Paired with .clean() */
     bdrv_drained_begin(bs);
 
-    state->bs = bs;
+    if (!backup->has_format) {
+        backup->format = backup->mode == NEW_IMAGE_MODE_EXISTING ?
+                         NULL : (char *) bs->drv->format_name;
+    }
+
+    /* Early check to avoid creating target */
+    if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
+        goto out;
+    }
+
+    flags = bs->open_flags | BDRV_O_RDWR;
+
+    /*
+     * See if we have a backing HD we can use to create our new image
+     * on top of.
+     */
+    if (backup->sync == MIRROR_SYNC_MODE_TOP) {
+        source = backing_bs(bs);
+        if (!source) {
+            backup->sync = MIRROR_SYNC_MODE_FULL;
+        }
+    }
+    if (backup->sync == MIRROR_SYNC_MODE_NONE) {
+        source = bs;
+        flags |= BDRV_O_NO_BACKING;
+        set_backing_hd = true;
+    }
+
+    size = bdrv_getlength(bs);
+    if (size < 0) {
+        error_setg_errno(errp, -size, "bdrv_getlength failed");
+        goto out;
+    }
+
+    if (backup->mode != NEW_IMAGE_MODE_EXISTING) {
+        assert(backup->format);
+        if (source) {
+            bdrv_refresh_filename(source);
+            bdrv_img_create(backup->target, backup->format, source->filename,
+                            source->drv->format_name, NULL,
+                            size, flags, false, &local_err);
+        } else {
+            bdrv_img_create(backup->target, backup->format, NULL, NULL, NULL,
+                            size, flags, false, &local_err);
+        }
+    }
 
-    state->job = do_drive_backup(backup, common->block_job_txn, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
         goto out;
     }
 
+    options = qdict_new();
+    qdict_put_str(options, "discard", "unmap");
+    qdict_put_str(options, "detect-zeroes", "unmap");
+    if (backup->format) {
+        qdict_put_str(options, "driver", backup->format);
+    }
+
+    target_bs = bdrv_open(backup->target, NULL, options, flags, errp);
+    if (!target_bs) {
+        goto out;
+    }
+
+    if (set_backing_hd) {
+        bdrv_set_backing_hd(target_bs, source, &local_err);
+        if (local_err) {
+            goto unref;
+        }
+    }
+
+    state->bs = bs;
+
+    state->job = do_backup_common(qapi_DriveBackup_base(backup),
+                                  bs, target_bs, aio_context,
+                                  common->block_job_txn, errp);
+
+unref:
+    bdrv_unref(target_bs);
 out:
     aio_context_release(aio_context);
 }
@@ -3587,126 +3676,13 @@ static BlockJob *do_backup_common(BackupCommon *backup,
     return job;
 }
 
-static BlockJob *do_drive_backup(DriveBackup *backup, JobTxn *txn,
-                                 Error **errp)
-{
-    BlockDriverState *bs;
-    BlockDriverState *target_bs;
-    BlockDriverState *source = NULL;
-    BlockJob *job = NULL;
-    AioContext *aio_context;
-    QDict *options;
-    Error *local_err = NULL;
-    int flags;
-    int64_t size;
-    bool set_backing_hd = false;
-
-    if (!backup->has_mode) {
-        backup->mode = NEW_IMAGE_MODE_ABSOLUTE_PATHS;
-    }
-
-    bs = bdrv_lookup_bs(backup->device, backup->device, errp);
-    if (!bs) {
-        return NULL;
-    }
-
-    if (!bs->drv) {
-        error_setg(errp, "Device has no medium");
-        return NULL;
-    }
-
-    aio_context = bdrv_get_aio_context(bs);
-    aio_context_acquire(aio_context);
-
-    if (!backup->has_format) {
-        backup->format = backup->mode == NEW_IMAGE_MODE_EXISTING ?
-                         NULL : (char *) bs->drv->format_name;
-    }
-
-    /* Early check to avoid creating target */
-    if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
-        goto out;
-    }
-
-    flags = bs->open_flags | BDRV_O_RDWR;
-
-    /*
-     * See if we have a backing HD we can use to create our new image
-     * on top of.
-     */
-    if (backup->sync == MIRROR_SYNC_MODE_TOP) {
-        source = backing_bs(bs);
-        if (!source) {
-            backup->sync = MIRROR_SYNC_MODE_FULL;
-        }
-    }
-    if (backup->sync == MIRROR_SYNC_MODE_NONE) {
-        source = bs;
-        flags |= BDRV_O_NO_BACKING;
-        set_backing_hd = true;
-    }
-
-    size = bdrv_getlength(bs);
-    if (size < 0) {
-        error_setg_errno(errp, -size, "bdrv_getlength failed");
-        goto out;
-    }
-
-    if (backup->mode != NEW_IMAGE_MODE_EXISTING) {
-        assert(backup->format);
-        if (source) {
-            bdrv_refresh_filename(source);
-            bdrv_img_create(backup->target, backup->format, source->filename,
-                            source->drv->format_name, NULL,
-                            size, flags, false, &local_err);
-        } else {
-            bdrv_img_create(backup->target, backup->format, NULL, NULL, NULL,
-                            size, flags, false, &local_err);
-        }
-    }
-
-    if (local_err) {
-        error_propagate(errp, local_err);
-        goto out;
-    }
-
-    options = qdict_new();
-    qdict_put_str(options, "discard", "unmap");
-    qdict_put_str(options, "detect-zeroes", "unmap");
-    if (backup->format) {
-        qdict_put_str(options, "driver", backup->format);
-    }
-
-    target_bs = bdrv_open(backup->target, NULL, options, flags, errp);
-    if (!target_bs) {
-        goto out;
-    }
-
-    if (set_backing_hd) {
-        bdrv_set_backing_hd(target_bs, source, &local_err);
-        if (local_err) {
-            goto unref;
-        }
-    }
-
-    job = do_backup_common(qapi_DriveBackup_base(backup),
-                           bs, target_bs, aio_context, txn, errp);
-
-unref:
-    bdrv_unref(target_bs);
-out:
-    aio_context_release(aio_context);
-    return job;
-}
-
-void qmp_drive_backup(DriveBackup *arg, Error **errp)
+void qmp_drive_backup(DriveBackup *backup, Error **errp)
 {
-
-    BlockJob *job;
-    job = do_drive_backup(arg, NULL, errp);
-    if (job) {
-        job_start(&job->job);
-    }
+    TransactionAction action = {
+        .type = TRANSACTION_ACTION_KIND_DRIVE_BACKUP,
+        .u.drive_backup.data = backup,
+    };
+    blockdev_do_action(&action, errp);
 }
 
 BlockDeviceInfoList *qmp_query_named_block_nodes(Error **errp)
diff --git a/tests/qemu-iotests/141.out b/tests/qemu-iotests/141.out
index 3645675ce8..263b680bdf 100644
--- a/tests/qemu-iotests/141.out
+++ b/tests/qemu-iotests/141.out
@@ -13,6 +13,8 @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/m.
 Formatting 'TEST_DIR/o.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "job0"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
 {'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
 {"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: node is used as backing hd of 'NODE_NAME'"}}
 {'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
index 8379ac5854..9a3b65782b 100644
--- a/tests/qemu-iotests/185.out
+++ b/tests/qemu-iotests/185.out
@@ -65,6 +65,8 @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 l
 Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "disk"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
 {"return": {}}
 { 'execute': 'quit' }
 {"return": {}}
diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
index e0c51662c0..655f54d881 100755
--- a/tests/qemu-iotests/219
+++ b/tests/qemu-iotests/219
@@ -63,7 +63,7 @@ def test_pause_resume(vm):
                 # logged immediately
                 iotests.log(vm.qmp('query-jobs'))
 
-def test_job_lifecycle(vm, job, job_args, has_ready=False):
+def test_job_lifecycle(vm, job, job_args, has_ready=False, is_mirror=False):
     global img_size
 
     iotests.log('')
@@ -135,6 +135,9 @@ def test_job_lifecycle(vm, job, job_args, has_ready=False):
     iotests.log('Waiting for PENDING state...')
     iotests.log(iotests.filter_qmp_event(vm.event_wait('JOB_STATUS_CHANGE')))
     iotests.log(iotests.filter_qmp_event(vm.event_wait('JOB_STATUS_CHANGE')))
+    if is_mirror:
+        iotests.log(iotests.filter_qmp_event(vm.event_wait('JOB_STATUS_CHANGE')))
+        iotests.log(iotests.filter_qmp_event(vm.event_wait('JOB_STATUS_CHANGE')))
 
     if not job_args.get('auto-finalize', True):
         # PENDING state:
@@ -218,7 +221,7 @@ with iotests.FilePath('disk.img') as disk_path, \
 
     for auto_finalize in [True, False]:
         for auto_dismiss in [True, False]:
-            test_job_lifecycle(vm, 'drive-backup', job_args={
+            test_job_lifecycle(vm, 'drive-backup', is_mirror=True, job_args={
                 'device': 'drive0-node',
                 'target': copy_path,
                 'sync': 'full',
diff --git a/tests/qemu-iotests/219.out b/tests/qemu-iotests/219.out
index 8ebd3fee60..0ea5d0b9d5 100644
--- a/tests/qemu-iotests/219.out
+++ b/tests/qemu-iotests/219.out
@@ -135,6 +135,8 @@ Pause/resume in RUNNING
 {"return": {}}
 
 Waiting for PENDING state...
+{"data": {"id": "job0", "status": "paused"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"id": "job0", "status": "running"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "waiting"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "pending"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "concluded"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -186,6 +188,8 @@ Pause/resume in RUNNING
 {"return": {}}
 
 Waiting for PENDING state...
+{"data": {"id": "job0", "status": "paused"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"id": "job0", "status": "running"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "waiting"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "pending"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "concluded"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -245,6 +249,8 @@ Pause/resume in RUNNING
 {"return": {}}
 
 Waiting for PENDING state...
+{"data": {"id": "job0", "status": "paused"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"id": "job0", "status": "running"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "waiting"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "pending"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"return": [{"current-progress": 4194304, "id": "job0", "status": "pending", "total-progress": 4194304, "type": "backup"}]}
@@ -304,6 +310,8 @@ Pause/resume in RUNNING
 {"return": {}}
 
 Waiting for PENDING state...
+{"data": {"id": "job0", "status": "paused"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"id": "job0", "status": "running"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "waiting"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"id": "job0", "status": "pending"}, "event": "JOB_STATUS_CHANGE", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"return": [{"current-progress": 4194304, "id": "job0", "status": "pending", "total-progress": 4194304, "type": "backup"}]}
-- 
2.20.1




* [PULL 05/13] blockdev: unify qmp_blockdev_backup and blockdev-backup transaction paths
From: Kevin Wolf @ 2020-01-27 17:55 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Sergio Lopez <slp@redhat.com>

Issuing a blockdev-backup from qmp_blockdev_backup takes a slightly
different path than when it's issued from a transaction. In the code,
this is manifested as some redundancy between do_blockdev_backup() and
blockdev_backup_prepare().

This change unifies both paths, merging do_blockdev_backup() and
blockdev_backup_prepare(), and changing qmp_blockdev_backup() to
create a transaction instead of calling do_backup_common() directly.

As a side effect, qmp_blockdev_backup() is now executed inside a
drained section, just as when creating a blockdev-backup transaction.
This change is visible from the user's perspective, as the job gets
paused and immediately resumed before starting the actual work.

Signed-off-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 blockdev.c | 60 ++++++++++++------------------------------------------
 1 file changed, 13 insertions(+), 47 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 5e85fc042e..152a0f7454 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1940,16 +1940,13 @@ typedef struct BlockdevBackupState {
     BlockJob *job;
 } BlockdevBackupState;
 
-static BlockJob *do_blockdev_backup(BlockdevBackup *backup, JobTxn *txn,
-                                    Error **errp);
-
 static void blockdev_backup_prepare(BlkActionState *common, Error **errp)
 {
     BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common);
     BlockdevBackup *backup;
-    BlockDriverState *bs, *target;
+    BlockDriverState *bs;
+    BlockDriverState *target_bs;
     AioContext *aio_context;
-    Error *local_err = NULL;
 
     assert(common->action->type == TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP);
     backup = common->action->u.blockdev_backup.data;
@@ -1959,8 +1956,8 @@ static void blockdev_backup_prepare(BlkActionState *common, Error **errp)
         return;
     }
 
-    target = bdrv_lookup_bs(backup->target, backup->target, errp);
-    if (!target) {
+    target_bs = bdrv_lookup_bs(backup->target, backup->target, errp);
+    if (!target_bs) {
         return;
     }
 
@@ -1971,13 +1968,10 @@ static void blockdev_backup_prepare(BlkActionState *common, Error **errp)
     /* Paired with .clean() */
     bdrv_drained_begin(state->bs);
 
-    state->job = do_blockdev_backup(backup, common->block_job_txn, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-        goto out;
-    }
+    state->job = do_backup_common(qapi_BlockdevBackup_base(backup),
+                                  bs, target_bs, aio_context,
+                                  common->block_job_txn, errp);
 
-out:
     aio_context_release(aio_context);
 }
 
@@ -3695,41 +3689,13 @@ XDbgBlockGraph *qmp_x_debug_query_block_graph(Error **errp)
     return bdrv_get_xdbg_block_graph(errp);
 }
 
-BlockJob *do_blockdev_backup(BlockdevBackup *backup, JobTxn *txn,
-                             Error **errp)
+void qmp_blockdev_backup(BlockdevBackup *backup, Error **errp)
 {
-    BlockDriverState *bs;
-    BlockDriverState *target_bs;
-    AioContext *aio_context;
-    BlockJob *job;
-
-    bs = bdrv_lookup_bs(backup->device, backup->device, errp);
-    if (!bs) {
-        return NULL;
-    }
-
-    target_bs = bdrv_lookup_bs(backup->target, backup->target, errp);
-    if (!target_bs) {
-        return NULL;
-    }
-
-    aio_context = bdrv_get_aio_context(bs);
-    aio_context_acquire(aio_context);
-
-    job = do_backup_common(qapi_BlockdevBackup_base(backup),
-                           bs, target_bs, aio_context, txn, errp);
-
-    aio_context_release(aio_context);
-    return job;
-}
-
-void qmp_blockdev_backup(BlockdevBackup *arg, Error **errp)
-{
-    BlockJob *job;
-    job = do_blockdev_backup(arg, NULL, errp);
-    if (job) {
-        job_start(&job->job);
-    }
+    TransactionAction action = {
+        .type = TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP,
+        .u.blockdev_backup.data = backup,
+    };
+    blockdev_do_action(&action, errp);
 }
 
 /* Parameter check and block job starting for drive mirroring.
-- 
2.20.1




* [PULL 06/13] blockdev: honor bdrv_try_set_aio_context() context requirements
From: Kevin Wolf @ 2020-01-27 17:55 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Sergio Lopez <slp@redhat.com>

bdrv_try_set_aio_context() requires that the old context is held, and
that the new context is not held. Fix all the call sites where this
requirement is not met.
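
At each call site, the resulting pattern looks roughly like the sketch
below (the helper name is hypothetical; it assumes the caller already
holds aio_context, as in the drive-backup and mirror cases):

    /* Sketch only: move target_bs to aio_context while honoring the
     * locking contract of bdrv_try_set_aio_context(). */
    static int move_to_context_sketch(BlockDriverState *target_bs,
                                      AioContext *aio_context, Error **errp)
    {
        AioContext *old_context = bdrv_get_aio_context(target_bs);
        int ret;

        /* The old context must be held, the new one must not be */
        aio_context_release(aio_context);
        aio_context_acquire(old_context);

        ret = bdrv_try_set_aio_context(target_bs, aio_context, errp);

        /* Restore the caller's locking state */
        aio_context_release(old_context);
        aio_context_acquire(aio_context);

        return ret;
    }

Since the AioContext lock is recursive, the release/acquire dance is
harmless when old_context and aio_context happen to be the same.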

Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 blockdev.c | 68 +++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 60 insertions(+), 8 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 152a0f7454..1dacbc20ec 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1535,6 +1535,7 @@ static void external_snapshot_prepare(BlkActionState *common,
                              DO_UPCAST(ExternalSnapshotState, common, common);
     TransactionAction *action = common->action;
     AioContext *aio_context;
+    AioContext *old_context;
     int ret;
 
     /* 'blockdev-snapshot' and 'blockdev-snapshot-sync' have similar
@@ -1675,7 +1676,16 @@ static void external_snapshot_prepare(BlkActionState *common,
         goto out;
     }
 
+    /* Honor bdrv_try_set_aio_context() context acquisition requirements. */
+    old_context = bdrv_get_aio_context(state->new_bs);
+    aio_context_release(aio_context);
+    aio_context_acquire(old_context);
+
     ret = bdrv_try_set_aio_context(state->new_bs, aio_context, errp);
+
+    aio_context_release(old_context);
+    aio_context_acquire(aio_context);
+
     if (ret < 0) {
         goto out;
     }
@@ -1775,11 +1785,13 @@ static void drive_backup_prepare(BlkActionState *common, Error **errp)
     BlockDriverState *target_bs;
     BlockDriverState *source = NULL;
     AioContext *aio_context;
+    AioContext *old_context;
     QDict *options;
     Error *local_err = NULL;
     int flags;
     int64_t size;
     bool set_backing_hd = false;
+    int ret;
 
     assert(common->action->type == TRANSACTION_ACTION_KIND_DRIVE_BACKUP);
     backup = common->action->u.drive_backup.data;
@@ -1868,6 +1880,21 @@ static void drive_backup_prepare(BlkActionState *common, Error **errp)
         goto out;
     }
 
+    /* Honor bdrv_try_set_aio_context() context acquisition requirements. */
+    old_context = bdrv_get_aio_context(target_bs);
+    aio_context_release(aio_context);
+    aio_context_acquire(old_context);
+
+    ret = bdrv_try_set_aio_context(target_bs, aio_context, errp);
+    if (ret < 0) {
+        bdrv_unref(target_bs);
+        aio_context_release(old_context);
+        return;
+    }
+
+    aio_context_release(old_context);
+    aio_context_acquire(aio_context);
+
     if (set_backing_hd) {
         bdrv_set_backing_hd(target_bs, source, &local_err);
         if (local_err) {
@@ -1947,6 +1974,8 @@ static void blockdev_backup_prepare(BlkActionState *common, Error **errp)
     BlockDriverState *bs;
     BlockDriverState *target_bs;
     AioContext *aio_context;
+    AioContext *old_context;
+    int ret;
 
     assert(common->action->type == TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP);
     backup = common->action->u.blockdev_backup.data;
@@ -1961,7 +1990,18 @@ static void blockdev_backup_prepare(BlkActionState *common, Error **errp)
         return;
     }
 
+    /* Honor bdrv_try_set_aio_context() context acquisition requirements. */
     aio_context = bdrv_get_aio_context(bs);
+    old_context = bdrv_get_aio_context(target_bs);
+    aio_context_acquire(old_context);
+
+    ret = bdrv_try_set_aio_context(target_bs, aio_context, errp);
+    if (ret < 0) {
+        aio_context_release(old_context);
+        return;
+    }
+
+    aio_context_release(old_context);
     aio_context_acquire(aio_context);
     state->bs = bs;
 
@@ -3562,7 +3602,6 @@ static BlockJob *do_backup_common(BackupCommon *backup,
     BlockJob *job = NULL;
     BdrvDirtyBitmap *bmap = NULL;
     int job_flags = JOB_DEFAULT;
-    int ret;
 
     if (!backup->has_speed) {
         backup->speed = 0;
@@ -3586,11 +3625,6 @@ static BlockJob *do_backup_common(BackupCommon *backup,
         backup->compress = false;
     }
 
-    ret = bdrv_try_set_aio_context(target_bs, aio_context, errp);
-    if (ret < 0) {
-        return NULL;
-    }
-
     if ((backup->sync == MIRROR_SYNC_MODE_BITMAP) ||
         (backup->sync == MIRROR_SYNC_MODE_INCREMENTAL)) {
         /* done before desugaring 'incremental' to print the right message */
@@ -3825,6 +3859,7 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
     BlockDriverState *bs;
     BlockDriverState *source, *target_bs;
     AioContext *aio_context;
+    AioContext *old_context;
     BlockMirrorBackingMode backing_mode;
     Error *local_err = NULL;
     QDict *options = NULL;
@@ -3937,12 +3972,22 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
                    (arg->mode == NEW_IMAGE_MODE_EXISTING ||
                     !bdrv_has_zero_init(target_bs)));
 
+
+    /* Honor bdrv_try_set_aio_context() context acquisition requirements. */
+    old_context = bdrv_get_aio_context(target_bs);
+    aio_context_release(aio_context);
+    aio_context_acquire(old_context);
+
     ret = bdrv_try_set_aio_context(target_bs, aio_context, errp);
     if (ret < 0) {
         bdrv_unref(target_bs);
-        goto out;
+        aio_context_release(old_context);
+        return;
     }
 
+    aio_context_release(old_context);
+    aio_context_acquire(aio_context);
+
     blockdev_mirror_common(arg->has_job_id ? arg->job_id : NULL, bs, target_bs,
                            arg->has_replaces, arg->replaces, arg->sync,
                            backing_mode, zero_target,
@@ -3984,6 +4029,7 @@ void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
     BlockDriverState *bs;
     BlockDriverState *target_bs;
     AioContext *aio_context;
+    AioContext *old_context;
     BlockMirrorBackingMode backing_mode = MIRROR_LEAVE_BACKING_CHAIN;
     Error *local_err = NULL;
     bool zero_target;
@@ -4001,10 +4047,16 @@ void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
 
     zero_target = (sync == MIRROR_SYNC_MODE_FULL);
 
+    /* Honor bdrv_try_set_aio_context() context acquisition requirements. */
+    old_context = bdrv_get_aio_context(target_bs);
     aio_context = bdrv_get_aio_context(bs);
-    aio_context_acquire(aio_context);
+    aio_context_acquire(old_context);
 
     ret = bdrv_try_set_aio_context(target_bs, aio_context, errp);
+
+    aio_context_release(old_context);
+    aio_context_acquire(aio_context);
+
     if (ret < 0) {
         goto out;
     }
-- 
2.20.1




* [PULL 07/13] block/backup-top: Don't acquire context while dropping top
From: Kevin Wolf @ 2020-01-27 17:55 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Sergio Lopez <slp@redhat.com>

All paths that lead to bdrv_backup_top_drop(), except for the call
from backup_clean(), imply that the BDS AioContext has already been
acquired, so acquiring it there as well can lead to QEMU hanging in
AIO_WAIT_WHILE().
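
To make the deadlock concrete, here is a sketch of the old rollback
path (the function name is hypothetical; it assumes bs runs in an
IOThread):

    /* Illustrative sketch, not actual QEMU code */
    static void rollback_path_sketch(BlockDriverState *bs)
    {
        AioContext *ctx = bdrv_get_aio_context(bs);

        aio_context_acquire(ctx);   /* held by the transaction code */

        /*
         * The old bdrv_backup_top_drop() acquired ctx a second time,
         * then reached bdrv_unref() -> bdrv_close() -> bdrv_flush(),
         * which polls in AIO_WAIT_WHILE(ctx, ...).  AIO_WAIT_WHILE()
         * releases ctx exactly once, so the recursive lock stays held
         * at depth 1; the IOThread can never take it to complete the
         * in-flight requests, and aio_poll() waits forever.
         */
        bdrv_backup_top_drop(bs);

        aio_context_release(ctx);
    }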

An easy way to trigger this situation is by issuing a two-action
transaction, with a proper and a bogus blockdev-backup, so the second
one fails and triggers a rollback. This results in a hang, with a
stack trace like this one:

 #0  0x00007fb680c75016 in __GI_ppoll (fds=0x55e74580f7c0, nfds=1, timeout=<optimized out>,
     timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
 #1  0x000055e743386e09 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>)
     at /usr/include/bits/poll2.h:77
 #2  0x000055e743386e09 in qemu_poll_ns
     (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:336
 #3  0x000055e743388dc4 in aio_poll (ctx=0x55e7458925d0, blocking=blocking@entry=true)
     at util/aio-posix.c:669
 #4  0x000055e743305dea in bdrv_flush (bs=bs@entry=0x55e74593c0d0) at block/io.c:2878
 #5  0x000055e7432be58e in bdrv_close (bs=0x55e74593c0d0) at block.c:4017
 #6  0x000055e7432be58e in bdrv_delete (bs=<optimized out>) at block.c:4262
 #7  0x000055e7432be58e in bdrv_unref (bs=bs@entry=0x55e74593c0d0) at block.c:5644
 #8  0x000055e743316b9b in bdrv_backup_top_drop (bs=bs@entry=0x55e74593c0d0) at block/backup-top.c:273
 #9  0x000055e74331461f in backup_job_create
     (job_id=0x0, bs=bs@entry=0x55e7458d5820, target=target@entry=0x55e74589f640, speed=0, sync_mode=MIRROR_SYNC_MODE_FULL, sync_bitmap=sync_bitmap@entry=0x0, bitmap_mode=BITMAP_SYNC_MODE_ON_SUCCESS, compress=false, filter_node_name=0x0, on_source_error=BLOCKDEV_ON_ERROR_REPORT, on_target_error=BLOCKDEV_ON_ERROR_REPORT, creation_flags=0, cb=0x0, opaque=0x0, txn=0x0, errp=0x7ffddfd1efb0) at block/backup.c:478
 #10 0x000055e74315bc52 in do_backup_common
     (backup=backup@entry=0x55e746c066d0, bs=bs@entry=0x55e7458d5820, target_bs=target_bs@entry=0x55e74589f640, aio_context=aio_context@entry=0x55e7458a91e0, txn=txn@entry=0x0, errp=errp@entry=0x7ffddfd1efb0)
     at blockdev.c:3580
 #11 0x000055e74315c37c in do_blockdev_backup
     (backup=backup@entry=0x55e746c066d0, txn=0x0, errp=errp@entry=0x7ffddfd1efb0)
     at /usr/src/debug/qemu-kvm-4.2.0-2.module+el8.2.0+5135+ed3b2489.x86_64/./qapi/qapi-types-block-core.h:1492
 #12 0x000055e74315c449 in blockdev_backup_prepare (common=0x55e746a8de90, errp=0x7ffddfd1f018)
     at blockdev.c:1885
 #13 0x000055e743160152 in qmp_transaction
     (dev_list=<optimized out>, has_props=<optimized out>, props=0x55e7467fe2c0, errp=errp@entry=0x7ffddfd1f088) at blockdev.c:2340
 #14 0x000055e743287ff5 in qmp_marshal_transaction
     (args=<optimized out>, ret=<optimized out>, errp=0x7ffddfd1f0f8)
     at qapi/qapi-commands-transaction.c:44
 #15 0x000055e74333de6c in do_qmp_dispatch
     (errp=0x7ffddfd1f0f0, allow_oob=<optimized out>, request=<optimized out>, cmds=0x55e743c28d60 <qmp_commands>) at qapi/qmp-dispatch.c:132
 #16 0x000055e74333de6c in qmp_dispatch
     (cmds=0x55e743c28d60 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>)
     at qapi/qmp-dispatch.c:175
 #17 0x000055e74325c061 in monitor_qmp_dispatch (mon=0x55e745908030, req=<optimized out>)
     at monitor/qmp.c:145
 #18 0x000055e74325c6fa in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:234
 #19 0x000055e743385866 in aio_bh_call (bh=0x55e745807ae0) at util/async.c:117
 #20 0x000055e743385866 in aio_bh_poll (ctx=ctx@entry=0x55e7458067a0) at util/async.c:117
 #21 0x000055e743388c54 in aio_dispatch (ctx=0x55e7458067a0) at util/aio-posix.c:459
 #22 0x000055e743385742 in aio_ctx_dispatch
     (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
 #23 0x00007fb68543e67d in g_main_dispatch (context=0x55e745893a40) at gmain.c:3176
 #24 0x00007fb68543e67d in g_main_context_dispatch (context=context@entry=0x55e745893a40) at gmain.c:3829
 #25 0x000055e743387d08 in glib_pollfds_poll () at util/main-loop.c:219
 #26 0x000055e743387d08 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
 #27 0x000055e743387d08 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
 #28 0x000055e74316a3c1 in main_loop () at vl.c:1828
 #29 0x000055e743016a72 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
     at vl.c:4504

Fix this by not acquiring the AioContext there, and by ensuring that
all paths leading to it already hold the context (backup_clean()).

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1782111
Signed-off-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/backup-top.c | 5 -----
 block/backup.c     | 3 +++
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/block/backup-top.c b/block/backup-top.c
index 818d3f26b4..b8d863ff08 100644
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -255,9 +255,6 @@ append_failed:
 void bdrv_backup_top_drop(BlockDriverState *bs)
 {
     BDRVBackupTopState *s = bs->opaque;
-    AioContext *aio_context = bdrv_get_aio_context(bs);
-
-    aio_context_acquire(aio_context);
 
     bdrv_drained_begin(bs);
 
@@ -271,6 +268,4 @@ void bdrv_backup_top_drop(BlockDriverState *bs)
     bdrv_drained_end(bs);
 
     bdrv_unref(bs);
-
-    aio_context_release(aio_context);
 }
diff --git a/block/backup.c b/block/backup.c
index cf62b1a38c..1383e219f5 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -135,8 +135,11 @@ static void backup_abort(Job *job)
 static void backup_clean(Job *job)
 {
     BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
+    AioContext *aio_context = bdrv_get_aio_context(s->backup_top);
 
+    aio_context_acquire(aio_context);
     bdrv_backup_top_drop(s->backup_top);
+    aio_context_release(aio_context);
 }
 
 void backup_do_checkpoint(BlockJob *job, Error **errp)
-- 
2.20.1




* [PULL 08/13] blockdev: Acquire AioContext on dirty bitmap functions
From: Kevin Wolf @ 2020-01-27 17:55 UTC
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Sergio Lopez <slp@redhat.com>

The dirty bitmap addition and removal functions do not acquire the
BDS AioContext, while they may call code that expects it to be
acquired.

This may trigger a crash with a stack trace like this one:

 #0  0x00007f0ef146370f in __GI_raise (sig=sig@entry=6)
     at ../sysdeps/unix/sysv/linux/raise.c:50
 #1  0x00007f0ef144db25 in __GI_abort () at abort.c:79
 #2  0x0000565022294dce in error_exit
     (err=<optimized out>, msg=msg@entry=0x56502243a730 <__func__.16350> "qemu_mutex_unlock_impl") at util/qemu-thread-posix.c:36
 #3  0x00005650222950ba in qemu_mutex_unlock_impl
     (mutex=mutex@entry=0x5650244b0240, file=file@entry=0x565022439adf "util/async.c", line=line@entry=526) at util/qemu-thread-posix.c:108
 #4  0x0000565022290029 in aio_context_release
     (ctx=ctx@entry=0x5650244b01e0) at util/async.c:526
 #5  0x000056502221cd08 in bdrv_can_store_new_dirty_bitmap
     (bs=bs@entry=0x5650244dc820, name=name@entry=0x56502481d360 "bitmap1", granularity=granularity@entry=65536, errp=errp@entry=0x7fff22831718)
     at block/dirty-bitmap.c:542
 #6  0x000056502206ae53 in qmp_block_dirty_bitmap_add
     (errp=0x7fff22831718, disabled=false, has_disabled=<optimized out>, persistent=<optimized out>, has_persistent=true, granularity=65536, has_granularity=<optimized out>, name=0x56502481d360 "bitmap1", node=<optimized out>) at blockdev.c:2894
 #7  0x000056502206ae53 in qmp_block_dirty_bitmap_add
     (node=<optimized out>, name=0x56502481d360 "bitmap1", has_granularity=<optimized out>, granularity=<optimized out>, has_persistent=true, persistent=<optimized out>, has_disabled=false, disabled=false, errp=0x7fff22831718) at blockdev.c:2856
 #8  0x00005650221847a3 in qmp_marshal_block_dirty_bitmap_add
     (args=<optimized out>, ret=<optimized out>, errp=0x7fff22831798)
     at qapi/qapi-commands-block-core.c:651
 #9  0x0000565022247e6c in do_qmp_dispatch
     (errp=0x7fff22831790, allow_oob=<optimized out>, request=<optimized out>, cmds=0x565022b32d60 <qmp_commands>) at qapi/qmp-dispatch.c:132
 #10 0x0000565022247e6c in qmp_dispatch
     (cmds=0x565022b32d60 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175
 #11 0x0000565022166061 in monitor_qmp_dispatch
     (mon=0x56502450faa0, req=<optimized out>) at monitor/qmp.c:145
 #12 0x00005650221666fa in monitor_qmp_bh_dispatcher
     (data=<optimized out>) at monitor/qmp.c:234
 #13 0x000056502228f866 in aio_bh_call (bh=0x56502440eae0)
     at util/async.c:117
 #14 0x000056502228f866 in aio_bh_poll (ctx=ctx@entry=0x56502440d7a0)
     at util/async.c:117
 #15 0x0000565022292c54 in aio_dispatch (ctx=0x56502440d7a0)
     at util/aio-posix.c:459
 #16 0x000056502228f742 in aio_ctx_dispatch
     (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
 #17 0x00007f0ef5ce667d in g_main_dispatch (context=0x56502449aa40)
     at gmain.c:3176
 #18 0x00007f0ef5ce667d in g_main_context_dispatch
     (context=context@entry=0x56502449aa40) at gmain.c:3829
 #19 0x0000565022291d08 in glib_pollfds_poll () at util/main-loop.c:219
 #20 0x0000565022291d08 in os_host_main_loop_wait
     (timeout=<optimized out>) at util/main-loop.c:242
 #21 0x0000565022291d08 in main_loop_wait (nonblocking=<optimized out>)
     at util/main-loop.c:518
 #22 0x00005650220743c1 in main_loop () at vl.c:1828
 #23 0x0000565021f20a72 in main
     (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
     at vl.c:4504

Fix this by acquiring the AioContext in qmp_block_dirty_bitmap_add()
and do_block_dirty_bitmap_remove().
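
The shape of the fix is the usual single-exit locking pattern: take the
lock once after looking up the node, turn every early return into a
goto to a common label, and release the lock there. A hedged sketch of
that shape (precondition_ok() and do_work() are placeholders, not real
QEMU functions):

  /* Placeholders, not real QEMU functions: */
  bool precondition_ok(BlockDriverState *bs);
  void do_work(BlockDriverState *bs);

  void qmp_example_command(const char *node, Error **errp)
  {
      BlockDriverState *bs = bdrv_lookup_bs(node, node, errp);
      AioContext *aio_context;

      if (!bs) {
          return;                  /* nothing locked yet */
      }

      aio_context = bdrv_get_aio_context(bs);
      aio_context_acquire(aio_context);

      if (!precondition_ok(bs)) {
          error_setg(errp, "precondition failed");
          goto out;                /* not 'return': the lock is held */
      }

      do_work(bs);

  out:
      aio_context_release(aio_context);
  }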

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1782175
Signed-off-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 blockdev.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 1dacbc20ec..d4ef6cd452 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2984,6 +2984,7 @@ void qmp_block_dirty_bitmap_add(const char *node, const char *name,
 {
     BlockDriverState *bs;
     BdrvDirtyBitmap *bitmap;
+    AioContext *aio_context;
 
     if (!name || name[0] == '\0') {
         error_setg(errp, "Bitmap name cannot be empty");
@@ -2995,11 +2996,14 @@ void qmp_block_dirty_bitmap_add(const char *node, const char *name,
         return;
     }
 
+    aio_context = bdrv_get_aio_context(bs);
+    aio_context_acquire(aio_context);
+
     if (has_granularity) {
         if (granularity < 512 || !is_power_of_2(granularity)) {
             error_setg(errp, "Granularity must be power of 2 "
                              "and at least 512");
-            return;
+            goto out;
         }
     } else {
         /* Default to cluster size, if available: */
@@ -3017,12 +3021,12 @@ void qmp_block_dirty_bitmap_add(const char *node, const char *name,
     if (persistent &&
         !bdrv_can_store_new_dirty_bitmap(bs, name, granularity, errp))
     {
-        return;
+        goto out;
     }
 
     bitmap = bdrv_create_dirty_bitmap(bs, granularity, name, errp);
     if (bitmap == NULL) {
-        return;
+        goto out;
     }
 
     if (disabled) {
@@ -3030,6 +3034,9 @@ void qmp_block_dirty_bitmap_add(const char *node, const char *name,
     }
 
     bdrv_dirty_bitmap_set_persistence(bitmap, persistent);
+
+out:
+    aio_context_release(aio_context);
 }
 
 static BdrvDirtyBitmap *do_block_dirty_bitmap_remove(
@@ -3038,21 +3045,27 @@ static BdrvDirtyBitmap *do_block_dirty_bitmap_remove(
 {
     BlockDriverState *bs;
     BdrvDirtyBitmap *bitmap;
+    AioContext *aio_context;
 
     bitmap = block_dirty_bitmap_lookup(node, name, &bs, errp);
     if (!bitmap || !bs) {
         return NULL;
     }
 
+    aio_context = bdrv_get_aio_context(bs);
+    aio_context_acquire(aio_context);
+
     if (bdrv_dirty_bitmap_check(bitmap, BDRV_BITMAP_BUSY | BDRV_BITMAP_RO,
                                 errp)) {
+        aio_context_release(aio_context);
         return NULL;
     }
 
     if (bdrv_dirty_bitmap_get_persistence(bitmap) &&
         bdrv_remove_persistent_dirty_bitmap(bs, name, errp) < 0)
     {
-            return NULL;
+        aio_context_release(aio_context);
+        return NULL;
     }
 
     if (release) {
@@ -3063,6 +3076,7 @@ static BdrvDirtyBitmap *do_block_dirty_bitmap_remove(
         *bitmap_bs = bs;
     }
 
+    aio_context_release(aio_context);
     return release ? NULL : bitmap;
 }
 
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PULL 09/13] blockdev: Return bs to the proper context on snapshot abort
  2020-01-27 17:55 [PULL 00/13] Block layer patches Kevin Wolf
                   ` (7 preceding siblings ...)
  2020-01-27 17:55 ` [PULL 08/13] blockdev: Acquire AioContext on dirty bitmap functions Kevin Wolf
@ 2020-01-27 17:55 ` Kevin Wolf
  2020-01-27 17:55 ` [PULL 10/13] iotests: Test handling of AioContexts with some blockdev actions Kevin Wolf
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Kevin Wolf @ 2020-01-27 17:55 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Sergio Lopez <slp@redhat.com>

external_snapshot_abort() calls bdrv_set_backing_hd(), which returns
state->old_bs to the main AioContext, as it's intended to be used when
the BDS is going to be released. As that's not the case when aborting
an external snapshot, return it to the AioContext it was in before the
call.
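
The rule being honored here is that bdrv_try_set_aio_context() must be
called while holding the lock of the context the node is currently in,
not the one it is moving to, which is why the hunk below swaps the two
locks around the call. A hypothetical helper capturing the same rule,
assuming the caller holds no AioContext lock at all:

  /* Not in the tree; illustrative only. Move @bs to @target. */
  static void move_node_to_context(BlockDriverState *bs, AioContext *target)
  {
      AioContext *cur = bdrv_get_aio_context(bs);
      int ret;

      if (cur == target) {
          return;
      }

      aio_context_acquire(cur);
      ret = bdrv_try_set_aio_context(bs, target, NULL);
      assert(ret == 0);   /* abort paths have no way to report errors */
      aio_context_release(cur);
  }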

This issue can be triggered by issuing a transaction with two actions,
a proper blockdev-snapshot-sync and a bogus one, so that the second one
triggers a transaction abort. This results in a crash with a stack
trace like this one:

 #0  0x00007fa1048b28df in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
 #1  0x00007fa10489ccf5 in __GI_abort () at abort.c:79
 #2  0x00007fa10489cbc9 in __assert_fail_base
     (fmt=0x7fa104a03300 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs)", file=0x557224014d30 "block.c", line=2240, function=<optimized out>) at assert.c:92
 #3  0x00007fa1048aae96 in __GI___assert_fail
     (assertion=assertion@entry=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs)", file=file@entry=0x557224014d30 "block.c", line=line@entry=2240, function=function@entry=0x5572240b5d60 <__PRETTY_FUNCTION__.31620> "bdrv_replace_child_noperm") at assert.c:101
 #4  0x0000557223e631f8 in bdrv_replace_child_noperm (child=0x557225b9c980, new_bs=new_bs@entry=0x557225c42e40) at block.c:2240
 #5  0x0000557223e68be7 in bdrv_replace_node (from=0x557226951a60, to=0x557225c42e40, errp=0x5572247d6138 <error_abort>) at block.c:4196
 #6  0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at blockdev.c:1731
 #7  0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at blockdev.c:1717
 #8  0x0000557223d09013 in qmp_transaction (dev_list=<optimized out>, has_props=<optimized out>, props=0x557225cc7d70, errp=errp@entry=0x7ffe704c0c98) at blockdev.c:2360
 #9  0x0000557223e32085 in qmp_marshal_transaction (args=<optimized out>, ret=<optimized out>, errp=0x7ffe704c0d08) at qapi/qapi-commands-transaction.c:44
 #10 0x0000557223ee798c in do_qmp_dispatch (errp=0x7ffe704c0d00, allow_oob=<optimized out>, request=<optimized out>, cmds=0x5572247d3cc0 <qmp_commands>) at qapi/qmp-dispatch.c:132
 #11 0x0000557223ee798c in qmp_dispatch (cmds=0x5572247d3cc0 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175
 #12 0x0000557223e06141 in monitor_qmp_dispatch (mon=0x557225c69ff0, req=<optimized out>) at monitor/qmp.c:120
 #13 0x0000557223e0678a in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:209
 #14 0x0000557223f2f366 in aio_bh_call (bh=0x557225b9dc60) at util/async.c:117
 #15 0x0000557223f2f366 in aio_bh_poll (ctx=ctx@entry=0x557225b9c840) at util/async.c:117
 #16 0x0000557223f32754 in aio_dispatch (ctx=0x557225b9c840) at util/aio-posix.c:459
 #17 0x0000557223f2f242 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
 #18 0x00007fa10913467d in g_main_dispatch (context=0x557225c28e80) at gmain.c:3176
 #19 0x00007fa10913467d in g_main_context_dispatch (context=context@entry=0x557225c28e80) at gmain.c:3829
 #20 0x0000557223f31808 in glib_pollfds_poll () at util/main-loop.c:219
 #21 0x0000557223f31808 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
 #22 0x0000557223f31808 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
 #23 0x0000557223d13201 in main_loop () at vl.c:1828
 #24 0x0000557223bbfb82 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4504

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1779036
Signed-off-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 blockdev.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/blockdev.c b/blockdev.c
index d4ef6cd452..4cd9a58d36 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1731,6 +1731,8 @@ static void external_snapshot_abort(BlkActionState *common)
     if (state->new_bs) {
         if (state->overlay_appended) {
             AioContext *aio_context;
+            AioContext *tmp_context;
+            int ret;
 
             aio_context = bdrv_get_aio_context(state->old_bs);
             aio_context_acquire(aio_context);
@@ -1738,6 +1740,25 @@ static void external_snapshot_abort(BlkActionState *common)
             bdrv_ref(state->old_bs);   /* we can't let bdrv_set_backing_hd()
                                           close state->old_bs; we need it */
             bdrv_set_backing_hd(state->new_bs, NULL, &error_abort);
+
+            /*
+             * The call to bdrv_set_backing_hd() above returns state->old_bs to
+             * the main AioContext. As we're still going to be using it, return
+             * it to the AioContext it was before.
+             */
+            tmp_context = bdrv_get_aio_context(state->old_bs);
+            if (aio_context != tmp_context) {
+                aio_context_release(aio_context);
+                aio_context_acquire(tmp_context);
+
+                ret = bdrv_try_set_aio_context(state->old_bs,
+                                               aio_context, NULL);
+                assert(ret == 0);
+
+                aio_context_release(tmp_context);
+                aio_context_acquire(aio_context);
+            }
+
             bdrv_replace_node(state->new_bs, state->old_bs, &error_abort);
             bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_bs */
 
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PULL 10/13] iotests: Test handling of AioContexts with some blockdev actions
  2020-01-27 17:55 [PULL 00/13] Block layer patches Kevin Wolf
                   ` (8 preceding siblings ...)
  2020-01-27 17:55 ` [PULL 09/13] blockdev: Return bs to the proper context on snapshot abort Kevin Wolf
@ 2020-01-27 17:55 ` Kevin Wolf
  2020-01-27 17:55 ` [PULL 11/13] block/backup: fix memory leak in bdrv_backup_top_append() Kevin Wolf
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Kevin Wolf @ 2020-01-27 17:55 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Sergio Lopez <slp@redhat.com>

Includes the following tests:

 - Adding a dirty bitmap.
   * RHBZ: 1782175

 - Starting a drive-mirror to an NBD-backed target.
   * RHBZ: 1746217, 1773517

 - Aborting an external snapshot transaction.
   * RHBZ: 1779036

 - Aborting a blockdev backup transaction.
   * RHBZ: 1782111

For each one of them, a VM with a number of disks running in an
IOThread AioContext is used.

Signed-off-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/qemu-iotests/281     | 247 +++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/281.out |   5 +
 tests/qemu-iotests/group   |   1 +
 3 files changed, 253 insertions(+)
 create mode 100755 tests/qemu-iotests/281
 create mode 100644 tests/qemu-iotests/281.out

diff --git a/tests/qemu-iotests/281 b/tests/qemu-iotests/281
new file mode 100755
index 0000000000..269d583b2c
--- /dev/null
+++ b/tests/qemu-iotests/281
@@ -0,0 +1,247 @@
+#!/usr/bin/env python
+#
+# Test cases for blockdev + IOThread interactions
+#
+# Copyright (C) 2019 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import os
+import iotests
+from iotests import qemu_img
+
+image_len = 64 * 1024 * 1024
+
+# Test for RHBZ#1782175
+class TestDirtyBitmapIOThread(iotests.QMPTestCase):
+    drive0_img = os.path.join(iotests.test_dir, 'drive0.img')
+    images = { 'drive0': drive0_img }
+
+    def setUp(self):
+        for name in self.images:
+            qemu_img('create', '-f', iotests.imgfmt,
+                     self.images[name], str(image_len))
+
+        self.vm = iotests.VM()
+        self.vm.add_object('iothread,id=iothread0')
+
+        for name in self.images:
+            self.vm.add_blockdev('driver=file,filename=%s,node-name=file_%s'
+                                 % (self.images[name], name))
+            self.vm.add_blockdev('driver=qcow2,file=file_%s,node-name=%s'
+                                 % (name, name))
+
+        self.vm.launch()
+        self.vm.qmp('x-blockdev-set-iothread',
+                    node_name='drive0', iothread='iothread0',
+                    force=True)
+
+    def tearDown(self):
+        self.vm.shutdown()
+        for name in self.images:
+            os.remove(self.images[name])
+
+    def test_add_dirty_bitmap(self):
+        result = self.vm.qmp(
+            'block-dirty-bitmap-add',
+            node='drive0',
+            name='bitmap1',
+            persistent=True,
+        )
+
+        self.assert_qmp(result, 'return', {})
+
+
+# Test for RHBZ#1746217 & RHBZ#1773517
+class TestNBDMirrorIOThread(iotests.QMPTestCase):
+    nbd_sock = os.path.join(iotests.sock_dir, 'nbd.sock')
+    drive0_img = os.path.join(iotests.test_dir, 'drive0.img')
+    mirror_img = os.path.join(iotests.test_dir, 'mirror.img')
+    images = { 'drive0': drive0_img, 'mirror': mirror_img }
+
+    def setUp(self):
+        for name in self.images:
+            qemu_img('create', '-f', iotests.imgfmt,
+                     self.images[name], str(image_len))
+
+        self.vm_src = iotests.VM(path_suffix='src')
+        self.vm_src.add_object('iothread,id=iothread0')
+        self.vm_src.add_blockdev('driver=file,filename=%s,node-name=file0'
+                          % (self.drive0_img))
+        self.vm_src.add_blockdev('driver=qcow2,file=file0,node-name=drive0')
+        self.vm_src.launch()
+        self.vm_src.qmp('x-blockdev-set-iothread',
+                        node_name='drive0', iothread='iothread0',
+                        force=True)
+
+        self.vm_tgt = iotests.VM(path_suffix='tgt')
+        self.vm_tgt.add_object('iothread,id=iothread0')
+        self.vm_tgt.add_blockdev('driver=file,filename=%s,node-name=file0'
+                          % (self.mirror_img))
+        self.vm_tgt.add_blockdev('driver=qcow2,file=file0,node-name=drive0')
+        self.vm_tgt.launch()
+        self.vm_tgt.qmp('x-blockdev-set-iothread',
+                        node_name='drive0', iothread='iothread0',
+                        force=True)
+
+    def tearDown(self):
+        self.vm_src.shutdown()
+        self.vm_tgt.shutdown()
+        for name in self.images:
+            os.remove(self.images[name])
+
+    def test_nbd_mirror(self):
+        result = self.vm_tgt.qmp(
+            'nbd-server-start',
+            addr={
+                'type': 'unix',
+                'data': { 'path': self.nbd_sock }
+            }
+        )
+        self.assert_qmp(result, 'return', {})
+
+        result = self.vm_tgt.qmp(
+            'nbd-server-add',
+            device='drive0',
+            writable=True
+        )
+        self.assert_qmp(result, 'return', {})
+
+        result = self.vm_src.qmp(
+            'drive-mirror',
+            device='drive0',
+            target='nbd+unix:///drive0?socket=' + self.nbd_sock,
+            sync='full',
+            mode='existing',
+            speed=64*1024*1024,
+            job_id='j1'
+        )
+        self.assert_qmp(result, 'return', {})
+
+        self.vm_src.event_wait(name="BLOCK_JOB_READY")
+
+
+# Test for RHBZ#1779036
+class TestExternalSnapshotAbort(iotests.QMPTestCase):
+    drive0_img = os.path.join(iotests.test_dir, 'drive0.img')
+    snapshot_img = os.path.join(iotests.test_dir, 'snapshot.img')
+    images = { 'drive0': drive0_img, 'snapshot': snapshot_img }
+
+    def setUp(self):
+        for name in self.images:
+            qemu_img('create', '-f', iotests.imgfmt,
+                     self.images[name], str(image_len))
+
+        self.vm = iotests.VM()
+        self.vm.add_object('iothread,id=iothread0')
+        self.vm.add_blockdev('driver=file,filename=%s,node-name=file0'
+                          % (self.drive0_img))
+        self.vm.add_blockdev('driver=qcow2,file=file0,node-name=drive0')
+        self.vm.launch()
+        self.vm.qmp('x-blockdev-set-iothread',
+                    node_name='drive0', iothread='iothread0',
+                    force=True)
+
+    def tearDown(self):
+        self.vm.shutdown()
+        for name in self.images:
+            os.remove(self.images[name])
+
+    def test_external_snapshot_abort(self):
+        # Use a transaction with two actions, with bogus values on the
+        # second one, to trigger an abort of the transaction.
+        result = self.vm.qmp('transaction', actions=[
+            {
+                'type': 'blockdev-snapshot-sync',
+                'data': { 'node-name': 'drive0',
+                          'snapshot-file': self.snapshot_img,
+                          'snapshot-node-name': 'snap1',
+                          'mode': 'absolute-paths',
+                          'format': 'qcow2' }
+            },
+            {
+                'type': 'blockdev-snapshot-sync',
+                'data': { 'node-name': 'drive0',
+                          'snapshot-file': '/fakesnapshot',
+                          'snapshot-node-name': 'snap2',
+                          'mode': 'absolute-paths',
+                          'format': 'qcow2' }
+            },
+        ])
+
+        # With the bug present this crashed QEMU; now we expect a
+        # regular error.
+        self.assert_qmp(result, 'error/class', 'GenericError')
+
+
+# Test for RHBZ#1782111
+class TestBlockdevBackupAbort(iotests.QMPTestCase):
+    drive0_img = os.path.join(iotests.test_dir, 'drive0.img')
+    drive1_img = os.path.join(iotests.test_dir, 'drive1.img')
+    snap0_img = os.path.join(iotests.test_dir, 'snap0.img')
+    snap1_img = os.path.join(iotests.test_dir, 'snap1.img')
+    images = { 'drive0': drive0_img,
+               'drive1': drive1_img,
+               'snap0': snap0_img,
+               'snap1': snap1_img }
+
+    def setUp(self):
+        for name in self.images:
+            qemu_img('create', '-f', iotests.imgfmt,
+                     self.images[name], str(image_len))
+
+        self.vm = iotests.VM()
+        self.vm.add_object('iothread,id=iothread0')
+        self.vm.add_device('virtio-scsi,iothread=iothread0')
+
+        for name in self.images:
+            self.vm.add_blockdev('driver=file,filename=%s,node-name=file_%s'
+                                 % (self.images[name], name))
+            self.vm.add_blockdev('driver=qcow2,file=file_%s,node-name=%s'
+                                 % (name, name))
+
+        self.vm.add_device('scsi-hd,drive=drive0')
+        self.vm.add_device('scsi-hd,drive=drive1')
+        self.vm.launch()
+
+    def tearDown(self):
+        self.vm.shutdown()
+        for name in self.images:
+            os.remove(self.images[name])
+
+    def test_blockdev_backup_abort(self):
+        # Use a transaction with two actions, with bogus values on the
+        # second one, to trigger an abort of the transaction.
+        result = self.vm.qmp('transaction', actions=[
+            {
+                'type': 'blockdev-backup',
+                'data': { 'device': 'drive0',
+                          'target': 'snap0',
+                          'sync': 'full',
+                          'job-id': 'j1' }
+            },
+            {
+                'type': 'blockdev-backup',
+                'data': { 'device': 'drive1',
+                          'target': 'snap1',
+                          'sync': 'full' }
+            },
+        ])
+
+        # With the bug present this hung QEMU; now we expect a
+        # regular error.
+        self.assert_qmp(result, 'error/class', 'GenericError')
+
+if __name__ == '__main__':
+    iotests.main(supported_fmts=['qcow2'],
+                 supported_protocols=['file'])
diff --git a/tests/qemu-iotests/281.out b/tests/qemu-iotests/281.out
new file mode 100644
index 0000000000..89968f35d7
--- /dev/null
+++ b/tests/qemu-iotests/281.out
@@ -0,0 +1,5 @@
+....
+----------------------------------------------------------------------
+Ran 4 tests
+
+OK
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index cb2b789e44..e041cc1ee3 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -288,3 +288,4 @@
 277 rw quick
 279 rw backing quick
 280 rw migration quick
+281 rw quick
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PULL 11/13] block/backup: fix memory leak in bdrv_backup_top_append()
  2020-01-27 17:55 [PULL 00/13] Block layer patches Kevin Wolf
                   ` (9 preceding siblings ...)
  2020-01-27 17:55 ` [PULL 10/13] iotests: Test handling of AioContexts with some blockdev actions Kevin Wolf
@ 2020-01-27 17:55 ` Kevin Wolf
  2020-01-27 17:55 ` [PULL 12/13] iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711) Kevin Wolf
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Kevin Wolf @ 2020-01-27 17:55 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Eiichi Tsukata <devel@etsukata.com>

bdrv_open_driver() already allocates bs->opaque according to
drv->instance_size. There is no need to allocate it again and overwrite
bs->opaque in bdrv_backup_top_append().
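
Reduced to its essence, this is the classic overwrite-an-owned-pointer
leak. A self-contained GLib illustration (types and values invented for
the example):

  #include <glib.h>

  typedef struct {
      int dummy;
  } OpaqueState;

  int main(void)
  {
      /* What bdrv_open_driver() conceptually does: allocate the opaque
       * state once, sized from the driver's instance_size. */
      OpaqueState *opaque = g_new0(OpaqueState, 1);

      /* The bug: "opaque = g_new0(OpaqueState, 1);" here would replace
       * the pointer and orphan the first block -- the 24 bytes valgrind
       * flagged below. */

      /* The fix: keep using the allocation that already exists. */
      opaque->dummy = 42;

      g_free(opaque);
      return 0;
  }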

Reproducer:

  $ QTEST_QEMU_BINARY=./x86_64-softmmu/qemu-system-x86_64 valgrind -q --leak-check=full tests/test-replication -p /replication/secondary/start
  ==29792== 24 bytes in 1 blocks are definitely lost in loss record 52 of 226
  ==29792==    at 0x483AB1A: calloc (vg_replace_malloc.c:762)
  ==29792==    by 0x4B07CE0: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.6000.7)
  ==29792==    by 0x12BAB9: bdrv_open_driver (block.c:1289)
  ==29792==    by 0x12BEA9: bdrv_new_open_driver (block.c:1359)
  ==29792==    by 0x1D15CB: bdrv_backup_top_append (backup-top.c:190)
  ==29792==    by 0x1CC11A: backup_job_create (backup.c:439)
  ==29792==    by 0x1CD542: replication_start (replication.c:544)
  ==29792==    by 0x1401B9: replication_start_all (replication.c:52)
  ==29792==    by 0x128B50: test_secondary_start (test-replication.c:427)
  ...

Fixes: 7df7868b9640 ("block: introduce backup-top filter driver")
Signed-off-by: Eiichi Tsukata <devel@etsukata.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/backup-top.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/backup-top.c b/block/backup-top.c
index b8d863ff08..9aed2eb4c0 100644
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -196,7 +196,7 @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
     }
 
     top->total_sectors = source->total_sectors;
-    top->opaque = state = g_new0(BDRVBackupTopState, 1);
+    state = top->opaque;
 
     bdrv_ref(target);
     state->target = bdrv_attach_child(top, target, "target", &child_file, errp);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PULL 12/13] iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711)
  2020-01-27 17:55 [PULL 00/13] Block layer patches Kevin Wolf
                   ` (10 preceding siblings ...)
  2020-01-27 17:55 ` [PULL 11/13] block/backup: fix memory leak in bdrv_backup_top_append() Kevin Wolf
@ 2020-01-27 17:55 ` Kevin Wolf
  2020-01-27 17:55 ` [PULL 13/13] iscsi: Don't access non-existent scsi_lba_status_descriptor Kevin Wolf
  2020-01-28  9:32 ` [PULL 00/13] Block layer patches Peter Maydell
  13 siblings, 0 replies; 26+ messages in thread
From: Kevin Wolf @ 2020-01-27 17:55 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

From: Felipe Franciosi <felipe@nutanix.com>

When querying an iSCSI server for the provisioning status of blocks (via
GET LBA STATUS), QEMU only validates that the LBA of the first response
descriptor matches the one requested. Given that the SCSI spec allows
servers to respond with the status of blocks beyond the end of the LUN,
QEMU may have its heap corrupted by clearing/setting too many bits at
the end of its allocmap for the LUN.

A malicious guest in control of the iSCSI server could carefully program
QEMU's heap (by selectively setting the bitmap) and then smash it.

This limits the number of bits that iscsi_co_block_status() will try to
update in the allocmap so it can't overflow the bitmap.
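
The arithmetic of the cap, pulled out into a self-contained sketch
(names are local to this example; the real change is the max_bytes and
MIN() pair in the hunk below):

  #include <assert.h>
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  /* Never trust the server-reported extent length: cap it at the number
   * of bytes actually remaining in the LUN from the requested LBA. */
  static int64_t capped_status_bytes(uint64_t lba, uint64_t reported_blocks,
                                     uint64_t lun_blocks, uint32_t block_size)
  {
      uint64_t max_bytes;

      assert(lba < lun_blocks);   /* the request itself was in range */
      max_bytes = (lun_blocks - lba) * block_size;
      return MIN((int64_t)(reported_blocks * block_size),
                 (int64_t)max_bytes);
  }

  int main(void)
  {
      /* A malicious server claims 1000 blocks, but only 16 remain. */
      printf("%" PRId64 "\n", capped_status_bytes(48, 1000, 64, 512));
      return 0;
  }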

Fixes: CVE-2020-1711
Cc: qemu-stable@nongnu.org
Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
Signed-off-by: Peter Turschmid <peter.turschm@nutanix.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/iscsi.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/block/iscsi.c b/block/iscsi.c
index 2aea7e3f13..cbd57294ab 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -701,7 +701,7 @@ static int coroutine_fn iscsi_co_block_status(BlockDriverState *bs,
     struct scsi_get_lba_status *lbas = NULL;
     struct scsi_lba_status_descriptor *lbasd = NULL;
     struct IscsiTask iTask;
-    uint64_t lba;
+    uint64_t lba, max_bytes;
     int ret;
 
     iscsi_co_init_iscsitask(iscsilun, &iTask);
@@ -721,6 +721,7 @@ static int coroutine_fn iscsi_co_block_status(BlockDriverState *bs,
     }
 
     lba = offset / iscsilun->block_size;
+    max_bytes = (iscsilun->num_blocks - lba) * iscsilun->block_size;
 
     qemu_mutex_lock(&iscsilun->mutex);
 retry:
@@ -764,7 +765,7 @@ retry:
         goto out_unlock;
     }
 
-    *pnum = (int64_t) lbasd->num_blocks * iscsilun->block_size;
+    *pnum = MIN((int64_t) lbasd->num_blocks * iscsilun->block_size, max_bytes);
 
     if (lbasd->provisioning == SCSI_PROVISIONING_TYPE_DEALLOCATED ||
         lbasd->provisioning == SCSI_PROVISIONING_TYPE_ANCHORED) {
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PULL 13/13] iscsi: Don't access non-existent scsi_lba_status_descriptor
  2020-01-27 17:55 [PULL 00/13] Block layer patches Kevin Wolf
                   ` (11 preceding siblings ...)
  2020-01-27 17:55 ` [PULL 12/13] iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711) Kevin Wolf
@ 2020-01-27 17:55 ` Kevin Wolf
  2020-01-28  9:32 ` [PULL 00/13] Block layer patches Peter Maydell
  13 siblings, 0 replies; 26+ messages in thread
From: Kevin Wolf @ 2020-01-27 17:55 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

In iscsi_co_block_status(), we may have received num_descriptors == 0
from the iSCSI server. Therefore, we can't unconditionally access
lbas->descriptors[0]. Add the missing check.
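
The same defensive shape in isolation (struct layout invented here to
mirror libiscsi's reply, not its actual definition):

  #include <stddef.h>
  #include <stdint.h>

  struct lba_status_desc {
      uint64_t lba;
      uint32_t num_blocks;
  };

  struct lba_status_reply {
      int num_descriptors;
      struct lba_status_desc descriptors[];
  };

  /* Return the first descriptor, or NULL if the server sent none --
   * the case the hunk below starts treating as -EIO. */
  struct lba_status_desc *first_desc(struct lba_status_reply *r)
  {
      if (r == NULL || r->num_descriptors == 0) {
          return NULL;
      }
      return &r->descriptors[0];
  }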

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Felipe Franciosi <felipe@nutanix.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
---
 block/iscsi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/iscsi.c b/block/iscsi.c
index cbd57294ab..c8feaa2f0e 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -753,7 +753,7 @@ retry:
     }
 
     lbas = scsi_datain_unmarshall(iTask.task);
-    if (lbas == NULL) {
+    if (lbas == NULL || lbas->num_descriptors == 0) {
         ret = -EIO;
         goto out_unlock;
     }
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PULL 00/13] Block layer patches
  2020-01-27 17:55 [PULL 00/13] Block layer patches Kevin Wolf
                   ` (12 preceding siblings ...)
  2020-01-27 17:55 ` [PULL 13/13] iscsi: Don't access non-existent scsi_lba_status_descriptor Kevin Wolf
@ 2020-01-28  9:32 ` Peter Maydell
  13 siblings, 0 replies; 26+ messages in thread
From: Peter Maydell @ 2020-01-28  9:32 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: QEMU Developers, Qemu-block

On Mon, 27 Jan 2020 at 17:56, Kevin Wolf <kwolf@redhat.com> wrote:
>
> The following changes since commit 105b07f1ba462ec48b27e5cb74ddf81c6a79364c:
>
>   Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20200127' into staging (2020-01-27 13:02:36 +0000)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to 5fbf1d56c24018772e900a40a0955175ff82f35c:
>
>   iscsi: Don't access non-existent scsi_lba_status_descriptor (2020-01-27 17:19:53 +0100)
>
> ----------------------------------------------------------------
> Block layer patches:
>
> - iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711)
> - AioContext fixes in QMP commands for backup and bitmaps
> - iotests fixes
>
> ----------------------------------------------------------------


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.0
for any user-visible changes.

-- PMM


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PULL 00/13] Block layer patches
  2022-05-04 14:25 Kevin Wolf
@ 2022-05-05 13:09 ` Richard Henderson
  0 siblings, 0 replies; 26+ messages in thread
From: Richard Henderson @ 2022-05-05 13:09 UTC (permalink / raw)
  To: Kevin Wolf, qemu-block; +Cc: qemu-devel

On 5/4/22 09:25, Kevin Wolf wrote:
> The following changes since commit 2e3408b3cc7de4e87a9adafc8c19bfce3abec947:
> 
>    Merge tag 'misc-pull-request' of gitlab.com:marcandre.lureau/qemu into staging (2022-05-03 09:13:17 -0700)
> 
> are available in the Git repository at:
> 
>    git://repo.or.cz/qemu/kevin.git tags/for-upstream
> 
> for you to fetch changes up to c1fe694357a328c807ae3cc6961c19e923448fcc:
> 
>    coroutine-win32: use QEMU_DEFINE_STATIC_CO_TLS() (2022-05-04 15:55:23 +0200)
> 
> ----------------------------------------------------------------
> Block layer patches
> 
> - Fix and re-enable GLOBAL_STATE_CODE assertions
> - vhost-user: Fixes for VHOST_USER_ADD/REM_MEM_REG
> - vmdk: Fix reopening bs->file
> - coroutine: use QEMU_DEFINE_STATIC_CO_TLS()
> - docs/qemu-img: Fix list of formats which implement check

Applied, thanks.  Please update https://wiki.qemu.org/ChangeLog/7.1 as appropriate.


r~


> 
> ----------------------------------------------------------------
> Denis V. Lunev (1):
>        qemu-img: properly list formats which have consistency check implemented
> 
> Hanna Reitz (6):
>        block: Classify bdrv_get_flags() as I/O function
>        qcow2: Do not reopen data_file in invalidate_cache
>        Revert "main-loop: Disable GLOBAL_STATE_CODE() assertions"
>        iotests: Add regression test for issue 945
>        block/vmdk: Fix reopening bs->file
>        iotests/reopen-file: Test reopening file child
> 
> Kevin Wolf (3):
>        docs/vhost-user: Clarifications for VHOST_USER_ADD/REM_MEM_REG
>        libvhost-user: Fix extra vu_add/rem_mem_reg reply
>        vhost-user: Don't pass file descriptor for VHOST_USER_REM_MEM_REG
> 
> Stefan Hajnoczi (3):
>        coroutine-ucontext: use QEMU_DEFINE_STATIC_CO_TLS()
>        coroutine: use QEMU_DEFINE_STATIC_CO_TLS()
>        coroutine-win32: use QEMU_DEFINE_STATIC_CO_TLS()
> 
>   docs/interop/vhost-user.rst                        |  17 ++++
>   docs/tools/qemu-img.rst                            |   4 +-
>   include/block/block-global-state.h                 |   1 -
>   include/block/block-io.h                           |   1 +
>   include/qemu/main-loop.h                           |   3 +-
>   block.c                                            |   2 +-
>   block/qcow2.c                                      | 104 ++++++++++++---------
>   block/vmdk.c                                       |  56 ++++++++++-
>   hw/virtio/vhost-user.c                             |   2 +-
>   subprojects/libvhost-user/libvhost-user.c          |  17 ++--
>   util/coroutine-ucontext.c                          |  38 +++++---
>   util/coroutine-win32.c                             |  18 +++-
>   util/qemu-coroutine.c                              |  41 ++++----
>   tests/qemu-iotests/tests/export-incoming-iothread  |  81 ++++++++++++++++
>   .../tests/export-incoming-iothread.out             |   5 +
>   tests/qemu-iotests/tests/reopen-file               |  89 ++++++++++++++++++
>   tests/qemu-iotests/tests/reopen-file.out           |   5 +
>   17 files changed, 388 insertions(+), 96 deletions(-)
>   create mode 100755 tests/qemu-iotests/tests/export-incoming-iothread
>   create mode 100644 tests/qemu-iotests/tests/export-incoming-iothread.out
>   create mode 100755 tests/qemu-iotests/tests/reopen-file
>   create mode 100644 tests/qemu-iotests/tests/reopen-file.out
> 
> 



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PULL 00/13] Block layer patches
@ 2022-05-04 14:25 Kevin Wolf
  2022-05-05 13:09 ` Richard Henderson
  0 siblings, 1 reply; 26+ messages in thread
From: Kevin Wolf @ 2022-05-04 14:25 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, qemu-devel

The following changes since commit 2e3408b3cc7de4e87a9adafc8c19bfce3abec947:

  Merge tag 'misc-pull-request' of gitlab.com:marcandre.lureau/qemu into staging (2022-05-03 09:13:17 -0700)

are available in the Git repository at:

  git://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to c1fe694357a328c807ae3cc6961c19e923448fcc:

  coroutine-win32: use QEMU_DEFINE_STATIC_CO_TLS() (2022-05-04 15:55:23 +0200)

----------------------------------------------------------------
Block layer patches

- Fix and re-enable GLOBAL_STATE_CODE assertions
- vhost-user: Fixes for VHOST_USER_ADD/REM_MEM_REG
- vmdk: Fix reopening bs->file
- coroutine: use QEMU_DEFINE_STATIC_CO_TLS()
- docs/qemu-img: Fix list of formats which implement check

----------------------------------------------------------------
Denis V. Lunev (1):
      qemu-img: properly list formats which have consistency check implemented

Hanna Reitz (6):
      block: Classify bdrv_get_flags() as I/O function
      qcow2: Do not reopen data_file in invalidate_cache
      Revert "main-loop: Disable GLOBAL_STATE_CODE() assertions"
      iotests: Add regression test for issue 945
      block/vmdk: Fix reopening bs->file
      iotests/reopen-file: Test reopening file child

Kevin Wolf (3):
      docs/vhost-user: Clarifications for VHOST_USER_ADD/REM_MEM_REG
      libvhost-user: Fix extra vu_add/rem_mem_reg reply
      vhost-user: Don't pass file descriptor for VHOST_USER_REM_MEM_REG

Stefan Hajnoczi (3):
      coroutine-ucontext: use QEMU_DEFINE_STATIC_CO_TLS()
      coroutine: use QEMU_DEFINE_STATIC_CO_TLS()
      coroutine-win32: use QEMU_DEFINE_STATIC_CO_TLS()

 docs/interop/vhost-user.rst                        |  17 ++++
 docs/tools/qemu-img.rst                            |   4 +-
 include/block/block-global-state.h                 |   1 -
 include/block/block-io.h                           |   1 +
 include/qemu/main-loop.h                           |   3 +-
 block.c                                            |   2 +-
 block/qcow2.c                                      | 104 ++++++++++++---------
 block/vmdk.c                                       |  56 ++++++++++-
 hw/virtio/vhost-user.c                             |   2 +-
 subprojects/libvhost-user/libvhost-user.c          |  17 ++--
 util/coroutine-ucontext.c                          |  38 +++++---
 util/coroutine-win32.c                             |  18 +++-
 util/qemu-coroutine.c                              |  41 ++++----
 tests/qemu-iotests/tests/export-incoming-iothread  |  81 ++++++++++++++++
 .../tests/export-incoming-iothread.out             |   5 +
 tests/qemu-iotests/tests/reopen-file               |  89 ++++++++++++++++++
 tests/qemu-iotests/tests/reopen-file.out           |   5 +
 17 files changed, 388 insertions(+), 96 deletions(-)
 create mode 100755 tests/qemu-iotests/tests/export-incoming-iothread
 create mode 100644 tests/qemu-iotests/tests/export-incoming-iothread.out
 create mode 100755 tests/qemu-iotests/tests/reopen-file
 create mode 100644 tests/qemu-iotests/tests/reopen-file.out



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PULL 00/13] Block layer patches
  2021-11-15 20:55 ` Richard Henderson
@ 2021-11-16  8:49   ` Hanna Reitz
  0 siblings, 0 replies; 26+ messages in thread
From: Hanna Reitz @ 2021-11-16  8:49 UTC (permalink / raw)
  To: Richard Henderson, Kevin Wolf, qemu-block; +Cc: peter.maydell, qemu-devel

On 15.11.21 21:55, Richard Henderson wrote:
> On 11/15/21 3:53 PM, Kevin Wolf wrote:
>> The following changes since commit 
>> 42f6c9179be4401974dd3a75ee72defd16b5092d:
>>
>>    Merge tag 'pull-ppc-20211112' of https://github.com/legoater/qemu 
>> into staging (2021-11-12 12:28:25 +0100)
>>
>> are available in the Git repository at:
>>
>>    git://repo.or.cz/qemu/kevin.git tags/for-upstream
>>
>> for you to fetch changes up to 7461272c5f6032436ef9032c091c0118539483e4:
>>
>>    softmmu/qdev-monitor: fix use-after-free in qdev_set_id() 
>> (2021-11-15 15:49:46 +0100)
>>
>> ----------------------------------------------------------------
>> Block layer patches
>>
>> - Fixes to image streaming job and block layer reconfiguration to make
>>    iotest 030 pass again
>> - docs: Deprecate incorrectly typed device_add arguments
>> - file-posix: Fix alignment after reopen changing O_DIRECT
>>
>> ----------------------------------------------------------------
>> Hanna Reitz (10):
>>        stream: Traverse graph after modification
>>        block: Manipulate children list in .attach/.detach
>>        block: Unite remove_empty_child and child_free
>>        block: Drop detached child from ignore list
>>        block: Pass BdrvChild ** to replace_child_noperm
>>        block: Restructure remove_file_or_backing_child()
>>        transactions: Invoke clean() after everything else
>>        block: Let replace_child_tran keep indirect pointer
>>        block: Let replace_child_noperm free children
>>        iotests/030: Unthrottle parallel jobs in reverse
>>
>> Kevin Wolf (2):
>>        docs: Deprecate incorrectly typed device_add arguments
>>        file-posix: Fix alignment after reopen changing O_DIRECT
>>
>> Stefan Hajnoczi (1):
>>        softmmu/qdev-monitor: fix use-after-free in qdev_set_id()
>>
>>   docs/about/deprecated.rst   |  14 +++
>>   include/qemu/transactions.h |   3 +
>>   block.c                     | 233 
>> +++++++++++++++++++++++++++++++++-----------
>>   block/file-posix.c          |  20 +++-
>>   block/stream.c              |   7 +-
>>   softmmu/qdev-monitor.c      |   2 +-
>>   util/transactions.c         |   8 +-
>>   tests/qemu-iotests/030      |  11 ++-
>>   tests/qemu-iotests/142      |  22 +++++
>>   tests/qemu-iotests/142.out  |  15 +++
>>   10 files changed, 269 insertions(+), 66 deletions(-)
>
> This is failing iotest 142 for build-tcg-disabled.
> I did retry, in case it was transitory.
>
> https://gitlab.com/qemu-project/qemu/-/jobs/1784955950

Thanks, seems like a problem that appears on block devices with sector 
sizes greater than 512 bytes.  Since Kevin is on PTO, I’ll (try to) fix 
the test and send a v2.

Hanna



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PULL 00/13] Block layer patches
  2021-11-15 14:53 Kevin Wolf
  2021-11-15 20:55 ` Richard Henderson
@ 2021-11-15 20:59 ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-11-15 20:59 UTC (permalink / raw)
  To: Kevin Wolf, qemu-block; +Cc: peter.maydell, qemu-devel

Hi Kevin,

On 11/15/21 15:53, Kevin Wolf wrote:
> The following changes since commit 42f6c9179be4401974dd3a75ee72defd16b5092d:
> 
>   Merge tag 'pull-ppc-20211112' of https://github.com/legoater/qemu into staging (2021-11-12 12:28:25 +0100)
> 
> are available in the Git repository at:
> 
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
> 
> for you to fetch changes up to 7461272c5f6032436ef9032c091c0118539483e4:
> 
>   softmmu/qdev-monitor: fix use-after-free in qdev_set_id() (2021-11-15 15:49:46 +0100)
> 
> ----------------------------------------------------------------
> Block layer patches
> 
> - Fixes to image streaming job and block layer reconfiguration to make
>   iotest 030 pass again
> - docs: Deprecate incorrectly typed device_add arguments
> - file-posix: Fix alignment after reopen changing O_DIRECT
> 
> ----------------------------------------------------------------
> Hanna Reitz (10):
>       stream: Traverse graph after modification
>       block: Manipulate children list in .attach/.detach
>       block: Unite remove_empty_child and child_free
>       block: Drop detached child from ignore list
>       block: Pass BdrvChild ** to replace_child_noperm
>       block: Restructure remove_file_or_backing_child()
>       transactions: Invoke clean() after everything else
>       block: Let replace_child_tran keep indirect pointer
>       block: Let replace_child_noperm free children
>       iotests/030: Unthrottle parallel jobs in reverse
> 
> Kevin Wolf (2):
>       docs: Deprecate incorrectly typed device_add arguments
>       file-posix: Fix alignment after reopen changing O_DIRECT
> 
> Stefan Hajnoczi (1):
>       softmmu/qdev-monitor: fix use-after-free in qdev_set_id()
> 
>  docs/about/deprecated.rst   |  14 +++
>  include/qemu/transactions.h |   3 +
>  block.c                     | 233 +++++++++++++++++++++++++++++++++-----------
>  block/file-posix.c          |  20 +++-
>  block/stream.c              |   7 +-
>  softmmu/qdev-monitor.c      |   2 +-
>  util/transactions.c         |   8 +-
>  tests/qemu-iotests/030      |  11 ++-
>  tests/qemu-iotests/142      |  22 +++++
>  tests/qemu-iotests/142.out  |  15 +++
>  10 files changed, 269 insertions(+), 66 deletions(-)

Looking at current /staging I noticed iotest#142 failed,
build-tcg-disabled job:

+++ 142.out.bad
@@ -750,6 +750,7 @@
 --- Alignment after changing O_DIRECT ---
+qemu-io: Cannot get 'write' permission without 'resize': Image size is
not a multiple of request alignment
 read 42/42 bytes at offset 42
 42 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 read 42/42 bytes at offset 42

https://gitlab.com/qemu-project/qemu/-/jobs/1784955950#L2794



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PULL 00/13] Block layer patches
  2021-11-15 14:53 Kevin Wolf
@ 2021-11-15 20:55 ` Richard Henderson
  2021-11-16  8:49   ` Hanna Reitz
  2021-11-15 20:59 ` Philippe Mathieu-Daudé
  1 sibling, 1 reply; 26+ messages in thread
From: Richard Henderson @ 2021-11-15 20:55 UTC (permalink / raw)
  To: Kevin Wolf, qemu-block; +Cc: peter.maydell, qemu-devel

On 11/15/21 3:53 PM, Kevin Wolf wrote:
> The following changes since commit 42f6c9179be4401974dd3a75ee72defd16b5092d:
> 
>    Merge tag 'pull-ppc-20211112' of https://github.com/legoater/qemu into staging (2021-11-12 12:28:25 +0100)
> 
> are available in the Git repository at:
> 
>    git://repo.or.cz/qemu/kevin.git tags/for-upstream
> 
> for you to fetch changes up to 7461272c5f6032436ef9032c091c0118539483e4:
> 
>    softmmu/qdev-monitor: fix use-after-free in qdev_set_id() (2021-11-15 15:49:46 +0100)
> 
> ----------------------------------------------------------------
> Block layer patches
> 
> - Fixes to image streaming job and block layer reconfiguration to make
>    iotest 030 pass again
> - docs: Deprecate incorrectly typed device_add arguments
> - file-posix: Fix alignment after reopen changing O_DIRECT
> 
> ----------------------------------------------------------------
> Hanna Reitz (10):
>        stream: Traverse graph after modification
>        block: Manipulate children list in .attach/.detach
>        block: Unite remove_empty_child and child_free
>        block: Drop detached child from ignore list
>        block: Pass BdrvChild ** to replace_child_noperm
>        block: Restructure remove_file_or_backing_child()
>        transactions: Invoke clean() after everything else
>        block: Let replace_child_tran keep indirect pointer
>        block: Let replace_child_noperm free children
>        iotests/030: Unthrottle parallel jobs in reverse
> 
> Kevin Wolf (2):
>        docs: Deprecate incorrectly typed device_add arguments
>        file-posix: Fix alignment after reopen changing O_DIRECT
> 
> Stefan Hajnoczi (1):
>        softmmu/qdev-monitor: fix use-after-free in qdev_set_id()
> 
>   docs/about/deprecated.rst   |  14 +++
>   include/qemu/transactions.h |   3 +
>   block.c                     | 233 +++++++++++++++++++++++++++++++++-----------
>   block/file-posix.c          |  20 +++-
>   block/stream.c              |   7 +-
>   softmmu/qdev-monitor.c      |   2 +-
>   util/transactions.c         |   8 +-
>   tests/qemu-iotests/030      |  11 ++-
>   tests/qemu-iotests/142      |  22 +++++
>   tests/qemu-iotests/142.out  |  15 +++
>   10 files changed, 269 insertions(+), 66 deletions(-)

This is failing iotest 142 for build-tcg-disabled.
I did retry, in case it was transitory.

https://gitlab.com/qemu-project/qemu/-/jobs/1784955950


r~



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PULL 00/13] Block layer patches
@ 2021-11-15 14:53 Kevin Wolf
  2021-11-15 20:55 ` Richard Henderson
  2021-11-15 20:59 ` Philippe Mathieu-Daudé
  0 siblings, 2 replies; 26+ messages in thread
From: Kevin Wolf @ 2021-11-15 14:53 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

The following changes since commit 42f6c9179be4401974dd3a75ee72defd16b5092d:

  Merge tag 'pull-ppc-20211112' of https://github.com/legoater/qemu into staging (2021-11-12 12:28:25 +0100)

are available in the Git repository at:

  git://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to 7461272c5f6032436ef9032c091c0118539483e4:

  softmmu/qdev-monitor: fix use-after-free in qdev_set_id() (2021-11-15 15:49:46 +0100)

----------------------------------------------------------------
Block layer patches

- Fixes to image streaming job and block layer reconfiguration to make
  iotest 030 pass again
- docs: Deprecate incorrectly typed device_add arguments
- file-posix: Fix alignment after reopen changing O_DIRECT

----------------------------------------------------------------
Hanna Reitz (10):
      stream: Traverse graph after modification
      block: Manipulate children list in .attach/.detach
      block: Unite remove_empty_child and child_free
      block: Drop detached child from ignore list
      block: Pass BdrvChild ** to replace_child_noperm
      block: Restructure remove_file_or_backing_child()
      transactions: Invoke clean() after everything else
      block: Let replace_child_tran keep indirect pointer
      block: Let replace_child_noperm free children
      iotests/030: Unthrottle parallel jobs in reverse

Kevin Wolf (2):
      docs: Deprecate incorrectly typed device_add arguments
      file-posix: Fix alignment after reopen changing O_DIRECT

Stefan Hajnoczi (1):
      softmmu/qdev-monitor: fix use-after-free in qdev_set_id()

 docs/about/deprecated.rst   |  14 +++
 include/qemu/transactions.h |   3 +
 block.c                     | 233 +++++++++++++++++++++++++++++++++-----------
 block/file-posix.c          |  20 +++-
 block/stream.c              |   7 +-
 softmmu/qdev-monitor.c      |   2 +-
 util/transactions.c         |   8 +-
 tests/qemu-iotests/030      |  11 ++-
 tests/qemu-iotests/142      |  22 +++++
 tests/qemu-iotests/142.out  |  15 +++
 10 files changed, 269 insertions(+), 66 deletions(-)



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PULL 00/13] Block layer patches
  2021-10-06 10:59 Kevin Wolf
@ 2021-10-06 15:49 ` Richard Henderson
  0 siblings, 0 replies; 26+ messages in thread
From: Richard Henderson @ 2021-10-06 15:49 UTC (permalink / raw)
  To: Kevin Wolf, qemu-block; +Cc: peter.maydell, qemu-devel

On 10/6/21 3:59 AM, Kevin Wolf wrote:
> The following changes since commit e3acc2c1961cbe22ca474cd5da4163b7bbf7cea3:
> 
>    tests/docker/dockerfiles: Bump fedora-i386-cross to fedora 34 (2021-10-05 16:40:39 -0700)
> 
> are available in the Git repository at:
> 
>    git://repo.or.cz/qemu/kevin.git tags/for-upstream
> 
> for you to fetch changes up to 3765315d4c84f9c0799744f43a314169baaccc05:
> 
>    iotests: Update for pylint 2.11.1 (2021-10-06 10:25:55 +0200)
> 
> ----------------------------------------------------------------
> Block layer patches
> 
> - Fix I/O errors because of incorrectly detected max_iov
> - Fix not white-listed copy-before-write
> - qemu-storage-daemon: Only display FUSE help when FUSE is built-in
> - iotests: update environment and linting configuration
> 
> ----------------------------------------------------------------
> Emanuele Giuseppe Esposito (1):
>        include/block.h: remove outdated comment
> 
> John Snow (5):
>        iotests: add 'qemu' package location to PYTHONPATH in testenv
>        iotests/linters: check mypy files all at once
>        iotests/mirror-top-perms: Adjust imports
>        iotests/migrate-bitmaps-test: delint
>        iotests: Update for pylint 2.11.1
> 
> Paolo Bonzini (1):
>        block: introduce max_hw_iov for use in scsi-generic
> 
> Philippe Mathieu-Daudé (1):
>        qemu-storage-daemon: Only display FUSE help when FUSE is built-in
> 
> Vladimir Sementsov-Ogievskiy (5):
>        block: implement bdrv_new_open_driver_opts()
>        block: bdrv_insert_node(): fix and improve error handling
>        block: bdrv_insert_node(): doc and style
>        block: bdrv_insert_node(): don't use bdrv_open()
>        iotests/image-fleecing: declare requirement of copy-before-write
> 
>   include/block/block.h                         |  8 ++-
>   include/block/block_int.h                     |  7 +++
>   include/sysemu/block-backend.h                |  1 +
>   block.c                                       | 79 ++++++++++++++++++++++-----
>   block/block-backend.c                         |  6 ++
>   block/file-posix.c                            |  2 +-
>   block/io.c                                    |  1 +
>   hw/scsi/scsi-generic.c                        |  2 +-
>   storage-daemon/qemu-storage-daemon.c          |  2 +
>   tests/qemu-iotests/iotests.py                 |  2 -
>   tests/qemu-iotests/testenv.py                 | 15 +++--
>   tests/qemu-iotests/testrunner.py              |  7 ++-
>   tests/qemu-iotests/235                        |  2 -
>   tests/qemu-iotests/297                        | 52 +++++++-----------
>   tests/qemu-iotests/300                        |  5 +-
>   tests/qemu-iotests/pylintrc                   |  6 +-
>   tests/qemu-iotests/tests/image-fleecing       |  1 +
>   tests/qemu-iotests/tests/migrate-bitmaps-test | 50 +++++++++--------
>   tests/qemu-iotests/tests/mirror-top-perms     | 12 ++--
>   19 files changed, 164 insertions(+), 96 deletions(-)

Applied, thanks.

r~



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PULL 00/13] Block layer patches
@ 2021-10-06 10:59 Kevin Wolf
  2021-10-06 15:49 ` Richard Henderson
  0 siblings, 1 reply; 26+ messages in thread
From: Kevin Wolf @ 2021-10-06 10:59 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, peter.maydell, qemu-devel

The following changes since commit e3acc2c1961cbe22ca474cd5da4163b7bbf7cea3:

  tests/docker/dockerfiles: Bump fedora-i386-cross to fedora 34 (2021-10-05 16:40:39 -0700)

are available in the Git repository at:

  git://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to 3765315d4c84f9c0799744f43a314169baaccc05:

  iotests: Update for pylint 2.11.1 (2021-10-06 10:25:55 +0200)

----------------------------------------------------------------
Block layer patches

- Fix I/O errors because of incorrectly detected max_iov
- Fix not white-listed copy-before-write
- qemu-storage-daemon: Only display FUSE help when FUSE is built-in
- iotests: update environment and linting configuration

----------------------------------------------------------------
Emanuele Giuseppe Esposito (1):
      include/block.h: remove outdated comment

John Snow (5):
      iotests: add 'qemu' package location to PYTHONPATH in testenv
      iotests/linters: check mypy files all at once
      iotests/mirror-top-perms: Adjust imports
      iotests/migrate-bitmaps-test: delint
      iotests: Update for pylint 2.11.1

Paolo Bonzini (1):
      block: introduce max_hw_iov for use in scsi-generic

Philippe Mathieu-Daudé (1):
      qemu-storage-daemon: Only display FUSE help when FUSE is built-in

Vladimir Sementsov-Ogievskiy (5):
      block: implement bdrv_new_open_driver_opts()
      block: bdrv_insert_node(): fix and improve error handling
      block: bdrv_insert_node(): doc and style
      block: bdrv_insert_node(): don't use bdrv_open()
      iotests/image-fleecing: declare requirement of copy-before-write

 include/block/block.h                         |  8 ++-
 include/block/block_int.h                     |  7 +++
 include/sysemu/block-backend.h                |  1 +
 block.c                                       | 79 ++++++++++++++++++++++-----
 block/block-backend.c                         |  6 ++
 block/file-posix.c                            |  2 +-
 block/io.c                                    |  1 +
 hw/scsi/scsi-generic.c                        |  2 +-
 storage-daemon/qemu-storage-daemon.c          |  2 +
 tests/qemu-iotests/iotests.py                 |  2 -
 tests/qemu-iotests/testenv.py                 | 15 +++--
 tests/qemu-iotests/testrunner.py              |  7 ++-
 tests/qemu-iotests/235                        |  2 -
 tests/qemu-iotests/297                        | 52 +++++++-----------
 tests/qemu-iotests/300                        |  5 +-
 tests/qemu-iotests/pylintrc                   |  6 +-
 tests/qemu-iotests/tests/image-fleecing       |  1 +
 tests/qemu-iotests/tests/migrate-bitmaps-test | 50 +++++++++--------
 tests/qemu-iotests/tests/mirror-top-perms     | 12 ++--
 19 files changed, 164 insertions(+), 96 deletions(-)



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PULL 00/13] Block layer patches
       [not found] <20200311154218.15532-1-kwolf@redhat.com>
  2020-03-12 13:46 ` Peter Maydell
@ 2020-03-12 17:34 ` Peter Maydell
  1 sibling, 0 replies; 26+ messages in thread
From: Peter Maydell @ 2020-03-12 17:34 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: QEMU Developers, Qemu-block

On Wed, 11 Mar 2020 at 15:42, Kevin Wolf <kwolf@redhat.com> wrote:
>
> The following changes since commit ba29883206d92a29ad5a466e679ccfc2ee6132ef:
>
>   Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20200310' into staging (2020-03-10 16:50:28 +0000)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to 8bb3b023f2055054ee119cb45b42d2b14be7fc8a:
>
>   qemu-iotests: adding LUKS cleanup for non-UTF8 secret error (2020-03-11 15:54:38 +0100)
>
> ----------------------------------------------------------------
> Block layer patches:
>
> - Relax restrictions for blockdev-snapshot (allows libvirt to do live
>   storage migration with blockdev-mirror)
> - luks: Delete created files when block_crypto_co_create_opts_luks fails
> - Fix memleaks in qmp_object_add


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.0
for any user-visible changes.

-- PMM


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PULL 00/13] Block layer patches
  2020-03-12 13:46 ` Peter Maydell
@ 2020-03-12 14:42   ` Kevin Wolf
  0 siblings, 0 replies; 26+ messages in thread
From: Kevin Wolf @ 2020-03-12 14:42 UTC (permalink / raw)
  To: Peter Maydell; +Cc: QEMU Developers, Qemu-block

Am 12.03.2020 um 14:46 hat Peter Maydell geschrieben:
> On Wed, 11 Mar 2020 at 15:42, Kevin Wolf <kwolf@redhat.com> wrote:
> > [...]
> 
> 
> iotest 030 hung on x86-64 Linux (Ubuntu):
> 
> [...]
> 
> I had to manually kill the offending QEMU process; resulting
> output in the log:
> 
> [...]
> +Exception: Timeout waiting for job to pause
> +
>  ----------------------------------------------------------------------

For the record (discussed on IRC):

This seems to be an intermittent failure where a short timeout (1 second)
might have been too short under heavy load. That this results in a hang
rather than a clean test failure is a test case bug, but it already
exists on master.
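
Something like the following would tolerate load spikes better than a
single short wait. This is a rough, untested sketch: the helper name
wait_block_job_paused, the 60-second default, and the 'vm' parameter are
invented for illustration; it is not what iotests.py actually does.

    # Hypothetical poll-until-deadline helper; 'vm' is assumed to expose
    # qmp() in the style of python/qemu/machine.py.
    import time

    def wait_block_job_paused(vm, device, timeout=60.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            result = vm.qmp('query-block-jobs')
            for job in result.get('return', []):
                # The job has actually stopped once it reports
                # paused=True and busy=False.
                if job['device'] == device and job['paused'] and not job['busy']:
                    return job
            time.sleep(0.1)  # keep polling instead of giving up after 1 s
        raise TimeoutError('Timeout waiting for job %s to pause' % device)

Tearing the VM down in a try/finally around the wait would also avoid
leaving behind the qemu-system process that had to be killed by hand.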

Peter will retry the test later.

Kevin



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PULL 00/13] Block layer patches
       [not found] <20200311154218.15532-1-kwolf@redhat.com>
@ 2020-03-12 13:46 ` Peter Maydell
  2020-03-12 14:42   ` Kevin Wolf
  2020-03-12 17:34 ` Peter Maydell
  1 sibling, 1 reply; 26+ messages in thread
From: Peter Maydell @ 2020-03-12 13:46 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: QEMU Developers, Qemu-block

On Wed, 11 Mar 2020 at 15:42, Kevin Wolf <kwolf@redhat.com> wrote:
>
> The following changes since commit ba29883206d92a29ad5a466e679ccfc2ee6132ef:
>
>   Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20200310' into staging (2020-03-10 16:50:28 +0000)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to 8bb3b023f2055054ee119cb45b42d2b14be7fc8a:
>
>   qemu-iotests: adding LUKS cleanup for non-UTF8 secret error (2020-03-11 15:54:38 +0100)
>
> ----------------------------------------------------------------
> Block layer patches:
>
> - Relax restrictions for blockdev-snapshot (allows libvirt to do live
>   storage migration with blockdev-mirror)
> - luks: Delete created files when block_crypto_co_create_opts_luks fails
> - Fix memleaks in qmp_object_add
>
> ----------------------------------------------------------------


iotest 030 hung on x86-64 Linux (Ubuntu):

petmay01 11801  0.0  0.0  34668 26112 ?        S    11:24   0:03  |  \_ make --output-sync -C build/alldbg check V=1 -j8
petmay01 15277  0.0  0.0   4628   792 ?        S    11:25   0:00  |      \_ /bin/sh /home/petmay01/linaro/qemu-for-merges/tests/check-block.sh
petmay01 15344  0.0  0.0  14172  3360 ?        S    11:25   0:00  |          \_ bash ./check -makecheck -qcow2 -g auto
petmay01 27902  0.0  0.0  14172  2128 ?        S    11:25   0:00  |              \_ bash ./check -makecheck -qcow2 -g auto
petmay01 27903  0.0  0.0  52660 16400 ?        S    11:25   0:00  |                  \_ python3 -B 030
petmay01  1728  0.0  0.1 1011792 51604 ?       Sl   11:26   0:01  |                      \_ /home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotests/../../x86_64-softmmu/qemu-system-x86_64 -display none -vga none -chardev socket,id=mon,path=/tmp/tmp.QBQTAAybTi/qemu-27903-monitor.sock -mon chardev=mon,mode=control -qtest unix:path=/tmp/tmp.QBQTAAybTi/qemu-27903-qtest.sock -accel qtest -nodefaults -display none -accel qtest -drive if=virtio,id=drive0,file=blkdebug::/home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotests/scratch/test.img,format=qcow2,cache=writeback,aio=threads,backing.node-name=mid,backing.backing.node-name=base

I had to manually kill the offending QEMU process; resulting
output in the log:

--- /home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/030.out  2019-07-15 17:18:35.251364738 +0100
+++ /home/petmay01/linaro/qemu-for-merges/build/alldbg/tests/qemu-iotests/030.out.bad  2020-03-12 13:44:43.101182680 +0000
@@ -1,5 +1,27 @@
-...........................
+........................E..
+======================================================================
+ERROR: test_stream_pause (__main__.TestSingleDrive)
+----------------------------------------------------------------------
+Traceback (most recent call last):
+  File "030", line 93, in test_stream_pause
+    self.pause_wait('drive0')
+  File "/home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/iotests.py", line 927, in pause_wait
+    result = self.vm.qmp('query-block-jobs')
+  File "/home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/../../python/qemu/machine.py", line 405, in qmp
+    return self._qmp.cmd(cmd, args=qmp_args)
+  File "/home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/../../python/qemu/qmp.py", line 215, in cmd
+    return self.cmd_obj(qmp_cmd)
+  File "/home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/../../python/qemu/qmp.py", line 198, in cmd_obj
+    resp = self.__json_read()
+  File "/home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/../../python/qemu/qmp.py", line 89, in __json_read
+    data = self.__sockfile.readline()
+  File "/usr/lib/python3.6/socket.py", line 586, in readinto
+    return self._sock.recv_into(b)
+  File "/home/petmay01/linaro/qemu-for-merges/tests/qemu-iotests/iotests.py", line 383, in timeout
+    raise Exception(self.errmsg)
+Exception: Timeout waiting for job to pause
+
 ----------------------------------------------------------------------


thanks
-- PMM


^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2022-05-05 13:12 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-27 17:55 [PULL 00/13] Block layer patches Kevin Wolf
2020-01-27 17:55 ` [PULL 01/13] iotests.py: Let wait_migration wait even more Kevin Wolf
2020-01-27 17:55 ` [PULL 02/13] iotests: Add more "skip_if_unsupported" statements to the python tests Kevin Wolf
2020-01-27 17:55 ` [PULL 03/13] blockdev: fix coding style issues in drive_backup_prepare Kevin Wolf
2020-01-27 17:55 ` [PULL 04/13] blockdev: unify qmp_drive_backup and drive-backup transaction paths Kevin Wolf
2020-01-27 17:55 ` [PULL 05/13] blockdev: unify qmp_blockdev_backup and blockdev-backup " Kevin Wolf
2020-01-27 17:55 ` [PULL 06/13] blockdev: honor bdrv_try_set_aio_context() context requirements Kevin Wolf
2020-01-27 17:55 ` [PULL 07/13] block/backup-top: Don't acquire context while dropping top Kevin Wolf
2020-01-27 17:55 ` [PULL 08/13] blockdev: Acquire AioContext on dirty bitmap functions Kevin Wolf
2020-01-27 17:55 ` [PULL 09/13] blockdev: Return bs to the proper context on snapshot abort Kevin Wolf
2020-01-27 17:55 ` [PULL 10/13] iotests: Test handling of AioContexts with some blockdev actions Kevin Wolf
2020-01-27 17:55 ` [PULL 11/13] block/backup: fix memory leak in bdrv_backup_top_append() Kevin Wolf
2020-01-27 17:55 ` [PULL 12/13] iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711) Kevin Wolf
2020-01-27 17:55 ` [PULL 13/13] iscsi: Don't access non-existent scsi_lba_status_descriptor Kevin Wolf
2020-01-28  9:32 ` [PULL 00/13] Block layer patches Peter Maydell
     [not found] <20200311154218.15532-1-kwolf@redhat.com>
2020-03-12 13:46 ` Peter Maydell
2020-03-12 14:42   ` Kevin Wolf
2020-03-12 17:34 ` Peter Maydell
2021-10-06 10:59 Kevin Wolf
2021-10-06 15:49 ` Richard Henderson
2021-11-15 14:53 Kevin Wolf
2021-11-15 20:55 ` Richard Henderson
2021-11-16  8:49   ` Hanna Reitz
2021-11-15 20:59 ` Philippe Mathieu-Daudé
2022-05-04 14:25 Kevin Wolf
2022-05-05 13:09 ` Richard Henderson
