* [Qemu-devel] [PATCH 0/4] some migration bugs
@ 2017-02-25 19:31 Vladimir Sementsov-Ogievskiy
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 1/4] iotests: add migration corner cases test Vladimir Sementsov-Ogievskiy
                   ` (4 more replies)
  0 siblings, 5 replies; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-02-25 19:31 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, den, jsnow, vsementsov,
	lirans

Hi all!

Here are some migration-related bugs: two about INACTIVE bdses and one
use-after-free.

I'm not at all sure that these bugs should be fixed the way I'm fixing
them, but the problems definitely exist.

Reset in the stopped state is a strange case; maybe such usage should be
restricted.
About INACTIVE: it looks like it should be a separate run-state, not only
a bdrv flag.
The situation with the migration state, which is global but is
set/reset/changed in an uncontrolled manner, is not very good either.

Vladimir Sementsov-Ogievskiy (4):
  iotests: add migration corner cases test
  qmp-cont: invalidate on RUN_STATE_PRELAUNCH
  savevm: fix savevm after migration
  migration: fix use-after-free of to_dst_file

 block/snapshot.c           |  3 +-
 migration/savevm.c         | 16 +++++++++++
 qmp.c                      |  3 +-
 tests/qemu-iotests/175     | 71 ++++++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/175.out |  5 ++++
 tests/qemu-iotests/group   |  1 +
 6 files changed, 97 insertions(+), 2 deletions(-)
 create mode 100644 tests/qemu-iotests/175
 create mode 100644 tests/qemu-iotests/175.out

-- 
2.11.1


* [Qemu-devel] [PATCH 1/4] iotests: add migration corner cases test
  2017-02-25 19:31 [Qemu-devel] [PATCH 0/4] some migration bugs Vladimir Sementsov-Ogievskiy
@ 2017-02-25 19:31 ` Vladimir Sementsov-Ogievskiy
  2017-03-07  9:14   ` Fam Zheng
  2017-03-07 11:23   ` Dr. David Alan Gilbert
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-02-25 19:31 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, den, jsnow, vsementsov,
	lirans

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/175     | 71 ++++++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/175.out |  5 ++++
 tests/qemu-iotests/group   |  1 +
 3 files changed, 77 insertions(+)
 create mode 100644 tests/qemu-iotests/175
 create mode 100644 tests/qemu-iotests/175.out

diff --git a/tests/qemu-iotests/175 b/tests/qemu-iotests/175
new file mode 100644
index 0000000000..ef86c70db5
--- /dev/null
+++ b/tests/qemu-iotests/175
@@ -0,0 +1,71 @@
+#!/usr/bin/env python
+#
+# Test migration corner-cases
+#
+# Copyright (C) Vladimir Sementsov-Ogievskiy 2017
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import os
+import iotests
+import time
+from iotests import qemu_img
+
+disk = os.path.join(iotests.test_dir, 'disk')
+
+class TestMigrationCornerCases(iotests.QMPTestCase):
+    def setUp(self):
+        qemu_img('create', '-f', iotests.imgfmt, disk, '10M')
+        self.vm = iotests.VM().add_drive(disk)
+        self.vm.launch()
+
+    def tearDown(self):
+        self.vm.shutdown()
+        os.remove(disk)
+
+    def test_migrate_reset_cont_write(self):
+        result = self.vm.qmp('migrate', uri='exec:cat>/dev/null')
+        self.assert_qmp(result, 'return', {})
+        time.sleep(4)
+
+        result = self.vm.qmp('human-monitor-command',
+                             command_line='system_reset')
+        self.assert_qmp(result, 'return', '')
+
+        result = self.vm.qmp('cont')
+        self.assert_qmp(result, 'return', {})
+
+        result = self.vm.qmp('human-monitor-command',
+                             command_line='qemu-io drive0 "write 0 512"')
+        self.assert_qmp(result, 'return', '')
+
+    def test_migrate_savevm(self):
+        result = self.vm.qmp('migrate', uri='exec:cat>/dev/null')
+        self.assert_qmp(result, 'return', {})
+        time.sleep(4)
+
+        result = self.vm.qmp('human-monitor-command', command_line='savevm')
+        self.assert_qmp(result, 'return', '')
+
+    def test_savevm_set_speed_savevm(self):
+        for i in range(10):
+            result = self.vm.qmp('human-monitor-command', command_line='savevm')
+            self.assert_qmp(result, 'return', '')
+
+            result = self.vm.qmp('migrate_set_speed', value=9223372036853727232)
+            self.assert_qmp(result, 'return', {})
+
+if __name__ == '__main__':
+    iotests.main()
diff --git a/tests/qemu-iotests/175.out b/tests/qemu-iotests/175.out
new file mode 100644
index 0000000000..8d7e996700
--- /dev/null
+++ b/tests/qemu-iotests/175.out
@@ -0,0 +1,5 @@
+...
+----------------------------------------------------------------------
+Ran 3 tests
+
+OK
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index 985b9a6a36..1f4bf03185 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -167,3 +167,4 @@
 172 auto
 173 rw auto
 174 auto
+175 auto quick
-- 
2.11.1


* [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH
  2017-02-25 19:31 [Qemu-devel] [PATCH 0/4] some migration bugs Vladimir Sementsov-Ogievskiy
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 1/4] iotests: add migration corner cases test Vladimir Sementsov-Ogievskiy
@ 2017-02-25 19:31 ` Vladimir Sementsov-Ogievskiy
  2017-03-07  9:19   ` Fam Zheng
  2017-03-07 10:02   ` Kevin Wolf
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-02-25 19:31 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, den, jsnow, vsementsov,
	lirans

We must invalidate on RUN_STATE_PRELAUNCH too, as it is reachable via
qmp_system_reset from RUN_STATE_POSTMIGRATE. Otherwise we hit

qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
       Assertion `!(bs->open_flags & 0x0800)' failed.

on the first write after the VM starts.
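
The triggering sequence (the one exercised by test_migrate_reset_cont_write
in patch 1):

    (qemu) migrate "exec:cat>/dev/null"    # images inactivated on the source
    (qemu) system_reset                    # POSTMIGRATE -> PRELAUNCH
    (qemu) cont                            # PRELAUNCH not handled, no invalidate
    (qemu) qemu-io drive0 "write 0 512"    # asserts in bdrv_co_do_pwritev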

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 qmp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/qmp.c b/qmp.c
index dfaabac1a6..e61795d033 100644
--- a/qmp.c
+++ b/qmp.c
@@ -198,7 +198,8 @@ void qmp_cont(Error **errp)
     /* Continuing after completed migration. Images have been inactivated to
      * allow the destination to take control. Need to get control back now. */
     if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
-        runstate_check(RUN_STATE_POSTMIGRATE))
+        runstate_check(RUN_STATE_POSTMIGRATE) ||
+        runstate_check(RUN_STATE_PRELAUNCH))
     {
         bdrv_invalidate_cache_all(&local_err);
         if (local_err) {
-- 
2.11.1


* [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-02-25 19:31 [Qemu-devel] [PATCH 0/4] some migration bugs Vladimir Sementsov-Ogievskiy
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 1/4] iotests: add migration corner cases test Vladimir Sementsov-Ogievskiy
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH Vladimir Sementsov-Ogievskiy
@ 2017-02-25 19:31 ` Vladimir Sementsov-Ogievskiy
  2017-02-27  9:42   ` Denis V. Lunev
  2017-03-07  9:53   ` Kevin Wolf
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 4/4] migration: fix use-after-free of to_dst_file Vladimir Sementsov-Ogievskiy
  2017-03-06 14:23 ` [Qemu-devel] [PATCH 0/4] some migration bugs Denis V. Lunev
  4 siblings, 2 replies; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-02-25 19:31 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, den, jsnow, vsementsov,
	lirans

After migration all drives are inactive and savevm will fail with

qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
   Assertion `!(bs->open_flags & 0x0800)' failed.
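
Reproduced by the sequence in test_migrate_savevm (patch 1):

    (qemu) migrate "exec:cat>/dev/null"    # source images left inactive
    (qemu) savevm                          # asserts on the vmstate write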

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/snapshot.c   |  3 ++-
 migration/savevm.c | 11 +++++++++++
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/block/snapshot.c b/block/snapshot.c
index bf5c2ca5e1..256d06ac9f 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -145,7 +145,8 @@ bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
 int bdrv_can_snapshot(BlockDriverState *bs)
 {
     BlockDriver *drv = bs->drv;
-    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
+    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
+        (bs->open_flags & BDRV_O_INACTIVE)) {
         return 0;
     }
 
diff --git a/migration/savevm.c b/migration/savevm.c
index 5ecd264134..75e56d2d07 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2068,6 +2068,17 @@ int save_vmstate(Monitor *mon, const char *name)
     Error *local_err = NULL;
     AioContext *aio_context;
 
+    if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
+        runstate_check(RUN_STATE_POSTMIGRATE) ||
+        runstate_check(RUN_STATE_PRELAUNCH))
+    {
+        bdrv_invalidate_cache_all(&local_err);
+        if (local_err) {
+            error_report_err(local_err);
+            return -EINVAL;
+        }
+    }
+
     if (!bdrv_all_can_snapshot(&bs)) {
         monitor_printf(mon, "Device '%s' is writable but does not "
                        "support snapshots.\n", bdrv_get_device_name(bs));
-- 
2.11.1


* [Qemu-devel] [PATCH 4/4] migration: fix use-after-free of to_dst_file
  2017-02-25 19:31 [Qemu-devel] [PATCH 0/4] some migration bugs Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration Vladimir Sementsov-Ogievskiy
@ 2017-02-25 19:31 ` Vladimir Sementsov-Ogievskiy
  2017-02-27 10:44   ` Dr. David Alan Gilbert
  2017-02-28  9:59   ` Dr. David Alan Gilbert
  2017-03-06 14:23 ` [Qemu-devel] [PATCH 0/4] some migration bugs Denis V. Lunev
  4 siblings, 2 replies; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-02-25 19:31 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, den, jsnow, vsementsov,
	lirans

hmp_savevm calls qemu_savevm_state(f), which sets to_dst_file = f in the
global migration state. hmp_savevm then closes f (g_free is called).

Any later access to to_dst_file through the migration state (for example,
from qmp_migrate_set_speed) uses it after it has been freed.
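
The flow, roughly (a sketch of the call chain; function names as in
migration/ at this point of the series):

    hmp_savevm()
        qemu_savevm_state(f)             /* ms->to_dst_file = f */
        qemu_fclose(f)                   /* f freed; stale pointer remains */
    ...
    qmp_migrate_set_speed()
        s = migrate_get_current();
        qemu_file_set_rate_limit(s->to_dst_file, ...);   /* use-after-free */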

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 migration/savevm.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/migration/savevm.c b/migration/savevm.c
index 75e56d2d07..fcb8fd8acd 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1276,6 +1276,11 @@ done:
         status = MIGRATION_STATUS_COMPLETED;
     }
     migrate_set_state(&ms->state, MIGRATION_STATUS_SETUP, status);
+
+    /* f is an outer parameter; it must not stay in the global migration
+     * state after this function finishes */
+    ms->to_dst_file = NULL;
+
     return ret;
 }
 
-- 
2.11.1


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration Vladimir Sementsov-Ogievskiy
@ 2017-02-27  9:42   ` Denis V. Lunev
  2017-03-07  9:53   ` Kevin Wolf
  1 sibling, 0 replies; 32+ messages in thread
From: Denis V. Lunev @ 2017-02-27  9:42 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, jsnow, lirans

On 02/25/2017 10:31 PM, Vladimir Sementsov-Ogievskiy wrote:
> After migration all drives are inactive and savevm will fail with
>
> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>    Assertion `!(bs->open_flags & 0x0800)' failed.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/snapshot.c   |  3 ++-
>  migration/savevm.c | 11 +++++++++++
>  2 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/block/snapshot.c b/block/snapshot.c
> index bf5c2ca5e1..256d06ac9f 100644
> --- a/block/snapshot.c
> +++ b/block/snapshot.c
> @@ -145,7 +145,8 @@ bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
>  int bdrv_can_snapshot(BlockDriverState *bs)
>  {
>      BlockDriver *drv = bs->drv;
> -    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
> +    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
> +        (bs->open_flags & BDRV_O_INACTIVE)) {
>          return 0;
>      }
>  
In my opinion we do not need this hunk; it will do the wrong thing.


> diff --git a/migration/savevm.c b/migration/savevm.c
> index 5ecd264134..75e56d2d07 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -2068,6 +2068,17 @@ int save_vmstate(Monitor *mon, const char *name)
>      Error *local_err = NULL;
>      AioContext *aio_context;
>  
> +    if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
> +        runstate_check(RUN_STATE_POSTMIGRATE) ||
> +        runstate_check(RUN_STATE_PRELAUNCH))
> +    {
> +        bdrv_invalidate_cache_all(&local_err);
> +        if (local_err) {
> +            error_report_err(local_err);
> +            return -EINVAL;
> +        }
> +    }
> +
>      if (!bdrv_all_can_snapshot(&bs)) {
>          monitor_printf(mon, "Device '%s' is writable but does not "
>                         "support snapshots.\n", bdrv_get_device_name(bs));


* Re: [Qemu-devel] [PATCH 4/4] migration: fix use-after-free of to_dst_file
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 4/4] migration: fix use-after-free of to_dst_file Vladimir Sementsov-Ogievskiy
@ 2017-02-27 10:44   ` Dr. David Alan Gilbert
  2017-02-28  9:59   ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2017-02-27 10:44 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	quintela, mreitz, kwolf, peter.maydell, den, jsnow, lirans

* Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
> hmp_savevm calls qemu_savevm_state(f), which sets to_dst_file = f in the
> global migration state. hmp_savevm then closes f (g_free is called).
> 
> Any later access to to_dst_file through the migration state (for example,
> from qmp_migrate_set_speed) uses it after it has been freed.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/savevm.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 75e56d2d07..fcb8fd8acd 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1276,6 +1276,11 @@ done:
>          status = MIGRATION_STATUS_COMPLETED;
>      }
>      migrate_set_state(&ms->state, MIGRATION_STATUS_SETUP, status);
> +
> +    /* f is an outer parameter; it must not stay in the global migration
> +     * state after this function finishes */
> +    ms->to_dst_file = NULL;
> +
>      return ret;
>  }
>  
> -- 
> 2.11.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 4/4] migration: fix use-after-free of to_dst_file
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 4/4] migration: fix use-after-free of to_dst_file Vladimir Sementsov-Ogievskiy
  2017-02-27 10:44   ` Dr. David Alan Gilbert
@ 2017-02-28  9:59   ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2017-02-28  9:59 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	amit.shah, quintela, mreitz, kwolf, peter.maydell, den, jsnow,
	lirans

* Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
> hmp_savevm calls qemu_savevm_state(f), which sets to_dst_file = f in the
> global migration state. hmp_savevm then closes f (g_free is called).
> 
> Any later access to to_dst_file through the migration state (for example,
> from qmp_migrate_set_speed) uses it after it has been freed.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Queued just this one.

> ---
>  migration/savevm.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 75e56d2d07..fcb8fd8acd 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1276,6 +1276,11 @@ done:
>          status = MIGRATION_STATUS_COMPLETED;
>      }
>      migrate_set_state(&ms->state, MIGRATION_STATUS_SETUP, status);
> +
> +    /* f is an outer parameter; it must not stay in the global migration
> +     * state after this function finishes */
> +    ms->to_dst_file = NULL;
> +
>      return ret;
>  }
>  
> -- 
> 2.11.1
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 0/4] some migration bugs
  2017-02-25 19:31 [Qemu-devel] [PATCH 0/4] some migration bugs Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 4/4] migration: fix use-after-free of to_dst_file Vladimir Sementsov-Ogievskiy
@ 2017-03-06 14:23 ` Denis V. Lunev
  2017-03-20  9:44   ` [Qemu-devel] ping " Vladimir Sementsov-Ogievskiy
  4 siblings, 1 reply; 32+ messages in thread
From: Denis V. Lunev @ 2017-03-06 14:23 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, jsnow, lirans

On 02/25/2017 10:31 PM, Vladimir Sementsov-Ogievskiy wrote:
> Hi all!
>
> Here are some migration-related bugs: two about INACTIVE bdses and one
> use-after-free.
>
> I'm not at all sure that these bugs should be fixed the way I'm fixing
> them, but the problems definitely exist.
>
> Reset in the stopped state is a strange case; maybe such usage should be
> restricted.
> About INACTIVE: it looks like it should be a separate run-state, not only
> a bdrv flag.
> The situation with the migration state, which is global but is
> set/reset/changed in an uncontrolled manner, is not very good either.
>
> Vladimir Sementsov-Ogievskiy (4):
>   iotests: add migration corner cases test
>   qmp-cont: invalidate on RUN_STATE_PRELAUNCH
>   savevm: fix savevm after migration
>   migration: fix use-after-free of to_dst_file
>
>  block/snapshot.c           |  3 +-
>  migration/savevm.c         | 16 +++++++++++
>  qmp.c                      |  3 +-
>  tests/qemu-iotests/175     | 71 ++++++++++++++++++++++++++++++++++++++++++++++
>  tests/qemu-iotests/175.out |  5 ++++
>  tests/qemu-iotests/group   |  1 +
>  6 files changed, 97 insertions(+), 2 deletions(-)
>  create mode 100644 tests/qemu-iotests/175
>  create mode 100644 tests/qemu-iotests/175.out
>
guys, what about patches 1-3?

Den


* Re: [Qemu-devel] [PATCH 1/4] iotests: add migration corner cases test
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 1/4] iotests: add migration corner cases test Vladimir Sementsov-Ogievskiy
@ 2017-03-07  9:14   ` Fam Zheng
  2017-03-07 11:23   ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 32+ messages in thread
From: Fam Zheng @ 2017-03-07  9:14 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, stefanha,
	amit.shah, quintela, mreitz, kwolf, peter.maydell, dgilbert, den,
	jsnow, lirans

On Sat, 02/25 22:31, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  tests/qemu-iotests/175     | 71 ++++++++++++++++++++++++++++++++++++++++++++++
>  tests/qemu-iotests/175.out |  5 ++++
>  tests/qemu-iotests/group   |  1 +
>  3 files changed, 77 insertions(+)
>  create mode 100644 tests/qemu-iotests/175
>  create mode 100644 tests/qemu-iotests/175.out
> 
> diff --git a/tests/qemu-iotests/175 b/tests/qemu-iotests/175
> new file mode 100644
> index 0000000000..ef86c70db5
> --- /dev/null
> +++ b/tests/qemu-iotests/175
> @@ -0,0 +1,71 @@
> +#!/usr/bin/env python
> +#
> +# Test migration corner-cases
> +#
> +# Copyright (C) Vladimir Sementsov-Ogievskiy 2017
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 2 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +#
> +
> +import os
> +import iotests
> +import time
> +from iotests import qemu_img
> +
> +disk = os.path.join(iotests.test_dir, 'disk')
> +
> +class TestMigrationCornerCases(iotests.QMPTestCase):
> +    def setUp(self):
> +        qemu_img('create', '-f', iotests.imgfmt, disk, '10M')
> +        self.vm = iotests.VM().add_drive(disk)
> +        self.vm.launch()
> +
> +    def tearDown(self):
> +        self.vm.shutdown()
> +        os.remove(disk)
> +
> +    def test_migrate_reset_cont_write(self):
> +        result = self.vm.qmp('migrate', uri='exec:cat>/dev/null')
> +        self.assert_qmp(result, 'return', {})
> +        time.sleep(4)

Should you query the migration status instead of a blind sleep?
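
Something along these lines, maybe (untested sketch; polls the
query-migrate QMP command and its 'status' field instead of sleeping a
fixed time):

    def wait_migration(vm, timeout=30.0):
        deadline = time.time() + timeout
        while time.time() < deadline:
            result = vm.qmp('query-migrate')
            status = result.get('return', {}).get('status')
            if status == 'completed':
                return
            if status in ('failed', 'cancelled'):
                raise RuntimeError('migration ' + status)
            time.sleep(0.1)
        raise RuntimeError('timed out waiting for migration')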

> +
> +        result = self.vm.qmp('human-monitor-command',
> +                             command_line='system_reset')
> +        self.assert_qmp(result, 'return', '')
> +
> +        result = self.vm.qmp('cont')
> +        self.assert_qmp(result, 'return', {})
> +
> +        result = self.vm.qmp('human-monitor-command',
> +                             command_line='qemu-io drive0 "write 0 512"')
> +        self.assert_qmp(result, 'return', '')
> +
> +    def test_migrate_savevm(self):
> +        result = self.vm.qmp('migrate', uri='exec:cat>/dev/null')
> +        self.assert_qmp(result, 'return', {})
> +        time.sleep(4)

Ditto.

> +
> +        result = self.vm.qmp('human-monitor-command', command_line='savevm')
> +        self.assert_qmp(result, 'return', '')
> +
> +    def test_savevm_set_speed_savevm(self):
> +        for i in range(10):
> +            result = self.vm.qmp('human-monitor-command', command_line='savevm')
> +            self.assert_qmp(result, 'return', '')
> +
> +            result = self.vm.qmp('migrate_set_speed', value=9223372036853727232)
> +            self.assert_qmp(result, 'return', {})
> +
> +if __name__ == '__main__':
> +    iotests.main()
> diff --git a/tests/qemu-iotests/175.out b/tests/qemu-iotests/175.out
> new file mode 100644
> index 0000000000..8d7e996700
> --- /dev/null
> +++ b/tests/qemu-iotests/175.out
> @@ -0,0 +1,5 @@
> +...
> +----------------------------------------------------------------------
> +Ran 3 tests
> +
> +OK
> diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
> index 985b9a6a36..1f4bf03185 100644
> --- a/tests/qemu-iotests/group
> +++ b/tests/qemu-iotests/group
> @@ -167,3 +167,4 @@
>  172 auto
>  173 rw auto
>  174 auto
> +175 auto quick

Not sure it still qualifies as "quick" given the two "time.sleep(4)"
calls, but more importantly I wonder if the sleeps should be improved.

Fam


* Re: [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH Vladimir Sementsov-Ogievskiy
@ 2017-03-07  9:19   ` Fam Zheng
  2017-03-07 10:02   ` Kevin Wolf
  1 sibling, 0 replies; 32+ messages in thread
From: Fam Zheng @ 2017-03-07  9:19 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, stefanha,
	amit.shah, quintela, mreitz, kwolf, peter.maydell, dgilbert, den,
	jsnow, lirans

On Sat, 02/25 22:31, Vladimir Sementsov-Ogievskiy wrote:
> We must invalidate on RUN_STATE_PRELAUNCH too, as it is reachable via
> qmp_system_reset from RUN_STATE_POSTMIGRATE. Otherwise we hit
> 
> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>        Assertion `!(bs->open_flags & 0x0800)' failed.
> 
> on the first write after the VM starts.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  qmp.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/qmp.c b/qmp.c
> index dfaabac1a6..e61795d033 100644
> --- a/qmp.c
> +++ b/qmp.c
> @@ -198,7 +198,8 @@ void qmp_cont(Error **errp)
>      /* Continuing after completed migration. Images have been inactivated to
>       * allow the destination to take control. Need to get control back now. */
>      if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
> -        runstate_check(RUN_STATE_POSTMIGRATE))
> +        runstate_check(RUN_STATE_POSTMIGRATE) ||
> +        runstate_check(RUN_STATE_PRELAUNCH))
>      {
>          bdrv_invalidate_cache_all(&local_err);
>          if (local_err) {
> -- 
> 2.11.1
> 

Reviewed-by: Fam Zheng <famz@redhat.com>


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration Vladimir Sementsov-Ogievskiy
  2017-02-27  9:42   ` Denis V. Lunev
@ 2017-03-07  9:53   ` Kevin Wolf
  2017-03-07  9:59     ` Vladimir Sementsov-Ogievskiy
  2017-03-28 10:55     ` Dr. David Alan Gilbert
  1 sibling, 2 replies; 32+ messages in thread
From: Kevin Wolf @ 2017-03-07  9:53 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	amit.shah, quintela, mreitz, peter.maydell, dgilbert, den, jsnow,
	lirans

On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> After migration all drives are inactive and savevm will fail with
> 
> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>    Assertion `!(bs->open_flags & 0x0800)' failed.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

What's the exact state you're in? I tried to reproduce this, but just
doing a live migration and then savevm on the destination works fine for
me.

Hm... Or do you mean on the source? In that case, I think the operation
must fail, but of course more gracefully than now.

Actually, the question that you're asking implicitly here is how the
source qemu process should be "reactivated" after a failed migration.
Currently, as far as I know, this is only done by issuing a "cont" command.
It might make sense to provide a way to get control without resuming the
VM, but I doubt that adding automatic resume to every QMP command is the
right way to achieve it.

Dave, Juan, what do you think?

> diff --git a/block/snapshot.c b/block/snapshot.c
> index bf5c2ca5e1..256d06ac9f 100644
> --- a/block/snapshot.c
> +++ b/block/snapshot.c
> @@ -145,7 +145,8 @@ bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
>  int bdrv_can_snapshot(BlockDriverState *bs)
>  {
>      BlockDriver *drv = bs->drv;
> -    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
> +    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
> +        (bs->open_flags & BDRV_O_INACTIVE)) {
>          return 0;
>      }

I wasn't sure if this doesn't disable too much, but it seems it only
makes 'info snapshots' turn up empty, which might not be nice, but maybe
tolerable.

At least it should definitely fix the assertion.

> diff --git a/migration/savevm.c b/migration/savevm.c
> index 5ecd264134..75e56d2d07 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -2068,6 +2068,17 @@ int save_vmstate(Monitor *mon, const char *name)
>      Error *local_err = NULL;
>      AioContext *aio_context;
>  
> +    if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
> +        runstate_check(RUN_STATE_POSTMIGRATE) ||
> +        runstate_check(RUN_STATE_PRELAUNCH))
> +    {
> +        bdrv_invalidate_cache_all(&local_err);
> +        if (local_err) {
> +            error_report_err(local_err);
> +            return -EINVAL;
> +        }
> +    }
> +

This hunk can't go in before the more general question of implicitly or
explicitly regaining control after a failed migration is answered.

Kevin


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-07  9:53   ` Kevin Wolf
@ 2017-03-07  9:59     ` Vladimir Sementsov-Ogievskiy
  2017-03-07 11:01       ` Kevin Wolf
  2017-03-28 10:55     ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-03-07  9:59 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	amit.shah, quintela, mreitz, peter.maydell, dgilbert, den, jsnow,
	lirans

07.03.2017 12:53, Kevin Wolf wrote:
> On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
>> After migration all drives are inactive and savevm will fail with
>>
>> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>>     Assertion `!(bs->open_flags & 0x0800)' failed.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> What's the exact state you're in? I tried to reproduce this, but just
> doing a live migration and then savevm on the destination works fine for
> me.
>
> Hm... Or do you mean on the source? In that case, I think the operation
> must fail, but of course more gracefully than now.

Yes, I mean on the source. The migration may not be for "migration"
proper, but for dumping state to a file. In that case it does not seem
wrong to take a snapshot on the source.

>
> Actually, the question that you're asking implicitly here is how the
> source qemu process should be "reactivated" after a failed migration.
> Currently, as far as I know, this is only done by issuing a "cont" command.
> It might make sense to provide a way to get control without resuming the
> VM, but I doubt that adding automatic resume to every QMP command is the
> right way to achieve it.
>
> Dave, Juan, what do you think?
>
>> diff --git a/block/snapshot.c b/block/snapshot.c
>> index bf5c2ca5e1..256d06ac9f 100644
>> --- a/block/snapshot.c
>> +++ b/block/snapshot.c
>> @@ -145,7 +145,8 @@ bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
>>   int bdrv_can_snapshot(BlockDriverState *bs)
>>   {
>>       BlockDriver *drv = bs->drv;
>> -    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
>> +    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
>> +        (bs->open_flags & BDRV_O_INACTIVE)) {
>>           return 0;
>>       }
> I wasn't sure if this doesn't disable too much, but it seems it only
> makes 'info snapshots' turn up empty, which might not be nice, but maybe
> tolerable.
>
> At least it should definitely fix the assertion.
>
>> diff --git a/migration/savevm.c b/migration/savevm.c
>> index 5ecd264134..75e56d2d07 100644
>> --- a/migration/savevm.c
>> +++ b/migration/savevm.c
>> @@ -2068,6 +2068,17 @@ int save_vmstate(Monitor *mon, const char *name)
>>       Error *local_err = NULL;
>>       AioContext *aio_context;
>>   
>> +    if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
>> +        runstate_check(RUN_STATE_POSTMIGRATE) ||
>> +        runstate_check(RUN_STATE_PRELAUNCH))
>> +    {
>> +        bdrv_invalidate_cache_all(&local_err);
>> +        if (local_err) {
>> +            error_report_err(local_err);
>> +            return -EINVAL;
>> +        }
>> +    }
>> +
> This hunk can't go in before the more general question of implicitly or
> explicitly regaining control after a failed migration is answered.
>
> Kevin


-- 
Best regards,
Vladimir


* Re: [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH Vladimir Sementsov-Ogievskiy
  2017-03-07  9:19   ` Fam Zheng
@ 2017-03-07 10:02   ` Kevin Wolf
  2017-03-07 10:11     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 32+ messages in thread
From: Kevin Wolf @ 2017-03-07 10:02 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	amit.shah, quintela, mreitz, peter.maydell, dgilbert, den, jsnow,
	lirans

On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> We must invalidate on RUN_STATE_PRELAUNCH too, as it is reachable via
> qmp_system_reset from RUN_STATE_POSTMIGRATE. Otherwise we hit
> 
> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>        Assertion `!(bs->open_flags & 0x0800)' failed.
> 
> on the first write after the VM starts.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Wouldn't it make more sense to invalidate in qmp_system_reset() where
the migration states are left?

Or maybe BDRV_O_INACTIVE could even be tied directly to runstates? Not
sure how realistic this one is, but if we start adding invalidate_cache
calls all over the place, maybe that's a sign that we need to look for a
more central place.
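
A sketch of the first variant (error handling copied from qmp_cont();
whether POSTMIGRATE is the only runstate to catch here is an open
question):

    void qmp_system_reset(Error **errp)
    {
        if (runstate_check(RUN_STATE_POSTMIGRATE)) {
            Error *local_err = NULL;

            /* leaving a migration runstate: take the images back */
            bdrv_invalidate_cache_all(&local_err);
            if (local_err) {
                error_propagate(errp, local_err);
                return;
            }
        }
        qemu_system_reset_request();
    }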

Kevin

>  qmp.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/qmp.c b/qmp.c
> index dfaabac1a6..e61795d033 100644
> --- a/qmp.c
> +++ b/qmp.c
> @@ -198,7 +198,8 @@ void qmp_cont(Error **errp)
>      /* Continuing after completed migration. Images have been inactivated to
>       * allow the destination to take control. Need to get control back now. */
>      if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
> -        runstate_check(RUN_STATE_POSTMIGRATE))
> +        runstate_check(RUN_STATE_POSTMIGRATE) ||
> +        runstate_check(RUN_STATE_PRELAUNCH))
>      {
>          bdrv_invalidate_cache_all(&local_err);
>          if (local_err) {
> -- 
> 2.11.1
> 


* Re: [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH
  2017-03-07 10:02   ` Kevin Wolf
@ 2017-03-07 10:11     ` Vladimir Sementsov-Ogievskiy
  2017-03-07 10:22       ` Kevin Wolf
  2017-04-26 12:22       ` [Qemu-devel] [Qemu-block] " Kashyap Chamarthy
  0 siblings, 2 replies; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-03-07 10:11 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	amit.shah, quintela, mreitz, peter.maydell, dgilbert, den, jsnow,
	lirans

07.03.2017 13:02, Kevin Wolf wrote:
> On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
>> We must invalidate on RUN_STATE_PRELAUNCH too, as it is reachable via
>> qmp_system_reset from RUN_STATE_POSTMIGRATE. Otherwise we hit
>>
>> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>>         Assertion `!(bs->open_flags & 0x0800)' failed.
>>
>> on the first write after the VM starts.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Wouldn't it make more sense to invalidate in qmp_system_reset() where
> the migration states are left?
>
> Or maybe BDRV_O_INACTIVE could even be tied directly to runstates? Not
> sure how realistic this one is, but if we start adding invalidate_cache
> calls all over the place, maybe that's a sign that we need to look for a
> more central place.

I proposed that in the cover letter :) These bugs and my fixes just show
that something should be rethought. I don't claim that these fixes are
the true way; they are just the simplest.

> Kevin
>
>>   qmp.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/qmp.c b/qmp.c
>> index dfaabac1a6..e61795d033 100644
>> --- a/qmp.c
>> +++ b/qmp.c
>> @@ -198,7 +198,8 @@ void qmp_cont(Error **errp)
>>       /* Continuing after completed migration. Images have been inactivated to
>>        * allow the destination to take control. Need to get control back now. */
>>       if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
>> -        runstate_check(RUN_STATE_POSTMIGRATE))
>> +        runstate_check(RUN_STATE_POSTMIGRATE) ||
>> +        runstate_check(RUN_STATE_PRELAUNCH))
>>       {
>>           bdrv_invalidate_cache_all(&local_err);
>>           if (local_err) {
>> -- 
>> 2.11.1
>>


-- 
Best regards,
Vladimir


* Re: [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH
  2017-03-07 10:11     ` Vladimir Sementsov-Ogievskiy
@ 2017-03-07 10:22       ` Kevin Wolf
  2017-04-26 12:22       ` [Qemu-devel] [Qemu-block] " Kashyap Chamarthy
  1 sibling, 0 replies; 32+ messages in thread
From: Kevin Wolf @ 2017-03-07 10:22 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	amit.shah, quintela, mreitz, peter.maydell, dgilbert, den, jsnow,
	lirans

On 07.03.2017 at 11:11, Vladimir Sementsov-Ogievskiy wrote:
> 07.03.2017 13:02, Kevin Wolf wrote:
> >On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> >>We must invalidate on RUN_STATE_PRELAUNCH too, as it is reachable via
> >>qmp_system_reset from RUN_STATE_POSTMIGRATE. Otherwise we hit
> >>
> >>qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> >>        Assertion `!(bs->open_flags & 0x0800)' failed.
> >>
> >>on the first write after the VM starts.
> >>
> >>Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> >Wouldn't it make more sense to invalidate in qmp_system_reset() where
> >the migration states are left?
> >
> >Or maybe BDRV_O_INACTIVE could even be tied directly to runstates? Not
> >sure how realistic this one is, but if we start adding invalidate_cache
> >calls all over the place, maybe that's a sign that we need to look for a
> >more central place.
> 
> I proposed that in the cover letter :) These bugs and my fixes just
> show that something should be rethought. I don't claim that these
> fixes are the true way; they are just the simplest.

Ah, sorry, I read the cover letter a while ago, but not again before
reading the patches now. You're right that it's something that needs to
be addressed in some way. Your patches may be simple, but I think there
is no hope to ever get it completely correct with that approach because
there are so many cases (basically each QMP command is one case).

The cover letter seems to propose that INACTIVE could be a runstate. I
don't think that's quite right, because we already have many runstates
that would qualify as inactive in this sense, but that still need to be
separate. But possibly we can have a mapping from runstates to the
inactive flag.

One (implementation) problem with directly coupling inactive with
runstates is that runstate_set() can't fail at the moment, but
bdrv_inactivate/invalidate_cache() can.
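
Roughly (hypothetical helper; the missing error return in runstate_set()
is exactly the problem just mentioned):

    /* runstates in which another QEMU process may own the images */
    static bool runstate_needs_inactive(RunState state)
    {
        return state == RUN_STATE_FINISH_MIGRATE ||
               state == RUN_STATE_POSTMIGRATE ||
               state == RUN_STATE_PRELAUNCH;
    }

    void runstate_set(RunState new_state)
    {
        if (runstate_needs_inactive(current_run_state) &&
            !runstate_needs_inactive(new_state)) {
            /* bdrv_invalidate_cache_all() can fail; nowhere to report it */
            bdrv_invalidate_cache_all(&error_abort);
        }
        ...
        current_run_state = new_state;
    }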

Kevin


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-07  9:59     ` Vladimir Sementsov-Ogievskiy
@ 2017-03-07 11:01       ` Kevin Wolf
  2017-03-07 11:20         ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 32+ messages in thread
From: Kevin Wolf @ 2017-03-07 11:01 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	amit.shah, quintela, mreitz, peter.maydell, dgilbert, den, jsnow,
	lirans

On 07.03.2017 at 10:59, Vladimir Sementsov-Ogievskiy wrote:
> 07.03.2017 12:53, Kevin Wolf wrote:
> >On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> >>After migration all drives are inactive and savevm will fail with
> >>
> >>qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> >>    Assertion `!(bs->open_flags & 0x0800)' failed.
> >>
> >>Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> >What's the exact state you're in? I tried to reproduce this, but just
> >doing a live migration and then savevm on the destination works fine for
> >me.
> >
> >Hm... Or do you mean on the source? In that case, I think the operation
> >must fail, but of course more gracefully than now.
> 
> Yes, I mean on the source. The migration may not be for "migration"
> proper, but for dumping state to a file. In that case it does not seem
> wrong to take a snapshot on the source.

Yes, resuming on the source is definitely valid. I'm only questioning if
'savevm' (and by extension, any other command that modifies the VM or
its images) should automagically regain control of the VM, or whether
that should be more explicit.

Kevin


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-07 11:01       ` Kevin Wolf
@ 2017-03-07 11:20         ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-07 11:20 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel, pbonzini,
	armbru, eblake, famz, stefanha, amit.shah, quintela, mreitz,
	peter.maydell, den, jsnow, lirans

* Kevin Wolf (kwolf@redhat.com) wrote:
> On 07.03.2017 at 10:59, Vladimir Sementsov-Ogievskiy wrote:
> > 07.03.2017 12:53, Kevin Wolf wrote:
> > >On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> > >>After migration all drives are inactive and savevm will fail with
> > >>
> > >>qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> > >>    Assertion `!(bs->open_flags & 0x0800)' failed.
> > >>
> > >>Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > >What's the exact state you're in? I tried to reproduce this, but just
> > >doing a live migration and then savevm on the destination works fine for
> > >me.
> > >
> > >Hm... Or do you mean on the source? In that case, I think the operation
> > >must fail, but of course more gracefully than now.
> > 
> > Yes, I mean on the source. The migration may not be for "migration"
> > proper, but for dumping state to a file. In that case it does not seem
> > wrong to take a snapshot on the source.
> 
> Yes, resuming on the source is definitely valid. I'm only questioning if
> 'savevm' (and by extension, any other command that modifies the VM or
> its images) should automagically regain control of the VM, or whether
> that should be more explicit.

How many things other than 'cont' and 'savevm' are there?

We should definitely fix it so a savevm there doesn't assert,
but I'm also unsure if it's worth a separate explicit command.

Dave

> Kevin
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 1/4] iotests: add migration corner cases test
  2017-02-25 19:31 ` [Qemu-devel] [PATCH 1/4] iotests: add migration corner cases test Vladimir Sementsov-Ogievskiy
  2017-03-07  9:14   ` Fam Zheng
@ 2017-03-07 11:23   ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-07 11:23 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	quintela, mreitz, kwolf, peter.maydell, den, jsnow, lirans

* Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

If I understand this correctly, these are tests that currently fail and
are fixed by the following patches;  in which case it needs reordering
to keep stuff bisectable.

Dave

> ---
>  tests/qemu-iotests/175     | 71 ++++++++++++++++++++++++++++++++++++++++++++++
>  tests/qemu-iotests/175.out |  5 ++++
>  tests/qemu-iotests/group   |  1 +
>  3 files changed, 77 insertions(+)
>  create mode 100644 tests/qemu-iotests/175
>  create mode 100644 tests/qemu-iotests/175.out
> 
> diff --git a/tests/qemu-iotests/175 b/tests/qemu-iotests/175
> new file mode 100644
> index 0000000000..ef86c70db5
> --- /dev/null
> +++ b/tests/qemu-iotests/175
> @@ -0,0 +1,71 @@
> +#!/usr/bin/env python
> +#
> +# Test migration corner-cases
> +#
> +# Copyright (C) Vladimir Sementsov-Ogievskiy 2017
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 2 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +#
> +
> +import os
> +import iotests
> +import time
> +from iotests import qemu_img
> +
> +disk = os.path.join(iotests.test_dir, 'disk')
> +
> +class TestMigrationCornerCases(iotests.QMPTestCase):
> +    def setUp(self):
> +        qemu_img('create', '-f', iotests.imgfmt, disk, '10M')
> +        self.vm = iotests.VM().add_drive(disk)
> +        self.vm.launch()
> +
> +    def tearDown(self):
> +        self.vm.shutdown()
> +        os.remove(disk)
> +
> +    def test_migrate_reset_cont_write(self):
> +        result = self.vm.qmp('migrate', uri='exec:cat>/dev/null')
> +        self.assert_qmp(result, 'return', {})
> +        time.sleep(4)
> +
> +        result = self.vm.qmp('human-monitor-command',
> +                             command_line='system_reset')
> +        self.assert_qmp(result, 'return', '')
> +
> +        result = self.vm.qmp('cont')
> +        self.assert_qmp(result, 'return', {})
> +
> +        result = self.vm.qmp('human-monitor-command',
> +                             command_line='qemu-io drive0 "write 0 512"')
> +        self.assert_qmp(result, 'return', '')
> +
> +    def test_migrate_savevm(self):
> +        result = self.vm.qmp('migrate', uri='exec:cat>/dev/null')
> +        self.assert_qmp(result, 'return', {})
> +        time.sleep(4)
> +
> +        result = self.vm.qmp('human-monitor-command', command_line='savevm')
> +        self.assert_qmp(result, 'return', '')
> +
> +    def test_savevm_set_speed_savevm(self):
> +        for i in range(10):
> +            result = self.vm.qmp('human-monitor-command', command_line='savevm')
> +            self.assert_qmp(result, 'return', '')
> +
> +            result = self.vm.qmp('migrate_set_speed', value=9223372036853727232)
> +            self.assert_qmp(result, 'return', {})
> +
> +if __name__ == '__main__':
> +    iotests.main()
> diff --git a/tests/qemu-iotests/175.out b/tests/qemu-iotests/175.out
> new file mode 100644
> index 0000000000..8d7e996700
> --- /dev/null
> +++ b/tests/qemu-iotests/175.out
> @@ -0,0 +1,5 @@
> +...
> +----------------------------------------------------------------------
> +Ran 3 tests
> +
> +OK
> diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
> index 985b9a6a36..1f4bf03185 100644
> --- a/tests/qemu-iotests/group
> +++ b/tests/qemu-iotests/group
> @@ -167,3 +167,4 @@
>  172 auto
>  173 rw auto
>  174 auto
> +175 auto quick
> -- 
> 2.11.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* [Qemu-devel] ping Re: [PATCH 0/4] some migration bugs
  2017-03-06 14:23 ` [Qemu-devel] [PATCH 0/4] some migration bugs Denis V. Lunev
@ 2017-03-20  9:44   ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-03-20  9:44 UTC (permalink / raw)
  To: Denis V. Lunev, qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, jsnow, lirans

06.03.2017 17:23, Denis V. Lunev wrote:
> On 02/25/2017 10:31 PM, Vladimir Sementsov-Ogievskiy wrote:
>> Hi all!
>>
>> Here are some migration-related bugs: two about INACTIVE bdses and one
>> use-after-free.
>>
>> I'm not at all sure that these bugs should be fixed the way I'm fixing
>> them, but the problems definitely exist.
>>
>> Reset in the stopped state is a strange case; maybe such usage should be
>> restricted.
>> About INACTIVE: it looks like it should be a separate run-state, not only
>> a bdrv flag.
>> The situation with the migration state, which is global but is
>> set/reset/changed in an uncontrolled manner, is not very good either.
>>
>> Vladimir Sementsov-Ogievskiy (4):
>>    iotests: add migration corner cases test
>>    qmp-cont: invalidate on RUN_STATE_PRELAUNCH
>>    savevm: fix savevm after migration
>>    migration: fix use-after-free of to_dst_file
>>
>>   block/snapshot.c           |  3 +-
>>   migration/savevm.c         | 16 +++++++++++
>>   qmp.c                      |  3 +-
>>   tests/qemu-iotests/175     | 71 ++++++++++++++++++++++++++++++++++++++++++++++
>>   tests/qemu-iotests/175.out |  5 ++++
>>   tests/qemu-iotests/group   |  1 +
>>   6 files changed, 97 insertions(+), 2 deletions(-)
>>   create mode 100644 tests/qemu-iotests/175
>>   create mode 100644 tests/qemu-iotests/175.out
>>
> guys, what about patches 1-3?
>
> Den

At least patches 2 and 3, which fix an assertion? Is someone going to
work on adding a runstate?


-- 
Best regards,
Vladimir


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-07  9:53   ` Kevin Wolf
  2017-03-07  9:59     ` Vladimir Sementsov-Ogievskiy
@ 2017-03-28 10:55     ` Dr. David Alan Gilbert
  2017-03-28 11:09       ` Kevin Wolf
  2017-03-28 11:18       ` [Qemu-devel] " Denis V. Lunev
  1 sibling, 2 replies; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-28 10:55 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel, pbonzini,
	armbru, eblake, famz, stefanha, quintela, mreitz, peter.maydell,
	den, jsnow, lirans

* Kevin Wolf (kwolf@redhat.com) wrote:
> On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> > After migration all drives are inactive and savevm will fail with
> > 
> > qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> >    Assertion `!(bs->open_flags & 0x0800)' failed.
> > 
> > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> 
> What's the exact state you're in? I tried to reproduce this, but just
> doing a live migration and then savevm on the destination works fine for
> me.
> 
> Hm... Or do you mean on the source? In that case, I think the operation
> must fail, but of course more gracefully than now.
> 
> Actually, the question that you're asking implicitly here is how the
> source qemu process should be "reactivated" after a failed migration.
> Currently, as far as I know, this is only done by issuing a "cont" command.
> It might make sense to provide a way to get control without resuming the
> VM, but I doubt that adding automatic resume to every QMP command is the
> right way to achieve it.
> 
> Dave, Juan, what do you think?

I'd only ever really thought of 'cont' or retrying the migration.
However, it does make sense to me that you might want to do a savevm instead;
if you can't migrate then perhaps a savevm is the best you can do before
your machine dies.  Are there any other things that should be allowed?

We would want to be careful not to accidentally reactivate the disks on
the source after what was actually a successful migration.

As for the actual patch contents, I'd leave that to you to say if it's
OK from the block side of things.

Dave

> > diff --git a/block/snapshot.c b/block/snapshot.c
> > index bf5c2ca5e1..256d06ac9f 100644
> > --- a/block/snapshot.c
> > +++ b/block/snapshot.c
> > @@ -145,7 +145,8 @@ bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
> >  int bdrv_can_snapshot(BlockDriverState *bs)
> >  {
> >      BlockDriver *drv = bs->drv;
> > -    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
> > +    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
> > +        (bs->open_flags & BDRV_O_INACTIVE)) {
> >          return 0;
> >      }
> 
> I wasn't sure if this doesn't disable too much, but it seems it only
> makes 'info snapshots' turn up empty, which might not be nice, but maybe
> tolerable.
> 
> At least it should definitely fix the assertion.

Did Denis have some concerns about this chunk?

> > diff --git a/migration/savevm.c b/migration/savevm.c
> > index 5ecd264134..75e56d2d07 100644
> > --- a/migration/savevm.c
> > +++ b/migration/savevm.c
> > @@ -2068,6 +2068,17 @@ int save_vmstate(Monitor *mon, const char *name)
> >      Error *local_err = NULL;
> >      AioContext *aio_context;
> >  
> > +    if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
> > +        runstate_check(RUN_STATE_POSTMIGRATE) ||
> > +        runstate_check(RUN_STATE_PRELAUNCH))
> > +    {
> > +        bdrv_invalidate_cache_all(&local_err);
> > +        if (local_err) {
> > +            error_report_err(local_err);
> > +            return -EINVAL;
> > +        }
> > +    }
> > +
> 
> This hunk can't go in before the more general question of implicitly or
> explicitly regaining control after a failed migration is answered.
> 
> Kevin
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-28 10:55     ` Dr. David Alan Gilbert
@ 2017-03-28 11:09       ` Kevin Wolf
  2017-03-28 11:13         ` Dr. David Alan Gilbert
  2017-03-28 11:18       ` [Qemu-devel] " Denis V. Lunev
  1 sibling, 1 reply; 32+ messages in thread
From: Kevin Wolf @ 2017-03-28 11:09 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel, pbonzini,
	armbru, eblake, famz, stefanha, quintela, mreitz, peter.maydell,
	den, jsnow, lirans

On 28.03.2017 at 12:55, Dr. David Alan Gilbert wrote:
> * Kevin Wolf (kwolf@redhat.com) wrote:
> > On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> > > After migration all drives are inactive and savevm will fail with
> > > 
> > > qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> > >    Assertion `!(bs->open_flags & 0x0800)' failed.
> > > 
> > > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > 
> > What's the exact state you're in? I tried to reproduce this, but just
> > doing a live migration and then savevm on the destination works fine for
> > me.
> > 
> > Hm... Or do you mean on the source? In that case, I think the operation
> > must fail, but of course more gracefully than now.
> > 
> > Actually, the question that you're asking implicitly here is how the
> > source qemu process should be "reactivated" after a failed migration.
> > Currently, as far as I know, this is only done by issuing a "cont" command.
> > It might make sense to provide a way to get control without resuming the
> > VM, but I doubt that adding automatic resume to every QMP command is the
> > right way to achieve it.
> > 
> > Dave, Juan, what do you think?
> 
> I'd only ever really thought of 'cont' or retrying the migration.
> However, it does make sense to me that you might want to do a savevm
> instead; if you can't migrate then perhaps a savevm is the best you
> can do before your machine dies.  Are there any other things that
> should be allowed?

I think we need to ask the other way round: Any reason _not_ to allow
certain operations that you can normally perform on a stopped VM?

> We would want to be careful not to accidentally reactivate the disks
> on the source after what was actually a successful migration.

Yes, that's exactly my concern, even with savevm. That's why I suggested
we could have a 'cont'-like thing that just gets back control of the
images and moves into the normal paused state, but doesn't immediately
resume the actual VM.

Kevin


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-28 11:09       ` Kevin Wolf
@ 2017-03-28 11:13         ` Dr. David Alan Gilbert
  2017-03-28 12:09           ` Kevin Wolf
  0 siblings, 1 reply; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-28 11:13 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel, pbonzini,
	armbru, eblake, famz, stefanha, quintela, mreitz, peter.maydell,
	den, jsnow, lirans

* Kevin Wolf (kwolf@redhat.com) wrote:
> On 28.03.2017 at 12:55, Dr. David Alan Gilbert wrote:
> > * Kevin Wolf (kwolf@redhat.com) wrote:
> > > On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> > > > After migration all drives are inactive and savevm will fail with
> > > > 
> > > > qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> > > >    Assertion `!(bs->open_flags & 0x0800)' failed.
> > > > 
> > > > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > > 
> > > What's the exact state you're in? I tried to reproduce this, but just
> > > doing a live migration and then savevm on the destination works fine for
> > > me.
> > > 
> > > Hm... Or do you mean on the source? In that case, I think the operation
> > > must fail, but of course more gracefully than now.
> > > 
> > > Actually, the question that you're asking implicitly here is how the
> > > source qemu process should be "reactivated" after a failed migration.
> > > Currently, as far as I know, this is only done by issuing a "cont" command.
> > > It might make sense to provide a way to get control without resuming the
> > > VM, but I doubt that adding automatic resume to every QMP command is the
> > > right way to achieve it.
> > > 
> > > Dave, Juan, what do you think?
> > 
> > I'd only ever really thought of 'cont' or retrying the migration.
> > However, it does make sense to me that you might want to do a savevm
> > instead; if you can't migrate then perhaps a savevm is the best you
> > can do before your machine dies.  Are there any other things that
> > should be allowed?
> 
> I think we need to ask the other way round: Any reason _not_ to allow
> certain operations that you can normally perform on a stopped VM?
> 
> > We would want to be careful not to accidentally reactivate the disks
> > on the source after what was actually a successful migration.
> 
> Yes, that's exactly my concern, even with savevm. That's why I suggested
> we could have a 'cont'-like thing that just gets back control of the
> images and moves into the normal paused state, but doesn't immediately
> resume the actual VM.

OK, let's say we had that block-reactivate (for want of a better name),
how would we stop everything asserting if the user tried to do it 
before they'd run block-reactivate?

Dave

> Kevin
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-28 10:55     ` Dr. David Alan Gilbert
  2017-03-28 11:09       ` Kevin Wolf
@ 2017-03-28 11:18       ` Denis V. Lunev
  1 sibling, 0 replies; 32+ messages in thread
From: Denis V. Lunev @ 2017-03-28 11:18 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Kevin Wolf
  Cc: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel, pbonzini,
	armbru, eblake, famz, stefanha, quintela, mreitz, peter.maydell,
	jsnow, lirans

On 03/28/2017 01:55 PM, Dr. David Alan Gilbert wrote:
> * Kevin Wolf (kwolf@redhat.com) wrote:
>> On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
>>> After migration all drives are inactive and savevm will fail with
>>>
>>> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>>>    Assertion `!(bs->open_flags & 0x0800)' failed.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> What's the exact state you're in? I tried to reproduce this, but just
>> doing a live migration and then savevm on the destination works fine for
>> me.
>>
>> Hm... Or do you mean on the source? In that case, I think the operation
>> must fail, but of course more gracefully than now.
>>
>> Actually, the question that you're asking implicitly here is how the
>> source qemu process should be "reactivated" after a failed migration.
>> Currently, as far as I know, this is only done by issuing a "cont" command.
>> It might make sense to provide a way to get control without resuming the
>> VM, but I doubt that adding automatic resume to every QMP command is the
>> right way to achieve it.
>>
>> Dave, Juan, what do you think?
> I'd only ever really thought of 'cont' or retrying the migration.
> However, it does make sense to me that you might want to do a savevm instead;
> if you can't migrate then perhaps a savevm is the best you can do before
> your machine dies.  Are there any other things that should be allowed?
>
> We would want to be careful not to accidentally reactivate the disks on the source
> after what was actually a successful migration.
>
> As for the actual patch contents, I'd leave that to you to say if it's
> OK from the block side of things.
>
> Dave
>
>>> diff --git a/block/snapshot.c b/block/snapshot.c
>>> index bf5c2ca5e1..256d06ac9f 100644
>>> --- a/block/snapshot.c
>>> +++ b/block/snapshot.c
>>> @@ -145,7 +145,8 @@ bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
>>>  int bdrv_can_snapshot(BlockDriverState *bs)
>>>  {
>>>      BlockDriver *drv = bs->drv;
>>> -    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
>>> +    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
>>> +        (bs->open_flags & BDRV_O_INACTIVE)) {
>>>          return 0;
>>>      }
>> I wasn't sure whether this disables too much, but it seems it only
>> makes 'info snapshots' turn up empty, which might not be nice, but maybe
>> tolerable.
>>
>> At least it should definitely fix the assertion.
> Did Denis have some concerns about this chunk?
Yep. I really think that this check is unnecessary and wrong.
All the disks are in the INACTIVE state anyway, and we will face
the problem later, on the actual write. This exact operation
is sane.
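
(Concretely, "the problem later" is the check on the write path; the
assertion quoted in the commit message is just its expansion, with
BDRV_O_INACTIVE being 0x0800:

    assert(!(bs->open_flags & BDRV_O_INACTIVE));

so a write to an inactive image aborts there in any case.)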

Den


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-28 11:13         ` Dr. David Alan Gilbert
@ 2017-03-28 12:09           ` Kevin Wolf
  2017-03-28 13:16             ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 32+ messages in thread
From: Kevin Wolf @ 2017-03-28 12:09 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel, pbonzini,
	armbru, eblake, famz, stefanha, quintela, mreitz, peter.maydell,
	den, jsnow, lirans

On 28.03.2017 at 13:13, Dr. David Alan Gilbert wrote:
> * Kevin Wolf (kwolf@redhat.com) wrote:
> > On 28.03.2017 at 12:55, Dr. David Alan Gilbert wrote:
> > > * Kevin Wolf (kwolf@redhat.com) wrote:
> > > > On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> > > > > After migration all drives are inactive and savevm will fail with
> > > > > 
> > > > > qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> > > > >    Assertion `!(bs->open_flags & 0x0800)' failed.
> > > > > 
> > > > > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > > > 
> > > > What's the exact state you're in? I tried to reproduce this, but just
> > > > doing a live migration and then savevm on the destination works fine for
> > > > me.
> > > > 
> > > > Hm... Or do you mean on the source? In that case, I think the operation
> > > > must fail, but of course more gracefully than now.
> > > > 
> > > > Actually, the question that you're asking implicitly here is how the
> > > > source qemu process should be "reactivated" after a failed migration.
> > > > Currently, as far as I know, this is only done by issuing a "cont" command.
> > > > It might make sense to provide a way to get control without resuming the
> > > > VM, but I doubt that adding automatic resume to every QMP command is the
> > > > right way to achieve it.
> > > > 
> > > > Dave, Juan, what do you think?
> > > 
> > > I'd only ever really thought of 'cont' or retrying the migration.
> > > However, it does make sense to me that you might want to do a savevm
> > > instead; if you can't migrate then perhaps a savevm is the best you
> > > can do before your machine dies.  Are there any other things that
> > > should be allowed?
> > 
> > I think we need to ask the other way round: Any reason _not_ to allow
> > certain operations that you can normally perform on a stopped VM?
> > 
> > > We would want to be careful not to accidentally reactivate the disks
> > > on the source after what was actually a successful migration.
> > 
> > Yes, that's exactly my concern, even with savevm. That's why I suggested
> > we could have a 'cont'-like thing that just gets back control of the
> > images and moves into the normal paused state, but doesn't immediately
> > resume the actual VM.
> 
> OK, let's say we had that block-reactivate (for want of a better name),
> how would we stop everything asserting if the user tried to do it 
> before they'd run block-reactivate?

We would have to add checks to the monitor commands that assume that the
image is activated and error out if it isn't.

Maybe just adding the check to blk_is_available() would be enough, but
we'd have to check carefully whether it covers all cases and causes no
false positives.
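
To illustrate, here is a minimal untested sketch of what I mean, assuming
we simply fold the flag into the existing helper (the current body of
blk_is_available() may differ in detail):

    /* Sketch only: treat an inactive image as unavailable, so monitor
     * commands that rely on blk_is_available() fail gracefully instead
     * of tripping the assertion deep in the write path. */
    bool blk_is_available(BlockBackend *blk)
    {
        BlockDriverState *bs = blk_bs(blk);

        return blk_is_inserted(blk) &&
               !blk_dev_is_tray_open(blk) &&
               !(bs && (bs->open_flags & BDRV_O_INACTIVE));
    }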

By the way, I wouldn't call this 'block-reactivate' because I don't
think this should be a block-specific command. It's a VM lifecycle
command that switches from a postmigrate state (that assumes we have no
control over the VM's resources any more) to a paused state (where we do
have this control). Maybe something like 'migration-abort'.

Kevin


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-28 12:09           ` Kevin Wolf
@ 2017-03-28 13:16             ` Vladimir Sementsov-Ogievskiy
  2017-03-28 14:15               ` Paolo Bonzini
  0 siblings, 1 reply; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-03-28 13:16 UTC (permalink / raw)
  To: Kevin Wolf, Dr. David Alan Gilbert
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	quintela, mreitz, peter.maydell, den, jsnow, lirans

28.03.2017 15:09, Kevin Wolf wrote:
> On 28.03.2017 at 13:13, Dr. David Alan Gilbert wrote:
>> * Kevin Wolf (kwolf@redhat.com) wrote:
>>> On 28.03.2017 at 12:55, Dr. David Alan Gilbert wrote:
>>>> * Kevin Wolf (kwolf@redhat.com) wrote:
>>>>> On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> After migration all drives are inactive and savevm will fail with
>>>>>>
>>>>>> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>>>>>>     Assertion `!(bs->open_flags & 0x0800)' failed.
>>>>>>
>>>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>>> What's the exact state you're in? I tried to reproduce this, but just
>>>>> doing a live migration and then savevm on the destination works fine for
>>>>> me.
>>>>>
>>>>> Hm... Or do you mean on the source? In that case, I think the operation
>>>>> must fail, but of course more gracefully than now.
>>>>>
>>>>> Actually, the question that you're asking implicitly here is how the
>>>>> source qemu process should be "reactivated" after a failed migration.
>>>>> Currently, as far as I know, this is only done by issuing a "cont" command.
>>>>> It might make sense to provide a way to get control without resuming the
>>>>> VM, but I doubt that adding automatic resume to every QMP command is the
>>>>> right way to achieve it.
>>>>>
>>>>> Dave, Juan, what do you think?
>>>> I'd only ever really thought of 'cont' or retrying the migration.
>>>> However, it does make sense to me that you might want to do a savevm
>>>> instead; if you can't migrate then perhaps a savevm is the best you
>>>> can do before your machine dies.  Are there any other things that
>>>> should be allowed?
>>> I think we need to ask the other way round: Any reason _not_ to allow
>>> certain operations that you can normally perform on a stopped VM?
>>>
>>>> We would want to be careful not to accidentally reactivate the disks
>>>> on the source after what was actually a successful migration.
>>> Yes, that's exactly my concern, even with savevm. That's why I suggested
>>> we could have a 'cont'-like thing that just gets back control of the
>>> images and moves into the normal paused state, but doesn't immediately
>>> resume the actual VM.
>> OK, let's say we had that block-reactivate (for want of a better name),
>> how would we stop everything asserting if the user tried to do it
>> before they'd run block-reactivate?
> We would have to add checks to the monitor commands that assume that the
> image is activated and error out if it isn't.
>
> Maybe just adding the check to blk_is_available() would be enough, but
> we'd have to check carefully whether it covers all cases and causes no
> false positives.
>
> By the way, I wouldn't call this 'block-reactivate' because I don't
> think this should be a block-specific command. It's a VM lifecycle
> command that switches from a postmigrate state (that assumes we have no
> control over the VM's resources any more) to a paused state (where we do
> have this control). Maybe something like 'migration-abort'.

'abort' is not a very good name either, I think. The migration has completed,
so there is nothing to abort. (It may have been a successful migration to a
file for suspend, some kind of VM cloning, etc.)

>
> Kevin


-- 
Best regards,
Vladimir


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-28 13:16             ` Vladimir Sementsov-Ogievskiy
@ 2017-03-28 14:15               ` Paolo Bonzini
  2017-03-29 15:29                 ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 32+ messages in thread
From: Paolo Bonzini @ 2017-03-28 14:15 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, Kevin Wolf, Dr. David Alan Gilbert
  Cc: qemu-block, qemu-devel, armbru, eblake, famz, stefanha, quintela,
	mreitz, peter.maydell, den, jsnow, lirans



On 28/03/2017 15:16, Vladimir Sementsov-Ogievskiy wrote:
> 28.03.2017 15:09, Kevin Wolf wrote:
>> On 28.03.2017 at 13:13, Dr. David Alan Gilbert wrote:
>>> * Kevin Wolf (kwolf@redhat.com) wrote:
>>>> On 28.03.2017 at 12:55, Dr. David Alan Gilbert wrote:
>>>>> * Kevin Wolf (kwolf@redhat.com) wrote:
>>>>>> On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>> After migration all drives are inactive and savevm will fail with
>>>>>>>
>>>>>>> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>>>>>>>     Assertion `!(bs->open_flags & 0x0800)' failed.
>>>>>>>
>>>>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy
>>>>>>> <vsementsov@virtuozzo.com>
>>>>>> What's the exact state you're in? I tried to reproduce this, but just
>>>>>> doing a live migration and then savevm on the destination works
>>>>>> fine for
>>>>>> me.
>>>>>>
>>>>>> Hm... Or do you mean on the source? In that case, I think the
>>>>>> operation
>>>>>> must fail, but of course more gracefully than now.
>>>>>>
>>>>>> Actually, the question that you're asking implicitly here is how the
>>>>>> source qemu process should be "reactivated" after a failed migration.
>>>>>> Currently, as far as I know, this is only done by issuing a "cont"
>>>>>> command.
>>>>>> It might make sense to provide a way to get control without
>>>>>> resuming the
>>>>>> VM, but I doubt that adding automatic resume to every QMP command
>>>>>> is the
>>>>>> right way to achieve it.
>>>>>>
>>>>>> Dave, Juan, what do you think?
>>>>> I'd only ever really thought of 'cont' or retrying the migration.
>>>>> However, it does make sense to me that you might want to do a savevm
>>>>> instead; if you can't migrate then perhaps a savevm is the best you
>>>>> can do before your machine dies.  Are there any other things that
>>>>> should be allowed?
>>>> I think we need to ask the other way round: Any reason _not_ to allow
>>>> certain operations that you can normally perform on a stopped VM?
>>>>
>>>>> We would want to be careful not to accidentally reactivate the disks
>>>>> on the source after what was actually a successful migration.
>>>> Yes, that's exactly my concern, even with savevm. That's why I
>>>> suggested
>>>> we could have a 'cont'-like thing that just gets back control of the
>>>> images and moves into the normal paused state, but doesn't immediately
>>>> resume the actual VM.
>>> OK, let's say we had that block-reactivate (for want of a better name),
>>> how would we stop everything asserting if the user tried to do it
>>> before they'd run block-reactivate?
>> We would have to add checks to the monitor commands that assume that the
>> image is activated and error out if it isn't.
>>
>> Maybe just adding the check to blk_is_available() would be enough, but
>> we'd have to check carefully whether it covers all cases and causes no
>> false positives.
>>
>> By the way, I wouldn't call this 'block-reactivate' because I don't
>> think this should be a block-specific command. It's a VM lifecycle
>> command that switches from a postmigrate state (that assumes we have no
>> control over the VM's resources any more) to a paused state (where we do
>> have this control). Maybe something like 'migration-abort'.
> 
> 'abort' is not a very good name either, I think. The migration has
> completed, so there is nothing to abort. (It may have been a successful
> migration to a file for suspend, some kind of VM cloning, etc.)

There is already migrate_cancel.  Does it make sense to make it
reactivate fds if migration is completed?
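
As a rough, untested sketch of that idea (the helper name, the states to
check and the exact hook in the cancel path are all made up here):

    /* Sketch: if migration already reached a terminal state when the
     * user cancels, take back control of the images by invalidating
     * their caches, which clears BDRV_O_INACTIVE again. */
    static void migrate_cancel_reactivate_block(MigrationState *s)
    {
        Error *local_err = NULL;

        if (s->state == MIGRATION_STATUS_COMPLETED ||
            s->state == MIGRATION_STATUS_CANCELLED) {
            bdrv_invalidate_cache_all(&local_err);
            if (local_err) {
                error_report_err(local_err);
            }
        }
    }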

Paolo


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-28 14:15               ` Paolo Bonzini
@ 2017-03-29 15:29                 ` Dr. David Alan Gilbert
  2017-03-29 15:53                   ` Paolo Bonzini
  0 siblings, 1 reply; 32+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-29 15:29 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vladimir Sementsov-Ogievskiy, Kevin Wolf, qemu-block, qemu-devel,
	armbru, eblake, famz, stefanha, quintela, mreitz, peter.maydell,
	den, jsnow, lirans

* Paolo Bonzini (pbonzini@redhat.com) wrote:
> 
> 
> On 28/03/2017 15:16, Vladimir Sementsov-Ogievskiy wrote:
> > 28.03.2017 15:09, Kevin Wolf wrote:
> >> On 28.03.2017 at 13:13, Dr. David Alan Gilbert wrote:
> >>> * Kevin Wolf (kwolf@redhat.com) wrote:
> >>>> On 28.03.2017 at 12:55, Dr. David Alan Gilbert wrote:
> >>>>> * Kevin Wolf (kwolf@redhat.com) wrote:
> >>>>>> On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> >>>>>>> After migration all drives are inactive and savevm will fail with
> >>>>>>>
> >>>>>>> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> >>>>>>>     Assertion `!(bs->open_flags & 0x0800)' failed.
> >>>>>>>
> >>>>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy
> >>>>>>> <vsementsov@virtuozzo.com>
> >>>>>> What's the exact state you're in? I tried to reproduce this, but just
> >>>>>> doing a live migration and then savevm on the destination works
> >>>>>> fine for
> >>>>>> me.
> >>>>>>
> >>>>>> Hm... Or do you mean on the source? In that case, I think the
> >>>>>> operation
> >>>>>> must fail, but of course more gracefully than now.
> >>>>>>
> >>>>>> Actually, the question that you're asking implicitly here is how the
> >>>>>> source qemu process should be "reactivated" after a failed migration.
> >>>>>> Currently, as far as I know, this is only done by issuing a "cont"
> >>>>>> command.
> >>>>>> It might make sense to provide a way to get control without
> >>>>>> resuming the
> >>>>>> VM, but I doubt that adding automatic resume to every QMP command
> >>>>>> is the
> >>>>>> right way to achieve it.
> >>>>>>
> >>>>>> Dave, Juan, what do you think?
> >>>>> I'd only ever really thought of 'cont' or retrying the migration.
> >>>>> However, it does make sense to me that you might want to do a savevm
> >>>>> instead; if you can't migrate then perhaps a savevm is the best you
> >>>>> can do before your machine dies.  Are there any other things that
> >>>>> should be allowed?
> >>>> I think we need to ask the other way round: Any reason _not_ to allow
> >>>> certain operations that you can normally perform on a stopped VM?
> >>>>
> >>>>> We would want to be careful not to accidentally reactivate the disks
> >>>>> on the source after what was actually a successful migration.
> >>>> Yes, that's exactly my concern, even with savevm. That's why I
> >>>> suggested
> >>>> we could have a 'cont'-like thing that just gets back control of the
> >>>> images and moves into the normal paused state, but doesn't immediately
> >>>> resume the actual VM.
> >>> OK, let's say we had that block-reactivate (for want of a better name),
> >>> how would we stop everything asserting if the user tried to do it
> >>> before they'd run block-reactivate?
> >> We would have to add checks to the monitor commands that assume that the
> >> image is activated and error out if it isn't.
> >>
> >> Maybe just adding the check to blk_is_available() would be enough, but
> >> we'd have to check carefully whether it covers all cases and causes no
> >> false positives.
> >>
> >> By the way, I wouldn't call this 'block-reactivate' because I don't
> >> think this should be a block-specific command. It's a VM lifecycle
> >> command that switches from a postmigrate state (that assumes we have no
> >> control over the VM's resources any more) to a paused state (where we do
> >> have this control). Maybe something like 'migration-abort'.
> > 
> > 'abort' is not a very good name either, I think. The migration has
> > completed, so there is nothing to abort. (It may have been a successful
> > migration to a file for suspend, some kind of VM cloning, etc.)
> 
> There is already migrate_cancel.  Does it make sense to make it
> reactivate fds if migration is completed?

It's potentially racy to do that.
Imagine if your migration is almost finished and you issue a migrate_cancel,
what happens?
Maybe it cancelled it.
Maybe it just completed in time - and you really better not be accessing
the disks on the source unless you're sure the destination isn't running.

Dave

> Paolo
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
  2017-03-29 15:29                 ` Dr. David Alan Gilbert
@ 2017-03-29 15:53                   ` Paolo Bonzini
  2017-04-25 14:22                     ` [Qemu-devel] ping " Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 32+ messages in thread
From: Paolo Bonzini @ 2017-03-29 15:53 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Vladimir Sementsov-Ogievskiy, Kevin Wolf, qemu-block, qemu-devel,
	armbru, eblake, famz, stefanha, quintela, mreitz, peter.maydell,
	den, jsnow, lirans



On 29/03/2017 17:29, Dr. David Alan Gilbert wrote:
>>> 'abort' is not a very good name either, I think. The migration has
>>> completed, so there is nothing to abort. (It may have been a successful
>>> migration to a file for suspend, some kind of VM cloning, etc.)
>> There is already migrate_cancel.  Does it make sense to make it
>> reactivate fds if migration is completed?
> It's potentially racy to do that.
> Imagine if your migration is almost finished and you issue a migrate_cancel,
> what happens?
> Maybe it cancelled it.
> Maybe it just completed in time - and you really better not be accessing
> the disks on the source unless you're sure the destination isn't running.

If you want to avoid races, you probably have to use -S anyway on the
destination and do more handshaking between source and destination.  But
yes, just to be safe it'd be better to add a new flag (-f for HMP,
'cancel-completed-migration':true for QMP or something like that).
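
For illustration, the QMP side could then look like this (the argument is
only the placeholder name from above, nothing that exists today):

    { "execute": "migrate_cancel",
      "arguments": { "cancel-completed-migration": true } }

with "migrate_cancel -f" as the HMP spelling.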

Paolo


* [Qemu-devel] ping Re: [PATCH 3/4] savevm: fix savevm after migration
  2017-03-29 15:53                   ` Paolo Bonzini
@ 2017-04-25 14:22                     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-04-25 14:22 UTC (permalink / raw)
  To: Paolo Bonzini, Dr. David Alan Gilbert
  Cc: Kevin Wolf, qemu-block, qemu-devel, armbru, eblake, famz,
	stefanha, quintela, mreitz, peter.maydell, den, jsnow, lirans

29.03.2017 18:53, Paolo Bonzini wrote:
>
> On 29/03/2017 17:29, Dr. David Alan Gilbert wrote:
>>>> 'abort' is not a very good name either, I think. The migration has
>>>> completed, so there is nothing to abort. (It may have been a successful
>>>> migration to a file for suspend, some kind of VM cloning, etc.)
>>> There is already migrate_cancel.  Does it make sense to make it
>>> reactivate fds if migration is completed?
>> It's potentially racy to do that.
>> Imagine if your migration is almost finished and you issue a migrate_cancel,
>> what happens?
>> Maybe it cancelled it.
>> Maybe it just completed in time - and you really better not be accessing
>> the disks on the source unless you're sure the destination isn't running.
> If you want to avoid races, you probably have to use -S anyway on the
> destination and do more handshaking between source and destination.  But
> yes, just to be safe it'd be better to add a new flag (-f for HMP,
> 'cancel-completed-migration':true for QMP or something like that).
>
> Paolo

Looks like we haven't reached a final conclusion on this (solving the
problems noted in patches 2 and 3). Is anyone working on this now?

-- 
Best regards,
Vladimir


* Re: [Qemu-devel] [Qemu-block] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH
  2017-03-07 10:11     ` Vladimir Sementsov-Ogievskiy
  2017-03-07 10:22       ` Kevin Wolf
@ 2017-04-26 12:22       ` Kashyap Chamarthy
  2017-04-26 13:43         ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 32+ messages in thread
From: Kashyap Chamarthy @ 2017-04-26 12:22 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: Kevin Wolf, peter.maydell, famz, lirans, qemu-block, quintela,
	qemu-devel, armbru, stefanha, den, amit.shah, pbonzini, mreitz,
	dgilbert

On Tue, Mar 07, 2017 at 01:11:23PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 07.03.2017 13:02, Kevin Wolf wrote:
> > On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> > > We must invalidate on RUN_STATE_PRELAUNCH too, as it is available
> > > through qmp_system_reset from RUN_STATE_POSTMIGRATE. Otherwise, we will
> > > come to
> > > 
> > > qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> > >         Assertion `!(bs->open_flags & 0x0800)' failed.
> > > 
> > > on the first write after vm start.
> > > 
> > > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > Wouldn't it make more sense to invalidate in qmp_system_reset() where
> > the migration states are left?
> > 
> > Or maybe BDRV_O_INACTIVE could even be tied directly to runstates? Not
> > sure how realistic this one is, but if we start adding invalidate_cache
> > calls all over the place, maybe that's a sign that we need to look for a
> > more central place.
> 
> I've proposed it in the cover letter. :) These bugs and my fixes just show
> that something should be rethought. I don't claim that these fixes are the
> true way; they are just the simplest.

Hi Vladimir,

I wonder if you have a new version of your patch ("qmp-cont: invalidate
on RUN_STATE_PRELAUNCH").

Hailiang Zhang tells me in the thread below that your patch fixes
the issue:

  http://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg03925.html
  -- [QEMU-2.8] Source QEMU crashes with: "bdrv_co_pwritev: Assertion
  `!(bs->open_flags & BDRV_O_INACTIVE)' failed"

-- 
/kashyap


* Re: [Qemu-devel] [Qemu-block] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH
  2017-04-26 12:22       ` [Qemu-devel] [Qemu-block] " Kashyap Chamarthy
@ 2017-04-26 13:43         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 32+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-04-26 13:43 UTC (permalink / raw)
  To: Kashyap Chamarthy
  Cc: Kevin Wolf, peter.maydell, famz, lirans, qemu-block, quintela,
	qemu-devel, armbru, stefanha, den, amit.shah, pbonzini, mreitz,
	dgilbert

26.04.2017 15:22, Kashyap Chamarthy wrote:
> On Tue, Mar 07, 2017 at 01:11:23PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> 07.03.2017 13:02, Kevin Wolf wrote:
>>> On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
>>>> We must invalidate on RUN_STATE_PRELAUNCH too, as it is available
>>>> through qmp_system_reset from RUN_STATE_POSTMIGRATE. Otherwise, we will
>>>> come to
>>>>
>>>> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>>>>          Assertion `!(bs->open_flags & 0x0800)' failed.
>>>>
>>>> on the first write after vm start.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> Wouldn't it make more sense to invalidate in qmp_system_reset() where
>>> the migration states are left?
>>>
>>> Or maybe BDRV_O_INACTIVE could even be tied directly to runstates? Not
>>> sure how realistic this one is, but if we start adding invalidate_cache
>>> calls all over the place, maybe that's a sign that we need to look for a
>>> more central place.
>> I've proposed it in the cover letter. :) These bugs and my fixes just show
>> that something should be rethought. I don't claim that these fixes are the
>> true way; they are just the simplest.
> Hi Vladimir,
>
> I wonder if you have a new version of your patch ("qmp-cont: invalidate
> on RUN_STATE_PRELAUNCH").
>
> Hailiang Zhang tells me on this the below thread that your patch fixes
> the issue:
>
>    http://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg03925.html
>    -- [QEMU-2.8] Source QEMU crashes with: "bdrv_co_pwritev: Assertion
>    `!(bs->open_flags & BDRV_O_INACTIVE)' failed"
>

No, I don't have a new version; the discussion about the right way of
doing this is unfinished. This version is OK for the case it fixes.


-- 
Best regards,
Vladimir


Thread overview: 32+ messages
2017-02-25 19:31 [Qemu-devel] [PATCH 0/4] some migration bugs Vladimir Sementsov-Ogievskiy
2017-02-25 19:31 ` [Qemu-devel] [PATCH 1/4] iotests: add migration corner cases test Vladimir Sementsov-Ogievskiy
2017-03-07  9:14   ` Fam Zheng
2017-03-07 11:23   ` Dr. David Alan Gilbert
2017-02-25 19:31 ` [Qemu-devel] [PATCH 2/4] qmp-cont: invalidate on RUN_STATE_PRELAUNCH Vladimir Sementsov-Ogievskiy
2017-03-07  9:19   ` Fam Zheng
2017-03-07 10:02   ` Kevin Wolf
2017-03-07 10:11     ` Vladimir Sementsov-Ogievskiy
2017-03-07 10:22       ` Kevin Wolf
2017-04-26 12:22       ` [Qemu-devel] [Qemu-block] " Kashyap Chamarthy
2017-04-26 13:43         ` Vladimir Sementsov-Ogievskiy
2017-02-25 19:31 ` [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration Vladimir Sementsov-Ogievskiy
2017-02-27  9:42   ` Denis V. Lunev
2017-03-07  9:53   ` Kevin Wolf
2017-03-07  9:59     ` Vladimir Sementsov-Ogievskiy
2017-03-07 11:01       ` Kevin Wolf
2017-03-07 11:20         ` Dr. David Alan Gilbert
2017-03-28 10:55     ` Dr. David Alan Gilbert
2017-03-28 11:09       ` Kevin Wolf
2017-03-28 11:13         ` Dr. David Alan Gilbert
2017-03-28 12:09           ` Kevin Wolf
2017-03-28 13:16             ` Vladimir Sementsov-Ogievskiy
2017-03-28 14:15               ` Paolo Bonzini
2017-03-29 15:29                 ` Dr. David Alan Gilbert
2017-03-29 15:53                   ` Paolo Bonzini
2017-04-25 14:22                     ` [Qemu-devel] ping " Vladimir Sementsov-Ogievskiy
2017-03-28 11:18       ` [Qemu-devel] " Denis V. Lunev
2017-02-25 19:31 ` [Qemu-devel] [PATCH 4/4] migration: fix use-after-free of to_dst_file Vladimir Sementsov-Ogievskiy
2017-02-27 10:44   ` Dr. David Alan Gilbert
2017-02-28  9:59   ` Dr. David Alan Gilbert
2017-03-06 14:23 ` [Qemu-devel] [PATCH 0/4] some migration bugs Denis V. Lunev
2017-03-20  9:44   ` [Qemu-devel] ping " Vladimir Sementsov-Ogievskiy
