* [Qemu-devel] [PATCH] vl: introduce vm_shutdown()
From: Stefan Hajnoczi @ 2018-02-20 13:10 UTC
To: qemu-devel; +Cc: Fam Zheng, qemu-block, Kevin Wolf, Stefan Hajnoczi
Commit 00d09fdbbae5f7864ce754913efc84c12fdf9f1a ("vl: pause vcpus before
stopping iothreads") and commit dce8921b2baaf95974af8176406881872067adfa
("iothread: Stop threads before main() quits") tried to work around the
fact that emulation was still active during termination by stopping
iothreads. They suffer from race conditions:
1. virtio_scsi_handle_cmd_vq() racing with iothread_stop_all() hits the
virtio_scsi_ctx_check() assertion failure because the BDS AioContext
has been modified by iothread_stop_all().
2. Guest vq kick racing with main loop termination leaves a readable
ioeventfd that is handled by the next aio_poll() when external
clients are enabled again, resulting in unwanted emulation activity.
This patch obsoletes those commits by fully disabling emulation activity
when vcpus are stopped.
Use the new vm_shutdown() function instead of pause_all_vcpus() so that
vm change state handlers are invoked too. Virtio devices will now stop
their ioeventfds, preventing further emulation activity after vm_stop().
Note that vm_stop(RUN_STATE_SHUTDOWN) cannot be used because it emits a
QMP STOP event that may affect existing clients.
It is no longer necessary to call replay_disable_events() directly since
vm_shutdown() does so already.
Drop iothread_stop_all() since it is no longer used.
Cc: Fam Zheng <famz@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/iothread.h |  1 -
 include/sysemu/sysemu.h   |  1 +
 cpus.c                    | 16 +++++++++++++---
 iothread.c                | 31 -------------------------------
 vl.c                      | 13 +++----------
 5 files changed, 17 insertions(+), 45 deletions(-)
diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
index 799614ffd2..8a7ac2c528 100644
--- a/include/sysemu/iothread.h
+++ b/include/sysemu/iothread.h
@@ -45,7 +45,6 @@ typedef struct {
 char *iothread_get_id(IOThread *iothread);
 IOThread *iothread_by_id(const char *id);
 AioContext *iothread_get_aio_context(IOThread *iothread);
-void iothread_stop_all(void);
 GMainContext *iothread_get_g_main_context(IOThread *iothread);
 
 /*
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 77bb3da582..54f91dbc03 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -55,6 +55,7 @@ void vm_start(void);
 int vm_prepare_start(void);
 int vm_stop(RunState state);
 int vm_stop_force_state(RunState state);
+int vm_shutdown(void);
 
 typedef enum WakeupReason {
     /* Always keep QEMU_WAKEUP_REASON_NONE = 0 */
diff --git a/cpus.c b/cpus.c
index f298b659f4..90279f73fc 100644
--- a/cpus.c
+++ b/cpus.c
@@ -993,7 +993,7 @@ void cpu_synchronize_all_pre_loadvm(void)
     }
 }
 
-static int do_vm_stop(RunState state)
+static int do_vm_stop(RunState state, bool send_stop)
 {
     int ret = 0;
 
@@ -1002,7 +1002,9 @@ static int do_vm_stop(RunState state)
         pause_all_vcpus();
         runstate_set(state);
         vm_state_notify(0, state);
-        qapi_event_send_stop(&error_abort);
+        if (send_stop) {
+            qapi_event_send_stop(&error_abort);
+        }
     }
 
     bdrv_drain_all();
@@ -1012,6 +1014,14 @@ static int do_vm_stop(RunState state)
     return ret;
 }
 
+/* Special vm_stop() variant for terminating the process. Historically clients
+ * did not expect a QMP STOP event and so we need to retain compatibility.
+ */
+int vm_shutdown(void)
+{
+    return do_vm_stop(RUN_STATE_SHUTDOWN, false);
+}
+
 static bool cpu_can_run(CPUState *cpu)
 {
     if (cpu->stop) {
@@ -2007,7 +2017,7 @@ int vm_stop(RunState state)
         return 0;
     }
 
-    return do_vm_stop(state);
+    return do_vm_stop(state, true);
 }
 
 /**
diff --git a/iothread.c b/iothread.c
index 4b9bbde4cd..68d92086e3 100644
--- a/iothread.c
+++ b/iothread.c
@@ -101,18 +101,6 @@ void iothread_stop(IOThread *iothread)
     qemu_thread_join(&iothread->thread);
 }
 
-static int iothread_stop_iter(Object *object, void *opaque)
-{
-    IOThread *iothread;
-
-    iothread = (IOThread *)object_dynamic_cast(object, TYPE_IOTHREAD);
-    if (!iothread) {
-        return 0;
-    }
-    iothread_stop(iothread);
-    return 0;
-}
-
 static void iothread_instance_init(Object *obj)
 {
     IOThread *iothread = IOTHREAD(obj);
@@ -333,25 +321,6 @@ IOThreadInfoList *qmp_query_iothreads(Error **errp)
     return head;
 }
 
-void iothread_stop_all(void)
-{
-    Object *container = object_get_objects_root();
-    BlockDriverState *bs;
-    BdrvNextIterator it;
-
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
-        AioContext *ctx = bdrv_get_aio_context(bs);
-        if (ctx == qemu_get_aio_context()) {
-            continue;
-        }
-        aio_context_acquire(ctx);
-        bdrv_set_aio_context(bs, qemu_get_aio_context());
-        aio_context_release(ctx);
-    }
-
-    object_child_foreach(container, iothread_stop_iter, NULL);
-}
-
 static gpointer iothread_g_main_context_init(gpointer opaque)
 {
     AioContext *ctx;
diff --git a/vl.c b/vl.c
index 81724f5f17..e7e7e13f86 100644
--- a/vl.c
+++ b/vl.c
@@ -4762,17 +4762,10 @@ int main(int argc, char **argv, char **envp)
     os_setup_post();
 
     main_loop();
-    replay_disable_events();
 
-    /* The ordering of the following is delicate. Stop vcpus to prevent new
-     * I/O requests being queued by the guest. Then stop IOThreads (this
-     * includes a drain operation and completes all request processing). At
-     * this point emulated devices are still associated with their IOThreads
-     * (if any) but no longer have any work to do. Only then can we close
-     * block devices safely because we know there is no more I/O coming.
-     */
-    pause_all_vcpus();
-    iothread_stop_all();
+    /* No more vcpu or device emulation activity beyond this point */
+    vm_shutdown();
+
     bdrv_close_all();
 
     res_free();
--
2.14.3
* Re: [Qemu-devel] [PATCH] vl: introduce vm_shutdown()
From: Fam Zheng @ 2018-02-23 8:20 UTC
To: Stefan Hajnoczi; +Cc: qemu-devel, qemu-block, Kevin Wolf, pbonzini
On Tue, 02/20 13:10, Stefan Hajnoczi wrote:
> 1. virtio_scsi_handle_cmd_vq() racing with iothread_stop_all() hits the
> virtio_scsi_ctx_check() assertion failure because the BDS AioContext
> has been modified by iothread_stop_all().
Does this patch fix the issue completely? IIUC virtio_scsi_handle_cmd can
already have been entered when the main thread calls virtio_scsi_clear_aio(),
so this race condition still exists:
main thread                              iothread
-----------------------------------------------------------------------------
vm_shutdown
  ...
  virtio_bus_stop_ioeventfd
    virtio_scsi_dataplane_stop
      aio_poll()
                                         ...
                                         virtio_scsi_data_plane_handle_cmd()
                                           aio_context_acquire(s->ctx)
                                           virtio_scsi_acquire(s).enter
      virtio_scsi_clear_aio()
      aio_context_release(s->ctx)
                                           virtio_scsi_acquire(s).return
                                           virtio_scsi_handle_cmd_vq()
                                             ...
                                             virtqueue_pop()
Is it possible that the above virtqueue_pop() still returns one element that was
queued before vm_shutdown() was called?
If so, I think we additionally need an "s->ioeventfd_stopped" flag that is
set in virtio_scsi_stop_ioeventfd() and checked in
virtio_scsi_data_plane_handle_cmd().
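
For concreteness, a minimal sketch of that guard; s->ioeventfd_stopped and
this exact shape are hypothetical, nothing like it exists in the tree today:

    /* Hypothetical sketch: s->ioeventfd_stopped does not exist yet.  It
     * would be set by virtio_scsi_stop_ioeventfd() before draining, so a
     * racing dataplane handler becomes a no-op. */
    static void virtio_scsi_data_plane_handle_cmd(VirtIODevice *vdev,
                                                  VirtQueue *vq)
    {
        VirtIOSCSI *s = VIRTIO_SCSI(vdev);

        virtio_scsi_acquire(s);
        if (!s->ioeventfd_stopped) {
            virtio_scsi_handle_cmd_vq(s, vq);
        }
        virtio_scsi_release(s);
    }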
Fam
* Re: [Qemu-devel] [Qemu-block] [PATCH] vl: introduce vm_shutdown()
From: Stefan Hajnoczi @ 2018-02-27 15:30 UTC
To: Fam Zheng; +Cc: Stefan Hajnoczi, Kevin Wolf, pbonzini, qemu-devel, qemu-block
On Fri, Feb 23, 2018 at 04:20:44PM +0800, Fam Zheng wrote:
> On Tue, 02/20 13:10, Stefan Hajnoczi wrote:
> > 1. virtio_scsi_handle_cmd_vq() racing with iothread_stop_all() hits the
> > virtio_scsi_ctx_check() assertion failure because the BDS AioContext
> > has been modified by iothread_stop_all().
>
> Does this patch fix the issue completely? IIUC virtio_scsi_handle_cmd can
> already have been entered when the main thread calls virtio_scsi_clear_aio(),
> so this race condition still exists:
>
> main thread                              iothread
> -----------------------------------------------------------------------------
> vm_shutdown
>   ...
>   virtio_bus_stop_ioeventfd
>     virtio_scsi_dataplane_stop
>       aio_poll()
>                                          ...
>                                          virtio_scsi_data_plane_handle_cmd()
>                                            aio_context_acquire(s->ctx)
>                                            virtio_scsi_acquire(s).enter
>       virtio_scsi_clear_aio()
>       aio_context_release(s->ctx)
>                                            virtio_scsi_acquire(s).return
>                                            virtio_scsi_handle_cmd_vq()
>                                              ...
>                                              virtqueue_pop()
>
> Is it possible that the above virtqueue_pop() still returns one element that was
> queued before vm_shutdown() was called?
No, it can't, because virtio_scsi_clear_aio() invokes
virtio_queue_host_notifier_aio_read(&vq->host_notifier) to process the
virtqueue. By the time we get back to the iothread's
virtio_scsi_data_plane_handle_cmd() the virtqueue is already empty.

Vcpus have been paused so no additional elements can slip into the
virtqueue.
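
As a simplified sketch of that pattern (detach_host_notifier() is a
stand-in name, not the real QEMU API):

    /* Simplified sketch, not verbatim QEMU code: removing the handler
     * also consumes any pending kick and processes the virtqueue one
     * final time in the main thread, so the iothread finds it empty. */
    static void clear_vq_handler(AioContext *ctx, VirtQueue *vq)
    {
        detach_host_notifier(ctx, vq);  /* stand-in: stop watching ioeventfd */

        /* drain: read the notifier and pop all pending elements now */
        virtio_queue_host_notifier_aio_read(&vq->host_notifier);
    }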
* Re: [Qemu-devel] [Qemu-block] [PATCH] vl: introduce vm_shutdown()
From: Fam Zheng @ 2018-02-28 1:58 UTC
To: Stefan Hajnoczi
Cc: Stefan Hajnoczi, Kevin Wolf, pbonzini, qemu-devel, qemu-block
On Tue, 02/27 15:30, Stefan Hajnoczi wrote:
> On Fri, Feb 23, 2018 at 04:20:44PM +0800, Fam Zheng wrote:
> > On Tue, 02/20 13:10, Stefan Hajnoczi wrote:
> > > 1. virtio_scsi_handle_cmd_vq() racing with iothread_stop_all() hits the
> > > virtio_scsi_ctx_check() assertion failure because the BDS AioContext
> > > has been modified by iothread_stop_all().
> >
> > Does this patch fix the issue completely? IIUC virtio_scsi_handle_cmd can
> > already have been entered when the main thread calls virtio_scsi_clear_aio(),
> > so this race condition still exists:
> >
> > main thread                              iothread
> > -----------------------------------------------------------------------------
> > vm_shutdown
> >   ...
> >   virtio_bus_stop_ioeventfd
> >     virtio_scsi_dataplane_stop
> >       aio_poll()
> >                                          ...
> >                                          virtio_scsi_data_plane_handle_cmd()
> >                                            aio_context_acquire(s->ctx)
> >                                            virtio_scsi_acquire(s).enter
> >       virtio_scsi_clear_aio()
> >       aio_context_release(s->ctx)
> >                                            virtio_scsi_acquire(s).return
> >                                            virtio_scsi_handle_cmd_vq()
> >                                              ...
> >                                              virtqueue_pop()
> >
> > Is it possible that the above virtqueue_pop() still returns one element that was
> > queued before vm_shutdown() was called?
>
> No, it can't, because virtio_scsi_clear_aio() invokes
> virtio_queue_host_notifier_aio_read(&vq->host_notifier) to process the
> virtqueue. By the time we get back to the iothread's
> virtio_scsi_data_plane_handle_cmd() the virtqueue is already empty.
>
> Vcpus have been paused so no additional elements can slip into the
> virtqueue.
So there is:
static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
{
    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
    if (event_notifier_test_and_clear(n)) {
        virtio_queue_notify_aio_vq(vq);
    }
}
The guest kicks after adding an element to the VQ, but we check the
ioeventfd before trying virtqueue_pop(). Is that a problem? If VCPUs are
paused after enqueuing but before kicking the VQ, the ioeventfd is not set
and the virtqueue is not processed here.
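
To make the window concrete, a standalone illustration (plain Linux
eventfd, not QEMU code) of how edge-triggered test-and-clear notification
strands an element that was enqueued without a kick:

    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    static int queue_len;   /* stands in for the virtqueue */

    int main(void)
    {
        int kickfd = eventfd(0, EFD_NONBLOCK);
        eventfd_t val;

        queue_len++;                    /* guest enqueues an element... */
        /* ...and is paused here, before the kick: */
        /* eventfd_write(kickfd, 1); */

        /* consumer only processes the queue when a kick is visible */
        if (eventfd_read(kickfd, &val) == 0) {
            printf("kick seen, processing %d element(s)\n", queue_len);
        } else {
            printf("no kick: %d element(s) stranded\n", queue_len);
        }
        close(kickfd);
        return 0;
    }

Run as-is this prints "no kick: 1 element(s) stranded", which is exactly
the scenario above.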
Fam
* Re: [Qemu-devel] [Qemu-block] [PATCH] vl: introduce vm_shutdown()
From: Stefan Hajnoczi @ 2018-02-28 9:40 UTC
To: Fam Zheng; +Cc: Stefan Hajnoczi, Kevin Wolf, pbonzini, qemu-devel, qemu-block
On Wed, Feb 28, 2018 at 09:58:13AM +0800, Fam Zheng wrote:
> On Tue, 02/27 15:30, Stefan Hajnoczi wrote:
> > On Fri, Feb 23, 2018 at 04:20:44PM +0800, Fam Zheng wrote:
> > > On Tue, 02/20 13:10, Stefan Hajnoczi wrote:
> > > > 1. virtio_scsi_handle_cmd_vq() racing with iothread_stop_all() hits the
> > > > virtio_scsi_ctx_check() assertion failure because the BDS AioContext
> > > > has been modified by iothread_stop_all().
> > >
> > > Does this patch fix the issue completely? IIUC virtio_scsi_handle_cmd can
> > > already have been entered when the main thread calls virtio_scsi_clear_aio(),
> > > so this race condition still exists:
> > >
> > > main thread                              iothread
> > > -----------------------------------------------------------------------------
> > > vm_shutdown
> > >   ...
> > >   virtio_bus_stop_ioeventfd
> > >     virtio_scsi_dataplane_stop
> > >       aio_poll()
> > >                                          ...
> > >                                          virtio_scsi_data_plane_handle_cmd()
> > >                                            aio_context_acquire(s->ctx)
> > >                                            virtio_scsi_acquire(s).enter
> > >       virtio_scsi_clear_aio()
> > >       aio_context_release(s->ctx)
> > >                                            virtio_scsi_acquire(s).return
> > >                                            virtio_scsi_handle_cmd_vq()
> > >                                              ...
> > >                                              virtqueue_pop()
> > >
> > > Is it possible that the above virtqueue_pop() still returns one element that was
> > > queued before vm_shutdown() was called?
> >
> > No, it can't, because virtio_scsi_clear_aio() invokes
> > virtio_queue_host_notifier_aio_read(&vq->host_notifier) to process the
> > virtqueue. By the time we get back to the iothread's
> > virtio_scsi_data_plane_handle_cmd() the virtqueue is already empty.
> >
> > Vcpus have been paused so no additional elements can slip into the
> > virtqueue.
>
> So there is:
>
> static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
> {
>     VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
>     if (event_notifier_test_and_clear(n)) {
>         virtio_queue_notify_aio_vq(vq);
>     }
> }
>
> The guest kicks after adding an element to the VQ, but we check the
> ioeventfd before trying virtqueue_pop(). Is that a problem? If VCPUs are
> paused after enqueuing but before kicking the VQ, the ioeventfd is not set
> and the virtqueue is not processed here.
You are right.
This race condition also affects the existing 'stop' command, where
ioeventfd is disabled in the same way. I'll send a v2 with a patch to
fix this.
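
One possible shape for such a fix, purely as a hypothetical sketch (this is
not the actual v2; remove_aio_handler() is a stand-in name):

    /* Hypothetical: on stop, discard any stale kick, then process the
     * virtqueue unconditionally so that elements enqueued without a
     * kick are still drained before the device is quiesced. */
    static void vq_stop_and_drain(VirtQueue *vq)
    {
        remove_aio_handler(vq);                       /* stand-in helper */
        event_notifier_test_and_clear(&vq->host_notifier);
        virtio_queue_notify_aio_vq(vq);               /* drain regardless */
    }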
Stefan