From: Stefan Hajnoczi <stefanha@redhat.com>
To: John Snow <jsnow@redhat.com>
Cc: "Kevin Wolf" <kwolf@redhat.com>,
qemu-block@nongnu.org, "Xujun Ma" <xuma@redhat.com>,
qemu-devel@nongnu.org, "Max Reitz" <mreitz@redhat.com>,
"Yihuang Yu" <yihyu@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@redhat.com>
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH] virtio-blk: Cancel the pending BH when the dataplane is reset
Date: Thu, 22 Aug 2019 14:03:10 +0100 [thread overview]
Message-ID: <20190822130310.GG20491@stefanha-x1.localdomain> (raw)
In-Reply-To: <3e99f7fb-9c9b-de49-0997-c1e684159d4a@redhat.com>
On Fri, Aug 16, 2019 at 02:35:56PM -0400, John Snow wrote:
>
>
> On 8/16/19 1:15 PM, Philippe Mathieu-Daudé wrote:
> > When 'system_reset' is called, the main loop clears the memory
> > region cache before the BH has a chance to execute. Later, when
> > the deferred function runs, the assumptions that were made when
> > it was scheduled no longer hold.
> >
> > This is what happens using a virtio-blk device (fresh RHEL7.8 install):
> >
> > $ (sleep 12.3; echo system_reset; sleep 12.3; echo system_reset; sleep 1; echo q) \
> > | qemu-system-x86_64 -m 4G -smp 8 -boot menu=on \
> > -device virtio-blk-pci,id=image1,drive=drive_image1 \
> > -drive file=/var/lib/libvirt/images/rhel78.qcow2,if=none,id=drive_image1,format=qcow2,cache=none \
> > -device virtio-net-pci,netdev=net0,id=nic0,mac=52:54:00:c4:e7:84 \
> > -netdev tap,id=net0,script=/bin/true,downscript=/bin/true,vhost=on \
> > -monitor stdio -serial null -nographic
> > (qemu) system_reset
> > (qemu) system_reset
> > (qemu) qemu-system-x86_64: hw/virtio/virtio.c:225: vring_get_region_caches: Assertion `caches != NULL' failed.
> > Aborted
> >
> > (gdb) bt
> > Thread 1 (Thread 0x7f109c17b680 (LWP 10939)):
> > #0 0x00005604083296d1 in vring_get_region_caches (vq=0x56040a24bdd0) at hw/virtio/virtio.c:227
> > #1 0x000056040832972b in vring_avail_flags (vq=0x56040a24bdd0) at hw/virtio/virtio.c:235
> > #2 0x000056040832d13d in virtio_should_notify (vdev=0x56040a240630, vq=0x56040a24bdd0) at hw/virtio/virtio.c:1648
> > #3 0x000056040832d1f8 in virtio_notify_irqfd (vdev=0x56040a240630, vq=0x56040a24bdd0) at hw/virtio/virtio.c:1662
> > #4 0x00005604082d213d in notify_guest_bh (opaque=0x56040a243ec0) at hw/block/dataplane/virtio-blk.c:75
> > #5 0x000056040883dc35 in aio_bh_call (bh=0x56040a243f10) at util/async.c:90
> > #6 0x000056040883dccd in aio_bh_poll (ctx=0x560409161980) at util/async.c:118
> > #7 0x0000560408842af7 in aio_dispatch (ctx=0x560409161980) at util/aio-posix.c:460
> > #8 0x000056040883e068 in aio_ctx_dispatch (source=0x560409161980, callback=0x0, user_data=0x0) at util/async.c:261
> > #9 0x00007f10a8fca06d in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
> > #10 0x0000560408841445 in glib_pollfds_poll () at util/main-loop.c:215
> > #11 0x00005604088414bf in os_host_main_loop_wait (timeout=0) at util/main-loop.c:238
> > #12 0x00005604088415c4 in main_loop_wait (nonblocking=0) at util/main-loop.c:514
> > #13 0x0000560408416b1e in main_loop () at vl.c:1923
> > #14 0x000056040841e0e8 in main (argc=20, argv=0x7ffc2c3f9c58, envp=0x7ffc2c3f9d00) at vl.c:4578
> >
> > Fix this by cancelling the BH when the virtio dataplane is stopped.
> >
> > Reported-by: Yihuang Yu <yihyu@redhat.com>
> > Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> > Fixes: https://bugs.launchpad.net/qemu/+bug/1839428
> > Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > ---
> > hw/block/dataplane/virtio-blk.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> > index 9299a1a7c2..4030faa21d 100644
> > --- a/hw/block/dataplane/virtio-blk.c
> > +++ b/hw/block/dataplane/virtio-blk.c
> > @@ -301,6 +301,8 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
> > /* Clean up guest notifier (irq) */
> > k->set_guest_notifiers(qbus->parent, nvqs, false);
> >
> > + qemu_bh_cancel(s->bh);
> > +
> > vblk->dataplane_started = false;
> > s->stopping = false;
> > }
> >
>
> Naive question:
>
> Since we're canceling the BH here and we're stopping the device, do we
> need to do anything like clear out batch_notify_vqs? I assume that's
> going to be handled anyway in system_reset contexts; are there
> non-reset contexts where it matters?
Spurious guest notifications aren't a problem, but missing notifications
can hang the guest. I have proposed a modified version of this code
that ensures pending batched notifications are sent.
Stefan
Thread overview: 5+ messages
2019-08-16 17:15 [Qemu-devel] [PATCH] virtio-blk: Cancel the pending BH when the dataplane is reset Philippe Mathieu-Daudé
2019-08-16 18:35 ` [Qemu-devel] [Qemu-block] " John Snow
2019-08-22 13:03 ` Stefan Hajnoczi [this message]
2019-08-22 13:01 ` [Qemu-devel] " Stefan Hajnoczi
2019-08-22 14:56 ` Philippe Mathieu-Daudé