From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: Alexander Bulekov <alxndr@bu.edu>,
	qemu-devel@nongnu.org, Stefan Hajnoczi <stefanha@redhat.com>,
	dgilbert@redhat.com
Subject: Re: [PATCH] virtio: clean up when virtio_queue_set_rings() fails
Date: Thu, 6 Feb 2020 01:28:24 -0500	[thread overview]
Message-ID: <20200206012705-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20200205145657.GI58062@stefanha-x1.localdomain>

On Wed, Feb 05, 2020 at 02:56:57PM +0000, Stefan Hajnoczi wrote:
> On Wed, Feb 05, 2020 at 01:13:37AM -0500, Michael S. Tsirkin wrote:
> > On Tue, Feb 04, 2020 at 03:16:18PM +0000, Stefan Hajnoczi wrote:
> > > hw/virtio/virtio.c checks vq->vring.desc != NULL to see if the vring has
> > > been set up successfully.
> > > 
> > > When virtio_queue_set_rings() fails due to an invalid vring memory
> > > address it must clear vq->vring.desc (and related fields) so we don't
> > > treat this virtqueue as successfully initialized later on.
> > > 
> > > This bug was found by device fuzzing and can be triggered as follows:
> > > 
> > >   $ qemu -M pc -device virtio-blk-pci,id=drv0,drive=drive0,addr=4.0 \
> > >          -drive if=none,id=drive0,file=null-co://,format=raw,auto-read-only=off \
> > >          -drive if=none,id=drive1,file=null-co://,file.read-zeroes=on,format=raw \
> > >          -display none \
> > >          -qtest stdio
> > >   endianness
> > >   outl 0xcf8 0x80002020
> > >   outl 0xcfc 0xe0000000
> > >   outl 0xcf8 0x80002004
> > >   outw 0xcfc 0x7
> > >   write 0xe0000000 0x24 0x00ffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab5cffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab0000000001
> > >   inb 0x4
> > >   writew 0xe000001c 0x1
> > >   write 0xe0000014 0x1 0x0d
> > > 
> > > The following error message is produced:
> > > 
> > >   qemu-system-x86_64: /home/stefanha/qemu/hw/virtio/virtio.c:286: vring_get_region_caches: Assertion `caches != NULL' failed.
> > > 
> > > The backtrace looks like this:
> > > 
> > >   #0  0x00007ffff5520625 in raise () at /lib64/libc.so.6
> > >   #1  0x00007ffff55098d9 in abort () at /lib64/libc.so.6
> > >   #2  0x00007ffff55097a9 in _nl_load_domain.cold () at /lib64/libc.so.6
> > >   #3  0x00007ffff5518a66 in annobin_assert.c_end () at /lib64/libc.so.6
> > >   #4  0x00005555559073da in vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:286
> > >   #5  vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:283
> > >   #6  0x000055555590818d in vring_used_flags_set_bit (mask=1, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
> > >   #7  virtio_queue_split_set_notification (enable=0, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
> > >   #8  virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:451
> > >   #9  0x0000555555908512 in virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:444
> > >   #10 0x00005555558c697a in virtio_blk_handle_vq (s=0x5555575c57e0, vq=0x5555575ceea0) at qemu/hw/block/virtio-blk.c:775
> > >   #11 0x0000555555907836 in virtio_queue_notify_aio_vq (vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:2244
> > >   #12 0x0000555555cb5dd7 in aio_dispatch_handlers (ctx=ctx@entry=0x55555671a420) at util/aio-posix.c:429
> > >   #13 0x0000555555cb67a8 in aio_dispatch (ctx=0x55555671a420) at util/aio-posix.c:460
> > >   #14 0x0000555555cb307e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
> > >   #15 0x00007ffff7bbc510 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
> > >   #16 0x0000555555cb5848 in glib_pollfds_poll () at util/main-loop.c:219
> > >   #17 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
> > >   #18 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
> > >   #19 0x00005555559b20c9 in main_loop () at vl.c:1683
> > >   #20 0x0000555555838115 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4441
> > > 
> > > Reported-by: Alexander Bulekov <alxndr@bu.edu>
> > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > ---
> > >  hw/virtio/virtio.c | 5 +++++
> > >  1 file changed, 5 insertions(+)
> > > 
> > > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > > index 2c5410e981..5d7f619a1e 100644
> > > --- a/hw/virtio/virtio.c
> > > +++ b/hw/virtio/virtio.c
> > > @@ -2163,6 +2163,11 @@ void virtio_queue_set_rings(VirtIODevice *vdev, int n, hwaddr desc,
> > >      vdev->vq[n].vring.avail = avail;
> > >      vdev->vq[n].vring.used = used;
> > >      virtio_init_region_cache(vdev, n);
> > > +    if (vdev->broken) {
> > > +        vdev->vq[n].vring.desc = 0;
> > > +        vdev->vq[n].vring.avail = 0;
> > > +        vdev->vq[n].vring.used = 0;
> > > +    }
> > >  }
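[For readers outside the QEMU tree, the clear-on-failure pattern in the patch can be modeled in a self-contained sketch. All names here (Vq, set_rings, init_region_cache, RAM_SIZE) are hypothetical stand-ins, not QEMU's actual API; the point is only the shape of the fix: if initializing the region cache marks the device broken, undo the address assignments so later `desc != 0` checks don't see a half-initialized queue.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-in for a virtqueue's vring addresses (hypothetical
 * types, not QEMU's). desc != 0 serves as the "initialized" flag. */
typedef struct {
    uint64_t desc, avail, used;
} Vring;

typedef struct {
    Vring vring;
    bool broken;   /* set when mapping the ring memory fails */
} Vq;

/* Stand-in for virtio_init_region_cache(): pretend any address at or
 * above RAM_SIZE fails to map and marks the device broken. */
enum { RAM_SIZE = 0x10000000 };

static void init_region_cache(Vq *vq)
{
    if (vq->vring.desc >= RAM_SIZE) {
        vq->broken = true;
    }
}

/* Mirrors the patched virtio_queue_set_rings(): assign the addresses,
 * try to set up the cache, and if that marked the device broken, clear
 * the addresses again so the queue is not treated as initialized. */
static void set_rings(Vq *vq, uint64_t desc, uint64_t avail, uint64_t used)
{
    vq->vring.desc = desc;
    vq->vring.avail = avail;
    vq->vring.used = used;
    init_region_cache(vq);
    if (vq->broken) {
        vq->vring.desc = 0;
        vq->vring.avail = 0;
        vq->vring.used = 0;
    }
}
```

[Driven from a tiny test, set_rings() with an address the fake mapper rejects leaves desc at 0, so a later `desc != 0` check correctly sees an uninitialized queue.]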
> > 
> > OK, but now that you mention it, the guest can cause us to call
> > virtio_queue_set_rings() at any time.
> > If the device was running, there's still a window where desc is set
> > but the cache isn't.
> > 
> > Is this an issue?
> 
> Yes, this is an issue for devices that support iothreads because the
> virtqueue could be processed in the iothread while the vcpu thread
> enters the virtio.c code.
> 
> > I also worry desc is tested in a way that isn't safe for RCU ...
> > 
> > Should we convert most/all users to check the presence of region cache?
> 
> Yes.
> 
> I suspect there are additional issues because virtio.c doesn't use
> locking, except for vring.caches RCU. :(
> 
> Stefan

OK, so maybe convert checks of desc to checks of vring.caches,
wrapping it up in an API? And I guess virtio_init_region_cache()
should return an error status, given that it can fail ...
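[A hedged sketch of what such a wrapper API might look like. The naming here (Vq, vq_is_ready, vq_publish, Caches) is entirely hypothetical, and real QEMU code would use its rcu_read_lock()/qatomic_rcu_read() helpers rather than raw C11 atomics; the sketch only illustrates the idea that "queue is usable" should be judged by whether the region caches have been published, not by desc alone, closing the window where desc is set but the caches are not.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for VRingMemoryRegionCaches. */
typedef struct {
    uint64_t desc_len;
} Caches;

typedef struct {
    uint64_t desc;              /* guest address; racy to test on its own */
    _Atomic(Caches *) caches;   /* published last, cleared first */
} Vq;

/* Hypothetical wrapper: a queue counts as usable only once its region
 * caches have been published, not merely once desc has been written. */
static bool vq_is_ready(Vq *vq)
{
    return atomic_load_explicit(&vq->caches, memory_order_acquire) != NULL;
}

static void vq_publish(Vq *vq, uint64_t desc, Caches *c)
{
    vq->desc = desc;
    /* Release ordering: a reader that sees caches also sees desc. */
    atomic_store_explicit(&vq->caches, c, memory_order_release);
}
```

[With a wrapper like this, the many scattered `vring.desc != NULL` tests collapse into one function whose memory-ordering assumptions are stated in a single place.]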

-- 
MST



Thread overview:
2020-02-04 15:16 [PATCH] virtio: clean up when virtio_queue_set_rings() fails Stefan Hajnoczi
2020-02-04 16:02 ` Cornelia Huck
2020-02-05 14:49   ` Stefan Hajnoczi
2020-02-05 15:55     ` Cornelia Huck
2020-02-04 16:36 ` Michael S. Tsirkin
2020-02-05  6:13 ` Michael S. Tsirkin
2020-02-05 14:56   ` Stefan Hajnoczi
2020-02-06  6:28     ` Michael S. Tsirkin [this message]
