All of lore.kernel.org
* [PATCH v2] virtio: gracefully handle invalid region caches
@ 2020-02-07 10:46 Stefan Hajnoczi
  2020-02-24 13:35 ` Stefan Hajnoczi
  2020-02-27  8:51 ` Michael S. Tsirkin
  0 siblings, 2 replies; 4+ messages in thread
From: Stefan Hajnoczi @ 2020-02-07 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexander Bulekov, Cornelia Huck, Paolo Bonzini, Stefan Hajnoczi,
	Michael S. Tsirkin

The virtqueue code sets up MemoryRegionCaches to access the virtqueue
guest RAM data structures.  The code currently assumes that
VRingMemoryRegionCaches is initialized before device emulation code
accesses the virtqueue.  An assertion will fail in
vring_get_region_caches() when this is not true.  Device fuzzing found a
case where this assumption is false (see below).

Virtqueue guest RAM addresses can also be changed from a vCPU thread
while an IOThread is accessing the virtqueue.  This breaks the same
assumption, but this time the caches could become invalid partway through
the virtqueue code.  The code fetches the caches RCU pointer multiple
times, so the pointer must be validated every time it is fetched.

Add checks each time we call vring_get_region_caches() and treat invalid
caches as a nop: memory stores are ignored and memory reads return 0.
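
The resulting pattern is small enough to model outside QEMU.  Below is a
minimal sketch of the "invalid caches are a nop" behavior; the Caches and
VQ types and the function names are illustrative stand-ins, not the real
virtio definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for VRingMemoryRegionCaches. */
typedef struct {
    uint16_t used_idx;
} Caches;

/* Simplified stand-in for VirtQueue: the caches pointer may be NULL
 * before guest memory is mapped or after the caches are invalidated. */
typedef struct {
    Caches *caches;
    uint16_t shadow_used_idx;
} VQ;

/* Reads with invalid caches return 0 instead of asserting. */
static uint16_t used_idx_read(VQ *vq)
{
    Caches *caches = vq->caches; /* stands in for atomic_rcu_read() */
    if (!caches) {
        return 0; /* nop read */
    }
    return caches->used_idx;
}

/* Writes with invalid caches skip the guest-visible store but still
 * update device-local shadow state, as vring_used_idx_set() does. */
static void used_idx_write(VQ *vq, uint16_t val)
{
    Caches *caches = vq->caches;
    if (caches) {
        caches->used_idx = val; /* guest-visible store */
    }
    vq->shadow_used_idx = val; /* local state is always updated */
}
```

Note that the pointer is re-fetched inside each helper rather than cached
by the caller, matching the per-call checks added by this patch.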

The fuzz test failure is as follows:

  $ qemu -M pc -device virtio-blk-pci,id=drv0,drive=drive0,addr=4.0 \
         -drive if=none,id=drive0,file=null-co://,format=raw,auto-read-only=off \
         -drive if=none,id=drive1,file=null-co://,file.read-zeroes=on,format=raw \
         -display none \
         -qtest stdio
  endianness
  outl 0xcf8 0x80002020
  outl 0xcfc 0xe0000000
  outl 0xcf8 0x80002004
  outw 0xcfc 0x7
  write 0xe0000000 0x24 0x00ffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab5cffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab0000000001
  inb 0x4
  writew 0xe000001c 0x1
  write 0xe0000014 0x1 0x0d

The following error message is produced:

  qemu-system-x86_64: /home/stefanha/qemu/hw/virtio/virtio.c:286: vring_get_region_caches: Assertion `caches != NULL' failed.

The backtrace looks like this:

  #0  0x00007ffff5520625 in raise () at /lib64/libc.so.6
  #1  0x00007ffff55098d9 in abort () at /lib64/libc.so.6
  #2  0x00007ffff55097a9 in _nl_load_domain.cold () at /lib64/libc.so.6
  #3  0x00007ffff5518a66 in annobin_assert.c_end () at /lib64/libc.so.6
  #4  0x00005555559073da in vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:286
  #5  vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:283
  #6  0x000055555590818d in vring_used_flags_set_bit (mask=1, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
  #7  virtio_queue_split_set_notification (enable=0, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
  #8  virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:451
  #9  0x0000555555908512 in virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:444
  #10 0x00005555558c697a in virtio_blk_handle_vq (s=0x5555575c57e0, vq=0x5555575ceea0) at qemu/hw/block/virtio-blk.c:775
  #11 0x0000555555907836 in virtio_queue_notify_aio_vq (vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:2244
  #12 0x0000555555cb5dd7 in aio_dispatch_handlers (ctx=ctx@entry=0x55555671a420) at util/aio-posix.c:429
  #13 0x0000555555cb67a8 in aio_dispatch (ctx=0x55555671a420) at util/aio-posix.c:460
  #14 0x0000555555cb307e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
  #15 0x00007ffff7bbc510 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
  #16 0x0000555555cb5848 in glib_pollfds_poll () at util/main-loop.c:219
  #17 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
  #18 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
  #19 0x00005555559b20c9 in main_loop () at vl.c:1683
  #20 0x0000555555838115 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4441

Reported-by: Alexander Bulekov <alxndr@bu.edu>
Cc: Michael Tsirkin <mst@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
An alternative solution is to keep the vring.caches pointer non-NULL at
all times so that no checks are necessary.  We would need to initialize
it to a VRingMemoryRegionCaches object that points to unassigned_mem.
That way virtio.c never hits NULL pointers and all memory loads/stores
become nops when the caches are invalid.

I think that solution is cleaner, but I couldn't see a reasonable way of
initializing MemoryRegionCache objects so that they point to a 64-bit
unassigned_mem MemoryRegion.  Maybe someone who knows the memory API
better can say whether this is doable?

Michael: We discussed changing vring.desc checks, but I think that's no
longer necessary with this patch.  If a guest gets past a vring.desc
check then it can no longer trigger the assertion failure.
---
 hw/virtio/virtio.c | 99 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 91 insertions(+), 8 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 2c5410e981..00d444699d 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -282,15 +282,19 @@ static void vring_packed_flags_write(VirtIODevice *vdev,
 /* Called within rcu_read_lock().  */
 static VRingMemoryRegionCaches *vring_get_region_caches(struct VirtQueue *vq)
 {
-    VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
-    assert(caches != NULL);
-    return caches;
+    return atomic_rcu_read(&vq->vring.caches);
 }
+
 /* Called within rcu_read_lock().  */
 static inline uint16_t vring_avail_flags(VirtQueue *vq)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingAvail, flags);
+
+    if (!caches) {
+        return 0;
+    }
+
     return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
 }
 
@@ -299,6 +303,11 @@ static inline uint16_t vring_avail_idx(VirtQueue *vq)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingAvail, idx);
+
+    if (!caches) {
+        return 0;
+    }
+
     vq->shadow_avail_idx = virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
     return vq->shadow_avail_idx;
 }
@@ -308,6 +317,11 @@ static inline uint16_t vring_avail_ring(VirtQueue *vq, int i)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingAvail, ring[i]);
+
+    if (!caches) {
+        return 0;
+    }
+
     return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
 }
 
@@ -323,6 +337,11 @@ static inline void vring_used_write(VirtQueue *vq, VRingUsedElem *uelem,
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingUsed, ring[i]);
+
+    if (!caches) {
+        return;
+    }
+
     virtio_tswap32s(vq->vdev, &uelem->id);
     virtio_tswap32s(vq->vdev, &uelem->len);
     address_space_write_cached(&caches->used, pa, uelem, sizeof(VRingUsedElem));
@@ -334,6 +353,11 @@ static uint16_t vring_used_idx(VirtQueue *vq)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingUsed, idx);
+
+    if (!caches) {
+        return 0;
+    }
+
     return virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
 }
 
@@ -342,8 +366,12 @@ static inline void vring_used_idx_set(VirtQueue *vq, uint16_t val)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingUsed, idx);
-    virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
-    address_space_cache_invalidate(&caches->used, pa, sizeof(val));
+
+    if (caches) {
+        virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
+        address_space_cache_invalidate(&caches->used, pa, sizeof(val));
+    }
+
     vq->used_idx = val;
 }
 
@@ -353,8 +381,13 @@ static inline void vring_used_flags_set_bit(VirtQueue *vq, int mask)
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     VirtIODevice *vdev = vq->vdev;
     hwaddr pa = offsetof(VRingUsed, flags);
-    uint16_t flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
+    uint16_t flags;
 
+    if (!caches) {
+        return;
+    }
+
+    flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
     virtio_stw_phys_cached(vdev, &caches->used, pa, flags | mask);
     address_space_cache_invalidate(&caches->used, pa, sizeof(flags));
 }
@@ -365,8 +398,13 @@ static inline void vring_used_flags_unset_bit(VirtQueue *vq, int mask)
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     VirtIODevice *vdev = vq->vdev;
     hwaddr pa = offsetof(VRingUsed, flags);
-    uint16_t flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
+    uint16_t flags;
 
+    if (!caches) {
+        return;
+    }
+
+    flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
     virtio_stw_phys_cached(vdev, &caches->used, pa, flags & ~mask);
     address_space_cache_invalidate(&caches->used, pa, sizeof(flags));
 }
@@ -381,6 +419,10 @@ static inline void vring_set_avail_event(VirtQueue *vq, uint16_t val)
     }
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return;
+    }
+
     pa = offsetof(VRingUsed, ring[vq->vring.num]);
     virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
     address_space_cache_invalidate(&caches->used, pa, sizeof(val));
@@ -410,7 +452,11 @@ static void virtio_queue_packed_set_notification(VirtQueue *vq, int enable)
     VRingMemoryRegionCaches *caches;
 
     RCU_READ_LOCK_GUARD();
-    caches  = vring_get_region_caches(vq);
+    caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return;
+    }
+
     vring_packed_event_read(vq->vdev, &caches->used, &e);
 
     if (!enable) {
@@ -597,6 +643,10 @@ static int virtio_queue_packed_empty_rcu(VirtQueue *vq)
     }
 
     cache = vring_get_region_caches(vq);
+    if (!cache) {
+        return 1;
+    }
+
     vring_packed_desc_read_flags(vq->vdev, &desc.flags, &cache->desc,
                                  vq->last_avail_idx);
 
@@ -777,6 +827,10 @@ static void virtqueue_packed_fill_desc(VirtQueue *vq,
     }
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return;
+    }
+
     vring_packed_desc_write(vq->vdev, &desc, &caches->desc, head, strict_order);
 }
 
@@ -949,6 +1003,10 @@ static void virtqueue_split_get_avail_bytes(VirtQueue *vq,
 
     max = vq->vring.num;
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        goto err;
+    }
+
     while ((rc = virtqueue_num_heads(vq, idx)) > 0) {
         MemoryRegionCache *desc_cache = &caches->desc;
         unsigned int num_bufs;
@@ -1089,6 +1147,9 @@ static void virtqueue_packed_get_avail_bytes(VirtQueue *vq,
 
     max = vq->vring.num;
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        goto err;
+    }
 
     for (;;) {
         unsigned int num_bufs = total_bufs;
@@ -1194,6 +1255,10 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
     }
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        goto err;
+    }
+
     desc_size = virtio_vdev_has_feature(vq->vdev, VIRTIO_F_RING_PACKED) ?
                                 sizeof(VRingPackedDesc) : sizeof(VRingDesc);
     if (caches->desc.len < vq->vring.num * desc_size) {
@@ -1387,6 +1452,11 @@ static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
     i = head;
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        virtio_error(vdev, "Region caches not initialized");
+        goto done;
+    }
+
     if (caches->desc.len < max * sizeof(VRingDesc)) {
         virtio_error(vdev, "Cannot map descriptor ring");
         goto done;
@@ -1509,6 +1579,11 @@ static void *virtqueue_packed_pop(VirtQueue *vq, size_t sz)
     i = vq->last_avail_idx;
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        virtio_error(vdev, "Region caches not initialized");
+        goto done;
+    }
+
     if (caches->desc.len < max * sizeof(VRingDesc)) {
         virtio_error(vdev, "Cannot map descriptor ring");
         goto done;
@@ -1628,6 +1703,10 @@ static unsigned int virtqueue_packed_drop_all(VirtQueue *vq)
     VRingPackedDesc desc;
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return 0;
+    }
+
     desc_cache = &caches->desc;
 
     virtio_queue_set_notification(vq, 0);
@@ -2412,6 +2491,10 @@ static bool virtio_packed_should_notify(VirtIODevice *vdev, VirtQueue *vq)
     VRingMemoryRegionCaches *caches;
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return false;
+    }
+
     vring_packed_event_read(vdev, &caches->avail, &e);
 
     old = vq->signalled_used;
-- 
2.24.1




* Re: [PATCH v2] virtio: gracefully handle invalid region caches
  2020-02-07 10:46 [PATCH v2] virtio: gracefully handle invalid region caches Stefan Hajnoczi
@ 2020-02-24 13:35 ` Stefan Hajnoczi
  2020-02-24 13:37   ` Michael S. Tsirkin
  2020-02-27  8:51 ` Michael S. Tsirkin
  1 sibling, 1 reply; 4+ messages in thread
From: Stefan Hajnoczi @ 2020-02-24 13:35 UTC (permalink / raw)
  To: mst; +Cc: Alexander Bulekov, Cornelia Huck, qemu-devel, Paolo Bonzini


On Fri, Feb 07, 2020 at 10:46:19AM +0000, Stefan Hajnoczi wrote:
> The virtqueue code sets up MemoryRegionCaches to access the virtqueue
> guest RAM data structures.  The code currently assumes that
> VRingMemoryRegionCaches is initialized before device emulation code
> accesses the virtqueue.  An assertion will fail in
> vring_get_region_caches() when this is not true.  Device fuzzing found a
> case where this assumption is false (see below).
> 
> [...]

Ping?




* Re: [PATCH v2] virtio: gracefully handle invalid region caches
  2020-02-24 13:35 ` Stefan Hajnoczi
@ 2020-02-24 13:37   ` Michael S. Tsirkin
  0 siblings, 0 replies; 4+ messages in thread
From: Michael S. Tsirkin @ 2020-02-24 13:37 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Alexander Bulekov, Cornelia Huck, qemu-devel, Paolo Bonzini

On Mon, Feb 24, 2020 at 01:35:54PM +0000, Stefan Hajnoczi wrote:
> On Fri, Feb 07, 2020 at 10:46:19AM +0000, Stefan Hajnoczi wrote:
> > The virtqueue code sets up MemoryRegionCaches to access the virtqueue
> > guest RAM data structures.  The code currently assumes that
> > VRingMemoryRegionCaches is initialized before device emulation code
> > accesses the virtqueue.  An assertion will fail in
> > vring_get_region_caches() when this is not true.  Device fuzzing found a
> > case where this assumption is false (see below).
> > 
> > Virtqueue guest RAM addresses can also be changed from a vCPU thread
> > while an IOThread is accessing the virtqueue.  This breaks the same
> > assumption but this time the caches could become invalid partway through
> > the virtqueue code.  The code fetches the caches RCU pointer multiple
> > times so we will need to validate the pointer every time it is fetched.
> > 
> > Add checks each time we call vring_get_region_caches() and treat invalid
> > caches as a nop: memory stores are ignored and memory reads return 0.
> > 
> > The fuzz test failure is as follows:
> > 
> >   $ qemu -M pc -device virtio-blk-pci,id=drv0,drive=drive0,addr=4.0 \
> >          -drive if=none,id=drive0,file=null-co://,format=raw,auto-read-only=off \
> >          -drive if=none,id=drive1,file=null-co://,file.read-zeroes=on,format=raw \
> >          -display none \
> >          -qtest stdio
> >   endianness
> >   outl 0xcf8 0x80002020
> >   outl 0xcfc 0xe0000000
> >   outl 0xcf8 0x80002004
> >   outw 0xcfc 0x7
> >   write 0xe0000000 0x24 0x00ffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab5cffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab0000000001
> >   inb 0x4
> >   writew 0xe000001c 0x1
> >   write 0xe0000014 0x1 0x0d
> > 
> > The following error message is produced:
> > 
> >   qemu-system-x86_64: /home/stefanha/qemu/hw/virtio/virtio.c:286: vring_get_region_caches: Assertion `caches != NULL' failed.
> > 
> > The backtrace looks like this:
> > 
> >   #0  0x00007ffff5520625 in raise () at /lib64/libc.so.6
> >   #1  0x00007ffff55098d9 in abort () at /lib64/libc.so.6
> >   #2  0x00007ffff55097a9 in _nl_load_domain.cold () at /lib64/libc.so.6
> >   #3  0x00007ffff5518a66 in annobin_assert.c_end () at /lib64/libc.so.6
> >   #4  0x00005555559073da in vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:286
> >   #5  vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:283
> >   #6  0x000055555590818d in vring_used_flags_set_bit (mask=1, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
> >   #7  virtio_queue_split_set_notification (enable=0, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
> >   #8  virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:451
> >   #9  0x0000555555908512 in virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:444
> >   #10 0x00005555558c697a in virtio_blk_handle_vq (s=0x5555575c57e0, vq=0x5555575ceea0) at qemu/hw/block/virtio-blk.c:775
> >   #11 0x0000555555907836 in virtio_queue_notify_aio_vq (vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:2244
> >   #12 0x0000555555cb5dd7 in aio_dispatch_handlers (ctx=ctx@entry=0x55555671a420) at util/aio-posix.c:429
> >   #13 0x0000555555cb67a8 in aio_dispatch (ctx=0x55555671a420) at util/aio-posix.c:460
> >   #14 0x0000555555cb307e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
> >   #15 0x00007ffff7bbc510 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
> >   #16 0x0000555555cb5848 in glib_pollfds_poll () at util/main-loop.c:219
> >   #17 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
> >   #18 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
> >   #19 0x00005555559b20c9 in main_loop () at vl.c:1683
> >   #20 0x0000555555838115 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4441
> > 
> > Reported-by: Alexander Bulekov <alxndr@bu.edu>
> > Cc: Michael Tsirkin <mst@redhat.com>
> > Cc: Cornelia Huck <cohuck@redhat.com>
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> > An alternative solution is to keep the vring.caches pointer non-NULL all
> > the time so no checks are necessary.  We would need to initialize it to
> > a VRingMemoryRegionCaches object that points to unassigned_mem.  This
> > way virtio.c never hits NULL pointers and all memory loads/stores become
> > nop when caches are invalid.
> > 
> > I think this solution is cleaner but couldn't see a reasonable way of
> > initializing MemoryRegionCache objects so that they point to a 64-bit
> > unassigned_mem MemoryRegion.  Maybe someone who knows the memory API
> > better knows whether this is doable?
> > 
> > Michael: We discussed changing vring.desc checks, but I think that's no
> > longer necessary with this patch.  If a guest gets past a vring.desc
> > check then it can no longer trigger the assertion failure.
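[Editorial note: the sentinel alternative described in the notes above can be modeled in miniature. The sketch below uses hypothetical, simplified stand-in types, not QEMU's memory API or the real unassigned_mem MemoryRegion; it only illustrates the idea that if reads from an unassigned sentinel return 0 and writes to it are dropped, callers never need a NULL check.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for VRingMemoryRegionCaches (not the QEMU type). */
typedef struct {
    uint16_t avail_flags;   /* backing "guest memory" word */
    int assigned;           /* 0 = behaves like unassigned_mem */
} Cache;

/* The alternative: never hand out NULL.  When no real caches exist,
 * return a sentinel whose reads yield 0 and whose writes are ignored. */
static Cache unassigned_cache = { 0, 0 };

static Cache *get_caches(Cache *current)
{
    return current ? current : &unassigned_cache;
}

static uint16_t cache_read(Cache *c)
{
    return c->assigned ? c->avail_flags : 0;   /* unassigned reads as 0 */
}

static void cache_write(Cache *c, uint16_t v)
{
    if (c->assigned) {                         /* unassigned writes dropped */
        c->avail_flags = v;
    }
}
```

With this shape, every call site could use the returned pointer unconditionally, which is why the notes above call it cleaner; the open question in the notes is only how to build such a sentinel out of real MemoryRegionCache objects.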
> > ---
> >  hw/virtio/virtio.c | 99 ++++++++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 91 insertions(+), 8 deletions(-)
> 
> Ping?

Queued, thanks!

> > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > index 2c5410e981..00d444699d 100644
> > --- a/hw/virtio/virtio.c
> > +++ b/hw/virtio/virtio.c
> > @@ -282,15 +282,19 @@ static void vring_packed_flags_write(VirtIODevice *vdev,
> >  /* Called within rcu_read_lock().  */
> >  static VRingMemoryRegionCaches *vring_get_region_caches(struct VirtQueue *vq)
> >  {
> > -    VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
> > -    assert(caches != NULL);
> > -    return caches;
> > +    return atomic_rcu_read(&vq->vring.caches);
> >  }
> > +
> >  /* Called within rcu_read_lock().  */
> >  static inline uint16_t vring_avail_flags(VirtQueue *vq)
> >  {
> >      VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> >      hwaddr pa = offsetof(VRingAvail, flags);
> > +
> > +    if (!caches) {
> > +        return 0;
> > +    }
> > +
> >      return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
> >  }
> >  
> > @@ -299,6 +303,11 @@ static inline uint16_t vring_avail_idx(VirtQueue *vq)
> >  {
> >      VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> >      hwaddr pa = offsetof(VRingAvail, idx);
> > +
> > +    if (!caches) {
> > +        return 0;
> > +    }
> > +
> >      vq->shadow_avail_idx = virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
> >      return vq->shadow_avail_idx;
> >  }
> > @@ -308,6 +317,11 @@ static inline uint16_t vring_avail_ring(VirtQueue *vq, int i)
> >  {
> >      VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> >      hwaddr pa = offsetof(VRingAvail, ring[i]);
> > +
> > +    if (!caches) {
> > +        return 0;
> > +    }
> > +
> >      return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
> >  }
> >  
> > @@ -323,6 +337,11 @@ static inline void vring_used_write(VirtQueue *vq, VRingUsedElem *uelem,
> >  {
> >      VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> >      hwaddr pa = offsetof(VRingUsed, ring[i]);
> > +
> > +    if (!caches) {
> > +        return;
> > +    }
> > +
> >      virtio_tswap32s(vq->vdev, &uelem->id);
> >      virtio_tswap32s(vq->vdev, &uelem->len);
> >      address_space_write_cached(&caches->used, pa, uelem, sizeof(VRingUsedElem));
> > @@ -334,6 +353,11 @@ static uint16_t vring_used_idx(VirtQueue *vq)
> >  {
> >      VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> >      hwaddr pa = offsetof(VRingUsed, idx);
> > +
> > +    if (!caches) {
> > +        return 0;
> > +    }
> > +
> >      return virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
> >  }
> >  
> > @@ -342,8 +366,12 @@ static inline void vring_used_idx_set(VirtQueue *vq, uint16_t val)
> >  {
> >      VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> >      hwaddr pa = offsetof(VRingUsed, idx);
> > -    virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
> > -    address_space_cache_invalidate(&caches->used, pa, sizeof(val));
> > +
> > +    if (caches) {
> > +        virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
> > +        address_space_cache_invalidate(&caches->used, pa, sizeof(val));
> > +    }
> > +
> >      vq->used_idx = val;
> >  }
> >  
> > @@ -353,8 +381,13 @@ static inline void vring_used_flags_set_bit(VirtQueue *vq, int mask)
> >      VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> >      VirtIODevice *vdev = vq->vdev;
> >      hwaddr pa = offsetof(VRingUsed, flags);
> > -    uint16_t flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
> > +    uint16_t flags;
> >  
> > +    if (!caches) {
> > +        return;
> > +    }
> > +
> > +    flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
> >      virtio_stw_phys_cached(vdev, &caches->used, pa, flags | mask);
> >      address_space_cache_invalidate(&caches->used, pa, sizeof(flags));
> >  }
> > @@ -365,8 +398,13 @@ static inline void vring_used_flags_unset_bit(VirtQueue *vq, int mask)
> >      VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> >      VirtIODevice *vdev = vq->vdev;
> >      hwaddr pa = offsetof(VRingUsed, flags);
> > -    uint16_t flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
> > +    uint16_t flags;
> >  
> > +    if (!caches) {
> > +        return;
> > +    }
> > +
> > +    flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
> >      virtio_stw_phys_cached(vdev, &caches->used, pa, flags & ~mask);
> >      address_space_cache_invalidate(&caches->used, pa, sizeof(flags));
> >  }
> > @@ -381,6 +419,10 @@ static inline void vring_set_avail_event(VirtQueue *vq, uint16_t val)
> >      }
> >  
> >      caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        return;
> > +    }
> > +
> >      pa = offsetof(VRingUsed, ring[vq->vring.num]);
> >      virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
> >      address_space_cache_invalidate(&caches->used, pa, sizeof(val));
> > @@ -410,7 +452,11 @@ static void virtio_queue_packed_set_notification(VirtQueue *vq, int enable)
> >      VRingMemoryRegionCaches *caches;
> >  
> >      RCU_READ_LOCK_GUARD();
> > -    caches  = vring_get_region_caches(vq);
> > +    caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        return;
> > +    }
> > +
> >      vring_packed_event_read(vq->vdev, &caches->used, &e);
> >  
> >      if (!enable) {
> > @@ -597,6 +643,10 @@ static int virtio_queue_packed_empty_rcu(VirtQueue *vq)
> >      }
> >  
> >      cache = vring_get_region_caches(vq);
> > +    if (!cache) {
> > +        return 1;
> > +    }
> > +
> >      vring_packed_desc_read_flags(vq->vdev, &desc.flags, &cache->desc,
> >                                   vq->last_avail_idx);
> >  
> > @@ -777,6 +827,10 @@ static void virtqueue_packed_fill_desc(VirtQueue *vq,
> >      }
> >  
> >      caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        return;
> > +    }
> > +
> >      vring_packed_desc_write(vq->vdev, &desc, &caches->desc, head, strict_order);
> >  }
> >  
> > @@ -949,6 +1003,10 @@ static void virtqueue_split_get_avail_bytes(VirtQueue *vq,
> >  
> >      max = vq->vring.num;
> >      caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        goto err;
> > +    }
> > +
> >      while ((rc = virtqueue_num_heads(vq, idx)) > 0) {
> >          MemoryRegionCache *desc_cache = &caches->desc;
> >          unsigned int num_bufs;
> > @@ -1089,6 +1147,9 @@ static void virtqueue_packed_get_avail_bytes(VirtQueue *vq,
> >  
> >      max = vq->vring.num;
> >      caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        goto err;
> > +    }
> >  
> >      for (;;) {
> >          unsigned int num_bufs = total_bufs;
> > @@ -1194,6 +1255,10 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> >      }
> >  
> >      caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        goto err;
> > +    }
> > +
> >      desc_size = virtio_vdev_has_feature(vq->vdev, VIRTIO_F_RING_PACKED) ?
> >                                  sizeof(VRingPackedDesc) : sizeof(VRingDesc);
> >      if (caches->desc.len < vq->vring.num * desc_size) {
> > @@ -1387,6 +1452,11 @@ static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
> >      i = head;
> >  
> >      caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        virtio_error(vdev, "Region caches not initialized");
> > +        goto done;
> > +    }
> > +
> >      if (caches->desc.len < max * sizeof(VRingDesc)) {
> >          virtio_error(vdev, "Cannot map descriptor ring");
> >          goto done;
> > @@ -1509,6 +1579,11 @@ static void *virtqueue_packed_pop(VirtQueue *vq, size_t sz)
> >      i = vq->last_avail_idx;
> >  
> >      caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        virtio_error(vdev, "Region caches not initialized");
> > +        goto done;
> > +    }
> > +
> >      if (caches->desc.len < max * sizeof(VRingDesc)) {
> >          virtio_error(vdev, "Cannot map descriptor ring");
> >          goto done;
> > @@ -1628,6 +1703,10 @@ static unsigned int virtqueue_packed_drop_all(VirtQueue *vq)
> >      VRingPackedDesc desc;
> >  
> >      caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        return 0;
> > +    }
> > +
> >      desc_cache = &caches->desc;
> >  
> >      virtio_queue_set_notification(vq, 0);
> > @@ -2412,6 +2491,10 @@ static bool virtio_packed_should_notify(VirtIODevice *vdev, VirtQueue *vq)
> >      VRingMemoryRegionCaches *caches;
> >  
> >      caches = vring_get_region_caches(vq);
> > +    if (!caches) {
> > +        return false;
> > +    }
> > +
> >      vring_packed_event_read(vdev, &caches->avail, &e);
> >  
> >      old = vq->signalled_used;
> > -- 
> > 2.24.1
> > 
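[Editorial note: taken together, the call-site checks in the patch above implement one pattern: fetch the caches pointer, validate it on every fetch, and treat invalid caches as a nop, so loads return 0 and stores are ignored. A minimal stand-alone model of that pattern follows, with hypothetical simplified types in place of the real RCU-protected structures; the real code fetches under rcu_read_lock() with atomic_rcu_read(), omitted here.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for vq->vring.caches (not the QEMU type). */
typedef struct { uint16_t used_idx; } Caches;

static Caches *g_caches;            /* may be cleared by another thread */

static uint16_t vring_used_idx_model(void)
{
    Caches *c = g_caches;           /* fetch */
    if (!c) {                       /* validate on every fetch */
        return 0;                   /* loads from invalid caches yield 0 */
    }
    return c->used_idx;
}

static void vring_used_idx_set_model(uint16_t val)
{
    Caches *c = g_caches;           /* re-fetch: never reuse an old pointer */
    if (c) {                        /* stores to invalid caches are dropped */
        c->used_idx = val;
    }
}
```

Checking once and reusing the pointer across helpers would reintroduce the race described in the commit message, since a vCPU thread can invalidate the caches between any two accesses made by an IOThread.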





* Re: [PATCH v2] virtio: gracefully handle invalid region caches
  2020-02-07 10:46 [PATCH v2] virtio: gracefully handle invalid region caches Stefan Hajnoczi
  2020-02-24 13:35 ` Stefan Hajnoczi
@ 2020-02-27  8:51 ` Michael S. Tsirkin
  1 sibling, 0 replies; 4+ messages in thread
From: Michael S. Tsirkin @ 2020-02-27  8:51 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Alexander Bulekov, qemu-stable, Cornelia Huck, qemu-devel, Paolo Bonzini

On Fri, Feb 07, 2020 at 10:46:19AM +0000, Stefan Hajnoczi wrote:
> Reported-by: Alexander Bulekov <alxndr@bu.edu>
> Cc: Michael Tsirkin <mst@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

I sent a pull already, but I just realized it's needed for stable.
In case it gets merged earlier, CC stable now.






Thread overview: 4+ messages
2020-02-07 10:46 [PATCH v2] virtio: gracefully handle invalid region caches Stefan Hajnoczi
2020-02-24 13:35 ` Stefan Hajnoczi
2020-02-24 13:37   ` Michael S. Tsirkin
2020-02-27  8:51 ` Michael S. Tsirkin
