From: Michael Roth <mdroth@linux.vnet.ibm.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	qemu-stable@nongnu.org, Stefan Hajnoczi <stefanha@redhat.com>,
	Michael Tsirkin <mst@redhat.com>
Subject: [PATCH 50/78] virtio: gracefully handle invalid region caches
Date: Tue, 16 Jun 2020 09:15:19 -0500	[thread overview]
Message-ID: <20200616141547.24664-51-mdroth@linux.vnet.ibm.com> (raw)
In-Reply-To: <20200616141547.24664-1-mdroth@linux.vnet.ibm.com>

From: Stefan Hajnoczi <stefanha@redhat.com>

The virtqueue code sets up MemoryRegionCaches to access the virtqueue
guest RAM data structures.  The code currently assumes that
VRingMemoryRegionCaches is initialized before device emulation code
accesses the virtqueue.  An assertion will fail in
vring_get_region_caches() when this is not true.  Device fuzzing found a
case where this assumption is false (see below).

Virtqueue guest RAM addresses can also be changed from a vCPU thread
while an IOThread is accessing the virtqueue.  This breaks the same
assumption, but this time the caches could become invalid partway through
the virtqueue code.  The code fetches the caches' RCU pointer multiple
times, so we need to validate the pointer every time it is fetched.

Add checks each time we call vring_get_region_caches() and treat invalid
caches as a nop: memory stores are ignored and memory reads return 0.

The fuzz test failure is as follows:

  $ qemu -M pc -device virtio-blk-pci,id=drv0,drive=drive0,addr=4.0 \
         -drive if=none,id=drive0,file=null-co://,format=raw,auto-read-only=off \
         -drive if=none,id=drive1,file=null-co://,file.read-zeroes=on,format=raw \
         -display none \
         -qtest stdio
  endianness
  outl 0xcf8 0x80002020
  outl 0xcfc 0xe0000000
  outl 0xcf8 0x80002004
  outw 0xcfc 0x7
  write 0xe0000000 0x24 0x00ffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab5cffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab0000000001
  inb 0x4
  writew 0xe000001c 0x1
  write 0xe0000014 0x1 0x0d

The following error message is produced:

  qemu-system-x86_64: /home/stefanha/qemu/hw/virtio/virtio.c:286: vring_get_region_caches: Assertion `caches != NULL' failed.

The backtrace looks like this:

  #0  0x00007ffff5520625 in raise () at /lib64/libc.so.6
  #1  0x00007ffff55098d9 in abort () at /lib64/libc.so.6
  #2  0x00007ffff55097a9 in _nl_load_domain.cold () at /lib64/libc.so.6
  #3  0x00007ffff5518a66 in annobin_assert.c_end () at /lib64/libc.so.6
  #4  0x00005555559073da in vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:286
  #5  vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:283
  #6  0x000055555590818d in vring_used_flags_set_bit (mask=1, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
  #7  virtio_queue_split_set_notification (enable=0, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
  #8  virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:451
  #9  0x0000555555908512 in virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:444
  #10 0x00005555558c697a in virtio_blk_handle_vq (s=0x5555575c57e0, vq=0x5555575ceea0) at qemu/hw/block/virtio-blk.c:775
  #11 0x0000555555907836 in virtio_queue_notify_aio_vq (vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:2244
  #12 0x0000555555cb5dd7 in aio_dispatch_handlers (ctx=ctx@entry=0x55555671a420) at util/aio-posix.c:429
  #13 0x0000555555cb67a8 in aio_dispatch (ctx=0x55555671a420) at util/aio-posix.c:460
  #14 0x0000555555cb307e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
  #15 0x00007ffff7bbc510 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
  #16 0x0000555555cb5848 in glib_pollfds_poll () at util/main-loop.c:219
  #17 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
  #18 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
  #19 0x00005555559b20c9 in main_loop () at vl.c:1683
  #20 0x0000555555838115 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4441

Reported-by: Alexander Bulekov <alxndr@bu.edu>
Cc: Michael Tsirkin <mst@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20200207104619.164892-1-stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit abdd16f4681cc4d6bf84990227b5c9b98e869ccd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 hw/virtio/virtio.c | 99 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 91 insertions(+), 8 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 344d817644..6c71141ed1 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -282,15 +282,19 @@ static void vring_packed_flags_write(VirtIODevice *vdev,
 /* Called within rcu_read_lock().  */
 static VRingMemoryRegionCaches *vring_get_region_caches(struct VirtQueue *vq)
 {
-    VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
-    assert(caches != NULL);
-    return caches;
+    return atomic_rcu_read(&vq->vring.caches);
 }
+
 /* Called within rcu_read_lock().  */
 static inline uint16_t vring_avail_flags(VirtQueue *vq)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingAvail, flags);
+
+    if (!caches) {
+        return 0;
+    }
+
     return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
 }
 
@@ -299,6 +303,11 @@ static inline uint16_t vring_avail_idx(VirtQueue *vq)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingAvail, idx);
+
+    if (!caches) {
+        return 0;
+    }
+
     vq->shadow_avail_idx = virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
     return vq->shadow_avail_idx;
 }
@@ -308,6 +317,11 @@ static inline uint16_t vring_avail_ring(VirtQueue *vq, int i)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingAvail, ring[i]);
+
+    if (!caches) {
+        return 0;
+    }
+
     return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
 }
 
@@ -323,6 +337,11 @@ static inline void vring_used_write(VirtQueue *vq, VRingUsedElem *uelem,
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingUsed, ring[i]);
+
+    if (!caches) {
+        return;
+    }
+
     virtio_tswap32s(vq->vdev, &uelem->id);
     virtio_tswap32s(vq->vdev, &uelem->len);
     address_space_write_cached(&caches->used, pa, uelem, sizeof(VRingUsedElem));
@@ -334,6 +353,11 @@ static uint16_t vring_used_idx(VirtQueue *vq)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingUsed, idx);
+
+    if (!caches) {
+        return 0;
+    }
+
     return virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
 }
 
@@ -342,8 +366,12 @@ static inline void vring_used_idx_set(VirtQueue *vq, uint16_t val)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingUsed, idx);
-    virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
-    address_space_cache_invalidate(&caches->used, pa, sizeof(val));
+
+    if (caches) {
+        virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
+        address_space_cache_invalidate(&caches->used, pa, sizeof(val));
+    }
+
     vq->used_idx = val;
 }
 
@@ -353,8 +381,13 @@ static inline void vring_used_flags_set_bit(VirtQueue *vq, int mask)
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     VirtIODevice *vdev = vq->vdev;
     hwaddr pa = offsetof(VRingUsed, flags);
-    uint16_t flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
+    uint16_t flags;
 
+    if (!caches) {
+        return;
+    }
+
+    flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
     virtio_stw_phys_cached(vdev, &caches->used, pa, flags | mask);
     address_space_cache_invalidate(&caches->used, pa, sizeof(flags));
 }
@@ -365,8 +398,13 @@ static inline void vring_used_flags_unset_bit(VirtQueue *vq, int mask)
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     VirtIODevice *vdev = vq->vdev;
     hwaddr pa = offsetof(VRingUsed, flags);
-    uint16_t flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
+    uint16_t flags;
 
+    if (!caches) {
+        return;
+    }
+
+    flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
     virtio_stw_phys_cached(vdev, &caches->used, pa, flags & ~mask);
     address_space_cache_invalidate(&caches->used, pa, sizeof(flags));
 }
@@ -381,6 +419,10 @@ static inline void vring_set_avail_event(VirtQueue *vq, uint16_t val)
     }
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return;
+    }
+
     pa = offsetof(VRingUsed, ring[vq->vring.num]);
     virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
     address_space_cache_invalidate(&caches->used, pa, sizeof(val));
@@ -410,7 +452,11 @@ static void virtio_queue_packed_set_notification(VirtQueue *vq, int enable)
     VRingMemoryRegionCaches *caches;
 
     RCU_READ_LOCK_GUARD();
-    caches  = vring_get_region_caches(vq);
+    caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return;
+    }
+
     vring_packed_event_read(vq->vdev, &caches->used, &e);
 
     if (!enable) {
@@ -592,6 +638,10 @@ static int virtio_queue_packed_empty_rcu(VirtQueue *vq)
     }
 
     cache = vring_get_region_caches(vq);
+    if (!cache) {
+        return 1;
+    }
+
     vring_packed_desc_read_flags(vq->vdev, &desc.flags, &cache->desc,
                                  vq->last_avail_idx);
 
@@ -772,6 +822,10 @@ static void virtqueue_packed_fill_desc(VirtQueue *vq,
     }
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return;
+    }
+
     vring_packed_desc_write(vq->vdev, &desc, &caches->desc, head, strict_order);
 }
 
@@ -944,6 +998,10 @@ static void virtqueue_split_get_avail_bytes(VirtQueue *vq,
 
     max = vq->vring.num;
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        goto err;
+    }
+
     while ((rc = virtqueue_num_heads(vq, idx)) > 0) {
         MemoryRegionCache *desc_cache = &caches->desc;
         unsigned int num_bufs;
@@ -1084,6 +1142,9 @@ static void virtqueue_packed_get_avail_bytes(VirtQueue *vq,
 
     max = vq->vring.num;
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        goto err;
+    }
 
     for (;;) {
         unsigned int num_bufs = total_bufs;
@@ -1189,6 +1250,10 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
     }
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        goto err;
+    }
+
     desc_size = virtio_vdev_has_feature(vq->vdev, VIRTIO_F_RING_PACKED) ?
                                 sizeof(VRingPackedDesc) : sizeof(VRingDesc);
     if (caches->desc.len < vq->vring.num * desc_size) {
@@ -1382,6 +1447,11 @@ static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
     i = head;
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        virtio_error(vdev, "Region caches not initialized");
+        goto done;
+    }
+
     if (caches->desc.len < max * sizeof(VRingDesc)) {
         virtio_error(vdev, "Cannot map descriptor ring");
         goto done;
@@ -1504,6 +1574,11 @@ static void *virtqueue_packed_pop(VirtQueue *vq, size_t sz)
     i = vq->last_avail_idx;
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        virtio_error(vdev, "Region caches not initialized");
+        goto done;
+    }
+
     if (caches->desc.len < max * sizeof(VRingDesc)) {
         virtio_error(vdev, "Cannot map descriptor ring");
         goto done;
@@ -1623,6 +1698,10 @@ static unsigned int virtqueue_packed_drop_all(VirtQueue *vq)
     VRingPackedDesc desc;
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return 0;
+    }
+
     desc_cache = &caches->desc;
 
     virtio_queue_set_notification(vq, 0);
@@ -2406,6 +2485,10 @@ static bool virtio_packed_should_notify(VirtIODevice *vdev, VirtQueue *vq)
     VRingMemoryRegionCaches *caches;
 
     caches = vring_get_region_caches(vq);
+    if (!caches) {
+        return false;
+    }
+
     vring_packed_event_read(vdev, &caches->avail, &e);
 
     old = vq->signalled_used;
-- 
2.17.1