* [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation
@ 2016-10-07 16:19 Paolo Bonzini
  2016-10-07 16:19 ` [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests Paolo Bonzini
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Paolo Bonzini @ 2016-10-07 16:19 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, kwolf, famz, qemu-block

Another tiny bit from the multiqueue series.  This takes care of
reimplementing bdrv_drain to process each BDS in the tree in order.
A side effect is to separate draining of data writes from draining
of metadata writes, which allows us to reinstate QED's .bdrv_drain
implementation.

A couple of words on the role of this in the multiqueue world.  This series
is half of what's needed to remove RFifoLock and contention callbacks.
In order to do so, aio_poll will only run in the I/O thread; when the
main QEMU thread wants to drain a BlockDriverState it will rely on the
I/O thread to do the work.  in_flight then provides a quick way to detect
whether to wake up a thread sitting in bdrv_drain.

Compared to previous attempts, the main change is that some tracking
has to be done at the BlockBackend level, because throttling has been moved
there.

This series requires:
- "replication: interrupt failover if the main device is closed"
- "blockjob: introduce .drain callback for jobs"

The next (already written) steps are:
- "aio: convert from RFifoLock to QemuRecMutex"
- "aio: push aio_context_acquire/release down to dispatching"
- "aio: explicitly acquire aiocontext in all callbacks"
- "coroutine-lock: make it thread-safe"
- "block: make BlockDriverState fields thread-safe"

In total these are about 60 patches.  I plan to merge the first into
2.8 as a bugfix.  Further (planned) steps are:
- blockjob: do not protect with AioContext lock
  This is just using a QemuMutex to protect BlockJob fields
- block drivers: make them thread-safe
  This ensures everything is protected by the CoMutex or,
  for AIO-based drivers, by a QemuMutex.
- block: remove bdrv_set_aio_context

Paolo

Fam Zheng (1):
  qed: Implement .bdrv_drain

Paolo Bonzini (2):
  block: add BDS field to count in-flight requests
  block: change drain to look only at one child at a time

 block/block-backend.c     |  17 ++++++-
 block/io.c                | 127 ++++++++++++++++++++++++++++++++--------------
 block/qed.c               |  16 +++++-
 include/block/block_int.h |  10 ++--
 4 files changed, 124 insertions(+), 46 deletions(-)

-- 
2.7.4


* [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests
  2016-10-07 16:19 [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation Paolo Bonzini
@ 2016-10-07 16:19 ` Paolo Bonzini
  2016-10-10 10:36   ` Kevin Wolf
  2016-10-07 16:19 ` [Qemu-devel] [PATCH 2/3] block: change drain to look only at one child at a time Paolo Bonzini
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 14+ messages in thread
From: Paolo Bonzini @ 2016-10-07 16:19 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, kwolf, famz, qemu-block

Unlike tracked_requests, this field also counts throttled requests,
and remains non-zero if an AIO operation needs a BH to be "really"
completed.

With this change, it is no longer necessary to have a dummy
BdrvTrackedRequest for requests that are never serialising, and
it is no longer necessary to poll the AioContext once after
bdrv_requests_pending(bs) returns false.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/block-backend.c     | 17 ++++++++--
 block/io.c                | 81 ++++++++++++++++++++++++++++++++---------------
 include/block/block_int.h | 10 +++---
 3 files changed, 77 insertions(+), 31 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 37595d2..a247a10 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -774,12 +774,16 @@ int coroutine_fn blk_co_preadv(BlockBackend *blk, int64_t offset,
         return ret;
     }
 
+    bdrv_inc_in_flight(blk_bs(blk));
+
     /* throttling disk I/O */
     if (blk->public.throttle_state) {
         throttle_group_co_io_limits_intercept(blk, bytes, false);
     }
 
-    return bdrv_co_preadv(blk->root, offset, bytes, qiov, flags);
+    ret = bdrv_co_preadv(blk->root, offset, bytes, qiov, flags);
+    bdrv_dec_in_flight(blk_bs(blk));
+    return ret;
 }
 
 int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
@@ -795,6 +799,8 @@ int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
         return ret;
     }
 
+    bdrv_inc_in_flight(blk_bs(blk));
+
     /* throttling disk I/O */
     if (blk->public.throttle_state) {
         throttle_group_co_io_limits_intercept(blk, bytes, true);
@@ -804,7 +810,9 @@ int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
         flags |= BDRV_REQ_FUA;
     }
 
-    return bdrv_co_pwritev(blk->root, offset, bytes, qiov, flags);
+    ret = bdrv_co_pwritev(blk->root, offset, bytes, qiov, flags);
+    bdrv_dec_in_flight(blk_bs(blk));
+    return ret;
 }
 
 typedef struct BlkRwCo {
@@ -897,6 +905,8 @@ int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags)
 static void error_callback_bh(void *opaque)
 {
     struct BlockBackendAIOCB *acb = opaque;
+
+    bdrv_dec_in_flight(acb->common.bs);
     acb->common.cb(acb->common.opaque, acb->ret);
     qemu_aio_unref(acb);
 }
@@ -907,6 +917,7 @@ BlockAIOCB *blk_abort_aio_request(BlockBackend *blk,
 {
     struct BlockBackendAIOCB *acb;
 
+    bdrv_inc_in_flight(blk_bs(blk));
     acb = blk_aio_get(&block_backend_aiocb_info, blk, cb, opaque);
     acb->blk = blk;
     acb->ret = ret;
@@ -929,6 +940,7 @@ static const AIOCBInfo blk_aio_em_aiocb_info = {
 static void blk_aio_complete(BlkAioEmAIOCB *acb)
 {
     if (acb->has_returned) {
+        bdrv_dec_in_flight(acb->common.bs);
         acb->common.cb(acb->common.opaque, acb->rwco.ret);
         qemu_aio_unref(acb);
     }
@@ -950,6 +962,7 @@ static BlockAIOCB *blk_aio_prwv(BlockBackend *blk, int64_t offset, int bytes,
     BlkAioEmAIOCB *acb;
     Coroutine *co;
 
+    bdrv_inc_in_flight(blk_bs(blk));
     acb = blk_aio_get(&blk_aio_em_aiocb_info, blk, cb, opaque);
     acb->rwco = (BlkRwCo) {
         .blk    = blk,
diff --git a/block/io.c b/block/io.c
index c0a5410..5e1fbf1 100644
--- a/block/io.c
+++ b/block/io.c
@@ -143,7 +143,7 @@ bool bdrv_requests_pending(BlockDriverState *bs)
 {
     BdrvChild *child;
 
-    if (!QLIST_EMPTY(&bs->tracked_requests)) {
+    if (atomic_read(&bs->in_flight)) {
         return true;
     }
 
@@ -176,12 +176,9 @@ typedef struct {
 
 static void bdrv_drain_poll(BlockDriverState *bs)
 {
-    bool busy = true;
-
-    while (busy) {
+    while (bdrv_requests_pending(bs)) {
         /* Keep iterating */
-        busy = bdrv_requests_pending(bs);
-        busy |= aio_poll(bdrv_get_aio_context(bs), busy);
+        aio_poll(bdrv_get_aio_context(bs), true);
     }
 }
 
@@ -189,8 +186,10 @@ static void bdrv_co_drain_bh_cb(void *opaque)
 {
     BdrvCoDrainData *data = opaque;
     Coroutine *co = data->co;
+    BlockDriverState *bs = data->bs;
 
-    bdrv_drain_poll(data->bs);
+    bdrv_dec_in_flight(bs);
+    bdrv_drain_poll(bs);
     data->done = true;
     qemu_coroutine_enter(co);
 }
@@ -209,6 +208,7 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs)
         .bs = bs,
         .done = false,
     };
+    bdrv_inc_in_flight(bs);
     aio_bh_schedule_oneshot(bdrv_get_aio_context(bs),
                             bdrv_co_drain_bh_cb, &data);
 
@@ -279,7 +279,7 @@ void bdrv_drain(BlockDriverState *bs)
 void bdrv_drain_all(void)
 {
     /* Always run first iteration so any pending completion BHs run */
-    bool busy = true;
+    bool waited = true;
     BlockDriverState *bs;
     BdrvNextIterator it;
     BlockJob *job = NULL;
@@ -313,8 +313,8 @@ void bdrv_drain_all(void)
      * request completion.  Therefore we must keep looping until there was no
      * more activity rather than simply draining each device independently.
      */
-    while (busy) {
-        busy = false;
+    while (waited) {
+        waited = false;
 
         for (ctx = aio_ctxs; ctx != NULL; ctx = ctx->next) {
             AioContext *aio_context = ctx->data;
@@ -323,12 +323,11 @@ void bdrv_drain_all(void)
             for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
                 if (aio_context == bdrv_get_aio_context(bs)) {
                     if (bdrv_requests_pending(bs)) {
-                        busy = true;
-                        aio_poll(aio_context, busy);
+                        aio_poll(aio_context, true);
+                        waited = true;
                     }
                 }
             }
-            busy |= aio_poll(aio_context, false);
             aio_context_release(aio_context);
         }
     }
@@ -476,6 +475,16 @@ static bool tracked_request_overlaps(BdrvTrackedRequest *req,
     return true;
 }
 
+void bdrv_inc_in_flight(BlockDriverState *bs)
+{
+    atomic_inc(&bs->in_flight);
+}
+
+void bdrv_dec_in_flight(BlockDriverState *bs)
+{
+    atomic_dec(&bs->in_flight);
+}
+
 static bool coroutine_fn wait_serialising_requests(BdrvTrackedRequest *self)
 {
     BlockDriverState *bs = self->bs;
@@ -1097,6 +1106,8 @@ int coroutine_fn bdrv_co_preadv(BdrvChild *child,
         return ret;
     }
 
+    bdrv_inc_in_flight(bs);
+
     /* Don't do copy-on-read if we read data before write operation */
     if (bs->copy_on_read && !(flags & BDRV_REQ_NO_SERIALISING)) {
         flags |= BDRV_REQ_COPY_ON_READ;
@@ -1132,6 +1143,7 @@ int coroutine_fn bdrv_co_preadv(BdrvChild *child,
                               use_local_qiov ? &local_qiov : qiov,
                               flags);
     tracked_request_end(&req);
+    bdrv_dec_in_flight(bs);
 
     if (use_local_qiov) {
         qemu_iovec_destroy(&local_qiov);
@@ -1480,6 +1492,7 @@ int coroutine_fn bdrv_co_pwritev(BdrvChild *child,
         return ret;
     }
 
+    bdrv_inc_in_flight(bs);
     /*
      * Align write if necessary by performing a read-modify-write cycle.
      * Pad qiov with the read parts and be sure to have a tracked request not
@@ -1581,6 +1594,7 @@ fail:
     qemu_vfree(tail_buf);
 out:
     tracked_request_end(&req);
+    bdrv_dec_in_flight(bs);
     return ret;
 }
 
@@ -1680,17 +1694,19 @@ static int64_t coroutine_fn bdrv_co_get_block_status(BlockDriverState *bs,
     }
 
     *file = NULL;
+    bdrv_inc_in_flight(bs);
     ret = bs->drv->bdrv_co_get_block_status(bs, sector_num, nb_sectors, pnum,
                                             file);
     if (ret < 0) {
         *pnum = 0;
-        return ret;
+        goto out;
     }
 
     if (ret & BDRV_BLOCK_RAW) {
         assert(ret & BDRV_BLOCK_OFFSET_VALID);
-        return bdrv_get_block_status(bs->file->bs, ret >> BDRV_SECTOR_BITS,
-                                     *pnum, pnum, file);
+        ret = bdrv_get_block_status(bs->file->bs, ret >> BDRV_SECTOR_BITS,
+                                    *pnum, pnum, file);
+        goto out;
     }
 
     if (ret & (BDRV_BLOCK_DATA | BDRV_BLOCK_ZERO)) {
@@ -1732,6 +1748,8 @@ static int64_t coroutine_fn bdrv_co_get_block_status(BlockDriverState *bs,
         }
     }
 
+out:
+    bdrv_dec_in_flight(bs);
     return ret;
 }
 
@@ -2077,6 +2095,7 @@ static const AIOCBInfo bdrv_em_co_aiocb_info = {
 static void bdrv_co_complete(BlockAIOCBCoroutine *acb)
 {
     if (!acb->need_bh) {
+        bdrv_dec_in_flight(acb->common.bs);
         acb->common.cb(acb->common.opaque, acb->req.error);
         qemu_aio_unref(acb);
     }
@@ -2127,6 +2146,9 @@ static BlockAIOCB *bdrv_co_aio_prw_vector(BdrvChild *child,
     Coroutine *co;
     BlockAIOCBCoroutine *acb;
 
+    /* Matched by bdrv_co_complete's bdrv_dec_in_flight.  */
+    bdrv_inc_in_flight(child->bs);
+
     acb = qemu_aio_get(&bdrv_em_co_aiocb_info, child->bs, cb, opaque);
     acb->child = child;
     acb->need_bh = true;
@@ -2160,6 +2182,9 @@ BlockAIOCB *bdrv_aio_flush(BlockDriverState *bs,
     Coroutine *co;
     BlockAIOCBCoroutine *acb;
 
+    /* Matched by bdrv_co_complete's bdrv_dec_in_flight.  */
+    bdrv_inc_in_flight(bs);
+
     acb = qemu_aio_get(&bdrv_em_co_aiocb_info, bs, cb, opaque);
     acb->need_bh = true;
     acb->req.error = -EINPROGRESS;
@@ -2188,6 +2213,9 @@ BlockAIOCB *bdrv_aio_pdiscard(BlockDriverState *bs, int64_t offset, int count,
 
     trace_bdrv_aio_pdiscard(bs, offset, count, opaque);
 
+    /* Matched by bdrv_co_complete's bdrv_dec_in_flight.  */
+    bdrv_inc_in_flight(bs);
+
     acb = qemu_aio_get(&bdrv_em_co_aiocb_info, bs, cb, opaque);
     acb->need_bh = true;
     acb->req.error = -EINPROGRESS;
@@ -2248,23 +2276,22 @@ static void coroutine_fn bdrv_flush_co_entry(void *opaque)
 int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
 {
     int ret;
-    BdrvTrackedRequest req;
 
     if (!bs || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
         bdrv_is_sg(bs)) {
         return 0;
     }
 
-    tracked_request_begin(&req, bs, 0, 0, BDRV_TRACKED_FLUSH);
+    bdrv_inc_in_flight(bs);
 
     int current_gen = bs->write_gen;
 
     /* Wait until any previous flushes are completed */
-    while (bs->active_flush_req != NULL) {
+    while (bs->active_flush_req) {
         qemu_co_queue_wait(&bs->flush_queue);
     }
 
-    bs->active_flush_req = &req;
+    bs->active_flush_req = true;
 
     /* Write back all layers by calling one driver function */
     if (bs->drv->bdrv_co_flush) {
@@ -2334,11 +2361,11 @@ flush_parent:
 out:
     /* Notify any pending flushes that we have completed */
     bs->flushed_gen = current_gen;
-    bs->active_flush_req = NULL;
+    bs->active_flush_req = false;
     /* Return value is ignored - it's ok if wait queue is empty */
     qemu_co_queue_next(&bs->flush_queue);
 
-    tracked_request_end(&req);
+    bdrv_dec_in_flight(bs);
     return ret;
 }
 
@@ -2421,6 +2448,7 @@ int coroutine_fn bdrv_co_pdiscard(BlockDriverState *bs, int64_t offset,
         return 0;
     }
 
+    bdrv_inc_in_flight(bs);
     tracked_request_begin(&req, bs, offset, count, BDRV_TRACKED_DISCARD);
 
     ret = notifier_with_return_list_notify(&bs->before_write_notifiers, &req);
@@ -2467,6 +2495,7 @@ out:
     bdrv_set_dirty(bs, req.offset >> BDRV_SECTOR_BITS,
                    req.bytes >> BDRV_SECTOR_BITS);
     tracked_request_end(&req);
+    bdrv_dec_in_flight(bs);
     return ret;
 }
 
@@ -2499,13 +2528,12 @@ int bdrv_pdiscard(BlockDriverState *bs, int64_t offset, int count)
 static int bdrv_co_do_ioctl(BlockDriverState *bs, int req, void *buf)
 {
     BlockDriver *drv = bs->drv;
-    BdrvTrackedRequest tracked_req;
     CoroutineIOCompletion co = {
         .coroutine = qemu_coroutine_self(),
     };
     BlockAIOCB *acb;
 
-    tracked_request_begin(&tracked_req, bs, 0, 0, BDRV_TRACKED_IOCTL);
+    bdrv_inc_in_flight(bs);
     if (!drv || !drv->bdrv_aio_ioctl) {
         co.ret = -ENOTSUP;
         goto out;
@@ -2518,7 +2546,7 @@ static int bdrv_co_do_ioctl(BlockDriverState *bs, int req, void *buf)
     }
     qemu_coroutine_yield();
 out:
-    tracked_request_end(&tracked_req);
+    bdrv_dec_in_flight(bs);
     return co.ret;
 }
 
@@ -2575,6 +2603,9 @@ BlockAIOCB *bdrv_aio_ioctl(BlockDriverState *bs,
                                             bs, cb, opaque);
     Coroutine *co;
 
+    /* Matched by bdrv_co_complete's bdrv_dec_in_flight.  */
+    bdrv_inc_in_flight(bs);
+
     acb->need_bh = true;
     acb->req.error = -EINPROGRESS;
     acb->req.req = req;
diff --git a/include/block/block_int.h b/include/block/block_int.h
index 3e79228..5a7308b 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -62,8 +62,6 @@
 enum BdrvTrackedRequestType {
     BDRV_TRACKED_READ,
     BDRV_TRACKED_WRITE,
-    BDRV_TRACKED_FLUSH,
-    BDRV_TRACKED_IOCTL,
     BDRV_TRACKED_DISCARD,
 };
 
@@ -443,7 +441,7 @@ struct BlockDriverState {
                          note this is a reference count */
 
     CoQueue flush_queue;            /* Serializing flush queue */
-    BdrvTrackedRequest *active_flush_req; /* Flush request in flight */
+    bool active_flush_req;          /* Flush request in flight? */
     unsigned int write_gen;         /* Current data generation */
     unsigned int flushed_gen;       /* Flushed write generation */
 
@@ -471,7 +469,8 @@ struct BlockDriverState {
     /* Callback before write request is processed */
     NotifierWithReturnList before_write_notifiers;
 
-    /* number of in-flight serialising requests */
+    /* number of in-flight requests; overall and serialising */
+    unsigned int in_flight;
     unsigned int serialising_in_flight;
 
     /* Offset after the highest byte written to */
@@ -785,6 +784,9 @@ bool bdrv_requests_pending(BlockDriverState *bs);
 void bdrv_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap **out);
 void bdrv_undo_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap *in);
 
+void bdrv_inc_in_flight(BlockDriverState *bs);
+void bdrv_dec_in_flight(BlockDriverState *bs);
+
 void blockdev_close_all_bdrv_states(void);
 
 #endif /* BLOCK_INT_H */
-- 
2.7.4


* [Qemu-devel] [PATCH 2/3] block: change drain to look only at one child at a time
  2016-10-07 16:19 [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation Paolo Bonzini
  2016-10-07 16:19 ` [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests Paolo Bonzini
@ 2016-10-07 16:19 ` Paolo Bonzini
  2016-10-10 11:17   ` Kevin Wolf
  2016-10-07 16:19 ` [Qemu-devel] [PATCH 3/3] qed: Implement .bdrv_drain Paolo Bonzini
  2016-10-11 16:46 ` [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation no-reply
  3 siblings, 1 reply; 14+ messages in thread
From: Paolo Bonzini @ 2016-10-07 16:19 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, kwolf, famz, qemu-block

bdrv_requests_pending is checking children to also wait until internal
requests (such as metadata writes) have completed.  However, checking
children is in general overkill.  Children requests can be of two kinds:

- requests caused by an operation on bs, e.g. a bdrv_aio_write to bs
causing a write to bs->file->bs.  In this case, the parent's in_flight
count will always be incremented by at least one for every request in
the child.

- asynchronous metadata writes or flushes.  Such writes can be started
even if bs's in_flight count is zero, but not after the .bdrv_drain
callback has been invoked.

This patch therefore changes bdrv_drain to first finish I/O in the parent
(after which the parent's in_flight count is locked at zero), then invoke
the driver's .bdrv_drain callback (after which the parent will not generate
I/O on the child anymore), and finally wait for internal I/O in the
children to complete.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/io.c | 54 ++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 18 deletions(-)

diff --git a/block/io.c b/block/io.c
index 5e1fbf1..c634fbc 100644
--- a/block/io.c
+++ b/block/io.c
@@ -156,16 +156,33 @@ bool bdrv_requests_pending(BlockDriverState *bs)
     return false;
 }
 
-static void bdrv_drain_recurse(BlockDriverState *bs)
+static bool bdrv_drain_poll(BlockDriverState *bs)
+{
+    bool waited = false;
+
+    while (atomic_read(&bs->in_flight) > 0) {
+        aio_poll(bdrv_get_aio_context(bs), true);
+        waited = true;
+    }
+    return waited;
+}
+
+static bool bdrv_drain_io_recurse(BlockDriverState *bs)
 {
     BdrvChild *child;
+    bool waited;
+
+    waited = bdrv_drain_poll(bs);
 
     if (bs->drv && bs->drv->bdrv_drain) {
         bs->drv->bdrv_drain(bs);
     }
+
     QLIST_FOREACH(child, &bs->children, next) {
-        bdrv_drain_recurse(child->bs);
+        waited |= bdrv_drain_io_recurse(child->bs);
     }
+
+    return waited;
 }
 
 typedef struct {
@@ -174,14 +191,6 @@ typedef struct {
     bool done;
 } BdrvCoDrainData;
 
-static void bdrv_drain_poll(BlockDriverState *bs)
-{
-    while (bdrv_requests_pending(bs)) {
-        /* Keep iterating */
-        aio_poll(bdrv_get_aio_context(bs), true);
-    }
-}
-
 static void bdrv_co_drain_bh_cb(void *opaque)
 {
     BdrvCoDrainData *data = opaque;
@@ -218,6 +227,20 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs)
     assert(data.done);
 }
 
+static void bdrv_co_drain_io_recurse(BlockDriverState *bs)
+{
+    BdrvChild *child;
+
+    bdrv_co_yield_to_drain(bs);
+    if (bs->drv && bs->drv->bdrv_drain) {
+        bs->drv->bdrv_drain(bs);
+    }
+
+    QLIST_FOREACH(child, &bs->children, next) {
+        bdrv_co_drain_io_recurse(child->bs);
+    }
+}
+
 void bdrv_drained_begin(BlockDriverState *bs)
 {
     if (!bs->quiesce_counter++) {
@@ -226,11 +249,10 @@ void bdrv_drained_begin(BlockDriverState *bs)
     }
 
     bdrv_io_unplugged_begin(bs);
-    bdrv_drain_recurse(bs);
     if (qemu_in_coroutine()) {
-        bdrv_co_yield_to_drain(bs);
+        bdrv_co_drain_io_recurse(bs);
     } else {
-        bdrv_drain_poll(bs);
+        bdrv_drain_io_recurse(bs);
     }
     bdrv_io_unplugged_end(bs);
 }
@@ -299,7 +321,6 @@ void bdrv_drain_all(void)
         aio_context_acquire(aio_context);
         bdrv_parent_drained_begin(bs);
         bdrv_io_unplugged_begin(bs);
-        bdrv_drain_recurse(bs);
         aio_context_release(aio_context);
 
         if (!g_slist_find(aio_ctxs, aio_context)) {
@@ -322,10 +343,7 @@ void bdrv_drain_all(void)
             aio_context_acquire(aio_context);
             for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
                 if (aio_context == bdrv_get_aio_context(bs)) {
-                    if (bdrv_requests_pending(bs)) {
-                        aio_poll(aio_context, true);
-                        waited = true;
-                    }
+                    waited |= bdrv_drain_io_recurse(bs);
                 }
             }
             aio_context_release(aio_context);
-- 
2.7.4


* [Qemu-devel] [PATCH 3/3] qed: Implement .bdrv_drain
  2016-10-07 16:19 [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation Paolo Bonzini
  2016-10-07 16:19 ` [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests Paolo Bonzini
  2016-10-07 16:19 ` [Qemu-devel] [PATCH 2/3] block: change drain to look only at one child at a time Paolo Bonzini
@ 2016-10-07 16:19 ` Paolo Bonzini
  2016-10-11 16:46 ` [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation no-reply
  3 siblings, 0 replies; 14+ messages in thread
From: Paolo Bonzini @ 2016-10-07 16:19 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, kwolf, famz, qemu-block

From: Fam Zheng <famz@redhat.com>

The "need_check_timer" is used to clear the "NEED_CHECK" flag in the
image header after a grace period once metadata update has finished. To
comply with the bdrv_drain semantics, we should make sure it remains
deleted once .bdrv_drain is called.

The change to qed_need_check_timer_cb is needed because bdrv_qed_drain
is called after s->bs has been drained, and should not operate on it;
instead it should operate on the BdrvChild-ren exclusively.  Doing so
is easy because QED does not have a bdrv_co_flush_to_os callback, hence
all that is needed to flush it is to ensure writes have reached the disk.

Based on commit df9a681dc9a (which however included some unrelated
hunks, possibly due to a merge failure or an overlooked squash).
The patch was reverted because at the time bdrv_qed_drain could call
qed_plug_allocating_write_reqs while an allocating write was queued.
This however is not possible anymore after the previous patch;
.bdrv_drain is only called after all writes have completed at the
QED level, and its purpose is to trigger metadata writes in bs->file.

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/qed.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/block/qed.c b/block/qed.c
index 3ee879b..1a7ef0a 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -336,7 +336,7 @@ static void qed_need_check_timer_cb(void *opaque)
     qed_plug_allocating_write_reqs(s);
 
     /* Ensure writes are on disk before clearing flag */
-    bdrv_aio_flush(s->bs, qed_clear_need_check, s);
+    bdrv_aio_flush(s->bs->file->bs, qed_clear_need_check, s);
 }
 
 static void qed_start_need_check_timer(BDRVQEDState *s)
@@ -378,6 +378,19 @@ static void bdrv_qed_attach_aio_context(BlockDriverState *bs,
     }
 }
 
+static void bdrv_qed_drain(BlockDriverState *bs)
+{
+    BDRVQEDState *s = bs->opaque;
+
+    /* Fire the timer immediately in order to start doing I/O as soon as the
+     * header is flushed.
+     */
+    if (s->need_check_timer && timer_pending(s->need_check_timer)) {
+        qed_cancel_need_check_timer(s);
+        qed_need_check_timer_cb(s);
+    }
+}
+
 static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
                          Error **errp)
 {
@@ -1668,6 +1681,7 @@ static BlockDriver bdrv_qed = {
     .bdrv_check               = bdrv_qed_check,
     .bdrv_detach_aio_context  = bdrv_qed_detach_aio_context,
     .bdrv_attach_aio_context  = bdrv_qed_attach_aio_context,
+    .bdrv_drain               = bdrv_qed_drain,
 };
 
 static void bdrv_qed_init(void)
-- 
2.7.4


* Re: [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests
  2016-10-07 16:19 ` [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests Paolo Bonzini
@ 2016-10-10 10:36   ` Kevin Wolf
  2016-10-10 16:37     ` Paolo Bonzini
  0 siblings, 1 reply; 14+ messages in thread
From: Kevin Wolf @ 2016-10-10 10:36 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, stefanha, famz, qemu-block

On 07.10.2016 at 18:19, Paolo Bonzini wrote:
> Unlike tracked_requests, this field also counts throttled requests [in
> the BlockBackend layer], and remains non-zero if an AIO operation
> needs a BH to be "really" completed.

Do you actually like this?

I think this is an incredibly ugly layering violation that we would
regret sooner or later. Requests that are pending in the BlockBackend
layer should be drained by BlockBackend code. Intertwining it with the
BlockDriverState feels wrong to me, the BDS shouldn't know about its
parent's requests.


At the BlockBackend level (or really anywhere in the graph), we have two
different kinds of drain requests that we just fail to implement
differently today:

1. blk_drain(), i.e. a request to actually drain all requests pending in
   this BlockBackend. This is what we implement today, even though only
   indirectly: We call bdrv_drain() on the root BDS and it calls back
   into what is case 2.

2. BdrvChildRole.drained_begin/end(), i.e. stop sending requests to a
   child node. The efficient way to do this is not what we're doing
   today (sending all throttled requests and waiting for their
   completion), but simply to stop processing throttled requests.

Your approach would permanently prevent us from doing this right and
force us to do the full drain even for the second case.


Maybe things become even a bit clearer if you extend your view of the
graph by one level and take the users of BlockBackends into account.
Nobody would expect block jobs or even guest devices to increment the
refcount of BDSes for requests that they will ever want to issue.
Instead, what we do is to stop them from sending requests while the
child is drained. This is the model for BlockBackend.

We're currently still using another layering violation there, which is
aio_disable_external(), but this could be done in a clean way by having
.drain_begin/end callbacks in BlockDevOps so that we propagate the
requests properly through the graph instead of bypassing it. The
advantage in fixing this would be that we don't stop devices in the same
AioContext that aren't related to the BDS being drained. Not sure if
it's important enough to actually fix it, but we should keep it in mind
as the model for how things should really work if done properly.

> With this change, it is no longer necessary to have a dummy
> BdrvTrackedRequest for requests that are never serialising, and
> it is no longer necessary to poll the AioContext once after
> bdrv_requests_pending(bs) returns false.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Kevin


* Re: [Qemu-devel] [PATCH 2/3] block: change drain to look only at one child at a time
  2016-10-07 16:19 ` [Qemu-devel] [PATCH 2/3] block: change drain to look only at one child at a time Paolo Bonzini
@ 2016-10-10 11:17   ` Kevin Wolf
  0 siblings, 0 replies; 14+ messages in thread
From: Kevin Wolf @ 2016-10-10 11:17 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, stefanha, famz, qemu-block

On 07.10.2016 at 18:19, Paolo Bonzini wrote:
> bdrv_requests_pending is checking children to also wait until internal
> requests (such as metadata writes) have completed.  However, checking
> children is in general overkill.  Children requests can be of two kinds:
> 
> - requests caused by an operation on bs, e.g. a bdrv_aio_write to bs
> causing a write to bs->file->bs.  In this case, the parent's in_flight
> count will always be incremented by at least one for every request in
> the child.

What about a node with multiple parents? The in_flight count will be
incremented for one of the parents, but is this good enough if we want
to drain the other one?

Actually, I don't think we ever bothered to consciously define what a
drain operation does, i.e. if only the requests on a single node are
drained or all nodes below it, too. Currently it is implemented
recursively and this is what the bdrv_drain() comment says, too:

 * Wait for pending requests to complete on a single BlockDriverState subtree,
 * and suspend block driver's internal I/O until next request arrives.

If we want to keep the operation defined for the whole subtree, your
assumption made here is wrong. If we want to change it, we need to audit
all callers and check if they need to recurse themselves now.

> - asynchronous metadata writes or flushes.  Such writes can be started
> even if bs's in_flight count is zero, but not after the .bdrv_drain
> callback has been invoked.

I don't think it actually makes a difference for the code, but this
could be data as well (imagine a writeback cache implemented in qemu as
I proposed in order to mitigate COW performance hits in qcow2).

> This patch therefore changes bdrv_drain to finish I/O in the parent
> (after which the parent's in_flight will be locked to zero), call
> bdrv_drain (after which the parent will not generate I/O on the child
> anymore), and then wait for internal I/O in the children to complete.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

However, it seems the only way in which you rely on the wrong assumption
is where the old code assumed the same. We should be calling the
BdrvChild .drained_begin/end callbacks for the children, with or without
this patch, so this is a preexisting bug.

Kevin


* Re: [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests
  2016-10-10 10:36   ` Kevin Wolf
@ 2016-10-10 16:37     ` Paolo Bonzini
  2016-10-11 11:00       ` Kevin Wolf
  0 siblings, 1 reply; 14+ messages in thread
From: Paolo Bonzini @ 2016-10-10 16:37 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-devel, stefanha, famz, qemu-block



On 10/10/2016 12:36, Kevin Wolf wrote:
> 
> At the BlockBackend level (or really anywhere in the graph), we have two
> different kinds of drain requests that we just fail to implement
> differently today:
> 
> 1. blk_drain(), i.e. a request to actually drain all requests pending in
>    this BlockBackend. This is what we implement today, even though only
>    indirectly: We call bdrv_drain() on the root BDS and it calls back
>    into what is case 2.
> 
> 2. BdrvChildRole.drained_begin/end(), i.e. stop sending requests to a
>    child node. The efficient way to do this is not what we're doing
>    today (sending all throttled requests and waiting for their
>    completion), but simply to stop processing throttled requests.
> 
> Your approach would permanently prevent us from doing this right and
> force us to do the full drain even for the second case.

You're entirely correct that these patches are a layering violation.
However, I think you're wrong that they are a step backwards, because
they are merely exposing pre-existing gaps in the BDS/BB separation.  In
fact, I think that these patches are a very good first step towards
getting the design correct, and I suspect that we agree on the design
since we have talked about it in the past.

Now, the good news is that we have an API that makes sense, and we're
using it mostly correctly.  (The exception is that there are places
where we don't have a BlockBackend and thus call
bdrv_drain/bdrv_co_drain instead of blk_drain/blk_co_drain).  Yes,
bdrv_drained_begin and bdrv_drained_end may need tweaks to work with
multiple parents, but wherever we use one of the APIs in the family
we're using the right one.

The bad news is that while the APIs are good, they are implemented in
the wrong way---all of them, though some less than others.  In
particular, blk_drain and bdrv_drain and
bdrv_drained_begin/bdrv_drained_end represent three completely different
concepts but we conflate the implementations.   But again, knowing and
exposing the problem is the first step of solving it, and IMO these
patches if anything make the problem easier to solve.


So, let's first of all look at what the three operations should mean.

blk_drain should mean "wait for completion of all requests sent from
*this* BlockBackend".  Typically it would be used by the owner of the
BlockBackend to get its own *data structures* in a known-good state.  It
should _not_ imply any wait for metadata writes to complete, and it
should not do anything to requests sent from other BlockBackends.  Three
random notes: 1) this should obviously include throttled operations,
just like it does now; 2) there should be a blk_co_drain too; 3) if the
implementation can't be contained entirely within BlockBackend, we're
doing it wrong.

bdrv_drain should only be needed in bdrv_set_aio_context and bdrv_close,
everyone else should drain its own BlockBackend or use a section.  Its
role is to ensure the BDS's children are completely quiescent.
Quiescence of the parents should be a precondition:
bdrv_set_aio_context should use bdrv_drained_begin/bdrv_drained_end for
this purpose, while in the case of bdrv_close there should be no parents.

bdrv_{co_,}drained_begin and bdrv_{co_,}drained_end should split into two:

- blk_{co_,}isolate_begin(blk) and blk_{co_,}isolate_end(blk).  These
operations visit the BdrvChild graph in the child-to-parent direction,
starting at blk->root->bs, and tell each root BdrvChild (other than blk)
to quiesce itself.  Quiescence means ensuring that no new I/O operations
are triggered.  This in turn has both an external aspect (a Notifier or
other callback invoked by blk_root_drained_begin/end) and an internal
aspect (throttling).

- bdrv_drained_begin(bs) and bdrv_drained_end(bs), which quiesce all
root BdrvChildren reachable from bs; bdrv_drained_begin then does a
recursive bdrv_drain(bs) to flush all pending writes.  Just like
bdrv_drain, bdrv_drained_begin and bdrv_drained_end should be used very
rarely, possibly only in bdrv_set_aio_context.


In this new setting, BlockBackend need not disable throttling in
blk_root_drained_begin/end, and this is important because it explains
why conflating the concepts is bad.  Once BlockBackends can have
"isolated sections" instead of "drained sections", calling blk_drain
from blk_root_drained_begin/end is just one way to ensure that throttled
writes are not submitted.  It's not a very good one, but that doesn't
change the fact that blk_drain should wait for throttled requests to
complete without disabling throttling!  In fact, I surmise that any
other case where you want blk_drain to ignore throttling is a case where
you have a bug or missing feature elsewhere, for example some minimal
bdrv_cancel support that lets you cancel those throttled requests.  (And
I suspect the handling of plugged operations in bdrv_drain is the same,
but I haven't put much thought into it.)


Now let's look at the implementation.

blk_drain and blk_co_drain can simply count in-flight requests, exactly
as done in patch 1.  Sure, I'm adding it at the wrong layer because not
every user of the block layer has already gotten its own personal BB.
However, moving the count from BDS to BB is easy, and another thing that
patch 1 gets right is to track the lifetime of AIO requests correctly
and remove the "double aio_poll" hack once and for all from the
implementation of bdrv_drain.

Because of the new precondition, bdrv_drain only needs to do the
recursive descent that it does in patch 2, without the previous
quiescing.  So that's another thing that the series nudges in the right
direction. :)  bdrv_drain should probably be renamed, but I can't think
of a good name.  Food for thought: think of merging the .bdrv_drain
callback with .bdrv_inactivate.

blk_isolate_begin(blk) should be just "bdrv_quiesce_begin(blk->root)"
(I'm now really sold on that BdrvChild thing!) and likewise for
blk_isolate_end->bdrv_quiesce_end, with bdrv_quiesce_begin/end doing the
child-to-parent walk.  Their first argument means "invoke quiescing
callbacks on all parents of blk->root, except for blk->root itself".
The BdrvChild callbacks would be renamed to .bdrv_quiesce_begin and
.bdrv_quiesce_end.

bdrv_drained_begin(bs) and bdrv_drained_end(bs) should be implemented as
above.

bdrv_drain_all should be changed to bdrv_quiesce_all_{begin,end}.  qdev
drive properties would install a vmstate notifier to quiesce their own
BlockBackend rather than relying on a blind bdrv_drain_all from vm_stop.


And thirdly, how to get there.

It should not surprise you that I think that these patches are the first
step.  The second is to separate the sections (isolated and quiescent)
from the concept of draining, including phasing out
bdrv_drain_all.  Now the multi-parent case is handled correctly and you
can introduce "BB for everyone" including block jobs, block migration
and the NBD server.  Finally you cleanup bdrv_drain and move the
in-flight count from BDS to BB.

Thoughts?

Paolo

> 
> Maybe things become even a bit clearer if you extend your view of the
> graph by one level and take the users of BlockBackends into account.
> Nobody would expect block jobs or even guest devices to increment the
> refcount of BDSes for requests that they will ever want to issue.
> Instead, what we do is to stop them from sending requests while the
> child is drained. This is the model for BlockBackend.

* Re: [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests
  2016-10-10 16:37     ` Paolo Bonzini
@ 2016-10-11 11:00       ` Kevin Wolf
  2016-10-11 14:09         ` Paolo Bonzini
  0 siblings, 1 reply; 14+ messages in thread
From: Kevin Wolf @ 2016-10-11 11:00 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, stefanha, famz, qemu-block

Am 10.10.2016 um 18:37 hat Paolo Bonzini geschrieben:
> On 10/10/2016 12:36, Kevin Wolf wrote:
> > 
> > At the BlockBackend level (or really anywhere in the graph), we have two
> > different kinds of drain requests that we just fail to implement
> > differently today:
> > 
> > 1. blk_drain(), i.e. a request to actually drain all requests pending in
> >    this BlockBackend. This is what we implement today, even though only
> >    indirectly: We call bdrv_drain() on the root BDS and it calls back
> >    into what is case 2.
> > 
> > 2. BdrvChildRole.drained_begin/end(), i.e. stop sending requests to a
> >    child node. The efficient way to do this is not what we're doing
> >    today (sending all throttled requests and waiting for their
> >    completion), but simply to stop processing throttled requests.
> > 
> > Your approach would permanently prevent us from doing this right and
> > force us to do the full drain even for the second case.
> 
> You're entirely correct that these patches are a layering violation.
> However, I think you're wrong that they are a step backwards, because
> they are merely exposing pre-existing gaps in the BDS/BB separation.  In
> fact, I think that these patches are a very good first step towards
> getting the design correct, and I suspect that we agree on the design
> since we have talked about it in the past.
> 
> Now, the good news is that we have an API that makes sense, and we're
> using it mostly correctly.  (The exception is that there are places
> where we don't have a BlockBackend and thus call
> bdrv_drain/bdrv_co_drain instead of blk_drain/blk_co_drain).  Yes,
> bdrv_drained_begin and bdrv_drained_end may need tweaks to work with
> multiple parents, but wherever we use one of the APIs in the family
> we're using the right one.
> 
> The bad news is that while the APIs are good, they are implemented in
> the wrong way---all of them, though some less than others.  In
> particular, blk_drain and bdrv_drain and
> bdrv_drained_begin/bdrv_drained_end represent three completely different
> concepts but we conflate the implementations.   But again, knowing and
> exposing the problem is the first step of solving it, and IMO these
> patches if anything make the problem easier to solve.
> 
> 
> So, let's first of all look at what the three operations should mean.
> 
> blk_drain should mean "wait for completion of all requests sent from
> *this* BlockBackend".  Typically it would be used by the owner of the
> BlockBackend to get its own *data structures* in a known-good state.  It
> should _not_ imply any wait for metadata writes to complete, and it
> should not do anything to requests sent from other BlockBackends.  Three
> random notes: 1) this should obviously include throttled operations,
> just like it does now; 2) there should be a blk_co_drain too; 3) if the
> implementation can't be contained entirely within BlockBackend, we're
> doing it wrong.
> 
> bdrv_drain should only be needed in bdrv_set_aio_context and bdrv_close,
> everyone else should drain its own BlockBackend or use a section.  Its
> role is to ensure the BDS's children are completely quiescent.
> Quiescence of the parents should be a precondition:
> bdrv_set_aio_context should use bdrv_drained_begin/bdrv_drained_end for
> this purpose, while in the case of bdrv_close there should be no parents.

You're making some good points here. I never gave much thought to the
fact that bdrv_drain and bdrv_drained_begin/end are really for different
purposes.

Actually, does bdrv_set_aio_context really need bdrv_drain, or should it
be using a begin/end pair, too?

bdrv_close() is an interesting case because it has both a section and
then inside that another bdrv_drain() call. Maybe that's the only
constellation in which bdrv_drain() without a section even really makes
sense because then the parents are already quiesced. If that's the only
user of it, though, maybe we should inline it and remove the public
definition of bdrv_drain(), which is almost always wrong.

> bdrv_{co_,}drained_begin and bdrv_{co_,}drained_end should split into two:
> 
> - blk_{co_,}isolate_begin(blk) and blk_{co_,}isolate_end(blk).  These
> operations visit the BdrvChild graph in the child-to-parent direction,
> starting at blk->root->bs, and tell each root BdrvChild (other than blk)
> to quiesce itself.  Quiescence means ensuring that no new I/O operations
> are triggered.  This in turn has both an external aspect (a Notifier or
> other callback invoked by blk_root_drained_begin/end) and an internal
> aspect (throttling).

It wouldn't only tell the BlockBackends to quiesce themselves, but also
all nodes on the path so that they don't issue internal requests.

Anyway, let's see which of the existing bdrv_drained_begin/end users
would use this (please correct):

* Block jobs use during completion

* The QMP transaction commands to start block jobs drain as well, but
  they don't have a BlockBackend yet. Those would call a BDS-level
  bdrv_isolate_begin/end then, right?

* Quorum wants to isolate itself from new parent requests while adding a
  new child, that's another user for bdrv_isolate_begin/end

* bdrv_snapshot_delete(), probably the same BDS-level thing

> - bdrv_drained_begin(bs) and bdrv_drained_end(bs), which quiesce all
> root BdrvChildren reachable from bs; bdrv_drained_begin then does a
> recursive bdrv_drain(bs) to flush all pending writes.  Just like
> bdrv_drain, bdrv_drained_begin and bdrv_drained_end should be used very
> rarely, possibly only in bdrv_set_aio_context.

bdrv_close probably, too. And that one doesn't really need the recursive
semantics because it closes only one node. If that makes the refcount of
the other nodes go down to zero, they will be closed individually and
drain themselves.

The reason why bdrv_set_aio_context needs the recursive version is
because it's recursive itself.

Possibly add a bool recursive flag to bdrv_drained_begin/end?

> In this new setting, BlockBackend need not disable throttling in
> blk_root_drained_begin/end, and this is important because it explains
> why conflating the concepts is bad.  Once BlockBackends can have
> "isolated sections" instead of "drained sections", calling blk_drain
> from blk_root_drained_begin/end is just one way to ensure that throttled
> writes are not submitted.  It's not a very good one, but that doesn't
> change the fact that blk_drain should wait for throttled requests to
> complete without disabling throttling!  In fact, I surmise that any
> other case where you want blk_drain to ignore throttling is a case where
> you have a bug or missing feature elsewhere, for example some minimal
> bdrv_cancel support that lets you cancel those throttled requests.  (And
> I suspect the handling of plugged operations in bdrv_drain is the same,
> but I haven't put much thought into it.)

Okay, I think we agree on the theory.

Now I'm curious how this relates to the layering violation in this
specific patch.

> Now let's look at the implementation.
> 
> blk_drain and blk_co_drain can simply count in-flight requests, exactly
> as done in patch 1.  Sure, I'm adding it at the wrong layer because not
> every user of the block layer has already gotten its own personal BB.

Really? Where is it still missing? I was pretty sure I had converted all
users.

> However, moving the count from BDS to BB is easy, and another thing that
> patch 1 gets right is to track the lifetime of AIO requests correctly
> and remove the "double aio_poll" hack once and for all from the
> implementation of bdrv_drain.
> 
> Because of the new precondition, bdrv_drain only needs to do the
> recursive descent that it does in patch 2, without the previous
> quiescing.  So that's another thing that the series nudges in the right
> direction. :)  bdrv_drain should probably be renamed, but I can't think
> of a good name.  Food for thought: think of merging the .bdrv_drain
> callback with .bdrv_inactivate.

Hm... I always thought that .bdrv_drain should really be
.bdrv_drained_begin/end in theory, but we didn't have a user for that
because the qed implementation for .bdrv_drained_end would happen to be
empty.

As for .bdrv_inactivate/invalidate, I agree that it should ideally
imply .bdrv_drained_begin/end, though in practice we don't assert
cleared BDRV_O_INACTIVE on reads, so I think it might not hold true.

I'm not so sure if .bdrv_drained_begin/end should imply inactivation,
though. Inactivation really means "this qemu process gives up control of
the image", so that a migration destination can take over. In terms of
the qcow2-based locking series that this was originally part of,
inactivation would mean dropping the lock.

Giving up control is most definitely not what you want in most cases
where you do .bdrv_drained_begin/end. Rather, you want _exclusive_
access, which is pretty much the opposite.

And this is true both for the isolate users mentioned above and for
things like changing the AIO context, which legitimately continue to use
bdrv_drain_begin/end.

I was wondering if we need separate .bdrv_isolate_begin/end callbacks,
but I think on this level there is actually no difference: Both tell the
block driver to stop sending requests.

> blk_isolate_begin(blk) should be just "bdrv_quiesce_begin(blk->root)"
> (I'm now really sold on that BdrvChild thing!) and likewise for
> blk_isolate_end->bdrv_quiesce_end, with bdrv_quiesce_begin/end doing the
> child-to-parent walk.  Their first argument means "invoke quiescing
> callbacks on all parents of blk->root, except for blk->root itself".
> The BdrvChild callbacks would be renamed to .bdrv_quiesce_begin and
> .bdrv_quiesce_end.
> 
> bdrv_drained_begin(bs) and bdrv_drained_end(bs) should be implemented as
> above.
> 
> bdrv_drain_all should be changed to bdrv_quiesce_all_{begin,end}.  qdev
> drive properties would install a vmstate notifier to quiesce their own
> BlockBackend rather than relying on a blind bdrv_drain_all from vm_stop.

Sounds reasonable.

> And thirdly, how to get there.
> 
> It should not surprise you that I think that these patches are the first
> step.  The second is to separate the sections (isolated and quiescent)
> from the concept of draining, including phasing out
> bdrv_drain_all.  Now the multi-parent case is handled correctly and you
> can introduce "BB for everyone" including block jobs, block migration
> and the NBD server.  Finally you cleanup bdrv_drain and move the
> in-flight count from BDS to BB.
> 
> Thoughts?

Why don't we start with adding a proper blk_drain so that we don't have
to introduce the ugly intermediate state? This would mostly mean
splitting the throttling drain into one version called by blk_drain that
restarts all requests while ignoring the throttling, and changing the
BdrvChild callback to only stop sending them. Once you're there, I think
that gets you rid of the BDS request count manipulations from the BB
layer because the BDS layer already counts all requests.

You hinted above that the reason is that not every place that needs it
has a BB yet. If you can tell me which users are missing, I can take a
look at converting them. I'm not aware of any and already considered
this done (we should have noticed them (as in failing builds) while
doing the BdrvChild conversion).

Kevin

* Re: [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests
  2016-10-11 11:00       ` Kevin Wolf
@ 2016-10-11 14:09         ` Paolo Bonzini
  2016-10-11 15:25           ` Kevin Wolf
  0 siblings, 1 reply; 14+ messages in thread
From: Paolo Bonzini @ 2016-10-11 14:09 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-devel, stefanha, famz, qemu-block

First of all, a correction:

>> The exception is that there are places
>> where we don't have a BlockBackend and thus call
>> bdrv_drain/bdrv_co_drain instead of blk_drain/blk_co_drain

Nevermind---it's just that there is no blk_drained_begin/end yet.

On 11/10/2016 13:00, Kevin Wolf wrote:
> Actually, does bdrv_set_aio_context really need bdrv_drain, or should it
> be using a begin/end pair, too?

Hmm, yes, it should use a pair.

> I was wondering if we need separate .bdrv_isolate_begin/end callbacks,
> but I think on this level there is actually no difference: Both tell the
> block driver to stop sending requests.

Yup, the difference is in whether you exempt one BlockBackend.  If you
do, that BB is isolated.  If you don't, the BDS is quiesced (or drained).

> Anyway, let's see which of the existing bdrv_drained_begin/end users
> would use this (please correct):
> 
> * Block jobs use during completion
>
> * The QMP transaction commands to start block jobs drain as well, but
>   they don't have a BlockBackend yet. Those would call a BDS-level
>   bdrv_isolate_begin/end then, right?
> 
> * Quorum wants to isolate itself from new parent requests while adding a
>   new child, that's another user for bdrv_isolate_begin/end
> 
> * bdrv_snapshot_delete(), probably the same BDS-level thing

Is what you call "a BDS-level bdrv_isolate_begin/end" the same as my
"bdrv_drained_begin(bs) and bdrv_drained_end(bs), which quiesce all root
BdrvChildren reachable from bs"?  Anyway I think we agree.

> Okay, I think we agree on the theory.
> 
> Now I'm curious how this relates to the layering violation in this
> specific patch.

It doesn't, but knowing that we want to go to the same place helps.
Especially since you said I was permanently preventing getting there. :)

>> blk_drain and blk_co_drain can simply count in-flight requests, exactly
>> as done in patch 1.  Sure, I'm adding it at the wrong layer because not
>> every user of the block layer has already gotten its own personal BB.
> 
> Really? Where is it still missing? I was pretty sure I had converted all
> users.

I mentioned block jobs, block migration and the NBD server.  They do use
a BB (as you said, they wouldn't compile after the BdrvChild
conversion), but they don't have their own *separate* BB as far as I can
tell, with separate throttling state etc.  How far are we from that?

> As for .bdrv_inactivate/invalidate, I agree that it should ideally
> imply .bdrv_drained_begin/end, though in practice we don't assert
> cleared BDRV_O_INACTIVE on reads, so I think it might not hold true.
> 
> I'm not so sure if .bdrv_drained_begin/end should imply inactivation,
> though.

Again, bad naming hurts...  See above where you said that bdrv_drain
should be called from bdrv_close only, and not be recursive.  And indeed
in qcow2_close you see:

    if (!(s->flags & BDRV_O_INACTIVE)) {
        qcow2_inactivate(bs);
    }

so this in bdrv_close:

    bdrv_flush(bs);
    bdrv_drain(bs); /* in case flush left pending I/O */

should be replaced by a non-recursive bdrv_inactivate, and bdrv_flush
should be the default implementation of bdrv_inactivate.  The QED code
added in patch 3 can also become its .bdrv_inactivate callback.  In
addition, stopping the VM could also do a "full flush" without setting
BDRV_O_INACTIVE instead of using bdrv_flush_all.

> Why don't we start with adding a proper blk_drain so that we don't have
> to introduce the ugly intermediate state? This would mostly mean
> splitting the throttling drain into one version called by blk_drain that
> restarts all requests while ignoring the throttling, and changing the
> BdrvChild callback to only stop sending them. Once you're there, I think
> that gets you rid of the BDS request count manipulations from the BB
> layer because the BDS layer already counts all requests.

I guess that would be doable.  However I don't think it's so easy.  I
also don't think it's very interesting (that's probably where we disagree).

First of all, the current implementation of bdrv_drain is nasty,
especially the double aio_poll and the recursive bdrv_requests_pending
are hard to follow.  My aim for these two patches was to have an
implementation of bdrv_drain that is more easily understandable, so that
you can _then_ move things to the correct layer.  So even if I
implemented a series to add a proper blk_drain, I would start with these
two patches and then move things up.

Second, there's the issue of bdrv_drained_begin/end, which is the main
reason why I placed the request count at the BDS level.  BlockBackend
isolation is a requirement before you can count requests exclusively at
the BB level.  At which point, implementing isolated sections properly
(including migration from aio_disable/enable_external to two new
BlockDevOps, and giving NBD+jobs+migration a separate BB) is more
interesting than cobbling together the minimum that's enough to
eliminate bs->in_flight.

All in all I don't think this should be done as a single patch series
and certainly wouldn't be ready in time for 2.8.  These two patches
(plus the blockjob_drain one that I posted) are needed for me to get rid
of RFifoLock and fix bdrv_drain_all deadlocks.  I'd really love to do
that in 2.8, and the patches for that are ready to be posted (as RFC I
guess).

Paolo

* Re: [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests
  2016-10-11 14:09         ` Paolo Bonzini
@ 2016-10-11 15:25           ` Kevin Wolf
  2016-10-11 16:45             ` Paolo Bonzini
  0 siblings, 1 reply; 14+ messages in thread
From: Kevin Wolf @ 2016-10-11 15:25 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, stefanha, famz, qemu-block

Am 11.10.2016 um 16:09 hat Paolo Bonzini geschrieben:
> > Anyway, let's see which of the existing bdrv_drained_begin/end users
> > would use this (please correct):
> > 
> > * Block jobs use during completion
> >
> > * The QMP transaction commands to start block jobs drain as well, but
> >   they don't have a BlockBackend yet. Those would call a BDS-level
> >   bdrv_isolate_begin/end then, right?
> > 
> > * Quorum wants to isolate itself from new parent requests while adding a
> >   new child, that's another user for bdrv_isolate_begin/end
> > 
> > * bdrv_snapshot_delete(), probably the same BDS-level thing
> 
> Is what you call "a BDS-level bdrv_isolate_begin/end" the same as my
> "bdrv_drained_begin(bs) and bdrv_drained_end(bs), which quiesce all root
> BdrvChildren reachable from bs"?  Anyway I think we agree.

Ah, is your bdrv_drained_begin() working in child-to-parent direction,
too? Then I misunderstood and it's the same, yes.

> >> blk_drain and blk_co_drain can simply count in-flight requests, exactly
> >> as done in patch 1.  Sure, I'm adding it at the wrong layer because not
> >> every user of the block layer has already gotten its own personal BB.
> > 
> > Really? Where is it still missing? I was pretty sure I had converted all
> > users.
> 
> I mentioned block jobs, block migration and the NBD server.  They do use
> a BB (as you said, they wouldn't compile after the BdrvChild
> conversion), but they don't have their own *separate* BB as far as I can
> tell, with separate throttling state etc.  How far are we from that?

They have separate BBs, we're already there.

> > As for .bdrv_inactivate/invalidate, I agree that it should ideally
> > imply .bdrv_drained_begin/end, though in practice we don't assert
> > cleared BDRV_O_INACTIVE on reads, so I think it might not hold true.
> > 
> > I'm not so sure if .bdrv_drained_begin/end should imply inactivation,
> > though.
> 
> Again, bad naming hurts...  See above where you said that bdrv_drain
> should be called from bdrv_close only, and not be recursive.  And indeed
> in qcow2_close you see:
> 
>     if (!(s->flags & BDRV_O_INACTIVE)) {
>         qcow2_inactivate(bs);
>     }
> 
> so this in bdrv_close:
> 
>     bdrv_flush(bs);
>     bdrv_drain(bs); /* in case flush left pending I/O */
> 
> should be replaced by a non-recursive bdrv_inactivate, and bdrv_flush
> should be the default implementation of bdrv_inactivate.  The QED code
> added in patch 3 can also become its .bdrv_inactivate callback.  In
> addition, stopping the VM could also do a "full flush" without setting
> BDRV_O_INACTIVE instead of using bdrv_flush_all.

bdrv_drain() should only be called from bdrv_close(), yes, and I would
agree with inactivating an image as the first step before we close it.

However, the .bdrv_drained_begin/end callbacks in BlockDriver are still
called in the context of isolation and that's different from
inactivating.

> > Why don't we start with adding a proper blk_drain so that we don't have
> > to introduce the ugly intermediate state? This would mostly mean
> > splitting the throttling drain into one version called by blk_drain that
> > restarts all requests while ignoring the throttling, and changing the
> > BdrvChild callback to only stop sending them. Once you're there, I think
> > that gets you rid of the BDS request count manipulations from the BB
> > layer because the BDS layer already counts all requests.
> 
> I guess that would be doable.  However I don't think it's so easy.  I
> also don't think it's very interesting (that's probably where we disagree).
> 
> First of all, the current implementation of bdrv_drain is nasty,
> especially the double aio_poll and the recursive bdrv_requests_pending
> are hard to follow.  My aim for these two patches was to have an
> implementation of bdrv_drain that is more easily understandable, so that
> you can _then_ move things to the correct layer.  So even if I
> implemented a series to add a proper blk_drain, I would start with these
> two patches and then move things up.
> 
> Second, there's the issue of bdrv_drained_begin/end, which is the main
> reason why I placed the request count at the BDS level.  BlockBackend
> isolation is a requirement before you can count requests exclusively at
> the BB level.  At which point, implementing isolated sections properly
> (including migration from aio_disable/enable_external to two new
> BlockDevOps, and giving NBD+jobs+migration a separate BB) is more
> interesting than cobbling together the minimum that's enough to
> eliminate bs->in_flight.

I think my point was that you don't have to count requests at the BB
level if you know that there are no requests pending on the BB level
that haven't reached the BDS level yet. If you move the restarting of
throttled requests to blk_drain(), shouldn't that be all you need to do
on the BB level for now?

Of course, having the full thing would be nice, but I don't think it's a
requirement.

Hm, or actually, doesn't the BdrvChild callback still do this work after
your patch? Maybe I'm failing to understand something important about
the patch...

> All in all I don't think this should be done as a single patch series
> and certainly wouldn't be ready in time for 2.8.  These two patches
> (plus the blockjob_drain one that I posted) are needed for me to get rid
> of RFifoLock and fix bdrv_drain_all deadlocks.  I'd really love to do
> that in 2.8, and the patches for that are ready to be posted (as RFC I
> guess).

Agreed, I don't want to make the full thing a requirement.

Kevin

* Re: [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests
  2016-10-11 15:25           ` Kevin Wolf
@ 2016-10-11 16:45             ` Paolo Bonzini
  2016-10-12  9:50               ` Kevin Wolf
  0 siblings, 1 reply; 14+ messages in thread
From: Paolo Bonzini @ 2016-10-11 16:45 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, famz, qemu-devel, stefanha



On 11/10/2016 17:25, Kevin Wolf wrote:
> Am 11.10.2016 um 16:09 hat Paolo Bonzini geschrieben:
>> Is what you call "a BDS-level bdrv_isolate_begin/end" the same as my
>> "bdrv_drained_begin(bs) and bdrv_drained_end(bs), which quiesce all root
>> BdrvChildren reachable from bs"?  Anyway I think we agree.
> 
> Ah, is your bdrv_drained_begin() working in child-to-parent direction,
> too?

Yes.

>> I mentioned block jobs, block migration and the NBD server.  They do use
>> a BB (as you said, they wouldn't compile after the BdrvChild
>> conversion), but they don't have their own *separate* BB as far as I can
>> tell, with separate throttling state etc.  How far are we from that?
> 
> They have separate BBs, we're already there.

Hey, you're right!  I don't know how I missed that!

> bdrv_drain() should only be called from bdrv_close(), yes, and I would
> agree with inactivating an image as the first step before we close it.
> 
> However, the .bdrv_drained_begin/end callbacks in BlockDriver are still
> called in the context of isolation and that's different from
> inactivating.

Agreed, .bdrv_drained_begin/end is separate from inactivation.  That's
just "bdrv_drain" (which we agreed has to be called from bdrv_close only).

> I think my point was that you don't have to count requests at the BB
> level if you know that there are no requests pending on the BB level
> that haven't reached the BDS level yet.

I need to count requests at the BB level because the blk_aio_*
operations have a separate bottom half that is invoked if either 1) the
request never reaches the BDS (because of an error), or 2) the bdrv_co_*
call completes without yielding.  The count must be >0 when blk_aio_*
returns, or bdrv_drain (and thus blk_drain) won't loop.  Because
bdrv_drain and blk_drain are conflated, the counter must be the BDS one.

In turn, the BDS counter is needed because of the lack of isolated
sections.  The right design would be for blk_isolate_begin to call
blk_drain on *other* BlockBackends reachable in a child-to-parent visit.
Instead, until that is implemented, we have bdrv_drained_begin, which
emulates that through the same-named callback, followed by a
parent-to-child bdrv_drain that is almost always unnecessary.

> If you move the restarting to
> throttled requests to blk_drain(), shouldn't that be all you need to do
> on the BB level for now?
>
> Hm, or actually, doesn't the BdrvChild callback still do this work after
> your patch?

Yes, it does.  The only difference is that the callback is no longer
needed for bdrv_requests_pending to return true.  With my patch,
bdrv_requests_pending always returns true if there are throttled
requests.  Without my patch, it relies on blk_root_drained_begin's call
to throttle_group_restart_blk.

The long-term view is that there would be a difference between blk_drain
and the BdrvChild callback.  blk_drain waits for throttled requests
(using the same algorithm as bdrv_drain and a BB-level counter only).
Instead, the BdrvChild callback can merely arrange to be ignored by the
throttle group.

(By the way, I need to repost this series anyway, but let's finish the
discussion first to understand what you'd like to have in 2.8).

Paolo

> Maybe I'm failing to understand something important about
> the patch...
> 
>> All in all I don't think this should be done as a single patch series
>> and certainly wouldn't be ready in time for 2.8.  These two patches
>> (plus the blockjob_drain one that I posted) are needed for me to get rid
>> of RFifoLock and fix bdrv_drain_all deadlocks.  I'd really love to do
>> that in 2.8, and the patches for that are ready to be posted (as RFC I
>> guess).
> 
> Agreed, I don't want to make the full thing a requirement.
> 
> Kevin
> 
> 


* Re: [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation
  2016-10-07 16:19 [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation Paolo Bonzini
                   ` (2 preceding siblings ...)
  2016-10-07 16:19 ` [Qemu-devel] [PATCH 3/3] qed: Implement .bdrv_drain Paolo Bonzini
@ 2016-10-11 16:46 ` no-reply
  3 siblings, 0 replies; 14+ messages in thread
From: no-reply @ 2016-10-11 16:46 UTC (permalink / raw)
  To: pbonzini; +Cc: famz, qemu-devel, kwolf

Hi,

Your series failed automatic build test. Please find the testing commands and
their output below. If you have docker installed, you can probably reproduce it
locally.

Subject: [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation
Message-id: 1475857193-28735-1-git-send-email-pbonzini@redhat.com
Type: series

=== TEST SCRIPT BEGIN ===
#!/bin/bash
set -e
git submodule update --init dtc
# Let docker tests dump environment info
export SHOW_ENV=1
make J=8 docker-test-quick@centos6
make J=8 docker-test-mingw@fedora
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
783bbdc qed: Implement .bdrv_drain
5532f33 block: change drain to look only at one child at a time
36c2426 block: add BDS field to count in-flight requests

=== OUTPUT BEGIN ===
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf'
  BUILD   centos6
  ARCHIVE qemu.tgz
  ARCHIVE dtc.tgz
  COPY    RUNNER
  RUN     test-quick in centos6
Packages installed:
SDL-devel-1.2.14-7.el6_7.1.x86_64
ccache-3.1.6-2.el6.x86_64
epel-release-6-8.noarch
gcc-4.4.7-17.el6.x86_64
git-1.7.1-4.el6_7.1.x86_64
glib2-devel-2.28.8-5.el6.x86_64
libfdt-devel-1.4.0-1.el6.x86_64
make-3.81-23.el6.x86_64
package g++ is not installed
pixman-devel-0.32.8-1.el6.x86_64
tar-1.23-15.el6_8.x86_64
zlib-devel-1.2.3-29.el6.x86_64

Environment variables:
PACKAGES=libfdt-devel ccache     tar git make gcc g++     zlib-devel glib2-devel SDL-devel pixman-devel     epel-release
HOSTNAME=c061aa2804f1
TERM=xterm
MAKEFLAGS= -j8
HISTSIZE=1000
J=8
USER=root
CCACHE_DIR=/var/tmp/ccache
EXTRA_CONFIGURE_OPTS=
V=
SHOW_ENV=1
MAIL=/var/spool/mail/root
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
LANG=en_US.UTF-8
TARGET_LIST=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/root
TEST_DIR=/tmp/qemu-test
LOGNAME=root
LESSOPEN=||/usr/bin/lesspipe.sh %s
FEATURES= dtc
DEBUG=
G_BROKEN_FILENAMES=1
CCACHE_HASHDIR=
_=/usr/bin/env

Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/var/tmp/qemu-build/install
No C++ compiler available; disabling C++ specific optional code
Install prefix    /var/tmp/qemu-build/install
BIOS directory    /var/tmp/qemu-build/install/share/qemu
binary directory  /var/tmp/qemu-build/install/bin
library directory /var/tmp/qemu-build/install/lib
module directory  /var/tmp/qemu-build/install/lib/qemu
libexec directory /var/tmp/qemu-build/install/libexec
include directory /var/tmp/qemu-build/install/include
config directory  /var/tmp/qemu-build/install/etc
local state directory   /var/tmp/qemu-build/install/var
Manual directory  /var/tmp/qemu-build/install/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path       /tmp/qemu-test/src
C compiler        cc
Host C compiler   cc
C++ compiler      
Objective-C compiler cc
ARFLAGS           rv
CFLAGS            -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g 
QEMU_CFLAGS       -I/usr/include/pixman-1    -pthread -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include   -fPIE -DPIE -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv  -Wendif-labels -Wmissing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-all
LDFLAGS           -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g 
make              make
install           install
python            python -B
smbd              /usr/sbin/smbd
module support    no
host CPU          x86_64
host big endian   no
target list       x86_64-softmmu aarch64-softmmu
tcg debug enabled no
gprof enabled     no
sparse enabled    no
strip binaries    yes
profiler          no
static build      no
pixman            system
SDL support       yes (1.2.14)
GTK support       no 
GTK GL support    no
VTE support       no 
TLS priority      NORMAL
GNUTLS support    no
GNUTLS rnd        no
libgcrypt         no
libgcrypt kdf     no
nettle            no 
nettle kdf        no
libtasn1          no
curses support    no
virgl support     no
curl support      no
mingw32 support   no
Audio drivers     oss
Block whitelist (rw) 
Block whitelist (ro) 
VirtFS support    no
VNC support       yes
VNC SASL support  no
VNC JPEG support  no
VNC PNG support   no
xen support       no
brlapi support    no
bluez  support    no
Documentation     no
PIE               yes
vde support       no
netmap support    no
Linux AIO support no
ATTR/XATTR support yes
Install blobs     yes
KVM support       yes
RDMA support      no
TCG interpreter   no
fdt support       yes
preadv support    yes
fdatasync         yes
madvise           yes
posix_madvise     yes
libcap-ng support no
vhost-net support yes
vhost-scsi support yes
vhost-vsock support yes
Trace backends    log
spice support     no 
rbd support       no
xfsctl support    no
smartcard support no
libusb            no
usb net redir     no
OpenGL support    no
OpenGL dmabufs    no
libiscsi support  no
libnfs support    no
build guest agent yes
QGA VSS support   no
QGA w32 disk info no
QGA MSI support   no
seccomp support   no
coroutine backend ucontext
coroutine pool    yes
debug stack usage no
GlusterFS support no
Archipelago support no
gcov              gcov
gcov enabled      no
TPM support       yes
libssh2 support   no
TPM passthrough   yes
QOM debugging     yes
lzo support       no
snappy support    no
bzip2 support     no
NUMA host support no
tcmalloc support  no
jemalloc support  no
avx2 optimization no
replication support yes
  GEN     x86_64-softmmu/config-devices.mak.tmp
  GEN     aarch64-softmmu/config-devices.mak.tmp
  GEN     config-host.h
  GEN     qemu-options.def
  GEN     qmp-commands.h
  GEN     qapi-types.h
  GEN     qapi-visit.h
  GEN     qapi-event.h
  GEN     x86_64-softmmu/config-devices.mak
  GEN     aarch64-softmmu/config-devices.mak
  GEN     qmp-introspect.h
  GEN     module_block.h
  GEN     tests/test-qapi-types.h
  GEN     tests/test-qapi-visit.h
  GEN     tests/test-qmp-commands.h
  GEN     tests/test-qapi-event.h
  GEN     tests/test-qmp-introspect.h
  GEN     config-all-devices.mak
  GEN     trace/generated-events.h
  GEN     trace/generated-tracers.h
  GEN     trace/generated-tcg-tracers.h
  GEN     trace/generated-helpers-wrappers.h
  GEN     trace/generated-helpers.h
  CC      tests/qemu-iotests/socket_scm_helper.o
  GEN     qga/qapi-generated/qga-qapi-types.h
  GEN     qga/qapi-generated/qga-qapi-visit.h
  GEN     qga/qapi-generated/qga-qmp-commands.h
  GEN     qga/qapi-generated/qga-qapi-types.c
  GEN     qga/qapi-generated/qga-qapi-visit.c
  GEN     qga/qapi-generated/qga-qmp-marshal.c
  GEN     qmp-introspect.c
  GEN     qapi-types.c
  GEN     qapi-visit.c
  GEN     qapi-event.c
  CC      qapi/qapi-visit-core.o
  CC      qapi/qapi-dealloc-visitor.o
  CC      qapi/qmp-input-visitor.o
  CC      qapi/qmp-output-visitor.o
  CC      qapi/qmp-registry.o
  CC      qapi/qmp-dispatch.o
  CC      qapi/string-input-visitor.o
  CC      qapi/string-output-visitor.o
  CC      qapi/opts-visitor.o
  CC      qapi/qapi-clone-visitor.o
  CC      qapi/qmp-event.o
  CC      qapi/qapi-util.o
  CC      qobject/qnull.o
  CC      qobject/qint.o
  CC      qobject/qstring.o
  CC      qobject/qdict.o
  CC      qobject/qlist.o
  CC      qobject/qfloat.o
  CC      qobject/qbool.o
  CC      qobject/qjson.o
  CC      qobject/qobject.o
  CC      qobject/json-lexer.o
  CC      qobject/json-streamer.o
  CC      qobject/json-parser.o
  GEN     trace/generated-events.c
  CC      trace/control.o
  CC      trace/qmp.o
  CC      util/osdep.o
  CC      util/cutils.o
  CC      util/unicode.o
  CC      util/qemu-timer-common.o
  CC      util/bufferiszero.o
  CC      util/compatfd.o
  CC      util/event_notifier-posix.o
  CC      util/mmap-alloc.o
  CC      util/oslib-posix.o
  CC      util/qemu-openpty.o
  CC      util/qemu-thread-posix.o
  CC      util/memfd.o
  CC      util/envlist.o
  CC      util/bitmap.o
  CC      util/path.o
  CC      util/module.o
  CC      util/bitops.o
  CC      util/fifo8.o
  CC      util/hbitmap.o
  CC      util/acl.o
  CC      util/error.o
  CC      util/qemu-error.o
  CC      util/id.o
  CC      util/iov.o
  CC      util/qemu-config.o
  CC      util/qemu-sockets.o
  CC      util/notify.o
  CC      util/uri.o
  CC      util/qemu-option.o
  CC      util/qemu-progress.o
  CC      util/hexdump.o
  CC      util/crc32c.o
  CC      util/uuid.o
  CC      util/throttle.o
  CC      util/getauxval.o
  CC      util/readline.o
  CC      util/rfifolock.o
  CC      util/rcu.o
  CC      util/qemu-coroutine.o
  CC      util/qemu-coroutine-lock.o
  CC      util/qemu-coroutine-io.o
  CC      util/qemu-coroutine-sleep.o
  CC      util/coroutine-ucontext.o
  CC      util/buffer.o
  CC      util/timed-average.o
  CC      util/base64.o
  CC      util/log.o
  CC      util/qdist.o
  CC      util/qht.o
  CC      util/range.o
  CC      crypto/pbkdf-stub.o
  CC      stubs/arch-query-cpu-def.o
  CC      stubs/arch-query-cpu-model-expansion.o
  CC      stubs/arch-query-cpu-model-comparison.o
  CC      stubs/arch-query-cpu-model-baseline.o
  CC      stubs/bdrv-next-monitor-owned.o
  CC      stubs/blk-commit-all.o
  CC      stubs/blockdev-close-all-bdrv-states.o
  CC      stubs/clock-warp.o
  CC      stubs/cpu-get-clock.o
  CC      stubs/cpu-get-icount.o
  CC      stubs/dump.o
  CC      stubs/fdset-add-fd.o
  CC      stubs/fdset-find-fd.o
  CC      stubs/fdset-get-fd.o
  CC      stubs/fdset-remove-fd.o
  CC      stubs/gdbstub.o
  CC      stubs/get-fd.o
  CC      stubs/get-next-serial.o
  CC      stubs/get-vm-name.o
  CC      stubs/iothread-lock.o
  CC      stubs/is-daemonized.o
  CC      stubs/machine-init-done.o
  CC      stubs/migr-blocker.o
  CC      stubs/mon-is-qmp.o
  CC      stubs/mon-printf.o
  CC      stubs/monitor-init.o
  CC      stubs/notify-event.o
  CC      stubs/qtest.o
  CC      stubs/replay.o
  CC      stubs/replay-user.o
  CC      stubs/reset.o
  CC      stubs/runstate-check.o
  CC      stubs/set-fd-handler.o
  CC      stubs/slirp.o
  CC      stubs/sysbus.o
  CC      stubs/trace-control.o
  CC      stubs/uuid.o
  CC      stubs/vm-stop.o
  CC      stubs/vmstate.o
  CC      stubs/cpus.o
  CC      stubs/kvm.o
  CC      stubs/qmp_pc_dimm_device_list.o
  CC      stubs/target-monitor-defs.o
  CC      stubs/target-get-monitor-def.o
  CC      stubs/vhost.o
  CC      stubs/iohandler.o
  CC      stubs/smbios_type_38.o
  CC      stubs/ipmi.o
  CC      stubs/pc_madt_cpu_entry.o
  CC      contrib/ivshmem-client/main.o
  CC      contrib/ivshmem-client/ivshmem-client.o
  CC      contrib/ivshmem-server/ivshmem-server.o
  CC      contrib/ivshmem-server/main.o
  CC      qemu-nbd.o
  CC      async.o
  CC      thread-pool.o
  CC      block.o
  CC      blockjob.o
  CC      main-loop.o
  CC      iohandler.o
  CC      qemu-timer.o
  CC      aio-posix.o
  CC      qemu-io-cmds.o
  CC      replication.o
  CC      block/raw_bsd.o
  CC      block/qcow.o
  CC      block/vdi.o
  CC      block/vmdk.o
  CC      block/cloop.o
  CC      block/bochs.o
  CC      block/vpc.o
  CC      block/vvfat.o
  CC      block/dmg.o
  CC      block/qcow2.o
  CC      block/qcow2-refcount.o
  CC      block/qcow2-cluster.o
  CC      block/qcow2-snapshot.o
  CC      block/qcow2-cache.o
  CC      block/qed.o
  CC      block/qed-gencb.o
  CC      block/qed-l2-cache.o
  CC      block/qed-table.o
  CC      block/qed-cluster.o
  CC      block/qed-check.o
  CC      block/vhdx.o
  CC      block/vhdx-endian.o
  CC      block/vhdx-log.o
  CC      block/quorum.o
  CC      block/parallels.o
  CC      block/blkdebug.o
  CC      block/blkverify.o
  CC      block/blkreplay.o
  CC      block/block-backend.o
  CC      block/snapshot.o
  CC      block/qapi.o
  CC      block/raw-posix.o
  CC      block/null.o
  CC      block/mirror.o
  CC      block/commit.o
  CC      block/io.o
  CC      block/throttle-groups.o
  CC      block/nbd.o
  CC      block/nbd-client.o
  CC      block/sheepdog.o
  CC      block/accounting.o
  CC      block/dirty-bitmap.o
  CC      block/write-threshold.o
  CC      block/backup.o
  CC      block/replication.o
  CC      block/crypto.o
  CC      nbd/server.o
  CC      nbd/client.o
  CC      nbd/common.o
  CC      crypto/init.o
  CC      crypto/hash.o
  CC      crypto/hash-glib.o
  CC      crypto/aes.o
  CC      crypto/desrfb.o
  CC      crypto/cipher.o
  CC      crypto/tlscreds.o
  CC      crypto/tlscredsanon.o
  CC      crypto/tlscredsx509.o
  CC      crypto/tlssession.o
  CC      crypto/secret.o
  CC      crypto/random-platform.o
  CC      crypto/ivgen.o
  CC      crypto/pbkdf.o
  CC      crypto/ivgen-essiv.o
  CC      crypto/ivgen-plain.o
  CC      crypto/ivgen-plain64.o
  CC      crypto/xts.o
  CC      crypto/afsplit.o
  CC      crypto/block.o
  CC      crypto/block-qcow.o
  CC      crypto/block-luks.o
  CC      io/channel.o
  CC      io/channel-command.o
  CC      io/channel-buffer.o
  CC      io/channel-file.o
  CC      io/channel-socket.o
  CC      io/channel-tls.o
  CC      io/channel-watch.o
  CC      io/channel-websock.o
  CC      io/channel-util.o
  CC      io/task.o
  CC      qom/object.o
  CC      qom/container.o
  CC      qom/qom-qobject.o
  CC      qom/object_interfaces.o
  GEN     qemu-img-cmds.h
  CC      qemu-io.o
  CC      qemu-bridge-helper.o
  CC      blockdev.o
  CC      blockdev-nbd.o
  CC      iothread.o
  CC      qdev-monitor.o
  CC      device-hotplug.o
  CC      os-posix.o
  CC      qemu-char.o
  CC      page_cache.o
  CC      accel.o
  CC      bt-host.o
  CC      bt-vhci.o
  CC      dma-helpers.o
  CC      vl.o
  CC      tpm.o
  CC      device_tree.o
  GEN     qmp-marshal.c
  CC      qmp.o
  CC      hmp.o
  CC      tcg-runtime.o
  CC      cpus-common.o
  CC      audio/audio.o
  CC      audio/noaudio.o
  CC      audio/wavaudio.o
  CC      audio/mixeng.o
  CC      audio/sdlaudio.o
  CC      audio/ossaudio.o
  CC      audio/wavcapture.o
  CC      backends/rng.o
  CC      backends/rng-egd.o
  CC      backends/rng-random.o
  CC      backends/msmouse.o
  CC      backends/testdev.o
  CC      backends/tpm.o
  CC      backends/hostmem.o
  CC      backends/hostmem-ram.o
  CC      backends/hostmem-file.o
  CC      block/stream.o
  CC      disas/arm.o
  CC      disas/i386.o
  CC      fsdev/qemu-fsdev-dummy.o
  CC      fsdev/qemu-fsdev-opts.o
  CC      hw/acpi/core.o
  CC      hw/acpi/piix4.o
  CC      hw/acpi/pcihp.o
  CC      hw/acpi/ich9.o
  CC      hw/acpi/tco.o
  CC      hw/acpi/cpu_hotplug.o
  CC      hw/acpi/memory_hotplug.o
  CC      hw/acpi/memory_hotplug_acpi_table.o
  CC      hw/acpi/cpu.o
  CC      hw/acpi/acpi_interface.o
  CC      hw/acpi/bios-linker-loader.o
  CC      hw/acpi/aml-build.o
  CC      hw/acpi/ipmi.o
  CC      hw/audio/sb16.o
  CC      hw/audio/es1370.o
  CC      hw/audio/ac97.o
  CC      hw/audio/fmopl.o
  CC      hw/audio/adlib.o
  CC      hw/audio/gus.o
  CC      hw/audio/gusemu_hal.o
  CC      hw/audio/gusemu_mixer.o
  CC      hw/audio/cs4231a.o
  CC      hw/audio/intel-hda.o
  CC      hw/audio/hda-codec.o
  CC      hw/audio/pcspk.o
  CC      hw/audio/wm8750.o
  CC      hw/audio/pl041.o
  CC      hw/audio/lm4549.o
  CC      hw/audio/marvell_88w8618.o
  CC      hw/block/block.o
  CC      hw/block/cdrom.o
  CC      hw/block/fdc.o
  CC      hw/block/hd-geometry.o
  CC      hw/block/m25p80.o
  CC      hw/block/nand.o
  CC      hw/block/pflash_cfi01.o
  CC      hw/block/pflash_cfi02.o
  CC      hw/block/ecc.o
  CC      hw/block/onenand.o
  CC      hw/block/nvme.o
  CC      hw/bt/core.o
  CC      hw/bt/l2cap.o
  CC      hw/bt/sdp.o
  CC      hw/bt/hci.o
  CC      hw/bt/hid.o
  CC      hw/bt/hci-csr.o
  CC      hw/char/ipoctal232.o
  CC      hw/char/parallel.o
  CC      hw/char/pl011.o
  CC      hw/char/serial-isa.o
  CC      hw/char/serial.o
  CC      hw/char/serial-pci.o
  CC      hw/char/virtio-console.o
  CC      hw/char/cadence_uart.o
  CC      hw/char/debugcon.o
  CC      hw/char/imx_serial.o
  CC      hw/core/qdev.o
  CC      hw/core/qdev-properties.o
  CC      hw/core/bus.o
  CC      hw/core/fw-path-provider.o
  CC      hw/core/irq.o
  CC      hw/core/hotplug.o
  CC      hw/core/ptimer.o
  CC      hw/core/sysbus.o
  CC      hw/core/machine.o
  CC      hw/core/null-machine.o
  CC      hw/core/loader.o
  CC      hw/core/qdev-properties-system.o
  CC      hw/core/register.o
  CC      hw/core/or-irq.o
  CC      hw/core/platform-bus.o
  CC      hw/display/ads7846.o
  CC      hw/display/cirrus_vga.o
  CC      hw/display/pl110.o
  CC      hw/display/ssd0303.o
  CC      hw/display/ssd0323.o
  CC      hw/display/vga-pci.o
  CC      hw/display/vga-isa.o
  CC      hw/display/vmware_vga.o
  CC      hw/display/blizzard.o
  CC      hw/display/exynos4210_fimd.o
  CC      hw/display/framebuffer.o
  CC      hw/display/tc6393xb.o
  CC      hw/dma/pl080.o
  CC      hw/dma/xlnx-zynq-devcfg.o
  CC      hw/dma/pl330.o
  CC      hw/dma/i8257.o
  CC      hw/gpio/max7310.o
  CC      hw/gpio/pl061.o
  CC      hw/gpio/zaurus.o
  CC      hw/gpio/gpio_key.o
  CC      hw/i2c/core.o
  CC      hw/i2c/smbus.o
  CC      hw/i2c/smbus_eeprom.o
  CC      hw/i2c/i2c-ddc.o
  CC      hw/i2c/versatile_i2c.o
  CC      hw/i2c/smbus_ich9.o
  CC      hw/i2c/pm_smbus.o
  CC      hw/i2c/bitbang_i2c.o
  CC      hw/i2c/exynos4210_i2c.o
  CC      hw/i2c/imx_i2c.o
  CC      hw/i2c/aspeed_i2c.o
  CC      hw/ide/core.o
  CC      hw/ide/atapi.o
  CC      hw/ide/qdev.o
  CC      hw/ide/pci.o
  CC      hw/ide/isa.o
  CC      hw/ide/piix.o
  CC      hw/ide/microdrive.o
  CC      hw/ide/ahci.o
  CC      hw/ide/ich.o
  CC      hw/input/hid.o
  CC      hw/input/lm832x.o
  CC      hw/input/pckbd.o
  CC      hw/input/pl050.o
  CC      hw/input/ps2.o
  CC      hw/input/stellaris_input.o
  CC      hw/input/tsc2005.o
  CC      hw/input/vmmouse.o
  CC      hw/input/virtio-input.o
  CC      hw/input/virtio-input-hid.o
  CC      hw/input/virtio-input-host.o
  CC      hw/intc/i8259_common.o
  CC      hw/intc/i8259.o
  CC      hw/intc/pl190.o
  CC      hw/intc/imx_avic.o
  CC      hw/intc/realview_gic.o
  CC      hw/intc/ioapic_common.o
  CC      hw/intc/arm_gic_common.o
  CC      hw/intc/arm_gic.o
  CC      hw/intc/arm_gicv2m.o
  CC      hw/intc/arm_gicv3_common.o
  CC      hw/intc/arm_gicv3.o
  CC      hw/intc/arm_gicv3_dist.o
  CC      hw/intc/arm_gicv3_redist.o
  CC      hw/intc/arm_gicv3_its_common.o
  CC      hw/intc/intc.o
  CC      hw/ipack/ipack.o
  CC      hw/ipack/tpci200.o
  CC      hw/ipmi/ipmi.o
  CC      hw/ipmi/ipmi_bmc_sim.o
  CC      hw/ipmi/ipmi_bmc_extern.o
  CC      hw/ipmi/isa_ipmi_kcs.o
  CC      hw/ipmi/isa_ipmi_bt.o
  CC      hw/isa/isa-bus.o
  CC      hw/isa/apm.o
  CC      hw/mem/pc-dimm.o
  CC      hw/mem/nvdimm.o
  CC      hw/misc/applesmc.o
  CC      hw/misc/max111x.o
  CC      hw/misc/tmp105.o
  CC      hw/misc/debugexit.o
  CC      hw/misc/sga.o
  CC      hw/misc/pc-testdev.o
  CC      hw/misc/pci-testdev.o
  CC      hw/misc/arm_l2x0.o
  CC      hw/misc/arm_integrator_debug.o
  CC      hw/misc/a9scu.o
  CC      hw/misc/arm11scu.o
  CC      hw/net/eepro100.o
  CC      hw/net/ne2000.o
  CC      hw/net/pcnet-pci.o
  CC      hw/net/pcnet.o
  CC      hw/net/e1000.o
  CC      hw/net/e1000x_common.o
  CC      hw/net/net_tx_pkt.o
  CC      hw/net/net_rx_pkt.o
  CC      hw/net/e1000e.o
  CC      hw/net/e1000e_core.o
  CC      hw/net/rtl8139.o
  CC      hw/net/vmxnet3.o
  CC      hw/net/smc91c111.o
  CC      hw/net/lan9118.o
  CC      hw/net/ne2000-isa.o
  CC      hw/net/xgmac.o
  CC      hw/net/allwinner_emac.o
  CC      hw/net/imx_fec.o
  CC      hw/net/cadence_gem.o
  CC      hw/net/stellaris_enet.o
  CC      hw/net/rocker/rocker.o
  CC      hw/net/rocker/rocker_fp.o
  CC      hw/net/rocker/rocker_desc.o
  CC      hw/net/rocker/rocker_world.o
  CC      hw/nvram/eeprom93xx.o
  CC      hw/net/rocker/rocker_of_dpa.o
  CC      hw/nvram/fw_cfg.o
  CC      hw/pci-bridge/pci_bridge_dev.o
  CC      hw/pci-bridge/pci_expander_bridge.o
  CC      hw/pci-bridge/xio3130_upstream.o
  CC      hw/pci-bridge/ioh3420.o
  CC      hw/pci-bridge/xio3130_downstream.o
  CC      hw/pci-bridge/i82801b11.o
  CC      hw/pci-host/pam.o
/tmp/qemu-test/src/hw/nvram/fw_cfg.c: In function ‘fw_cfg_dma_transfer’:
/tmp/qemu-test/src/hw/nvram/fw_cfg.c:330: warning: ‘read’ may be used uninitialized in this function
  CC      hw/pci-host/versatile.o
  CC      hw/pci-host/piix.o
  CC      hw/pci-host/q35.o
  CC      hw/pci-host/gpex.o
  CC      hw/pci/pci.o
  CC      hw/pci/pci_bridge.o
  CC      hw/pci/msix.o
  CC      hw/pci/msi.o
  CC      hw/pci/shpc.o
  CC      hw/pci/slotid_cap.o
  CC      hw/pci/pci_host.o
  CC      hw/pci/pcie_host.o
  CC      hw/pci/pcie.o
  CC      hw/pci/pcie_aer.o
  CC      hw/pci/pcie_port.o
  CC      hw/pci/pci-stub.o
  CC      hw/pcmcia/pcmcia.o
  CC      hw/scsi/scsi-disk.o
  CC      hw/scsi/scsi-generic.o
  CC      hw/scsi/scsi-bus.o
  CC      hw/scsi/lsi53c895a.o
  CC      hw/scsi/mptsas.o
  CC      hw/scsi/mptconfig.o
  CC      hw/scsi/mptendian.o
  CC      hw/scsi/megasas.o
  CC      hw/scsi/vmw_pvscsi.o
  CC      hw/scsi/esp.o
  CC      hw/scsi/esp-pci.o
  CC      hw/sd/pl181.o
  CC      hw/sd/ssi-sd.o
  CC      hw/sd/sd.o
  CC      hw/sd/core.o
  CC      hw/sd/sdhci.o
  CC      hw/smbios/smbios.o
  CC      hw/smbios/smbios_type_38.o
  CC      hw/ssi/pl022.o
  CC      hw/ssi/ssi.o
  CC      hw/ssi/xilinx_spips.o
  CC      hw/ssi/aspeed_smc.o
  CC      hw/ssi/stm32f2xx_spi.o
  CC      hw/timer/arm_timer.o
  CC      hw/timer/arm_mptimer.o
  CC      hw/timer/a9gtimer.o
  CC      hw/timer/cadence_ttc.o
  CC      hw/timer/ds1338.o
  CC      hw/timer/hpet.o
  CC      hw/timer/i8254_common.o
  CC      hw/timer/i8254.o
  CC      hw/timer/pl031.o
  CC      hw/timer/twl92230.o
  CC      hw/timer/imx_epit.o
  CC      hw/timer/imx_gpt.o
  CC      hw/timer/stm32f2xx_timer.o
  CC      hw/timer/aspeed_timer.o
  CC      hw/tpm/tpm_tis.o
  CC      hw/tpm/tpm_passthrough.o
  CC      hw/tpm/tpm_util.o
  CC      hw/usb/core.o
  CC      hw/usb/combined-packet.o
  CC      hw/usb/bus.o
  CC      hw/usb/libhw.o
  CC      hw/usb/desc.o
  CC      hw/usb/desc-msos.o
  CC      hw/usb/hcd-uhci.o
  CC      hw/usb/hcd-ohci.o
  CC      hw/usb/hcd-ehci.o
  CC      hw/usb/hcd-ehci-pci.o
  CC      hw/usb/hcd-ehci-sysbus.o
  CC      hw/usb/hcd-xhci.o
  CC      hw/usb/hcd-musb.o
  CC      hw/usb/dev-hub.o
  CC      hw/usb/dev-hid.o
  CC      hw/usb/dev-wacom.o
  CC      hw/usb/dev-storage.o
  CC      hw/usb/dev-uas.o
  CC      hw/usb/dev-audio.o
  CC      hw/usb/dev-serial.o
  CC      hw/usb/dev-network.o
  CC      hw/usb/dev-bluetooth.o
  CC      hw/usb/dev-smartcard-reader.o
  CC      hw/usb/dev-mtp.o
  CC      hw/usb/host-stub.o
  CC      hw/virtio/virtio-rng.o
  CC      hw/virtio/virtio-pci.o
  CC      hw/virtio/virtio-bus.o
  CC      hw/virtio/virtio-mmio.o
  CC      hw/watchdog/watchdog.o
  CC      hw/watchdog/wdt_i6300esb.o
  CC      hw/watchdog/wdt_ib700.o
  CC      migration/migration.o
  CC      migration/socket.o
  CC      migration/fd.o
  CC      migration/exec.o
  CC      migration/tls.o
  CC      migration/vmstate.o
  CC      migration/qemu-file.o
  CC      migration/qemu-file-channel.o
  CC      migration/xbzrle.o
  CC      migration/postcopy-ram.o
  CC      migration/qjson.o
  CC      migration/block.o
  CC      net/net.o
  CC      net/queue.o
  CC      net/checksum.o
  CC      net/util.o
  CC      net/hub.o
  CC      net/socket.o
  CC      net/dump.o
  CC      net/eth.o
  CC      net/l2tpv3.o
  CC      net/tap.o
  CC      net/vhost-user.o
  CC      net/tap-linux.o
  CC      net/slirp.o
  CC      net/filter.o
  CC      net/filter-buffer.o
  CC      net/filter-mirror.o
  CC      net/colo-compare.o
  CC      net/colo.o
  CC      net/filter-rewriter.o
  CC      qom/cpu.o
  CC      replay/replay.o
  CC      replay/replay-internal.o
  CC      replay/replay-events.o
/tmp/qemu-test/src/replay/replay-internal.c: In function ‘replay_put_array’:
/tmp/qemu-test/src/replay/replay-internal.c:65: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
  CC      replay/replay-time.o
  CC      replay/replay-input.o
  CC      replay/replay-char.o
  CC      replay/replay-snapshot.o
  CC      slirp/cksum.o
  CC      slirp/if.o
  CC      slirp/ip_icmp.o
  CC      slirp/ip6_icmp.o
  CC      slirp/ip6_input.o
  CC      slirp/ip6_output.o
  CC      slirp/ip_input.o
  CC      slirp/ip_output.o
  CC      slirp/dnssearch.o
  CC      slirp/dhcpv6.o
  CC      slirp/slirp.o
  CC      slirp/mbuf.o
  CC      slirp/misc.o
  CC      slirp/sbuf.o
  CC      slirp/socket.o
  CC      slirp/tcp_input.o
  CC      slirp/tcp_output.o
  CC      slirp/tcp_subr.o
  CC      slirp/tcp_timer.o
/tmp/qemu-test/src/slirp/tcp_input.c: In function ‘tcp_input’:
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_p’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_len’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_tos’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_id’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_off’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_ttl’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_sum’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_src.s_addr’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:219: warning: ‘save_ip.ip_dst.s_addr’ may be used uninitialized in this function
/tmp/qemu-test/src/slirp/tcp_input.c:220: warning: ‘save_ip6.ip_nh’ may be used uninitialized in this function
  CC      slirp/udp.o
  CC      slirp/udp6.o
  CC      slirp/bootp.o
  CC      slirp/tftp.o
  CC      slirp/arp_table.o
  CC      slirp/ndp_table.o
  CC      ui/keymaps.o
  CC      ui/console.o
  CC      ui/cursor.o
  CC      ui/qemu-pixman.o
  CC      ui/input.o
  CC      ui/input-keymap.o
  CC      ui/input-legacy.o
  CC      ui/input-linux.o
  CC      ui/sdl.o
  CC      ui/sdl_zoom.o
  CC      ui/x_keymap.o
  CC      ui/vnc.o
  CC      ui/vnc-enc-zlib.o
  CC      ui/vnc-enc-hextile.o
  CC      ui/vnc-enc-tight.o
  CC      ui/vnc-palette.o
  CC      ui/vnc-enc-zrle.o
  CC      ui/vnc-ws.o
  CC      ui/vnc-auth-vencrypt.o
  CC      ui/vnc-jobs.o
  LINK    tests/qemu-iotests/socket_scm_helper
  CC      qga/commands.o
  CC      qga/guest-agent-command-state.o
  CC      qga/main.o
  CC      qga/commands-posix.o
  CC      qga/channel-posix.o
  CC      qga/qapi-generated/qga-qapi-types.o
  CC      qga/qapi-generated/qga-qapi-visit.o
  CC      qga/qapi-generated/qga-qmp-marshal.o
  CC      qmp-introspect.o
  CC      qapi-types.o
  CC      qapi-visit.o
  CC      qapi-event.o
  AR      libqemustub.a
  AS      optionrom/multiboot.o
  CC      qemu-img.o
  AS      optionrom/linuxboot.o
  CC      optionrom/linuxboot_dma.o
cc: unrecognized option '-no-integrated-as'
cc: unrecognized option '-no-integrated-as'
  AS      optionrom/kvmvapic.o
  CC      qmp-marshal.o
  BUILD   optionrom/multiboot.img
  BUILD   optionrom/linuxboot.img
  CC      trace/generated-events.o
  BUILD   optionrom/linuxboot_dma.img
  BUILD   optionrom/multiboot.raw
  BUILD   optionrom/linuxboot.raw
  BUILD   optionrom/kvmvapic.img
  BUILD   optionrom/linuxboot_dma.raw
  BUILD   optionrom/kvmvapic.raw
  SIGN    optionrom/multiboot.bin
  SIGN    optionrom/linuxboot.bin
  SIGN    optionrom/linuxboot_dma.bin
  SIGN    optionrom/kvmvapic.bin
  AR      libqemuutil.a
  LINK    qemu-ga
  LINK    ivshmem-client
  LINK    ivshmem-server
  LINK    qemu-nbd
  LINK    qemu-io
  LINK    qemu-bridge-helper
  LINK    qemu-img
  GEN     x86_64-softmmu/hmp-commands.h
  GEN     x86_64-softmmu/hmp-commands-info.h
  GEN     x86_64-softmmu/config-target.h
  GEN     aarch64-softmmu/hmp-commands.h
  GEN     aarch64-softmmu/hmp-commands-info.h
  GEN     aarch64-softmmu/config-target.h
  CC      x86_64-softmmu/exec.o
  CC      x86_64-softmmu/translate-all.o
  CC      x86_64-softmmu/cpu-exec.o
  CC      x86_64-softmmu/cpu-exec-common.o
  CC      x86_64-softmmu/translate-common.o
  CC      x86_64-softmmu/tcg/tcg.o
  CC      x86_64-softmmu/tcg/tcg-op.o
  CC      aarch64-softmmu/exec.o
  CC      aarch64-softmmu/translate-all.o
  CC      aarch64-softmmu/cpu-exec.o
  CC      x86_64-softmmu/tcg/optimize.o
  CC      x86_64-softmmu/tcg/tcg-common.o
  CC      x86_64-softmmu/fpu/softfloat.o
  CC      x86_64-softmmu/disas.o
  CC      x86_64-softmmu/arch_init.o
  CC      x86_64-softmmu/cpus.o
  CC      x86_64-softmmu/monitor.o
  CC      x86_64-softmmu/gdbstub.o
  CC      aarch64-softmmu/translate-common.o
  CC      aarch64-softmmu/cpu-exec-common.o
  CC      x86_64-softmmu/balloon.o
  CC      x86_64-softmmu/ioport.o
  CC      aarch64-softmmu/tcg/tcg.o
  CC      aarch64-softmmu/tcg/tcg-op.o
  CC      x86_64-softmmu/numa.o
  CC      x86_64-softmmu/qtest.o
  CC      aarch64-softmmu/tcg/optimize.o
  CC      x86_64-softmmu/bootdevice.o
  CC      x86_64-softmmu/kvm-all.o
  CC      aarch64-softmmu/tcg/tcg-common.o
  CC      aarch64-softmmu/fpu/softfloat.o
  CC      x86_64-softmmu/memory.o
  CC      aarch64-softmmu/disas.o
  CC      x86_64-softmmu/cputlb.o
  GEN     aarch64-softmmu/gdbstub-xml.c
  CC      aarch64-softmmu/kvm-stub.o
  CC      x86_64-softmmu/memory_mapping.o
  CC      aarch64-softmmu/arch_init.o
  CC      aarch64-softmmu/cpus.o
  CC      x86_64-softmmu/dump.o
  CC      aarch64-softmmu/monitor.o
  CC      x86_64-softmmu/migration/ram.o
  CC      aarch64-softmmu/gdbstub.o
  CC      aarch64-softmmu/balloon.o
  CC      aarch64-softmmu/ioport.o
  CC      aarch64-softmmu/numa.o
  CC      aarch64-softmmu/qtest.o
  CC      x86_64-softmmu/migration/savevm.o
  CC      aarch64-softmmu/bootdevice.o
  CC      aarch64-softmmu/memory.o
  CC      aarch64-softmmu/cputlb.o
  CC      x86_64-softmmu/xen-common-stub.o
  CC      aarch64-softmmu/memory_mapping.o
  CC      x86_64-softmmu/xen-hvm-stub.o
  CC      aarch64-softmmu/dump.o
  CC      x86_64-softmmu/hw/acpi/nvdimm.o
  CC      aarch64-softmmu/migration/ram.o
  CC      aarch64-softmmu/migration/savevm.o
  CC      x86_64-softmmu/hw/block/virtio-blk.o
  CC      x86_64-softmmu/hw/block/dataplane/virtio-blk.o
  CC      aarch64-softmmu/xen-common-stub.o
  CC      aarch64-softmmu/xen-hvm-stub.o
  CC      aarch64-softmmu/hw/adc/stm32f2xx_adc.o
  CC      aarch64-softmmu/hw/block/virtio-blk.o
  CC      x86_64-softmmu/hw/char/virtio-serial-bus.o
  CC      x86_64-softmmu/hw/core/nmi.o
  CC      x86_64-softmmu/hw/core/generic-loader.o
  CC      x86_64-softmmu/hw/cpu/core.o
  CC      x86_64-softmmu/hw/display/vga.o
  CC      x86_64-softmmu/hw/display/virtio-gpu.o
  CC      aarch64-softmmu/hw/block/dataplane/virtio-blk.o
  CC      aarch64-softmmu/hw/char/exynos4210_uart.o
  CC      aarch64-softmmu/hw/char/omap_uart.o
  CC      aarch64-softmmu/hw/char/digic-uart.o
  CC      aarch64-softmmu/hw/char/stm32f2xx_usart.o
  CC      aarch64-softmmu/hw/char/bcm2835_aux.o
  CC      aarch64-softmmu/hw/char/virtio-serial-bus.o
  CC      aarch64-softmmu/hw/core/nmi.o
  CC      aarch64-softmmu/hw/core/generic-loader.o
  CC      aarch64-softmmu/hw/cpu/arm11mpcore.o
  CC      aarch64-softmmu/hw/cpu/realview_mpcore.o
  CC      x86_64-softmmu/hw/display/virtio-gpu-3d.o
  CC      x86_64-softmmu/hw/display/virtio-gpu-pci.o
  CC      aarch64-softmmu/hw/cpu/a9mpcore.o
  CC      aarch64-softmmu/hw/cpu/a15mpcore.o
  CC      aarch64-softmmu/hw/cpu/core.o
  CC      aarch64-softmmu/hw/display/omap_dss.o
  CC      x86_64-softmmu/hw/display/virtio-vga.o
  CC      aarch64-softmmu/hw/display/omap_lcdc.o
  CC      x86_64-softmmu/hw/intc/apic.o
  CC      x86_64-softmmu/hw/intc/apic_common.o
  CC      x86_64-softmmu/hw/intc/ioapic.o
  CC      x86_64-softmmu/hw/isa/lpc_ich9.o
  CC      x86_64-softmmu/hw/misc/vmport.o
  CC      aarch64-softmmu/hw/display/pxa2xx_lcd.o
  CC      aarch64-softmmu/hw/display/bcm2835_fb.o
  CC      aarch64-softmmu/hw/display/vga.o
  CC      aarch64-softmmu/hw/display/virtio-gpu.o
  CC      x86_64-softmmu/hw/misc/ivshmem.o
  CC      x86_64-softmmu/hw/misc/pvpanic.o
  CC      aarch64-softmmu/hw/display/virtio-gpu-3d.o
  CC      x86_64-softmmu/hw/misc/edu.o
  CC      aarch64-softmmu/hw/display/virtio-gpu-pci.o
  CC      x86_64-softmmu/hw/misc/hyperv_testdev.o
  CC      aarch64-softmmu/hw/display/dpcd.o
  CC      x86_64-softmmu/hw/net/virtio-net.o
  CC      x86_64-softmmu/hw/net/vhost_net.o
  CC      x86_64-softmmu/hw/scsi/virtio-scsi.o
  CC      x86_64-softmmu/hw/scsi/virtio-scsi-dataplane.o
  CC      x86_64-softmmu/hw/scsi/vhost-scsi.o
  CC      x86_64-softmmu/hw/timer/mc146818rtc.o
  CC      x86_64-softmmu/hw/vfio/common.o
  CC      x86_64-softmmu/hw/vfio/pci.o
  CC      x86_64-softmmu/hw/vfio/pci-quirks.o
  CC      x86_64-softmmu/hw/vfio/platform.o
  CC      x86_64-softmmu/hw/vfio/calxeda-xgmac.o
  CC      aarch64-softmmu/hw/display/xlnx_dp.o
  CC      aarch64-softmmu/hw/dma/xlnx_dpdma.o
  CC      x86_64-softmmu/hw/vfio/amd-xgbe.o
  CC      x86_64-softmmu/hw/vfio/spapr.o
  CC      x86_64-softmmu/hw/virtio/virtio.o
  CC      x86_64-softmmu/hw/virtio/virtio-balloon.o
  CC      aarch64-softmmu/hw/dma/omap_dma.o
  CC      x86_64-softmmu/hw/virtio/vhost.o
  CC      x86_64-softmmu/hw/virtio/vhost-backend.o
  CC      x86_64-softmmu/hw/virtio/vhost-user.o
  CC      x86_64-softmmu/hw/virtio/vhost-vsock.o
  CC      x86_64-softmmu/hw/i386/multiboot.o
  CC      aarch64-softmmu/hw/dma/soc_dma.o
  CC      x86_64-softmmu/hw/i386/pc.o
  CC      x86_64-softmmu/hw/i386/pc_piix.o
  CC      x86_64-softmmu/hw/i386/pc_q35.o
  CC      x86_64-softmmu/hw/i386/pc_sysfw.o
  CC      aarch64-softmmu/hw/dma/pxa2xx_dma.o
  CC      aarch64-softmmu/hw/dma/bcm2835_dma.o
  CC      aarch64-softmmu/hw/gpio/omap_gpio.o
  CC      x86_64-softmmu/hw/i386/x86-iommu.o
  CC      x86_64-softmmu/hw/i386/intel_iommu.o
  CC      aarch64-softmmu/hw/gpio/imx_gpio.o
  CC      aarch64-softmmu/hw/i2c/omap_i2c.o
  CC      aarch64-softmmu/hw/input/pxa2xx_keypad.o
  CC      aarch64-softmmu/hw/input/tsc210x.o
  CC      x86_64-softmmu/hw/i386/amd_iommu.o
  CC      aarch64-softmmu/hw/intc/armv7m_nvic.o
  CC      aarch64-softmmu/hw/intc/exynos4210_gic.o
  CC      aarch64-softmmu/hw/intc/exynos4210_combiner.o
  CC      x86_64-softmmu/hw/i386/kvmvapic.o
  CC      x86_64-softmmu/hw/i386/acpi-build.o
  CC      aarch64-softmmu/hw/intc/omap_intc.o
  CC      x86_64-softmmu/hw/i386/pci-assign-load-rom.o
  CC      x86_64-softmmu/hw/i386/kvm/clock.o
  CC      x86_64-softmmu/hw/i386/kvm/apic.o
/tmp/qemu-test/src/hw/i386/acpi-build.c: In function ‘build_append_pci_bus_devices’:
/tmp/qemu-test/src/hw/i386/acpi-build.c:472: warning: ‘notify_method’ may be used uninitialized in this function
  CC      aarch64-softmmu/hw/intc/bcm2835_ic.o
  CC      x86_64-softmmu/hw/i386/kvm/i8259.o
  CC      x86_64-softmmu/hw/i386/kvm/ioapic.o
  CC      x86_64-softmmu/hw/i386/kvm/i8254.o
  CC      aarch64-softmmu/hw/intc/bcm2836_control.o
  CC      aarch64-softmmu/hw/intc/allwinner-a10-pic.o
  CC      x86_64-softmmu/hw/i386/kvm/pci-assign.o
  CC      aarch64-softmmu/hw/intc/aspeed_vic.o
  CC      aarch64-softmmu/hw/intc/arm_gicv3_cpuif.o
  CC      x86_64-softmmu/target-i386/translate.o
  CC      aarch64-softmmu/hw/misc/ivshmem.o
  CC      x86_64-softmmu/target-i386/helper.o
  CC      aarch64-softmmu/hw/misc/arm_sysctl.o
  CC      aarch64-softmmu/hw/misc/cbus.o
  CC      aarch64-softmmu/hw/misc/exynos4210_pmu.o
  CC      x86_64-softmmu/target-i386/cpu.o
  CC      x86_64-softmmu/target-i386/bpt_helper.o
  CC      aarch64-softmmu/hw/misc/imx_ccm.o
  CC      aarch64-softmmu/hw/misc/imx31_ccm.o
  CC      aarch64-softmmu/hw/misc/imx25_ccm.o
  CC      aarch64-softmmu/hw/misc/imx6_ccm.o
  CC      x86_64-softmmu/target-i386/excp_helper.o
  CC      aarch64-softmmu/hw/misc/imx6_src.o
  CC      aarch64-softmmu/hw/misc/mst_fpga.o
  CC      x86_64-softmmu/target-i386/fpu_helper.o
  CC      aarch64-softmmu/hw/misc/omap_clk.o
  CC      aarch64-softmmu/hw/misc/omap_gpmc.o
  CC      aarch64-softmmu/hw/misc/omap_l4.o
  CC      aarch64-softmmu/hw/misc/omap_sdrc.o
  CC      x86_64-softmmu/target-i386/cc_helper.o
  CC      x86_64-softmmu/target-i386/int_helper.o
  CC      x86_64-softmmu/target-i386/svm_helper.o
  CC      aarch64-softmmu/hw/misc/omap_tap.o
  CC      x86_64-softmmu/target-i386/smm_helper.o
  CC      aarch64-softmmu/hw/misc/bcm2835_mbox.o
  CC      aarch64-softmmu/hw/misc/zynq_slcr.o
  CC      aarch64-softmmu/hw/misc/bcm2835_property.o
  CC      aarch64-softmmu/hw/misc/zynq-xadc.o
  CC      aarch64-softmmu/hw/misc/stm32f2xx_syscfg.o
  CC      x86_64-softmmu/target-i386/misc_helper.o
  CC      aarch64-softmmu/hw/misc/edu.o
  CC      x86_64-softmmu/target-i386/mem_helper.o
  CC      x86_64-softmmu/target-i386/seg_helper.o
  CC      aarch64-softmmu/hw/misc/auxbus.o
  CC      aarch64-softmmu/hw/misc/aspeed_scu.o
  CC      x86_64-softmmu/target-i386/mpx_helper.o
  CC      aarch64-softmmu/hw/misc/aspeed_sdmc.o
  CC      x86_64-softmmu/target-i386/gdbstub.o
  CC      x86_64-softmmu/target-i386/machine.o
  CC      aarch64-softmmu/hw/net/virtio-net.o
  CC      aarch64-softmmu/hw/net/vhost_net.o
  CC      x86_64-softmmu/target-i386/arch_memory_mapping.o
  CC      x86_64-softmmu/target-i386/arch_dump.o
  CC      aarch64-softmmu/hw/pcmcia/pxa2xx.o
  CC      aarch64-softmmu/hw/scsi/virtio-scsi.o
  CC      x86_64-softmmu/target-i386/monitor.o
  CC      x86_64-softmmu/target-i386/kvm.o
  CC      x86_64-softmmu/target-i386/hyperv.o
  CC      aarch64-softmmu/hw/scsi/virtio-scsi-dataplane.o
  GEN     trace/generated-helpers.c
  CC      aarch64-softmmu/hw/scsi/vhost-scsi.o
  CC      x86_64-softmmu/trace/control-target.o
  CC      aarch64-softmmu/hw/sd/omap_mmc.o
  CC      aarch64-softmmu/hw/sd/pxa2xx_mmci.o
  CC      aarch64-softmmu/hw/ssi/omap_spi.o
  CC      aarch64-softmmu/hw/ssi/imx_spi.o
  CC      aarch64-softmmu/hw/timer/exynos4210_mct.o
  CC      aarch64-softmmu/hw/timer/exynos4210_pwm.o
  CC      aarch64-softmmu/hw/timer/exynos4210_rtc.o
  CC      aarch64-softmmu/hw/timer/omap_gptimer.o
/tmp/qemu-test/src/hw/i386/pc_piix.c: In function ‘igd_passthrough_isa_bridge_create’:
/tmp/qemu-test/src/hw/i386/pc_piix.c:1046: warning: ‘pch_rev_id’ may be used uninitialized in this function
  CC      aarch64-softmmu/hw/timer/omap_synctimer.o
  CC      aarch64-softmmu/hw/timer/pxa2xx_timer.o
  CC      aarch64-softmmu/hw/timer/allwinner-a10-pit.o
  CC      aarch64-softmmu/hw/timer/digic-timer.o
  CC      aarch64-softmmu/hw/usb/tusb6010.o
  CC      aarch64-softmmu/hw/vfio/common.o
  CC      aarch64-softmmu/hw/vfio/pci.o
  CC      aarch64-softmmu/hw/vfio/pci-quirks.o
  CC      aarch64-softmmu/hw/vfio/platform.o
  CC      aarch64-softmmu/hw/vfio/calxeda-xgmac.o
  CC      aarch64-softmmu/hw/vfio/amd-xgbe.o
  CC      aarch64-softmmu/hw/vfio/spapr.o
  CC      aarch64-softmmu/hw/virtio/virtio.o
  CC      aarch64-softmmu/hw/virtio/virtio-balloon.o
  CC      aarch64-softmmu/hw/virtio/vhost.o
  CC      x86_64-softmmu/trace/generated-helpers.o
  CC      aarch64-softmmu/hw/virtio/vhost-backend.o
  CC      aarch64-softmmu/hw/virtio/vhost-user.o
  CC      aarch64-softmmu/hw/virtio/vhost-vsock.o
  CC      aarch64-softmmu/hw/arm/boot.o
  CC      aarch64-softmmu/hw/arm/collie.o
  CC      aarch64-softmmu/hw/arm/exynos4_boards.o
  CC      aarch64-softmmu/hw/arm/gumstix.o
  CC      aarch64-softmmu/hw/arm/highbank.o
  LINK    x86_64-softmmu/qemu-system-x86_64
  CC      aarch64-softmmu/hw/arm/digic_boards.o
  CC      aarch64-softmmu/hw/arm/integratorcp.o
  CC      aarch64-softmmu/hw/arm/mainstone.o
  CC      aarch64-softmmu/hw/arm/musicpal.o
  CC      aarch64-softmmu/hw/arm/nseries.o
  CC      aarch64-softmmu/hw/arm/omap_sx1.o
  CC      aarch64-softmmu/hw/arm/palm.o
  CC      aarch64-softmmu/hw/arm/realview.o
  CC      aarch64-softmmu/hw/arm/spitz.o
  CC      aarch64-softmmu/hw/arm/stellaris.o
  CC      aarch64-softmmu/hw/arm/tosa.o
  CC      aarch64-softmmu/hw/arm/versatilepb.o
  CC      aarch64-softmmu/hw/arm/vexpress.o
  CC      aarch64-softmmu/hw/arm/virt.o
  CC      aarch64-softmmu/hw/arm/xilinx_zynq.o
  CC      aarch64-softmmu/hw/arm/z2.o
  CC      aarch64-softmmu/hw/arm/virt-acpi-build.o
  CC      aarch64-softmmu/hw/arm/netduino2.o
  CC      aarch64-softmmu/hw/arm/sysbus-fdt.o
  CC      aarch64-softmmu/hw/arm/armv7m.o
  CC      aarch64-softmmu/hw/arm/exynos4210.o
  CC      aarch64-softmmu/hw/arm/pxa2xx.o
  CC      aarch64-softmmu/hw/arm/pxa2xx_gpio.o
  CC      aarch64-softmmu/hw/arm/pxa2xx_pic.o
  CC      aarch64-softmmu/hw/arm/digic.o
  CC      aarch64-softmmu/hw/arm/omap1.o
  CC      aarch64-softmmu/hw/arm/omap2.o
  CC      aarch64-softmmu/hw/arm/strongarm.o
  CC      aarch64-softmmu/hw/arm/allwinner-a10.o
  CC      aarch64-softmmu/hw/arm/cubieboard.o
  CC      aarch64-softmmu/hw/arm/bcm2835_peripherals.o
  CC      aarch64-softmmu/hw/arm/bcm2836.o
  CC      aarch64-softmmu/hw/arm/raspi.o
  CC      aarch64-softmmu/hw/arm/stm32f205_soc.o
  CC      aarch64-softmmu/hw/arm/xlnx-zynqmp.o
  CC      aarch64-softmmu/hw/arm/xlnx-ep108.o
  CC      aarch64-softmmu/hw/arm/fsl-imx25.o
  CC      aarch64-softmmu/hw/arm/imx25_pdk.o
  CC      aarch64-softmmu/hw/arm/kzm.o
  CC      aarch64-softmmu/hw/arm/fsl-imx31.o
  CC      aarch64-softmmu/hw/arm/fsl-imx6.o
  CC      aarch64-softmmu/hw/arm/sabrelite.o
  CC      aarch64-softmmu/hw/arm/aspeed_soc.o
  CC      aarch64-softmmu/hw/arm/aspeed.o
  CC      aarch64-softmmu/target-arm/arm-semi.o
  CC      aarch64-softmmu/target-arm/machine.o
  CC      aarch64-softmmu/target-arm/psci.o
  CC      aarch64-softmmu/target-arm/arch_dump.o
  CC      aarch64-softmmu/target-arm/monitor.o
  CC      aarch64-softmmu/target-arm/kvm-stub.o
  CC      aarch64-softmmu/target-arm/translate.o
  CC      aarch64-softmmu/target-arm/op_helper.o
  CC      aarch64-softmmu/target-arm/helper.o
  CC      aarch64-softmmu/target-arm/cpu.o
  CC      aarch64-softmmu/target-arm/neon_helper.o
  CC      aarch64-softmmu/target-arm/gdbstub.o
  CC      aarch64-softmmu/target-arm/iwmmxt_helper.o
  CC      aarch64-softmmu/target-arm/cpu64.o
  CC      aarch64-softmmu/target-arm/translate-a64.o
  CC      aarch64-softmmu/target-arm/helper-a64.o
  CC      aarch64-softmmu/target-arm/gdbstub64.o
  CC      aarch64-softmmu/target-arm/crypto_helper.o
  CC      aarch64-softmmu/target-arm/arm-powerctl.o
/tmp/qemu-test/src/target-arm/translate-a64.c: In function ‘handle_shri_with_rndacc’:
/tmp/qemu-test/src/target-arm/translate-a64.c:6333: warning: ‘tcg_src_hi’ may be used uninitialized in this function
/tmp/qemu-test/src/target-arm/translate-a64.c: In function ‘disas_simd_scalar_two_reg_misc’:
/tmp/qemu-test/src/target-arm/translate-a64.c:8060: warning: ‘rmode’ may be used uninitialized in this function
  GEN     trace/generated-helpers.c
  CC      aarch64-softmmu/trace/control-target.o
  CC      aarch64-softmmu/gdbstub-xml.o
  CC      aarch64-softmmu/trace/generated-helpers.o
  LINK    aarch64-softmmu/qemu-system-aarch64
  TEST    tests/qapi-schema/alternate-any.out
  TEST    tests/qapi-schema/alternate-array.out
  TEST    tests/qapi-schema/alternate-base.out
  TEST    tests/qapi-schema/alternate-conflict-dict.out
  TEST    tests/qapi-schema/alternate-clash.out
  TEST    tests/qapi-schema/alternate-conflict-string.out
  TEST    tests/qapi-schema/alternate-empty.out
  TEST    tests/qapi-schema/alternate-nested.out
  TEST    tests/qapi-schema/alternate-unknown.out
  TEST    tests/qapi-schema/args-alternate.out
  TEST    tests/qapi-schema/args-any.out
  TEST    tests/qapi-schema/args-array-empty.out
  TEST    tests/qapi-schema/args-array-unknown.out
  TEST    tests/qapi-schema/args-bad-boxed.out
  TEST    tests/qapi-schema/args-boxed-anon.out
  TEST    tests/qapi-schema/args-boxed-empty.out
  TEST    tests/qapi-schema/args-boxed-string.out
  TEST    tests/qapi-schema/args-int.out
  TEST    tests/qapi-schema/args-invalid.out
  TEST    tests/qapi-schema/args-member-array-bad.out
  TEST    tests/qapi-schema/args-member-case.out
  TEST    tests/qapi-schema/args-member-unknown.out
  TEST    tests/qapi-schema/args-name-clash.out
  TEST    tests/qapi-schema/args-union.out
  TEST    tests/qapi-schema/args-unknown.out
  TEST    tests/qapi-schema/bad-base.out
  TEST    tests/qapi-schema/bad-data.out
  TEST    tests/qapi-schema/bad-ident.out
  TEST    tests/qapi-schema/bad-type-bool.out
  TEST    tests/qapi-schema/bad-type-dict.out
  TEST    tests/qapi-schema/bad-type-int.out
  TEST    tests/qapi-schema/base-cycle-direct.out
  TEST    tests/qapi-schema/base-cycle-indirect.out
  TEST    tests/qapi-schema/command-int.out
  TEST    tests/qapi-schema/comments.out
  TEST    tests/qapi-schema/double-data.out
  TEST    tests/qapi-schema/double-type.out
  TEST    tests/qapi-schema/duplicate-key.out
  TEST    tests/qapi-schema/empty.out
  TEST    tests/qapi-schema/enum-bad-name.out
  TEST    tests/qapi-schema/enum-bad-prefix.out
  TEST    tests/qapi-schema/enum-clash-member.out
  TEST    tests/qapi-schema/enum-dict-member.out
  TEST    tests/qapi-schema/enum-int-member.out
  TEST    tests/qapi-schema/enum-member-case.out
  TEST    tests/qapi-schema/enum-missing-data.out
  TEST    tests/qapi-schema/enum-wrong-data.out
  TEST    tests/qapi-schema/escape-outside-string.out
  TEST    tests/qapi-schema/escape-too-big.out
  TEST    tests/qapi-schema/escape-too-short.out
  TEST    tests/qapi-schema/event-boxed-empty.out
  TEST    tests/qapi-schema/event-case.out
  TEST    tests/qapi-schema/event-nest-struct.out
  TEST    tests/qapi-schema/flat-union-array-branch.out
  TEST    tests/qapi-schema/flat-union-bad-base.out
  TEST    tests/qapi-schema/flat-union-bad-discriminator.out
  TEST    tests/qapi-schema/flat-union-base-any.out
  TEST    tests/qapi-schema/flat-union-base-union.out
  TEST    tests/qapi-schema/flat-union-clash-member.out
  TEST    tests/qapi-schema/flat-union-empty.out
  TEST    tests/qapi-schema/flat-union-incomplete-branch.out
  TEST    tests/qapi-schema/flat-union-inline.out
  TEST    tests/qapi-schema/flat-union-int-branch.out
  TEST    tests/qapi-schema/flat-union-invalid-branch-key.out
  TEST    tests/qapi-schema/flat-union-invalid-discriminator.out
  TEST    tests/qapi-schema/flat-union-no-base.out
  TEST    tests/qapi-schema/flat-union-optional-discriminator.out
  TEST    tests/qapi-schema/flat-union-string-discriminator.out
  TEST    tests/qapi-schema/funny-char.out
  TEST    tests/qapi-schema/ident-with-escape.out
  TEST    tests/qapi-schema/include-before-err.out
  TEST    tests/qapi-schema/include-cycle.out
  TEST    tests/qapi-schema/include-format-err.out
  TEST    tests/qapi-schema/include-nested-err.out
  TEST    tests/qapi-schema/include-no-file.out
  TEST    tests/qapi-schema/include-non-file.out
  TEST    tests/qapi-schema/include-relpath.out
  TEST    tests/qapi-schema/include-repetition.out
  TEST    tests/qapi-schema/include-self-cycle.out
  TEST    tests/qapi-schema/include-simple.out
  TEST    tests/qapi-schema/indented-expr.out
  TEST    tests/qapi-schema/leading-comma-list.out
  TEST    tests/qapi-schema/leading-comma-object.out
  TEST    tests/qapi-schema/missing-colon.out
  TEST    tests/qapi-schema/missing-comma-list.out
  TEST    tests/qapi-schema/missing-comma-object.out
  TEST    tests/qapi-schema/missing-type.out
  TEST    tests/qapi-schema/nested-struct-data.out
  TEST    tests/qapi-schema/non-objects.out
  TEST    tests/qapi-schema/qapi-schema-test.out
  TEST    tests/qapi-schema/quoted-structural-chars.out
  TEST    tests/qapi-schema/redefined-builtin.out
  TEST    tests/qapi-schema/redefined-command.out
  TEST    tests/qapi-schema/redefined-event.out
  TEST    tests/qapi-schema/redefined-type.out
  TEST    tests/qapi-schema/reserved-command-q.out
  TEST    tests/qapi-schema/reserved-enum-q.out
  TEST    tests/qapi-schema/reserved-member-has.out
  TEST    tests/qapi-schema/reserved-member-q.out
  TEST    tests/qapi-schema/reserved-member-u.out
  TEST    tests/qapi-schema/reserved-member-underscore.out
  TEST    tests/qapi-schema/reserved-type-kind.out
  TEST    tests/qapi-schema/reserved-type-list.out
  TEST    tests/qapi-schema/returns-alternate.out
  TEST    tests/qapi-schema/returns-array-bad.out
  TEST    tests/qapi-schema/returns-dict.out
  TEST    tests/qapi-schema/returns-unknown.out
  TEST    tests/qapi-schema/returns-whitelist.out
  TEST    tests/qapi-schema/struct-base-clash-deep.out
  TEST    tests/qapi-schema/struct-base-clash.out
  TEST    tests/qapi-schema/struct-data-invalid.out
  TEST    tests/qapi-schema/struct-member-invalid.out
  TEST    tests/qapi-schema/trailing-comma-list.out
  TEST    tests/qapi-schema/trailing-comma-object.out
  TEST    tests/qapi-schema/type-bypass-bad-gen.out
  TEST    tests/qapi-schema/unclosed-list.out
  TEST    tests/qapi-schema/unclosed-object.out
  TEST    tests/qapi-schema/unclosed-string.out
  TEST    tests/qapi-schema/unicode-str.out
  TEST    tests/qapi-schema/union-base-no-discriminator.out
  TEST    tests/qapi-schema/union-branch-case.out
  TEST    tests/qapi-schema/union-clash-branches.out
  TEST    tests/qapi-schema/union-empty.out
  TEST    tests/qapi-schema/union-invalid-base.out
  TEST    tests/qapi-schema/union-optional-branch.out
  TEST    tests/qapi-schema/union-unknown.out
  TEST    tests/qapi-schema/unknown-escape.out
  TEST    tests/qapi-schema/unknown-expr-key.out
  CC      tests/check-qdict.o
  CC      tests/check-qfloat.o
  CC      tests/check-qint.o
  CC      tests/check-qstring.o
  CC      tests/check-qlist.o
  CC      tests/check-qnull.o
  CC      tests/check-qjson.o
  CC      tests/test-qmp-output-visitor.o
  GEN     tests/test-qapi-visit.c
  GEN     tests/test-qapi-types.c
  GEN     tests/test-qapi-event.c
  GEN     tests/test-qmp-introspect.c
  CC      tests/test-clone-visitor.o
  CC      tests/test-qmp-input-visitor.o
  CC      tests/test-qmp-input-strict.o
  CC      tests/test-qmp-commands.o
  GEN     tests/test-qmp-marshal.c
  CC      tests/test-string-input-visitor.o
  CC      tests/test-string-output-visitor.o
  CC      tests/test-qmp-event.o
  CC      tests/test-opts-visitor.o
  CC      tests/test-coroutine.o
  CC      tests/test-visitor-serialization.o
  CC      tests/test-iov.o
  CC      tests/test-aio.o
  CC      tests/test-rfifolock.o
  CC      tests/test-throttle.o
  CC      tests/test-thread-pool.o
  CC      tests/test-hbitmap.o
  CC      tests/test-blockjob.o
  CC      tests/test-blockjob-txn.o
  CC      tests/test-x86-cpuid.o
  CC      tests/test-xbzrle.o
  CC      tests/test-vmstate.o
  CC      tests/test-cutils.o
  CC      tests/test-mul64.o
  CC      tests/test-int128.o
  CC      tests/rcutorture.o
  CC      tests/test-rcu-list.o
  CC      tests/test-qdist.o
/tmp/qemu-test/src/tests/test-int128.c:180: warning: ‘__noclone__’ attribute directive ignored
  CC      tests/test-qht.o
  CC      tests/test-qht-par.o
  CC      tests/qht-bench.o
  CC      tests/test-bitops.o
  CC      tests/check-qom-interface.o
  CC      tests/check-qom-proplist.o
  CC      tests/test-qemu-opts.o
  CC      tests/test-write-threshold.o
  CC      tests/test-crypto-hash.o
  CC      tests/test-crypto-cipher.o
  CC      tests/test-crypto-secret.o
  CC      tests/test-qga.o
  CC      tests/libqtest.o
  CC      tests/test-timed-average.o
  CC      tests/test-io-task.o
  CC      tests/test-io-channel-socket.o
  CC      tests/io-channel-helpers.o
  CC      tests/test-io-channel-file.o
  CC      tests/test-io-channel-command.o
  CC      tests/test-io-channel-buffer.o
  CC      tests/test-base64.o
  CC      tests/test-crypto-ivgen.o
  CC      tests/test-crypto-afsplit.o
  CC      tests/test-crypto-xts.o
  CC      tests/test-crypto-block.o
  CC      tests/test-logging.o
  CC      tests/test-replication.o
  CC      tests/test-bufferiszero.o
  CC      tests/test-uuid.o
  CC      tests/vhost-user-test.o
  CC      tests/libqos/pci.o
  CC      tests/libqos/fw_cfg.o
  CC      tests/libqos/malloc.o
  CC      tests/libqos/i2c.o
  CC      tests/libqos/libqos.o
  CC      tests/libqos/pci-pc.o
  CC      tests/libqos/malloc-pc.o
  CC      tests/libqos/libqos-pc.o
  CC      tests/libqos/ahci.o
  CC      tests/libqos/virtio.o
  CC      tests/libqos/virtio-pci.o
  CC      tests/libqos/virtio-mmio.o
  CC      tests/libqos/malloc-generic.o
  CC      tests/endianness-test.o
  CC      tests/fdc-test.o
  CC      tests/ide-test.o
  CC      tests/ahci-test.o
/tmp/qemu-test/src/tests/ide-test.c: In function ‘cdrom_pio_impl’:
/tmp/qemu-test/src/tests/ide-test.c:739: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
/tmp/qemu-test/src/tests/ide-test.c: In function ‘test_cdrom_dma’:
/tmp/qemu-test/src/tests/ide-test.c:832: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
  CC      tests/hd-geo-test.o
  CC      tests/boot-order-test.o
  CC      tests/bios-tables-test.o
  CC      tests/boot-sector.o
  CC      tests/boot-serial-test.o
  CC      tests/pxe-test.o
  CC      tests/rtc-test.o
  CC      tests/ipmi-kcs-test.o
  CC      tests/ipmi-bt-test.o
  CC      tests/i440fx-test.o
/tmp/qemu-test/src/tests/boot-sector.c: In function ‘boot_sector_init’:
/tmp/qemu-test/src/tests/boot-sector.c:89: warning: ignoring return value of ‘fwrite’, declared with attribute warn_unused_result
  CC      tests/fw_cfg-test.o
  CC      tests/drive_del-test.o
  CC      tests/wdt_ib700-test.o
  CC      tests/tco-test.o
  CC      tests/e1000e-test.o
  CC      tests/e1000-test.o
  CC      tests/rtl8139-test.o
  CC      tests/pcnet-test.o
  CC      tests/eepro100-test.o
  CC      tests/ne2000-test.o
  CC      tests/nvme-test.o
  CC      tests/ac97-test.o
  CC      tests/es1370-test.o
  CC      tests/virtio-net-test.o
  CC      tests/virtio-balloon-test.o
  CC      tests/virtio-blk-test.o
  CC      tests/virtio-rng-test.o
  CC      tests/virtio-scsi-test.o
  CC      tests/virtio-serial-test.o
  CC      tests/virtio-console-test.o
  CC      tests/tpci200-test.o
  CC      tests/ipoctal232-test.o
  CC      tests/display-vga-test.o
  CC      tests/intel-hda-test.o
  CC      tests/ivshmem-test.o
  CC      tests/vmxnet3-test.o
  CC      tests/pvpanic-test.o
  CC      tests/i82801b11-test.o
  CC      tests/ioh3420-test.o
  CC      tests/usb-hcd-ohci-test.o
  CC      tests/libqos/malloc-spapr.o
  CC      tests/libqos/libqos-spapr.o
  CC      tests/libqos/rtas.o
  CC      tests/libqos/pci-spapr.o
  CC      tests/libqos/usb.o
  CC      tests/usb-hcd-uhci-test.o
  CC      tests/usb-hcd-ehci-test.o
  CC      tests/usb-hcd-xhci-test.o
  CC      tests/pc-cpu-test.o
  CC      tests/q35-test.o
  CC      tests/test-netfilter.o
  CC      tests/test-filter-mirror.o
  CC      tests/test-filter-redirector.o
  CC      tests/postcopy-test.o
  CC      tests/test-x86-cpuid-compat.o
  CC      tests/device-introspect-test.o
  CC      tests/qom-test.o
  CC      tests/ptimer-test.o
  CC      tests/ptimer-test-stubs.o
  LINK    tests/check-qdict
  LINK    tests/check-qfloat
  LINK    tests/check-qint
  LINK    tests/check-qstring
  LINK    tests/check-qlist
  LINK    tests/check-qnull
  LINK    tests/check-qjson
  CC      tests/test-qapi-visit.o
  CC      tests/test-qapi-types.o
  CC      tests/test-qapi-event.o
  CC      tests/test-qmp-introspect.o
  CC      tests/test-qmp-marshal.o
  LINK    tests/test-coroutine
  LINK    tests/test-iov
  LINK    tests/test-aio
  LINK    tests/test-rfifolock
  LINK    tests/test-throttle
  LINK    tests/test-thread-pool
  LINK    tests/test-hbitmap
  LINK    tests/test-blockjob
  LINK    tests/test-blockjob-txn
  LINK    tests/test-x86-cpuid
  LINK    tests/test-xbzrle
  LINK    tests/test-vmstate
  LINK    tests/test-cutils
  LINK    tests/test-mul64
  LINK    tests/test-int128
  LINK    tests/rcutorture
  LINK    tests/test-rcu-list
  LINK    tests/test-qdist
  LINK    tests/test-qht
  LINK    tests/qht-bench
  LINK    tests/test-bitops
  LINK    tests/check-qom-interface
  LINK    tests/check-qom-proplist
  LINK    tests/test-qemu-opts
  LINK    tests/test-write-threshold
  LINK    tests/test-crypto-hash
  LINK    tests/test-crypto-cipher
  LINK    tests/test-crypto-secret
  LINK    tests/test-qga
  LINK    tests/test-timed-average
  LINK    tests/test-io-task
  LINK    tests/test-io-channel-socket
  LINK    tests/test-io-channel-file
  LINK    tests/test-io-channel-command
  LINK    tests/test-io-channel-buffer
  LINK    tests/test-base64
  LINK    tests/test-crypto-ivgen
  LINK    tests/test-crypto-afsplit
  LINK    tests/test-crypto-xts
  LINK    tests/test-crypto-block
  LINK    tests/test-logging
  LINK    tests/test-replication
  LINK    tests/test-bufferiszero
  LINK    tests/test-uuid
  LINK    tests/vhost-user-test
  LINK    tests/endianness-test
  LINK    tests/fdc-test
  LINK    tests/ide-test
  LINK    tests/ahci-test
  LINK    tests/hd-geo-test
  LINK    tests/boot-order-test
  LINK    tests/bios-tables-test
  LINK    tests/boot-serial-test
  LINK    tests/pxe-test
  LINK    tests/rtc-test
  LINK    tests/ipmi-kcs-test
  LINK    tests/ipmi-bt-test
  LINK    tests/i440fx-test
  LINK    tests/fw_cfg-test
  LINK    tests/drive_del-test
  LINK    tests/wdt_ib700-test
  LINK    tests/tco-test
  LINK    tests/e1000-test
  LINK    tests/e1000e-test
  LINK    tests/rtl8139-test
  LINK    tests/pcnet-test
  LINK    tests/eepro100-test
  LINK    tests/ne2000-test
  LINK    tests/nvme-test
  LINK    tests/ac97-test
  LINK    tests/es1370-test
  LINK    tests/virtio-net-test
  LINK    tests/virtio-balloon-test
  LINK    tests/virtio-blk-test
  LINK    tests/virtio-rng-test
  LINK    tests/virtio-scsi-test
  LINK    tests/virtio-serial-test
  LINK    tests/virtio-console-test
  LINK    tests/tpci200-test
  LINK    tests/ipoctal232-test
  LINK    tests/display-vga-test
  LINK    tests/intel-hda-test
  LINK    tests/ivshmem-test
  LINK    tests/vmxnet3-test
  LINK    tests/pvpanic-test
  LINK    tests/i82801b11-test
  LINK    tests/ioh3420-test
  LINK    tests/usb-hcd-ohci-test
  LINK    tests/usb-hcd-uhci-test
  LINK    tests/usb-hcd-ehci-test
  LINK    tests/usb-hcd-xhci-test
  LINK    tests/pc-cpu-test
  LINK    tests/q35-test
  LINK    tests/test-netfilter
  LINK    tests/test-filter-mirror
  LINK    tests/test-filter-redirector
  LINK    tests/postcopy-test
  LINK    tests/test-x86-cpuid-compat
  LINK    tests/device-introspect-test
  LINK    tests/qom-test
  LINK    tests/ptimer-test
  GTESTER tests/check-qdict
  GTESTER tests/check-qint
  GTESTER tests/check-qfloat
  GTESTER tests/check-qstring
  GTESTER tests/check-qlist
  GTESTER tests/check-qnull
  GTESTER tests/check-qjson
  LINK    tests/test-qmp-output-visitor
  LINK    tests/test-clone-visitor
  LINK    tests/test-qmp-input-visitor
  LINK    tests/test-qmp-input-strict
  LINK    tests/test-qmp-commands
  LINK    tests/test-string-input-visitor
  LINK    tests/test-string-output-visitor
  LINK    tests/test-qmp-event
  LINK    tests/test-opts-visitor
  GTESTER tests/test-coroutine
  GTESTER tests/test-iov
  LINK    tests/test-visitor-serialization
  GTESTER tests/test-aio
  GTESTER tests/test-rfifolock
  GTESTER tests/test-throttle
  GTESTER tests/test-thread-pool
  GTESTER tests/test-hbitmap
  GTESTER tests/test-blockjob
  GTESTER tests/test-blockjob-txn
  GTESTER tests/test-x86-cpuid
  GTESTER tests/test-xbzrle
  GTESTER tests/test-vmstate
  GTESTER tests/test-cutils
  GTESTER tests/test-mul64
  GTESTER tests/test-int128
  GTESTER tests/rcutorture
  GTESTER tests/test-rcu-list
  GTESTER tests/test-qdist
  GTESTER tests/test-qht
  LINK    tests/test-qht-par
  GTESTER tests/test-bitops
  GTESTER tests/check-qom-interface
  GTESTER tests/check-qom-proplist
  GTESTER tests/test-qemu-opts
  GTESTER tests/test-write-threshold
  GTESTER tests/test-crypto-hash
  GTESTER tests/test-crypto-cipher
  GTESTER tests/test-crypto-secret
  GTESTER tests/test-qga
  GTESTER tests/test-timed-average
  GTESTER tests/test-io-task
  GTESTER tests/test-io-channel-socket
  GTESTER tests/test-io-channel-file
  GTESTER tests/test-io-channel-command
  GTESTER tests/test-io-channel-buffer
  GTESTER tests/test-base64
  GTESTER tests/test-crypto-ivgen
  GTESTER tests/test-crypto-afsplit
  GTESTER tests/test-crypto-xts
  GTESTER tests/test-crypto-block
  GTESTER tests/test-logging
  GTESTER tests/test-replication
  GTESTER tests/test-bufferiszero
  GTESTER tests/test-uuid
  GTESTER check-qtest-x86_64
  GTESTER check-qtest-aarch64
  GTESTER tests/test-qmp-output-visitor
  GTESTER tests/test-clone-visitor
  GTESTER tests/test-qmp-input-visitor
  GTESTER tests/test-qmp-input-strict
  GTESTER tests/test-qmp-commands
  GTESTER tests/test-string-input-visitor
  GTESTER tests/test-string-output-visitor
  GTESTER tests/test-qmp-event
  GTESTER tests/test-opts-visitor
  GTESTER tests/test-visitor-serialization
  GTESTER tests/test-qht-par
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
(the three lines above repeated 7 more times)
=== OUTPUT END ===

Abort: command timeout (>3600 seconds)


---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@freelists.org


* Re: [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests
  2016-10-11 16:45             ` Paolo Bonzini
@ 2016-10-12  9:50               ` Kevin Wolf
  2016-10-12 10:13                 ` Paolo Bonzini
  0 siblings, 1 reply; 14+ messages in thread
From: Kevin Wolf @ 2016-10-12  9:50 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-block, famz, qemu-devel, stefanha

Am 11.10.2016 um 18:45 hat Paolo Bonzini geschrieben:
> > I think my point was that you don't have to count requests at the BB
> > level if you know that there are no requests pending on the BB level
> > that haven't reached the BDS level yet.
> 
> I need to count requests at the BB level because the blk_aio_*
> operations have a separate bottom half that is invoked if either 1) they
> never reach BDS (because of an error); or 2) the bdrv_co_* call
> completes without yielding.  The count must be >0 when blk_aio_*
> returns, or bdrv_drain (and thus blk_drain) won't loop.  Because
> bdrv_drain and blk_drain are conflated, the counter must be the BDS one.

Okay, makes sense.

> In turn, the BDS counter is needed because of the lack of isolated
> sections.  The right design would be for blk_isolate_begin to call
> blk_drain on *other* BlockBackends reachable in a child-to-parent visit.

Not really the blk_drain() that completes all pending requests of the
BB, but the BdrvChild callbacks that quiesce the BB. But I think we
really agree here and are just having trouble with the terminology.

>  Instead, until that is implemented, we have bdrv_drained_begin that
> emulates that through the same-named callback, followed by a
> parent-to-child bdrv_drain that is almost always unnecessary.

Yes.

> (By the way, I need to repost this series anyway, but let's finish the
> discussion first to understand what you'd like to have in 2.8).

I'm still not completely sold on the order in which we should do things,
but you've been insisting enough that I'll just trust you on this.

Kevin


* Re: [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests
  2016-10-12  9:50               ` Kevin Wolf
@ 2016-10-12 10:13                 ` Paolo Bonzini
  0 siblings, 0 replies; 14+ messages in thread
From: Paolo Bonzini @ 2016-10-12 10:13 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, famz, qemu-devel, stefanha



On 12/10/2016 11:50, Kevin Wolf wrote:
> > (By the way, I need to repost this series anyway, but let's finish the
> > discussion first to understand what you'd like to have in 2.8).
>
> I'm still not completely sold on the order in which we should do things,
> but you've been insisting enough that I'll just trust you on this.

Thanks, I'll post v2 then.

Paolo


end of thread, other threads:[~2016-10-12 10:13 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-10-07 16:19 [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation Paolo Bonzini
2016-10-07 16:19 ` [Qemu-devel] [PATCH 1/3] block: add BDS field to count in-flight requests Paolo Bonzini
2016-10-10 10:36   ` Kevin Wolf
2016-10-10 16:37     ` Paolo Bonzini
2016-10-11 11:00       ` Kevin Wolf
2016-10-11 14:09         ` Paolo Bonzini
2016-10-11 15:25           ` Kevin Wolf
2016-10-11 16:45             ` Paolo Bonzini
2016-10-12  9:50               ` Kevin Wolf
2016-10-12 10:13                 ` Paolo Bonzini
2016-10-07 16:19 ` [Qemu-devel] [PATCH 2/3] block: change drain to look only at one child at a time Paolo Bonzini
2016-10-10 11:17   ` Kevin Wolf
2016-10-07 16:19 ` [Qemu-devel] [PATCH 3/3] qed: Implement .bdrv_drain Paolo Bonzini
2016-10-11 16:46 ` [Qemu-devel] [PATCH 0/3] block: new bdrv_drain implementation no-reply
